RestorativeAlly

This kind of stuff was sci fi not long ago.


DreamLizard47

you won't download a milf


Ace8Ace8

Haha ...bet.


ClassNext

does this work with pony?


Itchy_Sandwich518

It works with all models that work in Invoke; the only issue is that different models and schedulers respond differently to the lighting. I've found that ForReal in combination with TCD gives the best results. I've never used Pony; I know prompting is very different for that, and I have no idea which samplers/schedulers are best.


Enshitification

That looks really nice. Thanks for sharing the gen info.


DefiantDeviantArt

Eerily realistic. But I do appreciate how far AI has advanced.


Itchy_Sandwich518

Thanks! I'd dare say this isn't even as realistic as it can get, if you take the time to polish the subject and get rid of the AI mistakes and stuff :)


afk4life2015

Looks good. You're using parentheses but not weighting; I didn't think that did anything on its own. If I'm doing a prompt like this, I don't trust it to understand "flash photo", so I'd use "flash photo, flash photography." One thing I tested with good results for a similar analog look was pushing Lightning models way past the advised CFG/steps to "break" the AI and make it not so perfect.


Open_Channel_8626

Some UIs will add weights based on parentheses


Itchy_Sandwich518

I didn't know this, I only use it to separate the stylistic part of the prompt for me so it's easier to see. I thought it had no effect.


Open_Channel_8626

It's possible that it is increasing the weight of the words inside, but I'm not sure for this UI.


Itchy_Sandwich518

I can never find concrete info on how to weight words for Fooocus or Invoke; I've looked online but I keep coming up with very old Reddit posts or info from before SDXL.


afk4life2015

I'm sure there's a way to do it because both ComfyUI and A1111 have it so that you can do (brunette hair:1.25) to make the AI focus on getting that detail right, though there's a big difference between how strongly those two interpret it. I'd just experiment with doing that in whichever you're using and see if it makes a difference since the syntax is the same for at least two UIs.
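
As a rough illustration of what the `(brunette hair:1.25)` style of weighting does under the hood, here is a minimal diffusers sketch that scales the CLIP embeddings of the weighted tokens. The checkpoint id and the exact scaling rule are assumptions for illustration only; this is not the code A1111, ComfyUI, or Invoke actually run, and each UI normalizes weights differently.

```python
# Hedged sketch: implement "(word:1.25)"-style weighting by scaling the
# CLIP embeddings of the chosen tokens before handing them to the UNet.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "flash photo of a woman, brunette hair"
weighted_word = "brunette"   # the word we want the model to focus on
weight = 1.25

with torch.no_grad():
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).to("cuda")
    embeds = pipe.text_encoder(tokens.input_ids)[0]

    # Scale the embedding at every position occupied by the weighted word.
    target_ids = set(pipe.tokenizer(weighted_word, add_special_tokens=False).input_ids)
    for i, tok_id in enumerate(tokens.input_ids[0].tolist()):
        if tok_id in target_ids:
            embeds[0, i] *= weight

image = pipe(prompt_embeds=embeds, num_inference_steps=30).images[0]
image.save("weighted.png")
```

This is also why the same `(word:1.25)` string can behave very differently across UIs: the parsing syntax may match, but the rescaling applied to the embeddings afterwards does not have to.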


Itchy_Sandwich518

That's what I've been trying, but I never noticed any effect. Like with the shoes thing: in the negative prompt I added shoes:2 and nothing, it kept making them; then I added 3 and still nothing. I'm not sure how high the weights go either. For positive prompts I've also tried this in the past but never noticed results. This is why I did away with prompting almost completely and generate using line art and gradients, image to image and so forth to get the compositions, poses, interactions, lighting and colors I want :)


Open_Channel_8626

It's a really hard transition, but I recommend ComfyUI. It takes a long time to learn, but the thing is, once you have it set up you know 100% of what is going into your image, so you always have control over it.


Itchy_Sandwich518

My main reason for avoiding it is that while I know the basics are going to work fine on my 2070 Super with a mere 8 gigs of VRAM, when I dive into ControlNets and such that won't fly. Invoke supports T2I adapters, which I use to make images out of outlines, and it's all very fast and fluid (ControlNet models are a no-go tho). I just won't be able to get the most out of ComfyUI on my current GPU, and as an artist I will want to dive into all that stuff for sure.
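
For context, a hedged sketch of the "generate from outlines with a T2I adapter" idea in diffusers terms. The adapter and base model ids are examples of publicly available weights, not the poster's exact Invoke setup, and the lineart file name is a placeholder.

```python
# Sketch: condition SDXL on an outline/lineart image via a T2I-Adapter,
# the lighter-weight alternative to ControlNet mentioned above.
import torch
from PIL import Image
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-lineart-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
# pipe.enable_model_cpu_offload()  # option for 8 GB cards, at the cost of speed

outlines = Image.open("lineart.png").convert("RGB").resize((1024, 1024))

image = pipe(
    prompt="flash photo of a woman standing in a dark room",
    image=outlines,
    adapter_conditioning_scale=0.8,  # how strongly the outlines are enforced
    guidance_scale=6.0,
    num_inference_steps=30,
).images[0]
image.save("from_outlines.png")
```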


Open_Channel_8626

Ah I see, VRAM is a good point.


afk4life2015

Usually the model card on Civitai or Hugging Face will give you recommended settings. I also always recommend one of the DPM/Karras sampler combos for people (not v3, though). CFG scale is also important because that controls prompt adherence, so if it's too low, weighting things won't help as much.


Itchy_Sandwich518

CFG scale is important in both how your prompt is interpreted and how the image looks. For lighting models you have to use low CFGs tho; a CFG of 2 on a lighting model is roughly equivalent to a CFG of 6 on a normal model in terms of image quality. Back when I was trying out weighting with numbers after the words I'd just use normal models with CFG at 6 or even 7. I just stick to line art and regional prompting with control layers in Invoke now and get the results I want with relative ease.

For most normal models I like keeping my CFG anywhere from 5-6 (rarely 7) because the image looks best for realism at those scales. For the one lighting model I use, ForReal: CFG 1.8 if I generate from nothing and 2.8 if I generate from outlines, use lighting gradients and so on.

As for schedulers, honestly it depends on what and how I'm generating and which UI I'm using. I swear by TCD for Invoke and for doing this lighting thing, but I wouldn't use TCD in Fooocus, it just does not work well. Euler if I don't want the image to change too much during the generation process, and DPM 2M SDE if needed; it all depends on the model and stuff. In Invoke, sampler and scheduler seem to be tied together, you can't combine them, but in Fooocus you can select your sampler and your scheduler separately, which gives a lot more freedom when experimenting.
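
In diffusers terms, CFG is just the `guidance_scale` argument; a minimal sketch of the two regimes described above (checkpoint paths are placeholders, not specific models):

```python
# Sketch: the same prompt at "normal model" CFG vs. low-CFG fast-model settings.
import torch
from diffusers import StableDiffusionXLPipeline

prompt = "flash photo of a woman standing in a dark room"

# Normal SDXL checkpoint: CFG around 5-6 for realism.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "path/to/normal-sdxl-checkpoint", torch_dtype=torch.float16).to("cuda")
img_normal = pipe(prompt, guidance_scale=6.0, num_inference_steps=30).images[0]

# Low-step / distilled-style checkpoint: much lower CFG or the image burns.
pipe_fast = StableDiffusionXLPipeline.from_pretrained(
    "path/to/low-cfg-checkpoint", torch_dtype=torch.float16).to("cuda")
img_fast = pipe_fast(prompt, guidance_scale=2.0, num_inference_steps=15).images[0]
```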


afk4life2015

I think it was mentioned, but ComfyUI, while a somewhat painful learning curve, is worth it; much more flexibility. I think if you stick with Lightning/Turbo models you might be able to get away with 8G VRAM. There's no better offline generation tool, and it teaches you plenty about how things work.


Itchy_Sandwich518

Juggernaut Tensor and Hyper Realistic XL (on Tensor Art) understood prompting like this. As for the parentheses, I just put them there to visually separate the stylistic part of my prompt, so it's easier to see when I do inpainting and have to remove or change the rest of the prompt but leave that in.


llkj11

Now make her a zombie


Open_Channel_8626

is it based on IC-light or is it a proprietary model?


Itchy_Sandwich518

It's just a way I've come up with to control lighting in any model. I find that it works best with the ForReal model using TCD as a scheduler. I've never used IC-Light.


Open_Channel_8626

Ah, I see. If you liked this then you might like IC-Light.


Itchy_Sandwich518

Model: ForReal 0.5, CFG 2.8, Steps 15, Scheduler: TCD - works amazingly well for lighting and especially with the ForReal lighting model. NOTE: I had to add shoes like 5 times in the negative prompt because weighting didn't work, and yesterday when I tried to make this topic it wouldn't stop generating bloody sneakers on her. Topic in question: [https://www.reddit.com/r/StableDiffusion/comments/1dkqe5e/ive_come_to_the_conclusion_that_sdxl_learns_as/](https://www.reddit.com/r/StableDiffusion/comments/1dkqe5e/ive_come_to_the_conclusion_that_sdxl_learns_as/)
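
Approximately, those settings map to diffusers like the sketch below. The ForReal checkpoint path, prompts, and output filename are placeholders, and whether a TCD LoRA also needs to be loaded depends on the checkpoint, so treat this as a rough approximation of the Invoke setup rather than a 1:1 reproduction.

```python
# Sketch of the reported settings: TCD scheduler, CFG 2.8, 15 steps.
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "path/to/forreal-sdxl-checkpoint", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="flash photo of a woman standing in a dark room",
    negative_prompt="shoes, shoes, shoes, shoes, shoes",  # as in the note above
    guidance_scale=2.8,
    num_inference_steps=15,
).images[0]
image.save("lighting_test.png")
```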


dedfishy

This is the most infuriating thing I've read all year. You clearly either don't fully understand how these models function, or you have a mental block preventing you from even entertaining the idea that you could be seeing a pattern that doesn't actually exist (something all humans do). 5+ people explained how to provide useful evidence and you ignored them all and continued to rant. I want my 10 min back.


ZenEngineer

That's weird. If you shut down and restart the server and render the same seed, do you get different results without the shoes?


Itchy_Sandwich518

It just goes away after that; it's simply a fixation on concepts during that one single session, and a restart of the UI clears it up.


ZenEngineer

File a bug on whatever UI you're using; that's not supposed to happen. I'd guess it's something in how they handle and cache the CLIP embeddings, especially if you're doing anything fancy with regional prompting or something. Attaching the generated image with metadata from before and after a reboot might help them see that it is indeed the same seed. If it's ComfyUI, then start removing custom nodes until you find the culprit.
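
To produce that before/after evidence, the key is pinning the seed so the restart is the only variable. A hedged sketch of the idea in diffusers terms (UIs expose the same thing as a "seed" field); the checkpoint path, prompt, and seed value are placeholders:

```python
# Sketch: fix the seed, render, restart the server, render the identical call
# again, and compare the two files.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "path/to/the-same-checkpoint", torch_dtype=torch.float16).to("cuda")

SEED = 123456789  # arbitrary; what matters is reusing the exact same value
generator = torch.Generator(device="cuda").manual_seed(SEED)

image = pipe(
    prompt="flash photo of a woman standing in a dark room",
    negative_prompt="shoes",
    guidance_scale=2.8,
    num_inference_steps=15,
    generator=generator,
).images[0]
image.save("before_restart.png")
# After restarting, run the identical call and save "after_restart.png".
# With seed and settings pinned, any difference points at state the UI
# is keeping between generations.
```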


Itchy_Sandwich518

This is something that has been happening to me across pretty much all UIs, be it online services like Tensor and OpenArt or locally. It's less of a glitch because it doesn't break anything per se; it just focuses on certain concepts for a while.


ZenEngineer

I very much doubt it, as that would need something to write back to the model, which would require additional code and would slow things down. It may be that you hit a batch of seeds that all happen to produce that concept. In theory some code might be updating the wrong area of memory, but across multiple services they'd use different versions of libraries, so it shouldn't be that consistent. Unless you can post two images with the same seeds and settings that show one has that glitch after using a UI, I'm going to assume something else is going on.


[deleted]

it's still anecdotal. need an expert to chime in


ArtificialMediocrity

I'm an anecdotal expert. That was definitely an anecdote, but it included some supporting evidence, so I rate it one raised eyebrow.


Educational_Smell292

Yeah, no. That's still bullshit.


Itchy_Sandwich518

Bullshit or not, it happened to me and I didn't make it up. As you can see, I contribute to this community and have contributed in the past; I don't come here to troll people.


voltisvolt

Fucking really cool, gonna have to actually try Invoke


Turkino

This is kind of cool; it puts the artistic element of drawing the mask back into it.


Itchy_Sandwich518

I always do my generations by drawing outlines first so I have full control of the generation, then make sure the colors and the lighting are up to my liking. However, in this case I used the outlines of a random generation I made months ago, because what mattered to me was to show off the different lighting effects.


weedological

Why is the image so fucking ugly?


Itchy_Sandwich518

It was based on an image I created long ago and Invoke just did its own thing with it. I honestly didn't even notice, I was focused on the lighting.


registered-to-browse

Am I the only one who sees two different and crazily misshapen legs, feet, toes?


Itchy_Sandwich518

Yeah, Invoke did that, but who cares, the lighting is what matters.


GunterJanek

What UI are you using?


hugo_prado

This UI is InvokeAI.


SCAREDFUCKER

the leg! lmao


Itchy_Sandwich518

We're testing lighting, not legs :)


PixarCEO

Now this is impressive; can you load HDRIs?


Itchy_Sandwich518

I don't think I've ever used HDRIs, dunno what they are, sorry.


Quantamphysx

I am new to SD. Is this ComfyUI, and which base model are you using?


Itchy_Sandwich518

This is Invoke; I've never used ComfyUI, but I believe that is even more robust and allows stuff like this to be done in even more detail.


Moist-Apartment-6904

This is just a combination of using ControlNet and feeding the KSampler a color gradient image instead of an empty latent, and lowering the denoise. Can be easily done in Comfy as well as probably any other UI.
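
For readers who want to try the workflow described in that last comment outside a node UI, here is a hedged diffusers sketch: a ControlNet carries the outlines, a color-gradient image is fed in as the img2img input instead of an empty latent, and the denoise ("strength") is lowered so the gradient's lighting survives. The checkpoint path, ControlNet id, prompt, and lineart file are stand-ins, not the poster's exact setup.

```python
# Sketch: ControlNet outlines + color-gradient init image + lowered denoise.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionXLControlNetImg2ImgPipeline, ControlNetModel

# Build a simple vertical color gradient (warm top, dark bottom) to seed the lighting.
h, w = 1024, 1024
top = np.array([255, 180, 120], dtype=np.float32)
bottom = np.array([20, 20, 40], dtype=np.float32)
ramp = np.linspace(0, 1, h, dtype=np.float32)[:, None, None]   # (h, 1, 1)
grad = (top * (1 - ramp) + bottom * ramp).astype(np.uint8)     # (h, 1, 3)
gradient = Image.fromarray(np.repeat(grad, w, axis=1))          # (h, w, 3)

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "path/to/sdxl-checkpoint", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

outlines = Image.open("lineart.png").convert("RGB").resize((w, h))

image = pipe(
    prompt="flash photo of a woman standing in a dark room",
    image=gradient,          # gradient instead of an empty latent
    control_image=outlines,  # outlines constrain the composition
    strength=0.85,           # "lower the denoise" so the gradient's colors survive
    guidance_scale=2.8,
    num_inference_steps=15,
).images[0]
image.save("gradient_lighting.png")
```

The `strength` value is the knob to experiment with: higher values let the model repaint more of the gradient, lower values keep its colors and light direction more intact.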