While the model does have obvious flaws, the polar bear is cute.
The actual image quality in SD3 is extremely high. It's good for lots of stuff.
While it is true the model doesn't do humans well, I hate how dismissive people are about this. Thank you for sharing; it's still a technical marvel in and of itself. You can be impressed by the technical achievement and still shit on the quality, guys.
Dismissive? This whole sub hates the model; post after post, hordes of people who make characters complain all day, every day. I personally don't do characters at all... I'm happy.
That happens every time there's a new model that doesn't have thousands of coomer content; give it a few months.
coomer?
NSFW.
2.2GiB to generate deformed women on grass, what a time to be alive.
Generating half a person takes half the RAM, win-win!
Why not use SD3 for the background/scene, then use SDXL to inpaint the person?
Hey, don't kink shame!
porn addicts be like
I really don't care if SD3 ran on a Raspberry Pi. It's worthless because SAI fell apart.
useless garbage model from a dead company
Don't worry, they got a bunch of VC funding, and we know MBA's are really great at making things!
6-bit T5? Crazy. How does it perform?
We have 4 bit quantizations for LLMs (which is basically a text encoder on steroids) [https://huggingface.co/astronomer/Llama-3-8B-GPTQ-4-Bit](https://huggingface.co/astronomer/Llama-3-8B-GPTQ-4-Bit), the text encoder being quantized to 6 bits is not really a big surprise.
It is better than the fp8 version (fp8 is a really limiting quantization scheme). I think you can quantize T5 better than 6-bit, but we are being conservative.
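To make the comparison concrete, here's a minimal sketch of symmetric per-block integer quantization (the block size and function names are my own for illustration, not the actual scheme used on the T5 weights). Each block of weights shares one float scale, which is what lets 6-bit integer codes hold up better than a fixed fp8 format:

```python
import numpy as np

def quantize_blockwise(w, bits, block=64):
    """Symmetric per-block integer quantization: each block of `block`
    weights shares one float scale; values are rounded to signed ints."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 31 for 6-bit
    wb = w.reshape(-1, block)
    scale = np.abs(wb).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0                 # avoid div-by-zero on all-zero blocks
    q = np.clip(np.round(wb / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct float weights from int codes and per-block scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)

for bits in (4, 6, 8):
    q, s = quantize_blockwise(w, bits)
    err = np.abs(dequantize(q, s) - w).mean()
    print(f"{bits}-bit mean abs error: {err:.5f}")
```

Running this shows the reconstruction error shrinking as you add bits, which is the intuition behind "6-bit is conservative": at 6 bits per weight the rounding noise is already well below what noticeably changes the encoder's outputs.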
interesting
[deleted]
I dequantize prior to the GEMM, and the dequant cost is negligible compared to the GEMM.
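A back-of-the-envelope sketch of why that overhead vanishes (the shapes and per-row scale here are assumptions for illustration, not the actual kernel): dequantizing an (out x in) weight matrix touches each weight once, while the GEMM against an (in x tokens) activation does on the order of out * in * tokens multiply-adds, so the dequant fraction shrinks with every extra token:

```python
import numpy as np

# Toy illustration: dequant is O(out*in), the GEMM is O(out*in*tokens),
# so dequant overhead is amortized away for any nontrivial token count.
out_f, in_f, tokens = 1024, 1024, 512

rng = np.random.default_rng(1)
q = rng.integers(-31, 32, size=(out_f, in_f)).astype(np.int8)  # pretend 6-bit codes
scale = np.full((out_f, 1), 0.01, dtype=np.float32)            # per-row scale (assumed)
x = rng.normal(size=(in_f, tokens)).astype(np.float32)

w = q.astype(np.float32) * scale     # dequant: out_f * in_f ops
y = w @ x                            # GEMM: ~2 * out_f * in_f * tokens ops

flops_dequant = out_f * in_f
flops_gemm = 2 * out_f * in_f * tokens
print(f"dequant/GEMM op ratio ~ {flops_dequant / flops_gemm:.5f}")
```

With these toy shapes the dequant work is roughly a thousandth of the GEMM work, which matches the "negligible" claim above.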
As someone with a 6GB potato GPU, I really hope it will be possible to run the 8B version once (hopefully) it's released.
Now with new leadership/investors they just brought in, there's no chance they ever freely release the 8B... Would have been awesome though, the 8B is so much better
> 8B is so much better

It really is. I remember posting my prompt to the SD3 requests thread and, after seeing how good the prompt adherence is, thinking how cool LoRA training will be on a model that understands a lot of words properly, so there will be no need to simplify captions just because the model doesn't understand those words.
I only have 4 GB of VRAM... I really hope it runs on crappy PCs.
That's like saying "I can run this game at 1000 FPS" (with everything set to the lowest settings). It doesn't really matter when the quality looks like ass.
SD3 has great fidelity for many things though. Even does hot ladies pretty well if they're standing up.
I love having my creativity constrained to one single pose.
I prefer it to various XL finetunes for quite a lot of things just because of how much better the VAE is. In terms of raw fidelity of direct outputs XL comes absolutely nowhere close. To each their own though.
thank you for sharing.
Yes, you can run shit on shit.
Nice. Hopefully these kinds of optimizations will work for all transformer-based models.
Sometimes we need the AI to generate stable output for industry use; making it run is only the very first step. In my tests, a customized SDXL is better than SD3.
But my toilet requires 0.0GiB
And the best part is that you do not need a single byte to not run it at all!
Makes sense, it only renders 1/6 of the image and thereby requires significantly less VRAM. SCIENCE, PEOPLE, AI IT! That's right, cutting edge, try and keep up; 'Google' was so last pandemic.
Now that Eric Schmidt is on board, it is clear that Stability AI will be actively eroding open source AI. Continuing to use SD3 and future SAI products makes us complicit. Our efforts and support belong with those who want to build an open AI future. Personally, I will no longer be publishing checkpoints or LoRAs using SAI models (SD 1.5, 2.x, XL, 3+).
That cat is out of the bag. He can't do anything to SDXL or below, or to any new training runs.