chiekat

While the model does have obvious flaws, the polar bear is cute.


ZootAllures9111

The actual image quality in SD3 is extremely high. It's good for lots of stuff.


Niwa-kun

While it is true the model doesn't do humans well, I hate how dismissive people are over this info. Thank you for sharing, as it still is a technical marvel in and of itself. You can be impressed by the technical achievement and still shit on the quality, guys.


nug4t

Dismissive? This whole sub hates the model; post after post, hordes of people who make characters complain all day, every day. I personally don't do characters at all, so I'm happy.


fhrwddsgshfhgdnhrrtg

That happens every time there's a new model that doesn't have thousands of coomer content yet. Give it a few months.


nug4t

coomer?


CloudyRiverMind

NSFW.


S4L7Y

2.2GiB to generate deformed women on grass, what a time to be alive.


kinggoosey

Generating half a person takes half the RAM, win-win!


djamp42

Why not use SD3 for the background/scene, then use SDXL to inpaint the person?


Individual-Cup-7458

Hey, don't kink shame!


ImplementComplex8762

porn addicts be like


FourtyMichaelMichael

I really don't care if SD3 ran on a rasppi. It's worthless because SAI fell apart.


spacekitt3n

useless garbage model from a dead company


FourtyMichaelMichael

Don't worry, they got a bunch of VC funding, and we know MBA's are really great at making things!


vanonym_

6-bit T5? Crazy. How does it perform?


martianunlimited

We already have 4-bit quantizations for LLMs (which are basically text encoders on steroids), e.g. [https://huggingface.co/astronomer/Llama-3-8B-GPTQ-4-Bit](https://huggingface.co/astronomer/Llama-3-8B-GPTQ-4-Bit), so the text encoder being quantized to 6 bits is not really a big surprise.
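For rough intuition (my own back-of-the-envelope sketch, not from the thread): weight-only memory scales linearly with bit width, which is why 4-bit and 6-bit checkpoints fit where fp16 doesn't.

```python
def model_bytes(params: int, bits: int) -> int:
    # Weight-only footprint in bytes: params * bits / 8.
    # Ignores quantization scales/zero-points and activation memory.
    return params * bits // 8

# Rough weight sizes for an 8B-parameter model at various precisions.
for bits in (16, 8, 6, 4):
    gib = model_bytes(8_000_000_000, bits) / 2**30
    print(f"{bits:>2}-bit: {gib:.2f} GiB")
```

An 8B model drops from roughly 15 GiB at fp16 to under 4 GiB at 4-bit, before counting activations.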


liuliu

It is better than the fp8 version (fp8 is a really limiting quantization scheme). I think you can quantize T5 better than 6-bit, but we're being conservative.


vanonym_

interesting


[deleted]

[deleted]


liuliu

I dequant prior to GEMM, and the dequant cost is negligible compared to the GEMM.
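A minimal sketch of that weight-only pattern (mine, not liuliu's actual kernel): store integer codes plus a float scale, and dequantize right before the multiply-accumulate. The dequant is one extra multiply per weight, which is cheap next to the GEMM itself. A plain dot product stands in for the GEMM here.

```python
import random

def quantize(w, bits=6):
    # Symmetric round-to-nearest quantization (illustrative only; real
    # kernels use per-block scales). Returns integer codes + one scale.
    qmax = 2 ** (bits - 1) - 1              # 31 for 6-bit
    scale = max(abs(v) for v in w) / qmax
    return [round(v / scale) for v in w], scale

def dequant_dot(x, q, scale):
    # Dequantize just before the multiply-accumulate: one extra
    # multiply per weight, negligible next to the GEMM's total work.
    return sum(xi * (qi * scale) for xi, qi in zip(x, q))

random.seed(0)
w = [random.gauss(0, 1) for _ in range(256)]
x = [random.gauss(0, 1) for _ in range(256)]
q, s = quantize(w)
exact = sum(xi * wi for xi, wi in zip(x, w))
approx = dequant_dot(x, q, s)
```

With 6 bits the codes fit in [-31, 31] and the dot product stays close to the fp result, which is why 6-bit is comfortably conservative for a text encoder.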


kekerelda

As someone with a 6GB potato GPU, I really hope it will be possible to run the 8B version once it's released (hopefully).


schuylkilladelphia

Now with the new leadership/investors they just brought in, there's no chance they ever freely release the 8B... Would have been awesome though, the 8B is so much better.


kekerelda

> the 8B is so much better

It really is. I remember posting my prompt to the SD3 requests thread and, after seeing how good the prompt adherence is, thinking how cool LoRA training will be on a model that understands a lot of words properly, so there will be no need to simplify captions just because the model doesn't understand those words.


broctordf

I only have 4 GB VRAM... I really hope it does run on crappy PCs.


StickiStickman

That's like saying "I can run this game at 1000 FPS" (with everything set to the lowest settings). It doesn't really matter when the quality looks like ass.


ZootAllures9111

SD3 has great fidelity for many things though. Even does hot ladies pretty well if they're standing up.


spacekitt3n

I love having my creativity constrained to one single pose.


ZootAllures9111

I prefer it to various XL finetunes for quite a lot of things just because of how much better the VAE is. In terms of raw fidelity of direct outputs, XL comes absolutely nowhere close. To each their own, though.


treksis

Thank you for sharing.


Lucaspittol

Yes, you can run shit on shit.


InflationAaron

Nice. Hopefully these kinds of optimizations would work for all transformer-based models.


brucedontsingasong

Sometimes we need the AI to generate stable output for industry use; making it run is only the very first step. In my tests, a customized SDXL is better than SD3.


Shuteye_491

But my toilet requires 0.0GiB


Paradigmind

And the best part is that you do not need a single byte to not run it at all!


ThemWhoNoseNothing

Makes sense, it only renders 1/6 of the image and thereby requires significantly less VRAM. SCIENCE PEOPLE, AI IT! That’s right, cutting edge, try and keep up, ‘Google’ was so last pandemic.


Individual-Cup-7458

Now that Eric Schmidt is on board, it is clear that Stability AI will be actively eroding open source AI. Continuing to use SD3 and future SAI products makes us complicit. Our efforts and support belong with those who want to build an open AI future. Personally I will no longer be publishing checkpoints or LORAs using SAI models (SD 1.5, 2.x, XL, 3+) .


dvztimes

The cat is out of the bag. He can't do anything about SDXL or earlier, or any new training runs.