This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Iory1998 on 2024-09-09 17:02:21+00:00.


Hi,

A few weeks ago, I made a quick comparison between FP16, Q8, and NF4. My conclusion then was that Q8 is almost identical to FP16 but at half the size. Find attached a few examples.

After a few weeks of playing around with different quantization levels, I can make the following observations:

  • What I am concerned with is how close each quantization level is to the full-precision model. I am not discussing which version provides the best quality, since that is subjective, but which generates images closest to the FP16. As I mentioned, quality is subjective: a few times, lower-quantized models yielded aesthetically better images than the FP16, and sometimes Q4 generated images closer to the FP16 than Q6 did. (A minimal way to quantify this closeness is sketched after this list.)
  • Overall, the composition of an image changes noticeably once you go down to Q5_0 and below. Again, this doesn’t mean that the image quality is worse, but the image itself is slightly different.
  • If you have 24GB of VRAM, use Q8. It’s almost exactly like the FP16. If you force the text-encoders to be loaded in RAM, you will use about 15GB of VRAM, giving you ample space for multiple LoRAs, hi-res fix, and generation in batches. For some reason, it is faster than Q6_K on my machine. I can even load an LLM alongside Flux when using Q8.
  • If you have 16GB of VRAM, then Q6_K is a good match for you. It takes up about 12GB of VRAM (assuming you are forcing the text-encoders to remain in RAM), and you won’t have to offload any layers to the CPU. It offers high accuracy at a smaller size. Again, you will still have some VRAM to spare for multiple LoRAs and hi-res fix.
  • If you have 12GB, then Q5_1 is the one for you. It takes 10GB of VRAM (assuming you are loading the text-encoders in RAM), and I think it’s the model that offers the best balance between size, speed, and quality. It’s almost as good as Q6_K. If I had to keep only two models, I’d keep Q8 and Q5_1. As for Q5_0, it’s closer to Q4 than to Q6 in terms of accuracy, and in my testing it’s the quantization level where you start noticing differences.
  • If you have less than 10GB, use Q4_0 or Q4_1 rather than NF4. I am not saying NF4 is bad; it has its own charm. But if you are looking for the model that is closest to the FP16, then Q4_0 is the one you want. (A small helper encoding these VRAM recommendations is sketched below.)
  • Finally, I noticed that NF4 is the most unpredictable version in terms of image quality. Sometimes the images are really good, and other times they are bad. I feel that this model has consistency issues.
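To make “closeness to the FP16” concrete, here is a minimal sketch of one way to measure it: generate the same prompt with the same seed and settings on each model, then compare the outputs pixel-by-pixel. The file names are hypothetical placeholders for your own renders.

```python
import numpy as np
from PIL import Image

def load_rgb(path: str) -> np.ndarray:
    """Load an image as a float32 RGB array scaled to [0, 1]."""
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two same-sized images."""
    return float(np.mean((a - b) ** 2))

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * float(np.log10(1.0 / m))

# Hypothetical file names: one render per model, same seed/prompt/steps/CFG.
reference = load_rgb("flux_fp16.png")
for quant in ["q8_0", "q6_k", "q5_1", "q5_0", "q4_0", "nf4"]:
    candidate = load_rgb(f"flux_{quant}.png")
    print(f"{quant}: MSE={mse(reference, candidate):.6f}, "
          f"PSNR={psnr(reference, candidate):.2f} dB")
```

Keep in mind that pixel metrics only capture how literally identical two renders are; they say nothing about which image looks better, which, as noted above, is subjective.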
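And here is a hypothetical helper that gathers the VRAM recommendations above in one place. The thresholds mirror my observations (with the text-encoders kept in system RAM), and the GGUF file names are assumptions, not official requirements.

```python
def pick_flux_quant(vram_gb: float) -> str:
    """Map available VRAM to the quant recommended above (hypothetical file names)."""
    if vram_gb >= 24:
        return "flux1-dev-Q8_0.gguf"  # ~15GB used; room for LoRAs, hi-res fix, batches
    if vram_gb >= 16:
        return "flux1-dev-Q6_K.gguf"  # ~12GB used; no layers offloaded to the CPU
    if vram_gb >= 12:
        return "flux1-dev-Q5_1.gguf"  # ~10GB used; best size/speed/quality balance
    return "flux1-dev-Q4_0.gguf"      # closest to FP16 under 10GB in these tests

print(pick_flux_quant(16))  # -> flux1-dev-Q6_K.gguf
```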

The great news is that whichever of these models you use (I haven’t tested lower quantization levels), you are not missing much in terms of accuracy.

Flux.1 Model Quantization Levels Comparison