This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/thefool00 on 2024-11-03 21:43:02+00:00.


I’ve been playing with training SD 3.5 Large locally using Kohya on Windows with 24GB VRAM. I’m still experimenting, but I’ve found some settings that are working well for me, and I’ll update the article if I find better ones. For 16GB (and possibly 12GB) cards, you can add the `--fp8_base` argument to force Kohya to quantize the base model to FP8 during training.
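For anyone who hasn't run the scripts from the command line before, here is a rough sketch of what an SD 3.5 Large LoRA run can look like. The paths, dataset config, and hyperparameters are placeholders rather than the settings from the linked article, and it assumes the sd3 branch of kohya-ss/sd-scripts, so double-check flag names against your version's `--help`.

```
:: Illustrative only -- placeholder paths and hyperparameters, not the settings from the article.
:: Assumes the sd3 branch of kohya-ss/sd-scripts; flag names may differ between versions.
:: --fp8_base is optional on 24GB; add it on 16GB (possibly 12GB) cards to load the base model in FP8.
accelerate launch --mixed_precision bf16 sd3_train_network.py ^
  --pretrained_model_name_or_path "C:\models\sd3.5_large.safetensors" ^
  --clip_l "C:\models\clip_l.safetensors" ^
  --clip_g "C:\models\clip_g.safetensors" ^
  --t5xxl "C:\models\t5xxl_fp16.safetensors" ^
  --dataset_config "C:\training\dataset.toml" ^
  --network_module networks.lora_sd3 --network_dim 32 --network_alpha 16 ^
  --optimizer_type adamw8bit --learning_rate 1e-4 --max_train_epochs 10 ^
  --cache_latents --cache_latents_to_disk --cache_text_encoder_outputs ^
  --gradient_checkpointing --sdpa --mixed_precision bf16 ^
  --save_model_as safetensors --output_dir "C:\training\output" --output_name sd35_lora ^
  --fp8_base
```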

Honestly, I missed SD. It still can’t do hands worth shit, but there is so much chin variety.

Sample file and notes for running: Local LORA Training for Stable Diffusion 3.5 Large on Kohya-SS