The original was posted on /r/stablediffusion by /u/terminusresearchorg on 2024-10-13 04:43:18+00:00.


the release:

New in this release are goodies like loss masking (as in OneTrainer or Kohya's tools) and a new regularisation technique described in the Dreambooth guide, which achieves something like the comparison described below (conceptual sketches of both features follow the legend).

  • no lora = the base Flux model
  • no_reg = typical Flux LoRA training
  • prior_reg_self = setting the training data as is_regularisation_data=true (this is the recommended method)
  • prior_reg_ext = externally-obtained regularisation images (but not super high quality ones)
  • prior_reg_self-empty = no captions on the training data, being used as the regularisation dataset (provided by dxqbYD)
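
Loss masking, in the general form OneTrainer and Kohya's tools popularised, weights the per-pixel diffusion loss by a mask image so that only the subject region contributes gradients. A minimal PyTorch sketch of that general idea — illustrative only, not SimpleTuner's actual implementation, and all names are made up here:

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model_pred: torch.Tensor,
                          target: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    # model_pred / target: (B, C, H, W); mask: (B, 1, H, W) with 1.0
    # where the loss should count and 0.0 where it should be ignored
    loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
    loss = loss * mask  # broadcasts the single-channel mask across channels
    # normalise by the number of unmasked elements so sparse masks
    # don't quietly shrink the effective learning rate
    denom = (mask.sum() * loss.shape[1]).clamp(min=1.0)
    return loss.sum() / denom
```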
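
And the rough idea behind is_regularisation_data=true, as I understand the Dreambooth guide: on batches drawn from the regularisation dataset, the frozen parent model's own prediction becomes the training target, pulling the LoRA back toward the base model's prior. A conceptual sketch only — it assumes a PEFT-style adapter exposing disable_adapters()/enable_adapters() toggles, and the transformer(noisy_latents, timesteps, cond) call signature is a simplified stand-in, not SimpleTuner's actual code:

```python
import torch
import torch.nn.functional as F

def self_regularisation_loss(transformer, noisy_latents, timesteps, cond):
    # teacher pass: the parent (base) model, i.e. the same network
    # with the LoRA adapter temporarily disabled (assumed PEFT-style API)
    with torch.no_grad():
        transformer.disable_adapters()
        teacher_pred = transformer(noisy_latents, timesteps, cond)
        transformer.enable_adapters()
    # student pass: LoRA active, gradients flowing
    student_pred = transformer(noisy_latents, timesteps, cond)
    # match the student to the frozen prior instead of the usual
    # flow-matching target
    return F.mse_loss(student_pred, teacher_pred.detach())
```

The prior_reg_self-empty variant in the legend is this same mechanism, just with the captions dropped from the copy of the training data used as the regularisation dataset.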