This is an automated archive made by the Lemmit Bot.
The original was posted on /r/stablediffusion by /u/rerri on 2024-11-04 18:22:48+00:00.
Patch Model Patcher Order node enabling LoRA with torch.compile
Switching to a different LoRA is really fast; no full recompile is needed (changing the resolution still triggers one, though).
With torch.compile, Flux generation is roughly 40% faster on a 4090 (torch 2.5.0).
Tested with Flux and SD3.5L; it works with both.
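For context on why a LoRA swap can skip recompilation: a LoRA only adds a low-rank delta to existing weights, so patching one in or out changes tensor values, not tensor shapes or the graph structure torch.compile traced. Below is a minimal NumPy sketch of that weight-merge idea; the dimensions, scale, and variable names are hypothetical and not from the post:

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d_out, d_in, rank = 64, 32, 4
scale = 0.8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # base model weight
A = rng.standard_normal((rank, d_in))    # LoRA down-projection
B = rng.standard_normal((d_out, rank))   # LoRA up-projection

# Merging a LoRA is just a weight update: W' = W + scale * (B @ A)
W_patched = W + scale * (B @ A)

# Swapping to a different LoRA: remove the old delta, add the new one.
A2 = rng.standard_normal((rank, d_in))
B2 = rng.standard_normal((d_out, rank))
W_swapped = W_patched - scale * (B @ A) + scale * (B2 @ A2)

# Shapes never change, so a graph compiled against W stays valid;
# only the values are patched. A resolution change, by contrast,
# alters activation shapes and forces a recompile.
assert W.shape == W_patched.shape == W_swapped.shape
```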
PS. Unrelated bonus PSA: comfyanonymous released an FP8 Scaled version of Flux (optimized for better accuracy, same generation speed as the old FP8):