v1.2 - EMA for LoRA/Lycoris training
Features
- EMA is reworked. Training runs that already use EMA should not update to this release: their checkpoints will not load the EMA weights correctly.
- EMA now works fully for PEFT Standard LoRA and Lycoris adapters (tested LoKr only)
- When EMA is enabled, side-by-side comparisons are now done by default (can be disabled with `--ema_validation=ema_only` or `none`)
Example: the starting model benchmark is on the left as before, the centre is the training Lycoris adapter, and the right side shows the EMA weights. (SD3.5 Medium)
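For context, EMA validation works by keeping a shadow copy of the adapter weights that is blended toward the live weights after each optimiser step. A minimal sketch of the idea, with hypothetical names (SimpleTuner's actual EMA implementation differs and operates on tensors, not scalars):

```python
class EMA:
    """Keep an exponential moving average (shadow copy) of named weights."""

    def __init__(self, weights, decay=0.999):
        self.decay = decay
        # The shadow copy starts identical to the live weights.
        self.shadow = dict(weights)

    def update(self, weights):
        # shadow <- decay * shadow + (1 - decay) * live
        for name, value in weights.items():
            self.shadow[name] = (
                self.decay * self.shadow[name] + (1.0 - self.decay) * value
            )


# Toy example: one scalar "adapter weight" drifting during training.
weights = {"lora_A": 0.0}
ema = EMA(weights, decay=0.9)
for step in range(1, 6):
    weights["lora_A"] = float(step)  # pretend optimiser update
    ema.update(weights)

# The shadow value lags behind the live weight, smoothing out noise.
print(weights["lora_A"], ema.shadow["lora_A"])
```

During side-by-side validation, images are generated once with the live adapter weights and once with the shadow copy, which is what the centre and right columns above show.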
Bugfixes
- Text encoders are now properly quantised when the parameter is given; previously they were left in bf16
- Updated doc reference link to caption filter example
What's Changed
- quantise text encoders upon request correctly by @bghira in #1167
- merge minor follow-up fixes by @bghira in #1168
- (experimental) Allow EMA on LoRA/Lycoris networks by @bghira in #1170
- Update `caption_filter_list.txt.example` reference by @emmanuel-ferdman in #1178
- merge EMA LoRA/Lycoris support by @bghira in #1176
New Contributors
- @emmanuel-ferdman made their first contribution in #1178
Full Changelog: v1.1.5...v1.2