This commit refactors the learning rate (LR) handling in `kohya_gui/lora_gui.py` for LoRA training.
The previous fix for LR misinterpretation involved commenting out a line. This commit completes the cleanup by:
- Removing the `do_not_set_learning_rate` variable and its associated conditional logic, which had become redundant after that fix.
- Renaming the float-converted `learning_rate` to `learning_rate_float` for clarity.
- Ensuring that `learning_rate_float` and the float-converted `unet_lr_float` are consistently used when preparing the `config_toml_data` for the training script.
This makes the code cleaner and the intent more direct: the main learning rate is now always passed to the training script, alongside the specific TE/UNet LRs. The functional behavior of the LR fix is unchanged.
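For illustration, a minimal sketch of what the resulting flow might look like; apart from `learning_rate_float`, `unet_lr_float`, and `config_toml_data`, the function and variable names here are hypothetical stand-ins for the surrounding `lora_gui.py` code, not the actual implementation:

```python
def build_lr_config(learning_rate, text_encoder_lr, unet_lr):
    """Hypothetical helper showing the cleaned-up LR handling."""
    # GUI widgets deliver values as strings or generic numbers; convert
    # once up front and keep the results under explicit *_float names.
    learning_rate_float = float(learning_rate) if learning_rate else 0.0
    text_encoder_lr_float = float(text_encoder_lr) if text_encoder_lr else 0.0
    unet_lr_float = float(unet_lr) if unet_lr else 0.0

    # The old do_not_set_learning_rate flag (and the conditional that
    # consumed it) is gone: the main LR is always written to the config.
    config_toml_data = {
        "learning_rate": learning_rate_float,
        "text_encoder_lr": text_encoder_lr_float,
        "unet_lr": unet_lr_float,
    }
    return config_toml_data
```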