# Anima (Cosmos-Predict2) LoRA/LoKr trainer for Windows

For the Chinese documentation, see README.zh-CN.md.
## Features

- `anima_train.py` (orchestration) plus `utils/*` modules
- Single training entry point: `anima_train.py --config <path-to-toml>`
- Network types: `lora` and `lokr` (LyCORIS full mode for LoKr)
- LR schedulers: `constant`, `constant_with_warmup`, `linear`, `cosine`
- `radam_schedulefree` optimizer support

## Repository layout

- `anima_train.py`: main training script
- `utils/config_loader.py`: TOML config loader
- `config/save/anima_lora_config.example.toml`: generic config template
- `start_webui.bat`: builds the frontend and launches the WebUI backend
- `install_dependencies.bat`: one-click dependency installer (Windows)
- `modelling/`: modeling code
- `anima/text_encoders/`: tokenizer/config assets

## Usage

There are two ways to prepare a config and one unified way to train.
### Way A: WebUI

1. Run `install_dependencies.bat`.
2. Run `start_webui.bat`.
3. Edit and save the config in the WebUI (saved to `config/save/*.toml`).

Notes:

- Invalid combinations (e.g. `radam_schedulefree` + `cosine`) are blocked from being saved.

### Way B: Manual

1. Run `install_dependencies.bat`.
2. Copy the config template:
   `copy config\save\anima_lora_config.example.toml config\save\my_train.toml`
3. Edit `config\save\my_train.toml` (model path, data path, output fields).
4. Start training:
   `.venv\Scripts\python.exe anima_train.py --config .\config\save\my_train.toml`
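As an orientation aid, a trimmed-down `my_train.toml` could look like the fragment below. The key names and paths are illustrative assumptions; treat `config/save/anima_lora_config.example.toml` as the authoritative schema.

```toml
# Illustrative fragment only -- key names are assumptions, not the real schema.
network_type = "lora"
model_path = "anima/diffusion_models/model.safetensors"
data_path = "path/to/dataset"
output_dir = "output"

optimizer = "radam_schedulefree"
lr_scheduler = "constant"   # required when using radam_schedulefree
lr_warmup_steps = 0
lr_warmup_ratio = 0
```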
Notes:

- TensorBoard logging can be enabled (`--tensorboard-enabled`).
- TensorBoard is started automatically (`--tensorboard-autostart`) on port 6006 (the port auto-increments if occupied).
- Disable auto-start with `--no-tensorboard-autostart`.

## Model files

Weight files are not stored in this repository. Place your local model files under the paths you configure, for example:
- `anima/diffusion_models/*.safetensors`
- `anima/vae/*.safetensors`
- `anima/text_encoders/` (tokenizer + encoder files)

## Configuration

- Configs live in `config/save/*.toml`.
- Use `config/save/anima_lora_config.example.toml` as the shared template.

## Other notes

- For `adamw8bit`, install `bitsandbytes` separately if needed.
- `radam_schedulefree` requires `lr_scheduler = constant` and no warmup (`lr_warmup_steps = 0`, `lr_warmup_ratio = 0`).

## License

This repository's code is under the MIT License. Please follow the original licenses/terms for upstream models and third-party assets.
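The `radam_schedulefree` constraint above (constant scheduler, zero warmup) is the kind of combination check the WebUI enforces before saving. A minimal sketch of such a validator, assuming this simplified rule set (the trainer's actual checks may differ):

```python
# Sketch of optimizer/scheduler combination validation (assumed logic).
def validate_combo(optimizer: str, lr_scheduler: str,
                   lr_warmup_steps: int = 0,
                   lr_warmup_ratio: float = 0.0) -> list[str]:
    """Return a list of problems; an empty list means the combination is allowed."""
    errors = []
    if optimizer == "radam_schedulefree":
        # Schedule-free optimizers manage their own effective schedule,
        # so only a constant LR with no warmup is permitted here.
        if lr_scheduler != "constant":
            errors.append("radam_schedulefree requires lr_scheduler=constant")
        if lr_warmup_steps != 0 or lr_warmup_ratio != 0:
            errors.append("radam_schedulefree requires zero warmup")
    return errors
```

A save path would call this first and refuse to write the TOML when the returned list is non-empty.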