
Anima Trainer v1.02

Anima (Cosmos-Predict2) LoRA/LoKr trainer for Windows. For documentation in Chinese, see: README.zh-CN.md

Overview

  • Trainer entry: anima_train.py (orchestration) + utils/* modules
  • Config via TOML: --config <path-to-toml>
  • Supports lora and lokr (LyCORIS full mode for LoKr)
  • Optional ComfyUI key conversion on save
  • Optional W&B logging
  • LR schedulers follow HF warmup/decay behavior (constant, constant_with_warmup, linear, cosine)
  • Supports radam_schedulefree optimizer
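
As an illustration, a minimal training config might look like the sketch below. The field names are assumptions inferred from the options above, not a verbatim copy of the shipped template; consult config/save/anima_lora_config.example.toml for the authoritative schema.

```toml
# Hypothetical config sketch -- key names are illustrative assumptions.
network_type = "lokr"            # "lora" or "lokr" (LyCORIS full mode)
optimizer = "radam_schedulefree"
lr_scheduler = "constant"        # constant | constant_with_warmup | linear | cosine
lr_warmup_steps = 0              # radam_schedulefree requires no warmup
lr_warmup_ratio = 0
comfyui_keys = true              # optional ComfyUI key conversion on save
wandb_enabled = false            # optional W&B logging
```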

Repository Layout

  • anima_train.py: main training script
  • utils/config_loader.py: TOML config loader
  • config/save/anima_lora_config.example.toml: generic config template
  • start_webui.bat: build frontend and launch WebUI backend
  • install_dependencies.bat: one-click dependency installer (Windows)
  • modelling/: modeling code
  • anima/text_encoders/: tokenizer/config assets

Recommended Workflow

Two ways to prepare a config, one unified way to train:

  1. Prepare a config: open the WebUI or edit a TOML file directly.
  2. Run training from the terminal.
  3. TensorBoard is enabled by default.

Path A: WebUI

  1. Install dependencies:
     install_dependencies.bat
  2. Start the WebUI:
     start_webui.bat
  3. In the WebUI, edit and save the config (config/save/*.toml).
  4. Save the config only. The WebUI is a config editor; training is started from the CLI.

Notes:

  • The WebUI only edits and validates configs; semantic checks run in real time.
  • Invalid configs (for example, radam_schedulefree + cosine) are blocked from saving.

Path B: Edit TOML Directly

  1. Install dependencies:
     install_dependencies.bat
  2. Create a local config:
     copy config\save\anima_lora_config.example.toml config\save\my_train.toml
  3. Edit config\save\my_train.toml (model path, data path, output fields).
  4. Run training:
     .venv\Scripts\python.exe anima_train.py --config .\config\save\my_train.toml

Notes:

  • TensorBoard event writing is on by default (--tensorboard-enabled).
  • CLI training auto-starts TensorBoard by default (--tensorboard-autostart) on port 6006 (auto-increments if occupied).
  • Disable auto-start with --no-tensorboard-autostart.
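
For example, to keep TensorBoard event writing on while suppressing the auto-started viewer, the training command from Path B above becomes:

```
.venv\Scripts\python.exe anima_train.py --config .\config\save\my_train.toml --no-tensorboard-autostart
```

Events are still written, so you can launch TensorBoard manually later against the same log directory.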

Model Files

Weight files are not stored in this repository. Place your local model files under the paths you configure, for example:

  • anima/diffusion_models/*.safetensors
  • anima/vae/*.safetensors
  • anima/text_encoders/ (tokenizer + encoder files)

Notes

  • Local configs are usually kept under config/save/*.toml.
  • Use config/save/anima_lora_config.example.toml as the shared template.
  • For adamw8bit, install bitsandbytes separately if needed.
  • radam_schedulefree requires lr_scheduler=constant and no warmup (lr_warmup_steps=0, lr_warmup_ratio=0).
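
The radam_schedulefree constraint above can be expressed as a small compatibility check. This is a sketch mirroring the semantic validation the WebUI performs, not the trainer's actual code; the flat dict keys are assumptions matching the TOML field names in the notes.

```python
# Sketch of the optimizer/scheduler compatibility rule described above.
# Key names are assumptions; the real trainer may structure its config differently.
def validate_config(cfg: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    if cfg.get("optimizer") == "radam_schedulefree":
        if cfg.get("lr_scheduler") != "constant":
            errors.append("radam_schedulefree requires lr_scheduler=constant")
        if cfg.get("lr_warmup_steps", 0) or cfg.get("lr_warmup_ratio", 0):
            errors.append("radam_schedulefree requires no warmup")
    return errors

# A config combining radam_schedulefree with cosine is rejected;
# the constant/no-warmup combination passes.
bad = {"optimizer": "radam_schedulefree", "lr_scheduler": "cosine"}
good = {"optimizer": "radam_schedulefree", "lr_scheduler": "constant",
        "lr_warmup_steps": 0, "lr_warmup_ratio": 0}
```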

Acknowledgements

  • Community references for LoRA/LoKr training workflows
  • This training script references community-circulated implementations (including the v1.01 in the parent directory). If any content infringes your rights, please contact us for removal.
  • PyTorch, Transformers, LyCORIS, Weights & Biases

License

The code in this repository is released under the MIT License. Upstream models and third-party assets remain subject to their original licenses and terms.
