Distillation-accelerated versions of Wan2.1 - Dramatically faster while maintaining exceptional quality

- **Image-to-Video (I2V)**: Transform still images into dynamic videos
- **Text-to-Video (T2V)**: Generate videos from text descriptions
| Precision | Model Identifier | Model Size | Framework | Quality vs Speed |
|---|---|---|---|---|
| 🏆 BF16 | lightx2v_4step | ~28-32 GB | LightX2V | ⭐⭐⭐⭐⭐ Highest quality |
| ⚡ FP8 | scaled_fp8_e4m3_lightx2v_4step | ~15-17 GB | LightX2V | ⭐⭐⭐⭐ Excellent balance |
| 🎯 INT8 | int8_lightx2v_4step | ~15-17 GB | LightX2V | ⭐⭐⭐⭐ Fast & efficient |
| 🔷 FP8 ComfyUI | scaled_fp8_e4m3_lightx2v_4step_comfyui | ~15-17 GB | ComfyUI | ⭐⭐⭐ ComfyUI ready |
```bash
# Pattern: wan2.1_{task}_{resolution}_{precision}.safetensors
# Examples:
wan2.1_i2v_720p_lightx2v_4step.safetensors                         # 720P I2V - BF16
wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors         # 720P I2V - FP8
wan2.1_i2v_480p_int8_lightx2v_4step.safetensors                    # 480P I2V - INT8
wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors  # T2V - FP8 ComfyUI
```
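The naming convention can also be applied in scripts. Below is a minimal illustrative helper (the `build_filename` function is hypothetical and not part of this repository or LightX2V):

```python
# Illustrative only: compose a filename following the pattern
# wan2.1_{task}_{resolution}_{precision}.safetensors shown above.
def build_filename(task: str, resolution: str, precision: str) -> str:
    # task: "i2v" or "t2v"; resolution/size: e.g. "720p", "480p", "14b";
    # precision: e.g. "lightx2v_4step", "scaled_fp8_e4m3_lightx2v_4step".
    return f"wan2.1_{task}_{resolution}_{precision}.safetensors"

# Reproduces the FP8 720P I2V example from the list above.
print(build_filename("i2v", "720p", "scaled_fp8_e4m3_lightx2v_4step"))
# -> wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors
```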
💡 Explore all models: Browse Full Model Collection →
LightX2V is a high-performance inference framework optimized for these models. It is approximately 2x faster than ComfyUI and offers better quantization accuracy. Highly recommended!
Download the model file you need from this repository, for example the FP8 720P I2V checkpoint:

```bash
huggingface-cli download lightx2v/Wan2.1-Distill-Models \
  --local-dir ./models/wan2.1_i2v_720p \
  --include "wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors"
```
Install LightX2V from source:

```bash
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V
pip install -r requirements.txt
```
Alternatively, refer to the Quick Start Documentation to use Docker.
Choose the appropriate configuration based on your GPU memory:

- 80GB+ GPU (A100/H100)
- 24GB+ GPU (RTX 4090)
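If you are unsure which configuration applies, a quick generic PyTorch check (not specific to LightX2V) reports your card's total VRAM:

```python
import torch

# Print the total VRAM of the first CUDA device so you can pick
# the matching configuration above.
assert torch.cuda.is_available(), "No CUDA device found"
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB VRAM")
```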
Then run the 4-step distilled I2V script:

```bash
cd scripts
bash wan/run_wan_i2v_distill_4step_cfg.sh
```
Additional Components: These models contain only the DiT weights. You also need the text encoder, the VAE, and (for I2V) the image encoder. Refer to the LightX2V Documentation for how to organize the complete model directory.
If you find this project helpful, please give us a ⭐ on GitHub