OmniLottie is the first family of end-to-end multimodal Lottie generators built on pre-trained Vision-Language Models (VLMs), capable of generating complex, detailed Lottie animations from multimodal instructions including text, images, and videos. We also introduce MMLottie-2M, a multimodal dataset of two million richly annotated Lottie animations, along with a standardized evaluation protocol for multimodal vector animation generation tasks.
| Model | Download link | Size | Update date |
|---|---|---|---|
| OmniLottie (4B) | Huggingface | 8.46 GB | 2026-03-02 |
The following instructions set up an environment ready for inference.
git clone https://github.com/OpenVGLab/OmniLottie
cd OmniLottie
Create and activate a new conda environment with Python 3.10:
conda create -n omnilottie python=3.10
conda activate omnilottie
We have tested our environment with CUDA 12.1. You can install CUDA 12.1 by following the CUDA Toolkit installation guide.
Install PyTorch with CUDA 12.1 support:
pip install torch==2.3.0+cu121 torchvision==0.18.0+cu121 --index-url https://download.pytorch.org/whl/cu121
Install remaining dependencies:
pip install -r requirements.txt
| Model | GPU Memory Usage | Time per 256/512/1024/2048/4096 tokens |
|---|---|---|
| OmniLottie | 15.2 GB | 8.34 / 16.68 / 33.38 / 66.74 / 133.49 seconds |
Note: The inference time shown here is measured in OmniLottie's Lottie tokens, while the inference time reported in our paper is measured in JSON code tokens for fair comparison with baseline methods.
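Generation time grows roughly linearly with output length (about 33 ms per Lottie token, as implied by the table above). A minimal sketch for estimating latency at other lengths; the per-token rate is derived from the table, not an official figure:

```python
# Per-token rate implied by the 256-token row of the table above
# (8.34 s / 256 tokens ~= 32.6 ms per token).
SECONDS_PER_TOKEN = 8.34 / 256

def estimate_latency(num_tokens: int) -> float:
    """Rough wall-clock estimate in seconds, assuming linear scaling."""
    return num_tokens * SECONDS_PER_TOKEN

for n in (256, 1024, 4096):
    print(f"{n} tokens: ~{estimate_latency(n):.2f} s")
```

The estimates track the measured numbers closely (e.g. ~33.4 s at 1024 tokens), which suggests decoding is the dominant cost.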
Download Model Weights
First, install the Hugging Face CLI tool:
pip install huggingface-hub
Download the model from Hugging Face:
# Download OmniLottie model
huggingface-cli download OmniLottie/OmniLottie --local-dir /PATH/TO/OmniLottie
Try with Example Data
We provide example prompts, images, and videos in the example/ directory:
example/demo.txt - 37 text prompts
example/demo_images/ - 26 images (with corresponding text descriptions)
example/demo_video/ - 30 videos

# Test with example text prompts
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--batch_text_file example/demo.txt \
--output_dir ./output_demo_text
# Test with example images
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--single_image example/demo_images/00de75e2c031cb3fc3f472e356aba5b6.png \
--output_dir ./output_demo_image
# Test with example videos
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--single_video example/demo_video/02b8ce2014690a9e30dc25da846e8afb.mp4 \
--output_dir ./output_demo_video
Generate Lottie animations from text descriptions:
Single prompt:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--single_text "A red ball appearing, bouncing up and down, then fading out, repeating seamlessly" \
--output_dir ./output_text
Batch generation from file:
# Create a prompts.txt file with one prompt per line
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--batch_text_file example/demo.txt \
--output_dir ./output_text
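If you want to build your own prompts file rather than use example/demo.txt, the expected format is one prompt per line. A minimal sketch (the second prompt here is invented for illustration):

```python
# Write a batch prompts file for --batch_text_file: one prompt per line.
prompts = [
    "A red ball appearing, bouncing up and down, then fading out, repeating seamlessly",
    "A yellow star spinning slowly while pulsing, repeating seamlessly",  # made-up example
]
with open("prompts.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(prompts) + "\n")
```

Then pass `--batch_text_file prompts.txt` to inference.py.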
Custom generation parameters:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--single_text "a blue bird appearing, pulsing while sliding downward, lingers briefly, then growing back while sliding upward to reset with clear phase changes, repeating seamlessly" \
--use_sampling \
--temperature 0.8 \
--top_p 0.25 \
--top_k 5 \
--repetition_penalty 1.01 \
--output_dir ./output
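These flags are the standard autoregressive sampling controls: temperature rescales the logits, top-k keeps only the k most likely tokens, top-p (nucleus sampling) keeps the smallest set of tokens whose probability mass reaches p, and repetition_penalty down-weights already-generated tokens. A stdlib-only sketch of how temperature, top-k, and top-p interact — illustrative only, not OmniLottie's actual decoding code:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_p=0.25, top_k=5, seed=None):
    """Temperature-scale logits, keep the top-k tokens, then keep the
    smallest prefix whose probability mass reaches top_p, and sample."""
    rng = random.Random(seed)
    scaled = {t: v / temperature for t, v in logits.items()}
    # Numerically stable softmax.
    m = max(scaled.values())
    exp = {t: math.exp(v - m) for t, v in scaled.items()}
    z = sum(exp.values())
    ranked = sorted(((t, e / z) for t, e in exp.items()), key=lambda x: -x[1])
    ranked = ranked[:top_k]                    # top-k filter
    kept, mass = [], 0.0
    for t, p in ranked:                        # nucleus (top-p) filter
        kept.append((t, p))
        mass += p
        if mass >= top_p:
            break
    tokens, weights = zip(*kept)
    return rng.choices(tokens, weights=weights, k=1)[0]

# With a sharply peaked distribution and top_p=0.25, the most likely
# token alone exceeds the nucleus threshold, so decoding is greedy here.
print(sample_next_token({"circle": 3.0, "square": 1.0, "star": 0.5}, seed=0))
```

A low top_p like 0.25 keeps generation close to greedy while a mild temperature still breaks ties, which suits structured outputs like Lottie JSON where wild sampling tends to produce invalid files.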
Generate with Best-of-N selection:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--single_text "a light blue piggy bank with a darker blue outline, with a single light blue coin with a dark blue yen symbol (¥) appears above the piggy bank, then starts descending towards the piggy bank's opening" \
--num_candidates 8 \
--output_dir ./output
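Best-of-N selection samples several candidate animations and keeps the one that scores highest under a selection criterion. A hedged sketch of the pattern; `generate_candidate` and `score` are stand-ins, not the functions OmniLottie actually uses:

```python
import random

def generate_candidate(prompt: str, rng: random.Random) -> str:
    """Stand-in for one sampled generation run."""
    return f"{prompt}-candidate-{rng.randint(0, 999)}"

def score(candidate: str) -> float:
    """Stand-in scorer; a real system might check Lottie validity or
    use a learned reward model."""
    return -len(candidate)  # e.g. prefer more compact outputs

def best_of_n(prompt: str, n: int = 8, seed: int = 0) -> str:
    """Sample n candidates and return the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("piggy-bank", n=8))
```

The trade-off is linear cost in N: `--num_candidates 8` runs eight generations per prompt, so expect roughly eight times the single-sample latency.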
Generate Lottie animations from an image:
Single image:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--single_image /path/to/image.png \
--output_dir ./output_image
Convert video to Lottie animation:
Single video:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--single_video /path/to/video.mp4 \
--output_dir ./output_video
Specify tokenizer path:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--tokenizer_name /PATH/TO/Qwen2.5-VL-3B-Instruct \
--single_text "Your prompt here" \
--output_dir ./output
Adjust token length:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--maxlen 6072 \
--text_len 512 \
--single_text "Your prompt here" \
--output_dir ./output
Filter by task type (when using MMLottieBench dataset):
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--mmlottie_bench_dir /PATH/TO/mmlottie_bench \
--split real \
--task_filter text \
--output_dir ./output
Process limited samples with shuffling:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--mmlottie_bench_dir /PATH/TO/mmlottie_bench \
--split real \
--max_samples 10 \
--shuffle \
--output_dir ./output
We provide an interactive generation interface using Gradio:
Local Deployment
python app.py
Online Demo
Try our live demo on Hugging Face Spaces
We provide MMLottieBench for standardized evaluation of Lottie generation models.
Option 1: Using download script:
python download_mmlottie_bench.py --output_dir /PATH/TO/mmlottie_bench
Option 2: Using Hugging Face CLI:
huggingface-cli download OmniLottie/MMLottieBench --repo-type dataset --local-dir /PATH/TO/mmlottie_bench
Option 3: Automatic download (in code):
from datasets import load_dataset
dataset = load_dataset("OmniLottie/MMLottieBench")
MMLottieBench contains 900 samples split into:
real - 450 real-world Lottie animations
synthetic - 450 synthetically generated samples
Each split contains 3 task types (150 samples each):
text2lottie - Text-to-Lottie generation
text_image2lottie - Text-Image-to-Lottie generation
video2lottie - Video-to-Lottie generation
MMLottieBench provides two splits that can be switched using --split:
--split real - Test on 450 real-world Lottie animations
--split synthetic - Test on 450 synthetically generated samples

Test on real split (all tasks):
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--mmlottie_bench_dir /PATH/TO/mmlottie_bench \
--split real \
--output_dir ./benchmark_results_real
Test on synthetic split (all tasks):
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--mmlottie_bench_dir /PATH/TO/mmlottie_bench \
--split synthetic \
--output_dir ./benchmark_results_synthetic
Test specific task type on real split:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--mmlottie_bench_dir /PATH/TO/mmlottie_bench \
--split real \
--mmlottie_task text2lottie \
--output_dir ./benchmark_results
Available task types:
text2lottie - Text-to-Lottie generation (150 samples per split)
text_image2lottie - Text-Image-to-Lottie generation (150 samples per split)
video2lottie - Video-to-Lottie generation (150 samples per split)

Process limited samples with filtering:
python inference.py \
--sketch_weight /PATH/TO/OmniLottie \
--mmlottie_bench_dir /PATH/TO/mmlottie_bench \
--split real \
--max_samples 50 \
--shuffle \
--output_dir ./benchmark_results
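The selection flags compose: split filter, then task filter, then optional shuffle and truncation. A sketch of that filtering logic over the benchmark's 900 samples; the field names (`split`, `task`) are assumptions about the schema, not documented keys:

```python
import random

# Mirror MMLottieBench's layout: 2 splits x 3 tasks x 150 samples = 900.
TASKS = ("text2lottie", "text_image2lottie", "video2lottie")
samples = [
    {"split": split, "task": task, "id": i}
    for split in ("real", "synthetic")
    for task in TASKS
    for i in range(150)
]

def select(samples, split, task=None, max_samples=None, shuffle=False, seed=0):
    """Filter by split (and optionally task), then shuffle/truncate,
    analogous to --split / --mmlottie_task / --shuffle / --max_samples."""
    picked = [s for s in samples
              if s["split"] == split and (task is None or s["task"] == task)]
    if shuffle:
        random.Random(seed).shuffle(picked)
    return picked[:max_samples] if max_samples else picked

print(len(select(samples, "real")))                                  # 450
print(len(select(samples, "real", task="text2lottie")))              # 150
print(len(select(samples, "real", max_samples=50, shuffle=True)))    # 50
```

Fixing the shuffle seed keeps subsampled runs reproducible across evaluations.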
For detailed usage, see:
OmniLottie is licensed under the Apache License 2.0, while the MMLottie-2M dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license. The license files are available in the respective GitHub and Hugging Face repositories.
The MMLottie-2M Dataset (the "Dataset") is provided exclusively for research and non-commercial purposes. Any commercial use, redistribution for profit, or deployment in commercial products is strictly prohibited without explicit authorization.
The Dataset is provided "AS IS" and "AS AVAILABLE", without warranties of any kind, either express or implied, including but not limited to:
Under no circumstances shall the authors, contributors, or affiliated organizations be liable for any direct, indirect, incidental, special, consequential, or punitive damages arising from or related to:
By using the Dataset, you agree that:
If you are a rights holder and believe that any content in this Dataset infringes your intellectual property rights, please contact us immediately. We are committed to addressing legitimate concerns and will promptly remove any content upon verification of valid claims.
For questions, concerns, or content removal requests, please reach out through:
@article{yang2026omnilottie,
  title={OmniLottie: Generating Vector Animations via Parameterized Lottie Tokens},
  author={Yiying Yang and Wei Cheng and Sijin Chen and Honghao Fu and Xianfang Zeng and Yujun Cai and Gang Yu and Xinjun Ma},
  journal={arXiv preprint arXiv:2603.02138},
  year={2026}
}