🌐 English | 简体中文 | 繁體中文 | Español | Français | 日本語
If you find this project useful, a star ⭐ on GitHub would be greatly appreciated!
ThinkSound is a unified Any2Audio generation framework built on flow matching and guided by Chain-of-Thought (CoT) reasoning.
This repository provides the PyTorch implementation for multimodal audio generation and editing: generate or edit audio from video, text, and audio, powered by step-by-step reasoning from Multimodal Large Language Models (MLLMs).

ThinkSound decomposes audio generation and editing into three interactive stages, all guided by MLLM-based Chain-of-Thought (CoT) reasoning.

Environment Preparation:

```bash
git clone https://github.com/liuhuadai/ThinkSound.git
cd ThinkSound
conda create -n thinksound python=3.10
conda activate thinksound
pip install thinksound
conda install -y -c conda-forge 'ffmpeg<7'

# Download the pretrained weights from https://huggingface.co/liuhuadai/ThinkSound into ckpts/
# (the weights can also be downloaded from https://www.modelscope.cn/models/iic/ThinkSound)
git lfs install
git clone https://huggingface.co/liuhuadai/ThinkSound ckpts
```

To improve inference and training speed, you may optionally install a FlashAttention backend compatible with your system and PyTorch version.
✅ Windows Tip:
Windows users can simply run `setup_windows.bat` (or double-click it) to automatically create the conda environment, install all dependencies (including FFmpeg), and download the pretrained model; no manual setup is required.
Make sure `conda` and `git` are installed and available in your system PATH before running the script.
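The conda command above deliberately pins FFmpeg below major version 7. If you installed FFmpeg some other way, here is a small sketch of how you could check the major version from Python by parsing the usual `ffmpeg -version` banner (the parsing regex and sample string are illustrative, not part of the repo):

```python
import re

def ffmpeg_major(version_output: str) -> int:
    """Parse the major version out of the first line of `ffmpeg -version`.

    Handles both plain ("ffmpeg version 6.1.1 ...") and static-build
    ("ffmpeg version n6.1.1 ...") banners.
    """
    m = re.search(r"ffmpeg version n?(\d+)\.", version_output)
    if not m:
        raise ValueError("could not parse ffmpeg version")
    return int(m.group(1))

# Typical first line of `ffmpeg -version` output:
sample = "ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers"
assert ffmpeg_major(sample) < 7  # satisfies the 'ffmpeg<7' pin above
```

In practice you would feed this the output of `subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True).stdout`.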
```bash
chmod +x scripts/demo.sh
./scripts/demo.sh <path-to-your-demo-video> <title> <CoT description> [use-half]
```
You can use the provided .bat script instead:
```bat
.\scripts\demo.bat <path-to-your-demo-video> <title> <CoT description> [use-half]
```
Note:
- `<path-to-your-demo-video>`: the path to a single video.
- `[use-half]` (optional): add `use-half` at the end to enable half-precision feature extraction.

To process a directory of videos in batch:

```bash
chmod +x scripts/eval_batch.sh
./scripts/eval_batch.sh <video_path> <csv_path> <save_path (optional)> [use-half]
```
Use the equivalent .bat script:

```bat
.\scripts\eval_batch.bat <video_path> <csv_path> <save_path (optional)> [use-half]
```
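The `<csv_path>` prompt file can be generated programmatically with Python's standard `csv` module. A minimal sketch; the column names (`id`, `caption`, `caption_cot`) and the row contents are assumptions for illustration, so check demo_test.csv in the repo for the authoritative header:

```python
import csv

# Hypothetical prompt rows: one entry per video in <video_path>.
# Column names are assumptions; demo_test.csv defines the real format.
rows = [
    {"id": "video_001",
     "caption": "a dog barking",
     "caption_cot": "Begin with two sharp barks, then a pause, then a softer bark."},
]

with open("prompts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "caption", "caption_cot"])
    writer.writeheader()
    writer.writerows(rows)
```

`csv.DictWriter` quotes fields containing commas automatically, so free-form CoT descriptions are safe to store this way.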
Note:
- `<video_path>`: path to the root directory containing all .mp4 videos to be processed (all videos must be of equal duration).
- `<csv_path>`: a CSV file with text prompts for each video (see demo_test.csv for the format).
- `<save_path>` (optional): where to save the generated audio. Defaults to results/features.
- `[use-half]` (optional): add `use-half` at the end to enable half-precision feature extraction.

For an interactive experience, launch the Gradio web interface:

```bash
python app.py
```
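If you prefer driving the demo script from Python rather than the shell, a thin wrapper can assemble the argv shown in the usage above. This is only a sketch: the helper name and the example paths are hypothetical, while the script path and the `use-half` flag come from the usage lines above.

```python
import subprocess

def demo_command(video_path: str, title: str, cot: str, use_half: bool = False) -> list[str]:
    """Assemble the argv for scripts/demo.sh following the usage above."""
    cmd = ["./scripts/demo.sh", video_path, title, cot]
    if use_half:
        cmd.append("use-half")  # optional half-precision feature extraction
    return cmd

# Hypothetical invocation (paths are placeholders):
cmd = demo_command("examples/demo.mp4", "Dog barking", "Two sharp barks in a quiet yard.", use_half=True)
# Inside the repository you would then run:
# subprocess.run(cmd, check=True)
```

Passing the arguments as a list (rather than a single shell string) avoids quoting problems when the CoT description contains spaces or punctuation.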
See Training.md for training instructions.
This project is released under the Apache 2.0 License.
Note: The code, models, and dataset are for research and educational purposes only. Commercial use is NOT permitted. For commercial licensing, please contact the authors.
📦 Third-Party Components
Stable Audio Open VAE (by Stability AI): This repository includes a fine-tuned VAE from Stable Audio Open, licensed under the Stability AI Community License. Commercial use and redistribution require prior permission from Stability AI.
📘 All other code and models are released under the Apache License 2.0.
Many thanks to:
If you find ThinkSound useful in your research or work, please cite our paper:
```bibtex
@misc{liu2025thinksoundchainofthoughtreasoningmultimodal,
  title={ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language Models for Audio Generation and Editing},
  author={Huadai Liu and Jialei Wang and Kaicheng Luo and Wen Wang and Qian Chen and Zhou Zhao and Wei Xue},
  year={2025},
  eprint={2506.21448},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2506.21448},
}
```
✨ Feel free to open an issue or contact us via email (liuhuadai@zju.edu.cn) if you have any questions or suggestions!