
SoulX-FlashTalk: Real-Time Infinite Streaming of Audio-Driven Avatars via Self-Correcting Bidirectional Distillation

Le Shen*, Qian Qiao*, Tan Yu*, Ke Zhou, Tianhang Yu, Yu Zhan, Zhenjie Wang, Dingcheng Zhen, Ming Tao, Shunshun Yin, Siyuan Liu

*Equal Contribution · Corresponding Author

HF space 

🔥 News

🤫 Coming soon

A 4-GPU version of SoulX-FlashTalk, plus a new open-source real-time streaming digital-human model designed specifically for consumer-grade GPUs such as the RTX 4090.

📑 Todo List

  • Technical report
  • Project Page
  • Inference code
  • Checkpoint release
  • Online demo

📢 Live Streaming & Video Podcast

🎬 Online Demos

🌰 Examples

📖 Quickstart

🔧 Installation

1. Create a Conda environment

```bash
conda create -n flashtalk python=3.10
conda activate flashtalk
```

2. Install PyTorch with CUDA

```bash
pip install torch==2.7.1 torchvision==0.22.1 --index-url https://download.pytorch.org/whl/cu128
```

3. Install other dependencies

```bash
pip install -r requirements.txt
```

4. Install flash-attention

```bash
pip install ninja
pip install flash_attn==2.8.0.post2 --no-build-isolation
```

5. FFmpeg installation

```bash
# Ubuntu / Debian
apt-get install ffmpeg

# CentOS / RHEL
yum install ffmpeg ffmpeg-devel
```

or

```bash
# Conda (no root required)
conda install -c conda-forge ffmpeg==7
```
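If you are unsure which of the options above applies to your machine, the choice can be automated. The helper below is a hypothetical sketch (not part of this repo) that prints the matching install command based on the available package manager:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of this repo): print the ffmpeg install
# command that matches whichever package manager is available here.
install_ffmpeg_cmd() {
  if command -v apt-get >/dev/null 2>&1; then
    echo "apt-get install -y ffmpeg"                  # Ubuntu / Debian
  elif command -v yum >/dev/null 2>&1; then
    echo "yum install -y ffmpeg ffmpeg-devel"         # CentOS / RHEL
  elif command -v conda >/dev/null 2>&1; then
    echo "conda install -y -c conda-forge ffmpeg==7"  # Conda, no root needed
  else
    echo "# no supported package manager found; install ffmpeg manually"
  fi
}

install_ffmpeg_cmd
```

The function only prints the command rather than running it, so you can inspect it before executing (e.g. `eval "$(install_ffmpeg_cmd)"` if you trust the result).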

🤗 Model download

| Model Component | Description | Link |
| --- | --- | --- |
| SoulX-FlashTalk-14B | Our 14B model | 🤗 Huggingface |
| chinese-wav2vec2-base | chinese-wav2vec2-base | 🤗 Huggingface |
```bash
# If you are in mainland China, run this first:
export HF_ENDPOINT=https://hf-mirror.com

pip install "huggingface_hub[cli]"
huggingface-cli download Soul-AILab/SoulX-FlashTalk-14B --local-dir ./models/SoulX-FlashTalk-14B
huggingface-cli download TencentGameMate/chinese-wav2vec2-base --local-dir ./models/chinese-wav2vec2-base
```
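The 14B checkpoint is a large download, so it is worth confirming that both local directories exist before moving on to inference. A small hedged sanity check (the paths mirror the `--local-dir` values above; the helper itself is ours, not part of the repo):

```shell
#!/usr/bin/env bash
# Hypothetical sanity check: confirm each model directory from the
# download commands above exists before attempting inference.
check_models() {
  local missing=0
  for d in "$@"; do
    if [ -d "$d" ]; then
      echo "found: $d"
    else
      echo "missing: $d (re-run the huggingface-cli download)" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_models ./models/SoulX-FlashTalk-14B ./models/chinese-wav2vec2-base \
  || echo "some models are missing; download them before running inference"
```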

🚀 Inference

```bash
# Infer on a single GPU
# Requires more than 64 GB of VRAM; use --cpu_offload to reduce VRAM usage to 40 GB.
bash inference_script_single_gpu.sh

# Infer on multiple GPUs
# Real-time inference speed requires 8x H800 GPUs or better.
bash inference_script_multi_gpu.sh
```
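The GPU-count guidance above can be wrapped in a small dispatcher. This is a hypothetical sketch (the script names come from the commands above; the selection logic and `pick_script` helper are our own):

```shell
#!/usr/bin/env bash
# Hypothetical dispatcher: choose the inference script from the number
# of visible GPUs, following the VRAM/GPU guidance above.
pick_script() {
  local n="$1"
  if [ "$n" -ge 8 ]; then
    echo "inference_script_multi_gpu.sh"   # real-time speed needs 8x H800 or better
  elif [ "$n" -ge 1 ]; then
    echo "inference_script_single_gpu.sh"  # >64 GB VRAM, or --cpu_offload for ~40 GB
  else
    echo "none"
  fi
}

NUM_GPUS=$(nvidia-smi -L 2>/dev/null | wc -l)
SCRIPT=$(pick_script "$NUM_GPUS")
echo "detected $NUM_GPUS GPU(s); would run: bash $SCRIPT"
```

The dispatcher only prints the chosen script; replace the final `echo` with `bash "$SCRIPT"` once you are satisfied with the selection.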

👋 Online Demo

Coming Soon!

📧 Contact Us

If you are interested in our work or would like to leave us a message, feel free to email le.shen@mail.dhu.edu.cn, qiaoqian@soulapp.cn, yutan@soulapp.cn, zhouke@soulapp.cn, or liusiyuan@soulapp.cn.

Since Group 1 has reached its capacity, we have opened a new WeChat group. We are also from SoulApp, and we warmly welcome everyone to download the app and join our Soul group for further technical discussions and updates!

WeChat Group QR Code
Join WeChat Group
(加入微信技术群)
Soul App Group QR Code
Download SoulApp & Join Group
(下载SoulApp加入群组)

📚 Citation

If you find our work useful in your research, please consider citing:

```bibtex
@misc{shen2025soulxflashtalktechnicalreport,
      title={SoulX-FlashTalk: Real-Time Infinite Streaming of Audio-Driven Avatars via Self-Correcting Bidirectional Distillation},
      author={Le Shen and Qian Qiao and Tan Yu and Ke Zhou and Tianhang Yu and Yu Zhan and Zhenjie Wang and Ming Tao and Shunshun Yin and Siyuan Liu},
      year={2025},
      eprint={2512.23379},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.23379},
}
```

🙇 Acknowledgement

> [!TIP]
> If you find our work useful, please also consider starring the original repositories of these foundational methods.

💡 Star History

Star History Chart

📄 License

This project is released under the Apache-2.0 license.