ACE-Step: A Step Towards Music Generation Foundation Model
Model Description
ACE-Step is a novel open-source foundation model for music generation that overcomes key limitations of existing approaches through a holistic architectural design. It integrates diffusion-based generation with Sana's Deep Compression AutoEncoder (DCAE) and a lightweight linear transformer, achieving state-of-the-art performance in generation speed, musical coherence, and controllability.
Key Features:
15× faster than LLM-based baselines (20s for 4-minute music on A100)
Superior musical coherence across melody, harmony, and rhythm
Full-song generation with duration control and natural-language descriptions
Uses
Direct Use
ACE-Step can be used for:
Generating original music from text descriptions
Music remixing and style transfer
Lyric editing
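As a rough illustration of how these controls fit together, the sketch below assembles a generation request combining a natural-language style description, optional lyrics, and an explicit duration. The field names (`prompt`, `lyrics`, `duration_s`, `seed`) are illustrative assumptions, not the official API; consult https://github.com/ace-step/ACE-Step for the actual interface.

```python
# Hypothetical request payload for an ACE-Step generation call.
# Field names are assumptions for illustration only.
def build_request(prompt: str, lyrics: str = "", duration_s: int = 240, seed: int = 0) -> dict:
    """Validate and package the generation controls described in the model card."""
    if not prompt.strip():
        raise ValueError("prompt must be a non-empty style description")
    if duration_s <= 0:
        raise ValueError("duration must be positive (seconds)")
    return {
        "prompt": prompt,          # natural-language music description
        "lyrics": lyrics,          # optional lyrics for the vocal track
        "duration_s": duration_s,  # duration control (4 minutes by default)
        "seed": seed,              # fixed seed for reproducible sampling
    }

request = build_request(
    prompt="upbeat jazz with walking bass and brushed drums",
    lyrics="[verse]\nCity lights are calling",
    duration_s=240,
)
```

The same payload shape would cover remixing or lyric editing by swapping in a source track reference or revised lyrics, assuming the underlying interface exposes those controls.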
Downstream Use
The model serves as a foundation for:
Voice cloning applications
Specialized music generation (rap, jazz, etc.)
Music production tools
Creative AI assistants
Out-of-Scope Use
The model should not be used for:
Generating copyrighted content without permission
Creating harmful or offensive content
Misrepresenting AI-generated music as human-created
@misc{gong2025acestep,
title={ACE-Step: A Step Towards Music Generation Foundation Model},
author={Junmin Gong and Wenxiao Zhao and Sen Wang and Shengyuan Xu and Jing Guo},
howpublished={\url{https://github.com/ace-step/ACE-Step}},
year={2025},
note={GitHub repository}
}