
🎤 SoulX-Singer

Official inference code for
SoulX-Singer: Towards High-Quality Zero-Shot Singing Voice Synthesis

SoulX-Logo

Demo Page HF Space Demo HF-model Technical Report arXiv License


🎵 Overview

SoulX-Singer is a high-fidelity, zero-shot singing voice synthesis model that enables users to generate realistic singing voices for unseen singers.
It supports melody-conditioned (F0 contour) and score-conditioned (MIDI notes) control for precise pitch, rhythm, and expression.
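To make the score-conditioned mode concrete: MIDI note numbers map to pitch targets via the standard equal-temperament formula. The snippet below is purely illustrative and not part of the SoulX-Singer API; it only shows what a note-based condition encodes.

```python
import numpy as np

# Standard equal-temperament conversion: MIDI note 69 (A4) = 440 Hz.
# Illustrative only -- this is not a SoulX-Singer function.
def midi_to_hz(note: np.ndarray) -> np.ndarray:
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

notes = np.array([60, 62, 64, 65, 67])  # C4 D4 E4 F4 G4
print(midi_to_hz(notes).round(2))  # C4 is ~261.63 Hz
```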


✨ Key Features

  • 🎤 Zero-Shot Singing – Generate high-fidelity voices for unseen singers, no fine-tuning needed.
  • 🎵 Flexible Control Modes – Melody (F0) and Score (MIDI) conditioning.
  • 📚 Large-Scale Dataset – 42,000+ hours of vocals aligned with lyrics and musical notes, covering Mandarin, English, and Cantonese.
  • 🧑‍🎤 Timbre Cloning – Preserve singer identity across languages, styles, and edited lyrics.
  • ✏️ Singing Voice Editing – Modify lyrics while keeping natural prosody.
  • 🌐 Cross-Lingual Synthesis – High-fidelity synthesis by disentangling timbre from content.

Performance Radar


🎬 Demo Examples

-Soul-Singer.mp4
-Soux-Singer.mp4

📰 News

  • [2026-02-12] SoulX-Singer Eval Dataset is now available on Hugging Face Datasets.
  • [2026-02-09] SoulX-Singer Online Demo is live on Hugging Face Spaces – try singing voice synthesis in your browser.
  • [2026-02-08] MIDI Editor is available on Hugging Face Spaces.
  • [2026-02-06] SoulX-Singer inference code and models released.

🚀 Quick Start

1. Clone Repository

git clone https://github.com/Soul-AILab/SoulX-Singer.git
cd SoulX-Singer

2. Set Up Environment

1. Install Conda (if not already installed): https://docs.conda.io/en/latest/miniconda.html

2. Create and activate a Conda environment:

conda create -n soulxsinger -y python=3.10
conda activate soulxsinger

3. Install dependencies:

pip install -r requirements.txt

⚠️ If you are in mainland China, use a PyPI mirror:

pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
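After installation, a quick sanity check that the key packages resolved can save a failed run later. The package names below are assumptions based on a typical singing-voice-synthesis stack; substitute the actual entries from requirements.txt.

```python
import importlib.util

# Package names here are guesses for a typical SVS stack;
# check requirements.txt for the real dependency list.
for pkg in ("torch", "numpy", "librosa"):
    status = "OK" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```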

3. Download Pretrained Models

Install Hugging Face Hub if needed:

pip install -U huggingface_hub

Download the SVS model and preprocessing models:

# Download the SoulX-Singer SVS model
hf download Soul-AILab/SoulX-Singer --local-dir pretrained_models/SoulX-Singer

# Download models required for preprocessing
hf download Soul-AILab/SoulX-Singer-Preprocess --local-dir pretrained_models/SoulX-Singer-Preprocess
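If you prefer a Python API over the `hf` CLI, `huggingface_hub.snapshot_download` fetches the same repositories; the target directories below simply mirror the two commands above.

```python
from huggingface_hub import snapshot_download

# Python equivalent of the two `hf download` commands above.
# Repo IDs and local_dir layout are taken from this README.
for repo_id in ("Soul-AILab/SoulX-Singer", "Soul-AILab/SoulX-Singer-Preprocess"):
    local_dir = f"pretrained_models/{repo_id.split('/')[-1]}"
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
    print(f"downloaded {repo_id} -> {local_dir}")
```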

4. Run the Demo

Run the inference demo:

bash example/infer.sh

This script relies on metadata generated by the preprocessing pipeline, including vocal separation and transcription. To run the demo on your own data, first follow the steps in preprocess to prepare the required metadata.

⚠️ Important Note: The metadata produced by the automatic preprocessing pipeline may not perfectly align the singing audio with the corresponding lyrics and musical notes. For the best synthesis quality, we strongly recommend manually correcting the alignment with the 🎼 Midi-Editor.

How to use the Midi-Editor:

🌐 WebUI

You can launch the interactive interface with:

python webui.py

🚧 Roadmap

  • 🖥️ Web-based UI for easy and interactive inference
  • 🌐 Online MIDI Editor deployment on Hugging Face Spaces
  • 🌐 Online demo deployment on Hugging Face Spaces
  • 📊 Release the SoulX-Singer-Eval benchmark
  • 🎹 Inference support for user-friendly MIDI-based input
  • 📚 Comprehensive tutorials and usage documentation
  • 🎵 Support for wav-to-wav singing voice conversion (without transcription)

πŸ™ Acknowledgements

Special thanks to the following open-source projects:

📄 License

SoulX-Singer is released under the Apache 2.0 license. Researchers and developers are free to use the code and model weights. See LICENSE for details.

⚠️ Usage Disclaimer

SoulX-Singer is intended for academic research, educational purposes, and legitimate applications such as personalized singing synthesis and assistive technologies.

Please note:

  • 🎤 Respect intellectual property, privacy, and personal consent when generating singing content.
  • 🚫 Do not use the model to impersonate individuals without authorization or to create deceptive audio.
  • ⚠️ The developers assume no liability for any misuse of this model.

We advocate for the responsible development and use of AI and encourage the community to uphold safety and ethical principles. For ethics or misuse concerns, please contact us.

📄 Citation

If you use SoulX-Singer in your research, please cite:

@misc{soulxsinger,
      title={SoulX-Singer: Towards High-Quality Zero-Shot Singing Voice Synthesis}, 
      author={Jiale Qian and Hao Meng and Tian Zheng and Pengcheng Zhu and Haopeng Lin and Yuhang Dai and Hanke Xie and Wenxiao Cao and Ruixuan Shang and Jun Wu and Hongmei Liu and Hanlin Wen and Jian Zhao and Zhonglin Jiang and Yong Chen and Shunshun Yin and Ming Tao and Jianguo Wei and Lei Xie and Xinsheng Wang},
      year={2026},
      eprint={2602.07803},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2602.07803}, 
}

📬 Contact Us

We welcome your feedback, questions, and collaboration:


WeChat Group QR Code
