IndexTTS2: A Breakthrough in Emotionally Expressive and Duration-Controlled Auto-Regressive Zero-Shot Text-to-Speech
Existing autoregressive large-scale text-to-speech (TTS) models have advantages in speech naturalness, but their token-by-token generation mechanism makes it difficult to precisely control the duration of synthesized speech. This becomes a significant limitation in applications requiring strict audio-visual synchronization, such as video dubbing.
This paper introduces IndexTTS2, which proposes a novel, general, and autoregressive-model-friendly method for speech duration control.
The method supports two generation modes: one explicitly specifies the number of generated tokens to precisely control speech duration; the other freely generates speech in an autoregressive manner without specifying the number of tokens, while faithfully reproducing the prosodic features of the input prompt.
Furthermore, IndexTTS2 achieves disentanglement between emotional expression and speaker identity, enabling independent control over timbre and emotion. In the zero-shot setting, the model can accurately reconstruct the target timbre (from the timbre prompt) while perfectly reproducing the specified emotional tone (from the style prompt).
To enhance speech clarity in highly emotional expressions, we incorporate GPT latent representations and design a novel three-stage training paradigm to improve the stability of the generated speech. Additionally, to lower the barrier to emotional control, we design a soft instruction mechanism based on textual descriptions, implemented by fine-tuning Qwen3, which effectively guides the generation of speech with the desired emotional orientation.
Finally, experimental results on multiple datasets show that IndexTTS2 outperforms state-of-the-art zero-shot TTS models in terms of word error rate, speaker similarity, and emotional fidelity. Audio samples are available at: IndexTTS2 demo page.
Tips: Please contact the authors for more detailed information. For commercial usage and cooperation, please contact [email protected].
IndexTTS2: The Future of Voice, Now Generating
Click the image to watch the IndexTTS2 introduction video.
QQ Groups: 553460296 (No. 1), 663272642 (No. 4)
Discord: https://discord.gg/uT32E7KDmy
Email: [email protected]
You are welcome to join our community! 🌏
2025/09/08 🔥🔥🔥 We release IndexTTS-2 to the world!
- The first autoregressive TTS model with precise synthesis duration control, supporting both controllable and uncontrollable modes. (This functionality is not yet enabled in this release.)
- The model achieves highly expressive emotional speech synthesis, with emotion control available through multiple input modalities.
2025/05/14 🔥🔥 We release IndexTTS-1.5, significantly improving the model's stability and its performance in the English language.
2025/03/25 🔥 We release IndexTTS-1.0 with model weights and inference code.
2025/02/12 🔥 We submitted our paper to arXiv and released our demos and test sets.
Architectural overview of IndexTTS2, our state-of-the-art speech model:
The key contributions of IndexTTS2 are summarized as follows:
- We propose a duration adaptation scheme for autoregressive TTS models. IndexTTS2 is the first autoregressive zero-shot TTS model to combine precise duration control with natural duration generation, and the method can be applied to any autoregressive large-scale TTS model.
- The emotional and speaker-related features are decoupled from the prompts, and a feature fusion strategy is designed to maintain semantic fluency and pronunciation clarity during emotionally rich expression. Furthermore, we developed an emotion control tool that accepts natural language descriptions, making the feature easy for users to apply.
- To address the lack of highly expressive speech data, we propose an effective training strategy, significantly enhancing the emotional expressiveness of zero-shot TTS to the state-of-the-art (SOTA) level.
- We will publicly release the code and pre-trained weights to facilitate future research and practical applications.
| HuggingFace | ModelScope |
|---|---|
| 😁 IndexTTS-2 | IndexTTS-2 |
| IndexTTS-1.5 | IndexTTS-1.5 |
| IndexTTS | IndexTTS |
The Git-LFS plugin must also be enabled on your current user account:
```bash
git lfs install
```

- Download this repository:
```bash
git clone https://github.com/index-tts/index-tts.git && cd index-tts
git lfs pull  # download large repository files
```

- Install the uv package manager. It is required for a reliable, modern installation environment.
Warning
We only support the uv installation method. Other tools, such as conda
or pip, don't provide any guarantees that they will install the correct
dependency versions. You will almost certainly have random bugs, error messages,
missing GPU acceleration, and various other problems if you don't use uv.
Please do not report any issues if you use non-standard installations, since
almost all such issues are invalid.
Furthermore, uv is up to 115x faster
than pip, which is another great reason to embrace the new industry standard
for Python project management.
- Install required dependencies:
We use uv to manage the project's dependency environment. The following command
will install the correct versions of all dependencies into your .venv directory.
```bash
uv sync --all-extras
```

If the download is slow, please try a local mirror, for example this one in China:

```bash
uv sync --all-extras --default-index "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple"
```

Tip
Available Extra Features:
- `--all-extras`: Automatically adds every extra feature listed below. You can remove this flag if you want to customize your installation choices.
- `--extra webui`: Adds WebUI support (recommended).
- `--extra deepspeed`: Adds DeepSpeed support (faster inference).
Important
Important (Windows): The DeepSpeed library may be difficult to install for
some Windows users. You can skip it by removing the --all-extras flag. If you
want any of the other extra features above, you can manually add their specific
feature flags instead.
Important (Linux/Windows): If you see an error about CUDA during the installation, please ensure that you have installed NVIDIA's CUDA Toolkit version 12.8 (or newer) on your system.
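After installing the dependencies, you can optionally confirm that the PyTorch build inside your new environment can actually see your GPU. This is just a minimal convenience sketch using standard torch calls (the repository also ships a fuller diagnostic, tools/gpu_check.py, described further below):

```python
import torch

# Show the installed torch build and the CUDA version it was compiled against.
print("torch:", torch.__version__, "| built with CUDA:", torch.version.cuda)

# True only if a compatible NVIDIA driver and GPU are visible from this environment.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```

Save it to a file of your choice (e.g. check_gpu.py, a name used here only for illustration) and run it inside the project environment with `uv run python check_gpu.py`.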
- Download the required models:
Download via huggingface-cli:
uv tool install "huggingface_hub[cli]"
hf download IndexTeam/IndexTTS-2 --local-dir=checkpointsOr download via modelscope:
uv tool install "modelscope"
modelscope download --model IndexTeam/IndexTTS-2 --local_dir checkpointsNote
In addition to the above models, some small models will also be automatically downloaded when the project is run for the first time. If your network environment has slow access to HuggingFace, it is recommended to execute the following command before running the code:
```bash
export HF_ENDPOINT="https://hf-mirror.com"
```

If you need to diagnose your environment to see which GPUs are detected, you can use our included utility to check your system:
```bash
uv run tools/gpu_check.py
```

To launch the web demo, run:

```bash
uv run webui.py
```

Then open your browser and visit http://127.0.0.1:7860 to see the demo.
You can also adjust the settings to enable features such as FP16 inference (lower VRAM usage), DeepSpeed acceleration, compiled CUDA kernels for speed, etc. All available options can be seen via the following command:
```bash
uv run python webui.py -h
```

Have fun!
You can also run IndexTTS2 as an API server.
```bash
uv run api_server.py
```

The server will be available at http://127.0.0.1:8000. You can change the host and port with the --host and --port arguments.
You can also enable FP16 inference with the --fp16 flag:
```bash
uv run api_server.py --fp16
```

Here is an example of how to use the API with curl:

```bash
curl -X POST "http://127.0.0.1:8000/tts" -H "Content-Type: application/json" -d '{
  "text": "Hello, this is a test.",
  "spk_audio_prompt": "examples/voice_01.wav",
  "output_path": "outputs/api_gen.wav"
}' --output outputs/api_gen.wav
```

To run scripts, you must use the `uv run <file.py>` command to ensure that
the code runs inside your current "uv" environment. It may sometimes also be
necessary to add the current directory to your PYTHONPATH, to help it find
the IndexTTS modules.
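If you would rather call the API from Python than from curl, the following is a minimal sketch using the requests library. The endpoint and JSON fields mirror the curl example above; the assumption that the response body contains the generated WAV data follows from the curl command's --output flag, and everything else (timeout, directory handling) is illustrative:

```python
import os
import requests

# Same request as the curl example above; field names follow that example.
payload = {
    "text": "Hello, this is a test.",
    "spk_audio_prompt": "examples/voice_01.wav",
    "output_path": "outputs/api_gen.wav",
}

response = requests.post("http://127.0.0.1:8000/tts", json=payload, timeout=300)
response.raise_for_status()

# Save the returned audio locally, as the curl example does with --output.
os.makedirs("outputs", exist_ok=True)
with open("outputs/api_gen.wav", "wb") as f:
    f.write(response.content)
```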
Example of running a script via uv:
PYTHONPATH="$PYTHONPATH:." uv run indextts/infer_v2.pyHere are several examples of how to use IndexTTS2 in your own scripts:
- Synthesize new speech with a single reference audio file (voice cloning):
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", use_fp16=False, use_cuda_kernel=False, use_deepspeed=False)
text = "Translate for me, what is a surprise!"
tts.infer(spk_audio_prompt='examples/voice_01.wav', text=text, output_path="gen.wav", verbose=True)
```

- Using a separate, emotional reference audio file to condition the speech synthesis:
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", use_fp16=False, use_cuda_kernel=False, use_deepspeed=False)
text = "酒楼丧尽天良,开始借机竞拍房间,哎,一群蠢货。"
tts.infer(spk_audio_prompt='examples/voice_07.wav', text=text, output_path="gen.wav", emo_audio_prompt="examples/emo_sad.wav", verbose=True)
```

- When an emotional reference audio file is specified, you can optionally set the `emo_alpha` parameter to adjust how much it affects the output. The valid range is `0.0` to `1.0`, and the default value is `1.0` (100%):
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", use_fp16=False, use_cuda_kernel=False, use_deepspeed=False)
text = "酒楼丧尽天良,开始借机竞拍房间,哎,一群蠢货。"
tts.infer(spk_audio_prompt='examples/voice_07.wav', text=text, output_path="gen.wav", emo_audio_prompt="examples/emo_sad.wav", emo_alpha=0.9, verbose=True)
```

- It's also possible to omit the emotional reference audio and instead provide an 8-float list specifying the intensity of each emotion, in the following order: `[happy, angry, sad, afraid, disgusted, melancholic, surprised, calm]`. You can additionally use the `use_random` parameter to introduce stochasticity during inference; the default is `False`, and setting it to `True` enables randomness. (A small helper sketch for building such vectors from emotion names follows after these examples.)
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", use_fp16=False, use_cuda_kernel=False, use_deepspeed=False)
text = "哇塞!这个爆率也太高了!欧皇附体了!"
tts.infer(spk_audio_prompt='examples/voice_10.wav', text=text, output_path="gen.wav", emo_vector=[0, 0, 0, 0, 0, 0, 0.45, 0], use_random=False, verbose=True)
```

- Alternatively, you can enable `use_emo_text` to guide the emotions based on your provided `text` script. Your text script will then automatically be converted into emotion vectors. You can introduce randomness with `use_random` (default: `False`; `True` enables randomness):
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", use_fp16=False, use_cuda_kernel=False, use_deepspeed=False)
text = "快躲起来!是他要来了!他要来抓我们了!"
tts.infer(spk_audio_prompt='examples/voice_12.wav', text=text, output_path="gen.wav", use_emo_text=True, use_random=False, verbose=True)
```

- It's also possible to directly provide a specific text emotion description via the `emo_text` parameter. Your emotion text will then automatically be converted into emotion vectors. This gives you separate control of the text script and the text emotion description:
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", use_fp16=False, use_cuda_kernel=False, use_deepspeed=False)
text = "快躲起来!是他要来了!他要来抓我们了!"
emo_text = "你吓死我了!你是鬼吗?"
tts.infer(spk_audio_prompt='examples/voice_12.wav', text=text, output_path="gen.wav", use_emo_text=True, emo_text=emo_text, use_random=False, verbose=True)
```
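Following up on the emotion-vector example above, here is a small helper sketch for building the 8-float emotion vector from named intensities. The helper itself (make_emo_vector) is hypothetical convenience code, not part of the IndexTTS2 API; only the emotion order and the `emo_vector` argument to `tts.infer()` come from the examples above:

```python
from indextts.infer_v2 import IndexTTS2

# Emotion slots in the order documented above.
EMOTIONS = ["happy", "angry", "sad", "afraid", "disgusted", "melancholic", "surprised", "calm"]

def make_emo_vector(**intensities: float) -> list:
    """Build an 8-float emotion vector, e.g. make_emo_vector(surprised=0.45)."""
    unknown = set(intensities) - set(EMOTIONS)
    if unknown:
        raise ValueError(f"Unknown emotion name(s): {sorted(unknown)}")
    return [float(intensities.get(name, 0.0)) for name in EMOTIONS]

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", use_fp16=False, use_cuda_kernel=False, use_deepspeed=False)
tts.infer(
    spk_audio_prompt="examples/voice_10.wav",
    text="哇塞!这个爆率也太高了!欧皇附体了!",
    output_path="gen.wav",
    emo_vector=make_emo_vector(surprised=0.45),  # same as [0, 0, 0, 0, 0, 0, 0.45, 0]
    verbose=True,
)
```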
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.