A lightweight text-to-speech (TTS) application designed to run efficiently on CPUs. Forget about the hassle of using GPUs and web APIs serving TTS models. With Kyutai's Pocket TTS, generating audio is just a pip install and a function call away.
Supports Python 3.10, 3.11, 3.12, 3.13, and 3.14. Requires PyTorch 2.5+. Does not require the GPU version of PyTorch.
🔊 Demo | 🐱💻GitHub Repository | 🤗 Hugging Face Model Card | 📄 Paper | 📚 Documentation
- Runs on CPU
- Small model size, 100M parameters
- Audio streaming
- Low latency, ~200ms to get the first audio chunk
- Faster than real-time, ~6x real-time on the CPU of a MacBook Air M4
- Uses only 2 CPU cores
- Python API and CLI
- Voice cloning
- English only at the moment
- Can handle infinitely long text inputs
Navigate to the Kyutai website to try it out directly in your browser. You can input text, select different voices, and generate speech without any installation.
You can use pocket-tts directly from the command line. We recommend using
uv as it installs any dependencies on the fly in an isolated environment (uv installation instructions here).
You can also use pip install pocket-tts to install it manually.
The following command generates a WAV file at ./tts_output.wav saying the default text with the default voice, and prints some speed statistics.

```bash
uvx pocket-tts generate
# or if you installed it manually with pip:
pocket-tts generate
```

Modify the voice with `--voice` and the text with `--text`. We provide a small catalog of voices.
You can take a look at this page which details the licenses for each voice.
The --voice argument can also take a plain wav file as input for voice cloning.
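For example, here is a hypothetical invocation combining both flags, using a local WAV file as the voice prompt (the file name and text are placeholders):

```bash
# Voice cloning from a local recording; my_recording.wav is a placeholder file name.
uvx pocket-tts generate --voice my_recording.wav --text "Welcome to Pocket TTS."
```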
Feel free to check out the generate documentation for more details and examples.
For trying multiple voices and prompts quickly, prefer using the serve command.
You can also run a local server to generate audio via HTTP requests.
```bash
uvx pocket-tts serve
# or if you installed it manually with pip:
pocket-tts serve
```

Navigate to http://localhost:8000 to try the web interface. It is faster than the command line because the model is kept in memory between requests.
You can check out the serve documentation for more details and examples.
Install the package with

```bash
pip install pocket-tts
# or
uv add pocket-tts
```

You can use this package as a simple Python library to generate audio from text.
```python
from pocket_tts import TTSModel
import scipy.io.wavfile

tts_model = TTSModel.load_model()
voice_state = tts_model.get_state_for_audio_prompt(
    "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
)
audio = tts_model.generate_audio(voice_state, "Hello world, this is a test.")

# Audio is a 1D torch tensor containing PCM data.
scipy.io.wavfile.write("output.wav", tts_model.sample_rate, audio.numpy())
```

You can have multiple voice states around if you have multiple voices you want to use. `load_model()` and `get_state_for_audio_prompt()` are relatively slow operations, so we recommend keeping the model and voice states in memory if you can.
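For example, here is a minimal sketch of that pattern using only the calls shown above; the second voice path is a placeholder for any other catalog voice or a local recording:

```python
from pocket_tts import TTSModel
import scipy.io.wavfile

# Load the model once and keep it around; this is the slow step.
tts_model = TTSModel.load_model()

# Build one state per voice up front and reuse them for every generation.
voice_states = {
    "casual": tts_model.get_state_for_audio_prompt(
        "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
    ),
    # Placeholder path: any other catalog voice (or a plain WAV for cloning) would go here.
    "cloned": tts_model.get_state_for_audio_prompt("my_recording.wav"),
}

for name, state in voice_states.items():
    audio = tts_model.generate_audio(state, f"Hello, this is the {name} voice.")
    scipy.io.wavfile.write(f"{name}.wav", tts_model.sample_rate, audio.numpy())
```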
You can check out the Python API documentation for more details and examples.
At the moment, we do not support (but would love pull requests adding):
- Running the TTS inside a web browser (WebAssembly)
- A compiled version, with for example `torch.compile()` or `candle`.
- Adding silence in the text input to generate pauses.
- Quantization to run the computation in int8.
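If you would like to experiment with the int8 quantization item, one possible starting point is PyTorch's dynamic quantization. The sketch below is untested and assumes the loaded model exposes its network as a standard torch.nn.Module (the `model` attribute is hypothetical):

```python
import torch
from pocket_tts import TTSModel

tts_model = TTSModel.load_model()

# Hypothetical attribute name: assumes the underlying network is reachable as a
# torch.nn.Module; adjust to the real module structure of TTSModel.
backbone = tts_model.model

# Replace nn.Linear layers with int8 dynamically-quantized equivalents.
quantized = torch.ao.quantization.quantize_dynamic(
    backbone, {torch.nn.Linear}, dtype=torch.qint8
)
tts_model.model = quantized
```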
We tried running this TTS model on the GPU but did not observe a speedup compared to CPU execution, notably because we use a batch size of 1 and a very small model.
We accept contributions! Feel free to open issues or pull requests on GitHub.
You can find development instructions in the CONTRIBUTING.md file, including how to set up an editable install of the package for local development.
Use of our model must comply with all applicable laws and regulations and must not result in, involve, or facilitate any illegal, harmful, deceptive, fraudulent, or unauthorized activity. Prohibited uses include, without limitation, voice impersonation or cloning without explicit and lawful consent; misinformation, disinformation, or deception (including fake news, fraudulent calls, or presenting generated content as genuine recordings of real people or events); and the generation of unlawful, harmful, libelous, abusive, harassing, discriminatory, hateful, or privacy-invasive content. We disclaim all liability for any non-compliant use.