Misaki is a G2P engine designed for Kokoro models.
Hosted demo: https://hf.co/spaces/hexgrad/Misaki-G2P
You can run this in one cell on Google Colab:
```py
!pip install -q misaki[en]
from misaki import en
g2p = en.G2P(trf=False, british=False, fallback=None) # no transformer, American English
text = '[Misaki](/misˈɑki/) is a G2P engine designed for [Kokoro](/kˈOkəɹO/) models.'
phonemes, tokens = g2p(text)
print(phonemes) # misˈɑki ɪz ə ʤˈitəpˈi ˈɛnʤən dəzˈInd fɔɹ kˈOkəɹO mˈɑdᵊlz.
```
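In addition to the phoneme string, `g2p` also returns the token list it produced. A minimal sketch of inspecting it, assuming each token exposes `text` and `phonemes` attributes (the attribute names are an assumption, not documented here):

```py
# Hedged sketch: per-token inspection; attribute names are assumptions.
for t in tokens:
    print(t.text, t.phonemes)
```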
To fall back to espeak:

```py
# Installing espeak varies across platforms, this silent install works on Colab:
!apt-get -qq -y install espeak-ng > /dev/null 2>&1
!pip install -q misaki[en] phonemizer
from misaki import en, espeak
fallback = espeak.EspeakFallback(british=False) # en-us
g2p = en.G2P(trf=False, british=False, fallback=fallback) # no transformer, American English
text = 'Now outofdictionary words are handled by espeak.'
phonemes, tokens = g2p(text)
print(phonemes) # nˈW Wɾɑfdˈɪkʃənˌɛɹi wˈɜɹdz ɑɹ hˈændəld bI ˈispik.
```

To-do:

- Data: Compress data (no need for indented json) and eliminate redundancy between gold and silver dictionaries (see the JSON compaction sketch after this list).
- Fallbacks: Train seq2seq fallback models on dictionaries using this notebook.
- Homographs: Escalate hard words like `axes bass bow lead tear wind` using BERT contextual word embeddings (CWEs) and logistic regression (LR) models (`nn.Linear` followed by sigmoid) as described in this paper. Assuming `trf=True`, BERT CWEs can be accessed via `doc._.trf_data`, see `en.py#L479`. Per-word LR models can be trained on WikipediaHomographData, llama-hd-dataset, and LLM-generated data (see the classifier sketch after this list).
- More languages: Add `ko.py`, `ja.py`, `zh.py`.
- Per-language pip installs
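For the data item, here is a minimal sketch of the compaction idea, assuming the dictionaries are plain JSON files named `us_gold.json` and `us_silver.json` (the file names and layout are assumptions): drop silver entries that duplicate gold, then serialize without indentation.

```py
# Hedged sketch of dictionary compaction; file names are assumptions.
import json

with open('us_gold.json') as f:
    gold = json.load(f)
with open('us_silver.json') as f:
    silver = json.load(f)

# Eliminate redundancy: keep a silver entry only if it differs from gold.
silver = {word: prons for word, prons in silver.items() if gold.get(word) != prons}

# Compress: no indentation, tight separators, keep phoneme characters readable.
with open('us_gold.json', 'w') as f:
    json.dump(gold, f, ensure_ascii=False, separators=(',', ':'))
with open('us_silver.json', 'w') as f:
    json.dump(silver, f, ensure_ascii=False, separators=(',', ':'))
```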
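And for the homographs item, a minimal sketch of the LR model shape described above: `nn.Linear` followed by sigmoid over a BERT CWE of the target word. The embedding dimension, the two-way label, and the training details are illustrative assumptions, not misaki code.

```py
# Hedged sketch: one per-word disambiguator over BERT contextual word embeddings.
import torch
import torch.nn as nn

class HomographLR(nn.Module):
    """Logistic regression: nn.Linear followed by sigmoid."""
    def __init__(self, embed_dim=768):
        super().__init__()
        self.linear = nn.Linear(embed_dim, 1)

    def forward(self, cwe):
        # cwe: (batch, embed_dim) CWE of the homograph, e.g. pulled from doc._.trf_data
        return torch.sigmoid(self.linear(cwe))  # probability of pronunciation B

model = HomographLR()
cwe = torch.randn(8, 768)                      # stand-in embeddings for "lead", etc.
target = torch.randint(0, 2, (8, 1)).float()   # 0 = pronunciation A, 1 = pronunciation B
loss = nn.functional.binary_cross_entropy(model(cwe), target)
loss.backward()
```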
See also:

- https://github.com/explosion/spaCy
- https://github.com/savoirfairelinux/num2words
- https://github.com/hexgrad/misaki/blob/main/EN_PHONES.md