
Conversation

FFengIll

Some models may not provide vocab.txt, yet the tokenizer must still work after parsing tokenizer.json (and rebuilding the vocab internally).

So we can simply load the vocab from the tokenizer, with a re-sort by token id.
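A minimal sketch of the re-sort step, assuming a tokenizers-style `get_vocab()` that returns an unordered token → id mapping (the dict literal below is a hypothetical stand-in for a real tokenizer):

```python
# Hypothetical stand-in for tokenizer.get_vocab(): an unordered
# mapping of token -> id (iteration order is not guaranteed to
# follow the ids).
vocab = {"world": 2, "[CLS]": 0, "hello": 1}

# Re-sort by token id so that vocab_list[i] is the token with id i,
# reproducing the order a vocab.txt file would have had.
vocab_list = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
print(vocab_list)  # ['[CLS]', 'hello', 'world']
```

This recovers the same ordered list the converter previously read line-by-line from vocab.txt.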

@@ -22,8 +22,6 @@
with open(dir_model + "/config.json", "r", encoding="utf-8") as f:
hparams = json.load(f)

with open(dir_model + "/vocab.txt", "r", encoding="utf-8") as f:
Author

vocab.txt may not exist.

vocab_list = []

# print(tokenizer.get_vocab())
vocab = tokenizer.get_vocab()
Author

The tokenizer provides a good implementation for getting the vocab.
