ToyNLP

Implementing classic NLP models from scratch with clean code and easy-to-understand architecture.

This library is for educational purposes only and is not optimized for production use. It may currently contain bugs, so feel free to contribute and report issues.

So far we have only done simple testing, which is not enough; much more rigorous testing is planned. We will also add more docs so you can run the models easily, and more playgrounds for experimenting with the models and looking inside their implementations.
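
toynlp is published on PyPI (per the package badges). Assuming the distribution name matches the repository name, installation should be as simple as:

```bash
pip install toynlp
```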

Models

8 important NLP models ranging from 2003 to 2018:

| Model & Paper | Code | Doc (EN) | Blog (ZH) |
| --- | --- | --- | --- |
| NNLM (2003), JMLR | Code | Coming soon | Coming soon |
| Word2Vec (2013), arXiv | Code | Coming soon | Coming soon |
| Seq2Seq (2014), arXiv | Code | Coming soon | Coming soon |
| Attention (2014), arXiv | Code | Coming soon | Coming soon |
| fastText (2016), arXiv | Code | Coming soon | Coming soon |
| Transformer (2017), arXiv | Code | Coming soon | Coming soon |
| GPT (2018), OpenAI | Code | Doc | Coming soon |
| BERT (2018), arXiv | Code | Doc | Blog |
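
To give a feel for the "from scratch" style, here is a minimal sketch of an NNLM-like model in PyTorch. This is an illustration only, not toynlp's actual code; the class name, method signatures, and dimensions are made up for the example.

```python
import torch
import torch.nn as nn


class ToyNNLM(nn.Module):
    """Minimal language model in the spirit of Bengio et al. (2003):
    embed a fixed-size context window, pass it through one tanh layer,
    and predict the next token over the vocabulary."""

    def __init__(self, vocab_size=1000, context_size=4, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(context_size * embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context_ids):
        # context_ids: (batch, context_size) token indices
        x = self.embed(context_ids).flatten(1)  # (batch, context_size * embed_dim)
        h = torch.tanh(self.hidden(x))          # (batch, hidden_dim)
        return self.out(h)                      # (batch, vocab_size) logits


model = ToyNNLM()
logits = model(torch.randint(0, 1000, (2, 4)))  # fake batch of 2 contexts
print(logits.shape)  # torch.Size([2, 1000])
```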

FAQ

I found differences between the implementations in toynlp and the original papers.

Yes, there are some differences. The goal of toynlp is to provide simple, educational implementations of these models, which may not include all the optimizations and features of the original papers.

The reason is that I want to focus on the core ideas behind each model rather than getting bogged down in implementation details, especially when the original papers introduce complexities that are not essential for understanding their main contributions.
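
For a concrete example of what "core idea" means here: the heart of the Transformer is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, and a toy implementation might keep exactly that while dropping paper details such as masking, dropout, and multi-head plumbing. The sketch below is illustrative, not toynlp's actual code:

```python
import math

import torch


def scaled_dot_product_attention(q, k, v):
    """Core attention idea from "Attention Is All You Need":
    weight the values by the softmax of scaled query-key scores."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (batch, seq, seq)
    return torch.softmax(scores, dim=-1) @ v                  # (batch, seq, d_k)


q = k = v = torch.randn(2, 5, 16)  # (batch, seq_len, d_k)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 16])
```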

However, I do plan to add docs for each model that clarify these differences and give guidance on using the implementations effectively. Let's first make it work, then make it better.

Where are GPT-2 and other LLMs?

Well, they're in toyllm! I separated the models into two libraries: toynlp for traditional "small" NLP models, and toyllm for LLMs, which are typically larger and more complex.

Like the "toy" style, is there anything else?

Glad you asked! The "toy" style is all about simplicity and educational value. Besides toynlp and toyllm, we have two other toys: toyml for traditional machine learning models, and toyrl for deep reinforcement learning models.