This package tokenizes (splits) words, sentences, phrases, and graphemes, based on Unicode text segmentation (UAX #29), for Unicode version 15.0.0. Details and usage are in the respective packages (words, sentences, phrases, graphemes).
Any time our code operates on individual words, we are tokenizing. Often, we do it ad hoc, such as splitting on spaces, which gives inconsistent results. The Unicode standard is better: it is multi-lingual, and handles punctuation, special characters, etc.
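For a concrete look at the ad hoc approach, here is a small, standard-library-only sketch: splitting on spaces leaves punctuation glued to the words, and does nothing for text (like the Chinese below) that doesn't use spaces at all.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Ad hoc tokenization: split on spaces.
	// Punctuation stays attached ("Hello,", "dog!"),
	// and the Chinese text isn't segmented at all.
	parts := strings.Split("Hello, 世界. Nice dog!", " ")
	for _, p := range parts {
		fmt.Printf("%q\n", p) // "Hello,", then "世界.", then "Nice", then "dog!"
	}
}
```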
The uax29 module has four tokenizers, in decreasing granularity: sentences → phrases → words → graphemes. Words and graphemes are the most common uses.
You might use words for inverted indexes, full-text search, TF-IDF, BM25, embeddings, etc.
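For example, a simple term-frequency count (the raw material for an inverted index or TF-IDF) might look like the sketch below. It uses the words API from the quick start later in this README, and assumes Value() returns the token as a string when starting from FromString. UAX #29 boundaries cover the entire input, so whitespace and punctuation come back as tokens too; the wordlike helper here (invented for this sketch) filters them out.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"

	"github.com/clipperhouse/uax29/v2/words"
)

// wordlike reports whether a token contains at least one letter or digit.
// It's a helper invented for this sketch, used to drop whitespace and
// punctuation tokens.
func wordlike(token string) bool {
	return strings.IndexFunc(token, func(r rune) bool {
		return unicode.IsLetter(r) || unicode.IsDigit(r)
	}) >= 0
}

func main() {
	freq := make(map[string]int)

	tokens := words.FromString("Hello, 世界. Nice dog! Nice.")
	for tokens.Next() {
		token := tokens.Value()
		if !wordlike(token) {
			continue // skip whitespace and punctuation segments
		}
		freq[strings.ToLower(token)]++
	}

	fmt.Println(freq) // term frequencies, e.g. "nice" → 2
}
```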
If you're doing embeddings, the definition of “meaningful unit” will depend on your application. You might choose sentences, phrases, words, or a combination.
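For instance, if sentences are your unit, the sentences package in this module can be swapped in. The sketch below assumes it mirrors the FromString / Next / Value shape of the words quick start later in this README; check the package docs to confirm.

```go
package main

import (
	"fmt"

	"github.com/clipperhouse/uax29/v2/sentences"
)

func main() {
	text := "Hello, 世界. Nice dog! 👍🐶"

	// Collect sentences as the units you might hand to an embedding model.
	var units []string
	tokens := sentences.FromString(text)
	for tokens.Next() {
		units = append(units, tokens.Value())
	}

	fmt.Println(units)
}
```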
We use the official Unicode test suites.
go get "github.com/clipperhouse/uax29/v2/words"
import "github.com/clipperhouse/uax29/v2/words"
text := "Hello, 世界. Nice dog! 👍🐶"
tokens := words.FromString(text)
for tokens.Next() { // Next() returns true until end of data
fmt.Println(tokens.Value()) // Do something with the current token
}jargon, a text pipelines package for CLI and Go, which consumes this package.
There is also a C# implementation (also by me).