clipperhouse/uax29

A tokenizer based on Unicode text segmentation (UAX #29), for Go. Split graphemes, words, sentences.


This package tokenizes (splits) words, sentences and graphemes, based on Unicode text segmentation (UAX #29), for Unicode version 15.0.0. Details and usage are in the respective packages:

uax29/graphemes

uax29/words

uax29/phrases

uax29/sentences
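
All four packages share the same iterator-style API shown in the quick start below. As an illustration, here's a minimal sketch of counting grapheme clusters (user-perceived characters); it assumes graphemes.FromString mirrors the FromString/Next/Value pattern of the words package.

package main

import (
	"fmt"

	"github.com/clipperhouse/uax29/v2/graphemes"
)

func main() {
	// "👍🏼" (thumbs up + skin tone) is a single grapheme cluster,
	// though it is two runes and eight UTF-8 bytes.
	g := graphemes.FromString("Hello, 世界 👍🏼")

	count := 0
	for g.Next() {
		count++ // each iteration yields one user-perceived character
	}
	fmt.Println("graphemes:", count)
}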

Why tokenize?

Any time our code operates on individual words, we are tokenizing. Often, we do it ad hoc, such as splitting on spaces, which gives inconsistent results. The Unicode standard is better: it is multi-lingual, and handles punctuation, special characters, etc.
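
For example, splitting on spaces leaves punctuation glued to words ("Hello," as one token), while the UAX #29 rules are multilingual and place boundaries around punctuation. A rough sketch of the difference, using the words package from the quick start below:

package main

import (
	"fmt"
	"strings"

	"github.com/clipperhouse/uax29/v2/words"
)

func main() {
	text := "Hello, 世界. Nice dog! 👍🐶"

	// Ad hoc: whitespace splitting keeps punctuation attached,
	// e.g. "Hello," and "世界." come out as single tokens.
	fmt.Println(strings.Fields(text))

	// UAX #29: word boundaries separate punctuation from letters.
	tokens := words.FromString(text)
	for tokens.Next() {
		fmt.Printf("%q ", tokens.Value())
	}
	fmt.Println()
}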

Uses

The uax29 module has 4 tokenizers, from coarsest to finest: sentences → phrases → words → graphemes. Words and graphemes are the most common uses.

You might use words for inverted indexes, full-text search, TF-IDF, BM25, embeddings, etc.

If you're doing embeddings, the definition of “meaningful unit” will depend on your application. You might choose sentences, phrases, words, or a combination.
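
As a sketch of the search use case, here's a naive term-frequency count built on the words iterator. The lowercasing and the letter-or-digit filter (to skip whitespace and punctuation tokens) are illustrative choices, not part of this package.

package main

import (
	"fmt"
	"strings"
	"unicode"

	"github.com/clipperhouse/uax29/v2/words"
)

// hasLetterOrDigit is a naive filter to skip whitespace and punctuation tokens.
func hasLetterOrDigit(s string) bool {
	for _, r := range s {
		if unicode.IsLetter(r) || unicode.IsDigit(r) {
			return true
		}
	}
	return false
}

func main() {
	doc := "The quick brown fox jumps over the lazy dog. The dog sleeps."

	freq := map[string]int{}
	tokens := words.FromString(doc)
	for tokens.Next() {
		t := tokens.Value()
		if !hasLetterOrDigit(t) {
			continue // skip spaces, periods, etc.
		}
		freq[strings.ToLower(t)]++
	}

	fmt.Println(freq) // "the" counts 3, "dog" counts 2, and so on
}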

Conformance

We use the official Unicode test suites.

Quick start

go get github.com/clipperhouse/uax29/v2/words

package main

import (
	"fmt"

	"github.com/clipperhouse/uax29/v2/words"
)

func main() {
	text := "Hello, 世界. Nice dog! 👍🐶"

	tokens := words.FromString(text)

	for tokens.Next() {                 // Next() returns true until end of data
		fmt.Println(tokens.Value())     // Do something with the current token
	}
}
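
If you'd rather have the tokens in a slice for later processing, collecting them is straightforward; this sketch uses only the Next and Value calls shown above.

package main

import (
	"fmt"

	"github.com/clipperhouse/uax29/v2/words"
)

func main() {
	text := "Hello, 世界. Nice dog! 👍🐶"

	tokens := words.FromString(text)

	var all []string
	for tokens.Next() {
		all = append(all, tokens.Value())
	}
	fmt.Println(len(all), "tokens:", all)
}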

See also

jargon, a text pipelines package for CLI and Go, which consumes this package.

Prior art

blevesearch/segment

rivo/uniseg

Other language implementations

C# (also by me)

JavaScript

Rust

Java

Python
