Tokenizer for language models.
Tokenize text for Llama, Gemini, GPT-4, DeepSeek, Mistral and many others; on the web, on the client, and on any platform.
```rust
use kitoken::Kitoken;

let encoder = Kitoken::from_file("models/llama4.kit")?;

let tokens = encoder.encode("Your future belongs to me.", true)?;
let string = String::from_utf8(encoder.decode(&tokens, true)?)?;

assert!(string == "Your future belongs to me.");
```

Kitoken is a fast and versatile tokenizer for language models, compatible with SentencePiece, HuggingFace Tokenizers, OpenAI Tiktoken and Mistral Tekken, and supporting BPE, Unigram and WordPiece tokenization.
- **Fast and efficient tokenization**
  Faster than most other tokenizers in both common and uncommon scenarios; see the benchmarks for comparisons with different datasets.
- **Runs in all environments**
  Native in Rust and with bindings for Web, Node and Python; see kitoken.dev for a web demo.
- **Supports input and output processing**
  Including unicode-aware normalization, pre-tokenization and post-processing options.
- **Compact data format**
  Definitions are stored in an efficient binary format and without a merge list.
See also kitoken-cli for using Kitoken on the command line.
Kitoken can load and convert many existing tokenizer formats. Every supported format is tested against the original implementation across a variety of inputs to ensure correctness and compatibility.
**Note**
Most models on Hugging Face are supported. Just take the `tokenizer.json` or `spiece.model` and load it into Kitoken.
Kitoken aims to be output-identical with existing implementations for all models. See the notes below for differences in specific cases.
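For example, a `tokenizer.json` downloaded from Hugging Face can be loaded directly; the path below is a placeholder for wherever the file was saved:

```rust
use kitoken::Kitoken;

// Loading works the same way for a SentencePiece `spiece.model`.
let encoder = Kitoken::from_file("models/tokenizer.json")?;
let tokens = encoder.encode("Hello world!", true)?;
```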
let encoder = Kitoken::from_file("models/gemma.model")?;Kitoken can convert and initialize with SentencePiece models in BPE and Unigram format.
- `BPE` models are converted to `BytePair` definitions in character mode. A merge list is generated and sorted using the token scores, which is then used to sort the vocabulary by merge priority. The scores and the merge list are then discarded (a simplified sketch of this ranking step follows below).
- `Unigram` models are converted to `Unigram` definitions, retaining the token scores.
If the model does not contain a trainer definition, Unigram is assumed as the default encoding mode. Normalization options and the unicode normalization scheme are taken from the contained normalizer definition and converted to the respective Kitoken configurations.
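To illustrate the `BPE` conversion mentioned above, here is a simplified sketch of the ranking step; it assumes a higher score means an earlier merge, and is not Kitoken's actual implementation:

```rust
/// Simplified sketch, not Kitoken's internals: sort pieces by descending
/// score to recover their merge priority, then discard the scores.
fn rank_by_score(mut pieces: Vec<(Vec<u8>, f32)>) -> Vec<Vec<u8>> {
    pieces.sort_by(|a, b| b.1.total_cmp(&a.1));
    pieces.into_iter().map(|(piece, _score)| piece).collect()
}
```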
**Notes**

- SentencePiece uses different `nfkc` normalization rules in the `nmt_nfkc` and `nmt_nfkc_cf` schemes than during regular `nfkc` normalization. This difference is not entirely additive and prevents the normalization of the fullwidth tilde `～` to `~`. Kitoken uses the regular `nfkc` normalization rules for `nmt_nfkc` and `nmt_nfkc_cf` and normalizes `～` to `~`.
- SentencePiece's implementation of Unigram merges pieces with the same merge priority in a different order depending on preceding non-encodable pieces. For example, with `xlnet_base_cased`, SentencePiece encodes `.nnn` and `Զnnn` as `.., 8705, 180`, but `ԶԶnnn` as `.., 180, 8705`. Kitoken always merges pieces with the same merge priority in the same order, resulting in `.., 180, 8705` for either case in the example and matching the behavior of Tokenizers.
let encoder = Kitoken::from_file("models/llama4.json")?;Kitoken can convert and initialize with HuggingFace Tokenizers definitions for BPE, Unigram and WordPiece models.
- `BPE` models are converted to `BytePair` definitions. The included merge list is used to sort the vocabulary by merge priority and is then discarded.
- `Unigram` models are converted to `Unigram` definitions, retaining the token scores.
- `WordPiece` models are converted to `WordPiece` definitions.
Normalization, pre-tokenization, post-processing and decoding options contained in the definition are converted to the respective Kitoken configurations.
Some of the normalization, post-processing and decoding options used by Tokenizers exist to convert between alternative token-byte representations during encoding and decoding. Kitoken always stores and operates on tokens as byte sequences, and uses these options to pre-normalize the vocabulary during conversion instead.
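One such representation is the byte-to-character mapping used by Tokenizers' `ByteLevel` components, where every byte is represented by a printable character (a space becomes `Ġ`, for example). The following sketch builds that table; it illustrates the representation itself and is not Kitoken's conversion code:

```rust
/// GPT-2 style byte-to-character table: printable bytes map to themselves,
/// all other bytes are remapped to consecutive code points from U+0100 up.
/// Space (0x20) ends up as 'Ġ' (U+0120); inverting the table recovers the
/// raw bytes a token represents.
fn byte_level_table() -> [char; 256] {
    let mut table = ['\0'; 256];
    let mut next = 0u32;
    for b in 0u32..256 {
        let printable = (0x21..=0x7E).contains(&b)
            || (0xA1..=0xAC).contains(&b)
            || (0xAE..=0xFF).contains(&b);
        table[b as usize] = if printable {
            char::from_u32(b).unwrap()
        } else {
            let c = char::from_u32(0x100 + next).unwrap();
            next += 1;
            c
        };
    }
    table
}
```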
**Notes**

- When using a `BPE` definition with an incomplete vocabulary and without an `unk` token, Tokenizers skips over non-encodable pieces and attempts to merge the surrounding ones. Kitoken always considers non-encodable pieces as un-mergeable and encodes the surrounding pieces individually. This can affect models that exploit the behavior of Tokenizers with a deliberately restricted vocabulary.
- Tokenizers normalizes inputs character-by-character, while Kitoken normalizes inputs as a whole. This can result in differences during case-folding in some cases. For example, the Greek letter `Σ` has two lowercase forms: `σ` for within-word and `ς` for end-of-word use. Tokenizers will always lowercase `Σ` to `σ`, while Kitoken will lowercase it to either depending on the context, as illustrated below.
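The sigma difference is easy to reproduce with Rust's standard case-folding, which applies the context-sensitive final-sigma rule when lowercasing a whole string (this illustrates the Unicode behavior, not either library's code):

```rust
// Lowercasing the whole string applies the final-sigma rule, while
// lowercasing character-by-character always produces 'σ'.
let word = "ΟΔΥΣΣΕΥΣ";
let whole: String = word.to_lowercase();
let per_char: String = word.chars().flat_map(char::to_lowercase).collect();
assert_eq!(whole, "οδυσσευς"); // word-final 'Σ' becomes 'ς'
assert_eq!(per_char, "οδυσσευσ"); // every 'Σ' becomes 'σ'
```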
let encoder = Kitoken::from_file("models/o200k_base.tiktoken")?;Tiktoken is a BPE tokenizer used by OpenAI for GPT-3 and newer models and uses BytePair tokenization in byte mode.
Tiktoken definitions contain a sorted vocabulary of base64 encoded bytes and corresponding token ids without any additional metadata. Special tokens and the split regex are expected to be provided separately, but will be inferred from the data for common models including GPT-3, GPT-4 and GPT-4o. For other models, or depending on the data and requirements, these values can be adjusted manually.
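The layout is simple enough to parse by hand; a minimal sketch follows (the function name is illustrative, the `base64` crate is assumed, and Kitoken's loader is not implemented this way):

```rust
use std::collections::HashMap;
use base64::{engine::general_purpose::STANDARD, Engine as _};

/// Each non-empty line of a `.tiktoken` file is `<base64 bytes> <token id>`.
fn parse_tiktoken(data: &str) -> Option<HashMap<Vec<u8>, u32>> {
    let mut vocab = HashMap::new();
    for line in data.lines().filter(|line| !line.is_empty()) {
        let (token, id) = line.split_once(' ')?;
        vocab.insert(STANDARD.decode(token).ok()?, id.parse().ok()?);
    }
    Some(vocab)
}
```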
let encoder = Kitoken::from_file("models/mistral.json")?;Tekken is a BPE tokenizer based on Tiktoken, used by Mistral for NeMo and newer models and uses BytePair tokenization in byte mode.
Tekken definitions contain a sorted vocabulary of base64 encoded bytes and corresponding token ids, as well as metadata including the split regex and special tokens.
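For reference, a rough sketch of the shape of such a definition; the field names follow Mistral's published `tekken.json` files and should be treated as an assumption, not as Kitoken's loader types:

```rust
use serde::Deserialize;

// Assumed shape of a Tekken definition file; unknown fields are ignored.
#[derive(Deserialize)]
struct TekkenFile {
    config: TekkenConfig,
    vocab: Vec<TekkenVocabEntry>,
}

#[derive(Deserialize)]
struct TekkenConfig {
    pattern: String, // the split regex
    default_vocab_size: usize,
    default_num_special_tokens: usize,
}

#[derive(Deserialize)]
struct TekkenVocabEntry {
    rank: u32,           // token id
    token_bytes: String, // base64 encoded token bytes
}
```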
Kitoken uses merge-list-free variations of the BPE algorithm and a reversed variation of the Unigram algorithm. The basis for the merge-list-free BPE algorithm was inspired by Tiktoken, which has similarly good performance characteristics with common tokenization inputs. However, Kitoken can be much faster with inputs that fail to split during pre-tokenization by falling back to a priority-queue-based implementation when optimal.
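As a rough illustration of the merge-list-free idea, the following naive sketch (not Kitoken's optimized implementation) merges adjacent pieces whenever their concatenation exists in the vocabulary, lowest token id first:

```rust
use std::collections::HashMap;

/// Naive rank-based BPE: repeatedly merge the adjacent pair of pieces whose
/// concatenation has the lowest id in the vocabulary, until nothing merges.
fn bpe_encode(vocab: &HashMap<Vec<u8>, u32>, input: &[u8]) -> Vec<u32> {
    // Start with one piece per input byte.
    let mut pieces: Vec<Vec<u8>> = input.iter().map(|b| vec![*b]).collect();
    loop {
        // Find the adjacent pair with the lowest merge rank, if any.
        let mut best: Option<(usize, u32)> = None;
        for i in 0..pieces.len().saturating_sub(1) {
            let mut pair = pieces[i].clone();
            pair.extend_from_slice(&pieces[i + 1]);
            if let Some(&rank) = vocab.get(&pair) {
                if best.map_or(true, |(_, r)| rank < r) {
                    best = Some((i, rank));
                }
            }
        }
        let Some((i, _)) = best else { break };
        let next = pieces.remove(i + 1);
        pieces[i].extend_from_slice(&next);
    }
    // Non-encodable pieces are simply skipped in this sketch.
    pieces.iter().filter_map(|piece| vocab.get(piece).copied()).collect()
}
```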
The core tokenization functions are optimized for multiple CPU architectures and make use of SIMD instructions where available. Kitoken also avoids memory allocations and copying of data to a great extent; most operations are performed in place and buffers are reused where possible.
Benchmarks were performed on a MacBook Pro M1 Max using each library's Python bindings with tokenizer-bench.
Llama 2 uses a SentencePiece-based tokenizer model and BytePair tokenization in character mode with byte mode fallback.
GPT-2 uses a Tokenizers-based tokenizer model and BytePair tokenization in byte mode.
- **Pride and Prejudice**
  A text document containing *Pride and Prejudice* by Jane Austen. This data is a good representation of common English-language inputs, containing a mix of short and long paragraphs.
- **UTF-8 Sequence**
  A text document containing a single-line UTF-8 sequence. This data is a good representation of inputs that might fail to split during pre-tokenization.
- **Wagahai**
  A text document containing *Wagahai wa Neko de Aru* by Natsume Sōseki. This data is a good representation of Japanese-language inputs, containing many long paragraphs.