Fast tokenizers optimized for research and production. The Rust-based implementation tokenizes 1 GB of text in under 20 seconds. Supports BPE, WordPiece, and Unigram algorithms. Train custom vocabularies, track character-level alignments, and handle padding/truncation. Integrates seamlessly with transformers. Use when you need high-performance tokenization or custom tokenizer training.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install huggingface-tokenizers@zechenzhangAGI/AI-research-SKILLs
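
As a quick orientation, the sketch below uses the library's Python bindings to train a small BPE tokenizer, enable padding/truncation, and encode a sentence with offset alignments. The corpus file `corpus.txt`, the vocabulary size, and the special-token list are illustrative assumptions, not values fixed by this skill.

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build a BPE tokenizer with whitespace pre-tokenization.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Train a custom vocabulary from plain-text files.
# "corpus.txt", the vocab size, and the special tokens are placeholder choices.
trainer = BpeTrainer(vocab_size=30_000,
                     special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)

# Padding and truncation are handled by the tokenizer itself.
tokenizer.enable_padding(pad_id=tokenizer.token_to_id("[PAD]"), pad_token="[PAD]")
tokenizer.enable_truncation(max_length=128)

# Encodings keep character-level alignments (offsets) for each token.
encoding = tokenizer.encode("Hello, y'all! How are you?")
print(encoding.tokens)   # subword strings
print(encoding.ids)      # vocabulary ids
print(encoding.offsets)  # (start, end) character spans in the input

# Save the trained tokenizer to a single JSON file.
tokenizer.save("tokenizer.json")
```

The saved `tokenizer.json` can typically be loaded in transformers via `PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")` to reuse the custom vocabulary in a model pipeline.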