bpe-lite 0.4.0 → 0.4.2
- package/README.md +14 -14
- package/package.json +1 -1
package/README.md
CHANGED
@@ -64,29 +64,29 @@ Vocab files are bundled in the package — no network required at runtime or ins
 
 ## Performance
 
-Benchmarked on Node v24
+Benchmarked on Node v24 (win32/x64). Benchmark command: `node --expose-gc scripts/bench.js`.
 
 **OpenAI cl100k — large text (~54 KB)**
 
-| impl | ops/s | MB/s |
-|
-| bpe-lite | **
-| ai-tokenizer |
-| js-tiktoken |
+| impl | ops/s | tokens/s | MB/s |
+|------|------:|---------:|-----:|
+| bpe-lite | **257** | **3.15M** | **13.6** |
+| ai-tokenizer | 201 | 2.46M | 10.7 |
+| js-tiktoken | 23 | 282k | 1.2 |
 
 **Anthropic — large text (~54 KB)**
 
-| impl | ops/s | MB/s |
-|
-| bpe-lite | **
-| ai-tokenizer |
+| impl | ops/s | tokens/s | MB/s |
+|------|------:|---------:|-----:|
+| bpe-lite | **257** | 3.15M | **13.6** |
+| ai-tokenizer | 253 | **4.62M** | 13.4 |
 
 **Gemini — large text (8 KB)**
 
-| impl | ops/s | MB/s | note |
-|
-| bpe-lite | **
-| ai-tokenizer | 1,
+| impl | ops/s | tokens/s | MB/s | note |
+|------|------:|---------:|-----:|------|
+| bpe-lite | **3,800** | **6.23M** | **29.7** | actual Gemma3 SPM |
+| ai-tokenizer | 1,220 | 2.01M | 9.6 | o200k BPE — different algorithm, different results |
 
 ai-tokenizer does not implement Gemini tokenization. The row above uses their o200k encoding on the same input string; it produces different token ids and counts than the Gemini tokenizer, so it is not a real comparison.
 
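The three throughput columns in the new tables are derived from one measurement: tokens/s and MB/s both follow from ops/s and the fixed input (for the cl100k row, 257 ops/s over a ~54 KB document works out to roughly the 13.6 MB/s shown, and 3.15M tokens/s ÷ 257 ops/s ≈ 12.3k tokens per encode). A minimal sketch of how such a harness can compute all three, assuming a generic `encode(text) → token ids` interface — the package's actual `scripts/bench.js` is not shown in this diff and may differ:

```ts
// bench-sketch.ts — run the compiled output with `node --expose-gc`.
// `encode` is whichever tokenizer is under test; the stand-in at the
// bottom just splits on whitespace so the sketch runs without deps.
function bench(
  name: string,
  encode: (text: string) => number[],
  text: string,
  iterations = 200,
): void {
  const bytes = Buffer.byteLength(text, "utf8");

  // Warm-up runs so JIT compilation does not skew the timed loop.
  for (let i = 0; i < 20; i++) encode(text);

  // With --expose-gc, global.gc is available; collecting here keeps
  // garbage from the warm-up out of the measured interval.
  (globalThis as { gc?: () => void }).gc?.();

  const start = process.hrtime.bigint();
  let tokensPerOp = 0;
  for (let i = 0; i < iterations; i++) tokensPerOp = encode(text).length;
  const seconds = Number(process.hrtime.bigint() - start) / 1e9;

  const opsPerSec = iterations / seconds;
  console.log(
    `${name}: ${opsPerSec.toFixed(0)} ops/s, ` +
      `${((opsPerSec * tokensPerOp) / 1e6).toFixed(2)}M tokens/s, ` +
      `${((opsPerSec * bytes) / 1e6).toFixed(1)} MB/s`,
  );
}

// Stand-in tokenizer: one "token id" per whitespace-separated chunk.
const fakeEncode = (t: string): number[] => t.split(/\s+/).map((_, i) => i);
bench("whitespace stand-in", fakeEncode, "lorem ipsum dolor sit amet ".repeat(2000));
```

Reporting tokens/s alongside MB/s matters for exactly the reason the Gemini caveat gives: two tokenizers can consume the same bytes at similar MB/s while emitting very different token counts, as in the Anthropic rows (3.15M vs 4.62M tokens/s at nearly identical MB/s).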