llm-chat-msg-compressor 1.0.1 → 1.0.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +9 -0
- package/package.json +2 -2
package/README.md
CHANGED
@@ -9,6 +9,7 @@ Intelligent JSON optimizer for LLM APIs. Automatically reduces token usage by se
 ## Features
 
 - **🧠 Intelligent**: Analyzes payload structure to pick the best strategy
+- **⚡ High Performance**: Optimized for low-latency with single-pass analysis and zero production dependencies
 - **📉 Efficient**: Saves 10-40% input tokens on average
 - **✅ Safe**: Full restoration of original data (semantic equality)
 - **🔌 Easy**: Simple `optimize()` and `restore()` API
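The features added above promise a reversible `optimize()`/`restore()` round-trip with semantic equality. A minimal self-contained sketch of what that guarantee means — this is a mock built for illustration, not the package's actual implementation; everything except the `optimize`/`restore` names is invented:

```typescript
// Hypothetical sketch of a reversible key-shortening round-trip, the kind of
// "semantic equality" guarantee the README describes. Not the library's code.
function optimize(messages: { role: string; content: string }[]) {
  // Map verbose keys to one-letter keys; restore() reverses the mapping.
  const compressed = messages.map((m) => ({ r: m.role, c: m.content }));
  return { compressed };
}

function restore(payload: { compressed: { r: string; c: string }[] }) {
  return payload.compressed.map((m) => ({ role: m.r, content: m.c }));
}

const original = [{ role: "user", content: "Hello" }];
const roundTripped = restore(optimize(original));
console.log(JSON.stringify(roundTripped) === JSON.stringify(original)); // true
```

The real package compresses far more aggressively (per its README, 10-40% of input tokens), but the invariant sketched here — restore(optimize(x)) is semantically equal to x — is the one its "Safe" feature advertises.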
@@ -71,6 +72,14 @@ By default, the library is **Safe-by-Default**. It preserves all data types (inc
 
 If you need maximum compression and your LLM can handle `1`/`0` instead of `true`/`false`, you can enable `unsafe: true`.
 
+## Performance
+
+The library is designed for high-throughput environments:
+
+- **Zero-Stringify Analysis**: Estimates payload size during traversal to avoid memory spikes.
+- **Lazy Detection**: Decompression auto-detects strategies using targeted marker searches instead of full-string scans.
+- **Memory Efficient**: Uses optimized loops and reuses strategy instances to minimize garbage collection.
+
 ## Contributing
 
 Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines and [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) for our code of conduct.
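The new Performance section mentions "zero-stringify" size estimation: computing how large a payload's JSON would be during a single traversal, without materializing the string. A rough sketch of that idea — not the library's code; `estimateJsonSize` and its structure are invented for illustration:

```typescript
// Hypothetical sketch of zero-stringify analysis: sum the characters that
// JSON.stringify would emit while walking the value, never building the
// full string. Approximate for edge cases (e.g. undefined object values).
function estimateJsonSize(value: unknown): number {
  if (value === null) return 4; // "null"
  if (typeof value === "boolean") return value ? 4 : 5; // "true"/"false"
  if (typeof value === "number") return String(value).length;
  if (typeof value === "string") return JSON.stringify(value).length; // quotes + escapes
  if (Array.isArray(value)) {
    // brackets + commas + elements
    const inner = value.reduce<number>((sum, v) => sum + estimateJsonSize(v), 0);
    return 2 + Math.max(0, value.length - 1) + inner;
  }
  if (typeof value === "object") {
    // braces + commas + quoted key + colon + value, per entry
    const entries = Object.entries(value as Record<string, unknown>);
    const inner = entries.reduce(
      (sum, [k, v]) => sum + JSON.stringify(k).length + 1 + estimateJsonSize(v),
      0
    );
    return 2 + Math.max(0, entries.length - 1) + inner;
  }
  return 0; // undefined/functions are dropped by JSON.stringify
}

const payload = { role: "user", content: "Hello", done: true };
console.log(estimateJsonSize(payload) === JSON.stringify(payload).length); // true
```

The appeal of this approach, as the changelog bullets suggest, is that deciding whether a payload is worth compressing never allocates a second copy of it.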
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "llm-chat-msg-compressor",
-  "version": "1.0.1",
+  "version": "1.0.3",
   "description": "Intelligent JSON compression for LLM API optimization",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@@ -38,7 +38,7 @@
   "bugs": {
     "url": "https://github.com/Sridharvn/llm-chat-msg-compressor/issues"
   },
-  "homepage": "https://github.
+  "homepage": "https://sridharvn.github.io/llm-compressor-ui/",
   "engines": {
     "node": ">=18.0.0"
   },