@cognisos/liminal 2.4.2 → 2.5.0

package/README.md ADDED
# @cognisos/liminal

**Transparent LLM context compression proxy.** Liminal sits between your AI coding tools and the LLM API, compressing context to save tokens, reduce costs, and extend effective context windows — all without changing your workflow.

## Install

```bash
npm install -g @cognisos/liminal
```

## Quick Start

```bash
liminal init    # Guided setup — auth, tool detection, config
liminal start   # Start the compression proxy
liminal         # Launch the TUI dashboard
```

## Features

- **Zero-config compression** — Automatically detects and routes traffic from Claude Code, Codex, Cursor, and OpenAI-compatible tools
- **TUI dashboard** — Run `liminal` to launch a full-screen live dashboard with stats, config, and logs
- **Setup wizard** — 5-step guided setup with verification and error recovery
- **Stats tracking** — Session and all-time metrics covering token savings, context extension, and cost estimates
- **Cursor hooks** — Transparent file compression via preToolUse hooks (no sudo, no TLS hacks)
- **Multi-session** — Concurrent session management with circuit breakers and graceful degradation
- **Zero UI dependencies** — All terminal rendering uses raw ANSI codes

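The savings metrics above can be understood with simple arithmetic. The sketch below is illustrative only: the formulas and the per-token price are assumptions for the example, not Liminal's documented internals.

```python
# Illustrative arithmetic behind token-savings / context-extension /
# cost-estimate style metrics. The formulas and the price constant are
# assumptions for this example, not Liminal's actual implementation.

def savings_metrics(original_tokens: int, compressed_tokens: int,
                    usd_per_million_tokens: float = 3.0) -> dict:
    saved = original_tokens - compressed_tokens
    return {
        "tokens_saved": saved,
        "savings_pct": round(100 * saved / original_tokens, 1),
        # How much more context fits in the same window after compression.
        "context_extension": round(original_tokens / compressed_tokens, 2),
        "est_cost_saved_usd": round(saved * usd_per_million_tokens / 1e6, 4),
    }

print(savings_metrics(200_000, 120_000))
```
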
## Commands

```
liminal                                Launch TUI dashboard
liminal init                           Guided setup wizard
liminal start [-d] [--port PORT]       Start the compression proxy
liminal stop                           Stop the proxy
liminal status                         Quick health check
liminal stats [--json]                 Compression metrics & savings
liminal config [--set k=v] [--get k]   View or edit configuration
liminal logs [--follow] [--lines N]    View proxy logs
liminal setup cursor [--teardown]      Install Cursor compression hooks
liminal login                          Log in or create an account
liminal logout                         Log out
liminal trust-ca                       Install CA cert (TLS intercept)
liminal untrust-ca                     Remove CA cert
liminal uninstall                      Remove all Liminal configuration
```

## TUI Dashboard

Run `liminal` with no arguments to launch the interactive dashboard:

- **Dashboard** — Live daemon health, tool routing status, session metrics, recent activity
- **Stats** — Token savings, cost impact, context extension (session + all-time)
- **Config** — Current configuration at a glance
- **Logs** — Colorized live tail of daemon logs

Navigate with arrow keys or Tab. Press `q` to exit.

## How It Works

1. **Proxy** — Liminal runs a local HTTP proxy (default port 3141)
2. **Intercept** — Your AI tool sends API requests through the proxy
3. **Compress** — RSC (Recursive Semiotic Computation) normalizes and compresses the context
4. **Forward** — The compressed request goes to the upstream LLM API
5. **Learn** — Patterns are learned over time to improve compression

Supported protocols: Anthropic Messages API, OpenAI Chat Completions, OpenAI Responses API.

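The compress step honors two documented settings: a minimum size below which compression is skipped (`compressionThreshold`) and a time limit after which the original request is forwarded unchanged (`latencyBudgetMs`). A simplified sketch of that decision, where `compress_fn` stands in for the real RSC pipeline and the control flow is inferred from the documented settings rather than Liminal's source:

```python
import concurrent.futures

# Simplified compress-or-pass-through decision. `compress_fn` is a
# stand-in for the actual compression pipeline; the threshold and
# timeout behavior mirror the documented config keys, as an assumption.

def maybe_compress(context: str, compress_fn,
                   compression_threshold: int = 100,   # min tokens to compress
                   latency_budget_ms: int = 10_000) -> str:
    approx_tokens = len(context) // 4  # rough chars-per-token heuristic
    if approx_tokens < compression_threshold:
        return context  # too small to be worth compressing
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(compress_fn, context)
        try:
            # Fall back to the original context if compression is too slow.
            return future.result(timeout=latency_budget_ms / 1000)
        except concurrent.futures.TimeoutError:
            return context
```

The fallback keeps the proxy transparent: a slow or failed compression degrades to forwarding the request unmodified rather than blocking the tool.
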
## Configuration

Config is stored at `~/.liminal/config.json`. Key settings:

| Key | Default | Description |
|-----|---------|-------------|
| `port` | `3141` | Proxy listen port |
| `compressionThreshold` | `100` | Minimum tokens before compression is attempted |
| `learnFromResponses` | `true` | Learn patterns from LLM responses |
| `latencyBudgetMs` | `10000` | Max compression time (ms) before falling back |
| `enabled` | `true` | Global compression toggle |

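With the documented keys and defaults, a `~/.liminal/config.json` might look like this (values shown are the defaults from the table above):

```json
{
  "port": 3141,
  "compressionThreshold": 100,
  "learnFromResponses": true,
  "latencyBudgetMs": 10000,
  "enabled": true
}
```

Individual keys can be read or changed without editing the file, using the documented flags: `liminal config --get port` or `liminal config --set port=8080`.
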
## Requirements

- Node.js >= 18.0.0
- A Cognisos account (created during `liminal init`)

## License

MIT