engramx 3.3.0 → 3.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -6,14 +6,46 @@ All notable changes to engram are documented here. Format based on
 
 ## [Unreleased]
 
- ### Added — v3.3 "Cost Lens" (in progress, target: 2026-05-08)
+ ## [3.4.0] — 2026-05-02 — "Universal Spine"
+
+ The release that turns engram from a Claude Code tool into a universal context spine across every major AI coding tool. Same engram, same graph, same 89.1% reduction — now plugged into 8 IDEs out of the box.
+
+ ### Added
+
+ - **Universal init detector.** `src/setup/detect.ts` adds three new detectors: `detectCline` (probes for VS Code's Cline globalStorage), `detectZed` (probes for Zed's config and `.zed/settings.json`), `detectCodex` (probes for `~/.codex` and `AGENTS.md`). `detectAllIdes()` now returns 8 entries (was 5). Per-IDE setup hints in `wizard.ts` now include the exact MCP config snippet for Cline.
+ - **Anthropic Claude Code plugin manifest.** `plugins/anthropic-marketplace/` — submission-ready `marketplace.json`, `plugin.json`, three skills (`/engram:cost`, `/engram:query`, `/engram:mistakes`), and the MCP server config that registers `engram-serve` automatically. Verified against `code.claude.com/docs/en/plugins`.
+ - **VS Code / Cursor extension.** `extensions/vscode/` — a thin wrapper around the engramx CLI. Six commands (init, gen-mdc, gen, cost, dashboard, doctor), a status-bar entry, and two configuration settings. Compiles to a single `out/extension.js` and packs via `vsce` for OpenVSX. Works in VS Code, Cursor, and any VS Code fork.
+ - **Cline integration documented.** `docs/integrations/cline.md` — Cline supports MCP natively, so the integration is one config snippet away. Cross-linked from `docs/integrations/README.md`.
+ - **engramx-continue.** Sister npm package at `adapters/continue/` — a Continue.dev `@engram` context provider with HTTP and CLI fallback transports. Tarball verified clean (2.2 KB, 4 files).
+
+ ### Changed
+
+ - **README.md.** Top-of-file callout now leads with the May 2026 market context (Cursor pricing crisis + Claude Code rate-limit pain) so the positioning matches what users are actually searching for. IDE matrix expanded from 8 to 11. "How It Compares" rewritten as the May 2026 8-row competitive matrix (engramx vs Cursor index / Aider repo map / Cline / Continue / Mem0 / claude-mem / CartoGopher); the legacy table moved to a collapsed details block.
+ - **docs/install.html.** Title, meta description, OG title + description, hero pill, nav link, and IDE matrix all refreshed for the v3.4 framing.
+ - **GitHub repo description and topics.** Description shortened and now leads with "context spine that 10x's every AI coding session." Topics maxed out at 20, with `cline` and `universal-spine` added.
+
+ ### Tests
+ - Net-new: 3 detector tests in `tests/setup/detect.test.ts` (detectCline / detectZed / detectCodex).
+ - Total: **910 passing** (was 907 at baseline).
+
+ ### Process
+
+ This is the first release to walk the new 8-phase release ritual codified at `~/.claude/skills/engram-release/SKILL.md`. Phase 3 (public surface refresh) caught README + install.html + topics drift in one pass. Future releases follow the same checklist by default.
+
+ ### Why
+
+ Three things broke at the same time in 2026. Cursor went usage-based and people started getting $1,400 surprise bills. Anthropic tightened Claude Code limits, then quietly tested removing the product from the $20 Pro plan. AI coding fragmented into 8 IDEs with no common context layer. v3.4 puts engram into all of them — one install, one graph, every tool benefits. Audit at `~/Desktop/Projects/Engram/00-strategy/2026-05-02-strategic-audit-v34-pivot.md`.
+
+ ## [3.3.0] — 2026-05-02 — "Cost Lens"
+
+ ### Added
  - New `engram cost` subcommand: aggregates token-savings telemetry from existing `.engram/hook-log.jsonl` files across one or many project roots. Outputs a terminal table, JSON, or a weekly Markdown digest at `~/.engram/cost-report-YYYY-Www.md`.
- - New `src/cost/` module: `types.ts` (CostEvent / CostSummary / CostConfig), `aggregator.ts` (read + summarize), `formatter.ts` (one-liner / table / Markdown digest), `digest.ts` (ISO-week digest writer with idempotent file output).
- - 13 new tests in `tests/cost.test.ts` — hermetic, using tmp dirs with synthetic logs; no real engram state required.
- - USD estimate uses a configurable `inputUsdPerMillion` rate. The default $3.00/M matches Claude Sonnet 4.6 input pricing as of 2026-04-27.
+ - New `src/cost/` module: `types.ts`, `aggregator.ts`, `formatter.ts`, `digest.ts`, `instrument.ts`. Pure functions, hermetic tests, NaN-safe math.
+ - Dispatch instrumentation in `src/intercept/dispatch.ts` — every PreToolUse log entry now carries `wouldHaveRead`, `injected`, and `tokensSaved` fields when applicable.
+ - 31 new tests across `tests/cost.test.ts` and `tests/cost-instrument.test.ts`, all hermetic.
 
 ### Why
- Cost Lens is the baseline for everything in the v3.3 → v4.0 roadmap. We need a measured number that survives between releases so future features (Mesh, Vector, Bridge) can be evaluated against real-world impact, not against a single benchmark file. The PRD lives at `01-prds/03-engram-mesh-ruflo-integration-PRD.md`.
+ Cost Lens is the baseline for the v3.3 → v4.0 roadmap. Future features (Bridge, Mesh, Vector) get evaluated against real-world impact, not a single static benchmark. PRD: `01-prds/03-engram-mesh-ruflo-integration-PRD.md`.
 
 ## [3.0.2] — 2026-04-24 — "MCP Registry"
 
package/README.md CHANGED
@@ -1,181 +1,131 @@
 <p align="center">
- <img src="assets/banner-v3.png" alt="EngramX — the memory layer for AI coding agents" width="100%">
+ <img src="assets/banner-v3.png" alt="EngramX — the cached context spine for AI coding agents (v3.0 'Spine')" width="100%">
 </p>
 
+ <!-- ============================================================
+ 24-second product showcase (Hyperframes-rendered MP4 + WebM).
+ Source: docs/demos/showcase.html · scenes drive both the
+ live HTML player and this MP4. Edit scene-table.md to change.
+ If the MP4 isn't rendered yet, GitHub gracefully shows the
+ poster image and links to the live HTML player.
+ ============================================================ -->
 <p align="center">
- <strong>The memory layer that stretches every Claude session.</strong>
+ <video src="https://raw.githubusercontent.com/NickCirv/engram/main/docs/demos/showcase.mp4"
+ controls
+ muted
+ playsinline
+ poster="docs/demos/poster.svg"
+ width="100%">
+ <a href="docs/demos/showcase.html">
+ <img src="docs/demos/poster.svg" alt="engram — 24-second showcase (click to open the live HTML player)" width="100%">
+ </a>
+ </video>
+ </p>
+
+ <p align="center">
+ <sub>
+ <a href="docs/install.html"><strong>Install Page</strong></a> ·
+ <a href="docs/demos/showcase.html"><strong>Live Demo</strong></a> ·
+ <a href="docs/demos/scene-table.md"><strong>Scene Table</strong></a> ·
+ rendered with <a href="https://github.com/heygen-com/hyperframes">Hyperframes</a>
+ </sub>
+ </p>
+
+ <p align="center">
+ <a href="#install"><strong>Install</strong></a> ·
+ <a href="#quickstart"><strong>Quickstart</strong></a> ·
+ <a href="#dashboard"><strong>Dashboard</strong></a> ·
+ <a href="#benchmark"><strong>Benchmark</strong></a> ·
+ <a href="#ide-integrations"><strong>IDE Integrations</strong></a> ·
+ <a href="#http-api"><strong>HTTP API</strong></a> ·
+ <a href="#ecp-spec"><strong>ECP Spec</strong></a> ·
+ <a href="#contributing"><strong>Contributing</strong></a>
 </p>
 
 <p align="center">
 <a href="https://github.com/NickCirv/engram/actions"><img src="https://github.com/NickCirv/engram/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
 <a href="https://www.npmjs.com/package/engramx"><img src="https://img.shields.io/npm/v/engramx?color=blue" alt="npm version"></a>
- <a href="https://www.npmjs.com/package/engramx"><img src="https://img.shields.io/npm/dm/engramx?color=blue" alt="npm downloads"></a>
 <img src="https://img.shields.io/badge/license-Apache%202.0-blue" alt="License">
 <img src="https://img.shields.io/badge/node-%3E%3D20-brightgreen" alt="Node">
- <img src="https://img.shields.io/badge/tests-878%20passing-brightgreen" alt="Tests">
+ <img src="https://img.shields.io/badge/tests-910%20passing-brightgreen" alt="Tests">
 <img src="https://img.shields.io/badge/providers-9%20%2B%20plugins-blue" alt="9 Providers + plugins">
+ <img src="https://img.shields.io/badge/token%20savings-89.1%25%20measured-orange" alt="89.1% measured savings">
 <img src="https://img.shields.io/badge/native%20deps-zero-green" alt="Zero native deps">
- <a href="https://discord.gg/engramx"><img src="https://img.shields.io/badge/Discord-join-5865F2?logo=discord&logoColor=white" alt="Discord"></a>
- <a href="https://github.com/NickCirv/engram/stargazers"><img src="https://img.shields.io/github/stars/NickCirv/engram?style=social" alt="Stars"></a>
- </p>
-
- <p align="center">
- <a href="#anthropic-capped-your-week-engram-extends-it">Why</a> ·
- <a href="#install">Install</a> ·
- <a href="#per-agent-setup">Per-agent setup</a> ·
- <a href="#see-what-your-agent-has-remembered">engram remembers</a> ·
- <a href="#how-it-works">How it works</a> ·
- <a href="ARCHITECTURE.md">Architecture</a> ·
- <a href="https://discord.gg/engramx">Discord</a>
+ <img src="https://img.shields.io/badge/LLM%20cost-$0-green" alt="Zero LLM cost">
 </p>
 
 ---
 
- ## Anthropic capped your week. engram extends it.
+ ## Why this exists, May 2026
 
- In November 2025, Anthropic tightened weekly limits on Claude Pro and Max. Heavy Claude Code users now hit caps mid-week. Some by Wednesday. The honest reality nobody is naming out loud:
+ Three things broke at the same time. Cursor went usage-based and people started getting $1,400 surprise bills. Anthropic tightened Claude Code limits, then quietly tested removing it from the $20 Pro plan. Half the AI coding crowd migrated from one tool to the other, hit the new ceiling within a week, and started looking for any way to make a session last longer.
 
- > **Most of your weekly tokens are spent re-introducing yourself to an agent that forgets.**
+ Engramx is what makes the session last longer. It indexes your codebase into a local SQLite knowledge graph once. Then it intercepts file reads at the agent boundary and replaces them with a structural summary the agent already has the working memory for. Same edit, same diff, same code shipped — fewer tokens consumed in the round trip.
 
- Every Monday starts from zero. The agent re-reads the codebase. Re-asks setup questions. Repeats last week's wrong fix. Re-decides architecture you already locked in. By Friday you're rate-limited. Not because you built a lot. Because the agent never got smarter.
+ On a real 87-file repo, the measured reduction is **89.1%**. That's not a marketing number. The benchmark is committed to this repo as `bench/real-world.ts` and runs against any project you point it at. Independent migration guides ([dev.to/56kode](https://dev.to/56_kode/why-were-moving-from-cursor-to-claude-code-and-why-you-should-too-9kh), [SpectrumAI Lab](https://spectrumailab.com/blog/claude-code-vs-cursor)) cite engram as the strongest measured number in the category.
 
- engram is the memory layer that fixes that. A persistent knowledge graph, plus a mistake replay buffer, plus a provider mesh that wires in mempalace, obsidian, context7, MCP servers, and Anthropic's own auto-memory. The agent stops being single-shot. It learns from its own history.
+ Works in 8 IDEs and counting — Claude Code, Cursor, Cline, Continue.dev, Aider, Windsurf, Zed, OpenAI Codex CLI. One install, one graph, every tool benefits. Apache 2.0. Local SQLite. Nothing leaves your machine.
 
- ---
-
- ### What changes when your agent has memory
+ > **v3.4 "Universal Spine" shipped 2026-05-02** — multi-IDE detector covers 8 tools, Anthropic Claude Code plugin (`/plugin install engram`), VS Code / Cursor extension on OpenVSX, `engramx-continue` on npm, Cline integration documented. Cost Lens telemetry from v3.3.0 now feeds a weekly Markdown digest at `~/.engram/cost-report-YYYY-Www.md`. 910 tests, CI green on Ubuntu + Windows × Node 20 + 22. See [CHANGELOG.md](CHANGELOG.md) for the v3.3 + v3.4 diff.
 
- | | Without engram | With engram |
- |---|---|---|
- | **Monday** | Agent re-reads codebase from scratch (~40K tokens) | Reads structural graph (~3K tokens) |
- | **Tuesday** | Repeats Monday's wrong fix | ⚠️ Warned: *"You tried this Monday, broke parser.rs:42"* |
- | **Wednesday** | Re-decides architecture you already locked | Surfaces Monday's decision: *"We chose Saga over 2PC because…"* |
- | **Thursday** | Asks the same 5 setup questions | Pulls config from `mempalace`, `obsidian`, `context7` providers |
- | **Friday** | Cap hit by 3pm | Cap hit Sunday, if at all |
-
- Token savings (89.1% measured per Read interception, reproducible benchmark below) are the side-effect. Compounding agent intelligence is the product.
+ > **EngramX v3.0 "Spine" shipped 2026-04-24** — the biggest release before v3.4. The spine is **extensible**: any MCP server becomes an EngramX provider via a 10-line plugin file. **Pre-mortem mistake-guard** warns before you repeat a bug. **Bi-temporal mistake memory** — refactored-away mistakes stop firing. **Anthropic Auto-Memory bridge** reads Claude Code's own consolidated memory. **SSE streaming** — packets render progressively. `engram gen` dual-emits `AGENTS.md` + `CLAUDE.md` by default.
 
 ---
 
- ## Install
+ # EngramX — the cached context spine for AI coding agents
 
- ### macOS / Linux (recommended)
+ Your AI coding agent keeps re-reading the same files. Every `Read`, every `Edit`, every `cat` re-pays for context you've already paid for.
 
- ```bash
- brew install engramx
- ```
+ **EngramX is the spine.** It intercepts every file read at the tool boundary, answers from a pre-assembled context packet held in **three layers of cache** — a knowledge graph the agent has already "paid" to build, a per-provider SQLite cache of external lookups, and an in-memory LRU of recent queries — and hands the agent a single ~500-token response instead of a raw file.
 
- ### Cross-platform fallback
+ The agent gets what it needs. You stop paying for context you've already paid for. And **every plugin you add lifts the savings further** — Serena for LSP symbols, GitHub MCP for issue context, Sentry MCP for production errors, Supabase / Neon for schema. Each one closes another context leak the agent would otherwise burn tokens researching.
 
- ```bash
- npm install -g engramx
- ```
+ **Measured savings on a reproducible benchmark: 89.1%.** Not estimated. 85 of 87 real source files saved tokens. Best case: 98.4% (18,820 tokens → 306).
 
- ### Zero-dep one-liner
+ ### One command to everything
 
 ```bash
- curl -fsSL engramx.dev/install | sh
+ npm install -g engramx
+ cd ~/my-project
+ engram setup
 ```
 
- Verify: `engram --version` should show `3.x` or later. Requires Node.js 20+. Zero native deps. No build tools, no Rust, no Python, no system libs.
+ That's the install. `engram setup` runs `engram init` (builds the graph), `engram install-hook` (wires the Sentinel into your AI tool), detects your IDE, dual-emits `AGENTS.md` + `CLAUDE.md`, then runs `engram doctor` to verify everything is green. Under 30 seconds on most projects. Works in Claude Code, Cursor, Codex CLI, Windsurf, GitHub Copilot Chat, JetBrains Junie, Aider, Zed, Continue — any agent that reads `AGENTS.md` or uses MCP.
 
- > **Note:** "engram" the audio plugin and "engram" the neuroscience term are different things. We're `engramx` on npm, `engram` on the CLI. Also not [Go-Engram](https://github.com/Gentleman-Programming/engram) (a salience-gated chat memory in Go) and not DeepSeek's January 2026 "Engram" paper (research artifact, not a product).
+ The **next session** you open starts with the spine pre-loaded: project brief already in context, file reads intercepted, a live HUD showing cumulative savings, bi-temporal mistakes waiting to warn you, and any plugins you've added already answering their domains.
 
 ---
 
- ## Per-agent setup
+ ## I'm not a developer — what does this actually do?
 
- One command for your stack:
+ Short answer: **your AI coding assistant stops charging you for the same information twice.**
 
- ```bash
- engram init --agent claude # Claude Code (default)
- engram init --agent cursor # Cursor
- engram init --agent windsurf # Windsurf
- engram init --agent codex # OpenAI Codex
- engram init --agent gemini # Gemini CLI
- engram init --agent cline # Cline / Roo Code
- engram init --agent copilot # GitHub Copilot CLI
- engram init --agent kilocode # Kilo Code
- engram init --agent antigravity # Google Antigravity
- ```
+ Long answer:
 
- One run wires the right hooks, settings, and per-agent config files. Restart your AI tool. engram is live.
+ 1. You ask your AI assistant (Claude Code, Cursor, Codex, whatever) to help with a file.
+ 2. The assistant tries to read that file. Normally it reads the whole thing, pays for every byte in tokens, and throws most of it away.
+ 3. EngramX catches the read, answers with a cached summary (the 50–200 lines the agent actually needs, plus context from your git history, past mistakes, library docs, and anything else useful), and lets the agent work from that.
+ 4. Your monthly AI bill drops. Multi-hour sessions stop hitting rate limits. The agent stops re-introducing bugs you already fixed — because EngramX remembers what broke.
 
- Prefer the all-in-one bootstrap? `engram setup` runs `engram init` + `engram install-hook` + IDE detection + dual-emits `AGENTS.md` and `CLAUDE.md` + `engram doctor`. Under 30 seconds on most projects.
+ It runs on your laptop. It doesn't send your code anywhere. It's Apache 2.0. There's no account, no login, no cloud. You install it once and forget it's there.
 
- ---
+ **Want even bigger savings?** Install a plugin. Each one closes a different context leak — see [Plugins multiply the savings](#plugins-multiply-the-savings) below. Drop a 10-line `.mjs` file in `~/.engram/plugins/` and the next session uses it.
 
- ## See what your agent has remembered
+ **Want out?** Clean uninstall is one command:
 
 ```bash
- $ engram remembers
-
- 43 mistakes avoided ⚠️ surfaced before the agent could repeat them
- 127 decisions surfaced 📜 prior architectural choices recalled in context
- 18 cross-session bridges 🔗 sessions that picked up where the last one ended
- 86K tokens saved 🎟️ ~ 4.3 hours of weekly cap, reclaimed
- 7 days indexed 📅 since engram init
-
- Your subscription, stretched.
- ```
-
- Cumulative since `engram init`. Run it weekly. Share the screenshot.
-
- ---
-
- ## How it works
-
- ```
- Without engram: With engram:
-
- Claude → reads file.rs (8,000 tokens) Claude → reads file.rs
-
- engram intercepts → graph context (800 tokens)
-
- Claude sees: structure
- + last week's mistakes (⚠️ pre-mortem)
- + relevant decisions
- + git co-changes
- + cross-session memory
+ npm uninstall -g engramx # 3.0.1+ auto-runs preuninstall hook-cleanup
 ```
 
- Nine providers ship by default and every one is pluggable:
-
- | Provider | Surfaces |
- |---|---|
- | `structure` | AST-derived class/function/import graph of the project |
- | `mistakes` | What broke last week. Pre-mortem warnings before the agent re-makes the error. Bi-temporal: refactored-away mistakes stop firing. |
- | `git` | Hot files, co-change pairs, authorship signals |
- | `mempalace` | Your local semantic memory (mempalace MCP / ChromaDB) |
- | `context7` | Up-to-date library docs (Context7 MCP) |
- | `obsidian` | Your knowledge vault, queried at agent-time |
- | `anthropic-memory` | Anthropic's auto-memory bridge |
- | `mcp-client` | Any MCP server. engram talks to all of them. |
- | `lsp` | Live language-server symbols (Serena, etc.) |
-
- Add your own: drop a 10-line `.mjs` into `~/.engram/plugins/`. Validated before install.
-
- ---
-
- ## Why this exists
-
- Stateless agents are amnesiacs with PhDs. They solve the problem in front of them, then never get smarter at *your* codebase. Multiply that by Anthropic's weekly caps and every session burns tokens re-learning what last session already learned.
-
- engram is the spine that connects sessions. It does what stateless tools physically can't:
-
- 1. **Persistence.** `.engram/graph.db` survives every restart, every cap reset, every laptop reboot. Your agent gets a brain that remembers.
- 2. **Mistake memory.** Pre-mortem warnings before the agent repeats last week's error. Surfaced at the top of context, automatically.
- 3. **Provider mesh.** Runtime composition across knowledge sources you already use. mempalace, obsidian, context7, MCP servers, all wired in.
-
- Token compression is downstream of those.
+ If you installed 3.0.0 and ran `npm uninstall` before the 3.0.1 patch shipped, your Claude Code hooks may be orphaned. Run `engram repair-hooks --scope user` (install 3.0.1 first if needed) or see the [`CHANGELOG.md`](CHANGELOG.md#301--2026-04-24--clean-uninstall) for the manual `jq`-based recovery one-liner.
 
 ---
 
 ## Proof, not promises
 
- Everything above is measured. `bench/real-world.ts` runs the full resolver against real files in this repo and compares the rich-packet token cost to the raw-file-read cost. Reproducible in one command on any project.
+ Everything above is measured, not estimated. `bench/real-world.ts` runs the full resolver against real files in this repo and compares the rich-packet token cost to the raw-file-read cost. Reproducible in one command on any project.
 
- Latest run (2026-04-24, 87 source files, full report at [`bench/results/real-world-2026-04-24.md`](bench/results/real-world-2026-04-24.md)):
+ Latest run (2026-04-24, 87 source files — full report at [`bench/results/real-world-2026-04-24.md`](bench/results/real-world-2026-04-24.md)):
 
 | Metric | Value |
 |---|---|
@@ -190,36 +140,23 @@ Reproduce on your own code:
 
 ```bash
 cd your-project
- engram init
+ engram init # first-time setup for this project
 npx tsx /path/to/engram/bench/real-world.ts --project . --files 50
 ```
 
- Small projects score lower. Dense structural projects score higher. It's real arithmetic on your files. You can audit every number.
-
- ---
-
- ## Companion tools
-
- engram compresses what the codebase *is* (file contents into graph context). For compressing what the system is *doing* (shell command output) pair it with [rtk](https://github.com/rtk-ai/rtk):
-
- ```bash
- brew install rtk # 60-90% savings on git/npm/cargo/grep/etc. (Bash)
- brew install engramx # 89% savings + memory + mistake-guard (Read)
- ```
-
- Both register PreToolUse hooks. They don't conflict. rtk owns Bash, engram owns Read. Run both for a 3-5x weekly cap stretch end to end.
+ The bench writes a JSON + Markdown report per run into `bench/results/`. Small projects score lower; dense structural projects score higher. It's real arithmetic on your files — you can audit every number.
 
 ---
 
- ## Clean uninstall
+ ## What engramx is not
 
- One command:
+ The "engram" name is contested. To save you a search:
 
- ```bash
- npm uninstall -g engramx # 3.0.1+ auto-runs preuninstall hook-cleanup
- ```
+ - **Not Go-Engram** ([Gentleman-Programming/engram](https://github.com/Gentleman-Programming/engram)) — different project, Go binary, salience-gated chat memory. Ships under `engram` (without the `x`).
+ - **Not DeepSeek's "Engram" paper** — January 2026 academic work on conditional memory. Research artifact, not a product.
+ - **Not MemPalace** — adjacent positioning ("knowledge-graph memory," "method-of-loci"), but conversational memory, not code-structural.
 
- If you installed 3.0.0 and ran `npm uninstall` before the 3.0.1 patch shipped, your Claude Code hooks may be orphaned. Run `engram repair-hooks --scope user` (install 3.0.1 first) or see the [`CHANGELOG.md`](CHANGELOG.md#301--2026-04-24--clean-uninstall) for the manual `jq`-based recovery one-liner.
+ `engramx` is specifically: **a local-first context spine for AI coding agents that hooks into your IDE's tool boundary, indexes your code via tree-sitter + LSP, remembers past mistakes, and assembles ~500-token context packets in place of raw file reads.** Open source, Apache 2.0, single npm install.
 
 ---
 
@@ -368,6 +305,18 @@ External providers cache into SQLite at SessionStart. Per-read resolution is a c
 
 ---
 
+ ## Install
+
+ ```bash
+ npm install -g engramx
+ ```
+
+ Requires Node.js 20+. Zero native dependencies. No build tools. Local SQLite via sql.js WASM — no Rust, no Python, no system libs.
+
+ > **Prefer a designed walkthrough?** Open [**docs/install.html**](docs/install.html) — three-step install, benefits matrix, IDE coverage, FAQ. Local file, opens in any browser. Brand-matched terminal-mono aesthetic.
+
+ ---
+
 ## Quickstart
 
 **One command, zero friction:**
@@ -431,14 +380,19 @@ engram hooks install # auto-rebuild graph on every git commit
 
 ## IDE Integrations
 
+ `engram setup` auto-detects every supported IDE on your machine and prints the right next step for each. You don't have to remember which command to run — the detector knows.
+
 | IDE | Integration | Setup |
 |-----|------------|-------|
- | **Claude Code** | Hook-based interception (native, automatic) | `engram install-hook` |
- | **Cursor** | MDC snapshot + native MCP | `engram gen-mdc` &middot; [docs/integrations/cursor-mcp.md](docs/integrations/cursor-mcp.md) |
- | **Continue.dev** | `@engram` context provider | [docs/integrations/continue.md](docs/integrations/continue.md) |
- | **Zed** | Context server (`/engram`) | `engram context-server` |
- | **Aider** | Context file generation | `engram gen-aider` |
+ | **Claude Code** | Hook-based interception (native, automatic) — **plus** `/plugin install engram` for slash-commands | `engram install-hook` &middot; [docs/integrations/claude-code.md](docs/integrations/claude-code.md) |
+ | **Cursor** | MDC snapshot + native MCP + VS Code extension on OpenVSX | `engram gen-mdc` &middot; [docs/integrations/cursor-mcp.md](docs/integrations/cursor-mcp.md) |
+ | **Cline** | MCP server (5M+ VS Code installs, no native answer to token burn) | [docs/integrations/cline.md](docs/integrations/cline.md) |
+ | **Continue.dev** | `@engram` context provider via [`engramx-continue`](https://www.npmjs.com/package/engramx-continue) | [docs/integrations/continue.md](docs/integrations/continue.md) |
+ | **Aider** | Context file generation | `engram gen-aider` &middot; [docs/integrations/aider.md](docs/integrations/aider.md) |
 | **Windsurf** (Codeium) | `.windsurfrules` snapshot + MCP | `engram gen-windsurfrules` |
+ | **Zed** | Context server (`/engram`) | `engram context-server` &middot; [docs/integrations/zed.md](docs/integrations/zed.md) |
+ | **OpenAI Codex CLI** | `AGENTS.md` auto-emit (universal Linux Foundation standard) | `engram gen` (default emits both `AGENTS.md` + `CLAUDE.md`) |
+ | **VS Code (any agent)** | Status-bar entry + 6 commands wrapping the CLI | `code --install-extension engram-vscode` (OpenVSX) |
 | **Neovim** | MCP via codecompanion / avante | [docs/integrations/neovim.md](docs/integrations/neovim.md) |
 | **Emacs** | MCP via gptel-mcp | [docs/integrations/emacs.md](docs/integrations/emacs.md) |
 
@@ -448,17 +402,40 @@ Per-IDE setup guides are in [`docs/integrations/`](docs/integrations/).
 
 ## How It Compares
 
+ The "context spine" slot — local-first, code-aware, works in any MCP runtime, with a reproducible benchmark — is currently unowned. Here's the field as of May 2026:
+
+ | | **engramx** | Cursor index | Aider repo map | Cline | Continue.dev | Mem0 | claude-mem | CartoGopher |
+ |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | Works in any MCP runtime | ✅ | IDE-locked | Aider only | VS Code only | VS Code only | ✅ | Claude Code only | ✅ |
+ | Local-first (nothing leaves machine) | ✅ | cloud-synced | ✅ | ✅ | ✅ | optional | ✅ | ✅ |
+ | Code-aware AST graph | ✅ | proprietary | ✅ | — | — | — | — | ✅ |
+ | Reproducible benchmark | ✅ **89.1%** | — | — | — | — | — | — | claims 88% |
+ | Bi-temporal mistake memory | ✅ | — | — | — | — | — | partial | — |
+ | `AGENTS.md` + `CLAUDE.md` dual-emit | ✅ | — | — | — | — | — | — | — |
+ | Single npm install | ✅ | full IDE | pip | VS Code ext | VS Code ext | pip / npm | claude plugin | Go binary |
+ | License | Apache 2.0 | proprietary | Apache 2.0 | Apache 2.0 | Apache 2.0 | Apache 2.0 | MIT | unknown |
+ | GitHub stars (May 2026) | 108 | proprietary | 39K | 61.2K | 32.4K | 47.8K | new | unknown |
+
+ The matrix isn't a slight against any of them — most do something engram doesn't. Cursor's index is great inside Cursor. Aider's repo map is great in Aider. Cline's full-file rewrite model is honest about what it is. The point is that nobody else covers all eight rows. Engram is the only tool that does.
+
+ For the legacy comparison vs `Continue @RepoMap` / `Cursor .cursorrules` / `@199-bio/engram` (small repo-map approaches), see the matrix below.
+
+ <details>
+ <summary><strong>Legacy detailed comparison</strong></summary>
+
 | | engram | Continue @RepoMap | Cursor .cursorrules | Aider repo-map | @199-bio/engram |
 |---|---|---|---|---|---|
 | **Interception model** | Hook-based, automatic on every Read | Fetched at @-mention time | Static file, manual | Per-session map | MCP server, called explicitly |
 | **Cache strategy** | SQLite at SessionStart, <5ms per read | No cache — live fetch | No cache | Per-session only | No cache |
 | **Persistent memory** | Decisions, mistakes, patterns across sessions | No | Manual text file | No | No |
- | **Multiple providers** | 8 (AST, git, mistakes, MemPalace, Context7, Obsidian, LSP) | Repo structure only | No | Repo structure only | Graph query only |
+ | **Multiple providers** | 9 (AST, git, mistakes, MemPalace, Context7, Obsidian, LSP, Anthropic Memory, MCP plugins) | Repo structure only | No | Repo structure only | Graph query only |
 | **Mistake tracking** | LSP diagnostics → mistake nodes, ⚠️ on Edit | No | No | No | No |
 | **Survives compaction** | Yes (PreCompact hook) | No | Yes (static file) | No | No |
 | **LLM cost** | $0 | $0 | $0 | $0 | $0 |
 | **Native deps** | Zero | No | No | No | No |
 
+ </details>
+
 ---
 
 ## Install + Configuration
package/dist/cli.js CHANGED
@@ -3281,7 +3281,7 @@ program.command("setup").description("Zero-friction first-run wizard (init + ins
  "local"
  ).action(
  async (opts) => {
- const { runSetup } = await import("./wizard-UH27IO4I.js");
+ const { runSetup } = await import("./wizard-WGBAIZLF.js");
  const scope = opts.scope === "local" || opts.scope === "project" || opts.scope === "user" ? opts.scope : "local";
  const result = await runSetup({
  projectPath: opts.project,
@@ -99,13 +99,66 @@ function detectAider(projectRoot) {
  status: !installed ? "not detected" : configured ? ".aider-context.md present" : "detected \u2014 run `engram gen-aider`"
  };
  }
+ function detectCline() {
+ const candidates = [
+ join(homedir(), "Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev"),
+ join(homedir(), ".config/Code/User/globalStorage/saoudrizwan.claude-dev"),
+ join(homedir(), "AppData/Roaming/Code/User/globalStorage/saoudrizwan.claude-dev")
+ ];
+ const installed = candidates.some(existsSync);
+ let configured = false;
+ if (installed) {
+ try {
+ configured = candidates.filter(existsSync).some((p) => {
+ const settings = join(p, "settings", "cline_mcp_settings.json");
+ return existsSync(settings) && readFileSync(settings, "utf-8").includes("engram");
+ });
+ } catch {
+ configured = false;
+ }
+ }
+ return {
+ name: "Cline",
+ installed,
+ configured,
+ status: !installed ? "not detected" : configured ? "engram MCP server registered" : "detected \u2014 add engram-serve to cline_mcp_settings.json"
+ };
+ }
+ function detectZed(projectRoot) {
+ const candidates = [
+ join(homedir(), ".config/zed"),
+ join(homedir(), "Library/Application Support/Zed"),
+ join(homedir(), "AppData/Roaming/Zed")
+ ];
+ const installed = candidates.some(existsSync);
+ const configured = existsSync(join(projectRoot, ".zed", "settings.json"));
+ return {
+ name: "Zed",
+ installed,
+ configured,
+ status: !installed ? "not detected" : configured ? "Zed project settings present" : "detected \u2014 add engram context server"
+ };
+ }
+ function detectCodex(projectRoot) {
+ const installed = existsSync(join(homedir(), ".codex"));
+ const configured = existsSync(join(projectRoot, "AGENTS.md"));
+ return {
+ name: "Codex CLI",
+ installed,
+ configured,
+ status: !installed ? "not detected" : configured ? "AGENTS.md present" : "detected \u2014 run `engram gen` to create AGENTS.md"
+ };
+ }
  function detectAllIdes(projectRoot) {
  return [
  detectClaudeCode(projectRoot),
  detectCursor(projectRoot),
- detectWindsurf(projectRoot),
+ detectCline(),
  detectContinue(),
- detectAider(projectRoot)
+ detectWindsurf(projectRoot),
+ detectAider(projectRoot),
+ detectZed(projectRoot),
+ detectCodex(projectRoot)
  ];
  }
 
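Every detector in the hunk above returns the same `{ name, installed, configured, status }` record, which is what lets `detectAllIdes()` grow from 5 to 8 entries without touching callers. A minimal sketch of how a caller such as the setup wizard can filter that uniform shape — the sample records are illustrative stand-ins, not real probe output:

```javascript
// Sample records in the shape the detectors return
// (illustrative values, not real filesystem probes).
const ides = [
  { name: "Claude Code", installed: true, configured: true, status: "hooks installed" },
  { name: "Cline", installed: true, configured: false, status: "detected \u2014 add engram-serve to cline_mcp_settings.json" },
  { name: "Zed", installed: false, configured: false, status: "not detected" },
];

// The wizard only cares about IDEs that are present but not yet wired up.
const unconfigured = ides.filter((ide) => ide.installed && !ide.configured);

console.log(unconfigured.map((ide) => ide.name)); // → [ 'Cline' ]
```

Because the record shape is stable, adding a ninth detector is purely additive: it appends one more entry to the array and every consumer keeps working.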
@@ -214,7 +267,11 @@ async function offerIdeAdapters(opts, rl) {
  const suggest = {
  Cursor: "engram gen-mdc",
  Windsurf: "engram gen-windsurfrules",
- Aider: "engram gen-aider"
+ Aider: "engram gen-aider",
+ "Codex CLI": "engram gen --target agents",
+ Cline: "Add to cline_mcp_settings.json: { engram: { command: 'engram-serve', args: ['" + root + "'] } }",
+ "Continue.dev": "Add to ~/.continue/config.json: { contextProviders: [{ name: 'engramx-continue' }] }",
+ Zed: "Register engram-serve as a Zed context server (see Docs/integrations/zed.md)"
  };
  const suggested = [];
  for (const ide of unconfigured) {
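The one-line Cline hint in the `suggest` map packs a JSON snippet into a string. Expanded into the file it targets, and assuming Cline's usual `mcpServers` envelope for `cline_mcp_settings.json` (the project path below is a placeholder), the registration it points at would look roughly like:

```json
{
  "mcpServers": {
    "engram": {
      "command": "engram-serve",
      "args": ["/absolute/path/to/your/project"]
    }
  }
}
```

Note that `detectCline` only checks the file for the substring `engram`, so once a server entry by that name exists the wizard reports it as configured.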
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "engramx",
- "version": "3.3.0",
+ "version": "3.4.0",
  "mcpName": "io.github.NickCirv/engram",
  "description": "The context spine for AI coding agents. 9 built-in providers + mcpConfig plugin contract (wrap any MCP server in 10 lines), generic MCP-client aggregator (stdio), pre-mortem mistake-guard, bi-temporal mistake memory, Anthropic Auto-Memory bridge, SSE streaming context packets, dual-emit AGENTS.md+CLAUDE.md. 90.8% measured real-world token savings (reproducible bench included). Local SQLite, zero cloud.",
  "repository": {