memorix 1.0.2 → 1.0.4

This diff shows the published contents of two versions of this package as they appear in its public registry. It is provided for informational purposes only.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,53 @@
 
  All notable changes to this project will be documented in this file.
 
+ ## [1.0.4] — 2026-03-17
+
+ ### Added
+ - **Git Memory pipeline** — `git commit` can now flow directly into Memorix via `memorix git-hook`, `memorix git-hook-uninstall`, and `memorix ingest commit --auto`. Stored observations now carry `source` and `commitHash`, and Git memories can be filtered explicitly with `source: "git"`.
+ - **Reasoning Memory tools** — Added `memorix_store_reasoning` and `memorix_search_reasoning` so design rationale, alternatives, constraints, and risks can be stored and searched as a first-class memory layer.
+ - **Source-aware retrieval and cross-linking** — Search now boosts Git, reasoning, and problem-solution memories differently based on query intent. Git memories and reasoning memories can cross-reference each other via related commits and shared entities.
+ - **Structured config model** — Added project/user `memorix.yml`, project/user `.env` loading, `memorix init`, and configuration provenance diagnostics in `memorix status`.
+ - **Dashboard control plane upgrades** — Added Git Memory, Config Provenance, and Identity Health views, plus richer stats and a stabilized graph layout for the HTTP dashboard.
+
+ ### Changed
+ - **Documentation consolidation** — Reworked the README, README.zh-CN, setup, architecture, API reference, configuration, Git Memory, and development guides so they match the current product model: local-first platform, `memorix.yml` + `.env`, Git Memory, HTTP dashboard, and the four-layer architecture.
+ - **Project detection model** — Project identity now centers on real Git roots, MCP roots support, system-directory fallback handling, and runtime project switching instead of older placeholder-style fallback identities.
+ - **Dashboard usage model** — `memorix serve-http --port 3211` is now the primary “control plane” entrypoint when you want HTTP transport, collaboration features, and dashboard access in one place.
+
+ ### Fixed
+ - **Project identity drift** — Fixed Codex/Windsurf startup issues that produced `local/System32`, IDE-installation-directory identities, or other incorrect local project bindings.
+ - **Worktree-safe Git hooks** — Hook installation, uninstall, auto-install checks, and status reporting now resolve hooks directories correctly for both normal repos and Git worktrees.
+ - **Runtime config correctness** — Fixed project-level `memorix.yml` not reaching runtime getters, `.env` values leaking across project switches, and legacy `config.json` not showing up correctly in provenance diagnostics.
+ - **Git Memory quality** — Added noise filtering, preserved release/version milestone commits, and implemented `memorix ingest commit --force` as an escape hatch for manual ingestion.
+ - **Cross-project detail retrieval** — Global search results can now be opened reliably with project-aware refs instead of colliding on observation IDs from different projects.
+ - **Skill generation noise** — `memorix_skills generate` now filters low-signal command-history observations like `git`, `gh`, `npm`, and `npx` so generated skills stay project-relevant.
+ - **OpenCode static plugin noise** — Merged the first external PR to silence `console.log` spam in the static OpenCode plugin without reintroducing session lifecycle side effects.
+ - **CI/publish flow** — Restored CI green after type/test regressions and changed the npm publishing workflow to a manual trigger instead of automatic release publishing.
+
+ ### Stats
+ - **Tests:** 879/879 passing across 68 files
+ - **Runtime modes:** stdio MCP (`memorix serve`), HTTP MCP + dashboard (`memorix serve-http --port 3211`), and standalone dashboard remain supported
+
+ ---
+
+ ## [1.0.3] — 2026-03-14
+
+ ### Added
+ - **Memory Formation Pipeline** — Three-stage pipeline (Extract → Resolve → Evaluate) runs in shadow mode on every `memorix_store` call and hook trigger. Collects quality metrics without affecting storage decisions.
+ - **Extract**: Automatic fact extraction from narratives, title normalization, entity resolution against the Knowledge Graph, observation type verification.
+ - **Resolve**: 4 resolution actions (new/merge/evolve/discard) based on similarity scoring, word overlap, and contradiction detection.
+ - **Evaluate**: Multi-factor knowledge value assessment (type weight, fact density, specificity, causal reasoning, noise detection). Categorizes memories as core/contextual/ephemeral.
+ - **`memorix_formation_metrics` tool** — New MCP tool to query aggregated Formation Pipeline metrics (value scores, resolution actions, extraction rates, processing times).
+ - **`getEntityNames()` method** on `KnowledgeGraphManager` for Formation Pipeline entity resolution.
+
+ ### Stats
+ - **Default MCP Tools:** 23 (+1: `memorix_formation_metrics`)
+ - **Tests:** 803/803 passing across 60 files (+50 new Formation Pipeline tests)
+ - **Hooks safety:** handler.ts +21 lines (shadow call only), zero modification to existing hook logic
+
+ ---
+
  ## [1.0.2] — 2026-03-14
 
  ### Fixed
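The 1.0.3 Resolve stage above (new/merge/evolve/discard from similarity scoring, word overlap, and contradiction detection) can be pictured with a toy decision rule. This is an illustrative sketch only: the thresholds, the `word_overlap` helper, and the negation list are invented here and are not Memorix internals.

```python
# Toy sketch of a resolve step: pick new/merge/evolve/discard from
# word overlap between a candidate memory and its closest existing one.
# Thresholds and helper names are hypothetical, not Memorix internals.

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap of lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

# Words that hint one statement negates or supersedes the other.
NEGATIONS = {"not", "no", "never", "deprecated"}

def resolve(candidate: str, existing: str) -> str:
    sim = word_overlap(candidate, existing)
    contradicts = bool(
        (set(candidate.lower().split()) ^ set(existing.lower().split())) & NEGATIONS
    )
    if sim > 0.9:
        return "discard"   # near-duplicate: keep the existing memory
    if contradicts and sim > 0.3:
        return "evolve"    # same topic, updated truth supersedes the old one
    if sim > 0.5:
        return "merge"     # overlapping facts: combine into one record
    return "new"           # unrelated: store as a fresh memory

print(resolve("use port 3211 for the dashboard",
              "use port 3211 for the dashboard"))  # prints "discard"
```

A real resolver would lean on embedding similarity rather than raw word overlap, but the branching shape is the same idea.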
package/README.md CHANGED
@@ -5,339 +5,216 @@
  <h1 align="center">Memorix</h1>
 
  <p align="center">
- <strong>Persistent memory layer for AI coding agents.</strong><br>
- One MCP server. Ten agents. Zero context loss.
+ <strong>Local-first memory platform for AI coding agents.</strong><br>
+ Git truth, reasoning memory, and cross-agent recall in one MCP server.
  </p>
 
  <p align="center">
  <a href="https://www.npmjs.com/package/memorix"><img src="https://img.shields.io/npm/v/memorix.svg?style=flat-square&color=cb3837" alt="npm"></a>
  <a href="https://www.npmjs.com/package/memorix"><img src="https://img.shields.io/npm/dm/memorix.svg?style=flat-square&color=blue" alt="downloads"></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/license-Apache%202.0-green.svg?style=flat-square" alt="license"></a>
- <a href="https://github.com/AVIDS2/memorix"><img src="https://img.shields.io/github/stars/AVIDS2/memorix?style=flat-square&color=yellow" alt="stars"></a>
- <img src="https://img.shields.io/badge/tests-753%20passed-brightgreen?style=flat-square" alt="tests">
  <a href="https://github.com/AVIDS2/memorix/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/AVIDS2/memorix/ci.yml?style=flat-square&label=CI" alt="CI"></a>
+ <a href="https://github.com/AVIDS2/memorix"><img src="https://img.shields.io/github/stars/AVIDS2/memorix?style=flat-square&color=yellow" alt="stars"></a>
  </p>
 
  <p align="center">
- <strong>v1.0 Stable | 22 MCP tools | Auto-cleanup | Multi-agent collaboration | 10 IDEs supported</strong>
- </p>
-
- <p align="center">
- <img src="https://img.shields.io/badge/-Cursor-orange?style=flat-square" alt="Cursor">
- <img src="https://img.shields.io/badge/-Windsurf-blue?style=flat-square" alt="Windsurf">
- <img src="https://img.shields.io/badge/-Claude%20Code-purple?style=flat-square" alt="Claude Code">
- <img src="https://img.shields.io/badge/-Codex-green?style=flat-square" alt="Codex">
- <img src="https://img.shields.io/badge/-Copilot-lightblue?style=flat-square" alt="Copilot">
- <img src="https://img.shields.io/badge/-Kiro-red?style=flat-square" alt="Kiro">
- <img src="https://img.shields.io/badge/-Antigravity-grey?style=flat-square" alt="Antigravity">
- <img src="https://img.shields.io/badge/-OpenCode-teal?style=flat-square" alt="OpenCode">
- <img src="https://img.shields.io/badge/-Trae-FF6B35?style=flat-square" alt="Trae">
- <img src="https://img.shields.io/badge/-Gemini%20CLI-4285F4?style=flat-square" alt="Gemini CLI">
+ <strong>Git Memory</strong> · <strong>Reasoning Memory</strong> · <strong>Cross-Agent Recall</strong> · <strong>Control Plane Dashboard</strong>
  </p>
 
  <p align="center">
  <a href="README.zh-CN.md">中文文档</a> ·
  <a href="#quick-start">Quick Start</a> ·
- <a href="#features">Features</a> ·
- <a href="#architecture">Architecture</a> ·
+ <a href="#how-it-works">How It Works</a> ·
+ <a href="#documentation">Documentation</a> ·
  <a href="docs/SETUP.md">Setup Guide</a>
  </p>
 
  ---
 
- ## Introduction
+ ## Why Memorix
 
- AI coding agents lose all context between sessions. Switch IDEs and previous decisions, debugging history, and architectural knowledge are gone. Memorix provides a shared, persistent memory layer across agents and sessions — storing decisions, gotchas, and project knowledge that any agent can retrieve instantly.
+ Most AI coding agents remember only the current thread. Memorix gives them a shared, persistent memory layer across IDEs, sessions, and projects.
 
- ```
- Session 1 (Cursor): "Use JWT with refresh tokens, 15-min expiry" → stored as decision
- Session 2 (Claude Code): "Add login endpoint" → retrieves the decision → implements correctly
- ```
+ What makes Memorix different:
 
- No re-explaining. No copy-pasting. No vendor lock-in.
-
- ### Core Capabilities
-
- - **Cross-Agent Memory**: All agents share the same memory store. Store in Cursor, retrieve in Claude Code.
- - **Multi-Agent Collaboration**: Team tools for agent coordination — join/leave, file locks, task boards, and cross-IDE messaging via shared `team-state.json`.
- - **Auto-Cleanup on Startup**: Background retention archiving and intelligent deduplication (LLM or heuristic) run automatically — zero manual maintenance.
- - **Dual-Mode Quality**: Free heuristic engine for basic dedup; optional LLM mode for intelligent compression, reranking, and conflict resolution.
- - **3-Layer Progressive Disclosure**: Search returns compact indices (~50 tokens/result), timeline shows chronological context, detail provides full content. ~10x token savings over full-text retrieval.
- - **Mini-Skills**: Promote high-value observations to permanent skills that auto-inject at every session start. Critical knowledge never decays.
- - **Auto-Memory Hooks**: Automatically capture decisions, errors, and gotchas from IDE tool calls. Pattern detection in English and Chinese.
- - **Knowledge Graph**: Entity-relation model compatible with [MCP Official Memory Server](https://github.com/modelcontextprotocol/servers/tree/main/src/memory). Auto-creates relations from entity extraction.
+ - **Git Memory**: turn `git commit` into searchable engineering memory with noise filtering and commit provenance.
+ - **Reasoning Memory**: store why a decision was made, not just what changed.
+ - **Cross-Agent Local Recall**: Cursor, Windsurf, Claude Code, Codex, Copilot, Kiro, OpenCode, Gemini CLI, and more can read the same local memory base.
+ - **Memory Quality Pipeline**: formation, compaction, retention, and source-aware retrieval work together instead of acting like isolated tools.
 
  ---
 
  ## Quick Start
 
+ Install globally:
+
  ```bash
  npm install -g memorix
  ```
 
- Add to your agent's MCP config:
-
- <details open>
- <summary><strong>Cursor</strong> · <code>.cursor/mcp.json</code></summary>
-
- ```json
- { "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
- ```
- </details>
-
- <details>
- <summary><strong>Claude Code</strong></summary>
+ Initialize project config:
 
  ```bash
- claude mcp add memorix -- memorix serve
+ memorix init
  ```
- </details>
 
- <details>
- <summary><strong>Windsurf</strong> · <code>~/.codeium/windsurf/mcp_config.json</code></summary>
+ Memorix uses two files with two roles:
 
- ```json
- { "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
- ```
- </details>
-
- <details>
- <summary><strong>VS Code Copilot</strong> · <code>.vscode/mcp.json</code></summary>
-
- ```json
- { "servers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
- ```
- </details>
+ - `memorix.yml` for behavior and project settings
+ - `.env` for secrets such as API keys
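To make the split concrete, a project config might look roughly like the sketch below. The key names are hypothetical placeholders, not the confirmed schema; docs/CONFIGURATION.md is the authority. Secrets stay out of this file and go in `.env` (for example `MEMORIX_EMBEDDING_API_KEY=sk-xxx`).

```yaml
# memorix.yml: behavior and project settings (safe to commit).
# Key names below are hypothetical placeholders, not the real schema.
project:
  name: my-service
embedding:
  provider: api          # hypothetical key
git:
  auto_ingest: true      # hypothetical key
```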
 
- <details>
- <summary><strong>Codex</strong> · <code>~/.codex/config.toml</code></summary>
+ Choose one runtime mode:
 
- ```toml
- [mcp_servers.memorix]
- command = "memorix"
- args = ["serve"]
+ ```bash
+ memorix serve
  ```
- </details>
 
- <details>
- <summary><strong>Kiro</strong> · <code>.kiro/settings/mcp.json</code></summary>
+ Use `serve` for normal stdio MCP integrations.
 
- ```json
- { "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
+ ```bash
+ memorix serve-http --port 3211
  ```
- </details>
 
- <details>
- <summary><strong>Antigravity</strong> · <code>~/.gemini/antigravity/mcp_config.json</code></summary>
+ Use `serve-http` when you want the HTTP transport, collaboration features, and the dashboard on the same port.
 
- ```json
- { "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"], "env": { "MEMORIX_PROJECT_ROOT": "/your/project/path" } } } }
- ```
- </details>
+ Add Memorix to your MCP client:
 
- <details>
- <summary><strong>OpenCode</strong> · <code>~/.config/opencode/config.json</code></summary>
+ <details open>
+ <summary><strong>Cursor</strong> · <code>.cursor/mcp.json</code></summary>
 
  ```json
- { "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
+ {
+   "mcpServers": {
+     "memorix": {
+       "command": "memorix",
+       "args": ["serve"]
+     }
+   }
+ }
  ```
  </details>
 
  <details>
- <summary><strong>Trae</strong> · <code>~/%APPDATA%/Trae/User/mcp.json</code></summary>
+ <summary><strong>Claude Code</strong></summary>
 
- ```json
- { "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
+ ```bash
+ claude mcp add memorix -- memorix serve
  ```
  </details>
 
  <details>
- <summary><strong>Gemini CLI</strong> · <code>.gemini/settings.json</code></summary>
+ <summary><strong>Codex</strong> · <code>~/.codex/config.toml</code></summary>
 
- ```json
- { "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
+ ```toml
+ [mcp_servers.memorix]
+ command = "memorix"
+ args = ["serve"]
  ```
  </details>
 
- Restart your agent. No API keys required. No cloud. No external dependencies.
-
- > **Auto-update**: Memorix checks for updates on startup (once per 24h) and self-updates in the background.
-
- > **Note**: Do not use `npx` — it re-downloads on each invocation and causes MCP timeout. Use global install.
- >
- > [Full setup guide](docs/SETUP.md) · [Troubleshooting](docs/SETUP.md#troubleshooting)
+ For the full IDE matrix, Windows notes, and troubleshooting, see [docs/SETUP.md](docs/SETUP.md).
 
  ---
 
- ## Features
-
- ### 22 MCP Tools (Default)
-
- | Category | Tools |
- |----------|-------|
- | **Memory** | `memorix_store` · `memorix_search` · `memorix_detail` · `memorix_timeline` · `memorix_resolve` · `memorix_deduplicate` · `memorix_suggest_topic_key` |
- | **Sessions** | `memorix_session_start` · `memorix_session_end` · `memorix_session_context` |
- | **Skills** | `memorix_skills` · `memorix_promote` |
- | **Workspace** | `memorix_workspace_sync` · `memorix_rules_sync` |
- | **Maintenance** | `memorix_retention` · `memorix_consolidate` · `memorix_transfer` |
- | **Team** | `team_manage` · `team_file_lock` · `team_task` · `team_message` |
- | **Dashboard** | `memorix_dashboard` |
-
- <details>
- <summary><strong>+9 Optional: Knowledge Graph tools</strong> (enable in <code>~/.memorix/settings.json</code>)</summary>
-
- `create_entities` · `create_relations` · `add_observations` · `delete_entities` · `delete_observations` · `delete_relations` · `search_nodes` · `open_nodes` · `read_graph`
-
- Enable with: `{ "knowledgeGraph": true }` in `~/.memorix/settings.json`
- </details>
-
- ### Observation Types
+ ## Core Workflows
 
- Nine structured types for classifying stored knowledge:
+ ### 1. Store and retrieve memory
 
- `session-request` · `gotcha` · `problem-solution` · `how-it-works` · `what-changed` · `discovery` · `why-it-exists` · `decision` · `trade-off`
+ Use MCP tools such as:
 
- ### Hybrid Search
+ - `memorix_store`
+ - `memorix_search`
+ - `memorix_detail`
+ - `memorix_timeline`
+ - `memorix_resolve`
 
- BM25 fulltext search works out of the box with minimal resources (~50MB RAM). Semantic vector search is opt-in with three provider options:
+ This covers decisions, gotchas, problem-solution notes, and session handoff context.
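Under the hood these are standard MCP tool calls. As a sketch, a `tools/call` request for `memorix_store` could look like the following; the request envelope is the MCP wire format, but the argument names are illustrative rather than the tool's confirmed schema.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memorix_store",
    "arguments": {
      "type": "decision",
      "content": "Use JWT with refresh tokens, 15-min expiry"
    }
  }
}
```

The `"decision"` value matches one of the observation types listed in earlier versions of this README; the exact field names may differ in the shipped tool.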
 
- | Provider | Configuration | Resources | Quality |
- |----------|--------------|-----------|---------|
- | **API** (recommended) | `MEMORIX_EMBEDDING=api` | Zero local RAM | Highest |
- | **fastembed** | `MEMORIX_EMBEDDING=fastembed` | ~300MB RAM | High |
- | **transformers** | `MEMORIX_EMBEDDING=transformers` | ~500MB RAM | High |
- | **Off** (default) | `MEMORIX_EMBEDDING=off` | ~50MB RAM | BM25 only |
+ ### 2. Capture Git truth automatically
 
- API embedding works with any OpenAI-compatible endpoint — OpenAI, Qwen/DashScope, OpenRouter, Ollama, or any proxy:
+ Install the post-commit hook:
 
  ```bash
- MEMORIX_EMBEDDING=api
- MEMORIX_EMBEDDING_API_KEY=sk-xxx
- MEMORIX_EMBEDDING_MODEL=text-embedding-3-small
- MEMORIX_EMBEDDING_BASE_URL=https://api.openai.com/v1 # optional
- MEMORIX_EMBEDDING_DIMENSIONS=512 # optional
+ memorix git-hook --force
  ```
 
- Embedding infrastructure includes 10K LRU cache with disk persistence, batch API calls (up to 2048 texts per request), parallel processing (4 concurrent chunks), and text normalization for improved cache hit rates. Zero external dependencies — no Chroma, no SQLite.
-
- For local embedding:
+ Or ingest manually:
 
  ```bash
- npm install -g fastembed # ONNX runtime
- npm install -g @huggingface/transformers # JS/WASM runtime
+ memorix ingest commit
+ memorix ingest log --count 20
  ```
 
- ### LLM Enhanced Mode
-
- Optional LLM integration that significantly improves memory quality. Three capabilities layered on top of the base search:
+ Git memories are stored with `source='git'`, commit hashes, changed files, and noise filtering.
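What "noise filtering" means can be pictured with a toy classifier. The patterns below are illustrative guesses, not the actual heuristics; the release notes state only that low-signal commits are filtered while release/version milestone commits are preserved.

```python
import re

# Toy commit-noise filter. The patterns are illustrative stand-ins for
# whatever heuristics Memorix actually applies; they are not its real rules.
NOISE = re.compile(r"^(wip|merge branch|fixup!|squash!)", re.IGNORECASE)
MILESTONE = re.compile(r"\b(release|v?\d+\.\d+\.\d+)\b", re.IGNORECASE)

def should_ingest(message: str) -> bool:
    # Per the changelog, release/version milestone commits are always kept.
    if MILESTONE.search(message):
        return True
    # Everything else is kept unless it matches a known low-signal prefix.
    return not NOISE.match(message)

for msg in ["wip", "release 1.0.4", "feat: add reasoning memory tools"]:
    print(msg, "->", should_ingest(msg))
```

`memorix ingest commit --force` exists precisely as the escape hatch for commits a filter like this would skip.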
 
- | Capability | Description | Measured Impact |
- |-----------|-------------|-----------------|
- | **Narrative Compression** | Compresses verbose observations before storage, preserving all technical facts | 27% token reduction (up to 44% on narrative-heavy content) |
- | **Search Reranking** | LLM reranks search results by semantic relevance to the current query | 60% of queries improved, 0% degraded |
- | **Compact on Write** | Detects duplicates and conflicts at write time; merges, updates, or skips as appropriate | Prevents redundant storage, resolves contradictions |
-
- Smart filtering ensures LLM calls are only made when beneficial — structured content like commands and file paths is bypassed automatically.
+ ### 3. Run the control plane
 
  ```bash
- MEMORIX_LLM_API_KEY=sk-xxx
- MEMORIX_LLM_PROVIDER=openai # openai | anthropic | openrouter | custom
- MEMORIX_LLM_MODEL=gpt-4.1-nano # any chat completion model
- MEMORIX_LLM_BASE_URL=https://... # custom endpoint (optional)
+ memorix serve-http --port 3211
  ```
 
- Memorix auto-detects existing environment variables:
-
- | Variable | Provider |
- |----------|----------|
- | `OPENAI_API_KEY` | OpenAI |
- | `ANTHROPIC_API_KEY` | Anthropic |
- | `OPENROUTER_API_KEY` | OpenRouter |
-
- **Without LLM**: Free heuristic deduplication (similarity-based rules). **With LLM**: Intelligent compression, contextual reranking, contradiction detection, and fact extraction.
+ Then open:
 
- ### Mini-Skills
+ - MCP HTTP endpoint: `http://localhost:3211/mcp`
+ - Dashboard: `http://localhost:3211`
 
- Promote high-value observations to permanent skills using `memorix_promote`. Mini-skills are:
+ This mode gives you collaboration tools, project identity diagnostics, config provenance, Git Memory views, and the dashboard in one place.
 
- - **Permanent** — exempt from retention decay, never archived
- - **Auto-injected** — loaded into context at every `memorix_session_start`
- - **Project-scoped** — isolated per project, no cross-project pollution
-
- Use this for critical knowledge that must survive indefinitely: deployment procedures, architectural constraints, recurring gotchas.
-
- ### Team Collaboration
+ ---
 
- Multiple agents working in the same workspace can coordinate via 4 team tools:
+ ## How It Works
 
- | Tool | Actions | Purpose |
- |------|---------|---------|
- | `team_manage` | join, leave, status | Agent registry: see who's active |
- | `team_file_lock` | lock, unlock, status | Advisory file locks to prevent conflicts |
- | `team_task` | create, claim, complete, list | Shared task board with dependencies |
- | `team_message` | send, broadcast, inbox | Direct and broadcast messaging |
+ ```mermaid
+ graph TB
+ A["git commit / agent tool call / manual store"] --> B["Memorix Runtime"]
+ B --> C["Observation / Reasoning / Git Memory"]
+ C --> D["Formation + Indexing + Graph + Retention"]
+ D --> E["Search / Detail / Timeline / Dashboard / Team"]
+ ```
 
- State is persisted to `team-state.json` and shared across all IDE processes. See [TEAM.md](TEAM.md) for the full protocol.
+ ### Memory Layers
 
- ### Auto-Memory Hooks
+ - **Observation Memory**: what changed, how something works, gotchas, problem-solution notes
+ - **Reasoning Memory**: why a choice was made, alternatives, trade-offs, risks
+ - **Git Memory**: immutable engineering facts derived from commits
 
- ```bash
- memorix hooks install
- ```
+ ### Retrieval Model
 
- Captures decisions, errors, and gotchas automatically from IDE tool calls. Pattern detection supports English and Chinese. Smart filtering applies 30-second cooldown and skips trivial commands. High-value memories are injected at session start.
+ - Default search is **project-scoped**
+ - `scope="global"` searches across projects
+ - Global hits can be opened explicitly with project-aware refs
+ - Source-aware retrieval boosts Git memories for “what changed” questions and reasoning memories for “why” questions
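The source-aware boost in the last bullet can be sketched as a small scoring tweak. The intent cues and the 1.5x factor below are invented for illustration; the real weighting is not documented here.

```python
# Toy sketch of source-aware boosting: nudge Git memories up for
# "what changed" queries and reasoning memories up for "why" queries.
# Cue phrases and the boost factor are hypothetical, not Memorix internals.

BOOSTS = {
    "git": ("what changed", "diff", "commit"),
    "reasoning": ("why", "decision", "trade-off"),
}

def boost(base_score: float, source: str, query: str) -> float:
    q = query.lower()
    cues = BOOSTS.get(source, ())
    if any(cue in q for cue in cues):
        return base_score * 1.5   # hypothetical boost factor
    return base_score

hits = [("fix auth bug", "git", 0.6), ("chose JWT over sessions", "reasoning", 0.6)]
query = "why did we pick JWT?"
ranked = sorted(hits, key=lambda h: boost(h[2], h[1], query), reverse=True)
print([h[0] for h in ranked])  # → ['chose JWT over sessions', 'fix auth bug']
```

The point is only that the same base score can rank differently depending on which memory source best matches the question's intent.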
 
- ### Interactive CLI
+ ---
 
- ```bash
- memorix # Interactive menu
- memorix configure # LLM + Embedding provider setup
- memorix status # Project info and statistics
- memorix dashboard # Web UI at localhost:3210
- memorix hooks install # Install auto-capture for IDEs
- ```
+ ## Documentation
 
- ---
+ ### Getting Started
 
- ## Architecture
+ - [Setup Guide](docs/SETUP.md)
+ - [Configuration Guide](docs/CONFIGURATION.md)
 
- ```mermaid
- graph TB
- A["Cursor · Claude Code · Windsurf · Codex · +6 more"]
- A -->|MCP stdio| Core
- Core["Memorix MCP Server\n22 Default Tools · Auto-Hooks · Auto-Cleanup"]
- Core --> Search["Search Pipeline\nBM25 + Vector + Rerank"]
- Core --> Team["Team Collab\nAgents · Tasks · Locks · Msgs"]
- Core --> Sync["Rules & Workspace Sync\n10 Adapters"]
- Core --> Cleanup["Auto-Cleanup\nRetention + LLM Dedup"]
- Core --> KG["Knowledge Graph\nEntities · Relations"]
- Search --> Disk["~/.memorix/data/\nobservations · sessions · mini-skills · team-state · entities · relations"]
- Team --> Disk
- KG --> Disk
- ```
+ ### Product and Architecture
 
- ### Search Pipeline
+ - [Architecture](docs/ARCHITECTURE.md)
+ - [Memory Formation Pipeline](docs/MEMORY_FORMATION_PIPELINE.md)
+ - [Design Decisions](docs/DESIGN_DECISIONS.md)
 
- Three-stage retrieval with progressive quality enhancement:
+ ### Reference
 
- ```
- Stage 1: Orama (BM25 + Vector Hybrid) → Top-N candidates
- Stage 2: LLM Reranking (optional) → Reordered by semantic relevance
- Stage 3: Recency + Project Affinity → Final scored results
- ```
+ - [API Reference](docs/API_REFERENCE.md)
+ - [Git Memory Guide](docs/GIT_MEMORY.md)
+ - [Modules](docs/MODULES.md)
 
- ### Write Pipeline
+ ### Development
 
- ```
- Input → LLM Compression (optional) → Compact on Write (dedup/merge) → Store + Index
- ```
+ - [Development Guide](docs/DEVELOPMENT.md)
+ - [Known Issues and Roadmap](docs/KNOWN_ISSUES_AND_ROADMAP.md)
 
- ### Key Design Decisions
+ ### AI-Facing Project Docs
 
- - **Project isolation**: Auto-detected from `git remote`. Scoped search by default.
- - **Shared storage**: All agents read/write `~/.memorix/data/`. Cross-IDE by design.
- - **Token efficiency**: 3-layer progressive disclosure (search, timeline, detail). ~10x savings.
- - **Graceful degradation**: Every LLM and embedding feature is optional. Core functionality requires zero configuration.
+ - [`llms.txt`](llms.txt)
+ - [`llms-full.txt`](llms-full.txt)
 
  ---
 
@@ -345,22 +222,28 @@ Input → LLM Compression (optional) → Compact on Write (dedup/merge) →
  ```bash
  git clone https://github.com/AVIDS2/memorix.git
- cd memorix && npm install
+ cd memorix
+ npm install
 
- npm run dev # watch mode
- npm test # 753 tests
- npm run build # production build
+ npm run dev
+ npm test
+ npm run build
  ```
 
- [Architecture](docs/ARCHITECTURE.md) · [API Reference](docs/API_REFERENCE.md) · [Modules](docs/MODULES.md) · [Design Decisions](docs/DESIGN_DECISIONS.md)
+ Key local commands:
 
- > For AI agents: [`llms.txt`](llms.txt) · [`llms-full.txt`](llms-full.txt)
+ ```bash
+ memorix status
+ memorix dashboard
+ memorix serve-http --port 3211
+ memorix git-hook --force
+ ```
 
  ---
 
  ## Acknowledgements
 
- Built on ideas from [mcp-memory-service](https://github.com/doobidoo/mcp-memory-service), [MemCP](https://github.com/maydali28/memcp), [claude-mem](https://github.com/anthropics/claude-code), and [Mem0](https://github.com/mem0ai/mem0).
+ Memorix builds on ideas from [mcp-memory-service](https://github.com/doobidoo/mcp-memory-service), [MemCP](https://github.com/maydali28/memcp), [claude-mem](https://github.com/anthropics/claude-code), [Mem0](https://github.com/mem0ai/mem0), and the broader MCP ecosystem.
 
  ## Star History