@aman_asmuei/amem 0.1.2 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +287 -139
  2. package/package.json +2 -1
package/README.md CHANGED
@@ -1,34 +1,80 @@
+ <div align="center">
+
  # amem
 
- The memory layer for AI coding tools. Local-first. Developer-specific. Works everywhere.
+ ### Give your AI a memory it never forgets
 
- [![npm version](https://img.shields.io/npm/v/@aman_asmuei/amem.svg)](https://www.npmjs.com/package/@aman_asmuei/amem)
- [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
+ Your AI assistant forgets everything the moment a conversation ends.<br/>
+ **amem** gives it persistent memory — so it remembers your preferences, decisions, and corrections forever.
 
- > Your AI forgets everything between conversations. amem fixes that.
+ [![npm version](https://img.shields.io/npm/v/@aman_asmuei/amem.svg?style=flat-square&color=cb3837)](https://www.npmjs.com/package/@aman_asmuei/amem)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg?style=flat-square)](LICENSE)
+ [![CI](https://img.shields.io/github/actions/workflow/status/amanasmuei/amem/ci.yml?style=flat-square&label=tests)](https://github.com/amanasmuei/amem/actions)
+ [![Node](https://img.shields.io/badge/node-%3E%3D18-brightgreen?style=flat-square)](https://nodejs.org)
 
- ## Install
+ [Get Started](#-get-started) · [How It Works](#-how-it-works) · [Tools Reference](#-tools) · [CLI](#-cli) · [FAQ](#-faq)
 
- ```bash
- npx @aman_asmuei/amem
+ </div>
+
+ ---
+
+ ## The Problem
+
+ Every time you start a new conversation with an AI coding assistant, it starts from zero. It doesn't know:
+
+ - That you told it **three times** not to use `any` in TypeScript
+ - That your team **chose PostgreSQL** over MongoDB last month (and why)
+ - That you **prefer** functional style, early returns, and pnpm
+ - Where your auth module lives, or how your project is structured
+
+ You end up repeating yourself. Every. Single. Time.
+
+ ## The Solution
+
+ **amem** is a memory layer that plugs into any AI tool — Claude Code, Cursor, Windsurf, or anything that speaks MCP. It remembers what matters and surfaces it automatically.
+
+ ```
+ You: "Don't use any type in TypeScript"
+
+ amem saves this as a correction (highest priority).
+ Next conversation — or next month — your AI already knows.
  ```
 
- ## What it does
+ ---
 
- amem is an MCP server that gives any AI assistant persistent memory about:
+ ## What Gets Remembered
 
- - **Corrections** — "Don't mock the database in integration tests" *(highest priority, always surfaced)*
- - **Decisions** — "Chose Postgres over MongoDB because of ACID requirements"
- - **Patterns** — "User prefers early returns over nested conditionals"
- - **Preferences** — "Uses pnpm, not npm"
- - **Topology** — "Auth module lives in src/auth/, uses JWT"
- - **Facts** — "Project started in January 2025"
+ amem organizes memories into six types, ranked by importance:
 
- Memories are ranked by **relevance x recency x confidence x importance**. Corrections always surface first. Old memories decay. Contradictions are detected. Related memories evolve together.
+ | Priority | Type | What it captures | Example |
+ |:--------:|------|-----------------|---------|
+ | 🔴 | **Correction** | Mistakes to never repeat | *"Don't mock the database in integration tests"* |
+ | 🟠 | **Decision** | Architectural choices + why | *"Chose Postgres over MongoDB for ACID compliance"* |
+ | 🟡 | **Pattern** | Coding style & habits | *"Prefers early returns over nested conditionals"* |
+ | 🟢 | **Preference** | Tool & workflow choices | *"Uses pnpm, not npm"* |
+ | 🔵 | **Topology** | Where things are | *"Auth module lives in src/auth/, uses JWT"* |
+ | ⚪ | **Fact** | General project knowledge | *"API uses REST, launched January 2025"* |
 
- ## Quick start
+ Corrections always surface first. They're the "never do this" rules your AI should always follow.
 
- ### Claude Code
+ ---
+
+ ## 🚀 Get Started
+
+ ### Step 1: Install
+
+ You need [Node.js](https://nodejs.org) 18 or higher. Then:
+
+ ```bash
+ npm install -g @aman_asmuei/amem
+ ```
+
+ That's it. amem is now installed on your machine.
+
+ ### Step 2: Connect your AI tool
+
+ <details>
+ <summary><strong>Claude Code</strong></summary>
 
  Add to `~/.claude/settings.json`:
 
@@ -36,37 +82,151 @@ Add to `~/.claude/settings.json`:
  {
    "mcpServers": {
      "amem": {
-       "command": "npx",
-       "args": ["-y", "@aman_asmuei/amem"]
+       "command": "amem"
+     }
+   }
+ }
+ ```
+
+ Restart Claude Code. You'll see 7 memory tools available.
+
+ </details>
+
+ <details>
+ <summary><strong>Cursor</strong></summary>
+
+ Add to `.cursor/mcp.json` in your project:
+
+ ```json
+ {
+   "mcpServers": {
+     "amem": {
+       "command": "amem"
      }
    }
  }
  ```
 
- ### Cursor
+ Restart Cursor.
+
+ </details>
 
- Add to `.cursor/mcp.json`:
+ <details>
+ <summary><strong>Windsurf</strong></summary>
+
+ Add to your MCP configuration:
 
  ```json
  {
    "mcpServers": {
      "amem": {
-       "command": "npx",
-       "args": ["-y", "@aman_asmuei/amem"]
+       "command": "amem"
      }
    }
  }
  ```
 
- ### Any MCP client
+ </details>
+
+ <details>
+ <summary><strong>Any other MCP client</strong></summary>
+
+ amem speaks standard [Model Context Protocol](https://modelcontextprotocol.io/) over stdio. Point your client to:
+
+ ```bash
+ npx @aman_asmuei/amem
+ ```
+
+ </details>
+
+ ### Step 3: Start talking
+
+ That's it. Your AI now has memory tools. Ask it to remember something:
 
- amem speaks standard MCP over stdio. Any client that supports MCP can connect.
+ > *"Remember that we use Tailwind with a custom theme in this project."*
 
- ## Tools
+ Next conversation, ask:
 
- ### `memory_store`
+ > *"What CSS framework do we use?"*
 
- Store a typed memory with tags and confidence scoring.
+ It knows.
+
+ ---
+
+ ## 🧠 How It Works
+
+ ```
+ ┌──────────────────────────────────┐
+ │           Your AI Tool           │
+ │    Claude · Cursor · Windsurf    │
+ └──────────┬───────────────────────┘
+
+            MCP Protocol (local)
+
+ ┌──────────▼───────────────────────┐
+ │           amem server            │
+ │                                  │
+ │   Store → Score → Deduplicate    │
+ │   Recall → Rank → Surface        │
+ │                                  │
+ │  ┌────────────────────────────┐  │
+ │  │ SQLite + Local Embeddings  │  │
+ │  │ ~/.amem/memory.db          │  │
+ │  └────────────────────────────┘  │
+ └──────────────────────────────────┘
+ ```
+
+ **Everything stays on your machine.** No cloud. No API keys. No data leaving your laptop.
+
+ ### Smart ranking
+
+ Every memory gets a score:
+
+ ```
+ score = relevance × recency × confidence × importance
+ ```
+
+ - **Relevance** — How closely the memory matches what you're working on (semantic similarity)
+ - **Recency** — Recent memories score higher; old ones gradually fade
+ - **Confidence** — Memories confirmed multiple times score higher
+ - **Importance** — Corrections (1.0) > Decisions (0.85) > Patterns (0.7) > Facts (0.4)
+
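The four factors above multiply into a single score. A minimal TypeScript sketch of that formula (illustrative only, not amem's internals; the `0.995^hours` decay constant and per-type weights come from the scoring table in the previous README version):

```typescript
// Illustrative composite score: relevance × recency × confidence × importance.
// Names and shapes here are hypothetical, not amem's real API.
const IMPORTANCE: Record<string, number> = {
  correction: 1.0,
  decision: 0.85,
  pattern: 0.7,
  preference: 0.7,
  topology: 0.5,
  fact: 0.4,
};

// Recency decays exponentially with time since last access: 0.995^hours.
function recency(hoursSinceLastAccess: number): number {
  return Math.pow(0.995, hoursSinceLastAccess);
}

function score(
  relevance: number,  // semantic similarity to the query, 0..1
  hours: number,      // hours since the memory was last accessed
  confidence: number, // 0..1, reinforced by repeat confirmations
  type: string        // one of the six memory types
): number {
  return relevance * recency(hours) * confidence * (IMPORTANCE[type] ?? 0.4);
}
```

With equal relevance and confidence, a correction outweighs a fact by 1.0 vs 0.4, which is why corrections surface first.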
+ ### Conflict detection
+
+ Store a memory that contradicts an existing one? amem catches it:
+
+ - **>85% similar but different** → Flags the conflict, updates the existing memory
+ - **>80% similar and agreeing** → Reinforces the existing memory (+confidence)
+ - **No match** → Stores as new
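The three branches above amount to a similarity threshold check. A hypothetical sketch in TypeScript (names are illustrative; the real implementation may differ):

```typescript
// Hypothetical decision logic for the thresholds listed above.
type Outcome = "update-conflicting" | "reinforce-existing" | "store-new";

function resolve(similarity: number, sameIntent: boolean): Outcome {
  if (similarity > 0.85 && !sameIntent) return "update-conflicting"; // contradiction
  if (similarity > 0.8 && sameIntent) return "reinforce-existing";   // agreement
  return "store-new";                                                // genuinely new
}
```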
+
+ ### Memory evolution
+
+ When you store a new memory, related existing memories get reinforced automatically. Your knowledge base stays connected and current.
+
+ ---
+
+ ## 🔧 Tools
+
+ amem gives your AI **7 tools** it can use during conversation:
+
+ ### Core tools
+
+ | Tool | What it does |
+ |------|-------------|
+ | `memory_store` | Save a single memory with type, tags, and confidence |
+ | `memory_recall` | Search memories by meaning (not just keywords) |
+ | `memory_context` | Load everything relevant to a topic, organized by type |
+ | `memory_extract` | Batch-save multiple memories at once from a conversation |
+ | `memory_forget` | Delete outdated or incorrect memories |
+
+ ### Utility tools
+
+ | Tool | What it does |
+ |------|-------------|
+ | `memory_stats` | See how many memories you have, broken down by type |
+ | `memory_export` | Export all memories as readable markdown |
+
+ ### Example: Storing a memory
 
  ```
  memory_store({
@@ -77,18 +237,12 @@ memory_store({
  })
  ```
 
- ### `memory_recall`
-
- Semantic search across all memories. Returns results ranked by composite score.
+ ### Example: Recalling memories
 
  ```
- memory_recall({
-   query: "TypeScript best practices",
-   limit: 5
- })
+ memory_recall({ query: "TypeScript best practices", limit: 5 })
  ```
 
- Returns:
  ```
  1. [correction] Never use 'any' type — always define proper interfaces
     Score: 0.892 | Confidence: 100% | Age: 2d ago
@@ -97,18 +251,12 @@ Returns:
     Score: 0.756 | Confidence: 85% | Age: 5d ago
  ```
 
- ### `memory_context`
-
- Load full context for a topic. Groups by type with corrections first.
+ ### Example: Loading context for a task
 
  ```
- memory_context({
-   topic: "authentication system",
-   max_tokens: 2000
- })
+ memory_context({ topic: "authentication system" })
  ```
 
- Returns:
  ```markdown
  ## Context for: authentication system
 
@@ -122,142 +270,142 @@ Returns:
  - Auth module is in src/auth/, middleware in src/middleware/auth.ts (85% confidence)
  ```
 
- ### `memory_extract`
+ ### Example: Batch extraction
 
- Batch-extract memories from a conversation. The AI calls this proactively.
+ Your AI can extract multiple memories from a single conversation:
 
  ```
  memory_extract({
    memories: [
-     { content: "Don't mock the DB in integration tests", type: "correction", confidence: 1.0, tags: ["testing"] },
-     { content: "Chose event sourcing for audit trail", type: "decision", confidence: 0.9, tags: ["architecture"] }
+     { content: "Don't mock the DB in integration tests", type: "correction", confidence: 1.0 },
+     { content: "Chose event sourcing for audit trail", type: "decision", confidence: 0.9 }
    ]
  })
  ```
 
- Automatically deduplicates — if a memory is >85% similar to an existing one, it reinforces the existing memory instead of creating a duplicate.
+ ---
 
- ### `memory_forget`
+ ## 💻 CLI
 
- Delete specific memories or search-and-delete with confirmation.
+ amem also includes a command-line interface for managing memories directly:
 
- ```
- memory_forget({ id: "abc12345" })
- memory_forget({ query: "old project", confirm: true })
+ ```bash
+ amem-cli recall "authentication"     # Search memories
+ amem-cli stats                       # Show statistics
+ amem-cli list                        # List all memories
+ amem-cli list --type correction      # List by type
+ amem-cli export --file memories.md   # Export to markdown
+ amem-cli forget abc12345             # Delete by ID
  ```
 
- ### `memory_stats`
+ ---
 
- Show memory statistics: total count, breakdown by type, confidence distribution.
+ ## 📡 MCP Resources
 
- ### `memory_export`
+ amem exposes **4 resources** that AI clients can read proactively:
 
- Export all memories as formatted markdown, grouped by type.
+ | Resource | What it provides |
+ |----------|-----------------|
+ | `amem://corrections` | All active corrections — hard rules the AI should always follow |
+ | `amem://decisions` | Past architectural decisions and their rationale |
+ | `amem://profile` | Your preferences and coding patterns |
+ | `amem://summary` | Quick overview of everything stored |
 
- ## MCP Prompts
+ ---
 
- amem includes two prompts that teach AI clients how to use it effectively:
+ ## ⚙️ Configuration
 
- - **`extraction-guide`** — When and what to extract from conversations
- - **`session-start`** — How to load relevant context at conversation start
+ amem works out of the box with zero configuration. For advanced use:
 
- ## MCP Resources
+ | Environment Variable | Default | Description |
+ |---------------------|---------|-------------|
+ | `AMEM_DIR` | `~/.amem` | Where amem stores data |
+ | `AMEM_DB` | `~/.amem/memory.db` | Database file path |
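The FAQ below notes that `AMEM_DB` can be set per project for isolated memories. A minimal shell sketch of that setup (the path is illustrative):

```shell
# Hypothetical per-project setup: point AMEM_DB at a project-local database
# so this project's memories stay separate from the global ~/.amem store.
export AMEM_DB="$PWD/.amem/memory.db"
mkdir -p "$(dirname "$AMEM_DB")"
```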
 
- Proactive context that clients can read automatically:
+ ---
 
- | Resource | Description |
- |----------|-------------|
- | `amem://corrections` | All active corrections — hard constraints |
- | `amem://decisions` | Architectural decisions and rationale |
- | `amem://profile` | Developer preferences and patterns |
- | `amem://summary` | Quick overview of all stored memories |
+ ## FAQ
 
- ## CLI
+ <details>
+ <summary><strong>Is my data sent to the cloud?</strong></summary>
 
- ```bash
- amem-cli recall "authentication"     # Search memories
- amem-cli stats                       # Show statistics
- amem-cli list --type correction      # List by type
- amem-cli export --file memories.md   # Export to markdown
- amem-cli forget abc12345             # Delete a memory
- ```
+ No. Everything stays on your machine. amem uses a local SQLite database at `~/.amem/memory.db` and generates embeddings locally using an 80MB model that runs on your CPU. No internet connection required after the first model download.
 
- ## How it works
+ </details>
 
- ```
- ┌─────────────────────────────────┐
- │ AI Client │
- │ Claude Code · Cursor · Any MCP │
- └──────────┬──────────────────────┘
- │ MCP Protocol (stdio)
- ┌──────────▼──────────────────────┐
- │ amem server │
- │ │
- │ ┌───────────┐ ┌────────────┐ │
- │ │ Scoring │ │ Conflict │ │
- │ │ Engine │ │ Detection │ │
- │ └─────┬─────┘ └─────┬──────┘ │
- │ │ │ │
- │ ┌─────▼──────────────▼──────┐ │
- │ │ SQLite + Embeddings │ │
- │ │ ~/.amem/memory.db │ │
- │ └───────────────────────────┘ │
- └─────────────────────────────────┘
- ```
+ <details>
+ <summary><strong>Does it work offline?</strong></summary>
 
- ### Scoring formula
+ Yes. After the first run (which downloads the embedding model), amem works completely offline.
 
- ```
- score = relevance x recency x confidence x importance
- ```
+ </details>
 
- | Factor | How it works |
- |--------|-------------|
- | **Relevance** | Cosine similarity between query and memory embeddings (0-1) |
- | **Recency** | Exponential decay: `0.995^hours_since_last_access` |
- | **Confidence** | How many times confirmed (0-1, corrections from user = 1.0) |
- | **Importance** | Memory type weight: correction(1.0) > decision(0.85) > pattern(0.7) > preference(0.7) > topology(0.5) > fact(0.4) |
+ <details>
+ <summary><strong>What AI tools does it work with?</strong></summary>
 
- ### Conflict detection
+ Any tool that supports the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) — including Claude Code, Cursor, Windsurf, and many others. The list is growing rapidly.
 
- When storing a new memory, amem checks for conflicts:
- - **>85% similarity** with different content — conflict detected, existing memory updated
- - **>80% similarity** with same intent — existing memory reinforced (confidence +0.1)
- - **No match** — new memory stored
+ </details>
 
- ### Memory evolution
+ <details>
+ <summary><strong>How much memory/disk does it use?</strong></summary>
 
- When a new memory is stored, related existing memories (0.6-0.8 similarity) are reinforced — their access timestamps update, keeping them active and relevant.
+ The embedding model is ~80MB (downloaded once). The SQLite database grows with your memories — typically a few MB even after months of use. CPU usage is minimal.
 
- ### Local-first
+ </details>
 
- - All data stays on your machine at `~/.amem/memory.db`
- - Embeddings generated locally via `all-MiniLM-L6-v2` (~80MB model, runs on CPU)
- - No cloud, no API keys, no data leaving your laptop
- - Works offline after first model download
+ <details>
+ <summary><strong>Can I see what's stored?</strong></summary>
 
- ## Configuration
+ Yes! Use `amem-cli list` to see all memories, `amem-cli stats` for an overview, or `amem-cli export --file backup.md` to export everything as readable markdown.
 
- | Variable | Default | Description |
- |----------|---------|-------------|
- | `AMEM_DIR` | `~/.amem` | Directory for amem data |
- | `AMEM_DB` | `~/.amem/memory.db` | Database file path |
+ </details>
+
+ <details>
+ <summary><strong>Can I delete specific memories?</strong></summary>
+
+ Yes. Use `amem-cli forget <id>` or ask your AI to call `memory_forget`. You can also search-and-delete: `memory_forget({ query: "old project", confirm: true })`.
 
- ## Roadmap
+ </details>
 
- - [x] 7 MCP tools (store, recall, context, forget, extract, stats, export)
- - [x] 2 MCP prompts (extraction guide, session start)
- - [x] 4 MCP resources (corrections, decisions, profile, summary)
- - [x] CLI with 5 commands
- - [x] Local embeddings via HuggingFace transformers
+ <details>
+ <summary><strong>Does it slow down my AI?</strong></summary>
+
+ No. Memory operations typically take under 50ms. Embedding generation for new memories takes ~200ms. The server runs as a lightweight background process.
+
+ </details>
+
+ <details>
+ <summary><strong>Can I use it across multiple projects?</strong></summary>
+
+ Yes. amem stores memories globally at `~/.amem/memory.db` by default. All your AI conversations across all projects share the same memory. You can also set `AMEM_DB` per-project for isolated memories.
+
+ </details>
+
+ ---
+
+ ## 🗺️ Roadmap
+
+ - [x] 7 MCP tools for storing, recalling, and managing memories
+ - [x] Semantic search with local embeddings
+ - [x] Smart conflict detection and deduplication
  - [x] Memory evolution (related memories reinforce each other)
- - [x] Conflict detection and deduplication
+ - [x] CLI for direct memory management
+ - [x] MCP prompts and resources for proactive context
  - [x] Published on npm
- - [ ] Memory verification (check code-related memories against filesystem)
- - [ ] Knowledge graph (entity + relation tables)
- - [ ] Team memory (shared project context via git-synced SQLite)
+ - [ ] Memory verification against filesystem
+ - [ ] Knowledge graph with entity relationships
+ - [ ] Team memory (shared context across developers)
  - [ ] Proactive mid-conversation context injection
 
- ## License
+ ---
+
+ <div align="center">
+
+ **Built by [Aman Asmuei](https://github.com/amanasmuei)**
+
+ [Report Bug](https://github.com/amanasmuei/amem/issues) · [Request Feature](https://github.com/amanasmuei/amem/issues) · [npm](https://www.npmjs.com/package/@aman_asmuei/amem)
+
+ MIT License
 
- MIT
+ </div>
package/package.json CHANGED
@@ -1,9 +1,10 @@
  {
    "name": "@aman_asmuei/amem",
-   "version": "0.1.2",
+   "version": "0.1.4",
    "description": "The memory layer for AI coding tools. Local-first, developer-specific, works everywhere.",
    "type": "module",
    "bin": {
+     "@aman_asmuei/amem": "dist/index.js",
      "amem": "dist/index.js",
      "amem-cli": "dist/cli.js"
    },