@aman_asmuei/amem 0.1.2 → 0.1.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +285 -134
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,36 +1,102 @@
<div align="center">

# amem

### Give your AI a memory it never forgets

Your AI assistant forgets everything the moment a conversation ends.<br/>
**amem** gives it persistent memory — so it remembers your preferences, decisions, and corrections forever.

[](https://www.npmjs.com/package/@aman_asmuei/amem)
[](LICENSE)
[](https://github.com/amanasmuei/amem/actions)
[](https://nodejs.org)

[Get Started](#-get-started) · [How It Works](#-how-it-works) · [Tools Reference](#-tools) · [CLI](#-cli) · [FAQ](#-faq)

</div>

---

## The Problem

Every time you start a new conversation with an AI coding assistant, it starts from zero. It doesn't know:

- That you told it **three times** not to use `any` in TypeScript
- That your team **chose PostgreSQL** over MongoDB last month (and why)
- That you **prefer** functional style, early returns, and pnpm
- Where your auth module lives, or how your project is structured

You end up repeating yourself. Every. Single. Time.

## The Solution

**amem** is a memory layer that plugs into any AI tool — Claude Code, Cursor, Windsurf, or anything that speaks MCP. It remembers what matters and surfaces it automatically.

```
You: "Don't use any type in TypeScript"

amem saves this as a correction (highest priority).
Next conversation — or next month — your AI already knows.
```

---

## What Gets Remembered

amem organizes memories into six types, ranked by importance:

| Priority | Type | What it captures | Example |
|:--------:|------|------------------|---------|
| 🔴 | **Correction** | Mistakes to never repeat | *"Don't mock the database in integration tests"* |
| 🟠 | **Decision** | Architectural choices + why | *"Chose Postgres over MongoDB for ACID compliance"* |
| 🟡 | **Pattern** | Coding style & habits | *"Prefers early returns over nested conditionals"* |
| 🟢 | **Preference** | Tool & workflow choices | *"Uses pnpm, not npm"* |
| 🔵 | **Topology** | Where things are | *"Auth module lives in src/auth/, uses JWT"* |
| ⚪ | **Fact** | General project knowledge | *"API uses REST, launched January 2025"* |

Corrections always surface first. They're the "never do this" rules your AI should always follow.

---

## 🚀 Get Started

### Step 1: Install

You need [Node.js](https://nodejs.org) 18 or higher. Then:

```bash
npx @aman_asmuei/amem
```

That's it. amem runs as a local server on your machine.

### Step 2: Connect your AI tool

<details>
<summary><strong>Claude Code</strong></summary>

Add to `~/.claude/settings.json`:

```json
{
  "mcpServers": {
    "amem": {
      "command": "npx",
      "args": ["-y", "@aman_asmuei/amem"]
    }
  }
}
```

Restart Claude Code. You'll see 7 memory tools available.

</details>

<details>
<summary><strong>Cursor</strong></summary>

Add to `.cursor/mcp.json` in your project:

```json
{
@@ -43,9 +109,14 @@ Add to `~/.claude/settings.json`:
}
```

Restart Cursor.

</details>

<details>
<summary><strong>Windsurf</strong></summary>

Add to your MCP configuration:

```json
{
@@ -58,15 +129,107 @@ Add to `.cursor/mcp.json`:
}
```

</details>

<details>
<summary><strong>Any other MCP client</strong></summary>

amem speaks standard [Model Context Protocol](https://modelcontextprotocol.io/) over stdio. Point your client to:

```bash
npx @aman_asmuei/amem
```

</details>

### Step 3: Start talking

That's it. Your AI now has memory tools. Ask it to remember something:

> *"Remember that we use Tailwind with a custom theme in this project."*

Next conversation, ask:

> *"What CSS framework do we use?"*

It knows.

---

## 🧠 How It Works

```
┌──────────────────────────────────┐
│           Your AI Tool           │
│    Claude · Cursor · Windsurf    │
└──────────┬───────────────────────┘
           │
   MCP Protocol (local)
           │
┌──────────▼───────────────────────┐
│           amem server            │
│                                  │
│   Store  → Score → Deduplicate   │
│   Recall → Rank  → Surface       │
│                                  │
│  ┌────────────────────────────┐  │
│  │ SQLite + Local Embeddings  │  │
│  │     ~/.amem/memory.db      │  │
│  └────────────────────────────┘  │
└──────────────────────────────────┘
```

**Everything stays on your machine.** No cloud. No API keys. No data leaving your laptop.

### Smart ranking

Every memory gets a score:

```
score = relevance × recency × confidence × importance
```

- **Relevance** — How closely the memory matches what you're working on (semantic similarity)
- **Recency** — Recent memories score higher; old ones gradually fade
- **Confidence** — Memories confirmed multiple times score higher
- **Importance** — Corrections (1.0) > Decisions (0.85) > Patterns (0.7) > Facts (0.4)
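
The formula above can be sketched in a few lines. This is an illustration only, not amem's actual source: the importance weights follow the type ranking in this README, and the recency decay (0.995 per hour since last access) follows the decay formula documented for amem's scoring engine.

```javascript
// Illustrative sketch of the composite score — not amem's implementation.
const IMPORTANCE = {
  correction: 1.0,
  decision: 0.85,
  pattern: 0.7,
  preference: 0.7,
  topology: 0.5,
  fact: 0.4,
};

function scoreMemory(memory, relevance) {
  // Hours since the memory was last accessed.
  const hoursSinceAccess = (Date.now() - memory.lastAccessedAt) / 3_600_000;
  // Exponential decay: recent memories stay near 1.0, old ones fade.
  const recency = Math.pow(0.995, hoursSinceAccess);
  return relevance * recency * memory.confidence * IMPORTANCE[memory.type];
}

// A fresh, fully confirmed correction outranks a lower-confidence fact
// even when both match the query equally well:
const correction = { type: "correction", confidence: 1.0, lastAccessedAt: Date.now() };
const fact = { type: "fact", confidence: 0.8, lastAccessedAt: Date.now() };

console.log(scoreMemory(correction, 0.9) > scoreMemory(fact, 0.9)); // true
```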

### Conflict detection

Store a memory that contradicts an existing one? amem catches it:

- **>85% similar but different** → Flags the conflict, updates the existing memory
- **>80% similar and agreeing** → Reinforces the existing memory (+confidence)
- **No match** → Stores as new
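
The three-way branch above boils down to something like this sketch (assumed logic for illustration; in amem, `similarity` comes from cosine similarity between embeddings):

```javascript
// Assumed decision logic, for illustration only.
// `similarity`: cosine similarity to the closest existing memory (0-1).
// `agrees`: whether the new memory says the same thing as the match.
function resolveIncoming(similarity, agrees) {
  if (similarity > 0.85 && !agrees) return "conflict: update existing";
  if (similarity > 0.80 && agrees) return "reinforce existing (+confidence)";
  return "store as new";
}

console.log(resolveIncoming(0.92, false)); // conflict: update existing
console.log(resolveIncoming(0.88, true));  // reinforce existing (+confidence)
console.log(resolveIncoming(0.30, true));  // store as new
```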

### Memory evolution

When you store a new memory, related existing memories get reinforced automatically. Your knowledge base stays connected and current.

---

## 🔧 Tools

amem gives your AI **7 tools** it can use during conversation:

### Core tools

| Tool | What it does |
|------|--------------|
| `memory_store` | Save a single memory with type, tags, and confidence |
| `memory_recall` | Search memories by meaning (not just keywords) |
| `memory_context` | Load everything relevant to a topic, organized by type |
| `memory_extract` | Batch-save multiple memories at once from a conversation |
| `memory_forget` | Delete outdated or incorrect memories |

### Utility tools

| Tool | What it does |
|------|--------------|
| `memory_stats` | See how many memories you have, broken down by type |
| `memory_export` | Export all memories as readable markdown |

### Example: Storing a memory

```
memory_store({
@@ -77,18 +240,12 @@ memory_store({
})
```

### Example: Recalling memories

```
memory_recall({ query: "TypeScript best practices", limit: 5 })
```

```
1. [correction] Never use 'any' type — always define proper interfaces
   Score: 0.892 | Confidence: 100% | Age: 2d ago
@@ -97,18 +254,12 @@
   Score: 0.756 | Confidence: 85% | Age: 5d ago
```

### Example: Loading context for a task

```
memory_context({ topic: "authentication system" })
```

```markdown
## Context for: authentication system

@@ -122,142 +273,142 @@ Returns:
- Auth module is in src/auth/, middleware in src/middleware/auth.ts (85% confidence)
```

### Example: Batch extraction

Your AI can extract multiple memories from a single conversation:

```
memory_extract({
  memories: [
    { content: "Don't mock the DB in integration tests", type: "correction", confidence: 1.0 },
    { content: "Chose event sourcing for audit trail", type: "decision", confidence: 0.9 }
  ]
})
```

---

## 💻 CLI

amem also includes a command-line interface for managing memories directly:

```bash
amem-cli recall "authentication"     # Search memories
amem-cli stats                       # Show statistics
amem-cli list                        # List all memories
amem-cli list --type correction      # List by type
amem-cli export --file memories.md   # Export to markdown
amem-cli forget abc12345             # Delete by ID
```

---

## 📡 MCP Resources

amem exposes **4 resources** that AI clients can read proactively:

| Resource | What it provides |
|----------|------------------|
| `amem://corrections` | All active corrections — hard rules the AI should always follow |
| `amem://decisions` | Past architectural decisions and their rationale |
| `amem://profile` | Your preferences and coding patterns |
| `amem://summary` | Quick overview of everything stored |

---

## ⚙️ Configuration

amem works out of the box with zero configuration. For advanced use:

| Environment Variable | Default | Description |
|----------------------|---------|-------------|
| `AMEM_DIR` | `~/.amem` | Where amem stores data |
| `AMEM_DB` | `~/.amem/memory.db` | Database file path |
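
For example, to keep one project's memories isolated, you can point `AMEM_DB` at a project-local file in your MCP server entry. This sketch assumes your client's server config supports an `env` map (Claude Code and Cursor both do); the relative path resolves against the client's working directory:

```json
{
  "mcpServers": {
    "amem": {
      "command": "npx",
      "args": ["-y", "@aman_asmuei/amem"],
      "env": { "AMEM_DB": "./.amem/memory.db" }
    }
  }
}
```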

---
## ❓ FAQ

<details>
<summary><strong>Is my data sent to the cloud?</strong></summary>

No. Everything stays on your machine. amem uses a local SQLite database at `~/.amem/memory.db` and generates embeddings locally using an 80MB model that runs on your CPU. No internet connection required after the first model download.

</details>

<details>
<summary><strong>Does it work offline?</strong></summary>

Yes. After the first run (which downloads the embedding model), amem works completely offline.

</details>

<details>
<summary><strong>What AI tools does it work with?</strong></summary>

Any tool that supports the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) — including Claude Code, Cursor, Windsurf, and many others. The list is growing rapidly.

</details>

<details>
<summary><strong>How much memory/disk does it use?</strong></summary>

The embedding model is ~80MB (downloaded once). The SQLite database grows with your memories — typically a few MB even after months of use. CPU usage is minimal.

</details>

<details>
<summary><strong>Can I see what's stored?</strong></summary>

Yes! Use `amem-cli list` to see all memories, `amem-cli stats` for an overview, or `amem-cli export --file backup.md` to export everything as readable markdown.

</details>

<details>
<summary><strong>Can I delete specific memories?</strong></summary>

Yes. Use `amem-cli forget <id>` or ask your AI to call `memory_forget`. You can also search-and-delete: `memory_forget({ query: "old project", confirm: true })`.

</details>

<details>
<summary><strong>Does it slow down my AI?</strong></summary>

No. Memory operations typically take under 50ms. Embedding generation for new memories takes ~200ms. The server runs as a lightweight background process.

</details>

<details>
<summary><strong>Can I use it across multiple projects?</strong></summary>

Yes. amem stores memories globally at `~/.amem/memory.db` by default. All your AI conversations across all projects share the same memory. You can also set `AMEM_DB` per-project for isolated memories.

</details>

---

## 🗺️ Roadmap

- [x] 7 MCP tools for storing, recalling, and managing memories
- [x] Semantic search with local embeddings
- [x] Smart conflict detection and deduplication
- [x] Memory evolution (related memories reinforce each other)
- [x] CLI for direct memory management
- [x] MCP prompts and resources for proactive context
- [x] Published on npm
- [ ] Memory verification against filesystem
- [ ] Knowledge graph with entity relationships
- [ ] Team memory (shared context across developers)
- [ ] Proactive mid-conversation context injection

---

<div align="center">

**Built by [Aman Asmuei](https://github.com/amanasmuei)**

[Report Bug](https://github.com/amanasmuei/amem/issues) · [Request Feature](https://github.com/amanasmuei/amem/issues) · [npm](https://www.npmjs.com/package/@aman_asmuei/amem)

MIT License

</div>