@aman_asmuei/aman-agent 0.4.0 → 0.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +210 -88
- package/dist/index.js +495 -118
- package/dist/index.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -8,7 +8,7 @@
 <h1 align="center">aman-agent</h1>
 
 <p align="center">
-  <strong>
+  <strong>The AI companion that actually remembers you.</strong>
 </p>
 
 <p align="center">
@@ -24,14 +24,14 @@
 </p>
 
 <p align="center">
-
-
+  An AI companion that learns from every conversation, recalls relevant memories per message,<br/>
+  extracts knowledge silently, and adapts to your time of day — all running locally.
 </p>
 
 <p align="center">
   <a href="#-quick-start">Quick Start</a> •
-  <a href="#-
-  <a href="#-
+  <a href="#-intelligent-companion-features">Features</a> •
+  <a href="#-how-it-works">How It Works</a> •
   <a href="#-commands">Commands</a> •
   <a href="#-supported-llms">LLMs</a> •
   <a href="#-the-ecosystem">Ecosystem</a>
@@ -43,17 +43,17 @@
 
 AI coding assistants forget everything between sessions. You re-explain your stack, preferences, and boundaries every time. There's no single place where your AI loads its full context and just *works*.
 
+Other "memory" solutions are just markdown files the AI reads on startup — they don't *learn* from conversation, they don't *recall* per-message, and they silently lose context when the window fills up.
+
 ## The Solution
 
-**aman-agent**
+**aman-agent** is the first open-source AI companion that genuinely learns from conversation. It doesn't just store memories — it recalls them per-message, extracts new knowledge automatically, and uses your LLM to intelligently compress context instead of truncating it.
 
 ```bash
 npx @aman_asmuei/aman-agent
 ```
 
-
-
-> **Your AI knows who it is, what it remembers, what tools it has, and what rules to follow — before you say a word.**
+> **Your AI knows who it is, what it remembers, what tools it has, what rules to follow, what time it is, and what reminders are due — before you say a word.**
 
 ---
 
@@ -85,42 +85,146 @@ aman-agent --budget 12000
 
 ---
 
-##
+## Intelligent Companion Features
 
-
-|---|---|---|
-| **Streaming with tools** | Blocked — no output until LLM finishes | Real-time streaming, even during tool calls |
-| **Conversation persistence** | 200-char resume, full history lost | Full conversation saved to amem on exit |
-| **Context management** | Messages grow forever, eventual crash | Auto-trims at 80K tokens, keeps recent context |
-| **`/save` command** | N/A | Manually save conversation mid-session |
-| **Reminders/Schedules** | Broken — lost on exit, no daemon | Removed (replaced with `/save`) |
+### Per-Message Memory Recall
 
-
+Every message you send triggers a semantic search against your memory database. Relevant memories are injected into the AI's context for *that turn only* — so the AI always has the right context without bloating the conversation.
 
-
+```
+You > Let's set up the auth service
 
-
+Agent recalls:
+- [decision] Auth service uses JWT tokens (confidence: 0.92)
+- [preference] User prefers PostgreSQL (confidence: 0.88)
+- [fact] Auth middleware rewrite driven by compliance (confidence: 0.75)
 
-
-
-
-| **Memory** | `~/.amem/memory.db` | Past decisions, corrections, patterns, conversation history |
-| **Tools** | `~/.akit/kit.md` | Available capabilities (GitHub, search, databases) |
-| **Workflows** | `~/.aflow/flow.md` | Multi-step processes (code review, bug fix) |
-| **Guardrails** | `~/.arules/rules.md` | Safety boundaries and permissions |
-| **Skills** | `~/.askill/skills.md` | Deep domain expertise |
+Aman > Based on our previous decisions, I'll set up JWT-based auth
+with PostgreSQL, keeping the compliance requirements in mind...
+```
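The recall step described in that hunk can be sketched roughly as follows. The keyword-overlap scoring is a toy stand-in for amem's real embedding search, and `recallMemories` / `augmentSystemPrompt` are illustrative names, not the package's actual API:

```javascript
// Sketch of per-message recall: score stored memories against the incoming
// message, keep the top K, and inject them into this turn's system prompt.
function recallMemories(message, memories, topK = 5) {
  const words = new Set(message.toLowerCase().split(/\W+/).filter(Boolean));
  return memories
    .map((m) => ({
      ...m,
      // Naive relevance: count of memory words that appear in the message.
      score: m.text.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length,
    }))
    .filter((m) => m.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

// The memories augment the prompt for this turn only, so the running
// conversation history never grows because of recall.
function augmentSystemPrompt(basePrompt, recalled) {
  if (recalled.length === 0) return basePrompt;
  const lines = recalled.map((m) => `- [${m.kind}] ${m.text}`);
  return `${basePrompt}\n\nRelevant memories:\n${lines.join("\n")}`;
}
```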
 
-
+### Hybrid Memory Extraction
 
-
+After every response, the agent analyzes the conversation and extracts memories worth keeping. Preferences, facts, patterns, and topology are stored silently. Decisions and corrections require your confirmation.
 
-
+```
+You > I think we should go with microservices for the payment system
 
+Aman > That makes sense given the compliance isolation requirements...
+
+Remember: "Payment system will use microservices architecture"? (y/N) y
+[1 memory stored]
 ```
-
+
+### LLM-Powered Context Summarization
+
+When the conversation gets long, the agent uses your LLM to generate real summaries — preserving decisions, preferences, and action items. No more losing critical context to 150-character truncation.
+
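A minimal sketch of that summarize-instead-of-truncate behavior, assuming an injected `summarize` callback for the LLM call; the four-message keep window and four-chars-per-token estimate are illustrative guesses, not the package's real constants:

```javascript
// Rough token estimate: ~4 characters per token (an assumption, not the
// package's real tokenizer).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Replace older messages with an LLM-generated summary once the budget is
// exceeded, falling back to a plain text preview if the LLM call fails.
async function trimContext(messages, budget, summarize) {
  const total = messages.reduce((n, m) => n + estimateTokens(m.content), 0);
  if (total <= budget) return messages;

  const keep = messages.slice(-4);      // always keep the most recent turns
  const older = messages.slice(0, -4);
  let summary;
  try {
    summary = await summarize(older);   // real summary when the LLM is up
  } catch {
    // Fallback: short previews rather than losing the context entirely.
    summary = older.map((m) => m.content.slice(0, 150)).join(" / ");
  }
  return [
    { role: "system", content: `Summary of earlier conversation: ${summary}` },
    ...keep,
  ];
}
```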
+### Parallel Tool Execution
+
+When the AI needs multiple tools, they run in parallel via `Promise.all` instead of sequentially. Faster responses, same guardrail checks.
+
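The parallel execution described there amounts to mapping the tool calls through `Promise.all`; this sketch assumes a hypothetical `checkGuardrails` verdict shape and tool-call format rather than the package's real types:

```javascript
// Sketch of parallel tool execution: every call is still individually
// guardrail-checked, and per-call failures don't abort the batch.
async function executeToolCalls(calls, tools, checkGuardrails) {
  return Promise.all(
    calls.map(async (call) => {
      const verdict = checkGuardrails(call);
      if (!verdict.allowed) {
        return { id: call.id, error: `blocked: ${verdict.reason}` };
      }
      try {
        const result = await tools[call.name](call.args);
        return { id: call.id, result };
      } catch (err) {
        // Errors are captured per call so other tools still complete.
        return { id: call.id, error: String(err) };
      }
    })
  );
}
```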
+### Retry with Backoff
+
+LLM calls and MCP tool calls automatically retry on transient errors (rate limits, timeouts) with exponential backoff and jitter. Auth errors fail immediately.
+
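A self-contained sketch of that retry behavior; the transient-error test, base delay, and cap are assumptions rather than the package's real values:

```javascript
// Exponential delay with "equal jitter": half deterministic, half random.
function backoffDelay(attempt, baseMs = 500, capMs = 30000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return exp / 2 + Math.random() * (exp / 2);
}

// Assumed transient conditions: HTTP 429 rate limits and socket timeouts.
function isTransient(err) {
  return Boolean(err) && (err.status === 429 || err.code === "ETIMEDOUT");
}

async function withRetry(fn, maxAttempts = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Auth errors (and anything else non-transient) fail immediately.
      if (!isTransient(err) || attempt + 1 >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
    }
  }
}
```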
+### Time-Aware Greetings
+
+The agent knows the time of day and day of week. It adapts its tone naturally — you'll notice the difference between a morning and a late-night session.
+
+### Reminders
+
 ```
+You > Remind me to review PR #42 by Thursday
 
-
+Aman > I'll set that reminder for you.
+[Reminder set: "Review PR #42" — due 2026-03-27]
+```
+
+Next session:
+```
+[OVERDUE] Review PR #42 (was due 2026-03-27)
+```
+
+Reminders persist in SQLite across sessions. Set them, forget them, get nudged.
+
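The startup reminder check can be sketched as a simple bucketing by due date. Reminders live in SQLite in the real package; plain objects stand in here, and the reminder shape is an assumption:

```javascript
// Bucket reminders into overdue / today / upcoming relative to `now`.
function classifyReminders(reminders, now = new Date()) {
  const startOfDay = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  const endOfDay = new Date(startOfDay.getTime() + 24 * 60 * 60 * 1000);
  const buckets = { overdue: [], today: [], upcoming: [] };
  for (const r of reminders) {
    const due = new Date(r.due);
    if (due < startOfDay) buckets.overdue.push(r);        // past days
    else if (due < endOfDay) buckets.today.push(r);       // due today
    else buckets.upcoming.push(r);                        // future days
  }
  return buckets;
}
```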
+### Memory Consolidation
+
+On every startup, the agent automatically merges duplicate memories, prunes stale low-confidence ones, and promotes frequently-accessed entries.
+
+```
+Memory health: 94% (merged 2 duplicates, pruned 1 stale)
+```
+
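One way the merge-and-prune part of that pass could look; the duplicate key, confidence threshold, and staleness window are illustrative guesses (the real thresholds are not documented in this diff), and the promotion step is omitted:

```javascript
// Sketch of startup consolidation: merge exact-duplicate memories and prune
// stale low-confidence ones. Promotion of hot entries is left out here.
function consolidate(memories, now = Date.now()) {
  const STALE_MS = 90 * 24 * 60 * 60 * 1000; // assumed ~90-day staleness window
  const byText = new Map();
  let merged = 0;
  for (const m of memories) {
    const key = m.text.trim().toLowerCase();
    const existing = byText.get(key);
    if (existing) {
      // Merging keeps the higher confidence of the two duplicates.
      existing.confidence = Math.max(existing.confidence, m.confidence);
      merged++;
    } else {
      byText.set(key, { ...m });
    }
  }
  const kept = [...byText.values()].filter(
    (m) => !(m.confidence < 0.3 && now - m.lastAccess > STALE_MS)
  );
  return { memories: kept, merged, pruned: byText.size - kept.length };
}
```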
+### Structured Debug Logging
+
+Every operation that can fail logs to `~/.aman-agent/debug.log` with structured JSON. No more silent failures — use `/debug` to see what's happening under the hood.
+
+---
+
+## How It Works
+
+```
+┌───────────────────────────────────────────────────────────┐
+│                       Your Terminal                       │
+│                                                           │
+│  You > tell me about our auth decisions                   │
+│                                                           │
+│  [recalling memories...]                                  │
+│  Agent > Based on your previous decisions:                │
+│  - OAuth2 with PKCE (decided 2 weeks ago)                 │
+│  - JWT for API tokens...                                  │
+│                                                           │
+│  [1 memory stored]                                        │
+└──────────────────────┬────────────────────────────────────┘
+                       │
+┌──────────────────────▼────────────────────────────────────┐
+│                    aman-agent runtime                     │
+│                                                           │
+│  On Startup                                               │
+│  ┌────────────────────────────────────────────────┐       │
+│  │ 1. Load ecosystem (identity, tools, rules...)  │       │
+│  │ 2. Connect MCP servers (aman-mcp + amem)       │       │
+│  │ 3. Consolidate memory (merge/prune/promote)    │       │
+│  │ 4. Check reminders (overdue/today/upcoming)    │       │
+│  │ 5. Inject time context (morning/evening/...)   │       │
+│  │ 6. Recall session context from memory          │       │
+│  └────────────────────────────────────────────────┘       │
+│                                                           │
+│  Per Message                                              │
+│  ┌────────────────────────────────────────────────┐       │
+│  │ 1. Semantic memory recall (top 5 relevant)     │       │
+│  │ 2. Augment system prompt with memories         │       │
+│  │ 3. Stream LLM response (with retry)            │       │
+│  │ 4. Execute tools in parallel (with guardrails) │       │
+│  │ 5. Extract memories from response              │       │
+│  │    - Auto-store: preferences, facts, patterns  │       │
+│  │    - Confirm: decisions, corrections           │       │
+│  └────────────────────────────────────────────────┘       │
+│                                                           │
+│  Context Management                                       │
+│  ┌────────────────────────────────────────────────┐       │
+│  │ Auto-trim at 80K tokens                        │       │
+│  │ LLM-powered summarization (not truncation)     │       │
+│  │ Fallback to text preview if LLM call fails     │       │
+│  └────────────────────────────────────────────────┘       │
+│                                                           │
+│  MCP Integration                                          │
+│  ┌────────────────────────────────────────────────┐       │
+│  │ aman-mcp → identity, tools, workflows, eval    │       │
+│  │ amem     → memory, knowledge graph, reminders  │       │
+│  └────────────────────────────────────────────────┘       │
+└───────────────────────────────────────────────────────────┘
+```
+
+### Session Lifecycle
+
+| Phase | What happens |
+|:---|:---|
+| **Start** | Load ecosystem, connect MCP, consolidate memory, check reminders, inject time context |
+| **Each turn** | Recall relevant memories, stream response, execute tools in parallel, extract new memories |
+| **Auto-trim** | LLM-powered summarization when approaching 80K tokens |
+| **Exit** | Save conversation to amem, update session resume, optional session rating |
 
 ---
 
@@ -136,6 +240,9 @@ Default budget: 8,000 tokens. Override with `--budget`.
 | `/skills` | View skills `[install\|uninstall ...]` |
 | `/eval` | View evaluation `[milestone ...]` |
 | `/memory` | View memories `[search\|clear ...]` |
+| `/decisions` | View decision log `[<project>]` |
+| `/export` | Export conversation to markdown |
+| `/debug` | Show debug log (last 20 entries) |
 | `/status` | Ecosystem dashboard |
 | `/doctor` | Health check all layers |
 | `/save` | Save conversation to memory |
@@ -147,66 +254,42 @@ Default budget: 8,000 tokens. Override with `--budget`.
 
 ---
 
-##
+## What It Loads
 
-
-|:---|:---|:---|:---|
-| **Anthropic** | Claude Sonnet 4.5, Opus 4.6, Haiku 4.5 | Full | Full (with tools) |
-| **OpenAI** | GPT-4o, GPT-4o Mini, o3 | Full | Full (with tools) |
-| **Ollama** | Llama, Mistral, Gemma, any local model | Text only | Full |
+On every session start, aman-agent assembles your full AI context:
 
-
+| Layer | Source | What it provides |
+|:---|:---|:---|
+| **Identity** | `~/.acore/core.md` | AI personality, your preferences, relationship state |
+| **Memory** | `~/.amem/memory.db` | Past decisions, corrections, patterns, conversation history |
+| **Reminders** | `~/.amem/memory.db` | Overdue, today, and upcoming reminders |
+| **Tools** | `~/.akit/kit.md` | Available capabilities (GitHub, search, databases) |
+| **Workflows** | `~/.aflow/flow.md` | Multi-step processes (code review, bug fix) |
+| **Guardrails** | `~/.arules/rules.md` | Safety boundaries and permissions |
+| **Skills** | `~/.askill/skills.md` | Deep domain expertise |
+| **Time** | System clock | Time of day, day of week for tone adaptation |
 
-
+All layers are optional — the agent works with whatever you've set up.
+
+### Token Budgeting
+
+Layers are included by priority when space is limited:
 
 ```
-┌──────────────────────────────────────────────┐
-│                Your Terminal                 │
-│                                              │
-│  You > tell me about our auth decisions      │
-│                                              │
-│  Agent > [using memory_recall...]            │
-│  Based on your previous decisions:           │
-│  - OAuth2 with PKCE (decided 2 weeks ago)    │
-│  - JWT for API tokens...                     │
-└─────────────────┬────────────────────────────┘
-                  │
-┌─────────────────▼────────────────────────────┐
-│              aman-agent runtime              │
-│                                              │
-│  System Prompt Assembly                      │
-│  ┌─────────────────────────────────────┐     │
-│  │ Identity + Memory + Tools +         │     │
-│  │ Workflows + Guardrails + Skills     │     │
-│  │ (priority-based token budgeting)    │     │
-│  └─────────────────────────────────────┘     │
-│                                              │
-│  Streaming LLM Client                        │
-│  ┌─────────────────────────────────────┐     │
-│  │ Anthropic / OpenAI / Ollama         │     │
-│  │ Always streaming, even with tools   │     │
-│  └─────────────────────────────────────┘     │
-│                                              │
-│  Context Manager                             │
-│  ┌─────────────────────────────────────┐     │
-│  │ Auto-trim at 80K tokens             │     │
-│  │ Keep initial context + recent msgs  │     │
-│  └─────────────────────────────────────┘     │
-│                                              │
-│  MCP Integration                             │
-│  ┌─────────────────────────────────────┐     │
-│  │ aman-mcp → identity, tools, eval    │     │
-│  │ amem → memory, knowledge            │     │
-│  └─────────────────────────────────────┘     │
-└──────────────────────────────────────────────┘
+Identity (always) → Guardrails → Workflows → Tools → Skills (can truncate)
 ```
 
-
+Default budget: 8,000 tokens. Override with `--budget`.
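That priority chain can be sketched as a greedy fill, assuming layers arrive pre-sorted by priority and using a rough four-chars-per-token estimate (`assembleLayers` is an illustrative name, not the package's real function):

```javascript
// Sketch of priority-based layer budgeting: identity always ships, and later
// layers are dropped once the token budget runs out.
function assembleLayers(layers, budget) {
  // `layers` are assumed pre-sorted: identity first, skills last.
  const included = [];
  let used = 0;
  for (const layer of layers) {
    const cost = Math.ceil(layer.content.length / 4); // rough token estimate
    if (layer.name === "identity" || used + cost <= budget) {
      included.push(layer.name);
      used += cost;
    }
  }
  return { included, used };
}
```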
 
-
-
-
-
+---
+
+## Supported LLMs
+
+| Provider | Models | Tool Use | Streaming |
+|:---|:---|:---|:---|
+| **Anthropic** | Claude Sonnet 4.5, Opus 4.6, Haiku 4.5 | Full | Full (with tools) |
+| **OpenAI** | GPT-4o, GPT-4o Mini, o3 | Full | Full (with tools) |
+| **Ollama** | Llama, Mistral, Gemma, any local model | Text only | Full |
 
 ---
 
@@ -218,7 +301,16 @@ Config is stored in `~/.aman-agent/config.json`:
 {
   "provider": "anthropic",
   "apiKey": "sk-ant-...",
-  "model": "claude-sonnet-4-5-20250514"
+  "model": "claude-sonnet-4-5-20250514",
+  "hooks": {
+    "memoryRecall": true,
+    "sessionResume": true,
+    "rulesCheck": true,
+    "workflowSuggest": true,
+    "evalPrompt": true,
+    "autoSessionSave": true,
+    "extractMemories": true
+  }
 }
 ```
 
@@ -227,6 +319,20 @@ Config is stored in `~/.aman-agent/config.json`:
 | Model override | `--model <id>` | From config |
 | Token budget | `--budget <n>` | 8000 |
 
+### Hook Toggles
+
+All hooks are on by default. Disable any in `config.json`:
+
+| Hook | What it controls |
+|:---|:---|
+| `memoryRecall` | Load memory context on session start |
+| `sessionResume` | Resume from last session state |
+| `rulesCheck` | Pre-tool guardrail enforcement |
+| `workflowSuggest` | Auto-detect matching workflows |
+| `evalPrompt` | Session rating on exit |
+| `autoSessionSave` | Save conversation to amem on exit |
+| `extractMemories` | Auto-extract memories from conversation |
+
 > Treat the config file like a credential — it contains your API key.
 
 ---
@@ -252,7 +358,7 @@ aman
 | Layer | Package | What it does |
 |:---|:---|:---|
 | Identity | [acore](https://github.com/amanasmuei/acore) | Personality, values, relationship memory |
-| Memory | [amem](https://github.com/amanasmuei/amem) | Persistent memory with knowledge graph (MCP) |
+| Memory | [amem](https://github.com/amanasmuei/amem) | Persistent memory with knowledge graph + reminders (MCP) |
 | Tools | [akit](https://github.com/amanasmuei/akit) | Portable AI tools (MCP + manual fallback) |
 | Workflows | [aflow](https://github.com/amanasmuei/aflow) | Reusable AI workflows |
 | Guardrails | [arules](https://github.com/amanasmuei/arules) | Safety boundaries and permissions |
@@ -265,13 +371,29 @@ aman
 
 ---
 
+## What Makes This Different
+
+| Feature | aman-agent | MemoryCore / Others |
+|:---|:---|:---|
+| Memory storage | SQLite + embeddings + knowledge graph | Markdown files |
+| Per-message recall | Semantic search every turn | Static blob at session start |
+| Memory extraction | Auto-extract from conversation (LLM) | AI must manually write to files |
+| Context compression | LLM-powered summarization | Truncation or line limits |
+| Tool execution | Parallel with guardrail checks | Sequential or none |
+| Reminders | Persistent, cross-session, deadline-aware | None |
+| Error handling | Structured JSON debug log | Silent failures |
+| Multi-LLM | Anthropic, OpenAI, Ollama | Usually single provider |
+| Reliability | Retry with exponential backoff | Single attempt |
+
+---
+
 ## Contributing
 
 ```bash
 git clone https://github.com/amanasmuei/aman-agent.git
 cd aman-agent && npm install
 npm run build # zero errors
-npm test #
+npm test # 84 tests pass
 ```
 
 PRs welcome. See [Issues](https://github.com/amanasmuei/aman-agent/issues).