@aman_asmuei/aman-agent 0.5.0 → 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -18,7 +18,7 @@
 
  <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue?style=for-the-badge" alt="MIT License" /></a>
  &nbsp;
- <img src="https://img.shields.io/badge/node-%E2%89%A518-brightgreen?style=for-the-badge&logo=node.js&logoColor=white" alt="Node.js 18+" />
+ <img src="https://img.shields.io/badge/node-%E2%89%A520-brightgreen?style=for-the-badge&logo=node.js&logoColor=white" alt="Node.js 20+" />
  &nbsp;
  <a href="https://github.com/amanasmuei/aman"><img src="https://img.shields.io/badge/part_of-aman_ecosystem-ff6b35?style=for-the-badge" alt="aman ecosystem" /></a>
  </p>
@@ -28,6 +28,10 @@
  extracts knowledge silently, and adapts to your time of day — all running locally.
  </p>
 
+ <p align="center">
+ <img src="https://raw.githubusercontent.com/amanasmuei/aman-agent/main/docs/demo/demo.gif" alt="aman-agent demo" width="720" />
+ </p>
+
  <p align="center">
  <a href="#-quick-start">Quick Start</a> &bull;
  <a href="#-intelligent-companion-features">Features</a> &bull;
@@ -69,9 +73,28 @@ npx @aman_asmuei/aman-agent
  npm install -g @aman_asmuei/aman-agent
  ```
 
- ### 2. Configure
+ **Zero config if you already have an API key in your environment:**
+
+ ```bash
+ # aman-agent auto-detects these (in priority order):
+ export ANTHROPIC_API_KEY="sk-ant-..."   # → uses Claude Sonnet 4.6
+ export OPENAI_API_KEY="sk-..."          # → uses GPT-4o
+ # Or if Ollama is running locally       # → uses llama3.2
+ ```
+
+ No env var? First run prompts for your LLM provider, API key, and model.
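The auto-detection priority described in the added lines above can be sketched as a small resolver. This is an illustrative sketch, not the package's actual code; the `OLLAMA_HOST` hint and the return shape are assumptions, while the defaults mirror the README's comments.

```typescript
// Illustrative sketch of the env-var auto-detection priority described above.
// Not the package's actual implementation.
interface LLMConfig {
  provider: "anthropic" | "openai" | "ollama";
  model: string;
}

function detectProvider(env: Record<string, string | undefined>): LLMConfig | null {
  if (env.ANTHROPIC_API_KEY) return { provider: "anthropic", model: "claude-sonnet-4-6" };
  if (env.OPENAI_API_KEY) return { provider: "openai", model: "gpt-4o" };
  // Assumption: an OLLAMA_HOST-style variable stands in for "Ollama is running locally".
  if (env.OLLAMA_HOST) return { provider: "ollama", model: "llama3.2" };
  return null; // no env var: fall through to the interactive first-run prompt
}
```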
 
- First run prompts for your LLM provider, API key, and model. Config saved to `~/.aman-agent/config.json`.
+ ### 2. (Optional) Set up your companion
+
+ ```bash
+ # Guided wizard — pick a persona preset
+ aman-agent init
+
+ # Choose from: Coding Partner, Creative Collaborator,
+ # Personal Assistant, Learning Buddy, or Minimal
+ ```
+
+ Or just skip this — aman-agent auto-creates a default profile on first run.
 
  ### 3. Talk
 
@@ -87,35 +110,83 @@ aman-agent --budget 12000
 
  ## Intelligent Companion Features
 
- ### Per-Message Memory Recall
+ ### Per-Message Memory Recall with Progressive Disclosure
 
- Every message you send triggers a semantic search against your memory database. Relevant memories are injected into the AI's context for *that turn only* so the AI always has the right context without bloating the conversation.
+ Every message you send triggers a semantic search against your memory database. Results use **progressive disclosure**: a compact index (~50-100 tokens) is injected instead of the full content (~500-1000 tokens), giving **~10x token savings**. The agent shows the cost:
 
  ```
  You > Let's set up the auth service
 
+ [memories: ~47 tokens]
+
  Agent recalls:
- - [decision] Auth service uses JWT tokens (confidence: 0.92)
- - [preference] User prefers PostgreSQL (confidence: 0.88)
- - [fact] Auth middleware rewrite driven by compliance (confidence: 0.75)
+ a1b2c3d4 [decision] Auth service uses JWT tokens... (92%)
+ e5f6g7h8 [preference] User prefers PostgreSQL... (88%)
+ i9j0k1l2 [fact] Auth middleware rewrite driven by compliance... (75%)
 
  Aman > Based on our previous decisions, I'll set up JWT-based auth
  with PostgreSQL, keeping the compliance requirements in mind...
  ```
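The compact index entries shown above (short id, type tag, truncated content, confidence percentage) could be produced by a formatter along these lines. The exact field layout and truncation width are assumptions for illustration, not amem's actual format.

```typescript
// Sketch of a compact memory-index line in the style shown above:
//   a1b2c3d4 [decision] Auth service uses JWT tokens... (92%)
interface RecalledMemory {
  id: string;          // short content hash
  type: string;        // decision | preference | fact | ...
  content: string;
  confidence: number;  // 0..1
}

function indexLine(m: RecalledMemory, width = 48): string {
  // Truncate long content so each memory costs only a handful of tokens.
  const snippet =
    m.content.length > width ? m.content.slice(0, width) + "..." : m.content;
  return `${m.id} [${m.type}] ${snippet} (${Math.round(m.confidence * 100)}%)`;
}
```

Injecting one such line per memory, instead of the full 500-1000 token body, is where the ~10x savings claimed above comes from.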
 
- ### Hybrid Memory Extraction
+ ### Silent Memory Extraction
 
- After every response, the agent analyzes the conversation and extracts memories worth keeping. Preferences, facts, patterns, and topology are stored silently. Decisions and corrections require your confirmation.
+ After every response, the agent analyzes the conversation and extracts memories worth keeping — preferences, facts, patterns, decisions, corrections, and topology are all stored automatically. No confirmation prompts interrupting your flow.
 
  ```
  You > I think we should go with microservices for the payment system
 
  Aman > That makes sense given the compliance isolation requirements...
 
- Remember: "Payment system will use microservices architecture"? (y/N) y
  [1 memory stored]
  ```
 
+ Don't want something remembered? Use `/memory search` to find it and `/memory clear` to remove it.
+
+ ### Rich Terminal Output
+
+ Responses are rendered with full markdown formatting — **bold**, *italic*, `code`, code blocks, tables, lists, and headings all display beautifully in your terminal. Responses are framed with visual dividers:
+
+ ```
+ Aman ──────────────────────────────────────────────
+
+ Here's how to set up Docker for this project...
+
+ ──────────────────────────────── memories: ~45 tokens
+ ```
+
+ ### First-Run & Returning Greeting
+
+ **First session:** Your companion introduces itself and asks your name — the relationship starts naturally.
+
+ **Returning sessions:** A warm one-liner greets you with context from your last conversation:
+
+ ```
+ Welcome back. Last time we talked about your Duit Raya tracker.
+ Reminder: Submit PR for auth refactor (due today)
+ ```
+
+ ### Progressive Feature Discovery
+
+ aman-agent surfaces tips about features you haven't tried yet, at the right moment:
+
+ ```
+ Tip: Teach me multi-step processes with /workflows add
+ ```
+
+ One hint per session, never repeated. Disable with `hooks.featureHints: false`.
+
+ ### Human-Readable Errors
+
+ No more cryptic API errors. Every known error maps to an actionable message:
+
+ ```
+ API key invalid. Run /reconfig to fix.
+ Rate limited. I'll retry automatically.
+ Network error. Check your internet connection.
+ ```
+
+ Failed messages are preserved — just press Enter to retry naturally.
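The error translation above can be modeled as a lookup from error classes to actionable strings. The error codes here are hypothetical; the messages and the debug-log path are taken from the README.

```typescript
// Hypothetical mapping from provider error classes to the actionable messages
// shown above. The codes are illustrative, not the package's actual taxonomy.
const FRIENDLY: Record<string, string> = {
  invalid_api_key: "API key invalid. Run /reconfig to fix.",
  rate_limited: "Rate limited. I'll retry automatically.",
  network_error: "Network error. Check your internet connection.",
};

function explain(code: string): string {
  // Unknown errors still point somewhere actionable rather than dumping a stack.
  return FRIENDLY[code] ?? `Unexpected error (${code}). See ~/.aman-agent/debug.log.`;
}
```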
+
  ### LLM-Powered Context Summarization
 
  When the conversation gets long, the agent uses your LLM to generate real summaries — preserving decisions, preferences, and action items. No more losing critical context to 150-character truncation.
@@ -128,6 +199,18 @@ When the AI needs multiple tools, they run in parallel via `Promise.all` instead
 
  LLM calls and MCP tool calls automatically retry on transient errors (rate limits, timeouts) with exponential backoff and jitter. Auth errors fail immediately.
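The retry policy described here, combined with the `Promise.all` fan-out mentioned in the hunk header, can be sketched as follows. `isTransient`, the attempt count, and the base delay are assumptions for illustration; they are not the package's actual API.

```typescript
// Sketch of retry-on-transient-error with exponential backoff and full jitter.
// Transient errors (rate limits, timeouts) are retried; anything else, such as
// an auth error, fails immediately — matching the behavior described above.
type Attempt<T> = () => Promise<T>;

function isTransient(err: Error): boolean {
  // Assumption: transient failures are recognized by message for this sketch.
  return /rate.?limit|timeout|ETIMEDOUT|ECONNRESET/i.test(err.message);
}

async function withRetry<T>(fn: Attempt<T>, maxAttempts = 3, baseMs = 100): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (!(err instanceof Error) || !isTransient(err) || attempt >= maxAttempts) {
        throw err; // permanent errors (e.g. bad API key) fail immediately
      }
      // Full jitter: sleep a random duration in [0, baseMs * 2^attempt).
      const delay = Math.random() * baseMs * 2 ** attempt;
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}

// Tool calls fan out in parallel via Promise.all, each wrapped in the policy.
async function runTools(calls: Attempt<string>[]): Promise<string[]> {
  return Promise.all(calls.map((c) => withRetry(c)));
}
```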
 
+ ### Passive Tool Observation Capture
+
+ Every tool the AI executes is automatically logged to amem's conversation log — tool name, input, and result. This happens passively (fire-and-forget) without slowing down the agent. Your AI builds a complete history of what it *did*, not just what it *said*.
+
+ ### Token Cost Visibility
+
+ Every memory recall shows how many tokens it costs, so you always know the overhead:
+
+ ```
+ [memories: ~47 tokens]
+ ```
+
  ### Time-Aware Greetings
 
  The agent knows the time of day and day of week. It adapts its tone naturally — you'll notice the difference between a morning and a late-night session.
@@ -199,7 +282,7 @@ Every operation that can fail logs to `~/.aman-agent/debug.log` with structured
  │ │ 4. Execute tools in parallel (with guardrails) │ │
  │ │ 5. Extract memories from response │ │
  │ │ - Auto-store: preferences, facts, patterns │ │
- │ │ - Confirm: decisions, corrections │ │
+ │ │ - All types auto-stored silently │ │
  │ └────────────────────────────────────────────────┘ │
  │ │
  │ Context Management │
@@ -239,7 +322,7 @@ Every operation that can fail logs to `~/.aman-agent/debug.log` with structured
  | `/tools` | View tools `[add\|remove ...]` |
  | `/skills` | View skills `[install\|uninstall ...]` |
  | `/eval` | View evaluation `[milestone ...]` |
- | `/memory` | View memories `[search\|clear ...]` |
+ | `/memory` | View memories `[search\|clear\|timeline]` |
  | `/decisions` | View decision log `[<project>]` |
  | `/export` | Export conversation to markdown |
  | `/debug` | Show debug log (last 20 entries) |
@@ -287,7 +370,7 @@ Default budget: 8,000 tokens. Override with `--budget`.
 
  | Provider | Models | Tool Use | Streaming |
  |:---|:---|:---|:---|
- | **Anthropic** | Claude Sonnet 4.5, Opus 4.6, Haiku 4.5 | Full | Full (with tools) |
+ | **Anthropic** | Claude Sonnet 4.6, Opus 4.6, Haiku 4.5 | Full | Full (with tools) |
  | **OpenAI** | GPT-4o, GPT-4o Mini, o3 | Full | Full (with tools) |
  | **Ollama** | Llama, Mistral, Gemma, any local model | Text only | Full |
 
@@ -301,7 +384,7 @@ Config is stored in `~/.aman-agent/config.json`:
  {
  "provider": "anthropic",
  "apiKey": "sk-ant-...",
- "model": "claude-sonnet-4-5-20250514",
+ "model": "claude-sonnet-4-6",
  "hooks": {
  "memoryRecall": true,
  "sessionResume": true,
@@ -309,7 +392,8 @@ Config is stored in `~/.aman-agent/config.json`:
  "workflowSuggest": true,
  "evalPrompt": true,
  "autoSessionSave": true,
- "extractMemories": true
+ "extractMemories": true,
+ "featureHints": true
  }
  }
  ```
@@ -332,6 +416,7 @@ All hooks are on by default. Disable any in `config.json`:
  | `evalPrompt` | Session rating on exit |
  | `autoSessionSave` | Save conversation to amem on exit |
  | `extractMemories` | Auto-extract memories from conversation |
+ | `featureHints` | Show progressive feature discovery tips |
 
  > Treat the config file like a credential — it contains your API key.
 
@@ -373,17 +458,39 @@ aman
 
  ## What Makes This Different
 
- | Feature | aman-agent | MemoryCore / Others |
- |:---|:---|:---|
- | Memory storage | SQLite + embeddings + knowledge graph | Markdown files |
- | Per-message recall | Semantic search every turn | Static blob at session start |
- | Memory extraction | Auto-extract from conversation (LLM) | AI must manually write to files |
- | Context compression | LLM-powered summarization | Truncation or line limits |
- | Tool execution | Parallel with guardrail checks | Sequential or none |
- | Reminders | Persistent, cross-session, deadline-aware | None |
- | Error handling | Structured JSON debug log | Silent failures |
- | Multi-LLM | Anthropic, OpenAI, Ollama | Usually single provider |
- | Reliability | Retry with exponential backoff | Single attempt |
+ ### aman-agent vs other companion runtimes
+
+ | Feature | aman-agent | Letta / MemGPT | Raw LLM CLI |
+ |:---|:---|:---|:---|
+ | Identity system | 7 portable layers | None | None |
+ | Memory | amem (SQLite + embeddings + graph) | Postgres + embeddings | None |
+ | Per-message recall | Progressive disclosure (~10x token savings) | Yes | No |
+ | Learns from conversation | Auto-extract (silent) | Requires configuration | No |
+ | Guardrail enforcement | Runtime tool blocking | None | None |
+ | Reminders | Persistent, deadline-aware | None | None |
+ | Context compression | LLM-powered summarization | Archival system | Truncation |
+ | Tool observation capture | Passive logging of all tool calls | None | None |
+ | Token cost visibility | Shows memory injection cost per turn | None | None |
+ | Multi-LLM | Anthropic, OpenAI, Ollama | OpenAI-focused | Single provider |
+ | Tool execution | Parallel with guardrails | Sequential | None |
+
+ ### amem vs other memory layers
+
+ | Feature | amem | claude-mem (40K stars) | mem0 |
+ |:---|:---|:---|:---|
+ | Works with | Any MCP client | Claude Code only | OpenAI-focused |
+ | Storage | SQLite + local embeddings | SQLite + Chroma vectors | Cloud vector DB |
+ | Progressive disclosure | Compact index + on-demand detail | Yes (10x savings) | No |
+ | Memory types | 6 typed (correction > decision > fact) | Untyped observations | Untyped blobs |
+ | Knowledge graph | Typed relations between memories | None | None |
+ | Reminders | Persistent, deadline-aware | None | None |
+ | Scoring | relevance x recency x confidence x importance | Recency-based | Similarity only |
+ | Consolidation | Auto merge/prune/promote | None | None |
+ | Version history | Immutable snapshots | Immutable observations | None |
+ | Token cost visibility | Shown per recall | Shown per injection | None |
+ | License | MIT | AGPL-3.0 | Apache-2.0 |
+
+ > **claude-mem** excels at capturing what Claude Code *did*. **amem** is a structured memory system that works with *any* MCP client, with typed memories, a knowledge graph, reminders, progressive disclosure, and consolidation.
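The scoring row in the comparison table names four multiplied factors: relevance x recency x confidence x importance. A minimal sketch, assuming a multiplicative combination with exponential recency decay (amem's actual weighting and decay curve may differ):

```typescript
// Illustrative composite score: relevance x recency x confidence x importance.
// The multiplicative form and the half-life decay are assumptions, not amem's
// documented formula.
interface Memory {
  relevance: number;   // similarity to the query, 0..1
  confidence: number;  // extraction confidence, 0..1
  importance: number;  // type-based weight (e.g. correction > decision > fact)
  ageDays: number;     // time since the memory was last touched
}

function score(m: Memory, halfLifeDays = 30): number {
  // Recency decays exponentially: halves every `halfLifeDays`.
  const recency = Math.pow(0.5, m.ageDays / halfLifeDays);
  return m.relevance * recency * m.confidence * m.importance;
}
```

Under this sketch, a fresh, highly relevant decision outranks an older, lower-confidence fact even when their raw similarities are close.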
 
  ---
 
@@ -393,7 +500,7 @@ aman
  git clone https://github.com/amanasmuei/aman-agent.git
  cd aman-agent && npm install
  npm run build # zero errors
- npm test # 84 tests pass
+ npm test # 111 tests pass
  ```
 
  PRs welcome. See [Issues](https://github.com/amanasmuei/aman-agent/issues).