@aman_asmuei/aman-agent 0.3.0 → 0.4.0
- package/README.md +252 -38
- package/dist/index.js +264 -411
- package/dist/index.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://img.shields.io/badge/aman--agent-runtime_layer-white?style=for-the-badge&labelColor=0d1117&color=58a6ff">
    <img alt="aman-agent" src="https://img.shields.io/badge/aman--agent-runtime_layer-black?style=for-the-badge&labelColor=f6f8fa&color=24292f">
  </picture>
</p>

<h1 align="center">aman-agent</h1>

<p align="center">
  <strong>Your AI companion, running locally.</strong>
</p>

<p align="center">
  <a href="https://www.npmjs.com/package/@aman_asmuei/aman-agent"><img src="https://img.shields.io/npm/v/@aman_asmuei/aman-agent?style=for-the-badge&logo=npm&logoColor=white&color=cb3837" alt="npm version" /></a>
  <a href="https://github.com/amanasmuei/aman-agent/actions"><img src="https://img.shields.io/github/actions/workflow/status/amanasmuei/aman-agent/ci.yml?style=for-the-badge&logo=github&label=CI" alt="CI status" /></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue?style=for-the-badge" alt="MIT License" /></a>
  <img src="https://img.shields.io/badge/node-%E2%89%A518-brightgreen?style=for-the-badge&logo=node.js&logoColor=white" alt="Node.js 18+" />
  <a href="https://github.com/amanasmuei/aman"><img src="https://img.shields.io/badge/part_of-aman_ecosystem-ff6b35?style=for-the-badge" alt="aman ecosystem" /></a>
</p>

<p align="center">
  Loads the full aman ecosystem and runs a streaming AI agent in your terminal —<br/>
  identity, memory, tools, workflows, guardrails, and skills in every conversation.
</p>

<p align="center">
  <a href="#-quick-start">Quick Start</a> •
  <a href="#-what-it-loads">What It Loads</a> •
  <a href="#-whats-new-in-v040">What's New</a> •
  <a href="#-commands">Commands</a> •
  <a href="#-supported-llms">LLMs</a> •
  <a href="#-the-ecosystem">Ecosystem</a>
</p>

---

## The Problem

AI coding assistants forget everything between sessions. You re-explain your stack, preferences, and boundaries every time. There's no single place where your AI loads its full context and just *works*.

## The Solution

**aman-agent** loads your entire AI ecosystem into a local streaming agent. One command. Full context. Every session.

```bash
npx @aman_asmuei/aman-agent
```

First run walks you through LLM configuration. After that, just run and talk.

> **Your AI knows who it is, what it remembers, what tools it has, and what rules to follow — before you say a word.**

---

## Quick Start

### 1. Run

```bash
# Run directly (always latest)
npx @aman_asmuei/aman-agent

# Or install globally
npm install -g @aman_asmuei/aman-agent
```

### 2. Configure

First run prompts for your LLM provider, API key, and model. Config is saved to `~/.aman-agent/config.json`.

### 3. Talk

```bash
# Override model per session
aman-agent --model claude-opus-4-6

# Adjust system prompt token budget
aman-agent --budget 12000
```

---

## What's New in v0.4.0

| Feature | Before | After |
|---|---|---|
| **Streaming with tools** | Blocked — no output until LLM finishes | Real-time streaming, even during tool calls |
| **Conversation persistence** | 200-char resume, full history lost | Full conversation saved to amem on exit |
| **Context management** | Messages grow forever, eventual crash | Auto-trims at 80K tokens, keeps recent context |
| **`/save` command** | N/A | Manually save conversation mid-session |
| **Reminders/Schedules** | Broken — lost on exit, no daemon | Removed (replaced with `/save`) |

---

## What It Loads

On every session start, aman-agent assembles your full AI context:

| Layer | Source | What it provides |
|:---|:---|:---|
| **Identity** | `~/.acore/core.md` | AI personality, your preferences, relationship state |
| **Memory** | `~/.amem/memory.db` | Past decisions, corrections, patterns, conversation history |
| **Tools** | `~/.akit/kit.md` | Available capabilities (GitHub, search, databases) |
| **Workflows** | `~/.aflow/flow.md` | Multi-step processes (code review, bug fix) |
| **Guardrails** | `~/.arules/rules.md` | Safety boundaries and permissions |
| **Skills** | `~/.askill/skills.md` | Deep domain expertise |

All layers are optional — the agent works with whatever you've set up.
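As a rough sketch of that optional-layer behavior (the `loadLayers` helper, its injectable `exists`/`read` parameters, and the return shape are illustrative assumptions, not the package's actual API; memory lives in a database served over MCP rather than a text file, so it is omitted here):

```typescript
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// File-backed layers from the table above; ~/.amem/memory.db is reached
// through the amem MCP server, so it is not read as a text layer here.
const LAYER_FILES: ReadonlyArray<{ name: string; relPath: string }> = [
  { name: "identity",   relPath: ".acore/core.md" },
  { name: "tools",      relPath: ".akit/kit.md" },
  { name: "workflows",  relPath: ".aflow/flow.md" },
  { name: "guardrails", relPath: ".arules/rules.md" },
  { name: "skills",     relPath: ".askill/skills.md" },
];

// Load whichever layer files exist under `base`; missing layers are
// simply skipped, so the agent runs with whatever has been set up.
function loadLayers(
  base: string = homedir(),
  exists: (p: string) => boolean = existsSync,
  read: (p: string) => string = (p) => readFileSync(p, "utf8"),
): Map<string, string> {
  const loaded = new Map<string, string>();
  for (const { name, relPath } of LAYER_FILES) {
    const full = join(base, relPath);
    if (exists(full)) loaded.set(name, read(full));
  }
  return loaded;
}
```

The injectable `exists`/`read` parameters are only there to make the sketch easy to exercise in isolation.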

### Token Budgeting

Layers are included by priority when space is limited:

```
Identity (always) → Guardrails → Workflows → Tools → Skills (can truncate)
```

Default budget: 8,000 tokens. Override with `--budget`.
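The priority scheme above can be sketched as follows (the `Layer` shape, the function name, and the 4-characters-per-token estimate are illustrative assumptions, not the package's internals):

```typescript
interface Layer {
  name: string;
  content: string;
}

// Crude token estimate; a real implementation would use a tokenizer.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Include layers in priority order until the budget is spent; the first
// layer that does not fully fit is truncated, and everything after it
// is dropped entirely.
function assembleSystemPrompt(layersByPriority: Layer[], budget = 8000): string {
  const parts: string[] = [];
  let used = 0;
  for (const layer of layersByPriority) {
    const cost = estimateTokens(layer.content);
    if (used + cost <= budget) {
      parts.push(layer.content);
      used += cost;
    } else {
      const remainingChars = (budget - used) * 4;
      if (remainingChars > 0) parts.push(layer.content.slice(0, remainingChars));
      break; // lower-priority layers are dropped
    }
  }
  return parts.join("\n\n");
}
```

Called with layers ordered Identity → Guardrails → Workflows → Tools → Skills, this keeps the high-priority head intact and lets only the tail truncate.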

---

## Commands

| Command | Description |
|:---|:---|
| `/help` | Show available commands |
| `/identity` | View identity `[update <section>]` |
| `/rules` | View guardrails `[add\|remove\|toggle ...]` |
| `/workflows` | View workflows `[add\|remove ...]` |
| `/tools` | View tools `[add\|remove ...]` |
| `/skills` | View skills `[install\|uninstall ...]` |
| `/eval` | View evaluation `[milestone ...]` |
| `/memory` | View memories `[search\|clear ...]` |
| `/status` | Ecosystem dashboard |
| `/doctor` | Health check all layers |
| `/save` | Save conversation to memory |
| `/model` | Show current LLM model |
| `/update` | Check for updates |
| `/reconfig` | Reset LLM configuration |
| `/clear` | Clear conversation history |
| `/quit` | Exit |

---

## Supported LLMs

| Provider | Models | Tool Use | Streaming |
|:---|:---|:---|:---|
| **Anthropic** | Claude Sonnet 4.5, Opus 4.6, Haiku 4.5 | Full | Full (with tools) |
| **OpenAI** | GPT-4o, GPT-4o Mini, o3 | Full | Full (with tools) |
| **Ollama** | Llama, Mistral, Gemma, any local model | Text only | Full |
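One way to picture "Full (with tools)" streaming is a provider-agnostic event stream in which text deltas keep flowing and tool calls arrive as interleaved events. The event shape, `runTurn`, and the mock provider below are illustrative assumptions, not the package's actual client interface:

```typescript
// Events a provider-agnostic streaming client might emit.
type StreamEvent =
  | { kind: "text"; delta: string }
  | { kind: "tool_call"; name: string; args: Record<string, unknown> }
  | { kind: "done" };

// Consume one turn: render text deltas as they arrive and dispatch tool
// calls in the middle of the stream instead of blocking until the end.
async function runTurn(
  stream: AsyncIterable<StreamEvent>,
  onToolCall: (name: string, args: Record<string, unknown>) => Promise<string>,
): Promise<string> {
  let text = "";
  for await (const event of stream) {
    if (event.kind === "text") {
      text += event.delta; // a real UI would print this immediately
    } else if (event.kind === "tool_call") {
      await onToolCall(event.name, event.args); // guardrail checks would gate this
    }
  }
  return text;
}

// A mock provider stream, standing in for Anthropic / OpenAI / Ollama.
async function* mockStream(): AsyncIterable<StreamEvent> {
  yield { kind: "text", delta: "Looking that up" };
  yield { kind: "tool_call", name: "memory_recall", args: { query: "auth" } };
  yield { kind: "text", delta: "... done." };
  yield { kind: "done" };
}
```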

---

## How It Works

```
┌──────────────────────────────────────────────┐
│                Your Terminal                 │
│                                              │
│  You > tell me about our auth decisions      │
│                                              │
│  Agent > [using memory_recall...]            │
│    Based on your previous decisions:         │
│    - OAuth2 with PKCE (decided 2 weeks ago)  │
│    - JWT for API tokens...                   │
└─────────────────┬────────────────────────────┘
                  │
┌─────────────────▼────────────────────────────┐
│              aman-agent runtime              │
│                                              │
│  System Prompt Assembly                      │
│  ┌─────────────────────────────────────┐     │
│  │ Identity + Memory + Tools +         │     │
│  │ Workflows + Guardrails + Skills     │     │
│  │ (priority-based token budgeting)    │     │
│  └─────────────────────────────────────┘     │
│                                              │
│  Streaming LLM Client                        │
│  ┌─────────────────────────────────────┐     │
│  │ Anthropic / OpenAI / Ollama         │     │
│  │ Always streaming, even with tools   │     │
│  └─────────────────────────────────────┘     │
│                                              │
│  Context Manager                             │
│  ┌─────────────────────────────────────┐     │
│  │ Auto-trim at 80K tokens             │     │
│  │ Keep initial context + recent msgs  │     │
│  └─────────────────────────────────────┘     │
│                                              │
│  MCP Integration                             │
│  ┌─────────────────────────────────────┐     │
│  │ aman-mcp → identity, tools, eval    │     │
│  │ amem     → memory, knowledge        │     │
│  └─────────────────────────────────────┘     │
└──────────────────────────────────────────────┘
```

### Session Lifecycle

1. **Start** — Load ecosystem, connect MCP servers, recall memory context
2. **Chat** — Stream responses, execute tools with guardrail checks, match workflows
3. **Auto-trim** — Compress old messages when approaching token limits
4. **Exit** — Save conversation to amem, update session resume, rate session
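The auto-trim step can be sketched as a simple dropping variant of the idea (the message shape, the `keepHead` split, and the 4-characters-per-token estimate are illustrative assumptions; per the diagram above, the real agent keeps the initial context and compresses rather than merely discarding):

```typescript
interface Msg {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude per-conversation token estimate.
const approxTokens = (msgs: Msg[]): number =>
  msgs.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);

// When history exceeds the limit, keep the initial context (the first
// `keepHead` messages) and drop the oldest middle messages until the
// conversation fits again, preserving the most recent exchange.
function trimContext(msgs: Msg[], limit = 80_000, keepHead = 2): Msg[] {
  const head = msgs.slice(0, keepHead);
  let tail = msgs.slice(keepHead);
  while (approxTokens([...head, ...tail]) > limit && tail.length > 1) {
    tail = tail.slice(1); // drop the oldest non-initial message
  }
  return [...head, ...tail];
}
```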

---

## Configuration

Config is stored in `~/.aman-agent/config.json`:

```json
{
  ...
}
```

| Option | CLI Flag | Default |
|:---|:---|:---|
| Model override | `--model <id>` | From config |
| Token budget | `--budget <n>` | 8000 |

> Treat the config file like a credential — it contains your API key.

---

## The Ecosystem

```
aman
├── acore      → identity    → who your AI IS
├── amem       → memory      → what your AI KNOWS
├── akit       → tools       → what your AI CAN DO
├── aflow      → workflows   → HOW your AI works
├── arules     → guardrails  → what your AI WON'T do
├── askill     → skills      → what your AI MASTERS
├── aeval      → evaluation  → how GOOD your AI is
├── achannel   → channels    → WHERE your AI lives
└── aman-agent → runtime     → the engine  ← YOU ARE HERE
```

<details>
<summary><strong>Full ecosystem packages</strong></summary>

| Layer | Package | What it does |
|:---|:---|:---|
| Identity | [acore](https://github.com/amanasmuei/acore) | Personality, values, relationship memory |
| Memory | [amem](https://github.com/amanasmuei/amem) | Persistent memory with knowledge graph (MCP) |
| Tools | [akit](https://github.com/amanasmuei/akit) | Portable AI tools (MCP + manual fallback) |
| Workflows | [aflow](https://github.com/amanasmuei/aflow) | Reusable AI workflows |
| Guardrails | [arules](https://github.com/amanasmuei/arules) | Safety boundaries and permissions |
| Skills | [askill](https://github.com/amanasmuei/askill) | Domain expertise |
| Evaluation | [aeval](https://github.com/amanasmuei/aeval) | Relationship tracking |
| Channels | [achannel](https://github.com/amanasmuei/achannel) | Telegram, Discord, webhooks |
| **Unified** | **[aman](https://github.com/amanasmuei/aman)** | **One command to set up everything** |

</details>

---

## Contributing

```bash
git clone https://github.com/amanasmuei/aman-agent.git
cd aman-agent && npm install
npm run build   # zero errors
npm test        # 61 tests pass
```

PRs welcome. See [Issues](https://github.com/amanasmuei/aman-agent/issues).

---

<p align="center">
  Built by <a href="https://github.com/amanasmuei"><strong>Aman Asmuei</strong></a>
</p>

<p align="center">
  <a href="https://github.com/amanasmuei/aman-agent">GitHub</a> ·
  <a href="https://www.npmjs.com/package/@aman_asmuei/aman-agent">npm</a> ·
  <a href="https://github.com/amanasmuei/aman-agent/issues">Issues</a>
</p>

<p align="center">
  <sub>MIT License</sub>
</p>