0agent 1.0.42 → 1.0.44

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,141 +2,216 @@
 
 **A persistent, learning AI agent that runs on your machine.**
 
+> Runs a local daemon. Learns from every task. Remembers everything. Gets better over time.
+
 ```bash
 npx 0agent@latest
 ```
 
-That's it. 0agent installs, walks you through a 4-step setup, and starts a daemon that gets smarter with every task you run.
+[![npm](https://img.shields.io/npm/v/0agent?color=black&label=npm)](https://www.npmjs.com/package/0agent)
+[![license](https://img.shields.io/badge/license-Apache%202.0-black)](LICENSE)
+[![node](https://img.shields.io/badge/node-%E2%89%A520-black)](https://nodejs.org)
 
 ---
 
-## What it does
+## What is this?
 
-```bash
-# Sprint workflow
-0agent /office-hours "I want to build a Slack bot"
-0agent /plan-ceo-review
-0agent /plan-eng-review
-0agent /build
-0agent /review
-0agent /qa --url https://staging.myapp.com
-0agent /ship
-0agent /retro
-
-# One-off tasks
-0agent /research "Acme Corp Series B funding"
-0agent /debug "TypeError at auth.ts:47"
-0agent /test-writer src/payments/
-0agent /refactor src/api/routes.ts
-
-# Plain language
-0agent run "fix the auth bug Marcus reported"
-0agent run "research Acme Corp and draft a follow-up email to Sarah"
-
-# Entity-scoped (learns who you are)
-0agent run "pull auth metrics" --entity sarah_chen
-```
+0agent is a CLI agent that runs as a background daemon on your machine. It executes real tasks — shell commands, file operations, web search, browser automation — using your API key, and learns from every outcome via a weighted knowledge graph.
 
----
+Unlike chat-based AI tools, 0agent:
 
-## How it learns
+- **Persists** — runs in the background, remembers past sessions
+- **Learns** — every task outcome updates edge weights in a graph; plan selection improves over time
+- **Executes** — actually runs commands, writes files, searches the web, opens browsers
+- **Syncs** — optionally backs up the knowledge graph to a private GitHub repo
 
-Every time you run a task, 0agent records which strategy it chose and whether it worked. After 50 interactions, it converges to your optimal workflow — measurably, provably, via a weighted knowledge graph.
+---
 
-- Edge weights start at 0.5 (neutral)
-- Positive outcomes push them toward 1.0
-- Negative outcomes push them toward 0.0
-- After 100 traces, plan selection is noticeably better
+## Quick start
 
----
+```bash
+npx 0agent@latest
+```
 
-## Requirements
+The wizard asks for:
+1. LLM provider + API key (Anthropic, OpenAI, xAI, Gemini, or local Ollama)
+2. GitHub repo for memory backup (optional, uses `gh` CLI if installed)
+3. Workspace folder (where the agent creates files — default: `~/0agent-workspace`)
+4. Embedding provider (for semantic memory search)
 
-- **Node.js** 20
-- **API key** for Anthropic, OpenAI, or a local Ollama instance
-- **Docker** (optional but recommended — enables sandboxed subagents)
+After setup, the chat TUI opens automatically. No manual steps.
 
 ---
 
-## Install
+## Usage
+
+### Interactive chat
 
 ```bash
-# One-liner
+0agent            # open chat (starts daemon if needed)
 npx 0agent@latest
+```
 
-# Global install
-npm install -g 0agent
-0agent init
+```
+0agent — anthropic/claude-sonnet-4-6
+Type a task, or /help for commands.
 
-# Or via brew (coming soon)
-brew install 0agent
+make a website for my coffee shop and deploy it locally
+build a REST API in Go with auth
+› research my competitor's pricing and draft a response strategy
 ```
 
----
+Type while the agent works — messages queue automatically and run one after another.
 
-## Local development
+### Slash skills
 
 ```bash
-git clone https://github.com/0agent-oss/0agent
-cd 0agent
-pnpm install
-pnpm build
-
-# Run the wizard
-node bin/0agent.js init
+# Software engineering
+0agent /review        # code review current branch
+0agent /build         # run build, fix errors
+0agent /qa            # generate and run tests
+0agent /debug         # debug a failing test or error
+0agent /refactor      # refactor a file or module
+0agent /test-writer   # write unit tests
+0agent /doc           # generate documentation
+
+# Planning & strategy
+0agent /office-hours "I want to build a payments feature"
+0agent /plan-eng-review
+0agent /plan-ceo-review
+0agent /retro         # weekly retrospective
+0agent /ship          # pre-release checklist
 
-# Start daemon
-node bin/0agent.js start
+# Research
+0agent /research "Acme Corp Series B"
+0agent /security-audit
+```
 
-# Check status
-node bin/0agent.js status
+### Scheduled tasks
 
-# Open dashboard
-open http://localhost:4200
 ```
+› /schedule add "run /retro" every Friday at 5pm
+› /schedule add "check the build" every day at 9am
+› /schedule list
+```
+
+### Commands
+
+| Command | Description |
+|---|---|
+| `/model` | Show or switch model |
+| `/model add anthropic sk-ant-...` | Add a provider API key |
+| `/key anthropic sk-ant-...` | Update a stored key |
+| `/status` | Daemon health + graph stats |
+| `/skills` | List available skills |
+| `/schedule` | Manage scheduled jobs |
+| `/update` | Update to latest version |
+| `/graph` | Open 3D knowledge graph |
+| `/clear` | Clear screen |
+| `Ctrl+C` | Cancel current task |
 
 ---
 
-## Architecture
+## How it learns
+
+Every task updates a weighted knowledge graph stored in `~/.0agent/graph.db`.
 
 ```
-You   0agent CLI   Daemon (port 4200) → Knowledge Graph
-                     → Subagents (sandboxed)
-                     → MCP Tools (filesystem, browser, shell)
-                   Learning Engine (weight propagation)
+Edge weights:  0.0 ──── 0.5 ──── 1.0
+               bad     neutral   good
+
+After each task:
+  success → weight += 0.1 × learning_rate
+  failure → weight -= 0.1 × learning_rate
+  decay   → weight → 0.5 over time (forgetting)
 ```
 
-- **Knowledge graph** — weighted, multimodal. SQLite + HNSW. Persists to `~/.0agent/graph.db`
-- **Subagents** — sandboxed (Docker/Podman/process). Zero-trust capability tokens. Never write to the graph.
-- **MCP** — connects to any MCP server. Built-in: filesystem, shell, browser, memory.
-- **Skills** — 15 built-in YAML-defined skills. Add your own in `~/.0agent/skills/custom/`
-- **Self-improvement** — weekly analysis of skill gaps, workflow optimization, prompt refinement.
+After ~50 interactions, plan selection measurably improves. The graph also stores:
+- Discovered facts: URLs, ports, file paths, API endpoints (via `memory_write` tool)
+- Conversation history (last 8 exchanges injected as context)
+- Identity + personality per entity
 
 ---
 
-## Entity nesting
+## Memory sync
 
-0agent can learn individual personalities within an organization:
+0agent can back up its knowledge graph to a private GitHub repo:
 
-```yaml
-# One-time setup in config
-entity_nesting:
+```yaml
+# Set up during init, or add manually to ~/.0agent/config.yaml:
+github_memory:
   enabled: true
-  visibility_policy:
-    allow_work_context: true          # company sees projects/tasks
-    allow_personality_profile: false  # company can't see communication style
+  token: ghp_...
+  owner: your-username
+  repo: 0agent-memory
 ```
 
-After 3+ interactions with Sarah, responses automatically match her style:
-- Terse? Leads with numbers, no preamble.
-- Bullet-point user? Bullets.
-- Exploratory? More context and options.
+- **Pulls** on daemon start
+- **Pushes** every 30 minutes if there are changes
+- **Final push** on daemon shutdown
+- The same repo doubles as a GitHub Codespace template for browser sessions
153
+
154
+ ---
155
+
156
+ ## What can the agent actually do?
134
157
 
135
- The company graph sees `[from member] Sarah used /build` — not the raw conversations.
158
+ | Capability | How |
159
+ |---|---|
160
+ | Run shell commands | `shell_exec` — bash, any CLI tool |
161
+ | Read / write files | `file_op` — read, write, list, mkdir |
162
+ | Search the web | `web_search` — DuckDuckGo, no API key needed |
163
+ | Scrape pages | `scrape_url` — full page text, tables, links |
164
+ | Open browser | `browser_open` — system Chrome or default OS browser |
165
+ | Remember facts | `memory_write` — persists to knowledge graph |
166
+ | Schedule tasks | Natural language cron via `/schedule` |
167
+ | Self-heal | Detects runtime errors, proposes + applies patches |
136
168
 
137
169
  ---
138
170
 
139
- ## Config
171
+ ## Architecture
172
+
173
+ ```
174
+ npx 0agent@latest
175
+
176
+
177
+ ┌─────────────────────────────────────────────────────────┐
178
+ │ CLI (bin/0agent.js + bin/chat.js) │
179
+ │ • Init wizard • Chat TUI • Slash commands │
180
+ └───────────────────────┬─────────────────────────────────┘
181
+ │ HTTP + WebSocket
182
+
183
+ ┌─────────────────────────────────────────────────────────┐
184
+ │ Daemon (dist/daemon.mjs) — port 4200 │
185
+ │ │
186
+ │ SessionManager ── AgentExecutor ── LLMExecutor │
187
+ │ │ │ │ │
188
+ │ │ CapabilityRegistry │ │
189
+ │ │ • shell_exec │ │
190
+ │ │ • file_op │ │
191
+ │ │ • web_search │ │
192
+ │ │ • scrape_url │ │
193
+ │ │ • browser_open │ │
194
+ │ │ • memory_write │ │
195
+ │ │ │ │
196
+ │ KnowledgeGraph ◄────── outcome feedback ┘ │
197
+ │ (SQLite + HNSW) │
198
+ │ │ │
199
+ │ GitHubMemorySync ── SchedulerManager ── SelfHealLoop │
200
+ └─────────────────────────────────────────────────────────┘
201
+ ```
202
+
203
+ **Key packages:**
204
+
205
+ | Package | Description |
206
+ |---|---|
207
+ | `packages/core` | Knowledge graph, inference engine, storage adapters |
208
+ | `packages/daemon` | HTTP server, session manager, agent executor, capabilities |
209
+ | `bin/chat.js` | Claude Code-style TUI with message queue, WS events, spinner |
210
+ | `bin/0agent.js` | CLI entry point, init wizard, daemon lifecycle |
211
+
212
+ ---
213
+
214
+ ## Configuration
140
215
 
141
216
  `~/.0agent/config.yaml` — created by `0agent init`, edit anytime:
142
217
 
@@ -144,21 +219,83 @@ The company graph sees `[from member] Sarah used /build` — not the raw convers
144
219
  llm_providers:
145
220
  - provider: anthropic
146
221
  model: claude-sonnet-4-6
147
- api_key: sk-ant-...
222
+ api_key: sk-ant-... # never committed to git
148
223
  is_default: true
149
224
 
225
+ workspace:
226
+ path: /Users/you/0agent-workspace # agent creates files here
227
+
150
228
  sandbox:
151
229
  backend: docker # docker | podman | process | firecracker
152
230
 
153
- entity_nesting:
231
+ github_memory:
154
232
  enabled: true
233
+ token: ghp_...
234
+ owner: your-username
235
+ repo: 0agent-memory
236
+
237
+ embedding:
238
+ provider: nomic-ollama # nomic-ollama | openai | none
239
+ model: nomic-embed-text
240
+ dimensions: 768
241
+ ```
155
242
 
156
- self_improvement:
157
- schedule: weekly
243
+ ---
244
+
245
+ ## Local development
246
+
247
+ ```bash
248
+ git clone https://github.com/cadetmaze/0agentv1
249
+ cd 0agentv1
250
+ pnpm install
251
+ pnpm build
252
+
253
+ # Run init wizard
254
+ node bin/0agent.js init
255
+
256
+ # Or start daemon directly
257
+ node bin/0agent.js start
258
+ node bin/chat.js
259
+
260
+ # Bundle daemon into single file
261
+ node scripts/bundle.mjs
262
+
263
+ # Check status
264
+ node bin/0agent.js status
265
+ open http://localhost:4200 # 3D knowledge graph dashboard
158
266
  ```
159
267
 
268
+ **Requirements:**
269
+ - Node.js ≥ 20
270
+ - pnpm (`npm install -g pnpm`)
271
+ - API key for Anthropic, OpenAI, xAI, Gemini, or a local Ollama instance
272
+ - Docker (optional — enables sandboxed execution)
273
+
274
+ ---
275
+
276
+ ## Roadmap
277
+
278
+ - [ ] Telegram bot interface
279
+ - [ ] MCP server support (connect to external tools)
280
+ - [ ] Team collaboration (shared graph, sync via GitHub)
281
+ - [ ] Mobile companion app
282
+ - [ ] Plugin SDK for custom capabilities
283
+
284
+ ---
285
+
286
+ ## Contributing
287
+
288
+ Issues and PRs welcome. This is early-stage software — things break, APIs change.
289
+
290
+ 1. Fork the repo
291
+ 2. `pnpm install && pnpm build`
292
+ 3. Make changes to `packages/daemon/src/` or `bin/`
293
+ 4. `node scripts/bundle.mjs` to rebuild the bundle
294
+ 5. Test with `node bin/0agent.js init`
295
+ 6. Submit a PR
296
+
160
297
  ---
161
298
 
162
299
  ## License
163
300
 
164
- Apache 2.0
301
+ [Apache 2.0](LICENSE) — use it, fork it, build on it.
package/bin/chat.js CHANGED
@@ -7,12 +7,42 @@
  * /model to switch. /key to add provider keys. Never forgets previous keys.
  */
 
-import { createInterface } from 'node:readline';
+import { createInterface, emitKeypressEvents, moveCursor, clearLine as rlClearLine } from 'node:readline';
 import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'node:fs';
 import { resolve } from 'node:path';
 import { homedir } from 'node:os';
 import YAML from 'yaml';
 
+// ─── Slash command registry (used for live menu + tab completion) ─────────────
+const SLASH_COMMANDS = [
+  // Skills
+  { cmd: '/review',          desc: 'Code review — bugs, style, security issues' },
+  { cmd: '/build',           desc: 'Build project and fix compilation errors' },
+  { cmd: '/qa',              desc: 'Generate and run tests' },
+  { cmd: '/debug',           desc: 'Debug a failing test or runtime error' },
+  { cmd: '/refactor',        desc: 'Refactor a file or module' },
+  { cmd: '/test-writer',     desc: 'Write unit tests for your code' },
+  { cmd: '/doc',             desc: 'Generate documentation' },
+  { cmd: '/research',        desc: 'Research a topic, person, or company' },
+  { cmd: '/retro',           desc: 'Weekly retrospective — what went well / badly' },
+  { cmd: '/ship',            desc: 'Pre-release checklist — ready to deploy?' },
+  { cmd: '/office-hours',    desc: 'Plan a new feature or project from scratch' },
+  { cmd: '/plan-eng-review', desc: 'Engineering planning review' },
+  { cmd: '/security-audit',  desc: 'Security audit — find vulnerabilities' },
+  { cmd: '/design-review',   desc: 'Design review — architecture and patterns' },
+  // Built-ins
+  { cmd: '/telegram',        desc: 'Connect Telegram bot — forward messages to 0agent' },
+  { cmd: '/model',           desc: 'Show or switch the LLM model' },
+  { cmd: '/key',             desc: 'Update a stored API key' },
+  { cmd: '/status',          desc: 'Daemon health, graph stats, active sessions' },
+  { cmd: '/skills',          desc: 'List all available skills' },
+  { cmd: '/schedule',        desc: 'Manage scheduled / recurring tasks' },
+  { cmd: '/update',          desc: 'Update 0agent to the latest version' },
+  { cmd: '/graph',           desc: 'Open 3D knowledge graph in browser' },
+  { cmd: '/clear',           desc: 'Clear the screen' },
+  { cmd: '/help',            desc: 'Show this help' },
+];
+
 const AGENT_DIR = resolve(homedir(), '.0agent');
 const CONFIG_PATH = resolve(AGENT_DIR, 'config.yaml');
 const BASE_URL = process.env['ZEROAGENT_URL'] ?? 'http://localhost:4200';
@@ -183,7 +213,7 @@ function printHeader() {
   const modelStr = provider ? `${provider.provider}/${provider.model}` : 'no model';
   console.log();
   console.log(fmt(C.bold, ' 0agent') + fmt(C.dim, ` — ${modelStr}`));
-  console.log(fmt(C.dim, '  Type a task, or /help for commands. Ctrl+C to exit.\n'));
+  console.log(fmt(C.dim, '  Type a task or /command. Press Tab to browse, / to see all.\n'));
 }
 
 function printInsights() {
@@ -328,6 +358,14 @@ function handleWsEvent(event) {
     if (r.files_written?.length) console.log(`\n  ${fmt(C.green, '✓')} ${r.files_written.join(', ')}`);
     if (r.tokens_used) process.stdout.write(fmt(C.dim, `\n  ${r.tokens_used} tokens · ${r.model ?? ''}\n`));
 
+    // Contextual next-step suggestions
+    const suggestions = _suggestNext(lineBuffer, r);
+    if (suggestions.length > 0 && messageQueue.length === 0) {
+      process.stdout.write(
+        `  ${fmt(C.dim, '→')} ${suggestions.map(s => fmt(C.cyan, s)).join(fmt(C.dim, ' · '))}\n`
+      );
+    }
+
     // Confirm server if port mentioned
     confirmServer(r, lineBuffer);
     lineBuffer = '';
@@ -381,6 +419,26 @@ function handleWsEvent(event) {
   }
 }
 
+function _suggestNext(output, result) {
+  const text = (output + ' ' + (result?.output ?? '') + ' ' + JSON.stringify(result?.files_written ?? [])).toLowerCase();
+  if (text.includes('written') || text.includes('creat') || text.includes('built') || (result?.files_written?.length > 0)) {
+    return ['/review', '/qa'];
+  }
+  if (text.includes('test') || text.includes('spec') || text.includes('pass')) {
+    return ['/ship'];
+  }
+  if (text.includes('research') || text.includes('found') || text.includes('results')) {
+    return ['/office-hours', '/doc'];
+  }
+  if (text.includes('bug') || text.includes('fix') || text.includes('error') || text.includes('debug')) {
+    return ['/qa', '/review'];
+  }
+  if (text.includes('deploy') || text.includes('server') || text.includes('running')) {
+    return ['/qa', '/ship'];
+  }
+  return [];
+}
+
 async function confirmServer(result, output) {
   const allText = [...(result.commands_run ?? []), output].join(' ');
   const portMatch = allText.match(/(?:localhost:|port\s*[=:]?\s*)(\d{4,5})/i);
@@ -697,6 +755,33 @@ async function handleCommand(input) {
       break;
     }
 
+    // /telegram — configure Telegram bot token
+    case '/telegram': {
+      if (!cfg) { console.log(fmt(C.red, '  No config found. Run: 0agent init')); break; }
+      const existingToken = cfg?.telegram?.token;
+      if (existingToken) {
+        console.log(`\n  Telegram bot: ${fmt(C.green, '✓ configured')}`);
+        console.log(`  Token: ${existingToken.slice(0, 10)}••••`);
+        console.log(`  ${fmt(C.dim, 'To update: /telegram <new-token>\n')}`);
+      }
+      const token = parts[1];
+      if (!token) {
+        if (!existingToken) {
+          console.log('\n  Connect your Telegram bot to 0agent:\n');
+          console.log(`  1. Create a bot: ${fmt(C.cyan, 'https://t.me/BotFather')} → /newbot`);
+          console.log(`  2. Copy the token and run: ${fmt(C.cyan, '/telegram <token>')}`);
+          console.log(`  3. Restart daemon: ${fmt(C.dim, '0agent stop && 0agent start')}\n`);
+        }
+        break;
+      }
+      if (!cfg.telegram) cfg.telegram = {};
+      cfg.telegram.token = token;
+      saveConfig(cfg);
+      console.log(`  ${fmt(C.green, '✓')} Telegram token saved`);
+      console.log(`  ${fmt(C.dim, 'Restart daemon for changes to take effect: 0agent stop && 0agent start\n')}`);
+      break;
+    }
+
     case '/skills': {
       try {
         const skills = await fetch(`${BASE_URL}/api/skills`).then(r => r.json());
@@ -751,23 +836,116 @@ async function handleCommand(input) {
   }
 }
 
+// ─── Live slash-command menu ──────────────────────────────────────────────────
+// Drawn below the prompt as the user types. Uses moveCursor to avoid cursor
+// save/restore conflicts with readline.
+let _menuLines = 0; // how many lines the current menu occupies below the cursor
+
+function _drawMenu(filter) {
+  if (pendingResolve) { _clearMenu(); return; } // don't show while session running
+
+  const items = filter === null ? [] :
+    SLASH_COMMANDS.filter(c =>
+      !filter || c.cmd.slice(1).toLowerCase().startsWith(filter.toLowerCase())
+    ).slice(0, 10);
+
+  // Nothing matches: remove any existing menu and stop
+  if (items.length === 0) { _clearMenu(); return; }
+
+  const needed = items.length + 1; // +1 for blank line
+
+  // Move down past existing menu lines (or 0), then clear downward
+  const existingLines = _menuLines;
+  if (existingLines > 0) {
+    moveCursor(process.stdout, 0, existingLines);
+    for (let i = 0; i < existingLines; i++) {
+      rlClearLine(process.stdout, 0);
+      if (i < existingLines - 1) moveCursor(process.stdout, 0, -1);
+    }
+    moveCursor(process.stdout, 0, -(existingLines - 1));
+  }
+
+  // Print blank separator + menu items, tracking column 0
+  process.stdout.write('\n');
+  for (const m of items) {
+    process.stdout.write(
+      `  ${fmt(C.cyan, m.cmd.padEnd(20))} ${fmt(C.dim, m.desc)}\x1b[K\n`
+    );
+  }
+
+  // Move back up to the prompt line and restore cursor after the typed text
+  moveCursor(process.stdout, 0, -(needed));
+  // Jump to end of current line (readline already put cursor there)
+  moveCursor(process.stdout, 999, 0);
+
+  _menuLines = needed;
+}
+
+function _clearMenu() {
+  if (_menuLines === 0) return;
+  const n = _menuLines;
+  _menuLines = 0;
+  moveCursor(process.stdout, 0, n);
+  for (let i = 0; i < n; i++) {
+    rlClearLine(process.stdout, 0);
+    moveCursor(process.stdout, 0, -1);
+  }
+  moveCursor(process.stdout, 0, 1); // back to prompt line
+}
+
 // ─── Main REPL ────────────────────────────────────────────────────────────────
 const rl = createInterface({
   input: process.stdin,
   output: process.stdout,
   prompt: `\n ${fmt(C.cyan, '›')} `,
   historySize: 100,
-  completer: (line) => {
-    const commands = ['/model', '/key', '/status', '/skills', '/graph', '/clear', '/help',
-      '/schedule', '/schedule list', '/schedule add',
-      '/review', '/build', '/debug', '/qa', '/research', '/refactor', '/test-writer', '/retro'];
-    const hits = commands.filter(c => c.startsWith(line));
-    return [hits.length ? hits : commands, line];
+  completer: (line, callback) => {
+    if (!line.startsWith('/')) return callback(null, [[], line]);
+
+    const filter = line.slice(1).toLowerCase();
+    const matches = SLASH_COMMANDS.filter(c =>
+      !filter || c.cmd.slice(1).startsWith(filter)
+    );
+
+    if (matches.length === 0) return callback(null, [[], line]);
+
+    // Single match — let readline silently auto-complete
+    if (matches.length === 1) {
+      _clearMenu();
+      return callback(null, [[matches[0].cmd + ' '], line]);
+    }
+
+    // Multiple matches — print formatted menu, suppress readline's plain list
+    _clearMenu();
+    process.stdout.write('\n\n');
+    for (const m of matches.slice(0, 12)) {
+      process.stdout.write(`  ${fmt(C.cyan, m.cmd.padEnd(22))} ${fmt(C.dim, m.desc)}\n`);
+    }
+    if (matches.length > 12) {
+      process.stdout.write(`  ${fmt(C.dim, `…and ${matches.length - 12} more`)}\n`);
+    }
+    process.stdout.write('\n');
+    setImmediate(() => rl.prompt(true));
+    return callback(null, [[], line]);
   },
 });
 
-// Restore history from conversations if possible
-rl.on('history', () => {});
+// Live menu on keypress — draws below the prompt as the user types
+emitKeypressEvents(process.stdin, rl);
+process.stdin.on('keypress', (_char, key) => {
+  if (key?.name === 'return' || key?.name === 'enter') {
+    _clearMenu(); // clear before readline processes the line
+    return;
+  }
+  setImmediate(() => {
+    const line = rl.line ?? '';
+    if (line.startsWith('/') && !pendingResolve) {
+      _drawMenu(line.slice(1));
    } else {
+      _clearMenu();
+    }
+  });
+});
 
 printHeader();
 printInsights();
@@ -1075,9 +1253,21 @@ async function drainQueue() {
 }
 
 rl.on('line', async (input) => {
+  _clearMenu(); // always clear menu when a line is submitted
   const line = input.trim();
   if (!line) { rl.prompt(); return; }
 
+  // Bare `/` → show full command palette
+  if (line === '/') {
+    console.log('');
+    for (const m of SLASH_COMMANDS) {
+      console.log(`  ${fmt(C.cyan, m.cmd.padEnd(22))} ${fmt(C.dim, m.desc)}`);
+    }
+    console.log('');
+    rl.prompt();
+    return;
+  }
+
   // If a session is already running, queue the message.
   // pauseFor() stops the spinner briefly so the user can see the confirmation,
   // then resumes — prevents spinner from overwriting their typed text.
package/dist/daemon.mjs CHANGED
@@ -3027,13 +3027,19 @@ content = element.text if element else page.get_all_text()` : `content = page.ge
3027
3027
  `- Be concise in your final response: state what was done and where to find it`,
3028
3028
  ...hasMemory ? [
3029
3029
  ``,
3030
- `Memory (IMPORTANT):`,
3031
- `- ALWAYS call memory_write after discovering important facts:`,
3032
- ` \xB7 Live URLs (ngrok, deployed apps, APIs): memory_write({label:"ngrok_url", content:"https://...", type:"url"})`,
3030
+ `Memory (CRITICAL \u2014 write EVERYTHING you learn):`,
3031
+ `- Call memory_write for ANY fact you discover \u2014 conversational OR from tools:`,
3032
+ ` \xB7 User's name/identity: memory_write({label:"user_name", content:"Sahil", type:"identity"})`,
3033
+ ` \xB7 Projects they mention: memory_write({label:"project_telegram_bot", content:"user has a Telegram bot", type:"project"})`,
3034
+ ` \xB7 Tech stack / tools: memory_write({label:"tech_stack", content:"Node.js, Telegram", type:"tech"})`,
3035
+ ` \xB7 Preferences and decisions they express`,
3036
+ ` \xB7 Live URLs (ngrok, deployed apps): memory_write({label:"ngrok_url", content:"https://...", type:"url"})`,
3033
3037
  ` \xB7 Server ports: memory_write({label:"dev_server_port", content:"3000", type:"config"})`,
3034
3038
  ` \xB7 File paths of created projects: memory_write({label:"project_path", content:"/path/to/project", type:"path"})`,
3035
- ` \xB7 Task outcomes: memory_write({label:"last_task_result", content:"...", type:"outcome"})`,
3036
- `- Call memory_write immediately when you find the value, not just at the end`
3039
+ ` \xB7 Task outcomes: memory_write({label:"last_outcome", content:"...", type:"outcome"})`,
3040
+ `- Write to memory FIRST when the user tells you something about themselves or their work`,
3041
+ `- If the user says "my name is X" \u2192 memory_write immediately, before anything else`,
3042
+ `- If they say "we have a Y" or "our Y" \u2192 memory_write it as a project fact`
3037
3043
  ] : []
3038
3044
  ];
3039
3045
  if (isSelfMod && this.agentRoot) {
@@ -4516,6 +4522,7 @@ var SessionManager = class {
4516
4522
  weightUpdater;
4517
4523
  anthropicFetcher = new AnthropicSkillFetcher();
4518
4524
  agentRoot;
4525
+ onMemoryWritten;
4519
4526
  constructor(deps = {}) {
4520
4527
  this.inferenceEngine = deps.inferenceEngine;
4521
4528
  this.eventBus = deps.eventBus;
@@ -4525,6 +4532,7 @@ var SessionManager = class {
4525
4532
  this.identity = deps.identity;
4526
4533
  this.projectContext = deps.projectContext;
4527
4534
  this.agentRoot = deps.agentRoot;
4535
+ this.onMemoryWritten = deps.onMemoryWritten;
4528
4536
  if (deps.adapter) {
4529
4537
  this.conversationStore = new ConversationStore(deps.adapter);
4530
4538
  this.conversationStore.init();
@@ -4816,6 +4824,8 @@ Current task:`;
4816
4824
  this.addStep(sessionId, `Commands run: ${agentResult.commands_run.length}`);
4817
4825
  }
4818
4826
  this.addStep(sessionId, `Done (${agentResult.tokens_used} tokens, ${agentResult.iterations} LLM turns)`);
4827
+ this._extractAndPersistFacts(enrichedReq.task, agentResult.output, activeLLM).catch(() => {
4828
+ });
4819
4829
  this.completeSession(sessionId, {
4820
4830
  output: agentResult.output,
4821
4831
  files_written: agentResult.files_written,
@@ -4884,6 +4894,68 @@ Current task:`;
4884
4894
  }
4885
4895
  return this.llm;
4886
4896
  }
4897
+ /**
4898
+ * After every session, run a lightweight LLM pass to extract factual entities
4899
+ * (name, projects, tech, preferences, URLs) and persist them to the graph.
4900
+ * This catches everything the agent didn't explicitly memory_write during execution.
4901
+ */
4902
+ async _extractAndPersistFacts(task, output, llm) {
4903
+ if (!this.graph || !llm.isConfigured) return;
4904
+ const prompt = `Extract factual entities from this conversation that should be remembered long-term.
4905
+ Return ONLY a JSON array, no other text, max 12 items.
4906
+
4907
+ Types: identity (name/role), project (apps/products), tech (stack/tools), preference, url, path, config, outcome
4908
+
4909
+ Format: [{"label":"snake_case_key","content":"value to remember","type":"type"}]
4910
+
4911
+ Examples:
4912
+ - User says "my name is Sahil" \u2192 {"label":"user_name","content":"Sahil","type":"identity"}
4913
+ - User says "we have a telegram bot" \u2192 {"label":"project_telegram_bot","content":"user has a Telegram bot project","type":"project"}
4914
+ - User says "I use React and Next.js" \u2192 {"label":"tech_stack","content":"React, Next.js","type":"tech"}
4915
+
4916
+ Conversation:
4917
+ User: ${task.slice(0, 600)}
4918
+ Agent: ${output.slice(0, 400)}`;
4919
+ try {
4920
+ const resp = await llm.complete(
4921
+ [{ role: "user", content: prompt }],
4922
+ "You are a concise memory extraction system. Extract only factual, durable information. Skip generic statements."
4923
+ );
4924
+ const jsonMatch = resp.content.match(/\[[\s\S]*?\]/);
4925
+ if (!jsonMatch) return;
4926
+ const entities = JSON.parse(jsonMatch[0]);
4927
+ if (!Array.isArray(entities) || entities.length === 0) return;
4928
+ let wrote = 0;
4929
+ for (const e of entities.slice(0, 12)) {
4930
+ if (!e.label?.trim() || !e.content?.trim()) continue;
4931
+ const nodeId = `memory:${e.label.toLowerCase().replace(/[^a-z0-9_]/g, "_")}`;
4932
+ try {
4933
+ const existing = this.graph.getNode(nodeId);
4934
+ if (existing) {
4935
+ this.graph.updateNode(nodeId, {
4936
+ label: e.label,
4937
+ metadata: { ...existing.metadata, content: e.content, type: e.type ?? "note", updated_at: (/* @__PURE__ */ new Date()).toISOString() }
4938
+ });
4939
+ } else {
4940
+ this.graph.addNode(createNode({
4941
+ id: nodeId,
4942
+ graph_id: "root",
4943
+ label: e.label,
4944
+ type: "context" /* CONTEXT */,
4945
+ metadata: { content: e.content, type: e.type ?? "note", saved_at: (/* @__PURE__ */ new Date()).toISOString() }
4946
+ }));
4947
+ }
4948
+ wrote++;
4949
+ } catch {
4950
+ }
4951
+ }
4952
+ if (wrote > 0) {
4953
+ console.log(`[0agent] Memory: extracted ${wrote} facts from session`);
4954
+ this.onMemoryWritten?.();
4955
+ }
4956
+ } catch {
4957
+ }
4958
+ }
4887
4959
  /**
4888
4960
  * Convert a task result into a weight signal for the knowledge graph.
4889
4961
  *
@@ -6930,6 +7002,192 @@ var CodespaceManager = class {
 
 // packages/daemon/src/ZeroAgentDaemon.ts
 init_RuntimeSelfHeal();
+
+// packages/daemon/src/TelegramBridge.ts
+var TelegramBridge = class {
+  constructor(config, sessions, eventBus) {
+    this.config = config;
+    this.sessions = sessions;
+    this.eventBus = eventBus;
+    this.token = config.token;
+    this.allowedUsers = new Set(config.allowed_users ?? []);
+  }
+  token;
+  allowedUsers;
+  offset = 0;
+  pollTimer = null;
+  running = false;
+  // session_id per chat for streaming
+  pendingSessions = /* @__PURE__ */ new Map();
+  start() {
+    if (this.running) return;
+    this.running = true;
+    console.log("[0agent] Telegram: bot polling started");
+    this._poll();
+    this.eventBus.onEvent((event) => {
+      const chatId = this._getChatIdForSession(String(event.session_id ?? ""));
+      if (!chatId) return;
+      if (event.type === "session.completed") {
+        const result = event.result;
+        const output = String(result?.output ?? "").trim();
+        if (output && output !== "(no output)") {
+          this._send(chatId, output).catch(() => {
+          });
+        }
+        this.pendingSessions.delete(chatId);
+      } else if (event.type === "session.failed") {
+        const err = String(event.error ?? "Task failed");
+        this._send(chatId, `\u26A0\uFE0F ${err}`).catch(() => {
+        });
+        this.pendingSessions.delete(chatId);
+      }
+    });
+  }
+  stop() {
+    this.running = false;
+    if (this.pollTimer) {
+      clearTimeout(this.pollTimer);
+      this.pollTimer = null;
+    }
+  }
+  _getChatIdForSession(sessionId) {
+    for (const [chatId, sid] of this.pendingSessions) {
+      if (sid === sessionId) return chatId;
+    }
+    return null;
+  }
+  async _poll() {
+    if (!this.running) return;
+    try {
+      const updates = await this._getUpdates();
+      for (const u of updates) {
+        await this._handleUpdate(u);
+      }
+    } catch {
+    }
+    if (this.running) {
+      this.pollTimer = setTimeout(() => this._poll(), 1e3);
+    }
+  }
+  async _getUpdates() {
+    const res = await fetch(
+      `https://api.telegram.org/bot${this.token}/getUpdates?offset=${this.offset}&timeout=10&limit=20`,
+      { signal: AbortSignal.timeout(15e3) }
+    );
+    if (!res.ok) return [];
+    const data = await res.json();
+    if (!data.ok || !data.result.length) return [];
+    this.offset = data.result[data.result.length - 1].update_id + 1;
+    return data.result;
+  }
+  async _handleUpdate(u) {
+    const msg = u.message;
+    if (!msg?.text || !msg.from) return;
+    const chatId = msg.chat.id;
+    const userId = msg.from.id;
+    const text = msg.text.trim();
+    const userName = msg.from.first_name ?? msg.from.username ?? "User";
+    if (this.allowedUsers.size > 0 && !this.allowedUsers.has(userId)) {
+      await this._send(chatId, "\u26D4 You are not authorised to use this agent.");
+      return;
+    }
+    if (text === "/start" || text === "/help") {
+      await this._send(
+        chatId,
+        `\u{1F44B} Hi ${userName}! I'm 0agent \u2014 your AI that runs on your machine.
+
+Send me any task and I'll get it done:
+\u2022 "make a website for my coffee shop"
+\u2022 "research my competitor's pricing"
+\u2022 "fix the bug in auth.ts"
+
+I remember everything across sessions.`
+      );
+      return;
+    }
+    if (text === "/status") {
+      try {
+        const r = await fetch("http://localhost:4200/api/health", { signal: AbortSignal.timeout(2e3) });
+        const h = await r.json();
+        await this._send(
+          chatId,
+          `\u2705 Daemon running
+Graph: ${h.graph_nodes} nodes \xB7 ${h.graph_edges} edges
+Sessions: ${h.active_sessions} active`
+        );
+      } catch {
+        await this._send(chatId, "\u26A0\uFE0F Daemon not reachable");
+      }
+      return;
+    }
+    await this._sendAction(chatId, "typing");
+    await this._send(chatId, `\u23F3 Working on it\u2026`);
+    try {
+      const res = await fetch("http://localhost:4200/api/sessions", {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify({
+          task: text,
+          context: { system_context: `User's name: ${userName}. Message from Telegram.` }
+        }),
+        signal: AbortSignal.timeout(5e3)
+      });
+      const session = await res.json();
+      const sessionId = session.session_id ?? session.id;
+      if (sessionId) {
+        this.pendingSessions.set(chatId, sessionId);
+      } else {
+        await this._send(chatId, "\u26A0\uFE0F Could not start session");
+      }
+    } catch (e) {
+      await this._send(chatId, `\u26A0\uFE0F Error: ${e instanceof Error ? e.message : String(e)}`);
+    }
+  }
+  async _send(chatId, text) {
+    const chunks = this._splitMessage(text);
+    for (const chunk of chunks) {
+      await fetch(`https://api.telegram.org/bot${this.token}/sendMessage`, {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify({ chat_id: chatId, text: chunk, parse_mode: "Markdown" }),
+        signal: AbortSignal.timeout(1e4)
+      }).catch(() => {
+        return fetch(`https://api.telegram.org/bot${this.token}/sendMessage`, {
+          method: "POST",
+          headers: { "Content-Type": "application/json" },
+          body: JSON.stringify({ chat_id: chatId, text: chunk }),
+          signal: AbortSignal.timeout(1e4)
+        }).catch(() => {
+        });
+      });
+    }
+  }
+  async _sendAction(chatId, action) {
+    await fetch(`https://api.telegram.org/bot${this.token}/sendChatAction`, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({ chat_id: chatId, action }),
+      signal: AbortSignal.timeout(5e3)
+    }).catch(() => {
+    });
+  }
+  _splitMessage(text) {
+    if (text.length <= 4e3) return [text];
+    const chunks = [];
+    let i = 0;
+    while (i < text.length) {
+      chunks.push(text.slice(i, i + 4e3));
+      i += 4e3;
+    }
+    return chunks;
+  }
+  static isConfigured(config) {
+    const c = config;
+    return !!(c?.token && typeof c.token === "string" && c.token.length > 10);
+  }
+};
+
+// packages/daemon/src/ZeroAgentDaemon.ts
 import { fileURLToPath as fileURLToPath3 } from "node:url";
 import { dirname as dirname6 } from "node:path";
 var ZeroAgentDaemon = class {
@@ -6949,6 +7207,7 @@ var ZeroAgentDaemon = class {
   codespaceManager = null;
   schedulerManager = null;
   runtimeHealer = null;
+  telegramBridge = null;
   startedAt = 0;
   pidFilePath;
   constructor() {
@@ -7042,8 +7301,12 @@ var ZeroAgentDaemon = class {
       identity: identity ?? void 0,
       projectContext: projectContext ?? void 0,
       adapter: this.adapter,
-      agentRoot
+      agentRoot,
       // agent source path — self-improvement tasks read the right files
+      // Mark GitHub memory dirty immediately when facts are extracted — pushes within 2min
+      onMemoryWritten: () => {
+        this.githubMemorySync?.markDirty();
+      }
     });
     const teamSync = identity && teams.length > 0 ? new TeamSync(teamManager, this.adapter, identity.entity_node_id) : null;
     if (this.githubMemorySync) {
@@ -7055,7 +7318,7 @@ var ZeroAgentDaemon = class {
           console.log(`[0agent] Memory auto-synced: ${result.nodes_synced} nodes`);
         }
       }
-    }, 30 * 60 * 1e3);
+    }, 2 * 60 * 1e3);
     if (typeof this.memorySyncTimer === "object") this.memorySyncTimer.unref?.();
     }
     let proactiveSurface = null;
@@ -7085,6 +7348,11 @@ var ZeroAgentDaemon = class {
     }
     this.schedulerManager = new SchedulerManager(this.adapter, this.sessionManager, this.eventBus);
     this.schedulerManager.start();
+    const tgCfg = this.config["telegram"];
+    if (TelegramBridge.isConfigured(tgCfg) && this.sessionManager && this.eventBus) {
+      this.telegramBridge = new TelegramBridge(tgCfg, this.sessionManager, this.eventBus);
+      this.telegramBridge.start();
+    }
     this.backgroundWorkers = new BackgroundWorkers({
       graph: this.graph,
       traceStore: this.traceStore,
@@ -7156,6 +7424,8 @@ var ZeroAgentDaemon = class {
       this.memorySyncTimer = null;
     }
     this.githubMemorySync = null;
+    this.telegramBridge?.stop();
+    this.telegramBridge = null;
     this.schedulerManager?.stop();
     this.schedulerManager = null;
     this.codespaceManager?.closeTunnel();
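The `_splitMessage` helper in the new `TelegramBridge` exists because Telegram's `sendMessage` rejects texts over 4096 characters, so replies are cut into 4000-character chunks before sending. A standalone sketch of the same chunking logic, with the function body lifted from the bundled source above:

```javascript
// Chunk a reply so each piece fits under Telegram's 4096-character
// sendMessage limit; mirrors TelegramBridge._splitMessage above.
function splitMessage(text, limit = 4000) {
  if (text.length <= limit) return [text];
  const chunks = [];
  let i = 0;
  while (i < text.length) {
    chunks.push(text.slice(i, i + limit));
    i += limit;
  }
  return chunks;
}

console.log(splitMessage("short").length);          // 1
console.log(splitMessage("x".repeat(9000)).length); // 3 (4000 + 4000 + 1000)
```

Note the chunks are cut at fixed byte-free character offsets, so a split can land mid-word or mid-Markdown; the `_send` fallback that retries without `parse_mode: "Markdown"` covers the case where a chunk breaks Telegram's Markdown parsing.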
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "0agent",
3
- "version": "1.0.42",
3
+ "version": "1.0.44",
4
4
  "description": "A persistent, learning AI agent that runs on your machine. An agent that learns.",
5
5
  "private": false,
6
6
  "license": "Apache-2.0",
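For context on the new Telegram bridge: the daemon only starts it when `TelegramBridge.isConfigured(this.config["telegram"])` passes. The sketch below shows what that gate accepts; `isConfigured` is copied from the bundle, while the shape of the `telegram` block (`token`, `allowed_users`) is inferred from the constructor, and how the config file is written is not shown in this diff:

```javascript
// isConfigured is copied verbatim from the bundled source; it only checks
// for a plausible-length bot token string.
function isConfigured(c) {
  return !!(c?.token && typeof c.token === "string" && c.token.length > 10);
}

// Hypothetical config fragment; values are placeholders, not real credentials.
const telegram = {
  token: "123456789:AAExampleBotTokenValue", // bot token from @BotFather
  allowed_users: [12345678]                  // Telegram user IDs allowed to chat
};

console.log(isConfigured(telegram));       // true
console.log(isConfigured({ token: "x" })); // false (token too short)
```

When `allowed_users` is empty or omitted, the bridge's `_handleUpdate` accepts messages from anyone who finds the bot, since the `allowedUsers.size > 0` check short-circuits.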