zubo 0.1.22 → 0.1.23

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,31 @@
+ [ 127722ms] [ERROR] Failed to load resource: net::ERR_CONNECTION_REFUSED @ http://localhost:57108/api/dashboard/memory:0
+ [ 127722ms] [ERROR] Failed to load resource: net::ERR_CONNECTION_REFUSED @ http://localhost:57108/api/dashboard/memory/recent:0
+ [ 127722ms] [WARNING] Dashboard API request failed TypeError: Failed to fetch
+     at api (http://localhost:57108/:2291:10)
+     at loadRecentMemories (http://localhost:57108/:2795:3)
+     at loadMemory (http://localhost:57108/:2756:5)
+     at showPanel (http://localhost:57108/:2089:26)
+     at HTMLAnchorElement.onclick (http://localhost:57108/#status:929:53) @ http://localhost:57108/:2796
+ [ 127754ms] TypeError: Failed to fetch
+     at api (http://localhost:57108/:2291:10)
+     at loadMemory (http://localhost:57108/:2750:3)
+     at showPanel (http://localhost:57108/:2089:26)
+     at HTMLAnchorElement.onclick (http://localhost:57108/#status:929:53)
+ [ 127724ms] [ERROR] Failed to load resource: net::ERR_CONNECTION_REFUSED @ http://localhost:57108/api/dashboard/memory:0
+ [ 127754ms] TypeError: Failed to fetch
+     at api (http://localhost:57108/:2291:10)
+     at loadMemory (http://localhost:57108/:2750:3)
+     at showPanel (http://localhost:57108/:2089:26)
+     at routeFromHash (http://localhost:57108/:2277:3)
+ [ 143412ms] [ERROR] Failed to load resource: net::ERR_CONNECTION_REFUSED @ http://localhost:57108/api/dashboard/skills:0
+ [ 143463ms] TypeError: Failed to fetch
+     at api (http://localhost:57108/:2291:10)
+     at loadSkills (http://localhost:57108/:2818:3)
+     at showPanel (http://localhost:57108/:2090:26)
+     at HTMLAnchorElement.onclick (http://localhost:57108/#status:932:53)
+ [ 143412ms] [ERROR] Failed to load resource: net::ERR_CONNECTION_REFUSED @ http://localhost:57108/api/dashboard/skills:0
+ [ 143463ms] TypeError: Failed to fetch
+     at api (http://localhost:57108/:2291:10)
+     at loadSkills (http://localhost:57108/:2818:3)
+     at showPanel (http://localhost:57108/:2090:26)
+     at routeFromHash (http://localhost:57108/:2277:3)
package/README.md CHANGED
@@ -35,7 +35,8 @@
  - **Workflows** — Multi-agent pipelines with delegation
  - **Natural language scheduling** — "Every weekday at 9am" just works. Cron jobs, heartbeat, proactive tasks.
  - **Voice** — Speech-to-text (Whisper) and text-to-speech (OpenAI, ElevenLabs)
- - **Dashboard** — Built-in web UI with analytics, memory management, and settings
+ - **Personal tools** — Todos, notes, preferences, topics, and follow-ups all manageable from the dashboard or via chat
+ - **Dashboard** — Built-in web UI with analytics, memory management, personal tools, and settings
  - **Document ingestion** — Upload PDF, DOCX, TXT, CSV, JSON, and more
  - **Budget controls** — Daily/monthly spending limits with per-model cost tracking
  - **100% local** — SQLite database, local vector store. Your data never leaves your machine.
Binary file (×20)
package/docs/ROADMAP.md CHANGED
@@ -439,52 +439,15 @@ Shows:
 
 ---
 
- ## Implementation Priority
-
- What to build first, based on impact vs. effort:
-
- ```
- HIGH IMPACT, LOW EFFORT (do first)
- ├── Phase 1.1 OpenAI-compatible provider ★★★★★
- ├── Phase 1.2 Provider factory + config ★★★★★
- ├── Phase 1.5 `bun run model` command ★★★★
- ├── Phase 7.1 TUI / `bun run chat` ★★★★
- ├── Phase 4.3 Cost + token tracking ★★★★
- └── Phase 5.2 Agent-managed cron tools ★★★★
-
- HIGH IMPACT, MEDIUM EFFORT (do second)
- ├── Phase 1.3 Failover wrapper ★★★★
- ├── Phase 3.1 Skill format + loader ★★★★
- ├── Phase 3.3 web-search + url-fetch skills ★★★★
- ├── Phase 2.4 WebChat (local HTTP UI) ★★★★
- ├── Phase 4.1 Streaming responses ★★★
- └── Phase 4.4 Context window awareness ★★★
-
- MEDIUM IMPACT, MEDIUM EFFORT (do third)
- ├── Phase 2.2 Discord channel ★★★
- ├── Phase 2.3 WhatsApp channel ★★★
- ├── Phase 5.1 Webhook inbound ★★★
- ├── Phase 6.1 Tool approval / permissions ★★★
- └── Phase 5.3 Daily digest ★★★
-
- NICE TO HAVE (do when stable)
- ├── Phase 1.4 Ollama auto-discovery ★★
- ├── Phase 4.2 Extended thinking ★★
- ├── Phase 6.2 Message pairing / auth ★★
- └── Phase 7.2 Web dashboard ★★
- ```
-
- ---
-
- ## Next Concrete Step
-
- **Phase 1.1 + 1.2**: Multi-LLM support. One session of work:
-
- 1. Create `src/llm/openai-compat.ts` — OpenAI-compatible provider
- 2. Create `src/llm/factory.ts` — Provider factory
- 3. Update `src/config/schema.ts` — New provider config fields
- 4. Update `src/start.ts` — Use factory
- 5. Update `src/setup.ts` — Provider selection in wizard
- 6. Test with Ollama locally
-
- This single change makes Zubo usable with any model, which unlocks everything else.
+ ## Completed
+
+ Most phases above are now implemented:
+
+ - Phase 1 (Multi-LLM) — 11+ providers, failover, smart routing, CLI providers (Codex, Claude Code)
+ - Phase 2 (Channels) — Telegram, Discord, Slack, WhatsApp, Signal, Email, WebChat
+ - Phase 3 (Skills) — Skill loader, sandboxed execution, community registry, MCP support
+ - Phase 4 (Agent) — Streaming, cost tracking, context window awareness, knowledge graph
+ - Phase 5 (Automation) — Webhooks, agent-managed cron, daily digests, follow-ups
+ - Phase 6 (Security) — Tool permissions (auto/confirm/deny), confirmation tokens, API key auth
+ - Phase 7 (Interfaces) — Web dashboard with analytics, memory, skills, settings, and personal tools (todos, notes, preferences, topics, follow-ups)
+ - Personal features — Todos, notes, preferences, topics, follow-ups with full CRUD in the dashboard
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "zubo",
- "version": "0.1.22",
+ "version": "0.1.23",
  "description": "Your AI agent that never forgets. Persistent memory, 25+ tools, 7 channels, 11+ LLM providers — runs entirely on your machine.",
  "license": "MIT",
  "author": "thomaskanze",
@@ -1,6 +1,7 @@
  import type { LlmProvider } from "../llm/provider";
  import { getAgentDefinition } from "./agents";
  import { agentLoop } from "./loop";
+ import { getAllToolDefs } from "../tools/registry";
  import { searchMemory } from "../memory/engine";
  import { getDb } from "../db/connection";
  import { logger } from "../util/logger";
@@ -107,7 +108,11 @@ export async function delegateToAgent(
  "delegate", "delegate_task", "manage_agents",
  "config_update", "secret_set", "secret_delete", "manage_skills",
  ]);
- const filteredTools = agent.tools.filter(
+ // Support wildcard "*" — gives the agent all non-forbidden tools dynamically
+ const baseTools = agent.tools.includes("*")
+   ? getAllToolDefs().map(t => t.name)
+   : agent.tools;
+ const filteredTools = baseTools.filter(
  (t) => !FORBIDDEN_DELEGATION_TOOLS.has(t)
  );
 
@@ -115,7 +120,7 @@ export async function delegateToAgent(
  try {
  const result = await agentLoop(llm, sessionId, task, {
  systemPromptOverride: systemPrompt,
- allowedTools: filteredTools.length > 0 ? filteredTools : undefined,
+ allowedTools: filteredTools,
  maxRounds: 8,
  memories,
  });
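The wildcard change above can be sketched in isolation. This is a minimal standalone reproduction, assuming the registry shape from the diff; the sample tool list below is hypothetical and only stands in for the real `getAllToolDefs` in `../tools/registry`:

```typescript
type ToolDef = { name: string };

// Hypothetical stand-in for the real registry in ../tools/registry
const getAllToolDefs = (): ToolDef[] => [
  { name: "web_search" },
  { name: "todo_add" },
  { name: "delegate" },
  { name: "manage_agents" },
];

// Subset of the forbidden set from the diff, for illustration
const FORBIDDEN_DELEGATION_TOOLS = new Set(["delegate", "manage_agents"]);

// "*" expands to every registered tool; forbidden tools are
// stripped whether the list was explicit or wildcard-expanded.
function resolveDelegationTools(agentTools: string[]): string[] {
  const baseTools = agentTools.includes("*")
    ? getAllToolDefs().map((t) => t.name)
    : agentTools;
  return baseTools.filter((t) => !FORBIDDEN_DELEGATION_TOOLS.has(t));
}

console.log(resolveDelegationTools(["*"])); // ["web_search", "todo_add"]
console.log(resolveDelegationTools(["web_search", "delegate"])); // ["web_search"]
```

Note that `allowedTools` is now always passed through, so a sub-agent whose tools all get filtered out ends up with an empty tool list rather than (as before) every tool.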
package/src/agent/loop.ts CHANGED
@@ -39,12 +39,12 @@ function resolveOptions(memoriesOrOptions: string | AgentLoopOptions): AgentLoop
  : memoriesOrOptions;
  }
 
- /** Detect simple greetings/chat that don't need tool definitions in context. */
+ /** Detect standalone greetings that don't need tool definitions in context. */
  function looksConversational(text: string): boolean {
- const t = text.trim().toLowerCase();
- if (t.split(/\s+/).length > 8) return false; // longer messages likely need tools
- const greetings = /^(h(ello|i|ey|owdy|ola)|yo|sup|good\s*(morning|afternoon|evening|night)|what'?s\s*up|gm|thanks|thank\s*you|ok(ay)?|bye|see\s*ya|cool|nice|wow|lol|haha)\b/;
- return greetings.test(t);
+ const t = text.trim().toLowerCase().replace(/[!?.,:;]+$/g, "");
+ // Only match messages that are PURELY a greeting — no follow-up request
+ const standaloneGreetings = /^(h(ello|i|ey|owdy|ola)|yo|sup|good\s*(morning|afternoon|evening|night)|what'?s\s*up|gm|thanks|thank\s*you|ok(ay)?|bye|see\s*ya|cool|nice|wow|lol|haha)$/;
+ return standaloneGreetings.test(t);
  }
 
  async function prepareLoop(
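The hunk above replaces a prefix match (`\b` anchor plus a word-count cutoff) with a full match (`$` anchor), so a greeting followed by a real request no longer skips tool loading. The new predicate, reproduced standalone so the behavior change can be exercised directly:

```typescript
// New predicate from the diff: trailing punctuation is stripped, then the
// whole message must be a greeting — not merely start with one.
function looksConversational(text: string): boolean {
  const t = text.trim().toLowerCase().replace(/[!?.,:;]+$/g, "");
  const standaloneGreetings = /^(h(ello|i|ey|owdy|ola)|yo|sup|good\s*(morning|afternoon|evening|night)|what'?s\s*up|gm|thanks|thank\s*you|ok(ay)?|bye|see\s*ya|cool|nice|wow|lol|haha)$/;
  return standaloneGreetings.test(t);
}

console.log(looksConversational("Hey!"));                // true
console.log(looksConversational("good morning"));        // true
console.log(looksConversational("hey, check my todos")); // false — old prefix match returned true
```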
@@ -66,8 +66,15 @@ const DEFAULT_PERSONALITY = `You are Zubo, a personal AI agent. You are friendly
 
  - Create specialized sub-agents with manage_agents for recurring task types (research, code review, data analysis).
  - Delegate tasks using the delegate tool. Sub-agents share your memory but have scoped tools.
+ - When creating agents, use "*" in the tools list to give them access to all available tools dynamically (including skills and MCP tools installed later).
  - Keep the main conversation lightweight. Offload complex, self-contained tasks.
 
+ ## MCP (external tool servers)
+
+ - You may have tools from MCP (Model Context Protocol) servers. These are prefixed with the server name, e.g. "servername__toolname".
+ - MCP tools work like any other tool — just call them. Check your available tools list for MCP-provided capabilities.
+ - If an MCP tool fails, it may mean the server is disconnected. Let the user know.
+
  ## Connecting services (integrations)
 
  Service integrations and LLM providers are COMPLETELY SEPARATE concepts. Never confuse them:
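The `servername__toolname` convention described in the new prompt section can be illustrated with a small parser. This is a hedged sketch only — `parseMcpToolName` is a hypothetical helper written for this note, not a function in the diff:

```typescript
// Split an MCP-prefixed tool name on the first "__" separator.
// Returns null for built-in tools that carry no server prefix.
function parseMcpToolName(name: string): { server: string; tool: string } | null {
  const i = name.indexOf("__");
  if (i <= 0) return null; // no prefix, or empty server name
  return { server: name.slice(0, i), tool: name.slice(i + 2) };
}

console.log(parseMcpToolName("github__create_issue")); // { server: "github", tool: "create_issue" }
console.log(parseMcpToolName("web_search"));           // null — single underscore, not a prefix
```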
@@ -3,6 +3,7 @@ import type { WorkflowDefinition, WorkflowStep } from "./workflow";
  import { getWorkflowDefinition } from "./workflow";
  import { delegateToAgent } from "./delegate";
  import { agentLoop } from "./loop";
+ import { getAllToolDefs } from "../tools/registry";
  import { getDb } from "../db/connection";
  import { logger } from "../util/logger";
 
@@ -158,7 +159,10 @@ async function executeStep(
  try {
  if (step.agent === "main") {
  const sessionId = `workflow:${executionId}:${step.name}`;
- const result = await agentLoop(llm, sessionId, task, { maxRounds: 8 });
+ const result = await agentLoop(llm, sessionId, task, {
+   maxRounds: 8,
+   allowedTools: getAllToolDefs().map(t => t.name),
+ });
  output = result.reply;
  } else {
  output = await delegateToAgent(llm, step.agent, task);