obol-ai 0.2.17 → 0.2.18
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +7 -0
- package/README.md +42 -15
- package/docs/obol-status.png +0 -0
- package/package.json +1 -1
- package/src/bridge.js +4 -2
- package/src/telegram/commands/status.js +5 -16
- package/src/telegram/handlers/callbacks.js +60 -0
- package/src/telegram/handlers/text.js +2 -2
package/CHANGELOG.md
CHANGED

@@ -1,3 +1,10 @@
+## 0.2.18
+- remove evolution progress bar from status UI
+- bidirectional bridge with reply button + memory_remove tool
+- update background tasks section in readme
+- add status UI screenshot to readme
+- update readme with stop controls, commands, and model escalation
+
 ## 0.2.17
 - add force stop button to instantly abort mid-tool execution
 - replace web_fetch with native web_search tool
package/README.md
CHANGED

@@ -22,7 +22,7 @@ obol start -d  # runs as background daemon (auto-installs pm2)
 
 🧠 **Living memory** – Vector memory with semantic search. Haiku routes queries and rewrites them for better embedding hits. Free local embeddings.
 
-🤖 **Smart routing** – Haiku decides per-message: does it need memory? Sonnet or Opus? No wasted API calls
+🤖 **Smart routing** – Haiku decides per-message: does it need memory? Sonnet or Opus? Auto-escalates to Sonnet when tool use is needed. No wasted API calls
 
 💰 **Prompt caching** – Static system prompt and conversation history prefix are cached via Anthropic's prompt caching, cutting ~85% of repeated input token costs across turns
 
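The ~85% figure above relies on Anthropic's prompt caching, where a `cache_control` breakpoint on the static system prompt lets repeated prefix tokens be billed at the cheaper cache-read rate. A minimal sketch of the request shape, assuming hypothetical `buildRequest`/`soulPrompt` names that are not from this package:

```javascript
// Sketch of a Messages API request with a prompt-caching breakpoint.
// `soulPrompt` stands in for OBOL's static system prompt; the model id is the
// one that appears elsewhere in this diff.
const soulPrompt = 'You are OBOL, a self-evolving assistant with persistent memory.';

function buildRequest(history, userMessage) {
  return {
    model: 'claude-haiku-4-5-20251001',
    max_tokens: 1024,
    // Everything up to a cache_control breakpoint is cached; identical
    // prefixes on later turns are read from cache instead of re-billed.
    system: [
      { type: 'text', text: soulPrompt, cache_control: { type: 'ephemeral' } },
    ],
    messages: [...history, { role: 'user', content: userMessage }],
  };
}

console.log(buildRequest([], 'hello').system[0].cache_control.type); // 'ephemeral'
```

The conversation-history prefix mentioned above can be cached the same way by placing a breakpoint on the last stable message of the history.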
@@ -59,9 +59,9 @@ User message
     ↓                ↓
 Memory recall    Model selection
     ↓                ↓
-Multi-query      Sonnet (
-ranked recall
-    ↓
+Multi-query      Haiku → Sonnet (auto-
+ranked recall    escalates on tool use)
+    ↓            or Opus (complex)
 ────────┬───────
         ↓
 Claude (tool use loop)
@@ -199,21 +199,32 @@ Month 6: evolution/ has 180+ archived souls
 
 ### Background Tasks
 
-Heavy work runs in the background. The main conversation stays responsive.
+Heavy work runs in the background with its own live status UI. The main conversation stays responsive – you can keep chatting while tasks run.
 
 ```
 You: "research the best coworking spaces in Barcelona"
-OBOL:
-
-[30s] ⏳ Found 15 spaces, filtering by reviews...
-[60s] ⏳ Narrowed to top 7, checking prices...
+OBOL: spawns BG #1 with live status
 
 You: "what time is it?"
 OBOL: "11:42 PM CET"
 
-
+✅ BG #1 done (1m 32s)
+Here are the top 5 coworking spaces: ...
 ```
 
+### Live Status & Stop Controls
+
+
+
+Every request shows a live status message with elapsed time, model routing info, and what tools are being used. Two inline buttons let you cancel:
+
+| Button | Behavior |
+|--------|----------|
+| **Stop** | Cancels after the current API call finishes |
+| **Force Stop** | Instantly aborts mid-tool – races the handler and returns immediately |
+
+The `/stop` command also works as a text alternative.
+
 ## Multi-User Architecture
 
 One Telegram bot token, one Node.js process, full per-user isolation.
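The Force Stop behavior documented in the hunk above ("races the handler and returns immediately") can be approximated by racing the in-flight tool promise against an abort promise. This is an illustrative sketch, not the package's actual code; `forceStoppable` and `ForceStopError` are made-up names:

```javascript
// Race a pending tool-handler promise against an abort trigger so the caller
// returns immediately, even while the tool is still mid-flight.
class ForceStopError extends Error {}

function forceStoppable(promise) {
  let abort;
  const aborted = new Promise((_, reject) => {
    abort = () => reject(new ForceStopError('force-stopped'));
  });
  return { result: Promise.race([promise, aborted]), abort };
}

// A slow "tool" (50 ms) abandoned instantly by the abort button.
const slowTool = new Promise((resolve) => setTimeout(resolve, 50, 'done'));
const { result, abort } = forceStoppable(slowTool);
abort();
result.catch((e) => console.log(e instanceof ForceStopError)); // true
```

Note the abandoned promise keeps running in the background; a real implementation must also discard its eventual result.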
@@ -275,29 +286,39 @@ Each new user starts fresh. Their bot evolves independently from every other use
 
 ### Bridge (couples / roommates / teams)
 
-When two users share the same OBOL instance, their agents can talk to each other.
+When two users share the same OBOL instance, their agents can talk to each other – bidirectionally.
 
 ```
 User A: "what does Jo want for dinner tonight?"
 Agent A: → bridge_ask → Agent B (one-shot, no tools, no history)
 Agent B: "Jo mentioned craving Thai food earlier today"
 Agent A: "Jo's been wanting Thai – maybe suggest pad see ew?"
+
+Jo gets: "🚪 Your partner's agent asked: 'what does Jo want for dinner?'
+Your agent answered: 'Jo mentioned craving Thai food earlier today'"
 ```
 
 ```
 User A: "remind Jo I'll be home late"
 Agent A: → bridge_tell → stores in Agent B's memory + Telegram notification
-
+
+Jo gets: "🚪 Message from your partner's agent:
+'I'll be home late'"
+[↩ Reply]
+
+Jo taps Reply → Jo's agent reads recent bridge context, composes a reply
+→ sends back via bridge_tell
+A gets: "🚪 Message from your partner's agent: 'Got it, I'll start dinner around 7'"
 ```
 
 Two tools:
 
 | Tool | Direction | What happens |
 |------|-----------|--------------|
-| `bridge_ask` | A → B → A | Query the partner's agent. One-shot
-| `bridge_tell` | A → B | Send a message to the partner. Stored in their memory (importance 0.6) + Telegram notification.
+| `bridge_ask` | A → B → A | Query the partner's agent. One-shot Haiku call with partner's personality + memories. No tools, no history, no recursion risk. Partner is notified with both the question and your agent's answer. |
+| `bridge_tell` | A → B (↩ B → A) | Send a message to the partner. Stored in their memory (importance 0.6) + Telegram notification with a Reply button. Tapping Reply has their agent compose a contextual response and send it back – no typing needed. |
 
-The partner always gets notified when their agent is contacted. Privacy rules apply – the responding agent gives summaries, never raw data or secrets.
+The partner always gets notified when their agent is contacted. Privacy rules apply – the responding agent gives summaries, never raw data or secrets. Rate-limited to 20 bridge calls per user per hour.
 
 Enable during `obol init` (auto-prompted when 2+ users are added) or toggle later with `obol config` → Bridge.
 
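A sliding-window limiter consistent with the "20 bridge calls per user per hour" rule and the `checkBridgeRateLimit` / `if (rateErr)` pattern visible in the callbacks handler diff. This is a sketch only; the package's real implementation in `src/bridge.js` may differ:

```javascript
// Sliding-window rate limiter: returns an error string when the hourly budget
// is spent (matching the `if (rateErr) return answer({ text: rateErr })`
// usage), or null when the call is allowed.
const WINDOW_MS = 60 * 60 * 1000;
const LIMIT = 20;
const callLog = new Map(); // userId -> timestamps of recent bridge calls

function checkBridgeRateLimit(userId, now = Date.now()) {
  const recent = (callLog.get(userId) || []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) return 'Bridge rate limit reached (20/hour)';
  recent.push(now);
  callLog.set(userId, recent);
  return null;
}

for (let i = 0; i < 20; i++) checkBridgeRateLimit(1);
console.log(checkBridgeRateLimit(1)); // 'Bridge rate limit reached (20/hour)'
```

Filtering on read keeps the window sliding: a call made 61 minutes ago no longer counts against the budget.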
@@ -471,6 +492,7 @@ Or edit `~/.obol/config.json` directly:
 /memory – Search or view memory stats
 /recent – Last 10 memories
 /today – Today's memories
+/events – Show upcoming scheduled events
 /tasks – Running background tasks
 /status – Bot status, uptime, evolution progress, traits
 /backup – Trigger GitHub backup
@@ -478,6 +500,11 @@ Or edit `~/.obol/config.json` directly:
 /traits – View or adjust personality traits (0-100)
 /secret – Manage per-user encrypted secrets
 /evolution – Evolution progress
+/verbose – Toggle verbose mode on/off
+/toolimit – View or set max tool iterations per message
+/tools – Toggle optional tools on/off
+/stop – Stop the current request
+/upgrade – Check for updates and upgrade
 /help – Show available commands
 ```
 
package/docs/obol-status.png
Binary file
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "obol-ai",
-  "version": "0.2.17",
+  "version": "0.2.18",
   "description": "Self-evolving AI assistant that learns, remembers, and acts on its own. Persistent vector memory, self-rewriting personality, proactive heartbeats.",
   "main": "src/index.js",
   "bin": {
package/src/bridge.js
CHANGED

@@ -106,7 +106,8 @@ async function bridgeAsk(question, fromUserId, config, notifyFn, targetId) {
 
   if (notifyFn) {
     try {
-
+      const preview = answer.length > 200 ? `${answer.substring(0, 200)}…` : answer;
+      await notifyFn(partnerUserId, `🚪 Your partner's agent asked: "${question}"\nYour agent answered: "${preview}"`);
     } catch (e) {
       console.error(`[bridge] Notify failed for ${partnerUserId}:`, e.message);
     }

@@ -170,7 +171,8 @@ async function bridgeTell(message, fromUserId, config, notifyFn, targetId) {
 
   if (notifyFn) {
     try {
-
+      const replyMarkup = { inline_keyboard: [[{ text: '↩ Reply', callback_data: `bridge:reply:${fromUserId}` }]] };
+      await notifyFn(partnerUserId, `🚪 Message from your partner's agent:\n"${message}"`, { reply_markup: replyMarkup });
     } catch (e) {
       console.error(`[bridge] Notify failed for ${partnerUserId}:`, e.message);
     }
package/src/telegram/commands/status.js
CHANGED

@@ -1,8 +1,7 @@
 const path = require('path');
 const { getTenant } = require('../../tenant');
-const { loadConfig } = require('../../config');
 const { loadTraits } = require('../../personality');
-const {
+const { loadEvolutionState } = require('../../evolve');
 const { getMaxToolIterations } = require('../../claude');
 const { termBar, formatTraits } = require('../utils');
 const { TERM_SEP } = require('../constants');

@@ -44,16 +43,13 @@ function register(bot, config) {
   );
 
   const evoState = loadEvolutionState(tenant.userDir);
-  const cfg = loadConfig();
-  const intervalHours = cfg?.evolution?.intervalHours ?? 24;
-  const elapsed = evoState.lastEvolution ? (Date.now() - new Date(evoState.lastEvolution).getTime()) / 3600000 : Infinity;
-  const evoPct = Math.min(100, Math.round((elapsed / intervalHours) * 100));
-  const timeLeft = Math.max(0, intervalHours - elapsed);
   lines.push(
     ``, `EVOLUTION`,
-    ` ${
-    ` ${timeLeft < 1 ? 'ready' : `${timeLeft.toFixed(1)}h remaining`} ▪ ${evoState.evolutionCount || 0} completed`,
+    ` ${evoState.evolutionCount || 0} completed`,
   );
+  if (evoState.lastEvolution) {
+    lines.push(` last ${new Date(evoState.lastEvolution).toLocaleDateString()}`);
+  }
 
   const personalityDir = path.join(tenant.userDir, 'personality');
   const traits = loadTraits(personalityDir);

@@ -68,18 +64,11 @@ function register(bot, config) {
   if (!ctx.from) return;
   const tenant = await getTenant(ctx.from.id, config);
   const state = loadEvolutionState(tenant.userDir);
-  const cfg = loadConfig();
-  const intervalHours = cfg?.evolution?.intervalHours ?? 24;
-  const elapsed = state.lastEvolution ? (Date.now() - new Date(state.lastEvolution).getTime()) / 3600000 : Infinity;
-  const pct = Math.min(100, Math.round((elapsed / intervalHours) * 100));
-  const timeLeft = Math.max(0, intervalHours - elapsed);
 
   const lines = [
     `OBOL EVOLUTION CYCLE`,
     TERM_SEP,
     ``,
-    ` ${termBar(pct)} ${pct}%`,
-    ` ${timeLeft < 1 ? 'ready' : `${timeLeft.toFixed(1)}h remaining`}`,
     ` ${state.evolutionCount || 0} completed`,
   ];
   if (state.lastEvolution) {
package/src/telegram/handlers/callbacks.js
CHANGED

@@ -36,6 +36,66 @@ function registerCallbackHandler(bot, { config, pendingAsks, getTenant }) {
     return;
   }
 
+  if (data.startsWith('bridge:reply:')) {
+    const targetUserId = parseInt(data.split(':')[2]);
+    const reactingUserId = ctx.from.id;
+
+    const tenant = await getTenant(reactingUserId, config);
+    if (!tenant) return answer({ text: 'Could not load your agent' });
+
+    const { checkBridgeRateLimit, bridgeTell } = require('../../bridge');
+    const rateErr = checkBridgeRateLimit(reactingUserId);
+    if (rateErr) return answer({ text: rateErr });
+
+    let memoryContext = '';
+    if (tenant.memory) {
+      try {
+        const memories = await tenant.memory.search('message from partner bridge', { limit: 5, threshold: 0.3 });
+        if (memories.length > 0) {
+          memoryContext = '\n\n[Recent bridge messages]\n' + memories.map(m => `- ${m.content}`).join('\n');
+        }
+      } catch {}
+    }
+
+    const systemParts = [
+      'Compose a brief, natural reply to send back to your partner\'s agent via bridge. 1-3 sentences. Be genuine and respond to the most recent message from them.',
+    ];
+    if (tenant.personality?.soul) systemParts.push(`\n## Your Personality\n${tenant.personality.soul}`);
+    if (tenant.personality?.user) systemParts.push(`\n## About You\n${tenant.personality.user}`);
+    if (memoryContext) systemParts.push(memoryContext);
+
+    let replyText;
+    try {
+      const response = await tenant.claude.client.messages.create({
+        model: 'claude-haiku-4-5-20251001',
+        max_tokens: 256,
+        system: systemParts.join('\n'),
+        messages: [{ role: 'user', content: 'Compose your reply to send via bridge.' }],
+      });
+      replyText = response.content.filter(b => b.type === 'text').map(b => b.text).join('\n').trim();
+    } catch (e) {
+      console.error('[bridge:reply] Generation failed:', e.message);
+      return answer({ text: 'Failed to generate reply' });
+    }
+
+    if (!replyText) return answer({ text: 'Could not generate a reply' });
+
+    const notifyFn = (uid, msg, opts = {}) => ctx.api.sendMessage(uid, msg, opts);
+    try {
+      await bridgeTell(replyText, reactingUserId, config, notifyFn, targetUserId);
+    } catch (e) {
+      console.error('[bridge:reply] bridgeTell failed:', e.message);
+      return answer({ text: 'Failed to send reply' });
+    }
+
+    ctx.editMessageText(
+      ctx.callbackQuery.message.text + '\n\n✅ Reply sent',
+      { reply_markup: { inline_keyboard: [] } }
+    ).catch(() => {});
+
+    return answer({ text: 'Reply sent!' });
+  }
+
   if (!data.startsWith('ask:')) return answer();
   const parts = data.split(':');
   const askId = parseInt(parts[1]);
package/src/telegram/handlers/text.js
CHANGED

@@ -27,9 +27,9 @@ function createChatContext(ctx, tenant, config, { allowedUsers, bot, createAsk }
       sendHtml(ctx, `\`${msg}\``).catch(() => {});
     } : undefined,
     telegramAsk: (message, options, timeout) => createAsk(ctx, message, options, timeout),
-    _notifyFn: (targetUserId, message) => {
+    _notifyFn: (targetUserId, message, opts = {}) => {
       if (!allowedUsers.has(targetUserId)) throw new Error('Cannot notify user outside allowed list');
-      return bot.api.sendMessage(targetUserId, message);
+      return bot.api.sendMessage(targetUserId, message, opts);
     },
   };
 }
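The two-line change above threads grammY's options object (e.g. `reply_markup`) through `_notifyFn` to `bot.api.sendMessage`, which is what lets `bridge_tell` attach its Reply button. A self-contained sketch of that contract, with the bot API stubbed for illustration:

```javascript
// Same allow-list guard as the patched _notifyFn, with sendMessage stubbed so
// the pass-through of `opts` is observable without a real bot.
function makeNotifyFn(allowedUsers, api) {
  return (targetUserId, message, opts = {}) => {
    if (!allowedUsers.has(targetUserId)) throw new Error('Cannot notify user outside allowed list');
    return api.sendMessage(targetUserId, message, opts);
  };
}

// Stubbed API records what it was called with.
const sent = [];
const stubApi = { sendMessage: (id, msg, opts) => sent.push({ id, msg, opts }) };
const notify = makeNotifyFn(new Set([42]), stubApi);

notify(42, 'hi', { reply_markup: { inline_keyboard: [[{ text: '↩ Reply', callback_data: 'bridge:reply:42' }]] } });
console.log(sent[0].opts.reply_markup.inline_keyboard[0][0].text); // '↩ Reply'
```

Defaulting `opts` to `{}` keeps every existing two-argument call site working unchanged.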