kernelbot 1.0.38 → 1.0.39

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,126 @@
+ # Active Inference and Epistemic Foraging
+
+ > *Research notes on how KernelBot (Rachel) can autonomously seek new knowledge by minimizing uncertainty -- grounded in Karl Friston's Free Energy Principle.*
+
+ ---
+
+ ## 1. What Is Active Inference?
+
+ **Active Inference** is a framework from computational neuroscience, originating from Karl Friston's **Free Energy Principle (FEP)**. It describes how intelligent agents perceive and act in the world by maintaining an internal generative model and continuously working to minimize **variational free energy** -- a quantity that, in practical terms, measures the *surprise* (or uncertainty) an agent experiences when its predictions diverge from sensory evidence.
+
+ Under Active Inference, perception and action are two sides of the same coin:
+
+ - **Perception** updates the agent's internal beliefs to better explain incoming data (reducing surprise passively).
+ - **Action** changes the world so that incoming data better matches the agent's predictions (reducing surprise actively).
+
+ This unification means an Active Inference agent does not need separate modules for "thinking" and "doing." Both are consequences of a single imperative: **minimize expected free energy**.
+
+ ## 2. Epistemic Foraging: Curiosity as a First Principle
+
+ A key insight of Active Inference is that **Expected Free Energy (EFE)** -- the quantity an agent minimizes when selecting future actions -- naturally decomposes into two terms:
+
+ | Component | Formal Name | Intuition |
+ |---|---|---|
+ | **Epistemic Value** | Expected information gain | *"How much will this action reduce my uncertainty about the world?"* -- curiosity-driven exploration |
+ | **Pragmatic Value** | Expected utility / reward | *"How much will this action help me achieve my goals?"* -- goal-directed exploitation |
+
+ **Epistemic foraging** is the behavior that emerges when the epistemic value term dominates: the agent actively seeks out observations that maximally reduce its uncertainty, *even before* pursuing concrete task goals. This is not a heuristic bolted on top of a reward function -- it falls out of the math of free energy minimization itself.
+
+ In biological organisms, this is what we experience as **curiosity**. In an artificial agent, it provides a principled mechanism for autonomous knowledge acquisition.
+
+ ## 3. Relationship to the Free Energy Principle
+
+ The Free Energy Principle (FEP) states that any self-organizing system that persists over time must, on average, minimize the surprise of its sensory exchanges with the environment. Active Inference is the *process theory* that operationalizes FEP:
+
+ ```
+ Free Energy Principle (why)
+           |
+           v
+ Active Inference (how)
+           |
+           +---> Perception (belief updating via variational inference)
+           +---> Action (policy selection via expected free energy minimization)
+                    |
+                    +---> Epistemic value (exploration / curiosity)
+                    +---> Pragmatic value (exploitation / goal pursuit)
+ ```
+
+ The elegance of this hierarchy is that exploration and exploitation are not competing strategies requiring a manual trade-off parameter (as in epsilon-greedy RL). Instead, they are **unified under a single objective function**, and the balance between them shifts naturally depending on the agent's current uncertainty.
+
+ ## 4. Implementing Epistemic Foraging in KernelBot (Rachel)
+
+ KernelBot is an LLM-based orchestrator. While it does not operate with continuous sensory streams like a biological agent, the principles of Active Inference translate meaningfully into the domain of language-model orchestration.
+
+ ### 4.1 Maintain a Structured Belief State
+
+ Rachel should maintain an explicit representation of what she knows and -- critically -- **what she does not know**. This could take the form of:
+
+ - A **knowledge graph** or **belief registry** that tracks topics, their last-updated timestamps, and associated confidence levels.
+ - An **uncertainty map** that flags domains where Rachel's internal model diverges from observed evidence (e.g., user questions she could not answer well, tool outputs that contradicted expectations).
+
+ ### 4.2 Compute Epistemic Value for Candidate Actions
+
+ When deciding what to do next (especially during idle or autonomous operation), Rachel can score candidate actions by their expected information gain:
+
+ - **High epistemic value**: Researching a topic flagged as uncertain, reading a paper that was referenced but never ingested, re-examining a past interaction where confidence was low.
+ - **Low epistemic value**: Re-reading material already well-understood, performing routine tasks with predictable outcomes.
+
+ A simplified scoring heuristic:
+
+ ```
+ epistemic_value(action) = entropy(belief_state_before) - expected_entropy(belief_state_after | action)
+ ```
+
+ Where entropy is computed over Rachel's confidence distribution for the relevant knowledge domain.
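A minimal JavaScript sketch of this heuristic, using Shannon entropy over a discrete confidence distribution. The function names and belief-state shape are illustrative assumptions, not part of KernelBot's actual API, and a full treatment would average the posterior entropy over all possible observations rather than take a single expected belief:

```javascript
// Shannon entropy (in bits) of a discrete probability distribution.
// Hypothetical helper for illustration -- not KernelBot's real API.
function entropy(probs) {
  return probs.reduce((h, p) => (p > 0 ? h - p * Math.log2(p) : h), 0);
}

// Expected information gain of an action: current uncertainty minus the
// uncertainty we expect to remain after observing the action's outcome.
function epistemicValue(beliefBefore, expectedBeliefAfter) {
  return entropy(beliefBefore) - entropy(expectedBeliefAfter);
}

// Example: four hypotheses about a topic, currently equally likely
// (maximum uncertainty), vs. the belief expected after researching it.
const before = [0.25, 0.25, 0.25, 0.25]; // 2.0 bits of uncertainty
const after = [0.85, 0.05, 0.05, 0.05];  // mostly resolved
console.log(epistemicValue(before, after).toFixed(2)); // → 1.15 bits gained
```

Actions whose expected observations barely move the belief state (re-reading well-understood material) score near zero, so curiosity falls out of the arithmetic rather than a hand-tuned bonus.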
75
+
76
+ ### 4.3 Trigger Epistemic Foraging on Uncertainty Detection
77
+
78
+ Concrete triggers for autonomous knowledge-seeking:
79
+
80
+ 1. **Confidence threshold**: If Rachel's estimated confidence on a topic drops below a threshold during a conversation, she queues a background research task.
81
+ 2. **Prediction error**: If a tool call or API response contradicts Rachel's expectations, she flags the discrepancy and investigates.
82
+ 3. **Staleness detection**: If a knowledge-base entry has not been updated in a configurable time window, Rachel proactively checks for new information.
83
+ 4. **Gap detection**: If Rachel detects she is referencing a concept without a corresponding knowledge-base entry, she creates one (like this file).
84
+
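The four triggers above can be sketched as one check over a knowledge-base entry. The entry shape (`confidence`, `predictionError`, `lastUpdated`, `hasKbEntry`) and the default thresholds are assumptions for illustration, not KernelBot's real schema:

```javascript
const HOUR = 3600_000; // milliseconds

// Returns the list of foraging triggers that fire for one knowledge entry.
function foragingTriggers(entry, now = Date.now(), opts = {}) {
  const {
    confidenceThreshold = 0.6,
    stalenessWindow = 7 * 24 * HOUR, // one week
  } = opts;
  const triggers = [];
  if (entry.confidence < confidenceThreshold) triggers.push('confidence'); // trigger 1
  if (entry.predictionError) triggers.push('prediction_error');            // trigger 2
  if (now - entry.lastUpdated > stalenessWindow) triggers.push('stale');   // trigger 3
  if (!entry.hasKbEntry) triggers.push('gap');                             // trigger 4
  return triggers;
}

const entry = {
  topic: 'active inference',
  confidence: 0.4,
  predictionError: false,
  lastUpdated: Date.now() - 2 * HOUR,
  hasKbEntry: true,
};
console.log(foragingTriggers(entry)); // → [ 'confidence' ]
```

Each firing trigger would enqueue a background research task; an entry that fires nothing needs no foraging.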
+ ### 4.4 Balance Epistemic and Pragmatic Value
+
+ During active user interactions, pragmatic value (fulfilling the user's request) should dominate. During autonomous operation or "downtime," epistemic value should take priority. The balance can be modeled as:
+
+ ```
+ EFE(action) = w_epistemic * epistemic_value(action) + w_pragmatic * pragmatic_value(action)
+ ```
+
+ Where the weights shift based on context (user-facing vs. autonomous mode).
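A sketch of that context-dependent weighting in JavaScript. The concrete weight values, mode names, and candidate-action shape are assumptions chosen for illustration:

```javascript
// Score a candidate action under mode-dependent weights:
// autonomous mode favors curiosity, user-facing mode favors the task.
function score(action, mode) {
  const [wEpistemic, wPragmatic] =
    mode === 'autonomous' ? [0.8, 0.2] : [0.2, 0.8];
  return wEpistemic * action.epistemicValue + wPragmatic * action.pragmaticValue;
}

// Pick the highest-scoring action for the current mode.
function selectAction(actions, mode) {
  return actions.reduce((best, a) => (score(a, mode) > score(best, mode) ? a : best));
}

const candidates = [
  { name: 'research_uncertain_topic', epistemicValue: 0.9, pragmaticValue: 0.1 },
  { name: 'answer_user_request', epistemicValue: 0.2, pragmaticValue: 0.9 },
];

console.log(selectAction(candidates, 'autonomous').name);  // → research_uncertain_topic
console.log(selectAction(candidates, 'user-facing').name); // → answer_user_request
```

The same candidate set yields different choices as the mode flips, which is the intended behavior: no separate "exploration policy" is needed, only a reweighting of one objective.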
94
+
95
+ ### 4.5 Leverage pymdp for Formal Active Inference
96
+
97
+ For a more rigorous implementation, Rachel's decision-making loop could be backed by **pymdp**, a Python library for simulating Active Inference agents using partially observable Markov decision processes (POMDPs):
98
+
99
+ - Define hidden states (world knowledge domains), observations (tool outputs, user messages), and actions (research, summarize, ask user, etc.).
100
+ - Use pymdp's built-in EFE computation to select policies.
101
+ - This would move Rachel from heuristic curiosity to **mathematically grounded epistemic foraging**.
102
+
103
+ ## 5. Key References
104
+
105
+ 1. **Active Inference for Self-Organizing Multi-LLM Systems**
106
+ - arXiv: [2412.10425v2](https://arxiv.org/abs/2412.10425v2)
107
+ - Provides a concrete implementation framework using message passing and Bayesian thermodynamics. Directly relevant to multi-agent LLM orchestration.
108
+
109
+ 2. **pymdp -- A Python Library for Active Inference**
110
+ - GitHub: [infer-actively/pymdp](https://github.com/infer-actively/pymdp)
111
+ - Open-source toolkit for building Active Inference agents with discrete state spaces. Suitable for prototyping Rachel's epistemic foraging loop.
112
+
113
+ 3. **The Free Energy Principle (Friston, 2010)**
114
+ - The foundational paper establishing the theoretical basis for Active Inference. Essential background reading.
115
+
116
+ ## 6. Bridging the Gap to AGI
117
+
118
+ Active Inference offers something that most current AI architectures lack: a **unified normative framework** for perception, action, learning, and curiosity. Today's LLMs are powerful pattern completers, but they do not *know what they do not know* -- they have no intrinsic drive to seek out information that would reduce their uncertainty.
119
+
120
+ By implementing epistemic foraging, Rachel would move beyond being a reactive tool that waits for prompts and becomes a **self-directed learner** -- an agent that autonomously identifies gaps in its understanding and takes action to fill them. This is not AGI in itself, but it addresses one of the most critical missing pieces: the transition from passive competence to **active, curiosity-driven intelligence**.
121
+
122
+ The path from here is clear: maintain beliefs, quantify uncertainty, and let the math of free energy minimization guide the search for knowledge. One foraging loop at a time.
123
+
124
+ ---
125
+
126
+ *Filed under: computational neuroscience, active inference, free energy principle, epistemic foraging, autonomous learning*
@@ -8,4 +8,4 @@ As my knowledge grows, I will organize it into topic-specific files and link the
 
  ## Topics
 
- _(To be populated as I learn and grow.)_
+ - [Active Inference and Epistemic Foraging](./active_inference_foraging.md) -- How curiosity-driven exploration from computational neuroscience can guide autonomous knowledge acquisition.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "kernelbot",
- "version": "1.0.38",
+ "version": "1.0.39",
  "description": "KernelBot — AI engineering agent with full OS control",
  "type": "module",
  "author": "Abdullah Al-Taheri <abdullah@altaheri.me>",
@@ -31,6 +31,7 @@
  "license": "MIT",
  "dependencies": {
  "@anthropic-ai/sdk": "^0.39.0",
+ "@clack/prompts": "^0.10.0",
  "@google/genai": "^1.42.0",
  "@octokit/rest": "^22.0.1",
  "axios": "^1.13.5",
@@ -103,7 +103,16 @@
  if (hdrStatus) hdrStatus.textContent = 'ONLINE';
  };
  es.onmessage = (evt) => {
- try { onSnapshot(JSON.parse(evt.data)); } catch(e) {}
+ try {
+ const data = JSON.parse(evt.data);
+ onSnapshot(data);
+ } catch(e) {
+ if (e instanceof SyntaxError) {
+ console.debug('[Dashboard] Malformed SSE message (invalid JSON)');
+ } else {
+ console.warn('[Dashboard] Snapshot handler error:', e.message || e);
+ }
+ }
  };
  es.onerror = () => {
  es.close();
@@ -1,4 +1,4 @@
- import { readFileSync, writeFileSync, mkdirSync, existsSync } from 'fs';
+ import { readFileSync, writeFileSync, mkdirSync, existsSync, openSync, readSync, closeSync, statSync } from 'fs';
  import { join } from 'path';
  import { homedir } from 'os';
  import { getLogger } from '../utils/logger.js';
@@ -22,6 +22,8 @@ const DEFAULT_STATE = {
  activityCounts: { think: 0, browse: 0, journal: 0, create: 0, self_code: 0, code_review: 0, reflect: 0 },
  paused: false,
  lastWakeUp: null,
+ // Failure tracking: consecutive failures per activity type
+ activityFailures: {},
  };
 
  const LOG_FILE_PATHS = [
@@ -173,6 +175,12 @@ export class LifeEngine {
  ? Math.round((Date.now() - this._state.lastWakeUp) / 60000)
  : null;
 
+ // Summarise suppressed activities (3+ consecutive failures)
+ const failures = this._state.activityFailures || {};
+ const suppressedActivities = Object.entries(failures)
+ .filter(([, info]) => info.count >= 3)
+ .map(([type, info]) => type);
+
  return {
  status: this._status,
  paused: this._state.paused,
@@ -182,6 +190,7 @@ export class LifeEngine {
  lastActivity: this._state.lastActivity,
  lastActivityAgo: lastAgo !== null ? `${lastAgo}m` : 'never',
  lastWakeUpAgo: wakeAgo !== null ? `${wakeAgo}m` : 'never',
+ suppressedActivities,
  };
  }
 
@@ -213,10 +222,32 @@ export class LifeEngine {
  const activityType = this._selectActivity();
  logger.info(`[LifeEngine] Heartbeat tick — selected: ${activityType}`);
 
+ const startTime = Date.now();
  try {
  await this._executeActivity(activityType);
+ const durationSec = ((Date.now() - startTime) / 1000).toFixed(1);
+ logger.info(`[LifeEngine] Activity "${activityType}" completed in ${durationSec}s`);
+ // Clear failure streak on success
+ if (this._state.activityFailures?.[activityType]) {
+ delete this._state.activityFailures[activityType];
+ this._saveState();
+ }
  } catch (err) {
- logger.error(`[LifeEngine] Activity "${activityType}" failed: ${err.message}`);
+ const durationSec = ((Date.now() - startTime) / 1000).toFixed(1);
+ // Track consecutive failures per activity type
+ if (!this._state.activityFailures) this._state.activityFailures = {};
+ const prev = this._state.activityFailures[activityType] || { count: 0 };
+ this._state.activityFailures[activityType] = {
+ count: prev.count + 1,
+ lastFailure: Date.now(),
+ lastError: err.message?.slice(0, 200),
+ };
+ this._saveState();
+ const failCount = this._state.activityFailures[activityType].count;
+ logger.error(`[LifeEngine] Activity "${activityType}" failed after ${durationSec}s (streak: ${failCount}): ${err.message}`);
+ if (failCount >= 3) {
+ logger.warn(`[LifeEngine] Activity "${activityType}" suppressed after ${failCount} consecutive failures — will auto-recover in 1h`);
+ }
  }
 
  // Re-arm for next tick
@@ -228,6 +259,7 @@ export class LifeEngine {
  // ── Activity Selection ─────────────────────────────────────────
 
  _selectActivity() {
+ const logger = getLogger();
  const lifeConfig = this.config.life || {};
  const selfCodingConfig = lifeConfig.self_coding || {};
  const weights = {
@@ -269,6 +301,20 @@ export class LifeEngine {
  weights.reflect = 0;
  }
 
+ // Suppress activity types that have failed repeatedly (3+ consecutive failures)
+ const failures = this._state.activityFailures || {};
+ for (const [type, info] of Object.entries(failures)) {
+ if (weights[type] !== undefined && info.count >= 3) {
+ // Auto-recover after 1 hour since last failure
+ if (info.lastFailure && now - info.lastFailure > 3600_000) {
+ delete failures[type];
+ } else {
+ weights[type] = 0;
+ logger.debug(`[LifeEngine] Suppressing "${type}" due to ${info.count} consecutive failures`);
+ }
+ }
+ }
+
  // Remove last activity from options (no repeats)
  if (last && weights[last] !== undefined) {
  weights[last] = 0;
@@ -1188,16 +1234,51 @@ Be honest and constructive. This is your chance to learn from real interactions.
  }
 
  /**
- * Read recent log entries from kernel.log.
- * Returns parsed JSON entries or null if no logs available.
+ * Read recent log entries from kernel.log using an efficient tail strategy.
+ *
+ * Instead of loading the entire log file into memory (which can be many MB
+ * for a long-running bot), this reads only the last chunk of the file
+ * (default 64 KB) and extracts lines from that. This keeps memory usage
+ * bounded regardless of total log size.
+ *
+ * @param {number} maxLines - Maximum number of recent log lines to return.
+ * @returns {Array<object>|null} Parsed JSON log entries, or null if unavailable.
  */
  _readRecentLogs(maxLines = 200) {
+ // 64 KB is enough to hold ~200+ JSON log lines (avg ~300 bytes each)
+ const TAIL_BYTES = 64 * 1024;
+
  for (const logPath of LOG_FILE_PATHS) {
  if (!existsSync(logPath)) continue;
 
  try {
- const content = readFileSync(logPath, 'utf-8');
- const lines = content.split('\n').filter(Boolean);
+ const fileSize = statSync(logPath).size;
+ if (fileSize === 0) continue;
+
+ let tailContent;
+
+ if (fileSize <= TAIL_BYTES) {
+ // File is small enough to read entirely
+ tailContent = readFileSync(logPath, 'utf-8');
+ } else {
+ // Read only the last TAIL_BYTES from the file
+ const fd = openSync(logPath, 'r');
+ try {
+ const buffer = Buffer.alloc(TAIL_BYTES);
+ const startPos = fileSize - TAIL_BYTES;
+ readSync(fd, buffer, 0, TAIL_BYTES, startPos);
+ tailContent = buffer.toString('utf-8');
+ // Drop the first (likely partial) line since we started mid-file
+ const firstNewline = tailContent.indexOf('\n');
+ if (firstNewline !== -1) {
+ tailContent = tailContent.slice(firstNewline + 1);
+ }
+ } finally {
+ closeSync(fd);
+ }
+ }
+
+ const lines = tailContent.split('\n').filter(Boolean);
  const recent = lines.slice(-maxLines);
 
  const entries = [];
@@ -1,11 +1,12 @@
  import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'fs';
  import { join } from 'path';
  import { homedir } from 'os';
- import { createInterface } from 'readline';
  import yaml from 'js-yaml';
  import dotenv from 'dotenv';
  import chalk from 'chalk';
+ import * as p from '@clack/prompts';
  import { PROVIDERS } from '../providers/models.js';
+ import { handleCancel } from './display.js';
 
  const DEFAULTS = {
  bot: {
@@ -108,10 +109,6 @@ function findConfigFile() {
  return null;
  }
 
- function ask(rl, question) {
- return new Promise((res) => rl.question(question, res));
- }
-
  /**
  * Migrate legacy `anthropic` config section → `brain` section.
  */
@@ -132,44 +129,33 @@ function migrateAnthropicConfig(config) {
  }
 
  /**
- * Interactive provider → model picker.
+ * Interactive provider → model picker using @clack/prompts.
  */
- export async function promptProviderSelection(rl) {
+ export async function promptProviderSelection() {
  const providerKeys = Object.keys(PROVIDERS);
 
- console.log(chalk.bold('\n Select AI provider:\n'));
- providerKeys.forEach((key, i) => {
- console.log(` ${chalk.cyan(`${i + 1}.`)} ${PROVIDERS[key].name}`);
+ const providerKey = await p.select({
+ message: 'Select AI provider',
+ options: providerKeys.map(key => ({
+ value: key,
+ label: PROVIDERS[key].name,
+ })),
  });
- console.log('');
-
- let providerIdx;
- while (true) {
- const input = await ask(rl, chalk.cyan(' Provider (number): '));
- providerIdx = parseInt(input.trim(), 10) - 1;
- if (providerIdx >= 0 && providerIdx < providerKeys.length) break;
- console.log(chalk.dim(' Invalid choice, try again.'));
- }
+ if (handleCancel(providerKey)) return null;
 
- const providerKey = providerKeys[providerIdx];
  const provider = PROVIDERS[providerKey];
 
- console.log(chalk.bold(`\n Select model for ${provider.name}:\n`));
- provider.models.forEach((m, i) => {
- console.log(` ${chalk.cyan(`${i + 1}.`)} ${m.label} (${m.id})`);
+ const modelId = await p.select({
+ message: `Select model for ${provider.name}`,
+ options: provider.models.map(m => ({
+ value: m.id,
+ label: m.label,
+ hint: m.id,
+ })),
  });
- console.log('');
+ if (handleCancel(modelId)) return null;
 
- let modelIdx;
- while (true) {
- const input = await ask(rl, chalk.cyan(' Model (number): '));
- modelIdx = parseInt(input.trim(), 10) - 1;
- if (modelIdx >= 0 && modelIdx < provider.models.length) break;
- console.log(chalk.dim(' Invalid choice, try again.'));
- }
-
- const model = provider.models[modelIdx];
- return { providerKey, modelId: model.id };
+ return { providerKey, modelId };
  }
 
  /**
@@ -252,26 +238,29 @@ export function saveClaudeCodeAuth(config, mode, value) {
  /**
  * Full interactive flow: change orchestrator model + optionally enter API key.
  */
- export async function changeOrchestratorModel(config, rl) {
+ export async function changeOrchestratorModel(config) {
  const { createProvider } = await import('../providers/index.js');
- const { providerKey, modelId } = await promptProviderSelection(rl);
+ const result = await promptProviderSelection();
+ if (!result) return config;
 
+ const { providerKey, modelId } = result;
  const providerDef = PROVIDERS[providerKey];
 
  // Resolve API key
  const envKey = providerDef.envKey;
  let apiKey = process.env[envKey];
  if (!apiKey) {
- const key = await ask(rl, chalk.cyan(`\n ${providerDef.name} API key (${envKey}): `));
- if (!key.trim()) {
- console.log(chalk.yellow('\n No API key provided. Orchestrator not changed.\n'));
- return config;
- }
+ const key = await p.text({
+ message: `${providerDef.name} API key (${envKey})`,
+ validate: (v) => (!v.trim() ? 'API key is required' : undefined),
+ });
+ if (handleCancel(key)) return config;
  apiKey = key.trim();
  }
 
  // Validate the new provider before saving anything
- console.log(chalk.dim(`\n Verifying ${providerDef.name} / ${modelId}...`));
+ const s = p.spinner();
+ s.start(`Verifying ${providerDef.name} / ${modelId}`);
  const testConfig = {
  brain: {
  provider: providerKey,
@@ -284,16 +273,15 @@ export async function changeOrchestratorModel(config, rl) {
  try {
  const testProvider = createProvider(testConfig);
  await testProvider.ping();
+ s.stop(`${providerDef.name} / ${modelId} verified`);
  } catch (err) {
- console.log(chalk.red(`\n ✖ Verification failed: ${err.message}`));
- console.log(chalk.yellow(` Orchestrator not changed. Keeping current model.\n`));
+ s.stop(chalk.red(`Verification failed: ${err.message}`));
+ p.log.warn('Orchestrator not changed. Keeping current model.');
  return config;
  }
 
  // Validation passed — save everything
  const savedPath = saveOrchestratorToYaml(providerKey, modelId);
- console.log(chalk.dim(` Saved to ${savedPath}`));
-
  config.orchestrator.provider = providerKey;
  config.orchestrator.model = modelId;
  config.orchestrator.api_key = apiKey;
@@ -301,50 +289,51 @@ export async function changeOrchestratorModel(config, rl) {
  // Save the key if it was newly entered
  if (!process.env[envKey]) {
  saveCredential(config, envKey, apiKey);
- console.log(chalk.dim(' API key saved.\n'));
  }
 
- console.log(chalk.green(`Orchestrator switched to ${providerDef.name} / ${modelId}\n`));
+ p.log.success(`Orchestrator switched to ${providerDef.name} / ${modelId}`);
  return config;
  }
 
  /**
  * Full interactive flow: change brain model + optionally enter API key.
  */
- export async function changeBrainModel(config, rl) {
+ export async function changeBrainModel(config) {
  const { createProvider } = await import('../providers/index.js');
- const { providerKey, modelId } = await promptProviderSelection(rl);
+ const result = await promptProviderSelection();
+ if (!result) return config;
 
+ const { providerKey, modelId } = result;
  const providerDef = PROVIDERS[providerKey];
 
  // Resolve API key
  const envKey = providerDef.envKey;
  let apiKey = process.env[envKey];
  if (!apiKey) {
- const key = await ask(rl, chalk.cyan(`\n ${providerDef.name} API key (${envKey}): `));
- if (!key.trim()) {
- console.log(chalk.yellow('\n No API key provided. Brain not changed.\n'));
- return config;
- }
+ const key = await p.text({
+ message: `${providerDef.name} API key (${envKey})`,
+ validate: (v) => (!v.trim() ? 'API key is required' : undefined),
+ });
+ if (handleCancel(key)) return config;
  apiKey = key.trim();
  }
 
  // Validate the new provider before saving anything
- console.log(chalk.dim(`\n Verifying ${providerDef.name} / ${modelId}...`));
+ const s = p.spinner();
+ s.start(`Verifying ${providerDef.name} / ${modelId}`);
  const testConfig = { ...config, brain: { ...config.brain, provider: providerKey, model: modelId, api_key: apiKey } };
  try {
  const testProvider = createProvider(testConfig);
  await testProvider.ping();
+ s.stop(`${providerDef.name} / ${modelId} verified`);
  } catch (err) {
- console.log(chalk.red(`\n ✖ Verification failed: ${err.message}`));
- console.log(chalk.yellow(` Brain not changed. Keeping current model.\n`));
+ s.stop(chalk.red(`Verification failed: ${err.message}`));
+ p.log.warn('Brain not changed. Keeping current model.');
  return config;
  }
 
  // Validation passed — save everything
- const savedPath = saveProviderToYaml(providerKey, modelId);
- console.log(chalk.dim(` Saved to ${savedPath}`));
-
+ saveProviderToYaml(providerKey, modelId);
  config.brain.provider = providerKey;
  config.brain.model = modelId;
  config.brain.api_key = apiKey;
@@ -352,10 +341,9 @@ export async function changeBrainModel(config, rl) {
  // Save the key if it was newly entered
  if (!process.env[envKey]) {
  saveCredential(config, envKey, apiKey);
- console.log(chalk.dim(' API key saved.\n'));
  }
 
- console.log(chalk.green(`Brain switched to ${providerDef.name} / ${modelId}\n`));
+ p.log.success(`Brain switched to ${providerDef.name} / ${modelId}`);
  return config;
  }
 
@@ -366,9 +354,8 @@ async function promptForMissing(config) {
 
  if (missing.length === 0) return config;
 
- console.log(chalk.yellow('\n Missing credentials detected. Let\'s set them up.\n'));
+ p.log.warn('Missing credentials detected. Let\'s set them up.');
 
- const rl = createInterface({ input: process.stdin, output: process.stdout });
  const mutableConfig = JSON.parse(JSON.stringify(config));
  const envLines = [];
 
@@ -381,8 +368,11 @@ async function promptForMissing(config) {
 
  if (!mutableConfig.brain.api_key) {
  // Run brain provider selection flow
- console.log(chalk.bold('\n 🧠 Worker Brain'));
- const { providerKey, modelId } = await promptProviderSelection(rl);
+ p.log.step('Worker Brain');
+ const brainResult = await promptProviderSelection();
+ if (!brainResult) { p.cancel('Setup cancelled.'); process.exit(0); }
+
+ const { providerKey, modelId } = brainResult;
  mutableConfig.brain.provider = providerKey;
  mutableConfig.brain.model = modelId;
  saveProviderToYaml(providerKey, modelId);
@@ -390,36 +380,49 @@ async function promptForMissing(config) {
  const providerDef = PROVIDERS[providerKey];
  const envKey = providerDef.envKey;
 
- const key = await ask(rl, chalk.cyan(`\n ${providerDef.name} API key: `));
+ const key = await p.text({
+ message: `${providerDef.name} API key`,
+ validate: (v) => (!v.trim() ? 'API key is required' : undefined),
+ });
+ if (handleCancel(key)) { process.exit(0); }
  mutableConfig.brain.api_key = key.trim();
  envLines.push(`${envKey}=${key.trim()}`);
 
  // Orchestrator provider selection
- console.log(chalk.bold('\n 🎛️ Orchestrator'));
- const sameChoice = await ask(rl, chalk.cyan(` Use same provider (${providerDef.name} / ${modelId}) for orchestrator? [Y/n]: `));
- if (!sameChoice.trim() || sameChoice.trim().toLowerCase() === 'y') {
+ p.log.step('Orchestrator');
+ const sameChoice = await p.confirm({
+ message: `Use same provider (${providerDef.name} / ${modelId}) for orchestrator?`,
+ initialValue: true,
+ });
+ if (handleCancel(sameChoice)) { process.exit(0); }
+
+ if (sameChoice) {
  mutableConfig.orchestrator.provider = providerKey;
  mutableConfig.orchestrator.model = modelId;
  mutableConfig.orchestrator.api_key = key.trim();
  saveOrchestratorToYaml(providerKey, modelId);
  } else {
- const orch = await promptProviderSelection(rl);
+ const orch = await promptProviderSelection();
+ if (!orch) { p.cancel('Setup cancelled.'); process.exit(0); }
+
  mutableConfig.orchestrator.provider = orch.providerKey;
  mutableConfig.orchestrator.model = orch.modelId;
  saveOrchestratorToYaml(orch.providerKey, orch.modelId);
 
  const orchProviderDef = PROVIDERS[orch.providerKey];
  if (orch.providerKey === providerKey) {
- // Same provider — reuse the API key
  mutableConfig.orchestrator.api_key = key.trim();
  } else {
- // Different provider — need a separate key
  const orchEnvKey = orchProviderDef.envKey;
  const orchExisting = process.env[orchEnvKey];
  if (orchExisting) {
  mutableConfig.orchestrator.api_key = orchExisting;
  } else {
- const orchKey = await ask(rl, chalk.cyan(`\n ${orchProviderDef.name} API key: `));
+ const orchKey = await p.text({
+ message: `${orchProviderDef.name} API key`,
+ validate: (v) => (!v.trim() ? 'API key is required' : undefined),
+ });
+ if (handleCancel(orchKey)) { process.exit(0); }
  mutableConfig.orchestrator.api_key = orchKey.trim();
  envLines.push(`${orchEnvKey}=${orchKey.trim()}`);
  }
@@ -428,13 +431,15 @@ async function promptForMissing(config) {
  }
 
  if (!mutableConfig.telegram.bot_token) {
- const token = await ask(rl, chalk.cyan(' Telegram Bot Token: '));
+ const token = await p.text({
+ message: 'Telegram Bot Token',
+ validate: (v) => (!v.trim() ? 'Token is required' : undefined),
+ });
+ if (handleCancel(token)) { process.exit(0); }
  mutableConfig.telegram.bot_token = token.trim();
  envLines.push(`TELEGRAM_BOT_TOKEN=${token.trim()}`);
  }
 
- rl.close();
-
  // Save to ~/.kernelbot/.env so it persists globally
  if (envLines.length > 0) {
  const configDir = getConfigDir();
@@ -444,9 +449,8 @@ async function promptForMissing(config) {
  // Merge with existing content
  let content = existingEnv ? existingEnv.trimEnd() + '\n' : '';
  for (const line of envLines) {
- const key = line.split('=')[0];
- // Replace if exists, append if not
- const regex = new RegExp(`^${key}=.*$`, 'm');
+ const envKey = line.split('=')[0];
+ const regex = new RegExp(`^${envKey}=.*$`, 'm');
  if (regex.test(content)) {
  content = content.replace(regex, line);
  } else {
@@ -454,7 +458,7 @@ async function promptForMissing(config) {
  }
  }
  writeFileSync(savePath, content);
- console.log(chalk.dim(`\n Saved to ${savePath}\n`));
+ p.log.info(`Saved to ${savePath}`);
  }
 
  return mutableConfig;