hyperclaw 5.0.4 → 5.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (40)
  1. package/README.md +113 -5
  2. package/dist/chat-C4jrx5t1.js +258 -0
  3. package/dist/chat-jnikU_sY.js +258 -0
  4. package/dist/daemon-CL5Mu4Zj.js +5 -0
  5. package/dist/daemon-CRtwyRPM.js +5 -0
  6. package/dist/daemon-DNfRHk6k.js +318 -0
  7. package/dist/daemon-Dhd1g5G3.js +318 -0
  8. package/dist/engine-Cooyw_YH.js +7 -0
  9. package/dist/engine-DAblSk65.js +305 -0
  10. package/dist/engine-DwMKOfwf.js +305 -0
  11. package/dist/engine-wq9XrYn_.js +7 -0
  12. package/dist/hyperclawbot-BnsEqbB1.js +505 -0
  13. package/dist/hyperclawbot-_lByyYQP.js +505 -0
  14. package/dist/mcp-loader-BIUZmG6R.js +94 -0
  15. package/dist/mcp-loader-C7wVxPeQ.js +94 -0
  16. package/dist/onboard-BKfMt31e.js +3959 -0
  17. package/dist/onboard-ChdCrzG4.js +3959 -0
  18. package/dist/onboard-DE6HRM9h.js +11 -0
  19. package/dist/onboard-a9t7X2tC.js +11 -0
  20. package/dist/orchestrator-BC-7wA2Q.js +6 -0
  21. package/dist/orchestrator-Dr-Kx8Sj.js +189 -0
  22. package/dist/orchestrator-DwydLMrT.js +6 -0
  23. package/dist/orchestrator-VeQQE-rp.js +189 -0
  24. package/dist/run-main.js +26 -31
  25. package/dist/server-5BJD83CF.js +4 -0
  26. package/dist/server-B6xEN6iX.js +4 -0
  27. package/dist/server-BEqsQF8r.js +1255 -0
  28. package/dist/server-Cv6kgXvX.js +1255 -0
  29. package/dist/skill-runtime-BtR0gCZh.js +102 -0
  30. package/dist/skill-runtime-BxRuz-SL.js +102 -0
  31. package/dist/skill-runtime-ChEoWYiG.js +5 -0
  32. package/dist/skill-runtime-CrsErr-V.js +5 -0
  33. package/dist/src-BJtvfghE.js +63 -0
  34. package/dist/src-BXw1Qn62.js +458 -0
  35. package/dist/src-CcX1rZa1.js +458 -0
  36. package/dist/src-D28v-IHz.js +63 -0
  37. package/dist/sub-agent-tools-BN_8yWye.js +39 -0
  38. package/dist/sub-agent-tools-CDaWtHHX.js +39 -0
  39. package/package.json +1 -1
  40. package/scripts/postinstall.js +1 -1
package/README.md CHANGED
@@ -84,24 +84,72 @@ hyperclaw onboard --install-daemon
 
  The wizard walks you through: AI provider → model → channels → skills. Done.
 
+ ## ▶️ Running your bot
+
  ```bash
- # After setup, start your assistant
+ # Step 1 — Start the daemon (runs in the background, bot becomes active on Telegram/Discord/etc.)
  hyperclaw daemon start
 
- # Interactive terminal chat (multi-turn, streams responses)
+ # Step 2 — Open the TUI dashboard to see live status, channels, logs
+ hyperclaw dashboard
+
+ # Step 3 — Or chat directly from your terminal (no Telegram needed)
  hyperclaw chat
+ ```
+
+ ### How it works
+
+ ```
+ hyperclaw daemon start
+
+ Gateway (background process)
+
+ ┌─────────────┬──────────────┐
+ │ Telegram │ Dashboard │
+ │ Discord │ hyperclaw │
+ │ WhatsApp │ dashboard │
+ └─────────────┴──────────────┘
+ ```
+
+ > **The daemon must be running** for the Telegram/Discord/WhatsApp bot to answer messages and for the TUI dashboard to show "ONLINE".
+ > On **Windows** the daemon runs via Task Scheduler (no WSL, no admin).
+ > On **Linux/macOS** with `--install-daemon` it uses systemd / launchd.
+
+ ### Without the daemon (foreground mode)
+
+ If you haven't installed the daemon, run the bot in the foreground — the terminal stays open:
 
+ ```bash
+ # Run bot in foreground (Ctrl+C to stop)
+ hyperclaw start
+
+ # In a second terminal, open the dashboard
+ hyperclaw dashboard
+ ```
+
+ ### Quick reference
+
+ | I want to… | Command |
+ |-----------------------------------------|----------------------------|
+ | Start Telegram/Discord bot (background) | `hyperclaw daemon start` |
+ | Stop the bot | `hyperclaw daemon stop` |
+ | Check if running | `hyperclaw daemon status` |
+ | Open TUI dashboard | `hyperclaw dashboard` |
+ | Chat from terminal (no Telegram needed) | `hyperclaw chat` |
+ | Run in foreground (no daemon) | `hyperclaw start` |
+ | Health check | `hyperclaw doctor` |
+
+ ```bash
  # Send a single message (non-interactive)
  hyperclaw agent --message "What can you do?"
-
- # Health check
- hyperclaw doctor
  ```
 
  > **Windows**: No WSL2, no admin rights needed. The daemon uses Task Scheduler and runs as your account.
 
  > **Linux**: If you get an `EACCES: permission denied` error during `npm install -g`, see the [Linux install fix](#-linux-permission-fix) below.
 
+ > **Emojis not showing?** See the [Terminal emoji fix](#-emojis-not-showing-in-terminal) below.
+
  <details>
  <summary>More install options</summary>
 
@@ -198,6 +246,65 @@ sudo npm install -g hyperclaw@latest
 
  </details>
 
+ <details>
+ <summary id="-emojis-not-showing-in-terminal">🎨 Emojis not showing in terminal (showing as ?? or □)</summary>
+
+ HyperClaw uses emojis in its banner and wizard. If you see `??` or empty boxes, your terminal or font doesn't support emoji rendering.
+
+ ---
+
+ ### 🪟 Windows — CMD / PowerShell
+
+ The old `cmd.exe` and classic PowerShell window do **not** support emoji.
+
+ **Fix: Use Windows Terminal** (free, from Microsoft Store):
+
+ 1. Open **Microsoft Store** → search **"Windows Terminal"** → Install
+ 2. Open Windows Terminal → Settings → Profiles → Defaults → Appearance
+ 3. Set font to **Cascadia Code** (already included) or install [JetBrains Mono Nerd Font](https://www.nerdfonts.com/font-downloads)
+ 4. Run `hyperclaw onboard` from Windows Terminal — emojis will show correctly
+
+ ---
+
+ ### 🐧 Linux — XFCE / Konsole / other terminals
+
+ Install the Google Noto emoji font:
+
+ ```bash
+ # Debian / Ubuntu / Kali
+ sudo apt install fonts-noto-color-emoji
+
+ # Arch / Manjaro
+ sudo pacman -S noto-fonts-emoji
+
+ # Fedora / RHEL
+ sudo dnf install google-noto-emoji-fonts
+ ```
+
+ Then **restart your terminal** — emojis will appear.
+
+ > If you use **XFCE Terminal**: go to Edit → Preferences → Appearance and set a font like **JetBrains Mono** or **DejaVu Sans Mono**.
+
+ ---
+
+ ### 🍎 macOS — Terminal.app
+
+ macOS Terminal.app supports emoji out of the box on recent versions. If you still see `?`:
+
+ 1. Terminal → Settings → Profiles → Text → Change Font
+ 2. Set to **Menlo**, **SF Mono**, or any font from [Nerd Fonts](https://www.nerdfonts.com/)
+ 3. Alternatively, use **iTerm2** (free) which has full emoji support by default
+
+ ```bash
+ brew install --cask iterm2
+ ```
+
+ ---
+
+ > **Note**: If you prefer to keep using your current terminal without emoji, HyperClaw works perfectly — only the visual display is affected, not the functionality.
+
+ </details>
+
  ---
 
  ## Channels
@@ -519,6 +626,7 @@ Example — ask the agent:
  "Create a note in Obsidian: Meeting notes..."
  "Add a card to my Trello board"
  "Set my 8Sleep to temperature 20 on the left side"
+
  "Get my GitHub password from 1Password"
  "Send an iMessage to +1234567890: I'll be late"
  "What's playing on my Sonos?"
@@ -0,0 +1,258 @@
+ const require_chunk = require('./chunk-jS-bbMI5.js');
+ const require_paths = require('./paths-AIyBxIzm.js');
+ require('./browser-tools-JZ9ji6AW.js');
+ require('./src-CGQjRI4N.js');
+ const require_engine = require('./engine-DAblSk65.js');
+ require('./extraction-tools-Dg7AHS35.js');
+ const require_inference = require('./inference-DhA8jpfH.js');
+ require('./memory-auto-D-L2q21G.js');
+ require('./orchestrator-VeQQE-rp.js');
+ require('./pc-access-B0KocJNe.js');
+ require('./session-store-DCTQIVur.js');
+ require('./sessions-tools-BdlN6Pb6.js');
+ require('./skill-loader-Wf3brNOj.js');
+ require('./skill-runtime-BtR0gCZh.js');
+ require('./vision-tools-dwn9p4el.js');
+ require('./website-watch-tools-B-jRAeTe.js');
+ const require_src$1 = require('./src-BXw1Qn62.js');
+ const chalk = require_chunk.__toESM(require("chalk"));
+ const ora = require_chunk.__toESM(require("ora"));
+ const fs_extra = require_chunk.__toESM(require("fs-extra"));
+ const readline = require_chunk.__toESM(require("readline"));
+
+ //#region src/cli/chat.ts
+ require_src$1.init_src();
+ const DIVIDER = chalk.default.gray(" " + "─".repeat(56));
+ function printHeader(model, sessionId) {
+ console.log();
+ console.log(DIVIDER);
+ console.log(chalk.default.bold.cyan(" 🦅 HYPERCLAW CHAT"));
+ console.log(chalk.default.gray(` Model: ${model} · Session: ${sessionId}`));
+ console.log(DIVIDER);
+ console.log(chalk.default.gray(" Type your message and press Enter."));
+ console.log(chalk.default.gray(" Commands: /exit /clear /model /skills /help"));
+ console.log(DIVIDER);
+ console.log();
+ }
+ function printHelp() {
+ console.log();
+ console.log(chalk.default.bold(" Commands:"));
+ console.log(` ${chalk.default.cyan("/exit")} — quit the chat`);
+ console.log(` ${chalk.default.cyan("/clear")} — clear conversation history`);
+ console.log(` ${chalk.default.cyan("/model")} — show current model`);
+ console.log(` ${chalk.default.cyan("/skills")} — list installed skills + how to add more`);
+ console.log(` ${chalk.default.cyan("/help")} — show this help`);
+ console.log();
+ console.log(chalk.default.gray(" Tip: you can also tell the agent to install a skill:"));
+ console.log(chalk.default.gray(" \"Install the web-search skill\" or paste a clawhub.ai link"));
+ console.log();
+ }
+ async function printSkills() {
+ console.log();
+ try {
+ const { loadSkills } = await Promise.resolve().then(() => require("./src-D28v-IHz.js"));
+ const skills = await loadSkills();
+ if (skills.length === 0) console.log(chalk.default.gray(" No skills installed yet."));
+ else {
+ console.log(chalk.default.bold(" Installed skills:"));
+ for (const s of skills) {
+ console.log(` ${chalk.default.cyan("•")} ${chalk.default.bold(s.title || s.id)} ${chalk.default.gray(`(${s.id})`)}`);
+ if (s.capabilities) console.log(chalk.default.gray(` ${s.capabilities}`));
+ }
+ }
+ } catch {
+ console.log(chalk.default.gray(" Could not load skills list."));
+ }
+ console.log();
+ console.log(chalk.default.bold(" How to add a skill:"));
+ console.log(` ${chalk.default.gray("1.")} Tell the agent: ${chalk.default.cyan("\"Install the web-search skill\"")}`);
+ console.log(` ${chalk.default.gray("2.")} Paste a link: ${chalk.default.cyan("\"Install this: https://clawhub.ai/user/skill-name\"")}`);
+ console.log(` ${chalk.default.gray("3.")} CLI (outside chat): ${chalk.default.cyan("hyperclaw skill install <name>")}`);
+ console.log(` ${chalk.default.gray("4.")} Re-run wizard: ${chalk.default.cyan("hyperclaw onboard")}`);
+ console.log();
+ }
+ function makeSessionId() {
+ return Math.random().toString(36).slice(2, 9);
+ }
+ async function runChat(opts) {
+ const cfg = await fs_extra.default.readJson(require_paths.getConfigPath()).catch(() => null);
+ if (!cfg) {
+ console.log(chalk.default.red("\n No configuration found. Run: hyperclaw onboard\n"));
+ return;
+ }
+ const { getProviderCredentialAsync } = await Promise.resolve().then(() => require("./env-resolve-bDYssfih.js"));
+ const apiKey = await getProviderCredentialAsync(cfg).catch(() => null);
+ const isLocal = [
+ "local",
+ "ollama",
+ "lmstudio"
+ ].includes(cfg?.provider?.providerId ?? "");
+ if (!apiKey && !isLocal) {
+ console.log(chalk.default.red("\n No API key configured. Run: hyperclaw config set-key\n"));
+ return;
+ }
+ const { getProvider } = await Promise.resolve().then(() => require("./providers-DxiamZSL.js"));
+ const providerMeta = getProvider(cfg?.provider?.providerId ?? "");
+ const CUSTOM_IDS = new Set([
+ "groq",
+ "mistral",
+ "deepseek",
+ "perplexity",
+ "huggingface",
+ "ollama",
+ "lmstudio",
+ "local",
+ "xai",
+ "openai",
+ "google",
+ "minimax",
+ "moonshot",
+ "qwen",
+ "zai",
+ "litellm",
+ "cloudflare",
+ "copilot",
+ "vercel-ai",
+ "opencode-zen"
+ ]);
+ const isAnthropicVariant = [
+ "anthropic",
+ "anthropic-oauth",
+ "anthropic-setup-token"
+ ].includes(cfg?.provider?.providerId ?? "");
+ const provider = isAnthropicVariant ? "anthropic" : cfg?.provider?.providerId === "custom" || isLocal || CUSTOM_IDS.has(cfg?.provider?.providerId ?? "") ? "custom" : "openrouter";
+ const rawModel = opts.model || cfg?.provider?.modelId || "claude-sonnet-4-5";
+ const model = rawModel.startsWith("ollama/") ? rawModel.slice(7) : rawModel;
+ const resolvedBaseUrl = cfg?.provider?.baseUrl || providerMeta?.baseUrl || (isLocal ? "http://localhost:11434/v1" : void 0);
+ const THINKING_BUDGET = {
+ high: 1e4,
+ medium: 4e3,
+ low: 1e3,
+ none: 0
+ };
+ const thinkingBudget = THINKING_BUDGET[opts.thinking ?? "none"] ?? 0;
+ const maxTokens = thinkingBudget > 0 ? thinkingBudget + 4096 : 4096;
+ const context = await require_engine.loadWorkspaceContext(opts.workspace) + await require_engine.loadSkillsContext();
+ const tools = await require_engine.resolveTools({
+ config: cfg,
+ source: "cli",
+ elevated: true,
+ daemonMode: false
+ });
+ const engineOpts = {
+ model,
+ apiKey,
+ provider,
+ system: context || void 0,
+ tools,
+ maxTokens,
+ onToken: () => {},
+ ...provider === "custom" ? { baseUrl: resolvedBaseUrl || "" } : {},
+ ...thinkingBudget > 0 && model.includes("claude") ? { thinking: { budget_tokens: thinkingBudget } } : {}
+ };
+ const sessionId = opts.sessionId ?? makeSessionId();
+ const messages = [];
+ printHeader(rawModel, sessionId);
+ const rl = readline.default.createInterface({
+ input: process.stdin,
+ output: process.stdout,
+ terminal: true
+ });
+ rl.on("SIGINT", () => {
+ console.log(chalk.default.gray("\n\n Bye!\n"));
+ rl.close();
+ process.exit(0);
+ });
+ const prompt = () => {
+ rl.question(chalk.default.bold.green(" You › "), async (input) => {
+ const text = input.trim();
+ if (!text) {
+ prompt();
+ return;
+ }
+ if ([
+ "/exit",
+ "/quit",
+ "/bye",
+ "exit",
+ "quit",
+ "bye"
+ ].includes(text.toLowerCase())) {
+ console.log(chalk.default.gray("\n Bye!\n"));
+ rl.close();
+ process.exit(0);
+ }
+ if (text === "/help") {
+ printHelp();
+ prompt();
+ return;
+ }
+ if (text === "/skills") {
+ await printSkills();
+ prompt();
+ return;
+ }
+ if (text === "/model") {
+ console.log(chalk.default.gray(`\n Model: ${rawModel}\n`));
+ prompt();
+ return;
+ }
+ if (text === "/clear") {
+ messages.length = 0;
+ console.log(chalk.default.gray("\n Conversation cleared.\n"));
+ prompt();
+ return;
+ }
+ messages.push({
+ role: "user",
+ content: text
+ });
+ const spinner = (0, ora.default)({
+ text: chalk.default.gray("Thinking..."),
+ color: "cyan",
+ prefixText: " "
+ }).start();
+ let responseText = "";
+ try {
+ const engine = new require_inference.InferenceEngine({
+ ...engineOpts,
+ onToken: (token) => {
+ if (spinner.isSpinning) spinner.stop();
+ process.stdout.write(token);
+ },
+ onToolCall: (name) => {
+ if (spinner.isSpinning) spinner.stop();
+ console.log(chalk.default.gray(`\n [tool: ${name}]`));
+ }
+ });
+ spinner.stop();
+ process.stdout.write(chalk.default.bold.blue("\n Agent › "));
+ const result = await engine.run(messages);
+ responseText = result.text || "";
+ if (!responseText && !result.text) process.stdout.write(chalk.default.gray("(empty)"));
+ console.log("\n");
+ if (result.usage) console.log(chalk.default.gray(` Tokens — in: ${result.usage.input} out: ${result.usage.output}\n`));
+ } catch (e) {
+ spinner.stop();
+ responseText = `Error: ${e.message}`;
+ console.log(chalk.default.red(`\n Error: ${e.message}\n`));
+ }
+ if (responseText) messages.push({
+ role: "assistant",
+ content: responseText
+ });
+ try {
+ const { AutoMemory } = await Promise.resolve().then(() => require("./src-D28v-IHz.js"));
+ const mem = new AutoMemory({ extractEveryNTurns: 3 });
+ mem.addTurn("user", text);
+ if (responseText) mem.addTurn("assistant", responseText);
+ mem.extract().catch(() => {});
+ } catch {}
+ prompt();
+ });
+ };
+ prompt();
+ }
+
+ //#endregion
+ exports.runChat = runChat;
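The chat chunk above derives its token budget from the `--thinking` level and normalizes the model id by stripping an `ollama/` prefix. A minimal standalone sketch of that arithmetic, with the constants copied from the bundle (an illustration, not the shipped module):

```javascript
// Constants copied from the chat chunk: token budget per thinking level.
const THINKING_BUDGET = { high: 1e4, medium: 4e3, low: 1e3, none: 0 };

// "ollama/".length === 7, hence the slice(7) in the bundle.
function normalizeModel(rawModel) {
  return rawModel.startsWith("ollama/") ? rawModel.slice(7) : rawModel;
}

// The response budget is a flat 4096 tokens; any thinking budget is added on top.
function resolveMaxTokens(thinking) {
  const budget = THINKING_BUDGET[thinking ?? "none"] ?? 0;
  return budget > 0 ? budget + 4096 : 4096;
}

console.log(normalizeModel("ollama/llama3")); // "llama3"
console.log(resolveMaxTokens("high"));        // 14096 (10000 + 4096)
console.log(resolveMaxTokens("none"));        // 4096
```

Note that the extra headroom only matters for Claude models: the chunk attaches `thinking: { budget_tokens: ... }` to the engine options only when `model.includes("claude")`.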
@@ -0,0 +1,258 @@
+ const require_chunk = require('./chunk-jS-bbMI5.js');
+ const require_paths = require('./paths-AIyBxIzm.js');
+ require('./browser-tools-JZ9ji6AW.js');
+ require('./src-CGQjRI4N.js');
+ const require_engine = require('./engine-DwMKOfwf.js');
+ require('./extraction-tools-Dg7AHS35.js');
+ const require_inference = require('./inference-DhA8jpfH.js');
+ require('./memory-auto-D-L2q21G.js');
+ require('./orchestrator-Dr-Kx8Sj.js');
+ require('./pc-access-B0KocJNe.js');
+ require('./session-store-DCTQIVur.js');
+ require('./sessions-tools-BdlN6Pb6.js');
+ require('./skill-loader-Wf3brNOj.js');
+ require('./skill-runtime-BxRuz-SL.js');
+ require('./vision-tools-dwn9p4el.js');
+ require('./website-watch-tools-B-jRAeTe.js');
+ const require_src$1 = require('./src-CcX1rZa1.js');
+ const chalk = require_chunk.__toESM(require("chalk"));
+ const ora = require_chunk.__toESM(require("ora"));
+ const fs_extra = require_chunk.__toESM(require("fs-extra"));
+ const readline = require_chunk.__toESM(require("readline"));
+
+ //#region src/cli/chat.ts
+ require_src$1.init_src();
+ const DIVIDER = chalk.default.gray(" " + "─".repeat(56));
+ function printHeader(model, sessionId) {
+ console.log();
+ console.log(DIVIDER);
+ console.log(chalk.default.bold.cyan(" 🦅 HYPERCLAW CHAT"));
+ console.log(chalk.default.gray(` Model: ${model} · Session: ${sessionId}`));
+ console.log(DIVIDER);
+ console.log(chalk.default.gray(" Type your message and press Enter."));
+ console.log(chalk.default.gray(" Commands: /exit /clear /model /skills /help"));
+ console.log(DIVIDER);
+ console.log();
+ }
+ function printHelp() {
+ console.log();
+ console.log(chalk.default.bold(" Commands:"));
+ console.log(` ${chalk.default.cyan("/exit")} — quit the chat`);
+ console.log(` ${chalk.default.cyan("/clear")} — clear conversation history`);
+ console.log(` ${chalk.default.cyan("/model")} — show current model`);
+ console.log(` ${chalk.default.cyan("/skills")} — list installed skills + how to add more`);
+ console.log(` ${chalk.default.cyan("/help")} — show this help`);
+ console.log();
+ console.log(chalk.default.gray(" Tip: you can also tell the agent to install a skill:"));
+ console.log(chalk.default.gray(" \"Install the web-search skill\" or paste a clawhub.ai link"));
+ console.log();
+ }
+ async function printSkills() {
+ console.log();
+ try {
+ const { loadSkills } = await Promise.resolve().then(() => require("./skill-loader-B5oeliGu.js"));
+ const skills = await loadSkills();
+ if (skills.length === 0) console.log(chalk.default.gray(" No skills installed yet."));
+ else {
+ console.log(chalk.default.bold(" Installed skills:"));
+ for (const s of skills) {
+ console.log(` ${chalk.default.cyan("•")} ${chalk.default.bold(s.title || s.id)} ${chalk.default.gray(`(${s.id})`)}`);
+ if (s.capabilities) console.log(chalk.default.gray(` ${s.capabilities}`));
+ }
+ }
+ } catch {
+ console.log(chalk.default.gray(" Could not load skills list."));
+ }
+ console.log();
+ console.log(chalk.default.bold(" How to add a skill:"));
+ console.log(` ${chalk.default.gray("1.")} Tell the agent: ${chalk.default.cyan("\"Install the web-search skill\"")}`);
+ console.log(` ${chalk.default.gray("2.")} Paste a link: ${chalk.default.cyan("\"Install this: https://clawhub.ai/user/skill-name\"")}`);
+ console.log(` ${chalk.default.gray("3.")} CLI (outside chat): ${chalk.default.cyan("hyperclaw skill install <name>")}`);
+ console.log(` ${chalk.default.gray("4.")} Re-run wizard: ${chalk.default.cyan("hyperclaw onboard")}`);
+ console.log();
+ }
+ function makeSessionId() {
+ return Math.random().toString(36).slice(2, 9);
+ }
+ async function runChat(opts) {
+ const cfg = await fs_extra.default.readJson(require_paths.getConfigPath()).catch(() => null);
+ if (!cfg) {
+ console.log(chalk.default.red("\n No configuration found. Run: hyperclaw onboard\n"));
+ return;
+ }
+ const { getProviderCredentialAsync } = await Promise.resolve().then(() => require("./env-resolve-bDYssfih.js"));
+ const apiKey = await getProviderCredentialAsync(cfg).catch(() => null);
+ const isLocal = [
+ "local",
+ "ollama",
+ "lmstudio"
+ ].includes(cfg?.provider?.providerId ?? "");
+ if (!apiKey && !isLocal) {
+ console.log(chalk.default.red("\n No API key configured. Run: hyperclaw config set-key\n"));
+ return;
+ }
+ const { getProvider } = await Promise.resolve().then(() => require("./providers-DxiamZSL.js"));
+ const providerMeta = getProvider(cfg?.provider?.providerId ?? "");
+ const CUSTOM_IDS = new Set([
+ "groq",
+ "mistral",
+ "deepseek",
+ "perplexity",
+ "huggingface",
+ "ollama",
+ "lmstudio",
+ "local",
+ "xai",
+ "openai",
+ "google",
+ "minimax",
+ "moonshot",
+ "qwen",
+ "zai",
+ "litellm",
+ "cloudflare",
+ "copilot",
+ "vercel-ai",
+ "opencode-zen"
+ ]);
+ const isAnthropicVariant = [
+ "anthropic",
+ "anthropic-oauth",
+ "anthropic-setup-token"
+ ].includes(cfg?.provider?.providerId ?? "");
+ const provider = isAnthropicVariant ? "anthropic" : cfg?.provider?.providerId === "custom" || isLocal || CUSTOM_IDS.has(cfg?.provider?.providerId ?? "") ? "custom" : "openrouter";
+ const rawModel = opts.model || cfg?.provider?.modelId || "claude-sonnet-4-5";
+ const model = rawModel.startsWith("ollama/") ? rawModel.slice(7) : rawModel;
+ const resolvedBaseUrl = cfg?.provider?.baseUrl || providerMeta?.baseUrl || (isLocal ? "http://localhost:11434/v1" : void 0);
+ const THINKING_BUDGET = {
+ high: 1e4,
+ medium: 4e3,
+ low: 1e3,
+ none: 0
+ };
+ const thinkingBudget = THINKING_BUDGET[opts.thinking ?? "none"] ?? 0;
+ const maxTokens = thinkingBudget > 0 ? thinkingBudget + 4096 : 4096;
+ const context = await require_engine.loadWorkspaceContext(opts.workspace) + await require_engine.loadSkillsContext();
+ const tools = await require_engine.resolveTools({
+ config: cfg,
+ source: "cli",
+ elevated: true,
+ daemonMode: false
+ });
+ const engineOpts = {
+ model,
+ apiKey,
+ provider,
+ system: context || void 0,
+ tools,
+ maxTokens,
+ onToken: () => {},
+ ...provider === "custom" ? { baseUrl: resolvedBaseUrl || "" } : {},
+ ...thinkingBudget > 0 && model.includes("claude") ? { thinking: { budget_tokens: thinkingBudget } } : {}
+ };
+ const sessionId = opts.sessionId ?? makeSessionId();
+ const messages = [];
+ printHeader(rawModel, sessionId);
+ const rl = readline.default.createInterface({
+ input: process.stdin,
+ output: process.stdout,
+ terminal: true
+ });
+ rl.on("SIGINT", () => {
+ console.log(chalk.default.gray("\n\n Bye!\n"));
+ rl.close();
+ process.exit(0);
+ });
+ const prompt = () => {
+ rl.question(chalk.default.bold.green(" You › "), async (input) => {
+ const text = input.trim();
+ if (!text) {
+ prompt();
+ return;
+ }
+ if ([
+ "/exit",
+ "/quit",
+ "/bye",
+ "exit",
+ "quit",
+ "bye"
+ ].includes(text.toLowerCase())) {
+ console.log(chalk.default.gray("\n Bye!\n"));
+ rl.close();
+ process.exit(0);
+ }
+ if (text === "/help") {
+ printHelp();
+ prompt();
+ return;
+ }
+ if (text === "/skills") {
+ await printSkills();
+ prompt();
+ return;
+ }
+ if (text === "/model") {
+ console.log(chalk.default.gray(`\n Model: ${rawModel}\n`));
+ prompt();
+ return;
+ }
+ if (text === "/clear") {
+ messages.length = 0;
+ console.log(chalk.default.gray("\n Conversation cleared.\n"));
+ prompt();
+ return;
+ }
+ messages.push({
+ role: "user",
+ content: text
+ });
+ const spinner = (0, ora.default)({
+ text: chalk.default.gray("Thinking..."),
+ color: "cyan",
+ prefixText: " "
+ }).start();
+ let responseText = "";
+ try {
+ const engine = new require_inference.InferenceEngine({
+ ...engineOpts,
+ onToken: (token) => {
+ if (spinner.isSpinning) spinner.stop();
+ process.stdout.write(token);
+ },
+ onToolCall: (name) => {
+ if (spinner.isSpinning) spinner.stop();
+ console.log(chalk.default.gray(`\n [tool: ${name}]`));
+ }
+ });
+ spinner.stop();
+ process.stdout.write(chalk.default.bold.blue("\n Agent › "));
+ const result = await engine.run(messages);
+ responseText = result.text || "";
+ if (!responseText && !result.text) process.stdout.write(chalk.default.gray("(empty)"));
+ console.log("\n");
+ if (result.usage) console.log(chalk.default.gray(` Tokens — in: ${result.usage.input} out: ${result.usage.output}\n`));
+ } catch (e) {
+ spinner.stop();
+ responseText = `Error: ${e.message}`;
+ console.log(chalk.default.red(`\n Error: ${e.message}\n`));
+ }
+ if (responseText) messages.push({
+ role: "assistant",
+ content: responseText
+ });
+ try {
+ const { AutoMemory } = await Promise.resolve().then(() => require("./memory-auto-DTcy5VBy.js"));
+ const mem = new AutoMemory({ extractEveryNTurns: 3 });
+ mem.addTurn("user", text);
+ if (responseText) mem.addTurn("assistant", responseText);
+ mem.extract().catch(() => {});
+ } catch {}
+ prompt();
+ });
+ };
+ prompt();
+ }
+
+ //#endregion
+ exports.runChat = runChat;
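Both chat chunks collapse provider selection into a single nested ternary. Rewritten as a standalone function for readability (an illustrative sketch with the id sets copied from the bundle; the bundle's separate `isLocal` check is omitted here because `local`, `ollama`, and `lmstudio` already appear in `CUSTOM_IDS`):

```javascript
// Provider ids routed to an OpenAI-compatible ("custom") endpoint, per the bundle.
const CUSTOM_IDS = new Set([
  "groq", "mistral", "deepseek", "perplexity", "huggingface",
  "ollama", "lmstudio", "local", "xai", "openai", "google",
  "minimax", "moonshot", "qwen", "zai", "litellm", "cloudflare",
  "copilot", "vercel-ai", "opencode-zen"
]);
const ANTHROPIC_VARIANTS = new Set(["anthropic", "anthropic-oauth", "anthropic-setup-token"]);

// Map a configured provider id to the wire protocol the inference engine uses.
function resolveProtocol(providerId) {
  if (ANTHROPIC_VARIANTS.has(providerId)) return "anthropic";
  if (providerId === "custom" || CUSTOM_IDS.has(providerId)) return "custom";
  return "openrouter"; // everything else falls through to OpenRouter
}

console.log(resolveProtocol("anthropic-oauth")); // "anthropic"
console.log(resolveProtocol("groq"));            // "custom"
console.log(resolveProtocol("something-else"));  // "openrouter"
```

Only the "custom" branch carries a `baseUrl` in the engine options, which is why the chunk falls back to `http://localhost:11434/v1` for local providers.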
@@ -0,0 +1,5 @@
+ const require_chunk = require('./chunk-jS-bbMI5.js');
+ require('./server-BEqsQF8r.js');
+ const require_daemon = require('./daemon-DNfRHk6k.js');
+
+ exports.DaemonManager = require_daemon.DaemonManager;
@@ -0,0 +1,5 @@
+ const require_chunk = require('./chunk-jS-bbMI5.js');
+ require('./server-Cv6kgXvX.js');
+ const require_daemon = require('./daemon-Dhd1g5G3.js');
+
+ exports.DaemonManager = require_daemon.DaemonManager;
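Elsewhere in the diff, both chat chunks mint session ids with `makeSessionId`, which takes the base-36 digits of `Math.random()` after the leading `0.`. A standalone sketch of just that helper (copied for illustration):

```javascript
// Base-36 fractional digits of Math.random(), minus the leading "0." prefix.
function makeSessionId() {
  return Math.random().toString(36).slice(2, 9);
}

console.log(makeSessionId()); // e.g. "k3x9q1z" — typically 7 lowercase base-36 chars
```

Note the id can be shorter than 7 characters when the base-36 expansion terminates early, so it is a convenience label rather than a fixed-width or collision-resistant identifier.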