cc-claw 0.1.0 → 0.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,12 +1,24 @@
- # CC-Claw — Coding CLI Claw
+ # CC-Claw

  <p align="center">
- <img src="assets/cc_claw.png" alt="CC-Claw Logo" width="300" />
+ <img src="assets/cc_claw.png" alt="CC-Claw Logo" width="500" />
  </p>

- Your personal AI assistant on Telegram, powered by the best coding CLIs. CC-Claw uses **Claude Code**, **Gemini CLI**, and **Codex** as its backend intelligence — giving you a full personal and productivity assistant accessible from any device.
+ A personal AI assistant on Telegram, powered by coding CLIs. CC-Claw uses **Claude Code**, **Gemini CLI**, and **Codex** as interchangeable backends — giving you a multi-model AI assistant accessible from any device.

- Send text, voice messages, photos, documents, or videos. The agent handles coding, research, email, scheduling, content creation, document analysis, and any task you need. Switch backends on the fly with `/claude`, `/gemini`, or `/codex`. Spawn sub-agents across different CLIs to work in parallel.
+ Send text, voice, photos, documents, or videos. Switch backends with `/claude`, `/gemini`, or `/codex`. Spawn sub-agents across different CLIs to work in parallel. Schedule recurring tasks. All state persists in SQLite.
+
+ ## What Can It Do
+
+ - **Multi-backend AI** — Switch between Claude, Gemini, and Codex mid-conversation. Each backend supports per-model thinking levels.
+ - **Agent orchestration** — Spawn named sub-agents across different CLIs. Agents communicate via inbox, share data on a whiteboard, and track tasks with dependencies.
+ - **Agent templates** — Define reusable agent personas as markdown files (`~/.cc-claw/agents/*.md`). Security reviewers, content writers, researchers — spawn them by name.
+ - **Scheduling** — Cron jobs, one-shot tasks, and intervals. Per-job backend, model, and delivery (Telegram, webhook, or silent).
+ - **Memory** — Hybrid vector + keyword search. The agent remembers what you tell it across sessions, with salience decay.
+ - **Voice** — Send voice messages, get voice replies. Groq Whisper for transcription, ElevenLabs for synthesis.
+ - **50+ CLI commands** — Full system management from terminal. Every command supports `--json` for scripting.
+ - **MCP support** — Model Context Protocol servers extend the agent with external tools and data.
+ - **Cross-channel awareness** — Actions from CLI are visible to Telegram and vice versa via the activity log.

  ## Supported Backends

@@ -16,19 +28,15 @@ Send text, voice messages, photos, documents, or videos. The agent handles codin
  | **Gemini** | `gemini` | 3.1 Pro Preview, 3 Flash |
  | **Codex** | `codex` | GPT-5.4, GPT-5.3 Codex, GPT-5.2 Codex |

- Each backend supports per-model thinking/reasoning level control, session resumption, and configurable permission modes.
-
  ## Prerequisites

  - **Node.js 20+**
- - **At least one coding CLI** installed:
- - Claude Code CLI — `npm install -g @anthropic-ai/claude-code`
- - Gemini CLI — `npm install -g @google/gemini-cli` (or via Homebrew)
- - Codex CLI — `npm install -g @openai/codex` (or via Homebrew)
+ - **At least one backend CLI** installed:
+ - Claude Code — `npm install -g @anthropic-ai/claude-code`
+ - Gemini CLI — `npm install -g @google/gemini-cli`
+ - Codex — `npm install -g @openai/codex`
  - **Telegram bot token** from [@BotFather](https://t.me/BotFather)

- Backend-specific setup (e.g., Vertex AI for Claude, Google Cloud for Gemini, ChatGPT subscription for Codex) is handled during `cc-claw setup`.
-
  ## Install

  ```bash
@@ -41,181 +49,185 @@ npm install -g cc-claw
  cc-claw setup
  ```

- The interactive wizard walks you through:
+ The wizard walks you through:

  1. Creating a Telegram bot and pasting the token
- 2. Getting your Telegram chat ID (for security — only you can use the bot)
- 3. Selecting your default backend and configuring its credentials
- 4. Optional features: voice (Groq STT + ElevenLabs TTS), web dashboard, video analysis (Gemini)
+ 2. Setting your chat ID (security — only you can use the bot)
+ 3. Configuring backend credentials (all optional; configure what you use)
+ 4. Optional features: voice, web dashboard, video analysis
  5. Installing as a background service

- Configuration is saved to `~/.cc-claw/.env`. Data is organized under `~/.cc-claw/`:
-
- ```
- ~/.cc-claw/
- .env config (Telegram token, backend credentials)
- data/cc-claw.db SQLite database
- logs/cc-claw.log stdout log
- logs/cc-claw.error.log stderr log
- runners/ config-driven CLI runner definitions (JSON)
- workspace/
- SOUL.md agent identity and personality (customizable)
- USER.md your profile and preferences
- CLAUDE.md / GEMINI.md native CLI awareness notes (auto-generated)
- context/ on-demand context files
- skills/ skill repository (available to all backends)
- ```
+ Config saved to `~/.cc-claw/.env`.

  ## Run

  ```bash
- # Foreground (see output in terminal)
+ # Background service (recommended)
+ cc-claw service install
+
+ # Or foreground
  cc-claw start
+ ```

- # Or install as a background service (launchd on macOS, systemd on Linux)
- cc-claw install
+ ```bash
+ cc-claw service start # Start
+ cc-claw service stop # Stop
+ cc-claw service restart # Restart (pick up config changes)
+ cc-claw service status # Check if running
  ```

  ## CLI

- CC-Claw has a comprehensive CLI with 50+ commands. Every command supports `--json` for AI agents and scripts.
-
  ```bash
- cc-claw status # System status (backend, model, usage)
- cc-claw doctor # Health checks + auto-fix
- cc-claw logs -f # Follow daemon logs
- cc-claw chat send "hello" # Talk to the AI from terminal
- cc-claw tui # Interactive terminal chat
- cc-claw backend set gemini # Switch backend
- cc-claw model list # List models
- cc-claw cron list # List scheduled jobs
- cc-claw agents list # List active sub-agents
+ cc-claw status # System status
+ cc-claw doctor --fix # Health checks + auto-repair
+ cc-claw logs -f # Follow logs
+ cc-claw chat send "hello" # Talk to the AI from terminal
+ cc-claw tui # Interactive terminal chat
+ cc-claw backend set gemini # Switch backend
  cc-claw agents spawn --runner claude --task "review code"
- cc-claw db stats # Database health
- cc-claw completion --shell zsh # Shell completions
- cc-claw --ai # Generate AI skill file
+ cc-claw cron list # Scheduled jobs
+ cc-claw --ai # Generate AI skill file
  ```

- | Command Group | Description |
- |---------------|-------------|
+ | Group | Description |
+ |-------|-------------|
  | `service` | start, stop, restart, status, install, uninstall |
  | `status` / `doctor` / `logs` | Diagnostics and monitoring |
- | `chat` / `tui` | Talk to the AI from terminal |
+ | `chat` / `tui` | Terminal AI chat |
  | `backend` / `model` / `thinking` | Backend and model management |
  | `memory` | list, search, add, forget, history |
- | `cron` / `schedule` | Scheduler management (create, cancel, pause, resume, run) |
+ | `cron` / `schedule` | Scheduler (create, cancel, pause, resume, run) |
  | `agents` / `tasks` / `runners` | Agent orchestration |
- | `config` | Runtime and static configuration |
- | `usage` / `cost` | Cost tracking and usage limits |
+ | `config` | Runtime configuration |
+ | `usage` / `cost` | Cost tracking and limits |
  | `permissions` / `tools` / `verbose` | Permission and tool management |
  | `skills` / `mcps` | Skills and MCP servers |
  | `db` | Database stats, backup, prune |
- | `completion` | Shell completions (bash/zsh/fish) |
- | `setup` | Interactive configuration wizard |

- Read commands work offline (direct DB). Write commands require the daemon running.
+ Read commands work offline (direct DB access). Write commands require the daemon.
+
+ ## Telegram Commands
+
+ ### Core
+
+ | Command | Description |
+ |---------|-------------|
+ | `/help` | All commands |
+ | `/status` | Backend, model, session, usage |
+ | `/newchat` | Start fresh (summarizes current session) |
+ | `/stop` | Cancel running task |
+
+ ### Backend & Model
+
+ | Command | Description |
+ |---------|-------------|
+ | `/backend` | Switch backend via keyboard |
+ | `/claude` `/gemini` `/codex` | Quick switch |
+ | `/model` | Switch model + thinking level |
+ | `/summarizer` | Session summarization model |
+
+ ### Scheduling
+
+ | Command | Description |
+ |---------|-------------|
+ | `/schedule <desc>` | Create a scheduled task |
+ | `/jobs` | List jobs |
+ | `/run <id>` | Trigger now |
+ | `/runs` | Run history |
+ | `/editjob <id>` | Edit a job |
+ | `/pause` / `/resume` / `/cancel` | Job lifecycle |
+ | `/health` | Scheduler health |
+
+ ### Agents

- ### Telegram Commands
+ | Command | Description |
+ |---------|-------------|
+ | `/agents` | List sub-agents |
+ | `/tasks` | Task board |
+ | `/stopagent <id>` | Cancel one agent |
+ | `/stopall` | Cancel all agents |
+ | `/runners` | Available CLI runners |

- Once running, send these to your bot:
+ ### Settings

  | Command | Description |
  |---------|-------------|
- | `/help` | Show available commands |
- | `/backend` | Switch AI backend (Claude / Gemini / Codex) |
- | `/model` | Switch model for active backend (with thinking level picker) |
- | `/summarizer` | Configure session summarization model |
- | `/status` | Backend, model, thinking level, session, context usage |
- | `/cost` | Estimated API cost for this chat (per-model breakdown) |
- | `/cost all` | All-time cost breakdown by model across all chats |
- | `/usage` | Usage per backend with limit warnings |
- | `/limits` | Configure usage limits per backend |
- | `/newchat` | Start a fresh conversation |
- | `/cwd <path>` | Set the working directory |
- | `/memory` | List stored memories |
- | `/forget <keyword>` | Remove a memory |
- | `/voice` | Toggle voice responses |
- | `/cron <desc>` | Schedule a task (or `/schedule`) |
- | `/cron` | List scheduled jobs (or `/jobs`) |
- | `/cron cancel <id>` | Cancel a scheduled job |
- | `/cron pause <id>` | Pause a job |
- | `/cron resume <id>` | Resume a job |
- | `/cron run <id>` | Trigger a job now |
- | `/cron runs [id]` | View run history |
- | `/cron edit <id>` | Edit a job |
- | `/cron health` | Scheduler health |
- | `/skills` | List skills from all backends |
- | `/skill-install <url>` | Install a skill from GitHub |
- | `/setup-profile` | Set up your user profile |
- | `/chats` | List authorized chats and aliases |
- | `/heartbeat` | Proactive awareness (on/off/interval/hours) |
- | `/history` | List recent session summaries |
- | `/stop` | Cancel the current task |
- | `/tools` | Configure which tools the agent can use |
- | `/permissions` | Switch permission mode |
- | `/verbose` | Tool visibility (off / normal / verbose) |
- | `/claude` `/gemini` `/codex` | Quick backend switch |
- | `/agents` | List active sub-agents |
- | `/tasks` | Show task board for current orchestration |
- | `/stopagent <id>` | Cancel a specific sub-agent |
- | `/stopall` | Cancel all sub-agents |
- | `/runners` | List registered CLI runners |
- | `/mcp` | List all MCP servers (central + backends) |
-
- ### Permission Modes
-
- - **yolo** — Full autopilot, all tools auto-approved
- - **safe** — Only user-configured tools allowed
- - **readonly** — Read, Glob, Grep only
- - **plan** — Plan only, no execution
+ | `/cwd <path>` | Set working directory |
+ | `/permissions` | Permission mode (yolo/safe/readonly/plan) |
+ | `/tools` | Configure allowed tools |
+ | `/voice` | Toggle voice replies |
+ | `/verbose` | Tool visibility level |
+ | `/limits` | Usage limits per backend |
+
+ ### Memory & Context
+
+ | Command | Description |
+ |---------|-------------|
+ | `/memory` | List memories |
+ | `/forget <keyword>` | Remove memories |
+ | `/history` | Past session summaries |
+ | `/cost` | API cost breakdown |
+ | `/skills` | Browse skills |
+ | `/mcp` | MCP servers |

  ## Architecture

  ```
- Telegram --> TelegramChannel --> router (handleMessage)
- |-- command --> handleCommand
- |-- voice --> Groq Whisper STT --> askAgent
- |-- photo/document/video --> handleMedia --> askAgent
- '-- text --> askAgent --> sendResponse
- '--> [SEND_FILE:/path] --> channel.sendFile
+ Telegram → TelegramChannel → router (handleMessage)
+ ├── command → handleCommand
+ ├── voice → Groq Whisper STT → askAgent
+ ├── photo/document/video → handleMedia → askAgent
+ └── text → askAgent → sendResponse
+       └── [SEND_FILE:/path] → channel.sendFile
  ```

- The core `askAgent()` function delegates to the active backend's adapter, which handles CLI arg construction, NDJSON parsing, and env setup. All state is persisted in SQLite with FTS5 full-text search.
+ `askAgent()` delegates to the active backend adapter, which constructs CLI args, spawns the subprocess, and parses NDJSON output. All state lives in SQLite with FTS5 full-text search.

  ### Agent Orchestration

- CC-Claw can spawn and manage multiple sub-agents across different coding CLIs simultaneously:
-
  ```
- User: "Have Claude review the code while Gemini researches the API docs"
- Main agent calls spawn_agent(runner: "claude", task: "Review...", role: "reviewer")
- Main agent calls spawn_agent(runner: "gemini", task: "Research...", role: "worker")
- Sub-agents communicate via inbox, share data via whiteboard
- → Results flow back to the main agent for synthesis
+ "Have Claude review the code while Gemini researches the API docs"
+ → spawn_agent(runner: "claude", name: "code-reviewer", role: "reviewer")
+ → spawn_agent(runner: "gemini", name: "api-researcher", role: "researcher")
+ Agents communicate via inbox, share data on whiteboard
+ → Results flow back to the main agent
  ```

- Sub-agents support task dependencies, role-based prompts (worker/reviewer/planner), idle detection, timeout enforcement, and per-orchestration cost tracking. See `docs/plans/2026-03-09-agent-orchestration-design.md` for the full design.
+ Sub-agents support named agents with personas, 9 role presets, file-based templates, per-agent model/tool overrides, task dependencies, idle detection, and cost tracking.

- ### MCP Management
+ ### Agent Templates

- MCP (Model Context Protocol) servers are managed centrally. `/mcp` shows all MCPs across all backends. Each backend's native MCPs are available when running through CC-Claw. The orchestrator MCP server gives agents tools for spawning sub-agents, messaging, and task management.
+ Define reusable agents in `~/.cc-claw/agents/*.md`:
+
+ ```markdown
+ ---
+ name: security-reviewer
+ description: Reviews code for security vulnerabilities
+ runner: claude
+ role: reviewer
+ model: claude-sonnet-4-6
+ ---
+ You are a senior security engineer...
+ ```
+
+ The main agent discovers templates via `list_templates` and spawns them by name.

  ### Adding a CLI Runner

- Users can add any coding CLI as a sub-agent runner without code changes:
+ No code changes needed:

  1. Tell CC-Claw: *"Onboard cursor CLI as a sub-agent"*
- 2. The agent discovers capabilities, tests the CLI, and creates a JSON config at `~/.cc-claw/runners/`
- 3. The new runner is immediately available via `/runners`
-
- For developers: create a JSON runner config (see `src/agents/runners/config-types.ts`) or a TypeScript adapter.
+ 2. The agent discovers capabilities, writes a JSON config to `~/.cc-claw/runners/`
+ 3. Available immediately via `/runners`

  ### Adding a Channel

- 1. Create `src/channels/<name>.ts` implementing the `Channel` interface
- 2. Register it in `src/index.ts`
+ 1. Implement the `Channel` interface in `src/channels/<name>.ts`
+ 2. Implement `getCapabilities()` for supported features
+ 3. Register in `src/index.ts`

- The router handles all message logic — the channel only delivers and sends messages.
+ Notifications broadcast to all channels. Health is monitored with auto-reconnection.

  ## Environment Variables

@@ -224,22 +236,22 @@ Set in `~/.cc-claw/.env` (created by `cc-claw setup`):
  | Variable | Required | Description |
  |----------|----------|-------------|
  | `TELEGRAM_BOT_TOKEN` | Yes | From @BotFather |
- | `ALLOWED_CHAT_ID` | Yes | Comma-separated chat IDs (your user ID + group IDs) |
+ | `ALLOWED_CHAT_ID` | Yes | Comma-separated chat IDs |
  | `CLAUDE_CODE_USE_VERTEX` | For Claude | Set to `1` |
- | `CLOUD_ML_REGION` | For Claude | e.g. `us-east5` |
- | `ANTHROPIC_VERTEX_PROJECT_ID` | For Claude | Your GCP project |
+ | `CLOUD_ML_REGION` | For Claude | e.g., `us-east5` |
+ | `ANTHROPIC_VERTEX_PROJECT_ID` | For Claude | GCP project ID |
  | `GROQ_API_KEY` | No | Voice transcription |
- | `ELEVENLABS_API_KEY` | No | Voice replies (TTS) |
+ | `ELEVENLABS_API_KEY` | No | Voice replies |
  | `GEMINI_API_KEY` | No | Video analysis |
- | `DASHBOARD_ENABLED` | No | Set to `1` for web dashboard |
- | `DASHBOARD_TOKEN` | No | Fixed auth token for dashboard API (auto-generated if not set) |
- | `CLAUDE_CODE_EXECUTABLE` | No | Path to claude CLI (auto-detected) |
- | `GEMINI_CLI_EXECUTABLE` | No | Path to gemini CLI (auto-detected) |
- | `CODEX_EXECUTABLE` | No | Path to codex CLI (auto-detected) |
- | `EMBEDDING_PROVIDER` | No | Embedding provider: `ollama` (default), `gemini`, `openai`, `off` |
- | `MEMORY_VECTOR_WEIGHT` | No | Vector vs keyword weight for memory search (default: `0.7`) |
- | `OLLAMA_URL` | No | Ollama server URL (default: `http://localhost:11434`) |
- | `CC_CLAW_HOME` | No | Config dir (default: `~/.cc-claw`) |
+ | `DASHBOARD_ENABLED` | No | `1` for web dashboard on port 3141 |
+ | `EMBEDDING_PROVIDER` | No | `ollama` (default), `gemini`, `openai`, `off` |
+ | `CC_CLAW_HOME` | No | Config directory (default: `~/.cc-claw`) |
+
+ ## Documentation
+
+ - **[Quick Start](docs/QUICKSTART.md)** — Zero to working bot in 10 minutes
+ - **[User Guide](docs/USER_GUIDE.md)** — Full reference for all features
+ - **[Roadmap](docs/ROADMAP.md)** — Planned features

  ## License

@@ -9,19 +9,42 @@ var CHAT_ID = process.env.CC_CLAW_CHAT_ID ?? "";
  var AGENT_ID = process.env.CC_CLAW_AGENT_ID ?? "main";
  var DASHBOARD_TOKEN = process.env.CC_CLAW_DASHBOARD_TOKEN ?? "";
  var IS_SUB_AGENT = AGENT_ID !== "main";
+ var MAX_RETRIES = 3;
+ var RETRY_BASE_MS = 500;
  async function callApi(path, body) {
  const url = `${BASE_URL}${path}`;
  const headers = { "Content-Type": "application/json" };
  if (DASHBOARD_TOKEN) {
  headers["Authorization"] = `Bearer ${DASHBOARD_TOKEN}`;
  }
- const opts = body ? { method: "POST", headers, body: JSON.stringify(body) } : { method: "GET", headers };
- const res = await fetch(url, opts);
- if (!res.ok) {
- const text = await res.text();
- throw new Error(`API error ${res.status}: ${text}`);
+ const reqInit = body ? { method: "POST", headers, body: JSON.stringify(body) } : { method: "GET", headers };
+ let lastError = null;
+ for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
+ try {
+ const res = await fetch(url, reqInit);
+ if (!res.ok) {
+ const text = await res.text();
+ if (res.status >= 500 && attempt < MAX_RETRIES) {
+ lastError = new Error(`API error ${res.status}: ${text}`);
+ await sleep(RETRY_BASE_MS * Math.pow(2, attempt));
+ continue;
+ }
+ throw new Error(`API error ${res.status}: ${text}`);
+ }
+ return res.json();
+ } catch (err) {
+ if (err instanceof TypeError && attempt < MAX_RETRIES) {
+ lastError = err;
+ await sleep(RETRY_BASE_MS * Math.pow(2, attempt));
+ continue;
+ }
+ throw err;
+ }
  }
- return res.json();
+ throw lastError ?? new Error(`API call failed after ${MAX_RETRIES} retries`);
+ }
+ function sleep(ms) {
+ return new Promise((resolve) => setTimeout(resolve, ms));
  }
  var server = new McpServer({
  name: "cc-claw-orchestrator",
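The new retry loop in `callApi` backs off exponentially: it retries 5xx responses and network failures (`fetch` rejects with a `TypeError`), and rethrows everything else immediately. A standalone sketch of that pattern, for illustration only — `backoffDelays` and `withRetries` are hypothetical helpers, not part of the package:

```javascript
// Waits before each retry: baseMs, 2*baseMs, 4*baseMs, ...
function backoffDelays(maxRetries, baseMs) {
  const delays = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(baseMs * Math.pow(2, attempt));
  }
  return delays;
}

// Same shape as the diff: retry transient errors (TypeError from fetch),
// rethrow everything else, give up after maxRetries extra attempts.
async function withRetries(fn, maxRetries = 3, baseMs = 500) {
  let lastError = null;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      if (err instanceof TypeError && attempt < maxRetries) {
        lastError = err;
        await new Promise((r) => setTimeout(r, baseMs * Math.pow(2, attempt)));
        continue;
      }
      throw err;
    }
  }
  throw lastError ?? new Error("retries exhausted");
}
```

With the constants from the diff (`MAX_RETRIES = 3`, `RETRY_BASE_MS = 500`), a request that keeps failing waits 500, 1000, then 2000 ms before the final error is thrown.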
@@ -304,6 +327,35 @@ server.tool(
  return { content: [{ type: "text", text: `Broadcast sent to ${result.sent} agent(s).` }] };
  }
  );
+ server.tool(
+ "report_progress",
+ "Report progress on a long-running task. Writes to the shared whiteboard and notifies the main agent. Call periodically (every few minutes) during tasks that take more than a couple of minutes.",
+ {
+ status: z.string().describe("Short status message (e.g., 'Analyzing 5 of 12 files', '70% complete')"),
+ detail: z.string().optional().describe("Optional longer description of current work")
+ },
+ async ({ status, detail }) => {
+ const shortId = AGENT_ID.slice(0, 8);
+ await callApi("/api/orchestrator/set-state", {
+ chatId: CHAT_ID,
+ key: `progress:${shortId}`,
+ value: JSON.stringify({ status, detail, timestamp: (/* @__PURE__ */ new Date()).toISOString() }),
+ setBy: AGENT_ID
+ });
+ if (IS_SUB_AGENT) {
+ await callApi("/api/orchestrator/send-message", {
+ chatId: CHAT_ID,
+ message: {
+ toAgentId: "main",
+ fromAgentId: AGENT_ID,
+ messageType: "status_update",
+ content: `Progress: ${status}${detail ? ` \u2014 ${detail}` : ""}`
+ }
+ });
+ }
+ return { content: [{ type: "text", text: `Progress reported: ${status}` }] };
+ }
+ );
  server.tool(
  "list_runners",
  "List all registered CLI runners and their capabilities",
package/dist/cli.js CHANGED
@@ -48,7 +48,7 @@ var VERSION;
  var init_version = __esm({
  "src/version.ts"() {
  "use strict";
- VERSION = true ? "0.1.0" : (() => {
+ VERSION = true ? "0.2.2" : (() => {
  try {
  return JSON.parse(readFileSync(join2(process.cwd(), "package.json"), "utf-8")).version ?? "unknown";
  } catch {
@@ -9996,11 +9996,17 @@ async function patchUsmForCcClaw(usmDir) {
  }
  }
  }
- const oldFind = "find ~/.{claude,agents,gemini,gemini/antigravity,openclaw/workspace,cursor,config/opencode,config/goose,roo,cline}/skills";
- const newFind = "find ~/.{claude,agents,gemini,gemini/antigravity,openclaw/workspace,cc-claw/workspace,cursor,config/opencode,config/goose,roo,cline}/skills";
- if (content.includes(oldFind)) {
- content = content.replace(oldFind, newFind);
- patched = true;
+ const findPatterns = [
+ "find ~/.{claude,agents,gemini/antigravity,openclaw/workspace,cursor,config/opencode,config/goose,roo,cline}/skills",
+ "find ~/.{claude,agents,gemini,gemini/antigravity,openclaw/workspace,cursor,config/opencode,config/goose,roo,cline}/skills"
+ ];
+ for (const oldFind of findPatterns) {
+ if (content.includes(oldFind) && !content.includes("cc-claw/workspace")) {
+ const newFind = oldFind.replace("openclaw/workspace,", "openclaw/workspace,cc-claw/workspace,");
+ content = content.replace(oldFind, newFind);
+ patched = true;
+ break;
+ }
  }
  if (patched) {
  await writeFile3(skillPath, content, "utf-8");
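The loop above makes the patch idempotent: it only rewrites when a known marker is present and the `cc-claw/workspace` entry is not already there, so running it twice changes nothing. A minimal sketch of that guard-then-patch pattern — `patchOnce` is a hypothetical helper, not part of the package:

```javascript
// Insert `insertion` right after the first occurrence of `marker`,
// but only once: a second call on already-patched content is a no-op.
function patchOnce(content, marker, insertion) {
  if (!content.includes(marker) || content.includes(insertion)) {
    return { content, patched: false };
  }
  // String.replace with a string argument rewrites only the first match.
  return { content: content.replace(marker, marker + insertion), patched: true };
}
```

The `!content.includes(insertion)` guard is what keeps repeated upgrades from duplicating the `cc-claw/workspace` entry.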
@@ -13817,38 +13823,52 @@ async function setup() {
  header(3, TOTAL_STEPS, "Backend Configuration");
  console.log(" CC-Claw supports multiple AI backends: Claude, Gemini, and Codex.");
  console.log(" Configure the ones you plan to use. You can add more later.\n");
- const configureVertex = env.CLAUDE_CODE_USE_VERTEX === "1" || await confirm("Configure Claude (Vertex AI)?", true);
- if (configureVertex) {
+ const configureClaude = env.CLAUDE_CODE_USE_VERTEX === "1" || env.ANTHROPIC_API_KEY || await confirm("Configure Claude Code?", true);
+ if (configureClaude) {
  const detectedVertex = process.env.CLAUDE_CODE_USE_VERTEX === "1";
- const detectedRegion = process.env.CLOUD_ML_REGION;
- const detectedProject = process.env.ANTHROPIC_VERTEX_PROJECT_ID;
- if (detectedVertex && detectedProject) {
+ const detectedApiKey = process.env.ANTHROPIC_API_KEY;
+ if (detectedVertex && process.env.ANTHROPIC_VERTEX_PROJECT_ID) {
  console.log(green(" Auto-detected Vertex AI from your environment:"));
- console.log(green(` Project: ${detectedProject}`));
- console.log(green(` Region: ${detectedRegion ?? "global"}`));
+ console.log(green(` Project: ${process.env.ANTHROPIC_VERTEX_PROJECT_ID}`));
+ console.log(green(` Region: ${process.env.CLOUD_ML_REGION ?? "global"}`));
  const useDetected = await confirm("Use these settings?", true);
  if (useDetected) {
  env.CLAUDE_CODE_USE_VERTEX = "1";
- env.CLOUD_ML_REGION = detectedRegion ?? "global";
- env.ANTHROPIC_VERTEX_PROJECT_ID = detectedProject;
- } else {
- delete env.CLAUDE_CODE_USE_VERTEX;
+ env.CLOUD_ML_REGION = process.env.CLOUD_ML_REGION ?? "global";
+ env.ANTHROPIC_VERTEX_PROJECT_ID = process.env.ANTHROPIC_VERTEX_PROJECT_ID;
+ }
+ } else if (detectedApiKey) {
+ console.log(green(" Auto-detected Anthropic API key from your environment."));
+ const useDetected = await confirm("Use it?", true);
+ if (useDetected) {
+ env.ANTHROPIC_API_KEY = detectedApiKey;
  }
  }
- if (!env.CLAUDE_CODE_USE_VERTEX) {
- console.log(dim(" You need a Google Cloud project with the Vertex AI API enabled."));
- console.log(dim(" See: https://docs.anthropic.com/en/docs/build-with-claude/vertex-ai\n"));
- const projectId = await requiredInput(
- "Vertex AI Project ID",
- env.ANTHROPIC_VERTEX_PROJECT_ID
- );
- env.ANTHROPIC_VERTEX_PROJECT_ID = projectId;
- const regionInput = await ask(
- ` Cloud ML Region ${dim(`[${env.CLOUD_ML_REGION ?? "global"}]`)}: `
- );
- env.CLOUD_ML_REGION = regionInput.trim() || env.CLOUD_ML_REGION || "global";
- env.CLAUDE_CODE_USE_VERTEX = "1";
- console.log(green(` Vertex AI configured: ${projectId} (${env.CLOUD_ML_REGION})`));
+ if (!env.CLAUDE_CODE_USE_VERTEX && !env.ANTHROPIC_API_KEY) {
+ console.log(" How do you access Claude Code?\n");
+ console.log(cyan(" 1. Anthropic API key (most common)"));
+ console.log(cyan(" 2. Google Vertex AI (enterprise)"));
+ console.log(cyan(" 3. Skip Claude for now\n"));
+ const choice = await ask(" Choose [1/2/3]: ");
+ if (choice.trim() === "1") {
+ console.log("");
+ console.log(dim(" Get an API key at: https://console.anthropic.com/settings/keys\n"));
+ const apiKey = await requiredInput("Anthropic API key", env.ANTHROPIC_API_KEY);
+ env.ANTHROPIC_API_KEY = apiKey;
+ console.log(green(" Claude configured with API key."));
+ } else if (choice.trim() === "2") {
+ console.log("");
+ console.log(dim(" You need a Google Cloud project with the Vertex AI API enabled."));
+ console.log(dim(" See: https://docs.anthropic.com/en/docs/build-with-claude/vertex-ai\n"));
+ const projectId = await requiredInput("Vertex AI Project ID", env.ANTHROPIC_VERTEX_PROJECT_ID);
+ env.ANTHROPIC_VERTEX_PROJECT_ID = projectId;
+ const regionInput = await ask(` Cloud ML Region ${dim(`[${env.CLOUD_ML_REGION ?? "global"}]`)}: `);
+ env.CLOUD_ML_REGION = regionInput.trim() || env.CLOUD_ML_REGION || "global";
+ env.CLAUDE_CODE_USE_VERTEX = "1";
+ console.log(green(` Vertex AI configured: ${projectId} (${env.CLOUD_ML_REGION})`));
+ } else {
+ console.log(dim(" Skipping Claude. You can configure it later in ~/.cc-claw/.env"));
+ }
  }
  } else {
  console.log(dim(" Skipping Claude. You can configure it later in ~/.cc-claw/.env"));
@@ -13890,13 +13910,20 @@ async function setup() {
  "",
  "# Telegram",
  `TELEGRAM_BOT_TOKEN=${env.TELEGRAM_BOT_TOKEN ?? ""}`,
- `ALLOWED_CHAT_ID=${env.ALLOWED_CHAT_ID ?? ""}`,
- "",
- "# Vertex AI",
- `CLAUDE_CODE_USE_VERTEX=${env.CLAUDE_CODE_USE_VERTEX ?? "1"}`,
- `CLOUD_ML_REGION=${env.CLOUD_ML_REGION ?? "global"}`,
- `ANTHROPIC_VERTEX_PROJECT_ID=${env.ANTHROPIC_VERTEX_PROJECT_ID ?? ""}`
+ `ALLOWED_CHAT_ID=${env.ALLOWED_CHAT_ID ?? ""}`
  ];
+ if (env.ANTHROPIC_API_KEY) {
+ envLines.push("", "# Claude (API key)", `ANTHROPIC_API_KEY=${env.ANTHROPIC_API_KEY}`);
+ }
+ if (env.CLAUDE_CODE_USE_VERTEX === "1") {
+ envLines.push(
+ "",
+ "# Claude (Vertex AI)",
+ `CLAUDE_CODE_USE_VERTEX=1`,
+ `CLOUD_ML_REGION=${env.CLOUD_ML_REGION ?? "global"}`,
+ `ANTHROPIC_VERTEX_PROJECT_ID=${env.ANTHROPIC_VERTEX_PROJECT_ID ?? ""}`
+ );
+ }
  if (env.GROQ_API_KEY) {
  envLines.push("", "# Voice", `GROQ_API_KEY=${env.GROQ_API_KEY}`);
  if (env.ELEVENLABS_API_KEY) {
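The env writer above now emits only the sections the user actually configured. For a setup that chose an Anthropic API key and enabled voice, the resulting `~/.cc-claw/.env` would be shaped roughly like this (illustrative placeholder values, not real credentials; which sections appear depends on the choices made during `cc-claw setup`):

```ini
# Telegram
TELEGRAM_BOT_TOKEN=123456:EXAMPLE-TOKEN
ALLOWED_CHAT_ID=11111111

# Claude (API key)
ANTHROPIC_API_KEY=sk-ant-EXAMPLE

# Voice
GROQ_API_KEY=gsk_EXAMPLE
```

A Vertex AI setup would instead carry the `CLAUDE_CODE_USE_VERTEX`, `CLOUD_ML_REGION`, and `ANTHROPIC_VERTEX_PROJECT_ID` lines under a `# Claude (Vertex AI)` header.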
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "cc-claw",
- "version": "0.1.0",
+ "version": "0.2.2",
  "description": "CC-Claw: Personal AI assistant on Telegram — multi-backend (Claude, Gemini, Codex), sub-agent orchestration, MCP management",
  "type": "module",
  "main": "dist/cli.js",