claudish-oai 5.0.0

# Claudish AI Agent Usage Guide

**Version:** 2.2.0
**Target Audience:** AI Agents running within Claude Code
**Purpose:** Quick reference for using Claudish CLI and MCP server in agentic workflows

---

## TL;DR - Quick Start

```bash
# 1. Get available models
claudish --list-models --json

# 2. Run task with specific model (OpenRouter)
claudish --model openai/gpt-5.3 "your task here"

# 3. Run with direct Gemini API
claudish --model g/gemini-2.0-flash "your task here"

# 4. Run with local model
claudish --model ollama/llama3.2 "your task here"

# 5. For large prompts, use stdin
echo "your task" | claudish --stdin --model openai/gpt-5.3
```

## What is Claudish?

Claudish = Claude Code + Any AI Model

- ✅ Run Claude Code with **any AI model** via prefix-based routing
- ✅ Supports OpenRouter (100+ models), direct Gemini API, direct OpenAI API
- ✅ Supports local models (Ollama, LM Studio, vLLM, MLX)
- ✅ **MCP Server mode** - expose models as tools for Claude Code
- ✅ 100% Claude Code feature compatibility
- ✅ Local proxy server (no data sent to Claudish servers)
- ✅ Cost tracking and model selection

## Model Routing

| Prefix | Backend | Example |
|--------|---------|---------|
| _(none)_ | OpenRouter | `openai/gpt-5.3` |
| `g/` `gemini/` | Google Gemini | `g/gemini-2.0-flash` |
| `v/` `vertex/` | Vertex AI | `v/gemini-2.5-flash` |
| `oai/` `openai/` | OpenAI | `oai/gpt-4o` |
| `ollama/` | Ollama | `ollama/llama3.2` |
| `lmstudio/` | LM Studio | `lmstudio/model` |
| `http://...` | Custom | `http://localhost:8000/model` |

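The prefix routing above can be sketched as a small resolver. This is a hypothetical helper for illustration, not the actual Claudish internals; it covers only the unambiguous prefixes from the table and falls back to OpenRouter when no prefix matches:

```typescript
// Hypothetical sketch of prefix-based routing; real Claudish internals may differ.
type Backend = "openrouter" | "gemini" | "vertex" | "openai" | "ollama" | "lmstudio" | "custom";

const PREFIXES: Array<[string, Backend]> = [
  ["g/", "gemini"], ["gemini/", "gemini"],
  ["v/", "vertex"], ["vertex/", "vertex"],
  ["oai/", "openai"],
  ["ollama/", "ollama"],
  ["lmstudio/", "lmstudio"],
];

function resolveBackend(model: string): { backend: Backend; model: string } {
  // Full URLs route to a custom endpoint
  if (model.startsWith("http://") || model.startsWith("https://")) {
    return { backend: "custom", model };
  }
  for (const [prefix, backend] of PREFIXES) {
    if (model.startsWith(prefix)) {
      // Strip the routing prefix before handing the id to the backend
      return { backend, model: model.slice(prefix.length) };
    }
  }
  // No recognized prefix: treat the whole id as an OpenRouter model
  return { backend: "openrouter", model };
}
```
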
### Vertex AI Partner Models

Vertex AI supports Google and partner models (MaaS):

```bash
# Google Gemini on Vertex
claudish --model v/gemini-2.5-flash "task"

# Partner models (MiniMax, Mistral, DeepSeek, Qwen, OpenAI OSS)
claudish --model vertex/minimax/minimax-m2-maas "task"
claudish --model vertex/mistralai/codestral-2 "write code"
claudish --model vertex/deepseek/deepseek-v3-2-maas "analyze"
claudish --model vertex/qwen/qwen3-coder-480b-a35b-instruct-maas "implement"
claudish --model vertex/openai/gpt-oss-120b-maas "reason"
```

## Prerequisites

1. **Install Claudish:**
   ```bash
   npm install -g claudish
   ```

2. **Set API Key (at least one):**
   ```bash
   # OpenRouter (100+ models)
   export OPENROUTER_API_KEY='sk-or-v1-...'

   # OR Gemini direct
   export GEMINI_API_KEY='...'

   # OR Vertex AI (Express mode)
   export VERTEX_API_KEY='...'

   # OR Vertex AI (OAuth mode - uses gcloud ADC)
   export VERTEX_PROJECT='your-gcp-project-id'
   ```

3. **Optional but recommended:**
   ```bash
   export ANTHROPIC_API_KEY='sk-ant-api03-placeholder'
   ```

## Top Models for Development

| Model ID | Provider | Category | Best For |
|----------|----------|----------|----------|
| `openai/gpt-5.3` | OpenAI | Reasoning | **Default** - Most advanced reasoning |
| `minimax/minimax-m2.1` | MiniMax | Coding | Budget-friendly, fast |
| `z-ai/glm-4.7` | Z.AI | Coding | Balanced performance |
| `google/gemini-3-pro-preview` | Google | Reasoning | 1M context window |
| `moonshotai/kimi-k2-thinking` | Moonshot | Reasoning | Extended thinking |
| `deepseek/deepseek-v3.2` | DeepSeek | Coding | Code specialist |
| `qwen/qwen3-vl-235b-a22b-thinking` | Alibaba | Vision | Vision + reasoning |

**Direct API Options (lower latency):**

| Model ID | Backend | Best For |
|----------|---------|----------|
| `g/gemini-2.0-flash` | Gemini | Fast tasks, large context |
| `v/gemini-2.5-flash` | Vertex AI | Enterprise, GCP billing |
| `oai/gpt-4o` | OpenAI | General purpose |
| `ollama/llama3.2` | Local | Free, private |

**Vertex AI Partner Models (MaaS):**

| Model ID | Provider | Best For |
|----------|----------|----------|
| `vertex/minimax/minimax-m2-maas` | MiniMax | Fast, budget-friendly |
| `vertex/mistralai/codestral-2` | Mistral | Code specialist |
| `vertex/deepseek/deepseek-v3-2-maas` | DeepSeek | Deep reasoning |
| `vertex/qwen/qwen3-coder-480b-a35b-instruct-maas` | Qwen | Agentic coding |
| `vertex/openai/gpt-oss-120b-maas` | OpenAI | Open-weight reasoning |

**Update models:**
```bash
claudish --list-models --force-update
```

## Critical: File-Based Pattern for Sub-Agents

### ⚠️ Problem: Context Window Pollution

Running Claudish directly in the main conversation pollutes the context with:
- The entire conversation transcript
- All tool outputs
- Model reasoning (10K+ tokens)

### ✅ Solution: File-Based Sub-Agent Pattern

**Pattern:**
1. Write instructions to a file
2. Run Claudish with file input
3. Read the result from a file
4. Return a summary only (not the full output)

**Example:**
```typescript
// Step 1: Write instruction file
const instructionFile = `/tmp/claudish-task-${Date.now()}.md`;
const resultFile = `/tmp/claudish-result-${Date.now()}.md`;

const instruction = `# Task
Implement user authentication

# Requirements
- JWT tokens
- bcrypt password hashing
- Protected route middleware

# Output
Write to: ${resultFile}
`;

await Write({ file_path: instructionFile, content: instruction });

// Step 2: Run Claudish
await Bash(`claudish --model x-ai/grok-code-fast-1 --stdin < ${instructionFile}`);

// Step 3: Read result
const result = await Read({ file_path: resultFile });

// Step 4: Clean up temp files (before returning, or the cleanup never runs)
await Bash(`rm ${instructionFile} ${resultFile}`);

// Step 5: Return summary only
const summary = extractSummary(result);
return `✅ Completed. ${summary}`;
```
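
The `extractSummary` helper in the example is left undefined. A minimal sketch, assuming the result file follows the `## Summary` heading convention used elsewhere in this guide, could be:

```typescript
// Hypothetical helper: pull the "## Summary" section out of a result file.
// Assumes the markdown format used in this guide; adjust to your own convention.
function extractSummary(markdown: string): string {
  // Capture everything between "## Summary" and the next "##" heading (or end of file)
  const match = markdown.match(/## Summary\s*\n([\s\S]*?)(?=\n##|$)/);
  return match ? match[1].trim() : "Task completed (no summary section found)";
}
```
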

## Using Claudish in Sub-Agents

### Method 1: Direct Bash Execution

```typescript
// For simple tasks with short output
const { stdout } = await Bash("claudish --model x-ai/grok-code-fast-1 --json 'quick task'");
const result = JSON.parse(stdout);

// Return only essential info
return `Cost: $${result.total_cost_usd}, Result: ${result.result.substring(0, 100)}...`;
```

### Method 2: Task Tool Delegation

```typescript
// For complex tasks requiring isolation
const result = await Task({
  subagent_type: "general-purpose",
  description: "Implement feature with Grok",
  prompt: `
Use Claudish to implement the feature with the Grok model:

STEPS:
1. Create an instruction file at /tmp/claudish-instruction-${Date.now()}.md
2. Write the feature requirements to the file
3. Run: claudish --model x-ai/grok-code-fast-1 --stdin < /tmp/claudish-instruction-*.md
4. Read the result and return ONLY:
   - Files modified (list)
   - Brief summary (2-3 sentences)
   - Cost (if available)

DO NOT return full implementation details.
Keep the response under 300 tokens.
`
});
```

### Method 3: Multi-Model Comparison

```typescript
// Compare results from multiple models
const models = [
  "x-ai/grok-code-fast-1",
  "google/gemini-2.5-flash",
  "openai/gpt-5"
];

for (const model of models) {
  const result = await Bash(`claudish --model ${model} --json "analyze security"`);
  const data = JSON.parse(result.stdout);

  console.log(`${model}: $${data.total_cost_usd}`);
  // Store results for comparison
}
```
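
The "store results for comparison" step can be sketched as a small aggregation helper. This is illustrative only; the field names (`result`, `total_cost_usd`) follow the `--json` output format documented in this guide:

```typescript
// Illustrative helpers: parse each `claudish --json` stdout and rank models by cost.
interface RunResult { model: string; cost: number; answer: string; }

function parseRun(model: string, stdout: string): RunResult {
  const data = JSON.parse(stdout);
  // Fall back to safe defaults if a field is missing
  return { model, cost: data.total_cost_usd ?? 0, answer: data.result ?? "" };
}

function rankByCost(runs: RunResult[]): RunResult[] {
  // Copy before sorting so the caller's array is not mutated
  return [...runs].sort((a, b) => a.cost - b.cost);
}
```
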

## Essential CLI Flags

### Core Flags

| Flag | Description | Example |
|------|-------------|---------|
| `--model <model>` | Model to use (see Model Routing) | `--model x-ai/grok-code-fast-1` |
| `--stdin` | Read prompt from stdin | `cat task.md \| claudish --stdin --model grok` |
| `--json` | JSON output (structured) | `claudish --json "task"` |
| `--list-models` | List available models | `claudish --list-models --json` |

### Useful Flags

| Flag | Description | Default |
|------|-------------|---------|
| `--quiet` / `-q` | Suppress logs | Enabled in single-shot mode |
| `--verbose` / `-v` | Show logs | Enabled in interactive mode |
| `--debug` / `-d` | Debug logging to file | Disabled |
| `--no-auto-approve` | Require approval prompts | Auto-approve enabled |

## Common Workflows

### Workflow 1: Quick Code Fix (Grok)

```bash
# Fast coding with visible reasoning
claudish --model x-ai/grok-code-fast-1 "fix null pointer error in user.ts"
```

### Workflow 2: Complex Refactoring (GPT-5)

```bash
# Advanced reasoning for architecture
claudish --model openai/gpt-5 "refactor to microservices architecture"
```

### Workflow 3: Code Review (Gemini)

```bash
# Deep analysis with large context
git diff | claudish --stdin --model google/gemini-2.5-flash "review for bugs"
```

### Workflow 4: UI Implementation (Qwen Vision)

```bash
# Vision model for visual tasks
claudish --model qwen/qwen3-vl-235b-a22b-instruct "implement dashboard from design"
```

## MCP Server Mode

Claudish can run as an MCP (Model Context Protocol) server, exposing OpenRouter models as tools that Claude Code can call mid-conversation. This is useful when you want to:

- Query external models without spawning a subprocess
- Compare responses from multiple models
- Use specific models for specific subtasks

### Starting MCP Server

```bash
# Start MCP server (stdio transport)
claudish --mcp
```

### Claude Code Configuration

Add to `~/.claude/settings.json`:

```json
{
  "mcpServers": {
    "claudish": {
      "command": "claudish",
      "args": ["--mcp"],
      "env": {
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}
```

Or use npx (no installation needed):

```json
{
  "mcpServers": {
    "claudish": {
      "command": "npx",
      "args": ["claudish@latest", "--mcp"]
    }
  }
}
```

### Available MCP Tools

| Tool | Description | Example Use |
|------|-------------|-------------|
| `run_prompt` | Execute a prompt on any model | Get a second opinion from Grok |
| `list_models` | Show recommended models | Find models with tool support |
| `search_models` | Fuzzy search all models | Find vision-capable models |
| `compare_models` | Run the same prompt on multiple models | Compare reasoning approaches |

### Using MCP Tools from Claude Code

Once configured, Claude Code can use these tools directly:

```
User: "Use Grok to review this code"
Claude: [calls run_prompt tool with model="x-ai/grok-code-fast-1"]

User: "What models support vision?"
Claude: [calls search_models tool with query="vision"]

User: "Compare how GPT-5 and Gemini explain this concept"
Claude: [calls compare_models tool with models=["openai/gpt-5.3", "google/gemini-3-pro-preview"]]
```

### MCP vs CLI Mode

| Feature | CLI Mode | MCP Mode |
|---------|----------|----------|
| Use case | Replace Claude Code model | Call models as tools |
| Context | Full Claude Code session | Single prompt/response |
| Streaming | Full streaming | Buffered response |
| Best for | Primary model replacement | Second opinions, comparisons |

### MCP Tool Details

**run_prompt**
```typescript
{
  model: string,          // e.g., "x-ai/grok-code-fast-1"
  prompt: string,         // The prompt to send
  system_prompt?: string, // Optional system prompt
  max_tokens?: number     // Default: 4096
}
```

**list_models**
```typescript
// No parameters - returns curated list of recommended models
{}
```

**search_models**
```typescript
{
  query: string,  // e.g., "grok", "vision", "free"
  limit?: number  // Default: 10
}
```

**compare_models**
```typescript
{
  models: string[],       // e.g., ["openai/gpt-5.3", "x-ai/grok-code-fast-1"]
  prompt: string,         // Prompt to send to all models
  system_prompt?: string  // Optional system prompt
}
```

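A caller-side sketch of validating `run_prompt` arguments before invoking the tool. This is a hypothetical helper that mirrors the parameter shape documented above; the MCP server performs its own validation independently:

```typescript
// Hypothetical client-side check for run_prompt arguments.
interface RunPromptArgs {
  model: string;
  prompt: string;
  system_prompt?: string;
  max_tokens?: number; // server default: 4096
}

function isRunPromptArgs(value: unknown): value is RunPromptArgs {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.model === "string" && v.model.length > 0 &&
    typeof v.prompt === "string" &&
    (v.system_prompt === undefined || typeof v.system_prompt === "string") &&
    (v.max_tokens === undefined || typeof v.max_tokens === "number")
  );
}
```
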

## Getting Model List

### JSON Output (Recommended)

```bash
claudish --list-models --json
```

**Output:**
```json
{
  "version": "1.8.0",
  "lastUpdated": "2025-11-19",
  "source": "https://openrouter.ai/models",
  "models": [
    {
      "id": "x-ai/grok-code-fast-1",
      "name": "Grok Code Fast 1",
      "description": "Ultra-fast agentic coding",
      "provider": "xAI",
      "category": "coding",
      "priority": 1,
      "pricing": {
        "input": "$0.20/1M",
        "output": "$1.50/1M",
        "average": "$0.85/1M"
      },
      "context": "256K",
      "supportsTools": true,
      "supportsReasoning": true
    }
  ]
}
```

### Parse in TypeScript

```typescript
const { stdout } = await Bash("claudish --list-models --json");
const data = JSON.parse(stdout);

// Get all model IDs
const modelIds = data.models.map(m => m.id);

// Get coding models
const codingModels = data.models.filter(m => m.category === "coding");

// Get cheapest model (pricing strings like "$0.85/1M" need the "$" stripped,
// otherwise parseFloat returns NaN)
const cheapest = [...data.models].sort((a, b) =>
  parseFloat(a.pricing.average.replace("$", "")) -
  parseFloat(b.pricing.average.replace("$", ""))
)[0];
```

## JSON Output Format

When using the `--json` flag, Claudish returns:

```json
{
  "result": "AI response text",
  "total_cost_usd": 0.068,
  "usage": {
    "input_tokens": 1234,
    "output_tokens": 5678
  },
  "duration_ms": 12345,
  "num_turns": 3,
  "modelUsage": {
    "x-ai/grok-code-fast-1": {
      "inputTokens": 1234,
      "outputTokens": 5678
    }
  }
}
```

**Extract fields:**
```bash
claudish --json "task" | jq -r '.result'           # Get result text
claudish --json "task" | jq -r '.total_cost_usd'   # Get cost
claudish --json "task" | jq -r '.usage'            # Get token usage
```
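
When a run involves several models, the per-model entries in `modelUsage` can be summed. A minimal sketch, assuming the JSON shape shown above:

```typescript
// Sum input/output tokens across the per-model entries in `modelUsage`.
interface ModelUsage { inputTokens: number; outputTokens: number; }

function totalTokens(modelUsage: Record<string, ModelUsage>): ModelUsage {
  let inputTokens = 0;
  let outputTokens = 0;
  for (const usage of Object.values(modelUsage)) {
    inputTokens += usage.inputTokens;
    outputTokens += usage.outputTokens;
  }
  return { inputTokens, outputTokens };
}
```
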

## Error Handling

### Check Claudish Installation

```typescript
try {
  await Bash("which claudish");
} catch (error) {
  console.error("Claudish not installed. Install with: npm install -g claudish");
  // Use fallback (embedded Claude models)
}
```

### Check API Key

```typescript
const apiKey = process.env.OPENROUTER_API_KEY;
if (!apiKey) {
  console.error("OPENROUTER_API_KEY not set. Get a key at: https://openrouter.ai/keys");
  // Use fallback
}
```

### Handle Model Errors

```typescript
try {
  const result = await Bash("claudish --model x-ai/grok-code-fast-1 'task'");
} catch (error) {
  if (error.message.includes("Model not found")) {
    console.error("Model unavailable. Listing alternatives...");
    await Bash("claudish --list-models");
  } else {
    console.error("Claudish error:", error.message);
  }
}
```

### Graceful Fallback

```typescript
async function runWithClaudishOrFallback(task: string) {
  try {
    // Try Claudish with Grok
    const result = await Bash(`claudish --model x-ai/grok-code-fast-1 "${task}"`);
    return result.stdout;
  } catch (error) {
    console.warn("Claudish unavailable, using embedded Claude");
    // Run with standard Claude Code
    return await runWithEmbeddedClaude(task);
  }
}
```

## Cost Tracking

### View Cost in Status Line

Claudish shows the cost in the Claude Code status line:
```
directory • x-ai/grok-code-fast-1 • $0.12 • 67%
```

### Get Cost from JSON

```bash
COST=$(claudish --json "task" | jq -r '.total_cost_usd')
echo "Task cost: \$${COST}"
```

### Track Cumulative Costs

```typescript
let totalCost = 0;

for (const task of tasks) {
  const result = await Bash(`claudish --json --model grok "${task}"`);
  const data = JSON.parse(result.stdout);
  totalCost += data.total_cost_usd;
}

console.log(`Total cost: $${totalCost.toFixed(4)}`);
```

## Best Practices Summary

### ✅ DO

1. **Use the file-based pattern** for sub-agents to avoid context pollution
2. **Choose the appropriate model** for the task (Grok=speed, GPT-5=reasoning, Qwen=vision)
3. **Use --json output** for automation and parsing
4. **Handle errors gracefully** with fallbacks
5. **Track costs** when running multiple tasks
6. **Update models regularly** with `--force-update`
7. **Use --stdin** for large prompts (git diffs, code review)

### ❌ DON'T

1. **Don't run Claudish directly** in the main conversation (it pollutes context)
2. **Don't ignore model selection** (different models have different strengths)
3. **Don't parse text output** (use --json instead)
4. **Don't hardcode model lists** (query dynamically)
5. **Don't skip error handling** (Claudish might not be installed)
6. **Don't return full output** in sub-agents (summary only)

## Quick Reference Commands

```bash
# Installation
npm install -g claudish

# Get models
claudish --list-models --json

# Run task
claudish --model x-ai/grok-code-fast-1 "your task"

# Large prompt
git diff | claudish --stdin --model google/gemini-2.5-flash "review"

# JSON output
claudish --json --model grok "task" | jq -r '.total_cost_usd'

# Update models
claudish --list-models --force-update

# Get help
claudish --help
```

## Example: Complete Sub-Agent Implementation

```typescript
/**
 * Example: Implement feature with Claudish + Grok
 * Returns summary only, full implementation in file
 */
async function implementFeatureWithGrok(description: string): Promise<string> {
  const timestamp = Date.now();
  const instructionFile = `/tmp/claudish-implement-${timestamp}.md`;
  const resultFile = `/tmp/claudish-result-${timestamp}.md`;

  try {
    // 1. Create instruction
    const instruction = `# Feature Implementation

## Description
${description}

## Requirements
- Clean, maintainable code
- Comprehensive tests
- Error handling
- Documentation

## Output File
${resultFile}

## Format
\`\`\`markdown
## Files Modified
- path/to/file1.ts
- path/to/file2.ts

## Summary
[2-3 sentence summary]

## Tests Added
- test description 1
- test description 2
\`\`\`
`;

    await Write({ file_path: instructionFile, content: instruction });

    // 2. Run Claudish
    await Bash(`claudish --model x-ai/grok-code-fast-1 --stdin < ${instructionFile}`);

    // 3. Read result
    const result = await Read({ file_path: resultFile });

    // 4. Extract summary
    const filesMatch = result.match(/## Files Modified\s*\n(.*?)(?=\n##|$)/s);
    const files = filesMatch ? filesMatch[1].trim().split('\n').length : 0;

    const summaryMatch = result.match(/## Summary\s*\n(.*?)(?=\n##|$)/s);
    const summary = summaryMatch ? summaryMatch[1].trim() : "Implementation completed";

    // 5. Clean up
    await Bash(`rm ${instructionFile} ${resultFile}`);

    // 6. Return concise summary
    return `✅ Feature implemented. Modified ${files} files. ${summary}`;

  } catch (error) {
    // 7. Handle errors
    console.error("Claudish implementation failed:", error.message);

    // Clean up if files exist
    try {
      await Bash(`rm -f ${instructionFile} ${resultFile}`);
    } catch {}

    return `❌ Implementation failed: ${error.message}`;
  }
}
```

## Additional Resources

- **Full Documentation:** `<claudish-install-path>/README.md`
- **Skill Document:** `skills/claudish-usage/SKILL.md` (in repository root)
- **Model Integration:** `skills/claudish-integration/SKILL.md` (in repository root)
- **OpenRouter Docs:** https://openrouter.ai/docs
- **Claudish GitHub:** https://github.com/MadAppGang/claude-code

## Get This Guide

```bash
# Print this guide
claudish --help-ai

# Save to file
claudish --help-ai > claudish-agent-guide.md
```

---

**Version:** 2.2.0
**Last Updated:** January 22, 2026
**Maintained by:** MadAppGang