ralph-cli-sandboxed 0.4.1 → 0.4.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (69)
  1. package/README.md +30 -0
  2. package/dist/commands/action.js +9 -9
  3. package/dist/commands/chat.js +13 -12
  4. package/dist/commands/config.js +2 -1
  5. package/dist/commands/daemon.js +4 -3
  6. package/dist/commands/docker.js +102 -66
  7. package/dist/commands/fix-config.js +2 -1
  8. package/dist/commands/fix-prd.js +2 -2
  9. package/dist/commands/init.js +78 -17
  10. package/dist/commands/listen.js +3 -1
  11. package/dist/commands/notify.js +1 -1
  12. package/dist/commands/once.js +17 -9
  13. package/dist/commands/prd.js +4 -1
  14. package/dist/commands/run.js +40 -25
  15. package/dist/commands/slack.js +2 -2
  16. package/dist/config/responder-presets.json +69 -0
  17. package/dist/index.js +1 -1
  18. package/dist/providers/discord.d.ts +28 -0
  19. package/dist/providers/discord.js +227 -14
  20. package/dist/providers/slack.d.ts +41 -1
  21. package/dist/providers/slack.js +389 -8
  22. package/dist/providers/telegram.d.ts +30 -0
  23. package/dist/providers/telegram.js +185 -5
  24. package/dist/responders/claude-code-responder.d.ts +48 -0
  25. package/dist/responders/claude-code-responder.js +203 -0
  26. package/dist/responders/cli-responder.d.ts +62 -0
  27. package/dist/responders/cli-responder.js +298 -0
  28. package/dist/responders/llm-responder.d.ts +135 -0
  29. package/dist/responders/llm-responder.js +582 -0
  30. package/dist/templates/macos-scripts.js +2 -4
  31. package/dist/templates/prompts.js +4 -2
  32. package/dist/tui/ConfigEditor.js +19 -5
  33. package/dist/tui/components/ArrayEditor.js +1 -1
  34. package/dist/tui/components/EditorPanel.js +10 -6
  35. package/dist/tui/components/HelpPanel.d.ts +1 -1
  36. package/dist/tui/components/HelpPanel.js +1 -1
  37. package/dist/tui/components/JsonSnippetEditor.js +8 -5
  38. package/dist/tui/components/KeyValueEditor.js +54 -9
  39. package/dist/tui/components/LLMProvidersEditor.d.ts +22 -0
  40. package/dist/tui/components/LLMProvidersEditor.js +357 -0
  41. package/dist/tui/components/ObjectEditor.js +1 -1
  42. package/dist/tui/components/Preview.js +1 -1
  43. package/dist/tui/components/RespondersEditor.d.ts +22 -0
  44. package/dist/tui/components/RespondersEditor.js +437 -0
  45. package/dist/tui/components/SectionNav.js +27 -3
  46. package/dist/utils/chat-client.d.ts +4 -0
  47. package/dist/utils/chat-client.js +12 -5
  48. package/dist/utils/config.d.ts +84 -0
  49. package/dist/utils/config.js +78 -1
  50. package/dist/utils/daemon-client.d.ts +21 -0
  51. package/dist/utils/daemon-client.js +28 -1
  52. package/dist/utils/llm-client.d.ts +82 -0
  53. package/dist/utils/llm-client.js +185 -0
  54. package/dist/utils/message-queue.js +6 -6
  55. package/dist/utils/notification.d.ts +6 -1
  56. package/dist/utils/notification.js +103 -2
  57. package/dist/utils/prd-validator.js +60 -19
  58. package/dist/utils/prompt.js +22 -12
  59. package/dist/utils/responder-logger.d.ts +47 -0
  60. package/dist/utils/responder-logger.js +129 -0
  61. package/dist/utils/responder-presets.d.ts +92 -0
  62. package/dist/utils/responder-presets.js +156 -0
  63. package/dist/utils/responder.d.ts +88 -0
  64. package/dist/utils/responder.js +207 -0
  65. package/dist/utils/stream-json.js +6 -6
  66. package/docs/CHAT-RESPONDERS.md +785 -0
  67. package/docs/DEVELOPMENT.md +25 -0
  68. package/docs/chat-architecture.md +251 -0
  69. package/package.json +11 -1
@@ -0,0 +1,785 @@
# Chat Responders

Chat responders allow your Ralph chat bot to intelligently respond to messages using LLMs, Claude Code, or custom CLI commands. Instead of only executing commands, your bot can answer questions about your codebase, review code, or run custom automation scripts.

## Overview

Responders are message handlers that process incoming chat messages based on trigger patterns. When a message matches a responder's trigger, the message content is sent to the configured handler (LLM, Claude Code, or CLI command) and the response is sent back to the chat.

### Responder Types

| Type | Description | Use Case |
|------|-------------|----------|
| `llm` | Send the message to an LLM provider (Anthropic, OpenAI, Ollama) | Q&A, code review, explanations |
| `claude-code` | Run the Claude Code CLI with the message as the prompt | File modifications, complex tasks |
| `cli` | Execute a custom CLI command | Run aider, linters, custom scripts |

### Trigger Patterns

| Pattern | Example | Matches |
|---------|---------|---------|
| `@mention` | `@qa` | Messages starting with the mention, e.g. `@qa what does this function do?` |
| `keyword` | `!lint` | Messages starting with the keyword, e.g. `!lint src/index.ts` |
| (none) | - | Default responder for messages that don't match any trigger |

---

## Quick Start

### 1. Configure LLM Providers

Add your LLM provider credentials to `.ralph/config.json`:

```json
{
  "llmProviders": {
    "anthropic": {
      "type": "anthropic",
      "model": "claude-sonnet-4-20250514"
    },
    "openai": {
      "type": "openai",
      "model": "gpt-4o"
    },
    "ollama": {
      "type": "ollama",
      "model": "llama3",
      "baseUrl": "http://localhost:11434"
    }
  }
}
```

API keys can be set via environment variables:
- `ANTHROPIC_API_KEY` for Anthropic
- `OPENAI_API_KEY` for OpenAI
- Ollama doesn't require an API key

### 2. Configure Responders

Add responders to your chat configuration:

```json
{
  "chat": {
    "enabled": true,
    "provider": "telegram",
    "telegram": {
      "botToken": "your-bot-token",
      "allowedChatIds": ["123456789"]
    },
    "responders": {
      "qa": {
        "type": "llm",
        "trigger": "@qa",
        "provider": "anthropic",
        "systemPrompt": "You are a helpful assistant for the {{project}} project. Answer questions about the codebase."
      },
      "code": {
        "type": "claude-code",
        "trigger": "@code"
      },
      "lint": {
        "type": "cli",
        "trigger": "!lint",
        "command": "npm run lint"
      }
    }
  }
}
```

### 3. Start the Chat Daemon

```bash
ralph chat start
```

Now you can message your bot:
- `@qa What does the config loader do?` - Get an LLM-powered answer
- `@code Add error handling to the login function` - Claude Code modifies files
- `!lint src/` - Run the linter

---

## LLM Provider Setup

### Anthropic (Claude)

The recommended provider for high-quality responses.

```json
{
  "llmProviders": {
    "anthropic": {
      "type": "anthropic",
      "model": "claude-sonnet-4-20250514"
    }
  }
}
```

**Environment variable:** `ANTHROPIC_API_KEY`

**Available models:**
- `claude-sonnet-4-20250514` (recommended - fast and capable)
- `claude-opus-4-20250514` (most capable, slower)

### OpenAI

```json
{
  "llmProviders": {
    "openai": {
      "type": "openai",
      "model": "gpt-4o"
    }
  }
}
```

**Environment variable:** `OPENAI_API_KEY`

**Available models:**
- `gpt-4o` (recommended)
- `gpt-4o-mini` (faster, cheaper)
- `gpt-4-turbo`

### Ollama (Local)

Run models locally without API keys.

```json
{
  "llmProviders": {
    "local": {
      "type": "ollama",
      "model": "llama3",
      "baseUrl": "http://localhost:11434"
    }
  }
}
```

**Setup:**
1. Install Ollama: https://ollama.ai
2. Pull a model: `ollama pull llama3`
3. Start Ollama: `ollama serve`

**Popular models:**
- `llama3` - General purpose
- `codellama` - Code-focused
- `mistral` - Fast and capable

### Custom API Endpoints

For OpenAI-compatible APIs (e.g., Azure OpenAI, local servers):

```json
{
  "llmProviders": {
    "azure": {
      "type": "openai",
      "model": "gpt-4",
      "apiKey": "your-azure-api-key",
      "baseUrl": "https://your-resource.openai.azure.com/openai/deployments/gpt-4"
    }
  }
}
```

---

## Responder Types

### LLM Responder

Send messages to an LLM and return the response.

```json
{
  "qa": {
    "type": "llm",
    "trigger": "@qa",
    "provider": "anthropic",
    "systemPrompt": "You are a Q&A assistant for {{project}}. Answer questions about the codebase.",
    "timeout": 60000,
    "maxLength": 2000
  }
}
```

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `type` | string | Yes | - | Must be `"llm"` |
| `trigger` | string | No | - | Trigger pattern (`@mention` or `keyword`) |
| `provider` | string | No | `"anthropic"` | LLM provider name from the `llmProviders` config |
| `systemPrompt` | string | No | - | System prompt (supports the `{{project}}` placeholder) |
| `timeout` | number | No | `60000` | Timeout in milliseconds |
| `maxLength` | number | No | `2000` | Max response length in characters |

**System Prompt Placeholder:**
- `{{project}}` - Replaced with the project directory name

#### Automatic File Detection

LLM responders automatically detect file paths mentioned in messages and include their contents in the context. This allows you to ask questions about specific files without manually copying code.

**Supported formats:**
- `src/utils/config.ts` - Full file path
- `src/utils/config.ts:42` - File with line number (shows context around that line)
- `./relative/path.js` - Relative paths
- `package.json` - Root-level files
- `Dockerfile`, `Makefile`, `.gitignore`, `.env` - Config files without extensions

**Example:**
```
@qa What does the loadConfig function do in src/utils/config.ts:50?
```

The responder automatically reads the file, extracts ~20 lines around line 50, and includes them in the LLM context.

**Limits:**
- Max 50KB total file content per message
- Max 30KB per individual file
- Files larger than 100KB are skipped
- 50+ file extensions supported (ts, js, py, go, rs, java, etc.)

#### Git Diff Keywords

LLM responders recognize git-related keywords and automatically include the relevant diffs:

| Keyword | Git Command | Description |
|---------|-------------|-------------|
| `diff` / `changes` | `git diff` | Unstaged changes |
| `staged` | `git diff --cached` | Staged changes |
| `last` / `last commit` | `git show HEAD` | Last commit |
| `all` | `git diff HEAD` | All uncommitted changes |
| `HEAD~N` | `git show HEAD~N` | Specific commit (e.g., `HEAD~2`) |

**Examples:**
```
@review diff                  # Review unstaged changes
@review last                  # Review the last commit
@review staged                # Review staged changes
@qa what changed in HEAD~3?   # Ask about a specific commit
```

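The keyword table above maps naturally to a small dispatch function. This sketch mirrors the table only; the function name and matching order are illustrative assumptions, not Ralph's code.

```typescript
// Hypothetical mapping from message keywords to the git commands in the
// table above. HEAD~N is checked first because it is the most specific.
function gitCommandFor(message: string): string | undefined {
  const text = message.toLowerCase();
  const head = text.match(/head~(\d+)/);
  if (head) return `git show HEAD~${head[1]}`;
  if (/\bstaged\b/.test(text)) return "git diff --cached";
  if (/\blast\b/.test(text)) return "git show HEAD";
  if (/\ball\b/.test(text)) return "git diff HEAD";
  if (/\b(diff|changes)\b/.test(text)) return "git diff";
  return undefined; // no git keyword: handle as a normal message
}
```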
#### Multi-Turn Thread Conversations

When using Slack or Discord, responders support multi-turn conversations within threads:

1. Start a conversation with a trigger (e.g., `@review diff`)
2. The response appears in a thread
3. Reply in the thread to continue the conversation
4. The responder maintains context from previous messages (up to 20 messages)

**How it works:**
- Thread replies don't need the trigger prefix
- The same responder handles all messages in a thread
- Conversation history is included in each LLM call
- History is stored in memory (cleared on restart)

**Example thread:**
```
User: @review diff
Bot:  [Reviews the diff, identifies potential issues]

User: Can you explain the change to the config loader?
Bot:  [Explains with full context from previous messages]

User: How would you refactor this?
Bot:  [Suggests refactoring based on the entire conversation]
```

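The in-memory, capped history described above can be sketched as a small per-thread store. The class and type names here are illustrative assumptions, not Ralph's actual implementation.

```typescript
// Hypothetical per-thread conversation store: capped at 20 turns and held
// only in memory, so history is lost when the daemon restarts.
interface ChatTurn {
  role: "user" | "assistant";
  content: string;
}

class ThreadHistory {
  private threads = new Map<string, ChatTurn[]>();
  constructor(private maxTurns = 20) {}

  append(threadId: string, turn: ChatTurn): void {
    const turns = this.threads.get(threadId) ?? [];
    turns.push(turn);
    // Keep only the most recent turns so the LLM context stays bounded.
    this.threads.set(threadId, turns.slice(-this.maxTurns));
  }

  get(threadId: string): ChatTurn[] {
    return this.threads.get(threadId) ?? [];
  }
}
```

Each LLM call would then prepend `get(threadId)` to the prompt before appending the new user message.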
### Claude Code Responder

Run the Claude Code CLI to make file modifications or perform complex coding tasks.

```json
{
  "code": {
    "type": "claude-code",
    "trigger": "@code",
    "timeout": 300000,
    "maxLength": 2000
  }
}
```

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `type` | string | Yes | - | Must be `"claude-code"` |
| `trigger` | string | No | - | Trigger pattern |
| `timeout` | number | No | `300000` | Timeout in milliseconds (5 minutes) |
| `maxLength` | number | No | `2000` | Max response length in characters |

**Note:** Claude Code runs with `--dangerously-skip-permissions` for autonomous operation. Use with caution and only in trusted environments.

### CLI Responder

Execute custom CLI commands with the user's message.

```json
{
  "lint": {
    "type": "cli",
    "trigger": "!lint",
    "command": "npm run lint {{message}}",
    "timeout": 120000,
    "maxLength": 2000
  }
}
```

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `type` | string | Yes | - | Must be `"cli"` |
| `trigger` | string | No | - | Trigger pattern |
| `command` | string | Yes | - | Command to execute |
| `timeout` | number | No | `120000` | Timeout in milliseconds (2 minutes) |
| `maxLength` | number | No | `2000` | Max response length in characters |

**Command Placeholder:**
- `{{message}}` - Replaced with the user's message (properly escaped)
- If no placeholder is present, the message is appended as a quoted argument

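One way the substitution and escaping just described could work is sketched below. The single-quote escaping strategy is an assumption for illustration, not Ralph's actual implementation; always review how untrusted chat input reaches your shell.

```typescript
// Hypothetical {{message}} substitution with POSIX single-quote escaping.
function buildCommand(template: string, message: string): string {
  // Escape embedded single quotes using the standard '\'' idiom.
  const escaped = message.replace(/'/g, `'\\''`);
  if (template.includes("{{message}}")) {
    return template.replace("{{message}}", escaped);
  }
  // No placeholder: append the message as a single quoted argument.
  return `${template} '${escaped}'`;
}
```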
**Examples:**

```json
{
  "aider": {
    "type": "cli",
    "trigger": "@aider",
    "command": "aider --message '{{message}}'",
    "timeout": 300000
  },
  "test": {
    "type": "cli",
    "trigger": "!test",
    "command": "npm test"
  },
  "grep": {
    "type": "cli",
    "trigger": "!grep",
    "command": "grep -rn"
  }
}
```

---

## Trigger Patterns and Matching

### Mention Triggers (`@name`)

Match when the message starts with `@name`:

```
@qa what does this function do?
@review check this code for bugs
@code add logging to the auth module
```

The responder receives the text after the mention as the message.

### Keyword Triggers

Match when the message starts with a keyword:

```
!lint src/
help me understand this error
debug: the login is broken
```

Keywords can be any text that identifies the command.

### Default Responder

Handle messages that don't match any trigger:

```json
{
  "default": {
    "type": "llm",
    "provider": "anthropic",
    "systemPrompt": "You are a helpful assistant."
  }
}
```

Note: a responder acts as the default when it either has no `trigger` field or is named `default`.

### Matching Priority

1. **Mention triggers** (`@name`) - Highest priority
2. **Keyword triggers** - Match by prefix
3. **Default responder** - Fallback

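The matching priority above can be sketched as a single function. The config shapes and function name here are simplified assumptions for illustration, not Ralph's actual resolver.

```typescript
// Hypothetical resolver implementing mention > keyword > default priority.
interface Responder {
  trigger?: string; // "@name" mention or a bare keyword like "!lint"
}

function matchResponder(
  message: string,
  responders: Record<string, Responder>,
): string | undefined {
  const text = message.trimStart().toLowerCase();
  const entries = Object.entries(responders);
  // 1. Mention triggers (@name) take highest priority.
  for (const [name, r] of entries) {
    if (r.trigger?.startsWith("@") && text.startsWith(r.trigger.toLowerCase())) {
      return name;
    }
  }
  // 2. Keyword triggers match by prefix, case-insensitively.
  for (const [name, r] of entries) {
    if (r.trigger && !r.trigger.startsWith("@") && text.startsWith(r.trigger.toLowerCase())) {
      return name;
    }
  }
  // 3. Fall back to a responder with no trigger, or one named "default".
  for (const [name, r] of entries) {
    if (!r.trigger || name === "default") return name;
  }
  return undefined;
}
```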
---

## Preset Configurations

Ralph includes preset responder configurations for common use cases. Select presets during `ralph init` or add them manually.

### Available Presets

| Preset | Trigger | Type | Description |
|--------|---------|------|-------------|
| `qa` | `@qa` | LLM | Q&A about the codebase |
| `reviewer` | `@review` | LLM | Code review feedback |
| `architect` | `@arch` | LLM | Architecture discussions |
| `explain` | `@explain` | LLM | Detailed code explanations |
| `code` | `@code` | Claude Code | File modifications |

### Preset Bundles

| Bundle | Presets | Description |
|--------|---------|-------------|
| `standard` | qa, reviewer, code | Common workflow (recommended) |
| `full` | All presets | Complete feature set |
| `minimal` | qa, code | Just the essentials |

### Using Presets

**During initialization:**
```bash
ralph init
# Answer "Yes" when asked about chat responders
# Select a bundle or individual presets
```

**Manual configuration:**

Copy preset configs from `src/config/responder-presets.json` or use these examples:

```json
{
  "chat": {
    "responders": {
      "qa": {
        "type": "llm",
        "trigger": "@qa",
        "provider": "anthropic",
        "systemPrompt": "You are a knowledgeable Q&A assistant for the {{project}} project. Answer questions accurately and concisely about the codebase, its architecture, and functionality.",
        "timeout": 60000,
        "maxLength": 2000
      },
      "code": {
        "type": "claude-code",
        "trigger": "@code",
        "timeout": 300000,
        "maxLength": 2000
      }
    }
  }
}
```

---

## Example Configurations

### Basic Q&A Bot

Simple setup for answering questions about your project:

```json
{
  "llmProviders": {
    "anthropic": {
      "type": "anthropic",
      "model": "claude-sonnet-4-20250514"
    }
  },
  "chat": {
    "enabled": true,
    "provider": "telegram",
    "telegram": {
      "botToken": "YOUR_BOT_TOKEN",
      "allowedChatIds": ["YOUR_CHAT_ID"]
    },
    "responders": {
      "default": {
        "type": "llm",
        "provider": "anthropic",
        "systemPrompt": "You are a helpful assistant for the {{project}} project. Answer questions about the codebase, explain how things work, and help with development tasks."
      }
    }
  }
}
```

### Multi-Purpose Development Bot

A complete development assistant with multiple responders:

```json
{
  "llmProviders": {
    "anthropic": {
      "type": "anthropic",
      "model": "claude-sonnet-4-20250514"
    },
    "local": {
      "type": "ollama",
      "model": "codellama",
      "baseUrl": "http://localhost:11434"
    }
  },
  "chat": {
    "enabled": true,
    "provider": "slack",
    "slack": {
      "botToken": "xoxb-...",
      "appToken": "xapp-...",
      "signingSecret": "...",
      "allowedChannelIds": ["C0123456789"]
    },
    "responders": {
      "qa": {
        "type": "llm",
        "trigger": "@qa",
        "provider": "anthropic",
        "systemPrompt": "Answer questions about the {{project}} codebase."
      },
      "review": {
        "type": "llm",
        "trigger": "@review",
        "provider": "anthropic",
        "systemPrompt": "Review the provided code for bugs, security issues, and improvements."
      },
      "code": {
        "type": "claude-code",
        "trigger": "@code",
        "timeout": 300000
      },
      "lint": {
        "type": "cli",
        "trigger": "!lint",
        "command": "npm run lint"
      },
      "test": {
        "type": "cli",
        "trigger": "!test",
        "command": "npm test"
      },
      "quick": {
        "type": "llm",
        "trigger": "@quick",
        "provider": "local",
        "systemPrompt": "Give brief, direct answers."
      }
    }
  }
}
```

### Aider Integration

Use Aider as a responder for code modifications:

```json
{
  "chat": {
    "responders": {
      "aider": {
        "type": "cli",
        "trigger": "@aider",
        "command": "aider --yes --message '{{message}}'",
        "timeout": 600000,
        "maxLength": 3000
      }
    }
  }
}
```

---

## Troubleshooting

### API Key and Connection Issues

#### "API key not found"

Set the appropriate environment variable:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
```

Or add the key directly to the config (not recommended for production):

```json
{
  "llmProviders": {
    "anthropic": {
      "type": "anthropic",
      "model": "claude-sonnet-4-20250514",
      "apiKey": "sk-ant-..."
    }
  }
}
```

#### "LLM provider not found"

Make sure the provider name in the responder matches a key in `llmProviders`:

```jsonc
{
  "llmProviders": {
    "my-claude": {               // This name...
      "type": "anthropic",
      "model": "claude-sonnet-4-20250514"
    }
  },
  "chat": {
    "responders": {
      "qa": {
        "type": "llm",
        "provider": "my-claude"  // ...must match here
      }
    }
  }
}
```

#### "Ollama connection failed"

1. Check Ollama is running: `ollama list`
2. Verify the base URL matches your Ollama server
3. Try: `curl http://localhost:11434/api/tags`

### Responder Not Triggering

#### Message doesn't match trigger

- Mention triggers must be at the start: `@qa question` (not `question @qa`)
- Check case sensitivity (triggers are matched case-insensitively)
- Ensure there is no extra whitespace before the trigger

#### Bot not responding at all

1. Check the chat daemon is running: `ralph chat status`
2. Verify responders are configured in config.json
3. Check the chat ID is in `allowedChatIds`

### Response Issues

#### Response is truncated

Increase `maxLength` in the responder config:

```json
{
  "qa": {
    "type": "llm",
    "maxLength": 4000
  }
}
```

Note: Chat platforms have their own limits (Telegram: 4096 chars, Slack: 40000 chars, Discord: 2000 chars).

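In practice the effective limit is the smaller of the responder's `maxLength` and the platform cap just listed. A sketch of that clamping logic, with the limits table mirroring the note above (the function itself is an illustrative assumption, not Ralph's code):

```typescript
// Platform hard limits from the note above, in characters.
const PLATFORM_LIMITS: Record<string, number> = {
  telegram: 4096,
  slack: 40000,
  discord: 2000,
};

// Hypothetical truncation: respect both maxLength and the platform cap.
function truncate(text: string, maxLength: number, platform: string): string {
  const limit = Math.min(maxLength, PLATFORM_LIMITS[platform] ?? maxLength);
  // Reserve one character for the ellipsis marker when cutting.
  return text.length <= limit ? text : text.slice(0, limit - 1) + "…";
}
```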
#### Response times out

Increase `timeout` in the responder config:

```jsonc
{
  "code": {
    "type": "claude-code",
    "timeout": 600000  // 10 minutes
  }
}
```

#### CLI command not working

1. Test the command manually in the terminal
2. Check the command path is correct
3. Ensure required tools are installed in the execution environment
4. Check the `{{message}}` placeholder is properly placed

### Claude Code Issues

#### "Failed to spawn claude"

Ensure Claude Code is installed and in PATH:

```bash
which claude
claude --version
```

#### "Claude Code timed out"

Increase the timeout for complex tasks:

```jsonc
{
  "code": {
    "type": "claude-code",
    "timeout": 600000  // 10 minutes
  }
}
```

---

## Auto-Send Run Results

When a chat provider (Slack, Telegram, or Discord) is configured and enabled, Ralph automatically sends notifications about `ralph run` progress to your chat:

| Event | Message |
|-------|---------|
| Task Complete | "Task completed: [description]" |
| Iteration Complete | "Iteration complete" |
| PRD Complete | "All PRD tasks complete!" |
| Run Stopped | "Run stopped: [reason]" |
| Error | "Error: [message]" |

**Requirements:**
- A chat provider must be configured in `.ralph/config.json`
- `chat.enabled` must be `true`
- The bot must have permission to send messages to the configured channel/chat

**Example config:**
```json
{
  "chat": {
    "enabled": true,
    "provider": "slack",
    "slack": {
      "botToken": "xoxb-...",
      "appToken": "xapp-...",
      "allowedChannelIds": ["C0123456789"]
    }
  }
}
```

With this configuration, running `ralph run` or `ralph docker run` will automatically post updates to your Slack channel.

---

## Security Considerations

1. **Restrict chat access** - Always use `allowedChatIds`/`allowedChannelIds`
2. **Be cautious with `claude-code`** - It can modify files autonomously
3. **Validate CLI commands** - Don't expose dangerous system commands
4. **Protect API keys** - Use environment variables, not hardcoded values
5. **Run in containers** - Use Ralph's Docker sandbox for isolation

---

## Related Documentation

- [Chat Clients Setup](./CHAT-CLIENTS.md) - Setting up Telegram, Slack, and Discord
- [Chat Architecture](./chat-architecture.md) - Technical architecture diagrams and message flow
- [Docker Sandbox](./DOCKER.md) - Running Ralph in containers
- [Daemon Actions](./USEFUL_ACTIONS.md) - Host daemon configuration