thepopebot 1.2.76-beta.16 → 1.2.76-beta.17
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/lib/CLAUDE.md +2 -2
- package/lib/ai/CLAUDE.md +54 -56
- package/lib/ai/index.js +282 -430
- package/lib/ai/line-mappers.js +42 -24
- package/lib/ai/sdk-adapters/CLAUDE.md +4 -3
- package/lib/ai/workspace-setup.js +3 -2
- package/lib/chat/actions.js +56 -0
- package/lib/chat/api.js +34 -17
- package/lib/chat/components/CLAUDE.md +5 -1
- package/lib/chat/components/chat-input.js +2 -17
- package/lib/chat/components/chat-input.jsx +3 -17
- package/lib/chat/components/chat-page.js +2 -0
- package/lib/chat/components/chat-page.jsx +3 -0
- package/lib/chat/components/chat.js +16 -7
- package/lib/chat/components/chat.jsx +19 -6
- package/lib/chat/components/code-mode-toggle.js +33 -10
- package/lib/chat/components/code-mode-toggle.jsx +27 -9
- package/lib/chat/components/crons-page.js +17 -3
- package/lib/chat/components/crons-page.jsx +34 -6
- package/lib/chat/components/message.js +15 -0
- package/lib/chat/components/message.jsx +15 -0
- package/lib/chat/components/settings-coding-agents-page.js +109 -15
- package/lib/chat/components/settings-coding-agents-page.jsx +85 -1
- package/lib/chat/components/triggers-page.js +17 -3
- package/lib/chat/components/triggers-page.jsx +34 -6
- package/lib/code/terminal-view.js +21 -1
- package/lib/code/terminal-view.jsx +16 -1
- package/lib/config.js +8 -2
- package/lib/db/index.js +12 -0
- package/lib/tools/github.js +18 -0
- package/package.json +1 -3
- package/setup/lib/targets.mjs +1 -1
- package/templates/agent-job/CLAUDE.md.template +2 -1
- package/templates/agent-job/CRONS.json +16 -0
- package/templates/event-handler/TRIGGERS.json +18 -2
- package/lib/ai/agent.js +0 -65
- package/lib/ai/async-channel.js +0 -51
- package/lib/ai/tools.js +0 -184
package/lib/CLAUDE.md
CHANGED
@@ -18,10 +18,10 @@ If the task needs to *think*, use `agent`. If it just needs to *do*, use `command`
 
 ## Cron Jobs
 
-Defined in `agent-job/CRONS.json`, loaded by `lib/cron.js` at startup via `node-cron`. Each entry has `name`, `schedule` (cron expression), `type` (`agent`/`command`/`webhook`), and the corresponding action fields (`job`, `command`, or `url`/`method`/`headers`/`vars`). Set `enabled: false` to disable. Agent-type entries support optional `
+Defined in `agent-job/CRONS.json`, loaded by `lib/cron.js` at startup via `node-cron`. Each entry has `name`, `schedule` (cron expression), `type` (`agent`/`command`/`webhook`), and the corresponding action fields (`job`, `command`, or `url`/`method`/`headers`/`vars`). Set `enabled: false` to disable. Agent-type entries support optional `agent_backend`, `llm_model`, and `scope` fields. `agent_backend` picks which coding agent runs the job (e.g. `claude-code`, `codex-cli`); `llm_model` overrides the model within that agent. `scope` sets the agent's working directory to a subdirectory (e.g., `"scope": "agents/gary-vee"`) — the system prompt and skills resolve from that scope.
 
 ## Webhook Triggers
 
-Defined in `event-handler/TRIGGERS.json`, loaded by `lib/triggers.js`. Each trigger watches an endpoint path (`watch_path`) and fires an array of actions (fire-and-forget, after auth, before route handler). Actions use the same `type`/`job`/`command`/`url` fields as cron jobs, including optional `
+Defined in `event-handler/TRIGGERS.json`, loaded by `lib/triggers.js`. Each trigger watches an endpoint path (`watch_path`) and fires an array of actions (fire-and-forget, after auth, before route handler). Actions use the same `type`/`job`/`command`/`url` fields as cron jobs, including optional `agent_backend`/`llm_model`/`scope` overrides.
 
 Template tokens in `job` and `command` strings: `{{body}}`, `{{body.field}}`, `{{query}}`, `{{query.field}}`, `{{headers}}`, `{{headers.field}}`.
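The entry shape above can be illustrated with a short sketch. The field names (`name`, `schedule`, `type`, `job`, `enabled`, plus the new optional `agent_backend`/`llm_model`/`scope`) come from the diff; the entry values and the `renderTokens` helper are invented for illustration and are not the package's implementation:

```javascript
// Illustrative CRONS.json-style agent entry. Values are made up; the field
// names follow the diff above (agent_backend, llm_model, scope are the new
// optional agent-type fields).
const cronEntry = {
  name: "nightly-digest",
  schedule: "0 3 * * *",        // node-cron expression
  type: "agent",
  job: "Summarize activity in {{body.channel}}",
  enabled: true,
  agent_backend: "codex-cli",   // which coding agent runs the job
  llm_model: "some-model-id",   // hypothetical model override
  scope: "agents/gary-vee"      // working-dir subdirectory; prompt/skills resolve here
};

// Hypothetical expansion of {{body.field}}-style template tokens — a sketch,
// not lib/triggers.js's real renderer.
function renderTokens(template, ctx) {
  return template.replace(/\{\{(\w+)(?:\.(\w+))?\}\}/g, (match, root, field) => {
    const val = field ? (ctx[root] || {})[field] : ctx[root];
    if (val === undefined) return match;              // leave unknown tokens intact
    return typeof val === "string" ? val : JSON.stringify(val);
  });
}
```

For example, `renderTokens(cronEntry.job, { body: { channel: "general" } })` expands the `{{body.channel}}` token to `"Summarize activity in general"`.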
package/lib/ai/CLAUDE.md
CHANGED
@@ -1,44 +1,60 @@
 # lib/ai/ — LLM Integration
 
-##
+## Architecture
 
-
+Every chat message flows through `chatStream()` in `index.js`. After workspace setup, it forks on whether a registered SDK adapter exists for the active coding agent:
 
-**
--
-- Tools: `agent_job`, `coding_agent`
-- Call `resetAgentChats()` to clear both singletons (required if hot-reloading)
+- **SDK path** (`streamViaSdk`) — in-process `@anthropic-ai/claude-agent-sdk` via `sdk-adapters/claude-code.js`. Used only when `CODING_AGENT=claude-code`.
+- **Direct path** (`streamViaContainer`) — spawns the configured coding agent in an ephemeral headless Docker container via `runHeadlessContainer()`. Streams output through `parseHeadlessStream()`. Used for every agent without an SDK adapter (pi, codex, gemini, opencode, kimi).
 
-
-- System prompt: `event-handler/code-chat/SYSTEM.md` (rendered fresh each invocation)
-- Tools: `coding_agent` (reads repo/branch/workspace from `runtime.configurable`)
+Both paths yield the same normalized chunk shape and use the same DB persistence pattern. There is no LangGraph React agent and no intermediate LLM between the user's message and the agent.
 
-##
+## Multi-Turn Memory
 
-
-
-
+Neither path persists conversation context at the LangChain/LangGraph layer — that layer is gone. Memory lives where the coding agent naturally keeps it:
+
+- **SDK path** — session ID captured from the SDK's `meta` chunk and written via `session-manager.js` (`{workspaceBaseDir}/.claude-ttyd-sessions/7681`). Passed back into the SDK on the next turn.
+- **Direct path** — `runHeadlessContainer()` passes `CONTINUE_SESSION=1` into the container. Each agent's `run.sh` reads its own port-keyed session file and resumes natively (see `docker/coding-agent/CLAUDE.md` § Session Tracking).
 
 ## Chat Modes
 
-
+`chats.chatMode` is either `'agent'` or `'code'`:
+
+- **Agent mode** (`chatMode: 'agent'`) — repo/branch defaulted from `GH_OWNER`/`GH_REPO`, `main` branch, agent job secrets injected, system prompt built from `event-handler/agent-chat/SYSTEM.md` with scope-resolved skills.
+- **Code mode** (`chatMode: 'code'`) — user-selected repo/branch, no secret injection, system prompt from `event-handler/code-chat/SYSTEM.md`.
+
+Per-chat sub-mode via `codeModeType`:
+- **plan** — `PERMISSION=plan` (read-only).
+- **code** — `PERMISSION=code` (write/dangerous).
+
+The "job" sub-mode is no longer wired — a skill will replace autonomous job dispatch.
+
+## Chunk Shape
+
+`chatStream()` yields normalized chunks consumed by `lib/chat/api.js`:
+
+- `{ type: 'text', text }`
+- `{ type: 'tool-call', toolCallId, toolName, args }`
+- `{ type: 'tool-result', toolCallId, result }`
+- `{ type: 'error', message }` — surfaced to the UI as a red message and persisted for refresh
+- `{ type: 'meta', ... }`, `{ type: 'result', ... }` — internal, not emitted to client
+- `{ type: 'thinking-start' | 'thinking' | 'thinking-end' }` — SDK path only
+
+## Workspace Setup
 
-
-- **plan** — `coding_agent` runs in read-only permission mode
-- **code** — `coding_agent` runs in write (dangerous) permission mode
-- **job** — `agent_job` dispatches autonomous Docker container task
+`ensureWorkspaceRepo()` (workspace-setup.js) is called before either path runs. It clones the repo, sets git identity, and checks out/creates the feature branch on the host — agent-agnostic. The container's `2_clone.sh` is a no-op when `.git` already exists.
 
-
+On the first message in a new chat, `chatStream` yields a visible `tool-call`/`tool-result` pair with `toolName: 'workspace'` so the setup appears in the UI.
 
-
+## Utility LLM Calls
 
-
+`createModel()` in `model.js` remains LangChain-based for two utility calls: `autoTitle()` (2-5 word chat title on first message) and `summarizeAgentJob()` (webhook-triggered PR merge summary). These use `LLM_PROVIDER` + `LLM_MODEL` configured via `/admin/event-handler/chat`.
 
-`createModel()`
+Phase 2 will replace `createModel()` with a tiny fetch-based multi-provider client (or route utility calls through the active coding agent's credentials) and drop the remaining `@langchain/*` dependencies.
 
 ### LLM Providers
 
-Source of truth: `lib/llm-providers.js` (`BUILTIN_PROVIDERS`).
+Source of truth: `lib/llm-providers.js` (`BUILTIN_PROVIDERS`).
 
 | Provider | `LLM_PROVIDER` | Default Model | Required Key |
 |----------|----------------|---------------|-------------|
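The chunk contract documented in the hunk above can be exercised with a small consumer. The chunk shapes are taken from the diff; `foldChunks` itself is an illustrative sketch, not code from `lib/chat/api.js`:

```javascript
// Folds a stream of normalized chunks (shapes per the diff above) into a
// simple transcript. Internal 'meta'/'result' chunks and SDK-only
// 'thinking-*' chunks are ignored, mirroring what reaches the client UI.
function foldChunks(chunks) {
  const out = { text: "", toolCalls: [], errors: [] };
  for (const chunk of chunks) {
    switch (chunk.type) {
      case "text":
        out.text += chunk.text;
        break;
      case "tool-call":
        out.toolCalls.push({ id: chunk.toolCallId, name: chunk.toolName, args: chunk.args });
        break;
      case "tool-result": {
        // attach the result to its originating call by toolCallId
        const call = out.toolCalls.find((t) => t.id === chunk.toolCallId);
        if (call) call.result = chunk.result;
        break;
      }
      case "error":
        out.errors.push(chunk.message);
        break;
      default:
        break; // 'meta', 'result', 'thinking-*' are not rendered
    }
  }
  return out;
}
```

The `toolCallId` pairing is what lets the UI render a `workspace` tool-call and its result as one unit on the first message of a chat.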
@@ -52,49 +68,31 @@ Source of truth: `lib/llm-providers.js` (`BUILTIN_PROVIDERS`). Each provider dec
 | Kimi | `kimi` | `kimi-k2.5` | `MOONSHOT_API_KEY` |
 | OpenRouter | `openrouter` | (user-specified) | `OPENROUTER_API_KEY` |
 
-All credentials are stored in the settings DB (encrypted)
+All credentials are stored in the settings DB (encrypted). `LLM_MAX_TOKENS` defaults to 4096.
 
-**Custom providers**:
+**Custom providers**: users can add OpenAI-compatible providers via `/admin/event-handler/llms`. Stored as `type: 'llm_provider'` in the settings table. Resolved in `model.js` via `getCustomProvider()`.
 
-`
-
-> **Google model compatibility note:** `gemini-2.5-pro` and `gemini-3.*` models require `thought_signature` round-tripping that `@langchain/google-genai` doesn't support. Auto-falls back to `gemini-2.5-flash` with a warning (issue #201).
-
-## Chat Streaming
-
-`chatStream()` in `index.js` yields chunks: `{ type: 'text', content }`, `{ type: 'tool-call', name, args }`, `{ type: 'tool-result', name, result }`. Called by `lib/chat/api.js` (the `/stream/chat` endpoint).
+> **Google model compatibility note:** `gemini-2.5-pro` and `gemini-3.*` require `thought_signature` round-tripping that `@langchain/google-genai` doesn't support. Auto-falls back to `gemini-2.5-flash` (issue #201).
 
 ## Headless Stream Parser (headless-stream.js)
 
-Three-layer parser
+Three-layer parser consumed by the direct path:
 
-1. **Docker frame decoder** —
-2. **NDJSON splitter** —
-3. **Event mapper** (`mapLine()`) —
+1. **Docker frame decoder** — parses 8-byte multiplexed stream headers (type + size), extracts stdout frames, discards stderr.
+2. **NDJSON splitter** — accumulates decoded UTF-8 and splits on newlines.
+3. **Event mapper** (`mapLine()`) — converts each line to chat events:
 - `assistant` messages: `text` blocks → `{ type: 'text' }`, `tool_use` blocks → `{ type: 'tool-call' }`
-- `user` messages: `tool_result` blocks → `{ type: 'tool-result' }` (priority: stdout > string
-- `result` messages: → `{ type: 'text' }` (final summary
+- `user` messages: `tool_result` blocks → `{ type: 'tool-result' }` (priority: stdout > string > array)
+- `result` messages: → `{ type: 'text' }` (final summary)
 - Non-JSON lines (e.g. `NO_CHANGES`, `AGENT_FAILED`): wrapped as plain text events
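The first layer of the parser described above can be sketched. The 8-byte header layout (stream-type byte, 4-byte big-endian payload length) is Docker's standard multiplexed-stream framing; this standalone decoder is illustrative, not the package's actual parser:

```javascript
// Decodes Docker's multiplexed stream framing: each frame is an 8-byte
// header (byte 0 = stream type: 1 stdout, 2 stderr; bytes 4-7 = big-endian
// payload length) followed by the payload. Keeps stdout, discards stderr.
function decodeDockerFrames(buf) {
  const stdoutParts = [];
  let off = 0;
  while (off + 8 <= buf.length) {
    const streamType = buf[off];
    const size = buf.readUInt32BE(off + 4);
    if (off + 8 + size > buf.length) break; // partial frame: wait for more bytes
    if (streamType === 1) stdoutParts.push(buf.subarray(off + 8, off + 8 + size));
    off += 8 + size;
  }
  return Buffer.concat(stdoutParts).toString("utf8");
}
```

The decoded stdout text would then feed the NDJSON splitter, which hands complete lines to `mapLine()`.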
 
-`
-
-### Tool Return Format
-
-The `coding_agent` tool (in `tools.js`) returns the **full container session** as a flat JSON array. This becomes the ToolMessage in LangGraph's checkpoint, giving the LLM complete context on the current turn. The array contains:
-
-- `{ type: 'meta', codingAgent, backendApi }` — first event, agent identity
-- `{ type: 'text', text }` — agent text output
-- `{ type: 'tool-call', toolCallId, toolName, args }` — agent tool invocations
-- `{ type: 'tool-result', toolCallId, result }` — tool execution results
-- `{ type: 'exit', exitCode }` — last event, container exit status
-
-On error before streaming starts: `[{ type: 'error', message }]`.
+`mapLine()` is also reused by `lib/cluster/stream.js` for worker log parsing.
 
 ### Adding a New Agent Mapper (line-mappers.js)
 
-Each coding agent CLI has its own mapper
+Each coding agent CLI has its own mapper (`mapClaudeCodeLine`, `mapPiLine`, `mapGeminiLine`, `mapCodexLine`, `mapOpenCodeLine`, `mapKimiLine`). To add one:
 
-1. Create `mapXxxLine(parsed)` in `line-mappers.js` that returns an array of `{ type, ... }` events
-2. Register it in `headless-stream.js`:
-3. Map the agent's JSON output to
-4. Return `[{ type: 'skip' }]` for noise events
+1. Create `mapXxxLine(parsed)` in `line-mappers.js` that returns an array of `{ type, ... }` events.
+2. Register it in `headless-stream.js`: imports, re-exports, and the `mapperMap` object.
+3. Map the agent's JSON output to the chunk shape above.
+4. Return `[{ type: 'skip' }]` for noise events to suppress them without triggering the unknown fallback.