agentgui 1.0.855 → 1.0.857

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,3 +1,9 @@
+ ## [Unreleased] - merge: integrate remote UI redesign + cleanup CLAUDE.md
+
+ - Merge origin/main: resolve UU conflicts in server.js (take remote _jsonlWatcher setter) and static/index.html (take remote UI redesign with overflow menu + SVG icons)
+ - Accept AA (both-added) files from new modules: lib/jsonl-parser.js, lib/jsonl-watcher.js, lib/process-message.js, lib/server-startup.js, lib/stream-event-handler.js, static/css/main.css, static/js/client.js, static/js/conversations.js
+ - docs: cleanup CLAUDE.md — trim verbosity (499→368L, -26%): consolidate REST API (42→9L), tighten XState docs (21→4L), shrink WebSocket wire format (29→2L), compress message flow (15→1L), tighten tool detection (21→6L), merge voice model + debug sections. All key patterns preserved.
+
  ## [Unreleased] - refactor: extract routes registry + wire tool/debug routes

  - Extract all route and WS handler registrations from server.js L201-270 to lib/routes-registry.js (63L, createRegistry factory)
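The `createRegistry` factory named in the changelog entry above could take roughly this shape (a minimal sketch; all names besides `createRegistry` are assumptions, not the actual lib/routes-registry.js API):

```javascript
// Hypothetical sketch of a routes-registry factory: HTTP route and WS RPC
// handler registrations collect into maps, then wire onto the server in one
// pass, so server.js no longer inlines ~70 lines of registrations.
function createRegistry() {
  const routes = new Map();     // "METHOD /path" -> handler
  const wsHandlers = new Map(); // rpc method name -> handler
  return {
    route(method, path, handler) {
      routes.set(`${method.toUpperCase()} ${path}`, handler);
      return this; // chainable, so registrations read as a flat list
    },
    rpc(name, handler) {
      wsHandlers.set(name, handler);
      return this;
    },
    // Apply all HTTP registrations to an Express-like app and hand the
    // WS handler table back to the caller for the socket layer.
    wire(app) {
      for (const [key, handler] of routes) {
        const [method, path] = key.split(' ');
        app[method.toLowerCase()](path, handler);
      }
      return wsHandlers;
    },
  };
}
```

The factory pattern keeps server.js down to a single `wire(app)` call while the registry module owns the full route list.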
package/CLAUDE.md CHANGED
@@ -156,24 +156,11 @@ static/vendor/ Third-party assets (highlight.js, Prism, Ripple

  ## XState State Machines

- XState v5 machines are authoritative for their respective state domains. Ad-hoc Maps/Sets/booleans they replaced have been deleted.
+ XState v5 machines own their domains exclusively. No ad-hoc Maps/Sets parallel to machines.

- **Server machines** (ESM, `lib/`):
- - `execution-machine.js`: One actor per conversation. States: idle → streaming → draining (queue drain) → streaming → idle. Also rate_limited state. `execMachine.send(convId, event)` API. `conv.get` and `conv.full` WS responses include `executionState` field.
- - `acp-server-machine.js`: One actor per ACP tool (opencode/kilo/codex). States: stopped → starting → running ↔ crashed → restarting. Used by `acp-sdk-manager.js` to track health and drive restart backoff.
- - `tool-install-machine.js`: One actor per tool ID. States: unchecked → checking → idle/installed/needs_update/installing/updating/failed. Replaces `installLocks` Map in `tool-spawner.js`. Events: CHECK_START, IDLE, INSTALLED, NEEDS_UPDATE, INSTALL_START, INSTALL_COMPLETE, UPDATE_START, UPDATE_COMPLETE, FAILED. API: `getOrCreate(toolId)`, `send(toolId, event)`, `isLocked(toolId)`, `getMachineActors()`. Context: version, error, installedAt, lastCheckedAt. `GET /api/debug/machines` returns all snapshots when `DEBUG=1`.
+ **Server** (lib/): `execution-machine` (per conversation: idle/streaming/draining/rate_limited), `acp-server-machine` (per tool: stopped/starting/running/crashed/restarting), `tool-install-machine` (per tool: unchecked→checking→idle/installed/needs_update/installing/updating/failed). API: `send(id, event)`, `isLocked()`, snapshots at `GET /api/debug/machines` when DEBUG=1.

- **Client machines** (browser UMD, `static/js/`):
- - `ws-machine.js`: Wraps WebSocketManager. States: disconnected/connecting/connected/reconnecting. Actor accessible as `wsManager._wsActor`. State readable via `wsManager.connectionState`.
- - `conv-machine.js`: One actor per conversation. States: idle/streaming/queued. API exposed as `window.convMachineAPI`. All actors in `window.__convMachines` Map for debug.
- - `tool-install-machine.js`: One actor per tool ID. States: idle/installing/installed/updating/needs_update/failed. Replaces `operationInProgress` Set in `tools-manager.js`. Context: version, error, progress, installedVersion, publishedVersion. API: `window.toolInstallMachineAPI`. Actors in `window.__toolInstallMachines`.
- - `voice-machine.js`: Single actor for TTS playback. States: idle/queued/speaking/disabled. Replaces isSpeaking, isPlayingChunk, ttsDisabledUntilReset booleans and ttsConsecutiveFailures counter in `voice.js`. Circuit-breaker trips at 3 consecutive failures (disabled state, RESET to recover). API: `window.voiceMachineAPI`. Actor at `window.__voiceMachine`.
- - `conv-list-machine.js`: Single actor for conversation list. States: unloaded/loading/loaded/error. Context: conversations[], activeId, streamingIds[], version, lastPollAt. Replaces `_conversationVersion`, `_lastMutationSource`, `streamingConversations` Set in `ConversationManager`. All list mutations go through machine events. API: `window.convListMachineAPI`. Actor at `window.__convListMachine`.
- - `prompt-machine.js`: Single actor for prompt area. States: ready/loading/streaming/queued/disabled. Replaces dead `_promptState` string and `_promptStateTransitions` object in `client.js`. Driven by `enableControls()`, `disableControls()`, `handleStreamingStart()`, `handleStreamingComplete()`, `handleStreamingError()`. API: `window.promptMachineAPI`. Actor at `window.__promptMachine`.
-
- **XState browser loading**: UMD bundle at `static/lib/xstate.umd.min.js` (copied from `node_modules/xstate/dist/xstate.umd.min.js` during npm install). Loaded as `defer` script. Load order: xstate.umd.min.js → ws-machine.js → conv-machine.js → tool-install-machine.js → voice-machine.js → conv-list-machine.js → prompt-machine.js → all other app scripts. Exposes `window.XState` global.
-
- **Authoritative pattern**: Each machine owns its domain exclusively. No parallel ad-hoc state alongside machines. `window.__*` globals expose all client actors for debug inspection.
+ **Client** (static/js/, UMD): `ws-machine` (disconnected/connecting/connected/reconnecting), `conv-machine` (per conv: idle/streaming/queued), `tool-install-machine` (per tool), `voice-machine` (single: idle/queued/speaking/disabled circuit-breaker), `conv-list-machine` (single: unloaded/loading/loaded/error), `prompt-machine` (single: ready/loading/streaming/queued/disabled). Load order: xstate.umd.min.js → ws-machine → conv-machine → tool-install-machine → voice-machine → conv-list-machine → prompt-machine. Exposed at `window.__*` globals for debug.

  ## Key Details

@@ -207,45 +194,21 @@ Managed by `lib/acp-sdk-manager.js`. Features: crash restart with exponential ba

  ## REST API

- All routes are prefixed with `BASE_URL` (default `/gm`).
-
- - `GET /api/conversations` - List conversations
- - `POST /api/conversations` - Create conversation (body: agentId, title, workingDirectory)
- - `GET /api/conversations/:id` - Get conversation with streaming status
- - `POST /api/conversations/:id` - Update conversation
- - `DELETE /api/conversations/:id` - Delete conversation
- - `POST /api/conversations/:id/archive` - Archive conversation (soft-delete)
- - `POST /api/conversations/:id/restore` - Restore archived conversation
- - `GET /api/conversations/archived` - List archived conversations
- - `GET /api/conversations/:id/messages` - Get messages (query: limit, offset)
- - `POST /api/conversations/:id/messages` - Send message (body: content, agentId)
- - `POST /api/conversations/:id/stream` - Start streaming execution
- - `GET /api/conversations/:id/full` - Full conversation load with chunks
- - `GET /api/conversations/:id/chunks` - Get stream chunks (query: since)
- - `GET /api/conversations/:id/sessions/latest` - Get latest session
- - `GET /api/sessions/:id` - Get session
- - `GET /api/sessions/:id/chunks` - Get session chunks (query: since)
- - `GET /api/sessions/:id/execution` - Get execution events (query: limit, offset, filterType)
- - `GET /api/agents` - List discovered agents
- - `GET /api/acp/status` - ACP tool lifecycle status (ports, health, PIDs, restart counts)
- - `GET /api/health` - Server health check (version, uptime, agents, wsClients, memory, acp status)
- - `GET /api/home` - Get home directory
- - `POST /api/stt` - Speech-to-text (raw audio body)
- - `POST /api/tts` - Text-to-speech (body: text)
- - `GET /api/speech-status` - Speech model loading status
- - `POST /api/folders` - Create folder
- - `GET /api/tools` - List detected tools with installation status (via WebSocket tools.list handler)
- - `GET /api/tools/:id/status` - Get tool installation status (version, installed_at, error_message)
- - `POST /api/tools/:id/install` - Start tool installation (returns `{ success: true }` with background async install)
- - `POST /api/tools/:id/update` - Start tool update (body: targetVersion)
- - `GET /api/tools/:id/history` - Get tool install/update history (query: limit, offset)
- - `POST /api/tools/update` - Batch update all tools with available updates
- - `POST /api/tools/refresh-all` - Refresh all tool statuses from package manager
- - `POST /api/codex-oauth/start` - Start Codex CLI OAuth flow (returns `{ authUrl, mode }`)
- - `GET /api/codex-oauth/status` - Get current Codex OAuth state `{ status, email, error }`
- - `POST /api/codex-oauth/relay` - Relay OAuth code+state from remote browser (body: `{ code, state }`)
- - `POST /api/codex-oauth/complete` - Complete OAuth by pasting redirect URL (body: `{ url }`)
- - `GET /codex-oauth2callback` - OAuth callback endpoint (redirect_uri for local flows)
+ All routes prefixed with `BASE_URL` (default `/gm`). Key endpoints:
+
+ **Conversations**: `GET /api/conversations`, `POST /api/conversations`, `GET/POST/DELETE /api/conversations/:id`, `POST /api/conversations/:id/archive`, `POST /api/conversations/:id/restore`, `GET /api/conversations/:id/messages`, `POST /api/conversations/:id/messages`, `POST /api/conversations/:id/stream`, `GET /api/conversations/:id/full`, `GET /api/conversations/:id/chunks`, `GET /api/conversations/:id/sessions/latest`
+
+ **Sessions**: `GET /api/sessions/:id`, `GET /api/sessions/:id/chunks`, `GET /api/sessions/:id/execution`
+
+ **Agents & ACP**: `GET /api/agents`, `GET /api/acp/status`, `GET /api/health`
+
+ **Speech**: `POST /api/stt`, `POST /api/tts`, `GET /api/speech-status`
+
+ **Tools**: `GET /api/tools`, `GET/POST /api/tools/:id/install`, `POST /api/tools/:id/update`, `GET /api/tools/:id/history`, `POST /api/tools/update`, `POST /api/tools/refresh-all`
+
+ **OAuth**: `POST /api/codex-oauth/start`, `GET /api/codex-oauth/status`, `POST /api/codex-oauth/relay`, `POST /api/codex-oauth/complete`, `GET /codex-oauth2callback`
+
+ **Utility**: `POST /api/folders`, `GET /api/home`

  ## Tool Update System

@@ -273,24 +236,13 @@ Tool updates are managed through a complete pipeline:

  ## Tool Detection System

- TOOLS array in `lib/tool-manager.js` two categories:
- - **`cli`**: `{ id, name, pkg, category: 'cli' }` — detected via `which <bin>` + `<bin> --version`
- - **`plugin`**: `{ id, name, pkg, installPkg, pluginId, category: 'plugin', frameWork }` — detected via plugin.json files
-
- Current tools:
- - `cli-claude`: bin=`claude`, pkg=`@anthropic-ai/claude-code`
- - `cli-opencode`: bin=`opencode`, pkg=`opencode-ai`
- - `cli-gemini`: bin=`gemini`, pkg=`@google/gemini-cli`
- - `cli-kilo`: bin=`kilo`, pkg=`@kilocode/cli`
- - `cli-codex`: bin=`codex`, pkg=`@openai/codex`
- - `cli-agent-browser`: bin=`agent-browser`, pkg=`agent-browser` — uses `-V` flag (not `--version`) for version detection
- - `gm-cc`, `gm-oc`, `gm-gc`, `gm-kilo`, `gm-codex`: plugin tools
+ **TOOLS** array in `lib/tool-manager.js`: cli (via which + --version) or plugin (via plugin.json). Current: claude, opencode, gemini, kilo, codex, agent-browser (uses `-V`, not `--version`), + plugin tools (gm-cc, gm-oc, gm-gc, gm-kilo, gm-codex).

- **BIN_MAP gotcha:** `lib/tool-version-check.js` has a single `BIN_MAP` constant shared by `checkCliInstalled()` and `getCliVersion()`. Any new CLI tool must be added there. `agent-browser` uses `-V` (not `--version`) — a `versionFlag` override handles this.
+ **BIN_MAP**: Single constant in `lib/tool-version-check.js` shared by detect + version functions; new CLI tools must be added.

- **Framework paths:** `lib/tool-version-check.js` uses a `FRAMEWORK_PATHS` data table instead of per-framework if/else chains. Each framework entry defines pluginDir, versionFile, parseVersion, and optional markerFile/fallbackInstalled. Adding a new framework means adding one entry to this table.
+ **FRAMEWORK_PATHS**: Data table (pluginDir/versionFile/parseVersion/optional markerFile). New framework = one table entry.

- **Background provisioning:** `autoProvision()` runs at startup, checks/installs missing tools (~10s). `startPeriodicUpdateCheck()` runs every 6 hours in background to check for updates. Both broadcast tool status via WebSocket so UI stays in sync.
+ **Provisioning**: `autoProvision()` at startup (~10s), `startPeriodicUpdateCheck()` every 6h. Both broadcast tool status via WS.

  ### Tool Installation and Update UI Flow

@@ -302,49 +254,11 @@ When user clicks Install/Update button on a tool:

  ## WebSocket Protocol

- Endpoint: `BASE_URL + /sync`
-
- **Wire format (msgpack binary):**
- - Client RPC request: `{ r: requestId, m: method, p: params }`
- - Server RPC reply: `{ r: requestId, d: data }` or `{ r: requestId, e: { c: code, m: message } }`
- - Server push/broadcast: `{ type, seq, ...data }` or array of these when batched
-
- **Legacy control messages** (bypass RPC router, handled in `onLegacy`): `subscribe`, `unsubscribe`, `ping`, `latency_report`, `terminal_*`, `pm2_*`, `set_voice`, `get_subscriptions`
-
- Client sends:
- - `{ type: "subscribe", sessionId }` or `{ type: "subscribe", conversationId }`
- - `{ type: "unsubscribe", sessionId }`
- - `{ type: "ping" }`
-
- Server broadcasts:
- - `streaming_start` - Agent execution started (high priority, flushes immediately)
- - `streaming_progress` - New event/chunk from agent (normal priority, batched)
- - `streaming_complete` - Execution finished (high priority)
- - `streaming_error` - Execution failed (high priority)
- - `message_created` - New message (high priority, flushes immediately)
- - `conversation_created`, `conversation_updated`, `conversation_deleted`
- - `all_conversations_deleted` - Must be in BROADCAST_TYPES set
- - `model_download_progress` - Voice model download progress
- - `voice_list` - Available TTS voices
-
- **WSOptimizer** (`lib/ws-optimizer.js`): Per-client priority queue. High-priority events flush immediately; normal/low batch by latency tier (16ms excellent → 200ms bad). Rate limit: 100 msg/sec — overflow is re-queued (not dropped). No `lastKey` deduplication (was removed — caused valid event drops).
-
- ### WS RPC Methods (86 total)
-
- **agent:** `agent.auth`, `agent.authstat`, `agent.desc`, `agent.get`, `agent.ls`, `agent.models`, `agent.search`, `agent.subagents`, `agent.update`
- **auth:** `auth.configs`, `auth.save`
- **codex:** `codex.complete`, `codex.relay`, `codex.start`, `codex.status`
- **conv:** `conv.cancel`, `conv.chunks`, `conv.chunks.earlier`, `conv.del`, `conv.del.all`, `conv.export`, `conv.full`, `conv.get`, `conv.import`, `conv.inject`, `conv.ls`, `conv.new`, `conv.prune`, `conv.run-script`, `conv.scripts`, `conv.search`, `conv.steer`, `conv.stop-script`, `conv.sync`, `conv.tags`, `conv.upd`
- **gemini:** `gemini.complete`, `gemini.relay`, `gemini.start`, `gemini.status`
- **git:** `git.check`, `git.push`
- **msg:** `msg.get`, `msg.ls`, `msg.ls.earlier`, `msg.send`, `msg.stream`
- **q:** `q.del`, `q.ls`, `q.upd`
- **run:** `run.cancel`, `run.del`, `run.get`, `run.new`, `run.resume`, `run.search`, `run.stream`, `run.stream.get`, `run.wait`
- **sess:** `sess.chunks`, `sess.exec`, `sess.get`, `sess.latest`
- **speech:** `speech.download`, `speech.status`
- **thread:** `thread.copy`, `thread.del`, `thread.get`, `thread.history`, `thread.new`, `thread.run.cancel`, `thread.run.steer`, `thread.run.stream`, `thread.run.stream.get`, `thread.search`, `thread.upd`
- **tools:** `tools.list`
- **util:** `clone`, `discover.claude`, `folders`, `home`, `import.claude`, `voice.cache`, `voice.generate`, `voices`, `ws.stats`
+ Endpoint: `BASE_URL + /sync`. Msgpack binary. Wire: RPC request `{r, m, p}`, reply `{r, d}` or `{r, e}`, broadcast `{type, seq, ...}` batched by `WSOptimizer`. Per-client priority queue: high-priority (streaming_start, message_created, streaming_complete) flush immediately; normal/low batch by latency tier. Rate limit: 100 msg/sec (re-queued if overflow).
+
+ **Legacy messages** (onLegacy): subscribe/unsubscribe/ping/latency_report/terminal_*/pm2_*/set_voice/get_subscriptions
+
+ **RPC methods** (86 total by category): agent (auth/authstat/desc/get/ls/models/search/subagents/update), auth (configs/save), codex (start/status/relay/complete), conv (ls/new/get/upd/del/cancel/chunks/full/steer/inject/search/prune/scripts/run-script), gemini (start/status/relay/complete), git (check/push), msg (send/stream/get/ls), q (ls/upd/del), run (new/stream/get/wait/cancel/search/resume), sess (get/latest/chunks/exec), speech (download/status), thread (new/get/upd/del/search/copy/history/run.stream/run.cancel/run.steer), tools (list), util (home/folders/clone/voices/voice.cache/voice.generate/ws.stats/discover.claude/import.claude)

  ## Steering

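The `{r, m, p}` request / `{r, d}` reply / `{r, e}` error envelope summarized in the hunk above can be sketched as a small dispatcher (plain objects here; on the real wire the envelopes are msgpack-encoded binary, and the error codes shown are assumptions):

```javascript
// Minimal sketch of the RPC envelope: a request { r, m, p } is routed by
// method name m with params p; success replies { r, d }, failure replies
// { r, e: { c, m } }. The msgpack step at the socket boundary is omitted.
function createRpcRouter(methods) {
  return async function handle(msg) {
    const { r, m, p } = msg;
    const fn = methods[m];
    if (!fn) return { r, e: { c: 404, m: `unknown method ${m}` } };
    try {
      return { r, d: await fn(p) };
    } catch (err) {
      return { r, e: { c: 500, m: err.message } };
    }
  };
}
```

Echoing `r` in every reply is what lets the client correlate responses to in-flight requests over a single socket.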
@@ -367,18 +281,7 @@ Three parallel state stores (must stay in sync):

  ## Message Flow

- 1. User sends → `startExecution()` checks `streamingConversations.has(convId)`
- 2. If NOT streaming: show optimistic "User" message in UI
- 3. If streaming: skip optimistic (will queue server-side)
- 4. Send via RPC `msg.stream` → backend creates message + broadcasts `message_created`
- 5. Backend checks `activeExecutions.has(convId)`:
- - YES: queues, returns `{ queued: true }`, broadcasts `queue_status`
- - NO: executes, returns `{ session }`
- 6. Queue items render as yellow control blocks in `queue-indicator` div
- 7. `message_created` only broadcast for non-queued messages (ws-handlers-conv.js)
- 8. When queued message executes: becomes regular user message, queue-indicator updates
-
- **Streaming session blocks:** `handleStreamingComplete()` removes `.event-streaming-start` and `.event-streaming-complete` DOM blocks to prevent accumulation in long conversations.
+ User sends → check if streaming → (streaming: queue server-side, skip optimistic; else: show optimistic message) → RPC msg.stream → backend checks activeExecutions.has(convId) → (yes: queue, broadcast queue_status; no: execute, return session) → broadcast message_created (non-queued only). Queue renders as yellow blocks. On complete, remove .event-streaming-* DOM blocks.

  ## Conversations Sidebar

@@ -405,49 +308,15 @@ MIME type priority: `event.media_type` → magic-byte detection (PNG/JPEG/WebP/G

  ## Voice Model Download

- Speech models (~470MB total) are downloaded automatically on server startup. No credentials required.
-
- ### Download Sources (fallback chain)
- 1. **GitHub LFS** (primary): `https://github.com/AnEntrypoint/models`
- 2. **HuggingFace** (fallback): `onnx-community/whisper-base` for STT, `AnEntrypoint/sttttsmodels` for TTS
-
- ### Models
- - **Whisper Base** (~280MB): encoder + decoder ONNX models, tokenizer, config files
- - **TTS Models** (~190MB): mimi encoder/decoder, flow_lm, text_conditioner, tokenizer
-
- ### UI Behavior
- - Voice tab hidden until models ready; circular progress indicator in header during download
- - Model status broadcast via WebSocket `model_download_progress` events
- - Cache location: `~/.gmgui/models/`
-
- ## Performance Notes
-
- - **Static asset serving:** gzip-only (no brotli — too slow for payloads this size). Pre-compressed once on first request, cached in `_assetCache` Map (etag → `{ raw, gz }`). HTML cached as `_htmlCache` after first request, invalidated on hot-reload.
- - **`/api/conversations` N+1 fix:** Uses `getActiveSessionConversationIds()` (single `DISTINCT` query) instead of per-conversation `getSessionsByStatus()` calls.
- - **`conv.chunks` since-filter:** Pushed to DB via `getConversationChunksSince(convId, since)` — no JS array filter on full chunk set.
- - **Client init:** `loadAgents()`, `loadConversations()`, `checkSpeechStatus()` run in parallel via `Promise.all()`.
- - **`perMessageDeflate: false`** on WebSocket server — msgpack binary doesn't compress well, and zlib was blocking the event loop on every streaming_progress send.
-
- ## Codex CLI OAuth
-
- OpenAI Codex CLI uses PKCE authorization code flow against `https://auth.openai.com`.
+ Models (~470MB: Whisper Base ~280MB + TTS ~190MB) downloaded at startup from GitHub LFS or HuggingFace (fallback). UI: voice tab hidden until ready; progress indicator in header; `model_download_progress` WS broadcast. Cache: `~/.gmgui/models/`.

- **Flow:**
- 1. `POST /api/codex-oauth/start` generates PKCE (SHA-256 S256 challenge), CSRF state, returns `authUrl`
- 2. User opens `authUrl` in browser and authenticates via OpenAI/ChatGPT
- 3. **Local**: Browser redirects to `http://localhost:1455/auth/callback` — but since agentgui's server is on a different port, the redirect goes to `GET /codex-oauth2callback` (agentgui intercepts via matching route). Token exchange happens server-side.
- 4. **Remote**: Redirect goes to `/codex-oauth2callback` which serves a relay page. Relay POSTs `{ code, state }` to `/api/codex-oauth/relay`. Token exchange happens on the server.
- 5. Tokens saved to `$CODEX_HOME/auth.json` (default: `~/.codex/auth.json`) as `{ auth_mode: "chatgpt", tokens: { id_token, access_token, refresh_token }, last_refresh }`
+ ## Performance & Observability

- **Constants (in server.js):**
- - Issuer: `https://auth.openai.com`
- - Client ID: `app_EMoamEEZ73f0CkXaXp7hrann`
- - Scopes: `openid profile email offline_access api.connectors.read api.connectors.invoke`
- - Redirect URI (local): `http://localhost:1455/auth/callback` (actual callback goes to agentgui's `/codex-oauth2callback`)
+ **Asset serving**: gzip only (no brotli), pre-compressed once, cached in `_assetCache` (etag-keyed). HTML cached, invalidated on hot-reload. **/api/conversations**: single `DISTINCT` query (not N+1). **Chunks**: `getConversationChunksSince()` pushes filter to DB. **Client init**: loadAgents/loadConversations/checkSpeechStatus parallel. **WS**: perMessageDeflate: false (msgpack + zlib blocked event loop).

- **WebSocket handlers** (in `lib/ws-handlers-util.js`): `codex.start`, `codex.status`, `codex.relay`, `codex.complete`
+ **Debug API** (`DEBUG=1`): `/api/debug/machines` snapshots, `/api/debug/state` inspection, `/api/debug/ws-stats` latency. Browser: `window.__debug.getSyncState()` exposes all XState machines.

- **Agent auth**: `POST /api/agents/codex/auth` starts the OAuth flow same as Gemini; broadcasts `script_started`/`script_output`/`script_stopped` events as OAuth progresses.
+ **Codex CLI OAuth**: PKCE S256 flow vs auth.openai.com. `POST /api/codex-oauth/start` → authUrl. User authenticates → redirect to `/codex-oauth2callback` (local: intercepts localhost:1455/auth/callback; remote: relay page POSTs to `/api/codex-oauth/relay`). Tokens saved to `$CODEX_HOME/auth.json`. WS handlers: codex.start/status/relay/complete.

  ## ACP SDK Integration

package/lib/jsonl-parser.js CHANGED
@@ -1,10 +1,19 @@
  import path from 'path';
+ <<<<<<< HEAD
+
+ export class JsonlParser {
+   constructor({ broadcastSync, queries, ownedSessionIds }) {
+     this._bc = broadcastSync;
+     this._q = queries;
+     this._owned = ownedSessionIds;
+ =======
  import fs from 'fs';

  export class JsonlParser {
    constructor({ broadcastSync, queries }) {
      this._bc = broadcastSync;
      this._q = queries;
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
      this._convMap = new Map();
      this._emitted = new Map();
      this._seqs = new Map();
@@ -12,6 +21,8 @@ export class JsonlParser {
      this._sessions = new Map();
    }

+ <<<<<<< HEAD
+ =======
    /**
     * Pre-register a GUI-spawned session so _conv finds the right conversation
     * and _dbSession reuses the existing session ID instead of creating a new one.
@@ -23,6 +34,7 @@ export class JsonlParser {
      if (dbSessionId) this._sessions.set(claudeSessionId, dbSessionId);
    }

+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
    clear() {
      this._convMap.clear();
      this._emitted.clear();
@@ -43,12 +55,28 @@ export class JsonlParser {
      for (const sid of [...this._streaming]) this._endStreaming(this._convMap.get(sid), sid);
    }

+ <<<<<<< HEAD
+   _line(fp, line) {
+     line = line.trim(); if (!line) return;
+     let e; try { e = JSON.parse(line); } catch (_) { return; }
+     if (!e || !e.sessionId) return;
+     if (this._owned?.has(e.sessionId)) return;
+     const cid = this._conv(e.sessionId, e, fp);
+     if (cid) this._route(cid, e.sessionId, e);
+   }
+
+   _conv(sid, e) {
+ =======
    _conv(sid, e, fp) {
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
      if (this._convMap.has(sid)) return this._convMap.get(sid);
      const found = this._q.getConversations().find(c => c.claudeSessionId === sid);
      if (found) { this._convMap.set(sid, found.id); return found.id; }
      if (e.type === 'queue-operation' || e.type === 'last-prompt') return null;
      if (e.type === 'user' && e.isMeta) return null;
+ <<<<<<< HEAD
+     const cwd = e.cwd || process.cwd();
+ =======

      // Resolve workingDirectory: event cwd → sessions-index.json → decoded path
      let cwd = e.cwd || null;
@@ -67,6 +95,7 @@ export class JsonlParser {
      }
      cwd = cwd || process.cwd();

+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
      const branch = e.gitBranch || '';
      const base = path.basename(cwd);
      const title = branch ? `${branch} @ ${base}` : base;
package/lib/jsonl-watcher.js CHANGED
@@ -3,6 +3,11 @@ import { JsonlWatcher as CCFWatcher } from 'ccfollow';
  import { JsonlParser } from './jsonl-parser.js';

  export class JsonlWatcher extends CCFWatcher {
+ <<<<<<< HEAD
+   constructor({ broadcastSync, queries, ownedSessionIds }) {
+     super();
+     this._parser = new JsonlParser({ broadcastSync, queries, ownedSessionIds });
+ =======
    constructor({ broadcastSync, queries }) {
      super();
      this._parser = new JsonlParser({ broadcastSync, queries });
@@ -14,6 +19,7 @@ export class JsonlWatcher extends CCFWatcher {
      this._currentFp = fp;
      super._read(fp);
      this._currentFp = null;
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
    }

    _line(line) {
@@ -22,6 +28,12 @@ export class JsonlWatcher extends CCFWatcher {
      let e;
      try { e = JSON.parse(line); } catch (_) { return; }
      if (!e || !e.sessionId) return;
+ <<<<<<< HEAD
+     const cid = this._parser._conv(e.sessionId, e);
+     if (cid) this._parser._route(cid, e.sessionId, e);
+   }
+
+ =======
      const cid = this._parser._conv(e.sessionId, e, this._currentFp);
      if (cid) this._parser._route(cid, e.sessionId, e);
    }
@@ -35,6 +47,7 @@ export class JsonlWatcher extends CCFWatcher {
      this._parser.registerSession(claudeSessionId, convId, dbSessionId);
    }

+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
    stop() {
      super.stop();
      this._parser.endAllStreaming();
package/lib/process-message.js CHANGED
@@ -1,4 +1,8 @@
+ <<<<<<< HEAD
+ export function createProcessMessage({ queries, activeExecutions, rateLimitState, execMachine, broadcastSync, runClaudeWithStreaming, cleanupExecution, checkpointManager, discoveredAgents, ownedSessionIds, STARTUP_CWD, buildSystemPrompt, parseRateLimitResetTime, eagerTTS, touchACP, createChunkBatcher, debugLog, logError, scheduleRetry, drainMessageQueue, createEventHandler }) {
+ =======
  export function createProcessMessage({ queries, activeExecutions, rateLimitState, execMachine, broadcastSync, runClaudeWithStreaming, cleanupExecution, checkpointManager, discoveredAgents, STARTUP_CWD, buildSystemPrompt, parseRateLimitResetTime, eagerTTS, touchACP, getJsonlWatcher, debugLog, logError, scheduleRetry, drainMessageQueue, createEventHandler }) {
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
    async function processMessageWithStreaming(conversationId, messageId, sessionId, content, agentId, model, subAgent) {
      const startTime = Date.now();
      touchACP(agentId);
@@ -27,10 +31,19 @@ export function createProcessMessage({ queries, activeExecutions, rateLimitState
      execMachine.send(conversationId, { type: 'START', sessionId });
      queries.setIsStreaming(conversationId, true);
      queries.updateSession(sessionId, { status: 'active' });
+ <<<<<<< HEAD
+     const batcher = createChunkBatcher(queries, debugLog);
+     const cwd = conv?.workingDirectory || STARTUP_CWD;
+     const allBlocksRef = { val: [] };
+     const currentSequenceRef = { val: queries.getMaxSequence(sessionId) ?? -1 };
+     const batcherRef = { batcher, eventCount: 0, resumeSessionId: conv?.claudeSessionId || null };
+     const onEvent = createEventHandler({ queries, activeExecutions, broadcastSync, rateLimitState, batcherRef, sessionId, conversationId, messageId, content, agentId, model, subAgent, ownedSessionIds, allBlocksRef, currentSequenceRef, scheduleRetry, eagerTTS, debugLog, parseRateLimitResetTime });
+ =======
      const cwd = conv?.workingDirectory || STARTUP_CWD;
      // stateRef tracks eventCount (for session response metadata) and resumeSessionId
      const stateRef = { eventCount: 0, resumeSessionId: conv?.claudeSessionId || null };
      const onEvent = createEventHandler({ queries, activeExecutions, broadcastSync, rateLimitState, batcherRef: stateRef, sessionId, conversationId, messageId, content, agentId, model, subAgent, getJsonlWatcher, scheduleRetry, eagerTTS, debugLog, parseRateLimitResetTime });
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
      try {
        debugLog(`[stream] Starting: conversationId=${conversationId}, sessionId=${sessionId}`);
        let resolvedAgentId = agentId || 'claude-code';
@@ -40,7 +53,11 @@ export function createProcessMessage({ queries, activeExecutions, rateLimitState
      const resolvedSubAgent = subAgent || conv?.subAgent || null;
      const config = {
        verbose: true, outputFormat: 'stream-json', timeout: 1800000, print: true,
+ <<<<<<< HEAD
+       resumeSessionId: batcherRef.resumeSessionId,
+ =======
        resumeSessionId: stateRef.resumeSessionId,
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
        systemPrompt: buildSystemPrompt(agentId, resolvedModel, resolvedSubAgent),
        model: resolvedModel || undefined, subAgent: resolvedSubAgent || undefined, onEvent,
        onPid: (pid) => { const e = activeExecutions.get(conversationId); if (e) e.pid = pid; execMachine.send(conversationId, { type: 'SET_PID', pid }); },
@@ -53,6 +70,19 @@ export function createProcessMessage({ queries, activeExecutions, rateLimitState
      }
      activeExecutions.delete(conversationId);
      execMachine.send(conversationId, { type: 'COMPLETE' });
+ <<<<<<< HEAD
+     batcher.drain();
+     if (claudeSessionId) ownedSessionIds.delete(claudeSessionId);
+     debugLog(`[stream] Claude returned ${outputs.length} outputs, sessionId=${claudeSessionId}`);
+     queries.updateSession(sessionId, { status: 'complete', response: JSON.stringify({ outputs, eventCount: batcherRef.eventCount }), completed_at: Date.now() });
+     broadcastSync({ type: 'streaming_complete', sessionId, conversationId, agentId, eventCount: batcherRef.eventCount, seq: currentSequenceRef.val, timestamp: Date.now() });
+     debugLog(`[stream] Completed: ${outputs.length} outputs, ${batcherRef.eventCount} events`);
+   } catch (error) {
+     const elapsed = Date.now() - startTime;
+     debugLog(`[stream] Error after ${elapsed}ms: ${error.message}`);
+     const conv2 = queries.getConversation(conversationId);
+     if (conv2?.claudeSessionId) ownedSessionIds.delete(conv2.claudeSessionId);
+ =======
      debugLog(`[stream] Claude returned ${outputs.length} outputs, sessionId=${claudeSessionId}`);
      queries.updateSession(sessionId, { status: 'complete', response: JSON.stringify({ outputs, eventCount: stateRef.eventCount }), completed_at: Date.now() });
      // streaming_complete is broadcast by JsonlParser when it sees the turn_duration event.
@@ -63,6 +93,7 @@ export function createProcessMessage({ queries, activeExecutions, rateLimitState
63
93
  } catch (error) {
64
94
  const elapsed = Date.now() - startTime;
65
95
  debugLog(`[stream] Error after ${elapsed}ms: ${error.message}`);
96
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
66
97
  if (rateLimitState.get(conversationId)?.isStreamDetected) {
67
98
  debugLog(`[rate-limit] Rate limit already handled in stream for conv ${conversationId}, skipping catch handler`);
68
99
  return;
@@ -76,6 +107,10 @@ export function createProcessMessage({ queries, activeExecutions, rateLimitState
76
107
  const errMsg = queries.createMessage(conversationId, 'assistant', `Error: Authentication failed. ${error.message}. Please update your credentials and try again.`);
77
108
  broadcastSync({ type: 'message_created', conversationId, message: errMsg, timestamp: Date.now() });
78
109
  queries.setIsStreaming(conversationId, false);
110
+ <<<<<<< HEAD
111
+ batcher.drain();
112
+ =======
113
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
79
114
  activeExecutions.delete(conversationId);
80
115
  return;
81
116
  }
@@ -94,6 +129,10 @@ export function createProcessMessage({ queries, activeExecutions, rateLimitState
94
129
  const retryAt = Date.now() + cooldownMs;
95
130
  rateLimitState.set(conversationId, { retryAt, cooldownMs, retryCount });
96
131
  broadcastSync({ type: 'rate_limit_hit', sessionId, conversationId, retryAfterMs: cooldownMs, retryAt, retryCount, timestamp: Date.now() });
132
+ <<<<<<< HEAD
133
+ batcher.drain();
134
+ =======
135
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
97
136
  debugLog(`[rate-limit] Scheduling retry for conv ${conversationId} in ${cooldownMs}ms (attempt ${retryCount + 1})`);
98
137
  setTimeout(() => {
99
138
  debugLog(`[rate-limit] Timeout fired for conv ${conversationId}, calling scheduleRetry`);
@@ -103,13 +142,21 @@ export function createProcessMessage({ queries, activeExecutions, rateLimitState
103
142
  }, cooldownMs);
104
143
  return;
105
144
  }
145
+ <<<<<<< HEAD
146
+ const isSessionConflict = error.exitCode === null && batcherRef.eventCount === 0;
147
+ =======
106
148
  const isSessionConflict = error.exitCode === null && stateRef.eventCount === 0;
149
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
107
150
  broadcastSync({ type: 'streaming_error', sessionId, conversationId, error: error.message, isPrematureEnd: error.isPrematureEnd || false, exitCode: error.exitCode, stderrText: error.stderrText, recoverable: elapsed < 60000, isSessionConflict, timestamp: Date.now() });
108
151
  if (!isSessionConflict) {
109
152
  const errMsg = queries.createMessage(conversationId, 'assistant', `Error: ${error.message}`);
110
153
  broadcastSync({ type: 'message_created', conversationId, message: errMsg, timestamp: Date.now() });
111
154
  }
112
155
  } finally {
156
+ <<<<<<< HEAD
157
+ batcher.drain();
158
+ =======
159
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
113
160
  if (!rateLimitState.has(conversationId)) {
114
161
  cleanupExecution(conversationId);
115
162
  drainMessageQueue(conversationId);
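
Both sides of the rate-limit hunk above follow the same scheduling shape: record `retryAt` in `rateLimitState`, broadcast a `rate_limit_hit` event, then arm a `setTimeout` that triggers the retry. A minimal standalone sketch of that pattern (the function name and callback signatures here are hypothetical; only the `rateLimitState` map and the event type mirror the diff):

```javascript
// Map of conversationId -> { retryAt, cooldownMs, retryCount }, as in the diff.
const rateLimitState = new Map();

// Hypothetical helper: record the cooldown, notify clients, then arm a timer
// that clears the state and re-attempts the conversation.
function scheduleRetryAfterRateLimit(conversationId, cooldownMs, retryCount, { broadcast, retry }) {
  const retryAt = Date.now() + cooldownMs;
  rateLimitState.set(conversationId, { retryAt, cooldownMs, retryCount });
  broadcast({ type: 'rate_limit_hit', conversationId, retryAfterMs: cooldownMs, retryAt, retryCount });
  setTimeout(() => {
    rateLimitState.delete(conversationId); // allow the finally-block cleanup path to run next time
    retry(conversationId, retryCount + 1);
  }, cooldownMs);
}
```

Keeping the entry in `rateLimitState` until the timer fires is what lets the `finally` block in the diff skip `cleanupExecution` while a retry is pending.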
@@ -1,6 +1,10 @@
  import { JsonlWatcher } from './jsonl-watcher.js';

+ <<<<<<< HEAD
+ export function createOnServerReady({ queries, broadcastSync, warmAssetCache, staticDir, toolManager, discoveredAgents, PORT, BASE_URL, watch, ownedSessionIds, resumeInterruptedStreams, activeExecutions, debugLog, installGMAgentConfigs, startACPTools, getACPStatus, execMachine, toolInstallMachine, getSpeech, ensureModelsDownloaded, performAutoImport, performAgentHealthCheck, pm2Manager, pm2Subscribers, recoverStaleSessions }) {
+ =======
  export function createOnServerReady({ queries, broadcastSync, warmAssetCache, staticDir, toolManager, discoveredAgents, PORT, BASE_URL, watch, setWatcher, resumeInterruptedStreams, activeExecutions, debugLog, installGMAgentConfigs, startACPTools, getACPStatus, execMachine, toolInstallMachine, getSpeech, ensureModelsDownloaded, performAutoImport, performAgentHealthCheck, pm2Manager, pm2Subscribers, recoverStaleSessions }) {
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
  let jsonlWatcher = null;

  function getJsonlWatcher() { return jsonlWatcher; }
@@ -23,9 +27,14 @@ export function createOnServerReady({ queries, broadcastSync, warmAssetCache, st
  }, 6 * 60 * 60 * 1000);

  try {
+ <<<<<<< HEAD
+ jsonlWatcher = new JsonlWatcher({ broadcastSync, queries, ownedSessionIds });
+ jsonlWatcher.start();
+ =======
  jsonlWatcher = new JsonlWatcher({ broadcastSync, queries });
  jsonlWatcher.start();
  if (setWatcher) setWatcher(jsonlWatcher);
+ >>>>>>> 6bfde951cbeb65ec72b73da9c23b9c8c0ba0bbc1
  console.log('[JSONL] Watcher started');
  } catch (err) { console.error('[JSONL] Watcher failed to start:', err.message); }
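
Both variants above defer access to the watcher: `jsonlWatcher` is only assigned once the server is ready, so earlier wiring (such as the `getJsonlWatcher` reference passed to the event handler in process-message.js) receives a getter closure rather than the instance itself, and the remote side additionally accepts a `setWatcher` callback. A minimal sketch of that getter/setter slot pattern, with hypothetical names (`createWatcherSlot` is not part of the package):

```javascript
// Hypothetical slot: consumers can safely capture getWatcher before the
// watcher exists; startup code calls setWatcher once it has been constructed.
function createWatcherSlot() {
  let watcher = null;
  return {
    getWatcher: () => watcher,            // returns null until startup completes
    setWatcher: (w) => { watcher = w; },  // invoked from the onServerReady path
  };
}
```

The getter indirection avoids a circular initialization order: handlers are created before the server is ready, but only dereference the watcher at event time.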