@rubytech/create-realagent 1.0.844 → 1.0.846

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (28)
  1. package/dist/__tests__/port-canonicalisation.test.js +1 -0
  2. package/dist/index.js +45 -2
  3. package/dist/port-resolution.js +2 -1
  4. package/package.json +1 -1
  5. package/payload/platform/config/brand.json +1 -0
  6. package/payload/platform/lib/oauth-llm/dist/index.d.ts.map +1 -1
  7. package/payload/platform/lib/oauth-llm/dist/index.js +11 -1
  8. package/payload/platform/lib/oauth-llm/dist/index.js.map +1 -1
  9. package/payload/platform/lib/oauth-llm/src/index.ts +11 -1
  10. package/payload/platform/plugins/cloudflare/mcp/dist/lib/cloudflared.d.ts +1 -0
  11. package/payload/platform/plugins/cloudflare/mcp/dist/lib/cloudflared.d.ts.map +1 -1
  12. package/payload/platform/plugins/cloudflare/mcp/dist/lib/cloudflared.js +14 -17
  13. package/payload/platform/plugins/cloudflare/mcp/dist/lib/cloudflared.js.map +1 -1
  14. package/payload/platform/plugins/cloudflare/references/manual-setup.md +5 -2
  15. package/payload/platform/plugins/cloudflare/scripts/setup-tunnel.sh +32 -13
  16. package/payload/platform/plugins/docs/references/troubleshooting.md +5 -0
  17. package/payload/platform/scripts/check-sdk-oauth.mjs +13 -6
  18. package/payload/platform/scripts/vnc.sh +95 -36
  19. package/payload/platform/templates/agents/admin/IDENTITY.md +4 -2
  20. package/payload/platform/templates/systemd/edge.service.template +1 -1
  21. package/payload/server/chunk-6NZQKUSW.js +1577 -0
  22. package/payload/server/chunk-HPQ67IIU.js +10412 -0
  23. package/payload/server/chunk-NYGJNXX2.js +10376 -0
  24. package/payload/server/client-pool-2H6JWYC3.js +34 -0
  25. package/payload/server/maxy-edge.js +2 -2
  26. package/payload/server/public/assets/{admin-CvwOOG4D.js → admin-CedLGnCT.js} +1 -1
  27. package/payload/server/public/index.html +1 -1
  28. package/payload/server/server.js +3 -3
package/payload/platform/plugins/docs/references/troubleshooting.md
@@ -64,6 +64,11 @@ tail -200 ~/.maxy/logs/maxy-ui.log | rg '\[remote-auth\].*resolvedKind='
  
  **Agent searches the filesystem after uploading a zip.** If you uploaded a zip and the agent burns several turns running `find` / `Glob` instead of unzipping, that is the symptom of the recovery-retry attachment-context regression (now closed by the recovery context preservation contract in `.docs/agents.md`). Greppable confirmation is the `[context-overflow-recovery] retry … attachmentsCarried=<n>` line in the conversation stream log. If you see `[context-overflow-recovery] WARN attachment-context-lost`, the regression has returned — surface to support.
  
+ **Wrong Claude account answering on a multi-brand device.** On a host running both Maxy and Real Agent, each brand's admin agent reads its own `~/${brand.configDir}/.claude/.credentials.json`; there is no longer a shared `~/.claude/` for the two brands to thrash against one another. If a brand reports auth failures or appears to be operating against the wrong subscription, check three things:
+ 1. `grep "\[claude-auth\] init" ~/.${brand}/logs/server.log | tail -1` — the resolved path must end with `~/.${brand}/.claude/.credentials.json`. If a `[claude-auth] WARN cross-brand-path-detected` line is present, the runtime is still pointing at `~/.claude/`; the brand main service did not pick up the `Environment=CLAUDE_CONFIG_DIR=` setting (re-run the brand installer to refresh the unit file).
+ 2. `diff <(jq .claudeAiOauth.accessToken ~/.maxy/.claude/.credentials.json) <(jq .claudeAiOauth.accessToken ~/.realagent/.claude/.credentials.json)` — must be non-empty after each brand's operator has run `claude /login` against distinct Anthropic accounts; if it is empty, both brands are still logged in to the same account (operator action needed, not a code bug).
+ 3. `grep "\[install\] claude-creds pickup" ~/.${brand}/logs/install-*.log` — fires once on the first post-Task-923 install of any brand and moves the legacy `~/.claude/.credentials.json` into that brand's path. Subsequent brands install with no credentials and require a fresh `claude /login` inside that brand's chat (which writes to the brand-scoped path because the systemd unit env is in scope).
+ 
  ---
  
  ## Memory Not Working
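
The three checks above lend themselves to a one-shot helper. A minimal sketch of check 1 (the helper itself is hypothetical; only the `[claude-auth]` log-line shapes come from the troubleshooting text):

```shell
# Illustrative helper for check 1: inspect a brand's server log for the
# resolved [claude-auth] init path and the cross-brand warning.
check_brand_auth_path() {
  log_file="$1"
  brand="$2"
  if grep -q '\[claude-auth\] WARN cross-brand-path-detected' "$log_file"; then
    echo "FAIL: runtime still points at ~/.claude/ (re-run the brand installer)"
    return 1
  fi
  if grep '\[claude-auth\] init' "$log_file" | tail -1 \
      | grep -q "/\.${brand}/\.claude/\.credentials\.json"; then
    echo "OK: brand-scoped credentials path"
    return 0
  fi
  echo "FAIL: no brand-scoped [claude-auth] init line found"
  return 1
}

# Usage against a synthetic log line:
tmp=$(mktemp)
echo "[claude-auth] init path=/home/pi/.maxy/.claude/.credentials.json" > "$tmp"
check_brand_auth_path "$tmp" maxy   # OK: brand-scoped credentials path
rm -f "$tmp"
```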
package/payload/platform/scripts/check-sdk-oauth.mjs
@@ -15,14 +15,19 @@
  //
  // PASS condition: apiKeySource ∈ {'oauth', 'none'}. Both indicate OAuth-only mode:
  // 'oauth' = SDK supplied an OAuth-issued API key; 'none' = SDK supplied no key
- // at all and the claude binary's own ~/.claude/.credentials.json OAuth state is
- // in use. Cross-check with `claude --print --output-format json "Reply…"` to
- // confirm the same value: parity = SDK is faithfully proxying claude's auth.
+ // at all and the claude binary's own credentials file is in use. Production
+ // brand installs (Task 923) read this file from CLAUDE_CONFIG_DIR — which
+ // the brand main service sets to `${persistDir}/.claude` so the
+ // per-brand path is `~/${BRAND.configDir}/.claude/.credentials.json`. This
+ // orphan/standalone Pi spike runs without that env var and falls back to
+ // `~/.claude/.credentials.json`; that's by design (the spike is not in the
+ // production execution path; see Task 746 brief). Cross-check with
+ // `claude --print --output-format json "Reply…"` to confirm parity.
  //
  // Verdict reasons (exit 1 unless PASS):
  //   env-set                      ANTHROPIC_API_KEY present (operator must `unset`)
- //   no-oauth-credentials         ~/.claude/.credentials.json unreadable, empty,
- //                                or contains no OAuth-shaped fields (run `claude login`)
+ //   no-oauth-credentials         credentials file unreadable, empty, or
+ //                                contains no OAuth-shaped fields (run `claude login`)
  //   exception:<msg>              SDK import or runtime error
  //   no-system-init               SDK never emitted system.init message
  //   wrong-api-key-source:<value> apiKeySource ∉ {oauth, none} — value verbatim;
@@ -46,7 +51,9 @@ import { join } from 'node:path'
  const TIMEOUT_MS = 60_000
  const PROMPT = 'Reply with the literal string OK and nothing else.'
  // Claude Code emits apiKeySource='none' when the SDK supplies no API key and
- // the claude binary uses its own OAuth credentials at ~/.claude/.credentials.json.
+ // the claude binary uses its own OAuth credentials at the path resolved from
+ // CLAUDE_CONFIG_DIR (Task 923) — falling back to `~/.claude/.credentials.json`
+ // when CLAUDE_CONFIG_DIR is unset, as in this orphan Pi spike's runtime.
  // Brief assumed 'oauth' based on stale type-defs; runtime-correct OAuth-only
  // indicator on Claude Code 2.1.x is 'none'. Both accepted; cross-check with
  // `claude --print --output-format json` to confirm parity.
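
The resolution rule the comments describe can be restated as a tiny function. This is an illustrative mirror of the documented behaviour, not the claude binary's actual code: an explicit `CLAUDE_CONFIG_DIR` wins, otherwise the legacy path is used.

```shell
# Sketch of the credentials-path resolution described in the comments above.
resolve_creds_path() {
  if [ -n "${CLAUDE_CONFIG_DIR:-}" ]; then
    printf '%s/.credentials.json\n' "$CLAUDE_CONFIG_DIR"
  else
    printf '%s/.claude/.credentials.json\n' "$HOME"
  fi
}

# Production brand install (Task 923): the systemd unit sets CLAUDE_CONFIG_DIR.
CLAUDE_CONFIG_DIR="$HOME/.realagent/.claude" resolve_creds_path
# Orphan Pi spike: env var unset, legacy fallback.
CLAUDE_CONFIG_DIR='' resolve_creds_path
```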
package/payload/platform/scripts/vnc.sh
@@ -4,11 +4,19 @@
  #
  # Usage: vnc.sh start | stop | start-chrome | start-chrome-native | status
  #
- # Components:
- #   Xtigervnc :99     — virtual X11 display + VNC server on port 5900
- #   websockify :6080  — WebSocket bridge serving noVNC static files
- #   Chromium :9222    — headed browser with CDP enabled
- #                       (Playwright MCP connects via --cdp-endpoint)
+ # Components (brand-scoped; see Task 553):
+ #   Xtigervnc ${VNC_DISPLAY} — virtual X11 display per brand
+ #   websockify :6080         — WebSocket bridge (port shared until follow-up)
+ #   Chromium                 — headed browser with --user-data-dir per brand
+ #   CDP :9222                — Playwright MCP endpoint (port shared until follow-up)
+ #
+ # Brand isolation (Task 553): the X display number and Chromium profile are
+ # brand-scoped so two brands on the same device do not share session cookies,
+ # extensions, or local storage. Maxy=:99, Real Agent=:100; the per-brand value
+ # is stamped from brand.json.vncDisplay at install time and re-read at runtime.
+ # CDP port 9222 and websockify 6080 are NOT yet brand-scoped — until that
+ # follow-up lands, only one brand's Chromium can hold CDP at a time and
+ # concurrent multi-brand VNC stacks fail loudly on port collision.
  #
  # Task 664 retired the admin-UI terminal surface entirely. This script
  # no longer spawns GUI terminal emulators of any kind; upgrades and
@@ -16,64 +24,103 @@
  # (/api/admin/actions/*) rather than an in-browser terminal.
  #
  # Display modes (DISPLAY_MODE env var, set by installer --display flag):
- #   virtual (default) — Chromium runs on :99 (VNC virtual display)
+ #   virtual (default) — Chromium runs on the brand's ${VNC_DISPLAY}
  #   native            — Chromium runs on the login session's real display
  #                       (discovered via loginctl, NOT from $DISPLAY which
- #                       is poisoned by systemd Environment=DISPLAY=:99)
+ #                       is poisoned by the systemd Environment=DISPLAY line)
  
  set -uo pipefail
  
- # Derive config dir from brand.json so logs go to the correct brand-specific
- # directory (e.g. ~/.realagent/ instead of ~/.maxy/). Primary source is the
- # script's own filesystem location; $MAXY_PLATFORM_ROOT is a fallback.
+ # Derive config dir AND VNC display from brand.json so logs and X display go
+ # to the correct brand-specific values (e.g. ~/.realagent/ + :100 instead of
+ # ~/.maxy/ + :99). Primary source is the script's own filesystem location;
+ # $MAXY_PLATFORM_ROOT is a fallback.
  SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
  PLATFORM_ROOT="${MAXY_PLATFORM_ROOT:-$(dirname "$SCRIPT_DIR")}"
  BRAND_JSON="${PLATFORM_ROOT}/config/brand.json"
  CONFIG_DIR=".maxy"
+ VNC_DISPLAY_NUM=99
+ BRAND_HOSTNAME="maxy"
  if [ -f "$BRAND_JSON" ] && command -v jq >/dev/null 2>&1; then
    _dir=$(jq -r '.configDir // empty' "$BRAND_JSON" 2>/dev/null) || true
    [ -n "$_dir" ] && CONFIG_DIR="$_dir"
+   _vd=$(jq -r '.vncDisplay // empty' "$BRAND_JSON" 2>/dev/null) || true
+   if [ -n "$_vd" ] && [ "$_vd" -eq "$_vd" ] 2>/dev/null; then
+     VNC_DISPLAY_NUM="$_vd"
+   fi
+   _hn=$(jq -r '.hostname // empty' "$BRAND_JSON" 2>/dev/null) || true
+   [ -n "$_hn" ] && BRAND_HOSTNAME="$_hn"
  fi
  
+ VNC_DISPLAY=":${VNC_DISPLAY_NUM}"
  MAXY_DIR="${HOME}/${CONFIG_DIR}"
  LOG_DIR="${MAXY_DIR}/logs"
  LOG_FILE="${LOG_DIR}/vnc-boot.log"
+ CHROMIUM_PROFILE_DIR="${MAXY_DIR}/chromium-profile"
  
  mkdir -p "$LOG_DIR"
  
  log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >> "$LOG_FILE"; }
  
  kill_stale() {
-   pkill -f 'chromium.*remote-debugging-port=9222' 2>/dev/null || true
-   pkill -f 'Xtigervnc :99' 2>/dev/null || true
+   # Brand-scoped matchers (Task 553): pkill on --user-data-dir narrows the
+   # Chromium kill to this brand's profile only, so two brands on the same
+   # device do not stomp on each other. The Xtigervnc matcher narrows to this
+   # brand's display number.
+   pkill -f "chromium.*--user-data-dir=${CHROMIUM_PROFILE_DIR}" 2>/dev/null || true
+   pkill -f "Xtigervnc ${VNC_DISPLAY}" 2>/dev/null || true
    pkill -f 'websockify.*6080' 2>/dev/null || true
-   rm -f /tmp/.X99-lock /tmp/.X11-unix/X99
-   # Clear Chromium profile locks left by unclean shutdown — without this,
-   # Chromium refuses to start after a service restart or power loss.
-   # System Chromium stores locks under ~/.config/chromium/; snap-installed
-   # Chromium (Ubuntu) stores them under ~/snap/chromium/common/chromium/.
-   rm -f "${HOME}/.config/chromium/SingletonLock" \
-         "${HOME}/.config/chromium/SingletonCookie" \
-         "${HOME}/.config/chromium/SingletonSocket" 2>/dev/null || true
-   rm -f "${HOME}/snap/chromium/common/chromium/SingletonLock" \
-         "${HOME}/snap/chromium/common/chromium/SingletonCookie" \
-         "${HOME}/snap/chromium/common/chromium/SingletonSocket" 2>/dev/null || true
+   rm -f "/tmp/.X${VNC_DISPLAY_NUM}-lock" "/tmp/.X11-unix/X${VNC_DISPLAY_NUM}"
+   # Clear this brand's Chromium profile locks left by unclean shutdown.
+   # Without this, Chromium refuses to start after a service restart or
+   # power loss. The per-brand profile path means peer brands' locks are
+   # never touched here.
+   rm -f "${CHROMIUM_PROFILE_DIR}/SingletonLock" \
+         "${CHROMIUM_PROFILE_DIR}/SingletonCookie" \
+         "${CHROMIUM_PROFILE_DIR}/SingletonSocket" 2>/dev/null || true
    sleep 2
  
    # If any VNC-stack processes survived SIGTERM, force-kill them.
    local survivors=0
-   pgrep -f 'chromium.*remote-debugging-port=9222' >/dev/null 2>&1 && survivors=1
-   pgrep -f 'Xtigervnc :99' >/dev/null 2>&1 && survivors=1
+   pgrep -f "chromium.*--user-data-dir=${CHROMIUM_PROFILE_DIR}" >/dev/null 2>&1 && survivors=1
+   pgrep -f "Xtigervnc ${VNC_DISPLAY}" >/dev/null 2>&1 && survivors=1
    pgrep -f 'websockify.*6080' >/dev/null 2>&1 && survivors=1
    if [ "$survivors" -eq 1 ]; then
      log "SIGTERM survivors detected — sending SIGKILL"
-     pkill -9 -f 'chromium.*remote-debugging-port=9222' 2>/dev/null || true
-     pkill -9 -f 'Xtigervnc :99' 2>/dev/null || true
+     pkill -9 -f "chromium.*--user-data-dir=${CHROMIUM_PROFILE_DIR}" 2>/dev/null || true
+     pkill -9 -f "Xtigervnc ${VNC_DISPLAY}" 2>/dev/null || true
      pkill -9 -f 'websockify.*6080' 2>/dev/null || true
      sleep 1
    fi
  }
  
+ # Refuse to start when this brand's display is already held by an unrelated
+ # process (e.g. a peer brand's Xtigervnc whose vncDisplay collides, or a
+ # manual Xvfb invocation). Fail loudly so the operator sees the holding PID
+ # rather than silently sharing a display with another stack. Task 553.
+ check_display_collision() {
+   local lock="/tmp/.X${VNC_DISPLAY_NUM}-lock"
+   local socket="/tmp/.X11-unix/X${VNC_DISPLAY_NUM}"
+   local held_pid=""
+   if [ -f "$lock" ]; then
+     held_pid="$(cat "$lock" 2>/dev/null | tr -d ' ' || true)"
+   fi
+   # Empty / stale lock → nothing to refuse. kill_stale will clean it up.
+   if [ -z "$held_pid" ] || ! kill -0 "$held_pid" 2>/dev/null; then
+     return 0
+   fi
+   # Live PID — but is it OUR own previous Xtigervnc? If yes, kill_stale will
+   # handle it; only refuse when the holder is unrelated to this brand's
+   # vnc.sh stack.
+   if pgrep -f "Xtigervnc ${VNC_DISPLAY}" 2>/dev/null | grep -qx "$held_pid"; then
+     return 0
+   fi
+   log "[vnc.sh:collision] brand=${BRAND_HOSTNAME} display=${VNC_DISPLAY} held-by-pid=${held_pid} socket=${socket}"
+   echo "ERROR: display ${VNC_DISPLAY} is held by PID ${held_pid} (not this brand's Xtigervnc)" >&2
+   echo "       Refusing to start; resolve the collision before retrying." >&2
+   exit 1
+ }
+ 
  wait_for_port() {
    local port="$1" max="${2:-40}"
    for _ in $(seq 1 "$max"); do
@@ -134,12 +181,15 @@ discover_native_session() {
  start_chrome_on() {
    local target_display="$1"
    local label="$2"  # "vnc" (native mode uses start_chrome_native instead)
-   pkill -f 'chromium.*remote-debugging-port=9222' 2>/dev/null || true
+   # Brand-scoped Chromium kill: only this brand's profile-bound chromium.
+   pkill -f "chromium.*--user-data-dir=${CHROMIUM_PROFILE_DIR}" 2>/dev/null || true
    sleep 0.3
  
-   log "Starting Chromium on ${target_display} (${label}) with CDP on :9222"
+   mkdir -p "${CHROMIUM_PROFILE_DIR}"
+   log "Starting Chromium on ${target_display} (${label}) profile=${CHROMIUM_PROFILE_DIR} CDP=:9222"
  
    DISPLAY="${target_display}" /usr/bin/chromium \
+     --user-data-dir="${CHROMIUM_PROFILE_DIR}" \
      --ozone-platform=x11 \
      --no-sandbox \
      --test-type \
@@ -182,7 +232,7 @@ start_chrome_on() {
  }
  
  start_chrome() {
-   start_chrome_on ":99" "vnc"
+   start_chrome_on "${VNC_DISPLAY}" "vnc"
  }
  
  # ---------------------------------------------------------------------------
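
The brand-scoped values threaded through the hunks above all come from the brand.json read shown earlier in this diff. A standalone sketch of that resolution, with the same defaults and the same numeric guard (the brand.json path is a parameter here instead of being derived from the script's location):

```shell
# Resolve configDir and vncDisplay with the same fallbacks as vnc.sh:
# defaults .maxy/:99, overridden only when brand.json is readable, jq is
# installed, and vncDisplay is a plain integer.
resolve_brand() {
  brand_json="$1"
  config_dir=".maxy"
  vnc_display_num=99
  if [ -f "$brand_json" ] && command -v jq >/dev/null 2>&1; then
    _dir=$(jq -r '.configDir // empty' "$brand_json" 2>/dev/null) || true
    if [ -n "$_dir" ]; then config_dir="$_dir"; fi
    _vd=$(jq -r '.vncDisplay // empty' "$brand_json" 2>/dev/null) || true
    # [ "$_vd" -eq "$_vd" ] succeeds only for integers, so junk keeps :99.
    if [ -n "$_vd" ] && [ "$_vd" -eq "$_vd" ] 2>/dev/null; then
      vnc_display_num="$_vd"
    fi
  fi
  echo "${config_dir} :${vnc_display_num}"
}

resolve_brand /nonexistent/brand.json   # .maxy :99 (all defaults)
tmp=$(mktemp)
printf '{"configDir":".realagent","vncDisplay":100}\n' > "$tmp"
resolve_brand "$tmp"                    # .realagent :100 when jq is available
rm -f "$tmp"
```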
@@ -194,16 +244,18 @@ start_chrome() {
  start_chrome_native() {
    discover_native_session
  
-   pkill -f 'chromium.*remote-debugging-port=9222' 2>/dev/null || true
+   # Brand-scoped Chromium kill: only this brand's profile-bound chromium.
+   pkill -f "chromium.*--user-data-dir=${CHROMIUM_PROFILE_DIR}" 2>/dev/null || true
    sleep 0.3
  
+   mkdir -p "${CHROMIUM_PROFILE_DIR}"
    log "Starting Chromium natively (${NATIVE_SESSION_TYPE}, $(
      if [ "$NATIVE_SESSION_TYPE" = "wayland" ]; then
        echo "WAYLAND_DISPLAY=${NATIVE_WAYLAND_DISPLAY}"
      else
        echo "DISPLAY=${NATIVE_DISPLAY}"
      fi
-   )) with CDP on :9222"
+   )) profile=${CHROMIUM_PROFILE_DIR} CDP=:9222"
  
    local ozone_flag="--ozone-platform=x11"
    local -a env_vars=("DISPLAY=${NATIVE_DISPLAY}")
@@ -214,6 +266,7 @@ start_chrome_native() {
    fi
  
    env "${env_vars[@]}" /usr/bin/chromium \
+     --user-data-dir="${CHROMIUM_PROFILE_DIR}" \
      "$ozone_flag" \
      --no-sandbox \
      --test-type \
@@ -256,11 +309,17 @@ start_chrome_native() {
  
  case "${1:-}" in
    start)
-     log "DISPLAY_MODE=${DISPLAY_MODE:-virtual}"
+     log "[vnc.sh] start brand=${BRAND_HOSTNAME} display=${VNC_DISPLAY} profile=${CHROMIUM_PROFILE_DIR} mode=${DISPLAY_MODE:-virtual}"
+     # Collision check runs BEFORE kill_stale: kill_stale removes the X lock
+     # file, so a check after kill_stale would always see an empty lock and
+     # never refuse. The check itself is a no-op for our own brand's prior
+     # Xtigervnc (kill_stale will reap it); it only refuses when the holder
+     # is unrelated to this brand's vnc.sh stack.
+     check_display_collision
      kill_stale
-     log "Starting Xtigervnc on :99"
+     log "Starting Xtigervnc on ${VNC_DISPLAY}"
  
-     Xtigervnc :99 \
+     Xtigervnc "${VNC_DISPLAY}" \
        -geometry 1280x800 \
        -depth 24 \
        -rfbport 5900 \
@@ -310,7 +369,7 @@ case "${1:-}" in
      ;;
  
    stop)
-     log "Stopping VNC stack"
+     log "[vnc.sh] stop brand=${BRAND_HOSTNAME} display=${VNC_DISPLAY}"
      kill_stale
      log "VNC stack stopped"
      ;;
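
The decision `check_display_collision` makes on each start can be restated as a pure function. This is illustrative only: the lock contents, PID liveness, and ownership answers are passed in as arguments so the three outcomes are visible without an X server.

```shell
# Restatement of the collision decision:
#   - empty or stale lock         -> proceed (kill_stale cleans it up)
#   - live PID, our own Xtigervnc -> proceed (kill_stale reaps it)
#   - live PID, unrelated holder  -> refuse
collision_verdict() {
  held_pid="$1" pid_alive="$2" ours="$3"
  if [ -z "$held_pid" ] || [ "$pid_alive" != yes ]; then
    echo proceed; return 0
  fi
  if [ "$ours" = yes ]; then
    echo proceed; return 0
  fi
  echo refuse; return 1
}

collision_verdict ""   no  no          # proceed (no lock)
collision_verdict 4242 no  no          # proceed (stale lock)
collision_verdict 4242 yes yes         # proceed (our own prior Xtigervnc)
collision_verdict 4242 yes no || true  # refuse (unrelated holder)
```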
package/payload/platform/templates/agents/admin/IDENTITY.md
@@ -247,9 +247,11 @@ When `<previous-context>` is present:
  
  When `<previous-context>` is absent, Neo4j was unreachable or no prior context exists — proceed normally, using tool calls to establish state.
  
- A separate `<recovery-context>` block on the user-message side appears only when the previous turn was aborted AND the platform could not perform a true SDK resume (the parent's pending tool_use_id was not captured, or the SDK session id was lost). Treat it as the authoritative description of what failed and what was incomplete do not re-execute the failed work, do not call `session-list` to figure out what was happening, and do not re-research the blocker. The block coexists with `<previous-context>` (system-prompt session summary) on the recovery turn; the two are not duplicates — `<previous-context>` orients you to the session, `<recovery-context>` orients you to the specific failed turn.
+ A `<recovery-context>` block on the user-message side appears whenever the previous turn was aborted by a stall recovery — both when the platform performed a true SDK resume AND when it fell back to a cold-create handoff. Treat it as the authoritative description of what failed, what was incomplete, and what to do now. Do not re-execute the failed work, do not call `session-list` to figure out what was happening, and do not re-research the blocker. The block coexists with `<previous-context>` (system-prompt session summary) on the recovery turn; the two are not duplicates — `<previous-context>` orients you to the session, `<recovery-context>` orients you to the specific failed turn.
  
- When the platform CAN resume, the recovery is invisible at the prompt layer: the prior conversation is replayed by the API and the next user message you receive contains a `tool_result` for the previously in-flight tool_use, summarising what completed before the pause. This means a "Continue" turn may arrive with no preamble read the `tool_result` content to see how many sub-tasks completed, then continue the work. Do not assume a stalled subagent failed in approach: many stalls are upstream API latency, not the subagent's fault.
+ The block has two shapes. The **resume variant** announces "a synthetic tool_result for the in-flight tool_use_id was just pushed above this message" and instructs you to read it for the completed-work summary, then resume by re-issuing the next pending step concretely. The **handoff variant** carries an LLM-generated continuation summary describing what was happening before the abort. In both shapes, the next operator message means resume the work: never treat it as empty, never ask "what would you like to do?", never wait for direction.
+ 
+ The platform also operates an api-wait-ping liveness gate: a heartbeat-driven stall fire is suppressed while the SDK API request is still alive (most recent ping within the gate window), bounded by a 600 s cap. When the cap forces an abort, the synthetic tool_result names "API request stayed alive past the 600 s cap without producing tokens" — this is upstream API latency, not a flaw in the subagent's approach. Do not infer specialist failure from a long stall; many stalls are not the subagent's fault.
  
  In managed context mode, conversation history is provided within `<conversation-history>` tags. Use `session-compact-status` to retrieve older archived context if needed.
  
package/payload/platform/templates/systemd/edge.service.template
@@ -26,7 +26,7 @@ Environment=MAXY_UI_HOST=127.0.0.1
  Environment=MAXY_UI_PORT=__MAXY_UI_PORT__
  Environment=WEBSOCKIFY_HOST=127.0.0.1
  Environment=WEBSOCKIFY_PORT=6080
- Environment=DISPLAY=:99
+ Environment=DISPLAY=:__VNC_DISPLAY__
  Environment=MAXY_PLATFORM_ROOT=__INSTALL_DIR__/platform
  Environment=PATH=%h/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  StandardOutput=append:__PERSIST_DIR__/logs/edge.log
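
The new `__VNC_DISPLAY__` placeholder follows the same stamping convention as the other `__…__` tokens in this unit file. A hypothetical installer step (the helper name and the sed approach are assumptions; only the placeholder itself comes from the template):

```shell
# Stamp the brand's vncDisplay (from brand.json) into a unit-file template.
stamp_unit() {
  template="$1"
  vnc_display="$2"
  sed "s/__VNC_DISPLAY__/${vnc_display}/g" "$template"
}

tmp=$(mktemp)
echo 'Environment=DISPLAY=:__VNC_DISPLAY__' > "$tmp"
stamp_unit "$tmp" 100   # Environment=DISPLAY=:100
rm -f "$tmp"
```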