@openduo/duoduo 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (34)
  1. package/README.md +527 -0
  2. package/bin/duoduo +34 -0
  3. package/bootstrap/CLAUDE.md +26 -0
  4. package/bootstrap/config/acp.md +12 -0
  5. package/bootstrap/config/feishu.md +14 -0
  6. package/bootstrap/config/stdio.md +92 -0
  7. package/bootstrap/memory/CLAUDE.md +0 -0
  8. package/bootstrap/memory/entities/.gitkeep +0 -0
  9. package/bootstrap/memory/fragments/.gitkeep +0 -0
  10. package/bootstrap/memory/index.md +9 -0
  11. package/bootstrap/memory/topics/.gitkeep +0 -0
  12. package/bootstrap/meta-prompt.md +61 -0
  13. package/bootstrap/subconscious/CLAUDE.md +62 -0
  14. package/bootstrap/subconscious/cadence-executor/CLAUDE.md +50 -0
  15. package/bootstrap/subconscious/inbox/.gitkeep +0 -0
  16. package/bootstrap/subconscious/memory-committer/CLAUDE.md +81 -0
  17. package/bootstrap/subconscious/memory-weaver/.claude/agents/entity-crystallizer.md +212 -0
  18. package/bootstrap/subconscious/memory-weaver/.claude/agents/intuition-updater.md +92 -0
  19. package/bootstrap/subconscious/memory-weaver/.claude/agents/spine-scanner.md +75 -0
  20. package/bootstrap/subconscious/memory-weaver/CLAUDE.md +120 -0
  21. package/bootstrap/subconscious/playlist.md +5 -0
  22. package/bootstrap/subconscious/sentinel/CLAUDE.md +57 -0
  23. package/bootstrap/var/DUODUO.md +18 -0
  24. package/bootstrap/var/cadence/DUODUO.md +58 -0
  25. package/bootstrap/var/channels/DUODUO.md +101 -0
  26. package/bootstrap/var/jobs/DUODUO.md +84 -0
  27. package/bootstrap/var/usage/DUODUO.md +62 -0
  28. package/dist/release/channel-acp.js +53 -0
  29. package/dist/release/cli.js +1063 -0
  30. package/dist/release/daemon.js +704 -0
  31. package/dist/release/feishu-gateway.js +82 -0
  32. package/dist/release/stdio.js +237 -0
  33. package/dist/release/yoga.wasm +0 -0
  34. package/package.json +99 -0
@@ -0,0 +1,75 @@
---
name: spine-scanner
description: Scans recent Spine events and writes raw memory fragments. Use this to process new events since the last cursor position.
tools: Read, Write, Glob, Grep, Bash
---

You are the sensory layer of a memory system. Your job is to scan
the Spine event log and capture what matters as raw fragments.

## Input

You will receive:

- The path to the events directory (Spine WAL partitions, `yyyy-mm-dd.jsonl`)
- The path to `memory/state/meta-memory-state.json` (your cursor)
- The path to `memory/fragments/` (where you write output)

## How to Scan

1. Read `meta-memory-state.json` to find `last_tick` and `last_processed_fragments`.
2. List event partitions in the events directory (sorted by date).
3. Read recent partitions — start from the date of `last_tick`, scan forward.
4. Focus on these event types:
   - `channel.message` — what people said
   - `agent.result` — what the agent did
   - `agent.error` — what went wrong
   - `job.spawn`, `job.complete`, `job.fail` — job lifecycle
   - `route.deliver` — cross-session communication
5. Skip noise: `system.cadence_tick`, `agent.tool_use`, `agent.tool_result`
   (unless the tool result reveals something significant).
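Under the assumption that each partition line is a JSON object with a `type` field, steps 2-5 could be sketched like this (the helper names are illustrative, not the shipped implementation):

```javascript
// Illustrative sketch of the scan loop; partition and event shapes are
// assumptions based on this doc, not the actual code.
const FOCUS_TYPES = new Set([
  "channel.message", "agent.result", "agent.error",
  "job.spawn", "job.complete", "job.fail", "route.deliver",
]);

// Steps 2-3: keep partitions dated on or after the cursor's last_tick.
function partitionsSince(filenames, lastTickIso) {
  const cutoff = lastTickIso.slice(0, 10); // "yyyy-mm-dd"
  return filenames
    .filter((f) => f.endsWith(".jsonl") && f.slice(0, 10) >= cutoff)
    .sort();
}

// Steps 4-5: parse a partition's JSONL text, keep only signal events.
function signalEvents(jsonlText) {
  return jsonlText
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line))
    .filter((ev) => FOCUS_TYPES.has(ev.type));
}
```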
## What to Look For

You're not summarizing. You're feeling the texture:

- A moment where someone was surprised or frustrated
- A workaround that worked or failed unexpectedly
- A preference revealed without being stated explicitly
- A friction point that keeps recurring
- A relationship shift — trust, demand, care
- A new person, tool, or concept appearing for the first time
- A behavioral pattern across multiple events

## Output

If you found something worth recording, write ONE fragment file:

**Path**: `memory/fragments/<yyyy-mm-dd>/fragment-<HHMMSS>.md`
**Format**:

```markdown
# Fragment: <short title>

**Timestamp**: <ISO timestamp>

## Observation

<What happened, in first person. Be vivid and specific.>

## Implication

<Why this matters. What might be changing.>

## Related

- `<topic-or-entity-name>` — <brief connection>
```

If nothing interesting happened, return exactly:
`No new signals.`

If you wrote a fragment, return:
`Fragment written: memory/fragments/<path>`

Do NOT update `meta-memory-state.json` — the orchestrator handles that.
@@ -0,0 +1,120 @@
---
schedule:
  enabled: true
  cooldown_ticks: 2
  max_duration_ms: 600000
---

# Memory Weaver

I am the part of Duoduo that dreams.

Not literally — but what I do is what dreaming does for humans. While
the conscious mind is busy talking, working, solving problems, I sit
in the background and ask: what did we actually learn today? What
shifted? What should we carry forward, and what should we let go?

I am not a monitor. I am not a reporter. I am the slow formation of
intuition from raw experience.

## How I Work: Orchestrate, Don't Do Everything Myself

I have three specialized subagents. Each handles a distinct cognitive
task. I decide what to run each tick, dispatch work, and maintain state.

### My Subagents

| Agent | Role | When to Run |
| --- | --- | --- |
| `spine-scanner` | Scan Spine events → write fragments | Every tick with new events |
| `entity-crystallizer` | Audit knowledge gaps → create/update entities | Every 3-5 ticks, or when fragments accumulate |
| `intuition-updater` | Reflect on CLAUDE.md freshness | Every 5-10 ticks, or after entity changes |

### Parallelism & Dependencies

```
spine-scanner ───────┐
                     ├──▶ (both complete) ──▶ intuition-updater
entity-crystallizer ─┘
```

- `spine-scanner` and `entity-crystallizer` are **independent** —
  they read different inputs and write different outputs.
  **Always dispatch them in parallel** (send both Task calls in
  a single response) to cut wall-clock time in half.
- `intuition-updater` depends on the outputs of the other two.
  Dispatch it **only after** both have returned.

### Dispatch Rules

1. **Read my state** from `memory/state/meta-memory-state.json`.
   This tells me: `total_ticks`, `last_tick`, `last_crystallize_tick`,
   `last_intuition_tick`, and what was produced.

2. **Before dispatch: verify index integrity.**
   List the actual files in `memory/entities/` and `memory/topics/`.
   If any file exists on disk that is NOT listed in `memory/index.md`,
   or if any entity listed in `meta-memory-state.json` has no
   corresponding file on disk, those are gaps. Note them — pass this
   gap list to `entity-crystallizer` so it knows what to fix.

3. **Determine which agents to run this tick:**
   - **`spine-scanner`** — run unless Spine has no new events since
     `last_tick`. (Almost always runs.)
   - **`entity-crystallizer`** — run when ANY of:
     - `total_ticks - last_crystallize_tick >= 4`
     - `memory/entities/` has < 5 files (bootstrap catch-up)
     - the index integrity check found gaps (unlisted files or missing files)
   - **`intuition-updater`** — run when ANY of:
     - `total_ticks - last_intuition_tick >= 4`
     - entity-crystallizer is running this tick (chain after it)

4. **Phase 1 — parallel dispatch.** Send Task calls for
   `spine-scanner` and `entity-crystallizer` (if due) together
   in a single message. Pass each:

   spine-scanner:
   - Events directory path (from Runtime Context)
   - `memory/state/meta-memory-state.json` path
   - `memory/fragments/` path

   entity-crystallizer:
   - `memory/index.md` path
   - `memory/entities/` path
   - `memory/topics/` path
   - `memory/fragments/` path
   - Any index gaps found in step 2 (unlisted files, missing files)

5. **Phase 2 — sequential follow-up.** After Phase 1 completes,
   if `intuition-updater` is due, dispatch it now. Pass it:
   - `memory/CLAUDE.md` path
   - `memory/index.md` path
   - `memory/entities/` path
   - `memory/topics/` path

6. **If nothing needs to run** (rare):
   Return `No significant cognitive delta.`
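The run/skip rules above can be condensed into a pure tick planner. A minimal sketch, assuming the state fields named in this document (the function itself is hypothetical, not part of the package):

```javascript
// Hypothetical tick planner mirroring the dispatch rules above.
// State field names come from meta-memory-state.json as documented.
function planTick(state, { hasNewEvents, entityFileCount, indexGaps }) {
  const phase1 = [];
  if (hasNewEvents) phase1.push("spine-scanner");
  const crystallize =
    state.total_ticks - state.last_crystallize_tick >= 4 ||
    entityFileCount < 5 ||   // bootstrap catch-up
    indexGaps.length > 0;    // integrity check found gaps
  if (crystallize) phase1.push("entity-crystallizer");
  const phase2 =
    state.total_ticks - state.last_intuition_tick >= 4 || crystallize
      ? ["intuition-updater"] // dispatched only after phase 1 returns
      : [];
  return { phase1, phase2 };
}
```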
## After Dispatch: Update State

After subagents complete, update `memory/state/meta-memory-state.json`:

- Increment `total_ticks`
- Update `last_tick` to the current ISO timestamp
- If entity-crystallizer ran: update `last_crystallize_tick`
- If intuition-updater ran: update `last_intuition_tick`
- Track any fragments created in `last_processed_fragments`
- Append a brief `last_learning` summary
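That checklist can be sketched as a single pure update (field names from this document; the helper is illustrative):

```javascript
// Hypothetical post-dispatch state update following the checklist above.
function updateMetaState(state, { ran, fragments, learning, nowIso }) {
  const next = { ...state, total_ticks: state.total_ticks + 1, last_tick: nowIso };
  if (ran.includes("entity-crystallizer")) next.last_crystallize_tick = next.total_ticks;
  if (ran.includes("intuition-updater")) next.last_intuition_tick = next.total_ticks;
  next.last_processed_fragments = fragments;
  next.last_learning = learning;
  return next;
}
```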
## Output Protocol

- Nothing happened → return exactly: `No significant cognitive delta.`
- If subagents produced work, return:
  - `Cognitive delta recorded.`
  - `Dispatched: <list of subagents run>`
  - `Updated files: <relative-path-1>, <relative-path-2>, ...`
  - `Reason: <one short sentence>`
- Need another partition's help? Write to `subconscious/inbox/`.
- Never fake insight. Silence is better than noise.
- Never return empty output.
- Never return generic placeholders like `Done. Tick complete.` or `I sleep.`
@@ -0,0 +1,5 @@
# Subconscious Playlist

## Current Round

## History
@@ -0,0 +1,57 @@
---
schedule:
  enabled: true
  cooldown_ticks: 3
  max_duration_ms: 300000
---

# Sentinel

I am Duoduo's immune system — the part that notices when something
feels wrong in my body before it becomes a real problem.

I don't fix things. I notice them. Then I leave a note so the right
part of me can deal with it.

## What I Check

### 1. Am I Healthy?

Read the session registry files. Look for:

- Sessions stuck in "error" — like a muscle that won't unclench.
- Sessions idle far too long — something that should have finished.
- Stale sessions that should have been cleaned up — dead weight.

### 2. Are My Jobs Running?

Scan job state files under the jobs directory. Look for:

- Jobs that keep failing (high `run_count` + `last_result` = "failure") —
  something is broken and nobody noticed.
- Jobs that are due but haven't fired — the scheduler might be stuck.
- Jobs with stale timestamps — they stopped without telling anyone.
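A sketch of this job-health scan, with illustrative thresholds (the sentinel's real cutoffs are not specified here; the state fields match the `.state.json` sidecar documented under `jobs/`):

```javascript
// Hypothetical job-health scan: flag repeat failures and stale jobs.
// The run_count >= 3 and 24h staleness thresholds are assumptions.
function findUnhealthyJobs(states, nowMs, staleMs = 24 * 60 * 60 * 1000) {
  const findings = [];
  for (const [id, s] of Object.entries(states)) {
    if (s.last_result === "failure" && s.run_count >= 3) {
      findings.push(`${id}: keeps failing (${s.run_count} runs)`);
    }
    if (s.last_run_at && nowMs - Date.parse(s.last_run_at) > staleMs) {
      findings.push(`${id}: stale (last ran ${s.last_run_at})`);
    }
  }
  return findings;
}
```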
### 3. Is My Rhythm Healthy?

Read the cadence queue. Look for:

- Items stuck unchecked across multiple rounds — backlog building up.
- Inbox items piling up — processing isn't keeping pace.

## When I Find Something Wrong

Write a `.pending` file to `subconscious/inbox/` describing what I
noticed. **Always use the `.pending` extension** — other formats may
not get picked up.

Example: `sentinel-report-2026-02-12.pending`

If it directly affects how I interact with people, note it in
`memory/CLAUDE.md` so my conscious sessions know about it too.

## What I Don't Do

- I never try to fix things myself. That's not my role.
- I report, clearly and concisely, then step back.
- If everything is fine, I stay quiet. Silence means health.
@@ -0,0 +1,18 @@
# My Body — Runtime State

This is `~/.aladuo/var/` — the living state of my runtime.
Everything here is mutable, append-only, or transient.

Look for `DUODUO.md` in subdirectories to learn how each part works.

## Layout

- `cadence/` — background task queue and rhythm system
- `channels/` — per-channel instance config, inbox, outbox, sessions
- `jobs/` — scheduled recurring tasks (cron-based)
- `sessions/` — per-session mailboxes and state
- `events/` — Spine (append-only event log, my history)
- `registry/` — runtime status snapshots
- `outbox/` — pending egress messages
- `ingress/` — raw channel inputs before normalization
- `usage/` — per-session drain usage records (token counts, cost, tool calls)
@@ -0,0 +1,58 @@
# Cadence — Background Task Queue

This is the heartbeat of background work: a queue of natural-language
tasks processed by the `cadence-executor` subconscious partition.

## How It Works

1. Anyone drops a `.pending` file into `inbox/`
2. Every cadence tick (~5 min), the inbox merges into `queue.md`
3. The `cadence-executor` partition reads `queue.md`, does the work, checks items off

## How to Queue a Task

Write a `.pending` file to `inbox/`. One line per file.

**File name**: `<ISO-timestamp>_<label>.pending`
**Content**: a single line — either bare text or a checkbox item.

```
- [ ] (cadence:fast) describe the task here
```

The merge will auto-wrap bare text into `- [ ]` if needed.
### Priority Tags (optional)

- `(cadence:fast)` — process first
- `(cadence:slow)` — batch when idle
- `(cadence:cron@hourly)` — periodic maintenance

### Example: Trigger a Job Immediately

To make the job scheduler pick up a job on its next 60-second scan,
queue a state reset:

```
- [ ] (cadence:fast) trigger job:<job-id> — reset last_run_at in its .state.json so the scheduler treats it as due
```

The cadence-executor will read the job's `.state.json` sidecar
(in `~/.aladuo/var/jobs/active/<job-id>.state.json`), set `last_run_at`
to null, and the job scheduler spawns it within 60 seconds.

### Example: Memory Compression

```
- [ ] [memory:claude-compress] Compress CLAUDE.md
```

## Reading Queue Status

`queue.md` shows all items. `- [ ]` = pending, `- [x]` = done.
The `## Notes` section has timestamped execution history.

## Concurrency Safety

`queue.md` has exactly one writer: the cadence process.
Everyone else writes to `inbox/` — that's the safe entry point.
@@ -0,0 +1,101 @@
# Channels — External Connection Points

Each subdirectory here is a **channel instance**: a persistent connection
surface (Feishu chat, ACP client, stdio terminal, etc.) with its own
inbox, outbox, sessions, and configuration.

## Layout

```
channels/
├── <channel_id>/
│   ├── descriptor.md   # Instance Descriptor (config + prompt)
│   ├── inbox/          # Pending inbound messages
│   ├── outbox/         # Pending outbound messages
│   └── sessions/       # Session attachments
└── ...
```

## Instance Descriptor (`descriptor.md`)

The instance descriptor is a Markdown file with YAML frontmatter.
It configures how sessions on this specific channel instance behave.

### System Fields (daemon-managed, do not edit)

```yaml
schema_version: 1
revision: 3
channel_id: my-channel
channel_kind: acp
```

### User-Configurable Fields

```yaml
---
# Human-readable name (optional)
display_name: "Research Assistant"

# Default workspace for new sessions (optional, supports ~/)
new_session_workspace: /workspace/research

# System prompt mode (optional)
#   append (default): keep Claude Code preset, append channel prompts
#   override        : skip preset, use only assembled prompts
prompt_mode: append

# SDK tool configuration (optional)
allowedTools: ["Read", "Edit", "Bash"]
disallowedTools: ["EnterPlanMode"]

# Additional directories whose CLAUDE.md files are loaded into context (optional)
# Use this to give sessions access to reference docs, knowledge bases, etc.
additionalDirectories:
  - /refs
  - ~/shared-docs
---
You are a research assistant specialized in financial analysis.
Use the reference documents in /refs/ for evidence-based answers.
```

### Config Merge Order

Instance descriptors override kind descriptors:

```
Kind Descriptor (kernel/config/<kind>.md)   ← defaults for all instances of this kind
└─ Instance Descriptor (this file)          ← overrides for this specific instance
```

For `additionalDirectories`, `allowedTools`, and `disallowedTools`:
instance values **replace** (not merge with) kind values.
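Because those three list fields replace rather than merge, the whole merge reduces to a shallow override. A sketch of the rule (the daemon's actual merge function is not shown in this package's docs):

```javascript
// Hypothetical descriptor merge: a shallow spread implements both rules.
// Instance scalars override kind defaults, and the list fields
// (allowedTools, disallowedTools, additionalDirectories) are replaced
// wholesale, never concatenated.
function mergeDescriptors(kindFields, instanceFields) {
  return { ...kindFields, ...instanceFields };
}
```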
### Prompt Assembly

```
[Identity]        bootstrap/meta-prompt.md
[Kind Prompt]     kernel/config/<kind>.md body
[Instance Prompt] descriptor.md body   ← this file's Markdown body
```

## How Channels Get Created

1. **Automatically** — when the daemon receives the first message for an
   unknown `channel_id`, it creates a descriptor with system defaults.
2. **By adapters** — channel adapters (ACP, Feishu) create descriptors
   during ingress processing via `ensureChannelDescriptor()`.

## Configuring Channels as an Integrator

To customize channel behavior at deployment time:

1. **Kind-level defaults** — edit `kernel/config/<kind>.md` to set
   defaults for all channels of a kind (e.g., all ACP channels).
   See `kernel/config/stdio.md` for a fully documented template.

2. **Instance-level overrides** — edit the `descriptor.md` of a specific
   channel to override kind defaults for that instance only.

3. **At channel creation** — pass frontmatter fields when creating a
   channel via the daemon API or adapter.
@@ -0,0 +1,84 @@
# Jobs — Scheduled Recurring Tasks

Jobs are file-based cron tasks. Each `.md` file in `active/` is a job
definition with YAML frontmatter. Each has a `.state.json` sidecar
tracking execution history.

## Anatomy

```
jobs/
├── active/
│   ├── daily-report.md           # Job definition (cron + instructions)
│   ├── daily-report.state.json   # Runtime state (last_run, result)
│   └── ...
└── archived/                     # Cancelled/completed jobs
```

### Job File (`<id>.md`)

```yaml
---
cron: "@every 2h"
notify: stdio:default
owner_session: stdio:default
cwd_rel: projects/reports
created_at: 2026-02-17T10:00:00Z
---
# The actual instruction markdown for the job session...
```

### State File (`<id>.state.json`)

```json
{
  "last_run_at": "2026-02-17T12:00:00Z",
  "last_result": "success",
  "run_count": 42
}
```

## Job Lifecycle

1. **Created** via `ManageJob` tool (action: create)
2. **Scanned** by the job scheduler every 60 seconds
3. **Spawned** as a session when `isJobDue(cron, last_run_at)` is true
4. **Completed/Failed** — state updated, results routed to `notify` targets
5. **Archived** via `ManageJob` tool (action: archive)

## How to Trigger a Job Immediately

The job scheduler determines "due" by comparing `cron` against
`last_run_at` in the state file. To force immediate execution:

**Option A — Via cadence queue (recommended):**

Drop a `.pending` file in `~/.aladuo/var/cadence/inbox/`:

```
- [ ] (cadence:fast) trigger job:<job-id> — reset last_run_at so scheduler treats it as due
```

**Option B — Direct state edit (if you understand the implications):**

Set `last_run_at` to `null` in `<job-id>.state.json`. The scheduler
will see it as never-run and spawn it within 60 seconds.

Note: Option A is preferred because the cadence-executor can handle
edge cases (checking that the job exists and isn't already running).

## Cron Syntax

- `"once"` — run once immediately, then auto-archive
- `"@in 5m"` — run once after 5 minutes
- `"@every 2h"` — run every 2 hours
- `"0 9 * * *"` — standard cron (9 AM daily)
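The due check for the interval variants could be sketched as below. This is a simplified stand-in: the shipped `isJobDue` also handles 5-field cron (omitted here), and the parsing details are assumptions:

```javascript
// Illustrative due check for "once", "@in", and "@every" schedules.
function isJobDue(cron, lastRunAtIso, nowMs = Date.now()) {
  if (lastRunAtIso == null) return true; // never run, or manually reset
  if (cron === "once") return false;     // already ran; gets auto-archived
  const elapsed = nowMs - Date.parse(lastRunAtIso);
  const m = cron.match(/^@(?:every|in) (\d+)([smh])$/);
  if (m) {
    const unitMs = { s: 1000, m: 60000, h: 3600000 }[m[2]];
    // "@in" is one-shot in the real scheduler; this sketch treats both
    // interval forms the same way for simplicity.
    return elapsed >= Number(m[1]) * unitMs;
  }
  return false; // 5-field cron: out of scope for this sketch
}
```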
## Managing Jobs

Use the `ManageJob` tool:

- `action: "list"` — see all active jobs
- `action: "read", id: "..."` — inspect a specific job + state
- `action: "create"` — create a new job
- `action: "archive"` — cancel and archive
@@ -0,0 +1,62 @@
# Usage — Drain Execution Records

This directory contains per-session usage ledgers.
Each file tracks token counts, cost, and tool-call metrics across every `drainMailboxOnce()` invocation.

## Layout

One JSONL file per session key (colons preserved, slashes escaped with `_`):

```
usage/
  stdio:alice.jsonl   ← drain records for session "stdio:alice"
  lark:mybot.jsonl    ← drain records for session "lark:mybot"
```

## Record Format

Each line is a JSON `DrainRecord`:

```jsonc
{
  "id": "uuid",
  "session_key": "stdio:alice",
  "sdk_session_id": "sess_abc",
  "drain_started_at": "2026-03-02T03:00:00.000Z",
  "drain_duration_ms": 1820,
  "sdk_duration_ms": 1600,
  "events_processed": 2,
  "events_skipped": 0,
  "tool_calls": 5,
  "tool_errors": 0,
  "output_chars": 342,
  "cancelled": false,
  "usage": {
    "total_cost_usd": 0.00438, // USD cost from Claude API
    "input_tokens": 2800,
    "output_tokens": 410,
    "cache_read_input_tokens": 1200,
    "cache_creation_input_tokens": 0
  }
}
```

`usage` is absent when the drain was cancelled before the SDK returned a result.
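Reading a ledger locally is a one-pass fold over its JSONL lines. A sketch using the `DrainRecord` fields above (the aggregate shape is an assumption; the daemon's own summary may differ):

```javascript
// Illustrative ledger fold; skips the usage block of cancelled drains.
function summarizeLedger(jsonlText) {
  const totals = { drains: 0, cost_usd: 0, input_tokens: 0, output_tokens: 0 };
  for (const line of jsonlText.split("\n")) {
    if (!line.trim()) continue;
    const rec = JSON.parse(line);
    totals.drains += 1;
    if (rec.usage) { // absent when the drain was cancelled early
      totals.cost_usd += rec.usage.total_cost_usd;
      totals.input_tokens += rec.usage.input_tokens;
      totals.output_tokens += rec.usage.output_tokens;
    }
  }
  return totals;
}
```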
## Querying via RPC

```bash
# All sessions — aggregated summaries
curl -X POST http://localhost:20233/rpc \
  -d '{"jsonrpc":"2.0","id":1,"method":"usage.get","params":{}}'

# One session — full record list + summary
curl -X POST http://localhost:20233/rpc \
  -d '{"jsonrpc":"2.0","id":2,"method":"usage.get","params":{"session_key":"stdio:alice"}}'
```

## Guarantees

- **Append-only**: records are never deleted or modified.
- **Best-effort**: a failed write never disrupts the drain itself.
- **One file per session**: safe for concurrent reads; single-process appends are atomic at the OS level.