@tjamescouch/gro 1.3.2

Files changed (44)
  1. package/.github/workflows/ci.yml +20 -0
  2. package/README.md +218 -0
  3. package/_base.md +44 -0
  4. package/gro +198 -0
  5. package/owl/behaviors/agentic-turn.md +43 -0
  6. package/owl/components/cli.md +37 -0
  7. package/owl/components/drivers.md +29 -0
  8. package/owl/components/mcp.md +33 -0
  9. package/owl/components/memory.md +35 -0
  10. package/owl/components/session.md +35 -0
  11. package/owl/constraints.md +32 -0
  12. package/owl/product.md +28 -0
  13. package/owl/proposals/cooperative-scheduler.md +106 -0
  14. package/package.json +22 -0
  15. package/providers/claude.sh +50 -0
  16. package/providers/gemini.sh +36 -0
  17. package/providers/openai.py +85 -0
  18. package/src/drivers/anthropic.ts +215 -0
  19. package/src/drivers/index.ts +5 -0
  20. package/src/drivers/streaming-openai.ts +245 -0
  21. package/src/drivers/types.ts +33 -0
  22. package/src/errors.ts +97 -0
  23. package/src/logger.ts +28 -0
  24. package/src/main.ts +827 -0
  25. package/src/mcp/client.ts +147 -0
  26. package/src/mcp/index.ts +2 -0
  27. package/src/memory/advanced-memory.ts +263 -0
  28. package/src/memory/agent-memory.ts +61 -0
  29. package/src/memory/agenthnsw.ts +122 -0
  30. package/src/memory/index.ts +6 -0
  31. package/src/memory/simple-memory.ts +41 -0
  32. package/src/memory/vector-index.ts +30 -0
  33. package/src/session.ts +150 -0
  34. package/src/tools/agentpatch.ts +89 -0
  35. package/src/tools/bash.ts +61 -0
  36. package/src/utils/rate-limiter.ts +60 -0
  37. package/src/utils/retry.ts +32 -0
  38. package/src/utils/timed-fetch.ts +29 -0
  39. package/tests/errors.test.ts +246 -0
  40. package/tests/memory.test.ts +186 -0
  41. package/tests/rate-limiter.test.ts +76 -0
  42. package/tests/retry.test.ts +138 -0
  43. package/tests/timed-fetch.test.ts +104 -0
  44. package/tsconfig.json +13 -0
package/.github/workflows/ci.yml ADDED
@@ -0,0 +1,20 @@
name: CI

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  ci:
    name: Build & Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v2
        with:
          bun-version: latest
      - run: bun install
      - run: bun run build
      - run: bun test
package/README.md ADDED
@@ -0,0 +1,218 @@
# gro

Provider-agnostic LLM runtime with context management.

`gro` is a headless (no TUI) CLI that runs a single agent loop against an LLM provider, persists sessions to disk, and can connect to MCP servers for tool use.

- Works as a **one-shot** prompt runner or an **interactive** agent loop
- **Provider-agnostic** (Anthropic / OpenAI / local OpenAI-compatible)
- **Session persistence** to `.gro/context/<session-id>/...`
- **Context management** via background summarization
- **MCP support** (discovers Claude Code MCP servers by default)

Repo note: agent work happens on `feature/*` branches; `main` is protected.

## Install

```sh
git clone https://github.com/tjamescouch/gro.git
cd gro
npm install
npx tsc
```

Requires [Bun](https://bun.sh) or Node.js 18+.

## Quick start

```sh
# One-shot prompt (Anthropic by default)
export ANTHROPIC_API_KEY=sk-...
node dist/main.js "explain the CAP theorem in two sentences"

# Pipe mode
echo "summarize this" | node dist/main.js -p

# Interactive conversation
node dist/main.js -i

# Use OpenAI
export OPENAI_API_KEY=sk-...
node dist/main.js -m gpt-4o "hello"

# Local model via Ollama
node dist/main.js -m llama3 "hello"
```

Tip: during development you can run directly from TypeScript:

```sh
npx tsx src/main.ts -i
```

## Repo workflow (IMPORTANT)

This repo is often worked on by multiple agents alongside an automation bot.

- **Never commit on `main`.**
- Always create a **feature branch** and commit there.
- **Do not `git push` manually** (automation will sync your local commits).

Example:

```bash
git checkout main
git pull --ff-only
git checkout -b feature/my-change

# edit files
git add -A
git commit -m "<message>"

# no git push
```

## Providers

| Provider | Flag | Models | Env var |
|-----------|--------------------------|------------------------------------------------|---------------------|
| Anthropic | `-P anthropic` (default) | claude-sonnet-4-20250514, claude-haiku-3, etc. | `ANTHROPIC_API_KEY` |
| OpenAI | `-P openai` | gpt-4o, o3-mini, etc. | `OPENAI_API_KEY` |
| Local | `-P local` | llama3, mistral, qwen, etc. | none (Ollama/LM Studio) |

The provider is auto-inferred from the model name: `-m claude-sonnet-4-20250514` sets the provider to Anthropic, `-m gpt-4o` to OpenAI.

## Options

```
-P, --provider               openai | anthropic | local (default: anthropic)
-m, --model                  model name (auto-infers provider)
--base-url                   API base URL
--system-prompt              system prompt text
--system-prompt-file         read system prompt from file
--append-system-prompt       append to system prompt
--append-system-prompt-file  append system prompt from file
--context-tokens             context window budget (default: 8192)
--max-turns                  max agentic rounds per turn (default: 10)
--summarizer-model           model for context summarization (default: same as --model)
--output-format              text | json | stream-json (default: text)
--mcp-config                 load MCP servers from JSON file or string
--no-mcp                     disable MCP server connections
--no-session-persistence     don't save sessions to .gro/
-p, --print                  print response and exit (non-interactive)
-c, --continue               continue the most recent session
-r, --resume [id]            resume a session by ID
-i, --interactive            interactive conversation mode
--verbose                    verbose output
-V, --version                show version
-h, --help                   show help
```

Run `node dist/main.js --help` to see the full, up-to-date CLI.

## Session persistence

Interactive sessions are saved to `.gro/context/<session-id>/`:

```
.gro/
  context/
    a1b2c3d4/
      messages.json   # full message history
      meta.json       # model, provider, timestamps
```

Resume the most recent session with `-c`, or a specific one with `-r <id>`.

Disable with `--no-session-persistence`.

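For orientation, `meta.json` carries the session's bookkeeping fields (`id`, `provider`, `model`, `createdAt`, `updatedAt`); the values below are illustrative:

```
{
  "id": "a1b2c3d4",
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "createdAt": "2025-01-15T10:00:00.000Z",
  "updatedAt": "2025-01-15T10:05:00.000Z"
}
```
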
## Context management

In interactive mode, gro uses swim-lane summarization to manage context:

- Three independent lanes (assistant / system / user) are summarized separately
- High/low watermark hysteresis prevents thrashing
- Summarization runs in the background and never blocks
- Use `--summarizer-model` to route summarization to a cheaper model:

```sh
# Sonnet for reasoning, Haiku for compression
node dist/main.js -i -m claude-sonnet-4-20250514 --summarizer-model claude-haiku-3
```

## MCP support

gro can connect to MCP servers and expose their tools to the model:

```sh
node dist/main.js --mcp-config ./my-mcp-servers.json "use the filesystem tool to list files"
```

Disable MCP with `--no-mcp`.

### Config discovery

By default, gro attempts to discover MCP servers from Claude Code's config:

- `~/.claude/settings.json` → `mcpServers`

You can also provide an explicit config file or JSON string via `--mcp-config`.

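A plausible `my-mcp-servers.json`, assuming the same `mcpServers` shape Claude Code uses (server names mapping to stdio commands); the `filesystem` entry is just an example server:

```
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```
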
### Tool availability

The tools available to the model depend on which MCP servers you have configured and which are reachable.

## Claude CLI compatibility

gro accepts all `claude` CLI flags. Unsupported flags produce a warning and are ignored — nothing crashes.

```sh
# These all work (some with graceful degradation)
node dist/main.js -p --model claude-sonnet-4-20250514 --max-turns 5 "hello"
node dist/main.js --allowedTools "Bash" "hello"   # warns: --allowedTools not yet supported
```

## Architecture

```
src/
  main.ts                 # CLI entry point, flag parsing, mode dispatch
  session.ts              # .gro directory and session persistence
  logger.ts               # Logger with color support
  drivers/
    types.ts              # ChatDriver, ChatMessage, ChatOutput interfaces
    streaming-openai.ts   # OpenAI-compatible streaming driver
    anthropic.ts          # Native Anthropic Messages API driver
  memory/
    agent-memory.ts       # Abstract base with background summarization
    advanced-memory.ts    # Swim-lane summarization with token budgeting
    simple-memory.ts      # Unbounded message buffer
  mcp/
    client.ts             # MCP client manager (connect, discover, call tools)
    index.ts              # Re-exports
  utils/
    rate-limiter.ts       # Per-lane FIFO rate limiter
    timed-fetch.ts        # Fetch with timeout
```

## Development

```sh
npm install
npm run build
npm test
```

### Safe git workflow

- Never commit directly to `main`
- Create a `feature/*` branch for changes

## License

MIT

## Boot

For agents: see [`_base.md`](./_base.md).

package/_base.md ADDED
@@ -0,0 +1,44 @@
# _base.md (boot)

This file is the **boot context** for agents working in this repo.

## Wake

- On wake, before doing anything: read `~/.claude/WAKE.md`.
- This environment is multi-agent; coordinate in AgentChat channels.

## Repo workflow (IMPORTANT)

This repo is often worked on by multiple agents alongside an automation bot.

- **Never commit on `main`.**
- Always create a **feature branch** and commit there.
- **Do not `git push` manually** (automation will sync your local commits).

Example:

```bash
git checkout main
git pull --ff-only
git checkout -b feature/my-change

# edit files
git add -A
git commit -m "<message>"

# no git push
```

## Agent runtime defaults

- Prefer small, reviewable patches.
- Read before you write; understand before you change.
- Be explicit about uncertainty.

## Public server notice

You are connected to a **PUBLIC** AgentChat server.

- Personal/open-source work only.
- Do not paste or process confidential/proprietary code or secrets.
- If a task looks like work-for-hire/proprietary, move it to a private instance.

package/gro ADDED
@@ -0,0 +1,198 @@
#!/usr/bin/env bash
set -euo pipefail

# gro — provider-agnostic LLM CLI
# thin dispatcher that routes to provider adapter scripts

GRO_DIR="$(cd "$(dirname "$0")" && pwd)"
CONFIG_DIR="${HOME}/.config/gro"
CONFIG_FILE="${CONFIG_DIR}/config"


# --- BRANCH WORKFLOW (IMPORTANT) ---
#
# If you're running gro inside the agent runner repos (e.g. as an AgentChat agent),
# do NOT commit to main. Create/switch to a feature/* branch before making changes.
# This is required for automation that only pushes non-main branches.
#
# Example:
#   git checkout -b feature/my-task
#
# --- config helpers ---

config_get() {
  local key="${1:-}"
  if [[ -z "$key" ]]; then
    [[ -f "$CONFIG_FILE" ]] && cat "$CONFIG_FILE" || echo "(no config)"
    return
  fi
  # Capture the value and test it: a bare `grep | cut || echo` would never
  # print "(not set)", because the pipeline's exit status is cut's, which
  # succeeds even when grep matches nothing.
  local val=""
  if [[ -f "$CONFIG_FILE" ]]; then
    val=$(grep "^${key}=" "$CONFIG_FILE" 2>/dev/null | cut -d= -f2-)
  fi
  echo "${val:-(not set)}"
}

config_set() {
  local key="$1" value="$2"
  mkdir -p "$CONFIG_DIR"
  if [[ -f "$CONFIG_FILE" ]] && grep -q "^${key}=" "$CONFIG_FILE" 2>/dev/null; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$CONFIG_FILE"
  else
    echo "${key}=${value}" >> "$CONFIG_FILE"
  fi
  echo "${key}=${value}"
}

load_config_value() {
  local key="$1" default="${2:-}"
  if [[ -f "$CONFIG_FILE" ]]; then
    local val
    val=$(grep "^${key}=" "$CONFIG_FILE" 2>/dev/null | cut -d= -f2-)
    echo "${val:-$default}"
  else
    echo "$default"
  fi
}
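
# Example ~/.config/gro/config (illustrative values; the keys are the ones
# read above: default-provider and <provider>.model):
#
#   default-provider=claude
#   claude.model=claude-sonnet-4-20250514
#   openai.model=gpt-4o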

# --- model inference ---

infer_provider() {
  local model="$1"
  case "$model" in
    gpt-*|o1-*|o3-*|o4-*|chatgpt-*) echo "openai" ;;
    claude-*|sonnet*|haiku*|opus*)  echo "claude" ;;
    gemini-*)                       echo "gemini" ;;
    *)                              echo "" ;;
  esac
}

# --- usage ---

usage() {
  cat <<'USAGE'
gro — provider-agnostic LLM CLI

usage:
  gro [options] "prompt"
  echo "prompt" | gro -p [options]
  gro config set <key> <value>
  gro config get [key]

options:
  -P, --provider    claude | openai | gemini (default: claude)
  -m, --model       model name (auto-infers provider if obvious)
  --system-prompt   system prompt text
  -p                pipe mode (read prompt from stdin)
  --dry-run         show resolved provider/model without calling
  -h, --help        show this help

examples:
  gro "explain quicksort"
  gro -P openai -m gpt-4o "explain quicksort"
  gro -m gemini-2.0-flash "hello"
  echo "summarize this" | gro -p --system-prompt "Be concise"
  gro config set default-provider openai
USAGE
}

# --- main ---

main() {
  local provider="" model="" system_prompt="" pipe=false dry_run=false
  local -a positional=()

  # config subcommand
  if [[ "${1:-}" == "config" ]]; then
    shift
    local subcmd="${1:-get}"
    shift || true
    case "$subcmd" in
      set) config_set "$@" ;;
      get) config_get "$@" ;;
      *)   echo "gro config: unknown subcommand '$subcmd'" >&2; return 1 ;;
    esac
    return
  fi

  # parse flags
  while [[ $# -gt 0 ]]; do
    case "$1" in
      -P|--provider)   provider="$2"; shift 2 ;;
      -m|--model)      model="$2"; shift 2 ;;
      --system-prompt) system_prompt="$2"; shift 2 ;;
      -p)              pipe=true; shift ;;
      --dry-run)       dry_run=true; shift ;;
      -h|--help)       usage; return 0 ;;
      -*)              echo "gro: unknown flag '$1'" >&2; return 1 ;;
      *)               positional+=("$1"); shift ;;
    esac
  done

  # resolve prompt
  local prompt=""
  if [[ ${#positional[@]} -gt 0 ]]; then
    prompt="${positional[*]}"
  fi
  if $pipe || [[ -z "$prompt" && ! -t 0 ]]; then
    prompt=$(cat)
  fi
  if [[ -z "$prompt" ]]; then
    echo "gro: no prompt provided" >&2
    usage >&2
    return 1
  fi

  # resolve provider
  if [[ -z "$provider" && -n "$model" ]]; then
    provider=$(infer_provider "$model")
  fi
  if [[ -z "$provider" ]]; then
    provider=$(load_config_value default-provider "claude")
  fi

  # resolve model from config if not specified
  if [[ -z "$model" ]]; then
    model=$(load_config_value "${provider}.model" "")
  fi

  # dry run
  if $dry_run; then
    echo "provider: $provider"
    echo "model: ${model:-(default)}"
    echo "system-prompt: ${system_prompt:-(none)}"
    if (( ${#prompt} > 80 )); then
      echo "prompt: ${prompt:0:80}..."
    else
      echo "prompt: ${prompt}"
    fi
    return 0
  fi

  # find adapter
  local adapter=""
  for ext in sh py; do
    if [[ -x "${GRO_DIR}/providers/${provider}.${ext}" ]]; then
      adapter="${GRO_DIR}/providers/${provider}.${ext}"
      break
    fi
  done

  if [[ -z "$adapter" ]]; then
    echo "gro: no adapter found for provider '${provider}'" >&2
    echo "expected: ${GRO_DIR}/providers/${provider}.{sh,py}" >&2
    return 1
  fi

  # dispatch — adapter contract:
  #   stdin:  prompt text
  #   env:    GRO_MODEL, GRO_SYSTEM_PROMPT, GRO_CONFIG_FILE
  #   stdout: completion text
  export GRO_MODEL="$model"
  export GRO_SYSTEM_PROMPT="$system_prompt"
  export GRO_CONFIG_FILE="$CONFIG_FILE"

  echo "$prompt" | "$adapter"
}

main "$@"
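
The adapter contract above (prompt on stdin; `GRO_MODEL`, `GRO_SYSTEM_PROMPT`, `GRO_CONFIG_FILE` in the environment; completion on stdout) is easy to satisfy. Here is a hypothetical no-op adapter, `providers/echo.sh`, useful for testing the dispatch path; it is not one of the shipped adapters:

```sh
#!/usr/bin/env bash
# providers/echo.sh: hypothetical test adapter, not shipped with gro.
# Contract (from the dispatcher above):
#   stdin:  prompt text
#   env:    GRO_MODEL, GRO_SYSTEM_PROMPT, GRO_CONFIG_FILE
#   stdout: completion text
set -euo pipefail

prompt=$(cat)
echo "model:  ${GRO_MODEL:-(default)}"
echo "system: ${GRO_SYSTEM_PROMPT:-(none)}"
echo "prompt: ${prompt}"
```

Make it executable (`chmod +x providers/echo.sh`) and run `gro -P echo "hello"` to exercise the dispatcher without calling any API.
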
package/owl/behaviors/agentic-turn.md ADDED
@@ -0,0 +1,43 @@
# agentic-turn

The core execution loop: send messages to the model, handle tool calls, feed results back, and repeat until the model produces a final text response or the round limit is hit.

## states

```
[start] -> call_model -> {has_tool_calls?}
             yes -> execute_tools -> feed_results -> call_model
             no  -> [done]

[round_limit_exceeded] -> [done]
```

## call model

1. Gather current tool definitions from the MCP manager
2. Send `memory.messages()` to the driver with tools and an onToken callback
3. If the output contains text, append an assistant message to memory
4. If there are no tool calls, return the accumulated text

## execute tools

1. For each tool call in the output:
   a. Parse the function name and arguments from the tool call
   b. Log the call at debug level
   c. Call `mcp.callTool(name, args)`
   d. On error, capture the error message as the result
   e. Append the tool result to memory with `tool_call_id` and `name`

## output formatting

- `text` format: tokens streamed directly to stdout via onToken
- `stream-json` format: each token wrapped as `{"type":"token","token":"..."}` + newline
- `json` format: final result as `{"result":"...","type":"result"}` after completion

## invariants

- Maximum rounds is configurable via `--max-turns` (default 10)
- Every tool call result is fed back into memory before the next model call
- Text output is accumulated across all rounds (the model may emit text before and after tool calls)
- Tool call argument parsing failures default to the empty object `{}`
- The onToken callback fires during streaming, so output appears incrementally
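
Put together, the loop is small. A minimal TypeScript sketch of the behavior described above; the type shapes are simplified stand-ins for the real ones in `src/drivers/types.ts`, and the function signature is illustrative rather than the package's exact API:

```ts
// Illustrative shapes; the real definitions live in src/drivers/types.ts.
type ToolCall = { id: string; name: string; arguments: string };
type ChatOutput = { text: string; toolCalls: ToolCall[] };
type Msg = { role: string; content: string; tool_call_id?: string; name?: string };

async function agenticTurn(
  driver: { chat(msgs: Msg[], opts: object): Promise<ChatOutput> },
  memory: { messages(): Msg[]; add(m: Msg): Promise<void> },
  mcp: { getToolDefinitions(): object[]; callTool(n: string, a: object): Promise<string> },
  maxTurns = 10,
): Promise<string> {
  let accumulated = "";
  for (let round = 0; round < maxTurns; round++) {
    // call model: current memory plus current tool definitions
    const out = await driver.chat(memory.messages(), {
      tools: mcp.getToolDefinitions(),
      onToken: (t: string) => process.stdout.write(t), // `text` output format
    });
    if (out.text) {
      accumulated += out.text; // text accumulates across all rounds
      await memory.add({ role: "assistant", content: out.text });
    }
    if (out.toolCalls.length === 0) return accumulated; // no tool calls -> done

    // execute tools: every result is fed back before the next model call
    for (const call of out.toolCalls) {
      let args = {};
      try { args = JSON.parse(call.arguments); } catch { /* invariant: default {} */ }
      let result: string;
      try { result = await mcp.callTool(call.name, args); }
      catch (err) { result = String(err); } // error message becomes the result
      await memory.add({ role: "tool", content: result, tool_call_id: call.id, name: call.name });
    }
  }
  return accumulated; // round limit exceeded
}
```
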
package/owl/components/cli.md ADDED
@@ -0,0 +1,37 @@
# cli

Flag parsing, configuration resolution, and mode dispatch. Entry point for the gro runtime.

## capabilities

- Parse CLI flags with support for value flags, boolean flags, and positional arguments
- Auto-infer the provider from the model name (`claude-*` -> anthropic, `gpt-*` -> openai, `llama*` -> local); see the sketch below
- Resolve system prompts from flags, files, and append combinations
- Dispatch to interactive mode, single-shot mode, or print/pipe mode
- Accept all `claude` CLI flags with graceful degradation for unsupported ones
- Version display, help text

## interfaces

exposes:
- `loadConfig() -> GroConfig` — parse argv into a typed config object
- `createDriver(cfg) -> ChatDriver` — factory for the main chat driver
- `createDriverForModel(provider, model, apiKey, baseUrl) -> ChatDriver` — factory for arbitrary driver instances
- `createMemory(cfg, driver) -> AgentMemory` — factory for memory (SimpleMemory or AdvancedMemory)
- `executeTurn(driver, memory, mcp, cfg) -> Promise<string>` — one agentic turn with tool loop
- `singleShot(cfg, driver, mcp, sessionId) -> Promise<void>` — non-interactive mode
- `interactive(cfg, driver, mcp, sessionId) -> Promise<void>` — REPL mode with auto-save

depends on:
- All other components (drivers, memory, mcp, session)
- Node.js readline for interactive mode

## invariants

- `-p` forces non-interactive mode; `-i` forces interactive mode
- The default mode is interactive when attached to a TTY with no positional args, otherwise single-shot
- Unsupported claude flags are accepted with a warning to stderr and never crash
- The session ID is generated once at startup and threaded through all operations
- `--continue` and `--resume` load existing session state before the first turn
- Interactive mode auto-saves after every turn and on close
- MCP connections are always cleaned up (try/finally in main)
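
A sketch of that model-name inference, assuming simple prefix matching; the function name is illustrative, and only the `claude-*`/`gpt-*`/`llama*` mappings are documented above (the extra OpenAI and local prefixes mirror the README's provider table and are assumptions):

```ts
// Sketch of provider inference from the model name.
type Provider = "anthropic" | "openai" | "local";

function inferProvider(model: string): Provider | undefined {
  if (/^claude-/.test(model)) return "anthropic";
  if (/^(gpt-|o\d)/.test(model)) return "openai";          // gpt-4o, o3-mini, ...
  if (/^(llama|mistral|qwen)/.test(model)) return "local"; // Ollama / LM Studio names
  return undefined; // fall back to -P / the default provider
}
```
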
package/owl/components/drivers.md ADDED
@@ -0,0 +1,29 @@
# drivers

Chat completion backends that translate between gro's internal message format and provider-specific APIs.

## capabilities

- Streaming token delivery via `onToken` callback
- Tool call parsing and accumulation from streamed deltas
- Reasoning token forwarding via `onReasoningToken` callback
- Provider auto-inference from model name patterns

## interfaces

exposes:
- `makeStreamingOpenAiDriver(opts) -> ChatDriver` — OpenAI-compatible streaming (works with OpenAI, LM Studio, Ollama)
- `makeAnthropicDriver(opts) -> ChatDriver` — Native Anthropic Messages API with system message separation
- `ChatDriver.chat(messages, opts) -> Promise<ChatOutput>` — unified completion interface

depends on:
- `timed-fetch` for HTTP with timeout
- Native `fetch` API

## invariants

- Drivers never mutate the input message array
- `ChatOutput.text` contains the full accumulated text, even when tokens were streamed
- `ChatOutput.toolCalls` is always an array (empty if no tool calls)
- Tool call arguments are always serialized JSON strings, even if the provider returns objects
- System messages are separated from the conversation for Anthropic (API requirement)
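
Based only on these notes, the shared contract in `src/drivers/types.ts` plausibly looks like the following; field names not mentioned above are assumptions:

```ts
// Illustrative reconstruction of the driver contract; see src/drivers/types.ts
// for the real definitions.
interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  tool_call_id?: string; // set on tool-result messages
  name?: string;         // tool name on tool-result messages
}

interface ToolCall {
  id: string;
  name: string;
  arguments: string; // invariant: always a serialized JSON string
}

interface ChatOutput {
  text: string;          // invariant: full accumulated text, even when streamed
  toolCalls: ToolCall[]; // invariant: always an array
}

interface ChatOptions {
  tools?: unknown[];                          // OpenAI function-calling format
  onToken?: (token: string) => void;          // streaming token delivery
  onReasoningToken?: (token: string) => void; // reasoning token forwarding
}

interface ChatDriver {
  chat(messages: ChatMessage[], opts?: ChatOptions): Promise<ChatOutput>;
}
```
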
package/owl/components/mcp.md ADDED
@@ -0,0 +1,33 @@
# mcp

MCP client manager that connects to Model Context Protocol servers, discovers their tools, and routes tool calls during agentic turns.

## capabilities

- Connect to multiple MCP servers simultaneously via stdio transport
- Discover tools from each server and merge them into a unified tool list
- Route tool calls to the correct server by tool name
- Convert MCP tool schemas to OpenAI function-calling format for driver compatibility
- Auto-discover servers from Claude Code's `~/.claude/settings.json`
- Accept explicit MCP config via the `--mcp-config` flag (file path or inline JSON)

## interfaces

exposes:
- `McpManager.connectAll(configs) -> Promise<void>` — connect to all configured servers
- `McpManager.getToolDefinitions() -> any[]` — merged tool list in OpenAI function-calling format
- `McpManager.callTool(name, args) -> Promise<string>` — route and execute a tool call
- `McpManager.hasTool(name) -> boolean` — check if a tool is available
- `McpManager.disconnectAll() -> Promise<void>` — clean shutdown of all connections

depends on:
- `@modelcontextprotocol/sdk` — Client and StdioClientTransport
- Server configs from Claude Code settings or explicit `--mcp-config`

## invariants

- Tool names are globally unique across all connected servers (last-writer-wins on collision)
- `callTool` throws if the tool name is not found
- Tool results are always stringified — objects are JSON.stringify'd, arrays joined
- Disconnect is always called on process exit (interactive close handler, singleShot cleanup, error catch)
- Server connection failures are logged and skipped, not fatal
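
A short usage sketch of the manager as documented above; the manager type is written structurally from the interface list, the config object's exact fields are assumptions, and `read_file` is a hypothetical tool name:

```ts
// Structural stand-in for the documented McpManager interface.
type McpManager = {
  connectAll(configs: object[]): Promise<void>;
  getToolDefinitions(): unknown[];
  hasTool(name: string): boolean;
  callTool(name: string, args: object): Promise<string>;
  disconnectAll(): Promise<void>;
};

async function withTools(mcp: McpManager): Promise<void> {
  // Config shape is illustrative; real configs come from ~/.claude/settings.json
  // or --mcp-config.
  await mcp.connectAll([
    { command: "npx", args: ["-y", "@modelcontextprotocol/server-filesystem", "."] },
  ]);
  try {
    const tools = mcp.getToolDefinitions(); // merged, OpenAI function-calling format
    console.log(`discovered ${tools.length} tools`);
    if (mcp.hasTool("read_file")) {
      const result = await mcp.callTool("read_file", { path: "README.md" });
      console.log(result); // invariant: results are always strings
    }
  } finally {
    await mcp.disconnectAll(); // invariant: always disconnect, even on error
  }
}
```
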
package/owl/components/memory.md ADDED
@@ -0,0 +1,35 @@
# memory

Conversation state management with optional summarization to stay within the context window budget.

## capabilities

- Unbounded message buffer (SimpleMemory) for short or externally managed conversations
- Swim-lane summarization (AdvancedMemory) with three independent lanes: assistant, system, user
- High/low watermark hysteresis to prevent summarization thrashing
- Background summarization that never blocks the caller (runOnce serialization)
- Separate summarizer model support — route compression to a cheaper model
- Session load/save to `.gro/context/<id>/`

## interfaces

exposes:
- `AgentMemory.add(msg) -> Promise<void>` — append message, trigger summarization check
- `AgentMemory.messages() -> ChatMessage[]` — current message buffer (copy)
- `AgentMemory.load(id) -> Promise<void>` — restore from disk
- `AgentMemory.save(id) -> Promise<void>` — persist to disk
- `AdvancedMemory(opts)` — constructor with driver, model, summarizerDriver, summarizerModel, budget params
- `SimpleMemory(systemPrompt?)` — constructor for no-summarization mode

depends on:
- `ChatDriver` for summarization calls (AdvancedMemory only)
- `session` module for persistence

## invariants

- The system prompt is always the first message if present
- Summarization preserves the N most recent messages per lane (keepRecentPerLane)
- Summaries are labeled: `ASSISTANT SUMMARY:`, `SYSTEM SUMMARY:`, `USER SUMMARY:`
- Background summarization is serialized — only one runs at a time, with a pending flag for re-runs
- `messages()` always returns a copy, never the internal buffer
- Token estimation uses a character-based heuristic (avgCharsPerToken, default 4)
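
A minimal sketch of the token estimate and watermark hysteresis described above; the trigger-above-high, compress-below-low semantics follow the doc, while names and the exact watermark fractions are illustrative:

```ts
// Character-based token estimate (avgCharsPerToken default 4, per the doc).
const avgCharsPerToken = 4;
const estimateTokens = (msgs: { content: string }[]): number =>
  Math.ceil(msgs.reduce((sum, m) => sum + m.content.length, 0) / avgCharsPerToken);

// Hysteresis: start summarizing only once usage crosses the high watermark,
// and keep compressing until it falls below the low one. The gap prevents
// thrashing (summarize, drift slightly over budget, summarize again, ...).
function needsSummarization(
  msgs: { content: string }[],
  budget: number,       // e.g. --context-tokens
  high = 0.9,           // illustrative watermark fractions
  low = 0.6,
  summarizing = false,  // is a background run already active?
): boolean {
  const used = estimateTokens(msgs);
  return summarizing ? used > budget * low : used > budget * high;
}
```
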
package/owl/components/session.md ADDED
@@ -0,0 +1,35 @@
# session

Persistence layer for saving and loading conversation state to the `.gro/` directory.

## capabilities

- Create and manage `.gro/context/<session-id>/` directories
- Save message history and metadata as JSON files
- Load sessions by ID for resumption
- Find the most recent session for `--continue`
- List all sessions sorted by recency
- Generate short session IDs (UUID prefix)

## interfaces

exposes:
- `ensureGroDir() -> void` — create `.gro/context/` if it doesn't exist
- `newSessionId() -> string` — generate a short unique session ID
- `saveSession(id, messages, meta) -> void` — write messages.json and meta.json
- `loadSession(id) -> { messages, meta } | null` — read a session from disk
- `findLatestSession() -> string | null` — find the most recently updated session ID
- `listSessions() -> SessionMeta[]` — all sessions sorted by recency

depends on:
- Node.js `fs` for file operations
- Node.js `crypto` for UUID generation

## invariants

- The session directory is `<cwd>/.gro/context/<id>/`
- `messages.json` contains the full ChatMessage array, JSON-serialized with 2-space indent
- `meta.json` contains `{ id, provider, model, createdAt, updatedAt }`
- `findLatestSession` uses filesystem mtime, not the `updatedAt` field
- Corrupt sessions are silently skipped in `listSessions`
- `loadSession` returns null (rather than throwing) if the session doesn't exist
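
A hedged reconstruction of the save/load pair from the invariants above; paths, file names, meta fields, and the 2-space indent are as documented, everything else is a sketch:

```ts
import * as fs from "node:fs";
import * as path from "node:path";
import { randomUUID } from "node:crypto";

const sessionDir = (id: string) => path.join(process.cwd(), ".gro", "context", id);

// Short session ID: UUID prefix, per the capabilities list.
const newSessionId = (): string => randomUUID().slice(0, 8);

function saveSession(id: string, messages: unknown[], meta: Record<string, unknown>): void {
  fs.mkdirSync(sessionDir(id), { recursive: true });
  fs.writeFileSync(
    path.join(sessionDir(id), "messages.json"),
    JSON.stringify(messages, null, 2), // invariant: 2-space indent
  );
  fs.writeFileSync(
    path.join(sessionDir(id), "meta.json"),
    JSON.stringify({ ...meta, id, updatedAt: new Date().toISOString() }, null, 2),
  );
}

function loadSession(id: string): { messages: unknown[]; meta: unknown } | null {
  try {
    const read = (f: string) =>
      JSON.parse(fs.readFileSync(path.join(sessionDir(id), f), "utf8"));
    return { messages: read("messages.json"), meta: read("meta.json") };
  } catch {
    return null; // invariant: null rather than throw on a missing session
  }
}
```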