llm-party-cli 0.1.0 → 0.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -4,8 +4,8 @@
4
4
  <strong>Bring your models. We'll bring the party.</strong>
5
5
  </p>
6
6
  <p align="center">
7
- <a href="https://llm-party.party">Website</a> &middot;
8
- <a href="https://www.npmjs.com/package/llm-party-cli">npm</a> &middot;
7
+ <a href="https://llm-party.party">Website</a> ·
8
+ <a href="https://www.npmjs.com/package/llm-party-cli">npm</a> ·
9
9
  <a href="https://github.com/aalasolutions/llm-party">GitHub</a>
10
10
  </p>
11
11
  <p align="center">
@@ -36,53 +36,75 @@ No MCP. No master/servant. No window juggling. Just peers at a terminal table.
36
36
 
37
37
  ## Why llm-party?
38
38
 
39
- | | Traditional multi-agent | llm-party |
40
- |---|---|---|
41
- | **Architecture** | MCP (master controls servants) | Peer orchestration (you control all) |
42
- | **Integration** | CLI wrapping, output scraping | Direct SDK adapters |
43
- | **Sessions** | Fresh each time | Persistent per provider |
44
- | **Context** | Agents are siloed | Every agent sees the full conversation |
45
- | **API tokens** | Separate keys per tool | Uses your existing CLI auth |
39
+ | | Traditional multi-agent | llm-party |
40
+ | ---------------------- | ------------------------------ | -------------------------------------- |
41
+ | **Architecture** | MCP (master controls servants) | Peer orchestration (you control all) |
42
+ | **Integration** | CLI wrapping, output scraping | Direct SDK adapters |
43
+ | **Sessions** | Fresh each time | Persistent per provider |
44
+ | **Context** | Agents are siloed | Every agent sees the full conversation |
45
+ | **API tokens** | Separate keys per tool | Uses your existing CLI auth |
46
46
 
47
47
  <br/>
48
48
 
49
49
  ## Getting started
50
50
 
51
- ### Install and run
51
+ ### Prerequisites
52
+
53
+ Bun runtime and Node.js 20+ are required (22+ if using the Copilot provider, which depends on `node:sqlite`). Make sure at least one AI CLI is installed and authenticated:
54
+
55
+ ```bash
56
+ claude --version # Claude Code CLI
57
+ codex --version # OpenAI Codex CLI
58
+ copilot --version # GitHub Copilot CLI
59
+ ```
60
+
61
+ If a CLI doesn't work on its own, it won't work inside llm-party.
62
+
63
+ ### Install
52
64
 
53
65
  ```bash
54
66
  npm install -g llm-party-cli
67
+ ```
68
+
69
+ ### First run
70
+
71
+ ```bash
55
72
  llm-party
56
73
  ```
57
74
 
58
- That's it. Agents use your current working directory. Config defaults are included in the package.
75
+ On first run, llm-party automatically creates `~/.llm-party/` with a default config and global memory structure. Your system username is detected automatically. No setup commands needed.
59
76
 
60
- ### Set up your agents
77
+ ### Add more agents
61
78
 
62
- Edit `configs/default.json`. Each agent needs a name, provider, and model:
79
+ Edit `~/.llm-party/config.json`:
63
80
 
64
81
  ```json
65
82
  {
66
- "humanName": "YOUR NAME",
67
83
  "agents": [
68
84
  {
69
85
  "name": "Claude",
70
86
  "tag": "claude",
71
87
  "provider": "claude",
72
- "model": "opus",
73
- "systemPrompt": ["./prompts/base.md"]
88
+ "model": "opus"
74
89
  },
75
90
  {
76
91
  "name": "Codex",
77
92
  "tag": "codex",
78
93
  "provider": "codex",
79
- "model": "gpt-5.2",
80
- "systemPrompt": ["./prompts/base.md"]
94
+ "model": "gpt-5.2"
95
+ },
96
+ {
97
+ "name": "Copilot",
98
+ "tag": "copilot",
99
+ "provider": "copilot",
100
+ "model": "gpt-4.1"
81
101
  }
82
102
  ]
83
103
  }
84
104
  ```
85
105
 
106
+ That's it. No paths, no prompts, no usernames to configure. Just name, tag, provider, model.
107
+
86
108
  ### Talk to your agents
87
109
 
88
110
  ```
@@ -98,48 +120,42 @@ Edit `configs/default.json`. Each agent needs a name, provider, and model:
98
120
 
99
121
  Agents can pass the conversation to each other by ending their response with `@next:<tag>`. The orchestrator picks it up and dispatches automatically. Max 15 hops per cycle to prevent loops.
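The handoff loop above can be sketched in a few lines (a minimal illustration; `parseHandoff` and `dispatch` are hypothetical names, not the orchestrator's actual API):

```javascript
// Illustrative sketch of the @next:<tag> handoff loop with the hop cap.
// Function names are hypothetical, not taken from the llm-party source.
const MAX_AUTO_HOPS = 15;

function parseHandoff(response) {
  // A handoff is signalled by ending the response with @next:<tag>.
  const match = response.trim().match(/@next:([\w-]+)$/);
  return match ? match[1] : null;
}

async function runCycle(firstTag, dispatch) {
  let tag = firstTag;
  for (let hop = 0; hop < MAX_AUTO_HOPS && tag; hop++) {
    const reply = await dispatch(tag); // send the shared context, await the agent's reply
    tag = parseHandoff(reply);         // no handoff marker ends the cycle
  }
}
```

The hop counter is what enforces the 15-hop cap: once it is exhausted, control returns to you even if agents keep emitting handoff tags.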
100
122
 
101
- <br/>
102
-
103
- ## Before you start
123
+ ## WARNING: FULL AUTONOMY
104
124
 
105
- **Verify your CLIs work first.** Before adding an agent to `configs/default.json`, make sure its CLI is installed and authenticated:
125
+ All agents run with full permissions. They can read, write, and edit files and execute shell commands with zero approval gates. There is no confirmation step before any action. Run in a disposable environment. You are responsible for any changes, data loss, costs, or side effects. Do not run against production systems.
106
126
 
107
- ```bash
108
- claude --version # Claude Code CLI
109
- codex --version # OpenAI Codex CLI
110
- copilot --version # GitHub Copilot CLI
111
- ```
127
+ ## Important notes
112
128
 
113
- If a CLI doesn't work on its own, it won't work inside llm-party.
114
-
115
- **No extra API tokens.** llm-party uses the original CLIs and SDKs under the hood. Your existing authentication and subscriptions are used directly. Sessions created by agents appear in each tool's native session history (Claude Code sessions, Codex threads, etc.) since the underlying SDKs manage their own persistence.
129
+ **Uses your existing CLIs.** llm-party uses official SDKs that delegate to each provider's CLI binary. If `claude`, `codex`, or `copilot` commands work on your machine, llm-party works. Authentication is handled entirely by the provider's own tools.
116
130
 
117
131
  **Run in isolation.** Always run llm-party inside a disposable environment: a Docker container, a VM, or at minimum a throwaway git branch. Agents have full filesystem and shell access with zero approval gates.
118
132
 
133
+ **Full permissions.** All agents can read, write, and edit files and execute shell commands. There is no confirmation step before any action. You are responsible for any changes, data loss, costs, or side effects.
134
+
119
135
  <br/>
120
136
 
121
137
  ## How we use the SDKs
122
138
 
123
139
  llm-party uses **official, publicly available SDKs and CLIs** published by each provider. Nothing is reverse-engineered, patched, or bypassed.
124
140
 
125
- | Provider | Official SDK | Published by |
126
- |----------|-------------|-------------|
127
- | Claude | [`@anthropic-ai/claude-agent-sdk`](https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk) | Anthropic |
128
- | Codex | [`@openai/codex-sdk`](https://www.npmjs.com/package/@openai/codex-sdk) | OpenAI |
129
- | Copilot | [`@github/copilot-sdk`](https://www.npmjs.com/package/@github/copilot-sdk) | GitHub |
141
+ | Provider | Official SDK | Published by |
142
+ | -------- | ----------------------------------------------------------------------------------------------- | ------------ |
143
+ | Claude | [`@anthropic-ai/claude-agent-sdk`](https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk) | Anthropic |
144
+ | Codex | [`@openai/codex-sdk`](https://www.npmjs.com/package/@openai/codex-sdk) | OpenAI |
145
+ | Copilot | [`@github/copilot-sdk`](https://www.npmjs.com/package/@github/copilot-sdk) | GitHub |
130
146
 
131
- All authentication flows through the provider's own CLI login. Your API keys, OAuth tokens, and subscriptions are used as-is. llm-party does not store, proxy, or intercept credentials.
147
+ All authentication flows through the provider's own CLI. llm-party does not implement its own auth flow, store credentials, or intercept authentication traffic.
132
148
 
133
- If any provider believes this project violates their terms of service, please [open an issue](https://github.com/aalasolutions/llm-party/issues) and we will address it immediately.
149
+ <br/>
134
150
 
135
151
  ## Supported providers
136
152
 
137
- | Provider | SDK | Session | System Prompt |
138
- |----------|-----|---------|---------------|
139
- | **Claude** | `@anthropic-ai/claude-agent-sdk` | Persistent via session ID resume | Full control |
140
- | **Codex** | `@openai/codex-sdk` | Persistent thread with `run()` turns | Via `developer_instructions` (see limitations) |
141
- | **Copilot** | `@github/copilot-sdk` | Persistent via `sendAndWait()` | Full control |
142
- | **GLM** | Claude SDK + env proxy | Same as Claude | Full control |
153
+ | Provider | SDK | Session | Prompt Support |
154
+ | ----------------- | ---------------------------------- | -------------------------------------- | -------------------------------------------------- |
155
+ | **Claude** | `@anthropic-ai/claude-agent-sdk` | Persistent via session ID resume | Full control |
156
+ | **Codex** | `@openai/codex-sdk` | Persistent thread with `run()` turns | Via `developer_instructions` (limitations below) |
157
+ | **Copilot** | `@github/copilot-sdk` | Persistent via `sendAndWait()` | Full control |
158
+ | **GLM** | Claude SDK + env proxy | Same as Claude | Full control |
143
159
 
144
160
  <br/>
145
161
 
@@ -149,8 +165,6 @@ If any provider believes this project violates their terms of service, please [o
149
165
 
150
166
  ## How it works
151
167
 
152
- Most multi-agent setups use MCP (one agent controls others) or CLI wrapping (spawn processes and scrape terminal output). Both are fragile and hierarchical.
153
-
154
168
  llm-party uses SDK adapters directly. Each agent gets a persistent session with its provider. Full tool access. Real conversation threading. The orchestrator owns routing, agents are peers.
155
169
 
156
170
  ```
@@ -170,7 +184,9 @@ Orchestrator
170
184
  +-- Transcript Writer (JSONL, append-only, per session)
171
185
  ```
172
186
 
173
- Each agent receives a rolling window of recent messages (default 16) plus any unseen messages since its last turn. Messages from other agents are included so everyone sees the full multi-party conversation.
187
+ Each agent receives a rolling window of recent messages (configurable, default 16) plus any unseen messages since its last turn. Messages from other agents are included so everyone sees the full multi-party conversation.
188
+
189
+ `~/.llm-party/config.json` is your global config. Every agent receives a base system prompt automatically. The `prompts` field in config adds extra prompt files on top of it.
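The context selection described above can be sketched as follows (behavior reconstructed from the description; the actual implementation may differ):

```javascript
// Sketch of per-agent context assembly: the last `windowSize` messages,
// widened to include anything the agent has not yet seen.
// Assumed from the README's description, not from the source code.
function buildContext(history, lastSeenIndex, windowSize = 16) {
  const windowStart = Math.max(0, history.length - windowSize);
  const start = Math.min(windowStart, lastSeenIndex + 1); // never drop unseen messages
  return history.slice(start);
}
```

With a 20-message history, an agent that last saw message 1 would receive 18 messages, while an up-to-date agent receives the plain 16-message window.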
174
190
 
175
191
  <br/>
176
192
 
@@ -178,125 +194,112 @@ Each agent receives a rolling window of recent messages (default 16) plus any un
178
194
 
179
195
  ### Claude
180
196
 
181
- | | |
182
- |---|---|
183
- | SDK | `@anthropic-ai/claude-agent-sdk` |
197
+ | | |
198
+ | ------- | ----------------------------------------------------------------------------------------------- |
199
+ | SDK | `@anthropic-ai/claude-agent-sdk` |
184
200
  | Session | Persistent via `resume: sessionId`. First call creates a session, subsequent calls resume it. |
185
- | System prompt | Passed directly to the SDK via `options.systemPrompt`. Full control. |
186
- | Tools | Read, Write, Edit, Bash, Glob, Grep |
187
- | Permissions | `permissionMode: "bypassPermissions"` (all tools auto-approved) |
188
-
189
- System prompt works exactly as expected. Personality, behavior, workflow rules all respected.
201
+ | Prompt | Passed directly to the SDK. Full control over personality, behavior, and workflow rules. |
202
+ | Tools | Read, Write, Edit, Bash, Glob, Grep |
190
203
 
191
204
  ### Codex
192
205
 
193
- | | |
194
- |---|---|
195
- | SDK | `@openai/codex-sdk` |
196
- | Session | Persistent thread. `startThread()` creates it, `thread.run()` adds turns to the same conversation. |
197
- | System prompt | Injected via `developer_instructions` config key, passed as `--config` flag to the CLI subprocess. |
198
- | Tools | exec_command, apply_patch, file operations (Codex built-in toolset) |
199
- | Permissions | `sandboxMode: "danger-full-access"`, `approvalPolicy: "never"` |
206
+ | | |
207
+ | ------- | ---------------------------------------------------------------------------------------------------------------- |
208
+ | SDK | `@openai/codex-sdk` |
209
+ | Session | Persistent thread. `startThread()` creates it, `thread.run()` adds turns to the same conversation.                |
210
+ | Prompt | Injected via `developer_instructions` config key. Appended alongside Codex's built-in 13k token system prompt. |
211
+ | Tools | exec_command, apply_patch, file operations |
200
212
 
201
- **Known limitation:** Codex ships with a massive built-in system prompt (~13k tokens) that cannot be overridden. Your `developer_instructions` are appended alongside it, not replacing it. This means:
202
-
203
- - **Works:** Action instructions (create files, follow naming conventions), formatting rules (prefix responses with agent name), workflow rules (handoff syntax, routing tags)
204
- - **Does not work:** Personality overrides, identity changes, behavioral rewrites
205
-
206
- We tested `instructions`, `developer_instructions`, and `experimental_instructions_file`. All three append to the built-in prompt. None replace it.
207
-
208
- **Also observed:** Codex is aggressive with file operations. When asked to "create your file," it read the orchestrator source code, ran the Codex CLI, and modified `src/ui/terminal.ts` instead of just creating a simple markdown file.
213
+ **Known limitation:** Codex's built-in system prompt cannot be overridden. Your instructions are appended alongside it. Action instructions (naming, formatting, workflow rules) work. Personality overrides do not.
209
214
 
210
215
  ### Copilot
211
216
 
212
- | | |
213
- |---|---|
214
- | SDK | `@github/copilot-sdk` |
215
- | Session | Persistent via `CopilotClient.createSession()`. Messages sent with `session.sendAndWait()`. |
216
- | System prompt | Set as `systemMessage: { content: prompt }` on session creation. |
217
- | Tools | Copilot built-in toolset |
218
- | Permissions | `onPermissionRequest: approveAll` (all actions auto-approved) |
219
-
220
- System prompt works as expected.
217
+ | | |
218
+ | ------- | ----------------------------------------------------------- |
219
+ | SDK | `@github/copilot-sdk` |
220
+ | Session | Persistent via `CopilotClient.createSession()`. |
221
+ | Prompt | Set as `systemMessage` on session creation. Full control. |
222
+ | Tools | Copilot built-in toolset |
221
223
 
222
224
  ### GLM
223
225
 
224
- | | |
225
- |---|---|
226
- | SDK | `@anthropic-ai/claude-agent-sdk` (same as Claude) |
227
- | Session | Same as Claude, but routed through a proxy. |
228
- | System prompt | Same as Claude. Full control. |
229
- | Tools | Same as Claude |
230
- | Permissions | Same as Claude |
226
+ | | |
227
+ | ------- | --------------------------------------------------------- |
228
+ | SDK | `@anthropic-ai/claude-agent-sdk` (same as Claude) |
229
+ | Session | Same as Claude, routed through a proxy via env overrides. |
230
+ | Prompt | Same as Claude. Full control. |
231
+ | Tools | Same as Claude |
231
232
 
232
- GLM is not tied to any specific CLI. It uses the Claude SDK as the transport layer because Claude Code supports environment variable overrides for base URL and model aliases, making it a convenient proxy bridge. Any CLI that supports similar env-based routing could be swapped in.
233
+ GLM uses the Claude SDK as a transport layer. The adapter routes API calls through a proxy by setting `ANTHROPIC_BASE_URL` and model aliases via the `env` config field.
233
234
 
234
235
  <br/>
235
236
 
236
237
  ## Config reference
237
238
 
238
- Config file: `configs/default.json`. Override with `LLM_PARTY_CONFIG` env var.
239
+ Config file: `~/.llm-party/config.json` (created automatically on first run).
240
+
241
+ Override with `LLM_PARTY_CONFIG` env var to point to a different file.
239
242
 
240
243
  ### Top-level fields
241
244
 
242
- | Field | Required | Default | Description |
243
- |-------|----------|---------|-------------|
244
- | `humanName` | No | `USER` | Your name displayed in the terminal prompt and passed to agents |
245
- | `humanTag` | No | derived from `humanName` | Tag used for human handoff detection. When an agent says `@next:you`, the orchestrator stops and returns control to you |
246
- | `maxAutoHops` | No | `15` | Max agent-to-agent handoffs per cycle. Prevents infinite loops. Use `"unlimited"` to remove the cap |
247
- | `timeout` | No | `600` | Default timeout in seconds for all agents. 10 minutes by default |
248
- | `agents` | Yes | | Array of agent definitions. Must have at least one |
245
+ | Field | Required | Default | Description |
246
+ | --------------- | -------- | -------------------------- | ---------------------------------------------------------------------------- |
247
+ | `humanName` | No | Your system username | Display name in the terminal prompt and passed to agents |
248
+ | `humanTag` | No | derived from `humanName` | Tag for human handoff detection (`@next:you`) |
249
+ | `maxAutoHops` | No | `15` | Max agent-to-agent handoffs per cycle. Use `"unlimited"` to remove the cap |
250
+ | `timeout` | No | `600` | Default timeout in seconds for all agents |
251
+ | `agents` | Yes | | Array of agent definitions |
249
252
 
250
253
  ### Agent fields
251
254
 
252
- | Field | Required | Default | Description |
253
- |-------|----------|---------|-------------|
254
- | `name` | Yes | | Display name shown in responses as `[AGENT NAME]` |
255
- | `tag` | No | derived from `name` | Routing tag for `@tag` targeting. Auto-generated as lowercase with dashes if omitted |
256
- | `provider` | Yes | | SDK adapter: `claude`, `codex`, `copilot`, or `glm` |
257
- | `model` | Yes | | Model ID passed to the provider. Examples: `opus`, `sonnet`, `gpt-5.2`, `gpt-4.1`, `glm-5` |
258
- | `systemPrompt` | Yes | | Path or array of paths to prompt markdown files. Relative to project root |
259
- | `executablePath` | No | PATH lookup | Path to the CLI binary. Supports `~/` for home directory. Only needed if the CLI is not in your PATH |
260
- | `env` | No | inherits `process.env` | Environment variable overrides for this agent's process |
261
- | `timeout` | No | top-level value | Per-agent timeout in seconds. Overrides the top-level default |
255
+ | Field | Required | Default | Description |
256
+ | ------------------ | -------- | ------------------------ | --------------------------------------------------------------------------------------------------- |
257
+ | `name` | Yes | | Display name shown in responses as `[AGENT NAME]`. Must be unique. |
258
+ | `tag` | No | derived from `name` | Routing tag for `@tag` targeting |
259
+ | `provider`         | Yes      |                          | SDK adapter: `claude`, `codex`, `copilot`, or `glm`                                                  |
260
+ | `model`            | Yes      |                          | Model ID passed to the provider. Examples: `opus`, `sonnet`, `gpt-5.2`, `gpt-4.1`, `glm-5`           |
261
+ | `prompts` | No | none | Array of extra prompt file paths, concatenated after `base.md`. Relative to project root |
262
+ | `executablePath` | No | PATH lookup | Path to the CLI binary. Supports `~/`. Only needed if the CLI is not in your PATH |
263
+ | `env` | No | inherits `process.env` | Environment variable overrides for this agent |
264
+ | `timeout` | No | top-level value | Per-agent timeout override in seconds |
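When `tag` is omitted it is derived from `name`; per the previous README's wording ("lowercase with dashes"), the derivation is roughly:

```javascript
// Assumed tag derivation: lowercase the name, whitespace becomes dashes.
// Reconstructed from the README's description, not from the source code.
function deriveTag(name) {
  return name.trim().toLowerCase().replace(/\s+/g, "-");
}
```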
262
265
 
263
- ### System prompts
266
+ ### Prompts
264
267
 
265
- Single file or multiple files merged in order:
268
+ Every agent receives a base system prompt automatically. To add extra instructions per agent, use the `prompts` field:
266
269
 
267
270
  ```json
268
- "systemPrompt": "./prompts/base.md"
269
- "systemPrompt": ["./prompts/base.md", "./prompts/reviewer.md"]
271
+ {
272
+ "name": "Reviewer",
273
+ "tag": "reviewer",
274
+ "provider": "claude",
275
+ "model": "opus",
276
+ "prompts": ["./prompts/code-review.md"]
277
+ }
270
278
  ```
271
279
 
272
- Files are concatenated with `---` separators, then template variables are replaced. Available variables:
273
-
274
- | Variable | Description |
275
- |----------|-------------|
276
- | `{{agentName}}` | This agent's display name |
277
- | `{{agentTag}}` | This agent's routing tag |
278
- | `{{humanName}}` | The human's display name |
279
- | `{{humanTag}}` | The human's routing tag |
280
- | `{{agentCount}}` | Total number of active agents |
281
- | `{{allAgentNames}}` | All agent names, comma-separated |
282
- | `{{allAgentTags}}` | All agent tags as `@tag` |
283
- | `{{otherAgentList}}` | Other agents formatted as `- Name: use @tag` |
284
- | `{{otherAgentNames}}` | Other agent names, comma-separated |
285
- | `{{validHandoffTargets}}` | Valid `@next:tag` values for handoff |
280
+ Template variables available in prompt files:
286
281
 
287
- ### GLM environment setup
282
+ | Variable | Description |
283
+ | --------------------------- | ----------------------------- |
284
+ | `{{agentName}}` | This agent's display name |
285
+ | `{{agentTag}}` | This agent's routing tag |
286
+ | `{{humanName}}` | Your display name |
287
+ | `{{humanTag}}` | Your routing tag |
288
+ | `{{agentCount}}` | Total number of active agents |
289
+ | `{{allAgentNames}}` | All agent names |
290
+ | `{{allAgentTags}}` | All agent tags as `@tag` |
291
+ | `{{otherAgentList}}` | Other agents with their tags |
292
+ | `{{validHandoffTargets}}` | Valid `@next:tag` targets |
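Substitution presumably works like standard `{{var}}` templating; a minimal sketch (illustrative only, not the actual implementation):

```javascript
// Minimal {{variable}} substitution over a prompt file's contents.
// Leaving unknown variables untouched is a guess, not documented behavior.
function renderPrompt(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match
  );
}

renderPrompt("You are {{agentName}} (@{{agentTag}}).", {
  agentName: "Claude",
  agentTag: "claude",
});
// → "You are Claude (@claude)."
```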
288
293
 
289
- GLM requires environment overrides to route through a proxy. The adapter first tries to load env variables from your shell `glm` alias (`zsh -ic "alias glm"`). If you have a `glm` alias that sets `ANTHROPIC_AUTH_TOKEN` and `ANTHROPIC_BASE_URL`, it picks those up automatically.
294
+ ### GLM environment setup
290
295
 
291
- Without the alias, provide everything in the `env` block:
296
+ GLM requires environment overrides to route through a proxy. The adapter tries to load env variables from your shell `glm` alias automatically. Without the alias, provide everything in the `env` block:
292
297
 
293
298
  ```json
294
299
  {
295
- "name": "GLM Agent",
300
+ "name": "GLM",
296
301
  "provider": "glm",
297
302
  "model": "glm-5",
298
- "systemPrompt": ["./prompts/base.md"],
299
- "executablePath": "~/.local/bin/claude",
300
303
  "env": {
301
304
  "ANTHROPIC_AUTH_TOKEN": "your-glm-api-key",
302
305
  "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
@@ -311,24 +314,23 @@ Without the alias, provide everything in the `env` block:
311
314
 
312
315
  ## Session and transcript
313
316
 
314
- Every run generates a unique session ID and appends messages to a JSONL transcript file under `.llm-party/sessions/`. The session ID and transcript path are printed at startup.
317
+ Every run generates a unique session ID and appends messages to a JSONL transcript in `.llm-party/sessions/` (project-level). The session ID and transcript path are printed at startup.
315
318
 
316
- File changes made by agents during their turns are detected via `git status` after each response cycle. Newly modified files are printed with timestamps.
317
-
318
- Use `/save <path>` to export the full in-memory conversation as formatted JSON.
319
+ File changes made by agents are detected via `git status` after each response. Newly modified files are printed with timestamps.
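Change detection can be approximated by parsing porcelain output (the `--porcelain` flag is an assumption; the README only says `git status`):

```javascript
// Parse `git status --porcelain` lines into { status, path } records.
// The porcelain line format (2-char status, space, path) is git's documented
// format; that llm-party uses this exact flag is an assumption.
function parseGitStatus(output) {
  return output
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => ({ status: line.slice(0, 2).trim(), path: line.slice(3) }));
}
```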
319
320
 
320
321
  <br/>
321
322
 
322
323
  ## Terminal commands
323
324
 
324
- | Command | What it does |
325
- |---------|-------------|
326
- | `/agents` | List active agents with tag, provider, model |
327
- | `/history` | Print full conversation history |
328
- | `/save <path>` | Export conversation as JSON |
329
- | `/session` | Show session ID and transcript path |
330
- | `/changes` | Show git-modified files |
331
- | `/exit` | Quit |
325
+ | Command | What it does |
326
+ | ---------------- | -------------------------------------------- |
327
+ | `/agents` | List active agents with tag, provider, model |
328
+ | `/save <path>` | Export conversation as JSON |
329
+ | `/session` | Show session ID and transcript path |
330
+ | `/changes` | Show git-modified files |
331
+ | `/clear` | Clear chat display (Ctrl+L also works) |
332
+ | `/exit` | Quit (graceful shutdown, all adapters cleaned up) |
333
+ | `Ctrl+C` | Exit (or copy selected text if selection active) |
332
334
 
333
335
  <br/>
334
336
 
@@ -337,56 +339,51 @@ Use `/save <path>` to export the full in-memory conversation as formatted JSON.
337
339
  ```bash
338
340
  git clone https://github.com/aalasolutions/llm-party.git
339
341
  cd llm-party
340
- npm install
341
- npm run dev
342
+ bun install
343
+ bun run dev
342
344
  ```
343
345
 
344
- Build and run from dist:
346
+ Build and run:
345
347
 
346
348
  ```bash
347
- npm run build
348
- npm start
349
+ bun run build
350
+ bun start
349
351
  ```
350
352
 
351
- Override config path:
353
+ Override config:
352
354
 
353
355
  ```bash
354
- LLM_PARTY_CONFIG=/path/to/config.json npm run dev
356
+ LLM_PARTY_CONFIG=/path/to/config.json bun run dev
355
357
  ```
356
358
 
357
359
  <br/>
358
360
 
359
361
  ## Troubleshooting
360
362
 
361
- **"ENOENT for prompt path"**
362
- Your `systemPrompt` points to a file that does not exist. Paths are relative to project root. Verify with `ls prompts/`.
363
-
364
363
  **"No agent matched @tag"**
365
- The tag you typed does not match any agent's `tag`, `name`, or `provider`. Run `/agents` to see what is available.
364
+ Run `/agents` to see available tags. Tags match against agent `tag`, `name`, and `provider`.
366
365
 
367
366
  **"Unsupported provider"**
368
- Your config has a provider value that is not one of: `claude`, `codex`, `copilot`, `glm`.
367
+ Valid providers: `claude`, `codex`, `copilot`, `glm`.
368
+
369
+ **"Duplicate agent name"**
370
+ Agent names must be unique (case-insensitive). Rename one of the duplicates in config.
369
371
 
370
372
  **Agent modifies source code unexpectedly**
371
- Expected behavior with full-access permissions. Agents can read, write, and execute anything. Use git to review and revert. Codex in particular is aggressive with file operations.
373
+ Expected with full permissions. Use git to review and revert.
372
374
 
373
375
  **Codex ignores personality instructions**
374
- Known limitation. Codex has a 13k+ token built-in system prompt that overrides personality and identity instructions. Functional instructions (naming, workflow, formatting) still work.
376
+ Known limitation. Codex's 13k token built-in prompt overrides personality. Functional instructions still work.
375
377
 
376
- **Agent response timeout**
377
- Claude and Copilot have a 120-second timeout. GLM has 240 seconds. If an agent consistently times out, check your API keys and network connectivity.
378
-
379
- <br/>
378
+ **"ERR_UNKNOWN_BUILTIN_MODULE: node:sqlite"**
379
+ Your Node.js version is below 22. The Copilot SDK requires Node.js 22+.
380
380
 
381
- ## Warning
382
-
383
- All agents run with **full permissions**. They can read, write, edit files and execute shell commands. There is no confirmation step before any action.
384
-
385
- You are responsible for any changes, data loss, costs, or side effects. Do not run against production systems or repos you cannot recover from.
381
+ **Agent response timeout**
382
+ Default is 600 seconds (10 minutes). Adjust with `timeout` in config (top-level or per-agent).
386
383
 
387
384
  <br/>
388
385
 
389
386
  <p align="center">
390
- <a href="https://llm-party.party">llm-party.party</a> &middot;
387
+ <a href="https://llm-party.party">llm-party.party</a> ·
391
388
  Built by <a href="https://aalasolutions.com">AALA Solutions</a>
392
389
  </p>
@@ -4,19 +4,34 @@
4
4
  "maxAutoHops": 15,
5
5
  "agents": [
6
6
  {
7
- "name": "Agent 2",
8
- "tag": "opus",
7
+ "name": "GLM",
8
+ "tag": "glm",
9
+ "provider": "glm",
10
+ "model": "glm-5",
11
+ "env": {
12
+ "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
13
+ "ANTHROPIC_DEFAULT_HAIKU_MODEL": "glm-4.5-air",
14
+ "ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-4.5",
15
+ "ANTHROPIC_DEFAULT_OPUS_MODEL": "glm-5"
16
+ }
17
+ },
18
+ {
19
+ "name": "Claude",
20
+ "tag": "claude",
9
21
  "provider": "claude",
10
- "model": "opus",
11
- "systemPrompt": ["./prompts/base.md"],
12
- "executablePath": "~/.local/bin/claude"
22
+ "model": "opus"
13
23
  },
14
24
  {
15
- "name": "Agent 3",
25
+ "name": "Copilot",
16
26
  "tag": "copilot",
17
27
  "provider": "copilot",
18
- "model": "gpt-4.1",
19
- "systemPrompt": ["./prompts/base.md"]
28
+ "model": "gpt-4.1"
29
+ },
30
+ {
31
+ "name": "Codex",
32
+ "tag": "codex",
33
+ "provider": "codex",
34
+ "model": "gpt-5.2"
20
35
  }
21
36
  ]
22
37
  }
@@ -12,7 +12,7 @@ export class ClaudeAdapter {
12
12
  this.model = model;
13
13
  }
14
14
  async init(config) {
15
- this.systemPrompt = Array.isArray(config.systemPrompt) ? config.systemPrompt.join("\n\n") : config.systemPrompt;
15
+ this.systemPrompt = config.resolvedPrompt ?? "";
16
16
  this.runtimeEnv = { ...process.env, ...(config.env ?? {}) };
17
17
  this.claudeExecutable = config.executablePath ?? process.env.CLAUDE_CODE_EXECUTABLE;
18
18
  }
@@ -11,9 +11,7 @@ export class CodexAdapter {
11
11
  }
12
12
  async init(config) {
13
13
  const cliPath = config.executablePath ?? process.env.CODEX_CLI_EXECUTABLE;
14
- const systemPrompt = Array.isArray(config.systemPrompt)
15
- ? config.systemPrompt.join("\n\n")
16
- : config.systemPrompt;
14
+ const systemPrompt = config.resolvedPrompt ?? "";
17
15
  this.codex = new Codex({
18
16
  ...(cliPath ? { codexPathOverride: cliPath } : {}),
19
17
  ...(config.env?.OPENAI_API_KEY ? { apiKey: config.env.OPENAI_API_KEY } : {}),
@@ -10,9 +10,7 @@ export class CopilotAdapter {
10
10
  this.model = model;
11
11
  }
12
12
  async init(config) {
13
- const systemPrompt = Array.isArray(config.systemPrompt)
14
- ? config.systemPrompt.join("\n\n")
15
- : config.systemPrompt;
13
+ const systemPrompt = config.resolvedPrompt ?? "";
16
14
  const cliPath = config.executablePath ?? process.env.COPILOT_CLI_EXECUTABLE;
17
15
  this.client = new CopilotClient({
18
16
  ...(cliPath ? { cliPath } : {}),
@@ -13,7 +13,7 @@ export class GlmAdapter {
13
13
  this.model = model;
14
14
  }
15
15
  async init(config) {
16
- this.systemPrompt = Array.isArray(config.systemPrompt) ? config.systemPrompt.join("\n\n") : config.systemPrompt;
16
+ this.systemPrompt = config.resolvedPrompt ?? "";
17
17
  const aliasEnv = await loadGlmAliasEnv();
18
18
  this.runtimeEnv = { ...process.env, ...aliasEnv, ...(config.env ?? {}) };
19
19
  this.claudeExecutable = config.executablePath ?? process.env.CLAUDE_CODE_EXECUTABLE;