llm-party-cli 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -48,41 +48,69 @@ No MCP. No master/servant. No window juggling. Just peers at a terminal table.
 
  ## Getting started
 
- ### Install and run
+ ### Prerequisites
+
+ Node.js 20+ is required (22+ if using the Copilot provider, which depends on `node:sqlite`). Make sure at least one AI CLI is installed and authenticated:
+
+ ```bash
+ claude --version # Claude Code CLI
+ codex --version # OpenAI Codex CLI
+ copilot --version # GitHub Copilot CLI
+ ```
+
+ If a CLI doesn't work on its own, it won't work inside llm-party.
+
+ ### Install
 
  ```bash
  npm install -g llm-party-cli
+ ```
+
+ ### First run
+
+ ```bash
  llm-party
  ```
 
- That's it. Agents use your current working directory. Config defaults are included in the package.
+ On first run, llm-party creates `~/.llm-party/` with a default config and base prompt. Your system username is detected automatically. A single Claude agent is configured out of the box.
 
- ### Set up your agents
+ To re-run setup or reset your config:
 
- Edit `configs/default.json`. Each agent needs a name, provider, and model:
+ ```bash
+ llm-party --init
+ ```
+
+ ### Add more agents
+
+ Edit `~/.llm-party/config.json`:
 
  ```json
  {
- "humanName": "YOUR NAME",
  "agents": [
  {
  "name": "Claude",
  "tag": "claude",
  "provider": "claude",
- "model": "opus",
- "systemPrompt": ["./prompts/base.md"]
+ "model": "opus"
  },
  {
  "name": "Codex",
  "tag": "codex",
  "provider": "codex",
- "model": "gpt-5.2",
- "systemPrompt": ["./prompts/base.md"]
+ "model": "gpt-5.2"
+ },
+ {
+ "name": "Copilot",
+ "tag": "copilot",
+ "provider": "copilot",
+ "model": "gpt-4.1"
  }
  ]
  }
  ```
 
+ That's it. No paths, no prompts, no usernames to configure. Just name, tag, provider, model.
+
  ### Talk to your agents
 
  ```
@@ -100,22 +128,14 @@ Agents can pass the conversation to each other by ending their response with `@n
 
  <br/>
 
- ## Before you start
+ ## Important notes
 
- **Verify your CLIs work first.** Before adding an agent to `configs/default.json`, make sure its CLI is installed and authenticated:
-
- ```bash
- claude --version # Claude Code CLI
- codex --version # OpenAI Codex CLI
- copilot --version # GitHub Copilot CLI
- ```
-
- If a CLI doesn't work on its own, it won't work inside llm-party.
-
- **No extra API tokens.** llm-party uses the original CLIs and SDKs under the hood. Your existing authentication and subscriptions are used directly. Sessions created by agents appear in each tool's native session history (Claude Code sessions, Codex threads, etc.) since the underlying SDKs manage their own persistence.
+ **No extra API tokens.** llm-party uses the original CLIs and SDKs under the hood. Your existing authentication and subscriptions are used directly. Sessions created by agents appear in each tool's native session history (Claude Code sessions, Codex threads, etc.).
 
  **Run in isolation.** Always run llm-party inside a disposable environment: a Docker container, a VM, or at minimum a throwaway git branch. Agents have full filesystem and shell access with zero approval gates.
 
+ **Full permissions.** All agents can read, write, edit files and execute shell commands. There is no confirmation step before any action. You are responsible for any changes, data loss, costs, or side effects.
+
  <br/>
 
  ## How we use the SDKs
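The `@next:` handoff convention referenced in the hunk headers above (an agent ends its response with `@next:tag` to pass the turn, or `@next:you` to return control to the human) can be sketched as below. `parseHandoff` is a hypothetical helper for illustration, not the package's actual routing code:

```javascript
// Sketch: detect a trailing handoff marker such as "@next:codex" at the
// end of an agent response. Returns the target tag, or null if the
// response does not end with a handoff.
function parseHandoff(response) {
  const match = response.trimEnd().match(/@next:([a-z0-9-]+)\s*$/i);
  return match ? match[1].toLowerCase() : null;
}
```

The orchestrator would then route the next turn to the agent whose tag matches, stopping when the target is the human's tag.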
@@ -128,16 +148,18 @@ llm-party uses **official, publicly available SDKs and CLIs** published by each
  | Codex | [`@openai/codex-sdk`](https://www.npmjs.com/package/@openai/codex-sdk) | OpenAI |
  | Copilot | [`@github/copilot-sdk`](https://www.npmjs.com/package/@github/copilot-sdk) | GitHub |
 
- All authentication flows through the provider's own CLI login. Your API keys, OAuth tokens, and subscriptions are used as-is. llm-party does not store, proxy, or intercept credentials.
+ All authentication flows through the provider's own CLI login. llm-party does not store, proxy, or intercept credentials.
 
  If any provider believes this project violates their terms of service, please [open an issue](https://github.com/aalasolutions/llm-party/issues) and we will address it immediately.
 
+ <br/>
+
  ## Supported providers
 
- | Provider | SDK | Session | System Prompt |
+ | Provider | SDK | Session | Prompt Support |
  |----------|-----|---------|---------------|
  | **Claude** | `@anthropic-ai/claude-agent-sdk` | Persistent via session ID resume | Full control |
- | **Codex** | `@openai/codex-sdk` | Persistent thread with `run()` turns | Via `developer_instructions` (see limitations) |
+ | **Codex** | `@openai/codex-sdk` | Persistent thread with `run()` turns | Via `developer_instructions` (limitations below) |
  | **Copilot** | `@github/copilot-sdk` | Persistent via `sendAndWait()` | Full control |
  | **GLM** | Claude SDK + env proxy | Same as Claude | Full control |
 
@@ -149,8 +171,6 @@ If any provider believes this project violates their terms of service, please [o
 
  ## How it works
 
- Most multi-agent setups use MCP (one agent controls others) or CLI wrapping (spawn processes and scrape terminal output). Both are fragile and hierarchical.
-
  llm-party uses SDK adapters directly. Each agent gets a persistent session with its provider. Full tool access. Real conversation threading. The orchestrator owns routing, agents are peers.
 
  ```
@@ -172,6 +192,8 @@ Orchestrator
 
  Each agent receives a rolling window of recent messages (default 16) plus any unseen messages since its last turn. Messages from other agents are included so everyone sees the full multi-party conversation.
 
+ `~/.llm-party/config.json` is your global config. `base.md` is always loaded first for every agent. The `prompts` field in config adds extra prompt files on top of it.
+
  <br/>
 
  ## Provider details
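The context-selection rule described in the hunk above (a rolling window of the last 16 messages, plus any unseen messages since the agent's previous turn) could be implemented roughly as follows. This is an illustrative sketch under those stated assumptions; `selectContext` and its parameters are not the package's actual internals:

```javascript
// Sketch of the rolling-window rule: an agent sees the last `windowSize`
// messages, extended backwards to cover anything that arrived after its
// previous turn (`lastSeenIndex` = index of the last message it saw).
function selectContext(history, lastSeenIndex, windowSize = 16) {
  const windowStart = Math.max(0, history.length - windowSize);
  const firstUnseen = Math.max(0, lastSeenIndex + 1);
  // Take whichever boundary reaches further back, so unseen messages
  // older than the window are still delivered exactly once.
  return history.slice(Math.min(windowStart, firstUnseen));
}
```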
@@ -182,11 +204,8 @@ Each agent receives a rolling window of recent messages (default 16) plus any un
  |---|---|
  | SDK | `@anthropic-ai/claude-agent-sdk` |
  | Session | Persistent via `resume: sessionId`. First call creates a session, subsequent calls resume it. |
- | System prompt | Passed directly to the SDK via `options.systemPrompt`. Full control. |
+ | Prompt | Passed directly to the SDK. Full control over personality, behavior, and workflow rules. |
  | Tools | Read, Write, Edit, Bash, Glob, Grep |
- | Permissions | `permissionMode: "bypassPermissions"` (all tools auto-approved) |
-
- System prompt works exactly as expected. Personality, behavior, workflow rules all respected.
 
  ### Codex
 
@@ -194,109 +213,101 @@ System prompt works exactly as expected. Personality, behavior, workflow rules a
  |---|---|
  | SDK | `@openai/codex-sdk` |
  | Session | Persistent thread. `startThread()` creates it, `thread.run()` adds turns to the same conversation. |
- | System prompt | Injected via `developer_instructions` config key, passed as `--config` flag to the CLI subprocess. |
- | Tools | exec_command, apply_patch, file operations (Codex built-in toolset) |
- | Permissions | `sandboxMode: "danger-full-access"`, `approvalPolicy: "never"` |
-
- **Known limitation:** Codex ships with a massive built-in system prompt (~13k tokens) that cannot be overridden. Your `developer_instructions` are appended alongside it, not replacing it. This means:
+ | Prompt | Injected via `developer_instructions` config key. Appended alongside Codex's built-in 13k token system prompt. |
+ | Tools | exec_command, apply_patch, file operations |
 
- - **Works:** Action instructions (create files, follow naming conventions), formatting rules (prefix responses with agent name), workflow rules (handoff syntax, routing tags)
- - **Does not work:** Personality overrides, identity changes, behavioral rewrites
-
- We tested `instructions`, `developer_instructions`, and `experimental_instructions_file`. All three append to the built-in prompt. None replace it.
-
- **Also observed:** Codex is aggressive with file operations. When asked to "create your file," it read the orchestrator source code, ran the Codex CLI, and modified `src/ui/terminal.ts` instead of just creating a simple markdown file.
+ **Known limitation:** Codex's built-in system prompt cannot be overridden. Your instructions are appended alongside it. Action instructions (naming, formatting, workflow rules) work. Personality overrides do not.
 
  ### Copilot
 
  | | |
  |---|---|
  | SDK | `@github/copilot-sdk` |
- | Session | Persistent via `CopilotClient.createSession()`. Messages sent with `session.sendAndWait()`. |
- | System prompt | Set as `systemMessage: { content: prompt }` on session creation. |
+ | Session | Persistent via `CopilotClient.createSession()`. |
+ | Prompt | Set as `systemMessage` on session creation. Full control. |
  | Tools | Copilot built-in toolset |
- | Permissions | `onPermissionRequest: approveAll` (all actions auto-approved) |
-
- System prompt works as expected.
 
  ### GLM
 
  | | |
  |---|---|
  | SDK | `@anthropic-ai/claude-agent-sdk` (same as Claude) |
- | Session | Same as Claude, but routed through a proxy. |
- | System prompt | Same as Claude. Full control. |
+ | Session | Same as Claude, routed through a proxy via env overrides. |
+ | Prompt | Same as Claude. Full control. |
  | Tools | Same as Claude |
- | Permissions | Same as Claude |
 
- GLM is not tied to any specific CLI. It uses the Claude SDK as the transport layer because Claude Code supports environment variable overrides for base URL and model aliases, making it a convenient proxy bridge. Any CLI that supports similar env-based routing could be swapped in.
+ GLM uses the Claude SDK as a transport layer. The adapter routes API calls through a proxy by setting `ANTHROPIC_BASE_URL` and model aliases via the `env` config field.
 
  <br/>
 
  ## Config reference
 
- Config file: `configs/default.json`. Override with `LLM_PARTY_CONFIG` env var.
+ Config file: `~/.llm-party/config.json` (created on first run).
+
+ Override with `LLM_PARTY_CONFIG` env var to point to a different file.
 
  ### Top-level fields
 
  | Field | Required | Default | Description |
  |-------|----------|---------|-------------|
- | `humanName` | No | `USER` | Your name displayed in the terminal prompt and passed to agents |
- | `humanTag` | No | derived from `humanName` | Tag used for human handoff detection. When an agent says `@next:you`, the orchestrator stops and returns control to you |
- | `maxAutoHops` | No | `15` | Max agent-to-agent handoffs per cycle. Prevents infinite loops. Use `"unlimited"` to remove the cap |
- | `timeout` | No | `600` | Default timeout in seconds for all agents. 10 minutes by default |
- | `agents` | Yes | | Array of agent definitions. Must have at least one |
+ | `humanName` | No | Your system username | Display name in the terminal prompt and passed to agents |
+ | `humanTag` | No | derived from `humanName` | Tag for human handoff detection (`@next:you`) |
+ | `maxAutoHops` | No | `15` | Max agent-to-agent handoffs per cycle. Use `"unlimited"` to remove the cap |
+ | `timeout` | No | `600` | Default timeout in seconds for all agents |
+ | `agents` | Yes | | Array of agent definitions |
 
  ### Agent fields
 
  | Field | Required | Default | Description |
  |-------|----------|---------|-------------|
  | `name` | Yes | | Display name shown in responses as `[AGENT NAME]` |
- | `tag` | No | derived from `name` | Routing tag for `@tag` targeting. Auto-generated as lowercase with dashes if omitted |
+ | `tag` | No | derived from `name` | Routing tag for `@tag` targeting |
  | `provider` | Yes | | SDK adapter: `claude`, `codex`, `copilot`, or `glm` |
  | `model` | Yes | | Model ID passed to the provider. Examples: `opus`, `sonnet`, `gpt-5.2`, `gpt-4.1`, `glm-5` |
- | `systemPrompt` | Yes | | Path or array of paths to prompt markdown files. Relative to project root |
- | `executablePath` | No | PATH lookup | Path to the CLI binary. Supports `~/` for home directory. Only needed if the CLI is not in your PATH |
- | `env` | No | inherits `process.env` | Environment variable overrides for this agent's process |
- | `timeout` | No | top-level value | Per-agent timeout in seconds. Overrides the top-level default |
+ | `prompts` | No | none | Array of extra prompt file paths, concatenated after `base.md`. Relative to project root |
+ | `executablePath` | No | PATH lookup | Path to the CLI binary. Supports `~/`. Only needed if the CLI is not in your PATH |
+ | `env` | No | inherits `process.env` | Environment variable overrides for this agent |
+ | `timeout` | No | top-level value | Per-agent timeout override in seconds |
+
+ ### Prompts
 
- ### System prompts
+ `base.md` is always loaded first for every agent. It defines orchestrator rules, routing syntax, handoff protocol, and team context. It lives at `~/.llm-party/base.md` (copied from the package on first run).
 
- Single file or multiple files merged in order:
+ To add extra instructions per agent, use the `prompts` field:
 
  ```json
- "systemPrompt": "./prompts/base.md"
- "systemPrompt": ["./prompts/base.md", "./prompts/reviewer.md"]
+ {
+ "name": "Reviewer",
+ "tag": "reviewer",
+ "provider": "claude",
+ "model": "opus",
+ "prompts": ["./prompts/code-review.md"]
+ }
  ```
 
- Files are concatenated with `---` separators, then template variables are replaced. Available variables:
+ The final prompt sent to the agent is: `base.md` + `prompts[0]` + `prompts[1]` + ... joined with `---` separators. Template variables are rendered after concatenation:
 
  | Variable | Description |
  |----------|-------------|
  | `{{agentName}}` | This agent's display name |
  | `{{agentTag}}` | This agent's routing tag |
- | `{{humanName}}` | The human's display name |
- | `{{humanTag}}` | The human's routing tag |
+ | `{{humanName}}` | Your display name |
+ | `{{humanTag}}` | Your routing tag |
  | `{{agentCount}}` | Total number of active agents |
- | `{{allAgentNames}}` | All agent names, comma-separated |
+ | `{{allAgentNames}}` | All agent names |
  | `{{allAgentTags}}` | All agent tags as `@tag` |
- | `{{otherAgentList}}` | Other agents formatted as `- Name: use @tag` |
- | `{{otherAgentNames}}` | Other agent names, comma-separated |
- | `{{validHandoffTargets}}` | Valid `@next:tag` values for handoff |
+ | `{{otherAgentList}}` | Other agents with their tags |
+ | `{{validHandoffTargets}}` | Valid `@next:tag` targets |
 
  ### GLM environment setup
 
- GLM requires environment overrides to route through a proxy. The adapter first tries to load env variables from your shell `glm` alias (`zsh -ic "alias glm"`). If you have a `glm` alias that sets `ANTHROPIC_AUTH_TOKEN` and `ANTHROPIC_BASE_URL`, it picks those up automatically.
-
- Without the alias, provide everything in the `env` block:
+ GLM requires environment overrides to route through a proxy. The adapter first tries to load env variables from your shell `glm` alias. Without the alias, provide everything in the `env` block:
 
  ```json
  {
- "name": "GLM Agent",
+ "name": "GLM",
  "provider": "glm",
  "model": "glm-5",
- "systemPrompt": ["./prompts/base.md"],
- "executablePath": "~/.local/bin/claude",
  "env": {
  "ANTHROPIC_AUTH_TOKEN": "your-glm-api-key",
  "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
@@ -311,11 +322,9 @@ Without the alias, provide everything in the `env` block:
 
  ## Session and transcript
 
- Every run generates a unique session ID and appends messages to a JSONL transcript file under `.llm-party/sessions/`. The session ID and transcript path are printed at startup.
-
- File changes made by agents during their turns are detected via `git status` after each response cycle. Newly modified files are printed with timestamps.
+ Every run generates a unique session ID and appends messages to a JSONL transcript in `.llm-party/sessions/` (project-level). The session ID and transcript path are printed at startup.
 
- Use `/save <path>` to export the full in-memory conversation as formatted JSON.
+ File changes made by agents are detected via `git status` after each response. Newly modified files are printed with timestamps.
 
  <br/>
 
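The file-change detection described in the hunk above (`git status` after each response) could be approximated by parsing porcelain output. This is a sketch for illustration, not the package's actual implementation; it only parses a captured output string rather than spawning git:

```javascript
// Sketch: parse `git status --porcelain` output into changed file paths.
// Each line is two status characters (e.g. " M", "??"), a space, then the
// path. Rename entries ("R  old -> new") keep the full "old -> new" pair.
function parseChangedFiles(porcelainOutput) {
  return porcelainOutput
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => line.slice(3));
}
```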
@@ -341,14 +350,14 @@ npm install
  npm run dev
  ```
 
- Build and run from dist:
+ Build and run:
 
  ```bash
  npm run build
  npm start
  ```
 
- Override config path:
+ Override config:
 
  ```bash
  LLM_PARTY_CONFIG=/path/to/config.json npm run dev
@@ -358,31 +367,23 @@ LLM_PARTY_CONFIG=/path/to/config.json npm run dev
 
  ## Troubleshooting
 
- **"ENOENT for prompt path"**
- Your `systemPrompt` points to a file that does not exist. Paths are relative to project root. Verify with `ls prompts/`.
-
  **"No agent matched @tag"**
- The tag you typed does not match any agent's `tag`, `name`, or `provider`. Run `/agents` to see what is available.
+ Run `/agents` to see available tags. Tags match against agent `tag`, `name`, and `provider`.
 
  **"Unsupported provider"**
- Your config has a provider value that is not one of: `claude`, `codex`, `copilot`, `glm`.
+ Valid providers: `claude`, `codex`, `copilot`, `glm`.
 
  **Agent modifies source code unexpectedly**
- Expected behavior with full-access permissions. Agents can read, write, and execute anything. Use git to review and revert. Codex in particular is aggressive with file operations.
+ Expected with full permissions. Use git to review and revert.
 
  **Codex ignores personality instructions**
- Known limitation. Codex has a 13k+ token built-in system prompt that overrides personality and identity instructions. Functional instructions (naming, workflow, formatting) still work.
+ Known limitation. Codex's 13k token built-in prompt overrides personality. Functional instructions still work.
 
- **Agent response timeout**
- Claude and Copilot have a 120-second timeout. GLM has 240 seconds. If an agent consistently times out, check your API keys and network connectivity.
-
- <br/>
+ **"ERR_UNKNOWN_BUILTIN_MODULE: node:sqlite"**
+ Your Node.js version is below 22. The Copilot SDK requires Node.js 22+.
 
- ## Warning
-
- All agents run with **full permissions**. They can read, write, edit files and execute shell commands. There is no confirmation step before any action.
-
- You are responsible for any changes, data loss, costs, or side effects. Do not run against production systems or repos you cannot recover from.
+ **Agent response timeout**
+ Default is 600 seconds (10 minutes). Adjust with `timeout` in config (top-level or per-agent).
 
  <br/>
 
package/configs/default.json CHANGED
@@ -1,22 +1,10 @@
  {
- "humanName": "AAMIR",
- "humanTag": "aamir",
- "maxAutoHops": 15,
  "agents": [
  {
- "name": "Agent 2",
- "tag": "opus",
+ "name": "Claude",
+ "tag": "claude",
  "provider": "claude",
- "model": "opus",
- "systemPrompt": ["./prompts/base.md"],
- "executablePath": "~/.local/bin/claude"
- },
- {
- "name": "Agent 3",
- "tag": "copilot",
- "provider": "copilot",
- "model": "gpt-4.1",
- "systemPrompt": ["./prompts/base.md"]
+ "model": "sonnet"
  }
  ]
  }
package/dist/adapters/claude.js CHANGED
@@ -12,7 +12,7 @@ export class ClaudeAdapter {
  this.model = model;
  }
  async init(config) {
- this.systemPrompt = Array.isArray(config.systemPrompt) ? config.systemPrompt.join("\n\n") : config.systemPrompt;
+ this.systemPrompt = config.resolvedPrompt ?? "";
  this.runtimeEnv = { ...process.env, ...(config.env ?? {}) };
  this.claudeExecutable = config.executablePath ?? process.env.CLAUDE_CODE_EXECUTABLE;
  }
package/dist/adapters/codex.js CHANGED
@@ -11,9 +11,7 @@ export class CodexAdapter {
  }
  async init(config) {
  const cliPath = config.executablePath ?? process.env.CODEX_CLI_EXECUTABLE;
- const systemPrompt = Array.isArray(config.systemPrompt)
- ? config.systemPrompt.join("\n\n")
- : config.systemPrompt;
+ const systemPrompt = config.resolvedPrompt ?? "";
  this.codex = new Codex({
  ...(cliPath ? { codexPathOverride: cliPath } : {}),
  ...(config.env?.OPENAI_API_KEY ? { apiKey: config.env.OPENAI_API_KEY } : {}),
package/dist/adapters/copilot.js CHANGED
@@ -10,9 +10,7 @@ export class CopilotAdapter {
  this.model = model;
  }
  async init(config) {
- const systemPrompt = Array.isArray(config.systemPrompt)
- ? config.systemPrompt.join("\n\n")
- : config.systemPrompt;
+ const systemPrompt = config.resolvedPrompt ?? "";
  const cliPath = config.executablePath ?? process.env.COPILOT_CLI_EXECUTABLE;
  this.client = new CopilotClient({
  ...(cliPath ? { cliPath } : {}),
package/dist/adapters/glm.js CHANGED
@@ -13,7 +13,7 @@ export class GlmAdapter {
  this.model = model;
  }
  async init(config) {
- this.systemPrompt = Array.isArray(config.systemPrompt) ? config.systemPrompt.join("\n\n") : config.systemPrompt;
+ this.systemPrompt = config.resolvedPrompt ?? "";
  const aliasEnv = await loadGlmAliasEnv();
  this.runtimeEnv = { ...process.env, ...aliasEnv, ...(config.env ?? {}) };
  this.claudeExecutable = config.executablePath ?? process.env.CLAUDE_CODE_EXECUTABLE;
package/dist/config/loader.js CHANGED
@@ -1,6 +1,8 @@
- import { readFile } from "node:fs/promises";
- import { homedir } from "node:os";
+ import { readFile, access, mkdir, copyFile } from "node:fs/promises";
+ import { homedir, userInfo } from "node:os";
+ import path from "node:path";
  const VALID_PROVIDERS = ["claude", "codex", "copilot", "glm"];
+ const LLM_PARTY_HOME = path.join(homedir(), ".llm-party");
  function validateConfig(data) {
  if (!data || typeof data !== "object") {
  throw new Error("Config must be an object");
@@ -26,11 +28,10 @@ function validateConfig(data) {
  if (!VALID_PROVIDERS.includes(agent.provider)) {
  throw new Error(`Agent '${agent.name}' has invalid provider '${agent.provider}'. Valid: ${VALID_PROVIDERS.join(", ")}`);
  }
- if (agent.systemPrompt !== undefined) {
- const isString = typeof agent.systemPrompt === "string";
- const isArray = Array.isArray(agent.systemPrompt) && agent.systemPrompt.every((p) => typeof p === "string");
- if (!isString && !isArray) {
- throw new Error(`Agent '${agent.name}' systemPrompt must be a string or string array`);
+ if (agent.prompts !== undefined) {
+ const isArray = Array.isArray(agent.prompts) && agent.prompts.every((p) => typeof p === "string");
+ if (!isArray) {
+ throw new Error(`Agent '${agent.name}' prompts must be a string array`);
  }
  }
  }
@@ -39,10 +40,74 @@ function validateConfig(data) {
  agent.executablePath = homedir() + agent.executablePath.slice(1);
  }
  }
+ if (!cfg.humanName || (typeof cfg.humanName === "string" && cfg.humanName.trim() === "")) {
+ cfg.humanName = userInfo().username || "USER";
+ }
  return cfg;
  }
- export async function loadConfig(path) {
- const raw = await readFile(path, "utf8");
+ export async function resolveConfigPath(appRoot) {
+ if (process.env.LLM_PARTY_CONFIG) {
+ return path.resolve(process.env.LLM_PARTY_CONFIG);
+ }
+ const globalConfig = path.join(LLM_PARTY_HOME, "config.json");
+ try {
+ await access(globalConfig);
+ return globalConfig;
+ }
+ catch {
+ // fall through to package default
+ }
+ return path.join(appRoot, "configs", "default.json");
+ }
+ export async function resolveBasePrompt(appRoot) {
+ const globalBase = path.join(LLM_PARTY_HOME, "base.md");
+ try {
+ await access(globalBase);
+ return await readFile(globalBase, "utf8");
+ }
+ catch {
+ // fall through to bundled
+ }
+ const bundledBase = path.join(appRoot, "prompts", "base.md");
+ return await readFile(bundledBase, "utf8");
+ }
+ export async function initLlmPartyHome(appRoot) {
+ await mkdir(LLM_PARTY_HOME, { recursive: true });
+ await mkdir(path.join(LLM_PARTY_HOME, "sessions"), { recursive: true });
+ const globalBase = path.join(LLM_PARTY_HOME, "base.md");
+ try {
+ await access(globalBase);
+ }
+ catch {
+ const bundledBase = path.join(appRoot, "prompts", "base.md");
+ await copyFile(bundledBase, globalBase);
+ }
+ const globalConfig = path.join(LLM_PARTY_HOME, "config.json");
+ try {
+ await access(globalConfig);
+ }
+ catch {
+ const username = userInfo().username || "USER";
+ const defaultConfig = {
+ humanName: username,
+ agents: [
+ {
+ name: "Claude",
+ tag: "claude",
+ provider: "claude",
+ model: "sonnet"
+ }
+ ]
+ };
+ const { writeFile } = await import("node:fs/promises");
+ await writeFile(globalConfig, JSON.stringify(defaultConfig, null, 2) + "\n", "utf8");
+ }
+ console.log(`Initialized ~/.llm-party/`);
+ console.log(`  config: ${globalConfig}`);
+ }
+ export async function loadConfig(configPath) {
+ const raw = await readFile(configPath, "utf8");
  const parsed = JSON.parse(raw);
  return validateConfig(parsed);
  }
+ export { LLM_PARTY_HOME };
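After concatenation (described in the README hunk earlier: `base.md` plus extras joined with `---`), the `{{...}}` template variables are rendered. The renderer itself does not appear in this diff, so the following is a hypothetical sketch of that step; `renderTemplate` is an illustrative name:

```javascript
// Hypothetical sketch of the template-rendering step: replace each
// {{variable}} with its value from `vars`, leaving any unknown
// variable untouched rather than erasing it.
function renderTemplate(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match
  );
}
```

In the package this would run once per agent, with values such as `agentName`, `agentTag`, and `humanName` drawn from the resolved config.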
package/dist/index.js CHANGED
@@ -7,26 +7,39 @@ import { ClaudeAdapter } from "./adapters/claude.js";
  import { CodexAdapter } from "./adapters/codex.js";
  import { CopilotAdapter } from "./adapters/copilot.js";
  import { GlmAdapter } from "./adapters/glm.js";
- import { loadConfig } from "./config/loader.js";
+ import { loadConfig, resolveConfigPath, resolveBasePrompt, initLlmPartyHome, LLM_PARTY_HOME } from "./config/loader.js";
  import { Orchestrator } from "./orchestrator.js";
  import { runTerminal } from "./ui/terminal.js";
  async function main() {
  const appRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "..");
- const configPath = process.env.LLM_PARTY_CONFIG
- ? path.resolve(process.env.LLM_PARTY_CONFIG)
- : path.join(appRoot, "configs", "default.json");
+ if (process.argv.includes("--init")) {
+ await initLlmPartyHome(appRoot);
+ return;
+ }
+ try {
+ const { access } = await import("node:fs/promises");
+ await access(LLM_PARTY_HOME);
+ }
+ catch {
+ console.log("First run detected. Setting up ~/.llm-party/...\n");
+ await initLlmPartyHome(appRoot);
+ }
+ const configPath = await resolveConfigPath(appRoot);
  const config = await loadConfig(configPath);
  const humanName = config.humanName?.trim() || "USER";
  const humanTag = config.humanTag?.trim() || toTag(humanName);
  const maxAutoHops = resolveMaxAutoHops(config.maxAutoHops);
+ const basePrompt = await resolveBasePrompt(appRoot);
  const resolveFromAppRoot = (value) => {
  return path.isAbsolute(value) ? value : path.resolve(appRoot, value);
  };
- const adapters = await Promise.all(config.agents.map(async (agent, index, allAgents) => {
- const promptPaths = Array.isArray(agent.systemPrompt)
- ? agent.systemPrompt.map((p) => resolveFromAppRoot(p))
- : [resolveFromAppRoot(agent.systemPrompt)];
- const promptParts = await Promise.all(promptPaths.map((p) => readFile(p, "utf8")));
+ const adapters = await Promise.all(config.agents.map(async (agent, _index, allAgents) => {
+ const promptParts = [basePrompt];
+ if (agent.prompts && agent.prompts.length > 0) {
+ const extraPaths = agent.prompts.map((p) => resolveFromAppRoot(p));
+ const extraParts = await Promise.all(extraPaths.map((p) => readFile(p, "utf8")));
+ promptParts.push(...extraParts);
+ }
  const promptTemplate = promptParts.join("\n\n---\n\n");
  const peers = allAgents.filter((candidate) => candidate.name !== agent.name);
  const tag = agent.tag?.trim() || toTag(agent.name);
@@ -63,9 +76,9 @@ async function main() {
  ? new GlmAdapter(agent.name, agent.model)
  : null;
  if (!adapter) {
- throw new Error(`Unsupported provider in Phase 1: ${agent.provider}`);
+ throw new Error(`Unsupported provider: ${agent.provider}`);
  }
- await adapter.init({ ...agent, systemPrompt: prompt });
+ await adapter.init({ ...agent, resolvedPrompt: prompt });
  return adapter;
  }));
  const defaultTimeout = typeof config.timeout === "number" && config.timeout > 0
package/package.json CHANGED
@@ -1,12 +1,15 @@
  {
  "name": "llm-party-cli",
- "version": "0.1.0",
+ "version": "0.1.1",
  "type": "module",
  "bin": {
  "llm-party": "dist/index.js"
  },
  "description": "Bring your models. We'll bring the party.",
  "license": "MIT",
+ "engines": {
+ "node": ">=22.0.0"
+ },
  "homepage": "https://llm-party.party",
  "repository": {
  "type": "git",
@@ -47,7 +50,8 @@
  "@github/copilot-sdk": "^0.1.33-preview.2",
  "@openai/codex-sdk": "^0.115.0",
  "chalk": "^5.3.0",
- "dotenv": "^16.4.5"
+ "dotenv": "^16.4.5",
+ "feather-icons": "^4.29.2"
  },
  "devDependencies": {
  "@types/node": "^22.10.2",