llm-party-cli 0.1.1 → 0.2.0

package/README.md CHANGED
@@ -4,8 +4,8 @@
4
4
  <strong>Bring your models. We'll bring the party.</strong>
5
5
  </p>
6
6
  <p align="center">
7
- <a href="https://llm-party.party">Website</a> &middot;
8
- <a href="https://www.npmjs.com/package/llm-party-cli">npm</a> &middot;
7
+ <a href="https://llm-party.party">Website</a> ·
8
+ <a href="https://www.npmjs.com/package/llm-party-cli">npm</a> ·
9
9
  <a href="https://github.com/aalasolutions/llm-party">GitHub</a>
10
10
  </p>
11
11
  <p align="center">
@@ -36,13 +36,13 @@ No MCP. No master/servant. No window juggling. Just peers at a terminal table.
36
36
 
37
37
  ## Why llm-party?
38
38
 
39
- | | Traditional multi-agent | llm-party |
40
- |---|---|---|
41
- | **Architecture** | MCP (master controls servants) | Peer orchestration (you control all) |
42
- | **Integration** | CLI wrapping, output scraping | Direct SDK adapters |
43
- | **Sessions** | Fresh each time | Persistent per provider |
44
- | **Context** | Agents are siloed | Every agent sees the full conversation |
45
- | **API tokens** | Separate keys per tool | Uses your existing CLI auth |
39
+ | | Traditional multi-agent | llm-party |
40
+ | ---------------------- | ------------------------------ | -------------------------------------- |
41
+ | **Architecture** | MCP (master controls servants) | Peer orchestration (you control all) |
42
+ | **Integration** | CLI wrapping, output scraping | Direct SDK adapters |
43
+ | **Sessions** | Fresh each time | Persistent per provider |
44
+ | **Context** | Agents are siloed | Every agent sees the full conversation |
45
+ | **API tokens** | Separate keys per tool | Uses your existing CLI auth |
46
46
 
47
47
  <br/>
48
48
 
@@ -50,12 +50,13 @@ No MCP. No master/servant. No window juggling. Just peers at a terminal table.
50
50
 
51
51
  ### Prerequisites
52
52
 
53
- Node.js 20+ is required (22+ if using the Copilot provider, which depends on `node:sqlite`). Make sure at least one AI CLI is installed and authenticated:
53
+ [Bun](https://bun.sh) runtime is required. At least one AI CLI must be installed and authenticated:
54
54
 
55
55
  ```bash
56
- claude --version # Claude Code CLI
57
- codex --version # OpenAI Codex CLI
58
- copilot --version # GitHub Copilot CLI
56
+ bun --version # Bun runtime
57
+ claude --version # Claude Code CLI
58
+ codex --version # OpenAI Codex CLI
59
+ copilot --version # GitHub Copilot CLI
59
60
  ```
60
61
 
61
62
  If a CLI doesn't work on its own, it won't work inside llm-party.
@@ -63,7 +64,7 @@ If a CLI doesn't work on its own, it won't work inside llm-party.
63
64
  ### Install
64
65
 
65
66
  ```bash
66
- npm install -g llm-party-cli
67
+ bun add -g llm-party-cli
67
68
  ```
68
69
 
69
70
  ### First run
@@ -72,13 +73,7 @@ npm install -g llm-party-cli
72
73
  llm-party
73
74
  ```
74
75
 
75
- On first run, llm-party creates `~/.llm-party/` with a default config and base prompt. Your system username is detected automatically. A single Claude agent is configured out of the box.
76
-
77
- To re-run setup or reset your config:
78
-
79
- ```bash
80
- llm-party --init
81
- ```
76
+ On first run, llm-party creates `~/.llm-party/` with a default config and global memory structure. Your system username is detected automatically. No setup commands needed.
82
77
 
83
78
  ### Add more agents
84
79
 
@@ -126,11 +121,13 @@ That's it. No paths, no prompts, no usernames to configure. Just name, tag, prov
126
121
 
127
122
  Agents can pass the conversation to each other by ending their response with `@next:<tag>`. The orchestrator picks it up and dispatches automatically. Max 15 hops per cycle to prevent loops.
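
The handoff convention above can be sketched as a small parser. This is an illustrative sketch only — `parseHandoff` and `nextAgent` are hypothetical names, not llm-party's actual internals:

```javascript
// Sketch of the @next:<tag> handoff convention (illustrative, not llm-party's code).
const MAX_AUTO_HOPS = 15;

function parseHandoff(response) {
  // A handoff is only honored when it ends the response.
  const match = response.trimEnd().match(/@next:([\w-]+)\s*$/);
  return match ? match[1].toLowerCase() : null;
}

function nextAgent(response, hopCount, validTags) {
  if (hopCount >= MAX_AUTO_HOPS) return null;         // loop guard
  const tag = parseHandoff(response);
  return tag && validTags.includes(tag) ? tag : null; // unknown tags fall back to the human
}
```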
128
123
 
129
- <br/>
124
+ ## WARNING: FULL AUTONOMY
125
+
126
+ All agents run with full permissions. They can read, write, and edit files, and execute shell commands, with zero approval gates. There is no confirmation step before any action. Run in a disposable environment. You are responsible for any changes, data loss, costs, or side effects. Do not run against production systems.
130
127
 
131
128
  ## Important notes
132
129
 
133
- **No extra API tokens.** llm-party uses the original CLIs and SDKs under the hood. Your existing authentication and subscriptions are used directly. Sessions created by agents appear in each tool's native session history (Claude Code sessions, Codex threads, etc.).
130
+ **Uses your existing CLIs.** llm-party uses official SDKs that delegate to each provider's CLI binary. If `claude`, `codex`, or `copilot` commands work on your machine, llm-party works. Authentication is handled entirely by the provider's own tools.
134
131
 
135
132
  **Run in isolation.** Always run llm-party inside a disposable environment: a Docker container, a VM, or at minimum a throwaway git branch. Agents have full filesystem and shell access with zero approval gates.
136
133
 
@@ -142,26 +139,24 @@ Agents can pass the conversation to each other by ending their response with `@n
142
139
 
143
140
  llm-party uses **official, publicly available SDKs and CLIs** published by each provider. Nothing is reverse-engineered, patched, or bypassed.
144
141
 
145
- | Provider | Official SDK | Published by |
146
- |----------|-------------|-------------|
147
- | Claude | [`@anthropic-ai/claude-agent-sdk`](https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk) | Anthropic |
148
- | Codex | [`@openai/codex-sdk`](https://www.npmjs.com/package/@openai/codex-sdk) | OpenAI |
149
- | Copilot | [`@github/copilot-sdk`](https://www.npmjs.com/package/@github/copilot-sdk) | GitHub |
150
-
151
- All authentication flows through the provider's own CLI login. llm-party does not store, proxy, or intercept credentials.
142
+ | Provider | Official SDK | Published by |
143
+ | -------- | ----------------------------------------------------------------------------------------------- | ------------ |
144
+ | Claude | [`@anthropic-ai/claude-agent-sdk`](https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk) | Anthropic |
145
+ | Codex | [`@openai/codex-sdk`](https://www.npmjs.com/package/@openai/codex-sdk) | OpenAI |
146
+ | Copilot | [`@github/copilot-sdk`](https://www.npmjs.com/package/@github/copilot-sdk) | GitHub |
152
147
 
153
- If any provider believes this project violates their terms of service, please [open an issue](https://github.com/aalasolutions/llm-party/issues) and we will address it immediately.
148
+ All authentication flows through the provider's own CLI. llm-party does not implement its own auth flow, store credentials, or intercept authentication traffic.
154
149
 
155
150
  <br/>
156
151
 
157
152
  ## Supported providers
158
153
 
159
- | Provider | SDK | Session | Prompt Support |
160
- |----------|-----|---------|---------------|
161
- | **Claude** | `@anthropic-ai/claude-agent-sdk` | Persistent via session ID resume | Full control |
162
- | **Codex** | `@openai/codex-sdk` | Persistent thread with `run()` turns | Via `developer_instructions` (limitations below) |
163
- | **Copilot** | `@github/copilot-sdk` | Persistent via `sendAndWait()` | Full control |
164
- | **GLM** | Claude SDK + env proxy | Same as Claude | Full control |
154
+ | Provider | SDK | Session | Prompt Support |
155
+ | ----------------- | ---------------------------------- | -------------------------------------- | -------------------------------------------------- |
156
+ | **Claude** | `@anthropic-ai/claude-agent-sdk` | Persistent via session ID resume | Full control |
157
+ | **Codex** | `@openai/codex-sdk` | Persistent thread with `run()` turns | Via `developer_instructions` (limitations below) |
158
+ | **Copilot** | `@github/copilot-sdk` | Persistent via `sendAndWait()` | Full control |
159
+ | **GLM** | Claude SDK + env proxy | Same as Claude | Full control |
165
160
 
166
161
  <br/>
167
162
 
@@ -190,9 +185,9 @@ Orchestrator
190
185
  +-- Transcript Writer (JSONL, append-only, per session)
191
186
  ```
192
187
 
193
- Each agent receives a rolling window of recent messages (default 16) plus any unseen messages since its last turn. Messages from other agents are included so everyone sees the full multi-party conversation.
188
+ Each agent receives a rolling window of recent messages (configurable, default 16) plus any unseen messages since its last turn. Messages from other agents are included so everyone sees the full multi-party conversation.
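
Because both the rolling window and the unseen messages are suffixes of the history, the context an agent receives is a single contiguous tail. A minimal sketch (hypothetical helper name, not the actual implementation):

```javascript
// Sketch of the per-agent context window described above (illustrative only).
// history: full ordered message list; lastSeenIndex: index of the last message
// this agent has already seen; windowSize: rolling window (default 16).
function buildContext(history, lastSeenIndex, windowSize = 16) {
  const windowStart = Math.max(history.length - windowSize, 0);
  // Take whichever starts earlier: the rolling window or the first unseen message.
  const start = Math.min(windowStart, lastSeenIndex + 1);
  return history.slice(start);
}
```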
194
189
 
195
- `~/.llm-party/config.json` is your global config. `base.md` is always loaded first for every agent. The `prompts` field in config adds extra prompt files on top of it.
190
+ `~/.llm-party/config.json` is your global config. Every agent receives a base system prompt automatically. The `prompts` field in config adds extra prompt files on top of it.
196
191
 
197
192
  <br/>
198
193
 
@@ -200,41 +195,41 @@ Each agent receives a rolling window of recent messages (default 16) plus any un
200
195
 
201
196
  ### Claude
202
197
 
203
- | | |
204
- |---|---|
205
- | SDK | `@anthropic-ai/claude-agent-sdk` |
198
+ | | |
199
+ | ------- | ----------------------------------------------------------------------------------------------- |
200
+ | SDK | `@anthropic-ai/claude-agent-sdk` |
206
201
  | Session | Persistent via `resume: sessionId`. First call creates a session, subsequent calls resume it. |
207
- | Prompt | Passed directly to the SDK. Full control over personality, behavior, and workflow rules. |
208
- | Tools | Read, Write, Edit, Bash, Glob, Grep |
202
+ | Prompt | Passed directly to the SDK. Full control over personality, behavior, and workflow rules. |
203
+ | Tools | Read, Write, Edit, Bash, Glob, Grep |
209
204
 
210
205
  ### Codex
211
206
 
212
- | | |
213
- |---|---|
214
- | SDK | `@openai/codex-sdk` |
215
- | Session | Persistent thread. `startThread()` creates it, `thread.run()` adds turns to the same conversation. |
216
- | Prompt | Injected via `developer_instructions` config key. Appended alongside Codex's built-in 13k token system prompt. |
217
- | Tools | exec_command, apply_patch, file operations |
207
+ | | |
208
+ | ------- | ---------------------------------------------------------------------------------------------------------------- |
209
+ | SDK | `@openai/codex-sdk` |
210
+ | Session | Persistent thread. `startThread()` creates it, `thread.run()` adds turns to the same conversation. |
211
+ | Prompt | Injected via `developer_instructions` config key. Appended alongside Codex's built-in 13k token system prompt. |
212
+ | Tools | exec_command, apply_patch, file operations |
218
213
 
219
214
  **Known limitation:** Codex's built-in system prompt cannot be overridden. Your instructions are appended alongside it. Action instructions (naming, formatting, workflow rules) work. Personality overrides do not.
220
215
 
221
216
  ### Copilot
222
217
 
223
- | | |
224
- |---|---|
225
- | SDK | `@github/copilot-sdk` |
226
- | Session | Persistent via `CopilotClient.createSession()`. |
227
- | Prompt | Set as `systemMessage` on session creation. Full control. |
228
- | Tools | Copilot built-in toolset |
218
+ | | |
219
+ | ------- | ----------------------------------------------------------- |
220
+ | SDK | `@github/copilot-sdk` |
221
+ | Session | Persistent via `CopilotClient.createSession()`. |
222
+ | Prompt | Set as `systemMessage` on session creation. Full control. |
223
+ | Tools | Copilot built-in toolset |
229
224
 
230
225
  ### GLM
231
226
 
232
- | | |
233
- |---|---|
234
- | SDK | `@anthropic-ai/claude-agent-sdk` (same as Claude) |
227
+ | | |
228
+ | ------- | --------------------------------------------------------- |
229
+ | SDK | `@anthropic-ai/claude-agent-sdk` (same as Claude) |
235
230
  | Session | Same as Claude, routed through a proxy via env overrides. |
236
- | Prompt | Same as Claude. Full control. |
237
- | Tools | Same as Claude |
231
+ | Prompt | Same as Claude. Full control. |
232
+ | Tools | Same as Claude |
238
233
 
239
234
  GLM uses the Claude SDK as a transport layer. The adapter routes API calls through a proxy by setting `ANTHROPIC_BASE_URL` and model aliases via the `env` config field.
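
The `env` mechanism amounts to layering per-agent overrides on top of the inherited environment. A minimal sketch, assuming the adapter merges rather than replaces (`agentEnv` is a hypothetical name):

```javascript
// Sketch of per-agent env overrides layered over process.env (illustrative only).
// This is how a GLM agent can point the Claude SDK transport at a proxy.
function agentEnv(agentConfig, baseEnv = process.env) {
  return { ...baseEnv, ...(agentConfig.env ?? {}) };
}
```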
240
235
 
@@ -242,38 +237,36 @@ GLM uses the Claude SDK as a transport layer. The adapter routes API calls throu
242
237
 
243
238
  ## Config reference
244
239
 
245
- Config file: `~/.llm-party/config.json` (created on first run).
240
+ Config file: `~/.llm-party/config.json` (created automatically on first run).
246
241
 
247
242
  Override with `LLM_PARTY_CONFIG` env var to point to a different file.
248
243
 
249
244
  ### Top-level fields
250
245
 
251
- | Field | Required | Default | Description |
252
- |-------|----------|---------|-------------|
253
- | `humanName` | No | Your system username | Display name in the terminal prompt and passed to agents |
254
- | `humanTag` | No | derived from `humanName` | Tag for human handoff detection (`@next:you`) |
255
- | `maxAutoHops` | No | `15` | Max agent-to-agent handoffs per cycle. Use `"unlimited"` to remove the cap |
256
- | `timeout` | No | `600` | Default timeout in seconds for all agents |
257
- | `agents` | Yes | | Array of agent definitions |
246
+ | Field | Required | Default | Description |
247
+ | --------------- | -------- | -------------------------- | ---------------------------------------------------------------------------- |
248
+ | `humanName` | No | Your system username | Display name in the terminal prompt and passed to agents |
249
+ | `humanTag` | No | derived from `humanName` | Tag for human handoff detection (`@next:you`) |
250
+ | `maxAutoHops` | No | `15` | Max agent-to-agent handoffs per cycle. Use `"unlimited"` to remove the cap |
251
+ | `timeout` | No | `600` | Default timeout in seconds for all agents |
252
+ | `agents` | Yes | | Array of agent definitions |
258
253
 
259
254
  ### Agent fields
260
255
 
261
- | Field | Required | Default | Description |
262
- |-------|----------|---------|-------------|
263
- | `name` | Yes | | Display name shown in responses as `[AGENT NAME]` |
264
- | `tag` | No | derived from `name` | Routing tag for `@tag` targeting |
265
- | `provider` | Yes | | SDK adapter: `claude`, `codex`, `copilot`, or `glm` |
266
- | `model` | Yes | | Model ID passed to the provider. Examples: `opus`, `sonnet`, `gpt-5.2`, `gpt-4.1`, `glm-5` |
267
- | `prompts` | No | none | Array of extra prompt file paths, concatenated after `base.md`. Relative to project root |
268
- | `executablePath` | No | PATH lookup | Path to the CLI binary. Supports `~/`. Only needed if the CLI is not in your PATH |
269
- | `env` | No | inherits `process.env` | Environment variable overrides for this agent |
270
- | `timeout` | No | top-level value | Per-agent timeout override in seconds |
256
+ | Field | Required | Default | Description |
257
+ | ------------------ | -------- | ------------------------ | --------------------------------------------------------------------------------------------------- |
258
+ | `name` | Yes | | Display name shown in responses as `[AGENT NAME]`. Must be unique. |
259
+ | `tag` | No | derived from `name` | Routing tag for `@tag` targeting |
260
+ | `provider` | Yes | | SDK adapter: `claude`, `codex`, `copilot`, or `glm` |
261
+ | `model` | Yes | | Model ID passed to the provider. Examples: `opus`, `sonnet`, `gpt-5.2`, `gpt-4.1`, `glm-5` |
262
+ | `prompts` | No | none | Array of extra prompt file paths, concatenated after `base.md`. Relative to project root |
263
+ | `executablePath` | No | PATH lookup | Path to the CLI binary. Supports `~/`. Only needed if the CLI is not in your PATH |
264
+ | `env` | No | inherits `process.env` | Environment variable overrides for this agent |
265
+ | `timeout` | No | top-level value | Per-agent timeout override in seconds |
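
Combining the optional fields above, a hypothetical agent entry might look like this (the path, env variable, and timeout are illustrative values, not defaults):

```json
{
  "name": "Codex",
  "tag": "codex",
  "provider": "codex",
  "model": "gpt-5.2",
  "executablePath": "~/.local/bin/codex",
  "env": { "EXAMPLE_FLAG": "1" },
  "timeout": 900
}
```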
271
266
 
272
267
  ### Prompts
273
268
 
274
- `base.md` is always loaded first for every agent. It defines orchestrator rules, routing syntax, handoff protocol, and team context. It lives at `~/.llm-party/base.md` (copied from the package on first run).
275
-
276
- To add extra instructions per agent, use the `prompts` field:
269
+ Every agent receives a base system prompt automatically. To add extra instructions per agent, use the `prompts` field:
277
270
 
278
271
  ```json
279
272
  {
@@ -285,23 +278,23 @@ To add extra instructions per agent, use the `prompts` field:
285
278
  }
286
279
  ```
287
280
 
288
- The final prompt sent to the agent is: `base.md` + `prompts[0]` + `prompts[1]` + ... joined with `---` separators. Template variables are rendered after concatenation:
281
+ Template variables available in prompt files:
289
282
 
290
- | Variable | Description |
291
- |----------|-------------|
292
- | `{{agentName}}` | This agent's display name |
293
- | `{{agentTag}}` | This agent's routing tag |
294
- | `{{humanName}}` | Your display name |
295
- | `{{humanTag}}` | Your routing tag |
296
- | `{{agentCount}}` | Total number of active agents |
297
- | `{{allAgentNames}}` | All agent names |
298
- | `{{allAgentTags}}` | All agent tags as `@tag` |
299
- | `{{otherAgentList}}` | Other agents with their tags |
300
- | `{{validHandoffTargets}}` | Valid `@next:tag` targets |
283
+ | Variable | Description |
284
+ | --------------------------- | ----------------------------- |
285
+ | `{{agentName}}` | This agent's display name |
286
+ | `{{agentTag}}` | This agent's routing tag |
287
+ | `{{humanName}}` | Your display name |
288
+ | `{{humanTag}}` | Your routing tag |
289
+ | `{{agentCount}}` | Total number of active agents |
290
+ | `{{allAgentNames}}` | All agent names |
291
+ | `{{allAgentTags}}` | All agent tags as `@tag` |
292
+ | `{{otherAgentList}}` | Other agents with their tags |
293
+ | `{{validHandoffTargets}}` | Valid `@next:tag` targets |
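
Rendering these variables is plain placeholder substitution. A minimal sketch (`renderTemplate` is a hypothetical name, not the actual renderer):

```javascript
// Minimal {{variable}} substitution sketch for prompt templates (illustrative only).
// Unknown placeholders are left intact rather than erased.
function renderTemplate(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (whole, key) =>
    key in vars ? String(vars[key]) : whole
  );
}
```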
301
294
 
302
295
  ### GLM environment setup
303
296
 
304
- GLM requires environment overrides to route through a proxy. The adapter first tries to load env variables from your shell `glm` alias. Without the alias, provide everything in the `env` block:
297
+ GLM requires environment overrides to route through a proxy. The adapter tries to load env variables from your shell `glm` alias automatically. Without the alias, provide everything in the `env` block:
305
298
 
306
299
  ```json
307
300
  {
@@ -330,14 +323,17 @@ File changes made by agents are detected via `git status` after each response. N
330
323
 
331
324
  ## Terminal commands
332
325
 
333
- | Command | What it does |
334
- |---------|-------------|
335
- | `/agents` | List active agents with tag, provider, model |
336
- | `/history` | Print full conversation history |
337
- | `/save <path>` | Export conversation as JSON |
338
- | `/session` | Show session ID and transcript path |
339
- | `/changes` | Show git-modified files |
340
- | `/exit` | Quit |
326
+ | Command | What it does |
327
+ | ---------------- | -------------------------------------------- |
328
+ | `/agents` | Open agents panel overlay (Ctrl+P also works) |
329
+ | `/config` | Open config wizard |
330
+ | `/save <path>` | Export conversation as JSON |
331
+ | `/session` | Show session ID and transcript path |
332
+ | `/changes` | Show git-modified files |
333
+ | `/clear` | Clear chat display (Ctrl+L also works) |
334
+ | `/exit` | Quit (graceful shutdown, all adapters cleaned up) |
335
+ | `Ctrl+P` | Toggle agents panel |
336
+ | `Ctrl+C` | Exit (or copy selected text if selection active) |
341
337
 
342
338
  <br/>
343
339
 
@@ -346,21 +342,21 @@ File changes made by agents are detected via `git status` after each response. N
346
342
  ```bash
347
343
  git clone https://github.com/aalasolutions/llm-party.git
348
344
  cd llm-party
349
- npm install
350
- npm run dev
345
+ bun install
346
+ bun run dev
351
347
  ```
352
348
 
353
349
  Build and run:
354
350
 
355
351
  ```bash
356
- npm run build
357
- npm start
352
+ bun run build
353
+ bun start
358
354
  ```
359
355
 
360
356
  Override config:
361
357
 
362
358
  ```bash
363
- LLM_PARTY_CONFIG=/path/to/config.json npm run dev
359
+ LLM_PARTY_CONFIG=/path/to/config.json bun run dev
364
360
  ```
365
361
 
366
362
  <br/>
@@ -373,6 +369,9 @@ Run `/agents` to see available tags. Tags match against agent `tag`, `name`, and
373
369
  **"Unsupported provider"**
374
370
  Valid providers: `claude`, `codex`, `copilot`, `glm`.
375
371
 
372
+ **"Duplicate agent name"**
373
+ Agent names must be unique (case-insensitive). Rename one of the duplicates in config.
374
+
376
375
  **Agent modifies source code unexpectedly**
377
376
  Expected with full permissions. Use git to review and revert.
378
377
 
@@ -388,6 +387,6 @@ Default is 600 seconds (10 minutes). Adjust with `timeout` in config (top-level
388
387
  <br/>
389
388
 
390
389
  <p align="center">
391
- <a href="https://llm-party.party">llm-party.party</a> &middot;
390
+ <a href="https://llm-party.party">llm-party.party</a> ·
392
391
  Built by <a href="https://aalasolutions.com">AALA Solutions</a>
393
392
  </p>
@@ -1,10 +1,37 @@
1
1
  {
2
+ "humanName": "AAMIR",
3
+ "humanTag": "aamir",
4
+ "maxAutoHops": 15,
2
5
  "agents": [
6
+ {
7
+ "name": "GLM",
8
+ "tag": "glm",
9
+ "provider": "glm",
10
+ "model": "glm-5",
11
+ "env": {
12
+ "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
13
+ "ANTHROPIC_DEFAULT_HAIKU_MODEL": "glm-4.5-air",
14
+ "ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-4.5",
15
+ "ANTHROPIC_DEFAULT_OPUS_MODEL": "glm-5"
16
+ }
17
+ },
3
18
  {
4
19
  "name": "Claude",
5
20
  "tag": "claude",
6
21
  "provider": "claude",
7
- "model": "sonnet"
22
+ "model": "opus"
23
+ },
24
+ {
25
+ "name": "Copilot",
26
+ "tag": "copilot",
27
+ "provider": "copilot",
28
+ "model": "gpt-4.1"
29
+ },
30
+ {
31
+ "name": "Codex",
32
+ "tag": "codex",
33
+ "provider": "codex",
34
+ "model": "gpt-5.2"
8
35
  }
9
36
  ]
10
37
  }
@@ -1,4 +1,4 @@
1
- import { readFile, access, mkdir, copyFile } from "node:fs/promises";
1
+ import { readFile, writeFile, access, mkdir } from "node:fs/promises";
2
2
  import { homedir, userInfo } from "node:os";
3
3
  import path from "node:path";
4
4
  const VALID_PROVIDERS = ["claude", "codex", "copilot", "glm"];
@@ -60,50 +60,68 @@ export async function resolveConfigPath(appRoot) {
60
60
  return path.join(appRoot, "configs", "default.json");
61
61
  }
62
62
  export async function resolveBasePrompt(appRoot) {
63
- const globalBase = path.join(LLM_PARTY_HOME, "base.md");
63
+ const bundledBase = path.join(appRoot, "prompts", "base.md");
64
+ return await readFile(bundledBase, "utf8");
65
+ }
66
+ export async function resolveArtifactsPrompt(appRoot) {
67
+ const bundledArtifacts = path.join(appRoot, "prompts", "artifacts.md");
68
+ return await readFile(bundledArtifacts, "utf8");
69
+ }
70
+ export async function initProjectFolder(cwd) {
71
+ const projectHome = path.join(cwd, ".llm-party");
72
+ const memoryDir = path.join(projectHome, "memory");
73
+ const skillsDir = path.join(projectHome, "skills");
74
+ await mkdir(memoryDir, { recursive: true });
75
+ await mkdir(skillsDir, { recursive: true });
76
+ const tasksFile = path.join(projectHome, "TASKS.md");
64
77
  try {
65
- await access(globalBase);
66
- return await readFile(globalBase, "utf8");
78
+ await access(tasksFile);
67
79
  }
68
80
  catch {
69
- // fall through to bundled
81
+ await writeFile(tasksFile, "# Tasks\n", "utf8");
82
+ }
83
+ const projectMd = path.join(memoryDir, "project.md");
84
+ try {
85
+ await access(projectMd);
86
+ }
87
+ catch {
88
+ await writeFile(projectMd, "# Project Memory\n\n## Current State\n\nLast Updated:\nActive:\nBlockers:\nNext:\n\n---\n\n## Log\n", "utf8");
89
+ }
90
+ const decisionsMd = path.join(memoryDir, "decisions.md");
91
+ try {
92
+ await access(decisionsMd);
93
+ }
94
+ catch {
95
+ await writeFile(decisionsMd, "# Decisions\n", "utf8");
70
96
  }
71
- const bundledBase = path.join(appRoot, "prompts", "base.md");
72
- return await readFile(bundledBase, "utf8");
73
97
  }
74
98
  export async function initLlmPartyHome(appRoot) {
75
99
  await mkdir(LLM_PARTY_HOME, { recursive: true });
76
100
  await mkdir(path.join(LLM_PARTY_HOME, "sessions"), { recursive: true });
77
- const globalBase = path.join(LLM_PARTY_HOME, "base.md");
101
+ await mkdir(path.join(LLM_PARTY_HOME, "network"), { recursive: true });
102
+ await mkdir(path.join(LLM_PARTY_HOME, "agents"), { recursive: true });
103
+ const projectsYml = path.join(LLM_PARTY_HOME, "network", "projects.yml");
78
104
  try {
79
- await access(globalBase);
105
+ await access(projectsYml);
80
106
  }
81
107
  catch {
82
- const bundledBase = path.join(appRoot, "prompts", "base.md");
83
- await copyFile(bundledBase, globalBase);
108
+ await writeFile(projectsYml, "projects: []\n", "utf8");
109
+ }
110
+ const librariesYml = path.join(LLM_PARTY_HOME, "network", "libraries.yml");
111
+ try {
112
+ await access(librariesYml);
113
+ }
114
+ catch {
115
+ await writeFile(librariesYml, "libraries: []\n", "utf8");
84
116
  }
85
117
  const globalConfig = path.join(LLM_PARTY_HOME, "config.json");
86
118
  try {
87
119
  await access(globalConfig);
88
120
  }
89
121
  catch {
90
- const username = userInfo().username || "USER";
91
- const defaultConfig = {
92
- humanName: username,
93
- agents: [
94
- {
95
- name: "Claude",
96
- tag: "claude",
97
- provider: "claude",
98
- model: "sonnet"
99
- }
100
- ]
101
- };
102
- const { writeFile } = await import("node:fs/promises");
103
- await writeFile(globalConfig, JSON.stringify(defaultConfig, null, 2) + "\n", "utf8");
104
- }
105
- console.log(`Initialized ~/.llm-party/`);
106
- console.log(` config: ${globalConfig}`);
122
+ const bundledConfig = await readFile(path.join(appRoot, "configs", "default.json"), "utf8");
123
+ await writeFile(globalConfig, bundledConfig, "utf8");
124
+ }
107
125
  }
108
126
  export async function loadConfig(configPath) {
109
127
  const raw = await readFile(configPath, "utf8");