llm-party-cli 0.1.0
- package/LICENSE +21 -0
- package/README.md +392 -0
- package/configs/default.json +22 -0
- package/dist/adapters/base.js +1 -0
- package/dist/adapters/claude.js +64 -0
- package/dist/adapters/codex.js +46 -0
- package/dist/adapters/copilot.js +48 -0
- package/dist/adapters/glm.js +91 -0
- package/dist/config/loader.js +48 -0
- package/dist/index.js +101 -0
- package/dist/orchestrator.js +185 -0
- package/dist/types.js +1 -0
- package/dist/ui/terminal.js +213 -0
- package/package.json +57 -0
- package/prompts/base.md +54 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 AALA Solutions

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,392 @@
<p align="center">
  <h1 align="center">llm-party</h1>
  <p align="center">
    <strong>Bring your models. We'll bring the party.</strong>
  </p>
  <p align="center">
    <a href="https://llm-party.party">Website</a> ·
    <a href="https://www.npmjs.com/package/llm-party-cli">npm</a> ·
    <a href="https://github.com/aalasolutions/llm-party">GitHub</a>
  </p>
  <p align="center">
    <a href="https://www.npmjs.com/package/llm-party-cli"><img src="https://img.shields.io/npm/v/llm-party-cli?style=flat-square&color=cb3837" alt="npm version"></a>
    <a href="https://github.com/aalasolutions/llm-party/blob/main/LICENSE"><img src="https://img.shields.io/github/license/aalasolutions/llm-party?style=flat-square" alt="license"></a>
    <a href="https://github.com/aalasolutions/llm-party"><img src="https://img.shields.io/github/stars/aalasolutions/llm-party?style=flat-square" alt="stars"></a>
  </p>
</p>

<br/>

A peer orchestrator that puts **Claude**, **Codex**, **Copilot**, and **GLM** in the same terminal. You talk, they listen. They talk to each other. Nobody is the boss except you.

```
YOU > @claude review this function
[CLAUDE] The error handling on line 42 swallows exceptions silently...

YOU > @codex fix what claude found
[CODEX] Fixed. Wrapped in try/catch with proper logging. See diff below.

YOU > @copilot write tests for the fix
[COPILOT] Added 3 test cases covering the happy path and both error branches.
```

No MCP. No master/servant. No window juggling. Just peers at a terminal table.

<br/>

## Why llm-party?

| | Traditional multi-agent | llm-party |
|---|---|---|
| **Architecture** | MCP (master controls servants) | Peer orchestration (you control all) |
| **Integration** | CLI wrapping, output scraping | Direct SDK adapters |
| **Sessions** | Fresh each time | Persistent per provider |
| **Context** | Agents are siloed | Every agent sees the full conversation |
| **API tokens** | Separate keys per tool | Uses your existing CLI auth |

<br/>

## Getting started

### Install and run

```bash
npm install -g llm-party-cli
llm-party
```

That's it. Agents use your current working directory. Config defaults are included in the package.

### Set up your agents

Edit `configs/default.json`. Each agent needs a name, provider, and model:

```json
{
  "humanName": "YOUR NAME",
  "agents": [
    {
      "name": "Claude",
      "tag": "claude",
      "provider": "claude",
      "model": "opus",
      "systemPrompt": ["./prompts/base.md"]
    },
    {
      "name": "Codex",
      "tag": "codex",
      "provider": "codex",
      "model": "gpt-5.2",
      "systemPrompt": ["./prompts/base.md"]
    }
  ]
}
```

### Talk to your agents

```
@claude explain this error        # talk to one agent
@claude @codex review this        # talk to multiple
@all what does everyone think?    # broadcast to all agents
@everyone same as @all            # alias
```

> **Note:** Once you tag an agent, all follow-up messages without a tag go to that same agent. Use `@all` or `@everyone` to broadcast again.

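The routing rules above can be sketched in a few lines. This is a hypothetical illustration, not the shipped orchestrator source: leading `@tag` mentions select targets, `@all`/`@everyone` broadcast, and an untagged message falls back to the previously targeted agent(s).

```javascript
// Hypothetical sketch of @tag routing (names and shapes are assumptions).
const BROADCAST = new Set(["all", "everyone"]);

function routeMessage(input, knownTags, lastTargets) {
  const mentions = [];
  let rest = input.trim();
  // Consume leading @tags only; @mentions mid-sentence stay as plain text.
  while (true) {
    const m = rest.match(/^@([a-z0-9-]+)\s*/i);
    if (!m) break;
    mentions.push(m[1].toLowerCase());
    rest = rest.slice(m[0].length);
  }
  if (mentions.some((t) => BROADCAST.has(t))) {
    return { targets: [...knownTags], text: rest };
  }
  const targets = mentions.filter((t) => knownTags.includes(t));
  if (targets.length > 0) return { targets, text: rest };
  // Sticky routing: no tag means "same agent(s) as last time".
  return { targets: lastTargets, text: input.trim() };
}
```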
### Agent-to-agent handoff

Agents can pass the conversation to each other by ending their response with `@next:<tag>`. The orchestrator picks it up and dispatches automatically. Handoffs are capped at 15 hops per cycle to prevent loops.

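A minimal sketch of that dispatch cycle (assumed shapes, not the shipped orchestrator): after each reply, look for a trailing `@next:<tag>`, dispatch to that agent, and stop at the hop cap or when control returns to the human.

```javascript
// Hypothetical handoff loop; `agents` is assumed to be a Map of tag -> adapter.
function parseHandoff(reply) {
  const m = reply.trimEnd().match(/@next:([a-z0-9-]+)\s*$/i);
  return m ? m[1].toLowerCase() : null;
}

async function runCycle(firstTag, agents, humanTag, maxAutoHops = 15) {
  let tag = firstTag;
  for (let hop = 0; hop < maxAutoHops; hop++) {
    const agent = agents.get(tag);
    if (!agent) break;
    const reply = await agent.send();
    const next = parseHandoff(reply);
    if (!next || next === humanTag) return; // control back to you
    tag = next; // agent-to-agent handoff
  }
}
```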
<br/>

## Before you start

**Verify your CLIs work first.** Before adding an agent to `configs/default.json`, make sure its CLI is installed and authenticated:

```bash
claude --version     # Claude Code CLI
codex --version      # OpenAI Codex CLI
copilot --version    # GitHub Copilot CLI
```

If a CLI doesn't work on its own, it won't work inside llm-party.

**No extra API tokens.** llm-party uses the original CLIs and SDKs under the hood. Your existing authentication and subscriptions are used directly. Sessions created by agents appear in each tool's native session history (Claude Code sessions, Codex threads, etc.) since the underlying SDKs manage their own persistence.

**Run in isolation.** Always run llm-party inside a disposable environment: a Docker container, a VM, or at minimum a throwaway git branch. Agents have full filesystem and shell access with zero approval gates.

<br/>

## How we use the SDKs

llm-party uses **official, publicly available SDKs and CLIs** published by each provider. Nothing is reverse-engineered, patched, or bypassed.

| Provider | Official SDK | Published by |
|----------|-------------|-------------|
| Claude | [`@anthropic-ai/claude-agent-sdk`](https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk) | Anthropic |
| Codex | [`@openai/codex-sdk`](https://www.npmjs.com/package/@openai/codex-sdk) | OpenAI |
| Copilot | [`@github/copilot-sdk`](https://www.npmjs.com/package/@github/copilot-sdk) | GitHub |

All authentication flows through the provider's own CLI login. Your API keys, OAuth tokens, and subscriptions are used as-is. llm-party does not store, proxy, or intercept credentials.

If any provider believes this project violates their terms of service, please [open an issue](https://github.com/aalasolutions/llm-party/issues) and we will address it immediately.

## Supported providers

| Provider | SDK | Session | System Prompt |
|----------|-----|---------|---------------|
| **Claude** | `@anthropic-ai/claude-agent-sdk` | Persistent via session ID resume | Full control |
| **Codex** | `@openai/codex-sdk` | Persistent thread with `run()` turns | Via `developer_instructions` (see limitations) |
| **Copilot** | `@github/copilot-sdk` | Persistent via `sendAndWait()` | Full control |
| **GLM** | Claude SDK + env proxy | Same as Claude | Full control |

<br/>

---

<br/>

## How it works

Most multi-agent setups use MCP (one agent controls others) or CLI wrapping (spawn processes and scrape terminal output). Both are fragile and hierarchical.

llm-party uses SDK adapters directly. Each agent gets a persistent session with its provider. Full tool access. Real conversation threading. The orchestrator owns routing; agents are peers.

```
Terminal (you)
      |
      v
Orchestrator
      |
      +-- Agent Registry
      |     +-- Claude  -> ClaudeAdapter  (SDK session, resume by ID)
      |     +-- Codex   -> CodexAdapter   (SDK thread, persistent turns)
      |     +-- Copilot -> CopilotAdapter (SDK session, sendAndWait)
      |     +-- GLM     -> GlmAdapter     (Claude SDK + env proxy)
      |
      +-- Conversation Log (ordered, all messages, agent-prefixed)
      |
      +-- Transcript Writer (JSONL, append-only, per session)
```

Each agent receives a rolling window of recent messages (default 16) plus any unseen messages since its last turn. Messages from other agents are included so everyone sees the full multi-party conversation.

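One way to read that context-window rule, as a sketch (an assumed interpretation, not the actual orchestrator code): take the last 16 messages, extended backwards so nothing the agent hasn't seen yet is dropped.

```javascript
// Hypothetical window builder. `log` is the ordered conversation log and
// `lastSeenIndex` is the index of the last message this agent has seen.
function buildWindow(log, lastSeenIndex, windowSize = 16) {
  // Everything after the agent's last turn must be included...
  const firstUnseen = lastSeenIndex + 1;
  // ...plus enough history to fill the rolling window.
  const start = Math.min(firstUnseen, Math.max(0, log.length - windowSize));
  return log.slice(start);
}
```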
<br/>

## Provider details

### Claude

| | |
|---|---|
| SDK | `@anthropic-ai/claude-agent-sdk` |
| Session | Persistent via `resume: sessionId`. First call creates a session, subsequent calls resume it. |
| System prompt | Passed directly to the SDK via `options.systemPrompt`. Full control. |
| Tools | Read, Write, Edit, Bash, Glob, Grep |
| Permissions | `permissionMode: "bypassPermissions"` (all tools auto-approved) |

System prompt works exactly as expected. Personality, behavior, and workflow rules are all respected.

### Codex

| | |
|---|---|
| SDK | `@openai/codex-sdk` |
| Session | Persistent thread. `startThread()` creates it, `thread.run()` adds turns to the same conversation. |
| System prompt | Injected via the `developer_instructions` config key, passed as a `--config` flag to the CLI subprocess. |
| Tools | exec_command, apply_patch, file operations (Codex built-in toolset) |
| Permissions | `sandboxMode: "danger-full-access"`, `approvalPolicy: "never"` |

**Known limitation:** Codex ships with a massive built-in system prompt (~13k tokens) that cannot be overridden. Your `developer_instructions` are appended alongside it, not substituted for it. This means:

- **Works:** Action instructions (create files, follow naming conventions), formatting rules (prefix responses with agent name), workflow rules (handoff syntax, routing tags)
- **Does not work:** Personality overrides, identity changes, behavioral rewrites

We tested `instructions`, `developer_instructions`, and `experimental_instructions_file`. All three append to the built-in prompt. None replace it.

**Also observed:** Codex is aggressive with file operations. When asked to "create your file," it read the orchestrator source code, ran the Codex CLI, and modified `src/ui/terminal.ts` instead of just creating a simple markdown file.

### Copilot

| | |
|---|---|
| SDK | `@github/copilot-sdk` |
| Session | Persistent via `CopilotClient.createSession()`. Messages sent with `session.sendAndWait()`. |
| System prompt | Set as `systemMessage: { content: prompt }` on session creation. |
| Tools | Copilot built-in toolset |
| Permissions | `onPermissionRequest: approveAll` (all actions auto-approved) |

System prompt works as expected.

### GLM

| | |
|---|---|
| SDK | `@anthropic-ai/claude-agent-sdk` (same as Claude) |
| Session | Same as Claude, but routed through a proxy. |
| System prompt | Same as Claude. Full control. |
| Tools | Same as Claude |
| Permissions | Same as Claude |

GLM is not tied to any specific CLI. It uses the Claude SDK as the transport layer because Claude Code supports environment variable overrides for base URL and model aliases, making it a convenient proxy bridge. Any CLI that supports similar env-based routing could be swapped in.

<br/>

## Config reference

Config file: `configs/default.json`. Override with the `LLM_PARTY_CONFIG` env var.

### Top-level fields

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `humanName` | No | `USER` | Your name, displayed in the terminal prompt and passed to agents |
| `humanTag` | No | derived from `humanName` | Tag used for human handoff detection. When an agent says `@next:you`, the orchestrator stops and returns control to you |
| `maxAutoHops` | No | `15` | Max agent-to-agent handoffs per cycle. Prevents infinite loops. Use `"unlimited"` to remove the cap |
| `timeout` | No | `600` | Default timeout in seconds for all agents. 10 minutes by default |
| `agents` | Yes | | Array of agent definitions. Must have at least one |

### Agent fields

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `name` | Yes | | Display name shown in responses as `[AGENT NAME]` |
| `tag` | No | derived from `name` | Routing tag for `@tag` targeting. Auto-generated as lowercase with dashes if omitted |
| `provider` | Yes | | SDK adapter: `claude`, `codex`, `copilot`, or `glm` |
| `model` | Yes | | Model ID passed to the provider. Examples: `opus`, `sonnet`, `gpt-5.2`, `gpt-4.1`, `glm-5` |
| `systemPrompt` | Yes | | Path or array of paths to prompt markdown files, relative to the project root |
| `executablePath` | No | PATH lookup | Path to the CLI binary. Supports `~/` for the home directory. Only needed if the CLI is not in your PATH |
| `env` | No | inherits `process.env` | Environment variable overrides for this agent's process |
| `timeout` | No | top-level value | Per-agent timeout in seconds. Overrides the top-level default |

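The "lowercase with dashes" auto-tag rule can be sketched as follows. This is an assumed derivation matching the description, not the package's actual implementation:

```javascript
// Hypothetical tag derivation: lowercase the name and collapse everything
// that isn't a letter or digit into single dashes.
function deriveTag(name) {
  return name
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")  // non-alphanumerics collapse to a dash
    .replace(/^-+|-+$/g, "");     // no leading/trailing dashes
}
```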
### System prompts

A single file, or multiple files merged in order:

```json
"systemPrompt": "./prompts/base.md"
"systemPrompt": ["./prompts/base.md", "./prompts/reviewer.md"]
```

Files are concatenated with `---` separators, then template variables are replaced. Available variables:

| Variable | Description |
|----------|-------------|
| `{{agentName}}` | This agent's display name |
| `{{agentTag}}` | This agent's routing tag |
| `{{humanName}}` | The human's display name |
| `{{humanTag}}` | The human's routing tag |
| `{{agentCount}}` | Total number of active agents |
| `{{allAgentNames}}` | All agent names, comma-separated |
| `{{allAgentTags}}` | All agent tags as `@tag` |
| `{{otherAgentList}}` | Other agents formatted as `- Name: use @tag` |
| `{{otherAgentNames}}` | Other agent names, comma-separated |
| `{{validHandoffTargets}}` | Valid `@next:tag` values for handoff |

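That merge-then-substitute pipeline can be sketched in a few lines (a hypothetical rendering, not the shipped code): join file contents with `---` separators, then replace `{{variable}}` placeholders from a variables map.

```javascript
// Hypothetical prompt renderer. `files` holds already-read file contents.
function renderSystemPrompt(files, vars) {
  const merged = files.join("\n\n---\n\n");
  return merged.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match // unknown variables left as-is
  );
}
```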
### GLM environment setup

GLM requires environment overrides to route through a proxy. The adapter first tries to load env variables from your shell's `glm` alias (`zsh -ic "alias glm"`). If you have a `glm` alias that sets `ANTHROPIC_AUTH_TOKEN` and `ANTHROPIC_BASE_URL`, it picks those up automatically.

Without the alias, provide everything in the `env` block:

```json
{
  "name": "GLM Agent",
  "provider": "glm",
  "model": "glm-5",
  "systemPrompt": ["./prompts/base.md"],
  "executablePath": "~/.local/bin/claude",
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "your-glm-api-key",
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "glm-4.5-air",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-4.5",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "glm-5"
  }
}
```

<br/>

## Session and transcript

Every run generates a unique session ID and appends messages to a JSONL transcript file under `.llm-party/sessions/`. The session ID and transcript path are printed at startup.

File changes made by agents during their turns are detected via `git status` after each response cycle. Newly modified files are printed with timestamps.

Use `/save <path>` to export the full in-memory conversation as formatted JSON.

<br/>

## Terminal commands

| Command | What it does |
|---------|-------------|
| `/agents` | List active agents with tag, provider, model |
| `/history` | Print full conversation history |
| `/save <path>` | Export conversation as JSON |
| `/session` | Show session ID and transcript path |
| `/changes` | Show git-modified files |
| `/exit` | Quit |

<br/>

## Development

```bash
git clone https://github.com/aalasolutions/llm-party.git
cd llm-party
npm install
npm run dev
```

Build and run from dist:

```bash
npm run build
npm start
```

Override the config path:

```bash
LLM_PARTY_CONFIG=/path/to/config.json npm run dev
```

<br/>

## Troubleshooting

**"ENOENT for prompt path"**
Your `systemPrompt` points to a file that does not exist. Paths are relative to the project root. Verify with `ls prompts/`.

**"No agent matched @tag"**
The tag you typed does not match any agent's `tag`, `name`, or `provider`. Run `/agents` to see what is available.

**"Unsupported provider"**
Your config has a provider value that is not one of `claude`, `codex`, `copilot`, `glm`.

**Agent modifies source code unexpectedly**
Expected behavior with full-access permissions. Agents can read, write, and execute anything. Use git to review and revert. Codex in particular is aggressive with file operations.

**Codex ignores personality instructions**
Known limitation. Codex has a 13k+ token built-in system prompt that overrides personality and identity instructions. Functional instructions (naming, workflow, formatting) still work.

**Agent response timeout**
Claude and Copilot have a 120-second timeout. GLM has 240 seconds. If an agent consistently times out, check your API keys and network connectivity.

<br/>

## Warning

All agents run with **full permissions**. They can read, write, and edit files and execute shell commands. There is no confirmation step before any action.

You are responsible for any changes, data loss, costs, or side effects. Do not run against production systems or repos you cannot recover from.

<br/>

<p align="center">
  <a href="https://llm-party.party">llm-party.party</a> ·
  Built by <a href="https://aalasolutions.com">AALA Solutions</a>
</p>
package/configs/default.json
ADDED
@@ -0,0 +1,22 @@
{
  "humanName": "AAMIR",
  "humanTag": "aamir",
  "maxAutoHops": 15,
  "agents": [
    {
      "name": "Agent 2",
      "tag": "opus",
      "provider": "claude",
      "model": "opus",
      "systemPrompt": ["./prompts/base.md"],
      "executablePath": "~/.local/bin/claude"
    },
    {
      "name": "Agent 3",
      "tag": "copilot",
      "provider": "copilot",
      "model": "gpt-4.1",
      "systemPrompt": ["./prompts/base.md"]
    }
  ]
}
package/dist/adapters/base.js
ADDED
@@ -0,0 +1 @@
export {};
package/dist/adapters/claude.js
ADDED
@@ -0,0 +1,64 @@
import { query } from "@anthropic-ai/claude-agent-sdk";
export class ClaudeAdapter {
  name;
  provider = "claude";
  model;
  systemPrompt = "";
  sessionId = "";
  runtimeEnv = {};
  claudeExecutable;
  constructor(name, model) {
    this.name = name;
    this.model = model;
  }
  async init(config) {
    this.systemPrompt = Array.isArray(config.systemPrompt) ? config.systemPrompt.join("\n\n") : config.systemPrompt;
    this.runtimeEnv = { ...process.env, ...(config.env ?? {}) };
    this.claudeExecutable = config.executablePath ?? process.env.CLAUDE_CODE_EXECUTABLE;
  }
  async send(messages) {
    const transcript = messages
      .map((m) => `[${m.from}]: ${m.text}`)
      .join("\n\n");
    return await this.queryClaude(transcript);
  }
  async destroy() {
    return;
  }
  async queryClaude(transcript) {
    const executableOpt = this.claudeExecutable
      ? { pathToClaudeCodeExecutable: this.claudeExecutable }
      : {};
    const options = {
      cwd: process.cwd(),
      env: this.runtimeEnv,
      allowedTools: ["Read", "Write", "Edit", "Bash", "Glob", "Grep"],
      permissionMode: "bypassPermissions",
      allowDangerouslySkipPermissions: true,
      systemPrompt: this.systemPrompt,
      model: this.model,
      settingSources: [],
      ...(this.sessionId ? { resume: this.sessionId } : {}),
      ...executableOpt
    };
    for await (const message of query({ prompt: transcript, options })) {
      if (message &&
        typeof message === "object" &&
        "type" in message &&
        "subtype" in message &&
        "session_id" in message &&
        message.type === "system" &&
        message.subtype === "init" &&
        typeof message.session_id === "string") {
        this.sessionId = message.session_id;
      }
      if (message && typeof message === "object" && "result" in message) {
        const result = message.result;
        return typeof result === "string" && result.length > 0
          ? result
          : "[No text response from Claude]";
      }
    }
    return "[No text response from Claude]";
  }
}
package/dist/adapters/codex.js
ADDED
@@ -0,0 +1,46 @@
import { Codex } from "@openai/codex-sdk";
export class CodexAdapter {
  name;
  provider = "codex";
  model;
  codex;
  thread;
  constructor(name, model) {
    this.name = name;
    this.model = model;
  }
  async init(config) {
    const cliPath = config.executablePath ?? process.env.CODEX_CLI_EXECUTABLE;
    const systemPrompt = Array.isArray(config.systemPrompt)
      ? config.systemPrompt.join("\n\n")
      : config.systemPrompt;
    this.codex = new Codex({
      ...(cliPath ? { codexPathOverride: cliPath } : {}),
      ...(config.env?.OPENAI_API_KEY ? { apiKey: config.env.OPENAI_API_KEY } : {}),
      ...(systemPrompt ? { config: { developer_instructions: systemPrompt } } : {}),
    });
    this.thread = this.codex.startThread({
      model: this.model,
      sandboxMode: "danger-full-access",
      workingDirectory: process.cwd(),
      approvalPolicy: "never",
    });
  }
  async send(messages) {
    if (!this.thread) {
      return "[Codex thread not initialized]";
    }
    const transcript = messages
      .map((m) => `[${m.from}]: ${m.text}`)
      .join("\n\n");
    const turn = await this.thread.run(transcript);
    if (turn.finalResponse && turn.finalResponse.length > 0) {
      return turn.finalResponse;
    }
    return "[No text response from Codex]";
  }
  async destroy() {
    this.thread = undefined;
    this.codex = undefined;
  }
}
package/dist/adapters/copilot.js
ADDED
@@ -0,0 +1,48 @@
import { CopilotClient, approveAll } from "@github/copilot-sdk";
export class CopilotAdapter {
  name;
  provider = "copilot";
  model;
  client;
  session;
  constructor(name, model) {
    this.name = name;
    this.model = model;
  }
  async init(config) {
    const systemPrompt = Array.isArray(config.systemPrompt)
      ? config.systemPrompt.join("\n\n")
      : config.systemPrompt;
    const cliPath = config.executablePath ?? process.env.COPILOT_CLI_EXECUTABLE;
    this.client = new CopilotClient({
      ...(cliPath ? { cliPath } : {}),
    });
    await this.client.start();
    this.session = await this.client.createSession({
      model: this.model,
      systemMessage: { content: systemPrompt },
      onPermissionRequest: approveAll,
    });
  }
  async send(messages) {
    if (!this.session) {
      return "[Copilot session not initialized]";
    }
    const transcript = messages
      .map((m) => `[${m.from}]: ${m.text}`)
      .join("\n\n");
    const response = await this.session.sendAndWait({ prompt: transcript });
    if (response && response.data && typeof response.data.content === "string" && response.data.content.length > 0) {
      return response.data.content;
    }
    return "[No text response from Copilot]";
  }
  async destroy() {
    if (this.session) {
      await this.session.disconnect();
    }
    if (this.client) {
      await this.client.stop();
    }
  }
}
package/dist/adapters/glm.js
ADDED
@@ -0,0 +1,91 @@
import { spawn } from "node:child_process";
import { query } from "@anthropic-ai/claude-agent-sdk";
export class GlmAdapter {
  name;
  provider = "glm";
  model;
  systemPrompt = "";
  sessionId = "";
  runtimeEnv = {};
  claudeExecutable;
  constructor(name, model) {
    this.name = name;
    this.model = model;
  }
  async init(config) {
    this.systemPrompt = Array.isArray(config.systemPrompt) ? config.systemPrompt.join("\n\n") : config.systemPrompt;
    const aliasEnv = await loadGlmAliasEnv();
    this.runtimeEnv = { ...process.env, ...aliasEnv, ...(config.env ?? {}) };
    this.claudeExecutable = config.executablePath ?? process.env.CLAUDE_CODE_EXECUTABLE;
  }
  async send(messages) {
    const transcript = messages
      .map((m) => `[${m.from}]: ${m.text}`)
      .join("\n\n");
    return await this.queryGlm(transcript);
  }
  async destroy() {
    return;
  }
  async queryGlm(transcript) {
    const executableOpt = this.claudeExecutable
      ? { pathToClaudeCodeExecutable: this.claudeExecutable }
      : {};
    const options = {
      cwd: process.cwd(),
      env: this.runtimeEnv,
      allowedTools: ["Read", "Write", "Edit", "Bash", "Glob", "Grep"],
      permissionMode: "bypassPermissions",
      allowDangerouslySkipPermissions: true,
      systemPrompt: this.systemPrompt,
      model: this.model,
      settingSources: [],
      ...(this.sessionId ? { resume: this.sessionId } : {}),
      ...executableOpt
    };
    for await (const message of query({ prompt: transcript, options })) {
      if (message &&
        typeof message === "object" &&
        "type" in message &&
        "subtype" in message &&
        "session_id" in message &&
        message.type === "system" &&
        message.subtype === "init" &&
        typeof message.session_id === "string") {
        this.sessionId = message.session_id;
      }
      if (message && typeof message === "object" && "result" in message) {
        const result = message.result;
        return typeof result === "string" && result.length > 0
          ? result
          : "[No text response from GLM]";
      }
    }
    return "[No text response from GLM]";
  }
}
async function loadGlmAliasEnv() {
  return new Promise((resolve) => {
    const child = spawn("zsh", ["-ic", "alias glm"], {
      cwd: process.cwd(),
      env: process.env,
      stdio: ["ignore", "pipe", "pipe"]
    });
    let stdout = "";
    child.stdout.on("data", (chunk) => {
      stdout += String(chunk);
    });
    child.on("close", () => {
      const env = {};
      const tokens = stdout.match(/[A-Z_]+="[^"]*"/g) ?? [];
      for (const token of tokens) {
        const [key, raw] = token.split("=");
        env[key] = raw.replace(/^"|"$/g, "");
      }
      resolve(env);
    });
    child.on("error", () => {
      resolve({});
    });
  });
}
package/dist/config/loader.js
ADDED
@@ -0,0 +1,48 @@
+import { readFile } from "node:fs/promises";
+import { homedir } from "node:os";
+const VALID_PROVIDERS = ["claude", "codex", "copilot", "glm"];
+function validateConfig(data) {
+  if (!data || typeof data !== "object") {
+    throw new Error("Config must be an object");
+  }
+  const cfg = data;
+  if (!Array.isArray(cfg.agents)) {
+    throw new Error("Config must have 'agents' array");
+  }
+  if (cfg.agents.length === 0) {
+    throw new Error("Config 'agents' array cannot be empty");
+  }
+  for (let i = 0; i < cfg.agents.length; i++) {
+    const agent = cfg.agents[i];
+    if (!agent || typeof agent !== "object") {
+      throw new Error(`Agent at index ${i} must be an object`);
+    }
+    if (typeof agent.name !== "string" || agent.name.trim() === "") {
+      throw new Error(`Agent at index ${i} must have a non-empty 'name' string`);
+    }
+    if (typeof agent.model !== "string" || agent.model.trim() === "") {
+      throw new Error(`Agent '${agent.name}' must have a non-empty 'model' string`);
+    }
+    if (!VALID_PROVIDERS.includes(agent.provider)) {
+      throw new Error(`Agent '${agent.name}' has invalid provider '${agent.provider}'. Valid: ${VALID_PROVIDERS.join(", ")}`);
+    }
+    if (agent.systemPrompt !== undefined) {
+      const isString = typeof agent.systemPrompt === "string";
+      const isArray = Array.isArray(agent.systemPrompt) && agent.systemPrompt.every((p) => typeof p === "string");
+      if (!isString && !isArray) {
+        throw new Error(`Agent '${agent.name}' systemPrompt must be a string or string array`);
+      }
+    }
+  }
+  for (const agent of cfg.agents) {
+    if (typeof agent.executablePath === "string" && agent.executablePath.startsWith("~/")) {
+      agent.executablePath = homedir() + agent.executablePath.slice(1);
+    }
+  }
+  return cfg;
+}
+export async function loadConfig(path) {
+  const raw = await readFile(path, "utf8");
+  const parsed = JSON.parse(raw);
+  return validateConfig(parsed);
+}
package/dist/index.js
ADDED
@@ -0,0 +1,101 @@
+#!/usr/bin/env node
+import "dotenv/config";
+import { readFile } from "node:fs/promises";
+import path from "node:path";
+import { fileURLToPath } from "node:url";
+import { ClaudeAdapter } from "./adapters/claude.js";
+import { CodexAdapter } from "./adapters/codex.js";
+import { CopilotAdapter } from "./adapters/copilot.js";
+import { GlmAdapter } from "./adapters/glm.js";
+import { loadConfig } from "./config/loader.js";
+import { Orchestrator } from "./orchestrator.js";
+import { runTerminal } from "./ui/terminal.js";
+async function main() {
+  const appRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "..");
+  const configPath = process.env.LLM_PARTY_CONFIG
+    ? path.resolve(process.env.LLM_PARTY_CONFIG)
+    : path.join(appRoot, "configs", "default.json");
+  const config = await loadConfig(configPath);
+  const humanName = config.humanName?.trim() || "USER";
+  const humanTag = config.humanTag?.trim() || toTag(humanName);
+  const maxAutoHops = resolveMaxAutoHops(config.maxAutoHops);
+  const resolveFromAppRoot = (value) => {
+    return path.isAbsolute(value) ? value : path.resolve(appRoot, value);
+  };
+  const adapters = await Promise.all(config.agents.map(async (agent, index, allAgents) => {
+    const promptPaths = Array.isArray(agent.systemPrompt)
+      ? agent.systemPrompt.map((p) => resolveFromAppRoot(p))
+      : [resolveFromAppRoot(agent.systemPrompt)];
+    const promptParts = await Promise.all(promptPaths.map((p) => readFile(p, "utf8")));
+    const promptTemplate = promptParts.join("\n\n---\n\n");
+    const peers = allAgents.filter((candidate) => candidate.name !== agent.name);
+    const tag = agent.tag?.trim() || toTag(agent.name);
+    const otherAgentList = peers.length > 0
+      ? peers
+        .map((peer) => {
+          const peerTag = peer.tag?.trim() || toTag(peer.name);
+          return `- ${peer.name}: use @${peerTag}`;
+        })
+        .join("\n")
+      : "- None";
+    const validHandoffTargets = peers.length > 0
+      ? peers.map((peer) => `@next:${peer.tag?.trim() || toTag(peer.name)}`).join(", ")
+      : "none";
+    const prompt = renderPromptTemplate(promptTemplate, {
+      humanName,
+      humanTag,
+      agentName: agent.name,
+      agentTag: tag,
+      validHandoffTargets,
+      otherAgentList,
+      otherAgentNames: peers.map((peer) => peer.name).join(", ") || "none",
+      allAgentNames: allAgents.map((candidate) => candidate.name).join(", "),
+      allAgentTags: allAgents.map((candidate) => `@${candidate.tag?.trim() || toTag(candidate.name)}`).join(", "),
+      agentCount: String(allAgents.length)
+    });
+    const adapter = agent.provider === "claude"
+      ? new ClaudeAdapter(agent.name, agent.model)
+      : agent.provider === "codex"
+        ? new CodexAdapter(agent.name, agent.model)
+        : agent.provider === "copilot"
+          ? new CopilotAdapter(agent.name, agent.model)
+          : agent.provider === "glm"
+            ? new GlmAdapter(agent.name, agent.model)
+            : null;
+    if (!adapter) {
+      throw new Error(`Unsupported provider in Phase 1: ${agent.provider}`);
+    }
+    await adapter.init({ ...agent, systemPrompt: prompt });
+    return adapter;
+  }));
+  const defaultTimeout = typeof config.timeout === "number" && config.timeout > 0
+    ? config.timeout * 1000
+    : 600000;
+  const agentTimeouts = Object.fromEntries(config.agents
+    .filter((agent) => typeof agent.timeout === "number" && agent.timeout > 0)
+    .map((agent) => [agent.name, agent.timeout * 1000]));
+  const orchestrator = new Orchestrator(adapters, humanName, Object.fromEntries(config.agents.map((agent) => [agent.name, agent.tag?.trim() || toTag(agent.name)])), humanTag, defaultTimeout, agentTimeouts);
+  await runTerminal(orchestrator, { maxAutoHops });
+}
+function resolveMaxAutoHops(value) {
+  if (value === "unlimited") {
+    return Number.POSITIVE_INFINITY;
+  }
+  if (typeof value === "number" && Number.isFinite(value) && value >= 1) {
+    return Math.floor(value);
+  }
+  return 15;
+}
+function renderPromptTemplate(template, variables) {
+  return template.replace(/\{\{\s*([a-zA-Z0-9_]+)\s*\}\}/g, (_, key) => {
+    return variables[key] ?? "";
+  });
+}
+function toTag(value) {
+  const compact = value.trim().toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");
+  return compact || "agent";
+}
+main().catch((err) => {
+  console.error("Fatal error:", err);
+  process.exit(1);
+});
package/dist/orchestrator.js
ADDED
@@ -0,0 +1,185 @@
+import { appendFile, mkdir, writeFile } from "node:fs/promises";
+import path from "node:path";
+export class Orchestrator {
+  agents;
+  agentTags;
+  conversation = [];
+  lastSeenByAgent = new Map();
+  humanName;
+  humanTag;
+  sessionId;
+  transcriptPath;
+  defaultTimeout;
+  agentTimeouts;
+  messageId = 0;
+  contextWindowSize = 16;
+  constructor(agents, humanName = "USER", agentTags, humanTag, defaultTimeout = 600000, agentTimeouts) {
+    this.agents = new Map(agents.map((agent) => [agent.name, agent]));
+    this.agentTags = new Map(agents.map((agent) => [agent.name, agentTags?.[agent.name] ?? defaultTagFor(agent.name)]));
+    this.humanName = humanName;
+    this.humanTag = humanTag ?? defaultTagFor(humanName);
+    this.defaultTimeout = defaultTimeout;
+    this.agentTimeouts = new Map(Object.entries(agentTimeouts ?? {}));
+    this.sessionId = createSessionId();
+    this.transcriptPath = path.resolve(".llm-party", "sessions", `transcript-${this.sessionId}.jsonl`);
+    for (const agent of agents) {
+      this.lastSeenByAgent.set(agent.name, 0);
+    }
+  }
+  getSessionId() {
+    return this.sessionId;
+  }
+  getTranscriptPath() {
+    return this.transcriptPath;
+  }
+  getHumanName() {
+    return this.humanName;
+  }
+  getHumanTag() {
+    return this.humanTag;
+  }
+  listAgents() {
+    return Array.from(this.agents.values()).map((agent) => ({
+      name: agent.name,
+      tag: this.agentTags.get(agent.name) ?? defaultTagFor(agent.name),
+      provider: agent.provider,
+      model: agent.model
+    }));
+  }
+  addUserMessage(text) {
+    const message = {
+      id: ++this.messageId,
+      from: this.humanName,
+      text,
+      createdAt: new Date().toISOString()
+    };
+    this.conversation.push(message);
+    return message;
+  }
+  getHistory() {
+    return [...this.conversation];
+  }
+  resolveTargets(selector) {
+    const normalized = selector.trim().toLowerCase();
+    if (normalized === "all" || normalized === "everyone") {
+      return Array.from(this.agents.keys());
+    }
+    const byName = Array.from(this.agents.values())
+      .filter((agent) => {
+        const tag = this.agentTags.get(agent.name) ?? defaultTagFor(agent.name);
+        return agent.name.toLowerCase() === normalized || tag.toLowerCase() === normalized;
+      })
+      .map((agent) => agent.name);
+    if (byName.length > 0) {
+      return byName;
+    }
+    return Array.from(this.agents.values())
+      .filter((agent) => agent.provider.toLowerCase() === normalized)
+      .map((agent) => agent.name);
+  }
+  async fanOut(targetAgentNames) {
+    return this.fanOutWithProgress(targetAgentNames, () => { });
+  }
+  async fanOutWithProgress(targetAgentNames, onMessage) {
+    const requestedTargets = targetAgentNames && targetAgentNames.length > 0
+      ? targetAgentNames
+      : Array.from(this.agents.keys());
+    const targets = requestedTargets
+      .map((name) => this.agents.get(name))
+      .filter((agent) => Boolean(agent));
+    const historyMaxId = this.messageId;
+    const settled = await Promise.allSettled(targets.map(async (agent) => {
+      const lastSeen = this.lastSeenByAgent.get(agent.name) ?? 0;
+      const unseen = this.conversation.filter((msg) => msg.id > lastSeen && msg.from.toUpperCase() !== agent.name.toUpperCase());
+      if (unseen.length === 0) {
+        this.lastSeenByAgent.set(agent.name, historyMaxId);
+        const response = {
+          id: ++this.messageId,
+          from: agent.name.toUpperCase(),
+          text: "[No new messages for this agent]",
+          createdAt: new Date().toISOString()
+        };
+        this.conversation.push(response);
+        await this.appendTranscript(response);
+        onMessage(response);
+        return response;
+      }
+      const inputMessages = this.buildInputForAgent(agent.name, unseen);
+      const responseText = await this.sendWithTimeout(agent, inputMessages, this.timeoutFor(agent.name));
+      const response = {
+        id: ++this.messageId,
+        from: agent.name.toUpperCase(),
+        text: responseText,
+        createdAt: new Date().toISOString()
+      };
+      this.lastSeenByAgent.set(agent.name, historyMaxId);
+      this.conversation.push(response);
+      await this.appendTranscript(response);
+      onMessage(response);
+      return response;
+    }));
+    const results = settled.map((item, idx) => {
+      if (item.status === "fulfilled") {
+        return item.value;
+      }
+      const agent = targets[idx];
+      const response = {
+        id: ++this.messageId,
+        from: agent.name.toUpperCase(),
+        text: `[Adapter Error] ${item.reason instanceof Error ? item.reason.message : String(item.reason)}`,
+        createdAt: new Date().toISOString()
+      };
+      this.lastSeenByAgent.set(agent.name, historyMaxId);
+      this.conversation.push(response);
+      void this.appendTranscript(response);
+      onMessage(response);
+      return response;
+    });
+    return results;
+  }
+  async appendTranscript(message) {
+    const transcriptDir = path.dirname(this.transcriptPath);
+    await mkdir(transcriptDir, { recursive: true });
+    await appendFile(this.transcriptPath, `${JSON.stringify(message)}\n`, "utf8");
+  }
+  async saveHistory(targetPath) {
+    await writeFile(targetPath, JSON.stringify(this.conversation, null, 2), "utf8");
+  }
+  async sendWithTimeout(agent, messages, timeoutMs) {
+    let timer;
+    const timeoutPromise = new Promise((resolve) => {
+      timer = setTimeout(() => {
+        resolve(`[Timeout] ${agent.name} exceeded ${Math.floor(timeoutMs / 1000)}s`);
+      }, timeoutMs);
+    });
+    try {
+      return await Promise.race([agent.send(messages), timeoutPromise]);
+    }
+    finally {
+      if (timer) {
+        clearTimeout(timer);
+      }
+    }
+  }
+  timeoutFor(agentName) {
+    return this.agentTimeouts.get(agentName) ?? this.defaultTimeout;
+  }
+  buildInputForAgent(agentName, unseen) {
+    const recent = this.conversation.slice(-this.contextWindowSize);
+    const merged = [...recent, ...unseen];
+    const dedupById = new Map();
+    for (const msg of merged) {
+      dedupById.set(msg.id, msg);
+    }
+    const ordered = Array.from(dedupById.values()).sort((a, b) => a.id - b.id);
+    return ordered.filter((msg) => msg.from.toUpperCase() !== agentName.toUpperCase());
+  }
+}
+function createSessionId() {
+  const timestamp = new Date().toISOString().replace(/[-:]/g, "").replace(/\..+/, "").replace("T", "-");
+  return `${timestamp}-${process.pid}`;
+}
+function defaultTagFor(name) {
+  const compact = name.trim().toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");
+  return compact || "agent";
+}
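As a quick illustration of the context-window logic in `buildInputForAgent` above, here is a standalone sketch (an assumed simplification, lifted out of the class) showing how recent and unseen messages are merged, deduplicated by id, ordered, and filtered so an agent never receives its own messages:

```javascript
// Standalone sketch of the merge/dedup/sort logic in buildInputForAgent.
function buildInput(conversation, unseen, agentName, windowSize = 16) {
  const recent = conversation.slice(-windowSize);
  const dedupById = new Map();
  for (const msg of [...recent, ...unseen]) {
    dedupById.set(msg.id, msg); // later entries win; keyed by message id
  }
  return Array.from(dedupById.values())
    .sort((a, b) => a.id - b.id) // restore chronological order
    .filter((m) => m.from.toUpperCase() !== agentName.toUpperCase()); // drop own messages
}

const conversation = [
  { id: 1, from: "USER", text: "hi" },
  { id: 2, from: "ALPHA", text: "hello" },
];
const unseen = [{ id: 1, from: "USER", text: "hi" }];
console.log(buildInput(conversation, unseen, "alpha").map((m) => m.id)); // [ 1 ]
```

The dedup step matters because `unseen` can overlap the rolling window; keying by id makes the overlap harmless.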
package/dist/types.js
ADDED
@@ -0,0 +1 @@
+export {};
package/dist/ui/terminal.js
ADDED
@@ -0,0 +1,213 @@
+import readline from "node:readline/promises";
+import { execFile } from "node:child_process";
+import { stdin as input, stdout as output } from "node:process";
+import chalk from "chalk";
+export async function runTerminal(orchestrator, options = {}) {
+  const rl = readline.createInterface({ input, output });
+  const humanName = orchestrator.getHumanName();
+  const tags = formatTagHints(orchestrator);
+  let lastTargets;
+  let knownChangedFiles = await getChangedFiles();
+  process.on("SIGINT", () => {
+    rl.close();
+    output.write("\n");
+    process.exit(0);
+  });
+  output.write(chalk.cyan(`llm-party Phase 1 started. Commands: /agents, /history, /save <path>, /session, /changes, /exit. Tags: ${tags}\n`));
+  output.write(chalk.gray(`Session: ${orchestrator.getSessionId()}\n`));
+  output.write(chalk.gray(`Transcript: ${orchestrator.getTranscriptPath()}\n`));
+  while (true) {
+    let line = "";
+    try {
+      line = (await rl.question(chalk.green(`${humanName} > `))).trim();
+    }
+    catch (error) {
+      const code = error.code;
+      if (code === "ERR_USE_AFTER_CLOSE" || code === "ABORT_ERR") {
+        break;
+      }
+      throw error;
+    }
+    if (!line) {
+      continue;
+    }
+    if (line === "/exit") {
+      break;
+    }
+    if (line === "/history") {
+      const history = orchestrator.getHistory();
+      for (const msg of history) {
+        output.write(`${chalk.gray(msg.createdAt)} ${chalk.yellow("[" + msg.from + "]")} ${msg.text}\n`);
+      }
+      continue;
+    }
+    if (line === "/agents") {
+      const agents = orchestrator.listAgents();
+      for (const agent of agents) {
+        output.write(`${chalk.cyan(agent.name)} tag=@${agent.tag} provider=${agent.provider} model=${agent.model}\n`);
+      }
+      continue;
+    }
+    if (line === "/session") {
+      output.write(chalk.cyan(`Session: ${orchestrator.getSessionId()}\n`));
+      output.write(chalk.cyan(`Transcript: ${orchestrator.getTranscriptPath()}\n`));
+      continue;
+    }
+    if (line === "/changes") {
+      const changedFiles = await getChangedFiles();
+      if (changedFiles.length === 0) {
+        output.write(chalk.cyan("No modified files in git working tree.\n"));
+      }
+      else {
+        output.write(chalk.cyan("Modified files:\n"));
+        for (const file of changedFiles) {
+          output.write(`- ${file}\n`);
+        }
+      }
+      continue;
+    }
+    if (line.startsWith("/save ")) {
+      const filePath = line.replace("/save ", "").trim();
+      if (!filePath) {
+        output.write(chalk.red("Usage: /save <path>\n"));
+        continue;
+      }
+      await orchestrator.saveHistory(filePath);
+      output.write(chalk.cyan(`Saved history to ${filePath}\n`));
+      continue;
+    }
+    const routing = parseRouting(line);
+    const explicitTargets = routing.targets && routing.targets.length > 0
+      ? Array.from(new Set(routing.targets.flatMap((target) => orchestrator.resolveTargets(target))))
+      : undefined;
+    if (routing.targets && routing.targets.length > 0 && (!explicitTargets || explicitTargets.length === 0)) {
+      output.write(chalk.red(`No agent matched ${routing.targets.map((target) => `@${target}`).join(", ")}. Use /agents to list names/providers.\n`));
+      continue;
+    }
+    const targets = explicitTargets ?? lastTargets;
+    if (explicitTargets && explicitTargets.length > 0) {
+      lastTargets = explicitTargets;
+    }
+    const userMessage = orchestrator.addUserMessage(routing.message);
+    await orchestrator.appendTranscript(userMessage);
+    knownChangedFiles = await dispatchWithHandoffs(orchestrator, output, targets, knownChangedFiles, options.maxAutoHops ?? 6);
+  }
+  rl.close();
+}
+async function getChangedFiles() {
+  return new Promise((resolve) => {
+    execFile("git", ["status", "--porcelain"], { cwd: process.cwd() }, (error, stdout) => {
+      if (error) {
+        resolve([]);
+        return;
+      }
+      const files = stdout
+        .split("\n")
+        .filter((line) => line.length >= 4)
+        .map((line) => line.slice(3).trim());
+      resolve(Array.from(new Set(files)));
+    });
+  });
+}
+async function dispatchWithHandoffs(orchestrator, out, initialTargets, previousChangedFiles = [], maxHops = 6) {
+  let targets = initialTargets;
+  let hops = 0;
+  let knownChangedFiles = previousChangedFiles;
+  while (true) {
+    const targetLabel = targets && targets.length > 0 ? targets.join(",") : "all";
+    out.write(chalk.gray(`Dispatching to ${targetLabel}...\n`));
+    const batch = [];
+    await orchestrator.fanOutWithProgress(targets, (msg) => {
+      batch.push(msg);
+      out.write(chalk.magenta(`[${msg.from}]`) + ` ${msg.text}\n\n`);
+    });
+    const latestChangedFiles = await getChangedFiles();
+    const newlyChanged = diffChangedFiles(knownChangedFiles, latestChangedFiles);
+    if (newlyChanged.length > 0) {
+      out.write(chalk.yellow(`LLM modified files at ${new Date().toISOString()}:\n`));
+      for (const file of newlyChanged) {
+        out.write(chalk.yellow(`- ${file}\n`));
+      }
+    }
+    knownChangedFiles = latestChangedFiles;
+    const nextSelectors = extractNextSelectors(batch);
+    if (nextSelectors.length === 0) {
+      return knownChangedFiles;
+    }
+    if (nextSelectors.some((selector) => {
+      const normalized = selector.toLowerCase();
+      return normalized === orchestrator.getHumanTag().toLowerCase()
+        || normalized === orchestrator.getHumanName().toLowerCase();
+    })) {
+      return knownChangedFiles;
+    }
+    const resolvedTargets = Array.from(new Set(nextSelectors.flatMap((selector) => orchestrator.resolveTargets(selector))));
+    if (resolvedTargets.length === 0) {
+      out.write(chalk.yellow(`Ignored @next target(s): ${nextSelectors.join(",")}\n`));
+      return knownChangedFiles;
+    }
+    hops += 1;
+    if (Number.isFinite(maxHops) && hops >= maxHops) {
+      out.write(chalk.yellow(`Stopped auto-handoff after ${maxHops} hops to prevent loops.\n`));
+      return knownChangedFiles;
+    }
+    out.write(chalk.gray(`Auto handoff via @next to ${resolvedTargets.join(",")}\n`));
+    targets = resolvedTargets;
+  }
+}
+function diffChangedFiles(before, after) {
+  const beforeSet = new Set(before);
+  return after.filter((file) => !beforeSet.has(file));
+}
+function formatTagHints(orchestrator) {
+  const agents = orchestrator.listAgents();
+  const tags = new Set();
+  tags.add("@all");
+  tags.add("@everyone");
+  for (const agent of agents) {
+    tags.add(`@${agent.tag}`);
+    tags.add(`@${agent.provider}`);
+  }
+  return Array.from(tags).join(", ");
+}
+function extractNextSelectors(messages) {
+  const selectors = [];
+  for (const msg of messages) {
+    const regex = /@next\s*:\s*([A-Za-z0-9_-]+)/gi;
+    let match = null;
+    while ((match = regex.exec(msg.text)) !== null) {
+      selectors.push(match[1]);
+    }
+    const controlMatch = msg.text.match(/@control[\s\S]*?next\s*:\s*([A-Za-z0-9_-]+)[\s\S]*?@end/i);
+    if (controlMatch?.[1]) {
+      selectors.push(controlMatch[1]);
+    }
+  }
+  return selectors;
+}
+function parseRouting(line) {
+  const normalizedStart = line.replace(/^[^A-Za-z0-9@_-]+/, "");
+  const startMatch = normalizedStart.match(/^@([A-Za-z0-9_-]+)[\.,:;!?-]*\s+([\s\S]+)$/);
+  if (startMatch) {
+    return {
+      targets: [startMatch[1].toLowerCase()],
+      message: startMatch[2].trim()
+    };
+  }
+  const mentionRegex = /(^|[^A-Za-z0-9_-])@([A-Za-z0-9_-]+)\b/g;
+  const targets = [];
+  let stripped = line;
+  let match = null;
+  while ((match = mentionRegex.exec(line)) !== null) {
+    targets.push(match[2].toLowerCase());
+  }
+  if (targets.length === 0) {
+    return { message: line };
+  }
+  stripped = stripped.replace(/(^|[^A-Za-z0-9_-])@([A-Za-z0-9_-]+)\b/g, (full, prefix) => prefix || "");
+  stripped = stripped.replace(/\s{2,}/g, " ").trim();
+  return {
+    targets,
+    message: stripped || line
+  };
+}
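The inline-mention branch of `parseRouting` above can be exercised in isolation. This sketch is a trimmed, illustrative copy (it omits the leading `@tag message` fast path, so it is not the shipped module) showing how mid-message mentions become lowercase routing targets and are stripped from the text:

```javascript
// Trimmed copy of parseRouting's inline-mention branch (illustrative only).
function parseMentions(line) {
  const mentionRegex = /(^|[^A-Za-z0-9_-])@([A-Za-z0-9_-]+)\b/g;
  const targets = [];
  let match = null;
  while ((match = mentionRegex.exec(line)) !== null) {
    targets.push(match[2].toLowerCase()); // tags are case-insensitive
  }
  if (targets.length === 0) {
    return { message: line }; // no mentions: the caller reuses the previous targets
  }
  const stripped = line
    .replace(/(^|[^A-Za-z0-9_-])@([A-Za-z0-9_-]+)\b/g, (_full, prefix) => prefix || "")
    .replace(/\s{2,}/g, " ")
    .trim();
  return { targets, message: stripped || line };
}

console.log(parseMentions("please review @claude and @codex"));
// { targets: [ 'claude', 'codex' ], message: 'please review and' }
```

The leading-boundary group `(^|[^A-Za-z0-9_-])` keeps email-like strings such as `a@b` from being treated as mentions.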
package/package.json
ADDED
@@ -0,0 +1,57 @@
+{
+  "name": "llm-party-cli",
+  "version": "0.1.0",
+  "type": "module",
+  "bin": {
+    "llm-party": "dist/index.js"
+  },
+  "description": "Bring your models. We'll bring the party.",
+  "license": "MIT",
+  "homepage": "https://llm-party.party",
+  "repository": {
+    "type": "git",
+    "url": "git+https://github.com/aalasolutions/llm-party.git"
+  },
+  "bugs": {
+    "url": "https://github.com/aalasolutions/llm-party/issues"
+  },
+  "author": "AALA Solutions <hello@aalasolutions.com> (https://aalasolutions.com)",
+  "keywords": [
+    "llm",
+    "multi-agent",
+    "orchestrator",
+    "claude",
+    "codex",
+    "copilot",
+    "terminal",
+    "ai",
+    "peer-orchestration",
+    "agent"
+  ],
+  "files": [
+    "dist",
+    "configs",
+    "prompts/base.md",
+    "README.md",
+    "LICENSE"
+  ],
+  "scripts": {
+    "dev": "tsx src/index.ts",
+    "build": "tsc -p tsconfig.json",
+    "start": "node dist/index.js",
+    "prepublishOnly": "npm run build"
+  },
+  "dependencies": {
+    "@anthropic-ai/claude-agent-sdk": "^0.2.74",
+    "@anthropic-ai/claude-code": "^2.1.74",
+    "@github/copilot-sdk": "^0.1.33-preview.2",
+    "@openai/codex-sdk": "^0.115.0",
+    "chalk": "^5.3.0",
+    "dotenv": "^16.4.5"
+  },
+  "devDependencies": {
+    "@types/node": "^22.10.2",
+    "tsx": "^4.19.2",
+    "typescript": "^5.7.2"
+  }
+}
package/prompts/base.md
ADDED
@@ -0,0 +1,54 @@
+# {{agentName}}
+
+You are {{agentName}}. You are one of {{agentCount}} AI agents working with {{humanName}} in a shared terminal orchestrator called llm-party.
+
+## Your Identity
+
+- Name: {{agentName}}
+- Tag: @{{agentTag}}
+- The human you serve: {{humanName}}
+
+## How the System Works
+
+{{humanName}} types messages in a terminal. The orchestrator routes them to one or more agents based on tags.
+
+Routing rules:
+
+- `@{{agentTag}}` routes the message only to you
+- `@all` or no tag routes the message to all active agents in parallel
+- Tags are case-insensitive and may include punctuation after the tag
+
+You receive a rolling window of recent conversation messages so you keep context between turns.
+
+## Agent-to-Agent Handoff
+
+You do have agent-to-agent handoff in this orchestrator. The orchestrator watches your plain-text output for `@next:<tag>` and dispatches accordingly.
+
+If another agent should respond next, end your message with one of these valid targets:
+
+{{validHandoffTargets}}
+
+If you want {{humanName}} to take over, end with:
+
+```
+@next:{{humanTag}}
+```
+
+Rules:
+
+- Do not claim handoff is unavailable. It is available through the orchestrator parser.
+- Use agent tags for handoff, not provider names and not display names.
+- Max 15 automatic hops before the system stops to prevent loops.
+- Only use `@next` when another agent's perspective is genuinely useful.
+
+## Team Context
+
+- Active agent names: {{allAgentNames}}
+- Direct tag examples: {{allAgentTags}}, @all
+- Other agents besides you:
+{{otherAgentList}}
+
+## Behavior Rules
+
+- Address {{humanName}} by name.
+- NEVER LEAVE `cwd`