@jackchen_me/open-multi-agent 0.1.0 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.github/ISSUE_TEMPLATE/bug_report.md +40 -0
- package/.github/ISSUE_TEMPLATE/feature_request.md +23 -0
- package/.github/pull_request_template.md +14 -0
- package/.github/workflows/ci.yml +23 -0
- package/CLAUDE.md +80 -0
- package/CODE_OF_CONDUCT.md +48 -0
- package/CONTRIBUTING.md +72 -0
- package/DECISIONS.md +43 -0
- package/README.md +144 -144
- package/README_zh.md +277 -0
- package/SECURITY.md +17 -0
- package/dist/agent/agent.d.ts +20 -1
- package/dist/agent/agent.d.ts.map +1 -1
- package/dist/agent/agent.js +233 -12
- package/dist/agent/agent.js.map +1 -1
- package/dist/agent/loop-detector.d.ts +39 -0
- package/dist/agent/loop-detector.d.ts.map +1 -0
- package/dist/agent/loop-detector.js +122 -0
- package/dist/agent/loop-detector.js.map +1 -0
- package/dist/agent/pool.d.ts +2 -1
- package/dist/agent/pool.d.ts.map +1 -1
- package/dist/agent/pool.js +4 -2
- package/dist/agent/pool.js.map +1 -1
- package/dist/agent/runner.d.ts +23 -1
- package/dist/agent/runner.d.ts.map +1 -1
- package/dist/agent/runner.js +113 -12
- package/dist/agent/runner.js.map +1 -1
- package/dist/agent/structured-output.d.ts +33 -0
- package/dist/agent/structured-output.d.ts.map +1 -0
- package/dist/agent/structured-output.js +116 -0
- package/dist/agent/structured-output.js.map +1 -0
- package/dist/index.d.ts +5 -2
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +4 -1
- package/dist/index.js.map +1 -1
- package/dist/llm/adapter.d.ts +12 -4
- package/dist/llm/adapter.d.ts.map +1 -1
- package/dist/llm/adapter.js +28 -5
- package/dist/llm/adapter.js.map +1 -1
- package/dist/llm/anthropic.d.ts +1 -1
- package/dist/llm/anthropic.d.ts.map +1 -1
- package/dist/llm/anthropic.js +2 -1
- package/dist/llm/anthropic.js.map +1 -1
- package/dist/llm/copilot.d.ts +92 -0
- package/dist/llm/copilot.d.ts.map +1 -0
- package/dist/llm/copilot.js +427 -0
- package/dist/llm/copilot.js.map +1 -0
- package/dist/llm/gemini.d.ts +65 -0
- package/dist/llm/gemini.d.ts.map +1 -0
- package/dist/llm/gemini.js +317 -0
- package/dist/llm/gemini.js.map +1 -0
- package/dist/llm/grok.d.ts +21 -0
- package/dist/llm/grok.d.ts.map +1 -0
- package/dist/llm/grok.js +24 -0
- package/dist/llm/grok.js.map +1 -0
- package/dist/llm/openai-common.d.ts +54 -0
- package/dist/llm/openai-common.d.ts.map +1 -0
- package/dist/llm/openai-common.js +242 -0
- package/dist/llm/openai-common.js.map +1 -0
- package/dist/llm/openai.d.ts +2 -2
- package/dist/llm/openai.d.ts.map +1 -1
- package/dist/llm/openai.js +23 -226
- package/dist/llm/openai.js.map +1 -1
- package/dist/orchestrator/orchestrator.d.ts +25 -1
- package/dist/orchestrator/orchestrator.d.ts.map +1 -1
- package/dist/orchestrator/orchestrator.js +214 -41
- package/dist/orchestrator/orchestrator.js.map +1 -1
- package/dist/task/queue.d.ts +31 -2
- package/dist/task/queue.d.ts.map +1 -1
- package/dist/task/queue.js +70 -3
- package/dist/task/queue.js.map +1 -1
- package/dist/task/task.d.ts +3 -0
- package/dist/task/task.d.ts.map +1 -1
- package/dist/task/task.js +5 -1
- package/dist/task/task.js.map +1 -1
- package/dist/team/messaging.d.ts.map +1 -1
- package/dist/team/messaging.js +2 -1
- package/dist/team/messaging.js.map +1 -1
- package/dist/tool/text-tool-extractor.d.ts +32 -0
- package/dist/tool/text-tool-extractor.d.ts.map +1 -0
- package/dist/tool/text-tool-extractor.js +187 -0
- package/dist/tool/text-tool-extractor.js.map +1 -0
- package/dist/types.d.ts +167 -7
- package/dist/types.d.ts.map +1 -1
- package/dist/utils/trace.d.ts +12 -0
- package/dist/utils/trace.d.ts.map +1 -0
- package/dist/utils/trace.js +30 -0
- package/dist/utils/trace.js.map +1 -0
- package/examples/05-copilot-test.ts +49 -0
- package/examples/06-local-model.ts +200 -0
- package/examples/07-fan-out-aggregate.ts +209 -0
- package/examples/08-gemma4-local.ts +192 -0
- package/examples/09-structured-output.ts +73 -0
- package/examples/10-task-retry.ts +132 -0
- package/examples/11-trace-observability.ts +133 -0
- package/examples/12-grok.ts +154 -0
- package/examples/13-gemini.ts +48 -0
- package/package.json +14 -3
- package/src/agent/agent.ts +273 -15
- package/src/agent/loop-detector.ts +137 -0
- package/src/agent/pool.ts +9 -2
- package/src/agent/runner.ts +148 -19
- package/src/agent/structured-output.ts +126 -0
- package/src/index.ts +17 -1
- package/src/llm/adapter.ts +29 -5
- package/src/llm/anthropic.ts +2 -1
- package/src/llm/copilot.ts +552 -0
- package/src/llm/gemini.ts +378 -0
- package/src/llm/grok.ts +29 -0
- package/src/llm/openai-common.ts +294 -0
- package/src/llm/openai.ts +31 -261
- package/src/orchestrator/orchestrator.ts +260 -40
- package/src/task/queue.ts +74 -4
- package/src/task/task.ts +8 -1
- package/src/team/messaging.ts +3 -1
- package/src/tool/text-tool-extractor.ts +219 -0
- package/src/types.ts +186 -6
- package/src/utils/trace.ts +34 -0
- package/tests/agent-hooks.test.ts +473 -0
- package/tests/agent-pool.test.ts +212 -0
- package/tests/approval.test.ts +464 -0
- package/tests/built-in-tools.test.ts +393 -0
- package/tests/gemini-adapter.test.ts +97 -0
- package/tests/grok-adapter.test.ts +74 -0
- package/tests/llm-adapters.test.ts +357 -0
- package/tests/loop-detection.test.ts +456 -0
- package/tests/openai-fallback.test.ts +159 -0
- package/tests/orchestrator.test.ts +281 -0
- package/tests/scheduler.test.ts +221 -0
- package/tests/semaphore.test.ts +57 -0
- package/tests/shared-memory.test.ts +122 -0
- package/tests/structured-output.test.ts +331 -0
- package/tests/task-queue.test.ts +244 -0
- package/tests/task-retry.test.ts +368 -0
- package/tests/task-utils.test.ts +155 -0
- package/tests/team-messaging.test.ts +329 -0
- package/tests/text-tool-extractor.test.ts +170 -0
- package/tests/tool-executor.test.ts +193 -0
- package/tests/trace.test.ts +453 -0
- package/vitest.config.ts +9 -0
package/README.md
CHANGED
@@ -1,47 +1,46 @@
 # Open Multi-Agent
 
-
+TypeScript framework for multi-agent orchestration. One `runTeam()` call from goal to result — the framework decomposes it into tasks, resolves dependencies, and runs agents in parallel.
+
+3 runtime dependencies · 33 source files · Deploys anywhere Node.js runs · Mentioned in [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News
 
 [](https://github.com/JackChen-me/open-multi-agent/stargazers)
 [](./LICENSE)
 [](https://www.typescriptlang.org/)
+[](https://github.com/JackChen-me/open-multi-agent/actions)
+
+**English** | [中文](./README_zh.md)
 
 ## Why Open Multi-Agent?
 
-- **
-- **
-- **
-- **
+- **Goal In, Result Out** — `runTeam(team, "Build a REST API")`. A coordinator agent auto-decomposes the goal into a task DAG with dependencies and assignees, runs independent tasks in parallel, and synthesizes the final output. No manual task definitions or graph wiring required.
+- **TypeScript-Native** — Built for the Node.js ecosystem. `npm install`, import, run. No Python runtime, no subprocess bridge, no sidecar services. Embed in Express, Next.js, serverless functions, or CI/CD pipelines.
+- **Auditable and Lightweight** — 3 runtime dependencies (`@anthropic-ai/sdk`, `openai`, `zod`). 33 source files. The entire codebase is readable in an afternoon.
+- **Model Agnostic** — Claude, GPT, Gemma 4, and local models (Ollama, vLLM, LM Studio, llama.cpp server) in the same team. Swap models per agent via `baseURL`.
+- **Multi-Agent Collaboration** — Agents with different roles, tools, and models collaborate through a message bus and shared memory.
+- **Structured Output** — Add `outputSchema` (Zod) to any agent. Output is parsed as JSON, validated, and auto-retried once on failure. Access typed results via `result.structured`.
+- **Task Retry** — Set `maxRetries` on tasks for automatic retry with exponential backoff. Failed attempts accumulate token usage for accurate billing.
+- **Human-in-the-Loop** — Optional `onApproval` callback on `runTasks()`. After each batch of tasks completes, your callback decides whether to proceed or abort remaining work.
+- **Lifecycle Hooks** — `beforeRun` / `afterRun` on `AgentConfig`. Intercept the prompt before execution or post-process results after. Throw from either hook to abort.
+- **Loop Detection** — `loopDetection` on `AgentConfig` catches stuck agents repeating the same tool calls or text output. Configurable action: warn (default), terminate, or custom callback.
+- **Observability** — Optional `onTrace` callback emits structured spans for every LLM call, tool execution, task, and agent run — with timing, token usage, and a shared `runId` for correlation. Zero overhead when not subscribed, zero extra dependencies.
 
 ## Quick Start
 
+Requires Node.js >= 18.
+
 ```bash
 npm install @jackchen_me/open-multi-agent
 ```
 
-Set
-
-```typescript
-import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
-
-const orchestrator = new OpenMultiAgent({ defaultModel: 'claude-sonnet-4-6' })
-
-// One agent, one task
-const result = await orchestrator.runAgent(
-  {
-    name: 'coder',
-    model: 'claude-sonnet-4-6',
-    tools: ['bash', 'file_write'],
-  },
-  'Write a TypeScript function that reverses a string, save it to /tmp/reverse.ts, and run it.',
-)
+Set the API key for your provider. Local models via Ollama require no API key — see [example 06](examples/06-local-model.ts).
 
-
-
-
-
+- `ANTHROPIC_API_KEY`
+- `OPENAI_API_KEY`
+- `GEMINI_API_KEY`
+- `GITHUB_TOKEN` (for Copilot)
 
-
+Three agents, one goal — the framework handles the rest:
 
 ```typescript
 import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
@@ -86,132 +85,54 @@ console.log(`Success: ${result.success}`)
 console.log(`Tokens: ${result.totalTokenUsage.output_tokens} output tokens`)
 ```
 
-
+What happens under the hood:
 
-<details>
-<summary><b>Task Pipeline</b> — explicit control over task graph and assignments</summary>
-
-```typescript
-const result = await orchestrator.runTasks(team, [
-  {
-    title: 'Design the data model',
-    description: 'Write a TypeScript interface spec to /tmp/spec.md',
-    assignee: 'architect',
-  },
-  {
-    title: 'Implement the module',
-    description: 'Read /tmp/spec.md and implement the module in /tmp/src/',
-    assignee: 'developer',
-    dependsOn: ['Design the data model'], // blocked until design completes
-  },
-  {
-    title: 'Write tests',
-    description: 'Read the implementation and write Vitest tests.',
-    assignee: 'developer',
-    dependsOn: ['Implement the module'],
-  },
-  {
-    title: 'Review code',
-    description: 'Review /tmp/src/ and produce a structured code review.',
-    assignee: 'reviewer',
-    dependsOn: ['Implement the module'], // can run in parallel with tests
-  },
-])
 ```
-
-
-
-
-
-
-
-
-
-
-
-
-    description: 'Search the web and return the top results.',
-    inputSchema: z.object({
-      query: z.string().describe('The search query.'),
-      maxResults: z.number().optional().describe('Number of results (default 5).'),
-    }),
-    execute: async ({ query, maxResults = 5 }) => {
-      const results = await mySearchProvider(query, maxResults)
-      return { data: JSON.stringify(results), isError: false }
-    },
-  })
-
-const registry = new ToolRegistry()
-registerBuiltInTools(registry)
-registry.register(searchTool)
-
-const executor = new ToolExecutor(registry)
-const agent = new Agent(
-  { name: 'researcher', model: 'claude-sonnet-4-6', tools: ['web_search'] },
-  registry,
-  executor,
-)
-
-const result = await agent.run('Find the three most recent TypeScript releases.')
+agent_start coordinator
+task_start architect
+task_complete architect
+task_start developer
+task_start developer       // independent tasks run in parallel
+task_complete developer
+task_start reviewer        // unblocked after implementation
+task_complete developer
+task_complete reviewer
+agent_complete coordinator // synthesizes final result
+Success: true
+Tokens: 12847 output tokens
 ```
 
-
-
-<details>
-<summary><b>Multi-Model Teams</b> — mix Claude and GPT in one workflow</summary>
-
-```typescript
-const claudeAgent: AgentConfig = {
-  name: 'strategist',
-  model: 'claude-opus-4-6',
-  provider: 'anthropic',
-  systemPrompt: 'You plan high-level approaches.',
-  tools: ['file_write'],
-}
-
-const gptAgent: AgentConfig = {
-  name: 'implementer',
-  model: 'gpt-5.4',
-  provider: 'openai',
-  systemPrompt: 'You implement plans as working code.',
-  tools: ['bash', 'file_read', 'file_write'],
-}
-
-const team = orchestrator.createTeam('mixed-team', {
-  name: 'mixed-team',
-  agents: [claudeAgent, gptAgent],
-  sharedMemory: true,
-})
+## Three Ways to Run
 
-
-
+| Mode | Method | When to use |
+|------|--------|-------------|
+| Single agent | `runAgent()` | One agent, one prompt — simplest entry point |
+| Auto-orchestrated team | `runTeam()` | Give a goal, framework plans and executes |
+| Explicit pipeline | `runTasks()` | You define the task graph and assignments |
 
-
+## Examples
 
-
-<summary><b>Streaming Output</b></summary>
+All examples are runnable scripts in [`examples/`](./examples/). Run any of them with `npx tsx`:
 
-```
-
-
-const registry = new ToolRegistry()
-registerBuiltInTools(registry)
-const executor = new ToolExecutor(registry)
-
-const agent = new Agent(
-  { name: 'writer', model: 'claude-sonnet-4-6', maxTurns: 3 },
-  registry,
-  executor,
-)
-
-for await (const event of agent.stream('Explain monads in two sentences.')) {
-  if (event.type === 'text' && typeof event.data === 'string') {
-    process.stdout.write(event.data)
-  }
-}
+```bash
+npx tsx examples/01-single-agent.ts
 ```
 
-
+| Example | What it shows |
+|---------|---------------|
+| [01 — Single Agent](examples/01-single-agent.ts) | `runAgent()` one-shot, `stream()` streaming, `prompt()` multi-turn |
+| [02 — Team Collaboration](examples/02-team-collaboration.ts) | `runTeam()` auto-orchestration with coordinator pattern |
+| [03 — Task Pipeline](examples/03-task-pipeline.ts) | `runTasks()` explicit dependency graph (design → implement → test + review) |
+| [04 — Multi-Model Team](examples/04-multi-model-team.ts) | `defineTool()` custom tools, mixed Anthropic + OpenAI providers, `AgentPool` |
+| [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot as an LLM provider |
+| [06 — Local Model](examples/06-local-model.ts) | Ollama + Claude in one pipeline via `baseURL` (works with vLLM, LM Studio, etc.) |
+| [07 — Fan-Out / Aggregate](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 analysts in parallel, then synthesize |
+| [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` with local Gemma 4 via Ollama — zero API cost |
+| [09 — Structured Output](examples/09-structured-output.ts) | `outputSchema` (Zod) on AgentConfig — validated JSON via `result.structured` |
+| [10 — Task Retry](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` with `task_retry` progress events |
+| [11 — Trace Observability](examples/11-trace-observability.ts) | `onTrace` callback — structured spans for LLM calls, tools, tasks, and agents |
+| [12 — Grok](examples/12-grok.ts) | Same as example 02 (`runTeam()` collaboration) with Grok (`XAI_API_KEY`) |
+| [13 — Gemini](examples/13-gemini.ts) | Gemini adapter smoke test with `gemini-2.5-flash` (`GEMINI_API_KEY`) |
 
 ## Architecture
 
@@ -244,6 +165,9 @@ for await (const event of agent.stream('Explain monads in two sentences.')) {
 │  - prompt()       │───►│  LLMAdapter          │
 │  - stream()       │    │  - AnthropicAdapter  │
 └────────┬──────────┘    │  - OpenAIAdapter     │
+         │               │  - CopilotAdapter    │
+         │               │  - GeminiAdapter     │
+         │               │  - GrokAdapter       │
          │               └──────────────────────┘
 ┌────────▼──────────┐
 │  AgentRunner      │    ┌──────────────────────┐
@@ -263,17 +187,93 @@ for await (const event of agent.stream('Explain monads in two sentences.')) {
 | `file_edit` | Edit a file by replacing an exact string match. |
 | `grep` | Search file contents with regex. Uses ripgrep when available, falls back to Node.js. |
 
+## Supported Providers
+
+| Provider | Config | Env var | Status |
+|----------|--------|---------|--------|
+| Anthropic (Claude) | `provider: 'anthropic'` | `ANTHROPIC_API_KEY` | Verified |
+| OpenAI (GPT) | `provider: 'openai'` | `OPENAI_API_KEY` | Verified |
+| Grok (xAI) | `provider: 'grok'` | `XAI_API_KEY` | Verified |
+| GitHub Copilot | `provider: 'copilot'` | `GITHUB_TOKEN` | Verified |
+| Gemini | `provider: 'gemini'` | `GEMINI_API_KEY` | Verified |
+| Ollama / vLLM / LM Studio | `provider: 'openai'` + `baseURL` | — | Verified |
+| llama.cpp server | `provider: 'openai'` + `baseURL` | — | Verified |
+
+Verified local models with tool-calling: **Gemma 4** (see [example 08](examples/08-gemma4-local.ts)).
+
+Any OpenAI-compatible API should work via `provider: 'openai'` + `baseURL` (DeepSeek, Groq, Mistral, Qwen, MiniMax, etc.). **Grok now has first-class support** via `provider: 'grok'`.
+
+### Local Model Tool-Calling
+
+The framework supports tool-calling with local models served by Ollama, vLLM, LM Studio, or llama.cpp. Tool-calling is handled natively by these servers via the OpenAI-compatible API.
+
+**Verified models:** Gemma 4, Llama 3.1, Qwen 3, Mistral, Phi-4. See the full list at [ollama.com/search?c=tools](https://ollama.com/search?c=tools).
+
+**Fallback extraction:** If a local model returns tool calls as text instead of using the `tool_calls` wire format (common with thinking models or misconfigured servers), the framework automatically extracts them from the text output.
+
+**Timeout:** Local inference can be slow. Use `timeoutMs` on `AgentConfig` to prevent indefinite hangs:
+
+```typescript
+const localAgent: AgentConfig = {
+  name: 'local',
+  model: 'llama3.1',
+  provider: 'openai',
+  baseURL: 'http://localhost:11434/v1',
+  apiKey: 'ollama',
+  tools: ['bash', 'file_read'],
+  timeoutMs: 120_000, // abort after 2 minutes
+}
+```
+
+**Troubleshooting:**
+- Model not calling tools? Ensure it appears in Ollama's [Tools category](https://ollama.com/search?c=tools). Not all models support tool-calling.
+- Using Ollama? Update to the latest version (`ollama update`) — older versions have known tool-calling bugs.
+- Proxy interfering? Use `no_proxy=localhost` when running against local servers.
+
+### LLM Configuration Examples
+
+```typescript
+const grokAgent: AgentConfig = {
+  name: 'grok-agent',
+  provider: 'grok',
+  model: 'grok-4',
+  systemPrompt: 'You are a helpful assistant.',
+}
+```
+
+(Set your `XAI_API_KEY` environment variable — no `baseURL` needed anymore.)
+
 ## Contributing
 
 Issues, feature requests, and PRs are welcome. Some areas where contributions would be especially valuable:
 
-- **
+- **Provider integrations** — Verify and document OpenAI-compatible providers (DeepSeek, Groq, Qwen, MiniMax, etc.) via `baseURL`. See [#25](https://github.com/JackChen-me/open-multi-agent/issues/25). For providers that are NOT OpenAI-compatible (e.g. Gemini), a new `LLMAdapter` implementation is welcome — the interface requires just two methods: `chat()` and `stream()`.
 - **Examples** — Real-world workflows and use cases.
 - **Documentation** — Guides, tutorials, and API docs.
 
+## Author
+
+> JackChen — Ex PM (¥100M+ revenue), now indie builder. Follow on [X](https://x.com/JackChen_x) for AI Agent insights.
+
+## Contributors
+
+<a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
+  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&v=20260405" />
+</a>
+
 ## Star History
 
-
+<a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
+  <picture>
+    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260405" />
+    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260405" />
+    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260405" />
+  </picture>
+</a>
+
+## Translations
+
+Help translate this README — [open a PR](https://github.com/JackChen-me/open-multi-agent/pulls).
 
 ## License
 
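The structured-output, retry, and loop-detection options added in this release are described in the README diff above only at the level of field names. A minimal sketch of how they might be combined on one agent and one task — field shapes (`outputSchema` as a Zod schema, `loopDetection` options, `retryDelayMs`/`retryBackoff` types) are assumptions here and are not verified against the published typings:

```typescript
import { z } from 'zod'
// Import path per the README; typings not verified against the package.
import type { AgentConfig } from '@jackchen_me/open-multi-agent'

// Zod schema for the agent's final answer. Per the README, output is
// parsed as JSON, validated, and auto-retried once on validation failure;
// the typed value is then available on result.structured.
const ReviewSchema = z.object({
  verdict: z.enum(['approve', 'request_changes']),
  issues: z.array(z.string()),
})

const reviewer: AgentConfig = {
  name: 'reviewer',
  model: 'claude-sonnet-4-6',
  tools: ['file_read', 'grep'],
  outputSchema: ReviewSchema,
  // Assumed shape: README says the action is warn (default), terminate,
  // or a custom callback.
  loopDetection: { action: 'terminate' },
}

// Per-task retry with exponential backoff; field names come from
// example 10, but delay units and backoff semantics are assumptions.
const tasks = [
  {
    title: 'Review code',
    description: 'Review /tmp/src/ and produce a structured code review.',
    assignee: 'reviewer',
    maxRetries: 2,
    retryDelayMs: 1_000,
    retryBackoff: 2,
  },
]
```

This is a configuration sketch only; consult `dist/types.d.ts` in the diff for the authoritative field definitions.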
package/README_zh.md
ADDED
|
@@ -0,0 +1,277 @@
|
|
|
1
|
+
# Open Multi-Agent
|
|
2
|
+
|
|
3
|
+
TypeScript 多智能体编排框架。一次 `runTeam()` 调用从目标到结果——框架自动拆解任务、解析依赖、并行执行。
|
|
4
|
+
|
|
5
|
+
3 个运行时依赖 · 33 个源文件 · Node.js 能跑的地方都能部署 · 被 [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News 提及(AI 工程领域头部 Newsletter,17 万+订阅者)
|
|
6
|
+
|
|
7
|
+
[](https://github.com/JackChen-me/open-multi-agent/stargazers)
|
|
8
|
+
[](./LICENSE)
|
|
9
|
+
[](https://www.typescriptlang.org/)
|
|
10
|
+
[](https://github.com/JackChen-me/open-multi-agent/actions)
|
|
11
|
+
|
|
12
|
+
[English](./README.md) | **中文**
|
|
13
|
+
|
|
14
|
+
## 为什么选择 Open Multi-Agent?
|
|
15
|
+
|
|
16
|
+
- **目标进,结果出** — `runTeam(team, "构建一个 REST API")`。协调者智能体自动将目标拆解为带依赖关系的任务图,分配给对应智能体,独立任务并行执行,最终合成输出。无需手动定义任务或编排流程图。
|
|
17
|
+
- **TypeScript 原生** — 为 Node.js 生态而生。`npm install` 即用,无需 Python 运行时、无子进程桥接、无额外基础设施。可嵌入 Express、Next.js、Serverless 函数或 CI/CD 流水线。
|
|
18
|
+
- **可审计、极轻量** — 3 个运行时依赖(`@anthropic-ai/sdk`、`openai`、`zod`),33 个源文件。一个下午就能读完全部源码。
|
|
19
|
+
- **模型无关** — Claude、GPT、Gemma 4 和本地模型(Ollama、vLLM、LM Studio、llama.cpp server)可以在同一个团队中使用。通过 `baseURL` 即可接入任何 OpenAI 兼容服务。
|
|
20
|
+
- **多智能体协作** — 定义不同角色、工具和模型的智能体,通过消息总线和共享内存协作。
|
|
21
|
+
- **结构化输出** — 为任意智能体添加 `outputSchema`(Zod),输出自动解析为 JSON 并校验,校验失败自动重试一次。通过 `result.structured` 获取类型化结果。
|
|
22
|
+
- **任务重试** — 为任务设置 `maxRetries`,失败时自动指数退避重试。所有尝试的 token 用量累计,确保计费准确。
|
|
23
|
+
- **人机协同** — `runTasks()` 支持可选的 `onApproval` 回调。每批任务完成后,由你的回调决定是否继续执行后续任务。
|
|
24
|
+
- **生命周期钩子** — `AgentConfig` 上的 `beforeRun` / `afterRun`。在执行前拦截 prompt,或在执行后处理结果。从钩子中 throw 可中止运行。
|
|
25
|
+
- **循环检测** — `AgentConfig` 上的 `loopDetection` 可检测智能体重复相同工具调用或文本输出的卡死循环。可配置行为:警告(默认)、终止、或自定义回调。
|
|
26
|
+
- **可观测性** — 可选的 `onTrace` 回调为每次 LLM 调用、工具执行、任务和智能体运行发出结构化 span 事件——包含耗时、token 用量和共享的 `runId` 用于关联追踪。未订阅时零开销,零额外依赖。
|
|
27
|
+
|
|
28
|
+
## 快速开始
|
|
29
|
+
|
|
30
|
+
需要 Node.js >= 18。
|
|
31
|
+
|
|
32
|
+
```bash
|
|
33
|
+
npm install @jackchen_me/open-multi-agent
|
|
34
|
+
```
|
|
35
|
+
|
|
36
|
+
根据使用的 Provider 设置对应的 API key。通过 Ollama 使用本地模型无需 API key — 参见 [example 06](examples/06-local-model.ts)。
|
|
37
|
+
|
|
38
|
+
- `ANTHROPIC_API_KEY`
|
|
39
|
+
- `OPENAI_API_KEY`
|
|
40
|
+
- `GEMINI_API_KEY`
|
|
41
|
+
- `XAI_API_KEY`(Grok)
|
|
42
|
+
- `GITHUB_TOKEN`(Copilot)
|
|
43
|
+
|
|
44
|
+
三个智能体,一个目标——框架处理剩下的一切:
|
|
45
|
+
|
|
46
|
+
```typescript
|
|
47
|
+
import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
|
|
48
|
+
import type { AgentConfig } from '@jackchen_me/open-multi-agent'
|
|
49
|
+
|
|
50
|
+
const architect: AgentConfig = {
|
|
51
|
+
name: 'architect',
|
|
52
|
+
model: 'claude-sonnet-4-6',
|
|
53
|
+
systemPrompt: 'You design clean API contracts and file structures.',
|
|
54
|
+
tools: ['file_write'],
|
|
55
|
+
}
|
|
56
|
+
|
|
57
|
+
const developer: AgentConfig = {
|
|
58
|
+
name: 'developer',
|
|
59
|
+
model: 'claude-sonnet-4-6',
|
|
60
|
+
systemPrompt: 'You implement what the architect designs.',
|
|
61
|
+
tools: ['bash', 'file_read', 'file_write', 'file_edit'],
|
|
62
|
+
}
|
|
63
|
+
|
|
64
|
+
const reviewer: AgentConfig = {
|
|
65
|
+
name: 'reviewer',
|
|
66
|
+
model: 'claude-sonnet-4-6',
|
|
67
|
+
systemPrompt: 'You review code for correctness and clarity.',
|
|
68
|
+
tools: ['file_read', 'grep'],
|
|
69
|
+
}
|
|
70
|
+
|
|
71
|
+
const orchestrator = new OpenMultiAgent({
|
|
72
|
+
defaultModel: 'claude-sonnet-4-6',
|
|
73
|
+
onProgress: (event) => console.log(event.type, event.agent ?? event.task ?? ''),
|
|
74
|
+
})
|
|
75
|
+
|
|
76
|
+
const team = orchestrator.createTeam('api-team', {
|
|
77
|
+
name: 'api-team',
|
|
78
|
+
agents: [architect, developer, reviewer],
|
|
79
|
+
sharedMemory: true,
|
|
80
|
+
})
|
|
81
|
+
|
|
82
|
+
// 描述一个目标——框架将其拆解为任务并编排执行
|
|
83
|
+
const result = await orchestrator.runTeam(team, 'Create a REST API for a todo list in /tmp/todo-api/')
|
|
84
|
+
|
|
85
|
+
console.log(`成功: ${result.success}`)
|
|
86
|
+
console.log(`Token 用量: ${result.totalTokenUsage.output_tokens} output tokens`)
|
|
87
|
+
```
|
|
88
|
+
|
|
89
|
+
执行过程:
|
|
90
|
+
|
|
91
|
+
```
|
|
92
|
+
agent_start coordinator
|
|
93
|
+
task_start architect
|
|
94
|
+
task_complete architect
|
|
95
|
+
task_start developer
|
|
96
|
+
task_start developer // 无依赖的任务并行执行
|
|
97
|
+
task_complete developer
|
|
98
|
+
task_start reviewer // 实现完成后自动解锁
|
|
99
|
+
task_complete developer
|
|
100
|
+
task_complete reviewer
|
|
101
|
+
agent_complete coordinator // 综合所有结果
|
|
102
|
+
Success: true
|
|
103
|
+
Tokens: 12847 output tokens
|
|
104
|
+
```
|
|
105
|
+
|
|
106
|
+
## 三种运行模式
|
|
107
|
+
|
|
108
|
+
| 模式 | 方法 | 适用场景 |
|
|
109
|
+
|------|------|----------|
|
|
110
|
+
| 单智能体 | `runAgent()` | 一个智能体,一个提示词——最简入口 |
|
|
111
|
+
| 自动编排团队 | `runTeam()` | 给一个目标,框架自动规划和执行 |
|
|
112
|
+
| 显式任务管线 | `runTasks()` | 你自己定义任务图和分配 |
|
|
113
|
+
|
|
114
|
+
## 示例
|
|
115
|
+
|
|
116
|
+
所有示例都是可运行脚本,位于 [`examples/`](./examples/) 目录。使用 `npx tsx` 运行:
|
|
117
|
+
|
|
118
|
+
```bash
|
|
119
|
+
npx tsx examples/01-single-agent.ts
|
|
120
|
+
```
|
|
121
|
+
|
|
122
|
+
| 示例 | 展示内容 |
|
|
123
|
+
|------|----------|
|
|
124
|
+
| [01 — 单智能体](examples/01-single-agent.ts) | `runAgent()` 单次调用、`stream()` 流式输出、`prompt()` 多轮对话 |
|
|
125
|
+
| [02 — 团队协作](examples/02-team-collaboration.ts) | `runTeam()` 自动编排 + 协调者模式 |
|
|
126
|
+
| [03 — 任务流水线](examples/03-task-pipeline.ts) | `runTasks()` 显式依赖图(设计 → 实现 → 测试 + 评审) |
|
|
127
|
+
| [04 — 多模型团队](examples/04-multi-model-team.ts) | `defineTool()` 自定义工具、Anthropic + OpenAI 混合、`AgentPool` |
|
|
128
|
+
| [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot 作为 LLM 提供者 |
|
|
129
|
+
| [06 — 本地模型](examples/06-local-model.ts) | Ollama + Claude 混合流水线,通过 `baseURL` 接入(兼容 vLLM、LM Studio 等) |
|
|
130
|
+
| [07 — 扇出聚合](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 个分析师并行,然后综合 |
|
|
131
|
+
| [08 — Gemma 4 本地](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` 本地 Gemma 4 via Ollama — 零 API 费用 |
|
|
132
|
+
| [09 — 结构化输出](examples/09-structured-output.ts) | `outputSchema`(Zod)— 校验 JSON 输出,通过 `result.structured` 获取 |
|
|
133
|
+
| [10 — 任务重试](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` + `task_retry` 进度事件 |
|
|
134
|
+
| [11 — 可观测性](examples/11-trace-observability.ts) | `onTrace` 回调 — LLM 调用、工具、任务、智能体的结构化 span 事件 |
|
|
135
|
+
| [12 — Grok](examples/12-grok.ts) | 同示例 02(`runTeam()` 团队协作),使用 Grok(`XAI_API_KEY`) |
|
|
136
|
+
| [13 — Gemini](examples/13-gemini.ts) | Gemini 适配器测试,使用 `gemini-2.5-flash`(`GEMINI_API_KEY`) |
|
|
137
|
+
|
|
138
|
+
## Architecture

```
┌───────────────────────────────────────────────────────────────┐
│                OpenMultiAgent (Orchestrator)                  │
│                                                               │
│ createTeam()  runTeam()  runTasks()  runAgent()  getStatus()  │
└──────────────────────┬────────────────────────────────────────┘
                       │
            ┌──────────▼──────────┐
            │        Team         │
            │  - AgentConfig[]    │
            │  - MessageBus       │
            │  - TaskQueue        │
            │  - SharedMemory     │
            └──────────┬──────────┘
                       │
         ┌─────────────┴─────────────┐
         │                           │
┌────────▼──────────┐    ┌───────────▼───────────┐
│     AgentPool     │    │       TaskQueue       │
│  - Semaphore      │    │  - dependency graph   │
│  - runParallel()  │    │  - auto unblock       │
└────────┬──────────┘    │  - cascade failure    │
         │               └───────────────────────┘
┌────────▼──────────┐
│       Agent       │
│  - run()          │    ┌──────────────────────┐
│  - prompt()       │───►│      LLMAdapter      │
│  - stream()       │    │  - AnthropicAdapter  │
└────────┬──────────┘    │  - OpenAIAdapter     │
         │               │  - CopilotAdapter    │
         │               │  - GeminiAdapter     │
         │               │  - GrokAdapter       │
         │               └──────────────────────┘
┌────────▼──────────┐
│    AgentRunner    │    ┌──────────────────────┐
│  - conversation   │───►│     ToolRegistry     │
│    loop           │    │  - defineTool()      │
│  - tool dispatch  │    │  - 5 built-in tools  │
└───────────────────┘    └──────────────────────┘
```

## Built-in Tools

| Tool | Description |
|------|-------------|
| `bash` | Runs a shell command. Returns stdout + stderr. Supports a timeout and a working directory. |
| `file_read` | Reads a file at an absolute path. Supports an offset and a line limit for large files. |
| `file_write` | Writes or creates a file. Parent directories are created automatically. |
| `file_edit` | Edits a file by exact string matching. |
| `grep` | Searches file contents with regular expressions. Uses ripgrep when available, falling back to a Node.js implementation. |

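Beyond the built-ins, the architecture diagram above shows a `ToolRegistry` with `defineTool()`. The real signature lives in the package source; purely as a hypothetical sketch, a custom tool definition might look like:

```typescript
// Hypothetical shape — field names here are assumptions, not the documented API.
const wordCount = defineTool({
  name: 'word_count',
  description: 'Count the words in a piece of text.',
  parameters: {
    text: { type: 'string', description: 'The text to count' },
  },
  async execute({ text }: { text: string }) {
    // Split on whitespace and return the count as a string result.
    return String(text.trim().split(/\s+/).filter(Boolean).length)
  },
})
```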
## Supported Providers

| Provider | Config | Env variable | Status |
|----------|--------|--------------|--------|
| Anthropic (Claude) | `provider: 'anthropic'` | `ANTHROPIC_API_KEY` | Verified |
| OpenAI (GPT) | `provider: 'openai'` | `OPENAI_API_KEY` | Verified |
| Grok (xAI) | `provider: 'grok'` | `XAI_API_KEY` | Verified |
| GitHub Copilot | `provider: 'copilot'` | `GITHUB_TOKEN` | Verified |
| Gemini | `provider: 'gemini'` | `GEMINI_API_KEY` | Verified |
| Ollama / vLLM / LM Studio | `provider: 'openai'` + `baseURL` | — | Verified |
| llama.cpp server | `provider: 'openai'` + `baseURL` | — | Verified |

Local model verified for tool-calling: **Gemma 4** (see [example 08](examples/08-gemma4-local.ts)).

Any OpenAI-compatible API can be plugged in via `provider: 'openai'` + `baseURL` (DeepSeek, Groq, Mistral, Qwen, MiniMax, and others). **Grok is now supported natively** via `provider: 'grok'`.

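For instance, wiring up DeepSeek through the OpenAI adapter might look like the sketch below (the endpoint and model id reflect DeepSeek's public OpenAI-compatible API, not anything documented in this README):

```typescript
const deepseekAgent: AgentConfig = {
  name: 'deepseek',
  provider: 'openai',                     // reuse the OpenAI-compatible adapter
  baseURL: 'https://api.deepseek.com/v1', // assumed provider endpoint
  apiKey: process.env.DEEPSEEK_API_KEY,
  model: 'deepseek-chat',
  systemPrompt: 'You are a helpful assistant.',
}
```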
### Local Model Tool-Calling

The framework supports tool-calling with local models served by Ollama, vLLM, LM Studio, or llama.cpp. Tool-calling is handled natively by these servers through their OpenAI-compatible APIs.

**Verified models:** Gemma 4, Llama 3.1, Qwen 3, Mistral, Phi-4. See the full list at [ollama.com/search?c=tools](https://ollama.com/search?c=tools).

**Fallback extraction:** If a local model returns tool calls as plain text rather than the `tool_calls` protocol format (common with thinking models or misconfigured servers), the framework automatically extracts them from the text output.

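A simplified sketch of what such fallback extraction can look like (illustrative only, not the framework's actual implementation): scan the text for balanced JSON objects and accept the first one that parses with `name` and `arguments` fields.

```typescript
interface ExtractedToolCall {
  name: string
  arguments: Record<string, unknown>
}

function extractToolCall(text: string): ExtractedToolCall | null {
  // Walk every '{' and look for the balanced '}' that closes it.
  for (let start = text.indexOf('{'); start !== -1; start = text.indexOf('{', start + 1)) {
    let depth = 0
    for (let i = start; i < text.length; i++) {
      if (text[i] === '{') depth++
      else if (text[i] === '}' && --depth === 0) {
        try {
          const parsed = JSON.parse(text.slice(start, i + 1))
          if (typeof parsed.name === 'string' && parsed.arguments !== null && typeof parsed.arguments === 'object') {
            return { name: parsed.name, arguments: parsed.arguments }
          }
        } catch {
          // Not valid JSON; keep scanning from the next '{'.
        }
        break
      }
    }
  }
  return null
}

const reply = 'Let me check that file. {"name": "file_read", "arguments": {"path": "/tmp/notes.txt"}}'
console.log(extractToolCall(reply)?.name) // → "file_read"
```

Note that a real extractor also has to cope with braces inside string values and with fenced ```json blocks; this sketch only shows the core idea.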
**Timeouts:** Local inference can be slow. Use `timeoutMs` on `AgentConfig` to avoid waiting indefinitely:

```typescript
const localAgent: AgentConfig = {
  name: 'local',
  model: 'llama3.1',
  provider: 'openai',
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama',
  tools: ['bash', 'file_read'],
  timeoutMs: 120_000, // abort after 2 minutes
}
```

**Common issues:**
- Model never calls tools? Make sure it is listed in Ollama's [Tools category](https://ollama.com/search?c=tools). Not every model supports tool-calling.
- Using Ollama? Update to the latest version (`ollama update`); older releases have known tool-calling bugs.
- Proxy interference? Set `no_proxy=localhost` for local servers.

### LLM Configuration Example

```typescript
const grokAgent: AgentConfig = {
  name: 'grok-agent',
  provider: 'grok',
  model: 'grok-4',
  systemPrompt: 'You are a helpful assistant.',
}
```

(Just set the `XAI_API_KEY` environment variable; no `baseURL` is needed.)

## Contributing

Issues, feature requests, and PRs are all welcome. Contributions in these areas are especially valuable:

- **Provider integrations** — verify and document OpenAI-compatible providers (DeepSeek, Groq, Qwen, MiniMax, and others) connected via `baseURL`. See [#25](https://github.com/JackChen-me/open-multi-agent/issues/25). For providers that are not OpenAI-compatible, new `LLMAdapter` implementations are welcome; the interface has just two methods: `chat()` and `stream()`.
- **Examples** — real-world workflows and use cases.
- **Documentation** — guides, tutorials, and API docs.

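As a rough skeleton of what a new adapter involves (the method names `chat()` and `stream()` come from the bullet above; every parameter and return type here is an assumption, so check `dist/llm/adapter.d.ts` in the package for the real interface):

```typescript
// Hypothetical skeleton only; shapes are illustrative, not the real LLMAdapter types.
class MyProviderAdapter {
  constructor(private apiKey: string, private model: string) {}

  // One-shot completion: translate messages/tools into the provider's wire
  // format, call its HTTP API, and map the response back.
  async chat(messages: unknown[], tools?: unknown[]): Promise<unknown> {
    throw new Error('not implemented')
  }

  // Streaming completion: yield text chunks as the provider emits them.
  async *stream(messages: unknown[]): AsyncGenerator<string> {
    yield ''
  }
}
```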
## Author

> JackChen — former WPS product manager, now an independent founder. Follow [「杰克西|硅基杠杆」](https://www.xiaohongshu.com/user/profile/5a1bdc1e4eacab4aa39ea6d6) on Xiaohongshu for my ongoing thoughts on AI agents.

## Contributors

<a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&v=20260405" />
</a>

## Star History

<a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260405" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260405" />
    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260405" />
  </picture>
</a>

## License

MIT

package/SECURITY.md (new file)

# Security Policy

## Supported Versions

| Version | Supported |
|---------|-----------|
| latest  | Yes       |

## Reporting a Vulnerability

If you discover a security vulnerability, please report it responsibly via email:

**jack@yuanasi.com**

Please do **not** open a public GitHub issue for security vulnerabilities.

We will acknowledge receipt within 48 hours and aim to provide a fix or mitigation plan within 7 days.
|