@jackchen_me/open-multi-agent 0.1.0 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (84)
  1. package/.github/ISSUE_TEMPLATE/bug_report.md +40 -0
  2. package/.github/ISSUE_TEMPLATE/feature_request.md +23 -0
  3. package/.github/pull_request_template.md +14 -0
  4. package/.github/workflows/ci.yml +23 -0
  5. package/CLAUDE.md +72 -0
  6. package/CODE_OF_CONDUCT.md +48 -0
  7. package/CONTRIBUTING.md +72 -0
  8. package/DECISIONS.md +43 -0
  9. package/README.md +73 -140
  10. package/README_zh.md +217 -0
  11. package/SECURITY.md +17 -0
  12. package/dist/agent/agent.d.ts +5 -0
  13. package/dist/agent/agent.d.ts.map +1 -1
  14. package/dist/agent/agent.js +90 -3
  15. package/dist/agent/agent.js.map +1 -1
  16. package/dist/agent/structured-output.d.ts +33 -0
  17. package/dist/agent/structured-output.d.ts.map +1 -0
  18. package/dist/agent/structured-output.js +116 -0
  19. package/dist/agent/structured-output.js.map +1 -0
  20. package/dist/index.d.ts +2 -1
  21. package/dist/index.d.ts.map +1 -1
  22. package/dist/index.js +2 -1
  23. package/dist/index.js.map +1 -1
  24. package/dist/llm/adapter.d.ts +9 -4
  25. package/dist/llm/adapter.d.ts.map +1 -1
  26. package/dist/llm/adapter.js +17 -5
  27. package/dist/llm/adapter.js.map +1 -1
  28. package/dist/llm/anthropic.d.ts +1 -1
  29. package/dist/llm/anthropic.d.ts.map +1 -1
  30. package/dist/llm/anthropic.js +2 -1
  31. package/dist/llm/anthropic.js.map +1 -1
  32. package/dist/llm/copilot.d.ts +92 -0
  33. package/dist/llm/copilot.d.ts.map +1 -0
  34. package/dist/llm/copilot.js +426 -0
  35. package/dist/llm/copilot.js.map +1 -0
  36. package/dist/llm/openai-common.d.ts +47 -0
  37. package/dist/llm/openai-common.d.ts.map +1 -0
  38. package/dist/llm/openai-common.js +209 -0
  39. package/dist/llm/openai-common.js.map +1 -0
  40. package/dist/llm/openai.d.ts +1 -1
  41. package/dist/llm/openai.d.ts.map +1 -1
  42. package/dist/llm/openai.js +3 -224
  43. package/dist/llm/openai.js.map +1 -1
  44. package/dist/orchestrator/orchestrator.d.ts +25 -1
  45. package/dist/orchestrator/orchestrator.d.ts.map +1 -1
  46. package/dist/orchestrator/orchestrator.js +130 -37
  47. package/dist/orchestrator/orchestrator.js.map +1 -1
  48. package/dist/task/queue.js +1 -1
  49. package/dist/task/queue.js.map +1 -1
  50. package/dist/task/task.d.ts +3 -0
  51. package/dist/task/task.d.ts.map +1 -1
  52. package/dist/task/task.js +5 -1
  53. package/dist/task/task.js.map +1 -1
  54. package/dist/team/messaging.d.ts.map +1 -1
  55. package/dist/team/messaging.js +2 -1
  56. package/dist/team/messaging.js.map +1 -1
  57. package/dist/types.d.ts +31 -3
  58. package/dist/types.d.ts.map +1 -1
  59. package/examples/05-copilot-test.ts +49 -0
  60. package/examples/06-local-model.ts +199 -0
  61. package/examples/07-fan-out-aggregate.ts +209 -0
  62. package/examples/08-gemma4-local.ts +203 -0
  63. package/examples/09-gemma4-auto-orchestration.ts +162 -0
  64. package/package.json +4 -3
  65. package/src/agent/agent.ts +115 -6
  66. package/src/agent/structured-output.ts +126 -0
  67. package/src/index.ts +2 -1
  68. package/src/llm/adapter.ts +18 -5
  69. package/src/llm/anthropic.ts +2 -1
  70. package/src/llm/copilot.ts +551 -0
  71. package/src/llm/openai-common.ts +255 -0
  72. package/src/llm/openai.ts +8 -258
  73. package/src/orchestrator/orchestrator.ts +164 -38
  74. package/src/task/queue.ts +1 -1
  75. package/src/task/task.ts +8 -1
  76. package/src/team/messaging.ts +3 -1
  77. package/src/types.ts +31 -2
  78. package/tests/semaphore.test.ts +57 -0
  79. package/tests/shared-memory.test.ts +122 -0
  80. package/tests/structured-output.test.ts +331 -0
  81. package/tests/task-queue.test.ts +244 -0
  82. package/tests/task-retry.test.ts +368 -0
  83. package/tests/task-utils.test.ts +155 -0
  84. package/tests/tool-executor.test.ts +193 -0
package/.github/ISSUE_TEMPLATE/bug_report.md ADDED
@@ -0,0 +1,40 @@
+ ---
+ name: Bug Report
+ about: Report a bug to help us improve
+ title: "[Bug] "
+ labels: bug
+ assignees: ''
+ ---
+
+ ## Describe the bug
+
+ A clear and concise description of what the bug is.
+
+ ## To Reproduce
+
+ Steps to reproduce the behavior:
+
+ 1. Configure agent with '...'
+ 2. Call `runTeam(...)` with '...'
+ 3. See error
+
+ ## Expected behavior
+
+ A clear description of what you expected to happen.
+
+ ## Error output
+
+ ```
+ Paste any error messages or logs here
+ ```
+
+ ## Environment
+
+ - OS: [e.g. macOS 14, Ubuntu 22.04]
+ - Node.js version: [e.g. 20.11]
+ - Package version: [e.g. 0.1.0]
+ - LLM provider: [e.g. Anthropic, OpenAI]
+
+ ## Additional context
+
+ Add any other context about the problem here.
package/.github/ISSUE_TEMPLATE/feature_request.md ADDED
@@ -0,0 +1,23 @@
+ ---
+ name: Feature Request
+ about: Suggest an idea for this project
+ title: "[Feature] "
+ labels: enhancement
+ assignees: ''
+ ---
+
+ ## Problem
+
+ A clear description of the problem or limitation you're experiencing.
+
+ ## Proposed Solution
+
+ Describe what you'd like to happen.
+
+ ## Alternatives Considered
+
+ Any alternative solutions or features you've considered.
+
+ ## Additional context
+
+ Add any other context, code examples, or screenshots about the feature request here.
package/.github/pull_request_template.md ADDED
@@ -0,0 +1,14 @@
+ ## What
+
+ <!-- What does this PR do? One or two sentences. -->
+
+ ## Why
+
+ <!-- Why is this change needed? Link to an issue if applicable: Fixes #123 -->
+
+ ## Checklist
+
+ - [ ] `npm run lint` passes
+ - [ ] `npm test` passes
+ - [ ] Added/updated tests for changed behavior
+ - [ ] No new runtime dependencies (or justified in the PR description)
package/.github/workflows/ci.yml ADDED
@@ -0,0 +1,23 @@
+ name: CI
+
+ on:
+   push:
+     branches: [main]
+   pull_request:
+     branches: [main]
+
+ jobs:
+   test:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         node-version: [18, 20, 22]
+     steps:
+       - uses: actions/checkout@v4
+       - uses: actions/setup-node@v4
+         with:
+           node-version: ${{ matrix.node-version }}
+           cache: npm
+       - run: npm ci
+       - run: npm run lint
+       - run: npm test
package/CLAUDE.md ADDED
@@ -0,0 +1,72 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Commands
+
+ ```bash
+ npm run build        # Compile TypeScript (src/ → dist/)
+ npm run dev          # Watch mode compilation
+ npm run lint         # Type-check only (tsc --noEmit)
+ npm test             # Run all tests (vitest run)
+ npm run test:watch   # Vitest watch mode
+ ```
+
+ Tests in `tests/` cover core modules and run without API keys. Examples in `examples/` are standalone scripts requiring API keys (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`).
+
+ ## Architecture
+
+ ES module TypeScript framework for multi-agent orchestration. Three runtime dependencies: `@anthropic-ai/sdk`, `openai`, `zod`.
+
+ ### Core Execution Flow
+
+ **`OpenMultiAgent`** (`src/orchestrator/orchestrator.ts`) is the top-level public API with three execution modes:
+
+ 1. **`runAgent(config, prompt)`** — single agent, one-shot
+ 2. **`runTeam(team, goal)`** — automatic orchestration: a temporary "coordinator" agent decomposes the goal into a task DAG via LLM call, then tasks execute in dependency order
+ 3. **`runTasks(team, tasks)`** — explicit task pipeline with user-defined dependencies
+
+ ### The Coordinator Pattern (runTeam)
+
+ This is the framework's key feature. When `runTeam()` is called:
+ 1. A coordinator agent receives the goal + agent roster and produces a JSON task array (title, description, assignee, dependsOn)
+ 2. `TaskQueue` resolves dependencies topologically — independent tasks run in parallel, dependent tasks wait
+ 3. `Scheduler` auto-assigns any unassigned tasks (strategies: `dependency-first` default, `round-robin`, `least-busy`, `capability-match`)
+ 4. Each task result is written to `SharedMemory` so subsequent agents see prior results
+ 5. The coordinator synthesizes all task results into a final output
+
+ ### Layer Map
+
+ | Layer | Files | Responsibility |
+ |-------|-------|----------------|
+ | Orchestrator | `orchestrator/orchestrator.ts`, `orchestrator/scheduler.ts` | Top-level API, task decomposition, coordinator pattern |
+ | Team | `team/team.ts`, `team/messaging.ts` | Agent roster, MessageBus (point-to-point + broadcast), SharedMemory binding |
+ | Agent | `agent/agent.ts`, `agent/runner.ts`, `agent/pool.ts` | Agent lifecycle (idle→running→completed/error), conversation loop, concurrency pool with Semaphore |
+ | Task | `task/queue.ts`, `task/task.ts` | Dependency-aware queue, auto-unblock on completion, cascade failure to dependents |
+ | Tool | `tool/framework.ts`, `tool/executor.ts`, `tool/built-in/` | `defineTool()` with Zod schemas, ToolRegistry, parallel batch execution with concurrency semaphore |
+ | LLM | `llm/adapter.ts`, `llm/anthropic.ts`, `llm/openai.ts` | `LLMAdapter` interface (`chat` + `stream`), factory `createAdapter()` |
+ | Memory | `memory/shared.ts`, `memory/store.ts` | Namespaced key-value store (`agentName/key`), markdown summary injection into prompts |
+ | Types | `types.ts` | All interfaces in one file to avoid circular deps |
+ | Exports | `index.ts` | Public API surface |
+
+ ### Agent Conversation Loop (AgentRunner)
+
+ `AgentRunner.run()`: send messages → extract tool-use blocks → execute tools in parallel batch → append results → loop until `end_turn` or `maxTurns` exhausted. Accumulates `TokenUsage` across all turns.
+
+ ### Concurrency Control
+
+ Two independent semaphores: `AgentPool` (max concurrent agent runs, default 5) and `ToolExecutor` (max concurrent tool calls, default 4).
+
+ ### Error Handling
+
+ - Tool errors → caught, returned as `ToolResult(isError: true)`, never thrown
+ - Task failures → cascade to all dependents; independent tasks continue
+ - LLM API errors → propagate to caller
+
+ ### Built-in Tools
+
+ `bash`, `file_read`, `file_write`, `file_edit`, `grep` — registered via `registerBuiltInTools(registry)`.
+
+ ### Adding an LLM Adapter
+
+ Implement the `LLMAdapter` interface with `chat(messages, options)` and `stream(messages, options)`, then register it in the `createAdapter()` factory in `src/llm/adapter.ts`.
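The two-method adapter surface described above keeps custom providers small. Below is a minimal sketch of an adapter — the `ChatMessage`/`ChatOptions`/`ChatResponse` shapes here are simplified assumptions for illustration (the package's real definitions live in `src/types.ts`), and `EchoAdapter` is a hypothetical offline stand-in, not part of the library:

```typescript
// Simplified stand-ins for the framework's message/option types (assumptions).
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string }
interface ChatOptions { model?: string; maxTokens?: number }
interface ChatResponse { content: string; stopReason: 'end_turn' | 'max_tokens' }

// The two-method interface a new provider must implement.
interface LLMAdapter {
  chat(messages: ChatMessage[], options?: ChatOptions): Promise<ChatResponse>
  stream(messages: ChatMessage[], options?: ChatOptions): AsyncIterable<string>
}

// A trivial echo adapter: useful as a template and for offline tests.
class EchoAdapter implements LLMAdapter {
  async chat(messages: ChatMessage[]): Promise<ChatResponse> {
    const last = messages[messages.length - 1]
    return { content: `echo: ${last.content}`, stopReason: 'end_turn' }
  }

  // Derive streaming from chat by yielding word-sized chunks.
  async *stream(messages: ChatMessage[]): AsyncIterable<string> {
    const { content } = await this.chat(messages)
    for (const word of content.split(' ')) yield word + ' '
  }
}
```

A stub like this can also back deterministic tests for orchestration logic without any network calls.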
package/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,48 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our
+ community a positive experience for everyone, regardless of background or
+ identity.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment:
+
+ - Using welcoming and inclusive language
+ - Being respectful of differing viewpoints and experiences
+ - Gracefully accepting constructive feedback
+ - Focusing on what is best for the community
+ - Showing empathy towards other community members
+
+ Examples of unacceptable behavior:
+
+ - Trolling, insulting or derogatory comments, and personal attacks
+ - Public or private unwelcome conduct
+ - Publishing others' private information without explicit permission
+ - Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of
+ acceptable behavior and will take appropriate and fair corrective action in
+ response to any behavior that they deem inappropriate or harmful.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when
+ an individual is officially representing the community in public spaces.
+
+ ## Enforcement
+
+ Instances of unacceptable behavior may be reported to the community leaders
+ responsible for enforcement at **jack@yuanasi.com**. All complaints will be
+ reviewed and investigated promptly and fairly.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org),
+ version 2.1, available at
+ [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html](https://www.contributor-covenant.org/version/2/1/code_of_conduct.html).
package/CONTRIBUTING.md ADDED
@@ -0,0 +1,72 @@
+ # Contributing
+
+ Thanks for your interest in contributing to Open Multi-Agent! This guide covers the basics to get you started.
+
+ ## Setup
+
+ ```bash
+ git clone https://github.com/JackChen-me/open-multi-agent.git
+ cd open-multi-agent
+ npm install
+ ```
+
+ Requires Node.js >= 18.
+
+ ## Development Commands
+
+ ```bash
+ npm run build        # Compile TypeScript (src/ → dist/)
+ npm run dev          # Watch mode compilation
+ npm run lint         # Type-check (tsc --noEmit)
+ npm test             # Run all tests (vitest)
+ npm run test:watch   # Vitest watch mode
+ ```
+
+ ## Running Tests
+
+ All tests live in `tests/`. They test core modules (TaskQueue, SharedMemory, ToolExecutor, Semaphore) without requiring API keys or network access.
+
+ ```bash
+ npm test
+ ```
+
+ Every PR must pass `npm run lint && npm test`. CI runs both automatically on Node 18, 20, and 22.
+
+ ## Making a Pull Request
+
+ 1. Fork the repo and create a branch from `main`
+ 2. Make your changes
+ 3. Add or update tests if you changed behavior
+ 4. Run `npm run lint && npm test` locally
+ 5. Open a PR against `main`
+
+ ### PR Checklist
+
+ - [ ] `npm run lint` passes
+ - [ ] `npm test` passes
+ - [ ] New behavior has test coverage
+ - [ ] Linked to a relevant issue (if one exists)
+
+ ## Code Style
+
+ - TypeScript strict mode, ES modules (`.js` extensions in imports)
+ - No additional linter/formatter configured — follow existing patterns
+ - Keep dependencies minimal (currently 3 runtime deps: `@anthropic-ai/sdk`, `openai`, `zod`)
+
+ ## Architecture Overview
+
+ See the [README](./README.md#architecture) for an architecture diagram. Key entry points:
+
+ - **Orchestrator**: `src/orchestrator/orchestrator.ts` — top-level API
+ - **Task system**: `src/task/queue.ts`, `src/task/task.ts` — dependency DAG
+ - **Agent**: `src/agent/runner.ts` — conversation loop
+ - **Tools**: `src/tool/framework.ts`, `src/tool/executor.ts` — tool registry and execution
+ - **LLM adapters**: `src/llm/` — Anthropic, OpenAI, Copilot
+
+ ## Where to Contribute
+
+ Check the [issues](https://github.com/JackChen-me/open-multi-agent/issues) page. Issues labeled `good first issue` are scoped and approachable. Issues labeled `help wanted` are larger but well-defined.
+
+ ## License
+
+ By contributing, you agree that your contributions will be licensed under the MIT License.
package/DECISIONS.md ADDED
@@ -0,0 +1,43 @@
+ # Architecture Decisions
+
+ This document records deliberate "won't do" decisions for the project. These are features we evaluated and chose NOT to implement — not because they're bad ideas, but because they conflict with our positioning as the **simplest multi-agent framework**.
+
+ If you're considering a PR in any of these areas, please open a discussion first.
+
+ ## Won't Do
+
+ ### 1. Agent Handoffs
+
+ **What**: Agent A transfers an in-progress conversation to Agent B (like OpenAI Agents SDK `handoff()`).
+
+ **Why not**: Handoffs are a different paradigm from our task-based model. Our tasks have clear boundaries — one agent, one task, one result. Handoffs blur those boundaries and add state-transfer complexity. Users who need handoffs likely need a different framework (OpenAI Agents SDK is purpose-built for this).
+
+ ### 2. State Persistence / Checkpointing
+
+ **What**: Save workflow state to a database so long-running workflows can resume after crashes (like LangGraph checkpointing).
+
+ **Why not**: Requires a storage backend (SQLite, Redis, Postgres), schema migrations, and serialization logic. This is enterprise infrastructure — it triples the complexity surface. Our target users run workflows that complete in seconds to minutes, not hours. If you need checkpointing, LangGraph is the right tool.
+
+ **Related**: Closing #20 with this rationale.
+
+ ### 3. A2A Protocol (Agent-to-Agent)
+
+ **What**: Google's open protocol for agents on different servers to discover and communicate with each other.
+
+ **Why not**: Too early — the spec is still evolving and adoption is minimal. Our users run agents in a single process, not across distributed services. If A2A matures and there's real demand, we can revisit. Today it would add complexity for zero practical benefit.
+
+ ### 4. MCP Integration (Model Context Protocol)
+
+ **What**: Anthropic's protocol for connecting LLMs to external tools and data sources.
+
+ **Why not**: MCP is valuable but targets a different layer. Our `defineTool()` API already lets users wrap any external service as a tool in ~10 lines of code. Adding MCP would mean maintaining protocol compatibility, transport layers, and tool discovery — complexity that serves tool platform builders, not our target users who just want to run agent teams.
+
+ ### 5. Dashboard / Visualization
+
+ **What**: Built-in web UI to visualize task DAGs, agent activity, and token usage.
+
+ **Why not**: We expose data, we don't build UI. The `onProgress` callback and upcoming `onTrace` (#18) give users all the raw data. They can pipe it into Grafana, build a custom dashboard, or use console logs. Shipping a web UI means owning a frontend stack, which is outside our scope.
+
+ ---
+
+ *Last updated: 2026-04-03*
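The task-based model defended in point 1 stays simple precisely because it reduces to a DAG walk: tasks declare `dependsOn`, ready tasks run together, and each task yields exactly one result. The following standalone sketch illustrates that wave-by-wave resolution — it is an illustration of the idea, not the library's actual `TaskQueue` implementation:

```typescript
// Illustrative sketch of dependency-wave resolution for a task DAG.
// Not the package's TaskQueue code — names and shapes here are simplified.
interface Task { title: string; dependsOn?: string[] }

function executionWaves(tasks: Task[]): string[][] {
  const done = new Set<string>()
  const pending = [...tasks]
  const waves: string[][] = []
  while (pending.length > 0) {
    // A task is ready once every dependency has completed.
    const ready = pending.filter(t => (t.dependsOn ?? []).every(d => done.has(d)))
    if (ready.length === 0) throw new Error('cycle or missing dependency')
    waves.push(ready.map(t => t.title))
    for (const t of ready) {
      done.add(t.title)
      pending.splice(pending.indexOf(t), 1)
    }
  }
  return waves
}

// design → implement → {test, review}: test and review land in the same wave.
const waves = executionWaves([
  { title: 'design' },
  { title: 'implement', dependsOn: ['design'] },
  { title: 'test', dependsOn: ['implement'] },
  { title: 'review', dependsOn: ['implement'] },
])
// waves: [['design'], ['implement'], ['test', 'review']]
```

Tasks in the same wave are exactly the ones the framework can hand to agents in parallel; handoffs have no natural place in this picture, which is the point of the decision above.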
package/README.md CHANGED
@@ -1,47 +1,36 @@
  # Open Multi-Agent
 
- Build AI agent teams that work together. One agent plans, another implements, a third reviews — the framework handles task scheduling, dependencies, and communication automatically.
+ Build AI agent teams that decompose goals into tasks automatically. Define agents with roles and tools, describe a goal — the framework plans the task graph, schedules dependencies, and runs everything in parallel.
+
+ 3 runtime dependencies. 27 source files. One `runTeam()` call from goal to result.
 
  [![GitHub stars](https://img.shields.io/github/stars/JackChen-me/open-multi-agent)](https://github.com/JackChen-me/open-multi-agent/stargazers)
  [![license](https://img.shields.io/github/license/JackChen-me/open-multi-agent)](./LICENSE)
  [![TypeScript](https://img.shields.io/badge/TypeScript-5.6-blue)](https://www.typescriptlang.org/)
 
+ **English** | [中文](./README_zh.md)
+
  ## Why Open Multi-Agent?
 
+ - **Auto Task Decomposition** — Describe a goal in plain text. A built-in coordinator agent breaks it into a task DAG with dependencies and assignees — no manual orchestration needed.
  - **Multi-Agent Teams** — Define agents with different roles, tools, and even different models. They collaborate through a message bus and shared memory.
  - **Task DAG Scheduling** — Tasks have dependencies. The framework resolves them topologically — dependent tasks wait, independent tasks run in parallel.
- - **Model Agnostic** — Claude and GPT in the same team. Swap models per agent. Bring your own adapter for any LLM.
+ - **Model Agnostic** — Claude, GPT, Gemma 4, and local models (Ollama, vLLM, LM Studio) in the same team. Swap models per agent via `baseURL`.
+ - **Structured Output** — Add `outputSchema` (Zod) to any agent. Output is parsed as JSON, validated, and auto-retried once on failure. Access typed results via `result.structured`.
+ - **Task Retry** — Set `maxRetries` on tasks for automatic retry with exponential backoff. Failed attempts accumulate token usage for accurate billing.
  - **In-Process Execution** — No subprocess overhead. Everything runs in one Node.js process. Deploy to serverless, Docker, CI/CD.
 
  ## Quick Start
 
+ Requires Node.js >= 18.
+
  ```bash
  npm install @jackchen_me/open-multi-agent
  ```
 
- Set `ANTHROPIC_API_KEY` (and optionally `OPENAI_API_KEY`) in your environment.
-
- ```typescript
- import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
-
- const orchestrator = new OpenMultiAgent({ defaultModel: 'claude-sonnet-4-6' })
-
- // One agent, one task
- const result = await orchestrator.runAgent(
-   {
-     name: 'coder',
-     model: 'claude-sonnet-4-6',
-     tools: ['bash', 'file_write'],
-   },
-   'Write a TypeScript function that reverses a string, save it to /tmp/reverse.ts, and run it.',
- )
-
- console.log(result.output)
- ```
+ Set `ANTHROPIC_API_KEY` (and optionally `OPENAI_API_KEY` or `GITHUB_TOKEN` for Copilot) in your environment. Local models via Ollama require no API key — see [example 06](examples/06-local-model.ts).
 
- ## Multi-Agent Team
-
- This is where it gets interesting. Three agents, one goal:
+ Three agents, one goal — the framework handles the rest:
 
  ```typescript
  import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
@@ -86,132 +75,56 @@ console.log(`Success: ${result.success}`)
  console.log(`Tokens: ${result.totalTokenUsage.output_tokens} output tokens`)
  ```
 
- ## More Examples
-
- <details>
- <summary><b>Task Pipeline</b> — explicit control over task graph and assignments</summary>
+ What happens under the hood:
 
- ```typescript
- const result = await orchestrator.runTasks(team, [
-   {
-     title: 'Design the data model',
-     description: 'Write a TypeScript interface spec to /tmp/spec.md',
-     assignee: 'architect',
-   },
-   {
-     title: 'Implement the module',
-     description: 'Read /tmp/spec.md and implement the module in /tmp/src/',
-     assignee: 'developer',
-     dependsOn: ['Design the data model'], // blocked until design completes
-   },
-   {
-     title: 'Write tests',
-     description: 'Read the implementation and write Vitest tests.',
-     assignee: 'developer',
-     dependsOn: ['Implement the module'],
-   },
-   {
-     title: 'Review code',
-     description: 'Review /tmp/src/ and produce a structured code review.',
-     assignee: 'reviewer',
-     dependsOn: ['Implement the module'], // can run in parallel with tests
-   },
- ])
  ```
-
- </details>
-
- <details>
- <summary><b>Custom Tools</b> define tools with Zod schemas</summary>
-
- ```typescript
- import { z } from 'zod'
- import { defineTool, Agent, ToolRegistry, ToolExecutor, registerBuiltInTools } from '@jackchen_me/open-multi-agent'
-
- const searchTool = defineTool({
-   name: 'web_search',
-   description: 'Search the web and return the top results.',
-   inputSchema: z.object({
-     query: z.string().describe('The search query.'),
-     maxResults: z.number().optional().describe('Number of results (default 5).'),
-   }),
-   execute: async ({ query, maxResults = 5 }) => {
-     const results = await mySearchProvider(query, maxResults)
-     return { data: JSON.stringify(results), isError: false }
-   },
- })
-
- const registry = new ToolRegistry()
- registerBuiltInTools(registry)
- registry.register(searchTool)
-
- const executor = new ToolExecutor(registry)
- const agent = new Agent(
-   { name: 'researcher', model: 'claude-sonnet-4-6', tools: ['web_search'] },
-   registry,
-   executor,
- )
-
- const result = await agent.run('Find the three most recent TypeScript releases.')
+ agent_start     coordinator
+ task_start      architect
+ task_complete   architect
+ task_start      developer
+ task_start      developer     // independent tasks run in parallel
+ task_complete   developer
+ task_start      reviewer      // unblocked after implementation
+ task_complete   developer
+ task_complete   reviewer
+ agent_complete  coordinator   // synthesizes final result
+ Success: true
+ Tokens: 12847 output tokens
  ```
 
- </details>
+ ## Three Ways to Run
 
- <details>
- <summary><b>Multi-Model Teams</b> — mix Claude and GPT in one workflow</summary>
+ | Mode | Method | When to use |
+ |------|--------|-------------|
+ | Single agent | `runAgent()` | One agent, one prompt — simplest entry point |
+ | Auto-orchestrated team | `runTeam()` | Give a goal, framework plans and executes |
+ | Explicit pipeline | `runTasks()` | You define the task graph and assignments |
 
- ```typescript
- const claudeAgent: AgentConfig = {
-   name: 'strategist',
-   model: 'claude-opus-4-6',
-   provider: 'anthropic',
-   systemPrompt: 'You plan high-level approaches.',
-   tools: ['file_write'],
- }
-
- const gptAgent: AgentConfig = {
-   name: 'implementer',
-   model: 'gpt-5.4',
-   provider: 'openai',
-   systemPrompt: 'You implement plans as working code.',
-   tools: ['bash', 'file_read', 'file_write'],
- }
-
- const team = orchestrator.createTeam('mixed-team', {
-   name: 'mixed-team',
-   agents: [claudeAgent, gptAgent],
-   sharedMemory: true,
- })
+ ## Contributors
 
- const result = await orchestrator.runTeam(team, 'Build a CLI tool that converts JSON to CSV.')
- ```
+ <a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
+   <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent" />
+ </a>
 
- </details>
+ ## Examples
 
- <details>
- <summary><b>Streaming Output</b></summary>
+ All examples are runnable scripts in [`examples/`](./examples/). Run any of them with `npx tsx`:
 
- ```typescript
- import { Agent, ToolRegistry, ToolExecutor, registerBuiltInTools } from '@jackchen_me/open-multi-agent'
-
- const registry = new ToolRegistry()
- registerBuiltInTools(registry)
- const executor = new ToolExecutor(registry)
-
- const agent = new Agent(
-   { name: 'writer', model: 'claude-sonnet-4-6', maxTurns: 3 },
-   registry,
-   executor,
- )
-
- for await (const event of agent.stream('Explain monads in two sentences.')) {
-   if (event.type === 'text' && typeof event.data === 'string') {
-     process.stdout.write(event.data)
-   }
- }
+ ```bash
+ npx tsx examples/01-single-agent.ts
  ```
 
- </details>
+ | Example | What it shows |
+ |---------|---------------|
+ | [01 — Single Agent](examples/01-single-agent.ts) | `runAgent()` one-shot, `stream()` streaming, `prompt()` multi-turn |
+ | [02 — Team Collaboration](examples/02-team-collaboration.ts) | `runTeam()` auto-orchestration with coordinator pattern |
+ | [03 — Task Pipeline](examples/03-task-pipeline.ts) | `runTasks()` explicit dependency graph (design → implement → test + review) |
+ | [04 — Multi-Model Team](examples/04-multi-model-team.ts) | `defineTool()` custom tools, mixed Anthropic + OpenAI providers, `AgentPool` |
+ | [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot as an LLM provider |
+ | [06 — Local Model](examples/06-local-model.ts) | Ollama + Claude in one pipeline via `baseURL` (works with vLLM, LM Studio, etc.) |
+ | [07 — Fan-Out / Aggregate](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 analysts in parallel, then synthesize |
+ | [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | Pure-local Gemma 4 agent team with tool-calling — zero API cost |
+ | [09 — Gemma 4 Auto-Orchestration](examples/09-gemma4-auto-orchestration.ts) | `runTeam()` with Gemma 4 as coordinator — auto task decomposition, fully local |
 
  ## Architecture
 
@@ -244,6 +157,7 @@
  │ - prompt()        │───►│  LLMAdapter          │
  │ - stream()        │    │  - AnthropicAdapter  │
  └────────┬──────────┘    │  - OpenAIAdapter     │
+          │               │  - CopilotAdapter    │
           │               └──────────────────────┘
  ┌────────▼──────────┐
  │ AgentRunner       │    ┌──────────────────────┐
@@ -263,17 +177,36 @@
  | `file_edit` | Edit a file by replacing an exact string match. |
  | `grep` | Search file contents with regex. Uses ripgrep when available, falls back to Node.js. |
 
+ ## Supported Providers
+
+ | Provider | Config | Env var | Status |
+ |----------|--------|---------|--------|
+ | Anthropic (Claude) | `provider: 'anthropic'` | `ANTHROPIC_API_KEY` | Verified |
+ | OpenAI (GPT) | `provider: 'openai'` | `OPENAI_API_KEY` | Verified |
+ | GitHub Copilot | `provider: 'copilot'` | `GITHUB_TOKEN` | Verified |
+ | Ollama / vLLM / LM Studio | `provider: 'openai'` + `baseURL` | — | Verified |
+
+ Verified local models with tool-calling: **Gemma 4** (see [example 08](examples/08-gemma4-local.ts)).
+
+ Any OpenAI-compatible API should work via `provider: 'openai'` + `baseURL` (DeepSeek, Groq, Mistral, Qwen, MiniMax, etc.). These providers have not been fully verified yet — contributions welcome via [#25](https://github.com/JackChen-me/open-multi-agent/issues/25).
+
  ## Contributing
 
  Issues, feature requests, and PRs are welcome. Some areas where contributions would be especially valuable:
 
- - **LLM Adapters** — Ollama, llama.cpp, vLLM, Gemini. The `LLMAdapter` interface requires just two methods: `chat()` and `stream()`.
+ - **Provider integrations** — Verify and document OpenAI-compatible providers (DeepSeek, Groq, Qwen, MiniMax, etc.) via `baseURL`. See [#25](https://github.com/JackChen-me/open-multi-agent/issues/25). For providers that are NOT OpenAI-compatible (e.g. Gemini), a new `LLMAdapter` implementation is welcome — the interface requires just two methods: `chat()` and `stream()`.
  - **Examples** — Real-world workflows and use cases.
  - **Documentation** — Guides, tutorials, and API docs.
 
  ## Star History
 
- [![Star History Chart](https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date)](https://star-history.com/#JackChen-me/open-multi-agent&Date)
+ <a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
+   <picture>
+     <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260403" />
+     <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260403" />
+     <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260403" />
+   </picture>
+ </a>
 
  ## License
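The Structured Output feature this release adds (`src/agent/structured-output.ts`, surfaced in the README as `outputSchema` with one automatic retry) follows a parse → validate → retry-once shape. A dependency-free sketch of that flow — the names and signatures below are illustrative, not the package's API, and a plain validator function stands in for the real Zod schema:

```typescript
// Sketch of a parse→validate→retry-once structured-output flow.
// `withStructuredOutput`, `Validator`, and the prompt wording are
// illustrative assumptions, not the package's actual API.
type Validator<T> = (value: unknown) => value is T

async function withStructuredOutput<T>(
  callModel: (prompt: string) => Promise<string>,
  prompt: string,
  validate: Validator<T>,
): Promise<T> {
  let lastError = ''
  for (let attempt = 0; attempt < 2; attempt++) { // initial call + one retry
    const raw = await callModel(
      attempt === 0
        ? prompt
        : `${prompt}\n\nPrevious reply was invalid (${lastError}). Reply with valid JSON only.`,
    )
    try {
      const parsed: unknown = JSON.parse(raw)
      if (validate(parsed)) return parsed // typed result for the caller
      lastError = 'failed schema validation'
    } catch {
      lastError = 'not valid JSON'
    }
  }
  throw new Error(`structured output failed after retry: ${lastError}`)
}
```

With a schema library like Zod, `validate` would be replaced by `schema.safeParse(parsed)`, which also yields a precise error message to feed into the retry prompt.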