agent-orcha 2026.320.2002 → 2026.320.2159

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,29 +1,36 @@
- ![alt text](https://github.com/ddalcu/agent-orcha/raw/main/docs/assets/images/logo.png "Agent Orcha Logo")
+ <p align="center">
+ <img src="docs/assets/images/screenshots/p2p.png" alt="Agent Orcha — P2P Network" width="100%" />
+ </p>

  # Agent Orcha

- Agent Orcha is a declarative framework designed to build, manage, and scale multi-agent AI systems with ease. It combines the flexibility of TypeScript with the simplicity of YAML to orchestrate complex workflows, manage diverse tools via MCP, and integrate semantic search seamlessly. Built for developers and operators who demand reliability, extensibility, and clarity in their AI operations.
+ Agent Orcha is a declarative framework for building, managing, and scaling multi-agent AI systems. Define agents, workflows, and knowledge stores in YAML; Orcha handles the rest. Run locally on bare metal for maximum performance, in Docker for cloud providers, or download a native desktop app for macOS, Windows, and Linux.

- **[Documentation](https://agentorcha.com)** | **[NPM Package](https://www.npmjs.com/package/agent-orcha)** | **[Docker Hub](https://hub.docker.com/r/ddalcu/agent-orcha)**
+ **[Documentation](https://agentorcha.com)** | **[NPM Package](https://www.npmjs.com/package/agent-orcha)** | **[Docker Hub](https://hub.docker.com/r/ddalcu/agent-orcha)** | **[Native Apps](https://github.com/ddalcu/agent-orcha/releases)**

  ```bash
- # With Docker (cloud LLM providers)
- docker run -p 3000:3000 -v ./my-workspace:/data -e AUTH_PASSWORD=your-secret-password ddalcu/agent-orcha start
+ # Native app (macOS, Windows, Linux) — download from Releases
+ # https://github.com/ddalcu/agent-orcha/releases

  # With npx (local inference — uses your GPU / Apple Silicon directly)
- npx agent-orcha init my-workspace && cd my-workspace && npx agent-orcha start
+ npx agent-orcha
+
+ # With Docker (cloud LLM providers)
+ docker run -p 3000:3000 -v ./my-workspace:/data ddalcu/agent-orcha
  ```

  ## Why Agent Orcha?

  - **Declarative AI**: Define agents, workflows, and infrastructure in clear, version-controlled YAML files
+ - **P2P Agent & LLM Sharing**: Share agents and LLM engines across your team or organization over an encrypted peer-to-peer network — no API keys exposed, no central server required, with per-peer rate limiting and private network keys
+ - **Native Desktop Apps**: Download pre-built binaries for macOS (.app), Windows (.exe), and Linux from [GitHub Releases](https://github.com/ddalcu/agent-orcha/releases) — system tray, auto-updates, zero setup
+ - **Model Agnostic**: Seamlessly swap between OpenAI, Gemini, Anthropic, or local LLMs (llama-cpp, MLX, Ollama, LM Studio) without rewriting logic
  - **Published Agents**: Share agents via standalone chat pages at `/chat/<name>` with optional per-agent password protection
- - **Model Agnostic**: Seamlessly swap between OpenAI, Gemini, Anthropic, or local LLMs (Ollama, LM Studio) without rewriting logic
  - **Universal Tooling**: Leverage the **Model Context Protocol (MCP)** to connect agents to any external service, API, or database
  - **Knowledge Stores**: Built-in SQLite-based vector store with optional **direct mapping** for knowledge graphs — semantic search and graph analysis as a first-class citizen
- - **Robust Workflow Engine**: Orchestrate complex multi-agent sequences with parallel execution, conditional logic, and state management — or use **ReAct** for autonomous prompt-driven workflows
- - **Conversation Memory**: Built-in session-based memory for multi-turn dialogues with automatic message management and TTL cleanup
+ - **Robust Workflow Engine**: Orchestrate complex multi-agent sequences with parallel execution, conditional logic, and state management — or use **ReAct** for autonomous prompt-driven workflows with multi-turn continuations
  - **Browser Sandbox**: Full Chromium browser with CDP control, Xvfb, and noVNC — plus an experimental **Vision Browser** for pixel-coordinate control with vision LLMs
+ - **Conversation Memory**: Built-in session-based memory for multi-turn dialogues with automatic message management and TTL cleanup
  - **Security**: Rate limiting on auth endpoints, SSRF protection, SQL injection hardening, sandboxed execution
  - **Extensible Functions**: Drop in simple JavaScript functions to extend agent capabilities with zero boilerplate
 
@@ -39,14 +46,18 @@ Built-in web dashboard at `http://localhost:3000` with agent testing, knowledge
  <img src="docs/assets/images/screenshots/0.0.7-agentedit.png" alt="Agent Orcha Studio — Visual Agent Composer" width="100%" />
  </p>

+ <p align="center">
+ <img src="docs/assets/images/screenshots/llm.png" alt="Agent Orcha Studio — Local LLM Management" width="100%" />
+ </p>
+
  - **Agents** — Browse, invoke, stream responses, manage sessions
  - **Knowledge** — Browse, search, view entities and graph structure
  - **MCP** — Browse servers, view and call tools
  - **Skills** — Browse and inspect skills
- - **Monitor** — Real-time LLM call logs, ReAct loop metrics, and activity feed
+ - **Monitor** — Real-time LLM call logs, P2P task tracking, ReAct loop metrics, and activity feed
  - **IDE** — File editor with syntax highlighting, hot-reload, and **visual agent composer** for `.agent.yaml` files
  - **Local LLM** — Download, activate, and manage local model engines (llama-cpp, MLX, Ollama, LM Studio)
- - **P2P** — Browse peers, test remote agents and LLMs on the P2P network
+ - **P2P** — Browse peers, test remote agents and LLMs, configure sharing and rate limits

  ## Architecture
 
@@ -64,29 +75,32 @@ Built-in web dashboard at `http://localhost:3000` with agent testing, knowledge
 
  Agent Orcha can be used in multiple ways:

- 1. **Docker Image** — Official image at [ddalcu/agent-orcha](https://hub.docker.com/r/ddalcu/agent-orcha)
- 2. **CLI Tool** — `npx agent-orcha` to initialize and run projects
- 3. **Backend API Server** — REST API for your existing frontends
- 4. **Library** — Import programmatically in TypeScript/JavaScript
+ 1. **Native Desktop App** — Download from [GitHub Releases](https://github.com/ddalcu/agent-orcha/releases) (macOS .app, Windows .exe, Linux binary) with system tray integration
+ 2. **CLI Tool** — `npx agent-orcha` to start the server (auto-scaffolds workspace on first run)
+ 3. **Docker Image** — Official image at [ddalcu/agent-orcha](https://hub.docker.com/r/ddalcu/agent-orcha)
+ 4. **Backend API Server** — REST API for your existing frontends

- **Requirements:** Node.js >= 24.0.0 (or Docker)
+ **Requirements:** Node.js >= 24.0.0 (for CLI/library) or Docker

  ## Quick Start

- ### CLI (Recommended for Local Inference)
+ ### Native App (Recommended)
+
+ Download the latest release for your platform from [GitHub Releases](https://github.com/ddalcu/agent-orcha/releases). Launch the app — it auto-scaffolds a workspace at `~/.orcha/workspace` with example agents and configurations. A system tray icon provides quick access to the Studio UI.
+
+ ### CLI

  Run directly on your machine to take advantage of bare metal GPU / Apple Silicon performance for local models (llama-cpp, MLX, Ollama, LM Studio).

  ```bash
- # Initialize a project
- npx agent-orcha init my-project
- cd my-project
+ # Start the server (auto-scaffolds ~/.orcha/workspace on first run)
+ npx agent-orcha

- # Start the server
- npx agent-orcha start
+ # Or point to a custom workspace
+ WORKSPACE=./my-project npx agent-orcha
  ```

- ### Docker (Recommended with External LLM Providers)
+ ### Docker

  Best when using cloud LLM providers (OpenAI, Anthropic, Gemini) or connecting to an LLM server running on the host. Docker does not have direct access to the host GPU, so local inference engines will not be available inside the container.
 
@@ -96,23 +110,6 @@ docker run -p 3000:3000 -e AUTH_PASSWORD=mypass -v ./my-project:/data ddalcu/age
 
  An empty workspace is automatically scaffolded with example agents, workflows, and configurations.

-
- ### Library
-
- ```typescript
- import { Orchestrator } from 'agent-orcha';
-
- const orchestrator = new Orchestrator({ workspaceRoot: './my-project' });
- await orchestrator.initialize();
-
- const result = await orchestrator.agents.invoke('researcher', {
-   topic: 'machine learning'
- });
-
- console.log(result.output);
- await orchestrator.close();
- ```
-
  ## Configuration

  ### LLM Configuration (llm.json)
@@ -129,7 +126,8 @@ All LLM and embedding configs are defined in `llm.json`. Agents and knowledge st
      "engine": "llama-cpp",
      "model": "Qwen3.5-4B-IQ4_NL",
      "reasoningBudget": 0,
-     "contextSize": 32768
+     "contextSize": 32768,
+     "p2p": true
    },
    "ollama": {
      "provider": "local",
@@ -168,6 +166,7 @@ All LLM and embedding configs are defined in `llm.json`. Agents and knowledge st
  - **`provider`** — `local`, `openai`, `anthropic`, or `gemini`
  - **`contextSize`** — Context window size (local engines)
  - **`reasoningBudget`** / **`thinkingBudget`** — Token budget for reasoning (0 to disable)
+ - **`p2p`** — Share this model on the P2P network (`true`)
  - **`engineUrls`** — Base URLs for engines running on remote hosts
  - **`${ENV_VAR}`** — Environment variable substitution (works in all config files)
 
@@ -175,8 +174,8 @@ All LLM and embedding configs are defined in `llm.json`. Agents and knowledge st
 
  ```bash
  PORT=3000                               # Server port
- HOST=0.0.0.0                            # Server host
- WORKSPACE=/path/to/project              # Base directory for config files
+ HOST=0.0.0.0                            # Server host (SEA default: 127.0.0.1)
+ WORKSPACE=/path/to/project              # Workspace directory (default: ~/.orcha/workspace)
  AUTH_PASSWORD=your-secret-password      # Password auth for all API routes and Studio
  CORS_ORIGIN=https://your-frontend.com   # Cross-origin policy (default: same-origin)
  LOG_LEVEL=debug                         # Pino log level (default: info)
@@ -227,30 +226,9 @@ memory: true # Enable persistent memory (optional)
  skills:                          # Skills to attach (optional)
    - skill-name
  publish: true                    # Standalone chat at /chat/researcher (optional)
+ p2p: true                        # Share on P2P network (optional)
  ```

- ### Agent Schema Reference
-
- | Field | Description |
- |-------|-------------|
- | `name` | Unique identifier (required) |
- | `description` | Human-readable description (required) |
- | `version` | Semantic version (default: "1.0.0") |
- | `llm` | LLM config reference — string or `{ name, temperature }` |
- | `prompt.system` | System message/instructions |
- | `prompt.inputVariables` | Variables to interpolate in the prompt |
- | `tools` | Tool references: `mcp:`, `knowledge:`, `function:`, `builtin:`, `sandbox:`, `workspace:` |
- | `output.format` | `text` or `structured` |
- | `output.schema` | JSON Schema (required when format is `structured`) |
- | `maxIterations` | Override default 200 iteration limit |
- | `sampleQuestions` | Example prompts shown in Studio UI |
- | `skills` | Skills to attach (list or `{ mode: all }`) |
- | `memory` | Enable persistent memory |
- | `integrations` | External integrations (collabnook, email) |
- | `triggers` | Cron or webhook triggers |
- | `publish` | Standalone chat page (`true` or `{ enabled, password }`) |
- | `p2p` | Share agent on P2P network (`true`) |
-
  ### Conversation Memory

  Pass a `sessionId` to maintain context across interactions:
@@ -312,7 +290,7 @@ output:
 
  ### ReAct

- Autonomous, prompt-driven workflows. The agent decides which tools and agents to call.
+ Autonomous, prompt-driven workflows with multi-turn conversation support. The agent decides which tools and agents to call. Thread state is preserved after completion for follow-up questions.

  ```yaml
  name: react-research
@@ -351,7 +329,9 @@ output:
 
  ## P2P Network

- Share agents and LLM engines across instances using a peer-to-peer swarm network (powered by [Hyperswarm](https://github.com/holepunchto/hyperswarm)). P2P is enabled by default set `P2P_ENABLED=false` to disable.
+ Share agents and LLM engines across machines using an encrypted peer-to-peer swarm network powered by [Hyperswarm](https://github.com/holepunchto/hyperswarm). No central server, no cloud dependency — peers discover each other directly using a shared network key. P2P is enabled by default; set `P2P_ENABLED=false` to disable.
+
+ All communication is encrypted end-to-end via Noise protocol handshakes. No API keys, secrets, or model weights are ever transmitted — only inference requests and responses flow over the wire. Per-peer rate limiting protects against abuse.

  The **P2P tab** in Studio provides a settings panel to enable/disable P2P, change the machine name, set a private network key, configure rate limiting, and view what you're sharing.
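+
+ Sharing is opt-in per resource. As a minimal sketch (the agent name is illustrative), set the flag in an agent's YAML definition; the same `p2p: true` entry in `llm.json` shares a local model:
+
+ ```yaml
+ # agents/researcher.agent.yaml (excerpt; illustrative agent)
+ name: researcher
+ publish: true   # optional standalone chat page at /chat/researcher
+ p2p: true       # advertise this agent to peers on your network key
+ ```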
 
@@ -465,7 +445,7 @@ Stores with entities get additional graph tools: `entity_lookup`, `traverse`, `g
  Custom JavaScript tools in `functions/`:

  ```javascript
- // functions/fibonacci.function.js
+ // functions/fibonacci.function.mjs
  export default {
    name: 'fibonacci',
    description: 'Returns the nth Fibonacci number',
@@ -512,7 +492,7 @@ Reference in agents with `mcp:fetch`.
  | `mcp:<server>` | External tools from MCP servers |
  | `knowledge:<store>` | Semantic search on knowledge stores |
  | `function:<name>` | Custom JavaScript functions |
- | `builtin:<name>` | Framework tools (`ask_user`, `memory_save`) |
+ | `builtin:<name>` | Framework tools (`ask_user`, `memory_save`, `canvas_write`, `canvas_append`) |
  | `sandbox:exec` | JavaScript execution in sandboxed VM |
  | `sandbox:shell` | Shell commands (non-root sandbox user) |
  | `sandbox:web_fetch` | URL fetching with SSRF protection |
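+
+ For example, a single agent can mix several reference types in its `tools` list (a sketch; the knowledge store name is a placeholder):
+
+ ```yaml
+ tools:
+   - mcp:fetch            # tools from the fetch MCP server
+   - knowledge:my-docs    # placeholder knowledge store name
+   - function:fibonacci   # custom function from functions/
+   - builtin:ask_user     # built-in framework tool
+   - sandbox:exec         # sandboxed JavaScript execution
+ ```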
@@ -565,11 +545,11 @@ Full API documentation is available at [agentorcha.com](https://agentorcha.com).
  ## Directory Structure

  ```
- my-project/
+ ~/.orcha/workspace/
  ├── agents/        # Agent definitions (YAML)
  ├── workflows/     # Workflow definitions (YAML)
  ├── knowledge/     # Knowledge store configs and data
- ├── functions/     # Custom function tools (JavaScript)
+ ├── functions/     # Custom function tools (JavaScript .mjs)
  ├── skills/        # Skill prompt files (Markdown)
  ├── llm.json       # LLM and embedding configurations
  ├── mcp.json       # MCP server configuration
@@ -604,8 +584,8 @@ If nothing is printed, all dependencies are satisfied.
  ## Development

  ```bash
- npm run dev                         # Dev server with auto-reload
- npm run dev:p2p                     # Dev server with P2P enabled
+ npm run dev                         # Dev server with auto-reload (uses ~/.orcha/workspace)
+ WORKSPACE=./templates npm run dev   # Dev with local templates
  npm run build                       # Build
  npm start                           # Run build
  npm run lint                        # ESLint