crewswarm 0.8.2-beta → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,315 +1,205 @@
1
1
  # crewswarm
2
2
 
3
- **Local-first AI orchestration for people who want real files, real tools, and real control.**
4
-
5
- crewswarm is an open-source AI workspace for software development. It combines multi-agent orchestration, project-aware memory, local tool execution, chat surfaces, and editor/MCP integrations into one stack you can run yourself.
3
+ **The only multi-engine AI coding platform.** Switch between Claude Code, Cursor, Gemini, Codex, and OpenCode mid-conversation. Parallel agents. Persistent sessions. No vendor lock-in.
6
4
 
5
+ [![npm version](https://img.shields.io/npm/v/crewswarm)](https://www.npmjs.com/package/crewswarm)
6
+ [![Tests](https://img.shields.io/badge/tests-647%20passed-brightgreen)]()
7
7
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
8
8
  [![Node.js](https://img.shields.io/badge/Node.js-20%2B-green)](https://nodejs.org)
9
9
  [![Website](https://img.shields.io/badge/website-crewswarm.ai-blue)](https://crewswarm.ai)
10
+ [![GitHub Sponsors](https://img.shields.io/github/sponsors/crewswarm?label=Sponsor)](https://github.com/sponsors/crewswarm)
10
11
 
11
12
  ![crewswarm Dashboard](website/dashboard-agents.webp)
12
13
 
13
14
  ---
14
15
 
15
- ## What crewswarm is
16
-
17
- Most AI dev tools are just a chat box bolted onto an editor.
18
-
19
- **crewswarm** is different:
20
-
21
- - **Local-first** — run it on your own machine
22
- - **Real execution** — agents write files, run commands, and operate on actual projects
23
- - **Multi-agent** — planner, coder, QA, fixer, security, GitHub, and more
24
- - **Persistent context** — memory and session history survive beyond one chat
25
- - **Multiple control surfaces** — dashboard, CLI, Telegram, SwiftBar, MCP/editor integrations
26
- - **Model-flexible** — use Groq, OpenAI, Anthropic, Gemini, Mistral, DeepSeek, xAI, Ollama, and more
27
-
28
- It is built for:
29
- - solo builders
30
- - AI-native dev teams
31
- - local-first users who do not want SaaS lock-in
32
- - people building real software with agent workflows, not toy demos
33
-
34
- ---
35
-
36
- ## Why it matters
37
-
38
- Most “agent” tools still fake the important part.
16
+ ## Why crewswarm?
39
17
 
40
- They can talk. They can plan. They can look clever.
18
+ **Rate limits are real.** Every $20/month AI coding plan has them. Claude, Cursor, Codex — you'll hit the wall mid-feature.
41
19
 
42
- Then they fall apart when it is time to:
43
- - write real files
44
- - work across multiple steps
45
- - keep project memory
46
- - coordinate multiple specialists
47
- - run locally without disappearing into someone else’s cloud
20
+ crewswarm is the only tool that lets you seamlessly switch to another engine and keep your session context. Or pick the best CLI for each job:
48
21
 
49
- crewswarm is built to handle actual execution.
22
+ | Engine | Best for | Key strength |
23
+ |---|---|---|
24
+ | **Claude Code** | Large refactors, frontend | Full workspace context, session resume |
25
+ | **Cursor CLI** | Architecture, complex reasoning | Parallel waves, isolated contexts |
26
+ | **Gemini CLI** | Research, SEO, free fallback | Free: 60 req/min, Google Search built in |
27
+ | **Codex CLI** | Backend, fast iteration | Full sandbox, no approval prompts |
28
+ | **OpenCode** | Provider flexibility | Any model (Groq/DeepSeek/Ollama) |
29
+ | **crew-cli** | Orchestration, quality workflows | 20+ agents, sandbox, 3x parallel speedup |
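The routing idea behind this table can be pictured as a simple keyword lookup. This is a hypothetical sketch, not crewswarm's actual dispatcher; the engine names come from the table above, while the heuristic and function name are invented:

```javascript
// Hypothetical sketch: route a task to the engine the table recommends.
// Engine names mirror the table; the keyword heuristic is invented.
const ENGINE_HINTS = [
  { engine: "claude-code", keywords: ["refactor", "frontend"] },
  { engine: "cursor", keywords: ["architecture", "design review"] },
  { engine: "gemini", keywords: ["research", "seo"] },
  { engine: "codex", keywords: ["backend", "endpoint"] },
  { engine: "opencode", keywords: ["ollama", "deepseek", "groq"] },
];

function pickEngine(task) {
  const text = task.toLowerCase();
  for (const { engine, keywords } of ENGINE_HINTS) {
    if (keywords.some((k) => text.includes(k))) return engine;
  }
  return "crew-cli"; // orchestration default, per the table
}

console.log(pickEngine("Refactor the frontend layout")); // claude-code
console.log(pickEngine("Ship a new backend endpoint")); // codex
```

In practice the real router (`crew-lead`) decides this; the sketch only shows why per-job engine choice is cheap to express.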
50
30
 
51
31
  ---
52
32
 
53
33
  ## Quickstart
54
34
 
55
- ### Requirements
35
+ ```bash
36
+ npm install -g crewswarm
37
+ crewswarm
38
+ ```
56
39
 
57
- - Node.js 20+
58
- - At least one LLM provider key for best results
59
- - Groq is the fastest free starting point: [console.groq.com](https://console.groq.com)
40
+ That's it. Dashboard opens at `localhost:4319`, Vibe IDE at `localhost:3333`.
60
41
 
61
- ### Install
42
+ ### Or install from source
62
43
 
63
44
  ```bash
64
45
  git clone https://github.com/crewswarm/crewswarm
65
46
  cd crewswarm
66
47
  bash install.sh
67
- ```
68
-
69
- **Fresh machine shortcut:**
70
- ```bash
71
- bash <(curl -fsSL https://raw.githubusercontent.com/crewswarm/crewswarm/main/install.sh)
72
- ```
73
-
74
- ### Start
75
- ```bash
76
48
  npm run doctor
77
- npm run restart-all
49
+ npm start
78
50
  ```
79
51
 
80
- Then open:
81
- `http://127.0.0.1:4319`
52
+ ### What does it cost?
82
53
 
83
- Add your provider key in the Providers tab, then go to Chat and start giving tasks.
54
+ **$0.** crewswarm is free and open source (MIT). You bring your own API keys or use CLI OAuth (log in once with Claude, Cursor, or Gemini; no keys needed). Free options: Gemini CLI (1,000 req/day), Groq (free tier), Ollama (fully local).
84
55
 
85
56
  ---
86
57
 
87
- ### First example
58
+ ## What makes it different
88
59
 
89
- In the dashboard chat, type:
90
- > Build a REST API for user authentication with JWT and tests
91
-
92
- **crewswarm will:**
93
- - route the request through `crew-lead`
94
- - break it down through `crew-pm`
95
- - dispatch implementation to the right coding agent
96
- - run QA / validation
97
- - optionally hand off Git tasks
98
-
99
- This is not simulated. It works against real files and your real local workspace.
100
-
101
- ---
102
-
103
- ### Headless / non-interactive install
104
-
105
- For Cursor, Codex, CI, or remote shell setups:
106
- ```bash
107
- CREWSWARM_SETUP_MCP=1 \
108
- CREWSWARM_START_NOW=1 \
109
- bash install.sh --non-interactive
110
- ```
111
-
112
- Optional flags:
113
- - `CREWSWARM_BUILD_CREWCHAT=1`
114
- - `CREWSWARM_SETUP_TELEGRAM=1`
115
- - `CREWSWARM_SETUP_WHATSAPP=1`
116
- - `CREWSWARM_ENABLE_AUTONOMOUS=1`
60
+ | Capability | crewswarm | Cursor | Windsurf | Devin | Copilot |
61
+ |---|---|---|---|---|---|
62
+ | Multi-engine (6 CLIs) | Yes | No | No | No | No |
63
+ | Native session resume | Yes | No | No | No | No |
64
+ | Parallel agent waves | Yes | No | No | Partial | No |
65
+ | Browser IDE + terminal | Vibe | Desktop | Desktop | Yes | Yes |
66
+ | 20+ specialist agents | Yes | 1 | 1 | 1 | 1 |
67
+ | PM Loop (autonomous roadmap) | Yes | No | No | Partial | No |
68
+ | Local-first / no cloud | Yes | Partial | No | No | No |
69
+ | Open source | Yes | No | No | No | No |
117
70
 
118
71
  ---
119
72
 
120
- ## Core capabilities
73
+ ## How it works
121
74
 
122
- **Real tool execution**
123
- Agents can write files, read files, create directories, run commands, and work inside a real project folder.
124
-
125
- **PM-led orchestration**
126
- Natural language requests are broken into structured tasks and routed to the right agent.
127
-
128
- **Shared memory**
129
- Project context persists through files like `brain.md`, `session-log.md`, `current-state.md`, and `orchestration-protocol.md`.
130
-
131
- **Fault tolerance**
132
- Retries, escalation to fixer agents, task leases, and dead-letter replay support are built in.
133
-
134
- **Command approval gate**
135
- Potentially risky shell commands require approval before execution.
136
-
137
- **Multi-engine support**
138
- Route work through different engines and environments, including Codex, Claude Code, Cursor, Gemini, OpenCode, and `crew-cli`.
139
-
140
- **Multiple control surfaces**
141
- Use crewswarm from the web dashboard, CLI, Telegram, macOS SwiftBar, or MCP/editor integrations.
142
-
143
- ---
144
-
145
- ## Why crewswarm vs other frameworks
146
-
147
- | Feature | crewswarm | LangChain / LangGraph | AutoGen | CrewAI |
148
- |---|---|---|---|---|
149
- | Real file writes | ✅ | ⚠️ | ⚠️ | ⚠️ |
150
- | PM-led planning | ✅ | ❌ | ❌ | ⚠️ |
151
- | Persistent memory | ✅ | ⚠️ | ❌ | ⚠️ |
152
- | Local-first | ✅ | ⚠️ | ⚠️ | ⚠️ |
153
- | Built-in dashboard | ✅ | ❌ | ❌ | ❌ |
154
- | Telegram / messaging bridges | ✅ | ❌ | ❌ | ❌ |
155
- | Easy local startup | ✅ | ⚠️ | ⚠️ | ⚠️ |
156
-
157
- ---
158
-
159
- ## Main components
160
-
161
- | Component | Purpose |
162
- |---|---|
163
- | `crew-lead` | conversational command layer and dispatcher |
164
- | `crew-pm` | planning, task breakdown, roadmap management |
165
- | `crew-coder` | implementation |
166
- | `crew-qa` | testing and validation |
167
- | `crew-fixer` | debugging and repair |
168
- | `crew-security` | security review |
169
- | `crew-github` | Git and repo actions |
170
- | `dashboard` | browser-based control surface |
171
- | `crew-cli` | command-line interface |
172
- | `crew-scribe` | memory and task summarization |
173
-
174
- ---
175
-
176
- ## Architecture
75
+ 1. **You write a requirement** — one sentence, one paragraph, or a full spec
76
+ 2. **crew-pm plans it** — breaks work into phases, assigns specialists
77
+ 3. **Agents execute in parallel** — backend, frontend, tests built simultaneously (3x faster)
78
+ 4. **Done. Files on disk.** — real files, real tests, real output
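The parallel step can be sketched as a "wave" of independent tasks dispatched together. This is a minimal illustration of the idea using `Promise.all`; the agent names come from this README, while the task payloads and function name are invented:

```javascript
// Minimal sketch of a parallel wave: independent specialist tasks start
// together and settle together, instead of running one after another.
async function runWave(tasks) {
  // Each task is { agent, run }; Promise.all preserves input order.
  return Promise.all(
    tasks.map(async ({ agent, run }) => ({ agent, output: await run() }))
  );
}

const wave = [
  { agent: "crew-coder-back", run: async () => "API routes written" },
  { agent: "crew-coder-front", run: async () => "UI components written" },
  { agent: "crew-qa", run: async () => "tests written" },
];

runWave(wave).then((results) => {
  for (const { agent, output } of results) console.log(`${agent}: ${output}`);
});
```

The claimed 3x speedup comes from exactly this shape: three specialists whose work does not depend on each other run concurrently.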
177
79
 
178
80
  ```
179
- Dashboard / Vibe / crew-cli / Telegram / SwiftBar / MCP
180
- |
181
- crew-lead
182
- |
183
- RT Bus
184
- |
185
- -----------------------------------
81
+ Dashboard / Vibe IDE / crew-cli / Telegram / MCP
82
+ |
83
+ crew-lead (router)
84
+ |
85
+ RT Bus
86
+ |
87
+ ─────────────────────────────────
186
88
  | | | | |
187
89
  crew-pm coder qa fixer github
188
- |
189
- local tools + models
190
- |
191
- real files, commands, memory
90
+ |
91
+ Code Engines: Claude · Cursor · Gemini · Codex · OpenCode · crew-cli
92
+ |
93
+ real files, commands, memory
192
94
  ```
193
95
 
194
96
  ---
195
97
 
196
- ## Commands
98
+ ## Surfaces
197
99
 
198
- - **Preflight:** `npm run doctor`
199
- - **Start the stack:** `npm run restart-all`
200
- - **Launch Dashboard only:** `npm run dashboard`
201
- - **Run CLI task:** `crew exec "Build a REST API with JWT auth and tests"`
202
- - **Static smoke** (no services): `npm run smoke:static`
203
- - **Live smoke** (stack must be running): `npm run smoke`
204
- - **Health check:** `npm run health`
205
- - **Release preflight:** `npm run release:check`
100
+ - **Dashboard** — web control plane at `localhost:4319` (agents, engines, models, build, sessions)
101
+ - **Vibe IDE** — browser-based editor + terminal + chat at `localhost:3333`
102
+ - **crew-cli** — terminal-first with 34+ built-in tools
103
+ - **Telegram** — chat with your crew from your phone
104
+ - **MCP server** — plug crewswarm into any MCP-compatible editor
206
105
 
207
106
  ---
208
107
 
209
- ## Configuration
210
-
211
- crewswarm stores config under your local `~/.crewswarm` directory.
212
-
213
- Typical settings include provider API keys, agent model assignments, RT auth token, output path, and command allowlist.
108
+ ## Per-agent model configuration
214
109
 
215
- The easiest way to manage config is through the dashboard.
110
+ Every agent gets its own model. Use cheap models for routing, expensive for coding:
216
111
 
217
- **Example agent config**
218
112
  ```json
219
113
  {
220
114
  "agents": [
221
- { "id": "crew-pm", "model": "perplexity/sonar-pro" },
115
+ { "id": "crew-lead", "model": "groq/llama-3.3-70b-versatile" },
116
+ { "id": "crew-pm", "model": "google/gemini-2.5-flash" },
222
117
  { "id": "crew-coder", "model": "anthropic/claude-sonnet-4-20250514" },
223
- { "id": "crew-qa", "model": "groq/llama-3.3-70b-versatile" }
118
+ { "id": "crew-qa", "model": "google/gemini-2.5-flash" },
119
+ { "id": "crew-fixer", "model": "openai/codex-mini-latest" }
224
120
  ]
225
121
  }
226
122
  ```
227
123
 
228
- ---
229
-
230
- ## Deployment
124
+ Or skip API keys entirely — use Claude Code, Cursor, or Gemini CLI with OAuth login.
231
125
 
232
- crewswarm is designed for local-first deployment but supports production containerization and cloud deployment. The system runs as a collection of Node.js services coordinated through an RT (realtime) bus, with optional web dashboard, CLI, and messaging bridge interfaces.
126
+ ---
233
127
 
234
- **Environment Configuration**
128
+ ## The crew
235
129
 
236
- Production deployments require proper environment variable management. Store sensitive credentials in `.env` files and exclude them from version control using `.gitignore`. At minimum, configure one LLM provider API key (`GROQ_API_KEY`, `OPENAI_API_KEY`, or `ANTHROPIC_API_KEY`). The RT bus requires `RT_PORT` (default 4319) and `RT_AUTH_TOKEN` for secure inter-service communication. Dashboard configuration uses `VITE_RT_URL` and `VITE_RT_AUTH_TOKEN` to connect to the RT bus. Optional variables include `CREWSWARM_OUTPUT_PATH` for task outputs and `CREWSWARM_WORKSPACE` for project directories.
130
+ | Agent | Role |
131
+ |---|---|
132
+ | `crew-lead` | Routes tasks, manages conversation |
133
+ | `crew-pm` | Plans, breaks down, prioritizes |
134
+ | `crew-coder` | Writes code (full-stack) |
135
+ | `crew-coder-back` | Backend specialist |
136
+ | `crew-coder-front` | Frontend specialist |
137
+ | `crew-qa` | Tests and validates |
138
+ | `crew-fixer` | Debugs and repairs |
139
+ | `crew-security` | Security review |
140
+ | `crew-github` | Git, PRs, branches |
141
+ | `crew-architect` | System design |
142
+ | `crew-orchestrator` | Wave dispatch |
143
+ | `crew-copywriter` | Docs and content |
144
+ | `crew-frontend` | UI/UX polish |
145
+ | `crew-main` | General coordination |
237
146
 
238
- Generate cryptographically secure tokens for production:
239
- ```bash
240
- node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
241
- ```
147
+ ---
242
148
 
243
- Additional configuration resides in `~/.crewswarm/config.json`, including provider credentials, agent-to-model assignments, command approval settings, and allowlists for shell command execution.
149
+ ## Built with crewswarm
244
150
 
245
- **Docker Support**
151
+ - **VS Code extension** — full extension from prompt to package in 10 minutes
152
+ - **crewswarm.ai** — the production website itself, 90% built by the swarm in 30 minutes
153
+ - **Session resume** — native resume across 6 CLI engines, built in one session
246
154
 
247
- All crewswarm components support Docker containerization. Use multi-stage Dockerfiles to separate build dependencies from runtime artifacts, reducing final image size. A proper `.dockerignore` should exclude `node_modules/`, `.git/`, `.env` files, build artifacts, logs, and IDE configuration. Configure services using environment variables rather than hardcoded values, enabling 12-factor app compliance.
155
+ ---
248
156
 
249
- For the RT bus and Node services, expose only necessary ports (typically 4319 for RT bus, 8080 for dashboard). Use `docker-compose.yml` for local development and multi-container orchestration, or Kubernetes manifests for production clusters. Health checks ensure container orchestrators can detect and restart failed services.
157
+ ## Commands
250
158
 
251
- Example minimal `.dockerignore`:
252
- ```
253
- node_modules/
254
- .git/
255
- .env
256
- *.log
257
- dist/
159
+ ```bash
160
+ crewswarm # Start all services
161
+ crewswarm pm-loop # Run autonomous PM loop
162
+ npm run doctor # Preflight check
163
+ npm run restart-all # Restart the stack
164
+ npm test # Run 647 tests
165
+ crew exec "Build X" # Send task via CLI
258
166
  ```
259
167
 
260
- **Production Infrastructure**
168
+ ---
261
169
 
262
- Production deployments benefit from process managers like PM2 or systemd for automatic restarts and clustering. Configure a reverse proxy (nginx, Caddy, Traefik) in front of services to handle TLS termination, load balancing, and request routing. Enable HTTPS using Let's Encrypt certificates for external-facing deployments.
170
+ ## Deployment
263
171
 
264
- Implement comprehensive monitoring using Prometheus for metrics collection and Grafana for visualization. Structure logs as JSON for easier aggregation in ELK stack (Elasticsearch, Logstash, Kibana) or similar log management platforms. Key metrics include task throughput, error rates, agent response times, and system resource utilization.
172
+ ```bash
173
+ # Docker
174
+ docker-compose up -d
265
175
 
266
- Security considerations for production:
267
- - Enable command approval gates to review shell commands before execution
268
- - Restrict RT bus port access via firewall rules
269
- - Rotate API keys and auth tokens regularly
270
- - Run services with minimal required permissions
271
- - Implement rate limiting on public endpoints
272
- - Regular dependency updates for security patches
176
+ # Or the one-liner
177
+ bash <(curl -fsSL https://raw.githubusercontent.com/crewswarm/crewswarm/main/install.sh)
178
+ ```
273
179
 
274
- For comprehensive deployment instructions including Docker examples, Kubernetes manifests, nginx configuration, monitoring setup, and troubleshooting, see **[DEPLOYMENT.md](DEPLOYMENT.md)**.
180
+ See [deploy.html](https://crewswarm.ai/deploy.html) for Docker, cloud VM, and production setup.
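If you prefer to write your own Compose file instead of using the repo's, a minimal sketch might look like the following. The service name, build context, and env handling are assumptions; only the ports (4319 for the dashboard, 3333 for Vibe IDE) come from this README:

```yaml
# Hypothetical single-service layout; service name, image build, and env vars
# are assumptions. Ports match this README: 4319 dashboard, 3333 Vibe IDE.
services:
  crewswarm:
    build: .
    ports:
      - "4319:4319"   # dashboard / RT bus
      - "3333:3333"   # Vibe IDE
    env_file: .env     # keep provider keys out of the image
    restart: unless-stopped
```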
275
181
 
276
182
  ---
277
183
 
278
- ## Project structure
184
+ ## Documentation
279
185
 
280
- ```
281
- crewswarm/
282
- ├── crew-lead.mjs
283
- ├── crew-cli/
284
- ├── gateway-bridge.mjs
285
- ├── telegram-bridge.mjs
286
- ├── pm-loop.mjs
287
- ├── unified-orchestrator.mjs
288
- ├── phased-orchestrator.mjs
289
- ├── continuous-build.mjs
290
- ├── scripts/
291
- ├── memory/
292
- ├── docs/
293
- ├── contrib/swiftbar/
294
- └── website/
295
- ```
186
+ - [Website](https://crewswarm.ai)
187
+ - [Vibe IDE](https://crewswarm.ai/vibe.html)
188
+ - [crew-cli](https://crewswarm.ai/cli.html)
189
+ - [Models & Providers](https://crewswarm.ai/models.html)
190
+ - [Security](https://crewswarm.ai/security.html)
191
+ - [API Docs](https://crewswarm.ai/api.html)
192
+ - [@@Protocol](https://crewswarm.ai/atat.html)
296
193
 
297
194
  ---
298
195
 
299
- ## Docs
196
+ ## Contributing
300
197
 
301
- - `docs/CANONICAL/README.md`
302
- - `docs/ARCHITECTURE.md`
303
- - `docs/ORCHESTRATOR-GUIDE.md`
304
- - `docs/SETUP-NEW-AGENTS.md`
305
- - `docs/MODEL-RECOMMENDATIONS.md`
306
- - `docs/TROUBLESHOOTING.md`
198
+ See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines. Issues and PRs welcome.
307
199
 
308
- ## Related repos
200
+ ## Security
309
201
 
310
- - `crew-cli` — standalone CLI for task execution and orchestration
311
-
312
- ---
202
+ See [SECURITY.md](SECURITY.md) for reporting vulnerabilities.
313
203
 
314
204
  ## License
315
205
 
@@ -70,3 +70,9 @@
70
70
  {"role":"assistant","content":"hello","ts":1774511218267,"source":"cli","agent":"crew-cli","metadata":{"engine":"crew-cli","exitCode":0,"agentName":"crew-cli","agentEmoji":"⚡"}}
71
71
  {"role":"user","content":"[Currently open file: /Users/jeffhobbs/CrewSwarm/apps/vibe/server.mjs]\n\nAdd session resume for all CLI engines with --resume flag","ts":1774512864121,"source":"cli","metadata":{"engine":"claude","agentName":"You","agentEmoji":"👤"}}
72
72
  {"role":"assistant","content":"Let me read the CLI engine command builder and the chat handler to understand the current session flow.Now let me check how the chat endpoint handles sessions:Now I understand the architecture. Each CLI engine gets spawned fresh per message. To add session resume, I need to:\n\n1. Track conversation IDs per engine (Claude uses `--resume`, Codex uses `--conversation-id`, Cursor uses session, etc.)\n2. Store the session/conversation ID when a CLI returns one\n3. Pass `--resume` on subsequent calls\n\nLet me check which CLIs support resume:Now let me check the output parsing to see if session IDs are extracted anywhere:Good — line 692 already detects `session id:` in output. Let me see that context:So `session id:` is currently being **filtered out as noise**. Now I have the full picture. Here's my plan:\n\n1. Add a `cliSessions` map to track engine → session/conversation IDs per project\n2. Modify `getCliCommand()` to accept and pass resume flags\n3. Extract session IDs from CLI output instead of filtering them\n4. Pass resume IDs on subsequent calls\n\nLet me implement this:**Step 1: Add the session store****Step 2: Update `getCliCommand()` to accept a `resumeSession` param and pass resume flags per engine****Step 3: Update `runCli()` to look up resume sessions and extract session IDs from output**Now I need to extract session IDs from the CLI output. Let me update the `close` handler to parse session IDs, and also stop filtering `session id:` from the output:Hmm, that IIFE inside a filter is fragile. Let me do it cleaner — extract the session ID in the line handler instead. Let me revert that and do it properly:Instead, I'll extract session IDs in the `runCli` output handler and save them on close. Let me find the right spot:**Step 4: Update the backward-compat `getCodexCommand` and `runCodexCli` aliases**Good. 
Now let me also add an API endpoint to check/clear resume sessions, and make sure the chat handler passes `resume` through:Let me add the resume sessions endpoint after the version endpoint:Now let me verify the full flow compiles correctly by checking the `runCli` function one more time:Good. Let me also verify the close handler stores the session properly:Let me verify there are no syntax errors by doing a quick parse check:Clean parse. Now let me also update the `handleCliChatViaCrewLead` function to pass the resume session through to crew-lead:","ts":1774513104176,"source":"cli","agent":"claude","metadata":{"engine":"claude","exitCode":1,"agentName":"claude","agentEmoji":"⚡"}}
73
+ {"role":"user","content":"what quote would you use for the entire repo? instead of this one from gemini-cli? Early access\nWhat builders are saying\n✦ Beta builder\n\"One sentence in. Working API out. No standups, no tickets, no back-and-forth.\"\nVlad\nFullstack Engineer · Early access\n✦ Beta builder\n\"I start the PM Loop, go eat dinner, come back to a finished feature. It's genuinely autonomous.\"\nDev\nStartup Founder · Early access\n✦ Beta builder\n\"Shared memory is the killer feature. I can stop mid-project and pick it up days later — agents just know where we left off.\"\nAttila\nIndie Developer · Early access","ts":1774516824165,"source":"cli","metadata":{"engine":"gemini","agentName":"You","agentEmoji":"👤"}}
74
+ {"role":"assistant","content":"I will read the `README.md` and `ARCHITECTURE.md` files to understand the core purpose and value proposition of the \"vibe\" project.Based on my research into the project's purpose and \"vibe,\" here is a quote that encapsulates the essence of the repository in the style of the beta builder testimonials:\n\n✦ **Beta builder**\n\"Vibe turns my local machine into a high-fidelity command center. I’m no longer just writing code; I’m orchestrating a crew in real-time with zero latency and a gorgeous UI that makes shipping feel effortless.\"\n**Jeff**\nLead Architect · Early access\n\n***\n\n### Why this fits the \"Vibe\" repo:\n* **Local-First Strength:** It highlights the \"real local project persistence\" and PTY-backed terminal mentioned in the `README.md`.\n* **Agent Integration:** It references the \"orchestration\" of the `crewswarm` specialist agents (the \"crew\").\n* **Aesthetic Focus:** It acknowledges the polished, modern CSS and \"dark mode\" design found in `index.html`.\n* **Flow & Speed:** It captures the \"zero latency\" feeling of a local IDE that integrates AI directly into the Monaco editor via `cli:codex`.","ts":1774516861356,"source":"cli","agent":"gemini","metadata":{"engine":"gemini","exitCode":0,"agentName":"gemini","agentEmoji":"⚡"}}
75
+ {"role":"user","content":"no i want a quote from Gemini - simple 1 liner","ts":1774516886109,"source":"cli","metadata":{"engine":"gemini","agentName":"You","agentEmoji":"👤"}}
76
+ {"role":"user","content":"Vibe turns my local machine into a high-fidelity command center. I’m no longer just writing code; I’m orchestrating a crew in real-time with zero latency and a gorgeous UI that makes shipping feel effortless.\" Gemini","ts":1774516924358,"source":"cli","metadata":{"engine":"gemini","agentName":"You","agentEmoji":"👤"}}
77
+ {"role":"assistant","content":"✦ **Gemini**\n\"Local files, global crew: the high-fidelity surface for shipping with a multi-agent pulse.\"","ts":1774516954600,"source":"cli","agent":"gemini","metadata":{"engine":"gemini","exitCode":0,"agentName":"gemini","agentEmoji":"⚡"}}
78
+ {"role":"assistant","content":"\"The local-first command center for orchestrating your AI crew with zero latency and high-fidelity flow.\" — Gemini","ts":1774516964996,"source":"cli","agent":"gemini","metadata":{"engine":"gemini","exitCode":0,"agentName":"gemini","agentEmoji":"⚡"}}
@@ -63,17 +63,18 @@ function esc(s) {
63
63
 
64
64
  const frames = [
65
65
  {
66
- title: "Install — 60 seconds",
66
+ title: "Install — real output from Docker",
67
67
  lines: [
68
68
  "$ npm install -g crewswarm",
69
69
  "",
70
- "added 47 packages in 8s",
70
+ "added 59 packages in 9s",
71
71
  "",
72
- " crewswarm@0.8.1 installed globally",
72
+ "$ crewswarm --version",
73
+ "0.8.2-beta",
73
74
  "",
74
75
  "$ crewswarm",
75
76
  "",
76
- "⚡ crewswarm v0.8.1",
77
+ "⚡ crewswarm v0.8.2-beta",
77
78
  "→ Dashboard: http://localhost:4319",
78
79
  "→ Vibe IDE: http://localhost:3333",
79
80
  "→ crew-lead: http://localhost:5010",
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "crewswarm",
3
- "version": "0.8.2-beta",
3
+ "version": "0.9.0",
4
4
  "description": "Local-first multi-agent orchestration platform — coordinate AI coding agents, LLMs, and tools from a single dashboard",
5
5
  "type": "module",
6
6
  "license": "MIT",