adelie-ai 0.2.0 → 0.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,254 +1,312 @@
1
1
  <p align="center">
2
- <img src="docs/adelie_logo.jpeg" alt="Adelie Logo" width="200" />
2
+ <img src="docs/adelie_logo.jpeg" alt="Adelie" width="180" />
3
3
  </p>
4
4
 
5
5
  <h1 align="center">Adelie</h1>
6
6
 
7
7
  <p align="center">
8
- <strong>Self-Communicating Autonomous AI Loop System</strong><br/>
9
- An AI orchestrator that plans, codes, reviews, tests, deploys, and evolves — autonomously.
8
+ <strong>Autonomous AI Orchestration System</strong><br/>
9
+ <sub>10 specialized agents · 6-phase lifecycle · zero human intervention</sub>
10
10
  </p>
11
11
 
12
12
  <p align="center">
13
- <img src="https://img.shields.io/badge/python-3.10+-blue?style=for-the-badge&logo=python&logoColor=white" alt="Python 3.10+" />
14
- <img src="https://img.shields.io/badge/LLM-Gemini%20%7C%20Ollama-orange?style=for-the-badge" alt="LLM Support" />
15
- <img src="https://img.shields.io/badge/license-MIT-green?style=for-the-badge" alt="MIT License" />
16
- <img src="https://img.shields.io/badge/tests-183%20passed-brightgreen?style=for-the-badge" alt="Tests" />
13
+ <a href="https://www.npmjs.com/package/adelie-ai"><img src="https://img.shields.io/npm/v/adelie-ai?style=flat-square&logo=npm&color=CB3837" alt="npm version" /></a>
14
+ <img src="https://img.shields.io/badge/python-3.10+-3776AB?style=flat-square&logo=python&logoColor=white" alt="Python" />
15
+ <img src="https://img.shields.io/badge/LLM-Gemini%20│%20Ollama-FF6F00?style=flat-square" alt="LLM" />
16
+ <img src="https://img.shields.io/badge/tests-197%20passing-2EA043?style=flat-square" alt="Tests" />
17
+ <a href="./LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue?style=flat-square" alt="License" /></a>
17
18
  </p>
18
19
 
19
20
  <p align="center">
20
- <a href="#-quick-start">Quick Start</a> ·
21
- <a href="#-architecture">Architecture</a> ·
22
- <a href="#-features">Features</a> ·
23
- <a href="#-cli-reference">CLI</a> ·
24
- <a href="#-testing">Testing</a> ·
25
- <a href="#-license">License</a>
21
+ <a href="#quick-start">Quick Start</a>&ensp;·&ensp;
22
+ <a href="#how-it-works">How It Works</a>&ensp;·&ensp;
23
+ <a href="#architecture">Architecture</a>&ensp;·&ensp;
24
+ <a href="#cli">CLI</a>&ensp;·&ensp;
25
+ <a href="#dashboard">Dashboard</a>&ensp;·&ensp;
26
+ <a href="#configuration">Configuration</a>
26
27
  </p>
27
28
 
28
29
  ---
29
30
 
30
- ## 🤔 What is Adelie?
31
+ ## Overview
31
32
 
32
- Adelie is an **autonomous AI loop system** that orchestrates 10 specialized AI agents to build, maintain, and evolve software projects — with minimal human intervention.
33
-
34
- Think of it as a full AI development team running 24/7:
33
+ Adelie is an autonomous AI orchestrator that plans, codes, reviews, tests, deploys, and evolves software projects through a coordinated multi-agent loop. It ships as a single CLI (`npm install -g adelie-ai`) and requires only an LLM provider: no cloud backend, no account.
35
34
 
36
35
  ```
37
- 🧠 Expert AI → Strategic decisions & task dispatch
38
- ✍️ Writer AI → Knowledge Base documentation
39
- 💻 Coder AI → Code generation (3-layer architecture)
40
- 🔍 Reviewer AI → Code quality review & feedback
41
- 🧪 Tester AI → Test execution & failure reporting
42
- 🚀 Runner AI → Build & deployment
43
- 📡 Monitor AI → System health monitoring
44
- 📊 Analyst AI → Project insights & analysis
45
- 🔎 Research AI → Web search for external info
46
- 🔬 Scanner AI → Codebase scanning on first run
36
+ (o_ Adelie v0.2.1
37
+ //\ ollama · deepseek-v3.1:671b-cloud
38
+ V_/_ Phase: mid_2
47
39
  ```
48
40
 
49
- All agents communicate through a **file-based Knowledge Base** and are coordinated by the **Orchestrator** — an endless loop with a built-in state machine.
41
+ **What Adelie does in every cycle:**
50
42
 
51
- ---
43
+ 1. **Writer** curates the Knowledge Base
44
+ 2. **Expert** makes strategic decisions — what to build next, what to fix
45
+ 3. **Research** gathers external context from the web
46
+ 4. **Coder** generates code in 3 dependency layers
47
+ 5. **Reviewer** scores code quality; rejects until standards are met
48
+ 6. **Checkpoint** snapshots the project before promotion
49
+ 7. **Tester** runs tests and reports failures
50
+ 8. **Runner** builds, installs, deploys
51
+ 9. **Monitor** watches system health
52
+ 10. **Phase gates** decide when to advance the project lifecycle
52
53
 
53
- ## Features
54
+ The loop runs continuously at a configurable interval (default 30 s), or once with `adelie run once`.
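The cycle can be sketched as a plain scheduler loop. The step names mirror the list above; the dispatch mechanics here are an illustrative assumption, not Adelie's actual implementation:

```python
import time

# Ordered pipeline mirroring the cycle described above (illustrative only).
CYCLE = ["writer", "expert", "research", "coder", "reviewer",
         "checkpoint", "tester", "runner", "monitor", "phase_gate"]

def run_cycle(dispatch):
    """Run each step once; dispatch maps a step name to a callable."""
    results = {}
    for step in CYCLE:
        results[step] = dispatch[step]()
    return results

def run_loop(dispatch, interval=30, cycles=1):
    """Continuous mode: sleep `interval` seconds between cycles."""
    history = []
    for _ in range(cycles):
        history.append(run_cycle(dispatch))
        if cycles > 1:
            time.sleep(interval)
    return history

# Single cycle ("adelie run once") with no-op stand-in agents:
history = run_loop({s: (lambda s=s: f"{s}: ok") for s in CYCLE}, cycles=1)
```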
54
55
 
55
- ### 🎯 Core
56
+ ---
56
57
 
57
- - **10 Specialized Agents** — Each with a focused role, scheduled independently
58
- - **6-Phase Project Lifecycle** — `INITIAL → MID → MID_1 → MID_2 → LATE → EVOLVE`
59
- - **Layered Code Generation** — Layer 0 (features) → Layer 1 (connectors) → Layer 2 (infra)
60
- - **Knowledge Base** — Tag-based & semantic retrieval across 6 categories
61
- - **Multi-LLM** — Gemini + Ollama with automatic fallback chains
58
+ ## Quick Start
62
59
 
63
- ### 🛡️ Safety
60
+ ### Prerequisites
64
61
 
65
- - **Loop Detector** 5 stuck-pattern types with escalating interventions
66
- - **Phase Gates** — Quality-metric thresholds (KB count, test rate, review scores)
67
- - **Context Budget** Per-agent token limits prevent unbounded growth
68
- - **Process Supervisor** Timeout enforcement, orphan cleanup, concurrency limits
62
+ | Requirement | Version |
63
+ |:--|:--|
64
+ | Python | 3.10+ |
65
+ | Node.js | 16+ |
66
+ | LLM | Gemini API key **or** Ollama instance |
69
67
 
70
- ### 🔌 Extensibility (New in Phase 2-3)
68
+ ### Install
71
69
 
72
- | Feature | Description |
73
- |---------|-------------|
74
- | 💾 **Checkpoint System** | Auto-snapshot before file promotion, instant rollback |
75
- | 🐳 **Docker Sandboxing** | Configurable workspace access, network isolation, security blocklist |
76
- | 🌐 **REST Gateway** | HTTP API: `/api/status`, `/api/tools`, `/api/control` |
77
- | 🧩 **Skill Registry** | Install/update/uninstall skills from Git or local dirs |
78
- | 📡 **Multichannel** | `ChannelProvider` ABC — Discord, Slack, and custom channels |
79
- | 🤝 **A2A Protocol** | Agent-to-Agent HTTP API for external agent integration |
80
- | 🔧 **MCP Support** | Model Context Protocol for external tool ecosystems |
70
+ #### npm (recommended)
81
71
 
82
- ---
72
+ ```bash
73
+ npm install -g adelie-ai
74
+ ```
83
75
 
84
- ## 🏗️ Architecture
76
+ #### curl (macOS / Linux)
85
77
 
86
- ```
87
- ┌─────────────────────────────────────────────────────────────────┐
88
- │ ORCHESTRATOR │
89
- │ │
90
- │ ┌──────────┐ ┌──────────┐ ┌──────────────┐ │
91
- │ │ Writer AI│───>│Expert AI │───>│ Coder Manager│ │
92
- │ └──────────┘ └──────────┘ └──────┬───────┘ │
93
- │ │ │ │ │
94
- │ v │ ┌──────┴──────┐ │
95
- │ ┌──────────┐ │ │ Layer 0-2 │ │
96
- │ │Knowledge │ │ │ Coders │ │
97
- │ │ Base │<────────┘ └──────┬──────┘ │
98
- │ └──────────┘ │ │
99
- │ v │
100
- │ ┌──────────┐ ┌──────────┐ ┌──────────────────┐ │
101
- │ │ Reviewer │ │ Tester │ │ Runner / Monitor │ │
102
- │ │ AI │ │ AI │ │ AI │ │
103
- │ └──────────┘ └──────────┘ └──────────────────┘ │
104
- │ │
105
- │ ┌─────────────────────────────────────────────────────────┐ │
106
- │ │ Loop Detector │ Scheduler │ Process Supervisor │ │
107
- │ ├─────────────────────────────────────────────────────────┤ │
108
- │ │ Checkpoint │ Sandbox │ Gateway │ A2A │ Channels │ │
109
- │ └─────────────────────────────────────────────────────────┘ │
110
- └─────────────────────────────────────────────────────────────────┘
78
+ ```bash
79
+ curl -fsSL https://raw.githubusercontent.com/Ade1ie/adelie/main/install.sh | bash
111
80
  ```
112
81
 
113
- ---
82
+ #### PowerShell (Windows)
114
83
 
115
- ## 🔄 Project Lifecycle
84
+ ```powershell
85
+ irm https://raw.githubusercontent.com/Ade1ie/adelie/main/install.ps1 | iex
86
+ ```
116
87
 
117
- Adelie evolves your project through **6 phases**, each gated by quality metrics:
88
+ #### Homebrew (macOS / Linux)
118
89
 
119
- ```
120
- INITIAL ──> MID ──> MID_1 ──> MID_2 ──> LATE ──> EVOLVE
121
- Planning Code Test Optimize Maintain Autonomous
90
+ ```bash
91
+ brew tap Ade1ie/tap
92
+ brew install adelie
122
93
  ```
123
94
 
124
- | Phase | Focus | Coder Layers |
125
- |-------|-------|-------------|
126
- | Initial | Documentation, architecture, roadmap | — |
127
- | Mid | Implementation, feature coding | Layer 0 |
128
- | Mid-1 | Integration, testing | Layer 0-1 |
129
- | Mid-2 | Stabilization, optimization | Layer 0-2 |
130
- | Late | Maintenance, new features | All |
131
- | Evolve | Self-improvement | All |
95
+ #### From source
132
96
 
133
- ---
97
+ ```bash
98
+ git clone https://github.com/Ade1ie/adelie.git
99
+ cd adelie
100
+ pip install -r requirements.txt
101
+ python adelie/cli.py --version
102
+ ```
134
103
 
135
- ## 🚀 Quick Start
104
+ ### Update
136
105
 
137
- ### Prerequisites
138
-
139
- - **Python 3.10+**
140
- - **Node.js 16+** (for CLI wrapper)
141
- - **Gemini API key** or **Ollama** running locally
106
+ ```bash
107
+ # npm
108
+ npm install -g adelie-ai@latest
142
109
 
143
- ### Installation
110
+ # curl / PowerShell — re-run the install command above
144
111
 
145
- ```bash
146
- # Install via npm (recommended)
147
- npm install -g adelie-ai
112
+ # Homebrew
113
+ brew upgrade adelie
148
114
 
149
- # Or install from source
150
- git clone https://github.com/kimhyunbin/Adelie.git
151
- cd Adelie
152
- pip install -r requirements.txt
153
- npm install -g .
115
+ # Check current version
116
+ adelie --version
154
117
  ```
155
118
 
156
- ### Setup
119
+ ### Configure
157
120
 
158
121
  ```bash
159
- # Initialize workspace
160
- cd /path/to/your/project
122
+ cd your-project/
161
123
  adelie init
162
124
 
163
- # Configure LLM provider
164
- adelie config --provider gemini --api-key YOUR_GEMINI_API_KEY
125
+ # Gemini
126
+ adelie config --provider gemini --api-key YOUR_KEY
165
127
 
166
- # Or use Ollama (local, free)
128
+ # or Ollama (local, free)
167
129
  adelie config --provider ollama --model gemma3:12b
168
130
  ```
169
131
 
170
132
  ### Run
171
133
 
172
134
  ```bash
173
- # Start the autonomous AI loop
135
+ # Continuous autonomous loop
174
136
  adelie run --goal "Build a REST API for task management"
175
137
 
176
138
  # Single cycle
177
139
  adelie run once --goal "Analyze and document the codebase"
178
140
  ```
179
141
 
142
+ The real-time **dashboard** opens automatically at **http://localhost:5042**.
143
+
144
+ ---
145
+
146
+ ## How It Works
147
+
148
+ ### Agents
149
+
150
+ | Agent | Role | When |
151
+ |:--|:--|:--|
152
+ | **Writer** | Curates Knowledge Base — skills, logic, dependencies, exports | Every cycle |
153
+ | **Expert** | Strategic JSON decisions — action + coder tasks + phase vote | Every cycle |
154
+ | **Scanner** | Scans existing codebase on first run | Once |
155
+ | **Coder** | Multi-layer code generation with dependency ordering | On demand |
156
+ | **Reviewer** | Quality review (1–10 score) with retry-on-reject | After coding |
157
+ | **Tester** | Executes tests, collects failures, feeds back to coder | After review |
158
+ | **Runner** | Installs deps, builds, deploys (whitelisted commands) | Mid phase onward |
159
+ | **Monitor** | System health, resource checks, service restarts | Periodic |
160
+ | **Analyst** | Trend analysis, insights, KB synthesis | Periodic |
161
+ | **Research** | Web search → KB for external knowledge | On demand |
162
+
163
+ ### 6-Phase Lifecycle
164
+
165
+ <p align="center">
166
+ <img src="docs/lifecycle.png" alt="Adelie 6-Phase Lifecycle" width="800" />
167
+ </p>
168
+
169
+ Each phase transition is gated by quality metrics — KB file count, test pass rate, review scores, stability indicators. The Expert AI votes on phase transitions; the system enforces the gates.
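A gate check of this shape can be sketched as follows; the metric names and thresholds are invented for illustration and do not reflect Adelie's real gate values:

```python
# Hypothetical phase-gate thresholds: every metric must meet its minimum
# before the transition is allowed. Values are illustrative only.
GATES = {
    "mid": {"kb_files": 10, "test_pass_rate": 0.0, "avg_review_score": 0.0},
    "mid_1": {"kb_files": 20, "test_pass_rate": 0.6, "avg_review_score": 6.0},
}

def gate_passes(target_phase, metrics):
    """True if every metric meets the target phase's minimum threshold."""
    thresholds = GATES[target_phase]
    return all(metrics.get(k, 0) >= v for k, v in thresholds.items())

ready = gate_passes("mid_1", {"kb_files": 25, "test_pass_rate": 0.8,
                              "avg_review_score": 7.2})
```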
170
+
171
+ ### Layered Code Generation
172
+
173
+ The Coder Manager dispatches tasks across three dependency layers:
174
+
175
+ - **Layer 0** — Features and pages (parallel execution)
176
+ - **Layer 1** — Connectors and integrations (depends on Layer 0)
177
+ - **Layer 2** — Infrastructure and configuration (depends on Layer 1)
178
+
179
+ Failed layers trigger targeted retries with reviewer feedback.
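The layer ordering and retry behavior can be sketched like this; the task names, retry budget, and the stand-in `generate` function are all hypothetical:

```python
# Layers run in dependency order; tasks within a layer may retry with
# reviewer feedback. Everything below is an illustrative sketch.
LAYERS = {0: ["feature_a", "feature_b"],   # features/pages (parallel-safe)
          1: ["connector"],                # depends on layer 0
          2: ["infra"]}                    # depends on layer 1

def generate(task, attempt):
    # Stand-in for the Coder agent; "connector" succeeds on the retry.
    return attempt >= 2 or not task.startswith("connector")

def run_layers(max_retries=2):
    completed = []
    for layer in sorted(LAYERS):
        for task in LAYERS[layer]:
            for attempt in range(1, max_retries + 1):
                if generate(task, attempt):
                    completed.append((layer, task))
                    break
    return completed

order = run_layers()
```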
180
+
181
+ ---
182
+
183
+ ## Architecture
184
+
185
+ <p align="center">
186
+ <img src="docs/architecture.png" alt="Adelie Architecture" width="800" />
187
+ </p>
188
+
189
+ ---
190
+
191
+ ## Dashboard
192
+
193
+ Adelie serves a real-time monitoring UI at **`http://localhost:5042`** (auto-starts with `adelie run`).
194
+
195
+ - **Agent grid** — live status of all 10 agents (idle / running / done / error)
196
+ - **Log stream** — real-time SSE-powered log feed with category filtering
197
+ - **Cycle metrics** — tokens, LLM calls, files written, test results, review scores
198
+ - **Phase timeline** — visual progress through the 6-phase lifecycle
199
+ - **Cycle history chart** — last 30 cycles at a glance
200
+
201
+ Built with zero external dependencies — Python `http.server` + SSE + embedded HTML/JS.
202
+
203
+ | Setting | Default | Env var |
204
+ |:--|:--|:--|
205
+ | Enable | `true` | `DASHBOARD_ENABLED` |
206
+ | Port | `5042` | `DASHBOARD_PORT` |
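The log stream relies on the standard `text/event-stream` wire format (`event:`/`data:` lines, blank-line terminator). A minimal frame serializer, with an illustrative payload, might look like:

```python
import json

def sse_event(data, event=None):
    """Serialize one Server-Sent Event frame from a JSON-able payload."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    # Each line of the payload becomes its own "data:" line per the SSE spec.
    for chunk in json.dumps(data).splitlines():
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"   # blank line terminates the frame

frame = sse_event({"agent": "coder", "status": "running"}, event="log")
```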
207
+
180
208
  ---
181
209
 
182
- ## 💻 CLI Reference
210
+ ## CLI
183
211
 
212
+ ```bash
213
+ adelie --version # Show version
214
+ adelie help # Full command reference
184
215
  ```
185
- Workspace
186
- adelie init [dir] Initialize workspace
187
- adelie ws List all workspaces
188
- adelie ws remove <N> Remove workspace #N
189
-
190
- Run
191
- adelie run --goal "..." Start AI loop
192
- adelie run ws <N> Resume loop in workspace #N
193
- adelie run once --goal "..." Run one cycle
194
-
195
- Configuration
196
- adelie config Show current config
197
- adelie config --provider ... Switch LLM (gemini/ollama)
198
- adelie config --model ... Set model name
199
- adelie config --interval N Loop interval (seconds)
200
- adelie config --api-key KEY Set Gemini API key
201
- adelie config --lang ko|en Display language
202
-
203
- Monitoring
204
- adelie status System health
205
- adelie inform Project status report
206
- adelie phase Show phase
207
- adelie phase set <phase> Set phase manually
208
- adelie metrics Cycle metrics
209
-
210
- Knowledge Base
211
- adelie kb KB file counts
212
- adelie kb --clear-errors Clear error files
213
- adelie kb --reset Reset KB (destructive)
214
-
215
- Project
216
- adelie goal Show project goal
217
- adelie goal set "..." Set project goal
218
- adelie feedback "message" Send feedback to AI loop
219
- adelie research "topic" Web search → KB
220
- adelie git Git status & recent commits
221
-
222
- Ollama
223
- adelie ollama list List models
224
- adelie ollama pull <model> Download model
225
- adelie ollama run [model] Interactive chat
226
-
227
- Telegram
228
- adelie telegram setup Setup bot token
229
- adelie telegram start Start Telegram bot
216
+
217
+ ### Workspace
218
+
219
+ ```bash
220
+ adelie init [dir] # Initialize .adelie workspace
221
+ adelie ws # List all workspaces
222
+ adelie ws remove <N> # Remove workspace
230
223
  ```
231
224
 
232
- ---
225
+ ### Execution
226
+
227
+ ```bash
228
+ adelie run --goal "…" # Start continuous loop
229
+ adelie run once --goal "…" # Single cycle
230
+ adelie run ws <N> # Resume workspace #N
231
+ ```
232
+
233
+ ### Configuration
233
234
 
234
- ## 🔧 Configuration
235
+ ```bash
236
+ adelie config # Show current config
237
+ adelie config --provider ollama # Switch LLM provider
238
+ adelie config --model gpt-4o # Set model
239
+ adelie config --api-key KEY # Set Gemini API key
240
+ adelie config --ollama-url URL # Set Ollama server URL
241
+ ```
235
242
 
236
- ### Environment Variables
243
+ ### Settings
244
+
245
+ ```bash
246
+ adelie settings # View all settings
247
+ adelie settings --global # View global settings
248
+ adelie settings set <key> <val> # Change workspace setting
249
+ adelie settings set --global <key> <val> # Change global setting
250
+ adelie settings reset <key> # Reset to default
251
+ ```
237
252
 
238
- All settings stored in `.adelie/.env`:
253
+ Available settings: `dashboard`, `dashboard.port`, `loop.interval`, `plan.mode`, `sandbox`, `mcp`, `browser.search`, `browser.max_pages`, `fallback.models`, `fallback.cooldown`, `language`
254
+
255
+ ### Monitoring
256
+
257
+ ```bash
258
+ adelie status # System health & provider status
259
+ adelie inform # AI-generated project report
260
+ adelie phase # Show current phase
261
+ adelie phase set <phase> # Set phase manually
262
+ adelie metrics # Cycle metrics & history
263
+ ```
264
+
265
+ ### Knowledge Base & Project
266
+
267
+ ```bash
268
+ adelie kb # KB file counts by category
269
+ adelie kb --clear-errors # Clear error files
270
+ adelie kb --reset # Reset entire KB
271
+ adelie goal # Show project goal
272
+ adelie goal set "…" # Set project goal
273
+ adelie feedback "message" # Inject feedback into AI loop
274
+ adelie research "topic" # Web search → KB
275
+ adelie spec load <file> # Load spec (MD/PDF/DOCX) into KB
276
+ adelie git # Git status & recent commits
277
+ ```
278
+
279
+ ### Integrations
280
+
281
+ ```bash
282
+ adelie telegram setup # Configure Telegram bot
283
+ adelie telegram start # Start Telegram bot
284
+ adelie ollama list # List Ollama models
285
+ adelie ollama pull <model> # Download model
286
+ adelie ollama run [model] # Interactive chat
287
+ ```
288
+
289
+ ---
290
+
291
+ ## Configuration
292
+
293
+ ### Environment (`.adelie/.env`)
239
294
 
240
295
  | Variable | Default | Description |
241
- |----------|---------|-------------|
296
+ |:--|:--|:--|
242
297
  | `LLM_PROVIDER` | `gemini` | `gemini` or `ollama` |
243
- | `GEMINI_API_KEY` | — | Required for Gemini |
244
- | `GEMINI_MODEL` | `gemini-2.0-flash` | Gemini model |
245
- | `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama URL |
246
- | `OLLAMA_MODEL` | `llama3.2` | Ollama model |
247
- | `FALLBACK_MODELS` | — | Fallback chain (e.g. `gemini:flash,ollama:llama3.2`) |
248
- | `LOOP_INTERVAL_SECONDS` | `30` | Loop interval |
249
- | `ADELIE_LANGUAGE` | `ko` | Display language |
250
-
251
- ### Docker Sandbox Config
298
+ | `GEMINI_API_KEY` | — | Google Gemini API key |
299
+ | `GEMINI_MODEL` | `gemini-2.0-flash` | Gemini model name |
300
+ | `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama server URL |
301
+ | `OLLAMA_MODEL` | `llama3.2` | Ollama model name |
302
+ | `FALLBACK_MODELS` | — | Fallback chain (`gemini:flash,ollama:llama3.2`) |
303
+ | `LOOP_INTERVAL_SECONDS` | `30` | Cycle interval in seconds |
304
+ | `DASHBOARD_ENABLED` | `true` | Dashboard on/off |
305
+ | `DASHBOARD_PORT` | `5042` | Dashboard port |
306
+ | `PLAN_MODE` | `false` | Require approval before execution |
307
+ | `SANDBOX_MODE` | `none` | `none`, `seatbelt`, or `docker` |
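Loading such a `.env` file can be sketched with a few lines of stdlib Python. This is an illustrative parser, not Adelie's actual config loader, and the defaults shown are a subset of the table above:

```python
# Defaults taken from the table above (subset, for illustration).
DEFAULTS = {"LLM_PROVIDER": "gemini", "LOOP_INTERVAL_SECONDS": "30",
            "DASHBOARD_PORT": "5042"}

def parse_env(text):
    """Parse KEY=VALUE lines, skipping blanks and comments."""
    cfg = dict(DEFAULTS)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip().strip('"')
    return cfg

cfg = parse_env('LLM_PROVIDER=ollama\nOLLAMA_MODEL="llama3.2"\n# comment')
```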
308
+
309
+ ### Docker Sandbox
252
310
 
253
311
  Optional `.adelie/sandbox.json`:
254
312
 
@@ -259,15 +317,14 @@ Optional `.adelie/sandbox.json`:
259
317
  "workspaceAccess": "rw",
260
318
  "network": "none",
261
319
  "memoryLimit": "512m",
262
- "cpuLimit": 1.0,
263
- "readOnlyRoot": false
320
+ "cpuLimit": 1.0
264
321
  }
265
322
  }
266
323
  ```
267
324
 
268
- ### Skills
325
+ ### Custom Skills
269
326
 
270
- Place skills in `.adelie/skills/<name>/SKILL.md`:
327
+ Place custom skills in `.adelie/skills/<name>/SKILL.md`:
271
328
 
272
329
  ```yaml
273
330
  ---
@@ -276,138 +333,95 @@ description: React/TypeScript best practices
276
333
  agents: [coder, reviewer]
277
334
  trigger: auto
278
335
  ---
279
- # Instructions
280
- Use functional components...
336
+ Use functional components with TypeScript props…
281
337
  ```
282
338
 
283
339
  ---
284
340
 
285
- ## 🧪 Testing
341
+ ## Platform Features
286
342
 
287
- ```bash
288
- # Run all tests
289
- python -m pytest tests/ -v
290
-
291
- # Specific modules
292
- python -m pytest tests/test_checkpoint.py -v
293
- python -m pytest tests/test_gateway.py -v
294
- python -m pytest tests/test_a2a.py -v
295
- ```
296
-
297
- **183 tests** across 8 test suites:
298
-
299
- | Suite | Tests | Coverage |
300
- |-------|-------|----------|
301
- | MCP Client | 35 | MCP server connection, tool discovery |
302
- | Tool Registry | 20 | Tool registration, categories, user tools |
303
- | Checkpoint | 16 | Create, restore, prune, metadata |
304
- | Docker Sandbox | 26 | Config, bind safety, Docker wrapping |
305
- | Gateway API | 18 | REST endpoints, auth, CORS |
306
- | Skill Registry | 19 | Install, uninstall, manifest, helpers |
307
- | Multichannel | 24 | Providers, router, broadcast, events |
308
- | A2A Protocol | 25 | Task lifecycle, persistence, HTTP API |
343
+ | Feature | Description |
344
+ |:--|:--|
345
+ | 💾 **Checkpoints** | Auto-snapshot before promotion, instant rollback |
346
+ | 🐳 **Docker Sandbox** | Configurable workspace isolation, network policy, resource limits |
347
+ | 🌐 **REST Gateway** | HTTP API — `/api/status`, `/api/tools`, `/api/control` |
348
+ | 🧩 **Skill Registry** | Install/update skills from Git repos or local directories |
349
+ | 📡 **Multichannel** | `ChannelProvider` ABC — Discord, Slack, custom channels |
350
+ | 🤝 **A2A Protocol** | Agent-to-Agent HTTP for external agent integration |
351
+ | 🔧 **MCP Support** | Model Context Protocol for external tool ecosystems |
352
+ | 📊 **Dashboard** | Real-time web UI with SSE streaming on port 5042 |
353
+ | 🔄 **Loop Detector** | 5 stuck-pattern types with escalating interventions |
354
+ | ⚡ **Scheduler** | Per-agent frequency control with cooldown/priority |
309
355
 
310
356
  ---
311
357
 
312
- ## 📁 Project Structure
358
+ ## Testing
313
359
 
314
- ```
315
- Adelie/
316
- ├── adelie/ # Core package
317
- │ ├── orchestrator.py # Main loop controller (state machine)
318
- │ ├── cli.py # CLI commands
319
- │ ├── config.py # Configuration & env loading
320
- │ ├── llm_client.py # LLM abstraction (Gemini + Ollama)
321
- │ ├── checkpoint.py # 💾 Checkpoint system
322
- │ ├── sandbox.py # 🐳 Docker/Seatbelt sandboxing
323
- │ ├── gateway.py # 🌐 REST API gateway
324
- │ ├── skill_manager.py # 🧩 Skill loading & registry
325
- │ ├── scheduler.py # Per-agent scheduling
326
- │ ├── phases.py # Project lifecycle phases
327
- │ ├── hooks.py # Event-driven plugin system
328
- │ ├── loop_detector.py # Stuck-loop detection
329
- │ ├── context_engine.py # Per-agent context assembly
330
- │ ├── process_supervisor.py # Subprocess management
331
- │ ├── feedback_queue.py # User feedback injection
332
- │ ├── channels/ # 📡 Multichannel abstraction
333
- │ │ ├── base.py # ChannelProvider ABC
334
- │ │ ├── discord.py # Discord integration
335
- │ │ ├── slack.py # Slack integration
336
- │ │ └── router.py # Multi-channel routing
337
- │ ├── a2a/ # 🤝 Agent-to-Agent protocol
338
- │ │ ├── types.py # Task/Event types
339
- │ │ ├── server.py # A2A HTTP server
340
- │ │ └── persistence.py # Task persistence
341
- │ ├── agents/ # AI agents (10 specialized)
342
- │ │ ├── expert_ai.py
343
- │ │ ├── writer_ai.py
344
- │ │ ├── coder_ai.py
345
- │ │ ├── reviewer_ai.py
346
- │ │ ├── tester_ai.py
347
- │ │ ├── runner_ai.py
348
- │ │ ├── monitor_ai.py
349
- │ │ ├── analyst_ai.py
350
- │ │ ├── research_ai.py
351
- │ │ └── scanner_ai.py
352
- │ └── kb/ # Knowledge Base
353
- │ ├── retriever.py
354
- │ └── embedding_store.py
355
- ├── tests/ # 183 tests
356
- ├── bin/ # Node.js CLI wrapper
357
- ├── requirements.txt # Python dependencies
358
- └── package.json # npm config
360
+ ```bash
361
+ python -m pytest tests/ -v # 197 tests
359
362
  ```
360
363
 
361
364
  ---
362
365
 
363
- ## ⚙️ How It Works
366
+ ## Project Structure
364
367
 
365
- Each orchestrator cycle runs these steps:
366
-
367
- 1. **Writer AI** creates/updates Knowledge Base files
368
- 2. **Expert AI** reads KB and makes structured decisions (JSON)
369
- 3. **Research AI** searches the web if requested
370
- 4. **Coder Manager** dispatches code generation by layer
371
- 5. **Reviewer AI** reviews code; retries on failure
372
- 6. **💾 Checkpoint** snapshots current files before promotion
373
- 7. **Staging Project** promotes approved code
374
- 8. **Tester AI** runs tests; retries on failure
375
- 9. **Runner AI** builds and deploys
376
- 10. **Monitor AI** checks health; restarts if needed
377
- 11. **Phase Gates** evaluate readiness for next phase
378
-
379
- The loop runs continuously with the **Scheduler** controlling agent frequency and the **Loop Detector** intervening when the system gets stuck.
380
-
381
- ---
382
-
383
- ## 🗺️ Roadmap
384
-
385
- - [x] **Phase 1** — MCP Server Integration
386
- - [x] **Phase 2** Checkpoint System, Docker Sandboxing, REST Gateway
387
- - [x] **Phase 3** — Skill Registry, Multichannel, A2A Protocol
388
- - [ ] **Phase 4** — VS Code Extension, Web Dashboard, Plugin Marketplace
368
+ ```
369
+ adelie/
370
+ ├── orchestrator.py # Main loop state machine + phase gates
371
+ ├── cli.py # All CLI commands
372
+ ├── config.py # Configuration & env loading
373
+ ├── llm_client.py # LLM abstraction (Gemini + Ollama + fallback)
374
+ ├── interactive.py # REPL + dashboard integration
375
+ ├── dashboard.py # Real-time web server (HTTP + SSE)
376
+ ├── dashboard_html.py # Embedded dashboard UI template
377
+ ├── agents/ # 10 specialized AI agents
378
+ │ ├── writer_ai.py # Knowledge Base curator
379
+ │ ├── expert_ai.py # Strategic decision maker
380
+ │ ├── coder_ai.py # Code generator
381
+ │ ├── coder_manager.py # Layer dispatch & retry
382
+ │ ├── reviewer_ai.py # Quality reviewer
383
+ │ ├── tester_ai.py # Test runner
384
+ │ ├── runner_ai.py # Build & deploy
385
+ │ ├── monitor_ai.py # Health monitor
386
+ │ ├── analyst_ai.py # Trend analyzer
387
+ │ ├── research_ai.py # Web researcher
388
+ │ └── scanner_ai.py # Initial codebase scanner
389
+ ├── kb/ # Knowledge Base (retriever + embeddings)
390
+ ├── channels/ # Multichannel providers (Discord, Slack)
391
+ ├── a2a/ # Agent-to-Agent protocol
392
+ ├── checkpoint.py # Snapshot & rollback
393
+ ├── sandbox.py # Docker/Seatbelt isolation
394
+ ├── gateway.py # REST API gateway
395
+ ├── skill_manager.py # Skill registry
396
+ ├── loop_detector.py # Stuck-pattern detection
397
+ ├── scheduler.py # Per-agent scheduling
398
+ ├── phases.py # Lifecycle phase definitions
399
+ ├── hooks.py # Event-driven plugin system
400
+ ├── process_supervisor.py # Subprocess management
401
+ └── env_strategy.py # Runtime environment detection
402
+ ```
389
403
 
390
404
  ---
391
405
 
392
- ## 🤝 Contributing
406
+ ## Contributing
393
407
 
394
- Contributions are welcome! Please feel free to submit Pull Requests.
408
+ ```bash
409
+ git clone https://github.com/Ade1ie/adelie.git
410
+ cd adelie
411
+ pip install -r requirements.txt
412
+ python -m pytest tests/ -v # Ensure all tests pass
413
+ ```
395
414
 
396
- 1. Fork the repository
397
- 2. Create your feature branch (`git checkout -b feature/amazing-feature`)
398
- 3. Run tests (`python -m pytest tests/ -v`)
399
- 4. Commit your changes (`git commit -m 'Add amazing feature'`)
400
- 5. Push to the branch (`git push origin feature/amazing-feature`)
401
- 6. Open a Pull Request
415
+ 1. Fork → branch → implement → test → PR
416
+ 2. Follow existing code style and patterns
417
+ 3. Add tests for new features
402
418
 
403
419
  ---
404
420
 
405
- ## 📄 License
406
-
407
- MIT — see [LICENSE](./LICENSE) for details.
421
+ ## License
408
422
 
409
- ---
423
+ [MIT](./LICENSE)
410
424
 
411
425
  <p align="center">
412
- <sub>Built with 🐧 Adelie the penguin that codes</sub>
426
+ <sub>Built with 🐧 by the Adelie team</sub>
413
427
  </p>
package/adelie/cli.py CHANGED
@@ -11,6 +11,7 @@ from __future__ import annotations
11
11
  import argparse
12
12
  import json
13
13
  import os
14
+ import platform
14
15
  import shutil
15
16
  import subprocess
16
17
  import sys
@@ -290,6 +291,296 @@ def _sync_specs() -> None:
290
291
  console.print(f"[green] > Auto-synced specs: {', '.join(parts)}[/green]")
291
292
 
292
293
 
294
+ # ═══════════════════════════════════════════════════════════════════════════════
295
+ # OS DETECTION
296
+ # ═══════════════════════════════════════════════════════════════════════════════
297
+
298
+
299
+ def _detect_os() -> dict:
300
+ """Detect the current OS, shell, and architecture."""
301
+ system = platform.system() # "Windows", "Linux", "Darwin"
302
+ release = platform.release()
303
+ machine = platform.machine() # "x86_64", "arm64", "AMD64"
304
+ version = platform.version()
305
+
306
+ # Detect shell
307
+ if system == "Windows":
308
+ shell = "PowerShell"
309
+ comspec = os.environ.get("COMSPEC", "")
310
+ # Detect if running in PowerShell vs cmd
311
+ if os.environ.get("PSModulePath"):
312
+ shell = "PowerShell"
313
+ elif "cmd.exe" in comspec.lower():
314
+ shell = "cmd"
315
+ else:
316
+ shell_path = os.environ.get("SHELL", "/bin/sh")
317
+ shell = Path(shell_path).name # "bash", "zsh", "fish", etc.
318
+
319
+ # Friendly OS name
320
+ if system == "Darwin":
321
+ os_name = "macOS"
322
+ try:
323
+ mac_ver = platform.mac_ver()[0]
324
+ if mac_ver:
325
+ release = mac_ver
326
+ except Exception:
327
+ pass
328
+ elif system == "Windows":
329
+ os_name = "Windows"
330
+ else:
331
+ os_name = "Linux"
332
+ # Try to get distro info
333
+ try:
334
+ import distro # type: ignore
335
+ os_name = f"Linux ({distro.name(pretty=True)})"
336
+ except ImportError:
337
+ # Fallback: read /etc/os-release
338
+ try:
339
+ osrel = Path("/etc/os-release").read_text()
340
+ for line in osrel.splitlines():
341
+ if line.startswith("PRETTY_NAME="):
342
+ pretty = line.split("=", 1)[1].strip('"')
+ os_name = f"Linux ({pretty})"
343
+ break
344
+     except Exception:
+         pass
+
+     return {
+         "system": system,
+         "os_name": os_name,
+         "release": release,
+         "machine": machine,
+         "version": version,
+         "shell": shell,
+     }
+
+
+ def _generate_os_context(os_info: dict) -> str:
+     """Generate English OS-specific context markdown for AI agent prompts."""
+     system = os_info["system"]
+     os_name = os_info["os_name"]
+     release = os_info["release"]
+     machine = os_info["machine"]
+     shell = os_info["shell"]
+
+     header = (
+         f"## System Environment\n\n"
+         f"- **OS**: {os_name} {release} / {machine}\n"
+         f"- **Shell**: {shell}\n"
+     )
+
+     if system == "Windows":
+         return header + (
+             "- **Path separator**: `\\` (backslash)\n"
+             "- **Line ending**: CRLF\n\n"
+             "### Command Reference (use ONLY these for this OS)\n\n"
+             "| Task | Command |\n"
+             "|------|---------|\n"
+             "| Delete file | `Remove-Item -Force <path>` |\n"
+             "| Delete directory | `Remove-Item -Recurse -Force <path>` |\n"
+             "| Copy file | `Copy-Item <src> <dst>` |\n"
+             "| Move file | `Move-Item <src> <dst>` |\n"
+             "| List files | `Get-ChildItem` or `ls` |\n"
+             '| Set env variable | `$env:VAR = "value"` |\n'
+             "| Chain commands | `cmd1; cmd2` (PowerShell) |\n"
+             "| Run script | `.\\script.ps1` |\n"
+             "| Null device | `$null` or `NUL` |\n\n"
+             "### Docker on Windows\n"
+             '- Use PowerShell-style volume mounts: `-v "${PWD}:/app"`\n'
+             "- Container shell: use `/bin/sh` (NOT `/bin/bash` unless confirmed)\n"
+             "- Line endings: ensure Dockerfiles use LF, not CRLF\n"
+             "- Docker Desktop required (or WSL2 backend)\n\n"
+             "### Testing & Build\n"
+             "- Use `npx` or `npm run` for Node.js scripts\n"
+             "- Python: `python` (not `python3`)\n"
+             "- pytest: `python -m pytest`\n"
+             "- Avoid `&&` for chaining — use `;` in PowerShell\n"
+         )
+     elif system == "Darwin":
+         silicon_note = ""
+         if machine == "arm64":
+             silicon_note = (
+                 "- On Apple Silicon (arm64): be aware of `--platform linux/amd64` for x86 images\n"
+                 "- Homebrew packages are at `/opt/homebrew/`\n"
+             )
+         else:
+             silicon_note = "- Homebrew packages are at `/usr/local/`\n"
+
+         return header + (
+             "- **Path separator**: `/` (forward slash)\n"
+             "- **Line ending**: LF\n\n"
+             "### Command Reference (use ONLY these for this OS)\n\n"
+             "| Task | Command |\n"
+             "|------|---------|\n"
+             "| Delete file | `rm -f <path>` |\n"
+             "| Delete directory | `rm -rf <path>` |\n"
+             "| Copy file | `cp <src> <dst>` |\n"
+             "| Move file | `mv <src> <dst>` |\n"
+             "| List files | `ls -la` |\n"
+             '| Set env variable | `export VAR="value"` |\n'
+             "| Chain commands | `cmd1 && cmd2` |\n"
+             "| Run script | `bash script.sh` or `./script.sh` |\n"
+             "| Null device | `/dev/null` |\n\n"
+             "### Docker on macOS\n"
+             '- Volume mounts: `-v "$(pwd):/app"`\n'
+             "- Container shell: `/bin/bash` or `/bin/sh`\n"
+             "- Docker Desktop for Mac required\n"
+             f"{silicon_note}\n"
+             "### Testing & Build\n"
+             "- Python: `python3` (not `python`)\n"
+             "- pytest: `python3 -m pytest`\n"
+             "- Use `&&` for command chaining\n"
+         )
+     else:  # Linux
+         return header + (
+             "- **Path separator**: `/` (forward slash)\n"
+             "- **Line ending**: LF\n\n"
+             "### Command Reference (use ONLY these for this OS)\n\n"
+             "| Task | Command |\n"
+             "|------|---------|\n"
+             "| Delete file | `rm -f <path>` |\n"
+             "| Delete directory | `rm -rf <path>` |\n"
+             "| Copy file | `cp <src> <dst>` |\n"
+             "| Move file | `mv <src> <dst>` |\n"
+             "| List files | `ls -la` |\n"
+             '| Set env variable | `export VAR="value"` |\n'
+             "| Chain commands | `cmd1 && cmd2` |\n"
+             "| Run script | `bash script.sh` or `./script.sh` |\n"
+             "| Null device | `/dev/null` |\n\n"
+             "### Docker on Linux\n"
+             '- Volume mounts: `-v "$(pwd):/app"`\n'
+             "- Container shell: `/bin/bash` or `/bin/sh`\n"
+             "- May need `sudo` for Docker commands (unless user is in docker group)\n"
+             "- Native Docker Engine — no Docker Desktop needed\n\n"
+             "### Testing & Build\n"
+             "- Python: `python3` (not `python` on most distros)\n"
+             "- pytest: `python3 -m pytest`\n"
+             "- Use `&&` for command chaining\n"
+         )
+
+
+ # ═══════════════════════════════════════════════════════════════════════════════
+ # AUTO GOAL GENERATION
+ # ═══════════════════════════════════════════════════════════════════════════════
+
+
+ def _auto_generate_goal() -> str | None:
+     """
+     Auto-generate project Main Goal from spec files + project structure.
+     Uses LLM to analyze .adelie/specs/ and file tree, producing a comprehensive
+     project_goal.md that all agents reference.
+     Returns the generated goal summary, or None if no specs found.
+     """
+     import adelie.config as cfg
+     from adelie.kb import retriever
+
+     ws_root = _find_workspace_root()
+     specs_dir = ws_root / "specs"
+     goal_path = cfg.WORKSPACE_PATH / "logic" / "project_goal.md"
+
+     # If goal already exists, just return its summary
+     if goal_path.exists():
+         content = goal_path.read_text(encoding="utf-8")
+         # Extract first meaningful line as summary
+         for line in content.splitlines():
+             line = line.strip()
+             if line and not line.startswith("#") and not line.startswith("**") and not line.startswith("<!--"):
+                 return line[:200]
+         return "Project goal defined (see project_goal.md)"
+
+     # Collect spec contents
+     spec_contents = ""
+     if specs_dir.exists():
+         for f in sorted(specs_dir.iterdir()):
+             if f.is_file() and not f.name.startswith(".") and f.suffix in (".md", ".txt", ".pdf", ".docx"):
+                 try:
+                     text = f.read_text(encoding="utf-8")
+                     spec_contents += f"\n### {f.name}\n{text[:3000]}\n"
+                 except Exception:
+                     pass
+
+     if not spec_contents.strip():
+         # No specs — no auto goal
+         return None
+
+     # Collect project structure
+     from adelie.project_context import get_tree_summary
+     file_tree = get_tree_summary()
+
+     # Also check KB for any scanner output
+     kb_summary = ""
+     try:
+         retriever.ensure_workspace()
+         kb_summary = retriever.get_index_summary()
+     except Exception:
+         pass
+
+     # LLM call to generate Main Goal
+     prompt = f"""You are analyzing a software project to create a comprehensive project roadmap.
+ Your output will be used as the "Main Goal" document that guides ALL AI agents working on this project.
+
+ ## Spec Files (provided by user)
+ {spec_contents}
+
+ ## Project Structure
+ {file_tree}
+
+ {f"## Existing Knowledge Base{chr(10)}{kb_summary}" if kb_summary else ""}
+
+ Based on the above, create a structured project roadmap in markdown.
+ The document should include:
+
+ 1. **Vision** — One paragraph summary of what this project aims to achieve
+ 2. **Objectives** — Numbered list of major objectives with measurable criteria
+ 3. **Technical Requirements** — Technologies, frameworks, architecture decisions
+ 4. **Milestones** — Phased delivery plan
+ 5. **Constraints & Notes** — Any important constraints or considerations
+
+ Be thorough and specific — this document guides ALL AI agents.
+ Output ONLY the markdown content, no extra commentary."""
+
+     console.print("[bold cyan]🎯 Auto-generating Main Goal from spec files...[/bold cyan]")
+
+     try:
+         from adelie.llm_client import generate
+         result = generate(
+             system_prompt="You are a project planning expert. Analyze the given specs and project structure to create a clear, actionable project roadmap.",
+             user_prompt=prompt,
+             temperature=0.3,
+         )
+
+         # Save to project_goal.md
+         goal_path.parent.mkdir(parents=True, exist_ok=True)
+         from datetime import datetime
+         header = (
+             f"<!-- auto-generated from specs at {datetime.now().isoformat(timespec='seconds')} -->\n"
+             f"<!-- regenerate with: adelie goal reset -->\n\n"
+         )
+         goal_path.write_text(header + result, encoding="utf-8")
+
+         # Update KB index
+         retriever.update_index(
+             "logic/project_goal.md",
+             tags=["goal", "project", "roadmap", "priority"],
+             summary="Auto-generated project Main Goal from spec files",
+         )
+
+         console.print("[green]  ✓ Main Goal generated → project_goal.md[/green]")
+
+         # Extract summary for display
+         first_line = ""
+         for line in result.splitlines():
+             line = line.strip()
+             if line and not line.startswith("#"):
+                 first_line = line[:120]
+                 break
+         return first_line or "Project goal auto-generated from specs"
+
+     except Exception as e:
+         console.print(f"[yellow]⚠️  Auto goal generation failed: {e}[/yellow]")
+         console.print("[dim]  Continuing without Main Goal — set manually with: adelie goal set \"...\"[/dim]")
+         return None
+
+
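The `_generate_os_context` helper added above branches on the detected platform and emits a per-OS command table for agent prompts. A minimal standalone sketch of the same dispatch idea (the `DELETE_CMD` table and `delete_command` helper are illustrative names, not part of the package's API):

```python
import platform

# Map each OS family to its native recursive-delete command, mirroring
# the per-OS command tables that _generate_os_context builds above.
DELETE_CMD = {
    "Windows": "Remove-Item -Recurse -Force <path>",
    "Darwin": "rm -rf <path>",
    "Linux": "rm -rf <path>",
}


def delete_command() -> str:
    """Return the delete command for the current platform (POSIX fallback)."""
    return DELETE_CMD.get(platform.system(), "rm -rf <path>")
```

Keeping the commands in a lookup table rather than branching inline makes it easy to extend with more tasks (copy, move, env vars) per OS.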
  # ═══════════════════════════════════════════════════════════════════════════════
  # INIT
  # ═══════════════════════════════════════════════════════════════════════════════
@@ -335,6 +626,13 @@ def cmd_init(args: argparse.Namespace) -> None:
      specs_dir = adelie_dir / "specs"
      specs_dir.mkdir(parents=True, exist_ok=True)

+     # ── Detect OS ─────────────────────────────────────────────────────
+     os_info = _detect_os()
+     console.print(
+         f"[green] +[/green] OS detected: [bold]{os_info['os_name']} {os_info['release']}[/bold] "
+         f"({os_info['machine']}, {os_info['shell']})"
+     )
+
      # ── Create config ────────────────────────────────────────────────────
      default_config = {
          "loop_interval": 30,
@@ -356,10 +654,33 @@ def cmd_init(args: argparse.Namespace) -> None:
          "detected_at": datetime.now().isoformat(timespec="seconds"),
      }

+     # Save OS info to config
+     default_config["os"] = {
+         "system": os_info["system"],
+         "os_name": os_info["os_name"],
+         "release": os_info["release"],
+         "machine": os_info["machine"],
+         "shell": os_info["shell"],
+         "detected_at": datetime.now().isoformat(timespec="seconds"),
+     }
+
      config_path = adelie_dir / "config.json"
      if not config_path.exists() or args.force:
          config_path.write_text(json.dumps(default_config, indent=2, ensure_ascii=False), encoding="utf-8")

+     # ── Generate context.md with OS info ──────────────────────────────
+     context_file = adelie_dir / "context.md"
+     os_context = _generate_os_context(os_info)
+     if not context_file.exists() or args.force:
+         context_file.write_text(os_context, encoding="utf-8")
+         console.print(f"[green] +[/green] Generated context.md (OS-specific prompts)")
+     else:
+         # Append/update OS section if context.md already exists
+         existing = context_file.read_text(encoding="utf-8")
+         if "## System Environment" not in existing:
+             context_file.write_text(existing.rstrip() + "\n\n" + os_context, encoding="utf-8")
+             console.print(f"[green] +[/green] Updated context.md with OS info")
+
      # ── Create .env template ─────────────────────────────────────────
      env_file = adelie_dir / ".env"
      if not env_file.exists() or args.force:
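The `context.md` update above is idempotent: the OS section is appended only when the `## System Environment` marker is absent, so re-running `adelie init` does not duplicate it. A self-contained sketch of that append-if-missing pattern (`merge_os_context` is a hypothetical name for illustration):

```python
def merge_os_context(existing: str, os_context: str,
                     marker: str = "## System Environment") -> str:
    """Append the OS section only if its marker is not already present,
    mirroring the append-if-missing logic in cmd_init above."""
    if marker in existing:
        return existing
    return existing.rstrip() + "\n\n" + os_context


doc = "# Notes\n"
section = "## System Environment\n- OS: Linux 6.8 / x86_64\n"
once = merge_os_context(doc, section)
twice = merge_os_context(once, section)  # no-op on the second call
```

Marker-based idempotence like this is what lets `init` be re-run safely without `--force`.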
@@ -376,7 +697,7 @@ OLLAMA_BASE_URL=http://localhost:11434
  OLLAMA_MODEL=llama3.2

  # ── Ollama Cloud (when using ollama.com cloud) ────────────────────────
- # OLLAMA_BASE_URL=https://api.ollama.com
+ # OLLAMA_BASE_URL=https://ollama.com
  # OLLAMA_API_KEY=your-ollama-cloud-api-key

  # ── Fallback Chain ───────────────────────────────────────────────────
@@ -557,24 +878,32 @@ def cmd_run(args: argparse.Namespace) -> None:
 
          # Use last goal if no new goal specified
          goal = args.goal
-         if goal == "Operate and improve the Adelie autonomous AI system" and ws.get("last_goal"):
+         if not goal and ws.get("last_goal"):
              goal = ws["last_goal"]

          console.print(f"[bold cyan]{t('run.resuming', n=args.workspace_num)}[/bold cyan]")
          console.print(f"  [dim]{ws_path}[/dim]")

-         registry.update_last_used(ws_path, goal)
+         registry.update_last_used(ws_path, goal or "")
      else:
          goal = args.goal
          ws_root = _find_workspace_root()
          if ws_root.exists():
-             registry.update_last_used(str(ws_root.parent), goal)
+             registry.update_last_used(str(ws_root.parent), goal or "")

      _validate_provider()

      # Auto-sync spec files from .adelie/specs/
      _sync_specs()

+     # Auto-generate Main Goal if no --goal provided
+     if not goal:
+         goal_summary = _auto_generate_goal()
+         if goal_summary:
+             goal = goal_summary
+         else:
+             goal = "Autonomously develop and improve the project based on available context"
+
      # Get current phase from workspace config
      ws_config = _load_workspace_config()
      phase = ws_config.get("phase", "initial")
@@ -1750,7 +2079,7 @@ def main() -> None:
      p_run.add_argument("workspace_num", nargs="?", type=int, default=None,
                         help="Workspace number (use with 'ws')")
      p_run.add_argument("--goal", type=str,
-                        default="Operate and improve the Adelie autonomous AI system",
+                        default=None,
                         help="High-level goal for the AI agents")
      p_run.add_argument("--once", action="store_true",
                         help="Run exactly one cycle then exit")
@@ -286,6 +286,14 @@ class ThreadingDashboardHTTPServer(socketserver.ThreadingMixIn, HTTPServer):
      daemon_threads = True
      allow_reuse_address = True

+     def handle_error(self, request, client_address):
+         """Silently ignore client-side disconnects (common on Windows)."""
+         import sys
+         exc = sys.exc_info()[1]
+         if isinstance(exc, (ConnectionAbortedError, ConnectionResetError, BrokenPipeError)):
+             return  # client closed connection early — nothing to do
+         super().handle_error(request, client_address)
+

  # ── Dashboard Server ─────────────────────────────────────────────────────────

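The `handle_error` override above works because `socketserver.BaseServer.handle_error` is invoked while the exception is still active, so `sys.exc_info()` exposes it for filtering. A self-contained sketch of the same pattern (`QuietServer` is an illustrative name; the package's class carries additional dashboard logic):

```python
import socketserver
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer


class QuietServer(socketserver.ThreadingMixIn, HTTPServer):
    """Threading HTTP server that swallows early client disconnects,
    mirroring the handle_error override in the hunk above."""
    daemon_threads = True
    allow_reuse_address = True

    def handle_error(self, request, client_address):
        # handle_error runs inside the except block, so the active
        # exception is available via sys.exc_info().
        exc = sys.exc_info()[1]
        if isinstance(exc, (ConnectionAbortedError, ConnectionResetError, BrokenPipeError)):
            return  # benign: the client went away mid-response
        super().handle_error(request, client_address)
```

Unfiltered errors still fall through to the default traceback printer, so only the noisy disconnect classes are silenced.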
@@ -57,8 +57,8 @@ class Orchestrator:
      MAX_RECOVER_RETRIES = 3
      MAX_NEW_LOGIC_CYCLES = 3  # Force transition after N cycles in new_logic

-     def __init__(self, goal: str = "Operate the Adelie autonomous AI system", phase: str = "initial"):
-         self.goal = goal
+     def __init__(self, goal: str | None = None, phase: str = "initial"):
+         self.goal = goal or "Autonomously develop and improve the project"
          self.phase = phase
          self.state = LoopState.NORMAL
          self.loop_iteration = 0
@@ -13,6 +13,7 @@ Provides:
  from __future__ import annotations

  import os
+ import platform
  from datetime import datetime
  from pathlib import Path
  from typing import NamedTuple
@@ -135,7 +136,8 @@ def get_tree_summary() -> str:
      header = (
          f"📁 Project: {PROJECT_ROOT.name} | "
          f"{len(files)} files ({code_count} code, {config_count} config) | "
-         f"{_format_size(total_size)} total\n"
+         f"{_format_size(total_size)} total | "
+         f"{get_os_info()}\n"
      )

      truncated = ""
@@ -194,3 +196,22 @@ def _format_size(size: int) -> str:
          return f"{size / 1024:.1f}KB"
      else:
          return f"{size / (1024 * 1024):.1f}MB"
+
+
+ def get_os_info() -> str:
+     """Return a one-line OS summary string for context injection."""
+     system = platform.system()
+     machine = platform.machine()
+     if system == "Darwin":
+         os_name = "macOS"
+         try:
+             ver = platform.mac_ver()[0]
+             if ver:
+                 os_name = f"macOS {ver}"
+         except Exception:
+             pass
+     elif system == "Windows":
+         os_name = f"Windows {platform.release()}"
+     else:
+         os_name = f"Linux {platform.release()}"
+     return f"OS: {os_name} / {machine}"
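The `get_os_info` helper added above is pure stdlib, so it can be restated and exercised outside the package. A runnable consolidation of the hunk (same behavior, condensed branching; this is a sketch, not the package's exact code):

```python
import platform


def get_os_info() -> str:
    """One-line OS summary, restating the helper added in the hunk above."""
    system = platform.system()
    machine = platform.machine()
    if system == "Darwin":
        # mac_ver()[0] is "" when unavailable; strip() drops the trailing space.
        os_name = f"macOS {platform.mac_ver()[0]}".strip()
    elif system == "Windows":
        os_name = f"Windows {platform.release()}"
    else:
        os_name = f"Linux {platform.release()}"
    return f"OS: {os_name} / {machine}"
```

Injecting this string into `get_tree_summary`'s header (as the earlier hunk does) gives every agent prompt the OS context for free.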
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "adelie-ai",
-   "version": "0.2.0",
+   "version": "0.2.2",
    "description": "Adelie — Self-Communicating Autonomous AI Loop CLI",
    "bin": {
      "adelie": "bin/adelie.js"