@jiayu6954/seed-ai 0.9.1-alpha.1
- package/LICENSE +21 -0
- package/README.md +359 -0
- package/dist/index.js +6552 -0
- package/package.json +72 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Seed AI Contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,359 @@
# Seed AI

[](https://github.com/jiayu6954-sudo/seed-ai/actions/workflows/ci.yml)
[](LICENSE)
[](package.json)

**A production-grade TypeScript CLI AI coding assistant — built from scratch with 14 delivered innovations beyond Claude Code.**

> Not a wrapper. Not a clone. A ground-up reimplementation that systematically analyzed Claude Code's architecture and design patterns, then shipped measurable improvements in every critical dimension.

> **Status:** Active development. Unit tests cover core modules; integration tests are limited — contributions welcome. See [Quality & Testing](#quality--testing).

## Demo

https://github.com/user-attachments/assets/7981b487-3d67-42f4-aafb-2cf1de1e3575

https://github.com/user-attachments/assets/de6061b3-a7ee-4d3e-bf46-3fade4ab4c37

> Walkthrough: streaming output, parallel tool execution, real-time diff rendering, model switching, long-term memory recall.

---

## Why Seed AI?

| Problem | Seed AI's answer |
|---------|------------------|
| Locked into Anthropic's API | 8 providers: OpenAI, DeepSeek, Groq, Gemini, Ollama, OpenRouter, Moonshot, custom |
| API costs spiral out of control | Tool result cache reduces repeated reads; LLM compression prunes context; local Ollama runs at zero API cost |
| AI forgets everything between sessions | 3-layer long-term memory + semantic vector retrieval (constant ~800 tokens) |
| Local LLMs need manual plumbing | Auto-discover Ollama/LM Studio/vLLM, detect tool support, fall back to XML tool calls |
| `fetch()` breaks on half the web | Native fetch with automatic `curl` fallback — BBC News, Sina Finance all work |
| Docker unavailable = crash | Graceful host fallback with sandbox-aware system prompt |
| Repeated file reads waste time | Session-level tool result cache, write-before-invalidate ordering |
| Context window fills up silently | Haiku-powered semantic compression with cumulative summary injection |

---

## Features

### Core Agent Engine
- **Parallel tool execution** — permissions collected serially (clear UX), execution via `Promise.allSettled()` — N×T latency → ~1.2×T
- **Session-level tool cache** — idempotent reads (`file_read`, `glob`, `grep`, `web_fetch`) cached; write ops invalidate before execution to prevent stale reads
- **LLM-driven context compression** — triggers at 80% context window; Haiku ($0.0002/compress) summarizes pruned messages into system prompt; multi-round cumulative summaries preserved
- **Token Budget Guard** — natural language budget (`"+500k"`, `"2M tokens"`) parsed inline, hard limit enforced per loop iteration
- **50-iteration loop** with Ctrl+C abort, streaming output, real-time diff rendering

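The two-phase pattern behind parallel tool execution can be sketched as follows. This is a minimal illustration under assumed shapes, not the actual `loop.ts` code; `askPermission` and the `ToolCall` interface are hypothetical:

```typescript
interface ToolCall {
  name: string;
  run: () => Promise<string>;
}

// Phase 1: collect permissions one at a time (clear UX).
// Phase 2: execute every approved call concurrently.
async function executeTools(
  calls: ToolCall[],
  askPermission: (name: string) => Promise<boolean>,
): Promise<PromiseSettledResult<string>[]> {
  const approved: ToolCall[] = [];
  for (const call of calls) {
    if (await askPermission(call.name)) approved.push(call);
  }
  // allSettled: one failing tool never aborts its siblings
  return Promise.allSettled(approved.map((c) => c.run()));
}
```

Using `Promise.allSettled` rather than `Promise.all` is what makes the N-tool round roughly one tool's latency without letting a single failure cancel the batch.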
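A natural-language budget string like `"+500k"` or `"2M tokens"` reduces to a small suffix-multiplier parse. A sketch under assumed syntax (hypothetical helper, not the I010 implementation):

```typescript
// Parse "+500k", "2M tokens", "750000" into a token count.
// Returns null when the string is not a budget expression.
function parseTokenBudget(input: string): number | null {
  const m = input.trim().match(/^\+?\s*([\d.]+)\s*([kKmM])?\s*(tokens?)?$/);
  if (!m) return null;
  const value = parseFloat(m[1]);
  if (Number.isNaN(value)) return null;
  const suffix = m[2]?.toLowerCase();
  const multiplier = suffix === "m" ? 1_000_000 : suffix === "k" ? 1_000 : 1;
  return Math.round(value * multiplier);
}
```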
### Multi-Provider Support
```bash
seed config model                 # interactive menu — 8 providers, key status shown
seed config model --deepseek      # quick switch to DeepSeek cloud
seed config model --local         # Ollama auto-discover
seed config model --anthropic --set-model claude-opus-4-6
```
Supported: `anthropic` · `openai` · `deepseek` · `groq` · `gemini` · `openrouter` · `ollama` (local) · `moonshot` · custom URL

### Smart Local Model Layer (I011)
Ollama / LM Studio / llama.cpp / vLLM — zero config required:
1. Auto-discover running services on standard ports (11434, 1234, 8080, 8000)
2. Detect tool-call capability (dry-run probe + pattern matching)
3. Native tool calls if supported; XML `<tool_call>` fallback if not
4. Handle vendor-specific quirks (e.g. Ollama R1 strips `<think>` opening tags)

Example: `ollama pull qwen2.5-coder:7b` then `seed config model --local` — Seed AI detects the model, confirms tool-call support, and starts immediately. No config file edits.

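Step 1 above, port auto-discovery, amounts to probing the standard ports with a short timeout. A simplified sketch; it assumes an Ollama-style `/api/tags` endpoint on 11434 and OpenAI-style `/v1/models` on the others, and is not the actual I011 code:

```typescript
const CANDIDATES = [
  { name: "ollama", port: 11434, path: "/api/tags" },
  { name: "lm-studio", port: 1234, path: "/v1/models" },
  { name: "llama.cpp", port: 8080, path: "/v1/models" },
  { name: "vllm", port: 8000, path: "/v1/models" },
];

// Probe every candidate port concurrently; keep the services that answer.
async function discoverLocalServices(timeoutMs = 1000): Promise<string[]> {
  const probes = CANDIDATES.map(async (c) => {
    const res = await fetch(`http://127.0.0.1:${c.port}${c.path}`, {
      signal: AbortSignal.timeout(timeoutMs), // unreachable ports reject fast
    });
    return res.ok ? c.name : null;
  });
  const settled = await Promise.allSettled(probes);
  return settled
    .filter((s): s is PromiseFulfilledResult<string | null> => s.status === "fulfilled")
    .map((s) => s.value)
    .filter((v): v is string => v !== null);
}
```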
### Long-Term Memory (I007 + I012)
Automatically persists and retrieves knowledge **across sessions**:
```
~/.seed/memory/
├── user.md              ← cross-project user profile
└── projects/{sha1}/
    ├── context.md       ← tech stack, architecture
    ├── decisions.md     ← key decisions + reasoning
    └── learnings.md     ← bugs fixed, patterns that worked
```
- **Auto-extraction**: Haiku distills only durable knowledge at session end (not ephemeral variable values)
- **Semantic vector retrieval** (I012): embed query → cosine similarity → inject top-8 relevant chunks (~800 tokens constant, regardless of total memory size)
- TF-IDF offline fallback when Ollama embedding service unavailable
- `pruneStale()` auto-cleans chunks older than 90 days

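The constant-size injection works because retrieval is a top-k cosine search over stored chunk embeddings. A self-contained sketch with hypothetical shapes, not the actual VectorStore API:

```typescript
interface MemoryChunk {
  text: string;
  embedding: number[];
}

// Cosine similarity of two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank every chunk against the query embedding and keep the top k,
// so the injected context stays ~constant however large memory grows.
function topK(query: number[], chunks: MemoryChunk[], k = 8): MemoryChunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```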
### Resilient Web Fetch
```
native fetch (15s timeout)
└── on timeout/connection error → curl.exe fallback
    ├── charset-aware decode (GBK, GB2312, UTF-8...)
    ├── 10MB buffer
    └── shared result pipeline (HTML strip, JSON parse)
```

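The fallback chain in the diagram can be sketched as below. This is simplified: the real pipeline adds charset detection and HTML stripping, and the sketch assumes a `curl` binary on PATH:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Try native fetch first; on timeout or connection error,
// shell out to curl and feed the result into the same pipeline.
async function resilientFetch(url: string, timeoutMs = 15_000): Promise<string> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    return await res.text();
  } catch {
    const { stdout } = await execFileAsync(
      "curl", ["-sL", "--max-time", String(timeoutMs / 1000), url],
      { maxBuffer: 10 * 1024 * 1024 }, // 10 MB buffer, as in the diagram
    );
    return stdout;
  }
}
```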
### Docker Sandbox (I005)
Three isolation levels: `strict` (read-only FS + no network) · `standard` · `permissive`
Graceful host fallback when Docker is unavailable — never crashes, always notifies.

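Graceful fallback reduces to a cheap availability probe before each sandboxed run. A sketch with a hypothetical helper, not the I005 code:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// `docker info` exits non-zero when the daemon is unreachable; in that
// case execute on the host and flag it so the system prompt can say so.
async function chooseExecutionMode(): Promise<"docker" | "host"> {
  try {
    await run("docker", ["info"], { timeout: 3000 });
    return "docker";
  } catch {
    return "host"; // never crash: notify the user and continue on the host
  }
}
```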
### Storage Guard (I016)
Auto-runs at startup (non-blocking). Prevents unbounded disk growth:

| Category | Quota | Strategy |
|----------|-------|----------|
| `vectors.json` | 200 MB | Delete oldest 30% of chunks |
| Session files | 100 files | FIFO eviction |
| `debug.log` | 10 MB | Keep last 5 MB |

```bash
seed config show --storage        # color-coded quota dashboard (green/yellow/red)
SEED_DATA_DIR=F:/seed-data seed   # relocate all data off the system drive (C:)
```

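The `debug.log` strategy in the table (cap at 10 MB, keep the newest 5 MB) is a simple tail-truncate. A sketch with a hypothetical helper; the real guard would also need to handle concurrent writers:

```typescript
import * as fs from "node:fs";

// If the log exceeds maxBytes, rewrite it keeping only the trailing keepBytes.
function truncateLog(path: string, maxBytes: number, keepBytes: number): void {
  const { size } = fs.statSync(path);
  if (size <= maxBytes) return;
  const fd = fs.openSync(path, "r");
  const buf = Buffer.alloc(keepBytes);
  fs.readSync(fd, buf, 0, keepBytes, size - keepBytes); // read just the tail
  fs.closeSync(fd);
  fs.writeFileSync(path, buf); // atomicity is not handled in this sketch
}
```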
### Terminal UI
- Ink 4 (React) component architecture
- **Static/Dynamic split rendering** — completed messages written to terminal scrollback once via Ink `<Static>` (never redrawn); only the live streaming zone (~TAIL_LINES + 8 chrome lines) repaints every 80 ms — eliminates streaming jitter
- `tailMessage()` caps streaming output to `termRows - 8` lines (≈22 on a standard 30-row terminal) — large enough to read, safe from cursor-tracking overflow
- Precise `rgb()` color system — identical across all terminals and themes (no ANSI color override)
- Real-time diff rendering (green `+` / red `-`) — color values sourced directly from Claude Code's theme
- Shift+Enter multiline input (kitty keyboard protocol + xterm modifyOtherKeys)
- Braille spinner during tool execution; status bar with live token/cost/budget/elapsed tracking
- Primary buffer (not alternate screen) — terminal scrollback preserved, scrollbar works normally

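The viewport cap described in the list above reduces to slicing off the last N lines. A sketch with an assumed signature, not the project's exact `tailMessage()`:

```typescript
// Keep only the last (termRows - 8) lines of a streaming message so the
// dynamic repaint zone never outgrows the terminal and causes jitter.
function tailMessage(text: string, termRows: number): string {
  const max = Math.max(1, termRows - 8);
  const lines = text.split("\n");
  return lines.length <= max ? text : lines.slice(-max).join("\n");
}
```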
### Slash Commands
| Command | Description |
|---------|-------------|
| `/clear` | Reset context |
| `/compact` | Force LLM compression |
| `/cost` | Token usage + estimated cost |
| `/help` | All commands and shortcuts |
| `/model` | Current provider/model |
| `/memory` | Loaded memory entries |

---

## Installation

### Prerequisites
- Node.js ≥ 20 (recommend 24+ for native fetch performance)
- npm ≥ 9

### From source
```bash
git clone https://github.com/jiayu6954-sudo/seed-ai.git
cd seed-ai
npm install
npm run build
npm link          # makes `seed` available globally

# Verify
seed --version
```

> **npm package coming soon** — `npx seed-ai` one-liner install is planned once the package is published to the npm registry.

### Quick start — Anthropic
```bash
export ANTHROPIC_API_KEY=sk-ant-...
seed setup        # interactive wizard
seed
```

### Quick start — DeepSeek (cost-effective cloud)
```bash
export DEEPSEEK_API_KEY=sk-...
seed config model --deepseek --set-model deepseek-chat
seed
```

### Quick start — Local (zero API cost)
```bash
# 1. Install and start Ollama: https://ollama.com
ollama pull qwen2.5-coder:7b      # tool-call capable, good for code tasks
# or: ollama pull llama3.2:3b     # smaller, faster

# 2. Point Seed AI at it
seed config model --local
seed
```
Seed AI auto-discovers the Ollama service and selects from installed models — no config editing required.

---

## Usage

```bash
# Interactive REPL
seed

# Single prompt (pipe/headless mode)
echo "explain this function" | seed

# Resume a previous session
seed --session abc123

# Override model for one run
seed -m claude-opus-4-6

# Read-only mode (no writes or exec)
seed --deny-all

# Auto-approve all tool permissions
seed --allow-all

# Relocate runtime data to another drive
SEED_DATA_DIR=F:/seed-data seed
```

### In-session shortcuts
| Key | Action |
|-----|--------|
| `Enter` | Send message |
| `Shift+Enter` | New line |
| `Ctrl+C` | Interrupt agent / exit |
| `Esc` | Cancel pending input |

---

## Configuration

Settings are stored at `~/.seed/settings.json` (or `$SEED_DATA_DIR/settings.json`):

```bash
seed config show              # formatted config summary
seed config show --json       # raw JSON
seed config show --storage    # storage quota dashboard
seed config set maxTokens 8192
seed config set memory.enabled false
seed config set sandbox.enabled true
seed config reset             # restore defaults
```

Key settings:
```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4-6",
  "maxTokens": 16384,
  "memory": { "enabled": true },
  "sandbox": { "enabled": false },
  "ui": { "showThinking": true }
}
```

`SEED_DATA_DIR` — set this environment variable to move all Seed AI data (memory, sessions, logs) off your system drive. Useful when the system partition is nearly full.

---

## Known Limitations

- **SPA / JavaScript-rendered content**: `web_fetch` fetches initial HTML only; React/Vue/Angular apps return the shell, not the dynamic data. Puppeteer-level tooling is out of scope for a CLI assistant.
- **Heavy bot protection**: DataDome and Cloudflare Enterprise Bot Management are not bypassable without a full browser fingerprint (TLS characteristics, JS execution, behavioral analysis).
- **Docker requires manual start**: On Windows, Docker Desktop must be running before use if `sandbox.enabled=true`. Seed AI cannot start the Docker daemon automatically.
- **Docker on Windows**: Requires WSL2 integration and drive sharing configured in Docker Desktop settings.
- **Local LLM tool calls**: Reasoning-focused models (e.g. DeepSeek-R1) rarely emit tool-call syntax reliably. For code tasks, use tool-capable models such as `qwen2.5-coder`, `llama3.1`, or `mistral`.

---

## Architecture

```
CLI (index.ts)
└── Ink TUI ←→ useAgentLoop (React hook)
    ├── Token Budget Parser (I010)
    └── runAgentLoop (loop.ts)
        ├── SystemPrompt (static/dynamic split, I009)
        ├── AIProvider (8 implementations)
        │   └── SmartLocalProvider (I011)
        ├── ToolRegistry
        │   ├── Cache (I002)
        │   ├── Sandbox (I005)
        │   └── MCPRegistry (I006)
        └── ContextManager (I003 compression, triggers at 80%)

Memory Layer
├── LongTermMemory (I007)  ~/.seed/memory/
└── VectorStore (I012)     ~/.seed/memory/vectors.json
```

---

## Comparison with Claude Code

### Seed AI leads

| Dimension | Seed AI | Claude Code |
|-----------|---------|-------------|
| **Providers** | 8+ (Anthropic, OpenAI, DeepSeek, Groq, Gemini, Ollama, OpenRouter, Moonshot) | Anthropic only |
| **Local LLMs** | Auto-discover, tool-cap probe, XML fallback | None |
| **Long-term memory** | 3-layer, Haiku extraction, semantic vector search | None |
| **Tool result cache** | Session-level, write-before-invalidate | None |
| **web_fetch fallback** | native → curl auto-downgrade, charset-aware | None |
| **Context compression** | LLM-powered semantic summary at 80% window | Conversation compaction via `/compact` (different approach) |
| **Token budget** | Natural language: `"+500k"`, `"2M tokens"` | Static config only |
| **Docker sandbox** | 3 isolation levels + graceful host fallback | None |
| **Storage quotas** | Auto-enforced + SEED_DATA_DIR relocation | None |
| **Model switching** | Interactive menu + single-command flags | None |
| **MCP** | **Client** + registry + lifecycle management | **Server** only |

### Claude Code leads

| Dimension | Claude Code advantage | Seed AI roadmap |
|-----------|-----------------------|-----------------|
| **Hooks system** | `PreToolUse` / `PostToolUse` programmable hooks | I015 — next priority |
| **Plan Mode** | Read-only planning mode | Planned |
| **VSCode integration** | Full IDE extension + inline code | Out of scope (CLI-first positioning) |
| **Permission granularity** | Tool-level + path-level, session learning | Iterating |
| **Test coverage / maturity** | Large-scale production validation | See below |

---

## Quality & Testing

```bash
npm run test:run      # Vitest unit tests
npm run typecheck     # tsc --noEmit, strict mode
```

- Unit tests cover core modules: permissions, tool cache, context compression, cost tracking, sandbox, vector store, token budget parser, storage guard.
- **Integration tests are currently limited** — end-to-end agent loop tests against live APIs are a known gap.
- Contributions that add integration test coverage are especially welcome. See [CONTRIBUTING.md](.github/CONTRIBUTING.md).

---

## Roadmap

| ID | Feature | Status |
|----|---------|--------|
| I001–I013 | Parallel exec, cache, LLM compression, memory, sandbox, MCP, budget, system prompt, local LLM, vector memory, model switcher | Done |
| I016 | Storage Guard + SEED_DATA_DIR | Done (v0.9.1) |
| ~~I014~~ | ~~Out-of-process LLM compression~~ | Dropped — I012 fixed the root cause; the UI spinner handles the rare freeze |
| I015 | Hooks system: `PreToolUse` / `PostToolUse` shell scripts, executed inside the Docker sandbox (security constraint) | Next |
| — | Plan Mode: read-only planning, no tool execution until user confirms | Planned |
| — | Integration test suite | Help wanted |

> **v0.9.1-r6** — Streaming rendering architecture finalized: Static/Dynamic split, isStreaming atomization fix, TAIL_LINES sized to the viewport.

---

## Contributing

We welcome bug reports, feature ideas, and pull requests. Please read [CONTRIBUTING.md](.github/CONTRIBUTING.md) for:

- Development setup (build, dev mode, typecheck, tests)
- Innovation numbering conventions (I001–I013 and I016 done; I014 dropped; I015 is next)
- Code standards: TypeScript strict, Zod schemas, layered error handling, no speculative abstractions
- PR checklist and what we will not merge

**Where contributions matter most right now:**
- Integration tests for the agent loop (end-to-end with real API calls)
- Windows-specific edge cases (path handling, terminal encoding)
- Additional local LLM provider testing

---

## Acknowledgments

Seed AI was built by systematically studying Claude Code's architecture, design patterns, and engineering decisions. The color system, diff rendering values, and MCP protocol integration are informed by that study. This project stands on the shoulders of Anthropic's engineering work — the goal is to push the open-source ecosystem forward, not to compete with or diminish it.

14 delivered innovations (I001–I013, I016) are fully documented in [WHITEPAPER.md](WHITEPAPER.md). I014 was evaluated and dropped; I015 (Hooks system) is the next priority.

---

## License

MIT — see [LICENSE](LICENSE).