agestra 4.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.ko.md ADDED
@@ -0,0 +1,241 @@
1
+ # Agestra
2
+
3
+ **Agent + Orchestra** — 여러 AI 공급자를 Claude Code에서 오케스트레이션하는 플러그인.
4
+
5
+ [English](README.md) | [한국어](README.ko.md)
6
+
7
+ Agestra는 Ollama(로컬), Gemini CLI, Codex CLI를 Claude Code에 플러그형으로 연결합니다. 멀티에이전트 토론, 병렬 작업 분배, 교차 검증, 지속적 GraphRAG 메모리 시스템을 28개 MCP 도구로 제공합니다.
8
+
9
+ ## 빠른 시작
10
+
11
+ ```bash
12
+ claude plugin add agestra
13
+ ```
14
+
15
+ 끝. Agestra가 첫 사용 시 사용 가능한 공급자(Ollama, Gemini CLI, Codex CLI)를 자동 감지합니다.
16
+
17
+ ### 사전 요구사항
18
+
19
+ 최소 하나의 AI 공급자가 설치되어야 합니다:
20
+
21
+ | 공급자 | 설치 | 유형 |
22
+ |--------|------|------|
23
+ | [Ollama](https://ollama.com/) | `curl -fsSL https://ollama.com/install.sh \| sh` | 로컬 LLM |
24
+ | [Gemini CLI](https://github.com/google-gemini/gemini-cli) | `npm install -g @google/gemini-cli` | 클라우드 |
25
+ | [Codex CLI](https://github.com/openai/codex) | `npm install -g @openai/codex` | 클라우드 |
26
+
27
+ ---
28
+
29
+ ## 철학
30
+
31
+ **멀티 AI는 검증을 위한 것이지, 토큰 절약을 위한 것이 아닙니다.** 리뷰, 설계 탐색, 아이디어 발굴 워크플로우는 검증 프로세스로 설계되었습니다 — 속도를 위한 병렬화가 아니라, 사각지대를 잡기 위해 여러 AI 공급자로부터 독립적인 의견을 얻는 것입니다.
32
+
33
+ ## 커맨드
34
+
35
+ | 커맨드 | 설명 |
36
+ |--------|------|
37
+ | `/agestra review [대상]` | 코드 품질, 보안, 통합 완성도 검증 |
38
+ | `/agestra idea [주제]` | 유사 프로젝트 비교를 통한 개선점 발굴 |
39
+ | `/agestra design [주제]` | 구현 전 아키텍처 및 설계 트레이드오프 탐색 |
40
+
41
+ 각 커맨드는 선택지를 제시합니다: **Claude만**, **비교** (여러 AI 나란히), **토론** (구조화된 멀티AI 논의), **기타** (사용자 지정).
42
+
43
+ ## 에이전트
44
+
45
+ | 에이전트 | 모델 | 역할 |
46
+ |----------|------|------|
47
+ | `reviewer` | Opus | 엄격한 품질 검증 — 보안, 고아 시스템, 스펙 이탈, 테스트 공백 |
48
+ | `designer` | Opus | 아키텍처 탐색 — 소크라테스식 질문, 트레이드오프 분석 |
49
+ | `ideator` | Sonnet | 개선점 발굴 — 웹 리서치, 경쟁 분석 |
50
+ | `moderator` | Sonnet | 토론 진행 — 중립, 턴 관리, 합의 판정 |
51
+
52
+ ---
53
+
54
+ ## 아키텍처
55
+
56
+ Turborepo 모노레포, 8개 패키지:
57
+
58
+ | 패키지 | 설명 |
59
+ |--------|------|
60
+ | `@agestra/core` | `AIProvider` 인터페이스, 레지스트리, 설정 로더, CLI 러너, 원자적 쓰기, 작업 큐 |
61
+ | `@agestra/provider-ollama` | Ollama HTTP 어댑터 (모델 자동 감지) |
62
+ | `@agestra/provider-gemini` | Google Gemini CLI 어댑터 |
63
+ | `@agestra/provider-codex` | OpenAI Codex CLI 어댑터 |
64
+ | `@agestra/agents` | 토론 엔진, 작업 분배기, 교차 검증기, 세션 관리자 |
65
+ | `@agestra/workspace` | 코드 리뷰 워크플로우용 문서 관리자 |
66
+ | `@agestra/memory` | GraphRAG — FTS5 + 벡터 + 지식 그래프 하이브리드 검색, 실패 추적 |
67
+ | `@agestra/mcp-server` | MCP 프로토콜 레이어, 28개 도구, 디스패치 |
68
+
69
+ ### 설계 원칙
70
+
71
+ - **공급자 추상화** — 모든 백엔드가 `AIProvider`(`chat`, `healthCheck`, `getCapabilities`)를 구현. 기존 코드 수정 없이 새 공급자 추가 가능.
72
+ - **제로 설정** — 시작 시 공급자를 자동 감지. 수동 설정 불필요.
73
+ - **플러그인 네이티브** — Claude Code 플러그인으로 설치. Skills, hooks, MCP 서버가 함께 번들.
74
+ - **모듈형 디스패치** — 각 도구 카테고리가 `getTools()` + `handleTool()`을 내보내는 독립 모듈. 서버가 동적으로 수집·디스패치.
75
+ - **원자적 쓰기** — 모든 파일 연산이 임시 파일 → rename 방식. 크래시 시 손상 방지.
76
+ - **실패 추적** — 실패한 접근법이 GraphRAG에 자동 기록, 이후 프롬프트에 주입.
77
+
78
+ ---
79
+
80
+ ## 도구 (28개)
81
+
82
+ ### AI 채팅 (3개)
83
+
84
+ | 도구 | 설명 |
85
+ |------|------|
86
+ | `ai_chat` | 특정 공급자와 채팅 |
87
+ | `ai_analyze_files` | 파일을 디스크에서 읽어 공급자에게 질문과 함께 전송 |
88
+ | `ai_compare` | 같은 프롬프트를 여러 공급자에 보내 응답 비교 |
89
+
90
+ ### 에이전트 오케스트레이션 (9개)
91
+
92
+ | 도구 | 설명 |
93
+ |------|------|
94
+ | `agent_debate_start` | 다중 공급자 토론 시작 (논블로킹, 품질 루프 + 검증자 옵션) |
95
+ | `agent_debate_status` | 토론 상태 및 트랜스크립트 확인 |
96
+ | `agent_debate_create` | 턴 기반 토론 세션 생성 (토론 ID 반환) |
97
+ | `agent_debate_turn` | 공급자 1턴 실행; `provider: "claude"`로 Claude 독립 참여 지원 |
98
+ | `agent_debate_conclude` | 토론 종료 및 최종 트랜스크립트 생성 |
99
+ | `agent_assign_task` | 특정 공급자에게 작업 위임 |
100
+ | `agent_task_status` | 작업 완료 상태 및 결과 확인 |
101
+ | `agent_dispatch` | 공급자 간 병렬 작업 분배 (의존성 순서 지원) |
102
+ | `agent_cross_validate` | 출력 교차 검증 (에이전트 등급 검증자만 가능) |
103
+
104
+ ### 워크스페이스 (4개)
105
+
106
+ | 도구 | 설명 |
107
+ |------|------|
108
+ | `workspace_create_review` | 파일과 규칙이 포함된 코드 리뷰 문서 생성 |
109
+ | `workspace_request_review` | 공급자에게 문서 리뷰 요청 |
110
+ | `workspace_add_comment` | 리뷰에 코멘트 추가 |
111
+ | `workspace_read` | 리뷰 내용 읽기 |
112
+
113
+ ### 공급자 관리 (2개)
114
+
115
+ | 도구 | 설명 |
116
+ |------|------|
117
+ | `provider_list` | 공급자 목록 (상태, 능력 포함) |
118
+ | `provider_health` | 공급자 상태 체크 |
119
+
120
+ ### Ollama (2개)
121
+
122
+ | 도구 | 설명 |
123
+ |------|------|
124
+ | `ollama_models` | 설치된 모델 및 크기 목록 |
125
+ | `ollama_pull` | 모델 다운로드 |
126
+
127
+ ### 메모리 (6개)
128
+
129
+ | 도구 | 설명 |
130
+ |------|------|
131
+ | `memory_search` | 하이브리드 검색 (FTS5 + 벡터 + 그래프) |
132
+ | `memory_index` | 파일/디렉토리를 메모리에 인덱싱 |
133
+ | `memory_store` | 지식 노드 저장 (fact, decision, dead_end, finding) |
134
+ | `memory_dead_ends` | 이전 실패 접근법 검색 (반복 방지) |
135
+ | `memory_context` | 토큰 예산 내 관련 컨텍스트 조립 |
136
+ | `memory_add_edge` | 지식 노드 간 관계 엣지 생성 |
137
+
138
+ ### 작업 (2개)
139
+
140
+ | 도구 | 설명 |
141
+ |------|------|
142
+ | `cli_job_submit` | 장시간 CLI 작업을 백그라운드에 제출 |
143
+ | `cli_job_status` | 작업 상태 확인 및 출력 조회 |
144
+
145
+ ---
146
+
147
+ ## 설정
148
+
149
+ ### providers.config.json (선택)
150
+
151
+ Agestra는 시작 시 공급자를 자동 감지합니다. 수동 제어가 필요하면 프로젝트 루트에 `providers.config.json`을 생성하세요:
152
+
153
+ | 필드 | 설명 |
154
+ |------|------|
155
+ | `defaultProvider` | 미지정 시 사용할 공급자 ID |
156
+ | `providers[].id` | 고유 식별자 |
157
+ | `providers[].type` | `ollama`, `gemini-cli`, `codex-cli` |
158
+ | `providers[].enabled` | 시작 시 로드 여부 |
159
+ | `providers[].config` | 타입별 설정 (host, timeout 등) |
160
+
161
+ ### 런타임 데이터
162
+
163
+ `.agestra/` 아래 저장 (gitignore 대상):
164
+
165
+ | 경로 | 용도 |
166
+ |------|------|
167
+ | `.agestra/sessions/` | 토론 및 작업 세션 상태 |
168
+ | `.agestra/workspace/` | 코드 리뷰 문서 |
169
+ | `.agestra/memory.db` | GraphRAG SQLite 데이터베이스 |
170
+ | `.agestra/.jobs/` | 백그라운드 작업 큐 |
171
+
172
+ ---
173
+
174
+ ## 개발
175
+
176
+ ```bash
177
+ npm install # 의존성 설치
178
+ npm run build # 전체 빌드 (Turborepo)
179
+ npm test # 전체 테스트 (Vitest)
180
+ npm run bundle # 단일 파일 플러그인 번들 (esbuild)
181
+ npm run dev # 워치 모드
182
+ npm run lint # 린트 (ESLint)
183
+ npm run clean # dist/ 삭제
184
+ ```
185
+
186
+ ### 프로젝트 구조
187
+
188
+ ```
189
+ agestra/
190
+ ├── plugin.json # Claude Code 플러그인 매니페스트
191
+ ├── commands/
192
+ │ ├── review.md # /agestra review — 품질 검증
193
+ │ ├── idea.md # /agestra idea — 개선점 발굴
194
+ │ └── design.md # /agestra design — 아키텍처 탐색
195
+ ├── agents/
196
+ │ ├── reviewer.md # 엄격한 품질 검증자 (Opus)
197
+ │ ├── designer.md # 아키텍처 탐색자 (Opus)
198
+ │ ├── ideator.md # 개선점 발굴자 (Sonnet)
199
+ │ └── moderator.md # 토론 진행자 (Sonnet)
200
+ ├── skills/
201
+ │ └── provider-guide.md # 공급자 사용 가이드라인 (skill)
202
+ ├── hooks/
203
+ │ └── user-prompt-submit.md # 도구 추천 hook
204
+ ├── dist/
205
+ │ └── bundle.js # 단일 파일 MCP 서버 번들
206
+ ├── scripts/
207
+ │ └── bundle.mjs # esbuild 번들 스크립트
208
+ ├── packages/
209
+ │ ├── core/ # AIProvider 인터페이스, 레지스트리
210
+ │ ├── provider-ollama/ # Ollama HTTP 어댑터
211
+ │ ├── provider-gemini/ # Gemini CLI 어댑터
212
+ │ ├── provider-codex/ # Codex CLI 어댑터
213
+ │ ├── agents/ # 토론 엔진, 분배기, 교차 검증기
214
+ │ ├── workspace/ # 코드 리뷰 문서 관리자
215
+ │ ├── memory/ # GraphRAG: 하이브리드 검색, 실패 추적
216
+ │ └── mcp-server/ # MCP 서버, 28개 도구, 디스패치
217
+ ├── package.json # 워크스페이스 루트
218
+ └── turbo.json # Turborepo 파이프라인
219
+ ```
220
+
221
+ ### 새 공급자 추가
222
+
223
+ 1. `packages/provider-<이름>/`에 `AIProvider` 구현.
224
+ 2. `packages/mcp-server/src/index.ts`에 팩토리 추가.
225
+ 3. `npm run build && npm test`
226
+
227
+ ---
228
+
229
+ ## 제거
230
+
231
+ ```bash
232
+ claude plugin remove agestra
233
+ ```
234
+
235
+ 프로젝트에 잔여 파일 없음. 깔끔한 제거.
236
+
237
+ ---
238
+
239
+ ## 라이선스
240
+
241
+ GPL-3.0
package/README.md ADDED
@@ -0,0 +1,241 @@
1
+ # Agestra
2
+
3
+ **Agent + Orchestra** — A Claude Code plugin that orchestrates multiple AI providers.
4
+
5
+ [English](README.md) | [한국어](README.ko.md)
6
+
7
+ Agestra connects Ollama (local), Gemini CLI, and Codex CLI to Claude Code as pluggable providers, enabling multi-agent debates, parallel task dispatch, cross-validation, and a persistent GraphRAG memory system — all through 28 MCP tools.
8
+
9
+ ## Quick Start
10
+
11
+ ```bash
12
+ claude plugin add agestra
13
+ ```
14
+
15
+ That's it. Agestra auto-detects available providers (Ollama, Gemini CLI, Codex CLI) on first use.
16
+
17
+ ### Prerequisites
18
+
19
+ At least one AI provider must be installed:
20
+
21
+ | Provider | Install | Type |
22
+ |----------|---------|------|
23
+ | [Ollama](https://ollama.com/) | `curl -fsSL https://ollama.com/install.sh \| sh` | Local LLM |
24
+ | [Gemini CLI](https://github.com/google-gemini/gemini-cli) | `npm install -g @google/gemini-cli` | Cloud |
25
+ | [Codex CLI](https://github.com/openai/codex) | `npm install -g @openai/codex` | Cloud |
26
+
27
+ ---
28
+
29
+ ## Philosophy
30
+
31
+ **Multi-AI is for verification, not token savings.** The review, design exploration, and idea generation workflows are structured as validation processes — getting independent opinions from multiple AI providers to catch blind spots, not to parallelize for speed.
32
+
33
+ ## Commands
34
+
35
+ | Command | Description |
36
+ |---------|-------------|
37
+ | `/agestra review [target]` | Review code quality, security, and integration completeness |
38
+ | `/agestra idea [topic]` | Discover improvements by comparing with similar projects |
39
+ | `/agestra design [subject]` | Explore architecture and design trade-offs before implementation |
40
+
41
+ Each command presents a choice: **Claude only**, **Compare** (multiple AIs side-by-side), **Debate** (structured multi-AI discussion), or **Other** (user-specified).
42
+
43
+ ## Agents
44
+
45
+ | Agent | Model | Role |
46
+ |-------|-------|------|
47
+ | `reviewer` | Opus | Strict quality verifier — security, orphaned systems, spec drift, test gaps |
48
+ | `designer` | Opus | Architecture explorer — Socratic questioning, trade-off analysis |
49
+ | `ideator` | Sonnet | Improvement discoverer — web research, competitive analysis |
50
+ | `moderator` | Sonnet | Debate facilitator — neutral, manages turns, judges consensus |
51
+
52
+ ---
53
+
54
+ ## Architecture
55
+
56
+ Turborepo monorepo with 8 packages:
57
+
58
+ | Package | Description |
59
+ |---------|-------------|
60
+ | `@agestra/core` | `AIProvider` interface, registry, config loader, CLI runner, atomic writes, job queue |
61
+ | `@agestra/provider-ollama` | Ollama HTTP adapter with model detection |
62
+ | `@agestra/provider-gemini` | Google Gemini CLI adapter |
63
+ | `@agestra/provider-codex` | OpenAI Codex CLI adapter |
64
+ | `@agestra/agents` | Debate engine, task dispatcher, cross-validator, session manager |
65
+ | `@agestra/workspace` | Document manager for code review workflows |
66
+ | `@agestra/memory` | GraphRAG — FTS5 + vector + knowledge graph hybrid search, dead-end tracking |
67
+ | `@agestra/mcp-server` | MCP protocol layer, 28 tools, dispatch |
68
+
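The modular dispatch used by `@agestra/mcp-server` can be sketched roughly as follows — each tool category is a module exporting `getTools()` and `handleTool()`, and the server aggregates them. The two tool names are real; the surrounding types and the `dispatch` helper are illustrative assumptions, not Agestra's actual internals.

```typescript
// Illustrative shape of the getTools()/handleTool() module contract.
type ToolDef = { name: string; description: string };

interface ToolModule {
  getTools(): ToolDef[];
  // Returns a result promise if this module owns the tool, else null.
  handleTool(name: string, args: unknown): Promise<unknown> | null;
}

const providerTools: ToolModule = {
  getTools: () => [
    { name: "provider_list", description: "List providers" },
    { name: "provider_health", description: "Health check providers" },
  ],
  handleTool: (name) =>
    name.startsWith("provider_") ? Promise.resolve({ ok: true, tool: name }) : null,
};

// The server collects every module and routes a call to the first module
// that claims the tool name.
const modules: ToolModule[] = [providerTools /* , memoryTools, ... */];

function dispatch(name: string, args: unknown): Promise<unknown> {
  for (const m of modules) {
    const result = m.handleTool(name, args);
    if (result !== null) return result;
  }
  return Promise.reject(new Error(`Unknown tool: ${name}`));
}
```

Adding a new tool category is then just appending another module to the list — the dispatcher itself never changes.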
69
+ ### Design Principles
70
+
71
+ - **Provider abstraction** — All backends implement `AIProvider` (`chat`, `healthCheck`, `getCapabilities`). New providers require no existing code changes.
72
+ - **Zero-config** — Providers are auto-detected at startup. No manual configuration required.
73
+ - **Plugin-native** — Installed as a Claude Code plugin. Skills, hooks, and MCP server are bundled together.
74
+ - **Modular dispatch** — Each tool category is an independent module with `getTools()` + `handleTool()`. The server collects and dispatches dynamically.
75
+ - **Atomic writes** — All file operations use write-to-temp-then-rename to prevent corruption.
76
+ - **Dead-end tracking** — Failed approaches are recorded in GraphRAG and injected into future prompts.
77
+
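The provider abstraction above can be sketched as a minimal interface plus registry. The method names (`chat`, `healthCheck`, `getCapabilities`) come from this README; the parameter and return shapes are illustrative assumptions.

```typescript
// Minimal sketch of the AIProvider contract; shapes are assumptions.
interface AIProvider {
  id: string;
  chat(prompt: string): Promise<string>;
  healthCheck(): Promise<boolean>;
  getCapabilities(): string[];
}

class ProviderRegistry {
  private providers = new Map<string, AIProvider>();
  register(p: AIProvider): void {
    this.providers.set(p.id, p);
  }
  get(id: string): AIProvider | undefined {
    return this.providers.get(id);
  }
  list(): string[] {
    return [...this.providers.keys()];
  }
}

// A stub backend: registering it requires no change to existing code.
const echo: AIProvider = {
  id: "echo",
  chat: async (prompt) => `echo: ${prompt}`,
  healthCheck: async () => true,
  getCapabilities: () => ["chat"],
};

const registry = new ProviderRegistry();
registry.register(echo);
```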
78
+ ---
79
+
80
+ ## Tools (28)
81
+
82
+ ### AI Chat (3)
83
+
84
+ | Tool | Description |
85
+ |------|-------------|
86
+ | `ai_chat` | Chat with a specific provider |
87
+ | `ai_analyze_files` | Read files from disk and send contents with a question to a provider |
88
+ | `ai_compare` | Send the same prompt to multiple providers, compare responses |
89
+
90
+ ### Agent Orchestration (9)
91
+
92
+ | Tool | Description |
93
+ |------|-------------|
94
+ | `agent_debate_start` | Start a multi-provider debate (non-blocking, optional quality loop + validator) |
95
+ | `agent_debate_status` | Check debate status and transcript |
96
+ | `agent_debate_create` | Create a turn-based debate session (returns debate ID) |
97
+ | `agent_debate_turn` | Execute one provider's turn; supports `provider: "claude"` for Claude's independent participation |
98
+ | `agent_debate_conclude` | End a debate and generate final transcript |
99
+ | `agent_assign_task` | Delegate a task to a specific provider |
100
+ | `agent_task_status` | Check task completion and result |
101
+ | `agent_dispatch` | Distribute tasks across providers in parallel (dependency ordering) |
102
+ | `agent_cross_validate` | Cross-validate outputs (agent-tier validators only) |
103
+
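A typical turn-based flow chains `agent_debate_create` → `agent_debate_turn` (once per provider) → `agent_debate_conclude`. The sketch below assumes a hypothetical `callTool` helper standing in for however your MCP client invokes a tool; the argument shapes are illustrative, not Agestra's actual schemas.

```typescript
// Hypothetical sketch of the debate lifecycle; callTool is a stand-in.
type ToolCall = { name: string; args: Record<string, unknown> };

const calls: ToolCall[] = [];
async function callTool(
  name: string,
  args: Record<string, unknown>
): Promise<Record<string, unknown>> {
  calls.push({ name, args }); // record the call for illustration
  if (name === "agent_debate_create") return { debateId: "debate-1" };
  return { ok: true };
}

async function runDebate(topic: string, providers: string[]) {
  const { debateId } = (await callTool("agent_debate_create", {
    topic,
    providers,
  })) as { debateId: string };
  for (const provider of providers) {
    // One turn per provider; "claude" participates like any other provider.
    await callTool("agent_debate_turn", { debateId, provider });
  }
  return callTool("agent_debate_conclude", { debateId }); // final transcript
}

runDebate("cache design", ["ollama", "claude"]);
```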
104
+ ### Workspace (4)
105
+
106
+ | Tool | Description |
107
+ |------|-------------|
108
+ | `workspace_create_review` | Create a code review document with files and rules |
109
+ | `workspace_request_review` | Request a provider to review a document |
110
+ | `workspace_add_comment` | Add a comment to a review |
111
+ | `workspace_read` | Read review contents |
112
+
113
+ ### Provider Management (2)
114
+
115
+ | Tool | Description |
116
+ |------|-------------|
117
+ | `provider_list` | List providers with status and capabilities |
118
+ | `provider_health` | Run a health check on one or all providers |
119
+
120
+ ### Ollama (2)
121
+
122
+ | Tool | Description |
123
+ |------|-------------|
124
+ | `ollama_models` | List installed models with sizes |
125
+ | `ollama_pull` | Download a model |
126
+
127
+ ### Memory (6)
128
+
129
+ | Tool | Description |
130
+ |------|-------------|
131
+ | `memory_search` | Hybrid retrieval (FTS5 + vector + graph) |
132
+ | `memory_index` | Index files/directories into memory |
133
+ | `memory_store` | Store a knowledge node (fact, decision, dead_end, finding) |
134
+ | `memory_dead_ends` | Search previous failures to avoid repeating them |
135
+ | `memory_context` | Assemble relevant context within a token budget |
136
+ | `memory_add_edge` | Create relationship edges between knowledge nodes |
137
+
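One common way to merge ranked lists from heterogeneous retrievers (keyword, vector, graph) is reciprocal rank fusion. This is a generic sketch of that idea, not necessarily the fusion `@agestra/memory` implements.

```typescript
// Reciprocal rank fusion: each list contributes 1/(k + rank) per item,
// so items ranked well by several retrievers float to the top.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1]) // highest fused score first
    .map(([id]) => id);
}

// "n2" ranks well in both lists, so it fuses ahead of the others.
const fused = reciprocalRankFusion([
  ["n1", "n2", "n3"], // keyword (FTS5) order
  ["n2", "n4", "n1"], // vector (semantic) order
]);
```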
138
+ ### Jobs (2)
139
+
140
+ | Tool | Description |
141
+ |------|-------------|
142
+ | `cli_job_submit` | Submit a long-running CLI task to the background |
143
+ | `cli_job_status` | Check job status and output |
144
+
145
+ ---
146
+
147
+ ## Configuration
148
+
149
+ ### providers.config.json (Optional)
150
+
151
+ Agestra auto-detects providers at startup. For manual control, create `providers.config.json` in the project root:
152
+
153
+ | Field | Description |
154
+ |-------|-------------|
155
+ | `defaultProvider` | Provider ID when none specified |
156
+ | `providers[].id` | Unique identifier |
157
+ | `providers[].type` | `ollama`, `gemini-cli`, or `codex-cli` |
158
+ | `providers[].enabled` | Load at startup |
159
+ | `providers[].config` | Type-specific settings (host, timeout, etc.) |
160
+
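A minimal example using the fields above. The `host` key mirrors Ollama's default endpoint; the exact keys accepted under `config` depend on the provider adapter, so treat this as a sketch rather than a complete schema:

```json
{
  "defaultProvider": "local-ollama",
  "providers": [
    {
      "id": "local-ollama",
      "type": "ollama",
      "enabled": true,
      "config": { "host": "http://localhost:11434" }
    },
    {
      "id": "gemini",
      "type": "gemini-cli",
      "enabled": false,
      "config": {}
    }
  ]
}
```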
161
+ ### Runtime Data
162
+
163
+ Stored under `.agestra/` (gitignored):
164
+
165
+ | Path | Purpose |
166
+ |------|---------|
167
+ | `.agestra/sessions/` | Debate and task session state |
168
+ | `.agestra/workspace/` | Code review documents |
169
+ | `.agestra/memory.db` | GraphRAG SQLite database |
170
+ | `.agestra/.jobs/` | Background job queue |
171
+
172
+ ---
173
+
174
+ ## Development
175
+
176
+ ```bash
177
+ npm install # Install dependencies
178
+ npm run build # Build all packages (Turborepo)
179
+ npm test # Run all tests (Vitest)
180
+ npm run bundle # Build single-file plugin bundle (esbuild)
181
+ npm run dev # Watch mode
182
+ npm run lint # Lint (ESLint)
183
+ npm run clean # Remove dist/
184
+ ```
185
+
186
+ ### Project Structure
187
+
188
+ ```
189
+ agestra/
190
+ ├── plugin.json # Claude Code plugin manifest
191
+ ├── commands/
192
+ │ ├── review.md # /agestra review — quality verification
193
+ │ ├── idea.md # /agestra idea — improvement discovery
194
+ │ └── design.md # /agestra design — architecture exploration
195
+ ├── agents/
196
+ │ ├── reviewer.md # Strict quality verifier (Opus)
197
+ │ ├── designer.md # Architecture explorer (Opus)
198
+ │ ├── ideator.md # Improvement discoverer (Sonnet)
199
+ │ └── moderator.md # Debate facilitator (Sonnet)
200
+ ├── skills/
201
+ │ └── provider-guide.md # Provider usage guidelines (skill)
202
+ ├── hooks/
203
+ │ └── user-prompt-submit.md # Tool recommendation hook
204
+ ├── dist/
205
+ │ └── bundle.js # Single-file MCP server bundle
206
+ ├── scripts/
207
+ │ └── bundle.mjs # esbuild bundle script
208
+ ├── packages/
209
+ │ ├── core/ # AIProvider interface, registry
210
+ │ ├── provider-ollama/ # Ollama HTTP adapter
211
+ │ ├── provider-gemini/ # Gemini CLI adapter
212
+ │ ├── provider-codex/ # Codex CLI adapter
213
+ │ ├── agents/ # Debate engine, dispatcher, cross-validator
214
+ │ ├── workspace/ # Code review document manager
215
+ │ ├── memory/ # GraphRAG: hybrid search, dead-end tracking
216
+ │ └── mcp-server/ # MCP server, 28 tools, dispatch
217
+ ├── package.json # Workspace root
218
+ └── turbo.json # Turborepo pipeline
219
+ ```
220
+
221
+ ### Adding a Provider
222
+
223
+ 1. Create `packages/provider-<name>/` implementing `AIProvider`.
224
+ 2. Add a factory in `packages/mcp-server/src/index.ts`.
225
+ 3. `npm run build && npm test`
226
+
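A new provider module might look roughly like this. The method names match the `AIProvider` contract described above; everything else — the class name, the `execFile` usage, and the CLI flags — is a hypothetical placeholder (the real adapters go through `@agestra/core`'s CLI runner):

```typescript
// Hypothetical skeleton for packages/provider-foo/src/index.ts.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

export class FooProvider {
  constructor(readonly id: string, private bin = "foo") {}

  async chat(prompt: string): Promise<string> {
    // Shell out to the provider's CLI and return its stdout.
    const { stdout } = await run(this.bin, ["--prompt", prompt]);
    return stdout.trim();
  }

  async healthCheck(): Promise<boolean> {
    // The provider is healthy if its binary is installed and responds.
    try {
      await run(this.bin, ["--version"]);
      return true;
    } catch {
      return false;
    }
  }

  getCapabilities(): string[] {
    return ["chat"];
  }
}
```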
227
+ ---
228
+
229
+ ## Uninstall
230
+
231
+ ```bash
232
+ claude plugin remove agestra
233
+ ```
234
+
235
+ No residual files in your project. Clean removal.
236
+
237
+ ---
238
+
239
+ ## License
240
+
241
+ GPL-3.0
package/agents/designer.md ADDED
@@ -0,0 +1,78 @@
1
+ ---
2
+ name: designer
3
+ description: 아키텍처 탐색, 설계 트레이드오프 논의, 구현 전 방향 수립에 사용. 소크라테스식 질문.
4
+ model: claude-opus-4-6
5
+ ---
6
+
7
+ <Role>
8
+ You are a pre-implementation design explorer. Your job is to help the user find the right architecture before any code is written. You use Socratic questioning to understand intent, explore the codebase for existing patterns, propose multiple approaches with trade-offs, and produce a design document.
9
+ </Role>
10
+
11
+ <Scope>
12
+ You design features and systems **for the current project** (the codebase you're running in). If the user's request is outside this project's scope — a new product idea, a business question, or something unrelated to this codebase — say so directly:
13
+
14
+ > "This is outside the current project's scope. I design features within this codebase. If you're looking for project ideas, try `/agestra idea` instead."
15
+
16
+ Do not attempt to design something that cannot be implemented in the current codebase.
17
+ </Scope>
18
+
19
+ <Workflow>
20
+ Follow these phases in order. Do not skip phases.
21
+
22
+ ### Phase 1: Understand
23
+ Ask questions to understand the user's idea. One question at a time. Focus on:
24
+ - What problem does this solve **within this project**?
25
+ - Who uses it?
26
+ - What are the constraints (performance, compatibility, scope)?
27
+ - What does "done" look like?
28
+
29
+ ### Phase 2: Explore
30
+ Search the codebase for relevant existing patterns:
31
+ - Use Glob to find related files by name
32
+ - Use Grep to find similar implementations
33
+ - Use Read to understand existing architecture
34
+ - Note conventions: naming, file organization, patterns used
35
+
36
+ ### Phase 3: Propose
37
+ Present 2-3 distinct approaches. For each:
38
+ - **Approach name** — one-line summary
39
+ - **How it works** — architecture overview
40
+ - **Fits with** — which existing patterns it aligns with
41
+ - **Trade-offs** — pros and cons
42
+ - **Effort** — relative complexity (low/medium/high)
43
+
44
+ ### Phase 4: Refine
45
+ Based on user feedback:
46
+ - Deep-dive into the selected approach
47
+ - Address concerns raised
48
+ - Detail component boundaries and data flow
49
+ - Identify risks and mitigation
50
+
51
+ ### Phase 5: Document
52
+ Write a design document to `docs/plans/` with this structure:
53
+
54
+ ```markdown
55
+ # [Feature/System Name] Design
56
+
57
+ ## Problem
58
+ ## Approach
59
+ ## Architecture
60
+ ## Components
61
+ ## Data Flow
62
+ ## Trade-offs & Decisions
63
+ ## Open Questions
64
+ ## Implementation Steps
65
+ ```
66
+ </Workflow>
67
+
68
+ <Constraints>
69
+ - Ask one question at a time. Do not dump multiple questions.
70
+ - Present approaches before solutions. Let the user choose direction.
71
+ - Always explore the codebase before proposing — do not design in a vacuum.
72
+ - Document all decisions made during the conversation in the final design document.
73
+ - Do not write implementation code. Design documents only.
74
+ </Constraints>
75
+
76
+ <Output_Format>
77
+ Your final deliverable is a design document in `docs/plans/` following the template above. The document should be self-contained — someone reading it without conversation context should understand the design fully.
78
+ </Output_Format>
package/agents/ideator.md ADDED
@@ -0,0 +1,113 @@
1
+ ---
2
+ name: ideator
3
+ description: 유사 프로젝트 비교, 사용자 불만 수집, 개선점 발굴, 새 기능 탐색에 사용.
4
+ model: claude-sonnet-4-6
5
+ ---
6
+
7
+ <Role>
8
+ You are an idea and improvement discoverer. You research similar projects, collect user complaints and feature requests, compare capabilities, and generate actionable suggestions. You combine web research with codebase understanding to find opportunities.
9
+ </Role>
10
+
11
+ <Scope>
12
+ You operate in two modes based on context:
13
+
14
+ **Mode A: Existing project** — The codebase has a README or meaningful code.
15
+ Research improvements, missing features, and competitive gaps for this project.
16
+
17
+ **Mode B: New project** — The codebase is empty/new, but the user has a seed idea (e.g., "글쓰는 툴 만들고 싶어", "I want to build a writing tool").
18
+ Research the landscape: what already exists, what users complain about, what gaps remain. Help the user shape their idea by showing what's out there.
19
+
20
+ **Out of scope:** Requests with no seed idea at all (e.g., "돈 벌리는 거 뭐 없을까?", "what should I build?"). You need at least a domain or concept to anchor your research. Say so:
21
+
22
+ > "I need at least a rough idea to research — a domain, a tool type, or a problem you want to solve. For example: 'a writing tool', 'a CLI for deployment', 'something for managing bookmarks'."
23
+ </Scope>
24
+
25
+ <Workflow>
26
+
27
+ ### Phase 1: Understand Scope
28
+ Determine which mode to operate in:
29
+
30
+ **If existing project (Mode A):**
31
+ - Read the project's README and key files to understand what it does
32
+ - Use Glob and Grep to map the current feature set
33
+ - Identify the project's category and target audience
34
+
35
+ **If new project with seed idea (Mode B):**
36
+ - Clarify the seed idea: what domain? what type of tool? who would use it?
37
+ - Use this as the anchor for all subsequent research
38
+ - Skip codebase exploration (there's nothing to explore)
39
+
40
+ ### Phase 2: Research Similar Projects
41
+ - Use WebSearch to find similar tools, libraries, and projects
42
+ - Look for: direct competitors, adjacent tools, inspirational projects
43
+ - Collect names, URLs, and key differentiators
44
+
45
+ ### Phase 3: Collect Pain Points
46
+ - WebSearch for complaints about similar tools (GitHub issues, forums, discussions)
47
+ - WebFetch relevant issue pages and discussion threads
48
+ - Identify recurring themes in user feedback
49
+ - Note what users wish existed but doesn't
50
+
51
+ ### Phase 4: Feature Comparison
52
+ Build a comparison table:
53
+
54
+ | Feature | This Project | Competitor A | Competitor B |
55
+ |---------|-------------|-------------|-------------|
56
+ | Feature 1 | Yes/No | Yes/No | Yes/No |
57
+
58
+ ### Phase 5: Generate Suggestions
59
+ For each suggestion:
60
+ - **Title** — clear, actionable name
61
+ - **Category** — UX, Performance, Feature, Integration, DX
62
+ - **Source** — where this idea came from (competitor, user complaint, own analysis)
63
+ - **Priority** — HIGH / MEDIUM / LOW with rationale
64
+ - **Effort** — estimated complexity
65
+ - **Description** — what it does and why it matters
66
+
67
+ ### Phase 6: Prioritized Recommendations
68
+ Present a ranked list with:
69
+ 1. Quick wins (high impact, low effort)
70
+ 2. Strategic investments (high impact, high effort)
71
+ 3. Nice-to-haves (low impact, low effort)
72
+ </Workflow>
73
+
74
+ <Tool_Usage>
75
+ - **WebSearch**: Find similar projects, user complaints, feature discussions
76
+ - **WebFetch**: Read specific pages for detailed analysis
77
+ - **Read, Glob, Grep**: Understand current project capabilities
78
+ </Tool_Usage>
79
+
80
+ <Output_Format>
81
+ ## Research Summary
82
+
83
+ ### Similar Projects
84
+ (list with URLs and key features)
85
+
86
+ ### User Pain Points
87
+ (categorized complaints from research)
88
+
89
+ ### Feature Comparison
90
+ (table)
91
+
92
+ ### Recommendations
93
+
94
+ #### Quick Wins
95
+ 1. ...
96
+
97
+ #### Strategic Investments
98
+ 1. ...
99
+
100
+ #### Nice-to-Haves
101
+ 1. ...
102
+
103
+ ### Sources
104
+ - [Source 1](url)
105
+ - [Source 2](url)
106
+ </Output_Format>
107
+
108
+ <Constraints>
109
+ - Always include source URLs for claims about other projects.
110
+ - Do not fabricate features of competitors — verify via web research.
111
+ - Prioritize actionable suggestions over theoretical improvements.
112
+ - Present findings in the user's language.
113
+ </Constraints>