@cliangdev/flux-plugin 0.0.0-dev.1db9c6c

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json ADDED
@@ -0,0 +1,65 @@
+ {
+   "name": "@cliangdev/flux-plugin",
+   "version": "0.0.0-dev.1db9c6c",
+   "description": "Claude Code plugin for AI-first workflow orchestration with MCP server",
+   "type": "module",
+   "main": "./dist/server/index.js",
+   "bin": {
+     "flux-plugin": "./bin/install.cjs"
+   },
+   "files": [
+     "bin/",
+     "dist/",
+     "skills/",
+     "commands/"
+   ],
+   "scripts": {
+     "dev": "bun run src/server/index.ts",
+     "build": "bun build src/server/index.ts --outdir dist/server --target node --external better-sqlite3",
+     "postbuild": "node -e \"const fs=require('fs');const f='dist/server/index.js';const c=fs.readFileSync(f,'utf-8');if(!c.startsWith('#!/usr/bin/env node')){fs.writeFileSync(f,'#!/usr/bin/env node\\n'+c)}\"",
+     "build:compile": "bun build --compile --outfile bin/flux-server src/server/index.ts && bun build --compile --outfile bin/flux-status src/status-line/index.ts",
+     "build:compile:server": "bun build --compile --outfile bin/flux-server src/server/index.ts",
+     "build:compile:status": "bun build --compile --outfile bin/flux-status src/status-line/index.ts",
+     "prepublishOnly": "bun run build",
+     "test": "bun test",
+     "test:linear-description": "bun run src/server/adapters/__tests__/linear-description-test.ts",
+     "typecheck": "tsc --noEmit",
+     "lint": "biome check .",
+     "lint:fix": "biome check --write .",
+     "format": "biome format --write .",
+     "verify-release": "bun run scripts/verify-release.ts"
+   },
+   "repository": {
+     "type": "git",
+     "url": "https://github.com/cliangdev/flux-plugin.git"
+   },
+   "keywords": [
+     "claude",
+     "claude-code",
+     "mcp",
+     "model-context-protocol",
+     "workflow",
+     "orchestration",
+     "ai",
+     "productivity",
+     "task-management"
+   ],
+   "author": "Chris Liang <chris@cliangdev.com>",
+   "license": "MIT",
+   "devDependencies": {
+     "@biomejs/biome": "^2.3.11",
+     "@types/better-sqlite3": "^7.6.13",
+     "@types/bun": "^1.3.6",
+     "typescript": "^5.0.0"
+   },
+   "dependencies": {
+     "@linear/sdk": "^70.0.0",
+     "@modelcontextprotocol/sdk": "^1.25.2",
+     "better-sqlite3": "^12.6.2",
+     "chalk": "^5.4.1",
+     "zod": "^4.3.5"
+   },
+   "publishConfig": {
+     "access": "public"
+   }
+ }
@@ -0,0 +1,312 @@
+ ---
+ name: flux:agent-creator
+ description: Guide for creating effective subagents. Use when users want to create a new agent that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations.
+ user-invocable: false
+ ---
+
+ # Agent Creator Skill
+
+ This skill guides the creation of well-designed Claude Code subagents following Anthropic's best practices.
+
+ ## When to Use This Skill
+
+ - User wants to create a new subagent
+ - User needs to automate a repetitive workflow
+ - User wants to add domain-specific expertise to Claude
+ - User is building agents for a plugin
+
+ ## Core Principles (from Anthropic)
+
+ ### 1. Start Simple
+ "Find the simplest solution possible, and only increase complexity when needed."
+
+ **Complexity ladder:**
+ 1. Single optimized prompt (try this first)
+ 2. Prompt chaining (fixed sequence of steps)
+ 3. Routing (classify input → specialized handler)
+ 4. Orchestrator-workers (dynamic delegation)
+ 5. Full autonomous agent (only if truly needed)
+
+ ### 2. When Agents Excel
+ - Open-ended problems where steps are unpredictable
+ - Tasks requiring dynamic tool selection
+ - Work that benefits from iterative refinement
+ - Multi-step processes with branching logic
+
+ ### 3. When to Avoid Agents
+ - Fixed, predictable workflows → use prompt chaining
+ - Single-domain tasks → use a focused prompt
+ - Tasks with clear evaluation criteria → use evaluator-optimizer pattern
+
+ ## Subagent Architecture Patterns
+
+ ### Pattern 1: Specialist Agent
+ Single-purpose expert for a specific domain.
+
+ ```markdown
+ ---
+ name: {domain}-specialist
+ description: Expert in {domain}. Use when {trigger condition}.
+ tools: {minimal required tools}
+ model: sonnet
+ ---
+
+ You are an expert in {domain}.
+
+ When invoked:
+ 1. {First action}
+ 2. {Second action}
+ 3. {Output format}
+
+ Focus on: {core responsibilities}
+ Avoid: {out of scope items}
+ ```
+
+ **Use when:** Task requires deep expertise in one area.
+
+ ### Pattern 2: Workflow Agent
+ Orchestrates a multi-step process with defined stages.
+
+ ```markdown
+ ---
+ name: {workflow}-workflow
+ description: Runs the {workflow} process. Use when {trigger}.
+ tools: {tools for all stages}
+ model: sonnet
+ ---
+
+ You orchestrate the {workflow} process.
+
+ ## Workflow Stages
+
+ ### Stage 1: {Name}
+ {What to do}
+ {Success criteria}
+
+ ### Stage 2: {Name}
+ {What to do}
+ {Success criteria}
+
+ ### Stage 3: {Name}
+ {What to do}
+ {Success criteria}
+
+ ## Completion
+ {How to know when done}
+ {What to output}
+ ```
+
+ **Use when:** Task has predictable stages but needs intelligent execution within each.
+
+ ### Pattern 3: Evaluator Agent
+ Reviews and provides structured feedback.
+
+ ```markdown
+ ---
+ name: {domain}-reviewer
+ description: Reviews {what} for {criteria}. Use proactively after {trigger}.
+ tools: Read, Grep, Glob
+ model: inherit
+ ---
+
+ You are a {domain} reviewer ensuring {quality standard}.
+
+ ## Review Checklist
+ - [ ] {Criterion 1}
+ - [ ] {Criterion 2}
+ - [ ] {Criterion 3}
+
+ ## Output Format
+ Organize feedback by priority:
+ 1. **Critical** (must fix)
+ 2. **Warning** (should fix)
+ 3. **Suggestion** (consider)
+ ```
+
+ **Use when:** Clear evaluation criteria exist and feedback improves outcomes.
+
+ ### Pattern 4: Research Agent
+ Gathers and synthesizes information.
+
+ ```markdown
+ ---
+ name: {domain}-researcher
+ description: Researches {topic area}. Use when encountering unfamiliar {domain}.
+ tools: Read, Glob, Grep, WebFetch, WebSearch
+ model: sonnet
+ ---
+
+ You are a research specialist for {domain}.
+
+ When asked to research:
+ 1. Identify key questions to answer
+ 2. Search for authoritative sources
+ 3. Synthesize findings into actionable insights
+ 4. Cite sources
+
+ ## Output Format
+ ### Summary
+ {1-2 sentence overview}
+
+ ### Key Findings
+ - {Finding 1}
+ - {Finding 2}
+
+ ### Recommendations
+ {How to apply findings}
+
+ ### Sources
+ - {Source 1}
+ - {Source 2}
+ ```
+
+ **Use when:** Tasks require gathering external information.
+
+ ## Best Practices
+
+ ### 1. Write Detailed Descriptions
+ The `description` field determines when Claude delegates to the subagent.
+
+ **Good:**
+ ```yaml
+ description: Expert code reviewer for TypeScript. Use proactively after writing or modifying any .ts or .tsx files. Focuses on type safety, best practices, and potential bugs.
+ ```
+
+ **Bad:**
+ ```yaml
+ description: Reviews code
+ ```
+
+ ### 2. Limit Tool Access
+ Grant only the tools the agent needs. Less is more.
+
+ | Agent Type | Typical Tools |
+ |------------|---------------|
+ | Read-only analyzer | `Read`, `Grep`, `Glob` |
+ | Code modifier | `Read`, `Edit`, `Grep`, `Glob` |
+ | Build/test runner | `Bash`, `Read`, `Glob` |
+ | Research agent | `WebFetch`, `WebSearch`, `Read` |
+
+ ### 3. Choose the Right Model
+
+ | Model | Use When |
+ |-------|----------|
+ | `haiku` | Fast, simple tasks; exploration; high-volume |
+ | `sonnet` | Balanced capability; most agents |
+ | `opus` | Complex reasoning; critical decisions |
+ | `inherit` | Match parent conversation |
+
+ ### 4. Make Agents Proactive
+ Add "use proactively" to description for autonomous invocation:
+
+ ```yaml
+ description: Test runner that executes after code changes. Use proactively when files in src/ are modified.
+ ```
+
+ ### 5. Define Clear Completion Criteria
+ Every agent should know when it's done:
+
+ ```markdown
+ ## Completion
+ You are done when:
+ - All tests pass
+ - Coverage > 80%
+ - No linting errors
+
+ Output a summary of what was accomplished.
+ ```
+
+ ### 6. Handle Edge Cases
+ Document what the agent should NOT do:
+
+ ```markdown
+ ## Boundaries
+ - Do NOT modify files outside src/
+ - Do NOT run commands that could affect production
+ - If unsure, ask for clarification instead of guessing
+ ```
+
+ ## Creating an Agent: Interview Flow
+
+ When helping create an agent, ask:
+
+ 1. **Purpose**: "What task should this agent handle?"
+ 2. **Trigger**: "When should it be invoked? (after code changes, on request, etc.)"
+ 3. **Tools**: "What capabilities does it need? (read files, edit, run commands, web search)"
+ 4. **Autonomy**: "Should it act proactively or wait to be called?"
+ 5. **Output**: "What should it produce when done?"
+
+ Then generate the appropriate pattern.
+
+ ## File Locations
+
+ | Scope | Location | Use Case |
+ |-------|----------|----------|
+ | User | `~/.claude/agents/` | Personal agents across all projects |
+ | Project | `.claude/agents/` | Shared with team via version control |
+ | Plugin | `{plugin}/agents/` | Distributed with plugin |
+
+ ## Example: Creating a Research Subagent
+
+ **User request:** "I need an agent that researches technologies when I mention something unfamiliar"
+
+ **Generated agent:**
+
+ ```markdown
+ ---
+ name: tech-researcher
+ description: Researches unfamiliar technologies, libraries, and APIs. Use proactively when user mentions tech that may need investigation, or when explicitly asked to research something.
+ tools: WebFetch, WebSearch, Read
+ model: sonnet
+ ---
+
+ You are a technology research specialist.
+
+ ## When to Activate
+ - User mentions a library, framework, or API you're uncertain about
+ - User explicitly asks "research X" or "what is X?"
+ - Confidence in technical recommendation < 70%
+
+ ## Research Process
+ 1. **Identify** the core questions to answer
+ 2. **Search** using WebSearch for recent, authoritative sources
+ 3. **Deep dive** using WebFetch on official docs, GitHub repos
+ 4. **Synthesize** findings relevant to user's context
+
+ ## Output Format
+
+ ### {Technology Name}
+
+ **What it is:** {1-sentence description}
+
+ **Key features:**
+ - {Feature 1}
+ - {Feature 2}
+
+ **Relevance to your project:**
+ {How this applies to what user is building}
+
+ **Recommendations:**
+ {Should they use it? Alternatives?}
+
+ **Sources:**
+ - [{Source title}]({url})
+ ```
+
+ ## Validation Checklist
+
+ Before finalizing an agent, verify:
+
+ - [ ] Description clearly states when to use it
+ - [ ] Tools are minimal and appropriate
+ - [ ] Model matches complexity needs
+ - [ ] Prompt includes clear instructions
+ - [ ] Output format is defined
+ - [ ] Completion criteria are specified
+ - [ ] Edge cases and boundaries are documented
+
+ ## References
+
+ - [Building Effective Agents](https://www.anthropic.com/engineering/building-effective-agents) - Anthropic's agent design principles
+ - [Claude Code Subagents](https://code.claude.com/docs/en/sub-agents) - Official subagent documentation
+ - [Awesome Claude Code Subagents](https://github.com/VoltAgent/awesome-claude-code-subagents) - Community examples
@@ -0,0 +1,158 @@
+ ---
+ name: flux:epic-template
+ description: Epic and task structure patterns for Flux. Use when breaking PRDs into epics and tasks. Epics should be self-contained with clear acceptance criteria.
+ user-invocable: false
+ ---
+
+ # Epic Template Skill
+
+ Epics are **self-contained work packages** that can be implemented independently.
+
+ ## Epic Structure
+
+ ```markdown
+ ## {Epic Title}
+
+ **Goal**: {One sentence: what does completing this epic achieve?}
+
+ **Scope**:
+ - IN: {What's included}
+ - OUT: {What's explicitly excluded}
+
+ **Tasks**:
+ 1. {Task title} - {brief description}
+ 2. {Task title} - {brief description}
+
+ **Acceptance Criteria**:
+ - [ ] {Testable criterion 1}
+ - [ ] {Testable criterion 2}
+
+ **Dependencies**: {Other epics this depends on, or "None"}
+ ```
+
+ ## Task Structure
+
+ ```markdown
+ ### {Task Title}
+
+ {1-2 sentences: what needs to be done}
+
+ **Acceptance Criteria**:
+ - [ ] {Specific, testable criterion}
+ - [ ] {Specific, testable criterion}
+
+ **Files**: {Key files to create/modify, if known}
+ ```
+
+ ## Guidelines
+
+ ### Epic Sizing
+ - **Too small**: "Add login button" → merge into larger epic
+ - **Right size**: "User Authentication" (3-7 tasks, 1-3 days work)
+ - **Too big**: "Build entire frontend" → split into multiple epics
+
+ ### Task Sizing
+ - One task = one commit
+ - Should take 30min - 4hrs
+ - Has clear "done" state
+
+ ### Dependency Order
+ - Order epics so dependencies come first
+ - Mark blocking dependencies explicitly
+ - Prefer parallel-safe epics when possible
+
+ ### Test Type Convention
+
+ Mark each acceptance criterion with its test type as a prefix:
+ - **`[auto]`** - Verified by automated test (unit, integration, e2e)
+ - **`[manual]`** - Requires human verification (include steps)
+
+ For manual criteria, add verification steps after `→ Verify:`:
+ ```
+ [manual] Dashboard displays correctly on mobile → Verify: Open on phone, check layout
+ ```
+
+ Examples:
+ - `[auto] API returns 401 for invalid credentials`
+ - `[auto] User record is created in database`
+ - `[manual] Error message is user-friendly → Verify: Read message aloud, is it clear?`
+ - `[manual] Loading animation feels smooth → Verify: Test on slow network`
+
+ Prefer `[auto]` wherever possible - automated tests are more reliable and repeatable.
+
+ ## Breaking Down PRDs
+
+ ### Step 1: Identify Epics
+ Read PRD features and group into logical work packages:
+ - Each P0 feature often maps to 1-2 epics
+ - Shared infrastructure (auth, database setup) = separate epic
+ - UI and backend for same feature can be same or separate epic
+
+ ### Step 2: Order by Dependencies
+ ```
+ Epic 1: Project Setup (no deps)
+ Epic 2: Database Schema (depends on 1)
+ Epic 3: User Auth (depends on 2)
+ Epic 4: Core Feature A (depends on 3)
+ Epic 5: Core Feature B (depends on 3) ← can parallel with 4
+ ```
+
+ ### Step 3: Break into Tasks
+ For each epic, create 3-7 tasks:
+ - Start with data/schema tasks
+ - Then business logic
+ - Then API/interface
+ - Finally integration/wiring
+
+ ### Example Breakdown
+
+ **PRD Feature**:
+ > **User Authentication**: Users can sign up and log in
+ > - Sign up with email, password, name
+ > - Login returns JWT
+ > - Forgot password flow
+
+ **Epic**:
+ ```markdown
+ ## User Authentication
+
+ **Goal**: Users can create accounts and authenticate.
+
+ **Scope**:
+ - IN: Sign up, login, JWT tokens, forgot password
+ - OUT: OAuth, 2FA, session management
+
+ **Tasks**:
+ 1. Create users table schema
+ 2. Implement sign up endpoint
+ 3. Implement login endpoint with JWT
+ 4. Add forgot password email flow
+ 5. Create auth middleware
+
+ **Acceptance Criteria**:
+ - [ ] User can sign up with email/password/name
+ - [ ] User can login and receive JWT
+ - [ ] Invalid credentials return 401
+ - [ ] Forgot password sends reset email
+
+ **Dependencies**: Database Setup epic
+ ```
+
+ ## MCP Integration
+
+ When breaking down a PRD, the typical tool sequence is (an illustrative call sequence follows this list):
+
+ 1. Create epic: `create_epic` with prd_ref, title, description
+ 2. Add criteria: `add_criteria` for each acceptance criterion
+ 3. Create tasks: `create_task` with epic_ref, title, description
+ 4. Add task criteria: `add_criteria` for each task criterion
+ 5. Add dependencies: `add_dependency` between epics if needed
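+
+ A minimal sketch of that sequence for the example epic above. Argument names other than `prd_ref`, `epic_ref`, `title`, and `description` are assumptions; the refs are illustrative. Check the actual tool schemas before relying on them:
+
+ ```
+ create_epic    { prd_ref: "MSA-P1", title: "User Authentication", description: "Users can create accounts and authenticate." }
+ add_criteria   { ref: "MSA-E1", criterion: "[auto] User can login and receive JWT" }
+ create_task    { epic_ref: "MSA-E1", title: "Implement login endpoint with JWT", description: "..." }
+ add_criteria   { ref: "MSA-T3", criterion: "[auto] Invalid credentials return 401" }
+ add_dependency { ref: "MSA-E2", depends_on: "MSA-E1" }
+ ```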
+
+ ## Workflow
+
+ 1. **Read PRD** → Understand features and scope
+ 2. **Draft epics** → Group features into work packages
+ 3. **Review with user** → Confirm epic structure
+ 4. **Create in MCP** → Use tools to persist
+ 5. **Break into tasks** → Detail each epic
+ 6. **Set dependencies** → Order the work
@@ -0,0 +1,132 @@
+ ---
+ name: flux:flux-orchestrator
+ description: Orchestrates Flux workflows based on project context
+ user-invocable: false
+ ---
+
+ # Flux Orchestrator Skill
+
+ This skill is automatically active when working in a Flux project. It provides context about available tools and workflow patterns.
+
+ ## Available MCP Tools
+
+ ### Query Tools
+ - `get_project_context` - Check if project initialized, get name/vision/prefix
+ - `get_stats` - Get PRD/epic/task counts by status
+ - `get_entity` - Fetch entity by ref with optional includes (criteria, tasks, dependencies)
+ - `query_entities` - Search entities by type, status, parent ref
+
+ ### Mutation Tools
+ - `init_project` - Initialize new .flux/ directory with project.json and database
+ - `create_prd` - Create a new PRD
+ - `create_epic` - Create an epic linked to a PRD
+ - `create_task` - Create a task linked to an epic
+ - `update_entity` - Update any entity's fields
+ - `update_status` - Update entity status with validation
+ - `delete_entity` - Delete entity with cascade
+
+ ### Relationship Tools
+ - `add_dependency` - Add dependency between epics or tasks
+ - `remove_dependency` - Remove a dependency
+ - `add_criteria` - Add acceptance criterion to epic or task
+ - `mark_criteria_met` - Mark criterion as satisfied
+
+ ## Workflow States
+
+ The Flux project progresses through these states:
+
+ 1. **Uninitialized** - No .flux/ directory
+    - Action: Run `/flux` to initialize
+
+ 2. **No PRDs** - Project initialized but empty
+    - Action: Run `/flux:prd` to create first PRD
+
+ 3. **PRD Draft** - PRD created but needs review
+    - Action: Review and submit for approval or refine
+
+ 4. **PRD Pending Review** - PRD submitted for review
+    - Action: Run critique agent, then approve or revise
+
+ 5. **PRD Reviewed** - Critique complete
+    - Action: Address feedback, then approve or revise to DRAFT
+
+ 6. **PRD Approved** - Ready for epic breakdown
+    - Action: Run `/flux:breakdown` to create epics
+
+ 7. **Breakdown Ready** - Epics and tasks created
+    - Action: Run `/flux:implement` to start coding
+
+ 8. **Implementation In Progress** - Tasks IN_PROGRESS
+    - Action: Continue implementing current task
+
+ 9. **Complete** - All tasks COMPLETED
+    - Action: Review and create PR
+
+ ## Entity References
+
+ All entities have a reference format: `{PREFIX}-{TYPE}{NUMBER}`
+
+ - PRD: `MSA-P1`, `MSA-P2`
+ - Epic: `MSA-E1`, `MSA-E2`
+ - Task: `MSA-T1`, `MSA-T2`
+
+ The prefix is generated from the project name during initialization.
+
+ ## Status Values
+
+ ### PRD Statuses (6-stage workflow)
+ - `DRAFT` - Initial state, being created/refined
+ - `PENDING_REVIEW` - Submitted for critique
+ - `REVIEWED` - Critique complete, awaiting approval
+ - `APPROVED` - Ready for epic breakdown
+ - `BREAKDOWN_READY` - Epics and tasks created
+ - `COMPLETED` - All epics done
+
+ ### Valid PRD Transitions
+ ```
+ DRAFT → PENDING_REVIEW
+ PENDING_REVIEW → REVIEWED | DRAFT (revise)
+ REVIEWED → APPROVED | DRAFT (revise)
+ APPROVED → BREAKDOWN_READY
+ BREAKDOWN_READY → COMPLETED
+ ```
+
+ ### Epic/Task Statuses
+ - `PENDING` - Not started
+ - `IN_PROGRESS` - Currently being worked on
+ - `COMPLETED` - Done
+
+ ## Confidence-Based Autonomy
+
+ The orchestrator uses confidence levels to determine autonomy:
+
+ | Confidence | Behavior | Example |
+ |------------|----------|---------|
+ | > 80% | Auto-execute, inform user | "I'm creating the epic structure..." |
+ | 50-80% | Suggest action, wait for confirmation | "Ready to break down into tasks. Proceed?" |
+ | < 50% | Ask clarifying question | "Should we research this technology first?" |
+
+ ### Confidence Indicators
+ - **High confidence (>80%)**: Clear next step, no ambiguity, user has been responsive
+ - **Medium confidence (50-80%)**: Reasonable next step, some uncertainty
+ - **Low confidence (<50%)**: Multiple valid paths, unclear requirements, unfamiliar tech
+
+ ## Available Subagents
+
+ ### Research Agent
+ - **Trigger**: Unfamiliar technology mentioned, confidence < 70%
+ - **Purpose**: Gather information about libraries, frameworks, APIs
+ - **Tools**: Context7, WebSearch, WebFetch
+
+ ### Critique Agent
+ - **Trigger**: PRD status becomes PENDING_REVIEW
+ - **Purpose**: Analyze feasibility, scope, risks
+ - **Output**: Structured critique with recommendations
+
+ ## Best Practices
+
+ 1. **Check context first** - Always call `get_project_context` before taking actions
+ 2. **Use refs, not IDs** - Tools accept human-readable refs like `MSA-E1`
+ 3. **Validate status transitions** - Use `update_status`, which enforces valid transitions
+ 4. **Include related data** - Use the `include` parameter to fetch nested entities in one call (see the sketch below)
+ 5. **Handle errors gracefully** - Tools return errors with codes; surface user-friendly messages
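+
+ A minimal sketch of practices 1 and 4 together. The call shapes and response fields are illustrative assumptions, not the exact tool schemas:
+
+ ```
+ get_project_context {}
+   → { initialized: true, name: "My Sample App", prefix: "MSA" }
+
+ get_entity { ref: "MSA-E1", include: ["criteria", "tasks", "dependencies"] }
+   → { ref: "MSA-E1", status: "IN_PROGRESS", criteria: [...], tasks: [...], dependencies: [] }
+ ```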