crewkit 0.1.0 → 1.1.0

@@ -0,0 +1,215 @@
1
+ # Adapter: Cursor
2
+
3
+ This adapter is executed during Phase 7, Step 10 of `/crewkit-setup`.
4
+ You are the AI. Follow every instruction in this file to generate Cursor-compatible context files.
5
+
6
+ **Input:** Read `.crewkit/last-scan.md` for the project profile. The Claude Code files generated in Steps 1-9 are your source of truth.
7
+ **Output:** Files under `.cursor/rules/` and `AGENTS.md` at the project root.
8
+
9
+ ---
10
+
11
+ ## Rules for this adapter
12
+
13
+ 1. All generated files MUST be in **English**.
14
+ 2. Do NOT duplicate `.ai/memory/` — it is shared across all IDEs. No transformation needed.
15
+ 3. `model:` frontmatter from `.claude/agents/*.md` is Claude Code-only — strip it.
16
+ 4. Cursor has no equivalent for skills/prompts — skip `.claude/skills/` entirely.
17
+ 5. Create `.cursor/rules/` directory if it does not exist.
18
+
19
+ ---
20
+
21
+ ## Step U1 — `.cursor/rules/project.md`
22
+
23
+ **Source:** `CLAUDE.md`
24
+ **Transformation:** Reformat for Cursor. Add required frontmatter. Remove agent/skill/hook sections that are Claude Code-specific.
25
+
26
+ **Required Cursor frontmatter:**
27
+ ```markdown
28
+ ---
29
+ description: "Project rules"
30
+ alwaysApply: true
31
+ ---
32
+ ```
33
+
34
+ **Expected output format:**
35
+ ```markdown
36
+ ---
37
+ description: "Project rules"
38
+ alwaysApply: true
39
+ ---
40
+ # [PROJECT NAME] — Project Rules
41
+
42
+ ## Overview
43
+ [1-2 sentences from CLAUDE.md overview — what the project is, main stack]
44
+ [Business domain: what it does, core entities, risk profile]
45
+
46
+ **Stack:** [stacks]
47
+ **Architecture:** [key patterns]
48
+
49
+ ---
50
+
51
+ ## Hard rules (apply to every response)
52
+
53
+ [Numbered list — copy from CLAUDE.md hard rules verbatim.]
54
+
55
+ 1. [Rule 1]
56
+ 2. [Rule 2]
57
+ ...
58
+
59
+ Details for each rule → `.ai/memory/conventions.md`
60
+
61
+ ---
62
+
63
+ ## Project Memory (`.ai/memory/`)
64
+
65
+ Load context on demand:
66
+
67
+ | File | When to load |
68
+ |------|-------------|
69
+ | `architecture.md` | Always — modules, layers, dependencies |
70
+ | `conventions.md` | Always — naming, patterns, anti-patterns |
71
+ | `commands.md` | When running build/test/deploy |
72
+ | `testing.md` | When creating or running tests |
73
+ | `lessons-{domain}.md` | When working on that domain |
74
+
75
+ ---
76
+
77
+ ## Output Format
78
+
79
+ Always return:
80
+ - **Summary** — what was done
81
+ - **Files changed** — list with brief description
82
+ - **Tests** — pass/fail count (if tests were run)
83
+ - **Risks / Next steps** — if any
84
+ ```
85
+
86
+ **What to REMOVE from CLAUDE.md:**
87
+ - `## Agent Discipline` section
88
+ - `## Skills (slash commands)` section
89
+ - `## Architect Decision Gate` section
90
+ - `## Test Safety Loop` section
91
+
92
+ ---
93
+
94
+ ## Step U2 — `.cursor/rules/*.md`
95
+
96
+ **Source:** `.claude/rules/*.md`
97
+ **Transformation:** Convert frontmatter to Cursor format. Keep glob patterns and all rule content unchanged.
98
+
99
+ Claude Code frontmatter format:
100
+ ```markdown
101
+ ---
102
+ description: "Node.js coding rules — applied when editing src/**/*.{js,ts}"
103
+ globs: "src/**/*.{js,ts}"
104
+ ---
105
+ ```
106
+
107
+ Cursor frontmatter format:
108
+ ```markdown
109
+ ---
110
+ description: "Node.js coding rules"
111
+ globs: "src/**/*.{js,ts}"
112
+ ---
113
+ ```
114
+
115
+ **Mapping:**
116
+ - `globs:` → `globs:` (keep as-is — Cursor uses the same key)
117
+ - `description:` → keep as-is, but shorten to the rule name without the "applied when editing..." suffix if present
118
+ - Body: copy verbatim
119
+
120
+ **File naming:** `.claude/rules/dotnet.md` → `.cursor/rules/dotnet.md`
121
+ Keep the same filename — just place it under `.cursor/rules/`.
122
+
123
+ **Example — source `.claude/rules/python.md`:**
124
+ ```markdown
125
+ ---
126
+ description: "Python coding rules — applied when editing **/*.py"
127
+ globs: "**/*.py"
128
+ ---
129
+
130
+ # Python Rules
131
+
132
+ - Use type hints on all function signatures
133
+ - Validate input with Pydantic models at API boundaries
134
+ ```
135
+
136
+ **Example — target `.cursor/rules/python.md`:**
137
+ ```markdown
138
+ ---
139
+ description: "Python coding rules"
140
+ globs: "**/*.py"
141
+ ---
142
+
143
+ # Python Rules
144
+
145
+ - Use type hints on all function signatures
146
+ - Validate input with Pydantic models at API boundaries
147
+ ```
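The description-shortening step above can be sketched as a small script. This is an illustrative helper, not part of crewkit; it assumes the `— applied when editing ...` suffix convention shown in the examples:

```python
import re

def convert_rule_frontmatter(text: str) -> str:
    """Shorten `description:` to the rule name; leave `globs:` untouched."""
    def shorten(match: re.Match) -> str:
        # Drop the "— applied when editing ..." suffix if present
        desc = match.group(1).split(" — applied when editing")[0]
        return f'description: "{desc}"'
    return re.sub(r'description:\s*"([^"]*)"', shorten, text)

source = '''---
description: "Python coding rules — applied when editing **/*.py"
globs: "**/*.py"
---'''
print(convert_rule_frontmatter(source))
```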
148
+
149
+ ---
150
+
151
+ ## Step U3 — `AGENTS.md` (project root)
152
+
153
+ **Source:** All `.claude/agents/*.md` files
154
+ **Transformation:** Concatenate all agents into a single markdown file with `##` sections. Strip `model:` frontmatter from each. Remove the `<!-- crewkit:context-start -->...<!-- crewkit:context-end -->` block from each agent. Keep the `name:` and `description:` from frontmatter and all agent instructions.
155
+
156
+ **Output format:**
157
+ ```markdown
158
+ # Agents
159
+
160
+ This file describes the AI agents available in this project.
161
+ Each agent has a specific role and scope. Invoke the appropriate agent for each task type.
162
+
163
+ ---
164
+
165
+ ## [Agent Name]
166
+
167
+ > [description from frontmatter]
168
+
169
+ [agent body — full instructions, stripped of model: and crewkit context block]
170
+
171
+ ---
172
+
173
+ ## [Agent Name 2]
174
+
175
+ > [description from frontmatter]
176
+
177
+ [agent body]
178
+
179
+ ---
180
+ ```
181
+
182
+ **Agent order:** explorer, architect, coder, tester, reviewer (same order as in `.claude/agents/`).
183
+
184
+ **What to strip from each agent:**
185
+ - `model:` frontmatter line
186
+ - `name:` frontmatter line (becomes the `## heading` instead)
187
+ - `description:` frontmatter line (becomes the `> blockquote` instead)
188
+ - The entire `<!-- crewkit:context-start -->...<!-- crewkit:context-end -->` block (inclusive)
189
+ - The YAML frontmatter delimiters (`---`) — the remaining frontmatter content moves into the `##` section body
190
+
191
+ **File location:** `AGENTS.md` at the project root (not under `.cursor/`).
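The per-agent transformation can be sketched as follows. A minimal sketch assuming flat `key: value` frontmatter; the helper name is illustrative:

```python
import re

def agent_to_section(agent_md: str) -> str:
    """Turn one .claude/agents/*.md file into an AGENTS.md section:
    name becomes the heading, description becomes the blockquote,
    model: is dropped, and the crewkit context block is removed
    inclusive of both markers."""
    front, body = re.match(r"---\n(.*?)\n---\n?(.*)", agent_md, re.DOTALL).groups()
    meta = dict(line.split(":", 1) for line in front.splitlines() if ":" in line)
    name = meta["name"].strip().strip('"')
    desc = meta["description"].strip().strip('"')
    body = re.sub(
        r"<!-- crewkit:context-start -->.*?<!-- crewkit:context-end -->",
        "",
        body,
        flags=re.DOTALL,
    ).strip()
    return f"## {name}\n\n> {desc}\n\n{body}\n"
```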
192
+
193
+ ---
194
+
195
+ ## Step U4 — Skills
196
+
197
+ **Source:** `.claude/skills/`
198
+ **Action:** Skip entirely. Cursor has no equivalent concept for skills or prompts.
199
+
200
+ Do NOT generate any file for this step. Log: "Cursor adapter: skills skipped (no Cursor equivalent)."
201
+
202
+ ---
203
+
204
+ ## Completion Checklist — Cursor Adapter
205
+
206
+ Before reporting done, verify each item:
207
+
208
+ - [ ] `.cursor/rules/project.md` — exists, has `alwaysApply: true` frontmatter, contains hard rules, does NOT contain Agent Discipline or slash command sections
209
+ - [ ] `.cursor/rules/` — one `.md` file per `.claude/rules/*.md` source file (plus `project.md`)
210
+ - [ ] `.cursor/rules/*.md` — each has `globs:` frontmatter matching the source rule file
211
+ - [ ] `AGENTS.md` — exists at project root, has a `##` section for each of the 5 agents
212
+ - [ ] `AGENTS.md` — no `model:` lines, no `crewkit:context-start` blocks, no YAML frontmatter delimiters
213
+ - [ ] `.ai/memory/` — NOT duplicated under `.cursor/` (shared, no copy needed)
214
+ - [ ] `.claude/skills/` — NOT copied (no Cursor equivalent, intentionally skipped)
215
+ - [ ] No Portuguese in any generated file
@@ -47,6 +47,7 @@ If your project uses `.claude/rules/` directory rules, they are loaded automatic
47
47
  - Do not create a new file when adding to an existing file achieves the same goal
48
48
  - Do not add `TODO` comments — either fix it now or leave it for the plan
49
49
  - **NEVER create test files** — test creation is the **tester agent's exclusive responsibility**
50
+ - **NEVER mark phases, tasks, or features as completed** in `napkin.md`, `state.md`, or any memory/status file — only the orchestrator does this, and only after tester PASS + reviewer APPROVED
50
51
 
51
52
  ## Return Format
52
53
 
@@ -0,0 +1,126 @@
1
+ ---
2
+ name: dev-metrics
3
+ description: "Generate development metrics from git history — commit patterns, fix loop frequency, agent usage, file hotspots, and workflow efficiency."
4
+ ---
5
+
6
+ Generate development metrics for: $ARGUMENTS
7
+
8
+ If $ARGUMENTS is empty, analyze the last 90 days of git history.
9
+ If $ARGUMENTS is a number (e.g. `30`), use that many days.
10
+ If $ARGUMENTS is a branch or date range, scope to that.
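The argument handling above can be sketched as a hypothetical helper (crewkit does not ship this code; it only makes the three cases precise):

```python
def parse_scope(arguments: str) -> str:
    """Map $ARGUMENTS to a git log scope per the rules above (sketch)."""
    arg = arguments.strip()
    if not arg:
        return '--since="90 days ago"'      # default window
    if arg.isdigit():
        return f'--since="{arg} days ago"'  # numeric day count
    return arg                              # branch or date range, passed through
```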
11
+
12
+ ---
13
+
14
+ ## Steps
15
+
16
+ ### 1. Collect raw data
17
+
18
+ Run all commands in parallel:
19
+
20
+ ```bash
21
+ # Commit volume and cadence
22
+ git log --since="90 days ago" --format="%ad %s" --date=short
23
+
24
+ # File change frequency (hotspots)
25
+ git log --since="90 days ago" --name-only --pretty=format: | sed '/^$/d' | sort | uniq -c | sort -rn | head -30
26
+
27
+ # Author breakdown (if multi-contributor)
28
+ git shortlog -sn --since="90 days ago"
29
+
30
+ # Fix/correction commits (heuristic: commit message contains fix, hotfix, correction, revert, or bugfix)
31
+ git log --oneline --since="90 days ago" --grep="fix\|hotfix\|correction\|revert\|bugfix" -i
32
+
33
+ # Merge commits (PR merges)
34
+ git log --oneline --since="90 days ago" --merges
35
+ ```
36
+
37
+ Adapt the `--since` window to match $ARGUMENTS if provided.
38
+
39
+ ### 2. Compute metrics
40
+
41
+ From raw data, derive:
42
+
43
+ #### Commit patterns
44
+ | Metric | Value |
45
+ |--------|-------|
46
+ | Total commits | count |
47
+ | Commits per week (avg) | count |
48
+ | Peak activity day/week | date or range |
49
+ | Fix/correction commit ratio | fix commits / total commits (%) |
50
+
51
+ #### Fix loop frequency
52
+ Estimate fix loop frequency by counting sequential commits on the same file within a 24h window.
53
+ Flag any file with 3+ consecutive fix commits as a **fix loop hotspot**.
54
+
55
+ | File | Fix loop count | Most recent |
56
+ |------|---------------|-------------|
57
+ | ... | ... | ... |
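The 24h-window heuristic can be sketched as follows. The input shape is an assumption: `(timestamp, file)` pairs for fix commits, oldest first, as would be derived from `git log --name-only` output:

```python
from datetime import datetime, timedelta

def fix_loop_hotspots(fix_commits, window=timedelta(hours=24), threshold=3):
    """Return files with `threshold`+ consecutive fix commits, each within
    `window` of the previous one. fix_commits: [(datetime, path)], oldest first."""
    run_length, last_seen, hotspots = {}, {}, set()
    for ts, path in fix_commits:
        if path in last_seen and ts - last_seen[path] <= window:
            run_length[path] += 1   # continues a run on the same file
        else:
            run_length[path] = 1    # gap too large: start a new run
        last_seen[path] = ts
        if run_length[path] >= threshold:
            hotspots.add(path)
    return hotspots
```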
58
+
59
+ #### File hotspots
60
+ Top 10 most-changed files:
61
+
62
+ | File | Change count | Risk level |
63
+ |------|-------------|-----------|
64
+ | ... | ... | HIGH if auth/tenant/migration, MEDIUM if handler/service, LOW otherwise |
65
+
66
+ Apply risk classification using `.ai/memory/conventions.md` if present (read it).
67
+
68
+ #### Workflow efficiency signals
69
+ | Signal | Value | Health |
70
+ |--------|-------|--------|
71
+ | Fix commit ratio | X% | GREEN <15%, YELLOW 15-30%, RED >30% |
72
+ | Revert count | N | GREEN 0, YELLOW 1-2, RED 3+ |
73
+ | Hotfix commits | N | flag if >2 in window |
74
+ | Files changed per commit (avg) | N | GREEN <5, YELLOW 5-10, RED >10 |
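The fix-ratio band can be sketched directly from the thresholds above:

```python
def fix_ratio_health(fix_commits: int, total_commits: int) -> str:
    """GREEN <15%, YELLOW 15-30%, RED >30% (bands from the table above)."""
    ratio = 100 * fix_commits / total_commits
    if ratio < 15:
        return "GREEN"
    return "YELLOW" if ratio <= 30 else "RED"
```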
75
+
76
+ #### Agent usage (if detectable from commit messages)
77
+ If commit messages contain agent names (coder, tester, reviewer, architect, explorer),
78
+ tally usage per agent. Otherwise, skip this section.
79
+
80
+ ### 3. Identify systemic risks
81
+
82
+ Cross-reference hotspot files with risk classification:
83
+ - If a HIGH-risk file (auth, tenant, billing, migration) is in the top 5 hotspots → flag as **systemic risk**
84
+ - If fix commit ratio > 30% → flag as **process health concern**
85
+ - If the same file appears in both hotspots and fix loops → flag as **instability candidate**
86
+
87
+ ### 4. Suggest improvements
88
+
89
+ For each systemic risk or process health concern, propose one concrete action:
90
+ - Do not propose vague items ("write more tests")
91
+ - Each suggestion must reference a specific file, module, or metric
92
+
93
+ ---
94
+
95
+ ## Return Format
96
+
97
+ ```markdown
98
+ ---
99
+ **Dev Metrics Report**
100
+ **Period:** [date range]
101
+ **Total commits analyzed:** N
102
+
103
+ ## Commit Patterns
104
+ [table]
105
+
106
+ ## Fix Loop Frequency
107
+ [table or "No fix loop hotspots detected"]
108
+
109
+ ## File Hotspots (top 10)
110
+ [table]
111
+
112
+ ## Workflow Efficiency
113
+ [table with GREEN/YELLOW/RED indicators]
114
+
115
+ ## Agent Usage
116
+ [table or "Not detectable from commit messages"]
117
+
118
+ ## Systemic Risks
119
+ [list or "None identified"]
120
+
121
+ ## Suggested Improvements
122
+ [numbered list, max 5, concrete and actionable]
123
+ ---
124
+ ```
125
+
126
+ Keep the report structured and scannable. Do not include raw git output.
@@ -121,81 +121,11 @@ If a durable lesson was learned, append to the appropriate `lessons-{domain}.md`
121
121
 
122
122
  ---
123
123
 
124
- # Part 2 Operational Policies
125
-
126
- ## Exit gate
127
-
128
- **HARD BLOCK: No task is complete without reviewer APPROVED (clean).**
129
-
130
- - Tester PASS alone is **not sufficient**
131
- - Reviewer APPROVED is **mandatory** before Summarize
132
- - **APPROVED with IMPORTANT+ findings is NOT clean.** Fix, then re-run tester + reviewer.
133
- - Both must be clean (PASS + APPROVED without IMPORTANT+ findings) before Summarize.
134
-
135
- ## Findings consolidation
136
-
137
- After tester and reviewer finish:
138
-
139
- 1. **Collect** results from both
140
- 2. **Classify:** Tester = PASS/FAIL. Reviewer = APPROVED/NEEDS_CHANGES
141
- 3. **Deduplicate** — same file + same concern → keep higher severity
142
- 4. **APPROVED with IMPORTANT+ findings** = treat as NEEDS_CHANGES
143
- 5. **Decision matrix:**
144
-
145
- | Tester | Reviewer | Action |
146
- |--------|----------|--------|
147
- | PASS | APPROVED (clean) | Done → Summarize |
148
- | PASS | APPROVED with IMPORTANT+ | Fix loop |
149
- | PASS | NEEDS_CHANGES | Fix loop (reviewer findings) |
150
- | FAIL | APPROVED | Fix loop (test failures) |
151
- | FAIL | NEEDS_CHANGES | Fix loop (merge into ONE list for coder) |
152
-
153
- When both fail, call coder **once** with the merged list.
154
-
155
- ## Fix loop
156
-
157
- 1. **Fix:**
158
- - Risk **HIGH**: all fixes through **coder** — never auto-fix
159
- - Risk LOW/MEDIUM: `auto_fixable: yes` → orchestrator applies directly. Else → coder
160
- - When fix changes an exception type or interface → instruct coder to grep for all test doubles/fakes
161
- 2. **Revalidate in parallel** (tester fix-loop mode + reviewer)
162
- 3. Consolidate again
163
- 4. Exit when PASS + APPROVED
164
- 5. **Max 5 iterations** — then STOP and report to user.
165
-
166
- **MINOR findings** do not trigger fix loop alone.
167
-
168
- **Tester time budget:** if the tester reports pre-existing failures unrelated to the current task, the orchestrator must NOT ask the tester to fix them. Note them for a separate task and proceed.
169
-
170
- ## Test creation rule
171
-
172
- **Every behavioral change must be validated by tests.** The tester creates them automatically.
173
-
174
- - New feature with logic → unit tests + integration when applicable
175
- - Bug fix → test that reproduces the bug + verifies the fix
176
- - Refactor with preserved behavior → existing tests are sufficient
177
- - Cosmetic/text/DTO change without logic → build + review is sufficient
178
-
179
- ## HIGH risk rules
180
-
181
- - Never auto-fix — all through coder
182
- - Full test suite on every revalidation
183
- - Reviewer always mandatory
184
- - Architect mandatory if any design decision is open
185
-
186
- ## Stop conditions
187
-
188
- STOP and escalate when:
189
- - Build doesn't stabilize after 2 corrections
190
- - Reviewer flags an architectural problem
191
- - Tester finds widespread failures outside task scope
192
- - Root cause unclear after 1 fix loop
193
- - Affected files grow beyond plan
194
- - SMALL/MEDIUM reveals structural impact
124
+ > **Operational policies** (exit gate, fix loop, findings consolidation, stop conditions): load `references/operational-policies.md` when entering consolidation or fix loop.
195
125
 
196
126
  ---
197
127
 
198
- # Part 3 — Stack Configuration
128
+ # Part 2 — Stack Configuration
199
129
 
200
130
  The orchestrator must tell subagents which build/test commands to use. Read `.ai/memory/commands.md` at the start and use the correct commands for each stack.
201
131
 
@@ -0,0 +1,85 @@
1
+ # Full-Workflow — Operational Policies
2
+
3
+ Referenced by `SKILL.md`. Load when entering consolidation, fix loop, or stop conditions.
4
+
5
+ ---
6
+
7
+ ## Exit gate
8
+
9
+ **HARD BLOCK: No task is complete without reviewer APPROVED (clean).**
10
+
11
+ - Tester PASS alone is **not sufficient**
12
+ - Reviewer APPROVED is **mandatory** before Summarize
13
+ - **APPROVED with IMPORTANT+ findings is NOT clean.** Fix, then re-run tester + reviewer.
14
+ - Both must be clean (PASS + APPROVED without IMPORTANT+ findings) before Summarize.
15
+
16
+ ---
17
+
18
+ ## Findings consolidation
19
+
20
+ After tester and reviewer finish:
21
+
22
+ 1. **Collect** results from both
23
+ 2. **Classify:** Tester = PASS/FAIL. Reviewer = APPROVED/NEEDS_CHANGES
24
+ 3. **Deduplicate** — same file + same concern → keep higher severity
25
+ 4. **APPROVED with IMPORTANT+ findings** = treat as NEEDS_CHANGES
26
+ 5. **Decision matrix:**
27
+
28
+ | Tester | Reviewer | Action |
29
+ |--------|----------|--------|
30
+ | PASS | APPROVED (clean) | Done → Summarize |
31
+ | PASS | APPROVED with IMPORTANT+ | Fix loop |
32
+ | PASS | NEEDS_CHANGES | Fix loop (reviewer findings) |
33
+ | FAIL | APPROVED | Fix loop (test failures) |
34
+ | FAIL | NEEDS_CHANGES | Fix loop (merge into ONE list for coder) |
35
+
36
+ When both fail, call coder **once** with the merged list.
37
+
38
+ ---
39
+
40
+ ## Fix loop
41
+
42
+ 1. **Fix:**
43
+ - Risk **HIGH**: all fixes through **coder** — never auto-fix
44
+ - Risk LOW/MEDIUM: `auto_fixable: yes` → orchestrator applies directly. Else → coder
45
+ - When fix changes an exception type or interface → instruct coder to grep for all test doubles/fakes
46
+ 2. **Revalidate in parallel** (tester fix-loop mode + reviewer)
47
+ 3. Consolidate again
48
+ 4. Exit when PASS + APPROVED
49
+ 5. **Max 5 iterations** — then STOP and report to user.
50
+
51
+ **MINOR findings** do not trigger fix loop alone.
52
+
53
+ **Tester time budget:** if the tester reports pre-existing failures unrelated to the current task, the orchestrator must NOT ask the tester to fix them. Note them for a separate task and proceed.
54
+
55
+ ---
56
+
57
+ ## Test creation rule
58
+
59
+ **Every behavioral change must be validated by tests.** The tester creates them automatically.
60
+
61
+ - New feature with logic → unit tests + integration when applicable
62
+ - Bug fix → test that reproduces the bug + verifies the fix
63
+ - Refactor with preserved behavior → existing tests are sufficient
64
+ - Cosmetic/text/DTO change without logic → build + review is sufficient
65
+
66
+ ---
67
+
68
+ ## HIGH risk rules
69
+
70
+ - Never auto-fix — all through coder
71
+ - Full test suite on every revalidation
72
+ - Reviewer always mandatory
73
+ - Architect mandatory if any design decision is open
74
+
75
+ ---
76
+
77
+ ## Stop conditions
78
+
79
+ STOP and escalate when:
80
+ - Build doesn't stabilize after 2 corrections
81
+ - Reviewer flags an architectural problem
82
+ - Tester finds widespread failures outside task scope
83
+ - Root cause unclear after 1 fix loop
84
+ - Affected files grow beyond plan
85
+ - SMALL/MEDIUM reveals structural impact
@@ -0,0 +1,157 @@
1
+ ---
2
+ name: impact
3
+ description: "Analyze blast radius of changing a file, handler, entity, or module. Maps callers, tests, endpoints, and UI pages affected."
4
+ ---
5
+
6
+ Analyze blast radius of: $ARGUMENTS
7
+
8
+ $ARGUMENTS must be a file path, handler name, entity name, module name, or endpoint.
9
+ Examples: `src/Orders/OrderHandler.cs`, `OrderEntity`, `POST /api/orders`, `Orders module`
10
+
11
+ ---
12
+
13
+ ## When to use
14
+
15
+ Use before starting any MEDIUM or LARGE task to understand the full scope of change.
16
+ Use before `/explore-and-plan` when the target is already known but blast radius is uncertain.
17
+ Use after a production incident to understand what else might be affected by the fix.
18
+
19
+ ---
20
+
21
+ ## Steps
22
+
23
+ ### 1. Identify the target
24
+
25
+ From $ARGUMENTS, determine:
26
+ - **Target type:** file / class / handler / entity / endpoint / module
27
+ - **Target location:** resolve to exact file path(s) if not already a path
28
+ - **Stack:** infer from path extension and `.ai/memory/architecture.md`
29
+
30
+ Read `.ai/memory/architecture.md` and `.ai/memory/conventions.md` to understand layer rules
31
+ and naming conventions before searching.
32
+
33
+ ### 2. Map direct callers
34
+
35
+ Search for all direct references to the target:
36
+
37
+ ```bash
38
+ # Search for imports, usages, and references
39
+ # Adapt search patterns to the detected stack:
40
+ # - .NET: class name, interface name, constructor injection, handler registration
41
+ # - Node.js: require/import of the file, function call sites
42
+ # - Blazor: component references, @inject, @page routes, event handlers
43
+ # - SQL/migrations: table name, column name in queries and seeders
44
+ ```
45
+
46
+ Build the direct caller list:
47
+
48
+ | File | Reference type | Layer |
49
+ |------|---------------|-------|
50
+ | ... | import / call / inject / inherit | controller / service / handler / UI / test |
51
+
52
+ ### 3. Map transitive impact
53
+
54
+ For each direct caller, check if it is itself called by other files:
55
+ - Go one level deeper if the direct caller is an interface, base class, or shared service
56
+ - Stop at two levels unless the target is a core shared abstraction (entity, base class, shared interface)
57
+ - Flag if the dependency graph is too wide to enumerate (>20 unique callers at any level)
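The depth-limited walk above can be sketched as follows. The graph shape is an assumption: a callee-to-callers map built from the Step 2 search results:

```python
def transitive_callers(callers_of, target, max_depth=2, width_limit=20):
    """BFS from `target` up the caller graph, at most `max_depth` levels.
    Returns (callers, too_wide); too_wide flags graphs past width_limit."""
    seen, frontier = set(), {target}
    for _ in range(max_depth):
        frontier = {
            caller
            for node in frontier
            for caller in callers_of.get(node, [])
        } - seen - {target}
        seen |= frontier
        if len(seen) > width_limit:
            return seen, True
    return seen, False
```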
58
+
59
+ ### 4. Map tests
60
+
61
+ Find all test files that directly or indirectly test the target:
62
+
63
+ ```bash
64
+ # Search test directories for the target name, class name, or endpoint path
65
+ # Look for test doubles (mocks, fakes, stubs) of the target
66
+ ```
67
+
68
+ | Test file | Tests what | Has mock/fake of target? |
69
+ |-----------|-----------|--------------------------|
70
+ | ... | ... | yes / no |
71
+
72
+ Flag any test file that uses a mock/fake of the target — changing the target's interface or
73
+ exception types will require updating those fakes.
74
+
75
+ ### 5. Map API endpoints and UI pages
76
+
77
+ If the target is a handler, service, or entity:
78
+ - Find which API endpoints call it (controller/route → handler)
79
+ - Find which UI pages or components consume those endpoints (if frontend source is available)
80
+
81
+ | Endpoint | Method | UI page/component | Consumer type |
82
+ |----------|--------|------------------|---------------|
83
+ | ... | ... | ... | internal / public API |
84
+
85
+ Mark endpoints as **public API** if they are exposed externally — changes to those have higher blast radius.
86
+
87
+ ### 6. Classify blast radius
88
+
89
+ | Dimension | Count | Assessment |
90
+ |-----------|-------|-----------|
91
+ | Direct callers | N | — |
92
+ | Transitive callers | N | — |
93
+ | Test files affected | N | — |
94
+ | API endpoints affected | N | — |
95
+ | UI pages affected | N | — |
96
+ | Public API contracts affected | N | HIGH risk if >0 |
97
+ | Auth/tenant code affected | yes/no | HIGH risk if yes |
98
+ | DB schema affected | yes/no | HIGH risk if yes |
99
+
100
+ **Overall blast radius:**
101
+ - **LOW** — 1-2 files, same layer, no public API, no auth/schema
102
+ - **MEDIUM** — 3-7 files, cross-layer, no public API change
103
+ - **HIGH** — 8+ files, or public API, or auth/tenant, or DB schema change
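As a sketch of the classification (parameter names are illustrative; `files` counts direct plus transitive callers):

```python
def blast_radius(files: int, public_api: bool, auth_or_tenant: bool,
                 db_schema: bool, cross_layer: bool) -> str:
    """Rules above: any HIGH trigger wins over raw file count."""
    if files >= 8 or public_api or auth_or_tenant or db_schema:
        return "HIGH"
    if files >= 3 or cross_layer:
        return "MEDIUM"
    return "LOW"
```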
104
+
105
+ ### 7. Identify change categories
106
+
107
+ Classify what types of changes to the target would cause breakage vs. safe changes:
108
+
109
+ | Change type | Breakage risk | Affected consumers |
110
+ |-------------|--------------|-------------------|
111
+ | Add new field (non-breaking) | LOW | none |
112
+ | Rename field or method | HIGH | all callers + test fakes |
113
+ | Change return type | HIGH | all callers |
114
+ | Change exception thrown | MEDIUM | test fakes + callers that catch |
115
+ | Add required parameter | HIGH | all call sites |
116
+ | Add optional parameter | LOW | none |
117
+ | Split into two classes | HIGH | all callers + DI registrations |
118
+ | Change DB column | HIGH | queries + migrations |
119
+
120
+ ---
121
+
122
+ ## Return Format
123
+
124
+ ```markdown
125
+ ---
126
+ **Impact Analysis: [target name]**
127
+ **Target type:** [file / class / handler / entity / endpoint / module]
128
+ **Stack:** [detected]
129
+
130
+ ## Direct Callers
131
+ [table from Step 2]
132
+
133
+ ## Transitive Impact
134
+ [table or "None — direct callers are leaf nodes"]
135
+
136
+ ## Tests Affected
137
+ [table from Step 4]
138
+ [Flag: "N test files use a mock/fake of this target — update them if interface changes"]
139
+
140
+ ## Endpoints and UI Pages
141
+ [table from Step 5, or "Not applicable"]
142
+
143
+ ## Blast Radius Summary
144
+ [table from Step 6]
145
+
146
+ **Blast radius: LOW / MEDIUM / HIGH**
147
+
148
+ ## Safe vs. Breaking Changes
149
+ [table from Step 7]
150
+
151
+ ## Recommendation
152
+ [1-3 sentences: what to do before making this change, and what to watch for]
153
+ ---
154
+ ```
155
+
156
+ If $ARGUMENTS does not resolve to a known file or name, ask for clarification before proceeding.
157
+ Do not guess at the target.