all-for-claudecode 2.4.0 → 2.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +2 -2
- package/.claude-plugin/plugin.json +1 -1
- package/README.md +2 -1
- package/agents/afc-impl-worker.md +2 -0
- package/agents/afc-pr-analyst.md +57 -0
- package/agents/afc-security.md +5 -0
- package/commands/implement.md +14 -10
- package/commands/init.md +34 -16
- package/commands/pr-comment.md +181 -0
- package/commands/release-notes.md +219 -0
- package/commands/triage.md +222 -0
- package/package.json +2 -2
- package/scripts/afc-blast-radius.sh +41 -119
- package/scripts/afc-consistency-check.sh +1 -1
- package/scripts/afc-subagent-context.sh +1 -1
- package/scripts/afc-triage.sh +131 -0
- package/scripts/afc-user-prompt-submit.sh +9 -4
package/.claude-plugin/marketplace.json
CHANGED

@@ -6,14 +6,14 @@
   },
   "metadata": {
     "description": "Automated pipeline for Claude Code — spec → plan → implement → review → clean",
-    "version": "2.4.0"
+    "version": "2.5.0"
   },
   "plugins": [
     {
       "name": "afc",
       "source": "./",
       "description": "Automated pipeline for Claude Code. Automates the full development cycle: spec → plan → implement → review → clean.",
-      "version": "2.4.0",
+      "version": "2.5.0",
       "category": "automation",
       "tags": ["pipeline", "automation", "spec", "plan", "implement", "review", "critic-loop"]
     }
package/.claude-plugin/plugin.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "afc",
-  "version": "2.4.0",
+  "version": "2.5.0",
   "description": "Automated pipeline for Claude Code. Automates the full development cycle: spec → plan → implement → review → clean.",
   "author": { "name": "jhlee0409", "email": "relee6203@gmail.com" },
   "homepage": "https://github.com/jhlee0409/all-for-claudecode",
package/README.md
CHANGED

@@ -6,10 +6,11 @@
 
 **Claude Code plugin that automates the full development cycle — spec → plan → implement → review → clean.**
 
+[](https://github.com/jhlee0409/all-for-claudecode/actions/workflows/ci.yml)
 [](https://www.npmjs.com/package/all-for-claudecode)
 [](https://www.npmjs.com/package/all-for-claudecode)
 [](./LICENSE)
-[](https://github.com/jhlee0409/all-for-claudecode)
 
 > One command (`/afc:auto`) runs the entire cycle. Zero runtime dependencies — pure markdown commands + bash hook scripts.
 
package/agents/afc-pr-analyst.md
ADDED

@@ -0,0 +1,57 @@
+---
+name: afc-pr-analyst
+description: "PR deep analysis worker — performs build/test/lint verification in an isolated worktree for triage."
+tools:
+  - Read
+  - Bash
+  - Glob
+  - Grep
+model: sonnet
+maxTurns: 15
+---
+
+You are a PR deep-analysis worker for the all-for-claudecode triage pipeline.
+
+## Purpose
+
+You run inside an **isolated worktree** to perform deep analysis that requires checking out the PR branch: building, testing, linting, and architectural impact assessment.
+
+## Workflow
+
+1. **Checkout the PR branch** using `gh pr checkout {number}` (provided in your prompt)
+2. **Detect project tooling**:
+   - Read `.claude/afc.config.md` for CI commands (if present)
+   - Read `CLAUDE.md` for build/test commands
+   - Read `package.json`, `Makefile`, `Cargo.toml`, etc. for standard commands
+3. **Run verification** (skip steps if commands are not available):
+   a. **Build**: run the project build command
+   b. **Lint**: run the project lint command
+   c. **Test**: run the project test command
+4. **Analyze results**:
+   - Parse build/test/lint output for errors and warnings
+   - Identify files with issues
+   - Assess architectural impact (does the PR cross layer boundaries?)
+5. **Return structured report**
+
+## Output Format
+
+```
+BUILD_STATUS: pass|fail|skip
+BUILD_OUTPUT: {first 20 lines of errors if failed, otherwise "clean"}
+TEST_STATUS: pass|fail|skip (N passed, M failed)
+TEST_OUTPUT: {failed test names and first error lines if failed}
+LINT_STATUS: pass|fail|skip
+LINT_OUTPUT: {lint warnings/errors if any}
+ARCHITECTURE_IMPACT: {assessment of cross-cutting changes}
+DEEP_FINDINGS: {key findings from deep analysis, numbered list}
+RECOMMENDATION: merge|request-changes|needs-discussion
+RECOMMENDATION_REASON: {one sentence explanation}
+```
+
+## Rules
+
+- **Read-only intent**: you are analyzing, not fixing. Do not modify code.
+- **Time budget**: keep total execution under 2 minutes. Skip long-running test suites (use `timeout 90s` wrapper).
+- **Error resilience**: if a command fails to run (missing tool, permissions), report `skip` and continue.
+- **No network calls**: do not install dependencies unless the build step explicitly requires it and it completes within the time budget.
+- Follow the project's shell script conventions when running commands.
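The skip-on-missing, timeout-bounded verification that the agent describes can be sketched as a small shell helper. This is an illustrative sketch, not part of the plugin; the `npm` commands in the usage comment are assumptions about a typical JS project:

```shell
# run_check LABEL CMD [ARGS...]
# Emits "LABEL_STATUS: pass|fail|skip" in the agent's report format:
# skip when the tool is missing, fail on a non-zero exit or on hitting
# the 90-second budget (GNU coreutils `timeout`).
run_check() {
  label=$1; shift
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "${label}_STATUS: skip"
    return 0
  fi
  if timeout 90s "$@" >/dev/null 2>&1; then
    echo "${label}_STATUS: pass"
  else
    echo "${label}_STATUS: fail"
  fi
}

# Example usage (command names are illustrative):
#   run_check BUILD npm run build
#   run_check LINT  npm run lint
#   run_check TEST  npm test
```

Reporting `skip` with a zero exit status matches the agent's error-resilience rule: a missing tool must not abort the remaining checks.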
package/agents/afc-security.md
CHANGED
package/commands/implement.md
CHANGED

@@ -180,11 +180,13 @@ Task("T004: Create AuthService", subagent_type: "afc:afc-impl-worker", isolation
 
 **Failure Recovery** (per-task, not per-batch):
 1. Identify the failed task from the agent's error return
-2.
-3.
-4.
-5. If retryCount
-
+2. Capture the `agentId` from the failed agent's result (returned in Task tool output)
+3. Reset: `TaskUpdate(taskId, status: "pending")`
+4. Track: `TaskUpdate(taskId, metadata: { retryCount: N, lastAgentId: agentId })`
+5. If retryCount < 3 → re-launch with `resume: lastAgentId` in the next batch round. The resumed agent retains full context from the previous attempt (what it tried, what failed, partial progress), enabling more targeted retry instead of starting from scratch.
+   - **Worktree caveat**: if the failed worker made no file changes, its worktree is auto-cleaned and `resume` will fail. In this case, fall back to a fresh launch (omit `resume`) for the retry.
+6. If retryCount >= 3 → mark as failed, report: `"T{ID} failed after 3 attempts: {last error}"`
+7. Continue with remaining tasks — a single failure does not block the entire phase
 
 #### Swarm Mode (6+ [P] tasks)
 
@@ -247,11 +249,13 @@ Task("Worker 2: T008, T010, T012", subagent_type: "afc:afc-impl-worker", isolati
 
 When a worker agent returns an error:
 1. Identify which tasks the worker was assigned (from the pre-assigned list)
 2. Check which tasks the worker actually completed (from its result summary)
-3.
-4.
-5.
-6. If retryCount
-
+3. Capture the `agentId` from the failed worker's result
+4. Reset uncompleted tasks: `TaskUpdate(taskId, status: "pending")`
+5. Track retry count: `TaskUpdate(taskId, metadata: { retryCount: N, lastAgentId: agentId })`
+6. If retryCount < 3 → re-launch with `resume: lastAgentId` to preserve context from the previous attempt. The resumed agent retains its full conversation history (files read, changes attempted, errors encountered), enabling targeted retry.
+   - **Worktree caveat**: if the failed worker made no file changes, its worktree is auto-cleaned and `resume` will fail. In this case, fall back to a fresh launch (omit `resume`) for the retry.
+7. If retryCount >= 3 → mark as failed, report: `"T{ID} failed after 3 attempts: {last error}"`
+8. Continue with remaining tasks
 
 > Single task failure does not block the phase. The orchestrator reassigns failed tasks to subsequent batches.
 
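The resume-or-fresh retry policy in both recovery lists can be sketched generically. `launch_resumed` and `launch_fresh` below are hypothetical stand-ins for the Task tool calls (with and without `resume:`); they are not real plugin APIs:

```shell
# Hypothetical stand-ins for Task launches with and without `resume:`.
launch_resumed() { echo "resumed $1 via $2"; }
launch_fresh()   { echo "fresh launch of $1"; }

# retry_task TASK_ID RETRY_COUNT LAST_AGENT_ID
# Mirrors the recovery steps: give up after 3 attempts; otherwise try to
# resume the previous agent, falling back to a fresh launch when resume
# fails (e.g. the worker's worktree was auto-cleaned).
retry_task() {
  task=$1; count=$2; agent=$3
  if [ "$count" -ge 3 ]; then
    echo "$task failed after 3 attempts" >&2
    return 1
  fi
  launch_resumed "$task" "$agent" || launch_fresh "$task"
}
```

The `||` fallback is the whole point of the worktree caveat: a failed resume degrades to a context-free retry rather than losing the attempt.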
package/commands/init.md
CHANGED

@@ -224,22 +224,30 @@ IMPORTANT: For requests matching the afc skill routing table below, always invok
 
 ## Skill Routing
 
-
-
-
-
-
-
-| Design | `afc:plan` |
-
-
-
-
-
-
-
-| Consult | `afc:consult` |
+Classify the user's intent and route to the matching skill. Use semantic understanding — not keyword matching.
+
+| User Intent | Skill | Route When |
+|-------------|-------|------------|
+| Full lifecycle | `afc:auto` | User wants end-to-end feature development, or the request is a non-trivial new feature without an existing plan |
+| Specification | `afc:spec` | User wants to define or write requirements, acceptance criteria, or success conditions |
+| Design/Plan | `afc:plan` | User wants to plan HOW to implement before coding — approach, architecture decisions, design |
+| Implement | `afc:implement` | User wants specific code changes with a clear scope: add feature, refactor, modify. Requires existing plan or precise instructions |
+| Review | `afc:review` | User wants code review, PR review, or quality check on existing/changed code |
+| Debug/Fix | `afc:debug` | User reports a bug, error, or broken behavior and wants diagnosis and fix |
+| Test | `afc:test` | User wants to write tests, improve coverage, or verify behavior |
+| Validate | `afc:validate` | User wants to check consistency or validate existing pipeline artifacts |
+| Analyze | `afc:analyze` | User wants to understand, explore, or audit existing code without modifying it |
+| Research | `afc:research` | User wants deep investigation of external tools, libraries, APIs, or technical concepts |
+| Ideate | `afc:ideate` | User wants to brainstorm ideas, explore possibilities, or draft a product brief |
+| Consult | `afc:consult` | User wants expert advice on a decision: library choice, architecture direction, legal/security/infra guidance |
+| Tasks | `afc:tasks` | User explicitly wants to decompose work into a task breakdown |
+| Ambiguous | `afc:clarify` | User's request is too vague or underspecified to route confidently |
+
+### Routing Rules
+
+1. **Auto vs Implement**: A new feature request without an existing plan routes to `afc:auto`. Only use `afc:implement` when the user has a clear, scoped task or an existing plan/spec.
+2. **Compound intents**: Route to the primary intent. The pipeline handles sequencing internally.
+3. **Design-first**: When scope is non-trivial (multiple files, architectural decisions needed), prefer `afc:auto` or `afc:plan` over direct `afc:implement`.
 
 User-only (not auto-triggered — inform user on request):
 - `afc:launch` — inform user when release artifact generation is requested
@@ -249,6 +257,9 @@ User-only (not auto-triggered — inform user on request):
 - `afc:checkpoint` — inform user when session save is requested
 - `afc:resume` — inform user when session restore is requested
 - `afc:principles` — inform user when project principles management is requested
+- `afc:triage` — inform user when parallel PR/issue triage is requested
+- `afc:pr-comment` — inform user when posting PR review comments to GitHub is requested
+- `afc:release-notes` — inform user when generating release notes from git history is requested
 
 ## Pipeline
 
@@ -257,6 +268,13 @@ spec → plan → implement → review → clean
 ## Override Rules
 
 NEVER use executor, deep-executor, debugger, planner, analyst, verifier, test-engineer, code-reviewer, quality-reviewer, style-reviewer, api-reviewer, security-reviewer, performance-reviewer for tasks that an afc skill covers above. ALWAYS invoke the afc skill instead.
+
+## Source Verification
+
+When analyzing or making claims about external systems, APIs, SDKs, or third-party tools:
+- Verify against official documentation, NOT project-internal docs
+- Do not hardcode reference data when delegating to sub-agents — instruct them to look up primary sources
+- Cross-verify high-severity findings before reporting
 </afc-pipeline>
 <!-- AFC:END -->
 ```
package/commands/pr-comment.md
ADDED

@@ -0,0 +1,181 @@
+---
+name: afc:pr-comment
+description: "Post PR review comments to GitHub"
+argument-hint: "<PR number> [--severity critical,warning,info]"
+allowed-tools:
+  - Read
+  - Bash
+  - Glob
+  - Grep
+  - Task
+model: sonnet
+---
+
+# /afc:pr-comment — Post PR Review to GitHub
+
+> Analyzes a PR and posts a structured review comment to GitHub.
+> Reuses existing triage reports when available. Asks for user confirmation before posting.
+
+## Arguments
+
+- `$ARGUMENTS` — PR number (required), with optional severity filter
+  - `42` — Analyze and post review for PR #42
+  - `42 --severity critical,warning` — Only include Critical and Warning findings
+  - `42 --severity critical` — Only include Critical findings
+
+Parse the arguments:
+1. Extract the PR number (first numeric argument)
+2. Extract `--severity` filter if present (comma-separated list of: `critical`, `warning`, `info`)
+3. If no PR number is found, ask the user: "Which PR number should I review?"
+
+## Execution Steps
+
+### 1. Collect PR Information
+
+```bash
+gh pr view {number} --json number,title,headRefName,author,body,additions,deletions,changedFiles,labels,reviewDecision,state
+```
+
+```bash
+gh pr diff {number}
+```
+
+Verify the PR exists and is open. If closed/merged, inform the user and ask whether to proceed anyway.
+
+### 2. Check for Existing Triage Report
+
+Look for a recent triage report that covers this PR:
+
+```bash
+ls -t .claude/afc/memory/triage/*.md 2>/dev/null | head -5
+```
+
+If a report exists from today, search it for `PR #{number}` analysis. If found, reuse that analysis as the basis for the review instead of re-analyzing from scratch.
+
+### 3. Analyze PR (if no triage data available)
+
+If no existing triage report covers this PR, perform a focused review of the diff.
+
+Examine the diff from the following perspectives (abbreviated from review.md):
+
+#### A. Code Quality
+- Style compliance, naming conventions, unnecessary complexity
+
+#### B. Architecture
+- Layer violations, boundary crossings, structural concerns
+
+#### C. Security
+- XSS, injection, sensitive data exposure, auth issues
+
+#### D. Performance
+- Latency concerns, unnecessary computation, resource leaks
+
+#### E. Maintainability
+- Function/file size, naming clarity, readability
+
+Classify each finding with a severity:
+- **Critical (C)** — Must fix before merge. Bugs, security vulnerabilities, data loss risks.
+- **Warning (W)** — Should fix. Code quality issues, potential problems, maintainability concerns.
+- **Info (I)** — Nice to have. Suggestions, minor improvements, style preferences.
+
+### 4. Apply Severity Filter
+
+If `--severity` was specified, filter findings to only include the specified severity levels.
+
+Default (no filter): include all severity levels.
+
+### 5. Generate Review Comment
+
+Compose the review comment in this format:
+
+```markdown
+## AFC Code Review — PR #{number}
+
+### Summary
+
+| Severity | Count |
+|----------|-------|
+| Critical | {N} |
+| Warning | {N} |
+| Info | {N} |
+
+### Findings
+
+#### C-{N}: {title}
+- **File**: `{path}:{line}`
+- **Issue**: {description}
+- **Suggested fix**: {suggestion}
+
+#### W-{N}: {title}
+{same format}
+
+#### I-{N}: {title}
+{same format}
+
+### Positives
+- {1-2 things done well}
+
+---
+*Reviewed by [all-for-claudecode](https://github.com/anthropics/claude-code)*
+```
+
+If there are zero findings after filtering, the comment should be:
+
+```markdown
+## AFC Code Review — PR #{number}
+
+No issues found. Code looks good!
+
+---
+*Reviewed by [all-for-claudecode](https://github.com/anthropics/claude-code)*
+```
+
+### 6. Preview and Confirm
+
+Display the full review comment to the user in the console.
+
+Then determine the review event type:
+- **Critical findings exist** → `REQUEST_CHANGES`
+- **Only Warning/Info findings** → `COMMENT`
+- **No findings** → `APPROVE`
+
+Tell the user:
+```
+Review event: {APPROVE|COMMENT|REQUEST_CHANGES}
+Findings: Critical {N} / Warning {N} / Info {N}
+```
+
+Ask the user to confirm using AskUserQuestion with these options:
+1. **Post as-is** — Post the review comment to GitHub
+2. **Edit first** — Let me modify the comment before posting (user provides edits, then re-confirm)
+3. **Cancel** — Do not post anything
+
+### 7. Post to GitHub
+
+On approval, write the review comment to a temp file and post via `--body-file` (avoids shell escaping issues with markdown):
+
+```bash
+tmp_file=$(mktemp)
+cat > "$tmp_file" << 'REVIEW_EOF'
+{review comment content}
+REVIEW_EOF
+gh pr review {number} --body-file "$tmp_file" --event {COMMENT|REQUEST_CHANGES|APPROVE}
+rm -f "$tmp_file"
+```
+
+### 8. Final Output
+
+```
+PR review posted
+├─ PR: #{number} ({title})
+├─ Event: {APPROVE|COMMENT|REQUEST_CHANGES}
+├─ Findings: Critical {N} / Warning {N} / Info {N}
+└─ URL: {PR URL from gh pr view}
+```
+
+## Notes
+
+- **User confirmation required**: Never post to GitHub without explicit user approval.
+- **Not idempotent**: Running multiple times on the same PR creates additional review comments (GitHub does not deduplicate).
+- **Respects existing reviews**: This command does not dismiss or override other reviewers' reviews.
+- **Rate limits**: Uses a single `gh pr review` call. No rate limit concerns for normal usage.
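The event-selection rule in pr-comment.md step 6 is a pure function of the three finding counts; a minimal sketch follows (the function name is mine, not part of the command):

```shell
# review_event CRITICAL WARNING INFO
# Step 6 mapping: any Critical → REQUEST_CHANGES; otherwise any
# Warning/Info → COMMENT; no findings at all → APPROVE.
review_event() {
  crit=$1; warn=$2; info=$3
  if [ "$crit" -gt 0 ]; then
    echo "REQUEST_CHANGES"
  elif [ "$warn" -gt 0 ] || [ "$info" -gt 0 ]; then
    echo "COMMENT"
  else
    echo "APPROVE"
  fi
}
```

Note that the `--severity` filter runs before this decision, so filtering out all Critical findings also downgrades the event.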
package/commands/release-notes.md
ADDED

@@ -0,0 +1,219 @@
+---
+name: afc:release-notes
+description: "Generate user-facing release notes from git history"
+argument-hint: "[v1.0.0..v2.0.0 | v2.0.0 | --post]"
+allowed-tools:
+  - Read
+  - Bash
+  - Glob
+  - Grep
+model: sonnet
+---
+
+# /afc:release-notes — Generate Release Notes
+
+> Rewrites commit/PR history into user-facing release notes and optionally publishes to GitHub Releases.
+> This is a **standalone utility** — not part of the auto pipeline.
+> Does NOT modify local files (CHANGELOG updates are handled by `/afc:launch`).
+
+## Arguments
+
+- `$ARGUMENTS` — (optional) Version range and flags
+  - `v2.3.0..v2.4.0` — specific tag range
+  - `v2.4.0` — from that tag to HEAD
+  - Not specified — auto-detect last tag to HEAD
+  - `--post` — publish to GitHub Releases after preview (can combine with any range)
+
+Parse the arguments:
+1. Extract `--post` flag if present
+2. Parse remaining as version range:
+   - If contains `..`: split into `{from_tag}..{to_tag}`
+   - If single version: use `{version}..HEAD`
+   - If empty: auto-detect with `git describe --tags --abbrev=0`
+
+## Execution Steps
+
+### 1. Determine Range
+
+```bash
+# Auto-detect last tag if no range specified
+git describe --tags --abbrev=0 2>/dev/null
+```
+
+- If a tag is found, set `from_tag` to that tag, `to_tag` to HEAD
+- If no tags exist, inform the user: "No tags found. Include all commits? (y/n)"
+- If a `to_tag` is specified (not HEAD), verify it exists: `git rev-parse --verify {to_tag} 2>/dev/null`
+- Determine the version label for the notes header:
+  - If `to_tag` is a version tag: use it (e.g., `v2.4.0`)
+  - If `to_tag` is HEAD: use "Unreleased" or the `from_tag` bumped (ask user)
+
+### 2. Collect Raw Data
+
+Run these commands to gather change context:
+
+```bash
+# Commit history (no merges)
+git log {from_tag}..{to_tag} --pretty=format:"%H %s" --no-merges
+
+# Merged PRs since the from_tag date
+from_date=$(git log -1 --format=%aI {from_tag})
+gh pr list --state merged --search "merged:>$from_date" --json number,title,author,labels,body --limit 100
+```
+
+If `gh` is not available or the repo has no remote, skip PR collection — proceed with git-only data.
+
+### 3. Detect Breaking Changes
+
+Search commit messages and PR titles for breaking change indicators:
+
+- Patterns: `BREAKING`, `BREAKING CHANGE`, `!:` (conventional commits `feat!:`, `fix!:`)
+- Also check PR labels for: `breaking`, `breaking-change`, `semver-major`
+
+Flag any matches for the Breaking Changes section.
+
+### 4. Categorize and Rewrite
+
+Categorize each commit/PR into one of:
+
+| Category | Conventional Commit Prefixes | Fallback Heuristics |
+|----------|------------------------------|---------------------|
+| Breaking Changes | `!:` suffix, `BREAKING` | Label: `breaking` |
+| New Features | `feat:` | "add", "new", "implement", "support" |
+| Bug Fixes | `fix:` | "fix", "resolve", "correct", "patch" |
+| Other Changes | `chore:`, `docs:`, `ci:`, `refactor:`, `perf:`, `test:`, `style:`, `build:` | Everything else |
+
+**Rewriting rules** — transform each entry from developer-speak to user-facing language:
+
+1. Remove conventional commit prefixes (`feat:`, `fix(scope):`, etc.)
+2. Rewrite in terms of **what the user experiences**, not what the developer changed
+   - Bad: "Refactor ThemeProvider to use context API"
+   - Good: "Improved theme switching reliability"
+   - Bad: "Fix race condition in afc-state.sh"
+   - Good: "Fixed an issue where pipeline state could become corrupted during concurrent execution"
+3. Merge related commits into a single entry when they address the same feature/fix
+4. Include PR number references where available: `(#42)`
+5. For breaking changes, add a brief migration note after the description
+
+### 5. Collect Contributor Info
+
+Build a contributors section:
+
+```bash
+# Get commit authors with their commit counts
+git log {from_tag}..{to_tag} --format="%aN" | sort | uniq -c | sort -rn
+
+# Resolve repo identity for GitHub username lookup
+gh repo view --json owner,name --jq '"\(.owner.login)/\(.name)"'
+
+# Map git authors to GitHub usernames via commit SHAs
+git log {from_tag}..{to_tag} --format="%H" | head -100 | while read sha; do
+  gh api "repos/{owner}/{repo}/commits/$sha" --jq '.author.login // empty' 2>/dev/null
+done | sort -u
+```
+
+For each contributor:
+- Try to resolve GitHub username via the commit SHA lookup above, or from PR author data collected in step 2
+- List their PR numbers if available
+- Fall back to git author name if no GitHub username found
+- If `gh` is not available, skip username resolution entirely — use git author names
+
+### 6. Compose Release Notes
+
+Assemble the final release notes in this format:
+
+```markdown
+# {version}
+
+{2-3 sentence summary: what is the most important thing in this release? Written for end users.}
+
+## Breaking Changes
+
+- {description + migration guide}
+
+## New Features
+
+- {user-facing description} (#{pr_number})
+
+## Bug Fixes
+
+- {user-facing description} (#{pr_number})
+
+## Other Changes
+
+- {description} (#{pr_number})
+
+## Contributors
+
+{contributor list with @mentions and PR numbers}
+```
+
+**Format rules**:
+- Omit empty sections entirely (if no breaking changes, skip that section)
+- Breaking Changes section always comes first when present
+- Each entry is a single bullet point, max 2 sentences
+- Summary paragraph should highlight the top 1-2 changes
+- Contributors section: `@username (#42, #45)` format, or `Name (#42)` if no GitHub username
+
+### 7. Preview Output
+
+Display the complete release notes to the user in the console.
+
+Print a summary:
+```
+Release notes generated
+├─ Version: {version}
+├─ Range: {from_tag}..{to_tag}
+├─ Commits: {N}
+├─ PRs referenced: {N}
+├─ Breaking changes: {count or "none"}
+├─ Contributors: {N}
+└─ --post: {will publish / preview only}
+```
+
+### 8. Publish to GitHub Releases (if --post)
+
+If `--post` flag is present:
+
+1. Determine the tag name:
+   - If `to_tag` is a specific tag: use it
+   - If `to_tag` is HEAD: ask the user what tag to create (suggest next version)
+
+2. Ask user to confirm using AskUserQuestion:
+   - **Post as-is** — Publish the GitHub Release immediately
+   - **Post as draft** — Create as draft release (can review and publish later from GitHub)
+   - **Edit first** — Let me modify the notes before posting
+   - **Cancel** — Do not post
+
+3. On approval, write to a temp file and publish:
+```bash
+tmp_file=$(mktemp)
+cat > "$tmp_file" << 'NOTES_EOF'
+{release notes content}
+NOTES_EOF
+# Add --draft if user chose "Post as draft"
+gh release create {tag} --notes-file "$tmp_file" --title "{version}" [--draft]
+rm -f "$tmp_file"
+```
+
+4. Print result:
+```
+GitHub Release published
+├─ Tag: {tag}
+├─ Title: {version}
+├─ Status: {published / draft}
+└─ URL: {release URL from gh output}
+```
+
+If `--post` is not present, print:
+```
+To publish these notes as a GitHub Release, run again with --post flag.
+```
+
+## Notes
+
+- **Read-only by default**: Without `--post`, this command only outputs to console. No files are created or modified.
+- **Complements `/afc:launch`**: Use `launch` for local artifact generation (CHANGELOG, README). Use `release-notes` for GitHub Release publishing.
+- **Conventional Commits aware**: Projects using conventional commits get better categorization. Projects without them still get reasonable results via heuristic matching.
+- **Contributor attribution**: Best effort. GitHub username resolution requires `gh` CLI and repository access.
+- **User confirmation required**: `--post` always asks for explicit approval before publishing to GitHub.
+- **Idempotency**: Running without `--post` is safe to repeat. With `--post`, creating a release for an existing tag will fail (GitHub does not allow duplicate releases for a tag).
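The categorization table in release-notes.md step 4 — conventional-commit prefixes first, then keyword heuristics — could be sketched as a `case` dispatch. This is an illustrative sketch of the rules, not the command's actual implementation:

```shell
# categorize SUBJECT — map a commit subject line to a release-notes
# section: breaking-change markers first, then conventional-commit
# prefixes, then the fallback keyword heuristics from the table.
categorize() {
  case $1 in
    *'BREAKING'*|*'!:'*)                       echo "Breaking Changes" ;;
    feat:*|'feat('*'):'*)                      echo "New Features" ;;
    fix:*|'fix('*'):'*)                        echo "Bug Fixes" ;;
    chore:*|docs:*|ci:*|refactor:*|perf:*|test:*|style:*|build:*) echo "Other Changes" ;;
    [Aa]dd*|[Nn]ew*|[Ii]mplement*|[Ss]upport*) echo "New Features" ;;
    [Ff]ix*|[Rr]esolve*|[Cc]orrect*|[Pp]atch*) echo "Bug Fixes" ;;
    *)                                         echo "Other Changes" ;;
  esac
}
```

Ordering matters: the breaking-change arm must come first so that `feat!:` is never swallowed by the plain `feat:` arm, and the prefix arms must precede the looser keyword heuristics.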
package/commands/triage.md
ADDED
@@ -0,0 +1,222 @@
+---
+name: afc:triage
+description: "Parallel triage of open PRs and issues"
+argument-hint: "[scope: --pr, --issue, --all (default), or specific numbers]"
+allowed-tools:
+  - Read
+  - Grep
+  - Glob
+  - Bash
+  - Task
+  - Write
+model: sonnet
+---
+
+# /afc:triage — PR & Issue Triage
+
+> Collects open PRs and issues, analyzes them in parallel, and produces a priority-ranked triage report.
+> Uses lightweight analysis first (no checkout), then selective deep analysis with worktree isolation for PRs that require build/test verification.
+
+## Arguments
+
+- `$ARGUMENTS` — (optional) Triage scope
+  - `--pr` — PRs only
+  - `--issue` — Issues only
+  - `--all` — Both PRs and issues (default)
+  - Specific numbers (e.g., `#42 #43`) — Analyze only those items
+  - `--deep` — Force deep analysis (worktree) for all PRs
+
+## Execution Steps
+
+### 1. Collect Targets
+
+Run the metadata collection script:
+
+```bash
+"${CLAUDE_PLUGIN_ROOT}/scripts/afc-triage.sh" "$ARGUMENTS"
+```
+
+This returns JSON with PR/issue metadata. If the script is not available, fall back to direct `gh` commands:
+
+```bash
+# PRs
+gh pr list --json number,title,headRefName,author,labels,additions,deletions,changedFiles,createdAt,updatedAt,reviewDecision,isDraft --limit 50
+
+# Issues
+gh issue list --json number,title,labels,author,createdAt,updatedAt,comments --limit 50
+```
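Before fanning work out to agents, the collector's JSON can be split into one work unit per PR. A minimal sketch of that step, assuming `jq` is installed — the field names follow the `gh pr list --json` query above, but the sample payload is made up:

```shell
#!/bin/bash
# Hypothetical fan-out helper: one "PR #N: title" line per open PR,
# ready to interpolate into a Phase 1 agent prompt.
# The payload below is illustrative, not real gh output.
TRIAGE_JSON='{"prs":[{"number":42,"title":"Fix auth redirect","changedFiles":3},
                     {"number":43,"title":"Update docs","changedFiles":1}],"issues":[]}'

printf '%s\n' "$TRIAGE_JSON" |
  jq -r '.prs[] | "PR #\(.number): \(.title) (\(.changedFiles) files)"'
# prints:
# PR #42: Fix auth redirect (3 files)
# PR #43: Update docs (1 files)
```

Each emitted line maps to one parallel Task in the Phase 1 step that follows.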
+### 2. Phase 1 — Lightweight Parallel Analysis (no checkout)
+
+For each PR/issue, gather analysis data **without** checking out branches:
+
+#### PR Analysis (parallel — one agent per PR, max 5 concurrent)
+
+Spawn parallel agents in a **single message**:
+
+```
+Task("Triage PR #{number}: {title}", subagent_type: "general-purpose",
+  prompt: "Analyze this PR without checking out the branch.
+
+PR #{number}: {title}
+Author: {author}
+Branch: {headRefName}
+Changed files: {changedFiles}, +{additions}/-{deletions}
+Labels: {labels}
+Review status: {reviewDecision}
+Draft: {isDraft}
+
+Steps:
+1. Run: gh pr diff {number}
+2. Run: gh pr view {number} --comments
+3. Analyze the diff for:
+   - What the PR does (1-2 sentence summary)
+   - Risk level: Critical (core logic, auth, data) / Medium (features, UI) / Low (docs, config, tests)
+   - Complexity: High (>10 files or cross-cutting) / Medium (3-10 files) / Low (<3 files)
+   - Whether build/test verification is needed (yes/no + reason)
+   - Potential issues or concerns (max 3)
+   - Suggested reviewers or labels if obvious
+
+Output as structured text:
+SUMMARY: ...
+RISK: Critical|Medium|Low
+COMPLEXITY: High|Medium|Low
+NEEDS_DEEP: yes|no
+DEEP_REASON: ... (if yes)
+CONCERNS: ...
+SUGGESTION: ...")
+```
+
+#### Issue Analysis (parallel — one agent per batch of 5 issues)
+
+```
+Task("Triage Issues #{n1}-#{n5}", subagent_type: "general-purpose",
+  prompt: "Analyze these issues:
+{issue list with titles, labels, comment count}
+
+For each issue:
+1. Read issue body and comments: gh issue view {number} --comments
+2. Classify:
+   - Type: Bug / Feature / Enhancement / Question / Maintenance
+   - Priority: P0 (blocking) / P1 (important) / P2 (nice-to-have) / P3 (backlog)
+   - Estimated effort: Small (< 1 day) / Medium (1-3 days) / Large (3+ days)
+   - Related PRs (if any mentioned)
+3. One-line summary
+
+Output as structured text per issue:
+ISSUE #{number}: {title}
+TYPE: ...
+PRIORITY: P0|P1|P2|P3
+EFFORT: Small|Medium|Large
+RELATED_PR: #N or none
+SUMMARY: ...")
+```
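Batching issues five at a time, as described above, can be sketched in plain bash (the issue numbers here are made up, purely for illustration):

```shell
#!/bin/bash
# Illustrative batching: split issue numbers into groups of 5,
# one group per Phase 1 issue-analysis agent. Numbers are made up.
issues=(101 102 103 104 105 106 107)
batch_size=5

for ((i = 0; i < ${#issues[@]}; i += batch_size)); do
  echo "batch: ${issues[*]:i:batch_size}"
done
# prints:
# batch: 101 102 103 104 105
# batch: 106 107
```

Each printed batch corresponds to one `Task("Triage Issues …")` call; a partial final batch is still dispatched.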
+### 3. Phase 2 — Selective Deep Analysis (worktree, optional)
+
+From Phase 1 results, identify PRs where `NEEDS_DEEP: yes`.
+
+For each deep-analysis PR, spawn a **worktree-isolated agent**:
+
+```
+Task("Deep triage PR #{number}", subagent_type: "afc:afc-pr-analyst",
+  isolation: "worktree",
+  prompt: "Deep-analyze PR #{number} ({title}).
+
+Branch: {headRefName}
+Phase 1 concerns: {concerns from Phase 1}
+
+Steps:
+1. Checkout the PR branch: gh pr checkout {number}
+2. Run project CI/test commands if available (from .claude/afc.config.md or CLAUDE.md)
+3. Check for type errors, lint issues, test failures
+4. Analyze architectural impact
+5. Report findings
+
+Output:
+BUILD_STATUS: pass|fail|skip
+TEST_STATUS: pass|fail|skip (N passed, M failed)
+LINT_STATUS: pass|fail|skip
+DEEP_FINDINGS: ...
+RECOMMENDATION: merge|request-changes|needs-discussion")
+```
+
+**Important**: Launch at most 3 worktree agents concurrently to avoid resource contention.
+
+If the `--deep` flag was specified, run Phase 2 for **all** PRs regardless of Phase 1 classification.
+### 4. Consolidate Triage Report
+
+Merge Phase 1 and Phase 2 results into a single report:
+
+```markdown
+# Triage Report
+
+> Date: {YYYY-MM-DD}
+> Repository: {owner/repo}
+> Scope: {PRs: N, Issues: M}
+
+## PRs ({count})
+
+### Priority Actions
+
+| # | Title | Risk | Complexity | Status | Action |
+|---|-------|------|------------|--------|--------|
+| {sorted by: Critical first, then by staleness} |
+
+### PR Details
+
+#### PR #{number}: {title}
+- **Author**: {author} | **Branch**: {branch}
+- **Changes**: +{add}/-{del} across {files} files
+- **Risk**: {risk} | **Complexity**: {complexity}
+- **Summary**: {summary}
+- **Concerns**: {concerns or "None"}
+- **Deep analysis**: {findings if Phase 2 ran, otherwise "Skipped"}
+- **Recommendation**: {recommendation}
+
+## Issues ({count})
+
+### By Priority
+
+| Priority | # | Title | Type | Effort | Related PR |
+|----------|---|-------|------|--------|------------|
+| {sorted by priority, then by creation date} |
+
+## Summary
+
+- **Immediate attention**: {list of Critical PRs and P0 issues}
+- **Ready to merge**: {PRs with no concerns and passing checks}
+- **Needs discussion**: {PRs/issues requiring team input}
+- **Stale items**: {PRs/issues with no activity > 14 days}
+```
+### 5. Save Report
+
+Save the triage report:
+
+```
+.claude/afc/memory/triage/{YYYY-MM-DD}.md
+```
+
+If a previous triage report exists for today, overwrite it.
+
+### 6. Final Output
+
+```
+Triage complete
+├─ PRs analyzed: {N} (deep: {M})
+├─ Issues analyzed: {N}
+├─ Immediate attention: {count}
+├─ Ready to merge: {count}
+├─ Report: .claude/afc/memory/triage/{date}.md
+└─ Duration: {elapsed}
+```
+
+## Notes
+
+- **Read-only**: Triage does not modify any code, merge PRs, or close issues.
+- **Rate limits**: `gh` API calls are rate-limited. For repos with 50+ open items, consider using `--pr` or `--issue` to reduce scope.
+- **Worktree cleanup**: Worktree agents auto-clean on completion. If a worktree is left behind, use `git worktree prune`.
+- **NEVER use `run_in_background: true` on Phase 1 Task calls**: agents must run in foreground so results are collected before consolidation. Phase 2 worktree agents also run in foreground.
+- **Parallel limits**: Phase 1 — max 5 concurrent agents. Phase 2 — max 3 concurrent worktree agents.
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "all-for-claudecode",
-  "version": "2.4.0",
+  "version": "2.5.0",
   "description": "Claude Code plugin that automates the full dev cycle — spec, plan, implement, review, clean.",
   "bin": {
     "all-for-claudecode": "bin/cli.mjs"
@@ -34,7 +34,7 @@
     "url": "https://github.com/jhlee0409/all-for-claudecode/issues"
   },
   "scripts": {
-    "lint": "shellcheck -x --source-path=scripts scripts/*.sh && bash scripts/afc-schema-validate.sh --all && bash scripts/afc-consistency-check.sh",
+    "lint": "shellcheck -x --source-path=scripts --severity=warning scripts/*.sh && bash scripts/afc-schema-validate.sh --all && bash scripts/afc-consistency-check.sh",
     "test": "vendor/shellspec/shellspec",
     "test:all": "npm run lint && npm run test",
     "setup:test": "bash scripts/install-shellspec.sh"
package/scripts/afc-blast-radius.sh
CHANGED
@@ -213,136 +213,58 @@ if [ -s "$HOOKS_REFS" ]; then
   mv "$TMPDIR_WORK/hooks_unique.txt" "$HOOKS_REFS"
 fi
 
-# ── Cycle detection
-#
-#
-# So we detect cycles in the "sources" graph: edge from sourcer to sourced
+# ── Cycle detection ──────────────────────────────────────
+# Detect cycles using reachability: for each edge A→B, check if B can reach A.
+# Simple and portable — no file-based DFS or md5 hashing needed.
 
 CYCLE_FOUND=0
-NODES_FILE="$TMPDIR_WORK/nodes.txt"
-: > "$NODES_FILE"
 
-# Collect all unique nodes from deps
 if [ -s "$ALL_DEPS" ]; then
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-  # Map hash back to name
-  printf '%s\t%s\n' "$node_hash" "$node" >> "$TMPDIR_WORK/hash_map.txt"
-  done < "$NODES_FILE"
-
-  # DFS from each white node
-  dfs_visit() {
-    local start_hash="$1"
-    local stack_file="$TMPDIR_WORK/dfs_stack.txt"
-    local path_file="$TMPDIR_WORK/dfs_path.txt"
-    printf '%s\n' "$start_hash" > "$stack_file"
-    : > "$path_file"
-
-    while [ -s "$stack_file" ]; do
-      current_hash="$(tail -1 "$stack_file")"
-
-      color_file="$COLOR_DIR/$current_hash"
-      if [ ! -f "$color_file" ]; then
-        # Remove from stack
-        sed -i '' '$d' "$stack_file" 2>/dev/null || sed -i '$d' "$stack_file"
+  # Check each edge: if A sources B, does B (transitively) source A?
+  while IFS=$'\t' read -r src dst || [ -n "$src" ]; do
+    [ -z "$src" ] || [ -z "$dst" ] && continue
+
+    # BFS from dst to see if we can reach src
+    visited="$TMPDIR_WORK/visited.txt"
+    queue="$TMPDIR_WORK/queue.txt"
+    printf '%s\n' "$dst" > "$queue"
+    : > "$visited"
+
+    while [ -s "$queue" ]; do
+      current=$(head -1 "$queue")
+      # Remove first line from queue
+      tail -n +2 "$queue" > "$TMPDIR_WORK/queue_tmp.txt"
+      mv "$TMPDIR_WORK/queue_tmp.txt" "$queue"
+
+      # Skip if already visited
+      if grep -qxF "$current" "$visited" 2>/dev/null; then
         continue
       fi
+      printf '%s\n' "$current" >> "$visited"
+
+      # Check if we reached src → cycle
+      if [ "$current" = "$src" ]; then
+        CYCLE_FOUND=1
+        # Build cycle path from visited trail
+        cycle_str="CYCLE: ${src} -> ${dst} -> ${src}"
+        printf '%s\n' "$cycle_str" > "$CYCLE_RESULT"
+        break
+      fi
 
-
-
-
-
-
-      printf '%s\n' "$current_hash" >> "$path_file"
-
-      # Get current node name
-      current_name=$(grep -E "^${current_hash}\t" "$TMPDIR_WORK/hash_map.txt" 2>/dev/null | cut -f2 | head -1 || true)
-      [ -z "$current_name" ] && { sed -i '' '$d' "$stack_file" 2>/dev/null || sed -i '$d' "$stack_file"; continue; }
-
-      # Find neighbors (files this script sources)
-      has_neighbor=0
-      while IFS=$'\t' read -r src dst || [ -n "$src" ]; do
-        if [ "$src" = "$current_name" ]; then
-          dst_hash=$(printf '%s' "$dst" | md5sum 2>/dev/null | cut -d' ' -f1 || printf '%s' "$dst" | md5 2>/dev/null || printf '%s' "$dst" | tr '/' '_')
-          dst_color_file="$COLOR_DIR/$dst_hash"
-          [ ! -f "$dst_color_file" ] && continue
-
-          dst_color="$(cat "$dst_color_file")"
-          if [ "$dst_color" = "1" ]; then
-            # Back edge found -> cycle
-            CYCLE_FOUND=1
-            # Reconstruct cycle path
-            dst_name=$(grep -E "^${dst_hash}\t" "$TMPDIR_WORK/hash_map.txt" 2>/dev/null | cut -f2 | head -1 || true)
-            cycle_str="CYCLE:"
-            in_cycle=0
-            while IFS= read -r ph || [ -n "$ph" ]; do
-              if [ "$ph" = "$dst_hash" ]; then
-                in_cycle=1
-              fi
-              if [ "$in_cycle" -eq 1 ]; then
-                pname=$(grep -E "^${ph}\t" "$TMPDIR_WORK/hash_map.txt" 2>/dev/null | cut -f2 | head -1 || true)
-                if [ -n "$pname" ]; then
-                  cycle_str="${cycle_str} ${pname} ->"
-                fi
-              fi
-            done < "$path_file"
-            cycle_str="${cycle_str} ${dst_name}"
-            printf '%s\n' "$cycle_str" > "$CYCLE_RESULT"
-            return
-          elif [ "$dst_color" = "0" ]; then
-            printf '%s\n' "$dst_hash" >> "$stack_file"
-            has_neighbor=1
-          fi
+      # Enqueue neighbors (files that current sources)
+      while IFS=$'\t' read -r s d || [ -n "$s" ]; do
+        if [ "$s" = "$current" ] && [ -n "$d" ]; then
+          if ! grep -qxF "$d" "$visited" 2>/dev/null; then
+            printf '%s\n' "$d" >> "$queue"
          fi
-      done < "$ALL_DEPS"
-
-      if [ "$has_neighbor" -eq 0 ]; then
-        # Leaf node, mark BLACK
-        printf '2' > "$color_file"
-        sed -i '' '$d' "$stack_file" 2>/dev/null || sed -i '$d' "$stack_file"
-        sed -i '' '$d' "$path_file" 2>/dev/null || sed -i '$d' "$path_file"
        fi
-
-      # Returning from recursion, mark BLACK
-      printf '2' > "$color_file"
-      sed -i '' '$d' "$stack_file" 2>/dev/null || sed -i '$d' "$stack_file"
-      sed -i '' '$d' "$path_file" 2>/dev/null || sed -i '$d' "$path_file"
-    else
-      # Already BLACK, skip
-      sed -i '' '$d' "$stack_file" 2>/dev/null || sed -i '$d' "$stack_file"
-    fi
+      done < "$ALL_DEPS"
     done
-
-
-
-  [ -z "$node" ] && continue
-  node_hash=$(printf '%s' "$node" | md5sum 2>/dev/null | cut -d' ' -f1 || printf '%s' "$node" | md5 2>/dev/null || printf '%s' "$node" | tr '/' '_')
-  color_file="$COLOR_DIR/$node_hash"
-  [ ! -f "$color_file" ] && continue
-  color="$(cat "$color_file")"
-  if [ "$color" = "0" ]; then
-    dfs_visit "$node_hash"
-    if [ "$CYCLE_FOUND" -eq 1 ]; then
-      break
-    fi
+
+    if [ "$CYCLE_FOUND" -eq 1 ]; then
+      break
    fi
-done < "$
+  done < "$ALL_DEPS"
 fi
 
 # ── Generate Report ──────────────────────────────────────
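The reachability idea behind the new cycle check can be demonstrated standalone. The following is a toy sketch, not the plugin's actual code — it uses an inline edge list instead of the script's `$ALL_DEPS` temp file and an in-memory queue instead of temp files, but the same tab-separated `src`→`dst` shape:

```shell
#!/bin/bash
# Toy demo of the edge-reachability cycle check:
# for each edge src→dst, walk forward from dst; reaching src again is a cycle.
EDGES=$'a\tb\nb\tc\nc\ta'   # a→b→c→a : contains a cycle

cycle_found=0
while IFS=$'\t' read -r src dst; do
  queue="$dst"
  visited=""
  while [ -n "$queue" ]; do
    current="${queue%% *}"                 # pop head of space-separated queue
    queue="${queue#"$current"}"
    queue="${queue# }"
    case " $visited " in *" $current "*) continue ;; esac
    visited="$visited $current"
    if [ "$current" = "$src" ]; then       # walked back to the edge's source
      cycle_found=1
      break
    fi
    while IFS=$'\t' read -r s d; do        # enqueue everything current points to
      [ "$s" = "$current" ] && queue="${queue:+$queue }$d"
    done <<< "$EDGES"
  done
  [ "$cycle_found" -eq 1 ] && break
done <<< "$EDGES"

echo "cycle_found=$cycle_found"   # prints: cycle_found=1
```

The per-edge BFS is quadratic in the number of edges, which is fine at the scale of a plugin's `source` graph and avoids the bookkeeping the removed DFS needed.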
package/scripts/afc-consistency-check.sh
CHANGED
@@ -243,7 +243,7 @@ check_phase_ssot() {
   # Sub-check B: Every command name should map to a valid phase or be a known non-phase command
   # Non-phase commands that are not pipeline phases
   # NOTE: Update this list when adding non-phase commands to commands/
-  local non_phase_cmds="auto|init|doctor|principles|checkpoint|resume|launch|ideate|research|architect|security|debug|analyze|validate|test|consult"
+  local non_phase_cmds="auto|init|doctor|principles|checkpoint|resume|launch|ideate|research|architect|security|debug|analyze|validate|test|consult|triage|pr-comment|release-notes"
   local commands_dir="$PROJECT_DIR/commands"
   if [ -d "$commands_dir" ]; then
     for cmd_file in "$commands_dir"/*.md; do
package/scripts/afc-subagent-context.sh
CHANGED
@@ -32,7 +32,7 @@ FEATURE=$(afc_state_read feature || echo "unknown")
 PHASE=$(afc_state_read phase || echo "unknown")
 
 # 3. Build context string
-CONTEXT="[AFC PIPELINE] Feature: $FEATURE | Phase: $PHASE"
+CONTEXT="[AFC PIPELINE] Feature: $FEATURE | Phase: $PHASE | [AFC] When this task matches an AFC skill (analyze, implement, review, debug, test, plan, spec, research, ideate), use the Skill tool to invoke it. Do not substitute with raw Task agents. When analyzing external systems, verify against official documentation."
 
 # 4. Extract config sections from afc.config.md
 CONFIG_FILE="$PROJECT_DIR/.claude/afc.config.md"
package/scripts/afc-triage.sh
ADDED
@@ -0,0 +1,131 @@
+#!/bin/bash
+set -euo pipefail
+
+# Triage Metadata Collector: Gathers PR and issue metadata via gh CLI
+#
+# Usage: afc-triage.sh [--pr|--issue|--all|#N #M ...]
+#   --pr      Collect PRs only
+#   --issue   Collect issues only
++#   --all     Collect both (default)
+#   #N #M     Collect specific items by number
+#
+# Output: JSON object with "prs" and "issues" arrays to stdout
+# Exit 0: success
+# Exit 1: gh CLI not available or API error
+
+# shellcheck disable=SC2329
+cleanup() {
+  :
+}
+trap cleanup EXIT
+
+# ── Check prerequisites ──────────────────────────────────
+
+if ! command -v gh >/dev/null 2>&1; then
+  printf '[afc:triage] Error: gh CLI not found. Install from https://cli.github.com/\n' >&2
+  exit 1
+fi
+
+if ! gh auth status >/dev/null 2>&1; then
+  printf '[afc:triage] Error: gh not authenticated. Run: gh auth login\n' >&2
+  exit 1
+fi
+
+# ── Parse arguments ──────────────────────────────────────
+
+MODE="all"
+SPECIFIC_NUMBERS=()
+DEEP_FLAG="false"
+
+for arg in "$@"; do
+  case "$arg" in
+    --pr) MODE="pr" ;;
+    --issue) MODE="issue" ;;
+    --all) MODE="all" ;;
+    --deep) DEEP_FLAG="true" ;;
+    \#*)
+      # Strip leading # and add number
+      num="${arg#\#}"
+      if [[ "$num" =~ ^[0-9]+$ ]]; then
+        SPECIFIC_NUMBERS+=("$num")
+      fi
+      ;;
+    *)
+      # Try as plain number
+      if [[ "$arg" =~ ^[0-9]+$ ]]; then
+        SPECIFIC_NUMBERS+=("$arg")
+      fi
+      ;;
+  esac
+done
+
+# ── Collect metadata ─────────────────────────────────────
+
+PR_JSON="[]"
+ISSUE_JSON="[]"
+
+if [ ${#SPECIFIC_NUMBERS[@]} -gt 0 ]; then
+  # Specific items: try each as PR first, then as issue
+  PR_ITEMS="[]"
+  ISSUE_ITEMS="[]"
+
+  for num in "${SPECIFIC_NUMBERS[@]}"; do
+    # Try as PR
+    pr_data=""
+    pr_data=$(gh pr view "$num" --json number,title,headRefName,author,labels,additions,deletions,changedFiles,createdAt,updatedAt,reviewDecision,isDraft 2>/dev/null || true)
+
+    if [ -n "$pr_data" ]; then
+      if command -v jq >/dev/null 2>&1; then
+        PR_ITEMS=$(printf '%s\n' "$PR_ITEMS" | jq --argjson item "$pr_data" '. + [$item]')
+      else
+        # Fallback: append raw JSON (best effort)
+        PR_ITEMS="$pr_data"
+      fi
+    else
+      # Try as issue
+      issue_data=""
+      issue_data=$(gh issue view "$num" --json number,title,labels,author,createdAt,updatedAt,comments 2>/dev/null || true)
+
+      if [ -n "$issue_data" ]; then
+        if command -v jq >/dev/null 2>&1; then
+          ISSUE_ITEMS=$(printf '%s\n' "$ISSUE_ITEMS" | jq --argjson item "$issue_data" '. + [$item]')
+        else
+          ISSUE_ITEMS="$issue_data"
+        fi
+      else
+        printf '[afc:triage] Warning: #%s not found as PR or issue\n' "$num" >&2
+      fi
+    fi
+  done
+
+  PR_JSON="$PR_ITEMS"
+  ISSUE_JSON="$ISSUE_ITEMS"
+else
+  # Bulk collection by mode
+  if [ "$MODE" = "pr" ] || [ "$MODE" = "all" ]; then
+    PR_JSON=$(gh pr list --json number,title,headRefName,author,labels,additions,deletions,changedFiles,createdAt,updatedAt,reviewDecision,isDraft --limit 50 2>/dev/null || printf '[]')
+  fi
+
+  if [ "$MODE" = "issue" ] || [ "$MODE" = "all" ]; then
+    ISSUE_JSON=$(gh issue list --json number,title,labels,author,createdAt,updatedAt,comments --limit 50 2>/dev/null || printf '[]')
+  fi
+fi
+
+# ── Build output ─────────────────────────────────────────
+
+if command -v jq >/dev/null 2>&1; then
+  jq -n \
+    --argjson prs "$PR_JSON" \
+    --argjson issues "$ISSUE_JSON" \
+    --arg deep "$DEEP_FLAG" \
+    '{prs: $prs, issues: $issues, deep: ($deep == "true"), collectedAt: now | todate}'
+else
+  # Fallback: construct JSON manually
+  printf '{"prs":%s,"issues":%s,"deep":%s,"collectedAt":"%s"}\n' \
+    "$PR_JSON" \
+    "$ISSUE_JSON" \
+    "$DEEP_FLAG" \
+    "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
+fi
+
+exit 0
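To see the shape of the no-`jq` fallback envelope without touching the GitHub API, the `printf` format can be exercised with placeholder values (the timestamp is hardcoded here for determinism; the script itself uses `date -u`):

```shell
#!/bin/bash
# Reproduce the collector's no-jq fallback envelope with placeholder data.
# The arrays would normally come from `gh pr list` / `gh issue list`.
PR_JSON='[]'
ISSUE_JSON='[]'
DEEP_FLAG='false'

printf '{"prs":%s,"issues":%s,"deep":%s,"collectedAt":"%s"}\n' \
  "$PR_JSON" "$ISSUE_JSON" "$DEEP_FLAG" "2025-01-01T00:00:00Z"
# prints: {"prs":[],"issues":[],"deep":false,"collectedAt":"2025-01-01T00:00:00Z"}
```

Note that in the fallback branch `deep` is spliced in unquoted, which only stays valid JSON because `DEEP_FLAG` is restricted to the literals `true`/`false`.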
package/scripts/afc-user-prompt-submit.sh
CHANGED
@@ -1,9 +1,10 @@
 #!/bin/bash
 set -euo pipefail
 
-# UserPromptSubmit Hook:
-#
-#
+# UserPromptSubmit Hook: Two modes of operation:
+#   1. Pipeline INACTIVE: No action (routing handled by CLAUDE.md intent-based skill table)
+#   2. Pipeline ACTIVE: Inject Phase/Feature context + drift checkpoint at thresholds
+# Exit 0 immediately if no action needed (minimize overhead)
 
 # shellcheck source=afc-state.sh
 . "$(dirname "$0")/afc-state.sh"
@@ -17,11 +18,15 @@ trap cleanup EXIT
 # Consume stdin (required -- pipe breaks if not consumed)
 cat > /dev/null
 
-#
+# --- Branch: Pipeline INACTIVE → no action needed ---
+# Routing is handled by CLAUDE.md intent-based skill routing table.
+# The main model classifies user intent natively — no hook-level keyword matching.
 if ! afc_state_is_active; then
   exit 0
 fi
 
+# --- Branch: Pipeline ACTIVE → existing Phase/Feature context ---
+
 # Read Feature/Phase + JSON-safe processing (strip special characters)
 FEATURE="$(afc_state_read feature || echo '')"
 FEATURE="$(printf '%s' "$FEATURE" | tr -d '"' | cut -c1-100)"