@mikulgohil/ai-kit 1.3.2 → 1.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +20 -15
- package/agents/ci-debugger.md +136 -0
- package/commands/learn-from-pr.md +174 -0
- package/commands/pr-description.md +134 -0
- package/commands/release-notes.md +212 -0
- package/commands/scaffold-spec.md +167 -0
- package/commands/standup.md +131 -0
- package/commands/test-gaps.md +182 -0
- package/commands/upgrade.md +186 -0
- package/dist/index.js +903 -1
- package/dist/index.js.map +1 -1
- package/package.json +1 -1
- package/templates/github-action-ai-review.yml +279 -0
package/README.md
CHANGED
@@ -23,12 +23,12 @@ Every team using AI coding assistants hits these problems. AI Kit solves each on
 |---|---------|---------------------|
 | 1 | **AI forgets everything each session** — Every new chat starts from zero. No memory of project rules, patterns, or past decisions. | Generates a persistent `CLAUDE.md` with project rules, conventions, and stack details. The AI knows your project from the first prompt, every time. |
 | 2 | **AI generates wrong framework patterns** — Writes Pages Router code when you use App Router. Uses CSS when you use Tailwind. Creates default exports when your project uses named exports. | Auto-detects your exact stack (framework, router, CMS, styling, TypeScript config) and generates rules specific to your setup. The AI can't use the wrong patterns. |
-| 3 | **Developers write bad prompts** — Vague or incorrect prompts lead to wrong code, wasted time, and rework. Junior developers waste the most time. | Ships **
+| 3 | **Developers write bad prompts** — Vague or incorrect prompts lead to wrong code, wasted time, and rework. Junior developers waste the most time. | Ships **46 pre-built skills** so developers don't write prompts from scratch — just run `/review`, `/security-check`, `/new-component`, `/refactor`, etc. |
 | 4 | **Same mistakes happen repeatedly** — No system to track what went wrong, so the team keeps hitting the same build failures and lint errors. | Generates a **mistakes log** (`docs/mistakes-log.md`) with **auto-capture hook** that logs every build/lint failure automatically. The AI references it to avoid repeating them. |
 | 5 | **Every developer gets different AI behavior** — No consistency in how the team uses AI tools, leading to inconsistent code quality and style. | One `ai-kit init` command generates the same rules for the entire team — everyone's AI follows identical project standards. Commit the generated files to the repo. |
 | 6 | **No quality checks on AI-generated code** — AI output goes straight to PR without type checking, linting, or security review. | Automated **hooks** run formatting, type-checking, linting, and git safety checks in real-time as the AI writes code. **Quality gate** runs everything before merge. |
 | 7 | **AI generates insecure code** — No guardrails for secrets exposure, XSS, SQL injection, or other vulnerabilities. AI doesn't scan its own output. | Built-in **security audit** scans for exposed secrets, OWASP risks, and misconfigurations. **Security review agent** catches issues at development time, not production. |
-| 8 | **AI can't handle multi-file reasoning** — Changes to one component break related files. AI loses context across linked models and shared types. | **
+| 8 | **AI can't handle multi-file reasoning** — Changes to one component break related files. AI loses context across linked models and shared types. | **10 specialized agents** with focused expertise — planner, code-reviewer, build-resolver, doc-updater, refactor-cleaner — each maintains context for their domain. |
 | 9 | **No decision trail** — Nobody remembers why a technical decision was made 3 months ago. Knowledge walks out the door when developers leave. | Auto-scaffolds a **decisions log** (`docs/decisions-log.md`) to capture what was decided, why, and by whom — fully searchable and traceable. |
 | 10 | **Onboarding takes too long** — New developers spend days understanding the project and its AI setup before they can contribute. | AI Kit generates developer guides and project-aware configurations — new team members get productive AI assistance from day one with zero manual setup. |
 | 11 | **Context gets repeated every conversation** — You explain the same conventions in every session: import order, naming, component structure, testing patterns. | All conventions are encoded in the generated rules file. The AI reads them automatically at session start. You explain once, it remembers forever. |
@@ -64,8 +64,8 @@ npx @mikulgohil/ai-kit health
 |---|---|
 | `CLAUDE.md` | Project-aware rules for Claude Code — your stack, conventions, and patterns |
 | `.cursorrules` + `.cursor/rules/*.mdc` | Same rules formatted for Cursor AI with scoped file matching |
-
-
+| 46 Skills | Auto-discovered workflows — `/review`, `/new-component`, `/security-check`, `/pre-pr`, and 42 more |
+| 10 Agents | Specialized AI assistants — planner, reviewer, security, E2E, build-resolver, ci-debugger, and more |
 | 3 Context Modes | Switch between dev (build fast), review (check quality), and research (understand code) |
 | Automated Hooks | Auto-format, TypeScript checks, console.log warnings, mistakes auto-capture, git safety |
 | 6 Guides | Developer playbooks for prompts, tokens, hooks, agents, Figma workflow |
@@ -89,21 +89,21 @@ Scans your `package.json`, config files, and directory structure to detect your
 | Turborepo monorepo | Workspace conventions, cross-package imports |
 | Figma + design tokens | Token mapping, design-to-code workflow |
 
-###
+### 46 Pre-Built Skills
 
 Structured AI workflows applied automatically — the AI recognizes what you're doing and loads the right skill:
 
 | Category | Skills |
 |---|---|
 | Getting Started | `prompt-help`, `understand` |
-| Building | `new-component`, `new-page`, `api-route`, `error-boundary`, `extract-hook`, `figma-to-code`, `design-tokens`, `schema-gen`, `storybook-gen` |
-| Quality & Review | `review`, `pre-pr`, `test`, `accessibility-audit`, `security-check`, `responsive-check`, `type-fix`, `perf-audit`, `bundle-check`, `i18n-check` |
-| Maintenance | `fix-bug`, `refactor`, `optimize`, `migrate`, `dep-check`, `sitecore-debug` |
-| Workflow | `document`, `commit-msg`, `env-setup`, `changelog`, `release` |
+| Building | `new-component`, `new-page`, `api-route`, `error-boundary`, `extract-hook`, `figma-to-code`, `design-tokens`, `schema-gen`, `storybook-gen`, `scaffold-spec` |
+| Quality & Review | `review`, `pre-pr`, `test`, `accessibility-audit`, `security-check`, `responsive-check`, `type-fix`, `perf-audit`, `bundle-check`, `i18n-check`, `test-gaps` |
+| Maintenance | `fix-bug`, `refactor`, `optimize`, `migrate`, `dep-check`, `sitecore-debug`, `upgrade` |
+| Workflow | `document`, `commit-msg`, `env-setup`, `changelog`, `release`, `pr-description`, `standup`, `learn-from-pr`, `release-notes` |
 | Session | `save-session`, `resume-session`, `checkpoint` |
 | Orchestration | `orchestrate`, `quality-gate`, `harness-audit` |
 
-###
+### 10 Specialized Agents
 
 | Agent | Purpose | Conditional |
 |---|---|---|
@@ -113,6 +113,8 @@ Structured AI workflows applied automatically — the AI recognizes what you're
 | `build-resolver` | Diagnose and fix build/type errors | No |
 | `doc-updater` | Keep documentation in sync with code | No |
 | `refactor-cleaner` | Find and remove dead code | No |
+| `tdd-guide` | Test-driven development guidance and workflow | No |
+| `ci-debugger` | CI/CD failure debugger — analyzes logs and suggests fixes | No |
 | `e2e-runner` | Playwright tests with Page Object Model | Yes — Playwright only |
 | `sitecore-specialist` | Sitecore XM Cloud patterns and debugging | Yes — Sitecore only |
 
@@ -121,8 +123,8 @@ Structured AI workflows applied automatically — the AI recognizes what you're
 | Profile | What Runs Automatically |
 |---|---|
 | Minimal | Auto-format + git push safety |
-| Standard | + TypeScript type-check + console.log warnings + mistakes auto-capture |
-| Strict | + ESLint check + stop-time console.log audit |
+| Standard | + TypeScript type-check + console.log warnings + mistakes auto-capture + bundle impact warning |
+| Strict | + ESLint check + stop-time console.log audit + pre-commit AI review + bundle impact warning |
 
 **Mistakes auto-capture** — When a build/lint command fails, the hook logs the error to `docs/mistakes-log.md` with timestamp and error preview. The mistakes log builds itself over time.
 
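A minimal sketch of the auto-capture behavior described above — append a timestamped entry with an error preview whenever a command fails. The log path matches the README; the entry format and `log_failure` helper are illustrative assumptions, not ai-kit's exact implementation:

```shell
# Hypothetical stand-in for the auto-capture hook: on a failed build/lint
# command, append a timestamped entry to docs/mistakes-log.md.
proj=$(mktemp -d)
mkdir -p "$proj/docs"
log_failure() {
  # $1 = error preview; mirrors the "timestamp + error preview" behavior
  echo "- $(date -u +%F) — $1" >> "$proj/docs/mistakes-log.md"
}
sh -c 'exit 1' || log_failure "build failed (exit $?)"
cat "$proj/docs/mistakes-log.md"
```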
@@ -176,6 +178,9 @@ Period summaries, budget progress with alerts, per-project cost breakdown, week-
 | `ai-kit tokens` | Token usage summary and cost estimates |
 | `ai-kit stats [path]` | Project complexity metrics |
 | `ai-kit export [path]` | Export rules to Windsurf, Aider, Cline |
+| `ai-kit patterns [path]` | Generate pattern library from recurring code patterns |
+| `ai-kit dead-code [path]` | Find unused components and dead code |
+| `ai-kit drift [path]` | Detect drift between code and .ai.md docs |
 
 ---
@@ -242,11 +247,11 @@ Only content between `AI-KIT:START/END` markers is refreshed. Your custom rules
 | Page | What You'll Learn |
 |---|---|
 | [Getting Started](https://mikulgohil.github.io/ai-kit-docs/getting-started) | Step-by-step setup walkthrough |
-| [CLI Reference](https://mikulgohil.github.io/ai-kit-docs/cli-reference) | All
-| [Skills & Commands](https://mikulgohil.github.io/ai-kit-docs/slash-commands) | All
+| [CLI Reference](https://mikulgohil.github.io/ai-kit-docs/cli-reference) | All 13 commands with examples |
+| [Skills & Commands](https://mikulgohil.github.io/ai-kit-docs/slash-commands) | All 46 skills with usage guides |
 | [What Gets Generated](https://mikulgohil.github.io/ai-kit-docs/what-gets-generated) | Detailed breakdown of every generated file |
 | [Hooks](https://mikulgohil.github.io/ai-kit-docs/hooks) | Hook profiles, mistakes auto-capture |
-| [Agents](https://mikulgohil.github.io/ai-kit-docs/agents) |
+| [Agents](https://mikulgohil.github.io/ai-kit-docs/agents) | 10 specialized agents |
 | [Changelog](https://mikulgohil.github.io/ai-kit-docs/changelog) | Version history and release notes |
 
 ---
package/agents/ci-debugger.md
ADDED
@@ -0,0 +1,136 @@
+---
+name: ci-debugger
+description: CI/CD failure debugger — analyzes pipeline logs, identifies root causes, and suggests fixes for GitHub Actions, Vercel, and Netlify failures.
+tools: Read, Edit, Glob, Grep, Bash
+---
+
+# CI Failure Debugger
+
+You are a CI/CD failure specialist. Analyze pipeline logs, identify root causes, and apply targeted fixes for GitHub Actions, Vercel, and Netlify deployments.
+
+## Process
+
+### 1. Parse CI Log
+
+- Obtain the full CI log (from terminal output, log file, or CI platform URL)
+- Identify the **error type** from the log output:
+  - **Build failure** — compilation, bundling, or asset generation errors
+  - **Test failure** — unit, integration, or e2e test assertions
+  - **Lint failure** — ESLint, Prettier, or type-check violations
+  - **Deploy failure** — deployment target rejections, permission errors, or resource limits
+  - **Timeout** — job exceeded time limit, hanging processes, or infinite loops
+  - **Infrastructure** — runner unavailable, Docker issues, or service container failures
+- Extract the **first error** in the log — later errors are often cascading symptoms
+- Note the **exit code**, **failed step name**, and **runner environment** (OS, Node version, package manager)
+
+### 2. Diagnose by Platform
+
+#### GitHub Actions
+
+- Check workflow YAML syntax: indentation, `uses` action versions, `with` parameters
+- Verify `runs-on` runner availability (e.g., `ubuntu-latest` vs pinned versions)
+- Check `actions/checkout` depth — shallow clones can break git-dependent tools
+- Inspect secret and environment variable availability per job/environment
+- Review `if` conditionals and job dependency chains (`needs`)
+- Check for action version deprecations (`set-output`, `save-state`, Node 16 actions)
+- Examine concurrency settings — jobs may be cancelled by newer runs
+- Review caching: `actions/cache` key mismatches, cache size limits (10 GB)
+- Check permissions: `GITHUB_TOKEN` scope, `permissions` block in workflow
+
+#### Vercel
+
+- Check build command and output directory in `vercel.json` or project settings
+- Verify framework detection — wrong framework = wrong build pipeline
+- Review environment variables: check if they are set for Preview vs Production
+- Check function size limits (50 MB compressed) and serverless function timeout
+- Inspect `vercel build` output for missing dependencies or peer dep warnings
+- Edge Runtime errors: verify API routes use supported Node.js APIs
+- Check `maxDuration` for serverless functions (default varies by plan)
+- Review redirects/rewrites — syntax errors cause silent deployment failures
+
+#### Netlify
+
+- Check `netlify.toml` for build command, publish directory, and plugin configuration
+- Verify build image — check Node.js version via `NODE_VERSION` env var or `.node-version`
+- Review Netlify Functions directory and bundling (esbuild vs zip-it-and-ship-it)
+- Check deploy context settings (production, deploy-preview, branch-deploy)
+- Inspect plugin errors — community plugins can fail silently or break builds
+- Review redirect rules — `_redirects` file vs `netlify.toml` conflicts
+- Check bandwidth and build minute limits on the current plan
+
+#### Generic CI (Jenkins, CircleCI, GitLab CI, etc.)
+
+- Check pipeline configuration syntax and stage ordering
+- Verify Docker image availability and version compatibility
+- Review artifact passing between stages
+- Check for resource constraints (memory, disk, CPU)
+
+### 3. Common CI Failures
+
+#### Node.js Version Mismatch
+
+- **Symptom**: `SyntaxError: Unexpected token`, unsupported API calls, or engine incompatibility
+- **Check**: Compare CI runner Node version with `.nvmrc`, `.node-version`, `package.json` `engines` field
+- **Fix**: Pin Node version in CI config using `actions/setup-node`, `NODE_VERSION` env, or engine-strict
+
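One hedged way to apply that pin on GitHub Actions is to read the Node version from the repo's own version file, so CI and local environments cannot drift (step layout is illustrative):

```yaml
# Sketch: pin the runner's Node version to the repo's .nvmrc
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version-file: '.nvmrc'   # or: node-version: '20'
```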
+#### Missing Environment Variables
+
+- **Symptom**: `undefined` values at build time, API connection failures, empty config
+- **Check**: Compare required env vars (from `.env.example` or docs) against CI platform secrets
+- **Fix**: Add missing secrets in CI platform settings; verify they are exposed to the correct step/environment
+
+#### Dependency Conflicts
+
+- **Symptom**: `ERESOLVE`, peer dependency warnings, lockfile out of date
+- **Check**: Compare `package-lock.json` / `pnpm-lock.yaml` / `yarn.lock` with `package.json`
+- **Fix**: Regenerate lockfile locally, or pin conflicting dependency versions; avoid `--legacy-peer-deps` in CI unless truly necessary
+
+#### Out of Memory (OOM)
+
+- **Symptom**: `FATAL ERROR: Heap limit`, `JavaScript heap out of memory`, `Killed` with exit code 137
+- **Check**: Build process memory usage, number of parallel processes, large asset processing
+- **Fix**: Increase `NODE_OPTIONS=--max-old-space-size=4096`, reduce parallelism, split large builds, or use a larger runner
+
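The heap-size fix above can be sketched as a job-level environment variable in a GitHub Actions workflow; 4096 MB is a starting point, not a recommendation:

```yaml
# Sketch: raise the V8 heap limit for every step in the job
env:
  NODE_OPTIONS: --max-old-space-size=4096
```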
+#### Timeout
+
+- **Symptom**: Job cancelled after time limit, `ETIMEDOUT`, hanging step with no output
+- **Check**: Network calls to external services, long-running test suites, missing test cleanup, deadlocks
+- **Fix**: Add timeout limits to individual steps, mock external services, parallelize test suites, check for hanging processes
+
+#### Cache Invalidation
+
+- **Symptom**: Stale dependencies, "works locally but fails in CI", intermittent build failures
+- **Check**: Cache key strategy — does it include lockfile hash? Is the cache corrupted?
+- **Fix**: Bust the cache by changing the key prefix, verify restore-keys fallback chain, clear platform cache manually if needed
+
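A hedged sketch of the cache-key strategy the checklist asks about — key on the lockfile hash so dependency changes bust the cache, with a `restore-keys` prefix as the stale-but-close fallback (paths assume npm):

```yaml
# Sketch: lockfile-hash cache key with a prefix fallback chain
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-
```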
+#### Permission and Authentication Errors
+
+- **Symptom**: `403 Forbidden`, `401 Unauthorized`, `Permission denied`, deploy token expired
+- **Check**: Token expiration dates, repository access scopes, OIDC configuration
+- **Fix**: Rotate tokens/secrets, verify `permissions` block in GitHub Actions, check deploy key read/write access
+
+#### Lockfile Drift
+
+- **Symptom**: `The lockfile is not up to date`, `--frozen-lockfile` failures
+- **Check**: Someone modified `package.json` without running install, or different package manager versions
+- **Fix**: Run `npm ci` / `pnpm install --frozen-lockfile` locally to verify, commit the updated lockfile
+
+### 4. Apply Fix
+
+- Identify the **root cause**, not just the failing line
+- Make the minimal targeted change to resolve the failure
+- If the fix is in CI config (workflow YAML, `vercel.json`, `netlify.toml`), validate syntax before committing
+- If the fix is in application code, verify it passes locally first
+- Suggest re-running the pipeline to confirm the fix
+- If the failure is flaky (intermittent), identify the non-deterministic source and add resilience (retries, mocks, deterministic seeds)
+
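Step 1's "extract the first error" heuristic can be sketched as a single grep over the raw log; the sample log and the marker list are illustrative, not exhaustive:

```shell
# Sketch: grab the first matching error line — later errors are often
# cascading symptoms of the first failure.
cat > /tmp/ci.log <<'EOF'
[12:01:07] installing dependencies
[12:02:11] Error: Cannot find module 'left-pad'
[12:02:12] Error: build step failed
EOF
grep -m1 -E 'Error:|ERROR|FATAL|npm ERR!' /tmp/ci.log
# prints: [12:02:11] Error: Cannot find module 'left-pad'
```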
+## Rules
+
+- Always read the **full log output** before diagnosing — do not jump to conclusions from partial output
+- Fix the **root cause**, not the symptom — suppressing errors or adding retries without understanding why is not a fix
+- Verify the fix by suggesting a pipeline re-run — never assume a fix works without validation
+- Check for **cascading failures** — the first error often causes many others; fix the first one and re-evaluate
+- Do not hardcode secrets or tokens in workflow files — always use platform secret management
+- When modifying CI config, preserve existing caching and optimization strategies unless they are the cause
+- Two-attempt rule: if an approach fails twice, try a different strategy
+- Document the failure and fix so the team can learn from it
package/commands/learn-from-pr.md
ADDED
@@ -0,0 +1,174 @@
+# Learn From PR Reviews
+
+> **Role**: You are an engineering coach who extracts recurring patterns and actionable lessons from pull request review feedback. You transform one-off code review comments into reusable rules, coding standards, and team knowledge that prevents the same feedback from being given twice.
+> **Goal**: Read PR review comments for a given PR number, categorize the feedback by type, identify recurring patterns, and suggest concrete rules that can be added to CLAUDE.md or team coding standards.
+
+## Mandatory Steps
+
+You MUST follow these steps in order. Do not skip any step.
+
+1. **Get PR Details** — Use `$ARGUMENTS` as the PR number. Run `gh pr view [PR-number] --json title,body,state,baseRefName,headRefName,author,reviewDecision` to get PR context.
+2. **Get Review Comments** — Run `gh api repos/{owner}/{repo}/pulls/[PR-number]/reviews` to get review summaries. Then run `gh api repos/{owner}/{repo}/pulls/[PR-number]/comments` to get inline review comments with file paths and line numbers.
+3. **Get PR Diff** — Run `gh pr diff [PR-number]` to see the actual code changes that were reviewed. This provides context for understanding the feedback.
+4. **Read Comment Context** — For each review comment, note the file path, line number, comment body, and whether it was resolved or not.
+5. **Categorize Feedback** — Group each comment into categories:
+   - **Style**: Naming, formatting, code organization
+   - **Bug**: Logic errors, missing edge cases, incorrect behavior
+   - **Performance**: Unnecessary re-renders, N+1 queries, bundle size
+   - **Security**: XSS, injection, auth gaps, exposed secrets
+   - **Pattern**: Project conventions, architecture decisions, design patterns
+   - **Testing**: Missing tests, inadequate coverage, test quality
+   - **Types**: TypeScript issues, missing types, incorrect generics
+   - **Docs**: Missing documentation, unclear comments
+6. **Identify Patterns** — Look for recurring themes across comments. If 2+ comments address the same underlying issue, that is a pattern worth codifying as a rule.
+7. **Generate Rules** — For each identified pattern, write a concrete, enforceable rule in the format used by CLAUDE.md project instructions.
+8. **Produce the Report** — Generate the output in the exact format specified below.
+
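Steps 5-6 boil down to a frequency count over categorized comments. A minimal sketch with made-up sample data — in a real run the rows would come from the `gh api` calls in step 2:

```shell
# Sketch: one "category,file" row per review comment; 2+ hits in a
# category suggests a pattern worth codifying as a rule.
cat > /tmp/comments.csv <<'EOF'
pattern,src/app/orders/page.tsx
pattern,src/components/OrderList.tsx
style,src/components/OrderCard.tsx
bug,src/lib/totals.ts
EOF
cut -d, -f1 /tmp/comments.csv | sort | uniq -c | sort -rn
```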
+## Analysis Checklist
+
+### Comment Classification
+- Is the feedback about a specific bug or a general practice?
+- Is it a "must fix" (blocking) or "consider changing" (suggestion)?
+- Does it reference a project convention or an industry best practice?
+- Has similar feedback appeared in past PRs? (pattern indicator)
+
+### Pattern Detection
+- Same reviewer giving the same type of feedback across multiple files
+- Multiple reviewers pointing out the same category of issue
+- Feedback that references "we always do X" or "our convention is Y"
+- Comments that suggest adding a lint rule or automated check
+
+### Rule Quality
+- Is the rule specific enough to be actionable? (not "write better code")
+- Can the rule be checked mechanically? (lint rule, pre-commit hook)
+- Does the rule have a clear "do this / don't do this" example?
+- Would the rule prevent the same feedback in future PRs?
+
+## Output Format
+
+You MUST structure your response exactly as follows:
+
+```
+## PR Review Analysis
+
+**PR**: #[number] — [title]
+**Author**: [author]
+**Reviewers**: [list of reviewers]
+**Decision**: [approved | changes_requested | commented]
+**Total Comments**: X review comments
+
+---
+
+## Review Feedback Summary
+
+### By Category
+| Category | Count | Severity Breakdown |
+|----------|-------|--------------------|
+| Pattern | 5 | 3 must-fix, 2 suggestions |
+| Bug | 2 | 2 must-fix |
+| Style | 3 | 3 suggestions |
+| Types | 1 | 1 must-fix |
+
+### Individual Comments
+
+#### [Must Fix] Pattern: Use Server Components for data fetching
+**File**: `src/app/orders/page.tsx:15`
+**Reviewer**: @senior-dev
+**Comment**: "This should be a Server Component — we don't fetch data in Client Components unless there's a user interaction trigger."
+**Resolution**: [Resolved | Unresolved]
+
+#### [Suggestion] Style: Prefer named exports over default exports
+**File**: `src/components/OrderCard.tsx:1`
+**Reviewer**: @tech-lead
+**Comment**: "Our convention is named exports for components. Default exports make refactoring harder."
+**Resolution**: [Resolved | Unresolved]
+
+[Continue for all comments...]
+
+---
+
+## Patterns Identified
+
+### Pattern 1: Server Components for Data Fetching
+**Frequency**: 3 comments across 2 files
+**Rule**: Components that fetch data on mount should be Server Components. Only use Client Components for data fetching when triggered by user interaction (search, pagination, form submission).
+**Evidence**:
+- Comment on `page.tsx:15`: "This should be a Server Component"
+- Comment on `OrderList.tsx:8`: "Move this fetch to a Server Component parent"
+- Comment on `UserProfile.tsx:22`: "Same pattern — fetch in server, pass as props"
+
+### Pattern 2: Missing Error Boundaries
+**Frequency**: 2 comments across 2 files
+**Rule**: Every page-level component must have an `error.tsx` boundary. Components that fetch data must handle loading and error states explicitly.
+**Evidence**:
+- Comment on `page.tsx:30`: "What happens if this API call fails?"
+- Comment on `OrderList.tsx:45`: "Add an error boundary here"
+
+---
+
+## Suggested Rules (for CLAUDE.md)
+
+Add these to your project's CLAUDE.md or coding standards:
+
+### Rule 1: Server vs Client Data Fetching
+```markdown
+## Data Fetching Convention
+- Fetch data in Server Components by default
+- Only use Client Component data fetching for user-triggered actions (search, filters, pagination)
+- Never use `useEffect` for initial data loading — use Server Components or route handlers
+```
+
+### Rule 2: Error Boundary Coverage
+```markdown
+## Error Handling Convention
+- Every route segment must have an `error.tsx` boundary
+- Components that display async data must handle: loading, error, empty, and success states
+- Use `<Suspense>` with meaningful fallbacks, not blank screens
+```
+
+### Rule 3: Export Convention
+```markdown
+## Export Convention
+- Use named exports for all components and utilities: `export function Button()` not `export default function Button()`
+- Exception: Next.js page/layout files that require default exports
+```
+
+---
+
+## Action Items
+
+### Immediate (apply to current codebase)
+- [ ] Add `error.tsx` boundaries to [specific routes found missing]
+- [ ] Refactor [specific components] from Client to Server Components
+- [ ] Convert default exports to named exports in [specific files]
+
+### Process Improvements
+- [ ] Add ESLint rule: [specific rule if available, e.g., `no-default-export`]
+- [ ] Update CLAUDE.md with the [X] rules identified above
+- [ ] Add pre-commit check for [specific pattern]
+
+### Knowledge Sharing
+- [ ] Document the Server/Client Component decision tree for the team
+- [ ] Add examples of proper error boundary usage to component templates
+```
+
+## Self-Check
+
+Before responding, verify:
+- [ ] You read ALL review comments, not just the first few
+- [ ] You categorized every comment (none left unclassified)
+- [ ] You identified patterns (repeated feedback themes), not just listed individual comments
+- [ ] Suggested rules are specific and actionable, not generic advice
+- [ ] Rules include concrete "do this / don't do this" examples
+- [ ] Action items reference specific files or components in the project
+
+## Constraints
+
+- Do NOT fabricate review comments — only report what actually exists on the PR.
+- Do NOT generate rules from a single comment unless it addresses a critical issue (security, data loss).
+- Do NOT suggest overly broad rules (e.g., "write clean code") — every rule must be specific enough to check.
+- Do NOT ignore unresolved comments — flag them as requiring follow-up.
+- If the PR has no review comments, report that clearly and suggest requesting a review.
+- Use the GitHub CLI (`gh`) for all GitHub API interactions, not raw curl commands.
+
+Target: $ARGUMENTS
package/commands/pr-description.md
@@ -0,0 +1,134 @@
# PR Description Generator

> **Role**: You are a senior engineer who writes structured, reviewer-friendly pull request descriptions. You understand that a good PR description saves review time, documents decisions, and helps future developers understand why changes were made.
> **Goal**: Analyze the diff between the current branch and the target branch, then generate a complete PR description with summary, file-level changes, component impact analysis, breaking changes, and a test plan.

## Mandatory Steps

You MUST follow these steps in order. Do not skip any step.

1. **Determine the Target Branch** — If `$ARGUMENTS` specifies a branch, use it. Otherwise default to `main`. Run `git rev-parse --abbrev-ref HEAD` to identify the current branch.
2. **Get the Diff** — Run `git diff [target-branch]...HEAD --stat` for a file summary, and `git diff [target-branch]...HEAD --name-status` for added/modified/deleted classification.
3. **Read the Commit History** — Run `git log [target-branch]..HEAD --oneline --no-merges` to understand the progression of changes.
4. **Read Changed Files** — Read every modified and added file completely. Do not summarize from filenames alone — you must understand the actual code changes.
5. **Analyze Component Impact** — Identify which components, hooks, utilities, pages, or API routes were changed. Trace imports to find downstream consumers that may be affected.
6. **Check for Breaking Changes** — Look for renamed exports, changed function signatures, removed props, modified API response shapes, changed environment variables, or database schema changes.
7. **Generate Test Plan** — Based on the changes, create a specific test checklist covering happy paths, error states, edge cases, and regression scenarios.
8. **Produce the PR Description** — Generate the output in the exact format specified below.

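Steps 1-3 above can be sketched as a small shell helper. This is an illustrative sketch only: the `gather_pr_context` name and the positional target-branch argument are assumptions, not part of the command spec.

```shell
# Illustrative sketch of steps 1-3; the function name and the
# positional target-branch argument are assumptions.
gather_pr_context() {
  TARGET="${1:-main}"                            # step 1: default to main
  git rev-parse --abbrev-ref HEAD                # step 1: current branch
  git diff "$TARGET"...HEAD --stat               # step 2: file summary
  git diff "$TARGET"...HEAD --name-status        # step 2: A/M/D per file
  git log "$TARGET"..HEAD --oneline --no-merges  # step 3: commit history
}
```

Note that the three-dot `git diff A...HEAD` form diffs against the merge base, so commits that landed on the target branch after the feature branch diverged are not reported as changes.
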
## Analysis Checklist

### Change Classification
- New files vs modified files vs deleted files
- Feature code vs test code vs config vs documentation
- Client-side vs server-side changes
- Component changes vs utility/hook changes vs page-level changes

### Impact Assessment
- Which components are directly changed?
- Which components import from changed files (downstream impact)?
- Are shared utilities or hooks modified (wide blast radius)?
- Are types or interfaces changed that other files depend on?

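For the downstream-impact question, a rough text search can surface importers. This sketch assumes a `src/` tree and TypeScript import syntax; matching on the module basename is deliberately crude, and a real dependency-graph tool would be more precise.

```shell
# Rough, illustrative helper: list files under src/ that import from a
# changed module. "$1" is the module path without extension, e.g.
# src/lib/api/orders. Basename matching is a simplifying assumption.
find_importers() {
  module=$(basename "$1")
  grep -rlE "from ['\"][^'\"]*${module}['\"]" src \
    --include='*.ts' --include='*.tsx'
}
```
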
### Breaking Change Detection
- Exported function signatures changed (added required params, changed return type)
- Component props added as required or removed
- API route request/response shape changed
- Environment variables added or renamed
- CSS class names or design tokens changed
- Database schema or migration changes

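As a hypothetical instance of the first item, echoing the `getOrders()` example used in the output template below: adding a required parameter to an exported function breaks existing call sites, while a defaulted parameter does not.

```typescript
// Hypothetical module illustrating the first check above.
type Order = { id: number };

// Before:           export function getOrders(): Order[]
// Breaking change:  export function getOrders(page: number): Order[]
// Non-breaking:     a default value keeps zero-argument callers compiling.
export function getOrders(page: number = 1): Order[] {
  return [{ id: page }];
}

// Existing callers that predate the parameter still work unchanged.
const firstPage = getOrders();
```
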
### Context Gathering
- Related Jira/ticket numbers from commit messages or branch name
- Screenshots needed for UI changes
- Migration steps needed for breaking changes
- Feature flags involved

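Ticket extraction from a branch name can be approximated with a regex. This helper is an assumption about naming conventions (uppercase project key, dash, digits), not something the command mandates; it would typically be applied to `$(git rev-parse --abbrev-ref HEAD)`.

```shell
# Illustrative: pull a JIRA-style ticket id (e.g. ABC-123) out of a
# string such as a branch name; prints nothing if no ticket is present.
ticket_from() {
  printf '%s\n' "$1" | grep -oE '[A-Z][A-Z0-9]+-[0-9]+' | head -n1
}
```
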
## Output Format

You MUST structure your response exactly as follows:

```
## Summary

- [1-sentence description of what this PR does and why]
- [Key technical decision or approach taken]
- [Scope: X files changed, Y added, Z deleted]
- [Related ticket: JIRA-XXX (extracted from branch name or commits)]
- [Risk level: Low/Medium/High — based on blast radius and complexity]

## Changes

| File | Status | What Changed |
|------|--------|--------------|
| `src/components/Button/Button.tsx` | Modified | Added `variant` prop for outlined style |
| `src/components/Button/Button.test.tsx` | Added | Tests for new variant prop |
| `src/lib/api/orders.ts` | Modified | Added pagination support to `getOrders()` |

## Component Impact

### Directly Changed
- **Button** — Added `variant` prop (optional, backward compatible)
- **OrderList** — Updated to use paginated API

### Downstream Impact
- **ProductCard** — Uses Button, no changes needed (variant is optional)
- **CheckoutPage** — Uses OrderList, verify pagination renders correctly

## Breaking Changes

> None — all changes are backward compatible.

_OR if breaking changes exist:_

> **Yes — the following changes require consumer updates:**

| Change | Affected Files | Migration |
|--------|---------------|-----------|
| `getOrders()` now requires `page` param | `OrderList.tsx`, `OrderHistory.tsx` | Add `page: 1` as default argument |
| `Button` `type` prop renamed to `variant` | All Button consumers | Find-replace `type=` → `variant=` |

## Test Plan

### Automated Tests
- [ ] All existing tests pass (`npm run test:run`)
- [ ] New tests added for [specific feature]
- [ ] Test coverage maintained or improved

### Manual Testing
- [ ] [Specific user flow to test, e.g., "Navigate to /orders, verify pagination controls appear"]
- [ ] [Error state: e.g., "Disconnect network, verify error message displays"]
- [ ] [Edge case: e.g., "Test with 0 items, 1 item, and 100+ items"]
- [ ] [Responsive: e.g., "Check layout on mobile (375px) and desktop (1440px)"]

### Regression Check
- [ ] [Related feature that should still work, e.g., "Existing order filtering still works"]
- [ ] [Downstream component: e.g., "ProductCard button renders correctly"]

## Screenshots

> [Attach before/after screenshots for any UI changes]
> [If no UI changes, write "No UI changes in this PR"]
```

## Self-Check

Before responding, verify:
- [ ] You read the actual diff, not just file names
- [ ] You read every changed file before summarizing
- [ ] Your summary is specific to these changes, not generic
- [ ] You identified ALL downstream components that import from changed files
- [ ] Breaking changes are correctly identified (or explicitly marked as none)
- [ ] Test plan items are specific to the actual changes, not boilerplate
- [ ] File status (Added/Modified/Deleted) is accurate

## Constraints

- Do NOT generate a generic PR template — every section must be specific to the actual changes in this branch.
- Do NOT guess at what files contain — read them before describing changes.
- Do NOT mark changes as "non-breaking" if exported interfaces, function signatures, or required props changed.
- Do NOT include test plan items that are not relevant to the changes (e.g., don't suggest "test accessibility" if no UI was changed).
- Keep the summary concise — 3-5 bullets maximum. Use the Changes table for details.
- If the diff is empty, report "No changes found between current branch and [target]" and stop.

Target: $ARGUMENTS