forgedev 1.1.3 → 1.2.0

This diff shows the changes between publicly released versions of this package, as published to its registry.
Files changed (53)
  1. package/README.md +2 -1
  2. package/bin/devforge.js +2 -1
  3. package/docs/00-README.md +310 -0
  4. package/docs/01-universal-prompt-library.md +1049 -0
  5. package/docs/02-claude-code-mastery-playbook.md +283 -0
  6. package/docs/03-multi-agent-verification.md +565 -0
  7. package/docs/04-errata-and-verification-checklist.md +284 -0
  8. package/docs/05-universal-scaffolder-vision.md +452 -0
  9. package/docs/06-confidence-assessment-and-repo-prompt.md +407 -0
  10. package/docs/errata.md +58 -0
  11. package/docs/multi-agent-verification.md +66 -0
  12. package/docs/plans/.gitkeep +0 -0
  13. package/docs/playbook.md +95 -0
  14. package/docs/prompt-library.md +160 -0
  15. package/docs/uat/UAT_CHECKLIST.csv +9 -0
  16. package/docs/uat/UAT_TEMPLATE.md +163 -0
  17. package/package.json +10 -2
  18. package/src/claude-configurator.js +1 -0
  19. package/src/cli.js +5 -5
  20. package/src/index.js +3 -3
  21. package/src/utils.js +1 -1
  22. package/templates/base/docs/plans/.gitkeep +0 -0
  23. package/templates/base/docs/uat/UAT_CHECKLIST.csv.template +2 -0
  24. package/templates/base/docs/uat/UAT_TEMPLATE.md.template +22 -0
  25. package/templates/claude-code/agents/build-error-resolver.md +3 -2
  26. package/templates/claude-code/agents/code-quality-reviewer.md +1 -1
  27. package/templates/claude-code/agents/database-reviewer.md +1 -1
  28. package/templates/claude-code/agents/doc-updater.md +1 -1
  29. package/templates/claude-code/agents/harness-optimizer.md +26 -0
  30. package/templates/claude-code/agents/loop-operator.md +2 -1
  31. package/templates/claude-code/agents/product-strategist.md +124 -0
  32. package/templates/claude-code/agents/security-reviewer.md +1 -0
  33. package/templates/claude-code/agents/spec-validator.md +31 -1
  34. package/templates/claude-code/agents/uat-validator.md +4 -0
  35. package/templates/claude-code/claude-md/base.md +1 -0
  36. package/templates/claude-code/claude-md/nextjs.md +1 -1
  37. package/templates/claude-code/commands/code-review.md +7 -1
  38. package/templates/claude-code/commands/full-audit.md +3 -2
  39. package/templates/claude-code/commands/workflows.md +3 -0
  40. package/templates/claude-code/hooks/scripts/autofix-polyglot.mjs +20 -10
  41. package/templates/claude-code/hooks/scripts/autofix-python.mjs +3 -4
  42. package/templates/claude-code/hooks/scripts/autofix-typescript.mjs +3 -3
  43. package/templates/claude-code/hooks/scripts/guard-protected-files.mjs +2 -2
  44. package/templates/claude-code/skills/git-workflow/SKILL.md +2 -2
  45. package/templates/claude-code/skills/nextjs/SKILL.md +1 -1
  46. package/templates/claude-code/skills/playwright/SKILL.md +6 -5
  47. package/templates/claude-code/skills/security-web/SKILL.md +1 -0
  48. package/templates/infra/github-actions/.github/workflows/ci.yml.template +49 -0
  49. package/templates/testing/pytest/backend/tests/__init__.py +0 -0
  50. package/templates/testing/pytest/backend/tests/conftest.py.template +11 -0
  51. package/templates/testing/pytest/backend/tests/test_health.py.template +10 -0
  52. package/templates/testing/vitest/vitest.config.ts.template +18 -0
  53. package/CLAUDE.md +0 -38
package/docs/prompt-library.md ADDED
@@ -0,0 +1,160 @@
+ # DevForge Prompt Library
+
+ 8 workflow guides for developing DevForge. Each workflow includes the exact prompts to use.
+
+ ---
+
+ ## Flow 1: Add a New Stack
+
+ When you want to add support for a new tech stack (e.g., Hono, React+Vite, Express).
+
+ **Step 1: Plan**
+ ```
+ I want to add [stack] support to DevForge. Enter plan mode. Research:
+ 1. What files/directories are needed for a typical [stack] project
+ 2. What dependencies go in package.json / requirements.txt
+ 3. What the recommender decision tree should look like
+ Write a plan to docs/plans/add-[stack].md
+ ```
+
+ **Step 2: Templates**
+ ```
+ Following the plan in docs/plans/add-[stack].md, create template files in
+ templates/[category]/[stack]/. Use {{VARIABLE_NAME}} for substitution.
+ Follow the patterns in existing templates like templates/frontend/nextjs/.
+ ```
+
+ **Step 3: Recommender**
+ ```
+ Update src/recommender.js to route to the new [stack] templates.
+ Add the new templateModules paths. Update formatStackSummary.
+ ```
+
+ **Step 4: Test**
+ ```
+ Add tests in tests/recommender.test.js for the new stack routing.
+ Run npx vitest run to verify all tests pass.
+ ```
+
+ ---
+
+ ## Flow 2: Create a New Template
+
+ When adding individual template files to an existing stack.
+
+ ```
+ I want to add a [template] to the [stack] stack. Create the template file at
+ templates/[category]/[stack]/[path]. Use {{VARIABLE_NAME}} placeholders where
+ the project name, description, or config values should go. Check
+ src/composer.js buildVariables() for available variables.
+ ```
+
+ ---
+
+ ## Flow 3: Fix a Bug
+
+ **Step 1: Reproduce**
+ ```
+ There's a bug: [describe]. Write a failing test in tests/ that reproduces it.
+ Run npx vitest run to confirm the test fails.
+ ```
+
+ **Step 2: Fix**
+ ```
+ Fix the bug that causes [test name] to fail. Run npx vitest run to confirm
+ the fix and that no other tests break.
+ ```
+
+ ---
+
+ ## Flow 4: Refactor
+
+ ```
+ I want to refactor [module/function]. Enter plan mode. First:
+ 1. Check test coverage for the code being refactored
+ 2. Add tests for any uncovered behavior
+ 3. Plan the refactoring steps (each should keep tests green)
+ Write a plan to docs/plans/refactor-[module].md
+ ```
+
+ ---
+
+ ## Flow 5: Add a Feature
+
+ For features that touch multiple modules (prompts, recommender, composer, configurator).
+
+ **Step 1: Plan**
+ ```
+ I want to add [feature] to DevForge. Enter plan mode. Trace through:
+ - src/prompts.js — does the user need to be asked anything new?
+ - src/recommender.js — does the decision tree change?
+ - src/composer.js — are new template variables needed?
+ - src/claude-configurator.js — does the generated infrastructure change?
+ - src/uat-generator.js — do UAT scenarios need updating?
+ Write a plan to docs/plans/[feature].md
+ ```
+
+ **Step 2: Implement**
+ ```
+ Following docs/plans/[feature].md, implement the feature. Work module by module.
+ Run npx vitest run after each module change.
+ ```
+
+ ---
+
+ ## Flow 6: Verification
+
+ ```
+ Run /project:verify-all
+ ```
+
+ This launches all 5 agents (code-quality, security, spec-validator, production-readiness, uat-validator) and runs tests.
+
+ ---
+
+ ## Flow 7: Pre-PR
+
+ ```
+ Run /project:pre-pr
+ ```
+
+ This runs tests, smoke test, code quality review, security review, and checks for staged secrets.
+
+ ---
+
+ ## Flow 8: UAT
+
+ ```
+ Run /project:run-uat
+ ```
+
+ This reads docs/uat/UAT_TEMPLATE.md, maps scenarios to tests, runs them, and updates UAT_CHECKLIST.csv.
+
+ ---
+
+ ## Utility Prompts
+
+ ### "I'm lost"
+ ```
+ Read CLAUDE.md, git log --oneline -10, and git status. Tell me where I am,
+ what I was working on, and what I should do next.
+ ```
+
+ ### "Is this right?"
+ ```
+ Review my changes (git diff). Check if they follow DevForge patterns:
+ ESM imports with .js extensions, chalk for output, path.join for paths,
+ {{VARIABLE}} for templates. Flag anything that looks wrong.
+ ```
+
+ ### "Before I PR"
+ ```
+ Run /project:pre-pr
+ ```
+
+ ### "Explain this code"
+ ```
+ Read [file] and explain what it does, how it fits into the DevForge pipeline
+ (prompts → recommender → composer → configurator → uat-generator), and what
+ calls it.
+ ```
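The `{{VARIABLE_NAME}}` convention these flows rely on can be sketched as below. This is an illustrative implementation only, not code from the package; see `src/composer.js` `buildVariables()` for the real variable list, and the regex is an assumption about the placeholder syntax.

```javascript
// Minimal sketch of {{VARIABLE_NAME}} template substitution (illustrative).
function render(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? vars[key] : match); // unknown placeholders are left intact
}

console.log(render('# {{PROJECT_NAME}}', { PROJECT_NAME: 'my-app' })); // "# my-app"
```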
package/docs/uat/UAT_CHECKLIST.csv ADDED
@@ -0,0 +1,9 @@
+ UAT_ID,Scenario,Priority,Automated,Test_File,Status,Last_Run,Notes
+ UAT-001,Scaffold Next.js Full-Stack,P0,PARTIAL,tests/composer.test.js,NOT RUN,,Manual smoke test required
+ UAT-002,Scaffold FastAPI Backend,P0,PARTIAL,tests/composer.test.js,NOT RUN,,Manual smoke test required
+ UAT-003,Scaffold Polyglot Full-Stack,P0,PARTIAL,tests/composer.test.js,NOT RUN,,Manual smoke test required
+ UAT-004,Recommender Selects Correct Stack,P0,YES,tests/recommender.test.js,NOT RUN,,
+ UAT-005,Template Variable Substitution,P0,YES,tests/composer.test.js,NOT RUN,,
+ UAT-006,Claude Code Infrastructure Generated,P1,YES,tests/claude-configurator.test.js,NOT RUN,,
+ UAT-007,Invalid Input Handling,P1,NO,,NOT RUN,,Manual test required
+ UAT-008,Unsupported Stack Selection,P1,PARTIAL,tests/recommender.test.js,NOT RUN,,
package/docs/uat/UAT_TEMPLATE.md ADDED
@@ -0,0 +1,163 @@
+ # UAT Scenario Pack: DevForge
+
+ ## Pre-Conditions
+ - [ ] Node.js >= 18 installed
+ - [ ] npm available
+ - [ ] DevForge dependencies installed (`npm install`)
+ - [ ] No existing `test-output/` directory
+
+ ## Scenarios
+
+ ### UAT-001: Scaffold Next.js Full-Stack Project — Happy Path
+ **Priority:** P0
+ **Preconditions:** Clean environment, no test-output/ directory
+ **Steps:**
+ 1. Run `node bin/devforge.js test-output`
+ 2. Select "Full-stack app"
+ 3. Select "TypeScript" for language
+ 4. Select "Yes" for authentication
+ 5. Select "No" for AI integration
+ 6. Select "Docker" for deployment
+ 7. Confirm the recommended stack
+ **Expected Result:**
+ - `test-output/` directory created
+ - Contains `package.json` with Next.js, React, TypeScript, Tailwind, Prisma, NextAuth dependencies
+ - Contains `src/app/layout.tsx`, `src/app/page.tsx`
+ - Contains `src/app/api/health/route.ts` (health check endpoint)
+ - Contains `prisma/schema.prisma`
+ - Contains `.claude/` directory with hooks, agents, commands
+ - Contains `CLAUDE.md` with Next.js-specific rules
+ - Contains `docs/uat/UAT_TEMPLATE.md`
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
+
+ ### UAT-002: Scaffold FastAPI Backend Project — Happy Path
+ **Priority:** P0
+ **Preconditions:** Clean environment, no test-output/ directory
+ **Steps:**
+ 1. Run `node bin/devforge.js test-output`
+ 2. Select "API / backend service"
+ 3. Select "Python" for language
+ 4. Select "Yes" for authentication
+ 5. Select "No" for AI integration
+ 6. Select "Docker" for deployment
+ 7. Confirm the recommended stack
+ **Expected Result:**
+ - `test-output/` directory created
+ - Contains `backend/requirements.txt` with FastAPI, SQLAlchemy, Pydantic
+ - Contains `backend/app/main.py` with health endpoint and graceful shutdown
+ - Contains `backend/app/api/health.py`
+ - Contains `backend/app/core/config.py`, `errors.py`, `retry.py`
+ - Contains `backend/tests/` with pytest fixtures
+ - Contains `.claude/` directory with Python-specific hooks
+ - Contains `CLAUDE.md` with FastAPI-specific rules
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
+
+ ### UAT-003: Scaffold Polyglot Full-Stack Project — Happy Path
+ **Priority:** P0
+ **Preconditions:** Clean environment, no test-output/ directory
+ **Steps:**
+ 1. Run `node bin/devforge.js test-output`
+ 2. Select "Full-stack app"
+ 3. Select "TypeScript" and "Python" for language
+ 4. Select "Yes" for authentication
+ 5. Select "Yes" for AI integration
+ 6. Select "Docker" for deployment
+ 7. Confirm the recommended stack
+ **Expected Result:**
+ - `test-output/` directory created
+ - Contains both `frontend/` and `backend/` directories
+ - Contains `docker-compose.yml` at root
+ - Contains Next.js frontend and FastAPI backend
+ - Contains both Prisma and SQLAlchemy database configs
+ - Contains polyglot Claude Code hooks (TypeScript + Python)
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
+
+ ### UAT-004: Recommender Selects Correct Stack
+ **Priority:** P0
+ **Preconditions:** None
+ **Steps:**
+ 1. Run `npx vitest run tests/recommender.test.js`
+ 2. Verify all test cases pass
+ 3. Verify: web_app + TypeScript → Next.js full-stack
+ 4. Verify: api_service + Python → FastAPI backend
+ 5. Verify: full_stack + TS + Python → polyglot
+ 6. Verify: unsupported combos return helpful error
+ **Expected Result:** All recommender tests pass, all 3 stacks correctly selected
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
+
+ ### UAT-005: Template Variable Substitution
+ **Priority:** P0
+ **Preconditions:** None
+ **Steps:**
+ 1. Run `npx vitest run tests/composer.test.js`
+ 2. Verify `{{PROJECT_NAME}}` replaced in all .template files
+ 3. Verify non-.template files copied without modification
+ 4. Verify .gitkeep files preserved
+ 5. Verify no `{{` patterns remain in output
+ **Expected Result:** All composer tests pass, variables correctly substituted
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
+
+ ### UAT-006: Claude Code Infrastructure Generated
+ **Priority:** P1
+ **Preconditions:** Scaffold a project first
+ **Steps:**
+ 1. Run `npx vitest run tests/claude-configurator.test.js`
+ 2. Verify `.claude/settings.json` created with correct hooks
+ 3. Verify CLAUDE.md generated with stack-specific content
+ 4. Verify agents copied (5 agents)
+ 5. Verify skills copied (filtered by stack)
+ 6. Verify commands copied (6 commands)
+ **Expected Result:** All claude-configurator tests pass, infrastructure complete
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
+
+ ### UAT-007: Invalid Input Handling
+ **Priority:** P1
+ **Preconditions:** None
+ **Steps:**
+ 1. Run `node bin/devforge.js` (no project name) — should show error
+ 2. Run `node bin/devforge.js .` (invalid name) — should show error
+ 3. Create `test-output/` dir, then run `node bin/devforge.js test-output` — should warn about existing dir
+ **Expected Result:** Clear error messages, no crashes, exit code 1
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
+
+ ### UAT-008: Unsupported Stack Selection
+ **Priority:** P1
+ **Preconditions:** None
+ **Steps:**
+ 1. Run `node bin/devforge.js test-output`
+ 2. Select "Mobile app" or "Desktop app"
+ 3. Observe the recommendation
+ **Expected Result:** Displays message that the stack is not yet supported in V1, suggests closest supported option
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "forgedev",
- "version": "1.1.3",
+ "version": "1.2.0",
  "description": "Universal, AI-first project scaffolding CLI with Claude Code infrastructure",
  "type": "module",
  "bin": {
@@ -29,5 +29,13 @@
  },
  "engines": {
  "node": ">=18.0.0"
- }
+ },
+ "files": [
+ "bin/",
+ "src/",
+ "templates/",
+ "docs/",
+ "LICENSE",
+ "README.md"
+ ]
  }
package/src/claude-configurator.js CHANGED
@@ -226,6 +226,7 @@ function generateAgents(outputDir, config, vars) {
  'chief-of-staff.md',
  'loop-operator.md',
  'harness-optimizer.md',
+ 'product-strategist.md',
  ];

  for (const agent of agents) {
package/src/cli.js CHANGED
@@ -56,11 +56,11 @@ export async function parseCommand(args) {
  if (!command.startsWith('-')) {
  const targetDir = path.resolve(process.cwd(), command);
  if (fs.existsSync(targetDir)) {
- console.log('');
+ console.error('');
  log.warn(`"${command}" already exists. Did you mean:`);
- console.log(` ${chalk.bold('devforge init')} Add dev guardrails to current project`);
- console.log(` ${chalk.bold('devforge doctor')} Diagnose and optimize current project`);
- console.log('');
+ console.error(` ${chalk.bold('devforge init')} Add dev guardrails to current project`);
+ console.error(` ${chalk.bold('devforge doctor')} Diagnose and optimize current project`);
+ console.error('');
  process.exit(1);
  }
  const { runNew } = await import('./index.js');
@@ -86,7 +86,7 @@ function showUsage() {
  -h, --help Show this help message
  -v, --version Show version number

- Run ${chalk.cyan('devforge new --help')} for more details.
+ Run ${chalk.cyan('devforge --help')} for more details.
  `);
  }

package/src/index.js CHANGED
@@ -12,9 +12,9 @@ import { generateUAT } from './uat-generator.js';
  export async function runNew(projectName) {
  const safeName = toKebabCase(projectName);

- // Prevent path traversal: project name must not escape cwd
- if (/[\/\\]/.test(safeName) || safeName.includes('..')) {
- log.error('Project name must not contain path separators or ".."');
+ // Validate project name: must be a clean kebab-case identifier
+ if (!/^[a-z0-9][a-z0-9-]*$/.test(safeName)) {
+ log.error('Project name must start with a letter or number and contain only lowercase letters, numbers, and hyphens.');
  process.exit(1);
  }

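The tightened guard can be exercised on its own; the regex below is copied verbatim from the `+` line in the hunk above, while the sample names are illustrative.

```javascript
// The new project-name validation from src/index.js, extracted for illustration.
const NAME_RE = /^[a-z0-9][a-z0-9-]*$/;

console.log(NAME_RE.test('my-app'));  // true
console.log(NAME_RE.test('../evil')); // false: '.' and '/' are not in the class
console.log(NAME_RE.test('My App'));  // false: uppercase and spaces rejected
```

Note that the allow-list form makes the old `..` and separator checks redundant, since those characters are simply not matchable.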
package/src/utils.js CHANGED
@@ -11,7 +11,7 @@ export const ROOT_DIR = path.resolve(__dirname, '..');
  export const log = {
  info: (msg) => console.log(chalk.cyan(msg)),
  success: (msg) => console.log(chalk.green(msg)),
- warn: (msg) => console.log(chalk.yellow(msg)),
+ warn: (msg) => console.error(chalk.yellow(msg)),
  error: (msg) => console.error(chalk.red(msg)),
  step: (n, total, msg) => console.log(chalk.blue(`[${n}/${total}] ${msg}`)),
  dim: (msg) => console.log(chalk.dim(msg)),
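The point of routing `warn` to stderr is that stdout stays machine-parseable when the CLI is piped. A minimal sketch of the resulting contract (structure mirrors `src/utils.js`, but chalk coloring is omitted):

```javascript
// Diagnostics go to stderr; only program output goes to stdout.
const log = {
  info: (msg) => process.stdout.write(msg + '\n'),  // program output
  warn: (msg) => process.stderr.write(msg + '\n'),  // diagnostics
  error: (msg) => process.stderr.write(msg + '\n'), // diagnostics
};
```

With this split, redirecting stdout (for example `devforge my-app > out.txt`, command shape assumed) captures only real output while warnings remain visible on the terminal.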
package/templates/base/docs/uat/UAT_CHECKLIST.csv.template ADDED
@@ -0,0 +1,2 @@
+ UAT_ID,Feature,Priority,Status,Tester,Date,Defect_ID,Notes
+ UAT-001,Health Check,P0,NOT RUN,,,,
package/templates/base/docs/uat/UAT_TEMPLATE.md.template ADDED
@@ -0,0 +1,22 @@
+ # UAT Scenario Pack: {{PROJECT_NAME}}
+
+ ## Pre-Conditions
+ - [ ] Application is deployed to staging
+ - [ ] Test accounts are created
+ - [ ] Test data is seeded
+
+ ## Scenarios
+
+ ### UAT-001: Health Check — Happy Path
+ **Priority:** P0
+ **Preconditions:** Application is running
+ **Steps:**
+ 1. Send GET request to /health (or /api/health)
+ 2. Verify response status is 200
+ 3. Verify response body contains status: "ok"
+ **Expected Result:** Health endpoint responds with 200 and status ok
+ **Actual Result:** ___
+ **Status:** NOT RUN
+ **Tester:** ___
+ **Date:** ___
+ **Notes:** ___
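The pass criterion for this health-check scenario can be captured as a small predicate. This helper is hypothetical (not part of the template) and any HTTP client can feed it a status code and parsed body:

```javascript
// Hypothetical pass/fail check for the health-check UAT scenario above.
function healthOk(status, body) {
  return status === 200 && body != null && body.status === 'ok';
}
```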
package/templates/claude-code/agents/build-error-resolver.md CHANGED
@@ -10,16 +10,17 @@ You are a build error resolution specialist. Your job is to fix build/type/lint
  2. **Group by file** — Sort errors by file path, fix in dependency order (imports/types before logic)
  3. **Fix one error at a time** — Read the file, diagnose root cause, apply minimal edit
  4. **Verify** — After each fix, re-run all three commands to confirm the error is gone and no new errors were introduced
+
  ## Common Fix Patterns

  | Error Type | Fix |
  |-----------|-----|
  | Missing import | Add the import statement |
- | Type mismatch | Add type annotation or assertion |
+ | Type mismatch | Add correct type annotation, adjust code to match expected types, or fix the actual type |
  | Undefined variable | Check spelling, add declaration, or fix import |
  | Missing dependency | Suggest install command (`npm install X` or `pip install X`) |
  | Config error | Compare with known working defaults |
- | Circular dependency | Identify the cycle, suggest extraction to shared module |
+ | Circular dependency | Identify the cycle, report to user with suggested breaking strategies |

  ## Rules

package/templates/claude-code/agents/code-quality-reviewer.md CHANGED
@@ -38,4 +38,4 @@ Stack: {{STACK_SUMMARY}}
  - [ ] Functions are reasonably sized (< 50 lines)

  ## Output
- For each issue: **File** | **Line** | **Severity** (critical/warning/info) | **Issue** | **Fix**
+ For each issue: **File** | **Line** | **Severity** (critical/high/medium/low) | **Issue** | **Fix**
package/templates/claude-code/agents/database-reviewer.md CHANGED
@@ -46,7 +46,7 @@ You are a database specialist. Your job is to review database code for performan
  | `SELECT *` | Fetches unnecessary data, breaks on schema change | Specify exact columns |
  | OFFSET pagination | Slow on large tables (scans skipped rows) | Use cursor-based pagination |
  | N+1 queries | 1 query per row instead of 1 query for all | Use joins or eager loading |
- | String IDs | Poor index performance | Use UUID or serial |
+ | String IDs | Poor index performance | Use sequential identifiers (SERIAL, UUIDv7) |
  | No connection pooling | Exhausts database connections | Use connection pool |
  | `GRANT ALL` | Violates least privilege | Grant specific permissions |

package/templates/claude-code/agents/doc-updater.md CHANGED
@@ -6,7 +6,7 @@ You are a documentation specialist. Your job is to keep project documentation ac

  ## Workflow

- 1. **Detect changes** — Run `git diff --name-only HEAD~1` to see files changed in the last commit
+ 1. **Detect changes** — Run `git diff --name-only HEAD~1` to see files changed in the last commit (or `git diff --name-only` for uncommitted changes)
  2. **Identify affected docs** — Map code changes to documentation that needs updating
  3. **Update docs** — Edit README, API docs, changelogs, and inline comments
  4. **Verify links** — Check that all referenced files and endpoints still exist
package/templates/claude-code/agents/harness-optimizer.md CHANGED
@@ -40,6 +40,32 @@ You are a Claude Code harness optimizer. Your job is to audit the project's Clau
  - [ ] No commands that duplicate agent functionality
  - [ ] Commands reference correct tool commands for the project's stack

+ ### Internal Consistency (cross-template validation)
+ - [ ] No contradictory guidelines across agents, skills, and CLAUDE.md
+   - Cross-reference DO/DON'T rules — ensure fix suggestions don't violate their own rules
+   - Verify branching/rebase/merge advice is consistent across git-workflow skill and CLAUDE.md
+ - [ ] No duplicate guidelines (same advice in multiple places → stale risk)
+ - [ ] All severity levels referenced in report outputs are defined with criteria
+ - [ ] All process steps referenced in output sections have matching report formats
+ - [ ] Hook scripts: path validation uses `cwd + sep` (not bare `startsWith`)
+ - [ ] Hook scripts: `cwd` option matches expected filePath prefix (no double-prefix bug)
+ - [ ] Settings files: no hardcoded absolute paths or debug artifacts in permissions
+
+ ### Technical Accuracy (advice matches reality)
+ - [ ] Framework-specific advice matches actual framework behavior
+   - Server Components can't use client hooks (useState, useEffect)
+   - Pydantic v2 doesn't reject extra fields by default (needs `extra = "forbid"`)
+   - Playwright: getByRole/getByLabel preferred over CSS selectors
+ - [ ] Code examples use valid syntax (JSON with quoted keys, correct API signatures)
+ - [ ] Version-specific features match the version declared in CLAUDE.md
+
+ ### Formatting Integrity (no corrupted templates)
+ - [ ] No merged lines (two steps concatenated without newline)
+ - [ ] No duplicate content on same line
+ - [ ] Markdown tables have correct column counts per row
+ - [ ] All files end with a trailing newline
+ - [ ] Proper blank lines between sections (## heading preceded by blank line)
+
  ## Output Format

  ```
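The `cwd + sep` checklist item above guards against a classic prefix bug: a bare `startsWith` lets sibling paths that merely share a name prefix slip through. An illustrative sketch, not code from the hook scripts, with POSIX separators assumed:

```javascript
// Path-containment check: naive prefix test vs. separator-aware test.
const cwd = '/home/user/project';
const naive = (p) => p.startsWith(cwd);                  // buggy
const safe = (p) => p === cwd || p.startsWith(cwd + '/'); // correct

console.log(naive('/home/user/project-secrets/key')); // true  (escapes the guard)
console.log(safe('/home/user/project-secrets/key'));  // false (correctly blocked)
console.log(safe('/home/user/project/src/app.js'));   // true
```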
package/templates/claude-code/agents/loop-operator.md CHANGED
@@ -14,7 +14,8 @@ Execute iterative improvement loops safely: run a sequence of checks → fixes
  2. **Set stop conditions** — Define when to stop (all tests pass, zero lint errors, or max 5 iterations)
  3. **Execute iteration** — Fix one category of issues per iteration
  4. **Checkpoint** — After each iteration, record progress and compare to baseline
- 5. **Evaluate** — If no progress across 2 consecutive iterations, stop and report6. **Report** — Show baseline vs final state with concrete numbers
+ 5. **Evaluate** — If no progress across 2 consecutive iterations, stop and report
+ 6. **Report** — Show baseline vs final state with concrete numbers

  ## Stop Conditions (halt the loop if any are true)

package/templates/claude-code/agents/product-strategist.md ADDED
@@ -0,0 +1,124 @@
+ ---
+ description: Research competitors via web search, evaluate project maturity against industry leaders, and recommend strategic improvements with competitive context.
+ disallowedTools:
+   - Write
+   - Edit
+   - MultiEdit
+ ---
+
+ # Product Strategist
+
+ You are a product strategist for {{PROJECT_NAME_PASCAL}}. Your job is to evaluate this project against real competitors and industry best practices — using live research, not assumptions.
+
+ ## Process
+
+ ### Phase 1: Understand the Project
+ 1. Read CLAUDE.md, package.json/pyproject.toml, and project structure
+ 2. Read product documents if they exist: PRD (`docs/prd/`), user stories (`docs/stories/`), or any spec files
+ 3. Identify the project's domain, stack, target audience, and stated goals
+ 4. List the project's current features and capabilities
+
+ ### Phase 2: Competitive Research (Web Search Required)
+ 5. **Search for direct competitors** — Use WebSearch to find 5-7 projects/products that solve the same problem
+ 6. **Search for best-in-class examples** — Find the top-rated or most-starred open source projects in the same domain
+ 7. **Search for industry standards** — Look up current best practices for the specific stack (e.g., "Next.js 15 production best practices 2026", "FastAPI security checklist 2026")
+ 8. **Search for user reviews and feedback** — Find reviews, GitHub issues, Reddit threads, or forum discussions about competitors to understand what users love and hate
+ 9. Document what competitors offer that this project doesn't
+ 10. Document common user complaints about competitors (opportunities to differentiate)
+
+ ### Phase 3: Internal Evaluation
+ 11. Evaluate each category below against what competitors actually do (not abstract ideals)
+ 12. Rate: AHEAD (exceeds competitors), ON PAR (matches competitors), BEHIND (competitors do this, we don't), N/A
+
+ ## Evaluation Categories
+
+ ### Developer Experience
+ - [ ] One-command setup (`npm install` or `docker compose up` → working app)
+ - [ ] Hot reload in development
+ - [ ] Meaningful error messages (not stack traces)
+ - [ ] Automated code formatting on save
+ - [ ] Pre-commit hooks for quality gates
+
+ ### API Design
+ - [ ] OpenAPI/Swagger documentation auto-generated
+ - [ ] Consistent error response format
+ - [ ] API versioning strategy
+ - [ ] Rate limiting
+ - [ ] Pagination for list endpoints
+
+ ### Testing Strategy
+ - [ ] Unit test coverage > 80%
+ - [ ] E2E tests for critical user flows
+ - [ ] CI runs tests on every PR
+ - [ ] Test data factories/fixtures (not hardcoded test data)
+ - [ ] Performance/load testing setup
+
+ ### Security Posture
+ - [ ] Dependency vulnerability scanning (npm audit / safety)
+ - [ ] Secret scanning in CI
+ - [ ] OWASP Top 10 coverage
+ - [ ] Content Security Policy headers
+ - [ ] Input sanitization beyond basic validation
+
+ ### Observability
+ - [ ] Structured logging (JSON, not plain text)
+ - [ ] Request tracing (correlation IDs)
+ - [ ] Health check endpoints (shallow + deep)
+ - [ ] Error tracking integration (Sentry, etc.)
+ - [ ] Performance monitoring
+
+ ### Deployment & Infrastructure
+ - [ ] Containerized (Docker)
+ - [ ] CI/CD pipeline
+ - [ ] Environment parity (dev ≈ staging ≈ prod)
+ - [ ] Database migration strategy
+ - [ ] Rollback plan documented
+
+ ### Documentation
+ - [ ] README with quickstart that works in < 5 minutes
+ - [ ] API documentation (auto-generated preferred)
+ - [ ] Architecture decision records (ADRs) for key decisions
+ - [ ] Contributing guide
+ - [ ] Changelog
+
+ ## Output
+
+ ### Competitive Landscape (5-7 competitors)
+ | Competitor | What They Do Well | What Users Complain About | What We Do Better | Key Feature We're Missing |
+ |-----------|-------------------|--------------------------|-------------------|--------------------------|
+ | [name + link] | [specific feature] | [from reviews/issues] | [our advantage] | [gap] |
+
+ ### User Sentiment Summary
+ Key themes from user reviews and discussions across competitors:
+ - **Users love**: [common positive themes]
+ - **Users hate**: [common pain points — opportunities for us]
+ - **Most requested features**: [what users are asking for that nobody fully delivers]
+
+ ### Scorecard
+ | Category | Rating | Competitor Benchmark | Our Status | Recommendation |
+ |----------|--------|---------------------|------------|----------------|
+ | [category] | AHEAD/ON PAR/BEHIND | [what competitors do] | [what we do] | [specific action] |
+
+ ### Strategic Recommendations
+ For each finding, present the choice:
+
+ **[Feature/Gap Name]**
+ - Match: [What to implement to reach parity with competitors]
+ - Exceed: [What to implement to go beyond competitors]
+ - Skip: [Why it might be OK to skip this — trade-offs]
+ - **Recommendation**: [Your informed opinion on which option and why]
+
+ ### Priority Roadmap
+ 1. [Highest impact — what to do first, with effort estimate]
+ 2. [Second priority]
+ 3. [Third priority]
+
+ ## Rules
+ - Always use WebSearch — never rely solely on your training data for competitive info
+ - Cite specific competitors by name with links
+ - Be honest: if the project is already ahead, say so
+ - Recommendations must be actionable: specific libraries, patterns, or implementations
+ - Adapt categories to the actual stack (skip frontend checks for backend-only projects)
+ - If the project is a CLI tool, compare against CLI tools, not web apps
+ - Present choices, don't dictate — the user decides the strategy
+ - Prioritize by impact-to-effort ratio
package/templates/claude-code/agents/security-reviewer.md CHANGED
@@ -23,6 +23,7 @@ Read-only. Never modify code.
  - [ ] All user input validated before use
  - [ ] SQL injection prevention (parameterized queries/ORM)
  - [ ] XSS prevention (proper escaping/sanitization)
+ - [ ] CSRF protection for state-changing operations
  - [ ] File upload validation (type, size, extension)

  ### Data Exposure
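The new CSRF checklist item can be illustrated with a minimal double-submit comparison. The `csrfOk` helper and request shape are hypothetical, not code from the package; production code should also use SameSite cookies and constant-time comparison.

```javascript
// Hypothetical double-submit CSRF check: the token sent in a header must
// match the token stored in the session cookie.
function csrfOk(req) {
  const cookie = req.cookies && req.cookies.csrf;
  const header = req.headers && req.headers['x-csrf-token'];
  return Boolean(cookie) && cookie === header;
}
```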