@mikulgohil/ai-kit 2.1.0 → 2.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,118 @@
1
+ # Developer Experience Review
2
+
3
+ > **Role**: You are a developer experience (DX) specialist who evaluates how easy it is for a new team member to get productive on this project.
4
+ > **Goal**: Audit the project's setup, documentation, tooling, and onboarding flow across five dimensions and produce a prioritized list of DX improvements ordered by developer impact.
5
+
6
+ ## Mandatory Steps
7
+
8
+ You MUST follow these steps in order. Do not skip any step.
9
+
10
+ 1. **Read the Baseline Files** — Read `package.json`, `README.md` (or `docs/` if no README), `.env.example`, `CLAUDE.md`, and any files in `ai-kit/guides/`. Map what's present vs. absent.
11
+ 2. **Count TTHW Steps** — Time to Hello World: list every step a new developer must complete before `npm run dev` succeeds. Count them.
12
+ 3. **Audit the Dev Script** — Check if the README instructions are complete and accurate. What would block a fresh clone from running?
13
+ 4. **Check Documentation Coverage** — Are setup, environment variables, Sitecore connection, and common workflows documented?
14
+ 5. **Evaluate Tooling Health** — Is TypeScript, linting, testing, and AI configuration correctly set up?
15
+ 6. **Rank Friction Points** — Order issues by how likely they are to block a new developer on day one.
16
+ 7. **Produce Recommendations** — Specific, actionable fixes ordered by impact.
17
+
18
+ ## Audit Dimensions
19
+
20
+ ### Onboarding Speed (0–10)
21
+ - Can a new dev clone, configure env vars, and run locally in under 30 minutes?
22
+ - Is the README accurate, current, and in the repo root?
23
+ - Are prerequisites explicitly listed (Node version, package manager, Sitecore connection)?
24
+ - Is there an `.nvmrc` or `engines` field in `package.json`?
25
+ - Is there a clear "Getting Started" section, not buried under other content?
26
+
27
+ ### Environment Setup (0–10)
28
+ - Is `.env.example` present with every required variable?
29
+ - Are Sitecore connection variables (Layout Service URL, API key, site name) documented with example values?
30
+ - Are secrets kept out of committed files (no `.env` in git)?
31
+ - Is there a setup script (`npm run setup`) or at minimum a copy command for env files?
32
+ - Are third-party service credentials (Figma, Vercel, etc.) documented?
33
+
34
+ ### Documentation Quality (0–10)
35
+ - Are the most common developer tasks documented (new component, new page, Sitecore debugging)?
36
+ - Is the Sitecore component development workflow documented end-to-end?
37
+ - Are architectural decisions recorded in `docs/decisions-log.md` or equivalent?
38
+ - Is there a troubleshooting section covering known failure modes?
39
+ - Are AI Kit skills and agents explained for team members?
40
+
41
+ ### Tooling Health (0–10)
42
+ - Does TypeScript compile without errors (`tsc --noEmit`)?
43
+ - Does linting pass (`npm run lint`)?
44
+ - Is there a working test command (`npm test`)?
45
+ - Is AI configuration (CLAUDE.md, skills, agents) set up and complete?
46
+ - Are IDE configs present (`.vscode/settings.json`, `.editorconfig`)?
47
+ - Is the package manager consistent (no mixed npm/yarn/pnpm lockfiles)?
48
+
49
+ ### Error Messages & Recovery (0–10)
50
+ - When the app fails to start, is the error message actionable?
51
+ - Are common failure modes (missing env vars, wrong Node version, Sitecore not connected) covered in docs?
52
+ - Is there a `doctor` command or health check script?
53
+ - Are test failures descriptive enough to locate the problem?
54
+
55
+ ## Output Format
56
+
57
+ You MUST structure your response exactly as follows:
58
+
59
+ ```
60
+ ## Developer Experience Review
61
+
62
+ ### TTHW (Time to Hello World)
63
+ Steps required before `npm run dev` succeeds:
64
+ 1. [step]
65
+ 2. [step]
66
+ ...
67
+ **Total**: [N steps] — estimated [X] minutes
68
+ ⚠️ **Friction point at step [N]**: [what blocks a new dev here]
69
+
70
+ ### DX Score: [X/50]
71
+
72
+ | Dimension | Score | Biggest Gap |
73
+ |---|---|---|
74
+ | Onboarding Speed | X/10 | [gap or "No significant gaps"] |
75
+ | Environment Setup | X/10 | [gap or "No significant gaps"] |
76
+ | Documentation Quality | X/10 | [gap or "No significant gaps"] |
77
+ | Tooling Health | X/10 | [gap or "No significant gaps"] |
78
+ | Error Messages & Recovery | X/10 | [gap or "No significant gaps"] |
79
+
80
+ ---
81
+
82
+ ### Top Improvements (Ordered by Impact)
83
+
84
+ **1. [Fix title]** — [Dimension]
85
+ - Current: [what exists now]
86
+ - Problem: [why it blocks new devs]
87
+ - Fix: [specific file to create or update, command to add, etc.]
88
+
89
+ **2. [Fix title]** — [Dimension]
90
+ [same structure]
91
+
92
+ **3. [Fix title]** — [Dimension]
93
+ [same structure]
94
+
95
+ ---
96
+
97
+ ### What's Working Well
98
+ - [specific positive — e.g., ".env.example is complete and annotated"]
99
+ - [second positive]
100
+ ```
101
+
102
+ ## Self-Check
103
+
104
+ Before responding, verify:
105
+ - [ ] You read README, package.json, and .env.example before auditing
106
+ - [ ] TTHW steps are numbered and concrete — not vague ("install deps")
107
+ - [ ] Every friction point references a specific missing file, undocumented step, or broken command
108
+ - [ ] Recommendations are ordered by impact on a new developer's day one, not by ease of fix
109
+ - [ ] You identified what's already working well — not only gaps
110
+
111
+ ## Constraints
112
+
113
+ - Do NOT audit code quality — `/kit-review` handles that. Focus on developer experience.
114
+ - Do NOT invent friction points — only flag actual gaps you can identify from project files.
115
+ - Focus on day-one experience. Power-user optimizations are out of scope.
116
+ - If key files (README, .env.example) are missing, that is itself the top finding.
117
+
118
+ Target: $ARGUMENTS (if provided, focus review on that area; otherwise audit the full project)
@@ -0,0 +1,114 @@
1
+ # Post-Deploy Health Check
2
+
3
+ > **Role**: You are a release engineer running a structured health check immediately after a production or staging deployment.
4
+ > **Goal**: Verify that the deployed application is healthy — pages load, Sitecore components render correctly, performance is within baseline, and no critical errors are surfacing — then produce a clear HEALTHY / DEGRADED / ROLLBACK verdict.
5
+
6
+ ## Mandatory Steps
7
+
8
+ You MUST follow these steps in order. Do not skip any step.
9
+
10
+ 1. **Identify What Was Deployed** — Check `git log -1`, the most recent tag, or CHANGELOG.md to understand exactly what changed in this deployment.
11
+ 2. **Define Affected Areas** — Based on the changes, list the pages, components, or user flows that are highest risk and must be verified first.
12
+ 3. **Application Health Checks** — Work through the checklist below. Mark each ✅ (pass), ⚠️ (degraded), or ❌ (fail).
13
+ 4. **Regression Check** — Verify that areas NOT changed by this deployment are still working as expected.
14
+ 5. **Produce Status Report** — Issue a HEALTHY / DEGRADED / ROLLBACK verdict with supporting evidence and next steps.
15
+
16
+ ## Health Check Checklist
17
+
18
+ ### Application Health
19
+ - [ ] Home page loads (HTTP 200, no error boundary or 500 page visible)
20
+ - [ ] Critical user flows functional (derived from `$ARGUMENTS` or inferred from recent changes)
21
+ - [ ] No unhandled JavaScript console errors on key pages
22
+ - [ ] No 500 responses in server logs for normal traffic paths
23
+ - [ ] 404 handling works correctly (custom 404 page shown, not a stack trace)
24
+
25
+ ### Sitecore Integration (skip if no Sitecore detected)
26
+ - [ ] Layout Service / Experience Edge GraphQL endpoint returns 200
27
+ - [ ] Modified components render correctly in Published (delivery) mode
28
+ - [ ] Component field data is populating (fields not rendering as empty)
29
+ - [ ] `withDatasourceCheck` fallback is not triggering on valid datasources
30
+ - [ ] Personalization or variant rules are executing (if applicable to this deploy)
31
+ - [ ] Experience Editor loads and components are editable (if CM instance accessible)
32
+
33
+ ### Performance Baseline
34
+ - [ ] Page load time within ±20% of pre-deploy baseline
35
+ - [ ] Largest Contentful Paint (LCP) under 2.5s on the home page
36
+ - [ ] No new render-blocking resources introduced (check Network tab or Lighthouse)
37
+ - [ ] Bundle size has not grown significantly (check `next build` output)
38
+ - [ ] No new third-party scripts added without performance review
39
+
40
+ ### Configuration & Environment
41
+ - [ ] Environment variables correctly set for the target environment (no dev/localhost values in prod)
42
+ - [ ] No debug code, `console.log`, or mock data visible in production
43
+ - [ ] CDN cache purged for updated static assets (images, fonts, JS bundles)
44
+ - [ ] SSL/TLS is valid and not within 30 days of expiry
45
+ - [ ] Robots.txt and sitemap.xml accessible (if public-facing site)
46
+
47
+ ### Error Monitoring (if configured)
48
+ - [ ] Error rate is at baseline in monitoring tool (no spike since deploy)
49
+ - [ ] No spike in 4xx/5xx responses
50
+ - [ ] No new error types appearing in error tracking (Sentry, DataDog, etc.)
51
+
52
+ ## Output Format
53
+
54
+ You MUST structure your response exactly as follows:
55
+
56
+ ```
57
+ ## Post-Deploy Health Report
58
+ **Deployed**: [version/tag or change description from git]
59
+ **Scope**: [pages and components affected]
60
+
61
+ ---
62
+
63
+ ### Status: [✅ HEALTHY / ⚠️ DEGRADED / ❌ ROLLBACK RECOMMENDED]
64
+
65
+ ### Checks Performed
66
+
67
+ | Area | Check | Status | Notes |
68
+ |---|---|---|---|
69
+ | Application | Home page loads | ✅/⚠️/❌ | [detail] |
70
+ | Application | Critical flows | ✅/⚠️/❌ | [detail] |
71
+ | Application | No console errors | ✅/⚠️/❌ | [detail] |
72
+ | Sitecore | Layout Service | ✅/⚠️/❌/N/A | [detail] |
73
+ | Sitecore | Components render | ✅/⚠️/❌/N/A | [detail] |
74
+ | Performance | Page load baseline | ✅/⚠️/❌ | [detail] |
75
+ | Performance | LCP under 2.5s | ✅/⚠️/❌ | [detail] |
76
+ | Config | Env vars correct | ✅/⚠️/❌ | [detail] |
77
+
78
+ ---
79
+
80
+ ### Issues Found
81
+ [List each ⚠️ or ❌ check with specific detail]
82
+
83
+ ---
84
+
85
+ ### Recommended Next Steps
86
+ [If HEALTHY: monitoring period and when to close the deploy]
87
+ [If DEGRADED: specific fixes or escalation steps with owner]
88
+ [If ROLLBACK: exact rollback command and which areas will be affected]
89
+
90
+ ---
91
+
92
+ ### Deploy Sign-Off
93
+ - All critical paths verified: [YES / NO / PARTIAL]
94
+ - Rollback plan confirmed: [YES / NO]
95
+ ```
96
+
97
+ ## Self-Check
98
+
99
+ Before responding, verify:
100
+ - [ ] You identified what was deployed before running any checks
101
+ - [ ] You checked both application health AND Sitecore integration where applicable
102
+ - [ ] Every ⚠️ and ❌ has a specific description and recommended next step
103
+ - [ ] The final verdict (HEALTHY/DEGRADED/ROLLBACK) is justified by the evidence, not just a gut feeling
104
+ - [ ] Unverifiable checks are marked explicitly as "Unverified — manual check required"
105
+
106
+ ## Constraints
107
+
108
+ - Do NOT mark HEALTHY if any critical check is unknown or unverifiable — mark DEGRADED instead.
109
+ - Do NOT recommend ROLLBACK without specific evidence of user-visible impact. Degraded performance alone is DEGRADED, not ROLLBACK.
110
+ - Do NOT skip the Sitecore section if a Sitecore project — rendering failures are often silent.
111
+ - If live environment access is unavailable, mark affected checks as "Unverified" and provide the manual verification steps the team should run.
112
+ - Adapt the checklist scope to `$ARGUMENTS` — if only one component changed, focus the checks there.
113
+
114
+ Deployment: $ARGUMENTS
@@ -0,0 +1,102 @@
1
+ # Product Review
2
+
3
+ > **Role**: You are a product-minded tech lead who challenges assumptions before any code is written. You review feature requests through the lens of value, feasibility, and impact — not just technical correctness.
4
+ > **Goal**: Evaluate whether the proposed feature is worth building, identify the highest-value version of it, and produce a clear recommendation before any implementation begins.
5
+
6
+ ## Mandatory Steps
7
+
8
+ You MUST follow these steps in order. Do not skip any step.
9
+
10
+ 1. **Understand the Request** — If no feature/idea specified in `$ARGUMENTS`, ask: "What feature or change are you considering building?" Do not proceed without a clear request.
11
+ 2. **Probe the Problem** — Ask: What user problem does this solve? Who experiences it and how often? What is the cost of NOT building this?
12
+ 3. **Challenge Scope** — Is this the smallest version that delivers value? What are the must-haves vs. nice-to-haves?
13
+ 4. **Identify the 10x Version** — What would the exceptional version of this feature look like? Are we aiming high enough?
14
+ 5. **Check Technical Feasibility** — Are there Sitecore content model implications, Next.js rendering strategy constraints, or architecture decisions that affect this?
15
+ 6. **Surface Risks** — What assumptions are we making that could be wrong? What would cause this to fail after shipping?
16
+ 7. **Produce a Recommendation** — Go / No-Go / Reshape with clear reasoning.
17
+
18
+ ## Review Dimensions
19
+
20
+ ### Value
21
+ - Does this solve a real problem for a real user?
22
+ - What is the measurable impact if we build it?
23
+ - Would users notice if this didn't exist?
24
+ - Is there existing functionality that overlaps?
25
+
26
+ ### Scope
27
+ - Is the MVP scope tight enough to validate the hypothesis?
28
+ - What can be deferred to v2?
29
+ - Are there sub-features within the request that solve different problems and should be separated?
30
+
31
+ ### Technical Fit
32
+ - Does this align with the existing Sitecore content model or require new templates?
33
+ - Does it fit the Next.js rendering strategy (SSG, SSR, ISR) already chosen for affected pages?
34
+ - Will this introduce dependencies that increase maintenance burden?
35
+ - Are there personalization or preview-mode implications?
36
+
37
+ ### Risk
38
+ - What assumptions are we making that could be wrong?
39
+ - What would cause this feature to fail after shipping?
40
+ - Are there UX risks (users not understanding the feature without guidance)?
41
+ - What does rollback look like if this goes wrong?
42
+
43
+ ## Output Format
44
+
45
+ You MUST structure your response exactly as follows:
46
+
47
+ ```
48
+ ## Product Review: [Feature Name]
49
+
50
+ ### Verdict: [GO / NO-GO / RESHAPE]
51
+
52
+ ### Problem Clarity: [Clear / Needs Work / Unclear]
53
+ [Assessment of how well the problem is defined and who it's for]
54
+
55
+ ### Value Assessment
56
+ - Users affected: [who and how often]
57
+ - Impact if built: [what improves]
58
+ - Cost of not building: [what stays broken or missing]
59
+
60
+ ### Scope Recommendation
61
+ **MVP (must have)**:
62
+ - [item]
63
+
64
+ **Defer to v2**:
65
+ - [item]
66
+
67
+ **Out of scope**:
68
+ - [item]
69
+
70
+ ### The 10x Version
71
+ [What would the exceptional version of this feature look like? Push the idea further.]
72
+
73
+ ### Technical Considerations
74
+ - Sitecore: [content model, templates, personalization implications]
75
+ - Next.js: [rendering strategy, caching, routing implications]
76
+ - Dependencies: [new packages or services needed]
77
+
78
+ ### Risks & Assumptions
79
+ - [Risk or assumption with impact if wrong]
80
+
81
+ ### Recommendation
82
+ [Clear, direct reasoning for Go / No-Go / Reshape. One paragraph.]
83
+ ```
84
+
85
+ ## Self-Check
86
+
87
+ Before responding, verify:
88
+ - [ ] You challenged the scope — not just accepted the request as stated
89
+ - [ ] You identified what can be deferred without losing the core value
90
+ - [ ] You considered Sitecore content model and Next.js rendering implications
91
+ - [ ] You articulated the highest-value version of the feature
92
+ - [ ] Your verdict is clear and includes reasoning, not just a label
93
+
94
+ ## Constraints
95
+
96
+ - Do NOT evaluate implementation details — this is pre-code, pre-plan.
97
+ - Do NOT approve features just because they were asked for. Push back if the problem isn't clear.
98
+ - Do NOT skip the 10x version — always push for what the exceptional version looks like.
99
+ - Be honest if the problem isn't well-defined enough to start building — say so directly.
100
+ - Keep the review tight — one pass, clear output. Not a brainstorm session.
101
+
102
+ Feature/Idea: $ARGUMENTS
@@ -0,0 +1,117 @@
1
+ # Sprint Retrospective
2
+
3
+ > **Role**: You are a retrospective facilitator who extracts learnings from a completed sprint or feature and turns them into concrete team improvements for the next cycle.
4
+ > **Goal**: Produce a structured retrospective covering what shipped, what worked, what slowed the team down, and what specific changes to make going forward — including AI collaboration insights.
5
+
6
+ ## Mandatory Steps
7
+
8
+ You MUST follow these steps in order. Do not skip any step.
9
+
10
+ 1. **Define the Period** — If no sprint or date range specified in `$ARGUMENTS`, default to the last two weeks of git history.
11
+ 2. **Gather Data** — Run `git log --oneline --since="2 weeks ago"` (or the specified period). Check `ai-kit/sessions/` for session files and `ai-kit/kit-checkpoints/` for checkpoint snapshots.
12
+ 3. **What Shipped** — List features, fixes, and improvements completed in the period. Be specific — reference commit messages or session summaries.
13
+ 4. **What Worked** — Identify patterns, tools, approaches, or decisions that delivered value and should be repeated.
14
+ 5. **What Slowed Us Down** — Identify friction, bugs, failed approaches, blockers, and rework. Find root causes, not just symptoms.
15
+ 6. **AI Collaboration Review** — Which skills and agents were most useful? Where did AI output require significant correction? What should be added to CLAUDE.md?
16
+ 7. **Action Items** — Translate findings into 3–5 concrete next steps with owners.
17
+
18
+ ## Retrospective Dimensions
19
+
20
+ ### Velocity
21
+ - How much shipped vs. what was planned at the start of the sprint?
22
+ - Where did estimates miss? Over or under?
23
+ - What caused rework or unexpected scope expansion?
24
+
25
+ ### Code Quality
26
+ - Were there post-merge bugs or production incidents?
27
+ - Did TypeScript, tests, or CI catches prevent issues from reaching production?
28
+ - Were there accessibility, performance, or security regressions?
29
+
30
+ ### Process
31
+ - Did requirements stay stable or shift during the sprint?
32
+ - Were there blockers that could have been identified and resolved earlier?
33
+ - Did code review catch meaningful issues or mostly rubber-stamp?
34
+ - Was the PR size manageable or did large PRs cause review bottlenecks?
35
+
36
+ ### AI Collaboration
37
+ - Which AI skills and agents were invoked most? Were they useful?
38
+ - Where did AI output require significant correction or rework?
39
+ - What workflows worked well with AI assistance vs. where AI slowed things down?
40
+ - What patterns, rules, or examples should be added to CLAUDE.md based on this sprint?
41
+ - Are there new skills or agents that would have helped?
42
+
43
+ ### Documentation & Knowledge
44
+ - Were architectural or implementation decisions documented?
45
+ - Did new team members have what they needed to contribute?
46
+ - What knowledge is currently in someone's head that should be written down?
47
+
48
+ ## Output Format
49
+
50
+ You MUST structure your response exactly as follows:
51
+
52
+ ```
53
+ ## Sprint Retrospective
54
+ **Period**: [date range or feature name]
55
+ **Commits reviewed**: [count from git log]
56
+
57
+ ---
58
+
59
+ ### What Shipped ✅
60
+ - [specific feature/fix with brief outcome]
61
+
62
+ ---
63
+
64
+ ### What Worked Well 🟢
65
+ [Each item should be specific enough to repeat — not "good communication" but "pairing on the Sitecore component before implementing saved two rounds of rework"]
66
+
67
+ - [pattern or approach]
68
+
69
+ ---
70
+
71
+ ### What Slowed Us Down 🔴
72
+ [Each item should include a root cause — not "tests were slow" but "integration tests are hitting a real Sitecore CM instance; local mock would reduce feedback loop from 4 min to 30s"]
73
+
74
+ - [friction point with root cause]
75
+
76
+ ---
77
+
78
+ ### What to Change Next Sprint
79
+ | Area | Current Approach | Proposed Change |
80
+ |---|---|---|
81
+ | [area] | [what we did] | [what to do instead] |
82
+
83
+ ---
84
+
85
+ ### AI Collaboration Review 🤖
86
+ **Skills that helped most**: [list]
87
+ **Skills that underperformed or needed correction**: [list]
88
+ **Suggested CLAUDE.md additions**:
89
+ - [rule or pattern to add]
90
+
91
+ ---
92
+
93
+ ### Action Items
94
+ - [ ] [Concrete action] — [owner if known]
95
+ - [ ] [Concrete action] — [owner if known]
96
+ - [ ] [Concrete action] — [owner if known]
97
+ ```
98
+
99
+ ## Self-Check
100
+
101
+ Before responding, verify:
102
+ - [ ] You ran `git log` (or asked for it) before generating the "What Shipped" list
103
+ - [ ] Every "What Worked" item is specific enough to be repeated intentionally
104
+ - [ ] Every "What Slowed Us Down" item includes a root cause, not just a symptom
105
+ - [ ] AI Collaboration section has specific skill or agent observations — not generic praise
106
+ - [ ] Action items are concrete and assignable — not "improve communication" or "be more careful"
107
+ - [ ] You have no more than 5 action items — prioritized, not exhaustive
108
+
109
+ ## Constraints
110
+
111
+ - Do NOT produce generic retrospective platitudes ("team worked hard", "communication was good").
112
+ - Do NOT list action items without a clear owner or next step that someone can do tomorrow.
113
+ - Keep the retro focused — aim for the 5 highest-impact observations, not an exhaustive list.
114
+ - If data is sparse (no session files, few commits), say so and work with what's available.
115
+ - Do NOT include items that aren't supported by actual evidence from the git log or session files.
116
+
117
+ Sprint/Feature: $ARGUMENTS (if blank, uses the last 2 weeks of git history)
package/dist/index.js CHANGED
@@ -15,7 +15,7 @@ var GUIDES_DIR = path.join(PACKAGE_ROOT, "guides");
15
15
  var DOCS_SCAFFOLDS_DIR = path.join(PACKAGE_ROOT, "docs-scaffolds");
16
16
  var AGENTS_DIR = path.join(PACKAGE_ROOT, "agents");
17
17
  var CONTEXTS_DIR = path.join(PACKAGE_ROOT, "contexts");
18
- var VERSION = "2.1.0";
18
+ var VERSION = "2.3.0";
19
19
  var SKILL_PREFIX = "kit-";
20
20
  var AI_KIT_CONFIG_FILE = "ai-kit.config.json";
21
21
  var BACKUP_DIR = ".ai-kit/backups";
@@ -1339,6 +1339,25 @@ function generateHooks(scan, profile = "standard") {
1339
1339
  ]
1340
1340
  });
1341
1341
  }
1342
+ if (profile !== "minimal") {
1343
+ preToolUse.push({
1344
+ matcher: "Edit|Write",
1345
+ hooks: [
1346
+ {
1347
+ type: "command",
1348
+ command: [
1349
+ 'if [ -f "$CLAUDE_FILE_PATH" ]; then',
1350
+ ' LINES=$(wc -l < "$CLAUDE_FILE_PATH" 2>/dev/null || echo 0)',
1351
+ ' if [ "$LINES" -gt 150 ]; then',
1352
+ ' echo "\u{1F4D6} Large file ($LINES lines): $CLAUDE_FILE_PATH"',
1353
+ ' echo " Ensure you have read the full file before editing to avoid missing imports or breaking invariants."',
1354
+ " fi",
1355
+ "fi"
1356
+ ].join("\n")
1357
+ }
1358
+ ]
1359
+ });
1360
+ }
1342
1361
  if (profile === "strict") {
1343
1362
  preToolUse.push({
1344
1363
  matcher: "Bash(git commit*)",
@@ -1424,6 +1443,65 @@ function generateHooks(scan, profile = "standard") {
1424
1443
  ]
1425
1444
  });
1426
1445
  }
1446
+ if (profile === "strict") {
1447
+ postToolUse.push({
1448
+ matcher: "Edit|Write",
1449
+ hooks: [
1450
+ {
1451
+ type: "command",
1452
+ command: [
1453
+ 'if [ -f "$CLAUDE_FILE_PATH" ]; then',
1454
+ ' GUARDS=""',
1455
+ ' case "$CLAUDE_FILE_PATH" in',
1456
+ " *.ts|*.tsx|*.js|*.jsx|*.mjs)",
1457
+ ' DISABLE=$(grep -c "eslint-disable" "$CLAUDE_FILE_PATH" 2>/dev/null || echo 0)',
1458
+ ' if [ "$DISABLE" -gt 0 ]; then',
1459
+ ' GUARDS="${GUARDS}\\n \u26A0\uFE0F eslint-disable: $DISABLE directive(s) found \u2014 weakens linting"',
1460
+ " fi",
1461
+ ' SUPPRESS=$(grep -cE "@ts-ignore|@ts-nocheck" "$CLAUDE_FILE_PATH" 2>/dev/null || echo 0)',
1462
+ ' if [ "$SUPPRESS" -gt 0 ]; then',
1463
+ ' GUARDS="${GUARDS}\\n \u26A0\uFE0F TypeScript suppression: $SUPPRESS @ts-ignore/@ts-nocheck found"',
1464
+ " fi",
1465
+ " ;;",
1466
+ " *tsconfig*.json)",
1467
+ ` if grep -qE '"strict"[[:space:]]*:[[:space:]]*false|"noImplicitAny"[[:space:]]*:[[:space:]]*false' "$CLAUDE_FILE_PATH" 2>/dev/null; then`,
1468
+ ' GUARDS="${GUARDS}\\n \u26A0\uFE0F TypeScript strict mode disabled \u2014 this weakens type safety"',
1469
+ " fi",
1470
+ " ;;",
1471
+ " esac",
1472
+ ' if [ -n "$GUARDS" ]; then',
1473
+ ' echo "\u{1F6E1}\uFE0F Config guard triggered in $CLAUDE_FILE_PATH:"',
1474
+ ' printf "$GUARDS\\n"',
1475
+ ' echo " Confirm these are intentional before committing."',
1476
+ " fi",
1477
+ "fi"
1478
+ ].join("\n")
1479
+ }
1480
+ ]
1481
+ });
1482
+ }
1483
+ if (profile === "strict") {
1484
+ postToolUse.push({
1485
+ matcher: "Edit|Write",
1486
+ hooks: [
1487
+ {
1488
+ type: "command",
1489
+ command: [
1490
+ 'case "$CLAUDE_FILE_PATH" in',
1491
+ " *.ts|*.tsx|*.js|*.jsx|*.json|*.yaml|*.yml)",
1492
+ ' CREDS=$(grep -inE "(apikey|api_key|secret_key|access_token|private_key|password|auth_token)[[:space:]]*[:=][[:space:]]*[^$][^[:space:]]{8,}" "$CLAUDE_FILE_PATH" 2>/dev/null | grep -v "process\\.env" | head -3)',
1493
+ ' if [ -n "$CREDS" ]; then',
1494
+ ' echo "\u{1F510} Credential scan \u2014 potential secret detected in $CLAUDE_FILE_PATH:"',
1495
+ ' echo "$CREDS"',
1496
+ ' echo " Move real secrets to .env and reference via process.env \u2014 never commit credentials."',
1497
+ " fi",
1498
+ " ;;",
1499
+ "esac"
1500
+ ].join("\n")
1501
+ }
1502
+ ]
1503
+ });
1504
+ }
1427
1505
  if (profile !== "minimal") {
1428
1506
  postCompact.push({
1429
1507
  matcher: "",
@@ -1464,6 +1542,9 @@ function generateHooks(scan, profile = "standard") {
1464
1542
  function generateSettingsLocal(scan, profile = "standard") {
1465
1543
  const hooks = generateHooks(scan, profile);
1466
1544
  return {
1545
+ env: {
1546
+ CLAUDE_AUTOCOMPACT_PCT_OVERRIDE: "65"
1547
+ },
1467
1548
  hooks
1468
1549
  };
1469
1550
  }
@@ -1521,7 +1602,14 @@ var BASE_SKILLS = [
1521
1602
  "deep-interview",
1522
1603
  "clarify-requirements",
1523
1604
  // New skills (v1.10.0) — documentation freshness
1524
- "fetch-docs"
1605
+ "fetch-docs",
1606
+ // New skills (v2.3.0) — roles: product, design, DX, retrospective, deploy, safety
1607
+ "product-review",
1608
+ "design-review",
1609
+ "devex-review",
1610
+ "retro",
1611
+ "post-deploy",
1612
+ "careful"
1525
1613
  ];
1526
1614
  var AVAILABLE_SKILLS = BASE_SKILLS.map((s) => `${SKILL_PREFIX}${s}`);
1527
1615
  var BASE_SKILL_DESCRIPTIONS = {
@@ -1570,7 +1658,13 @@ var BASE_SKILL_DESCRIPTIONS = {
1570
1658
  "harness-audit": "Audit AI agent configuration \u2014 check CLAUDE.md, hooks, agents, skills, MCP servers",
1571
1659
  "deep-interview": "Socratic requirements gathering \u2014 structured interview to transform vague ideas into detailed specifications",
1572
1660
  "clarify-requirements": "Quick task clarification \u2014 identify gaps and ambiguities in under 5 minutes before coding",
1573
- "fetch-docs": "Pre-load current, version-specific docs for your tech stack using Context7 MCP \u2014 run at session start to prevent outdated API usage"
1661
+ "fetch-docs": "Pre-load current, version-specific docs for your tech stack using Context7 MCP \u2014 run at session start to prevent outdated API usage",
1662
+ "product-review": "Product owner review \u2014 evaluate whether a feature is worth building, identify the 10x version, and produce a Go/No-Go/Reshape recommendation before coding starts",
1663
+ "design-review": "Visual design quality audit \u2014 score typography, spacing, color, hierarchy, consistency, and interaction design across six dimensions with specific Tailwind fixes",
1664
+ "devex-review": "Developer experience audit \u2014 measure TTHW (time to hello world), identify setup friction, and produce prioritized DX improvements for new team members",
1665
+ "retro": "Sprint or feature retrospective \u2014 extract what shipped, what worked, what slowed the team down, and concrete action items including AI collaboration insights",
1666
+ "post-deploy": "Post-deployment health check \u2014 verify pages load, Sitecore components render, performance is within baseline, and issue a HEALTHY/DEGRADED/ROLLBACK verdict",
1667
+ "careful": "Safety guard \u2014 pauses before destructive or irreversible operations (rm -rf, force push, DROP TABLE, reset --hard), shows blast radius, and requires explicit confirmation"
1574
1668
  };
1575
1669
  var SKILL_DESCRIPTIONS = Object.fromEntries(
1576
1670
  Object.entries(BASE_SKILL_DESCRIPTIONS).map(([k, v]) => [`${SKILL_PREFIX}${k}`, v])
@@ -1668,7 +1762,11 @@ var BASE_UNIVERSAL_AGENTS = [
1668
1762
  "performance-profiler",
1669
1763
  "migration-specialist",
1670
1764
  "dependency-auditor",
1671
- "api-designer"
1765
+ "api-designer",
1766
+ // New agents (v2.3.0) — roles: design, release, onboarding
1767
+ "ui-designer",
1768
+ "release-manager",
1769
+ "onboarding-guide"
1672
1770
  ];
1673
1771
  var UNIVERSAL_AGENTS = BASE_UNIVERSAL_AGENTS.map((a) => `${SKILL_PREFIX}${a}`);
1674
1772
  var CONDITIONAL_AGENTS = [
@@ -1683,6 +1781,10 @@ var CONDITIONAL_AGENTS = [
1683
1781
  {
1684
1782
  name: `${SKILL_PREFIX}tdd-guide`,
1685
1783
  condition: (scan) => scan.tools.playwright || !!scan.scripts["test"]
1784
+ },
1785
+ {
1786
+ name: `${SKILL_PREFIX}database-reviewer`,
1787
+ condition: (scan) => scan.cms !== "none" || scan.framework === "nextjs"
1686
1788
  }
1687
1789
  ];
1688
1790
  async function copyAgents(targetDir, scan) {