hatch3r 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +437 -0
- package/agents/hatch3r-a11y-auditor.md +126 -0
- package/agents/hatch3r-architect.md +160 -0
- package/agents/hatch3r-ci-watcher.md +123 -0
- package/agents/hatch3r-context-rules.md +97 -0
- package/agents/hatch3r-dependency-auditor.md +164 -0
- package/agents/hatch3r-devops.md +138 -0
- package/agents/hatch3r-docs-writer.md +97 -0
- package/agents/hatch3r-implementer.md +162 -0
- package/agents/hatch3r-learnings-loader.md +108 -0
- package/agents/hatch3r-lint-fixer.md +104 -0
- package/agents/hatch3r-perf-profiler.md +123 -0
- package/agents/hatch3r-researcher.md +642 -0
- package/agents/hatch3r-reviewer.md +81 -0
- package/agents/hatch3r-security-auditor.md +119 -0
- package/agents/hatch3r-test-writer.md +134 -0
- package/commands/hatch3r-agent-customize.md +146 -0
- package/commands/hatch3r-api-spec.md +49 -0
- package/commands/hatch3r-benchmark.md +50 -0
- package/commands/hatch3r-board-fill.md +504 -0
- package/commands/hatch3r-board-init.md +315 -0
- package/commands/hatch3r-board-pickup.md +672 -0
- package/commands/hatch3r-board-refresh.md +198 -0
- package/commands/hatch3r-board-shared.md +369 -0
- package/commands/hatch3r-bug-plan.md +410 -0
- package/commands/hatch3r-codebase-map.md +1182 -0
- package/commands/hatch3r-command-customize.md +94 -0
- package/commands/hatch3r-context-health.md +112 -0
- package/commands/hatch3r-cost-tracking.md +139 -0
- package/commands/hatch3r-dep-audit.md +171 -0
- package/commands/hatch3r-feature-plan.md +379 -0
- package/commands/hatch3r-healthcheck.md +307 -0
- package/commands/hatch3r-hooks.md +282 -0
- package/commands/hatch3r-learn.md +217 -0
- package/commands/hatch3r-migration-plan.md +51 -0
- package/commands/hatch3r-onboard.md +56 -0
- package/commands/hatch3r-project-spec.md +1153 -0
- package/commands/hatch3r-recipe.md +179 -0
- package/commands/hatch3r-refactor-plan.md +426 -0
- package/commands/hatch3r-release.md +328 -0
- package/commands/hatch3r-roadmap.md +556 -0
- package/commands/hatch3r-rule-customize.md +114 -0
- package/commands/hatch3r-security-audit.md +370 -0
- package/commands/hatch3r-skill-customize.md +93 -0
- package/commands/hatch3r-workflow.md +377 -0
- package/dist/cli/hooks-ZOTFDEA3.js +59 -0
- package/dist/cli/index.d.ts +2 -0
- package/dist/cli/index.js +3584 -0
- package/github-agents/hatch3r-docs-agent.md +46 -0
- package/github-agents/hatch3r-lint-agent.md +41 -0
- package/github-agents/hatch3r-security-agent.md +54 -0
- package/github-agents/hatch3r-test-agent.md +66 -0
- package/hooks/hatch3r-ci-failure.md +10 -0
- package/hooks/hatch3r-file-save.md +11 -0
- package/hooks/hatch3r-post-merge.md +10 -0
- package/hooks/hatch3r-pre-commit.md +11 -0
- package/hooks/hatch3r-pre-push.md +10 -0
- package/hooks/hatch3r-session-start.md +10 -0
- package/mcp/mcp.json +62 -0
- package/package.json +84 -0
- package/prompts/hatch3r-bug-triage.md +155 -0
- package/prompts/hatch3r-code-review.md +131 -0
- package/prompts/hatch3r-pr-description.md +173 -0
- package/rules/hatch3r-accessibility-standards.md +77 -0
- package/rules/hatch3r-accessibility-standards.mdc +75 -0
- package/rules/hatch3r-agent-orchestration.md +160 -0
- package/rules/hatch3r-api-design.md +176 -0
- package/rules/hatch3r-api-design.mdc +176 -0
- package/rules/hatch3r-browser-verification.md +73 -0
- package/rules/hatch3r-browser-verification.mdc +73 -0
- package/rules/hatch3r-ci-cd.md +70 -0
- package/rules/hatch3r-ci-cd.mdc +68 -0
- package/rules/hatch3r-code-standards.md +102 -0
- package/rules/hatch3r-code-standards.mdc +100 -0
- package/rules/hatch3r-component-conventions.md +102 -0
- package/rules/hatch3r-component-conventions.mdc +102 -0
- package/rules/hatch3r-data-classification.md +85 -0
- package/rules/hatch3r-data-classification.mdc +83 -0
- package/rules/hatch3r-dependency-management.md +17 -0
- package/rules/hatch3r-dependency-management.mdc +15 -0
- package/rules/hatch3r-error-handling.md +17 -0
- package/rules/hatch3r-error-handling.mdc +15 -0
- package/rules/hatch3r-feature-flags.md +112 -0
- package/rules/hatch3r-feature-flags.mdc +112 -0
- package/rules/hatch3r-git-conventions.md +47 -0
- package/rules/hatch3r-git-conventions.mdc +45 -0
- package/rules/hatch3r-i18n.md +90 -0
- package/rules/hatch3r-i18n.mdc +90 -0
- package/rules/hatch3r-learning-consult.md +29 -0
- package/rules/hatch3r-learning-consult.mdc +27 -0
- package/rules/hatch3r-migrations.md +17 -0
- package/rules/hatch3r-migrations.mdc +15 -0
- package/rules/hatch3r-observability.md +165 -0
- package/rules/hatch3r-observability.mdc +165 -0
- package/rules/hatch3r-performance-budgets.md +109 -0
- package/rules/hatch3r-performance-budgets.mdc +109 -0
- package/rules/hatch3r-secrets-management.md +76 -0
- package/rules/hatch3r-secrets-management.mdc +74 -0
- package/rules/hatch3r-security-patterns.md +211 -0
- package/rules/hatch3r-security-patterns.mdc +211 -0
- package/rules/hatch3r-testing.md +89 -0
- package/rules/hatch3r-testing.mdc +87 -0
- package/rules/hatch3r-theming.md +51 -0
- package/rules/hatch3r-theming.mdc +51 -0
- package/rules/hatch3r-tooling-hierarchy.md +92 -0
- package/rules/hatch3r-tooling-hierarchy.mdc +79 -0
- package/skills/hatch3r-a11y-audit/SKILL.md +131 -0
- package/skills/hatch3r-agent-customize/SKILL.md +75 -0
- package/skills/hatch3r-api-spec/SKILL.md +66 -0
- package/skills/hatch3r-architecture-review/SKILL.md +96 -0
- package/skills/hatch3r-bug-fix/SKILL.md +129 -0
- package/skills/hatch3r-ci-pipeline/SKILL.md +76 -0
- package/skills/hatch3r-command-customize/SKILL.md +67 -0
- package/skills/hatch3r-context-health/SKILL.md +76 -0
- package/skills/hatch3r-cost-tracking/SKILL.md +65 -0
- package/skills/hatch3r-dep-audit/SKILL.md +82 -0
- package/skills/hatch3r-feature/SKILL.md +129 -0
- package/skills/hatch3r-gh-agentic-workflows/SKILL.md +150 -0
- package/skills/hatch3r-incident-response/SKILL.md +86 -0
- package/skills/hatch3r-issue-workflow/SKILL.md +139 -0
- package/skills/hatch3r-logical-refactor/SKILL.md +73 -0
- package/skills/hatch3r-migration/SKILL.md +76 -0
- package/skills/hatch3r-perf-audit/SKILL.md +114 -0
- package/skills/hatch3r-pr-creation/SKILL.md +85 -0
- package/skills/hatch3r-qa-validation/SKILL.md +86 -0
- package/skills/hatch3r-recipe/SKILL.md +67 -0
- package/skills/hatch3r-refactor/SKILL.md +86 -0
- package/skills/hatch3r-release/SKILL.md +93 -0
- package/skills/hatch3r-rule-customize/SKILL.md +70 -0
- package/skills/hatch3r-skill-customize/SKILL.md +67 -0
- package/skills/hatch3r-visual-refactor/SKILL.md +89 -0
@@ -0,0 +1,87 @@
+---
+description: Test standards and conventions for the project
+alwaysApply: true
+---
+# Testing Standards
+
+## Core Principles
+
+- Unit tests: project test runner. Integration: test runner + emulators/mocks. E2E: browser automation (Playwright or equivalent).
+- **Deterministic.** Mock time where needed. No wall clock dependency.
+- **Isolated.** Each test sets up and tears down its own state.
+- **Fast.** Unit tests < 50ms. Integration tests < 2s.
+- **Named clearly.** Describe behavior: `"should award 15 XP for 25-min focus block"`.
+- **Regression.** Every bug fix includes a test that fails before the fix and passes after.
+- **No network.** Unit tests must not make network calls. Use mocks.
+- No `any` types in tests. No `.skip` without a linked issue.
+- Write tests to `tests/unit/`, `tests/integration/`, `tests/e2e/`, or equivalent.
+- Use test fixtures from `tests/fixtures/` or equivalent.
+- **Browser verification.** For UI changes, verify visually in the browser via browser automation MCP after automated tests pass. Capture screenshots as evidence.
+
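The "deterministic, no wall clock dependency" principle above can be sketched by injecting a clock instead of calling `Date.now()` inside the code under test. This is an illustrative sketch; the function names are not from the package.

```typescript
// A duration helper that never reads the wall clock directly.
// The clock is injected, so tests can pass a frozen, deterministic one.
type Clock = () => number; // epoch milliseconds

function elapsedMinutes(startedAt: number, clock: Clock): number {
  return Math.floor((clock() - startedAt) / 60_000);
}

// Production code would pass Date.now; tests pass a frozen clock.
const frozenClock: Clock = () => 1_700_000_000_000;
const start = frozenClock() - 25 * 60_000; // 25 minutes earlier
const minutes = elapsedMinutes(start, frozenClock);
```

Because the clock is a parameter, the same test produces the same result on every run, on every machine.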
+## Coverage Thresholds
+
+- **Statement coverage:** 80% minimum across the project. New code must not decrease overall coverage.
+- **Branch coverage:** 70% minimum. Uncovered branches must be justified (e.g., defensive error handling unlikely to trigger).
+- **Function coverage:** 80% minimum. Every exported function must have at least one test.
+- **Per-PR gate:** CI blocks merge if the PR decreases coverage by more than 1% in any metric.
+- **Critical modules** (auth, payments, data persistence, security rules): 90% statement, 85% branch minimum.
+- Generate coverage reports in CI and publish as PR comments or artifacts for visibility.
+- Exclude generated code, type declarations, and config files from coverage metrics.
+
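The thresholds above map directly onto test-runner configuration. A minimal sketch, assuming Vitest ≥ 1.0 with the v8 coverage provider (the exclude globs are illustrative):

```typescript
// vitest.config.ts -- enforce the coverage gates described above.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8",
      // CI fails when any metric drops below these floors.
      thresholds: { statements: 80, branches: 70, functions: 80 },
      // Generated code, type declarations, and config stay out of the metrics.
      exclude: ["**/*.d.ts", "**/*.config.*", "**/generated/**"],
    },
  },
});
```

Critical modules can be held to their stricter 90/85 floors with a separate config or per-glob thresholds where the runner supports them.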
+## Mocking Strategy
+
+- **Prefer fakes over mocks** for stateful dependencies (databases, caches). Fakes implement the real interface with in-memory state, making tests more realistic.
+- **Use stubs** for simple value returns where behavior is irrelevant to the test (e.g., config lookups, feature flags).
+- **Use mocks** (with call verification) only when the interaction itself is the behavior under test (e.g., verifying an event was emitted, an API was called with specific arguments).
+- **Mock boundaries, not internals.** Mock at module/service boundaries (HTTP clients, database drivers, external SDKs). Never mock private functions or internal implementation details.
+- **Reset mocks between tests.** Use `beforeEach` / `afterEach` to restore original implementations. Leaked mock state is a top source of flaky tests.
+- **Type-safe mocks.** Mock implementations must satisfy the same TypeScript interface as the real dependency. Avoid `as any` in mock setup.
+- **No mocking the unit under test.** If you need to mock part of the module you are testing, the module has too many responsibilities — refactor first.
+
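The "fakes over mocks" and "type-safe mocks" points above can be sketched as a fake that implements the same interface as the real dependency, with in-memory state. The `Cache` interface and `getOrLoad` helper are illustrative, not from the package.

```typescript
// The code under test depends on this boundary, not on a concrete client.
interface Cache {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// A fake: real interface, in-memory state. No `as any` needed.
class InMemoryCache implements Cache {
  private store = new Map<string, string>();
  get(key: string): string | undefined { return this.store.get(key); }
  set(key: string, value: string): void { this.store.set(key, value); }
}

function getOrLoad(cache: Cache, key: string, load: () => string): string {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = load();
  cache.set(key, value);
  return value;
}

const cache = new InMemoryCache();
let loads = 0;
const first = getOrLoad(cache, "user:1", () => { loads++; return "alice"; });
const second = getOrLoad(cache, "user:1", () => { loads++; return "alice"; });
```

Because the fake carries real state, the test exercises the caching behavior itself: the loader runs once, and the second call hits the cache.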
+## Property-Based Testing
+
+- Use a property-based testing library (fast-check or equivalent) for functions with wide input domains.
+- **Priority targets:** parsers, serializers, validators, encoders/decoders, mathematical functions, and any pure function with complex input types.
+- Define invariants as properties: round-trip (encode then decode equals original), idempotency (applying twice equals applying once), monotonicity, commutativity.
+- Use `fc.assert` with at least 100 runs per property. Increase to 1000 for critical paths.
+- When a property test finds a failure, add the minimal counterexample as a dedicated regression unit test.
+- Shrinking must be enabled — it reduces failing inputs to the smallest reproduction case.
+- Property tests belong alongside unit tests in `tests/unit/`. Name them clearly: `"property: round-trip serialization for UserProfile"`.
+
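In practice the rule above means fast-check's `fc.assert`; to keep this sketch self-contained, the property loop and seeded generator are hand-rolled, but the shape is the same: generate many inputs, check an invariant (here, a base64 round-trip) on each.

```typescript
// Round-trip invariant: decode(encode(x)) === x for every generated input.
function encode(s: string): string {
  return Buffer.from(s, "utf8").toString("base64");
}
function decode(b: string): string {
  return Buffer.from(b, "base64").toString("utf8");
}

// Deterministically seeded LCG so any failure is reproducible.
function makeRng(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 0x1_0000_0000;
  };
}

function randomAscii(rng: () => number): string {
  const len = Math.floor(rng() * 20);
  let out = "";
  for (let i = 0; i < len; i++) {
    out += String.fromCharCode(32 + Math.floor(rng() * 95));
  }
  return out;
}

// 100 runs per property, matching the minimum stated above.
let failures = 0;
const rng = makeRng(42);
for (let run = 0; run < 100; run++) {
  const input = randomAscii(rng);
  if (decode(encode(input)) !== input) failures++;
}
```

A real fast-check version gains automatic shrinking, which this hand-rolled loop does not provide.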
+## Mutation Testing
+
+- Use Stryker (or equivalent mutation testing framework) on critical modules to measure test effectiveness beyond line coverage.
+- **Mutation score target:** 70% minimum on critical modules (auth, data layer, business rules). 60% minimum project-wide.
+- Run mutation testing in CI on a weekly schedule (not per-PR — too slow). Report results as a CI artifact.
+- **Surviving mutants** indicate tests that pass regardless of code changes — these are false-coverage tests. Fix them by adding assertions that detect the mutation.
+- Focus mutation testing effort on modules where a bug would cause data loss, security vulnerability, or financial impact.
+- Exclude test files, generated code, and UI presentation logic from mutation analysis.
+
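A surviving mutant can be shown in miniature. Below, the "mutant" flips `>=` to `>` (a typical mutation operator); a test that only checks age 25 cannot tell the versions apart, while a boundary assertion at 18 kills the mutant. The functions are illustrative.

```typescript
const isAdult = (age: number) => age >= 18;        // original code
const mutantIsAdult = (age: number) => age > 18;   // simulated mutation

// Weak assertion: passes for both versions, so the mutant "survives".
const weakTestPasses = isAdult(25) === true && mutantIsAdult(25) === true;

// Boundary assertion: original passes, mutant fails -- the mutant is killed.
const originalAtBoundary = isAdult(18);     // true
const mutantAtBoundary = mutantIsAdult(18); // false
```

This is exactly the signal line coverage misses: both versions are 100% covered by the weak test, yet only the boundary test detects the behavioral change.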
+## Flaky Test Handling
+
+- **Zero tolerance policy.** A flaky test erodes trust in the entire suite. Fix or quarantine within 48 hours of detection.
+- **Quarantine process:** Move the flaky test to a `tests/quarantine/` directory or tag with `.skip("FLAKY: #issue-number")`. Create a tracking issue immediately.
+- **Retry strategy in CI:** Allow a maximum of 1 automatic retry for the full test suite. Never retry individual tests silently — that masks flakiness.
+- **Root cause investigation:** Common causes are shared mutable state, timing dependencies (real clocks, `setTimeout`), port conflicts, uncontrolled randomness, and external service calls.
+- **Fix patterns:** Replace `setTimeout` with fake timers, replace shared state with per-test setup, replace port binding with dynamic ports, seed random generators deterministically.
+- **Flaky test metrics:** Track flaky test rate over time. Target < 0.5% flaky rate (flaky runs / total runs). Alert when rate exceeds 1%.
+- **Quarantine review:** Review quarantined tests weekly. Tests quarantined for more than 30 days must be either fixed or deleted with justification.
+
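The "replace `setTimeout` with fake timers" fix pattern works because virtual time advances only when the test says so. Frameworks provide this (e.g., Vitest's fake timers); the minimal hand-rolled version below just shows the mechanism. All names are illustrative.

```typescript
// A controllable timer: callbacks fire when virtual time passes them,
// never because the test process actually slept.
class FakeTimer {
  private now = 0;
  private queue: Array<{ at: number; fn: () => void }> = [];
  setTimeout(fn: () => void, ms: number): void {
    this.queue.push({ at: this.now + ms, fn });
  }
  advance(ms: number): void {
    this.now += ms;
    const due = this.queue.filter(t => t.at <= this.now);
    this.queue = this.queue.filter(t => t.at > this.now);
    due.sort((a, b) => a.at - b.at).forEach(t => t.fn());
  }
}

// Code under test takes the timer as a dependency instead of using
// the global setTimeout.
function delayedFlag(timer: FakeTimer, delayMs: number): { fired: () => boolean } {
  let fired = false;
  timer.setTimeout(() => { fired = true; }, delayMs);
  return { fired: () => fired };
}

const timer = new FakeTimer();
const flag = delayedFlag(timer, 200);
const before = flag.fired(); // false: virtual time has not advanced
timer.advance(200);
const after = flag.fired();  // true: fired deterministically, no real waiting
```

The test never sleeps and never races the scheduler, so it cannot flake on a slow CI machine.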
+## Test Data Management
+
+- **Factories over fixtures.** Use factory functions (builder pattern) to generate test data with sensible defaults and per-test overrides. Factories produce valid objects by default; tests override only the fields relevant to the scenario.
+- **Builder pattern example:** `buildUser({ role: "admin" })` returns a full valid User with admin role and random but valid defaults for all other fields.
+- **No shared mutable fixtures.** If multiple tests read the same fixture data, each test must get its own copy. Use `structuredClone()` or factory functions.
+- **Realistic data.** Use faker or equivalent for generating realistic names, emails, dates. Avoid magic strings like `"test"`, `"foo"`, `"abc123"`.
+- **Deterministic seeding.** When using random data generators, seed them per test file so failures are reproducible.
+- **Fixture files** (JSON, YAML) are acceptable for large, complex, or externally-sourced test inputs (API response snapshots, configuration samples). Store in `tests/fixtures/`.
+- **Database state:** Integration tests that require database state must set up and tear down within the test using helpers. Never depend on database state from a previous test.
+
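The `buildUser` example above can be sketched as follows. The `User` shape and default values are illustrative; in a real suite the defaults would come from a seeded faker rather than a counter.

```typescript
interface User {
  id: string;
  name: string;
  email: string;
  role: "user" | "admin";
}

// Factory: valid object by default, per-test overrides for only the
// fields the scenario cares about. A fresh object on every call, so
// no shared mutable fixture state.
let seq = 0;
function buildUser(overrides: Partial<User> = {}): User {
  seq += 1;
  return {
    id: `user-${seq}`,
    name: `Test User ${seq}`,
    email: `user${seq}@example.com`,
    role: "user",
    ...overrides,
  };
}

const admin = buildUser({ role: "admin" }); // only the relevant field overridden
const regular = buildUser();                // all defaults, distinct identity
```

Each call yields a distinct, fully valid object, so tests stay isolated even when they need many users.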
+## Snapshot Testing
+
+- **Use sparingly.** Snapshots are appropriate for serialized output (JSON API responses, CLI output, rendered HTML structure) where the exact output matters and is stable.
+- **Not appropriate for:** UI component visual appearance (use visual regression tests), objects with timestamps or random IDs (unstable), large objects (unreadable diffs).
+- **Review discipline.** Snapshot updates (`--update-snapshots`) must be reviewed as carefully as code changes. Reviewers must verify the new snapshot is intentionally correct, not just "different."
+- **Keep snapshots small.** Snapshot files > 100 lines suggest the test is asserting too broadly. Narrow the assertion to the relevant subset.
+- **Inline snapshots** (where supported) are preferred over external `.snap` files for short outputs (< 20 lines) because they keep the assertion co-located with the test.
+- **Name snapshot files** to match their test file: `auth.test.ts` → `auth.test.ts.snap`.
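The "not appropriate for objects with timestamps or random IDs" caveat above has a standard workaround: strip or normalize volatile fields before serializing, so the snapshot asserts only the stable subset. A minimal sketch, with an illustrative field list:

```typescript
// Serialize only the stable subset of an object, with sorted keys,
// so the snapshot does not churn on timestamps or random ids.
function stableSnapshot(value: Record<string, unknown>): string {
  const volatile = new Set(["id", "createdAt", "updatedAt"]);
  const entries = Object.entries(value)
    .filter(([key]) => !volatile.has(key))
    .sort(([a], [b]) => a.localeCompare(b));
  return JSON.stringify(Object.fromEntries(entries), null, 2);
}

// Two objects that differ only in volatile fields snapshot identically.
const a = stableSnapshot({ id: "abc123", name: "Widget", createdAt: "2024-01-01" });
const b = stableSnapshot({ id: "zzz999", name: "Widget", createdAt: "2025-06-15" });
```

Most runners offer property matchers for the same purpose; the explicit helper makes the normalization visible in review.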
@@ -0,0 +1,51 @@
+---
+description: Theming, dark mode, and color system conventions for the project
+globs: src/**/*.vue, src/**/*.tsx, src/**/*.jsx, src/**/*.css, src/**/*.scss
+alwaysApply: false
+---
+# Theming & Dark Mode
+
+## Color System
+
+- Define all colors as semantic CSS custom properties (`--color-surface`, `--color-text-primary`, `--color-text-secondary`, `--color-border`, `--color-brand`, `--color-error`, `--color-success`, `--color-warning`).
+- Maintain three token sets: **light** (default), **dark**, and **high-contrast**.
+- Never use raw hex, `rgb()`, or `hsl()` values in component code — always reference tokens.
+- Layer tokens: primitive (`--gray-900`) → semantic (`--color-text-primary: var(--gray-900)`) → component (`--btn-text: var(--color-text-primary)`).
+
+## Theme Detection & Persistence
+
+- Detect system preference with `prefers-color-scheme` media query for CSS defaults.
+- Read system preference in JS via `window.matchMedia('(prefers-color-scheme: dark)')` and listen for changes with `addEventListener('change', ...)`.
+- Persist explicit user override in `localStorage` (or equivalent settings store).
+- Resolution order: user override → system preference → light fallback.
+
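The resolution order above is a small pure function, which keeps it testable without a browser. The function name is illustrative:

```typescript
type Theme = "light" | "dark";

// user override -> system preference -> light fallback
function resolveTheme(
  storedOverride: string | null, // e.g. localStorage.getItem("theme")
  systemPrefersDark: boolean,    // matchMedia("(prefers-color-scheme: dark)").matches
): Theme {
  if (storedOverride === "light" || storedOverride === "dark") {
    return storedOverride;
  }
  return systemPrefersDark ? "dark" : "light";
}

const overridden = resolveTheme("light", true); // explicit override wins
const system = resolveTheme(null, true);        // falls through to system preference
const fallback = resolveTheme(null, false);     // nothing set: light fallback
```

Treating unknown stored values as "no override" also guards against stale or corrupted `localStorage` entries.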
+## Theme Switching
+
+- Apply theme via `data-theme` attribute on `<html>` (e.g., `<html data-theme="dark">`).
+- Define token overrides scoped to `[data-theme="dark"]` and `[data-theme="high-contrast"]` selectors.
+- Add `color-scheme: light dark` on `:root` so browser-native controls (scrollbars, form elements) adapt automatically.
+- Prevent flash of wrong theme (FOWT): inject a blocking `<script>` in `<head>` that reads the stored preference and sets `data-theme` before first paint.
+- Use `transition: background-color 200ms ease, color 200ms ease` on body for smooth theme changes — but disable transitions on initial load.
+
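The blocking `<head>` script's logic can be sketched against injected storage and root-element handles, so the same function runs in a Node test with fakes. The interfaces and storage key are assumptions, not from the package:

```typescript
interface ThemeStore { getItem(key: string): string | null; }
interface RootElement { setAttribute(name: string, value: string): void; }

// Runs before first paint in the real page; here the browser globals
// (localStorage, document.documentElement) are passed in as parameters.
function applyInitialTheme(
  store: ThemeStore,
  root: RootElement,
  systemPrefersDark: boolean,
): string {
  const stored = store.getItem("theme");
  const theme =
    stored === "light" || stored === "dark"
      ? stored
      : systemPrefersDark ? "dark" : "light";
  root.setAttribute("data-theme", theme); // set before first paint
  return theme;
}

// Minimal fakes standing in for localStorage and <html>.
const fakeStore: ThemeStore = { getItem: () => "dark" };
const attrs: Record<string, string> = {};
const fakeRoot: RootElement = { setAttribute: (n, v) => { attrs[n] = v; } };
const applied = applyInitialTheme(fakeStore, fakeRoot, false);
```

In the page itself this is inlined as a tiny synchronous script so the `data-theme` attribute exists before the first paint.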
+## Dark Mode Patterns
+
+- Reduce color saturation 10–20% for dark backgrounds to avoid visual vibration.
+- Express elevation through progressively lighter surface colors (e.g., `--color-surface-raised`, `--color-surface-overlay`) — not box-shadows.
+- Adjust images: apply `filter: brightness(0.9)` or reduce opacity on decorative images; provide dark-optimized variants for logos and illustrations when possible.
+- Use reduced contrast (e.g., `--color-text-secondary: #a0a0a0`) for secondary text in dark mode — primary text should remain high contrast (≥ 7:1).
+- Avoid pure white (`#fff`) text on pure black (`#000`) backgrounds — use off-white on dark gray for reduced eye strain.
+
+## High Contrast Support
+
+- Provide a `high-contrast` token set with ≥ 7:1 contrast ratios for all text and ≥ 3:1 for non-text UI.
+- Detect user preference with `@media (prefers-contrast: more)` and apply high-contrast tokens.
+- Support `forced-colors` mode: use system color keywords (`Canvas`, `CanvasText`, `LinkText`, `ButtonFace`, `ButtonText`) and test that information is not conveyed by color alone.
+- Ensure focus indicators and borders remain visible under forced-colors by using `Highlight` / `SelectedItem` keywords.
+
+## Testing
+
+- Verify theme toggle switches all tokens correctly — no unstyled or hard-coded colors leak through.
+- Validate contrast ratios per theme using automated tools (axe-core, Lighthouse) against WCAG AA (4.5:1 text, 3:1 non-text).
+- Capture screenshots across light, dark, and high-contrast themes at key viewport sizes for visual regression comparison.
+- Test `prefers-color-scheme` and `prefers-contrast` media query overrides using browser DevTools emulation or Playwright `emulateMedia`.
+- Confirm no flash of wrong theme on hard refresh (disable cache, reload, verify first paint matches stored preference).
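The 4.5:1 and 3:1 ratios checked above come from the WCAG relative-luminance formula, which is small enough to sketch directly:

```typescript
// WCAG 2.x relative luminance and contrast ratio.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(r: number, g: number, b: number): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

const blackOnWhite = contrastRatio([0, 0, 0], [255, 255, 255]); // 21
const aaBodyText = blackOnWhite >= 4.5; // WCAG AA threshold for body text
```

Tools like axe-core compute exactly this ratio; having the formula in a helper lets token pairs be asserted in unit tests, per theme, without spinning up a browser.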
@@ -0,0 +1,51 @@
+---
+description: Theming, dark mode, and color system conventions for the project
+globs: src/**/*.vue, src/**/*.tsx, src/**/*.jsx, src/**/*.css, src/**/*.scss
+alwaysApply: false
+---
+# Theming & Dark Mode
+
+## Color System
+
+- Define all colors as semantic CSS custom properties (`--color-surface`, `--color-text-primary`, `--color-text-secondary`, `--color-border`, `--color-brand`, `--color-error`, `--color-success`, `--color-warning`).
+- Maintain three token sets: **light** (default), **dark**, and **high-contrast**.
+- Never use raw hex, `rgb()`, or `hsl()` values in component code — always reference tokens.
+- Layer tokens: primitive (`--gray-900`) → semantic (`--color-text-primary: var(--gray-900)`) → component (`--btn-text: var(--color-text-primary)`).
+
+## Theme Detection & Persistence
+
+- Detect system preference with `prefers-color-scheme` media query for CSS defaults.
+- Read system preference in JS via `window.matchMedia('(prefers-color-scheme: dark)')` and listen for changes with `addEventListener('change', ...)`.
+- Persist explicit user override in `localStorage` (or equivalent settings store).
+- Resolution order: user override → system preference → light fallback.
+
+## Theme Switching
+
+- Apply theme via `data-theme` attribute on `<html>` (e.g., `<html data-theme="dark">`).
+- Define token overrides scoped to `[data-theme="dark"]` and `[data-theme="high-contrast"]` selectors.
+- Add `color-scheme: light dark` on `:root` so browser-native controls (scrollbars, form elements) adapt automatically.
+- Prevent flash of wrong theme (FOWT): inject a blocking `<script>` in `<head>` that reads the stored preference and sets `data-theme` before first paint.
+- Use `transition: background-color 200ms ease, color 200ms ease` on body for smooth theme changes — but disable transitions on initial load.
+
+## Dark Mode Patterns
+
+- Reduce color saturation 10–20% for dark backgrounds to avoid visual vibration.
+- Express elevation through progressively lighter surface colors (e.g., `--color-surface-raised`, `--color-surface-overlay`) — not box-shadows.
+- Adjust images: apply `filter: brightness(0.9)` or reduce opacity on decorative images; provide dark-optimized variants for logos and illustrations when possible.
+- Use reduced contrast (e.g., `--color-text-secondary: #a0a0a0`) for secondary text in dark mode — primary text should remain high contrast (≥ 7:1).
+- Avoid pure white (`#fff`) text on pure black (`#000`) backgrounds — use off-white on dark gray for reduced eye strain.
+
+## High Contrast Support
+
+- Provide a `high-contrast` token set with ≥ 7:1 contrast ratios for all text and ≥ 3:1 for non-text UI.
+- Detect user preference with `@media (prefers-contrast: more)` and apply high-contrast tokens.
+- Support `forced-colors` mode: use system color keywords (`Canvas`, `CanvasText`, `LinkText`, `ButtonFace`, `ButtonText`) and test that information is not conveyed by color alone.
+- Ensure focus indicators and borders remain visible under forced-colors by using `Highlight` / `SelectedItem` keywords.
+
+## Testing
+
+- Verify theme toggle switches all tokens correctly — no unstyled or hard-coded colors leak through.
+- Validate contrast ratios per theme using automated tools (axe-core, Lighthouse) against WCAG AA (4.5:1 text, 3:1 non-text).
+- Capture screenshots across light, dark, and high-contrast themes at key viewport sizes for visual regression comparison.
+- Test `prefers-color-scheme` and `prefers-contrast` media query overrides using browser DevTools emulation or Playwright `emulateMedia`.
+- Confirm no flash of wrong theme on hard refresh (disable cache, reload, verify first paint matches stored preference).
@@ -0,0 +1,92 @@
+---
+id: hatch3r-tooling-hierarchy
+type: rule
+description: Priority order for tools and knowledge sources
+scope: always
+---
+# Tooling Hierarchy
+
+## A. GitHub CLI-First
+
+**Prefer `gh` CLI over GitHub MCP tools** for GitHub operations. CLI tools are optimized for agent use — lower token cost, faster execution, and deterministic output parsing.
+
+**Prerequisites:** `gh auth login` must be completed, or `GITHUB_TOKEN` environment variable set. For Projects v2: `gh auth refresh -s project`.
+
+**Primary tool for:**
+- Issue CRUD: `gh issue create`, `gh issue edit`, `gh issue view`, `gh issue list`
+- PR CRUD: `gh pr create`, `gh pr view`, `gh pr list`, `gh pr merge`
+- Search: `gh search issues`, `gh search prs`, `gh search code`
+- Labels: `gh label create`, `gh label list`
+- Releases: `gh release create`
+- CI/Actions: `gh run list`, `gh run view`, `gh run watch`
+- Projects v2: `gh project item-add`, `gh project item-edit`, `gh project item-list`, `gh project field-list`, `gh project view`
+
+**Fallback to GitHub MCP only when:**
+- The `gh` CLI lacks the specific capability (e.g., sub-issue management via `sub_issue_write`).
+- GraphQL queries are needed that `gh api graphql` cannot express concisely.
+
+**Never** use GitHub MCP for operations that `gh` CLI handles well (issue CRUD, PR CRUD, search, labels, releases).
+
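The "deterministic output parsing" claim above rests on asking `gh` for JSON instead of scraping its table output. The command in the comment is real `gh` syntax; the parsing helper and sample payload below are illustrative:

```typescript
// Example invocation (run separately): gh issue list --state open --json number,title
// The --json flag makes the output machine-readable and stable.

interface IssueSummary { number: number; title: string; }

function parseIssueList(json: string): IssueSummary[] {
  const raw = JSON.parse(json) as Array<{ number: number; title: string }>;
  return raw.map(({ number, title }) => ({ number, title }));
}

// Payload shaped like `gh issue list --json number,title` output.
const sample = '[{"number": 12, "title": "Fix theme flash"}]';
const issues = parseIssueList(sample);
```

An agent wrapper would spawn the command and feed its stdout to a parser like this, failing loudly on malformed JSON rather than guessing at column positions.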
+## B. Documentation MCP for Library Documentation
+
+Use documentation MCP (e.g., Context7) to retrieve up-to-date, version-specific documentation for external libraries and frameworks. This prevents hallucinated APIs and outdated patterns.
+
+**When to use:**
+- Working with any external dependency.
+- Verifying API signatures, configuration options, or migration paths.
+- Reviewing code that uses third-party libraries.
+- Writing tests with external test frameworks.
+- Debugging errors from external libraries.
+
+**When NOT to use:**
+- Internal project specs — use project docs.
+- Internal codebase patterns — use Grep, SemanticSearch, or exploration tools.
+- General programming concepts not tied to a specific library.
+
+## C. Web Research for External Context
+
+Use web search to retrieve current, real-world information not available in project docs or library documentation.
+
+**When to use:**
+- Latest security advisories, CVEs, or vulnerability disclosures for dependencies.
+- Breaking changes or deprecations in upcoming dependency versions.
+- Current best practices for architecture patterns, deployment strategies, or tooling.
+- Novel problems with no match in docs (e.g., obscure error messages, platform-specific quirks).
+- Comparing alternative approaches or tools with current community consensus.
+
+**When NOT to use:**
+- Questions answerable from project specs or codebase exploration.
+- Standard library API questions (use documentation MCP instead).
+- Internal project decisions (use project ADRs).
+
+## D. Browser Verification for UI Changes
+
+Use browser automation MCP tools to visually verify UI changes after automated tests pass.
+
+**When to use:**
+- Verifying UI component changes render correctly.
+- Reproducing and confirming fixes for visually observable bugs.
+- Accessibility auditing (keyboard nav, contrast, focus indicators).
+- Frontend performance profiling (CPU, frame rate, memory).
+- Capturing screenshot evidence for PRs.
+
+**When NOT to use:**
+- Pure backend or API changes with no visual impact.
+- Configuration or infrastructure changes.
+- Code refactors that do not alter rendered output.
+
+**Available tools:**
+- IDE-native browser MCP (e.g., `cursor-ide-browser` in Cursor).
+- Playwright MCP (`@anthropic/mcp-playwright`) for cross-editor browser automation.
+
+## E. Knowledge Augmentation Priority
+
+When seeking information, follow this priority order:
+
+1. **Project specs and ADRs** — authoritative for project-specific behavior, constraints, and decisions.
+2. **Codebase exploration** (code search tools, semantic code search) — ground truth for current implementation.
+3. **Documentation MCP** — authoritative for external library/framework APIs and patterns.
+4. **Web research** — current events, best practices, security advisories, novel problems.
+5. **Browser verification** — visual confirmation of UI changes after automated tests pass.
+
+Combine sources when valuable: read the spec first, then verify external API usage with docs MCP, then check for recent advisories via web research.
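The five-step order above is a first-match fallback chain, which can be encoded as a lookup. The category names and the toy `chooseSource` helper are illustrative only:

```typescript
type Source =
  | "project-specs" | "codebase" | "docs-mcp" | "web-research" | "browser";

// Highest priority first, mirroring steps 1-5 above.
const priority: Source[] = [
  "project-specs", "codebase", "docs-mcp", "web-research", "browser",
];

// Pick the highest-priority source able to answer the question.
function chooseSource(canAnswer: Set<Source>): Source | undefined {
  return priority.find(s => canAnswer.has(s));
}

// An internal-implementation question: codebase beats web research.
const internalQuestion = chooseSource(new Set<Source>(["codebase", "web-research"]));
// A library-API question: docs MCP beats web research.
const libraryQuestion = chooseSource(new Set<Source>(["docs-mcp", "web-research"]));
```

The point of the encoding is the ordering, not the mechanism: cheaper, more authoritative sources are always consulted before broader, noisier ones.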
@@ -0,0 +1,79 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: Priority order for tools and knowledge sources
|
|
3
|
+
alwaysApply: true
|
|
4
|
+
---
|
|
5
|
+
# Tooling Hierarchy
|
|
6
|
+
|
|
7
|
+
## A. GitHub MCP-First (when available)
|
|
8
|
+
|
|
9
|
+
**Prefer GitHub MCP tools over `gh` CLI** when the MCP server provides typed tools with structured input/output. Use them as the primary interface for GitHub operations.
|
|
10
|
+
|
|
11
|
+
**Fallback to `gh` CLI only when:**
|
|
12
|
+
- The MCP tool catalog lacks the specific capability.
|
|
13
|
+
- An MCP call fails repeatedly and the CLI provides a viable alternative.

**Never** use `gh` CLI for operations that have a direct MCP equivalent (issue CRUD, PR CRUD, search, labels).

## B. Documentation MCP for Library Documentation

Use documentation MCP (e.g., Context7) to retrieve up-to-date, version-specific documentation for external libraries and frameworks. This prevents hallucinated APIs and outdated patterns.

**When to use:**
- Working with any external dependency.
- Verifying API signatures, configuration options, or migration paths.
- Reviewing code that uses third-party libraries.
- Writing tests with external test frameworks.
- Debugging errors from external libraries.

**When NOT to use:**
- Internal project specs — use project docs.
- Internal codebase patterns — use Grep, SemanticSearch, or exploration tools.
- General programming concepts not tied to a specific library.

## C. Web Research for External Context

Use web search to retrieve current, real-world information not available in project docs or library documentation.

**When to use:**
- Latest security advisories, CVEs, or vulnerability disclosures for dependencies.
- Breaking changes or deprecations in upcoming dependency versions.
- Current best practices for architecture patterns, deployment strategies, or tooling.
- Novel problems with no match in docs (e.g., obscure error messages, platform-specific quirks).
- Comparing alternative approaches or tools with current community consensus.

**When NOT to use:**
- Questions answerable from project specs or codebase exploration.
- Standard library API questions (use documentation MCP instead).
- Internal project decisions (use project ADRs).

## D. Browser Verification for UI Changes

Use browser automation MCP tools to visually verify UI changes after automated tests pass.

**When to use:**
- Verifying UI component changes render correctly.
- Reproducing and confirming fixes for visually observable bugs.
- Accessibility auditing (keyboard nav, contrast, focus indicators).
- Frontend performance profiling (CPU, frame rate, memory).
- Capturing screenshot evidence for PRs.

**When NOT to use:**
- Pure backend or API changes with no visual impact.
- Configuration or infrastructure changes.
- Code refactors that do not alter rendered output.

**Available tools:**
- IDE-native browser MCP (e.g., `cursor-ide-browser` in Cursor).
- Playwright MCP (`@playwright/mcp`) for cross-editor browser automation.

## E. Knowledge Augmentation Priority

When seeking information, follow this priority order:

1. **Project specs and ADRs** — authoritative for project-specific behavior, constraints, and decisions.
2. **Codebase exploration** (Grep, SemanticSearch) — ground truth for current implementation.
3. **Documentation MCP** — authoritative for external library/framework APIs and patterns.
4. **Web research** — current events, best practices, security advisories, novel problems.
5. **Browser verification** — visual confirmation of UI changes after automated tests pass.

Combine sources when valuable: read the spec first, then verify external API usage with docs MCP, then check for recent advisories via web research.

@@ -0,0 +1,131 @@

---
id: hatch3r-a11y-audit
description: Comprehensive WCAG AA accessibility audit with findings and fixes. Use when auditing accessibility, verifying WCAG compliance, or improving a11y across the application.
---
# Accessibility Audit Workflow

## Quick Start

```
Task Progress:
- [ ] Step 1: Read accessibility requirements from rules and specs
- [ ] Step 2: Automated scan — run axe-core or similar on all pages/components
- [ ] Step 3: Manual audit — keyboard, contrast, ARIA, reduced motion, screen reader
- [ ] Step 4: Catalog findings by severity (critical/major/minor)
- [ ] Step 5: Fix critical and major findings
- [ ] Step 6: Verify fixes with re-scan and manual check
```

## Step 1: Read Accessibility Requirements

**From project component rules (Accessibility section):**

- All animations: wrap in `prefers-reduced-motion` media query AND check user's reduced motion setting.
- Color contrast: ≥ 4.5:1 for text (WCAG AA).
- Interactive elements: keyboard focusable with visible focus indicator.
- Dynamic content changes: use `aria-live` regions.
- Support high contrast mode.

**From project quality documentation:**

| Requirement | Standard | Details |
| ------------------- | -------- | ---------------------------------------------------------------- |
| Reduced motion | WCAG 2.1 | All animations respect `prefers-reduced-motion` and user setting |
| Color contrast | WCAG AA | Text contrast ratio ≥ 4.5:1 |
| Keyboard navigation | WCAG 2.1 | All interactive elements focusable and operable via keyboard |
| Screen reader | WCAG 2.1 | Dynamic state and reactions announced via ARIA live regions |
| High contrast mode | Custom | User-configurable high contrast theme (if applicable) |

- For external library docs and current best practices, follow the project's tooling hierarchy.

## Step 2: Automated Scan

- Install and run an axe-core integration such as `vue-axe`, `vitest-axe`, or Playwright's `@axe-core/playwright` on all pages and key components.
- Run against: main routes, key components (if testable in isolation).
- Capture all violations. Map to WCAG criteria (e.g., 1.1.1, 1.4.3, 2.1.1, 4.1.2).
- Document: rule ID, description, impact, elements affected, WCAG level.
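
The documentation step in the last bullet can be sketched as a pure mapping over axe-core's violation objects (only a subset of the real result shape is modeled here):

```typescript
// Minimal subset of an axe-core violation result object.
interface AxeViolation {
  id: string;
  impact: "minor" | "moderate" | "serious" | "critical";
  description: string;
  tags: string[]; // e.g. ["cat.color", "wcag2aa", "wcag143"]
  nodes: { target: string[] }[];
}

// One row of the findings documentation required by this step.
interface Finding {
  ruleId: string;
  impact: string;
  description: string;
  wcagTags: string[];
  elementsAffected: number;
}

// Map raw violations to findings rows, keeping only the WCAG-related tags.
function toFindings(violations: AxeViolation[]): Finding[] {
  return violations.map((v) => ({
    ruleId: v.id,
    impact: v.impact,
    description: v.description,
    wcagTags: v.tags.filter((t) => t.startsWith("wcag")),
    elementsAffected: v.nodes.length,
  }));
}
```

This keeps the scan output decoupled from any one runner: every integration listed above returns violations in this shape.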

## Step 3: Manual Audit

**Keyboard navigation:**

- Tab through all interactive elements. Ensure logical order, no focus traps.
- All buttons, links, inputs, custom controls focusable.
- Visible focus indicator (outline or ring) — no `outline: none` without replacement.
- Escape closes modals/dropdowns. Enter/Space activates buttons.

**Color contrast:**

- Check text vs background: ≥ 4.5:1 for normal text, ≥ 3:1 for large text.
- Use DevTools or contrast checker. Test with design tokens — ensure no ad-hoc colors fail.
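
For spot checks outside DevTools, the WCAG ratio is straightforward to compute yourself; this is a minimal sketch of the standard formula (relative luminance of linearized sRGB channels, then `(L_lighter + 0.05) / (L_darker + 0.05)`):

```typescript
// WCAG relative luminance of an sRGB color given as 0-255 channels.
function luminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio per WCAG: (L_lighter + 0.05) / (L_darker + 0.05), range 1..21.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// AA threshold for normal-size text.
const passesAA = (ratio: number) => ratio >= 4.5;
```

Black on white yields the maximum ratio of 21:1; mid-gray (`#808080`) on white lands around 3.9:1 and fails AA for normal text.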

**ARIA attributes:**

- `aria-label` or `aria-labelledby` for icon-only buttons, custom controls.
- `aria-live="polite"` or `aria-live="assertive"` for dynamic state changes, notifications.
- `role` correct for custom widgets (button, link, tab, etc.).
- `aria-expanded`, `aria-selected`, `aria-hidden` where appropriate.

**Reduced motion:**

- Test with `prefers-reduced-motion: reduce` (DevTools → Rendering → Emulate CSS media).
- Verify animations are disabled or simplified. Check user's reduced motion setting.
- No motion-dependent information (per WCAG 2.1).
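
The two required signals (OS media query AND app-level user setting) can be combined in a single guard; a sketch, where the settings field name is hypothetical:

```typescript
// Run non-essential animations only when BOTH signals allow it:
// the OS-level media query and the app's own reduced-motion setting.
function shouldAnimate(opts: {
  mediaPrefersReduced: boolean; // matchMedia("(prefers-reduced-motion: reduce)").matches
  userReducedMotion: boolean;   // hypothetical flag from the app's settings store
}): boolean {
  return !opts.mediaPrefersReduced && !opts.userReducedMotion;
}
```

Routing every JS-driven animation through one guard like this makes the requirement testable without a browser.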

**Screen reader:**

- Test with NVDA, VoiceOver, or similar. Verify announcements for dynamic content.
- Dynamic state, errors, and success messages announced.
- Form labels associated, error messages linked via `aria-describedby` or `aria-errormessage`.

**High contrast mode:**

- Verify user-configurable high contrast theme works (if applicable). No loss of information.

## Step 4: Catalog Findings

| Severity | Definition | Examples |
| -------- | --------------------------------------- | ------------------------------------------------------------- |
| Critical | Blocks core functionality, fails WCAG A | Missing form labels, no keyboard access to primary actions |
| Major | Significant barrier, fails WCAG AA | Contrast < 4.5:1, missing focus indicators, no reduced motion |
| Minor | Improves experience, best practice | Redundant labels, suboptimal heading order |

- Produce a findings table: ID, severity, WCAG criterion, description, location, fix suggestion.
- Prioritize: critical first, then major. Minor can be batched.
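
The prioritization rule can be encoded directly against the findings-table columns above; one possible sketch:

```typescript
type Severity = "critical" | "major" | "minor";

// One row of the findings table: ID, severity, WCAG criterion,
// description, location (fix suggestion omitted for brevity).
interface CatalogEntry {
  id: string;
  severity: Severity;
  wcag: string; // e.g. "1.4.3"
  description: string;
  location: string;
}

const rank: Record<Severity, number> = { critical: 0, major: 1, minor: 2 };

// Order findings for the fix queue: critical first, then major, then minor.
function prioritize(entries: CatalogEntry[]): CatalogEntry[] {
  return [...entries].sort((a, b) => rank[a.severity] - rank[b.severity]);
}
```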

## Step 5: Fix Critical and Major Findings

- Implement fixes following project component and quality requirements.
- Use semantic HTML where possible (`<button>`, `<a>`, `<nav>`, `<main>`).
- Add `aria-*` attributes for custom components.
- Ensure `prefers-reduced-motion` respected in CSS and JS.
- Add or fix focus styles. Use design tokens for focus ring.
- Verify reduced-motion behavior in tests.

## Step 6: Verify Fixes

- Re-run automated scan. No critical or major violations.
- Manual keyboard and screen reader check on fixed areas.
- Run full test suite to ensure no regressions.
- Document remaining minor findings for future backlog.

## Required Agent Delegation

You MUST spawn these agents via the Task tool (`subagent_type: "generalPurpose"`) at the appropriate points:

- **`hatch3r-a11y-auditor`** — MUST spawn to perform the full WCAG AA compliance audit autonomously. Provide the list of surfaces/components to audit and the current violation list.

## Related Rules

- **Rule**: `hatch3r-browser-verification` — follow this rule for live browser-based accessibility testing

## Definition of Done

- [ ] No critical a11y violations
- [ ] WCAG AA compliance on all audited surfaces
- [ ] Reduced motion respected (`prefers-reduced-motion` + user setting)
- [ ] Keyboard navigation complete with visible focus
- [ ] Color contrast ≥ 4.5:1 for text
- [ ] ARIA live regions for dynamic content
- [ ] Automated scan clean for critical/major
- [ ] Manual verification completed

@@ -0,0 +1,75 @@

---
id: hatch3r-agent-customize
description: Create and manage per-agent customization files for model overrides, description changes, and project-specific markdown instructions. Use when tailoring agent behavior to project-specific needs.
---
# Agent Customization Management

## Quick Start

```
Task Progress:
- [ ] Step 1: Identify which agent to customize
- [ ] Step 2: Determine customization needs
- [ ] Step 3: Create the customization files
- [ ] Step 4: Sync to propagate changes
- [ ] Step 5: Verify the customized output
```

## Step 1: Identify Agent

Determine which hatch3r agent needs customization:
- Review the agents in `.agents/agents/` and their default behaviors
- Identify gaps between default behavior and project needs
- Check for existing customization files in `.hatch3r/agents/`

## Step 2: Determine Customization Needs

Decide which customization approach to use:

**YAML (`.customize.yaml`)** — for structured overrides:
- **Model**: Override the agent's preferred model (e.g., `model: opus`)
- **Description**: Change how the agent is described in adapter frontmatter
- **Enabled**: Set to `false` to disable the agent entirely

**Markdown (`.customize.md`)** — for free-form instructions:
- Domain-specific review checklists
- Architecture context and constraints
- Project-specific workflow steps
- Compliance and security requirements

## Step 3: Create Customization Files

Create files in `.hatch3r/agents/`:

**For YAML overrides:**
```yaml
# .hatch3r/agents/{agent-id}.customize.yaml
model: opus
description: "Security-focused reviewer for healthcare platform"
```

**For markdown instructions:**
Create `.hatch3r/agents/{agent-id}.customize.md` with project-specific instructions. This content is injected into the managed block under `## Project Customizations`.

Set only the fields/content you need — partial customization is valid.

## Step 4: Sync

Run `npx hatch3r sync` to propagate customizations to all adapter outputs. The sync:
- Reads `.customize.yaml` for structured overrides (model, description, enabled)
- Reads `.customize.md` and appends it inside the managed block
- Generates updated output for every configured adapter (Cursor, Claude, etc.)
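
Conceptually, the structured-override pass behaves like a shallow merge in which customize values win when present; a simplified sketch (field names mirror the YAML keys above, and the base values are made up):

```typescript
// Structured fields a .customize.yaml may override.
interface AgentDef {
  model: string;
  description: string;
  enabled: boolean;
}

// Partial<AgentDef> models a customize file that sets only some fields;
// anything it omits keeps the agent's default.
function applyOverrides(base: AgentDef, overrides: Partial<AgentDef>): AgentDef {
  return { ...base, ...overrides };
}
```

This is why partial customization is valid: omitted keys simply never shadow the defaults.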

## Step 5: Verify

Confirm customizations appear in adapter output files:
- Check model appears in frontmatter (e.g., `.cursor/agents/hatch3r-reviewer.md`)
- Check markdown instructions appear inside the managed block
- Verify disabled agents are absent from adapter outputs

## Definition of Done

- [ ] Customization files created in `.hatch3r/agents/`
- [ ] `npx hatch3r sync` completes without errors
- [ ] Adapter output files reflect the customizations
- [ ] Customization files committed to the repository

@@ -0,0 +1,66 @@

---
id: hatch3r-api-spec
type: skill
description: Generate and validate OpenAPI specifications from codebase. Covers endpoint design, schema validation, and documentation generation.
---

# API Specification Workflow

## Quick Start

```
Task Progress:
- [ ] Step 1: Inventory existing endpoints
- [ ] Step 2: Generate OpenAPI spec
- [ ] Step 3: Validate schemas
- [ ] Step 4: Generate documentation
- [ ] Step 5: Verify spec accuracy
```

## Step 1: Inventory Existing Endpoints

- Scan route definitions across the codebase (controllers, handlers, route files).
- For each endpoint, extract: HTTP method, path, request parameters, request body shape, response body shape, status codes, authentication requirements.
- Identify inconsistencies in naming, parameter styles, or response formats.
- Check for undocumented endpoints that exist in code but lack API docs.

## Step 2: Generate OpenAPI Spec

- Create or update `openapi.yaml` (or `openapi.json`) at the project root or `docs/api/` directory.
- Use OpenAPI 3.1 format.
- Include `info` block with title, version, description, and contact.
- Group endpoints by tag (resource or domain area).
- Define reusable `components/schemas` for shared request/response types.
- Use `$ref` references to avoid schema duplication.
- Add `security` schemes matching the project's authentication (Bearer, API key, OAuth2).
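
A minimal skeleton showing how the pieces above fit together; the path, tag, schema, and security-scheme names are placeholders, not taken from any real project:

```yaml
openapi: 3.1.0
info:
  title: Example API
  version: 1.0.0
  description: Generated from route definitions.
tags:
  - name: users
paths:
  /users/{id}:
    get:
      tags: [users]
      security:
        - bearerAuth: []
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: The requested user.
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/User"
components:
  schemas:
    User:
      type: object
      required: [id, email]
      properties:
        id: { type: string }
        email: { type: string, format: email }
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
```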

## Step 3: Validate Schemas

- Ensure all request bodies have JSON Schema validation constraints (`required`, `minLength`, `maxLength`, `pattern`, `enum`).
- Verify response schemas match actual serialized output (check serializers, DTOs, or response builders).
- Validate enum values match database constraints or application constants.
- Check for nullable fields — since the spec targets OpenAPI 3.1, mark them with a type union (`type: ["string", "null"]`); `nullable: true` is the 3.0-only form.
- Run a spec linter (e.g., `spectral`, `redocly lint`) if available in the project.
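
The enum check in this step is mechanical enough to script; a sketch comparing a schema's `enum` against application constants (order-insensitive, exact membership):

```typescript
// True when the schema enum and the application's constants contain
// exactly the same members, regardless of order.
function enumMatches(schemaEnum: string[], appConstants: string[]): boolean {
  const a = [...schemaEnum].sort();
  const b = [...appConstants].sort();
  return a.length === b.length && a.every((v, i) => v === b[i]);
}
```

Running this over each enum-bearing schema catches both stale spec values and constants added to the app but never documented.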

## Step 4: Generate Documentation

- Produce human-readable API docs from the spec (Swagger UI, Redoc, or Markdown).
- Include example request/response bodies for each endpoint.
- Document error response shapes with status code meanings.
- Add authentication setup instructions.
- Include rate limiting and pagination details where applicable.

## Step 5: Verify Spec Accuracy

- Cross-reference the generated spec against integration tests to confirm endpoint behavior.
- Verify content types (`application/json`, `multipart/form-data`, etc.) match actual handlers.
- Check that path parameters, query parameters, and headers are correctly documented.
- Validate against any existing API consumers (SDKs, frontend clients) for breaking changes.

## Definition of Done

- [ ] OpenAPI spec covers all endpoints in the codebase
- [ ] All schemas have validation constraints
- [ ] Spec passes linter validation
- [ ] Example requests/responses included
- [ ] No breaking changes to existing API consumers