hatch3r 1.0.0
- package/LICENSE +21 -0
- package/README.md +437 -0
- package/agents/hatch3r-a11y-auditor.md +126 -0
- package/agents/hatch3r-architect.md +160 -0
- package/agents/hatch3r-ci-watcher.md +123 -0
- package/agents/hatch3r-context-rules.md +97 -0
- package/agents/hatch3r-dependency-auditor.md +164 -0
- package/agents/hatch3r-devops.md +138 -0
- package/agents/hatch3r-docs-writer.md +97 -0
- package/agents/hatch3r-implementer.md +162 -0
- package/agents/hatch3r-learnings-loader.md +108 -0
- package/agents/hatch3r-lint-fixer.md +104 -0
- package/agents/hatch3r-perf-profiler.md +123 -0
- package/agents/hatch3r-researcher.md +642 -0
- package/agents/hatch3r-reviewer.md +81 -0
- package/agents/hatch3r-security-auditor.md +119 -0
- package/agents/hatch3r-test-writer.md +134 -0
- package/commands/hatch3r-agent-customize.md +146 -0
- package/commands/hatch3r-api-spec.md +49 -0
- package/commands/hatch3r-benchmark.md +50 -0
- package/commands/hatch3r-board-fill.md +504 -0
- package/commands/hatch3r-board-init.md +315 -0
- package/commands/hatch3r-board-pickup.md +672 -0
- package/commands/hatch3r-board-refresh.md +198 -0
- package/commands/hatch3r-board-shared.md +369 -0
- package/commands/hatch3r-bug-plan.md +410 -0
- package/commands/hatch3r-codebase-map.md +1182 -0
- package/commands/hatch3r-command-customize.md +94 -0
- package/commands/hatch3r-context-health.md +112 -0
- package/commands/hatch3r-cost-tracking.md +139 -0
- package/commands/hatch3r-dep-audit.md +171 -0
- package/commands/hatch3r-feature-plan.md +379 -0
- package/commands/hatch3r-healthcheck.md +307 -0
- package/commands/hatch3r-hooks.md +282 -0
- package/commands/hatch3r-learn.md +217 -0
- package/commands/hatch3r-migration-plan.md +51 -0
- package/commands/hatch3r-onboard.md +56 -0
- package/commands/hatch3r-project-spec.md +1153 -0
- package/commands/hatch3r-recipe.md +179 -0
- package/commands/hatch3r-refactor-plan.md +426 -0
- package/commands/hatch3r-release.md +328 -0
- package/commands/hatch3r-roadmap.md +556 -0
- package/commands/hatch3r-rule-customize.md +114 -0
- package/commands/hatch3r-security-audit.md +370 -0
- package/commands/hatch3r-skill-customize.md +93 -0
- package/commands/hatch3r-workflow.md +377 -0
- package/dist/cli/hooks-ZOTFDEA3.js +59 -0
- package/dist/cli/index.d.ts +2 -0
- package/dist/cli/index.js +3584 -0
- package/github-agents/hatch3r-docs-agent.md +46 -0
- package/github-agents/hatch3r-lint-agent.md +41 -0
- package/github-agents/hatch3r-security-agent.md +54 -0
- package/github-agents/hatch3r-test-agent.md +66 -0
- package/hooks/hatch3r-ci-failure.md +10 -0
- package/hooks/hatch3r-file-save.md +11 -0
- package/hooks/hatch3r-post-merge.md +10 -0
- package/hooks/hatch3r-pre-commit.md +11 -0
- package/hooks/hatch3r-pre-push.md +10 -0
- package/hooks/hatch3r-session-start.md +10 -0
- package/mcp/mcp.json +62 -0
- package/package.json +84 -0
- package/prompts/hatch3r-bug-triage.md +155 -0
- package/prompts/hatch3r-code-review.md +131 -0
- package/prompts/hatch3r-pr-description.md +173 -0
- package/rules/hatch3r-accessibility-standards.md +77 -0
- package/rules/hatch3r-accessibility-standards.mdc +75 -0
- package/rules/hatch3r-agent-orchestration.md +160 -0
- package/rules/hatch3r-api-design.md +176 -0
- package/rules/hatch3r-api-design.mdc +176 -0
- package/rules/hatch3r-browser-verification.md +73 -0
- package/rules/hatch3r-browser-verification.mdc +73 -0
- package/rules/hatch3r-ci-cd.md +70 -0
- package/rules/hatch3r-ci-cd.mdc +68 -0
- package/rules/hatch3r-code-standards.md +102 -0
- package/rules/hatch3r-code-standards.mdc +100 -0
- package/rules/hatch3r-component-conventions.md +102 -0
- package/rules/hatch3r-component-conventions.mdc +102 -0
- package/rules/hatch3r-data-classification.md +85 -0
- package/rules/hatch3r-data-classification.mdc +83 -0
- package/rules/hatch3r-dependency-management.md +17 -0
- package/rules/hatch3r-dependency-management.mdc +15 -0
- package/rules/hatch3r-error-handling.md +17 -0
- package/rules/hatch3r-error-handling.mdc +15 -0
- package/rules/hatch3r-feature-flags.md +112 -0
- package/rules/hatch3r-feature-flags.mdc +112 -0
- package/rules/hatch3r-git-conventions.md +47 -0
- package/rules/hatch3r-git-conventions.mdc +45 -0
- package/rules/hatch3r-i18n.md +90 -0
- package/rules/hatch3r-i18n.mdc +90 -0
- package/rules/hatch3r-learning-consult.md +29 -0
- package/rules/hatch3r-learning-consult.mdc +27 -0
- package/rules/hatch3r-migrations.md +17 -0
- package/rules/hatch3r-migrations.mdc +15 -0
- package/rules/hatch3r-observability.md +165 -0
- package/rules/hatch3r-observability.mdc +165 -0
- package/rules/hatch3r-performance-budgets.md +109 -0
- package/rules/hatch3r-performance-budgets.mdc +109 -0
- package/rules/hatch3r-secrets-management.md +76 -0
- package/rules/hatch3r-secrets-management.mdc +74 -0
- package/rules/hatch3r-security-patterns.md +211 -0
- package/rules/hatch3r-security-patterns.mdc +211 -0
- package/rules/hatch3r-testing.md +89 -0
- package/rules/hatch3r-testing.mdc +87 -0
- package/rules/hatch3r-theming.md +51 -0
- package/rules/hatch3r-theming.mdc +51 -0
- package/rules/hatch3r-tooling-hierarchy.md +92 -0
- package/rules/hatch3r-tooling-hierarchy.mdc +79 -0
- package/skills/hatch3r-a11y-audit/SKILL.md +131 -0
- package/skills/hatch3r-agent-customize/SKILL.md +75 -0
- package/skills/hatch3r-api-spec/SKILL.md +66 -0
- package/skills/hatch3r-architecture-review/SKILL.md +96 -0
- package/skills/hatch3r-bug-fix/SKILL.md +129 -0
- package/skills/hatch3r-ci-pipeline/SKILL.md +76 -0
- package/skills/hatch3r-command-customize/SKILL.md +67 -0
- package/skills/hatch3r-context-health/SKILL.md +76 -0
- package/skills/hatch3r-cost-tracking/SKILL.md +65 -0
- package/skills/hatch3r-dep-audit/SKILL.md +82 -0
- package/skills/hatch3r-feature/SKILL.md +129 -0
- package/skills/hatch3r-gh-agentic-workflows/SKILL.md +150 -0
- package/skills/hatch3r-incident-response/SKILL.md +86 -0
- package/skills/hatch3r-issue-workflow/SKILL.md +139 -0
- package/skills/hatch3r-logical-refactor/SKILL.md +73 -0
- package/skills/hatch3r-migration/SKILL.md +76 -0
- package/skills/hatch3r-perf-audit/SKILL.md +114 -0
- package/skills/hatch3r-pr-creation/SKILL.md +85 -0
- package/skills/hatch3r-qa-validation/SKILL.md +86 -0
- package/skills/hatch3r-recipe/SKILL.md +67 -0
- package/skills/hatch3r-refactor/SKILL.md +86 -0
- package/skills/hatch3r-release/SKILL.md +93 -0
- package/skills/hatch3r-rule-customize/SKILL.md +70 -0
- package/skills/hatch3r-skill-customize/SKILL.md +67 -0
- package/skills/hatch3r-visual-refactor/SKILL.md +89 -0
@@ -0,0 +1,96 @@
---
id: hatch3r-architecture-review
description: Evaluate architectural decisions and produce ADRs following the project template. Use when making architectural decisions, evaluating trade-offs, or creating ADRs.
---
# Architecture Review Workflow

## Quick Start

```
Task Progress:
- [ ] Step 1: Read existing ADRs and the template
- [ ] Step 2: Define the decision context — problem, constraints, options
- [ ] Step 3: Evaluate options — pros/cons, prototype if needed, check ADR constraints
- [ ] Step 4: For external library docs and current best practices, follow the project's tooling hierarchy
- [ ] Step 5: Write ADR following the template
- [ ] Step 6: Update affected specs or docs to reference the new ADR
```

## Step 1: Read Existing ADRs and Template

- Read all ADRs in project docs to understand current architecture and constraints.
- Read the ADR template (e.g., `docs/adr/0001_template.md` or equivalent).
- Template sections: Status, Date, Decision Makers, Context, Decision, Alternatives Considered, Consequences, Compliance, Related.
- Check if any existing ADR is superseded or conflicts with the proposed decision.
- Read foundational ADRs for invariants.

## Step 2: Define the Decision Context

- **Problem:** What issue or opportunity motivates this decision?
- **Constraints:** Technical (stack, performance budgets), organizational (timeline, team), regulatory (privacy, security).
- **Options:** List 2–4 viable alternatives. Include "do nothing" or "status quo" if relevant.
- **Stakeholders:** Who is affected? Who decides?

Reference:

- Performance budgets: project rules
- Privacy/security invariants: project rules and specs
- Data model: project documentation
- Event model: project documentation

## Step 3: Evaluate Options

- For each option, list **pros** and **cons**.
- Consider: complexity, maintainability, performance, security, cost, migration effort.
- Prototype or spike if uncertainty is high — document findings.
- Check against existing ADR constraints: does this align with core architecture?
- Use a comparison table (as in the template):

| Option | Pros | Cons |
| ------ | ---- | ---- |
| A      | ...  | ...  |
| B      | ...  | ...  |

## Step 4: External Research

- For external library docs and current best practices, follow the project's tooling hierarchy.
- **GitHub MCP:** Search for related issues, prior discussions, or similar decisions in the repo.
- **Context7 MCP:** Look up current API patterns for relevant libraries.
- **Web search:** For novel problems, security advisories, or best practices.

## Step 5: Write ADR

Follow the project ADR template:

- **Title:** `ADR-XXXX: Short descriptive title`
- **Status:** `PROPOSED` (draft) or `ACCEPTED` (approved)
- **Date:** YYYY-MM-DD
- **Decision Makers:** Names/roles
- **Context:** Problem, constraints, motivation
- **Decision:** What we are doing
- **Alternatives Considered:** Table with pros/cons
- **Consequences:** Positive, negative, risks
- **Compliance checklist:**
  - [ ] Aligns with privacy/security specs
  - [ ] Aligns with performance budgets
  - [ ] Does not introduce new dependencies without justification
- **Related:** Specs, issues, previous ADRs

Save as `docs/adr/XXXX_short-title.md` (or project convention). Use next available number.
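The "next available number" step can be scripted. A minimal sketch, assuming the four-digit `NNNN_short-title.md` naming used above; adjust the pattern to your project's convention:

```bash
# Compute the next free ADR number from existing filenames, e.g.
# docs/adr/0001_template.md -> 0001. awk reads "0005" as decimal 5,
# avoiding octal surprises with shell arithmetic on zero-padded numbers.
next_adr_number() {
  dir="$1"
  last=$(ls "$dir" 2>/dev/null | sed -n 's/^\([0-9][0-9][0-9][0-9]\)_.*/\1/p' | sort -n | tail -n 1)
  awk -v n="${last:-0}" 'BEGIN { printf "%04d\n", n + 1 }'
}
```

With `0001_template.md` and `0005_caching.md` present, `next_adr_number docs/adr` prints `0006`; an empty or missing directory yields `0001`.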

## Step 6: Update Affected Docs

- Add references to the new ADR in relevant specs (e.g., data model, event model, quality engineering).
- Update ADR index if the project maintains one.
- Link from related GitHub issues or PRs.
- If superseding an ADR, update the old ADR's Status to `SUPERSEDED by ADR-XXXX`.

## Definition of Done

- [ ] ADR written following project template
- [ ] Options evaluated with pros/cons table
- [ ] Tooling-hierarchy followed for external research where relevant
- [ ] Compliance checklist completed
- [ ] Affected specs or docs updated to reference the ADR
- [ ] No privacy/security invariant violations
- [ ] Performance budgets considered
@@ -0,0 +1,129 @@
---
id: hatch3r-bug-fix
description: Step-by-step bug fix workflow. Diagnose root cause, implement minimal fix, write regression test. Use when fixing bugs, working on bug report issues, or when the user mentions a bug.
---
> **Note:** Commands below use `npm` as an example. Substitute with your project's package manager (`yarn`, `pnpm`, `bun`) or build tool as appropriate.

# Bug Fix Workflow

## Quick Start

```
Task Progress:
- [ ] Step 1: Read the issue and relevant specs
- [ ] Step 2: Produce a diagnosis plan
- [ ] Step 2b: Browser reproduction (if UI bug)
- [ ] Step 2c: Test-first approach (TDD alternative — optional)
- [ ] Step 3: Implement minimal fix
- [ ] Step 4: Write regression test
- [ ] Step 5: Verify all tests pass
- [ ] Step 5b: Browser verification (if UI bug)
- [ ] Step 6: Open PR
```

## Step 1: Read Inputs

- Parse the issue body: problem description, reproduction steps, expected/actual behavior, severity, affected area.
- Read relevant project documentation based on affected area (see spec mapping in project context).
- Review existing tests in the affected area.
- For external library docs and current best practices, follow the project's tooling hierarchy.
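Parsing those fields can be scripted when the issue template renders them as `Field: value` lines; that layout is an assumption, so match it to your actual template:

```bash
# Extract one "Field: value" line from an issue body read on stdin.
# Field names passed in are hypothetical; use your template's labels.
issue_field() {
  sed -n "s/^$1:[[:space:]]*//p"
}
```

For example, `gh issue view 123 --json body --jq .body | issue_field Severity` would print the severity, assuming such a line exists in the body.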

## Step 2: Diagnosis Plan

Before fixing, output:

- **Root cause hypothesis:** what is wrong and why
- **Files to investigate:** list of files
- **Reproduction strategy:** how to confirm the bug via tests
- **Risks:** what could go wrong with the fix

## Step 2b: Browser Reproduction (if UI Bug)

Skip this step if the bug has no visual or interactive symptoms.

- Ensure the dev server is running. If not, start it in the background.
- Navigate to the page where the bug manifests.
- Follow the reproduction steps from the issue to confirm the bug is observable.
- Take a screenshot of the broken state as baseline evidence.
- Note any browser console errors or warnings associated with the bug.

## Step 2c: Test-First Approach (TDD Alternative)

When the root cause is clear from diagnosis, write the regression test BEFORE implementing the fix:

1. **Write a failing test** that reproduces the exact bug scenario from the issue's reproduction steps.
2. **Run the test** — confirm it fails with the expected symptom (not a setup error).
3. **Implement the minimal fix** (Step 3) to make the test pass.
4. **Verify** the test now passes AND no other tests broke.

This approach guarantees the fix addresses the actual bug and prevents regression. Prefer TDD when:
- The bug has clear reproduction steps
- The affected code has existing test infrastructure
- The root cause is well-understood from Step 2

Skip TDD and use the standard flow (Steps 3→4) when:
- The bug requires exploratory debugging to locate
- Test infrastructure needs setup first
- The fix involves configuration or environment changes
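The red-green ordering above can be sketched as a script skeleton. `run_tests` here is a stub standing in for your real single-test invocation (e.g. `npm test -- <file>`), and the marker file stands in for the fix landing:

```bash
# Demonstrates the ordering check: the regression test must fail before the
# fix and pass after it. All paths and the stub itself are illustrative.
demo=/tmp/bugfix_demo; rm -rf "$demo"; mkdir -p "$demo"

run_tests() { [ -f "$demo/fix.applied" ]; }    # stub test command

if run_tests; then
  echo "unexpected: regression test passes before the fix"
else
  echo "red: test reproduces the bug"
fi

touch "$demo/fix.applied"                      # stands in for Step 3's minimal fix

run_tests && echo "green: fix verified"
```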

## Step 3: Minimal Fix

- Fix the root cause with minimal changes.
- Do not refactor unrelated code.
- Do not introduce new dependencies unless absolutely necessary.
- Remove dead code created by the fix.

## Step 4: Regression Test

- Write a test that **fails before** the fix and **passes after**.
- Add edge case tests if the bug reveals coverage gaps.
- Ensure all existing tests still pass.

## Step 5: Verify

```bash
npm run lint && npm run typecheck && npm run test
```

## Step 5b: Browser Verification (if UI Bug)

Skip this step if the bug had no visual or interactive symptoms.

- Navigate to the same page where the bug was reproduced in Step 2b.
- Follow the original reproduction steps — confirm the bug is now fixed.
- Take a screenshot of the corrected state.
- Verify no new visual regressions were introduced in the surrounding UI.
- Check the browser console for errors or warnings.

## Step 6: Open PR

Use the project's PR template. Include:

- Root cause explanation
- Fix description with before/after behavior
- Test evidence
- Rollback plan (required for P0/P1)

## Required Agent Delegation

You MUST spawn these agents via the Task tool (`subagent_type: "generalPurpose"`) at the appropriate points:

- **`hatch3r-researcher`** — MUST spawn before implementation with modes `symptom-trace`, `root-cause`, `codebase-impact`. Skip only for trivially simple bugs (`risk:low` AND `priority:p3`).
- **`hatch3r-test-writer`** — MUST spawn after fix implementation to write regression tests covering the fixed behavior and related edge cases.
- **`hatch3r-reviewer`** — MUST spawn after implementation for code review before PR creation.

## Related Skills

- **Skill**: `hatch3r-qa-validation` — use this skill for end-to-end verification of the bug fix

## Definition of Done

- [ ] Root cause identified and documented in PR
- [ ] Fix implemented with minimal diff
- [ ] Regression test written
- [ ] All existing tests pass
- [ ] No new linter warnings
- [ ] Browser-verified fix (if UI bug)
- [ ] Performance budgets maintained
- [ ] Security/privacy invariants respected
- [ ] If P0/P1: rollback plan documented
@@ -0,0 +1,76 @@
---
id: hatch3r-ci-pipeline
type: skill
description: Design and optimize CI/CD pipelines. Covers stage design, test parallelization, artifact management, and pipeline performance.
---

# CI Pipeline Workflow

## Quick Start

```
Task Progress:
- [ ] Step 1: Audit existing pipeline
- [ ] Step 2: Design stage structure
- [ ] Step 3: Optimize test parallelization
- [ ] Step 4: Configure artifact management
- [ ] Step 5: Implement and validate
```

## Step 1: Audit Existing Pipeline

- Map the current pipeline stages, their dependencies, and execution times.
- Identify bottlenecks: which stages take the longest? Which block others unnecessarily?
- Check for flaky tests that cause unnecessary reruns.
- Review resource usage: are runners appropriately sized? Are caches effective?
- Measure total pipeline duration from push to deployable artifact.

## Step 2: Design Stage Structure

- Organize into logical stages: install, lint, typecheck, unit test, integration test, build, deploy.
- Maximize parallelism: lint, typecheck, and unit tests can run in parallel after install.
- Use fail-fast: if lint fails, skip tests. If unit tests fail, skip integration tests.
- Gate deployments behind all quality checks.
- Separate environment-specific deployment stages (staging, production) with manual approval gates for production.

## Step 3: Optimize Test Parallelization

- Split test suites across multiple runners using test file sharding or test duration-based splitting.
- Use test timing data from previous runs to balance shard workloads.
- Run unit tests and integration tests on separate runners in parallel.
- For monorepos: only run tests for changed packages and their dependents.
- Cache test results for unchanged code paths where the test framework supports it.
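The duration-based splitting can be sketched as a greedy assignment: sort files by recorded runtime, then always hand the next file to the currently lightest shard. The `seconds file` input format is an assumption; feed it whatever timing data your runner records:

```bash
# Reads "seconds test-file" lines on stdin; prints "shard-index test-file"
# with total runtime balanced across N shards (greedy longest-first).
balance_shards() {
  sort -rn | awk -v n="$1" '
    {
      # pick the shard with the smallest accumulated runtime
      min = 1
      for (i = 2; i <= n; i++) if (load[i] < load[min]) min = i
      load[min] += $1
      print min, $2
    }'
}
```

`printf '30 slow.test\n20 mid.test\n10 fast.test\n' | balance_shards 2` puts the slow file alone on shard 1 and the other two on shard 2.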

## Step 4: Configure Artifact Management

- Build artifacts once, deploy the same artifact to all environments.
- Tag artifacts with commit SHA and build number for traceability.
- Set retention policies: keep production artifacts longer, clean up PR artifacts after merge.
- Store build metadata (git SHA, branch, build time, test results) alongside artifacts.
- Use content-addressable storage or artifact registries appropriate to the project (npm, Docker, S3).

## Step 5: Implement and Validate

- Implement pipeline changes incrementally — test each stage change in a feature branch.
- Verify caching works correctly: first run populates cache, second run uses it.
- Confirm parallel stages don't have hidden dependencies causing race conditions.
- Measure pipeline duration improvement against the baseline from Step 1.
- Document the pipeline architecture for the team.

## Pipeline Performance Targets

| Metric | Target |
|--------|--------|
| Lint + typecheck | < 2 minutes |
| Unit tests | < 5 minutes |
| Integration tests | < 10 minutes |
| Full pipeline (push to artifact) | < 15 minutes |
| Cache hit ratio | > 80% |

## Definition of Done

- [ ] Pipeline stages optimized with maximum parallelism
- [ ] Test parallelization configured and balanced
- [ ] Artifact management with retention policies
- [ ] Pipeline duration meets performance targets
- [ ] Documentation updated with pipeline architecture
@@ -0,0 +1,67 @@
---
id: hatch3r-command-customize
description: Create and manage per-command customization files for description overrides, enable/disable control, and project-specific markdown instructions. Use when tailoring command behavior to project-specific needs.
---
# Command Customization Management

## Quick Start

```
Task Progress:
- [ ] Step 1: Identify which command to customize
- [ ] Step 2: Determine customization needs
- [ ] Step 3: Create the customization files
- [ ] Step 4: Sync to propagate changes
- [ ] Step 5: Verify the customized output
```

## Step 1: Identify Command

Determine which hatch3r command needs customization:
- Review the commands in `.agents/commands/` and their default behavior
- Identify gaps between default behavior and project needs
- Check for existing customization files in `.hatch3r/commands/`

## Step 2: Determine Customization Needs

Decide which customization approach to use:

**YAML (`.customize.yaml`)** — for structured overrides:
- **Description**: Change how the command is described in adapter outputs
- **Enabled**: Set to `false` to disable the command entirely

**Markdown (`.customize.md`)** — for free-form instructions:
- Project-specific workflow steps
- Additional prerequisites or constraints
- Custom deployment or release procedures

## Step 3: Create Customization Files

Create files in `.hatch3r/commands/`:

**For YAML overrides:**
```yaml
# .hatch3r/commands/{command-id}.customize.yaml
description: "Release workflow with staging validation"
```

**For markdown instructions:**
Create `.hatch3r/commands/{command-id}.customize.md` with project-specific additions. This content is injected into the managed block under `## Project Customizations`.

## Step 4: Sync

Run `npx hatch3r sync` to propagate customizations to all adapter outputs.

## Step 5: Verify

Confirm customizations appear in adapter output files:
- Check description in adapter outputs (where applicable)
- Check markdown instructions appear inside the managed block
- Verify disabled commands are absent from adapter outputs
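These checks can be spot-checked mechanically with two small helpers; the strings and file paths you pass are whatever your adapter outputs actually contain:

```bash
# Presence/absence checks for synced adapter output files.
assert_present() { grep -qF "$2" "$1" && echo "ok: found '$2'"    || echo "MISSING: '$2' in $1"; }
assert_absent()  { grep -qF "$2" "$1" && echo "STALE: '$2' in $1" || echo "ok: '$2' absent"; }
```

For example, `assert_present <adapter-output-file> "Release workflow with staging validation"` confirms the YAML description override landed, and `assert_absent` confirms a disabled command's id is gone.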

## Definition of Done

- [ ] Customization files created in `.hatch3r/commands/`
- [ ] `npx hatch3r sync` completes without errors
- [ ] Adapter output files reflect the customizations
- [ ] Customization files committed to the repository
@@ -0,0 +1,76 @@
---
id: hatch3r-context-health
description: Monitor and maintain conversation context health during long sessions. Use when context may be degrading, after many turns, or when experiencing repeated errors.
---
# Context Health Monitoring

## Quick Start

```
Task Progress:
- [ ] Step 1: Assess current context health
- [ ] Step 2: Identify degradation signals
- [ ] Step 3: Apply corrective action
- [ ] Step 4: Verify health improvement
```

## Step 1: Assess Context Health

Run through the self-assessment checklist:

1. **Task recall**: Can you state the original task, acceptance criteria, and scope boundaries without looking?
2. **Progress tracking**: List what's been completed and what remains.
3. **Error check**: Count recent failed tool calls or incorrect assumptions.
4. **File currency**: List files you've modified — when did you last read each one?
5. **Scope check**: Compare your current work to the original issue description.

## Step 2: Identify Degradation

| Check | Healthy | Degraded |
|-------|---------|----------|
| Task recall | Can state requirements from memory | Need to re-read issue |
| Progress | Clear forward momentum | Cycling or stuck |
| Errors | Occasional, different causes | Repeated, same cause |
| Files | Recently read and current | Stale, may have drifted |
| Scope | Aligned with acceptance criteria | Drifted to tangential work |

## Step 3: Apply Corrective Action

### If 0-1 checks degraded (Green): Continue normally

### If 2-3 checks degraded (Yellow): Refresh
1. Re-read the issue body and acceptance criteria
2. Re-read all files you've modified in this session
3. Create a progress summary of completed work
4. Re-plan remaining steps from the refreshed context

### If 4 checks degraded (Orange): Delegate
1. Create a handoff document with all context
2. Spawn a sub-agent using the Task tool with the handoff
3. Monitor the sub-agent's output
4. Aggregate results

### If 5 checks degraded (Red): Checkpoint and Stop
1. Save all progress (files changed, tests written)
2. Document remaining work and blockers
3. Post progress on the GitHub issue
4. Recommend fresh conversation
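The four thresholds reduce to a single mapping from degraded-check count to action level, which can be expressed directly:

```bash
# Map the number of degraded checks (0-5) to the corrective-action level.
health_level() {
  case "$1" in
    0|1) echo "Green: continue" ;;
    2|3) echo "Yellow: refresh" ;;
    4)   echo "Orange: delegate" ;;
    *)   echo "Red: checkpoint and stop" ;;
  esac
}
```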

## Step 4: Verify Improvement

After corrective action:
- Re-run the assessment checklist
- Confirm health is at Green or Yellow
- Resume work on the original task

## Definition of Done

- [ ] Context health assessed with all 5 checks
- [ ] Degradation level determined (Green/Yellow/Orange/Red)
- [ ] Appropriate corrective action taken
- [ ] Health verified at Green or Yellow after correction

## Related Skills & Agents

- **Command**: `hatch3r-context-health` — full monitoring protocol with integration points
- **Command**: `hatch3r-board-pickup` — auto-advance mode uses context health for session management
@@ -0,0 +1,65 @@
---
id: hatch3r-cost-tracking
description: Track token usage and estimate costs for agent sessions. Use when monitoring spend, approaching budget limits, or generating cost reports.
---
# Cost Tracking Workflow

## Quick Start

```
Task Progress:
- [ ] Step 1: Review cost tracking configuration
- [ ] Step 2: Estimate current session token usage
- [ ] Step 3: Identify cost optimization opportunities
- [ ] Step 4: Generate cost report
```

## Step 1: Review Configuration

1. Check `hatch.json` for a `costTracking` section.
2. Note configured budgets: `sessionBudget`, `issueBudget`, `epicBudget`.
3. Note warning thresholds and whether `hardStop` is enabled.
4. If no configuration exists, operate in report-only mode.

## Step 2: Estimate Token Usage

Estimate tokens for the current session using these rules:

| Content Type | Rule |
|-------------|------|
| Messages | ~4 characters per token |
| Tool calls | JSON length / 4 (input), response length / 4 (output) |
| File reads | Character count / 4 |
| Web searches | ~500 tokens per search |

Calculate estimated cost using the model tier rates from the `hatch3r-cost-tracking` command reference.
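The chars-per-token rule lends itself to a quick estimate over saved transcripts or files. Both the ~4-characters rate and the per-million-token price argument are rough assumptions, not a provider's real tokenizer or rates:

```bash
# Estimate tokens for a file at ~4 characters per token (ballpark only).
estimate_tokens() { wc -c < "$1" | awk '{ printf "%d\n", $1 / 4 }'; }

# Convert a token count to dollars, given a price per million tokens.
estimate_cost() { awk -v t="$1" -v per_mtok="$2" 'BEGIN { printf "%.4f\n", t / 1000000 * per_mtok }'; }
```

A 100-character message estimates to 25 tokens; `estimate_cost 1000000 3` prints `3.0000`.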

## Step 3: Identify Optimizations

Review usage patterns for savings:

- **Large file reads**: Were files read multiple times without changes? Cache instead.
- **Model tier**: Could routine tasks (linting, formatting) use a faster/cheaper model?
- **Context bloat**: Is irrelevant context accumulating? Summarize and trim.
- **Batching**: Were multiple small tool calls made that could be combined?
- **Scope creep**: Did work expand beyond the original issue? Scope back.

## Step 4: Generate Report

Produce a cost report using the output format from the `hatch3r-cost-tracking` command. Include:
- Total estimated tokens (input + output)
- Estimated cost at the current model tier
- Budget status (if configured)
- Top optimization opportunities

## Definition of Done

- [ ] Cost configuration reviewed (or report-only mode noted)
- [ ] Token usage estimated for current session
- [ ] Optimization opportunities identified
- [ ] Cost report generated with budget status

## Related Skills & Agents

- **Command**: `hatch3r-cost-tracking` — full cost tracking protocol with guardrails and budget enforcement
- **Skill**: `hatch3r-context-health` — context health monitoring complements cost tracking for session management
@@ -0,0 +1,82 @@
---
id: hatch3r-dep-audit
description: Audit and update npm dependencies for security, freshness, and bundle impact. Use when auditing dependencies, responding to CVEs, or upgrading packages.
---

> **Note:** Commands below use `npm` as an example. Substitute with your project's package manager (`yarn`, `pnpm`, `bun`) or build tool as appropriate.

# Dependency Audit Workflow

## Quick Start

```
Task Progress:
- [ ] Step 1: Run npm audit + npm outdated, categorize findings
- [ ] Step 2: Research CVEs via web search for critical/high
- [ ] Step 3: Plan upgrades (breaking vs non-breaking, bundle impact)
- [ ] Step 4: Implement upgrades one-by-one, run tests after each
- [ ] Step 5: Verify quality gates and bundle size
- [ ] Step 6: Open PR with upgrade rationale
```

## Step 1: Gather Findings

- Run `npm audit` and capture output. Categorize by severity: critical, high, moderate, low.
- Run `npm outdated` to identify packages with newer versions.
- Cross-reference with project dependency management rules: fix high/critical before merge, patch within 48h for critical CVEs.
- Document findings in a structured table: package, current version, available version, severity, CVE IDs (if any).
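For the categorization step, `npm audit --json` reports per-severity counts under `metadata.vulnerabilities`. A sketch of pulling those out — the sample payload below is a trimmed, illustrative slice, not real audit output:

```python
import json

# Illustrative slice of `npm audit --json` output; the real payload has
# many more fields (per-advisory details, fix availability, etc.).
audit_json = '''{
  "metadata": {
    "vulnerabilities": {
      "info": 0, "low": 1, "moderate": 2, "high": 1, "critical": 0, "total": 4
    }
  }
}'''

counts = json.loads(audit_json)["metadata"]["vulnerabilities"]
blocking = counts["critical"] + counts["high"]    # fix before merge
batchable = counts["moderate"] + counts["low"]    # can be batched
```

In practice you would pipe `npm audit --json` into this rather than inlining a string; the split into blocking vs batchable mirrors the severity policy above.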
## Step 2: Research CVEs

For critical and high vulnerabilities:

- Use **web search** to look up each CVE: exploitability, affected versions, fix version, workarounds.
- Check npm advisories and GitHub security advisories for official guidance.
- Prioritize: critical first, then high. Moderate/low can be batched.
- Note any packages with no fix available — document mitigation or deferral rationale.

## Step 3: Plan Upgrades

Before changing anything:

- **Breaking vs non-breaking:** Check each package's changelog (npm, GitHub releases). For external library docs and current best practices, follow the project's tooling hierarchy.
- **Bundle impact:** Check the bundle size budget from project rules. Run `npm run build` and measure before/after for each upgrade.
- **Upgrade order:** Security fixes first, then non-breaking minor/patch, then breaking changes (one at a time).
- **Risks:** List packages that may require code changes (e.g., major version bumps).
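One way to pre-sort breaking vs non-breaking candidates is to compare major versions in `npm outdated --json` output. The sample data below is made up, and this is only a first pass — pre-1.0 packages often break on minor bumps too, so the changelog check above still applies:

```python
# Flag upgrades whose latest version crosses a major boundary (likely breaking).
# Sample data mirrors the shape of `npm outdated --json`; values are invented.

outdated = {
    "left-pad": {"current": "1.3.0", "latest": "1.3.2"},
    "some-framework": {"current": "4.18.2", "latest": "5.0.1"},
}

def major(version):
    return int(version.split(".")[0])

breaking = [p for p, v in outdated.items()
            if major(v["latest"]) > major(v["current"])]
safe = [p for p in outdated if p not in breaking]
```

Under semver, only the `breaking` list should need code changes, which is why Step 4 takes those one at a time.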
## Step 4: Implement Upgrades

- Upgrade **one package at a time** (or one logical group, e.g., all patch-level ecosystem packages).
- After each upgrade: run `npm install`, then `npm run lint && npm run typecheck && npm run test`.
- If tests fail: fix or revert. Document any required code changes.
- Remove unused dependencies during the pass (per dependency-management rule).
- Commit `package-lock.json` — never use `npm install --no-save`.

## Step 5: Verify

```bash
npm run lint && npm run typecheck && npm run test
npm run build
```

- Confirm bundle size is within budget (if defined).
- Run `npm audit` — no critical or high vulnerabilities remaining.
- Ensure `package-lock.json` is committed.

## Step 6: Open PR

Use the project's PR template. Include:

- **Upgrade rationale:** why each package was upgraded (CVE, freshness, feature).
- **Breaking changes:** any code changes required and why.
- **Bundle impact:** before/after gzipped size.
- **Test evidence:** all tests pass, no regressions.
- **Rollback plan:** if risky (e.g., major version bump).
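A minimal PR body covering those points might look like the following — placeholders throughout, not the actual project template, and the package names, versions, and sizes are invented:

```
## Dependency Upgrades

| Package | From → To | Reason |
|---------|-----------|--------|
| <pkg>   | 1.2.3 → 1.2.4 | CVE-XXXX-XXXXX (high), fix available |

**Breaking changes:** none (patch-level only)
**Bundle impact:** 142 kB → 141 kB gzipped
**Test evidence:** lint, typecheck, unit, E2E all green
**Rollback plan:** revert the commit; lockfile pins previous versions
```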
## Definition of Done

- [ ] No critical or high CVEs remaining
- [ ] All tests pass (lint, typecheck, unit, integration, E2E)
- [ ] Bundle size within budget (if defined)
- [ ] `package-lock.json` committed
- [ ] PR includes upgrade rationale and bundle impact
- [ ] No duplicate packages; unused deps removed