hatch3r 1.0.0

Files changed (132)
  1. package/LICENSE +21 -0
  2. package/README.md +437 -0
  3. package/agents/hatch3r-a11y-auditor.md +126 -0
  4. package/agents/hatch3r-architect.md +160 -0
  5. package/agents/hatch3r-ci-watcher.md +123 -0
  6. package/agents/hatch3r-context-rules.md +97 -0
  7. package/agents/hatch3r-dependency-auditor.md +164 -0
  8. package/agents/hatch3r-devops.md +138 -0
  9. package/agents/hatch3r-docs-writer.md +97 -0
  10. package/agents/hatch3r-implementer.md +162 -0
  11. package/agents/hatch3r-learnings-loader.md +108 -0
  12. package/agents/hatch3r-lint-fixer.md +104 -0
  13. package/agents/hatch3r-perf-profiler.md +123 -0
  14. package/agents/hatch3r-researcher.md +642 -0
  15. package/agents/hatch3r-reviewer.md +81 -0
  16. package/agents/hatch3r-security-auditor.md +119 -0
  17. package/agents/hatch3r-test-writer.md +134 -0
  18. package/commands/hatch3r-agent-customize.md +146 -0
  19. package/commands/hatch3r-api-spec.md +49 -0
  20. package/commands/hatch3r-benchmark.md +50 -0
  21. package/commands/hatch3r-board-fill.md +504 -0
  22. package/commands/hatch3r-board-init.md +315 -0
  23. package/commands/hatch3r-board-pickup.md +672 -0
  24. package/commands/hatch3r-board-refresh.md +198 -0
  25. package/commands/hatch3r-board-shared.md +369 -0
  26. package/commands/hatch3r-bug-plan.md +410 -0
  27. package/commands/hatch3r-codebase-map.md +1182 -0
  28. package/commands/hatch3r-command-customize.md +94 -0
  29. package/commands/hatch3r-context-health.md +112 -0
  30. package/commands/hatch3r-cost-tracking.md +139 -0
  31. package/commands/hatch3r-dep-audit.md +171 -0
  32. package/commands/hatch3r-feature-plan.md +379 -0
  33. package/commands/hatch3r-healthcheck.md +307 -0
  34. package/commands/hatch3r-hooks.md +282 -0
  35. package/commands/hatch3r-learn.md +217 -0
  36. package/commands/hatch3r-migration-plan.md +51 -0
  37. package/commands/hatch3r-onboard.md +56 -0
  38. package/commands/hatch3r-project-spec.md +1153 -0
  39. package/commands/hatch3r-recipe.md +179 -0
  40. package/commands/hatch3r-refactor-plan.md +426 -0
  41. package/commands/hatch3r-release.md +328 -0
  42. package/commands/hatch3r-roadmap.md +556 -0
  43. package/commands/hatch3r-rule-customize.md +114 -0
  44. package/commands/hatch3r-security-audit.md +370 -0
  45. package/commands/hatch3r-skill-customize.md +93 -0
  46. package/commands/hatch3r-workflow.md +377 -0
  47. package/dist/cli/hooks-ZOTFDEA3.js +59 -0
  48. package/dist/cli/index.d.ts +2 -0
  49. package/dist/cli/index.js +3584 -0
  50. package/github-agents/hatch3r-docs-agent.md +46 -0
  51. package/github-agents/hatch3r-lint-agent.md +41 -0
  52. package/github-agents/hatch3r-security-agent.md +54 -0
  53. package/github-agents/hatch3r-test-agent.md +66 -0
  54. package/hooks/hatch3r-ci-failure.md +10 -0
  55. package/hooks/hatch3r-file-save.md +11 -0
  56. package/hooks/hatch3r-post-merge.md +10 -0
  57. package/hooks/hatch3r-pre-commit.md +11 -0
  58. package/hooks/hatch3r-pre-push.md +10 -0
  59. package/hooks/hatch3r-session-start.md +10 -0
  60. package/mcp/mcp.json +62 -0
  61. package/package.json +84 -0
  62. package/prompts/hatch3r-bug-triage.md +155 -0
  63. package/prompts/hatch3r-code-review.md +131 -0
  64. package/prompts/hatch3r-pr-description.md +173 -0
  65. package/rules/hatch3r-accessibility-standards.md +77 -0
  66. package/rules/hatch3r-accessibility-standards.mdc +75 -0
  67. package/rules/hatch3r-agent-orchestration.md +160 -0
  68. package/rules/hatch3r-api-design.md +176 -0
  69. package/rules/hatch3r-api-design.mdc +176 -0
  70. package/rules/hatch3r-browser-verification.md +73 -0
  71. package/rules/hatch3r-browser-verification.mdc +73 -0
  72. package/rules/hatch3r-ci-cd.md +70 -0
  73. package/rules/hatch3r-ci-cd.mdc +68 -0
  74. package/rules/hatch3r-code-standards.md +102 -0
  75. package/rules/hatch3r-code-standards.mdc +100 -0
  76. package/rules/hatch3r-component-conventions.md +102 -0
  77. package/rules/hatch3r-component-conventions.mdc +102 -0
  78. package/rules/hatch3r-data-classification.md +85 -0
  79. package/rules/hatch3r-data-classification.mdc +83 -0
  80. package/rules/hatch3r-dependency-management.md +17 -0
  81. package/rules/hatch3r-dependency-management.mdc +15 -0
  82. package/rules/hatch3r-error-handling.md +17 -0
  83. package/rules/hatch3r-error-handling.mdc +15 -0
  84. package/rules/hatch3r-feature-flags.md +112 -0
  85. package/rules/hatch3r-feature-flags.mdc +112 -0
  86. package/rules/hatch3r-git-conventions.md +47 -0
  87. package/rules/hatch3r-git-conventions.mdc +45 -0
  88. package/rules/hatch3r-i18n.md +90 -0
  89. package/rules/hatch3r-i18n.mdc +90 -0
  90. package/rules/hatch3r-learning-consult.md +29 -0
  91. package/rules/hatch3r-learning-consult.mdc +27 -0
  92. package/rules/hatch3r-migrations.md +17 -0
  93. package/rules/hatch3r-migrations.mdc +15 -0
  94. package/rules/hatch3r-observability.md +165 -0
  95. package/rules/hatch3r-observability.mdc +165 -0
  96. package/rules/hatch3r-performance-budgets.md +109 -0
  97. package/rules/hatch3r-performance-budgets.mdc +109 -0
  98. package/rules/hatch3r-secrets-management.md +76 -0
  99. package/rules/hatch3r-secrets-management.mdc +74 -0
  100. package/rules/hatch3r-security-patterns.md +211 -0
  101. package/rules/hatch3r-security-patterns.mdc +211 -0
  102. package/rules/hatch3r-testing.md +89 -0
  103. package/rules/hatch3r-testing.mdc +87 -0
  104. package/rules/hatch3r-theming.md +51 -0
  105. package/rules/hatch3r-theming.mdc +51 -0
  106. package/rules/hatch3r-tooling-hierarchy.md +92 -0
  107. package/rules/hatch3r-tooling-hierarchy.mdc +79 -0
  108. package/skills/hatch3r-a11y-audit/SKILL.md +131 -0
  109. package/skills/hatch3r-agent-customize/SKILL.md +75 -0
  110. package/skills/hatch3r-api-spec/SKILL.md +66 -0
  111. package/skills/hatch3r-architecture-review/SKILL.md +96 -0
  112. package/skills/hatch3r-bug-fix/SKILL.md +129 -0
  113. package/skills/hatch3r-ci-pipeline/SKILL.md +76 -0
  114. package/skills/hatch3r-command-customize/SKILL.md +67 -0
  115. package/skills/hatch3r-context-health/SKILL.md +76 -0
  116. package/skills/hatch3r-cost-tracking/SKILL.md +65 -0
  117. package/skills/hatch3r-dep-audit/SKILL.md +82 -0
  118. package/skills/hatch3r-feature/SKILL.md +129 -0
  119. package/skills/hatch3r-gh-agentic-workflows/SKILL.md +150 -0
  120. package/skills/hatch3r-incident-response/SKILL.md +86 -0
  121. package/skills/hatch3r-issue-workflow/SKILL.md +139 -0
  122. package/skills/hatch3r-logical-refactor/SKILL.md +73 -0
  123. package/skills/hatch3r-migration/SKILL.md +76 -0
  124. package/skills/hatch3r-perf-audit/SKILL.md +114 -0
  125. package/skills/hatch3r-pr-creation/SKILL.md +85 -0
  126. package/skills/hatch3r-qa-validation/SKILL.md +86 -0
  127. package/skills/hatch3r-recipe/SKILL.md +67 -0
  128. package/skills/hatch3r-refactor/SKILL.md +86 -0
  129. package/skills/hatch3r-release/SKILL.md +93 -0
  130. package/skills/hatch3r-rule-customize/SKILL.md +70 -0
  131. package/skills/hatch3r-skill-customize/SKILL.md +67 -0
  132. package/skills/hatch3r-visual-refactor/SKILL.md +89 -0
package/commands/hatch3r-codebase-map.md (new file)
@@ -0,0 +1,1182 @@
---
id: hatch3r-codebase-map
type: command
description: Reverse-engineer business and technical project specs from an existing codebase using parallel analyzer sub-agents with dual business/technical scoping
---

# Codebase Map — Brownfield Codebase Analysis & Spec Generation

Analyze an existing codebase to reverse-engineer project documentation across **two dimensions**: business domain analysis and technical architecture analysis. The command discovers modules, dependencies, conventions, tech stack, technical debt, business logic, domain models, and production readiness using parallel analyzer sub-agents. It outputs structured specs to `docs/specs/business/` (business domain, market context, production readiness) and `docs/specs/technical/` (modules, conventions, stack, debt), plus inferred architectural decision records to `docs/adr/`. Optionally, it generates a root-level `AGENTS.md` as the project's "README for agents." This command is **purely read-only** until the final write step — all analysis is static (file reading, pattern matching). It works for any language or framework.

---

## Shared Context

**Read `hatch3r-board-shared` at the start of the run** if available. It contains GitHub Context, Project Reference, and tooling directives. While this command does not perform board operations directly, the shared context establishes owner/repo and the tooling hierarchy for any follow-up commands.

## Token-Saving Directives

1. **Limit documentation reads.** Read the TOC/headers first (roughly the first 30 lines); read full content only for relevant sections.
2. **Do NOT re-read shared context files** after the initial load — cache values for the duration of the run.
3. **Delegate heavy analysis to sub-agents.** Keep the orchestrator lightweight; sub-agents do the deep file reading.
4. **Skip binary and minified files.** Do not attempt to read or analyze them.

---

## Workflow

Execute these steps in order. **Do not skip any step.** Ask the user at every checkpoint marked with ASK. When in doubt, **ASK** — it is better to ask one question too many than to make one wrong assumption. Discovery questions are never wasted.

### Step 1: Initial Scan, Scope & Discovery

Perform a lightweight scan of the project root to build a project fingerprint, then gather business context.

#### 1a. Detect Package Managers & Config

Scan for:

| Signal | Ecosystem |
| ------ | --------- |
| `package.json` | Node.js / JavaScript / TypeScript |
| `Cargo.toml` | Rust |
| `go.mod` | Go |
| `requirements.txt`, `pyproject.toml`, `setup.py`, `Pipfile` | Python |
| `Gemfile` | Ruby |
| `pom.xml`, `build.gradle` | Java / Kotlin |
| `*.csproj`, `*.sln` | .NET / C# |
| `composer.json` | PHP |
| `pubspec.yaml` | Dart / Flutter |
| `mix.exs` | Elixir |

Also detect: `Dockerfile`, `docker-compose.yml`, `.github/workflows/`, `.gitlab-ci.yml`, `Makefile`, `tsconfig.json`, `.eslintrc.*`, `.prettierrc.*`, `turbo.json`, `nx.json`, `lerna.json`, `pnpm-workspace.yaml`.
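
As an illustrative sketch only (the agent performs this scan with its file tools, not by running code), the manifest lookup in the table above can be expressed as a filename-to-ecosystem check at the project root:

```python
from pathlib import Path

# Manifest filenames mapped to ecosystems (a subset of the table above).
ECOSYSTEM_SIGNALS = {
    "package.json": "Node.js / JavaScript / TypeScript",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "pyproject.toml": "Python",
    "requirements.txt": "Python",
    "Gemfile": "Ruby",
    "pom.xml": "Java / Kotlin",
    "composer.json": "PHP",
    "pubspec.yaml": "Dart / Flutter",
    "mix.exs": "Elixir",
}

def detect_ecosystems(root: str) -> set[str]:
    """Return the ecosystems whose manifest files exist at the project root."""
    root_path = Path(root)
    return {
        ecosystem
        for manifest, ecosystem in ECOSYSTEM_SIGNALS.items()
        if (root_path / manifest).is_file()
    }
```

A polyglot repo can legitimately return several ecosystems; the fingerprint should list all of them.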
51
+
52
+ #### 1b. Detect Tech Stack
53
+
54
+ From config files and top-level imports, identify:
55
+
56
+ - **Languages:** primary and secondary (by file extension count)
57
+ - **Frameworks:** (React, Next.js, Express, Django, Rails, Spring, etc.)
58
+ - **Databases:** (from config files, ORM configs, migration directories)
59
+ - **Infrastructure:** (cloud provider configs, IaC files, container orchestration)
60
+
61
+ #### 1c. Estimate Project Size
62
+
63
+ - File count by language (exclude `node_modules/`, `vendor/`, `dist/`, `build/`, `.git/`)
64
+ - Approximate LOC by language (sample-based for large codebases)
65
+ - Directory depth and breadth
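
A minimal sketch of the file-count step, with the exclusion list from above (illustrative; the agent would use its own directory tools):

```python
from collections import Counter
from pathlib import Path

# Dependency and build-output directories excluded from size estimates.
SKIP_DIRS = {"node_modules", "vendor", "dist", "build", ".git"}

def count_files_by_extension(root: str) -> Counter:
    """Count files per extension, skipping dependency and build-output dirs."""
    counts: Counter = Counter()
    for path in Path(root).rglob("*"):
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        if path.is_file() and path.suffix:
            counts[path.suffix] += 1
    return counts
```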
66
+
67
+ #### 1d. Check Existing Documentation
68
+
69
+ - Check for `docs/specs/` — if exists, note contents (including `business/` and `technical/` subdirectories)
70
+ - Check for `docs/adr/` — if exists, note contents
71
+ - Check for `README.md`, `CONTRIBUTING.md`, `ARCHITECTURE.md`, or similar
72
+ - Check for `/.agents/hatch.json` — if exists, this project already has hatch3r configuration
73
+ - Check for root `AGENTS.md` — if exists, note its contents
74
+
75
+ If `docs/specs/` or `docs/adr/` already exist:
76
+
77
+ **ASK:** "Existing documentation found at `docs/specs/` and/or `docs/adr/`. (a) Supplement — keep existing files and add new ones, (b) Replace — archive existing and generate fresh, (c) Abort."
78
+
79
+ #### 1e. Present Project Fingerprint
80
+
81
+ ```
82
+ Project Fingerprint
83
+ ===================
84
+ Root: {project root path}
85
+ Languages: {language1} ({N files}), {language2} ({N files}), ...
86
+ Frameworks: {framework1}, {framework2}, ...
87
+ Databases: {db1}, {db2}, ... (or "None detected")
88
+ Package Mgr: {npm/cargo/pip/...}
89
+ Build Tools: {webpack/vite/tsc/make/...}
90
+ CI/CD: {GitHub Actions/GitLab CI/...} (or "None detected")
91
+ Infra: {Docker/K8s/Terraform/...} (or "None detected")
92
+ Project Size: {N files}, ~{N}K LOC
93
+ Monorepo: {yes — N workspaces / no}
94
+ Existing Docs: {docs/specs/ (N files), docs/adr/ (N files) / None}
95
+ AGENTS.md: {found / not found}
96
+ ```
97
+
98
+ #### 1f. Onboarding Scope Selection
99
+
100
+ **ASK:** "Should I analyze the **full product**, or only **specific parts**? If specific, list the directories, modules, or domains to focus on. Options: (a) full codebase analysis, (b) specific directories only — list them, (c) exclude directories — list them (e.g., vendor, generated code)."
101
+
102
+ #### 1g. Company Stage Assessment
103
+
104
+ **ASK:** "To calibrate the analysis depth and recommendations to your situation, tell me about your company/project stage:
105
+
106
+ - **Company stage**: pre-revenue / early-revenue / growth / scale / enterprise
107
+ - **Team composition**: solo founder, small team (2-5), medium (5-20), large (20+)
108
+ - **Current user/revenue scale**: no users yet, beta (<1K), early traction (1K-50K), growth (50K-500K), scale (500K+)
109
+ - **Funding/runway**: bootstrapped, pre-seed, seed, Series A+, profitable
110
+ - **Regulatory/compliance needs**: none, basic (GDPR/SOC2), heavy (HIPAA/PCI/FedRAMP)
111
+ - **Deployment maturity**: no deployment yet, manual, CI/CD, full GitOps"
112
+
113
+ Cache the stage assessment. It drives **stage-adaptive depth** throughout the analysis:
114
+ - **Pre-revenue / early-revenue**: MVP-focused, lean analysis. Emphasize speed-to-market, core user flows, minimal viable infrastructure.
115
+ - **Growth**: Scaling focus. Emphasize performance bottlenecks, horizontal scaling readiness, monitoring gaps, technical debt velocity impact.
116
+ - **Scale / enterprise**: Production hardening and compliance. Emphasize SLA readiness, disaster recovery, governance, audit trails, multi-region.
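
The stage calibration above amounts to a simple lookup; a sketch (the fallback for unrecognized stages is an assumption of this sketch, not part of the command):

```python
# Emphasis lists mirror the stage-adaptive depth bullets above.
STAGE_EMPHASIS = {
    "pre-revenue": ["speed-to-market", "core user flows", "minimal viable infrastructure"],
    "early-revenue": ["speed-to-market", "core user flows", "minimal viable infrastructure"],
    "growth": [
        "performance bottlenecks",
        "horizontal scaling readiness",
        "monitoring gaps",
        "technical debt velocity impact",
    ],
    "scale": ["SLA readiness", "disaster recovery", "governance", "audit trails", "multi-region"],
    "enterprise": ["SLA readiness", "disaster recovery", "governance", "audit trails", "multi-region"],
}

def analysis_emphasis(stage: str) -> list[str]:
    """Return the analysis focus areas for a company stage (growth as fallback)."""
    return STAGE_EMPHASIS.get(stage.strip().lower(), STAGE_EMPHASIS["growth"])
```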

#### 1h. Business Discovery

Before asking, attempt to reverse-engineer business context from the codebase: look for payment/billing code, user roles, analytics events, domain models, README descriptions, and marketing copy in the repo.

Present what was inferred, then **ASK** to fill gaps:

"Based on the codebase, I inferred the following business context. Please confirm or correct it, and fill in any gaps:

- **Business model type**: {inferred or unknown} (SaaS, marketplace, platform, API-first, consumer app, internal tool, open source)
- **Revenue model**: {inferred or unknown} (subscription, transactional, freemium, advertising, enterprise licensing)
- **Key competitors**: {list any found in docs, or ask} (names and URLs — I will research them)
- **Target market segments / ICP**: {inferred or unknown}
- **Key business metrics/KPIs**: {inferred from analytics code, or ask} (tracked or planned)
- **Go-to-market status**: {inferred or unknown} (pre-launch, launched, scaling)
- **Regulatory or industry-specific requirements**: {inferred or unknown}

Any additional business context I should know?"

If running as part of a pipeline after another hatch3r command that already gathered this context, check for `.hatch3r-session.json`. If found, pre-fill the company stage and business context from the session file and confirm with the user rather than re-asking.
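
A sketch of the session pre-fill check (the key names `companyStage` and `businessContext` are assumed here for illustration; the actual session file layout may differ):

```python
import json
from pathlib import Path

def load_session_context(root: str) -> dict:
    """Return cached stage/business context from a prior pipeline run, if any."""
    session_file = Path(root) / ".hatch3r-session.json"
    if not session_file.is_file():
        return {}
    try:
        data = json.loads(session_file.read_text())
    except json.JSONDecodeError:
        return {}  # a corrupt session file should not abort the run
    # Keep only the keys this command pre-fills (assumed schema).
    return {k: data[k] for k in ("companyStage", "businessContext") if k in data}
```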

---

### Step 2: Spawn Parallel Analyzer Sub-Agents

Launch one analyzer sub-agent per domain below in parallel — as many at once as the platform supports — using the **Task tool** with `subagent_type: "generalPurpose"`. Each analyzer receives the project fingerprint, confirmed scope, company stage assessment, and business context from Step 1.

**Each sub-agent prompt must include:**

- The full project fingerprint, confirmed scope, company stage, and business context from Step 1
- An instruction to use file reading and code search tools extensively
- An instruction to use **Context7 MCP** (`resolve-library-id`, then `query-docs`) for understanding framework conventions and library APIs
- An instruction to use **web search** for current best practices, benchmarks, and industry standards relevant to the analysis area
- An instruction to output in structured markdown format
- An explicit instruction: **do NOT create files — return the output as a structured result**

#### Sub-Agent 1: Module & Dependency Analyzer

**Prompt context:** Project fingerprint, confirmed scope.

**Task:**

1. Map all modules/packages/components in the codebase:
   - For monorepos: each workspace package is a top-level module
   - For single-package projects: identify logical modules from the directory structure (e.g., `src/auth/`, `src/api/`, `src/models/`)
   - For framework projects: follow framework conventions (e.g., Next.js `app/` routes, Django apps, Rails controllers/models)
2. Build the internal dependency graph:
   - Trace imports between modules
   - Identify entry points, shared utilities, orphaned code (files with no importers), and circular dependencies
3. Map external dependency usage:
   - Which external packages are used by which modules
   - Identify wrapper/adapter patterns around external dependencies
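
The orphan and circular-dependency checks above can be sketched over a module-to-imports map (illustrative only; a real analyzer would run full cycle detection, e.g. Tarjan's strongly connected components):

```python
def analyze_graph(imports: dict[str, set[str]], entry_points: set[str]):
    """Find orphaned modules and circular imports in a module dependency graph.

    ``imports`` maps each module to the set of internal modules it imports.
    """
    modules = set(imports) | {dep for deps in imports.values() for dep in deps}
    imported_somewhere = {dep for deps in imports.values() for dep in deps}
    # Orphans: modules nothing imports and that are not known entry points.
    orphans = modules - imported_somewhere - entry_points

    # Report each mutually-importing pair once (A <-> B).
    cycles = sorted(
        (a, b)
        for a, deps in imports.items()
        for b in deps
        if a < b and a in imports.get(b, set())
    )
    return orphans, cycles
```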

**Output format:**

```markdown
## Module Map

| Module | Path | Type | Description | Key Exports |
| ------ | ---- | ---- | ----------- | ----------- |
| ... | ... | ... | ... | ... |

## Internal Dependency Graph

{module} → {module} (via {import path})
...

## Entry Points

- {path} — {description}

## Shared Utilities

- {path} — used by {N} modules

## Concerns

- Circular: {A} ↔ {B}
- Orphaned: {path} (no importers)
```

#### Sub-Agent 2: Conventions & Patterns Analyzer

**Prompt context:** Project fingerprint, confirmed scope.

**Task:**

1. Discover coding conventions:
   - Naming: file naming (camelCase, kebab-case, PascalCase), variable/function naming, class naming
   - File structure: how files are organized within modules, co-location patterns
   - Export patterns: default vs. named exports, barrel files (`index.ts`), re-exports
2. Discover architectural patterns:
   - Error handling: try/catch patterns, error types, error propagation
   - State management: global state, context, stores, signals
   - API design: REST conventions, GraphQL schema patterns, RPC patterns
   - Data access: repository pattern, direct queries, ORM usage
   - Testing: test file location, naming (`*.test.*`, `*.spec.*`, `__tests__/`), frameworks used, fixture patterns
3. Identify code style:
   - Indentation (tabs/spaces, width)
   - Quote style (single/double)
   - Semicolons (present/absent, for JS/TS)
   - Line length tendencies
   - Framework-specific patterns (hook patterns, component patterns, middleware patterns)
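
The file-naming discovery above is a majority vote over observed names; a sketch (single-word stems match no pattern and stay unclassified, which is a simplification of this sketch):

```python
import re
from collections import Counter

# Each convention as a whole-stem regex; checked in this order.
PATTERNS = {
    "kebab-case": re.compile(r"^[a-z0-9]+(-[a-z0-9]+)+$"),
    "snake_case": re.compile(r"^[a-z0-9]+(_[a-z0-9]+)+$"),
    "camelCase": re.compile(r"^[a-z]+(?:[A-Z][a-z0-9]*)+$"),
    "PascalCase": re.compile(r"^(?:[A-Z][a-z0-9]*)+$"),
}

def dominant_file_naming(stems: list[str]) -> str:
    """Classify each file stem and return the most common convention."""
    votes: Counter = Counter()
    for stem in stems:
        for name, pattern in PATTERNS.items():
            if pattern.match(stem):
                votes[name] += 1
                break
    return votes.most_common(1)[0][0] if votes else "unclassified"
```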

**Output format:**

```markdown
## Conventions

### Naming
- Files: {pattern}
- Functions: {pattern}
- Classes: {pattern}
- Constants: {pattern}

### File Structure
- {pattern description}

### Exports
- {pattern description}

## Architectural Patterns

### Architecture Style
{MVC / Clean Architecture / Layered / Modular / Monolithic / ...}
Evidence: {file paths and patterns observed}

### Error Handling
- {pattern description with examples}

### State Management
- {pattern description}

### API Design
- {pattern description}

### Data Access
- {pattern description}

### Testing
- Framework: {jest/vitest/pytest/...}
- Location: {co-located / separate test directory}
- Naming: {pattern}
- Coverage: {estimated from test file presence}

## Code Style
- Indentation: {tabs/spaces, width}
- Quotes: {single/double}
- Semicolons: {yes/no}
- Notable: {any other consistent patterns}
```

#### Sub-Agent 3: Tech Stack & Config Analyzer

**Prompt context:** Project fingerprint, confirmed scope.

**Task:**

1. Deep dependency analysis:
   - Runtime dependencies: purpose, version, last update
   - Dev dependencies: purpose, version
   - Peer dependencies and version constraints
   - Identify outdated dependencies (compare against the latest versions if version info is available in lockfiles)
2. Build pipeline analysis:
   - Build tool configuration and scripts
   - Compilation/transpilation pipeline
   - Asset processing (bundling, minification, image optimization)
3. CI/CD configuration:
   - Workflow files and pipeline stages
   - Test automation, linting, and type checking in CI
   - Deployment targets and strategies
4. Environment setup:
   - Environment variables (from `.env.example` and config files — **never read actual `.env` files**)
   - Configuration management approach
   - Secrets management (references only, never actual values)
5. Infrastructure:
   - IaC files (Terraform, CloudFormation, Pulumi)
   - Container configuration
   - Deployment targets (cloud provider, serverless, VMs)
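
A sketch of the environment-variable extraction from a `.env.example` file (names only, never values from a real `.env`):

```python
def required_env_vars(env_example: str) -> list[str]:
    """Extract variable names from .env.example content."""
    names = []
    for raw in env_example.splitlines():
        line = raw.strip()
        # Skip blanks, comments, and lines without an assignment.
        if not line or line.startswith("#") or "=" not in line:
            continue
        name = line.split("=", 1)[0].strip()
        if name.startswith("export "):  # shell-style "export VAR=..." lines
            name = name[len("export "):].strip()
        names.append(name)
    return names
```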

**Output format:**

```markdown
## Dependencies

### Runtime ({N} packages)

| Package | Version | Purpose | Health |
| ------- | ------- | ------- | ------ |
| ... | ... | ... | ... |

### Dev ({N} packages)

| Package | Version | Purpose |
| ------- | ------- | ------- |
| ... | ... | ... |

## Build Pipeline
- Tool: {webpack/vite/tsc/esbuild/...}
- Scripts: {key npm scripts or Makefile targets}
- Output: {dist directory, bundle format}

## CI/CD
- Platform: {GitHub Actions / GitLab CI / ...}
- Stages: {lint → test → build → deploy}
- Deploy Target: {Vercel / AWS / GCP / self-hosted / ...}

## Environment
- Config approach: {env vars / config files / ...}
- Required env vars: {list from .env.example}

## Infrastructure
- Containerized: {yes/no}
- IaC: {Terraform / CloudFormation / none}
- Cloud: {AWS / GCP / Azure / none detected}

## Health Assessment
- Outdated: {N packages need updates}
- Missing tooling: {linter/formatter/type checker not configured}
- Security: {known advisory matches from lockfile}
```

#### Sub-Agent 4: Concerns & Debt Analyzer

**Prompt context:** Project fingerprint, confirmed scope.

**Task:**

1. Scan for markers:
   - `TODO`, `FIXME`, `HACK`, `XXX`, `WORKAROUND` comments — capture location and content
   - `@ts-ignore`, `@ts-expect-error`, `# type: ignore`, `# noqa`, `// nolint` — suppression markers
   - `any` type usage (TypeScript), untyped parameters, missing return types
2. Identify dead code:
   - Exported symbols with no importers (cross-reference with the Module Analyzer if available)
   - Unused files (no imports pointing to them)
   - Commented-out code blocks
3. Complexity analysis:
   - Functions exceeding ~50 lines
   - Files exceeding ~300 lines
   - Deeply nested logic (>3 levels)
   - Functions with many parameters (>4)
4. Missing coverage:
   - Modules with no corresponding test files
   - Critical paths without error handling
   - Input validation gaps at module boundaries
5. Security concerns:
   - Hardcoded strings that look like secrets/tokens/keys
   - Unsafe practices (`eval`, `innerHTML` without sanitization, SQL string concatenation)
   - Missing authentication/authorization checks on routes
   - Overly permissive CORS or security headers
6. Performance patterns:
   - N+1 query patterns (loops containing database calls)
   - Large payload construction without pagination
   - Missing caching opportunities
   - Synchronous I/O in async contexts
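
The marker scan in item 1 can be sketched as a line-by-line regex pass that records the location and trailing comment for the debt register:

```python
import re

# Debt markers from the task list above, followed by optional ":" and the comment text.
MARKER_RE = re.compile(r"\b(TODO|FIXME|HACK|XXX|WORKAROUND)\b[:\s]*(.*)")

def scan_markers(path: str, text: str) -> list[tuple[str, int, str, str]]:
    """Return (path, line_no, marker, comment) for each debt marker found."""
    hits = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        match = MARKER_RE.search(line)
        if match:
            hits.append((path, line_no, match.group(1), match.group(2).strip()))
    return hits
```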

**Output format:**

```markdown
## Technical Debt Register

### Critical (address immediately)

| # | Type | Location | Description | Effort |
| - | ---- | -------- | ----------- | ------ |
| 1 | Security | {path}:{line} | {description} | {S/M/L} |

### High (address soon)

| # | Type | Location | Description | Effort |
| - | ---- | -------- | ----------- | ------ |

### Medium (plan for)

| # | Type | Location | Description | Effort |
| - | ---- | -------- | ----------- | ------ |

### Low (nice to have)

| # | Type | Location | Description | Effort |
| - | ---- | -------- | ----------- | ------ |

## Summary
- TODO/FIXME count: {N}
- Type suppressions: {N}
- Dead code files: {N}
- Functions >50 LOC: {N}
- Files >300 LOC: {N}
- Untested modules: {N} of {total}
- Security concerns: {N}
- Performance hotspots: {N}

## Top 5 Debt Items
1. {item} — {severity} — {effort}
2. ...
```

#### Sub-Agent 5: Business Domain Analyzer

**Prompt context:** Project fingerprint, confirmed scope, company stage, business context from Step 1h.

**Task:** Reverse-engineer the business logic embedded in the codebase. Use the business context from Step 1h to guide interpretation, and use **web search** to research the product's industry, domain patterns, and competitor approaches where helpful.

1. Identify domain entities and their relationships:
   - DDD aggregate roots, value objects, entities
   - Database models/schemas and their relationships
   - Enum types that encode business states or categories
2. Map business rules:
   - Validation logic (what constraints does the system enforce?)
   - State machines and workflow engines (order lifecycle, user states, approval flows)
   - Pricing logic, discount rules, tier calculations
   - Authorization rules (who can do what?)
3. Trace revenue-relevant code paths:
   - Payment processing (Stripe, PayPal, custom billing)
   - Subscription management (plan creation, upgrades, downgrades, cancellation)
   - Usage metering, quota enforcement, rate limiting by plan
   - Invoice generation, tax calculation
4. Map user journey touchpoints in code:
   - Authentication and onboarding flows
   - Core value-delivery flows (the "aha moment" path)
   - Conversion funnels (free → paid, trial → subscription)
   - Retention mechanisms (notifications, emails, engagement hooks)
5. Identify business metrics collection points:
   - Analytics events (Segment, Mixpanel, Amplitude, custom)
   - Tracking pixels, attribution code
   - KPI computation logic (MRR, churn, LTV, DAU)
6. Discover business invariants:
   - Uniqueness constraints (unique emails, unique slugs)
   - Data integrity rules (referential integrity, cascade behavior)
   - Compliance-related logic (data retention, right-to-delete, audit logs)
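
One cheap signal for the payment-processing trace in item 3 is the declared dependency list; a sketch (the SDK-to-processor mapping here is a small illustrative subset, and a real analyzer would also grep imports and API endpoints):

```python
# Well-known payment SDK package names mapped to processors (illustrative subset).
PAYMENT_SDKS = {
    "stripe": "Stripe",
    "@stripe/stripe-js": "Stripe",
    "braintree": "Braintree (PayPal)",
    "square": "Square",
}

def detect_payment_processors(dependencies: dict[str, str]) -> set[str]:
    """Infer payment processors from declared dependency names."""
    return {
        processor
        for package, processor in PAYMENT_SDKS.items()
        if package in dependencies
    }
```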

**Output format:**

```markdown
## Business Domain Map

### Domain Entities

| Entity | Location | Type | Relationships | Business Significance |
| ------ | -------- | ---- | ------------- | --------------------- |
| {name} | {path} | aggregate root / entity / value object | {relations} | {why it matters} |

### Entity Relationship Diagram
{Mermaid ER diagram or textual description of key entity relationships}

## Business Rules Register

| # | Rule | Location | Type | Enforcement | Confidence |
| - | ---- | -------- | ---- | ----------- | ---------- |
| 1 | {rule description} | {path}:{line} | validation / state machine / authorization / pricing | {how enforced} | high/medium/low |

## Revenue Flow

### Payment & Billing
- Payment processor: {Stripe / PayPal / custom / none detected}
- Billing model: {subscription / one-time / usage-based / none detected}
- Key paths: {list of file paths involved in payment flow}

### Monetization Touchpoints
- {touchpoint}: {path} — {description}

## User Journey Code Map

| Journey | Entry Point | Key Steps | Exit/Completion | Gaps |
| ------- | ----------- | --------- | --------------- | ---- |
| {journey name} | {path} | {step flow through code} | {completion path} | {missing steps} |

## Business Metrics & Analytics

| Event/Metric | Location | Provider | What It Tracks |
| ------------ | -------- | -------- | -------------- |
| {event name} | {path} | {analytics provider} | {description} |

## Business Invariants

| Invariant | Location | Enforcement | Risk if Violated |
| --------- | -------- | ----------- | ---------------- |
| {rule} | {path} | {how enforced} | {business impact} |

## Uncertainties

- {business logic that is unclear from static analysis — marked for human review}
```
501
+ #### Sub-Agent 6: Production Readiness & Scale Analyzer
502
+
503
+ **Prompt context:** Project fingerprint, confirmed scope, company stage from Step 1g.
504
+
505
+ **Task:** Evaluate infrastructure maturity relative to the company stage. Use **web search** to research current best practices for the detected stack, cloud provider recommendations, and SLA benchmarks for the industry. Grade each dimension relative to what is appropriate for the company's stage — a seed-stage startup has different production readiness needs than a Series B company.
506
+
507
+ 1. **Deployment maturity:**
508
+ - CI/CD pipeline completeness (build, test, deploy stages)
509
+ - Deployment strategies (blue-green, canary, rolling, or manual)
510
+ - Rollback capability and speed
511
+ - Environment management (dev, staging, prod separation)
512
+ - Feature flag infrastructure
513
+ 2. **Observability:**
514
+ - Logging coverage and structured logging adoption
515
+ - Metrics instrumentation (application metrics, infrastructure metrics)
516
+ - Distributed tracing (OpenTelemetry, Jaeger, Datadog)
517
+ - Alerting rules and escalation paths
518
+ - Dashboard presence (Grafana, Datadog, CloudWatch)
519
+ 3. **Scaling patterns:**
520
+ - Horizontal scaling readiness (stateless services, session management)
521
+ - Caching layers (Redis, Memcached, CDN, browser caching)
522
+ - Database scaling (read replicas, connection pooling, sharding readiness)
523
+ - Queue-based processing (background jobs, event-driven architecture)
524
+ - Rate limiting and throttling
525
+ 4. **Reliability:**
526
+ - Error budgets and SLO definitions
527
+ - Circuit breakers and retry patterns
528
+ - Graceful degradation paths
529
+ - Health check endpoints
530
+ - Timeout configuration
531
+ 5. **Incident readiness:**
532
+ - Runbook presence
533
+ - On-call setup indicators
534
+ - Incident response procedures
535
+ - Post-mortem templates
536
+ - Chaos engineering indicators
537
+ 6. **Cost efficiency:**
538
+ - Resource utilization patterns
539
+ - Autoscaling configuration
540
+ - Spot/reserved instance usage
541
+ - Cost monitoring indicators
542
+ 7. **Data management:**
543
+ - Backup strategy indicators
544
+ - Disaster recovery configuration
545
+ - Data retention policies
546
+ - Migration tooling and history
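
Several of these dimensions can be pre-screened with cheap static signals before deeper analysis. A minimal sketch for observability (dimension 2) — the helper name and library list are illustrative assumptions, not an exhaustive or authoritative set:

```bash
# Hypothetical helper: list files that mention common logging/tracing
# libraries as weak evidence of observability instrumentation.
# The library names here are examples only.
observability_signals() {
  grep -rli -e 'opentelemetry' -e 'winston' -e 'pino' "${1:-src}" 2>/dev/null | sort
}
```

Treat matches as evidence, not proof — confirm the instrumentation is actually wired up before assigning a grade.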

**Output format:**

```markdown
## Production Readiness Scorecard

Company Stage: {stage from Step 1g}
Grading Baseline: {what "good" looks like for this stage}

### Deployment Maturity
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}

### Observability
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}

### Scaling Readiness
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}

### Reliability
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}

### Incident Readiness
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}

### Cost Efficiency
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}

### Data Management
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}

## Overall Production Readiness
- Overall Grade: {A/B/C/D/F} (for stage)
- Launch Readiness: {ready / not ready — list blockers}
- Top 3 Production Risks:
  1. {risk} — {mitigation}
  2. {risk} — {mitigation}
  3. {risk} — {mitigation}

## Stage-Appropriate Recommendations
{Ordered list of actions calibrated to the company stage — do not recommend enterprise-grade solutions for pre-revenue startups}
```

---

### Step 3: Review Analyzer Outputs

Collect all sub-agent outputs and produce a merged codebase map with both business and technical dimensions.

#### 3a. Merge & Cross-Reference

1. Merge module map with dependency graph
2. Cross-reference conventions with debt:
   - If Conventions Analyzer found pattern X, check if Debt Analyzer found violations of pattern X
   - If Tech Stack Analyzer found outdated dependencies, cross-reference with modules that use them
3. Cross-reference business domain with technical modules:
   - Map business entities to the technical modules that own them
   - Identify business rules that lack test coverage (cross-reference Business Domain Analyzer with Concerns Analyzer)
   - Flag revenue-critical code paths that have technical debt items
4. Cross-reference production readiness with business stage:
   - Identify production gaps that block business milestones
   - Flag scaling bottlenecks on revenue-critical paths
5. Identify conflicts between analyzer outputs (e.g., different conclusions about architecture style)

#### 3b. Present Merged Summary

```
Codebase Map Summary
====================
Architecture: {detected pattern} (confidence: high/medium/low)
Module Count: {N} modules
Entry Points: {N}
Dependency Health: {healthy/warning/critical} ({N outdated, N vulnerable})
Test Coverage: {estimated} ({N}/{M} modules have tests)
Technical Debt: {low/medium/high} ({N items: X critical, Y high, Z medium})
Conventions: {consistent/mostly consistent/inconsistent}

Business Domain:
  Domain Entities: {N} entities identified
  Business Rules: {N} rules mapped (confidence: high/medium/low)
  Revenue Paths: {payment provider} — {billing model}
  User Journeys: {N} journeys traced
  Analytics Coverage: {comprehensive/partial/minimal/none}

Production Readiness:
  Overall Grade: {A/B/C/D/F} (for {stage})
  Launch Readiness: {ready/not ready}
  Top Gaps: {list}

Key Findings:
1. {finding} — {impact}
2. {finding} — {impact}
3. {finding} — {impact}

Cross-Reference Alerts:
- Convention "{X}" violated in {N} locations (see debt items #...)
- Module "{Y}" depends on outdated package "{Z}" (see tech stack)
- Business rule "{R}" in revenue path has no test coverage
- Production gap "{G}" blocks business milestone "{M}"
- ...
```

If any sub-agent failed, present partial results and note the gap.

**ASK:** "Here is the merged codebase map with business and technical dimensions. Review the findings. (a) Confirm and proceed to spec generation, (b) flag corrections — list what needs adjusting, (c) re-run a specific analyzer with adjusted scope, (d) I have additional business context to add."

---

### Step 4: Generate Specs (Dual-Lens)

From the merged analyzer outputs, draft spec documents in **two separate directories**: business specs and technical specs. These specs document **what exists**, not what should be. Mark any gaps or uncertainties explicitly.

#### 4a. Technical Specs — `docs/specs/technical/`

##### Technical Glossary — `docs/specs/technical/00_glossary.md`

- Assign stable IDs to discovered technical entities, events, modules
- Format: `{prefix}_{name}` (e.g., `mod_auth`, `evt_user_login`, `ent_user`)
- Include all modules, key domain entities, events, and shared utilities
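
ID derivation should be deterministic so that re-runs produce the same IDs. One possible sketch — the exact slug rules (lowercase, spaces to underscores, strip everything else) are an assumption, not a fixed hatch3r convention:

```bash
# Hypothetical ID helper: lowercase the display name, replace spaces
# with underscores, and drop any character outside [a-z0-9_].
glossary_id() {
  prefix="$1"; name="$2"
  slug=$(printf '%s' "$name" | tr '[:upper:]' '[:lower:]' | tr ' ' '_' | tr -cd 'a-z0-9_')
  printf '%s_%s\n' "$prefix" "$slug"
}
```

For example, `glossary_id mod "Auth Service"` yields `mod_auth_service`, matching the `{prefix}_{name}` format above.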

##### Technical Overview — `docs/specs/technical/01_overview.md`

- Codebase overview: architecture style, tech stack summary, module map
- Dependency graph (text or mermaid diagram)
- Build and deployment pipeline summary
- Conventions summary (reference the detailed conventions from analyzer output)

##### Module Specs — `docs/specs/technical/02_{module_name}.md` (numbered sequentially from `02_`)

```markdown
# {Module Name}

> Status: Inferred — reverse-engineered from codebase analysis

## Overview

{What this module does, based on code analysis}

## Current State

### Structure

{Directory layout, key files}

### Key Components

{Classes, functions, exports with brief descriptions}

### Integration Points

{How this module connects to other modules — imports, exports, API contracts}

## Patterns

{Module-specific conventions and patterns observed}

## Test Coverage

{Existing tests, estimated coverage, gaps}

## Technical Debt

{Debt items from the Concerns Analyzer specific to this module}

## Uncertainties

{Anything unclear from static analysis — marked for human review}
```

#### 4b. Business Specs — `docs/specs/business/`

##### Business Glossary — `docs/specs/business/00_business_glossary.md`

- Assign stable IDs to discovered business entities, domain terms, and events
- Format: `{prefix}_{name}` (e.g., `biz_subscription`, `evt_payment_completed`, `dom_pricing_tier`)
- Include domain entities, business events, user journey stages, and business metrics
- Cross-reference with technical glossary IDs where entities map to code

##### Business Overview — `docs/specs/business/01_business_overview.md`

```markdown
# {Project Name} — Business Overview

> Status: Inferred — reverse-engineered from codebase analysis and user input

## Business Model

{Business model type, revenue model — from Step 1h and Business Domain Analyzer}

## Market Context

{Target market, ICP, competitors — from Step 1h}

## Value Proposition

{Inferred from code: what does the product do for users?}

## Personas & User Segments

{Inferred from auth roles, user types, permission models in code}

| Persona | Code Evidence | Primary Goals | Key Flows |
| ------- | ------------- | ------------- | --------- |
| {name} | {user type/role in code} | {goals} | {flows} |

## Key Business Metrics

{From Business Domain Analyzer — analytics events, KPI computation}

| Metric | Tracked | Location | Notes |
| ------ | ------- | -------- | ----- |
| {metric} | yes/inferred/not tracked | {path} | {notes} |

## Company Stage Context

{From Step 1g — stage, team, users, funding}
```

##### Business Domain Specs — `docs/specs/business/02_{domain}.md` (one per business domain, numbered sequentially from `02_`)

```markdown
# {Business Domain Name}

> Status: Inferred — reverse-engineered from codebase analysis

## Domain Overview

{What this business domain covers}

## Business Rules

| # | Rule | Enforcement | Test Coverage | Confidence |
| - | ---- | ----------- | ------------- | ---------- |
| 1 | {rule} | {how} | {covered/gap} | {high/med/low} |

## User Journeys

| Journey | Steps | Code Path | Completeness |
| ------- | ----- | --------- | ------------ |
| {name} | {steps} | {file paths} | {complete/gaps noted} |

## Domain Invariants

| Invariant | Enforcement | Business Impact if Violated |
| --------- | ----------- | --------------------------- |
| {rule} | {how enforced} | {impact} |

## Revenue Relevance

{How this domain relates to revenue — payment flows, conversion, retention}

## Uncertainties

{Business logic unclear from static analysis — marked for human review}
```

##### Production Readiness Report — `docs/specs/business/03_production_readiness.md`

Full production readiness scorecard from Sub-Agent 6, formatted for the business audience — emphasizing business impact of each gap rather than purely technical descriptions.

#### 4c. Present for Review

Present the list of specs to be generated with a brief summary of each, organized by business and technical.

**ASK:** "Here are the specs I will generate across both business and technical dimensions. Review the outlines:

**Technical specs** (`docs/specs/technical/`):
- `00_glossary.md` — {N} technical entities
- `01_overview.md` — architecture & stack overview
- {list of module specs}

**Business specs** (`docs/specs/business/`):
- `00_business_glossary.md` — {N} business entities
- `01_business_overview.md` — business model & market context
- {list of domain specs}
- `03_production_readiness.md` — production scorecard

(a) Confirm and proceed, (b) adjust module/domain boundaries or naming, (c) add/remove items."

---

### Step 5: Generate ADRs

From conventions, architecture patterns, and tech stack choices discovered by the analyzers, infer architectural decisions. Include both technical and business-driven decisions.

#### 5a. Identify Decisions

Look for:

- Framework/language choice (evidence: config files, package manager)
- Database choice (evidence: ORM config, migration files, connection strings)
- Architecture pattern choice (evidence: directory structure, layer boundaries)
- State management approach (evidence: store files, context providers)
- Testing strategy (evidence: test framework config, test file patterns)
- Build/deploy tooling (evidence: CI config, build scripts)
- API design style (evidence: route definitions, schema files)
- Authentication/authorization approach (evidence: auth middleware, token handling)
- Payment/billing architecture (evidence: payment provider integration patterns)
- Analytics/tracking strategy (evidence: analytics provider, event patterns)
- Business model implementation (evidence: pricing logic, subscription management)

#### 5b. Draft ADRs

For each inferred decision, draft `docs/adr/0001_{decision_slug}.md` (numbered sequentially):

```markdown
# {N}. {Decision Title}

**Date:** Inferred {today's date}

**Status:** Inferred

**Scope:** {Technical / Business / Both}

> This ADR was reverse-engineered from codebase analysis, not from original
> decision documentation. Review and change status to "Accepted" if accurate.

## Context

{What problem or need this decision addresses, inferred from code patterns}

## Decision

{The decision that was made, inferred from what exists in the codebase}

## Evidence

{Specific files, patterns, and configurations that support this inference}

- {file path}: {what it shows}
- {pattern}: {where observed}

## Consequences

{Observed consequences of this decision — both positive and negative}

## Uncertainties

{Aspects of this decision that are unclear from static analysis}
```

#### 5c. Present for Review

**ASK:** "Here are the inferred ADRs (including both technical and business-scope decisions). Each has status 'Inferred'. Review and: (a) confirm all, (b) change status to 'Accepted' for confirmed decisions — list numbers, (c) reject/remove specific ADRs — list numbers, (d) adjust content."

---

### Step 6: Generate Codebase Health Report

Compile a summary health report from all 6 analyzer outputs, covering both technical and business health.

```
Codebase Health Report
======================
Project: {name} ({owner}/{repo} if available)
Analysis Date: {today's date}
Analyzer Version: hatch3r-codebase-map v2
Company Stage: {stage from Step 1g}

— Technical Health —
Architecture: {detected pattern}
Module Count: {N}
Dependency Health: {healthy/warning/critical}
  - Runtime deps: {N} ({X outdated, Y vulnerable})
  - Dev deps: {N} ({X outdated})
Test Coverage: {estimated percentage or qualitative} ({N}/{M} modules with tests)
Technical Debt: {low/medium/high} ({N total items})
  - Critical: {N}
  - High: {N}
  - Medium: {N}
  - Low: {N}
Convention Consistency: {high/medium/low}

— Business Health —
Business Logic Coverage: {what % of business rules have test coverage}
Revenue Path Reliability: {error handling quality in payment/billing flows}
User Journey Completeness: {gaps in critical user flows}
Analytics Instrumentation: {comprehensive/partial/minimal/none}
Business Rule Test Coverage: {N}/{M} rules have corresponding tests

— Production Readiness —
Overall Grade: {A/B/C/D/F} (for {stage})
Deployment: {grade}
Observability: {grade}
Scaling: {grade}
Reliability: {grade}
Incident Ready: {grade}

Top 5 Technical Concerns:
1. {concern} — {severity} — {recommended action}
2. {concern} — {severity} — {recommended action}
3. {concern} — {severity} — {recommended action}
4. {concern} — {severity} — {recommended action}
5. {concern} — {severity} — {recommended action}

Top 5 Business Concerns:
1. {concern} — {severity} — {recommended action}
2. {concern} — {severity} — {recommended action}
3. {concern} — {severity} — {recommended action}
4. {concern} — {severity} — {recommended action}
5. {concern} — {severity} — {recommended action}

Strengths:
- {strength observed}
- {strength observed}
- ...
```

**ASK:** "Codebase health report above (technical + business + production readiness). (a) Write report to `docs/codebase-health.md`? (b) Generate a `todo.md` with prioritized improvement items? (c) Both? (d) Neither — display only. Answer for each."

---

### Step 7: Write All Files

Write all confirmed files to disk.

#### 7a. Create Directories

```bash
mkdir -p docs/specs/technical docs/specs/business docs/adr
```

#### 7b. Replace Path (Step 1d option "b")

If the user chose Replace in Step 1d: archive existing docs before writing.

1. Create `docs/.archive-{timestamp}/` (e.g., `docs/.archive-20250223T120000/`).
2. Move all existing files from `docs/specs/` and `docs/adr/` into the archive directory.
3. Proceed to write fresh files (7c, 7d). ADRs start at `0001_` (no continuation from archived numbers).
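
The archive-then-rewrite flow above can be sketched as follows (timestamp format per the example in step 1; directory paths per 7a):

```bash
# Archive existing docs before a Replace-mode rewrite (Step 7b).
ts=$(date -u +%Y%m%dT%H%M%S)
mkdir -p "docs/.archive-${ts}"
for dir in docs/specs docs/adr; do
  # Move each existing doc tree into the archive, if present.
  if [ -d "$dir" ]; then
    mv "$dir" "docs/.archive-${ts}/"
  fi
done
# Recreate fresh directories for the new specs and ADRs.
mkdir -p docs/specs/technical docs/specs/business docs/adr
```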

#### 7c. Write Technical Spec Files

Write each technical spec file drafted in Step 4a and confirmed in Step 4c:

- `docs/specs/technical/00_glossary.md`
- `docs/specs/technical/01_overview.md`
- `docs/specs/technical/02_{module}.md` (one per module)

If supplementing existing specs (Step 1d option "a"), do not overwrite existing files. Add new files alongside them.
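
One way to guarantee non-colliding filenames when supplementing — the `_1`, `_2` suffix scheme and the helper name are assumptions, adjust to taste:

```bash
# Hypothetical helper: append _1, _2, ... until the path is free.
unique_path() {
  p="$1"; out="$p"; n=1
  while [ -e "$out" ]; do
    out="${p%.md}_${n}.md"
    n=$((n + 1))
  done
  printf '%s\n' "$out"
}
```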

#### 7d. Write Business Spec Files

Write each business spec file drafted in Step 4b and confirmed in Step 4c:

- `docs/specs/business/00_business_glossary.md`
- `docs/specs/business/01_business_overview.md`
- `docs/specs/business/02_{domain}.md` (one per business domain)
- `docs/specs/business/03_production_readiness.md`

#### 7e. Write ADR Files

Write each ADR confirmed in Step 5:

- `docs/adr/0001_{decision}.md` (numbered sequentially)

If supplementing (option "a") and `docs/adr/` already contains ADRs, continue numbering from the highest existing number.

If Replace was chosen (option "b"), start at `0001_`.
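
Continuation numbering can be derived from the files already on disk. A sketch, assuming the `NNNN_{slug}.md` naming convention above (the helper name is hypothetical):

```bash
# Hypothetical helper: next four-digit ADR number, or 0001 if none exist.
next_adr_number() {
  last=$(ls docs/adr 2>/dev/null | grep -E '^[0-9]{4}_' | sort | tail -n 1 | cut -c1-4)
  if [ -z "$last" ]; then
    echo "0001"
  else
    # 10# forces base-10 so leading zeros are not read as octal.
    printf '%04d\n' "$((10#$last + 1))"
  fi
}
```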

#### 7f. Write Optional Files

If the user confirmed in Step 6:

- `docs/codebase-health.md` — health report
- `todo.md` — prioritized improvement items (if `todo.md` already exists, **ASK** before overwriting or appending)

#### 7g. Save Session Context

Write `.hatch3r-session.json` to the project root with the company stage assessment and business context gathered in Step 1. This allows subsequent hatch3r commands (`hatch3r-project-spec`, `hatch3r-roadmap`) to skip re-asking the same discovery questions.

```json
{
  "timestamp": "{ISO timestamp}",
  "command": "hatch3r-codebase-map",
  "companyStage": { ... },
  "businessContext": { ... },
  "scope": "{full / specific parts}"
}
```
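
A later command can cheaply test for reusable context before re-asking discovery questions. A sketch — the field name follows the JSON above, and the grep is a crude stand-in for real JSON parsing:

```bash
# Check for a prior session file before re-running discovery.
if [ -f .hatch3r-session.json ] && grep -q '"companyStage"' .hatch3r-session.json; then
  echo "Found saved session context; skipping discovery questions"
fi
```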

#### 7h. Present Summary

```
Files Written:
docs/specs/technical/
  - 00_glossary.md
  - 01_overview.md
  - 02_{module_1}.md
  - 02_{module_2}.md
  - ...
docs/specs/business/
  - 00_business_glossary.md
  - 01_business_overview.md
  - 02_{domain_1}.md
  - ...
  - 03_production_readiness.md
docs/adr/
  - 0001_{decision_1}.md
  - 0002_{decision_2}.md
  - ...
docs/codebase-health.md (if requested)
todo.md (if requested)
.hatch3r-session.json

Total: {N} files created, {M} directories created

Next steps:
- Review generated specs and correct any inaccuracies
- Change ADR statuses from "Inferred" to "Accepted" for confirmed decisions
- Run `hatch3r-board-fill` to create issues from todo.md (if generated)
- Run `hatch3r-healthcheck` for deep QA audit of each module
- Run `hatch3r-security-audit` for full security audit of each module
```

---

### Step 8: AGENTS.md Generation

**ASK:** "Generate or update the root-level `AGENTS.md` with a project summary derived from the specs just created? This file serves as the 'README for agents' — consumed by OpenCode, Windsurf, and other AI coding tools so they understand your project's business context, architecture, and conventions from the first interaction.

(a) Yes — generate it, (b) No — skip, (c) Let me review the content first."

If yes or review-first: generate `AGENTS.md` at the project root containing:

```markdown
# {Project Name} — Agent Instructions

> Auto-generated by hatch3r-codebase-map on {today's date}. Review and adjust as needed.

## Project Purpose

{One-paragraph vision/purpose from business overview}

## Business Context

- **Business model**: {type}
- **Revenue model**: {model}
- **Company stage**: {stage}
- **Target market**: {segments}
- **Key metrics**: {KPIs}

## Technology Stack

{Concise stack summary — languages, frameworks, databases, infrastructure}

## Architecture Overview

{Architecture style, key components, deployment topology — 3-5 sentences}

## Module Map

| Module | Purpose |
| ------ | ------- |
| {module} | {one-line description} |

## Key Business Rules & Domain Constraints

{Top 5-10 business rules that agents must respect when making changes}

- {rule}: {constraint}

## Conventions

{Key coding conventions agents should follow — naming, patterns, testing}

## Documentation Reference

- Business specs: `docs/specs/business/`
- Technical specs: `docs/specs/technical/`
- Architecture decisions: `docs/adr/`
- Health report: `docs/codebase-health.md`
```

If the user chose "review first," present the content and **ASK** for confirmation before writing.

If `AGENTS.md` already exists, **ASK** before overwriting: "Root `AGENTS.md` already exists. (a) Replace entirely, (b) Append hatch3r section, (c) Skip."
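
For option (b), an HTML-comment marker keeps the append idempotent across re-runs. A sketch — the marker string and helper name are assumptions, not an established hatch3r convention:

```bash
# Append the generated section only if our marker is not already present.
append_hatch3r_section() {
  marker='<!-- hatch3r:codebase-map -->'
  if ! grep -qF -- "$marker" AGENTS.md 2>/dev/null; then
    printf '\n%s\n%s\n' "$marker" "$1" >> AGENTS.md
  fi
}
```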

---

### Step 9: Cross-Command Handoff

**ASK:** "Analysis complete. Recommended next steps:
- Run `hatch3r-project-spec` to create forward-looking specs and fill gaps identified in the analysis
- Run `hatch3r-roadmap` to generate a phased roadmap from these specs
- Run `hatch3r-board-fill` to create GitHub issues from todo.md (if generated)

Which would you like to run next? (or none)"

---

## Error Handling

- **Sub-agent failure:** Retry the failed analyzer once with the same prompt. If it fails again, present partial results from the other analyzers and note the gap. Ask the user whether to continue with partial data or abort.
- **Very large codebases (>10K files):** Warn the user about scope. Focus analysis on primary source directories (e.g., `src/`, `app/`, `lib/`). Exclude generated code, vendored dependencies, and build artifacts. Present the scoping decision before spawning analyzers.
- **Unreadable files (binary, minified, generated):** Skip silently. Note skipped file count in the fingerprint summary.
- **Existing docs conflict:** Never overwrite without explicit confirmation. When supplementing, use unique filenames that do not collide with existing files.
- **Monorepo detection failure:** If workspace configuration is ambiguous, ask the user to clarify package boundaries before proceeding.
- **Business context gaps:** If the user cannot answer business discovery questions, proceed with "Unknown" markers and flag these as uncertainties in the business specs.
- **Stage assessment unclear:** Default to "early-revenue" if the user is unsure. This provides balanced analysis depth without over- or under-engineering recommendations.
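
The >10K-file threshold can be checked cheaply before spawning analyzers. A sketch that prunes the always-skipped directories (`node_modules/`, `dist/`, etc.); the helper name is hypothetical:

```bash
# Count candidate source files, skipping vendored and generated trees.
count_source_files() {
  find "${1:-.}" \
    \( -name node_modules -o -name vendor -o -name dist -o -name build \
       -o -name .git -o -name __pycache__ -o -name .venv -o -name target \) -prune \
    -o -type f -print | wc -l | tr -d ' '
}
```

If the count exceeds ~10,000, present a scoping proposal (e.g., limit analysis to `src/`, `app/`, `lib/`) before spawning analyzers.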

## Guardrails

- **Never skip ASK checkpoints.** Every step with an ASK must pause for user confirmation.
- **When in doubt, ASK.** It is better to ask one question too many than to make one wrong assumption. Discovery questions are never wasted.
- **Never write files without user review and confirmation.** All file writes happen in Step 7 only, after all ASK checkpoints.
- **Never overwrite existing documentation** without explicit user confirmation.
- **Do not execute code or run builds.** All analysis is purely static — file reading and pattern matching only.
- **Respect .gitignore** and always skip: `node_modules/`, `vendor/`, `dist/`, `build/`, `.git/`, `__pycache__/`, `.venv/`, `target/` (Rust), `bin/`, `obj/` (.NET).
- **Never read `.env` files or actual secrets.** Only read `.env.example` or similar templates. If a hardcoded secret is found during analysis, flag it as a security concern but do not include the actual value in any output.
- **Mark all inferred information as "Inferred."** Do not present static analysis guesses as established facts.
- **Handle monorepos correctly.** Detect workspace configuration (`workspaces` in `package.json`, `pnpm-workspace.yaml`, `turbo.json`, `nx.json`, `lerna.json`, Cargo workspaces, Go workspaces) and analyze each package as a separate module.
- **If `todo.md` already exists,** ASK before overwriting or appending.
- **Sub-agents must not create files.** They return structured text results to the orchestrator. Only the orchestrator writes files in Step 7.
- **Stage-adaptive recommendations.** Never recommend enterprise-grade solutions for pre-revenue startups. Never recommend MVP shortcuts for scale/enterprise companies. Calibrate all recommendations to the company stage from Step 1g.
- **Business specs must cross-reference technical specs.** Use stable IDs from both glossaries to link business entities to technical modules.
- **Never overwrite `AGENTS.md`** without explicit user confirmation.
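
The workspace markers named in the monorepo guardrail can be probed with a few root-level file checks. A sketch (root files only; nested workspaces and Cargo/Go workspaces would need a deeper walk; the helper name is hypothetical):

```bash
# Report which monorepo workspace markers are present at the repo root.
detect_workspace_markers() {
  for f in pnpm-workspace.yaml turbo.json nx.json lerna.json; do
    if [ -f "$f" ]; then
      echo "$f"
    fi
  done
  # Crude grep for the package.json "workspaces" field; a JSON parser
  # is more robust if available.
  if [ -f package.json ] && grep -q '"workspaces"' package.json; then
    echo 'package.json ("workspaces" field)'
  fi
  return 0
}
```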