gaia-framework 1.65.1 → 1.83.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (57)
  1. package/.claude/commands/gaia-create-stakeholder.md +20 -0
  2. package/.claude/commands/gaia-test-gap-analysis.md +17 -0
  3. package/CLAUDE.md +102 -1
  4. package/README.md +2 -2
  5. package/_gaia/_config/global.yaml +5 -1
  6. package/_gaia/_config/lifecycle-sequence.yaml +20 -0
  7. package/_gaia/_config/skill-manifest.csv +2 -0
  8. package/_gaia/_config/workflow-manifest.csv +3 -1
  9. package/_gaia/core/engine/workflow.xml +11 -1
  10. package/_gaia/core/protocols/review-gate-check.xml +29 -1
  11. package/_gaia/core/workflows/party-mode/steps/step-01-agent-loading.md +60 -9
  12. package/_gaia/creative/workflows/problem-solving/checklist.md +64 -14
  13. package/_gaia/creative/workflows/problem-solving/instructions.xml +367 -22
  14. package/_gaia/creative/workflows/problem-solving/workflow.yaml +31 -1
  15. package/_gaia/dev/agents/_base-dev.md +7 -1
  16. package/_gaia/dev/skills/_skill-index.yaml +9 -0
  17. package/_gaia/dev/skills/figma-integration.md +296 -0
  18. package/_gaia/lifecycle/knowledge/brownfield/config-contradiction-scan.md +137 -0
  19. package/_gaia/lifecycle/knowledge/brownfield/dead-code-scan.md +179 -0
  20. package/_gaia/lifecycle/knowledge/brownfield/test-execution-scan.md +209 -0
  21. package/_gaia/lifecycle/skills/document-rulesets.md +91 -6
  22. package/_gaia/lifecycle/templates/brownfield-scan-doc-code-prompt.md +219 -0
  23. package/_gaia/lifecycle/templates/brownfield-scan-hardcoded-prompt.md +169 -0
  24. package/_gaia/lifecycle/templates/brownfield-scan-integration-seam-prompt.md +127 -0
  25. package/_gaia/lifecycle/templates/brownfield-scan-runtime-behavior-prompt.md +141 -0
  26. package/_gaia/lifecycle/templates/brownfield-scan-security-prompt.md +440 -0
  27. package/_gaia/lifecycle/templates/gap-entry-schema.md +282 -0
  28. package/_gaia/lifecycle/templates/infra-prd-template.md +356 -0
  29. package/_gaia/lifecycle/templates/platform-prd-template.md +431 -0
  30. package/_gaia/lifecycle/templates/prd-template.md +70 -0
  31. package/_gaia/lifecycle/templates/story-template.md +22 -1
  32. package/_gaia/lifecycle/workflows/2-planning/create-ux-design/instructions.xml +52 -3
  33. package/_gaia/lifecycle/workflows/4-implementation/add-feature/checklist.md +1 -1
  34. package/_gaia/lifecycle/workflows/4-implementation/add-feature/instructions.xml +2 -3
  35. package/_gaia/lifecycle/workflows/4-implementation/add-stories/checklist.md +5 -0
  36. package/_gaia/lifecycle/workflows/4-implementation/add-stories/instructions.xml +73 -1
  37. package/_gaia/lifecycle/workflows/4-implementation/create-stakeholder/checklist.md +25 -0
  38. package/_gaia/lifecycle/workflows/4-implementation/create-stakeholder/instructions.xml +79 -0
  39. package/_gaia/lifecycle/workflows/4-implementation/create-stakeholder/workflow.yaml +22 -0
  40. package/_gaia/lifecycle/workflows/4-implementation/create-story/instructions.xml +11 -1
  41. package/_gaia/lifecycle/workflows/4-implementation/retrospective/instructions.xml +21 -1
  42. package/_gaia/lifecycle/workflows/4-implementation/retrospective/workflow.yaml +1 -1
  43. package/_gaia/lifecycle/workflows/4-implementation/validate-story/instructions.xml +11 -0
  44. package/_gaia/lifecycle/workflows/anytime/brownfield-onboarding/checklist.md +12 -0
  45. package/_gaia/lifecycle/workflows/anytime/brownfield-onboarding/instructions.xml +248 -4
  46. package/_gaia/lifecycle/workflows/anytime/brownfield-onboarding/workflow.yaml +1 -0
  47. package/_gaia/testing/workflows/test-gap-analysis/checklist.md +8 -0
  48. package/_gaia/testing/workflows/test-gap-analysis/instructions.xml +53 -0
  49. package/_gaia/testing/workflows/test-gap-analysis/workflow.yaml +38 -0
  50. package/bin/gaia-framework.js +44 -8
  51. package/bin/helpers/derive-bump-label.js +41 -0
  52. package/bin/helpers/validate-bump-labels.js +38 -0
  53. package/gaia-install.sh +96 -21
  54. package/package.json +1 -1
  55. package/_gaia/_memory/tier2-results/.gitkeep +0 -0
  56. package/_gaia/_memory/tier2-results/checkpoint-resume-2026-03-24.yaml +0 -6
  57. package/_gaia/_memory/tier2-results/engine-scenarios-2026-03-22.yaml +0 -14
@@ -88,9 +88,81 @@
  If new epic: include epic header with name, description, goal, success criteria before its stories.
  Prepend a Change Log entry if not already present.
  </template-output>
+ <action>Recount epic overview table story counts and update in-place:
+ 1. Parse all `### Story (E\d+)-S\d+:` headers in the saved epics-and-stories.md file
+ 2. Group and count story headers per epic key (E1, E2, etc.)
+ 3. Locate the Epic Overview table in epics-and-stories.md (the markdown table with columns: epic ID, name, goal, count, priority)
+ 4. For each epic row in the table, update the count column to match the actual count of story headers for that epic
+ 5. If a new epic was created in Step 4 and does not yet have a row in the Epic Overview table, insert a new row with the correct initial story count
+ 6. If an existing epic row in the overview table has zero matching story headers after recount, preserve the row as-is (do not delete it) — log a warning: "Epic {EN} has 0 story headers but overview row preserved"
+ 7. Table format: `| E{N} | {name} | {goal} | {count} | {priority} |` — only the `{count}` column cell is updated; do not modify other columns
+ 8. Save the updated epics-and-stories.md file</action>
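The recount logic described in this hunk can be sketched as follows (JavaScript, matching the header and table formats shown; `recountEpicTable` is an illustrative helper, not a function shipped by the package):

```javascript
// Recount `### Story E{N}-S{M}:` headers and rewrite each overview row's
// count cell; rows with zero matching headers are preserved unchanged (rule 6).
function recountEpicTable(markdown) {
  const counts = {};
  for (const m of markdown.matchAll(/^### Story (E\d+)-S\d+:/gm)) {
    counts[m[1]] = (counts[m[1]] || 0) + 1;
  }
  return markdown.replace(
    /^\| (E\d+) \| ([^|]*) \| ([^|]*) \| \d+ \| ([^|]*) \|$/gm,
    (row, epic, name, goal, priority) =>
      epic in counts
        ? `| ${epic} | ${name} | ${goal} | ${counts[epic]} | ${priority} |`
        : row // zero story headers: keep the row as-is
  );
}
```

Only the count column changes; name, goal, and priority cells pass through untouched.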
  </step>
 
- <step n="8" title="Next Steps">
+ <step n="8" title="Inline Validation">
+ <action>Check Val prerequisites: verify {project-root}/_gaia/lifecycle/agents/validator.md exists AND {project-root}/_memory/validator-sidecar/ directory exists. If either prerequisite is missing, log warning "Val prerequisites unavailable — skipping inline validation. Stories will be marked as validating." Set val_available = false.</action>
+
+ <action>For each newly created story from Step 5 (batch iteration — each story validated independently):
+ Set story_validation_status = "pending"
+ Set validation_attempt = 0
+
+ If val_available == false:
+ Set story_validation_status = "degraded"
+ Mark story status as 'validating' in the story entry
+ Log: "Story {story_key}: Val unavailable — marked as validating"
+ Continue to next story
+ End If
+
+ VALIDATION LOOP (maximum 3 attempts):
+ Increment validation_attempt
+
+ Invoke Val validation directly within workflow context (no subagent nesting — works at both standalone level 1 and subagent level 2 from add-feature, never exceeding 2-level nesting max):
+ <invoke-workflow
+ target="{project-root}/_gaia/lifecycle/workflows/4-implementation/val-validate-artifact/workflow.yaml"
+ input_artifact="{planning_artifacts}/epics-and-stories.md"
+ story_scope="{story_key}"
+ on_error="warn_and_continue" />
+
+ If Val invocation fails (timeout, context overflow, crash, or missing prerequisites):
+ Log warning: "Val validation failed for {story_key}: {reason}"
+ Set story_validation_status = "degraded"
+ Mark story status as 'validating'
+ Break loop — continue to next story
+ End If
+
+ Separate findings by severity:
+ - CRITICAL and WARNING findings → actionable_findings (trigger fix loop)
+ - INFO findings → info_findings (do not trigger fix loop, non-blocking — log only)
+
+ Log all INFO findings to output (INFO does not block progression and does not trigger re-validation)
+
+ If actionable_findings is empty:
+ Set story_validation_status = "validated"
+ Note story as "validated" in output validation summary
+ Break loop — continue to next story
+ End If
+
+ If validation_attempt less than 3:
+ Auto-fix each CRITICAL and WARNING finding in the story content
+ Re-invoke Val validation (loop continues)
+ Else (validation_attempt == 3, maximum reached):
+ Set story_validation_status = "failed"
+ Mark story status as 'validating' (not 'ready-for-dev')
+ Log remaining unresolved findings
+ Break loop — continue to next story
+ End If
+ END VALIDATION LOOP
+ End For Each
+
+ Report batch validation summary:
+ - Stories validated successfully: {count} ({list})
+ - Stories marked validating (failed after 3 attempts): {count} ({list})
+ - Stories marked validating (Val unavailable/degraded): {count} ({list})
+ - INFO findings logged: {count}
+ </action>
+ </step>
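The three-attempt state machine this step adds can be sketched as a single function (JavaScript; `validate` and `autoFix` are hypothetical callbacks standing in for the Val invocation and the auto-fix pass, not real package APIs):

```javascript
// Returns the final story_validation_status for one story:
// "degraded", "validated", or "failed" after at most 3 attempts.
function runValidationLoop(storyKey, valAvailable, validate, autoFix) {
  if (!valAvailable) return "degraded"; // Val missing: story stays 'validating'
  for (let attempt = 1; attempt <= 3; attempt++) {
    let findings;
    try {
      findings = validate(storyKey); // the Val invocation
    } catch (err) {
      return "degraded"; // timeout/crash: warn and move to the next story
    }
    // INFO findings are logged only; CRITICAL/WARNING drive the fix loop
    const actionable = findings.filter(
      (f) => f.severity === "CRITICAL" || f.severity === "WARNING"
    );
    if (actionable.length === 0) return "validated";
    if (attempt < 3) autoFix(storyKey, actionable); // then re-validate
  }
  return "failed"; // 3 attempts exhausted: story stays 'validating'
}
```

Each story runs the loop independently, so one degraded or failed story never blocks the rest of the batch.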
+
+ <step n="9" title="Next Steps">
  <action>Report summary: new epic(s) created (if any), stories added with IDs and epic assignments</action>
  <action>For each new story: "Run /gaia-create-story {story_key} to elaborate before development"</action>
  <action>If stories should enter current sprint: "Run /gaia-correct-course to inject into sprint"</action>
@@ -0,0 +1,25 @@
+ ---
+ title: 'Create Stakeholder Validation'
+ validation-target: 'Stakeholder file'
+ ---
+ ## Structure
+ - [ ] workflow.yaml present with agent: orchestrator
+ - [ ] instructions.xml present with sequential steps
+ - [ ] checklist.md present
+ - [ ] Registered in workflow-manifest.csv
+ - [ ] Slash command file exists at .claude/commands/gaia-create-stakeholder.md
+ ## Input Collection
+ - [ ] Required fields prompted: name, role, expertise, personality
+ - [ ] Optional fields prompted: perspective, tags
+ - [ ] Required field validation (non-empty check)
+ ## Validation Guards
+ - [ ] 50-file cap enforced before file creation
+ - [ ] Case-insensitive duplicate name detection against existing stakeholder name frontmatter
+ - [ ] custom/stakeholders/ directory auto-created if missing
+ ## Output
+ - [ ] Filename is kebab-case slug of name with .md extension
+ - [ ] File written to custom/stakeholders/{slug}.md
+ - [ ] YAML frontmatter includes all required fields
+ - [ ] Optional fields included only when provided
+ - [ ] Markdown body with ## Background section
+ - [ ] File does not exceed 100 lines
@@ -0,0 +1,79 @@
+ <workflow name="create-stakeholder">
+ <critical>
+ <mandate>Stakeholder files are written to custom/stakeholders/ — never to _gaia/</mandate>
+ <mandate>The 50-file cap and 100-line limit are hard gates (FR-164)</mandate>
+ <mandate>Duplicate name detection is case-insensitive against the name frontmatter field (FR-157)</mandate>
+ </critical>
+
+ <step n="1" title="Ensure Directory Exists">
+ <action>Check if {project-root}/custom/stakeholders/ directory exists</action>
+ <action if="directory does not exist">Create {project-root}/custom/stakeholders/ directory (and {project-root}/custom/ if needed)</action>
+ <action>Confirm directory is ready for writing</action>
+ </step>
+
+ <step n="2" title="Collect Required Inputs">
+ <ask>Provide the following required fields for the new stakeholder:
+
+ **Name** (display name, e.g., "Maria Santos"):
+ **Role** (title/function, e.g., "Housekeeper Manager"):
+ **Expertise** (domain skills, e.g., "Room turnover logistics"):
+ **Personality** (traits, e.g., "Pragmatic, detail-oriented"):
+ </ask>
+ <check if="any required field is empty">HALT: All four fields (name, role, expertise, personality) are required. Please provide all values.</check>
+ </step>
+
+ <step n="3" title="Collect Optional Inputs">
+ <ask>Optionally provide these additional fields (press Enter to skip):
+
+ **Perspective** (viewpoint/biases, e.g., "Focuses on operational efficiency"):
+ **Tags** (comma-separated, e.g., "operations, hospitality"):
+ </ask>
+ </step>
+
+ <step n="4" title="Validate Against Cap and Duplicates">
+ <action>Count existing .md files in {project-root}/custom/stakeholders/ directory</action>
+ <check if="count >= 50">HALT: The 50-file cap has been reached in custom/stakeholders/ (FR-164). There are already {count} stakeholder files. Remove unused stakeholders before creating new ones.</check>
+ <action>Scan all existing stakeholder files in custom/stakeholders/*.md — read the name field from each file's YAML frontmatter</action>
+ <action>Compare each existing name against the new stakeholder name using case-insensitive comparison</action>
+ <check if="duplicate name found (case-insensitive match)">HALT: A stakeholder with the name "{existing_name}" already exists at custom/stakeholders/{existing_file}. Name collision detected (case-insensitive). Choose a different name.</check>
+ </step>
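The two guards in this step reduce to a small check (JavaScript sketch; `checkGuards` is illustrative and operates on `name` frontmatter values already read from custom/stakeholders/*.md):

```javascript
// FR-164 cap (50 files) and FR-157 case-insensitive duplicate detection.
function checkGuards(existingNames, newName) {
  if (existingNames.length >= 50) {
    return { halt: true, reason: "50-file cap reached (FR-164)" };
  }
  const dup = existingNames.find(
    (n) => n.toLowerCase() === newName.toLowerCase()
  );
  if (dup) {
    return { halt: true, reason: `duplicate of "${dup}" (FR-157)` };
  }
  return { halt: false };
}
```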
+
+ <step n="5" title="Generate Filename Slug">
+ <action>Convert the stakeholder name to a kebab-case slug:
+ 1. Convert to lowercase
+ 2. Replace spaces with hyphens
+ 3. Strip all characters that are not alphanumeric or hyphens
+ 4. Collapse multiple consecutive hyphens into a single hyphen
+ 5. Trim leading/trailing hyphens
+ 6. Append .md extension
+ </action>
+ <action>Example: "Maria Santos" → "maria-santos.md", "Jean-Pierre O'Brien III" → "jean-pierre-obrien-iii.md"</action>
+ <action>Set output path: {project-root}/custom/stakeholders/{slug}.md</action>
+ <check if="file already exists at output path">HALT: File custom/stakeholders/{slug}.md already exists. This may indicate a slug collision from a different display name. Choose a different name or remove the existing file.</check>
+ </step>
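The six slug rules map directly onto a chain of string operations (JavaScript sketch; the helper name is illustrative):

```javascript
// Steps 1-5 of the slug conversion; the caller appends the .md extension.
function toSlug(name) {
  return name
    .toLowerCase()              // 1. lowercase
    .replace(/\s+/g, "-")       // 2. spaces → hyphens
    .replace(/[^a-z0-9-]/g, "") // 3. strip other characters
    .replace(/-{2,}/g, "-")     // 4. collapse consecutive hyphens
    .replace(/^-+|-+$/g, "");   // 5. trim leading/trailing hyphens
}
```

Both worked examples from the step follow: "Maria Santos" yields "maria-santos" and "Jean-Pierre O'Brien III" yields "jean-pierre-obrien-iii".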
+
+ <step n="6" title="Generate and Write Stakeholder File">
+ <action>Generate the stakeholder file with YAML frontmatter and Markdown body:
+
+ ```
+ ---
+ name: "{name}"
+ role: "{role}"
+ expertise: "{expertise}"
+ personality: "{personality}"
+ perspective: "{perspective}" # Only include if provided
+ tags: [{tags_as_yaml_array}] # Only include if provided
+ ---
+
+ ## Background
+
+ {A 2-3 sentence description synthesized from the provided fields, describing this stakeholder's viewpoint and discussion style.}
+ ```
+ </action>
+ <action>Verify the generated file does not exceed 100 lines. If it does, trim the Background section to fit within the limit.</action>
+ <action>Write the file to {project-root}/custom/stakeholders/{slug}.md</action>
+ <template-output file="{project-root}/custom/stakeholders/{slug}.md">
+ Stakeholder file with YAML frontmatter (name, role, expertise, personality, and optionally perspective and tags) plus a Markdown Background section.
+ </template-output>
+ </step>
+ </workflow>
@@ -0,0 +1,22 @@
+ name: create-stakeholder
+ description: 'Scaffold a new stakeholder file for Party Mode discussions'
+ module: lifecycle
+ agent: orchestrator
+ config_resolved: "{installed_path}/.resolved/create-stakeholder.yaml"
+ config_source: "{project-root}/_gaia/lifecycle/config.yaml"
+ installed_path: "{project-root}/_gaia/lifecycle/workflows/4-implementation/create-stakeholder"
+ instructions: "{installed_path}/instructions.xml"
+ validation: "{installed_path}/checklist.md"
+ quality_gates:
+ pre_start: []
+ post_complete:
+ - check: "stakeholder_file_line_count_valid == true"
+ on_fail: "HALT: Stakeholder file exceeds 100-line limit (FR-164). Edit {project-root}/custom/stakeholders/{slug}.md to reduce content or re-run /gaia-create-stakeholder."
+ - check: "stakeholder_count_in_directory_valid == true"
+ on_fail: "HALT: custom/stakeholders/ has more than 50 files (FR-164). Remove unused stakeholders from {project-root}/custom/stakeholders/ first."
+ on_error:
+ missing_file: "ask_user"
+ unresolved_variable: "halt"
+
+ output:
+ primary: "{project-root}/custom/stakeholders/{slug}.md"
@@ -92,8 +92,16 @@
  <action>Append DoD checklist: all AC met, tests pass, code reviewed, docs updated</action>
  </step>
  <step n="6" title="Generate Output">
- <action>Read the story template from {project-root}/_gaia/lifecycle/templates/story-template.md</action>
+ <action>Read the story template from the engine-resolved template path. The engine resolves this in Step 1 (Load and Resolve Config): if {project-root}/custom/templates/story-template.md exists and is non-empty, the custom template is used; otherwise it falls back to {project-root}/_gaia/lifecycle/templates/story-template.md. Use whichever path the engine resolved.</action>
  <action>Read sizing_map from {project-root}/_gaia/_config/global.yaml to resolve T-shirt size to story points (S→2, M→5, L→8, XL→13).</action>
+ <action>Detect invocation context to determine origin fields:
+ If invoked from problem-solving routing (E16-S3): set origin="problem-solving" and origin_ref to the path of the Problem Brief or problem-solving checkpoint artifact (e.g., docs/creative-artifacts/problem-solving-YYYY-MM-DD.md).
+ If invoked from triage routing: set origin="triage" and origin_ref to the triage artifact path.
+ If invoked from add-feature routing: set origin="add-feature" and origin_ref to the source artifact path.
+ If invoked from sprint-planning: set origin="sprint-planning" and origin_ref to the sprint plan artifact path.
+ If invoked with explicit origin parameters from caller: use the provided origin and origin_ref values.
+ If invoked normally (no routing context): set origin=null and origin_ref=null (planned work default).
+ </action>
  <action>Populate ALL YAML frontmatter fields from epics-and-stories.md data:
  - key: story key from epics (e.g., E1-S1)
  - title: story title
@@ -110,6 +118,8 @@
  - date: current date
  - author: agent name (e.g., "Nate (Scrum Master)")
  - priority_flag: null (default — set to "next-sprint" by add-feature for high-urgency stories)
+ - origin: workflow origin (null for planned work, "problem-solving" from problem-solving routing, "triage" from triage, "add-feature" from add-feature routing, "sprint-planning" from sprint planning, "manual" for explicit manual creation)
+ - origin_ref: path to source artifact that triggered story creation (null when origin is null)
  </action>
  <template-output file="{implementation_artifacts}/{story_key}-{story_title_slug}.md">
  Generate the story file following the story-template.md structure. The filename must use the story key and slugified title (e.g., E1-S1-user-login.md). Include complete YAML frontmatter with ALL 15 fields populated. Fill all template sections: User Story, Acceptance Criteria, Tasks/Subtasks (linked to AC numbers), Dev Notes, Technical Notes, Dependencies, Test Scenarios, Project Structure Notes, References, Dev Agent Record, and Estimate. IMPORTANT: The body "**Status:**" line MUST match the frontmatter status field exactly. Both must say the same status value.
@@ -96,7 +96,27 @@
  <action>For each related skill, propose a concrete addition or modification: what section should change, what content to add, and why (link back to the retro finding)</action>
  <action if="yolo_mode">In YOLO mode: auto-approve all recommended skill improvements. Skip the user prompt below.</action>
  <ask>Here are the proposed skill improvements based on this sprint's findings. Approve, modify, or skip each. [approve all / select / skip]</ask>
- <action>If approved: append to the relevant skill file with comment: "<!-- Added from retro-{sprint_id}: {reason} -->"</action>
+ <action>If approved: write the skill improvement to {project-root}/custom/skills/{skill-name}.md (NOT _gaia/dev/skills/).
+ Follow this sequence for each approved skill improvement:
+
+ 1. Ensure directory exists: create {project-root}/custom/skills/ if it does not already exist (mkdir -p equivalent).
+
+ 2. Base-skill copy guard: check if {project-root}/custom/skills/{skill-name}.md already exists.
+ - If the custom skill file does NOT exist: copy the base skill from _gaia/dev/skills/{skill-name}.md (resolved at {project-root}) to {project-root}/custom/skills/{skill-name}.md, preserving all &lt;!-- SECTION: xxx --&gt; markers intact. This ensures the engine's sectioned loading continues to work for the custom copy.
+ - If the base skill does NOT exist at _gaia/dev/skills/{skill-name}.md (e.g., removed in a framework update): log a warning ("Base skill {skill-name} not found at _gaia/dev/skills/ — creating custom skill from scratch") and write the improvement content directly to custom/skills/{skill-name}.md without a base copy.
+ - If the custom skill file already exists: preserve existing content and apply the improvement on top.
+
+ 3. Apply the improvement: append to or modify the relevant section in custom/skills/{skill-name}.md with comment "<!-- Added from retro-{sprint_id}: {reason} -->".
+
+ 4. Register in .customize.yaml: after writing the custom skill file, register it in {project-root}/custom/skills/all-dev.customize.yaml so the engine loads from the custom path on subsequent runs (ADR-020 — customization registries live alongside custom skills in custom/skills/).
+ - If custom/skills/all-dev.customize.yaml does not exist: create it with proper YAML structure:
+ ```yaml
+ skill_overrides:
+ {skill-name}:
+ source: "custom/skills/{skill-name}.md"
+ ```
+ - If custom/skills/all-dev.customize.yaml already exists: read current content, check if a skill_overrides entry for this skill already exists. If it does not exist, append the new entry under skill_overrides. Preserve all existing entries — only add the new one. If it already exists (duplicate), skip registration to prevent duplicate entries.
+ </action>
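The registration in step 4 is an idempotent merge, sketched here over the parsed YAML as a plain object (JavaScript; the `skill_overrides` shape follows the structure shown above, and the function name is illustrative):

```javascript
// Create or extend the registry; an existing entry is left untouched,
// so re-running a retro never produces duplicate registrations.
function registerOverride(registry, skillName) {
  const reg = registry || {};
  reg.skill_overrides = reg.skill_overrides || {};
  if (!(skillName in reg.skill_overrides)) {
    reg.skill_overrides[skillName] = {
      source: `custom/skills/${skillName}.md`,
    };
  }
  return reg;
}
```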
  <action>If no skill improvements identified, state: "No skill improvements identified this sprint."</action>
  </step>
  <step n="7" title="Cross-Retro Pattern Detection">
@@ -28,4 +28,4 @@ output:
  sidecar_updates:
  - "{memory_path}/*-sidecar/*.md"
  skill_updates:
- - "{project-root}/_gaia/dev/skills/*.md"
+ - "{project-root}/custom/skills/*.md"
@@ -45,6 +45,17 @@
  (file paths, component references, API endpoints, dependency versions).
  Verify against filesystem and ground truth (if available).
  Classify findings as CRITICAL (broken reference), WARNING (outdated), INFO (style).
+
+ (g) Origin Field Validation (optional fields — backward compatible):
+ The origin and origin_ref fields are OPTIONAL — stories without these fields
+ are valid (backward compatibility). Missing origin/origin_ref fields do NOT
+ cause errors and are accepted without warnings.
+ If the origin field IS present, validate:
+ - origin must be one of: "manual", "problem-solving", "triage", "add-feature",
+ "sprint-planning", or null. An invalid origin enum value is a CRITICAL finding.
+ - If origin is non-null, origin_ref must be non-empty (not null, not empty string).
+ A non-null origin with empty or null origin_ref is a WARNING finding.
+ If origin is null or absent, origin_ref is not validated (orphaned refs are acceptable).
  </action>
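The rules in (g) condense to a small check (JavaScript sketch over parsed story frontmatter; `validateOrigin` is illustrative, not a shipped function):

```javascript
const ORIGIN_ENUM = ["manual", "problem-solving", "triage", "add-feature", "sprint-planning", null];

// Absent fields are valid (backward compatible); returns severity findings.
function validateOrigin(frontmatter) {
  if (!("origin" in frontmatter)) return [];
  const { origin, origin_ref } = frontmatter;
  if (!ORIGIN_ENUM.includes(origin)) {
    return [{ severity: "CRITICAL", message: `invalid origin value: ${origin}` }];
  }
  if (origin !== null && !origin_ref) {
    return [{ severity: "WARNING", message: "non-null origin with empty origin_ref" }];
  }
  return []; // null origin: origin_ref is not validated
}
```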
  </step>
  <step n="3" title="Validation Fix Loop">
@@ -18,6 +18,18 @@ validation-target: 'Brownfield onboarding output'
  - [ ] Event catalog subagent completed (if has_events) — {planning_artifacts}/event-catalog.md exists
  - [ ] All API docs use Swagger/OpenAPI format
  - [ ] All diagrams use Mermaid syntax
+ ## Step 2.5: Deep Analysis Subagents
+ - [ ] Config contradiction scanner subagent completed — {planning_artifacts}/brownfield-scan-config-contradiction.md exists
+ - [ ] Scanner detected all supported config file types (.yaml, .yml, .json, .env, .toml, .ini, .properties, .xml)
+ - [ ] Stack-aware patterns applied based on {tech_stack}
+ - [ ] Gap entries conform to standardized schema (gap-entry-schema.md)
+ - [ ] Budget control enforced — max ~70 gap entries per scanner
+ ## Step 2.75: Test Execution During Discovery
+ - [ ] Test execution subagent completed — {planning_artifacts}/brownfield-scan-test-execution.md exists (or warning logged if no test suite)
+ - [ ] Test runner auto-detection ran for supported runners (npm, pytest, Maven, Gradle, Go, Flutter)
+ - [ ] Test failures converted to gap entries conforming to E11-S1 schema
+ - [ ] Infrastructure errors distinguished from test failures
+ - [ ] Test execution non-blocking — failures did not halt workflow
  ## Step 3: NFR Assessment & Performance Test Plan
  - [ ] NFR assessment subagent completed — {test_artifacts}/nfr-assessment.md exists
  - [ ] NFR baseline summary table has real measured values (not placeholders)
@@ -17,9 +17,23 @@
  <action>Frontend Detection: scan for React, Angular, Vue, Flutter, SwiftUI, UI frameworks, CSS/styling. Set {has_frontend} flag (true/false)</action>
  <action>Testing infrastructure: identify test framework, coverage config, test count, test patterns</action>
  <action>CI/CD: identify GitHub Actions, Jenkins, Docker, Terraform, or other pipeline files</action>
- <action>Generate brownfield assessment: read {installed_path}/../../templates/brownfield-assessment-template.md capture component inventory, technical debt, migration constraints, coexistence strategy, and adoption path. Output to {planning_artifacts}/brownfield-assessment.md</action>
+ <action>Project Type Detection (ADR-022 Template Discriminator Pattern): scan {project-path} for infrastructure file patterns across 6 marker categories to determine {project_type}. Each category is independently detected:
+ — Terraform: *.tf, *.tfvars
+ — Docker: Dockerfile, docker-compose.yml
+ — Helm: helm/, Chart.yaml, values.yaml
+ — Kubernetes: k8s/, kustomization.yaml
+ — Pulumi: Pulumi.yaml
+ — CloudFormation: cloudformation*.yaml
+ If ANY infrastructure marker is detected, set {has_infra} = true.</action>
+ <action>Application code detection: scan for framework imports (Express, Spring Boot, Django, FastAPI, Angular, React, Next.js, NestJS, Flask, Gin, Fiber) in source files (*.ts, *.js, *.java, *.py, *.go, *.dart). If framework imports found, set {has_app_code} = true.</action>
+ <action>Classification decision tree — set {project_type}:
+ — If {has_infra} = true AND {has_app_code} = true → {project_type} = platform (infrastructure and application code both present)
+ — If {has_infra} = true AND {has_app_code} = false → {project_type} = infrastructure
+ — If {has_infra} = false → {project_type} = application (default — standard software project)
+ The {project_type} variable is set here in Step 1 so it is available before E11 scanners execute in Step 2.5.</action>
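The decision tree condenses to a single function (JavaScript sketch; the marker patterns below abbreviate the six categories listed in the step and are not the package's actual detection code):

```javascript
// A filename counts as an infra marker if it matches any category.
const INFRA_MARKERS = [
  /\.tf(vars)?$/,                                // Terraform
  /^(Dockerfile|docker-compose\.yml)$/,          // Docker
  /^(Chart|values|kustomization|Pulumi)\.yaml$/, // Helm / Kubernetes / Pulumi
  /^cloudformation.*\.ya?ml$/,                   // CloudFormation
];

function classifyProjectType(filenames, hasAppCode) {
  const hasInfra = filenames.some((f) =>
    INFRA_MARKERS.some((re) => re.test(f))
  );
  if (hasInfra && hasAppCode) return "platform";
  if (hasInfra) return "infrastructure";
  return "application"; // default: standard software project
}
```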
+ <action>Generate brownfield assessment: read {installed_path}/../../templates/brownfield-assessment-template.md — capture component inventory, technical debt, migration constraints, coexistence strategy, and adoption path. Include {project_type} in the assessment output. Output to {planning_artifacts}/brownfield-assessment.md</action>
  <template-output file="{planning_artifacts}/project-documentation.md">
- Generate enhanced project documentation following the project documentation conventions. Include all standard sections plus: detected capability flags (has_apis, has_events, has_external_deps, has_frontend), testing infrastructure summary, and CI/CD pipeline summary.
+ Generate enhanced project documentation following the project documentation conventions. Include all standard sections plus: detected capability flags (has_apis, has_events, has_external_deps, has_frontend), {project_type}, testing infrastructure summary, and CI/CD pipeline summary.
  </template-output>
  </step>
 
@@ -36,14 +50,158 @@
36
50
  </template-output>
37
51
  </step>
38
52
 
53
+ <step n="2.5" title="Deep Analysis Subagents (Infra-Aware)">
54
+ <action>Spawn deep analysis scan subagents in parallel using the Agent tool with multiple calls in a single message. These run alongside the Step 2 documentation subagents to detect gaps that structural analysis misses. Each scan subagent receives {tech_stack} (from Step 1), {project-path}, and {project_type} as context variables. When {project_type} is `infrastructure` or `platform`, infra-specific detection patterns are applied alongside application patterns. When {project_type} is `application`, only application patterns are applied.</action>
55
+
56
+ <action>Spawn subagent — Config Contradiction Scanner (infra-aware): "Read the config contradiction scan prompt template at {project-root}/_gaia/lifecycle/knowledge/brownfield/config-contradiction-scan.md. Follow the prompt instructions EXACTLY. Use tech stack: {tech_stack}. Project type: {project_type}. Scan the project at {project-path}. Reference the gap entry schema at {project-root}/_gaia/lifecycle/templates/gap-entry-schema.md for output formatting. If project_type is infrastructure or platform: also apply infra-specific config contradiction patterns for terraform.tfvars, values.yaml, and kustomize overlays as specified in the prompt template. Output to {planning_artifacts}/brownfield-scan-config-contradiction.md"</action>
57
+
58
+ <action>Spawn subagent — Dead Code &amp; Dead State Scanner: "Read the dead code scan prompt template at {project-root}/_gaia/lifecycle/knowledge/brownfield/dead-code-scan.md. Follow the prompt instructions EXACTLY. Use tech stack: {tech_stack}. Project type: {project_type}. Scan the project at {project-path}. Reference the gap entry schema at {project-root}/_gaia/lifecycle/templates/gap-entry-schema.md for output formatting. Output to {planning_artifacts}/brownfield-scan-dead-code.md"</action>
59
+
60
+ <action>Spawn subagent — Hard-Coded Logic Detector (infra-aware): "Read the hard-coded logic scan prompt template at {project-root}/_gaia/lifecycle/templates/brownfield-scan-hardcoded-prompt.md. Follow the prompt instructions EXACTLY. Use tech stack: {tech_stack}. Project type: {project_type}. Scan the project at {project-path}. Reference the gap entry schema at {project-root}/_gaia/lifecycle/templates/gap-entry-schema.md for output formatting. If project_type is infrastructure or platform: also detect hard-coded IPs, magic ports, embedded secrets/AMI IDs, and hard-coded resource limits in IaC files as specified in the prompt template. Output to {planning_artifacts}/brownfield-scan-hardcoded.md"</action>
+
+ <action>Spawn subagent — Security Endpoint Audit (infra-aware): "Read the security scan prompt template at {project-root}/_gaia/lifecycle/templates/brownfield-scan-security-prompt.md. Follow the prompt instructions EXACTLY. Use tech stack: {tech_stack}. Project type: {project_type}. Scan the project at {project-path}. Reference the gap entry schema at {project-root}/_gaia/lifecycle/templates/gap-entry-schema.md for output formatting. If project_type is infrastructure or platform: also detect exposed ports in k8s manifests, permissive ingress rules, overly broad RBAC bindings, and missing NetworkPolicy as specified in the prompt template. Output to {planning_artifacts}/brownfield-scan-security.md"</action>
+
+ <action>Spawn subagent — Runtime Behavior Inventory (infra-aware): "Read the runtime behavior scan prompt template at {project-root}/_gaia/lifecycle/templates/brownfield-scan-runtime-behavior-prompt.md. Follow the prompt instructions EXACTLY. Use tech stack: {tech_stack}. Project type: {project_type}. Scan the project at {project-path}. Reference the gap entry schema at {project-root}/_gaia/lifecycle/templates/gap-entry-schema.md for output formatting. If project_type is infrastructure or platform: also catalog CronJobs, DaemonSets, init containers, sidecar patterns, and health probes as specified in the prompt template. Output to {planning_artifacts}/brownfield-scan-runtime-behavior.md"</action>
+
+ <action>Spawn subagent — Documentation-Code Mismatch Scanner: "Read the doc-vs-code scan prompt template at {project-root}/_gaia/lifecycle/templates/brownfield-scan-doc-code-prompt.md. Follow the prompt instructions EXACTLY. Use tech stack: {tech_stack}. Project type: {project_type}. Scan the project at {project-path}. Reference the gap entry schema at {project-root}/_gaia/lifecycle/templates/gap-entry-schema.md for output formatting. Output to {planning_artifacts}/brownfield-scan-doc-code.md"</action>
+
+ <action>Spawn subagent — Integration Seam Analyzer (infra-aware): "Read the integration seam scan prompt template at {project-root}/_gaia/lifecycle/templates/brownfield-scan-integration-seam-prompt.md. Follow the prompt instructions EXACTLY. Use tech stack: {tech_stack}. Project type: {project_type}. Scan the project at {project-path}. Reference the gap entry schema at {project-root}/_gaia/lifecycle/templates/gap-entry-schema.md for output formatting. If project_type is infrastructure or platform: also map service mesh topology, ingress/egress routes, and cross-namespace dependencies as specified in the prompt template. Output to {planning_artifacts}/brownfield-scan-integration-seam.md"</action>
+
+ <action>Wait for all deep analysis subagents to complete. Verify output files exist at {planning_artifacts}/: brownfield-scan-config-contradiction.md, brownfield-scan-dead-code.md, brownfield-scan-hardcoded.md, brownfield-scan-security.md, brownfield-scan-runtime-behavior.md, brownfield-scan-doc-code.md, and brownfield-scan-integration-seam.md.</action>
+ <action>If any subagent failed to write its output file: log a warning and continue — individual scan failures should not block the overall brownfield workflow.</action>
+
+ <!-- Gap-to-Requirement Mapping Reference (E12-S6, ADR-022 §10.16.5)
+ When {project_type} is infrastructure or platform, infra gap categories
+ map to infra PRD sections as follows:
+
+ | Gap Category     | Infra PRD Section         |
+ |------------------|---------------------------|
+ | resource-drift   | Resource Specifications   |
+ | config-sprawl    | Environment Strategy & DX |
+ | secret-exposure  | Security Posture          |
+ | missing-policy   | Verification Strategy     |
+ | environment-skew | Environment Strategy & DX |
+
+ Application gap categories map to standard PRD sections:
+ | Gap Category   | PRD Section               |
+ |----------------|---------------------------|
+ | functional     | Resource Specifications   |
+ | behavioral     | Operational SLOs          |
+ | security       | Security Posture          |
+ | operational    | Operational Runbooks      |
+ | documentation  | Overview & Scope          |
+ | configuration  | Environment Strategy & DX |
+ | data-integrity | Resource Specifications   |
+
+ This mapping is consumed by the gap consolidation step (E11-S10/S12)
+ and the PRD generation step (Step 4) to route gap findings to the
+ correct sections of the generated PRD. -->
+ </step>
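The mapping comment above describes a static lookup. A minimal sketch in Python, assuming gaps are parsed into plain dicts — the table contents come from the comment, while the function name and the fallback section for unknown categories are illustrative assumptions, not framework APIs:

```python
# Gap-category -> PRD-section routing, per the mapping comment above.
INFRA_GAP_TO_PRD = {
    "resource-drift": "Resource Specifications",
    "config-sprawl": "Environment Strategy & DX",
    "secret-exposure": "Security Posture",
    "missing-policy": "Verification Strategy",
    "environment-skew": "Environment Strategy & DX",
}

APP_GAP_TO_PRD = {
    "functional": "Resource Specifications",
    "behavioral": "Operational SLOs",
    "security": "Security Posture",
    "operational": "Operational Runbooks",
    "documentation": "Overview & Scope",
    "configuration": "Environment Strategy & DX",
    "data-integrity": "Resource Specifications",
}

def prd_section(category: str, project_type: str) -> str:
    """Route one gap category to a PRD section. Infra categories apply
    only when project_type is infrastructure or platform."""
    if project_type in ("infrastructure", "platform") and category in INFRA_GAP_TO_PRD:
        return INFRA_GAP_TO_PRD[category]
    # Fallback section for unrecognized categories is an assumption.
    return APP_GAP_TO_PRD.get(category, "Overview & Scope")
```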
+
+ <step n="2.75" title="Test Execution During Discovery">
+ <action>After all Step 2/2.5 parallel scan subagents complete, execute the existing test suite at {project-path} to capture test failures as gap entries. This step is non-blocking — test execution failures must not halt the overall brownfield onboarding workflow.</action>
+ <action>Spawn subagent — Test Execution Scanner: "Read the test execution scan prompt template at {project-root}/_gaia/lifecycle/knowledge/brownfield/test-execution-scan.md. Follow the prompt instructions EXACTLY. Scan the project at {project-path} for test runners. Reference the gap entry schema at {project-root}/_gaia/lifecycle/templates/gap-entry-schema.md for output formatting. Auto-detect test runners (package.json with test script, pytest, Maven, Gradle, Go, Flutter) in priority order. Execute each detected runner with a 5-minute timeout. Parse test output for metrics (total, passing, failing, skipped). Convert failing tests to gap entries with severity mapped by test type (unit=medium, integration=high, e2e=critical). Detect infrastructure errors (ECONNREFUSED, missing env vars) and log as warning gaps instead of test failure gaps. For monorepo/polyglot projects, execute all detected runners sequentially and aggregate results. Truncate output per NFR-024 token budget if needed. If no test suite is detected, log an info-level gap entry GAP-TEST-INFO-001. Output to {planning_artifacts}/brownfield-scan-test-execution.md"</action>
+ <action>When subagent returns: verify output file exists at {planning_artifacts}/brownfield-scan-test-execution.md.</action>
+ <action>If the subagent failed to write its output file: log a warning and continue — test execution scan failures should not block the overall brownfield workflow.</action>
+ </step>
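The failure-to-gap conversion in the prompt above can be sketched as follows. The severity table and the ECONNREFUSED/missing-env-var cases are taken from the step text; the function name, return shape, and the exact string heuristics are hypothetical assumptions for illustration:

```python
# Severity mapping by test type, per the Test Execution Scanner step.
SEVERITY_BY_TEST_TYPE = {"unit": "medium", "integration": "high", "e2e": "critical"}

def classify_failure(test_type: str, output: str) -> dict:
    """Map one failing test to a gap-entry fragment."""
    # Infrastructure errors become warning gaps, not test-failure gaps.
    if "ECONNREFUSED" in output or "missing env" in output.lower():
        return {"kind": "warning", "severity": "low"}
    return {
        "kind": "test-failure",
        "severity": SEVERITY_BY_TEST_TYPE.get(test_type, "medium"),
    }
```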
+
  <step n="3" title="NFR Assessment &amp; Performance Test Plan">
  <action>Spawn a subagent — "Analyze the codebase at {project-path} for non-functional requirements. Read {installed_path}/../../templates/nfr-assessment-template.md for the output format. Assess: code quality (linting, complexity, duplication), security posture (dependency vulnerabilities, secrets handling, auth quality — load security-basics skill section vulnerability-scanning if needed), performance (bundle size if frontend, query patterns, caching, resource management), accessibility (ARIA, semantic HTML, keyboard nav if frontend), test coverage (framework, count, coverage %, untested areas, quality), CI/CD (build pipeline, deploy strategy, environments, IaC). Create NFR Baseline Summary Table with measured values (not placeholders). Output to {test_artifacts}/nfr-assessment.md. Then create a performance test plan: load {project-root}/_gaia/testing/knowledge/performance/k6-patterns.md for load testing patterns. If {has_frontend}: also load {project-root}/_gaia/testing/knowledge/performance/lighthouse-ci.md. Define performance budgets (P50/P95/P99) based on NFR baselines, load test scenarios (gradual, spike, soak), backend profiling targets (slow queries, N+1, connection pools), CI performance gates. If {has_frontend}: define Core Web Vitals targets (LCP under 2.5s, INP under 200ms, CLS under 0.1). Output to {test_artifacts}/performance-test-plan-{date}.md"</action>
  <action>When subagent returns: verify both {test_artifacts}/nfr-assessment.md and {test_artifacts}/performance-test-plan-{date}.md exist. If the subagent failed to write files, the orchestrator MUST write them to {test_artifacts}/ as declared in workflow.yaml output.artifacts — do NOT fall back to {planning_artifacts}/.</action>
  </step>

+ <step n="3.5" title="Gap Consolidation &amp; Deduplication">
+ <action>Spawn a subagent using the Agent tool with this prompt:
+
+ "You are the Gap Consolidation subagent. Your task is to load all scan results, deduplicate them, rank them, and produce a single consolidated output file.
+
+ **Context:**
+ - Planning artifacts directory: {planning_artifacts}/
+ - Test artifacts directory: {test_artifacts}/
+ - Gap schema reference: {installed_path}/../../templates/gap-entry-schema.md
+ - Tech stack: {tech_stack}
+
+ **Step 1 — Load all scan outputs:**
+ Load gap entries from ALL of the following sources. For each file, if it exists, parse YAML gap entries matching the schema from gap-entry-schema.md. If a file is empty or missing, log a warning noting which scanner produced no results and continue processing the remaining files without error.
+
+ Deep analysis scan outputs (7 files — Step 2.5):
+ - {planning_artifacts}/brownfield-scan-config-contradiction.md
+ - {planning_artifacts}/brownfield-scan-dead-code.md
+ - {planning_artifacts}/brownfield-scan-hardcoded.md
+ - {planning_artifacts}/brownfield-scan-security.md
+ - {planning_artifacts}/brownfield-scan-runtime-behavior.md
+ - {planning_artifacts}/brownfield-scan-doc-code.md
+ - {planning_artifacts}/brownfield-scan-integration-seam.md
+
+ Test execution scan output (1 file — Step 2.75):
+ - {planning_artifacts}/brownfield-scan-test-execution.md (failing tests as gap entries)
+
+ Step 2 documentation subagent outputs (4 files):
+ - {planning_artifacts}/api-documentation.md (API gaps)
+ - {planning_artifacts}/event-catalog.md (event/messaging gaps)
+ - {planning_artifacts}/ux-design.md (frontend/UX gaps)
+ - {planning_artifacts}/dependency-map.md (dependency gaps)
+
+ Step 3 NFR assessment:
+ - {test_artifacts}/nfr-assessment.md (NFR gap findings)
+
+ **Step 2 — Validate entries against schema:**
+ For each parsed gap entry, validate that all required fields are present: id, category, severity, title, description (or evidence), evidence_file, evidence_line, recommendation. Entries missing any required field are logged as warnings (noting the source file and which field is missing) and skipped from consolidation rather than causing a failure.
+
+ **Step 3 — Deduplicate:**
+ Group gap entries by evidence_file + evidence_line (exact match on both fields). For each group of duplicates:
+ a. Retain the entry with the highest severity (critical > high > medium > low)
+ b. Merge recommendations from all duplicate entries into the retained entry
+ c. Add a merged_from field listing all original gap IDs that were merged
+ d. If duplicates have different categories, retain the primary category from the highest-severity entry and note the alternate category in the description
+
+ **Step 4 — Rank:**
+ Sort the deduplicated gaps by:
+ 1. severity DESC (critical first, then high, medium, low)
+ 2. confidence DESC (high first, then medium, low)
+ 3. category alphabetical within each severity+confidence tier
+ Assign final sequential numbering to the ranked list.
+
+ **Step 5 — Budget check:**
+ Estimate the token count of the output (~100 tokens per gap entry). If the estimated total exceeds the 40K token budget, truncate low-severity and info entries with a count summary: 'N additional low/info gaps omitted for budget'. Ensure the output stays within budget.
+
+ **Step 6 — Generate consolidated output:**
+ Write consolidated-gaps.md to {planning_artifacts}/ using the consolidated output format from gap-entry-schema.md. Include summary statistics at the top:
+ - Total raw gaps (pre-dedup count)
+ - Duplicates removed
+ - Final unique count
+ - Breakdown by category (all categories found)
+ - Breakdown by severity (critical, high, medium, low)
+ - Per-scanner source counts
+
+ Output to {planning_artifacts}/consolidated-gaps.md"
+ </action>
+ <action>When subagent returns: verify {planning_artifacts}/consolidated-gaps.md exists. If the subagent failed to write the file, log error and halt.</action>
+ <template-output file="{planning_artifacts}/consolidated-gaps.md">
+ Consolidated gap analysis with deduplicated, ranked gaps in standardized schema format. Includes summary statistics header.
+ </template-output>
+ </step>
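Steps 3 and 4 of the consolidation prompt can be sketched as below, assuming gap entries have already been parsed into dicts using the schema's field names. The ordering tables follow the prompt; the helper names and the semicolon-joined recommendation format are illustrative assumptions:

```python
# Dedup by (evidence_file, evidence_line), then rank, per Steps 3-4 above.
SEV = {"critical": 0, "high": 1, "medium": 2, "low": 3}
CONF = {"high": 0, "medium": 1, "low": 2}

def dedupe(gaps):
    """Keep the highest-severity entry per location, merge
    recommendations, and record merged_from IDs."""
    groups = {}
    for g in gaps:
        groups.setdefault((g["evidence_file"], g["evidence_line"]), []).append(g)
    merged = []
    for dupes in groups.values():
        dupes.sort(key=lambda g: SEV[g["severity"]])  # highest severity first
        keep = dict(dupes[0])
        if len(dupes) > 1:
            keep["recommendation"] = "; ".join(d["recommendation"] for d in dupes)
            keep["merged_from"] = [d["id"] for d in dupes]
        merged.append(keep)
    return merged

def rank(gaps):
    """severity DESC, then confidence DESC, then category A-Z."""
    return sorted(gaps, key=lambda g: (SEV[g["severity"]],
                                       CONF.get(g.get("confidence"), 2),
                                       g["category"]))
```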
+
  <step n="4" title="Create PRD for Gaps">
- <action>Read the PRD template from {installed_path}/../../templates/prd-template.md</action>
+ <action>Select PRD template based on {project_type} (set by Step 1 discovery). Template selection is a simple lookup — no template inheritance or composition at runtime:
+
+ | project_type   | Template File             | Requirement ID Scheme                      |
+ |----------------|---------------------------|--------------------------------------------|
+ | application    | prd-template.md           | FR-###, NFR-###                            |
+ | infrastructure | infra-prd-template.md     | IR-###, OR-###, SR-###                     |
+ | platform       | platform-prd-template.md  | FR-###, NFR-### and IR-###, OR-###, SR-### |
+
+ Resolve template path: {installed_path}/../../templates/{selected_template}.
+ Verify the selected template file exists before proceeding. If the template file is missing, HALT with: "Template {selected_template} not found at {installed_path}/../../templates/. Ensure E12-S2 (infra) or E12-S3 (platform) templates are installed."
+ If {project_type} is not set or unrecognized, default to application (prd-template.md) for backward compatibility.</action>
+ <action>Read the selected PRD template from {installed_path}/../../templates/{selected_template}</action>
+ <action>Set requirement ID scheme based on {project_type}:
+ — If application: use FR-### for functional requirements and NFR-### for non-functional requirements (existing behavior, backward compatible)
+ — If infrastructure: use IR-### for infrastructure requirements, OR-### for operational requirements, and SR-### for security requirements exclusively. Do NOT use FR/NFR prefixes.
+ — If platform: use BOTH ID scheme families — FR-###/NFR-### for application-layer requirements and IR-###/OR-###/SR-### for infrastructure-layer requirements. All requirement IDs are globally unique within the project — the prefix disambiguates (e.g., FR-001 and IR-001 are distinct, no collision).</action>
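The template selection and ID-scheme rules above reduce to a single dictionary lookup with a backward-compatible fallback. A minimal sketch — file names and prefixes are from the table, while the dict and function names are hypothetical:

```python
# Template lookup per the project_type table above; unknown or unset
# types fall back to the application template (backward compatibility).
PRD_TEMPLATES = {
    "application": ("prd-template.md", ("FR", "NFR")),
    "infrastructure": ("infra-prd-template.md", ("IR", "OR", "SR")),
    "platform": ("platform-prd-template.md", ("FR", "NFR", "IR", "OR", "SR")),
}

def select_template(project_type):
    """Return (template_file, requirement_id_prefixes) for a project type."""
    return PRD_TEMPLATES.get(project_type, PRD_TEMPLATES["application"])
```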
  <action>Read upstream artifacts to inform gap analysis:</action>
+ <action>— project-documentation.md → project context: tech stack, architecture patterns, conventions, detected capability flags, CI/CD summary. Use for PRD Overview section (existing project summary) and to ground gap requirements in the actual project structure.</action>
+ <action>— consolidated-gaps.md → primary input: deduplicated, ranked, and code-verified gap list from Steps 3.5 and 5.5. If a '## Verification Corrections for PRD' section exists, use it to correct factual errors from contradicted claims.</action>
  <action>— nfr-assessment.md → NFR section gets real "Current Baseline" and "Target" columns</action>
 <action>— api-documentation.md (if exists) → extract API gaps (undocumented endpoints, missing validation)</action>
  <action>— event-catalog.md (if exists) → extract messaging gaps (missing DLQ, missing schemas)</action>
@@ -73,6 +231,92 @@
 </template-output>
 </step>

+ <step n="5.5" title="Code-Verified Review">
+ <action>Spawn a subagent using the Agent tool with this prompt:
+
+ "You are the Code-Verified Review subagent. Your task is to verify every factual claim in the consolidated gap entries against the actual codebase, classify each gap, and update the consolidated output.
+
+ **Context:**
+ - Consolidated gaps file: {planning_artifacts}/consolidated-gaps.md
+ - Codebase root: {project-path}
+ - Tech stack: {tech_stack}
+ - Gap schema reference: {installed_path}/../../templates/gap-entry-schema.md
+
+ **Step 1 — Load and parse consolidated-gaps.md:**
+ Read {planning_artifacts}/consolidated-gaps.md. Parse all YAML gap entries matching the standardized gap schema. If the file is empty or contains zero parseable gap entries, output a summary stating '0 gaps to verify' and exit gracefully without error, producing an empty verification report.
+
+ Handle malformed gap entries: if a gap entry is missing required fields (id, category, severity, title, description, evidence_file, recommendation), log a warning noting which field is missing and which entry, skip the entry, and include it in the summary as a skipped entry.
+
+ **Step 2 — Extract verifiable claims from each gap entry:**
+ For each valid gap entry, extract all machine-verifiable claims:
+ a. File existence: extract the `evidence_file` field — this is a relative path from {project-path} that should exist in the codebase.
+ b. Line range validity: extract the `evidence_line` field — this line number must be within the file's total line count.
+ c. Pattern/string presence: extract patterns, strings, or code snippets from the `description` and `recommendation` fields that claim specific code patterns exist in the referenced file.
+ d. Configuration key existence: for gaps in the `configuration` category, extract config key paths claimed to exist in YAML/JSON config files.
+
+ For gap entries with no verifiable claims (purely subjective descriptions with no file/line/pattern references), classify as `unverifiable` with reason 'No machine-verifiable claims'.
+
+ **Step 3 — Verify each claim against the codebase at {project-path}:**
+ Use grep, glob, and read tools for all verification (not shell commands). For each claim:
+
+ a. File existence check: Use glob/read to check if {project-path}/{evidence_file} exists.
+ - If the file does not exist: classify the gap as `contradicted` with reason 'Referenced file not found: {evidence_file}'. Preserve the original gap with downgraded confidence.
+ - If the file is a binary file (detected by extension: .png, .jpg, .gif, .woff, .ttf, .ico, .pdf, .zip, .tar, .gz, .exe, .dll, .so, .dylib): classify as `unverifiable` with reason 'Binary file — cannot verify textual claims'.
+
+ b. Line range validation: Read the file and count total lines. Compare against `evidence_line`.
+ - If evidence_line exceeds the file's total line count: classify as `contradicted` with reason 'Line {evidence_line} exceeds file length ({actual_lines} lines)'. Flag the entry for correction.
+
+ c. Pattern search: Use grep to search for stated patterns in the referenced file. Handle regex special characters safely by escaping them.
+ - If the pattern is found: this claim is confirmed.
+ - If the pattern is not found: this contributes to a `contradicted` classification.
+
+ d. Config key verification: For configuration-category gaps, parse YAML/JSON config files and check for key existence at the stated paths.
+ - If the key exists: claim confirmed.
+ - If the key does not exist: contributes to `contradicted` classification.
+
+ **Step 4 — Apply tristate classification to each gap:**
+ Based on the verification results from Step 3:
+ - **verified**: All verifiable claims in the gap entry are confirmed by evidence in the codebase. Set `verification_status: verified`.
+ - **unverifiable**: Claims cannot be confirmed or denied from code alone (e.g., runtime behavior, subjective assessments, binary files, no verifiable claims). Set `verification_status: unverifiable`.
+ - **contradicted**: Evidence in the codebase directly contradicts one or more claims. Set `verification_status: contradicted`.
+
+ For contradicted gaps:
+ - Downgrade the confidence of the original gap entry.
+ - Attach a `reason` string explaining exactly what was contradicted and what was found instead.
+ - Generate a new gap entry using the standardized gap schema with:
+   - `id`: GAP-VERIFIED-{seq}
+   - `category`: preserved from original gap
+   - `severity`: inherited from original gap
+   - `verified_by: "code-verified"`
+   - `description`: details of the contradiction and the actual state found in codebase
+
+ **Step 5 — Update consolidated-gaps.md:**
+ Write back to {planning_artifacts}/consolidated-gaps.md with the following additions to each gap entry:
+ - `verification_status`: verified | unverifiable | contradicted
+ - `verified_by: "code-verified"`
+ Preserve all existing fields on each entry — do not remove or overwrite original data.
+ Append new gap entries generated from contradicted claims at the end of the file.
+
+ **Step 6 — Generate verification summary:**
+ Output a summary report including:
+ - Total gaps processed
+ - Verified count
+ - Unverifiable count
+ - Contradicted count
+ - New gap entries generated from contradicted claims
+ - Skipped entries (malformed) count
+
+ **Step 7 — Feed contradicted claims back to Step 4:**
+ Contradicted claims must be fed back to Step 4 (PRD generation) for correction. Write the list of contradicted entries and their reasons to a section at the top of consolidated-gaps.md marked '## Verification Corrections for PRD'. This section will be read by Step 4 when the PRD is regenerated to correct factual errors.
+
+ Output to {planning_artifacts}/consolidated-gaps.md"
+ </action>
+ <action>When subagent returns: verify {planning_artifacts}/consolidated-gaps.md has been updated with verification_status fields. If the subagent failed, log error and halt.</action>
+ <template-output file="{planning_artifacts}/consolidated-gaps.md">
+ Code-verified consolidated gaps with verification_status and verified_by fields on each entry. Includes verification summary and corrections section for PRD feedback.
+ </template-output>
+ </step>
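The tristate classification in Step 4 of the review prompt, plus the "escape regex special characters" note from the pattern search, can be sketched as below. Claim checks are reduced to True (confirmed), False (contradicted), or None (not machine-verifiable); all names are illustrative assumptions:

```python
import re

def pattern_present(pattern: str, file_text: str) -> bool:
    """Literal-safe pattern search: special characters in the claimed
    pattern are escaped before matching, per Step 3c above."""
    return re.search(re.escape(pattern), file_text) is not None

def classify(claims):
    """verified: all verifiable claims confirmed; contradicted: any
    claim refuted; unverifiable: nothing machine-checkable."""
    verifiable = [c for c in claims if c is not None]
    if any(c is False for c in verifiable):
        return "contradicted"
    return "verified" if verifiable else "unverifiable"
```

Note that contradiction takes precedence: a single refuted claim marks the whole gap `contradicted` even if other claims check out, matching Step 3's "contributes to a contradicted classification" wording.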
+
  <step n="6" title="Architecture">
 <action>Architecture is created using the Phase 3 workflow which auto-detects brownfield mode from the PRD "Mode: Brownfield" header set in step 4.</action>
  <action>If {planning_artifacts}/architecture.md already exists: WARN user — "An architecture document already exists. Continuing will overwrite it with the brownfield version. Choose: (a) overwrite, (b) save as architecture-brownfield.md instead." If user chooses (b), instruct the subagent to output to {planning_artifacts}/architecture-brownfield.md.</action>
@@ -92,7 +336,7 @@
  <check if="refresh-ground-truth workflow directory not found">Val ground truth bootstrap skipped — refresh-ground-truth workflow not available. Run /gaia-refresh-ground-truth manually after E8-S9 is implemented. Brownfield onboarding completes successfully.</check>
  <action>Invoke /gaia-refresh-ground-truth to scan the filesystem and populate framework inventory facts (workflows, agents, skills, commands) into ground-truth.md</action>
- <invoke-workflow ref="refresh-ground-truth" mode="yolo" />
+ <invoke-workflow ref="val-refresh-ground-truth" mode="yolo" />
  <action>Load brownfield-extraction section of ground-truth-management skill JIT from {project-root}/_gaia/lifecycle/skills/ground-truth-management.md</action>
  <action>Read available brownfield artifacts and extract project-specific facts. For each artifact, check if the file exists before reading — skip missing artifacts without error: