gaia-framework 1.65.1 → 1.83.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/commands/gaia-create-stakeholder.md +20 -0
- package/.claude/commands/gaia-test-gap-analysis.md +17 -0
- package/CLAUDE.md +102 -1
- package/README.md +2 -2
- package/_gaia/_config/global.yaml +5 -1
- package/_gaia/_config/lifecycle-sequence.yaml +20 -0
- package/_gaia/_config/skill-manifest.csv +2 -0
- package/_gaia/_config/workflow-manifest.csv +3 -1
- package/_gaia/core/engine/workflow.xml +11 -1
- package/_gaia/core/protocols/review-gate-check.xml +29 -1
- package/_gaia/core/workflows/party-mode/steps/step-01-agent-loading.md +60 -9
- package/_gaia/creative/workflows/problem-solving/checklist.md +64 -14
- package/_gaia/creative/workflows/problem-solving/instructions.xml +367 -22
- package/_gaia/creative/workflows/problem-solving/workflow.yaml +31 -1
- package/_gaia/dev/agents/_base-dev.md +7 -1
- package/_gaia/dev/skills/_skill-index.yaml +9 -0
- package/_gaia/dev/skills/figma-integration.md +296 -0
- package/_gaia/lifecycle/knowledge/brownfield/config-contradiction-scan.md +137 -0
- package/_gaia/lifecycle/knowledge/brownfield/dead-code-scan.md +179 -0
- package/_gaia/lifecycle/knowledge/brownfield/test-execution-scan.md +209 -0
- package/_gaia/lifecycle/skills/document-rulesets.md +91 -6
- package/_gaia/lifecycle/templates/brownfield-scan-doc-code-prompt.md +219 -0
- package/_gaia/lifecycle/templates/brownfield-scan-hardcoded-prompt.md +169 -0
- package/_gaia/lifecycle/templates/brownfield-scan-integration-seam-prompt.md +127 -0
- package/_gaia/lifecycle/templates/brownfield-scan-runtime-behavior-prompt.md +141 -0
- package/_gaia/lifecycle/templates/brownfield-scan-security-prompt.md +440 -0
- package/_gaia/lifecycle/templates/gap-entry-schema.md +282 -0
- package/_gaia/lifecycle/templates/infra-prd-template.md +356 -0
- package/_gaia/lifecycle/templates/platform-prd-template.md +431 -0
- package/_gaia/lifecycle/templates/prd-template.md +70 -0
- package/_gaia/lifecycle/templates/story-template.md +22 -1
- package/_gaia/lifecycle/workflows/2-planning/create-ux-design/instructions.xml +52 -3
- package/_gaia/lifecycle/workflows/4-implementation/add-feature/checklist.md +1 -1
- package/_gaia/lifecycle/workflows/4-implementation/add-feature/instructions.xml +2 -3
- package/_gaia/lifecycle/workflows/4-implementation/add-stories/checklist.md +5 -0
- package/_gaia/lifecycle/workflows/4-implementation/add-stories/instructions.xml +73 -1
- package/_gaia/lifecycle/workflows/4-implementation/create-stakeholder/checklist.md +25 -0
- package/_gaia/lifecycle/workflows/4-implementation/create-stakeholder/instructions.xml +79 -0
- package/_gaia/lifecycle/workflows/4-implementation/create-stakeholder/workflow.yaml +22 -0
- package/_gaia/lifecycle/workflows/4-implementation/create-story/instructions.xml +11 -1
- package/_gaia/lifecycle/workflows/4-implementation/retrospective/instructions.xml +21 -1
- package/_gaia/lifecycle/workflows/4-implementation/retrospective/workflow.yaml +1 -1
- package/_gaia/lifecycle/workflows/4-implementation/validate-story/instructions.xml +11 -0
- package/_gaia/lifecycle/workflows/anytime/brownfield-onboarding/checklist.md +12 -0
- package/_gaia/lifecycle/workflows/anytime/brownfield-onboarding/instructions.xml +248 -4
- package/_gaia/lifecycle/workflows/anytime/brownfield-onboarding/workflow.yaml +1 -0
- package/_gaia/testing/workflows/test-gap-analysis/checklist.md +8 -0
- package/_gaia/testing/workflows/test-gap-analysis/instructions.xml +53 -0
- package/_gaia/testing/workflows/test-gap-analysis/workflow.yaml +38 -0
- package/bin/gaia-framework.js +44 -8
- package/bin/helpers/derive-bump-label.js +41 -0
- package/bin/helpers/validate-bump-labels.js +38 -0
- package/gaia-install.sh +96 -21
- package/package.json +1 -1
- package/_gaia/_memory/tier2-results/.gitkeep +0 -0
- package/_gaia/_memory/tier2-results/checkpoint-resume-2026-03-24.yaml +0 -6
- package/_gaia/_memory/tier2-results/engine-scenarios-2026-03-22.yaml +0 -14
package/_gaia/dev/skills/figma-integration.md
@@ -0,0 +1,296 @@
---
name: figma-integration
version: '1.0'
requires_mcp: design-tool
applicable_agents: [typescript-dev, angular-dev, flutter-dev, java-dev, python-dev, mobile-dev]
test_scenarios:
  - scenario: Figma MCP server available and healthy
    expected: Mode selection (Generate/Import/Skip) presented to user
  - scenario: Figma MCP server not installed
    expected: Silent fallback to markdown-only, no error or warning
  - scenario: Figma MCP server not running
    expected: Silent fallback to markdown-only, no error or warning
  - scenario: Figma API token expired
    expected: Warning displayed, fallback to markdown-only
  - scenario: Rate limited (429)
    expected: Single retry after delay, fallback with warning if retry fails
  - scenario: Timeout exceeding 5 seconds
    expected: Fallback with warning, continue markdown-only
  - scenario: Design tool detection via MCP probe
    expected: Correct adapter selected based on available MCP tool prefix
  - scenario: Token extraction produces W3C DTCG format
    expected: design-tokens.json contains $type/$value structure with semantic aliases
  - scenario: Component spec extraction
    expected: component-specs.yaml contains typed props, abstract layout, and states
---

**DesignToolProvider Interface** — abstract interface for design tool integrations. Adapters implement these 5 operations:

| Operation | Description | Returns |
|-----------|-------------|---------|
| `detect()` | Probe MCP tools to identify available design tool | Adapter instance or null |
| `getTokens()` | Extract design tokens from the design file | W3C DTCG JSON (`design-tokens.json`) |
| `getComponents()` | Extract component specifications | YAML spec (`component-specs.yaml`) |
| `getFrames()` | Generate UI kit frames across viewports | Frame metadata for UI kit page |
| `exportAssets()` | Export images and icons at required densities | Asset files in `assets/` directory |

**Adapter Implementations:**
- **FigmaAdapter** (active) — wraps `figma_*` / `figma/` MCP tools (e.g., `figma/get_file`, `figma/get_styles`, `figma/get_components`). Detected when MCP tools matching prefix `figma` are available.
- **PenpotAdapter** (planned) — will wrap `penpot_*` MCP tools. Detected via `penpot_` prefix. Not yet implemented.
- **SketchAdapter** (planned) — will wrap `sketch_*` MCP tools. Detected via `sketch_` prefix. Not yet implemented.

**Selection logic:** probe the MCP tool list for known prefixes in order: `figma_` / `figma/` → `penpot_` → `sketch_`. Use the first match. If none is found, report "No design tool MCP server detected."

**MCP constraint (FR-140):** operations are read-heavy/write-light. Most interactions read design data (tokens, components, styles). Write operations are limited to frame generation and are clearly documented per section.
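
The prefix-ordered selection logic above can be sketched as follows. This is illustrative only, not part of the packaged skill: the adapter names mirror the list above, but the shape of the MCP tool list (an array of tool-name strings) is an assumption.

```javascript
// Ordered prefix table per the selection logic: figma → penpot → sketch.
const ADAPTER_PREFIXES = [
  { adapter: "FigmaAdapter", prefixes: ["figma_", "figma/"] },
  { adapter: "PenpotAdapter", prefixes: ["penpot_"] },
  { adapter: "SketchAdapter", prefixes: ["sketch_"] },
];

function selectAdapter(mcpToolNames) {
  for (const { adapter, prefixes } of ADAPTER_PREFIXES) {
    if (mcpToolNames.some((t) => prefixes.some((p) => t.startsWith(p)))) {
      return adapter; // first match wins
    }
  }
  return null; // "No design tool MCP server detected."
}
```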

<!-- SECTION: detection -->
## Detection Probe

Detect Figma MCP server availability using a lightweight, read-only probe call.
This section is consumed by `/gaia-create-ux` at workflow start.

> **Security mandate:** NEVER persist Figma API tokens in any GAIA file — checkpoints, sidecars, logs, or artifacts. MCP auth is handled by the MCP server process; GAIA does not touch tokens.

> **Detection-only mandate:** GAIA MUST never install, configure, or modify the MCP server. Detection is read-only — probe for availability via `figma/get_user_info` or tool listing, nothing more.

### Probe Call

Use `figma/get_user_info` as the detection probe:
- Read-only, lightweight, validates both connectivity and token validity
- 5-second hard timeout (NFR-026 compliance)
- Zero added latency when MCP is not available (silent skip)

### Detection Flow

1. **Attempt probe:** call `figma/get_user_info` with a 5-second hard timeout
2. **On success:** set `figma_mcp_available = true`, proceed to mode selection
3. **On failure:** classify the failure and handle per the failure mode table below

### Failure Mode Handling

| Failure | Detection Signal | Behavior |
|---------|-----------------|----------|
| **Not installed** (AC5) | Tool not found / tool not available | Silent fallback to markdown-only mode — no error, no warning, no prompt |
| **Not running** (AC6) | Connection refused / connection error | Silent fallback to markdown-only mode — no error, no warning, no prompt |
| **Token expired** (AC7) | 401 or 403 response from `figma/get_user_info` | Warn: "Figma token expired — falling back to markdown", then continue markdown-only |
| **Rate limited** (AC8) | 429 response | Retry once after the `Retry-After` header delay (default: 2 seconds). If the retry also fails, warn and fall back to markdown-only |
| **Timeout** (AC9) | No response within 5-second hard timeout | Warn: "Figma MCP did not respond within 5 seconds — falling back to markdown", then continue markdown-only |
| **Malformed response** | Unexpected or partial data | Treat as unavailable — silent fallback to markdown-only |
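
A minimal sketch of the detection flow and failure-mode table, assuming a `probe` function that wraps the `figma/get_user_info` MCP call. The error shapes (a `status` field, a `"timeout"` message) are illustrative assumptions, and the single 429 retry is omitted for brevity; the warning strings are the ones mandated above.

```javascript
// Race the probe against the 5-second hard timeout, then classify failures.
async function detectFigmaMcp(probe, timeoutMs = 5000) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), timeoutMs);
  });
  try {
    await Promise.race([probe(), timeout]);
    return { available: true }; // proceed to mode selection
  } catch (err) {
    if (err.message === "timeout") {
      return {
        available: false,
        warn: "Figma MCP did not respond within 5 seconds — falling back to markdown",
      };
    }
    if (err.status === 401 || err.status === 403) {
      return {
        available: false,
        warn: "Figma token expired — falling back to markdown",
      };
    }
    // Not installed / not running / malformed response:
    // silent fallback — no error, no warning, no prompt (AC5, AC6).
    return { available: false };
  } finally {
    clearTimeout(timer);
  }
}
```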

### Mode Selection (on success)

When `figma_mcp_available == true`, present the user with:

```
Figma MCP detected. Select UX design mode:
[g] Generate — AI-generated UX with Figma export
[i] Import — Import existing Figma designs into GAIA
[s] Skip — Proceed with markdown-only (ignore Figma)
```

### Minimum API Scopes

The Figma API token used by the MCP server requires these minimum scopes:

| Scope | Required For | Mode |
|-------|-------------|------|
| `files:read` | Reading design files, styles, components | Default (all modes) |
| `file_content:read` | Reading file content, nodes, images | Default (all modes) |
| `files:write` | Creating frames, writing to design files | Generate mode only |

Scope enforcement is the MCP server's responsibility — GAIA documents scope expectations only and does not validate or request token scopes.

### Error Sanitization Rules

All error messages from MCP operations MUST follow this safe error format:

```
Figma MCP error: {status_code} — {generic_description}. Falling back to markdown-only workflow.
```

**Disallowed content in error messages:** Figma file URLs, file keys, node IDs, design data, access tokens, or any dynamic content from the Figma API response.

| Status Code | Generic Description |
|-------------|-------------------|
| 401 | Authentication failed |
| 403 | Access denied |
| 404 | Resource not found |
| 429 | Rate limit exceeded — retry once, then fall back |
| 500 | Server error |
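
As an illustrative sketch (not part of the packaged skill), the safe error format can be produced by a formatter that maps status codes to the generic descriptions above and deliberately ignores the raw API response, so no dynamic Figma data can leak. The "Unexpected error" fallback label for unmapped codes is an assumption.

```javascript
// Status → generic description, mirroring the sanitization table.
const GENERIC_DESCRIPTIONS = {
  401: "Authentication failed",
  403: "Access denied",
  404: "Resource not found",
  429: "Rate limit exceeded",
  500: "Server error",
};

function sanitizeMcpError(statusCode) {
  const desc = GENERIC_DESCRIPTIONS[statusCode] || "Unexpected error";
  // The raw response body is never interpolated: no file URLs, file keys,
  // node IDs, design data, or tokens can appear in the message.
  return `Figma MCP error: ${statusCode} — ${desc}. Falling back to markdown-only workflow.`;
}
```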

### Security Boundary

- The Figma API token lives exclusively in the MCP server configuration (ADR-024)
- GAIA files must NEVER contain or log Figma tokens, API keys, or credentials
- The detection probe interacts through the MCP tool abstraction only — no direct HTTP calls

### Traceability

- FR-132: Figma MCP detection probe requirement
- FR-143: Graceful MCP failure handling
- NFR-026: MCP detection latency < 5 seconds
- ADR-024: Figma MCP integration via shared skill

<!-- SECTION: tokens -->
## Design Token Extraction

> **Security mandate:** MCP auth is handled by the MCP server — NEVER persist or reference Figma API tokens in extraction outputs, logs, or GAIA files.

Extract design tokens from the connected design tool and output them in W3C DTCG format.

### Extraction Steps

1. **Fetch styles** — call `figma/get_styles` to retrieve all published styles (colors, typography, effects, grids)
2. **Map to W3C DTCG** — transform each style into the W3C Design Tokens Community Group draft format:
   ```json
   {
     "color": {
       "primary": { "$type": "color", "$value": "#3B82F6", "$description": "Brand primary" }
     },
     "spacing": {
       "sm": { "$type": "dimension", "$value": "8px" }
     },
     "typography": {
       "heading-1": {
         "$type": "typography",
         "$value": { "fontFamily": "Inter", "fontSize": "32px", "fontWeight": 700, "lineHeight": 1.2 }
       }
     }
   }
   ```
3. **Include semantic aliases** — map semantic names to raw tokens (e.g., `color.surface.primary` → `color.blue.500`)
4. **Add composite tokens** — typography composites, shadow composites, border-radius scales
5. **Write output** — save to `{planning_artifacts}/design-system/design-tokens.json` with `"schema_version": "1.0"`
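
Step 2 could be sketched as follows, assuming a flat list of already-fetched styles with `group`/`name`/`type`/`value` fields. That input shape is a simplification for illustration; the real `figma/get_styles` payload will differ and needs its own normalization pass.

```javascript
// Fold a flat style list into the nested W3C DTCG $type/$value structure.
function stylesToDtcg(styles) {
  const tokens = {};
  for (const { group, name, type, value, description } of styles) {
    tokens[group] = tokens[group] || {};
    const token = { $type: type, $value: value };
    if (description) token.$description = description; // optional in DTCG
    tokens[group][name] = token;
  }
  return tokens;
}
```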

<!-- SECTION: components -->
## Component Spec Extraction

> **Security mandate:** MCP auth is handled by the MCP server — NEVER include Figma API tokens in component specs, logs, or any GAIA output files.

Extract component specifications into a tech-agnostic intermediate format.

### Extraction Steps

1. **Fetch components** — call `figma/get_components` to list all published components and variants
2. **For each component**, extract:
   - **name** — component name (PascalCase)
   - **props** — typed properties: `{ name: string, type: "string"|"number"|"boolean"|"enum", values?: string[] }`
   - **layout** — abstract layout type: `row | column | stack | grid` with spacing via token references (`{spacing.sm}`)
   - **states** — `[default, hover, active, disabled, focus]` with visual diff per state
   - **children** — nested component references with slot definitions
   - **variants** — named variants with their property overrides
   - **responsive** — breakpoint behavior at 375px, 768px, 1280px
   - **a11y** — role, aria-label pattern, description, keyboard interaction
3. **Write output** — save to `{planning_artifacts}/design-system/component-specs.yaml` with `schema_version: "1.0"`

### Output Schema

```yaml
schema_version: "1.0"
components:
  - name: Button
    props:
      - { name: label, type: string }
      - { name: variant, type: enum, values: [primary, secondary, ghost] }
      - { name: disabled, type: boolean }
    layout: { type: row, gap: "{spacing.sm}" }
    states: [default, hover, active, disabled, focus]
    a11y: { role: button, label: "{props.label}" }
```
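
A downstream consumer of `component-specs.yaml` might check concrete props against the typed schema like this. This is a hedged sketch, not part of the skill: YAML parsing is omitted (the spec is shown pre-parsed), and treating missing props as optional is an assumption, since the schema above does not state optionality rules.

```javascript
// Validate a props object against the typed props of one component spec.
function validateProps(spec, props) {
  const errors = [];
  for (const { name, type, values } of spec.props) {
    const v = props[name];
    if (v === undefined) continue; // optionality assumed, not specified
    if (type === "enum") {
      if (!values.includes(v)) errors.push(`${name}: expected one of ${values.join(", ")}`);
    } else if (typeof v !== type) {
      errors.push(`${name}: expected ${type}, got ${typeof v}`);
    }
  }
  return errors;
}
```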

<!-- SECTION: frames -->
## Frame Generation

> **Security mandate:** MCP auth is handled by the MCP server — NEVER persist Figma API tokens in frame metadata, logs, or any GAIA output files.

Create UI kit frames in the design tool across standard viewports.

### Generation Steps

1. **Create UI Kit page** — create a dedicated page named "UI Kit — Generated" in the design file
2. **For each screen** defined in the UX design:
   - Create 3 viewport frames: mobile (375px), tablet (768px), desktop (1280px)
   - Apply auto-layout with responsive constraints from component specs
   - Place components using the extracted component specs and token values
3. **Add prototype flows** — link frames with interaction flows matching the UX navigation spec
4. **Label frames** — use the naming convention `{ScreenName}/{Viewport}` (e.g., `Dashboard/Desktop`)

### Output

Frame metadata is logged for verification. No file output — frames are created directly in the design tool via MCP calls (`figma/create_frame`, `figma/create_component_instance`).

<!-- SECTION: assets -->
## Asset Export

> **Security mandate:** MCP auth is handled by the MCP server — NEVER include Figma API tokens in asset manifests, export logs, or any GAIA output files.

Export raster and vector assets from the design tool at required densities.

### Export Steps

1. **Identify exportable nodes** — scan the design file for nodes marked as exportable (icons, images, illustrations)
2. **Export icons** as SVG — call `figma/get_images` with `format: svg` for all icon nodes
3. **Export images** as PNG at 3 densities — call `figma/get_images` with `format: png` and `scale: 1`, `scale: 2`, `scale: 3` for image nodes
4. **Organize output** into the directory structure:
   ```
   {planning_artifacts}/design-system/assets/
   ├── icons/               # SVG icons
   │   ├── icon-name.svg
   ├── images/              # PNG images at 1x/2x/3x
   │   ├── image-name@1x.png
   │   ├── image-name@2x.png
   │   └── image-name@3x.png
   ```
5. **Generate asset manifest** — list all exported assets with dimensions and file sizes
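
Steps 2 to 4 can be sketched as a request builder that pairs each exportable node with its `figma/get_images` parameters and its on-disk path. The node metadata shape (`id`, `kind`, `name`) and the request-object shape are illustrative assumptions.

```javascript
// Build one export request per icon (SVG) and three per image (PNG @1x/2x/3x).
function buildExportRequests(nodes) {
  const requests = [];
  for (const node of nodes) {
    if (node.kind === "icon") {
      requests.push({ node: node.id, format: "svg", out: `assets/icons/${node.name}.svg` });
    } else if (node.kind === "image") {
      for (const scale of [1, 2, 3]) {
        requests.push({
          node: node.id,
          format: "png",
          scale,
          out: `assets/images/${node.name}@${scale}x.png`,
        });
      }
    }
  }
  return requests;
}
```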

<!-- SECTION: export -->
## Per-Stack Token Resolution

> **Security mandate:** MCP auth is handled by the MCP server — NEVER embed Figma API tokens in generated token files, stack outputs, or any GAIA output files.

Maps abstract design tokens to framework-specific implementations. Each dev agent uses this table to generate native code from `design-tokens.json`.

| Agent | Stack | Token Format | Example |
|-------|-------|-------------|---------|
| Cleo | TypeScript/React | CSS custom properties | `--color-primary: #3B82F6;` in `:root {}` |
| Lena | Angular | SCSS variables + CSS custom properties | `$color-primary: #3B82F6;` in `_tokens.scss` |
| Freya | Flutter/Dart | ThemeData extensions | `ThemeData(primaryColor: Color(0xFF3B82F6))` |
| Hugo | Java/Spring | Spring properties + Java constants | `design.color.primary=#3B82F6` in `application.properties` |
| Ravi | Python | Python dict constants | `TOKENS = {"color": {"primary": "#3B82F6"}}` in `design_tokens.py` |
| Talia | Mobile (RN/Swift/Compose) | RN StyleSheet / Swift extensions / Compose theme | `StyleSheet.create({primary: '#3B82F6'})` or `extension UIColor { static let primary = UIColor(hex: "3B82F6") }` or `val Primary = Color(0xFF3B82F6)` |

### Resolution Process

1. Read `design-tokens.json` (W3C DTCG format) from `{planning_artifacts}/design-system/`
2. Read `component-specs.yaml` from the same directory for component definitions and widget hints
3. Identify the active dev agent's stack from the agent persona
4. For each token, generate the stack-native representation using the table above
5. For each component, use the `widget_hints` field to guide framework-specific widget/component tree generation
6. Output token files to the project's design system directory (stack-specific path)

### Token Path Resolution Rules

Token paths use `{group.token}` syntax. The resolution pattern per stack:

| Stack | Pattern | Example Path | Resolved Output |
|-------|---------|-------------|-----------------|
| TypeScript/React | `--{group}-{token}` | `{color.blue-500}` | `var(--color-blue-500)` |
| Angular | `${group}-{token}` | `{spacing.4}` | `$spacing-4` |
| Flutter/Dart | `AppTokens.{group}.{token}` | `{typography.body}` | `AppTokens.typography.body` |
| Java/Spring | `design.{group}.{token}` | `{color.interactive-primary}` | `design.color.interactive-primary` |
| Python | `TOKENS['{group}']['{token}']` | `{shadow.md}` | `TOKENS['shadow']['md']` |
| Mobile (RN) | `tokens.{group}.{token}` | `{borderRadius.md}` | `tokens.borderRadius.md` |
| Mobile (Swift) | `DesignTokens.{group}.{token}` | `{color.blue-500}` | `DesignTokens.color.blue500` |
| Mobile (Compose) | `AppTheme.{group}.{token}` | `{spacing.2}` | `AppTheme.spacing.s2` |

### Semantic Alias Resolution

Semantic tokens reference primitives via `{group.token}` syntax in their `$value` field. When generating stack-specific code, resolve the alias chain to produce the final value. Example:

```
"interactive-primary": { "$type": "color", "$value": "{color.blue-500}" }
  → resolves to → #3B82F6
  → CSS: --color-interactive-primary: #3B82F6;
  → SCSS: $color-interactive-primary: #3B82F6;
  → Dart: static const interactivePrimary = Color(0xFF3B82F6);
```
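
The alias-chain resolution above can be sketched as follows, with CSS custom properties as the target format per the per-stack table. Single-level token groups are assumed for brevity; deeper group nesting would need a path walk instead of one `split`.

```javascript
// Follow "{group.token}" references in $value until a literal is reached.
function resolveToken(tokens, path, seen = new Set()) {
  if (seen.has(path)) throw new Error(`alias cycle at ${path}`);
  seen.add(path);
  const [group, name] = path.split(".");
  const value = tokens[group][name].$value;
  const ref = typeof value === "string" && value.match(/^\{(.+)\}$/);
  return ref ? resolveToken(tokens, ref[1], seen) : value;
}

// Emit one stack-native line (TypeScript/React row of the table).
function toCssVar(tokens, group, name) {
  return `--${group}-${name}: ${resolveToken(tokens, `${group}.${name}`)};`;
}
```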

package/_gaia/lifecycle/knowledge/brownfield/config-contradiction-scan.md
@@ -0,0 +1,137 @@
# Config Contradiction Scanner — Subagent Prompt Template

> Brownfield deep analysis scan subagent for detecting contradictory configuration values across files.
> Reference: Architecture ADR-021, Sections 10.15.2, 10.15.3, 10.15.5; ADR-022 §10.16.5
> Infra-awareness: E12-S6 — applies infra-specific patterns when project_type is infrastructure or platform.

## Subagent Invocation

**Input variables:**
- `{tech_stack}` — Detected technology stack from Step 1 discovery
- `{project-path}` — Absolute path to the project source code directory
- `{project_type}` — Project type: `application`, `infrastructure`, or `platform`

**Output file:** `{planning_artifacts}/brownfield-scan-config-contradiction.md`

## Subagent Prompt

```
You are a Config Contradiction Scanner for brownfield project analysis. Your task is to discover config files in the target project, build key-value maps, cross-reference values across files, and report contradictions using the standardized gap schema.

### Inputs
- Tech stack: {tech_stack}
- Project path: {project-path}
- Project type: {project_type}
- Gap schema reference: Read _gaia/lifecycle/templates/gap-entry-schema.md for the output format

### Step 1: Config File Discovery

Discover config files using glob patterns. Apply both generic and stack-specific patterns.

**Generic patterns (always apply):**
- `**/*.yaml`, `**/*.yml` — YAML config files
- `**/*.json` — JSON config files (exclude package-lock.json, yarn.lock)
- `**/*.env` and `**/.env*` — Environment variable files
- `**/*.toml` — TOML config files (exclude Pipfile.lock)
- `**/*.ini` — INI config files
- `**/*.properties` — Java properties files
- `**/config*.xml` — XML config files

**Exclusion patterns (always apply):**
- `node_modules/`, `vendor/`, `dist/`, `build/`, `.git/`
- Lock files: `package-lock.json`, `yarn.lock`, `Pipfile.lock`, `go.sum`, `pnpm-lock.yaml`
- Test fixtures and mock data directories

**Stack-specific patterns (apply based on {tech_stack}):**

#### Java/Spring
- `application.yml`, `application.properties`, `bootstrap.yml`
- `application-{profile}.yml`, `application-{profile}.properties`
- `src/main/resources/**/*.properties`, `src/main/resources/**/*.yml`

#### Node/Express
- `.env`, `.env.production`, `.env.development`, `.env.test`, `.env.local`
- `config/` directory contents
- `package.json` scripts section

#### Python/Django
- `settings.py`, `settings/*.py`
- `.env`, `pyproject.toml` tool sections
- `config.py`, `config/*.py`

#### Go/Gin
- `config.yaml`, `config.json`, `config.toml`
- `.env`
- Struct tags with `json:` / `mapstructure:` bindings

### Step 1b: Infrastructure Config File Discovery (E12-S6)

**Apply ONLY when {project_type} is `infrastructure` or `platform`.**

In addition to the generic and stack-specific patterns above, scan for infrastructure configuration files:

#### Terraform
- `**/*.tf` — Terraform configuration files
- `**/*.tfvars` — Terraform variable files (terraform.tfvars, *.auto.tfvars)
- `**/*.tfvars.json` — JSON-format Terraform variables
- `**/terraform.tfstate` — State files (check for drift, do not parse fully)
- `**/backend.tf` — Backend configuration

#### Helm / Kubernetes
- `**/values.yaml`, `**/values-*.yaml` — Helm values files (values.yaml, values-dev.yaml, values-prod.yaml)
- `**/Chart.yaml` — Helm chart metadata
- `**/templates/**/*.yaml` — Helm templates (scan for hardcoded values vs template refs)
- `**/*.yaml` in directories matching `k8s/`, `kubernetes/`, `manifests/`, `deploy/`

#### Kustomize
- `**/kustomization.yaml`, `**/kustomization.yml` — Kustomize configs
- `**/overlays/**/*.yaml` — Kustomize overlay patches (detect contradictions between base and overlays)
- `**/base/**/*.yaml` — Kustomize base resources

#### Docker / Compose
- `**/Dockerfile*` — Dockerfile variants
- `**/docker-compose*.yml`, `**/docker-compose*.yaml` — Compose files
- `**/.dockerignore` — Docker ignore files

#### CI/CD
- `.github/workflows/**/*.yml` — GitHub Actions workflows
- `**/.gitlab-ci.yml` — GitLab CI config
- `**/Jenkinsfile*` — Jenkins pipelines
- `**/.circleci/config.yml` — CircleCI config

**Infra contradiction detection focus areas:**
- Same variable defined differently across terraform.tfvars files for different environments
- Helm values.yaml contradicting kustomize overlay values for the same resource
- Port numbers, resource limits, replica counts, and image tags inconsistent across environments
- Backend configuration (S3 bucket, DynamoDB table) mismatched between Terraform state backends

### Step 2: Build Key-Value Maps

For each discovered config file, extract a key-value map:
- Parse structured formats (YAML, JSON, TOML, INI, properties) into nested key paths
- For .env files: parse KEY=VALUE pairs
- For Terraform files: extract variable defaults, locals, and resource attributes
- For Helm values: extract the full values tree
- For kustomize overlays: extract patch operations and their target values

### Step 3: Cross-Reference and Detect Contradictions

Compare key-value maps across files:
- Same key path with different values across files = contradiction
- Environment-specific overrides that conflict with defaults
- Port/host/URL mismatches between services
- For infra projects: resource specification mismatches between environments

### Step 4: Output

Format each contradiction as a gap entry using the standardized schema:
- category: `config-contradiction`
- For infra-specific contradictions (terraform.tfvars, values.yaml, kustomize): also tag with infra context in the description
- id: `GAP-CONFIG-{seq}` — sequential numbering starting at 001
- verified_by: `machine-detected`
- Budget: max 70 entries; truncate low-severity entries if exceeded
```

## Output File

Write all findings to: `{planning_artifacts}/brownfield-scan-config-contradiction.md`
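
Steps 2 and 3 of the prompt above, plus the Step 4 gap fields, can be sketched as a cross-reference pass. This is an illustration only: the scanner itself is an LLM subagent, and here flattened key paths are assumed as input while severity-aware truncation is simplified to a plain cut at the 70-entry budget.

```javascript
// Given { filePath: { "flat.key.path": value } }, report keys whose
// values disagree across files, formatted with the Step 4 gap fields.
function findContradictions(fileMaps) {
  const byKey = new Map();
  for (const [file, kv] of Object.entries(fileMaps)) {
    for (const [key, value] of Object.entries(kv)) {
      if (!byKey.has(key)) byKey.set(key, []);
      byKey.get(key).push({ file, value });
    }
  }
  const gaps = [];
  let seq = 1;
  for (const [key, sites] of byKey) {
    const distinct = new Set(sites.map((s) => String(s.value)));
    if (distinct.size > 1) {
      gaps.push({
        id: `GAP-CONFIG-${String(seq++).padStart(3, "0")}`,
        category: "config-contradiction",
        verified_by: "machine-detected",
        key,
        sites,
      });
    }
  }
  return gaps.slice(0, 70); // budget: max 70 entries
}
```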

package/_gaia/lifecycle/knowledge/brownfield/dead-code-scan.md
@@ -0,0 +1,179 @@
# Dead Code & Dead State Scanner — Subagent Prompt Template

> Brownfield deep analysis scan subagent for detecting dead code, dead state, and abandoned functionality.
> Reference: Architecture ADR-021, Sections 10.15.2, 10.15.3, 10.15.5

## Subagent Invocation

**Input variables:**
- `{tech_stack}` — Detected technology stack from Step 1 discovery (e.g., "Java/Spring", "Node/Express", "Python/Django", "Go/Gin")
- `{project-path}` — Absolute path to the project source code directory

**Output file:** `{planning_artifacts}/brownfield-scan-dead-code.md`

**Invocation model:** Spawned via the Agent tool in a single message alongside 6 other deep analysis scan subagents (parallel execution per architecture 10.15.2).

## Subagent Prompt

```
|
|
19
|
+
You are a Dead Code & Dead State Scanner for brownfield project analysis. Your task is to discover dead code, unused state, and abandoned functionality in the target project using LLM-based static analysis (grep/glob/read), then report findings using the standardized gap schema format.
|
|
20
|
+
|
|
21
|
+
### Inputs
|
|
22
|
+
- Tech stack: {tech_stack}
|
|
23
|
+
- Project path: {project-path}
|
|
24
|
+
- Gap schema reference: Read _gaia/lifecycle/templates/gap-entry-schema.md for the output format
|
|
25
|
+
|
|
26
|
+
### Step 1: Universal Dead Code Detection
|
|
27
|
+
|
|
28
|
+
Apply these detection patterns regardless of tech stack.
|
|
29
|
+
|
|
30
|
+
#### 1.1 Unreachable Code Paths
|
|
31
|
+
Scan for code that can never execute:
|
|
32
|
+
- Code after unconditional `return`, `throw`, `exit`, `break`, `continue` statements
|
|
33
|
+
- Unreachable switch/match branches (default after exhaustive cases)
|
|
34
|
+
- Dead branches behind constant `false` conditions (`if (false)`, `if (0)`)
|
|
35
|
+
- Functions defined but never called anywhere in the project

#### 1.2 Unused Exports, Functions, and Classes
Cross-reference declarations against usage across the entire project:
- Grep for all exported symbols (functions, classes, constants, types)
- Cross-reference each export against import/require/usage statements in other files
- A declaration with zero references across the project is definitely unused (confidence: high)
- A declaration referenced only inside its defining file may still be dead if it is not exported; check whether those same-file references are anything more than the definition itself
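To make the cross-reference concrete, a sketch of the export/import check over an in-memory file map (JS-style `export` syntax assumed; names are illustrative):

```python
import re

def unused_exports(files: dict[str, str]) -> list[str]:
    """Flag exported symbols (JS-style) never referenced by another file."""
    export_re = re.compile(r"export\s+(?:function|const|class)\s+(\w+)")
    exported: dict[str, str] = {}  # symbol name -> defining file
    for path, text in files.items():
        for name in export_re.findall(text):
            exported[name] = path
    # A symbol with no hits outside its defining file is a candidate gap.
    return [
        name
        for name, origin in exported.items()
        if not any(name in text for path, text in files.items() if path != origin)
    ]
```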

#### 1.3 Commented-Out Code Blocks (>5 Lines)
Scan for blocks of more than 5 consecutive commented lines that contain code patterns:
- Function definitions, class declarations, control flow (if/else, for, while, switch)
- Variable assignments, return statements, import/require statements
- Threshold is strictly greater than 5 lines — exactly 5 lines does NOT trigger detection
- Distinguish code comments from documentation comments (JSDoc, Javadoc, docstrings)
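The threshold rule can be sketched as follows (a simplification that checks only run length, using `//` comments; the real scan must also confirm the block contains code patterns and handle each language's comment syntax):

```python
def commented_code_blocks(lines: list[str], threshold: int = 5) -> list[tuple[int, int]]:
    """1-based (start, end) ranges of comment runs strictly longer than `threshold`."""
    blocks: list[tuple[int, int]] = []
    run_start = None
    for i, line in enumerate(lines, start=1):
        if line.lstrip().startswith("//"):
            if run_start is None:
                run_start = i
        else:
            # Strictly greater than the threshold: a 5-line run does not fire.
            if run_start is not None and i - run_start > threshold:
                blocks.append((run_start, i - 1))
            run_start = None
    if run_start is not None and len(lines) - run_start + 1 > threshold:
        blocks.append((run_start, len(lines)))
    return blocks
```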

#### 1.4 Unused Database Artifacts (Dead State)
Cross-reference migration files against ORM models and query patterns:
- Tables or columns defined in migration files but not referenced in any ORM model, query builder, or raw SQL
- Indexes on columns/tables that are no longer queried
- Seed data for tables that are no longer used
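A rough sketch of the migration cross-reference (the regex and names are illustrative; real migrations need dialect-aware parsing):

```python
import re

def unreferenced_tables(migration_sql: str, source_files: dict[str, str]) -> list[str]:
    """Tables created in migrations but never mentioned in application code."""
    tables = re.findall(r"CREATE TABLE\s+(\w+)", migration_sql, re.IGNORECASE)
    # Any table name absent from every source file is dead state.
    return [
        t for t in tables
        if not any(t in text for text in source_files.values())
    ]
```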

#### 1.5 Feature Flag Staleness
Identify feature flags that are permanently on or permanently off:
- Flag variables assigned a constant value (true/false) with no conditional reassignment anywhere
- Feature gate checks where the flag value is always the same at every call site
- Determination is based on static analysis of the codebase only — no commit history analysis required
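A simplified staleness check might look like this (it assumes flags follow a `*FLAG*` naming convention, which is an illustration, not a requirement):

```python
import re

def constant_flags(source: str) -> dict[str, str]:
    """Flags assigned exactly one constant boolean and never reassigned."""
    stale: dict[str, str] = {}
    for name, value in re.findall(
        r"^(\w*FLAG\w*)\s*=\s*(True|False)\b", source, re.MULTILINE
    ):
        # Any second assignment (constant or computed) disqualifies the flag.
        if len(re.findall(rf"^{name}\s*=", source, re.MULTILINE)) == 1:
            stale[name] = value
    return stale
```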

### Step 2: Stack-Aware Pattern Detection

Apply patterns based on the detected {tech_stack}. For multi-stack projects (monorepos), apply all relevant stack patterns — each stack's patterns apply only to files matching that stack's file extensions, preventing cross-contamination.
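The scoping rule can be sketched as an extension filter (the extension map is an assumption for illustration; the real mapping comes from Step 1 discovery):

```python
STACK_EXTENSIONS = {
    "Java/Spring": (".java",),
    "Node/Express": (".js", ".ts", ".mjs"),
    "Python/Django": (".py",),
    "Go/Gin": (".go",),
}

def files_for_stack(paths: list[str], stack: str) -> list[str]:
    """Scope a stack's patterns to files with that stack's extensions."""
    exts = STACK_EXTENSIONS.get(stack, ())
    return [p for p in paths if p.endswith(exts)]
```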

#### Java/Spring
- Unused `@Service`, `@Repository`, `@Component` beans — annotated classes with no `@Autowired` or constructor injection anywhere in the project
- Unused `@Scheduled` methods — scheduled task methods defined on beans that are never loaded
- Orphaned `@Entity` classes — JPA entities not referenced by any repository or query
- Unused Spring `@Configuration` beans — config classes that declare beans never injected
- Confidence: set to `medium` for Spring beans (XML config or component scan may inject dynamically)

#### Node/Express
- Unused `module.exports` or `export` declarations — exported symbols never imported elsewhere
- Orphaned route handlers — handler functions defined but not registered in any router
- Unused middleware — middleware functions defined but not applied to any route or app
- Dead `require()` or `import` in index/barrel files — re-exported modules never consumed
- Unused npm scripts — scripts in package.json never referenced by other scripts or CI

#### Python/Django
- Unused views — view functions or classes defined in views.py but not mapped in any `urlpatterns`
- Unused serializers — serializer classes defined but never used in any view or viewset
- Orphaned management commands — commands defined but never invoked in scripts or docs
- Dead Celery tasks — task functions decorated with `@shared_task` or `@app.task` but never called via `.delay()` or `.apply_async()`
- Unused Django model methods — methods on models never called outside the model file

#### Go/Gin
- Unexported functions with no callers in the same package — lowercase functions never referenced
- Unused handler functions — HTTP handler functions not registered in any router group
- Dead `init()` blocks — init functions in files that are never imported
- Unused struct methods — methods on types never called anywhere in the project
- Unused interface implementations — types implementing interfaces but never used polymorphically

### Step 3: Confidence Level Assignment

Assign confidence levels to distinguish between "definitely unused" and "possibly unused":

- **`high`** — Zero references found anywhere in the project. The code is definitely unused based on static analysis. No dynamic import, reflection, or metaprogramming patterns could reference it.
- **`medium`** — No direct references found, but dynamic import patterns exist in the project (e.g., `require(variable)`, `importlib.import_module()`, Spring component scanning). The code is possibly unused but dynamic references cannot be ruled out.
- **`low`** — The code appears unused, but reflection, metaprogramming, or runtime code generation patterns are present (e.g., Java reflection, Python `getattr()`, Go `reflect` package). Cannot confidently determine usage status.

Include a note in the `description` field explaining why certainty is limited for medium- and low-confidence findings.
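The mapping from signals to confidence level can be summarized as follows (the `not-a-gap` sentinel is a hypothetical convenience for the sketch, not part of the schema):

```python
def assign_confidence(direct_refs: int, has_dynamic_imports: bool,
                      has_reflection: bool) -> str:
    """Map static-analysis signals to a confidence level per Step 3."""
    if direct_refs > 0:
        return "not-a-gap"  # referenced code is not dead
    if has_reflection:
        return "low"        # reflection/metaprogramming present
    if has_dynamic_imports:
        return "medium"     # dynamic imports cannot be ruled out
    return "high"           # zero references, no dynamic patterns
```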

### Step 4: Format Output

Format all findings as gap entries using the standardized gap entry schema format:

- `category`: always `"dead-code"`
- `verified_by`: always `"machine-detected"`
- `id`: sequential `GAP-DEAD-CODE-001`, `GAP-DEAD-CODE-002`, etc.
- `confidence`: per Step 3 classification

Example gap entry structure:
```yaml
gap:
  id: "GAP-DEAD-CODE-001"
  category: "dead-code"
  severity: "medium"
  title: "Unused exported function processLegacyData()"
  description: "Function is exported but never imported elsewhere. Zero references — definitely unused."
  evidence:
    file: "src/utils/legacy.js"
    line: 42
  recommendation: "Remove the unused function or mark as deprecated."
  verified_by: "machine-detected"
  confidence: "high"
```

All required fields must be populated:
- `id` — unique identifier in format `GAP-DEAD-CODE-{seq}` (zero-padded 3-digit sequence)
- `category` — always `"dead-code"`
- `severity` — impact level (critical/high/medium/low)
- `title` — one-line summary (max 80 chars)
- `description` — detailed explanation including evidence and confidence rationale
- `evidence` — composite object with `file` (relative path) and `line` (line number)
- `recommendation` — actionable fix suggestion
- `verified_by` — always `"machine-detected"`
- `confidence` — detection certainty (high/medium/low)

**Severity classification:**
- **critical:** Dead code that masks active security vulnerabilities or causes resource leaks
- **high:** Large dead code blocks (>50 lines) or dead database state causing confusion
- **medium:** Unused functions, classes, or exports (standard dead code)
- **low:** Small commented-out blocks, unused imports, stale feature flags

### Step 5: Budget Control

Use structured schema format (~100 tokens per gap entry) — no prose descriptions.

- Maximum ~70 gap entries in the output (per NFR-024)
- If more than 70 findings are detected, include the 70 highest-severity entries
- When approaching the budget limit, prioritize higher-severity findings and summarize the remainder as a count
- Append a budget summary section:
```
## Budget Summary
Total gaps detected: {N}. Showing top 70 by severity. Omitted: {N-70} entries ({breakdown by severity}).
```
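The truncation rule above can be sketched as follows (names are illustrative; sorting is by severity rank only, so ties keep detection order):

```python
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def apply_budget(gaps: list[dict], limit: int = 70) -> tuple[list[dict], str]:
    """Keep the `limit` highest-severity gaps and summarize the rest."""
    ranked = sorted(gaps, key=lambda g: SEVERITY_RANK[g["severity"]])
    kept, omitted = ranked[:limit], ranked[limit:]
    if not omitted:
        return kept, ""
    breakdown: dict[str, int] = {}
    for g in omitted:
        breakdown[g["severity"]] = breakdown.get(g["severity"], 0) + 1
    summary = (
        f"Total gaps detected: {len(gaps)}. Showing top {limit} by severity. "
        f"Omitted: {len(omitted)} entries ({breakdown})."
    )
    return kept, summary
```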

Write the complete output to: `{planning_artifacts}/brownfield-scan-dead-code.md`

The output file should have this structure:
```markdown
# Brownfield Scan: Dead Code & Dead State

> Scanner: Dead Code & Dead State Scanner
> Tech Stack: {tech_stack}
> Date: {date}
> Files Scanned: {count}

## Findings

{gap entries in standardized schema format}

## Budget Summary (if applicable)

{truncation details if >70 entries}
```
```