cfsa-antigravity 2.7.0 → 2.9.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/template/.agent/kit-sync.md +3 -3
- package/template/.agent/skills/idea-extraction/SKILL.md +61 -18
- package/template/.agent/skills/prd-templates/references/architecture-completeness-checklist.md +28 -0
- package/template/.agent/skills/prd-templates/references/be-spec-classification.md +41 -0
- package/template/.agent/skills/prd-templates/references/bootstrap-verification-protocol.md +50 -0
- package/template/.agent/skills/prd-templates/references/constraint-exploration.md +41 -0
- package/template/.agent/skills/prd-templates/references/decision-confirmation-protocol.md +68 -0
- package/template/.agent/skills/prd-templates/references/decision-propagation.md +121 -0
- package/template/.agent/skills/prd-templates/references/domain-exhaustion-criteria.md +37 -0
- package/template/.agent/skills/prd-templates/references/engagement-tier-protocol.md +58 -0
- package/template/.agent/skills/prd-templates/references/evolution-layer-guidance.md +91 -0
- package/template/.agent/skills/prd-templates/references/expansion-modes.md +27 -0
- package/template/.agent/skills/prd-templates/references/folder-seeding-protocol.md +77 -0
- package/template/.agent/skills/prd-templates/references/input-classification.md +23 -0
- package/template/.agent/skills/prd-templates/references/map-guard-protocol.md +44 -0
- package/template/.agent/skills/prd-templates/references/persona-completeness-gate.md +20 -0
- package/template/.agent/skills/prd-templates/references/shard-boundary-analysis.md +76 -0
- package/template/.agent/skills/prd-templates/references/write-verification-protocol.md +57 -0
- package/template/.agent/workflows/create-prd-architecture.md +17 -23
- package/template/.agent/workflows/create-prd-compile.md +31 -22
- package/template/.agent/workflows/create-prd-design-system.md +18 -14
- package/template/.agent/workflows/create-prd-security.md +22 -24
- package/template/.agent/workflows/create-prd-stack.md +20 -11
- package/template/.agent/workflows/create-prd.md +27 -99
- package/template/.agent/workflows/decompose-architecture-structure.md +14 -4
- package/template/.agent/workflows/decompose-architecture-validate.md +29 -80
- package/template/.agent/workflows/decompose-architecture.md +27 -60
- package/template/.agent/workflows/evolve-contract.md +7 -2
- package/template/.agent/workflows/evolve-feature-cascade.md +34 -78
- package/template/.agent/workflows/evolve-feature-classify.md +22 -56
- package/template/.agent/workflows/ideate-discover.md +89 -100
- package/template/.agent/workflows/ideate-extract.md +42 -138
- package/template/.agent/workflows/ideate-validate.md +57 -133
- package/template/.agent/workflows/ideate.md +32 -19
- package/template/.agent/workflows/implement-slice-setup.md +15 -5
- package/template/.agent/workflows/implement-slice-tdd.md +21 -5
- package/template/.agent/workflows/plan-phase-write.md +30 -1
- package/template/.agent/workflows/propagate-decision-apply.md +23 -90
- package/template/.agent/workflows/propagate-decision-scan.md +20 -91
- package/template/.agent/workflows/remediate-pipeline-execute.md +6 -1
- package/template/.agent/workflows/validate-phase-quality.md +14 -3
- package/template/.agent/workflows/validate-phase-readiness.md +1 -1
- package/template/.agent/workflows/verify-infrastructure.md +2 -0
- package/template/.agent/workflows/write-architecture-spec-deepen.md +8 -2
- package/template/.agent/workflows/write-architecture-spec-design.md +11 -14
- package/template/.agent/workflows/write-be-spec-classify.md +26 -104
- package/template/.agent/workflows/write-be-spec-write.md +49 -3
- package/template/.agent/workflows/write-be-spec.md +1 -1
- package/template/.agent/workflows/write-fe-spec-write.md +62 -3
- package/template/.agent/workflows/write-fe-spec.md +1 -1
package/package.json
CHANGED

package/template/.agent/kit-sync.md
CHANGED

@@ -1,6 +1,6 @@
 # Kit Sync State
 
 upstream: https://github.com/RepairYourTech/cfsa-antigravity
-last_synced_commit:
-last_synced_at: 2026-03-
-kit_version: 2.
+last_synced_commit: 9f4ba4f09f8234f0e81f568f66f6394148c0e383
+last_synced_at: 2026-03-17T05:20:16Z
+kit_version: 2.9.0
package/template/.agent/skills/idea-extraction/SKILL.md
CHANGED

@@ -124,9 +124,10 @@ When processing a document, scan for these signals before creating any domain fi
 
 When hub-and-spoke is identified:
 
-- **The hub surface owns shared domains.** Device History, Payments, Certification — these live INSIDE the hub surface's domain tree, not in a separate `shared/` folder.
+- **The hub surface owns shared API/data domains.** Device History, Payments, Certification — these live INSIDE the hub surface's domain tree, not in a separate `shared/` folder.
 - **Spoke surfaces reference hub domains via CX.** Desktop's CX files say "Feature X consumes web/domain/feature via API."
-- **
+- **Spoke surfaces own their operational domains.** A domain belongs on the surface where it is **primarily experienced and operated**, not where the data lives. POS is a desktop domain even though it calls the hub's Stripe API. Inventory is a desktop domain even though it syncs to the hub's database.
+- **Proportionality check**: The hub may have more domains than any single spoke, but spokes must reflect the FULL experience their users have. If a spoke user's daily workflow involves 10+ distinct capabilities, 2-3 domains is a red flag — re-examine whether concepts were incorrectly collapsed into the hub.
 
 ### Peer Mode Implications
 
@@ -159,19 +160,35 @@ Does it belong to an EXISTING domain or sub-domain?
 │ │
 │ NO ──► It's a FEATURE ──► create .md file inside existing parent
 │
-NO ──►
+NO ──► (Surface Placement — two questions, not one)
 │
-
+Q1: "Where is this PRIMARILY experienced/operated?"
+│ (Which surface's user interacts with this most?)
 │
-
-
-
-
-
+Q2: "Which hub APIs does it consume?"
+│ (What data/services does it call from the hub?)
+│
+├── Primarily experienced on a SPOKE surface
+│ ──► It's a SPOKE DOMAIN ──► create in that spoke surface
+│ ──► Log hub API dependencies in CX (Q2 answer)
+│
+├── Primarily experienced on the HUB surface
+│ ──► It's a HUB DOMAIN ──► create in hub surface
+│
+├── No primary surface (pure API/data layer, no direct user interaction)
+│ ──► It's a HUB DOMAIN (infrastructure) ──► create in hub
+│
+└── Equally used across multiple surfaces (rare)
+ ──► It's a shared domain ──► create in shared/ (peer) or hub (hub-and-spoke)
 
 **WHEN UNCERTAIN: Ask the user.** Never assume placement.
 ```
 
+> **Key principle**: A domain belongs where it is PRIMARILY USED, not where the data lives.
+> POS is a desktop domain even though payments go through the hub's Stripe API.
+> Inventory is a desktop domain even though stock data is in the hub's database.
+> The hub API dependency is logged as a CX reference, not as a classification signal.
+
 ### Sub-Domain vs Feature Test
 
 The key question: **"Does this thing have its own internal features that interact with each other?"**
@@ -187,7 +204,8 @@ The key question: **"Does this thing have its own internal features that interac
 
 | ❌ Wrong | ✅ Right |
 |----------|---------|
-| Creating "
+| Creating "Print Receipt" as a new domain | Recognizing it's a feature within POS/Payments |
+| Classifying POS as a hub domain because it calls Stripe | Classifying POS as a desktop domain because the shop tech primarily operates it there |
 | Creating a domain for every feature mentioned | Grouping related features under their parent domain/sub-domain |
 | Pre-creating 4 levels of empty folders | Creating depth reactively as complexity is discovered |
 | Putting a shared domain in `shared/` when hub-and-spoke is active | Putting it inside the hub surface, with CX references from spokes |
@@ -196,6 +214,7 @@ The key question: **"Does this thing have its own internal features that interac
 | Collapsing a rich multi-system area into a single domain | Recognizing 2+ interacting capabilities = sub-domain or promoted domain |
 | Creating "Data Architecture" or "Tech Stack" as product domains | Noting architectural concerns for `/create-prd` — they are not product domains |
 | Creating folders before presenting the classification to the user | Always presenting the classification table and getting user confirmation first |
+| Classifying everything that touches the hub API as a hub domain | Using the "primarily experienced" test — hub API usage = CX, not classification |
 
 ---
 
@@ -283,6 +302,12 @@ Before starting, classify what the user has provided and select the right mode.
 > and document structure tell you where to LOOK for concepts. They do NOT tell you what those
 > concepts ARE (domain, sub-domain, feature, cross-cut, or not-a-product-domain). The Node
 > Classification Gate determines what each concept is. NEVER mirror source headings as domains.
+>
+> **Document surface signals ARE classification input.** When a concept appears WITHIN a
+> surface-specific section of the document (e.g., "Shop Software", "Mobile App", "Desktop"),
+> that is a strong signal for surface placement. The section heading doesn't dictate the
+> domain NAME, but it DOES inform which surface the concept is primarily experienced on.
+> Use this as input to the "primarily experienced" question in the classification gate.
 
 **Process — Phase 1: Interview the Document** (silent — no user interaction)
 
@@ -293,22 +318,25 @@ Before starting, classify what the user has provided and select the right mode.
 - Source location (line numbers or section reference — this is the citation)
 - What the document says about it (summary of the content)
 - How many distinct interacting capabilities it contains (the sub-domain test)
-
+- Which surface section of the document it appeared in (if any) — this is a surface placement signal
+- Whether it is cross-cutting (affects multiple domains regardless of surface)
 Do NOT use source document headings as concept names unless they happen to be accurate after classification. The concept name must reflect what the thing actually IS after gate analysis.
 4. **BLOCKING GATE — Classify every concept.** Run the Node Classification Gate on EACH extracted concept individually. For each concept, answer the gate questions explicitly:
 - "Does it belong to an EXISTING domain?" → if YES, it's a sub-domain or feature of that domain
 - "Does it have 2+ distinct capabilities that interact with each other?" → if YES, it's a sub-domain (folder); if NO, it's a feature (file)
-- "
+- "Where is this PRIMARILY experienced/operated?" → determines surface placement
+- "Which hub APIs does it consume?" → logged as CX, NOT as classification signal
 - "Is it an architectural concern, not a product domain?" → note for `/create-prd`, do NOT create a domain
 - "Is it a cross-cutting concern?" → log in CX files, do NOT create a domain
 Produce a **classification table**:
 
-| Concept | Source Location | Gate Result | Reasoning |
-
-| Inventory System | lines 982–1017 | Sub-domain of Shop
-
-
-
+| Concept | Source Location | Gate Result | Primary Surface | Also Used On | Reasoning |
+|---------|----------------|-------------|----------------|-------------|----------|
+| Inventory System | lines 982–1017 (§Shop Software) | Sub-domain of Shop Operations | Desktop | Web (read-only dashboard) | 4+ interacting capabilities, primarily operated by shop tech at the counter |
+| POS / Payments | lines 1018–1050 (§Shop Software) | Domain in Desktop | Desktop | Hub (Stripe API) | Shop tech runs transactions at register; hub provides payment processing API |
+| In-Platform Messaging | lines 759–788 | Feature of Consumer Platform | Web | Mobile | Single capability with role-based routing, no internal sub-features |
+| Data Architecture | lines 1261–1282 | NOT a product domain | — | — | Architectural concern — database selection, sync strategy → note for /create-prd |
+| Analytics & Insights | lines 730–758 | Cross-cutting concern | — | All surfaces | Three tiers serving all domains, no exclusive features of its own |
 
 5. **BLOCKING GATE — Build proposed domain map.** From the classification table, group concepts into a proposed domain hierarchy:
 - Only concepts classified as **domain** become domain folders
@@ -316,6 +344,21 @@ Before starting, classify what the user has provided and select the right mode.
 - Concepts classified as **feature** become leaf files inside their parent
 - Concepts classified as **cross-cut** go in the appropriate CX files
 - Concepts classified as **not-a-product-domain** are noted in `meta/constraints.md` for `/create-prd`
+
+5.5. **BLOCKING GATE — Surface Distribution Audit.** Before presenting to the user, verify the classification didn't starve any spoke:
+
+For each spoke surface:
+1. List all concepts the document describes within that surface's sections
+2. Count how many were classified as spoke domains vs. hub domains
+3. For each hub-classified concept that appeared in a spoke section: **re-run the "primarily experienced" test**. If the concept is primarily operated on the spoke, reclassify it as a spoke domain with hub CX
+4. **Red flag**: If a spoke has fewer than 5 domains AND the document dedicates 100+ lines to that spoke → the classification is likely wrong. Re-examine every concept from that spoke section
+
+5.6. **Spoke Persona Walk-Through.** For each spoke surface, identify its primary persona(s) and mentally walk through their daily workflow:
+- "If I'm a [persona] sitting at the [surface], what are ALL the things I do in a typical day?"
+- Enumerate every distinct workflow: each workflow with 2+ interacting capabilities is a candidate spoke domain
+- Cross-reference against the classification table: any daily workflow capability that was classified as hub is suspect
+- Reclassify as needed — the walk-through is the final validation before presenting to user
+
 6. **Identify gaps** — interview questions the document doesn't answer. These become Phase 2 interview questions.
 
 **Process — Phase 2: Present and Interview the User**
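The red-flag heuristic in the added Surface Distribution Audit (step 5.5) is mechanical enough to sketch in a few lines. This is an illustration only — `spoke_red_flags`, the tuple shape, and the example spoke names are not part of the kit:

```python
def spoke_red_flags(spokes):
    """Apply the step 5.5 red flag from the Surface Distribution Audit.

    Each entry is (spoke_name, classified_domain_count, doc_lines_dedicated).
    A spoke with fewer than 5 classified domains that the source document
    covers in 100+ lines is flagged for re-examination.
    """
    return [name for name, domains, lines in spokes
            if domains < 5 and lines >= 100]

# A document-heavy spoke that produced only 3 domains gets flagged;
# a thin spoke section with few domains does not.
flagged = spoke_red_flags([("desktop", 3, 250), ("web", 12, 400), ("mobile", 2, 40)])
print(flagged)  # → ['desktop']
```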
package/template/.agent/skills/prd-templates/references/architecture-completeness-checklist.md
ADDED

@@ -0,0 +1,28 @@
+# Architecture Completeness Checklist
+
+Apply after the architecture rubric self-check. All items must pass.
+
+## Feature Coverage
+- [ ] Every "Must Have" feature from `ideation-index.md` MoSCoW list has a home in the architecture
+
+## Security Coverage
+- [ ] Security model addresses all compliance constraints from `ideation/meta/constraints.md`
+- [ ] Compliance-heavy domains have their own top-level sections (not buried as sub-bullets)
+
+## Stack Coverage
+- [ ] All relevant skills installed for chosen stack
+
+## Standards Coverage
+- [ ] Validation command in Engineering Standards matches `AGENTS.md` validation command
+
+## Multi-Surface (if applicable)
+- [ ] Sync strategy defined
+- [ ] Data ownership clear
+- [ ] Conflict resolution specified
+
+## Cross-Platform (if applicable)
+- [ ] Platform-specific considerations documented for each target OS
+
+## Design System
+- [ ] Design system document exists at `docs/plans/design-system.md`
+- [ ] All seven decision areas are filled (no placeholders)
package/template/.agent/skills/prd-templates/references/be-spec-classification.md
ADDED

@@ -0,0 +1,41 @@
+# BE Spec Classification Guide
+
+Use when classifying an IA shard during `/write-be-spec-classify` Step 2.
+
+## Classification Types
+
+| Classification | Description | BE Spec Output | Detection Criteria |
+|---------------|-------------|----------------|-------------------|
+| **Feature domain** | Defines user interactions, data models, and API-facing behavior for a single cohesive domain | 1 BE spec | Has data model + user flows + access model that imply API endpoints |
+| **Multi-domain** | Covers multiple distinct backend sub-systems with independent APIs | N BE specs (split along sub-feature boundaries) | Section headers map to independent API surfaces; data models don't overlap; could be developed by different teams |
+| **Cross-cutting** | Defines shared patterns consumed by all feature specs (auth, API conventions, error handling) | 1 cross-cutting BE spec (`00-*`) | Content is about "how all endpoints work" not "what this feature does" |
+| **Structural reference** | Maps structure, naming, or routing without defining API behavior | 0 BE specs | No data model, no user flows, no endpoints — just reference tables |
+| **Composite** | Contains both a structural reference section AND feature behavior | Depends — feature portion may belong in another shard's BE spec | Look for cross-references pointing feature content to its owning domain |
+
+## Multi-Domain Split Heuristic
+
+Before classifying as multi-domain, build a **sub-feature endpoint inventory**:
+
+| Sub-feature | Expected endpoints | Data model(s) | Independent? |
+|-------------|-------------------|---------------|-------------|
+| [sub-feature] | `POST /api/...`, `GET /api/...` | [table/collection names] | [Yes/No] |
+
+**Split criterion**: Two or more independent groups each have their own data model and could be assigned to a different developer without coordination → split. Section header count alone is NOT the criterion — independence of data models and API surfaces is.
+
+## Endpoint Completeness Reconciliation Table
+
+Use during `/write-be-spec-write` Step 7 before writing spec sections:
+
+| Sub-feature | Expected endpoints | Specced? | Notes |
+|-------------|-------------------|----------|-------|
+| [sub-feature] | `POST /api/...` | ✅ | — |
+| [sub-feature] | `GET /api/...` | ❌ | [Deferred to Phase N — reason] |
+
+**Rule**: For every unspecced expected endpoint, either add it to the spec or add an explicit `[Deferred to Phase N — reason]` note. An empty Notes column for an unspecced endpoint is a spec failure.
+
+## Naming Convention
+
+- Same number prefix as the IA shard source
+- Kebab-case feature name
+- Multi-domain splits: append letter suffix (e.g., `09a-chat-api.md`, `09b-agent-flow-api.md`)
+- Cross-cutting specs: `00-` prefix (e.g., `00-api-conventions.md`)
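The reconciliation rule added above ("every unspecced expected endpoint needs an explicit deferral note, or the spec fails") can be sketched as a check over the table rows. A sketch only — `reconciliation_failures`, the row tuple, and the example endpoints (`/api/messages`, `/api/flows`) are hypothetical, not kit names:

```python
def reconciliation_failures(rows):
    """Return (sub_feature, endpoint) pairs that violate the Rule.

    Each row is (sub_feature, endpoint, specced, notes). An unspecced
    endpoint is acceptable only with an explicit note such as
    '[Deferred to Phase N — reason]'; an empty Notes column for an
    unspecced endpoint is a spec failure.
    """
    return [(sub, ep) for sub, ep, specced, notes in rows
            if not specced and not notes.strip()]

rows = [
    ("chat", "POST /api/messages", True, "—"),
    ("chat", "GET /api/messages", False, "[Deferred to Phase 2 — reason]"),
    ("agent-flow", "DELETE /api/flows", False, ""),
]
# Only the row with no deferral note fails.
print(reconciliation_failures(rows))  # → [('agent-flow', 'DELETE /api/flows')]
```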
package/template/.agent/skills/prd-templates/references/bootstrap-verification-protocol.md
ADDED

@@ -0,0 +1,50 @@
+# Bootstrap Verification Protocol
+
+**Purpose**: Every bootstrap invocation MUST be verified. This protocol replaces all inline "fire bootstrap" instructions with a hard-gated fire→verify→stop cycle.
+
+---
+
+## Procedure
+
+### 1. Fire
+
+Read `.agent/workflows/bootstrap-agents.md` and execute with the provided key(s).
+
+### 2. Verify
+
+After bootstrap returns, verify that the target was actually filled:
+
+| Key Type | Where to Verify | How to Verify |
+|----------|----------------|---------------|
+| Language/Framework/DB/ORM | Surface stack map in `.agent/instructions/tech-stack.md` | The cell for the provided key's surface row is no longer empty |
+| CI/CD, Hosting, Auth, Security | Cross-cutting section of surface stack map | The cell is no longer empty |
+| Design direction | `.agent/skills/brand-guidelines/SKILL.md` | `{{DESIGN_DIRECTION}}` no longer appears as literal text |
+| Project structure | `.agent/instructions/structure.md` | `{{PROJECT_STRUCTURE}}` no longer appears as literal text |
+| Dev tooling | `.agent/instructions/commands.md` | Template values are replaced with real commands |
+| New dependency skill | `.agent/skills/[skill-name]/SKILL.md` | The skill directory exists and `SKILL.md` is readable |
+
+### 3. Hard Gate
+
+> **HARD STOP** — If any verification check fails, do NOT proceed. Report to the user:
+>
+> "Bootstrap was invoked with `[KEY]=[VALUE]` but verification failed:
+> - Expected `[target file]` cell `[cell name]` to contain `[VALUE]`
+> - Actual: `[what was found — empty, still placeholder, etc.]`
+>
+> This is a bootstrap bug. Do not proceed until resolved."
+
+### 4. Log
+
+After successful verification, note in the current workflow step: `Bootstrap verified: [KEY]=[VALUE] → [target cell] confirmed filled.`
+
+---
+
+## When to Use
+
+Any workflow step that says:
+- "fire bootstrap"
+- "invoke `/bootstrap-agents`"
+- "execute bootstrap"
+- "read bootstrap-agents.md and execute"
+
+MUST follow this protocol. There are no exceptions. A bootstrap fire without verification is an enforcement failure.
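For the placeholder-style rows of the Verify table (design direction, project structure), the step 2 check reduces to "the literal placeholder no longer appears in the target file". A minimal sketch, assuming a placeholder-based target — `placeholder_replaced` is an illustrative helper, not something the kit ships:

```python
from pathlib import Path

def placeholder_replaced(target: Path, placeholder: str) -> bool:
    """Step 2 check for placeholder-style targets.

    The bootstrap fire is verified only if the literal placeholder
    (e.g. '{{DESIGN_DIRECTION}}') no longer appears in the file.
    """
    return placeholder not in target.read_text(encoding="utf-8")

# Sketch of the hard gate around a design-direction key:
# if not placeholder_replaced(Path(".agent/skills/brand-guidelines/SKILL.md"),
#                             "{{DESIGN_DIRECTION}}"):
#     raise SystemExit("HARD STOP: bootstrap verification failed")
```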
package/template/.agent/skills/prd-templates/references/constraint-exploration.md
ADDED

@@ -0,0 +1,41 @@
+# Constraint Exploration Questions
+
+Use during `/ideate-validate` Step 8 to explore constraints with the user. Write answers to `docs/plans/ideation/meta/constraints.md`.
+
+## Core Constraint Categories
+
+1. **Budget** — Self-funded? VC-backed? Monthly infrastructure ceiling?
+2. **Timeline** — Launch target? Phased rollout?
+3. **Team** — Solo dev? Small team? Skill gaps?
+4. **Compliance** — GDPR, PCI, COPPA, HIPAA, SOC 2? Age restrictions?
+5. **Performance** — Expected scale (users, requests, data)? Latency requirements?
+6. **Surface classification validation** — Verify the structural classification from `ideation-index.md` (set in `ideate-extract` Step 1.3) still holds. Have any new surfaces been discovered during exploration? Has the project shape changed?
+
+## Deep Think Prompt
+
+> "Based on the product type and user personas, what constraints would I expect that haven't been mentioned? For example, does this product handle payments (PCI)? Does it serve minors (COPPA)? Does it store health data (HIPAA)?"
+
+## Tier-Specific Behavior
+
+| Tier | Behavior |
+|---|---|
+| **Interactive/Hybrid** | Present each constraint question to user, wait for answers. Write each confirmed constraint immediately. |
+| **Auto** | Self-interview using Deep Think. Write all answers with reasoning immediately. Mark each answer as `[AUTO-CONFIRMED]` for traceability. |
+
+## Success Metrics (Per Persona)
+
+For each persona, define concrete success metrics:
+- What metric proves this product solves the persona's problem?
+- What's the target number? (specific — not "good response times")
+- What's the measurement method?
+
+Write to `ideation-index.md` (or link to domain files where the metric applies).
+
+## Competitive Positioning
+
+If not already explored during `/ideate-discover`:
+- Name 2-4 direct competitors
+- For each: what they do well, where they fail, how we differentiate
+- What's the moat? (network effects, data, expertise, switching costs)
+
+Write to `docs/plans/ideation/meta/competitive-landscape.md`.
package/template/.agent/skills/prd-templates/references/decision-confirmation-protocol.md
ADDED

@@ -0,0 +1,68 @@
+# Decision Confirmation Protocol
+
+**Purpose**: Every decision that gets written to a spec or instruction file MUST be explicitly confirmed before writing. This protocol replaces all "refine based on discussion" patterns.
+
+---
+
+## Procedure
+
+### 1. Present
+
+Present the decision to the user with:
+- The specific question or choice
+- Your recommendation and reasoning
+- The options (if applicable)
+- Where the decision will be written (file + section)
+
+### 2. Wait for Confirmation
+
+**HARD GATE** — Do NOT write anything until the user explicitly confirms.
+
+Acceptable confirmations: explicit "yes", "approved", "confirmed", "go ahead", "looks good", selection of an option, or equivalent affirmative.
+
+NOT acceptable: silence, moving on to other topics, ambiguous responses. If ambiguous → ask again.
+
+### 3. Handle Requests for Changes
+
+If the user requests changes:
+1. Apply the requested changes to your proposed content
+2. Re-present the updated version
+3. Return to Step 2 — wait for confirmation again
+
+Do NOT write a partially-confirmed version. The full content must be confirmed before writing.
+
+### 4. Write
+
+After confirmation, write the decision to the target file + section.
+
+### 5. Verify Write
+
+Follow the write verification protocol (`.agent/skills/prd-templates/references/write-verification-protocol.md`):
+- Read back the target file
+- Verify the section header exists and content matches what was confirmed
+- If missing or mismatched → write failed, retry
+
+---
+
+## Tier-Aware Variant (for /create-prd workflows)
+
+When the current engagement tier is known:
+
+| Tier | Behavior |
+|------|----------|
+| **Interactive** | Full present → wait → confirm → write cycle per decision |
+| **Hybrid** | Group related decisions, present batch, confirm batch, write batch |
+| **Auto** | Use Deep Think reasoning, write with `[AUTO-CONFIRMED]` tag, present all auto-confirmed decisions for batch review at end of shard |
+
+**Even in Auto mode**, the user reviews and can reject auto-confirmed decisions at the batch review step. Rejection triggers re-presentation at Interactive tier for that decision.
+
+---
+
+## What This Replaces
+
+This protocol replaces all instances of:
+
+> ~~"Refine based on discussion before proceeding"~~
+> ~~"Adjust based on feedback"~~
+
+Those patterns allow agents to skip refinement and write unconfirmed content. This protocol requires explicit confirmation before any write.
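The step 5 read-back check ("section header exists and content matches what was confirmed") can be sketched as a small helper. A sketch under stated assumptions — `verify_write` and its signature are illustrative, and a real check might compare more than a substring:

```python
from pathlib import Path

def verify_write(target: Path, header: str, confirmed: str) -> bool:
    """Step 5 read-back: re-read the target file and confirm the write.

    The section header must exist, and the confirmed content must appear
    after it; a missing file, missing header, or mismatched content all
    count as a failed write (→ retry).
    """
    try:
        text = target.read_text(encoding="utf-8")
    except OSError:
        return False
    _, sep, rest = text.partition(header)
    return bool(sep) and confirmed in rest
```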
@@ -0,0 +1,121 @@
|
|
|
1
|
+
# Decision Propagation Reference
|
|
2
|
+
|
|
3
|
+
Templates, formats, and scan procedures for `/propagate-decision` workflows.
|
|
4
|
+
|
|
5
|
+
## Decision Type Sources
|
|
6
|
+
|
|
7
|
+
| Decision Type | Source Document | Downstream Scope |
|
|
8
|
+
|--------------|-----------------|-------------------|
|
|
9
|
+
| **structure** | `.agent/instructions/structure.md` | All IA shards, BE specs, FE specs |
|
|
10
|
+
| **tech-stack** | `.agent/instructions/tech-stack.md` + architecture doc | All IA shards, BE specs, FE specs |
|
|
11
|
+
| **auth-model** | Architecture doc (auth/security section) | All BE specs with middleware, all FE specs with auth flows |
|
|
12
|
+
| **data-placement** | `docs/plans/data-placement-strategy.md` | All IA shards with data models, all BE specs with storage |
|
|
13
|
+
| **patterns** | `.agent/instructions/patterns.md` | All IA shards, BE specs with implementation patterns, FE specs with component patterns |
|
|
14
|
+
| **error-architecture** | Architecture doc `## Error Architecture` (5 sub-sections) | All BE specs, all FE specs |
|
|
15
|
+
|
|
16
|
+
## Selection Menu Format
|
|
17
|
+
|
|
18
|
+
```
|
|
19
|
+
Decision propagation pre-scan:
|
|
20
|
+
|
|
21
|
+
[1] structure — structure.md — inconsistencies detected in X documents
|
|
22
|
+
[2] tech-stack — tech-stack.md — inconsistencies detected in X documents
|
|
23
|
+
[3] auth-model — architecture doc — inconsistencies detected in X documents
|
|
24
|
+
[4] data-placement — data-placement-strategy.md — inconsistencies detected in X documents
|
|
25
|
+
[5] patterns — patterns.md — inconsistencies detected in X documents
|
|
26
|
+
[6] error-architecture — architecture doc ## Error Architecture — inconsistencies detected in X documents
|
|
27
|
+
|
|
28
|
+
[A] All with inconsistencies
|
|
29
|
+
[Q] Quit — no propagation needed
|
|
30
|
+
```
|
|
31
|
+
|
|
32
|
+
## Error-Architecture Scan Procedure
|
|
33
|
+
|
|
34
|
+
**Source extraction:**
|
|
35
|
+
- Locate `docs/plans/*-architecture-design.md` using glob
|
|
36
|
+
- Read `## Error Architecture` and extract locked decisions from: `### Global Error Envelope`, `### Error Propagation Chain`, `### Unhandled Exception Strategy`, `### Client Fallback Contract`, `### Error Boundary Strategy`
|
|
37
|
+
|
|
38
|
+
**BE spec conformance** (scan `docs/plans/be/`):
|
|
39
|
+
- Check error envelope by locked name and canonical field names
|
|
40
|
+
- Check propagation chain rules
|
|
41
|
+
- Classify: explicit contradiction / implicit assumption / consistent
|
|
42
|
+
|
|
43
|
+
**FE spec conformance** (scan `docs/plans/fe/`):
|
|
44
|
+
- Check client fallback contract for surface type
|
|
45
|
+
- Check error boundary placement
|
|
46
|
+
- Same classification
|
|
47
|
+
|
|
## Contradiction Display Format

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Contradiction N of X
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Document: [path]
Section: [section name]
Current: [current text]
Locked: [locked value]
Fix to: [proposed fix]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Y] Apply fix and move to next
[n] Skip this item
[skip] Skip entire document
[stop-and-save] Save progress and stop
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

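The review loop behind this display can be sketched as follows. The finding shape and the `ask`/`apply_fix` callables are illustrative, not part of the workflow contract:

```python
def review_contradictions(findings, ask, apply_fix):
    """Walk findings one at a time, honoring Y / n / skip / stop-and-save."""
    skipped_docs = set()
    results = {"fixed": 0, "skipped": 0}
    for i, f in enumerate(findings, start=1):
        # [skip] suppresses every remaining finding in the same document.
        if f["document"] in skipped_docs:
            results["skipped"] += 1
            continue
        answer = ask(f"Contradiction {i} of {len(findings)}: "
                     f"{f['current']} -> {f['fix_to']}")
        if answer == "Y":
            apply_fix(f)
            results["fixed"] += 1
        elif answer == "skip":
            skipped_docs.add(f["document"])
            results["skipped"] += 1
        elif answer == "stop-and-save":
            break  # progress so far is persisted by the caller
        else:  # "n"
            results["skipped"] += 1
    return results
```

The assumption review below follows the same loop, with `apply_fix` replaced by a flag-for-`/resolve-ambiguity` handler.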
## Assumption Display Format

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Assumption N of X
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Document: [path]
Section: [section name]
Current: [current text]
Issue: [ambiguity description]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Y] Flag for /resolve-ambiguity
[n] Ignore
[skip] Skip entire document
[stop-and-save] Save progress and stop
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

## Impact Report Format

### Explicit Contradictions

| Document | Line | Current Text | Locked Value | Decision Type |
|----------|------|--------------|--------------|---------------|
| [path] | [N] | [text] | [value] | [type] |

### Implicit Assumptions

| Document | Line | Current Text | Concern | Decision Type |
|----------|------|--------------|---------|---------------|
| [path] | [N] | [text] | [concern] | [type] |

### Summary

- **Explicit contradictions**: X items across Y documents
- **Implicit assumptions**: X items across Y documents
- **Consistent references**: X items (no action needed)

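Rendering the report above from classified findings is mechanical; a minimal sketch, assuming the finding dicts carry keys matching the table columns:

```python
def render_impact_report(contradictions, assumptions, consistent_count):
    """Build the impact report markdown from classified findings."""
    lines = ["### Explicit Contradictions", "",
             "| Document | Line | Current Text | Locked Value | Decision Type |",
             "|----------|------|--------------|--------------|---------------|"]
    for f in contradictions:
        lines.append(f"| {f['doc']} | {f['line']} | {f['text']} | {f['locked']} | {f['type']} |")
    lines += ["", "### Implicit Assumptions", "",
              "| Document | Line | Current Text | Concern | Decision Type |",
              "|----------|------|--------------|---------|---------------|"]
    for f in assumptions:
        lines.append(f"| {f['doc']} | {f['line']} | {f['text']} | {f['concern']} | {f['type']} |")
    # Document counts are derived, not stored, so the summary never drifts
    # from the tables above it.
    c_docs = {f["doc"] for f in contradictions}
    a_docs = {f["doc"] for f in assumptions}
    lines += ["", "### Summary", "",
              f"- **Explicit contradictions**: {len(contradictions)} items across {len(c_docs)} documents",
              f"- **Implicit assumptions**: {len(assumptions)} items across {len(a_docs)} documents",
              f"- **Consistent references**: {consistent_count} items (no action needed)"]
    return "\n".join(lines)
```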
## Completion Summary Format

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Propagation Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Fixed: X contradictions across Y documents
Flagged: X implicit assumptions for /resolve-ambiguity
Skipped: X items
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Recommended next steps:
1. Run `/resolve-ambiguity` to address the X flagged assumptions
   (see docs/audits/propagation-ambiguity-[date].md)
2. Run `/remediate-pipeline` to audit all layers with the corrected specs
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

@@ -0,0 +1,37 @@
# Domain Exhaustion Criteria

Final validation gate before vision compilation. All criteria must pass.

| Check | Criteria | Action if Fail |
|-------|----------|----------------|
| All leaf nodes ≥ `[DEEP]` | Every feature file in the tree is `[DEEP]` or `[EXHAUSTED]` | Drill remaining feature files |
| Status propagation correct | Parent nodes reflect their children's status | Update parent indexes |
| All Must Have features ≥ Level 2 | Every Must Have has sub-features AND edge cases AND Role Lens | Deep Think + drill |
| Deep Think zero hypotheses | Final Deep Think pass across ALL leaf nodes yields no new hypotheses | Present any new hypotheses, drill if confirmed |
| All CX files clean | No Medium/Low confidence entries remain at any level — all are High or rejected | Run synthesis questions on pending pairs |
| Role Lens complete | Every feature file has a populated Role Lens table | Fill missing Role Lens entries |
| User confirmation | User explicitly confirms "nothing else" for each domain | Ask for each under-explored domain |

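The first two rows of the table can be checked mechanically over the fractal tree. A sketch, assuming a simple nested-dict tree; `[SURFACE]` as the name for a not-yet-deep status is an assumption, as this document only names `[DEEP]` and `[EXHAUSTED]`:

```python
# Statuses ordered by depth of exploration; [SURFACE] is a hypothetical
# placeholder for any status below [DEEP].
ORDER = {"[SURFACE]": 0, "[DEEP]": 1, "[EXHAUSTED]": 2}

def check_tree(node):
    """Return (status, failures) for a node {'name', 'status', 'children'}."""
    failures = []
    if not node.get("children"):
        # Leaf gate: every leaf must be at least [DEEP].
        if ORDER[node["status"]] < ORDER["[DEEP]"]:
            failures.append(f"leaf {node['name']} below [DEEP]")
        return node["status"], failures
    child_statuses = []
    for child in node["children"]:
        s, f = check_tree(child)
        child_statuses.append(s)
        failures += f
    # Propagation gate: a parent is only as exhausted as its least-explored child.
    expected = min(child_statuses, key=ORDER.get)
    if node["status"] != expected:
        failures.append(f"parent {node['name']} should be {expected}")
    return node["status"], failures
```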
## Execution Procedure

1. Walk the fractal tree. For each leaf node below `[DEEP]`:
   - "Feature [X] in [domain] is still at [status]. Drill deeper or intentionally minimal?"
   - If "drill" → return to `/ideate-discover`
   - If "intentionally minimal" → note in feature file and proceed

2. Run a **final Deep Think pass**: for each `[DEEP]` leaf node, apply the four Deep Think questions. Present any new hypotheses.
   - If confirmed → drill, update feature files
   - If zero hypotheses → mark `[EXHAUSTED]`, propagate status upward

3. Walk ALL CX files at every level. Resolve any Medium/Low confidence entries.

4. Verify all feature files have populated Role Lens tables.

5. Update `ideation-index.md` progress summary with final counts (total leaf nodes, exhausted count, CX entries confirmed).

## Proportionality Check

- **Rich inputs**: Total domain file content (all files combined) should be at least 30% of the original source document's line count. If it falls short, identify what was lost.
- **All inputs**: Each domain with `[DEEP]` or `[EXHAUSTED]` status should have at least 3 sub-areas drilled with edge cases.

If the proportionality check fails, return to `/ideate-discover` for the under-explored areas.
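The two thresholds above (30% of source line count, 3 drilled sub-areas) can be sketched as a single check; the input shapes here are illustrative assumptions:

```python
def proportionality_ok(source_line_count, domain_line_counts, drilled_subareas):
    """Apply the rich-input ratio check and the per-domain sub-area minimum."""
    # Rich inputs: combined domain content must reach 30% of the source doc.
    total = sum(domain_line_counts.values())
    if total < 0.3 * source_line_count:
        return False, "domain content under 30% of source"
    # All inputs: each deep/exhausted domain needs >= 3 drilled sub-areas.
    thin = [d for d, n in drilled_subareas.items() if n < 3]
    if thin:
        return False, f"under-drilled domains: {', '.join(sorted(thin))}"
    return True, "proportionality holds"
```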
@@ -0,0 +1,58 @@

# Engagement Tier Protocol

**Purpose**: Single source of truth for engagement tier definitions and behavior. Workflows reference this instead of inlining tier blocks.

---

## Tier Definitions

| Tier | When | User Experience |
|------|------|-----------------|
| **Auto** | 6+ well-constrained axes, experienced user, clear preferences | Agent uses Deep Think reasoning per axis, presents all decisions for batch review at shard end |
| **Hybrid** | 3-5 axes, some need discussion, some are obvious | Group related decisions, present key choices, auto-confirm obvious ones |
| **Interactive** | ≤2 axes, novel/uncertain domain, user requests it | Full interview — one decision at a time with options and trade-offs |

## Tier Selection

Read the user's context:
1. Count the number of decision axes in the current step
2. Assess the user's familiarity (prior conversations, explicit preferences, technical depth)
3. Default to **Interactive** if uncertain

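The selection steps above reduce to a small heuristic. A sketch, where the axis-count cutoffs come from the Tier Definitions table and the familiarity flag stands in for the judgment call in step 2:

```python
def select_tier(axis_count, user_is_experienced):
    """Return the proposed engagement tier, defaulting toward Interactive."""
    # Auto requires both many well-constrained axes and an experienced user.
    if axis_count >= 6 and user_is_experienced:
        return "Auto"
    if 3 <= axis_count <= 5:
        return "Hybrid"
    # <= 2 axes, uncertain familiarity, or anything else: full interview.
    return "Interactive"
```

Note the asymmetry: many axes with an unfamiliar user still falls through to Interactive, matching the "default to Interactive if uncertain" rule.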
Present the proposed tier: "I'll approach this as [tier] — [brief rationale]. Want a different level of involvement?"

**HARD GATE**: Wait for user acknowledgment before proceeding. If the user requests a different tier, switch.

## Behavior by Tier

### Auto Tier
1. For each decision: apply Deep Think reasoning (consider constraints, project goals, prior decisions)
2. Write the decision with an `[AUTO-CONFIRMED — reasoning: ...]` annotation
3. At shard end: present ALL auto-confirmed decisions in a summary table
4. User reviews the batch — any rejection triggers Interactive re-presentation for that decision
5. Remove `[AUTO-CONFIRMED]` annotations after batch approval

### Hybrid Tier
1. Group related decisions (e.g., all auth decisions together)
2. For obvious decisions: present with a recommendation, apply if the user says "looks good"
3. For debatable decisions: present options with trade-offs, wait for a selection
4. Follow the decision-confirmation-protocol for each group

### Interactive Tier
1. Present one decision at a time
2. Include 2-3 options with trade-offs and your recommendation
3. Wait for explicit selection
4. Follow the decision-confirmation-protocol for each decision

## Gate Classification

Not all decisions follow the engagement tier. Some are always user-facing regardless of tier:

| Gate Type | Always Interactive? | Examples |
|-----------|---------------------|----------|
| **Product** | Yes — user always decides | Feature scope, UX philosophy, pricing |
| **Architecture** | Yes — present options | Database choice, framework, hosting |
| **Structural** | No — tier-aware | File naming, directory organization |
| **Implementation** | No — agent decides | Import ordering, variable naming |

See `.agent/rules/decision-classification.md` for the full classification.