thevoidforge 21.0.11 → 21.0.12
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/.claude/commands/ai.md +69 -0
- package/dist/.claude/commands/architect.md +121 -0
- package/dist/.claude/commands/assemble.md +201 -0
- package/dist/.claude/commands/assess.md +75 -0
- package/dist/.claude/commands/blueprint.md +135 -0
- package/dist/.claude/commands/build.md +116 -0
- package/dist/.claude/commands/campaign.md +201 -0
- package/dist/.claude/commands/cultivation.md +166 -0
- package/dist/.claude/commands/current.md +128 -0
- package/dist/.claude/commands/dangerroom.md +74 -0
- package/dist/.claude/commands/debrief.md +178 -0
- package/dist/.claude/commands/deploy.md +99 -0
- package/dist/.claude/commands/devops.md +143 -0
- package/dist/.claude/commands/gauntlet.md +140 -0
- package/dist/.claude/commands/git.md +104 -0
- package/dist/.claude/commands/grow.md +146 -0
- package/dist/.claude/commands/imagine.md +126 -0
- package/dist/.claude/commands/portfolio.md +50 -0
- package/dist/.claude/commands/prd.md +113 -0
- package/dist/.claude/commands/qa.md +107 -0
- package/dist/.claude/commands/review.md +151 -0
- package/dist/.claude/commands/security.md +100 -0
- package/dist/.claude/commands/test.md +96 -0
- package/dist/.claude/commands/thumper.md +116 -0
- package/dist/.claude/commands/treasury.md +100 -0
- package/dist/.claude/commands/ux.md +118 -0
- package/dist/.claude/commands/vault.md +189 -0
- package/dist/.claude/commands/void.md +108 -0
- package/dist/CHANGELOG.md +1918 -0
- package/dist/CLAUDE.md +250 -0
- package/dist/HOLOCRON.md +856 -0
- package/dist/VERSION.md +123 -0
- package/dist/docs/NAMING_REGISTRY.md +478 -0
- package/dist/docs/methods/AI_INTELLIGENCE.md +276 -0
- package/dist/docs/methods/ASSEMBLER.md +142 -0
- package/dist/docs/methods/BACKEND_ENGINEER.md +165 -0
- package/dist/docs/methods/BUILD_JOURNAL.md +185 -0
- package/dist/docs/methods/BUILD_PROTOCOL.md +426 -0
- package/dist/docs/methods/CAMPAIGN.md +568 -0
- package/dist/docs/methods/CONTEXT_MANAGEMENT.md +189 -0
- package/dist/docs/methods/DEEP_CURRENT.md +184 -0
- package/dist/docs/methods/DEVOPS_ENGINEER.md +295 -0
- package/dist/docs/methods/FIELD_MEDIC.md +261 -0
- package/dist/docs/methods/FORGE_ARTIST.md +108 -0
- package/dist/docs/methods/FORGE_KEEPER.md +268 -0
- package/dist/docs/methods/GAUNTLET.md +344 -0
- package/dist/docs/methods/GROWTH_STRATEGIST.md +466 -0
- package/dist/docs/methods/HEARTBEAT.md +168 -0
- package/dist/docs/methods/MCP_INTEGRATION.md +139 -0
- package/dist/docs/methods/MUSTER.md +148 -0
- package/dist/docs/methods/PRD_GENERATOR.md +186 -0
- package/dist/docs/methods/PRODUCT_DESIGN_FRONTEND.md +250 -0
- package/dist/docs/methods/QA_ENGINEER.md +337 -0
- package/dist/docs/methods/RELEASE_MANAGER.md +145 -0
- package/dist/docs/methods/SECURITY_AUDITOR.md +320 -0
- package/dist/docs/methods/SUB_AGENTS.md +335 -0
- package/dist/docs/methods/SYSTEMS_ARCHITECT.md +171 -0
- package/dist/docs/methods/TESTING.md +359 -0
- package/dist/docs/methods/THUMPER.md +175 -0
- package/dist/docs/methods/TIME_VAULT.md +120 -0
- package/dist/docs/methods/TREASURY.md +184 -0
- package/dist/docs/methods/TROUBLESHOOTING.md +265 -0
- package/dist/docs/patterns/README.md +52 -0
- package/dist/docs/patterns/ad-billing-adapter.ts +537 -0
- package/dist/docs/patterns/ad-platform-adapter.ts +421 -0
- package/dist/docs/patterns/ai-classifier.ts +195 -0
- package/dist/docs/patterns/ai-eval.ts +272 -0
- package/dist/docs/patterns/ai-orchestrator.ts +341 -0
- package/dist/docs/patterns/ai-router.ts +194 -0
- package/dist/docs/patterns/ai-tool-schema.ts +237 -0
- package/dist/docs/patterns/api-route.ts +241 -0
- package/dist/docs/patterns/backtest-engine.ts +499 -0
- package/dist/docs/patterns/browser-review.ts +292 -0
- package/dist/docs/patterns/combobox.tsx +300 -0
- package/dist/docs/patterns/component.tsx +262 -0
- package/dist/docs/patterns/daemon-process.ts +338 -0
- package/dist/docs/patterns/data-pipeline.ts +297 -0
- package/dist/docs/patterns/database-migration.ts +466 -0
- package/dist/docs/patterns/e2e-test.ts +629 -0
- package/dist/docs/patterns/error-handling.ts +312 -0
- package/dist/docs/patterns/execution-safety.ts +601 -0
- package/dist/docs/patterns/financial-transaction.ts +342 -0
- package/dist/docs/patterns/funding-plan.ts +462 -0
- package/dist/docs/patterns/game-entity.ts +137 -0
- package/dist/docs/patterns/game-loop.ts +113 -0
- package/dist/docs/patterns/game-state.ts +143 -0
- package/dist/docs/patterns/job-queue.ts +225 -0
- package/dist/docs/patterns/kongo-integration.ts +164 -0
- package/dist/docs/patterns/middleware.ts +363 -0
- package/dist/docs/patterns/mobile-screen.tsx +139 -0
- package/dist/docs/patterns/mobile-service.ts +167 -0
- package/dist/docs/patterns/multi-tenant.ts +382 -0
- package/dist/docs/patterns/oauth-token-lifecycle.ts +223 -0
- package/dist/docs/patterns/outbound-rate-limiter.ts +260 -0
- package/dist/docs/patterns/prompt-template.ts +195 -0
- package/dist/docs/patterns/revenue-source-adapter.ts +311 -0
- package/dist/docs/patterns/service.ts +224 -0
- package/dist/docs/patterns/sse-endpoint.ts +118 -0
- package/dist/docs/patterns/stablecoin-adapter.ts +511 -0
- package/dist/docs/patterns/third-party-script.ts +68 -0
- package/dist/scripts/thumper/gom-jabbar.sh +241 -0
- package/dist/scripts/thumper/relay.sh +610 -0
- package/dist/scripts/thumper/scan.sh +359 -0
- package/dist/scripts/thumper/thumper.sh +190 -0
- package/dist/scripts/thumper/water-rings.sh +76 -0
- package/package.json +1 -1
- package/dist/tsconfig.tsbuildinfo +0 -1

package/dist/.claude/commands/ai.md
@@ -0,0 +1,69 @@

# /ai — Seldon's AI Intelligence Audit

*"The fall is inevitable. The recovery can be guided."*

The AI Intelligence Audit reviews every LLM-powered component in your application. Seldon's team examines model selection, prompt engineering, tool-use schemas, orchestration patterns, failure modes, token economics, evaluation strategy, safety, versioning, and observability.

## Context Setup
1. Read `/docs/methods/AI_INTELLIGENCE.md` for operating rules
2. Read the PRD — check for `ai: yes` in frontmatter
3. Scan the codebase for LLM integration points: imports from `anthropic`, `@anthropic-ai/sdk`, `openai`, `@langchain`, prompt files, tool definitions

## Phase 0 — AI Surface Map (Hari Seldon)

Reconnaissance — find all AI integration points:
1. Grep for LLM SDK imports (`anthropic`, `openai`, `@ai-sdk`, `langchain`)
2. Find prompt files/constants (system prompts, few-shot examples)
3. Find tool/function definitions (tool-use schemas)
4. Find orchestration patterns (agent loops, chains, workflows)
5. Produce: AI component inventory with file paths, model used, and purpose
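
The SDK sweep in step 1 can be sketched as a single grep. This is a minimal, self-contained example: the demo directory and file are illustrative stand-ins for the real source tree.

```shell
# Minimal sketch of the Phase 0 SDK scan, run against a throwaway sample file.
mkdir -p /tmp/ai-surface-demo
cat > /tmp/ai-surface-demo/client.ts <<'EOF'
import Anthropic from "@anthropic-ai/sdk";
EOF
# -r recurse, -l list matching files only, -E extended regex over the SDK names
grep -rlE "@anthropic-ai/sdk|openai|@ai-sdk|langchain" /tmp/ai-surface-demo
```

In a real audit the target would be the project's source root, typically with `--include` filters for the languages in use.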

## Phase 1 — Parallel Audits (4 agents)

Use the Agent tool to run all four in parallel:

- **Agent 1 (Salvor Hardin — Model Selection):** For each AI call, is this the right model? Could a smaller/faster model handle it? Is the latency budget met? Is cost tracked?
- **Agent 2 (Gaal Dornick — Prompt Architecture):** Are prompts structured, versioned, testable? System prompt separated? Output format specified? Edge cases handled? Few-shot where needed?
- **Agent 3 (Hober Mallow — Tool Schemas):** Are tool descriptions clear? Parameter types correct? Required vs optional right? No overlapping tools? Return types documented?
- **Agent 4 (Bliss — AI Safety):** Prompt injection risk? PII in prompts? Output content safety? System prompt extractable? Jailbreak vectors?

## Phase 2 — Sequential Audits (7 agents)

Run sequentially — each builds on the previous:

- **Bel Riose (Orchestration):** Is this a completion, chain, agent loop, or workflow? Appropriate for the reliability requirement? Loops bounded? Maximum iteration count? Intermediate state persisted?
- **The Mule (Failure Modes):** What happens when the model hallucinates? Refuses? Times out? Context overflows? API is down? Is there a fallback? Circuit breaker? Bounded retries?
- **Ducem Barr (Token Economics):** Token usage tracked per request? Caching strategies? Context window efficient? System prompts deduplicated? Streaming where appropriate?
- **Bayta Darell (Evaluation):** How do you know outputs are correct? Golden datasets? Automated scoring? Regression suite for prompt changes? Quality degradation detection?
- **Dors Venabili (Observability):** Can you see what the AI decided and why? Trace logging? Inputs/outputs logged (PII-scrubbed)? Latency tracked? Quality scores over time?
- **Janov Pelorat (Context Engineering):** RAG retrieval returning relevant docs? Embeddings right dimensionality? Chunking appropriate?
- **R. Daneel Olivaw (Versioning):** When models update, does behavior change? Prompts pinned? Migration strategy?
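
Two of the checks above can be made concrete. The following is a hypothetical sketch of what Bel Riose and The Mule look for: a hard iteration cap on agent loops, and bounded retries that degrade to a fallback instead of crashing. The names and shapes are illustrative, not taken from any codebase.

```typescript
type Step = { done: boolean; output: string };

// Bel Riose's check: the loop cannot run unbounded, and intermediate
// state is persisted (here, accumulated in `history`) on every turn.
async function runAgentLoop(
  step: (history: string[]) => Promise<Step>,
  maxIterations = 8,
): Promise<string[]> {
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const result = await step(history);
    history.push(result.output);
    if (result.done) return history;
  }
  throw new Error(`agent loop hit the iteration cap (${maxIterations})`);
}

// The Mule's check: retries are bounded, and exhaustion degrades to a
// fallback answer instead of surfacing a crash to the user.
async function callWithFallback<T>(
  call: () => Promise<T>,
  fallback: () => T,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await call();
    } catch {
      // this attempt failed; loop for another bounded attempt
    }
  }
  return fallback();
}
```

A real implementation would also add backoff between attempts and distinguish retryable errors (timeouts, 429s) from permanent ones.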

## Phase 3 — Remediate

Fix all Critical and High findings. Use the standard finding format with confidence scores.

## Phase 4 — Re-Verify

**The Mule + Wanda Seldon** re-probe all remediated areas. Wanda validates structured outputs. The Mule attempts adversarial bypass of fixes.

## Arguments
- No arguments → full 5-phase audit (Phases 0-4) of all AI components
- `--prompts` → Focus on prompt engineering only (Gaal Dornick deep dive)
- `--tools` → Focus on tool-use schemas only (Hober Mallow solo)
- `--safety` → Focus on AI safety and prompt injection (Bliss + The Mule)
- `--eval` → Focus on evaluation strategy and test coverage (Bayta Darell solo)
- `--cost` → Focus on token economics and optimization (Ducem Barr solo)

## Deliverables
1. AI component inventory (all LLM integration points)
2. Finding log with severity, confidence, and remediation
3. Eval strategy recommendations
4. Model selection justification for each AI call
5. Token budget estimate

## Handoffs
- Security findings → Kenobi (`/security`)
- Test gaps → Batman (`/qa`, `/test`)
- Architecture concerns → Picard (`/architect`)
- Performance/cost concerns → Kusanagi (`/devops`)

package/dist/.claude/commands/architect.md
@@ -0,0 +1,121 @@

# /architect — Picard's Architecture Review

**AGENT DEPLOYMENT IS MANDATORY.** Steps 1 and 4 specify parallel agent launches via the Agent tool. You MUST actually launch these agents as separate sub-processes — do NOT shortcut to inline analysis, even if you think you can answer faster by reading files directly. The agents exist because parallel analysis catches things sequential reading misses. Skipping agent deployment is a protocol violation. (Field report #68)

## Context Setup
1. Read `/logs/build-state.md` — understand current project state
2. Read `/docs/methods/SYSTEMS_ARCHITECT.md`
3. Read `/docs/PRD.md` (System Architecture + Tech Stack sections)

## Pre-Analysis — Conflict Scan
Before any deep analysis, scan the PRD frontmatter for structural contradictions (see the Conflict Checklist in SYSTEMS_ARCHITECT.md). Check: auth+database, payments+auth, websockets+deploy, workers+deploy, database+deploy, cache+deploy, admin+auth, email+credentials. Flag any contradictions immediately — these cost hours if caught late.

## Agent Deployment Manifest

**Lead:** Picard (Star Trek)
**Full bridge crew:** Spock (schema), Uhura (integrations), Worf (security implications), Tuvok (security architecture), Scotty (service architecture + scaling), Kim (API design), Janeway (novel architectures), Torres (performance), La Forge (failure analysis), Data (tech debt), Crusher (system diagnostics), Archer (greenfield), Pike (bold ordering — challenges Dax in /campaign), Riker (ADR review — challenges trade-offs), Troi (PRD compliance)

## Step 0 — System Discovery (**Crusher** + **Archer**)
**Crusher** assesses system health first — test coverage, build time, dependency age, code complexity. Establish a baseline before changes.
**Archer** (for greenfield projects) proposes the initial directory structure, module boundaries, and naming conventions.
Produce: system identity, component inventory, data flow diagram (ASCII), dependency graph.
Write to `/logs/` (phase-00 if during orient, or a dedicated architecture log).

## Step 1 — Parallel Analysis (Spock + Uhura + Worf + Tuvok)
Use the Agent tool to run these in parallel — they are independent analysis tasks:

**Agent 1 (Spock — Schema Review):**
- Normalization: are relationships correct?
- Indexes: do they match actual query patterns?
- Nullable fields: intentional or oversight?
- Audit fields: createdAt, updatedAt on every table?
- PII isolation: sensitive data identified and separated?
- Data lifecycle: what gets archived? What gets deleted?
- Backup/recovery plan: defined and tested?
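
As a minimal illustration of the audit-fields check, every row type can extend a shared base shape so the check becomes mechanical. The interface and helper below are hypothetical, not from the codebase.

```typescript
// Spock's audit-fields check, expressed as a base shape all row types extend.
interface AuditedRow {
  createdAt: Date;
  updatedAt: Date;
}

// Every mutation path should refresh updatedAt; a shared helper keeps that honest.
function touch<T extends AuditedRow>(row: T): T {
  return { ...row, updatedAt: new Date() };
}
```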

**Agent 2 (Uhura — Integration Review):**
For each external service, produce:

| Service | Purpose | Failure Mode | Fallback | Cost | Lock-in Risk |
|---------|---------|--------------|----------|------|--------------|

Verify: API versions pinned, responses validated, abstraction layer exists.

**Agent 3 (Worf — Security Implications):**
For each architectural decision (schema, service boundaries, data flows), flag security implications: PII colocation, unauthenticated access to internal state, overly permissive service boundaries. Worf audits *design*, not code.

**Agent 4 (Tuvok — Security Architecture):**
Auth flow design, token storage strategy, session architecture, encryption at rest vs in transit. Where Worf flags implications, Tuvok designs the solutions.

Synthesize findings from all four agents.

## Step 2 — Scotty's Service Architecture + Kim's API Design
- Boundary assessment: is the boundary between services/modules clean?
- Monolith vs services: default to a monolith. Only split if there's a specific operational reason (different scaling profile, different team, different deploy cadence).
- Async vs sync: which operations should be background jobs?
- **Kim** reviews the API surface: REST conventions, consistent error shapes, pagination patterns, versioning strategy.
- **Janeway** (conditional): when the standard monolith doesn't fit, proposes event sourcing, CQRS, serverless, or edge computing.
- Informed by Spock's schema, Uhura's integrations, and Worf/Tuvok's security findings.

## Step 3 — Scotty's Scaling + Torres's Performance
- **Scotty:** Identify the first bottleneck. Three-tier plan: Tier 1 (current), Tier 2 (10x, vertical), Tier 3 (100x, horizontal). Cost estimates for each tier.
- **Torres:** Performance architecture — N+1 query patterns in schema design, missing indexes for anticipated queries, connection pool sizing, caching strategy gaps. Catches performance problems before code is written.

## Step 4 — Parallel Analysis (La Forge + Data)
Use the Agent tool to run these in parallel — they are independent analysis tasks:

**Agent 1 (La Forge — Failure Analysis):**
For each component, answer: "What happens when this fails?"
- Database down → app shows an error, no data loss, auto-reconnect
- Redis down → app works without cache (slower), sessions fall back
- External API down → graceful degradation, queued retries
- Worker crashes → job retries, dead-letter queue, alerting
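
The "Redis down → app works without cache" behavior can be sketched as a read-through helper that treats a cache outage like a miss. This is a hypothetical shape, not the project's actual cache layer.

```typescript
// La Forge's check for the cache tier: a cache outage degrades to the
// source of truth (slower) instead of failing the request.
async function readThrough<T>(
  cacheGet: (key: string) => Promise<T | null>,
  source: (key: string) => Promise<T>,
  key: string,
): Promise<T> {
  try {
    const hit = await cacheGet(key);
    if (hit !== null) return hit;
  } catch {
    // cache unreachable: treat as a miss and continue to the source
  }
  return source(key);
}
```

A production version would also write the fetched value back to the cache and log cache errors for alerting.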

**Agent 2 (Data — Tech Debt):**
Catalog each item:

| Item | Type | Impact | Risk | Effort | Urgency |
|------|------|--------|------|--------|---------|

Types: wrong abstraction, missing abstraction, premature optimization, deferred decision, dependency debt, documentation debt.

## Step 5 — ADRs + Riker's Decision Review
Write Architecture Decision Records to `/docs/adrs/` for every non-obvious choice. After writing, **Riker reviews**: he challenges trade-offs, verifies alternatives were truly considered, and checks for second-order effects.
```
# ADR-001: [Title]
## Status: Accepted
## Context: [Why this decision was needed]
## Decision: [What was decided]
## Consequences: [Trade-offs, what this enables, what this prevents]
## Alternatives: [What else was considered and why it was rejected]
```

## Conflict Resolution
When architectural decisions conflict with other agents:
1. Check the PRD — product requirements take precedence
2. If the PRD is silent, present trade-offs to the user with a recommendation
3. Document the resolution as an ADR
4. Log to `/logs/decisions.md`

For specific conflicts:
- **Picard vs Kusanagi (infra can't support the architecture):** Picard adjusts the architecture to match real constraints
- **Picard vs Stark (implementation disagrees with design):** Present options, Picard decides, document as an ADR
- **Picard vs Kenobi (security vs simplicity):** Security wins. Find the simplest secure architecture.

## Deliverables
1. ARCHITECTURE.md
2. /docs/adrs/ directory with decision records
3. SCALING.md
4. TECH_DEBT.md
5. FAILURE_MODES.md
6. All findings logged to the appropriate `/logs/` file

## Arguments
- `--plan [description]` → Planning mode: analyze and recommend without executing. Present findings and proposed changes for review.
- `--muster` → Full 9-universe deployment. Instead of the Star Trek bridge crew, deploy every viable agent across all universes in 3 waves (Vanguard → Main Force → Adversarial). See `docs/methods/MUSTER.md`. **ENFORCEMENT: Must launch Agent tool sub-processes per MUSTER.md. Inline analysis is not a Muster.**

## Handoffs
- API/DB implementation → Stark, log to `/logs/handoffs.md`
- UI impacts → Galadriel, log to `/logs/handoffs.md`
- Security implications → Kenobi, log to `/logs/handoffs.md`
- Infrastructure constraints → Kusanagi, log to `/logs/handoffs.md`

package/dist/.claude/commands/assemble.md
@@ -0,0 +1,201 @@

Avengers, assemble. Full pipeline from architecture to launch — one command to rule them all.

## Context Setup
1. Read `/logs/assemble-state.md` — if it exists, resume from the last completed phase
2. If no assemble state exists, start fresh from Phase 1
3. Read `/docs/methods/ASSEMBLER.md` for operating rules

**Hill** tracks phase completion — logs each gate pass to `assemble-state.md`. **Jarvis** provides status summaries between phases.

## Agent Deployment Manifest — The Full Initiative

When `/assemble` invokes each sub-command, it deploys the FULL roster for that command — not just the lead. The leads below are coordinators; they bring their complete teams.

| Phase | Lead | Full Team Deployed |
|-------|------|--------------------|
| Architecture | **Picard** | Spock, Uhura, Worf, Tuvok, Scotty, Kim, Janeway, Torres, La Forge, Data, Crusher, Archer, Pike, Riker, Troi |
| Build | **Stark + Galadriel + Kusanagi** | Full /build roster (~35 agents across 4 universes) |
| Smoke Test | **Hawkeye** | Solo — runtime verification |
| Code Review (×3) | **Picard** | Spock, Seven, Data + **Rogers, Banner, Strange, Barton, Thor, Romanoff, Wanda, T'Challa** + **Nightwing, Bilbo, Troi, Constantine, Samwise** (cross-domain) |
| UX | **Galadriel** | Elrond, Arwen, Samwise, Bilbo, Legolas, Gimli, Radagast, Éowyn, Celeborn, Aragorn, Faramir, Pippin, Boromir, Haldir, Frodo, Merry |
| Security (×2) | **Kenobi** | Leia, Chewie, Rex, Maul, Yoda, Windu, Ahsoka, Padmé, Han, Cassian, Sabine, Qui-Gon, Bo-Katan, Anakin, Din Djarin |
| DevOps | **Kusanagi** | Senku, Levi, Spike, L, Bulma, Holo, Valkyrie, Vegeta, Trunks, Mikasa, Erwin, Mustang, Olivier, Hughes, Calcifer, Duo |
| QA | **Batman** | Oracle, Red Hood, Alfred, Deathstroke, Constantine, Nightwing, Lucius, Cyborg, Raven, Wonder Woman, Flash, Green Lantern, Batgirl, Aquaman |
| Test | **Batman** | Oracle, Red Hood, Alfred, Nightwing (testing subset) |
| Crossfire | **Maul, Deathstroke, Loki, Constantine, Éowyn** | Adversarial — each attacks another domain's work |
| Council | **Spock, Ahsoka, Nightwing, Samwise, Padmé, Troi** | Final convergence — one voice per domain |

## Phase 1 — Architecture (Picard has the conn)
**Fury:** "Picard, you're up. Review the architecture before we build anything."

Run the full `/architect` protocol. If `$ARGUMENTS` includes `--skip-arch`, skip this phase.

**Gate:** ADRs written, no critical architectural concerns. Log to `/logs/assemble-state.md`.

## Phase 2 — Build (All hands on deck)
**Fury:** "Stark, Galadriel, Kusanagi — build it. The full protocol."

Run the full `/build` protocol (all 13 phases). If `$ARGUMENTS` includes `--skip-build`, skip this phase (for re-running the review pipeline on existing code).

**Gate:** All build phase gates pass, test suite green. Update assemble-state.

## Phase 2.5 — Smoke Test (Hawkeye)
**Fury:** "Hawkeye — hit every endpoint. I want proof it runs, not just proof it compiles."

Mandatory runtime verification BEFORE code review begins:
1. If the project has a runnable server (Express, FastAPI, Next.js, Django, Rails), start it
2. Hit every new/modified API endpoint with `curl` — verify HTTP status codes match expectations
3. For web apps: list all registered routes and **check for path collisions** (duplicate method+path across routers)
4. For React/frontend: trace the primary user flow through the component tree — follow state changes through the store and identify re-render cycles. For every `useEffect` with store values in its deps, verify the effect body doesn't trigger a store update that changes those same deps.
5. If the server cannot be started (scaffold branch, methodology-only), skip with a note
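
The path-collision check in step 3 can be sketched as a small helper run over the registered route table. The `Route` shape is an assumption; real frameworks expose their route tables differently (Express, for instance, through its router stack).

```typescript
type Route = { method: string; path: string };

// Flag any method+path pair registered more than once across routers —
// a duplicate silently shadows whichever registration comes later.
function findCollisions(routes: Route[]): string[] {
  const counts = new Map<string, number>();
  for (const r of routes) {
    const key = `${r.method.toUpperCase()} ${r.path}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return Array.from(counts.entries())
    .filter(([, n]) => n > 1)
    .map(([key]) => key);
}
```

Normalizing the method to uppercase matters: `get /users` and `GET /users` are the same route to the router even if they look different in source.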
|
|
51
|
+
|
|
52
|
+
**Gate:** All endpoints return expected status codes. No route collisions. No infinite render loops detected. Update assemble-state.
|
|
53
|
+
|
|
54
|
+
## Phase 3 — Review Round 1 (Full Roster — see Agent Deployment Manifest)
|
|
55
|
+
**Fury:** "Picard's team — first pass. Find everything. Full roster deployed."
|
|
56
|
+
|
|
57
|
+
Run `/review` with full Agent Deployment Manifest (Stark's Marvel team + cross-domain agents). Fix all Must Fix and Should Fix items.
|
|
58
|
+
|
|
59
|
+
**A11y spot-check (Samwise, during review):** Semantic headings (h1-h6 hierarchy), aria-hidden on decorative elements, aria-labels on ambiguous links, skip-nav link, landmark roles. This catches structural a11y issues early — before the full `/ux` pass. (Field report #118)
|
|
60
|
+
|
|
61
|
+
Log findings count.
|
|
62
|
+
|
|
63
|
+
## Phase 4 — Review Round 2 (Re-verify)
|
|
64
|
+
**Fury:** "Again. Verify the fixes didn't break anything."
|
|
65
|
+
|
|
66
|
+
Run `/review` on all files modified in Phase 3. Fix any new issues.
|
|
67
|
+
|
|
68
|
+
## Phase 5 — Review Round 3 (Final clean pass)
|
|
69
|
+
**Fury:** "Last pass. I want zero Must Fix items."
|
|
70
|
+
|
|
71
|
+
Run `/review`. If any Must Fix items remain, fix them.
|
|
72
|
+
|
|
73
|
+
**Gate:** Zero Must Fix items. Update assemble-state.
|
|
74
|
+
|
|
75
|
+
## Phase 6 — UX Pass (Galadriel leads)
|
|
76
|
+
**Fury:** "Galadriel — the user doesn't care how clean the code is if the product is confusing."
|
|
77
|
+
|
|
78
|
+
Run the full `/ux` protocol in two sub-phases. Skip if PRD frontmatter `type: api-only`.
|
|
79
|
+
|
|
80
|
+
**6A — Usability Review:** Trace the primary user flow step by step. For each step: What does the user see? What do they click? What happens? Is it what they expected? Specifically check:
|
|
81
|
+
- Can the user complete the primary flow without confusion?
|
|
82
|
+
- Do inputs retain focus when typing?
|
|
83
|
+
- Do modals/panels close cleanly on first attempt?
|
|
84
|
+
- Is there visual feedback for every mutation (success AND failure)?
|
|
85
|
+
- Does every loading state resolve (no infinite spinners)?
|
|
86
|
+
|
|
87
|
+
**6B — Accessibility Audit:** ARIA, keyboard nav, focus management, contrast, screen reader, reduced motion — the existing checklist.
|
|
88
|
+
|
|
89
|
+
**Gate:** Zero critical usability or a11y findings. Update assemble-state.
|
|
90
|
+
|
|
91
|
+
## Phase 7 — Security Round 1 (Kenobi leads)
|
|
92
|
+
**Fury:** "Kenobi, find what they missed. Think like the enemy."
|
|
93
|
+
|
|
94
|
+
Run the full `/security` protocol (Phases 1-3: parallel scans, sequential audits, remediation).
|
|
95
|
+
|
|
96
|
+
## Phase 8 — Security Round 2 (Maul re-probes)
|
|
97
|
+
**Fury:** "Maul — attack every fix. I want proof they hold."
|
|
98
|
+
|
|
99
|
+
Run `/security` Phase 4 only (Maul's re-verification). If new issues found, fix and re-verify.
|
|
100
|
+
|
|
101
|
+
**Gate:** Zero Critical or High security findings. Update assemble-state.
|
|
102
|
+
|
|
103
|
+
## Phase 9 — Infrastructure (Kusanagi leads)
|
|
104
|
+
**Fury:** "Kusanagi — make it deployable. Scripts, monitoring, backups."
|
|
105
|
+
|
|
106
|
+
Run the full `/devops` protocol.
|
|
107
|
+
|
|
108
|
+
**Gate:** Deploy scripts work, monitoring configured, post-deploy smoke tests defined. Update assemble-state.
|
|
109
|
+
|
|
110
|
+
## Phase 10 — QA (Batman leads)
|
|
111
|
+
**Fury:** "Batman — break everything. I want zero surprises in production."
|
|
112
|
+
|
|
113
|
+
Run the full `/qa` protocol (including Step 2.5 smoke tests).
|
|
114
|
+
|
|
115
|
+
**Gate:** All critical/high bugs fixed, regression checklist complete. Update assemble-state.
|
|
116
|
+
|
|
117
|
+
## Phase 11 — Test Suite (Batman writes)
|
|
118
|
+
**Fury:** "Now make it permanent. Every bug we found becomes a test."
|
|
119
|
+
|
|
120
|
+
Run the full `/test` protocol. Write missing unit tests, integration tests, and cross-module tests.
|
|
121
|
+
|
|
122
|
+
**Gate:** Test suite green, coverage acceptable. Update assemble-state.
|
|
123
|
+
|
|
124
|
+
## Phase 12 — The Crossfire (Multiverse challenge)
|
|
125
|
+
**Fury:** "Now the real test. Everyone attacks everyone else's work."
|
|
126
|
+
|
|
127
|
+
Use the Agent tool to run these in parallel — all are adversarial, read-only analysis:
|
|
128
|
+
|
|
129
|
+
- **Maul** (Star Wars) — attacks code that passed /review. Looks for exploits in "clean" code.
|
|
130
|
+
- **Deathstroke** (DC) — probes endpoints that /security hardened. Tests if remediations can be bypassed.
|
|
131
|
+
- **Loki** (Marvel) — chaos-tests features that /qa cleared. Finds what breaks under unexpected conditions.
|
|
132
|
+
- **Constantine** (DC) — hunts cursed code in FIXED areas specifically. Code that works by accident.
|
|
133
|
+
|
|
134
|
+
Synthesize findings. **Conflict detection:** If any two agents produce conflicting findings on the same code (one says "fix," another says "by design" or "not exploitable"), trigger the debate protocol instead of listing both. See SUB_AGENTS.md "Agent Debate Protocol": Agent A states finding → Agent B responds → Agent A rebuts → Arbiter (Picard or user) decides. 3 exchanges max. Log the debate transcript as an ADR. Fix all Must Fix items. If any fixes were applied, re-run the four agents on the fixed areas only.
|
|
135
|
+
|
|
136
|
+
**Gate:** All four adversarial agents sign off. All conflicts resolved via debate (no unresolved disagreements). Update assemble-state.
|
|
137
|
+
|
|
138
|
+
## Phase 13 — The Council (Convergence)
|
|
139
|
+
**Fury:** "Last call. One agent from each domain — verify nobody broke anyone else's work."
|
|
140
|
+
|
|
141
|
+
Use the Agent tool to run these in parallel:
|
|
142
|
+
|
|
143
|
+
- **Spock** (Star Trek) — Did any security/QA/UX fix break code patterns or quality?
|
|
144
|
+
- **Ahsoka** (Star Wars) — Did any review/QA fix introduce access control gaps?
|
|
145
|
+
- **Nightwing** (DC) — Did any fix cause a regression? Run the full test suite.
|
|
146
|
+
- **Samwise** (Tolkien) — Did any fix break accessibility?
|
|
147
|
+
- **Troi** (Star Trek) — PRD compliance: read the PRD prose section-by-section, verify every claim against the implementation. Not just "does the route exist?" but "does the component render what the PRD describes?" Check numeric claims, visual treatments, copy accuracy. Flag asset gaps as BLOCKED. (Troi runs on the final Council iteration, or always when `--skip-build` is used for campaign victory gates.)

**Conflict detection:** If Council members disagree (e.g., Spock says a fix broke patterns but Ahsoka says it's necessary for access control), trigger the debate protocol. Do not list both opinions — resolve via debate. Arbiter: Picard for code/architecture conflicts, Troi for PRD compliance conflicts.

If the Council finds issues:

1. Fix code discrepancies. Flag asset requirements as BLOCKED.
2. Resolve conflicts via debate protocol (see SUB_AGENTS.md). Log debate transcripts as ADRs.
3. Re-run the Council (max 3 iterations).
4. If not converged after 3 rounds, present remaining findings to the user.

**Gate:** All five Council members sign off. Zero cross-domain regressions. Update assemble-state.

## Completion

**Fury:** "The Initiative is complete."

Write the final summary to `/logs/assemble-state.md`:
- Total phases completed
- Total findings across all passes (review + security + QA + crossfire + council)
- Total fixes applied
- Final test suite status
- Time span (first phase start → last phase end)

Present the summary to the user.

**IMPORTANT: Include this disclaimer in the completion message:**

"All phases analyze source code and trace data flows. If this project has a runnable UI, manual testing of the deployed application is still recommended before shipping to users — runtime interaction bugs (render loops, endpoint collisions, focus management) can pass static analysis."

## Lessons Extraction (Wong)

After the summary, Wong extracts learnings for future builds:

1. Review all findings from Phases 3-13. For each pattern that appeared 2+ times or took 2+ fix iterations, distill it into a lesson.
2. Append new entries to `/docs/LESSONS.md` using the existing format:

```
### [Short title]
**Agent:** [who discovered it] | **Category:** pattern/antipattern/decision/gotcha
**Context:** [this project name and phase]
**Lesson:** [what we learned]
**Action:** [what to do differently]
**Promoted to:** Not yet
```

3. Check existing lessons — if a lesson from a PREVIOUS project was confirmed again, add a note: "Confirmed in [project]. Promote to method doc." If the lesson was already promoted, note: "Promoted lesson held — no regressions."
4. If any lesson appears in 3+ projects, promote it: add the rule to the relevant method doc and update the lesson's "Promoted to" field.

## Operating Rules

- Update `/logs/assemble-state.md` after EVERY phase completion
- If you notice context pressure symptoms (re-reading files, forgetting decisions), ask the user to run `/context`. Only checkpoint if usage exceeds 70%.
- Each phase runs the FULL protocol of its command — no shortcuts
- Fixes happen BETWEEN rounds, not batched at the end
- The Crossfire (Phase 12) and Council (Phase 13) can be skipped with `/assemble --fast`
- `/assemble --resume` picks up from the last completed phase in assemble-state.md
- `--blitz` — Autonomous execution: no pause between phases, auto-continue. Does NOT imply `--fast`.

## Handoffs

- If any phase is blocked by an issue outside its domain, log it to `/logs/handoffs.md` and continue to the next phase
- At completion, note any outstanding handoffs for the user

@@ -0,0 +1,75 @@

# /assess — Picard's Pre-Build Assessment

Evaluate an existing codebase before a rebuild, migration, or VoidForge onboarding. Chains architecture review, assessment-mode Gauntlet, and PRD gap analysis into a unified "State of the Codebase" report.

## Context Setup
1. Read `/logs/build-state.md` if it exists — understand current project state
2. Read `/docs/methods/SYSTEMS_ARCHITECT.md`
3. Read `/docs/methods/GAUNTLET.md` (Flags section — `--assess`)
4. Read `/docs/PRD.md` if it exists

## The Sequence

### Step 1 — Picard's Architecture Scan
Run `/architect` — full bridge crew analysis. This maps the system: schema, integrations, security posture, service boundaries, tech debt.

### Step 2 — Thanos's Assessment Gauntlet
Run `/gauntlet --assess` — Rounds 1-2 only (Discovery + First Strike). No fix batches. Produces an assessment report grouped by root cause rather than domain.

**Key detection targets for pre-build:**
- **RC-STUB: Stub code** — Grep for `throw new Error('Implement`, `throw new Error('Not implemented`, `throw new Error('TODO`. Also detect functions returning `{ ok: true }` or `{ success: true }` without side effects, and handlers that log but perform no work. This is the #1 source of false functionality. (Field report: v17.0 assessment found 77 stub throws across 8 files.)
- **Abandoned migrations:** Duplicate implementations in competing directories (RC-1 pattern)
- **Stubs returning success:** Methods that return True/ok without side effects (RC-2 pattern)
- **Auth-free defaults:** HTTP endpoints with no authentication middleware (RC-3 pattern)
- **Dead code:** Services wired but never called, preferences stored but never read
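
The RC-STUB greps above can be sketched as shell commands. The patterns follow this section; the paths are a throwaway demo directory so the sketch is self-contained — against a real repo you would point the greps at its source tree and file globs:

```shell
# Throwaway demo file containing both stub flavors described above
demo=$(mktemp -d)
cat > "$demo/handler.ts" <<'EOF'
export function syncUsers() { throw new Error('Implement syncUsers'); }
export function deleteUser() { return { ok: true }; }
EOF

# Explicit stub throws (RC-STUB)
grep -rnE "throw new Error\('(Implement|Not implemented|TODO)" "$demo"

# Candidate silent stubs: success objects with no side effects.
# This only narrows the list — each hit still needs manual review.
grep -rnE "return [{] (ok|success): true [}]" "$demo"
```

The second pattern is deliberately loose: a legitimate function can also return `{ ok: true }`, so it flags candidates, not findings.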

### Step 3 — PRD Gap Analysis (Dax + Troi)
If a PRD exists:
1. **Dax** diffs PRD requirements against implemented features (structural + semantic)
2. **Troi** reads PRD prose section-by-section and verifies claims against reality
3. Check for YAML frontmatter — if missing, flag it (see CAMPAIGN.md Step 1)

If no PRD exists:
1. Produce a "What Exists" inventory: routes, schema, components, integrations, test coverage
2. Flag areas that need a PRD before building can begin

### Step 4 — State of the Codebase Report

Produce a unified report in `/logs/assessment.md`:

```markdown
# State of the Codebase — [Project Name]
## Date: [date]

## Architecture Summary
[From Step 1 — schema, services, integrations, tech debt]

## Root Causes (grouped)
[From Step 2 — findings grouped by root cause, not by domain]

## PRD Alignment
[From Step 3 — what matches, what's missing, what contradicts]

## Remediation Plan
| Priority | Root Cause | Impact | Recommended Action |
|----------|-----------|--------|-------------------|

## Recommendation
[One of: "Ready to build", "Needs remediation first (Phase 0)", "Needs PRD first", "Needs migration completion first"]
```

### Step 5 — Debrief (optional)
If findings are methodology-relevant (patterns that VoidForge should catch but doesn't), offer: "Want Bashir to file a field report?"

## When to Use
- Before onboarding an existing codebase to VoidForge
- Before a major version rebuild (v2 → v3)
- When inheriting a codebase from another team
- When the PRD assumes existing code works but you haven't verified it

## When NOT to Use
- On a fresh project (nothing to assess — just run `/build`)
- On methodology-only changes (no runtime code)
- After a build (use `/gauntlet` instead — it includes fix batches)

(Field report #125: user chained `/architect → /gauntlet → /prd → /debrief` manually. This command formalizes that workflow.)

@@ -0,0 +1,135 @@

# /blueprint — The Blueprint Path

> You brought the blueprint. The forge brings it to life.

Accept a pre-written PRD and supporting documents, validate them, provision infrastructure, and prepare for campaign execution. The fourth entry path — for users who already have a complete spec from Claude chat, a consultant, a previous iteration, or another tool.

## Context Setup
1. Read `/docs/methods/BUILD_PROTOCOL.md` (Phase 0 — Orient)
2. Read `/docs/methods/SYSTEMS_ARCHITECT.md` (Conflict Checklist)

## Prerequisites
- `docs/PRD.md` MUST exist with valid YAML frontmatter
- The VoidForge methodology CLAUDE.md must be present at the project root

## Step 1 — Picard Validates the PRD

Read `docs/PRD.md` and validate:

1. **Parse YAML frontmatter** using the frontmatter parser
   - Required fields: `name` (any other fields are optional but recognized)
   - Type-check: `type` must be one of: full-stack, api-only, static-site, prototype
   - Deploy-check: `deploy` must be one of: vps, vercel, railway, cloudflare, static, docker
   - If validation fails → show specific errors with guidance on how to fix them

2. **Troi's structural compliance** (from `prd-validator.ts`):
   - Does the PRD have an OVERVIEW or SUMMARY section?
   - Are there feature sections?
   - If `database` is configured, is there a DATA MODELS section?
   - If `deploy` is set, is there a DEPLOYMENT section?
   - If `auth: yes`, does the PRD mention authentication?
   - If `workers: yes`, does the PRD mention background jobs?
   - These are warnings, not blockers. Present them and proceed.

3. **Extract architecture** from frontmatter:
   - Framework, database, auth strategy, deploy target, workers
   - Show: "Blueprint: [name] — [framework] + [language], deploy to [target]"
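
A minimal frontmatter sketch that would pass the checks above. The field names and allowed values come from this step; the project values are placeholders, and which optional fields appear is illustrative:

```yaml
---
name: acme-dashboard   # required
type: full-stack       # one of: full-stack, api-only, static-site, prototype
deploy: vps            # one of: vps, vercel, railway, cloudflare, static, docker
database: postgres     # optional — if set, the PRD needs a DATA MODELS section
auth: yes              # optional — if yes, the PRD must mention authentication
workers: yes           # optional — if yes, the PRD must mention background jobs
---
```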

## Step 2 — Wong Discovers Supporting Documents

Wong opens a portal and scans the project directory for supporting materials:

| File | Action |
|------|--------|
| `docs/PRD.md` | **Required.** Already validated in Step 1. |
| `docs/PROJECT-DIRECTIVES.md` (or `docs/PROJECT-CLAUDE.md`, `docs/DIRECTIVES.md`) | **Appended** to the CLAUDE.md methodology. Never replaces it. |
| `docs/OPERATIONS.md` | Loaded into context. Sisko references it during campaign planning. |
| `docs/ADR/*.md` (or `docs/adrs/*.md`) | Loaded into context. Picard references them during architecture review. |
| `docs/reference/*` | Loaded into context. Available to all agents during the build. |

Present a discovery summary: "Wong found N supporting documents: [list]"

## Step 3 — Merge Project Directives

If a project directives file was discovered:
1. Read its contents
2. Append to CLAUDE.md under a `# PROJECT-SPECIFIC DIRECTIVES` marker
3. This is idempotent — if the marker already exists, skip
4. **CRITICAL: The merge APPENDS. It never replaces VoidForge's methodology.**

Log: "Project directives merged into CLAUDE.md from [path]"
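
The merge steps above can be sketched in plain shell. The marker text comes from this step; the file names and contents are a throwaway demo, not the actual VoidForge implementation:

```shell
demo=$(mktemp -d)
printf 'VoidForge methodology...\n' > "$demo/CLAUDE.md"
printf -- '- Prefer server components\n' > "$demo/DIRECTIVES.md"

MARKER='# PROJECT-SPECIFIC DIRECTIVES'
merge_directives() {
  # Idempotent: append only if the marker is not already present
  grep -qF "$MARKER" "$demo/CLAUDE.md" && return 0
  { printf '\n%s\n\n' "$MARKER"; cat "$demo/DIRECTIVES.md"; } >> "$demo/CLAUDE.md"
}

merge_directives
merge_directives   # second run is a no-op — the marker already exists
grep -c "$MARKER" "$demo/CLAUDE.md"   # prints 1
```

Checking for the marker before appending is what makes re-running `/blueprint` safe: the methodology is never duplicated and never overwritten.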

## Step 4 — Picard's Conflict Scan

Run the conflict checklist against the frontmatter:
- Auth + Database: auth needs persistent storage
- Payments + Auth: payments need authenticated users
- Workers + Deploy: workers need persistent hosting (not static/cloudflare)
- Cache + Deploy: Redis needs a host
- Admin + Auth: admin panel needs authentication
- Email + Credentials: email needs API keys

Present conflicts as warnings. The user can fix them (edit the PRD) or proceed.
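
The checklist above could be sketched roughly like this. The frontmatter shape, field names, and exact conditions are assumptions for illustration — not the actual VoidForge schema:

```typescript
// Hypothetical parsed-frontmatter shape (illustrative only)
interface Frontmatter {
  auth?: boolean; database?: string; payments?: boolean; workers?: boolean;
  cache?: string; admin?: boolean; email?: boolean; credentials?: boolean;
  deploy?: string;
}

// Returns warning strings; the scan never blocks on its own
function conflictScan(fm: Frontmatter): string[] {
  const warnings: string[] = [];
  const ephemeral = ["static", "cloudflare"].includes(fm.deploy ?? "");
  if (fm.auth && !fm.database)
    warnings.push("Auth + Database: auth needs persistent storage");
  if (fm.payments && !fm.auth)
    warnings.push("Payments + Auth: payments need authenticated users");
  if (fm.workers && ephemeral)
    warnings.push("Workers + Deploy: workers need persistent hosting");
  if (fm.cache === "redis" && ephemeral)
    warnings.push("Cache + Deploy: Redis needs a host");
  if (fm.admin && !fm.auth)
    warnings.push("Admin + Auth: admin panel needs authentication");
  if (fm.email && !fm.credentials)
    warnings.push("Email + Credentials: email needs API keys");
  return warnings;
}
```

Because each rule cross-references two frontmatter fields, the scan catches specs that are internally valid but jointly impossible (e.g. background workers on a static deploy target).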

## Step 5 — Boromir's Challenge (if `--challenge`)

If the `--challenge` flag is passed:
1. Boromir reads the complete PRD
2. Argues AGAINST it: expensive features, fragile integrations, schema gaps, deploy target mismatches
3. The user can accept challenges (edit the PRD) or override (proceed as-is)
4. A 30-second argument now saves a 3-hour refactor later

## Step 6 — Kusanagi Provisions Infrastructure (unless `--no-provision`)

If provisioning is requested (default):
1. Use the existing wizard provisioning pipeline
2. Configure from PRD frontmatter: framework, database, deploy target, workers
3. Install dependencies, set up tsconfig/package.json
4. Configure the deploy target (EC2, Vercel, Railway, Cloudflare, Docker)
5. Set up PM2 if workers are defined
6. Set up Docker Compose if containerized services are specified
7. DNS + SSL if a domain is specified

Skip this step if the `--no-provision` flag is set.

## Step 7 — Hand Off to Campaign

All prerequisites are met:
- PRD validated
- Supporting docs loaded
- CLAUDE.md merged (if directives exist)
- Conflicts scanned
- Infrastructure provisioned (unless `--no-provision`)

Present the summary:
```
═══════════════════════════════════════════
BLUEPRINT VALIDATED
═══════════════════════════════════════════
Project:     [name]
Stack:       [framework] + [language]
Deploy:      [target]
Docs loaded: [N] supporting documents
Conflicts:   [N] warnings (non-blocking)
Provision:   [done / skipped]
═══════════════════════════════════════════
Ready to build. Run:
  /campaign --blitz            # Autonomous build
  /campaign --blitz --muster   # Full multi-agent review
═══════════════════════════════════════════
```

## Arguments
- `--challenge` — Boromir argues against the PRD before provisioning
- `--no-provision` — Skip infrastructure provisioning (validate + discover only)
- No arguments — full pipeline: validate → discover → merge → conflict-scan → provision

## Agents

| Agent | Role |
|-------|------|
| **Picard** | Validates frontmatter + runs conflict scan |
| **Troi** | Structural PRD compliance check |
| **Wong** | Discovers and loads supporting documents |
| **Boromir** | Challenges PRD design (with `--challenge`) |
| **Kusanagi** | Provisions infrastructure from frontmatter |