@exaudeus/workrail 3.11.2 → 3.13.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (77)
  1. package/dist/console/assets/index-DW78t31j.css +1 -0
  2. package/dist/console/assets/index-EsSXrC_a.js +28 -0
  3. package/dist/console/index.html +2 -2
  4. package/dist/di/container.js +8 -0
  5. package/dist/di/tokens.d.ts +1 -0
  6. package/dist/di/tokens.js +1 -0
  7. package/dist/infrastructure/session/HttpServer.js +2 -14
  8. package/dist/manifest.json +139 -91
  9. package/dist/mcp/boundary-coercion.d.ts +2 -0
  10. package/dist/mcp/boundary-coercion.js +73 -0
  11. package/dist/mcp/handler-factory.d.ts +1 -1
  12. package/dist/mcp/handler-factory.js +13 -6
  13. package/dist/mcp/handlers/shared/request-workflow-reader.d.ts +10 -2
  14. package/dist/mcp/handlers/shared/request-workflow-reader.js +27 -10
  15. package/dist/mcp/handlers/shared/workflow-source-visibility.d.ts +3 -1
  16. package/dist/mcp/handlers/shared/workflow-source-visibility.js +7 -3
  17. package/dist/mcp/handlers/v2-execution/replay.js +25 -1
  18. package/dist/mcp/handlers/v2-execution/start.js +23 -17
  19. package/dist/mcp/handlers/v2-manage-workflow-source.d.ts +7 -0
  20. package/dist/mcp/handlers/v2-manage-workflow-source.js +50 -0
  21. package/dist/mcp/handlers/v2-workflow.js +123 -8
  22. package/dist/mcp/output-schemas.d.ts +393 -0
  23. package/dist/mcp/output-schemas.js +49 -1
  24. package/dist/mcp/server.js +2 -0
  25. package/dist/mcp/tool-descriptions.js +20 -0
  26. package/dist/mcp/tools.js +6 -0
  27. package/dist/mcp/types/tool-description-types.d.ts +1 -1
  28. package/dist/mcp/types/tool-description-types.js +1 -0
  29. package/dist/mcp/types/workflow-tool-edition.d.ts +1 -1
  30. package/dist/mcp/types.d.ts +2 -0
  31. package/dist/mcp/v2/tool-registry.js +8 -0
  32. package/dist/mcp/v2/tools.d.ts +15 -0
  33. package/dist/mcp/v2/tools.js +8 -1
  34. package/dist/v2/durable-core/constants.d.ts +1 -0
  35. package/dist/v2/durable-core/constants.js +2 -1
  36. package/dist/v2/durable-core/domain/observation-builder.d.ts +4 -1
  37. package/dist/v2/durable-core/domain/observation-builder.js +9 -0
  38. package/dist/v2/durable-core/schemas/export-bundle/index.d.ts +76 -16
  39. package/dist/v2/durable-core/schemas/session/events.d.ts +26 -5
  40. package/dist/v2/durable-core/schemas/session/events.js +2 -1
  41. package/dist/v2/infra/in-memory/managed-source-store/index.d.ts +8 -0
  42. package/dist/v2/infra/in-memory/managed-source-store/index.js +33 -0
  43. package/dist/v2/infra/local/data-dir/index.d.ts +2 -0
  44. package/dist/v2/infra/local/data-dir/index.js +6 -0
  45. package/dist/v2/infra/local/managed-source-store/index.d.ts +15 -0
  46. package/dist/v2/infra/local/managed-source-store/index.js +164 -0
  47. package/dist/v2/infra/local/session-summary-provider/index.js +2 -0
  48. package/dist/v2/infra/local/workspace-anchor/index.js +1 -0
  49. package/dist/v2/ports/data-dir.port.d.ts +2 -0
  50. package/dist/v2/ports/managed-source-store.port.d.ts +25 -0
  51. package/dist/v2/ports/managed-source-store.port.js +2 -0
  52. package/dist/v2/ports/workspace-anchor.port.d.ts +3 -0
  53. package/dist/v2/projections/resume-ranking.d.ts +1 -0
  54. package/dist/v2/usecases/console-routes.js +26 -0
  55. package/dist/v2/usecases/console-service.js +25 -6
  56. package/dist/v2/usecases/console-types.d.ts +22 -1
  57. package/dist/v2/usecases/worktree-service.d.ts +10 -0
  58. package/dist/v2/usecases/worktree-service.js +136 -0
  59. package/package.json +1 -1
  60. package/workflows/adaptive-ticket-creation.json +276 -282
  61. package/workflows/architecture-scalability-audit.json +317 -0
  62. package/workflows/document-creation-workflow.json +70 -191
  63. package/workflows/documentation-update-workflow.json +59 -309
  64. package/workflows/intelligent-test-case-generation.json +37 -212
  65. package/workflows/personal-learning-materials-creation-branched.json +1 -21
  66. package/workflows/presentation-creation.json +143 -308
  67. package/workflows/relocation-workflow-us.json +161 -535
  68. package/workflows/routines/tension-driven-design.json +5 -5
  69. package/workflows/scoped-documentation-workflow.json +110 -181
  70. package/workflows/workflow-for-workflows.v2.json +21 -5
  71. package/dist/console/assets/index-C5C4nDs4.css +0 -1
  72. package/dist/console/assets/index-CSUqsoQl.js +0 -28
  73. package/workflows/CHANGELOG-bug-investigation.md +0 -298
  74. package/workflows/bug-investigation.agentic.json +0 -212
  75. package/workflows/bug-investigation.json +0 -112
  76. package/workflows/mr-review-workflow.agentic.json +0 -538
  77. package/workflows/mr-review-workflow.json +0 -277
@@ -34,30 +34,30 @@
  {
  "id": "step-understand-deeply",
  "title": "Step 2: Understand the Problem Deeply",
- "prompt": "Understand the problem before proposing anything.\n\nReason through:\n- What are the core tensions in this problem? (e.g., performance vs simplicity, flexibility vs type safety, backward compatibility vs clean design)\n- How does the codebase already solve similar problems? Study the most relevant existing patterns — analyze the architectural decisions and constraints they protect, not just list files.\n- What's the simplest naive solution? Why is it insufficient? (If it IS sufficient, note that — it may be the best candidate.)\n- What makes this problem hard? What would a junior developer miss?\n- Which of the dev's philosophy principles are under pressure from this problem's constraints?\n\nWorking notes:\n- Core tensions (2-4 real tradeoffs, not generic labels)\n- Existing patterns analysis (decisions, invariants they protect)\n- Naive solution and why it's insufficient (or sufficient)\n- What makes this hard\n- Philosophy principles under pressure",
+ "prompt": "Understand the problem before proposing anything.\n\nReason through:\n- What are the core tensions in this problem? (e.g., performance vs simplicity, flexibility vs type safety, backward compatibility vs clean design)\n- How does the codebase already solve similar problems? Study the most relevant existing patterns — analyze the architectural decisions and constraints they protect, not just list files.\n- Where does the problem most likely live? Is the requested location the real seam, or just where the symptom appears?\n- What nearby callers, consumers, sibling paths, or contracts must remain consistent if that boundary changes?\n- What's the simplest naive solution? Why is it insufficient? (If it IS sufficient, note that — it may be the best candidate.)\n- What makes this problem hard? What would a junior developer miss?\n- Which of the dev's philosophy principles are under pressure from this problem's constraints?\n\nWorking notes:\n- Core tensions (2-4 real tradeoffs, not generic labels)\n- Existing patterns analysis (decisions, invariants they protect)\n- Likely seam / plausible boundaries\n- Nearby impact surface that must stay consistent\n- Naive solution and why it's insufficient (or sufficient)\n- What makes this hard\n- Philosophy principles under pressure",
  "agentRole": "You are reasoning deeply about the problem space before generating any solutions.",
  "requireConfirmation": false
  },
  {
  "id": "step-generate-candidates",
  "title": "Step 3: Generate Candidates from Tensions",
- "prompt": "Generate design candidates that resolve the identified tensions differently.\n\nMANDATORY candidates:\n1. The simplest possible change that satisfies acceptance criteria. If the problem doesn't need an architectural solution, say so.\n2. Follow the existing repo pattern — adapt what the codebase already does for similar problems. Don't invent when you can adapt.\n\nAdditional candidates (1-2 more):\n- Each must resolve the identified tensions DIFFERENTLY, not just vary surface details\n- Each must be grounded in a real constraint or tradeoff, not an abstract perspective label\n- Consider philosophy conflicts: if the stated philosophy disagrees with repo patterns, one candidate could follow the stated philosophy and another could follow the established pattern\n\nFor each candidate, produce:\n- One-sentence summary of the approach\n- Which tensions it resolves and which it accepts\n- The specific failure mode you'd watch for\n- How it relates to existing repo patterns (follows / adapts / departs)\n- What you gain and what you give up\n- Which philosophy principles it honors and which it conflicts with (by name)\n\nRules:\n- candidates must be genuinely different in shape, not just wording\n- if all candidates converge on the same approach, that's signal — note it honestly rather than manufacturing fake diversity\n- cite specific files or patterns when they materially shape a candidate",
+ "prompt": "Generate design candidates that resolve the identified tensions differently.\n\nMANDATORY candidates:\n1. The simplest possible change that satisfies acceptance criteria. If the problem doesn't need an architectural solution, say so.\n2. Follow the existing repo pattern — adapt what the codebase already does for similar problems. Don't invent when you can adapt.\n\nAdditional candidates (1-2 more):\n- Each must resolve the identified tensions DIFFERENTLY, not just vary surface details\n- Each must be grounded in a real constraint or tradeoff, not an abstract perspective label\n- Consider philosophy conflicts: if the stated philosophy disagrees with repo patterns, one candidate could follow the stated philosophy and another could follow the established pattern\n\nFor each candidate, produce:\n- One-sentence summary of the approach\n- Which tensions it resolves and which it accepts\n- Boundary solved at, and why that boundary is the best fit\n- The specific failure mode you'd watch for\n- How it relates to existing repo patterns (follows / adapts / departs)\n- What you gain and what you give up\n- Impact surface beyond the immediate task\n- Scope judgment: too narrow / best-fit / too broad, with concrete evidence\n- Which philosophy principles it honors and which it conflicts with (by name)\n\nRules:\n- candidates must be genuinely different in shape, not just wording\n- if all candidates converge on the same approach, that's signal — note it honestly rather than manufacturing fake diversity\n- broader scope requires concrete evidence\n- cite specific files or patterns when they materially shape a candidate",
  "agentRole": "You are generating genuinely diverse design candidates grounded in real tensions.",
  "requireConfirmation": false
  },
  {
  "id": "step-compare-and-recommend",
  "title": "Step 4: Compare via Tradeoffs and Recommend",
- "prompt": "Compare candidates through tradeoff analysis, not checklists.\n\nFor the set of candidates, assess:\n- Which tensions does each resolve best?\n- Which has the most manageable failure mode?\n- Which best fits the dev's philosophy? Where are the philosophy conflicts?\n- Which is most consistent with existing repo patterns?\n- Which would be easiest to evolve or reverse if assumptions are wrong?\n\nProduce a clear recommendation with rationale tied back to tensions and philosophy. If two candidates are close, say so and explain what would tip the decision.\n\nSelf-critique your recommendation:\n- What's the strongest argument against your pick?\n- What would make you switch to a different candidate?\n- What assumption, if wrong, would invalidate this design?\n\nWorking notes:\n- Comparison matrix (tensions x candidates)\n- Recommendation and rationale\n- Strongest counter-argument\n- Pivot conditions",
+ "prompt": "Compare candidates through tradeoff analysis, not checklists.\n\nFor the set of candidates, assess:\n- Which tensions does each resolve best?\n- Which solves the problem at the best-fit boundary?\n- Which has the most manageable failure mode?\n- Which best fits the dev's philosophy? Where are the philosophy conflicts?\n- Which is most consistent with existing repo patterns?\n- Which would be easiest to evolve or reverse if assumptions are wrong?\n- Which is too narrow, best-fit, or too broad — and why?\n\nProduce a clear recommendation with rationale tied back to tensions, scope judgment, repo patterns, and philosophy. If two candidates are close, say so and explain what would tip the decision.\n\nSelf-critique your recommendation:\n- What's the strongest argument against your pick?\n- What narrower option might still work, and why did it lose?\n- What broader option might be justified, and what evidence would be required?\n- What assumption, if wrong, would invalidate this design?\n\nWorking notes:\n- Comparison matrix (tensions x candidates)\n- Recommendation and rationale\n- Strongest counter-argument\n- Pivot conditions",
  "agentRole": "You are comparing candidates honestly and recommending based on tradeoffs, not advocacy.",
  "requireConfirmation": false
  },
  {
  "id": "step-deliver",
  "title": "Step 5: Deliver the Design Candidates",
- "prompt": "Create `{deliverableName}`.\n\nRequired structure:\n- Problem Understanding (tensions, what makes it hard)\n- Philosophy Constraints (which principles matter, any conflicts)\n- Candidates (each with: summary, tensions resolved/accepted, failure mode, repo-pattern relationship, gains/losses, philosophy fit)\n- Comparison and Recommendation\n- Self-Critique (strongest counter-argument, pivot conditions)\n- Open Questions for the Main Agent\n\nThe main agent will interrogate this output — it is raw investigative material, not a final decision. Optimize for honest, useful analysis over polished presentation.",
+ "prompt": "Create `{deliverableName}`.\n\nRequired structure:\n- Problem Understanding (tensions, likely seam, what makes it hard)\n- Philosophy Constraints (which principles matter, any conflicts)\n- Impact Surface (what nearby paths, consumers, or contracts must stay consistent)\n- Candidates (each with: summary, tensions resolved/accepted, boundary solved at, why that boundary is the best fit, failure mode, repo-pattern relationship, gains/losses, scope judgment, philosophy fit)\n- Comparison and Recommendation\n- Self-Critique (strongest counter-argument, pivot conditions)\n- Open Questions for the Main Agent\n\nThe main agent will interrogate this output — it is raw investigative material, not a final decision. Optimize for honest, useful analysis over polished presentation.",
  "agentRole": "You are delivering design analysis for the main agent to interrogate and build on.",
  "requireConfirmation": false
  }
  ]
- }
+ }
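For orientation, each step object edited in the hunk above follows the same shape. A minimal sketch (field names and values copied from the hunk itself; the long prompt string is elided here, and this is not the package's full step schema, which may define further fields such as guidance or validationCriteria):

```json
{
  "id": "step-understand-deeply",
  "title": "Step 2: Understand the Problem Deeply",
  "prompt": "Understand the problem before proposing anything. ...",
  "agentRole": "You are reasoning deeply about the problem space before generating any solutions.",
  "requireConfirmation": false
}
```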
@@ -1,13 +1,13 @@
  {
  "id": "scoped-documentation-workflow",
  "name": "Scoped Documentation Workflow",
- "version": "1.0.0",
- "description": "Create documentation for a SINGLE, BOUNDED subject with strict scope enforcement. Perfect for: one class/component, one integration point, one mechanism, one architecture decision. Prevents documentation sprawl through continuous boundary validation (9+/10 scope compliance required). NOT for: project READMEs, multi-component systems, or comprehensive guides - use document-creation-workflow for those.",
+ "version": "2.0.0",
+ "description": "Create documentation for a SINGLE, BOUNDED subject with strict scope enforcement. Perfect for: one class/component, one integration point, one mechanism, one architecture decision. Prevents documentation sprawl through continuous boundary validation.",
  "clarificationPrompts": [
  "What specifically do you want to document? (feature, component, library, mechanism, interaction, architecture, process, etc.)",
  "Who will read this documentation? (team members, external users, new developers, architects, etc.)",
  "What should readers be able to do after reading this documentation?",
- "Do you have a preferred format or sections you want? (Or should I recommend based on subject type?)",
+ "Do you have a preferred format or sections? (Or should I recommend based on subject type?)",
  "Are there existing documentation templates or patterns I should follow?"
  ],
  "preconditions": [
@@ -16,58 +16,18 @@
  "Agent can read files, analyze code, and write documentation"
  ],
  "metaGuidance": [
- "WHO YOU ARE: You are a documentation specialist with a unique superpower - scope discipline. While others let documentation sprawl across systems, you maintain laser focus on exactly what needs to be documented.",
- "YOUR MISSION: Create focused, high-quality documentation with clear boundaries. Not everything related to the subject - just what's in scope, documented thoroughly with evidence and examples.",
- "WHY THIS WORKFLOW EXISTS: It prevents the #1 documentation problem: scope creep. Documentation that tries to explain everything ends up maintaining nothing and helping no one.",
- "THE CORE PRINCIPLE - SCOPE IS LAW:",
- "You will define a boundary (what's in vs out) and defend it religiously. When tempted to explain out-of-scope items, you reference them instead. This discipline makes documentation maintainable.",
- "HOW IT WORKS: Each major phase follows PLAN EXECUTE SELF-CRITIQUE. You think strategically before doing work, then critique your own output honestly. Better planning prevents scope drift.",
- "THE PHASES:",
- "Phase 0: Quick reconnaissance, then define scope boundaries with user approval",
- "Phase 1: Plan analysis approach, execute it, critique your scope adherence",
- "Phase 2: Plan documentation structure, create detailed outline, verify scope alignment",
- "Phase 3: Plan writing strategy, write content with continuous scope checks",
- "Phase 4: Adversarial validation - be your harshest critic before delivering",
- "Phase 5: Finalize with scope map and maintenance notes",
- "CRITICAL DISTINCTION - REFERENCE VS EXPLAIN:",
- "When you encounter something out of scope, you have two choices: REFERENCE it (link, one sentence) or EXPLAIN it (detailed documentation). You must REFERENCE, never EXPLAIN.",
- "Explaining out-of-scope items creates duplication, maintenance burden, and scope drift. One violation leads to more. Protect the boundary.",
- "THE FORCING FUNCTIONS:",
- "You cannot skip self-critique - it runs after every execute phase. You cannot proceed with understanding <8/10 - gather more evidence. You cannot proceed with quality <9/10 - iterate until passing. You cannot violate scope - get approval or fix it.",
- "YOUR DELIVERABLE: Focused documentation with SCOPE_MAP showing what's in (explained) vs out (referenced). Someone reading it knows exactly what this covers and where to find related information.",
- "SUCCESS MEANS: Readers can achieve their goal using your documentation. It stays current because boundaries are clear. It doesn't duplicate other docs because you referenced instead of explained.",
- "WORKFLOW MECHANICS: I'll check with you at critical points (scope definition, validation issues). Otherwise I work autonomously with continuous self-critique. You can adjust anytime: 'check more often' or 'just finish'."
+ "SCOPE IS LAW: Define a boundary and defend it. REFERENCE out-of-scope items (one sentence + link). Never EXPLAIN them. One violation leads to more; protect the boundary.",
+ "REFERENCE vs EXPLAIN: Good: 'This uses CacheManager (see Cache Docs) to store results.' Bad: 'CacheManager works by maintaining an LRU cache that...' Never explain out-of-scope internals.",
+ "TEMPTATION LOGGING: Every time you almost explain an out-of-scope item, log it: 'Almost explained [X] but stopped: out of scope.' Zero logs on a complex subject is a red flag.",
+ "NOTES-FIRST DURABILITY: Use output.notesMarkdown as the primary durable record. ANALYSIS.md, OUTLINE.md, and SCOPE_CONTRACT.md are optional human-facing artifacts, not required workflow memory.",
+ "RUBRIC OVER VIBES: Score concrete dimensions with evidence sentences. Derive your next action from the rubric result, not from a gut feeling about whether things seem okay.",
+ "DEFAULT BEHAVIOR: self-execute with tools. Only ask the user for approvals at explicit checkpoints or for external knowledge you genuinely cannot determine yourself."
  ],
  "steps": [
  {
- "id": "phase-0a-reconnaissance",
- "title": "Phase 0A: Reconnaissance & User Empowerment",
- "prompt": "**SCOPED DOCUMENTATION WORKFLOW**\n\nYou want to document: \"[user's request]\"\n\n**How I'll work:**\nI'll handle most decisions autonomously and check in only for critical choices (like scope definition and validation). This keeps us moving efficiently while maintaining quality.\n\n**You have full control - adjust anytime:**\n- Say **\"check with me more often\"** I'll add approval points at each phase\n- Say **\"just finish it\"** → I'll only stop for critical blocking issues\n- Say **\"stop, let me review [X]\"** → I'll pause and show you anything\n\n**Let me quickly analyze what exists to present informed scope options...**\n\n**RECONNAISSANCE (2-3 minutes):**\n\nPerform quick exploration to understand context:\n\n**For CODE subjects:**\n- Locate mentioned files/components/features\n- Identify primary interfaces and entry points\n- Map immediate dependencies (one level)\n- Check for existing documentation\n- Note related components/features\n- Assess complexity (simple/moderate/complex)\n\n**For SYSTEM/CONCEPT subjects:**\n- Identify key components or elements\n- Map relationships and interactions\n- Find existing documentation or specs\n- Note related systems/concepts\n- Assess complexity\n\n**For PROCESS/WORKFLOW subjects:**\n- Identify steps and participants\n- Map inputs/outputs and decision points\n- Find existing documentation\n- Note related processes\n- Assess complexity\n\n**OUTPUT FORMAT:**\n\n**RECONNAISSANCE FINDINGS:**\n- **Primary Subject:** [what user mentioned]\n- **Type:** [code/system/concept/process/interaction]\n- **Located at:** [file paths, system names, or description]\n- **Related Components:** [list with brief descriptions]\n- **Dependencies:** [key dependencies identified]\n- **Existing Docs:** [found/not found, links if found]\n- **Complexity Assessment:** [Simple/Moderate/Complex]\n- **Potential Scope Boundaries:** [initial thoughts on what's in vs out]\n\n**Next: Proposing scope definition based on these findings...**",
- "agentRole": "You are a documentation strategist performing rapid reconnaissance. Your goal is to understand the landscape quickly (2-3 minutes) so you can present informed, realistic scope options. Be efficient but thorough enough to make good recommendations.",
- "guidance": [
- "Keep reconnaissance fast (2-3 minutes) - light exploration, not deep analysis",
- "Focus on identifying boundaries and related items",
- "Check for existing docs to avoid duplication",
- "Assess complexity to inform scope recommendations",
- "Use tools efficiently: file search, grep, codebase_search",
- "Don't analyze deeply yet - that comes in Phase 1"
- ],
- "requireConfirmation": false
- },
- {
- "id": "phase-0b-scope-definition",
- "title": "Phase 0B: Scope Definition (CRITICAL CHECKPOINT)",
- "prompt": "**SCOPE DEFINITION - CRITICAL CHECKPOINT**\n\nBased on reconnaissance, I need your approval on what to document.\n\n**PROPOSED SCOPE (Balanced Recommendation):**\n\n**Subject:** [One clear sentence describing what you're documenting]\n\n**IN SCOPE** (will be explained in detail):\n• [Specific component/feature/mechanism]\n• Core functionality and key behaviors\n• Common use cases and usage patterns (3-7 examples)\n• Integration points and interfaces\n• Important edge cases and error handling\n• Design decisions and tradeoffs (where relevant)\n\n**OUT OF SCOPE** (will be referenced only, not explained):\n• [Dependency X] internals (referenced as prerequisite/integration)\n• [Related feature Y] (will link to separate docs if they exist)\n• Advanced edge cases (unless critical to understanding)\n• Historical context or migration paths (unless specifically needed)\n\n**BOUNDARY CONDITIONS** (where in-scope meets out-of-scope):\n• Interface with [System A]: Document our side, reference their docs\n• Integration with [Component B]: Document the contract, not their internals\n• Uses [Library C]: Document our usage patterns, reference library docs\n\n**TARGET AUDIENCE:** [Who will read this]\n**SUCCESS CRITERIA:** Reader can [specific outcome: use it, modify it, understand how it works, etc.]\n\n**WHY THIS SCOPE:**\n- Covers [user's request] comprehensively\n- Focused enough to complete in reasonable time (~4-6 hours)\n- Appropriate depth for [target audience]\n- Maintainable - clear boundaries prevent documentation drift\n- Balances completeness with practicality\n\n**ESTIMATED EFFORT:** 4-6 hours\n**ESTIMATED LENGTH:** ~1500-2500 words\n\n**Does this scope look right?**\n\n**Options:**\n1. **✓ Proceed** - Scope looks good, go ahead\n2. **Expand scope** - Add: [what to include that's currently out]\n3. **Narrow scope** - Remove: [what to exclude that's currently in]\n4. **Adjust boundaries** - Change: [specific adjustments]\n\n**Please reply with your choice or adjustments.**\n\n(If no response after 30 seconds, I'll interpret as approval and proceed)\n\n**After approval: I'll create SCOPE_CONTRACT.md and proceed with autonomous analysis, checking back only for critical questions or validation.**",
- "agentRole": "You are presenting a scope contract for approval. This is a critical checkpoint because scope affects all downstream work. Present a clear, realistic scope with reasoning. Make it easy for the user to approve or adjust.",
- "guidance": [
- "Present ONE recommended scope (balanced), not multiple options",
- "Be specific about what's in vs out - vague boundaries cause problems later",
- "Explain WHY this scope makes sense based on reconnaissance",
- "Make boundaries crystal clear - this is the contract",
- "Include effort estimate to set expectations",
- "Provide easy approval mechanism",
- "After approval, create SCOPE_CONTRACT.md for reference throughout workflow"
- ],
+ "id": "phase-0-reconnaissance-and-scope",
+ "title": "Phase 0: Reconnaissance & Scope Definition",
+ "prompt": "I want to create focused documentation for: \"[user's request]\"\n\n**How I work:** I handle most decisions autonomously and stop only for critical choices like scope approval. You can adjust anytime: say 'check with me more often' to add phase checkpoints, or 'just finish it' to minimize stops.\n\n**Step 1 Reconnaissance (2-3 minutes):**\n\nExplore the subject quickly to understand the landscape before proposing scope:\n\n- Locate the subject (files, system definitions, process docs)\n- Identify primary interfaces and entry points\n- Map immediate dependencies (one level deep)\n- Check for existing documentation to avoid duplication\n- Note related components and assess complexity (simple/moderate/complex)\n\nReconnaissance findings:\n- **Primary Subject:** [what was requested]\n- **Type:** [code/system/concept/process/interaction]\n- **Located at:** [file paths, system names, or description]\n- **Related Components:** [list with brief descriptions]\n- **Dependencies:** [key dependencies identified]\n- **Existing Docs:** [found/not found]\n- **Complexity:** [Simple/Moderate/Complex]\n\n**Step 2 — Scope Proposal:**\n\nBased on reconnaissance, propose a scope contract:\n\n**Subject:** [One clear sentence describing what you're documenting]\n\n**IN SCOPE** (will be explained in detail):\n- [Specific component/feature/mechanism and its core behaviors]\n- [Common use cases and usage patterns]\n- [Integration points and interfaces]\n- [Important edge cases and design decisions]\n\n**OUT OF SCOPE** (will be referenced only, not explained):\n- [Dependency X] internals — referenced as prerequisite\n- [Related feature Y] — link to separate docs\n- [Historical context] — unless specifically needed\n\n**BOUNDARY CONDITIONS** (where in-scope meets out-of-scope):\n- [Interface with System A]: document our side, reference their docs\n- [Integration with Component B]: document the contract, not their internals\n\n**Target Audience:** [who will read this]\n**Success Criteria:** Reader can [specific outcome]\n**Estimated Length:** ~[N] words\n\n**Does this scope look right?** Reply with approval or adjustments. If no response, I'll interpret as approval and proceed.\n\nAfter approval: I'll proceed with autonomous analysis, checking back only for critical questions or validation issues.",
  "requireConfirmation": true,
  "validationCriteria": [
  {
@@ -89,164 +49,133 @@
  "hasValidation": true
  },
  {
- "id": "phase-1a-analysis-planning",
- "title": "Phase 1A: Analysis Strategy Planning",
- "prompt": "**PLAN: Analysis Strategy**\n\n**ANALYSIS APPROACH DECISION (Autonomous):**\n\nI'll use a **Behavior-Focused Analysis** approach:\n\n**Investigation Plan:**\n1. Map interfaces and public API surface (within scope)\n2. Trace key execution flows and behaviors\n3. Identify important mechanisms and patterns\n4. Extract representative examples from code/usage\n5. Document integration points and dependencies\n6. Note design decisions and tradeoffs\n7. Verify understanding through testing/observation where possible\n\n**Depth Target:**\n- Understand WHAT it does (functionality)\n- Understand HOW it works (mechanisms)\n- Understand WHY it's designed this way (key decisions)\n- Stop before deep implementation minutiae (unless in scope)\n\n**Scope Violation Prevention Strategy:**\n\nI've identified these temptation points where I might drift out of scope:\n\n1. **Risk:** Diving into [Dependency X] implementation\n **Prevention:** Only document its interface/contract. If I start reading its internals, STOP.\n \n2. **Risk:** Explaining [Related Feature Y] in detail\n **Prevention:** Reference it only. One sentence about what it does, link to docs.\n \n3. **Risk:** Over-analyzing [Low-Level Detail Z]\n **Prevention:** Keep at appropriate abstraction level for target audience.\n\n**Evidence Collection Plan:**\n- Code examples: Extract 5-7 representative usage patterns\n- Behavior observation: [Test/run code if applicable]\n- Documentation review: Check comments, existing docs for design context\n- Dependency contracts: Document interfaces only\n\n**PROCEEDING TO ANALYSIS EXECUTION...**",
- "agentRole": "You are planning your analysis approach. This is autonomous (no user checkpoint) but you must document your strategy clearly. The plan guides your execution and prevents scope drift.",
- "guidance": [
- "Choose behavior-focused approach by default (works for most cases)",
- "Explicitly identify scope violation risks - forces you to think about boundaries",
- "Document prevention strategies - these become your guardrails",
- "Be specific about what you'll investigate and what you'll skip",
- "This plan is your contract with yourself - follow it in Phase 1B"
- ],
- "requireConfirmation": false
- },
- {
- "id": "phase-1b-analysis-execution",
- "title": "Phase 1B: Analysis Execution with Self-Critique",
- "prompt": "**EXECUTE: Scoped Analysis**\n\nExecuting analysis plan from Phase 1A...\n\n**ANALYSIS PROCESS:**\n\nFollow your investigation plan systematically:\n\n**For CODE subjects:**\n- Read in-scope files/components\n- Trace key execution paths (within scope)\n- Map public interfaces and APIs\n- Extract code examples showing usage\n- Note design patterns and decisions\n- Document integration points\n\n**For SYSTEM/CONCEPT subjects:**\n- Map components and their relationships\n- Trace data/control flow\n- Identify key behaviors and mechanisms\n- Extract examples or scenarios\n- Note design tradeoffs\n- Document integration patterns\n\n**For PROCESS subjects:**\n- Map steps and decision points\n- Identify participants and roles\n- Trace typical and edge-case paths\n- Extract example scenarios\n- Note design rationale\n- Document integration points\n\n**BOUNDARY ENFORCEMENT (continuous):**\n\nWhen you encounter something outside scope:\n- ✅ **REFERENCE IT:** \"This integrates with [System X] via [interface]\"\n- ✅ **LINK TO DOCS:** \"For [System X] details, see [link]\"\n- ❌ **DON'T EXPLAIN IT:** Don't dive into how it works internally\n- ❌ **DON'T ANALYZE IT:** Don't read its code or trace its logic\n\n**TEMPTATION LOGGING:**\n\nLog every time you're tempted to go out of scope:\n- \"Almost analyzed [X] implementation but stopped - out of scope\"\n- \"Tempted to explain [Y] mechanism but only referenced it\"\n- \"Started reading [Z] file but caught myself - dependency only\"\n\n**CREATE ANALYSIS.md with:**\n\n## Subject Overview\n- What it is and what it does\n- Key purpose and use cases\n\n## Core Components/Elements\n- Main parts (within scope)\n- How they relate/interact\n\n## Key Behaviors and Mechanisms\n- How it works (step-by-step where helpful)\n- Important patterns\n\n## Integration Points\n- Dependencies (interface-level only)\n- Consumers/users of this\n- Key contracts\n\n## Examples and Usage Patterns\n- Representative examples (5-7)\n- Common scenarios\n\n## Design Decisions and Tradeoffs\n- Why it's built this way\n- Key choices made\n\n## Scope Boundary Log\n- Items encountered but not analyzed (out of scope)\n- Temptations stopped\n\n---\n\n**SELF-CRITIQUE (MANDATORY):**\n\nAnswer these questions honestly:\n\n**1. Did I stay within scope boundaries?**\n→ [YES/NO with evidence]\n→ If NO: [What violations? Get user approval to adjust]\n\n**2. Did I analyze any out-of-scope dependencies in detail?**\n→ [YES/NO with specifics]\n→ Temptations logged: [count and list]\n\n**3. What tempted me to go off-track?**\n→ [List specific items and how you stopped]\n\n**4. Is my understanding depth appropriate for the scope?**\n→ [Too shallow/Right level/Too deep]\n→ Evidence: [why you think so]\n\n**5. Did I gather sufficient evidence for claims I'll make in documentation?**\n→ [YES/NO with assessment]\n→ Examples collected: [count]\n→ Behaviors verified: [how]\n\n**6. What am I still uncertain about (within scope)?**\n→ [List specific uncertainties]\n→ **CRITICAL:** Any uncertainty that blocks documentation must be resolved\n\n**7. Understanding confidence rating: __/10**\n→ **FORCING FUNCTION:** If <8/10, I MUST gather more evidence before proceeding\n→ If <8: [What additional analysis needed?]\n\n**8. Any scope violations or major uncertainties discovered?**\n→ If YES: This triggers CRITICAL CHECKPOINT for user input\n\n---\n\n**OUTPUT:**\n\nIf understanding ≥8/10 AND no critical issues:\n→ \"Analysis complete. Understanding: [X]/10. Proceeding to structure planning...\"\n\nIf understanding <8/10:\n→ \"Understanding insufficient ([X]/10). Gathering more evidence on: [specifics]...\"\n→ [Continue analysis until ≥8/10]\n\nIf critical blocking question:\n→ \"**CRITICAL CHECKPOINT:** I need your input on: [specific question]\"\n→ [Wait for user response]\n\nIf scope violations discovered:\n→ \"**SCOPE VIOLATION DETECTED:** Found [X] which affects scope boundary. Need approval to: [adjust scope / stay course]\"\n→ [Wait for user response]",
- "agentRole": "You are executing analysis with rigorous scope enforcement and self-critique. This is autonomous unless you hit a critical blocker. Be thorough in analysis but ruthless about scope boundaries. Self-critique is mandatory and must be honest.",
- "guidance": [
- "Follow your Phase 1A plan systematically",
- "Log EVERY temptation to drift out of scope - this proves discipline",
- "Reference out-of-scope items, never explain them",
- "Gather concrete evidence - examples, code snippets, observed behaviors",
- "Self-critique must be honest - this is quality assurance",
- "Understanding <8/10 is a forcing function - keep analyzing",
- "Critical questions or scope violations trigger user checkpoint",
- "Create ANALYSIS.md as structured artifact for later phases",
- "Document design decisions - they inform documentation narrative"
- ],
- "requireConfirmation": false
- },
- {
- "id": "phase-2a-structure-planning",
- "title": "Phase 2A: Plan Documentation Structure",
- "prompt": "**PLAN: Documentation Structure Strategy**\n\n**STRUCTURE PLANNING (Autonomous):**\n\nBased on analysis, I'll design the documentation structure.\n\n**PLANNING QUESTIONS:**\n\n1. **What structure best serves this subject?**\n - For CODE: Comprehensive Guide (overview → how it works → usage → reference)\n - For SYSTEMS: Architecture Guide (components → interactions → integration)\n - For CONCEPTS: Explanation Guide (problem → solution → examples → applications)\n - For PROCESSES: Workflow Guide (overview → steps → decision points → examples)\n\n2. **What's the optimal information flow for target audience?**\n - What do they need to know first?\n - What order builds understanding naturally?\n - Where should examples appear?\n\n3. **How many sections and what depth?**\n - Based on scope complexity\n - Based on target audience expertise\n - Based on success criteria (use it? modify it? understand internals?)\n\n4. **Scope alignment in structure:**\n - Which sections document in-scope items?\n - Where do out-of-scope items get referenced?\n - How to make boundaries clear?\n\n**CHOSEN STRUCTURE: Comprehensive Guide**\n\n**Planned Sections:**\n\n```\n1. Overview & Context (~200 words)\n - What [subject] is and why it exists\n - Key concepts and terminology\n - Quick usage example\n\n2. How It Works (~400-600 words)\n - Core mechanisms (step-by-step)\n - Key behaviors and patterns\n - Integration points and dependencies\n \n3. Usage Guide (~500-700 words)\n - Getting started / Basic usage\n - Common patterns (5-7 examples)\n - Best practices\n \n4. Reference (~300-500 words)\n - [API/Interface/Component reference as appropriate]\n - Parameters, options, configurations\n \n5. Edge Cases & Troubleshooting (~200-300 words)\n - Important gotchas\n - Common issues and solutions\n - Limitations\n \n6. 
Related Documentation (~100 words)\n - Dependencies (with links)\n - Related features/systems (with links)\n - Further reading\n```\n\n**Total Estimated Length:** ~1800-2500 words\n\n**WHY THIS STRUCTURE:**\n- Serves [target audience] who need to [success criteria]\n- Progressive disclosure: overview → understanding → usage → reference\n- Examples throughout, not isolated\n- Troubleshooting integrated for practical use\n- Clear scope boundaries in \"Related Documentation\" section\n\n**SCOPE COMPLIANCE CHECK:**\n✅ Every section maps to in-scope items from ANALYSIS.md\n✅ Out-of-scope items only in \"Related Documentation\"\n✅ No section would require explaining out-of-scope dependencies\n✅ Structure matches SCOPE_CONTRACT.md\n\n**PROCEEDING TO CREATE DETAILED OUTLINE...**",
- "agentRole": "You are planning the documentation structure. Think strategically about what structure best serves the target audience and scope. This is the PLAN step - you'll create the detailed outline in Phase 2B.",
- "guidance": [
- "Choose structure based on subject type and audience needs",
- "Consider information flow and progressive disclosure",
- "Ensure every section maps to in-scope items",
- "Plan for appropriate depth based on scope and audience",
- "This is planning only - detailed outline comes next"
- ],
- "requireConfirmation": false
- },
- {
- "id": "phase-2b-detailed-outline",
- "title": "Phase 2B: Create Detailed Outline",
- "prompt": "**EXECUTE: Creating Detailed Outline**\n\nFollowing the structure plan from Phase 2A, creating detailed outline...\n\n**DETAILED OUTLINE CREATION:**\n\nFor each section from Phase 2A, create specific content outline based on ANALYSIS.md:\n\n**1. Overview & Context**\n- **What:** [One-sentence description from analysis]\n- **Why:** [Purpose and problem it solves from analysis]\n- **Key Concepts:** [2-3 terms that need definition]\n- **Quick Example:** [Simplest usage pattern from analysis examples]\n\n**2. How It Works**\n- **2.1 Core Mechanism:** [Step-by-step explanation - from analysis mechanisms section]\n- **2.2 Key Behaviors:** [3-4 important behaviors from analysis]\n- **2.3 Integration Points:** [How it connects to dependencies - interface level only]\n\n**3. Usage Guide**\n- **3.1 Getting Started:** [Setup/initialization steps from analysis]\n- **3.2 Common Patterns:** [5-7 examples from analysis]\n - Pattern 1: [Common use case A - from analysis example 1]\n - Pattern 2: [Common use case B - from analysis example 2]\n - Pattern 3: [Integration scenario - from analysis example 3]\n - Pattern 4: [Edge case handling - from analysis example 4]\n - Pattern 5: [Advanced usage - from analysis example 5]\n - Pattern 6-7: [Additional examples if analysis has them]\n- **3.3 Best Practices:** [From design decisions in analysis]\n\n**4. Reference**\n- [API methods / Configuration options / Components / etc.]\n- [Specific items from analysis - organized logically]\n- [Parameters, types, return values, descriptions]\n\n**5. Edge Cases & Troubleshooting**\n- **Gotcha 1:** [From analysis edge cases]\n- **Gotcha 2:** [From analysis limitations]\n- **Issue → Solution pairs:** [From analysis and testing]\n\n**6. 
Related Documentation**\n- **Dependencies:** [From analysis dependency section - with links if available]\n- **Related Features:** [From analysis related components - with links]\n- **Further Reading:** [Background concepts or advanced topics]\n\n---\n\n**SELF-CRITIQUE (Mandatory):**\n\n**1. Does every section map to items from ANALYSIS.md?**\n→ [YES/NO with verification]\n→ If NO: [What's missing from analysis? Need to go back?]\n\n**2. Are out-of-scope items only in \"Related Documentation\"?**\n→ [YES/NO with check]\n→ If NO: [Which sections have out-of-scope items? Fix them]\n\n**3. Is the flow logical for target audience?**\n→ [YES/NO with reasoning]\n→ If NO: [How should it be reordered?]\n\n**4. Does this outline serve the success criteria?**\n→ [YES/NO - can readers achieve their goal?]\n→ If NO: [What's missing?]\n\n**5. Can I write each section from my analysis?**\n→ [YES/NO for each section]\n→ If NO for any: [What additional analysis needed? Block and gather more]\n\n**6. Outline completeness confidence: __/10**\n→ **FORCING FUNCTION:** If <8/10, identify gaps and address them\n\n**7. Scope compliance in outline: __/10**\n→ Must be ≥9/10 to proceed\n\n**OUTLINE QUALITY CHECK:**\n- Specific enough to guide writing? ✓/✗\n- All examples sourced from analysis? ✓/✗\n- No out-of-scope explanations planned? ✓/✗\n- Appropriate depth for each section? ✓/✗\n\n**OUTPUT: OUTLINE.md created**\n\n**If outline passes self-critique: Proceeding to writing planning...**\n**If issues found: Fix them now before planning writing.**",
- "agentRole": "You are creating a detailed outline based on your structure plan and analysis findings. This is execution with mandatory self-critique. Be specific and thorough - this outline guides all writing.",
- "guidance": [
- "Pull every content point from ANALYSIS.md - don't invent new content",
- "Be specific - vague outline leads to vague documentation",
- "Every example must be from analysis - no making up examples",
- "Self-critique is mandatory - catch scope issues before writing",
- "Outline quality <8/10 means you need more analysis",
- "Scope compliance <9/10 is a blocker - fix before proceeding"
- ],
+ "id": "phase-1-analysis",
+ "title": "Phase 1: Scoped Analysis",
+ "promptBlocks": {
+ "goal": "Analyze the subject thoroughly within the approved scope boundary and score evidence quality before proceeding.",
+ "constraints": [
+ "Enforce scope during analysis: read and trace only in-scope files and behaviors. When you encounter an out-of-scope item, log it and move on — do not analyze it.",
+ "Notes-first: record analysis findings in notesMarkdown, not in ANALYSIS.md as a required artifact.",
+ "Derive the proceed/gather-more decision from the evidence rubric — not from a gut feeling."
+ ],
+ "procedure": [
+ "Investigation approach: (1) Map interfaces and public API surface (in-scope only). (2) Trace key execution flows and behaviors. (3) Identify important mechanisms and patterns. (4) Extract 5+ representative examples from code or usage. (5) Document integration points at the interface/contract level only. (6) Note design decisions and tradeoffs.",
+ "Boundary enforcement (continuous) — when you encounter something outside scope: REFERENCE IT ('This integrates with [System X] via [interface]'), LINK TO DOCS ('For [System X] details, see [link]'), DO NOT EXPLAIN IT, DO NOT ANALYZE IT, LOG THE TEMPTATION ('Almost analyzed [X] but stopped — out of scope').",
+ "Evidence rubric — score all 4 dimensions before deciding to proceed. Score each dimension 0, 1, or 2 and write one evidence sentence for each.",
+ "subjectBoundaryClarity: 0=boundary confirmed and clear, 1=likely correct but one area uncertain, 2=boundary still ambiguous",
+ "behaviorCoverage: 0=all key behaviors identified with examples, 1=most behaviors covered with minor gaps, 2=significant behavior gaps remain",
+ "examplesCollected: 0=5+ concrete examples extracted from subject, 1=2-4 examples found, 2=fewer than 2 verifiable examples",
+ "integrationContractClarity: 0=all integration points documented at interface level, 1=most documented, 2=key integration points still unclear",
+ "Rubric gate: if any dimension scores 2, gather more evidence before proceeding. If total score is 5 or more, gather more evidence. Otherwise proceed to Phase 2.",
+ "If you find a critical scope issue (the subject is actually two distinct subjects, or a required dependency cannot be referenced without explaining it), surface this to the user before proceeding."
+ ],
+ "outputRequired": {
+ "analysisFindings": "Subject overview, core behaviors/components, integration points, examples (5+), design decisions — recorded in notesMarkdown",
+ "scopeBoundaryLog": "Every temptation stopped: what was encountered and why it was left out",
+ "evidenceRubricScores": "All 4 dimensions scored with evidence sentences, plus gate decision (proceed or gather more with specifics)"
+ },
+ "verify": [
+ "All 4 rubric dimensions scored with evidence sentences",
+ "No dimension scored 2 and total score < 5",
+ "Temptation log present (zero temptations on a complex subject is worth noting)",
+ "Analysis findings recorded in notesMarkdown, not only in a sidecar file"
+ ]
+ },
  "requireConfirmation": false
  },
  {
- "id": "phase-3a-writing-strategy",
- "title": "Phase 3A: Plan Writing Strategy",
- "prompt": "**PLAN: Writing Strategy**\n\n**WRITING APPROACH PLANNING (Autonomous):**\n\nBefore writing, plan HOW I'll write systematically.\n\n**PLANNING DECISIONS:**\n\n**1. Writing Order:**\n- **Recommended:** Follow outline order (1 → 2 → 3 → 4 → 5 → 6)\n- **Rationale:** Each section builds on previous understanding\n- **Alternative:** Write reference first if it's the clearest? [NO - stick to logical flow]\n\n**2. Scope Enforcement Strategy:**\n- **After every paragraph:** Check scope compliance\n- **Boundary markers:** Use explicit \"See [Link]\" for out-of-scope items\n- **Temptation logging:** Track what I almost explained but stopped\n- **Red flags:** Watch for: explaining dependencies, analyzing out-of-scope code, diving too deep\n\n**3. Example Integration:**\n- Pull examples from ANALYSIS.md examples section\n- Test/verify each example before including\n- Format consistently (code blocks, clear explanations)\n- Show complete, runnable examples (not fragments)\n\n**4. Self-Critique Checkpoints:**\n- After each section: Run 3-question scope check\n- Midpoint (after section 3): Overall quality check\n- End: Comprehensive self-audit before validation\n\n**5. Quality Standards During Writing:**\n- Accuracy: Every claim must trace to ANALYSIS.md\n- Clarity: Write for [target audience] at [expertise level]\n- Completeness: Cover all outline points\n- Scope: Zero out-of-scope explanations\n\n**RISK MITIGATION:**\n\nIdentified writing risks and prevention:\n\n**Risk 1:** Drifting into explaining [Dependency X]\n**Prevention:** When I mention it, use template: \"This integrates with [X] via [interface]. See [X Docs].\"\n\n**Risk 2:** Going too deep in \"How It Works\" section\n**Prevention:** Stick to behavior description, not implementation minutiae. Check: \"Is this level appropriate for scope?\"\n\n**Risk 3:** Creating new examples not from analysis\n**Prevention:** Only use examples from ANALYSIS.md. 
If I need more, note it as needing more analysis.\n\n**Risk 4:** Scope creep in Usage Guide\n**Prevention:** Each pattern must use only in-scope features. Check examples against scope contract.\n\n**WRITING CHECKPOINTS:**\n- Checkpoint 1: After Section 1 (Overview) - verify foundation is solid\n- Checkpoint 2: After Section 3 (Usage Guide) - halfway check\n- Checkpoint 3: After Section 6 (Complete) - final self-audit\n\n**PLAN CONFIRMED. PROCEEDING TO CONTENT CREATION...**",
- "agentRole": "You are planning your writing approach strategically. Think about HOW you'll write to maintain quality and scope discipline. This is the PLAN step - actual writing comes in Phase 3B.",
- "guidance": [
- "Plan for scope enforcement - this is the #1 risk",
- "Identify specific risks for this subject and plan prevention",
- "Decide on checkpoint frequency for self-critique",
- "Commit to only using examples from analysis",
- "This planning prevents drift during writing"
- ],
+ "id": "phase-2-structure-and-outline",
+ "title": "Phase 2: Structure & Detailed Outline",
+ "promptBlocks": {
+ "goal": "Design the documentation structure and create a detailed outline that maps directly to the Phase 1 analysis findings.",
+ "constraints": [
+ "Every section must map to an in-scope item from Phase 1 analysis.",
+ "Out-of-scope items belong only in a 'Related Documentation' section — as references, never explanations.",
+ "Pull all content points from Phase 1 findings — do not invent new content that wasn't in the analysis.",
+ "Notes-first: record the outline in notesMarkdown, not only in OUTLINE.md."
+ ],
+ "procedure": [
+ "Step 1 — Choose structure type based on subject: Code subject: Overview → How It Works → Usage Guide → Reference → Edge Cases → Related Docs. System/concept: Overview → Components → Interactions → Integration → Examples → Related Docs. Process/workflow: Overview → Steps → Decision Points → Examples → Troubleshooting → Related Docs. Adjust sections based on what analysis actually found.",
+ "Step 2 — Create detailed outline. For each section, specify: content points sourced from Phase 1 analysis (with evidence references), examples to include (from Phase 1 examples), approximate word count target, whether this section documents in-scope items or references out-of-scope ones.",
+ "Step 3 — Scope compliance check on the outline. Review every section and confirm: Does it map to an in-scope item? Does it require explaining any out-of-scope dependency? Can the content be written entirely from Phase 1 findings?",
+ "If any section requires explaining an out-of-scope item: convert it to a reference, remove it, or surface a scope re-negotiation to the user. If any section cannot be written from Phase 1 findings: flag what additional analysis is needed and return to Phase 1."
+ ],
+ "outputRequired": {
+ "documentationStructure": "Section titles, word count targets, and content-source mapping — recorded in notesMarkdown",
+ "scopeComplianceConfirmation": "All sections confirmed to map to in-scope items",
+ "totalEstimatedWordCount": "Total word count estimate",
+ "analysisGaps": "Any gaps requiring return to Phase 1 (if none, state none)"
+ },
+ "verify": [
+ "Every section maps to an in-scope item from Phase 1 analysis",
+ "Out-of-scope items only appear in 'Related Documentation' section",
+ "All examples sourced from Phase 1 — none invented",
+ "Outline recorded in notesMarkdown"
+ ]
+ },
  "requireConfirmation": false
  },
  {
- "id": "phase-3b-content-creation",
- "title": "Phase 3B: Execute Content Creation with Continuous Scope Monitoring",
- "prompt": "**EXECUTE: Creating Documentation Content**\n\nExecuting writing strategy from Phase 3A, following detailed outline from Phase 2B...\n\n**WRITING PROCESS (Section by Section):**\n\n**For Each Section:**\n1. Draft content from outline and ANALYSIS.md\n2. Add examples from analysis (tested/verified)\n3. Check scope compliance (inline)\n4. Verify clarity for target audience\n5. Mark out-of-scope references clearly\n\n**SCOPE ENFORCEMENT (Continuous):**\n\n**After EVERY paragraph, ask yourself:**\n- ❓ Am I explaining something in-scope or out-of-scope?\n- ❓ If out-of-scope: Am I referencing (✓) or explaining (✗)?\n- ❓ Is this depth appropriate for scope definition?\n- ❓ Would a reader understand this is a boundary?\n\n**Marking Boundaries Clearly:**\n\n✅ GOOD (reference only):\n- \"This uses the CacheManager (see Cache Documentation) to store results.\"\n- \"For authentication details, see Auth Service Documentation.\"\n- \"Built on top of the standard logging framework.\"\n\n❌ BAD (explaining out-of-scope):\n- \"The CacheManager works by maintaining an in-memory LRU cache that...\"\n- \"Authentication is handled using JWT tokens which are validated by...\"\n- \"Logging uses Winston with custom transports configured to...\"\n\n**WRITING SECTIONS:**\n\n**Section 1: Overview & Context**\n[Write ~200 words covering what, why, key concepts, quick example]\n\n**Self-Check:**\n☑️ Scope compliant? [Check: no out-of-scope explanations]\n☑️ Clear for audience? [Check: terminology defined, example works]\n☑️ Accurate? [Check: claims match analysis]\n\n**Section 2: How It Works**\n[Write ~400-600 words covering mechanisms, behaviors, integration points]\n\n**Self-Check:**\n☑️ Scope compliant? [Check: dependencies referenced only, not explained]\n☑️ Clear for audience? [Check: step-by-step where helpful]\n☑️ Accurate? 
[Check: behaviors verified in analysis]\n\n**Section 3: Usage Guide**\n[Write ~500-700 words with getting started + 5-7 examples]\n\n**Self-Check:**\n☑️ Scope compliant? [Check: examples use in-scope features only]\n☑️ Clear for audience? [Check: examples are complete and runnable]\n☑️ Accurate? [Check: examples tested/verified]\n\n**Section 4: Reference**\n[Write ~300-500 words with detailed reference material]\n\n**Self-Check:**\n☑️ Scope compliant? [Check: only in-scope items documented]\n☑️ Clear for audience? [Check: comprehensive and well-organized]\n☑️ Accurate? [Check: parameters, types, behaviors correct]\n\n**Section 5: Edge Cases & Troubleshooting**\n[Write ~200-300 words with gotchas and solutions]\n\n**Self-Check:**\n☑️ Scope compliant? [Check: edge cases are in-scope ones]\n☑️ Clear for audience? [Check: actionable solutions provided]\n☑️ Accurate? [Check: issues and solutions verified]\n\n**Section 6: Related Documentation**\n[Write ~100 words with links and brief context]\n\n**Self-Check:**\n☑️ Scope compliant? [Check: just references, no explanations]\n☑️ Helpful? [Check: clear what each link covers]\n\n---\n\n**DRAFT COMPLETE - FINAL SELF-CRITIQUE:**\n\n**Scope Compliance Audit (Read through line by line):**\n- Lines documenting in-scope items: [count/percentage]\n- Lines referencing out-of-scope items: [count - should be minimal]\n- Lines explaining out-of-scope items: [count - should be ZERO]\n- Violations found: [list any violations found]\n\n**If violations found:** Fix them now before validation\n\n**Quality Self-Assessment:**\n- Accuracy (claims match analysis): __/10\n- Clarity (audience will understand): __/10\n- Completeness (all in-scope items covered): __/10\n- Scope compliance (no violations): __/10\n\n**Issues Found During Writing:**\n- [List any issues discovered and how you addressed them]\n\n**Ready for Validation:** [YES if all self-checks passed]\n\n**PROCEEDING TO ADVERSARIAL VALIDATION...**",
- "agentRole": "You are writing documentation with continuous scope monitoring. This is autonomous - no user checkpoint unless you find issues. Write clearly for target audience, maintain scope boundaries ruthlessly, and conduct honest self-critique after each section.",
- "guidance": [
- "Write section by section following detailed outline",
- "Pull content from ANALYSIS.md - don't invent new information",
- "Check scope after every paragraph - it's that important",
- "Mark boundaries explicitly so readers know what's in vs out",
- "Examples must be complete, tested, and in-scope",
- "Self-critique after each section - catch issues early",
- "Fix violations immediately, don't leave them for validation",
- "Be honest in quality self-assessment",
- "If you discover critical inaccuracies, that triggers user checkpoint"
- ],
+ "id": "phase-3-content-creation",
+ "title": "Phase 3: Content Creation",
+ "promptBlocks": {
+ "goal": "Write the documentation section by section, enforcing scope boundaries continuously and logging all boundary decisions.",
+ "constraints": [
+ "Content must come from Phase 1 analysis and Phase 2 outline — do not introduce new claims or examples that weren't in the analysis.",
+ "After every paragraph: check scope compliance. Am I explaining something in scope or referencing something out of scope?",
+ "Mark boundaries clearly in the text so readers know what's in vs out.",
+ "Notes-first: the documentation file is the primary artifact; ANALYSIS.md or OUTLINE.md sidecar files are optional."
+ ],
+ "procedure": [
+ "Writing approach — section by section. For each section from the Phase 2 outline: (1) Draft content from outline and Phase 1 findings. (2) Add examples from analysis (complete, not fragments). (3) Check scope compliance inline. (4) Write boundary markers for out-of-scope references. (5) Log any temptation to explain an out-of-scope item.",
+ "Scope boundary wording — use these patterns. Good (reference only): 'This uses the CacheManager (see Cache Documentation) to store results.' Good: 'For authentication details, see Auth Service Documentation.' Bad (explaining out-of-scope): 'The CacheManager works by maintaining an in-memory LRU cache that...'",
+ "After writing all sections — final scope audit. Read through the complete draft and count: lines documenting in-scope items (vast majority), lines referencing out-of-scope items (minimal), lines explaining out-of-scope items (must be zero). Fix any violations before advancing to Phase 4.",
+ "If you discover a critical inaccuracy (a claim you cannot verify from analysis), flag it for user confirmation in Phase 4 — don't silently remove it.",
+ "Write the documentation to a file. Suggested filename: [SUBJECT]_Documentation.md or similar. Record the path in mainDocumentationFile."
+ ],
+ "outputRequired": {
+ "mainDocumentationFile": "Path to documentation file written to disk",
+ "temptationLog": "All out-of-scope encounters logged with what was encountered and why it was stopped",
+ "scopeAuditResult": "Violation count (should be 0 before advancing)"
+ },
+ "verify": [
+ "Documentation file written to disk",
+ "mainDocumentationFile path recorded",
+ "Zero unresolved scope violations in the final draft",
+ "Temptation log present in notesMarkdown",
+ "All examples verifiable from Phase 1 analysis"
+ ]
+ },
  "requireConfirmation": false
  },
  {
- "id": "phase-4-adversarial-validation",
- "title": "Phase 4: Adversarial Validation (CRITICAL CHECKPOINT)",
- "prompt": "**ADVERSARIAL VALIDATION - CRITICAL CHECKPOINT**\n\nI've completed the documentation. Now I'll be my harshest critic.\n\n**VALIDATION PROCESS:**\n\n**1. SCOPE COMPLIANCE AUDIT (Line by Line):**\n\nGo through every paragraph:\n- Documents in-scope items\n- References (not explains) out-of-scope items\n- Explains out-of-scope items (VIOLATION)\n- Analyzes dependencies in detail (VIOLATION)\n\n**Scope Compliance Result: __/10**\n\n**Violations Found:**\n[If any: List specific violations with line/section references]\n[If violations: Fix them and re-audit]\n\n**Must be ≥9/10 to proceed**\n\n---\n\n**2. COMPLETENESS CHECK:**\n\nAgainst SCOPE_CONTRACT.md, verify:\n- Every in-scope item documented\n- All promised examples included\n- All behaviors explained\n- All integration points covered\n- ✗ Missing: [any in-scope items not documented]\n\n**Completeness Result: __/10**\n\n**Gaps Found:**\n[If any: List what's missing and why]\n[If gaps: Add missing content and re-check]\n\n**Must be ≥9/10 to proceed**\n\n---\n\n**3. ACCURACY VALIDATION:**\n\n**Technical Claims Verification:**\n- All behaviors described match analysis? ✓/✗\n- All code examples tested/verified? ✓/✗\n- All API signatures correct? ✓/✗\n- All integration descriptions accurate? ✓/✗\n\n**Uncertainties:**\n[List any claims you're not 100% certain about]\n\n**Accuracy Result: __/10**\n\n**Issues Found:**\n[If any: List specific accuracy concerns]\n\n**Critical Uncertainties Requiring User Confirmation:**\n[If any: List specific questions that block 9+/10 confidence]\n\n**Must be ≥9/10 to proceed** (may require user input)\n\n---\n\n**4. CLARITY ASSESSMENT:**\n\n**For Target Audience [audience type]:**\n- Can they understand the overview? ✓/✗\n- Can they follow the mechanisms explanation? ✓/✗\n- Can they use the examples successfully? ✓/✗\n- Can they find reference information? ✓/✗\n- Can they solve problems with troubleshooting section? 
✓/✗\n\n**Clarity Result: __/10**\n\n**Improvements Needed:**\n[If any: List sections that could be clearer]\n\n**Must be ≥8/10 to proceed** (lower threshold - clarity is more subjective)\n\n---\n\n**5. ADVERSARIAL REVIEW QUESTIONS:**\n\n**Challenge yourself:**\n\n1. **Would a peer reviewer find scope violations I missed?**\n [Honest assessment]\n\n2. **Is there anything in-scope that's missing?**\n → [Check completeness honestly]\n\n3. **Would I bet my reputation on every technical claim?**\n → [Identify any shaky claims]\n\n4. **Could the target audience actually use this successfully?**\n → [Put yourself in their shoes]\n\n5. **Will this become stale/incorrect quickly?**\n → [Maintainability check]\n\n---\n\n**VALIDATION SUMMARY:**\n\n**Metrics:**\n- Scope Compliance: __/10 (need ≥9)\n- Completeness: __/10 (need ≥9)\n- Accuracy: __/10 (need ≥9)\n- Clarity: __/10 (need ≥8)\n- **Overall: __/10**\n\n**Status:**\n[PASS / FAIL with specifics]\n\n---\n\n**IF ALL METRICS MEET THRESHOLDS AND NO CRITICAL ISSUES:**\n\n Validation passed. Minor improvements made:\n[List any minor issues fixed]\n\nReady for finalization.\n\n---\n\n**IF CRITICAL ISSUES FOUND (triggers user checkpoint):**\n\n**CRITICAL CHECKPOINT REQUIRED**\n\n**Issues Requiring Your Input:**\n\n**1. Accuracy Confirmation Needed:**\n- Question: [Specific technical question where you're uncertain]\n- Context: [Why this matters for documentation]\n- Current documentation says: [what you wrote]\n- Need your confirmation: [specific yes/no or factual answer]\n\n**2. Scope Violations Found:**\n- Violation: [Specific violation]\n- Location: [Section/paragraph]\n- Fix options:\n a) Remove explanation, convert to reference only\n b) Adjust scope to include this (expand scope contract)\n- Recommendation: [Your suggestion]\n\n**3. 
Quality Below Threshold:**\n- Metric: [Which metric failed]\n- Score: [X/10] (need [threshold])\n- Issues: [Specific problems]\n- Fix plan: [How you'll address it]\n\n**Your Input Required:**\n[Specific questions or decisions needed]\n\n**Once you respond, I'll implement fixes and re-validate.**\n\n---\n\n**IF MINOR ISSUES (below threshold but fixable):**\n\nI found issues preventing validation pass:\n- [List specific issues]\n\nFixing these issues now...\n[Make fixes]\n\nRe-validating...\n[Run validation again until passing]\n\n**After Passing Validation: Proceed to finalization**",
- "agentRole": "You are conducting rigorous adversarial validation. Be your harshest critic. This is a CRITICAL CHECKPOINT if you find issues requiring user input (accuracy questions, scope violations, quality failures). Otherwise, fix issues and iterate until passing all thresholds.",
- "guidance": [
- "Be genuinely adversarial - actively look for problems",
- "Scope compliance must be ≥9/10 - this is non-negotiable",
- "Completeness must be ≥9/10 - check against scope contract systematically",
- "Accuracy must be ≥9/10 - any uncertainty triggers user checkpoint",
- "Clarity must be ≥8/10 - lower threshold but still important",
- "If below threshold, fix and re-validate - no proceeding with poor quality",
- "Critical issues (accuracy uncertainty, scope violations, quality failures) trigger user checkpoint",
- "Minor issues should be fixed autonomously and re-validated",
- "Be honest in self-assessment - this is quality assurance",
- "Document all issues found and fixes made"
- ],
152
+ "id": "phase-4-validation-and-delivery",
153
+ "title": "Phase 4: Adversarial Validation & Delivery",
154
+ "prompt": "Now be the harshest critic before delivering.\n\n**VALIDATION RUBRIC: score all 4 dimensions with evidence:**\n\n**1. Scope Compliance**\n- PASS: zero unexplained out-of-scope items in the documentation\n- PARTIAL: minor violations found and fixed autonomously\n- FAIL: violations found that cannot be fixed without user input\n- Evidence: [sentence + violation count]\n- Action: FAIL triggers user input; PARTIAL acceptable with disclosure\n\n**2. Completeness**\n- PASS: all in-scope items from the scope contract are documented\n- PARTIAL: minor gaps (non-critical items missing)\n- FAIL: significant in-scope items missing\n- Evidence: [sentence + gap count]\n- Action: FAIL triggers autonomous content addition, then re-validate; PARTIAL acceptable with disclosure\n\n**3. Accuracy**\n- PASS: all technical claims verifiable from Phase 1 analysis\n- PARTIAL: minor uncertainties clearly marked in documentation\n- FAIL: key claims uncertain; external confirmation required\n- Evidence: [sentence + uncertain claim count]\n- Action: FAIL triggers user checkpoint with specific questions; PARTIAL acceptable with disclosure\n\n**4. Clarity**\n- PASS: target audience can achieve the success criteria using this documentation\n- PARTIAL: some sections need clearer wording\n- FAIL: significant clarity gaps preventing successful use\n- Evidence: [sentence]\n- Action: FAIL triggers autonomous rewrite, then re-validate; PARTIAL acceptable with disclosure\n\n---\n\n**Gate rules:**\n- scopeCompliance and completeness must both reach PASS for delivery\n- accuracy FAIL triggers user checkpoint; list specific questions below\n- clarity FAIL triggers autonomous rewrite\n- accuracy PARTIAL and clarity PARTIAL proceed with disclosure\n\n**If all gates pass or are acceptable for delivery:**\n\nCreate SCOPE_MAP.md:\n```\n# Scope Map: [Subject]\n\n## Documented Here (In Scope)\n- [Item]: [brief description] see [Section]\n\n## Referenced Only (Out of Scope)\n- [Item]: [brief description] [link if available]\n\n## Integration Points\n- [Where this connects to other systems / interface contracts]\n\n## Related Documentation\n- [Links with descriptions]\n```\n\nOptionally create MAINTENANCE_NOTES.md with: when to update this documentation, which sections to review for accuracy, scope boundary reminders (what stays referenced-only).\n\n**DELIVERY SUMMARY:**\n\n- Main documentation: [path] (~[N] words)\n- Scope map: SCOPE_MAP.md\n- Validation results: scopeCompliance=[X], completeness=[X], accuracy=[X], clarity=[X]\n- Scope discipline: [N] temptations stopped, 0 violations in final delivery\n- Audience: [target audience]\n- Success criteria: Reader can [outcome]\n\n---\n\n**IF accuracy FAIL (user checkpoint required):**\n\nBefore I can deliver, I need your input on accuracy:\n\n[List specific technical claims that need confirmation, with context for each]\n\nOnce you confirm, I'll finalize and deliver.",
201
155
  "requireConfirmation": true,
202
156
  "validationCriteria": [
203
157
  {
204
158
  "type": "contains",
205
159
  "value": "Scope Compliance",
206
- "message": "Must include scope compliance audit with rating"
160
+ "message": "Must include scope compliance dimension with PASS/PARTIAL/FAIL rating"
207
161
  },
208
162
  {
209
163
  "type": "contains",
210
164
  "value": "Completeness",
211
- "message": "Must include completeness check with rating"
165
+ "message": "Must include completeness dimension with PASS/PARTIAL/FAIL rating"
212
166
  },
213
167
  {
214
168
  "type": "contains",
215
169
  "value": "Accuracy",
216
- "message": "Must include accuracy validation with rating"
170
+ "message": "Must include accuracy dimension with PASS/PARTIAL/FAIL rating"
217
171
  },
218
172
  {
219
173
  "type": "contains",
220
174
  "value": "Clarity",
221
- "message": "Must include clarity assessment with rating"
222
- },
223
- {
224
- "type": "regex",
225
- "pattern": "(9|10)/10",
226
- "message": "Must show at least one metric meeting threshold (9+/10)"
175
+ "message": "Must include clarity dimension with PASS/PARTIAL/FAIL rating"
227
176
  }
228
177
  ],
229
178
  "hasValidation": true
230
- },
231
- {
232
- "id": "phase-5-finalization",
233
- "title": "Phase 5: Finalization & Delivery",
234
- "prompt": "**FINALIZATION: Preparing Final Deliverable**\n\nValidation passed. Finalizing documentation...\n\n**FINAL STEPS:**\n\n**1. Incorporate Validation Feedback**\n[If user provided any feedback in Phase 4, incorporate it now]\n- Accuracy confirmations: [Applied]\n- Scope adjustments: [Applied]\n- Other feedback: [Applied]\n\n**2. Polish and Format**\n- Consistent heading levels\n- Code formatting (syntax highlighting hints)\n- Links working (internal and external)\n- Table of contents (if appropriate)\n- Consistent terminology throughout\n\n**3. Add Metadata**\n\nAt top of documentation:\n```\n---\nSubject: [What this documents]\nScope: [Brief scope description]\nAudience: [Target readers]\nLast Updated: [Date]\nStatus: Complete\n---\n```\n\n**4. Create Scope Map**\n\nCreate SCOPE_MAP.md:\n```markdown\n# Scope Map: [Subject] Documentation\n\n## What's Documented (In Scope)\n- [List in-scope items with brief descriptions]\n- [Link to sections in main doc]\n\n## What's Referenced (Out of Scope)\n- [Dependency X]: [Brief description] → [Link to its docs]\n- [Feature Y]: [Brief description] → [Link to its docs]\n\n## Integration Points\n- [Where this connects to other systems]\n- [Interface contracts documented]\n\n## How This Fits in the Larger System\n[Context diagram or description]\n\n## Related Documentation\n- [Links to related docs with descriptions]\n```\n\n**5. Create Maintenance Guide**\n\nCreate MAINTENANCE_NOTES.md:\n```markdown\n# Maintenance Guide: [Subject] Documentation\n\n## When to Update This Documentation\n- When [in-scope item X] changes\n- When new [functionality Y] is added to scope\n- When [integration point Z] contract changes\n\n## What to Check\n- [Specific sections to review for accuracy]\n- [Examples to re-test]\n- [Integration points to verify]\n\n## Scope Boundaries\n- Remember: [Dependency A] is out of scope - reference only\n- Remember: [Feature B] has its own docs - don't duplicate\n\n## Key Dependencies\n- [Dependency]: [Why it matters] → [Where documented]\n\n## Ownership\n- Primary: [Team/person if known]\n- Scope: [What's covered]\n```\n\n---\n\n**DELIVERY PACKAGE COMPLETE**\n\n**📦 Final Deliverable:**\n\n**1. Main Documentation: [SUBJECT]_Documentation.md** (~[word count] words)\n Comprehensive guide covering:\n - Overview and context\n - How it works (mechanisms and behaviors)\n - Usage guide with [N] examples\n - Complete reference\n - Edge cases and troubleshooting\n - Related documentation links\n\n**2. Scope Map: SCOPE_MAP.md**\n Clear visualization of:\n - What's in scope (documented here)\n - What's out of scope (referenced only)\n - How this fits in the larger system\n\n**3. Maintenance Guide: MAINTENANCE_NOTES.md**\n Guidance for keeping docs current:\n - When to update\n - What to check\n - Scope reminders\n\n**4. Original Artifacts (for reference):**\n - SCOPE_CONTRACT.md (the approved boundary)\n - ANALYSIS.md (the investigation findings)\n\n---\n\n**📊 QUALITY METRICS ACHIEVED:**\n\n- ✅ Scope Compliance: [X]/10 (≥9 required)\n- ✅ Completeness: [X]/10 (≥9 required)\n- ✅ Accuracy: [X]/10 (≥9 required)\n- ✅ Clarity: [X]/10 (≥8 required)\n- ✅ **Overall: [X]/10**\n\n**📈 SCOPE DISCIPLINE:**\n- Temptations to drift: [N] (all stopped)\n- Scope violations: 0\n- Out-of-scope items referenced: [N]\n- Out-of-scope items explained: 0 ✓\n\n---\n\n**✅ DOCUMENTATION READY TO USE**\n\n**Subject:** [What was documented]\n**Scope:** [Brief description of boundaries]\n**Quality:** [Overall score]/10\n**Maintainability:** Scope boundaries clear, maintenance guide provided\n\n**Next Steps (if any):**\n[Any identified follow-up documentation needs]\n[Any related features that might need separate docs]\n\n**Thank you for using Scoped Documentation Workflow!**",
235
- "agentRole": "You are finalizing and delivering the documentation package. This is autonomous - polish, package, and deliver with comprehensive summary. Include all artifacts and clear quality metrics.",
236
- "guidance": [
237
- "Incorporate any Phase 4 feedback first",
238
- "Polish formatting and consistency",
239
- "Add metadata for maintainability",
240
- "Create scope map for clarity about boundaries",
241
- "Create maintenance guide for future updates",
242
- "Include all artifacts (scope contract, analysis) for reference",
243
- "Provide clear quality metrics summary",
244
- "Celebrate scope discipline - highlight temptations stopped",
245
- "Suggest next steps if relevant",
246
- "Make deliverable professional and complete"
247
- ],
248
- "requireConfirmation": false
249
179
  }
250
180
  ]
251
181
  }
252
-
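The `validationCriteria` changes in this step drop the old numeric `(9|10)/10` regex gate in favor of per-dimension `contains` checks. As a minimal sketch of how `contains` and `regex` criteria like these might be evaluated against a step's output (helper and type names here are hypothetical, not workrail's actual API):

```typescript
// Hypothetical evaluator for criteria shaped like the entries in this diff.
// A discriminated union keeps the two criterion kinds type-safe.
type Criterion =
  | { type: "contains"; value: string; message: string }
  | { type: "regex"; pattern: string; message: string };

// Returns the messages of every criterion the output fails to satisfy.
function validateOutput(output: string, criteria: Criterion[]): string[] {
  const failures: string[] = [];
  for (const c of criteria) {
    const ok =
      c.type === "contains"
        ? output.includes(c.value)
        : new RegExp(c.pattern).test(output);
    if (!ok) failures.push(c.message);
  }
  return failures;
}

// Mirrors two of the new criteria above: plain substring checks per dimension.
const criteria: Criterion[] = [
  {
    type: "contains",
    value: "Scope Compliance",
    message: "Must include scope compliance dimension with PASS/PARTIAL/FAIL rating",
  },
  {
    type: "contains",
    value: "Clarity",
    message: "Must include clarity dimension with PASS/PARTIAL/FAIL rating",
  },
];

console.log(validateOutput("Scope Compliance: PASS\nClarity: PARTIAL", criteria));
```

One plausible motivation for the change: `contains` checks on dimension names survive any PASS/PARTIAL/FAIL wording, whereas the removed `(9|10)/10` regex tied validation to the old numeric scoring scheme.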