@exaudeus/workrail 3.15.0 → 3.16.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (139)
  1. package/dist/application/services/workflow-service.d.ts +2 -0
  2. package/dist/application/services/workflow-service.js +3 -0
  3. package/dist/console/assets/index-BE5PAgPO.js +28 -0
  4. package/dist/console/assets/index-BZNM03t1.css +1 -0
  5. package/dist/console/index.html +2 -2
  6. package/dist/env-flags.d.ts +1 -0
  7. package/dist/env-flags.js +4 -0
  8. package/dist/infrastructure/session/HttpServer.d.ts +3 -3
  9. package/dist/infrastructure/session/HttpServer.js +68 -74
  10. package/dist/infrastructure/storage/caching-workflow-storage.d.ts +2 -0
  11. package/dist/infrastructure/storage/caching-workflow-storage.js +15 -6
  12. package/dist/infrastructure/storage/file-workflow-storage.js +3 -4
  13. package/dist/infrastructure/storage/schema-validating-workflow-storage.js +9 -8
  14. package/dist/manifest.json +257 -193
  15. package/dist/mcp/assert-output.d.ts +37 -0
  16. package/dist/mcp/assert-output.js +52 -0
  17. package/dist/mcp/boundary-coercion.d.ts +1 -0
  18. package/dist/mcp/boundary-coercion.js +44 -0
  19. package/dist/mcp/dev-mode.d.ts +1 -0
  20. package/dist/mcp/dev-mode.js +4 -0
  21. package/dist/mcp/handler-factory.js +12 -9
  22. package/dist/mcp/handlers/session.js +8 -9
  23. package/dist/mcp/handlers/v2-advance-core/event-builders.d.ts +2 -0
  24. package/dist/mcp/handlers/v2-advance-core/event-builders.js +6 -6
  25. package/dist/mcp/handlers/v2-advance-core/index.d.ts +2 -0
  26. package/dist/mcp/handlers/v2-advance-core/index.js +4 -3
  27. package/dist/mcp/handlers/v2-advance-core/input-validation.d.ts +2 -0
  28. package/dist/mcp/handlers/v2-advance-core/input-validation.js +32 -9
  29. package/dist/mcp/handlers/v2-advance-core/outcome-blocked.d.ts +2 -0
  30. package/dist/mcp/handlers/v2-advance-core/outcome-blocked.js +1 -1
  31. package/dist/mcp/handlers/v2-advance-core/outcome-success.d.ts +2 -0
  32. package/dist/mcp/handlers/v2-advance-core/outcome-success.js +1 -1
  33. package/dist/mcp/handlers/v2-checkpoint.d.ts +1 -1
  34. package/dist/mcp/handlers/v2-checkpoint.js +5 -6
  35. package/dist/mcp/handlers/v2-execution/advance.d.ts +4 -2
  36. package/dist/mcp/handlers/v2-execution/advance.js +5 -7
  37. package/dist/mcp/handlers/v2-execution/continue-advance.js +56 -26
  38. package/dist/mcp/handlers/v2-execution/continue-rehydrate.d.ts +1 -1
  39. package/dist/mcp/handlers/v2-execution/continue-rehydrate.js +9 -9
  40. package/dist/mcp/handlers/v2-execution/replay.d.ts +6 -4
  41. package/dist/mcp/handlers/v2-execution/replay.js +47 -30
  42. package/dist/mcp/handlers/v2-execution/start.d.ts +2 -3
  43. package/dist/mcp/handlers/v2-execution/start.js +11 -11
  44. package/dist/mcp/handlers/v2-execution/workflow-object-cache.d.ts +5 -0
  45. package/dist/mcp/handlers/v2-execution/workflow-object-cache.js +19 -0
  46. package/dist/mcp/handlers/v2-execution-helpers.d.ts +1 -0
  47. package/dist/mcp/handlers/v2-execution-helpers.js +23 -7
  48. package/dist/mcp/handlers/v2-resume.d.ts +1 -1
  49. package/dist/mcp/handlers/v2-resume.js +3 -4
  50. package/dist/mcp/handlers/v2-state-conversion.js +5 -1
  51. package/dist/mcp/handlers/v2-workflow.d.ts +80 -0
  52. package/dist/mcp/handlers/v2-workflow.js +36 -21
  53. package/dist/mcp/handlers/workflow.d.ts +2 -5
  54. package/dist/mcp/handlers/workflow.js +15 -12
  55. package/dist/mcp/output-schemas.d.ts +20 -27
  56. package/dist/mcp/output-schemas.js +5 -7
  57. package/dist/mcp/server.js +22 -4
  58. package/dist/mcp/tool-call-timing.d.ts +24 -0
  59. package/dist/mcp/tool-call-timing.js +85 -0
  60. package/dist/mcp/transports/http-entry.js +3 -2
  61. package/dist/mcp/transports/http-listener.d.ts +1 -0
  62. package/dist/mcp/transports/http-listener.js +25 -0
  63. package/dist/mcp/transports/shutdown-hooks.d.ts +4 -1
  64. package/dist/mcp/transports/shutdown-hooks.js +3 -2
  65. package/dist/mcp/transports/stdio-entry.js +6 -28
  66. package/dist/mcp/v2-response-formatter.js +2 -4
  67. package/dist/mcp/validation/schema-introspection.d.ts +1 -0
  68. package/dist/mcp/validation/schema-introspection.js +15 -5
  69. package/dist/mcp/validation/suggestion-generator.js +2 -2
  70. package/dist/runtime/adapters/node-process-signals.d.ts +1 -0
  71. package/dist/runtime/adapters/node-process-signals.js +5 -0
  72. package/dist/runtime/adapters/noop-process-signals.d.ts +1 -0
  73. package/dist/runtime/adapters/noop-process-signals.js +2 -0
  74. package/dist/runtime/ports/process-signals.d.ts +1 -0
  75. package/dist/types/workflow-definition.d.ts +2 -0
  76. package/dist/types/workflow.d.ts +3 -0
  77. package/dist/types/workflow.js +35 -26
  78. package/dist/v2/durable-core/domain/context-template-resolver.js +2 -2
  79. package/dist/v2/durable-core/domain/function-definition-expander.js +2 -17
  80. package/dist/v2/durable-core/domain/prompt-renderer.d.ts +1 -0
  81. package/dist/v2/durable-core/domain/prompt-renderer.js +23 -18
  82. package/dist/v2/durable-core/domain/recap-recovery.js +23 -16
  83. package/dist/v2/durable-core/domain/retrieval-contract.js +13 -7
  84. package/dist/v2/durable-core/session-index.d.ts +22 -0
  85. package/dist/v2/durable-core/session-index.js +58 -0
  86. package/dist/v2/durable-core/sorted-event-log.d.ts +6 -0
  87. package/dist/v2/durable-core/sorted-event-log.js +15 -0
  88. package/dist/v2/infra/local/fs/index.js +8 -8
  89. package/dist/v2/infra/local/session-store/index.d.ts +1 -1
  90. package/dist/v2/infra/local/session-store/index.js +71 -61
  91. package/dist/v2/infra/local/session-summary-provider/index.js +9 -4
  92. package/dist/v2/infra/local/snapshot-store/index.js +2 -1
  93. package/dist/v2/ports/session-event-log-store.port.d.ts +1 -1
  94. package/dist/v2/projections/assessment-consequences.d.ts +2 -1
  95. package/dist/v2/projections/assessment-consequences.js +0 -5
  96. package/dist/v2/projections/assessments.d.ts +2 -1
  97. package/dist/v2/projections/assessments.js +2 -4
  98. package/dist/v2/projections/gaps.d.ts +2 -1
  99. package/dist/v2/projections/gaps.js +0 -5
  100. package/dist/v2/projections/preferences.d.ts +2 -1
  101. package/dist/v2/projections/preferences.js +0 -5
  102. package/dist/v2/projections/run-context.d.ts +2 -2
  103. package/dist/v2/projections/run-context.js +0 -5
  104. package/dist/v2/projections/run-dag.js +7 -1
  105. package/dist/v2/projections/run-execution-trace.d.ts +8 -0
  106. package/dist/v2/projections/run-execution-trace.js +124 -0
  107. package/dist/v2/projections/run-status-signals.d.ts +2 -2
  108. package/dist/v2/usecases/console-routes.d.ts +3 -1
  109. package/dist/v2/usecases/console-routes.js +123 -25
  110. package/dist/v2/usecases/console-service.d.ts +1 -0
  111. package/dist/v2/usecases/console-service.js +83 -25
  112. package/dist/v2/usecases/console-types.d.ts +53 -0
  113. package/dist/v2/usecases/worktree-service.js +32 -1
  114. package/package.json +6 -5
  115. package/spec/workflow.schema.json +18 -0
  116. package/workflows/adaptive-ticket-creation.json +23 -16
  117. package/workflows/architecture-scalability-audit.json +29 -22
  118. package/workflows/bug-investigation.agentic.v2.json +7 -0
  119. package/workflows/coding-task-workflow-agentic.json +7 -0
  120. package/workflows/coding-task-workflow-agentic.lean.v2.json +16 -8
  121. package/workflows/coding-task-workflow-agentic.v2.json +7 -0
  122. package/workflows/cross-platform-code-conversion.v2.json +7 -0
  123. package/workflows/document-creation-workflow.json +15 -8
  124. package/workflows/documentation-update-workflow.json +15 -8
  125. package/workflows/intelligent-test-case-generation.json +7 -0
  126. package/workflows/learner-centered-course-workflow.json +9 -2
  127. package/workflows/mr-review-workflow.agentic.v2.json +7 -0
  128. package/workflows/personal-learning-materials-creation-branched.json +15 -8
  129. package/workflows/presentation-creation.json +12 -5
  130. package/workflows/production-readiness-audit.json +7 -0
  131. package/workflows/relocation-workflow-us.json +39 -32
  132. package/workflows/scoped-documentation-workflow.json +33 -26
  133. package/workflows/ui-ux-design-workflow.json +7 -0
  134. package/workflows/workflow-diagnose-environment.json +6 -0
  135. package/workflows/workflow-for-workflows.json +7 -0
  136. package/workflows/workflow-for-workflows.v2.json +23 -11
  137. package/workflows/wr.discovery.json +8 -1
  138. package/dist/console/assets/index-BZYIjrzJ.js +0 -28
  139. package/dist/console/assets/index-OLCKbDdm.css +0 -1
@@ -3,6 +3,13 @@
  "name": "Agentic Task Dev Workflow (v2 \u2022 Notes-First \u2022 WorkRail Executor)",
  "version": "2.0.0",
  "description": "Use this to implement a software feature or task. Follows a plan-then-execute approach with architecture decisions, invariant tracking, and final verification.",
+ "about": "## Agentic Coding Task Workflow\n\nThis workflow structures the full lifecycle of a software implementation task: from understanding and classifying the work, through architecture decisions and incremental implementation, to final verification and handoff.\n\n### What it does\n\nThe workflow guides an AI agent through a disciplined plan-then-execute process. It begins by analyzing the task to determine complexity, risk, and the right level of rigor (QUICK, STANDARD, or THOROUGH). For non-trivial tasks, it then gathers codebase context, surfaces invariants and non-goals, generates competing design candidates, and selects an approach before writing a single line of code. Implementation proceeds slice by slice, with built-in verification gates after each slice. A final integration verification pass confirms acceptance criteria are met before handoff.\n\n### When to use it\n\nUse this workflow whenever you are implementing a feature, fixing a non-trivial bug, or making an architectural change in a real codebase. It is especially valuable when:\n- The task touches multiple files or systems\n- There is meaningful risk of regressions or invariant violations\n- You want the agent to surface trade-offs and commit to a reasoned design decision rather than guessing\n- You need a resumable, auditable record of what was decided and why\n\nFor quick one-liner fixes or very small changes, the workflow includes a fast path that skips heavyweight planning.\n\n### What it produces\n\n- An `implementation_plan.md` artifact covering the selected approach, vertical slices, test design, and philosophy alignment\n- A `spec.md` for large or high-risk tasks, capturing observable behavior and acceptance criteria\n- Step-level notes in WorkRail that serve as a durable execution log\n- A PR-ready handoff summary with acceptance criteria status, invariant proofs, and follow-up tickets\n\n### How to get good results\n\n- Provide a clear task description and at least partial acceptance criteria before starting\n- If you have coding philosophy or project conventions configured in session rules or Memory MCP, the workflow will apply them automatically as a design lens\n- Let the workflow classify complexity and rigor itself; override only if the classification is clearly wrong\n- For large or high-risk tasks, review the architecture decision step before implementation begins",
+ "examples": [
+ "Implement JWT refresh token rotation in the auth service",
+ "Fix the race condition in the cache invalidation path when concurrent writes occur",
+ "Refactor the payment flow to use a Result type instead of throwing exceptions",
+ "Add pagination support to the messaging inbox API endpoint"
+ ],
  "recommendedPreferences": {
  "recommendedAutonomy": "guided",
  "recommendedRiskPolicy": "conservative"
@@ -3,6 +3,13 @@
  "name": "Cross-Platform Code Conversion",
  "version": "0.1.0",
  "description": "Use this to convert code from one platform to another (e.g. Android to iOS, iOS to Web). Triages files by difficulty, parallelizes easy translations, and handles platform-specific design decisions.",
+ "about": "## Cross-Platform Code Conversion Workflow\n\nThis workflow guides an AI agent through converting code from one platform to another - for example, Android (Kotlin) to iOS (Swift), iOS to Web (TypeScript/React), or any similar migration. It handles everything from scoping and analysis through idiomatic conversion, build verification, and final handoff.\n\n### What it does\n\nThe workflow starts by scoping the migration and classifying its complexity (Small, Medium, or Large) and adaptation depth (low, moderate, or high). It then analyzes the source architecture to understand patterns, dependencies, concurrency models, and semantic contracts. Files are triaged into three buckets: mechanical translations delegated to subagents in parallel (Bucket A), library substitutions (Bucket B), and platform-specific code needing design decisions (Bucket C). For high-adaptation migrations, the workflow runs a full design generation phase to choose an idiomatic target-platform architecture before any code is written. Implementation proceeds batch by batch, with drift detection after each batch to catch files that turn out harder than classified. A final build-and-integration loop verifies the full converted codebase before handoff.\n\n### When to use it\n\nUse this workflow when migrating a module, feature, or full component from one platform to another. It is especially valuable when:\n- The source and target platforms have meaningfully different idioms (e.g., Kotlin coroutines vs Swift async/await, Hilt vs Swinject)\n- You want parallel delegation of mechanical work while keeping design-sensitive boundaries with the main agent\n- Semantic contracts (lifecycle, threading, cancellation, error handling) must be preserved across the migration\n- The target repo has existing architectural patterns the migrated code must fit into\n\nFor very small, straightforward file-by-file translations, the workflow includes a fast path that skips planning and triage.\n\n### What it produces\n\n- A triage matrix classifying every file into a conversion bucket\n- A semantic contract inventory for non-trivial migration boundaries\n- A target integration analysis mapping boundaries to their destination repo seams\n- Converted source files in the target platform's idioms\n- A passing build or typecheck on the full converted output\n- A handoff summary covering adaptation decisions, known gaps, and items needing manual review\n\n### How to get good results\n\n- Specify the exact scope of the migration - which files, modules, or features to convert\n- If the target repo is not in the same workspace, point the agent to it explicitly or configure the source-to-target path mapping\n- Review the triage and semantic contract inventory steps before conversion begins, especially for high-adaptation migrations\n- Flag any invariants that must survive the migration (API contracts, behavioral guarantees, threading assumptions)",
+ "examples": [
+ "Convert the Android messaging inbox feature from Kotlin/Coroutines to iOS Swift/Combine",
+ "Migrate the Android authentication module (Hilt + ViewModel) to a SwiftUI equivalent",
+ "Port the shared data models and repository layer from Android Kotlin to TypeScript for the web client",
+ "Convert the Android search feature UI layer from Jetpack Compose to SwiftUI"
+ ],
  "recommendedPreferences": {
  "recommendedAutonomy": "guided",
  "recommendedRiskPolicy": "conservative"
@@ -2,7 +2,14 @@
  "id": "document-creation-workflow",
  "name": "Document Creation Workflow",
  "version": "1.0.0",
- "description": "Use this to create broad or comprehensive documentation spanning multiple components or systems \u2014 project READMEs, complete API docs, user guides, or technical specifications.",
+ "description": "Use this to create broad or comprehensive documentation spanning multiple components or systems project READMEs, complete API docs, user guides, or technical specifications.",
+ "about": "## Document Creation Workflow\n\nThis workflow guides you through creating new documentation from scratch -- ranging from a simple project README to a full technical specification spanning multiple systems. It automatically calibrates depth to match the complexity of your request: simple tasks go straight to writing, while complex documentation gets a full analysis-and-planning phase first.\n\n### What it produces\n\nA complete, saved documentation file ready for use. Depending on complexity, it may also include a quality review pass covering accuracy, completeness, audience fit, usability, and style consistency.\n\n### When to use it\n\n- You need to create a **new** document (not update an existing one -- see the Documentation Update workflow for that).\n- The document spans one or more systems, components, or audiences.\n- Examples: project READMEs, API reference docs, user guides, onboarding docs, technical specifications, architecture overviews.\n\n### When NOT to use it\n\n- You want to update or refresh an existing doc -- use the Documentation Update workflow instead.\n- You need tight scope discipline for a single class or mechanism -- the Scoped Documentation workflow is better suited.\n\n### How to get good results\n\n- Be specific about the document type and intended audience upfront. The workflow probes for these, but the clearer your initial goal, the less back-and-forth.\n- If your project has existing documentation or style conventions, mention them -- the workflow will follow them.\n- For complex documentation, the workflow asks a small number of targeted questions it cannot answer from the codebase. Answer these concisely to keep momentum.",
+ "examples": [
+ "Create a README for the payments-service repo with setup, config, and deployment instructions",
+ "Write a full API reference for the new notifications SDK, including all endpoints and error codes",
+ "Create a user guide for the self-serve onboarding flow targeting non-technical customers",
+ "Write a technical specification for the proposed event-sourcing migration"
+ ],
  "preconditions": [
  "User has a clear idea of the document type and purpose.",
  "Relevant project files or information are available for reference.",
@@ -12,9 +19,9 @@
  "metaGuidance": [
  "NOTES-FIRST DURABILITY: use output.notesMarkdown as the primary durable record. Do NOT create CONTEXT.md, doc_spec.md, or content_plan.md as workflow memory.",
  "DISCOVER BEFORE ASKING: use tools to explore the project before asking clarification questions. Only ask what tools cannot answer.",
- "COMPLEXITY DRIVES BRANCHING: docComplexity=Simple uses the fast path; Standard/Complex uses the full path. If complexity changes during work, note it in notesMarkdown and adapt \u2014 no retriage ceremony needed.",
+ "COMPLEXITY DRIVES BRANCHING: docComplexity=Simple uses the fast path; Standard/Complex uses the full path. If complexity changes during work, note it in notesMarkdown and adapt no retriage ceremony needed.",
  "CONTENT-FIRST: the deliverable is the document, not planning artifacts. Keep planning proportional to scope.",
- "EVIDENCE-BASED QUALITY: each quality dimension in the review step requires a one-sentence evidence statement and a pass or needs-work verdict \u2014 not a numeric score."
+ "EVIDENCE-BASED QUALITY: each quality dimension in the review step requires a one-sentence evidence statement and a pass or needs-work verdict not a numeric score."
  ],
  "steps": [
  {
@@ -41,7 +48,7 @@
  }
  ]
  },
- "prompt": "Analyze the project to inform documentation strategy. Limit this analysis to 1500 words; prioritize documentation-relevant insights.\n\nCover:\n1. **Existing documentation landscape** \u2014 current docs, style patterns, gaps\n2. **Project architecture** \u2014 key components relevant to this document\n3. **User or developer workflows** \u2014 how documentation fits into user journeys\n4. **Technical constraints** \u2014 APIs, systems, integrations to document\n5. **Style conventions** \u2014 terminology, formatting, naming patterns to follow\n6. **Audience** \u2014 who will use this documentation and what they need to accomplish\n\nNote any complexity indicators that might warrant reclassifying `docComplexity` upward.",
+ "prompt": "Analyze the project to inform documentation strategy. Limit this analysis to 1500 words; prioritize documentation-relevant insights.\n\nCover:\n1. **Existing documentation landscape** current docs, style patterns, gaps\n2. **Project architecture** key components relevant to this document\n3. **User or developer workflows** how documentation fits into user journeys\n4. **Technical constraints** APIs, systems, integrations to document\n5. **Style conventions** terminology, formatting, naming patterns to follow\n6. **Audience** who will use this documentation and what they need to accomplish\n\nNote any complexity indicators that might warrant reclassifying `docComplexity` upward.",
  "requireConfirmation": false
  },
  {
@@ -77,7 +84,7 @@
  }
  ]
  },
- "prompt": "Create a content plan for this documentation in your notes.\n\nThe plan should cover:\n1. Document purpose and success criteria\n2. Target audience and their primary goals\n3. Section outline with one-line descriptions\n4. Writing strategy \u2014 tone, technical depth, key terminology\n5. Visual elements or code examples needed\n\nKeep the plan proportional to scope. The goal is a clear outline to execute against, not a heavyweight specification.",
+ "prompt": "Create a content plan for this documentation in your notes.\n\nThe plan should cover:\n1. Document purpose and success criteria\n2. Target audience and their primary goals\n3. Section outline with one-line descriptions\n4. Writing strategy tone, technical depth, key terminology\n5. Visual elements or code examples needed\n\nKeep the plan proportional to scope. The goal is a clear outline to execute against, not a heavyweight specification.",
  "promptFragments": [
  {
  "id": "phase-3-plan-complex",
@@ -111,7 +118,7 @@
  }
  ]
  },
- "prompt": "Review the documentation you just wrote using this rubric. For each dimension, provide a one-sentence evidence statement and a verdict of `pass` or `needs-work`.\n\n1. **Accuracy** \u2014 Does the content correctly describe the actual project or system? *(Evidence: cite one verified fact.)*\n2. **Completeness** \u2014 Does it cover all planned sections? *(Evidence: list planned vs completed sections.)*\n3. **Audience fit** \u2014 Is the technical depth right for the target reader? *(Evidence: identify one audience-appropriate choice made.)*\n4. **Usability** \u2014 Could a reader actually accomplish their goal using this doc? *(Evidence: trace one user journey through the doc.)*\n5. **Consistency** \u2014 Does it match project conventions for style, terminology, and format? *(Evidence: cite one convention followed.)*\n\nIf any dimension is `needs-work`, fix the issue immediately and re-assert the dimension as `pass` in your notes before continuing.",
+ "prompt": "Review the documentation you just wrote using this rubric. For each dimension, provide a one-sentence evidence statement and a verdict of `pass` or `needs-work`.\n\n1. **Accuracy** Does the content correctly describe the actual project or system? *(Evidence: cite one verified fact.)*\n2. **Completeness** Does it cover all planned sections? *(Evidence: list planned vs completed sections.)*\n3. **Audience fit** Is the technical depth right for the target reader? *(Evidence: identify one audience-appropriate choice made.)*\n4. **Usability** Could a reader actually accomplish their goal using this doc? *(Evidence: trace one user journey through the doc.)*\n5. **Consistency** Does it match project conventions for style, terminology, and format? *(Evidence: cite one convention followed.)*\n\nIf any dimension is `needs-work`, fix the issue immediately and re-assert the dimension as `pass` in your notes before continuing.",
  "promptFragments": [
  {
  "id": "phase-5-quality-review-complex",
@@ -119,7 +126,7 @@
  "var": "docComplexity",
  "equals": "Complex"
  },
- "text": "Also review a sixth dimension:\n6. **Integration coherence** \u2014 Does the doc integrate correctly with the existing documentation ecosystem? *(Evidence: describe how it cross-links or relates to existing docs.)*"
+ "text": "Also review a sixth dimension:\n6. **Integration coherence** Does the doc integrate correctly with the existing documentation ecosystem? *(Evidence: describe how it cross-links or relates to existing docs.)*"
  }
  ],
  "requireConfirmation": false
@@ -141,4 +148,4 @@
  "requireConfirmation": false
  }
  ]
- }
+ }
@@ -3,6 +3,13 @@
3
3
  "name": "Documentation Update & Maintenance Workflow",
4
4
  "version": "2.0.0",
5
5
  "description": "Use this to update and maintain existing documentation. Uses git history to detect staleness, maps sections to current code, and refreshes outdated content while preserving what's still accurate.",
6
+ "about": "## Documentation Update & Maintenance Workflow\n\nUse this when you have **existing** documentation that may be out of date and needs to be refreshed to match the current state of the codebase. The workflow uses git history as its primary evidence source: it checks when the docs were last committed, what changed in the relevant code since then, and classifies staleness before touching anything.\n\n### What it produces\n\nUpdated documentation files with stale or inaccurate sections corrected, missing coverage added, and removed content pruned. A completion summary is written to notes for future maintainers, including maintenance recommendations and sections at risk of going stale again quickly.\n\n### When to use it\n\n- A feature shipped and the docs were never updated.\n- You suspect a doc is outdated but aren't sure which parts.\n- You want a systematic, section-by-section audit rather than a quick edit.\n- The repo has git history covering both code and docs (the workflow degrades gracefully without git, but git history is the primary evidence source).\n\n### When NOT to use it\n\n- You are writing a doc from scratch -- use the Document Creation workflow instead.\n- You only need to fix a single known typo or sentence -- just edit the file directly.\n\n### How to get good results\n\n- Point the workflow at the specific documentation files and the code directories they describe.\n- The workflow will ask you to approve an update plan before making any edits -- review it carefully. This is the main checkpoint where you control scope.\n- If you want to defer lower-priority improvements, say so during plan review.",
7
+ "examples": [
8
+ "Update the API docs for the search service after last month's v3 endpoint changes",
9
+ "Refresh the developer onboarding guide -- it hasn't been updated since we migrated to Gradle 8",
10
+ "Audit the architecture decision records in docs/adr/ for accuracy against the current codebase",
11
+ "Update the GraphQL schema documentation to reflect recent breaking changes"
12
+ ],
6
13
  "preconditions": [
7
14
  "Target documentation files are accessible",
8
15
  "Agent has git access to the repository containing both docs and code",
@@ -14,15 +21,15 @@
14
21
  "GIT-EVIDENCE-FIRST: staleness judgment must be grounded in actual git log output. Do not assert a doc is stale based on reading it alone. Run git log; record commit SHAs and messages as evidence.",
15
22
  "PRESERVATION-FIRST: keep accurate, well-written content unchanged. Only update what is demonstrably stale or incorrect. Targeted updates are better than wholesale rewrites.",
16
23
  "VERIFY AGAINST CODE: all updated technical content must be checked against current codebase state. Code examples and API references must match what is actually in the code today.",
17
- "DEGRADE AND DISCLOSE: if git history is unavailable or shallow for some paths, classify staleness as medium and note what evidence is missing. Never block \u2014 proceed with what is available.",
24
+ "DEGRADE AND DISCLOSE: if git history is unavailable or shallow for some paths, classify staleness as medium and note what evidence is missing. Never block proceed with what is available.",
18
25
  "SELF-EXECUTE: explore first with tools. Ask the user only what you genuinely cannot determine from the codebase and git history. The one real confirmation gate is the update plan before executing edits.",
19
- "LOOP DISCIPLINE: the update loop runs without per-section gates \u2014 the plan was approved in phase-2. Only pause if a section requires changes beyond what the approved plan covers; note the deviation and ask."
26
+ "LOOP DISCIPLINE: the update loop runs without per-section gates the plan was approved in phase-2. Only pause if a section requires changes beyond what the approved plan covers; note the deviation and ask."
20
27
  ],
21
28
  "steps": [
22
29
  {
23
30
  "id": "phase-0-assess",
24
31
  "title": "Phase 0: Assess Documentation & Establish Git Baseline",
25
- "prompt": "Locate the target documentation and establish an evidence-based staleness assessment before you decide anything.\n\n**Step 1: Locate and inventory target docs**\n- Identify all documentation files in scope\n- Note file formats, structure, and rough section organization\n- Infer the code paths these docs reference (scopePaths)\n\n**Step 2: Git baseline**\n- Run `git log -1 <docPath>` for each target doc to get the last commit SHA and date\n- Run `git log <lastCommitSha>..HEAD -- <scopePaths>` to get all code changes since the doc was last updated\n- For each commit, classify impact: API/breaking (new exports, changed interfaces, removed functions), behavioral (changed logic), config (schema/option changes), or minor (refactor, rename, test-only)\n\n**Step 3: Staleness classification (rubric-based)**\n\nScore these three dimensions:\n- **Impact**: any API/breaking changes? any behavioral or config changes?\n- **Volume**: how many commits changed the relevant scope since last doc update?\n- **Age**: how many days since the doc was last committed?\n\nDerive `stalenessLevel`:\n- `high`: any API/breaking impact, OR volume > 5 commits AND age > 90 days\n- `medium`: volume > 2 commits, OR age > 60 days, OR behavioral/config changes present\n- `low`: few changes, nothing impacting documented behavior, age < 60 days\n- If git history is unavailable: `medium` \u2014 note what is missing\n\nDerive `updateUrgency`:\n- `high` staleness \u2192 `immediate`\n- `medium` staleness \u2192 `scheduled`\n- `low` staleness \u2192 `monitor` \u2014 if user did not request a forced update, document why and offer to exit\n\n**Capture:**\n- `targetDocPaths`, `scopePaths`\n- `gitLastDocCommitSha`, `gitLastDocCommitDate`\n- `stalenessLevel`, `updateUrgency`\n- `gitChangeSummary` \u2014 prose summary of what changed and why it matters for the docs",
32
+ "prompt": "Locate the target documentation and establish an evidence-based staleness assessment before you decide anything.\n\n**Step 1: Locate and inventory target docs**\n- Identify all documentation files in scope\n- Note file formats, structure, and rough section organization\n- Infer the code paths these docs reference (scopePaths)\n\n**Step 2: Git baseline**\n- Run `git log -1 <docPath>` for each target doc to get the last commit SHA and date\n- Run `git log <lastCommitSha>..HEAD -- <scopePaths>` to get all code changes since the doc was last updated\n- For each commit, classify impact: API/breaking (new exports, changed interfaces, removed functions), behavioral (changed logic), config (schema/option changes), or minor (refactor, rename, test-only)\n\n**Step 3: Staleness classification (rubric-based)**\n\nScore these three dimensions:\n- **Impact**: any API/breaking changes? any behavioral or config changes?\n- **Volume**: how many commits changed the relevant scope since last doc update?\n- **Age**: how many days since the doc was last committed?\n\nDerive `stalenessLevel`:\n- `high`: any API/breaking impact, OR volume > 5 commits AND age > 90 days\n- `medium`: volume > 2 commits, OR age > 60 days, OR behavioral/config changes present\n- `low`: few changes, nothing impacting documented behavior, age < 60 days\n- If git history is unavailable: `medium` note what is missing\n\nDerive `updateUrgency`:\n- `high` staleness `immediate`\n- `medium` staleness `scheduled`\n- `low` staleness `monitor` if user did not request a forced update, document why and offer to exit\n\n**Capture:**\n- `targetDocPaths`, `scopePaths`\n- `gitLastDocCommitSha`, `gitLastDocCommitDate`\n- `stalenessLevel`, `updateUrgency`\n- `gitChangeSummary` prose summary of what changed and why it matters for the docs",
26
33
  "requireConfirmation": {
27
34
  "var": "updateUrgency",
28
35
  "equals": "monitor"
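The staleness rubric in the updated prompt is mechanical enough to sketch in code. Below is a minimal TypeScript illustration using the thresholds exactly as the prompt states them; the type and function names are hypothetical, not part of the workrail API.

```typescript
type StalenessLevel = "high" | "medium" | "low";
type UpdateUrgency = "immediate" | "scheduled" | "monitor";

// Inputs gathered in Step 2 of the prompt (illustrative field names).
interface GitBaseline {
  hasBreakingImpact: boolean;     // any API/breaking changes since last doc update
  hasBehavioralOrConfig: boolean; // behavioral or config changes present
  commitVolume: number;           // commits touching scopePaths since last doc update
  ageDays: number;                // days since the doc was last committed
}

// high: breaking impact, OR volume > 5 AND age > 90;
// medium: volume > 2, OR age > 60, OR behavioral/config changes;
// low: otherwise.
function deriveStaleness(b: GitBaseline): StalenessLevel {
  if (b.hasBreakingImpact || (b.commitVolume > 5 && b.ageDays > 90)) return "high";
  if (b.commitVolume > 2 || b.ageDays > 60 || b.hasBehavioralOrConfig) return "medium";
  return "low";
}

// updateUrgency follows stalenessLevel one-to-one.
function deriveUrgency(level: StalenessLevel): UpdateUrgency {
  return { high: "immediate", medium: "scheduled", low: "monitor" }[level] as UpdateUrgency;
}
```

The "git history unavailable" fallback to `medium` is a prompt-level instruction and is intentionally left out of the sketch.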
@@ -31,13 +38,13 @@
31
38
  {
32
39
  "id": "phase-1-analyze",
33
40
  "title": "Phase 1: Section-by-Section Gap Analysis",
34
- "prompt": "Now map each documentation section to current code and classify what needs to change.\n\nFor each section in the target docs:\n1. **Map to code** \u2014 which files, functions, APIs, or behaviors does this section describe?\n2. **Assess accuracy** \u2014 does the current code match what the section says? Check API signatures, config options, behavioral descriptions, examples, and file paths.\n3. **Classify the section action**:\n - `preserve` \u2014 still accurate, well-written; keep unchanged\n - `update` \u2014 needs correction or expansion; note specifically what changed\n - `remove` \u2014 describes something that no longer exists\n\n4. **Assign update type** for sections marked `update`:\n - `corrective` \u2014 fix inaccurate information\n - `additive` \u2014 add missing coverage for new features\n - `expansive` \u2014 expand thin explanations\n - `reductive` \u2014 remove deprecated or removed content\n - `structural` \u2014 reorganize while preserving content\n\n5. **Assign priority** for sections marked `update`:\n - `critical` \u2014 inaccurate content that would cause errors or confusion for users\n - `important` \u2014 missing or outdated content for significant features or workflows\n - `beneficial` \u2014 improvements that add value but aren't blocking\n\n**Capture:**\n- `sectionInventory` \u2014 list of all sections with: sectionId, action (preserve/update/remove), updateType, priority, and a one-line reason\n\nEvery section must be classified before moving on.",
41
+ "prompt": "Now map each documentation section to current code and classify what needs to change.\n\nFor each section in the target docs:\n1. **Map to code** -- which files, functions, APIs, or behaviors does this section describe?\n2. **Assess accuracy** -- does the current code match what the section says? Check API signatures, config options, behavioral descriptions, examples, and file paths.\n3. **Classify the section action**:\n - `preserve` -- still accurate, well-written; keep unchanged\n - `update` -- needs correction or expansion; note specifically what changed\n - `remove` -- describes something that no longer exists\n\n4. **Assign update type** for sections marked `update`:\n - `corrective` -- fix inaccurate information\n - `additive` -- add missing coverage for new features\n - `expansive` -- expand thin explanations\n - `reductive` -- remove deprecated or removed content\n - `structural` -- reorganize while preserving content\n\n5. **Assign priority** for sections marked `update`:\n - `critical` -- inaccurate content that would cause errors or confusion for users\n - `important` -- missing or outdated content for significant features or workflows\n - `beneficial` -- improvements that add value but aren't blocking\n\n**Capture:**\n- `sectionInventory` -- list of all sections with: sectionId, action (preserve/update/remove), updateType, priority, and a one-line reason\n\nEvery section must be classified before moving on.",
35
42
  "requireConfirmation": false
36
43
  },
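The `sectionInventory` entries the Phase 1 prompt asks for have a regular shape. A hedged TypeScript sketch of that shape follows, with illustrative names rather than the package's actual types; the completeness check mirrors the prompt's rule that every `update` entry needs both an update type and a priority.

```typescript
type SectionAction = "preserve" | "update" | "remove";
type UpdateType = "corrective" | "additive" | "expansive" | "reductive" | "structural";
type Priority = "critical" | "important" | "beneficial";

interface SectionEntry {
  sectionId: string;
  action: SectionAction;
  updateType?: UpdateType; // required only when action === "update"
  priority?: Priority;     // required only when action === "update"
  reason: string;          // one-line justification
}

// Phase 1 is only complete when every section marked `update`
// also carries an update type and a priority.
function isFullyClassified(inventory: SectionEntry[]): boolean {
  return inventory.every(
    (s) => s.action !== "update" || (s.updateType !== undefined && s.priority !== undefined)
  );
}
```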
37
44
  {
38
45
  "id": "phase-2-plan",
39
46
  "title": "Phase 2: Update Plan",
40
- "prompt": "Build the ordered update plan based on the section inventory, then confirm it with me before you start editing anything.\n\nFrom `sectionInventory`, create `updatePlan` as an ordered list:\n1. All `critical` updates first\n2. `important` updates next\n3. `beneficial` updates last (may defer if scope is large)\n\nFor each entry in the plan:\n- Section name and location\n- Update type (corrective / additive / expansive / reductive / structural)\n- Specific description of what to change and why\n- What content to preserve unchanged\n\nAlso note:\n- Total sections to update vs total sections to preserve\n- Any `remove` entries that need explicit deletion\n- Any sections that are `beneficial` you recommend deferring\n\n**This step requires confirmation** \u2014 I need to review the plan before you make edits to the documentation files.\n\n**Capture:**\n- `updatePlan` \u2014 ordered list as described above\n- `sectionsRemaining` \u2014 total count of sections to update (for loop tracking)",
47
+ "prompt": "Build the ordered update plan based on the section inventory, then confirm it with me before you start editing anything.\n\nFrom `sectionInventory`, create `updatePlan` as an ordered list:\n1. All `critical` updates first\n2. `important` updates next\n3. `beneficial` updates last (may defer if scope is large)\n\nFor each entry in the plan:\n- Section name and location\n- Update type (corrective / additive / expansive / reductive / structural)\n- Specific description of what to change and why\n- What content to preserve unchanged\n\nAlso note:\n- Total sections to update vs total sections to preserve\n- Any `remove` entries that need explicit deletion\n- Any sections that are `beneficial` you recommend deferring\n\n**This step requires confirmation** -- I need to review the plan before you make edits to the documentation files.\n\n**Capture:**\n- `updatePlan` -- ordered list as described above\n- `sectionsRemaining` -- total count of sections to update (for loop tracking)",
41
48
  "requireConfirmation": true
42
49
  },
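The Phase 2 ordering rule (critical, then important, then beneficial, excluding preserved and removed sections) can be sketched as a small sort. This is illustrative only; the names are assumptions, not the package's exports.

```typescript
type Priority = "critical" | "important" | "beneficial";

interface PlanInput {
  sectionId: string;
  action: "preserve" | "update" | "remove";
  priority?: Priority;
}

// Lower rank sorts earlier: critical before important before beneficial.
const rank: Record<Priority, number> = { critical: 0, important: 1, beneficial: 2 };

function buildUpdatePlan(inventory: PlanInput[]): PlanInput[] {
  return inventory
    .filter((s) => s.action === "update")
    .sort((a, b) => rank[a.priority ?? "beneficial"] - rank[b.priority ?? "beneficial"]);
}

// sectionsRemaining seeds the per-section loop that follows Phase 2.
const sectionsRemaining = (inventory: PlanInput[]): number => buildUpdatePlan(inventory).length;
```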
43
50
  {
@@ -69,7 +76,7 @@
69
76
  {
70
77
  "id": "verify-section",
71
78
  "title": "Verify the Updated Section",
72
- "prompt": "Verify that the update to `currentSection` is correct before moving on.\n\nCheck:\n1. **Technical accuracy** \u2014 does every technical claim match the current codebase? Check the code directly if needed.\n2. **Code examples** \u2014 are all code blocks in this section syntactically valid and behaviorally correct against current APIs?\n3. **Preservation** \u2014 what did you keep unchanged? Confirm the preserved content is still present and intact.\n4. **Cross-references** \u2014 are any internal links pointing to or from this section still working?\n\nDecrement `sectionsRemaining` by 1.\n\nRecord findings in notes: what you changed, what you preserved, any issues found.\n\n**Capture:** updated `sectionsRemaining`",
79
+ "prompt": "Verify that the update to `currentSection` is correct before moving on.\n\nCheck:\n1. **Technical accuracy** -- does every technical claim match the current codebase? Check the code directly if needed.\n2. **Code examples** -- are all code blocks in this section syntactically valid and behaviorally correct against current APIs?\n3. **Preservation** -- what did you keep unchanged? Confirm the preserved content is still present and intact.\n4. **Cross-references** -- are any internal links pointing to or from this section still working?\n\nDecrement `sectionsRemaining` by 1.\n\nRecord findings in notes: what you changed, what you preserved, any issues found.\n\n**Capture:** updated `sectionsRemaining`",
73
80
  "requireConfirmation": false
74
81
  },
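The verify step's bookkeeping (decrement `sectionsRemaining`, append a note) is simple state threading. A minimal sketch, assuming an immutable-update style; the `LoopState` shape is hypothetical:

```typescript
interface LoopState {
  sectionsRemaining: number; // gates exit from the per-section loop
  notes: string[];           // what was changed, preserved, and any issues found
}

// Record one verified section and count it off.
function recordVerifiedSection(state: LoopState, currentSection: string, note: string): LoopState {
  return {
    sectionsRemaining: state.sectionsRemaining - 1,
    notes: [...state.notes, `${currentSection}: ${note}`],
  };
}
```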
75
82
  {
@@ -87,7 +94,7 @@
87
94
  {
88
95
  "id": "phase-4-validate",
89
96
  "title": "Phase 4: End-to-End Validation",
90
- "prompt": "Read through all updated documentation as a fresh reader and validate it as a whole.\n\n1. **End-to-end consistency** \u2014 read the docs in order. Is terminology consistent? Does the logical flow make sense? Do sections refer to each other correctly?\n\n2. **Technical accuracy pass** \u2014 for any section you feel uncertain about, verify it against current code now. If verification reveals a remaining inaccuracy, note it explicitly: what is wrong, where it is, and what the correct information should be. Do not silently pass a section you are unsure about.\n\n3. **User journey test** \u2014 walk through the key documented workflows from start to finish using only the documentation. Do setup instructions work? Are the most important use cases covered correctly?\n\n4. **Navigation and structure** \u2014 are all cross-references working? Is the table of contents (if present) accurate? Can a user find what they need?\n\n5. **Completeness check** \u2014 look back at the original gap analysis. Were all critical and important updates completed? Note explicitly if any were deferred.\n\nDocument what you found: any remaining issues, any sections that still need work, and your overall assessment of the documentation quality after the update.",
97
+ "prompt": "Read through all updated documentation as a fresh reader and validate it as a whole.\n\n1. **End-to-end consistency** -- read the docs in order. Is terminology consistent? Does the logical flow make sense? Do sections refer to each other correctly?\n\n2. **Technical accuracy pass** -- for any section you feel uncertain about, verify it against current code now. If verification reveals a remaining inaccuracy, note it explicitly: what is wrong, where it is, and what the correct information should be. Do not silently pass a section you are unsure about.\n\n3. **User journey test** -- walk through the key documented workflows from start to finish using only the documentation. Do setup instructions work? Are the most important use cases covered correctly?\n\n4. **Navigation and structure** -- are all cross-references working? Is the table of contents (if present) accurate? Can a user find what they need?\n\n5. **Completeness check** -- look back at the original gap analysis. Were all critical and important updates completed? Note explicitly if any were deferred.\n\nDocument what you found: any remaining issues, any sections that still need work, and your overall assessment of the documentation quality after the update.",
91
98
  "requireConfirmation": false
92
99
  },
93
100
  {
@@ -97,4 +104,4 @@
97
104
  "requireConfirmation": false
98
105
  }
99
106
  ]
100
- }
107
+ }
@@ -3,6 +3,13 @@
3
3
  "name": "Test Case Generation from Tickets",
4
4
  "version": "1.0.0",
5
5
  "description": "Use this to generate integration and end-to-end test cases from ticket requirements. Reads the ticket, traces affected code paths, identifies boundary conditions, and produces developer-readable test case descriptions.",
6
+ "about": "## Intelligent Test Case Generation\n\nThis workflow generates structured integration and end-to-end test cases directly from a ticket. It reads the ticket requirements, traces the affected code paths in the codebase, identifies boundary conditions and failure scenarios, and produces developer-readable test case descriptions that a developer can implement without guessing.\n\n**What it does:**\nThe workflow extracts every acceptance criterion from the ticket, traces which modules, endpoints, and integration boundaries are involved, identifies the existing test patterns in the repo (so generated cases match the team's style), then systematically generates happy path, boundary, and failure scenarios for each criterion. It checks coverage before writing, resolves ambiguities with you before generating anything uncertain, and finishes with a full test case list plus a coverage summary.\n\n**When to use it:**\n- When a ticket has clear acceptance criteria and you want comprehensive test coverage without manually reasoning through every edge case\n- When onboarding to a feature area and wanting to understand the expected behavior through its test scenarios\n- When a ticket spans multiple services or integration points and you need coverage across all of them\n- When preparing for a QA handoff or code review where test coverage must be explicitly demonstrated\n\n**What it produces:**\nNumbered test cases (TC-1, TC-2, ...) each with a title, acceptance criterion mapping, test type (Integration or E2E), risk level, preconditions, numbered test steps, expected result, and implementation notes. Cases are grouped by acceptance criterion and followed by a summary table. Open ambiguities and coverage gaps are disclosed explicitly.\n\n**How to get good results:**\nProvide the ticket in any standard format -- title, description, and acceptance criteria are enough. The workflow will trace the codebase itself. If the ticket has linked specs, API docs, or architecture diagrams, mention them. The more complete the acceptance criteria, the fewer clarifying questions the workflow will need to ask.",
7
+ "examples": [
8
+ "Generate test cases for ACEI-1591: expire cached conversations after 24 hours of inactivity",
9
+ "Write integration test scenarios for the ticket adding multi-factor authentication to the login flow",
10
+ "Generate E2E test cases for the checkout redesign ticket covering all payment method variations",
11
+ "Create test cases for the ticket migrating user profile storage from Postgres to the new profile service"
12
+ ],
6
13
  "preconditions": [
7
14
  "User provides a ticket (title, description, acceptance criteria) in any standard format.",
8
15
  "Agent has read access to the codebase for tracing affected paths and finding existing test patterns.",
@@ -3,6 +3,13 @@
3
3
  "name": "Personal Learning Course Design Workflow",
4
4
  "version": "1.0.0",
5
5
  "description": "Use this to design a personal learning course. Creates structured learning objectives, sequencing, and a course outline suited to your time constraints.",
6
+ "about": "## Personal Learning Course Design Workflow\n\nUse this to design a structured personal learning course -- defining clear objectives, sequencing, assessments, and a schedule that fits your time constraints. This workflow focuses on the **design** phase: building the blueprint for your learning program before you create any materials.\n\n### What it produces\n\nDepending on the path you choose:\n\n- **Quick Start (3-5 days)**: a functional learning plan with 2-3 focused objectives, a weekly schedule, a resource list, and basic progress tracking.\n- **Balanced (1-2 weeks)**: a comprehensive learning system with modules, structured assessments, active learning activities, and accountability measures.\n- **Comprehensive (2-3 weeks)**: a professional-grade learning system with full Bloom's Taxonomy-aligned objectives, spaced repetition design, multi-layer assessments, and long-term retention planning.\n\n### When to use it\n\n- You want to learn something specific and want a structured plan rather than ad-hoc resource consumption.\n- You are preparing for a certification, career transition, or skill upgrade and need a realistic timeline and sequence.\n- You've tried self-study before and found it hard to stay on track -- a well-designed plan with clear checkpoints helps.\n\n### When NOT to use it\n\n- You need to create the actual study materials -- use the Personal Learning Materials Creation workflow after this one.\n- You're designing a course for other learners, not for yourself -- consider an instructional design workflow instead.\n\n### How to get good results\n\n- Be honest about your weekly time budget. An ambitious plan that doesn't fit your schedule is worse than a modest plan you actually follow.\n- Start with the Quick Start path if you're uncertain -- you can always expand. Choosing Comprehensive when you're time-constrained leads to abandonment.\n- The more specific your goal (\"pass the AWS SAA exam in 3 months\"), the better the resulting plan will be compared to a vague goal (\"learn cloud\").",
7
+ "examples": [
8
+ "Design a 12-week self-study plan to pass the AWS Solutions Architect Associate exam",
9
+ "Create a balanced learning plan for learning Go, starting from zero, with 6 hours per week available",
10
+ "Build a comprehensive course design for mastering data visualization with Python over 3 months",
11
+ "Design a quick-start plan to get productive with Rust in 4 weeks"
12
+ ],
6
13
  "clarificationPrompts": [
7
14
  "What specific knowledge or skill do you want to master for yourself? (Be precise about the learning goal)",
8
15
  "How will you know you've successfully learned this? What will you be able to do, create, or solve? (Define your personal success criteria)",
@@ -29,7 +36,7 @@
29
36
  {
30
37
  "id": "select-design-path",
31
38
  "title": "Choose Your Learning Design Path",
32
- "prompt": "Based on your time constraints and learning design experience, select the approach that best fits your needs:\n\n**\ud83d\ude80 QUICK START PATH (3-5 days)**\n- **Best for**: First-time course designers, tight timelines, simple learning goals\n- **What you get**: Essential structure with clear objectives, basic assessment, and simple schedule\n- **Time investment**: 3-5 days to complete design process\n- **Result**: Functional learning plan that covers the basics effectively\n\n**\u2696\ufe0f BALANCED PATH (1-2 weeks)**\n- **Best for**: Some learning design experience, moderate complexity goals, want good system without overwhelm\n- **What you get**: Solid instructional design plus engagement features, assessment strategy, and progress tracking\n- **Time investment**: 1-2 weeks to complete design process\n- **Result**: Comprehensive learning system with proven pedagogical principles\n\n**\ud83c\udf93 COMPREHENSIVE PATH (2-3 weeks)**\n- **Best for**: Complex learning goals, want professional-grade system, experienced with instructional design\n- **What you get**: Full pedagogical depth with spaced repetition, detailed accountability, and advanced monitoring\n- **Time investment**: 2-3 weeks to complete design process\n- **Result**: Professional-grade learning system with all advanced features\n\n**Please select your path:**\n- Type 'quick' for Quick Start Path (3-5 days)\n- Type 'balanced' for Balanced Path (1-2 weeks)\n- Type 'comprehensive' for Comprehensive Path (2-3 weeks)\n\nYour choice will customize the remaining steps to match your needs and time constraints.",
39
+ "prompt": "Based on your time constraints and learning design experience, select the approach that best fits your needs:\n\n**🚀 QUICK START PATH (3-5 days)**\n- **Best for**: First-time course designers, tight timelines, simple learning goals\n- **What you get**: Essential structure with clear objectives, basic assessment, and simple schedule\n- **Time investment**: 3-5 days to complete design process\n- **Result**: Functional learning plan that covers the basics effectively\n\n**⚖️ BALANCED PATH (1-2 weeks)**\n- **Best for**: Some learning design experience, moderate complexity goals, want good system without overwhelm\n- **What you get**: Solid instructional design plus engagement features, assessment strategy, and progress tracking\n- **Time investment**: 1-2 weeks to complete design process\n- **Result**: Comprehensive learning system with proven pedagogical principles\n\n**🎓 COMPREHENSIVE PATH (2-3 weeks)**\n- **Best for**: Complex learning goals, want professional-grade system, experienced with instructional design\n- **What you get**: Full pedagogical depth with spaced repetition, detailed accountability, and advanced monitoring\n- **Time investment**: 2-3 weeks to complete design process\n- **Result**: Professional-grade learning system with all advanced features\n\n**Please select your path:**\n- Type 'quick' for Quick Start Path (3-5 days)\n- Type 'balanced' for Balanced Path (1-2 weeks)\n- Type 'comprehensive' for Comprehensive Path (2-3 weeks)\n\nYour choice will customize the remaining steps to match your needs and time constraints.",
33
40
  "agentRole": "You are a learning design consultant who helps users choose the right level of complexity for their learning design process. Guide them to select a path that matches their experience, time constraints, and learning goals without overwhelming them.",
34
41
  "guidance": [
35
42
  "Help users honestly assess their time availability and design experience",
@@ -285,4 +292,4 @@
285
292
  "hasValidation": true
286
293
  }
287
294
  ]
288
- }
295
+ }
@@ -3,6 +3,13 @@
3
3
  "name": "MR Review Workflow (Lean v2 \u2022 Notes-First \u2022 Evidence-Driven Reviewer Families)",
4
4
  "version": "2.4.0",
5
5
  "description": "Lean v2 MR review workflow. Merges intake, missing-input gating, context gathering, and re-triage into one structured front phase, then drives review through a shared fact packet, parallel reviewer families, contradiction-driven synthesis, and evidence-first final validation.",
6
+ "about": "## MR Review Workflow\n\nThis workflow conducts a structured, evidence-driven code review of a merge request or pull request. It is designed for cases where you want a thorough, audit-quality review rather than a quick glance -- particularly when the change touches critical surfaces, spans many files, or carries real production risk.\n\n**What it does:**\nThe workflow locates and bounds the review target, enriches it with PR context and ticket intent, classifies the change by risk and shape, then runs parallel \"reviewer family\" agents (covering correctness, architecture, runtime risk, tests/docs, and more) from a shared neutral fact packet. It reconciles contradictions between reviewer families, stress-tests the recommendation with adversarial validators, and produces a final handoff with severity-classified findings and ready-to-post MR comments.\n\n**When to use it:**\n- Before merging a PR that touches auth, data models, APIs, or critical paths\n- When you want independent perspectives on a change without the noise of an unstructured review\n- When the change is large or the reviewer is unfamiliar with the surrounding code\n- When you need a reproducible audit trail for compliance or team review processes\n\n**What it produces:**\nA final review recommendation (approve / request changes / needs discussion) with a confidence band, severity-graded findings (Critical / Major / Minor / Nit), ready-to-post MR comments, a coverage ledger showing which review domains were checked, and an honest disclosure of any context that could not be recovered.\n\n**How to get good results:**\nProvide the PR URL, branch name, or diff. The workflow can recover most context on its own -- ticket links, repo patterns, policy docs -- but if the change has non-obvious intent, a one-sentence description of the goal helps calibrate review sensitivity. The workflow will not post comments or approve/reject without explicit instruction.",
7
+ "examples": [
8
+ "Review PR #47 adding JWT refresh token rotation before it merges to main",
9
+ "Review the feature/cache-expiration branch for correctness and production risk before merging",
10
+ "Audit PR #312 refactoring the payment service data model for backward compatibility and rollout safety",
11
+ "Review the open MR for the new rate-limiting middleware against our API contract standards"
12
+ ],
6
13
  "recommendedPreferences": {
7
14
  "recommendedAutonomy": "guided",
8
15
  "recommendedRiskPolicy": "conservative"
@@ -2,7 +2,14 @@
2
2
  "id": "personal-learning-materials-creation-branched",
3
3
  "name": "Personal Learning Materials Creation Workflow (Branched)",
4
4
  "version": "1.1.0",
5
- "description": "Use this to create learning materials for a course or subject. Adapts depth and format to your time budget \u2014 Quick Start, Balanced, or Comprehensive.",
5
+ "description": "Use this to create learning materials for a course or subject. Adapts depth and format to your time budget -- Quick Start, Balanced, or Comprehensive.",
6
+ "about": "## Personal Learning Materials Creation Workflow\n\nUse this to create the actual study materials for a course or subject you are learning -- study guides, exercises, assessments, and spaced-repetition review materials. This workflow assumes you already have a learning plan or course design with defined objectives; it focuses on producing materials that directly support those objectives.\n\n### What it produces\n\nDepending on the path you choose:\n\n- **Quick Start (2-3 weeks)**: study guides and basic exercises for immediate use.\n- **Balanced (4-6 weeks)**: a complete learning system -- study guides, exercises, assessments, and spaced repetition materials.\n- **Comprehensive (8-12 weeks)**: a full learning ecosystem with interactive elements, effectiveness measurement, and a scalable update protocol.\n\n### When to use it\n\n- You have a learning plan and need to turn it into usable materials.\n- You are preparing for a certification, exam, or structured self-study program.\n- You want materials tailored to your specific objectives rather than relying entirely on off-the-shelf resources.\n\n### When NOT to use it\n\n- You haven't designed your learning course yet -- use the Personal Learning Course Design workflow first to define objectives and structure.\n- You need to design a course for others to take -- use the Learner-Centered Course workflow instead.\n\n### How to get good results\n\n- Select the path honestly based on available time. Starting with Quick Start and expanding later is better than committing to Comprehensive and abandoning it.\n- Have your learning objectives written out before starting -- the workflow maps every material directly to an objective.\n- Be specific about your preferred learning formats (text, diagrams, flashcards, practice problems) at the start.",
7
+ "examples": [
8
+ "Create study guides and exercises for my AWS Solutions Architect certification prep",
9
+ "Build a complete set of flashcards and practice problems for learning Rust ownership",
10
+ "Create materials for a 6-week self-study course on Bayesian statistics",
11
+ "Make a quick-start study guide for the CKAD Kubernetes exam"
12
+ ],
6
13
  "clarificationPrompts": [
7
14
  "Do you have a completed learning plan with defined objectives and modules?",
8
15
  "How much time can you dedicate weekly to materials creation?",
@@ -25,7 +32,7 @@
25
32
  {
26
33
  "id": "phase-0-select-thoroughness-path",
27
34
  "title": "Phase 0: Select Your Materials Creation Path",
28
- "prompt": "Choose your learning materials creation approach based on your time, goals, and quality needs:\n\n\ud83d\udcda **Quick Start Path**\n\u2022 Timeline: 2-3 weeks (5-8 hours total)\n\u2022 Materials: Study guides + basic exercises\n\u2022 Best for: Time-constrained learners, getting started quickly\n\u2022 Outcome: Functional materials for immediate use\n\n\ud83c\udfaf **Balanced Path**\n\u2022 Timeline: 4-6 weeks (12-20 hours total)\n\u2022 Materials: Study guides + exercises + assessments + spaced repetition\n\u2022 Best for: Comprehensive learning support, professional quality\n\u2022 Outcome: Complete learning system with proven effectiveness\n\n\ud83c\udfc6 **Comprehensive Path**\n\u2022 Timeline: 8-12 weeks (25-40 hours total)\n\u2022 Materials: All types + interactive elements + full testing\n\u2022 Best for: Professional educators, enterprise-grade projects\n\u2022 Outcome: Optimized learning ecosystem with maximum effectiveness\n\nWhich path best matches your timeline and quality goals?",
35
+ "prompt": "Choose your learning materials creation approach based on your time, goals, and quality needs:\n\n📚 **Quick Start Path**\n• Timeline: 2-3 weeks (5-8 hours total)\n• Materials: Study guides + basic exercises\n• Best for: Time-constrained learners, getting started quickly\n• Outcome: Functional materials for immediate use\n\n🎯 **Balanced Path**\n• Timeline: 4-6 weeks (12-20 hours total)\n• Materials: Study guides + exercises + assessments + spaced repetition\n• Best for: Comprehensive learning support, professional quality\n• Outcome: Complete learning system with proven effectiveness\n\n🏆 **Comprehensive Path**\n• Timeline: 8-12 weeks (25-40 hours total)\n• Materials: All types + interactive elements + full testing\n• Best for: Professional educators, enterprise-grade projects\n• Outcome: Optimized learning ecosystem with maximum effectiveness\n\nWhich path best matches your timeline and quality goals?",
29
36
  "agentRole": "You are a learning materials consultant specializing in helping users choose the right approach for their constraints and goals. Guide users toward the path that best fits their needs. Set the thoroughnessLevel context variable based on their selection.",
30
37
  "guidance": [
31
38
  "Help users make realistic choices based on their actual time availability",
@@ -41,7 +48,7 @@
41
48
  "equals": "Quick"
42
49
  },
43
50
  "title": "Phase 1: Essential Learning Plan Analysis (Quick Start)",
44
- "prompt": "Extract the core elements from your learning plan for rapid materials creation:\n\n**STEP 1: Core Objectives**\n\u2022 Identify your 3-5 most important learning objectives\n\u2022 Note success criteria for each objective\n\u2022 Skip complex prerequisite analysis\n\n**STEP 2: Essential Materials Map**\n\u2022 For each objective, identify if you need: study guide, basic exercises, or both\n\u2022 Focus on immediate learning needs, not comprehensive coverage\n\u2022 Note existing resources that can supplement your materials\n\n**STEP 3: Quick Resource Assessment**\n\u2022 List available source materials (books, courses, notes)\n\u2022 Identify 2-3 key resources for each objective\n\u2022 Note time constraints and creation priorities\n\nGoal: Practical roadmap for essential materials creation in minimal time.",
51
+ "prompt": "Extract the core elements from your learning plan for rapid materials creation:\n\n**STEP 1: Core Objectives**\n Identify your 3-5 most important learning objectives\n Note success criteria for each objective\n Skip complex prerequisite analysis\n\n**STEP 2: Essential Materials Map**\n For each objective, identify if you need: study guide, basic exercises, or both\n Focus on immediate learning needs, not comprehensive coverage\n Note existing resources that can supplement your materials\n\n**STEP 3: Quick Resource Assessment**\n List available source materials (books, courses, notes)\n Identify 2-3 key resources for each objective\n Note time constraints and creation priorities\n\nGoal: Practical roadmap for essential materials creation in minimal time.",
45
52
  "agentRole": "You are an efficient learning analyst focused on rapid materials development. Help users identify core needs quickly without over-analysis. Emphasize practical, immediately actionable insights.",
46
53
  "guidance": [
47
54
  "Keep analysis focused and action-oriented",
@@ -75,7 +82,7 @@
75
82
  "equals": "Balanced"
76
83
  },
77
84
  "title": "Phase 1: Comprehensive Learning Plan Analysis (Balanced)",
78
- "prompt": "Analyze your learning plan to guide professional-quality materials creation:\n\n**STEP 1: Objective Architecture**\n\u2022 Extract all learning objectives with success criteria\n\u2022 Identify prerequisite relationships between objectives\n\u2022 Note assessment strategies for each objective\n\u2022 Map objectives to modules and time allocations\n\n**STEP 2: Materials Requirements Matrix**\n\u2022 For each objective, determine needed materials: study guides, exercises, assessments\n\u2022 Identify concepts requiring multiple reinforcement approaches\n\u2022 Note which objectives need spaced repetition support\n\u2022 Flag areas requiring practical application or hands-on practice\n\n**STEP 3: Resource Integration Plan**\n\u2022 Evaluate existing resources for quality and coverage\n\u2022 Identify gaps where custom materials are essential\n\u2022 Plan integration between created materials and external resources\n\u2022 Design quality standards for materials consistency\n\nGoal: Strategic foundation for professional learning materials system.",
85
+ "prompt": "Analyze your learning plan to guide professional-quality materials creation:\n\n**STEP 1: Objective Architecture**\n• Extract all learning objectives with success criteria\n• Identify prerequisite relationships between objectives\n• Note assessment strategies for each objective\n• Map objectives to modules and time allocations\n\n**STEP 2: Materials Requirements Matrix**\n• For each objective, determine needed materials: study guides, exercises, assessments\n• Identify concepts requiring multiple reinforcement approaches\n• Note which objectives need spaced repetition support\n• Flag areas requiring practical application or hands-on practice\n\n**STEP 3: Resource Integration Plan**\n• Evaluate existing resources for quality and coverage\n• Identify gaps where custom materials are essential\n• Plan integration between created materials and external resources\n• Design quality standards for materials consistency\n\nGoal: Strategic foundation for professional learning materials system.",
  "agentRole": "You are a professional instructional designer specializing in systematic materials development. Help users create comprehensive yet practical plans that balance quality with efficiency. Focus on proven instructional design principles.",
  "guidance": [
  "Apply instructional design best practices systematically",
@@ -104,7 +111,7 @@
  "equals": "Comprehensive"
  },
  "title": "Phase 1: Expert Learning Plan Analysis (Comprehensive)",
- "prompt": "Conduct thorough analysis of learning architecture for enterprise-grade materials:\n\n**STEP 1: Learning System Architecture**\n\u2022 Map complete learning objective hierarchy with dependencies\n\u2022 Analyze cognitive load and complexity progression\n\u2022 Identify multiple learning pathways and individual differences\n\u2022 Design assessment strategy aligned with learning taxonomies\n\n**STEP 2: Advanced Materials Strategy**\n\u2022 Determine optimal material types for each learning objective\n\u2022 Plan multi-modal approach for different learning styles\n\u2022 Design integration points for spaced repetition and active recall\n\u2022 Identify opportunities for interactive and adaptive elements\n\n**STEP 3: Quality & Effectiveness Framework**\n\u2022 Establish criteria for materials effectiveness measurement\n\u2022 Plan user testing and feedback integration\n\u2022 Design continuous improvement and iteration protocols\n\u2022 Create scalability and maintenance considerations\n\nGoal: Strategic foundation for optimized, enterprise-grade learning ecosystem.",
+ "prompt": "Conduct thorough analysis of learning architecture for enterprise-grade materials:\n\n**STEP 1: Learning System Architecture**\n Map complete learning objective hierarchy with dependencies\n Analyze cognitive load and complexity progression\n Identify multiple learning pathways and individual differences\n Design assessment strategy aligned with learning taxonomies\n\n**STEP 2: Advanced Materials Strategy**\n Determine optimal material types for each learning objective\n Plan multi-modal approach for different learning styles\n Design integration points for spaced repetition and active recall\n Identify opportunities for interactive and adaptive elements\n\n**STEP 3: Quality & Effectiveness Framework**\n Establish criteria for materials effectiveness measurement\n Plan user testing and feedback integration\n Design continuous improvement and iteration protocols\n Create scalability and maintenance considerations\n\nGoal: Strategic foundation for optimized, enterprise-grade learning ecosystem.",
  "agentRole": "You are an expert learning systems architect with deep expertise in advanced instructional design and learning optimization. Guide users in creating sophisticated materials that maximize learning effectiveness through evidence-based approaches.",
  "guidance": [
  "Apply advanced learning science principles and research",
@@ -133,7 +140,7 @@
  "equals": "Quick"
  },
  "title": "Phase 2: Efficient Materials Strategy (Quick Start)",
- "prompt": "Create a focused strategy for essential materials creation:\n\n**STEP 1: Format Selection**\n\u2022 Choose 1-2 primary formats based on your learning style\n\u2022 Prioritize formats you can create quickly (text-based, simple templates)\n\u2022 Plan minimal but consistent formatting approach\n\u2022 Focus on immediate usability over visual polish\n\n**STEP 2: Creation Workflow**\n\u2022 Design simple templates for study guides and exercises\n\u2022 Plan batch creation approach to maximize efficiency\n\u2022 Set realistic quality standards (functional over perfect)\n\u2022 Create basic organization system for easy access\n\n**STEP 3: Quality Framework**\n\u2022 Establish minimum viable product standards\n\u2022 Plan quick self-review process\n\u2022 Design simple feedback collection for future improvement\n\u2022 Focus on completion over perfection\n\nGoal: Practical strategy for rapid materials creation without sacrificing core functionality.",
+ "prompt": "Create a focused strategy for essential materials creation:\n\n**STEP 1: Format Selection**\n Choose 1-2 primary formats based on your learning style\n Prioritize formats you can create quickly (text-based, simple templates)\n Plan minimal but consistent formatting approach\n Focus on immediate usability over visual polish\n\n**STEP 2: Creation Workflow**\n Design simple templates for study guides and exercises\n Plan batch creation approach to maximize efficiency\n Set realistic quality standards (functional over perfect)\n Create basic organization system for easy access\n\n**STEP 3: Quality Framework**\n Establish minimum viable product standards\n Plan quick self-review process\n Design simple feedback collection for future improvement\n Focus on completion over perfection\n\nGoal: Practical strategy for rapid materials creation without sacrificing core functionality.",
  "agentRole": "You are an efficiency expert specializing in rapid content creation. Help users design streamlined approaches that maximize output while maintaining essential quality. Focus on practical, time-saving strategies.",
  "guidance": [
  "Emphasize efficiency and speed over perfection",
@@ -162,7 +169,7 @@
  "equals": "Comprehensive"
  },
  "title": "Phase 2: Advanced Materials Strategy (Comprehensive)",
- "prompt": "Develop sophisticated strategy for enterprise-grade materials:\n\n**STEP 1: Multi-Modal Format Strategy**\n\u2022 Design format variety to engage different learning modes\n\u2022 Plan advanced visual elements, interactive components, adaptive features\n\u2022 Create sophisticated template system with consistent branding\n\u2022 Consider accessibility, mobile optimization, and universal design\n\n**STEP 2: Integration Architecture**\n\u2022 Plan seamless connections between all material types\n\u2022 Design advanced spaced repetition integration with learning analytics\n\u2022 Create sophisticated cross-referencing and linking systems\n\u2022 Plan for collaborative features and social learning elements\n\n**STEP 3: Quality Excellence Framework**\n\u2022 Establish enterprise-grade quality standards and measurement\n\u2022 Design comprehensive user testing and feedback integration\n\u2022 Plan continuous optimization based on learning effectiveness data\n\u2022 Create scalable maintenance and update protocols\n\nGoal: Strategic foundation for learning materials that optimize effectiveness through sophisticated design.",
+ "prompt": "Develop sophisticated strategy for enterprise-grade materials:\n\n**STEP 1: Multi-Modal Format Strategy**\n Design format variety to engage different learning modes\n Plan advanced visual elements, interactive components, adaptive features\n Create sophisticated template system with consistent branding\n Consider accessibility, mobile optimization, and universal design\n\n**STEP 2: Integration Architecture**\n Plan seamless connections between all material types\n Design advanced spaced repetition integration with learning analytics\n Create sophisticated cross-referencing and linking systems\n Plan for collaborative features and social learning elements\n\n**STEP 3: Quality Excellence Framework**\n Establish enterprise-grade quality standards and measurement\n Design comprehensive user testing and feedback integration\n Plan continuous optimization based on learning effectiveness data\n Create scalable maintenance and update protocols\n\nGoal: Strategic foundation for learning materials that optimize effectiveness through sophisticated design.",
  "agentRole": "You are a learning systems architect with expertise in enterprise-grade materials design. Help users create sophisticated strategies that maximize learning effectiveness through advanced features and optimization.",
  "guidance": [
  "Apply advanced instructional design and learning optimization principles",
@@ -185,4 +192,4 @@
  "hasValidation": true
  }
  ]
- }
+ }
@@ -3,6 +3,13 @@
  "name": "Presentation Creation Workflow",
  "version": "1.0.0",
  "description": "Use this to create a compelling presentation. Covers audience analysis, content strategy, slide structure, and delivery preparation. Output works with any presentation tool.",
+ "about": "## Presentation Creation Workflow\n\nUse this to build a compelling, audience-specific presentation from scratch -- whether for a conference talk, internal strategy review, client pitch, or team demo. The workflow grounds every content decision in a concrete audience profile, so the result is written for real people in a real context rather than a generic slide deck.\n\n### What it produces\n\n- An audience profile and context map.\n- A content strategy with a single core message, supporting arguments, and a call-to-action.\n- A numbered slide outline with content types and timing estimates.\n- Full slide content and speaker notes for every slide.\n- Backup slides for anticipated deep-dive questions.\n- A delivery preparation plan including practice schedule, Q&A prep, and technical checklist.\n\n### When to use it\n\n- You are building a presentation that needs to persuade, inform, or motivate a specific audience.\n- You want structured help moving from \"I have a topic\" to \"I have a complete, rehearsal-ready deck.\"\n- The presentation has real stakes -- a client pitch, a leadership review, a conference talk.\n\n### When NOT to use it\n\n- You just need to slap a few bullets onto slides quickly -- this workflow is for presentations where quality matters.\n\n### How to get good results\n\n- The more specific you are about your audience, the better the content strategy will be. \"Engineering managers at a Series B fintech\" beats \"technical people.\"\n- The workflow has two confirmation gates: after the audience profile and after the slide outline. Use these to redirect before content gets written.\n- Bring source materials, data, and any existing slides you want to incorporate -- the content development step can ingest these.",
+ "examples": [
+ "Create a 20-minute conference talk on migrating a monolith to microservices for a senior engineering audience",
+ "Build a 10-slide investor pitch deck for our Series A fundraise",
+ "Prepare a quarterly product roadmap presentation for internal stakeholders",
+ "Create a client workshop deck introducing our platform migration methodology"
+ ],
  "preconditions": [
  "You have a clear presentation topic or objective.",
  "You know roughly who your audience is.",
@@ -56,9 +63,9 @@
  "promptBlocks": {
  "goal": "Define the core message and argument structure that will guide every slide and talking point. Using your audience profile from the previous step, build a content strategy grounded in their specific needs.",
  "constraints": [
- "Start with one core message \u2014 a single, memorable sentence. Everything else should support it.",
+ "Start with one core message a single, memorable sentence. Everything else should support it.",
  "Supporting arguments should directly address the audience motivations and pain points you identified.",
- "The narrative arc should feel natural for this audience in this context \u2014 not a generic template unless it actually fits."
+ "The narrative arc should feel natural for this audience in this context not a generic template unless it actually fits."
  ],
  "procedure": [
  "State your core message: one clear, memorable sentence that captures the single most important thing you want the audience to take away.",
@@ -85,7 +92,7 @@
  "constraints": [
  "One key idea per slide. If a slide is trying to say two things, split it.",
  "Plan for pacing: balance information-dense slides with breathing room and interaction moments.",
- "Think about how the slides will work if someone reads them later without you \u2014 titles should be informative, not just labels."
+ "Think about how the slides will work if someone reads them later without you titles should be informative, not just labels."
  ],
  "procedure": [
  "Opening (1-3 slides): attention-grabbing hook, context framing, and agenda or roadmap.",
@@ -109,7 +116,7 @@
  "id": "content-development",
  "title": "Content Development",
  "promptBlocks": {
- "goal": "Write the actual presentation content \u2014 slide text and speaker notes \u2014 following the approved outline. Every piece of content should be grounded in your audience profile and core message.",
+ "goal": "Write the actual presentation content slide text and speaker notes following the approved outline. Every piece of content should be grounded in your audience profile and core message.",
  "constraints": [
  "Write for the ear, not the eye. Slide text should be sparse; speaker notes should sound natural when spoken aloud.",
  "Use active voice and concrete language. Replace abstractions with specific examples.",
@@ -162,4 +169,4 @@
  "requireConfirmation": false
  }
  ]
- }
+ }
@@ -3,6 +3,13 @@
  "name": "Production Readiness Audit (v2 \u2022 Evidence-Driven Readiness Review)",
  "version": "0.1.0",
  "description": "Use this to audit a codebase scope for production readiness. Checks debugging correctness, runtime operability, artifact realism, technical debt, and anything that would prevent honest production deployment.",
+ "about": "## Production Readiness Audit\n\nThis workflow performs a structured, evidence-driven audit to answer one question honestly: is this code actually ready for production? It goes beyond style and lint -- it looks for debugging correctness, runtime operability under real conditions, artifact realism (stale code, fake completeness, placeholder behavior), maintainability debt, test and observability gaps, and security or performance risks.\n\n**What it does:**\nThe workflow bounds the audit scope, states a readiness hypothesis, freezes a neutral fact packet, then runs parallel reviewer families -- each specializing in a different readiness dimension. It reconciles contradictions through an evidence loop and produces a final verdict: `ready`, `ready_with_conditions`, `not_ready`, or `inconclusive`.\n\n**When to use it:**\n- Before shipping a new service, feature, or major refactor to production\n- When a codebase has been under rapid development and you want an honest readiness check before a launch deadline\n- When onboarding to a codebase and wanting a structured assessment of its production posture\n- When a post-incident review surfaces questions about whether the system was truly ready\n\n**What it produces:**\nA verdict with a confidence band, a prioritized list of blocker-grade and major findings, debugging leads, runtime and operational risk callouts, artifact-realism concerns (misleading completeness, stale docs, dead paths), a coverage ledger by audit domain, and a remediation order with specific follow-up recommendations.\n\n**How to get good results:**\nProvide a clear scope -- a service name, a module path, or a feature boundary. The narrower and more concrete the scope, the sharper the findings. If \"production-ready\" has a specific meaning for your team (e.g. SLA requirements, specific deployment constraints), mention it. The workflow will try to infer the production bar from repo patterns and context, but explicit criteria improve accuracy.",
+ "examples": [
+ "Audit the notifications service for production readiness before the Q3 launch",
+ "Check if the new checkout flow feature is ready to ship -- focus on runtime operability and error handling",
+ "Production readiness review of the auth module after the recent login refactor",
+ "Assess whether the data ingestion pipeline is ready for production given the volume targets in the launch brief"
+ ],
  "recommendedPreferences": {
  "recommendedAutonomy": "guided",
  "recommendedRiskPolicy": "conservative"