@exaudeus/workrail 3.8.0 → 3.8.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/workflows/exploration-workflow.json +163 -428
package/package.json
CHANGED
(version field: 3.8.0 → 3.8.1)

package/workflows/exploration-workflow.json
CHANGED
@@ -1,435 +1,170 @@
|
|
|
1
1
|
{
|
|
2
|
-
|
|
3
|
-
|
|
4
|
-
|
|
5
|
-
|
|
6
|
-
|
|
7
|
-
|
|
8
|
-
|
|
9
|
-
|
|
10
|
-
|
|
11
|
-
|
|
12
|
-
|
|
13
|
-
|
|
14
|
-
|
|
15
|
-
|
|
16
|
-
|
|
17
|
-
|
|
18
|
-
|
|
19
|
-
|
|
20
|
-
|
|
21
|
-
|
|
22
|
-
|
|
23
|
-
|
|
24
|
-
|
|
25
|
-
|
|
26
|
-
|
|
27
|
-
|
|
28
|
-
|
|
29
|
-
|
|
30
|
-
|
|
31
|
-
|
|
32
|
-
|
|
33
|
-
|
|
34
|
-
|
|
35
|
-
|
|
36
|
-
|
|
37
|
-
|
|
38
|
-
|
|
39
|
-
|
|
40
|
-
|
|
41
|
-
|
|
42
|
-
|
|
43
|
-
|
|
44
|
-
|
|
45
|
-
|
|
46
|
-
|
|
47
|
-
|
|
48
|
-
|
|
49
|
-
|
|
50
|
-
|
|
51
|
-
|
|
52
|
-
|
|
53
|
-
|
|
54
|
-
|
|
55
|
-
|
|
56
|
-
|
|
57
|
-
|
|
58
|
-
|
|
59
|
-
|
|
60
|
-
|
|
61
|
-
|
|
62
|
-
|
|
63
|
-
|
|
64
|
-
|
|
65
|
-
|
|
66
|
-
|
|
67
|
-
|
|
68
|
-
|
|
69
|
-
|
|
70
|
-
|
|
71
|
-
|
|
72
|
-
|
|
73
|
-
|
|
74
|
-
|
|
75
|
-
|
|
76
|
-
|
|
77
|
-
|
|
78
|
-
|
|
79
|
-
|
|
80
|
-
|
|
81
|
-
|
|
82
|
-
|
|
83
|
-
|
|
84
|
-
|
|
85
|
-
|
|
86
|
-
|
|
87
|
-
|
|
88
|
-
|
|
89
|
-
|
|
90
|
-
|
|
91
|
-
|
|
92
|
-
|
|
93
|
-
|
|
94
|
-
|
|
95
|
-
|
|
96
|
-
|
|
97
|
-
|
|
98
|
-
|
|
99
|
-
|
|
100
|
-
|
|
101
|
-
|
|
102
|
-
|
|
103
|
-
|
|
104
|
-
|
|
105
|
-
|
|
106
|
-
|
|
107
|
-
{
|
|
108
|
-
|
|
109
|
-
|
|
110
|
-
|
|
111
|
-
|
|
112
|
-
|
|
113
|
-
|
|
114
|
-
|
|
115
|
-
|
|
116
|
-
|
|
117
|
-
|
|
118
|
-
|
|
119
|
-
|
|
120
|
-
|
|
121
|
-
|
|
122
|
-
"Flag any complexity indicators that might warrant re-triage"
|
|
123
|
-
],
|
|
124
|
-
"requireConfirmation": false
|
|
125
|
-
},
|
|
126
|
-
{
|
|
127
|
-
"id": "phase-1-complex-investigation",
|
|
128
|
-
"runCondition": {"var": "explorationComplexity", "equals": "Complex"},
|
|
129
|
-
"title": "Phase 1: Comprehensive Investigation (Complex Path)",
|
|
130
|
-
"prompt": "**PREP**: Break down the problem into research domains and sub-questions based on clarifications.\n\n**IMPLEMENT**: \n1. Perform comprehensive research across multiple domains and source types\n2. Identify key variables, constraints, and decision factors\n3. Generate 5-8 detailed options with variations and hybrid approaches\n4. Include risk assessments and implementation considerations\n5. Map option relationships and dependencies\n6. Document research methodology and source quality\n\n**VERIFY**: Validate comprehensive coverage of problem space with expert-level depth and systematic approach.",
|
|
131
|
-
"agentRole": "You are a strategic research investigator specializing in complex problem decomposition and comprehensive analysis. Your expertise lies in navigating ambiguous problem spaces and synthesizing insights from diverse domains.",
|
|
132
|
-
"guidance": [
|
|
133
|
-
"Use advanced research techniques including cross-domain synthesis",
|
|
134
|
-
"Document assumptions, uncertainties, and research gaps",
|
|
135
|
-
"Consider both direct and indirect approaches",
|
|
136
|
-
"Maintain systematic methodology throughout"
|
|
137
|
-
],
|
|
138
|
-
"requireConfirmation": true
|
|
139
|
-
},
|
|
140
|
-
{
|
|
141
|
-
"id": "phase-2-informed-clarification",
|
|
142
|
-
"runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
|
|
143
|
-
"title": "Phase 2: Informed Requirements Clarification",
|
|
144
|
-
"prompt": "Based on your research from Phase 1, you now have domain understanding that reveals important clarifications needed. Your research may have uncovered trade-offs, constraints, or decision factors that weren't apparent from the initial exploration request.\n\n**Your goal is to ask specific, informed questions that will lead to optimal recommendations. Consider:**\n\n1. **Priority Trade-offs**: Which factors are most important - cost, speed, reliability, maintainability, etc.?\n2. **Context Constraints**: What environmental, technical, or organizational constraints should influence the choice?\n3. **Risk Tolerance**: How much risk is acceptable for potentially better outcomes?\n4. **Implementation Reality**: What resources, skills, or timeline constraints affect feasibility?\n5. **Success Metrics**: How will you measure if the chosen approach is working?\n6. **Integration Requirements**: How must the solution fit with existing systems or processes?\n7. **Future Considerations**: What long-term factors should influence the decision?\n8. **Complexity Concerns**: Based on research, should this exploration be more/less complex than initially classified?\n\n**Present 3-7 well-formulated questions that will significantly improve recommendation quality and implementability.**",
|
|
145
|
-
"agentRole": "You are a strategic consultant specializing in requirements elicitation based on domain research. Your expertise lies in translating research insights into precise questions that eliminate ambiguity and enable optimal decision-making.",
|
|
146
|
-
"guidance": [
|
|
147
|
-
"Ask questions that could only be formulated after domain research",
|
|
148
|
-
"Focus on questions that significantly impact recommendation quality",
|
|
149
|
-
"Avoid generic questions - make them specific to the domain and findings",
|
|
150
|
-
"Present questions in prioritized, clear manner",
|
|
151
|
-
"Include questions about potential complexity changes"
|
|
152
|
-
],
|
|
153
|
-
"requireConfirmation": {
|
|
154
|
-
"or": [
|
|
155
|
-
{"var": "automationLevel", "equals": "Low"},
|
|
156
|
-
{"var": "automationLevel", "equals": "Medium"}
|
|
157
|
-
]
|
|
158
|
-
}
|
|
159
|
-
},
|
|
160
|
-
{
|
|
161
|
-
"id": "phase-2b-dynamic-retriage",
|
|
162
|
-
"runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
|
|
163
|
-
"title": "Phase 2b: Dynamic Complexity Re-Triage",
|
|
164
|
-
"prompt": "Based on your domain research and requirements clarification, re-evaluate the initial complexity classification. New insights may have revealed:\n\n- Domain complexity greater than initially apparent\n- More/fewer viable options than expected\n- Clearer consensus or more conflicting expert opinions\n- Technical constraints that increase difficulty\n- Scope expansion based on clarified requirements\n- **OR established patterns/tools that simplify the exploration**\n\n**EVALUATE:**\n1. Review the original explorationComplexity classification\n2. Consider new information from research and clarifications\n3. Assess if complexity should be upgraded (e.g., Medium → Complex) OR downgraded (e.g., Complex → Medium)\n4. Provide detailed reasoning for any recommended changes\n\n**If you recommend upgrading complexity:**\n- Clearly explain what research insights led to this recommendation\n- Describe additional complexity or ambiguity discovered\n- Justify why the higher complexity path would be beneficial\n- Ask for user confirmation to change the explorationComplexity variable\n\n**If you recommend downgrading complexity:**\n- Set proposedDowngrade context variable to true\n- Clearly explain what patterns, consensus, or simplified scope led to this recommendation\n- Provide evidence of reduced ambiguity and clearer options\n- Require user confirmation unless automationLevel=High and confidence >8\n- Justify why the lower complexity path is appropriate\n\n**If current classification remains appropriate:**\n- Briefly confirm classification accuracy\n- Proceed without requesting changes\n\n**Note:** Both upgrades and downgrades are allowed with proper justification for optimal workflow efficiency.",
|
|
165
|
-
"agentRole": "You are a research complexity assessor specializing in domain exploration evaluation. Your expertise lies in identifying when initial complexity assumptions need adjustment based on research findings and domain understanding.",
|
|
166
|
-
"guidance": [
|
|
167
|
-
"This step allows both upgrading and downgrading complexity based on research insights",
|
|
168
|
-
"Only change complexity if there are clear, justifiable reasons",
|
|
169
|
-
"For downgrades, set proposedDowngrade flag and require explicit user approval unless automationLevel=High and confidence >8",
|
|
170
|
-
"Be specific about what research findings led to the reassessment",
|
|
171
|
-
"If changing complexity, workflow continues with new complexity path",
|
|
172
|
-
"Reset proposedDowngrade to false after user confirmation or rejection"
|
|
173
|
-
],
|
|
174
|
-
"requireConfirmation": {
|
|
175
|
-
"or": [
|
|
176
|
-
{"var": "automationLevel", "equals": "Low"},
|
|
177
|
-
{"var": "automationLevel", "equals": "Medium"},
|
|
178
|
-
{"and": [
|
|
179
|
-
{"var": "automationLevel", "equals": "High"},
|
|
180
|
-
{"var": "confidenceScore", "lt": 8}
|
|
181
|
-
]}
|
|
182
|
-
]
|
|
183
|
-
}
|
|
184
|
-
},
|
|
185
|
-
{
|
|
186
|
-
"id": "phase-2c-iterative-research-loop",
|
|
187
|
-
"type": "loop",
|
|
188
|
-
"title": "Phase 2c: Multi-Phase Deep Research with Saturation Detection",
|
|
189
|
-
"runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
|
|
190
|
-
"loop": {
|
|
191
|
-
"type": "for",
|
|
192
|
-
"count": 5,
|
|
193
|
-
"maxIterations": 5,
|
|
194
|
-
"iterationVar": "researchPhase"
|
|
2
|
+
"id": "exploration-workflow",
|
|
3
|
+
"name": "Exploration Workflow (Lean • Notes-First • WorkRail Executor)",
|
|
4
|
+
"version": "2.0.0",
|
|
5
|
+
"description": "Guides an agent through broad exploration work: understand the ask, gather enough context, generate materially different approaches, evaluate them, challenge the front-runner, and deliver a recommendation with bounded uncertainty.",
|
|
6
|
+
"recommendedPreferences": {
|
|
7
|
+
"recommendedAutonomy": "guided",
|
|
8
|
+
"recommendedRiskPolicy": "conservative"
|
|
9
|
+
},
|
|
10
|
+
"preconditions": [
|
|
11
|
+
"User has a question, problem, decision, or opportunity that requires exploration.",
|
|
12
|
+
"Agent has access to the relevant tools for the domain being explored.",
|
|
13
|
+
"A recommendation can be judged against explicit constraints, success criteria, or decision factors."
|
|
14
|
+
],
|
|
15
|
+
"clarificationPrompts": [
|
|
16
|
+
"What are you trying to decide, understand, or compare?",
|
|
17
|
+
"What constraints, preferences, or decision criteria already matter?",
|
|
18
|
+
"What would make this exploration useful when we're done?"
|
|
19
|
+
],
|
|
20
|
+
"metaGuidance": [
|
|
21
|
+
"DEFAULT BEHAVIOR: self-execute with tools. Only ask the user for missing external facts, permissions, or decision preferences you cannot resolve yourself.",
|
|
22
|
+
"V2 DURABILITY: use output.notesMarkdown and explicit context variables as the durable exploration state. Do NOT rely on EXPLORATION_CONTEXT.md or any markdown checkpoint file as required memory.",
|
|
23
|
+
"ARTIFACT STRATEGY: markdown artifacts are optional human-facing outputs only. If created, they must be derived from notes/context state rather than treated as the source of truth.",
|
|
24
|
+
"MAIN AGENT OWNS EXPLORATION: the main agent owns synthesis, comparison, ranking, and the final recommendation.",
|
|
25
|
+
"SUBAGENT MODEL: use the WorkRail Executor only. Delegate bounded cognition, not ownership.",
|
|
26
|
+
"PARALLELISM: parallelize independent research or challenge passes; serialize synthesis, scoring, and final recommendation.",
|
|
27
|
+
"DOMAIN FLEXIBILITY: adapt tools and vocabulary to the actual exploration domain. Technical explorations may inspect code and architecture. Business or creative explorations may lean more on external sources, user constraints, and comparative reasoning.",
|
|
28
|
+
"ANTI-PREMATURE-CONVERGENCE: generate materially different approaches before committing. If all candidates cluster in the same pattern family, force at least one more contrasting approach.",
|
|
29
|
+
"CHALLENGE BEFORE RECOMMENDATION: the leading approach must survive an explicit challenge pass. If it does not, revise the shortlist or recommendation deliberately.",
|
|
30
|
+
"TRIGGERS: WorkRail can only react to explicit outputs. Use fields like `contextUnknownCount`, `retriageNeeded`, `alternativesConsideredCount`, `hasStrongAlternative`, `comparisonGapCount`, and `recommendationConfidenceBand`."
|
|
31
|
+
],
|
|
32
|
+
"steps": [
|
|
33
|
+
{
|
|
34
|
+
"id": "phase-0-understand-and-classify",
|
|
35
|
+
"title": "Phase 0: Understand and Classify",
|
|
36
|
+
"prompt": "Understand the exploration before you start researching.\n\nCapture:\n- `explorationSummary`: concise statement of the question, decision, or problem\n- `explorationDomain`: `technical`, `business`, `creative`, or `mixed`\n- `taskComplexity`: Small / Medium / Large\n- `riskLevel`: Low / Medium / High\n- `rigorMode`: QUICK / STANDARD / THOROUGH\n- `automationLevel`: High / Medium / Low\n- `successCriteria`: what will make the exploration useful\n- `constraints`: hard constraints and strong preferences already known\n- `openQuestions`: only real questions you cannot answer with tools\n\nDecision guidance:\n- QUICK: narrow question, clear success criteria, low ambiguity, few viable approaches\n- STANDARD: moderate ambiguity, multiple viable approaches, or meaningful trade-offs\n- THOROUGH: broad option space, high ambiguity, high-stakes decision, or materially conflicting evidence likely\n\nIf critical inputs are missing, ask only for the minimum needed to explore well. Do not ask for information you can discover yourself.",
|
|
37
|
+
"requireConfirmation": {
|
|
38
|
+
"or": [
|
|
39
|
+
{ "var": "taskComplexity", "equals": "Large" },
|
|
40
|
+
{ "var": "riskLevel", "equals": "High" },
|
|
41
|
+
{ "var": "automationLevel", "equals": "Low" }
|
|
42
|
+
]
|
|
43
|
+
}
|
|
44
|
+
},
|
|
45
|
+
{
|
|
46
|
+
"id": "phase-1-context-and-research-posture",
|
|
47
|
+
"title": "Phase 1: Context and Research Posture",
|
|
48
|
+
"prompt": "Build the minimum complete context needed to compare approaches well.\n\nDo the main context gathering yourself using the tools that fit the domain.\n\nDeliverable:\n- key facts, constraints, and unknowns that materially affect the decision\n- relevant sources, files, systems, or examples\n- the evaluation criteria that should drive comparison\n- the initial option-space sketch\n\nSet context variables:\n- `contextSummary`\n- `candidateSources`\n- `candidateFiles`\n- `evaluationCriteria`\n- `contextUnknownCount`\n- `optionSpaceEstimate`\n- `retriageNeeded`\n\nComputation rules:\n- `contextUnknownCount` = number of unresolved unknowns that still materially affect recommendation quality\n- `optionSpaceEstimate` = rough count or range of materially distinct approach families currently in play\n- set `retriageNeeded = true` if the real ambiguity, risk, or breadth is larger than Phase 0 assumed",
|
|
49
|
+
"promptFragments": [
|
|
50
|
+
{
|
|
51
|
+
"id": "phase-1-quick",
|
|
52
|
+
"when": { "var": "rigorMode", "equals": "QUICK" },
|
|
53
|
+
"text": "Keep this tight. Gather only what you need to compare a small number of plausible approaches."
|
|
54
|
+
},
|
|
55
|
+
{
|
|
56
|
+
"id": "phase-1-standard",
|
|
57
|
+
"when": { "var": "rigorMode", "equals": "STANDARD" },
|
|
58
|
+
"text": "If `contextUnknownCount > 0` and delegation is available, run `routine-context-gathering` twice in parallel: once with `focus=COMPLETENESS` and once with `focus=DEPTH`. Synthesize both outputs before you leave this step."
|
|
59
|
+
},
|
|
60
|
+
{
|
|
61
|
+
"id": "phase-1-thorough",
|
|
62
|
+
"when": { "var": "rigorMode", "equals": "THOROUGH" },
|
|
63
|
+
"text": "If delegation is available, run `routine-context-gathering` twice in parallel: once with `focus=COMPLETENESS` and once with `focus=DEPTH`. Synthesize both outputs before you leave this step."
|
|
64
|
+
}
|
|
65
|
+
],
|
|
66
|
+
"requireConfirmation": false
|
|
67
|
+
},
|
|
68
|
+
{
|
|
69
|
+
"id": "phase-1b-retriage-after-context",
|
|
70
|
+
"title": "Phase 1b: Re-Triage After Context",
|
|
71
|
+
"runCondition": {
|
|
72
|
+
"var": "retriageNeeded",
|
|
73
|
+
"equals": true
|
|
74
|
+
},
|
|
75
|
+
"prompt": "Reassess the exploration now that the real context is known.\n\nReview:\n- `contextUnknownCount`\n- `optionSpaceEstimate`\n- the actual breadth of systems, sources, or decision factors involved\n- whether the decision now looks riskier or more ambiguous than expected\n\nDo:\n- confirm or adjust `taskComplexity`\n- confirm or adjust `riskLevel`\n- confirm or adjust `rigorMode`\n- set `retriageChanged`\n\nRule:\n- upgrade rigor if the real exploration surface is broader or riskier than expected\n- downgrade only if the task is genuinely simpler than it first appeared",
|
|
76
|
+
"requireConfirmation": {
|
|
77
|
+
"or": [
|
|
78
|
+
{ "var": "retriageChanged", "equals": true },
|
|
79
|
+
{ "var": "automationLevel", "equals": "Low" }
|
|
80
|
+
]
|
|
81
|
+
}
|
|
82
|
+
},
|
|
83
|
+
{
|
|
84
|
+
"id": "phase-2-generate-and-shortlist-approaches",
|
|
85
|
+
"title": "Phase 2: Generate and Shortlist Approaches",
|
|
86
|
+
"prompt": "Generate materially different approaches before you decide what deserves deeper comparison.\n\nDo:\n- generate candidate approaches that differ in shape, not just wording\n- include the obvious / mainstream path\n- include a more conservative or lower-risk path when relevant\n- include a more ambitious, higher-upside, or non-obvious path when relevant\n- merge duplicates and label approach families\n- identify whether a strong alternative still exists after ranking the shortlist\n\nSet context variables:\n- `candidateApproaches`\n- `approachFamilies`\n- `alternativesConsideredCount`\n- `hasStrongAlternative`\n- `currentLeadingApproach`\n\nComputation rules:\n- `alternativesConsideredCount` = number of materially distinct viable approaches after merging duplicates\n- `hasStrongAlternative = true` when a non-leading approach still looks competitive on the current evidence\n\nRules:\n- QUICK: self-generate at least 3 materially different approaches\n- STANDARD: self-generate at least 3, and if the option space still clusters too tightly, run `routine-ideation` once to force contrast\n- THOROUGH: if delegation is available, run 2 or 3 bounded ideation passes from different lenses, then synthesize the shortlist yourself\n- if every candidate lands in the same pattern family, this phase is not done yet",
|
|
87
|
+
"requireConfirmation": false
|
|
88
|
+
},
|
|
89
|
+
{
|
|
90
|
+
"id": "phase-3-evaluate-and-rank",
|
|
91
|
+
"type": "loop",
|
|
92
|
+
"title": "Phase 3: Evaluate, Challenge, and Refine",
|
|
93
|
+
"runCondition": {
|
|
94
|
+
"var": "taskComplexity",
|
|
95
|
+
"not_equals": "Small"
|
|
96
|
+
},
|
|
97
|
+
"loop": {
|
|
98
|
+
"type": "while",
|
|
99
|
+
"conditionSource": {
|
|
100
|
+
"kind": "artifact_contract",
|
|
101
|
+
"contractRef": "wr.contracts.loop_control",
|
|
102
|
+
"loopId": "exploration_review_loop"
|
|
103
|
+
},
|
|
104
|
+
"maxIterations": 2
|
|
105
|
+
},
|
|
106
|
+
"body": [
|
|
107
|
+
{
|
|
108
|
+
"id": "phase-3a-compare-approaches",
|
|
109
|
+
"title": "Compare Approaches",
|
|
110
|
+
"prompt": "Compare the shortlisted approaches against the criteria that actually matter.\n\nDo:\n- score or rank the shortlisted approaches against `evaluationCriteria`\n- make the trade-offs explicit instead of hiding them inside a summary\n- identify missing comparison evidence or unresolved assumptions\n- choose a leading approach and runner-up for the challenge step\n\nSet context variables:\n- `selectedApproach`\n- `runnerUpApproach`\n- `evaluationSummary`\n- `keyTradeoffs`\n- `comparisonGapCount`\n\nRule:\n- if the ranking depends on an assumption you have not tested or cannot justify, count it in `comparisonGapCount` and say so plainly.",
|
|
111
|
+
"requireConfirmation": false
|
|
112
|
+
},
|
|
113
|
+
{
|
|
114
|
+
"id": "phase-3b-challenge-recommendation",
|
|
115
|
+
"title": "Challenge the Front-Runner",
|
|
116
|
+
"prompt": "Challenge the current front-runner before you turn it into a recommendation.\n\nDo:\n- identify the strongest case against `selectedApproach`\n- test whether `runnerUpApproach` or another alternative actually deserves to win instead\n- call out hidden assumptions, failure modes, and context changes that would flip the choice\n- decide whether the challenge changed the recommendation or just bounded its uncertainty\n\nSet context variables:\n- `challengeFindings`\n- `challengeChangedRecommendation`\n- `criticalUncertainties`\n- `recommendationConfidenceBand`\n\nConfidence rules:\n- High = the leading approach survives challenge, no material comparison gaps remain, and uncertainty is bounded\n- Medium = the recommendation is likely right but one meaningful uncertainty remains\n- Low = the challenge exposed unresolved gaps, close competitors, or major assumption risk",
|
|
117
|
+
"promptFragments": [
|
|
118
|
+
{
|
|
119
|
+
"id": "phase-3b-quick",
|
|
120
|
+
"when": { "var": "rigorMode", "equals": "QUICK" },
|
|
121
|
+
"text": "Do the challenge yourself unless the decision still feels unexpectedly fragile."
|
|
195
122
|
},
|
|
196
|
-
|
|
197
|
-
|
|
198
|
-
|
|
199
|
-
|
|
200
|
-
"runCondition": { "var": "researchPhase", "equals": 1 },
|
|
201
|
-
"prompt": "**OBJECTIVE**: Cast a wide net to map the solution landscape, identify key themes, and find conflicting viewpoints.",
|
|
202
|
-
"agentRole": "Systematic Researcher: Broad Scan Specialist",
|
|
203
|
-
"guidance": [
|
|
204
|
-
"Use multiple search strategies (e.g., 'how to [task]', 'alternatives to [tool]').",
|
|
205
|
-
"Identify 3-5 high-level solution categories.",
|
|
206
|
-
"Note sources that directly conflict with each other.",
|
|
207
|
-
"ACTIONS: Update context.evidenceLog[], context.broadScanThemes[], context.contradictions[]"
|
|
208
|
-
]
|
|
209
|
-
},
|
|
210
|
-
{
|
|
211
|
-
"id": "research-phase-2-deep-dive",
|
|
212
|
-
"title": "Research Phase 2/5: Deep Dive",
|
|
213
|
-
"runCondition": { "var": "researchPhase", "equals": 2 },
|
|
214
|
-
"prompt": "**OBJECTIVE**: Focus on the most promising themes from the broad scan. Investigate technical details, find implementation examples, and assess feasibility.",
|
|
215
|
-
"agentRole": "Systematic Researcher: Deep Dive Analyst",
|
|
216
|
-
"guidance": [
|
|
217
|
-
"Focus on the themes in context.broadScanThemes[].",
|
|
218
|
-
"Find specific, real-world implementation examples or case studies.",
|
|
219
|
-
"Assess complexity, dependencies, and requirements for each.",
|
|
220
|
-
"ACTIONS: Update context.evidenceLog[], context.deepDiveFindings[]"
|
|
221
|
-
]
|
|
222
|
-
},
|
|
223
|
-
{
|
|
224
|
-
"id": "research-phase-3-contrarian",
|
|
225
|
-
"title": "Research Phase 3/5: Contrarian Research",
|
|
226
|
-
"runCondition": { "var": "researchPhase", "equals": 3 },
|
|
227
|
-
"prompt": "**OBJECTIVE**: Actively seek out opposing viewpoints, failure cases, and critiques of the promising solutions. The goal is to challenge assumptions.",
|
|
228
|
-
"agentRole": "Systematic Researcher: Devil's Advocate",
|
|
229
|
-
"guidance": [
|
|
230
|
-
"Search for '[solution] problems', '[approach] failures', 'why not use [tool]'.",
|
|
231
|
-
"Identify hidden assumptions in the mainstream approaches.",
|
|
232
|
-
"Look for entirely different paradigms that were missed.",
|
|
233
|
-
"ACTIONS: Update context.evidenceLog[], context.contrarianEvidence[]"
|
|
234
|
-
]
|
|
235
|
-
},
|
|
236
|
-
{
|
|
237
|
-
"id": "research-phase-4-synthesis",
|
|
238
|
-
"title": "Research Phase 4/5: Evidence Synthesis",
|
|
239
|
-
"runCondition": { "var": "researchPhase", "equals": 4 },
|
|
240
|
-
"prompt": "**OBJECTIVE**: Consolidate all findings. Resolve contradictions, identify patterns, and build a coherent narrative of the solution landscape.",
|
|
241
|
-
"agentRole": "Systematic Researcher: Synthesizer",
|
|
242
|
-
"guidance": [
|
|
243
|
-
"Review evidence from all previous phases.",
|
|
244
|
-
"Where sources conflict, try to understand the reason for the disagreement.",
|
|
245
|
-
"Build a framework or matrix to compare the approaches.",
|
|
246
|
-
"ACTIONS: Update context.synthesisFramework, context.evidenceGaps[]"
|
|
247
|
-
]
|
|
248
|
-
},
|
|
249
|
-
{
|
|
250
|
-
"id": "research-phase-5-gap-filling",
|
|
251
|
-
"title": "Research Phase 5/5: Gap Filling & Closure",
|
|
252
|
-
"runCondition": { "var": "researchPhase", "equals": 5 },
|
|
253
|
-
"prompt": "**OBJECTIVE**: Address the specific, critical unknowns identified during synthesis. Verify key assumptions and prepare for solution generation.",
|
|
254
|
-
"agentRole": "Systematic Researcher: Finisher",
|
|
255
|
-
"guidance": [
|
|
256
|
-
"Focus only on the critical gaps listed in context.evidenceGaps[].",
|
|
257
|
-
"Perform targeted searches to answer these specific questions.",
|
|
258
|
-
"This is the final research step. The goal is to be 'done', not perfect.",
|
|
259
|
-
"ACTIONS: Update context.evidenceLog[], set context.researchComplete = true"
|
|
260
|
-
]
|
|
261
|
-
},
|
|
262
|
-
{
|
|
263
|
-
"id": "research-phase-validation",
|
|
264
|
-
"title": "Validation: Research Quality Check",
|
|
265
|
-
"prompt": "**OBJECTIVE**: After each research phase, perform a quick quality check.",
|
|
266
|
-
"agentRole": "Quality Analyst",
|
|
267
|
-
"guidance": [
|
|
268
|
-
"EVIDENCE CHECK: Have we gathered at least 3 new sources in this phase? (unless it was gap-filling).",
|
|
269
|
-
"QUALITY CHECK: Is there at least one 'High' or 'Medium' grade source?",
|
|
270
|
-
"SATURATION CHECK: Use checkSaturation() to assess if we are still gathering novel information. If not, we can consider exiting the loop early by setting context.researchComplete = true.",
|
|
271
|
-
"ACTIONS: Update context.qualityMetrics[]"
|
|
272
|
-
]
|
|
273
|
-
}
|
|
274
|
-
]
|
|
275
|
-
},
|
|
276
|
-
{
|
|
277
|
-
"id": "phase-3-context-documentation",
|
|
278
|
-
"runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
|
|
279
|
-
"title": "Phase 3: Create Context Documentation",
|
|
280
|
-
"prompt": "Create a comprehensive context documentation file (`EXPLORATION_CONTEXT.md`) that captures all critical information from the workflow so far. This document enables seamless handoffs between chat sessions when context limits are reached.\n\n**For automationLevel=High, generate summary-only version (limit 1000 words); otherwise, full documentation (limit 2000 words).**\n\n**Your `EXPLORATION_CONTEXT.md` must include:**\n\n## 1. ORIGINAL EXPLORATION CONTEXT\n- Original question/problem and requirements\n- Complexity classification and reasoning\n- Any re-triage decisions and rationale\n- Automation level and time constraints\n\n## 2. DOMAIN RESEARCH SUMMARY\n- Key findings from domain analysis\n- Viable options identified and their characteristics\n- Critical decision factors and trade-offs discovered\n- Research methodology and source quality assessment\n\n## 3. CLARIFICATIONS AND DECISIONS\n- Questions asked and answers received\n- Ambiguities resolved and how\n- Priority weightings and constraints clarified\n- Risk tolerance and success criteria defined\n\n## 4. CURRENT STATUS\n- Research completeness assessment\n- Option space coverage validation\n- Key insights and patterns identified\n- Remaining unknowns or research gaps\n\n## 5. WORKFLOW PROGRESS TRACKING\n- ✅ Completed phases (0, 1, 2, 2b, 3)\n- 🔄 Current phase: Option Evaluation (Phase 4)\n- ⏳ Remaining phases: 4, 4b, 5, 6\n- 📋 Context variables set (explorationComplexity, automationLevel, etc.)\n\n## 6. HANDOFF INSTRUCTIONS\n- Key findings to highlight when resuming\n- Critical decisions that must not be forgotten\n- Methodology to continue if context is lost\n\n**Format as scannable document using bullet points for easy agent onboarding.**",
|
|
281
|
-
"agentRole": "You are a research documentation specialist with expertise in creating comprehensive exploration handoff documents. Your role is to capture all critical research context enabling seamless continuity across team members or chat sessions.",
|
|
282
|
-
"guidance": [
|
|
283
|
-
"This step is automatically skipped for Simple explorations",
|
|
284
|
-
"Create document allowing completely new agent to continue seamlessly",
|
|
285
|
-
"Include specific findings, options, and decisions discovered",
|
|
286
|
-
"Reference all key research insights from previous phases",
|
|
287
|
-
"Make progress tracking very clear for workflow continuation",
|
|
288
|
-
"Use bullet points for scannability; limit based on automation level"
|
|
289
|
-
],
|
|
290
|
-
"requireConfirmation": false
|
|
291
|
-
},
|
|
292
|
-
{
|
|
293
|
-
"id": "phase-3a-prepare-solutions",
|
|
294
|
-
"title": "Phase 3a: Prepare Solution Generation",
|
|
295
|
-
"runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
|
|
296
|
-
"prompt": "**PREPARE SOLUTION GENERATION**\n\nBased on your research findings, prepare for systematic solution generation:\n\n**SETUP TASKS:**\n1. Review research synthesis from Phase 2c\n2. Identify top solution categories/approaches\n3. Create solution generation framework\n\n**CREATE SOLUTION APPROACHES ARRAY:**\nSet context.solutionApproaches with these 5 types:\n```json\n[\n {\"type\": \"Quick/Simple\", \"focus\": \"Minimal time, proven approaches, immediate value\"},\n {\"type\": \"Thorough/Proven\", \"focus\": \"Best practices, comprehensive, long-term sustainability\"},\n {\"type\": \"Creative/Novel\", \"focus\": \"Innovation, emerging tech, competitive advantage\"},\n {\"type\": \"Optimal/Balanced\", \"focus\": \"Best trade-offs, practical yet forward-thinking\"},\n {\"type\": \"Contrarian/Alternative\", \"focus\": \"Challenge assumptions, overlooked approaches\"}\n]\n```\n\n**Also set:**\n- context.solutionCriteria[] from research findings\n- context.evaluationFramework for comparing solutions\n- context.userConstraints from Phase 0a\n\n**This enables the next loop to generate each solution type systematically.**",
|
|
297
|
-
"agentRole": "You are preparing the solution generation phase by creating a structured framework based on research findings.",
|
|
298
|
-
"guidance": [
|
|
299
|
-
"This step makes the loop cleaner by preparing the array",
|
|
300
|
-
"Each solution type should address different user needs",
|
|
301
|
-
"Framework should incorporate research insights"
|
|
302
|
-
],
|
|
303
|
-
"requireConfirmation": false
|
|
304
|
-
},
|
|
305
|
-
{
|
|
306
|
-
"id": "phase-3b-solution-generation-loop",
|
|
307
|
-
"type": "loop",
|
|
308
|
-
"title": "Phase 3b: Diverse Solution Portfolio Generation",
|
|
309
|
-
"runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
|
|
310
|
-
"loop": {
|
|
311
|
-
"type": "forEach",
|
|
312
|
-
"items": "solutionApproaches",
|
|
313
|
-
"itemVar": "approach",
|
|
314
|
-
"indexVar": "solutionIndex",
|
|
315
|
-
"maxIterations": 5
|
|
123
|
+
{
|
|
124
|
+
"id": "phase-3b-standard",
|
|
125
|
+
"when": { "var": "rigorMode", "equals": "STANDARD" },
|
|
126
|
+
"text": "If the choice is close or the downside risk matters, run `routine-hypothesis-challenge` before finalizing the confidence band."
|
|
316
127
|
},
|
|
-
-
-
-
-     "prompt": "**GENERATE SOLUTION: {{approach.type}}**\n\n**Focus for this solution type**: {{approach.focus}}\n\n**DIVERGENT THINKING MODE - NO JUDGMENT**\nYou are in pure generation mode. Do NOT evaluate, compare, or judge this solution against others. Focus solely on creating a complete solution that embodies the {{approach.type}} approach.\n\n**SOLUTION REQUIREMENTS:**\n1. Generate a solution that embodies the {{approach.type}} approach\n2. Base it on evidence from all research phases\n3. Make it genuinely different from other solutions (not just variations)\n4. DEFER ALL JUDGMENT - no scoring, ranking, or comparison\n\n**INCORPORATE USER CONTEXT:**\n- Apply all relevant rules from context.userRules[]\n- Respect constraints from context.constraints[]\n- Align with organizational standards and preferences\n- Consider environment-specific factors\n\n**SOLUTION STRUCTURE:**\n1. **Core Approach**: Clear description (what makes this {{approach.type}}?)\n2. **Implementation Path**: 3-5 key steps to execute\n3. **Evidence Base**: Which research findings support this approach?\n4. **Key Features**: What distinguishes this approach?\n5. **Resource Requirements**: What's needed to implement?\n6. **Success Indicators**: Observable outcomes when working\n\n**NO EVALUATION ELEMENTS:**\n- Do NOT include confidence scores\n- Do NOT compare to other solutions\n- Do NOT rank or judge quality\n- Simply generate and document\n\n**ACTIONS:**\n- generateSolution({{solutionIndex}}, '{{approach.type}}')\n- Store complete solution in context.solutions[{{solutionIndex}}]\n- Track which evidence supports this approach",
-     "agentRole": "You are in DIVERGENT THINKING mode, generating the {{approach.type}} solution. Focus on creation without judgment. Draw from research to build a complete solution.",
-     "guidance": [
-       "DIVERGENT PHASE: Generate without evaluating or comparing",
-       "Each solution should be genuinely different, not just variations",
-       "Ground each solution in evidence from research phases",
-       "Align with user rules and preferences from Phase 0a",
-       "Include enough detail to be actionable",
-       "Reference specific sources from evidenceLog",
-       "If a solution conflicts with user rules, note it factually without judgment",
-       "DEFER ALL EVALUATION until Phase 4"
-     ],
-     "hasValidation": true,
-     "validationCriteria": {
-       "and": [
-         {
-           "type": "contains",
-           "value": "Evidence:",
-           "message": "Must include evidence section"
-         },
-         {
-           "type": "contains",
-           "value": "Key Features:",
-           "message": "Must describe distinguishing features"
-         }
-       ]
-     },
-     "requireConfirmation": false
-   }
- ]
- },
- {
-   "id": "phase-4-option-evaluation",
-   "runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
-   "title": "Phase 4: CONVERGENT THINKING - Option Evaluation & Ranking",
-   "prompt": "**TRANSITION TO CONVERGENT THINKING MODE**\n\nThe divergent generation phase is complete. Now shift to analytical, convergent thinking to systematically evaluate all solutions.\n\n**CONVERGENT THINKING PRINCIPLES:**\n- This is NOW the time for judgment and comparison\n- Apply critical analysis to all generated solutions\n- Use evidence-based evaluation criteria\n- Be rigorous and systematic\n\n**PREP**: Define evaluation criteria based on clarified requirements, constraints, and priorities.\n\n**IMPLEMENT**: \n1. Create weighted scoring matrix with 4-6 evaluation criteria based on clarifications\n2. Score each option quantitatively (1-10 scale) with detailed rationale\n3. Calculate weighted scores and rank options\n4. Perform sensitivity analysis on key criteria weights\n5. Identify decision breakpoints and scenario dependencies\n6. Document evaluation methodology and assumptions\n\n**VERIFY**: Ensure evaluation is objective, comprehensive, and incorporates all clarified priorities.",
-   "agentRole": "You are an objective decision analyst expert in multi-criteria evaluation and quantitative assessment. Your expertise lies in translating qualitative factors into structured, defensible evaluations.",
-   "guidance": [
-     "Use at least 4-6 evaluation criteria based on clarifications",
-     "Incorporate user's stated priorities and constraints",
-     "Provide quantitative justification for all scores",
-     "Consider both direct and indirect factors",
-     "Include uncertainty and sensitivity analysis"
-   ],
-   "validationCriteria": [
-     {
-       "type": "contains",
-       "value": "Scoring Matrix",
-       "message": "Must include a quantitative scoring matrix for options"
-     },
-     {
-       "type": "contains",
-       "value": "Weighted Score",
-       "message": "Must include weighted scoring calculations"
-     }
-   ],
-   "hasValidation": true,
-   "requireConfirmation": true
- },
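The removed `phase-4-option-evaluation` prompt asks the agent to build a weighted scoring matrix: score each option 1-10 per criterion, multiply by criterion weights, and rank. As an illustration of that arithmetic only, here is a minimal sketch; the criteria names, weights, and scores are invented for the example and are not part of the workflow.

```python
# Minimal sketch of the weighted-scoring step described in the Phase 4 prompt.
# Criteria, weights, and scores below are illustrative assumptions, not values
# taken from the workflow itself.

criteria = {"feasibility": 0.4, "maintainability": 0.35, "cost": 0.25}

# Each option is scored on a 1-10 scale per criterion, as the prompt requires.
scores = {
    "option-a": {"feasibility": 8, "maintainability": 6, "cost": 7},
    "option-b": {"feasibility": 6, "maintainability": 9, "cost": 5},
}

def weighted_score(option_scores, weights):
    """Sum of (criterion score * criterion weight) across all criteria."""
    return sum(option_scores[c] * w for c, w in weights.items())

# Rank options by weighted score, highest first.
ranked = sorted(scores, key=lambda o: weighted_score(scores[o], criteria), reverse=True)
for option in ranked:
    print(option, round(weighted_score(scores[option], criteria), 2))
```

A sensitivity analysis, which the prompt also calls for, would repeat the ranking with perturbed weights and note where the order flips.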
- {
-   "id": "phase-4b-devil-advocate-review",
-   "runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
-   "title": "Phase 4b: Devil's Advocate Evaluation Review",
-   "prompt": "Perform a rigorous 'devil's advocate' review of your solutions and evaluation. This is a mandatory adversarial self-challenge to prevent overconfidence and blind spots.\n\n**STRUCTURED ADVERSARIAL ANALYSIS:**\n\n1. **Evidence Challenge**: For each solution's top 3 claims:\n - Is the evidence truly supporting this claim?\n - Are there contradicting sources we dismissed?\n - What evidence grade did we assign vs. what it deserves?\n\n2. **Hidden Failure Modes**: For the top-ranked solution:\n - What could cause catastrophic failure?\n - What assumptions could be completely wrong?\n - What context changes would invalidate this approach?\n\n3. **Overlooked Alternatives**:\n - What hybrid approaches could combine solution strengths?\n - What completely different paradigm did we miss?\n - Are we solving the right problem?\n\n4. **Bias Detection**:\n - Did we favor familiar over novel?\n - Did recent sources overshadow established wisdom?\n - Did domain bias affect our evaluation?\n\n5. **Confidence Calibration**:\n - Where are we overconfident?\n - What unknowns are we treating as knowns?\n - calculateConfidence() with penalty for identified weaknesses\n\n**OUTPUT REQUIREMENTS:**\n- Identify at least 3 significant concerns\n- Propose specific remedies for each\n- Re-calculate confidence scores\n- Set context.confidenceScore (1-10) for overall analysis quality\n- Set context.criticalIssues[] with must-address items\n\ntriggerDeepDive() if confidence drops below 0.7",
-   "agentRole": "You are a skeptical but fair senior research analyst with 15+ years of experience in strategic decision analysis. Your role is to identify potential blind spots, biases, and overlooked factors in evaluation methodologies. You excel at constructive criticism that strengthens analysis rather than destroys it.",
-   "guidance": [
-     "This is critical thinking step to find weaknesses in your own analysis",
-     "Not all identified 'risks' may be realistic - be balanced",
-     "After this review, user can ask for revised evaluation before final recommendation",
-     "This step is skipped for Simple explorations",
-     "CRITICAL: Set confidenceScore variable (1-10) in your response",
-     "For automationLevel=High with confidenceScore >8, auto-approve if no critical issues"
-   ],
-   "requireConfirmation": {
-     "or": [
-       {"var": "automationLevel", "equals": "Low"},
-       {"var": "automationLevel", "equals": "Medium"},
-       {"and": [
-         {"var": "automationLevel", "equals": "High"},
-         {"var": "confidenceScore", "lt": 8}
-       ]}
-     ]
+       {
+         "id": "phase-3b-thorough",
+         "when": { "var": "rigorMode", "equals": "THOROUGH" },
+         "text": "If delegation is available, run `routine-hypothesis-challenge` before finalizing the confidence band. For technical explorations where feasibility or runtime behavior could flip the choice, also run `routine-execution-simulation`."
        }
+     ],
+     "requireConfirmation": false
    },
    {
-
-
-
-
-
-
-
-     "Address concerns raised in devil's advocate review",
-     "Provide multiple scenarios and contingency plans",
-     "Ensure traceability back to research and evaluation"
-   ],
-   "requireConfirmation": false
- },
- {
-   "id": "phase-6-final-documentation",
-   "runCondition": {"var": "explorationComplexity", "not_equals": "Simple"},
-   "title": "Phase 6: Final Documentation & Knowledge Transfer",
-   "prompt": "Create final comprehensive documentation by updating `EXPLORATION_CONTEXT.md` with complete exploration results and knowledge transfer information.\n\n**Add these final sections:**\n\n## 7. FINAL EVALUATION RESULTS\n- Complete scoring matrix and methodology\n- Top-ranked options with detailed comparison\n- Devil's advocate review insights and resolution\n- Confidence assessment and reasoning\n\n## 8. FINAL RECOMMENDATION\n- Primary recommendation and implementation roadmap\n- Alternative options and decision criteria for choosing them\n- Risk mitigation strategies and success metrics\n- Immediate next steps and milestones\n\n## 9. EXPLORATION COMPLETION STATUS\n- ✅ Research phases completed\n- ✅ Options identified and evaluated\n- ✅ Recommendations validated through devil's advocate review\n- 📁 Deliverables created (evaluation matrix, implementation guide)\n- 📊 Quality metrics (confidence score, source count, option coverage)\n- 📋 Limitations and assumptions documented\n\n## 10. KNOWLEDGE TRANSFER SUMMARY\n- Key insights for future similar explorations\n- Methodology lessons learned\n- Domain expertise gained\n- Recommended follow-up research areas\n- Reusable evaluation frameworks\n\nConclude with summary of exploration quality and any recommended follow-up work or monitoring.",
-   "agentRole": "You are a knowledge management specialist responsible for final project documentation and organizational learning. Your expertise lies in creating comprehensive exploration archives that enable future reference, replication, and knowledge transfer.",
-   "guidance": [
-     "This is the final knowledge capture for organizational learning",
-     "Include specific details enabling future replication",
-     "Document lessons learned and methodology insights",
-     "Ensure all promised deliverables are documented",
-     "Include quantitative quality metrics and assessments"
-   ],
-   "requireConfirmation": true
+     "id": "phase-3c-loop-decision",
+     "title": "Evaluation Loop Decision",
+     "prompt": "Decide whether the comparison needs another pass.\n\nDecision rules:\n- if `challengeChangedRecommendation = true` -> continue\n- else if `comparisonGapCount > 0` and the gaps materially affect ranking -> continue\n- else if `recommendationConfidenceBand = Low` and a better answer is still realistically reachable -> continue\n- else -> stop\n\nIf you stop because the remaining uncertainty is bounded, say that explicitly.\nIf you've hit the iteration limit, stop and record what still matters.\n\nEmit the required loop-control artifact in this shape (`decision` must be `continue` or `stop`):\n```json\n{\n \"artifacts\": [{\n \"kind\": \"wr.loop_control\",\n \"decision\": \"continue or stop\"\n }]\n}\n```",
+     "requireConfirmation": false,
+     "outputContract": {
+       "contractRef": "wr.contracts.loop_control"
+     }
    }
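The `phase-3c-loop-decision` prompt added in this version states its continue/stop rules as an ordered if/else chain. A minimal sketch of that chain, assuming the context is a plain dict and modeling the judgment calls ("gaps materially affect ranking", "a better answer is still reachable") as booleans; the iteration cap here is an assumed placeholder, not a value from the workflow:

```python
# Sketch of the phase-3c decision rules as straight-line logic.
# Context keys mirror the workflow variables; "gapsAffectRanking" and
# "betterAnswerReachable" stand in for judgments the agent makes in prose.

def loop_decision(ctx, iteration, max_iterations=5):  # cap is an assumption
    if iteration >= max_iterations:
        return "stop"  # iteration limit hit: stop and record what still matters
    if ctx.get("challengeChangedRecommendation"):
        return "continue"
    if ctx.get("comparisonGapCount", 0) > 0 and ctx.get("gapsAffectRanking"):
        return "continue"
    if ctx.get("recommendationConfidenceBand") == "Low" and ctx.get("betterAnswerReachable"):
        return "continue"
    return "stop"

# Wrap the decision in the required wr.loop_control artifact shape.
decision = loop_decision({"comparisonGapCount": 2, "gapsAffectRanking": False}, 1)
artifact = {"artifacts": [{"kind": "wr.loop_control", "decision": decision}]}
print(artifact["artifacts"][0]["decision"])  # gaps exist but don't affect ranking
```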
-
- }
+     ]
+   },
+   {
+     "id": "phase-3-small-task-comparison",
+     "title": "Phase 3: Compare and Challenge (Small Fast Path)",
+     "runCondition": {
+       "var": "taskComplexity",
+       "equals": "Small"
+     },
+     "prompt": "For Small explorations:\n- compare the strongest few approaches directly\n- make the key trade-offs explicit\n- challenge the front-runner yourself\n- set `selectedApproach`, `runnerUpApproach`, `keyTradeoffs`, `criticalUncertainties`, and `recommendationConfidenceBand`\n\nDo not create extra ceremony if the question is small and the uncertainty is already bounded.",
+     "requireConfirmation": false
+   },
+   {
+     "id": "phase-4-final-recommendation",
+     "title": "Phase 4: Final Recommendation and Handoff",
+     "prompt": "Give the recommendation in a way someone can act on.\n\nInclude:\n- the recommended approach and why it won\n- the runner-up and what would make it the better choice instead\n- the key trade-offs and assumptions\n- the bounded uncertainties that still remain, if any\n- practical next steps\n- verification suggestions or decision checks the user should use if they act on this recommendation\n- follow-up research only if it would materially change the decision\n\nOptional artifact:\n- create a final handoff markdown artifact only if it materially helps a human reviewer, and derive it from notes/context state rather than using it as workflow memory\n\nSet context variables:\n- `finalRecommendation`\n- `actionGuidance`\n- `verificationSuggestions`\n- `followUpResearch`",
+     "requireConfirmation": {
+       "or": [
+         { "var": "recommendationConfidenceBand", "equals": "Low" },
+         { "var": "riskLevel", "equals": "High" },
+         { "var": "automationLevel", "equals": "Low" }
+       ]
+     }
+   }
+ ]
+ }
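The `requireConfirmation` fields in this workflow accept either a boolean or a condition object (`var`/`equals`, `or`, `and`, `lt`). How the workrail engine evaluates these is not shown in the diff; the sketch below is one plausible interpretation, with the operator set inferred only from the conditions visible here.

```python
# Hypothetical evaluator for the condition objects seen in this diff
# (requireConfirmation and runCondition). The real workrail semantics may
# differ; only the operators visible in the diff are modeled.

def eval_condition(cond, ctx):
    if "or" in cond:
        return any(eval_condition(c, ctx) for c in cond["or"])
    if "and" in cond:
        return all(eval_condition(c, ctx) for c in cond["and"])
    value = ctx.get(cond["var"])
    if "equals" in cond:
        return value == cond["equals"]
    if "not_equals" in cond:
        return value != cond["not_equals"]
    if "lt" in cond:
        return value is not None and value < cond["lt"]
    raise ValueError(f"unsupported condition: {cond}")

# The phase-4-final-recommendation gate from this version:
require_confirmation = {
    "or": [
        {"var": "recommendationConfidenceBand", "equals": "Low"},
        {"var": "riskLevel", "equals": "High"},
        {"var": "automationLevel", "equals": "Low"},
    ]
}

ctx = {"recommendationConfidenceBand": "Medium", "riskLevel": "High", "automationLevel": "High"}
print(eval_condition(require_confirmation, ctx))  # True: riskLevel is High
```

Under this reading, confirmation is skipped only when the confidence band is not Low, risk is not High, and automation is not Low, which matches the intent stated in the step prompts.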