@exaudeus/workrail 0.0.11 → 0.0.12

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@exaudeus/workrail",
- "version": "0.0.11",
+ "version": "0.0.12",
  "description": "MCP server for structured workflow orchestration and step-by-step task guidance",
  "license": "MIT",
  "bin": {
@@ -0,0 +1,190 @@
+ {
+ "id": "systematic-bug-investigation",
+ "name": "Systematic Bug Investigation Workflow",
+ "version": "1.0.0",
+ "description": "A comprehensive workflow for systematic bug and failing test investigation that prevents LLMs from jumping to conclusions. Enforces thorough evidence gathering, hypothesis formation, debugging instrumentation, and validation to achieve near 100% certainty about root causes. This workflow does NOT fix bugs - it produces detailed diagnostic writeups that enable effective fixing by providing complete understanding of what is happening, why it's happening, and supporting evidence.",
+ "clarificationPrompts": [
+ "What type of system is this? (web app, mobile app, backend service, desktop app, etc.)",
+ "How consistently can you reproduce this bug? (always reproducible, sometimes reproducible, rarely reproducible)",
+ "What was the last known working version or state if applicable?",
+ "Are there any time constraints or urgency factors for this investigation?",
+ "What level of system access do you have? (full codebase, limited access, production logs only)"
+ ],
+ "preconditions": [
+ "User has identified a specific bug or failing test to investigate",
+ "Agent has access to codebase analysis tools (grep, file readers, etc.)",
+ "Agent has access to build/test execution tools for the project type",
+ "User can provide error messages, stack traces, or test failure output"
+ ],
+ "metaGuidance": [
+ "INVESTIGATION DISCIPLINE: Never propose fixes or solutions until Phase 6 (Comprehensive Diagnostic Writeup). Focus entirely on systematic evidence gathering and analysis.",
+ "HYPOTHESIS RIGOR: All hypotheses must be based on concrete evidence from code analysis with quantified scoring (1-10 scales). Maximum 5 hypotheses per investigation.",
+ "DEBUGGING INSTRUMENTATION: Always implement debugging mechanisms before running tests - logs, print statements, or test modifications that will provide evidence.",
+ "EVIDENCE THRESHOLD: Require minimum 3 independent sources of evidence before confirming any hypothesis. Use objective verification criteria.",
+ "SYSTEMATIC PROGRESSION: Complete each investigation phase fully before proceeding. Each phase builds critical context for the next with structured documentation.",
+ "CONFIDENCE CALIBRATION: Use mathematical confidence framework with 9.0/10 minimum threshold. Actively challenge conclusions with adversarial analysis.",
+ "UNCERTAINTY ACKNOWLEDGMENT: Explicitly document all remaining unknowns and their potential impact. No subjective confidence assessments."
+ ],
+ "steps": [
+ {
+ "id": "phase-0-triage",
+ "title": "Phase 0: Initial Triage & Context Gathering",
+ "prompt": "**SYSTEMATIC INVESTIGATION BEGINS** - Your mission is to achieve near 100% certainty about this bug's root cause through systematic evidence gathering. NO FIXES will be proposed until Phase 6.\n\n**STEP 1: Bug Report Analysis**\nPlease provide the complete bug context:\n- **Bug Description**: What is the observed behavior vs expected behavior?\n- **Error Messages/Stack Traces**: Paste the complete error output\n- **Reproduction Steps**: How can this bug be consistently reproduced?\n- **Environment Details**: OS, language version, framework version, etc.\n- **Recent Changes**: Any recent commits, deployments, or configuration changes?\n\n**STEP 2: Project Type Classification**\nBased on the information provided, I will classify the project type and set debugging strategies:\n- **Languages/Frameworks**: Primary tech stack\n- **Build System**: Maven, Gradle, npm, etc.\n- **Testing Framework**: JUnit, Jest, pytest, etc.\n- **Logging System**: Available logging mechanisms\n\n**STEP 3: Complexity Assessment**\nI will analyze the bug complexity using these criteria:\n- **Simple**: Single function/method, clear error path, minimal dependencies\n- **Standard**: Multiple components, moderate investigation required\n- **Complex**: Cross-system issues, race conditions, complex state management\n\n**OUTPUTS**: Set `projectType`, `bugComplexity`, and `debuggingMechanism` context variables.",
+ "agentRole": "You are a senior debugging specialist and bug triage expert with 15+ years of experience across multiple technology stacks. Your expertise lies in quickly classifying bugs, understanding project architectures, and determining appropriate investigation strategies. You excel at extracting critical information from bug reports and setting up systematic investigation approaches.",
+ "guidance": [
+ "CLASSIFICATION ACCURACY: Proper complexity assessment determines investigation depth - be thorough but decisive",
+ "CONTEXT CAPTURE: Gather complete environmental and situational context now to avoid gaps later",
+ "DEBUGGING STRATEGY: Choose debugging mechanisms appropriate for the project type and bug complexity",
+ "NO ASSUMPTIONS: If critical information is missing, explicitly request it before proceeding"
+ ]
+ },
+ {
+ "id": "phase-1-streamlined-analysis",
+ "runCondition": {
+ "var": "bugComplexity",
+ "equals": "simple"
+ },
+ "title": "Phase 1: Streamlined Analysis (Simple Bugs)",
+ "prompt": "**STREAMLINED CODEBASE INVESTIGATION** - For simple bugs, I will perform focused analysis of the core issue.\n\n**STEP 1: Direct Component Analysis**\nI will examine the specific component involved:\n- **Primary Function/Method**: Direct analysis of the failing code\n- **Input/Output Analysis**: What data enters and exits the component\n- **Logic Flow**: Step-by-step execution path\n- **Error Point**: Exact location where failure occurs\n\n**STEP 2: Immediate Context Review**\n- **Recent Changes**: Git commits affecting this specific component\n- **Related Tests**: Existing test coverage for this functionality\n- **Dependencies**: Direct dependencies that could affect this component\n\n**STEP 3: Quick Hypothesis Formation**\nI will generate 1-3 focused hypotheses based on:\n- **Obvious Error Patterns**: Common failure modes for this type of component\n- **Change Impact**: How recent modifications could cause this issue\n- **Input Validation**: Whether invalid inputs are causing the failure\n\n**OUTPUTS**: Focused understanding of the simple bug with 1-3 targeted hypotheses ready for validation.",
+ "agentRole": "You are an experienced debugging specialist who excels at quickly identifying and resolving straightforward technical issues. Your strength lies in pattern recognition and efficient root cause analysis for simple bugs. You focus on the most likely causes while avoiding over-analysis.",
+ "guidance": [
+ "FOCUSED ANALYSIS: Concentrate on the specific failing component, avoid deep architectural analysis",
+ "PATTERN RECOGNITION: Use experience to identify common failure modes quickly",
+ "EFFICIENT HYPOTHESIS: Generate 1-3 focused hypotheses, not exhaustive possibilities",
+ "DIRECT APPROACH: Skip complex dependency analysis unless directly relevant"
+ ]
+ },
+ {
+ "id": "phase-1-comprehensive-analysis",
+ "runCondition": {
+ "or": [
+ {
+ "var": "bugComplexity",
+ "equals": "standard"
+ },
+ {
+ "var": "bugComplexity",
+ "equals": "complex"
+ }
+ ]
+ },
+ "title": "Phase 1: Deep Codebase Analysis (Standard/Complex Bugs)",
+ "prompt": "**SYSTEMATIC CODEBASE INVESTIGATION** - I will now perform comprehensive analysis of the relevant codebase components.\n\n**STEP 1: Affected Component Identification**\nBased on the bug report, I will identify and analyze:\n- **Primary Components**: Classes, functions, modules directly involved\n- **Dependency Chain**: Related components that could influence the bug\n- **Data Flow**: How data moves through the affected systems\n- **Error Propagation Paths**: Where and how errors can originate and propagate\n\n**STEP 2: Code Structure Analysis**\nFor each relevant component, I will examine:\n- **Implementation Logic**: Step-by-step code execution flow\n- **State Management**: How state is created, modified, and shared\n- **Error Handling**: Existing error handling mechanisms\n- **External Dependencies**: Third-party libraries, APIs, database interactions\n- **Concurrency Patterns**: Threading, async operations, shared resources\n\n**STEP 3: Historical Context Review**\nI will analyze:\n- **Recent Changes**: Git history around the affected components\n- **Test Coverage**: Existing tests and their coverage of the bug area\n- **Known Issues**: TODO comments, FIXME notes, or similar patterns\n\n**OUTPUTS**: Comprehensive understanding of the codebase architecture and potential failure points.",
+ "agentRole": "You are a principal software architect and code analysis expert specializing in systematic codebase investigation. Your strength lies in quickly understanding complex system architectures, identifying failure points, and tracing execution flows. You excel at connecting code patterns to potential runtime behaviors.",
+ "guidance": [
+ "SYSTEMATIC COVERAGE: Analyze all relevant components, not just the obvious ones",
+ "EXECUTION FLOW FOCUS: Trace the actual code execution path that leads to the bug",
+ "STATE ANALYSIS: Pay special attention to state management and mutation patterns",
+ "DEPENDENCY MAPPING: Understand how external dependencies could contribute to the issue"
+ ]
+ },
+ {
+ "id": "phase-2-hypothesis-formation",
+ "title": "Phase 2: Evidence-Based Hypothesis Formation",
+ "prompt": "**HYPOTHESIS GENERATION FROM EVIDENCE** - Based on the codebase analysis, I will now formulate testable hypotheses about the bug's root cause.\n\n**STEP 1: Evidence-Based Hypothesis Development**\nI will create a maximum of 5 prioritized hypotheses. For each potential root cause, I will create a hypothesis that includes:\n- **Root Cause Theory**: Specific technical explanation of what is happening\n- **Supporting Evidence**: Code patterns, architectural decisions, or logic flows that support this theory\n- **Failure Mechanism**: Exact sequence of events that leads to the observed bug\n- **Testability Score**: Quantified assessment (1-10) of how easily this can be validated\n- **Evidence Strength Score**: Quantified assessment (1-10) based on concrete code findings\n\n**STEP 2: Hypothesis Prioritization Matrix**\nI will rank hypotheses using this weighted scoring system:\n- **Evidence Strength** (40%): How much concrete code analysis supports this theory\n- **Testability** (35%): How easily this can be validated with debugging instruments\n- **Impact Scope** (25%): How well this explains all observed symptoms\n\n**STEP 3: Hypothesis Validation Strategy**\nFor the top 3 hypotheses, I will define:\n- **Required Evidence**: What specific evidence would confirm or refute this hypothesis\n- **Debugging Approach**: What instrumentation or tests would provide this evidence\n- **Success Criteria**: What results would prove this hypothesis correct\n- **Confidence Threshold**: Minimum evidence quality needed to validate\n\n**STEP 4: Hypothesis Documentation**\nI will create a structured hypothesis registry:\n- **Hypothesis ID**: H1, H2, H3 for tracking\n- **Status**: Active, Refuted, Confirmed\n- **Evidence Log**: All supporting and contradicting evidence\n- **Validation Plan**: Specific testing approach\n\n**CRITICAL RULE**: All hypotheses must be based on concrete evidence from code analysis, not assumptions or common patterns.\n\n**OUTPUTS**: Maximum 5 hypotheses with quantified scoring, top 3 selected for validation with structured documentation.",
+ "agentRole": "You are a senior software detective and root cause analysis expert with deep expertise in systematic hypothesis formation. Your strength lies in connecting code evidence to potential failure mechanisms and creating testable theories. You excel at logical reasoning and evidence-based deduction. You must maintain rigorous quantitative standards and reject any hypothesis not grounded in concrete code evidence.",
+ "guidance": [
+ "EVIDENCE-BASED ONLY: Every hypothesis must be grounded in concrete code analysis findings with quantified evidence scores",
+ "HYPOTHESIS LIMITS: Generate maximum 5 hypotheses to prevent analysis paralysis",
+ "QUANTIFIED SCORING: Use 1-10 scales for evidence strength and testability with clear criteria",
+ "STRUCTURED DOCUMENTATION: Create formal hypothesis registry with tracking IDs and status",
+ "VALIDATION RIGOR: Only proceed with top 3 hypotheses that meet minimum evidence thresholds"
+ ],
+ "validationCriteria": [
+ {
+ "type": "contains",
+ "value": "Evidence Strength Score",
+ "message": "Must include quantified evidence strength scoring (1-10) for each hypothesis"
+ },
+ {
+ "type": "contains",
+ "value": "Testability Score",
+ "message": "Must include quantified testability scoring (1-10) for each hypothesis"
+ },
+ {
+ "type": "contains",
+ "value": "Hypothesis ID",
+ "message": "Must assign tracking IDs (H1, H2, H3, etc.) to each hypothesis"
+ },
+ {
+ "type": "regex",
+ "pattern": "H[1-5]",
+ "message": "Must use proper hypothesis ID format (H1, H2, H3, H4, H5)"
+ }
+ ]
+ },
+ {
+ "id": "phase-3-debugging-instrumentation",
+ "title": "Phase 3: Debugging Instrumentation Setup",
+ "prompt": "**SYSTEMATIC DEBUGGING INSTRUMENTATION** - I will now implement debugging mechanisms to gather evidence for hypothesis validation.\n\n**STEP 1: Instrumentation Strategy Selection**\nBased on the `projectType` and `debuggingMechanism` context, I will choose appropriate debugging approaches:\n- **Logging**: Strategic log statements to capture state and flow\n- **Print Debugging**: Console output for immediate feedback\n- **Test Modifications**: Enhanced test cases with additional assertions\n- **Debugging Tests**: New test cases specifically designed to validate hypotheses\n- **Profiling**: Performance monitoring if relevant to the bug\n\n**STEP 2: Strategic Instrumentation Implementation**\nFor each top-priority hypothesis, I will implement:\n- **Entry/Exit Logging**: Function entry and exit points with parameter/return values\n- **State Capture**: Critical variable values at key decision points\n- **Flow Tracing**: Execution path tracking through complex logic\n- **Error Context**: Enhanced error messages with additional diagnostic information\n- **Timing Information**: Timestamps for race condition or performance-related issues\n\n**STEP 3: Instrumentation Validation**\nI will verify that the instrumentation:\n- **Covers All Hypotheses**: Each hypothesis has corresponding debugging output\n- **Maintains Code Safety**: Debugging code doesn't alter production behavior\n- **Provides Clear Evidence**: Output will clearly confirm or refute hypotheses\n- **Handles Edge Cases**: Instrumentation works for all potential execution paths\n\n**STEP 4: Execution Instructions**\nI will provide clear instructions for:\n- **How to run the instrumented code**: Specific commands or procedures\n- **What to look for**: Expected output patterns for each hypothesis\n- **How to capture results**: Ensuring complete log/output collection\n\n**OUTPUTS**: Instrumented code ready for execution with clear validation criteria.",
+ "agentRole": "You are a debugging instrumentation specialist and diagnostic expert with extensive experience in systematic evidence collection. Your expertise lies in implementing non-intrusive debugging mechanisms that provide clear evidence for hypothesis validation. You excel at strategic instrumentation that maximizes diagnostic value.",
+ "guidance": [
+ "STRATEGIC PLACEMENT: Place instrumentation at points that will provide maximum diagnostic value",
+ "NON-INTRUSIVE: Ensure debugging code doesn't alter the bug's behavior",
+ "COMPREHENSIVE COVERAGE: Instrument all critical paths related to the hypotheses",
+ "CLEAR OUTPUT: Design instrumentation to provide unambiguous evidence"
+ ]
+ },
+ {
+ "id": "phase-4-evidence-collection",
+ "title": "Phase 4: Evidence Collection & Analysis",
+ "prompt": "**EVIDENCE COLLECTION PHASE** - Time to execute the instrumented code and gather evidence for hypothesis validation.\n\n**STEP 1: Execution Coordination**\nI will guide you through:\n- **Execution Commands**: Precise commands to run the instrumented code\n- **Data Collection**: How to capture all relevant output, logs, and results\n- **Multiple Runs**: Instructions for running different scenarios if needed\n- **Failure Scenarios**: How to handle execution failures or unexpected results\n\n**STEP 2: Evidence Analysis Framework**\nOnce you provide the execution results, I will systematically analyze:\n- **Hypothesis Validation**: Which hypotheses are confirmed or refuted by the evidence\n- **Unexpected Findings**: Any results that don't match our predictions\n- **Evidence Quality**: Strength and reliability of the collected evidence\n- **Confidence Assessment**: Current confidence level in each hypothesis\n\n**STEP 3: Evidence Correlation**\nI will examine:\n- **Pattern Recognition**: Consistent patterns across multiple execution runs\n- **Timing Analysis**: Sequence of events leading to the bug\n- **State Evolution**: How system state changes during bug reproduction\n- **Error Propagation**: How errors cascade through the system\n\n**STEP 4: Confidence Evaluation**\nI will assess:\n- **Evidence Strength**: How conclusively the evidence supports each hypothesis\n- **Remaining Uncertainties**: What questions remain unanswered\n- **Additional Evidence Needs**: Whether more debugging is required\n\n**CRITICAL THRESHOLD**: If confidence level is below 90%, I will recommend additional instrumentation or evidence collection.\n\n**OUTPUTS**: Evidence-based validation of hypotheses with confidence assessment.",
+ "agentRole": "You are a forensic evidence analyst and systematic debugging expert specializing in evidence collection and hypothesis validation. Your expertise lies in coordinating debugging execution, analyzing complex diagnostic output, and drawing reliable conclusions from evidence. You excel at maintaining objectivity and rigor in evidence evaluation.",
+ "guidance": [
+ "SYSTEMATIC ANALYSIS: Analyze evidence methodically against each hypothesis",
+ "OBJECTIVE EVALUATION: Remain objective - let evidence drive conclusions, not preferences",
+ "CONFIDENCE THRESHOLDS: Don't proceed to conclusions without sufficient evidence",
+ "MULTIPLE PERSPECTIVES: Consider alternative interpretations of the evidence"
+ ]
+ },
+ {
+ "id": "phase-5-root-cause-confirmation",
+ "title": "Phase 5: Root Cause Confirmation",
+ "prompt": "**ROOT CAUSE CONFIRMATION** - Based on collected evidence, I will confirm the definitive root cause with high confidence.\n\n**STEP 1: Evidence Synthesis**\n- **Confirm Primary Hypothesis**: Identify strongest evidence-supported hypothesis\n- **Eliminate Alternatives**: Rule out other hypotheses based on evidence\n- **Address Contradictions**: Resolve conflicting evidence or unexpected findings\n- **Validate Completeness**: Ensure hypothesis explains all observed symptoms\n\n**STEP 2: Objective Evidence Verification**\n- **Evidence Diversity**: Minimum 3 independent supporting sources\n- **Reproducibility**: Evidence consistently reproducible across test runs\n- **Specificity**: Evidence directly relates to hypothesis, not circumstantial\n- **Contradiction Resolution**: Conflicting evidence explicitly addressed\n\n**STEP 3: Adversarial Challenge Protocol**\n- **Devil's Advocate Analysis**: Argue against primary hypothesis with available evidence\n- **Alternative Explanation Search**: Identify 2+ alternative explanations for evidence\n- **Confidence Calibration**: Rate certainty on calibrated scale with explicit reasoning\n- **Uncertainty Documentation**: List remaining unknowns and their potential impact\n\n**STEP 4: Confidence Assessment Matrix**\n- **Evidence Quality Score** (1-10): Reliability and completeness of supporting evidence\n- **Explanation Completeness** (1-10): How well root cause explains all symptoms\n- **Alternative Likelihood** (1-10): Probability alternatives are correct (inverted)\n- **Final Confidence** = (Evidence Quality × 0.4) + (Completeness × 0.4) + (Alternative × 0.2)\n\n**CONFIDENCE THRESHOLD**: Proceed only if Final Confidence ≥ 9.0/10. If below, recommend additional investigation with specific evidence gaps.\n\n**OUTPUTS**: High-confidence root cause with quantified assessment and adversarial validation.",
+ "agentRole": "You are a senior root cause analysis expert and forensic investigator with deep expertise in systematic evidence evaluation and definitive conclusion formation. Your strength lies in synthesizing complex evidence into clear, confident determinations. You excel at maintaining rigorous standards for certainty while providing actionable insights. You must actively challenge your own conclusions and maintain objective, quantified confidence assessments.",
+ "guidance": [
+ "OBJECTIVE VERIFICATION: Use quantified evidence quality criteria, not subjective assessments",
+ "ADVERSARIAL MINDSET: Actively challenge your own conclusions with available evidence",
+ "CONFIDENCE CALIBRATION: Use mathematical framework for confidence scoring, not intuition",
+ "UNCERTAINTY DOCUMENTATION: Explicitly list all remaining unknowns and their impact",
+ "EVIDENCE CITATION: Support every conclusion with specific, reproducible evidence"
+ ],
+ "validationCriteria": [
+ {
+ "type": "contains",
+ "value": "Evidence Quality Score",
+ "message": "Must include quantified evidence quality scoring (1-10) for root cause confirmation"
+ },
+ {
+ "type": "contains",
+ "value": "Explanation Completeness",
+ "message": "Must include explanation completeness scoring (1-10) for root cause confirmation"
+ },
+ {
+ "type": "contains",
+ "value": "Alternative Likelihood",
+ "message": "Must include alternative likelihood scoring (1-10) for root cause confirmation"
+ },
+ {
+ "type": "regex",
+ "pattern": "Final Confidence = [0-9\\.]+",
+ "message": "Must calculate and report final confidence score using the specified formula"
+ }
+ ]
+ },
+ {
+ "id": "phase-6-diagnostic-writeup",
+ "title": "Phase 6: Comprehensive Diagnostic Writeup",
+ "prompt": "**FINAL DIAGNOSTIC DOCUMENTATION** - I will create comprehensive writeup enabling effective bug fixing and knowledge transfer.\n\n**STEP 1: Executive Summary**\n- **Bug Summary**: Concise description of issue and impact\n- **Root Cause**: Clear, non-technical explanation of what is happening\n- **Confidence Level**: Final confidence assessment with calculation methodology\n- **Scope**: What systems, users, or scenarios are affected\n\n**STEP 2: Technical Deep Dive**\n- **Root Cause Analysis**: Detailed technical explanation of failure mechanism\n- **Code Component Analysis**: Specific files, functions, and lines with exact locations\n- **Execution Flow**: Step-by-step sequence of events leading to bug\n- **State Analysis**: How system state contributes to failure\n\n**STEP 3: Investigation Methodology**\n- **Investigation Timeline**: Chronological summary with time investments per phase\n- **Hypothesis Evolution**: Complete record of all hypotheses (H1-H5) with status changes\n- **Evidence Quality Assessment**: Rating and reliability of each evidence source\n- **Key Evidence**: Most important evidence that led to root cause with citations\n\n**STEP 4: Knowledge Transfer & Action Plan**\n- **Skill Requirements**: Technical expertise needed to understand and fix issue\n- **Prevention Strategies**: Specific measures to prevent similar issues\n- **Code Review Checklist**: Items to check during reviews to catch similar problems\n- **Immediate Actions**: Steps to mitigate issue temporarily with owners and timelines\n- **Root Cause Remediation**: Areas needing permanent fixes with complexity estimates\n- **Testing Strategy**: Comprehensive approach to verify fixes work correctly\n\n**DELIVERABLE**: Enterprise-grade diagnostic report enabling confident bug fixing, knowledge transfer, and organizational learning.",
+ "agentRole": "You are a senior technical writer and diagnostic documentation specialist with expertise in creating comprehensive, actionable bug reports for enterprise environments. Your strength lies in translating complex technical investigations into clear, structured documentation that enables effective problem resolution, knowledge transfer, and organizational learning. You excel at creating reports that serve immediate fixing needs, long-term system improvement, and team collaboration.",
+ "guidance": [
+ "ENTERPRISE FOCUS: Write for multiple stakeholders including developers, managers, and future team members",
+ "KNOWLEDGE TRANSFER: Include methodology and reasoning, not just conclusions",
+ "COLLABORATIVE DESIGN: Structure content for peer review and team coordination",
+ "COMPREHENSIVE COVERAGE: Include all information needed for resolution and prevention",
+ "ACTIONABLE DOCUMENTATION: Provide specific, concrete next steps with clear ownership"
+ ]
+ }
+ ]
+ }
+
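Note on the added workflow's `runCondition` fields: the two Phase 1 variants are gated on the `bugComplexity` context variable set in Phase 0, using `{"var", "equals"}` leaves and an `{"or": [...]}` combinator. A minimal sketch of how such a condition could be evaluated; the evaluator below is illustrative only and not the package's actual engine:

```typescript
// Illustrative evaluator for the runCondition shapes used in the workflow:
// {"var": ..., "equals": ...} leaves and an {"or": [...]} combinator.
type RunCondition =
  | { var: string; equals: string }
  | { or: RunCondition[] };

function evaluateRunCondition(
  condition: RunCondition,
  context: Record<string, string>
): boolean {
  if ("or" in condition) {
    // An "or" node passes when any child condition passes.
    return condition.or.some((child) => evaluateRunCondition(child, context));
  }
  // A leaf compares a context variable against the expected value.
  return context[condition.var] === condition.equals;
}

// With bugComplexity set to "standard" in Phase 0, the streamlined step is
// skipped and the comprehensive-analysis step runs.
const ctx = { bugComplexity: "standard" };
console.log(evaluateRunCondition({ var: "bugComplexity", equals: "simple" }, ctx)); // false
console.log(
  evaluateRunCondition(
    {
      or: [
        { var: "bugComplexity", equals: "standard" },
        { var: "bugComplexity", equals: "complex" },
      ],
    },
    ctx
  )
); // true
```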
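Phase 2 prioritizes hypotheses with a 40/35/25 weighting, and Phase 5 requires Final Confidence = (Evidence Quality × 0.4) + (Completeness × 0.4) + (Alternative × 0.2) to reach at least 9.0/10 before the writeup. A short worked sketch of that arithmetic, with illustrative helper names that are not part of the package:

```typescript
// Illustrative arithmetic for the workflow's weighted scoring (1-10 scales).

// Phase 2 prioritization: evidence strength 40%, testability 35%, impact scope 25%.
function hypothesisPriority(evidence: number, testability: number, impact: number): number {
  return evidence * 0.4 + testability * 0.35 + impact * 0.25;
}

// Phase 5 confirmation: Final Confidence = (Evidence Quality * 0.4) +
// (Completeness * 0.4) + (Alternative * 0.2), with a 9.0/10 minimum threshold.
function finalConfidence(quality: number, completeness: number, alternative: number): number {
  return quality * 0.4 + completeness * 0.4 + alternative * 0.2;
}

// Example ranking score for a hypothesis scored 8/7/6.
console.log(hypothesisPriority(8, 7, 6).toFixed(2)); // 7.15

// Example: H1 scores 9 on evidence quality, 10 on explanation completeness and
// 8 on (inverted) alternative likelihood, giving 9.2, above the 9.0 threshold.
const confidence = finalConfidence(9, 10, 8);
console.log(
  confidence >= 9.0
    ? `Proceed to Phase 6 (Final Confidence = ${confidence.toFixed(1)})`
    : `Gather more evidence (Final Confidence = ${confidence.toFixed(1)})`
);
```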
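The `validationCriteria` arrays attached to Phase 2 and Phase 5 use two check types, `contains` and `regex`. A minimal sketch of how a step's output might be checked against them; this is illustrative and not the package's actual validator:

```typescript
// Illustrative check of a step's output against its validationCriteria entries.
// The workflow uses two criterion types: "contains" and "regex".
type ValidationCriterion =
  | { type: "contains"; value: string; message: string }
  | { type: "regex"; pattern: string; message: string };

function validateOutput(output: string, criteria: ValidationCriterion[]): string[] {
  const failures: string[] = [];
  for (const criterion of criteria) {
    const passed =
      criterion.type === "contains"
        ? output.includes(criterion.value)
        : new RegExp(criterion.pattern).test(output);
    if (!passed) failures.push(criterion.message);
  }
  return failures; // an empty array means every criterion was satisfied
}

// Example against the Phase 5 regex criterion, which requires an explicit
// numeric confidence score in the final writeup.
const failures = validateOutput("... Final Confidence = 9.2 ...", [
  {
    type: "regex",
    pattern: "Final Confidence = [0-9\\.]+",
    message: "Must calculate and report final confidence score using the specified formula",
  },
]);
console.log(failures.length === 0 ? "Validation passed" : failures.join("; "));
```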