gsd-opencode 1.33.3 → 1.35.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (118)
  1. package/agents/gsd-advisor-researcher.md +23 -0
  2. package/agents/gsd-ai-researcher.md +142 -0
  3. package/agents/gsd-code-fixer.md +523 -0
  4. package/agents/gsd-code-reviewer.md +361 -0
  5. package/agents/gsd-debugger.md +14 -1
  6. package/agents/gsd-domain-researcher.md +162 -0
  7. package/agents/gsd-eval-auditor.md +170 -0
  8. package/agents/gsd-eval-planner.md +161 -0
  9. package/agents/gsd-executor.md +70 -7
  10. package/agents/gsd-framework-selector.md +167 -0
  11. package/agents/gsd-intel-updater.md +320 -0
  12. package/agents/gsd-phase-researcher.md +26 -0
  13. package/agents/gsd-plan-checker.md +12 -0
  14. package/agents/gsd-planner.md +16 -6
  15. package/agents/gsd-project-researcher.md +23 -0
  16. package/agents/gsd-ui-researcher.md +23 -0
  17. package/agents/gsd-verifier.md +55 -1
  18. package/commands/gsd/gsd-ai-integration-phase.md +36 -0
  19. package/commands/gsd/gsd-audit-fix.md +33 -0
  20. package/commands/gsd/gsd-autonomous.md +1 -0
  21. package/commands/gsd/gsd-code-review-fix.md +52 -0
  22. package/commands/gsd/gsd-code-review.md +55 -0
  23. package/commands/gsd/gsd-eval-review.md +32 -0
  24. package/commands/gsd/gsd-explore.md +27 -0
  25. package/commands/gsd/gsd-from-gsd2.md +45 -0
  26. package/commands/gsd/gsd-import.md +36 -0
  27. package/commands/gsd/gsd-intel.md +183 -0
  28. package/commands/gsd/gsd-next.md +2 -0
  29. package/commands/gsd/gsd-reapply-patches.md +58 -3
  30. package/commands/gsd/gsd-review.md +4 -2
  31. package/commands/gsd/gsd-scan.md +26 -0
  32. package/commands/gsd/gsd-undo.md +34 -0
  33. package/commands/gsd/gsd-workstreams.md +6 -6
  34. package/get-shit-done/bin/gsd-tools.cjs +143 -5
  35. package/get-shit-done/bin/lib/commands.cjs +10 -2
  36. package/get-shit-done/bin/lib/config.cjs +71 -37
  37. package/get-shit-done/bin/lib/core.cjs +70 -8
  38. package/get-shit-done/bin/lib/gsd2-import.cjs +511 -0
  39. package/get-shit-done/bin/lib/init.cjs +20 -6
  40. package/get-shit-done/bin/lib/intel.cjs +660 -0
  41. package/get-shit-done/bin/lib/learnings.cjs +378 -0
  42. package/get-shit-done/bin/lib/milestone.cjs +25 -15
  43. package/get-shit-done/bin/lib/model-profiles.cjs +17 -17
  44. package/get-shit-done/bin/lib/phase.cjs +148 -112
  45. package/get-shit-done/bin/lib/roadmap.cjs +12 -5
  46. package/get-shit-done/bin/lib/security.cjs +119 -0
  47. package/get-shit-done/bin/lib/state.cjs +283 -221
  48. package/get-shit-done/bin/lib/template.cjs +8 -4
  49. package/get-shit-done/bin/lib/verify.cjs +42 -5
  50. package/get-shit-done/references/ai-evals.md +156 -0
  51. package/get-shit-done/references/ai-frameworks.md +186 -0
  52. package/get-shit-done/references/common-bug-patterns.md +114 -0
  53. package/get-shit-done/references/few-shot-examples/plan-checker.md +73 -0
  54. package/get-shit-done/references/few-shot-examples/verifier.md +109 -0
  55. package/get-shit-done/references/gates.md +70 -0
  56. package/get-shit-done/references/ios-scaffold.md +123 -0
  57. package/get-shit-done/references/model-profile-resolution.md +6 -7
  58. package/get-shit-done/references/model-profiles.md +20 -14
  59. package/get-shit-done/references/planning-config.md +237 -0
  60. package/get-shit-done/references/thinking-models-debug.md +44 -0
  61. package/get-shit-done/references/thinking-models-execution.md +50 -0
  62. package/get-shit-done/references/thinking-models-planning.md +62 -0
  63. package/get-shit-done/references/thinking-models-research.md +50 -0
  64. package/get-shit-done/references/thinking-models-verification.md +55 -0
  65. package/get-shit-done/references/thinking-partner.md +96 -0
  66. package/get-shit-done/references/universal-anti-patterns.md +6 -1
  67. package/get-shit-done/references/verification-overrides.md +227 -0
  68. package/get-shit-done/templates/AI-SPEC.md +246 -0
  69. package/get-shit-done/workflows/add-tests.md +3 -0
  70. package/get-shit-done/workflows/add-todo.md +2 -0
  71. package/get-shit-done/workflows/ai-integration-phase.md +284 -0
  72. package/get-shit-done/workflows/audit-fix.md +154 -0
  73. package/get-shit-done/workflows/autonomous.md +33 -2
  74. package/get-shit-done/workflows/check-todos.md +2 -0
  75. package/get-shit-done/workflows/cleanup.md +2 -0
  76. package/get-shit-done/workflows/code-review-fix.md +497 -0
  77. package/get-shit-done/workflows/code-review.md +515 -0
  78. package/get-shit-done/workflows/complete-milestone.md +40 -15
  79. package/get-shit-done/workflows/diagnose-issues.md +1 -1
  80. package/get-shit-done/workflows/discovery-phase.md +3 -1
  81. package/get-shit-done/workflows/discuss-phase-assumptions.md +1 -1
  82. package/get-shit-done/workflows/discuss-phase.md +21 -7
  83. package/get-shit-done/workflows/do.md +2 -0
  84. package/get-shit-done/workflows/docs-update.md +2 -0
  85. package/get-shit-done/workflows/eval-review.md +155 -0
  86. package/get-shit-done/workflows/execute-phase.md +307 -57
  87. package/get-shit-done/workflows/execute-plan.md +64 -93
  88. package/get-shit-done/workflows/explore.md +136 -0
  89. package/get-shit-done/workflows/help.md +1 -1
  90. package/get-shit-done/workflows/import.md +273 -0
  91. package/get-shit-done/workflows/inbox.md +387 -0
  92. package/get-shit-done/workflows/manager.md +4 -10
  93. package/get-shit-done/workflows/new-milestone.md +3 -1
  94. package/get-shit-done/workflows/new-project.md +2 -0
  95. package/get-shit-done/workflows/new-workspace.md +2 -0
  96. package/get-shit-done/workflows/next.md +56 -0
  97. package/get-shit-done/workflows/note.md +2 -0
  98. package/get-shit-done/workflows/plan-phase.md +97 -17
  99. package/get-shit-done/workflows/plant-seed.md +3 -0
  100. package/get-shit-done/workflows/pr-branch.md +41 -13
  101. package/get-shit-done/workflows/profile-user.md +4 -2
  102. package/get-shit-done/workflows/quick.md +99 -4
  103. package/get-shit-done/workflows/remove-workspace.md +2 -0
  104. package/get-shit-done/workflows/review.md +53 -6
  105. package/get-shit-done/workflows/scan.md +98 -0
  106. package/get-shit-done/workflows/secure-phase.md +2 -0
  107. package/get-shit-done/workflows/settings.md +18 -3
  108. package/get-shit-done/workflows/ship.md +3 -0
  109. package/get-shit-done/workflows/ui-phase.md +10 -2
  110. package/get-shit-done/workflows/ui-review.md +2 -0
  111. package/get-shit-done/workflows/undo.md +314 -0
  112. package/get-shit-done/workflows/update.md +2 -0
  113. package/get-shit-done/workflows/validate-phase.md +2 -0
  114. package/get-shit-done/workflows/verify-phase.md +83 -0
  115. package/get-shit-done/workflows/verify-work.md +12 -1
  116. package/package.json +1 -1
  117. package/skills/gsd-code-review/SKILL.md +48 -0
  118. package/skills/gsd-code-review-fix/SKILL.md +44 -0
@@ -4,7 +4,7 @@
 
 const fs = require('fs');
 const path = require('path');
-const { normalizePhaseName, findPhaseInternal, generateSlugInternal, normalizeMd, toPosixPath, output, error } = require('./core.cjs');
+const { normalizePhaseName, findPhaseInternal, generateSlugInternal, normalizeMd, toPosixPath, planningDir, output, error } = require('./core.cjs');
 const { reconstructFrontmatter } = require('./frontmatter.cjs');
 
 function cmdTemplateSelect(cwd, planPath, raw) {
@@ -131,6 +131,10 @@ function cmdTemplateFill(cwd, templateType, options, raw) {
   must_haves: { truths: [], artifacts: [], key_links: [] },
   ...fields,
 };
+const planBase = planningDir(cwd);
+const projectRef = toPosixPath(path.relative(cwd, path.join(planBase, 'PROJECT.md')));
+const roadmapRef = toPosixPath(path.relative(cwd, path.join(planBase, 'ROADMAP.md')));
+const stateRef = toPosixPath(path.relative(cwd, path.join(planBase, 'STATE.md')));
 body = [
   `# Phase ${options.phase} Plan ${planNum}: [Title]`,
   '',
@@ -140,9 +144,9 @@ function cmdTemplateFill(cwd, templateType, options, raw) {
   '- **Output:** [Concrete deliverable]',
   '',
   '## Context',
-  '@.planning/PROJECT.md',
-  '@.planning/ROADMAP.md',
-  '@.planning/STATE.md',
+  `@${projectRef}`,
+  `@${roadmapRef}`,
+  `@${stateRef}`,
   '',
   '## Tasks',
   '',
@@ -5,7 +5,7 @@
 const fs = require('fs');
 const path = require('path');
 const os = require('os');
-const { safeReadFile, loadConfig, normalizePhaseName, escapeRegex, execGit, findPhaseInternal, getMilestoneInfo, stripShippedMilestones, extractCurrentMilestone, planningDir, planningRoot, output, error, checkAgentsInstalled, CONFIG_DEFAULTS } = require('./core.cjs');
+const { safeReadFile, loadConfig, normalizePhaseName, escapeRegex, execGit, findPhaseInternal, getMilestoneInfo, stripShippedMilestones, extractCurrentMilestone, planningDir, output, error, checkAgentsInstalled, CONFIG_DEFAULTS } = require('./core.cjs');
 const { extractFrontmatter, parseMustHavesBlock } = require('./frontmatter.cjs');
 const { writeStateMd } = require('./state.cjs');
 
@@ -534,11 +534,10 @@ function cmdValidateHealth(cwd, options, raw) {
   }
 
   const planBase = planningDir(cwd);
-  const planRoot = planningRoot(cwd);
-  const projectPath = path.join(planRoot, 'PROJECT.md');
+  const projectPath = path.join(planBase, 'PROJECT.md');
   const roadmapPath = path.join(planBase, 'ROADMAP.md');
   const statePath = path.join(planBase, 'STATE.md');
-  const configPath = path.join(planRoot, 'config.json');
+  const configPath = path.join(planBase, 'config.json');
   const phasesDir = path.join(planBase, 'phases');
 
   const errors = [];
@@ -649,6 +648,10 @@ function cmdValidateHealth(cwd, options, raw) {
       addIssue('warning', 'W008', 'config.json: workflow.nyquist_validation absent (defaults to enabled but agents may skip)', 'Run /gsd-health --repair to add key', true);
       if (!repairs.includes('addNyquistKey')) repairs.push('addNyquistKey');
     }
+    if (configParsed.workflow && configParsed.workflow.ai_integration_phase === undefined) {
+      addIssue('warning', 'W016', 'config.json: workflow.ai_integration_phase absent (defaults to enabled — run /gsd-ai-integration-phase before planning AI system phases)', 'Run /gsd-health --repair to add key', true);
+      if (!repairs.includes('addAiIntegrationPhaseKey')) repairs.push('addAiIntegrationPhaseKey');
+    }
   } catch { /* intentionally empty */ }
 }
 
@@ -740,10 +743,24 @@ function cmdValidateHealth(cwd, options, raw) {
     }
   } catch { /* intentionally empty */ }
 
+  // Build a set of phases explicitly marked not-yet-started in the ROADMAP
+  // summary list (- [ ] **Phase N:**). These phases are intentionally absent
+  // from disk -- W006 must not fire for them (#2009).
+  const notStartedPhases = new Set();
+  const uncheckedPattern = /-\s*\[\s\]\s*\*{0,2}Phase\s+(\d+[A-Z]?(?:\.\d+)*)[:\s*]/gi;
+  let um;
+  while ((um = uncheckedPattern.exec(roadmapContent)) !== null) {
+    notStartedPhases.add(um[1]);
+    // Also add zero-padded variant so 1 and 01 both match
+    notStartedPhases.add(String(parseInt(um[1], 10)).padStart(2, '0'));
+  }
+
   // Phases in ROADMAP but not on disk
   for (const p of roadmapPhases) {
     const padded = String(parseInt(p, 10)).padStart(2, '0');
     if (!diskPhases.has(p) && !diskPhases.has(padded)) {
+      // Skip phases explicitly flagged as not-yet-started in the summary list
+      if (notStartedPhases.has(p) || notStartedPhases.has(padded)) continue;
       addIssue('warning', 'W006', `Phase ${p} in ROADMAP.md but no directory on disk`, 'Create phase directory or remove from roadmap');
     }
   }
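Run standalone, the new guard behaves as follows. The regex and set-building logic are lifted directly from the hunk above; the sample ROADMAP lines are illustrative, not from the package:

```javascript
// The W006 suppression logic from this hunk, against a made-up ROADMAP snippet.
const roadmapContent = [
  '- [x] **Phase 1:** Project setup',
  '- [ ] **Phase 3:** Billing integration',
].join('\n');

const notStartedPhases = new Set();
const uncheckedPattern = /-\s*\[\s\]\s*\*{0,2}Phase\s+(\d+[A-Z]?(?:\.\d+)*)[:\s*]/gi;
let um;
while ((um = uncheckedPattern.exec(roadmapContent)) !== null) {
  notStartedPhases.add(um[1]);
  // Zero-padded variant so "3" and "03" both suppress the warning
  notStartedPhases.add(String(parseInt(um[1], 10)).padStart(2, '0'));
}
// Phase 3 is unchecked, so the set holds '3' and '03' and W006 is skipped for it;
// Phase 1 is checked ([x] has no whitespace inside the brackets) and is not added.
```

Note that `\[\s\]` only matches a literal space inside the checkbox, which is why checked (`[x]`) phases still trigger W006 when their directory is missing.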
@@ -861,9 +878,12 @@ function cmdValidateHealth(cwd, options, raw) {
   }
   // Generate minimal STATE.md from ROADMAP.md structure
   const milestone = getMilestoneInfo(cwd);
+  const projectRef = path
+    .relative(cwd, path.join(planningDir(cwd), 'PROJECT.md'))
+    .split(path.sep).join('/');
   let stateContent = `# Session State\n\n`;
   stateContent += `## Project Reference\n\n`;
-  stateContent += `See: .planning/PROJECT.md\n\n`;
+  stateContent += `See: ${projectRef}\n\n`;
   stateContent += `## Position\n\n`;
   stateContent += `**Milestone:** ${milestone.version} ${milestone.name}\n`;
   stateContent += `**Current phase:** (determining...)\n`;
@@ -891,6 +911,23 @@ function cmdValidateHealth(cwd, options, raw) {
         }
         break;
       }
+      case 'addAiIntegrationPhaseKey': {
+        if (fs.existsSync(configPath)) {
+          try {
+            const configRaw = fs.readFileSync(configPath, 'utf-8');
+            const configParsed = JSON.parse(configRaw);
+            if (!configParsed.workflow) configParsed.workflow = {};
+            if (configParsed.workflow.ai_integration_phase === undefined) {
+              configParsed.workflow.ai_integration_phase = true;
+              fs.writeFileSync(configPath, JSON.stringify(configParsed, null, 2), 'utf-8');
+            }
+            repairActions.push({ action: repair, success: true, path: 'config.json' });
+          } catch (err) {
+            repairActions.push({ action: repair, success: false, error: err.message });
+          }
+        }
+        break;
+      }
     }
   } catch (err) {
     repairActions.push({ action: repair, success: false, error: err.message });
@@ -0,0 +1,156 @@
# AI Evaluation Reference

> Reference used by `gsd-eval-planner` and `gsd-eval-auditor`.
> Based on "AI Evals for Everyone" course (Reganti & Badam) + industry practice.

---

## Core Concepts

### Why Evals Exist
AI systems are non-deterministic. Input X does not reliably produce output Y across runs, users, or edge cases. Evals are the continuous process of assessing whether your system's behavior meets expectations under real-world conditions — unit tests and integration tests alone are insufficient.

### Model vs. Product Evaluation
- **Model evals** (MMLU, HumanEval, GSM8K) — measure general capability under standardized conditions. Use as an initial filter only.
- **Product evals** — measure behavior inside your specific system, with your data, your users, your domain rules. This is where 80% of eval effort belongs.

### The Three Components of Every Eval
- **Input** — everything affecting the system: query, history, retrieved docs, system prompt, config
- **Expected** — what good behavior looks like, defined through rubrics
- **Actual** — what the system produced, including intermediate steps, tool calls, and reasoning traces

### Three Measurement Approaches
1. **Code-based metrics** — deterministic checks: JSON validation, required disclaimers, performance thresholds, classification flags. Fast, cheap, reliable. Use first.
2. **LLM judges** — one model evaluates another against a rubric. Powerful for subjective qualities (tone, reasoning, escalation). Requires calibration against human judgment before trusting.
3. **Human evaluation** — gold standard for nuanced judgment. Doesn't scale. Use for calibration, edge cases, periodic sampling, and high-stakes decisions.

Most effective systems combine all three.

+
29
+ ---
30
+
31
+ ## Evaluation Dimensions
32
+
33
+ ### Pre-Deployment (Development Phase)
34
+
35
+ | Dimension | What It Measures | When It Matters |
36
+ |-----------|-----------------|-----------------|
37
+ | **Factual accuracy** | Correctness of claims against ground truth | RAG, knowledge bases, any factual assertions |
38
+ | **Context faithfulness** | Response grounded in provided context vs. fabricated | RAG pipelines, document Q&A, retrieval-augmented systems |
39
+ | **Hallucination detection** | Plausible but unsupported claims | All generative systems, high-stakes domains |
40
+ | **Escalation accuracy** | Correct identification of when human intervention needed | Customer service, healthcare, financial advisory |
41
+ | **Policy compliance** | Adherence to business rules, legal requirements, disclaimers | Regulated industries, enterprise deployments |
42
+ | **Tone/style appropriateness** | Match with brand voice, audience expectations, emotional context | Customer-facing systems, content generation |
43
+ | **Output structure validity** | Schema compliance, required fields, format correctness | Structured extraction, API integrations, data pipelines |
44
+ | **task completion** | Whether the system accomplished the stated goal | Agentic workflows, multi-step tasks |
45
+ | **Tool use correctness** | Correct selection and invocation of tools | Agent systems with tool calls |
46
+ | **Safety** | Absence of harmful, biased, or inappropriate outputs | All user-facing systems |
47
+
48
+ ### Production Monitoring
49
+
50
+ | Dimension | Monitoring Approach |
51
+ |-----------|---------------------|
52
+ | **Safety violations** | Online guardrail — real-time, immediate intervention |
53
+ | **Compliance failures** | Online guardrail — block or escalate before user sees output |
54
+ | **Quality degradation trends** | Offline flywheel — batch analysis of sampled interactions |
55
+ | **Emerging failure modes** | Signal-metric divergence — when user behavior signals diverge from metric scores, investigate manually |
56
+ | **Cost/latency drift** | Code-based metrics — automated threshold alerts |
57
+
58
+ ---
59
+
60
+ ## The Guardrail vs. Flywheel Decision
61
+
62
+ Ask: "If this behavior goes wrong, would it be catastrophic for my business?"
63
+
64
+ - **Yes → Guardrail** — run online, real-time, with immediate intervention (block, escalate, hand off). Be selective: guardrails add latency.
65
+ - **No → Flywheel** — run offline as batch analysis feeding system refinements over time.
66
+
67
+ ---
68
+
69
+ ## Rubric Design
70
+
71
+ Generic metrics are meaningless without context. "Helpfulness" in real estate means summarizing listings clearly. In healthcare it means knowing when *not* to answer.
72
+
73
+ A rubric must define:
74
+ 1. The dimension being measured
75
+ 2. What scores 1, 3, and 5 on a 5-point scale (or pass/fail criteria)
76
+ 3. Domain-specific examples of acceptable vs. unacceptable behavior
77
+
78
+ Without rubrics, LLM judges produce noise rather than signal.
79
+
80
+ ---
81
+
82
+ ## Reference Dataset Guidelines
83
+
84
+ - Start with **10-20 high-quality examples** — not 200 mediocre ones
85
+ - Cover: critical success scenarios, common user workflows, known edge cases, historical failure modes
86
+ - Have domain experts label the examples (not just engineers)
87
+ - Expand based on what you learn in production — don't build for hypothetical coverage
88
+
89
+ ---
90
+
91
+ ## Eval Tooling Guide
92
+
93
+ | Tool | Type | Best For | Key Strength |
94
+ |------|------|----------|-------------|
95
+ | **RAGAS** | Python library | RAG evaluation | Purpose-built metrics: faithfulness, answer relevance, context precision/recall |
96
+ | **Langfuse** | Platform (open-source, self-hostable) | All system types | Strong tracing, prompt management, good for teams wanting infrastructure control |
97
+ | **LangSmith** | Platform (commercial) | LangChain/LangGraph ecosystems | Tightest integration with LangChain; best if already in that ecosystem |
98
+ | **Arize Phoenix** | Platform (open-source + hosted) | RAG + multi-agent tracing | Strong RAG eval + trace visualization; open-source with hosted option |
99
+ | **Braintrust** | Platform (commercial) | Model-agnostic evaluation | Dataset and experiment management; good for comparing across frameworks |
100
+ | **Promptfoo** | CLI tool (open-source) | Prompt testing, CI/CD | CLI-first, excellent for CI/CD prompt regression testing |
101
+
102
+ ### Tool Selection by System Type
103
+
104
+ | System Type | Recommended Tooling |
105
+ |-------------|---------------------|
106
+ | RAG / Knowledge Q&A | RAGAS + Arize Phoenix or Braintrust |
107
+ | Multi-agent systems | Langfuse + Arize Phoenix |
108
+ | Conversational / single-model | Promptfoo + Braintrust |
109
+ | Structured extraction | Promptfoo + code-based validators |
110
+ | LangChain/LangGraph projects | LangSmith (native integration) |
111
+ | Production monitoring (all types) | Langfuse, Arize Phoenix, or LangSmith |
112
+
113
+ ---
114
+
115
+ ## Evals in the Development Lifecycle
116
+
117
+ ### Plan Phase (Evaluation-Aware Design)
118
+ Before writing code, define:
119
+ 1. What type of AI system is being built → determines framework and dominant eval concerns
120
+ 2. Critical failure modes (3-5 behaviors that cannot go wrong)
121
+ 3. Rubrics — explicit definitions of acceptable/unacceptable behavior per dimension
122
+ 4. Evaluation strategy — which dimensions use code metrics, LLM judges, or human review
123
+ 5. Reference dataset requirements — size, composition, labeling approach
124
+ 6. Eval tooling selection
125
+
126
+ Output: EVALS-SPEC section of AI-SPEC.md
127
+
128
+ ### Execute Phase (Instrument While Building)
129
+ - Add tracing from day one (Langfuse, Arize Phoenix, or LangSmith)
130
+ - Build reference dataset concurrently with implementation
131
+ - Implement code-based checks first; add LLM judges only for subjective dimensions
132
+ - Run evals in CI/CD via Promptfoo or Braintrust
133
+
134
+ ### Verify Phase (Pre-Deployment Validation)
135
+ - Run full reference dataset against all metrics
136
+ - Conduct human review of edge cases and LLM judge disagreements
137
+ - Calibrate LLM judges against human scores (target ≥ 0.7 correlation before trusting)
138
+ - Define and configure production guardrails
139
+ - Establish monitoring baseline
140
+
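The calibration step above can be sketched as a plain Pearson correlation between judge scores and human scores on the same examples. The scores below are made up; the 0.7 threshold is the target stated above:

```javascript
// Pearson correlation between LLM-judge scores and human scores.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Made-up 1-5 scores for the same six examples, scored by both raters.
const humanScores = [5, 4, 2, 5, 1, 3];
const judgeScores = [5, 5, 2, 4, 1, 3];
const r = pearson(judgeScores, humanScores);
// Trust the judge for this dimension only if r is at least ~0.7;
// below that, revise the rubric or the judge prompt and re-run.
```

In practice you would compute this per dimension, since a judge can be well calibrated on tone and poorly calibrated on escalation.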
### Monitor Phase (Production Evaluation Loop)
- Smart sampling — weight toward interactions with concerning signals (retries, unusual length, explicit escalations)
- Online guardrails on every interaction
- Offline flywheel on the sampled batch
- Watch for signal-metric divergence — the early warning system for evaluation gaps

---

## Common Pitfalls

1. **Assuming benchmarks predict product success** — they don't; model evals are a filter, not a verdict
2. **Engineering evals in isolation** — domain experts must co-define rubrics; engineers alone miss critical nuances
3. **Building comprehensive coverage on day one** — start small (10-20 examples), expand from real failure modes
4. **Trusting uncalibrated LLM judges** — validate against human judgment before relying on them
5. **Measuring everything** — only track metrics that drive decisions; "collect it all" produces noise
6. **Treating evaluation as one-time setup** — user behavior evolves, requirements change, failure modes emerge; evaluation is continuous
@@ -0,0 +1,186 @@
# AI Framework Decision Matrix

> Reference used by `gsd-framework-selector` and `gsd-ai-researcher`.
> Distilled from official docs, benchmarks, and developer reports (2026).

---

## Quick Picks

| Situation | Pick |
|-----------|------|
| Simplest path to a working agent (OpenAI) | OpenAI Agents SDK |
| Simplest path to a working agent (model-agnostic) | CrewAI |
| Production RAG / document Q&A | LlamaIndex |
| Complex stateful workflows with branching | LangGraph |
| Multi-agent teams with defined roles | CrewAI |
| Code-aware autonomous agents (Anthropic) | OpenCode Agent SDK |
| "I don't know my requirements yet" | LangChain |
| Regulated / audit-trail required | LangGraph |
| Enterprise Microsoft/.NET shops | AutoGen/AG2 |
| Google Cloud / Gemini-committed teams | Google ADK |
| Pure NLP pipelines with explicit control | Haystack |

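The table reads as an ordered decision list. A toy encoding, purely for illustration — the flag names and the precedence between rows are assumptions, not part of the package:

```javascript
// Toy encoding of the Quick Picks table. Checks run from most specific
// requirement to the catch-all; flag names and ordering are illustrative.
function quickPick(req = {}) {
  if (req.ragPrimary) return 'LlamaIndex';
  if (req.regulatedOrStateful) return 'LangGraph';
  if (req.multiAgentTeams) return 'CrewAI';
  if (req.codeAwareAgents) return 'OpenCode Agent SDK';
  if (req.openaiCommitted) return 'OpenAI Agents SDK';
  if (req.geminiCommitted) return 'Google ADK';
  return 'LangChain'; // "I don't know my requirements yet"
}
```

Putting the hard constraints (retrieval quality, regulation) before the vendor preferences mirrors how the profiles below weigh trade-offs: a vendor commitment rarely outranks the dominant technical requirement.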
25
+
26
+ ## Framework Profiles
27
+
28
+ ### CrewAI
29
+ - **Type:** Multi-agent orchestration
30
+ - **Language:** Python only
31
+ - **Model support:** Model-agnostic
32
+ - **Learning curve:** Beginner (role/task/crew maps to real teams)
33
+ - **Best for:** Content pipelines, research automation, business process workflows, rapid prototyping
34
+ - **Avoid if:** Fine-grained state management, TypeScript, fault-tolerant checkpointing, complex conditional branching
35
+ - **Strengths:** Fastest multi-agent prototyping, 5.76x faster than LangGraph on QA tasks, built-in memory (short/long/entity/contextual), Flows architecture, standalone (no LangChain dep)
36
+ - **Weaknesses:** Limited checkpointing, coarse error handling, Python only
37
+ - **Eval concerns:** task decomposition accuracy, inter-agent handoff, goal completion rate, loop detection
38
+
39
+ ### LlamaIndex
40
+ - **Type:** RAG and data ingestion
41
+ - **Language:** Python + TypeScript
42
+ - **Model support:** Model-agnostic
43
+ - **Learning curve:** Intermediate
44
+ - **Best for:** Legal research, internal knowledge assistants, enterprise document search, any system where retrieval quality is the #1 priority
45
+ - **Avoid if:** Primary need is agent orchestration, multi-agent collaboration, or chatbot conversation flow
46
+ - **Strengths:** Best-in-class document parsing (LlamaParse), 35% retrieval accuracy improvement, 20-30% faster queries, mixed retrieval strategies (vector + graph + reranker)
47
+ - **Weaknesses:** Data framework first — agent orchestration is secondary
48
+ - **Eval concerns:** Context faithfulness, hallucination, answer relevance, retrieval precision/recall
49
+
50
+ ### LangChain
51
+ - **Type:** General-purpose LLM framework
52
+ - **Language:** Python + TypeScript
53
+ - **Model support:** Model-agnostic (widest ecosystem)
54
+ - **Learning curve:** Intermediate–Advanced
55
+ - **Best for:** Evolving requirements, many third-party integrations, teams wanting one framework for everything, RAG + agents + chains
56
+ - **Avoid if:** Simple well-defined use case, RAG-primary (use LlamaIndex), complex stateful workflows (use LangGraph), performance at scale is critical
57
+ - **Strengths:** Largest community and integration ecosystem, 25% faster development vs scratch, covers RAG/agents/chains/memory
58
+ - **Weaknesses:** Abstraction overhead, p99 latency degrades under load, complexity creep risk
59
+ - **Eval concerns:** End-to-end task completion, chain correctness, retrieval quality
60
+
61
+ ### LangGraph
62
+ - **Type:** Stateful agent workflows (graph-based)
63
+ - **Language:** Python + TypeScript (full parity)
64
+ - **Model support:** Model-agnostic (inherits LangChain integrations)
65
+ - **Learning curve:** Intermediate–Advanced (graph mental model)
66
+ - **Best for:** Production-grade stateful workflows, regulated industries, audit trails, human-in-the-loop flows, fault-tolerant multi-step agents
67
+ - **Avoid if:** Simple chatbot, purely linear workflow, rapid prototyping
68
+ - **Strengths:** Best checkpointing (every node), time-travel debugging, native Postgres/Redis persistence, streaming support, chosen by 62% of developers for stateful agent work (2026)
69
+ - **Weaknesses:** More upfront scaffolding, steeper curve, overkill for simple cases
70
+ - **Eval concerns:** State transition correctness, goal completion rate, tool use accuracy, safety guardrails
71
+
72
+ ### OpenAI Agents SDK
73
+ - **Type:** Native OpenAI agent framework
74
+ - **Language:** Python + TypeScript
75
+ - **Model support:** Optimized for OpenAI (supports 100+ via Chat Completions compatibility)
76
+ - **Learning curve:** Beginner (4 primitives: Agents, Handoffs, Guardrails, Tracing)
77
+ - **Best for:** OpenAI-committed teams, rapid agent prototyping, voice agents (gpt-realtime), teams wanting visual builder (AgentKit)
78
+ - **Avoid if:** Model flexibility needed, complex multi-agent collaboration, persistent state management required, vendor lock-in concern
79
+ - **Strengths:** Simplest mental model, built-in tracing and guardrails, Handoffs for agent delegation, Realtime Agents for voice
80
+ - **Weaknesses:** OpenAI vendor lock-in, no built-in persistent state, younger ecosystem
81
+ - **Eval concerns:** Instruction following, safety guardrails, escalation accuracy, tone consistency
82
+
83
+ ### OpenCode Agent SDK (Anthropic)
84
+ - **Type:** Code-aware autonomous agent framework
85
+ - **Language:** Python + TypeScript
86
+ - **Model support:** OpenCode models only
87
+ - **Learning curve:** Intermediate (18 hook events, MCP, tool decorators)
88
+ - **Best for:** Developer tooling, code generation/review agents, autonomous coding assistants, MCP-heavy architectures, safety-critical applications
89
+ - **Avoid if:** Model flexibility needed, stable/mature API required, use case unrelated to code/tool-use
90
+ - **Strengths:** Deepest MCP integration, built-in filesystem/shell access, 18 lifecycle hooks, automatic context compaction, extended thinking, safety-first design
91
+ - **Weaknesses:** OpenCode-only vendor lock-in, newer/evolving API, smaller community
92
+ - **Eval concerns:** Tool use correctness, safety, code quality, instruction following
93
+
94
+ ### AutoGen / AG2 / Microsoft Agent Framework
95
+ - **Type:** Multi-agent conversational framework
96
+ - **Language:** Python (AG2), Python + .NET (Microsoft Agent Framework)
97
+ - **Model support:** Model-agnostic
98
+ - **Learning curve:** Intermediate–Advanced
99
+ - **Best for:** Research applications, conversational problem-solving, code generation + execution loops, Microsoft/.NET shops
100
+ - **Avoid if:** You want ecosystem stability, deterministic workflows, or the safest long-term bet (fragmentation risk)
+ - **Strengths:** Most sophisticated conversational agent patterns, code generation + execution loop, async event-driven (v0.4+), cross-language interop (Microsoft Agent Framework)
+ - **Weaknesses:** Ecosystem fragmented (AutoGen maintenance mode, AG2 fork, Microsoft Agent Framework preview) — genuine long-term risk
+ - **Eval concerns:** Conversation goal completion, consensus quality, code execution correctness
+
+ ### Google ADK (Agent Development Kit)
+ - **Type:** Multi-agent orchestration framework
+ - **Language:** Python + Java
+ - **Model support:** Optimized for Gemini; supports other models via LiteLLM
+ - **Learning curve:** Intermediate (agent/tool/session model, familiar if you know LangGraph)
+ - **Best for:** Google Cloud / Vertex AI shops, multi-agent workflows needing built-in session management and memory, teams already committed to Gemini, agent pipelines that need Google Search / BigQuery tool integration
+ - **Avoid if:** You need model flexibility beyond Gemini, cannot accept a Google Cloud dependency, or run a TypeScript-only stack
+ - **Strengths:** First-party Google support, built-in session/memory/artifact management, tight Vertex AI and Google Search integration, own eval framework (RAGAS-compatible), multi-agent by design (sequential, parallel, loop patterns), Java SDK for enterprise teams
+ - **Weaknesses:** Gemini vendor lock-in in practice, younger community than LangChain/LlamaIndex, less third-party integration depth
+ - **Eval concerns:** Multi-agent task decomposition, tool use correctness, session state consistency, goal completion rate
+
+ ### Haystack
+ - **Type:** NLP pipeline framework
+ - **Language:** Python
+ - **Model support:** Model-agnostic
+ - **Learning curve:** Intermediate
+ - **Best for:** Explicit, auditable NLP pipelines, document processing with fine-grained control, enterprise search, regulated industries needing transparency
+ - **Avoid if:** You need rapid prototyping, multi-agent workflows, or a large community
+ - **Strengths:** Explicit pipeline control, strong for structured data pipelines, good documentation
+ - **Weaknesses:** Smaller community, less agent-oriented than alternatives
+ - **Eval concerns:** Extraction accuracy, pipeline output validity, retrieval quality
+
+ ---
+
+ ## Decision Dimensions
+
+ ### By System Type
+
+ | System Type | Primary Framework(s) | Key Eval Concerns |
+ |-------------|---------------------|-------------------|
+ | RAG / Knowledge Q&A | LlamaIndex, LangChain | Context faithfulness, hallucination, retrieval precision/recall |
+ | Multi-agent orchestration | CrewAI, LangGraph, Google ADK | Task decomposition, handoff quality, goal completion |
+ | Conversational assistants | OpenAI Agents SDK, OpenCode Agent SDK | Tone, safety, instruction following, escalation |
+ | Structured data extraction | LangChain, LlamaIndex | Schema compliance, extraction accuracy |
+ | Autonomous task agents | LangGraph, OpenAI Agents SDK | Safety guardrails, tool correctness, cost adherence |
+ | Content generation | OpenCode Agent SDK, OpenAI Agents SDK | Brand voice, factual accuracy, tone |
+ | Code automation | OpenCode Agent SDK | Code correctness, safety, test pass rate |
+
+ ### By Team Size and Stage
+
+ | Context | Recommendation |
+ |---------|----------------|
+ | Solo dev, prototyping | OpenAI Agents SDK or CrewAI (fastest to running) |
+ | Solo dev, RAG | LlamaIndex (batteries included) |
+ | Team, production, stateful | LangGraph (best fault tolerance) |
+ | Team, evolving requirements | LangChain (broadest escape hatches) |
+ | Team, multi-agent | CrewAI (simplest role abstraction) |
+ | Enterprise, .NET | AutoGen AG2 / Microsoft Agent Framework |
+
+ ### By Model Commitment
+
+ | Preference | Framework |
+ |-----------|-----------|
+ | OpenAI-only | OpenAI Agents SDK |
+ | Anthropic/OpenCode-only | OpenCode Agent SDK |
+ | Google/Gemini-committed | Google ADK |
+ | Model-agnostic (full flexibility) | LangChain, LlamaIndex, CrewAI, LangGraph, Haystack |
+
+ ---
+
+ ## Anti-Patterns
+
+ 1. **Using LangChain for simple chatbots** — Direct SDK call is less code, faster, and easier to debug
+ 2. **Using CrewAI for complex stateful workflows** — Checkpointing gaps will bite you in production
+ 3. **Using OpenAI Agents SDK with non-OpenAI models** — Loses the integration benefits you chose it for
+ 4. **Using LlamaIndex as a multi-agent framework** — It can do agents, but that's not its strength
+ 5. **Defaulting to LangChain without evaluating alternatives** — "Everyone uses it" ≠ right for your use case
+ 6. **Starting a new project on AutoGen (not AG2)** — AutoGen is in maintenance mode; use AG2 or wait for Microsoft Agent Framework GA
+ 7. **Choosing LangGraph for simple linear flows** — The graph overhead is not worth it; use LangChain chains instead
+ 8. **Ignoring vendor lock-in** — Provider-native SDKs (OpenAI, OpenCode) trade flexibility for integration depth; decide consciously
+
+ ---
+
+ ## Combination Plays (Multi-Framework Stacks)
+
+ | Production Pattern | Stack |
+ |-------------------|-------|
+ | RAG with observability | LlamaIndex + LangSmith or Langfuse |
+ | Stateful agent with RAG | LangGraph + LlamaIndex |
+ | Multi-agent with tracing | CrewAI + Langfuse |
+ | OpenAI agents with evals | OpenAI Agents SDK + Promptfoo or Braintrust |
+ | OpenCode agents with MCP | OpenCode Agent SDK + LangSmith or Arize Phoenix |
@@ -0,0 +1,114 @@
+ # Common Bug Patterns
+
+ A checklist of frequent bug patterns to scan before forming hypotheses, ordered by frequency. Check these FIRST — they cover ~80% of bugs across all technology stacks.
+
+ <patterns>
+
+ ## Null / Undefined Access
+
+ - **Null property access** — accessing property on `null` or `undefined`, missing null check or optional chaining
+ - **Missing return value** — function returns `undefined` instead of expected value, missing `return` statement or wrong branch
+ - **Destructuring null** — array/object destructuring on `null`/`undefined`, API returned error shape instead of data
+ - **Undefaulted optional** — optional parameter used without default, caller omitted argument
+
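A minimal sketch of the first two patterns; the function and data names (`getCity`, `classify`) are illustrative, not from any real codebase:

```javascript
// Null property access: user may be null when a lookup fails.
function getCity(user) {
  // Optional chaining + nullish fallback instead of user.address.city
  return user?.address?.city ?? "unknown";
}

// Missing return value: every branch must return explicitly.
function classify(n) {
  if (n < 0) return "negative";
  if (n === 0) return "zero";
  return "positive"; // without this line, callers get undefined
}
```

Optional chaining short-circuits to `undefined` instead of throwing, and `??` keeps valid falsy values like `0` or `""` from triggering the fallback.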
+ ## Off-by-One / Boundary
+
+ - **Wrong loop bound** — loop starts at 1 instead of 0, or ends at `length` instead of `length - 1`
+ - **Fence-post error** — "N items need N-1 separators" miscounted
+ - **Inclusive vs exclusive** — range boundary `<` vs `<=`, slice/substring end index
+ - **Empty collection** — `.length === 0` falls through to logic assuming items exist
+
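The loop-bound and empty-collection entries in a small, self-contained sketch (helper names hypothetical):

```javascript
// Wrong loop bound: arr[arr.length] is one past the end (undefined).
// Empty collection: guard before assuming items exist.
function lastItem(arr) {
  return arr.length === 0 ? undefined : arr[arr.length - 1];
}

// Fence-post: N items need N - 1 separators; join gets this right.
function joinWithComma(arr) {
  return arr.join(",");
}
```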
+ ## Async / Timing
+
+ - **Missing await** — async function called without `await`, gets Promise object instead of resolved value
+ - **Race condition** — two async operations read/write same state without coordination
+ - **Stale closure** — callback captures old variable value, not current one
+ - **Initialization order** — event handler fires before setup complete
+ - **Leaked timer** — timeout/interval not cleaned up, fires after component/context destroyed
+
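The missing-`await` pattern in a nutshell; `fetchCount` here is a stand-in for any real async call:

```javascript
// An async function ALWAYS returns a Promise, even for a plain value.
async function fetchCount() {
  return 42;
}

async function main() {
  const wrong = fetchCount();       // a pending Promise, not 42
  const right = await fetchCount(); // the resolved value
  return { wrongIsPromise: wrong instanceof Promise, right };
}
```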
+ ## State Management
+
+ - **Shared mutation** — object/array modified in place affects other consumers
+ - **Stale render** — state updated but UI not re-rendered, missing reactive trigger or wrong reference
+ - **Stale handler state** — closure captures state at bind time, not current value
+ - **Dual source of truth** — same data stored in two places, one gets out of sync
+ - **Invalid transition** — state machine allows transition missing guard condition
+
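A sketch of shared mutation, assuming a hypothetical shared `defaults` object: `Object.assign` onto the shared target leaks the change to every consumer, while spreading into a fresh object does not.

```javascript
const defaults = { retries: 3, tags: [] };

// Shared mutation: modifies defaults in place for everyone.
function badConfig(overrides) {
  return Object.assign(defaults, overrides);
}

// Safe: fresh object per call, defaults untouched.
function goodConfig(overrides) {
  return { ...defaults, ...overrides };
}
```

Note the spread is shallow: `tags` is still the same array, so nested state needs a deeper copy.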
+ ## Import / Module
+
+ - **Circular dependency** — module A imports B, B imports A, one gets `undefined`
+ - **Export mismatch** — default vs named export, `import X` vs `import { X }`
+ - **Wrong extension** — `.js` vs `.cjs` vs `.mjs`, `.ts` vs `.tsx`
+ - **Path case sensitivity** — works on Windows/macOS, fails on Linux
+ - **Missing extension** — ESM requires explicit file extensions in imports
+
+ ## Type / Coercion
+
+ - **String vs number compare** — `"5" > "10"` is `true` (lexicographic), `5 > 10` is `false`
+ - **Implicit coercion** — `==` instead of `===`, truthy/falsy surprises (`0`, `""`, `[]`)
+ - **Numeric precision** — `0.1 + 0.2 !== 0.3`, large integers lose precision
+ - **Falsy valid value** — value is `0` or `""` which is valid but falsy
+
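The comparison and falsy-value entries, demonstrated directly (`badDefault`/`goodDefault` are illustrative names):

```javascript
// Lexicographic string comparison: '5' > '1' by char code.
const lex = "5" > "10"; // true
const num = 5 > 10;     // false

// Falsy valid value: 0 is legitimate input but fails a truthiness check.
function badDefault(count) {
  return count || 10; // 0 collapses to 10
}
function goodDefault(count) {
  return count ?? 10; // only null/undefined fall back
}
```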
+ ## Environment / Config
+
+ - **Missing env var** — environment variable missing or wrong value in dev vs prod vs CI
+ - **Hardcoded path** — works on one machine, fails on another
+ - **Port conflict** — port already in use, previous process still running
+ - **Permission denied** — different user/group in deployment
+ - **Missing dependency** — not in package.json or not installed
+
+ ## Data Shape / API Contract
+
+ - **Changed response shape** — backend updated, frontend expects old format
+ - **Wrong container type** — array where object expected or vice versa, `data` vs `data.results` vs `data[0]`
+ - **Missing required field** — required field omitted in payload, backend returns validation error
+ - **Date format mismatch** — ISO string vs timestamp vs locale string
+ - **Encoding mismatch** — UTF-8 vs Latin-1, URL encoding, HTML entities
+
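One defensive idiom for the wrong-container-type case, as a hypothetical `extractRows` helper that tolerates both `data` and `data.results` shapes:

```javascript
// Accept either a bare array or an envelope with a results array.
function extractRows(payload) {
  if (Array.isArray(payload)) return payload;
  if (Array.isArray(payload?.results)) return payload.results;
  return []; // unknown shape: fail soft with an empty list
}
```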
+ ## Regex / String
+
+ - **Sticky lastIndex** — regex `g` flag with `.test()` then `.exec()`, `lastIndex` not reset between calls
+ - **Missing escape** — `.` matches any char, `$` is special, backslash needs doubling
+ - **Greedy overmatch** — `.*` eats through delimiters, need `.*?`
+ - **Wrong quote type** — string interpolation needs backticks for template literals
+
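The sticky `lastIndex` behavior on a minimal regex:

```javascript
// A regex with the g flag remembers where the last match stopped.
const re = /foo/g;
const first = re.test("foo");  // true; lastIndex advances to 3
const second = re.test("foo"); // false: search resumed at index 3
re.lastIndex = 0;              // explicit reset before reusing the regex
const third = re.test("foo");  // true again
```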
+ ## Error Handling
+
+ - **Swallowed error** — empty `catch {}` or logs but doesn't rethrow/handle
+ - **Wrong error type** — catches base `Error` when specific type needed
+ - **Error in handler** — cleanup code throws, masking original error
+ - **Unhandled rejection** — missing `.catch()` or try/catch around `await`
+
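A sketch of handling a rejection around `await` instead of swallowing it; `mightFail` and `safeCall` are illustrative names:

```javascript
async function mightFail(flag) {
  if (flag) throw new Error("boom");
  return "ok";
}

async function safeCall(flag) {
  try {
    return await mightFail(flag);
  } catch (err) {
    // Handle (or rethrow) with the error's message intact;
    // never an empty catch that discards it.
    return `handled: ${err.message}`;
  }
}
```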
+ ## Scope / Closure
+
+ - **Variable shadowing** — inner scope declares same name, hides outer variable
+ - **Loop variable capture** — all closures share same `var i`, use `let` or bind
+ - **Lost this binding** — callback loses context, need `.bind()` or arrow function
+ - **Scope confusion** — `var` hoisted to function, `let`/`const` block-scoped
+
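The `var` vs `let` loop-capture difference, side by side (function names hypothetical):

```javascript
// var: one shared binding, so every closure sees the final i.
function makeCountersVar() {
  const fns = [];
  for (var i = 0; i < 3; i++) fns.push(() => i);
  return fns.map((f) => f()); // [3, 3, 3]
}

// let: a fresh binding per iteration, so each closure keeps its own i.
function makeCountersLet() {
  const fns = [];
  for (let i = 0; i < 3; i++) fns.push(() => i);
  return fns.map((f) => f()); // [0, 1, 2]
}
```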
+ </patterns>
+
+ <usage>
+
+ ## How to Use This Checklist
+
+ 1. **Before forming any hypothesis**, scan the relevant categories based on the symptom
+ 2. **Match symptom to pattern** — if the bug involves "undefined is not an object", check Null/Undefined first
+ 3. **Each checked pattern is a hypothesis candidate** — verify or eliminate with evidence
+ 4. **If no pattern matches**, proceed to open-ended investigation
+
+ ### Symptom-to-Category Quick Map
+
+ | Symptom | Check First |
+ |---------|------------|
+ | "Cannot read property of undefined/null" | Null/Undefined Access |
+ | "X is not a function" | Import/Module, Type/Coercion |
+ | Works sometimes, fails sometimes | Async/Timing, State Management |
+ | Works locally, fails in CI/prod | Environment/Config |
+ | Wrong data displayed | Data Shape, State Management |
+ | Off by one item / missing last item | Off-by-One/Boundary |
+ | "Unexpected token" / parse error | Data Shape, Type/Coercion |
+ | Memory leak / growing resource usage | Async/Timing (cleanup), Scope/Closure |
+ | Infinite loop / max call stack | State Management, Async/Timing |
+
+ </usage>
@@ -0,0 +1,73 @@
+ ---
+ component: plan-checker
+ version: 1
+ last_calibrated: 2026-03-24
+ ---
+
+ # Plan-Checker Few-Shot Examples
+
+ ## Positive Examples
+
+ ### Example 1: BLOCKER flagged for vague task action missing file and function names
+
+ **Input:** Plan task with action: "Implement the authentication feature for the API routes."
+
+ **Output:**
+
+ > ```yaml
+ > issues:
+ >   - dimension: task_completeness
+ >     severity: BLOCKER
+ >     finding: "task T1 action says 'implement the authentication feature' without naming target files, functions to create, or middleware to apply. Executor cannot determine what to build."
+ >     affected_field: "<action>"
+ >     suggested_fix: "Specify: create authMiddleware in src/middleware/auth.js, apply to routes in src/routes/api.js lines 12-45, verify with integration test"
+ > ```
+
+ **Why this is good:** The checker cited the specific dimension (task_completeness), quoted the problematic text, explained why it is a blocker (executor cannot determine what to build), and gave a concrete fix with file paths and function names. The finding is actionable -- the planner knows exactly what to add.
+
+ ### Example 2: BLOCKER for same-wave file conflict between two plans
+
+ **Input:** Two plans in wave 1 -- Plan 01 modifies `get-shit-done/workflows/execute-phase.md`, Plan 02 also modifies `get-shit-done/workflows/execute-phase.md`. No `depends_on` declared.
+
+ **Output:**
+
+ > ```yaml
+ > issues:
+ >   - dimension: dependency_correctness
+ >     severity: BLOCKER
+ >     finding: "Plans 01 and 02 both modify get-shit-done/workflows/execute-phase.md in wave 1 with no depends_on relationship. Concurrent execution will cause merge conflicts or lost changes."
+ >     affected_field: "files_modified"
+ >     suggested_fix: "Either move Plan 02 to wave 2 with depends_on: ['01'] or consolidate the file changes into a single plan"
+ > ```
+
+ **Why this is good:** The checker identified a real structural problem -- two plans modifying the same file in the same wave without a dependency relationship. It cited dependency_correctness, named both plans, the conflicting file, and provided two alternative fixes.
+
+ ## Negative Examples
+
+ ### Example 1: All dimensions pass when verify commands are missing
+
+ **Input:** Plan with 3 tasks. Task T2 has `<verify></verify>` (empty). Task T3 has `<verify>echo "done"</verify>` (trivial, cannot distinguish pass from fail).
+
+ **Output:**
+
+ > ## CHECK PASSED
+ > Plans: 1 | Tasks: 3 | Dimensions: 10 | Issues: 0
+
+ **Why this is bad:** The checker missed two violations. verification_derivation requires verify commands to be runnable and to test actual output -- an empty verify block is a BLOCKER. Verify commands must distinguish pass from fail -- `echo "done"` always returns 0 regardless of implementation state. Both should have been flagged.
+
+ ### Example 2: False positive INFO for a correctly-sized plan
+
+ **Input:** Plan with 3 tasks, 6 unique files, 1 dependency. All within limits.
+
+ **Output:**
+
+ > ```yaml
+ > issues:
+ >   - dimension: scope_sanity
+ >     severity: INFO
+ >     finding: "Plan has 3 tasks -- consider splitting into smaller plans for faster iteration"
+ >     affected_field: "task count"
+ >     suggested_fix: "Split tasks into separate plans"
+ > ```
+
+ **Why this is bad:** The checker flagged a non-issue. scope_sanity allows 2-3 tasks per plan -- 3 tasks is within limits. The checker applied a personal preference ("smaller is better") rather than the documented threshold. This wastes planner time on false positives and erodes trust in the checker's judgment. A correct check would produce no issue for this plan.