@qball-inc/the-bulwark 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/plugin.json +43 -0
- package/agents/bulwark-fix-validator.md +633 -0
- package/agents/bulwark-implementer.md +391 -0
- package/agents/bulwark-issue-analyzer.md +308 -0
- package/agents/bulwark-standards-reviewer.md +221 -0
- package/agents/plan-creation-architect.md +323 -0
- package/agents/plan-creation-eng-lead.md +352 -0
- package/agents/plan-creation-po.md +300 -0
- package/agents/plan-creation-qa-critic.md +334 -0
- package/agents/product-ideation-competitive-analyzer.md +298 -0
- package/agents/product-ideation-idea-validator.md +268 -0
- package/agents/product-ideation-market-researcher.md +292 -0
- package/agents/product-ideation-pattern-documenter.md +308 -0
- package/agents/product-ideation-segment-analyzer.md +303 -0
- package/agents/product-ideation-strategist.md +259 -0
- package/agents/statusline-setup.md +97 -0
- package/hooks/hooks.json +59 -0
- package/package.json +45 -0
- package/scripts/hooks/cleanup-stale.sh +13 -0
- package/scripts/hooks/enforce-quality.sh +166 -0
- package/scripts/hooks/implementer-quality.sh +256 -0
- package/scripts/hooks/inject-protocol.sh +52 -0
- package/scripts/hooks/suggest-pipeline.sh +175 -0
- package/scripts/hooks/track-pipeline-start.sh +37 -0
- package/scripts/hooks/track-pipeline-stop.sh +52 -0
- package/scripts/init-rules.sh +35 -0
- package/scripts/init.sh +151 -0
- package/skills/anthropic-validator/SKILL.md +607 -0
- package/skills/anthropic-validator/references/agents-checklist.md +131 -0
- package/skills/anthropic-validator/references/commands-checklist.md +102 -0
- package/skills/anthropic-validator/references/hooks-checklist.md +151 -0
- package/skills/anthropic-validator/references/mcp-checklist.md +136 -0
- package/skills/anthropic-validator/references/plugins-checklist.md +148 -0
- package/skills/anthropic-validator/references/skills-checklist.md +85 -0
- package/skills/assertion-patterns/SKILL.md +296 -0
- package/skills/bug-magnet-data/SKILL.md +284 -0
- package/skills/bug-magnet-data/context/cli-args.md +91 -0
- package/skills/bug-magnet-data/context/db-query.md +104 -0
- package/skills/bug-magnet-data/context/file-contents.md +103 -0
- package/skills/bug-magnet-data/context/http-body.md +91 -0
- package/skills/bug-magnet-data/context/process-spawn.md +123 -0
- package/skills/bug-magnet-data/data/booleans/boundaries.yaml +143 -0
- package/skills/bug-magnet-data/data/collections/arrays.yaml +114 -0
- package/skills/bug-magnet-data/data/collections/objects.yaml +123 -0
- package/skills/bug-magnet-data/data/concurrency/race-conditions.yaml +118 -0
- package/skills/bug-magnet-data/data/concurrency/state-machines.yaml +115 -0
- package/skills/bug-magnet-data/data/dates/boundaries.yaml +137 -0
- package/skills/bug-magnet-data/data/dates/invalid.yaml +132 -0
- package/skills/bug-magnet-data/data/dates/timezone.yaml +118 -0
- package/skills/bug-magnet-data/data/encoding/charset.yaml +79 -0
- package/skills/bug-magnet-data/data/encoding/normalization.yaml +105 -0
- package/skills/bug-magnet-data/data/formats/email.yaml +154 -0
- package/skills/bug-magnet-data/data/formats/json.yaml +187 -0
- package/skills/bug-magnet-data/data/formats/url.yaml +165 -0
- package/skills/bug-magnet-data/data/language-specific/javascript.yaml +182 -0
- package/skills/bug-magnet-data/data/language-specific/python.yaml +174 -0
- package/skills/bug-magnet-data/data/language-specific/rust.yaml +148 -0
- package/skills/bug-magnet-data/data/numbers/boundaries.yaml +161 -0
- package/skills/bug-magnet-data/data/numbers/precision.yaml +89 -0
- package/skills/bug-magnet-data/data/numbers/special.yaml +69 -0
- package/skills/bug-magnet-data/data/strings/boundaries.yaml +109 -0
- package/skills/bug-magnet-data/data/strings/injection.yaml +208 -0
- package/skills/bug-magnet-data/data/strings/special-chars.yaml +190 -0
- package/skills/bug-magnet-data/data/strings/unicode.yaml +139 -0
- package/skills/bug-magnet-data/references/external-lists.md +115 -0
- package/skills/bulwark-brainstorm/SKILL.md +563 -0
- package/skills/bulwark-brainstorm/references/at-teammate-prompts.md +60 -0
- package/skills/bulwark-brainstorm/references/role-critical-analyst.md +78 -0
- package/skills/bulwark-brainstorm/references/role-development-lead.md +66 -0
- package/skills/bulwark-brainstorm/references/role-product-delivery-lead.md +79 -0
- package/skills/bulwark-brainstorm/references/role-product-manager.md +62 -0
- package/skills/bulwark-brainstorm/references/role-project-sme.md +59 -0
- package/skills/bulwark-brainstorm/references/role-technical-architect.md +66 -0
- package/skills/bulwark-research/SKILL.md +298 -0
- package/skills/bulwark-research/references/viewpoint-contrarian.md +63 -0
- package/skills/bulwark-research/references/viewpoint-direct-investigation.md +62 -0
- package/skills/bulwark-research/references/viewpoint-first-principles.md +65 -0
- package/skills/bulwark-research/references/viewpoint-practitioner.md +62 -0
- package/skills/bulwark-research/references/viewpoint-prior-art.md +66 -0
- package/skills/bulwark-scaffold/SKILL.md +330 -0
- package/skills/bulwark-statusline/SKILL.md +161 -0
- package/skills/bulwark-statusline/scripts/statusline.sh +144 -0
- package/skills/bulwark-verify/SKILL.md +519 -0
- package/skills/code-review/SKILL.md +428 -0
- package/skills/code-review/examples/anti-patterns/linting.ts +181 -0
- package/skills/code-review/examples/anti-patterns/security.ts +91 -0
- package/skills/code-review/examples/anti-patterns/standards.ts +195 -0
- package/skills/code-review/examples/anti-patterns/type-safety.ts +108 -0
- package/skills/code-review/examples/recommended/linting.ts +195 -0
- package/skills/code-review/examples/recommended/security.ts +154 -0
- package/skills/code-review/examples/recommended/standards.ts +231 -0
- package/skills/code-review/examples/recommended/type-safety.ts +181 -0
- package/skills/code-review/frameworks/angular.md +218 -0
- package/skills/code-review/frameworks/django.md +235 -0
- package/skills/code-review/frameworks/express.md +207 -0
- package/skills/code-review/frameworks/flask.md +298 -0
- package/skills/code-review/frameworks/generic.md +146 -0
- package/skills/code-review/frameworks/react.md +152 -0
- package/skills/code-review/frameworks/vue.md +244 -0
- package/skills/code-review/references/linting-patterns.md +221 -0
- package/skills/code-review/references/security-patterns.md +125 -0
- package/skills/code-review/references/standards-patterns.md +246 -0
- package/skills/code-review/references/type-safety-patterns.md +130 -0
- package/skills/component-patterns/SKILL.md +131 -0
- package/skills/component-patterns/references/pattern-cli-command.md +118 -0
- package/skills/component-patterns/references/pattern-database.md +166 -0
- package/skills/component-patterns/references/pattern-external-api.md +139 -0
- package/skills/component-patterns/references/pattern-file-parser.md +168 -0
- package/skills/component-patterns/references/pattern-http-server.md +162 -0
- package/skills/component-patterns/references/pattern-process-spawner.md +133 -0
- package/skills/continuous-feedback/SKILL.md +327 -0
- package/skills/continuous-feedback/references/collect-instructions.md +81 -0
- package/skills/continuous-feedback/references/specialize-code-review.md +82 -0
- package/skills/continuous-feedback/references/specialize-general.md +98 -0
- package/skills/continuous-feedback/references/specialize-test-audit.md +81 -0
- package/skills/create-skill/SKILL.md +359 -0
- package/skills/create-skill/references/agent-conventions.md +194 -0
- package/skills/create-skill/references/agent-template.md +195 -0
- package/skills/create-skill/references/content-guidance.md +291 -0
- package/skills/create-skill/references/decision-framework.md +124 -0
- package/skills/create-skill/references/template-pipeline.md +217 -0
- package/skills/create-skill/references/template-reference-heavy.md +111 -0
- package/skills/create-skill/references/template-research.md +210 -0
- package/skills/create-skill/references/template-script-driven.md +172 -0
- package/skills/create-skill/references/template-simple.md +80 -0
- package/skills/create-subagent/SKILL.md +353 -0
- package/skills/create-subagent/references/agent-conventions.md +268 -0
- package/skills/create-subagent/references/content-guidance.md +232 -0
- package/skills/create-subagent/references/decision-framework.md +134 -0
- package/skills/create-subagent/references/template-single-agent.md +192 -0
- package/skills/fix-bug/SKILL.md +241 -0
- package/skills/governance-protocol/SKILL.md +116 -0
- package/skills/init/SKILL.md +341 -0
- package/skills/issue-debugging/SKILL.md +385 -0
- package/skills/issue-debugging/references/anti-patterns.md +245 -0
- package/skills/issue-debugging/references/debug-report-schema.md +227 -0
- package/skills/mock-detection/SKILL.md +511 -0
- package/skills/mock-detection/references/false-positive-prevention.md +402 -0
- package/skills/mock-detection/references/stub-patterns.md +236 -0
- package/skills/pipeline-templates/SKILL.md +215 -0
- package/skills/pipeline-templates/references/code-change-workflow.md +277 -0
- package/skills/pipeline-templates/references/code-review.md +336 -0
- package/skills/pipeline-templates/references/fix-validation.md +421 -0
- package/skills/pipeline-templates/references/new-feature.md +335 -0
- package/skills/pipeline-templates/references/research-brainstorm.md +161 -0
- package/skills/pipeline-templates/references/research-planning.md +257 -0
- package/skills/pipeline-templates/references/test-audit.md +389 -0
- package/skills/pipeline-templates/references/test-execution-fix.md +238 -0
- package/skills/plan-creation/SKILL.md +497 -0
- package/skills/product-ideation/SKILL.md +372 -0
- package/skills/product-ideation/references/analysis-frameworks.md +161 -0
- package/skills/session-handoff/SKILL.md +139 -0
- package/skills/session-handoff/references/examples.md +223 -0
- package/skills/setup-lsp/SKILL.md +312 -0
- package/skills/setup-lsp/references/server-registry.md +85 -0
- package/skills/setup-lsp/references/troubleshooting.md +135 -0
- package/skills/subagent-output-templating/SKILL.md +415 -0
- package/skills/subagent-output-templating/references/examples.md +440 -0
- package/skills/subagent-prompting/SKILL.md +364 -0
- package/skills/subagent-prompting/references/examples.md +342 -0
- package/skills/test-audit/SKILL.md +531 -0
- package/skills/test-audit/references/known-limitations.md +41 -0
- package/skills/test-audit/references/priority-classification.md +30 -0
- package/skills/test-audit/references/prompts/deep-mode-detection.md +83 -0
- package/skills/test-audit/references/prompts/synthesis.md +57 -0
- package/skills/test-audit/references/rewrite-instructions.md +46 -0
- package/skills/test-audit/references/schemas/audit-output.yaml +100 -0
- package/skills/test-audit/references/schemas/diagnostic-output.yaml +49 -0
- package/skills/test-audit/scripts/data-flow-analyzer.ts +509 -0
- package/skills/test-audit/scripts/integration-mock-detector.ts +462 -0
- package/skills/test-audit/scripts/package.json +20 -0
- package/skills/test-audit/scripts/skip-detector.ts +211 -0
- package/skills/test-audit/scripts/verification-counter.ts +295 -0
- package/skills/test-classification/SKILL.md +310 -0
- package/skills/test-fixture-creation/SKILL.md +295 -0
@@ -0,0 +1,298 @@
---
name: bulwark-research
description: Structured multi-viewpoint research using 5 parallel Sonnet sub-agents. Use when deep research is needed on a complex topic before implementation planning.
user-invocable: true
argument-hint: "<topic, filepath, or directory>"
skills:
  - subagent-prompting
---

# Bulwark Research

Structured multi-viewpoint research on a given topic. Spawns 5 Sonnet sub-agents in parallel, each analyzing from a distinct analytical viewpoint, then synthesizes the results into a single research document.

---

## When to Use This Skill

**Load this skill when the user request matches ANY of these patterns:**

| Trigger Pattern | Example User Request |
|-----------------|---------------------|
| Deep research | "Research agent teams", "Investigate loop detection" |
| Topic exploration | "What do we know about X?", "Explore approaches to Y" |
| Pre-planning research | "Before we build X, research the landscape" |
| Multi-viewpoint analysis | "Analyze X from multiple angles" |

**DO NOT use for:**
- Evaluating implementation feasibility (use `bulwark-brainstorm`)
- Quick fact lookup (use web search or codebase exploration)
- Code review (use `code-review`)
- Debugging (use `issue-debugging`)

---

## Dependencies

| Category | Files | Requirement | When to Load |
|----------|-------|-------------|--------------|
| **Viewpoint definitions** | `references/viewpoint-*.md` | **REQUIRED** | Always load all 5 before spawning agents |
| **Output templates** | `templates/viewpoint-output.md` | **REQUIRED** | Include in every agent prompt |
| **Synthesis template** | `templates/synthesis-output.md` | **REQUIRED** | Use when writing synthesis |
| **Subagent prompting** | `subagent-prompting` skill | **REQUIRED** | Load at Stage 1 for the 4-part prompt template |

**Fallback behavior:**
- If a viewpoint reference file is missing: note it in the diagnostic log, reduce to 4 agents, continue
- If an output template is missing: use the schema from this SKILL.md directly

---

## Usage

```
/bulwark-research <topic-or-prompt> [--context <file>]
/bulwark-research --doc <path-to-document>
```

**Arguments:**
- `<topic-or-prompt>` - Free-text topic description or problem statement
- `--context <file>` - Additional context file to provide to all agents
- `--doc <path>` - Use a document as the topic source instead of free text

**Examples:**
- `/bulwark-research "agent teams and multi-agent orchestration"` - Research a topic
- `/bulwark-research --doc plans/proposal.md` - Research from a document
- `/bulwark-research "loop detection" --context docs/architecture.md` - Research with context

---

## Stages

### Stage 1: Pre-Flight

```
Stage 1: Pre-Flight
├── Read problem statement / document
├── AskUserQuestion if ambiguous (iterative, 2-3 questions per round)
├── Slugify topic for output directory
├── Create output directories: $PROJECT_DIR/logs/research/{topic-slug}/ and $PROJECT_DIR/artifacts/research/{topic-slug}/
├── Load subagent-prompting skill
├── Load all 5 references/viewpoint-*.md
├── Load templates/viewpoint-output.md
└── Token budget check (warn if >30% consumed)
```
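The slugify and directory-creation steps can be sketched in shell. This is a minimal illustration only; the exact slug rules are up to the orchestrator, and the sample topic string is invented for the example:

```shell
#!/usr/bin/env bash
# Minimal slugifier sketch: lowercase, collapse non-alphanumerics to hyphens, trim.
PROJECT_DIR="${PROJECT_DIR:-$(mktemp -d)}"   # fallback for standalone runs

slugify() {
  echo "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}

topic_slug="$(slugify "Agent Teams & Multi-Agent Orchestration")"
mkdir -p "$PROJECT_DIR/logs/research/$topic_slug" \
         "$PROJECT_DIR/artifacts/research/$topic_slug"
echo "$topic_slug"   # agent-teams-multi-agent-orchestration
```

Both the logs and artifacts directories are created up front so Stage 2 agents and Stage 3 synthesis have guaranteed write targets.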

**AskUserQuestion Protocol (Pre-Spawn):**

If the problem statement is ambiguous, under-specified, or could benefit from scope boundaries:

1. Ask 2-3 clarifying questions using AskUserQuestion
2. Assess whether the answers provide sufficient clarity to construct high-quality prompts
3. If not, ask up to 3 more questions in a follow-up round
4. Repeat until clarity is achieved (no hard cap on rounds, but each round is 2-3 questions max)
5. If the problem statement is clear and well-scoped from the start, skip this step and note in diagnostics: `pre_flight_interview: skipped (problem statement sufficient)`

### Stage 2: Viewpoint Analysis (5 Sonnet, Parallel)

```
Stage 2: Viewpoint Analysis
├── Construct 5 prompts using 4-part template (GOAL/CONSTRAINTS/CONTEXT/OUTPUT)
├── Each prompt includes:
│   ├── Viewpoint definition from references/viewpoint-{name}.md
│   ├── Output template from templates/viewpoint-output.md
│   ├── Topic description + any user-provided context
│   └── Output path: $PROJECT_DIR/logs/research/{topic-slug}/{NN}-{viewpoint-slug}.md
├── Spawn all 5 agents in parallel via Task tool
│   ├── subagent_type: general-purpose
│   ├── model: sonnet
│   └── All 5 in a single message (parallel)
└── Token budget check after all 5 complete (checkpoint if >55%)
```

**CRITICAL**: Spawn all 5 agents in a single message with 5 Task tool calls. Do NOT spawn sequentially.
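The parallelism requirement is analogous to launching background jobs and then waiting at a barrier. The sketch below is conceptual only: the real mechanism is five Task tool calls in one message, and `run_viewpoint_agent` is a hypothetical stand-in, not an actual API:

```shell
#!/usr/bin/env bash
# Conceptual sketch of "spawn all five, then wait for all five".
cd "$(mktemp -d)"

run_viewpoint_agent() {   # hypothetical stand-in for a single Task tool call
  echo "analysis from viewpoint: $1" > "out-$1.md"
}

for viewpoint in direct-investigation practitioner contrarian first-principles prior-art; do
  run_viewpoint_agent "$viewpoint" &   # all five launched before any completes
done
wait   # barrier: synthesis must not begin until every agent has finished
echo "all five viewpoint outputs ready"
```

The key property is the barrier: no output is read, and no synthesis begins, until every agent has completed.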

### Stage 3: Synthesis

```
Stage 3: Synthesis
├── Read ALL 5 agent output files (MANDATORY — do not skip any)
├── If any output is missing or empty → re-spawn that agent once (max 1 retry)
├── If retry fails → document gap in synthesis under "Incomplete Coverage"
├── Load templates/synthesis-output.md
├── Write synthesis to $PROJECT_DIR/artifacts/research/{topic-slug}/synthesis.md
├── AskUserQuestion to raise open questions with the user (iterative, 2-3 per round)
├── Critical Evaluation Gate (see below)
└── Token budget check (must be <65% after synthesis)
```

**Enforcement**: Do NOT begin writing the synthesis until ALL available agent outputs have been read. The orchestrator must reference every agent's output at least once in the synthesis.

#### Critical Evaluation Gate (Post-User Q&A)

After each AskUserQuestion round, do NOT blindly incorporate user responses. Instead:

**Step 1 — Classify each user response:**

| Classification | Definition | Action |
|---------------|------------|--------|
| **Factual** | Known, verifiable information (e.g., "We use PostgreSQL") | Incorporate directly into synthesis |
| **Opinion** | Preference or priority (e.g., "I'd prefer approach A") | Incorporate directly with attribution: "User preference: ..." |
| **Speculative** | Unvalidated claim or proposed solution (e.g., "I think library X can do this", "What if we used approach Y?") | **Do NOT incorporate.** Trigger Step 2. |

**Step 2 — For Speculative responses, present to the user:**

> "Your suggestion about [X] is unvalidated. I recommend a targeted follow-up research phase with 2 focused agents (Direct Investigation + Contrarian) to verify feasibility and surface risks before incorporating this into the synthesis.
>
> This will spawn 2 Sonnet agents and consume additional token budget.
>
> [Run follow-up research / Incorporate as-is with LOW confidence caveat]"

**Step 3 — If follow-up research is approved:**

1. Spawn 2 Sonnet agents in parallel (single message, 2 Task tool calls):
   - **Direct Investigation** — focused on validating the specific claim/solution
   - **Contrarian** — focused on finding failure modes and alternatives for the specific claim/solution
2. Use the same 4-part prompt template (GOAL/CONSTRAINTS/CONTEXT/OUTPUT)
3. Include the REASONING DEPTH instructions from the viewpoint reference docs
4. Output to: `$PROJECT_DIR/logs/research/{topic-slug}/followup-{NN}-direct-investigation.md` and `followup-{NN}-contrarian.md`
5. Read both outputs, then update the synthesis with the validated findings
6. Tag follow-up findings in the synthesis with: `[Follow-up: validated]`, `[Follow-up: refuted]`, or `[Follow-up: mixed — see details]`

**Step 4 — If the user declines follow-up:**

Incorporate the user's suggestion into the synthesis with an explicit caveat:

> **[Unvalidated — user suggestion, not research-backed]**: {suggestion}

**Repeat**: After updating the synthesis, ask whether the user has additional questions or input. Apply the same classification gate to each round. There is no limit on follow-up rounds, but each round with Speculative input that triggers research consumes ~10-15% of the token budget — warn the user if approaching 60%.

### Stage 4: Diagnostics (REQUIRED)

```
Stage 4: Diagnostics
├── Write diagnostic YAML to $PROJECT_DIR/logs/diagnostics/bulwark-research-{YYYYMMDD-HHMMSS}.yaml
└── Verify completion checklist
```
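Building the timestamped diagnostic path is a one-liner with `date`. A sketch, assuming only that a writable `$PROJECT_DIR` exists (the fallback line covers standalone runs):

```shell
#!/usr/bin/env bash
# Build the diagnostic output path in the mandated naming scheme.
PROJECT_DIR="${PROJECT_DIR:-$(mktemp -d)}"   # fallback for standalone runs
timestamp="$(date +%Y%m%d-%H%M%S)"
diag_path="$PROJECT_DIR/logs/diagnostics/bulwark-research-${timestamp}.yaml"
mkdir -p "$(dirname "$diag_path")"
echo "$diag_path"
```

The `YYYYMMDD-HHMMSS` format sorts lexicographically, so the newest diagnostic file is always last in a directory listing.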

---

## Viewpoints (Sections)

Each viewpoint is a distinct analytical lens. All 5 run in parallel — they do not see each other's output.

### Viewpoint 1: Direct Investigation

**Core Question**: What is this? How does it work? What is the state of the art?

**Focus Areas**:
- Precise definition — what it is and what it is not
- Mechanical operation (architecture, data flow, lifecycle)
- Current state of the art — tooling, adoption, standards
- Key terminology and taxonomy

**Reference**: `references/viewpoint-direct-investigation.md`

### Viewpoint 2: Practitioner Perspective

**Core Question**: How do teams use this in production? What works?

**Focus Areas**:
- Real-world adoption patterns
- Common implementation approaches and trade-offs
- Practical gotchas that documentation doesn't cover
- Operational concerns (debugging, monitoring, maintenance)
- Team skill requirements and learning curves

**Reference**: `references/viewpoint-practitioner.md`

### Viewpoint 3: Contrarian Angle

**Core Question**: What failure modes do most people overlook?

**Focus Areas**:
- Failure modes advocates rarely mention
- Scenarios where this is the wrong choice
- Hidden costs (complexity, maintenance burden, cognitive load)
- Alternatives that might be simpler
- When NOT to use this

**Reference**: `references/viewpoint-contrarian.md`

### Viewpoint 4: First Principles

**Core Question**: What core problem does this solve? What is the minimal viable version?

**Focus Areas**:
- The fundamental problem being addressed (stripped of buzzwords)
- Why existing approaches are insufficient
- The minimal set of capabilities needed to deliver value
- Essential vs. deferrable
- Decomposition into independent sub-problems

**Reference**: `references/viewpoint-first-principles.md`

### Viewpoint 5: Prior Art / Historical

**Core Question**: What similar patterns exist? What lessons come from predecessors?

**Focus Areas**:
- Historical predecessors and analogous patterns
- Evolution trajectories — what succeeded, what failed, and why
- Hype vs. foundational patterns
- Lessons applicable to the current topic

**Reference**: `references/viewpoint-prior-art.md`

---

## Token Budget Management

| Checkpoint | Threshold | Action |
|------------|-----------|--------|
| After constructing all prompts | >30% consumed | Warn user: "5 agents will consume significant context" |
| After reading 3 of 5 outputs | Running tally | If approaching 55%, checkpoint with user |
| After synthesis | Must be <65% | Leave room for session closing |
| Synthesis complete at >65% | Immediate | Create handoff, do not start additional work |

If the token budget is insufficient to complete all 5 agents + synthesis, inform the user and suggest splitting the work (e.g., "3 agents this session, 2 + synthesis next session").
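The checkpoint logic is plain integer arithmetic over consumed vs. total context. A shell sketch, where the `used` and `budget` token counts are hypothetical example values:

```shell
#!/usr/bin/env bash
# Map consumed-context percentage onto the checkpoint thresholds above.
used=112000      # hypothetical: tokens consumed so far
budget=200000    # hypothetical: total context budget

pct=$(( used * 100 / budget ))
echo "consumed: ${pct}%"   # consumed: 56%

if   [ "$pct" -ge 65 ]; then echo "create handoff; do not start additional work"
elif [ "$pct" -ge 55 ]; then echo "checkpoint with user"
elif [ "$pct" -ge 30 ]; then echo "warn: 5 agents will consume significant context"
fi
```

The branches are ordered highest threshold first so the most restrictive applicable action wins.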

---

## Error Handling

| Scenario | Action |
|----------|--------|
| Agent returns empty output | Re-spawn once. If still empty, document the gap in the synthesis. |
| Agent returns truncated output | Accept as-is, note in diagnostics. |
| Agent fails to spawn | Re-spawn once. If it still fails, reduce to 4 agents and document. |
| Token budget exceeded mid-session | Stop spawning, synthesize from available outputs, note incompleteness. |
| User-provided document unreadable | AskUserQuestion for an alternative source. |

---

## Diagnostic Output (REQUIRED)

**MANDATORY**: You MUST write diagnostic output after every invocation. This is Stage 4 and cannot be skipped.

Write to: `$PROJECT_DIR/logs/diagnostics/bulwark-research-{YYYYMMDD-HHMMSS}.yaml`

**Template**: Use `templates/diagnostic-output.yaml` for the schema. Fill in actual values from the session.

---

## Completion Checklist

**IMPORTANT**: Before returning to the user, verify ALL items are complete:

- [ ] Stage 1: Pre-flight complete (topic defined, directories created, skills loaded)
- [ ] Stage 1: AskUserQuestion used if the topic was ambiguous
- [ ] Stage 2: All 5 viewpoint agents spawned in parallel
- [ ] Stage 2: All agent outputs written to `$PROJECT_DIR/logs/research/{topic-slug}/`
- [ ] Stage 3: ALL 5 outputs read before writing the synthesis
- [ ] Stage 3: Synthesis written using `templates/synthesis-output.md`
- [ ] Stage 3: AskUserQuestion used for post-synthesis review
- [ ] Stage 3: Critical Evaluation Gate applied to all user responses (classified as Factual/Opinion/Speculative)
- [ ] Stage 3: Follow-up research spawned for Speculative responses (or the user declined and a caveat was added)
- [ ] Stage 3: Synthesis written to `$PROJECT_DIR/artifacts/research/{topic-slug}/synthesis.md`
- [ ] Stage 4: Diagnostic YAML written to `$PROJECT_DIR/logs/diagnostics/`

**Do NOT return to the user until all checkboxes can be marked complete.**

@@ -0,0 +1,63 @@
# Viewpoint: Contrarian Angle

## Core Question

What failure modes and risks do most people overlook?

## Focus Areas

- Failure modes that advocates rarely mention
- Scenarios where this approach is the wrong choice
- Hidden costs (complexity, maintenance burden, cognitive load)
- Alternatives that might be simpler or more appropriate
- When NOT to use this

## Prompt Template

```
GOAL: Research [{topic}] from the Contrarian Angle. Identify failure modes,
hidden costs, and scenarios where this approach is the wrong choice. Challenge
the prevailing narrative.

CONSTRAINTS:
- Focus exclusively on the Contrarian lens — other viewpoints are handled
  by parallel agents
- Be genuinely critical, not performatively contrarian — ground critiques in evidence
- Flag confidence levels: HIGH (verified/multiple sources), MEDIUM (single
  source/strong reasoning), LOW (inference/limited data)
- Do not pad findings — "I couldn't find evidence for X" is a valid and valuable finding
- Target 1000-1500 words

REASONING DEPTH — Research-Evaluate-Deepen:
You MUST follow this multi-pass process (do not skip to writing the final output):

1. INITIAL RESEARCH: Conduct your first pass — search for criticisms, failure cases,
   abandoned projects, and dissenting opinions. Gather raw contrarian evidence.
2. EVALUATE: Review what you found. For each finding, explicitly state:
   - The critique
   - Supporting evidence (specific failures, measurable costs, real examples)
   - The counterargument (how advocates respond to this critique)
   - Your net assessment (is the critique valid despite the counterargument?)
3. IDENTIFY GAPS: What are the 2-3 failure modes or risks that NEITHER advocates
   NOR critics seem to discuss? What blind spots exist in the discourse?
4. DEEPEN: Conduct a second targeted research pass focused on those blind spots.
   Look for adjacent domains where similar patterns failed for non-obvious reasons.
5. RECONCILE: Document what changed between your initial critiques and your deepened
   analysis. Did any critiques turn out to be weaker than expected? Did new risks
   emerge that are more important than the commonly cited ones?

Only after completing all 5 steps, write your final output using the template below.

CONTEXT:
{topic_description}
{user_provided_context}
{scope_boundaries}

OUTPUT:
Write findings to: {output_path}
Use the output template provided below for the document structure.
Use a YAML header with: viewpoint, topic, confidence_summary, key_findings (3-5 bullets)
Follow with detailed analysis organized by the focus areas above.

{viewpoint_output_template}
```

@@ -0,0 +1,62 @@
# Viewpoint: Direct Investigation

## Core Question

What is this? How does it work? What is the current state of the art?

## Focus Areas

- Define the concept precisely — what it is and what it is not
- How it works mechanically (architecture, data flow, lifecycle)
- Current state of the art — who uses it, what tooling exists
- Official documentation, specifications, or standards
- Key terminology and taxonomy

## Prompt Template

```
GOAL: Research [{topic}] from the Direct Investigation perspective. Produce a
comprehensive technical analysis covering definition, mechanics, state of the art,
and key terminology.

CONSTRAINTS:
- Focus exclusively on the Direct Investigation lens — other viewpoints are handled
  by parallel agents
- Be evidence-based: cite sources, examples, or reasoning for each claim
- Flag confidence levels: HIGH (verified/multiple sources), MEDIUM (single
  source/strong reasoning), LOW (inference/limited data)
- Do not pad findings — "I couldn't find evidence for X" is a valid and valuable finding
- Target 1000-1500 words

REASONING DEPTH — Research-Evaluate-Deepen:
You MUST follow this multi-pass process (do not skip to writing the final output):

1. INITIAL RESEARCH: Conduct your first pass — web searches, codebase exploration,
   document reads. Gather raw findings.
2. EVALUATE: Review what you found. For each finding, explicitly state:
   - The claim
   - Supporting evidence
   - Counterevidence or caveats
   - Your net assessment
3. IDENTIFY GAPS: What are the 2-3 most important questions your initial research
   did NOT answer? What uncertainties remain?
4. DEEPEN: Conduct a second targeted research pass focused specifically on those
   gaps. Search for counterexamples, edge cases, or missing context.
5. RECONCILE: Document what changed between your initial findings and your deepened
   findings. Did any initial conclusions shift? Flag these explicitly.

Only after completing all 5 steps, write your final output using the template below.

CONTEXT:
{topic_description}
{user_provided_context}
{scope_boundaries}

OUTPUT:
Write findings to: {output_path}
Use the output template provided below for the document structure.
Use a YAML header with: viewpoint, topic, confidence_summary, key_findings (3-5 bullets)
Follow with detailed analysis organized by the focus areas above.

{viewpoint_output_template}
```

@@ -0,0 +1,65 @@
# Viewpoint: First Principles

## Core Question

What core problem does this solve? What is the minimal viable version?

## Focus Areas

- The fundamental problem being addressed (stripped of buzzwords)
- Why existing approaches are insufficient
- The minimal set of capabilities needed to solve the core problem
- What can be deferred vs. what is essential
- Decomposition into independent sub-problems

## Prompt Template

```
GOAL: Research [{topic}] from First Principles. Break it down to the fundamental
problem it solves, identify the minimal viable version, and decompose into
independent sub-problems.

CONSTRAINTS:
- Focus exclusively on the First Principles lens — other viewpoints are handled
  by parallel agents
- Strip away buzzwords and marketing — focus on the underlying problem
- Flag confidence levels: HIGH (verified/multiple sources), MEDIUM (single
  source/strong reasoning), LOW (inference/limited data)
- Do not pad findings — "I couldn't find evidence for X" is a valid and valuable finding
- Target 1000-1500 words

REASONING DEPTH — Research-Evaluate-Deepen:
You MUST follow this multi-pass process (do not skip to writing the final output):

1. INITIAL RESEARCH: Conduct your first pass — strip the topic to its core problem.
   What fundamental need does this address? Research the problem space, not just
   the proposed solution.
2. EVALUATE: Review your decomposition. For each sub-problem identified:
   - The sub-problem statement
   - Why existing approaches fail to solve it
   - The minimal capability needed to address it
   - Whether this sub-problem is truly essential or a nice-to-have
3. IDENTIFY GAPS: What are the 2-3 assumptions in your decomposition that you
   haven't validated? Are you sure the problem is what it appears to be?
4. DEEPEN: Conduct a second targeted research pass. Look for cases where the
   "obvious" problem was actually a symptom of a different root cause. Check
   whether simpler framings of the problem exist.
5. RECONCILE: Document what changed between your initial decomposition and your
   deepened analysis. Did the problem turn out to be simpler or more complex than
   initially framed?

Only after completing all 5 steps, write your final output using the template below.

CONTEXT:
{topic_description}
{user_provided_context}
{scope_boundaries}

OUTPUT:
Write findings to: {output_path}
Use the output template provided below for document structure.
Use YAML header with: viewpoint, topic, confidence_summary, key_findings (3-5 bullets)
Follow with detailed analysis organized by the focus areas above.

{viewpoint_output_template}
```
@@ -0,0 +1,62 @@
# Viewpoint: Practitioner Perspective

## Core Question

How do teams actually use this in production? What works and what doesn't?

## Focus Areas

- Real-world adoption patterns — who uses this and how
- Common implementation approaches and their trade-offs
- Practical gotchas that documentation doesn't cover
- Operational concerns (debugging, monitoring, maintenance)
- Team skill requirements and learning curves

## Prompt Template

```
GOAL: Research [{topic}] from the Practitioner Perspective. Describe how teams
actually use this in production — what works well, what's harder than expected,
and what operational concerns arise.

CONSTRAINTS:
- Focus exclusively on the Practitioner lens — other viewpoints are handled
  by parallel agents
- Draw on real-world usage patterns, not theoretical capabilities
- Flag confidence levels: HIGH (verified/multiple sources), MEDIUM (single
  source/strong reasoning), LOW (inference/limited data)
- Do not pad findings — "I couldn't find evidence for X" is a valid and valuable finding
- Target 1000-1500 words

REASONING DEPTH — Research-Evaluate-Deepen:
You MUST follow this multi-pass process (do not skip to writing the final output):

1. INITIAL RESEARCH: Conduct your first pass — web searches, community discussions,
   blog posts, production case studies. Gather raw practitioner experiences.
2. EVALUATE: Review what you found. For each finding, explicitly state:
   - The claim (e.g., "Teams report X works well")
   - Supporting evidence (specific examples, team sizes, contexts)
   - Counterevidence or caveats (who reports it does NOT work?)
   - Your net assessment
3. IDENTIFY GAPS: What are the 2-3 most important practical questions your initial
   research did NOT answer? What operational concerns remain unclear?
4. DEEPEN: Conduct a second targeted research pass focused on those gaps. Look for
   failure post-mortems, migration stories, or "lessons learned" content.
5. RECONCILE: Document what changed between your initial findings and your deepened
   findings. Did any "best practices" turn out to have significant caveats?

Only after completing all 5 steps, write your final output using the template below.

CONTEXT:
{topic_description}
{user_provided_context}
{scope_boundaries}

OUTPUT:
Write findings to: {output_path}
Use the output template provided below for document structure.
Use YAML header with: viewpoint, topic, confidence_summary, key_findings (3-5 bullets)
Follow with detailed analysis organized by the focus areas above.

{viewpoint_output_template}
```
@@ -0,0 +1,66 @@
# Viewpoint: Prior Art / Historical

## Core Question

What similar patterns have existed before? What can we learn from their trajectory?

## Focus Areas

- Historical predecessors and analogous patterns
- How similar approaches evolved over time
- What succeeded and why; what failed and why
- Patterns that were hyped then abandoned vs. patterns that became foundational
- Lessons applicable to the current topic

## Prompt Template

```
GOAL: Research [{topic}] from the Prior Art / Historical perspective. Analyze
historical predecessors, their trajectories, and lessons applicable to how we
should approach this topic today.

CONSTRAINTS:
- Focus exclusively on the Prior Art lens — other viewpoints are handled
  by parallel agents
- Draw genuine historical parallels, not superficial analogies
- Flag confidence levels: HIGH (verified/multiple sources), MEDIUM (single
  source/strong reasoning), LOW (inference/limited data)
- Do not pad findings — "I couldn't find evidence for X" is a valid and valuable finding
- Target 1000-1500 words

REASONING DEPTH — Research-Evaluate-Deepen:
You MUST follow this multi-pass process (do not skip to writing the final output):

1. INITIAL RESEARCH: Conduct your first pass — identify historical predecessors,
   analogous patterns from other domains, and evolution trajectories. Cast a wide
   net across computing history and adjacent fields.
2. EVALUATE: Review each historical parallel. For each:
   - The predecessor and why it's relevant
   - How it succeeded or failed (with specific evidence)
   - The lesson applicable to the current topic
   - How strong the analogy is (direct parallel vs. loose similarity)
3. IDENTIFY GAPS: What are the 2-3 historical patterns you suspect exist but
   couldn't find? What eras or domains haven't you checked?
4. DEEPEN: Conduct a second targeted research pass focused on those gaps. Look for
   less obvious predecessors — patterns from other industries, abandoned research
   directions, or solutions that were ahead of their time.
5. RECONCILE: Document what changed between your initial historical survey and your
   deepened analysis. Did any "new" ideas turn out to have well-documented
   predecessors? Did any historical "failures" turn out to be timing issues rather
   than fundamental flaws?

Only after completing all 5 steps, write your final output using the template below.

CONTEXT:
{topic_description}
{user_provided_context}
{scope_boundaries}

OUTPUT:
Write findings to: {output_path}
Use the output template provided below for document structure.
Use YAML header with: viewpoint, topic, confidence_summary, key_findings (3-5 bullets)
Follow with detailed analysis organized by the focus areas above.

{viewpoint_output_template}
```