start-vibing 4.4.2 → 4.4.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (28)
  1. package/package.json +1 -1
  2. package/template/.claude/agents/research-query.md +128 -0
  3. package/template/.claude/agents/research-scout.md +124 -0
  4. package/template/.claude/agents/research-synthesize.md +139 -0
  5. package/template/.claude/agents/research-verify.md +84 -0
  6. package/template/.claude/commands/research.md +18 -0
  7. package/template/.claude/hooks/format-on-edit.sh +26 -0
  8. package/template/.claude/hooks/git-context-session-start.sh +22 -0
  9. package/template/.claude/hooks/quality-gate-stop.sh +46 -0
  10. package/template/.claude/hooks/research-session-start.sh +4 -0
  11. package/template/.claude/settings.json +29 -0
  12. package/template/.claude/skills/research/SKILL.md +285 -0
  13. package/template/.claude/skills/research/references/domain-playbooks.md +604 -0
  14. package/template/.claude/skills/research/references/ontology-patterns.md +376 -0
  15. package/template/.claude/skills/research/references/research-methodology.md +794 -0
  16. package/template/.claude/skills/research/references/source-directory.md +280 -0
  17. package/template/.claude/skills/research/scripts/__pycache__/extract-claims.cpython-313.pyc +0 -0
  18. package/template/.claude/skills/research/scripts/check-cache.sh +129 -0
  19. package/template/.claude/skills/research/scripts/dedup-research.sh +80 -0
  20. package/template/.claude/skills/research/scripts/extract-claims.py +83 -0
  21. package/template/.claude/skills/research/scripts/update-index.sh +106 -0
  22. package/template/.claude/skills/research/scripts/verify-citations.sh +107 -0
  23. package/template/.claude/skills/research/templates/adr.md.tpl +66 -0
  24. package/template/.claude/skills/research/templates/index.md.tpl +25 -0
  25. package/template/.claude/skills/research/templates/moc.md.tpl +39 -0
  26. package/template/.claude/skills/research/templates/research-state.schema.json +64 -0
  27. package/template/.claude/skills/research/templates/research.md.tpl +117 -0
  28. package/template/.claude/agents/research-web.md +0 -164
@@ -0,0 +1,604 @@

# Domain Playbooks — Reference

> Step-by-step research protocols per domain. Each playbook covers scope, query templates, source priority, evidence requirements, and output structure pointers. Use the playbook that matches the user's question; if multiple match, run them in parallel and merge.

---

## 1. UX/Design Pattern Research

### Scope

The user wants to understand a design pattern (modal, navigation, data table, onboarding flow, etc.) — its variants, when to use each, accessibility implications, and how leading products implement it.

### Query templates

```
"<pattern>" UX best practices 2024..2026
"<pattern>" baymard
"<pattern>" nielsen norman
"<pattern>" accessibility ARIA pattern
"<pattern>" mobile vs desktop
site:baymard.com "<pattern>"
site:nngroup.com "<pattern>"
site:w3.org/WAI/ARIA/apg "<pattern>"
"<pattern>" failure mode OR anti-pattern
"<competitor>" "<pattern>" (×3-5 competitors)
```

### Source priority

1. WCAG 2.2 / W3C WAI-ARIA Authoring Practices (APG) — accessibility ground truth.
2. Baymard Institute (e-commerce UX) and Nielsen Norman Group (general UX) — empirical research.
3. Major design systems (Material 3, Apple HIG, Fluent 2, Polaris, Carbon) — production patterns.
4. Competitor implementations (3–5) — Playwright screenshots of the actual pattern in use.
5. Smashing Magazine / A List Apart for editorial framing.

### Evidence requirements

- ≥ 1 W3C/WCAG citation for accessibility.
- ≥ 2 NN/g or Baymard citations for usability rationale.
- ≥ 3 competitor screenshots taken via Playwright at desktop (1280×800), tablet (768×1024), and mobile (375×812) viewports (see the capture sketch after this list).
- Heuristic eval against Nielsen's 10 heuristics (visibility, match, control, consistency, error prevention, recognition, flexibility, minimalism, error recovery, help) — score each 1–5.
- Failure modes documented with at least one real-world example.

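The screenshot requirement is mechanical enough to script. A minimal sketch, assuming Playwright for Python is installed (`pip install playwright && playwright install chromium`); the competitor URL and output paths are hypothetical placeholders:

```python
# capture_pattern.py: grab the same page at the three required viewports.
from playwright.sync_api import sync_playwright

VIEWPORTS = {
    "desktop": {"width": 1280, "height": 800},
    "tablet": {"width": 768, "height": 1024},
    "mobile": {"width": 375, "height": 812},
}
COMPETITORS = ["https://example.com/checkout"]  # placeholder; use real targets

with sync_playwright() as p:
    browser = p.chromium.launch()
    for url in COMPETITORS:
        for name, size in VIEWPORTS.items():
            page = browser.new_page(viewport=size)
            page.goto(url, wait_until="networkidle")
            # Full-page capture so the whole pattern is visible in review.
            page.screenshot(path=f"shots/{name}.png", full_page=True)
            page.close()
    browser.close()
```
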
### Output structure

`/docs/research/ux-<pattern>.md`:

```
# <Pattern> — UX Research
## TL;DR
## Variants # decision tree by context
## Accessibility (WCAG 2.2 + ARIA APG)
## Heuristic Evaluation # Nielsen 10 scores per variant
## Competitor Analysis # 3-5 competitors with screenshots
## Mobile vs Desktop divergence
## Failure Modes / Anti-patterns
## Decision Guide # "use variant X when ..."
## Sources
```

---

## 2. Library / Framework Evaluation

### Scope

The user is choosing between libraries, or evaluating whether to adopt one. The output drives a "yes/no/conditional" decision with quantified trade-offs.

### Query templates

```
"<lib>" vs "<lib2>" 2024..2026
"<lib>" production case study
"<lib>" bundle size bundlephobia
"<lib>" maintainer activity GitHub
"<lib>" deprecation OR archived
"<lib>" CVE vulnerability
"<lib>" migration from "<lib2>"
"<lib>" benchmark performance
site:github.com/<org>/<repo> issues "?q=is:issue"
site:npmjs.com/package/<lib>
```

### Community health metrics (collect quantitatively)

| Metric | Source | Threshold |
| --- | --- | --- |
| GitHub stars (raw) | github.com API `/repos/{org}/{repo}` | Context-dependent |
| Star growth (last 12 mo) | star-history.com | Growing > flat > shrinking |
| Issue close rate | GitHub Insights | > 70% healthy |
| PR merge cadence | Insights / Pulse | Weekly merges = active |
| Last release date | GitHub releases / npm | < 6 mo = active |
| Maintainer count (active in last 6 mo) | `git shortlog --since` | ≥ 2 = bus factor OK |
| Open issues / age | Issues filtered | Median age < 60 d healthy |
| npm weekly downloads | npmjs.com | Trend matters more than absolute |
| Bundle size (min+gzip) | bundlephobia.com | Compare to alternatives |
| Tree-shakability | bundlephobia "side effects" | Important for ESM consumers |
| TypeScript types quality | DefinitelyTyped vs first-party | First-party >> DT >> none |
| Test coverage | Codecov / repo badges | Self-reported; spot-check |
| Documentation completeness | Manual review | API reference + guides + examples |
| Ecosystem (plugins/extensions) | Search the ecosystem | Indicates traction |

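Several of these metrics come straight from the GitHub REST API. A minimal collection sketch using documented public endpoints; the repo slug is a placeholder, and unauthenticated requests are rate-limited to 60/hour, so pass a token header for real runs:

```python
# repo_health.py: pull a few community-health numbers for one repo.
import urllib.request, json

REPO = "vercel/next.js"  # placeholder slug
API = f"https://api.github.com/repos/{REPO}"

def get(url):
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as r:
        return json.load(r)

repo = get(API)
print("stars:", repo["stargazers_count"])
print("open issues+PRs:", repo["open_issues_count"])  # GitHub lumps PRs in here
print("last push:", repo["pushed_at"])

# 404s if the repo has published no releases; record that as a finding too.
latest = get(API + "/releases/latest")
print("last release:", latest["tag_name"], latest["published_at"])
```

Record the collection date alongside each number, per the evidence requirements below.
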
### Source priority

1. Official repo + docs + changelog.
2. npm/registry provenance signals.
3. bundlephobia / pkg.size for size; snyk / GitHub advisories for security.
4. Independent benchmarks (web frameworks: krausest/js-framework-benchmark; sorting/parsing: vendor-neutral repos).
5. Production case studies on engineering blogs (Vercel, Cloudflare, Shopify, Netflix Tech Blog).
6. HackerNews / Reddit /r/<lang> threads — anecdata, weighted accordingly.

### Evidence requirements

- All community-health numbers with collection date.
- ≥ 1 production case study citing the lib at scale (or note the absence).
- Bundle size comparison table against named alternatives.
- License compatibility check (cite the SPDX identifier).
- CVE history (NVD search) — clean / open / patched.

### Output structure

`/docs/research/lib-<name>.md`:

```
# <Library> — Evaluation
## TL;DR — Recommendation (Adopt / Trial / Hold / Avoid)
## Community Health # table of metrics with dates
## Bundle / Performance # comparison vs alternatives
## API Stability # semver discipline, breaking-change frequency
## Ecosystem Fit # plugins, integrations, TS support
## Production Evidence # case studies, talks
## Security Posture # CVE history, advisory response
## Migration Cost # if replacing existing
## Risks # bus factor, license, governance
## Decision # with explicit reversibility note
## Sources
```

---

## 3. API Integration Research

### Scope

The user is integrating against a third-party API and needs to understand auth, rate limits, SDK quality, pricing, reliability, and idiomatic usage patterns before writing code.

### Query templates

```
"<API>" authentication OAuth scopes
"<API>" rate limit "requests per"
"<API>" SDK <language> official
"<API>" pricing tier
"<API>" SLA uptime
"<API>" status page
"<API>" deprecation policy
"<API>" webhook signature verification
"<API>" pagination
"<API>" idempotency
"<API>" error codes
```

### Source priority

1. Vendor docs (canonical URL).
2. Vendor status page + history (statuspage.io / vendor-hosted).
3. Official SDKs on the vendor's GitHub org.
4. OpenAPI spec / Postman collection, if published.
5. Engineering blog posts from the vendor.
6. Independent integration write-ups (treat as anecdotal).

### Evidence requirements

- Auth pattern: flow diagram + sample request/response with redacted tokens.
- Rate limits: documented numbers with units (per minute / per hour / per token / per IP).
- SDK matrix: language × maintenance status × types × last release.
- Pricing: per-tier table with included quotas + overage cost.
- SLA: uptime percentage, credit policy, exclusions.
- Reliability: status-page incident count for the last 90 days.
- Webhook security: signature scheme cited (HMAC-SHA256 etc.) with a verification example (see the sketch after this list).

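Most vendors document an HMAC-SHA256 webhook scheme, but the header name and the exact bytes that get signed vary per vendor; both are assumptions in the sketch below, to be replaced with the vendor's documented scheme:

```python
# verify_webhook.py: generic HMAC-SHA256 webhook check. The header name,
# hex encoding, and raw-body signing are vendor-specific assumptions.
import hmac
import hashlib

def verify(secret: bytes, raw_body: bytes, signature_hex: str) -> bool:
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature_hex)

# Usage with hypothetical placeholder values:
secret = b"whsec_placeholder"
body = b'{"event":"invoice.paid"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
assert verify(secret, body, sig)
```
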
### Output structure

`/docs/research/api-<vendor>.md`:

```
# <Vendor> API — Integration Research
## TL;DR
## Authentication # flow + scopes + token lifecycle
## Endpoints summary # by domain, link to vendor docs
## Rate limits # quantified, per dimension
## Pagination + idempotency
## Error model # codes, retry strategy
## Webhooks # signature scheme, replay protection
## SDK matrix # by language
## Pricing tiers
## SLA + reliability # 90-day incident summary
## Deprecation policy
## Risks / Gotchas
## Sources
```

---

## 4. Architectural Decision Research

### Scope

The user is making a non-trivial architecture choice (database, queue, deployment topology, framework architecture, data modeling). The output is feedstock for an ADR.

### Query templates

```
"<choice-A>" vs "<choice-B>" production trade-off
"<choice-A>" RFC OR design doc
"<choice-A>" "we chose" OR "we migrated"
"<choice-A>" failure mode at scale
site:github.com/<org>/<repo>/issues "<topic>"
"<choice-A>" CAP OR consistency model
"<choice-A>" cost at scale
```

### Source priority

1. Original RFCs / design docs from the projects involved.
2. Conference talks (CMU DB Group, QCon, Strange Loop, USENIX) — highest-quality independent analysis.
3. Engineering blog post-mortems — what broke at scale.
4. Academic comparative papers.
5. Vendor documentation (read with bias correction).

### Trade-off matrix (mandatory)

Build a matrix of options × dimensions: throughput, latency, consistency, durability, operational complexity, cost, ecosystem maturity, hireability, vendor lock-in. Each cell cites a source (a data-shape sketch follows).

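One way to keep the per-cell citation rule honest is to hold the matrix as data before rendering it. A minimal sketch; the option name and the sample entry are hypothetical placeholders:

```python
# tradeoff_matrix.py: cells are (assessment, source) pairs, so "each cell
# cites a source" is enforced by shape. All entries below are placeholders.
from dataclasses import dataclass

@dataclass
class Cell:
    assessment: str  # short judgment, e.g. "high write throughput"
    source: str      # URL of the citation backing the judgment

DIMENSIONS = ["throughput", "latency", "consistency", "durability",
              "operational complexity", "cost", "ecosystem maturity",
              "hireability", "vendor lock-in"]

matrix: dict[str, dict[str, Cell]] = {
    "option-a": {
        "throughput": Cell("sustains ~X writes/s in an independent benchmark",
                           "https://example.com/benchmark"),
        # ... one Cell per dimension in DIMENSIONS
    },
}

# Fail loudly if any filled cell lacks a citation.
for option, row in matrix.items():
    for dim, cell in row.items():
        assert cell.source.startswith("http"), (option, dim)
```
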
### Reversibility scoring (Bezos two-way-door)

Per option, classify the cost of migrating away from it as:

- **One-way**: data-shape lock-in, vendor SDKs throughout the codebase, contractual lock-in.
- **Two-way**: standard interfaces (SQL, S3 API, OCI), abstraction layer in place.

Bias toward two-way doors; pay a premium for reversibility unless the one-way option is clearly superior.

### Evidence requirements

- ≥ 2 production post-mortems per option (or note the absence and what that implies).
- Cost model with explicit assumptions.
- Operational-complexity assessment with team-size assumptions.
- Migration path _out_ of the choice, not just _in_.

### Output structure

`/docs/research/arch-<topic>.md`:

```
# <Topic> — Architectural Decision Research
## TL;DR Recommendation
## Options considered
## Trade-off matrix # dimensions × options
## Reversibility analysis # per option
## Production evidence # post-mortems per option
## Cost model
## Operational requirements
## Risks
## Open questions
## Sources
## ADR draft # template-ready: Status / Context / Decision / Consequences
```

---

## 5. Market / Competitive Research

### Scope

The user wants market context: TAM/SAM/SOM, players, positioning, differentiation, growth trajectories, regulatory headwinds.

### Query templates

```
"<market>" market size 2024..2026
"<market>" CAGR forecast
"<market>" Gartner Magic Quadrant
"<market>" Forrester Wave
"<competitor>" funding round
"<competitor>" 10-K annual report
"<market>" regulatory
"<market>" Porter five forces
```

### Frameworks to apply

- **TAM / SAM / SOM** (Total Addressable / Serviceable Addressable / Serviceable Obtainable Market); a worked bottom-up example follows this list.
- **Porter 5 Forces** (Porter 1980): threat of entry, supplier power, buyer power, substitutes, rivalry.
- **Positioning quadrant** (2 chosen axes; cite why those axes).
- **Jobs-to-be-done** if behavioral framing matters.

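Bottom-up sizing is labeled arithmetic, which is what makes the methodology auditable. A sketch in which every input is a loudly hypothetical placeholder; in a real doc each number cites a source:

```python
# tam_bottom_up.py: bottom-up TAM/SAM/SOM arithmetic. ALL inputs are
# hypothetical placeholders to replace with cited figures.
target_accounts = 120_000        # companies matching the ICP (cite source)
acv_usd = 18_000                 # average contract value per year (cite source)

tam = target_accounts * acv_usd  # everyone who could conceivably buy
sam = tam * 0.35                 # share reachable by this product/geo (cite %)
som = sam * 0.05                 # share winnable in ~3 years (cite %)

print(f"TAM ${tam/1e9:.2f}B, SAM ${sam/1e9:.2f}B, SOM ${som/1e6:.1f}M")
# -> TAM $2.16B, SAM $0.76B, SOM $37.8M
```
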
### Source priority

1. Public 10-K / 10-Q filings (SEC EDGAR) — primary financial data.
2. Tier-1 analyst reports (Gartner, Forrester, IDC).
3. Industry trade press (sector-specific).
4. CB Insights, PitchBook, Crunchbase for private-market data.
5. Founder/exec interviews on podcasts or Substack.

### Evidence requirements

- TAM number with the methodology cited (top-down vs bottom-up).
- ≥ 5 competitors profiled with: HQ, founded year, employees, funding to date, last round, primary product, pricing model.
- Positioning quadrant with axes justified.
- ≥ 1 regulatory citation if relevant (GDPR, HIPAA, PCI-DSS, sector-specific).

### Output structure

`/docs/research/market-<segment>.md`:

```
# <Segment> — Market Research
## TL;DR
## Market sizing # TAM/SAM/SOM with methodology
## Competitor matrix # ≥ 5 profiles
## Positioning # quadrant + JTBD
## Porter 5 Forces
## Regulatory context
## Trends + tailwinds/headwinds
## Sources
```

---

## 6. Academic Literature Review

### Scope

The user wants the state of academic knowledge on a topic — methods, findings, replications, open questions. Default to the **PRISMA-ScR scoping flow** (Tricco AC et al. _Ann Intern Med_ 2018, doi:10.7326/M18-0850) when a narrative review is too light and a full systematic review is too heavy.

### Query templates

By database (a Crossref sketch follows this list):

- **PubMed**: `("<concept1>"[Title/Abstract]) AND ("<concept2>"[MeSH]) AND ("2020"[PDAT] : "2026"[PDAT])`
- **arXiv**: `cat:cs.* AND (abs:"<phrase>")`
- **Google Scholar**: `"<phrase>" -site:wikipedia.org` with a date range
- **Semantic Scholar**: API `/paper/search?query=...&year=2020-2026`
- **Crossref**: `https://api.crossref.org/works?query.bibliographic=...&filter=from-pub-date:2020`

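The Crossref endpoint above is a plain REST API, so the query step can be scripted. A minimal sketch against the public `api.crossref.org/works` route; the search phrase and contact address are placeholders (Crossref asks polite clients to identify themselves via `mailto`):

```python
# crossref_search.py: print DOI, year, title for the first page of matches.
import urllib.parse, urllib.request, json

query = "retrieval augmented generation"  # placeholder phrase
url = ("https://api.crossref.org/works?"
       + urllib.parse.urlencode({
           "query.bibliographic": query,
           "filter": "from-pub-date:2020-01-01",
           "rows": 20,
           "mailto": "you@example.com",  # polite-pool identification
       }))

with urllib.request.urlopen(url) as r:
    items = json.load(r)["message"]["items"]

for it in items:
    year = it.get("published", {}).get("date-parts", [[None]])[0][0]
    title = (it.get("title") or ["<untitled>"])[0]
    print(f'{it["DOI"]}  {year}  {title[:80]}')
```
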
### Inclusion / exclusion criteria template

```
Inclusion:
- Published 2020–2026 (adjust per topic)
- Peer-reviewed OR preprint with ≥ X citations
- English (or extend per question)
- Reports empirical data OR systematic synthesis OR formal proof
- Topic match: covers <concept1> AND <concept2>

Exclusion:
- Editorials, opinions, letters without data
- Withdrawn papers (Retraction Watch check)
- Predatory journals (Beall's archive / DOAJ check)
- Conference posters without proceedings
- Undisclosed conflicts of interest
```

### Source priority

1. Peer-reviewed journal articles with a DOI.
2. Conference proceedings (top-tier: NeurIPS, ICML, ICLR for ML; SOSP, OSDI, NSDI for systems; CHI, UIST for HCI).
3. Preprints (arXiv, bioRxiv), flagged as such.
4. Theses / dissertations.
5. Working papers / technical reports.

### Evidence requirements

- PRISMA-ScR-style flow diagram: identification → screening → eligibility → included counts.
- Inclusion/exclusion criteria stated explicitly.
- Per-paper extraction: study design, n, key findings, limitations, replication status.
- Disagreements / contradictions explicitly listed.

### Output structure

`/docs/research/lit-<topic>.md`:

```
# <Topic> — Literature Review
## TL;DR
## Question + framing
## Method # databases, dates, query strings, inclusion/exclusion
## PRISMA-ScR flow # identified / screened / eligible / included
## Findings synthesis # by sub-question
## Disagreements
## Replication status # per major finding
## Open questions
## Sources # full bibliography (BibTeX / CSL-JSON)
```

---

## 7. News & Current-Events Research

### Scope

A breaking or recent event needs reconstruction: timeline, actors, claims, disputed claims, current state.

### Query templates

```
"<event>" site:reuters.com OR site:apnews.com
"<event>" before:YYYY-MM-DD after:YYYY-MM-DD
"<event>" timeline
"<event>" fact check
"<event>" original document OR primary source OR transcript
"<actor>" "<event>"
```

### Source priority

1. Wire services (Reuters, AP, AFP) as the temporal anchor.
2. Newspapers of record (NYT, WSJ, FT, The Economist, Washington Post, Guardian).
3. Subject-matter trade press (e.g., The Verge for tech, STAT for biotech).
4. Primary documents: court filings, regulatory disclosures, official transcripts, archived statements.
5. Fact-check organizations (Snopes, PolitiFact, FactCheck.org) — for _their sources_, not their verdict alone.
6. Wayback Machine for verifying what a URL said on a given date.

### Bias detection

- **AllSides** (<https://www.allsides.com/>) — outlet bias rating left/center/right.
- **Ad Fontes Media** (<https://adfontesmedia.com/>) — bias × reliability chart.

For any contested claim, cite at least one source from each side of the bias spectrum and note their respective ratings.

### Timeline construction

Build a chronological table: timestamp (UTC) | event | source | confidence. Use the wire-service timestamp as the anchor; record later corrections as separate rows.

### Evidence requirements

- ≥ 2 wire-source citations for the core facts.
- ≥ 1 primary document if one exists (court filing, press release, regulatory filing).
- Wayback snapshot for any URL whose content might change (the availability check is scriptable; see the sketch after this list).
- Bias-spread coverage for contested claims.

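The Internet Archive exposes a small availability endpoint for checking whether a snapshot exists before linking one. A minimal sketch against the documented `archive.org/wayback/available` route; the target URL is a placeholder:

```python
# wayback_check.py: find the closest Wayback snapshot for a URL.
import urllib.parse, urllib.request, json

target = "https://example.com/pricing"  # placeholder
api = ("https://archive.org/wayback/available?"
       + urllib.parse.urlencode({"url": target}))

with urllib.request.urlopen(api) as r:
    snap = json.load(r).get("archived_snapshots", {}).get("closest")

if snap and snap.get("available"):
    print("snapshot:", snap["url"], "captured", snap["timestamp"])
else:
    print("no snapshot; save one via https://web.archive.org/save/")
```
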
### Output structure

`/docs/research/news-<slug>.md`:

```
# <Event> — News Research
## TL;DR + as-of timestamp
## Timeline # chronological table
## Actors
## Established facts # with wire-service citations
## Disputed claims # both sides with bias ratings
## Primary documents # links + Wayback fallbacks
## What's missing / unknown
## Sources
```

---

## 8. Security Research

### Scope

A library, service, or pattern needs a security review: known vulnerabilities, advisory chain, vendor disclosure history, mitigations.

### Query templates

```
"<lib/product>" CVE
"<lib/product>" GHSA
"<lib/product>" security advisory
"<lib/product>" OWASP
"<lib/product>" responsible disclosure
"<lib/product>" supply chain attack
site:nvd.nist.gov "<lib>"
site:github.com/advisories "<lib>"
```

### Source priority

1. **NVD** (NIST National Vulnerability Database, <https://nvd.nist.gov/>) — authoritative CVE store with CVSS scores.
2. **GitHub Advisory Database** (<https://github.com/advisories>) — language/ecosystem-aware, often ahead of NVD.
3. **MITRE CVE** (<https://cve.org/>) — the original record.
4. **Vendor security pages** — official advisories.
5. **OSV.dev** (<https://osv.dev/>) — Open Source Vulnerability database aggregating multiple feeds.
6. **CISA KEV** (Known Exploited Vulnerabilities, <https://www.cisa.gov/known-exploited-vulnerabilities-catalog>) — actively exploited.
7. **OWASP Top 10** for category context.

### CVSS scoring

Record CVSS v3.1 / v4.0 vector strings (e.g. `AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H`). Always cite the vector, not just the score number, so reviewers can recompute it (a recomputation sketch follows).

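Recomputing a v3.1 base score is simple for scope-unchanged (`S:U`) vectors; the weights below are from the published FIRST CVSS v3.1 specification. A minimal sketch restricted to S:U (changed scope uses different PR weights and a different impact formula), expecting the bare vector without a `CVSS:3.1/` prefix:

```python
# cvss31_base.py: recompute a CVSS v3.1 base score for S:U vectors only.
import math

W = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # scope-unchanged weights
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def base_score(vector: str) -> float:
    m = dict(part.split(":") for part in vector.split("/"))
    assert m["S"] == "U", "this sketch handles scope-unchanged only"
    iss = 1 - (1 - W["C"][m["C"]]) * (1 - W["I"][m["I"]]) * (1 - W["A"][m["A"]])
    impact = 6.42 * iss
    exploitability = (8.22 * W["AV"][m["AV"]] * W["AC"][m["AC"]]
                           * W["PR"][m["PR"]] * W["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    # The spec rounds up to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

print(base_score("AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # -> 9.8
```
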
### Evidence requirements

- All CVEs in the last 36 months with: CVE ID, CVSS score + vector, affected versions, fixed versions, published date, exploitation status (per CISA KEV).
- Vendor disclosure-to-patch timeline per CVE.
- Advisory chain: CVE → vendor advisory → distro/package advisory → user-visible release.
- Mitigations: configuration, network controls, version pinning.

### Output structure

`/docs/research/sec-<topic>.md`:

```
# <Topic> — Security Research
## TL;DR risk assessment
## Vulnerability history # CVE table with scores + vectors
## Vendor disclosure record # mean time-to-patch
## OWASP categories triggered
## Supply chain considerations # registry provenance, signing
## Recommended controls
## Mitigations # config, network, version pinning
## Sources
```

---

## 9. Pricing & Cost Research

### Scope

The user is sizing the cost of using a vendor, running infrastructure, or scaling a feature. The output supports a TCO (total cost of ownership) model.

### Query templates

```
"<vendor>" pricing
"<vendor>" pricing changed OR pricing increase
"<vendor>" hidden fees OR egress
"<vendor>" deprecated tier
"<vendor>" SLA credit
"<vendor>" reserved instance OR commitment discount
"<vendor>" billing surprise reddit
```

### Source priority

1. Vendor pricing page (canonical URL) at access date.
2. Vendor pricing history — Wayback Machine snapshots, quarterly for the last 2 years.
3. Vendor public price reductions / increases (engineering blog).
4. Independent cost-comparison sites (supporting only; verify against the vendor).
5. Reddit / HackerNews for "billing surprise" anecdata — anecdotal; verify against vendor docs.

### Cost model components

| Component | Always check |
| --- | --- |
| Per-unit price | Per-request, per-GB, per-seat, per-vCPU-hour |
| Tier thresholds | Free tier, growth tier, enterprise gate |
| Included quotas | Per tier |
| Overage cost | Per unit beyond included |
| Egress / network | Often hidden, often dominant |
| Storage at rest | Per-GB-month |
| Redundancy multiplier | Multi-AZ / multi-region pricing |
| Support tier cost | Often a percentage of base spend |
| Ramp / commit discounts | Reserved instances, savings plans |
| Currency / region | EU vs US vs APAC pricing differences |

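A cost model is easiest to audit when each component above is its own labeled line. A minimal sketch; every unit price and quantity is a hypothetical placeholder to replace with figures cited from the vendor's pricing page:

```python
# tco_model.py: spreadsheet-style cost model. ALL prices and quantities
# are hypothetical placeholders; cite the vendor pricing page for real ones.
LINES = [
    # (component,        qty,        unit,       $/unit)
    ("requests",         50_000_000, "req",      0.40 / 1_000_000),
    ("storage at rest",  2_000,      "GB-month", 0.023),
    ("egress",           1_500,      "GB",       0.09),  # own line, always
    ("seats",            25,         "seat",     12.00),
]

total = 0.0
for name, qty, unit, price in LINES:
    cost = qty * price
    total += cost
    print(f"{name:16} {qty:>12,} {unit:9} @ {price:<10.6g} = ${cost:,.2f}")
print(f"monthly total: ${total:,.2f}")  # -> $501.00 with these placeholders
```
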
### Hidden-fee detection

Look explicitly for: data egress, API-call surcharges, premium-region multipliers, support-tier minimums, audit-log retention, premium-encryption add-ons, dedicated-tenant premiums, and "enterprise" feature gates that appear on the page only at quote-only pricing.

### Deprecation pricing risk

- Has the vendor previously deprecated a tier and migrated users to a more expensive one? (Wayback diff.)
- What is the customer's exit cost (egress + re-platforming)?

### Evidence requirements

- Pricing snapshot (URL + Wayback timestamp).
- Cost model spreadsheet-style table with unit, quantity assumption, per-unit price, and total.
- ≥ 1 historical pricing data point (12+ months ago, via Wayback) to estimate trajectory.
- Egress numbers explicit and on their own line.

### Output structure

`/docs/research/cost-<vendor>.md`:

```
# <Vendor> — Pricing & Cost Research
## TL;DR per-month estimate at <usage profile>
## Pricing snapshot # URL + Wayback timestamp
## Tier table
## Cost model # spreadsheet-style
## Hidden fees
## Pricing history (24mo) # Wayback diffs
## Deprecation risk
## Comparison to alternatives
## Exit cost
## Sources
```

---

## Cross-playbook conventions

- **Every doc** carries the URL + QUOTE + ACCESSED-AT + VERIFY-METHOD evidence quad per claim.
- **Every doc** ends with a `## Sources` section listing every URL cited, in order, with access date.
- **Every doc** is linked from `/docs/research/index.md` and tagged in `/docs/research/_tags.md`.
- **Re-running a playbook** on the same topic re-verifies cached claims (HEAD checks, quote-grep; a sketch follows this list) and only re-queries claims that have aged past their content-type half-life (see `research-methodology.md` §7).
- **When playbooks overlap** (e.g., library evaluation + security research), produce both files and cross-link with `[[lib-name]]` ↔ `[[sec-lib-name]]`.

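A HEAD check plus quote-grep is cheap to script. A minimal sketch, assuming each cached claim is stored with `url` and `quote` fields (that shape is an assumption; adapt it to the actual cache format):

```python
# reverify_claim.py: does the URL still respond, and does the quoted
# text still appear on the page? The claim dict's url/quote keys are
# an assumed cache format, not a fixed schema.
import urllib.request

def reverify(claim: dict) -> str:
    head = urllib.request.Request(claim["url"], method="HEAD")
    try:
        urllib.request.urlopen(head, timeout=10)
    except Exception:
        return "DEAD"  # URL gone (or server rejects HEAD); try Wayback
    with urllib.request.urlopen(claim["url"], timeout=30) as r:
        html = r.read().decode("utf-8", errors="replace")
    return "VERIFIED" if claim["quote"] in html else "QUOTE-DRIFTED"

claim = {"url": "https://example.com/docs",
         "quote": "rate limit of 100 requests"}  # placeholder claim
print(reverify(claim))
```
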
When the user's question doesn't fit a playbook cleanly, fall back to the **scoping-review** narrative format described in `research-methodology.md` §1.6, with PRISMA-ScR-inspired transparency about what was searched and what was excluded.