@opendirectory.dev/skills 0.1.42 → 0.1.44

---
name: pricing-finder
description: 'Tell it what your product is (URL or description) and it finds 5 competitors globally, fetches their actual pricing pages, extracts every tier and price point, and returns a complete pricing intelligence report: the dominant pricing model in your space, a benchmark price table, feature gate analysis, competitive positioning map, and a concrete recommended pricing strategy for your product. Use when asked to research competitor pricing, find pricing benchmarks, decide how to price a product, understand pricing models in a space, or build a pricing strategy.'
compatibility: [claude-code, gemini-cli, github-copilot]
---

# Pricing Finder

Tell it your product URL or description. It finds 5 competitors, fetches their actual pricing pages, and returns a complete pricing intelligence report: dominant model in your space, benchmark price table, feature gate analysis, positioning map, and a concrete pricing recommendation for your product.

**Zero required API keys.** Runs entirely on free pip dependencies. Optional API keys improve quality.

---

**Zero-hallucination policy:** Every price point, tier name, and feature gate in the output must trace to fetched pricing page content or a DuckDuckGo search snippet. This applies to:
- Competitor prices: extracted verbatim from fetched page content only
- "Contact Sales": recorded as-is, never estimated or replaced with a number
- Tier names: copied exactly from the page, not paraphrased
- Feature lists: extracted from page content, not inferred from product knowledge
- Positioning observations: derived from the benchmark table data only

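The trace rule can be spot-checked mechanically once the pipeline has run. A minimal sketch, assuming the extracted-pricing shape used in Step 7 (`untraced_prices` is a hypothetical helper, not part of the skill's scripts): every numeric price in the extraction should appear, digits-for-digits, somewhere in the raw content fetched for that competitor.

```python
def untraced_prices(raw_content_by_competitor, extracted):
    """Return (competitor, tier, price) triples whose price digits never
    appear in the raw content fetched for that competitor."""
    problems = []
    for comp in extracted:
        content = raw_content_by_competitor.get(comp["competitor"], "")
        for tier in comp.get("tiers", []):
            price = tier.get("price_monthly")
            if price is None:
                continue  # "Contact Sales" -- nothing numeric to trace
            # Accept "$49", "49/mo", "49.00" etc. -- look for the bare digits
            needle = str(int(price)) if float(price).is_integer() else str(price)
            if needle not in content:
                problems.append((comp["competitor"], tier["name"], price))
    return problems

# Example: $49 appears on Acme's fetched page, $99 does not
raw = {"Acme": "Pro plan $49/mo billed monthly"}
extracted = [{"competitor": "Acme", "tiers": [
    {"name": "Pro", "price_monthly": 49.0},
    {"name": "Team", "price_monthly": 99.0},
]}]
print(untraced_prices(raw, extracted))  # [('Acme', 'Team', 99.0)]
```

Anything this flags either came from training knowledge or was paraphrased -- both violations of the policy above.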
---

## Common Mistakes

| The agent will want to... | Why that's wrong |
|---|---|
| Fill in "Contact Sales" with an estimated price | Never estimate enterprise pricing. Record it as "Contact Sales" exactly. |
| Use training knowledge for competitor prices | Every price must trace to fetched page content or a search snippet. |
| Skip the competitor confirmation step | Always show discovered competitors and wait for confirmation. Wrong competitors = wrong benchmarks. |
| Recommend a price without referencing benchmark data | Every price recommendation must cite a specific number from the benchmark table. |
| Mark a page as high quality when content < 500 chars | < 500 chars means the page was not fetched -- mark data_quality as 'low' and use search snippet fallback. |
| Use em dashes in output | Replace all em dashes with hyphens. |

---

## Read Reference Files Before Each Run

```bash
cat references/pricing-models.md
cat references/extraction-guide.md
cat references/positioning-guide.md
```

---

## Step 1: Setup Check

```bash
if [ -n "$TAVILY_API_KEY" ]; then
  echo "TAVILY_API_KEY: set (search quality enhanced)"
else
  echo "TAVILY_API_KEY: not set, DuckDuckGo will be used (free)"
fi
if [ -n "$FIRECRAWL_API_KEY" ]; then
  echo "FIRECRAWL_API_KEY: set (JS rendering enhanced)"
else
  echo "FIRECRAWL_API_KEY: not set, requests+BS4 will be used (free)"
fi
echo ""
python3 -c "from ddgs import DDGS; import requests, bs4, html2text; print('Dependencies OK')" 2>/dev/null \
  || echo "ERROR: Missing dependencies. Run: pip install ddgs requests beautifulsoup4 html2text"
```

**If dependencies are missing:** Stop immediately. Tell the user: "Missing Python dependencies. Run this to install them: `pip install ddgs requests beautifulsoup4 html2text` -- all free, no accounts needed. Then try again."

**If only API keys are missing:** Continue. DuckDuckGo and requests+BS4 are the free defaults.

Derive product slug:

```bash
PRODUCT_SLUG=$(python3 -c "
from urllib.parse import urlparse
import re
url = 'URL_HERE'
if url.startswith('http'):
    host = urlparse(url).netloc.replace('www.', '')
    print(host.split('.')[0])
else:
    print(re.sub(r'[^a-z0-9]+', '-', url[:30].lower()).strip('-'))
")
echo "Product slug: $PRODUCT_SLUG"
```

---

## Step 2: Parse Input

Collect from the conversation:
- `product_url`: the URL to fetch (required, unless user pastes a description directly)
- `geography`: optional -- US / Europe / India / global. Default: US

**If the user provides only a pasted description (no URL):** Skip Step 3 and go directly to Step 4 (product analysis), using the pasted text as `product_content`. Set `page_source` to `user_description` and note it in `data_quality_flags`.

**If neither URL nor description is provided:** Ask: "What is the URL of your product or startup? Or paste a short description: what it does, who it's for, and what makes it different."

---

## Step 3: Fetch Product Page

**Primary: Firecrawl (if FIRECRAWL_API_KEY is set)**

```bash
curl -s -X POST https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "URL_HERE", "formats": ["markdown"], "onlyMainContent": true}' \
  | python3 -c "
import sys, json
d = json.load(sys.stdin)
content = d.get('data', {}).get('markdown', '') or d.get('markdown', '')
print(f'Fetched via Firecrawl: {len(content)} characters')
open('/tmp/pf-product-raw.md', 'w').write(content)
"
```

**Fallback: requests + BS4 (free, always available)**

```bash
python3 << 'PYEOF'
import random
import requests, html2text
from bs4 import BeautifulSoup

USER_AGENTS = [
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
]
headers = {"User-Agent": random.choice(USER_AGENTS), "Accept": "text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"}
resp = requests.get("URL_HERE", headers=headers, timeout=20, allow_redirects=True)
resp.raise_for_status()
# Drop scripts and styles with BS4 so the 8000-char budget is spent on real content
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup(["script", "style", "noscript"]):
    tag.decompose()
converter = html2text.HTML2Text()
converter.ignore_images = True
converter.body_width = 0
content = converter.handle(str(soup))[:8000]
print(f'Fetched via requests+BS4: {len(content)} characters')
open('/tmp/pf-product-raw.md', 'w').write(content)
PYEOF
```

**Checkpoint:**

```bash
python3 -c "
content = open('/tmp/pf-product-raw.md').read()
if len(content) < 200:
    print('ERROR: fewer than 200 characters fetched -- page may be JS-rendered')
else:
    print(f'Content OK: {len(content)} characters')
"
```

**If content < 200 characters:** Tell the user: "The product page returned too little content -- the site may be JavaScript-rendered. Please paste a short description: what your product does, who it's for, and what makes it different from competitors."

---

## Step 4: Product Analysis (AI)

Print page content:

```bash
python3 -c "
content = open('/tmp/pf-product-raw.md').read()[:5000]
print('=== PRODUCT PAGE (first 5000 chars) ===')
print(content)
"
```

**AI instructions:** Analyze the product page above and extract:

- `product_name`: the product or company name
- `one_line_description`: what it does, for whom, core value prop. Under 20 words. No marketing language.
- `industry_taxonomy`: `l1` (top-level: developer tools / fintech / healthtech / consumer / etc.), `l2` (sector: devops / payments / hr / etc.), `l3` (specific niche: CI/CD automation / embedded payments / async video / etc.)
- `differentiators`: exactly 2-3 specific things that distinguish this product. These feed the recommendation -- be specific. Generic answers like "easy to use" are not acceptable.
- `icp`: `buyer_persona` (job title), `company_type`, `company_size`
- `geography_bias`: US / Europe / India / global
- `page_source`: "live_page" or "user_description"

Write to `/tmp/pf-product-analysis.json`:

```bash
python3 << 'PYEOF'
import json

analysis = {
    # FILL from your analysis above
    "product_name": "",
    "one_line_description": "",
    "industry_taxonomy": {"l1": "", "l2": "", "l3": ""},
    "differentiators": [],
    "icp": {"buyer_persona": "", "company_type": "", "company_size": ""},
    "geography_bias": "US",
    "page_source": "live_page"
}

json.dump(analysis, open('/tmp/pf-product-analysis.json', 'w'), indent=2)
print('Product analysis written.')
PYEOF
```

Verify:

```bash
python3 -c "
import json
a = json.load(open('/tmp/pf-product-analysis.json'))
print('Product:', a['product_name'])
print('Industry:', a['industry_taxonomy']['l1'], '>', a['industry_taxonomy']['l2'], '>', a['industry_taxonomy']['l3'])
print('Differentiators:')
for d in a['differentiators']:
    print(f' - {d}')
"
```

---

## Step 4b: Phase 1 -- Competitor Discovery

```bash
ls scripts/research.py 2>/dev/null && echo "script found" || echo "ERROR: scripts/research.py not found -- cannot continue"
```

```bash
python3 scripts/research.py \
  --phase discover \
  --product-analysis /tmp/pf-product-analysis.json \
  --output /tmp/pf-competitors-raw.json
```

Print results for AI review:

```bash
python3 -c "
import json
data = json.load(open('/tmp/pf-competitors-raw.json'))
print(f'Searches run: {len(data[\"competitor_searches\"])}')
for s in data['competitor_searches']:
    print(f'\nQuery: {s[\"query\"]}')
    for r in s.get('results', [])[:6]:
        print(f' - {r[\"title\"]} | {r[\"url\"]}')
        print(f'   {r.get(\"snippet\",\"\")[:150]}')
"
```

**AI instructions:** Read the search results above. Pick exactly 5 competitor companies that:
1. Are named in the search result titles or snippets
2. Are in the same L3 niche as the product being analyzed
3. Are actual software products (not agencies, list articles, or review sites)
4. Are distinct from each other

For each competitor write: `name`, `url`, `pricing_url` (their pricing page -- infer as `[url]/pricing` if not found in snippets), `description` (one sentence from snippet), `source_url`.

---

## Step 5: Competitor Confirmation

```bash
python3 << 'PYEOF'
import json

analysis = json.load(open('/tmp/pf-product-analysis.json'))

# FILL: 5 competitors from the search results above
candidates = [
    # {"name": str, "url": str, "pricing_url": str, "description": str, "source_url": str}
]

print(f"\nFound 5 competitors for {analysis['product_name']} in {analysis['industry_taxonomy']['l3']}:\n")
for i, c in enumerate(candidates, 1):
    print(f" {i}. {c['name']} -- {c['description']}")
    print(f"    Product: {c['url']}")
    print(f"    Pricing: {c['pricing_url']}")

data = json.load(open('/tmp/pf-competitors-raw.json'))
data['competitor_candidates'] = candidates
json.dump(data, open('/tmp/pf-competitors-raw.json', 'w'), indent=2)
PYEOF
```

Tell the user: "These are the 5 competitors I'll fetch pricing data from. Add, remove, or swap any -- or say 'looks good' to continue."

**Wait for confirmation.** If the user edits the list, update the candidates accordingly. Then write the confirmed list:

```bash
python3 << 'PYEOF'
import json

# FILL: confirmed competitor list (after user review)
confirmed = [
    # {"name": str, "url": str, "pricing_url": str}
]

json.dump({"confirmed_competitors": confirmed}, open('/tmp/pf-competitors-confirmed.json', 'w'), indent=2)
print(f"Confirmed {len(confirmed)} competitors for pricing research.")
for c in confirmed:
    print(f" - {c['name']} | pricing: {c['pricing_url']}")
PYEOF
```

---

## Step 6: Phase 2 -- Fetch Pricing Pages

```bash
python3 scripts/research.py \
  --phase fetch-pricing \
  --competitors /tmp/pf-competitors-confirmed.json \
  --output /tmp/pf-pricing-raw.json
```

This fetches each competitor's pricing page using a 3-tier fallback:
1. Direct fetch: `requests` + `beautifulsoup4` + `html2text`
2. Google cache: `webcache.googleusercontent.com/search?q=cache:[url]`
3. DuckDuckGo search: `"[competitor]" pricing plans cost per month` (snippet fallback)

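The fallback chain lives in `scripts/research.py`; its shape is roughly the following sketch (not the script's actual code -- `fetch_direct`, `fetch_cache`, and `search_snippets` are hypothetical stand-ins for the three fetchers, and the 500-char bar mirrors the Common Mistakes rule):

```python
def fetch_with_fallback(competitor, pricing_url, fetch_direct, fetch_cache, search_snippets):
    """Try each source in order and record which one produced the content.
    Under 500 chars from a page fetch counts as blocked or JS-rendered."""
    quality_by_source = {"direct": "high", "google_cache": "medium", "search_snippet": "low"}
    attempts = [
        ("direct", lambda: fetch_direct(pricing_url)),
        ("google_cache", lambda: fetch_cache(pricing_url)),
        ("search_snippet", lambda: search_snippets(f'"{competitor}" pricing plans cost per month')),
    ]
    for source, attempt in attempts:
        try:
            content = attempt() or ""
        except Exception:
            content = ""
        # Snippets are accepted as-is; page fetches must clear the 500-char bar
        if content and (source == "search_snippet" or len(content) >= 500):
            return {"name": competitor, "source": source, "content": content,
                    "content_length": len(content), "data_quality": quality_by_source[source]}
    return {"name": competitor, "source": "none", "content": "",
            "content_length": 0, "data_quality": "low"}

# Stubs standing in for the real fetchers: direct fetch is blocked, cache errors out
def blocked(url): return "x" * 120
def down(url): raise IOError("cache miss")
result = fetch_with_fallback("Acme", "https://acme.example/pricing",
                             blocked, down, lambda q: "Acme Pro $49/mo")
print(result["source"], result["data_quality"])  # search_snippet low
```

The key property: a snippet-only result always carries `data_quality: low`, which is what drives the "SNIPPET ONLY" label in the fetch summary below.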

Print fetch summary:

```bash
python3 -c "
import json
data = json.load(open('/tmp/pf-pricing-raw.json'))
print(f'Competitors fetched: {data[\"competitors_fetched\"]}')
print()
for r in data['results']:
    quality_label = {'high': 'GOOD', 'medium': 'OK', 'low': 'SNIPPET ONLY'}.get(r['data_quality'], r['data_quality'])
    print(f' {r[\"name\"]:20} {r[\"source\"]:15} {r[\"content_length\"]:5} chars [{quality_label}]')
"
```

**If a competitor has `data_quality: low`:** This means the pricing page was blocked or JS-rendered. The analysis will proceed using search snippets, but confidence for that competitor will be noted as low.

---

## Step 7: Pricing Extraction (AI)

Print all raw pricing content:

```bash
python3 -c "
import json
data = json.load(open('/tmp/pf-pricing-raw.json'))
for r in data['results']:
    print(f'\n=== {r[\"name\"]} (source: {r[\"source\"]}, quality: {r[\"data_quality\"]}) ===')
    print(f'Pricing URL: {r[\"pricing_url\"]}')
    print(r['content'][:4000])
    print('---')
"
```

**AI instructions:** For each competitor, extract structured pricing data from the content above. Follow `references/extraction-guide.md` for how to identify tiers, prices, limits, and CTAs.

Zero-hallucination rules:
1. Extract prices verbatim from content only. If a price is not in the content, write `null`.
2. Record "Contact Sales" exactly as-is. Never replace it with an estimated number.
3. `data_quality: low` means data came from search snippets -- extract what's there but do not fill gaps from training knowledge.
4. For any field not present in the content: write `"not found in page data"`.
5. Annual prices: always record the per-month equivalent alongside the annual total.
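Rule 5's arithmetic is easy to slip on: a plan shown as "$120/year" is a $10/mo equivalent, and the annual discount is computed against the monthly price, not the other way round. A minimal sketch of both conversions:

```python
def annual_to_monthly(annual_total):
    """Per-month equivalent of an annual price: $120/yr -> $10.00/mo."""
    return round(annual_total / 12, 2)

def annual_discount_pct(price_monthly, price_annual_monthly):
    """Discount implied by annual billing, relative to the monthly price:
    $12/mo billed monthly vs a $10/mo annual equivalent -> 17%."""
    return round(100 * (1 - price_annual_monthly / price_monthly))

print(annual_to_monthly(120))       # 10.0
print(annual_discount_pct(12, 10))  # 17
```

Record the result in `price_annual_monthly` below, and the percentage in `annual_discount`.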

Write to `/tmp/pf-pricing-extracted.json`:

```bash
python3 << 'PYEOF'
import json

# FILL: one object per competitor, following the schema below
extracted = [
    # {
    #   "competitor": str,
    #   "pricing_url": str,
    #   "data_quality": "high" | "medium" | "low",
    #   "pricing_model": "per-seat" | "flat-rate" | "usage-based" | "freemium" | "tiered-flat" | "hybrid",
    #   "billing_cadence": ["monthly"] | ["annual"] | ["monthly", "annual"],
    #   "annual_discount": str,          # e.g. "20%" or "not found in page data"
    #   "free_tier": true | false,
    #   "free_trial": true | false,
    #   "free_trial_days": int | null,
    #   "tiers": [
    #     {
    #       "name": str,
    #       "price_monthly": float | null,         # null if Contact Sales
    #       "price_annual_monthly": float | null,  # per-month equivalent when billed annually
    #       "price_note": str,                     # "Contact Sales", "Free", or empty
    #       "seats": str,                          # "per seat", "unlimited", "up to 5", etc.
    #       "key_limits": [str],                   # storage, API calls, projects, etc.
    #       "key_features": [str]                  # top 3-5 features in this tier
    #     }
    #   ],
    #   "enterprise_tier": true | false,
    #   "enterprise_pricing": str,       # "Contact Sales" or actual price
    #   "regional_pricing": str | null   # e.g. "India: ₹999/mo" or null
    # }
]

json.dump(extracted, open('/tmp/pf-pricing-extracted.json', 'w'), indent=2)
print(f'Extracted pricing for {len(extracted)} competitors.')
for c in extracted:
    tier_count = len(c.get('tiers', []))
    print(f"  {c['competitor']:20} model={c['pricing_model']:15} tiers={tier_count} quality={c['data_quality']}")
PYEOF
```

---

## Step 8: Pattern Analysis (AI)

Print all extracted pricing data:

```bash
python3 -c "
import json
data = json.load(open('/tmp/pf-pricing-extracted.json'))
for c in data:
    print(f'\n{c[\"competitor\"]} ({c[\"pricing_model\"]}, quality={c[\"data_quality\"]})')
    for t in c.get('tiers', []):
        price = t.get('price_monthly')
        label = t.get('price_note', '')
        print(f'  {t[\"name\"]:15} \${price}/mo' if price is not None else f'  {t[\"name\"]:15} {label}')
"
```

**AI instructions:** Analyze all extracted pricing data and synthesize patterns. Follow `references/positioning-guide.md` for positioning analysis.

Write to `/tmp/pf-patterns.json`:

```bash
python3 << 'PYEOF'
import json

patterns = {
    # FILL from analysis

    # Dominant model across 5 competitors
    "dominant_model": "",       # the most common model
    "model_breakdown": {},      # {"per-seat": 3, "flat-rate": 1, "freemium": 1}
    "model_explanation": "",    # 2 sentences: why this model dominates this space

    # Price benchmarks (USD/mo, monthly billing)
    "entry_tier": {
        "min": None, "max": None, "median": None,
        "currency": "USD/mo",
        "note": ""              # e.g. "based on 4/5 competitors (1 was search snippet only)"
    },
    "mid_tier": {
        "min": None, "max": None, "median": None,
        "currency": "USD/mo",
        "note": ""
    },
    "enterprise_floor": "",     # e.g. "$99+/mo" or "Contact Sales (4/5 competitors)"

    # Billing patterns
    "annual_discount_typical": "",    # e.g. "15-20%"
    "billing_cadence_dominant": "",   # "monthly + annual", "monthly only", "annual only"

    # Free tier / trial prevalence
    "free_tier_count": 0,             # how many of 5 offer a free tier
    "free_trial_count": 0,            # how many of 5 offer a free trial
    "free_tier_typical_limits": [],   # what's typically in a free tier

    # Feature gates
    "always_free_features": [],       # features present in all free/entry tiers
    "always_paid_features": [],       # features locked behind paid in all competitors
    "variable_features": [],          # features that vary most across competitors

    # Regional pricing
    "regional_pricing_flags": [],     # competitors with region-specific pricing

    # Data quality
    "high_quality_count": 0,          # competitors with fetched page data
    "low_quality_count": 0,           # competitors with snippet-only data
    "data_quality_flags": []
}

json.dump(patterns, open('/tmp/pf-patterns.json', 'w'), indent=2)
print('Patterns written.')
print(f"Dominant model: {patterns['dominant_model']}")
print(f"Entry tier: ${patterns['entry_tier']['min']}-${patterns['entry_tier']['max']}/mo (median ${patterns['entry_tier']['median']})")
print(f"Free tier: {patterns['free_tier_count']}/5 | Free trial: {patterns['free_trial_count']}/5")
PYEOF
```

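The min/max/median fields should be computed from the Step 7 extraction, not eyeballed. A sketch, under the assumption that each competitor's cheapest paid tier is its entry tier (`entry_tier_benchmarks` is an illustrative helper, not part of the skill's scripts):

```python
from statistics import median

def entry_tier_benchmarks(extracted):
    """min/max/median of each competitor's cheapest paid tier (USD/mo).
    Tiers with no numeric price (Contact Sales, Free) are skipped."""
    entry_prices = []
    for comp in extracted:
        paid = [t["price_monthly"] for t in comp.get("tiers", [])
                if t.get("price_monthly")]  # drops None (Contact Sales) and 0 (free tiers)
        if paid:
            entry_prices.append(min(paid))
    if not entry_prices:
        return {"min": None, "max": None, "median": None}
    return {"min": min(entry_prices), "max": max(entry_prices),
            "median": median(entry_prices)}

extracted = [
    {"competitor": "A", "tiers": [{"price_monthly": 0}, {"price_monthly": 10.0}]},
    {"competitor": "B", "tiers": [{"price_monthly": 25.0}, {"price_monthly": None}]},
    {"competitor": "C", "tiers": [{"price_monthly": 12.0}]},
]
print(entry_tier_benchmarks(extracted))  # {'min': 10.0, 'max': 25.0, 'median': 12.0}
```

The same shape works for `mid_tier` with the second-cheapest paid tier; competitors that contributed no numeric price belong in the `note` field (e.g. "based on 4/5 competitors").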

---

## Step 9: Positioning Map + Recommendation (AI)

Print consolidated data:

```bash
python3 -c "
import json

analysis = json.load(open('/tmp/pf-product-analysis.json'))
extracted = json.load(open('/tmp/pf-pricing-extracted.json'))
patterns = json.load(open('/tmp/pf-patterns.json'))

print('=== PRODUCT ===')
print(f'Name: {analysis[\"product_name\"]}')
print(f'What it does: {analysis[\"one_line_description\"]}')
print('Differentiators:')
for d in analysis['differentiators']:
    print(f' - {d}')

print()
print('=== PATTERNS ===')
print(f'Dominant model: {patterns[\"dominant_model\"]} breakdown: {patterns[\"model_breakdown\"]}')
print(f'Entry tier: \${patterns[\"entry_tier\"][\"min\"]}-\${patterns[\"entry_tier\"][\"max\"]}/mo (median \${patterns[\"entry_tier\"][\"median\"]})')
print(f'Mid tier: \${patterns[\"mid_tier\"][\"min\"]}-\${patterns[\"mid_tier\"][\"max\"]}/mo (median \${patterns[\"mid_tier\"][\"median\"]})')
print(f'Enterprise: {patterns[\"enterprise_floor\"]}')
print(f'Free tier: {patterns[\"free_tier_count\"]}/5 | Free trial: {patterns[\"free_trial_count\"]}/5')

print()
print('=== COMPETITOR PRICING SUMMARY ===')
for c in extracted:
    print(f'{c[\"competitor\"]} ({c[\"pricing_model\"]}):')
    for t in c.get('tiers', []):
        p = t.get('price_monthly')
        print(f'  {t[\"name\"]}: \${p}/mo' if p is not None else f'  {t[\"name\"]}: {t.get(\"price_note\",\"\")}')
"
```

**AI instructions -- zero-hallucination rules:**

1. **Positioning map:** Name specific competitors from the extracted data. No invented observations.
2. **Underserved gap:** Must reference a specific price range or model type absent from the data.
3. **Every price recommendation:** Must cite a specific number from the patterns JSON (entry_tier.median, mid_tier.median, etc.).
4. **Free tier recommendation:** Must reference `free_tier_count` from patterns (e.g., "3/5 competitors offer a free tier, so not offering one is a risk").
5. **Differentiator gate:** Choose from the product's `differentiators` list in the analysis -- not invented features.
6. No em dashes. No banned words (powerful, seamless, game-changing, revolutionary, cutting-edge, leverage).
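Rule 3 can be spot-checked at the string level: a justification that claims to cite benchmark data should contain at least one number that actually appears in the patterns JSON. A rough sketch (a lexical check only, not a semantic one; `cites_patterns_number` is a hypothetical helper):

```python
import re

def cites_patterns_number(justification, patterns):
    """True if the justification mentions a numeric value present in patterns."""
    pattern_numbers = set(re.findall(r'\d+(?:\.\d+)?', str(patterns)))
    cited = re.findall(r'\d+(?:\.\d+)?', justification)
    return any(n in pattern_numbers for n in cited)

patterns = {"entry_tier": {"min": 10, "max": 25, "median": 12}}
print(cites_patterns_number("Priced at $11, just below the $12 entry median", patterns))  # True
print(cites_patterns_number("Priced to feel premium", patterns))  # False
```

A `False` here means the justification is asserted rather than benchmarked and should be rewritten against the patterns JSON.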

**Generate:**

1. Positioning map: who owns each quadrant (cheap+simple, middle, enterprise), and the underserved gap
2. Recommended pricing strategy: model + all tier prices + free tier decision + annual discount + what to gate

Write to `/tmp/pf-final.json`:

```bash
python3 << 'PYEOF'
import json

result = {
    "product_summary": {
        # FILL from analysis
        "product_name": "",
        "one_line_description": "",
        "differentiators": []
    },
    "competitors_researched": [],  # FILL: list of competitor names

    # Filled from patterns
    "pricing_model_analysis": {
        "dominant_model": "",
        "model_breakdown": {},
        "model_explanation": "",
        "free_tier_count": 0,
        "free_trial_count": 0,
        "annual_discount_typical": ""
    },

    # Benchmark table (filled from extracted data)
    "benchmark_table": [
        # Per competitor:
        # {"name": str, "model": str, "entry_price": str, "mid_price": str,
        #  "top_price": str, "free_tier": bool, "free_trial": bool, "data_quality": str}
    ],

    # Market ranges
    "market_ranges": {
        "entry": {"min": None, "max": None, "median": None},
        "mid": {"min": None, "max": None, "median": None},
        "enterprise": ""
    },

    # Feature gate analysis
    "feature_gates": {
        "always_free": [],
        "always_paid": [],
        "most_variable": []
    },

    # Positioning map
    "positioning_map": {
        "cheap_simple": {"competitor": "", "price": ""},
        "middle_market": [],
        "enterprise": {"competitor": "", "note": ""},
        "underserved_gap": ""
    },

    # Recommendation
    "recommendation": {
        "model": "",
        "model_justification": "",      # references specific data from model_breakdown
        "entry_price": "",              # e.g. "$12/mo"
        "entry_justification": "",      # references entry_tier.median
        "mid_price": "",
        "mid_justification": "",
        "top_price": "",                # price or "Contact Sales"
        "top_justification": "",
        "free_tier": True,              # bool
        "free_tier_justification": "",  # references free_tier_count
        "annual_discount": "",          # e.g. "17%"
        "annual_justification": "",
        "gate_behind_paid": "",         # specific differentiator from product analysis
        "gate_justification": ""
    },

    "data_quality_flags": []
}

json.dump(result, open('/tmp/pf-final.json', 'w'), indent=2)
print('Synthesis written.')
print(f'Benchmark table: {len(result.get("benchmark_table", []))} competitors')
print(f'Recommendation model: {result.get("recommendation", {}).get("model", "--")}')
PYEOF
```

---

## Step 10: Self-QA, Present, and Save

**Self-QA:**

```bash
python3 << 'PYEOF'
import json, re

result = json.load(open('/tmp/pf-final.json'))
failures = []

# Check 1: em dashes
full_text = json.dumps(result)
if '—' in full_text:
    result = json.loads(full_text.replace('—', '-'))
    failures.append('Fixed: em dashes replaced with hyphens')

# Check 2: banned words (word-boundary match, so "transformation" does not trip "transform")
banned = ['powerful', 'seamless', 'innovative', 'game-changing', 'revolutionize',
          'cutting-edge', 'best-in-class', 'world-class', 'leverage', 'disrupt', 'transform']
text_lower = json.dumps(result).lower()
for word in banned:
    if re.search(r'\b' + re.escape(word) + r'\b', text_lower):
        failures.append(f'Warning: banned word "{word}" found in output')

# Check 3: recommendation completeness
rec = result.get('recommendation', {})
required = ['model', 'entry_price', 'mid_price', 'top_price', 'free_tier',
            'entry_justification', 'mid_justification', 'gate_behind_paid']
for field in required:
    if not rec.get(field) and rec.get(field) is not False:
        failures.append(f'Warning: recommendation missing field: {field}')

# Check 4: no Contact Sales replaced with numbers
for row in result.get('benchmark_table', []):
    for field in ['entry_price', 'mid_price', 'top_price']:
        val = str(row.get(field, ''))
        if 'contact' in val.lower():
            continue  # "Contact Sales" recorded as-is -- correct
        if row.get('data_quality') == 'low' and '$' in val:
            failures.append(f'Warning: {row["name"]} has dollar prices from low-quality source')

# Check 5: benchmark table populated
if len(result.get('benchmark_table', [])) < 3:
    failures.append(f'Warning: benchmark table has only {len(result.get("benchmark_table", []))} competitors -- need at least 3 for reliable benchmarks')

# Check 6: "not found in page data" count
nf = json.dumps(result).count('not found in page data')
if nf > 0:
    failures.append(f'INFO: {nf} field(s) marked "not found in page data"')

if 'data_quality_flags' not in result:
    result['data_quality_flags'] = []
result['data_quality_flags'].extend(failures)

json.dump(result, open('/tmp/pf-final.json', 'w'), indent=2)
print(f'QA complete. {len(failures)} issues.')
for f in failures:
    print(f' - {f}')
if not failures:
    print('All QA checks passed.')
PYEOF
```


**Present the output:**

```
## Pricing Intel: [product_name]
Date: [today] | Competitors: [list] | Geography: [geography]

---

### Your Product
[one_line_description]
Differentiators: [list]

---

### 1. Pricing Model Analysis
Dominant model: [dominant_model] ([N]/5 competitors)
[model_explanation -- 2-3 sentences on why this model dominates the space]

Free tier: [N]/5 competitors | Free trial: [N]/5 | Annual discount: typical [X]%

---

### 2. Price Point Benchmark Table
| Competitor | Model | Entry | Mid | Top | Free tier | Free trial | Data quality |
|---|---|---|---|---|---|---|---|
[one row per competitor from benchmark_table]

Market ranges:
- Entry tier: $[min]-$[max]/mo (median $[median])
- Mid tier: $[min]-$[max]/mo (median $[median])
- Enterprise: [enterprise_floor]

---

### 3. Feature Gate Analysis
Always free: [always_free list]
Always behind paid: [always_paid list]
Most variable across competitors: [most_variable list]

---

### 4. Competitive Positioning Map
Cheap + simple: [competitor] at $[X]/mo
Middle market: [competitors] at $[X]-$[Y]/mo
Enterprise: [competitor] (Contact Sales)
Underserved gap: [underserved_gap -- specific observation]

---

### 5. Recommended Pricing for [product_name]
Model: [model] -- [model_justification]
Entry: [entry_price] -- [entry_justification]
Mid: [mid_price] -- [mid_justification]
Top: [top_price] -- [top_justification]
Free tier: [Yes/No] -- [free_tier_justification]
Annual discount: [annual_discount] -- [annual_justification]
Gate behind paid: [gate_behind_paid] -- [gate_justification]

---
Data notes: [data_quality_flags or "None"]
Saved to: docs/pricing-intel/[PRODUCT_SLUG]-[DATE].md
```


**Save to file and clean up:**

```bash
DATE=$(date +%Y-%m-%d)
OUTPUT_FILE="docs/pricing-intel/${PRODUCT_SLUG}-${DATE}.md"
mkdir -p docs/pricing-intel
# FILL: write the rendered report from the Present step to "$OUTPUT_FILE"
echo "Saved to: $OUTPUT_FILE"
```

```bash
rm -f /tmp/pf-product-raw.md /tmp/pf-product-analysis.json \
      /tmp/pf-competitors-raw.json /tmp/pf-competitors-confirmed.json \
      /tmp/pf-pricing-raw.json /tmp/pf-pricing-extracted.json \
      /tmp/pf-patterns.json /tmp/pf-final.json
echo "Temp files cleaned up."
```