@opendirectory.dev/skills 0.1.38 → 0.1.40
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/registry.json +24 -2
- package/skills/map-your-market/SKILL.md +1 -9
- package/skills/product-update-logger/.env.example +4 -0
- package/skills/product-update-logger/README.md +197 -0
- package/skills/product-update-logger/SKILL.md +462 -0
- package/skills/product-update-logger/evals/evals.json +119 -0
- package/skills/product-update-logger/references/changelog-format.md +96 -0
- package/skills/product-update-logger/references/content-rules.md +154 -0
- package/skills/product-update-logger/references/noise-filter.md +86 -0
- package/skills/product-update-logger/scripts/gather.py +364 -0
- package/skills/where-your-customer-lives/.env.example +4 -0
- package/skills/where-your-customer-lives/README.md +135 -0
- package/skills/where-your-customer-lives/SKILL.md +515 -0
- package/skills/where-your-customer-lives/evals/evals.json +108 -0
- package/skills/where-your-customer-lives/references/channel-types.md +147 -0
- package/skills/where-your-customer-lives/references/entry-tactics.md +179 -0
- package/skills/where-your-customer-lives/references/scoring-guide.md +141 -0
- package/skills/where-your-customer-lives/scripts/fetch.py +810 -0
@@ -0,0 +1,135 @@
# where-your-customer-lives

Give this skill a product utility and ICP. It searches Reddit, Hacker News, and DuckDuckGo to find the specific communities where your customer actually gathers -- then builds a per-channel playbook: evidence your ICP is there, one entry tactic, one content angle, and specific anti-patterns.

Stop guessing which communities to post in. Get signal-traced evidence.

## Install

```bash
npx "@opendirectory.dev/skills" install where-your-customer-lives --target claude
```

## What It Does

- Accepts: product description + ICP role + ICP pain + competitors (any combination)
- Signal-traces: searches Reddit/HN for posts matching your ICP, extracts which communities those posts came from
- Competitor layer: searches where competitors are discussed -- those communities contain ICP actively evaluating alternatives
- Discovers 7 channel types: Reddit, Slack, Discord, Newsletter, Podcast, Conference, YouTube
- Fetches actual member counts from the Reddit API and DuckDuckGo snippets -- no estimates
- Scores channels: ICP signal count (10x) + size (log scale) + activity + competitor mentions - entry penalty
- Generates a per-channel playbook: who is here, entry tactic, content angle, anti-patterns
- Saves to `docs/channel-map/[slug]-[date].md` + JSON snapshot

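A quick look at where those member counts come from: Reddit exposes subscriber numbers through its public `about.json` endpoint, no auth required. A minimal sketch of the lookup -- the helper names here are illustrative, not functions from the shipped `scripts/fetch.py`:

```python
import json
import urllib.request

def parse_subscribers(about_json):
    """Extract the subscriber count from a subreddit about.json payload."""
    data = about_json.get("data", {})
    subs = data.get("subscribers")
    return int(subs) if subs is not None else None

def fetch_subscribers(subreddit):
    """Fetch r/<subreddit>/about.json (no auth) and return its member count."""
    # illustrative helper, not part of scripts/fetch.py
    url = f"https://www.reddit.com/r/{subreddit}/about.json"
    req = urllib.request.Request(url, headers={"User-Agent": "channel-map/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return parse_subscribers(json.load(resp))

# Offline example against the documented response shape:
sample = {"kind": "t5", "data": {"display_name": "devops", "subscribers": 412000}}
print(parse_subscribers(sample))  # 412000
```

Keeping `parse_subscribers` separate from the network call means the parsing can be checked against a canned payload, and a missing field yields `None` rather than an estimate -- matching the "no estimates" rule above.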
## Requirements

| Requirement | Purpose | How to Set Up |
|---|---|---|
| GITHUB_TOKEN | Optional -- improves competitor-layer rate limit | github.com/settings/tokens (no scopes needed) |

No other API keys required.

## Setup

```bash
cp .env.example .env
# Add GITHUB_TOKEN if you want higher GitHub rate limits for competitor enrichment
```

## How to Use

```
"Where does my customer live? I build observability tools for DevOps engineers."
"Find communities for my ICP: sales leaders at B2B SaaS startups dealing with low reply rates."
"Where should I post? Competitors: Clay, Apollo. ICP: growth engineers."
"What channels does my ICP use? Product: compliance automation for community banks."
"I build payment APIs for marketplaces. Who should I reach out to and where?"
```

Include competitor names for richer channel discovery. Include ICP role + pain for accurate signal-tracing.

## Why This Instead of Googling

A founder doing this manually would:
- Google "devops communities" and get listicles with outdated or generic results
- Post in r/devops without knowing if their ICP is actually there
- Spend a month in the wrong Slack before realizing it's not their market

This skill traces backwards from actual ICP pain posts to their source communities. The channels it returns are the ones where your ICP is already discussing the exact problem you solve.

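Tracing a post back to its source community is, at its simplest, a URL parse. A sketch of the idea, assuming Reddit-style permalinks -- `trace_community` is illustrative, not a function from `scripts/fetch.py`:

```python
import re

def trace_community(post_url):
    """Return the subreddit a Reddit post URL points at, or None."""
    # illustrative helper, not part of scripts/fetch.py
    m = re.search(r"reddit\.com/r/([A-Za-z0-9_]+)/", post_url)
    return f"r/{m.group(1)}" if m else None

posts = [
    "https://www.reddit.com/r/devops/comments/abc123/alert_fatigue_is_killing_us/",
    "https://www.reddit.com/r/sre/comments/def456/oncall_burnout/",
    "https://news.ycombinator.com/item?id=1",  # HN items carry no community -> None
]
counts = {}
for url in posts:
    community = trace_community(url)
    if community:
        counts[community] = counts.get(community, 0) + 1
print(counts)  # {'r/devops': 1, 'r/sre': 1}
```

Each matched post adds one ICP signal to its source community; those per-community tallies are what feed `icp_signal_count` in the scoring below.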
## Channel Scoring

```
channel_score = (
    icp_signal_count * 10        # signals traced here -- highest weight
  + log10(members) * 15, max 50  # community size (log scale)
  + activity_score, max 30       # posts/week proxy
  + competitor_mentions * 5      # competitor discussed = ICP evaluating
) - entry_penalty                # -20 paid, -10 invite-only, 0 open
```

Score tiers: top-priority (100+), high (60-99), medium (30-59), low (<30).

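Read literally, the formula and tiers above come out to a few lines of Python. This is a hedged reimplementation for illustration -- `scripts/fetch.py` remains the authoritative version; the entry-penalty values follow the comment in the formula:

```python
import math

ENTRY_PENALTY = {"open": 0, "invite-only": 10, "paid": 20}

def channel_score(icp_signal_count, members, activity_score,
                  competitor_mentions, entry_type="open"):
    """Score one channel per the formula: signals dominate, size and activity are capped."""
    size = min(math.log10(members) * 15, 50) if members > 0 else 0
    activity = min(activity_score, 30)
    score = icp_signal_count * 10 + size + activity + competitor_mentions * 5
    return round(score - ENTRY_PENALTY.get(entry_type, 0))

def tier(score):
    """Map a score onto the tier thresholds above."""
    if score >= 100:
        return "top-priority"
    if score >= 60:
        return "high"
    if score >= 30:
        return "medium"
    return "low"

s = channel_score(icp_signal_count=8, members=120_000, activity_score=40,
                  competitor_mentions=3, entry_type="open")
print(s, tier(s))  # 175 top-priority
```

Note the caps in action: 120K members would contribute ~76 points uncapped, but the size term is clamped to 50 so a huge, low-signal community cannot outrank a small one with many traced ICP signals.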
## Velocity Tracking

Run the skill every quarter. JSON snapshots in `docs/channel-map/` let you track which communities grow or shrink for your ICP over time.

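Quarter-over-quarter comparison is a join on channel name across two snapshots. A sketch, assuming the snapshot shape this skill writes (a top-level `channels` list with `name` and `members`); `member_deltas` is illustrative:

```python
def member_deltas(old_snapshot, new_snapshot):
    """Compare member counts per channel across two quarterly snapshots."""
    old = {ch["name"]: ch.get("members", 0) for ch in old_snapshot["channels"]}
    deltas = []
    for ch in new_snapshot["channels"]:
        name, now = ch["name"], ch.get("members", 0)
        before = old.get(name)
        if before:  # only channels present in both snapshots, with a known count
            deltas.append((name, now - before, (now - before) / before * 100))
    return sorted(deltas, key=lambda d: -d[2])  # fastest-growing first

q1 = {"channels": [{"name": "r/devops", "members": 400_000},
                   {"name": "r/sre", "members": 50_000}]}
q2 = {"channels": [{"name": "r/devops", "members": 412_000},
                   {"name": "r/sre", "members": 61_000}]}
for name, delta, pct in member_deltas(q1, q2):
    print(f"{name}: {delta:+d} ({pct:+.1f}%)")
```

Sorting by percentage rather than absolute delta surfaces the smaller community growing fastest -- here r/sre at +22% ranks above r/devops at +3% despite gaining fewer members.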
## Cost Per Run

- Reddit, HN: free, no auth
- DuckDuckGo: free, no auth
- GitHub: free with optional token
- AI analysis: uses the model running the skill
- Total: free

## Standalone Script

Run channel discovery without Claude. Useful when you want raw channel data first, then bring it to any AI for analysis.

```bash
# Basic usage
python3 scripts/fetch.py "devops monitoring" --icp-role "DevOps engineers" --icp-pain "alert fatigue"

# With competitors
python3 scripts/fetch.py "B2B sales" --competitors "Clay,Apollo,HubSpot" --output results.json

# Print to stdout
python3 scripts/fetch.py "startup gtm" --icp-role "technical co-founders" --stdout | jq '.summary'

# With GitHub token
GITHUB_TOKEN=your_token python3 scripts/fetch.py "devops tools" --competitors "Datadog,Grafana"
```

The script writes a JSON file with all discovered channels. Open it with Claude and ask: "Generate a channel playbook from this data."

## Project Structure

```
where-your-customer-lives/
├── SKILL.md
├── README.md
├── .env.example
├── scripts/
│   └── fetch.py          standalone channel discovery script
├── evals/
│   └── evals.json        5 test cases
└── references/
    ├── channel-types.md  7 channel types, discovery methods, scoring notes
    ├── entry-tactics.md  entry tactic templates per channel type
    └── scoring-guide.md  channel scoring formula and tier thresholds
```

## What map-your-market Does vs. This Skill

| | map-your-market | where-your-customer-lives |
|---|---|---|
| Question answered | What does my market care about? | Where does my market gather? |
| Output | Pain clusters, ICP profile, positioning framework | Ranked channel list with per-channel playbook |
| Data sources | Reddit, HN, GitHub Issues, G2, Trends | Reddit, HN, DuckDuckGo (7 channel types) |
| Primary use | Market research, messaging | Distribution, outreach, GTM channels |

Run both for a complete picture: map-your-market tells you what to say, where-your-customer-lives tells you where to say it.

## License

MIT
@@ -0,0 +1,515 @@
---
name: where-your-customer-lives
description: Given a product utility and ICP, researches the internet to find the specific channels where your customer actually lives, ranked by reachability, with a full per-channel playbook. Returns evidence that your ICP is there, one entry tactic, one content angle, and specific anti-patterns per channel. Use when asked where my customer hangs out, what communities should I post in, where is my ICP, find channels for outreach, what forums does my ICP use, where should I spend time for distribution, or which communities are right for my product.
compatibility: [claude-code, gemini-cli, github-copilot]
---

# Where Your Customer Lives

Given a product utility and ICP, trace real ICP pain posts back to their source communities. Layer in competitor discussion signals. Discover Slack/Discord/newsletter/podcast/conference channels via DuckDuckGo. Score every channel by ICP signal count, size, activity, and competitor presence. Output a ranked playbook: evidence, entry tactic, content angle, anti-patterns -- one per channel. No guessing. Signal-traced channels only.

---

**Critical rule:** Every channel name in the output must exist in either the Reddit API response or DuckDuckGo search results from this run. Every member count must come from the `about.json` API or a search snippet -- never estimated. Every ICP signal count must match the raw data. If a channel type returns 0 results, report 0 -- do not fabricate channels.

---

## Common Mistakes

| The agent will want to... | Why that's wrong |
|---|---|
| Recommend generic channels ("LinkedIn", "Twitter") | Every channel must be specific, with a name, member count, and URL. "LinkedIn Group: DevOps for Enterprise Teams (45K members)" -- not just "LinkedIn". |
| Use the same channels for every ICP | Signal-trace is ICP-specific. A DevOps ICP and a Finance ICP produce entirely different channel lists. Run the script fresh per ICP. |
| Invent member counts or community names | Every channel name must come from DuckDuckGo results or the Reddit API. Every member count must come from the API or a search snippet. If unavailable, write "member count not found". |
| Skip the competitor layer | Where competitors are discussed = your ICP is evaluating alternatives = hottest outreach context. Always run the competitor search even if the user did not ask. |
| Write entry tactics that are product pitches | "Post about your product in r/devops" is not an entry tactic. Entry tactics name the specific thread type, content format, and community norm. |
| Treat Reddit as the only channel type | The output must include at least 3 channel types. If only Reddit is found, explicitly search DuckDuckGo for Slack/Discord/newsletter/conference before stopping. |

---

## Step 1: Setup Check

```bash
echo "GITHUB_TOKEN: ${GITHUB_TOKEN:-not set -- competitor layer runs at 60 req/hr unauthenticated}"
echo ""
echo "Data sources this run will use:"
echo "  Reddit public JSON (no auth, signal-trace)"
echo "  Reddit about.json (no auth, subreddit metadata)"
echo "  HN Algolia API (no auth, signal-trace)"
echo "  DuckDuckGo HTML (no auth, channel discovery)"
echo "  GitHub API (${GITHUB_TOKEN:+authenticated, }optional for competitor enrichment)"
```

If `GITHUB_TOKEN` is not set: continue. All core channel discovery works without it.

---

## Step 2: Parse ICP

Collect from the conversation:
- `product` -- what the product does (one sentence)
- `icp_role` -- who the ICP is (e.g. "technical co-founders", "DevOps engineers at Series A")
- `icp_pain` -- their primary problem (e.g. "customer acquisition", "alert fatigue")
- `category` -- market category keywords (e.g. "startup gtm sales", "devops monitoring")
- `competitors` -- optional competitor names (e.g. "Clay, Apollo, HubSpot")

**ICP cascade:**
1. If the user's prompt contains product + icp_role + icp_pain: extract them directly and proceed.
2. If the prompt is thin (only a category or only a product name): check `docs/icp.md` for a saved ICP profile. Merge with prompt details.
3. If still insufficient (missing icp_role or icp_pain): ask these 3 questions, one at a time:
   - "What does your product do in one sentence?"
   - "Who is your ideal customer? (role, company type, team size)"
   - "What is their primary problem before they find your product?"
4. Save the final ICP to `docs/icp.md` so other skills can reuse it.

Save the ICP file if `docs/icp.md` does not already contain this product:

```bash
python3 << 'PYEOF'
import json, os

icp = {
    "product": "PRODUCT_HERE",
    "icp_role": "ICP_ROLE_HERE",
    "icp_pain": "ICP_PAIN_HERE",
    "competitors": ["COMP_1", "COMP_2"],
    "category": "CATEGORY_HERE"
}

os.makedirs("docs", exist_ok=True)
with open("/tmp/wcl-input.json", "w") as f:
    json.dump(icp, f, indent=2)

# Update docs/icp.md
icp_md_path = "docs/icp.md"
new_block = f"""## {icp['product']}
- **ICP role:** {icp['icp_role']}
- **ICP pain:** {icp['icp_pain']}
- **Competitors:** {', '.join(icp['competitors']) if icp['competitors'] else 'none'}
- **Category:** {icp['category']}
"""
existing = open(icp_md_path).read() if os.path.exists(icp_md_path) else ""
if icp['product'] not in existing:
    with open(icp_md_path, "a") as f:
        f.write(new_block)
    print(f"ICP saved to {icp_md_path}")
else:
    print(f"ICP already in {icp_md_path}")

print(f"Product: {icp['product']}")
print(f"ICP role: {icp['icp_role']}")
print(f"ICP pain: {icp['icp_pain']}")
print(f"Competitors: {', '.join(icp['competitors']) if icp['competitors'] else 'none'}")
PYEOF
```

---

## Step 3: Run the Standalone Data Collection Script

Check if the script exists:

```bash
ls scripts/fetch.py 2>/dev/null && echo "script available" || echo "not found"
```

Run channel discovery:

```bash
GITHUB_TOKEN="${GITHUB_TOKEN:-}" python3 scripts/fetch.py \
  "$(python3 -c "import json; d=json.load(open('/tmp/wcl-input.json')); print(d['category'])")" \
  --icp-role "$(python3 -c "import json; d=json.load(open('/tmp/wcl-input.json')); print(d['icp_role'])")" \
  --icp-pain "$(python3 -c "import json; d=json.load(open('/tmp/wcl-input.json')); print(d['icp_pain'])")" \
  --product "$(python3 -c "import json; d=json.load(open('/tmp/wcl-input.json')); print(d['product'])")" \
  --competitors "$(python3 -c "import json; d=json.load(open('/tmp/wcl-input.json')); print(','.join(d['competitors']))")" \
  --output /tmp/wcl-raw.json
```

Wait for completion (allow up to 5 minutes -- Reddit + DuckDuckGo searches take ~120 seconds total).

Verify output:

```bash
python3 -c "
import json
with open('/tmp/wcl-raw.json') as f:
    d = json.load(f)
print(f'Reddit posts found: {d[\"reddit_posts_found\"]}')
print(f'HN signals found: {d[\"hn_signals_found\"]}')
print(f'Channels discovered: {d[\"summary\"][\"total_channels\"]}')
print(f'Top priority: {len(d[\"summary\"][\"top_priority\"])}')
print(f'By type: {d[\"summary\"][\"by_type\"]}')
print(f'Competitor layer ran: {d[\"summary\"][\"competitor_layer_ran\"]}')
"
```

If total_channels < 3: tell the user: "Fewer than 3 channels found. The ICP description may be too narrow for Reddit/DDG coverage. Try broader category keywords, or add competitor names to activate the competitor layer." Then attempt one retry with broader category keywords before stopping.

---

## Step 4: Print Channel Summary

Load the raw data and print a ranked summary table:

```bash
python3 -c "
import json
with open('/tmp/wcl-raw.json') as f:
    d = json.load(f)
channels = d['channels_discovered']
print(f'Channels found: {len(channels)}')
print()
header = ('#', 'Channel', 'Type', 'Members', 'ICP signals', 'Score', 'Tier')
print(f'{header[0]:<4} {header[1]:<35} {header[2]:<14} {header[3]:<12} {header[4]:<13} {header[5]:<8} {header[6]}')
print('-' * 100)
for i, ch in enumerate(channels[:15], 1):
    members = ch.get('members', 0)
    m_str = f'{members//1000}K' if members >= 1000 else str(members) if members else '?'
    print(f'{i:<4} {ch[\"name\"]:<35} {ch[\"type\"]:<14} {m_str:<12} {ch.get(\"icp_signal_count\",0):<13} {ch.get(\"channel_score\",0):<8} {ch.get(\"tier\",\"\")}')
"
```

Print the top 3 evidence posts from the highest-scoring channel:

```bash
python3 -c "
import json
with open('/tmp/wcl-raw.json') as f:
    d = json.load(f)
channels = d['channels_discovered']
if channels:
    top = channels[0]
    print(f'Top channel: {top[\"name\"]}')
    print('Evidence posts:')
    for ep in top.get('evidence_posts', [])[:3]:
        print(f'  [{ep.get(\"score\",0):.0f}] {ep.get(\"title\",\"\")}')
        print(f'    {ep.get(\"url\",\"\")}')
"
```

---

## Step 5: AI Channel Enrichment

You now have the raw channel data. For each channel in the top-priority and high tiers, generate a playbook entry.

Load all channels:

```bash
python3 -c "
import json
with open('/tmp/wcl-raw.json') as f:
    d = json.load(f)
top_channels = [ch for ch in d['channels_discovered'] if ch.get('tier') in ('top-priority', 'high')]
print(json.dumps(top_channels, indent=2))
"
```

For each channel above, generate:

**who_is_here:** 2 sentences describing the specific type of ICP present in this channel. Derive from the evidence posts, subreddit description, and ICP profile. Do NOT write "your target audience" -- be specific. Example: "DevOps engineers at companies of 50-500 who own the infra stack without a dedicated SRE team. They post about on-call burnout, Kubernetes sprawl, and choosing between cloud-native and self-hosted observability."

**entry_tactic:** One specific, actionable entry move. Name the thread type, posting format, and community norm. NOT "engage with the community." Example: "Find the weekly 'What are you working on?' thread (posted every Monday by automoderator). Reply with a 3-sentence technical challenge you solved -- what broke, what you tried, what worked. No product mention. Build karma before posting standalone content."

**content_angle:** The content format that gets the highest engagement in this specific channel, derived from evidence post titles and scores. Example: "Technical post-mortems outperform product announcements 5:1 here. Format: 'We migrated 200K users from X to Y -- here is what broke and why.' Concrete numbers + what failed = most upvotes."

**anti_patterns:** 2-3 specific behaviors that get posts removed or reputations destroyed in this community. Derive from subreddit rules (if available in the description) and evidence post patterns. Example: ["Posting product links in non-promotional threads -- moderators remove within hours", "Asking 'what tools do you use?' without specific context -- flagged as market research farming"]

Write the enriched playbook to `/tmp/wcl-channels.json`:

```json
{
  "playbook": [
    {
      "channel": "r/devops",
      "evidence": "34 ICP signals traced here, avg pain score 180",
      "who_is_here": "...",
      "entry_tactic": "...",
      "content_angle": "...",
      "anti_patterns": ["...", "..."]
    }
  ]
}
```

```bash
python3 -c "
import json
with open('/tmp/wcl-channels.json') as f:
    d = json.load(f)
print(f'Playbook entries: {len(d[\"playbook\"])}')
for p in d['playbook']:
    print(f'  {p[\"channel\"]}')
"
```

---

## Step 6: Generate Full Ranked Output

Write the complete ranked playbook to `/tmp/wcl-output.json`:

```bash
python3 << 'PYEOF'
import json

with open('/tmp/wcl-input.json') as f:
    inp = json.load(f)
with open('/tmp/wcl-raw.json') as f:
    raw = json.load(f)
with open('/tmp/wcl-channels.json') as f:
    enriched = json.load(f)

playbook_by_channel = {p['channel']: p for p in enriched['playbook']}
channels = raw['channels_discovered']

output = {
    "date": raw['date'],
    "product": inp['product'],
    "icp_role": inp['icp_role'],
    "icp_pain": inp['icp_pain'],
    "competitors": inp.get('competitors', []),
    "total_channels": raw['summary']['total_channels'],
    "channels": []
}

for rank, ch in enumerate(channels, 1):
    name = ch['name']
    playbook = playbook_by_channel.get(name, {})
    output['channels'].append({
        "rank": rank,
        "name": name,
        "type": ch['type'],
        "url": ch['url'],
        "members": ch.get('members', 0),
        "active_users": ch.get('active_users', 0),
        "icp_signal_count": ch.get('icp_signal_count', 0),
        "competitor_mentions": ch.get('competitor_mentions', 0),
        "channel_score": ch.get('channel_score', 0),
        "tier": ch.get('tier', ''),
        "entry_type": ch.get('entry_type', 'open'),
        "evidence_posts": ch.get('evidence_posts', []),
        "who_is_here": playbook.get('who_is_here', ''),
        "entry_tactic": playbook.get('entry_tactic', ''),
        "content_angle": playbook.get('content_angle', ''),
        "anti_patterns": playbook.get('anti_patterns', []),
    })

with open('/tmp/wcl-output.json', 'w') as f:
    json.dump(output, f, indent=2)

print("Output written: /tmp/wcl-output.json")
print(f"Total channels: {len(output['channels'])}")
PYEOF
```

---

## Step 7: Self-QA

```bash
python3 -c "
import json

with open('/tmp/wcl-raw.json') as f:
    raw = json.load(f)
with open('/tmp/wcl-output.json') as f:
    output = json.load(f)

full_text = json.dumps(output)
raw_channel_names = {ch['name'].lower() for ch in raw['channels_discovered']}
passes = 0
fails = 0

# Check 1: No em dashes
if chr(8212) in full_text:
    print('FAIL: em dash found in output -- replace with hyphen')
    fails += 1
else:
    print('PASS: no em dashes')
    passes += 1

# Check 2: No banned words
banned = ['powerful', 'robust', 'seamless', 'innovative', 'game-changing',
          'streamline', 'leverage', 'transform', 'revolutionize']
found = [w for w in banned if w.lower() in full_text.lower()]
if found:
    print(f'FAIL: banned words: {found}')
    fails += 1
else:
    print('PASS: no banned words')
    passes += 1

# Check 3: At least 3 channel types
types = {ch['type'] for ch in output['channels']}
if len(types) < 3:
    print(f'FAIL: only {len(types)} channel type(s) in output: {types}')
    fails += 1
else:
    print(f'PASS: {len(types)} channel types: {types}')
    passes += 1

# Check 4: All channel names exist in raw data
# (tracked separately so earlier failures do not mask this check's result)
name_fails = 0
for ch in output['channels']:
    if ch['name'].lower() not in raw_channel_names:
        print(f'FAIL: channel not in raw data: {ch[\"name\"]}')
        name_fails += 1
if name_fails == 0:
    print('PASS: all channel names verified in raw data')
    passes += 1
fails += name_fails

# Check 5: No generic entry tactics
generic_phrases = ['engage with the community', 'post about your product', 'share your content']
tactic_fails = 0
for ch in output['channels']:
    tactic = ch.get('entry_tactic', '').lower()
    for phrase in generic_phrases:
        if phrase in tactic:
            print(f'FAIL: generic entry tactic in {ch[\"name\"]}: contains \"{phrase}\"')
            tactic_fails += 1
if tactic_fails == 0:
    print('PASS: entry tactics are channel-specific')
    passes += 1
fails += tactic_fails

print()
print(f'Result: {passes} passed, {fails} failed')
if fails > 0:
    print('Fix failures before saving.')
else:
    print('All checks passed. Ready to save.')
"
```

Fix any failures before proceeding to Step 8.

---

## Step 8: Save Output and Clean Up

```bash
python3 << 'PYEOF'
import json, os, re
from datetime import datetime

with open('/tmp/wcl-input.json') as f:
    inp = json.load(f)
with open('/tmp/wcl-raw.json') as f:
    raw = json.load(f)
with open('/tmp/wcl-output.json') as f:
    output = json.load(f)

slug = re.sub(r'[^a-z0-9]+', '-', (inp.get('icp_role') or inp['category']).lower()).strip('-')[:40]
date = datetime.now().strftime('%Y-%m-%d')
os.makedirs('docs/channel-map', exist_ok=True)

outpath_md = f"docs/channel-map/{slug}-{date}.md"
outpath_json = f"docs/channel-map/{slug}-{date}.json"

channels = output['channels']
by_type = {}
for ch in channels:
    by_type.setdefault(ch['type'], []).append(ch)

lines = [
    f"# Where Your Customer Lives: {inp['product'] or inp['category'].title()}",
    f"ICP: {inp['icp_role']} | Date: {date} | Channels found: {len(channels)}",
    "",
    "---",
    "",
    "## Channel Ranking",
    "",
]

tier_labels = {"top-priority": "TOP PRIORITY", "high": "HIGH", "medium": "MEDIUM", "low": "LOW"}

for ch in channels:
    members = ch.get('members', 0)
    m_str = f"{members//1000}K" if members >= 1000 else str(members) if members else "member count not found"
    tier_label = tier_labels.get(ch.get('tier', ''), ch.get('tier', '').upper())

    lines.append(f"### #{ch['rank']}: {ch['name']} [score: {ch['channel_score']}] -- {tier_label}")

    active = ch.get('active_users', 0)
    active_str = f" | Active: {active//1000}K/day" if active >= 1000 else f" | Active: {active}/day" if active else ""
    lines.append(f"Type: {ch['type'].title()} | Members: {m_str}{active_str} | {ch.get('entry_type', 'open').title()} to join")

    evidence_str = f"{ch['icp_signal_count']} ICP signals traced here" if ch['icp_signal_count'] > 0 else "Discovered via DuckDuckGo search"
    lines.append(f"Evidence: {evidence_str}")

    if ch.get('competitor_mentions', 0) > 0 and inp.get('competitors'):
        lines.append(f"Competitor mentions: {ch['competitor_mentions']} across {', '.join(inp['competitors'][:3])}")

    lines.append("")

    if ch.get('who_is_here'):
        lines.append(f"**Who is here:** {ch['who_is_here']}")
        lines.append("")

    if ch.get('entry_tactic'):
        lines.append(f"**Entry tactic:** {ch['entry_tactic']}")
        lines.append("")

    if ch.get('content_angle'):
        lines.append(f"**Content angle:** {ch['content_angle']}")
        lines.append("")

    if ch.get('anti_patterns'):
        lines.append("**Anti-patterns:**")
        for ap in ch['anti_patterns']:
            lines.append(f"- {ap}")
        lines.append("")

    lines.append("---")
    lines.append("")

lines += [
    "## Channel Summary by Type",
    "",
    "| Type | Count | Best channel | Score |",
    "|---|---|---|---|",
]
for ch_type, chs in sorted(by_type.items(), key=lambda x: -max(c['channel_score'] for c in x[1])):
    best = max(chs, key=lambda x: x['channel_score'])
    lines.append(f"| {ch_type.title()} | {len(chs)} | {best['name']} | {best['channel_score']} |")

lines += [
    "",
    "---",
    "",
    "## Data Quality Notes",
    "- All channel names exist in the Reddit API response or DuckDuckGo search results",
    "- Member counts from the Reddit about.json API or search snippets",
    f"- ICP signal counts match raw data ({raw['reddit_posts_found']} Reddit posts, {raw['hn_signals_found']} HN signals)",
    f"- Competitor layer ran: {raw['summary']['competitor_layer_ran']}",
    "- Sources: Reddit signal-trace, HN signal-trace, DuckDuckGo channel discovery",
    "",
    f"Saved to: {outpath_md}",
    f"JSON snapshot: {outpath_json}",
]

with open(outpath_md, 'w') as f:
    f.write('\n'.join(lines))

# JSON snapshot
snapshot = {
    "input": inp,
    "channels": channels,
    "summary": raw['summary'],
    "date": date,
}
with open(outpath_json, 'w') as f:
    json.dump(snapshot, f, indent=2)

print(f"Report saved: {outpath_md}")
print(f"JSON snapshot: {outpath_json}")
PYEOF
```

Clean up temp files:

```bash
rm -f /tmp/wcl-input.json /tmp/wcl-raw.json /tmp/wcl-channels.json /tmp/wcl-output.json
echo "Done. Channel map saved to docs/channel-map/"
```

Present the full contents of the saved `.md` file to the user.