@opendirectory.dev/skills 0.1.49 → 0.1.51

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@opendirectory.dev/skills",
- "version": "0.1.49",
+ "version": "0.1.51",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "bin": {
package/registry.json CHANGED
@@ -135,6 +135,16 @@
  "version": "1.0.0",
  "path": "skills/kill-the-standup"
  },
+ {
+   "name": "linkedin-job-post-to-buyer-pain-map",
+   "description": "Takes pasted LinkedIn job posts or hiring descriptions and converts them into a structured buyer pain map with inferred pains, capability gaps, buy...",
+   "tags": [
+     "Social Media"
+   ],
+   "author": "ajaycodesitbetter",
+   "version": "1.0.0",
+   "path": "skills/linkedin-job-post-to-buyer-pain-map"
+ },
  {
  "name": "linkedin-post-generator",
  "description": "Converts any content, blog post URL, pasted article, GitHub PR description, or a description of something built, into a formatted LinkedIn post wit..."
@@ -0,0 +1,7 @@
+ # linkedin-job-post-to-buyer-pain-map — Environment Variables
+ # ============================================================
+ # An LLM is required for pain map analysis and scoring.
+
+ # Required: Gemini API key for signal analysis and pain inference
+ # Get one at aistudio.google.com → Get API key
+ GEMINI_API_KEY=your_gemini_api_key_here
@@ -0,0 +1,144 @@
+ # linkedin-job-post-to-buyer-pain-map
+
+ <img src="cover.png" width="100%" alt="LinkedIn Job Post to Buyer Pain Map — Signal Decoder for GTM Teams" />
+
+ Turn LinkedIn job posts into an actionable buyer pain map with signal strength, urgency, and outreach angles for each account.
+
+ ## What It Does
+
+ - Parses 1–15 pasted job descriptions and groups them by company
+ - Extracts team, seniority, responsibility emphasis, requirement language, and tool/stack hints from each post
+ - Infers primary and secondary operational pains with cited evidence from the job text
+ - Scores each account on 3 dimensions: signal strength (1–10), urgency (1–10), ICP fit (1–10)
+ - Computes an overall priority score (10–100) using a transparent weighted formula
+ - Determines buy-vs-build posture and company stage from hiring language
+ - Generates 1–3 actionable outreach angles per account with talk-track bullets
+ - Produces a structured handoff object ready for `outreach-sequence-builder` consumption
+
+ ## Requirements
+
+ | Requirement | Purpose | How to Set Up |
+ |-------------|---------|---------------|
+ | Gemini API key | Signal analysis, scoring, and pain inference | aistudio.google.com → Get API key |
+
+ ## Setup
+
+ ```bash
+ cp .env.example .env
+ ```
+
+ Fill in `GEMINI_API_KEY` (required).
+
+ No other dependencies. The skill runs entirely through agent instructions and Gemini API calls.
+
+ ## How to Use
+
+ Analyze a single job post:
+
+ ```
+ "Analyze this job post for buyer pain. My product is [X]. ICP: [Y]. Here's the post: [paste job description]"
+ ```
+
+ Batch analysis of multiple posts:
+
+ ```
+ "Build a pain map from these 5 job posts. Product: [X]. ICP: [Y]. Posts: [paste all posts]"
+ ```
+
+ Focus on positioning angles:
+
+ ```
+ "Decode these hiring descriptions for positioning angles. Product: [X]. ICP: [Y]. Posts: [paste posts]"
+ ```
+
+ Handoff to outreach:
+
+ ```
+ "Analyze these job posts and give me a handoff I can pass to outreach-sequence-builder"
+ ```
+
+ ## Input Format
+
+ Each job post should include at minimum:
+ - **Company name** and **job title** (required)
+ - **Job description text** — the full pasted content (required)
+ - Location, seniority, team/function, URL (optional — inferred if missing)
+
+ Posts can be pasted as plain text, numbered lists, or structured JSON objects.
+
+ ## Output
+
+ For each company analyzed, the skill produces:
+
+ 1. **Summary** — headline, overall score, score breakdown with explanations, stage guess, buy-vs-build posture
+ 2. **Buyer Pain Map** — primary and secondary pains, each with label, description, and cited evidence
+ 3. **Outreach Angles** — 1–3 angles with narrative and talk-track bullets
+ 4. **Handoff** — structured block for `outreach-sequence-builder` (account summary, key pain, personas, tone)
+
+ Reports are saved to `docs/pain-maps/` as dated markdown files.
+
+ ## Scoring Model
+
+ | Dimension | Weight | Measures |
+ |-----------|--------|----------|
+ | Signal Strength | 40% | Density and specificity of relevant hiring signals |
+ | Urgency | 30% | Time-sensitivity indicators in the job text |
+ | ICP Fit | 30% | Company profile match to the user's ICP description |
+
+ **Overall Score** = `round((0.4 × signal + 0.3 × urgency + 0.3 × icp_fit) × 10)` → 10–100 scale.
+
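The weighted formula can be sanity-checked with a short sketch (a hypothetical helper for illustration; it is not shipped with the skill):

```python
# Hypothetical helper mirroring the documented formula; not part of the skill.
def overall_score(signal: int, urgency: int, icp_fit: int) -> int:
    """Weighted 10-100 priority score: 40% signal, 30% urgency, 30% ICP fit."""
    for dim in (signal, urgency, icp_fit):
        if not 1 <= dim <= 10:
            raise ValueError("each dimension is scored 1-10")
    return round((0.4 * signal + 0.3 * urgency + 0.3 * icp_fit) * 10)

print(overall_score(9, 8, 9))  # 87 — the high-signal worked example
print(overall_score(1, 2, 2))  # 16 — a generic, low-signal post
```

Because every dimension is bounded to 1–10, the result is bounded to the documented 10–100 scale.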
+ See `references/scoring-rubric.md` for full scoring rules and modifiers.
+
+ ## Plays Well With
+
+ | Skill | How |
+ |-------|-----|
+ | `outreach-sequence-builder` | Feed the handoff block as input context to generate a multi-channel outreach sequence |
+ | `noise-to-linkedin-carousel` | Use primary pains and evidence quotes as content source material |
+ | `reddit-icp-monitor` | Cross-reference pain themes from job posts with Reddit discussions |
+
+ ## When NOT to Use
+
+ - Need to **find** companies hiring → use `linkedin-hiring-intent-scanner` or `yc-intent-radar-skill`
+ - Need to **write** outreach messages → use `outreach-sequence-builder`
+ - Need to **monitor** platforms for signals → use `reddit-icp-monitor` or `twitter-GTM-find-skill`
+ - Need contact data or email enrichment → this skill does not provide that
+
+ ## Project Structure
+
+ ```
+ linkedin-job-post-to-buyer-pain-map/
+ ├── SKILL.md
+ ├── README.md
+ ├── .env.example
+ ├── cover.png
+ ├── evals/
+ │   └── evals.json
+ └── references/
+     ├── scoring-rubric.md
+     └── examples.md
+ ```
+
+ ## License
+
+ MIT
+
+ ## Installation in Claude Desktop App
+
+ ### Video Tutorial
+ Watch this quick video to see how it's done:
+
+ https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
+
+ ### Step 1: Download the Skill from GitHub
+ 1. Copy the URL of this skill's folder from your browser's address bar.
+ 2. Go to [download-directory.github.io](https://download-directory.github.io/).
+ 3. Paste the URL and press **Enter** to download.
+
+ ### Step 2: Install the Skill in Claude
+ 1. Open the **Claude desktop app**.
+ 2. In the left sidebar, click the **Customize** section.
+ 3. Click the **Skills** tab, then click the **+** (plus) button to create a new skill.
+ 4. Choose **Upload a skill**, and drag and drop the `.zip` file (or extract it and drop the folder; both work).
+
+ > **Note:** Always make sure you upload the specific folder that contains the `SKILL.md` file!
@@ -0,0 +1,301 @@
+ ---
+ name: linkedin-job-post-to-buyer-pain-map
+ description: Takes pasted LinkedIn job posts or hiring descriptions and converts them into a structured buyer pain map with inferred pains, capability gaps, buy-vs-build signal, account priority scores, and suggested outreach angles. Use when asked to analyze hiring posts, decode job descriptions for buyer intent, build a pain map from job listings, extract GTM signals from hiring activity, or prioritize accounts based on hiring data. Trigger when a user says "analyze these job posts", "what pain does this hiring signal", "build a pain map from these listings", "decode this job description", or "score these accounts from their hiring".
+ author: ajaycodesitbetter
+ version: 1.0.0
+ ---
+
+ # LinkedIn Job Post to Buyer Pain Map
+
+ Take LinkedIn job posts. Decode them into a structured buyer pain map with scores, pains, and outreach angles.
+
+ ---
+
+ **Critical rule:** Every inferred pain must cite specific language from the job description that supports it. Never hallucinate pains that are not grounded in the text. If a post is too generic to infer pain, say so explicitly and assign a low signal strength score.
+
+ **Ethical rule:** Do not infer personal attributes or protected characteristics about candidates. Focus strictly on company-level operational pain and organizational needs.
+
+ ---
+
+ ## Step 1: Setup Check
+
+ Confirm the required env var:
+
+ ```bash
+ echo "GEMINI_API_KEY: ${GEMINI_API_KEY:+set}"
+ ```
+
+ **If GEMINI_API_KEY is missing:**
+ Stop. Tell the user: "GEMINI_API_KEY is required. Get it at aistudio.google.com. Add it to your .env file."
+
+ ---
+
+ ## Step 2: Collect Inputs
+
+ The skill needs 3 required inputs. Collect them before proceeding.
+
+ ### 2a: Product Brief
+
+ Ask: "Describe your product in 2-5 sentences. What do you do, what is your core value prop, and who do you target?"
+
+ **If the user already included this in their prompt:** Extract it. Confirm: "Product brief captured: [summary]."
+
+ ### 2b: ICP Description
+
+ Ask: "Describe your ideal customer profile in 2-6 bullets: industries, company sizes, roles you sell to, tech stack hints."
+
+ **If the user already included this in their prompt:** Extract it. Confirm: "ICP captured: [summary]."
+
+ ### 2c: Hiring Posts
+
+ Ask: "Paste the job descriptions you want analyzed. For each post, include the company name, job title, and the full description text. You can paste 1-15 posts."
+
+ **Accepted formats:**
+ - Raw pasted text with company name and job title clearly labeled
+ - Structured JSON objects with fields: `company_name`, `job_title`, `location` (optional), `seniority` (optional), `team_or_function` (optional), `job_description_text`, `job_url` (optional)
+ - Multiple posts separated by clear delimiters (`---` or numbered)
+
+ **If any field is missing:** Infer what you can from the description text. If `company_name` or `job_description_text` is missing, ask for it before proceeding.
+
+ ### 2d: Optional Inputs
+
+ If the user provides any of these, capture them:
+ - `account_notes`: additional context per company (funding, tech stack, known tools, contacts)
+ - `focus_dimension`: "pipeline" (bias toward scoring and prioritization), "positioning" (bias toward messaging angles), or "both" (default)
+
+ ---
+
+ ## Step 3: Extract Signals
+
+ For each job post, parse and extract:
+
+ 1. **Team / function:** Which team is this role on? (GTM, Product, Infra, Data, RevOps, CS, Engineering, etc.)
+ 2. **Seniority:** IC, Senior IC, Manager, Director, VP, C-level
+ 3. **Responsibilities emphasis:** Classify the dominant mode:
+    - Fire-fighting: "stabilize", "fix", "reduce downtime", "unblock"
+    - Building new: "build from scratch", "0→1", "greenfield", "design and implement"
+    - Optimizing: "scale", "optimize", "improve efficiency", "automate"
+    - Replacing: "replace legacy", "migrate from", "modernize"
+ 4. **Requirement language:** Note keywords that signal intent: "first X hire", "critical role", "immediate", "must have"
+ 5. **Tool / stack hints:** Any references to specific tools, platforms, or technology categories that overlap with the user's product area
+
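The "responsibilities emphasis" classification above can be sketched as simple keyword matching (an illustrative heuristic only — the skill performs this classification via agent reasoning; the keyword lists are the examples from the list above):

```python
# Illustrative keyword heuristic for the "responsibilities emphasis" modes;
# the skill does this via agent reasoning, not by running this code.
EMPHASIS_KEYWORDS = {
    "fire-fighting": ["stabilize", "fix", "reduce downtime", "unblock"],
    "building new": ["build from scratch", "0→1", "greenfield", "design and implement"],
    "optimizing": ["scale", "optimize", "improve efficiency", "automate"],
    "replacing": ["replace legacy", "migrate from", "modernize"],
}

def classify_emphasis(job_text: str) -> str:
    """Return the mode whose keywords appear most often, or 'unclear'."""
    text = job_text.lower()
    counts = {mode: sum(text.count(kw) for kw in kws)
              for mode, kws in EMPHASIS_KEYWORDS.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "unclear"
```

A real post often mixes modes; the agent should weigh context rather than raw counts, which is why this stays a sketch.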
+ **Group by company.** If multiple posts come from the same company, group their signals together.
+
+ State: "Extracted signals from X posts across Y companies."
+
+ ---
+
+ ## Step 4: Score with the LLM
+
+ Read `references/scoring-rubric.md` for the full scoring model.
+
+ Build the LLM request:
+
+ ```bash
+ cat > /tmp/pain-map-score-request.json << 'ENDJSON'
+ {
+   "system_instruction": {
+     "parts": [{
+       "text": "You are a GTM analyst who specializes in decoding hiring signals into buyer intent. For each account provided, you will score three dimensions and infer company context. Rules: (1) Every score must include a one-sentence plain-text explanation. (2) signal_strength measures how many and how specific the hiring signals are relative to the user's product area. (3) urgency measures how time-sensitive the hiring need appears. (4) icp_fit measures how closely the company matches the user's ICP description. (5) Each score is 1-10. (6) overall_score = round((0.4 * signal_strength + 0.3 * urgency + 0.3 * icp_fit) * 10). (7) Infer buy_vs_build from job language. Use EXACTLY one of these labels: 'Leaning build', 'Leaning buy', 'Hybrid (buy-and-build)', 'Unknown'. (8) Infer stage_guess from company clues: funding stage, employee count, company type. (9) Output valid JSON only."
+     }]
+   },
+   "contents": [{
+     "parts": [{
+       "text": "SCORING_CONTEXT_HERE"
+     }]
+   }],
+   "generationConfig": {
+     "temperature": 0.2,
+     "maxOutputTokens": 4096
+   }
+ }
+ ENDJSON
+ curl -s -X POST \
+   "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
+   -H "Content-Type: application/json" \
+   -d @/tmp/pain-map-score-request.json \
+   | python3 -c "import sys,json; d=json.load(sys.stdin); print(d['candidates'][0]['content']['parts'][0]['text'])"
+ ```
+
+ Replace `SCORING_CONTEXT_HERE` with:
+ - The product brief and ICP description from Step 2
+ - The extracted signals per company from Step 3
+ - The scoring rubric rules from `references/scoring-rubric.md`
+ - Instructions to output JSON with: `company_name`, `headline`, `overall_score`, `score_breakdown` (signal_strength, urgency, icp_fit — each with score and explanation), `stage_guess`, `buy_vs_build_guess`
+
+ ---
+
+ ## Step 5: Build Pain Map
+
+ For each account, use the extracted signals and LLM scoring context to build the buyer pain map.
+
+ ```bash
+ cat > /tmp/pain-map-analysis-request.json << 'ENDJSON'
+ {
+   "system_instruction": {
+     "parts": [{
+       "text": "You are a B2B pain analyst who reads job descriptions and identifies the operational pains a company is experiencing. Rules: (1) Every pain must have a short label, a 1-3 sentence description, and specific phrases quoted from the job post as supporting evidence. (2) Classify pains as primary (directly relevant to the user's product) or secondary (real pain but tangential to the product). (3) If a post is too generic to infer specific pain, state that explicitly — do not hallucinate. (4) For each account, also generate 1-3 recommended outreach angles. Each angle has: angle_name (short), narrative (1-2 sentences on how to lead), and talk_track_bullets (2-4 concrete talking points). (5) Outreach angles must reference specific pains and evidence, not generic value props. No banned words: synergy, leverage, innovative, cutting-edge, best-in-class, world-class, game-changing, disruptive, seamless, robust, comprehensive, revolutionize, transform, streamline. (6) Output valid JSON only."
+     }]
+   },
+   "contents": [{
+     "parts": [{
+       "text": "ANALYSIS_CONTEXT_HERE"
+     }]
+   }],
+   "generationConfig": {
+     "temperature": 0.4,
+     "maxOutputTokens": 8192
+   }
+ }
+ ENDJSON
+ curl -s -X POST \
+   "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
+   -H "Content-Type: application/json" \
+   -d @/tmp/pain-map-analysis-request.json \
+   | python3 -c "import sys,json; d=json.load(sys.stdin); print(d['candidates'][0]['content']['parts'][0]['text'])"
+ ```
+
+ Replace `ANALYSIS_CONTEXT_HERE` with:
+ - The product brief and ICP description
+ - Each company's job posts (full text)
+ - The scores and context from Step 4
+ - Account notes if provided
+ - The focus_dimension preference (pipeline / positioning / both)
+ - Instructions to output per-account: `buyer_pain_map` (primary_pains, secondary_pains) and `recommended_outreach_angles`
+
+ Read `references/examples.md` to calibrate the expected output quality and depth.
+
+ ---
+
+ ## Step 6: Build Handoff Object
+
+ For each account, generate a structured handoff block that `outreach-sequence-builder` can consume directly:
+
+ ```
+ handoff:
+   for_outreach_sequence_builder:
+     account_summary: 2-3 sentence recap of the company's situation and hiring context
+     key_pain: The single most actionable pain in one sentence
+     suggested_personas: 2-4 job titles of the people most likely to own this pain
+     tone: One sentence describing the recommended outreach tone
+ ```
+
+ The handoff must be concrete enough that someone could paste it into `outreach-sequence-builder`'s prompt and get a usable sequence without re-researching the account.
+
+ ---
+
+ ## Step 7: Self-QA
+
+ Run every check before presenting:
+
+ - [ ] Every inferred pain cites at least one specific phrase from the job description
+ - [ ] No pains hallucinated for generic/low-signal posts (check posts with signal_strength ≤ 3)
+ - [ ] All three scores (signal_strength, urgency, icp_fit) have a one-sentence explanation
+ - [ ] Overall score matches the formula: `round((0.4 × signal + 0.3 × urgency + 0.3 × icp_fit) × 10)`
+ - [ ] Buy-vs-build label is one of: "Leaning build", "Leaning buy", "Hybrid (buy-and-build)", "Unknown"
+ - [ ] No banned words in outreach angles: synergy, leverage, innovative, cutting-edge, best-in-class, world-class, game-changing, disruptive, seamless, robust, comprehensive, revolutionize, transform, streamline
+ - [ ] Each outreach angle has 2-4 talk-track bullets, not generic value statements
+ - [ ] Out-of-ICP accounts are flagged with a clear recommendation to deprioritize
+ - [ ] Handoff objects are complete with all 4 fields (account_summary, key_pain, suggested_personas, tone)
+ - [ ] No personal attributes or protected characteristics inferred about candidates
+ - [ ] Posts grouped by company when multiple posts are from the same company
+
+ Fix any violation before presenting.
+
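Two of the mechanical checks in this list (handoff completeness and banned words) can be expressed as a quick sketch (a hypothetical helper; the agent normally runs this checklist by inspection, not by executing code):

```python
# Hypothetical helper for two mechanical Self-QA checks; the agent
# normally verifies the checklist by inspection rather than code.
BANNED_WORDS = {
    "synergy", "leverage", "innovative", "cutting-edge", "best-in-class",
    "world-class", "game-changing", "disruptive", "seamless", "robust",
    "comprehensive", "revolutionize", "transform", "streamline",
}
HANDOFF_FIELDS = ("account_summary", "key_pain", "suggested_personas", "tone")

def qa_violations(handoff: dict, angle_text: str) -> list[str]:
    """Return a list of violation messages; an empty list means both checks pass."""
    issues = [f"handoff missing field: {f}" for f in HANDOFF_FIELDS if not handoff.get(f)]
    lowered = angle_text.lower()
    issues += [f"banned word in outreach angle: {w}"
               for w in sorted(BANNED_WORDS) if w in lowered]
    return issues
```

The substring match is deliberately strict: it also flags words embedded in longer tokens, which errs on the side of rewriting the angle.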
+ ---
+
+ ## Step 8: Output and Save
+
+ ### Human-readable output
+
+ Present the full analysis in this format:
+
+ ```
+ ## Buyer Pain Map — [YYYY-MM-DD]
+
+ **Product:** [product name from brief]
+ **Posts analyzed:** X posts across Y companies
+ **Focus:** [pipeline / positioning / both]
+
+ ---
+
+ ### [Company Name] — Score: [N]/100
+
+ **Headline:** [one-line summary of what the hiring reveals]
+
+ | Dimension | Score | Explanation |
+ |-----------|-------|-------------|
+ | Signal Strength | N/10 | [one sentence] |
+ | Urgency | N/10 | [one sentence] |
+ | ICP Fit | N/10 | [one sentence] |
+
+ **Stage:** [stage guess] | **Buy-vs-Build:** [posture label]
+
+ #### Primary Pains
+ - **[Pain label]:** [1-3 sentence description]
+   - Evidence: "[quoted phrase from job post]"
+
+ #### Secondary Pains
+ - **[Pain label]:** [1-3 sentence description]
+   - Evidence: "[quoted phrase from job post]"
+
+ #### Recommended Outreach Angles
+ 1. **[Angle name]:** [narrative]
+    - [bullet 1]
+    - [bullet 2]
+    - [bullet 3]
+
+ #### Handoff → outreach-sequence-builder
+ > **Summary:** [2-3 sentences]
+ > **Key pain:** [one sentence]
+ > **Suggested personas:** [list]
+ > **Tone:** [one sentence]
+
+ ---
+
+ [repeat for each company, ordered by overall_score descending]
+ ```
+
+ ### Save to file
+
+ ```bash
+ mkdir -p docs/pain-maps
+ OUTFILE="docs/pain-maps/$(date +%Y-%m-%d).md"
+ cat > "$OUTFILE" << 'EOF'
+ REPORT_CONTENT_HERE
+ EOF
+ echo "Pain map saved to $OUTFILE"
+ ```
+
+ If multiple companies are analyzed, also save individual files using slugified company names (lowercase, replace runs of non-alphanumeric characters with a single hyphen, strip trailing hyphens):
+
+ e.g., "ACME, Inc." → `acme-inc.md`, "CoolStartup Inc" → `coolstartup-inc.md`
+
+ ```bash
+ SLUG=$(echo "COMPANY_NAME" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/-$//')
+ cat > "docs/pain-maps/${SLUG}.md" << 'EOF'
+ INDIVIDUAL_ACCOUNT_CONTENT_HERE
+ EOF
+ ```
+
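The slug rules can be mirrored in Python to check expected filenames (an illustrative equivalent of the bash pipeline above; not part of the skill):

```python
import re

# Illustrative Python equivalent of the bash slug pipeline above.
def slugify(company_name: str) -> str:
    """Lowercase, map non-alphanumerics to hyphens, collapse runs, strip trailing hyphens."""
    slug = re.sub(r"[^a-z0-9]", "-", company_name.lower())
    slug = re.sub(r"-+", "-", slug)
    return slug.rstrip("-")

print(slugify("ACME, Inc."))       # acme-inc
print(slugify("CoolStartup Inc"))  # coolstartup-inc
```

Note one small difference: `rstrip("-")` removes any number of trailing hyphens, while the bash `sed 's/-$//'` removes only one; after the collapse step at most one can remain, so the results agree.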
+ ---
+
+ ## When to Use
+
+ - You have 1-15 pasted job descriptions and want to know what pain each company is feeling
+ - You are planning account-based outreach and need structured pain analysis before writing sequences
+ - You want to prioritize which hiring-signal accounts to pursue first
+ - A PMM wants real-world pain language from job posts to refine positioning
+
+ ## When NOT to Use
+
+ - You need to **find** companies that are hiring (use `linkedin-hiring-intent-scanner` or `yc-intent-radar-skill` instead)
+ - You need to **write outreach messages** (use `outreach-sequence-builder` instead — feed it the handoff from this skill)
+ - You want to **monitor** a platform for signals over time (use `reddit-icp-monitor` or `twitter-GTM-find-skill` instead)
+ - You need contact information or email enrichment (this skill does not provide that)
+
+ ## Plays Well With
+
+ - **outreach-sequence-builder**: Pass the `handoff.for_outreach_sequence_builder` block directly as context when building a sequence.
+ - **noise-to-linkedin-carousel**: Use the primary pains and evidence quotes as source material for a carousel about buyer problems.
+ - **reddit-icp-monitor**: Cross-reference pain themes from job posts with pain themes showing up in Reddit discussions.
@@ -0,0 +1,36 @@
+ {
+   "skill_name": "linkedin-job-post-to-buyer-pain-map",
+   "evals": [
+     {
+       "id": 1,
+       "prompt": "Analyze this job post for buyer pain. My product is Metabase, an open-source BI tool for non-technical teams. ICP: B2B SaaS, 50-500 employees, Series A-C, modern data stack. Job post: Acme Analytics (Series B, 120 employees) is hiring a Senior Data Platform Engineer to build a self-serve analytics layer for product managers and business leads. Responsibilities include owning ETL and dbt transformations, optimizing query performance for BI tools, and partnering with 6+ internal teams. Requirements: 5+ years with Snowflake, dbt, Airflow.",
+       "expected_output": "Agent checks GEMINI_API_KEY (set). Captures product brief (Metabase, open-source BI), ICP (B2B SaaS, 50-500, Series A-C, modern data stack), and job post. Extracts signals: team=Data Platform, seniority=Senior IC, emphasis=building new (self-serve analytics layer), stack hints=Snowflake, dbt, Airflow. Scores with the LLM: signal_strength 8-9 (explicit analytics pain, modern data stack match), urgency 7-8 (6+ teams partnering, scaling rapidly), icp_fit 8-9 (Series B, 120 employees, dbt+Snowflake). Overall score 80-90. Buy-vs-build: Hybrid (buy-and-build). Stage: Scaling post-PMF. Primary pain: data team bottleneck for business teams needing metrics access. Evidence cites 'self-serve analytics layer' and 'without pinging the data team'. Outreach angle references self-serve analytics build-out. Handoff block includes suggested personas (Head of Data, VP Product). Saves to docs/pain-maps/acme-analytics.md.",
+       "files": []
+     },
+     {
+       "id": 2,
+       "prompt": "Decode this job description for GTM signals. Product: Ramp, a sales onboarding platform. ICP: B2B SaaS, 100-1000 employees, 10+ sales reps, Series B+. Job post: CoolStartup Inc is hiring a Full Stack Engineer. 'We're looking for a rockstar full-stack engineer who loves building cool things. React, Node.js, PostgreSQL. Must be a team player in a fast-paced startup. We offer equity, unlimited PTO, and weekly team lunches.'",
+       "expected_output": "Agent checks GEMINI_API_KEY (set). Captures inputs. Extracts signals: team=Engineering, seniority=Mid-level IC, emphasis=generic (no clear pain), stack hints=React/Node/PostgreSQL (unrelated to sales onboarding). Scores: signal_strength 1-2 (no sales, onboarding, or GTM signals), urgency 1-2 (no urgency language, exploratory tone), icp_fit 1-2 (no company size, no sales team indicators). Overall score 10-20. Buy-vs-build: Unknown. Stage: Unknown. No primary pains inferred — agent explicitly states post is too generic to extract meaningful pain signals. Does not hallucinate pains. Recommends skipping this account. No outreach angles generated because signal strength is too low.",
+       "files": []
+     },
+     {
+       "id": 3,
+       "prompt": "Build a pain map from this hiring post. Product: Datadog, monitoring and observability for cloud-native apps. ICP: cloud-native tech companies, 200+ engineers, Kubernetes, multi-cloud, Series C+ or public. Job post: MedSecure Health Systems (2,000 employees, healthcare technology) is hiring a Director of Cloud Infrastructure to lead cloud migration of 200+ on-premise applications to AWS, build container orchestration capabilities, and establish monitoring and observability practices from scratch. Requirements: AWS, Terraform, Kubernetes, HIPAA compliance.",
+       "expected_output": "Agent checks GEMINI_API_KEY (set). Captures inputs. Extracts signals: team=IT Infrastructure, seniority=Director/Leadership, emphasis=building new (from scratch) and replacing (migrating on-prem), stack hints=AWS/Terraform/Kubernetes. Scores: signal_strength 7-9 (Director-level, building observability from scratch, Kubernetes), urgency 7-8 (200+ app migration, leadership hire), icp_fit 2-4 (healthcare not cloud-native tech, on-prem migrating not cloud-native). Overall score 45-60. Stage: Mature/Enterprise. Buy-vs-build: Leaning buy. Primary pain: no observability stack exists, migrating 200+ apps blind. Evidence cites 'establish monitoring and observability practices from scratch'. Agent flags account as OUTSIDE CORE ICP with recommendation to deprioritize unless healthcare is an expansion segment. Still provides outreach angle in case user wants to pursue.",
+       "files": []
+     },
+     {
+       "id": 4,
+       "prompt": "Analyze these job posts for buyer pain signals.",
+       "expected_output": "Agent detects GEMINI_API_KEY is missing from environment. Stops at Step 1. Tells the user: 'GEMINI_API_KEY is required. Get it at aistudio.google.com. Add it to your .env file.' Does not proceed to collect inputs or generate any analysis.",
+       "files": [],
+       "setup": "GEMINI_API_KEY not set in environment"
+     },
+     {
+       "id": 5,
+       "prompt": "Analyze these 3 job posts from the same company. Product: Gong, revenue intelligence platform. ICP: B2B SaaS, 200+ employees, 20+ sales reps, Series B+. Company: RapidScale (Series C, 450 employees). Post 1: VP of Sales — build and scale the outbound sales org from 15 to 50 reps over 18 months, establish sales methodology and playbooks, own pipeline generation strategy. Post 2: Revenue Operations Manager — implement and manage CRM workflows, build forecasting models, reduce manual pipeline reporting, first RevOps hire. Post 3: Senior Customer Success Manager — manage 40+ enterprise accounts, reduce churn in enterprise segment, build QBR process from scratch.",
+       "expected_output": "Agent checks GEMINI_API_KEY (set). Captures inputs. Groups all 3 posts under RapidScale. Extracts signals: Post 1 (GTM/Sales leadership, scaling 15→50 reps, building playbooks), Post 2 (RevOps, first hire, CRM + forecasting, manual pipeline reporting), Post 3 (CS, enterprise churn, QBR process). Identifies dominant theme: GTM infrastructure buildout across Sales + RevOps + CS. Scores: signal_strength 9-10 (multiple related hires, explicit GTM pain, VP-level), urgency 8-9 (scaling 15→50, first RevOps hire, enterprise churn), icp_fit 8-9 (Series C, 450 employees, 20+ reps). Overall score 85-95. Stage: Scaling post-PMF. Buy-vs-build: Hybrid (buy-and-build). Primary pains: (1) No sales methodology or playbooks for scaling team, (2) Manual pipeline reporting with no RevOps function, (3) Enterprise churn with no structured QBR process. Multiple outreach angles generated. Handoff block targets VP Sales and RevOps Manager as personas. Report groups all 3 signals under one company entry.",
+       "files": []
+     }
+   ]
+ }
@@ -0,0 +1,122 @@
+ # Worked Examples
+
+ This file provides 3 worked examples showing how the skill interprets different types of job posts. The agent reads this file to calibrate its analysis quality.
+
+ ---
+
+ ## Example 1: High Signal, High ICP Fit
+
+ ### Input
+
+ **Product brief:** "Metabase — open-source BI tool that lets non-technical teams query data and build dashboards without SQL. Target: Series A–C startups with growing data teams."
+
+ **ICP description:** "B2B SaaS companies, 50–500 employees, Series A–C, have at least 2 data engineers, use modern data stack (dbt, Snowflake, BigQuery), need self-serve analytics for business teams."
+
+ **Job post:**
+ ```json
+ {
+   "company_name": "Acme Analytics",
+   "job_title": "Senior Data Platform Engineer",
+   "location": "Remote, US",
+   "seniority": "Senior / IC",
+   "team_or_function": "Data Platform / Infra",
+   "job_description_text": "We're looking for a Senior Data Platform Engineer to own our analytics infrastructure. You'll build a self-serve analytics layer so product managers and business leads can access metrics without pinging the data team. Responsibilities: own ETL pipelines and dbt transformations, optimize query performance for our BI tools, partner with 6+ internal teams to define metrics, and ensure data reliability for executive dashboards. Requirements: 5+ years with modern data stack (Snowflake, dbt, Airflow), experience building self-serve data products, strong SQL. We are a Series B company with 120 employees scaling rapidly."
+ }
+ ```
+
+ ### Expected Output
+
+ **Summary:**
+ - Headline: "Staffing data infra to unblock self-serve analytics for non-technical teams"
+ - Overall score: 87
+ - Signal strength: 9 (explicit analytics pain + modern data stack + building self-serve layer)
+ - Urgency: 8 (partnering with 6+ teams suggests backlog, "scaling rapidly")
+ - ICP fit: 9 (Series B, 120 employees, dbt + Snowflake, self-serve analytics need)
+ - Stage guess: Scaling post-PMF
+ - Buy-vs-build: Hybrid (buy-and-build)
+   - _Reasoning: building infrastructure + likely to buy BI tooling_
+
+ **Primary pain:** "Data team is a bottleneck for business teams needing metrics access. PMs and leads are blocked by ad-hoc data requests."
+
+ **Evidence cited:** "product managers and business leads can access metrics without pinging the data team", "partner with 6+ internal teams"
+
+ **Outreach angle:** "Reduce ad-hoc data fire drills — lead with how Metabase gives PMs self-serve dashboards so the data team stops being a ticket queue."
+
+ ---
+
+ ## Example 2: Low Signal, Generic Post
+
+ ### Input
+
+ **Product brief:** "Ramp — sales onboarding platform for B2B SaaS. Helps VP Sales cut new-rep ramp time."
+
+ **ICP description:** "B2B SaaS, 100–1000 employees, at least 10 sales reps, hiring SDRs/AEs, Series B+."
+
+ **Job post:**
+ ```json
+ {
+   "company_name": "CoolStartup Inc",
+   "job_title": "Full Stack Engineer",
+   "location": "San Francisco, CA",
+   "seniority": "Mid-level / IC",
+   "team_or_function": "Engineering",
+   "job_description_text": "We're looking for a rockstar full-stack engineer who loves building cool things. You'll work on our product using React, Node.js, and PostgreSQL. Must be a team player who thrives in a fast-paced startup environment. We offer competitive salary, equity, unlimited PTO, and weekly team lunches. Come join our amazing culture!"
+ }
+ ```
+
+ ### Expected Output
+
+ **Summary:**
+ - Headline: "Generic engineering hire with no relevant GTM or sales signals"
+ - Overall score: 16
+ - Signal strength: 1 (no sales, onboarding, or GTM signals; engineering role unrelated to product area)
+ - Urgency: 2 (no urgency language, exploratory tone)
+ - ICP fit: 2 (no company size, no sales team indicators, no relevant function)
+ - Stage guess: Unknown (no indicators)
+ - Buy-vs-build: Unknown
+
+ **Primary pain:** None identified. Post is too generic to infer operational pain relevant to sales onboarding.
+
+ **Note to user:** "This post contains minimal signal for your ICP. The role is a general engineering hire with no references to sales operations, onboarding, or GTM infrastructure. Recommend skipping."
+
+ ---
+
+ ## Example 3: Decent Pain, Out-of-ICP
+
+ ### Input
+
+ **Product brief:** "Datadog — monitoring and observability platform for cloud-native applications. Target: tech companies running Kubernetes in production."
+
+ **ICP description:** "Cloud-native tech companies, 200+ engineers, running Kubernetes/containers, multi-cloud, Series C+ or public."
+
+ **Job post:**
+ ```json
+ {
+   "company_name": "MedSecure Health Systems",
96
+ "job_title": "Director of Cloud Infrastructure",
97
+ "location": "Chicago, IL",
98
+ "seniority": "Director / Leadership",
99
+ "team_or_function": "IT Infrastructure",
100
+ "job_description_text": "MedSecure Health Systems is seeking a Director of Cloud Infrastructure to lead our cloud migration initiative. You will oversee the transition of 200+ on-premise applications to AWS, build container orchestration capabilities, and establish monitoring and observability practices from scratch. Manage a team of 8 infrastructure engineers. Requirements: 10+ years in infrastructure, experience with AWS, Terraform, container orchestration (Kubernetes preferred), HIPAA compliance expertise required. We are a 2,000-employee healthcare technology company."
101
+ }
102
+ ```
103
+
104
+ ### Expected Output
105
+
106
+ **Summary:**
107
+ - Headline: "Healthcare company building cloud infra and observability from scratch"
108
+ - Overall score: 62
109
+ - Signal strength: 8 (Director-level, building observability from scratch, Kubernetes, AWS migration)
110
+ - Urgency: 7 (200+ app migration, building from scratch, leadership hire)
111
+ - ICP fit: 3 (healthcare, not cloud-native tech company; on-prem migrating to cloud, not cloud-native)
112
+ - Stage guess: Mature / Enterprise
113
+ - Buy-vs-build: Leaning buy
114
+ - _Reasoning: "establish monitoring and observability practices from scratch" suggests they need to adopt tooling_
115
+
116
+ **Primary pain:** "No monitoring or observability stack exists. Migrating 200+ on-prem apps to AWS with zero cloud-native visibility."
117
+
118
+ **Evidence cited:** "establish monitoring and observability practices from scratch", "transition of 200+ on-premise applications to AWS"
119
+
120
+ **ICP flag:** ⚠️ Outside core ICP — healthcare vertical, migrating from on-prem (not cloud-native). The pain is real and product-relevant, but the company profile does not match the ideal customer profile. Consider deprioritizing unless healthcare is an expansion segment.
121
+
122
+ **Outreach angle:** "If pursued: lead with migration observability — reference the 200-app migration and offer a proof-of-value on the first 10 apps moved to AWS."
@@ -0,0 +1,120 @@
1
+ # Scoring Rubric
2
+
3
+ This file defines the scoring model used by the linkedin-job-post-to-buyer-pain-map skill. The agent reads this file during Step 4 to evaluate each account.
4
+
5
+ ---
6
+
7
+ ## Three Scoring Dimensions
8
+
9
+ ### Signal Strength (1–10)
10
+
11
+ Measures the number and specificity of hiring signals related to the user's product area.
12
+
13
+ **Scoring rules:**
14
+
15
+ | Condition | Modifier |
16
+ |-----------|----------|
17
+ | Multiple related hires on the same team or problem area | +3 |
18
+ | Explicit language about a pain area the user's product addresses | +2 |
19
+ | Senior role (Head / Director / VP) in a relevant function | +1 |
20
+ | Role responsibilities reference building, replacing, or fixing a system in the product's category | +1 |
21
+ | "First X hire" or "0→1" language indicating new team formation | +1 |
22
+ | Generic "nice-to-have" requirements with no clear pain signal | −1 |
23
+ | Job post is mostly a list of perks and culture statements with minimal responsibilities | −2 |
24
+
25
+ **Floor:** 1. **Ceiling:** 10. Clamp after summing.
26
+
27
+ **Base:** Start at 3 for any post that mentions a function relevant to the user's ICP.
28
+
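The base-plus-modifiers-then-clamp procedure above can be sketched directly. A minimal Python illustration, assuming hypothetical boolean flags extracted from the post (the flag names are ours, not part of the skill's schema):

```python
def signal_strength(post_flags):
    """Apply the Signal Strength rubric: base 3, sum modifiers, clamp to 1-10."""
    # Modifiers from the table above; flag names are illustrative.
    modifiers = {
        "multiple_related_hires": 3,
        "explicit_pain_language": 2,
        "senior_relevant_role": 1,
        "builds_or_replaces_system": 1,
        "first_hire_language": 1,
        "generic_nice_to_haves": -1,
        "perks_heavy_post": -2,
    }
    score = 3  # base: post mentions a function relevant to the user's ICP
    score += sum(modifiers.get(flag, 0) for flag in post_flags)
    return max(1, min(10, score))  # floor 1, ceiling 10, clamped after summing

# Explicit pain language plus a senior relevant hire: 3 + 2 + 1 = 6
print(signal_strength({"explicit_pain_language", "senior_relevant_role"}))
```

The Urgency and ICP Fit rubrics below follow the same shape, each with its own modifier table, base value, and the same 1–10 clamp.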
29
+ ---
30
+
31
+ ### Urgency (1–10)
32
+
33
+ Measures how time-sensitive the hiring need appears from the text.
34
+
35
+ **Scoring rules:**
36
+
37
+ | Condition | Modifier |
38
+ |-----------|----------|
39
+ | Explicit urgency language: "immediate", "ASAP", "critical hire", "backfill" | +3 |
40
+ | Multiple open roles in the same function at the same company | +2 |
41
+ | Language suggesting firefighting: "stabilize", "fix", "unblock", "reduce downtime" | +2 |
42
+ | Role described as replacement for a departing leader | +1 |
43
+ | "Nice to have" tone, exploratory language ("we're thinking about...") | −2 |
44
+ | No urgency indicators at all | −1 |
45
+
46
+ **Floor:** 1. **Ceiling:** 10. Clamp after summing.
47
+
48
+ **Base:** Start at 4 for any role at senior-IC level or above with a defined scope.
49
+
50
+ ---
51
+
52
+ ### ICP Fit (1–10)
53
+
54
+ Measures how closely the company profile in the job post matches the user's `icp_description`.
55
+
56
+ **Scoring rules:**
57
+
58
+ | Condition | Modifier |
59
+ |-----------|----------|
60
+ | Industry matches ICP exactly | +3 |
61
+ | Company size indicators match (headcount, "Series X", "enterprise") | +2 |
62
+ | Tech stack or tool references overlap with ICP tech stack | +2 |
63
+ | Role function matches ICP target persona function | +1 |
64
+ | Industry is adjacent but not exact match | +1 |
65
+ | Industry is clearly outside ICP (e.g., government when ICP is B2B SaaS) | −3 |
66
+ | Company size indicators suggest a segment the user does not serve | −2 |
67
+
68
+ **Floor:** 1. **Ceiling:** 10. Clamp after summing.
69
+
70
+ **Base:** Start at 3 for any post where industry cannot be determined.
71
+
72
+ ---
73
+
74
+ ## Overall Score (10–100)
75
+
76
+ ```
77
+ overall = round((0.4 × signal_strength + 0.3 × urgency + 0.3 × icp_fit) × 10)
78
+ ```
79
+
80
+ Each dimension is 1–10, so the raw weighted sum ranges from 1.0 to 10.0; multiplying by 10 yields the 10–100 scale.
81
+
82
+ **Interpretation guide:**
83
+
84
+ | Range | Label | Recommendation |
85
+ |-------|-------|---------------|
86
+ | 80–100 | Strong signal | Prioritize immediately. Build pain map and outreach angles. |
87
+ | 60–79 | Moderate signal | Worth analyzing. Pain map likely has actionable pains. |
88
+ | 40–59 | Weak signal | Low priority. Limited pain signal. |
89
+ | 10–39 | Noise | Skip or deprioritize. Post is too generic or outside ICP. |
90
+
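Taken together, the formula and the interpretation guide can be sketched as follows (a minimal illustration; the function names are ours, not part of the skill):

```python
def overall_score(signal_strength, urgency, icp_fit):
    # Weighted blend of the three 1-10 dimensions, scaled to the 10-100 range.
    return round((0.4 * signal_strength + 0.3 * urgency + 0.3 * icp_fit) * 10)

def interpret(overall):
    # Thresholds from the interpretation guide above.
    if overall >= 80:
        return "Strong signal"
    if overall >= 60:
        return "Moderate signal"
    if overall >= 40:
        return "Weak signal"
    return "Noise"

# Dimension scores of 9, 8, and 9 yield 87: Strong signal.
print(overall_score(9, 8, 9), interpret(overall_score(9, 8, 9)))
```

These values reproduce the worked examples earlier in this package: 9/8/9 gives 87, 8/7/3 gives 62, and 1/2/2 gives 16.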
91
+ ---
92
+
93
+ ## Buy-vs-Build Inference
94
+
95
+ The skill infers the company's likely posture toward buying vs building based on language patterns in the job description.
96
+
97
+ | Language Pattern | Inferred Posture |
98
+ |-----------------|-----------------|
99
+ | "Build from scratch", "0→1", "greenfield", "design and implement new" | **Leaning build** |
100
+ | "Evaluate and implement", "select vendor", "integrate third-party" | **Leaning buy** |
101
+ | "Replace legacy", "migrate from", "modernize" | **Leaning buy** (replacing existing) |
102
+ | "Own end-to-end", "build and maintain" + "integrate tools" | **Hybrid (buy-and-build)** |
103
+ | "Scale existing", "optimize current" | **Hybrid** — already built, may buy to augment |
104
+ | No clear signals | **Unknown** — do not guess |
105
+
106
+ Output as a text label, not a numeric score.
107
+
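A first-pass classifier over these patterns could look like the sketch below. The keyword lists are a rough distillation of the table, not the skill's actual implementation, and the combination rule for the "own end-to-end" hybrid row is omitted for brevity:

```python
import re

# Ordered (pattern, posture) pairs distilled from the table above.
POSTURE_PATTERNS = [
    (r"from scratch|0\s*(?:→|->)\s*1|greenfield|design and implement new", "Leaning build"),
    (r"evaluate and implement|select vendor|integrate third-party", "Leaning buy"),
    (r"replace legacy|migrate from|modernize", "Leaning buy"),
    (r"scale existing|optimize current", "Hybrid"),
]

def buy_vs_build(job_text):
    text = job_text.lower()
    for pattern, posture in POSTURE_PATTERNS:
        if re.search(pattern, text):
            return posture
    return "Unknown"  # no clear signals: do not guess

print(buy_vs_build("Replace legacy tooling and modernize our data stack."))  # Leaning buy
```

In practice the skill's LLM weighs these cues in context; a keyword match is only a starting point, which is why ambiguous posts fall through to "Unknown" rather than a forced guess.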
108
+ ---
109
+
110
+ ## Stage Guess
111
+
112
+ Inferred from company context clues in the job description.
113
+
114
+ | Clue | Stage Guess |
115
+ |------|------------|
116
+ | "Series A", "seed", "early stage", < 50 employees | Pre-PMF or Early |
117
+ | "Series B/C", "scaling", "hypergrowth", 50–500 employees | Scaling post-PMF |
118
+ | "Enterprise", "Fortune 500", "publicly traded", > 500 employees | Mature / Enterprise |
119
+ | "First [role] hire", "founding team" | Very early |
120
+ | No clues available | "Unknown" |
@@ -1,13 +1,11 @@
1
- <div align="center">
2
- <img src="https://raw.githubusercontent.com/Varnan-Tech/opendirectory/main/assets/covers/tribe-hook-analyzer-cover.png" width="100%" alt="Meta Tribe Skill Cover" />
3
- </div>
4
-
5
1
  # Meta Tribe Skill
6
2
 
7
- A self-hosted OpenDirectory AI Skill that uses Meta's TRIBE v2 fMRI Model to analyze the neuroscience of video hooks, reels, and scripts.
3
+ AI Skill that uses Meta's TRIBE v2 fMRI Model to analyze the neuroscience of video hooks, reels, and scripts.
8
4
 
9
5
  Instead of guessing what makes a hook engaging using prompt engineering, this skill predicts actual human brain activity across the scientifically validated Yeo-7 Functional Networks, giving you an evidence-based Engagement Report for your content.
10
6
 
7
+ <img width="2752" height="1536" alt="meta-tribeV2-skill-cover-image" src="https://github.com/user-attachments/assets/7fc9b683-2c9a-416e-92c8-79a0f4f0228e" />
8
+
11
9
  ---
12
10
 
13
11
  ## What This Skill Does
@@ -21,6 +19,28 @@ This skill provides the infrastructure to host the massive 80GB TRIBE v2 model p
21
19
 
22
20
  ---
23
21
 
22
+ ## Install
23
+
24
+ ### Video Tutorial
25
+ Watch this quick video to see how it's done:
26
+
27
+ https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
28
+
29
+ ### Step 1: Download the skill from GitHub
30
+ 1. Copy the URL of this specific skill folder from your browser's address bar.
31
+ 2. Go to [download-directory.github.io](https://download-directory.github.io/).
32
+ 3. Paste the URL and click **Enter** to download.
33
+
34
+ ### Step 2: Install the Skill in Claude
35
+ 1. Open your **Claude desktop app**.
36
+ 2. Go to the sidebar on the left side and click on the **Customize** section.
37
+ 3. Click on the **Skills** tab, then click on the **+** (plus) icon button to create a new skill.
38
+ 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
39
+
40
+ > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
41
+
42
+ ---
43
+
24
44
  ## Deployment Options
25
45
 
26
46
  Because TRIBE v2 requires a massive amount of VRAM (24GB for text, up to 80GB for video), we offer 3 different deployment options so anyone can use it, regardless of budget or technical expertise.
@@ -81,23 +101,3 @@ The AI agent will read the raw API output and translate the neuroscience into pl
81
101
  * Limbic Network: Translated to "Does this make people feel something?". High Limbic means strong emotional response.
82
102
 
83
103
  Check out the [Results Showcase](results_showcase.md) for actual examples of Neuro-Marketing reports generated by this skill.
84
-
85
- ## Install
86
-
87
- ### Video Tutorial
88
- Watch this quick video to see how it's done:
89
-
90
- https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
91
-
92
- ### Step 1: Download the skill from GitHub
93
- 1. Copy the URL of this specific skill folder from your browser's address bar.
94
- 2. Go to [download-directory.github.io](https://download-directory.github.io/).
95
- 3. Paste the URL and click **Enter** to download.
96
-
97
- ### Step 2: Install the Skill in Claude
98
- 1. Open your **Claude desktop app**.
99
- 2. Go to the sidebar on the left side and click on the **Customize** section.
100
- 3. Click on the **Skills** tab, then click on the **+** (plus) icon button to create a new skill.
101
- 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
102
-
103
- > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!