@opendirectory.dev/skills 0.1.65 → 0.1.66
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/skills/blog-cover-image-cli/README.md +112 -1
- package/skills/brand-alchemy/README.md +31 -1
- package/skills/claude-md-generator/README.md +73 -1
- package/skills/cold-email-verifier/README.md +41 -1
- package/skills/competitor-pr-finder/README.md +69 -1
- package/skills/cook-the-blog/README.md +82 -1
- package/skills/dependency-update-bot/README.md +96 -1
- package/skills/docs-from-code/README.md +93 -1
- package/skills/email-newsletter/README.md +72 -1
- package/skills/explain-this-pr/README.md +69 -1
- package/skills/gh-issue-to-demand-signal/README.md +95 -4
- package/skills/google-trends-api-skills/README.md +74 -1
- package/skills/graphic-case-study/README.md +97 -3
- package/skills/graphic-chart/README.md +0 -19
- package/skills/graphic-ebook/README.md +99 -3
- package/skills/graphic-gif/README.md +0 -19
- package/skills/graphic-slide-deck/README.md +104 -2
- package/skills/hackernews-intel/README.md +156 -1
- package/skills/human-tone/README.md +43 -1
- package/skills/kill-the-standup/README.md +79 -1
- package/skills/linkedin-job-post-to-buyer-pain-map/README.md +3 -3
- package/skills/linkedin-post-generator/README.md +103 -1
- package/skills/llms-txt-generator/README.md +138 -1
- package/skills/luma-attendees-scraper/README.md +0 -21
- package/skills/map-your-market/README.md +121 -1
- package/skills/meeting-brief-generator/README.md +85 -1
- package/skills/meta-ads-skill/README.md +67 -1
- package/skills/meta-tribeV2-skill/README.md +64 -3
- package/skills/newsletter-digest/README.md +142 -1
- package/skills/noise-to-linkedin-carousel/README.md +0 -21
- package/skills/noise2blog/README.md +102 -1
- package/skills/npm-downloads-to-leads/README.md +131 -12
- package/skills/oss-launch-kit/README.md +0 -21
- package/skills/outreach-sequence-builder/README.md +103 -1
- package/skills/position-me/README.md +65 -1
- package/skills/pr-description-writer/README.md +76 -1
- package/skills/pricing-finder/README.md +114 -1
- package/skills/pricing-page-psychology-audit/README.md +85 -1
- package/skills/product-update-logger/README.md +172 -4
- package/skills/producthunt-launch-kit/README.md +90 -1
- package/skills/reddit-icp-monitor/README.md +112 -1
- package/skills/reddit-post-engine/README.md +98 -1
- package/skills/schema-markup-generator/README.md +109 -1
- package/skills/sdk-adoption-tracker/README.md +127 -1
- package/skills/show-hn-writer/README.md +83 -1
- package/skills/stargazer/README.md +0 -21
- package/skills/tweet-thread-from-blog/README.md +104 -1
- package/skills/twitter-GTM-find-skill/README.md +37 -1
- package/skills/vc-curated-match/README.md +0 -21
- package/skills/vc-finder/README.md +98 -5
- package/skills/vid-motion-graphics/README.md +65 -5
- package/skills/where-your-customer-lives/README.md +0 -19
- package/skills/yc-intent-radar-skill/README.md +35 -1
@@ -13,7 +13,7 @@ npx "@opendirectory.dev/skills" install linkedin-post-generator --target claude
 ### Video Tutorial
 Watch this quick video to see how it's done:
 
-https://github.com/user-attachments/assets/
+https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
 
 ### Step 1: Download the skill from GitHub
 1. Copy the URL of this specific skill folder from your browser's address bar.
@@ -27,3 +27,105 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## Post Styles
+
+| Style | Use When | Example Input |
+|-------|----------|---------------|
+| Founder/Ship | Personal story of building or shipping | "We merged our streaming SDK after 3 weeks" |
+| Insight | Educational observation or lesson | Blog article, pattern you noticed, lesson learned |
+| Product Launch | Announcing a new tool or feature | PR description, launch brief, feature going GA |
+| Tutorial Summary | Distilling a long technical post | Tutorial URL, deep-dive article, step-by-step guide |
+
+The agent auto-detects the right style. You override it by asking for a specific one.
+
+## Requirements
+
+No LLM API key needed. The agent writes the post.
+
+Composio is optional. Add it to post directly to LinkedIn.
+
+## Setup
+
+### Composio (Optional)
+
+Without Composio, the agent outputs formatted text for copy-paste. No configuration needed.
+
+To enable direct posting:
+1. Get your API key at https://app.composio.dev/settings
+2. Connect your LinkedIn account at https://app.composio.dev/app/linkedin
+3. Complete the OAuth flow
+4. Add the key to your .env file:
+
+```bash
+cp .env.example .env
+# Edit .env and add your COMPOSIO_API_KEY
+```
+
+## How to Use
+
+From a URL:
+```
+"Turn this into a LinkedIn post: https://yourblog.com/my-post"
+"Write a LinkedIn post about this article: [URL]"
+```
+
+From pasted text:
+```
+"Here's a case study, turn it into a LinkedIn post: [paste text]"
+"Convert this to a LinkedIn post: [paste article]"
+```
+
+From a PR or shipped feature:
+```
+"Write a LinkedIn post about this PR we merged: [paste PR description]"
+"Announce our new feature on LinkedIn: [describe the feature]"
+```
+
+With a style override:
+```
+"Write a LinkedIn post in Tutorial Summary style about: [topic]"
+"Use the Product Launch style for this: [description]"
+```
+
+With direct posting:
+```
+"Post this to LinkedIn: [paste content]"
+"Generate and post a LinkedIn update about [topic]"
+```
+
+## Output
+
+| Output | Description |
+|--------|-------------|
+| LinkedIn post | Formatted text, 900-1,300 characters, ready to publish |
+| First comment | Text with source links. Post this immediately after publishing. |
+| Hook alternatives | 2 additional hook lines in different formats |
+| Posted confirmation | If Composio is configured and you confirm posting |
+
+## How Posts Are Formatted
+
+Four rules drive every post the agent writes:
+
+- Hook first. The first line is all most people see. It works standalone.
+- Short paragraphs. 1-3 lines, then a blank line. LinkedIn is mobile-first.
+- Links in the first comment. URLs in the post body reduce LinkedIn's distribution.
+- Question or CTA at the end. One or the other, not both.
+
+## Project Structure
+
+```
+linkedin-post-generator/
+├── SKILL.md
+├── README.md
+├── .env.example
+├── evals/
+│   └── evals.json
+└── references/
+    ├── linkedin-format.md
+    └── output-template.md
+```
+
+## License
+
+MIT
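The four formatting rules and the 900-1,300 character target added to the linkedin-post-generator README above are mechanical enough to check automatically. A minimal sketch of such a checker follows; it is a hypothetical illustration, not part of the skill, and the rule thresholds are taken directly from the README.

```python
# Hypothetical validator for the linkedin-post-generator rules above:
# length 900-1,300 chars, paragraphs of at most 3 lines, no URLs in the
# post body, and source links in the first comment instead.

def check_post(post: str, first_comment: str) -> list[str]:
    """Return a list of rule violations; an empty list means the draft passes."""
    issues = []
    if not 900 <= len(post) <= 1300:
        issues.append(f"length {len(post)} outside 900-1300")
    for para in post.split("\n\n"):
        if len(para.splitlines()) > 3:
            issues.append("paragraph longer than 3 lines")
    if "http://" in post or "https://" in post:
        issues.append("URL in post body (move it to the first comment)")
    if first_comment and "http" not in first_comment:
        issues.append("first comment has no source link")
    return issues

draft = ("Hook line that stands alone.\n\n" + ("Body paragraph. " * 70)).strip()
print(check_post(draft, "Source: https://example.com/post"))  # → []
```

A draft that fails any rule comes back with a human-readable reason, which an agent could use to revise before publishing.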
@@ -13,7 +13,7 @@ npx "@opendirectory.dev/skills" install llms-txt-generator --target claude
 ### Video Tutorial
 Watch this quick video to see how it's done:
 
-https://github.com/user-attachments/assets/
+https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
 
 ### Step 1: Download the skill from GitHub
 1. Copy the URL of this specific skill folder from your browser's address bar.
@@ -27,3 +27,140 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## What It Does
+
+The skill crawls your website using Chrome DevTools, reads your actual pages, and produces a clean `llms.txt` file in the format specified by [Jeremy Howard's llms.txt standard](https://llmstxt.org). When AI agents (Claude, ChatGPT, Gemini) visit your site, they read `llms.txt` first to understand what you are and where to find authoritative content.
+
+**Without llms.txt:** AI agents guess, hallucinate, or cite competitors instead.
+**With llms.txt:** AI agents cite your product correctly and know exactly where your docs, blog, and key pages live.
+
+## Two Modes
+
+### Codebase Mode (no Chrome needed)
+If you're inside a website's repo, the skill reads your source files directly: pages, routes, blog posts, frontmatter, site config. It writes `llms.txt` straight to `public/` when you approve. No browser required.
+
+Supported frameworks: **Next.js** (pages + app router), **Astro**, **Nuxt**, **Gatsby**, **SvelteKit**, **Hugo**, **Jekyll**
+
+### Live Site Mode (Chrome or fetch fallback)
+If you only have the URL, the skill crawls the live site using Chrome DevTools MCP. It falls back to standard web fetch if Chrome isn't available.
+
+---
+
+## Requirements
+
+**Codebase Mode:** No extra setup. Just be inside the repo directory.
+
+**Live Site Mode:**
+- Chrome with remote debugging enabled (for any live URL, the skill falls back to web fetch if Chrome is unavailable)
+- Chrome DevTools MCP server configured in your agent (optional, improves results on JS-rendered sites)
+
+## Setup
+
+### 1. For Live Site Mode: Start Chrome with Remote Debugging
+
+**Mac:**
+```bash
+open -a "Google Chrome" --args --remote-debugging-port=9222
+```
+
+**Linux:**
+```bash
+google-chrome --remote-debugging-port=9222
+```
+
+**Windows:**
+```bash
+chrome.exe --remote-debugging-port=9222
+```
+
+### 2. Install Chrome DevTools MCP Server
+
+Follow the setup at: https://github.com/ChromeDevTools/chrome-devtools-mcp
+
+Add to your agent's MCP configuration:
+```json
+{
+  "mcpServers": {
+    "chrome-devtools": {
+      "command": "npx",
+      "args": ["-y", "@chrome-devtools/mcp-server"],
+      "env": {
+        "CHROME_DEBUGGING_PORT": "9222"
+      }
+    }
+  }
+}
+```
+
+### 3. Configure Environment (Optional)
+
+Copy `.env.example` to `.env` and fill in:
+```bash
+cp .env.example .env
+```
+
+`GITHUB_TOKEN` and `GITHUB_REPO` are only needed if you want the agent to automatically open a GitHub PR with the generated file.
+
+## How to Use
+
+**Codebase Mode:** just be inside your project and ask:
+```
+"Generate an llms.txt for this site"
+"Add llms.txt to this project"
+"Make this site readable by AI agents"
+```
+
+The agent will detect your framework, read your pages and blog posts from source, generate `llms.txt`, and write it to the right directory (e.g. `public/llms.txt`) after you confirm.
+
+**Live Site Mode:** provide a URL:
+```
+"Generate an llms.txt for https://yoursite.com"
+"Does https://yoursite.com have an llms.txt? If not, create one."
+```
+
+The agent will:
+1. Check if `llms.txt` already exists at the domain
+2. Crawl homepage, docs, blog, about, pricing, and API pages
+3. Generate `llms.txt` following the official spec
+4. Optionally generate `llms-full.txt` with full page content
+5. Save the file locally and give you deployment instructions
+6. Optionally open a GitHub PR if configured
+
+## Where to Deploy the File
+
+Place `llms.txt` at your web root so it's accessible at `https://yourdomain.com/llms.txt`:
+
+| Platform | File Location |
+|----------|--------------|
+| Next.js / Vercel | `/public/llms.txt` |
+| Astro | `/public/llms.txt` |
+| Nuxt | `/public/llms.txt` |
+| GitHub Pages | Repository root |
+| Hugo | `/static/llms.txt` |
+| WordPress | Upload via FTP to web root |
+
+## Output Files
+
+| File | Description |
+|------|-------------|
+| `llms.txt` | Structured link map. LLMs follow links to find content |
+| `llms-full.txt` | Full prose content of key pages. LLMs ingest everything at once |
+
+## Project Structure
+
+```
+llms-txt-generator/
+├── SKILL.md              # Agent instructions
+├── README.md             # This file
+├── .env.example          # Environment variables template
+├── evals/
+│   └── evals.json        # Test prompts for skill evaluation
+└── references/
+    ├── llms-txt-spec.md    # The llms.txt format specification
+    └── output-template.md  # Exact output template with example
+```
+
+## License
+
+MIT
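Per the llmstxt.org standard that the llms-txt-generator README above references, an `llms.txt` is a markdown file with an H1 project name, a blockquote summary, and H2 sections containing link lists (with an `## Optional` section for lower-priority pages). A minimal sketch of the output format, with all names and URLs hypothetical:

```markdown
# Acme Analytics

> Acme Analytics is a B2B analytics platform for ops teams. This file lists
> the authoritative pages for AI agents reading this site.

## Docs

- [Quickstart](https://acme.example/docs/quickstart): install and build a first dashboard
- [API Reference](https://acme.example/docs/api): REST endpoints and authentication

## Blog

- [Launch post](https://acme.example/blog/launch): what Acme does and why

## Optional

- [Pricing](https://acme.example/pricing): plans and limits
```

The generated `llms-full.txt` follows the same outline but inlines the full prose of each linked page.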
@@ -168,24 +168,3 @@ Some browsers block downloads from DevTools-triggered scripts until user interac
 ## Disclaimer
 
 Use this responsibly and only for events and attendee lists your account is authorized to access.
-
-
-## Install
-
-### Video Tutorial
-Watch this quick video to see how it's done:
-
-https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
-
-### Step 1: Download the skill from GitHub
-1. Copy the URL of this specific skill folder from your browser's address bar.
-2. Go to [download-directory.github.io](https://download-directory.github.io/).
-3. Paste the URL and click **Enter** to download.
-
-### Step 2: Install the Skill in Claude
-1. Open your **Claude desktop app**.
-2. Go to the sidebar on the left side and click on the **Customize** section.
-3. Click on the **Skills** tab, then click on the **+** (plus) icon button to create a new skill.
-4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
-
-> **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
@@ -11,7 +11,7 @@ npx "@opendirectory.dev/skills" install map-your-market --target claude
 ### Video Tutorial
 Watch this quick video to see how it's done:
 
-https://github.com/user-attachments/assets/
+https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
 
 ### Step 1: Download the skill from GitHub
 1. Copy the URL of this specific skill folder from your browser's address bar.
@@ -25,3 +25,123 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## What It Does
+
+- Accepts any combination of: product description, category keywords, competitor names
+- Auto-detects relevant subreddits from the category
+- Searches Reddit public JSON API for pain posts (top posts, last 12 months)
+- Searches Hacker News Algolia API for stories and Ask HN threads
+- Searches GitHub Issues on competitor repos for high-reaction complaints
+- Scrapes G2 category pages for vendor count and top products
+- Fetches Google Trends direction (up / flat / down) for the category
+- Scores every signal by source weight and recency (GitHub issues score 3x Reddit -- more deliberate signal)
+- Clusters top 60 signals into 5-7 named pain themes
+- Extracts ICP from subreddit and post metadata (not just content)
+- Generates a positioning framework with messaging angles using verbatim market language
+- Saves output to `docs/market-maps/[category]-[date].md` + JSON snapshot
+
+## Requirements
+
+| Requirement | Purpose | How to Set Up |
+|---|---|---|
+| GITHUB_TOKEN | Optional -- improves GitHub Issues rate limit from 60/hr to 5000/hr | github.com/settings/tokens (no scopes needed for public repos) |
+
+No other API keys required.
+
+## Setup
+
+```bash
+cp .env.example .env
+# Add GITHUB_TOKEN if you want higher GitHub rate limits
+```
+
+## How to Use
+
+```
+"Map my market: I build developer observability tools"
+"Who is my ICP? Competitors: Datadog, Grafana, New Relic"
+"What are the top pains in the HR software market?"
+"Find messaging angles for my B2B analytics tool"
+"Map the CRM market. What are people complaining about?"
+"I build payment APIs. Who should I be selling to?"
+```
+
+Include competitor names for richer GitHub Issues data. Include a product description for tailored messaging angles.
+
+## Why This Instead of Manual Research
+
+A founder doing this manually would spend 2-3 days:
+- Reading Reddit threads, taking notes
+- Scrolling HN "Ask HN" posts
+- Checking G2 review counts per vendor
+- Looking up Google Trends
+- Synthesizing into a document
+
+This skill does the same sweep in 3 minutes and returns verbatim quotes, not paraphrased summaries. The messaging framework uses the exact language your market uses -- not marketing copy you invented.
+
+## The Pain Score
+
+`pain_score = base * recency_factor`
+
+- GitHub issue reactions: `reactions * 3` -- a developer deliberately clicking +1 is the strongest signal
+- Reddit: `upvotes + (comments * 0.3)` -- upvotes count more than comments (comments include noise)
+- HN: `points + (comments * 0.3)` -- same structure
+
+Score tiers: critical (200+), high (50-199), medium (10-49), noise (<10, filtered out).
+
+## Velocity Tracking
+
+Run the skill every quarter. JSON snapshots in `docs/market-maps/` let you compare pain cluster rankings over time. A pain that was #3 last quarter and is #1 this quarter is accelerating -- a stronger positioning bet.
+
+## Cost Per Run
+
+- Reddit, HN, Google Trends: free, no auth
+- GitHub Issues: free with optional token
+- G2 scrape: free HTML fetch
+- AI analysis: uses the model already running the skill
+- Total: free
+
+## Standalone Script
+
+Run data collection without Claude. Useful when you want the raw signals first, then bring them to any AI for analysis.
+
+```bash
+# Basic usage
+python3 scripts/fetch.py "developer observability"
+
+# With competitors
+python3 scripts/fetch.py "developer observability" --competitors "Datadog,Grafana,New Relic"
+
+# With product context
+python3 scripts/fetch.py "B2B analytics" --context "We help ops teams track spend"
+
+# Print to stdout
+python3 scripts/fetch.py "devops tooling" --stdout | jq '.summary'
+
+# With GitHub token for higher rate limits
+GITHUB_TOKEN=your_token python3 scripts/fetch.py "CRM software" --competitors "Salesforce,HubSpot" --output results.json
+```
+
+The script writes a JSON file with all raw signals. Open that file with Claude and ask: "Generate a market map and positioning framework from this data."
+
+## Project Structure
+
+```
+map-your-market/
+├── SKILL.md
+├── README.md
+├── .env.example
+├── scripts/
+│   └── fetch.py          # standalone data collector
+├── evals/
+│   └── evals.json
+└── references/
+    ├── subreddit-map.md  # category to subreddit mapping
+    ├── pain-scoring.md   # scoring formula and tier thresholds
+    └── icp-signals.md    # how to extract ICP from post metadata
+```
+
+## License
+
+MIT
@@ -14,7 +14,7 @@ npx "@opendirectory.dev/skills" install meeting-brief-generator --target claude
 ### Video Tutorial
 Watch this quick video to see how it's done:
 
-https://github.com/user-attachments/assets/
+https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
 
 ### Step 1: Download the skill from GitHub
 1. Copy the URL of this specific skill folder from your browser's address bar.
@@ -28,3 +28,87 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## What It Does
+
+- Runs 6-8 targeted Tavily searches covering company overview, recent news, tech stack, product, competitors, funding, and contact background
+- Synthesizes results into a structured 1-page brief using Gemini
+- Every claim cites a source URL from the research
+- Optionally saves the brief to a Notion database
+- Handles low-data companies by marking gaps instead of inventing content
+
+## Requirements
+
+| Requirement | Purpose | How to Set Up |
+|------------|---------|--------------|
+| Tavily API key | Company research | app.tavily.com, API Keys |
+| Gemini API key | Brief synthesis | aistudio.google.com, Get API key |
+| Notion token (optional) | Saving briefs | notion.so/my-integrations |
+
+## Setup
+
+```bash
+cp .env.example .env
+```
+
+Fill in:
+- `TAVILY_API_KEY` (required)
+- `GEMINI_API_KEY` (required)
+- `NOTION_TOKEN` and `NOTION_DATABASE_ID` (optional, for saving briefs)
+
+## How to Use
+
+Basic brief with company only:
+
+```
+"Prepare me for a meeting with Stripe next Tuesday"
+"Generate a meeting brief for Vercel"
+"Research Acme Corp before my call tomorrow"
+```
+
+With contact and meeting type:
+
+```
+"Prepare a discovery call brief for Linear. I'm meeting with the VP Engineering, Jordan Lee."
+"Create a pre-call brief for Notion. Demo call on April 20."
+```
+
+Save to Notion:
+
+```
+"Generate a meeting brief for Figma and save it to Notion"
+```
+
+## Brief Sections
+
+| Section | Content |
+|---------|---------|
+| Company Snapshot | What they do, size, funding stage, HQ |
+| Recent News | Last 30 days, source URLs |
+| Decision Maker | Name, title, background (if contact provided) |
+| Tech Stack Signals | Tools spotted in job posts, blog, GitHub |
+| Competitive Context | Who they compete with and how |
+| Talking Points | Because/mention/to formula, 3-5 bullets |
+| Open Questions | Company-specific discovery questions |
+
+## Output Format
+
+One page, under 400 words. Every claim has a source URL. Talking points follow the format: "Because [finding from research], mention [point] to [goal]."
+
+## Project Structure
+
+```
+meeting-brief-generator/
+├── SKILL.md
+├── README.md
+├── .env.example
+├── evals/
+│   └── evals.json
+└── references/
+    ├── brief-format.md
+    └── output-template.md
+```
+
+## License
+
+MIT
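The "Because [finding], mention [point] to [goal]" talking-point formula in the meeting-brief-generator README above, together with the every-claim-has-a-source rule, can be sketched as a tiny formatter. This helper is a hypothetical illustration, not part of the skill; the field names are mine.

```python
# Hypothetical formatter for the README's talking-point formula:
# "Because [finding from research], mention [point] to [goal]."
# Per the README, every claim carries a source URL.
def talking_point(finding: str, point: str, goal: str, source_url: str) -> str:
    return f"Because {finding}, mention {point} to {goal}. (source: {source_url})"

tp = talking_point(
    "they announced a Series B last month",
    "how our plan scales with headcount",
    "position for the growth conversation",
    "https://example.com/news",
)
print(tp)
```

Generating 3-5 of these bullets from distinct research findings, each with its own URL, reproduces the Talking Points section of the brief.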
@@ -11,7 +11,7 @@ npx "@opendirectory.dev/skills" install meta-ads-skill --target claude
 ### Video Tutorial
 Watch this quick video to see how it's done:
 
-https://github.com/user-attachments/assets/
+https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
 
 ### Step 1: Download the skill from GitHub
 1. Copy the URL of this specific skill folder from your browser's address bar.
@@ -25,3 +25,69 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## Overview
+
+The **Meta Ads Skill** is a comprehensive, production-ready skill designed to give LLMs and AI agents expert-level capabilities to orchestrate the official **Meta Ads Python CLI**.
+
+By using this skill, an agent transforms into an **Expert Media Buyer**. It will know exactly how to explore ad structures, troubleshoot campaign performance (like CPA spikes), discover new audiences, and format massive Meta API JSON payloads into beautiful, readable markdown reports.
+
+---
+
+## For Agents: How to Use This Skill Efficiently
+
+This skill is designed using a **Progressive Disclosure (Hub-and-Spoke)** architecture to maximize context window efficiency:
+
+1. **The Hub (`SKILL.md`)**: The primary entry point. It provides strict guardrails, safety protocols, and the authentication troubleshooting workflow.
+2. **The Spokes (`references/`)**:
+   - When you need to perform a specific task (e.g., investigating a CPA spike), read `references/workflows.md` for the exact step-by-step orchestration strategy.
+   - When presenting data to the user, read `references/report_templates.md` to strictly follow the required Markdown layout.
+
+### Strict Agent Guardrails
+* **Context Protection**: ALWAYS default to `time_range="last_7d"` for insights. ALWAYS use `limit=10` for listing campaigns/adsets initially.
+* **Safety First**: NEVER execute state-changing tools (`create_campaign`, `update_campaign`) without explicitly showing the parameters to the user and waiting for their affirmative confirmation.
+
+---
+
+## Installation & Setup
+
+To use this skill, you must install the official Meta Ads CLI and configure your credentials.
+
+### 1. Install the CLI
+The skill relies on the `meta-ads` Python package:
+```bash
+pip install meta-ads
+```
+
+### 2. Authentication (System User Token)
+The CLI uses a **System User Access Token** for authentication.
+1. Generate a System User Token in your [Meta Business Suite](https://business.facebook.com/settings/system-users).
+2. Ensure the token has `ads_management`, `ads_read`, and `read_insights` permissions.
+3. Set the following environment variables on your machine:
+
+```bash
+export ACCESS_TOKEN="your_system_user_access_token"
+export AD_ACCOUNT_ID="act_your_ad_account_id"
+```
+
+---
+
+## Skill Repository Structure
+
+When you deploy this skill, the structure will look like this:
+
+```text
+meta-ads-skill/
+SKILL.md                  # The core router & guardrails
+references/
+  report_templates.md     # Standardized markdown report structures
+  workflows.md            # Orchestration strategies (e.g., CPA troubleshooting)
+```
+
+## Supported Commands
+
+This skill orchestrates the `meta-ads` CLI using a noun-verb structure:
+* **Campaigns**: `meta ads campaign list`, `meta ads campaign create`
+* **Ad Sets**: `meta ads adset list`
+* **Ads**: `meta ads ad list`
+* **Insights**: `meta ads insights get`
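The "Safety First" guardrail in the meta-ads-skill README above (never run state-changing tools without confirmation) can be sketched as a gate in front of the CLI. The verb list below is an assumption extrapolated from the commands the README names (`create`, `update`); it is an illustration, not part of the skill.

```python
# Sketch of the README's safety guardrail: state-changing meta-ads CLI
# verbs must be confirmed by the user before execution.
# Assumption: the verb set is inferred from the README's examples.
STATE_CHANGING_VERBS = {"create", "update", "delete", "pause"}

def requires_confirmation(command: str) -> bool:
    """True if the noun-verb command mutates account state,
    e.g. 'meta ads campaign create'."""
    return any(tok in STATE_CHANGING_VERBS for tok in command.split())

print(requires_confirmation("meta ads campaign list"))    # → False (read-only)
print(requires_confirmation("meta ads campaign create"))  # → True (confirm first)
```

An agent wrapper would show the full parameters and wait for an affirmative reply whenever this returns True, and run read-only commands (list, get) directly.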
@@ -21,12 +21,10 @@ This skill provides the infrastructure to host the massive 80GB TRIBE v2 model p
|
|
|
21
21
|
|
|
22
22
|
## Install

### Video Tutorial
Watch this quick video to see how it's done:

https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067

### Step 1: Download the skill from GitHub
1. Copy the URL of this specific skill folder from your browser's address bar.

https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c

4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).

> **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!

---

## Deployment Options

Because TRIBE v2 requires a massive amount of VRAM (24GB for text, up to 80GB for video), we offer 3 different deployment options so anyone can use it, regardless of budget or technical expertise.

### 1. Google Colab (Zero Cost, Decoupled)
Best for users without a cloud budget. Colab provides free T4 GPUs.
* How it works: We use a decoupled architecture. You run the heavy AI inference in a Colab notebook, which outputs a `preds.npy` prediction file. You then run a local script on your laptop to generate the report.
* Setup:
  1. Open Google Colab and upload the script from `scripts/colab_inference.py` into a new notebook.
  2. Run the notebook. It will output `preds.npy` and `segments.json`.
  3. Download those files to your machine and run `python scripts/local_analyze.py --preds preds.npy`. This outputs a text report and an ASCII terminal graph showing the engagement peaks and valleys.
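The exact logic of `local_analyze.py` isn't shown here, but the peak/valley idea is straightforward. A minimal sketch, assuming `preds.npy` holds a 1-D engagement time series (one score per video segment):

```python
import numpy as np

def find_peaks_and_valleys(preds):
    """Return indices where a 1-D engagement series turns down (peaks) or up (valleys)."""
    d = np.diff(preds)
    peaks = [i for i in range(1, len(preds) - 1) if d[i - 1] > 0 and d[i] < 0]
    valleys = [i for i in range(1, len(preds) - 1) if d[i - 1] < 0 and d[i] > 0]
    return peaks, valleys

# Hypothetical engagement scores per video segment.
preds = np.array([0.2, 0.5, 0.9, 0.4, 0.1, 0.3, 0.7])
peaks, valleys = find_peaks_and_valleys(preds)
print(peaks, valleys)  # [2] [4]
```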

### 2. RunPod (Serverless, Pay-per-second)
Best for production agents and developers. You only pay for the seconds the model is running.
* How it works: We provide a RunPod handler and a custom Dockerfile that caches the 80GB model inside the image.
* Setup:
  1. Build the Docker image from `server/Dockerfile.runpod`: `docker build -f server/Dockerfile.runpod -t tribe-runpod .`
  2. Push the image to Docker Hub or GHCR.
  3. Create a new RunPod Serverless Endpoint using your image URL.
  4. Point your AI agent to your RunPod endpoint URL.
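The handler's input schema isn't documented here, so the following sketch only shows the general shape of a RunPod serverless call: requests go to the endpoint's `runsync` URL with a JSON body wrapped in an `input` key. The endpoint ID and the `video_url` field are placeholders; check your handler for the actual schema:

```python
import json

def build_runsync_request(endpoint_id, video_url):
    """Build the URL and JSON body for a synchronous RunPod serverless call."""
    url = f"https://api.runpod.ai/v2/{endpoint_id}/runsync"
    body = json.dumps({"input": {"video_url": video_url}})
    return url, body

url, body = build_runsync_request("your-endpoint-id", "https://example.com/ad.mp4")
print(url)
print(body)
```

Send it with any HTTP client, passing your RunPod API key as a `Bearer` token in the `Authorization` header.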

### 3. AWS EC2 Persistent (Enterprise, BYO-Compute)
Best for heavy, continuous usage.
* How it works: Automatically provisions an AWS `g5.12xlarge` instance (4x A10G GPUs) and runs a FastAPI server.
* Setup:
  1. Ensure your AWS account has a vCPU quota of at least 48 for "Running On-Demand G and VT instances".
  2. Run `bash scripts/launch_persistent.sh` to provision the instance.
  3. Run `export HF_TOKEN="your_token"` followed by `bash scripts/deploy_to_persistent.sh` to build and launch the Docker API.

#### AWS GPU Lifecycle & Estimated Costs
Running the `g5.12xlarge` instance (4x A10G GPUs) provides incredible speed but costs **$7.09 per hour** on On-Demand pricing, so it is crucial to manage its lifecycle.
1. **Launch:** Run `bash scripts/launch_persistent.sh` (takes ~3 minutes).
2. **Analyze:** Run your videos through the API.
3. **Terminate:** When you are completely finished for the day, you MUST terminate the instance to stop billing.
   - Run `aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"` to find your Instance ID.
   - Run `aws ec2 terminate-instances --instance-ids <YOUR_INSTANCE_ID>`.
   - *Do not just "stop" the instance: a stopped instance still bills for its EBS volume storage overnight. Terminate it.*
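To make the stakes concrete, here is a quick back-of-the-envelope estimate using the $7.09/hour rate quoted above (On-Demand pricing varies by region, so treat the numbers as illustrative):

```python
HOURLY_RATE_USD = 7.09  # g5.12xlarge On-Demand rate quoted above

def session_cost(minutes_running) -> float:
    """Estimated On-Demand cost in USD for a g5.12xlarge session, rounded to cents."""
    return round(HOURLY_RATE_USD * minutes_running / 60, 2)

print(session_cost(60))      # one hour of analysis: 7.09
print(session_cost(8 * 60))  # a workday left running by mistake: 56.72
```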

---

## HuggingFace Authentication (Required for all methods)

TRIBE v2 relies on `meta-llama/Llama-3.2-3B`, which is a gated model.
1. Create a HuggingFace account.
2. Go to the Llama 3.2 3B page and the TRIBE v2 page and agree to Meta's license terms.
3. Generate a HuggingFace Access Token (Read permissions) at huggingface.co/settings/tokens.
4. Supply this token via the `HF_TOKEN` environment variable.
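A quick local check before launching any of the deployments can save a failed 80GB download. An illustrative snippet (not part of the skill) that verifies the token looks plausible; HuggingFace access tokens normally start with the `hf_` prefix:

```python
import os

def hf_token_ok(token) -> bool:
    """Rough plausibility check: HuggingFace access tokens start with 'hf_'."""
    return bool(token) and token.startswith("hf_")

print(hf_token_ok(os.environ.get("HF_TOKEN")))  # False unless HF_TOKEN is set
print(hf_token_ok("hf_example_read_token"))     # True
```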

---

## The Neuroscience of the Engagement Report

The AI agent will read the raw API output and translate the neuroscience into plain English for you:

* VAN (Ventral Attention Network): translated to "Is this surprising enough to stop a scroll?" High VAN means the content is novel and creates a pattern interrupt.
* DMN (Default Mode Network): translated to "Will people get bored and tune out?" High DMN is bad: it means the brain is wandering. The AI uses this to identify "Cut Candidates" in your video.
* DAN (Dorsal Attention Network): translated to "Are people actively following along?" High DAN means strong logical focus.
* Limbic Network: translated to "Does this make people feel something?" High Limbic means a strong emotional response.
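The translation step above can be sketched as a simple mapping from network scores to report language. The 0-to-1 scores and the 0.6 threshold are illustrative assumptions, not the skill's actual output format:

```python
def interpret_networks(scores, threshold=0.6):
    """Translate network activation scores into plain-English report notes.

    `scores` maps network names (VAN, DMN, DAN, Limbic) to hypothetical
    0-to-1 activations; the threshold is an assumption for illustration.
    """
    notes = []
    if scores.get("VAN", 0) > threshold:
        notes.append("Pattern interrupt: surprising enough to stop a scroll")
    if scores.get("DMN", 0) > threshold:
        notes.append("Cut candidate: viewers' minds are wandering")
    if scores.get("DAN", 0) > threshold:
        notes.append("Strong logical focus: viewers are following along")
    if scores.get("Limbic", 0) > threshold:
        notes.append("Strong emotional response")
    return notes

print(interpret_networks({"VAN": 0.8, "DMN": 0.2, "DAN": 0.7, "Limbic": 0.3}))
```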

Check out the [Results Showcase](results_showcase.md) for actual examples of Neuro-Marketing reports generated by this skill.
|