@opendirectory.dev/skills 0.1.65 → 0.1.67
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/skills/blog-cover-image-cli/README.md +112 -1
- package/skills/brand-alchemy/README.md +31 -1
- package/skills/claude-md-generator/README.md +73 -1
- package/skills/cold-email-verifier/README.md +41 -1
- package/skills/competitor-pr-finder/README.md +69 -1
- package/skills/cook-the-blog/README.md +82 -1
- package/skills/dependency-update-bot/README.md +96 -1
- package/skills/docs-from-code/README.md +93 -1
- package/skills/email-newsletter/README.md +72 -1
- package/skills/explain-this-pr/README.md +69 -1
- package/skills/gh-issue-to-demand-signal/README.md +95 -4
- package/skills/google-trends-api-skills/README.md +74 -1
- package/skills/graphic-case-study/README.md +97 -3
- package/skills/graphic-chart/README.md +0 -19
- package/skills/graphic-ebook/README.md +99 -3
- package/skills/graphic-gif/README.md +0 -19
- package/skills/graphic-slide-deck/README.md +104 -2
- package/skills/hackernews-intel/README.md +156 -1
- package/skills/human-tone/README.md +43 -1
- package/skills/kill-the-standup/README.md +79 -1
- package/skills/linkedin-job-post-to-buyer-pain-map/README.md +3 -3
- package/skills/linkedin-post-generator/README.md +103 -1
- package/skills/llms-txt-generator/README.md +138 -1
- package/skills/luma-attendees-scraper/README.md +0 -21
- package/skills/map-your-market/README.md +121 -1
- package/skills/meeting-brief-generator/README.md +85 -1
- package/skills/meta-ads-skill/README.md +67 -1
- package/skills/meta-tribeV2-skill/README.md +64 -3
- package/skills/newsletter-digest/README.md +142 -1
- package/skills/noise-to-linkedin-carousel/README.md +0 -21
- package/skills/noise2blog/README.md +102 -1
- package/skills/npm-downloads-to-leads/README.md +131 -12
- package/skills/oss-launch-kit/README.md +0 -21
- package/skills/outreach-sequence-builder/README.md +103 -1
- package/skills/position-me/README.md +65 -1
- package/skills/pr-description-writer/README.md +76 -1
- package/skills/pricing-finder/README.md +114 -1
- package/skills/pricing-page-psychology-audit/README.md +85 -1
- package/skills/product-update-logger/README.md +172 -4
- package/skills/producthunt-launch-kit/README.md +90 -1
- package/skills/reddit-icp-monitor/README.md +112 -1
- package/skills/reddit-post-engine/README.md +98 -1
- package/skills/schema-markup-generator/README.md +109 -1
- package/skills/sdk-adoption-tracker/README.md +127 -1
- package/skills/show-hn-writer/README.md +83 -1
- package/skills/stargazer/README.md +0 -21
- package/skills/tweet-thread-from-blog/README.md +104 -1
- package/skills/twitter-GTM-find-skill/README.md +37 -1
- package/skills/vc-curated-match/README.md +0 -21
- package/skills/vc-finder/README.md +98 -5
- package/skills/vid-motion-graphics/README.md +65 -5
- package/skills/where-your-customer-lives/README.md +0 -19
- package/skills/yc-intent-radar-skill/README.md +35 -1
package/package.json
CHANGED

package/skills/blog-cover-image-cli/README.md
CHANGED

@@ -1,4 +1,4 @@
-# Blog Cover Image CLI
+# Blog Cover Image CLI
 
 A modern, AI-powered CLI tool designed to automatically generate high-converting, minimalist blog cover images and thumbnails using **Gemini 3.1 Flash Image Preview**.
 
@@ -37,3 +37,114 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## Features
+- **Full AI Generation**: Uses `gemini-3.1-flash-image-preview` to generate the entire image.
+- **Smart Logo Fetching**: Pass a domain (like `vercel.com`) and the CLI automatically fetches the logo using `Brandfetch`, normalizes it to PNG via `sharp`, and injects it into the AI context.
+- **Aesthetic Control**: Bundled with `examples/` that automatically guide the model to produce clean, white-background, heavy-typography styles.
+- **Google Search Grounding**: The image generation is hooked into Google Search to pull real-time data if your title involves current events.
+- **Agent Ready**: Includes an OpenCode `SKILL.md` so your favorite AI agents can use this CLI autonomously.
+- **Self-Healing AI Generator**: Automatically validates generated images using Gemini Pro Vision to detect typos or layout issues, retrying up to 3 times with corrective feedback.
+- **Automated Publishing**: Built-in CI/CD workflow for seamless NPM releases via GitHub Actions.
+
+---
+
+## Installation
+
+You can install this globally via npm:
+
+```bash
+npm install -g blog-cover-image-cli
+```
+
+*(Note: Ensure you are using Node.js v18+)*
+
+---
+
+## Configuration
+
+The CLI securely stores your API key on your local machine using the `conf` package so you don't have to export it every time.
+
+```bash
+# 1. Set your Gemini API Key (Required for image generation)
+blog-cover-cli config set-key <YOUR_GEMINI_API_KEY>
+
+# 2. Set your Brandfetch Client ID (Required to fetch high-res logos)
+blog-cover-cli config set-brandfetch-id <YOUR_BRANDFETCH_CLIENT_ID>
+
+# Check your keys (masked)
+blog-cover-cli config get-key
+blog-cover-cli config get-brandfetch-id
+```
+
+*If you run the generate command without a key, a secure, interactive prompt will ask you for it.*
+
+---
+
+## Usage
+
+Generate a 16:9 cover image by providing a title and a domain name for the logo.
+
+```bash
+# Example 1: Cursor
+blog-cover-cli generate -t "Why Cursor is the Ultimate AI Code Editor" -l "cursor.com"
+
+# Example 2: Lovable
+blog-cover-cli generate -t "Building Apps in Minutes with Lovable" -l "lovable.dev"
+
+# Example 3: X (Twitter)
+blog-cover-cli generate -t "The Future of Real-time Information" -l "x.com"
+```
+
+### Options
+
+| Flag | Full Name | Description | Required | Default |
+|---|---|---|---|---|
+| `-t` | `--title` | The exact text to render on the cover | **Yes** | |
+| `-l` | `--logo` | The domain to fetch the logo from (e.g. `google.com`) | No | |
+| `-o` | `--output` | The output path for the PNG file | No | `./output/<auto-name>.png` |
+
+If you omit the `--output` flag, the CLI automatically creates an `output/` directory in your current path and names the file intelligently based on the logo domain or title (e.g., `output/cursor-cover.png`).
+
+---
+
+## For AI Agents (OpenCode Skill)
+
+This package includes a structured OpenCode skill! Agents can install this package and read the instructions in `agent-skill/blog-cover-generator/SKILL.md` to learn how to generate cover images for users autonomously.
+
+**Workflow for Agents:**
+1. Execute `npx -p blog-cover-image-cli blog-cover-cli config set-key $KEY`
+2. Execute `npx -p blog-cover-image-cli blog-cover-cli generate -t "Title" -l "domain.com"`
+3. Return the generated image to the user.
+
+---
+
+## Self-Healing AI Generator
+
+The CLI features a built-in Automated QA (Critic) loop to ensure high-quality results.
+
+1. **Generation**: The tool generates an image based on your title and logo.
+2. **Validation**: It uses `gemini-3.1-pro-preview` to OCR the generated image and check for typos, layout issues, or missing elements.
+3. **Self-Correction**: If the validation fails, the CLI automatically retries (up to 3 times), passing the specific "critical feedback" back to the generator to fix the errors.
+
+This ensures that common AI image generation issues, like misspelled words in typography, are caught and corrected before you even see the file.
+
+---
+
+## Automated Publishing (CI/CD)
+
+This repository includes a GitHub Action workflow for automated NPM publishing. To set this up for your fork:
+
+1. **Generate Token**: Go to [npmjs.com](https://www.npmjs.com/), navigate to **Access Tokens**, and generate a new "Automation" token.
+2. **Add Secret**: In your GitHub repository, go to **Settings > Secrets and variables > Actions**.
+3. **Save Secret**: Create a new repository secret named `NPM_TOKEN` and paste your token.
+
+The workflow will automatically publish a new version to NPM whenever you create a new GitHub Release.
+
+---
+
+## How it works under the hood
+1. **Logo Fetcher**: Hits `Brandfetch`, parses WebP/SVGs/AVIFs, and converts to strict PNGs.
+2. **Context Assembly**: Loads aesthetic examples from the `./examples` folder to ground the style.
+3. **Multimodal Prompting**: Assembles the exact text instructions, the visual examples, and the fetched logo into a single unified payload.
+4. **Google GenAI SDK**: Sends the payload with `tools: [{ googleSearch: {} }]` to the Gemini 3.1 Flash Image model.
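The self-healing generate/validate/retry loop described in the README above can be sketched as follows. This is an illustrative outline only, with hypothetical helper names (`generate`, `validate`); the CLI's actual internals are not published in this diff:

```python
def generate_with_qa(title, logo_png, generate, validate, max_retries=3):
    """Generate an image, let a critic model check it, and retry with feedback.

    `generate(title, logo_png, feedback)` returns image bytes; `validate(image,
    title)` returns (ok, feedback) after an OCR/critic pass. Both are injected
    so the control flow can be shown without any model calls.
    """
    feedback = None
    image = None
    for attempt in range(1, max_retries + 1):
        image = generate(title, logo_png, feedback)
        ok, feedback = validate(image, title)
        if ok:
            return image, attempt
    # Best effort: hand back the last image after exhausting retries.
    return image, max_retries
```

The key design point is that the critic's feedback is threaded back into the next generation call, rather than simply regenerating blind.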
package/skills/brand-alchemy/README.md
CHANGED

@@ -1,4 +1,4 @@
-# Brand Alchemy
+# Brand Alchemy
 
 <img width="1920" height="1072" alt="brand-alchemy-skill-cover-image" src="https://github.com/user-attachments/assets/15d90c97-9eef-4eed-abb3-7ce58438adf0" />
 
@@ -27,3 +27,33 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## Core Capabilities
+
+When invoked, the skill commands the AI agent to act as an elite branding consultant through a rigorous protocol:
+
+* **The Interrogation**: Forces the AI to stop and ask critical discovery questions (Core, Audience, Alternative, Vibe) to extract your brand's true DNA before generating names.
+* **Strategic Positioning**: Applies frameworks from April Dunford ("Obviously Awesome") and Category Design ("Play Bigger") to position your startup against the status quo.
+* **Phonosemantics & Lexicon Science**: Uses sound symbolism (Plosives, Fricatives, Vowel Size) to engineer names that subconsciously communicate speed, power, or luxury.
+* **Universal Domain Verification**: Automatically runs a robust Python script to check DNS and RDAP availability for any TLD (`.com`, `.io`, `.ai`, `.tech`, etc.), ensuring you don't fall in love with a taken name.
+
+## Project Structure
+
+```text
+brand-alchemy/
+├── README.md                     # Documentation
+├── SKILL.md                      # Master protocol for the AI
+├── scripts/
+│   └── domain_checker.py         # Universal domain verification script (Python)
+└── references/
+    ├── core-brand-strategy.md    # Elite positioning & category design playbook
+    └── lexicon-naming-science.md # Phonosemantics & naming linguistics guide
+```
+
+## How to Prompt the AI
+
+Once the skill is installed, simply ask the AI to help you name your startup or build a brand strategy.
+
+> "Help me name my AI distribution startup. We help technical founders get users."
+
+The AI will automatically pause and initiate **Step 1: The Interrogation**, asking you specific questions about your core offering, audience, alternatives, and desired brand vibe.
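The DNS-plus-RDAP availability check that `domain_checker.py` performs can be approximated like this. This is an assumed approach, not the bundled script itself: the lookups are injected as callables so the decision logic is testable without network access, and `rdap.org` is used only as an example public RDAP gateway:

```python
import socket
import urllib.error
import urllib.request


def _dns_resolves(domain):
    """True if the domain has any DNS answer (strong signal it is registered)."""
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False


def _rdap_status(domain):
    """HTTP status from a public RDAP gateway; 404 means no registration found."""
    try:
        urllib.request.urlopen("https://rdap.org/domain/%s" % domain, timeout=10)
        return 200
    except urllib.error.HTTPError as e:
        return e.code
    except urllib.error.URLError:
        return 0  # network failure: treat as unknown


def likely_available(domain, dns=_dns_resolves, rdap=_rdap_status):
    """Heuristic: claim availability only if DNS is silent AND RDAP returns 404."""
    if dns(domain):
        return False
    return rdap(domain) == 404
```

Checking both signals matters: a parked domain may have no website but still resolve, and some registries answer RDAP even when DNS is empty.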
package/skills/claude-md-generator/README.md
CHANGED

@@ -1,4 +1,4 @@
-# claude-md-generator
+# claude-md-generator
 
 <img width="1280" height="640" alt="claude-md-generator" src="https://github.com/user-attachments/assets/0e295271-2216-47f7-828f-845c98ef0298" />
 
@@ -28,3 +28,75 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## What It Does
+
+- Scans project files: package.json, tsconfig.json, linter configs, Makefile, directory structure
+- Extracts all build, test, lint, and dev commands
+- Identifies code style conventions that differ from defaults (path aliases, export patterns, naming)
+- Maps non-obvious architecture decisions
+- Finds gotchas: auto-generated files, required env var setup, test dependencies
+- Generates CLAUDE.md using Gemini, then verifies it stays under 200 lines
+- If CLAUDE.md already exists, improves it without discarding custom content
+
+## Requirements
+
+| Requirement | Purpose | How to Set Up |
+|------------|---------|--------------|
+| Gemini API key | CLAUDE.md generation from codebase analysis | aistudio.google.com, Get API key |
+
+## Setup
+
+```bash
+cp .env.example .env
+# Add GEMINI_API_KEY
+```
+
+## How to Use
+
+From the project root you want to document:
+```
+"Generate a CLAUDE.md for this project"
+"Create a CLAUDE.md"
+"Write Claude configuration for this repo"
+"Help Claude understand this codebase"
+```
+
+To update an existing CLAUDE.md:
+```
+"Update my CLAUDE.md: we added Vitest and changed the build system"
+"Improve my existing CLAUDE.md"
+```
+
+## What Goes in CLAUDE.md
+
+| Section | Include | Skip |
+|---------|---------|------|
+| Commands | Exact runnable commands, flags needed, env vars required | `npm install` and other obvious ones |
+| Architecture | Non-obvious structure, auto-generated directories | "src contains source files" |
+| Code Style | Path aliases, export conventions, non-default settings | Indent size (formatter handles it) |
+| Testing | Required setup, how to run one test | "we use Jest" (visible from package.json) |
+| Gotchas | Auto-generated files, env var order, known intentional issues | Things derivable from the code |
+
+## Why Under 200 Lines
+
+Long CLAUDE.md files get ignored. Claude loads the full file into context every session: a bloated CLAUDE.md with obvious content trains Claude to skim it. A tight 100-150 line CLAUDE.md with only non-obvious facts gets read and used.
+
+The skill cuts aggressively: if a section says only things Claude can infer from the code, it removes it.
+
+## Project Structure
+
+```
+claude-md-generator/
+├── SKILL.md
+├── README.md
+├── .env.example
+├── evals/
+│   └── evals.json
+└── references/
+    └── section-guide.md
+```
+
+## License
+
+MIT
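The "extracts all build, test, lint, and dev commands" step in the README above boils down to reading the `scripts` table out of `package.json` so the generated CLAUDE.md can quote exact runnable commands. A minimal sketch of that one step (the skill's real scanner also covers tsconfig, linter configs, and Makefiles):

```python
import json


def extract_npm_commands(package_json_text):
    """Return the scripts table from a package.json string as {name: command}.

    Missing or empty `scripts` yields an empty dict rather than an error, so
    the scanner can run unchanged on projects without npm scripts.
    """
    pkg = json.loads(package_json_text)
    return dict(pkg.get("scripts", {}))
```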
package/skills/cold-email-verifier/README.md
CHANGED

@@ -1,4 +1,4 @@
-# Cold Email Verifier
+# Cold Email Verifier
 
 Agent Skill that equips your AI agent with the ability to autonomously guess, enrich, and verify cold email addresses directly from a CSV file.
 
@@ -30,3 +30,43 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## Verification Engines Supported
+The AI is trained to use two different verification backends:
+1. **ValidEmail.co API (Highly Recommended)**: The AI will use this SaaS API for enterprise-grade accuracy, bypassing strict catch-all servers. You can get a free tier of verification credits at validemail.co.
+2. **Reacher (Self-Hosted)**: The AI can route checks through your own self-hosted Reacher Docker container (e.g., on an AWS EC2 instance with an unblocked Port 25) for 100% free verification.
+
+## Installation
+
+To install this skill into your AI agent's workspace:
+
+1. Clone or download this folder.
+2. Copy the entire cold-email-verifier folder into your agent's skills directory (e.g., ~/.agents/skills/ or your project's .agents/skills/ folder).
+3. Ensure the dependencies in `requirements.txt` are installed in your environment:
+```bash
+pip install -r requirements.txt
+```
+4. Copy the .env.example to .env and add your ValidEmail.co API key:
+```bash
+cp .env.example .env
+```
+
+## How to Prompt the AI
+
+Once the skill is installed, you can simply talk to your AI agent. Here are example prompts:
+
+**Using ValidEmail.co:**
+> "Use the cold email verifier skill to process leads.csv. Please use the validemail mode."
+
+**Using a Self-Hosted Reacher Server:**
+> "Verify the emails in leads.csv using the cold email verifier. Use reacher-http mode and point it to http://YOUR_SERVER_IP:8080/v0/check_email."
+
+The AI will automatically parse the CSV, handle the domain lookups, generate the permutations, run the verification engine, and output a clean CSV with the valid emails appended.
+
+## CSV Format Requirements
+The AI expects the input CSV to contain at least the following headers:
+- First Name
+- Last Name
+- Company Name
+- Domain Name
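The "generate the permutations" step in the README above refers to the standard set of first/last-name email patterns. A minimal sketch of one plausible permutation set (the skill's actual pattern list is not published in this diff, so treat the specific patterns as an assumption):

```python
def guess_emails(first, last, domain):
    """Return common cold-email address guesses for a person at a domain.

    Patterns covered: first, last, firstlast, first.last, flast, f.last,
    and firstl -- e.g. jane, doe, janedoe, jane.doe, jdoe, j.doe, janed.
    """
    f, l = first.strip().lower(), last.strip().lower()
    patterns = [f, l, f + l, f + "." + l, f[0] + l, f[0] + "." + l, f + l[0]]
    # Deduplicate while preserving order (single-letter names can collide).
    seen, emails = set(), []
    for p in patterns:
        addr = p + "@" + domain
        if addr not in seen:
            seen.add(addr)
            emails.append(addr)
    return emails
```

Each guess is then handed to the chosen verification backend (ValidEmail.co or Reacher), and only addresses that verify are written back to the CSV.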
package/skills/competitor-pr-finder/README.md
CHANGED

@@ -1,4 +1,4 @@
-# competitor-pr-finder
+# competitor-pr-finder
 
 Give it your product URL. It finds your top 5 competitors, researches every press mention, podcast appearance, and community post across all of them, and tells you exactly which channels to pitch -- with the journalist's name, the angle that got your competitors featured, and a ready-to-send cold pitch for your product.
 
@@ -57,3 +57,71 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## Setup
+
+```bash
+cp .env.example .env
+# Add TAVILY_API_KEY (required) and FIRECRAWL_API_KEY (optional)
+```
+
+## Usage
+
+```
+Find my PR targets: https://yourstartup.com
+```
+
+Or paste a description if you don't have a live URL:
+```
+Find PR targets for my startup. We build [what you do] for [who]. [Stage], [geography].
+```
+
+## Cost
+
+| Operation | Searches | Cost |
+|---|---|---|
+| Product page fetch | 1 Firecrawl or Tavily extract | ~$0.001 |
+| Competitor discovery | 2 Tavily searches | ~$0.02 |
+| 3-track PR research (5 competitors) | 15 Tavily searches | ~$0.15 |
+| Journalist lookup (up to 7 Tier 1 channels) | ~6 Tavily searches | ~$0.06 |
+| **Total** | **~23-24 searches** | **~$0.23/run** |
+
+## Zero-Hallucination Policy
+
+Every channel name in the output traces to a URL in the search results. Every journalist name traces to a search result snippet. Story angles are extracted from article titles found by Tavily -- not inferred from AI training knowledge. Fields that could not be sourced are labeled "not found in search data."
+
+## Project Structure
+
+```
+competitor-pr-finder/
+├── SKILL.md             -- 10-step workflow for Claude Code
+├── README.md            -- this file
+├── .env.example         -- environment variable template
+├── scripts/
+│   └── research.py      -- two-phase Tavily data collector
+├── evals/
+│   └── evals.json       -- 5 test cases
+└── references/
+    ├── pr-channel-types.md -- how to identify editorial, podcast, community channels
+    ├── pitch-guide.md      -- cold pitch structure, forbidden phrases, angle extraction
+    └── tier-scoring.md     -- channel tiering rules and frequency map construction
+```
+
+## Standalone Script Usage
+
+```bash
+# Phase 1: competitor discovery
+python3 scripts/research.py \
+  --phase discover \
+  --product-analysis /tmp/cprf-product-analysis.json \
+  --tavily-key "$TAVILY_API_KEY" \
+  --output /tmp/cprf-competitors-raw.json

+# Phase 2: PR research on confirmed competitors
+python3 scripts/research.py \
+  --phase pr-research \
+  --competitors /tmp/cprf-competitors-confirmed.json \
+  --product-analysis /tmp/cprf-product-analysis.json \
+  --tavily-key "$TAVILY_API_KEY" \
+  --output /tmp/cprf-pr-raw.json
+```
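The zero-hallucination policy in the README above amounts to a sourcing filter: a field may appear in the output only if its value can be traced back to a fetched snippet, otherwise it is replaced with the literal label "not found in search data". An illustrative sketch of such a filter (not the skill's actual code; the field/snippet shapes are assumptions):

```python
def sourced_fields(candidate, search_results):
    """Keep only fields whose value appears in a fetched snippet.

    `candidate` maps field names to claimed values; `search_results` is a list
    of {"url": ..., "snippet": ...} dicts. Unsourced values are replaced with
    the policy's literal label so downstream output never presents them as fact.
    """
    out = {}
    for field, value in candidate.items():
        hit = next(
            (r["url"] for r in search_results
             if value.lower() in r["snippet"].lower()),
            None,
        )
        if hit:
            out[field] = {"value": value, "source": hit}
        else:
            out[field] = {"value": "not found in search data", "source": None}
    return out
```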
package/skills/cook-the-blog/README.md
CHANGED

@@ -1,4 +1,4 @@
-# cook-the-blog
+# cook-the-blog
 
 <img width="1280" height="640" alt="cook-the-blog-cover" src="https://github.com/user-attachments/assets/01fc87a2-78a8-433d-8531-a06a6a6df617" />
 
@@ -27,3 +27,84 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
 4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
 
 > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
+
+## Prerequisites & Dependencies
+
+Before running the skill, the AI agent needs access to several external tools, APIs, and environment details.
+
+**Agent Instruction:** Before executing the pipeline for the first time, you (the AI Agent) must explicitly ask the user for any missing configuration details listed below (Target Repo, GCS Bucket, Email, etc.). **Do not hardcode these values into the skill.md file automatically without asking.**
+
+### 1. Tavily Search (Deep Research)
+Used for deep, high-quality web research to extract hard metrics, founder details, and GTM strategies.
+- **Setup:** You need a Tavily API key from [Tavily](https://tavily.com/).
+- **Installation:** Install the Tavily MCP server so the agent can use it natively.
+```bash
+npx -y @modelcontextprotocol/server-tavily
+```
+- **Environment Variable:** Set `TAVILY_API_KEY` in your agent's environment or MCP config.
+
+### 2. SerpApi (SEO Keyword Research)
+Used to pull Google Trends data to find breakout search queries.
+- **Setup:** Get a SerpApi key from [SerpApi](https://serpapi.com/).
+- **Installation:** Ensure Python 3 and the `requests` / `google-search-results` libraries are installed. The user must provide a custom Python script (e.g., `blog_seo_research.py`) that queries the Google Trends API. The agent must be told the exact file path to this script.
+- **Environment Variable:** `SERPAPI_KEY`
+
+### 3. Blog Cover Image CLI
+A custom Node.js CLI tool used to generate 16:9 minimalist cover images with company logos.
+- **Installation:** Install the tool globally via npm.
+```bash
+npm i -g blog-cover-image-cli
+```
+- **Usage:** The agent calls it via `blog-cover-cli generate -t "Title" -l "Logo URL" -o "./cover.png"`.
+
+### 4. Cloud Storage (e.g., Google Cloud Storage, AWS S3)
+Used to host the generated cover images. The user must specify which cloud provider they want to use.
+- **Setup (Example for GCP):** The user needs a Google Cloud Service Account with storage write permissions.
+- **Installation:** Install the Google Cloud SDK (`gcloud` and `gsutil`).
+- **Authentication:** The user must provide a `service-account.json` file to authenticate.
+```bash
+gcloud auth activate-service-account --key-file=service-account.json
+```
+- **Target Bucket:** The user must provide the target bucket URL (e.g., `gs://your-bucket-name/covers/`).
+
+### 5. GitHub CLI & Git
+Used for pushing the final MDX file to the target repository.
+- **Setup:** Ensure `git` and `gh` (GitHub CLI) are installed on the host.
+- **Authentication:** Log in to the GitHub CLI using a personal access token.
+```bash
+gh auth login --with-token < token.txt
+```
+- **Configuration:** The agent will need the user's `git config user.name` and `git config user.email` to ensure proper commit attribution.
+
+### 6. Email Notifications (SMTP)
+Used to send a final success summary to the admin.
+- **Setup:** The agent creates a Python script (`send_summary.py`) using the built-in `smtplib`.
+- **Credentials:** The user must provide a dedicated sender Gmail account and an **App Password** (not their real password), as well as the destination admin email.
+
+### 7. Stop Slop (AI Output Quality)
+Used to ensure the generated case studies avoid typical AI fluff and maintain a high-quality, human-like tone.
+- **Setup:** Add the [Stop Slop](https://github.com/hardikpandya/stop-slop) skill to your agent's loaded skills before running the generation pipeline.
+
+---
+
+## Configuration Variables to Ask For
+
+When initializing this skill, the agent must ask the user to provide or confirm the following placeholders before running the pipeline:
+
+1. **`[TARGET_REPO_URL]`**: The exact GitHub repository URL or slug (e.g., `username/my-blog-repo`).
+2. **`[TARGET_BUCKET]`**: The cloud storage bucket path (e.g., `gs://my-images-bucket/blogs/`).
+3. **`[PUBLIC_IMAGE_BASE_URL]`**: The public base URL where the uploaded images will be accessible (e.g., `https://storage.googleapis.com/my-images-bucket/blogs/`).
+4. **`[GIT_USER_NAME]` & `[GIT_USER_EMAIL]`**: The exact name and email to use for Git commit authorship.
+5. **`[ADMIN_EMAIL]`**: Where to send the final summary report.
+6. **`[SENDER_EMAIL]` & `[SENDER_APP_PASSWORD]`**: The credentials for the SMTP Python script.
+7. **`[PATH_TO_SEO_SCRIPT]`**: The exact path to the Python script that handles the SerpApi Google Trends queries.
+8. **Brand Promotion Link**: The URL and pitch text to inject into the final FAQ of the MDX template (e.g., "If you want to build this, check out [MyBrand](https://mybrand.com)").
+
+---
+
+## How to Run
+
+1. Once the user has provided the environment variables and configuration details, place the `skill.md` file in your agent's workspace.
+2. The agent will read `skill.md` to understand the 8-step execution loop.
+3. Trigger the agent by saying: *"Run the case study generator for [Company Name]."*
+4. The agent will autonomously execute the entire pipeline from research to deployment.
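The SMTP notification step above (the agent-generated `send_summary.py`) can be sketched with nothing but the standard library. The message construction is separated from the network send so the former can be tested offline; the Gmail host/port values are the usual ones for App-Password auth, and the function names are illustrative rather than the script's actual contents:

```python
import smtplib
from email.message import EmailMessage


def build_summary(sender, admin_email, subject, body):
    """Assemble the summary email without touching the network."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = admin_email
    msg["Subject"] = subject
    msg.set_content(body)
    return msg


def send_summary(msg, sender, app_password):
    """Send a prepared message via Gmail over implicit TLS (port 465)."""
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, app_password)  # App Password, never the real one
        server.send_message(msg)
```

Usage: `send_summary(build_summary(SENDER_EMAIL, ADMIN_EMAIL, "Pipeline done", report), SENDER_EMAIL, SENDER_APP_PASSWORD)`.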
@@ -1,4 +1,4 @@
|
|
|
1
|
-
# dependency-update-bot
|
|
1
|
+
# dependency-update-bot
|
|
2
2
|
|
|
3
3
|
<img width="1280" height="640" alt="dependency-update-bot" src="https://github.com/user-attachments/assets/08939280-bba2-4ac9-a349-2ca8c25ca328" />
|
|
4
4
|
|
|
@@ -28,3 +28,98 @@ https://github.com/user-attachments/assets/cea8b565-2002-4a87-8857-d902bfcfdc1c
|
|
|
28
28
|
4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).
|
|
29
29
|
|
|
30
30
|
> **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
|
|
31
|
+
|
|
32
|
+
## What It Does
|
|
33
|
+
|
|
34
|
+
- Runs `npm outdated --json` or `pip list --outdated` to find outdated packages
|
|
35
|
+
- Classifies each update as patch (low risk), minor (medium risk), or major (high risk) using semver
|
|
36
|
+
- Fetches changelogs from GitHub Releases, CHANGELOG.md, or npm/PyPI registry as fallback
|
|
37
|
+
- Uses Gemini to summarize what changed between old and new versions, flagging breaking changes
|
|
38
|
+
- Creates a branch per risk group, updates the package file, and opens a PR with the changelog summary
|
|
39
|
+
- Opens one PR per major update (since each major bump needs individual review)
|
|
40
|
+
|
|
41
|
+
## Requirements
|
|
42
|
+
|
|
43
|
+
| Requirement | Purpose | How to Set Up |
|
|
44
|
+
|------------|---------|--------------|
|
|
45
|
+
| Gemini API key | Changelog summarization | aistudio.google.com, Get API key |
|
|
46
|
+
| GitHub CLI authenticated | PR creation | `gh auth login` |
|
|
47
|
+
| GitHub token (optional) | Higher rate limit for changelog fetching | github.com/settings/tokens, read-only scope |
|
|
48
|
+
|
|
49
|
+
## Setup
|
|
50
|
+
|
|
51
|
+
```bash
|
|
52
|
+
cp .env.example .env
|
|
53
|
+
```
|
|
54
|
+
|
|
55
|
+
Fill in:
|
|
56
|
+
- `GEMINI_API_KEY` (required)
|
|
57
|
+
- `GITHUB_TOKEN` (optional, increases GitHub API rate limit from 60 to 5,000 requests/hour)
## How to Use

Scan npm dependencies:
```
"Check for outdated packages"
"Update my dependencies and open PRs"
"Run the dependency update bot"
```

Scan pip dependencies:
```
"Check my Python packages for updates"
"Scan requirements.txt for outdated dependencies"
```

Specific risk level only:
```
"Only open PRs for patch updates today"
"Show me which packages have major version updates"
```

## PR Structure

Each PR includes:
- A risk level label (patch / minor / major)
- For each package: the version bump, a changelog summary (3-5 bullets), and breaking changes flagged with a BREAKING prefix
- A "How to verify" section

**One PR per risk group** for patch and minor updates. **One PR per package** for major updates, since each breaking change needs individual review.

## Risk Classification

| Level | Version Change | Example | Action |
|-------|----------------|---------|--------|
| Patch | Z in X.Y.Z | 4.17.19 → 4.17.21 | Safe to merge after CI passes |
| Minor | Y in X.Y.Z | 4.17.x → 4.18.x | Review the changelog before merging |
| Major | X in X.Y.Z | 4.x.x → 5.x.x | Read the changelog carefully and test thoroughly |

If a patch or minor update's changelog contains BREAKING CHANGE keywords, the bot automatically escalates it to major.
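That escalation rule amounts to a keyword scan over the fetched changelog. A sketch (illustrative only; `effective_risk` and the marker list are assumptions, not the skill's actual code):

```python
BREAKING_MARKERS = ("breaking change", "breaking-change")

def effective_risk(semver_risk: str, changelog: str) -> str:
    """Escalate a patch/minor update to major if its changelog flags breakage."""
    text = changelog.lower()  # match "BREAKING CHANGE", "Breaking change", etc.
    if semver_risk != "major" and any(m in text for m in BREAKING_MARKERS):
        return "major"
    return semver_risk
```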
## Changelog Sources

The bot tries these sources in order for each package:

1. GitHub Releases API (best)
2. Raw `CHANGELOG.md` from the repo
3. npm registry README (fallback)
4. PyPI project description (last resort)

If no changelog is found, the PR still includes the version bump, with a note to review it manually.
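The fallback order can be modeled as a priority list of fetchers, each tried until one returns text. A sketch with stub lambdas standing in for the real GitHub/npm/PyPI HTTP calls (all names here are illustrative):

```python
def first_changelog(fetchers):
    """Try (name, fetch) pairs in priority order; return the first non-empty result."""
    for name, fetch in fetchers:
        try:
            text = fetch()
        except Exception:
            continue  # a failed source simply falls through to the next one
        if text:
            return name, text
    return None, None  # caller adds the "review manually" note to the PR

# Stub fetchers in the documented priority order:
sources = [
    ("github-releases", lambda: None),         # e.g. no release found
    ("changelog-md", lambda: "## 5.0.0 ..."),  # raw CHANGELOG.md succeeds
    ("npm-readme", lambda: "readme text"),     # never reached here
]
```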
## Project Structure

```
dependency-update-bot/
├── SKILL.md
├── README.md
├── .env.example
├── evals/
│   └── evals.json
└── references/
    └── changelog-patterns.md
```

## License

MIT
# docs-from-code

4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or you can extract it and drop the folder, both work).

> **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!

## What It Generates

| Output | When |
|--------|------|
| `README.md` (full) | Project has no README |
| `README.md` (sections) | README exists but API or architecture sections are stale |
| `docs/API.md` | Project has HTTP routes |
| Architecture section | Always, built from graphify's god nodes and communities |
| GitHub PR | When you ask it to open one |

## Why graphify?

The skill uses graphify as its extraction engine:
- 20 languages via tree-sitter ASTs: Python, TypeScript, Go, Rust, Java, and 15 more
- Architecture insight: god nodes and community clusters show what everything connects through
- 71.5x fewer tokens than reading raw files, so it stays efficient on large codebases
- SHA256 cache: re-runs only process changed files
- Honest tagging: `EXTRACTED` (found in source) vs. `INFERRED` (reasonable inference with a confidence score)
- Extracts rationale from `# NOTE:`, `# WHY:`, and `# HACK:` comments and from docstrings

The bundled `scripts/` (TypeScript and Python AST extractors) serve as a fallback if graphify is unavailable.
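To illustrate the kind of extraction the fallback scripts perform, here is a minimal sketch using Python's stdlib `ast` module. It is not the bundled `extract_py.py` (the name `extract_api` is hypothetical); it just lists top-level functions with the first line of each docstring.

```python
import ast

def extract_api(source: str) -> list[tuple[str, str]]:
    """Return (function name, first docstring line) for top-level functions."""
    api = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node) or ""
            api.append((node.name, doc.splitlines()[0] if doc else ""))
    return api
```

A real extractor would also walk classes, routes, and type annotations, which is what graphify's tree-sitter pass provides across 20 languages.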
## Supported Languages (via graphify)

Python, TypeScript, JavaScript, Go, Rust, Java, C, C++, Ruby, C#, Kotlin, Scala, PHP, Swift, Lua, Zig, PowerShell, Elixir, Objective-C, Julia

## Requirements

- Python 3.10+ (for graphify)
- Node.js 18+ (for the fallback TypeScript extractor)
- `gh` CLI (optional, for opening PRs automatically)
- `GITHUB_TOKEN` env var (optional, for PRs)

## Setup

### 1. Install graphify

```bash
pip install graphifyy
```

No API keys are needed for extraction; graphify uses your agent's existing model access.

### 2. Configure (Optional)

```bash
cp .env.example .env
# Add GITHUB_TOKEN if you want auto-PR support
```

## How to Use

Be inside your project and ask:

```
"Generate a README for this project"
"My API docs are out of date, update them from the code"
"Create docs/API.md from my FastAPI routes"
"Add an architecture section to our README"
"Document this TypeScript library"
```

The agent will:
1. Run `graphify . --no-viz` to build a knowledge graph of your codebase
2. Read `GRAPH_REPORT.md` for god nodes, communities, and architecture insights
3. Query the graph for routes, types, and data models
4. Read the existing docs to understand what needs updating
5. Generate accurate docs grounded in the graph
6. Write the files and optionally open a GitHub PR
## Project Structure

```
docs-from-code/
├── SKILL.md                  # Agent instructions
├── README.md                 # This file
├── .env.example              # Environment variables template
├── scripts/
│   ├── package.json          # Script dependencies (ts-morph)
│   ├── extract_ts.ts         # TypeScript/JS AST extractor
│   └── extract_py.py         # Python AST extractor
├── references/
│   ├── extraction-guide.md   # Per-framework extraction notes
│   └── output-template.md    # README and API.md templates
└── evals/
    └── evals.json            # Test prompts
```

## License

MIT