x-collect-skill 1.0.0

package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Sam

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,128 @@
# x-collect

[![npm version](https://img.shields.io/npm/v/x-collect-skill)](https://www.npmjs.com/package/x-collect-skill)
[![license](https://img.shields.io/npm/l/x-collect-skill)](./LICENSE)

X/Twitter topic intelligence skill for Claude Code. Uses Playwright browser automation to search x.com directly, scraping real tweets with engagement metrics.

## What it does

`/x-collect [topic]` opens x.com in Chrome, runs 3 search rounds, and extracts real tweet data:

1. **Top / Viral Posts** — x.com search "Top" tab, most popular tweets
2. **Trending / Recent Posts** — Top tab with `min_faves:50 since:YESTERDAY`, hottest posts from the last 24h
3. **KOL Posts** — filtered by `min_faves:100`, high-engagement accounts only

Optional bonus rounds:

4. **Hook Study** — pure text posts (`-filter:media -filter:links`) with 500+ likes, for studying copy patterns
5. **Conversation Starters** — high-reply posts (`min_replies:30`), to find what drives discussion

Output: JSONL + Markdown in `./x-collect-data/` with real handles, tweet text, likes, retweets, replies, views, and a Content Opportunity Summary.

## Prerequisites

| Dependency | Purpose | Required |
|------------|---------|----------|
| [Claude Code](https://claude.ai/claude-code) | Runtime | Yes |
| [Playwright MCP](https://github.com/microsoft/playwright-mcp) | Browser automation | Yes |
| Logged-in X/Twitter session | Browser must have x.com logged in | Yes |

### Install Playwright MCP

```bash
claude mcp add --scope user playwright -- npx @playwright/mcp@latest
```

## Install

### Option A: npx skills (Recommended)

```bash
npx skills add SamCuipogobongo/x-collect
```

### Option B: npm global install

```bash
npm install -g x-collect-skill
```

### Option C: One-line install

```bash
curl -fsSL https://raw.githubusercontent.com/SamCuipogobongo/x-collect/main/install.sh | bash
```

### Option D: Manual

```bash
git clone https://github.com/SamCuipogobongo/x-collect.git
mkdir -p ~/.claude/skills/x-collect
cp x-collect/SKILL.md ~/.claude/skills/x-collect/
```

## Usage

```bash
# In Claude Code (with Chrome open and x.com logged in):
/x-collect Claude Code   # research "Claude Code" on X
/x-collect vibe coding   # research "vibe coding" on X
/x-collect               # interactive mode (asks for topic)
```

## Output

Data saved to `./x-collect-data/`:

| File | Description |
|------|-------------|
| `intel.jsonl` | One JSON object per tweet, deduped by URL |
| `intel.md` | Formatted report with engagement data + Content Opportunity Summary |

### JSONL Schema

```json
{
  "url": "https://x.com/bcherny/status/123",
  "text": "I'm Boris and I created Claude Code...",
  "intel_type": "viral_post",
  "topic": "Claude Code",
  "account": "@bcherny",
  "display_name": "Boris Cherny",
  "angle": "Creator shares vanilla setup with 15+ parallel sessions",
  "format": "thread",
  "likes": 5800,
  "retweets": 1200,
  "replies": 340,
  "views": "6.5M",
  "posted_at": "2026-01-15T10:30:00Z",
  "key_takeaway": "Behind-the-scenes from creator + actionable tips = mega engagement",
  "collected": "2026-02-08"
}
```

### Intel Types

| Type | Source |
|------|--------|
| `viral_post` | x.com search "Top" tab |
| `trending_post` | Top tab with `min_faves:50 since:YESTERDAY` |
| `kol_post` | x.com search with `min_faves:100` filter |
| `hook_study` | Pure text posts (`-filter:media -filter:links`) with 500+ likes |
| `conversation_starter` | High-reply posts (`min_replies:30`) |

## How it works

The skill uses Playwright MCP to:
1. Navigate to x.com search pages with Top tab and various filters
2. Wait for tweets to load
3. Extract tweet data via DOM queries or accessibility tree snapshot
4. Deduplicate by tweet URL
5. Output JSONL + Markdown with Content Opportunity Summary

No third-party API needed — reads directly from x.com via Playwright browser automation.

## License

MIT
package/SKILL.md ADDED
@@ -0,0 +1,375 @@
---
name: x-collect
description: |
  X/Twitter topic intelligence tool. Uses Playwright browser automation to search
  x.com directly, scraping real tweets, engagement metrics, and KOL accounts.
  Outputs a structured intel report (JSONL + Markdown) with content opportunity analysis.
  Use when the user wants to research a topic on X/Twitter, find what's performing,
  or scout trending discussions before creating content.
  Triggers on: /x-collect, requests to research X/Twitter trends for a topic.
allowed-tools:
  - Read
  - Write
  - Edit
  - Bash
  - Glob
  - Grep
  - AskUserQuestion
  - mcp__playwright__browser_navigate
  - mcp__playwright__browser_snapshot
  - mcp__playwright__browser_take_screenshot
  - mcp__playwright__browser_run_code
  - mcp__playwright__browser_click
  - mcp__playwright__browser_type
  - mcp__playwright__browser_wait_for
  - mcp__playwright__browser_press_key
  - mcp__playwright__browser_tabs
---

# X Topic Intelligence

Research what's trending and performing on X/Twitter for any topic. Browse x.com directly via Playwright browser automation to find real tweets, engagement data, and KOL accounts.

---

## Usage

```
/x-collect [topic]   # direct mode
/x-collect           # interactive mode - asks for topic
```

When no topic is provided, use `AskUserQuestion`:
"What topic do you want to research on X/Twitter? (e.g., 'AI agents', 'vibe coding', 'indie hacking')"

---

## Prerequisites

| Dependency | Purpose | Required |
|------------|---------|----------|
| Claude Code | Runtime | Yes |
| Playwright MCP | Browser automation (`npx @playwright/mcp@latest`) | Yes |
| Logged-in X/Twitter session | Browser must have an active x.com session | Yes |

---

## Browser Automation Flow

Use Playwright MCP tools to navigate x.com, perform searches, and extract tweet data.

### General Steps (repeat for each search round)

1. **Navigate** — `browser_navigate` to the x.com search URL
2. **Wait** — `browser_wait_for` until tweets appear (wait for text like "Like" or a few seconds)
3. **Extract** — `browser_run_code` to run Playwright JS that scrapes tweet data from the DOM
4. **Scroll + Extract more** — if fewer than 10 tweets, scroll down and extract again

### Tweet Extraction Script

Use `browser_run_code` with this Playwright script to extract tweets:

```javascript
async (page) => {
  const extractTweets = () => {
    const tweets = document.querySelectorAll('article[data-testid="tweet"]');
    return Array.from(tweets).slice(0, 20).map(t => {
      // Handle
      const nameEl = t.querySelector('[data-testid="User-Name"]');
      const handleLink = nameEl?.querySelector('a[href^="/"][tabindex="-1"]');
      const handle = handleLink?.getAttribute('href')?.replace('/', '') || null;
      const displayName = nameEl?.querySelector('a span')?.innerText || null;

      // Text
      const text = t.querySelector('[data-testid="tweetText"]')?.innerText || '';

      // Timestamp & link
      const timeEl = t.querySelector('time');
      const time = timeEl?.getAttribute('datetime') || null;
      const tweetLink = timeEl?.closest('a')?.getAttribute('href') || null;

      // Engagement: reply, retweet, like counts + views
      const replyCount = t.querySelector('[data-testid="reply"] span')?.innerText || '0';
      const retweetCount = t.querySelector('[data-testid="retweet"] span')?.innerText || '0';
      const likeCount = t.querySelector('[data-testid="like"] span')?.innerText || '0';
      // Views are in an aria-label on the analytics link
      const viewsEl = t.querySelector('a[href*="/analytics"]');
      const viewsLabel = viewsEl?.getAttribute('aria-label') || '';
      const viewsMatch = viewsLabel.match(/([\d,.]+[KMB]?)\s*view/i);
      const views = viewsMatch ? viewsMatch[1] : '0';

      return {
        handle: handle ? '@' + handle : null,
        display_name: displayName,
        text: text.substring(0, 200),
        posted_at: time,
        url: tweetLink ? 'https://x.com' + tweetLink : null,
        replies: replyCount,
        retweets: retweetCount,
        likes: likeCount,
        views: views
      };
    }).filter(t => t.text && t.handle);
  };

  // 1. Extract BEFORE scrolling (captures high-engagement first-screen tweets)
  const initial = await page.evaluate(extractTweets);

  // 2. Scroll down in steps to load more
  for (let i = 0; i < 3; i++) {
    await page.evaluate(() => window.scrollBy(0, 1500));
    await page.waitForTimeout(1500);
  }

  // 3. Extract again after scrolling
  const afterScroll = await page.evaluate(extractTweets);

  // 4. Deduplicate by URL
  const seen = new Set();
  const all = [];
  for (const t of [...initial, ...afterScroll]) {
    if (t.url && !seen.has(t.url)) {
      seen.add(t.url);
      all.push(t);
    }
  }
  return all;
}
```

> **Note**: X's DOM structure may change. If the selectors break, use `browser_snapshot` to read the accessibility tree and extract data manually, or use `browser_take_screenshot` to visually inspect the page and adjust selectors.

---

## Search Operators Quick Reference

Use these operators in the search URL `q=` parameter (URL-encoded). Combine freely.

### Noise Filters (always recommended)

| Operator | Effect |
|----------|--------|
| `-filter:replies` | Exclude replies — show only original posts |
| `-filter:nativeretweets` | Exclude retweets — show only original content |
| `lang:en` | Limit to English (use `lang:zh` for Chinese, etc.) |

### Engagement Thresholds

| Operator | Effect |
|----------|--------|
| `min_faves:N` | Minimum N likes |
| `min_retweets:N` | Minimum N reposts |
| `min_replies:N` | Minimum N replies |

### Content Format Filters

| Operator | Effect |
|----------|--------|
| `-filter:media -filter:links` | Pure text only (best for hook/copy study) |
| `filter:images` | Posts with images |
| `filter:videos` | Posts with video |
| `-filter:links` | Exclude posts with outbound links |
| `filter:blue_verified` | Only X Premium (blue checkmark) accounts |

### Time Filters

| Operator | Effect |
|----------|--------|
| `within_time:24h` | Last 24 hours only |
| `within_time:7d` | Last 7 days |
| `since:YYYY-MM-DD` | After specific date |
| `until:YYYY-MM-DD` | Before specific date |

### Other Useful Operators

| Operator | Effect |
|----------|--------|
| `?` | Posts containing a question |
| `from:username` | Posts from a specific account |
| `to:username` | Replies to a specific account |
| `"exact phrase"` | Exact phrase match |
| `(A OR B)` | Either term (OR must be uppercase) |
| `-keyword` | Exclude posts containing keyword |
| `conversation_id:ID` | All posts in a thread (use tweet ID) |
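
Combining operators is just string concatenation plus URL encoding. A sketch (the helper name and example values are illustrative):

```javascript
// Build an x.com "Top" tab search URL from a topic plus operator strings.
// Illustrative helper; the operator values used with it are examples.
function buildSearchUrl(topic, operators) {
  const q = [topic, ...operators].join(" ");
  return `https://x.com/search?q=${encodeURIComponent(q)}&src=typed_query&f=top`;
}
```

For example, `buildSearchUrl("vibe coding", ["-filter:replies", "min_faves:50", "lang:en"])` yields a Round 2-style search URL.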

---

## Intelligence Gathering Rounds

Run 3 core rounds + optional bonus rounds. All queries include `-filter:replies -filter:nativeretweets` by default to eliminate noise and surface only original content.

### Core Rounds

#### Round 1: Top / Viral Posts

**Query**: `[topic] -filter:replies -filter:nativeretweets lang:en`
**URL**: `https://x.com/search?q=[URL-encoded query]&src=typed_query&f=top`

X's "Top" tab — algorithmically ranked popular tweets. Extract 10-15 tweets.

**Focus**: Who posted, what angle, engagement numbers, the hook (first 1-2 sentences).

#### Round 2: Trending Posts (Last 24h)

**Query**: `[topic] -filter:replies -filter:nativeretweets min_faves:50 since:[YESTERDAY] lang:en`
**URL**: `https://x.com/search?q=[URL-encoded query]&src=typed_query&f=top`

The hottest posts from the last 24 hours. Use `since:YYYY-MM-DD` (yesterday's date) combined with `min_faves:50` and the Top tab sort — this returns only posts with proven engagement, ranked by X's algorithm. Extract 10-15 tweets.

**Focus**: Fresh angles, emerging debates, which posts are accelerating right now.

> Adjust `min_faves:` threshold: use `min_faves:10` for niche topics, `min_faves:50` for mainstream. Do NOT use the "Latest" tab for this round — it returns zero-engagement noise.
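
Computing `[YESTERDAY]` for the `since:` operator is a one-liner; a sketch using UTC (the helper name is mine):

```javascript
// Yesterday's date in YYYY-MM-DD (UTC), for Round 2's `since:` operator.
// Illustrative helper -- any equivalent date math works.
function yesterdayISO(now = new Date()) {
  const d = new Date(now.getTime() - 24 * 60 * 60 * 1000); // minus one day
  return d.toISOString().slice(0, 10);                     // keep YYYY-MM-DD
}
```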

#### Round 3: KOL / Verified High-Engagement Posts

**Query**: `[topic] filter:blue_verified min_faves:100 -filter:replies -filter:nativeretweets lang:en`
**URL**: `https://x.com/search?q=[URL-encoded query]&src=typed_query&f=top`

Only X Premium (blue checkmark) accounts with 100+ likes. Premium accounts get 10x more reach, so their content patterns are the most worth studying.

**Focus**: Who the KOLs are, what positions they hold, their framing style.

> Adjust threshold: `min_faves:50` for niche topics, `min_faves:500` for mainstream. If `filter:blue_verified` returns too few results, drop it and keep only `min_faves:`.

### Bonus Rounds (Optional — pick based on goal)

#### Round 4: Hook Study — Pure Text Viral Posts

**Query**: `[topic] -filter:media -filter:links -filter:replies -filter:nativeretweets min_faves:500 lang:en`
**URL**: `https://x.com/search?q=[URL-encoded query]&src=typed_query&f=top`

Pure text posts with 500+ likes — no images, no videos, no links. These succeed entirely on copy quality. Text posts have the highest average engagement rate (3.24%) on X.

**Focus**: Hook patterns (first line), sentence structure, use of numbers, storytelling format. Extract 10 tweets.

**When to use**: Before writing content, to study what copy patterns perform best for this topic.

#### Round 5: Conversation Starters — High-Reply Posts

**Query**: `[topic] min_replies:30 -filter:replies -filter:nativeretweets lang:en`
**URL**: `https://x.com/search?q=[URL-encoded query]&src=typed_query&f=top`

Posts that generated the most replies. In X's algorithm, a reply that gets an author reply back is weighted **75x** a like — making conversation-driving posts the highest-signal content.

**Focus**: What questions or takes drive discussion, common debate points, audience pain points. Extract 10 tweets.

**When to use**: To understand what the audience cares about and what angles spark debate.

---

## Data Extraction

For each tweet, extract the raw fields, then analyze them to fill in the remaining ones:

| Field | Type | Description |
|-------|------|-------------|
| `url` | string | `https://x.com/[user]/status/[id]` (unique key) |
| `text` | string | First 200 chars of tweet text |
| `intel_type` | enum | `viral_post` \| `trending_post` \| `kol_post` \| `hook_study` \| `conversation_starter` |
| `topic` | string | The search topic |
| `account` | string | `@handle` |
| `display_name` | string | Display name |
| `angle` | string | 1-sentence summary of the angle/framing (your analysis) |
| `format` | string | `thread` \| `single` \| `quote_tweet` \| `poll` |
| `likes` | number | Like count |
| `retweets` | number | Retweet count |
| `replies` | number | Reply count |
| `views` | string | View count (may include K/M suffix) |
| `posted_at` | string | ISO timestamp |
| `key_takeaway` | string | Why this is noteworthy for content creation (your analysis) |
| `collected` | string | Today's date (YYYY-MM-DD) |

---

## Output

All output goes to `./x-collect-data/` in the current working directory (created on first run).

### JSONL — `./x-collect-data/intel.jsonl`

One JSON object per line:

```json
{"url":"https://x.com/bcherny/status/123","text":"I'm Boris and I created Claude Code...","intel_type":"viral_post","topic":"Claude Code","account":"@bcherny","display_name":"Boris Cherny","angle":"Creator shares vanilla setup with 15+ parallel sessions","format":"thread","likes":5800,"retweets":1200,"replies":340,"views":"6.5M","posted_at":"2026-01-15T10:30:00Z","key_takeaway":"Behind-the-scenes from creator + actionable tips = mega engagement","collected":"2026-02-08"}
```

**Dedup:** Before appending, read existing JSONL and build a set of existing URLs. Skip duplicates.

### Markdown — `./x-collect-data/intel.md`

Re-render after each run. Group by `intel_type`, sort by likes descending.

Count each group's entries carefully — header count must exactly match row count.
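
Sorting by likes means first converting the abbreviated counts scraped from the DOM (`5.8K`, `1.2M`) into numbers. A sketch (the `parseCount` helper is illustrative):

```javascript
// Convert X's abbreviated display counts ("5.8K", "1.2M", "340") to numbers
// so rows can be sorted by likes descending. Illustrative helper.
function parseCount(s) {
  const m = String(s || "").replace(/,/g, "").match(/^([\d.]+)([KMB])?$/i);
  if (!m) return 0;
  const mult = { K: 1e3, M: 1e6, B: 1e9 }[(m[2] || "").toUpperCase()] || 1;
  return Math.round(parseFloat(m[1]) * mult);
}
```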

```markdown
# X Intelligence: [topic]

> Last updated: YYYY-MM-DD | Total: N tweets

## Top / Viral Posts (N)

| Account | Text (truncated) | Likes | RTs | Replies | Views | Format |
|---------|-----------------|-------|-----|---------|-------|--------|
| [@user](tweet_url) | First 80 chars... | 5.8K | 1.2K | 340 | 6.5M | thread |

## Trending Posts — 24h (N)

| Account | Text (truncated) | Likes | RTs | Replies | Views | Format |
|---------|-----------------|-------|-----|---------|-------|--------|
| [@user](tweet_url) | ... | ... | ... | ... | ... | ... |

## KOL / Verified Posts (N)

| Account | Text (truncated) | Likes | RTs | Replies | Views | Format |
|---------|-----------------|-------|-----|---------|-------|--------|
| [@user](tweet_url) | ... | ... | ... | ... | ... | ... |

## Hook Study — Pure Text (N) *(if collected)*

| Account | Text (truncated) | Likes | RTs | Replies | Views |
|---------|-----------------|-------|-----|---------|-------|
| [@user](tweet_url) | First 120 chars... | ... | ... | ... | ... |

## Conversation Starters (N) *(if collected)*

| Account | Text (truncated) | Likes | RTs | Replies | Views |
|---------|-----------------|-------|-----|---------|-------|
| [@user](tweet_url) | ... | ... | ... | ... | ... |

---

## Content Opportunity Summary

Based on the N tweets above:

1. **Hot angles**: [2-3 angles getting the most likes/RTs — cite top posts as: @handle — "first 60 chars of tweet text" ([link](tweet_url))]
2. **Content gaps**: [angles with engagement but few posts — underserved demand — cite examples as: @handle — "first 60 chars" ([link](tweet_url))]
3. **Recommended format**: [thread / single / hot-take based on what's performing — cite top-performing examples as: @handle — "first 60 chars" ([link](tweet_url))]
4. **Top KOLs**: [5 accounts with the most engagement — each as: @handle — "first 60 chars" ([link](tweet_url)) (likes)]
5. **Hook patterns**: [common first-line patterns — cite examples as: @handle — "first 60 chars" ([link](tweet_url))]
6. **Conversation drivers**: [questions/takes that generate the most replies — cite as: @handle — "first 60 chars" ([link](tweet_url))]
7. **Avoid**: [overdone angles with declining engagement]

> **Rule**: Every reference to a specific tweet MUST use the three-part format: `@handle — "tweet text excerpt" ([link](tweet_url))` — showing the username, the reference post content, and a clickable link separately.
```

---

## Output Message

After collection, display:

```
X Intelligence for "[topic]" — [N] tweets collected:
  Top / viral: X
  Trending (24h): Y
  KOL / verified: Z
  Hook study: A (if collected)
  Conversation starters: B (if collected)

Top post: [@handle] — [first 60 chars of text] (likes: N)

Data saved to:
  ./x-collect-data/intel.jsonl
  ./x-collect-data/intel.md
```
package/bin/install.js ADDED
@@ -0,0 +1,23 @@
#!/usr/bin/env node
"use strict";

const fs = require("fs");
const path = require("path");
const os = require("os");

const SKILL_NAME = "x-collect";
const SKILL_DIR = path.join(os.homedir(), ".claude", "skills", SKILL_NAME);
const SKILL_SRC = path.join(__dirname, "..", "SKILL.md");

// Ensure skill directory exists
fs.mkdirSync(SKILL_DIR, { recursive: true });

// Copy SKILL.md
fs.copyFileSync(SKILL_SRC, path.join(SKILL_DIR, "SKILL.md"));

console.log(`Installed ${SKILL_NAME} skill to ${SKILL_DIR}/SKILL.md`);
console.log("");
console.log("Usage: /x-collect [topic]");
console.log("");
console.log("Prerequisite: Playwright MCP must be configured in Claude Code.");
console.log("  claude mcp add --scope user playwright -- npx @playwright/mcp@latest");
package/package.json ADDED
@@ -0,0 +1,31 @@
{
  "name": "x-collect-skill",
  "version": "1.0.0",
  "description": "X/Twitter topic intelligence skill for Claude Code",
  "keywords": [
    "claude-code",
    "claude-skill",
    "agent-skill",
    "twitter",
    "x-com",
    "playwright"
  ],
  "homepage": "https://github.com/SamCuipogobongo/x-collect",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/SamCuipogobongo/x-collect.git"
  },
  "license": "MIT",
  "bin": {
    "x-collect-install": "bin/install.js"
  },
  "files": [
    "SKILL.md",
    "bin/",
    "LICENSE",
    "README.md"
  ],
  "scripts": {
    "postinstall": "node bin/install.js"
  }
}