x-collect-skill 1.0.2 → 2.1.0

package/README.md CHANGED
@@ -18,7 +18,7 @@
  
  English | [中文](./README.zh-CN.md)
  
- X/Twitter topic intelligence skill for Claude Code. Uses Playwright browser automation to search x.com directly, scraping real tweets with engagement metrics.
+ X/Twitter topic intelligence skill for Claude Code. Uses Actionbook browser automation to search x.com directly, scraping real tweets with engagement metrics.
  
  ## What it does
  
@@ -40,13 +40,18 @@ Output: JSONL + Markdown in `./x-collect-data/` with real handles, tweet text, l
  | Dependency | Purpose | Required |
  |------------|---------|----------|
  | [Claude Code](https://claude.ai/claude-code) | Runtime | Yes |
- | [Playwright MCP](https://github.com/microsoft/playwright-mcp) | Browser automation | Yes |
- | Logged-in X/Twitter session | Browser must have x.com logged in | Yes |
+ | [Actionbook CLI](https://actionbook.dev) | Browser automation (`actionbook` command) | Yes |
+ | Actionbook Chrome Extension | Controls user's existing Chrome browser | Yes |
+ | Logged-in X/Twitter session | Chrome must have x.com logged in | Yes |
  
- ### Install Playwright MCP
+ ### Setup Actionbook Extension
+ 
+ Follow the official guide: **[Actionbook Installation](https://actionbook.dev/docs/guides/installation)**
  
  ```bash
- claude mcp add --scope user playwright -- npx @playwright/mcp@latest
+ actionbook extension install   # install extension files
+ # Then load unpacked extension in Chrome (chrome://extensions)
+ actionbook extension serve     # start bridge (keep running)
  ```
  
  ## Install
@@ -130,14 +135,15 @@ Data saved to `./x-collect-data/`:
  
  ## How it works
  
- The skill uses Playwright MCP to:
- 1. Navigate to x.com search pages with Top tab and various filters
- 2. Wait for tweets to load
- 3. Extract tweet data via DOM queries or accessibility tree snapshot
- 4. Deduplicate by tweet URL
- 5. Output JSONL + Markdown with Content Opportunity Summary
+ The skill uses Actionbook Extension mode to:
+ 1. Control the user's Chrome browser (with existing x.com login)
+ 2. Navigate to x.com search pages with Top tab and various filters
+ 3. Wait for tweets to load
+ 4. Extract tweet data via JavaScript evaluation
+ 5. Deduplicate by tweet URL
+ 6. Output JSONL + Markdown with Content Opportunity Summary
  
- No third-party API needed — reads directly from x.com via Playwright browser automation.
+ No third-party API needed — reads directly from x.com via Actionbook browser automation.
  
  ## Community
  
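The README's pipeline ends with JSONL records in `./x-collect-data/` carrying `handle`, `text`, `likes`, and `url` fields. As a quick way to eyeball a collected dataset, here is a hypothetical post-processing sketch — the sample records and the quote-splitting `awk` are illustrative only, not part of the package, and they assume numeric likes (real output may carry `K`/`M` suffixes):

```shell
# Hypothetical sample mimicking the skill's JSONL output (field names from SKILL.md).
cat > /tmp/x-collect-sample.jsonl <<'EOF'
{"handle":"@alice","text":"post one","likes":"1200","url":"https://x.com/a/status/1"}
{"handle":"@bob","text":"post two","likes":"87","url":"https://x.com/b/status/2"}
EOF

# Split each line on double quotes, pick out the likes and handle values,
# and list posts by likes, highest first.
awk -F'"' '{
  for (i = 1; i < NF; i++) {
    if ($i == "likes")  likes = $(i + 2)
    if ($i == "handle") h = $(i + 2)
  }
  print likes, h
}' /tmp/x-collect-sample.jsonl | sort -rn
# prints "1200 @alice" then "87 @bob"
```

A real pass would use a JSON-aware tool instead of quote-splitting, but the shape of the check is the same.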
package/README.zh-CN.md CHANGED
@@ -18,7 +18,7 @@
  
  [English](./README.md) | 中文
  
- X/Twitter 话题情报采集工具,Claude Code 专用 Skill。通过 Playwright 浏览器自动化直接搜索 x.com,抓取真实推文和互动数据。
+ X/Twitter 话题情报采集工具,Claude Code 专用 Skill。通过 Actionbook 浏览器自动化直接搜索 x.com,抓取真实推文和互动数据。
  
  ## 功能
  
@@ -40,13 +40,18 @@ X/Twitter 话题情报采集工具,Claude Code 专用 Skill。通过 Playwrigh
  | 依赖 | 用途 | 必需 |
  |------|------|------|
  | [Claude Code](https://claude.ai/claude-code) | 运行环境 | 是 |
- | [Playwright MCP](https://github.com/microsoft/playwright-mcp) | 浏览器自动化 | 是 |
- | 已登录的 X/Twitter 会话 | 浏览器需已登录 x.com | 是 |
+ | [Actionbook CLI](https://actionbook.dev) | 浏览器自动化(`actionbook` 命令) | 是 |
+ | Actionbook Chrome 扩展 | 控制用户已有的 Chrome 浏览器 | 是 |
+ | 已登录的 X/Twitter 会话 | Chrome 需已登录 x.com | 是 |
  
- ### 安装 Playwright MCP
+ ### 安装 Actionbook 扩展
+ 
+ 请先按照官方指南完成安装:**[Actionbook 安装指南](https://actionbook.dev/docs/guides/installation)**
  
  ```bash
- claude mcp add --scope user playwright -- npx @playwright/mcp@latest
+ actionbook extension install   # 安装扩展文件
+ # 然后在 Chrome 中加载未打包扩展 (chrome://extensions)
+ actionbook extension serve     # 启动桥接(保持运行)
  ```
  
  ## 安装
@@ -130,14 +135,15 @@ cp x-collect/SKILL.md ~/.claude/skills/x-collect/
  
  ## 工作原理
  
- 通过 Playwright MCP 浏览器自动化:
- 1. 导航到 x.com 搜索页(Top 标签页 + 各种过滤条件)
- 2. 等待推文加载
- 3. 通过 DOM 查询或无障碍树快照提取推文数据
- 4. 按推文 URL 去重
- 5. 输出 JSONL + Markdown,附带内容机会分析
+ 通过 Actionbook Extension 模式浏览器自动化:
+ 1. 控制用户的 Chrome 浏览器(使用已有的 x.com 登录状态)
+ 2. 导航到 x.com 搜索页(Top 标签页 + 各种过滤条件)
+ 3. 等待推文加载
+ 4. 通过 JavaScript 执行提取推文数据
+ 5. 按推文 URL 去重
+ 6. 输出 JSONL + Markdown,附带内容机会分析
  
- 无需第三方 API,直接通过 Playwright 浏览器自动化读取 x.com。
+ 无需第三方 API,直接通过 Actionbook 浏览器自动化读取 x.com。
  
  ## 社区
  
package/SKILL.md CHANGED
@@ -1,8 +1,8 @@
  ---
  name: x-collect
  description: |
-   X/Twitter topic intelligence tool. Uses Playwright browser automation to search
-   x.com directly, scraping real tweets, engagement metrics, and KOL accounts.
+   X/Twitter topic intelligence tool. Two search modes: Grok (quick AI-powered search)
+   and Actionbook DOM scraping (detailed multi-round collection with search operators).
    Outputs a structured intel report (JSONL + Markdown) with content opportunity analysis.
    Use when the user wants to research a topic on X/Twitter, find what's performing,
    or scout trending discussions before creating content.
@@ -15,27 +15,18 @@ allowed-tools:
    - Glob
    - Grep
    - AskUserQuestion
- - mcp__playwright__browser_navigate
- - mcp__playwright__browser_snapshot
- - mcp__playwright__browser_take_screenshot
- - mcp__playwright__browser_run_code
- - mcp__playwright__browser_click
- - mcp__playwright__browser_type
- - mcp__playwright__browser_wait_for
- - mcp__playwright__browser_press_key
- - mcp__playwright__browser_tabs
  ---
  
  # X Topic Intelligence
  
- Research what's trending and performing on X/Twitter for any topic. Browse x.com directly via Playwright browser automation to find real tweets, engagement data, and KOL accounts.
+ Research what's trending and performing on X/Twitter for any topic. Two search backends are available; pick one based on task complexity.
  
  ---
  
  ## Usage
  
  ```
- /x-collect [topic]   # direct mode
+ /x-collect [topic]   # auto-select mode based on topic complexity
  /x-collect           # interactive mode - asks for topic
  ```
  
@@ -44,100 +35,213 @@ When no topic is provided, use `AskUserQuestion`:
  
  ---
  
+ ## Search Mode Selection
+ 
+ | | Grok Mode (quick) | Actionbook Scrape Mode (full) |
+ |---|---|---|
+ | **Speed** | ~60s single query | ~5-10 min multi-round |
+ | **Best for** | Quick overview, trend check, "what's hot" | Systematic collection, precise metrics, dataset building |
+ | **Data quality** | AI-summarized, may miss some posts | Raw DOM data, exact engagement numbers, includes views |
+ | **Search operators** | No (natural language only) | Yes (`min_faves:`, `filter:blue_verified`, date ranges, etc.) |
+ | **Output** | 5-15 posts per query | 10-15 posts per round, 3-5 rounds |
+ | **Views count** | Not available | Available |
+ 
+ ### Auto-selection rules
+ 
+ Use **Grok Mode** when:
+ - User asks a simple question ("what's trending about X?", "find popular posts about Y")
+ - Quick exploration / first look at a topic
+ - Single broad query is sufficient
+ - User explicitly says "quick" or "fast"
+ 
+ Use **Actionbook Scrape Mode** when:
+ - User wants systematic multi-round collection (viral + trending + KOL + hook study)
+ - Need precise engagement thresholds (`min_faves:100`, `filter:blue_verified`)
+ - Need views count
+ - Building a JSONL dataset for later analysis
+ - User explicitly says "detailed", "full", or "all rounds"
+ 
+ > When unsure, default to **Grok Mode** — it's faster and sufficient for most requests. Escalate to Actionbook Scrape Mode if Grok results are insufficient or the user asks for more.
+ 
+ ---
+ 
  ## Prerequisites
  
  | Dependency | Purpose | Required |
  |------------|---------|----------|
  | Claude Code | Runtime | Yes |
- | Playwright MCP | Browser automation (`npx @playwright/mcp@latest`) | Yes |
- | Logged-in X/Twitter session | Browser must have an active x.com session | Yes |
+ | Actionbook CLI | Browser automation (`actionbook` command) | Yes |
+ | Actionbook Extension | Chrome extension + bridge for controlling the user's browser | Yes |
+ | Logged-in X/Twitter session | User's Chrome must have an active x.com session | Yes |
+ | X Premium (for Grok) | Grok search requires a Premium subscription | Only for Grok Mode |
+ 
+ ### Pre-flight Check
+ 
+ Before starting any round, verify the extension bridge is running:
+ 
+ ```bash
+ actionbook extension ping
+ ```
+ 
+ If not connected, inform the user:
+ > "Actionbook extension bridge is not running. Please run `actionbook extension serve` in a terminal and ensure the Chrome extension is connected."
  
  ---
  
- ## Browser Automation Flow
+ ## Grok Mode (Quick Search)
  
- Use Playwright MCP tools to navigate x.com, perform searches, and extract tweet data.
+ Ask Grok on x.com to search X and return structured results. One query, ~60 seconds.
  
- ### General Steps (repeat for each search round)
+ ### Grok Workflow
  
- 1. **Navigate** — `browser_navigate` to the x.com search URL
- 2. **Wait** — `browser_wait_for` until tweets appear (wait for text like "Like" or a few seconds)
- 3. **Extract** — `browser_run_code` to run Playwright JS that scrapes tweet data from the DOM
- 4. **Scroll + Extract more** — if fewer than 10 tweets, scroll down and extract again
+ #### 1. Navigate to Grok
  
- ### Tweet Extraction Script
+ ```bash
+ actionbook --extension browser open "https://x.com/i/grok"
+ ```
  
- Use `browser_run_code` with this Playwright script to extract tweets:
- 
- ```javascript
- async (page) => {
-   const extractTweets = () => {
-     const tweets = document.querySelectorAll('article[data-testid="tweet"]');
-     return Array.from(tweets).slice(0, 20).map(t => {
-       // Handle
-       const nameEl = t.querySelector('[data-testid="User-Name"]');
-       const handleLink = nameEl?.querySelector('a[href^="/"][tabindex="-1"]');
-       const handle = handleLink?.getAttribute('href')?.replace('/', '') || null;
-       const displayName = nameEl?.querySelector('a span')?.innerText || null;
- 
-       // Text
-       const text = t.querySelector('[data-testid="tweetText"]')?.innerText || '';
- 
-       // Timestamp & link
-       const timeEl = t.querySelector('time');
-       const time = timeEl?.getAttribute('datetime') || null;
-       const tweetLink = timeEl?.closest('a')?.getAttribute('href') || null;
- 
-       // Engagement: reply, retweet, like counts + views
-       const replyCount = t.querySelector('[data-testid="reply"] span')?.innerText || '0';
-       const retweetCount = t.querySelector('[data-testid="retweet"] span')?.innerText || '0';
-       const likeCount = t.querySelector('[data-testid="like"] span')?.innerText || '0';
-       // Views are in an aria-label on the analytics link
-       const viewsEl = t.querySelector('a[href*="/analytics"]');
-       const viewsLabel = viewsEl?.getAttribute('aria-label') || '';
-       const viewsMatch = viewsLabel.match(/([\d,.]+[KMB]?)\s*view/i);
-       const views = viewsMatch ? viewsMatch[1] : '0';
- 
-       return {
-         handle: handle ? '@' + handle : null,
-         display_name: displayName,
-         text: text.substring(0, 200),
-         posted_at: time,
-         url: tweetLink ? 'https://x.com' + tweetLink : null,
-         replies: replyCount,
-         retweets: retweetCount,
-         likes: likeCount,
-         views: views
-       };
-     }).filter(t => t.text && t.handle);
-   };
- 
-   // 1. Extract BEFORE scrolling (captures high-engagement first-screen tweets)
-   const initial = await page.evaluate(extractTweets);
- 
-   // 2. Scroll down in steps to load more
-   for (let i = 0; i < 3; i++) {
-     await page.evaluate(() => window.scrollBy(0, 1500));
-     await page.waitForTimeout(1500);
-   }
+ Wait ~4 seconds for the page to load, then verify the textarea exists:
+ 
+ ```bash
+ actionbook --extension browser eval "document.querySelector('textarea') ? 'ready' : 'not_ready'"
+ ```
+ 
+ > Use `browser open` (new tab) instead of `browser goto` to avoid "Cannot access chrome:// URL" errors.
  
-   // 3. Extract again after scrolling
-   const afterScroll = await page.evaluate(extractTweets);
+ #### 2. Input the Question
+ 
+ Craft a Grok prompt that returns structured data. Template:
+ 
+ ```
+ Search X for the most popular posts about [TOPIC] in the past [TIME_RANGE]. Show me the top [N] posts with the most engagement (likes + retweets). For each post, include: the author handle, post text, likes count, retweets count, and the post URL.
+ ```
+ 
+ Set the value via JS (React state-safe):
+ 
+ ```bash
+ actionbook --extension browser eval "
+ var textarea = document.querySelector('textarea');
+ if (textarea) {
+   var setter = Object.getOwnPropertyDescriptor(window.HTMLTextAreaElement.prototype, 'value').set;
+   setter.call(textarea, 'YOUR GROK PROMPT HERE');
+   textarea.dispatchEvent(new Event('input', { bubbles: true }));
+   'ok'
+ } else { 'no_textarea' }
+ "
+ ```
  
-   // 4. Deduplicate by URL
-   const seen = new Set();
-   const all = [];
-   for (const t of [...initial, ...afterScroll]) {
-     if (t.url && !seen.has(t.url)) {
-       seen.add(t.url);
-       all.push(t);
+ #### 3. Send the Question
+ 
+ ```bash
+ actionbook --extension browser eval "
+ var btns = document.querySelectorAll('button');
+ var sent = false;
+ for (var i = 0; i < btns.length; i++) {
+   var svg = btns[i].querySelector('svg');
+   var rect = btns[i].getBoundingClientRect();
+   if (svg && rect.width > 20 && rect.width < 60 && rect.height > 20 && rect.height < 60 && rect.x > 600) {
+     btns[i].click();
+     sent = true;
+     break;
      }
    }
-   return all;
- }
+ sent ? 'sent' : 'not_found'
+ "
  ```
  
- > **Note**: X's DOM structure may change. If the selectors break, use `browser_snapshot` to read the accessibility tree and extract data manually, or use `browser_take_screenshot` to visually inspect the page and adjust selectors.
+ If `not_found`, take a screenshot (`actionbook --extension browser screenshot`) and debug visually.
+ 
+ #### 4. Wait for Response
+ 
+ Grok takes 10-60 seconds (longer when searching X). Poll every 8 seconds:
+ 
+ ```bash
+ actionbook --extension browser eval "document.querySelector('main') ? document.querySelector('main').innerText.length : 0"
+ ```
+ 
+ The response is ready when the length exceeds 500. Time out after 90 seconds and extract whatever is available.
+ 
+ #### 5. Extract Response
+ 
+ ```bash
+ actionbook --extension browser eval "document.querySelector('main').innerText"
+ ```
+ 
+ #### 6. Parse into JSONL Schema
+ 
+ Parse Grok's text response and map each post to the standard JSONL schema. For fields Grok doesn't provide:
+ - `views`: set to `"0"` (Grok doesn't return views)
+ - `replies`: set to `0` if not provided
+ - `posted_at`: set to `null` if not provided
+ - `angle` and `key_takeaway`: analyze and fill in yourself
+ - `format`: infer from text length and content (`thread` if it mentions thread/continues, else `single`)
+ 
+ ---
+ 
+ ## Actionbook Scrape Mode (Full Collection)
+ 
+ Use Actionbook Extension mode to control the user's Chrome browser, navigate x.com search, and extract tweet data directly from the DOM.
+ 
+ ### General Steps (repeat for each search round)
+ 
+ 1. **Navigate** — `actionbook --extension browser goto "<search-url>"` to the x.com search URL
+ 2. **Wait** — `sleep 3` then `actionbook --extension browser wait "article[data-testid='tweet']"` until tweets appear
+ 3. **Extract** — `actionbook --extension browser eval "<JS>"` to run the extraction script
+ 4. **Scroll + Extract more** — if fewer than 10 tweets, scroll and extract again
+ 
+ ### Tweet Extraction Script
+ 
+ Use `actionbook --extension browser eval` with this JS to extract tweets:
+ 
+ ```bash
+ actionbook --extension browser eval "
+ (function() {
+   var tweets = document.querySelectorAll('article[data-testid=\"tweet\"]');
+   return Array.from(tweets).slice(0, 20).map(function(t) {
+     var nameEl = t.querySelector('[data-testid=\"User-Name\"]');
+     var handleLink = nameEl ? nameEl.querySelector('a[href^=\"/\"][tabindex=\"-1\"]') : null;
+     var handle = handleLink ? handleLink.getAttribute('href').replace('/', '') : null;
+     var displayName = nameEl ? (nameEl.querySelector('a span') || {}).innerText || null : null;
+     var textEl = t.querySelector('[data-testid=\"tweetText\"]');
+     var text = textEl ? textEl.innerText : '';
+     var timeEl = t.querySelector('time');
+     var time = timeEl ? timeEl.getAttribute('datetime') : null;
+     var linkEl = timeEl ? timeEl.closest('a') : null;
+     var tweetLink = linkEl ? linkEl.getAttribute('href') : null;
+     var replyCount = (t.querySelector('[data-testid=\"reply\"] span') || {}).innerText || '0';
+     var retweetCount = (t.querySelector('[data-testid=\"retweet\"] span') || {}).innerText || '0';
+     var likeCount = (t.querySelector('[data-testid=\"like\"] span') || {}).innerText || '0';
+     var viewsEl = t.querySelector('a[href*=\"/analytics\"]');
+     var viewsLabel = viewsEl ? viewsEl.getAttribute('aria-label') || '' : '';
+     var viewsMatch = viewsLabel.match(/([\d,.]+[KMB]?)\s*view/i);
+     var views = viewsMatch ? viewsMatch[1] : '0';
+     return {
+       handle: handle ? '@' + handle : null,
+       display_name: displayName,
+       text: text.substring(0, 200),
+       posted_at: time,
+       url: tweetLink ? 'https://x.com' + tweetLink : null,
+       replies: replyCount, retweets: retweetCount, likes: likeCount, views: views
+     };
+   }).filter(function(t) { return t.text && t.handle; });
+ })()
+ "
+ ```
+ 
+ ### Scrolling to Load More Tweets
+ 
+ ```bash
+ # Scroll down 1500px, wait, repeat 3 times
+ actionbook --extension browser eval "window.scrollBy(0, 1500)"
+ sleep 2
+ actionbook --extension browser eval "window.scrollBy(0, 1500)"
+ sleep 2
+ actionbook --extension browser eval "window.scrollBy(0, 1500)"
+ sleep 2
+ # Then re-run the extraction script above and deduplicate by URL
+ ```
+ 
+ > **Note**: X's DOM structure may change. If selectors break, use `actionbook --extension browser snapshot` to read the accessibility tree, or `actionbook --extension browser screenshot` to visually inspect and adjust selectors.
  
  ---
  
@@ -194,10 +298,12 @@ Use these operators in the search URL `q=` parameter (URL-encoded). Combine free
  
  ---
  
- ## Intelligence Gathering Rounds
+ ## Intelligence Gathering Rounds (Actionbook Scrape Mode)
  
  Run 3 core rounds + optional bonus rounds. All queries include `-filter:replies -filter:nativeretweets` by default to eliminate noise and surface only original content.
  
+ > **Grok Mode alternative**: If using Grok Mode, skip these rounds entirely. Instead, craft 1-3 targeted Grok prompts covering the same ground (e.g., "top viral posts about [topic]", "trending [topic] posts in last 24h", "which verified accounts are posting about [topic]").
+ 
  ### Core Rounds
  
  #### Round 1: Top / Viral Posts
@@ -360,7 +466,7 @@ Based on the N tweets above:
  After collection, display:
  
  ```
- X Intelligence for "[topic]" — [N] tweets collected:
+ X Intelligence for "[topic]" — [N] tweets collected (via [Grok / Actionbook Scrape]):
  Top / viral: X
  Trending (24h): Y
  KOL / verified: Z
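Both modes in SKILL.md end with the same invariant: merge rounds, then deduplicate by tweet URL. The sketch below shows that step in isolation — the round files and records are hypothetical, and the quote-splitting `awk` is a stand-in for the in-browser seen-Set the skill describes:

```shell
# Two hypothetical rounds that share one tweet URL.
cat > /tmp/round1.jsonl <<'EOF'
{"url":"https://x.com/a/status/1","likes":"10"}
{"url":"https://x.com/b/status/2","likes":"5"}
EOF
cat > /tmp/round2.jsonl <<'EOF'
{"url":"https://x.com/a/status/1","likes":"12"}
{"url":"https://x.com/c/status/3","likes":"7"}
EOF

# Keep only the first record seen for each "url" value
# (mirrors the seen-Set dedup logic described above).
awk -F'"' '{
  for (i = 1; i < NF; i++) if ($i == "url") u = $(i + 2)
  if (!(u in seen)) { seen[u] = 1; print }
}' /tmp/round1.jsonl /tmp/round2.jsonl > /tmp/deduped.jsonl

wc -l < /tmp/deduped.jsonl   # 3 unique tweets survive the merge
```

Keeping the first occurrence means earlier rounds (e.g. Top/Viral) win over later ones when the same tweet reappears.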
package/bin/install.js CHANGED
@@ -19,5 +19,8 @@ console.log(`Installed ${SKILL_NAME} skill to ${SKILL_DIR}/SKILL.md`);
  console.log("");
  console.log("Usage: /x-collect [topic]");
  console.log("");
- console.log("Prerequisite: Playwright MCP must be configured in Claude Code.");
- console.log("  claude mcp add --scope user playwright -- npx @playwright/mcp@latest");
+ console.log("Prerequisite: Actionbook CLI and Chrome Extension must be installed.");
+ console.log("  1. Install Actionbook CLI: https://actionbook.dev");
+ console.log("  2. Run: actionbook extension install");
+ console.log("  3. Load unpacked extension in Chrome (chrome://extensions)");
+ console.log("  4. Start bridge: actionbook extension serve");
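The installer only prints these prerequisites; nothing verifies them at run time. A hedged pre-flight sketch — only `actionbook extension ping`/`serve`/`install` come from this package's own docs, while the shape of the check is illustrative:

```shell
# Hypothetical pre-flight check for the prerequisites the installer prints.
if command -v actionbook >/dev/null 2>&1; then
  echo "actionbook CLI found"
  # ping fails when the extension bridge is not running
  actionbook extension ping || echo "bridge down: run 'actionbook extension serve'"
else
  echo "actionbook CLI missing: see https://actionbook.dev, then run 'actionbook extension install'"
fi
```

Running this before `/x-collect` surfaces a missing CLI or stopped bridge up front instead of mid-collection.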
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "x-collect-skill",
-   "version": "1.0.2",
+   "version": "2.1.0",
    "description": "X/Twitter topic intelligence skill for Claude Code",
    "keywords": [
      "claude-code",
@@ -8,7 +8,7 @@
      "agent-skill",
      "twitter",
      "x-com",
-     "playwright"
+     "actionbook"
    ],
    "homepage": "https://github.com/SamCuipogobongo/x-collect",
    "repository": {