research-powerpack-mcp 3.0.3 → 3.0.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +400 -252
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,29 +1,114 @@
- # Research Powerpack MCP
-
- **The ultimate research MCP toolkit** — Reddit mining, web search with CTR aggregation, AI-powered deep research, and intelligent web scraping, all in one modular package.
-
- [![npm version](https://img.shields.io/npm/v/research-powerpack-mcp.svg)](https://www.npmjs.com/package/research-powerpack-mcp)
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ <h1 align="center">🔬 Research Powerpack MCP 🔬</h1>
+ <h3 align="center">Stop tab-hopping for research. Start getting god-tier context.</h3>
+
+ <p align="center">
+ <strong>
+ <em>The ultimate research toolkit for your AI coding assistant. It searches the web, mines Reddit, scrapes any URL, and synthesizes everything into perfectly structured context your LLM actually understands.</em>
+ </strong>
+ </p>
+
+ <p align="center">
+ <!-- Package Info -->
+ <a href="https://www.npmjs.com/package/research-powerpack-mcp"><img alt="npm" src="https://img.shields.io/npm/v/research-powerpack-mcp.svg?style=flat-square&color=4D87E6"></a>
+ <a href="#"><img alt="node" src="https://img.shields.io/badge/node-18+-4D87E6.svg?style=flat-square"></a>
+ &nbsp;&nbsp;•&nbsp;&nbsp;
+ <!-- Features -->
+ <a href="https://opensource.org/licenses/MIT"><img alt="license" src="https://img.shields.io/badge/License-MIT-F9A825.svg?style=flat-square"></a>
+ <a href="#"><img alt="platform" src="https://img.shields.io/badge/platform-macOS_|_Linux_|_Windows-2ED573.svg?style=flat-square"></a>
+ </p>
+
+ <p align="center">
+ <img alt="modular" src="https://img.shields.io/badge/🧩_modular-use_1_tool_or_all_5-2ED573.svg?style=for-the-badge">
+ <img alt="zero crash" src="https://img.shields.io/badge/💪_zero_crash-missing_keys_=_helpful_errors-2ED573.svg?style=for-the-badge">
+ </p>
+
+ <div align="center">
+
+ ### 🧭 Quick Navigation
+
+ [**⚡ Get Started**](#-get-started-in-60-seconds) •
+ [**✨ Key Features**](#-feature-breakdown-the-secret-sauce) •
+ [**🎮 Usage & Examples**](#-tool-reference) •
+ [**⚙️ API Key Setup**](#-api-key-setup-guides) •
+ [**🆚 Why This Slaps**](#-why-this-slaps-other-methods)
+
+ </div>

  ---

- ## Why Research Powerpack?
-
- AI coding assistants are only as good as the context they have. This MCP server gives your AI **superpowers for research**:
+ **`research-powerpack-mcp`** is the research assistant your AI wishes it had. Stop asking your LLM to guess about things it doesn't know. This MCP server acts like a senior researcher, searching the web, mining Reddit discussions, scraping documentation, and synthesizing everything into perfectly structured context so your AI can actually give you answers worth a damn.
+
+ <div align="center">
+ <table>
+ <tr>
+ <td align="center">
+ <h3>🔍</h3>
+ <b>Batch Web Search</b><br/>
+ <sub>100 keywords in parallel</sub>
+ </td>
+ <td align="center">
+ <h3>💬</h3>
+ <b>Reddit Mining</b><br/>
+ <sub>Real opinions, not marketing</sub>
+ </td>
+ <td align="center">
+ <h3>🌐</h3>
+ <b>Universal Scraping</b><br/>
+ <sub>JS rendering + geo-targeting</sub>
+ </td>
+ <td align="center">
+ <h3>🧠</h3>
+ <b>Deep Research</b><br/>
+ <sub>AI synthesis with citations</sub>
+ </td>
+ </tr>
+ </table>
+ </div>
+
+ How it slaps:
+ - **You:** "What's the best database for my use case?"
+ - **AI + Powerpack:** Searches Google, mines Reddit threads, scrapes docs, synthesizes findings.
+ - **You:** Get an actually informed answer with real community opinions and citations.
+ - **Result:** Ship better decisions. Skip the 47 browser tabs.

- | Tool | What It Does | Real Value |
- |------|-------------|------------|
- | `web_search` | Batch Google search (up to 100 keywords) with CTR-weighted ranking | Find the most authoritative sources across multiple search angles simultaneously |
- | `search_reddit` | Google-powered Reddit search with advanced operators | Discover real user discussions, opinions, and experiences |
- | `get_reddit_post` | Fetch Reddit posts with smart comment allocation | Extract community wisdom with automatic comment budget distribution |
- | `scrape_links` | Universal URL scraping with automatic fallback | Get full content from any webpage with JS rendering and geo-targeting |
- | `deep_research` | AI-powered batch research with citations | Get comprehensive, evidence-based answers to multiple questions in parallel |
+ ---

- **Modular by design** — use just one tool or all five. Configure only the API keys you need.
+ ## 💥 Why This Slaps Other Methods
+
+ Manually researching is a vibe-killer. `research-powerpack-mcp` makes other methods look ancient.
+
+ <table align="center">
+ <tr>
+ <td align="center"><b>❌ The Old Way (Pain)</b></td>
+ <td align="center"><b>✅ The Powerpack Way (Glory)</b></td>
+ </tr>
+ <tr>
+ <td>
+ <ol>
+ <li>Open 15 browser tabs.</li>
+ <li>Skim Stack Overflow answers from 2019.</li>
+ <li>Search Reddit, get distracted by drama.</li>
+ <li>Copy-paste random snippets to your AI.</li>
+ <li>Get a mediocre answer from confused context.</li>
+ </ol>
+ </td>
+ <td>
+ <ol>
+ <li>Ask your AI to research it.</li>
+ <li>AI searches, scrapes, mines Reddit automatically.</li>
+ <li>Receive synthesized insights with sources.</li>
+ <li>Make an informed decision.</li>
+ <li>Go grab a coffee. ☕</li>
+ </ol>
+ </td>
+ </tr>
+ </table>
+
+ We're not just fetching random pages. We're building **high-signal, low-noise context** with CTR-weighted ranking, smart comment allocation, and intelligent token distribution that prevents massive responses from breaking your LLM's context window.

  ---

- ## Quick Start
+ ## 🚀 Get Started in 60 Seconds

  ### 1. Install

@@ -31,25 +116,23 @@ AI coding assistants are only as good as the context they have. This MCP server
  npm install research-powerpack-mcp
  ```

- ### 2. Configure (pick what you need)
+ ### 2. Configure Your MCP Client

- Copy `.env.example` to `.env` and add the API keys for the tools you want:
+ <div align="center">

- ```bash
- # Minimal (just web search) - FREE
- SERPER_API_KEY=your_serper_key
+ | Client | Config File | Docs |
+ |:------:|:-----------:|:----:|
+ | 🖥️ **Claude Desktop** | `claude_desktop_config.json` | [Setup](#claude-desktop) |
+ | ⌨️ **Claude Code** | `~/.claude.json` or CLI | [Setup](#claude-code-cli) |
+ | 🎯 **Cursor** | `.cursor/mcp.json` | [Setup](#cursorwindsurf) |
+ | 🏄 **Windsurf** | MCP settings | [Setup](#cursorwindsurf) |

- # Full power (all 5 tools)
- SERPER_API_KEY=your_serper_key
- REDDIT_CLIENT_ID=your_reddit_id
- REDDIT_CLIENT_SECRET=your_reddit_secret
- SCRAPEDO_API_KEY=your_scrapedo_key
- OPENROUTER_API_KEY=your_openrouter_key
- ```
+ </div>

- ### 3. Add to your MCP client
+ #### Claude Desktop
+
+ Add to your `claude_desktop_config.json`:

- **Claude Desktop** (`claude_desktop_config.json`):
  ```json
  {
  "mcpServers": {
@@ -68,7 +151,47 @@ OPENROUTER_API_KEY=your_openrouter_key
  }
  ```

- **Cursor/Windsurf** (`.cursor/mcp.json` or similar):
+ #### Claude Code (CLI)
+
+ One command to rule them all:
+
+ ```bash
+ claude mcp add research-powerpack npx \
+ --scope user \
+ --env SERPER_API_KEY=your_key \
+ --env REDDIT_CLIENT_ID=your_id \
+ --env REDDIT_CLIENT_SECRET=your_secret \
+ --env OPENROUTER_API_KEY=your_key \
+ --env OPENROUTER_BASE_URL=https://openrouter.ai/api/v1 \
+ --env RESEARCH_MODEL=x-ai/grok-4.1-fast \
+ -- research-powerpack-mcp
+ ```
+
+ Or manually add to `~/.claude.json`:
+
+ ```json
+ {
+ "mcpServers": {
+ "research-powerpack": {
+ "command": "npx",
+ "args": ["research-powerpack-mcp"],
+ "env": {
+ "SERPER_API_KEY": "your_key",
+ "REDDIT_CLIENT_ID": "your_id",
+ "REDDIT_CLIENT_SECRET": "your_secret",
+ "OPENROUTER_API_KEY": "your_key",
+ "OPENROUTER_BASE_URL": "https://openrouter.ai/api/v1",
+ "RESEARCH_MODEL": "x-ai/grok-4.1-fast"
+ }
+ }
+ }
+ }
+ ```
+
+ #### Cursor/Windsurf
+
+ Add to `.cursor/mcp.json` or equivalent:
+
  ```json
  {
  "mcpServers": {
@@ -83,20 +206,195 @@ OPENROUTER_API_KEY=your_openrouter_key
  }
  ```

+ > **✨ Zero Crash Promise:** Missing API keys? No problem. The server always starts. Tools just return helpful setup instructions instead of exploding.
+
+ ---
+
+ ## ✨ Feature Breakdown: The Secret Sauce
+
+ <div align="center">
+
+ | Feature | What It Does | Why You Care |
+ | :---: | :--- | :--- |
+ | **🔍 Batch Search**<br/>`100 keywords parallel` | Search Google for up to 100 queries simultaneously | Cover every angle of a topic in one shot |
+ | **📊 CTR Ranking**<br/>`Smart URL scoring` | Identifies URLs that appear across multiple searches | Surfaces high-consensus authoritative sources |
+ | **💬 Reddit Mining**<br/>`Real human opinions` | Google-powered Reddit search + native API fetching | Get actual user experiences, not marketing fluff |
+ | **🎯 Smart Allocation**<br/>`Token-aware budgets` | 1,000 comment budget distributed across posts | Deep dive on 2 posts or quick scan on 50 |
+ | **🌐 Universal Scraping**<br/>`Works on everything` | Auto-fallback: basic → JS render → geo-targeting | Handles SPAs, paywalls, and geo-restricted content |
+ | **🧠 Deep Research**<br/>`AI-powered synthesis` | Batch research with web search and citations | Get comprehensive answers to complex questions |
+ | **🧩 Modular Design**<br/>`Use what you need` | Each tool works independently | Pay only for the APIs you actually use |
+
+ </div>
+
+ ---
+
+ ## 🎮 Tool Reference
+
+ <div align="center">
+ <table>
+ <tr>
+ <td align="center">
+ <h3>🔍</h3>
+ <b><code>web_search</code></b><br/>
+ <sub>Batch Google search</sub>
+ </td>
+ <td align="center">
+ <h3>💬</h3>
+ <b><code>search_reddit</code></b><br/>
+ <sub>Find Reddit discussions</sub>
+ </td>
+ <td align="center">
+ <h3>📖</h3>
+ <b><code>get_reddit_post</code></b><br/>
+ <sub>Fetch posts + comments</sub>
+ </td>
+ <td align="center">
+ <h3>🌐</h3>
+ <b><code>scrape_links</code></b><br/>
+ <sub>Extract any URL</sub>
+ </td>
+ <td align="center">
+ <h3>🧠</h3>
+ <b><code>deep_research</code></b><br/>
+ <sub>AI synthesis</sub>
+ </td>
+ </tr>
+ </table>
+ </div>
+
+ ### `web_search`
+
+ **Batch web search** using Google via Serper API. Search up to 100 keywords in parallel.
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `keywords` | `string[]` | Yes | Search queries (1-100). Use distinct keywords for maximum coverage. |
+
+ **Supports Google operators:** `site:`, `-exclusion`, `"exact phrase"`, `filetype:`
+
+ ```json
+ {
+ "keywords": [
+ "best IDE 2025",
+ "VS Code alternatives",
+ "Cursor vs Windsurf comparison"
+ ]
+ }
+ ```
+
+ ---
+
+ ### `search_reddit`
+
+ **Search Reddit** via Google with automatic `site:reddit.com` filtering.
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `queries` | `string[]` | Yes | Search queries (max 10) |
+ | `date_after` | `string` | No | Filter results after date (YYYY-MM-DD) |
+
+ **Search operators:** `intitle:keyword`, `"exact phrase"`, `OR`, `-exclude`
+
+ ```json
+ {
+ "queries": [
+ "best mechanical keyboard 2025",
+ "intitle:keyboard recommendation"
+ ],
+ "date_after": "2024-01-01"
+ }
+ ```
+
+ ---
+
+ ### `get_reddit_post`
+
+ **Fetch Reddit posts** with smart comment allocation (1,000 comment budget distributed automatically).
+
+ | Parameter | Type | Required | Default | Description |
+ |-----------|------|----------|---------|-------------|
+ | `urls` | `string[]` | Yes | — | Reddit post URLs (2-50) |
+ | `fetch_comments` | `boolean` | No | `true` | Whether to fetch comments |
+ | `max_comments` | `number` | No | auto | Override comment allocation |
+
+ **Smart Allocation:**
+ - 2 posts → ~500 comments/post (deep dive)
+ - 10 posts → ~100 comments/post
+ - 50 posts → ~20 comments/post (quick scan)
+
+ ```json
+ {
+ "urls": [
+ "https://reddit.com/r/programming/comments/abc123/post_title",
+ "https://reddit.com/r/webdev/comments/def456/another_post"
+ ]
+ }
+ ```
+
+ ---
+
+ ### `scrape_links`
+
+ **Universal URL content extraction** with automatic fallback modes.
+
+ | Parameter | Type | Required | Default | Description |
+ |-----------|------|----------|---------|-------------|
+ | `urls` | `string[]` | Yes | — | URLs to scrape (3-50) |
+ | `timeout` | `number` | No | `30` | Timeout per URL (seconds) |
+ | `use_llm` | `boolean` | No | `false` | Enable AI extraction |
+ | `what_to_extract` | `string` | No | — | Extraction instructions for AI |
+
+ **Automatic Fallback:** Basic → JS rendering → JS + US geo-targeting
+
+ ```json
+ {
+ "urls": ["https://example.com/article1", "https://example.com/article2"],
+ "use_llm": true,
+ "what_to_extract": "Extract the main arguments and key statistics"
+ }
+ ```
+
  ---

- ## Environment Variables & Tool Availability
+ ### `deep_research`
+
+ **AI-powered batch research** with web search and citations.
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `questions` | `object[]` | Yes | Research questions (2-10) |
+ | `questions[].question` | `string` | Yes | The research question |
+ | `questions[].file_attachments` | `object[]` | No | Files to include as context |
+
+ **Token Allocation:** 32,000 tokens distributed across questions:
+ - 2 questions → 16,000 tokens/question (deep dive)
+ - 10 questions → 3,200 tokens/question (rapid multi-topic)
+
+ ```json
+ {
+ "questions": [
+ { "question": "What are the current best practices for React Server Components in 2025?" },
+ { "question": "Compare Bun vs Node.js for production workloads with benchmarks." }
+ ]
+ }
+ ```
+
+ ---
+
+ ## ⚙️ Environment Variables & Tool Availability

  Research Powerpack uses a **modular architecture**. Tools are automatically enabled based on which API keys you provide:

+ <div align="center">
+
  | ENV Variable | Tools Enabled | Free Tier |
- |--------------|---------------|-----------|
- | `SERPER_API_KEY` | `web_search`, `search_reddit` | 2,500 queries |
- | `REDDIT_CLIENT_ID` + `REDDIT_CLIENT_SECRET` | `get_reddit_post` | Unlimited |
- | `SCRAPEDO_API_KEY` | `scrape_links` | 1,000 credits |
- | `OPENROUTER_API_KEY` | `deep_research` + AI extraction in `scrape_links` | Pay-as-you-go |
+ |:------------:|:-------------:|:---------:|
+ | `SERPER_API_KEY` | `web_search`, `search_reddit` | 2,500 queries/mo |
+ | `REDDIT_CLIENT_ID` + `SECRET` | `get_reddit_post` | Unlimited |
+ | `SCRAPEDO_API_KEY` | `scrape_links` | 1,000 credits/mo |
+ | `OPENROUTER_API_KEY` | `deep_research` + AI in `scrape_links` | Pay-as-you-go |

- **No ENV = No crash.** The server always starts. If you call a tool without the required API key, you get a helpful error message with setup instructions.
+ </div>

  ### Configuration Examples

@@ -109,7 +407,7 @@ SERPER_API_KEY=xxx
  REDDIT_CLIENT_ID=xxx
  REDDIT_CLIENT_SECRET=xxx

- # Full research mode (all tools)
+ # Full research mode (all 5 tools)
  SERPER_API_KEY=xxx
  REDDIT_CLIENT_ID=xxx
  REDDIT_CLIENT_SECRET=xxx
@@ -119,13 +417,12 @@ OPENROUTER_API_KEY=xxx

  ---

- ## API Key Setup Guides
+ ## 🔑 API Key Setup Guides

  <details>
- <summary><b>🔍 Serper API (Google Search)</b></summary>
+ <summary><b>🔍 Serper API (Google Search) — FREE: 2,500 queries/month</b></summary>

  ### What you get
- - 2,500 free queries/month
  - Fast Google search results via API
  - Enables `web_search` and `search_reddit` tools

@@ -133,23 +430,23 @@ OPENROUTER_API_KEY=xxx
  1. Go to [serper.dev](https://serper.dev)
  2. Click **"Get API Key"** (top right)
  3. Sign up with email or Google
- 4. Your API key is displayed on the dashboard
- 5. Copy it to your `.env`:
+ 4. Copy your API key from the dashboard
+ 5. Add to your config:
  ```
  SERPER_API_KEY=your_key_here
  ```

  ### Pricing
  - **Free**: 2,500 queries/month
- - **Paid**: $50/month for 50,000 queries ($0.001/query)
+ - **Paid**: $50/month for 50,000 queries

  </details>

  <details>
- <summary><b>🤖 Reddit OAuth (Reddit API)</b></summary>
+ <summary><b>🤖 Reddit OAuth — FREE: Unlimited access</b></summary>

  ### What you get
- - Unlimited Reddit API access
+ - Full Reddit API access
  - Fetch posts and comments with upvote sorting
  - Enables `get_reddit_post` tool

@@ -159,41 +456,33 @@ OPENROUTER_API_KEY=xxx
  3. Fill in:
  - **Name**: `research-powerpack` (or any name)
  - **App type**: Select **"script"** (important!)
- - **Description**: Optional
- - **About URL**: Leave blank
- - **Redirect URI**: `http://localhost:8080` (required but not used)
+ - **Redirect URI**: `http://localhost:8080`
  4. Click **"create app"**
  5. Copy your credentials:
- - **Client ID**: The string under your app name (e.g., `yuq_M0kWusHp2olglFBnpw`)
+ - **Client ID**: The string under your app name
  - **Client Secret**: The "secret" field
- 6. Add to your `.env`:
+ 6. Add to your config:
  ```
  REDDIT_CLIENT_ID=your_client_id
  REDDIT_CLIENT_SECRET=your_client_secret
  ```

- ### Tips
- - Script apps have the highest rate limits
- - No user authentication required
- - Works immediately after creation
-
  </details>

  <details>
- <summary><b>🌐 Scrape.do (Web Scraping)</b></summary>
+ <summary><b>🌐 Scrape.do (Web Scraping) — FREE: 1,000 credits/month</b></summary>

  ### What you get
- - 1,000 free scraping credits
  - JavaScript rendering support
  - Geo-targeting and CAPTCHA handling
  - Enables `scrape_links` tool

  ### Setup Steps
  1. Go to [scrape.do](https://scrape.do)
- 2. Click **"Start Free"** or **"Get Started"**
+ 2. Click **"Start Free"**
  3. Sign up with email
- 4. Your API key is on the dashboard
- 5. Add to your `.env`:
+ 4. Copy your API key from the dashboard
+ 5. Add to your config:
  ```
  SCRAPEDO_API_KEY=your_key_here
  ```
@@ -203,36 +492,32 @@ OPENROUTER_API_KEY=xxx
  - **JavaScript rendering**: 5 credits
  - **Geo-targeting**: +25 credits

- ### Pricing
- - **Free**: 1,000 credits (renews monthly)
- - **Starter**: $29/month for 100,000 credits
-
  </details>

  <details>
- <summary><b>🧠 OpenRouter (AI Models)</b></summary>
+ <summary><b>🧠 OpenRouter (AI Models) — Pay-as-you-go</b></summary>

  ### What you get
  - Access to 100+ AI models via one API
  - Enables `deep_research` tool
- - Enables AI extraction in `scrape_links` (`use_llm`, `what_to_extract`)
+ - Enables AI extraction in `scrape_links`

  ### Setup Steps
  1. Go to [openrouter.ai](https://openrouter.ai)
- 2. Click **"Sign In"** → Sign up with Google/GitHub/email
+ 2. Sign up with Google/GitHub/email
  3. Go to [openrouter.ai/keys](https://openrouter.ai/keys)
  4. Click **"Create Key"**
  5. Copy the key (starts with `sk-or-...`)
- 6. Add to your `.env`:
+ 6. Add to your config:
  ```
  OPENROUTER_API_KEY=sk-or-v1-xxxxx
  ```

  ### Recommended Models
- The default model is `perplexity/sonar-deep-research` (optimized for research with web search).
-
- Alternative models:
  ```bash
+ # Default (optimized for research)
+ RESEARCH_MODEL=perplexity/sonar-deep-research
+
  # Fast and capable
  RESEARCH_MODEL=x-ai/grok-4.1-fast

@@ -243,207 +528,49 @@ RESEARCH_MODEL=anthropic/claude-3.5-sonnet
  RESEARCH_MODEL=openai/gpt-4o-mini
  ```

- ### Pricing
- - Pay-as-you-go (no subscription required)
- - Prices vary by model (~$0.001-$0.03 per 1K tokens)
- - `perplexity/sonar-deep-research`: ~$5 per 1M tokens
-
  </details>

  ---

- ## Tool Reference
-
- ### `web_search`
-
- **Batch web search** using Google via Serper API. Search up to 100 keywords in parallel.
-
- | Parameter | Type | Required | Description |
- |-----------|------|----------|-------------|
- | `keywords` | `string[]` | Yes | Search queries (1-100). Use distinct keywords for maximum coverage. |
-
- **Features:**
- - Google search operators: `site:`, `-exclusion`, `"exact phrase"`, `filetype:`
- - CTR-weighted ranking identifies high-consensus URLs
- - Related search suggestions per query
-
- **Example:**
- ```json
- {
- "keywords": [
- "best IDE 2025",
- "VS Code alternatives",
- "Cursor vs Windsurf comparison"
- ]
- }
- ```
-
- ---
-
- ### `search_reddit`
-
- **Search Reddit** via Google with automatic `site:reddit.com` filtering.
-
- | Parameter | Type | Required | Description |
- |-----------|------|----------|-------------|
- | `queries` | `string[]` | Yes | Search queries (max 10). Use distinct queries for multiple perspectives. |
- | `date_after` | `string` | No | Filter results after date (YYYY-MM-DD) |
-
- **Search Operators:**
- - `intitle:keyword` — Match in post title
- - `"exact phrase"` — Exact match
- - `OR` — Match either term
- - `-exclude` — Exclude term
-
- **Example:**
- ```json
- {
- "queries": [
- "best mechanical keyboard 2025",
- "intitle:keyboard recommendation",
- "\"keychron\" OR \"nuphy\" review"
- ],
- "date_after": "2024-01-01"
- }
- ```
-
- ---
-
- ### `get_reddit_post`
-
- **Fetch Reddit posts** with smart comment allocation (1,000 comment budget distributed automatically).
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `urls` | `string[]` | Yes | — | Reddit post URLs (2-50) |
- | `fetch_comments` | `boolean` | No | `true` | Whether to fetch comments |
- | `max_comments` | `number` | No | auto | Override comment allocation |
-
- **Smart Allocation:**
- - 2 posts: ~500 comments/post (deep dive)
- - 10 posts: ~100 comments/post
- - 50 posts: ~20 comments/post (quick scan)
-
- **Example:**
- ```json
- {
- "urls": [
- "https://reddit.com/r/programming/comments/abc123/post_title",
- "https://reddit.com/r/webdev/comments/def456/another_post"
- ],
- "fetch_comments": true
- }
- ```
-
- ---
-
- ### `scrape_links`
-
- **Universal URL content extraction** with automatic fallback modes.
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `urls` | `string[]` | Yes | — | URLs to scrape (3-50) |
- | `timeout` | `number` | No | `30` | Timeout per URL (seconds) |
- | `use_llm` | `boolean` | No | `false` | Enable AI extraction (requires `OPENROUTER_API_KEY`) |
- | `what_to_extract` | `string` | No | — | Extraction instructions for AI |
-
- **Automatic Fallback:**
- 1. Basic mode (fast)
- 2. JavaScript rendering (for SPAs)
- 3. JavaScript + US geo-targeting (for restricted content)
-
- **Token Allocation:** 32,000 tokens distributed across URLs:
- - 3 URLs: ~10,666 tokens/URL
- - 10 URLs: ~3,200 tokens/URL
- - 50 URLs: ~640 tokens/URL
-
- **Example:**
- ```json
- {
- "urls": [
- "https://example.com/article1",
- "https://example.com/article2",
- "https://example.com/article3"
- ],
- "use_llm": true,
- "what_to_extract": "Extract the main arguments, key statistics, and conclusions"
- }
- ```
-
- ---
-
- ### `deep_research`
-
- **AI-powered batch research** with web search and citations.
-
- | Parameter | Type | Required | Description |
- |-----------|------|----------|-------------|
- | `questions` | `object[]` | Yes | Research questions (2-10) |
- | `questions[].question` | `string` | Yes | The research question |
- | `questions[].file_attachments` | `object[]` | No | Files to include as context |
-
- **Token Allocation:** 32,000 tokens distributed across questions:
- - 2 questions: 16,000 tokens/question (deep dive)
- - 5 questions: 6,400 tokens/question (balanced)
- - 10 questions: 3,200 tokens/question (rapid multi-topic)
-
- **Example:**
- ```json
- {
- "questions": [
- {
- "question": "What are the current best practices for React Server Components in 2025? Include patterns for data fetching and caching."
- },
- {
- "question": "Compare the performance characteristics of Bun vs Node.js for production workloads. Include benchmarks and real-world case studies."
- }
- ]
- }
- ```
-
- ---
-
- ## Recommended Workflows
+ ## 🔥 Recommended Workflows

  ### Research a Technology Decision

  ```
- 1. web_search: ["React vs Vue 2025", "Next.js vs Nuxt comparison", "frontend framework benchmarks"]
- 2. search_reddit: ["best frontend framework 2025", "migrating from React to Vue", "Next.js production experience"]
- 3. get_reddit_post: [URLs from step 2]
- 4. scrape_links: [Documentation and blog URLs from step 1]
- 5. deep_research: [Synthesize findings into specific questions]
+ 1. web_search → ["React vs Vue 2025", "Next.js vs Nuxt comparison"]
+ 2. search_reddit → ["best frontend framework 2025", "Next.js production experience"]
+ 3. get_reddit_post → [URLs from step 2]
+ 4. scrape_links → [Documentation and blog URLs from step 1]
+ 5. deep_research → [Synthesize findings into specific questions]
  ```

  ### Competitive Analysis

  ```
- 1. web_search: ["competitor name review", "competitor vs alternatives", "competitor pricing"]
- 2. scrape_links: [Competitor websites, review sites, comparison pages]
- 3. search_reddit: ["competitor name experience", "switching from competitor"]
- 4. get_reddit_post: [URLs from step 3]
+ 1. web_search → ["competitor name review", "competitor vs alternatives"]
+ 2. scrape_links → [Competitor websites, review sites]
+ 3. search_reddit → ["competitor name experience", "switching from competitor"]
+ 4. get_reddit_post → [URLs from step 3]
  ```

  ### Debug an Obscure Error

  ```
- 1. web_search: ["exact error message", "error message + framework name"]
- 2. search_reddit: ["error message", "framework + error type"]
- 3. get_reddit_post: [URLs with solutions]
- 4. scrape_links: [Stack Overflow answers, GitHub issues]
+ 1. web_search → ["exact error message", "error + framework name"]
+ 2. search_reddit → ["error message", "framework + error type"]
+ 3. get_reddit_post → [URLs with solutions]
+ 4. scrape_links → [Stack Overflow answers, GitHub issues]
  ```

  ---

- ## Enable Full Power Mode
+ ## 🔥 Enable Full Power Mode

  For the best research experience, configure all four API keys:

  ```bash
- # .env
  SERPER_API_KEY=your_serper_key # Free: 2,500 queries/month
- REDDIT_CLIENT_ID=your_reddit_id # Free: Nearly unlimited
+ REDDIT_CLIENT_ID=your_reddit_id # Free: Unlimited
  REDDIT_CLIENT_SECRET=your_reddit_secret
  SCRAPEDO_API_KEY=your_scrapedo_key # Free: 1,000 credits/month
  OPENROUTER_API_KEY=your_openrouter_key # Pay-as-you-go
@@ -455,11 +582,11 @@ This unlocks:
  - **Deep research with web search** and citations
  - **Complete Reddit mining** (search → fetch → analyze)

- Total setup time: ~10 minutes. Total free tier value: ~$50/month equivalent.
+ **Total setup time:** ~10 minutes. **Total free tier value:** ~$50/month equivalent.

  ---

- ## Development
+ ## 🛠️ Development

  ```bash
  # Clone
@@ -481,6 +608,27 @@ npm run typecheck

  ---

- ## License
+ ## 🔥 Common Issues & Quick Fixes
+
+ <details>
+ <summary><b>Expand for troubleshooting tips</b></summary>
+
+ | Problem | Solution |
+ | :--- | :--- |
+ | **Tool returns "API key not configured"** | Add the required ENV variable to your MCP config. The error message tells you exactly which key is missing. |
+ | **Reddit posts returning empty** | Check your `REDDIT_CLIENT_ID` and `REDDIT_CLIENT_SECRET`. Make sure you created a "script" type app. |
+ | **Scraping fails on JavaScript sites** | This is expected on the first attempt. The tool auto-retries with JS rendering. If it still fails, the site may be blocking scrapers. |
+ | **Deep research taking too long** | Use a faster model like `x-ai/grok-4.1-fast` instead of `perplexity/sonar-deep-research`. |
+ | **Token limit errors** | Reduce the number of URLs/questions per request. The tool distributes a fixed token budget. |
+
+ </details>
+
+ ---
+
+ <div align="center">
+
+ **Built with 🔥 because manually researching for your AI is a soul-crushing waste of time.**

  MIT © [Yiğit Konur](https://github.com/yigitkonur)
+
+ </div>
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "research-powerpack-mcp",
- "version": "3.0.3",
+ "version": "3.0.4",
  "description": "The ultimate research MCP toolkit: Reddit mining, web search with CTR aggregation, AI-powered deep research, and intelligent web scraping - all in one modular package",
  "type": "module",
  "main": "dist/index.js",