research-powerpack-mcp 3.0.3 → 3.0.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,29 +1,114 @@
1
- # Research Powerpack MCP
2
-
3
- **The ultimate research MCP toolkit** — Reddit mining, web search with CTR aggregation, AI-powered deep research, and intelligent web scraping, all in one modular package.
4
-
5
- [![npm version](https://img.shields.io/npm/v/research-powerpack-mcp.svg)](https://www.npmjs.com/package/research-powerpack-mcp)
6
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
1
+ <h1 align="center">🔬 Research Powerpack MCP 🔬</h1>
2
+ <h3 align="center">Stop tab-hopping for research. Start getting god-tier context.</h3>
3
+
4
+ <p align="center">
5
+ <strong>
6
+ <em>The ultimate research toolkit for your AI coding assistant. It searches the web, mines Reddit, scrapes any URL, and synthesizes everything into perfectly structured context your LLM actually understands.</em>
7
+ </strong>
8
+ </p>
9
+
10
+ <p align="center">
11
+ <!-- Package Info -->
12
+ <a href="https://www.npmjs.com/package/research-powerpack-mcp"><img alt="npm" src="https://img.shields.io/npm/v/research-powerpack-mcp.svg?style=flat-square&color=4D87E6"></a>
13
+ <a href="#"><img alt="node" src="https://img.shields.io/badge/node-18+-4D87E6.svg?style=flat-square"></a>
14
+ &nbsp;&nbsp;•&nbsp;&nbsp;
15
+ <!-- Features -->
16
+ <a href="https://opensource.org/licenses/MIT"><img alt="license" src="https://img.shields.io/badge/License-MIT-F9A825.svg?style=flat-square"></a>
17
+ <a href="#"><img alt="platform" src="https://img.shields.io/badge/platform-macOS_|_Linux_|_Windows-2ED573.svg?style=flat-square"></a>
18
+ </p>
19
+
20
+ <p align="center">
21
+ <img alt="modular" src="https://img.shields.io/badge/🧩_modular-use_1_tool_or_all_5-2ED573.svg?style=for-the-badge">
22
+ <img alt="zero crash" src="https://img.shields.io/badge/💪_zero_crash-missing_keys_=_helpful_errors-2ED573.svg?style=for-the-badge">
23
+ </p>
24
+
25
+ <div align="center">
26
+
27
+ ### 🧭 Quick Navigation
28
+
29
+ [**⚡ Get Started**](#-get-started-in-60-seconds) •
30
+ [**✨ Key Features**](#-feature-breakdown-the-secret-sauce) •
31
+ [**🎮 Usage & Examples**](#-tool-reference) •
32
+ [**⚙️ API Key Setup**](#-api-key-setup-guides) •
33
+ [**🆚 Why This Slaps**](#-why-this-slaps-other-methods)
34
+
35
+ </div>
7
36
 
8
37
  ---
9
38
 
10
- ## Why Research Powerpack?
11
-
12
- AI coding assistants are only as good as the context they have. This MCP server gives your AI **superpowers for research**:
39
+ **`research-powerpack-mcp`** is the research assistant your AI wishes it had. Stop asking your LLM to guess about things it doesn't know. This MCP server acts like a senior researcher, searching the web, mining Reddit discussions, scraping documentation, and synthesizing everything into perfectly structured context so your AI can actually give you answers worth a damn.
40
+
41
+ <div align="center">
42
+ <table>
43
+ <tr>
44
+ <td align="center">
45
+ <h3>🔍</h3>
46
+ <b>Batch Web Search</b><br/>
47
+ <sub>100 keywords in parallel</sub>
48
+ </td>
49
+ <td align="center">
50
+ <h3>💬</h3>
51
+ <b>Reddit Mining</b><br/>
52
+ <sub>Real opinions, not marketing</sub>
53
+ </td>
54
+ <td align="center">
55
+ <h3>🌐</h3>
56
+ <b>Universal Scraping</b><br/>
57
+ <sub>JS rendering + geo-targeting</sub>
58
+ </td>
59
+ <td align="center">
60
+ <h3>🧠</h3>
61
+ <b>Deep Research</b><br/>
62
+ <sub>AI synthesis with citations</sub>
63
+ </td>
64
+ </tr>
65
+ </table>
66
+ </div>
67
+
68
+ How it slaps:
69
+ - **You:** "What's the best database for my use case?"
70
+ - **AI + Powerpack:** Searches Google, mines Reddit threads, scrapes docs, synthesizes findings.
71
+ - **You:** Get an actually informed answer with real community opinions and citations.
72
+ - **Result:** Ship better decisions. Skip the 47 browser tabs.
13
73
 
14
- | Tool | What It Does | Real Value |
15
- |------|-------------|------------|
16
- | `web_search` | Batch Google search (up to 100 keywords) with CTR-weighted ranking | Find the most authoritative sources across multiple search angles simultaneously |
17
- | `search_reddit` | Google-powered Reddit search with advanced operators | Discover real user discussions, opinions, and experiences |
18
- | `get_reddit_post` | Fetch Reddit posts with smart comment allocation | Extract community wisdom with automatic comment budget distribution |
19
- | `scrape_links` | Universal URL scraping with automatic fallback | Get full content from any webpage with JS rendering and geo-targeting |
20
- | `deep_research` | AI-powered batch research with citations | Get comprehensive, evidence-based answers to multiple questions in parallel |
74
+ ---
21
75
 
22
- **Modular by design** — use just one tool or all five. Configure only the API keys you need.
76
+ ## 💥 Why This Slaps Other Methods
77
+
78
+ Manually researching is a vibe-killer. `research-powerpack-mcp` makes other methods look ancient.
79
+
80
+ <table align="center">
81
+ <tr>
82
+ <td align="center"><b>❌ The Old Way (Pain)</b></td>
83
+ <td align="center"><b>✅ The Powerpack Way (Glory)</b></td>
84
+ </tr>
85
+ <tr>
86
+ <td>
87
+ <ol>
88
+ <li>Open 15 browser tabs.</li>
89
+ <li>Skim Stack Overflow answers from 2019.</li>
90
+ <li>Search Reddit, get distracted by drama.</li>
91
+ <li>Copy-paste random snippets to your AI.</li>
92
+ <li>Get a mediocre answer from confused context.</li>
93
+ </ol>
94
+ </td>
95
+ <td>
96
+ <ol>
97
+ <li>Ask your AI to research it.</li>
98
+ <li>AI searches, scrapes, mines Reddit automatically.</li>
99
+ <li>Receive synthesized insights with sources.</li>
100
+ <li>Make an informed decision.</li>
101
+ <li>Go grab a coffee. ☕</li>
102
+ </ol>
103
+ </td>
104
+ </tr>
105
+ </table>
106
+
107
+ We're not just fetching random pages. We're building **high-signal, low-noise context** with CTR-weighted ranking, smart comment allocation, and intelligent token distribution that prevents massive responses from breaking your LLM's context window.
23
108
 
24
109
  ---
25
110
 
26
- ## Quick Start
111
+ ## 🚀 Get Started in 60 Seconds
27
112
 
28
113
  ### 1. Install
29
114
 
@@ -31,25 +116,23 @@ AI coding assistants are only as good as the context they have. This MCP server
31
116
  npm install research-powerpack-mcp
32
117
  ```
33
118
 
34
- ### 2. Configure (pick what you need)
119
+ ### 2. Configure Your MCP Client
35
120
 
36
- Copy `.env.example` to `.env` and add the API keys for the tools you want:
121
+ <div align="center">
37
122
 
38
- ```bash
39
- # Minimal (just web search) - FREE
40
- SERPER_API_KEY=your_serper_key
123
+ | Client | Config File | Docs |
124
+ |:------:|:-----------:|:----:|
125
+ | 🖥️ **Claude Desktop** | `claude_desktop_config.json` | [Setup](#claude-desktop) |
126
+ | ⌨️ **Claude Code** | `~/.claude.json` or CLI | [Setup](#claude-code-cli) |
127
+ | 🎯 **Cursor** | `.cursor/mcp.json` | [Setup](#cursorwindsurf) |
128
+ | 🏄 **Windsurf** | MCP settings | [Setup](#cursorwindsurf) |
41
129
 
42
- # Full power (all 5 tools)
43
- SERPER_API_KEY=your_serper_key
44
- REDDIT_CLIENT_ID=your_reddit_id
45
- REDDIT_CLIENT_SECRET=your_reddit_secret
46
- SCRAPEDO_API_KEY=your_scrapedo_key
47
- OPENROUTER_API_KEY=your_openrouter_key
48
- ```
130
+ </div>
49
131
 
50
- ### 3. Add to your MCP client
132
+ #### Claude Desktop
133
+
134
+ Add to your `claude_desktop_config.json`:
51
135
 
52
- **Claude Desktop** (`claude_desktop_config.json`):
53
136
  ```json
54
137
  {
55
138
  "mcpServers": {
@@ -68,7 +151,47 @@ OPENROUTER_API_KEY=your_openrouter_key
68
151
  }
69
152
  ```
70
153
 
71
- **Cursor/Windsurf** (`.cursor/mcp.json` or similar):
154
+ #### Claude Code (CLI)
155
+
156
+ One command to rule them all:
157
+
158
+ ```bash
159
+ claude mcp add research-powerpack \
160
+ --scope user \
161
+ --env SERPER_API_KEY=your_key \
162
+ --env REDDIT_CLIENT_ID=your_id \
163
+ --env REDDIT_CLIENT_SECRET=your_secret \
164
+ --env OPENROUTER_API_KEY=your_key \
165
+ --env OPENROUTER_BASE_URL=https://openrouter.ai/api/v1 \
166
+ --env RESEARCH_MODEL=x-ai/grok-4.1-fast \
167
+ -- npx research-powerpack-mcp
168
+ ```
169
+
170
+ Or manually add to `~/.claude.json`:
171
+
172
+ ```json
173
+ {
174
+ "mcpServers": {
175
+ "research-powerpack": {
176
+ "command": "npx",
177
+ "args": ["research-powerpack-mcp"],
178
+ "env": {
179
+ "SERPER_API_KEY": "your_key",
180
+ "REDDIT_CLIENT_ID": "your_id",
181
+ "REDDIT_CLIENT_SECRET": "your_secret",
182
+ "OPENROUTER_API_KEY": "your_key",
183
+ "OPENROUTER_BASE_URL": "https://openrouter.ai/api/v1",
184
+ "RESEARCH_MODEL": "x-ai/grok-4.1-fast"
185
+ }
186
+ }
187
+ }
188
+ }
189
+ ```
190
+
191
+ #### Cursor/Windsurf
192
+
193
+ Add to `.cursor/mcp.json` or equivalent:
194
+
72
195
  ```json
73
196
  {
74
197
  "mcpServers": {
@@ -83,20 +206,197 @@ OPENROUTER_API_KEY=your_openrouter_key
83
206
  }
84
207
  ```
85
208
 
209
+ > **✨ Zero Crash Promise:** Missing API keys? No problem. The server always starts. Tools just return helpful setup instructions instead of exploding.
210
+
211
+ ---
212
+
213
+ ## ✨ Feature Breakdown: The Secret Sauce
214
+
215
+ <div align="center">
216
+
217
+ | Feature | What It Does | Why You Care |
218
+ | :---: | :--- | :--- |
219
+ | **🔍 Batch Search**<br/>`100 keywords parallel` | Search Google for up to 100 queries simultaneously | Cover every angle of a topic in one shot |
218
+ | **📊 CTR Ranking**<br/>`Smart URL scoring` | Identifies URLs that appear across multiple searches | Surfaces high-consensus authoritative sources |
219
+ | **💬 Reddit Mining**<br/>`Real human opinions` | Google-powered Reddit search + native API fetching | Get actual user experiences, not marketing fluff |
220
+ | **🎯 Smart Allocation**<br/>`Token-aware budgets` | 1,000-comment budget distributed across posts | Deep dive on 2 posts or quick scan on 50 |
221
+ | **🌐 Universal Scraping**<br/>`Works on everything` | Auto-fallback: basic → JS render → geo-targeting | Handles SPAs, paywalls, and geo-restricted content |
222
+ | **🧠 Deep Research**<br/>`AI-powered synthesis` | Batch research with web search and citations | Get comprehensive answers to complex questions |
223
+ | **🧩 Modular Design**<br/>`Use what you need` | Each tool works independently | Pay only for the APIs you actually use |
226
+
227
+ </div>
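The CTR ranking row above boils down to a simple aggregation: every result URL earns a position-based weight per query, and the totals surface high-consensus sources. A minimal TypeScript sketch of that idea — the weight values and `rankUrls` name here are illustrative assumptions shaped like organic-search CTR curves, not the package's internals:

```typescript
// Placeholder position weights (position 1 is worth the most clicks).
const WEIGHTS: Record<number, number> = { 1: 100, 2: 55, 3: 35, 4: 25, 5: 20 };

function rankUrls(resultsPerQuery: string[][]): [string, number][] {
  const scores = new Map<string, number>();
  for (const results of resultsPerQuery) {
    results.forEach((url, i) => {
      const w = WEIGHTS[i + 1] ?? 10; // long-tail positions get a floor weight
      scores.set(url, (scores.get(url) ?? 0) + w);
    });
  }
  // Highest combined score first: URLs seen across many queries win.
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}
```

A URL that ranks #1 for two distinct queries outranks one that ranks #1 for a single query, which is the "consensus" effect the table describes.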
228
+
229
+ ---
230
+
231
+ ## 🎮 Tool Reference
232
+
233
+ <div align="center">
234
+ <table>
235
+ <tr>
236
+ <td align="center">
237
+ <h3>🔍</h3>
238
+ <b><code>web_search</code></b><br/>
239
+ <sub>Batch Google search</sub>
240
+ </td>
241
+ <td align="center">
242
+ <h3>💬</h3>
243
+ <b><code>search_reddit</code></b><br/>
244
+ <sub>Find Reddit discussions</sub>
245
+ </td>
246
+ <td align="center">
247
+ <h3>📖</h3>
248
+ <b><code>get_reddit_post</code></b><br/>
249
+ <sub>Fetch posts + comments</sub>
250
+ </td>
251
+ <td align="center">
252
+ <h3>🌐</h3>
253
+ <b><code>scrape_links</code></b><br/>
254
+ <sub>Extract any URL</sub>
255
+ </td>
256
+ <td align="center">
257
+ <h3>🧠</h3>
258
+ <b><code>deep_research</code></b><br/>
259
+ <sub>AI synthesis</sub>
260
+ </td>
261
+ </tr>
262
+ </table>
263
+ </div>
264
+
265
+ ### `web_search`
266
+
267
+ **Batch web search** using Google via Serper API. Search up to 100 keywords in parallel.
268
+
269
+ | Parameter | Type | Required | Description |
270
+ |-----------|------|----------|-------------|
271
+ | `keywords` | `string[]` | Yes | Search queries (1-100). Use distinct keywords for maximum coverage. |
272
+
273
+ **Supports Google operators:** `site:`, `-exclusion`, `"exact phrase"`, `filetype:`
274
+
275
+ ```json
276
+ {
277
+ "keywords": [
278
+ "best IDE 2025",
279
+ "VS Code alternatives",
280
+ "Cursor vs Windsurf comparison"
281
+ ]
282
+ }
283
+ ```
284
+
285
+ ---
286
+
287
+ ### `search_reddit`
288
+
289
+ **Search Reddit** via Google with automatic `site:reddit.com` filtering.
290
+
291
+ | Parameter | Type | Required | Description |
292
+ |-----------|------|----------|-------------|
293
+ | `queries` | `string[]` | Yes | Search queries (max 10) |
294
+ | `date_after` | `string` | No | Filter results after date (YYYY-MM-DD) |
295
+
296
+ **Search operators:** `intitle:keyword`, `"exact phrase"`, `OR`, `-exclude`
297
+
298
+ ```json
299
+ {
300
+ "queries": [
301
+ "best mechanical keyboard 2025",
302
+ "intitle:keyboard recommendation"
303
+ ],
304
+ "date_after": "2024-01-01"
305
+ }
306
+ ```
307
+
308
+ ---
309
+
310
+ ### `get_reddit_post`
311
+
312
+ **Fetch Reddit posts** with smart comment allocation (1,000 comment budget distributed automatically).
313
+
314
+ | Parameter | Type | Required | Default | Description |
315
+ |-----------|------|----------|---------|-------------|
316
+ | `urls` | `string[]` | Yes | — | Reddit post URLs (2-50) |
317
+ | `fetch_comments` | `boolean` | No | `true` | Whether to fetch comments |
318
+ | `max_comments` | `number` | No | auto | Override comment allocation |
319
+
320
+ **Smart Allocation:**
321
+ - 2 posts → ~500 comments/post (deep dive)
322
+ - 10 posts → ~100 comments/post
323
+ - 50 posts → ~20 comments/post (quick scan)
324
+
325
+ ```json
326
+ {
327
+ "urls": [
328
+ "https://reddit.com/r/programming/comments/abc123/post_title",
329
+ "https://reddit.com/r/webdev/comments/def456/another_post"
330
+ ]
331
+ }
332
+ ```
333
+
334
+ ---
335
+
336
+ ### `scrape_links`
337
+
338
+ **Universal URL content extraction** with automatic fallback modes.
339
+
340
+ | Parameter | Type | Required | Default | Description |
341
+ |-----------|------|----------|---------|-------------|
342
+ | `urls` | `string[]` | Yes | — | URLs to scrape (3-50) |
343
+ | `timeout` | `number` | No | `30` | Timeout per URL (seconds) |
344
+ | `use_llm` | `boolean` | No | `false` | Enable AI extraction |
345
+ | `what_to_extract` | `string` | No | — | Extraction instructions for AI |
346
+
347
+ **Automatic Fallback:** Basic → JS rendering → JS + US geo-targeting
348
+
349
+ ```json
350
+ {
351
+ "urls": ["https://example.com/article1", "https://example.com/article2"],
352
+ "use_llm": true,
353
+ "what_to_extract": "Extract the main arguments and key statistics"
354
+ }
355
+ ```
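The fallback ladder can be pictured as a loop over escalating modes: try the cheap fetch first and only pay for JS rendering or geo-targeting when a mode fails. A hypothetical sketch (`tryFetch` is a stand-in callback, not this package's API):

```typescript
type Mode = "basic" | "js" | "js+geo";
const MODES: Mode[] = ["basic", "js", "js+geo"];

function scrapeWithFallback(
  url: string,
  tryFetch: (url: string, mode: Mode) => string | null,
): { mode: Mode; content: string } | null {
  for (const mode of MODES) {
    const content = tryFetch(url, mode); // null signals this mode failed
    if (content !== null) return { mode, content };
  }
  return null; // every mode failed (e.g. the site blocks scrapers)
}
```

This ordering also explains the credit costs listed later: most URLs succeed in basic mode, so the expensive modes are only charged when actually needed.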
356
+
357
+ ---
358
+
359
+ ### `deep_research`
360
+
361
+ **AI-powered batch research** with web search and citations.
362
+
363
+ | Parameter | Type | Required | Description |
364
+ |-----------|------|----------|-------------|
365
+ | `questions` | `object[]` | Yes | Research questions (2-10) |
366
+ | `questions[].question` | `string` | Yes | The research question |
367
+ | `questions[].file_attachments` | `object[]` | No | Files to include as context |
368
+
369
+ **Token Allocation:** 32,000 tokens distributed across questions:
370
+ - 2 questions → 16,000 tokens/question (deep dive)
371
+ - 10 questions → 3,200 tokens/question (rapid multi-topic)
372
+
373
+ ```json
374
+ {
375
+ "questions": [
376
+ { "question": "What are the current best practices for React Server Components in 2025?" },
377
+ { "question": "Compare Bun vs Node.js for production workloads with benchmarks." }
378
+ ]
379
+ }
380
+ ```
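The token split mirrors the comment budget: a fixed 32,000-token pool divided across the batch. A rough sketch, assuming an even split:

```typescript
// Assumed behavior: fixed 32k answer budget shared across questions.
const TOKEN_BUDGET = 32_000;

function tokensPerQuestion(questionCount: number): number {
  return Math.floor(TOKEN_BUDGET / questionCount);
}
```

Batch fewer questions when you want depth; batch more when you want breadth.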
381
+
86
382
  ---
87
383
 
88
- ## Environment Variables & Tool Availability
384
+ ## ⚙️ Environment Variables & Tool Availability
89
385
 
90
386
  Research Powerpack uses a **modular architecture**. Tools are automatically enabled based on which API keys you provide:
91
387
 
388
+ <div align="center">
389
+
92
390
  | ENV Variable | Tools Enabled | Free Tier |
93
- |--------------|---------------|-----------|
94
- | `SERPER_API_KEY` | `web_search`, `search_reddit` | 2,500 queries |
95
- | `REDDIT_CLIENT_ID` + `REDDIT_CLIENT_SECRET` | `get_reddit_post` | Unlimited |
96
- | `SCRAPEDO_API_KEY` | `scrape_links` | 1,000 credits |
97
- | `OPENROUTER_API_KEY` | `deep_research` + AI extraction in `scrape_links` | Pay-as-you-go |
391
+ |:------------:|:-------------:|:---------:|
392
+ | `SERPER_API_KEY` | `web_search`, `search_reddit` | 2,500 queries/mo |
393
+ | `REDDIT_CLIENT_ID` + `SECRET` | `get_reddit_post` | Unlimited |
394
+ | `SCRAPEDO_API_KEY` | `scrape_links` | 1,000 credits/mo |
395
+ | `OPENROUTER_API_KEY` | `deep_research` + AI in `scrape_links` | Pay-as-you-go |
396
+ | `RESEARCH_MODEL` | Model for `deep_research` | Default: `perplexity/sonar-deep-research` |
397
+ | `LLM_EXTRACTION_MODEL` | Model for AI extraction in `scrape_links` | Default: `openrouter/gpt-oss-120b:nitro` |
98
398
 
99
- **No ENV = No crash.** The server always starts. If you call a tool without the required API key, you get a helpful error message with setup instructions.
399
+ </div>
100
400
 
101
401
  ### Configuration Examples
102
402
 
@@ -109,7 +409,7 @@ SERPER_API_KEY=xxx
109
409
  REDDIT_CLIENT_ID=xxx
110
410
  REDDIT_CLIENT_SECRET=xxx
111
411
 
112
- # Full research mode (all tools)
412
+ # Full research mode (all 5 tools)
113
413
  SERPER_API_KEY=xxx
114
414
  REDDIT_CLIENT_ID=xxx
115
415
  REDDIT_CLIENT_SECRET=xxx
@@ -119,13 +419,12 @@ OPENROUTER_API_KEY=xxx
119
419
 
120
420
  ---
121
421
 
122
- ## API Key Setup Guides
422
+ ## 🔑 API Key Setup Guides
123
423
 
124
424
  <details>
125
- <summary><b>๐Ÿ” Serper API (Google Search)</b></summary>
425
+ <summary><b>🔍 Serper API (Google Search) — FREE: 2,500 queries/month</b></summary>
126
426
 
127
427
  ### What you get
128
- - 2,500 free queries/month
129
428
  - Fast Google search results via API
130
429
  - Enables `web_search` and `search_reddit` tools
131
430
 
@@ -133,23 +432,23 @@ OPENROUTER_API_KEY=xxx
133
432
  1. Go to [serper.dev](https://serper.dev)
134
433
  2. Click **"Get API Key"** (top right)
135
434
  3. Sign up with email or Google
136
- 4. Your API key is displayed on the dashboard
137
- 5. Copy it to your `.env`:
435
+ 4. Copy your API key from the dashboard
436
+ 5. Add to your config:
138
437
  ```
139
438
  SERPER_API_KEY=your_key_here
140
439
  ```
141
440
 
142
441
  ### Pricing
143
442
  - **Free**: 2,500 queries/month
144
- - **Paid**: $50/month for 50,000 queries ($0.001/query)
443
+ - **Paid**: $50/month for 50,000 queries
145
444
 
146
445
  </details>
147
446
 
148
447
  <details>
149
- <summary><b>๐Ÿค– Reddit OAuth (Reddit API)</b></summary>
448
+ <summary><b>๐Ÿค– Reddit OAuth โ€” FREE: Unlimited access</b></summary>
150
449
 
151
450
  ### What you get
152
- - Unlimited Reddit API access
451
+ - Full Reddit API access
153
452
  - Fetch posts and comments with upvote sorting
154
453
  - Enables `get_reddit_post` tool
155
454
 
@@ -159,41 +458,33 @@ OPENROUTER_API_KEY=xxx
159
458
  3. Fill in:
160
459
  - **Name**: `research-powerpack` (or any name)
161
460
  - **App type**: Select **"script"** (important!)
162
- - **Description**: Optional
163
- - **About URL**: Leave blank
164
- - **Redirect URI**: `http://localhost:8080` (required but not used)
461
+ - **Redirect URI**: `http://localhost:8080`
165
462
  4. Click **"create app"**
166
463
  5. Copy your credentials:
167
- - **Client ID**: The string under your app name (e.g., `yuq_M0kWusHp2olglFBnpw`)
464
+ - **Client ID**: The string under your app name
168
465
  - **Client Secret**: The "secret" field
169
- 6. Add to your `.env`:
466
+ 6. Add to your config:
170
467
  ```
171
468
  REDDIT_CLIENT_ID=your_client_id
172
469
  REDDIT_CLIENT_SECRET=your_client_secret
173
470
  ```
174
471
 
175
- ### Tips
176
- - Script apps have the highest rate limits
177
- - No user authentication required
178
- - Works immediately after creation
179
-
180
472
  </details>
181
473
 
182
474
  <details>
183
- <summary><b>๐ŸŒ Scrape.do (Web Scraping)</b></summary>
475
+ <summary><b>🌐 Scrape.do (Web Scraping) — FREE: 1,000 credits/month</b></summary>
184
476
 
185
477
  ### What you get
186
- - 1,000 free scraping credits
187
478
  - JavaScript rendering support
188
479
  - Geo-targeting and CAPTCHA handling
189
480
  - Enables `scrape_links` tool
190
481
 
191
482
  ### Setup Steps
192
483
  1. Go to [scrape.do](https://scrape.do)
193
- 2. Click **"Start Free"** or **"Get Started"**
484
+ 2. Click **"Start Free"**
194
485
  3. Sign up with email
195
- 4. Your API key is on the dashboard
196
- 5. Add to your `.env`:
486
+ 4. Copy your API key from the dashboard
487
+ 5. Add to your config:
197
488
  ```
198
489
  SCRAPEDO_API_KEY=your_key_here
199
490
  ```
@@ -203,36 +494,32 @@ OPENROUTER_API_KEY=xxx
203
494
  - **JavaScript rendering**: 5 credits
204
495
  - **Geo-targeting**: +25 credits
205
496
 
206
- ### Pricing
207
- - **Free**: 1,000 credits (renews monthly)
208
- - **Starter**: $29/month for 100,000 credits
209
-
210
497
  </details>
211
498
 
212
499
  <details>
213
- <summary><b>๐Ÿง  OpenRouter (AI Models)</b></summary>
500
+ <summary><b>🧠 OpenRouter (AI Models) — Pay-as-you-go</b></summary>
214
501
 
215
502
  ### What you get
216
503
  - Access to 100+ AI models via one API
217
504
  - Enables `deep_research` tool
218
- - Enables AI extraction in `scrape_links` (`use_llm`, `what_to_extract`)
505
+ - Enables AI extraction in `scrape_links`
219
506
 
220
507
  ### Setup Steps
221
508
  1. Go to [openrouter.ai](https://openrouter.ai)
222
- 2. Click **"Sign In"** โ†’ Sign up with Google/GitHub/email
509
+ 2. Sign up with Google/GitHub/email
223
510
  3. Go to [openrouter.ai/keys](https://openrouter.ai/keys)
224
511
  4. Click **"Create Key"**
225
512
  5. Copy the key (starts with `sk-or-...`)
226
- 6. Add to your `.env`:
513
+ 6. Add to your config:
227
514
  ```
228
515
  OPENROUTER_API_KEY=sk-or-v1-xxxxx
229
516
  ```
230
517
 
231
- ### Recommended Models
232
- The default model is `perplexity/sonar-deep-research` (optimized for research with web search).
233
-
234
- Alternative models:
518
+ ### Recommended Models for Deep Research
235
519
  ```bash
520
+ # Default (optimized for research)
521
+ RESEARCH_MODEL=perplexity/sonar-deep-research
522
+
236
523
  # Fast and capable
237
524
  RESEARCH_MODEL=x-ai/grok-4.1-fast
238
525
 
@@ -243,207 +530,63 @@ RESEARCH_MODEL=anthropic/claude-3.5-sonnet
243
530
  RESEARCH_MODEL=openai/gpt-4o-mini
244
531
  ```
245
532
 
246
- ### Pricing
247
- - Pay-as-you-go (no subscription required)
248
- - Prices vary by model (~$0.001-$0.03 per 1K tokens)
249
- - `perplexity/sonar-deep-research`: ~$5 per 1M tokens
250
-
251
- </details>
252
-
253
- ---
254
-
255
- ## Tool Reference
256
-
257
- ### `web_search`
258
-
259
- **Batch web search** using Google via Serper API. Search up to 100 keywords in parallel.
260
-
261
- | Parameter | Type | Required | Description |
262
- |-----------|------|----------|-------------|
263
- | `keywords` | `string[]` | Yes | Search queries (1-100). Use distinct keywords for maximum coverage. |
264
-
265
- **Features:**
266
- - Google search operators: `site:`, `-exclusion`, `"exact phrase"`, `filetype:`
267
- - CTR-weighted ranking identifies high-consensus URLs
268
- - Related search suggestions per query
269
-
270
- **Example:**
271
- ```json
272
- {
273
- "keywords": [
274
- "best IDE 2025",
275
- "VS Code alternatives",
276
- "Cursor vs Windsurf comparison"
277
- ]
278
- }
279
- ```
280
-
281
- ---
282
-
283
- ### `search_reddit`
284
-
285
- **Search Reddit** via Google with automatic `site:reddit.com` filtering.
286
-
287
- | Parameter | Type | Required | Description |
288
- |-----------|------|----------|-------------|
289
- | `queries` | `string[]` | Yes | Search queries (max 10). Use distinct queries for multiple perspectives. |
290
- | `date_after` | `string` | No | Filter results after date (YYYY-MM-DD) |
291
-
292
- **Search Operators:**
293
- - `intitle:keyword` โ€” Match in post title
294
- - `"exact phrase"` โ€” Exact match
295
- - `OR` โ€” Match either term
296
- - `-exclude` โ€” Exclude term
297
-
298
- **Example:**
299
- ```json
300
- {
301
- "queries": [
302
- "best mechanical keyboard 2025",
303
- "intitle:keyboard recommendation",
304
- "\"keychron\" OR \"nuphy\" review"
305
- ],
306
- "date_after": "2024-01-01"
307
- }
308
- ```
309
-
310
- ---
311
-
312
- ### `get_reddit_post`
313
-
314
- **Fetch Reddit posts** with smart comment allocation (1,000 comment budget distributed automatically).
315
-
316
- | Parameter | Type | Required | Default | Description |
317
- |-----------|------|----------|---------|-------------|
318
- | `urls` | `string[]` | Yes | โ€” | Reddit post URLs (2-50) |
319
- | `fetch_comments` | `boolean` | No | `true` | Whether to fetch comments |
320
- | `max_comments` | `number` | No | auto | Override comment allocation |
321
-
322
- **Smart Allocation:**
323
- - 2 posts: ~500 comments/post (deep dive)
324
- - 10 posts: ~100 comments/post
325
- - 50 posts: ~20 comments/post (quick scan)
326
-
327
- **Example:**
328
- ```json
329
- {
330
- "urls": [
331
- "https://reddit.com/r/programming/comments/abc123/post_title",
332
- "https://reddit.com/r/webdev/comments/def456/another_post"
333
- ],
334
- "fetch_comments": true
335
- }
336
- ```
337
-
338
- ---
339
-
340
- ### `scrape_links`
341
-
342
- **Universal URL content extraction** with automatic fallback modes.
343
-
344
- | Parameter | Type | Required | Default | Description |
345
- |-----------|------|----------|---------|-------------|
346
- | `urls` | `string[]` | Yes | โ€” | URLs to scrape (3-50) |
347
- | `timeout` | `number` | No | `30` | Timeout per URL (seconds) |
348
- | `use_llm` | `boolean` | No | `false` | Enable AI extraction (requires `OPENROUTER_API_KEY`) |
349
- | `what_to_extract` | `string` | No | โ€” | Extraction instructions for AI |
350
-
351
- **Automatic Fallback:**
352
- 1. Basic mode (fast)
353
- 2. JavaScript rendering (for SPAs)
354
- 3. JavaScript + US geo-targeting (for restricted content)
533
+ ### Recommended Models for AI Extraction (`use_llm` in `scrape_links`)
534
+ ```bash
535
+ # Default (fast and cost-effective for extraction)
536
+ LLM_EXTRACTION_MODEL=openrouter/gpt-oss-120b:nitro
355
537
 
356
- **Token Allocation:** 32,000 tokens distributed across URLs:
357
- - 3 URLs: ~10,666 tokens/URL
358
- - 10 URLs: ~3,200 tokens/URL
359
- - 50 URLs: ~640 tokens/URL
538
+ # High quality extraction
539
+ LLM_EXTRACTION_MODEL=anthropic/claude-3.5-sonnet
360
540
 
361
- **Example:**
362
- ```json
363
- {
364
- "urls": [
365
- "https://example.com/article1",
366
- "https://example.com/article2",
367
- "https://example.com/article3"
368
- ],
369
- "use_llm": true,
370
- "what_to_extract": "Extract the main arguments, key statistics, and conclusions"
371
- }
541
+ # Budget-friendly
542
+ LLM_EXTRACTION_MODEL=openai/gpt-4o-mini
372
543
  ```
373
544
 
374
- ---
375
-
376
- ### `deep_research`
545
+ > **Note:** `RESEARCH_MODEL` and `LLM_EXTRACTION_MODEL` are independent. You can use a powerful model for deep research and a faster/cheaper model for content extraction, or vice versa.
377
546
 
378
- **AI-powered batch research** with web search and citations.
379
-
380
- | Parameter | Type | Required | Description |
381
- |-----------|------|----------|-------------|
382
- | `questions` | `object[]` | Yes | Research questions (2-10) |
383
- | `questions[].question` | `string` | Yes | The research question |
384
- | `questions[].file_attachments` | `object[]` | No | Files to include as context |
385
-
386
- **Token Allocation:** 32,000 tokens distributed across questions:
387
- - 2 questions: 16,000 tokens/question (deep dive)
388
- - 5 questions: 6,400 tokens/question (balanced)
389
- - 10 questions: 3,200 tokens/question (rapid multi-topic)
390
-
391
- **Example:**
392
- ```json
393
- {
394
- "questions": [
395
- {
396
- "question": "What are the current best practices for React Server Components in 2025? Include patterns for data fetching and caching."
397
- },
398
- {
399
- "question": "Compare the performance characteristics of Bun vs Node.js for production workloads. Include benchmarks and real-world case studies."
400
- }
401
- ]
402
- }
403
- ```
547
+ </details>
404
548
 
405
549
  ---
406
550
 
407
- ## Recommended Workflows
551
+ ## 🔥 Recommended Workflows
408
552
 
409
553
  ### Research a Technology Decision
410
554
 
411
555
  ```
412
- 1. web_search: ["React vs Vue 2025", "Next.js vs Nuxt comparison", "frontend framework benchmarks"]
413
- 2. search_reddit: ["best frontend framework 2025", "migrating from React to Vue", "Next.js production experience"]
414
- 3. get_reddit_post: [URLs from step 2]
415
- 4. scrape_links: [Documentation and blog URLs from step 1]
416
- 5. deep_research: [Synthesize findings into specific questions]
556
+ 1. web_search → ["React vs Vue 2025", "Next.js vs Nuxt comparison"]
557
+ 2. search_reddit → ["best frontend framework 2025", "Next.js production experience"]
558
+ 3. get_reddit_post → [URLs from step 2]
559
+ 4. scrape_links → [Documentation and blog URLs from step 1]
560
+ 5. deep_research → [Synthesize findings into specific questions]
417
561
  ```
418
562
 
419
563
  ### Competitive Analysis
420
564
 
421
565
  ```
422
- 1. web_search: ["competitor name review", "competitor vs alternatives", "competitor pricing"]
423
- 2. scrape_links: [Competitor websites, review sites, comparison pages]
424
- 3. search_reddit: ["competitor name experience", "switching from competitor"]
425
- 4. get_reddit_post: [URLs from step 3]
566
+ 1. web_search → ["competitor name review", "competitor vs alternatives"]
567
+ 2. scrape_links → [Competitor websites, review sites]
568
+ 3. search_reddit → ["competitor name experience", "switching from competitor"]
569
+ 4. get_reddit_post → [URLs from step 3]
426
570
  ```
427
571
 
428
572
  ### Debug an Obscure Error
429
573
 
430
574
  ```
431
- 1. web_search: ["exact error message", "error message + framework name"]
432
- 2. search_reddit: ["error message", "framework + error type"]
433
- 3. get_reddit_post: [URLs with solutions]
434
- 4. scrape_links: [Stack Overflow answers, GitHub issues]
575
+ 1. web_search → ["exact error message", "error + framework name"]
576
+ 2. search_reddit → ["error message", "framework + error type"]
577
+ 3. get_reddit_post → [URLs with solutions]
578
+ 4. scrape_links → [Stack Overflow answers, GitHub issues]
435
579
  ```
436
580
 
437
581
  ---
438
582
 
439
- ## Enable Full Power Mode
583
+ ## 🔥 Enable Full Power Mode
440
584
 
441
585
  For the best research experience, configure all four API keys:
442
586
 
443
587
  ```bash
444
- # .env
445
588
  SERPER_API_KEY=your_serper_key # Free: 2,500 queries/month
446
- REDDIT_CLIENT_ID=your_reddit_id # Free: Nearly unlimited
589
+ REDDIT_CLIENT_ID=your_reddit_id # Free: Unlimited
447
590
  REDDIT_CLIENT_SECRET=your_reddit_secret
448
591
  SCRAPEDO_API_KEY=your_scrapedo_key # Free: 1,000 credits/month
449
592
  OPENROUTER_API_KEY=your_openrouter_key # Pay-as-you-go
@@ -455,11 +598,11 @@ This unlocks:
455
598
  - **Deep research with web search** and citations
456
599
  - **Complete Reddit mining** (search โ†’ fetch โ†’ analyze)
457
600
 
458
- Total setup time: ~10 minutes. Total free tier value: ~$50/month equivalent.
601
+ **Total setup time:** ~10 minutes. **Total free tier value:** ~$50/month equivalent.
459
602
 
460
603
  ---
461
604
 
462
- ## Development
605
+ ## ๐Ÿ› ๏ธ Development
463
606
 
464
607
  ```bash
465
608
  # Clone
@@ -481,6 +624,27 @@ npm run typecheck
481
624
 
482
625
  ---
483
626
 
484
- ## License
627
+ ## 🔥 Common Issues & Quick Fixes
628
+
629
+ <details>
630
+ <summary><b>Expand for troubleshooting tips</b></summary>
631
+
632
+ | Problem | Solution |
633
+ | :--- | :--- |
634
+ | **Tool returns "API key not configured"** | Add the required ENV variable to your MCP config. The error message tells you exactly which key is missing. |
635
+ | **Reddit posts returning empty** | Check your `REDDIT_CLIENT_ID` and `REDDIT_CLIENT_SECRET`. Make sure you created a "script" type app. |
636
+ | **Scraping fails on JavaScript sites** | This is expected for first attempt. The tool auto-retries with JS rendering. If still failing, the site may be blocking scrapers. |
637
+ | **Deep research taking too long** | Use a faster model like `x-ai/grok-4.1-fast` instead of `perplexity/sonar-deep-research`. |
638
+ | **Token limit errors** | Reduce the number of URLs/questions per request. The tool distributes a fixed token budget. |
639
+
640
+ </details>
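The "token limit" row says a fixed token budget is split across the URLs/questions in one request. A minimal sketch of that idea; the budget size and the remainder handling here are assumptions, not the package's actual values:

```javascript
// Illustrative budget split: divide a fixed token budget evenly across items,
// giving any integer remainder to the first item so the total is preserved.
function splitTokenBudget(totalTokens, itemCount) {
  if (itemCount <= 0) return [];
  const perItem = Math.floor(totalTokens / itemCount);
  const budgets = new Array(itemCount).fill(perItem);
  budgets[0] += totalTokens - perItem * itemCount; // hand remainder to item 0
  return budgets;
}
```

This is why fewer URLs per request means more tokens (and more detail) per URL.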
+
+ ---
+
+ <div align="center">
+
+ **Built with 🔥 because manually researching for your AI is a soul-crushing waste of time.**
 
 MIT © [Yiğit Konur](https://github.com/yigitkonur)
+
+ </div>
@@ -54,6 +54,6 @@ export declare const REDDIT: {
 export declare const CTR_WEIGHTS: Record<number, number>;
 export declare const LLM_EXTRACTION: {
     readonly MODEL: string;
-    readonly MAX_TOKENS: 4000;
+    readonly MAX_TOKENS: 8000;
 };
 //# sourceMappingURL=index.d.ts.map
@@ -76,7 +76,7 @@ export const REDDIT = {
     RETRY_DELAYS: [2000, 4000, 8000, 16000, 32000],
 };
 // ============================================================================
- // CTR Weights for URL Ranking
+ // CTR Weights for URL Ranking (inspired by organic-search CTR data)
 // ============================================================================
 export const CTR_WEIGHTS = {
     1: 100.00,
@@ -94,7 +94,7 @@ export const CTR_WEIGHTS = {
 // LLM Extraction Model (uses OPENROUTER for scrape_links AI extraction)
 // ============================================================================
 export const LLM_EXTRACTION = {
-    MODEL: process.env.LLM_EXTRACTION_MODEL || 'anthropic/claude-3.5-sonnet',
-    MAX_TOKENS: 4000,
+    MODEL: process.env.LLM_EXTRACTION_MODEL || 'openrouter/gpt-oss-120b:nitro',
+    MAX_TOKENS: 8000,
 };
 //# sourceMappingURL=index.js.map
@@ -1 +1 @@
- {"version":3,"file":"index.js","sourceRoot":"","sources":["../../src/config/index.ts"],"names":[],"mappings":"AAAA;;;GAGG;AAaH,MAAM,UAAU,QAAQ;IACtB,OAAO;QACL,eAAe,EAAE,OAAO,CAAC,GAAG,CAAC,gBAAgB,IAAI,EAAE;QACnD,cAAc,EAAE,OAAO,CAAC,GAAG,CAAC,cAAc,IAAI,SAAS;QACvD,gBAAgB,EAAE,OAAO,CAAC,GAAG,CAAC,gBAAgB,IAAI,SAAS;QAC3D,oBAAoB,EAAE,OAAO,CAAC,GAAG,CAAC,oBAAoB,IAAI,SAAS;KACpE,CAAC;AACJ,CAAC;AAED,+EAA+E;AAC/E,6BAA6B;AAC7B,+EAA+E;AAE/E,MAAM,CAAC,MAAM,QAAQ,GAAG;IACtB,QAAQ,EAAE,OAAO,CAAC,GAAG,CAAC,mBAAmB,IAAI,8BAA8B;IAC3E,KAAK,EAAE,OAAO,CAAC,GAAG,CAAC,cAAc,IAAI,gCAAgC;IACrE,OAAO,EAAE,OAAO,CAAC,GAAG,CAAC,kBAAkB,IAAI,EAAE;IAC7C,UAAU,EAAE,QAAQ,CAAC,OAAO,CAAC,GAAG,CAAC,cAAc,IAAI,SAAS,EAAE,EAAE,CAAC;IACjE,gBAAgB,EAAG,OAAO,CAAC,GAAG,CAAC,wBAAsD,IAAI,MAAM;IAC/F,QAAQ,EAAE,QAAQ,CAAC,OAAO,CAAC,GAAG,CAAC,gBAAgB,IAAI,KAAK,EAAE,EAAE,CAAC;CACrD,CAAC;AAEX,+EAA+E;AAC/E,2BAA2B;AAC3B,+EAA+E;AAE/E,MAAM,CAAC,MAAM,MAAM,GAAG;IACpB,IAAI,EAAE,wBAAwB;IAC9B,OAAO,EAAE,OAAO;IAChB,WAAW,EAAE,6DAA6D;CAClE,CAAC;AAcX,MAAM,UAAU,eAAe;IAC7B,MAAM,GAAG,GAAG,QAAQ,EAAE,CAAC;IACvB,OAAO;QACL,MAAM,EAAE,CAAC,CAAC,CAAC,GAAG,CAAC,gBAAgB,IAAI,GAAG,CAAC,oBAAoB,CAAC;QAC5D,MAAM,EAAE,CAAC,CAAC,GAAG,CAAC,cAAc;QAC5B,QAAQ,EAAE,CAAC,CAAC,GAAG,CAAC,eAAe;QAC/B,YAAY,EAAE,CAAC,CAAC,QAAQ,CAAC,OAAO;QAChC,aAAa,EAAE,CAAC,CAAC,QAAQ,CAAC,OAAO,EAAE,uCAAuC;KAC3E,CAAC;AACJ,CAAC;AAED,MAAM,UAAU,oBAAoB,CAAC,UAA8B;IACjE,MAAM,QAAQ,GAAuC;QACnD,MAAM,EAAE,qLAAqL;QAC7L,MAAM,EAAE,gKAAgK;QACxK,QAAQ,EAAE,mJAAmJ;QAC7J,YAAY,EAAE,gJAAgJ;QAC9J,aAAa,EAAE,4KAA4K;KAC5L,CAAC;IACF,OAAO,QAAQ,CAAC,UAAU,CAAC,CAAC;AAC9B,CAAC;AAED,+EAA+E;AAC/E,mDAAmD;AACnD,+EAA+E;AAE/E,MAAM,CAAC,MAAM,OAAO,GAAG;IACrB,cAAc,EAAE,EAAE;IAClB,UAAU,EAAE,EAAE;IACd,iBAAiB,EAAE,KAAK;IACxB,QAAQ,EAAE,CAAC;IACX,QAAQ,EAAE,EAAE;IACZ,WAAW,EAAE,CAAC;IACd,YAAY,EAAE,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,CAAU;IACzC,iBAAiB,EAAE,sMAAsM;CACjN,CAAC;AAEX,+EAA+E;AAC/E,uBAAuB;AACvB,+EAA+E;AAE/E,MAAM,CAAC,MAAM,MAAM,GAAG;IACpB,cAAc,EAAE,EAAE;IAClB,UAAU,EAAE,EAAE;IACd,kBAAkB,EAAE,IAAI;IACxB,qBAAqB,EAAE,GAAG;IAC1B,SAAS,EA
AE,CAAC;IACZ,SAAS,EAAE,EAAE;IACb,WAAW,EAAE,CAAC;IACd,YAAY,EAAE,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,EAAE,KAAK,EAAE,KAAK,CAAU;CAC/C,CAAC;AAEX,+EAA+E;AAC/E,8BAA8B;AAC9B,+EAA+E;AAE/E,MAAM,CAAC,MAAM,WAAW,GAA2B;IACjD,CAAC,EAAE,MAAM;IACT,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,EAAE,EAAE,KAAK;CACD,CAAC;AAEX,+EAA+E;AAC/E,wEAAwE;AACxE,+EAA+E;AAE/E,MAAM,CAAC,MAAM,cAAc,GAAG;IAC5B,KAAK,EAAE,OAAO,CAAC,GAAG,CAAC,oBAAoB,IAAI,6BAA6B;IACxE,UAAU,EAAE,IAAI;CACR,CAAC"}
+ {"version":3,"file":"index.js","sourceRoot":"","sources":["../../src/config/index.ts"],"names":[],"mappings":"AAAA;;;GAGG;AAaH,MAAM,UAAU,QAAQ;IACtB,OAAO;QACL,eAAe,EAAE,OAAO,CAAC,GAAG,CAAC,gBAAgB,IAAI,EAAE;QACnD,cAAc,EAAE,OAAO,CAAC,GAAG,CAAC,cAAc,IAAI,SAAS;QACvD,gBAAgB,EAAE,OAAO,CAAC,GAAG,CAAC,gBAAgB,IAAI,SAAS;QAC3D,oBAAoB,EAAE,OAAO,CAAC,GAAG,CAAC,oBAAoB,IAAI,SAAS;KACpE,CAAC;AACJ,CAAC;AAED,+EAA+E;AAC/E,6BAA6B;AAC7B,+EAA+E;AAE/E,MAAM,CAAC,MAAM,QAAQ,GAAG;IACtB,QAAQ,EAAE,OAAO,CAAC,GAAG,CAAC,mBAAmB,IAAI,8BAA8B;IAC3E,KAAK,EAAE,OAAO,CAAC,GAAG,CAAC,cAAc,IAAI,gCAAgC;IACrE,OAAO,EAAE,OAAO,CAAC,GAAG,CAAC,kBAAkB,IAAI,EAAE;IAC7C,UAAU,EAAE,QAAQ,CAAC,OAAO,CAAC,GAAG,CAAC,cAAc,IAAI,SAAS,EAAE,EAAE,CAAC;IACjE,gBAAgB,EAAG,OAAO,CAAC,GAAG,CAAC,wBAAsD,IAAI,MAAM;IAC/F,QAAQ,EAAE,QAAQ,CAAC,OAAO,CAAC,GAAG,CAAC,gBAAgB,IAAI,KAAK,EAAE,EAAE,CAAC;CACrD,CAAC;AAEX,+EAA+E;AAC/E,2BAA2B;AAC3B,+EAA+E;AAE/E,MAAM,CAAC,MAAM,MAAM,GAAG;IACpB,IAAI,EAAE,wBAAwB;IAC9B,OAAO,EAAE,OAAO;IAChB,WAAW,EAAE,6DAA6D;CAClE,CAAC;AAcX,MAAM,UAAU,eAAe;IAC7B,MAAM,GAAG,GAAG,QAAQ,EAAE,CAAC;IACvB,OAAO;QACL,MAAM,EAAE,CAAC,CAAC,CAAC,GAAG,CAAC,gBAAgB,IAAI,GAAG,CAAC,oBAAoB,CAAC;QAC5D,MAAM,EAAE,CAAC,CAAC,GAAG,CAAC,cAAc;QAC5B,QAAQ,EAAE,CAAC,CAAC,GAAG,CAAC,eAAe;QAC/B,YAAY,EAAE,CAAC,CAAC,QAAQ,CAAC,OAAO;QAChC,aAAa,EAAE,CAAC,CAAC,QAAQ,CAAC,OAAO,EAAE,uCAAuC;KAC3E,CAAC;AACJ,CAAC;AAED,MAAM,UAAU,oBAAoB,CAAC,UAA8B;IACjE,MAAM,QAAQ,GAAuC;QACnD,MAAM,EAAE,qLAAqL;QAC7L,MAAM,EAAE,gKAAgK;QACxK,QAAQ,EAAE,mJAAmJ;QAC7J,YAAY,EAAE,gJAAgJ;QAC9J,aAAa,EAAE,4KAA4K;KAC5L,CAAC;IACF,OAAO,QAAQ,CAAC,UAAU,CAAC,CAAC;AAC9B,CAAC;AAED,+EAA+E;AAC/E,mDAAmD;AACnD,+EAA+E;AAE/E,MAAM,CAAC,MAAM,OAAO,GAAG;IACrB,cAAc,EAAE,EAAE;IAClB,UAAU,EAAE,EAAE;IACd,iBAAiB,EAAE,KAAK;IACxB,QAAQ,EAAE,CAAC;IACX,QAAQ,EAAE,EAAE;IACZ,WAAW,EAAE,CAAC;IACd,YAAY,EAAE,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,CAAU;IACzC,iBAAiB,EAAE,sMAAsM;CACjN,CAAC;AAEX,+EAA+E;AAC/E,uBAAuB;AACvB,+EAA+E;AAE/E,MAAM,CAAC,MAAM,MAAM,GAAG;IACpB,cAAc,EAAE,EAAE;IAClB,UAAU,EAAE,EAAE;IACd,kBAAkB,EAAE,IAAI;IACxB,qBAAqB,EAAE,GAAG;IAC1B,SAAS,EA
AE,CAAC;IACZ,SAAS,EAAE,EAAE;IACb,WAAW,EAAE,CAAC;IACd,YAAY,EAAE,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,EAAE,KAAK,EAAE,KAAK,CAAU;CAC/C,CAAC;AAEX,+EAA+E;AAC/E,uDAAuD;AACvD,+EAA+E;AAE/E,MAAM,CAAC,MAAM,WAAW,GAA2B;IACjD,CAAC,EAAE,MAAM;IACT,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,CAAC,EAAE,KAAK;IACR,EAAE,EAAE,KAAK;CACD,CAAC;AAEX,+EAA+E;AAC/E,wEAAwE;AACxE,+EAA+E;AAE/E,MAAM,CAAC,MAAM,cAAc,GAAG;IAC5B,KAAK,EAAE,OAAO,CAAC,GAAG,CAAC,oBAAoB,IAAI,+BAA+B;IAC1E,UAAU,EAAE,IAAI;CACR,CAAC"}
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "research-powerpack-mcp",
3
- "version": "3.0.3",
3
+ "version": "3.0.5",
4
4
  "description": "The ultimate research MCP toolkit: Reddit mining, web search with CTR aggregation, AI-powered deep research, and intelligent web scraping - all in one modular package",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",