firecrawl-mcp 1.9.0 → 1.11.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +0 -0
- package/README.md +319 -63
- package/dist/index.js +199 -14
- package/dist/index.test.js +0 -0
- package/dist/jest.setup.js +58 -0
- package/dist/src/index.js +1053 -0
- package/dist/src/index.test.js +225 -0
- package/package.json +1 -1
package/LICENSE
CHANGED
File without changes
package/README.md
CHANGED
@@ -2,23 +2,18 @@
 
 A Model Context Protocol (MCP) server implementation that integrates with [Firecrawl](https://github.com/mendableai/firecrawl) for web scraping capabilities.
 
-> Big thanks to [@vrknetha](https://github.com/vrknetha), [@cawstudios](https://caw.tech) for the initial implementation!
->
-> You can also play around with [our MCP Server on MCP.so's playground](https://mcp.so/playground?server=firecrawl-mcp-server) or on [Klavis AI](https://www.klavis.ai/mcp-servers). Thanks to MCP.so and Klavis AI for hosting and [@gstarwd](https://github.com/gstarwd) and [@xiangkaiz](https://github.com/xiangkaiz) for integrating our server.
+> Big thanks to [@vrknetha](https://github.com/vrknetha), [@knacklabs](https://www.knacklabs.ai) for the initial implementation!
 
 ## Features
 
--
--
--
--
--
-
-
--
-- Support for cloud and self-hosted Firecrawl instances
-- Mobile/Desktop viewport support
-- Smart content filtering with tag inclusion/exclusion
+- Web scraping, crawling, and discovery
+- Search and content extraction
+- Deep research and batch scraping
+- Automatic retries and rate limiting
+- Cloud and self-hosted support
+- SSE support
+
+> Play around with [our MCP Server on MCP.so's playground](https://mcp.so/playground?server=firecrawl-mcp-server) or on [Klavis AI](https://www.klavis.ai/mcp-servers).
 
 ## Installation
 
@@ -41,16 +36,6 @@ Note: Requires Cursor version 0.45.6+
 For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers:
 [Cursor MCP Server Configuration Guide](https://docs.cursor.com/context/model-context-protocol#configuring-mcp-servers)
 
-To configure Firecrawl MCP in Cursor **v0.45.6**
-
-1. Open Cursor Settings
-2. Go to Features > MCP Servers
-3. Click "+ Add New MCP Server"
-4. Enter the following:
-   - Name: "firecrawl-mcp" (or your preferred name)
-   - Type: "command"
-   - Command: `env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp`
-
 To configure Firecrawl MCP in Cursor **v0.48.6**
 
 1. Open Cursor Settings
@@ -70,6 +55,18 @@ To configure Firecrawl MCP in Cursor **v0.48.6**
   }
 }
 ```
+
+To configure Firecrawl MCP in Cursor **v0.45.6**
+
+1. Open Cursor Settings
+2. Go to Features > MCP Servers
+3. Click "+ Add New MCP Server"
+4. Enter the following:
+   - Name: "firecrawl-mcp" (or your preferred name)
+   - Type: "command"
+   - Command: `env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp`
+
+
 
 > If you are using Windows and are running into issues, try `cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"`
 
@@ -95,6 +92,16 @@ Add this to your `./codeium/windsurf/model_config.json`:
 }
 ```
 
+### Running with SSE Local Mode
+
+To run the server using Server-Sent Events (SSE) locally instead of the default stdio transport:
+
+```bash
+env SSE_LOCAL=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
+```
+
+Use the URL: http://localhost:3000/sse
+
 ### Installing via Smithery (Legacy)
 
 To install Firecrawl for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@mendableai/mcp-server-firecrawl):
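The new SSE section above only shows how to start the server. For completeness, a minimal sketch of a client connecting to that endpoint, using the MCP TypeScript SDK (`@modelcontextprotocol/sdk`); the import paths and call shapes here are assumptions based on that SDK, not code shipped in this package:

```typescript
// Connect an MCP client to the locally running SSE server
// (started with: env SSE_LOCAL=true FIRECRAWL_API_KEY=... npx -y firecrawl-mcp).
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const transport = new SSEClientTransport(new URL("http://localhost:3000/sse"));
const client = new Client({ name: "sse-demo", version: "0.0.1" }, { capabilities: {} });

await client.connect(transport);
console.log(await client.listTools()); // should list the firecrawl_* tools
await client.close();
```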
@@ -103,6 +110,62 @@ To install Firecrawl for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@mendableai/mcp-server-firecrawl):
 npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
 ```
 
+### Running on VS Code
+
+For one-click installation, click one of the install buttons below...
+
+[](https://insiders.vscode.dev/redirect/mcp/install?name=firecrawl&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22apiKey%22%2C%22description%22%3A%22Firecrawl%20API%20Key%22%2C%22password%22%3Atrue%7D%5D&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22firecrawl-mcp%22%5D%2C%22env%22%3A%7B%22FIRECRAWL_API_KEY%22%3A%22%24%7Binput%3AapiKey%7D%22%7D%7D) [](https://insiders.vscode.dev/redirect/mcp/install?name=firecrawl&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22apiKey%22%2C%22description%22%3A%22Firecrawl%20API%20Key%22%2C%22password%22%3Atrue%7D%5D&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22firecrawl-mcp%22%5D%2C%22env%22%3A%7B%22FIRECRAWL_API_KEY%22%3A%22%24%7Binput%3AapiKey%7D%22%7D%7D&quality=insiders)
+
+For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.
+
+```json
+{
+  "mcp": {
+    "inputs": [
+      {
+        "type": "promptString",
+        "id": "apiKey",
+        "description": "Firecrawl API Key",
+        "password": true
+      }
+    ],
+    "servers": {
+      "firecrawl": {
+        "command": "npx",
+        "args": ["-y", "firecrawl-mcp"],
+        "env": {
+          "FIRECRAWL_API_KEY": "${input:apiKey}"
+        }
+      }
+    }
+  }
+}
+```
+
+Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others:
+
+```json
+{
+  "inputs": [
+    {
+      "type": "promptString",
+      "id": "apiKey",
+      "description": "Firecrawl API Key",
+      "password": true
+    }
+  ],
+  "servers": {
+    "firecrawl": {
+      "command": "npx",
+      "args": ["-y", "firecrawl-mcp"],
+      "env": {
+        "FIRECRAWL_API_KEY": "${input:apiKey}"
+      }
+    }
+  }
+}
+```
+
 ## Configuration
 
 ### Environment Variables
@@ -236,12 +299,54 @@ The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:
 - Smart request queuing and throttling
 - Automatic retries for transient errors
 
+## How to Choose a Tool
+
+Use this guide to select the right tool for your task:
+
+- **If you know the exact URL(s) you want:**
+  - For one: use **scrape**
+  - For many: use **batch_scrape**
+- **If you need to discover URLs on a site:** use **map**
+- **If you want to search the web for info:** use **search**
+- **If you want to extract structured data:** use **extract**
+- **If you want to analyze a whole site or section:** use **crawl** (with limits!)
+- **If you want to do in-depth research:** use **deep_research**
+- **If you want to generate LLMs.txt:** use **generate_llmstxt**
+
+### Quick Reference Table
+
+| Tool             | Best for                            | Returns          |
+|------------------|-------------------------------------|------------------|
+| scrape           | Single page content                 | markdown/html    |
+| batch_scrape     | Multiple known URLs                 | markdown/html[]  |
+| map              | Discovering URLs on a site          | URL[]            |
+| crawl            | Multi-page extraction (with limits) | markdown/html[]  |
+| search           | Web search for info                 | results[]        |
+| extract          | Structured data from pages          | JSON             |
+| deep_research    | In-depth, multi-source research     | summary, sources |
+| generate_llmstxt | LLMs.txt for a domain               | text             |
+
 ## Available Tools
 
 ### 1. Scrape Tool (`firecrawl_scrape`)
 
 Scrape content from a single URL with advanced options.
 
+**Best for:**
+- Single page content extraction, when you know exactly which page contains the information.
+
+**Not recommended for:**
+- Extracting content from multiple pages (use batch_scrape for known URLs, or map + batch_scrape to discover URLs first, or crawl for full page content)
+- When you're unsure which page contains the information (use search)
+- When you need structured data (use extract)
+
+**Common mistakes:**
+- Using scrape for a list of URLs (use batch_scrape instead).
+
+**Prompt Example:**
+> "Get the content of the page at https://example.com."
+
+**Usage Example:**
 ```json
 {
   "name": "firecrawl_scrape",
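The usage examples in this README are JSON tool-call payloads. As a reference point for how a client actually sends one, a hedged end-to-end sketch using the MCP TypeScript SDK over stdio; the SDK usage is an assumption about a typical client, and the `formats` argument is an assumption based on Firecrawl's scrape options:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server the same way the editor configs above do.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "firecrawl-mcp"],
  env: { FIRECRAWL_API_KEY: "fc-YOUR_API_KEY" }, // placeholder key
});
const client = new Client({ name: "tool-demo", version: "0.0.1" }, { capabilities: {} });
await client.connect(transport);

// "If you know the exact URL you want: use scrape" (see the guide above).
const result = await client.callTool({
  name: "firecrawl_scrape",
  arguments: { url: "https://example.com", formats: ["markdown"] }, // formats is assumed
});
console.log(result.content);
```

The later snippets below reuse this connected `client`.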
@@ -259,10 +364,27 @@ Scrape content from a single URL with advanced options.
 }
 ```
 
+**Returns:**
+- Markdown, HTML, or other formats as specified.
+
 ### 2. Batch Scrape Tool (`firecrawl_batch_scrape`)
 
 Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
 
+**Best for:**
+- Retrieving content from multiple pages, when you know exactly which pages to scrape.
+
+**Not recommended for:**
+- Discovering URLs (use map first if you don't know the URLs)
+- Scraping a single page (use scrape)
+
+**Common mistakes:**
+- Using batch_scrape with too many URLs at once (may hit rate limits or token overflow)
+
+**Prompt Example:**
+> "Get the content of these three blog posts: [url1, url2, url3]."
+
+**Usage Example:**
 ```json
 {
   "name": "firecrawl_batch_scrape",
@@ -276,7 +398,8 @@ Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
 }
 ```
 
-
+**Returns:**
+- Response includes operation ID for status checking:
 
 ```json
 {
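Because batch_scrape is asynchronous, the operation ID in the response above has to be fed back into the batch status tool. A sketch of that polling loop, reusing the connected `client` from the earlier snippet; the tool name `firecrawl_check_batch_status` (assumed by analogy with `firecrawl_check_crawl_status` below) and the response text format are assumptions:

```typescript
// Queue a batch job, then poll its status until it stops processing.
const started = await client.callTool({
  name: "firecrawl_batch_scrape",
  arguments: { urls: ["https://example1.com", "https://example2.com"] },
});

// Assumption: the operation ID is embedded in the returned text content.
const text = (started.content as Array<{ type: string; text: string }>)[0].text;
const opId = /ID:\s*(\S+)/.exec(text)?.[1];

for (;;) {
  const status = await client.callTool({
    name: "firecrawl_check_batch_status", // tool name assumed, see lead-in
    arguments: { id: opId },
  });
  const body = (status.content as Array<{ type: string; text: string }>)[0].text;
  if (!/processing|queued/i.test(body)) {
    console.log(body); // completed (or failed) results
    break;
  }
  await new Promise((resolve) => setTimeout(resolve, 5000)); // wait between polls
}
```

The same loop shape applies to crawl jobs via `firecrawl_check_crawl_status` (tool 7 below).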
@@ -303,15 +426,58 @@ Check the status of a batch operation.
 }
 ```
 
-### 4. Search Tool (`firecrawl_search`)
+### 4. Map Tool (`firecrawl_map`)
+
+Map a website to discover all indexed URLs on the site.
+
+**Best for:**
+- Discovering URLs on a website before deciding what to scrape
+- Finding specific sections of a website
+
+**Not recommended for:**
+- When you already know which specific URL you need (use scrape or batch_scrape)
+- When you need the content of the pages (use scrape after mapping)
+
+**Common mistakes:**
+- Using crawl to discover URLs instead of map
+
+**Prompt Example:**
+> "List all URLs on example.com."
+
+**Usage Example:**
+```json
+{
+  "name": "firecrawl_map",
+  "arguments": {
+    "url": "https://example.com"
+  }
+}
+```
+
+**Returns:**
+- Array of URLs found on the site
+
+### 5. Search Tool (`firecrawl_search`)
 
 Search the web and optionally extract content from search results.
 
+**Best for:**
+- Finding specific information across multiple websites, when you don't know which website has the information.
+- When you need the most relevant content for a query
+
+**Not recommended for:**
+- When you already know which website to scrape (use scrape)
+- When you need comprehensive coverage of a single website (use map or crawl)
+
+**Common mistakes:**
+- Using crawl or map for open-ended questions (use search instead)
+
+**Usage Example:**
 ```json
 {
   "name": "firecrawl_search",
   "arguments": {
-    "query": "
+    "query": "latest AI research papers 2023",
     "limit": 5,
     "lang": "en",
     "country": "us",
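The map tool pairs naturally with batch_scrape; that combination is what the tool-choice guide above recommends when a full crawl would overflow token limits. A sketch using the same assumed client; the exact shape of map's result (newline-separated URLs in a text item) is an assumption:

```typescript
// Step 1: discover URLs with map.
const mapped = await client.callTool({
  name: "firecrawl_map",
  arguments: { url: "https://example.com" },
});

// Assumption: the discovered URLs arrive as newline-separated text.
const urls = (mapped.content as Array<{ type: string; text: string }>)[0].text
  .split("\n")
  .filter((u) => u.includes("/blog/")) // keep only the section you care about
  .slice(0, 10); // keep batches small to avoid rate limits and token overflow

// Step 2: scrape just those URLs.
await client.callTool({
  name: "firecrawl_batch_scrape",
  arguments: { urls },
});
```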
@@ -323,15 +489,39 @@ Search the web and optionally extract content from search results.
 }
 ```
 
-
+**Returns:**
+- Array of search results (with optional scraped content)
+
+**Prompt Example:**
+> "Find the latest research papers on AI published in 2023."
+
+### 6. Crawl Tool (`firecrawl_crawl`)
+
+Starts an asynchronous crawl job on a website and extracts content from all pages.
+
+**Best for:**
+- Extracting content from multiple related pages, when you need comprehensive coverage.
+
+**Not recommended for:**
+- Extracting content from a single page (use scrape)
+- When token limits are a concern (use map + batch_scrape)
+- When you need fast results (crawling can be slow)
+
+**Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
+
+**Common mistakes:**
+- Setting limit or maxDepth too high (causes token overflow)
+- Using crawl for a single page (use scrape instead)
 
-
+**Prompt Example:**
+> "Get all blog posts from the first two levels of example.com/blog."
 
+**Usage Example:**
 ```json
 {
   "name": "firecrawl_crawl",
   "arguments": {
-    "url": "https://example.com",
+    "url": "https://example.com/blog/*",
     "maxDepth": 2,
     "limit": 100,
     "allowExternalLinks": false,
@@ -340,10 +530,62 @@ Start an asynchronous crawl with advanced options.
 }
 ```
 
-
+**Returns:**
+- Response includes operation ID for status checking:
+
+```json
+{
+  "content": [
+    {
+      "type": "text",
+      "text": "Started crawl for: https://example.com/* with job ID: 550e8400-e29b-41d4-a716-446655440000. Use firecrawl_check_crawl_status to check progress."
+    }
+  ],
+  "isError": false
+}
+```
+
+### 7. Check Crawl Status (`firecrawl_check_crawl_status`)
+
+Check the status of a crawl job.
+
+```json
+{
+  "name": "firecrawl_check_crawl_status",
+  "arguments": {
+    "id": "550e8400-e29b-41d4-a716-446655440000"
+  }
+}
+```
+
+**Returns:**
+- Response includes the status of the crawl job:
+
+### 8. Extract Tool (`firecrawl_extract`)
 
 Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
 
+**Best for:**
+- Extracting specific structured data like prices, names, details.
+
+**Not recommended for:**
+- When you need the full content of a page (use scrape)
+- When you're not looking for specific structured data
+
+**Arguments:**
+- `urls`: Array of URLs to extract information from
+- `prompt`: Custom prompt for the LLM extraction
+- `systemPrompt`: System prompt to guide the LLM
+- `schema`: JSON schema for structured data extraction
+- `allowExternalLinks`: Allow extraction from external links
+- `enableWebSearch`: Enable web search for additional context
+- `includeSubdomains`: Include subdomains in extraction
+
+When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.
+**Prompt Example:**
+> "Extract the product name, price, and description from these product pages."
+
+**Usage Example:**
 ```json
 {
   "name": "firecrawl_extract",
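To make the extract arguments above concrete, a sketch of a schema-driven call (same assumed client as in the earlier snippets; the product schema is illustrative, echoing the prompt example):

```typescript
// Structured extraction: the JSON schema constrains what the LLM returns.
const extracted = await client.callTool({
  name: "firecrawl_extract",
  arguments: {
    urls: ["https://example.com/products/1"], // illustrative URL
    prompt: "Extract the product name, price, and description.",
    schema: {
      type: "object",
      properties: {
        name: { type: "string" },
        price: { type: "number" },
        description: { type: "string" },
      },
      required: ["name", "price"],
    },
  },
});
console.log(extracted.content); // data shaped by the schema above
```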
@@ -367,7 +609,8 @@ Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
 }
 ```
 
-
+**Returns:**
+- Extracted structured data as defined by your schema
 
 ```json
 {
@@ -385,27 +628,33 @@ Example response:
 }
 ```
 
-
+### 9. Deep Research Tool (`firecrawl_deep_research`)
 
-
-- `prompt`: Custom prompt for the LLM extraction
-- `systemPrompt`: System prompt to guide the LLM
-- `schema`: JSON schema for structured data extraction
-- `allowExternalLinks`: Allow extraction from external links
-- `enableWebSearch`: Enable web search for additional context
-- `includeSubdomains`: Include subdomains in extraction
+Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
 
-
+**Best for:**
+- Complex research questions requiring multiple sources, in-depth analysis.
 
-
+**Not recommended for:**
+- Simple questions that can be answered with a single search
+- When you need very specific information from a known page (use scrape)
+- When you need results quickly (deep research can take time)
 
-
+**Arguments:**
+- query (string, required): The research question or topic to explore.
+- maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
+- timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
+- maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).
 
+**Prompt Example:**
+> "Research the environmental impact of electric vehicles versus gasoline vehicles."
+
+**Usage Example:**
 ```json
 {
   "name": "firecrawl_deep_research",
   "arguments": {
-    "query": "
+    "query": "What are the environmental impacts of electric vehicles compared to gasoline vehicles?",
     "maxDepth": 3,
     "timeLimit": 120,
     "maxUrls": 50
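The deep-research arguments and their documented defaults, expressed as a call against the same assumed client; expect it to run for up to `timeLimit` seconds:

```typescript
// Deep research is long-running; keep the bounds modest.
const research = await client.callTool({
  name: "firecrawl_deep_research",
  arguments: {
    query: "What are the environmental impacts of electric vehicles compared to gasoline vehicles?",
    maxDepth: 3,    // default per the arguments list above
    timeLimit: 120, // seconds (default)
    maxUrls: 50,    // default
  },
});
// Assumption: the final analysis (data.finalAnalysis) is returned as text content.
console.log((research.content as Array<{ type: string; text: string }>)[0].text);
```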
@@ -413,22 +662,30 @@ Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
 }
 ```
 
-
+**Returns:**
+- Final analysis generated by an LLM based on research. (data.finalAnalysis)
+- May also include structured activities and sources used in the research process.
+
+### 10. Generate LLMs.txt Tool (`firecrawl_generate_llmstxt`)
 
-
-
-- timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
-- maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).
+Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact
+with the site.
 
-
+**Best for:**
+- Creating machine-readable permission guidelines for AI models.
 
-
--
+**Not recommended for:**
+- General content extraction or research
 
-
+**Arguments:**
+- url (string, required): The base URL of the website to analyze.
+- maxUrls (number, optional): Max number of URLs to include (default: 10).
+- showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.
 
-
+**Prompt Example:**
+> "Generate an LLMs.txt file for example.com."
 
+**Usage Example:**
 ```json
 {
   "name": "firecrawl_generate_llmstxt",
@@ -440,15 +697,8 @@ Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain.
 }
 ```
 
-
-
-- url (string, required): The base URL of the website to analyze.
-- maxUrls (number, optional): Max number of URLs to include (default: 10).
-- showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.
-
-Returns:
-
-- Generated llms.txt file contents and optionally the llms-full.txt (data.llmstxt and/or data.llmsfulltxt)
+**Returns:**
+- LLMs.txt file contents (and optionally llms-full.txt)
 
 ## Logging System
 
@@ -514,6 +764,12 @@ npm test
 3. Run tests: `npm test`
 4. Submit a pull request
 
+### Thanks to contributors
+
+Thanks to [@vrknetha](https://github.com/vrknetha), [@cawstudios](https://caw.tech) for the initial implementation!
+
+Thanks to MCP.so and Klavis AI for hosting and [@gstarwd](https://github.com/gstarwd), [@xiangkaiz](https://github.com/xiangkaiz) and [@zihaolin96](https://github.com/zihaolin96) for integrating our server.
+
 ## License
 
 MIT License - see LICENSE file for details