firecrawl-mcp 1.10.0 → 1.11.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE CHANGED
File without changes
package/README.md CHANGED
@@ -2,23 +2,18 @@
 
  A Model Context Protocol (MCP) server implementation that integrates with [Firecrawl](https://github.com/mendableai/firecrawl) for web scraping capabilities.
 
- > Big thanks to [@vrknetha](https://github.com/vrknetha), [@cawstudios](https://caw.tech) for the initial implementation!
- >
- > You can also play around with [our MCP Server on MCP.so's playground](https://mcp.so/playground?server=firecrawl-mcp-server) or on [Klavis AI](https://www.klavis.ai/mcp-servers). Thanks to MCP.so and Klavis AI for hosting and [@gstarwd](https://github.com/gstarwd) and [@xiangkaiz](https://github.com/xiangkaiz) for integrating our server.
+ > Big thanks to [@vrknetha](https://github.com/vrknetha), [@knacklabs](https://www.knacklabs.ai) for the initial implementation!
 
  ## Features
 
- - Scrape, crawl, search, extract, deep research and batch scrape support
- - Web scraping with JS rendering
- - URL discovery and crawling
- - Web search with content extraction
- - Automatic retries with exponential backoff
- - Efficient batch processing with built-in rate limiting
- - Credit usage monitoring for cloud API
- - Comprehensive logging system
- - Support for cloud and self-hosted Firecrawl instances
- - Mobile/Desktop viewport support
- - Smart content filtering with tag inclusion/exclusion
+ - Web scraping, crawling, and discovery
+ - Search and content extraction
+ - Deep research and batch scraping
+ - Automatic retries and rate limiting
+ - Cloud and self-hosted support
+ - SSE support
+
+ > Play around with [our MCP Server on MCP.so's playground](https://mcp.so/playground?server=firecrawl-mcp-server) or on [Klavis AI](https://www.klavis.ai/mcp-servers).
 
  ## Installation
 
@@ -41,16 +36,6 @@ Note: Requires Cursor version 0.45.6+
  For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers:
  [Cursor MCP Server Configuration Guide](https://docs.cursor.com/context/model-context-protocol#configuring-mcp-servers)
 
- To configure Firecrawl MCP in Cursor **v0.45.6**
-
- 1. Open Cursor Settings
- 2. Go to Features > MCP Servers
- 3. Click "+ Add New MCP Server"
- 4. Enter the following:
-    - Name: "firecrawl-mcp" (or your preferred name)
-    - Type: "command"
-    - Command: `env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp`
-
  To configure Firecrawl MCP in Cursor **v0.48.6**
 
  1. Open Cursor Settings
@@ -70,6 +55,18 @@ To configure Firecrawl MCP in Cursor **v0.48.6**
  }
  }
  ```
+
+ To configure Firecrawl MCP in Cursor **v0.45.6**
+
+ 1. Open Cursor Settings
+ 2. Go to Features > MCP Servers
+ 3. Click "+ Add New MCP Server"
+ 4. Enter the following:
+    - Name: "firecrawl-mcp" (or your preferred name)
+    - Type: "command"
+    - Command: `env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp`
+
 
  > If you are using Windows and are running into issues, try `cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"`
 
@@ -113,6 +110,62 @@ To install Firecrawl for Claude Desktop automatically via [Smithery](https://smi
  npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
  ```
 
+ ### Running on VS Code
+
+ For one-click installation, click one of the install buttons below...
+
+ [![Install with NPX in VS Code](https://img.shields.io/badge/VS_Code-NPM-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=firecrawl&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22apiKey%22%2C%22description%22%3A%22Firecrawl%20API%20Key%22%2C%22password%22%3Atrue%7D%5D&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22firecrawl-mcp%22%5D%2C%22env%22%3A%7B%22FIRECRAWL_API_KEY%22%3A%22%24%7Binput%3AapiKey%7D%22%7D%7D) [![Install with NPX in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-NPM-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=firecrawl&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22apiKey%22%2C%22description%22%3A%22Firecrawl%20API%20Key%22%2C%22password%22%3Atrue%7D%5D&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22firecrawl-mcp%22%5D%2C%22env%22%3A%7B%22FIRECRAWL_API_KEY%22%3A%22%24%7Binput%3AapiKey%7D%22%7D%7D&quality=insiders)
+
+ For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.
+
+ ```json
+ {
+   "mcp": {
+     "inputs": [
+       {
+         "type": "promptString",
+         "id": "apiKey",
+         "description": "Firecrawl API Key",
+         "password": true
+       }
+     ],
+     "servers": {
+       "firecrawl": {
+         "command": "npx",
+         "args": ["-y", "firecrawl-mcp"],
+         "env": {
+           "FIRECRAWL_API_KEY": "${input:apiKey}"
+         }
+       }
+     }
+   }
+ }
+ ```
+
+ Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others:
+
+ ```json
+ {
+   "inputs": [
+     {
+       "type": "promptString",
+       "id": "apiKey",
+       "description": "Firecrawl API Key",
+       "password": true
+     }
+   ],
+   "servers": {
+     "firecrawl": {
+       "command": "npx",
+       "args": ["-y", "firecrawl-mcp"],
+       "env": {
+         "FIRECRAWL_API_KEY": "${input:apiKey}"
+       }
+     }
+   }
+ }
+ ```
+
  ## Configuration
 
  ### Environment Variables
@@ -246,12 +299,54 @@ The server utilizes Firecrawl's built-in rate limiting and batch processing capa
  - Smart request queuing and throttling
  - Automatic retries for transient errors
 
+ ## How to Choose a Tool
+
+ Use this guide to select the right tool for your task:
+
+ - **If you know the exact URL(s) you want:**
+   - For one: use **scrape**
+   - For many: use **batch_scrape**
+ - **If you need to discover URLs on a site:** use **map** (see the combined example after the table below)
+ - **If you want to search the web for info:** use **search**
+ - **If you want to extract structured data:** use **extract**
+ - **If you want to analyze a whole site or section:** use **crawl** (with limits!)
+ - **If you want to do in-depth research:** use **deep_research**
+ - **If you want to generate LLMs.txt:** use **generate_llmstxt**
+
+ ### Quick Reference Table
+
+ | Tool             | Best for                            | Returns          |
+ |------------------|-------------------------------------|------------------|
+ | scrape           | Single page content                 | markdown/html    |
+ | batch_scrape     | Multiple known URLs                 | markdown/html[]  |
+ | map              | Discovering URLs on a site          | URL[]            |
+ | crawl            | Multi-page extraction (with limits) | markdown/html[]  |
+ | search           | Web search for info                 | results[]        |
+ | extract          | Structured data from pages          | JSON             |
+ | deep_research    | In-depth, multi-source research     | summary, sources |
+ | generate_llmstxt | LLMs.txt for a domain               | text             |
+
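+ For example, when you only know the site but not the pages, you can chain **map** and **batch_scrape**. A minimal sketch (the URLs are illustrative; argument names assume the `url`, `urls`, and `options` parameters of the map and batch scrape tools documented below):
+
+ ```json
+ {
+   "name": "firecrawl_map",
+   "arguments": {
+     "url": "https://example.com"
+   }
+ }
+ ```
+
+ Then pass a subset of the discovered URLs to **batch_scrape**:
+
+ ```json
+ {
+   "name": "firecrawl_batch_scrape",
+   "arguments": {
+     "urls": ["https://example.com/docs", "https://example.com/pricing"],
+     "options": {
+       "formats": ["markdown"],
+       "onlyMainContent": true
+     }
+   }
+ }
+ ```
+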
  ## Available Tools
 
  ### 1. Scrape Tool (`firecrawl_scrape`)
 
  Scrape content from a single URL with advanced options.
 
+ **Best for:**
+ - Single page content extraction, when you know exactly which page contains the information.
+
+ **Not recommended for:**
+ - Extracting content from multiple pages (use batch_scrape for known URLs, or map + batch_scrape to discover URLs first, or crawl for full page content)
+ - When you're unsure which page contains the information (use search)
+ - When you need structured data (use extract)
+
+ **Common mistakes:**
+ - Using scrape for a list of URLs (use batch_scrape instead).
+
+ **Prompt Example:**
+ > "Get the content of the page at https://example.com."
+
+ **Usage Example:**
  ```json
  {
    "name": "firecrawl_scrape",
@@ -269,10 +364,27 @@ Scrape content from a single URL with advanced options.
    }
  }
  ```
 
+ **Returns:**
+ - Markdown, HTML, or other formats as specified.
+
  ### 2. Batch Scrape Tool (`firecrawl_batch_scrape`)
 
  Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
 
+ **Best for:**
+ - Retrieving content from multiple pages, when you know exactly which pages to scrape.
+
+ **Not recommended for:**
+ - Discovering URLs (use map first if you don't know the URLs)
+ - Scraping a single page (use scrape)
+
+ **Common mistakes:**
+ - Using batch_scrape with too many URLs at once (may hit rate limits or token overflow)
+
+ **Prompt Example:**
+ > "Get the content of these three blog posts: [url1, url2, url3]."
+
+ **Usage Example:**
  ```json
  {
    "name": "firecrawl_batch_scrape",
@@ -286,7 +398,8 @@ Scrape multiple URLs efficiently with built-in rate limiting and parallel proces
  }
  ```
 
- Response includes operation ID for status checking:
+ **Returns:**
+ - Response includes operation ID for status checking:
 
  ```json
  {
@@ -313,15 +426,58 @@ Check the status of a batch operation.
  }
  }
  ```
 
- ### 4. Search Tool (`firecrawl_search`)
+ ### 4. Map Tool (`firecrawl_map`)
+
+ Map a website to discover all indexed URLs on the site.
+
+ **Best for:**
+ - Discovering URLs on a website before deciding what to scrape
+ - Finding specific sections of a website
+
+ **Not recommended for:**
+ - When you already know which specific URL you need (use scrape or batch_scrape)
+ - When you need the content of the pages (use scrape after mapping)
+
+ **Common mistakes:**
+ - Using crawl to discover URLs instead of map
+
+ **Prompt Example:**
+ > "List all URLs on example.com."
+
+ **Usage Example:**
+ ```json
+ {
+   "name": "firecrawl_map",
+   "arguments": {
+     "url": "https://example.com"
+   }
+ }
+ ```
+
+ **Returns:**
+ - Array of URLs found on the site
+
+ ### 5. Search Tool (`firecrawl_search`)
 
  Search the web and optionally extract content from search results.
 
+ **Best for:**
+ - Finding specific information across multiple websites, when you don't know which website has the information.
+ - When you need the most relevant content for a query
+
+ **Not recommended for:**
+ - When you already know which website to scrape (use scrape)
+ - When you need comprehensive coverage of a single website (use map or crawl)
+
+ **Common mistakes:**
+ - Using crawl or map for open-ended questions (use search instead)
+
+ **Prompt Example:**
+ > "Find the latest research papers on AI published in 2023."
+
+ **Usage Example:**
  ```json
  {
    "name": "firecrawl_search",
    "arguments": {
-     "query": "your search query",
+     "query": "latest AI research papers 2023",
      "limit": 5,
      "lang": "en",
      "country": "us",
@@ -333,15 +489,39 @@ Search the web and optionally extract content from search results.
  }
  }
  ```
 
- ### 5. Crawl Tool (`firecrawl_crawl`)
+ **Returns:**
+ - Array of search results (with optional scraped content)
 
- Start an asynchronous crawl with advanced options.
+ ### 6. Crawl Tool (`firecrawl_crawl`)
+
+ Starts an asynchronous crawl job on a website and extracts content from all pages.
+
+ **Best for:**
+ - Extracting content from multiple related pages, when you need comprehensive coverage.
+
+ **Not recommended for:**
+ - Extracting content from a single page (use scrape)
+ - When token limits are a concern (use map + batch_scrape)
+ - When you need fast results (crawling can be slow)
+
+ **Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
+
+ **Common mistakes:**
+ - Setting limit or maxDepth too high (causes token overflow)
+ - Using crawl for a single page (use scrape instead)
+
+ **Prompt Example:**
+ > "Get all blog posts from the first two levels of example.com/blog."
+
+ **Usage Example:**
  ```json
  {
    "name": "firecrawl_crawl",
    "arguments": {
-     "url": "https://example.com",
+     "url": "https://example.com/blog/*",
      "maxDepth": 2,
      "limit": 100,
      "allowExternalLinks": false,
@@ -350,10 +530,62 @@ Start an asynchronous crawl with advanced options.
  }
  }
  ```
 
- ### 6. Extract Tool (`firecrawl_extract`)
+ **Returns:**
+ - Response includes operation ID for status checking:
+
+ ```json
+ {
+   "content": [
+     {
+       "type": "text",
+       "text": "Started crawl for: https://example.com/* with job ID: 550e8400-e29b-41d4-a716-446655440000. Use firecrawl_check_crawl_status to check progress."
+     }
+   ],
+   "isError": false
+ }
+ ```
+
+ ### 7. Check Crawl Status (`firecrawl_check_crawl_status`)
+
+ Check the status of a crawl job.
+
+ ```json
+ {
+   "name": "firecrawl_check_crawl_status",
+   "arguments": {
+     "id": "550e8400-e29b-41d4-a716-446655440000"
+   }
+ }
+ ```
+
+ **Returns:**
+ - Response includes the status and progress of the crawl job, including results if available.
+
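+ An illustrative response (the `content`/`isError` envelope matches the other tools above; the status text itself is invented for illustration and varies with the job):
+
+ ```json
+ {
+   "content": [
+     {
+       "type": "text",
+       "text": "Crawl status: completed\nPages scraped: 15/15"
+     }
+   ],
+   "isError": false
+ }
+ ```
+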
+ ### 8. Extract Tool (`firecrawl_extract`)
 
  Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
 
+ **Best for:**
+ - Extracting specific structured data like prices, names, details.
+
+ **Not recommended for:**
+ - When you need the full content of a page (use scrape)
+ - When you're not looking for specific structured data
+
+ **Arguments:**
+ - `urls`: Array of URLs to extract information from
+ - `prompt`: Custom prompt for the LLM extraction
+ - `systemPrompt`: System prompt to guide the LLM
+ - `schema`: JSON schema for structured data extraction
+ - `allowExternalLinks`: Allow extraction from external links
+ - `enableWebSearch`: Enable web search for additional context
+ - `includeSubdomains`: Include subdomains in extraction
+
+ When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.
+
+ **Prompt Example:**
+ > "Extract the product name, price, and description from these product pages."
+
+ **Usage Example:**
  ```json
  {
    "name": "firecrawl_extract",
@@ -377,7 +609,8 @@ Extract structured information from web pages using LLM capabilities. Supports b
  }
  }
  ```
 
- Example response:
+ **Returns:**
+ - Extracted structured data as defined by your schema
 
  ```json
  {
@@ -395,27 +628,33 @@ Example response:
  }
  ```
 
- #### Extract Tool Options:
+ ### 9. Deep Research Tool (`firecrawl_deep_research`)
 
- - `urls`: Array of URLs to extract information from
- - `prompt`: Custom prompt for the LLM extraction
- - `systemPrompt`: System prompt to guide the LLM
- - `schema`: JSON schema for structured data extraction
- - `allowExternalLinks`: Allow extraction from external links
- - `enableWebSearch`: Enable web search for additional context
- - `includeSubdomains`: Include subdomains in extraction
+ Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
 
- When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.
+ **Best for:**
+ - Complex research questions requiring multiple sources, in-depth analysis.
 
- ### 7. Deep Research Tool (firecrawl_deep_research)
+ **Not recommended for:**
+ - Simple questions that can be answered with a single search
+ - When you need very specific information from a known page (use scrape)
+ - When you need results quickly (deep research can take time)
 
- Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
+ **Arguments:**
+ - `query` (string, required): The research question or topic to explore.
+ - `maxDepth` (number, optional): Maximum recursive depth for crawling/search (default: 3).
+ - `timeLimit` (number, optional): Time limit in seconds for the research session (default: 120).
+ - `maxUrls` (number, optional): Maximum number of URLs to analyze (default: 50).
+
+ **Prompt Example:**
+ > "Research the environmental impact of electric vehicles versus gasoline vehicles."
 
+ **Usage Example:**
  ```json
  {
    "name": "firecrawl_deep_research",
    "arguments": {
-     "query": "how does carbon capture technology work?",
+     "query": "What are the environmental impacts of electric vehicles compared to gasoline vehicles?",
      "maxDepth": 3,
      "timeLimit": 120,
      "maxUrls": 50
@@ -423,22 +662,30 @@ Conduct deep web research on a query using intelligent crawling, search, and LLM
  }
  ```
 
- Arguments:
+ **Returns:**
+ - Final analysis generated by an LLM based on the research (`data.finalAnalysis`)
+ - May also include structured activities and sources used in the research process.
+
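+ An illustrative response (same envelope as the other tools above; the analysis text is invented for illustration and is typically much longer):
+
+ ```json
+ {
+   "content": [
+     {
+       "type": "text",
+       "text": "Final Analysis: Across the sources reviewed, lifetime emissions of electric vehicles are generally lower than those of gasoline vehicles, with the gap depending on the local electricity mix..."
+     }
+   ],
+   "isError": false
+ }
+ ```
+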
+ ### 10. Generate LLMs.txt Tool (`firecrawl_generate_llmstxt`)
 
- - query (string, required): The research question or topic to explore.
- - maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
- - timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
- - maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).
+ Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
 
- Returns:
+ **Best for:**
+ - Creating machine-readable permission guidelines for AI models.
 
- - Final analysis generated by an LLM based on research. (data.finalAnalysis)
- - May also include structured activities and sources used in the research process.
+ **Not recommended for:**
+ - General content extraction or research
 
- ### 8. Generate LLMs.txt Tool (firecrawl_generate_llmstxt)
+ **Arguments:**
+ - `url` (string, required): The base URL of the website to analyze.
+ - `maxUrls` (number, optional): Max number of URLs to include (default: 10).
+ - `showFullText` (boolean, optional): Whether to include llms-full.txt contents in the response.
 
- Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
+ **Prompt Example:**
+ > "Generate an LLMs.txt file for example.com."
 
+ **Usage Example:**
  ```json
  {
    "name": "firecrawl_generate_llmstxt",
@@ -450,15 +697,8 @@ Generate a standardized llms.txt (and optionally llms-full.txt) file for a given
  }
  ```
 
- Arguments:
-
- - url (string, required): The base URL of the website to analyze.
- - maxUrls (number, optional): Max number of URLs to include (default: 10).
- - showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.
-
- Returns:
-
- - Generated llms.txt file contents and optionally the llms-full.txt (data.llmstxt and/or data.llmsfulltxt)
+ **Returns:**
+ - LLMs.txt file contents (and optionally llms-full.txt)
 
  ## Logging System
 
@@ -524,6 +764,12 @@ npm test
  3. Run tests: `npm test`
  4. Submit a pull request
 
+ ### Thanks to contributors
+
+ Thanks to [@vrknetha](https://github.com/vrknetha), [@cawstudios](https://caw.tech) for the initial implementation!
+
+ Thanks to MCP.so and Klavis AI for hosting and [@gstarwd](https://github.com/gstarwd), [@xiangkaiz](https://github.com/xiangkaiz) and [@zihaolin96](https://github.com/zihaolin96) for integrating our server.
+
  ## License
 
  MIT License - see LICENSE file for details
package/dist/index.js CHANGED
@@ -10,9 +10,25 @@ dotenv.config();
  // Tool definitions
  const SCRAPE_TOOL = {
  name: 'firecrawl_scrape',
- description: 'Scrape a single webpage with advanced options for content extraction. ' +
- 'Supports various formats including markdown, HTML, and screenshots. ' +
- 'Can execute custom actions like clicking or scrolling before scraping.',
+ description: `
+ Scrape content from a single URL with advanced options.
+
+ **Best for:** Single page content extraction, when you know exactly which page contains the information.
+ **Not recommended for:** Multiple pages (use batch_scrape), unknown page (use search), structured data (use extract).
+ **Common mistakes:** Using scrape for a list of URLs (use batch_scrape instead).
+ **Prompt Example:** "Get the content of the page at https://example.com."
+ **Usage Example:**
+ \`\`\`json
+ {
+ "name": "firecrawl_scrape",
+ "arguments": {
+ "url": "https://example.com",
+ "formats": ["markdown"]
+ }
+ }
+ \`\`\`
+ **Returns:** Markdown, HTML, or other formats as specified.
+ `,
  inputSchema: {
  type: 'object',
  properties: {
@@ -163,7 +179,24 @@ const SCRAPE_TOOL = {
  };
  const MAP_TOOL = {
  name: 'firecrawl_map',
- description: 'Discover URLs from a starting point. Can use both sitemap.xml and HTML link discovery.',
+ description: `
+ Map a website to discover all indexed URLs on the site.
+
+ **Best for:** Discovering URLs on a website before deciding what to scrape; finding specific sections of a website.
+ **Not recommended for:** When you already know which specific URL you need (use scrape or batch_scrape); when you need the content of the pages (use scrape after mapping).
+ **Common mistakes:** Using crawl to discover URLs instead of map.
+ **Prompt Example:** "List all URLs on example.com."
+ **Usage Example:**
+ \`\`\`json
+ {
+ "name": "firecrawl_map",
+ "arguments": {
+ "url": "https://example.com"
+ }
+ }
+ \`\`\`
+ **Returns:** Array of URLs found on the site.
+ `,
  inputSchema: {
  type: 'object',
  properties: {
@@ -197,8 +230,29 @@ const MAP_TOOL = {
  };
  const CRAWL_TOOL = {
  name: 'firecrawl_crawl',
- description: 'Start an asynchronous crawl of multiple pages from a starting URL. ' +
- 'Supports depth control, path filtering, and webhook notifications.',
+ description: `
+ Starts an asynchronous crawl job on a website and extracts content from all pages.
+
+ **Best for:** Extracting content from multiple related pages, when you need comprehensive coverage.
+ **Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow).
+ **Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
+ **Common mistakes:** Setting limit or maxDepth too high (causes token overflow); using crawl for a single page (use scrape instead).
+ **Prompt Example:** "Get all blog posts from the first two levels of example.com/blog."
+ **Usage Example:**
+ \`\`\`json
+ {
+ "name": "firecrawl_crawl",
+ "arguments": {
+ "url": "https://example.com/blog/*",
+ "maxDepth": 2,
+ "limit": 100,
+ "allowExternalLinks": false,
+ "deduplicateSimilarURLs": true
+ }
+ }
+ \`\`\`
+ **Returns:** Operation ID for status checking; use firecrawl_check_crawl_status to check progress.
+ `,
  inputSchema: {
  type: 'object',
  properties: {
@@ -307,7 +361,20 @@ const CRAWL_TOOL = {
  };
  const CHECK_CRAWL_STATUS_TOOL = {
  name: 'firecrawl_check_crawl_status',
- description: 'Check the status of a crawl job.',
+ description: `
+ Check the status of a crawl job.
+
+ **Usage Example:**
+ \`\`\`json
+ {
+ "name": "firecrawl_check_crawl_status",
+ "arguments": {
+ "id": "550e8400-e29b-41d4-a716-446655440000"
+ }
+ }
+ \`\`\`
+ **Returns:** Status and progress of the crawl job, including results if available.
+ `,
  inputSchema: {
  type: 'object',
  properties: {
@@ -321,8 +388,31 @@ const CHECK_CRAWL_STATUS_TOOL = {
  };
  const SEARCH_TOOL = {
  name: 'firecrawl_search',
- description: 'Search and retrieve content from web pages with optional scraping. ' +
- 'Returns SERP results by default (url, title, description) or full page content when scrapeOptions are provided.',
+ description: `
+ Search the web and optionally extract content from search results.
+
+ **Best for:** Finding specific information across multiple websites, when you don't know which website has the information; when you need the most relevant content for a query.
+ **Not recommended for:** When you already know which website to scrape (use scrape); when you need comprehensive coverage of a single website (use map or crawl).
+ **Common mistakes:** Using crawl or map for open-ended questions (use search instead).
+ **Prompt Example:** "Find the latest research papers on AI published in 2023."
+ **Usage Example:**
+ \`\`\`json
+ {
+ "name": "firecrawl_search",
+ "arguments": {
+ "query": "latest AI research papers 2023",
+ "limit": 5,
+ "lang": "en",
+ "country": "us",
+ "scrapeOptions": {
+ "formats": ["markdown"],
+ "onlyMainContent": true
+ }
+ }
+ }
+ \`\`\`
+ **Returns:** Array of search results (with optional scraped content).
+ `,
  inputSchema: {
  type: 'object',
  properties: {
@@ -393,8 +483,45 @@ const SEARCH_TOOL = {
  };
  const EXTRACT_TOOL = {
  name: 'firecrawl_extract',
- description: 'Extract structured information from web pages using LLM. ' +
- 'Supports both cloud AI and self-hosted LLM extraction.',
+ description: `
+ Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
+
+ **Best for:** Extracting specific structured data like prices, names, details.
+ **Not recommended for:** When you need the full content of a page (use scrape); when you're not looking for specific structured data.
+ **Arguments:**
+ - urls: Array of URLs to extract information from
+ - prompt: Custom prompt for the LLM extraction
+ - systemPrompt: System prompt to guide the LLM
+ - schema: JSON schema for structured data extraction
+ - allowExternalLinks: Allow extraction from external links
+ - enableWebSearch: Enable web search for additional context
+ - includeSubdomains: Include subdomains in extraction
+ **Prompt Example:** "Extract the product name, price, and description from these product pages."
+ **Usage Example:**
+ \`\`\`json
+ {
+ "name": "firecrawl_extract",
+ "arguments": {
+ "urls": ["https://example.com/page1", "https://example.com/page2"],
+ "prompt": "Extract product information including name, price, and description",
+ "systemPrompt": "You are a helpful assistant that extracts product information",
+ "schema": {
+ "type": "object",
+ "properties": {
+ "name": { "type": "string" },
+ "price": { "type": "number" },
+ "description": { "type": "string" }
+ },
+ "required": ["name", "price"]
+ },
+ "allowExternalLinks": false,
+ "enableWebSearch": false,
+ "includeSubdomains": false
+ }
+ }
+ \`\`\`
+ **Returns:** Extracted structured data as defined by your schema.
+ `,
  inputSchema: {
  type: 'object',
  properties: {
@@ -433,7 +560,31 @@ const EXTRACT_TOOL = {
  };
  const DEEP_RESEARCH_TOOL = {
  name: 'firecrawl_deep_research',
- description: 'Conduct deep research on a query using web crawling, search, and AI analysis.',
+ description: `
+ Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
+
+ **Best for:** Complex research questions requiring multiple sources, in-depth analysis.
+ **Not recommended for:** Simple questions that can be answered with a single search; when you need very specific information from a known page (use scrape); when you need results quickly (deep research can take time).
+ **Arguments:**
+ - query (string, required): The research question or topic to explore.
+ - maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
+ - timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
+ - maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).
+ **Prompt Example:** "Research the environmental impact of electric vehicles versus gasoline vehicles."
+ **Usage Example:**
+ \`\`\`json
+ {
+ "name": "firecrawl_deep_research",
+ "arguments": {
+ "query": "What are the environmental impacts of electric vehicles compared to gasoline vehicles?",
+ "maxDepth": 3,
+ "timeLimit": 120,
+ "maxUrls": 50
+ }
+ }
+ \`\`\`
+ **Returns:** Final analysis generated by an LLM based on the research (data.finalAnalysis); may also include structured activities and sources used in the research process.
+ `,
  inputSchema: {
  type: 'object',
  properties: {
@@ -459,7 +610,29 @@ const DEEP_RESEARCH_TOOL = {
  };
  const GENERATE_LLMSTXT_TOOL = {
  name: 'firecrawl_generate_llmstxt',
- description: 'Generate standardized LLMs.txt file for a given URL, which provides context about how LLMs should interact with the website.',
+ description: `
+ Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
+
+ **Best for:** Creating machine-readable permission guidelines for AI models.
+ **Not recommended for:** General content extraction or research.
+ **Arguments:**
+ - url (string, required): The base URL of the website to analyze.
+ - maxUrls (number, optional): Max number of URLs to include (default: 10).
+ - showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.
+ **Prompt Example:** "Generate an LLMs.txt file for example.com."
+ **Usage Example:**
+ \`\`\`json
+ {
+ "name": "firecrawl_generate_llmstxt",
+ "arguments": {
+ "url": "https://example.com",
+ "maxUrls": 20,
+ "showFullText": true
+ }
+ }
+ \`\`\`
+ **Returns:** LLMs.txt file contents (and optionally llms-full.txt).
+ `,
  inputSchema: {
  type: 'object',
  properties: {
@@ -725,7 +898,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
  content: [
  {
  type: 'text',
- text: trimResponseText(`Started crawl for ${url} with job ID: ${response.id}`),
+ text: trimResponseText(`Started crawl for ${url} with job ID: ${response.id}. Use firecrawl_check_crawl_status to check progress.`),
  },
  ],
  isError: false,
package/dist/src/index.js CHANGED
File without changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "firecrawl-mcp",
-   "version": "1.10.0",
+   "version": "1.11.0",
    "description": "MCP server for Firecrawl web scraping integration. Supports both cloud and self-hosted instances. Features include web scraping, batch processing, structured data extraction, and LLM-powered content analysis.",
    "type": "module",
    "bin": {