firecrawl 2.4.0__tar.gz → 2.4.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

This version of firecrawl has been flagged as potentially problematic.

@@ -0,0 +1,213 @@

Metadata-Version: 2.1
Name: firecrawl
Version: 2.4.2
Summary: Python SDK for Firecrawl API
Home-page: https://github.com/mendableai/firecrawl
Author: Mendable.ai
Author-email: "Mendable.ai" <nick@mendable.ai>
Maintainer-email: "Mendable.ai" <nick@mendable.ai>
License: MIT License
Project-URL: Documentation, https://docs.firecrawl.dev
Project-URL: Source, https://github.com/mendableai/firecrawl
Project-URL: Tracker, https://github.com/mendableai/firecrawl/issues
Keywords: SDK,API,firecrawl
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Topic :: Internet
Classifier: Topic :: Internet :: WWW/HTTP
Classifier: Topic :: Internet :: WWW/HTTP :: Indexing/Search
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Processing
Classifier: Topic :: Text Processing :: Indexing
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests
Requires-Dist: python-dotenv
Requires-Dist: websockets
Requires-Dist: nest-asyncio
Requires-Dist: pydantic
Requires-Dist: aiohttp

# Firecrawl SDK

> Firecrawl SDK is a wrapper around the Firecrawl API to help you easily turn websites into markdown.

## Installation

To install the Firecrawl SDK, use pip:

```bash
pip install firecrawl-py
```

## Usage

1. Get an API key from [firecrawl.dev](https://firecrawl.dev)
2. Set the API key as an environment variable named `FIRECRAWL_API_KEY` or pass it as a parameter to the `FirecrawlApp` class (see the sketch below).

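With the environment variable set, the client can be constructed without arguments. A minimal sketch (`load_dotenv` comes from the `python-dotenv` dependency listed in the metadata above, and is optional if the variable is already exported):

```python
import os

from dotenv import load_dotenv
from firecrawl import FirecrawlApp

load_dotenv()  # loads FIRECRAWL_API_KEY from a local .env file, if present
assert os.getenv("FIRECRAWL_API_KEY"), "set FIRECRAWL_API_KEY before running"

app = FirecrawlApp()  # picks up the key from the environment
```
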
Here's an example of how to use the SDK:

```python
from firecrawl import FirecrawlApp, ScrapeOptions

app = FirecrawlApp(api_key="fc-YOUR_API_KEY")

# Scrape a website:
scrape_status = app.scrape_url(
    'https://firecrawl.dev',
    formats=['markdown', 'html']
)
print(scrape_status)

# Crawl a website:
crawl_status = app.crawl_url(
    'https://firecrawl.dev',
    limit=100,
    scrape_options=ScrapeOptions(formats=['markdown', 'html'])
)
print(crawl_status)
```

### Scraping a URL

To scrape a single URL, use the `scrape_url` method. It takes the URL as a parameter and returns the scraped data as a `ScrapeResponse` object.

```python
# Scrape a website:
scrape_result = app.scrape_url('firecrawl.dev', formats=['markdown', 'html'])
print(scrape_result)
```

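The requested formats are then available on the response. A sketch, assuming the `ScrapeResponse` model exposes the requested formats as attributes (the `markdown` and `html` attribute names are assumptions mirroring the `formats` list):

```python
scrape_result = app.scrape_url('firecrawl.dev', formats=['markdown', 'html'])

# Attribute names below mirror the requested formats; verify them on your SDK version.
print(scrape_result.markdown[:200])  # first 200 characters of the markdown
print(scrape_result.html[:200])      # first 200 characters of the raw HTML
```
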
### Crawling a Website

To crawl a website, use the `crawl_url` method. It takes the starting URL and optional keyword arguments, which specify additional options for the crawl job, such as the maximum number of pages to crawl, allowed domains, and the output format. `crawl_url` waits for the crawl to finish; `poll_interval` controls how many seconds to wait between status checks.

```python
crawl_status = app.crawl_url(
    'https://firecrawl.dev',
    limit=100,
    scrape_options=ScrapeOptions(formats=['markdown', 'html']),
    poll_interval=30
)
print(crawl_status)
```

### Asynchronous Crawling

> **Tip:** Looking for async operations? Check out the [Async Class](#async-class) section below.

To start a crawl without waiting for it to complete, use the `async_crawl_url` method. It returns the crawl `ID`, which you can use to check the status of the crawl job. It takes the starting URL and the same optional keyword arguments as `crawl_url`.

```python
crawl_status = app.async_crawl_url(
    'https://firecrawl.dev',
    limit=100,
    scrape_options=ScrapeOptions(formats=['markdown', 'html']),
)
print(crawl_status)
```

### Checking Crawl Status

To check the status of a crawl job, use the `check_crawl_status` method. It takes the job ID as a parameter and returns the current status of the crawl job.

```python
crawl_status = app.check_crawl_status("<crawl_id>")
print(crawl_status)
```

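Combined with `async_crawl_url`, this supports a simple polling loop. A sketch, assuming the responses expose `id` and `status` attributes and that `"completed"`, `"failed"`, and `"cancelled"` are the terminal status values:

```python
import time

# Start the crawl without blocking, then poll until it reaches a terminal state.
crawl_job = app.async_crawl_url('https://firecrawl.dev', limit=100)

status = app.check_crawl_status(crawl_job.id)  # `id` attribute is an assumption
while status.status not in ("completed", "failed", "cancelled"):
    time.sleep(30)
    status = app.check_crawl_status(crawl_job.id)

print(status.status)
```
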
### Cancelling a Crawl

To cancel an asynchronous crawl job, use the `cancel_crawl` method. It takes the job ID of the asynchronous crawl as a parameter and returns the cancellation status.

```python
cancel_status = app.cancel_crawl("<crawl_id>")
print(cancel_status)
```

### Map a Website

Use `map_url` to generate a list of URLs from a website. Optional keyword arguments let you customize the mapping process, including options to exclude subdomains or to utilize the sitemap, as in the sketch after this example.

```python
# Map a website:
map_result = app.map_url('https://firecrawl.dev')
print(map_result)
```

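A sketch of `map_url` with those options; the parameter names `include_subdomains` and `sitemap_only` are assumptions, so verify them against your installed SDK version:

```python
# Parameter names below are assumptions; check map_url's signature.
map_result = app.map_url(
    'https://firecrawl.dev',
    include_subdomains=False,  # stay on the main domain
    sitemap_only=True,         # only return URLs found in the sitemap
)
print(map_result)
```
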
### Crawling a Website with WebSockets

To crawl a website with WebSockets, use the `crawl_url_and_watch` method. It takes the starting URL and the same optional keyword arguments as `crawl_url`.

```python
import nest_asyncio

# inside an async function...
nest_asyncio.apply()

# Define event handlers
def on_document(detail):
    print("DOC", detail)

def on_error(detail):
    print("ERR", detail['error'])

def on_done(detail):
    print("DONE", detail['status'])

# Function to start the crawl and watch process
async def start_crawl_and_watch():
    # Initiate the crawl job and get the watcher
    watcher = app.crawl_url_and_watch('firecrawl.dev', exclude_paths=['blog/*'], limit=5)

    # Add event listeners
    watcher.add_event_listener("document", on_document)
    watcher.add_event_listener("error", on_error)
    watcher.add_event_listener("done", on_done)

    # Start the watcher
    await watcher.connect()

# Run the event loop
await start_crawl_and_watch()
```

## Error Handling

The SDK handles errors returned by the Firecrawl API and raises appropriate exceptions. If an error occurs during a request, an exception will be raised with a descriptive error message.

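A minimal sketch of handling those errors; the SDK does not document a dedicated exception class here, so this catches the general `Exception`:

```python
try:
    scrape_result = app.scrape_url('https://firecrawl.dev', formats=['markdown'])
    print(scrape_result)
except Exception as err:
    # The exception message carries the API's error description.
    print(f"Scrape failed: {err}")
```
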
## Async Class

For async operations, use the `AsyncFirecrawlApp` class. Its methods mirror those of the `FirecrawlApp` class, but they are coroutines and don't block the main thread.

```python
from firecrawl import AsyncFirecrawlApp

app = AsyncFirecrawlApp(api_key="YOUR_API_KEY")

# Async Scrape
async def example_scrape():
    scrape_result = await app.scrape_url(url="https://example.com")
    print(scrape_result)

# Async Crawl
async def example_crawl():
    crawl_result = await app.crawl_url(url="https://example.com")
    print(crawl_result)
```
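
To run these coroutines from synchronous code, drive them with `asyncio` (a usage sketch):

```python
import asyncio

asyncio.run(example_scrape())
asyncio.run(example_crawl())
```
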
@@ -13,7 +13,7 @@ import os
 
 from .firecrawl import FirecrawlApp, JsonConfig, ScrapeOptions # noqa
 
-__version__ = "2.4.0"
+__version__ = "2.4.2"
 
 # Define the logger for the Firecrawl project
 logger: logging.Logger = logging.getLogger("firecrawl")
@@ -210,6 +210,7 @@ class ScrapeParams(ScrapeOptions):
     jsonOptions: Optional[JsonConfig] = None
     actions: Optional[List[Union[WaitAction, ScreenshotAction, ClickAction, WriteAction, PressAction, ScrollAction, ScrapeAction, ExecuteJavascriptAction]]] = None
     agent: Optional[AgentOptions] = None
+    webhook: Optional[WebhookConfig] = None
 
 class ScrapeResponse(FirecrawlDocument[T], Generic[T]):
     """Response from scraping operations."""
@@ -2432,15 +2433,15 @@ class FirecrawlApp:
         "batch_scrape_urls": {"formats", "headers", "include_tags", "exclude_tags", "only_main_content",
                               "wait_for", "timeout", "location", "mobile", "skip_tls_verification",
                               "remove_base64_images", "block_ads", "proxy", "extract", "json_options",
-                              "actions", "agent"},
+                              "actions", "agent", "webhook"},
         "async_batch_scrape_urls": {"formats", "headers", "include_tags", "exclude_tags", "only_main_content",
                                     "wait_for", "timeout", "location", "mobile", "skip_tls_verification",
                                     "remove_base64_images", "block_ads", "proxy", "extract", "json_options",
-                                    "actions", "agent"},
+                                    "actions", "agent", "webhook"},
         "batch_scrape_urls_and_watch": {"formats", "headers", "include_tags", "exclude_tags", "only_main_content",
                                         "wait_for", "timeout", "location", "mobile", "skip_tls_verification",
                                         "remove_base64_images", "block_ads", "proxy", "extract", "json_options",
-                                        "actions", "agent"}
+                                        "actions", "agent", "webhook"}
     }
 
     # Get allowed parameters for this method
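
The two diffs above add `webhook` to `ScrapeParams` and to the allowed parameters of the batch scrape methods. A minimal sketch of what this enables (the `WebhookConfig` import path and its `url` field are assumptions; check the model definition in the SDK source):

```python
from firecrawl import FirecrawlApp
from firecrawl.firecrawl import WebhookConfig  # import path is an assumption

app = FirecrawlApp(api_key="fc-YOUR_API_KEY")

# Ask Firecrawl to POST batch-scrape events to your endpoint instead of polling.
batch_job = app.async_batch_scrape_urls(
    ['https://firecrawl.dev', 'https://docs.firecrawl.dev'],
    formats=['markdown'],
    webhook=WebhookConfig(url="https://example.com/firecrawl-webhook"),  # field name assumed
)
print(batch_job)
```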