firecrawl 1.3.1__tar.gz → 1.4.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

This version of firecrawl might be problematic.

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: firecrawl
-Version: 1.3.1
+Version: 1.4.0
 Summary: Python SDK for Firecrawl API
 Home-page: https://github.com/mendableai/firecrawl
 Author: Mendable.ai
@@ -189,6 +189,69 @@ async def start_crawl_and_watch():
 await start_crawl_and_watch()
 ```
 
+### Scraping multiple URLs in batch
+
+To batch scrape multiple URLs, use the `batch_scrape_urls` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+idempotency_key = str(uuid.uuid4()) # optional idempotency key
+batch_scrape_result = app.batch_scrape_urls(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']}, 2, idempotency_key)
+print(batch_scrape_result)
+```
+
+### Asynchronous batch scrape
+
+To run a batch scrape asynchronously, use the `async_batch_scrape_urls` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+batch_scrape_result = app.async_batch_scrape_urls(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']})
+print(batch_scrape_result)
+```
+
+### Checking batch scrape status
+
+To check the status of an asynchronous batch scrape job, use the `check_batch_scrape_status` method. It takes the job ID as a parameter and returns the current status of the batch scrape job.
+
+```python
+id = batch_scrape_result['id']
+status = app.check_batch_scrape_status(id)
+```
+
+### Batch scrape with WebSockets
+
+To use batch scrape with WebSockets, use the `batch_scrape_urls_and_watch` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+# inside an async function...
+nest_asyncio.apply()
+
+# Define event handlers
+def on_document(detail):
+    print("DOC", detail)
+
+def on_error(detail):
+    print("ERR", detail['error'])
+
+def on_done(detail):
+    print("DONE", detail['status'])
+
+# Function to start the batch scrape and watch process
+async def start_crawl_and_watch():
+    # Initiate the batch scrape job and get the watcher
+    watcher = app.batch_scrape_urls_and_watch(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']})
+
+    # Add event listeners
+    watcher.add_event_listener("document", on_document)
+    watcher.add_event_listener("error", on_error)
+    watcher.add_event_listener("done", on_done)
+
+    # Start the watcher
+    await watcher.connect()
+
+# Run the event loop
+await start_crawl_and_watch()
+```
+
 ## Error Handling
 
 The SDK handles errors returned by the Firecrawl API and raises appropriate exceptions. If an error occurs during a request, an exception will be raised with a descriptive error message.
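The new README sections describe a submit-then-poll workflow. A minimal end-to-end sketch under the method signatures added in this release, assuming a `FIRECRAWL_API_KEY` environment variable:

```python
# Sketch only: submit a batch scrape, poll its status, then read results.
# Method names and return shapes follow the code added in this diff; the
# 'scraping' in-progress status matches the package's own e2e tests.
import os
import time

from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])

# Fire off the job without blocking, then poll until it finishes.
job = app.async_batch_scrape_urls(['firecrawl.dev', 'mendable.ai'],
                                  {'formats': ['markdown']})

status = app.check_batch_scrape_status(job['id'])
while status['status'] == 'scraping':
    time.sleep(2)  # mirrors the default poll_interval of batch_scrape_urls
    status = app.check_batch_scrape_status(job['id'])

for page in status['data'] or []:
    print(page['metadata']['sourceURL'])
```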
@@ -149,6 +149,69 @@ async def start_crawl_and_watch():
 await start_crawl_and_watch()
 ```
 
+### Scraping multiple URLs in batch
+
+To batch scrape multiple URLs, use the `batch_scrape_urls` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+idempotency_key = str(uuid.uuid4()) # optional idempotency key
+batch_scrape_result = app.batch_scrape_urls(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']}, 2, idempotency_key)
+print(batch_scrape_result)
+```
+
+### Asynchronous batch scrape
+
+To run a batch scrape asynchronously, use the `async_batch_scrape_urls` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+batch_scrape_result = app.async_batch_scrape_urls(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']})
+print(batch_scrape_result)
+```
+
+### Checking batch scrape status
+
+To check the status of an asynchronous batch scrape job, use the `check_batch_scrape_status` method. It takes the job ID as a parameter and returns the current status of the batch scrape job.
+
+```python
+id = batch_scrape_result['id']
+status = app.check_batch_scrape_status(id)
+```
+
+### Batch scrape with WebSockets
+
+To use batch scrape with WebSockets, use the `batch_scrape_urls_and_watch` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+# inside an async function...
+nest_asyncio.apply()
+
+# Define event handlers
+def on_document(detail):
+    print("DOC", detail)
+
+def on_error(detail):
+    print("ERR", detail['error'])
+
+def on_done(detail):
+    print("DONE", detail['status'])
+
+# Function to start the batch scrape and watch process
+async def start_crawl_and_watch():
+    # Initiate the batch scrape job and get the watcher
+    watcher = app.batch_scrape_urls_and_watch(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']})
+
+    # Add event listeners
+    watcher.add_event_listener("document", on_document)
+    watcher.add_event_listener("error", on_error)
+    watcher.add_event_listener("done", on_done)
+
+    # Start the watcher
+    await watcher.connect()
+
+# Run the event loop
+await start_crawl_and_watch()
+```
+
 ## Error Handling
 
 The SDK handles errors returned by the Firecrawl API and raises appropriate exceptions. If an error occurs during a request, an exception will be raised with a descriptive error message.
@@ -13,7 +13,7 @@ import os
 
 from .firecrawl import FirecrawlApp
 
-__version__ = "1.3.1"
+__version__ = "1.4.0"
 
 # Define the logger for the Firecrawl project
 logger: logging.Logger = logging.getLogger("firecrawl")
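For anyone verifying an upgrade, the bumped version string is importable directly from the package:

```python
# The version string bumped above is exposed at package level.
import firecrawl

assert firecrawl.__version__ == "1.4.0"
```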
@@ -81,8 +81,10 @@ class FirecrawlApp:
             response = response.json()
             if response['success'] and 'data' in response:
                 return response['data']
-            else:
+            elif "error" in response:
                 raise Exception(f'Failed to scrape URL. Error: {response["error"]}')
+            else:
+                raise Exception(f'Failed to scrape URL. Error: {response}')
         else:
             self._handle_error(response, 'scrape URL')
 
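The practical effect of this hunk: a non-success response body that lacks an `error` key used to fail while the exception message itself was being formatted (indexing `response["error"]` raises `KeyError`); it now raises a normal `Exception` carrying the whole response body. A sketch of what calling code sees, under the same `FIRECRAWL_API_KEY` assumption as above:

```python
import os

from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])

# With the new fallback branch, a failed scrape always surfaces as a single
# Exception -- the message embeds response["error"] when present, otherwise
# the full response body, instead of raising KeyError during formatting.
try:
    data = app.scrape_url('https://example.com')
except Exception as exc:
    print(f"scrape failed: {exc}")
```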
@@ -266,11 +268,130 @@ class FirecrawlApp:
             response = response.json()
             if response['success'] and 'links' in response:
                 return response
-            else:
+            elif 'error' in response:
                 raise Exception(f'Failed to map URL. Error: {response["error"]}')
+            else:
+                raise Exception(f'Failed to map URL. Error: {response}')
         else:
             self._handle_error(response, 'map')
 
+    def batch_scrape_urls(self, urls: list[str],
+                          params: Optional[Dict[str, Any]] = None,
+                          poll_interval: Optional[int] = 2,
+                          idempotency_key: Optional[str] = None) -> Any:
+        """
+        Initiate a batch scrape job for the specified URLs using the Firecrawl API.
+
+        Args:
+            urls (list[str]): The URLs to scrape.
+            params (Optional[Dict[str, Any]]): Additional parameters for the scraper.
+            poll_interval (Optional[int]): Time in seconds between status checks when waiting for job completion. Defaults to 2 seconds.
+            idempotency_key (Optional[str]): A unique uuid key to ensure idempotency of requests.
+
+        Returns:
+            Dict[str, Any]: A dictionary containing the scrape results. The structure includes:
+                - 'success' (bool): Indicates if the batch scrape was successful.
+                - 'status' (str): The final status of the batch scrape job (e.g., 'completed').
+                - 'completed' (int): Number of scraped pages that completed.
+                - 'total' (int): Total number of scraped pages.
+                - 'creditsUsed' (int): Estimated number of API credits used for this batch scrape.
+                - 'expiresAt' (str): ISO 8601 formatted date-time string indicating when the batch scrape data expires.
+                - 'data' (List[Dict]): List of all the scraped pages.
+
+        Raises:
+            Exception: If the batch scrape job initiation or monitoring fails.
+        """
+        endpoint = f'/v1/batch/scrape'
+        headers = self._prepare_headers(idempotency_key)
+        json_data = {'urls': urls}
+        if params:
+            json_data.update(params)
+        response = self._post_request(f'{self.api_url}{endpoint}', json_data, headers)
+        if response.status_code == 200:
+            id = response.json().get('id')
+            return self._monitor_job_status(id, headers, poll_interval)
+
+        else:
+            self._handle_error(response, 'start batch scrape job')
+
+
+    def async_batch_scrape_urls(self, urls: list[str], params: Optional[Dict[str, Any]] = None, idempotency_key: Optional[str] = None) -> Dict[str, Any]:
+        """
+        Initiate a batch scrape job asynchronously.
+
+        Args:
+            urls (list[str]): The URLs to scrape.
+            params (Optional[Dict[str, Any]]): Additional parameters for the scraper.
+            idempotency_key (Optional[str]): A unique uuid key to ensure idempotency of requests.
+
+        Returns:
+            Dict[str, Any]: A dictionary containing the batch scrape initiation response. The structure includes:
+                - 'success' (bool): Indicates if the batch scrape initiation was successful.
+                - 'id' (str): The unique identifier for the batch scrape job.
+                - 'url' (str): The URL to check the status of the batch scrape job.
+        """
+        endpoint = f'/v1/batch/scrape'
+        headers = self._prepare_headers(idempotency_key)
+        json_data = {'urls': urls}
+        if params:
+            json_data.update(params)
+        response = self._post_request(f'{self.api_url}{endpoint}', json_data, headers)
+        if response.status_code == 200:
+            return response.json()
+        else:
+            self._handle_error(response, 'start batch scrape job')
+
+    def batch_scrape_urls_and_watch(self, urls: list[str], params: Optional[Dict[str, Any]] = None, idempotency_key: Optional[str] = None) -> 'CrawlWatcher':
+        """
+        Initiate a batch scrape job and return a CrawlWatcher to monitor the job via WebSocket.
+
+        Args:
+            urls (list[str]): The URLs to scrape.
+            params (Optional[Dict[str, Any]]): Additional parameters for the scraper.
+            idempotency_key (Optional[str]): A unique uuid key to ensure idempotency of requests.
+
+        Returns:
+            CrawlWatcher: An instance of CrawlWatcher to monitor the batch scrape job.
+        """
+        crawl_response = self.async_batch_scrape_urls(urls, params, idempotency_key)
+        if crawl_response['success'] and 'id' in crawl_response:
+            return CrawlWatcher(crawl_response['id'], self)
+        else:
+            raise Exception("Batch scrape job failed to start")
+
+    def check_batch_scrape_status(self, id: str) -> Any:
+        """
+        Check the status of a batch scrape job using the Firecrawl API.
+
+        Args:
+            id (str): The ID of the batch scrape job.
+
+        Returns:
+            Any: The status of the batch scrape job.
+
+        Raises:
+            Exception: If the status check request fails.
+        """
+        endpoint = f'/v1/batch/scrape/{id}'
+
+        headers = self._prepare_headers()
+        response = self._get_request(f'{self.api_url}{endpoint}', headers)
+        if response.status_code == 200:
+            data = response.json()
+            return {
+                'success': True,
+                'status': data.get('status'),
+                'total': data.get('total'),
+                'completed': data.get('completed'),
+                'creditsUsed': data.get('creditsUsed'),
+                'expiresAt': data.get('expiresAt'),
+                'next': data.get('next'),
+                'data': data.get('data'),
+                'error': data.get('error')
+            }
+        else:
+            self._handle_error(response, 'check batch scrape status')
+
     def _prepare_headers(self, idempotency_key: Optional[str] = None) -> Dict[str, str]:
         """
         Prepare the headers for API requests.
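Note that `batch_scrape_urls` blocks: after submitting it hands off to the same `_monitor_job_status` helper the crawl methods use, polling at `poll_interval` until the job resolves. A sketch of the synchronous path, assuming the return shape described in the docstring above:

```python
import os
import uuid

from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])

# Blocks until the batch completes; poll_interval widens the status checks,
# and the idempotency key lets the server reject accidental resubmissions.
result = app.batch_scrape_urls(
    ['firecrawl.dev', 'mendable.ai'],
    {'formats': ['markdown', 'html']},
    poll_interval=5,
    idempotency_key=str(uuid.uuid4()),
)
print(result['status'], result['creditsUsed'])
```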
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: firecrawl
-Version: 1.3.1
+Version: 1.4.0
 Summary: Python SDK for Firecrawl API
 Home-page: https://github.com/mendableai/firecrawl
 Author: Mendable.ai
@@ -189,6 +189,69 @@ async def start_crawl_and_watch():
 await start_crawl_and_watch()
 ```
 
+### Scraping multiple URLs in batch
+
+To batch scrape multiple URLs, use the `batch_scrape_urls` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+idempotency_key = str(uuid.uuid4()) # optional idempotency key
+batch_scrape_result = app.batch_scrape_urls(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']}, 2, idempotency_key)
+print(batch_scrape_result)
+```
+
+### Asynchronous batch scrape
+
+To run a batch scrape asynchronously, use the `async_batch_scrape_urls` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+batch_scrape_result = app.async_batch_scrape_urls(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']})
+print(batch_scrape_result)
+```
+
+### Checking batch scrape status
+
+To check the status of an asynchronous batch scrape job, use the `check_batch_scrape_status` method. It takes the job ID as a parameter and returns the current status of the batch scrape job.
+
+```python
+id = batch_scrape_result['id']
+status = app.check_batch_scrape_status(id)
+```
+
+### Batch scrape with WebSockets
+
+To use batch scrape with WebSockets, use the `batch_scrape_urls_and_watch` method. It takes the URLs and optional parameters as arguments. The `params` argument allows you to specify additional options for the scraper, such as the output formats.
+
+```python
+# inside an async function...
+nest_asyncio.apply()
+
+# Define event handlers
+def on_document(detail):
+    print("DOC", detail)
+
+def on_error(detail):
+    print("ERR", detail['error'])
+
+def on_done(detail):
+    print("DONE", detail['status'])
+
+# Function to start the batch scrape and watch process
+async def start_crawl_and_watch():
+    # Initiate the batch scrape job and get the watcher
+    watcher = app.batch_scrape_urls_and_watch(['firecrawl.dev', 'mendable.ai'], {'formats': ['markdown', 'html']})
+
+    # Add event listeners
+    watcher.add_event_listener("document", on_document)
+    watcher.add_event_listener("error", on_error)
+    watcher.add_event_listener("done", on_done)
+
+    # Start the watcher
+    await watcher.connect()
+
+# Run the event loop
+await start_crawl_and_watch()
+```
+
 ## Error Handling
 
 The SDK handles errors returned by the Firecrawl API and raises appropriate exceptions. If an error occurs during a request, an exception will be raised with a descriptive error message.
@@ -8,8 +8,4 @@ firecrawl.egg-info/PKG-INFO
 firecrawl.egg-info/SOURCES.txt
 firecrawl.egg-info/dependency_links.txt
 firecrawl.egg-info/requires.txt
-firecrawl.egg-info/top_level.txt
-firecrawl/__tests__/e2e_withAuth/__init__.py
-firecrawl/__tests__/e2e_withAuth/test.py
-firecrawl/__tests__/v1/e2e_withAuth/__init__.py
-firecrawl/__tests__/v1/e2e_withAuth/test.py
+firecrawl.egg-info/top_level.txt
@@ -0,0 +1,3 @@
+build
+dist
+firecrawl
@@ -1,170 +0,0 @@
-import importlib.util
-import pytest
-import time
-import os
-from uuid import uuid4
-from dotenv import load_dotenv
-
-load_dotenv()
-
-API_URL = "http://127.0.0.1:3002"
-ABSOLUTE_FIRECRAWL_PATH = "firecrawl/firecrawl.py"
-TEST_API_KEY = os.getenv('TEST_API_KEY')
-
-print(f"ABSOLUTE_FIRECRAWL_PATH: {ABSOLUTE_FIRECRAWL_PATH}")
-
-spec = importlib.util.spec_from_file_location("FirecrawlApp", ABSOLUTE_FIRECRAWL_PATH)
-firecrawl = importlib.util.module_from_spec(spec)
-spec.loader.exec_module(firecrawl)
-FirecrawlApp = firecrawl.FirecrawlApp
-
-def test_no_api_key():
-    with pytest.raises(Exception) as excinfo:
-        invalid_app = FirecrawlApp(api_url=API_URL, version='v0')
-    assert "No API key provided" in str(excinfo.value)
-
-def test_scrape_url_invalid_api_key():
-    invalid_app = FirecrawlApp(api_url=API_URL, api_key="invalid_api_key", version='v0')
-    with pytest.raises(Exception) as excinfo:
-        invalid_app.scrape_url('https://firecrawl.dev')
-    assert "Unexpected error during scrape URL: Status code 401. Unauthorized: Invalid token" in str(excinfo.value)
-
-def test_blocklisted_url():
-    blocklisted_url = "https://facebook.com/fake-test"
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    with pytest.raises(Exception) as excinfo:
-        app.scrape_url(blocklisted_url)
-    assert "Unexpected error during scrape URL: Status code 403. Firecrawl currently does not support social media scraping due to policy restrictions. We're actively working on building support for it." in str(excinfo.value)
-
-def test_successful_response_with_valid_preview_token():
-    app = FirecrawlApp(api_url=API_URL, api_key="this_is_just_a_preview_token", version='v0')
-    response = app.scrape_url('https://roastmywebsite.ai')
-    assert response is not None
-    assert 'content' in response
-    assert "_Roast_" in response['content']
-
-def test_scrape_url_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    response = app.scrape_url('https://roastmywebsite.ai')
-    print(response)
-
-    assert response is not None
-    assert 'content' in response
-    assert 'markdown' in response
-    assert 'metadata' in response
-    assert 'html' not in response
-    assert "_Roast_" in response['content']
-
-def test_successful_response_with_valid_api_key_and_include_html():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    response = app.scrape_url('https://roastmywebsite.ai', {'pageOptions': {'includeHtml': True}})
-    assert response is not None
-    assert 'content' in response
-    assert 'markdown' in response
-    assert 'html' in response
-    assert 'metadata' in response
-    assert "_Roast_" in response['content']
-    assert "_Roast_" in response['markdown']
-    assert "<h1" in response['html']
-
-def test_successful_response_for_valid_scrape_with_pdf_file():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    response = app.scrape_url('https://arxiv.org/pdf/astro-ph/9301001.pdf')
-    assert response is not None
-    assert 'content' in response
-    assert 'metadata' in response
-    assert 'We present spectrophotometric observations of the Broad Line Radio Galaxy' in response['content']
-
-def test_successful_response_for_valid_scrape_with_pdf_file_without_explicit_extension():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    response = app.scrape_url('https://arxiv.org/pdf/astro-ph/9301001')
-    time.sleep(6) # wait for 6 seconds
-    assert response is not None
-    assert 'content' in response
-    assert 'metadata' in response
-    assert 'We present spectrophotometric observations of the Broad Line Radio Galaxy' in response['content']
-
-def test_crawl_url_invalid_api_key():
-    invalid_app = FirecrawlApp(api_url=API_URL, api_key="invalid_api_key", version='v0')
-    with pytest.raises(Exception) as excinfo:
-        invalid_app.crawl_url('https://firecrawl.dev')
-    assert "Unexpected error during start crawl job: Status code 401. Unauthorized: Invalid token" in str(excinfo.value)
-
-def test_should_return_error_for_blocklisted_url():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    blocklisted_url = "https://twitter.com/fake-test"
-    with pytest.raises(Exception) as excinfo:
-        app.crawl_url(blocklisted_url)
-    assert "Unexpected error during start crawl job: Status code 403. Firecrawl currently does not support social media scraping due to policy restrictions. We're actively working on building support for it." in str(excinfo.value)
-
-def test_crawl_url_wait_for_completion_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    response = app.crawl_url('https://roastmywebsite.ai', {'crawlerOptions': {'excludes': ['blog/*']}}, True)
-    assert response is not None
-    assert len(response) > 0
-    assert 'content' in response[0]
-    assert "_Roast_" in response[0]['content']
-
-def test_crawl_url_with_idempotency_key_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    uniqueIdempotencyKey = str(uuid4())
-    response = app.crawl_url('https://roastmywebsite.ai', {'crawlerOptions': {'excludes': ['blog/*']}}, True, 2, uniqueIdempotencyKey)
-    assert response is not None
-    assert len(response) > 0
-    assert 'content' in response[0]
-    assert "_Roast_" in response[0]['content']
-
-    with pytest.raises(Exception) as excinfo:
-        app.crawl_url('https://firecrawl.dev', {'crawlerOptions': {'excludes': ['blog/*']}}, True, 2, uniqueIdempotencyKey)
-    assert "Conflict: Failed to start crawl job due to a conflict. Idempotency key already used" in str(excinfo.value)
-
-def test_check_crawl_status_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    response = app.crawl_url('https://firecrawl.dev', {'crawlerOptions': {'excludes': ['blog/*']}}, False)
-    assert response is not None
-    assert 'jobId' in response
-
-    time.sleep(30) # wait for 30 seconds
-    status_response = app.check_crawl_status(response['jobId'])
-    assert status_response is not None
-    assert 'status' in status_response
-    assert status_response['status'] == 'completed'
-    assert 'data' in status_response
-    assert len(status_response['data']) > 0
-
-def test_search_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    response = app.search("test query")
-    assert response is not None
-    assert 'content' in response[0]
-    assert len(response) > 2
-
-def test_search_invalid_api_key():
-    invalid_app = FirecrawlApp(api_url=API_URL, api_key="invalid_api_key", version='v0')
-    with pytest.raises(Exception) as excinfo:
-        invalid_app.search("test query")
-    assert "Unexpected error during search: Status code 401. Unauthorized: Invalid token" in str(excinfo.value)
-
-def test_llm_extraction():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY, version='v0')
-    response = app.scrape_url("https://firecrawl.dev", {
-        'extractorOptions': {
-            'mode': 'llm-extraction',
-            'extractionPrompt': "Based on the information on the page, find what the company's mission is and whether it supports SSO, and whether it is open source",
-            'extractionSchema': {
-                'type': 'object',
-                'properties': {
-                    'company_mission': {'type': 'string'},
-                    'supports_sso': {'type': 'boolean'},
-                    'is_open_source': {'type': 'boolean'}
-                },
-                'required': ['company_mission', 'supports_sso', 'is_open_source']
-            }
-        }
-    })
-    assert response is not None
-    assert 'llm_extraction' in response
-    llm_extraction = response['llm_extraction']
-    assert 'company_mission' in llm_extraction
-    assert isinstance(llm_extraction['supports_sso'], bool)
-    assert isinstance(llm_extraction['is_open_source'], bool)
@@ -1,352 +0,0 @@
-import importlib.util
-import pytest
-import time
-import os
-from uuid import uuid4
-from dotenv import load_dotenv
-from datetime import datetime
-
-load_dotenv()
-
-API_URL = "http://127.0.0.1:3002";
-ABSOLUTE_FIRECRAWL_PATH = "firecrawl/firecrawl.py"
-TEST_API_KEY = os.getenv('TEST_API_KEY')
-
-print(f"ABSOLUTE_FIRECRAWL_PATH: {ABSOLUTE_FIRECRAWL_PATH}")
-
-spec = importlib.util.spec_from_file_location("FirecrawlApp", ABSOLUTE_FIRECRAWL_PATH)
-firecrawl = importlib.util.module_from_spec(spec)
-spec.loader.exec_module(firecrawl)
-FirecrawlApp = firecrawl.FirecrawlApp
-
-def test_no_api_key():
-    with pytest.raises(Exception) as excinfo:
-        invalid_app = FirecrawlApp(api_url=API_URL)
-    assert "No API key provided" in str(excinfo.value)
-
-def test_scrape_url_invalid_api_key():
-    invalid_app = FirecrawlApp(api_url=API_URL, api_key="invalid_api_key")
-    with pytest.raises(Exception) as excinfo:
-        invalid_app.scrape_url('https://firecrawl.dev')
-    assert "Unauthorized: Invalid token" in str(excinfo.value)
-
-def test_blocklisted_url():
-    blocklisted_url = "https://facebook.com/fake-test"
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    with pytest.raises(Exception) as excinfo:
-        app.scrape_url(blocklisted_url)
-    assert "URL is blocked. Firecrawl currently does not support social media scraping due to policy restrictions." in str(excinfo.value)
-
-def test_successful_response_with_valid_preview_token():
-    app = FirecrawlApp(api_url=API_URL, api_key="this_is_just_a_preview_token")
-    response = app.scrape_url('https://roastmywebsite.ai')
-    assert response is not None
-    assert "_Roast_" in response['markdown']
-    assert "content" not in response
-    assert "html" not in response
-    assert "metadata" in response
-    assert "links" not in response
-    assert "rawHtml" not in response
-
-def test_successful_response_for_valid_scrape():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    response = app.scrape_url('https://roastmywebsite.ai')
-    assert response is not None
-    assert 'markdown' in response
-    assert "_Roast_" in response['markdown']
-    assert 'metadata' in response
-    assert 'content' not in response
-    assert 'html' not in response
-    assert 'rawHtml' not in response
-    assert 'screenshot' not in response
-    assert 'links' not in response
-
-def test_successful_response_with_valid_api_key_and_options():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    params = {
-        'formats': ['markdown', 'html', 'rawHtml', 'screenshot', 'links'],
-        'headers': {'x-key': 'test'},
-        'includeTags': ['h1'],
-        'excludeTags': ['h2'],
-        'onlyMainContent': True,
-        'timeout': 30000,
-        'waitFor': 1000
-    }
-    response = app.scrape_url('https://roastmywebsite.ai', params)
-    assert response is not None
-    assert 'content' not in response
-    assert 'markdown' in response
-    assert 'html' in response
-    assert 'rawHtml' in response
-    assert 'screenshot' in response
-    assert 'links' in response
-    assert "_Roast_" in response['markdown']
-    assert "<h1" in response['html']
-    assert "<h1" in response['rawHtml']
-    assert "https://" in response['screenshot']
-    assert len(response['links']) > 0
-    assert "https://" in response['links'][0]
-    assert 'metadata' in response
-    assert 'title' in response['metadata']
-    assert 'description' in response['metadata']
-    assert 'keywords' in response['metadata']
-    assert 'robots' in response['metadata']
-    assert 'ogTitle' in response['metadata']
-    assert 'ogDescription' in response['metadata']
-    assert 'ogUrl' in response['metadata']
-    assert 'ogImage' in response['metadata']
-    assert 'ogLocaleAlternate' in response['metadata']
-    assert 'ogSiteName' in response['metadata']
-    assert 'sourceURL' in response['metadata']
-    assert 'statusCode' in response['metadata']
-    assert 'pageStatusCode' not in response['metadata']
-    assert 'pageError' not in response['metadata']
-    assert 'error' not in response['metadata']
-    assert response['metadata']['title'] == "Roast My Website"
-    assert response['metadata']['description'] == "Welcome to Roast My Website, the ultimate tool for putting your website through the wringer! This repository harnesses the power of Firecrawl to scrape and capture screenshots of websites, and then unleashes the latest LLM vision models to mercilessly roast them. 🌶️"
-    assert response['metadata']['keywords'] == "Roast My Website,Roast,Website,GitHub,Firecrawl"
-    assert response['metadata']['robots'] == "follow, index"
-    assert response['metadata']['ogTitle'] == "Roast My Website"
-    assert response['metadata']['ogDescription'] == "Welcome to Roast My Website, the ultimate tool for putting your website through the wringer! This repository harnesses the power of Firecrawl to scrape and capture screenshots of websites, and then unleashes the latest LLM vision models to mercilessly roast them. 🌶️"
-    assert response['metadata']['ogUrl'] == "https://www.roastmywebsite.ai"
-    assert response['metadata']['ogImage'] == "https://www.roastmywebsite.ai/og.png"
-    assert response['metadata']['ogLocaleAlternate'] == []
-    assert response['metadata']['ogSiteName'] == "Roast My Website"
-    assert response['metadata']['sourceURL'] == "https://roastmywebsite.ai"
-    assert response['metadata']['statusCode'] == 200
-
-def test_successful_response_for_valid_scrape_with_pdf_file():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    response = app.scrape_url('https://arxiv.org/pdf/astro-ph/9301001.pdf')
-    assert response is not None
-    assert 'content' not in response
-    assert 'metadata' in response
-    assert 'We present spectrophotometric observations of the Broad Line Radio Galaxy' in response['markdown']
-
-def test_successful_response_for_valid_scrape_with_pdf_file_without_explicit_extension():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    response = app.scrape_url('https://arxiv.org/pdf/astro-ph/9301001')
-    time.sleep(1) # wait for 1 second
-    assert response is not None
-    assert 'We present spectrophotometric observations of the Broad Line Radio Galaxy' in response['markdown']
-
-def test_crawl_url_invalid_api_key():
-    invalid_app = FirecrawlApp(api_url=API_URL, api_key="invalid_api_key")
-    with pytest.raises(Exception) as excinfo:
-        invalid_app.crawl_url('https://firecrawl.dev')
-    assert "Unauthorized: Invalid token" in str(excinfo.value)
-
-def test_should_return_error_for_blocklisted_url():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    blocklisted_url = "https://twitter.com/fake-test"
-    with pytest.raises(Exception) as excinfo:
-        app.crawl_url(blocklisted_url)
-    assert "URL is blocked. Firecrawl currently does not support social media scraping due to policy restrictions." in str(excinfo.value)
-
-def test_crawl_url_wait_for_completion_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    response = app.crawl_url('https://roastmywebsite.ai', {'excludePaths': ['blog/*']}, True, 30)
-    assert response is not None
-    assert 'total' in response
-    assert response['total'] > 0
-    assert 'creditsUsed' in response
-    assert response['creditsUsed'] > 0
-    assert 'expiresAt' in response
-    assert datetime.strptime(response['expiresAt'], '%Y-%m-%dT%H:%M:%S.%fZ') > datetime.now()
-    assert 'status' in response
-    assert response['status'] == 'completed'
-    assert 'next' not in response
-    assert len(response['data']) > 0
-    assert 'markdown' in response['data'][0]
-    assert "_Roast_" in response['data'][0]['markdown']
-    assert 'content' not in response['data'][0]
-    assert 'html' not in response['data'][0]
-    assert 'rawHtml' not in response['data'][0]
-    assert 'screenshot' not in response['data'][0]
-    assert 'links' not in response['data'][0]
-    assert 'metadata' in response['data'][0]
-    assert 'title' in response['data'][0]['metadata']
-    assert 'description' in response['data'][0]['metadata']
-    assert 'language' in response['data'][0]['metadata']
-    assert 'sourceURL' in response['data'][0]['metadata']
-    assert 'statusCode' in response['data'][0]['metadata']
-    assert 'error' not in response['data'][0]['metadata']
-
-def test_crawl_url_with_options_and_wait_for_completion():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    response = app.crawl_url('https://roastmywebsite.ai', {
-        'excludePaths': ['blog/*'],
-        'includePaths': ['/'],
-        'maxDepth': 2,
-        'ignoreSitemap': True,
-        'limit': 10,
-        'allowBackwardLinks': True,
-        'allowExternalLinks': True,
-        'scrapeOptions': {
-            'formats': ['markdown', 'html', 'rawHtml', 'screenshot', 'links'],
-            'headers': {"x-key": "test"},
-            'includeTags': ['h1'],
-            'excludeTags': ['h2'],
-            'onlyMainContent': True,
-            'waitFor': 1000
-        }
-    }, True, 30)
-    assert response is not None
-    assert 'total' in response
-    assert response['total'] > 0
-    assert 'creditsUsed' in response
-    assert response['creditsUsed'] > 0
-    assert 'expiresAt' in response
-    assert datetime.strptime(response['expiresAt'], '%Y-%m-%dT%H:%M:%S.%fZ') > datetime.now()
-    assert 'status' in response
-    assert response['status'] == 'completed'
-    assert 'next' not in response
-    assert len(response['data']) > 0
-    assert 'markdown' in response['data'][0]
-    assert "_Roast_" in response['data'][0]['markdown']
-    assert 'content' not in response['data'][0]
-    assert 'html' in response['data'][0]
-    assert "<h1" in response['data'][0]['html']
-    assert 'rawHtml' in response['data'][0]
-    assert "<h1" in response['data'][0]['rawHtml']
-    assert 'screenshot' in response['data'][0]
-    assert "https://" in response['data'][0]['screenshot']
-    assert 'links' in response['data'][0]
-    assert len(response['data'][0]['links']) > 0
-    assert 'metadata' in response['data'][0]
-    assert 'title' in response['data'][0]['metadata']
-    assert 'description' in response['data'][0]['metadata']
-    assert 'language' in response['data'][0]['metadata']
-    assert 'sourceURL' in response['data'][0]['metadata']
-    assert 'statusCode' in response['data'][0]['metadata']
-    assert 'error' not in response['data'][0]['metadata']
-
-def test_crawl_url_with_idempotency_key_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    uniqueIdempotencyKey = str(uuid4())
-    response = app.crawl_url('https://roastmywebsite.ai', {'excludePaths': ['blog/*']}, False, 2, uniqueIdempotencyKey)
-    assert response is not None
-    assert 'id' in response
-
-    with pytest.raises(Exception) as excinfo:
-        app.crawl_url('https://firecrawl.dev', {'excludePaths': ['blog/*']}, True, 2, uniqueIdempotencyKey)
-    assert "Idempotency key already used" in str(excinfo.value)
-
-def test_check_crawl_status_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    response = app.crawl_url('https://firecrawl.dev', {'scrapeOptions': {'formats': ['markdown', 'html', 'rawHtml', 'screenshot', 'links']}}, False)
-    assert response is not None
-    assert 'id' in response
-
-    max_checks = 15
-    checks = 0
-    status_response = app.check_crawl_status(response['id'])
-
-    while status_response['status'] == 'scraping' and checks < max_checks:
-        time.sleep(1) # wait for 1 second
-        assert 'partial_data' not in status_response
-        assert 'current' not in status_response
-        assert 'data' in status_response
-        assert 'total' in status_response
-        assert 'creditsUsed' in status_response
-        assert 'expiresAt' in status_response
-        assert 'status' in status_response
-        assert 'next' in status_response
-        assert status_response['total'] > 0
-        assert status_response['creditsUsed'] > 0
-        assert datetime.strptime(status_response['expiresAt'], '%Y-%m-%dT%H:%M:%S.%fZ') > datetime.now()
-        assert status_response['status'] == 'scraping'
-        assert '/v1/crawl/' in status_response['next']
-        status_response = app.check_crawl_status(response['id'])
-        checks += 1
-
-    assert status_response is not None
-    assert 'total' in status_response
-    assert status_response['total'] > 0
-    assert 'creditsUsed' in status_response
-    assert status_response['creditsUsed'] > 0
-    assert 'expiresAt' in status_response
-    assert datetime.strptime(status_response['expiresAt'], '%Y-%m-%dT%H:%M:%S.%fZ') > datetime.now()
-    assert 'status' in status_response
-    assert status_response['status'] == 'completed'
-    assert len(status_response['data']) > 0
-    assert 'markdown' in status_response['data'][0]
-    assert len(status_response['data'][0]['markdown']) > 10
-    assert 'content' not in status_response['data'][0]
-    assert 'html' in status_response['data'][0]
-    assert "<div" in status_response['data'][0]['html']
-    assert 'rawHtml' in status_response['data'][0]
-    assert "<div" in status_response['data'][0]['rawHtml']
-    assert 'screenshot' in status_response['data'][0]
-    assert "https://" in status_response['data'][0]['screenshot']
-    assert 'links' in status_response['data'][0]
-    assert status_response['data'][0]['links'] is not None
-    assert len(status_response['data'][0]['links']) > 0
-    assert 'metadata' in status_response['data'][0]
-    assert 'title' in status_response['data'][0]['metadata']
-    assert 'description' in status_response['data'][0]['metadata']
-    assert 'language' in status_response['data'][0]['metadata']
-    assert 'sourceURL' in status_response['data'][0]['metadata']
-    assert 'statusCode' in status_response['data'][0]['metadata']
-    assert 'error' not in status_response['data'][0]['metadata']
-
-def test_invalid_api_key_on_map():
-    invalid_app = FirecrawlApp(api_key="invalid_api_key", api_url=API_URL)
-    with pytest.raises(Exception) as excinfo:
-        invalid_app.map_url('https://roastmywebsite.ai')
-    assert "Unauthorized: Invalid token" in str(excinfo.value)
-
-def test_blocklisted_url_on_map():
-    app = FirecrawlApp(api_key=TEST_API_KEY, api_url=API_URL)
-    blocklisted_url = "https://facebook.com/fake-test"
-    with pytest.raises(Exception) as excinfo:
-        app.map_url(blocklisted_url)
-    assert "URL is blocked. Firecrawl currently does not support social media scraping due to policy restrictions." in str(excinfo.value)
-
-def test_successful_response_with_valid_preview_token_on_map():
-    app = FirecrawlApp(api_key="this_is_just_a_preview_token", api_url=API_URL)
-    response = app.map_url('https://roastmywebsite.ai')
-    assert response is not None
-    assert len(response) > 0
-
-def test_successful_response_for_valid_map():
-    app = FirecrawlApp(api_key=TEST_API_KEY, api_url=API_URL)
-    response = app.map_url('https://roastmywebsite.ai')
-    assert response is not None
-    assert len(response) > 0
-    assert any("https://" in link for link in response)
-    filtered_links = [link for link in response if "roastmywebsite.ai" in link]
-    assert len(filtered_links) > 0
-
-def test_search_e2e():
-    app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-    with pytest.raises(NotImplementedError) as excinfo:
-        app.search("test query")
-    assert "Search is not supported in v1" in str(excinfo.value)
-
-# def test_llm_extraction():
-#     app = FirecrawlApp(api_url=API_URL, api_key=TEST_API_KEY)
-#     response = app.scrape_url("https://mendable.ai", {
-#         'extractorOptions': {
-#             'mode': 'llm-extraction',
-#             'extractionPrompt': "Based on the information on the page, find what the company's mission is and whether it supports SSO, and whether it is open source",
-#             'extractionSchema': {
-#                 'type': 'object',
-#                 'properties': {
-#                     'company_mission': {'type': 'string'},
-#                     'supports_sso': {'type': 'boolean'},
-#                     'is_open_source': {'type': 'boolean'}
-#                 },
-#                 'required': ['company_mission', 'supports_sso', 'is_open_source']
-#             }
-#         }
-#     })
-#     assert response is not None
-#     assert 'llm_extraction' in response
-#     llm_extraction = response['llm_extraction']
-#     assert 'company_mission' in llm_extraction
-#     assert isinstance(llm_extraction['supports_sso'], bool)
-#     assert isinstance(llm_extraction['is_open_source'], bool)
-
-
-
@@ -1 +0,0 @@
-firecrawl