universal-mcp-applications 0.1.17__py3-none-any.whl → 0.1.33__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of universal-mcp-applications might be problematic.

Files changed (143)
  1. universal_mcp/applications/BEST_PRACTICES.md +166 -0
  2. universal_mcp/applications/ahrefs/README.md +3 -3
  3. universal_mcp/applications/airtable/README.md +3 -3
  4. universal_mcp/applications/airtable/app.py +0 -1
  5. universal_mcp/applications/apollo/app.py +0 -1
  6. universal_mcp/applications/asana/README.md +3 -3
  7. universal_mcp/applications/aws_s3/README.md +29 -0
  8. universal_mcp/applications/aws_s3/app.py +40 -39
  9. universal_mcp/applications/bill/README.md +249 -0
  10. universal_mcp/applications/browser_use/README.md +1 -0
  11. universal_mcp/applications/browser_use/__init__.py +0 -0
  12. universal_mcp/applications/browser_use/app.py +71 -0
  13. universal_mcp/applications/calendly/README.md +45 -45
  14. universal_mcp/applications/calendly/app.py +125 -125
  15. universal_mcp/applications/canva/README.md +35 -35
  16. universal_mcp/applications/canva/app.py +95 -99
  17. universal_mcp/applications/clickup/README.md +4 -4
  18. universal_mcp/applications/confluence/app.py +0 -1
  19. universal_mcp/applications/contentful/README.md +1 -2
  20. universal_mcp/applications/contentful/app.py +4 -5
  21. universal_mcp/applications/crustdata/README.md +3 -3
  22. universal_mcp/applications/domain_checker/README.md +2 -2
  23. universal_mcp/applications/domain_checker/app.py +11 -15
  24. universal_mcp/applications/e2b/README.md +4 -4
  25. universal_mcp/applications/e2b/app.py +4 -4
  26. universal_mcp/applications/elevenlabs/README.md +3 -77
  27. universal_mcp/applications/elevenlabs/app.py +18 -15
  28. universal_mcp/applications/exa/README.md +7 -7
  29. universal_mcp/applications/exa/app.py +17 -17
  30. universal_mcp/applications/falai/README.md +13 -12
  31. universal_mcp/applications/falai/app.py +34 -35
  32. universal_mcp/applications/figma/README.md +3 -3
  33. universal_mcp/applications/file_system/README.md +13 -0
  34. universal_mcp/applications/file_system/app.py +9 -9
  35. universal_mcp/applications/firecrawl/README.md +9 -9
  36. universal_mcp/applications/firecrawl/app.py +46 -46
  37. universal_mcp/applications/fireflies/README.md +14 -14
  38. universal_mcp/applications/fireflies/app.py +164 -57
  39. universal_mcp/applications/fpl/README.md +12 -12
  40. universal_mcp/applications/fpl/app.py +54 -55
  41. universal_mcp/applications/ghost_content/app.py +0 -1
  42. universal_mcp/applications/github/README.md +10 -10
  43. universal_mcp/applications/github/app.py +50 -52
  44. universal_mcp/applications/google_calendar/README.md +10 -10
  45. universal_mcp/applications/google_calendar/app.py +50 -49
  46. universal_mcp/applications/google_docs/README.md +14 -14
  47. universal_mcp/applications/google_docs/app.py +307 -233
  48. universal_mcp/applications/google_drive/README.md +54 -57
  49. universal_mcp/applications/google_drive/app.py +270 -261
  50. universal_mcp/applications/google_gemini/README.md +3 -14
  51. universal_mcp/applications/google_gemini/app.py +15 -18
  52. universal_mcp/applications/google_mail/README.md +20 -20
  53. universal_mcp/applications/google_mail/app.py +110 -109
  54. universal_mcp/applications/google_searchconsole/README.md +10 -10
  55. universal_mcp/applications/google_searchconsole/app.py +37 -37
  56. universal_mcp/applications/google_sheet/README.md +25 -25
  57. universal_mcp/applications/google_sheet/app.py +270 -266
  58. universal_mcp/applications/hashnode/README.md +6 -3
  59. universal_mcp/applications/hashnode/app.py +174 -25
  60. universal_mcp/applications/http_tools/README.md +5 -5
  61. universal_mcp/applications/http_tools/app.py +10 -11
  62. universal_mcp/applications/hubspot/api_segments/__init__.py +0 -0
  63. universal_mcp/applications/hubspot/api_segments/api_segment_base.py +54 -0
  64. universal_mcp/applications/hubspot/api_segments/crm_api.py +7337 -0
  65. universal_mcp/applications/hubspot/api_segments/marketing_api.py +1467 -0
  66. universal_mcp/applications/hubspot/app.py +2 -15
  67. universal_mcp/applications/jira/app.py +0 -1
  68. universal_mcp/applications/klaviyo/README.md +0 -36
  69. universal_mcp/applications/linkedin/README.md +18 -4
  70. universal_mcp/applications/linkedin/app.py +763 -162
  71. universal_mcp/applications/mailchimp/README.md +3 -3
  72. universal_mcp/applications/markitdown/app.py +10 -5
  73. universal_mcp/applications/ms_teams/README.md +31 -31
  74. universal_mcp/applications/ms_teams/app.py +151 -151
  75. universal_mcp/applications/neon/README.md +3 -3
  76. universal_mcp/applications/onedrive/README.md +24 -0
  77. universal_mcp/applications/onedrive/__init__.py +1 -0
  78. universal_mcp/applications/onedrive/app.py +338 -0
  79. universal_mcp/applications/openai/README.md +18 -17
  80. universal_mcp/applications/openai/app.py +40 -39
  81. universal_mcp/applications/outlook/README.md +9 -9
  82. universal_mcp/applications/outlook/app.py +307 -225
  83. universal_mcp/applications/perplexity/README.md +4 -4
  84. universal_mcp/applications/perplexity/app.py +4 -4
  85. universal_mcp/applications/posthog/README.md +128 -127
  86. universal_mcp/applications/reddit/README.md +21 -124
  87. universal_mcp/applications/reddit/app.py +51 -68
  88. universal_mcp/applications/resend/README.md +29 -29
  89. universal_mcp/applications/resend/app.py +116 -117
  90. universal_mcp/applications/rocketlane/app.py +0 -1
  91. universal_mcp/applications/scraper/README.md +7 -4
  92. universal_mcp/applications/scraper/__init__.py +1 -1
  93. universal_mcp/applications/scraper/app.py +341 -103
  94. universal_mcp/applications/semrush/README.md +3 -0
  95. universal_mcp/applications/serpapi/README.md +3 -3
  96. universal_mcp/applications/serpapi/app.py +14 -14
  97. universal_mcp/applications/sharepoint/README.md +19 -0
  98. universal_mcp/applications/sharepoint/app.py +285 -173
  99. universal_mcp/applications/shopify/app.py +0 -1
  100. universal_mcp/applications/shortcut/README.md +3 -3
  101. universal_mcp/applications/slack/README.md +23 -0
  102. universal_mcp/applications/slack/app.py +79 -48
  103. universal_mcp/applications/spotify/README.md +3 -3
  104. universal_mcp/applications/supabase/README.md +3 -3
  105. universal_mcp/applications/tavily/README.md +4 -4
  106. universal_mcp/applications/tavily/app.py +4 -4
  107. universal_mcp/applications/twilio/README.md +15 -0
  108. universal_mcp/applications/twitter/README.md +92 -89
  109. universal_mcp/applications/twitter/api_segments/compliance_api.py +13 -15
  110. universal_mcp/applications/twitter/api_segments/dm_conversations_api.py +20 -20
  111. universal_mcp/applications/twitter/api_segments/dm_events_api.py +12 -12
  112. universal_mcp/applications/twitter/api_segments/likes_api.py +12 -12
  113. universal_mcp/applications/twitter/api_segments/lists_api.py +37 -39
  114. universal_mcp/applications/twitter/api_segments/spaces_api.py +24 -24
  115. universal_mcp/applications/twitter/api_segments/trends_api.py +4 -4
  116. universal_mcp/applications/twitter/api_segments/tweets_api.py +105 -105
  117. universal_mcp/applications/twitter/api_segments/usage_api.py +4 -4
  118. universal_mcp/applications/twitter/api_segments/users_api.py +136 -136
  119. universal_mcp/applications/twitter/app.py +15 -11
  120. universal_mcp/applications/whatsapp/README.md +12 -12
  121. universal_mcp/applications/whatsapp/app.py +66 -67
  122. universal_mcp/applications/whatsapp/audio.py +39 -35
  123. universal_mcp/applications/whatsapp/whatsapp.py +176 -154
  124. universal_mcp/applications/whatsapp_business/README.md +23 -23
  125. universal_mcp/applications/whatsapp_business/app.py +92 -92
  126. universal_mcp/applications/yahoo_finance/README.md +17 -0
  127. universal_mcp/applications/yahoo_finance/__init__.py +1 -0
  128. universal_mcp/applications/yahoo_finance/app.py +300 -0
  129. universal_mcp/applications/youtube/README.md +46 -46
  130. universal_mcp/applications/youtube/app.py +208 -195
  131. universal_mcp/applications/zenquotes/README.md +1 -1
  132. universal_mcp/applications/zenquotes/__init__.py +2 -0
  133. universal_mcp/applications/zenquotes/app.py +5 -5
  134. {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.33.dist-info}/METADATA +5 -90
  135. {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.33.dist-info}/RECORD +137 -128
  136. universal_mcp/applications/replicate/README.md +0 -18
  137. universal_mcp/applications/replicate/__init__.py +0 -1
  138. universal_mcp/applications/replicate/app.py +0 -493
  139. universal_mcp/applications/unipile/README.md +0 -28
  140. universal_mcp/applications/unipile/__init__.py +0 -1
  141. universal_mcp/applications/unipile/app.py +0 -827
  142. {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.33.dist-info}/WHEEL +0 -0
  143. {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.33.dist-info}/licenses/LICENSE +0 -0
universal_mcp/applications/falai/app.py

@@ -3,7 +3,6 @@ from typing import Any, Literal
 
 from fal_client import AsyncClient, AsyncRequestHandle, Status
 from loguru import logger
-
 from universal_mcp.applications.application import APIApplication
 from universal_mcp.exceptions import NotAuthorizedError, ToolError
 from universal_mcp.integrations import Integration
@@ -29,7 +28,7 @@ class FalaiApp(APIApplication):
     @property
     def fal_client(self) -> AsyncClient:
         """
-        A cached property that lazily initializes an `AsyncClient` instance for API communication. It retrieves the API key from the configured integration, centralizing authentication for all Fal AI operations. Raises `NotAuthorizedError` if the key is missing.
+        A cached property that lazily initializes an `AsyncClient` instance. It retrieves the API key from the configured integration, providing a single, centralized authentication point for all methods that interact with the Fal AI API. Raises `NotAuthorizedError` if credentials are not found.
         """
         if self._fal_client is None:
             credentials = self.integration.get_credentials()
@@ -59,20 +58,20 @@ class FalaiApp(APIApplication):
     ) -> Any:
         """
         Executes a Fal AI application synchronously, waiting for completion and returning the result directly. This method is suited for short-running tasks, unlike `submit` which queues a job for asynchronous processing and returns a request ID instead of the final output.
-
+
         Args:
             arguments: A dictionary of arguments for the application
             application: The name or ID of the Fal application (defaults to 'fal-ai/flux/dev')
             path: Optional subpath for the application endpoint
             timeout: Optional timeout in seconds for the request
             hint: Optional hint for runner selection
-
+
         Returns:
             The result of the application execution as a Python object (converted from JSON response)
-
+
         Raises:
             ToolError: Raised when the Fal API request fails, wrapping the original exception
-
+
         Tags:
             run, execute, ai, synchronous, fal, important
         """
@@ -101,8 +100,8 @@ class FalaiApp(APIApplication):
         priority: Priority | None = None,
     ) -> str:
         """
-        Submits a job to the Fal AI queue for asynchronous processing. It immediately returns a unique request ID for tracking the job's lifecycle with the `status`, `result`, and `cancel` methods. Unlike the synchronous `run` method, this function does not wait for the job's completion.
-
+        Submits a job to the Fal AI queue for asynchronous processing, immediately returning a request ID. This contrasts with the `run` method, which waits for completion. The returned ID is used by `check_status`, `get_result`, and `cancel` to manage the job's lifecycle.
+
         Args:
             arguments: A dictionary of arguments for the application
             application: The name or ID of the Fal application, defaulting to 'fal-ai/flux/dev'
@@ -110,13 +109,13 @@ class FalaiApp(APIApplication):
             hint: Optional hint for runner selection
             webhook_url: Optional URL to receive a webhook when the request completes
             priority: Optional queue priority ('normal' or 'low')
-
+
         Returns:
             The request ID (str) of the submitted asynchronous job
-
+
         Raises:
             ToolError: Raised when the Fal API request fails, wrapping the original exception
-
+
         Tags:
             submit, async_job, start, ai, queue
         """
@@ -147,18 +146,18 @@ class FalaiApp(APIApplication):
     ) -> Status:
         """
         Checks the execution state (e.g., Queued, InProgress) of an asynchronous Fal AI job using its request ID. It provides a non-blocking way to monitor jobs initiated via `submit` without fetching the final `result`, and can optionally include logs.
-
+
         Args:
             request_id: The unique identifier of the submitted request, obtained from a previous submit operation
             application: The name or ID of the Fal application (defaults to 'fal-ai/flux/dev')
             with_logs: Boolean flag to include execution logs in the status response (defaults to False)
-
+
         Returns:
             A Status object containing the current state of the request (Queued, InProgress, or Completed)
-
+
         Raises:
             ToolError: Raised when the Fal API request fails or when the provided request ID is invalid
-
+
         Tags:
             status, check, async_job, monitoring, ai
         """
@@ -181,18 +180,18 @@ class FalaiApp(APIApplication):
         self, request_id: str, application: str = "fal-ai/flux/dev"
     ) -> Any:
         """
-        Fetches the final output for an asynchronous job, identified by its request_id. This function blocks execution, waiting for the job initiated by `submit` to complete before returning the result. It complements the non-blocking `status` check by providing a synchronous way to get a completed job's data.
-
+        Retrieves the final result of an asynchronous job, identified by its `request_id`. This function waits for the job, initiated via `submit`, to complete. Unlike the non-blocking `check_status`, this method blocks execution to fetch and return the job's actual output upon completion.
+
         Args:
             request_id: The unique identifier of the submitted request
             application: The name or ID of the Fal application (defaults to 'fal-ai/flux/dev')
-
+
         Returns:
             The result of the application execution, converted from JSON response to Python data structures (dict/list)
-
+
         Raises:
             ToolError: When the Fal API request fails or the request does not complete successfully
-
+
         Tags:
             result, async-job, status, wait, ai
         """
@@ -215,18 +214,18 @@ class FalaiApp(APIApplication):
         self, request_id: str, application: str = "fal-ai/flux/dev"
    ) -> None:
         """
-        Asynchronously cancels a running or queued Fal AI job identified by its `request_id`. This function complements the `submit` method, providing a way to terminate asynchronous tasks before completion. API errors during the cancellation process are wrapped in a `ToolError`.
-
+        Asynchronously cancels a running or queued Fal AI job using its `request_id`. This function complements the `submit` method, providing a way to terminate asynchronous tasks before completion. It raises a `ToolError` if the cancellation request fails.
+
         Args:
             request_id: The unique identifier of the submitted Fal AI request to cancel
             application: The name or ID of the Fal application (defaults to 'fal-ai/flux/dev')
-
+
         Returns:
             None. The function doesn't return any value.
-
+
         Raises:
             ToolError: Raised when the cancellation request fails due to API errors or if the request cannot be cancelled
-
+
         Tags:
             cancel, async_job, ai, fal, management
         """
@@ -244,17 +243,17 @@ class FalaiApp(APIApplication):
 
     async def upload_file(self, path: str) -> str:
         """
-        Asynchronously uploads a local file from a specified path to the Fal Content Delivery Network (CDN). Upon success, it returns a public URL for the file, making it accessible for use as input in other Fal AI application requests. A `ToolError` is raised on failure.
-
+        Asynchronously uploads a local file to the Fal Content Delivery Network (CDN), returning a public URL. This URL makes the file accessible for use as input in other Fal AI job execution methods like `run` or `submit`. A `ToolError` is raised if the upload fails.
+
         Args:
             path: The absolute or relative path to the local file
-
+
         Returns:
             A string containing the public URL of the uploaded file on the CDN
-
+
         Raises:
             ToolError: If the file is not found or if the upload operation fails
-
+
         Tags:
             upload, file, cdn, storage, async, important
         """
@@ -280,8 +279,8 @@ class FalaiApp(APIApplication):
         hint: str | None = None,
     ) -> Any:
         """
-        A specialized wrapper for the `run` method that synchronously generates images using the 'fal-ai/flux/dev' model. It simplifies image creation with common parameters like `prompt` and `seed`, waits for the task to complete, and returns the result containing image URLs and metadata.
-
+        A specialized wrapper for the `run` method that synchronously generates images using the 'fal-ai/flux/dev' model. It simplifies image creation with common parameters like `prompt` and `seed`, waits for the task to complete, and directly returns the result containing image URLs and metadata.
+
         Args:
             prompt: The text prompt used to guide the image generation
             seed: Random seed for reproducible image generation (default: 6252023)
@@ -291,13 +290,13 @@ class FalaiApp(APIApplication):
             path: Subpath for the application endpoint (rarely used)
             timeout: Maximum time in seconds to wait for the request to complete
             hint: Hint string for runner selection
-
+
         Returns:
             A dictionary containing the generated image URLs and related metadata
-
+
         Raises:
             ToolError: When the image generation request fails or encounters an error
-
+
         Tags:
             generate, image, ai, async, important, flux, customizable, default
         """
universal_mcp/applications/figma/README.md

@@ -1,10 +1,10 @@
-# Figma MCP Server
+# FigmaApp MCP Server
 
-An MCP Server for the Figma API.
+An MCP Server for the FigmaApp API.
 
 ## 🛠️ Tool List
 
-This is automatically generated from OpenAPI schema for the Figma API.
+This is automatically generated from OpenAPI schema for the FigmaApp API.
 
 
 | Tool | Description |
universal_mcp/applications/file_system/README.md

@@ -0,0 +1,13 @@
+# FileSystemApp MCP Server
+
+An MCP Server for the FileSystemApp API.
+
+## 🛠️ Tool List
+
+This is automatically generated from OpenAPI schema for the FileSystemApp API.
+
+
+| Tool | Description |
+|------|-------------|
+| `read_file` | Asynchronously reads the entire content of a specified file in binary mode. This static method takes a file path and returns its data as a bytes object, serving as a fundamental file retrieval operation within the FileSystem application. |
+| `write_file` | Writes binary data to a specified file path. If no path is provided, it creates a unique temporary file in `/tmp`. The function returns a dictionary confirming success and providing metadata about the new file, including its path and size. |
universal_mcp/applications/file_system/app.py

@@ -17,18 +17,18 @@ class FileSystemApp(BaseApplication):
     async def read_file(file_path: str):
         """
         Asynchronously reads the entire content of a specified file in binary mode. This static method takes a file path and returns its data as a bytes object, serving as a fundamental file retrieval operation within the FileSystem application.
-
+
         Args:
             file_path (str): The path to the file to read.
-
+
         Returns:
             bytes: The file content as bytes.
-
+
         Raises:
             FileNotFoundError: If the file doesn't exist.
             IOError: If there's an error reading the file.
-
-        Tags:
+
+        Tags:
             important
         """
         with open(file_path, "rb") as f:
@@ -38,12 +38,12 @@ class FileSystemApp(BaseApplication):
     async def write_file(file_data: bytes, file_path: str = None):
         """
         Writes binary data to a specified file path. If no path is provided, it creates a unique temporary file in `/tmp`. The function returns a dictionary confirming success and providing metadata about the new file, including its path and size.
-
+
         Args:
             file_data (bytes): The data to write to the file.
             file_path (str, optional): The path where to write the file.
                 If None, generates a random path in /tmp. Defaults to None.
-
+
         Returns:
             dict: A dictionary containing the operation result with keys:
                 - status (str): "success" if the operation completed successfully
@@ -51,11 +51,11 @@ class FileSystemApp(BaseApplication):
                 - url (str): The file path where the data was written
                 - filename (str): The filename (same as url in this implementation)
                 - size (int): The size of the written data in bytes
-
+
         Raises:
             IOError: If there's an error writing the file.
             PermissionError: If there are insufficient permissions to write to the path.
-
+
         Tags:
             important
         """
universal_mcp/applications/firecrawl/README.md

@@ -9,12 +9,12 @@ This is automatically generated from OpenAPI schema for the FirecrawlApp API.
 
 | Tool | Description |
 |------|-------------|
-| `scrape_url` | Scrapes a single URL using Firecrawl and returns the extracted data. |
-| `search` | Performs a web search using Firecrawl's search capability. |
-| `start_crawl` | Starts a async crawl job for a given URL using Firecrawl. Returns the job ID immediately. |
-| `check_crawl_status` | Checks the status of a previously initiated async Firecrawl crawl job. |
-| `cancel_crawl` | Cancels a currently running Firecrawl crawl job. |
-| `start_batch_scrape` | Starts a batch scrape job for multiple URLs using Firecrawl. (Note: May map to multiple individual scrapes or a specific batch API endpoint if available) |
-| `check_batch_scrape_status` | Checks the status of a previously initiated Firecrawl batch scrape job. |
-| `quick_web_extract` | Performs a quick, synchronous extraction of data from one or more URLs using Firecrawl and returns the results directly. |
-| `check_extract_status` | Checks the status of a previously initiated Firecrawl extraction job. |
+| `scrape_url` | Synchronously scrapes a single URL, immediately returning its content. This provides a direct method for single-page scraping, contrasting with asynchronous, job-based functions like `start_crawl` (for entire sites) and `start_batch_scrape` (for multiple URLs). |
+| `search` | Executes a synchronous web search using the Firecrawl service for a given query. Unlike scrape_url which fetches a single page, this function discovers web content. It returns a dictionary of results on success or an error string on failure, raising exceptions for authorization or SDK issues. |
+| `start_crawl` | Starts an asynchronous Firecrawl job to crawl a website from a given URL, returning a job ID. Unlike the synchronous `scrape_url` for single pages, this function initiates a comprehensive, link-following crawl. Progress can be monitored using the `check_crawl_status` function with the returned ID. |
+| `check_crawl_status` | Retrieves the status of an asynchronous Firecrawl job using its unique ID. As the counterpart to `start_crawl`, this function exclusively monitors website crawl progress, distinct from status checkers for batch scraping or data extraction jobs. Returns job details on success or an error message on failure. |
+| `cancel_crawl` | Cancels a running asynchronous Firecrawl crawl job using its unique ID. As a lifecycle management tool for jobs initiated by `start_crawl`, it returns a confirmation status upon success or an error message on failure, distinguishing it from controls for other job types. |
+| `start_batch_scrape` | Initiates an asynchronous Firecrawl job to scrape a list of URLs. It returns a job ID for tracking with `check_batch_scrape_status`. Unlike the synchronous `scrape_url` which processes a single URL, this function handles bulk scraping and doesn't wait for completion. |
+| `check_batch_scrape_status` | Checks the status of an asynchronous batch scrape job using its job ID. As the counterpart to `start_batch_scrape`, it specifically monitors multi-URL scraping tasks, distinct from checkers for site-wide crawls (`check_crawl_status`) or AI-driven extractions (`check_extract_status`). Returns detailed progress or an error message. |
+| `quick_web_extract` | Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous jobs like `start_crawl`, it returns structured data directly. This function raises an exception on failure, contrasting with other methods in the class that return an error string upon failure. |
+| `check_extract_status` | Checks the status of an asynchronous, AI-powered Firecrawl data extraction job using its ID. Unlike `check_crawl_status` or `check_batch_scrape_status`, this function specifically monitors structured data extraction tasks, returning the job's progress or an error message on failure. |
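The rewritten table splits the tools into synchronous calls (`scrape_url`, `search`, `quick_web_extract`) and asynchronous start/check job pairs. The two call shapes can be contrasted with a hypothetical stub: `FakeFirecrawlApp` and its return payloads are invented for illustration and are not the real `FirecrawlApp` or the Firecrawl SDK.

```python
# Synchronous calls return data directly; asynchronous calls return a job ID
# that a matching status checker polls later.
import uuid


class FakeFirecrawlApp:
    def __init__(self):
        self._jobs: dict[str, str] = {}

    def scrape_url(self, url: str) -> dict:
        # Synchronous: the scraped payload comes back immediately.
        return {"url": url, "markdown": "# stub page"}

    def start_crawl(self, url: str) -> dict:
        # Asynchronous: only a job ID is returned; work happens elsewhere.
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = "scraping"
        return {"id": job_id}

    def check_crawl_status(self, job_id: str) -> dict:
        # Poll with the ID returned by start_crawl.
        return {"id": job_id, "status": self._jobs.get(job_id, "unknown")}


app = FakeFirecrawlApp()
page = app.scrape_url("https://example.com")   # data, right away
job = app.start_crawl("https://example.com")   # just an ID
status = app.check_crawl_status(job["id"])
print(page["markdown"], status["status"])
```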
@@ -38,7 +38,7 @@ class FirecrawlApp(APIApplication):
38
38
  @property
39
39
  def firecrawl_api_key(self) -> str:
40
40
  """
41
- A property that lazily retrieves and caches the Firecrawl API key from the configured integration. On first access, it fetches credentials and raises a `NotAuthorizedError` if the key is unobtainable, ensuring all subsequent API calls are properly authenticated.
41
+ A property that lazily retrieves and caches the Firecrawl API key from the configured integration. On first access, it fetches credentials and raises a `NotAuthorizedError` if the key is unobtainable, ensuring all subsequent API calls within the application are properly authenticated before execution.
42
42
  """
43
43
  if self._firecrawl_api_key is None:
44
44
  if not self.integration:
@@ -166,19 +166,19 @@ class FirecrawlApp(APIApplication):
166
166
 
167
167
  def scrape_url(self, url: str) -> Any:
168
168
  """
169
- Synchronously scrapes a single web page's content using the Firecrawl service. This function executes immediately and returns the extracted data, unlike the asynchronous `start_batch_scrape` or `start_crawl` jobs which require status checks. Returns an error message on failure.
170
-
169
+ Synchronously scrapes a single URL, immediately returning its content. This provides a direct method for single-page scraping, contrasting with asynchronous, job-based functions like `start_crawl` (for entire sites) and `start_batch_scrape` (for multiple URLs).
170
+
171
171
  Args:
172
172
  url: The URL of the web page to scrape.
173
-
173
+
174
174
  Returns:
175
175
  A dictionary containing the scraped data on success,
176
176
  or a string containing an error message on failure.
177
-
177
+
178
178
  Raises:
179
179
  NotAuthorizedError: If API key is missing or invalid.
180
180
  ToolError: If the Firecrawl SDK is not installed.
181
-
181
+
182
182
  Tags:
183
183
  scrape, important
184
184
  """
@@ -198,19 +198,19 @@ class FirecrawlApp(APIApplication):
198
198
 
199
199
  def search(self, query: str) -> dict[str, Any] | str:
200
200
  """
201
- Executes a web search using the Firecrawl service for a specified query. It returns a dictionary of results on success or an error string on failure, raising specific exceptions for authorization or SDK installation issues. This provides a direct, synchronous method for information retrieval.
202
-
201
+ Executes a synchronous web search using the Firecrawl service for a given query. Unlike scrape_url which fetches a single page, this function discovers web content. It returns a dictionary of results on success or an error string on failure, raising exceptions for authorization or SDK issues.
202
+
203
203
  Args:
204
204
  query: The search query string.
205
-
205
+
206
206
  Returns:
207
207
  A dictionary containing the search results on success,
208
208
  or a string containing an error message on failure.
209
-
209
+
210
210
  Raises:
211
211
  NotAuthorizedError: If API key is missing or invalid.
212
212
  ToolError: If the Firecrawl SDK is not installed.
213
-
213
+
214
214
  Tags:
215
215
  search, important
216
216
  """
@@ -232,19 +232,19 @@ class FirecrawlApp(APIApplication):
232
232
  url: str,
233
233
  ) -> dict[str, Any] | str:
234
234
  """
235
- Initiates an asynchronous Firecrawl job to crawl a website starting from a given URL. It returns immediately with a job ID, which can be used with `check_crawl_status` to monitor progress. This differs from `scrape_url`, which performs a synchronous scrape of a single page.
236
-
235
+ Starts an asynchronous Firecrawl job to crawl a website from a given URL, returning a job ID. Unlike the synchronous `scrape_url` for single pages, this function initiates a comprehensive, link-following crawl. Progress can be monitored using the `check_crawl_status` function with the returned ID.
236
+
237
237
  Args:
238
238
  url: The starting URL for the crawl.
239
-
239
+
240
240
  Returns:
241
241
  A dictionary containing the job initiation response on success,
242
242
  or a string containing an error message on failure.
243
-
243
+
244
244
  Raises:
245
245
  NotAuthorizedError: If API key is missing or invalid.
246
246
  ToolError: If the Firecrawl SDK is not installed.
247
-
247
+
248
248
  Tags:
249
249
  crawl, async_job, start
250
250
  """
@@ -268,19 +268,19 @@ class FirecrawlApp(APIApplication):
 
  def check_crawl_status(self, job_id: str) -> dict[str, Any] | str:
  """
- Retrieves the status of an asynchronous Firecrawl crawl job using its unique ID. Returns a dictionary with the job's details on success or an error message on failure. This function specifically handles jobs initiated by `start_crawl`, distinct from checkers for batch scrapes or extractions.
-
+ Retrieves the status of an asynchronous Firecrawl job using its unique ID. As the counterpart to `start_crawl`, this function exclusively monitors website crawl progress, distinct from status checkers for batch scraping or data extraction jobs. Returns job details on success or an error message on failure.
+
  Args:
  job_id: The ID of the crawl job to check.
-
+
  Returns:
  A dictionary containing the job status details on success,
  or a string containing an error message on failure.
-
+
  Raises:
  NotAuthorizedError: If API key is missing or invalid.
  ToolError: If the Firecrawl SDK is not installed.
-
+
  Tags:
  crawl, async_job, status
  """
@@ -303,20 +303,20 @@ class FirecrawlApp(APIApplication):
 
  def cancel_crawl(self, job_id: str) -> dict[str, Any] | str:
  """
- Cancels a running asynchronous Firecrawl crawl job identified by its unique ID. As part of the crawl job lifecycle, this function terminates a process initiated by `start_crawl`, returning a confirmation status upon success or an error message if the cancellation fails or is not supported.
-
+ Cancels a running asynchronous Firecrawl crawl job using its unique ID. As a lifecycle management tool for jobs initiated by `start_crawl`, it returns a confirmation status upon success or an error message on failure, distinguishing it from controls for other job types.
+
  Args:
  job_id: The ID of the crawl job to cancel.
-
+
  Returns:
  A dictionary confirming the cancellation status on success,
  or a string containing an error message on failure.
  (Note: This functionality might depend on Firecrawl API capabilities)
-
+
  Raises:
  NotAuthorizedError: If API key is missing or invalid.
  ToolError: If the Firecrawl SDK is not installed or operation not supported.
-
+
  Tags:
  crawl, async_job, management, cancel
  """
@@ -342,19 +342,19 @@ class FirecrawlApp(APIApplication):
  urls: list[str],
  ) -> dict[str, Any] | str:
  """
- Initiates an asynchronous batch job to scrape a list of URLs using Firecrawl. It returns a response containing a job ID, which can be tracked with `check_batch_scrape_status`. This differs from the synchronous `scrape_url` which handles a single URL and returns data directly.
-
+ Initiates an asynchronous Firecrawl job to scrape a list of URLs. It returns a job ID for tracking with `check_batch_scrape_status`. Unlike the synchronous `scrape_url` which processes a single URL, this function handles bulk scraping and doesn't wait for completion.
+
  Args:
  urls: A list of URLs to scrape.
-
+
  Returns:
  A dictionary containing the job initiation response (e.g., a batch job ID or list of results/job IDs) on success,
  or a string containing an error message on failure.
-
+
  Raises:
  NotAuthorizedError: If API key is missing or invalid.
  ToolError: If the Firecrawl SDK is not installed.
-
+
  Tags:
  scrape, batch, async_job, start
  """
@@ -377,19 +377,19 @@ class FirecrawlApp(APIApplication):
 
  def check_batch_scrape_status(self, job_id: str) -> dict[str, Any] | str:
  """
- Checks the status of a previously initiated asynchronous Firecrawl batch scrape job using its job ID. It returns detailed progress information or an error message. This function is the counterpart to `start_batch_scrape` for monitoring multi-URL scraping tasks.
-
+ Checks the status of an asynchronous batch scrape job using its job ID. As the counterpart to `start_batch_scrape`, it specifically monitors multi-URL scraping tasks, distinct from checkers for site-wide crawls (`check_crawl_status`) or AI-driven extractions (`check_extract_status`). Returns detailed progress or an error message.
+
  Args:
  job_id: The ID of the batch scrape job to check.
-
+
  Returns:
  A dictionary containing the job status details on success,
  or a string containing an error message on failure.
-
+
  Raises:
  NotAuthorizedError: If API key is missing or invalid.
  ToolError: If the Firecrawl SDK is not installed or operation not supported.
-
+
  Tags:
  scrape, batch, async_job, status
  """
@@ -421,22 +421,22 @@ class FirecrawlApp(APIApplication):
  allow_external_links: bool | None = False,
  ) -> dict[str, Any]:
  """
- Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous job functions (e.g., `start_crawl`), it returns the structured data directly. This function raises `NotAuthorizedError` or `ToolError` on failure, contrasting with others that return an error string.
-
+ Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous jobs like `start_crawl`, it returns structured data directly. This function raises an exception on failure, contrasting with other methods in the class that return an error string upon failure.
+
  Args:
  urls: A list of URLs to extract data from.
  prompt: Optional custom extraction prompt describing what data to extract.
  schema: Optional JSON schema or Pydantic model for the desired output structure.
  system_prompt: Optional system context for the extraction.
  allow_external_links: Optional boolean to allow following external links.
-
+
  Returns:
  A dictionary containing the extracted data on success.
-
+
  Raises:
  NotAuthorizedError: If API key is missing or invalid.
  ToolError: If the Firecrawl SDK is not installed or extraction fails.
-
+
  Tags:
  extract, ai, sync, quick, important
  """
@@ -476,19 +476,19 @@ class FirecrawlApp(APIApplication):
 
  def check_extract_status(self, job_id: str) -> dict[str, Any] | str:
  """
- Checks the status of a specific asynchronous, AI-powered data extraction job on Firecrawl using its job ID. This is distinct from `check_crawl_status` for web crawling and `check_batch_scrape_status` for bulk scraping, as it specifically monitors AI-driven extractions.
-
+ Checks the status of an asynchronous, AI-powered Firecrawl data extraction job using its ID. Unlike `check_crawl_status` or `check_batch_scrape_status`, this function specifically monitors structured data extraction tasks, returning the job's progress or an error message on failure.
+
  Args:
  job_id: The ID of the extraction job to check.
-
+
  Returns:
  A dictionary containing the job status details on success,
  or a string containing an error message on failure.
-
+
  Raises:
  NotAuthorizedError: If API key is missing or invalid.
  ToolError: If the Firecrawl SDK is not installed or operation not supported.
-
+
  Tags:
  extract, ai, async_job, status
  """
@@ -9,17 +9,17 @@ This is automatically generated from OpenAPI schema for the FirefliesApp API.
 
  | Tool | Description |
  |------|-------------|
- | `get_team_analytics` | Fetches team analytics data within a specified time range. |
- | `get_ai_apps_outputs` | Fetches AI Apps outputs for a given transcript. |
- | `get_user_details` | Fetches details for a specific user. |
- | `list_users` | Fetches a list of users in the workspace. |
- | `get_transcript_details` | Fetches details for a specific transcript. |
- | `list_transcripts` | Fetches a list of transcripts, optionally filtered by user ID. |
- | `get_bite_details` | Fetches details for a specific bite (soundbite/clip). |
- | `list_bites` | Fetches a list of bites, optionally filtered to the current user's bites. |
- | `add_to_live_meeting` | Adds Fireflies.ai to a live meeting. |
- | `create_bite` | Creates a bite (soundbite/clip) from a transcript. |
- | `delete_transcript` | Deletes a transcript. |
- | `set_user_role` | Sets the role for a user. |
- | `upload_audio` | Uploads an audio file for transcription. |
- | `update_meeting_title` | Updates the title of a meeting (transcript). |
+ | `get_team_conversation_analytics` | Queries the Fireflies.ai API for team conversation analytics, specifically the average number of filler words. The data retrieval can optionally be filtered by a start and end time. Returns a dictionary containing the fetched analytics. |
+ | `get_transcript_ai_outputs` | Retrieves all AI-generated application outputs, such as summaries or analyses, associated with a specific transcript ID. It fetches the detailed prompt and response data for each AI app that has processed the transcript, providing a complete record of AI-generated content. |
+ | `get_user_details` | Fetches details, such as name and integrations, for a single user identified by their unique ID. This function queries for a specific user, differentiating it from `list_users` which retrieves a list of all users in the workspace. |
+ | `list_users` | Retrieves a list of all users in the workspace, returning each user's name and configured integrations. It provides a complete team roster, differing from `get_user_details`, which fetches information for a single user by their ID. |
+ | `get_transcript_details` | Queries the Fireflies API for a single transcript's details, such as title and ID, using its unique identifier. It fetches one specific entry, distinguishing it from `list_transcripts`, which retrieves a collection, and from `get_ai_apps_outputs` which gets AI data from a transcript. |
+ | `list_transcripts` | Fetches a list of meeting transcripts, returning the title and ID for each. The list can be filtered to return only transcripts for a specific user. This function complements `get_transcript_details`, which retrieves a single transcript by its unique ID. |
+ | `get_bite_details` | Retrieves detailed information for a specific bite (soundbite/clip) using its unique ID. It fetches data including the user ID, name, processing status, and summary. This provides a focused view of a single bite, distinguishing it from `list_bites` which fetches a collection of bites. |
+ | `list_bites` | Retrieves a list of soundbites (clips) from the Fireflies API. An optional 'mine' parameter filters for soundbites belonging only to the authenticated user. Differentiates from 'get_bite_details' by fetching multiple items rather than a single one by ID. |
+ | `add_to_live_meeting` | Executes a GraphQL mutation to make the Fireflies.ai notetaker join a live meeting specified by its URL. This action initiates the bot's recording and transcription process for the ongoing session and returns a success confirmation. |
+ | `create_soundbite_from_transcript` | Creates a soundbite/clip from a specified segment of a transcript using its ID, start, and end times. This function executes a GraphQL mutation, returning details of the newly created bite, such as its ID and processing status. |
+ | `delete_transcript` | Permanently deletes a specific transcript from Fireflies.ai using its ID. This destructive operation executes a GraphQL mutation and returns a dictionary containing the details of the transcript (e.g., title, date) as it existed just before being removed, confirming the action. |
+ | `set_user_role` | Assigns a new role (e.g., 'admin', 'member') to a user specified by their ID. This function executes a GraphQL mutation to modify user data and returns a dictionary with the user's updated name and admin status to confirm the change. |
+ | `transcribe_audio_from_url` | Submits an audio file from a URL to the Fireflies.ai API for transcription. It can optionally associate a title and a list of attendees with the audio, returning the upload status and details upon completion. |
+ | `update_transcript_title` | Updates the title of a specific transcript, identified by its ID, to a new value. This function executes a GraphQL mutation and returns a dictionary containing the newly assigned title upon successful completion of the request. |
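The Firecrawl docstrings in this diff repeatedly describe the same start/poll job lifecycle (`start_crawl` returns a job ID, `check_crawl_status` monitors it). The sketch below illustrates that pattern in isolation; `FakeCrawlClient` and `wait_for_crawl` are hypothetical stand-ins for demonstration only, not part of the real Firecrawl SDK or this package.

```python
# Minimal sketch of the async start/poll lifecycle described by the
# start_crawl / check_crawl_status docstrings. FakeCrawlClient is a
# stand-in that simulates the API, NOT the real Firecrawl SDK.
import itertools


class FakeCrawlClient:
    """Simulates start_crawl / check_crawl_status semantics."""

    def __init__(self, polls_until_done: int = 3):
        self._ids = itertools.count(1)
        self._poll_counts: dict[str, int] = {}
        self._polls_until_done = polls_until_done

    def start_crawl(self, url: str) -> dict:
        # Returns immediately with a job ID, like the real async job.
        job_id = f"job-{next(self._ids)}"
        self._poll_counts[job_id] = 0
        return {"id": job_id, "url": url}

    def check_crawl_status(self, job_id: str) -> dict:
        # Reports "scraping" until enough polls have elapsed.
        self._poll_counts[job_id] += 1
        done = self._poll_counts[job_id] >= self._polls_until_done
        return {"id": job_id, "status": "completed" if done else "scraping"}


def wait_for_crawl(client, url: str, max_polls: int = 10) -> dict:
    """Start a crawl, then poll its status until completion or exhaustion."""
    job = client.start_crawl(url)
    for _ in range(max_polls):
        status = client.check_crawl_status(job["id"])
        if status["status"] == "completed":
            return status
    return {"id": job["id"], "status": "timed_out"}
```

A real integration would replace the fake client with the package's `FirecrawlApp` tools and add a sleep between polls; the control flow stays the same.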