universal-mcp-applications 0.1.17-py3-none-any.whl → 0.1.19-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of universal-mcp-applications might be problematic.

Files changed (88)
  1. universal_mcp/applications/ahrefs/README.md +3 -3
  2. universal_mcp/applications/airtable/README.md +3 -3
  3. universal_mcp/applications/asana/README.md +3 -3
  4. universal_mcp/applications/aws_s3/README.md +29 -0
  5. universal_mcp/applications/bill/README.md +249 -0
  6. universal_mcp/applications/calendly/README.md +45 -45
  7. universal_mcp/applications/canva/README.md +35 -35
  8. universal_mcp/applications/clickup/README.md +4 -4
  9. universal_mcp/applications/contentful/README.md +1 -2
  10. universal_mcp/applications/crustdata/README.md +3 -3
  11. universal_mcp/applications/domain_checker/README.md +2 -2
  12. universal_mcp/applications/e2b/README.md +4 -4
  13. universal_mcp/applications/elevenlabs/README.md +3 -77
  14. universal_mcp/applications/exa/README.md +7 -7
  15. universal_mcp/applications/falai/README.md +13 -12
  16. universal_mcp/applications/falai/app.py +6 -6
  17. universal_mcp/applications/figma/README.md +3 -3
  18. universal_mcp/applications/file_system/README.md +13 -0
  19. universal_mcp/applications/firecrawl/README.md +9 -9
  20. universal_mcp/applications/firecrawl/app.py +10 -10
  21. universal_mcp/applications/fireflies/README.md +14 -14
  22. universal_mcp/applications/fpl/README.md +12 -12
  23. universal_mcp/applications/fpl/app.py +5 -5
  24. universal_mcp/applications/github/README.md +10 -10
  25. universal_mcp/applications/github/app.py +9 -9
  26. universal_mcp/applications/google_calendar/README.md +10 -10
  27. universal_mcp/applications/google_calendar/app.py +10 -10
  28. universal_mcp/applications/google_docs/README.md +14 -14
  29. universal_mcp/applications/google_docs/app.py +12 -12
  30. universal_mcp/applications/google_drive/README.md +54 -57
  31. universal_mcp/applications/google_drive/app.py +38 -38
  32. universal_mcp/applications/google_gemini/README.md +3 -14
  33. universal_mcp/applications/google_gemini/app.py +13 -12
  34. universal_mcp/applications/google_mail/README.md +20 -20
  35. universal_mcp/applications/google_mail/app.py +19 -19
  36. universal_mcp/applications/google_searchconsole/README.md +10 -10
  37. universal_mcp/applications/google_searchconsole/app.py +8 -8
  38. universal_mcp/applications/google_sheet/README.md +25 -25
  39. universal_mcp/applications/google_sheet/app.py +20 -20
  40. universal_mcp/applications/http_tools/README.md +5 -5
  41. universal_mcp/applications/hubspot/__init__.py +1 -1
  42. universal_mcp/applications/hubspot/api_segments/__init__.py +0 -0
  43. universal_mcp/applications/hubspot/api_segments/api_segment_base.py +25 -0
  44. universal_mcp/applications/hubspot/api_segments/crm_api.py +7337 -0
  45. universal_mcp/applications/hubspot/api_segments/marketing_api.py +1467 -0
  46. universal_mcp/applications/hubspot/app.py +74 -146
  47. universal_mcp/applications/klaviyo/README.md +0 -36
  48. universal_mcp/applications/linkedin/README.md +4 -4
  49. universal_mcp/applications/linkedin/app.py +4 -4
  50. universal_mcp/applications/mailchimp/README.md +3 -3
  51. universal_mcp/applications/ms_teams/README.md +31 -31
  52. universal_mcp/applications/ms_teams/app.py +28 -28
  53. universal_mcp/applications/neon/README.md +3 -3
  54. universal_mcp/applications/openai/README.md +18 -17
  55. universal_mcp/applications/outlook/README.md +9 -9
  56. universal_mcp/applications/outlook/app.py +9 -9
  57. universal_mcp/applications/perplexity/README.md +4 -4
  58. universal_mcp/applications/posthog/README.md +128 -127
  59. universal_mcp/applications/reddit/README.md +21 -124
  60. universal_mcp/applications/reddit/app.py +90 -89
  61. universal_mcp/applications/replicate/README.md +10 -10
  62. universal_mcp/applications/resend/README.md +29 -29
  63. universal_mcp/applications/scraper/README.md +4 -4
  64. universal_mcp/applications/scraper/app.py +31 -31
  65. universal_mcp/applications/semrush/README.md +3 -0
  66. universal_mcp/applications/serpapi/README.md +3 -3
  67. universal_mcp/applications/sharepoint/README.md +17 -0
  68. universal_mcp/applications/sharepoint/app.py +7 -7
  69. universal_mcp/applications/shortcut/README.md +3 -3
  70. universal_mcp/applications/slack/README.md +23 -0
  71. universal_mcp/applications/spotify/README.md +3 -3
  72. universal_mcp/applications/supabase/README.md +3 -3
  73. universal_mcp/applications/tavily/README.md +4 -4
  74. universal_mcp/applications/twilio/README.md +15 -0
  75. universal_mcp/applications/twitter/README.md +92 -89
  76. universal_mcp/applications/twitter/app.py +11 -11
  77. universal_mcp/applications/unipile/README.md +17 -17
  78. universal_mcp/applications/unipile/app.py +14 -14
  79. universal_mcp/applications/whatsapp/README.md +12 -12
  80. universal_mcp/applications/whatsapp/app.py +13 -13
  81. universal_mcp/applications/whatsapp_business/README.md +23 -23
  82. universal_mcp/applications/youtube/README.md +46 -46
  83. universal_mcp/applications/youtube/app.py +7 -1
  84. universal_mcp/applications/zenquotes/README.md +1 -1
  85. {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.19.dist-info}/METADATA +2 -89
  86. {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.19.dist-info}/RECORD +88 -83
  87. {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.19.dist-info}/WHEEL +0 -0
  88. {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.19.dist-info}/licenses/LICENSE +0 -0
@@ -1,10 +1,10 @@
- # Crustdata MCP Server
+ # CrustdataApp MCP Server

- An MCP Server for the Crustdata API.
+ An MCP Server for the CrustdataApp API.

  ## 🛠️ Tool List

- This is automatically generated from OpenAPI schema for the Crustdata API.
+ This is automatically generated from OpenAPI schema for the CrustdataApp API.


  | Tool | Description |
@@ -9,5 +9,5 @@ This is automatically generated from OpenAPI schema for the DomainCheckerApp API

  | Tool | Description |
  |------|-------------|
- | `check_domain_tool` | Checks if a domain is available for registration by querying DNS records and RDAP data. |
- | `check_tlds_tool` | Checks a keyword across multiple top-level domains (TLDs) to find available domain names. |
+ | `check_domain_registration` | Determines a domain's availability by querying DNS and RDAP servers. For registered domains, it returns details like registrar and key dates. This function provides a comprehensive analysis for a single, fully qualified domain name, unlike `check_keyword_across_tlds_tool` which checks a keyword across multiple domains. |
+ | `find_available_domains_for_keyword` | Checks a keyword's availability across a predefined list of popular TLDs. Using DNS and RDAP lookups, it generates a summary report of available and taken domains. This bulk-check differs from `check_domain_registration`, which deeply analyzes a single, fully-qualified domain. |
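The two renamed domain-checker tools split the workflow into a deep single-domain check and a keyword sweep across TLDs. A minimal usage sketch under stated assumptions: the `DomainCheckerApp` import path, a pre-configured instance, synchronous calls, and the positional arguments are all inferred from the table above, not confirmed.

```python
# Sketch only: import path, call style, and arguments are assumptions inferred
# from the tool descriptions in the diff above.
from universal_mcp.applications.domain_checker.app import DomainCheckerApp

def availability_report(app: DomainCheckerApp, keyword: str) -> None:
    # Deep DNS/RDAP check of one fully qualified domain.
    print(app.check_domain_registration(f"{keyword}.com"))
    # Bulk sweep of the bare keyword across a predefined list of popular TLDs.
    print(app.find_available_domains_for_keyword(keyword))
```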
@@ -1,12 +1,12 @@
- # E2b MCP Server
+ # E2bApp MCP Server

- An MCP Server for the E2b API.
+ An MCP Server for the E2bApp API.

  ## 🛠️ Tool List

- This is automatically generated from OpenAPI schema for the E2b API.
+ This is automatically generated from OpenAPI schema for the E2bApp API.


  | Tool | Description |
  |------|-------------|
- | `execute_python_code` | Executes Python code in a sandbox environment and returns the formatted output |
+ | `execute_python_code` | Executes a Python code string in a secure E2B sandbox. It authenticates using the configured API key, runs the code, and returns a formatted string containing the execution's output (stdout/stderr). It raises specific exceptions for authorization failures or general execution issues. |
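The expanded `execute_python_code` description amounts to: pass a code string, get back a formatted stdout/stderr string. A small sketch, assuming the `E2bApp` import path and a configured instance, and treating the method as synchronous; none of that is confirmed by the diff.

```python
# Hypothetical sketch; only the method name and its code-string argument come
# from the diff above.
from universal_mcp.applications.e2b.app import E2bApp

def run_snippet(app: E2bApp) -> str:
    code = "print(sum(range(10)))"
    # Returns a formatted string containing the sandbox's stdout/stderr.
    return app.execute_python_code(code)
```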
@@ -9,80 +9,6 @@ This is automatically generated from OpenAPI schema for the ElevenlabsApp API.

  | Tool | Description |
  |------|-------------|
- | `get_generated_items` | Retrieves historical data based on specified parameters, including page size and voice ID, using the "GET" method at the "/v1/history" endpoint. |
- | `get_history_item_by_id` | Retrieves a specific history item by its identifier using the API defined at "/v1/history/{history_item_id}" with the GET method. |
- | `delete_history_item` | Deletes a specific history item identified by its ID using the DELETE method. |
- | `get_audio_from_history_item` | Retrieves audio data for a specific history item identified by `{history_item_id}` using the `GET` method at the `/v1/history/{history_item_id}/audio` endpoint. |
- | `download_history_items` | Initiates a historical data download process and returns a success status upon completion. |
- | `delete_sample` | Deletes a specific voice sample identified by the `sample_id` from a voice with the given `voice_id` using the DELETE method. |
- | `get_audio_from_sample` | Retrieves the audio file for a specific sample associated with a given voice using the specified voice_id and sample_id. |
- | `convert` | Converts text into speech using a specified voice, allowing for optimization of streaming latency and selection of output format. |
- | `text_to_speech_with_timestamps` | Generates speech from text with precise character or word-level timing information using the specified voice, supporting audio-text synchronization through timestamps. |
- | `convert_as_stream` | Converts text to speech stream using the specified voice ID with configurable latency and output format. |
- | `stream_text_with_timestamps` | Converts text to speech using the specified voice ID, streaming the audio output with timestamps. |
- | `voice_generation_parameters` | Retrieves the parameters required for generating voice using the specified API endpoint. |
- | `generate_arandom_voice` | Generates an audio file by converting text into speech using a specified voice, allowing for customizable voice selection and text input. |
- | `create_voice_model` | Generates a custom voice using the provided parameters via the "/v1/voice-generation/create-voice" endpoint by sending a POST request, allowing users to create unique voice models. |
- | `create_previews` | Generates a voice preview from a given text prompt using the ElevenLabs API. |
- | `create_voice_from_preview` | Creates a new voice entry in the voice library using a generated preview ID and returns voice details. |
- | `get_user_subscription_info` | Retrieves the user's subscription details from the API. |
- | `get_user_info` | Retrieves user information from the API. |
- | `get_voices` | Retrieves a list of voices using the "GET" method at the "/v1/voices" API endpoint. |
- | `get_default_voice_settings` | Retrieves the default voice settings using the "GET" method at the "/v1/voices/settings/default" endpoint. |
- | `get_voice_settings` | Retrieves voice settings for a specific voice identified by `{voice_id}` using the "GET" method, returning the current configuration for that voice. |
- | `get_voice` | Retrieves the details of a specific voice by its ID using the "GET" method at the "/v1/voices/{voice_id}" endpoint. |
- | `delete_voice` | Deletes a voice with the specified ID using the DELETE method at the "/v1/voices/{voice_id}" endpoint. |
- | `edit_voice_settings` | Updates voice settings for a specified voice ID and returns a success status. |
- | `add_voice` | Adds a new voice entry to the voices collection using the provided data. |
- | `edit_voice` | Updates the specified voice by ID using a POST request and returns a success status upon completion. |
- | `add_sharing_voice` | Adds a voice associated with a public user ID and voice ID using the specified API endpoint. |
- | `get_shared_voices` | Retrieves a list of shared voices filtered by parameters like gender and language, with pagination support via page_size. |
- | `get_similar_library_voices` | Generates a list of similar voices using the POST method at the "/v1/similar-voices" endpoint. |
- | `get_aprofile_page` | Retrieves a unified customer profile by handle and returns the associated attributes, identifiers, and traits. |
- | `get_projects` | Retrieves a list of projects using the API defined at the "/v1/projects" endpoint via the GET method. |
- | `add_project` | Creates a new project and returns a status message. |
- | `get_project_by_id` | Retrieves information for a specific project identified by `{project_id}` using the API endpoint at "/v1/projects/{project_id}" via the GET method. |
- | `edit_basic_project_info` | Creates a new project resource by sending data to the specified project identifier using the POST method at the "/v1/projects/{project_id}" endpoint. |
- | `delete_project` | Deletes the specified project and returns a success status upon completion. |
- | `convert_project` | Converts a specified project identified by project_id and returns the conversion result. |
- | `get_project_snapshots` | Retrieves a list of snapshots associated with a specified project. |
- | `streams_archive_with_project_audio` | Archives a project snapshot using the specified project ID and snapshot ID and returns a success status. |
- | `add_chapter_to_aproject` | Adds a new chapter to a specified project using the provided project identifier and returns a success status upon completion. |
- | `update_project_pronunciations` | Updates pronunciation dictionaries for a specified project using the POST method, returning a successful status message upon completion. |
- | `get_chapters` | Retrieves a chapter for a specified project by ID using the GET method. |
- | `get_chapter_by_id` | Retrieves a specific chapter within a project identified by project_id and chapter_id. |
- | `delete_chapter` | Deletes a specific chapter within a project using the "DELETE" method. |
- | `convert_chapter` | Converts a chapter in a project using the POST method and returns a response upon successful conversion. |
- | `get_chapter_snapshots` | Retrieves a snapshot for a specific chapter within a project using the provided project and chapter IDs. |
- | `stream_chapter_audio` | Streams data from a specific chapter snapshot in a project using the API and returns a response indicating success. |
- | `dub_avideo_or_an_audio_file` | Initiates a dubbing process and returns a status message using the API defined at the "/v1/dubbing" endpoint via the POST method. |
- | `get_dubbing_project_metadata` | Retrieves the details of a specific dubbing job using the provided dubbing ID. |
- | `delete_dubbing_project` | Deletes a dubbing project with the specified ID and returns a success status upon completion. |
- | `get_transcript_for_dub` | Retrieves the transcript for a specific dubbing task in the requested language using the "GET" method. |
- | `get_models` | Retrieves a list of models using the GET method at the "/v1/models" endpoint. |
- | `post_audio_native` | Processes audio data using the audio-native API and returns a response. |
- | `get_characters_usage_metrics` | Retrieves character statistics within a specified time frame using the start and end Unix timestamps provided in the query parameters. |
- | `add_apronunciation_dictionary` | Creates a pronunciation dictionary from a lexicon file and returns its ID and metadata. |
- | `add_rules_to_dictionary` | Adds pronunciation rules to a specific pronunciation dictionary identified by its ID using the POST method. |
- | `remove_pronunciation_rules` | Removes specified pronunciation rules from a pronunciation dictionary using a POST request. |
- | `get_dictionary_version_file` | Retrieves and downloads a specific version of a pronunciation dictionary file using its dictionary ID and version ID. |
- | `get_pronunciation_dictionary` | Retrieves a specific pronunciation dictionary by its ID using the "GET" method from the "/v1/pronunciation-dictionaries/{pronunciation_dictionary_id}" endpoint. |
- | `get_pronunciation_dictionaries` | Retrieves a list of pronunciation dictionaries using the GET method at the "/v1/pronunciation-dictionaries" endpoint, allowing users to specify the number of items per page via the "page_size" query parameter. |
- | `invite_user` | Invites a user to join a workspace by sending an invitation, allowing them to access the specified workspace upon acceptance. |
- | `delete_existing_invitation` | Deletes a workspace invite and returns a success response upon completion. |
- | `update_member` | Adds members to a workspace and returns the updated member list upon success. |
- | `get_signed_url` | Generates a signed URL for initiating a conversation with a specific conversational AI agent, identified by the provided `agent_id`, using the ElevenLabs API. |
- | `create_agent` | Creates a conversational AI agent with specified configuration settings and returns the agent details. |
- | `get_agent` | Retrieves information about a specific conversational AI agent by its unique identifier using the GET method at the "/v1/convai/agents/{agent_id}" API endpoint. |
- | `delete_agent` | Deletes a specified Conversational AI agent using the DELETE method. |
- | `patches_an_agent_settings` | Updates an existing conversational AI agent's settings using the specified agent ID, allowing changes to properties such as the agent's name and tool configurations. |
- | `get_agent_widget_config` | Retrieves and configures the Convai widget for the specified agent, but the provided details do not specify the exact functionality of this specific endpoint, suggesting it may relate to integrating or customizing Convai's character interaction capabilities. |
- | `get_shareable_agent_link` | Retrieves and establishes a link for a Convai agent using the specified agent ID, facilitating integration or connectivity operations. |
- | `post_agent_avatar` | Creates and configures a Convai avatar for a specific agent using the POST method, though the exact details of this endpoint are not provided in the available documentation. |
- | `get_agent_knowledge_base` | Retrieves specific documentation for a knowledge base associated with an agent in Convai. |
- | `add_agent_secret` | Adds a secret to a specified conversational AI agent through the API and returns a status confirmation. |
- | `add_to_agent_sknowledge_base` | Adds new content to an agent's knowledge base by uploading a file or resource, which can be used to enhance the agent's conversational capabilities. |
- | `get_agents_page` | Retrieves a list of conversational AI agents available in the Convai system. |
- | `get_conversations` | Retrieves conversation history for a specified agent ID. |
- | `get_conversation_details` | Retrieves and formats the details of a specific conversation based on the provided conversation ID. |
- | `get_conversation_audio` | Retrieves the audio from a specific conversation using the ElevenLabs Conversational AI API. |
+ | `generate_speech_audio_url` | Converts a text string into speech using the ElevenLabs API. The function then saves the generated audio to a temporary MP3 file and returns a public URL to access it, rather than the raw audio bytes. |
+ | `speech_to_text` | Transcribes an audio file into text using the ElevenLabs API. It supports language specification and speaker diarization, providing the inverse operation to the audio-generating `text_to_speech` method. Note: The docstring indicates this is a placeholder for an undocumented endpoint. |
+ | `speech_to_speech` | Downloads an audio file from a URL and converts the speech into a specified target voice using the ElevenLabs API. This function transforms the speaker's voice in an existing recording and returns the new audio data as bytes, distinct from creating audio from text. |
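The three remaining ElevenLabs tools form a natural round trip: synthesize speech to a hosted MP3, then transcribe it back. A sketch under assumptions; the `ElevenlabsApp` import path, single-positional-argument signatures, and `speech_to_text` accepting a URL are not confirmed by the diff.

```python
# Hypothetical round trip; only the method names are taken from the table above.
from universal_mcp.applications.elevenlabs.app import ElevenlabsApp

def tts_round_trip(app: ElevenlabsApp, text: str) -> str:
    audio_url = app.generate_speech_audio_url(text)  # text -> temp MP3, public URL
    transcript = app.speech_to_text(audio_url)       # audio -> text (URL input assumed)
    return transcript
```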
@@ -1,15 +1,15 @@
- # Exa MCP Server
+ # ExaApp MCP Server

- An MCP Server for the Exa API.
+ An MCP Server for the ExaApp API.

  ## 🛠️ Tool List

- This is automatically generated from OpenAPI schema for the Exa API.
+ This is automatically generated from OpenAPI schema for the ExaApp API.


  | Tool | Description |
  |------|-------------|
- | `search` | Searches for data using the specified criteria and returns a list of results. |
- | `find_similar` | Finds and returns similar items using the API at "/findSimilar" via the POST method. |
- | `get_contents` | Creates new content entries via a POST request to the "/contents" endpoint. |
- | `answer` | Provides an answer to a query using the API endpoint at "/answer" via the POST method. |
+ | `search_with_filters` | Executes a query against the Exa API's `/search` endpoint, returning a list of results. This function supports extensive filtering by search type, category, domains, publication dates, and specific text content to refine the search query and tailor the API's response. |
+ | `find_similar_by_url` | Finds web pages semantically similar to a given URL. Unlike the `search` function, which uses a text query, this method takes a specific link and returns a list of related results, with options to filter by domain, publication date, and content. |
+ | `fetch_page_content` | Retrieves and processes content from a list of URLs, returning full text, summaries, or highlights. Unlike the search function which finds links, this function fetches the actual page content, with optional support for live crawling to get the most up-to-date information. |
+ | `answer` | Retrieves a direct, synthesized answer for a given query by calling the Exa `/answer` API endpoint. Unlike `search`, which returns web results, this function provides a conclusive response. It supports streaming, including source text, and selecting a search model. |
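The renamed Exa tools separate discovery (`search_with_filters`, `find_similar_by_url`), retrieval (`fetch_page_content`), and synthesis (`answer`). A chaining sketch, with the `ExaApp` import path, a configured instance, and the single-argument call style all assumed rather than confirmed.

```python
# Sketch only: method names come from the table above; argument shapes are guesses.
from universal_mcp.applications.exa.app import ExaApp

def research(app: ExaApp, question: str) -> None:
    hits = app.search_with_filters(question)                  # filtered web search
    similar = app.find_similar_by_url("https://example.com")  # semantic lookalikes
    pages = app.fetch_page_content(["https://example.com"])   # full text / summaries
    direct = app.answer(question)                             # synthesized answer
    print(hits, similar, pages, direct)
```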
@@ -1,17 +1,18 @@
- # Falai MCP Server
+ # FalaiApp MCP Server

- An MCP Server for the Falai API.
+ An MCP Server for the FalaiApp API.

  ## 🛠️ Tool List

- This is automatically generated from OpenAPI schema for the Falai API.
+ This is automatically generated from OpenAPI schema for the FalaiApp API.

- | Tool | Description |
- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
- | `run` | Run a Fal AI application directly and wait for the result. Suitable for short-running applications with synchronous execution from the caller's perspective. |
- | `submit` | Submits a request to the Fal AI queue for asynchronous processing and returns a request ID for tracking the job. |
- | `status` | Checks the status of a previously submitted Fal AI request and retrieves its current execution state |
- | `result` | Retrieves the result of a completed Fal AI request, waiting for completion if the request is still running. |
- | `cancel` | Asynchronously cancels a running or queued Fal AI request. |
- | `upload_file` | Uploads a local file to the Fal CDN and returns its public URL |
- | `generate_image` | Asynchronously generates images using the 'fal-ai/flux/dev' application with customizable parameters and default settings |
+
+ | Tool | Description |
+ |------|-------------|
+ | `run` | Executes a Fal AI application synchronously, waiting for completion and returning the result directly. This method is suited for short-running tasks, unlike `submit` which queues a job for asynchronous processing and returns a request ID instead of the final output. |
+ | `submit` | Submits a job to the Fal AI queue for asynchronous processing, immediately returning a request ID. This contrasts with the `run` method, which waits for completion. The returned ID is used by `check_status`, `get_result`, and `cancel` to manage the job's lifecycle. |
+ | `check_status` | Checks the execution state (e.g., Queued, InProgress) of an asynchronous Fal AI job using its request ID. It provides a non-blocking way to monitor jobs initiated via `submit` without fetching the final `result`, and can optionally include logs. |
+ | `get_result` | Retrieves the final result of an asynchronous job, identified by its `request_id`. This function waits for the job, initiated via `submit`, to complete. Unlike the non-blocking `check_status`, this method blocks execution to fetch and return the job's actual output upon completion. |
+ | `cancel` | Asynchronously cancels a running or queued Fal AI job using its `request_id`. This function complements the `submit` method, providing a way to terminate asynchronous tasks before completion. It raises a `ToolError` if the cancellation request fails. |
+ | `upload_file` | Asynchronously uploads a local file to the Fal Content Delivery Network (CDN), returning a public URL. This URL makes the file accessible for use as input in other Fal AI job execution methods like `run` or `submit`. A `ToolError` is raised if the upload fails. |
+ | `run_image_generation` | A specialized wrapper for the `run` method that synchronously generates images using the 'fal-ai/flux/dev' model. It simplifies image creation with common parameters like `prompt` and `seed`, waits for the task to complete, and directly returns the result containing image URLs and metadata. |
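The renamed Fal tools describe a queue-based job lifecycle: `submit` returns a request ID, `check_status` polls it, and `get_result` blocks for the output. A sketch of that flow, assuming every method is awaitable (the app wraps fal's `AsyncClient`) and that the keyword names match the docstring arguments shown in the app.py hunks below.

```python
# Lifecycle sketch; awaitable calls and keyword names are assumptions.
import asyncio
from universal_mcp.applications.falai.app import FalaiApp

async def generate(app: FalaiApp) -> None:
    request_id = await app.submit(arguments={"prompt": "a watercolor fox"})
    print(await app.check_status(request_id))  # non-blocking queue/progress check
    print(await app.get_result(request_id))    # blocks until the queued job finishes

# asyncio.run(generate(configured_app))
```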
@@ -29,7 +29,7 @@ class FalaiApp(APIApplication):
  @property
  def fal_client(self) -> AsyncClient:
  """
- A cached property that lazily initializes an `AsyncClient` instance for API communication. It retrieves the API key from the configured integration, centralizing authentication for all Fal AI operations. Raises `NotAuthorizedError` if the key is missing.
+ A cached property that lazily initializes an `AsyncClient` instance. It retrieves the API key from the configured integration, providing a single, centralized authentication point for all methods that interact with the Fal AI API. Raises `NotAuthorizedError` if credentials are not found.
  """
  if self._fal_client is None:
  credentials = self.integration.get_credentials()
@@ -101,7 +101,7 @@ class FalaiApp(APIApplication):
  priority: Priority | None = None,
  ) -> str:
  """
- Submits a job to the Fal AI queue for asynchronous processing. It immediately returns a unique request ID for tracking the job's lifecycle with the `status`, `result`, and `cancel` methods. Unlike the synchronous `run` method, this function does not wait for the job's completion.
+ Submits a job to the Fal AI queue for asynchronous processing, immediately returning a request ID. This contrasts with the `run` method, which waits for completion. The returned ID is used by `check_status`, `get_result`, and `cancel` to manage the job's lifecycle.

  Args:
  arguments: A dictionary of arguments for the application
@@ -181,7 +181,7 @@ class FalaiApp(APIApplication):
  self, request_id: str, application: str = "fal-ai/flux/dev"
  ) -> Any:
  """
- Fetches the final output for an asynchronous job, identified by its request_id. This function blocks execution, waiting for the job initiated by `submit` to complete before returning the result. It complements the non-blocking `status` check by providing a synchronous way to get a completed job's data.
+ Retrieves the final result of an asynchronous job, identified by its `request_id`. This function waits for the job, initiated via `submit`, to complete. Unlike the non-blocking `check_status`, this method blocks execution to fetch and return the job's actual output upon completion.

  Args:
  request_id: The unique identifier of the submitted request
@@ -215,7 +215,7 @@ class FalaiApp(APIApplication):
  self, request_id: str, application: str = "fal-ai/flux/dev"
  ) -> None:
  """
- Asynchronously cancels a running or queued Fal AI job identified by its `request_id`. This function complements the `submit` method, providing a way to terminate asynchronous tasks before completion. API errors during the cancellation process are wrapped in a `ToolError`.
+ Asynchronously cancels a running or queued Fal AI job using its `request_id`. This function complements the `submit` method, providing a way to terminate asynchronous tasks before completion. It raises a `ToolError` if the cancellation request fails.

  Args:
  request_id: The unique identifier of the submitted Fal AI request to cancel
@@ -244,7 +244,7 @@ class FalaiApp(APIApplication):

  async def upload_file(self, path: str) -> str:
  """
- Asynchronously uploads a local file from a specified path to the Fal Content Delivery Network (CDN). Upon success, it returns a public URL for the file, making it accessible for use as input in other Fal AI application requests. A `ToolError` is raised on failure.
+ Asynchronously uploads a local file to the Fal Content Delivery Network (CDN), returning a public URL. This URL makes the file accessible for use as input in other Fal AI job execution methods like `run` or `submit`. A `ToolError` is raised if the upload fails.

  Args:
  path: The absolute or relative path to the local file
@@ -280,7 +280,7 @@ class FalaiApp(APIApplication):
  hint: str | None = None,
  ) -> Any:
  """
- A specialized wrapper for the `run` method that synchronously generates images using the 'fal-ai/flux/dev' model. It simplifies image creation with common parameters like `prompt` and `seed`, waits for the task to complete, and returns the result containing image URLs and metadata.
+ A specialized wrapper for the `run` method that synchronously generates images using the 'fal-ai/flux/dev' model. It simplifies image creation with common parameters like `prompt` and `seed`, waits for the task to complete, and directly returns the result containing image URLs and metadata.

  Args:
  prompt: The text prompt used to guide the image generation
@@ -1,10 +1,10 @@
- # Figma MCP Server
+ # FigmaApp MCP Server

- An MCP Server for the Figma API.
+ An MCP Server for the FigmaApp API.

  ## 🛠️ Tool List

- This is automatically generated from OpenAPI schema for the Figma API.
+ This is automatically generated from OpenAPI schema for the FigmaApp API.


  | Tool | Description |
@@ -0,0 +1,13 @@
+ # FileSystemApp MCP Server
+
+ An MCP Server for the FileSystemApp API.
+
+ ## 🛠️ Tool List
+
+ This is automatically generated from OpenAPI schema for the FileSystemApp API.
+
+
+ | Tool | Description |
+ |------|-------------|
+ | `read_file` | Asynchronously reads the entire content of a specified file in binary mode. This static method takes a file path and returns its data as a bytes object, serving as a fundamental file retrieval operation within the FileSystem application. |
+ | `write_file` | Writes binary data to a specified file path. If no path is provided, it creates a unique temporary file in `/tmp`. The function returns a dictionary confirming success and providing metadata about the new file, including its path and size. |
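The new FileSystemApp exposes just a read and a write primitive. A minimal copy sketch, assuming the import path, that `write_file` follows the same static/async pattern the diff describes for `read_file`, and that it accepts the data as its first argument.

```python
# Sketch only: write_file's signature and static/async nature are assumptions.
import asyncio
from universal_mcp.applications.file_system.app import FileSystemApp

async def copy_to_tmp(src: str) -> dict:
    data = await FileSystemApp.read_file(src)   # whole file as bytes
    # No destination given, so a unique temp file under /tmp is created;
    # the return dict reports the new path and size.
    return await FileSystemApp.write_file(data)

# print(asyncio.run(copy_to_tmp("/etc/hostname")))
```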
@@ -9,12 +9,12 @@ This is automatically generated from OpenAPI schema for the FirecrawlApp API.

  | Tool | Description |
  |------|-------------|
- | `scrape_url` | Scrapes a single URL using Firecrawl and returns the extracted data. |
- | `search` | Performs a web search using Firecrawl's search capability. |
- | `start_crawl` | Starts a async crawl job for a given URL using Firecrawl. Returns the job ID immediately. |
- | `check_crawl_status` | Checks the status of a previously initiated async Firecrawl crawl job. |
- | `cancel_crawl` | Cancels a currently running Firecrawl crawl job. |
- | `start_batch_scrape` | Starts a batch scrape job for multiple URLs using Firecrawl. (Note: May map to multiple individual scrapes or a specific batch API endpoint if available) |
- | `check_batch_scrape_status` | Checks the status of a previously initiated Firecrawl batch scrape job. |
- | `quick_web_extract` | Performs a quick, synchronous extraction of data from one or more URLs using Firecrawl and returns the results directly. |
- | `check_extract_status` | Checks the status of a previously initiated Firecrawl extraction job. |
+ | `scrape_url` | Synchronously scrapes a single URL, immediately returning its content. This provides a direct method for single-page scraping, contrasting with asynchronous, job-based functions like `start_crawl` (for entire sites) and `start_batch_scrape` (for multiple URLs). |
+ | `search` | Executes a synchronous web search using the Firecrawl service for a given query. Unlike scrape_url which fetches a single page, this function discovers web content. It returns a dictionary of results on success or an error string on failure, raising exceptions for authorization or SDK issues. |
+ | `start_crawl` | Starts an asynchronous Firecrawl job to crawl a website from a given URL, returning a job ID. Unlike the synchronous `scrape_url` for single pages, this function initiates a comprehensive, link-following crawl. Progress can be monitored using the `check_crawl_status` function with the returned ID. |
+ | `check_crawl_status` | Retrieves the status of an asynchronous Firecrawl job using its unique ID. As the counterpart to `start_crawl`, this function exclusively monitors website crawl progress, distinct from status checkers for batch scraping or data extraction jobs. Returns job details on success or an error message on failure. |
+ | `cancel_crawl` | Cancels a running asynchronous Firecrawl crawl job using its unique ID. As a lifecycle management tool for jobs initiated by `start_crawl`, it returns a confirmation status upon success or an error message on failure, distinguishing it from controls for other job types. |
+ | `start_batch_scrape` | Initiates an asynchronous Firecrawl job to scrape a list of URLs. It returns a job ID for tracking with `check_batch_scrape_status`. Unlike the synchronous `scrape_url` which processes a single URL, this function handles bulk scraping and doesn't wait for completion. |
+ | `check_batch_scrape_status` | Checks the status of an asynchronous batch scrape job using its job ID. As the counterpart to `start_batch_scrape`, it specifically monitors multi-URL scraping tasks, distinct from checkers for site-wide crawls (`check_crawl_status`) or AI-driven extractions (`check_extract_status`). Returns detailed progress or an error message. |
+ | `quick_web_extract` | Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous jobs like `start_crawl`, it returns structured data directly. This function raises an exception on failure, contrasting with other methods in the class that return an error string upon failure. |
+ | `check_extract_status` | Checks the status of an asynchronous, AI-powered Firecrawl data extraction job using its ID. Unlike `check_crawl_status` or `check_batch_scrape_status`, this function specifically monitors structured data extraction tasks, returning the job's progress or an error message on failure. |
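The Firecrawl descriptions spell out a crawl lifecycle: `start_crawl` queues a job, `check_crawl_status` polls it, `cancel_crawl` aborts it. The method signatures are visible in the app.py hunks below; the response keys used here (`id`, `status`) are assumptions.

```python
# Polling sketch; FirecrawlApp method signatures appear in the diff below,
# but the response dictionary keys are assumed.
import time
from universal_mcp.applications.firecrawl.app import FirecrawlApp

def crawl_site(app: FirecrawlApp, url: str) -> dict:
    job = app.start_crawl(url)        # returns immediately with job details
    job_id = job["id"]                # assumed key
    while True:
        status = app.check_crawl_status(job_id)
        if isinstance(status, dict) and status.get("status") == "completed":  # assumed field
            return status
        time.sleep(5)                 # poll until the crawl finishes
```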
@@ -38,7 +38,7 @@ class FirecrawlApp(APIApplication):
  @property
  def firecrawl_api_key(self) -> str:
  """
- A property that lazily retrieves and caches the Firecrawl API key from the configured integration. On first access, it fetches credentials and raises a `NotAuthorizedError` if the key is unobtainable, ensuring all subsequent API calls are properly authenticated.
+ A property that lazily retrieves and caches the Firecrawl API key from the configured integration. On first access, it fetches credentials and raises a `NotAuthorizedError` if the key is unobtainable, ensuring all subsequent API calls within the application are properly authenticated before execution.
  """
  if self._firecrawl_api_key is None:
  if not self.integration:
@@ -166,7 +166,7 @@ class FirecrawlApp(APIApplication):

  def scrape_url(self, url: str) -> Any:
  """
- Synchronously scrapes a single web page's content using the Firecrawl service. This function executes immediately and returns the extracted data, unlike the asynchronous `start_batch_scrape` or `start_crawl` jobs which require status checks. Returns an error message on failure.
+ Synchronously scrapes a single URL, immediately returning its content. This provides a direct method for single-page scraping, contrasting with asynchronous, job-based functions like `start_crawl` (for entire sites) and `start_batch_scrape` (for multiple URLs).

  Args:
  url: The URL of the web page to scrape.
@@ -198,7 +198,7 @@ class FirecrawlApp(APIApplication):

  def search(self, query: str) -> dict[str, Any] | str:
  """
- Executes a web search using the Firecrawl service for a specified query. It returns a dictionary of results on success or an error string on failure, raising specific exceptions for authorization or SDK installation issues. This provides a direct, synchronous method for information retrieval.
+ Executes a synchronous web search using the Firecrawl service for a given query. Unlike scrape_url which fetches a single page, this function discovers web content. It returns a dictionary of results on success or an error string on failure, raising exceptions for authorization or SDK issues.

  Args:
  query: The search query string.
@@ -232,7 +232,7 @@ class FirecrawlApp(APIApplication):
  url: str,
  ) -> dict[str, Any] | str:
  """
- Initiates an asynchronous Firecrawl job to crawl a website starting from a given URL. It returns immediately with a job ID, which can be used with `check_crawl_status` to monitor progress. This differs from `scrape_url`, which performs a synchronous scrape of a single page.
+ Starts an asynchronous Firecrawl job to crawl a website from a given URL, returning a job ID. Unlike the synchronous `scrape_url` for single pages, this function initiates a comprehensive, link-following crawl. Progress can be monitored using the `check_crawl_status` function with the returned ID.

  Args:
  url: The starting URL for the crawl.
@@ -268,7 +268,7 @@ class FirecrawlApp(APIApplication):

  def check_crawl_status(self, job_id: str) -> dict[str, Any] | str:
  """
- Retrieves the status of an asynchronous Firecrawl crawl job using its unique ID. Returns a dictionary with the job's details on success or an error message on failure. This function specifically handles jobs initiated by `start_crawl`, distinct from checkers for batch scrapes or extractions.
+ Retrieves the status of an asynchronous Firecrawl job using its unique ID. As the counterpart to `start_crawl`, this function exclusively monitors website crawl progress, distinct from status checkers for batch scraping or data extraction jobs. Returns job details on success or an error message on failure.

  Args:
  job_id: The ID of the crawl job to check.
@@ -303,7 +303,7 @@ class FirecrawlApp(APIApplication):

  def cancel_crawl(self, job_id: str) -> dict[str, Any] | str:
  """
- Cancels a running asynchronous Firecrawl crawl job identified by its unique ID. As part of the crawl job lifecycle, this function terminates a process initiated by `start_crawl`, returning a confirmation status upon success or an error message if the cancellation fails or is not supported.
+ Cancels a running asynchronous Firecrawl crawl job using its unique ID. As a lifecycle management tool for jobs initiated by `start_crawl`, it returns a confirmation status upon success or an error message on failure, distinguishing it from controls for other job types.

  Args:
  job_id: The ID of the crawl job to cancel.
@@ -342,7 +342,7 @@ class FirecrawlApp(APIApplication):
  urls: list[str],
  ) -> dict[str, Any] | str:
  """
- Initiates an asynchronous batch job to scrape a list of URLs using Firecrawl. It returns a response containing a job ID, which can be tracked with `check_batch_scrape_status`. This differs from the synchronous `scrape_url` which handles a single URL and returns data directly.
+ Initiates an asynchronous Firecrawl job to scrape a list of URLs. It returns a job ID for tracking with `check_batch_scrape_status`. Unlike the synchronous `scrape_url` which processes a single URL, this function handles bulk scraping and doesn't wait for completion.

  Args:
  urls: A list of URLs to scrape.
@@ -377,7 +377,7 @@ class FirecrawlApp(APIApplication):

  def check_batch_scrape_status(self, job_id: str) -> dict[str, Any] | str:
  """
- Checks the status of a previously initiated asynchronous Firecrawl batch scrape job using its job ID. It returns detailed progress information or an error message. This function is the counterpart to `start_batch_scrape` for monitoring multi-URL scraping tasks.
+ Checks the status of an asynchronous batch scrape job using its job ID. As the counterpart to `start_batch_scrape`, it specifically monitors multi-URL scraping tasks, distinct from checkers for site-wide crawls (`check_crawl_status`) or AI-driven extractions (`check_extract_status`). Returns detailed progress or an error message.

  Args:
  job_id: The ID of the batch scrape job to check.
@@ -421,7 +421,7 @@ class FirecrawlApp(APIApplication):
  allow_external_links: bool | None = False,
  ) -> dict[str, Any]:
  """
- Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous job functions (e.g., `start_crawl`), it returns the structured data directly. This function raises `NotAuthorizedError` or `ToolError` on failure, contrasting with others that return an error string.
+ Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous jobs like `start_crawl`, it returns structured data directly. This function raises an exception on failure, contrasting with other methods in the class that return an error string upon failure.

  Args:
  urls: A list of URLs to extract data from.
@@ -476,7 +476,7 @@ class FirecrawlApp(APIApplication):

  def check_extract_status(self, job_id: str) -> dict[str, Any] | str:
  """
- Checks the status of a specific asynchronous, AI-powered data extraction job on Firecrawl using its job ID. This is distinct from `check_crawl_status` for web crawling and `check_batch_scrape_status` for bulk scraping, as it specifically monitors AI-driven extractions.
+ Checks the status of an asynchronous, AI-powered Firecrawl data extraction job using its ID. Unlike `check_crawl_status` or `check_batch_scrape_status`, this function specifically monitors structured data extraction tasks, returning the job's progress or an error message on failure.

  Args:
  job_id: The ID of the extraction job to check.
@@ -9,17 +9,17 @@ This is automatically generated from OpenAPI schema for the FirefliesApp API.

  | Tool | Description |
  |------|-------------|
- | `get_team_analytics` | Fetches team analytics data within a specified time range. |
- | `get_ai_apps_outputs` | Fetches AI Apps outputs for a given transcript. |
- | `get_user_details` | Fetches details for a specific user. |
- | `list_users` | Fetches a list of users in the workspace. |
- | `get_transcript_details` | Fetches details for a specific transcript. |
- | `list_transcripts` | Fetches a list of transcripts, optionally filtered by user ID. |
- | `get_bite_details` | Fetches details for a specific bite (soundbite/clip). |
- | `list_bites` | Fetches a list of bites, optionally filtered to the current user's bites. |
- | `add_to_live_meeting` | Adds Fireflies.ai to a live meeting. |
- | `create_bite` | Creates a bite (soundbite/clip) from a transcript. |
- | `delete_transcript` | Deletes a transcript. |
- | `set_user_role` | Sets the role for a user. |
- | `upload_audio` | Uploads an audio file for transcription. |
- | `update_meeting_title` | Updates the title of a meeting (transcript). |
+ | `get_team_conversation_analytics` | Queries the Fireflies.ai API for team conversation analytics, specifically the average number of filler words. The data retrieval can optionally be filtered by a start and end time. Returns a dictionary containing the fetched analytics. |
+ | `get_transcript_ai_outputs` | Retrieves all AI-generated application outputs, such as summaries or analyses, associated with a specific transcript ID. It fetches the detailed prompt and response data for each AI app that has processed the transcript, providing a complete record of AI-generated content. |
+ | `get_user_details` | Fetches details, such as name and integrations, for a single user identified by their unique ID. This function queries for a specific user, differentiating it from `list_users` which retrieves a list of all users in the workspace. |
+ | `list_users` | Retrieves a list of all users in the workspace, returning each user's name and configured integrations. It provides a complete team roster, differing from `get_user_details`, which fetches information for a single user by their ID. |
+ | `get_transcript_details` | Queries the Fireflies API for a single transcript's details, such as title and ID, using its unique identifier. It fetches one specific entry, distinguishing it from `list_transcripts`, which retrieves a collection, and from `get_ai_apps_outputs` which gets AI data from a transcript. |
+ | `list_transcripts` | Fetches a list of meeting transcripts, returning the title and ID for each. The list can be filtered to return only transcripts for a specific user. This function complements `get_transcript_details`, which retrieves a single transcript by its unique ID. |
+ | `get_bite_details` | Retrieves detailed information for a specific bite (soundbite/clip) using its unique ID. It fetches data including the user ID, name, processing status, and summary. This provides a focused view of a single bite, distinguishing it from `list_bites` which fetches a collection of bites. |
+ | `list_bites` | Retrieves a list of soundbites (clips) from the Fireflies API. An optional 'mine' parameter filters for soundbites belonging only to the authenticated user. Differentiates from 'get_bite_details' by fetching multiple items rather than a single one by ID. |
+ | `add_to_live_meeting` | Executes a GraphQL mutation to make the Fireflies.ai notetaker join a live meeting specified by its URL. This action initiates the bot's recording and transcription process for the ongoing session and returns a success confirmation. |
+ | `create_soundbite_from_transcript` | Creates a soundbite/clip from a specified segment of a transcript using its ID, start, and end times. This function executes a GraphQL mutation, returning details of the newly created bite, such as its ID and processing status. |
+ | `delete_transcript` | Permanently deletes a specific transcript from Fireflies.ai using its ID. This destructive operation executes a GraphQL mutation and returns a dictionary containing the details of the transcript (e.g., title, date) as it existed just before being removed, confirming the action. |
+ | `set_user_role` | Assigns a new role (e.g., 'admin', 'member') to a user specified by their ID. This function executes a GraphQL mutation to modify user data and returns a dictionary with the user's updated name and admin status to confirm the change. |
+ | `transcribe_audio_from_url` | Submits an audio file from a URL to the Fireflies.ai API for transcription. It can optionally associate a title and a list of attendees with the audio, returning the upload status and details upon completion. |
+ | `update_transcript_title` | Updates the title of a specific transcript, identified by its ID, to a new value. This function executes a GraphQL mutation and returns a dictionary containing the newly assigned title upon successful completion of the request. |
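The renamed Fireflies tools chain naturally: list transcripts, drill into one, then pull its AI outputs. A sketch with the `FirefliesApp` import path, call style, and response shape all assumed.

```python
# Sketch only: method names come from the table above; the response shape is a guess.
from universal_mcp.applications.fireflies.app import FirefliesApp

def latest_transcript_report(app: FirefliesApp) -> dict:
    transcripts = app.list_transcripts()                   # titles and IDs
    first_id = transcripts["transcripts"][0]["id"]         # assumed shape
    return {
        "details": app.get_transcript_details(first_id),   # one transcript by ID
        "ai_outputs": app.get_transcript_ai_outputs(first_id),
    }
```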
@@ -9,15 +9,15 @@ This is automatically generated from OpenAPI schema for the FplApp API.

  | Tool | Description |
  |------|-------------|
- | `get_player_information` | Get detailed information and statistics for a specific player |
- | `search_fpl_players` | Search for FPL players by name with optional filtering |
- | `get_gameweek_status` | Get precise information about current, previous, and next gameweeks. |
- | `analyze_players` | Filter and analyze FPL players based on multiple criteria |
- | `compare_players` | Compare multiple players across various metrics |
- | `analyze_player_fixtures` | Analyze upcoming fixtures for a player and provide a difficulty rating |
- | `analyze_fixtures` | Analyze upcoming fixtures for players, teams, or positions |
- | `get_blank_gameweeks` | Get information about upcoming blank gameweeks where teams don't have fixtures |
- | `get_double_gameweeks` | Get information about upcoming double gameweeks where teams have multiple fixtures |
- | `get_league_standings` | Get standings for a specified FPL league |
- | `get_league_analytics` | Get rich analytics for a Fantasy Premier League mini-league |
- | `team_info` | Get information about a team |
+ | `get_player_information` | Fetches a detailed profile for a specific FPL player, identified by ID or name. It can include gameweek performance history, filterable by range, and future fixtures. This provides an in-depth look at one player, differing from broader search or analysis functions like `search_fpl_players`. |
+ | `find_players` | Searches for FPL players by full or partial name with optional filtering by team and position. This is a discovery tool, differentiating it from `get_player_information` which fetches a specific known player. It serves as a public interface to the internal `search_players` utility. |
+ | `get_gameweek_snapshot` | Provides a detailed snapshot of the FPL schedule by identifying the current, previous, and next gameweeks. It calculates the precise real-time status of the current gameweek (e.g., 'Imminent', 'In Progress') and returns key deadline times and overall season progress. |
+ | `screen_players` | Filters the FPL player database using multiple criteria like position, price, points, and form. Returns a sorted list of matching players, summary statistics for the filtered group, and optional recent gameweek performance data to aid in player discovery and analysis. |
+ | `compare_players` | Performs a detailed comparison of multiple players based on specified metrics, recent gameweek history, and upcoming fixture analysis. It provides a side-by-side breakdown, identifies the best performer per metric, and determines an overall winner, considering fixture difficulty, blanks, and doubles for a comprehensive overview. |
+ | `analyze_player_fixtures` | Analyzes a player's upcoming fixture difficulty. Given a player's name, it retrieves their schedule for a set number of matches, returning a detailed list and a calculated difficulty rating. This method is a focused alternative to the more comprehensive `analyze_fixtures` function. |
+ | `analyze_entity_fixtures` | Provides a detailed fixture difficulty analysis for a specific player, team, or an entire position over a set number of gameweeks. It calculates a difficulty score and can include blank/double gameweek data, offering broader analysis than the player-only `analyze_player_fixtures`. |
+ | `get_blank_gameweeks` | Identifies upcoming 'blank' gameweeks where teams lack a scheduled fixture. The function returns a list detailing each blank gameweek and the affected teams within a specified number of weeks, which is essential for strategic planning. It is distinct from `get_double_gameweeks`. |
+ | `get_double_gameweeks` | Identifies upcoming double gameweeks where teams have multiple fixtures within a specified number of weeks. It returns a list detailing each double gameweek and the teams involved. This function specifically finds gameweeks with extra matches, unlike `get_blank_gameweeks` which finds those with none. |
+ | `get_league_standings` | Retrieves current standings for a specified FPL classic mini-league by its ID. It fetches and parses raw API data to provide a direct snapshot of the league table, distinguishing it from `get_league_analytics` which performs deeper, historical analysis. |
+ | `analyze_league` | Performs advanced analysis on an FPL league for a given gameweek range. It routes requests based on the analysis type ('overview', 'historical', 'team_composition') to provide deeper insights beyond the basic rankings from `get_league_standings`, such as historical performance or team composition. |
+ | `get_manager_team_info` | Retrieves detailed information for a specific FPL manager's team (an "entry") using its unique ID. This is distinct from functions analyzing Premier League clubs, as it targets an individual user's fantasy squad and its performance details within the game. |
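The FPL tool set splits discovery (`find_players`) from deep dives (`get_player_information`, `analyze_player_fixtures`). A scouting sketch; only the method names are taken from the table, and the single name argument per call is assumed.

```python
# Sketch only: argument names/shapes for FplApp methods are assumptions.
from universal_mcp.applications.fpl.app import FplApp

def scout(app: FplApp, name: str) -> None:
    print(app.find_players(name))             # name search with optional filters
    print(app.get_player_information(name))   # history + upcoming fixtures
    print(app.analyze_player_fixtures(name))  # fixture difficulty rating
```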
@@ -7,8 +7,8 @@ import requests
  from universal_mcp.applications.application import APIApplication
  from universal_mcp.integrations import Integration

- from .utils.api import api
- from .utils.fixtures import (
+ from universal_mcp.applications.fpl.utils.api import api
+ from universal_mcp.applications.fpl.utils.fixtures import (
  analyze_player_fixtures,
  get_blank_gameweeks,
  get_double_gameweeks,
@@ -16,20 +16,20 @@ from .utils.fixtures import (
  get_player_fixtures,
  get_player_gameweek_history,
  )
- from .utils.helper import (
+ from universal_mcp.applications.fpl.utils.helper import (
  find_players_by_name,
  get_player_info,
  get_players_resource,
  get_team_by_name,
  search_players,
  )
- from .utils.league_utils import (
+ from universal_mcp.applications.fpl.utils.league_utils import (
  _get_league_historical_performance,
  _get_league_standings,
  _get_league_team_composition,
  parse_league_standings,
  )
- from .utils.position_utils import normalize_position
+ from universal_mcp.applications.fpl.utils.position_utils import normalize_position


  class FplApp(APIApplication):
@@ -9,13 +9,13 @@ This is automatically generated from OpenAPI schema for the GithubApp API.

  | Tool | Description |
  |------|-------------|
- | `star_repository` | Stars a GitHub repository using the GitHub API and returns a status message. |
- | `list_commits` | Retrieves and formats a list of recent commits from a GitHub repository |
- | `list_branches` | Lists all branches for a specified GitHub repository and returns them in a formatted string representation. |
- | `list_pull_requests` | Retrieves and formats a list of pull requests for a specified GitHub repository. |
- | `list_issues` | Retrieves a list of issues from a specified GitHub repository with optional filtering parameters. |
- | `get_pull_request` | Retrieves and formats detailed information about a specific GitHub pull request from a repository |
- | `create_pull_request` | Creates a new pull request in a GitHub repository, optionally converting an existing issue into a pull request. |
- | `create_issue` | Creates a new issue in a specified GitHub repository with a title, body content, and optional labels. |
- | `update_issue` | Updates an existing GitHub issue with specified parameters including title, body, assignee, state, and state reason. |
- | `list_repo_activities` | Retrieves and formats a list of activities for a specified GitHub repository. |
+ | `star_repository` | Stars a GitHub repository for the authenticated user. This user-centric action takes the full repository name ('owner/repo') and returns a simple string message confirming the outcome, unlike other functions that list or create repository content like issues or pull requests. |
+ | `list_recent_commits` | Fetches and formats the 12 most recent commits from a repository. It returns a human-readable string summarizing each commit's hash, author, and message, providing a focused overview of recent code changes, unlike functions that list branches, issues, or pull requests. |
+ | `list_branches` | Fetches all branches for a specified GitHub repository and formats them into a human-readable string. This method is distinct from others like `search_issues`, as it returns a formatted list for display rather than raw JSON data for programmatic use. |
+ | `list_pull_requests` | Fetches pull requests for a repository, filtered by state (e.g., 'open'). It returns a formatted string summarizing each PR's details, distinguishing it from `get_pull_request` (single PR) and `search_issues` (raw issue data). |
+ | `search_issues` | Fetches issues from a GitHub repository using specified filters (state, assignee, labels) and pagination. It returns the raw API response as a list of dictionaries, providing detailed issue data for programmatic processing, distinct from other methods that return formatted strings. |
+ | `get_pull_request` | Fetches a specific pull request from a repository using its unique number. It returns a human-readable string summarizing the PR's title, creator, status, and description, unlike `list_pull_requests` which retrieves a list of multiple PRs. |
+ | `create_pull_request` | Creates a pull request between specified `head` and `base` branches, or converts an issue into a PR. Unlike read functions that return formatted strings, this write operation returns the raw API response as a dictionary, providing comprehensive data on the newly created pull request. |
+ | `create_issue` | Creates a new issue in a GitHub repository using a title, body, and optional labels. It returns a formatted confirmation string with the new issue's number and URL, differing from `update_issue` which modifies existing issues and `search_issues` which returns raw API data. |
+ | `update_issue` | Modifies an existing GitHub issue, identified by its number within a repository. It can update optional fields like title, body, or state and returns the raw API response as a dictionary, differentiating it from `create_issue` which makes new issues and returns a formatted string. |
+ | `list_repo_activities` | Fetches recent events for a GitHub repository and formats them into a human-readable string. It summarizes activities with actors and timestamps, providing a general event feed, unlike other `list_*` functions which retrieve specific resources like commits or issues. |
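The GithubApp descriptions distinguish formatted-string tools (`create_issue`, `list_recent_commits`) from raw-dict tools (`update_issue`, `search_issues`). An issue-workflow sketch; only the method names and return styles come from the table, while the 'owner/repo' argument style and every keyword name are assumptions.

```python
# Sketch only: parameter names are guesses; the table above documents behavior,
# not signatures.
from universal_mcp.applications.github.app import GithubApp

def file_and_close_issue(app: GithubApp, repo_full_name: str) -> None:
    confirmation = app.create_issue(
        repo_full_name,
        title="Flaky test on CI",
        body="Seen intermittently since the 0.1.19 release.",
    )
    print(confirmation)  # formatted string with the issue number and URL
    raw = app.update_issue(repo_full_name, issue_number=1, state="closed")
    print(raw)           # raw API response as a dict
```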