universal-mcp-applications 0.1.17__py3-none-any.whl → 0.1.19__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release.
This version of universal-mcp-applications might be problematic.
- universal_mcp/applications/ahrefs/README.md +3 -3
- universal_mcp/applications/airtable/README.md +3 -3
- universal_mcp/applications/asana/README.md +3 -3
- universal_mcp/applications/aws_s3/README.md +29 -0
- universal_mcp/applications/bill/README.md +249 -0
- universal_mcp/applications/calendly/README.md +45 -45
- universal_mcp/applications/canva/README.md +35 -35
- universal_mcp/applications/clickup/README.md +4 -4
- universal_mcp/applications/contentful/README.md +1 -2
- universal_mcp/applications/crustdata/README.md +3 -3
- universal_mcp/applications/domain_checker/README.md +2 -2
- universal_mcp/applications/e2b/README.md +4 -4
- universal_mcp/applications/elevenlabs/README.md +3 -77
- universal_mcp/applications/exa/README.md +7 -7
- universal_mcp/applications/falai/README.md +13 -12
- universal_mcp/applications/falai/app.py +6 -6
- universal_mcp/applications/figma/README.md +3 -3
- universal_mcp/applications/file_system/README.md +13 -0
- universal_mcp/applications/firecrawl/README.md +9 -9
- universal_mcp/applications/firecrawl/app.py +10 -10
- universal_mcp/applications/fireflies/README.md +14 -14
- universal_mcp/applications/fpl/README.md +12 -12
- universal_mcp/applications/fpl/app.py +5 -5
- universal_mcp/applications/github/README.md +10 -10
- universal_mcp/applications/github/app.py +9 -9
- universal_mcp/applications/google_calendar/README.md +10 -10
- universal_mcp/applications/google_calendar/app.py +10 -10
- universal_mcp/applications/google_docs/README.md +14 -14
- universal_mcp/applications/google_docs/app.py +12 -12
- universal_mcp/applications/google_drive/README.md +54 -57
- universal_mcp/applications/google_drive/app.py +38 -38
- universal_mcp/applications/google_gemini/README.md +3 -14
- universal_mcp/applications/google_gemini/app.py +13 -12
- universal_mcp/applications/google_mail/README.md +20 -20
- universal_mcp/applications/google_mail/app.py +19 -19
- universal_mcp/applications/google_searchconsole/README.md +10 -10
- universal_mcp/applications/google_searchconsole/app.py +8 -8
- universal_mcp/applications/google_sheet/README.md +25 -25
- universal_mcp/applications/google_sheet/app.py +20 -20
- universal_mcp/applications/http_tools/README.md +5 -5
- universal_mcp/applications/hubspot/__init__.py +1 -1
- universal_mcp/applications/hubspot/api_segments/__init__.py +0 -0
- universal_mcp/applications/hubspot/api_segments/api_segment_base.py +25 -0
- universal_mcp/applications/hubspot/api_segments/crm_api.py +7337 -0
- universal_mcp/applications/hubspot/api_segments/marketing_api.py +1467 -0
- universal_mcp/applications/hubspot/app.py +74 -146
- universal_mcp/applications/klaviyo/README.md +0 -36
- universal_mcp/applications/linkedin/README.md +4 -4
- universal_mcp/applications/linkedin/app.py +4 -4
- universal_mcp/applications/mailchimp/README.md +3 -3
- universal_mcp/applications/ms_teams/README.md +31 -31
- universal_mcp/applications/ms_teams/app.py +28 -28
- universal_mcp/applications/neon/README.md +3 -3
- universal_mcp/applications/openai/README.md +18 -17
- universal_mcp/applications/outlook/README.md +9 -9
- universal_mcp/applications/outlook/app.py +9 -9
- universal_mcp/applications/perplexity/README.md +4 -4
- universal_mcp/applications/posthog/README.md +128 -127
- universal_mcp/applications/reddit/README.md +21 -124
- universal_mcp/applications/reddit/app.py +90 -89
- universal_mcp/applications/replicate/README.md +10 -10
- universal_mcp/applications/resend/README.md +29 -29
- universal_mcp/applications/scraper/README.md +4 -4
- universal_mcp/applications/scraper/app.py +31 -31
- universal_mcp/applications/semrush/README.md +3 -0
- universal_mcp/applications/serpapi/README.md +3 -3
- universal_mcp/applications/sharepoint/README.md +17 -0
- universal_mcp/applications/sharepoint/app.py +7 -7
- universal_mcp/applications/shortcut/README.md +3 -3
- universal_mcp/applications/slack/README.md +23 -0
- universal_mcp/applications/spotify/README.md +3 -3
- universal_mcp/applications/supabase/README.md +3 -3
- universal_mcp/applications/tavily/README.md +4 -4
- universal_mcp/applications/twilio/README.md +15 -0
- universal_mcp/applications/twitter/README.md +92 -89
- universal_mcp/applications/twitter/app.py +11 -11
- universal_mcp/applications/unipile/README.md +17 -17
- universal_mcp/applications/unipile/app.py +14 -14
- universal_mcp/applications/whatsapp/README.md +12 -12
- universal_mcp/applications/whatsapp/app.py +13 -13
- universal_mcp/applications/whatsapp_business/README.md +23 -23
- universal_mcp/applications/youtube/README.md +46 -46
- universal_mcp/applications/youtube/app.py +7 -1
- universal_mcp/applications/zenquotes/README.md +1 -1
- {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.19.dist-info}/METADATA +2 -89
- {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.19.dist-info}/RECORD +88 -83
- {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.19.dist-info}/WHEEL +0 -0
- {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.19.dist-info}/licenses/LICENSE +0 -0
universal_mcp/applications/crustdata/README.md
@@ -1,10 +1,10 @@
-#
+# CrustdataApp MCP Server
 
-An MCP Server for the
+An MCP Server for the CrustdataApp API.
 
 ## 🛠️ Tool List
 
-This is automatically generated from OpenAPI schema for the
+This is automatically generated from OpenAPI schema for the CrustdataApp API.
 
 
 | Tool | Description |
universal_mcp/applications/domain_checker/README.md
@@ -9,5 +9,5 @@ This is automatically generated from OpenAPI schema for the DomainCheckerApp API
 
 | Tool | Description |
 |------|-------------|
-| `
-| `
+| `check_domain_registration` | Determines a domain's availability by querying DNS and RDAP servers. For registered domains, it returns details like registrar and key dates. This function provides a comprehensive analysis for a single, fully qualified domain name, unlike `check_keyword_across_tlds_tool` which checks a keyword across multiple domains. |
+| `find_available_domains_for_keyword` | Checks a keyword's availability across a predefined list of popular TLDs. Using DNS and RDAP lookups, it generates a summary report of available and taken domains. This bulk-check differs from `check_domain_registration`, which deeply analyzes a single, fully-qualified domain. |
universal_mcp/applications/e2b/README.md
@@ -1,12 +1,12 @@
-#
+# E2bApp MCP Server
 
-An MCP Server for the
+An MCP Server for the E2bApp API.
 
 ## 🛠️ Tool List
 
-This is automatically generated from OpenAPI schema for the
+This is automatically generated from OpenAPI schema for the E2bApp API.
 
 
 | Tool | Description |
 |------|-------------|
-| `execute_python_code` | Executes Python code in a sandbox
+| `execute_python_code` | Executes a Python code string in a secure E2B sandbox. It authenticates using the configured API key, runs the code, and returns a formatted string containing the execution's output (stdout/stderr). It raises specific exceptions for authorization failures or general execution issues. |
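As a minimal usage sketch of the reworked `execute_python_code` tool described in the table above: the module path, constructor argument, and exact call signature are assumptions inferred from the tool description, not confirmed by this diff.

```python
# Hypothetical sketch: running a snippet in an E2B sandbox via the MCP app.
# Assumed: module path, E2bApp(integration=...), and execute_python_code(code: str) -> str.
from universal_mcp.applications.e2b.app import E2bApp  # assumed module path

e2b_integration = ...  # a configured integration that supplies the E2B API key (assumption)
app = E2bApp(integration=e2b_integration)

# Per the table, the return value is a formatted string with the sandbox's stdout/stderr.
output = app.execute_python_code("print(2 + 2)")
print(output)
```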
universal_mcp/applications/elevenlabs/README.md
@@ -9,80 +9,6 @@ This is automatically generated from OpenAPI schema for the ElevenlabsApp API.
 
 | Tool | Description |
 |------|-------------|
-| `
-| `
-| `
-| `get_audio_from_history_item` | Retrieves audio data for a specific history item identified by `{history_item_id}` using the `GET` method at the `/v1/history/{history_item_id}/audio` endpoint. |
-| `download_history_items` | Initiates a historical data download process and returns a success status upon completion. |
-| `delete_sample` | Deletes a specific voice sample identified by the `sample_id` from a voice with the given `voice_id` using the DELETE method. |
-| `get_audio_from_sample` | Retrieves the audio file for a specific sample associated with a given voice using the specified voice_id and sample_id. |
-| `convert` | Converts text into speech using a specified voice, allowing for optimization of streaming latency and selection of output format. |
-| `text_to_speech_with_timestamps` | Generates speech from text with precise character or word-level timing information using the specified voice, supporting audio-text synchronization through timestamps. |
-| `convert_as_stream` | Converts text to speech stream using the specified voice ID with configurable latency and output format. |
-| `stream_text_with_timestamps` | Converts text to speech using the specified voice ID, streaming the audio output with timestamps. |
-| `voice_generation_parameters` | Retrieves the parameters required for generating voice using the specified API endpoint. |
-| `generate_arandom_voice` | Generates an audio file by converting text into speech using a specified voice, allowing for customizable voice selection and text input. |
-| `create_voice_model` | Generates a custom voice using the provided parameters via the "/v1/voice-generation/create-voice" endpoint by sending a POST request, allowing users to create unique voice models. |
-| `create_previews` | Generates a voice preview from a given text prompt using the ElevenLabs API. |
-| `create_voice_from_preview` | Creates a new voice entry in the voice library using a generated preview ID and returns voice details. |
-| `get_user_subscription_info` | Retrieves the user's subscription details from the API. |
-| `get_user_info` | Retrieves user information from the API. |
-| `get_voices` | Retrieves a list of voices using the "GET" method at the "/v1/voices" API endpoint. |
-| `get_default_voice_settings` | Retrieves the default voice settings using the "GET" method at the "/v1/voices/settings/default" endpoint. |
-| `get_voice_settings` | Retrieves voice settings for a specific voice identified by `{voice_id}` using the "GET" method, returning the current configuration for that voice. |
-| `get_voice` | Retrieves the details of a specific voice by its ID using the "GET" method at the "/v1/voices/{voice_id}" endpoint. |
-| `delete_voice` | Deletes a voice with the specified ID using the DELETE method at the "/v1/voices/{voice_id}" endpoint. |
-| `edit_voice_settings` | Updates voice settings for a specified voice ID and returns a success status. |
-| `add_voice` | Adds a new voice entry to the voices collection using the provided data. |
-| `edit_voice` | Updates the specified voice by ID using a POST request and returns a success status upon completion. |
-| `add_sharing_voice` | Adds a voice associated with a public user ID and voice ID using the specified API endpoint. |
-| `get_shared_voices` | Retrieves a list of shared voices filtered by parameters like gender and language, with pagination support via page_size. |
-| `get_similar_library_voices` | Generates a list of similar voices using the POST method at the "/v1/similar-voices" endpoint. |
-| `get_aprofile_page` | Retrieves a unified customer profile by handle and returns the associated attributes, identifiers, and traits. |
-| `get_projects` | Retrieves a list of projects using the API defined at the "/v1/projects" endpoint via the GET method. |
-| `add_project` | Creates a new project and returns a status message. |
-| `get_project_by_id` | Retrieves information for a specific project identified by `{project_id}` using the API endpoint at "/v1/projects/{project_id}" via the GET method. |
-| `edit_basic_project_info` | Creates a new project resource by sending data to the specified project identifier using the POST method at the "/v1/projects/{project_id}" endpoint. |
-| `delete_project` | Deletes the specified project and returns a success status upon completion. |
-| `convert_project` | Converts a specified project identified by project_id and returns the conversion result. |
-| `get_project_snapshots` | Retrieves a list of snapshots associated with a specified project. |
-| `streams_archive_with_project_audio` | Archives a project snapshot using the specified project ID and snapshot ID and returns a success status. |
-| `add_chapter_to_aproject` | Adds a new chapter to a specified project using the provided project identifier and returns a success status upon completion. |
-| `update_project_pronunciations` | Updates pronunciation dictionaries for a specified project using the POST method, returning a successful status message upon completion. |
-| `get_chapters` | Retrieves a chapter for a specified project by ID using the GET method. |
-| `get_chapter_by_id` | Retrieves a specific chapter within a project identified by project_id and chapter_id. |
-| `delete_chapter` | Deletes a specific chapter within a project using the "DELETE" method. |
-| `convert_chapter` | Converts a chapter in a project using the POST method and returns a response upon successful conversion. |
-| `get_chapter_snapshots` | Retrieves a snapshot for a specific chapter within a project using the provided project and chapter IDs. |
-| `stream_chapter_audio` | Streams data from a specific chapter snapshot in a project using the API and returns a response indicating success. |
-| `dub_avideo_or_an_audio_file` | Initiates a dubbing process and returns a status message using the API defined at the "/v1/dubbing" endpoint via the POST method. |
-| `get_dubbing_project_metadata` | Retrieves the details of a specific dubbing job using the provided dubbing ID. |
-| `delete_dubbing_project` | Deletes a dubbing project with the specified ID and returns a success status upon completion. |
-| `get_transcript_for_dub` | Retrieves the transcript for a specific dubbing task in the requested language using the "GET" method. |
-| `get_models` | Retrieves a list of models using the GET method at the "/v1/models" endpoint. |
-| `post_audio_native` | Processes audio data using the audio-native API and returns a response. |
-| `get_characters_usage_metrics` | Retrieves character statistics within a specified time frame using the start and end Unix timestamps provided in the query parameters. |
-| `add_apronunciation_dictionary` | Creates a pronunciation dictionary from a lexicon file and returns its ID and metadata. |
-| `add_rules_to_dictionary` | Adds pronunciation rules to a specific pronunciation dictionary identified by its ID using the POST method. |
-| `remove_pronunciation_rules` | Removes specified pronunciation rules from a pronunciation dictionary using a POST request. |
-| `get_dictionary_version_file` | Retrieves and downloads a specific version of a pronunciation dictionary file using its dictionary ID and version ID. |
-| `get_pronunciation_dictionary` | Retrieves a specific pronunciation dictionary by its ID using the "GET" method from the "/v1/pronunciation-dictionaries/{pronunciation_dictionary_id}" endpoint. |
-| `get_pronunciation_dictionaries` | Retrieves a list of pronunciation dictionaries using the GET method at the "/v1/pronunciation-dictionaries" endpoint, allowing users to specify the number of items per page via the "page_size" query parameter. |
-| `invite_user` | Invites a user to join a workspace by sending an invitation, allowing them to access the specified workspace upon acceptance. |
-| `delete_existing_invitation` | Deletes a workspace invite and returns a success response upon completion. |
-| `update_member` | Adds members to a workspace and returns the updated member list upon success. |
-| `get_signed_url` | Generates a signed URL for initiating a conversation with a specific conversational AI agent, identified by the provided `agent_id`, using the ElevenLabs API. |
-| `create_agent` | Creates a conversational AI agent with specified configuration settings and returns the agent details. |
-| `get_agent` | Retrieves information about a specific conversational AI agent by its unique identifier using the GET method at the "/v1/convai/agents/{agent_id}" API endpoint. |
-| `delete_agent` | Deletes a specified Conversational AI agent using the DELETE method. |
-| `patches_an_agent_settings` | Updates an existing conversational AI agent's settings using the specified agent ID, allowing changes to properties such as the agent's name and tool configurations. |
-| `get_agent_widget_config` | Retrieves and configures the Convai widget for the specified agent, but the provided details do not specify the exact functionality of this specific endpoint, suggesting it may relate to integrating or customizing Convai's character interaction capabilities. |
-| `get_shareable_agent_link` | Retrieves and establishes a link for a Convai agent using the specified agent ID, facilitating integration or connectivity operations. |
-| `post_agent_avatar` | Creates and configures a Convai avatar for a specific agent using the POST method, though the exact details of this endpoint are not provided in the available documentation. |
-| `get_agent_knowledge_base` | Retrieves specific documentation for a knowledge base associated with an agent in Convai. |
-| `add_agent_secret` | Adds a secret to a specified conversational AI agent through the API and returns a status confirmation. |
-| `add_to_agent_sknowledge_base` | Adds new content to an agent's knowledge base by uploading a file or resource, which can be used to enhance the agent's conversational capabilities. |
-| `get_agents_page` | Retrieves a list of conversational AI agents available in the Convai system. |
-| `get_conversations` | Retrieves conversation history for a specified agent ID. |
-| `get_conversation_details` | Retrieves and formats the details of a specific conversation based on the provided conversation ID. |
-| `get_conversation_audio` | Retrieves the audio from a specific conversation using the ElevenLabs Conversational AI API. |
+| `generate_speech_audio_url` | Converts a text string into speech using the ElevenLabs API. The function then saves the generated audio to a temporary MP3 file and returns a public URL to access it, rather than the raw audio bytes. |
+| `speech_to_text` | Transcribes an audio file into text using the ElevenLabs API. It supports language specification and speaker diarization, providing the inverse operation to the audio-generating `text_to_speech` method. Note: The docstring indicates this is a placeholder for an undocumented endpoint. |
+| `speech_to_speech` | Downloads an audio file from a URL and converts the speech into a specified target voice using the ElevenLabs API. This function transforms the speaker's voice in an existing recording and returns the new audio data as bytes, distinct from creating audio from text. |
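A hedged sketch of how the three remaining ElevenLabs tools might be chained, based only on the descriptions above; the module path, constructor, parameter names (including `voice_id`), and return shapes are assumptions.

```python
# Hypothetical sketch based on the tool descriptions above, not on published signatures.
from universal_mcp.applications.elevenlabs.app import ElevenlabsApp  # assumed module path

elevenlabs_integration = ...  # a configured integration that supplies the ElevenLabs API key (assumption)
app = ElevenlabsApp(integration=elevenlabs_integration)

# 1. Text -> speech: per the table, returns a public URL to a temporary MP3 file.
audio_url = app.generate_speech_audio_url("Hello from the MCP server")

# 2. Speech -> text: transcribes an audio file; language/diarization options exist per the table.
transcript = app.speech_to_text(audio_url)

# 3. Speech -> speech: re-voices an existing recording and returns raw audio bytes.
converted_bytes = app.speech_to_speech(audio_url, voice_id="TARGET_VOICE_ID")  # voice_id name assumed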
universal_mcp/applications/exa/README.md
@@ -1,15 +1,15 @@
-#
+# ExaApp MCP Server
 
-An MCP Server for the
+An MCP Server for the ExaApp API.
 
 ## 🛠️ Tool List
 
-This is automatically generated from OpenAPI schema for the
+This is automatically generated from OpenAPI schema for the ExaApp API.
 
 
 | Tool | Description |
 |------|-------------|
-| `
-| `
-| `
-| `answer` |
+| `search_with_filters` | Executes a query against the Exa API's `/search` endpoint, returning a list of results. This function supports extensive filtering by search type, category, domains, publication dates, and specific text content to refine the search query and tailor the API's response. |
+| `find_similar_by_url` | Finds web pages semantically similar to a given URL. Unlike the `search` function, which uses a text query, this method takes a specific link and returns a list of related results, with options to filter by domain, publication date, and content. |
+| `fetch_page_content` | Retrieves and processes content from a list of URLs, returning full text, summaries, or highlights. Unlike the search function which finds links, this function fetches the actual page content, with optional support for live crawling to get the most up-to-date information. |
+| `answer` | Retrieves a direct, synthesized answer for a given query by calling the Exa `/answer` API endpoint. Unlike `search`, which returns web results, this function provides a conclusive response. It supports streaming, including source text, and selecting a search model. |
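A minimal sketch of how the four Exa tools above might be combined; the module path, constructor, and keyword argument names are assumptions inferred from the descriptions.

```python
# Hypothetical sketch of the ExaApp tools listed above; argument names are assumed.
from universal_mcp.applications.exa.app import ExaApp  # assumed module path

exa_integration = ...  # a configured integration that supplies the Exa API key (assumption)
app = ExaApp(integration=exa_integration)

results = app.search_with_filters(query="model context protocol servers")      # /search with filters
similar = app.find_similar_by_url(url="https://example.com/some-article")      # semantically related pages
content = app.fetch_page_content(urls=["https://example.com/some-article"])    # full text / summaries
direct = app.answer(query="What is the Model Context Protocol?")               # synthesized /answer response
```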
universal_mcp/applications/falai/README.md
@@ -1,17 +1,18 @@
-#
+# FalaiApp MCP Server
 
-An MCP Server for the
+An MCP Server for the FalaiApp API.
 
 ## 🛠️ Tool List
 
-This is automatically generated from OpenAPI schema for the
+This is automatically generated from OpenAPI schema for the FalaiApp API.
 
-
-
-
-| `
-| `
-| `
-| `
-| `
-| `
+
+| Tool | Description |
+|------|-------------|
+| `run` | Executes a Fal AI application synchronously, waiting for completion and returning the result directly. This method is suited for short-running tasks, unlike `submit` which queues a job for asynchronous processing and returns a request ID instead of the final output. |
+| `submit` | Submits a job to the Fal AI queue for asynchronous processing, immediately returning a request ID. This contrasts with the `run` method, which waits for completion. The returned ID is used by `check_status`, `get_result`, and `cancel` to manage the job's lifecycle. |
+| `check_status` | Checks the execution state (e.g., Queued, InProgress) of an asynchronous Fal AI job using its request ID. It provides a non-blocking way to monitor jobs initiated via `submit` without fetching the final `result`, and can optionally include logs. |
+| `get_result` | Retrieves the final result of an asynchronous job, identified by its `request_id`. This function waits for the job, initiated via `submit`, to complete. Unlike the non-blocking `check_status`, this method blocks execution to fetch and return the job's actual output upon completion. |
+| `cancel` | Asynchronously cancels a running or queued Fal AI job using its `request_id`. This function complements the `submit` method, providing a way to terminate asynchronous tasks before completion. It raises a `ToolError` if the cancellation request fails. |
+| `upload_file` | Asynchronously uploads a local file to the Fal Content Delivery Network (CDN), returning a public URL. This URL makes the file accessible for use as input in other Fal AI job execution methods like `run` or `submit`. A `ToolError` is raised if the upload fails. |
+| `run_image_generation` | A specialized wrapper for the `run` method that synchronously generates images using the 'fal-ai/flux/dev' model. It simplifies image creation with common parameters like `prompt` and `seed`, waits for the task to complete, and directly returns the result containing image URLs and metadata. |
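The table above describes an asynchronous job lifecycle (submit, poll, fetch, cancel). The sketch below assembles that flow from the docstrings visible in the app.py hunks that follow; the module path, constructor, whether each method is awaitable, and the exact `submit`/`check_status` parameters are assumptions.

```python
# Hypothetical lifecycle sketch inferred from the FalaiApp docstrings in this diff.
# submit() returns a request ID; check_status/get_result/cancel operate on that ID.
import asyncio

from universal_mcp.applications.falai.app import FalaiApp  # assumed module path

async def main() -> None:
    fal_integration = ...  # a configured integration supplying the FAL API key (assumption)
    app = FalaiApp(integration=fal_integration)

    # Queue a job instead of blocking like `run` would; argument names are assumed.
    request_id = await app.submit(arguments={"prompt": "a watercolor fox"})

    # Non-blocking progress check (per the `check_status` description above).
    status = await app.check_status(request_id, application="fal-ai/flux/dev")
    print(status)

    # Block until the job finishes and fetch its output.
    result = await app.get_result(request_id, application="fal-ai/flux/dev")
    print(result)

asyncio.run(main())
```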
universal_mcp/applications/falai/app.py
@@ -29,7 +29,7 @@ class FalaiApp(APIApplication):
     @property
     def fal_client(self) -> AsyncClient:
         """
-        A cached property that lazily initializes an `AsyncClient` instance
+        A cached property that lazily initializes an `AsyncClient` instance. It retrieves the API key from the configured integration, providing a single, centralized authentication point for all methods that interact with the Fal AI API. Raises `NotAuthorizedError` if credentials are not found.
         """
         if self._fal_client is None:
             credentials = self.integration.get_credentials()
@@ -101,7 +101,7 @@ class FalaiApp(APIApplication):
         priority: Priority | None = None,
     ) -> str:
         """
-        Submits a job to the Fal AI queue for asynchronous processing
+        Submits a job to the Fal AI queue for asynchronous processing, immediately returning a request ID. This contrasts with the `run` method, which waits for completion. The returned ID is used by `check_status`, `get_result`, and `cancel` to manage the job's lifecycle.
 
         Args:
             arguments: A dictionary of arguments for the application
@@ -181,7 +181,7 @@ class FalaiApp(APIApplication):
         self, request_id: str, application: str = "fal-ai/flux/dev"
     ) -> Any:
         """
-
+        Retrieves the final result of an asynchronous job, identified by its `request_id`. This function waits for the job, initiated via `submit`, to complete. Unlike the non-blocking `check_status`, this method blocks execution to fetch and return the job's actual output upon completion.
 
         Args:
             request_id: The unique identifier of the submitted request
@@ -215,7 +215,7 @@ class FalaiApp(APIApplication):
         self, request_id: str, application: str = "fal-ai/flux/dev"
     ) -> None:
         """
-        Asynchronously cancels a running or queued Fal AI job
+        Asynchronously cancels a running or queued Fal AI job using its `request_id`. This function complements the `submit` method, providing a way to terminate asynchronous tasks before completion. It raises a `ToolError` if the cancellation request fails.
 
         Args:
             request_id: The unique identifier of the submitted Fal AI request to cancel
@@ -244,7 +244,7 @@ class FalaiApp(APIApplication):
 
     async def upload_file(self, path: str) -> str:
         """
-        Asynchronously uploads a local file
+        Asynchronously uploads a local file to the Fal Content Delivery Network (CDN), returning a public URL. This URL makes the file accessible for use as input in other Fal AI job execution methods like `run` or `submit`. A `ToolError` is raised if the upload fails.
 
         Args:
             path: The absolute or relative path to the local file
@@ -280,7 +280,7 @@ class FalaiApp(APIApplication):
         hint: str | None = None,
     ) -> Any:
         """
-        A specialized wrapper for the `run` method that synchronously generates images using the 'fal-ai/flux/dev' model. It simplifies image creation with common parameters like `prompt` and `seed`, waits for the task to complete, and returns the result containing image URLs and metadata.
+        A specialized wrapper for the `run` method that synchronously generates images using the 'fal-ai/flux/dev' model. It simplifies image creation with common parameters like `prompt` and `seed`, waits for the task to complete, and directly returns the result containing image URLs and metadata.
 
         Args:
             prompt: The text prompt used to guide the image generation
universal_mcp/applications/figma/README.md
@@ -1,10 +1,10 @@
-#
+# FigmaApp MCP Server
 
-An MCP Server for the
+An MCP Server for the FigmaApp API.
 
 ## 🛠️ Tool List
 
-This is automatically generated from OpenAPI schema for the
+This is automatically generated from OpenAPI schema for the FigmaApp API.
 
 
 | Tool | Description |
universal_mcp/applications/file_system/README.md
@@ -0,0 +1,13 @@
+# FileSystemApp MCP Server
+
+An MCP Server for the FileSystemApp API.
+
+## 🛠️ Tool List
+
+This is automatically generated from OpenAPI schema for the FileSystemApp API.
+
+
+| Tool | Description |
+|------|-------------|
+| `read_file` | Asynchronously reads the entire content of a specified file in binary mode. This static method takes a file path and returns its data as a bytes object, serving as a fundamental file retrieval operation within the FileSystem application. |
+| `write_file` | Writes binary data to a specified file path. If no path is provided, it creates a unique temporary file in `/tmp`. The function returns a dictionary confirming success and providing metadata about the new file, including its path and size. |
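A small round-trip sketch of the two new FileSystemApp tools; the module path, the static/awaitable call style, the `write_file` parameter order, and the returned dictionary keys are assumptions based on the README rows above.

```python
# Hypothetical round-trip sketch for the new FileSystemApp tools.
# read_file is described as an async static method returning bytes; write_file
# returns a dict with the written file's path and size. Exact signatures are assumed.
import asyncio

from universal_mcp.applications.file_system.app import FileSystemApp  # assumed module path

async def main() -> None:
    # Omitting the path is described as creating a unique temporary file under /tmp.
    info = await FileSystemApp.write_file(b"hello world")
    print(info)  # assumed shape: {"success": True, "path": "/tmp/...", "size": 11}

    data = await FileSystemApp.read_file(info["path"])
    assert data == b"hello world"

asyncio.run(main())
```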
universal_mcp/applications/firecrawl/README.md
@@ -9,12 +9,12 @@ This is automatically generated from OpenAPI schema for the FirecrawlApp API.
 
 | Tool | Description |
 |------|-------------|
-| `scrape_url` |
-| `search` |
-| `start_crawl` | Starts
-| `check_crawl_status` |
-| `cancel_crawl` | Cancels a
-| `start_batch_scrape` |
-| `check_batch_scrape_status` | Checks the status of
-| `quick_web_extract` | Performs
-| `check_extract_status` | Checks the status of
+| `scrape_url` | Synchronously scrapes a single URL, immediately returning its content. This provides a direct method for single-page scraping, contrasting with asynchronous, job-based functions like `start_crawl` (for entire sites) and `start_batch_scrape` (for multiple URLs). |
+| `search` | Executes a synchronous web search using the Firecrawl service for a given query. Unlike scrape_url which fetches a single page, this function discovers web content. It returns a dictionary of results on success or an error string on failure, raising exceptions for authorization or SDK issues. |
+| `start_crawl` | Starts an asynchronous Firecrawl job to crawl a website from a given URL, returning a job ID. Unlike the synchronous `scrape_url` for single pages, this function initiates a comprehensive, link-following crawl. Progress can be monitored using the `check_crawl_status` function with the returned ID. |
+| `check_crawl_status` | Retrieves the status of an asynchronous Firecrawl job using its unique ID. As the counterpart to `start_crawl`, this function exclusively monitors website crawl progress, distinct from status checkers for batch scraping or data extraction jobs. Returns job details on success or an error message on failure. |
+| `cancel_crawl` | Cancels a running asynchronous Firecrawl crawl job using its unique ID. As a lifecycle management tool for jobs initiated by `start_crawl`, it returns a confirmation status upon success or an error message on failure, distinguishing it from controls for other job types. |
+| `start_batch_scrape` | Initiates an asynchronous Firecrawl job to scrape a list of URLs. It returns a job ID for tracking with `check_batch_scrape_status`. Unlike the synchronous `scrape_url` which processes a single URL, this function handles bulk scraping and doesn't wait for completion. |
+| `check_batch_scrape_status` | Checks the status of an asynchronous batch scrape job using its job ID. As the counterpart to `start_batch_scrape`, it specifically monitors multi-URL scraping tasks, distinct from checkers for site-wide crawls (`check_crawl_status`) or AI-driven extractions (`check_extract_status`). Returns detailed progress or an error message. |
+| `quick_web_extract` | Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous jobs like `start_crawl`, it returns structured data directly. This function raises an exception on failure, contrasting with other methods in the class that return an error string upon failure. |
+| `check_extract_status` | Checks the status of an asynchronous, AI-powered Firecrawl data extraction job using its ID. Unlike `check_crawl_status` or `check_batch_scrape_status`, this function specifically monitors structured data extraction tasks, returning the job's progress or an error message on failure. |
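The table above describes a job-based crawl workflow. The sketch below strings those calls together using the method signatures visible in the firecrawl/app.py hunks that follow (`start_crawl(url)`, `check_crawl_status(job_id)`, `cancel_crawl(job_id)`); the module path, constructor, and response keys such as `"id"` are assumptions.

```python
# Hypothetical crawl-lifecycle sketch; signatures come from the app.py hunks below,
# but the constructor and the response dictionary keys are assumptions.
from universal_mcp.applications.firecrawl.app import FirecrawlApp  # assumed module path

firecrawl_integration = ...  # a configured integration supplying the Firecrawl API key (assumption)
app = FirecrawlApp(integration=firecrawl_integration)

started = app.start_crawl(url="https://example.com")   # returns job info dict or an error string
job_id = started.get("id") if isinstance(started, dict) else None  # "id" key is assumed

if job_id:
    status = app.check_crawl_status(job_id)             # poll progress for the crawl job
    if isinstance(status, dict) and not status.get("completed"):  # "completed" key is assumed
        app.cancel_crawl(job_id)                         # optionally stop the job early
```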
universal_mcp/applications/firecrawl/app.py
@@ -38,7 +38,7 @@ class FirecrawlApp(APIApplication):
     @property
     def firecrawl_api_key(self) -> str:
         """
-        A property that lazily retrieves and caches the Firecrawl API key from the configured integration. On first access, it fetches credentials and raises a `NotAuthorizedError` if the key is unobtainable, ensuring all subsequent API calls are properly authenticated.
+        A property that lazily retrieves and caches the Firecrawl API key from the configured integration. On first access, it fetches credentials and raises a `NotAuthorizedError` if the key is unobtainable, ensuring all subsequent API calls within the application are properly authenticated before execution.
         """
         if self._firecrawl_api_key is None:
             if not self.integration:
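The updated docstring above describes a lazily cached, integration-backed credential property. A generic sketch of that pattern follows; it is illustrative only, since the real FirecrawlApp implementation is not shown in this diff, and the exception class here is a local stand-in rather than the package's own.

```python
# Generic sketch of the lazy, cached credential property the docstring describes.
class NotAuthorizedError(Exception):
    """Local stand-in for the package's real NotAuthorizedError."""

class ExampleApp:
    def __init__(self, integration=None):
        self.integration = integration
        self._api_key: str | None = None

    @property
    def api_key(self) -> str:
        # Fetch and cache the key on first access; later accesses reuse the cached value.
        if self._api_key is None:
            if not self.integration:
                raise NotAuthorizedError("No integration configured for this app.")
            credentials = self.integration.get_credentials()
            key = credentials.get("api_key")
            if not key:
                raise NotAuthorizedError("API key not found in integration credentials.")
            self._api_key = key
        return self._api_key
```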
@@ -166,7 +166,7 @@ class FirecrawlApp(APIApplication):
 
     def scrape_url(self, url: str) -> Any:
         """
-        Synchronously scrapes a single
+        Synchronously scrapes a single URL, immediately returning its content. This provides a direct method for single-page scraping, contrasting with asynchronous, job-based functions like `start_crawl` (for entire sites) and `start_batch_scrape` (for multiple URLs).
 
         Args:
             url: The URL of the web page to scrape.
@@ -198,7 +198,7 @@ class FirecrawlApp(APIApplication):
 
     def search(self, query: str) -> dict[str, Any] | str:
         """
-        Executes a web search using the Firecrawl service for a
+        Executes a synchronous web search using the Firecrawl service for a given query. Unlike scrape_url which fetches a single page, this function discovers web content. It returns a dictionary of results on success or an error string on failure, raising exceptions for authorization or SDK issues.
 
         Args:
             query: The search query string.
@@ -232,7 +232,7 @@ class FirecrawlApp(APIApplication):
         url: str,
     ) -> dict[str, Any] | str:
         """
-
+        Starts an asynchronous Firecrawl job to crawl a website from a given URL, returning a job ID. Unlike the synchronous `scrape_url` for single pages, this function initiates a comprehensive, link-following crawl. Progress can be monitored using the `check_crawl_status` function with the returned ID.
 
         Args:
             url: The starting URL for the crawl.
@@ -268,7 +268,7 @@ class FirecrawlApp(APIApplication):
 
     def check_crawl_status(self, job_id: str) -> dict[str, Any] | str:
         """
-        Retrieves the status of an asynchronous Firecrawl
+        Retrieves the status of an asynchronous Firecrawl job using its unique ID. As the counterpart to `start_crawl`, this function exclusively monitors website crawl progress, distinct from status checkers for batch scraping or data extraction jobs. Returns job details on success or an error message on failure.
 
         Args:
             job_id: The ID of the crawl job to check.
@@ -303,7 +303,7 @@ class FirecrawlApp(APIApplication):
 
     def cancel_crawl(self, job_id: str) -> dict[str, Any] | str:
         """
-        Cancels a running asynchronous Firecrawl crawl job
+        Cancels a running asynchronous Firecrawl crawl job using its unique ID. As a lifecycle management tool for jobs initiated by `start_crawl`, it returns a confirmation status upon success or an error message on failure, distinguishing it from controls for other job types.
 
         Args:
             job_id: The ID of the crawl job to cancel.
@@ -342,7 +342,7 @@ class FirecrawlApp(APIApplication):
         urls: list[str],
     ) -> dict[str, Any] | str:
         """
-        Initiates an asynchronous
+        Initiates an asynchronous Firecrawl job to scrape a list of URLs. It returns a job ID for tracking with `check_batch_scrape_status`. Unlike the synchronous `scrape_url` which processes a single URL, this function handles bulk scraping and doesn't wait for completion.
 
         Args:
             urls: A list of URLs to scrape.
@@ -377,7 +377,7 @@ class FirecrawlApp(APIApplication):
 
     def check_batch_scrape_status(self, job_id: str) -> dict[str, Any] | str:
         """
-        Checks the status of
+        Checks the status of an asynchronous batch scrape job using its job ID. As the counterpart to `start_batch_scrape`, it specifically monitors multi-URL scraping tasks, distinct from checkers for site-wide crawls (`check_crawl_status`) or AI-driven extractions (`check_extract_status`). Returns detailed progress or an error message.
 
         Args:
             job_id: The ID of the batch scrape job to check.
@@ -421,7 +421,7 @@ class FirecrawlApp(APIApplication):
         allow_external_links: bool | None = False,
     ) -> dict[str, Any]:
         """
-        Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous
+        Performs synchronous, AI-driven data extraction from URLs using an optional prompt or schema. Unlike asynchronous jobs like `start_crawl`, it returns structured data directly. This function raises an exception on failure, contrasting with other methods in the class that return an error string upon failure.
 
         Args:
             urls: A list of URLs to extract data from.
@@ -476,7 +476,7 @@ class FirecrawlApp(APIApplication):
 
     def check_extract_status(self, job_id: str) -> dict[str, Any] | str:
         """
-        Checks the status of
+        Checks the status of an asynchronous, AI-powered Firecrawl data extraction job using its ID. Unlike `check_crawl_status` or `check_batch_scrape_status`, this function specifically monitors structured data extraction tasks, returning the job's progress or an error message on failure.
 
         Args:
             job_id: The ID of the extraction job to check.
universal_mcp/applications/fireflies/README.md
@@ -9,17 +9,17 @@ This is automatically generated from OpenAPI schema for the FirefliesApp API.
 
 | Tool | Description |
 |------|-------------|
-| `
-| `
-| `get_user_details` | Fetches details for a specific user. |
-| `list_users` |
-| `get_transcript_details` |
-| `list_transcripts` | Fetches a list of transcripts,
-| `get_bite_details` |
-| `list_bites` |
-| `add_to_live_meeting` |
-| `
-| `delete_transcript` |
-| `set_user_role` |
-| `
-| `
+| `get_team_conversation_analytics` | Queries the Fireflies.ai API for team conversation analytics, specifically the average number of filler words. The data retrieval can optionally be filtered by a start and end time. Returns a dictionary containing the fetched analytics. |
+| `get_transcript_ai_outputs` | Retrieves all AI-generated application outputs, such as summaries or analyses, associated with a specific transcript ID. It fetches the detailed prompt and response data for each AI app that has processed the transcript, providing a complete record of AI-generated content. |
+| `get_user_details` | Fetches details, such as name and integrations, for a single user identified by their unique ID. This function queries for a specific user, differentiating it from `list_users` which retrieves a list of all users in the workspace. |
+| `list_users` | Retrieves a list of all users in the workspace, returning each user's name and configured integrations. It provides a complete team roster, differing from `get_user_details`, which fetches information for a single user by their ID. |
+| `get_transcript_details` | Queries the Fireflies API for a single transcript's details, such as title and ID, using its unique identifier. It fetches one specific entry, distinguishing it from `list_transcripts`, which retrieves a collection, and from `get_ai_apps_outputs` which gets AI data from a transcript. |
+| `list_transcripts` | Fetches a list of meeting transcripts, returning the title and ID for each. The list can be filtered to return only transcripts for a specific user. This function complements `get_transcript_details`, which retrieves a single transcript by its unique ID. |
+| `get_bite_details` | Retrieves detailed information for a specific bite (soundbite/clip) using its unique ID. It fetches data including the user ID, name, processing status, and summary. This provides a focused view of a single bite, distinguishing it from `list_bites` which fetches a collection of bites. |
+| `list_bites` | Retrieves a list of soundbites (clips) from the Fireflies API. An optional 'mine' parameter filters for soundbites belonging only to the authenticated user. Differentiates from 'get_bite_details' by fetching multiple items rather than a single one by ID. |
+| `add_to_live_meeting` | Executes a GraphQL mutation to make the Fireflies.ai notetaker join a live meeting specified by its URL. This action initiates the bot's recording and transcription process for the ongoing session and returns a success confirmation. |
+| `create_soundbite_from_transcript` | Creates a soundbite/clip from a specified segment of a transcript using its ID, start, and end times. This function executes a GraphQL mutation, returning details of the newly created bite, such as its ID and processing status. |
+| `delete_transcript` | Permanently deletes a specific transcript from Fireflies.ai using its ID. This destructive operation executes a GraphQL mutation and returns a dictionary containing the details of the transcript (e.g., title, date) as it existed just before being removed, confirming the action. |
+| `set_user_role` | Assigns a new role (e.g., 'admin', 'member') to a user specified by their ID. This function executes a GraphQL mutation to modify user data and returns a dictionary with the user's updated name and admin status to confirm the change. |
+| `transcribe_audio_from_url` | Submits an audio file from a URL to the Fireflies.ai API for transcription. It can optionally associate a title and a list of attendees with the audio, returning the upload status and details upon completion. |
+| `update_transcript_title` | Updates the title of a specific transcript, identified by its ID, to a new value. This function executes a GraphQL mutation and returns a dictionary containing the newly assigned title upon successful completion of the request. |
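A hedged sketch of chaining a few of the Fireflies tools above; only the tool names come from the table, while the module path, constructor, parameter names, and response keys are assumptions.

```python
# Hypothetical sketch of chaining the Fireflies tools described above.
from universal_mcp.applications.fireflies.app import FirefliesApp  # assumed module path

fireflies_integration = ...  # a configured integration supplying the Fireflies API key (assumption)
app = FirefliesApp(integration=fireflies_integration)

transcripts = app.list_transcripts()                 # titles and IDs for recent meetings
first_id = transcripts[0]["id"]                      # assumed response shape

details = app.get_transcript_details(first_id)       # one transcript by its ID
bite = app.create_soundbite_from_transcript(         # clip a segment of that transcript
    transcript_id=first_id, start_time=30, end_time=60,  # parameter names are assumed
)
```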
universal_mcp/applications/fpl/README.md
@@ -9,15 +9,15 @@ This is automatically generated from OpenAPI schema for the FplApp API.
 
 | Tool | Description |
 |------|-------------|
-| `get_player_information` |
-| `
-| `
-| `
-| `compare_players` |
-| `analyze_player_fixtures` |
-| `
-| `get_blank_gameweeks` |
-| `get_double_gameweeks` |
-| `get_league_standings` |
-| `
-| `
+| `get_player_information` | Fetches a detailed profile for a specific FPL player, identified by ID or name. It can include gameweek performance history, filterable by range, and future fixtures. This provides an in-depth look at one player, differing from broader search or analysis functions like `search_fpl_players`. |
+| `find_players` | Searches for FPL players by full or partial name with optional filtering by team and position. This is a discovery tool, differentiating it from `get_player_information` which fetches a specific known player. It serves as a public interface to the internal `search_players` utility. |
+| `get_gameweek_snapshot` | Provides a detailed snapshot of the FPL schedule by identifying the current, previous, and next gameweeks. It calculates the precise real-time status of the current gameweek (e.g., 'Imminent', 'In Progress') and returns key deadline times and overall season progress. |
+| `screen_players` | Filters the FPL player database using multiple criteria like position, price, points, and form. Returns a sorted list of matching players, summary statistics for the filtered group, and optional recent gameweek performance data to aid in player discovery and analysis. |
+| `compare_players` | Performs a detailed comparison of multiple players based on specified metrics, recent gameweek history, and upcoming fixture analysis. It provides a side-by-side breakdown, identifies the best performer per metric, and determines an overall winner, considering fixture difficulty, blanks, and doubles for a comprehensive overview. |
+| `analyze_player_fixtures` | Analyzes a player's upcoming fixture difficulty. Given a player's name, it retrieves their schedule for a set number of matches, returning a detailed list and a calculated difficulty rating. This method is a focused alternative to the more comprehensive `analyze_fixtures` function. |
+| `analyze_entity_fixtures` | Provides a detailed fixture difficulty analysis for a specific player, team, or an entire position over a set number of gameweeks. It calculates a difficulty score and can include blank/double gameweek data, offering broader analysis than the player-only `analyze_player_fixtures`. |
+| `get_blank_gameweeks` | Identifies upcoming 'blank' gameweeks where teams lack a scheduled fixture. The function returns a list detailing each blank gameweek and the affected teams within a specified number of weeks, which is essential for strategic planning. It is distinct from `get_double_gameweeks`. |
+| `get_double_gameweeks` | Identifies upcoming double gameweeks where teams have multiple fixtures within a specified number of weeks. It returns a list detailing each double gameweek and the teams involved. This function specifically finds gameweeks with extra matches, unlike `get_blank_gameweeks` which finds those with none. |
+| `get_league_standings` | Retrieves current standings for a specified FPL classic mini-league by its ID. It fetches and parses raw API data to provide a direct snapshot of the league table, distinguishing it from `get_league_analytics` which performs deeper, historical analysis. |
+| `analyze_league` | Performs advanced analysis on an FPL league for a given gameweek range. It routes requests based on the analysis type ('overview', 'historical', 'team_composition') to provide deeper insights beyond the basic rankings from `get_league_standings`, such as historical performance or team composition. |
+| `get_manager_team_info` | Retrieves detailed information for a specific FPL manager's team (an "entry") using its unique ID. This is distinct from functions analyzing Premier League clubs, as it targets an individual user's fantasy squad and its performance details within the game. |
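A hedged sketch of the player discovery and analysis flow described in the table above; the module path, constructor, and keyword argument names (such as `num_fixtures` and `num_gameweeks`) are assumptions, and the tool names are the only part taken from the diff.

```python
# Hypothetical sketch of the FplApp discovery/analysis tools listed above.
from universal_mcp.applications.fpl.app import FplApp  # assumed module path

app = FplApp(integration=None)  # whether an integration is required for public FPL data is an assumption

players = app.find_players("Haaland")                              # name search, optional team/position filters
profile = app.get_player_information("Haaland")                    # gameweek history plus upcoming fixtures
fixtures = app.analyze_player_fixtures("Haaland", num_fixtures=5)  # difficulty rating for the next matches
blanks = app.get_blank_gameweeks(num_gameweeks=8)                  # gameweeks with no fixture for some teams
```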
universal_mcp/applications/fpl/app.py
@@ -7,8 +7,8 @@ import requests
 from universal_mcp.applications.application import APIApplication
 from universal_mcp.integrations import Integration
 
-from .utils.api import api
-from .utils.fixtures import (
+from universal_mcp.applications.fpl.utils.api import api
+from universal_mcp.applications.fpl.utils.fixtures import (
     analyze_player_fixtures,
     get_blank_gameweeks,
     get_double_gameweeks,
@@ -16,20 +16,20 @@ from .utils.fixtures import (
     get_player_fixtures,
     get_player_gameweek_history,
 )
-from .utils.helper import (
+from universal_mcp.applications.fpl.utils.helper import (
     find_players_by_name,
     get_player_info,
     get_players_resource,
     get_team_by_name,
     search_players,
 )
-from .utils.league_utils import (
+from universal_mcp.applications.fpl.utils.league_utils import (
     _get_league_historical_performance,
     _get_league_standings,
     _get_league_team_composition,
     parse_league_standings,
 )
-from .utils.position_utils import normalize_position
+from universal_mcp.applications.fpl.utils.position_utils import normalize_position
 
 
 class FplApp(APIApplication):
universal_mcp/applications/github/README.md
@@ -9,13 +9,13 @@ This is automatically generated from OpenAPI schema for the GithubApp API.
 
 | Tool | Description |
 |------|-------------|
-| `star_repository` | Stars a GitHub repository
-| `
-| `list_branches` |
-| `list_pull_requests` |
-| `
-| `get_pull_request` |
-| `create_pull_request` | Creates a
-| `create_issue` | Creates a new issue in a
-| `update_issue` |
-| `list_repo_activities` |
+| `star_repository` | Stars a GitHub repository for the authenticated user. This user-centric action takes the full repository name ('owner/repo') and returns a simple string message confirming the outcome, unlike other functions that list or create repository content like issues or pull requests. |
+| `list_recent_commits` | Fetches and formats the 12 most recent commits from a repository. It returns a human-readable string summarizing each commit's hash, author, and message, providing a focused overview of recent code changes, unlike functions that list branches, issues, or pull requests. |
+| `list_branches` | Fetches all branches for a specified GitHub repository and formats them into a human-readable string. This method is distinct from others like `search_issues`, as it returns a formatted list for display rather than raw JSON data for programmatic use. |
+| `list_pull_requests` | Fetches pull requests for a repository, filtered by state (e.g., 'open'). It returns a formatted string summarizing each PR's details, distinguishing it from `get_pull_request` (single PR) and `search_issues` (raw issue data). |
+| `search_issues` | Fetches issues from a GitHub repository using specified filters (state, assignee, labels) and pagination. It returns the raw API response as a list of dictionaries, providing detailed issue data for programmatic processing, distinct from other methods that return formatted strings. |
+| `get_pull_request` | Fetches a specific pull request from a repository using its unique number. It returns a human-readable string summarizing the PR's title, creator, status, and description, unlike `list_pull_requests` which retrieves a list of multiple PRs. |
+| `create_pull_request` | Creates a pull request between specified `head` and `base` branches, or converts an issue into a PR. Unlike read functions that return formatted strings, this write operation returns the raw API response as a dictionary, providing comprehensive data on the newly created pull request. |
+| `create_issue` | Creates a new issue in a GitHub repository using a title, body, and optional labels. It returns a formatted confirmation string with the new issue's number and URL, differing from `update_issue` which modifies existing issues and `search_issues` which returns raw API data. |
+| `update_issue` | Modifies an existing GitHub issue, identified by its number within a repository. It can update optional fields like title, body, or state and returns the raw API response as a dictionary, differentiating it from `create_issue` which makes new issues and returns a formatted string. |
+| `list_repo_activities` | Fetches recent events for a GitHub repository and formats them into a human-readable string. It summarizes activities with actors and timestamps, providing a general event feed, unlike other `list_*` functions which retrieve specific resources like commits or issues. |