universal-mcp 0.1.12__py3-none-any.whl → 0.1.13rc2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (109)
  1. universal_mcp/applications/__init__.py +51 -7
  2. universal_mcp/cli.py +109 -17
  3. universal_mcp/integrations/__init__.py +1 -1
  4. universal_mcp/integrations/integration.py +79 -0
  5. universal_mcp/servers/README.md +79 -0
  6. universal_mcp/servers/server.py +17 -29
  7. universal_mcp/stores/README.md +74 -0
  8. universal_mcp/stores/store.py +0 -2
  9. universal_mcp/templates/README.md.j2 +93 -0
  10. universal_mcp/templates/api_client.py.j2 +27 -0
  11. universal_mcp/tools/README.md +86 -0
  12. universal_mcp/tools/tools.py +1 -1
  13. universal_mcp/utils/agentr.py +90 -0
  14. universal_mcp/utils/api_generator.py +166 -208
  15. universal_mcp/utils/openapi.py +221 -321
  16. universal_mcp/utils/singleton.py +23 -0
  17. {universal_mcp-0.1.12.dist-info → universal_mcp-0.1.13rc2.dist-info}/METADATA +16 -41
  18. universal_mcp-0.1.13rc2.dist-info/RECORD +38 -0
  19. universal_mcp/applications/ahrefs/README.md +0 -76
  20. universal_mcp/applications/ahrefs/__init__.py +0 -0
  21. universal_mcp/applications/ahrefs/app.py +0 -2291
  22. universal_mcp/applications/cal_com_v2/README.md +0 -175
  23. universal_mcp/applications/cal_com_v2/__init__.py +0 -0
  24. universal_mcp/applications/cal_com_v2/app.py +0 -5390
  25. universal_mcp/applications/calendly/README.md +0 -78
  26. universal_mcp/applications/calendly/__init__.py +0 -0
  27. universal_mcp/applications/calendly/app.py +0 -1195
  28. universal_mcp/applications/clickup/README.md +0 -160
  29. universal_mcp/applications/clickup/__init__.py +0 -0
  30. universal_mcp/applications/clickup/app.py +0 -5009
  31. universal_mcp/applications/coda/README.md +0 -133
  32. universal_mcp/applications/coda/__init__.py +0 -0
  33. universal_mcp/applications/coda/app.py +0 -3671
  34. universal_mcp/applications/e2b/README.md +0 -37
  35. universal_mcp/applications/e2b/app.py +0 -65
  36. universal_mcp/applications/elevenlabs/README.md +0 -84
  37. universal_mcp/applications/elevenlabs/__init__.py +0 -0
  38. universal_mcp/applications/elevenlabs/app.py +0 -1402
  39. universal_mcp/applications/falai/README.md +0 -42
  40. universal_mcp/applications/falai/__init__.py +0 -0
  41. universal_mcp/applications/falai/app.py +0 -332
  42. universal_mcp/applications/figma/README.md +0 -74
  43. universal_mcp/applications/figma/__init__.py +0 -0
  44. universal_mcp/applications/figma/app.py +0 -1261
  45. universal_mcp/applications/firecrawl/README.md +0 -45
  46. universal_mcp/applications/firecrawl/app.py +0 -268
  47. universal_mcp/applications/github/README.md +0 -47
  48. universal_mcp/applications/github/app.py +0 -429
  49. universal_mcp/applications/gong/README.md +0 -88
  50. universal_mcp/applications/gong/__init__.py +0 -0
  51. universal_mcp/applications/gong/app.py +0 -2297
  52. universal_mcp/applications/google_calendar/app.py +0 -442
  53. universal_mcp/applications/google_docs/README.md +0 -40
  54. universal_mcp/applications/google_docs/app.py +0 -88
  55. universal_mcp/applications/google_drive/README.md +0 -44
  56. universal_mcp/applications/google_drive/app.py +0 -286
  57. universal_mcp/applications/google_mail/README.md +0 -47
  58. universal_mcp/applications/google_mail/app.py +0 -664
  59. universal_mcp/applications/google_sheet/README.md +0 -42
  60. universal_mcp/applications/google_sheet/app.py +0 -150
  61. universal_mcp/applications/hashnode/app.py +0 -81
  62. universal_mcp/applications/hashnode/prompt.md +0 -23
  63. universal_mcp/applications/heygen/README.md +0 -69
  64. universal_mcp/applications/heygen/__init__.py +0 -0
  65. universal_mcp/applications/heygen/app.py +0 -956
  66. universal_mcp/applications/mailchimp/README.md +0 -306
  67. universal_mcp/applications/mailchimp/__init__.py +0 -0
  68. universal_mcp/applications/mailchimp/app.py +0 -10937
  69. universal_mcp/applications/markitdown/app.py +0 -44
  70. universal_mcp/applications/notion/README.md +0 -55
  71. universal_mcp/applications/notion/__init__.py +0 -0
  72. universal_mcp/applications/notion/app.py +0 -527
  73. universal_mcp/applications/perplexity/README.md +0 -37
  74. universal_mcp/applications/perplexity/app.py +0 -65
  75. universal_mcp/applications/reddit/README.md +0 -45
  76. universal_mcp/applications/reddit/app.py +0 -379
  77. universal_mcp/applications/replicate/README.md +0 -65
  78. universal_mcp/applications/replicate/__init__.py +0 -0
  79. universal_mcp/applications/replicate/app.py +0 -980
  80. universal_mcp/applications/resend/README.md +0 -38
  81. universal_mcp/applications/resend/app.py +0 -37
  82. universal_mcp/applications/retell_ai/README.md +0 -46
  83. universal_mcp/applications/retell_ai/__init__.py +0 -0
  84. universal_mcp/applications/retell_ai/app.py +0 -333
  85. universal_mcp/applications/rocketlane/README.md +0 -42
  86. universal_mcp/applications/rocketlane/__init__.py +0 -0
  87. universal_mcp/applications/rocketlane/app.py +0 -194
  88. universal_mcp/applications/serpapi/README.md +0 -37
  89. universal_mcp/applications/serpapi/app.py +0 -73
  90. universal_mcp/applications/spotify/README.md +0 -116
  91. universal_mcp/applications/spotify/__init__.py +0 -0
  92. universal_mcp/applications/spotify/app.py +0 -2526
  93. universal_mcp/applications/supabase/README.md +0 -112
  94. universal_mcp/applications/supabase/__init__.py +0 -0
  95. universal_mcp/applications/supabase/app.py +0 -2970
  96. universal_mcp/applications/tavily/README.md +0 -38
  97. universal_mcp/applications/tavily/app.py +0 -51
  98. universal_mcp/applications/wrike/README.md +0 -71
  99. universal_mcp/applications/wrike/__init__.py +0 -0
  100. universal_mcp/applications/wrike/app.py +0 -1372
  101. universal_mcp/applications/youtube/README.md +0 -82
  102. universal_mcp/applications/youtube/__init__.py +0 -0
  103. universal_mcp/applications/youtube/app.py +0 -1428
  104. universal_mcp/applications/zenquotes/README.md +0 -37
  105. universal_mcp/applications/zenquotes/app.py +0 -31
  106. universal_mcp/integrations/agentr.py +0 -112
  107. universal_mcp-0.1.12.dist-info/RECORD +0 -119
  108. {universal_mcp-0.1.12.dist-info → universal_mcp-0.1.13rc2.dist-info}/WHEEL +0 -0
  109. {universal_mcp-0.1.12.dist-info → universal_mcp-0.1.13rc2.dist-info}/entry_points.txt +0 -0
universal_mcp/applications/perplexity/app.py
@@ -1,65 +0,0 @@
- from typing import Any, Literal
-
- from universal_mcp.applications.application import APIApplication
- from universal_mcp.integrations import Integration
-
-
- class PerplexityApp(APIApplication):
-     def __init__(self, integration: Integration | None = None) -> None:
-         super().__init__(name="perplexity", integration=integration)
-         self.base_url = "https://api.perplexity.ai"
-
-     def chat(
-         self,
-         query: str,
-         model: Literal[
-             "r1-1776",
-             "sonar",
-             "sonar-pro",
-             "sonar-reasoning",
-             "sonar-reasoning-pro",
-             "sonar-deep-research",
-         ] = "sonar",
-         temperature: float = 1,
-         system_prompt: str = "Be precise and concise.",
-     ) -> dict[str, Any] | str:
-         """
-         Initiates a chat completion request to generate AI responses using various models with customizable parameters.
-
-         Args:
-             query: The input text/prompt to send to the chat model
-             model: The model to use for chat completion. Options include 'r1-1776', 'sonar', 'sonar-pro', 'sonar-reasoning', 'sonar-reasoning-pro', 'sonar-deep-research'. Defaults to 'sonar'
-             temperature: Controls randomness in the model's output. Higher values make output more random, lower values more deterministic. Defaults to 1
-             system_prompt: Initial system message to guide the model's behavior. Defaults to 'Be precise and concise.'
-
-         Returns:
-             A dictionary containing the generated content and citations, with keys 'content' (str) and 'citations' (list), or a string in some cases
-
-         Raises:
-             AuthenticationError: Raised when API authentication fails due to missing or invalid credentials
-             HTTPError: Raised when the API request fails or returns an error status
-
-         Tags:
-             chat, generate, ai, completion, important
-         """
-         endpoint = f"{self.base_url}/chat/completions"
-         messages = []
-         if system_prompt:
-             messages.append({"role": "system", "content": system_prompt})
-         messages.append({"role": "user", "content": query})
-         payload = {
-             "model": model,
-             "messages": messages,
-             "temperature": temperature,
-             # "max_tokens": 512,
-         }
-         data = self._post(endpoint, data=payload)
-         response = data.json()
-         content = response["choices"][0]["message"]["content"]
-         citations = response.get("citations", [])
-         return {"content": content, "citations": citations}
-
-     def list_tools(self):
-         return [
-             self.chat,
-         ]
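For context on the tool removed above: `chat` posts to Perplexity's `/chat/completions` endpoint and returns the answer text plus citations. Below is a minimal usage sketch of this 0.1.12 interface; `perplexity_integration` is a hypothetical `Integration` instance that supplies the Perplexity API key (the concrete integration class is not shown in this diff).

```python
# Minimal sketch of how the removed PerplexityApp was used (0.1.12 interface).
# `perplexity_integration` is a hypothetical Integration supplying the API key.
from universal_mcp.applications.perplexity.app import PerplexityApp

app = PerplexityApp(integration=perplexity_integration)
result = app.chat(
    query="Summarize the latest changes in the MCP specification.",
    model="sonar-pro",
    temperature=0.2,
    system_prompt="Be precise and concise.",
)
print(result["content"])
for citation in result["citations"]:
    print("-", citation)
```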
universal_mcp/applications/reddit/README.md
@@ -1,45 +0,0 @@
-
- # Reddit MCP Server
-
- An MCP Server for the Reddit API.
-
- ## Supported Integrations
-
- - AgentR
- - API Key (Coming Soon)
- - OAuth (Coming Soon)
-
- ## Tools
-
- This is automatically generated from OpenAPI schema for the Reddit API.
-
- ## Supported Integrations
-
- This tool can be integrated with any service that supports HTTP requests.
-
- ## Tool List
-
- | Tool | Description |
- |------|-------------|
- | create_post | Create a new post in a specified subreddit. |
- | delete_content | Delete a Reddit post or comment. |
- | edit_content | Edit the text content of a Reddit post or comment. |
- | get_comment_by_id | Retrieve a specific Reddit comment by its full ID (t1_commentid). |
- | get_post_flairs | Retrieve the list of available post flairs for a specific subreddit. |
- | get_subreddit_posts | Get the top posts from a specified subreddit over a given timeframe. |
- | post_comment | Post a comment to a Reddit post or another comment. |
- | search_subreddits | Search for subreddits matching a query string. |
- | validate | Function for validate |
-
-
-
- ## Usage
-
- - Login to AgentR
- - Follow the quickstart guide to setup MCP Server for your client
- - Visit Apps Store and enable the Reddit app
- - Restart the MCP Server
-
- ### Local Development
-
- - Follow the README to test with the local MCP Server
@@ -1,379 +0,0 @@
1
- import httpx
2
- from loguru import logger
3
-
4
- from universal_mcp.applications.application import APIApplication
5
- from universal_mcp.exceptions import NotAuthorizedError
6
- from universal_mcp.integrations import Integration
7
-
8
-
9
- class RedditApp(APIApplication):
10
- def __init__(self, integration: Integration) -> None:
11
- super().__init__(name="reddit", integration=integration)
12
- self.base_api_url = "https://oauth.reddit.com"
13
-
14
- def _post(self, url, data):
15
- try:
16
- headers = self._get_headers()
17
- response = httpx.post(url, headers=headers, data=data)
18
- response.raise_for_status()
19
- return response
20
- except NotAuthorizedError as e:
21
- logger.warning(f"Authorization needed: {e.message}")
22
- raise e
23
- except httpx.HTTPStatusError as e:
24
- if e.response.status_code == 429:
25
- return e.response.text or "Rate limit exceeded. Please try again later."
26
- else:
27
- raise e
28
- except Exception as e:
29
- logger.error(f"Error posting {url}: {e}")
30
- raise e
31
-
32
- def _get_headers(self):
33
- if not self.integration:
34
- raise ValueError("Integration not configured for RedditApp")
35
- credentials = self.integration.get_credentials()
36
- if "access_token" not in credentials:
37
- logger.error("Reddit credentials found but missing 'access_token'.")
38
- raise ValueError("Invalid Reddit credentials format.")
39
-
40
- return {
41
- "Authorization": f"Bearer {credentials['access_token']}",
42
- "User-Agent": "agentr-reddit-app/0.1 by AgentR",
43
- }
44
-
45
- def get_subreddit_posts(
46
- self, subreddit: str, limit: int = 5, timeframe: str = "day"
47
- ) -> str:
48
- """
49
- Retrieves and formats top posts from a specified subreddit within a given timeframe using the Reddit API
50
-
51
- Args:
52
- subreddit: The name of the subreddit (e.g., 'python', 'worldnews') without the 'r/' prefix
53
- limit: The maximum number of posts to return (default: 5, max: 100)
54
- timeframe: The time period for top posts. Valid options: 'hour', 'day', 'week', 'month', 'year', 'all' (default: 'day')
55
-
56
- Returns:
57
- A formatted string containing a numbered list of top posts, including titles, authors, scores, and URLs, or an error message if the request fails
58
-
59
- Raises:
60
- RequestException: When the HTTP request to the Reddit API fails
61
- JSONDecodeError: When the API response contains invalid JSON
62
-
63
- Tags:
64
- fetch, reddit, api, list, social-media, important, read-only
65
- """
66
- valid_timeframes = ["hour", "day", "week", "month", "year", "all"]
67
- if timeframe not in valid_timeframes:
68
- return f"Error: Invalid timeframe '{timeframe}'. Please use one of: {', '.join(valid_timeframes)}"
69
- if not 1 <= limit <= 100:
70
- return (
71
- f"Error: Invalid limit '{limit}'. Please use a value between 1 and 100."
72
- )
73
- url = f"{self.base_api_url}/r/{subreddit}/top"
74
- params = {"limit": limit, "t": timeframe}
75
- logger.info(
76
- f"Requesting top {limit} posts from r/{subreddit} for timeframe '{timeframe}'"
77
- )
78
- response = self._get(url, params=params)
79
- data = response.json()
80
- if "error" in data:
81
- logger.error(
82
- f"Reddit API error: {data['error']} - {data.get('message', '')}"
83
- )
84
- return f"Error from Reddit API: {data['error']} - {data.get('message', '')}"
85
- posts = data.get("data", {}).get("children", [])
86
- if not posts:
87
- return (
88
- f"No top posts found in r/{subreddit} for the timeframe '{timeframe}'."
89
- )
90
- result_lines = [
91
- f"Top {len(posts)} posts from r/{subreddit} (timeframe: {timeframe}):\n"
92
- ]
93
- for i, post_container in enumerate(posts):
94
- post = post_container.get("data", {})
95
- title = post.get("title", "No Title")
96
- score = post.get("score", 0)
97
- author = post.get("author", "Unknown Author")
98
- permalink = post.get("permalink", "")
99
- full_url = f"https://www.reddit.com{permalink}" if permalink else "No Link"
100
-
101
- result_lines.append(f'{i + 1}. "{title}" by u/{author} (Score: {score})')
102
- result_lines.append(f" Link: {full_url}")
103
- return "\n".join(result_lines)
104
-
105
- def search_subreddits(
106
- self, query: str, limit: int = 5, sort: str = "relevance"
107
- ) -> str:
108
- """
109
- Searches Reddit for subreddits matching a given query string and returns a formatted list of results including subreddit names, subscriber counts, and descriptions.
110
-
111
- Args:
112
- query: The text to search for in subreddit names and descriptions
113
- limit: The maximum number of subreddits to return, between 1 and 100 (default: 5)
114
- sort: The order of results, either 'relevance' or 'activity' (default: 'relevance')
115
-
116
- Returns:
117
- A formatted string containing a list of matching subreddits with their names, subscriber counts, and descriptions, or an error message if the search fails or parameters are invalid
118
-
119
- Raises:
120
- RequestException: When the HTTP request to Reddit's API fails
121
- JSONDecodeError: When the API response contains invalid JSON
122
-
123
- Tags:
124
- search, important, reddit, api, query, format, list, validation
125
- """
126
- valid_sorts = ["relevance", "activity"]
127
- if sort not in valid_sorts:
128
- return f"Error: Invalid sort option '{sort}'. Please use one of: {', '.join(valid_sorts)}"
129
- if not 1 <= limit <= 100:
130
- return (
131
- f"Error: Invalid limit '{limit}'. Please use a value between 1 and 100."
132
- )
133
- url = f"{self.base_api_url}/subreddits/search"
134
- params = {
135
- "q": query,
136
- "limit": limit,
137
- "sort": sort,
138
- # Optionally include NSFW results? Defaulting to false for safety.
139
- # "include_over_18": "false"
140
- }
141
- logger.info(
142
- f"Searching for subreddits matching '{query}' (limit: {limit}, sort: {sort})"
143
- )
144
- response = self._get(url, params=params)
145
- data = response.json()
146
- if "error" in data:
147
- logger.error(
148
- f"Reddit API error during subreddit search: {data['error']} - {data.get('message', '')}"
149
- )
150
- return f"Error from Reddit API during search: {data['error']} - {data.get('message', '')}"
151
- subreddits = data.get("data", {}).get("children", [])
152
- if not subreddits:
153
- return f"No subreddits found matching the query '{query}'."
154
- result_lines = [
155
- f"Found {len(subreddits)} subreddits matching '{query}' (sorted by {sort}):\n"
156
- ]
157
- for i, sub_container in enumerate(subreddits):
158
- sub_data = sub_container.get("data", {})
159
- display_name = sub_data.get("display_name", "N/A") # e.g., 'python'
160
- title = sub_data.get(
161
- "title", "No Title"
162
- ) # Often the same as display_name or slightly longer
163
- subscribers = sub_data.get("subscribers", 0)
164
- # Use public_description if available, fallback to title
165
- description = sub_data.get("public_description", "").strip() or title
166
-
167
- # Format subscriber count nicely
168
- subscriber_str = f"{subscribers:,}" if subscribers else "Unknown"
169
-
170
- result_lines.append(
171
- f"{i + 1}. r/{display_name} ({subscriber_str} subscribers)"
172
- )
173
- if description:
174
- result_lines.append(f" Description: {description}")
175
- return "\n".join(result_lines)
176
-
177
- def get_post_flairs(self, subreddit: str):
178
- """
179
- Retrieves a list of available post flairs for a specified subreddit using the Reddit API.
180
-
181
- Args:
182
- subreddit: The name of the subreddit (e.g., 'python', 'worldnews') without the 'r/' prefix
183
-
184
- Returns:
185
- A list of dictionaries containing flair details if flairs exist, or a string message indicating no flairs are available
186
-
187
- Raises:
188
- RequestException: When the API request fails or network connectivity issues occur
189
- JSONDecodeError: When the API response contains invalid JSON data
190
-
191
- Tags:
192
- fetch, get, reddit, flair, api, read-only
193
- """
194
- url = f"{self.base_api_url}/r/{subreddit}/api/link_flair_v2"
195
- logger.info(f"Fetching post flairs for subreddit: r/{subreddit}")
196
- response = self._get(url)
197
- flairs = response.json()
198
- if not flairs:
199
- return f"No post flairs available for r/{subreddit}."
200
- return flairs
201
-
202
- def create_post(
203
- self,
204
- subreddit: str,
205
- title: str,
206
- kind: str = "self",
207
- text: str = None,
208
- url: str = None,
209
- flair_id: str = None,
210
- ):
211
- """
212
- Creates a new Reddit post in a specified subreddit with support for text posts, link posts, and image posts
213
-
214
- Args:
215
- subreddit: The name of the subreddit (e.g., 'python', 'worldnews') without the 'r/'
216
- title: The title of the post
217
- kind: The type of post; either 'self' (text post) or 'link' (link or image post)
218
- text: The text content of the post; required if kind is 'self'
219
- url: The URL of the link or image; required if kind is 'link'. Must end with valid image extension for image posts
220
- flair_id: The ID of the flair to assign to the post
221
-
222
- Returns:
223
- The JSON response from the Reddit API, or an error message as a string if the API returns an error
224
-
225
- Raises:
226
- ValueError: Raised when kind is invalid or when required parameters (text for self posts, url for link posts) are missing
227
-
228
- Tags:
229
- create, post, social-media, reddit, api, important
230
- """
231
- if kind not in ["self", "link"]:
232
- raise ValueError("Invalid post kind. Must be one of 'self' or 'link'.")
233
- if kind == "self" and not text:
234
- raise ValueError("Text content is required for text posts.")
235
- if kind == "link" and not url:
236
- raise ValueError("URL is required for link posts (including images).")
237
- data = {
238
- "sr": subreddit,
239
- "title": title,
240
- "kind": kind,
241
- "text": text,
242
- "url": url,
243
- "flair_id": flair_id,
244
- }
245
- data = {k: v for k, v in data.items() if v is not None}
246
- url_api = f"{self.base_api_url}/api/submit"
247
- logger.info(f"Submitting a new post to r/{subreddit}")
248
- response = self._post(url_api, data=data)
249
- response_json = response.json()
250
- if (
251
- response_json
252
- and "json" in response_json
253
- and "errors" in response_json["json"]
254
- ):
255
- errors = response_json["json"]["errors"]
256
- if errors:
257
- error_message = ", ".join(
258
- [f"{code}: {message}" for code, message in errors]
259
- )
260
- return f"Reddit API error: {error_message}"
261
- return response_json
262
-
263
- def get_comment_by_id(self, comment_id: str) -> dict:
264
- """
265
- Retrieves a specific Reddit comment using its unique identifier.
266
-
267
- Args:
268
- comment_id: The full unique identifier of the comment (prefixed with 't1_', e.g., 't1_abcdef')
269
-
270
- Returns:
271
- A dictionary containing the comment data including attributes like author, body, score, etc. If the comment is not found, returns a dictionary with an error message.
272
-
273
- Raises:
274
- HTTPError: When the Reddit API request fails due to network issues or invalid authentication
275
- JSONDecodeError: When the API response cannot be parsed as valid JSON
276
-
277
- Tags:
278
- retrieve, get, reddit, comment, api, fetch, single-item, important
279
- """
280
- url = f"https://oauth.reddit.com/api/info.json?id={comment_id}"
281
- response = self._get(url)
282
- data = response.json()
283
- comments = data.get("data", {}).get("children", [])
284
- if comments:
285
- return comments[0]["data"]
286
- else:
287
- return {"error": "Comment not found."}
288
-
289
- def post_comment(self, parent_id: str, text: str) -> dict:
290
- """
291
- Posts a comment to a Reddit post or comment using the Reddit API
292
-
293
- Args:
294
- parent_id: The full ID of the parent comment or post (e.g., 't3_abc123' for a post, 't1_def456' for a comment)
295
- text: The text content of the comment to be posted
296
-
297
- Returns:
298
- A dictionary containing the Reddit API response with details about the posted comment
299
-
300
- Raises:
301
- RequestException: If the API request fails or returns an error status code
302
- JSONDecodeError: If the API response cannot be parsed as JSON
303
-
304
- Tags:
305
- post, comment, social, reddit, api, important
306
- """
307
- url = f"{self.base_api_url}/api/comment"
308
- data = {
309
- "parent": parent_id,
310
- "text": text,
311
- }
312
- logger.info(f"Posting comment to {parent_id}")
313
- response = self._post(url, data=data)
314
- return response.json()
315
-
316
- def edit_content(self, content_id: str, text: str) -> dict:
317
- """
318
- Edits the text content of an existing Reddit post or comment using the Reddit API
319
-
320
- Args:
321
- content_id: The full ID of the content to edit (e.g., 't3_abc123' for a post, 't1_def456' for a comment)
322
- text: The new text content to replace the existing content
323
-
324
- Returns:
325
- A dictionary containing the API response with details about the edited content
326
-
327
- Raises:
328
- RequestException: When the API request fails or network connectivity issues occur
329
- ValueError: When invalid content_id format or empty text is provided
330
-
331
- Tags:
332
- edit, update, content, reddit, api, important
333
- """
334
- url = f"{self.base_api_url}/api/editusertext"
335
- data = {
336
- "thing_id": content_id,
337
- "text": text,
338
- }
339
- logger.info(f"Editing content {content_id}")
340
- response = self._post(url, data=data)
341
- return response.json()
342
-
343
- def delete_content(self, content_id: str) -> dict:
344
- """
345
- Deletes a specified Reddit post or comment using the Reddit API.
346
-
347
- Args:
348
- content_id: The full ID of the content to delete (e.g., 't3_abc123' for a post, 't1_def456' for a comment)
349
-
350
- Returns:
351
- A dictionary containing a success message with the deleted content ID
352
-
353
- Raises:
354
- HTTPError: When the API request fails or returns an error status code
355
- RequestException: When there are network connectivity issues or API communication problems
356
-
357
- Tags:
358
- delete, content-management, api, reddit, important
359
- """
360
- url = f"{self.base_api_url}/api/del"
361
- data = {
362
- "id": content_id,
363
- }
364
- logger.info(f"Deleting content {content_id}")
365
- response = self._post(url, data=data)
366
- response.raise_for_status()
367
- return {"message": f"Content {content_id} deleted successfully."}
368
-
369
- def list_tools(self):
370
- return [
371
- self.get_subreddit_posts,
372
- self.search_subreddits,
373
- self.get_post_flairs,
374
- self.create_post,
375
- self.get_comment_by_id,
376
- self.post_comment,
377
- self.edit_content,
378
- self.delete_content,
379
- ]
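As a reading aid for the module removed above: every tool builds its auth headers through `_get_headers()`, which requires the integration's credentials to contain an `access_token`, and write operations go through the wrapped `_post()`. A hedged usage sketch of the 0.1.12 interface follows; `reddit_integration` is a hypothetical `Integration` whose `get_credentials()` returns a dict with an `access_token`.

```python
# Sketch of the removed RedditApp tools (0.1.12 interface).
# `reddit_integration` is a hypothetical Integration whose get_credentials()
# returns {"access_token": "..."}; that is what _get_headers() above expects.
from universal_mcp.applications.reddit.app import RedditApp

app = RedditApp(integration=reddit_integration)

# Read-only tool: returns a formatted string of this week's top posts.
print(app.get_subreddit_posts(subreddit="python", limit=3, timeframe="week"))

# Write tool: reply to a post ("t3_" prefix) or to a comment ("t1_" prefix).
reply = app.post_comment(parent_id="t3_abc123", text="Thanks for sharing this!")
print(reply)
```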
universal_mcp/applications/replicate/README.md
@@ -1,65 +0,0 @@
- # Replicate MCP Server
-
- An MCP Server for the Replicate API.
-
- ## Supported Integrations
-
- - AgentR
- - API Key (Coming Soon)
- - OAuth (Coming Soon)
-
- ## Tools
-
- This is automatically generated from OpenAPI schema for the Replicate API.
-
- ## Supported Integrations
-
- This tool can be integrated with any service that supports HTTP requests.
-
- ## Tool List
-
- | Tool | Description |
- |------|-------------|
- | account_get | Gets information about the authenticated account. |
- | collections_list | Lists collections of models available on Replicate, returning a paginated list of collection objects. |
- | collections_get | Retrieves detailed information about a specific model collection, with automatic truncation of large model lists to manage response size. |
- | deployments_list | Lists all deployments associated with the authenticated account. |
- | deployments_create | Creates a new model deployment with specified configuration parameters. |
- | deployments_get | Retrieves detailed information about a specific deployment by its owner and name. |
- | deployments_update | Updates configurable properties of an existing deployment, such as hardware specifications and instance scaling parameters. |
- | deployments_delete | Deletes a specified deployment associated with a given owner or organization |
- | deployments_predictions_create | Creates an asynchronous prediction using a specified deployment, optionally configuring webhook notifications for status updates. |
- | hardware_list | Retrieves a list of available hardware options for running models. |
- | models_list | Retrieves a paginated list of publicly available models from the Replicate API. |
- | models_create | Creates a new model in the system with specified parameters and metadata. |
- | models_search | Searches for public models based on a provided query string |
- | models_get | Retrieves detailed information about a specific AI model by its owner and name |
- | models_delete | Deletes a private model from the system, provided it has no existing versions. |
- | models_examples_list | Retrieves a list of example predictions associated with a specific model. |
- | models_predictions_create | Creates an asynchronous prediction request using a specified model version. |
- | models_readme_get | Retrieves the README content for a specified model in Markdown format. |
- | models_versions_list | Lists all available versions of a specified model. |
- | models_versions_get | Retrieves detailed information about a specific version of a model by querying the API. |
- | models_versions_delete | Deletes a specific version of a model and its associated predictions/output. |
- | trainings_create | Initiates a new asynchronous training job for a specific model version, with optional webhook notifications for progress updates. |
- | predictions_list | Lists all predictions created by the authenticated account within an optional time range. |
- | predictions_create | Creates an asynchronous prediction request using a specified model version. |
- | predictions_get | Retrieves the current state and details of a prediction by its ID. |
- | predictions_cancel | Cancels a running prediction job identified by its ID. |
- | trainings_list | Lists all training jobs created by the authenticated account. |
- | trainings_get | Retrieves the current state of a training job by its ID. |
- | trainings_cancel | Cancels a specific training job in progress. |
- | webhooks_default_secret_get | Retrieves the signing secret for the default webhook endpoint. |
-
-
-
- ## Usage
-
- - Login to AgentR
- - Follow the quickstart guide to setup MCP Server for your client
- - Visit Apps Store and enable the Replicate app
- - Restart the MCP Server
-
- ### Local Development
-
- - Follow the README to test with the local MCP Server
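The Replicate app module that backed this README (universal_mcp/applications/replicate/app.py, 980 lines, file 79 in the list above) is not reproduced in this diff. For orientation only, the sketch below shows the kind of HTTP call a tool such as `predictions_create` corresponds to, using Replicate's documented public REST endpoint; the model version ID and input payload are placeholders, and none of this is taken from the removed module.

```python
# Hedged illustration of the request a tool like predictions_create wraps.
# Endpoint and auth header follow Replicate's public REST API; the version ID
# and input values below are placeholders, not values from the removed app.
import os

import httpx

response = httpx.post(
    "https://api.replicate.com/v1/predictions",
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
    },
    json={
        "version": "<model-version-id>",  # placeholder
        "input": {"prompt": "a watercolor painting of a lighthouse"},
    },
)
response.raise_for_status()
prediction = response.json()
print(prediction["id"], prediction["status"])
```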