universal-mcp-applications 0.1.17__py3-none-any.whl → 0.1.33__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release. This version of universal-mcp-applications might be problematic.
- universal_mcp/applications/BEST_PRACTICES.md +166 -0
- universal_mcp/applications/ahrefs/README.md +3 -3
- universal_mcp/applications/airtable/README.md +3 -3
- universal_mcp/applications/airtable/app.py +0 -1
- universal_mcp/applications/apollo/app.py +0 -1
- universal_mcp/applications/asana/README.md +3 -3
- universal_mcp/applications/aws_s3/README.md +29 -0
- universal_mcp/applications/aws_s3/app.py +40 -39
- universal_mcp/applications/bill/README.md +249 -0
- universal_mcp/applications/browser_use/README.md +1 -0
- universal_mcp/applications/browser_use/__init__.py +0 -0
- universal_mcp/applications/browser_use/app.py +71 -0
- universal_mcp/applications/calendly/README.md +45 -45
- universal_mcp/applications/calendly/app.py +125 -125
- universal_mcp/applications/canva/README.md +35 -35
- universal_mcp/applications/canva/app.py +95 -99
- universal_mcp/applications/clickup/README.md +4 -4
- universal_mcp/applications/confluence/app.py +0 -1
- universal_mcp/applications/contentful/README.md +1 -2
- universal_mcp/applications/contentful/app.py +4 -5
- universal_mcp/applications/crustdata/README.md +3 -3
- universal_mcp/applications/domain_checker/README.md +2 -2
- universal_mcp/applications/domain_checker/app.py +11 -15
- universal_mcp/applications/e2b/README.md +4 -4
- universal_mcp/applications/e2b/app.py +4 -4
- universal_mcp/applications/elevenlabs/README.md +3 -77
- universal_mcp/applications/elevenlabs/app.py +18 -15
- universal_mcp/applications/exa/README.md +7 -7
- universal_mcp/applications/exa/app.py +17 -17
- universal_mcp/applications/falai/README.md +13 -12
- universal_mcp/applications/falai/app.py +34 -35
- universal_mcp/applications/figma/README.md +3 -3
- universal_mcp/applications/file_system/README.md +13 -0
- universal_mcp/applications/file_system/app.py +9 -9
- universal_mcp/applications/firecrawl/README.md +9 -9
- universal_mcp/applications/firecrawl/app.py +46 -46
- universal_mcp/applications/fireflies/README.md +14 -14
- universal_mcp/applications/fireflies/app.py +164 -57
- universal_mcp/applications/fpl/README.md +12 -12
- universal_mcp/applications/fpl/app.py +54 -55
- universal_mcp/applications/ghost_content/app.py +0 -1
- universal_mcp/applications/github/README.md +10 -10
- universal_mcp/applications/github/app.py +50 -52
- universal_mcp/applications/google_calendar/README.md +10 -10
- universal_mcp/applications/google_calendar/app.py +50 -49
- universal_mcp/applications/google_docs/README.md +14 -14
- universal_mcp/applications/google_docs/app.py +307 -233
- universal_mcp/applications/google_drive/README.md +54 -57
- universal_mcp/applications/google_drive/app.py +270 -261
- universal_mcp/applications/google_gemini/README.md +3 -14
- universal_mcp/applications/google_gemini/app.py +15 -18
- universal_mcp/applications/google_mail/README.md +20 -20
- universal_mcp/applications/google_mail/app.py +110 -109
- universal_mcp/applications/google_searchconsole/README.md +10 -10
- universal_mcp/applications/google_searchconsole/app.py +37 -37
- universal_mcp/applications/google_sheet/README.md +25 -25
- universal_mcp/applications/google_sheet/app.py +270 -266
- universal_mcp/applications/hashnode/README.md +6 -3
- universal_mcp/applications/hashnode/app.py +174 -25
- universal_mcp/applications/http_tools/README.md +5 -5
- universal_mcp/applications/http_tools/app.py +10 -11
- universal_mcp/applications/hubspot/api_segments/__init__.py +0 -0
- universal_mcp/applications/hubspot/api_segments/api_segment_base.py +54 -0
- universal_mcp/applications/hubspot/api_segments/crm_api.py +7337 -0
- universal_mcp/applications/hubspot/api_segments/marketing_api.py +1467 -0
- universal_mcp/applications/hubspot/app.py +2 -15
- universal_mcp/applications/jira/app.py +0 -1
- universal_mcp/applications/klaviyo/README.md +0 -36
- universal_mcp/applications/linkedin/README.md +18 -4
- universal_mcp/applications/linkedin/app.py +763 -162
- universal_mcp/applications/mailchimp/README.md +3 -3
- universal_mcp/applications/markitdown/app.py +10 -5
- universal_mcp/applications/ms_teams/README.md +31 -31
- universal_mcp/applications/ms_teams/app.py +151 -151
- universal_mcp/applications/neon/README.md +3 -3
- universal_mcp/applications/onedrive/README.md +24 -0
- universal_mcp/applications/onedrive/__init__.py +1 -0
- universal_mcp/applications/onedrive/app.py +338 -0
- universal_mcp/applications/openai/README.md +18 -17
- universal_mcp/applications/openai/app.py +40 -39
- universal_mcp/applications/outlook/README.md +9 -9
- universal_mcp/applications/outlook/app.py +307 -225
- universal_mcp/applications/perplexity/README.md +4 -4
- universal_mcp/applications/perplexity/app.py +4 -4
- universal_mcp/applications/posthog/README.md +128 -127
- universal_mcp/applications/reddit/README.md +21 -124
- universal_mcp/applications/reddit/app.py +51 -68
- universal_mcp/applications/resend/README.md +29 -29
- universal_mcp/applications/resend/app.py +116 -117
- universal_mcp/applications/rocketlane/app.py +0 -1
- universal_mcp/applications/scraper/README.md +7 -4
- universal_mcp/applications/scraper/__init__.py +1 -1
- universal_mcp/applications/scraper/app.py +341 -103
- universal_mcp/applications/semrush/README.md +3 -0
- universal_mcp/applications/serpapi/README.md +3 -3
- universal_mcp/applications/serpapi/app.py +14 -14
- universal_mcp/applications/sharepoint/README.md +19 -0
- universal_mcp/applications/sharepoint/app.py +285 -173
- universal_mcp/applications/shopify/app.py +0 -1
- universal_mcp/applications/shortcut/README.md +3 -3
- universal_mcp/applications/slack/README.md +23 -0
- universal_mcp/applications/slack/app.py +79 -48
- universal_mcp/applications/spotify/README.md +3 -3
- universal_mcp/applications/supabase/README.md +3 -3
- universal_mcp/applications/tavily/README.md +4 -4
- universal_mcp/applications/tavily/app.py +4 -4
- universal_mcp/applications/twilio/README.md +15 -0
- universal_mcp/applications/twitter/README.md +92 -89
- universal_mcp/applications/twitter/api_segments/compliance_api.py +13 -15
- universal_mcp/applications/twitter/api_segments/dm_conversations_api.py +20 -20
- universal_mcp/applications/twitter/api_segments/dm_events_api.py +12 -12
- universal_mcp/applications/twitter/api_segments/likes_api.py +12 -12
- universal_mcp/applications/twitter/api_segments/lists_api.py +37 -39
- universal_mcp/applications/twitter/api_segments/spaces_api.py +24 -24
- universal_mcp/applications/twitter/api_segments/trends_api.py +4 -4
- universal_mcp/applications/twitter/api_segments/tweets_api.py +105 -105
- universal_mcp/applications/twitter/api_segments/usage_api.py +4 -4
- universal_mcp/applications/twitter/api_segments/users_api.py +136 -136
- universal_mcp/applications/twitter/app.py +15 -11
- universal_mcp/applications/whatsapp/README.md +12 -12
- universal_mcp/applications/whatsapp/app.py +66 -67
- universal_mcp/applications/whatsapp/audio.py +39 -35
- universal_mcp/applications/whatsapp/whatsapp.py +176 -154
- universal_mcp/applications/whatsapp_business/README.md +23 -23
- universal_mcp/applications/whatsapp_business/app.py +92 -92
- universal_mcp/applications/yahoo_finance/README.md +17 -0
- universal_mcp/applications/yahoo_finance/__init__.py +1 -0
- universal_mcp/applications/yahoo_finance/app.py +300 -0
- universal_mcp/applications/youtube/README.md +46 -46
- universal_mcp/applications/youtube/app.py +208 -195
- universal_mcp/applications/zenquotes/README.md +1 -1
- universal_mcp/applications/zenquotes/__init__.py +2 -0
- universal_mcp/applications/zenquotes/app.py +5 -5
- {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.33.dist-info}/METADATA +5 -90
- {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.33.dist-info}/RECORD +137 -128
- universal_mcp/applications/replicate/README.md +0 -18
- universal_mcp/applications/replicate/__init__.py +0 -1
- universal_mcp/applications/replicate/app.py +0 -493
- universal_mcp/applications/unipile/README.md +0 -28
- universal_mcp/applications/unipile/__init__.py +0 -1
- universal_mcp/applications/unipile/app.py +0 -827
- {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.33.dist-info}/WHEEL +0 -0
- {universal_mcp_applications-0.1.17.dist-info → universal_mcp_applications-0.1.33.dist-info}/licenses/LICENSE +0 -0
universal_mcp/applications/scraper/app.py

@@ -1,13 +1,13 @@
 import os
 from dotenv import load_dotenv
+
 load_dotenv()
 
-from typing import Any
+from typing import Any, Literal
 
+from loguru import logger
 from universal_mcp.applications.application import APIApplication
-from universal_mcp.
-from universal_mcp.applications.unipile import UnipileApp
-from typing import Any, Optional
+from universal_mcp.integrations import Integration
 
 class ScraperApp(APIApplication):
     """
@@ -15,85 +15,110 @@ class ScraperApp(APIApplication):
     Provides a simplified interface for LinkedIn search operations.
     """
 
-    def __init__(self, **kwargs: Any) -> None:
-        ""
-        Initialize the ScraperApp.
-
-        Args:
-            integration: The integration configuration containing credentials and other settings.
-                It is expected that the integration provides the necessary credentials
-                for LinkedIn API access.
-        """
-        super().__init__(name="scraper", **kwargs)
+    def __init__(self, integration: Integration, **kwargs: Any) -> None:
+        super().__init__(name="scraper", integration=integration, **kwargs)
         if self.integration:
             credentials = self.integration.get_credentials()
-
-            self.account_id = credentials.get("ACCOUNT_ID")
-            self.integration = AgentrIntegration(name="unipile", api_key=api_key, base_url="https://staging-agentr-306776579029.asia-southeast1.run.app/")
-            self._unipile_app = UnipileApp(integration=self.integration)
+            self.account_id = credentials.get("account_id")
         else:
+            logger.warning("Integration not found")
             self.account_id = None
-
-
-    def
-        self
-
-
-
-
-
-
-
-
-
+
+    @property
+    def base_url(self) -> str:
+        if not self._base_url:
+            unipile_dsn = os.getenv("UNIPILE_DSN")
+            if not unipile_dsn:
+                logger.error(
+                    "UnipileApp: UNIPILE_DSN environment variable is not set."
+                )
+                raise ValueError(
+                    "UnipileApp: UNIPILE_DSN environment variable is required."
+                )
+            self._base_url = f"https://{unipile_dsn}"
+        return self._base_url
+
+    @base_url.setter
+    def base_url(self, base_url: str) -> None:
+        self._base_url = base_url
+        logger.info(f"UnipileApp: Base URL set to {self._base_url}")
+
+    def _get_headers(self) -> dict[str, str]:
         """
-
-
+        Get the headers for Unipile API requests.
+        Overrides the base class method to use X-Api-Key.
+        """
+        if not self.integration:
+            logger.warning(
+                "UnipileApp: No integration configured, returning empty headers."
+            )
+            return {}
+
+        api_key = os.getenv("UNIPILE_API_KEY")
+        if not api_key:
+            logger.error(
+                "UnipileApp: API key not found in integration credentials for Unipile."
+            )
+            return {  # Or return minimal headers if some calls might not need auth (unlikely for Unipile)
+                "Content-Type": "application/json",
+                "Cache-Control": "no-cache",
+            }
+
+        logger.debug("UnipileApp: Using X-Api-Key for authentication.")
+        return {
+            "x-api-key": api_key,
+            "Content-Type": "application/json",
+            "Cache-Control": "no-cache",  # Often good practice for APIs
+        }
+
+    def _get_search_parameter_id(self, param_type: str, keywords: str) -> str:
+        """
+        Retrieves the ID for a given LinkedIn search parameter by its name.
+
         Args:
-
-
-
-            limit: Number of items to return (up to 50 for Classic search).
-            keywords: Keywords to search for.
-            sort_by: How to sort the results, e.g., "relevance" or "date".
-            date_posted: Filter posts by when they were posted.
-            content_type: Filter by the type of content in the post. Example: "videos", "images", "live_videos", "collaborative_articles", "documents"
-
+            param_type: The type of parameter to search for (e.g., "LOCATION", "COMPANY").
+            keywords: The name of the parameter to find (e.g., "United States").
+
         Returns:
-
-
+            The corresponding ID for the search parameter.
+
         Raises:
+            ValueError: If no exact match for the keywords is found.
             httpx.HTTPError: If the API request fails.
-
-        Tags:
-            linkedin, search, posts, api, scrapper, important
         """
-
-
-        account_id
-
-
-
-
-
-
-
+        url = f"{self.base_url}/api/v1/linkedin/search/parameters"
+        params = {
+            "account_id": self.account_id,
+            "keywords": keywords,
+            "type": param_type,
+        }
+
+        response = self._get(url, params=params)
+        results = self._handle_response(response)
+
+        items = results.get("items", [])
+        if items:
+            # Return the ID of the first result, assuming it's the most relevant
+            return items[0]["id"]
+
+        raise ValueError(f'Could not find a matching ID for {param_type}: "{keywords}"')
+
 
     def linkedin_list_profile_posts(
         self,
-        identifier: str,
-        cursor:
-        limit:
+        identifier: str,  # User or Company provider internal ID
+        cursor: str | None = None,
+        limit: int | None = None,  # 1-100 (spec says max 250)
+        is_company: bool | None = None,
     ) -> dict[str, Any]:
         """
-        Fetches a paginated list of
+        Fetches a paginated list of posts from a specific user or company profile using its provider ID. The `is_company` flag must specify the entity type. Unlike `linkedin_search_posts`, this function directly retrieves content from a known profile's feed instead of performing a global keyword search.
 
         Args:
-            identifier: The entity's provider internal ID (LinkedIn ID).
-            cursor: Pagination cursor
-            limit: Number of items to return (1-100, though spec allows up to 250).
+            identifier: The entity's provider internal ID (LinkedIn ID).
+            cursor: Pagination cursor.
+            limit: Number of items to return (1-100, as per Unipile example, though spec allows up to 250).
+            is_company: Boolean indicating if the identifier is for a company.
 
         Returns:
             A dictionary containing a list of post objects and pagination details.
@@ -104,24 +129,24 @@ class ScraperApp(APIApplication):
         Tags:
             linkedin, post, list, user_posts, company_posts, content, api, important
         """
-
-
-
-
-
-            limit=limit
-
+        url = f"{self.base_url}/api/v1/users/{identifier}/posts"
+        params: dict[str, Any] = {"account_id": self.account_id}
+        if cursor:
+            params["cursor"] = cursor
+        if limit:
+            params["limit"] = limit
+        if is_company is not None:
+            params["is_company"] = is_company
 
-
-
-
-    ) -> dict[str, Any]:
+        response = self._get(url, params=params)
+        return response.json()
+
+    def linkedin_retrieve_profile(self, identifier: str) -> dict[str, Any]:
         """
-
+        Fetches a specific LinkedIn user's profile using their public or internal ID. Unlike `linkedin_search_people`, which discovers multiple users via keywords, this function targets and retrieves detailed data for a single, known individual based on a direct identifier.
 
         Args:
-            identifier: Can be the provider's internal id OR the provider's public id of the requested user.
-                For example, for https://www.linkedin.com/in/manojbajaj95/, the identifier is "manojbajaj95".
+            identifier: Can be the provider's internal id OR the provider's public id of the requested user. For example, for https://www.linkedin.com/in/manojbajaj95/, the identifier is "manojbajaj95".
 
         Returns:
             A dictionary containing the user's profile details.
@@ -132,28 +157,26 @@ class ScraperApp(APIApplication):
         Tags:
             linkedin, user, profile, retrieve, get, api, important
         """
-
-
-
-
-        )
+        url = f"{self.base_url}/api/v1/users/{identifier}"
+        params: dict[str, Any] = {"account_id": self.account_id}
+        response = self._get(url, params=params)
+        return self._handle_response(response)
 
-
     def linkedin_list_post_comments(
         self,
         post_id: str,
-        comment_id:
-        cursor:
-        limit:
+        comment_id: str | None = None,
+        cursor: str | None = None,
+        limit: int | None = None,
    ) -> dict[str, Any]:
        """
-        Fetches comments for a specified LinkedIn post.
+        Fetches a paginated list of comments for a specified LinkedIn post. It can retrieve either top-level comments or threaded replies if an optional `comment_id` is provided. This is a read-only operation, distinct from functions that search for posts or list user-specific content.
 
         Args:
-            post_id: The social ID of the post.
+            post_id: The social ID of the post.
             comment_id: If provided, retrieves replies to this comment ID instead of top-level comments.
             cursor: Pagination cursor.
-            limit: Number of comments to return.
+            limit: Number of comments to return. (OpenAPI spec shows type string, passed as string if provided).
 
         Returns:
             A dictionary containing a list of comment objects and pagination details.
@@ -164,26 +187,241 @@ class ScraperApp(APIApplication):
         Tags:
             linkedin, post, comment, list, content, api, important
         """
+        url = f"{self.base_url}/api/v1/posts/{post_id}/comments"
+        params: dict[str, Any] = {"account_id": self.account_id}
+        if cursor:
+            params["cursor"] = cursor
+        if limit is not None:
+            params["limit"] = str(limit)
+        if comment_id:
+            params["comment_id"] = comment_id
+
+        response = self._get(url, params=params)
+        return response.json()
+
+    def linkedin_search_people(
+        self,
+        cursor: str | None = None,
+        limit: int | None = None,
+        keywords: str | None = None,
+        location: str | None = None,
+        industry: str | None = None,
+        company: str | None = None,
+    ) -> dict[str, Any]:
+        """
+        Performs a paginated search for people on LinkedIn, distinct from searches for companies or jobs. It filters results using keywords, location, industry, and company, internally converting filter names like 'United States' into their required API IDs before making the request.
+
+        Args:
+            cursor: Pagination cursor for the next page of entries.
+            limit: Number of items to return (up to 50 for Classic search).
+            keywords: Keywords to search for.
+            location: The geographical location to filter people by (e.g., "United States").
+            industry: The industry to filter people by (e.g., "Software Development").
+            company: The company to filter people by (e.g., "Google").
+
+        Returns:
+            A dictionary containing search results and pagination details.
+
+        Raises:
+            httpx.HTTPError: If the API request fails.
+        """
+        url = f"{self.base_url}/api/v1/linkedin/search"
+
+        params: dict[str, Any] = {"account_id": self.account_id}
+        if cursor:
+            params["cursor"] = cursor
+        if limit is not None:
+            params["limit"] = limit
+
+        payload: dict[str, Any] = {"api": "classic", "category": "people"}
+
+        if keywords:
+            payload["keywords"] = keywords
+
+        if location:
+            location_id = self._get_search_parameter_id("LOCATION", location)
+            payload["location"] = [location_id]
+
+        if industry:
+            industry_id = self._get_search_parameter_id("INDUSTRY", industry)
+            payload["industry"] = [industry_id]
+
+        if company:
+            company_id = self._get_search_parameter_id("COMPANY", company)
+            payload["company"] = [company_id]
 
-
-
-
-
-
-
-
+        response = self._post(url, params=params, data=payload)
+        return self._handle_response(response)
+
+    def linkedin_search_companies(
+        self,
+        cursor: str | None = None,
+        limit: int | None = None,
+        keywords: str | None = None,
+        location: str | None = None,
+        industry: str | None = None,
+    ) -> dict[str, Any]:
+        """
+        Executes a paginated LinkedIn search for companies, filtering by optional keywords, location, and industry. Unlike `linkedin_search_people` or `linkedin_search_jobs`, this function specifically sets the API search category to 'companies' to ensure that only company profiles are returned in the search results.
+
+        Args:
+            cursor: Pagination cursor for the next page of entries.
+            limit: Number of items to return (up to 50 for Classic search).
+            keywords: Keywords to search for.
+            location: The geographical location to filter companies by (e.g., "United States").
+            industry: The industry to filter companies by (e.g., "Software Development").
+
+        Returns:
+            A dictionary containing search results and pagination details.
+
+        Raises:
+            httpx.HTTPError: If the API request fails.
+        """
+        url = f"{self.base_url}/api/v1/linkedin/search"
+
+        params: dict[str, Any] = {"account_id": self.account_id}
+        if cursor:
+            params["cursor"] = cursor
+        if limit is not None:
+            params["limit"] = limit
+
+        payload: dict[str, Any] = {"api": "classic", "category": "companies"}
+
+        if keywords:
+            payload["keywords"] = keywords
+
+        if location:
+            location_id = self._get_search_parameter_id("LOCATION", location)
+            payload["location"] = [location_id]
+
+        if industry:
+            industry_id = self._get_search_parameter_id("INDUSTRY", industry)
+            payload["industry"] = [industry_id]
+
+        response = self._post(url, params=params, data=payload)
+        return self._handle_response(response)
+
+    def linkedin_search_posts(
+        self,
+        cursor: str | None = None,
+        limit: int | None = None,
+        keywords: str | None = None,
+        date_posted: Literal["past_day", "past_week", "past_month"] | None = None,
+        sort_by: Literal["relevance", "date"] = "relevance",
+    ) -> dict[str, Any]:
+        """
+        Performs a keyword-based search for LinkedIn posts, allowing results to be filtered by date and sorted by relevance. This function specifically queries the 'posts' category, distinguishing it from other search methods in the class that target people, companies, or jobs, and returns relevant content.
+
+        Args:
+            cursor: Pagination cursor for the next page of entries.
+            limit: Number of items to return (up to 50 for Classic search).
+            keywords: Keywords to search for.
+            date_posted: Filter by when the post was posted.
+            sort_by: How to sort the results.
+
+        Returns:
+            A dictionary containing search results and pagination details.
+
+        Raises:
+            httpx.HTTPError: If the API request fails.
+        """
+        url = f"{self.base_url}/api/v1/linkedin/search"
+
+        params: dict[str, Any] = {"account_id": self.account_id}
+        if cursor:
+            params["cursor"] = cursor
+        if limit is not None:
+            params["limit"] = limit
+
+        payload: dict[str, Any] = {"api": "classic", "category": "posts"}
+
+        if keywords:
+            payload["keywords"] = keywords
+        if date_posted:
+            payload["date_posted"] = date_posted
+        if sort_by:
+            payload["sort_by"] = sort_by
+
+        response = self._post(url, params=params, data=payload)
+        return self._handle_response(response)
+
+    def linkedin_search_jobs(
+        self,
+        cursor: str | None = None,
+        limit: int | None = None,
+        keywords: str | None = None,
+        region: str | None = None,
+        sort_by: Literal["relevance", "date"] = "relevance",
+        minimum_salary_value: int = 40,
+        industry: str | None = None,
+    ) -> dict[str, Any]:
+        """
+        Executes a LinkedIn search specifically for job listings using keywords and filters like region, industry, and minimum salary. Unlike other search functions targeting people or companies, this is specialized for job listings and converts friendly filter names (e.g., "United States") into their required API IDs.
+
+        Args:
+            cursor: Pagination cursor for the next page of entries.
+            limit: Number of items to return (up to 50 for Classic search).
+            keywords: Keywords to search for.
+            region: The geographical region to filter jobs by (e.g., "United States").
+            sort_by: How to sort the results (e.g., "relevance" or "date").
+            minimum_salary_value: The minimum salary to filter for.
+            industry: The industry to filter jobs by (e.g., "Software Development").
+
+        Returns:
+            A dictionary containing search results and pagination details.
+
+        Raises:
+            httpx.HTTPError: If the API request fails.
+            ValueError: If the specified location is not found.
+        """
+        url = f"{self.base_url}/api/v1/linkedin/search"
+
+        params: dict[str, Any] = {"account_id": self.account_id}
+        if cursor:
+            params["cursor"] = cursor
+        if limit is not None:
+            params["limit"] = limit
+
+        payload: dict[str, Any] = {
+            "api": "classic",
+            "category": "jobs",
+            "minimum_salary": {
+                "currency": "USD",
+                "value": minimum_salary_value,
+            },
+        }
+
+        if keywords:
+            payload["keywords"] = keywords
+        if sort_by:
+            payload["sort_by"] = sort_by
+
+        # If location is provided, get its ID and add it to the payload
+        if region:
+            location_id = self._get_search_parameter_id("LOCATION", region)
+            payload["region"] = location_id
+
+        if industry:
+            industry_id = self._get_search_parameter_id("INDUSTRY", industry)
+            payload["industry"] = [industry_id]
+
+        response = self._post(url, params=params, data=payload)
+        return self._handle_response(response)
+
 
-
     def list_tools(self):
         """
         Returns a list of available tools/functions in this application.
-
+
         Returns:
             A list of functions that can be used as tools.
         """
         return [
-            self.linkedin_post_search,
             self.linkedin_list_profile_posts,
             self.linkedin_retrieve_profile,
             self.linkedin_list_post_comments,
-
+            self.linkedin_search_people,
+            self.linkedin_search_companies,
+            self.linkedin_search_posts,
+            self.linkedin_search_jobs,
+        ]
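All four new `linkedin_search_*` methods funnel friendly filter names ("United States", "Google") through the new `_get_search_parameter_id` helper before building the search payload. A minimal standalone sketch of that lookup pattern, with a stubbed parameter table standing in for Unipile's authenticated `/linkedin/search/parameters` call (the IDs and response shape here are invented for illustration):

```python
from typing import Any

# Stubbed data shaped like the items returned by the parameters endpoint.
# The IDs below are placeholders, not real LinkedIn parameter IDs.
FAKE_PARAMETERS: dict[str, list[dict[str, Any]]] = {
    "LOCATION": [{"id": "loc-usa-1", "title": "United States"}],
    "INDUSTRY": [{"id": "ind-sw-4", "title": "Software Development"}],
}

def get_search_parameter_id(param_type: str, keywords: str) -> str:
    """Resolve a friendly name (e.g. 'United States') to its API ID."""
    items = [
        item
        for item in FAKE_PARAMETERS.get(param_type, [])
        if keywords.lower() in item["title"].lower()
    ]
    if items:
        # Mirror the app's behaviour: take the first (most relevant) hit.
        return items[0]["id"]
    raise ValueError(f'Could not find a matching ID for {param_type}: "{keywords}"')

print(get_search_parameter_id("LOCATION", "United States"))  # → loc-usa-1
```

Raising `ValueError` on a miss (rather than silently dropping the filter) matches the diff's behaviour and surfaces typos in filter names early.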
universal_mcp/applications/semrush/README.md

@@ -42,3 +42,6 @@ This is automatically generated from OpenAPI schema for the SemrushApp API.
 | `referring_domains` | Get domains pointing to the queried domain, root domain, or URL. |
 | `referring_domains_by_country` | Get referring domain distribution by country based on IP addresses. |
 | `related_keywords` | Get extended list of related keywords, synonyms, and variations relevant to a queried term. |
+| `tlds_distribution` | Get referring domain distribution by top-level domain (e.g., .com, .org, .net). |
+| `url_organic_search_keywords` | Get keywords that bring users to a URL via Google's top 100 organic search results. |
+| `url_paid_search_keywords` | Get keywords that bring users to a URL via Google's paid search results. |
universal_mcp/applications/serpapi/README.md

@@ -9,6 +9,6 @@ This is automatically generated from OpenAPI schema for the SerpapiApp API.
 
 | Tool | Description |
 |------|-------------|
-| `
-| `google_maps_search` |
-| `get_google_maps_reviews` |
+| `web_search` | Performs a general web search via SerpApi, defaulting to the 'google_light' engine. It accepts custom parameters, retrieves organic results, and formats them into a string with titles, links, and snippets. It also handles API authentication and raises `NotAuthorizedError` for credential-related issues. |
+| `google_maps_search` | Executes a Google Maps search via SerpApi using a query, coordinates, or place ID. It enhances the results by adding a `google_maps_url` to each location, distinguishing it from `get_google_maps_reviews` which retrieves reviews for a known place. |
+| `get_google_maps_reviews` | Fetches Google Maps reviews for a specific location via SerpApi using its unique `data_id`. This function uses the `google_maps_reviews` engine, unlike `google_maps_search` which finds locations. Results can be returned in a specified language, defaulting to English. |
@@ -2,12 +2,12 @@ from typing import Any # For type hinting
 
 import httpx
 from loguru import logger
-
-from serpapi import SerpApiClient as SerpApiSearch # Added SerpApiError
 from universal_mcp.applications.application import APIApplication
 from universal_mcp.exceptions import NotAuthorizedError # For auth errors
 from universal_mcp.integrations import Integration # For integration type hint
 
+from serpapi import SerpApiClient as SerpApiSearch # Added SerpApiError
+
 
 class SerpapiApp(APIApplication):
     def __init__(self, integration: Integration | None = None, **kwargs: Any) -> None:
@@ -78,17 +78,17 @@ class SerpapiApp(APIApplication):
     async def web_search(self, params: dict[str, Any] | None = None) -> str:
         """
         Performs a general web search via SerpApi, defaulting to the 'google_light' engine. It accepts custom parameters, retrieves organic results, and formats them into a string with titles, links, and snippets. It also handles API authentication and raises `NotAuthorizedError` for credential-related issues.
-
+
         Args:
             params: Dictionary of engine-specific parameters (e.g., {'q': 'Coffee', 'engine': 'google_light', 'location': 'Austin, TX'}). Defaults to None.
-
+
         Returns:
             A formatted string containing search results with titles, links, and snippets, or an error message if the search fails.
-
+
         Raises:
             NotAuthorizedError: If the API key cannot be retrieved or is invalid/rejected by SerpApi.
             Exception: For other unexpected errors during the search process. (Specific HTTP errors or SerpApiErrors are caught and returned as strings or raise NotAuthorizedError).
-
+
         Tags:
             search, async, web-scraping, api, serpapi, important
         """
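The `web_search` docstring above says organic results are formatted into a string of titles, links, and snippets. As a rough illustration of that shape (a hypothetical helper, not the actual `web_search` internals), the formatting could look like:

```python
def format_organic_results(organic_results: list[dict]) -> str:
    """Render SerpApi-style organic results as title/link/snippet blocks."""
    blocks = []
    for r in organic_results:
        # Each SerpApi organic result typically carries these three keys.
        blocks.append(
            f"{r.get('title', 'N/A')}\n{r.get('link', 'N/A')}\n{r.get('snippet', '')}"
        )
    return "\n\n".join(blocks)

# Sample data, invented for illustration only.
sample = [
    {"title": "Coffee", "link": "https://example.com/coffee", "snippet": "All about coffee."},
]
print(format_organic_results(sample))
```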
@@ -194,19 +194,19 @@ class SerpapiApp(APIApplication):
     ) -> dict[str, Any]:
         """
         Executes a Google Maps search via SerpApi using a query, coordinates, or place ID. It enhances the results by adding a `google_maps_url` to each location, distinguishing it from `get_google_maps_reviews` which retrieves reviews for a known place.
-
+
         Args:
             q (string, optional): The search query for Google Maps (e.g., "Coffee", "Restaurants", "Gas stations").
             ll (string, optional): Latitude and longitude with zoom level in format "@lat,lng,zoom" (e.g., "@40.7455096,-74.0083012,14z"). The zoom attribute ranges from 3z (map completely zoomed out) to 21z (map completely zoomed in). Results are not guaranteed to be within the requested geographic location.
             place_id (string, optional): The unique reference to a place in Google Maps. Place IDs are available for most locations, including businesses, landmarks, parks, and intersections. You can find the place_id using our Google Maps API. place_id can be used without any other optional parameter. place_id and data_cid can't be used together.
-
+
         Returns:
             dict[str, Any]: Formatted Google Maps search results with place names, addresses, ratings, and other details.
-
+
         Raises:
             ValueError: Raised when required parameters are missing.
             HTTPStatusError: Raised when the API request fails with detailed error information including status code and response body.
-
+
         Tags:
             google-maps, search, location, places, important
         """
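The `ll` parameter documented above follows the "@lat,lng,zoom" convention with zoom between 3z and 21z. A small sketch of building such a string (hypothetical helper `make_ll`, not part of the SerpapiApp code) under those documented constraints:

```python
def make_ll(lat: float, lng: float, zoom: int = 14) -> str:
    """Build a SerpApi-style "@lat,lng,zoom" string for google_maps_search."""
    # The docstring states zoom ranges from 3z (zoomed out) to 21z (zoomed in).
    if not 3 <= zoom <= 21:
        raise ValueError("zoom must be between 3 and 21")
    return f"@{lat},{lng},{zoom}z"

print(make_ll(40.7455096, -74.0083012, 14))  # → @40.7455096,-74.0083012,14z
```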
@@ -249,18 +249,18 @@ class SerpapiApp(APIApplication):
     ) -> dict[str, Any]:
         """
         Fetches Google Maps reviews for a specific location via SerpApi using its unique `data_id`. This function uses the `google_maps_reviews` engine, unlike `google_maps_search` which finds locations. Results can be returned in a specified language, defaulting to English.
-
+
         Args:
             data_id (string): The data ID of the place to get reviews for (e.g., "0x89c259af336b3341:0xa4969e07ce3108de").
             hl (string, optional): Language parameter for the search results. Defaults to "en".
-
+
         Returns:
             dict[str, Any]: Google Maps reviews data with ratings, comments, and other review details.
-
+
         Raises:
             ValueError: Raised when required parameters are missing.
             HTTPStatusError: Raised when the API request fails with detailed error information including status code and response body.
-
+
         Tags:
            google-maps, reviews, ratings, places, important
         """
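The `data_id` example above ("0x89c259af336b3341:0xa4969e07ce3108de") suggests a pair of hex values joined by a colon. A quick sanity-check sketch for that shape (hypothetical `looks_like_data_id`, not part of the actual SerpapiApp validation):

```python
import re

# Matches the "0x<hex>:0x<hex>" pattern shown in the get_google_maps_reviews docstring.
DATA_ID_RE = re.compile(r"^0x[0-9a-f]+:0x[0-9a-f]+$", re.IGNORECASE)

def looks_like_data_id(value: str) -> bool:
    """Return True if value matches the data_id format from the docstring example."""
    return bool(DATA_ID_RE.match(value))

print(looks_like_data_id("0x89c259af336b3341:0xa4969e07ce3108de"))
```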
@@ -0,0 +1,19 @@
+# SharePoint Application
+
+This application provides tools for interacting with the Microsoft SharePoint API via Microsoft Graph. It allows you to manage files, folders, and retrieve information about your SharePoint drive.
+
+## Available Tools
+
+- `get_my_profile`: Fetches the profile for the currently authenticated user.
+- `get_drive_info`: Fetches high-level information about the user's SharePoint drive.
+- `search_files`: Searches for files and folders in the user's SharePoint.
+- `get_item_metadata`: Fetches metadata for a specific file or folder.
+- `create_folder`: Creates a new folder.
+- `delete_item`: Deletes a file or folder.
+- `download_file`: Retrieves a download URL for a file.
+- `upload_file`: Uploads a local file.
+- `list_folders`: Lists all folders in a specified directory.
+- `list_files`: Lists all files in a specified directory.
+- `create_folder_and_list`: Creates a folder and then lists the contents of the parent directory.
+- `upload_text_file`: Uploads content from a string to a new text file.
+- `get_document_content`: Retrieves the content of a file.
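The SharePoint README above says these tools go through Microsoft Graph. As a minimal sketch of what a `search_files`-style call would target, here is the standard Graph v1.0 drive-search endpoint being assembled (the helper name and base constant are illustrative, not taken from the application's code):

```python
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def drive_search_url(query: str) -> str:
    """Build the Graph endpoint for searching the signed-in user's drive.

    Graph exposes drive search as GET /me/drive/root/search(q='{text}').
    """
    return f"{GRAPH_BASE}/me/drive/root/search(q='{quote(query)}')"

print(drive_search_url("quarterly report"))
```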
|