universal-mcp-applications 0.1.27rc2__py3-none-any.whl → 0.1.28__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of universal-mcp-applications might be problematic.

@@ -62,9 +62,9 @@ class GoogleGeminiApp(APIApplication):
     async def generate_image(
         self,
         prompt: Annotated[str, "The prompt to generate image from"],
-        images: Annotated[list[str], "The reference images path"] | None = None,
+        images: Annotated[list[str], "The reference image URLs"] | None = None,
         model: str = "gemini-2.5-flash-image-preview",
-    ) -> list:
+    ) -> dict:
         """
         Generates an image based on a text prompt and an optional reference image using the Google Gemini model.
         This tool is ideal for creating visual content or modifying existing images based on natural language descriptions.
@@ -72,7 +72,7 @@ class GoogleGeminiApp(APIApplication):
 
         Args:
             prompt (str): The descriptive text prompt to guide the image generation. For example: "A futuristic city at sunset with flying cars."
-            images (list[str], optional): An optional list of URL to a reference image. These images will be used as a basis for the generation.
+            images (list[str], optional): An optional list of URLs to reference images. These images will be used as a basis for the generation.
             model (str, optional): The Gemini model to use for image generation. Defaults to "gemini-2.5-flash-image-preview".
 
         Returns:
@@ -1,10 +1,10 @@
-# OneDriveApp MCP Server
+# OnedriveApp MCP Server
 
-An MCP Server for the OneDriveApp API.
+An MCP Server for the OnedriveApp API.
 
 ## 🛠️ Tool List
 
-This is automatically generated from OpenAPI schema for the OneDriveApp API.
+This is automatically generated from OpenAPI schema for the OnedriveApp API.
 
 
 | Tool | Description |
@@ -21,4 +21,4 @@ This is automatically generated from OpenAPI schema for the OneDriveApp API.
 | `list_files` | Retrieves a list of files within a specified OneDrive folder, defaulting to the root. Unlike `_list_drive_items` which fetches all items, this function filters the results to exclusively return items identified as files, excluding any subdirectories. |
 | `create_folder_and_list` | Performs a composite action: creates a new folder, then lists all items (files and folders) within that parent directory. This confirms creation by returning the parent's updated contents, distinct from `create_folder` which only returns the new folder's metadata. |
 | `upload_text_file` | Creates and uploads a new file to OneDrive directly from a string of text content. Unlike `upload_file`, which requires a local file path, this function is specifically for creating a text file from in-memory string data, with a customizable name and destination folder. |
-| `get_document_content` | Retrieves the content of a file specified by its ID. It automatically detects if the file is text or binary; text content is returned as a string, while binary content is returned base64-encoded. This differs from `download_file`, which only provides a URL. |
+| `get_document_content` | Retrieves the content of a specific file by its item ID and returns it directly as base64-encoded data. This function is distinct from `download_file`, which only provides a temporary URL for the content, and from `get_item_metadata`, which returns file attributes without the content itself. The function fetches the content by following the file's pre-authenticated download URL. |
@@ -2,6 +2,7 @@ import base64
 import os
 from typing import Any
 
+from loguru import logger
 from universal_mcp.applications.application import APIApplication
 from universal_mcp.integrations import Integration
 
@@ -271,13 +272,17 @@ class OnedriveApp(APIApplication):
 
     def get_document_content(self, item_id: str) -> dict[str, Any]:
         """
-        Retrieves the content of a file specified by its ID. It automatically detects if the file is text or binary; text content is returned as a string, while binary content is returned base64-encoded. This differs from `download_file`, which only provides a URL.
+        Retrieves the content of a specific file by its item ID and returns it directly as base64-encoded data. This function is distinct from `download_file`, which only provides a temporary URL for the content, and from `get_item_metadata`, which returns file attributes without the content itself. The function fetches the content by following the file's pre-authenticated download URL.
 
         Args:
             item_id (str): The ID of the file.
 
         Returns:
-            A dictionary containing the file content. For text files, content is string. For binary, it's base64 encoded.
+            dict[str, Any]: A dictionary containing the file details:
+                - 'type' (str): The general type of the file (e.g., "image", "audio", "video", "file").
+                - 'data' (str): The base64 encoded content of the file.
+                - 'mime_type' (str): The MIME type of the file.
+                - 'file_name' (str): The name of the file.
 
         Tags:
             get, content, read, file, important
@@ -290,34 +295,29 @@ class OnedriveApp(APIApplication):
         if not file_metadata:
             raise ValueError(f"Item with ID '{item_id}' is not a file.")
 
-        file_mime_type = file_metadata.get("mimeType", "")
+        file_mime_type = file_metadata.get("mimeType", "application/octet-stream")
+        file_name = metadata.get("name")
 
-        url = f"{self.base_url}/me/drive/items/{item_id}/content"
-        response = self._get(url)
-
-        if response.status_code >= 400:
-            # Try to handle as JSON error response from Graph API
-            return self._handle_response(response)
+        download_url = metadata.get("@microsoft.graph.downloadUrl")
+        if not download_url:
+            logger.error(f"Could not find @microsoft.graph.downloadUrl in metadata for item {item_id}")
+            raise ValueError("Could not retrieve download URL for the item.")
 
-        content = response.content
+        response = self._get(download_url)
 
-        is_text = file_mime_type.startswith("text/") or any(t in file_mime_type for t in ["json", "xml", "csv", "javascript", "html"])
+        response.raise_for_status()
 
-        content_dict = {}
-        if is_text:
-            try:
-                content_dict["content"] = content.decode("utf-8")
-            except UnicodeDecodeError:
-                is_text = False
+        content = response.content
 
-        if not is_text:
-            content_dict["content_base64"] = base64.b64encode(content).decode("ascii")
+        attachment_type = file_mime_type.split("/")[0] if "/" in file_mime_type else "file"
+        if attachment_type not in ["image", "audio", "video", "text"]:
+            attachment_type = "file"
 
         return {
-            "name": metadata.get("name"),
-            "content_type": "text" if is_text else "binary",
-            **content_dict,
-            "size": len(content),
+            "type": attachment_type,
+            "data": content,
+            "mime_type": file_mime_type,
+            "file_name": file_name,
        }
 
     def list_tools(self):
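The rewritten method buckets the file's MIME type into a coarse `type` field before returning. That bucketing can be isolated as a small pure function — a sketch mirroring the logic in the hunk above, not the packaged code itself:

```python
def attachment_type(mime_type: str) -> str:
    """Map a MIME type to the coarse 'type' field used in the new return value."""
    general = mime_type.split("/")[0] if "/" in mime_type else "file"
    # Anything outside these media families is reported as a generic "file",
    # matching the allow-list in the diff.
    return general if general in ("image", "audio", "video", "text") else "file"
```

For example, `"image/png"` maps to `"image"` while `"application/pdf"` falls through to `"file"`.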
@@ -2,7 +2,7 @@ from dotenv import load_dotenv
 
 load_dotenv()
 
-from typing import Any
+from typing import Any, Literal
 
 from loguru import logger
 from universal_mcp.applications.application import APIApplication
@@ -18,14 +18,6 @@ class ScraperApp(APIApplication):
     """
 
     def __init__(self, integration: Integration, **kwargs: Any) -> None:
-        """
-        Initialize the ScraperApp.
-
-        Args:
-            integration: The integration configuration containing credentials and other settings.
-                It is expected that the integration provides the necessary credentials
-                for LinkedIn API access.
-        """
         super().__init__(name="scraper", integration=integration, **kwargs)
         if self.integration:
             credentials = self.integration.get_credentials()
@@ -36,50 +28,46 @@ class ScraperApp(APIApplication):
         self.account_id = None
         self._unipile_app = None
 
-    def linkedin_post_search(
+    def linkedin_search(
         self,
-        category: str = "posts",
-        api: str = "classic",
+        category: Literal["people", "companies", "posts", "jobs"],
         cursor: str | None = None,
         limit: int | None = None,
         keywords: str | None = None,
-        sort_by: str | None = None,
         date_posted: str | None = None,
-        content_type: str | None = None,
+        sort_by: str | None = None,
+        minimum_salary_value: int | None = None,
     ) -> dict[str, Any]:
         """
-        Performs a general LinkedIn search for posts using keywords and filters like date and content type. It supports pagination and can utilize either the 'classic' or 'sales_navigator' API, searching broadly across the platform rather than fetching posts from a specific user's profile.
-
+        Performs a general LinkedIn search for posts, people, companies, or jobs using keywords and various filters. This function provides broad, keyword-based discovery across the platform, distinct from other methods that retrieve content from a specific user or company profile. It supports pagination and filtering criteria.
+
         Args:
-            category: Type of search to perform (defaults to "posts").
-            api: Which LinkedIn API to use - "classic" or "sales_navigator".
+            category: Type of search to perform - "people", "companies", "posts", or "jobs".
             cursor: Pagination cursor for the next page of entries.
             limit: Number of items to return (up to 50 for Classic search).
             keywords: Keywords to search for.
-            sort_by: How to sort the results, e.g., "relevance" or "date".
-            date_posted: Filter posts by when they were posted.
-            content_type: Filter by the type of content in the post. Example: "videos", "images", "live_videos", "collaborative_articles", "documents"
-
+            date_posted: Filter by when the post was posted (posts only).
+            sort_by: How to sort the results (for posts and jobs).
+            minimum_salary_value: The minimum salary to filter for (jobs only).
+
         Returns:
             A dictionary containing search results and pagination details.
-
+
         Raises:
             httpx.HTTPError: If the API request fails.
-
+
         Tags:
-            linkedin, search, posts, api, scrapper, important
+            linkedin, search, posts, people, companies, api, scrapper, important
         """
-
         return self._unipile_app.search(
             account_id=self.account_id,
             category=category,
-            api=api,
             cursor=cursor,
             limit=limit,
             keywords=keywords,
-            sort_by=sort_by,
             date_posted=date_posted,
-            content_type=content_type,
+            sort_by=sort_by,
+            minimum_salary_value=minimum_salary_value,
         )
 
     def linkedin_list_profile_posts(
@@ -89,19 +77,19 @@ class ScraperApp(APIApplication):
         limit: int | None = None,
     ) -> dict[str, Any]:
         """
-        Fetches a paginated list of all LinkedIn posts from a specific user or company profile using their unique identifier. This function retrieves content directly from a profile, unlike `linkedin_post_search` which finds posts across LinkedIn based on keywords and other filters.
-
+        Fetches a paginated list of all posts from a specific user or company profile using their unique identifier. This function retrieves content directly from a single profile, unlike the broader, keyword-based `linkedin_search` which searches across the entire LinkedIn platform.
+
         Args:
-            identifier: The entity's provider internal ID (LinkedIn ID).starts with ACo for users, while for companies it's a series of numbers.
+            identifier: The entity's provider internal ID (LinkedIn ID).starts with ACo for users, while for companies it's a series of numbers. You can get it in the results of linkedin_search.
             cursor: Pagination cursor for the next page of entries.
             limit: Number of items to return (1-100, though spec allows up to 250).
-
+
         Returns:
             A dictionary containing a list of post objects and pagination details.
-
+
         Raises:
             httpx.HTTPError: If the API request fails.
-
+
         Tags:
             linkedin, post, list, user_posts, company_posts, content, api, important
         """
@@ -118,18 +106,17 @@ class ScraperApp(APIApplication):
         identifier: str,
     ) -> dict[str, Any]:
         """
-        Retrieves a specific LinkedIn user's profile by their unique identifier, which can be an internal provider ID or a public username. This function simplifies data access by delegating the actual profile retrieval request to the integrated Unipile application, distinct from functions that list posts or comments.
-
+        Retrieves a specific LinkedIn user's profile using their unique identifier (e.g., public username). Unlike other methods that list posts or comments, this function focuses on fetching the user's core profile data by delegating the request to the integrated Unipile application.
+
         Args:
-            identifier: Can be the provider's internal id OR the provider's public id of the requested user.
-                For example, for https://www.linkedin.com/in/manojbajaj95/, the identifier is "manojbajaj95".
-
+            identifier: Use the id from linkedin_search results. It starts with ACo for users and is a series of numbers for companies.
+
         Returns:
             A dictionary containing the user's profile details.
-
+
         Raises:
             httpx.HTTPError: If the API request fails.
-
+
         Tags:
             linkedin, user, profile, retrieve, get, api, important
         """
@@ -147,20 +134,20 @@ class ScraperApp(APIApplication):
         limit: int | None = None,
     ) -> dict[str, Any]:
         """
-        Fetches comments for a specified LinkedIn post. If a `comment_id` is provided, it retrieves replies to that comment instead of top-level comments. This function supports pagination and specifically targets comments, unlike others in the class that search for or list entire posts.
-
+        Fetches a paginated list of comments for a specified LinkedIn post. Providing a `comment_id` retrieves replies to that specific comment instead of top-level ones. This function exclusively targets post interactions, differentiating it from others that list posts or retrieve entire profiles.
+
         Args:
-            post_id: The social ID of the post. Example rn:li:activity:7342082869034393600
+            post_id: The social ID of the post. Example urn:li:ugcPost:7386500271624896512
             comment_id: If provided, retrieves replies to this comment ID instead of top-level comments.
             cursor: Pagination cursor.
             limit: Number of comments to return.
-
+
         Returns:
             A dictionary containing a list of comment objects and pagination details.
-
+
         Raises:
             httpx.HTTPError: If the API request fails.
-
+
         Tags:
             linkedin, post, comment, list, content, api, important
         """
@@ -173,223 +160,6 @@ class ScraperApp(APIApplication):
             limit=limit,
         )
 
-    def linkedin_people_search(
-        self,
-        cursor: str | None = None,
-        limit: int | None = None,
-        keywords: str | None = None,
-        last_viewed_at: int | None = None,
-        saved_search_id: str | None = None,
-        recent_search_id: str | None = None,
-        location: dict[str, Any] | None = None,
-        location_by_postal_code: dict[str, Any] | None = None,
-        industry: dict[str, Any] | None = None,
-        first_name: str | None = None,
-        last_name: str | None = None,
-        tenure: list[dict[str, Any]] | None = None,
-        groups: list[str] | None = None,
-        school: dict[str, Any] | None = None,
-        profile_language: list[str] | None = None,
-        company: dict[str, Any] | None = None,
-        company_headcount: list[dict[str, Any]] | None = None,
-        company_type: list[str] | None = None,
-        company_location: dict[str, Any] | None = None,
-        tenure_at_company: list[dict[str, Any]] | None = None,
-        past_company: dict[str, Any] | None = None,
-        function: dict[str, Any] | None = None,
-        role: dict[str, Any] | None = None,
-        tenure_at_role: list[dict[str, Any]] | None = None,
-        seniority: dict[str, Any] | None = None,
-        past_role: dict[str, Any] | None = None,
-        following_your_company: bool | None = None,
-        viewed_your_profile_recently: bool | None = None,
-        network_distance: list[str] | None = None,
-        connections_of: list[str] | None = None,
-        past_colleague: bool | None = None,
-        shared_experiences: bool | None = None,
-        changed_jobs: bool | None = None,
-        posted_on_linkedin: bool | None = None,
-        mentionned_in_news: bool | None = None,
-        persona: list[str] | None = None,
-        account_lists: dict[str, Any] | None = None,
-        lead_lists: dict[str, Any] | None = None,
-        viewed_profile_recently: bool | None = None,
-        messaged_recently: bool | None = None,
-        include_saved_leads: bool | None = None,
-        include_saved_accounts: bool | None = None,
-    ) -> dict[str, Any]:
-        """
-        Performs a comprehensive LinkedIn Sales Navigator people search with advanced targeting options.
-        This function provides access to LinkedIn's Sales Navigator search capabilities for finding people
-        with precise filters including experience, company details, education, and relationship criteria.
-
-        Args:
-            cursor: Pagination cursor for the next page of entries.
-            limit: Number of items to return.
-            keywords: LinkedIn native filter: KEYWORDS.
-            last_viewed_at: Unix timestamp for saved search filtering.
-            saved_search_id: ID of saved search (overrides other parameters).
-            recent_search_id: ID of recent search (overrides other parameters).
-            location: LinkedIn native filter: GEOGRAPHY. Example: {"include": ["San Francisco Bay Area", "New York City Area"]}
-            location_by_postal_code: Location filter by postal code. Example: {"postal_code": "94105", "radius": "25"}
-            industry: LinkedIn native filter: INDUSTRY. Example: {"include": ["Information Technology and Services", "Financial Services"]}
-            first_name: LinkedIn native filter: FIRST NAME. Example: "John"
-            last_name: LinkedIn native filter: LAST NAME. Example: "Smith"
-            tenure: LinkedIn native filter: YEARS OF EXPERIENCE. Example: [{"min": 5, "max": 10}]
-            groups: LinkedIn native filter: GROUPS. Example: ["group_id_1", "group_id_2"]
-            school: LinkedIn native filter: SCHOOL. Example: {"include": ["Stanford University", "Harvard University"]}
-            profile_language: ISO 639-1 language codes, LinkedIn native filter: PROFILE LANGUAGE. Example: ["en", "es"]
-            company: LinkedIn native filter: CURRENT COMPANY. Example: {"include": ["Google", "Microsoft", "Apple"]}
-            company_headcount: LinkedIn native filter: COMPANY HEADCOUNT. Example: [{"min": 100, "max": 1000}]
-            company_type: LinkedIn native filter: COMPANY TYPE. Example: ["Public Company", "Privately Held"]
-            company_location: LinkedIn native filter: COMPANY HEADQUARTERS LOCATION. Example: {"include": ["San Francisco", "Seattle"]}
-            tenure_at_company: LinkedIn native filter: YEARS IN CURRENT COMPANY. Example: [{"min": 2, "max": 5}]
-            past_company: LinkedIn native filter: PAST COMPANY. Example: {"include": ["Facebook", "Amazon"]}
-            function: LinkedIn native filter: FUNCTION. Example: {"include": ["Engineering", "Sales", "Marketing"]}
-            role: LinkedIn native filter: CURRENT JOB TITLE. Example: {"include": ["Software Engineer", "Product Manager"]}
-            tenure_at_role: LinkedIn native filter: YEARS IN CURRENT POSITION. Example: [{"min": 1, "max": 3}]
-            seniority: LinkedIn native filter: SENIORITY LEVEL. Example: {"include": ["Senior", "Director", "VP"]}
-            past_role: LinkedIn native filter: PAST JOB TITLE. Example: {"include": ["Senior Developer", "Team Lead"]}
-            following_your_company: LinkedIn native filter: FOLLOWING YOUR COMPANY. Example: True
-            viewed_your_profile_recently: LinkedIn native filter: VIEWED YOUR PROFILE RECENTLY. Example: True
-            network_distance: First, second, third+ degree or GROUP, LinkedIn native filter: CONNECTION. Example: ["1st", "2nd"]
-            connections_of: LinkedIn native filter: CONNECTIONS OF. Example: ["person_id_1", "person_id_2"]
-            past_colleague: LinkedIn native filter: PAST COLLEAGUE. Example: True
-            shared_experiences: LinkedIn native filter: SHARED EXPERIENCES. Example: True
-            changed_jobs: LinkedIn native filter: CHANGED JOBS. Example: True
-            posted_on_linkedin: LinkedIn native filter: POSTED ON LINKEDIN. Example: True
-            mentionned_in_news: LinkedIn native filter: MENTIONNED IN NEWS. Example: True
-            persona: LinkedIn native filter: PERSONA. Example: ["persona_id_1", "persona_id_2"]
-            account_lists: LinkedIn native filter: ACCOUNT LISTS. Example: {"include": ["list_id_1"]}
-            lead_lists: LinkedIn native filter: LEAD LISTS. Example: {"include": ["lead_list_id_1"]}
-            viewed_profile_recently: LinkedIn native filter: PEOPLE YOU INTERACTED WITH / VIEWED PROFILE. Example: True
-            messaged_recently: LinkedIn native filter: PEOPLE YOU INTERACTED WITH / MESSAGED. Example: True
-            include_saved_leads: LinkedIn native filter: SAVED LEADS AND ACCOUNTS / ALL MY SAVED LEADS. Example: True
-            include_saved_accounts: LinkedIn native filter: SAVED LEADS AND ACCOUNTS / ALL MY SAVED ACCOUNTS. Example: True
-
-        Returns:
-            A dictionary containing search results and pagination details.
-
-        Raises:
-            httpx.HTTPError: If the API request fails.
-
-        Tags:
-            linkedin, sales_navigator, people, search, advanced, scraper, api, important
-        """
-        return self._unipile_app.people_search(
-            account_id=self.account_id,
-            cursor=cursor,
-            limit=limit,
-            keywords=keywords,
-            last_viewed_at=last_viewed_at,
-            saved_search_id=saved_search_id,
-            recent_search_id=recent_search_id,
-            location=location,
-            location_by_postal_code=location_by_postal_code,
-            industry=industry,
-            first_name=first_name,
-            last_name=last_name,
-            tenure=tenure,
-            groups=groups,
-            school=school,
-            profile_language=profile_language,
-            company=company,
-            company_headcount=company_headcount,
-            company_type=company_type,
-            company_location=company_location,
-            tenure_at_company=tenure_at_company,
-            past_company=past_company,
-            function=function,
-            role=role,
-            tenure_at_role=tenure_at_role,
-            seniority=seniority,
-            past_role=past_role,
-            following_your_company=following_your_company,
-            viewed_your_profile_recently=viewed_your_profile_recently,
-            network_distance=network_distance,
-            connections_of=connections_of,
-            past_colleague=past_colleague,
-            shared_experiences=shared_experiences,
-            changed_jobs=changed_jobs,
-            posted_on_linkedin=posted_on_linkedin,
-            mentionned_in_news=mentionned_in_news,
-            persona=persona,
-            account_lists=account_lists,
-            lead_lists=lead_lists,
-            viewed_profile_recently=viewed_profile_recently,
-            messaged_recently=messaged_recently,
-            include_saved_leads=include_saved_leads,
-            include_saved_accounts=include_saved_accounts,
-        )
-
-    def linkedin_company_search(
-        self,
-        cursor: str | None = None,
-        limit: int | None = None,
-        keywords: str | None = None,
-        last_viewed_at: int | None = None,
-        saved_search_id: str | None = None,
-        recent_search_id: str | None = None,
-        location: dict[str, Any] | None = None,
-        location_by_postal_code: dict[str, Any] | None = None,
-        industry: dict[str, Any] | None = None,
-        company_headcount: list[dict[str, Any]] | None = None,
-        company_type: list[str] | None = None,
-        company_location: dict[str, Any] | None = None,
-        following_your_company: bool | None = None,
-        account_lists: dict[str, Any] | None = None,
-        include_saved_accounts: bool | None = None,
-    ) -> dict[str, Any]:
-        """
-        Performs a comprehensive LinkedIn Sales Navigator company search with advanced targeting options.
-        This function provides access to LinkedIn's Sales Navigator search capabilities for finding companies
-        with precise filters including size, location, industry, and relationship criteria.
-
-        Args:
-            cursor: Pagination cursor for the next page of entries.
-            limit: Number of items to return.
-            keywords: LinkedIn native filter: KEYWORDS. Example: "fintech startup"
-            last_viewed_at: Unix timestamp for saved search filtering.
-            saved_search_id: ID of saved search (overrides other parameters).
-            recent_search_id: ID of recent search (overrides other parameters).
-            location: LinkedIn native filter: GEOGRAPHY. Example: {"include": ["San Francisco Bay Area", "New York City Area"]}
-            location_by_postal_code: Location filter by postal code. Example: {"postal_code": "94105", "radius": "25"}
-            industry: LinkedIn native filter: INDUSTRY. Example: {"include": ["Information Technology and Services", "Financial Services"]}
-            company_headcount: LinkedIn native filter: COMPANY HEADCOUNT. Example: [{"min": 10, "max": 100}]
-            company_type: LinkedIn native filter: COMPANY TYPE. Example: ["Public Company", "Privately Held", "Startup"]
-            company_location: LinkedIn native filter: COMPANY HEADQUARTERS LOCATION. Example: {"include": ["San Francisco", "Seattle", "Austin"]}
-            following_your_company: LinkedIn native filter: FOLLOWING YOUR COMPANY. Example: True
-            account_lists: LinkedIn native filter: ACCOUNT LISTS. Example: {"include": ["account_list_id_1"]}
-            include_saved_accounts: LinkedIn native filter: SAVED LEADS AND ACCOUNTS / ALL MY SAVED ACCOUNTS. Example: True
-
-        Returns:
-            A dictionary containing search results and pagination details.
-
-        Raises:
-            httpx.HTTPError: If the API request fails.
-
-        Tags:
-            linkedin, sales_navigator, companies, search, advanced, scraper, api, important
-        """
-        return self._unipile_app.company_search(
-            account_id=self.account_id,
-            cursor=cursor,
-            limit=limit,
-            keywords=keywords,
-            last_viewed_at=last_viewed_at,
-            saved_search_id=saved_search_id,
-            recent_search_id=recent_search_id,
-            location=location,
-            location_by_postal_code=location_by_postal_code,
-            industry=industry,
-            company_headcount=company_headcount,
-            company_type=company_type,
-            company_location=company_location,
-            following_your_company=following_your_company,
-            account_lists=account_lists,
-            include_saved_accounts=include_saved_accounts,
-        )
-
     def list_tools(self):
         """
         Returns a list of available tools/functions in this application.
@@ -398,10 +168,8 @@ class ScraperApp(APIApplication):
             A list of functions that can be used as tools.
         """
         return [
-            self.linkedin_post_search,
+            self.linkedin_search,
             self.linkedin_list_profile_posts,
             self.linkedin_retrieve_profile,
             self.linkedin_list_post_comments,
-            self.linkedin_people_search,
-            self.linkedin_company_search,
         ]
@@ -1,17 +1,19 @@
-# SharepointApp MCP Server
+# SharePoint Application
 
-An MCP Server for the SharepointApp API.
+This application provides tools for interacting with the Microsoft SharePoint API via Microsoft Graph. It allows you to manage files, folders, and retrieve information about your SharePoint drive.
 
-## 🛠️ Tool List
+## Available Tools
 
-This is automatically generated from OpenAPI schema for the SharepointApp API.
-
-
-| Tool | Description |
-|------|-------------|
-| `list_folders` | Retrieves a list of immediate subfolder names within a specified SharePoint directory. If no path is provided, it defaults to the root drive. This function is distinct from `list_files`, as it exclusively lists directories, not files. |
-| `create_folder_and_list` | Creates a new folder with a given name inside a specified parent directory (or the root). It then returns an updated list of all folder names within that same directory, effectively confirming that the creation operation was successful. |
-| `list_files` | Retrieves metadata for all files in a specified folder. For each file, it returns key details like name, URL, size, and timestamps. This function exclusively lists file properties, distinguishing it from `list_folders` (which lists directories) and `get_document_content` (which retrieves file content). |
-| `upload_text_file` | Uploads string content to create a new file in a specified SharePoint folder. To confirm the operation, it returns an updated list of all files and their metadata from that directory, including the newly created file. |
-| `get_document_content` | Retrieves a file's content from a specified SharePoint path. It returns a dictionary containing the file's name and size, decoding text files as a string and Base64-encoding binary files. Unlike `list_files`, which only fetches metadata, this function provides the actual file content. |
-| `delete_document` | Permanently deletes a specified file from a SharePoint drive using its full path. This is the sole destructive file operation, contrasting with functions that read or create files. It returns `True` on successful deletion and raises an exception on failure, such as if the file is not found. |
+- `get_my_profile`: Fetches the profile for the currently authenticated user.
+- `get_drive_info`: Fetches high-level information about the user's SharePoint drive.
+- `search_files`: Searches for files and folders in the user's SharePoint.
+- `get_item_metadata`: Fetches metadata for a specific file or folder.
+- `create_folder`: Creates a new folder.
+- `delete_item`: Deletes a file or folder.
+- `download_file`: Retrieves a download URL for a file.
+- `upload_file`: Uploads a local file.
+- `list_folders`: Lists all folders in a specified directory.
+- `list_files`: Lists all files in a specified directory.
+- `create_folder_and_list`: Creates a folder and then lists the contents of the parent directory.
+- `upload_text_file`: Uploads content from a string to a new text file.
+- `get_document_content`: Retrieves the content of a file.