@toolsdk.ai/registry 1.0.103 → 1.0.105

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.dev.md ADDED
@@ -0,0 +1,193 @@
1
+ # Awesome MCP Registry Developer Guide
2
+
3
+ This document provides developers with detailed information on how to set up, run, and develop the Awesome MCP Registry project.
4
+
5
+ - [Awesome MCP Registry Developer Guide](#awesome-mcp-registry-developer-guide)
6
+ - [1. 🧰 Prerequisites](#1--prerequisites)
7
+ - [2. 🧰 Tech Stack](#2--tech-stack)
8
+ - [3. 🎯 Project Purpose](#3--project-purpose)
9
+ - [4. 🚀 Quick Start](#4--quick-start)
10
+ - [4.1 Install Dependencies](#41-install-dependencies)
11
+ - [4.2 Build Project](#42-build-project)
12
+ - [4.3 Start Development Server (Without Search Function)](#43-start-development-server-without-search-function)
13
+ - [4.4 Start Development Server (With Search Function)](#44-start-development-server-with-search-function)
14
+ - [5. 🐳 Docker Usage](#5--docker-usage)
15
+ - [5.1 Running with Docker (Without Search Function)](#51-running-with-docker-without-search-function)
16
+ - [5.2 Running with Docker (With Search Function)](#52-running-with-docker-with-search-function)
17
+ - [6. 🛠 Common Issues and Troubleshooting](#6--common-issues-and-troubleshooting)
18
+ - [6.1 MCP Client Test Errors During Build Process](#61-mcp-client-test-errors-during-build-process)
19
+ - [7. 🗃️ Project Structure](#7-️-project-structure)
20
+ - [8. ⚙️ Environment Variables](#8-️-environment-variables)
21
+ - [9. 📝 Contribution Guide](#9--contribution-guide)
22
+
23
+ ## 1. 🧰 Prerequisites
24
+
25
+ Before you begin, ensure your development environment meets the following requirements:
26
+
27
+ - **Node.js** >= 18.x (latest LTS version recommended)
28
+ - **pnpm** >= 8.x (package manager)
29
+ - **Docker** (optional, required only if search functionality is needed)
30
+
31
+ ## 2. 🧰 Tech Stack
32
+
33
+ - **Runtime Environment**: Node.js (ESM modules)
34
+ - **Package Manager**: pnpm
35
+ - **Language**: TypeScript
36
+ - **Web Framework**: Hono.js
37
+ - **Search Service**: MeiliSearch (optional)
38
+ - **Build Tool**: TypeScript Compiler (tsc)
39
+ - **Code Formatting**: Biome
40
+ - **Testing**: Vitest
41
+
42
+ ## 3. 🎯 Project Purpose
43
+
44
+ This project has two main purposes:
45
+
46
+ 1. **MCP Registry** - Collects and indexes various MCP servers, providing search functionality
47
+ 2. **MCP Server** - Can be deployed as a server that remotely invokes the registered MCP servers
48
+
49
+ ## 4. 🚀 Quick Start
50
+
51
+ ### 4.1 Install Dependencies
52
+
53
+ ```bash
54
+ pnpm install
55
+ ```
56
+
57
+ ### 4.2 Build Project
58
+
59
+ ```bash
60
+ make build
61
+ ```
62
+
63
+ This will perform the following operations:
64
+ - Validate all MCP server configurations
65
+ - Install all necessary dependencies
66
+ - Build TypeScript code
67
+
68
+ ### 4.3 Start Development Server (Without Search Function)
69
+
70
+ This is the simplest way to start, suitable for scenarios where only API functionality is needed:
71
+
72
+ 1. Ensure `ENABLE_SEARCH=false` is set in the `.env` file:
73
+
74
+ ```env
75
+ ENABLE_SEARCH=false
76
+ MCP_SERVER_PORT=3003
77
+ ```
78
+
79
+ 2. Start the development server:
80
+
81
+ ```bash
82
+ make dev
83
+ ```
84
+
85
+ 3. Access the following endpoints:
86
+ - API Documentation: http://localhost:3003/swagger
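Once the server is up, a quick sanity check is to hit the root endpoint (which returns a plain-text status line when search is disabled) and the version endpoint. The port assumes the `MCP_SERVER_PORT=3003` default from the `.env` above:

```shell
BASE_URL="${BASE_URL:-http://localhost:3003}"

# Root endpoint: plain-text status message when search is disabled
curl -s "$BASE_URL/" || echo "server not reachable at $BASE_URL"

# Registry version, served from package.json
curl -s "$BASE_URL/api/meta" || echo "server not reachable at $BASE_URL"
```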
87
+
88
+ ### 4.4 Start Development Server (With Search Function)
89
+
90
+ If you need full search functionality:
91
+
92
+ 1. Set up the `.env` file:
93
+
94
+ ```env
95
+ ENABLE_SEARCH=true
96
+ MCP_SERVER_PORT=3003
97
+ MEILI_HTTP_ADDR=http://localhost:7700
98
+ ```
99
+
100
+ 2. Start the MeiliSearch service:
101
+
102
+ ```bash
103
+ make db
104
+ ```
105
+
106
+ 3. Build the project and start the development server:
107
+
108
+ ```bash
109
+ make build
110
+ make dev
111
+ ```
112
+
113
+ 4. Initialize search indexes:
114
+
115
+ Call the following API endpoints:
116
+ - `POST /api/v1/search/manage/init` - Initialize search service
117
+ - `POST /api/v1/search/manage/index` - Index data
118
+
119
+ 5. Access:
120
+ - Search Page: http://localhost:3003
121
+ - API Documentation: http://localhost:3003/swagger
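The two index-management endpoints in step 4 can be invoked with curl once the server and MeiliSearch are both running (the base URL assumes the `.env` defaults above):

```shell
BASE_URL="${BASE_URL:-http://localhost:3003}"

# 1. Initialize the search service (sets up the MeiliSearch index)
curl -s -X POST "$BASE_URL/api/v1/search/manage/init" || echo "init failed: is the server running?"

# 2. Index the package data into MeiliSearch
curl -s -X POST "$BASE_URL/api/v1/search/manage/index" || echo "indexing failed: is the server running?"
```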
122
+
123
+ ## 5. 🐳 Docker Usage
124
+
125
+ ### 5.1 Running with Docker (Without Search Function)
126
+
127
+ ```bash
128
+ # Build image
129
+ make docker-build
130
+
131
+ # Run container (ensure ENABLE_SEARCH=false)
132
+ make docker-run
133
+
134
+ # Visit http://localhost:3003
135
+ ```
136
+
137
+ ### 5.2 Running with Docker (With Search Function)
138
+
139
+ ```bash
140
+ # Set ENABLE_SEARCH=true in .env
141
+ # Start MeiliSearch
142
+ make db
143
+
144
+ # Build and run the main application
145
+ make docker-build
146
+ make docker-run
147
+
148
+ # Visit http://localhost:3003 to use search functionality and API interfaces
149
+ ```
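The Makefile targets wrap ordinary Docker commands; a rough manual equivalent is sketched below. The image name is an assumption for illustration, so check the Makefile for the tag it actually uses:

```shell
# Image name is illustrative; see the Makefile for the real tag
IMAGE="awesome-mcp-registry"

# Roughly what `make docker-build` does
docker build -t "$IMAGE" . || echo "docker build failed (is Docker running?)"

# Roughly what `make docker-run` does: local .env plus the default port mapping
docker run --rm -p 3003:3003 --env-file .env "$IMAGE" || echo "docker run failed"
```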
150
+
151
+ ## 6. 🛠 Common Issues and Troubleshooting
152
+
153
+ ### 6.1 MCP Client Test Errors During Build Process
154
+
155
+ When executing the `make build` command, you may see error messages similar to the following:
156
+
157
+ ```
158
+ Error reading MCP Client for package: claude-prompts... ENOENT: no such file or directory
159
+ ```
160
+
161
+ **This is normal!** These errors occur because:
162
+
163
+ - This project includes thousands of MCP packages
164
+ - The build process attempts to test all packages through the [test-mcp-clients.ts](./scripts/test-mcp-clients.ts) script
165
+ - Due to the large number, the testing process may take several hours
166
+ - Not all packages need to be installed and tested, as most packages are not essential for running the registry
167
+
168
+ **These errors can be ignored as long as the build process continues to execute.** After the build is complete, you can still use the API and search functionality (if search is enabled) normally.
169
+
170
+ ## 7. 🗃️ Project Structure
171
+
172
+ ```
173
+ .
174
+ ├── config/ # Configuration files
175
+ ├── indexes/ # Generated index files
176
+ ├── packages/ # MCP server configuration files (categorized)
177
+ ├── scripts/ # Build and maintenance scripts
178
+ └── src/ # Source code
179
+ ├── api/ # API routes and server entry points
180
+ └── search/ # Search service
181
+ ```
182
+
183
+ ## 8. ⚙️ Environment Variables
184
+
185
+ The project uses the following environment variables, which can be configured in `.env` or `.env.local`:
186
+
187
+ - `MCP_SERVER_PORT`: Server port (default: 3003)
188
+ - `ENABLE_SEARCH`: Whether to enable search service (default: false)
189
+ - `MEILI_HTTP_ADDR`: MeiliSearch service address (default: http://localhost:7700)
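Putting these together, a `.env.local` for local development with search enabled would look like the following sketch (values taken from the defaults listed above):

```env
MCP_SERVER_PORT=3003
ENABLE_SEARCH=true
MEILI_HTTP_ADDR=http://localhost:7700
```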
190
+
191
+ ## 9. 📝 Contribution Guide
192
+
193
+ For detailed information on how to contribute code to the project, add new MCP servers, etc., please refer to the [CONTRIBUTING.md](./CONTRIBUTING.md) file.
package/README.md CHANGED
@@ -2,13 +2,13 @@
2
2
 
3
3
  [![Product Hunt](https://api.producthunt.com/widgets/embed-image/v1/top-post-badge.svg?post_id=997428&theme=light&period=daily)](https://www.producthunt.com/products/toolsdk-ai)
4
4
 
5
- ![How many MCP Servers in Awesome MCP Registry](https://img.shields.io/badge/MCP_Servers-4104-blue)
5
+ ![How many MCP Servers in Awesome MCP Registry](https://img.shields.io/badge/MCP_Servers-4105-blue)
6
6
  ![awesome-mcp-registry License](https://img.shields.io/badge/LICENSE-MIT-ff69b4)
7
7
 
8
8
 
9
9
  Welcome to the Awesome MCP Registry.
10
10
 
11
- An open, high-quality, well-structured and developer-friendly list of 4104+ MCP servers.
11
+ An open, high-quality, well-structured and developer-friendly list of 4105+ MCP servers.
12
12
 
13
13
 
14
14
 
@@ -106,9 +106,9 @@ For more detail please see [the guide](./docs/guide.md).
106
106
 
107
107
  # MCP Servers
108
108
 
109
- ✅: Validated and runnable tools (720)
109
+ ✅: Validated and runnable tools (723)
110
110
 
111
- ❌: Cannot be run by the MCP client (with mock environments variables (3384))
111
+ ❌: Cannot be run by the MCP client with mock environment variables (3382)
112
112
 
113
113
 
114
114
 
@@ -522,7 +522,7 @@ Tools for browsing, scraping, and automating web content in AI-compatible format
522
522
  - [❌ puppeteer-browser-automation](https://github.com/twolven/mcp-server-puppeteer-py): Integrates with Puppeteer to provide browser automation capabilities for web navigation, interaction, and data extraction. (python)
523
523
  - [❌ awesome-cursor](https://github.com/kleneway/awesome-cursor-mpc-server): Built for Cursor, integrates screenshot capture, web page structure analysis, and code review capabilities for automated UI testing, web scraping, and code quality checks. (node)
524
524
  - [❌ screenshotone-mcp-server](https://github.com/mrgoonie/screenshotone-mcp-server): Enables AI to capture and process screenshots of webpages with customizable parameters through the ScreenshotOne API with Cloudflare CDN integration for image storage and retrieval. (node)
525
- - [ tavily-mcp](https://github.com/tavily-ai/tavily-mcp): Integrates with Tavily API to provide real-time web search and content extraction capabilities for research, aggregation, and fact-checking tasks. (node)
525
+ - [ tavily-mcp](https://github.com/tavily-ai/tavily-mcp): Integrates with Tavily API to provide real-time web search and content extraction capabilities for research, aggregation, and fact-checking tasks. (4 tools) (node)
526
526
  - [✅ mcp-jinaai-grounding](https://github.com/spences10/mcp-jinaai-grounding): Integrates JinaAI's content extraction and analysis capabilities for web scraping, documentation parsing, and text analysis tasks. (1 tools) (node)
527
527
  - [❌ website-to-markdown-converter](https://github.com/tolik-unicornrider/mcp_scraper): Converts web content to high-quality Markdown using Mozilla's Readability and TurndownService, enabling clean extraction of meaningful content from URLs or HTML files for analysis and document conversion. (node)
528
528
  - [❌ web-browser-(playwright)](https://github.com/random-robbie/mcp-web-browser): Integrates with Playwright to enable cross-browser web automation for tasks like scraping, testing, and content extraction. (python)
@@ -1593,7 +1593,7 @@ Tools for integrating, transforming, and managing data pipelines.
1593
1593
  - [✅ mcp-google-analytics](https://github.com/surendranb/google-analytics-mcp): A Model Context Protocol (MCP) server for Google Analytics integration. This server provides tools for interacting with Google Analytics, including running reports, querying accounts and properties, and accessing metadata. (4 tools) (python)
1594
1594
  - [❌ odmcp](https://github.com/opendatamcp/opendatamcp): Connect to open data from millions of open government, NGO, and company datasets. (python)
1595
1595
  - [❌ pride-archive-search](https://github.com/pride-archive/mcp_pride_archive_search): Enables searching and retrieving proteomics datasets from the PRIDE Archive database with support for keyword filtering, pagination, and custom sorting options (python)
1596
- - [ aitable-mcp-server](https://github.com/apitable/aitable-mcp-server): AITable.ai Model Context Protocol Server enables AI agents to connect and work with AITable datasheets. (node)
1596
+ - [ @apitable/aitable-mcp-server](https://github.com/apitable/aitable-mcp-server): AITable.ai Model Context Protocol Server enables AI agents to connect and work with AITable datasheets. (6 tools) (node)
1597
1597
  - [❌ graphql-explorer](https://github.com/larshvidsten/mcp_af_graph): Integrates with GraphQL APIs to enable secure data retrieval, query execution, and schema exploration for AI applications (python)
1598
1598
  - [❌ dataverse-(microsoft-powerplatform)](https://github.com/bonanip512/dataversemcpserver): Integrates with Microsoft's PowerPlatform API to enable querying and interaction with Dataverse entities, providing access to metadata, attributes, relationships, and records through authenticated requests without navigating complex API structures. (node)
1599
1599
  - [❌ mcp-google-sheets](https://github.com/xing5/mcp-google-sheets): Integrates with Google Drive and Google Sheets to create, read, update, and manage spreadsheets with support for both OAuth and service account authentication methods. (python)
@@ -1653,6 +1653,7 @@ Enhance your development workflow with tools for coding and environment manageme
1653
1653
  - [✅ @container-inc/mcp](https://github.com/f-inc/containerinc-mcp): Enables seamless deployment of containerized applications directly from code editors through a three-step workflow of GitHub authentication, repository setup, and automated Docker image publishing. (3 tools) (node)
1654
1654
  - [❌ project-orchestrator](https://github.com/sparesparrow/mcp-project-orchestrator): Streamlines software project creation by analyzing user input to select appropriate structures, generate documentation with Mermaid diagrams, and provide tools for setup and management. (python)
1655
1655
  - [❌ devhub-cms-mcp](https://github.com/devhub/devhub-cms-mcp): Manage and utilize content within DevHub CMS (blog posts, hours of operation and other content). (python)
1656
+ - [✅ bika-mcp-server](https://github.com/bika-ai/bika-mcp-server): A Model Context Protocol server that provides read and write access to Bika.ai. This server enables LLMs to list spaces, list nodes, list records, create records and upload attachments in Bika.ai. (6 tools) (node)
1656
1657
  - [❌ heimdall](https://github.com/shinzo-labs/heimdall): Manage your agent tools and MCP server configs with ease. (node)
1657
1658
  - [❌ tasks-organizer](https://github.com/huntsyea/mcp-tasks-organizer): Converts Cursor agent plans into markdown task lists for organizing and structuring project plans into readable, actionable formats. (python)
1658
1659
  - [❌ documentation-markdown-converter](https://github.com/teddylee777/mcpdoc): Provides AI assistants with access to documentation through a configurable server that converts HTML to Markdown and enables auditing of tool calls and returned context. (python)
@@ -3110,7 +3111,7 @@ Find and extract data from various sources quickly and efficiently.
3110
3111
  - [❌ geeknews](https://github.com/the0807/geeknews-mcp-server): Retrieves and extracts tech news, articles, and discussions from GeekNews using BeautifulSoup for more informed conversations about current technology developments. (python)
3111
3112
  - [✅ @toolsdk.ai/tavily-mcp](https://github.com/tavily-ai/tavily-mcp): An MCP server that implements web search, extract, mapping, and crawling through the Tavily API. (4 tools) (node)
3112
3113
  - [❌ goodnews](https://github.com/vectorinstitute/mcp-goodnews): Filters and ranks news articles from NewsAPI based on positive sentiment, delivering uplifting headlines amid potentially negative media cycles. (python)
3113
- - [ tavily-mcp](https://github.com/tavily-ai/tavily-mcp): Integrates with Tavily API to provide real-time web search and content extraction capabilities for research, aggregation, and fact-checking tasks. (node)
3114
+ - [ tavily-mcp](https://github.com/tavily-ai/tavily-mcp): Integrates with Tavily API to provide real-time web search and content extraction capabilities for research, aggregation, and fact-checking tasks. (4 tools) (node)
3114
3115
  - [✅ anilist-mcp](https://github.com/yuna0x0/anilist-mcp): MCP server that interfaces with the AniList API, allowing LLM clients to access and interact with anime, manga, character, staff, and user data from AniList (44 tools) (node)
3115
3116
  - [❌ exa-web-search](https://github.com/mshojaei77/reactmcp): Integrates with Exa API to enable web search capabilities with filtering options for domains, text requirements, and date ranges, returning markdown-formatted results with titles, URLs, publication dates, and content summaries for real-time internet information access. (python)
3116
3117
  - [❌ web-browser-mcp-server](https://github.com/blazickjp/web-browser-mcp-server): Integrates web browsing capabilities for realtime data retrieval, content extraction, and task automation using popular Python libraries. (python)
package/dist/api/index.js CHANGED
@@ -1,15 +1,14 @@
1
1
  import fs from "node:fs/promises";
2
2
  import path from "node:path";
3
- import { fileURLToPath } from "node:url";
4
3
  import { serve } from "@hono/node-server";
5
4
  import { swaggerUI } from "@hono/swagger-ui";
6
5
  import { OpenAPIHono } from "@hono/zod-openapi";
7
6
  import dotenv from "dotenv";
8
7
  import { searchRoutes } from "../search/search-route";
9
8
  import searchService from "../search/search-service";
9
+ import { getDirname } from "../utils";
10
10
  import { packageRoutes } from "./package-route";
11
- const __filename = fileURLToPath(import.meta.url);
12
- const __dirname = path.dirname(__filename);
11
+ const __dirname = getDirname(import.meta.url);
13
12
  dotenv.config({ path: path.resolve(process.cwd(), ".env.local") });
14
13
  dotenv.config({ path: path.resolve(process.cwd(), ".env") });
15
14
  const initializeSearchService = async () => {
@@ -39,10 +38,17 @@ app.get("/", async (c) => {
39
38
  return c.text("MCP Registry API Server is running!");
40
39
  }
41
40
  });
42
- app.get("/api/meta", (c) => {
43
- // eslint-disable-next-line @typescript-eslint/no-require-imports
44
- const packageJson = require("../../package.json");
45
- return c.json({ version: packageJson.version });
41
+ app.get("/api/meta", async (c) => {
42
+ try {
43
+ const packageJson = await import("../../package.json", {
44
+ assert: { type: "json" },
45
+ });
46
+ return c.json({ version: packageJson.default.version });
47
+ }
48
+ catch (error) {
49
+ console.error("Failed to load package.json:", error);
50
+ return c.json({ version: "unknown" });
51
+ }
46
52
  });
47
53
  app.doc("/api/v1/doc", {
48
54
  openapi: "3.0.0",
@@ -1,17 +1,20 @@
1
- /* eslint-disable @typescript-eslint/no-require-imports */
1
+ import path from "node:path";
2
2
  import { createRoute, OpenAPIHono } from "@hono/zod-openapi";
3
3
  import { getPythonDependencies } from "../helper";
4
4
  import { CategoriesResponseSchema, ExecuteToolResponseSchema, FeaturedResponseSchema, PackageDetailResponseSchema, PackagesListResponseSchema, packageNameQuerySchema, ToolExecuteSchema, ToolsResponseSchema, } from "../schema";
5
- import { createResponse, createRouteResponses } from "../utils";
5
+ import { createResponse, createRouteResponses, getDirname } from "../utils";
6
6
  import { packageHandler } from "./package-handler";
7
+ const __dirname = getDirname(import.meta.url);
7
8
  export const packageRoutes = new OpenAPIHono();
8
9
  const featuredRoute = createRoute({
9
10
  method: "get",
10
11
  path: "/config/featured",
11
12
  responses: createRouteResponses(FeaturedResponseSchema),
12
13
  });
13
- packageRoutes.openapi(featuredRoute, (c) => {
14
- const featured = require("../../config/featured.mjs").default;
14
+ packageRoutes.openapi(featuredRoute, async (c) => {
15
+ const featuredPath = path.join(__dirname, "../../config/featured.mjs");
16
+ const featuredModule = await import(`file://${featuredPath}`);
17
+ const featured = featuredModule.default;
15
18
  const response = createResponse(featured);
16
19
  return c.json(response, 200);
17
20
  });
@@ -20,8 +23,10 @@ const categoriesRoute = createRoute({
20
23
  path: "/config/categories",
21
24
  responses: createRouteResponses(CategoriesResponseSchema),
22
25
  });
23
- packageRoutes.openapi(categoriesRoute, (c) => {
24
- const categories = require("../../config/categories.mjs").default;
26
+ packageRoutes.openapi(categoriesRoute, async (c) => {
27
+ const categoriesPath = path.join(__dirname, "../../config/categories.mjs");
28
+ const categoriesModule = await import(`file://${categoriesPath}`);
29
+ const categories = categoriesModule.default;
25
30
  const response = createResponse(categories);
26
31
  return c.json(response, 200);
27
32
  });
@@ -1,9 +1,8 @@
1
1
  import fs from "node:fs";
2
2
  import path from "node:path";
3
- import { fileURLToPath } from "node:url";
4
3
  import { getMcpClient, getPackageConfigByKey, typedAllPackagesList } from "../helper.js";
5
- const __filename = fileURLToPath(import.meta.url);
6
- const __dirname = path.dirname(__filename);
4
+ import { getDirname } from "../utils";
5
+ const __dirname = getDirname(import.meta.url);
7
6
  export class PackageSO {
8
7
  async executeTool(request) {
9
8
  const mcpServerConfig = getPackageConfigByKey(request.packageName);
package/dist/helper.js CHANGED
@@ -1,16 +1,15 @@
1
1
  import assert from "node:assert";
2
2
  import fs from "node:fs";
3
3
  import * as path from "node:path";
4
- import { fileURLToPath } from "node:url";
5
4
  import toml from "@iarna/toml";
6
5
  import { Client } from "@modelcontextprotocol/sdk/client/index.js";
7
6
  import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
8
7
  import axios from "axios";
9
8
  import semver from "semver";
10
9
  import allPackagesList from "../indexes/packages-list.json";
10
+ import { getDirname } from "../src/utils";
11
11
  import { MCPServerPackageConfigSchema, PackagesListSchema } from "./schema";
12
- const __filename = fileURLToPath(import.meta.url);
13
- const __dirname = path.dirname(__filename);
12
+ const __dirname = getDirname(import.meta.url);
14
13
  export const typedAllPackagesList = PackagesListSchema.parse(allPackagesList);
15
14
  export function getPackageConfigByKey(packageKey) {
16
15
  const value = typedAllPackagesList[packageKey];
@@ -4,10 +4,9 @@
4
4
  */
5
5
  import fs from "node:fs/promises";
6
6
  import path from "node:path";
7
- import { fileURLToPath } from "node:url";
8
7
  import { MeiliSearch } from "meilisearch";
9
- const __filename = fileURLToPath(import.meta.url);
10
- const __dirname = path.dirname(__filename);
8
+ import { getDirname } from "../utils";
9
+ const __dirname = getDirname(import.meta.url);
11
10
  class SearchService {
12
11
  constructor(indexName = "mcp-packages") {
13
12
  // MeiliSearch configuration
package/dist/utils.d.ts CHANGED
@@ -1,5 +1,6 @@
1
1
  import type { z } from "@hono/zod-openapi";
2
2
  import type { Response } from "./types";
3
+ export declare function getDirname(metaUrl: string): string;
3
4
  export declare const createResponse: <T>(data: T, options?: {
4
5
  success?: boolean;
5
6
  code?: number;
package/dist/utils.js CHANGED
@@ -1,4 +1,9 @@
1
+ import { dirname } from "node:path";
2
+ import { fileURLToPath } from "node:url";
1
3
  import { ErrorResponseSchema } from "./schema";
4
+ export function getDirname(metaUrl) {
5
+ return dirname(fileURLToPath(metaUrl));
6
+ }
2
7
  export const createResponse = (data, options) => {
3
8
  const { success = true, code = 200, message = "Success" } = options || {};
4
9
  return {
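The recurring change across these dist files replaces per-file `fileURLToPath` boilerplate with the shared `getDirname` helper added above. Its behavior can be checked from the command line (assuming Node.js is installed; the `file://` URL below is illustrative):

```shell
node -e "
const { dirname } = require('node:path');
const { fileURLToPath } = require('node:url');
// same logic as the exported getDirname(metaUrl) helper
const getDirname = (metaUrl) => dirname(fileURLToPath(metaUrl));
console.log(getDirname('file:///srv/app/dist/utils.js'));
"
# prints: /srv/app/dist
```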
@@ -1504,7 +1504,7 @@
1504
1504
  "mcp-google-analytics",
1505
1505
  "odmcp",
1506
1506
  "pride-archive-search",
1507
- "aitable-mcp-server",
1507
+ "@apitable/aitable-mcp-server",
1508
1508
  "graphql-explorer",
1509
1509
  "dataverse-(microsoft-powerplatform)",
1510
1510
  "mcp-google-sheets",
@@ -1566,6 +1566,7 @@
1566
1566
  "@container-inc/mcp",
1567
1567
  "project-orchestrator",
1568
1568
  "devhub-cms-mcp",
1569
+ "bika-mcp-server",
1569
1570
  "heimdall",
1570
1571
  "tasks-organizer",
1571
1572
  "documentation-markdown-converter",
@@ -143,8 +143,25 @@
143
143
  "tavily-mcp": {
144
144
  "category": "search-data-extraction",
145
145
  "path": "search-data-extraction/tavily-mcp.json",
146
- "validated": false,
147
- "tools": {}
146
+ "validated": true,
147
+ "tools": {
148
+ "tavily-search": {
149
+ "name": "tavily-search",
150
+ "description": "A powerful web search tool that provides comprehensive, real-time results using Tavily's AI search engine. Returns relevant web content with customizable parameters for result count, content type, and domain filtering. Ideal for gathering current information, news, and detailed web content analysis."
151
+ },
152
+ "tavily-extract": {
153
+ "name": "tavily-extract",
154
+ "description": "A powerful web content extraction tool that retrieves and processes raw content from specified URLs, ideal for data collection, content analysis, and research tasks."
155
+ },
156
+ "tavily-crawl": {
157
+ "name": "tavily-crawl",
158
+ "description": "A powerful web crawler that initiates a structured web crawl starting from a specified base URL. The crawler expands from that point like a tree, following internal links across pages. You can control how deep and wide it goes, and guide it to focus on specific sections of the site."
159
+ },
160
+ "tavily-map": {
161
+ "name": "tavily-map",
162
+ "description": "A powerful web mapping tool that creates a structured map of website URLs, allowing you to discover and analyze site structure, content organization, and navigation paths. Perfect for site audits, content discovery, and understanding website architecture."
163
+ }
164
+ }
148
165
  },
149
166
  "@toolsdk.ai/tavily-mcp": {
150
167
  "category": "search-data-extraction",
@@ -33026,12 +33043,6 @@
33026
33043
  "validated": false,
33027
33044
  "tools": {}
33028
33045
  },
33029
- "aitable-mcp-server": {
33030
- "category": "data-platforms",
33031
- "path": "data-platforms/aitable-mcp-server.json",
33032
- "validated": false,
33033
- "tools": {}
33034
- },
33035
33046
  "@lark-base-open/mcp-node-server": {
33036
33047
  "category": "data-platforms",
33037
33048
  "path": "data-platforms/lark-base-open-mcp-node-server.json",
@@ -47879,5 +47890,67 @@
47879
47890
  "path": "other-tools-and-integrations/mcp-remote.json",
47880
47891
  "validated": false,
47881
47892
  "tools": {}
47893
+ },
47894
+ "bika-mcp-server": {
47895
+ "category": "developer-tools",
47896
+ "path": "developer-tools/bika-mcp-server.json",
47897
+ "validated": true,
47898
+ "tools": {
47899
+ "list_spaces": {
47900
+ "name": "list_spaces",
47901
+ "description": "Fetches all workspaces that the currently authenticated user has permission to access."
47902
+ },
47903
+ "list_nodes": {
47904
+ "name": "list_nodes",
47905
+ "description": "Retrieves all nodes contained within the specified workspace. Nodes in Bika can be of several types: databases (also known as sheets, datasheets, or spreadsheets), automations, documents, and folders."
47906
+ },
47907
+ "get_records": {
47908
+ "name": "get_records",
47909
+ "description": "Read the records from a specified database with support for pagination, field filtering, and sorting options."
47910
+ },
47911
+ "get_fields_schema": {
47912
+ "name": "get_fields_schema",
47913
+ "description": "Returns the JSON schema of all fields within the specified database. This schema will be sent to the LLM to help the AI understand the expected structure of the data."
47914
+ },
47915
+ "create_record": {
47916
+ "name": "create_record",
47917
+ "description": "Create a new record in the database. Extract key information from user-provided text based on a predefined Fields JSON Schema and create a new record in the database as a JSON object."
47918
+ },
47919
+ "upload_attachment_via_url": {
47920
+ "name": "upload_attachment_via_url",
47921
+ "description": "Upload an attachment to the Bika server using its web URL. Returns storage information that can be passed to create_record or update_record tools to associate with specific records."
47922
+ }
47923
+ }
47924
+ },
47925
+ "@apitable/aitable-mcp-server": {
47926
+ "category": "data-platforms",
47927
+ "path": "data-platforms/aitable-mcp-server.json",
47928
+ "validated": true,
47929
+ "tools": {
47930
+ "list_spaces": {
47931
+ "name": "list_spaces",
47932
+ "description": "Fetches all workspaces that the currently authenticated user has permission to access."
47933
+ },
47934
+ "search_nodes": {
47935
+ "name": "search_nodes",
47936
+ "description": "Retrieve nodes based on specific types, permissions, and queries. Nodes in AITable can be of several types: datasheets (also known as sheets, or spreadsheets), form, dashboard, and folders."
47937
+ },
47938
+ "list_records": {
47939
+ "name": "list_records",
47940
+ "description": "Read the records from a specified datasheet with support for pagination, field filtering, and sorting options."
47941
+ },
47942
+ "get_fields_schema": {
47943
+ "name": "get_fields_schema",
47944
+ "description": "Returns the JSON schema of all fields within the specified database. This schema will be sent to the LLM to help the AI understand the expected structure of the data."
47945
+ },
47946
+ "create_record": {
47947
+ "name": "create_record",
47948
+ "description": "Create a new record in the datasheet. Extract key information from user-provided text based on a predefined Fields JSON Schema and create a new record in the datasheet as a JSON object."
47949
+ },
47950
+ "upload_attachment_via_url": {
47951
+ "name": "upload_attachment_via_url",
47952
+ "description": "Upload an attachment to the AITable server using its web URL. Returns storage information that can be passed to create_record or update_record tools to associate with specific records."
47953
+ }
47954
+ }
47882
47955
  }
47883
47956
  }
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@toolsdk.ai/registry",
3
- "version": "1.0.103",
3
+ "version": "1.0.105",
4
4
  "description": "An Open, Structured, and Standard Registry for MCP Servers and Packages.",
5
5
  "keywords": [],
6
6
  "license": "MIT",
@@ -47,6 +47,7 @@
47
47
  "@anaisbetts/mcp-youtube": "0.6.0",
48
48
  "@angiejones/mcp-selenium": "0.1.21",
49
49
  "@antv/mcp-server-chart": "0.8.2",
50
+ "@apitable/aitable-mcp-server": "1.0.3",
50
51
  "@arizeai/phoenix-mcp": "2.2.9",
51
52
  "@ashdev/discourse-mcp-server": "1.0.2",
52
53
  "@auto-browse/unbundle-openapi-mcp": "1.0.1",
@@ -347,6 +348,7 @@
347
348
  "axios": "^1.9.0",
348
349
  "base-network-mcp-server": "0.1.0",
349
350
  "better-fetch-mcp": "1.0.0",
351
+ "bika-mcp-server": "1.0.7-alpha.62",
350
352
  "binance-mcp-server": "1.0.3",
351
353
  "bitcoin-mcp": "0.0.6",
352
354
  "bitrefill-mcp-server": "0.3.0",
@@ -613,6 +615,7 @@
613
615
  "systemprompt-mcp-notion": "1.0.7",
614
616
  "tana-mcp": "1.2.0",
615
617
  "taskqueue-mcp": "1.4.1",
618
+ "tavily-mcp": "0.2.9",
616
619
  "terminal-mcp-server": "1.0.0",
617
620
  "terraform-mcp-server": "0.13.0",
618
621
  "tesouro-direto-mcp": "0.2.3",
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "type": "mcp-server",
3
- "logo": "https://pbs.twimg.com/profile_images/1365504724724633601/0jzIjXtR_400x400.jpg",
3
+ "logo": "https://clickup.com/assets/brand/logo-v3-clickup-symbol-only.svg",
4
4
  "packageName": "@taazkareem/clickup-mcp-server",
5
5
  "description": "Integrates ClickUp task management with AI systems to enable automated task creation, updates, and retrieval for enhanced project workflow efficiency.",
6
6
  "url": "https://github.com/taazkareem/clickup-mcp-server",
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "type": "mcp-server",
3
- "packageName": "aitable-mcp-server",
3
+ "packageName": "@apitable/aitable-mcp-server",
4
4
  "description": "AITable.ai Model Context Protocol Server enables AI agents to connect and work with AITable datasheets.",
5
5
  "url": "https://github.com/apitable/aitable-mcp-server",
6
6
  "runtime": "node",
@@ -0,0 +1,15 @@
1
+ {
2
+ "type": "mcp-server",
3
+ "packageName": "bika-mcp-server",
4
+ "description": "A Model Context Protocol server that provides read and write access to Bika.ai. This server enables LLMs to list spaces, list nodes, list records, create records and upload attachments in Bika.ai.",
5
+ "url": "https://github.com/bika-ai/bika-mcp-server",
6
+ "runtime": "node",
7
+ "license": "unknown",
8
+ "env": {
9
+ "BIKA_API_KEY": {
10
+ "description": "The API key for the Bika.ai API.",
11
+ "required": true
12
+ }
13
+ },
14
+ "name": "Bika.ai MCP Server"
15
+ }