mcp-meilisearch 1.3.2 → 1.3.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -13,7 +13,7 @@ A Model Context Protocol (MCP) server implementation that provides a bridge betw
13
13
  - **Multiple Transport Options**: Supports both STDIO and StreamableHTTP transports.
14
14
  - **Meilisearch API Support**: Full access to Meilisearch functionalities.
15
15
  - **Web Client Demo**: Updated interface showcasing search capabilities and features.
16
- - **AI Inference**: Leverages LLMs from providers such as OpenAI, Hugging Face or Anthropic to intelligently determine and utilize the most suitable tool for user queries.
16
+ - **AI Inference**: Leverages LLMs from providers such as OpenAI or HuggingFace to intelligently determine and utilize the most suitable tool for user queries.
17
17
 
18
18
  ## Getting Started
19
19
 
@@ -59,10 +59,28 @@ pnpm add mcp-meilisearch
59
59
 
60
60
  #### AI Inference Options
61
61
 
62
- - `aiProviderName`: Name of the AI provider ("openai" | "anthropic" | "huggingface") (Default: "openai")
62
+ - `aiProviderName`: Name of the AI provider ("openai" | "huggingface" | "openrouter") (Default: "openai")
63
63
  - `aiProviderApiKey`: AI provider API key for AI inference
64
64
  - `llmModel`: AI model to use (Default: "gpt-3.5-turbo")
65
65
 
66
+ ##### Using OpenRouter as AI Provider
67
+
68
+ When setting `aiProviderName` to "openrouter", please be aware that not all models support function calling, which is required for proper AI inference in this package. Make sure to select a model that supports the tools parameter.
69
+
70
+ You can find a list of OpenRouter models that support function calling at:
71
+ https://openrouter.ai/models?fmt=cards&supported_parameters=tools
72
+
73
+ Example configuration with OpenRouter:
74
+
75
+ ```typescript
76
+ await mcpMeilisearchServer({
77
+ meilisearchHost: "http://localhost:7700",
78
+ aiProviderName: "openrouter",
79
+ aiProviderApiKey: "your_openrouter_api_key",
80
+ llmModel: "anthropic/claude-3-opus", // Make sure to use a model that supports function calling
81
+ });
82
+ ```
83
+
66
84
  ### Using the MCPClient
67
85
 
68
86
  The package exports the MCPClient class for client-side integration:
package/dist/prompts/system.d.ts ADDED
@@ -0,0 +1,3 @@
1
+ declare const _default: "\n\t\t<identity>\n\t\t\tYou're name is PATI and you are a specialized AI agent that translates user requests into specific, actionable tool calls. Your ONLY function is to identify the most appropriate tool from the provided list and construct a valid JSON object for its invocation.\n\t\t</identity>\n \n\t\t<instructions>\n\t\t\t1. Analyze Request: Carefully examine the user's request to understand their intent and identify key entities or values.\n\t\t\t2. Tool Selection: From the available tools defined in the section, select the single most relevant tool to fulfill the user's request.\n\t\t\t3. Parameter Extraction:\n\t\t\t\t\tIdentify all required parameters for the selected tool based on its definition.\n\t\t\t\t\tExtract values for these parameters directly from the user's request.\n\t\t\t\t\tIf a value is provided in quotes (e.g., \"search_term\"), use that value EXACTLY.\n\t\t\t\t\tInfer parameter values from the context of the request or provided if they are not explicitly stated but are clearly and unambiguously implied.\n\t\t\t\t\tAnalyze descriptive terms in the request, as they may indicate required parameter values even if not quoted.\n\t\t\t\t\tDO NOT make up values for required parameters if they cannot be found or confidently inferred.\n\t\t\t\t\tDO NOT include optional parameters unless their values are explicitly provided or strongly and unambiguously implied by the user's request.\n\t\t\t4. Output Format:\n\t\t\t\t\tYour response MUST be a single JSON object representing the selected tool call.\n\t\t\t\t\tThe JSON object MUST strictly adhere to the following schema:\n\t\t\t\t\t{\n\t\t\t\t\t\t\"name\": \"string\",\n\t\t\t\t\t\t\"parameters\": {\n\t\t\t\t\t\t\t\"parameter_name_1\": \"value_1\",\n\t\t\t\t\t\t\t\"parameter_name_2\": \"value_2\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tReplace \"string\" with the actual tool name and parameter_name_x/value_x with the corresponding parameter names and their extracted or inferred values.\n\t\t\t\t\tEnsure all required parameters for the chosen tool are present in the parameters object.\n\t\t\t\t\tOutput ONLY this JSON object. Do not include any conversational text, explanations, apologies, or any characters before or after the JSON object.\n\n\t\t\t5. Failure Handling (Crucial):\n\t\t\t\t\tIf you can identify a relevant tool AND all its required parameters are available (either explicitly provided or confidently inferred), proceed to generate the JSON tool call as described above.\n\t\t\t\t\tIf no relevant tool can be found for the user's request, OR if a relevant tool is identified but one or more of its required parameters are missing and cannot be confidently inferred from the request or context, you MUST respond with a specific tool call indicating this inability to proceed. Use the following format for such cases:\n\t\t\t\t\t{\n\t\t\t\t\t\t\"name\": \"cannot_fulfill_request\",\n\t\t\t\t\t\t\"parameters\": {\n\t\t\t\t\t\t\t\"reason_code\": \"MISSING_REQUIRED_PARAMETERS | NO_SUITABLE_TOOL\",\n\t\t\t\t\t\t\t\"message\": \"A brief explanation of why the request cannot be fulfilled (e.g., 'Required parameter X is missing for tool Y.' or 'No tool available to handle this request.')\",\n\t\t\t\t\t\t\t\"missing_parameters\": [\"param_name1\", \"param_name2\"]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\t(Ensure \"cannot_fulfill_request\" is a tool defined in your list if you want this strict adherence, or agree on this fixed JSON structure for error handling.)\n\n\t\t\t6. 
Tool Definitions: The available tools, their descriptions, parameters (and which are required) are defined within the section. Refer to these definitions diligently.\n\n\t\t\t7. Harmful Content: If the user's request is to generate content that is harmful, hateful, racist, sexist, lewd, violent, or completely irrelevant to the defined tools' capabilities, respond with:\n\t\t\t\t\t{\n\t\t\t\t\t\t\"name\": \"cannot_fulfill_request\",\n\t\t\t\t\t\t\"parameters\": {\n\t\t\t\t\t\t\t\"reason_code\": \"POLICY_VIOLATION\",\n\t\t\t\t\t\t\t\"message\": \"Sorry, I can't assist with that request due to content policy.\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t</instructions>\n \n\n <functions>\n MCP_TOOLS\n\t\t</functions>\n \n\t\t<context>\n My current OS is: Linux\n\t\t</context>\n ";
2
+ export default _default;
3
+ //# sourceMappingURL=system.d.ts.map
package/dist/prompts/system.d.ts.map ADDED
@@ -0,0 +1 @@
1
+ {"version":3,"file":"system.d.ts","sourceRoot":"","sources":["../../src/prompts/system.ts"],"names":[],"mappings":";AAAA,wBA+DM"}
package/dist/prompts/system.js ADDED
@@ -0,0 +1,64 @@
1
+ export default `
2
+ <identity>
3
+ Your name is PATI and you are a specialized AI agent that translates user requests into specific, actionable tool calls. Your ONLY function is to identify the most appropriate tool from the provided list and construct a valid JSON object for its invocation.
4
+ </identity>
5
+
6
+ <instructions>
7
+ 1. Analyze Request: Carefully examine the user's request to understand their intent and identify key entities or values.
8
+ 2. Tool Selection: From the available tools defined in the section, select the single most relevant tool to fulfill the user's request.
9
+ 3. Parameter Extraction:
10
+ Identify all required parameters for the selected tool based on its definition.
11
+ Extract values for these parameters directly from the user's request.
12
+ If a value is provided in quotes (e.g., "search_term"), use that value EXACTLY.
13
+ Infer parameter values from the context of the request or provided if they are not explicitly stated but are clearly and unambiguously implied.
14
+ Analyze descriptive terms in the request, as they may indicate required parameter values even if not quoted.
15
+ DO NOT make up values for required parameters if they cannot be found or confidently inferred.
16
+ DO NOT include optional parameters unless their values are explicitly provided or strongly and unambiguously implied by the user's request.
17
+ 4. Output Format:
18
+ Your response MUST be a single JSON object representing the selected tool call.
19
+ The JSON object MUST strictly adhere to the following schema:
20
+ {
21
+ "name": "string",
22
+ "parameters": {
23
+ "parameter_name_1": "value_1",
24
+ "parameter_name_2": "value_2"
25
+ }
26
+ }
27
+ Replace "string" with the actual tool name and parameter_name_x/value_x with the corresponding parameter names and their extracted or inferred values.
28
+ Ensure all required parameters for the chosen tool are present in the parameters object.
29
+ Output ONLY this JSON object. Do not include any conversational text, explanations, apologies, or any characters before or after the JSON object.
30
+
31
+ 5. Failure Handling (Crucial):
32
+ If you can identify a relevant tool AND all its required parameters are available (either explicitly provided or confidently inferred), proceed to generate the JSON tool call as described above.
33
+ If no relevant tool can be found for the user's request, OR if a relevant tool is identified but one or more of its required parameters are missing and cannot be confidently inferred from the request or context, you MUST respond with a specific tool call indicating this inability to proceed. Use the following format for such cases:
34
+ {
35
+ "name": "cannot_fulfill_request",
36
+ "parameters": {
37
+ "reason_code": "MISSING_REQUIRED_PARAMETERS | NO_SUITABLE_TOOL",
38
+ "message": "A brief explanation of why the request cannot be fulfilled (e.g., 'Required parameter X is missing for tool Y.' or 'No tool available to handle this request.')",
39
+ "missing_parameters": ["param_name1", "param_name2"]
40
+ }
41
+ }
42
+ (Ensure "cannot_fulfill_request" is a tool defined in your list if you want this strict adherence, or agree on this fixed JSON structure for error handling.)
43
+
44
+ 6. Tool Definitions: The available tools, their descriptions, parameters (and which are required) are defined within the section. Refer to these definitions diligently.
45
+
46
+ 7. Harmful Content: If the user's request is to generate content that is harmful, hateful, racist, sexist, lewd, violent, or completely irrelevant to the defined tools' capabilities, respond with:
47
+ {
48
+ "name": "cannot_fulfill_request",
49
+ "parameters": {
50
+ "reason_code": "POLICY_VIOLATION",
51
+ "message": "Sorry, I can't assist with that request due to content policy."
52
+ }
53
+ }
54
+ </instructions>
55
+
56
+
57
+ <functions>
58
+ MCP_TOOLS
59
+ </functions>
60
+
61
+ <context>
62
+ My current OS is: Linux
63
+ </context>
64
+ `;
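
For orientation, the prompt above asks the model to reply with nothing but a JSON tool call. A hypothetical pair of replies that satisfy its schema (the tool name and parameters are invented for illustration and are not necessarily tools registered by this package):

```typescript
// Hypothetical reply selecting a tool, following the schema in the prompt above.
const exampleToolCall = {
  name: "global-search", // invented tool name
  parameters: { q: "red shoes" },
};

// Fallback shape the prompt prescribes when a required parameter cannot be found.
const exampleFailure = {
  name: "cannot_fulfill_request",
  parameters: {
    reason_code: "MISSING_REQUIRED_PARAMETERS",
    message: "Required parameter 'q' is missing for tool global-search.", // hypothetical message
    missing_parameters: ["q"],
  },
};
```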
package/dist/tools/ai-tools.d.ts.map CHANGED
@@ -1 +1 @@
1
- {"version":3,"file":"ai-tools.d.ts","sourceRoot":"","sources":["../../src/tools/ai-tools.ts"],"names":[],"mappings":"AAEA,OAAO,EAAE,SAAS,EAAE,MAAM,yCAAyC,CAAC;AAcpE;;;GAGG;AACH,eAAO,MAAM,eAAe,GAAI,QAAQ,SAAS,SAqEhD,CAAC;AAEF,eAAe,eAAe,CAAC"}
1
+ {"version":3,"file":"ai-tools.d.ts","sourceRoot":"","sources":["../../src/tools/ai-tools.ts"],"names":[],"mappings":"AAIA,OAAO,EAAE,SAAS,EAAE,MAAM,yCAAyC,CAAC;AAapE;;;GAGG;AACH,eAAO,MAAM,eAAe,GAAI,QAAQ,SAAS,SAwEhD,CAAC;AAEF,eAAe,eAAe,CAAC"}
package/dist/tools/ai-tools.js CHANGED
@@ -1,5 +1,6 @@
1
1
  import { z } from "zod";
2
2
  import { AIService } from "../utils/ai-handler.js";
3
+ import { zodToJsonSchema } from "zod-to-json-schema";
3
4
  import { createErrorResponse } from "../utils/error-handler.js";
4
5
  /**
5
6
  * Register AI tools with the MCP server
@@ -17,11 +18,14 @@ export const registerAITools = (server) => {
17
18
  const aiService = AIService.getInstance();
18
19
  const availableTools = Object.entries(server._registeredTools)
19
20
  .filter(([name]) => name !== "process-ai-query")
20
- .map(([name, { description }]) => ({
21
- name,
22
- description,
23
- parameters: {},
24
- }));
21
+ .map(([name, { description, inputSchema }]) => {
22
+ const { definitions } = zodToJsonSchema(inputSchema, "parameters");
23
+ return {
24
+ name,
25
+ description,
26
+ parameters: definitions?.parameters ?? {},
27
+ };
28
+ });
25
29
  aiService.setAvailableTools(availableTools);
26
30
  const result = await aiService.processQuery(query, specificTools);
27
31
  if (!aiService.ensureInitialized()) {
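
As a point of reference for the `zodToJsonSchema(inputSchema, "parameters")` call above: when `zod-to-json-schema` is given a name as its second argument, it nests the converted schema under `definitions`, which is why the code reads `definitions?.parameters`. A minimal standalone sketch, using a hypothetical tool input schema:

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// Hypothetical input schema for a registered tool.
const inputSchema = z.object({
  indexUid: z.string().describe("Index to search"),
  q: z.string().describe("Search query"),
  limit: z.number().optional(),
});

// Naming the schema "parameters" nests the result under definitions.parameters.
const { definitions } = zodToJsonSchema(inputSchema, "parameters");
const parameters = definitions?.parameters ?? {};
// parameters is now plain JSON Schema, e.g.
// { type: "object", properties: { indexUid: ..., q: ..., limit: ... }, required: [...] }
console.log(JSON.stringify(parameters, null, 2));
```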
package/dist/types/options.d.ts CHANGED
@@ -1,4 +1,4 @@
1
- export type AiProviderNameOptions = "openai" | "huggingface" | "anthropic";
1
+ export type AiProviderNameOptions = "openai" | "huggingface" | "openrouter";
2
2
  export interface ServerOptions {
3
3
  /**
4
4
  * The URL of the Meilisearch instance
package/dist/types/options.d.ts.map CHANGED
@@ -1 +1 @@
1
- {"version":3,"file":"options.d.ts","sourceRoot":"","sources":["../../src/types/options.ts"],"names":[],"mappings":"AAAA,MAAM,MAAM,qBAAqB,GAAG,QAAQ,GAAG,aAAa,GAAG,WAAW,CAAC;AAE3E,MAAM,WAAW,aAAa;IAC5B;;;OAGG;IACH,eAAe,CAAC,EAAE,MAAM,CAAC;IAEzB;;OAEG;IACH,iBAAiB,CAAC,EAAE,MAAM,CAAC;IAC3B;;;OAGG;IACH,SAAS,CAAC,EAAE,MAAM,GAAG,OAAO,CAAC;IAE7B;;;OAGG;IACH,QAAQ,CAAC,EAAE,MAAM,CAAC;IAElB;;;OAGG;IACH,WAAW,CAAC,EAAE,MAAM,CAAC;IAErB;;;OAGG;IACH,cAAc,CAAC,EAAE,MAAM,CAAC;IAExB;;;OAGG;IACH,sBAAsB,CAAC,EAAE,MAAM,CAAC;IAEhC;;;OAGG;IACH,cAAc,CAAC,EAAE,qBAAqB,CAAC;IAEvC;;OAEG;IACH,gBAAgB,CAAC,EAAE,MAAM,CAAC;IAE1B;;;OAGG;IACH,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB"}
1
+ {"version":3,"file":"options.d.ts","sourceRoot":"","sources":["../../src/types/options.ts"],"names":[],"mappings":"AAAA,MAAM,MAAM,qBAAqB,GAAG,QAAQ,GAAG,aAAa,GAAG,YAAY,CAAC;AAE5E,MAAM,WAAW,aAAa;IAC5B;;;OAGG;IACH,eAAe,CAAC,EAAE,MAAM,CAAC;IAEzB;;OAEG;IACH,iBAAiB,CAAC,EAAE,MAAM,CAAC;IAC3B;;;OAGG;IACH,SAAS,CAAC,EAAE,MAAM,GAAG,OAAO,CAAC;IAE7B;;;OAGG;IACH,QAAQ,CAAC,EAAE,MAAM,CAAC;IAElB;;;OAGG;IACH,WAAW,CAAC,EAAE,MAAM,CAAC;IAErB;;;OAGG;IACH,cAAc,CAAC,EAAE,MAAM,CAAC;IAExB;;;OAGG;IACH,sBAAsB,CAAC,EAAE,MAAM,CAAC;IAEhC;;;OAGG;IACH,cAAc,CAAC,EAAE,qBAAqB,CAAC;IAEvC;;OAEG;IACH,gBAAgB,CAAC,EAAE,MAAM,CAAC;IAE1B;;;OAGG;IACH,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB"}
package/dist/utils/ai-handler.d.ts CHANGED
@@ -1,4 +1,14 @@
1
1
  import { AiProviderNameOptions } from "../types/options.js";
2
+ interface AITool {
3
+ name: string;
4
+ description: string;
5
+ parameters: Record<string, any>;
6
+ }
7
+ interface AIToolResponse {
8
+ toolName: string;
9
+ reasoning?: string | null;
10
+ parameters: Record<string, any>;
11
+ }
2
12
  /**
3
13
  * AI Inference Service
4
14
  *
@@ -7,10 +17,10 @@ import { AiProviderNameOptions } from "../types/options.js";
7
17
  */
8
18
  export declare class AIService {
9
19
  private model;
10
- private systemPrompt;
11
20
  private static instance;
12
21
  private static serverInitialized;
13
22
  private provider;
23
+ private readonly systemPrompt;
14
24
  private client;
15
25
  private availableTools;
16
26
  /**
@@ -35,11 +45,7 @@ export declare class AIService {
35
45
  * Set the available tools that can be used by the AI
36
46
  * @param tools Array of tools with name, description, and parameters
37
47
  */
38
- setAvailableTools(tools: {
39
- name: string;
40
- description: string;
41
- parameters: Record<string, any>;
42
- }[]): void;
48
+ setAvailableTools(tools: AITool[]): void;
43
49
  ensureInitialized(): boolean;
44
50
  /**
45
51
  * Get tool definitions for the AI from the available tools
@@ -59,14 +65,9 @@ export declare class AIService {
59
65
  * @param specificTools Optional array of specific tool names to consider
60
66
  * @returns Object containing the selected tool name and parameters
61
67
  */
62
- processQuery(query: string, specificTools?: string[]): Promise<{
63
- toolName: string;
64
- parameters: Record<string, any>;
65
- reasoning?: string;
66
- } | null>;
67
- private processHuggingFaceQuery;
68
- private processAnthropicQuery;
68
+ processQuery(query: string, specificTools?: string[]): Promise<AIToolResponse | null>;
69
69
  private processOpenAIQuery;
70
- private setSystemPrompt;
70
+ private processHuggingFaceQuery;
71
71
  }
72
+ export {};
72
73
  //# sourceMappingURL=ai-handler.d.ts.map
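
Putting the declarations above together, a minimal sketch of how a caller drives `AIService`. The import path is the package-internal one used by `dist/tools/ai-tools.js` in this diff, the tool definition and query are hypothetical, and the sketch assumes the server has already initialized the service with a provider API key:

```typescript
// Package-internal import, as used by dist/tools/ai-tools.js in this diff.
import { AIService } from "../utils/ai-handler.js";

const aiService = AIService.getInstance();

// AITool shape from the declaration above: name, description, JSON Schema parameters.
aiService.setAvailableTools([
  {
    name: "global-search", // hypothetical tool name
    description: "Search across Meilisearch indexes",
    parameters: {
      type: "object",
      properties: { q: { type: "string" } },
      required: ["q"],
    },
  },
]);

// ensureInitialized() is true only once the server has set up the provider client.
if (aiService.ensureInitialized()) {
  // Resolves to an AIToolResponse ({ toolName, reasoning?, parameters }) or null.
  const result = await aiService.processQuery('find products matching "red shoes"');
  if (result) {
    console.log(result.toolName, result.parameters);
  }
}
```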
package/dist/utils/ai-handler.d.ts.map CHANGED
@@ -1 +1 @@
1
- {"version":3,"file":"ai-handler.d.ts","sourceRoot":"","sources":["../../src/utils/ai-handler.ts"],"names":[],"mappings":"AAIA,OAAO,EAAE,qBAAqB,EAAE,MAAM,qBAAqB,CAAC;AAsB5D;;;;;GAKG;AACH,qBAAa,SAAS;IACpB,OAAO,CAAC,KAAK,CAA2B;IACxC,OAAO,CAAC,YAAY,CAAyB;IAC7C,OAAO,CAAC,MAAM,CAAC,QAAQ,CAA0B;IACjD,OAAO,CAAC,MAAM,CAAC,iBAAiB,CAAkB;IAClD,OAAO,CAAC,QAAQ,CAAmC;IACnD,OAAO,CAAC,MAAM,CAAqD;IACnE,OAAO,CAAC,cAAc,CAIb;IAET;;;OAGG;IACH,OAAO;IAEP;;;OAGG;WACW,WAAW,IAAI,SAAS;IAOtC;;;;;;OAMG;IACH,UAAU,CACR,MAAM,EAAE,MAAM,EACd,QAAQ,GAAE,qBAAgC,EAC1C,KAAK,CAAC,EAAE,MAAM,GACb,IAAI;IAyBP;;;OAGG;IACH,iBAAiB,CACf,KAAK,EAAE;QACL,IAAI,EAAE,MAAM,CAAC;QACb,WAAW,EAAE,MAAM,CAAC;QACpB,UAAU,EAAE,MAAM,CAAC,MAAM,EAAE,GAAG,CAAC,CAAC;KACjC,EAAE,GACF,IAAI;IAUP,iBAAiB,IAAI,OAAO;IAI5B;;;;OAIG;IACH,OAAO,CAAC,kBAAkB;IAwB1B;;;;OAIG;IACH,OAAO,CAAC,gBAAgB;IAaxB;;;;;OAKG;IACG,YAAY,CAChB,KAAK,EAAE,MAAM,EACb,aAAa,CAAC,EAAE,MAAM,EAAE,GACvB,OAAO,CAAC;QACT,QAAQ,EAAE,MAAM,CAAC;QACjB,UAAU,EAAE,MAAM,CAAC,MAAM,EAAE,GAAG,CAAC,CAAC;QAChC,SAAS,CAAC,EAAE,MAAM,CAAC;KACpB,GAAG,IAAI,CAAC;YAgCK,uBAAuB;YA2BvB,qBAAqB;YAmCrB,kBAAkB;IA2BhC,OAAO,CAAC,eAAe;CAGxB"}
1
+ {"version":3,"file":"ai-handler.d.ts","sourceRoot":"","sources":["../../src/utils/ai-handler.ts"],"names":[],"mappings":"AAGA,OAAO,EAAE,qBAAqB,EAAE,MAAM,qBAAqB,CAAC;AAG5D,UAAU,MAAM;IACd,IAAI,EAAE,MAAM,CAAC;IACb,WAAW,EAAE,MAAM,CAAC;IACpB,UAAU,EAAE,MAAM,CAAC,MAAM,EAAE,GAAG,CAAC,CAAC;CACjC;AAiBD,UAAU,cAAc;IACtB,QAAQ,EAAE,MAAM,CAAC;IACjB,SAAS,CAAC,EAAE,MAAM,GAAG,IAAI,CAAC;IAC1B,UAAU,EAAE,MAAM,CAAC,MAAM,EAAE,GAAG,CAAC,CAAC;CACjC;AAED;;;;;GAKG;AACH,qBAAa,SAAS;IACpB,OAAO,CAAC,KAAK,CAA2B;IACxC,OAAO,CAAC,MAAM,CAAC,QAAQ,CAA0B;IACjD,OAAO,CAAC,MAAM,CAAC,iBAAiB,CAAkB;IAClD,OAAO,CAAC,QAAQ,CAAmC;IACnD,OAAO,CAAC,QAAQ,CAAC,YAAY,CAAwB;IACrD,OAAO,CAAC,MAAM,CAAyC;IACvD,OAAO,CAAC,cAAc,CAIb;IAET;;;OAGG;IACH,OAAO;IAEP;;;OAGG;WACW,WAAW,IAAI,SAAS;IAOtC;;;;;;OAMG;IACH,UAAU,CACR,MAAM,EAAE,MAAM,EACd,QAAQ,GAAE,qBAAgC,EAC1C,KAAK,CAAC,EAAE,MAAM,GACb,IAAI;IA4BP;;;OAGG;IACH,iBAAiB,CAAC,KAAK,EAAE,MAAM,EAAE,GAAG,IAAI;IAIxC,iBAAiB,IAAI,OAAO;IAI5B;;;;OAIG;IACH,OAAO,CAAC,kBAAkB;IAgB1B;;;;OAIG;IACH,OAAO,CAAC,gBAAgB;IAaxB;;;;;OAKG;IACG,YAAY,CAChB,KAAK,EAAE,MAAM,EACb,aAAa,CAAC,EAAE,MAAM,EAAE,GACvB,OAAO,CAAC,cAAc,GAAG,IAAI,CAAC;YA+BnB,kBAAkB;YAiClB,uBAAuB;CAiCtC"}
package/dist/utils/ai-handler.js CHANGED
@@ -1,7 +1,7 @@
1
1
  import { OpenAI } from "openai";
2
- import Anthropic from "@anthropic-ai/sdk";
3
- import generalPrompt from "../prompts/general.js";
2
+ import systemPrompt from "../prompts/system.js";
4
3
  import { InferenceClient } from "@huggingface/inference";
4
+ import { markdownToJson } from "./response-handler.js";
5
5
  /**
6
6
  * AI Inference Service
7
7
  *
@@ -10,10 +10,10 @@ import { InferenceClient } from "@huggingface/inference";
10
10
  */
11
11
  export class AIService {
12
12
  model = "gpt-3.5-turbo";
13
- systemPrompt = generalPrompt;
14
13
  static instance = null;
15
14
  static serverInitialized = false;
16
15
  provider = "openai";
16
+ systemPrompt = systemPrompt;
17
17
  client = null;
18
18
  availableTools = [];
19
19
  /**
@@ -50,8 +50,11 @@ export class AIService {
50
50
  case "openai":
51
51
  this.client = new OpenAI({ apiKey });
52
52
  break;
53
- case "anthropic":
54
- this.client = new Anthropic({ apiKey });
53
+ case "openrouter":
54
+ this.client = new OpenAI({
55
+ apiKey,
56
+ baseURL: "https://openrouter.ai/api/v1",
57
+ });
55
58
  break;
56
59
  case "huggingface":
57
60
  this.client = new InferenceClient(apiKey);
@@ -67,7 +70,6 @@ export class AIService {
67
70
  */
68
71
  setAvailableTools(tools) {
69
72
  this.availableTools = tools;
70
- this.setSystemPrompt(this.systemPrompt.replace("MCP_TOOLS", JSON.stringify(this.availableTools, null, 2)));
71
73
  }
72
74
  ensureInitialized() {
73
75
  return this.client !== null;
@@ -78,24 +80,16 @@ export class AIService {
78
80
  * @returns Array of tool definitions
79
81
  */
80
82
  getToolDefinitions(toolNames) {
81
- if (!toolNames?.length) {
82
- return this.availableTools.map((tool) => ({
83
- type: "function",
84
- function: {
85
- name: tool.name,
86
- description: tool.description,
87
- parameters: tool.parameters,
88
- },
89
- }));
90
- }
91
- return this.availableTools
92
- .filter((tool) => toolNames.includes(tool.name))
93
- .map((tool) => ({
83
+ const tools = toolNames?.length
84
+ ? this.availableTools.filter((tool) => toolNames.includes(tool.name))
85
+ : this.availableTools;
86
+ return tools.map((tool) => ({
94
87
  type: "function",
95
88
  function: {
89
+ strict: true,
96
90
  name: tool.name,
97
- description: tool.description,
98
91
  parameters: tool.parameters,
92
+ description: tool.description,
99
93
  },
100
94
  }));
101
95
  }
@@ -127,20 +121,15 @@ export class AIService {
127
121
  const mentionedTools = this.extractToolNames(query);
128
122
  const toolsToUse = specificTools || (mentionedTools.length ? mentionedTools : undefined);
129
123
  const tools = this.getToolDefinitions(toolsToUse);
124
+ const systemPrompt = this.systemPrompt.replace("MCP_TOOLS", JSON.stringify(tools, null, 2));
130
125
  const messages = [
131
- { role: "system", content: this.systemPrompt },
126
+ { role: "system", content: systemPrompt },
132
127
  { role: "user", content: query },
133
128
  ];
134
- if (this.provider === "openai") {
135
- return this.processOpenAIQuery(tools, messages);
136
- }
137
- if (this.provider === "anthropic") {
138
- return this.processAnthropicQuery(tools, messages);
139
- }
140
129
  if (this.provider === "huggingface") {
141
130
  return this.processHuggingFaceQuery(tools, messages);
142
131
  }
143
- return null;
132
+ return this.processOpenAIQuery(tools, messages);
144
133
  }
145
134
  catch (error) {
146
135
  if (error instanceof Error) {
@@ -149,70 +138,57 @@ export class AIService {
149
138
  throw error;
150
139
  }
151
140
  }
152
- async processHuggingFaceQuery(tools, messages) {
153
- const response = await this.client.chatCompletion({
141
+ async processOpenAIQuery(tools, messages) {
142
+ const client = this.client;
143
+ const response = await client.chat.completions
144
+ .create({
154
145
  tools,
155
146
  messages,
156
- max_tokens: 512,
157
147
  model: this.model,
148
+ })
149
+ .catch((error) => {
150
+ console.error("Error in OpenAI API call:", error);
151
+ return null;
158
152
  });
159
- if (!response.choices?.length)
153
+ if (!response?.choices.length)
160
154
  return null;
161
155
  const message = response.choices[0].message;
162
- if (message.tool_calls?.length) {
163
- const toolCall = message.tool_calls[0];
164
- return {
165
- toolName: toolCall.function.name,
166
- reasoning: message.content || undefined,
167
- parameters: JSON.parse(toolCall.function.arguments),
168
- };
169
- }
170
- return null;
156
+ if (!message.content)
157
+ return null;
158
+ const toolCall = markdownToJson(message.content);
159
+ if (!toolCall)
160
+ return null;
161
+ return {
162
+ toolName: toolCall.name,
163
+ reasoning: message.content,
164
+ parameters: toolCall.parameters,
165
+ };
171
166
  }
172
- async processAnthropicQuery(tools, messages) {
173
- const response = await this.client.messages.create({
167
+ async processHuggingFaceQuery(tools, messages) {
168
+ const client = this.client;
169
+ const response = await client
170
+ .chatCompletion({
174
171
  tools,
175
172
  messages,
176
- max_tokens: 1024,
177
- model: this.model,
178
- });
179
- const content = response.content;
180
- if (Array.isArray(content) && content.length) {
181
- const toolCallItem = content.find((item) => item.type === "tool_call");
182
- if (toolCallItem?.tool_call) {
183
- const textItems = content.filter((item) => item.type === "text" &&
184
- content.indexOf(item) < content.indexOf(toolCallItem));
185
- const reasoning = textItems.map((item) => item.text).join(" ");
186
- return {
187
- reasoning: reasoning || undefined,
188
- toolName: toolCallItem.tool_call.name,
189
- parameters: JSON.parse(toolCallItem.tool_call.input),
190
- };
191
- }
192
- }
193
- return null;
194
- }
195
- async processOpenAIQuery(tools, messages) {
196
- const response = await this.client.chat.completions.create({
173
+ max_tokens: 512,
197
174
  model: this.model,
198
- messages,
199
- tools,
200
- tool_choice: "auto",
175
+ })
176
+ .catch((error) => {
177
+ console.error("Error in HuggingFace API call:", error);
178
+ return null;
201
179
  });
202
- if (!response.choices?.length)
180
+ if (!response?.choices.length)
203
181
  return null;
204
182
  const message = response.choices[0].message;
205
- if (message.tool_calls?.length) {
206
- const toolCall = message.tool_calls[0];
207
- return {
208
- toolName: toolCall.function.name,
209
- reasoning: message.content || undefined,
210
- parameters: JSON.parse(toolCall.function.arguments),
211
- };
212
- }
213
- return null;
214
- }
215
- setSystemPrompt(prompt) {
216
- this.systemPrompt = prompt;
183
+ if (!message.content)
184
+ return null;
185
+ const toolCall = markdownToJson(message.content);
186
+ if (!toolCall)
187
+ return null;
188
+ return {
189
+ toolName: toolCall.name,
190
+ reasoning: message.content,
191
+ parameters: toolCall.parameters,
192
+ };
217
193
  }
218
194
  }
package/dist/utils/response-handler.d.ts ADDED
@@ -0,0 +1,10 @@
1
+ /**
2
+ * Transforms a string, potentially containing JSON embedded in Markdown
3
+ * or with non-standard features like comments and trailing commas,
4
+ * into a valid JavaScript object or array.
5
+ *
6
+ * @param markdownJsonString The string potentially containing the JSON.
7
+ * @returns A parsed JavaScript object/array, or null if parsing fails.
8
+ */
9
+ export declare function markdownToJson<T>(markdownJsonString: string): T | null;
10
+ //# sourceMappingURL=response-handler.d.ts.map
package/dist/utils/response-handler.d.ts.map ADDED
@@ -0,0 +1 @@
1
+ {"version":3,"file":"response-handler.d.ts","sourceRoot":"","sources":["../../src/utils/response-handler.ts"],"names":[],"mappings":"AAAA;;;;;;;GAOG;AACH,wBAAgB,cAAc,CAAC,CAAC,EAAE,kBAAkB,EAAE,MAAM,GAAG,CAAC,GAAG,IAAI,CA6BtE"}
package/dist/utils/response-handler.js ADDED
@@ -0,0 +1,35 @@
1
+ /**
2
+ * Transforms a string, potentially containing JSON embedded in Markdown
3
+ * or with non-standard features like comments and trailing commas,
4
+ * into a valid JavaScript object or array.
5
+ *
6
+ * @param markdownJsonString The string potentially containing the JSON.
7
+ * @returns A parsed JavaScript object/array, or null if parsing fails.
8
+ */
9
+ export function markdownToJson(markdownJsonString) {
10
+ if (typeof markdownJsonString !== "string" || !markdownJsonString.trim()) {
11
+ return null;
12
+ }
13
+ let S = markdownJsonString.trim();
14
+ const fenceRegex = /^```(?:json)?\s*([\s\S]*?)\s*```$/;
15
+ const fenceMatch = S.match(fenceRegex);
16
+ if (fenceMatch && fenceMatch[1]) {
17
+ S = fenceMatch[1].trim();
18
+ }
19
+ if (S === "")
20
+ return null;
21
+ S = S.replace(/\/\/[^\r\n]*/g, "");
22
+ S = S.replace(/\/\*[\s\S]*?\*\//g, "");
23
+ S = S.replace(/,\s*([}\]])/g, "$1");
24
+ try {
25
+ const parsedJson = JSON.parse(S);
26
+ return parsedJson;
27
+ }
28
+ catch (error) {
29
+ console.error("Failed to parse JSON after transformations.");
30
+ console.error("Original string:", markdownJsonString);
31
+ console.error("Processed string that failed:", S);
32
+ console.error("Error:", error);
33
+ return null;
34
+ }
35
+ }
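
A quick usage sketch for the helper above, showing the kind of model output it is meant to normalize (a fenced Markdown block containing a comment and trailing commas). The import path is the package-internal one used by `dist/utils/ai-handler.js`, and the tool name is hypothetical:

```typescript
// Package-internal import, as used by dist/utils/ai-handler.js in this diff.
import { markdownToJson } from "./response-handler.js";

// Assembled line by line only to keep the Markdown fence readable here.
const llmReply = [
  "```json",
  "{",
  '  "name": "global-search", // hypothetical tool name',
  '  "parameters": { "q": "red shoes", },',
  "}",
  "```",
].join("\n");

const toolCall = markdownToJson<{ name: string; parameters: Record<string, unknown> }>(llmReply);
// -> { name: "global-search", parameters: { q: "red shoes" } }, or null if parsing fails
console.log(toolCall);
```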
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "mcp-meilisearch",
3
- "version": "1.3.2",
3
+ "version": "1.3.4",
4
4
  "description": "Model Context Protocol (MCP) implementation for Meilisearch",
5
5
  "main": "dist/index.js",
6
6
  "types": "dist/index.d.ts",
@@ -29,8 +29,7 @@
29
29
  "prepublishOnly": "rm -rf dist && npm version patch && npm run build"
30
30
  },
31
31
  "dependencies": {
32
- "@anthropic-ai/sdk": "^0.50.4",
33
- "@huggingface/inference": "^3.13.0",
32
+ "@huggingface/inference": "^3.13.1",
34
33
  "@modelcontextprotocol/sdk": "^1.11.2",
35
34
  "axios": "^1.9.0",
36
35
  "openai": "^4.98.0",
@@ -38,7 +37,8 @@
38
37
  },
39
38
  "devDependencies": {
40
39
  "@types/node": "^22.15.18",
41
- "typescript": "^5.8.3"
40
+ "typescript": "^5.8.3",
41
+ "zod-to-json-schema": "^3.24.5"
42
42
  },
43
43
  "engines": {
44
44
  "node": ">=20.12.2",
package/dist/prompts/general.d.ts REMOVED
@@ -1,3 +0,0 @@
1
- declare const _default: "\n\t\tAnswer the user's request using the relevant tool(s), if they are available. Check that all the required parameters for each tool call are provided or can reasonably be inferred from context. IF there are no relevant tools or there are missing values for required parameters, ask the user to supply these values; otherwise proceed with the tool calls. If the user provides a specific value for a parameter (for example provided in quotes), make sure to use that value EXACTLY. DO NOT make up values for or ask about optional parameters. Carefully analyze descriptive terms in the request as they may indicate required parameter values that should be included even if not explicitly quoted.\n\n\t\t<identity>\n\t\tYou are an AI programming assistant.\n\t\tWhen asked for your name, you must respond with \"Pati\".\n\t\tFollow the user's requirements carefully & to the letter.\n\t\tIf you are asked to generate content that is harmful, hateful, racist, sexist, lewd, violent, or completely irrelevant to software engineering, only respond with \"Sorry, I can't assist with that.\"\n\t\tKeep your answers short and impersonal.\n\t\t</identity>\n\n\t\t<instructions>\n\t\tYou are a highly sophisticated automated coding agent with expert-level knowledge across many different programming languages and frameworks.\n\t\tThe user will ask a question, or ask you to perform a task, and it may require lots of research to answer correctly. There is a selection of tools that let you perform actions or retrieve helpful context to answer the user's question.\n\t\tIf you can infer the project type (languages, frameworks, and libraries) from the user's query or the context that you have, make sure to keep them in mind when making changes.\n\t\tIf the user wants you to implement a feature and they have not specified the files to edit, first break down the user's request into smaller concepts and think about the kinds of files you need to grasp each concept.\n\t\tIf you aren't sure which tool is relevant, you can call multiple tools. You can call tools repeatedly to take actions or gather as much context as needed until you have completed the task fully. Don't give up unless you are sure the request cannot be fulfilled with the tools you have. It's YOUR RESPONSIBILITY to make sure that you have done all you can to collect necessary context.\n\t\tPrefer using the semantic_search tool to search for context unless you know the exact string or filename pattern you're searching for.\n\t\tDon't make assumptions about the situation- gather context first, then perform the task or answer the question.\n\t\tThink creatively and explore the workspace in order to make a complete fix.\n\t\tDon't repeat yourself after a tool call, pick up where you left off.\n\t\tNEVER print out a codeblock with file changes unless the user asked for it. Use the insert_edit_into_file tool instead.\n\t\tNEVER print out a codeblock with a terminal command to run unless the user asked for it. Use the run_in_terminal tool instead.\n\t\tYou don't need to read a file if it's already provided in context.\n\t\t</instructions>\n\n\t\t<toolUseInstructions>\n\t\tWhen using a tool, follow the json schema very carefully and make sure to include ALL required properties.\n\t\tAlways output valid JSON when using a tool.\n\t\tIf a tool exists to do a task, use the tool instead of asking the user to manually take an action.\n\t\tIf you say that you will take an action, then go ahead and use the tool to do it. 
No need to ask permission.\n\t\tNever use multi_tool_use.parallel or any tool that does not exist. Use tools using the proper procedure, DO NOT write out a json codeblock with the tool inputs.\n\t\tNever say the name of a tool to a user. For example, instead of saying that you'll use the run_in_terminal tool, say \"I'll run the command in a terminal\".\n\t\tIf you think running multiple tools can answer the user's question, prefer calling them in parallel whenever possible, but do not call semantic_search in parallel.\n\t\tIf semantic_search returns the full contents of the text files in the workspace, you have all the workspace context.\n\t\tDon't call the run_in_terminal tool multiple times in parallel. Instead, run one command and wait for the output before running the next command.\n\t\tAfter you have performed the user's task, if the user corrected something you did, expressed a coding preference, or communicated a fact that you need to remember, use the update_user_preferences tool to save their preferences.\n\t\t</toolUseInstructions>\n\n\t\t<editFileInstructions>\n\t\tDon't try to edit an existing file without reading it first, so you can make changes properly.\n\t\tUse the insert_edit_into_file tool to edit files. When editing files, group your changes by file.\n\t\tNEVER show the changes to the user, just call the tool, and the edits will be applied and shown to the user.\n\t\tNEVER print a codeblock that represents a change to a file, use insert_edit_into_file instead.\n\t\tFor each file, give a short description of what needs to be changed, then use the insert_edit_into_file tool. You can use any tool multiple times in a response, and you can keep writing text after using a tool.\n\t\tFollow best practices when editing files. If a popular external library exists to solve a problem, use it and properly install the package e.g. with \"npm install\" or creating a \"requirements.txt\".\n\t\tAfter editing a file, you MUST call get_errors to validate the change. Fix the errors if they are relevant to your change or the prompt, and remember to validate that they were actually fixed.\n\t\tThe insert_edit_into_file tool is very smart and can understand how to apply your edits to the user's files, you just need to provide minimal hints.\n\t\tWhen you use the insert_edit_into_file tool, avoid repeating existing code, instead use comments to represent regions of unchanged code. The tool prefers that you are as concise as possible. For example:\n\t\t// ...existing code...\n\t\tchanged code\n\t\t// ...existing code...\n\t\tchanged code\n\t\t// ...existing code...\n\n\t\tHere is an example of how you should format an edit to an existing Person class:\n\t\tclass Person {\n\t\t\t// ...existing code...\n\t\t\tage: number;\n\t\t\t// ...existing code...\n\t\t\tgetAge() {\n\t\t\t\treturn this.age;\n\t\t\t}\n\t\t}\n\t\t</editFileInstructions>\n\n\t\t<functions>\n\t\tMCP_TOOLS\n\t\t</functions>\n\n\t\t<context>\n\t\tMy current OS is: Linux\n\t\tI am working in a workspace that has the following structure:\n\t\t```\n\t\texample.txt\n\t\traw_complete_instructions.txt\n\t\traw_instructions.txt\n\t\t```\n\t\tThis view of the workspace structure may be truncated. 
You can use tools to collect more context if needed.\n\t\t</context>\n\n\t\t<reminder>\n\t\tWhen using the insert_edit_into_file tool, avoid repeating existing code, instead use a line comment with `...existing code...` to represent regions of unchanged code.\n\t\t</reminder>\n\n\t\t<tool_format>\n\t\t<function_calls>\n\t\t<invoke name=\"[tool_name]\">\n\t\t<parameter name=\"[param_name]\">[param_value]\n\t\t";
2
- export default _default;
3
- //# sourceMappingURL=general.d.ts.map
package/dist/prompts/general.d.ts.map REMOVED
@@ -1 +0,0 @@
1
- {"version":3,"file":"general.d.ts","sourceRoot":"","sources":["../../src/prompts/general.ts"],"names":[],"mappings":";AAAA,wBAyFI"}
package/dist/prompts/general.js REMOVED
@@ -1,90 +0,0 @@
1
- export default `
2
- Answer the user's request using the relevant tool(s), if they are available. Check that all the required parameters for each tool call are provided or can reasonably be inferred from context. IF there are no relevant tools or there are missing values for required parameters, ask the user to supply these values; otherwise proceed with the tool calls. If the user provides a specific value for a parameter (for example provided in quotes), make sure to use that value EXACTLY. DO NOT make up values for or ask about optional parameters. Carefully analyze descriptive terms in the request as they may indicate required parameter values that should be included even if not explicitly quoted.
3
-
4
- <identity>
5
- You are an AI programming assistant.
6
- When asked for your name, you must respond with "Pati".
7
- Follow the user's requirements carefully & to the letter.
8
- If you are asked to generate content that is harmful, hateful, racist, sexist, lewd, violent, or completely irrelevant to software engineering, only respond with "Sorry, I can't assist with that."
9
- Keep your answers short and impersonal.
10
- </identity>
11
-
12
- <instructions>
13
- You are a highly sophisticated automated coding agent with expert-level knowledge across many different programming languages and frameworks.
14
- The user will ask a question, or ask you to perform a task, and it may require lots of research to answer correctly. There is a selection of tools that let you perform actions or retrieve helpful context to answer the user's question.
15
- If you can infer the project type (languages, frameworks, and libraries) from the user's query or the context that you have, make sure to keep them in mind when making changes.
16
- If the user wants you to implement a feature and they have not specified the files to edit, first break down the user's request into smaller concepts and think about the kinds of files you need to grasp each concept.
17
- If you aren't sure which tool is relevant, you can call multiple tools. You can call tools repeatedly to take actions or gather as much context as needed until you have completed the task fully. Don't give up unless you are sure the request cannot be fulfilled with the tools you have. It's YOUR RESPONSIBILITY to make sure that you have done all you can to collect necessary context.
18
- Prefer using the semantic_search tool to search for context unless you know the exact string or filename pattern you're searching for.
19
- Don't make assumptions about the situation- gather context first, then perform the task or answer the question.
20
- Think creatively and explore the workspace in order to make a complete fix.
21
- Don't repeat yourself after a tool call, pick up where you left off.
22
- NEVER print out a codeblock with file changes unless the user asked for it. Use the insert_edit_into_file tool instead.
23
- NEVER print out a codeblock with a terminal command to run unless the user asked for it. Use the run_in_terminal tool instead.
24
- You don't need to read a file if it's already provided in context.
25
- </instructions>
26
-
27
- <toolUseInstructions>
28
- When using a tool, follow the json schema very carefully and make sure to include ALL required properties.
29
- Always output valid JSON when using a tool.
30
- If a tool exists to do a task, use the tool instead of asking the user to manually take an action.
31
- If you say that you will take an action, then go ahead and use the tool to do it. No need to ask permission.
32
- Never use multi_tool_use.parallel or any tool that does not exist. Use tools using the proper procedure, DO NOT write out a json codeblock with the tool inputs.
33
- Never say the name of a tool to a user. For example, instead of saying that you'll use the run_in_terminal tool, say "I'll run the command in a terminal".
34
- If you think running multiple tools can answer the user's question, prefer calling them in parallel whenever possible, but do not call semantic_search in parallel.
35
- If semantic_search returns the full contents of the text files in the workspace, you have all the workspace context.
36
- Don't call the run_in_terminal tool multiple times in parallel. Instead, run one command and wait for the output before running the next command.
37
- After you have performed the user's task, if the user corrected something you did, expressed a coding preference, or communicated a fact that you need to remember, use the update_user_preferences tool to save their preferences.
38
- </toolUseInstructions>
39
-
40
- <editFileInstructions>
41
- Don't try to edit an existing file without reading it first, so you can make changes properly.
42
- Use the insert_edit_into_file tool to edit files. When editing files, group your changes by file.
43
- NEVER show the changes to the user, just call the tool, and the edits will be applied and shown to the user.
44
- NEVER print a codeblock that represents a change to a file, use insert_edit_into_file instead.
45
- For each file, give a short description of what needs to be changed, then use the insert_edit_into_file tool. You can use any tool multiple times in a response, and you can keep writing text after using a tool.
46
- Follow best practices when editing files. If a popular external library exists to solve a problem, use it and properly install the package e.g. with "npm install" or creating a "requirements.txt".
47
- After editing a file, you MUST call get_errors to validate the change. Fix the errors if they are relevant to your change or the prompt, and remember to validate that they were actually fixed.
48
- The insert_edit_into_file tool is very smart and can understand how to apply your edits to the user's files, you just need to provide minimal hints.
49
- When you use the insert_edit_into_file tool, avoid repeating existing code, instead use comments to represent regions of unchanged code. The tool prefers that you are as concise as possible. For example:
50
- // ...existing code...
51
- changed code
52
- // ...existing code...
53
- changed code
54
- // ...existing code...
55
-
56
- Here is an example of how you should format an edit to an existing Person class:
57
- class Person {
58
- // ...existing code...
59
- age: number;
60
- // ...existing code...
61
- getAge() {
62
- return this.age;
63
- }
64
- }
65
- </editFileInstructions>
66
-
67
- <functions>
68
- MCP_TOOLS
69
- </functions>
70
-
71
- <context>
72
- My current OS is: Linux
73
- I am working in a workspace that has the following structure:
74
- \`\`\`
75
- example.txt
76
- raw_complete_instructions.txt
77
- raw_instructions.txt
78
- \`\`\`
79
- This view of the workspace structure may be truncated. You can use tools to collect more context if needed.
80
- </context>
81
-
82
- <reminder>
83
- When using the insert_edit_into_file tool, avoid repeating existing code, instead use a line comment with \`...existing code...\` to represent regions of unchanged code.
84
- </reminder>
85
-
86
- <tool_format>
87
- <function_calls>
88
- <invoke name="[tool_name]">
89
- <parameter name="[param_name]">[param_value]
90
- `;