mcp 0.10.0 → 0.11.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 29a39b8c5bb27a2fcdc8d084dce2cd79dfada5981a18d156bc1de78604035b2e
-   data.tar.gz: d969675cb0bb08b9ee3971a9bd90891767c7b28d68fe60579cf4857a06ff3680
+   metadata.gz: 96d41b59f0a4af6d4f8b975fddb1611e9db6ce9abad93d1a49e41b0499d90e37
+   data.tar.gz: f22c32ee6cf069153e93a8a078d0e4c298216519a8237d600c5314851a910407
  SHA512:
-   metadata.gz: 3ff992068b54cc35acd43bc9e93edb3959f270a19eedf52540748344f2a94488829cbfec65dedb2772d67dd883f7536523af25d6419ded2cc71944359ff2d016
-   data.tar.gz: 0f716e3f54ca10619f95787c65dfe01fecc2356474ddf2909dfb2918f544f3fc8aa2a1a798836b2c556a9178ad842472fd79820ad83e96e855e4ca1ded4f5d8b
+   metadata.gz: 8fe956d15d21c75eff61fc3a4b71fa8dc0c6f67271324656b5a3e4a908dfe5adb714a6b3ca7f21ce8432598ef388dd9d4039d9086a7adcf391355a132f5d0c01
+   data.tar.gz: 823aa108a3eb98f086ef8b31798074ae0f3a670e8256978b8d5ac2aa8bad8b6b772038fd5bf8550725be0e3e931ef880bdc5eb89b695b7395e253dd5b2ebffcb
data/README.md CHANGED
@@ -38,6 +38,7 @@ It implements the Model Context Protocol specification, handling model context r
  - Supports resource registration and retrieval
  - Supports stdio & Streamable HTTP (including SSE) transports
  - Supports notifications for list changes (tools, prompts, resources)
+ - Supports sampling (server-to-client LLM completion requests)
 
  ### Supported Methods
 
@@ -50,6 +51,8 @@ It implements the Model Context Protocol specification, handling model context r
  - `resources/list` - Lists all registered resources and their schemas
  - `resources/read` - Retrieves a specific resource by name
  - `resources/templates/list` - Lists all registered resource templates and their schemas
+ - `completion/complete` - Returns autocompletion suggestions for prompt arguments and resource URIs
+ - `sampling/createMessage` - Requests LLM completion from the client (server-to-client)
 
  ### Custom Methods
 
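The method names above map directly onto JSON-RPC 2.0 messages. A minimal wire-format sketch for the new `completion/complete` method — the prompt name `code_review` and the `language` argument are hypothetical illustrations, not SDK fixtures:

```ruby
require "json"

# Hypothetical completion/complete request as it travels over the wire.
request = {
  jsonrpc: "2.0",
  id: 1,
  method: "completion/complete",
  params: {
    ref: { type: "ref/prompt", name: "code_review" },
    argument: { name: "language", value: "py" },
  },
}

# Round-trip through JSON to show the serialized shape the server receives.
decoded = JSON.parse(JSON.generate(request))
puts decoded["method"] # prints completion/complete
```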
@@ -102,6 +105,163 @@ end
  - Raises `MCP::Server::MethodAlreadyDefinedError` if trying to override an existing method
  - Supports the same exception reporting and instrumentation as standard methods
 
+ ### Sampling
+
+ The Model Context Protocol allows servers to request LLM completions from clients through the `sampling/createMessage` method.
+ This enables servers to leverage the client's LLM capabilities without needing direct access to AI models.
+
+ **Key Concepts:**
+
+ - **Server-to-Client Request**: Unlike typical MCP methods (client→server), sampling is initiated by the server
+ - **Client Capability**: Clients must declare the `sampling` capability during initialization
+ - **Tool Support**: When using tools in sampling requests, clients must declare the `sampling.tools` capability
+ - **Human-in-the-Loop**: Clients can implement user approval before forwarding requests to LLMs
+
+ **Usage Example (Stdio transport):**
+
+ `Server#create_sampling_message` is for single-client transports (e.g., `StdioTransport`).
+ For multi-client transports (e.g., `StreamableHTTPTransport`), use `server_context.create_sampling_message` inside tools instead,
+ which routes the request to the correct client session.
+
+ ```ruby
+ server = MCP::Server.new(name: "my_server")
+ transport = MCP::Server::Transports::StdioTransport.new(server)
+ server.transport = transport
+ ```
+
+ The client must declare the `sampling` capability during initialization; this happens automatically when the client connects.
+
+ ```ruby
+ result = server.create_sampling_message(
+   messages: [
+     { role: "user", content: { type: "text", text: "What is the capital of France?" } }
+   ],
+   max_tokens: 100,
+   system_prompt: "You are a helpful assistant.",
+   temperature: 0.7
+ )
+ ```
+
+ The result contains the LLM response:
+
+ ```ruby
+ {
+   role: "assistant",
+   content: { type: "text", text: "The capital of France is Paris." },
+   model: "claude-3-sonnet-20240307",
+   stopReason: "endTurn"
+ }
+ ```
+
+ **Parameters:**
+
+ Required:
+
+ - `messages:` (Array) - Array of message objects with `role` and `content`
+ - `max_tokens:` (Integer) - Maximum number of tokens in the response
+
+ Optional:
+
+ - `system_prompt:` (String) - System prompt for the LLM
+ - `model_preferences:` (Hash) - Model selection preferences (e.g., `{ intelligencePriority: 0.8 }`)
+ - `include_context:` (String) - Context inclusion: `"none"`, `"thisServer"`, or `"allServers"` (soft-deprecated)
+ - `temperature:` (Float) - Sampling temperature
+ - `stop_sequences:` (Array) - Sequences that stop generation
+ - `metadata:` (Hash) - Additional metadata
+ - `tools:` (Array) - Tools available to the LLM (requires the `sampling.tools` capability)
+ - `tool_choice:` (Hash) - Tool selection mode (e.g., `{ mode: "auto" }`)
+
+ **Using Sampling in Tools (works with both Stdio and HTTP transports):**
+
+ Tools that accept a `server_context:` parameter can call `create_sampling_message` on it.
+ The request is automatically routed to the correct client session.
+ Set `server.server_context = server` so that `server_context.create_sampling_message` delegates to the server:
+
+ ```ruby
+ class SummarizeTool < MCP::Tool
+   description "Summarize text using LLM"
+   input_schema(
+     properties: {
+       text: { type: "string" }
+     },
+     required: ["text"]
+   )
+
+   def self.call(text:, server_context:)
+     result = server_context.create_sampling_message(
+       messages: [
+         { role: "user", content: { type: "text", text: "Please summarize: #{text}" } }
+       ],
+       max_tokens: 500
+     )
+
+     MCP::Tool::Response.new([{
+       type: "text",
+       text: result[:content][:text]
+     }])
+   end
+ end
+
+ server = MCP::Server.new(name: "my_server", tools: [SummarizeTool])
+ server.server_context = server
+ ```
+
+ **Tool Use in Sampling:**
+
+ When tools are provided in a sampling request, the LLM can call them during generation.
+ The server must handle tool calls and continue the conversation with the tool results:
+
+ ```ruby
+ result = server.create_sampling_message(
+   messages: [
+     { role: "user", content: { type: "text", text: "What's the weather in Paris?" } }
+   ],
+   max_tokens: 1000,
+   tools: [
+     {
+       name: "get_weather",
+       description: "Get weather for a city",
+       inputSchema: {
+         type: "object",
+         properties: { city: { type: "string" } },
+         required: ["city"]
+       }
+     }
+   ],
+   tool_choice: { mode: "auto" }
+ )
+
+ if result[:stopReason] == "toolUse"
+   tool_results = result[:content].map do |tool_use|
+     weather_data = get_weather(tool_use[:input][:city])
+
+     {
+       type: "tool_result",
+       toolUseId: tool_use[:id],
+       content: [{ type: "text", text: weather_data.to_json }]
+     }
+   end
+
+   final_result = server.create_sampling_message(
+     messages: [
+       { role: "user", content: { type: "text", text: "What's the weather in Paris?" } },
+       { role: "assistant", content: result[:content] },
+       { role: "user", content: tool_results }
+     ],
+     max_tokens: 1000,
+     tools: [...]
+   )
+ end
+ ```
+
+ **Error Handling:**
+
+ - Raises `RuntimeError` if the transport is not set
+ - Raises `RuntimeError` if the client does not support the `sampling` capability
+ - Raises `RuntimeError` if `tools` are used but the client lacks the `sampling.tools` capability
+ - Raises `StandardError` if the client returns an error response
+
  ### Notifications
 
  The server supports sending notifications to clients when lists of tools, prompts, or resources change. This enables real-time updates without polling.
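The tool-use round trip documented above can be exercised standalone with a stubbed `create_sampling_message`; the stub, its canned responses, and `lookup_weather` below are hypothetical stand-ins for a live client, not SDK code:

```ruby
require "json"

# Hypothetical tool the "server" runs when the LLM asks for weather.
def lookup_weather(city)
  { city: city, temp_c: 18 } # canned data for the sketch
end

# Stub for the client round trip: first call returns a toolUse stop,
# the follow-up call (which contains an assistant turn) returns the answer.
def create_sampling_message(messages:, **_opts)
  if messages.none? { |m| m[:role] == "assistant" }
    { stopReason: "toolUse",
      content: [{ type: "tool_use", id: "t1", name: "get_weather", input: { city: "Paris" } }] }
  else
    { stopReason: "endTurn", content: { type: "text", text: "18C in Paris" } }
  end
end

messages = [{ role: "user", content: { type: "text", text: "Weather in Paris?" } }]
result = create_sampling_message(messages: messages)

if result[:stopReason] == "toolUse"
  # Run each requested tool and package the results for the next turn.
  tool_results = result[:content].map do |tool_use|
    { type: "tool_result", toolUseId: tool_use[:id],
      content: [{ type: "text", text: lookup_weather(tool_use[:input][:city]).to_json }] }
  end
  result = create_sampling_message(
    messages: messages +
      [{ role: "assistant", content: result[:content] },
       { role: "user", content: tool_results }]
  )
end

puts result[:content][:text] # prints 18C in Paris
```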
@@ -183,6 +343,53 @@ The `server_context.report_progress` method accepts:
  - `report_progress` is a no-op when no `progressToken` was provided by the client
  - Supports both numeric and string progress tokens
 
+ ### Completions
+
+ The MCP spec includes [Completions](https://modelcontextprotocol.io/specification/latest/server/utilities/completion),
+ which enable servers to provide autocompletion suggestions for prompt arguments and resource URIs.
+
+ To enable completions, declare the `completions` capability and register a handler:
+
+ ```ruby
+ server = MCP::Server.new(
+   name: "my_server",
+   prompts: [CodeReviewPrompt],
+   resource_templates: [FileTemplate],
+   capabilities: { completions: {} },
+ )
+
+ server.completion_handler do |params|
+   ref = params[:ref]
+   argument = params[:argument]
+   value = argument[:value]
+
+   case ref[:type]
+   when "ref/prompt"
+     values =
+       case argument[:name]
+       when "language"
+         ["python", "pytorch", "pyside"].select { |v| v.start_with?(value) }
+       else
+         []
+       end
+     { completion: { values: values, hasMore: false } }
+   when "ref/resource"
+     { completion: { values: [], hasMore: false } }
+   end
+ end
+ ```
+
+ The handler receives a `params` hash with:
+
+ - `ref` - The reference (`{ type: "ref/prompt", name: "..." }` or `{ type: "ref/resource", uri: "..." }`)
+ - `argument` - The argument being completed (`{ name: "...", value: "..." }`)
+ - `context` (optional) - Previously resolved arguments (`{ arguments: { ... } }`)
+
+ The handler must return a hash with a `completion` key containing `values` (an array of strings), and optionally `total` and `hasMore`.
+ The SDK automatically enforces the 100-item limit per the MCP specification.
+
+ The server validates that the referenced prompt, resource, or resource template is registered before calling the handler.
+ Requests for unknown references return an error.
+
  ### Logging
 
  The MCP Ruby SDK supports structured logging through the `notify_log_message` method, following the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging).
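The handler body in the hunk above is plain Ruby, so its filtering logic can be tested standalone; the lambda below mirrors it (the prompt name `code_review` is a hypothetical example):

```ruby
# Mirror of the completion handler from the README example above.
completion_handler = lambda do |params|
  ref = params[:ref]
  argument = params[:argument]
  value = argument[:value]

  case ref[:type]
  when "ref/prompt"
    values =
      case argument[:name]
      when "language"
        ["python", "pytorch", "pyside"].select { |v| v.start_with?(value) }
      else
        []
      end
    { completion: { values: values, hasMore: false } }
  when "ref/resource"
    { completion: { values: [], hasMore: false } }
  end
end

result = completion_handler.call({
  ref: { type: "ref/prompt", name: "code_review" },
  argument: { name: "language", value: "pyt" },
})
p result[:completion][:values] # => ["python", "pytorch"]
```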
@@ -298,11 +505,17 @@ transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, session
  ### Unsupported Features (to be implemented in future versions)
 
  - Resource subscriptions
- - Completions
  - Elicitation
 
  ### Usage
 
+ > [!IMPORTANT]
+ > `MCP::Server::Transports::StreamableHTTPTransport` stores session and SSE stream state in memory,
+ > so it must run in a single process. Use a single-process server (e.g., Puma with `workers 0`).
+ > Multi-process configurations (Unicorn, or Puma with `workers > 0`) fork separate processes that
+ > do not share memory, which breaks session management and SSE connections.
+ > Stateless mode (`stateless: true`) does not use sessions and works with any server configuration.
+
  #### Rails Controller
 
  When added to a Rails controller on a route that handles POST requests, your server will be compliant with non-streaming
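For the single-process requirement in the note above, a Puma configuration might look like the following sketch (thread counts are illustrative, not a recommendation):

```ruby
# config/puma.rb — single-process mode for StreamableHTTPTransport
workers 0     # no forked workers; session and SSE state stay in one process
threads 1, 5  # concurrency comes from threads, which share memory
```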
@@ -1056,6 +1269,7 @@ This class supports:
  - Resource reading via the `resources/read` method (`MCP::Client#read_resources`)
  - Prompt listing via the `prompts/list` method (`MCP::Client#prompts`)
  - Prompt retrieval via the `prompts/get` method (`MCP::Client#get_prompt`)
+ - Completion requests via the `completion/complete` method (`MCP::Client#complete`)
  - Automatic JSON-RPC 2.0 message formatting
  - UUID request ID generation
 
@@ -92,7 +92,7 @@ module JsonRpcHandler
      end
 
      begin
-       method = method_finder.call(method_name)
+       method = method_finder.call(method_name, id)
 
        if method.nil?
          return error_response(id: id, id_validation_pattern: id_validation_pattern, error: {
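The hunk above changes the `method_finder` contract to receive the request `id` as a second argument. A hypothetical finder illustrating the new arity (the handler table and the logging use of `id` are invented for the sketch):

```ruby
# Minimal dispatch table standing in for a real server's method registry.
handlers = { "ping" => ->(_params) { {} } }

# method_finder now takes (method_name, id); id can be used e.g. for
# logging or request correlation before the handler is looked up.
method_finder = lambda do |method_name, id|
  warn "looking up #{method_name} for request #{id}"
  handlers[method_name]
end

method = method_finder.call("ping", 42)
p method.nil? # => false
```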
data/lib/mcp/client.rb CHANGED
@@ -6,6 +6,27 @@ require_relative "client/tool"
 
  module MCP
    class Client
+     class ServerError < StandardError
+       attr_reader :code, :data
+
+       def initialize(message, code:, data: nil)
+         super(message)
+         @code = code
+         @data = data
+       end
+     end
+
+     class RequestHandlerError < StandardError
+       attr_reader :error_type, :original_error, :request
+
+       def initialize(message, request, error_type: :internal_error, original_error: nil)
+         super(message)
+         @request = request
+         @error_type = error_type
+         @original_error = original_error
+       end
+     end
+
      # Initializes a new MCP::Client instance.
      #
      # @param transport [Object] The transport object to use for communication with the server.
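Callers can now rescue the richer error class. The class below mirrors `MCP::Client::ServerError` from the hunk above so the sketch runs without the gem:

```ruby
# Stand-in mirroring MCP::Client::ServerError from the diff.
class ServerError < StandardError
  attr_reader :code, :data

  def initialize(message, code:, data: nil)
    super(message)
    @code = code
    @data = data
  end
end

begin
  # A JSON-RPC "method not found" error surfaced as a Ruby exception.
  raise ServerError.new("Method not found", code: -32601)
rescue ServerError => e
  puts "server error #{e.code}: #{e.message}" # prints server error -32601: Method not found
end
```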
@@ -33,11 +54,7 @@ module MCP
    #   puts tool.name
    # end
    def tools
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "tools/list",
-     })
+     response = request(method: "tools/list")
 
      response.dig("result", "tools")&.map do |tool|
        Tool.new(
@@ -53,11 +70,7 @@ module MCP
    #
    # @return [Array<Hash>] An array of available resources.
    def resources
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "resources/list",
-     })
+     response = request(method: "resources/list")
 
      response.dig("result", "resources") || []
    end
@@ -67,11 +80,7 @@ module MCP
    #
    # @return [Array<Hash>] An array of available resource templates.
    def resource_templates
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "resources/templates/list",
-     })
+     response = request(method: "resources/templates/list")
 
      response.dig("result", "resourceTemplates") || []
    end
@@ -81,11 +90,7 @@ module MCP
    #
    # @return [Array<Hash>] An array of available prompts.
    def prompts
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "prompts/list",
-     })
+     response = request(method: "prompts/list")
 
      response.dig("result", "prompts") || []
    end
@@ -119,12 +124,7 @@ module MCP
        params[:_meta] = { progressToken: progress_token }
      end
 
-     transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "tools/call",
-       params: params,
-     })
+     request(method: "tools/call", params: params)
    end
 
    # Reads a resource from the server by URI and returns the contents.
@@ -132,12 +132,7 @@ module MCP
    # @param uri [String] The URI of the resource to read.
    # @return [Array<Hash>] An array of resource contents (text or blob).
    def read_resource(uri:)
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "resources/read",
-       params: { uri: uri },
-     })
+     response = request(method: "resources/read", params: { uri: uri })
 
      response.dig("result", "contents") || []
    end
@@ -147,31 +142,50 @@ module MCP
    # @param name [String] The name of the prompt to get.
    # @return [Hash] A hash containing the prompt details.
    def get_prompt(name:)
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "prompts/get",
-       params: { name: name },
-     })
+     response = request(method: "prompts/get", params: { name: name })
 
      response.fetch("result", {})
    end
 
+   # Requests completion suggestions from the server for a prompt argument or resource template URI.
+   #
+   # @param ref [Hash] The reference, e.g. `{ type: "ref/prompt", name: "my_prompt" }`
+   #   or `{ type: "ref/resource", uri: "file:///{path}" }`.
+   # @param argument [Hash] The argument being completed, e.g. `{ name: "language", value: "py" }`.
+   # @param context [Hash, nil] Optional context with previously resolved arguments.
+   # @return [Hash] The completion result with `"values"`, `"hasMore"`, and optionally `"total"`.
+   def complete(ref:, argument:, context: nil)
+     params = { ref: ref, argument: argument }
+     params[:context] = context if context
+
+     response = request(method: "completion/complete", params: params)
+
+     response.dig("result", "completion") || { "values" => [], "hasMore" => false }
+   end
+
    private
 
-   def request_id
-     SecureRandom.uuid
-   end
+   def request(method:, params: nil)
+     request_body = {
+       jsonrpc: JsonRpcHandler::Version::V2_0,
+       id: request_id,
+       method: method,
+     }
+     request_body[:params] = params if params
 
-   class RequestHandlerError < StandardError
-     attr_reader :error_type, :original_error, :request
+     response = transport.send_request(request: request_body)
 
-     def initialize(message, request, error_type: :internal_error, original_error: nil)
-       super(message)
-       @request = request
-       @error_type = error_type
-       @original_error = original_error
+     # Guard with `is_a?(Hash)` because custom transports may return non-Hash values.
+     if response.is_a?(Hash) && response.key?("error")
+       error = response["error"]
+       raise ServerError.new(error["message"], code: error["code"], data: error["data"])
      end
+
+     response
+   end
+
+   def request_id
+     SecureRandom.uuid
    end
  end
end
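A sketch of what the `#complete` exchange above looks like end to end; `FakeTransport`, its canned result, and the `code_review` prompt are hypothetical stand-ins for a live server:

```ruby
require "json"

# Transport stand-in: echoes back a canned completion/complete result
# keyed to the request id, the way a real server response would be.
class FakeTransport
  def send_request(request:)
    {
      "jsonrpc" => "2.0",
      "id" => request[:id],
      "result" => { "completion" => { "values" => ["python"], "hasMore" => false } },
    }
  end
end

response = FakeTransport.new.send_request(request: {
  jsonrpc: "2.0", id: 1, method: "completion/complete",
  params: { ref: { type: "ref/prompt", name: "code_review" },
            argument: { name: "language", value: "py" } },
})

# Same extraction (and fallback) that Client#complete performs.
completion = response.dig("result", "completion") || { "values" => [], "hasMore" => false }
p completion["values"] # => ["python"]
```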
data/lib/mcp/progress.rb CHANGED
@@ -2,9 +2,10 @@
 
  module MCP
    class Progress
-     def initialize(notification_target:, progress_token:)
+     def initialize(notification_target:, progress_token:, related_request_id: nil)
        @notification_target = notification_target
        @progress_token = progress_token
+       @related_request_id = related_request_id
      end
 
      def report(progress, total: nil, message: nil)
@@ -16,6 +17,7 @@ module MCP
        progress: progress,
        total: total,
        message: message,
+       related_request_id: @related_request_id,
      )
    end
  end
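The new `related_request_id` plumbing can be sketched standalone. The class below mirrors the diff; the real `report` calls a notification method on the target (elided by the hunk boundary), for which a callable stands in here:

```ruby
# Stand-in mirroring MCP::Progress with the new related_request_id field.
class Progress
  def initialize(notification_target:, progress_token:, related_request_id: nil)
    @notification_target = notification_target
    @progress_token = progress_token
    @related_request_id = related_request_id
  end

  def report(progress, total: nil, message: nil)
    # Forward every field, including the new related_request_id.
    @notification_target.call(
      progress_token: @progress_token,
      progress: progress,
      total: total,
      message: message,
      related_request_id: @related_request_id,
    )
  end
end

sent = []
progress = Progress.new(
  notification_target: ->(**payload) { sent << payload },
  progress_token: "tok-1",
  related_request_id: "req-42",
)
progress.report(0.5, total: 1.0)
p sent.first[:related_request_id] # => "req-42"
```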
@@ -53,6 +53,41 @@ module MCP
        MCP.configuration.exception_reporter.call(e, { error: "Failed to send notification" })
        false
      end
+
+     def send_request(method, params = nil)
+       request_id = generate_request_id
+       request = { jsonrpc: "2.0", id: request_id, method: method }
+       request[:params] = params if params
+
+       begin
+         send_response(request)
+       rescue => e
+         MCP.configuration.exception_reporter.call(e, { error: "Failed to send request" })
+         raise
+       end
+
+       while @open && (line = $stdin.gets)
+         begin
+           parsed = JSON.parse(line.strip, symbolize_names: true)
+         rescue JSON::ParserError => e
+           MCP.configuration.exception_reporter.call(e, { error: "Failed to parse response" })
+           raise
+         end
+
+         if parsed[:id] == request_id && !parsed.key?(:method)
+           if parsed[:error]
+             raise StandardError, "Client returned an error for #{method} request (code: #{parsed[:error][:code]}): #{parsed[:error][:message]}"
+           end
+
+           return parsed[:result]
+         else
+           response = @session ? @session.handle(parsed) : @server.handle(parsed)
+           send_response(response) if response
+         end
+       end
+
+       raise "Transport closed while waiting for response to #{method} request."
+     end
    end
  end
end
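The correlation logic in `send_request` above — dispatch any message whose `id` doesn't match or that carries a `method` key, and return the `result` of the matching response — can be sketched over canned input lines (the input and "dispatch" list are hypothetical):

```ruby
require "json"

request_id = "req-1"

# Canned stdin traffic: an unrelated notification, then the matching response.
incoming = [
  { jsonrpc: "2.0", method: "notifications/progress", params: { progress: 0.5 } },
  { jsonrpc: "2.0", id: "req-1", result: { role: "assistant" } },
].map(&:to_json)

dispatched = []
result = nil

incoming.each do |line|
  parsed = JSON.parse(line, symbolize_names: true)
  if parsed[:id] == request_id && !parsed.key?(:method)
    result = parsed[:result] # our response: stop scanning
    break
  else
    dispatched << parsed # would be routed to the server/session handler
  end
end

p result[:role] # => "assistant"
```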