mcp 0.9.2 → 0.11.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: a0f26c1a29af5a799a750d9ad00a224f39d24638a9ad267a540313f06da674ed
-   data.tar.gz: d340e2c1f6492f74a28c6dabc6a04575285269e13a2352e3f747195d4dac1527
+   metadata.gz: 96d41b59f0a4af6d4f8b975fddb1611e9db6ce9abad93d1a49e41b0499d90e37
+   data.tar.gz: f22c32ee6cf069153e93a8a078d0e4c298216519a8237d600c5314851a910407
  SHA512:
-   metadata.gz: 7f428407e35305f1cb5a087bd7abb708df12d17abe6cb66f335c8dc2d82d1020407942dd36ad3cb200377e7d60783d87a72bab765b202ae8426cb267f5d3f46a
-   data.tar.gz: 49d835de35b9c6c124d99ffe82561a5f7554901d43971afbb82adfb31fe13d0d6d1b6782d460d9b861b6924cc43705266cb09e5de3afd9249b7caa68634fff63
+   metadata.gz: 8fe956d15d21c75eff61fc3a4b71fa8dc0c6f67271324656b5a3e4a908dfe5adb714a6b3ca7f21ce8432598ef388dd9d4039d9086a7adcf391355a132f5d0c01
+   data.tar.gz: 823aa108a3eb98f086ef8b31798074ae0f3a670e8256978b8d5ac2aa8bad8b6b772038fd5bf8550725be0e3e931ef880bdc5eb89b695b7395e253dd5b2ebffcb
data/README.md CHANGED
@@ -38,6 +38,7 @@ It implements the Model Context Protocol specification, handling model context r
  - Supports resource registration and retrieval
  - Supports stdio & Streamable HTTP (including SSE) transports
  - Supports notifications for list changes (tools, prompts, resources)
+ - Supports sampling (server-to-client LLM completion requests)
 
  ### Supported Methods
 
@@ -50,6 +51,8 @@ It implements the Model Context Protocol specification, handling model context r
  - `resources/list` - Lists all registered resources and their schemas
  - `resources/read` - Retrieves a specific resource by name
  - `resources/templates/list` - Lists all registered resource templates and their schemas
+ - `completion/complete` - Returns autocompletion suggestions for prompt arguments and resource URIs
+ - `sampling/createMessage` - Requests LLM completion from the client (server-to-client)
 
  ### Custom Methods
 
@@ -102,6 +105,163 @@ end
  - Raises `MCP::Server::MethodAlreadyDefinedError` if trying to override an existing method
  - Supports the same exception reporting and instrumentation as standard methods
 
+ ### Sampling
+
+ The Model Context Protocol allows servers to request LLM completions from clients through the `sampling/createMessage` method.
+ This enables servers to leverage the client's LLM capabilities without needing direct access to AI models.
+
+ **Key Concepts:**
+
+ - **Server-to-Client Request**: Unlike typical MCP methods (client→server), sampling is initiated by the server
+ - **Client Capability**: Clients must declare the `sampling` capability during initialization
+ - **Tool Support**: When using tools in sampling requests, clients must declare the `sampling.tools` capability
+ - **Human-in-the-Loop**: Clients can implement user approval before forwarding requests to LLMs
+
+ **Usage Example (Stdio transport):**
+
+ `Server#create_sampling_message` is for single-client transports (e.g., `StdioTransport`).
+ For multi-client transports (e.g., `StreamableHTTPTransport`), use `server_context.create_sampling_message` inside tools instead,
+ which routes the request to the correct client session.
+
+ ```ruby
+ server = MCP::Server.new(name: "my_server")
+ transport = MCP::Server::Transports::StdioTransport.new(server)
+ server.transport = transport
+ ```
+
+ The client must declare the `sampling` capability during initialization; this happens automatically when the client connects.
+
+ ```ruby
+ result = server.create_sampling_message(
+   messages: [
+     { role: "user", content: { type: "text", text: "What is the capital of France?" } }
+   ],
+   max_tokens: 100,
+   system_prompt: "You are a helpful assistant.",
+   temperature: 0.7
+ )
+ ```
+
+ The result contains the LLM response:
+
+ ```ruby
+ {
+   role: "assistant",
+   content: { type: "text", text: "The capital of France is Paris." },
+   model: "claude-3-sonnet-20240307",
+   stopReason: "endTurn"
+ }
+ ```
+
+ **Parameters:**
+
+ Required:
+
+ - `messages:` (Array) - Array of message objects with `role` and `content`
+ - `max_tokens:` (Integer) - Maximum tokens in the response
+
+ Optional:
+
+ - `system_prompt:` (String) - System prompt for the LLM
+ - `model_preferences:` (Hash) - Model selection preferences (e.g., `{ intelligencePriority: 0.8 }`)
+ - `include_context:` (String) - Context inclusion: `"none"`, `"thisServer"`, or `"allServers"` (soft-deprecated)
+ - `temperature:` (Float) - Sampling temperature
+ - `stop_sequences:` (Array) - Sequences that stop generation
+ - `metadata:` (Hash) - Additional metadata
+ - `tools:` (Array) - Tools available to the LLM (requires `sampling.tools` capability)
+ - `tool_choice:` (Hash) - Tool selection mode (e.g., `{ mode: "auto" }`)
+
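For orientation, these keyword arguments map onto the camelCase fields of the `sampling/createMessage` params defined by the MCP specification. A minimal sketch of that mapping — the `build_sampling_params` helper is illustrative, not part of the gem:

```ruby
# Illustrative helper (not a gem API): build JSON-RPC params for a
# sampling/createMessage request from Ruby-style keyword arguments.
def build_sampling_params(messages:, max_tokens:, system_prompt: nil,
                          temperature: nil, stop_sequences: nil)
  params = { messages: messages, maxTokens: max_tokens }
  params[:systemPrompt]  = system_prompt  if system_prompt
  params[:temperature]   = temperature    if temperature
  params[:stopSequences] = stop_sequences if stop_sequences
  params
end

params = build_sampling_params(
  messages: [{ role: "user", content: { type: "text", text: "Hi" } }],
  max_tokens: 100,
  system_prompt: "You are terse.",
)
```

Optional arguments that were not passed (here `temperature` and `stop_sequences`) are simply omitted from the params hash.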
+ **Using Sampling in Tools (works with both Stdio and HTTP transports):**
+
+ Tools that accept a `server_context:` parameter can call `create_sampling_message` on it.
+ The request is automatically routed to the correct client session.
+ Set `server.server_context = server` so that `server_context.create_sampling_message` delegates to the server:
+
+ ```ruby
+ class SummarizeTool < MCP::Tool
+   description "Summarize text using LLM"
+   input_schema(
+     properties: {
+       text: { type: "string" }
+     },
+     required: ["text"]
+   )
+
+   def self.call(text:, server_context:)
+     result = server_context.create_sampling_message(
+       messages: [
+         { role: "user", content: { type: "text", text: "Please summarize: #{text}" } }
+       ],
+       max_tokens: 500
+     )
+
+     MCP::Tool::Response.new([{
+       type: "text",
+       text: result[:content][:text]
+     }])
+   end
+ end
+
+ server = MCP::Server.new(name: "my_server", tools: [SummarizeTool])
+ server.server_context = server
+ ```
+
+ **Tool Use in Sampling:**
+
+ When tools are provided in a sampling request, the LLM can call them during generation.
+ The server must handle tool calls and continue the conversation with tool results:
+
+ ```ruby
+ result = server.create_sampling_message(
+   messages: [
+     { role: "user", content: { type: "text", text: "What's the weather in Paris?" } }
+   ],
+   max_tokens: 1000,
+   tools: [
+     {
+       name: "get_weather",
+       description: "Get weather for a city",
+       inputSchema: {
+         type: "object",
+         properties: { city: { type: "string" } },
+         required: ["city"]
+       }
+     }
+   ],
+   tool_choice: { mode: "auto" }
+ )
+
+ if result[:stopReason] == "toolUse"
+   tool_results = result[:content].map do |tool_use|
+     weather_data = get_weather(tool_use[:input][:city])
+
+     {
+       type: "tool_result",
+       toolUseId: tool_use[:id],
+       content: [{ type: "text", text: weather_data.to_json }]
+     }
+   end
+
+   final_result = server.create_sampling_message(
+     messages: [
+       { role: "user", content: { type: "text", text: "What's the weather in Paris?" } },
+       { role: "assistant", content: result[:content] },
+       { role: "user", content: tool_results }
+     ],
+     max_tokens: 1000,
+     tools: [...]
+   )
+ end
+ ```
+
+ **Error Handling:**
+
+ - Raises `RuntimeError` if the transport is not set
+ - Raises `RuntimeError` if the client does not support the `sampling` capability
+ - Raises `RuntimeError` if `tools` are used but the client lacks the `sampling.tools` capability
+ - Raises `StandardError` if the client returns an error response
+
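One way to consume these errors inside a tool is to rescue them and return a degraded result instead of crashing the request. A sketch — the `safe_sample` wrapper is hypothetical, not a gem API; note `RuntimeError` must be rescued before its parent `StandardError`:

```ruby
# Hypothetical wrapper: degrade gracefully instead of letting a failed
# sampling request crash the tool call.
def safe_sample(target, **kwargs)
  target.create_sampling_message(**kwargs)
rescue RuntimeError => e
  # Transport not set, or the client lacks the required capability.
  { error: "sampling unavailable: #{e.message}" }
rescue StandardError => e
  # The client returned an error response.
  { error: "client error: #{e.message}" }
end
```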
  ### Notifications
 
  The server supports sending notifications to clients when lists of tools, prompts, or resources change. This enables real-time updates without polling.
@@ -113,9 +273,17 @@ The server provides the following notification methods:
  - `notify_tools_list_changed` - Send a notification when the tools list changes
  - `notify_prompts_list_changed` - Send a notification when the prompts list changes
  - `notify_resources_list_changed` - Send a notification when the resources list changes
- - `notify_progress` - Send a progress notification for long-running operations
  - `notify_log_message` - Send a structured logging notification message
 
+ #### Session Scoping
+
+ When using Streamable HTTP transport with multiple clients, each client connection gets its own session. Notifications are scoped as follows:
+
+ - **`report_progress`** and **`notify_log_message`** called via `server_context` inside a tool handler are automatically sent only to the requesting client.
+   No extra configuration is needed.
+ - **`notify_tools_list_changed`**, **`notify_prompts_list_changed`**, and **`notify_resources_list_changed`** are always broadcast to all connected clients,
+   as they represent server-wide state changes. These should be called on the `server` instance directly.
+
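The scoping rules above amount to a simple routing decision; a standalone sketch (the `recipients` helper and the session objects are illustrative — only the notification method names come from the MCP spec):

```ruby
# Illustrative routing rule (not gem internals): which sessions receive a
# given notification when multiple clients are connected.
LIST_CHANGED_METHODS = [
  "notifications/tools/list_changed",
  "notifications/prompts/list_changed",
  "notifications/resources/list_changed",
].freeze

def recipients(method, all_sessions, requesting_session)
  if LIST_CHANGED_METHODS.include?(method)
    all_sessions            # server-wide state change: broadcast to everyone
  else
    [requesting_session]    # progress / log message: scoped to the requester
  end
end
```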
  #### Notification Format
 
  Notifications follow the JSON-RPC 2.0 specification and use these method names:
@@ -169,25 +337,58 @@ The `server_context.report_progress` method accepts:
  - `total:` (optional) — total expected value, so clients can display a percentage
  - `message:` (optional) — human-readable status message
 
- #### Server-Side: Direct `notify_progress` Usage
+ **Key Features:**
 
- You can also call `notify_progress` directly on the server instance:
+ - Tools report progress via `server_context.report_progress`
+ - `report_progress` is a no-op when no `progressToken` was provided by the client
+ - Supports both numeric and string progress tokens
+
+ ### Completions
+
+ The MCP spec includes [Completions](https://modelcontextprotocol.io/specification/latest/server/utilities/completion),
+ which enable servers to provide autocompletion suggestions for prompt arguments and resource URIs.
+
+ To enable completions, declare the `completions` capability and register a handler:
 
  ```ruby
- server.notify_progress(
-   progress_token: "token-123",
-   progress: 50,
-   total: 100, # optional
-   message: "halfway" # optional
+ server = MCP::Server.new(
+   name: "my_server",
+   prompts: [CodeReviewPrompt],
+   resource_templates: [FileTemplate],
+   capabilities: { completions: {} },
  )
+
+ server.completion_handler do |params|
+   ref = params[:ref]
+   argument = params[:argument]
+   value = argument[:value]
+
+   case ref[:type]
+   when "ref/prompt"
+     values = case argument[:name]
+     when "language"
+       ["python", "pytorch", "pyside"].select { |v| v.start_with?(value) }
+     else
+       []
+     end
+     { completion: { values: values, hasMore: false } }
+   when "ref/resource"
+     { completion: { values: [], hasMore: false } }
+   end
+ end
  ```
 
- **Key Features:**
+ The handler receives a `params` hash with:
 
- - Tools report progress via `server_context.report_progress`
- - `report_progress` is a no-op when no `progressToken` was provided by the client
- - `notify_progress` is a no-op when no transport is configured
- - Supports both numeric and string progress tokens
+ - `ref` - The reference (`{ type: "ref/prompt", name: "..." }` or `{ type: "ref/resource", uri: "..." }`)
+ - `argument` - The argument being completed (`{ name: "...", value: "..." }`)
+ - `context` (optional) - Previously resolved arguments (`{ arguments: { ... } }`)
+
+ The handler must return a hash with a `completion` key containing `values` (array of strings), and optionally `total` and `hasMore`.
+ The SDK automatically enforces the 100-item limit per the MCP specification.
+
+ The server validates that the referenced prompt, resource, or resource template is registered before calling the handler.
+ Requests for unknown references return an error.
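The prefix-filtering branch of the handler above can be exercised without a running server; a standalone sketch of the same logic:

```ruby
# Standalone version of the "language" branch of the completion handler,
# so the prefix filtering can be checked in isolation.
complete_language = lambda do |value|
  values = ["python", "pytorch", "pyside"].select { |v| v.start_with?(value) }
  { completion: { values: values, hasMore: false } }
end

result = complete_language.call("py")
```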
 
  ### Logging
 
@@ -293,14 +494,28 @@ Set `stateless: true` in `MCP::Server::Transports::StreamableHTTPTransport.new`
  transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, stateless: true)
  ```
 
+ By default, sessions do not expire. To mitigate session hijacking risks, you can set a `session_idle_timeout` (in seconds).
+ When configured, sessions that receive no HTTP requests for this duration are automatically expired and cleaned up:
+
+ ```ruby
+ # Session timeout of 30 minutes
+ transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, session_idle_timeout: 1800)
+ ```
+
  ### Unsupported Features (to be implemented in future versions)
 
  - Resource subscriptions
- - Completions
  - Elicitation
 
  ### Usage
 
+ > [!IMPORTANT]
+ > `MCP::Server::Transports::StreamableHTTPTransport` stores session and SSE stream state in memory,
+ > so it must run in a single process. Use a single-process server (e.g., Puma with `workers 0`).
+ > Multi-process configurations (Unicorn, or Puma with `workers > 0`) fork separate processes that
+ > do not share memory, which breaks session management and SSE connections.
+ > Stateless mode (`stateless: true`) does not use sessions and works with any server configuration.
+
  #### Rails Controller
 
  When added to a Rails controller on a route that handles POST requests, your server will be compliant with non-streaming
@@ -1054,6 +1269,7 @@ This class supports:
  - Resource reading via the `resources/read` method (`MCP::Client#read_resource`)
  - Prompt listing via the `prompts/list` method (`MCP::Client#prompts`)
  - Prompt retrieval via the `prompts/get` method (`MCP::Client#get_prompt`)
+ - Completion requests via the `completion/complete` method (`MCP::Client#complete`)
  - Automatic JSON-RPC 2.0 message formatting
  - UUID request ID generation
 
 
@@ -92,7 +92,7 @@ module JsonRpcHandler
92
92
  end
93
93
 
94
94
  begin
95
- method = method_finder.call(method_name)
95
+ method = method_finder.call(method_name, id)
96
96
 
97
97
  if method.nil?
98
98
  return error_response(id: id, id_validation_pattern: id_validation_pattern, error: {
data/lib/mcp/client.rb CHANGED
@@ -6,6 +6,27 @@ require_relative "client/tool"
 
  module MCP
    class Client
+     class ServerError < StandardError
+       attr_reader :code, :data
+
+       def initialize(message, code:, data: nil)
+         super(message)
+         @code = code
+         @data = data
+       end
+     end
+
+     class RequestHandlerError < StandardError
+       attr_reader :error_type, :original_error, :request
+
+       def initialize(message, request, error_type: :internal_error, original_error: nil)
+         super(message)
+         @request = request
+         @error_type = error_type
+         @original_error = original_error
+       end
+     end
+
      # Initializes a new MCP::Client instance.
      #
      # @param transport [Object] The transport object to use for communication with the server.
@@ -33,11 +54,7 @@ module MCP
    #   puts tool.name
    # end
    def tools
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "tools/list",
-     })
+     response = request(method: "tools/list")
 
      response.dig("result", "tools")&.map do |tool|
        Tool.new(
@@ -53,11 +70,7 @@ module MCP
    #
    # @return [Array<Hash>] An array of available resources.
    def resources
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "resources/list",
-     })
+     response = request(method: "resources/list")
 
      response.dig("result", "resources") || []
    end
@@ -67,11 +80,7 @@ module MCP
    #
    # @return [Array<Hash>] An array of available resource templates.
    def resource_templates
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "resources/templates/list",
-     })
+     response = request(method: "resources/templates/list")
 
      response.dig("result", "resourceTemplates") || []
    end
@@ -81,11 +90,7 @@ module MCP
    #
    # @return [Array<Hash>] An array of available prompts.
    def prompts
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "prompts/list",
-     })
+     response = request(method: "prompts/list")
 
      response.dig("result", "prompts") || []
    end
@@ -119,12 +124,7 @@ module MCP
        params[:_meta] = { progressToken: progress_token }
      end
 
-     transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "tools/call",
-       params: params,
-     })
+     request(method: "tools/call", params: params)
    end
 
    # Reads a resource from the server by URI and returns the contents.
@@ -132,12 +132,7 @@ module MCP
    # @param uri [String] The URI of the resource to read.
    # @return [Array<Hash>] An array of resource contents (text or blob).
    def read_resource(uri:)
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "resources/read",
-       params: { uri: uri },
-     })
+     response = request(method: "resources/read", params: { uri: uri })
 
      response.dig("result", "contents") || []
    end
@@ -147,31 +142,50 @@ module MCP
    # @param name [String] The name of the prompt to get.
    # @return [Hash] A hash containing the prompt details.
    def get_prompt(name:)
-     response = transport.send_request(request: {
-       jsonrpc: JsonRpcHandler::Version::V2_0,
-       id: request_id,
-       method: "prompts/get",
-       params: { name: name },
-     })
+     response = request(method: "prompts/get", params: { name: name })
 
      response.fetch("result", {})
    end
 
+   # Requests completion suggestions from the server for a prompt argument or resource template URI.
+   #
+   # @param ref [Hash] The reference, e.g. `{ type: "ref/prompt", name: "my_prompt" }`
+   #   or `{ type: "ref/resource", uri: "file:///{path}" }`.
+   # @param argument [Hash] The argument being completed, e.g. `{ name: "language", value: "py" }`.
+   # @param context [Hash, nil] Optional context with previously resolved arguments.
+   # @return [Hash] The completion result with `"values"`, `"hasMore"`, and optionally `"total"`.
+   def complete(ref:, argument:, context: nil)
+     params = { ref: ref, argument: argument }
+     params[:context] = context if context
+
+     response = request(method: "completion/complete", params: params)
+
+     response.dig("result", "completion") || { "values" => [], "hasMore" => false }
+   end
+
    private
 
-   def request_id
-     SecureRandom.uuid
-   end
-
-   class RequestHandlerError < StandardError
-     attr_reader :error_type, :original_error, :request
-
-     def initialize(message, request, error_type: :internal_error, original_error: nil)
-       super(message)
-       @request = request
-       @error_type = error_type
-       @original_error = original_error
-     end
-   end
+   def request(method:, params: nil)
+     request_body = {
+       jsonrpc: JsonRpcHandler::Version::V2_0,
+       id: request_id,
+       method: method,
+     }
+     request_body[:params] = params if params
+
+     response = transport.send_request(request: request_body)
+
+     # Guard with `is_a?(Hash)` because custom transports may return non-Hash values.
+     if response.is_a?(Hash) && response.key?("error")
+       error = response["error"]
+       raise ServerError.new(error["message"], code: error["code"], data: error["data"])
+     end
+
+     response
+   end
+
+   def request_id
+     SecureRandom.uuid
+   end
  end
end
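The error guard added to the private `request` helper can be sketched in isolation: any JSON-RPC response carrying an `"error"` member is converted into a raised exception. The `check_response!` name is illustrative; the exception class mirrors the gem's `MCP::Client::ServerError`:

```ruby
# Standalone sketch of the client's error guard.
class ServerError < StandardError
  attr_reader :code, :data

  def initialize(message, code:, data: nil)
    super(message)
    @code = code
    @data = data
  end
end

# Raise if the JSON-RPC response carries an "error" member; otherwise pass it through.
def check_response!(response)
  if response.is_a?(Hash) && response.key?("error")
    error = response["error"]
    raise ServerError.new(error["message"], code: error["code"], data: error["data"])
  end
  response
end
```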
data/lib/mcp/progress.rb CHANGED
@@ -2,19 +2,22 @@
 
  module MCP
    class Progress
-     def initialize(server:, progress_token:)
-       @server = server
+     def initialize(notification_target:, progress_token:, related_request_id: nil)
+       @notification_target = notification_target
        @progress_token = progress_token
+       @related_request_id = related_request_id
      end
 
      def report(progress, total: nil, message: nil)
        return unless @progress_token
+       return unless @notification_target
 
-       @server.notify_progress(
+       @notification_target.notify_progress(
          progress_token: @progress_token,
          progress: progress,
          total: total,
          message: message,
+         related_request_id: @related_request_id,
        )
      end
    end
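The guard clauses above make `report` a silent no-op when either the progress token or the notification target is missing; a pared-down, standalone sketch of that behaviour (the class and the recording target are illustrative):

```ruby
# Pared-down sketch of the no-op guards in Progress#report.
class ProgressSketch
  def initialize(notification_target:, progress_token:)
    @notification_target = notification_target
    @progress_token = progress_token
  end

  def report(progress)
    return unless @progress_token       # client sent no progressToken
    return unless @notification_target  # nothing to deliver through

    @notification_target.notify_progress(progress_token: @progress_token, progress: progress)
  end
end

sent = []
target = Object.new
target.define_singleton_method(:notify_progress) { |**kwargs| sent << kwargs }

ProgressSketch.new(notification_target: target, progress_token: "t1").report(50)
ProgressSketch.new(notification_target: target, progress_token: nil).report(99)  # no-op
```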
@@ -10,17 +10,19 @@ module MCP
    STATUS_INTERRUPTED = Signal.list["INT"] + 128
 
    def initialize(server)
-     @server = server
+     super(server)
      @open = false
+     @session = nil
      $stdin.set_encoding("UTF-8")
      $stdout.set_encoding("UTF-8")
-     super
    end
 
    def open
      @open = true
+     @session = ServerSession.new(server: @server, transport: self)
      while @open && (line = $stdin.gets)
-       handle_json_request(line.strip)
+       response = @session.handle_json(line.strip)
+       send_response(response) if response
      end
    rescue Interrupt
      warn("\nExiting...")
@@ -51,6 +53,41 @@ module MCP
      MCP.configuration.exception_reporter.call(e, { error: "Failed to send notification" })
      false
    end
+
+   def send_request(method, params = nil)
+     request_id = generate_request_id
+     request = { jsonrpc: "2.0", id: request_id, method: method }
+     request[:params] = params if params
+
+     begin
+       send_response(request)
+     rescue => e
+       MCP.configuration.exception_reporter.call(e, { error: "Failed to send request" })
+       raise
+     end
+
+     while @open && (line = $stdin.gets)
+       begin
+         parsed = JSON.parse(line.strip, symbolize_names: true)
+       rescue JSON::ParserError => e
+         MCP.configuration.exception_reporter.call(e, { error: "Failed to parse response" })
+         raise
+       end
+
+       if parsed[:id] == request_id && !parsed.key?(:method)
+         if parsed[:error]
+           raise StandardError, "Client returned an error for #{method} request (code: #{parsed[:error][:code]}): #{parsed[:error][:message]}"
+         end
+
+         return parsed[:result]
+       else
+         response = @session ? @session.handle(parsed) : @server.handle(parsed)
+         send_response(response) if response
+       end
+     end
+
+     raise "Transport closed while waiting for response to #{method} request."
+   end
    end
  end
end
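The matching rule inside that read loop — take a message as the awaited response only when its `id` matches and it carries no `method` member (client requests and notifications always carry a method; responses never do) — can be sketched standalone (helper name illustrative):

```ruby
# Illustrative predicate for the read loop's matching rule above.
def response_for?(parsed, request_id)
  parsed[:id] == request_id && !parsed.key?(:method)
end
```

Anything that fails this check is dispatched back to the session handler instead, so interleaved client requests are still served while the transport waits.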