async-grpc 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/design.md ADDED
@@ -0,0 +1,1121 @@
1
+ # Async::GRPC Design
2
+
3
+ Client and server implementation for gRPC using Async, built on top of `protocol-grpc`.
4
+
5
+ ## Overview
6
+
7
+ `async-grpc` provides the networking and concurrency layer for gRPC:
8
+ - **`Async::GRPC::Client`** - wraps `Async::HTTP::Client` for making gRPC calls
9
+ - **Server side** - uses `Protocol::GRPC::Middleware` directly with `Async::HTTP::Server` (no dedicated server class needed)
10
+ - Built on top of `protocol-grpc` for protocol abstractions
11
+ - Uses `async-http` for HTTP/2 transport
12
+
13
+ ## Architecture
14
+
15
+ ```
16
+ ┌─────────────────────────────────────────────────────────────┐
17
+ │ async-grpc │
18
+ │ (Client/Server implementations with Async concurrency) │
19
+ ├─────────────────────────────────────────────────────────────┤
20
+ │ protocol-grpc │
21
+ │ (Protocol abstractions: framing, headers, status codes) │
22
+ ├─────────────────────────────────────────────────────────────┤
23
+ │ protocol-http │
24
+ │ (HTTP abstractions: Request, Response, Headers, Body) │
25
+ ├─────────────────────────────────────────────────────────────┤
26
+ │ async-http / protocol-http2 │
27
+ │ (HTTP/2 transport and connection management) │
28
+ └─────────────────────────────────────────────────────────────┘
29
+ ```
30
+
31
+ ## Design Pattern: Body Wrapping
32
+
33
+ Following the pattern from `async-rest`, we wrap response bodies with rich parsing using `Protocol::HTTP::Body::Wrapper`:
34
+
35
+ ```ruby
36
+ # In protocol-grpc:
37
+ class Protocol::GRPC::Body::Readable < Protocol::HTTP::Body::Wrapper
38
+ # gRPC bodies are ALWAYS message-framed, so this is the standard readable body
39
+ def initialize(body, message_class: nil, encoding: nil)
40
+ super(body)
41
+ @message_class = message_class
42
+ @encoding = encoding
43
+ @buffer = String.new.force_encoding(Encoding::BINARY)
44
+ end
45
+
46
+ # Override read to return decoded messages instead of raw chunks
47
+ def read
48
+ # Read 5-byte prefix + message data
49
+ # Decompress if needed
50
+ # Decode with message_class if provided
51
+ end
52
+ end
53
+
54
+ # In async-grpc, wrap responses transparently:
55
+ response = client.call(request)
56
+ response.body = Protocol::GRPC::Body::Readable.new(
57
+ response.body,
58
+ message_class: HelloReply,
59
+ encoding: response.headers["grpc-encoding"]
60
+ )
61
+
62
+ # Now reading is natural - standard Protocol::HTTP::Body interface:
63
+ message = response.body.read # Returns decoded HelloReply message!
64
+ ```
65
+
66
+ This provides:
67
+ - **Transparent wrapping**: Response body is automatically enhanced
68
+ - **Lazy parsing**: Messages are decoded on demand
69
+ - **Streaming support**: Can iterate over messages naturally
70
+ - **Type safety**: Message class determines parsing
71
+
72
+ ### Important: Homogeneous Message Types
73
+
74
+ **In gRPC, all messages in a stream are always the same type.** This is defined in the `.proto` file:
75
+
76
+ ```protobuf
77
+ // All responses are HelloReply
78
+ rpc StreamNumbers(HelloRequest) returns (stream HelloReply);
79
+ ```
80
+
81
+ This constraint simplifies the API significantly:
82
+ - You specify `message_class` **once** when wrapping the body
83
+ - All subsequent `read()` calls decode to that same class
84
+ - No need to check message types or handle polymorphism
85
+ - Standard `Protocol::HTTP::Body` interface (`read`, `each`) just works!
86
+
87
+ The four RPC patterns:
88
+ - **Unary**: 1 request of type A → 1 response of type B
89
+ - **Server streaming**: 1 request of type A → N responses of type B (all type B)
90
+ - **Client streaming**: N requests of type A (all type A) → 1 response of type B
91
+ - **Bidirectional**: N requests of type A (all type A) ↔ M responses of type B (all type B)
92
+
93
+ This is different from protocols like WebSockets where you might receive different message types in the same stream.
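+
+ For example, a server-streaming read loop never needs to branch on the message type. A minimal sketch, assuming the response body has been wrapped as shown above:
+
+ ```ruby
+ # Every yielded message is a HelloReply; no type checks or dispatch required.
+ response.body.each do |reply|
+   puts reply.message
+ end
+ ```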
94
+
95
+ ## Core Components Summary
96
+
97
+ `async-grpc` provides the networking and concurrency layer for gRPC:
98
+
99
+ 1. **Client** - `Async::GRPC::Client` (wraps `Async::HTTP::Client`)
100
+ - Four RPC methods: `unary`, `server_streaming`, `client_streaming`, `bidirectional_streaming`
101
+ - Binary variants for channel adapter: `*_binary` methods
102
+ - Automatic body wrapping with `Protocol::GRPC::Body::Readable`
103
+
104
+ 2. **Server** - Use `Protocol::GRPC::Middleware` with `Async::HTTP::Server`
105
+ - No separate Async::GRPC::Server needed!
106
+ - Protocol middleware handles dispatch
107
+ - Async::HTTP::Server handles connections
108
+
109
+ 3. **Server Context** - `Async::GRPC::ServerCall` (extends `Protocol::GRPC::Call`)
110
+ - Access request metadata
111
+ - Set response metadata/trailers
112
+ - Deadline tracking
113
+ - Cancellation support
114
+
115
+ 4. **Interceptors** - `ClientInterceptor` and `ServerInterceptor`
116
+ - Wrap RPC calls
117
+ - Add cross-cutting concerns (logging, auth, metrics)
118
+
119
+ 5. **Channel Adapter** - `Async::GRPC::ChannelAdapter`
120
+ - Compatible with `GRPC::Core::Channel` interface
121
+ - Enables drop-in replacement for standard gRPC
122
+ - Google Cloud library integration
123
+
124
+ **Key Patterns:**
125
+ - Response bodies automatically wrapped with `Protocol::GRPC::Body::Readable`
126
+ - Standard `read`/`write`/`each` methods (not `read_message`/`write_message`)
127
+ - Compression handled via `encoding:` parameter
128
+
129
+ ### Detailed Components
130
+
131
+ ### 1. `Async::GRPC::Client`
132
+
133
+ Wraps `Async::HTTP::Client` to provide gRPC-specific call methods:
134
+
135
+ ```ruby
136
+ module Async
137
+ module GRPC
138
+ class Client
139
+ # @parameter endpoint [Async::HTTP::Endpoint] The server endpoint
140
+ # @parameter authority [String] The server authority for requests
141
+ def initialize(endpoint, authority: nil)
142
+ @client = Async::HTTP::Client.new(endpoint, protocol: Async::HTTP::Protocol::HTTP2)
143
+ @authority = authority || endpoint.authority
144
+ end
145
+
146
+ # Make a unary RPC call
147
+ # @parameter service [String] Service name, e.g., "my_service.Greeter"
148
+ # @parameter method [String] Method name, e.g., "SayHello"
149
+ # @parameter request [Object] Protobuf request message
150
+ # @parameter response_class [Class] Expected response message class
151
+ # @parameter metadata [Hash] Custom metadata
152
+ # @parameter timeout [Numeric] Deadline for the request
153
+ # @returns [Object] Protobuf response message
154
+ def unary(service, method, request, response_class: nil, metadata: {}, timeout: nil)
155
+ # Build request body with single message
156
+ body = Protocol::GRPC::Body::Writable.new
157
+ body.write(request)
158
+ body.close_write
159
+
160
+ # Build HTTP request
161
+ http_request = build_request(service, method, body, metadata: metadata, timeout: timeout)
162
+
163
+ # Make the call
164
+ http_response = @client.call(http_request)
165
+
166
+ # Wrap response body with gRPC message parser
167
+ # This follows async-rest pattern of wrapping body for rich parsing
168
+ wrap_response_body(http_response, response_class)
169
+
170
+ # Read single message - standard Protocol::HTTP::Body interface
171
+ # The wrapper makes .read return decoded messages instead of raw chunks
172
+ message = http_response.body.read
173
+
174
+ # Check status
175
+ check_status!(http_response)
176
+
177
+ message
178
+ end
179
+
180
+ # Make a server streaming RPC call
181
+ # @parameter service [String] Service name
182
+ # @parameter method [String] Method name
183
+ # @parameter request [Object] Protobuf request message
184
+ # @parameter response_class [Class] Expected response message class
185
+ # @yields {|response| ...} Each response message
186
+ def server_streaming(service, method, request, response_class: nil, metadata: {}, timeout: nil, &block)
187
+ return enum_for(:server_streaming, service, method, request, response_class: response_class, metadata: metadata, timeout: timeout) unless block_given?
188
+
189
+ # Build request body with single message
190
+ body = Protocol::GRPC::Body::Writable.new
191
+ body.write(request)
192
+ body.close_write
193
+
194
+ # Build HTTP request
195
+ http_request = build_request(service, method, body, metadata: metadata, timeout: timeout)
196
+
197
+ # Make the call
198
+ http_response = @client.call(http_request)
199
+
200
+ # Wrap response body
201
+ wrap_response_body(http_response, response_class)
202
+
203
+ # Stream responses - standard Protocol::HTTP::Body#each
204
+ # The wrapper makes each iterate decoded messages
205
+ http_response.body.each do |message|
206
+ yield message
207
+ end
208
+
209
+ # Check status
210
+ check_status!(http_response)
211
+ end
212
+
213
+ # Make a client streaming RPC call
214
+ # @parameter service [String] Service name
215
+ # @parameter method [String] Method name
216
+ # @parameter response_class [Class] Expected response message class
217
+ # @yields {|stream| ...} Block that writes request messages to stream
218
+ # @returns [Object] Protobuf response message
219
+ def client_streaming(service, method, response_class: nil, metadata: {}, timeout: nil, &block)
220
+ # Build request body
221
+ body = Protocol::GRPC::Body::Writable.new
222
+
223
+ # Build HTTP request
224
+ http_request = build_request(service, method, body, metadata: metadata, timeout: timeout)
225
+
226
+ # Start the call in a task
227
+ response_task = Async do
228
+ @client.call(http_request)
229
+ end
230
+
231
+ # Yield the body writer to the caller
232
+ begin
233
+ yield body
234
+ ensure
235
+ body.close_write
236
+ end
237
+
238
+ # Wait for response
239
+ http_response = response_task.wait
240
+
241
+ # Wrap response body
242
+ wrap_response_body(http_response, response_class)
243
+
244
+ # Read single response
245
+ message = http_response.body.read
246
+
247
+ # Check status
248
+ check_status!(http_response)
249
+
250
+ message
251
+ end
252
+
253
+ # Make a bidirectional streaming RPC call
254
+ # @parameter service [String] Service name
255
+ # @parameter method [String] Method name
256
+ # @parameter response_class [Class] Expected response message class
257
+ # @yields {|input, output| ...} Block with input stream and output enumerator
258
+ def bidirectional_streaming(service, method, response_class: nil, metadata: {}, timeout: nil)
259
+ # Build request body
260
+ body = Protocol::GRPC::Body::Writable.new
261
+
262
+ # Build HTTP request
263
+ http_request = build_request(service, method, body, metadata: metadata, timeout: timeout)
264
+
265
+ # Start the call
266
+ http_response = @client.call(http_request)
267
+
268
+ # Wrap response body
269
+ wrap_response_body(http_response, response_class)
270
+
271
+ # Create output enumerator for reading responses
272
+ # Standard Protocol::HTTP::Body#each returns enumerator of messages
273
+ output = http_response.body.each
274
+
275
+ # Yield input writer and output reader to caller
276
+ yield body, output
277
+
278
+ # Ensure body is closed
279
+ body.close_write unless body.closed?
280
+
281
+ # Check status
282
+ check_status!(http_response)
283
+ end
284
+
285
+ # Close the underlying HTTP client
286
+ def close
287
+ @client.close
288
+ end
289
+
290
+ private
291
+
292
+ def build_request(service, method, body, metadata: {}, timeout: nil)
293
+ path = Protocol::GRPC::Methods.build_path(service, method)
294
+ headers = Protocol::GRPC::Methods.build_headers(
295
+ metadata: metadata,
296
+ timeout: timeout
297
+ )
298
+
299
+ Protocol::HTTP::Request[
300
+ "POST", path,
301
+ headers: headers,
302
+ body: body,
303
+ scheme: "https",
304
+ authority: @authority
305
+ ]
306
+ end
307
+
308
+ # Wrap response body with gRPC message parser
309
+ # This follows the async-rest pattern of transparent body wrapping
310
+ def wrap_response_body(response, message_class)
311
+ if response.body
312
+ encoding = response.headers["grpc-encoding"]
313
+ response.body = Protocol::GRPC::Body::Readable.new(
314
+ response.body,
315
+ message_class: message_class,
316
+ encoding: encoding
317
+ )
318
+ end
319
+ end
320
+
321
+ # Check gRPC status and raise error if not OK
322
+ def check_status!(response)
323
+ status = Protocol::GRPC::Metadata.extract_status(response.headers)
324
+ if status != Protocol::GRPC::Status::OK
325
+ message = Protocol::GRPC::Metadata.extract_message(response.headers)
326
+ raise Protocol::GRPC::Error.new(status, message)
327
+ end
328
+ end
329
+ end
330
+ end
331
+ end
332
+ ```
333
+
334
+ ### 2. `Async::GRPC::ServerCall`
335
+
336
+ Rich context object for server-side RPC handling:
337
+
338
+ ```ruby
339
+ module Async
340
+ module GRPC
341
+ # Server-side call context with metadata and deadline tracking
342
+ class ServerCall < Protocol::GRPC::Call
343
+ # @parameter request [Protocol::HTTP::Request]
344
+ # @parameter response_headers [Protocol::HTTP::Headers]
345
+ def initialize(request, response_headers)
346
+ # Parse timeout from grpc-timeout header
347
+ timeout_value = request.headers["grpc-timeout"]
348
+ deadline = if timeout_value
349
+ timeout_seconds = Protocol::GRPC::Methods.parse_timeout(timeout_value)
350
+ Time.now + timeout_seconds if timeout_seconds
351
+ end
352
+
353
+ super(request, deadline: deadline)
354
+ @response_headers = response_headers
355
+ @response_metadata = {}
356
+ @response_trailers = {}
357
+ end
358
+
359
+ # @attribute [Protocol::HTTP::Headers] Response headers
360
+ attr :response_headers
361
+
362
+ # Set response metadata (sent as initial headers)
363
+ # @parameter key [String] Metadata key
364
+ # @parameter value [String] Metadata value
365
+ def set_metadata(key, value)
366
+ @response_metadata[key] = value
367
+ @response_headers[key] = value
368
+ end
369
+
370
+ # Set response trailer (sent after response body)
371
+ # @parameter key [String] Trailer key
372
+ # @parameter value [String] Trailer value
373
+ def set_trailer(key, value)
374
+ @response_trailers[key] = value
375
+ @response_headers.trailer! unless @response_headers.trailer?
376
+ @response_headers[key] = value
377
+ end
378
+
379
+ # Abort the RPC with an error
380
+ # @parameter status [Integer] gRPC status code
381
+ # @parameter message [String] Error message
382
+ def abort!(status, message)
383
+ raise Protocol::GRPC::Error.new(status, message)
384
+ end
385
+
386
+ # Check if we should stop processing
387
+ # @returns [Boolean]
388
+ def should_stop?
389
+ cancelled? || deadline_exceeded?
390
+ end
391
+ end
392
+ end
393
+ end
394
+ ```
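+
+ As an illustration, a handler could use this call context as sketched below. This assumes the middleware passes a `ServerCall` as the third handler argument:
+
+ ```ruby
+ # Hypothetical handler using the call context; only methods defined on the
+ # ServerCall sketch above are used.
+ class EchoService
+   def echo(input, output, call)
+     call.set_metadata("x-handled-by", "echo-service")
+
+     input.each do |message|
+       # Stop early if the client cancelled or the deadline passed:
+       break if call.should_stop?
+
+       output.write(message)
+     end
+
+     call.set_trailer("x-echo-complete", "true")
+   end
+ end
+ ```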
395
+
396
+ ### 3. `Async::GRPC::Interceptor`
397
+
398
+ Middleware/interceptor pattern for client and server:
399
+
400
+ ```ruby
401
+ module Async
402
+ module GRPC
403
+ # Base class for client interceptors
404
+ class ClientInterceptor
405
+ # Intercept a client call
406
+ # @parameter service [String] Service name
407
+ # @parameter method [String] Method name
408
+ # @parameter request [Object] Request message
409
+ # @parameter call [Protocol::GRPC::Call] Call context
410
+ # @yields The actual RPC call
411
+ # @returns [Object] Response message
412
+ def call(service, method, request, call)
413
+ yield
414
+ end
415
+ end
416
+
417
+ # Base class for server interceptors
418
+ class ServerInterceptor
419
+ # Intercept a server call
420
+ # @parameter request [Protocol::HTTP::Request] HTTP request
421
+ # @parameter call [ServerCall] Server call context
422
+ # @yields The actual handler
423
+ # @returns [Protocol::HTTP::Response] HTTP response
424
+ def call(request, call)
425
+ yield
426
+ end
427
+ end
428
+
429
+ # Example: Logging interceptor
430
+ class LoggingInterceptor < ClientInterceptor
431
+ def call(service, method, request, call)
432
+ Console.logger.info(self){"Calling #{service}/#{method}"}
433
+ start_time = Time.now
434
+
435
+ begin
436
+ response = yield
437
+ duration = Time.now - start_time
438
+ Console.logger.info(self){"Completed #{service}/#{method} in #{duration}s"}
439
+ response
440
+ rescue => error
441
+ Console.logger.error(self){"Failed #{service}/#{method}: #{error.message}"}
442
+ raise
443
+ end
444
+ end
445
+ end
446
+
447
+ # Example: Metadata interceptor
448
+ class MetadataInterceptor < ClientInterceptor
449
+ def initialize(metadata = {})
450
+ @metadata = metadata
451
+ end
452
+
453
+ def call(service, method, request, call)
454
+ # Add metadata to all calls
455
+ call.request.headers.merge!(@metadata)
456
+ yield
457
+ end
458
+ end
459
+ end
460
+ end
461
+ ```
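+
+ How interceptors get attached to the client is still an open question (see Open Questions below). One possible shape is a reduce-based chain, sketched here with a hypothetical helper that a future `Client` could call internally:
+
+ ```ruby
+ # Hypothetical chaining helper - not part of the Client sketch above.
+ # The first interceptor in the array wraps all of the others.
+ def with_interceptors(interceptors, service, method, request, call, &invocation)
+   chain = interceptors.reverse.reduce(invocation) do |inner, interceptor|
+     -> { interceptor.call(service, method, request, call, &inner) }
+   end
+
+   chain.call
+ end
+
+ # e.g. inside Client#unary:
+ #   with_interceptors(@interceptors, service, method, request, call_context) do
+ #     # ... perform the actual HTTP/2 request ...
+ #   end
+ ```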
462
+
463
+ ### 4. Using with Async::HTTP::Server
464
+
465
+ **You don't need a separate `Async::GRPC::Server` class!**
466
+
467
+ Just use `Protocol::GRPC::Middleware` directly with `Async::HTTP::Server`. The async handling happens automatically because `Protocol::HTTP::Body::Writable` is already async-safe (uses Thread::Queue).
468
+
469
+ ```ruby
470
+ require "async"
471
+ require "async/http/server"
472
+ require "async/http/endpoint"
473
+ require "protocol/grpc/middleware"
474
+
475
+ # Create gRPC middleware
476
+ middleware = Protocol::GRPC::Middleware.new
477
+ middleware.register("my_service.Greeter", GreeterService.new)
478
+
479
+ # Use with Async::HTTP::Server - it handles everything!
480
+ endpoint = Async::HTTP::Endpoint.parse(
481
+ "https://localhost:50051",
482
+ protocol: Async::HTTP::Protocol::HTTP2
483
+ )
484
+
485
+ server = Async::HTTP::Server.new(middleware, endpoint)
486
+
487
+ Async do
488
+ server.run
489
+ end
490
+ ```
491
+
492
+ `Async::HTTP::Server` provides:
493
+ - Endpoint binding and connection acceptance
494
+ - HTTP/2 protocol handling
495
+ - Request/response loop in async tasks
496
+ - Connection management
497
+
498
+ `Protocol::GRPC::Middleware` just implements:
499
+ - `call(request) → response`
500
+ - Service dispatch
501
+ - Message framing
502
+ - Error handling
503
+
504
+ **No additional async wrapper needed!** The protocol middleware is already async-compatible because:
505
+ - Handlers can use `Async` tasks internally
506
+ - `Body::Writable` uses async-safe queues
507
+ - Reading/writing messages doesn't block the reactor
508
+
509
+ ### 5. Service Handler Interface
510
+
511
+ Service implementations should follow this pattern:
512
+
513
+ ```ruby
514
+ module Async
515
+ module GRPC
516
+ # Base class for service handlers (optional, but provides structure)
517
+ class ServiceHandler
518
+ # Each RPC method receives:
519
+ # @parameter input [Protocol::GRPC::Body::Readable] Input message stream
520
+ # @parameter output [Protocol::GRPC::Body::Writable] Output message stream
521
+ # @parameter request [Protocol::HTTP::Request] Original HTTP request (for metadata)
522
+
523
+ # Example unary RPC:
524
+ def say_hello(input, output, request)
525
+ # Read single request - standard .read method
526
+ hello_request = input.read
527
+
528
+ # Process
529
+ reply = MyService::HelloReply.new(
530
+ message: "Hello, #{hello_request.name}!"
531
+ )
532
+
533
+ # Write single response - standard .write method
534
+ output.write(reply)
535
+ end
536
+
537
+ # Example server streaming RPC:
538
+ def list_features(input, output, request)
539
+ # Read single request
540
+ rectangle = input.read
541
+
542
+ # Write multiple responses
543
+ 10.times do |i|
544
+ feature = MyService::Feature.new(name: "Feature #{i}")
545
+ output.write(feature)
546
+ end
547
+ end
548
+
549
+ # Example client streaming RPC:
550
+ def record_route(input, output, request)
551
+ # Read multiple requests - standard .each iterator
552
+ points = []
553
+ input.each do |point|
554
+ points << point
555
+ end
556
+
557
+ # Process and write single response
558
+ summary = MyService::RouteSummary.new(
559
+ point_count: points.size
560
+ )
561
+ output.write(summary)
562
+ end
563
+
564
+ # Example bidirectional streaming RPC:
565
+ def route_chat(input, output, request)
566
+ # Read and write concurrently
567
+ Async do |task|
568
+ # Read messages in background
569
+ task.async do
570
+ input.each do |note|
571
+ # Process and respond
572
+ response = MyService::RouteNote.new(
573
+ message: "Echo: #{note.message}"
574
+ )
575
+ output.write(response)
576
+ end
577
+ end
578
+ end
579
+ end
580
+ end
581
+ end
582
+ end
583
+ ```
584
+
585
+ ## Usage Examples
586
+
587
+ ### Client Example
588
+
589
+ ```ruby
590
+ require "async"
591
+ require "async/grpc/client"
592
+ require_relative "my_service_pb"
593
+
594
+ endpoint = Async::HTTP::Endpoint.parse("https://localhost:50051")
595
+
596
+ Async do
597
+ client = Async::GRPC::Client.new(endpoint)
598
+
599
+ # Unary RPC
600
+ request = MyService::HelloRequest.new(name: "World")
601
+ response = client.unary(
602
+ "my_service.Greeter",
603
+ "SayHello",
604
+ request,
605
+ response_class: MyService::HelloReply
606
+ )
607
+ puts response.message
608
+
609
+ # Server streaming RPC
610
+ client.server_streaming(
611
+ "my_service.Greeter",
612
+ "StreamNumbers",
613
+ request,
614
+ response_class: MyService::HelloReply
615
+ ) do |reply|
616
+ puts reply.message
617
+ end
618
+
619
+ # Client streaming RPC
620
+ response = client.client_streaming(
621
+ "my_service.Greeter",
622
+ "RecordRoute",
623
+ response_class: MyService::RouteSummary
624
+ ) do |stream|
625
+ 10.times do |i|
626
+ point = MyService::Point.new(latitude: i, longitude: i)
627
+ stream.write(point)
628
+ end
629
+ end
630
+ puts response.point_count
631
+
632
+ # Bidirectional streaming RPC
633
+ client.bidirectional_streaming(
634
+ "my_service.Greeter",
635
+ "RouteChat",
636
+ response_class: MyService::RouteNote
637
+ ) do |input, output|
638
+ # Write in background
639
+ task = Async do
640
+ 5.times do |i|
641
+ note = MyService::RouteNote.new(message: "Note #{i}")
642
+ input.write(note)
643
+ sleep 0.1
644
+ end
645
+ input.close_write
646
+ end
647
+
648
+ # Read responses
649
+ output.each do |reply|
650
+ puts reply.message
651
+ end
652
+
653
+ task.wait
654
+ end
655
+ ensure
656
+ client.close
657
+ end
658
+ ```
659
+
660
+ ### Server Example
661
+
662
+ ```ruby
663
+ require "async"
664
+ require "async/http/server"
665
+ require "async/http/endpoint"
666
+ require "protocol/grpc/middleware"
667
+ require_relative "my_service_pb"
668
+
669
+ # Implement service handlers
670
+ class GreeterService
671
+ def say_hello(input, output, call)
672
+ hello_request = input.read
673
+
674
+ reply = MyService::HelloReply.new(
675
+ message: "Hello, #{hello_request.name}!"
676
+ )
677
+
678
+ output.write(reply)
679
+ end
680
+
681
+ def stream_numbers(input, output, call)
682
+ hello_request = input.read
683
+
684
+ 10.times do |i|
685
+ reply = MyService::HelloReply.new(
686
+ message: "Number #{i} for #{hello_request.name}"
687
+ )
688
+ output.write(reply)
689
+ sleep 0.1 # Simulate work
690
+ end
691
+ end
692
+ end
693
+
694
+ # Setup server
695
+ endpoint = Async::HTTP::Endpoint.parse(
696
+ "https://localhost:50051",
697
+ protocol: Async::HTTP::Protocol::HTTP2
698
+ )
699
+
700
+ Async do
701
+ # Create gRPC middleware
702
+ grpc = Protocol::GRPC::Middleware.new
703
+ grpc.register("my_service.Greeter", GreeterService.new)
704
+
705
+ # Use with Async::HTTP::Server - no wrapper needed!
706
+ server = Async::HTTP::Server.new(grpc, endpoint)
707
+
708
+ server.run
709
+ end
710
+ ```
711
+
712
+ ### Integration with Falcon
713
+
714
+ ```ruby
715
+ #!/usr/bin/env falcon-host
716
+ # frozen_string_literal: true
717
+
718
+ require "protocol/grpc/middleware"
719
+ require_relative "my_service_pb"
720
+
721
+ class GreeterService
722
+ def say_hello(input, output, call)
723
+ hello_request = input.read
724
+ reply = MyService::HelloReply.new(message: "Hello, #{hello_request.name}!")
725
+ output.write(reply)
726
+ end
727
+ end
728
+
729
+ service "grpc.localhost" do
730
+ include Falcon::Environment::Application
731
+
732
+ middleware do
733
+ # Just use Protocol::GRPC::Middleware directly!
734
+ grpc = Protocol::GRPC::Middleware.new
735
+ grpc.register("my_service.Greeter", GreeterService.new)
736
+ grpc
737
+ end
738
+
739
+ scheme "https"
740
+ protocol {Async::HTTP::Protocol::HTTP2}
741
+
742
+ endpoint do
743
+ Async::HTTP::Endpoint.for(scheme, "localhost", port: 50051, protocol: protocol)
744
+ end
745
+ end
746
+ ```
747
+
748
+ ## Integration with Existing gRPC Libraries
749
+
750
+ ### Channel Adapter for Google Cloud Libraries
751
+
752
+ Many existing Ruby libraries (like `google-cloud-spanner`) depend on the standard `grpc` gem and expect a `GRPC::Core::Channel` interface. To enable these libraries to use `async-grpc`, we provide a channel adapter.
753
+
754
+ ```ruby
755
+ module Async
756
+ module GRPC
757
+ # Adapter that makes Async::GRPC::Client compatible with
758
+ # libraries expecting GRPC::Core::Channel
759
+ class ChannelAdapter
760
+ def initialize(endpoint, channel_args = {}, channel_creds = nil)
761
+ @endpoint = endpoint
762
+ @client = Client.new(endpoint)
763
+ @channel_creds = channel_creds
764
+ end
765
+
766
+ # Unary RPC: "/package.Service/Method"
767
+ def request_response(path, request, marshal, unmarshal, deadline: nil, metadata: {})
768
+ service, method = parse_path(path)
769
+ metadata = add_auth_metadata(metadata, path) if @channel_creds
770
+ timeout = deadline ? [deadline - Time.now, 0].max : nil
771
+
772
+ response_binary = Async do
773
+ @client.unary_binary(service, method, marshal.call(request),
774
+ metadata: metadata, timeout: timeout)
775
+ end.wait
776
+
777
+ unmarshal.call(response_binary)
778
+ end
779
+
780
+ # Server streaming
781
+ def request_stream(path, request, marshal, unmarshal, deadline: nil, metadata: {})
782
+ # Returns Enumerator of responses
783
+ end
784
+
785
+ # Client streaming
786
+ def stream_request(path, marshal, unmarshal, deadline: nil, metadata: {})
787
+ # Returns [input_stream, response_future]
788
+ end
789
+
790
+ # Bidirectional streaming
791
+ def stream_stream(path, marshal, unmarshal, deadline: nil, metadata: {})
792
+ # Returns [input_stream, output_enumerator]
793
+ end
794
+ end
795
+ end
796
+ end
797
+ ```
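+
+ The `parse_path` and `add_auth_metadata` helpers referenced above are left out of the sketch. Minimal versions, assuming the `"/package.Service/Method"` path format and the credentials shape described in the authentication section below:
+
+ ```ruby
+ # Private helpers on ChannelAdapter (illustrative only):
+
+ # "/package.Service/Method" => ["package.Service", "Method"]
+ def parse_path(path)
+   _, service, method = path.split("/", 3)
+   [service, method]
+ end
+
+ # Merge per-call auth headers produced by the credentials object.
+ def add_auth_metadata(metadata, path)
+   metadata.merge(@channel_creds.client.updater_proc.call(path))
+ end
+ ```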
798
+
799
+ ### Binary Message Interface
800
+
801
+ To support pre-marshaled protobuf data:
802
+
803
+ ```ruby
804
+ class Client
805
+ # Unary with binary data
806
+ def unary_binary(service, method, request_binary, metadata: {}, timeout: nil)
807
+ # Returns binary response (no message_class decoding)
808
+ end
809
+
810
+ # Server streaming with binary
811
+ def server_streaming_binary(service, method, request_binary, &block)
812
+ # Yields binary strings
813
+ end
814
+ end
815
+ ```
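+
+ A minimal sketch of how `unary_binary` might be implemented, reusing the private helpers from the `Client` sketch above (`build_request`, `wrap_response_body`, `check_status!`). With `message_class: nil`, the wrapped body yields raw binary payloads rather than decoded messages:
+
+ ```ruby
+ def unary_binary(service, method, request_binary, metadata: {}, timeout: nil)
+   # Body::Writable accepts pre-marshaled binary strings directly:
+   body = Protocol::GRPC::Body::Writable.new
+   body.write(request_binary)
+   body.close_write
+
+   http_request = build_request(service, method, body, metadata: metadata, timeout: timeout)
+   http_response = @client.call(http_request)
+
+   # message_class: nil => no protobuf decoding, just de-framed (and decompressed) bytes.
+   wrap_response_body(http_response, nil)
+   response_binary = http_response.body.read
+
+   check_status!(http_response)
+
+   response_binary
+ end
+ ```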
816
+
817
+ ### Usage with Google Cloud
818
+
819
+ ```ruby
820
+ require "async/grpc/channel_adapter"
821
+ require "google/cloud/spanner"
822
+
823
+ endpoint = Async::HTTP::Endpoint.parse("https://spanner.googleapis.com")
824
+ credentials = Google::Cloud::Spanner::Credentials.default
825
+
826
+ # Create adapter
827
+ channel = Async::GRPC::ChannelAdapter.new(endpoint, {}, credentials)
828
+
829
+ # Use with Google Cloud libraries
830
+ service = Google::Cloud::Spanner::Service.new
831
+ service.instance_variable_set(:@channel, channel)
832
+
833
+ # Now Spanner uses async-grpc!
834
+ ```
835
+
836
+ See [`SPANNER_INTEGRATION.md`](SPANNER_INTEGRATION.md) for a detailed integration guide.
837
+
838
+ ## Design Decisions
839
+
840
+ ### Client Wraps Async::HTTP::Client
841
+
842
+ The client is a thin wrapper that:
843
+ - Manages the HTTP/2 connection lifecycle
844
+ - Handles request/response conversion using `protocol-grpc`
845
+ - Provides RPC-style methods (unary, server_streaming, etc.)
846
+ - Manages streaming with Async tasks
847
+
848
+ Benefits:
849
+ - Reuses `Async::HTTP::Client` connection pooling
850
+ - Automatic HTTP/2 multiplexing
851
+ - Async-friendly streaming
852
+
853
+ ### Server: Just Use Existing Infrastructure
854
+
855
+ **No custom server class needed!** The design is even simpler:
856
+
857
+ 1. `Protocol::GRPC::Middleware` (in protocol-grpc)
858
+ - Implements `call(request) → response`
859
+ - Handles gRPC protocol details
860
+ - Works with any HTTP/2 server
861
+
862
+ 2. `Async::HTTP::Server` (already exists in async-http)
863
+ - Handles endpoint binding
864
+ - Manages connections
865
+ - Runs request/response loop in async tasks
866
+
867
+ Benefits:
868
+ - **No code duplication** - reuse existing Async::HTTP::Server
869
+ - **Standard middleware** - works with any Protocol::HTTP::Middleware
870
+ - **Composable** - can mix gRPC with HTTP endpoints (see the sketch below)
871
+ - **Simple** - just one middleware class to implement
872
+
873
+ Compare to Protocol::HTTP ecosystem:
874
+ - `Protocol::HTTP::Middleware` - base middleware class
875
+ - `Async::HTTP::Server` - uses any middleware
876
+ - No need for `Async::HTTP::SpecialServer` - same here!
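+
+ To illustrate the composable point, a hypothetical top-level app could route gRPC traffic to the middleware and everything else to a normal HTTP app. A sketch, reusing `grpc` and `endpoint` from the server example above; `web` stands for any other `Protocol::HTTP` application:
+
+ ```ruby
+ class HybridApp
+   def initialize(grpc, web)
+     @grpc = grpc # Protocol::GRPC::Middleware
+     @web = web   # any other Protocol::HTTP application
+   end
+
+   def call(request)
+     content_type = request.headers["content-type"]
+
+     if content_type&.start_with?("application/grpc")
+       @grpc.call(request)
+     else
+       @web.call(request)
+     end
+   end
+ end
+
+ server = Async::HTTP::Server.new(HybridApp.new(grpc, web), endpoint)
+ ```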
877
+
878
+ ### Service Handler Interface
879
+
880
+ Handlers receive `(input, output, request)`:
881
+ - `input` - stream for reading request messages
882
+ - `output` - stream for writing response messages
883
+ - `request` - original HTTP request for accessing metadata
884
+
885
+ Benefits:
886
+ - Uniform interface for all RPC types
887
+ - Handlers control streaming explicitly
888
+ - Access to metadata via request headers
889
+
890
+ ### Streaming with Async Tasks
891
+
892
+ Bidirectional streaming uses Async tasks:
893
+ - Input and output can be processed concurrently
894
+ - Natural async/await patterns
895
+ - Proper cleanup on errors
896
+
897
+ ## Google Cloud Integration Requirements
898
+
899
+ To support Google Cloud libraries (like `google-cloud-spanner`), async-grpc must provide:
900
+
901
+ ### 1. Channel Adapter Interface
902
+
903
+ Implement `GRPC::Core::Channel` interface methods:
904
+ - `request_response(path, request, marshal, unmarshal, deadline:, metadata:)` - Unary
905
+ - `request_stream(path, request, marshal, unmarshal, deadline:, metadata:)` - Server streaming
906
+ - `stream_request(path, marshal, unmarshal, deadline:, metadata:)` - Client streaming
907
+ - `stream_stream(path, marshal, unmarshal, deadline:, metadata:)` - Bidirectional
908
+
909
+ ### 2. Binary Message Support
910
+
911
+ Support pre-marshaled protobuf data:
912
+ - `Client#unary_binary(service, method, request_binary, ...)` → `response_binary`
913
+ - `Body::Readable` with `message_class: nil` returns raw binary
914
+ - `Body::Writable` accepts binary strings directly
915
+
916
+ ### 3. Authentication Integration
917
+
918
+ Support Google Cloud authentication patterns:
919
+ - OAuth2 access tokens (via credentials object)
920
+ - Per-call credential refresh (credentials have `updater_proc`)
921
+ - Token metadata format: `{"authorization" => "Bearer ya29.a0..."}`
922
+
923
+ Example credential integration:
924
+ ```ruby
925
+ # Google's credential format
926
+ credentials = Google::Cloud::Spanner::Credentials.default
927
+ updater_proc = credentials.client.updater_proc
928
+
929
+ # For each RPC call:
930
+ auth_metadata = updater_proc.call(method_path)
931
+ # => {"authorization" => "Bearer ..."}
932
+
933
+ # Add to request metadata
934
+ client.unary(service, method, request, metadata: auth_metadata)
935
+ ```
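+
+ This slots naturally into the interceptor pattern. A sketch of a hypothetical client interceptor that refreshes credentials per call, assuming the call context exposes the request as in the `MetadataInterceptor` example:
+
+ ```ruby
+ class GoogleAuthInterceptor < ClientInterceptor
+   def initialize(credentials)
+     @updater_proc = credentials.client.updater_proc
+   end
+
+   def call(service, method, request, call)
+     # Fetch (or refresh) the access token for this particular call:
+     auth_metadata = @updater_proc.call("/#{service}/#{method}")
+     call.request.headers.merge!(auth_metadata)
+
+     yield
+   end
+ end
+ ```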
936
+
937
+ ### 4. Error Mapping
938
+
939
+ Map gRPC status codes to Google Cloud errors:
940
+ - `Protocol::GRPC::Status::INVALID_ARGUMENT` → `Google::Cloud::InvalidArgumentError`
941
+ - `Protocol::GRPC::Status::NOT_FOUND` → `Google::Cloud::NotFoundError`
942
+ - `Protocol::GRPC::Status::UNAVAILABLE` → `Google::Cloud::UnavailableError`
943
+ - Preserve error messages and metadata
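+
+ A minimal sketch of such a mapping (the exact set of Google Cloud error classes to cover is an implementation detail):
+
+ ```ruby
+ GOOGLE_CLOUD_ERRORS = {
+   Protocol::GRPC::Status::INVALID_ARGUMENT => Google::Cloud::InvalidArgumentError,
+   Protocol::GRPC::Status::NOT_FOUND        => Google::Cloud::NotFoundError,
+   Protocol::GRPC::Status::UNAVAILABLE      => Google::Cloud::UnavailableError,
+ }.freeze
+
+ # Assumes Protocol::GRPC::Error exposes the status code it was raised with:
+ def map_error(error)
+   error_class = GOOGLE_CLOUD_ERRORS.fetch(error.status, Google::Cloud::Error)
+   error_class.new(error.message)
+ end
+ ```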
944
+
945
+ ### 5. Metadata Conventions
946
+
947
+ Support Google Cloud metadata conventions:
948
+ - `google-cloud-resource-prefix` - resource path prefix
949
+ - `x-goog-spanner-route-to-leader` - leader-aware routing
950
+ - `x-goog-request-params` - request routing params
951
+ - Custom quota project ID
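+
+ From async-grpc's point of view these are just plain metadata entries; for example (values are illustrative):
+
+ ```ruby
+ metadata = {
+   "google-cloud-resource-prefix" => "projects/my-project/instances/my-instance",
+   "x-goog-spanner-route-to-leader" => "true"
+ }
+
+ client.unary("google.spanner.v1.Spanner", "ExecuteSql", request, metadata: metadata)
+ ```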
952
+
953
+ ### 6. Retry Logic Compatibility
954
+
955
+ Support retry policies from Google Cloud:
956
+ - `Gapic::CallOptions` with retry_policy
957
+ - Exponential backoff configuration
958
+ - Per-method retry settings
959
+ - Idempotency awareness
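+
+ Whatever the final integration with `Gapic::CallOptions` looks like, the underlying mechanism is a retry loop around the call. A simplified sketch (the parameter names and the `status` accessor on `Protocol::GRPC::Error` are assumptions):
+
+ ```ruby
+ def with_retries(attempts: 3, base_delay: 0.1, retryable: [Protocol::GRPC::Status::UNAVAILABLE])
+   attempt = 0
+
+   begin
+     yield
+   rescue Protocol::GRPC::Error => error
+     attempt += 1
+     raise unless retryable.include?(error.status) && attempt < attempts
+
+     # Exponential backoff: 0.1s, 0.2s, 0.4s, ... up to `attempts` total tries.
+     sleep(base_delay * (2 ** (attempt - 1)))
+     retry
+   end
+ end
+
+ response = with_retries do
+   client.unary("my_service.Greeter", "SayHello", request, response_class: MyService::HelloReply)
+ end
+ ```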
960
+
961
+ ### Implementation Checklist
962
+
963
+ - [ ] `Async::GRPC::ChannelAdapter` class
964
+ - [ ] Binary message methods in `Client`
965
+ - [ ] `GRPC::Core::Channel` interface compatibility
966
+ - [ ] OAuth2 credential integration
967
+ - [ ] Error mapping to Google Cloud errors
968
+ - [ ] Metadata convention support
969
+ - [ ] Retry policy support
970
+ - [ ] Integration tests with actual Spanner SDK
971
+
972
+ See [`SPANNER_INTEGRATION.md`](SPANNER_INTEGRATION.md) for a detailed implementation guide.
973
+
974
+ ### Key Interfaces for Google Cloud Compatibility
975
+
976
+ **Channel Interface** (from `GRPC::Core::Channel`):
977
+ ```ruby
978
+ channel = ChannelAdapter.new(endpoint, channel_args, channel_creds)
979
+ response = channel.request_response(path, request, marshal, unmarshal, deadline:, metadata:)
980
+ ```
981
+
982
+ **Binary Client Methods**:
983
+ ```ruby
984
+ client.unary_binary(service, method, binary_request) # => binary_response
985
+ client.server_streaming_binary(service, method, binary_request){|binary| do_stuff}
986
+ client.client_streaming_binary(service, method){|output| output.write(binary)}
987
+ client.bidirectional_streaming_binary(service, method){|input, output| do_stuff}
988
+ ```
989
+
990
+ **Authentication Hook**:
991
+ ```ruby
992
+ # Google Cloud credentials provide updater_proc
993
+ auth_metadata = credentials.client.updater_proc.call(method_path)
994
+ # => {"authorization" => "Bearer ya29.a0..."}
995
+ ```
996
+
997
+ This enables async-grpc to be used as a drop-in replacement for the standard `grpc` gem in Google Cloud libraries.
998
+
999
+ ## Implementation Roadmap
1000
+
1001
+ ### Phase 1: Core Client (✅ Designed)
1002
+ - `Async::GRPC::Client` with all four RPC types
1003
+ - `Async::GRPC::ServerCall` context object (enhances Protocol::GRPC::Call)
1004
+ - Basic error handling and status checking
1005
+ - Response body wrapping pattern
1006
+ - **Server**: Just use `Protocol::GRPC::Middleware` with `Async::HTTP::Server` (no wrapper needed!)
1007
+
1008
+ ### Phase 2: Google Cloud Integration (✅ Designed)
1009
+ - `Async::GRPC::ChannelAdapter` for GRPC::Core::Channel compatibility
1010
+ - Binary message methods (`unary_binary`, etc.)
1011
+ - OAuth2 authentication integration
1012
+ - Google Cloud metadata conventions
1013
+ - Error mapping to Google Cloud errors
1014
+
1015
+ ### Phase 3: Interceptors & Middleware (✅ Designed)
1016
+ - `Async::GRPC::ClientInterceptor` base class
1017
+ - `Async::GRPC::ServerInterceptor` base class
1018
+ - Chain multiple interceptors
1019
+ - Built-in interceptors (logging, metrics, auth)
1020
+
1021
+ ### Phase 4: Advanced Features
1022
+ - Retry policies with exponential backoff
1023
+ - Flow control & backpressure (bounded queues)
1024
+ - Compression negotiation (`grpc-encoding` headers)
1025
+ - Health check service implementation
1026
+ - Server reflection implementation
1027
+ - Graceful shutdown
1028
+
1029
+ ## Missing/Future Features
1030
+
1031
+ ### Core Features (Phase 1-2)
1032
+
1033
+ 1. **Cancellation & Deadlines** (Partially designed)
1034
+ - Proper cancellation propagation through async tasks
1035
+ - Timeout enforcement for streaming RPCs
1036
+ - Cancel ongoing streams when deadline exceeded
1037
+ - Context cancellation (similar to Go's context.Context)
1038
+ - **Note**: `ServerCall` already tracks the deadline; cancellation still needs to be wired up (see the sketch after this list)
1039
+
1040
+ 2. **Flow Control & Backpressure**
1041
+ - Respect HTTP/2 flow control (handled by async-http)
1042
+ - Backpressure for streaming (don't buffer unbounded)
1043
+ - Use `Protocol::HTTP::Body::Writable` with bounded queue option
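+
+ One way to wire up server-side deadline enforcement, as noted in the first item above. A sketch using the async gem's task timeout support, assuming protocol-grpc defines a `Status::DEADLINE_EXCEEDED` constant and the `deadline` attribute on the call context:
+
+ ```ruby
+ # Run a handler under the call's deadline; convert timeouts into a gRPC error.
+ def enforce_deadline(call, task: Async::Task.current)
+   return yield unless call.deadline
+
+   remaining = call.deadline - Time.now
+
+   if remaining <= 0
+     raise Protocol::GRPC::Error.new(Protocol::GRPC::Status::DEADLINE_EXCEEDED, "Deadline exceeded")
+   end
+
+   task.with_timeout(remaining) { yield }
+ rescue Async::TimeoutError
+   raise Protocol::GRPC::Error.new(Protocol::GRPC::Status::DEADLINE_EXCEEDED, "Deadline exceeded")
+ end
+ ```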
1044
+
1045
+ ### Advanced Features (Later)
1046
+
1047
+ 3. **Health Checking**
1048
+ - Standard gRPC health check protocol
1049
+ - `grpc.health.v1.Health` service
1050
+ - Per-service health status
1051
+
1052
+ 4. **Reflection API**
1053
+ - Server reflection protocol (`grpc.reflection.v1alpha.ServerReflection`)
1054
+ - Allows tools like `grpcurl` to discover services
1055
+ - List services, describe methods, get proto definitions
1056
+
1057
+ 5. **Authentication & Authorization**
1058
+ - Channel credentials (TLS, custom auth)
1059
+ - Per-call credentials (tokens, API keys)
1060
+ - Integration with standard auth patterns
1061
+
1062
+ 6. **Retry Policies**
1063
+ - Automatic retries with exponential backoff
1064
+ - Configurable retry conditions (status codes)
1065
+ - Hedging (parallel requests)
1066
+
1067
+ 7. **Load Balancing**
1068
+ - Client-side load balancing
1069
+ - Service config (retry policy, timeout, LB config)
1070
+ - Integration with service discovery
1071
+
1072
+ 8. **Compression Negotiation**
1073
+ - `grpc-encoding` header support
1074
+ - `grpc-accept-encoding` for advertising support
1075
+ - Multiple compression algorithms (gzip, deflate, etc.)
1076
+
1077
+ ## Open Questions
1078
+
1079
+ 1. **Interceptor API**: What should the interface be?
1080
+ ```ruby
1081
+ class LoggingInterceptor
1082
+ def call(request, call)
1083
+ # Before request
1084
+ result = yield
1085
+ # After response
1086
+ result
1087
+ end
1088
+ end
1089
+ ```
1090
+
1091
+ 2. **Context/Call Object**: Should we have a rich call context?
1092
+ - Access metadata, peer info, deadline
1093
+ - Set trailers, check cancellation
1094
+ - Pass context through interceptors
1095
+
1096
+ 3. **Connection Pooling**: Client-side or server-side?
1097
+ - Current: Single Async::HTTP::Client
1098
+ - Could pool multiple connections
1099
+ - Or rely on HTTP/2 multiplexing?
1100
+
1101
+ 4. **Graceful Shutdown**: How should server shutdown work?
1102
+ - Stop accepting new calls
1103
+ - Wait for in-flight calls to complete
1104
+ - Force close after timeout
1105
+
1106
+ 5. **Error Propagation**: How to handle partial failures in streaming?
1107
+ - Close stream immediately on error?
1108
+ - Send error in trailers?
1109
+ - Allow partial success?
1110
+
1111
+ 6. **Type Validation**: Validate message types at runtime?
1112
+ - Check message class matches expected type
1113
+ - Or trust duck typing?
1114
+
1115
+ ## References
1116
+
1117
+ - [Protocol::GRPC Design](../protocol-grpc/design.md)
1118
+ - [Async::HTTP](https://github.com/socketry/async-http)
1119
+ - [Protocol::HTTP::Middleware](https://github.com/socketry/protocol-http)
1120
+ - [gRPC Core Concepts](https://grpc.io/docs/what-is-grpc/core-concepts/)
1121
+