prosody 0.1.1-aarch64-linux
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/.cargo/config.toml +7 -0
- data/.release-please-manifest.json +3 -0
- data/.rspec +3 -0
- data/.ruby-version +1 -0
- data/.standard.yml +9 -0
- data/.taplo.toml +6 -0
- data/ARCHITECTURE.md +591 -0
- data/CHANGELOG.md +92 -0
- data/LICENSE +21 -0
- data/Makefile +36 -0
- data/README.md +946 -0
- data/Rakefile +26 -0
- data/lib/prosody/3.2/prosody.so +0 -0
- data/lib/prosody/3.3/prosody.so +0 -0
- data/lib/prosody/3.4/prosody.so +0 -0
- data/lib/prosody/configuration.rb +333 -0
- data/lib/prosody/handler.rb +177 -0
- data/lib/prosody/native_stubs.rb +417 -0
- data/lib/prosody/processor.rb +321 -0
- data/lib/prosody/sentry.rb +36 -0
- data/lib/prosody/version.rb +10 -0
- data/lib/prosody.rb +42 -0
- data/release-please-config.json +10 -0
- data/sig/configuration.rbs +252 -0
- data/sig/handler.rbs +79 -0
- data/sig/processor.rbs +100 -0
- data/sig/prosody.rbs +171 -0
- data/sig/version.rbs +9 -0
- metadata +165 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA256:
  metadata.gz: 578ef32bca6763e6bf5c4abb5ef4ec2ca262cef7ad3ffdb4287f65be39206ee5
  data.tar.gz: 906472277e1d55c4a54ff6b0cd662702fc36ae66dc63141717d3d70c548ccef7
SHA512:
  metadata.gz: 5e7f2914a7fcfa4066e882d7d4eaf3c347c992a9a7c472392568e38f22d42bba20bb790837a3eb95381f10281e3358122143d4cd33554e90f76a7778f8b5d7c9
  data.tar.gz: '028120a84c256451262bde1473276ff34818fedff6a1857e627ca03f534811332420af420c123a0f9d3d3a3a2cb255e1aa239766b56cbc66dc7457b82e5db8c9'
data/.cargo/config.toml
ADDED
data/.rspec
ADDED
data/.ruby-version
ADDED
@@ -0,0 +1 @@
3.3.7
data/.standard.yml
ADDED
data/.taplo.toml
ADDED
data/ARCHITECTURE.md
ADDED
@@ -0,0 +1,591 @@
# Prosody Ruby Architecture: Under The Hood

This document explains how the Prosody Ruby gem works internally. We'll start with high-level concepts, then dive deeper into the components and their interactions.

## The Basics: What Makes Prosody Different

Ruby gems for Kafka typically fall into three categories:

- Pure Ruby implementations (simple but performance-limited by the GVL)
- C library bindings (better performance but expose low-level Kafka internals)
- Java wrappers (performant but require JRuby and non-idiomatic APIs)

Existing Ruby Kafka gems process messages synchronously, one per partition at a time, so a slow message blocks all others behind it, regardless of key. Prosody is async, processing multiple messages per partition concurrently while preserving per-key order. This approach creates unique challenges:

1. **Bridging two languages**: Ruby and Rust have different memory models and concurrency approaches
2. **Concurrent processing**: Processing messages efficiently without blocking
3. **Safe communication**: Ensuring that errors in one language don't crash the other

## Background Concepts

Before diving into Prosody's architecture, let's understand some key concepts:

### The Global VM Lock (GVL)

Ruby's interpreter uses a Global VM Lock (GVL) that allows only one thread to execute Ruby code at a time. This means:

- Multiple Ruby threads can exist, but only one runs Ruby code at any moment
- When a thread performs I/O or other blocking operations, it can release the GVL
- Native extensions (like our Rust code) can release the GVL to let other Ruby threads run

### The Ruby/Rust Thread Boundary

A critical limitation: **Rust cannot directly call Ruby methods from Rust-created threads**. This is because:

1. Only Ruby-created threads can safely run Ruby code
2. Ruby's memory management expects certain thread-local state
3. Running Ruby code from a non-Ruby thread can cause memory corruption or crashes

This is why we need a bridge: a dedicated Ruby thread that can safely execute Ruby code on behalf of Rust.

### Fibers and Cooperative Concurrency

Ruby supports lightweight threads called Fibers that enable cooperative concurrency:

- Fibers are managed by Ruby (not the OS)
- A Fiber runs until it explicitly yields control
- Multiple Fibers can run on a single Ruby thread
- The `async` gem builds a concurrency framework on top of Fibers

### Blocking vs. Yielding: A Critical Distinction

In concurrent programming:

- **Blocking**: Stops an entire thread, preventing any code in that thread from running
- **Yielding**: Pauses the current Fiber but allows other Fibers in the same thread to run

This distinction is crucial for Prosody's performance:

```ruby
# Approach 1: Blocking
def process_messages
  # This blocks the entire thread while waiting
  result = blocking_operation
  # No other work happens during the wait
end

# Approach 2: Yielding
def process_messages
  Async do
    # This yields the fiber while waiting, letting other fibers run
    result = queue.pop
    # Other fibers continue working during the wait
  end
end
```

When processing thousands of messages, yielding allows much higher throughput because work continues during waits.
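This behavior can be seen with nothing but Ruby's built-in `Fiber` class. The sketch below is illustrative only (not Prosody code): two fibers share one thread under a tiny hand-written round-robin scheduler, and each pauses at `Fiber.yield` so the other can make progress:

```ruby
log = []

# Two fibers that each do three units of work, yielding between units.
fibers = ["A", "B"].map do |name|
  Fiber.new do
    3.times do |i|
      log << "#{name}#{i}"
      Fiber.yield # pause here so the other fiber can run
    end
  end
end

# A tiny round-robin scheduler: resume each live fiber in turn.
until fibers.empty?
  fibers.each { |f| f.resume if f.alive? }
  fibers.reject! { |f| !f.alive? }
end

# The work interleaves on a single thread:
# log == ["A0", "B0", "A1", "B1", "A2", "B2"]
```

Under the `async` gem the same yielding happens implicitly whenever a fiber waits on I/O or a queue, so you get this interleaving without writing a scheduler yourself.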
## Architecture Overview

Prosody combines a Rust core with a Ruby interface:

```mermaid
graph TD
    A[Ruby Application] --> B[Prosody Ruby Interface]
    B --> C[Bridge]
    C --> D[Prosody Rust Core]
    D --> E[Kafka]

    subgraph "Ruby Process"
        A
        B
        C
        D
    end
```

### Main Components

1. **Ruby Interface Layer**: The classes you interact with directly
   - `Prosody::Client`: The main entry point for applications
   - `Prosody::EventHandler`: Base class for handling messages
   - `Prosody::Message`: Represents a Kafka message

2. **Bridge**: The crucial connection between Ruby and Rust
   - Enables safe communication between languages
   - Manages memory safety across the boundary
   - Handles concurrency coordination

3. **Rust Core**: The foundation
   - Kafka Connectivity: Handles the connection to Kafka brokers
   - Message Processing: Decodes, routes, and processes messages
   - Error Handling: Manages retries and error classification

4. **AsyncTaskProcessor**: Ruby-side concurrent processing
   - Manages message processing within Ruby
   - Provides fiber-based concurrency
   - Handles cancellation and resource cleanup

## The Bridge: How Ruby and Rust Communicate

The Bridge is the most complex part of Prosody. It allows two different languages with different concurrency models to work together safely.

### Why We Need A Bridge

Rust cannot directly call Ruby methods from Rust threads. Consider what would happen without a bridge:

```
Rust Thread → directly calls → Ruby method
                   ↓
    🔥 Memory corruption or crash 🔥
```

Instead, we need a bridge pattern:

```
Rust Thread → sends command → Ruby Thread → safely executes → Ruby method
     ↓                              ↓
Waits for completion           Returns result back to Rust
```

The bridge provides this safe communication channel between languages.

### How it Works: Queue-Based Communication

The Bridge uses queues (or channels) to pass data between Ruby and Rust:

```mermaid
graph TD
    A[Rust Code] -->|sends functions| B[Command Queue]
    B -->|executed by| C[Ruby Thread]
    C -->|calls| D[Your Handler Code]
    D -->|returns via| E[Result Queue]
    E -->|received by| A
```

1. **Command Queue**: Rust puts functions to be executed in a queue
2. **Ruby Thread**: A dedicated Ruby thread executes these functions
3. **Result Queue**: Results flow back through another queue

The bridge operates as a continuous poll loop, constantly checking for new functions to execute:

```
Loop:
  1. Check for commands in the queue
  2. If found, acquire GVL and execute in Ruby
  3. Send results back
  4. Release GVL and continue polling
```

This polling approach allows the bridge to efficiently process commands while minimizing GVL contention.
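The poll loop's command/result shape can be modeled in plain Ruby with two `Queue`s and a worker thread. This is a toy stand-in (the real command loop lives in Rust and also manages the GVL), but the data flow is the same:

```ruby
# Toy bridge: callables go in on the command queue, their results
# come back on the result queue.
commands = Queue.new
results  = Queue.new

# Dedicated worker standing in for the bridge's Ruby thread.
worker = Thread.new do
  while (command = commands.pop) # a nil command means "shut down"
    begin
      results << [:ok, command.call]
    rescue => e
      results << [:error, e]
    end
  end
end

# "Rust side": submit a function, then wait for its result.
commands << -> { 21 * 2 }
ok_status, ok_value = results.pop
# ok_status == :ok, ok_value == 42

commands << -> { raise "boom" }
err_status, err = results.pop
# err_status == :error, err.message == "boom"

commands << nil # shut the worker down
worker.join
```

Note that errors raised by a command are caught and reported as data rather than allowed to kill the worker thread, mirroring how the bridge keeps Ruby exceptions from crossing into Rust uncontrolled.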
## Message Flow

### Receiving Messages

```mermaid
sequenceDiagram
    participant Client as Prosody::Client (Rust)
    participant RubyHandler as RubyHandler (Rust)
    participant Scheduler as Scheduler (Rust)
    participant ResultChannel as Result Channel (Rust)
    participant Bridge as Bridge (Rust/Ruby)
    participant AsyncProc as AsyncTaskProcessor (Ruby)
    participant EventHandler as EventHandler (Ruby)

    %% Message received from Kafka
    Client ->>+ RubyHandler: on_message(context, message)

    %% Prepare for processing in Ruby
    RubyHandler ->>+ Scheduler: schedule(task_id, span, function)

    %% Bridge to Ruby runtime
    Note over Bridge, AsyncProc: Cross to Ruby runtime
    Scheduler ->>+ Bridge: run(submit task)
    Bridge ->>+ AsyncProc: submit(task_id, carrier, callback, &block)

    %% Process in Ruby
    AsyncProc ->>+ EventHandler: on_message(context, message)
    EventHandler -->>- AsyncProc: returns (success or error)

    %% Callback directly to ResultChannel
    AsyncProc ->>+ ResultChannel: callback.call(success, result)

    %% Result received by RubyHandler
    ResultChannel -->>- RubyHandler: send result

    %% Complete processing
    RubyHandler -->>- Client: processing complete
```

For **receiving**:

1. **Message Reception**: Rust client receives a message from Kafka
2. **Scheduling**: The message is prepared for processing in Ruby
3. **Bridge Crossing**: A function is scheduled to run in Ruby-land
4. **Task Processing**: Your handler code processes the message
5. **Result Reporting**: The result is sent back to Rust
6. **Completion**: Processing finishes and Kafka offsets can be committed
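Steps 4 and 5 can be sketched in plain Ruby. The names here (`run_handler`, `result_channel`) are hypothetical stand-ins for Prosody's internals; the point is the `(success, result)` callback shape:

```ruby
# Toy version of the Ruby side of the receive path: run the handler,
# then report success or failure back through a result channel.
result_channel = Queue.new
callback = ->(success, result) { result_channel << [success, result] }

run_handler = lambda do |handler, message, cb|
  begin
    cb.call(true, handler.call(message))
  rescue => e
    cb.call(false, e)
  end
end

handler = ->(message) { message.upcase }

run_handler.call(handler, "hello", callback)
ok, result = result_channel.pop
# ok == true, result == "HELLO"

run_handler.call(handler, nil, callback) # nil has no #upcase
ok2, error = result_channel.pop
# ok2 == false, error is a NoMethodError
```

Because failures travel back as `(false, error)` rather than as raised exceptions, the Rust side can decide whether to retry or commit the offset without ever touching Ruby exception machinery.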
### Sending Messages

```mermaid
sequenceDiagram
    participant RubyApp as Ruby Application
    participant Client as Prosody::Client (Ruby)
    participant Bridge as Bridge (Rust/Ruby)
    participant Queue as Result Queue (Ruby)
    participant Tokio as Tokio Runtime (Rust)
    participant Kafka as Kafka Broker

    %% Ruby application sends a message
    RubyApp ->>+ Client: send_message(topic, key, payload)

    %% Client prepares to send via bridge
    Client ->>+ Bridge: wait_for(future)

    %% Bridge creates queue and spawns task
    Bridge ->>+ Queue: create queue
    Bridge ->>+ Tokio: spawn(async task)

    %% Ruby waits on queue (non-blocking)
    Note over Client, Queue: Ruby fiber yields while waiting
    Client ->> Queue: pop

    %% Rust task sends to Kafka
    Tokio ->>+ Kafka: send message
    Kafka -->>- Tokio: acknowledge

    %% Result sent back through bridge to the queue
    Tokio ->>+ Bridge: send result back to Ruby
    Bridge ->>+ Queue: push result

    %% Ruby resumes with result
    Queue -->>- Client: return result
    Client -->>- RubyApp: return success/error
```

For **sending**:

1. **Message Submission**: Your Ruby code calls `send_message`
2. **Bridge Activation**: Creates a Queue and spawns a Rust task to send the message
3. **Ruby Yielding**: Your Ruby code waits on the Queue without blocking, so other fibers can run
4. **Rust Processing**: The Rust task sends the message to Kafka asynchronously
5. **Bridge Callback**: The Rust task uses the bridge to push the result to the Ruby Queue
6. **Ruby Resumption**: Your Ruby code receives the result and continues execution

This architecture is critical: notice that Rust must go through the bridge again to place the result in the Ruby Queue. This ensures thread safety while allowing efficient communication.
## AsyncTaskProcessor: Ruby-Side Concurrent Processing

The `AsyncTaskProcessor` is the Ruby component that manages concurrent task execution using the `async` gem:

```ruby
# Conceptual implementation (simplified)
class AsyncTaskProcessor
  def initialize
    @command_queue = Queue.new # Thread-safe queue for commands
  end

  def start
    @thread = Thread.new do
      Async do # Start the async reactor
        loop do
          # This yields the fiber while waiting for a command
          command = @command_queue.pop

          case command
          when Execute
            # Execute the task in a new fiber
            Async do
              begin
                command.block.call               # Run the actual task
                command.callback.call(true, nil) # Signal success
              rescue => e
                command.callback.call(false, e)  # Signal failure
              end
            end
          when Shutdown
            break # Exit the loop and terminate
          end
        end
      end
    end
  end

  def submit(task_id, carrier, callback, &block)
    token = CancellationToken.new
    @command_queue.push(Execute.new(task_id, callback, token, block))
    token
  end

  def stop
    @command_queue.push(Shutdown.new)
    @thread.join
  end
end
```

This class enables:

1. Fiber-based concurrency with `Async`
2. Non-blocking task processing
3. Concurrent execution even when some tasks are waiting
4. Clean mechanism for task cancellation
## The Power of Cooperative Concurrency

Prosody's message processing leverages fiber-based concurrency:

```
Without concurrency (traditional approach):
[Message 1] → Process → Done
                          ↓
[Message 2] → Process → Done
                          ↓
[Message 3] → Process → Done

With Prosody's fiber-based concurrency:
[Message 1] → Start → Yield while waiting → Resume → Done
     ↓
[Message 2] → Start → Yield while waiting → Resume → Done
     ↓
[Message 3] → Start → Yield while waiting → Resume → Done
```

This approach allows processing to continue even when some messages are waiting for external resources:

```ruby
# Simplified version of how Prosody processes a single message:
def process_message(message)
  Async do
    # Create two concurrent fibers

    # Worker fiber that processes the message
    Async do
      begin
        # Call your handler
        handler.on_message(context, message)
        report_success
      rescue => e
        report_error(e)
      end
    end

    # Cancellation watcher fiber
    Async do
      # This yields while waiting for cancellation
      cancellation_token.wait
      # Handle cancellation if needed
    end
  end
end
```

## The CancellationToken System

Prosody needs to gracefully cancel in-progress tasks (e.g., during shutdown). It uses a token system:

```ruby
class CancellationToken
  def initialize
    @queue = Queue.new # Thread-safe queue for signaling
  end

  def cancel
    # Signal cancellation request
    @queue.push(:cancel)
  end

  def wait
    # Wait for cancellation signal (yields the fiber while waiting)
    @queue.pop
    true
  end
end
```

The key insight is that `Queue#pop` is fiber-aware:

1. When `wait` is called, the fiber yields
2. Other fibers can continue running
3. When `cancel` is called, the waiting fiber resumes
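A minimal way to exercise the token is with a plain thread (under the async reactor the waiting side would be a fiber, and `pop` would yield the fiber rather than block the thread):

```ruby
# The CancellationToken from above, plus a watcher that parks on it.
class CancellationToken
  def initialize
    @queue = Queue.new
  end

  def cancel
    @queue.push(:cancel)
  end

  def wait
    @queue.pop
    true
  end
end

token  = CancellationToken.new
events = Queue.new

watcher = Thread.new do
  token.wait              # parks here until cancel is called
  events << :cancelled    # cleanup work would happen here
end

token.cancel              # wake the watcher
watcher.join
# events.pop == :cancelled
```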
## Error Handling and Classification

Prosody has an error handling system that classifies errors as permanent or transient:

```ruby
class MyHandler < Prosody::EventHandler
  # Tell Prosody: "Don't retry these errors - they won't succeed"
  permanent :on_message,
    ArgumentError,     # Bad arguments - won't fix themselves
    TypeError,         # Type mismatches - message format issues
    JSON::ParserError  # Corrupt data - retrying won't help

  # Tell Prosody: "Do retry these errors - they might succeed later"
  transient :on_message,
    NetworkError,            # Network might recover
    ServiceUnavailableError, # Service might become available
    TimeoutError             # Operation might complete next time

  def on_message(context, message)
    # Process message...
    user_id = message.payload.fetch("user_id")
    api_result = ApiClient.call(user_id)
    Database.save(api_result)
  end
end
```

The implementation uses Ruby's metaprogramming to wrap methods and classify errors:

```ruby
# Simplified version of the error classification mechanism
module ErrorClassification
  def permanent(method_name, *exception_classes)
    wrap_errors(method_name, exception_classes, PermanentError)
  end

  def transient(method_name, *exception_classes)
    wrap_errors(method_name, exception_classes, TransientError)
  end

  private

  def wrap_errors(method_name, exception_classes, error_class)
    # Create a module whose method wraps the original via `super`
    wrapper = Module.new do
      define_method(method_name) do |*args, &block|
        begin
          # Call the original method
          super(*args, &block)
        rescue *exception_classes => e
          # Re-raise as the appropriate error type
          raise error_class.new(e.message)
        end
      end
    end

    # Prepend the wrapper module so it sits ahead of the class in the
    # ancestor chain and intercepts instance method calls
    prepend wrapper
  end
end
```
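The `prepend`-based wrapping can be exercised in isolation. In this self-contained sketch, `PermanentError` and `Handler` are stand-ins rather than Prosody's real classes:

```ruby
# Stand-in error type for the sketch.
class PermanentError < StandardError; end

module ErrorClassification
  def permanent(method_name, *exception_classes)
    wrapper = Module.new do
      define_method(method_name) do |*args, &block|
        begin
          super(*args, &block)
        rescue *exception_classes => e
          raise PermanentError, e.message
        end
      end
    end
    # Prepend so `super` in the wrapper reaches the original method.
    prepend wrapper
  end
end

class Handler
  extend ErrorClassification

  def on_message(payload)
    Integer(payload) # raises ArgumentError on non-numeric input
  end

  permanent :on_message, ArgumentError
end

Handler.new.on_message("42") # => 42

begin
  Handler.new.on_message("oops")
rescue PermanentError
  # the ArgumentError was reclassified as permanent
end
```

Prepending (rather than extending or aliasing) keeps the wrapper transparent: the handler class still defines `on_message` normally, and the classification layer can be stacked per method.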
## OpenTelemetry Integration

Prosody automatically traces message processing across the Ruby-Rust boundary:

```mermaid
graph TD
    A[Producer Code] -->|creates span| B[Kafka Message]
    B -->|span context in headers| C[Consumer]
    C -->|extracts context| D[Rust Processing]
    D -->|propagates context| E[Ruby Handler]
    E -->|creates child span| F[Your Business Logic]
```

This gives you end-to-end visibility of message processing across systems.
## Key Technical Points

### Memory Safety Across Languages

Rust's ownership system prevents memory leaks and data races, but special care is needed at the Ruby boundary:

- Ruby objects referenced by Rust are protected from garbage collection
- Rust manages memory that crosses the boundary
- Cleanup happens automatically when objects are no longer needed

The bridge uses several techniques to ensure safety:

1. **BoxValue**: Safely wraps Ruby values and prevents them from being garbage collected
2. **AtomicTake**: Ensures values can only be consumed once
3. **Queues**: Provide coordination between Rust and Ruby

### GVL Management

The Bridge carefully manages Ruby's GVL:

```
Ruby normally allows only one thread to run Ruby code at a time:

Thread 1 ---> Ruby VM ---> Thread 2 ---> Thread 3
running       (GVL)        waiting       waiting

Prosody can temporarily release the GVL when running Rust code:

Thread 1 ---> Ruby VM <--- Thread 2      Thread 3
running Rust  (GVL)        running       waiting
code                       Ruby code

The bridge poll loop constantly alternates between:
1. Releasing the GVL to handle Rust operations efficiently
2. Acquiring the GVL to execute Ruby code when needed
```

### Yielding For Maximum Concurrency

Prosody uses queues extensively to coordinate between components:

- **Command Queue**: Rust → Ruby function execution
- **Result Queue**: Ruby → Rust result reporting
- **Cancellation Queue**: Signaling task cancellation
- **Bridge Queue**: Waiting for async operations without blocking

The key advantage of using queues is that operations like `Queue#pop` yield the current fiber instead of blocking the entire thread. This allows other fibers to run concurrently, maximizing throughput.
## Practical Implications

Understanding this architecture helps you write better Prosody applications:

1. **Write Non-Blocking Handlers**: Use async I/O in your handlers to maximize concurrency

   ```ruby
   def on_message(context, message)
     # Good: Uses async HTTP client that yields while waiting
     response = AsyncHTTP.get(message.payload["url"])

     # When response arrives, processing resumes
     save_result(response)
   end
   ```

2. **Classify Errors Properly**: Use `permanent` and `transient` to control retry behavior

   ```ruby
   # Retry network errors, but not data errors
   transient :on_message,
     NetworkError,            # Network might recover
     ServiceUnavailableError, # Service might become available
     DatabaseConnectionError  # Database might come back online

   permanent :on_message,
     ValidationError,   # Bad data - won't fix itself
     DataFormatError,   # Schema mismatch - retrying won't help
     AuthorizationError # Permission issues need manual fixing
   ```

3. **Ensure Clean Shutdown**: Always call `unsubscribe` before exiting

   ```ruby
   # Set up a shutdown queue
   shutdown = Queue.new

   # Configure signal handlers to trigger shutdown
   Signal.trap("INT") { shutdown.push(nil) }
   Signal.trap("TERM") { shutdown.push(nil) }

   # Subscribe to messages
   client.subscribe(MyHandler.new)

   # Block until a signal is received
   shutdown.pop

   # Clean shutdown
   puts "Shutting down gracefully..."
   client.unsubscribe
   ```

4. **Monitor with Traces**: Use OpenTelemetry to understand message processing

   ```ruby
   def on_message(context, message)
     # Create a child span (parent span is automatic)
     OpenTelemetry.tracer.in_span("process_payment") do |span|
       payment = message.payload

       # Add context to help with debugging and monitoring
       span.add_attributes({
         "payment.id" => payment["id"],
         "payment.amount" => payment["amount"]
       })

       # Process the message
       process_payment(payment)
     end
   end
   ```

By leveraging Prosody's architecture, you can build high-performance, reliable Kafka applications in Ruby that make efficient use of your system's resources.