pgmq-ruby 0.1.0 → 0.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.coditsu/ci.yml +3 -0
- data/.github/workflows/ci.yml +161 -0
- data/.github/workflows/push.yml +35 -0
- data/.gitignore +67 -0
- data/.rspec +1 -0
- data/.ruby-version +1 -0
- data/.yard-lint.yml +168 -0
- data/CHANGELOG.md +103 -0
- data/Gemfile +15 -0
- data/Gemfile.lock +65 -0
- data/LICENSE +165 -0
- data/README.md +627 -0
- data/Rakefile +4 -0
- data/docker-compose.yml +22 -0
- data/lib/pgmq/client/consumer.rb +155 -0
- data/lib/pgmq/client/maintenance.rb +46 -0
- data/lib/pgmq/client/message_lifecycle.rb +240 -0
- data/lib/pgmq/client/metrics.rb +49 -0
- data/lib/pgmq/client/multi_queue.rb +193 -0
- data/lib/pgmq/client/producer.rb +80 -0
- data/lib/pgmq/client/queue_management.rb +112 -0
- data/lib/pgmq/client.rb +138 -0
- data/lib/pgmq/connection.rb +196 -0
- data/lib/pgmq/errors.rb +30 -0
- data/lib/pgmq/message.rb +45 -0
- data/lib/pgmq/metrics.rb +37 -0
- data/lib/pgmq/queue_metadata.rb +37 -0
- data/lib/pgmq/transaction.rb +105 -0
- data/lib/pgmq/version.rb +6 -0
- data/lib/pgmq.rb +53 -0
- data/pgmq-ruby.gemspec +32 -0
- data/renovate.json +18 -0
- metadata +66 -4
data/README.md
ADDED
@@ -0,0 +1,627 @@
# PGMQ-Ruby

[](https://badge.fury.io/rb/pgmq-ruby)
[](https://github.com/mensfeld/pgmq-ruby/actions)

**Ruby client for [PGMQ](https://github.com/pgmq/pgmq) - PostgreSQL Message Queue**

## What is PGMQ-Ruby?

PGMQ-Ruby is a Ruby client for PGMQ (PostgreSQL Message Queue). It provides direct access to all PGMQ operations with a clean, minimal API - similar to how [rdkafka-ruby](https://github.com/karafka/rdkafka-ruby) relates to Kafka.

**Think of it as:**

- **Like AWS SQS** - but running entirely in PostgreSQL with no external dependencies
- **Like Sidekiq/Resque** - but without Redis, using PostgreSQL for both data and queues
- **Like rdkafka-ruby** - a thin, efficient wrapper around the underlying system (PGMQ SQL functions)

> **Architecture Note**: This library follows the rdkafka-ruby/Karafka pattern - `pgmq-ruby` is the low-level foundation, while higher-level features (job processing, Rails integration, retry strategies) will live in `pgmq-framework` (similar to how Karafka builds on rdkafka-ruby).

## Table of Contents

- [PGMQ Feature Support](#pgmq-feature-support)
- [Requirements](#requirements)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Connection Options](#connection-options)
- [API Reference](#api-reference)
- [Message Object](#message-object)
- [Working with JSON](#working-with-json)
- [Development](#development)
- [Author](#author)

## PGMQ Feature Support

This gem provides complete support for all core PGMQ SQL functions. Based on the [official PGMQ API](https://pgmq.github.io/pgmq/):

| Category | Method | Description | Status |
|----------|--------|-------------|--------|
| **Sending** | `send` | Send single message with optional delay | ✅ |
| | `send_batch` | Send multiple messages atomically | ✅ |
| **Reading** | `read` | Read single message with visibility timeout | ✅ |
| | `read_batch` | Read multiple messages with visibility timeout | ✅ |
| | `read_with_poll` | Long-polling for efficient message consumption | ✅ |
| | `pop` | Atomic read + delete operation | ✅ |
| **Deleting/Archiving** | `delete` | Delete single message | ✅ |
| | `delete_batch` | Delete multiple messages | ✅ |
| | `archive` | Archive single message for long-term storage | ✅ |
| | `archive_batch` | Archive multiple messages | ✅ |
| | `purge_queue` | Remove all messages from queue | ✅ |
| **Queue Management** | `create` | Create standard queue | ✅ |
| | `create_partitioned` | Create partitioned queue (requires pg_partman) | ✅ |
| | `create_unlogged` | Create unlogged queue (faster, no crash recovery) | ✅ |
| | `drop_queue` | Delete queue and all messages | ✅ |
| | `detach_archive` | Detach archive table from queue | ✅ |
| **Utilities** | `set_vt` | Update message visibility timeout | ✅ |
| | `list_queues` | List all queues with metadata | ✅ |
| | `metrics` | Get queue metrics (length, age, total messages) | ✅ |
| | `metrics_all` | Get metrics for all queues | ✅ |
| **Ruby Enhancements** | Transaction Support | Atomic operations via `client.transaction do \|txn\|` | ✅ |
| | Conditional Filtering | Server-side JSONB filtering with `conditional:` | ✅ |
| | Multi-Queue Ops | Read/pop/delete/archive from multiple queues | ✅ |
| | Queue Validation | 48-character limit and name validation | ✅ |
| | Connection Pooling | Thread-safe connection pool for concurrency | ✅ |
| | Pluggable Serializers | JSON (default) with custom serializer support | ✅ |

## Requirements

- Ruby 3.2+
- PostgreSQL 14-18 with the PGMQ extension installed (see the note below on enabling it)

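The PGMQ extension also needs to be enabled in each database you connect to. Assuming the extension files are already present on the server (they ship with the official PGMQ Docker images), this is a one-time SQL step; if your provisioning already handles it, the statement below is a harmless no-op:

```sql
-- One-time setup per database; assumes the pgmq extension files are installed on the server.
CREATE EXTENSION IF NOT EXISTS pgmq;
```
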
## Installation

Add to your Gemfile:

```ruby
gem 'pgmq-ruby'
```

Or install directly:

```bash
gem install pgmq-ruby
```

## Quick Start

### Basic Usage

```ruby
require 'pgmq'

# Connect to database
client = PGMQ::Client.new(
  host: 'localhost',
  port: 5432,
  dbname: 'mydb',
  user: 'postgres',
  password: 'secret'
)

# Create a queue
client.create('orders')

# Send a message (must be JSON string)
msg_id = client.send('orders', '{"order_id":123,"total":99.99}')

# Read a message (30 second visibility timeout)
msg = client.read('orders', vt: 30)
puts msg.message # => "{\"order_id\":123,\"total\":99.99}" (raw JSON string)

# Parse and process (you handle deserialization)
data = JSON.parse(msg.message)
process_order(data)
client.delete('orders', msg.msg_id)

# Or archive for long-term storage
client.archive('orders', msg.msg_id)

# Clean up
client.drop_queue('orders')
client.close
```

### Rails Integration (Reusing ActiveRecord Connection)

```ruby
# config/initializers/pgmq.rb or in your model
class OrderProcessor
  def initialize
    # Reuse Rails' connection pool - no separate connection needed!
    @client = PGMQ::Client.new(-> { ActiveRecord::Base.connection.raw_connection })
  end

  def process_orders
    loop do
      msg = @client.read('orders', vt: 30)
      break unless msg

      # Parse JSON yourself
      data = JSON.parse(msg.message)
      process_order(data)
      @client.delete('orders', msg.msg_id)
    end
  end
end
```

## Connection Options

PGMQ-Ruby supports multiple ways to connect:

### Connection Hash

```ruby
client = PGMQ::Client.new(
  host: 'localhost',
  port: 5432,
  dbname: 'mydb',
  user: 'postgres',
  password: 'secret',
  pool_size: 5,    # Default: 5
  pool_timeout: 5  # Default: 5 seconds
)
```

### Connection String

```ruby
client = PGMQ::Client.new('postgres://user:pass@localhost:5432/dbname')
```

### Rails ActiveRecord (Recommended for Rails apps)

```ruby
# Reuses Rails connection pool - no additional connections needed
client = PGMQ::Client.new(-> { ActiveRecord::Base.connection.raw_connection })
```

### Custom Connection Pool

```ruby
# Bring your own connection management
connection = PGMQ::Connection.new('postgres://localhost/mydb', pool_size: 10)
client = PGMQ::Client.new(connection)
```

### Connection Pool Features

PGMQ-Ruby includes connection pooling with resilience:

```ruby
# Configure pool size and timeouts
client = PGMQ::Client.new(
  'postgres://localhost/mydb',
  pool_size: 10,        # Number of connections (default: 5)
  pool_timeout: 5,      # Timeout in seconds (default: 5)
  auto_reconnect: true  # Auto-reconnect on connection loss (default: true)
)

# Monitor connection pool health
stats = client.stats
puts "Pool size: #{stats[:size]}"      # => 10
puts "Available: #{stats[:available]}" # => 8 (2 in use)

# Disable auto-reconnect if you prefer explicit error handling
client = PGMQ::Client.new(
  'postgres://localhost/mydb',
  auto_reconnect: false
)
```

**Connection Pool Benefits:**

- **Thread-safe** - Multiple threads can safely share a single client
- **Fiber-aware** - Works with Ruby 3.0+ Fiber Scheduler for non-blocking I/O
- **Auto-reconnect** - Recovers from lost connections (configurable)
- **Health checks** - Verifies connections before use to prevent stale connection errors
- **Monitoring** - Track pool utilization with `client.stats`

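Because the pool is thread-safe, a single client can be shared by several worker threads. A minimal sketch (queue name, thread count, and processing logic are illustrative, not part of the gem):

```ruby
require 'pgmq'
require 'json'

client = PGMQ::Client.new('postgres://localhost/mydb', pool_size: 10)

# Four workers drain the same queue concurrently. Each read/delete call
# checks a connection out of the shared pool and returns it afterwards.
workers = 4.times.map do
  Thread.new do
    while (msg = client.read('orders', vt: 30))
      data = JSON.parse(msg.message)
      # ... handle data ...
      client.delete('orders', msg.msg_id)
    end
  end
end

workers.each(&:join)
p client.stats # e.g. { size: 10, available: 10 } once all workers are done
```
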
## API Reference

### Queue Management

```ruby
# Create a queue
client.create("queue_name")

# Create partitioned queue (requires pg_partman)
client.create_partitioned("queue_name",
  partition_interval: "daily",
  retention_interval: "7 days"
)

# Create unlogged queue (faster, no crash recovery)
client.create_unlogged("queue_name")

# Drop queue
client.drop_queue("queue_name")

# List all queues
queues = client.list_queues
# => [#<PGMQ::QueueMetadata queue_name="orders" created_at=...>, ...]
```

#### Queue Naming Rules

Queue names must follow PostgreSQL identifier rules with PGMQ-specific constraints:

- **Shorter than 48 characters** (PGMQ enforces this limit to leave room for its table prefixes)
- Must start with a letter or underscore
- Can contain only letters, digits, and underscores
- Case-sensitive

**Valid Queue Names:**

```ruby
client.create("orders")        # ✓ Simple name
client.create("high_priority") # ✓ With underscore
client.create("Queue123")      # ✓ With numbers
client.create("_internal")     # ✓ Starts with underscore
client.create("a" * 47)        # ✓ Maximum length (47 chars)
```

**Invalid Queue Names:**

```ruby
client.create("123orders") # ✗ Starts with number
client.create("my-queue")  # ✗ Contains hyphen
client.create("my.queue")  # ✗ Contains period
client.create("a" * 48)    # ✗ Too long (48+ chars)
# Raises PGMQ::Errors::InvalidQueueNameError
```

### Sending Messages

```ruby
# Send single message (must be JSON string)
msg_id = client.send("queue_name", '{"data":"value"}')

# Send with delay (seconds)
msg_id = client.send("queue_name", '{"data":"value"}', delay: 60)

# Send batch (array of JSON strings)
msg_ids = client.send_batch("queue_name", [
  '{"order":1}',
  '{"order":2}',
  '{"order":3}'
])
# => ["101", "102", "103"]
```

### Reading Messages

```ruby
# Read single message
msg = client.read("queue_name", vt: 30)
# => #<PGMQ::Message msg_id="1" message="{...}">

# Read batch
messages = client.read_batch("queue_name", vt: 30, qty: 10)

# Read with long-polling
msg = client.read_with_poll("queue_name",
  vt: 30,
  qty: 1,
  max_poll_seconds: 5,
  poll_interval_ms: 100
)

# Pop (atomic read + delete)
msg = client.pop("queue_name")
```

#### Conditional Message Filtering

Filter messages by JSON payload content using server-side JSONB queries:

```ruby
# Filter by single condition
msg = client.read("orders", vt: 30, conditional: { status: "pending" })

# Filter by multiple conditions (AND logic)
msg = client.read("orders", vt: 30, conditional: {
  status: "pending",
  priority: "high"
})

# Filter by nested properties
msg = client.read("orders", vt: 30, conditional: {
  user: { role: "admin" }
})

# Works with read_batch
messages = client.read_batch("orders",
  vt: 30,
  qty: 10,
  conditional: { type: "priority" }
)

# Works with long-polling
messages = client.read_with_poll("orders",
  vt: 30,
  max_poll_seconds: 5,
  conditional: { status: "ready" }
)
```

**How Filtering Works:**

- Filtering happens in PostgreSQL using the JSONB containment operator (`@>`)
- Only messages matching **ALL** conditions are returned (AND logic)
- The `qty` parameter applies **after** filtering
- Empty conditions `{}` mean no filtering (same as omitting the parameter)

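For intuition, a call like `client.read("orders", vt: 30, conditional: { status: "pending" })` boils down to a JSONB containment check against the queue table. The SQL below only illustrates that `@>` check - the statement PGMQ actually runs also bumps `read_ct` and the visibility timeout:

```sql
-- Illustrative only: the containment filter applied to the orders queue table.
SELECT msg_id, read_ct, enqueued_at, vt, message
FROM pgmq.q_orders
WHERE vt <= clock_timestamp()
  AND message @> '{"status": "pending"}'::jsonb
ORDER BY msg_id
LIMIT 1;
```
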
**Performance Tip:** For frequently filtered fields, add JSONB indexes:

```sql
CREATE INDEX idx_orders_status
  ON pgmq.q_orders USING gin ((message->'status'));
```

### Message Lifecycle

```ruby
# Delete message
client.delete("queue_name", msg_id)

# Delete batch
deleted_ids = client.delete_batch("queue_name", [101, 102, 103])

# Archive message
client.archive("queue_name", msg_id)

# Archive batch
archived_ids = client.archive_batch("queue_name", [101, 102, 103])

# Update visibility timeout
msg = client.set_vt("queue_name", msg_id, vt_offset: 60)

# Purge all messages
count = client.purge_queue("queue_name")
```

### Monitoring

```ruby
# Get queue metrics
metrics = client.metrics("queue_name")
puts metrics.queue_length       # => 42
puts metrics.oldest_msg_age_sec # => 120
puts metrics.newest_msg_age_sec # => 5
puts metrics.total_messages     # => 1000

# Get all queue metrics
all_metrics = client.metrics_all
all_metrics.each do |m|
  puts "#{m.queue_name}: #{m.queue_length} messages"
end
```

### Transaction Support

Low-level PostgreSQL transaction support for atomic operations. Transactions are a database primitive provided by PostgreSQL - this is a thin wrapper for convenience.

Execute atomic operations across multiple queues or combine queue operations with application data updates:

```ruby
# Atomic operations across multiple queues
client.transaction do |txn|
  # Send to multiple queues atomically
  txn.send("orders", '{"order_id":123}')
  txn.send("notifications", '{"user_id":456,"type":"order_created"}')
  txn.send("analytics", '{"event":"order_placed"}')
end

# Process message and update application state atomically
client.transaction do |txn|
  # Read and process message
  msg = txn.read("orders", vt: 30)

  if msg
    # Parse and update your database
    data = JSON.parse(msg.message)
    Order.create!(external_id: data["order_id"])

    # Delete message only if database update succeeds
    txn.delete("orders", msg.msg_id)
  end
end

# Automatic rollback on errors
client.transaction do |txn|
  txn.send("queue1", '{"data":"message1"}')
  txn.send("queue2", '{"data":"message2"}')

  raise "Something went wrong!"
  # Both messages are rolled back - neither queue receives anything
end

# Move messages between queues atomically
client.transaction do |txn|
  msg = txn.read("pending_orders", vt: 30)

  if msg
    data = JSON.parse(msg.message)
    if data["priority"] == "high"
      # Move to high-priority queue
      txn.send("priority_orders", msg.message)
      txn.delete("pending_orders", msg.msg_id)
    end
  end
end
```

**How Transactions Work:**

- Wraps PostgreSQL's native transaction support (similar to rdkafka-ruby providing Kafka transactions)
- All operations within the block execute in a single PostgreSQL transaction
- If any operation fails, the entire transaction is rolled back automatically
- The transactional client delegates all `PGMQ::Client` methods for convenience

**Use Cases:**

- **Multi-queue coordination**: Send related messages to multiple queues atomically
- **Exactly-once processing**: Combine message deletion with application state updates
- **Message routing**: Move messages between queues without losing data
- **Batch operations**: Ensure all-or-nothing semantics for bulk operations

**Important Notes:**

- Transactions hold database locks - keep them short to avoid blocking
- Long transactions can impact queue throughput
- Read operations with long visibility timeouts may cause lock contention
- Consider using `pop()` for atomic read+delete in simple cases

## Message Object

PGMQ-Ruby is a **low-level transport library** - it returns raw values from PostgreSQL without any transformation. You are responsible for parsing JSON and type conversion.

```ruby
msg = client.read("queue", vt: 30)

# All values are strings as returned by PostgreSQL
msg.msg_id      # => "123" (String, not Integer)
msg.id          # => "123" (alias for msg_id)
msg.read_ct     # => "1" (String, not Integer)
msg.enqueued_at # => "2025-01-15 10:30:00+00" (String, not Time)
msg.vt          # => "2025-01-15 10:30:30+00" (String, not Time)
msg.message     # => "{\"data\":\"value\"}" (Raw JSONB as JSON string)
msg.headers     # => "{\"trace_id\":\"abc123\"}" (Raw JSONB as JSON string, optional)
msg.queue_name  # => "my_queue" (only present for multi-queue operations, otherwise nil)

# You handle JSON parsing
data = JSON.parse(msg.message)                    # => { "data" => "value" }
metadata = JSON.parse(msg.headers) if msg.headers # => { "trace_id" => "abc123" }

# You handle type conversion if needed
id = msg.msg_id.to_i                   # => 123
read_count = msg.read_ct.to_i          # => 1
enqueued = Time.parse(msg.enqueued_at) # => 2025-01-15 10:30:00 UTC
```

### Message Headers

PGMQ supports optional message headers via the `headers` JSONB column:

```ruby
# Sending with headers requires direct SQL or a custom wrapper
# (pgmq-ruby focuses on the core PGMQ API which doesn't have a send_with_headers function)

# Reading messages with headers
msg = client.read("queue", vt: 30)
if msg.headers
  metadata = JSON.parse(msg.headers)
  trace_id = metadata["trace_id"]
  correlation_id = metadata["correlation_id"]
end
```

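If you need to attach headers when sending, newer PGMQ releases add a headers JSONB argument to `pgmq.send`, which you can call with plain SQL over your own connection. Treat the snippet below as a sketch - the available overloads depend on your installed PGMQ version, so check `\df pgmq.send` first:

```sql
-- Assumes a PGMQ version whose pgmq.send accepts (queue_name, msg, headers[, delay]).
SELECT pgmq.send('orders', '{"order_id":123}'::jsonb, '{"trace_id":"abc123"}'::jsonb);
```
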
### Why Raw Values?

This library follows the **rdkafka-ruby philosophy** - provide a thin, performant wrapper around the underlying system:

1. **No assumptions** - Your application decides how to parse timestamps, convert types, etc.
2. **Framework-agnostic** - Works equally well with Rails, Sinatra, or plain Ruby
3. **Zero overhead** - No hidden type conversion or object allocation
4. **Explicit control** - You see exactly what PostgreSQL returns

Higher-level features (automatic deserialization, type conversion, instrumentation) belong in framework layers built on top of this library.

## Working with JSON

PGMQ stores messages as JSONB in PostgreSQL. You must handle JSON serialization yourself:

### Sending Messages

```ruby
# Simple hash
msg = { order_id: 123, status: "pending" }
client.send("orders", msg.to_json)

# Using JSON.generate for explicit control
client.send("orders", JSON.generate(order_id: 123, status: "pending"))

# Pre-serialized JSON string
json_str = '{"order_id":123,"status":"pending"}'
client.send("orders", json_str)
```

### Reading Messages

```ruby
msg = client.read("orders", vt: 30)

# Parse JSON yourself
data = JSON.parse(msg.message)
puts data["order_id"] # => 123
puts data["status"]   # => "pending"

# Handle parsing errors
begin
  data = JSON.parse(msg.message)
rescue JSON::ParserError => e
  logger.error "Invalid JSON in message #{msg.msg_id}: #{e.message}"
  client.delete("orders", msg.msg_id) # Remove invalid message
end
```

### Helper Pattern (Optional)

For convenience, you can wrap the client in your own helper:

```ruby
require 'ostruct'

class QueueHelper
  def initialize(client)
    @client = client
  end

  def send(queue, data)
    @client.send(queue, data.to_json)
  end

  def read(queue, vt:)
    msg = @client.read(queue, vt: vt)
    return nil unless msg

    OpenStruct.new(
      id: msg.msg_id.to_i,
      data: JSON.parse(msg.message),
      read_count: msg.read_ct.to_i,
      raw: msg
    )
  end
end

helper = QueueHelper.new(client)
helper.send("orders", { order_id: 123 })
msg = helper.read("orders", vt: 30)
puts msg.data["order_id"] # => 123
```

## Development

```bash
# Clone repository
git clone https://github.com/mensfeld/pgmq-ruby.git
cd pgmq-ruby

# Install dependencies
bundle install

# Start PostgreSQL with PGMQ
docker compose up -d

# Run tests
bundle exec rspec

# Run console
bundle exec bin/console
```

## Author

Maintained by [Maciej Mensfeld](https://github.com/mensfeld)

Also check out [Karafka](https://karafka.io) - High-performance Apache Kafka framework for Ruby.
data/Rakefile
ADDED
data/docker-compose.yml
ADDED
@@ -0,0 +1,22 @@
version: '3.8'

services:
  postgres:
    image: ghcr.io/pgmq/pg18-pgmq:v1.7.0
    container_name: pgmq_postgres_test
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: pgmq_test
    ports:
      - "5433:5432" # Use port 5433 locally to avoid conflicts
    volumes:
      - pgmq_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgmq_data: null