pgmq-ruby 0.1.0 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md ADDED
@@ -0,0 +1,687 @@
+ # PGMQ-Ruby
+
+ [![Gem Version](https://badge.fury.io/rb/pgmq-ruby.svg)](https://badge.fury.io/rb/pgmq-ruby)
+ [![Build Status](https://github.com/mensfeld/pgmq-ruby/workflows/CI/badge.svg)](https://github.com/mensfeld/pgmq-ruby/actions)
+
+ **Ruby client for [PGMQ](https://github.com/pgmq/pgmq) - PostgreSQL Message Queue**
+
+ ## What is PGMQ-Ruby?
+
+ PGMQ-Ruby is a Ruby client for PGMQ (PostgreSQL Message Queue). It provides direct access to all PGMQ operations with a clean, minimal API - similar to how [rdkafka-ruby](https://github.com/karafka/rdkafka-ruby) relates to Kafka.
+
+ **Think of it as:**
+
+ - **Like AWS SQS** - but running entirely in PostgreSQL with no external dependencies
+ - **Like Sidekiq/Resque** - but without Redis, using PostgreSQL for both data and queues
+ - **Like rdkafka-ruby** - a thin, efficient wrapper around the underlying system (PGMQ SQL functions)
+
+ > **Architecture Note**: This library follows the rdkafka-ruby/Karafka pattern - `pgmq-ruby` is the low-level foundation, while higher-level features (job processing, Rails integration, retry strategies) will live in `pgmq-framework` (similar to how Karafka builds on rdkafka-ruby).
+
+ ## Table of Contents
+
+ - [PGMQ Feature Support](#pgmq-feature-support)
+ - [Requirements](#requirements)
+ - [Installation](#installation)
+ - [Quick Start](#quick-start)
+ - [Connection Options](#connection-options)
+ - [API Reference](#api-reference)
+ - [Message Object](#message-object)
+ - [Working with JSON](#working-with-json)
+ - [Development](#development)
+ - [Author](#author)
+
+
+ ## PGMQ Feature Support
+
+ This gem provides complete support for all core PGMQ SQL functions. Based on the [official PGMQ API](https://pgmq.github.io/pgmq/):
+
+ | Category | Method | Description | Status |
+ |----------|--------|-------------|--------|
+ | **Producing** | `produce` | Send single message with optional delay and headers | ✅ |
+ | | `produce_batch` | Send multiple messages atomically with headers | ✅ |
+ | **Reading** | `read` | Read single message with visibility timeout | ✅ |
+ | | `read_batch` | Read multiple messages with visibility timeout | ✅ |
+ | | `read_with_poll` | Long-polling for efficient message consumption | ✅ |
+ | | `pop` | Atomic read + delete operation | ✅ |
+ | | `pop_batch` | Atomic batch read + delete operation | ✅ |
+ | **Deleting/Archiving** | `delete` | Delete single message | ✅ |
+ | | `delete_batch` | Delete multiple messages | ✅ |
+ | | `archive` | Archive single message for long-term storage | ✅ |
+ | | `archive_batch` | Archive multiple messages | ✅ |
+ | | `purge_queue` | Remove all messages from queue | ✅ |
+ | **Queue Management** | `create` | Create standard queue | ✅ |
+ | | `create_partitioned` | Create partitioned queue (requires pg_partman) | ✅ |
+ | | `create_unlogged` | Create unlogged queue (faster, no crash recovery) | ✅ |
+ | | `drop_queue` | Delete queue and all messages | ✅ |
+ | | `detach_archive` | Detach archive table from queue | ✅ |
+ | **Utilities** | `set_vt` | Update message visibility timeout | ✅ |
+ | | `set_vt_batch` | Batch update visibility timeouts | ✅ |
+ | | `set_vt_multi` | Update visibility timeouts across multiple queues | ✅ |
+ | | `list_queues` | List all queues with metadata | ✅ |
+ | | `metrics` | Get queue metrics (length, age, total messages) | ✅ |
+ | | `metrics_all` | Get metrics for all queues | ✅ |
+ | | `enable_notify_insert` | Enable PostgreSQL NOTIFY on insert | ✅ |
+ | | `disable_notify_insert` | Disable notifications | ✅ |
+ | **Ruby Enhancements** | Transaction Support | Atomic operations via `client.transaction do \|txn\|` | ✅ |
+ | | Conditional Filtering | Server-side JSONB filtering with `conditional:` | ✅ |
+ | | Multi-Queue Ops | Read/pop/delete/archive from multiple queues | ✅ |
+ | | Queue Validation | 48-character limit and name validation | ✅ |
+ | | Connection Pooling | Thread-safe connection pool for concurrency | ✅ |
+ | | Pluggable Serializers | JSON (default) with custom serializer support | ✅ |
+
+
+ ## Requirements
+
+ - Ruby 3.2+
+ - PostgreSQL 14-18 with PGMQ extension installed
+
+ ## Installation
+
+ Add to your Gemfile:
+
+ ```ruby
+ gem 'pgmq-ruby'
+ ```
+
+ Or install directly:
+
+ ```bash
+ gem install pgmq-ruby
+ ```
+
+ ## Quick Start
+
+ ### Basic Usage
+
+ ```ruby
+ require 'pgmq'
+ require 'json'
+
+ # Connect to database
+ client = PGMQ::Client.new(
+   host: 'localhost',
+   port: 5432,
+   dbname: 'mydb',
+   user: 'postgres',
+   password: 'secret'
+ )
+
+ # Create a queue
+ client.create('orders')
+
+ # Send a message (must be a JSON string)
+ msg_id = client.produce('orders', '{"order_id":123,"total":99.99}')
+
+ # Read a message (30 second visibility timeout)
+ msg = client.read('orders', vt: 30)
+ puts msg.message # => "{\"order_id\":123,\"total\":99.99}" (raw JSON string)
+
+ # Parse and process (you handle deserialization)
+ data = JSON.parse(msg.message)
+ process_order(data)
+ client.delete('orders', msg.msg_id)
+
+ # Or archive for long-term storage
+ client.archive('orders', msg.msg_id)
+
+ # Clean up
+ client.drop_queue('orders')
+ client.close
+ ```
+
+ ### Rails Integration (Reusing ActiveRecord Connection)
+
+ ```ruby
+ # config/initializers/pgmq.rb or in your model
+ class OrderProcessor
+   def initialize
+     # Reuse Rails' connection pool - no separate connection needed!
+     @client = PGMQ::Client.new(-> { ActiveRecord::Base.connection.raw_connection })
+   end
+
+   def process_orders
+     loop do
+       msg = @client.read('orders', vt: 30)
+       break unless msg
+
+       # Parse JSON yourself
+       data = JSON.parse(msg.message)
+       process_order(data)
+       @client.delete('orders', msg.msg_id)
+     end
+   end
+ end
+ ```
+
+ ## Connection Options
+
+ PGMQ-Ruby supports multiple ways to connect:
+
+ ### Connection Hash
+
+ ```ruby
+ client = PGMQ::Client.new(
+   host: 'localhost',
+   port: 5432,
+   dbname: 'mydb',
+   user: 'postgres',
+   password: 'secret',
+   pool_size: 5,    # Default: 5
+   pool_timeout: 5  # Default: 5 seconds
+ )
+ ```
+
+ ### Connection String
+
+ ```ruby
+ client = PGMQ::Client.new('postgres://user:pass@localhost:5432/dbname')
+ ```
+
+ ### Rails ActiveRecord (Recommended for Rails apps)
+
+ ```ruby
+ # Reuses Rails connection pool - no additional connections needed
+ client = PGMQ::Client.new(-> { ActiveRecord::Base.connection.raw_connection })
+ ```
+
+ ### Custom Connection Pool
+
+ ```ruby
+ # Bring your own connection management
+ connection = PGMQ::Connection.new('postgres://localhost/mydb', pool_size: 10)
+ client = PGMQ::Client.new(connection)
+ ```
+
+ ### Connection Pool Features
+
+ PGMQ-Ruby includes connection pooling with built-in resilience:
+
+ ```ruby
+ # Configure pool size and timeouts
+ client = PGMQ::Client.new(
+   'postgres://localhost/mydb',
+   pool_size: 10,       # Number of connections (default: 5)
+   pool_timeout: 5,     # Timeout in seconds (default: 5)
+   auto_reconnect: true # Auto-reconnect on connection loss (default: true)
+ )
+
+ # Monitor connection pool health
+ stats = client.stats
+ puts "Pool size: #{stats[:size]}"      # => 10
+ puts "Available: #{stats[:available]}" # => 8 (2 in use)
+
+ # Disable auto-reconnect if you prefer explicit error handling
+ client = PGMQ::Client.new(
+   'postgres://localhost/mydb',
+   auto_reconnect: false
+ )
+ ```
+
+ **Connection Pool Benefits:**
+
+ - **Thread-safe** - Multiple threads can safely share a single client (see the sketch after this list)
+ - **Fiber-aware** - Works with the Ruby 3.0+ Fiber Scheduler for non-blocking I/O
+ - **Auto-reconnect** - Recovers from lost connections (configurable)
+ - **Health checks** - Verifies connections before use to prevent stale connection errors
+ - **Monitoring** - Track pool utilization with `client.stats`
+
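+ Because the pool hands each thread its own connection, a single client can back a simple multi-threaded worker. A minimal sketch (the queue name, worker count, and `handle_order` processing step are illustrative, not part of the API):
+
+ ```ruby
+ require 'pgmq'
+ require 'json'
+
+ client = PGMQ::Client.new('postgres://localhost/mydb', pool_size: 4)
+
+ workers = 4.times.map do
+   Thread.new do
+     loop do
+       # Each call checks a connection out of the pool and returns it when done
+       msg = client.read('orders', vt: 30)
+
+       if msg
+         data = JSON.parse(msg.message)
+         handle_order(data) # hypothetical processing step
+         client.delete('orders', msg.msg_id)
+       else
+         sleep 0.1 # queue empty - back off briefly
+       end
+     end
+   end
+ end
+
+ workers.each(&:join)
+ ```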
+
+ ## API Reference
+
+ ### Queue Management
+
+ ```ruby
+ # Create a queue (returns true if created, false if it already exists)
+ client.create("queue_name") # => true
+ client.create("queue_name") # => false (idempotent)
+
+ # Create partitioned queue (requires pg_partman)
+ client.create_partitioned("queue_name",
+   partition_interval: "daily",
+   retention_interval: "7 days"
+ ) # => true/false
+
+ # Create unlogged queue (faster, no crash recovery)
+ client.create_unlogged("queue_name") # => true/false
+
+ # Drop queue (returns true if dropped, false if it didn't exist)
+ client.drop_queue("queue_name") # => true/false
+
+ # List all queues
+ queues = client.list_queues
+ # => [#<PGMQ::QueueMetadata queue_name="orders" created_at=...>, ...]
+ ```
+
+ #### Queue Naming Rules
+
+ Queue names must follow PostgreSQL identifier rules with PGMQ-specific constraints:
+
+ - **Must be shorter than 48 characters** (PGMQ enforces this limit to leave room for its table prefixes)
+ - Must start with a letter or underscore
+ - Can contain only letters, digits, and underscores
+ - Case-sensitive
+
+ **Valid Queue Names:**
+
+ ```ruby
+ client.create("orders")        # ✓ Simple name
+ client.create("high_priority") # ✓ With underscore
+ client.create("Queue123")      # ✓ With numbers
+ client.create("_internal")     # ✓ Starts with underscore
+ client.create("a" * 47)        # ✓ Maximum length (47 chars)
+ ```
+
+ **Invalid Queue Names:**
+
+ ```ruby
+ client.create("123orders") # ✗ Starts with number
+ client.create("my-queue")  # ✗ Contains hyphen
+ client.create("my.queue")  # ✗ Contains period
+ client.create("a" * 48)    # ✗ Too long (48+ chars)
+ # Raises PGMQ::Errors::InvalidQueueNameError
+ ```
+
+ ### Sending Messages
+
+ ```ruby
+ # Send single message (must be JSON string)
+ msg_id = client.produce("queue_name", '{"data":"value"}')
+
+ # Send with delay (seconds)
+ msg_id = client.produce("queue_name", '{"data":"value"}', delay: 60)
+
+ # Send with headers (for routing, tracing, correlation)
+ msg_id = client.produce("queue_name", '{"data":"value"}',
+   headers: '{"trace_id":"abc123","priority":"high"}')
+
+ # Send with headers and delay
+ msg_id = client.produce("queue_name", '{"data":"value"}',
+   headers: '{"correlation_id":"req-456"}',
+   delay: 60)
+
+ # Send batch (array of JSON strings)
+ msg_ids = client.produce_batch("queue_name", [
+   '{"order":1}',
+   '{"order":2}',
+   '{"order":3}'
+ ])
+ # => ["101", "102", "103"]
+
+ # Send batch with headers (one per message)
+ msg_ids = client.produce_batch("queue_name",
+   ['{"order":1}', '{"order":2}'],
+   headers: ['{"priority":"high"}', '{"priority":"low"}'])
+ ```
+
+ ### Reading Messages
+
+ ```ruby
+ # Read single message
+ msg = client.read("queue_name", vt: 30)
+ # => #<PGMQ::Message msg_id="1" message="{...}">
+
+ # Read batch
+ messages = client.read_batch("queue_name", vt: 30, qty: 10)
+
+ # Read with long-polling
+ msg = client.read_with_poll("queue_name",
+   vt: 30,
+   qty: 1,
+   max_poll_seconds: 5,
+   poll_interval_ms: 100
+ )
+
+ # Pop (atomic read + delete)
+ msg = client.pop("queue_name")
+
+ # Pop batch (atomic read + delete for multiple messages)
+ messages = client.pop_batch("queue_name", 10)
+ ```
+
+ #### Conditional Message Filtering
+
+ Filter messages by JSON payload content using server-side JSONB queries:
+
+ ```ruby
+ # Filter by a single condition
+ msg = client.read("orders", vt: 30, conditional: { status: "pending" })
+
+ # Filter by multiple conditions (AND logic)
+ msg = client.read("orders", vt: 30, conditional: {
+   status: "pending",
+   priority: "high"
+ })
+
+ # Filter by nested properties
+ msg = client.read("orders", vt: 30, conditional: {
+   user: { role: "admin" }
+ })
+
+ # Works with read_batch
+ messages = client.read_batch("orders",
+   vt: 30,
+   qty: 10,
+   conditional: { type: "priority" }
+ )
+
+ # Works with long-polling
+ messages = client.read_with_poll("orders",
+   vt: 30,
+   max_poll_seconds: 5,
+   conditional: { status: "ready" }
+ )
+ ```
+
+ **How Filtering Works:**
+
+ - Filtering happens in PostgreSQL using the JSONB containment operator (`@>`)
+ - Only messages matching **ALL** conditions are returned (AND logic)
+ - The `qty` parameter applies **after** filtering
+ - An empty condition hash `{}` means no filtering (same as omitting the parameter)
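+
+ Under the hood, the conditions hash is passed to PGMQ as a JSONB document, so a filtered read corresponds roughly to a containment predicate like the following (a sketch of the effective SQL, not the exact query PGMQ generates):
+
+ ```sql
+ SELECT * FROM pgmq.q_orders
+ WHERE message @> '{"status": "pending", "priority": "high"}'::jsonb;
+ ```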
+
+ **Performance Tip:** For frequently used filters, add a JSONB containment index:
+
+ ```sql
+ -- jsonb_path_ops GIN indexes specifically accelerate @> containment queries
+ CREATE INDEX idx_orders_message
+   ON pgmq.q_orders USING gin (message jsonb_path_ops);
+ ```
+
+ ### Message Lifecycle
+
+ ```ruby
+ # Delete message
+ client.delete("queue_name", msg_id)
+
+ # Delete batch
+ deleted_ids = client.delete_batch("queue_name", [101, 102, 103])
+
+ # Archive message
+ client.archive("queue_name", msg_id)
+
+ # Archive batch
+ archived_ids = client.archive_batch("queue_name", [101, 102, 103])
+
+ # Update visibility timeout
+ msg = client.set_vt("queue_name", msg_id, vt_offset: 60)
+
+ # Batch update visibility timeout
+ updated_msgs = client.set_vt_batch("queue_name", [101, 102, 103], vt_offset: 60)
+
+ # Update visibility timeout across multiple queues
+ client.set_vt_multi({
+   "orders" => [1, 2, 3],
+   "notifications" => [5, 6]
+ }, vt_offset: 120)
+
+ # Purge all messages
+ count = client.purge_queue("queue_name")
+
+ # Enable PostgreSQL NOTIFY for a queue (for LISTEN-based consumers)
+ client.enable_notify_insert("queue_name", throttle_interval_ms: 250)
+
+ # Disable notifications
+ client.disable_notify_insert("queue_name")
+ ```
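+
+ With notifications enabled, a consumer can block on PostgreSQL `LISTEN` instead of polling. A minimal sketch using the `pg` gem directly, reusing a `client` from earlier; the channel name is an assumption for illustration - check the PGMQ documentation for the channel your version actually notifies on:
+
+ ```ruby
+ require 'pg'
+ require 'json'
+
+ conn = PG.connect(host: 'localhost', dbname: 'mydb', user: 'postgres')
+ conn.exec('LISTEN pgmq_orders_inserts') # hypothetical channel name
+
+ loop do
+   # Block for up to 10 seconds waiting for a NOTIFY from PostgreSQL
+   conn.wait_for_notify(10) do |_channel, _pid, _payload|
+     # Something was inserted - drain the queue with the PGMQ client
+     while (msg = client.read('orders', vt: 30))
+       handle_order(JSON.parse(msg.message)) # hypothetical processing step
+       client.delete('orders', msg.msg_id)
+     end
+   end
+ end
+ ```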
+
+ ### Monitoring
+
+ ```ruby
+ # Get queue metrics
+ metrics = client.metrics("queue_name")
+ puts metrics.queue_length       # => 42
+ puts metrics.oldest_msg_age_sec # => 120
+ puts metrics.newest_msg_age_sec # => 5
+ puts metrics.total_messages     # => 1000
+
+ # Get all queue metrics
+ all_metrics = client.metrics_all
+ all_metrics.each do |m|
+   puts "#{m.queue_name}: #{m.queue_length} messages"
+ end
+ ```
+
+ ### Transaction Support
+
+ Low-level PostgreSQL transaction support for atomic operations. Transactions are a database primitive provided by PostgreSQL - this is a thin wrapper for convenience.
+
+ Execute atomic operations across multiple queues or combine queue operations with application data updates:
+
+ ```ruby
+ # Atomic operations across multiple queues
+ client.transaction do |txn|
+   # Send to multiple queues atomically
+   txn.produce("orders", '{"order_id":123}')
+   txn.produce("notifications", '{"user_id":456,"type":"order_created"}')
+   txn.produce("analytics", '{"event":"order_placed"}')
+ end
+
+ # Process message and update application state atomically
+ client.transaction do |txn|
+   # Read and process message
+   msg = txn.read("orders", vt: 30)
+
+   if msg
+     # Parse and update your database
+     data = JSON.parse(msg.message)
+     Order.create!(external_id: data["order_id"])
+
+     # Delete message only if database update succeeds
+     txn.delete("orders", msg.msg_id)
+   end
+ end
+
+ # Automatic rollback on errors
+ client.transaction do |txn|
+   txn.produce("queue1", '{"data":"message1"}')
+   txn.produce("queue2", '{"data":"message2"}')
+
+   raise "Something went wrong!"
+   # Both messages are rolled back - neither queue receives anything
+ end
+
+ # Move messages between queues atomically
+ client.transaction do |txn|
+   msg = txn.read("pending_orders", vt: 30)
+
+   if msg
+     data = JSON.parse(msg.message)
+     if data["priority"] == "high"
+       # Move to high-priority queue
+       txn.produce("priority_orders", msg.message)
+       txn.delete("pending_orders", msg.msg_id)
+     end
+   end
+ end
+ ```
+
+ **How Transactions Work:**
+
+ - Wraps PostgreSQL's native transaction support (similar to rdkafka-ruby providing Kafka transactions)
+ - All operations within the block execute in a single PostgreSQL transaction
+ - If any operation fails, the entire transaction is rolled back automatically
+ - The transactional client delegates all `PGMQ::Client` methods for convenience
+
+ **Use Cases:**
+
+ - **Multi-queue coordination**: Send related messages to multiple queues atomically
+ - **Exactly-once processing**: Combine message deletion with application state updates
+ - **Message routing**: Move messages between queues without losing data
+ - **Batch operations**: Ensure all-or-nothing semantics for bulk operations
+
+ **Important Notes:**
+
+ - Transactions hold database locks - keep them short to avoid blocking
+ - Long transactions can impact queue throughput
+ - Read operations with long visibility timeouts may cause lock contention
+ - Consider using `pop()` for atomic read+delete in simple cases (see the sketch below)
+
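+ For example, when the only work inside the transaction would be a read followed by a delete, `pop` collapses it into one atomic call - though note the message is deleted immediately, so a crash during processing loses it:
+
+ ```ruby
+ # pop reads and deletes in one atomic server-side operation
+ msg = client.pop("orders")
+ process(JSON.parse(msg.message)) if msg # hypothetical processing step
+ ```
+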
+ ## Message Object
+
+ PGMQ-Ruby is a **low-level transport library** - it returns raw values from PostgreSQL without any transformation. You are responsible for parsing JSON and type conversion.
+
+ ```ruby
+ msg = client.read("queue", vt: 30)
+
+ # All values are strings as returned by PostgreSQL
+ msg.msg_id      # => "123" (String, not Integer)
+ msg.id          # => "123" (alias for msg_id)
+ msg.read_ct     # => "1" (String, not Integer)
+ msg.enqueued_at # => "2025-01-15 10:30:00+00" (String, not Time)
+ msg.vt          # => "2025-01-15 10:30:30+00" (String, not Time)
+ msg.message     # => "{\"data\":\"value\"}" (raw JSONB as JSON string)
+ msg.headers     # => "{\"trace_id\":\"abc123\"}" (raw JSONB as JSON string, optional)
+ msg.queue_name  # => "my_queue" (only present for multi-queue operations, otherwise nil)
+
+ # You handle JSON parsing
+ data = JSON.parse(msg.message)                    # => { "data" => "value" }
+ metadata = JSON.parse(msg.headers) if msg.headers # => { "trace_id" => "abc123" }
+
+ # You handle type conversion if needed (Time.parse needs: require 'time')
+ id = msg.msg_id.to_i                   # => 123
+ read_count = msg.read_ct.to_i          # => 1
+ enqueued = Time.parse(msg.enqueued_at) # => 2025-01-15 10:30:00 UTC
+ ```
+
+ ### Message Headers
+
+ PGMQ supports optional message headers via the `headers` JSONB column. Headers are useful for metadata like routing information, correlation IDs, and distributed tracing:
+
+ ```ruby
+ # Sending a message with headers
+ message = '{"order_id":123}'
+ headers = '{"trace_id":"abc123","priority":"high","correlation_id":"req-456"}'
+
+ msg_id = client.produce("orders", message, headers: headers)
+
+ # Sending with headers and delay
+ msg_id = client.produce("orders", message, headers: headers, delay: 60)
+
+ # Batch produce with headers (one header object per message)
+ messages = ['{"id":1}', '{"id":2}', '{"id":3}']
+ headers = [
+   '{"priority":"high"}',
+   '{"priority":"medium"}',
+   '{"priority":"low"}'
+ ]
+ msg_ids = client.produce_batch("orders", messages, headers: headers)
+
+ # Reading messages with headers
+ msg = client.read("orders", vt: 30)
+ if msg.headers
+   metadata = JSON.parse(msg.headers)
+   trace_id = metadata["trace_id"]
+   priority = metadata["priority"]
+   correlation_id = metadata["correlation_id"]
+ end
+ ```
+
+ Common header use cases:
+
+ - **Distributed tracing**: `trace_id`, `span_id`, `parent_span_id` (see the propagation sketch below)
+ - **Request correlation**: `correlation_id`, `causation_id`
+ - **Routing**: `priority`, `region`, `tenant_id`
+ - **Content metadata**: `content_type`, `encoding`, `version`
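+
+ Headers make it easy to carry context across hops. A small sketch (the queue names and payload are illustrative) that propagates an incoming `trace_id` onto a follow-up message:
+
+ ```ruby
+ msg = client.read("orders", vt: 30)
+
+ if msg
+   incoming = msg.headers ? JSON.parse(msg.headers) : {}
+
+   # Carry the incoming trace_id over to the downstream message
+   client.produce("shipments", '{"order_id":123}',
+     headers: JSON.generate(trace_id: incoming["trace_id"]))
+
+   client.delete("orders", msg.msg_id)
+ end
+ ```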
+
+ ### Why Raw Values?
+
+ This library follows the **rdkafka-ruby philosophy** - provide a thin, performant wrapper around the underlying system:
+
+ 1. **No assumptions** - Your application decides how to parse timestamps, convert types, etc.
+ 2. **Framework-agnostic** - Works equally well with Rails, Sinatra, or plain Ruby
+ 3. **Zero overhead** - No hidden type conversion or object allocation
+ 4. **Explicit control** - You see exactly what PostgreSQL returns
+
+ Higher-level features (automatic deserialization, type conversion, instrumentation) belong in framework layers built on top of this library.
+
+ ## Working with JSON
+
+ PGMQ stores messages as JSONB in PostgreSQL. You must handle JSON serialization yourself:
+
+ ### Sending Messages
+
+ ```ruby
+ # Simple hash
+ msg = { order_id: 123, status: "pending" }
+ client.produce("orders", msg.to_json)
+
+ # Using JSON.generate for explicit control
+ client.produce("orders", JSON.generate(order_id: 123, status: "pending"))
+
+ # Pre-serialized JSON string
+ json_str = '{"order_id":123,"status":"pending"}'
+ client.produce("orders", json_str)
+ ```
+
+ ### Reading Messages
+
+ ```ruby
+ msg = client.read("orders", vt: 30)
+
+ # Parse JSON yourself
+ data = JSON.parse(msg.message)
+ puts data["order_id"] # => 123
+ puts data["status"]   # => "pending"
+
+ # Handle parsing errors
+ begin
+   data = JSON.parse(msg.message)
+ rescue JSON::ParserError => e
+   logger.error "Invalid JSON in message #{msg.msg_id}: #{e.message}"
+   client.delete("orders", msg.msg_id) # Remove invalid message
+ end
+ ```
+
+ ### Helper Pattern (Optional)
+
+ For convenience, you can wrap the client in your own helper:
+
+ ```ruby
+ require 'json'
+ require 'ostruct'
+
+ class QueueHelper
+   def initialize(client)
+     @client = client
+   end
+
+   def produce(queue, data)
+     @client.produce(queue, data.to_json)
+   end
+
+   def read(queue, vt:)
+     msg = @client.read(queue, vt: vt)
+     return nil unless msg
+
+     OpenStruct.new(
+       id: msg.msg_id.to_i,
+       data: JSON.parse(msg.message),
+       read_count: msg.read_ct.to_i,
+       raw: msg
+     )
+   end
+ end
+
+ helper = QueueHelper.new(client)
+ helper.produce("orders", { order_id: 123 })
+ msg = helper.read("orders", vt: 30)
+ puts msg.data["order_id"] # => 123
+ ```
+
+ ## Development
+
+ ```bash
+ # Clone repository
+ git clone https://github.com/mensfeld/pgmq-ruby.git
+ cd pgmq-ruby
+
+ # Install dependencies
+ bundle install
+
+ # Start PostgreSQL with PGMQ
+ docker compose up -d
+
+ # Run tests
+ bundle exec rspec
+
+ # Run console
+ bundle exec bin/console
+ ```
+
+ ## Author
+
+ Maintained by [Maciej Mensfeld](https://github.com/mensfeld)
+
+ Also check out [Karafka](https://karafka.io) - High-performance Apache Kafka framework for Ruby.
data/Rakefile ADDED
@@ -0,0 +1,4 @@
+ # frozen_string_literal: true
+
+ require 'bundler/setup'
+ require 'bundler/gem_tasks'
data/docker-compose.yml ADDED
@@ -0,0 +1,22 @@
+ version: '3.8'
+
+ services:
+   postgres:
+     image: ghcr.io/pgmq/pg18-pgmq:v1.8.0
+     container_name: pgmq_postgres_test
+     environment:
+       POSTGRES_USER: postgres
+       POSTGRES_PASSWORD: postgres
+       POSTGRES_DB: pgmq_test
+     ports:
+       - "5433:5432" # Use port 5433 locally to avoid conflicts
+     volumes:
+       - pgmq_data:/var/lib/postgresql/data
+     healthcheck:
+       test: ["CMD-SHELL", "pg_isready -U postgres"]
+       interval: 5s
+       timeout: 5s
+       retries: 5
+
+ volumes:
+   pgmq_data: null