pgmq-ruby 0.3.0 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md CHANGED
@@ -1,7 +1,7 @@
 # PGMQ-Ruby
 
 [![Gem Version](https://badge.fury.io/rb/pgmq-ruby.svg)](https://badge.fury.io/rb/pgmq-ruby)
-[![Build Status](https://github.com/mensfeld/pgmq-ruby/workflows/ci/badge.svg)](https://github.com/mensfeld/pgmq-ruby/actions)
+[![Build Status](https://github.com/mensfeld/pgmq-ruby/workflows/CI/badge.svg)](https://github.com/mensfeld/pgmq-ruby/actions)
 
 **Ruby client for [PGMQ](https://github.com/pgmq/pgmq) - PostgreSQL Message Queue**
 
@@ -25,11 +25,17 @@ PGMQ-Ruby is a Ruby client for PGMQ (PostgreSQL Message Queue). It provides dire
 - [Quick Start](#quick-start)
 - [Configuration](#configuration)
 - [API Reference](#api-reference)
+  - [Queue Management](#queue-management)
+  - [Sending Messages](#sending-messages)
+  - [Reading Messages](#reading-messages)
+  - [Grouped Round-Robin Reading](#grouped-round-robin-reading)
+  - [Message Lifecycle](#message-lifecycle)
+  - [Monitoring](#monitoring)
+  - [Transaction Support](#transaction-support)
+  - [Topic Routing](#topic-routing-amqp-like-patterns)
 - [Message Object](#message-object)
-- [Serializers](#serializers)
-- [Rails Integration](#rails-integration)
+- [Working with JSON](#working-with-json)
 - [Development](#development)
-- [License](#license)
 - [Author](#author)
 
 ## PGMQ Feature Support
@@ -38,12 +44,15 @@ This gem provides complete support for all core PGMQ SQL functions. Based on the
 | Category | Method | Description | Status |
 |----------|--------|-------------|--------|
-| **Sending** | `send` | Send single message with optional delay | ✅ |
-| | `send_batch` | Send multiple messages atomically | ✅ |
+| **Producing** | `produce` | Send single message with optional delay and headers | ✅ |
+| | `produce_batch` | Send multiple messages atomically with headers | ✅ |
 | **Reading** | `read` | Read single message with visibility timeout | ✅ |
 | | `read_batch` | Read multiple messages with visibility timeout | ✅ |
 | | `read_with_poll` | Long-polling for efficient message consumption | ✅ |
+| | `read_grouped_rr` | Round-robin reading across message groups | ✅ |
+| | `read_grouped_rr_with_poll` | Round-robin with long-polling | ✅ |
 | | `pop` | Atomic read + delete operation | ✅ |
+| | `pop_batch` | Atomic batch read + delete operation | ✅ |
 | **Deleting/Archiving** | `delete` | Delete single message | ✅ |
 | | `delete_batch` | Delete multiple messages | ✅ |
 | | `archive` | Archive single message for long-term storage | ✅ |
@@ -53,11 +62,20 @@ This gem provides complete support for all core PGMQ SQL functions. Based on the
 | | `create_partitioned` | Create partitioned queue (requires pg_partman) | ✅ |
 | | `create_unlogged` | Create unlogged queue (faster, no crash recovery) | ✅ |
 | | `drop_queue` | Delete queue and all messages | ✅ |
-| | `detach_archive` | Detach archive table from queue | ✅ |
-| **Utilities** | `set_vt` | Update message visibility timeout | ✅ |
+| **Topic Routing** | `bind_topic` | Bind topic pattern to queue (AMQP-like) | ✅ |
+| | `unbind_topic` | Remove topic binding | ✅ |
+| | `produce_topic` | Send message via routing key | ✅ |
+| | `produce_batch_topic` | Batch send via routing key | ✅ |
+| | `list_topic_bindings` | List all topic bindings | ✅ |
+| | `test_routing` | Test which queues match a routing key | ✅ |
+| **Utilities** | `set_vt` | Update visibility timeout (integer or Time) | ✅ |
+| | `set_vt_batch` | Batch update visibility timeouts | ✅ |
+| | `set_vt_multi` | Update visibility timeouts across multiple queues | ✅ |
 | | `list_queues` | List all queues with metadata | ✅ |
 | | `metrics` | Get queue metrics (length, age, total messages) | ✅ |
 | | `metrics_all` | Get metrics for all queues | ✅ |
+| | `enable_notify_insert` | Enable PostgreSQL NOTIFY on insert | ✅ |
+| | `disable_notify_insert` | Disable notifications | ✅ |
 | **Ruby Enhancements** | Transaction Support | Atomic operations via `client.transaction do \|txn\|` | ✅ |
 | | Conditional Filtering | Server-side JSONB filtering with `conditional:` | ✅ |
 | | Multi-Queue Ops | Read/pop/delete/archive from multiple queues | ✅ |
@@ -70,6 +88,67 @@ This gem provides complete support for all core PGMQ SQL functions. Based on the
 - Ruby 3.2+
 - PostgreSQL 14-18 with PGMQ extension installed
 
+### Installing PGMQ Extension
+
+PGMQ can be installed on your PostgreSQL instance in several ways:
+
+#### Standard Installation (Self-hosted PostgreSQL)
+
+For self-hosted PostgreSQL instances with filesystem access, install via [PGXN](https://pgxn.org/dist/pgmq/):
+
+```bash
+pgxn install pgmq
+```
+
+Or build from source:
+
+```bash
+git clone https://github.com/pgmq/pgmq.git
+cd pgmq/pgmq-extension
+make && make install
+```
+
+Then enable the extension:
+
+```sql
+CREATE EXTENSION pgmq;
+```
+
+#### Managed PostgreSQL Services (AWS RDS, Aurora, etc.)
+
+For managed PostgreSQL services that don't allow native extension installation, PGMQ provides a **SQL-only installation** that works without filesystem access:
+
+```bash
+git clone https://github.com/pgmq/pgmq.git
+cd pgmq
+psql -f pgmq-extension/sql/pgmq.sql postgres://user:pass@your-rds-host:5432/database
+```
+
+This creates a `pgmq` schema with all required functions. See the [PGMQ Installation Guide](https://github.com/pgmq/pgmq/blob/main/INSTALLATION.md) for details.
+
+**Comparison:**
+
+| Feature | Extension | SQL-only |
+|---------|-----------|----------|
+| Version tracking | Yes | No |
+| Upgrade path | Yes | Manual |
+| Filesystem access | Required | Not needed |
+| Managed cloud services | Limited | Full support |
+
+#### Using pg_tle (Trusted Language Extensions)
+
+If your managed PostgreSQL service supports [pg_tle](https://github.com/aws/pg_tle) (available on AWS RDS PostgreSQL 14.5+ and Aurora), you can potentially install PGMQ as a Trusted Language Extension, since PGMQ is written in PL/pgSQL and SQL (both supported by pg_tle).
+
+To use pg_tle:
+
+1. Enable pg_tle on your instance (add it to `shared_preload_libraries`)
+2. Create the pg_tle extension: `CREATE EXTENSION pg_tle;`
+3. Use `pgtle.install_extension()` to install PGMQ's SQL functions
+
+See the [AWS pg_tle documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL_trusted_language_extension.html) for setup instructions.
+
+> **Note:** The SQL-only installation is simpler and recommended for most managed service use cases. pg_tle provides additional version management and extension lifecycle features if needed.
+
 ## Installation
 
 Add to your Gemfile:
@@ -104,7 +183,7 @@ client = PGMQ::Client.new(
 client.create('orders')
 
 # Send a message (must be JSON string)
-msg_id = client.send('orders', '{"order_id":123,"total":99.99}')
+msg_id = client.produce('orders', '{"order_id":123,"total":99.99}')
 
 # Read a message (30 second visibility timeout)
 msg = client.read('orders', vt: 30)
@@ -213,7 +292,7 @@ client = PGMQ::Client.new(
 
 **Connection Pool Benefits:**
 - **Thread-safe** - Multiple threads can safely share a single client
-- **Fiber-aware** - Works with Ruby 3.0+ Fiber Scheduler for non-blocking I/O
+- **Fiber-aware** - Works with Ruby 3.0+ Fiber Scheduler for non-blocking I/O (tested with the `async` gem)
 - **Auto-reconnect** - Recovers from lost connections (configurable)
 - **Health checks** - Verifies connections before use to prevent stale connection errors
 - **Monitoring** - Track pool utilization with `client.stats`
@@ -224,20 +303,21 @@ client = PGMQ::Client.new(
 ### Queue Management
 
 ```ruby
-# Create a queue
-client.create("queue_name")
+# Create a queue (returns true if created, false if already exists)
+client.create("queue_name") # => true
+client.create("queue_name") # => false (idempotent)
 
 # Create partitioned queue (requires pg_partman)
 client.create_partitioned("queue_name",
   partition_interval: "daily",
   retention_interval: "7 days"
-)
+) # => true/false
 
 # Create unlogged queue (faster, no crash recovery)
-client.create_unlogged("queue_name")
+client.create_unlogged("queue_name") # => true/false
 
-# Drop queue
-client.drop_queue("queue_name")
+# Drop queue (returns true if dropped, false if didn't exist)
+client.drop_queue("queue_name") # => true/false
 
 # List all queues
 queues = client.list_queues
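The queue-name limit referenced just below (names of 48+ characters are rejected) can be pre-checked client-side before calling `create`. A minimal sketch, assuming 47 characters is the maximum; `valid_queue_name?` is illustrative and not part of the gem's API:

```ruby
# Illustrative client-side pre-check (not part of pgmq-ruby's API).
# The README's examples show 48+ character names being rejected,
# so 47 is treated as the maximum here.
MAX_QUEUE_NAME_LENGTH = 47

def valid_queue_name?(name)
  !name.empty? && name.length <= MAX_QUEUE_NAME_LENGTH
end

valid_queue_name?("orders")  # => true
valid_queue_name?("a" * 47)  # => true
valid_queue_name?("a" * 48)  # => false
```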
@@ -277,18 +357,32 @@ client.create("a" * 48) # ✗ Too long (48+ chars)
 
 ```ruby
 # Send single message (must be JSON string)
-msg_id = client.send("queue_name", '{"data":"value"}')
+msg_id = client.produce("queue_name", '{"data":"value"}')
 
 # Send with delay (seconds)
-msg_id = client.send("queue_name", '{"data":"value"}', delay: 60)
+msg_id = client.produce("queue_name", '{"data":"value"}', delay: 60)
+
+# Send with headers (for routing, tracing, correlation)
+msg_id = client.produce("queue_name", '{"data":"value"}',
+  headers: '{"trace_id":"abc123","priority":"high"}')
+
+# Send with headers and delay
+msg_id = client.produce("queue_name", '{"data":"value"}',
+  headers: '{"correlation_id":"req-456"}',
+  delay: 60)
 
 # Send batch (array of JSON strings)
-msg_ids = client.send_batch("queue_name", [
+msg_ids = client.produce_batch("queue_name", [
   '{"order":1}',
   '{"order":2}',
   '{"order":3}'
 ])
 # => ["101", "102", "103"]
+
+# Send batch with headers (one per message)
+msg_ids = client.produce_batch("queue_name",
+  ['{"order":1}', '{"order":2}'],
+  headers: ['{"priority":"high"}', '{"priority":"low"}'])
 ```
 
 ### Reading Messages
@@ -311,6 +405,53 @@ msg = client.read_with_poll("queue_name",
 
 # Pop (atomic read + delete)
 msg = client.pop("queue_name")
+
+# Pop batch (atomic read + delete for multiple messages)
+messages = client.pop_batch("queue_name", 10)
+
+# Grouped round-robin reading (fair processing across entities)
+# Messages are grouped by the first key in their JSON payload
+messages = client.read_grouped_rr("queue_name", vt: 30, qty: 10)
+
+# Grouped round-robin with long-polling
+messages = client.read_grouped_rr_with_poll("queue_name",
+  vt: 30,
+  qty: 10,
+  max_poll_seconds: 5,
+  poll_interval_ms: 100
+)
+```
+
+#### Grouped Round-Robin Reading
+
+When processing messages from multiple entities (users, orders, tenants), regular FIFO ordering can cause starvation - one entity with many messages can monopolize workers.
+
+Grouped round-robin ensures fair processing by interleaving messages from different groups:
+
+```ruby
+# Queue contains messages for different users:
+# user_a: 5 messages, user_b: 2 messages, user_c: 1 message
+
+# Regular read would process all user_a messages first (unfair)
+messages = client.read_batch("tasks", vt: 30, qty: 8)
+# => [user_a_1, user_a_2, user_a_3, user_a_4, user_a_5, user_b_1, user_b_2, user_c_1]
+
+# Grouped round-robin ensures fair distribution
+messages = client.read_grouped_rr("tasks", vt: 30, qty: 8)
+# => [user_a_1, user_b_1, user_c_1, user_a_2, user_b_2, user_a_3, user_a_4, user_a_5]
+```
+
+**How it works:**
+- Messages are grouped by the **first key** in their JSON payload
+- The first key should be your grouping identifier (e.g., `user_id`, `tenant_id`, `order_id`)
+- PGMQ rotates through groups, taking one message from each before repeating
+
+**Message format for grouping:**
+```ruby
+# Good - user_id is first key, used for grouping
+client.produce("tasks", '{"user_id":"user_a","task":"process"}')
+
+# The grouping key should come first in your JSON
 ```
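Since grouping depends on the first JSON key, it helps that Ruby hashes preserve insertion order when serialized. A quick pure-Ruby check (key and value names follow the examples above):

```ruby
require "json"

# Ruby hashes keep insertion order, so the key written first in the
# literal is also the first key in the serialized JSON payload.
payload = { user_id: "user_a", task: "process" }.to_json
payload # => '{"user_id":"user_a","task":"process"}'

# The parsed key order confirms user_id would be the grouping key
JSON.parse(payload).keys.first # => "user_id"
```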
 
 #### Conditional Message Filtering
@@ -375,11 +516,33 @@ client.archive("queue_name", msg_id)
 # Archive batch
 archived_ids = client.archive_batch("queue_name", [101, 102, 103])
 
-# Update visibility timeout
-msg = client.set_vt("queue_name", msg_id, vt_offset: 60)
+# Update visibility timeout with integer offset (seconds from now)
+msg = client.set_vt("queue_name", msg_id, vt: 60)
+
+# Update visibility timeout with absolute Time (PGMQ v1.11.0+)
+future_time = Time.now + 300 # 5 minutes from now
+msg = client.set_vt("queue_name", msg_id, vt: future_time)
+
+# Batch update visibility timeout
+updated_msgs = client.set_vt_batch("queue_name", [101, 102, 103], vt: 60)
+
+# Batch update with absolute Time
+updated_msgs = client.set_vt_batch("queue_name", [101, 102, 103], vt: Time.now + 120)
+
+# Update visibility timeouts across multiple queues
+client.set_vt_multi({
+  "orders" => [1, 2, 3],
+  "notifications" => [5, 6]
+}, vt: 120)
 
 # Purge all messages
 count = client.purge_queue("queue_name")
+
+# Enable PostgreSQL NOTIFY for a queue (for LISTEN-based consumers)
+client.enable_notify_insert("queue_name", throttle_interval_ms: 250)
+
+# Disable notifications
+client.disable_notify_insert("queue_name")
 ```
 
 ### Monitoring
@@ -409,9 +572,9 @@ Execute atomic operations across multiple queues or combine queue operations wit
 # Atomic operations across multiple queues
 client.transaction do |txn|
   # Send to multiple queues atomically
-  txn.send("orders", '{"order_id":123}')
-  txn.send("notifications", '{"user_id":456,"type":"order_created"}')
-  txn.send("analytics", '{"event":"order_placed"}')
+  txn.produce("orders", '{"order_id":123}')
+  txn.produce("notifications", '{"user_id":456,"type":"order_created"}')
+  txn.produce("analytics", '{"event":"order_placed"}')
 end
 
 # Process message and update application state atomically
@@ -431,8 +594,8 @@ end
 
 # Automatic rollback on errors
 client.transaction do |txn|
-  txn.send("queue1", '{"data":"message1"}')
-  txn.send("queue2", '{"data":"message2"}')
+  txn.produce("queue1", '{"data":"message1"}')
+  txn.produce("queue2", '{"data":"message2"}')
 
   raise "Something went wrong!"
   # Both messages are rolled back - neither queue receives anything
@@ -446,7 +609,7 @@ client.transaction do |txn|
   data = JSON.parse(msg.message)
   if data["priority"] == "high"
     # Move to high-priority queue
-    txn.send("priority_orders", msg.message)
+    txn.produce("priority_orders", msg.message)
     txn.delete("pending_orders", msg.msg_id)
   end
 end
@@ -474,6 +637,82 @@ end
 - Read operations with long visibility timeouts may cause lock contention
 - Consider using `pop()` for atomic read+delete in simple cases
 
+### Topic Routing (AMQP-like Patterns)
+
+PGMQ v1.11.0+ supports AMQP-style topic routing, allowing messages to be delivered to multiple queues based on pattern matching.
+
+#### Topic Patterns
+
+Topic patterns support wildcards:
+- `*` matches exactly one word (e.g., `orders.*` matches `orders.new` but not `orders.new.priority`)
+- `#` matches zero or more words (e.g., `orders.#` matches `orders`, `orders.new`, and `orders.new.priority`)
+
+```ruby
+# Create queues for different purposes
+client.create("new_orders")
+client.create("order_updates")
+client.create("all_orders")
+client.create("audit_log")
+
+# Bind topic patterns to queues
+client.bind_topic("orders.new", "new_orders") # Exact match
+client.bind_topic("orders.update", "order_updates") # Exact match
+client.bind_topic("orders.*", "all_orders") # Single-word wildcard
+client.bind_topic("#", "audit_log") # Catch-all
+
+# Send messages via routing key
+# Message is delivered to ALL queues with matching patterns
+count = client.produce_topic("orders.new", '{"order_id":123}')
+# => 3 (delivered to: new_orders, all_orders, audit_log)
+
+count = client.produce_topic("orders.update", '{"order_id":123,"status":"shipped"}')
+# => 3 (delivered to: order_updates, all_orders, audit_log)
+
+# Send with headers and delay
+count = client.produce_topic("orders.new.priority",
+  '{"order_id":456}',
+  headers: '{"trace_id":"abc123"}',
+  delay: 0
+)
+
+# Batch send via topic routing
+results = client.produce_batch_topic("orders.new", [
+  '{"order_id":1}',
+  '{"order_id":2}',
+  '{"order_id":3}'
+])
+# => [{ queue_name: "new_orders", msg_id: "1" }, ...]
+
+# List all topic bindings
+bindings = client.list_topic_bindings
+bindings.each do |b|
+  puts "#{b[:pattern]} -> #{b[:queue_name]}"
+end
+
+# List bindings for a specific queue
+bindings = client.list_topic_bindings(queue_name: "all_orders")
+
+# Test which queues a routing key would match (for debugging)
+matches = client.test_routing("orders.new.priority")
+# => [{ pattern: "#", queue_name: "audit_log" }]
+# (with the bindings above, only the catch-all matches a three-word key)
+
+# Validate routing keys and patterns
+client.validate_routing_key("orders.new.priority") # => true
+client.validate_routing_key("orders.*") # => false (wildcards not allowed in keys)
+client.validate_topic_pattern("orders.*") # => true
+client.validate_topic_pattern("orders.#") # => true
+
+# Remove bindings when done
+client.unbind_topic("orders.new", "new_orders")
+client.unbind_topic("orders.*", "all_orders")
+```
+
+**Use Cases:**
+- **Event broadcasting**: Send events to multiple consumers based on event type
+- **Multi-tenant routing**: Route messages to tenant-specific queues
+- **Log aggregation**: Capture all messages in an audit queue while routing to specific handlers
+- **Fan-out patterns**: Deliver one message to multiple processing pipelines
+
 ## Message Object
 
 PGMQ-Ruby is a **low-level transport library** - it returns raw values from PostgreSQL without any transformation. You are responsible for parsing JSON and type conversion.
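The `*`/`#` wildcard semantics described in the topic-routing hunk above can be sketched as a small standalone matcher. This is illustrative only - it is not how the extension implements routing server-side:

```ruby
# Standalone sketch of AMQP-style matching: "*" matches exactly one
# dot-separated word, "#" matches zero or more words.
def topic_match?(pattern, routing_key)
  match_words(pattern.split("."), routing_key.split("."))
end

def match_words(pat, words)
  return words.empty? if pat.empty?

  head, *rest = pat
  case head
  when "#" # zero or more words: try every possible split point
    (0..words.size).any? { |i| match_words(rest, words[i..]) }
  when "*" # exactly one word
    !words.empty? && match_words(rest, words[1..])
  else     # literal word
    words.first == head && match_words(rest, words[1..])
  end
end

topic_match?("orders.*", "orders.new")          # => true
topic_match?("orders.*", "orders.new.priority") # => false
topic_match?("orders.#", "orders")              # => true
topic_match?("#", "orders.new.priority")        # => true
```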
@@ -486,6 +725,7 @@ msg.msg_id # => "123" (String, not Integer)
 msg.id # => "123" (alias for msg_id)
 msg.read_ct # => "1" (String, not Integer)
 msg.enqueued_at # => "2025-01-15 10:30:00+00" (String, not Time)
+msg.last_read_at # => "2025-01-15 10:30:15+00" (String, or nil if never read)
 msg.vt # => "2025-01-15 10:30:30+00" (String, not Time)
 msg.message # => "{\"data\":\"value\"}" (Raw JSONB as JSON string)
 msg.headers # => "{\"trace_id\":\"abc123\"}" (Raw JSONB as JSON string, optional)
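Because every field comes back as a String (as the hunk above notes), one way to centralize conversion is a thin typed wrapper. A hypothetical sketch - `TypedMessage` is not part of the gem:

```ruby
require "json"
require "time"

# Hypothetical wrapper (not part of pgmq-ruby): converts the raw
# String values into Ruby types on access.
TypedMessage = Struct.new(:raw) do
  def msg_id      = raw[:msg_id].to_i
  def read_ct     = raw[:read_ct].to_i
  def enqueued_at = Time.parse(raw[:enqueued_at])
  def data        = JSON.parse(raw[:message])
end

raw = {
  msg_id: "123",
  read_ct: "1",
  enqueued_at: "2025-01-15 10:30:00+00",
  message: '{"data":"value"}'
}
msg = TypedMessage.new(raw)

msg.msg_id # => 123
msg.data   # => {"data"=>"value"}
```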
@@ -499,25 +739,48 @@ metadata = JSON.parse(msg.headers) if msg.headers # => { "trace_id" => "abc123"
 id = msg.msg_id.to_i # => 123
 read_count = msg.read_ct.to_i # => 1
 enqueued = Time.parse(msg.enqueued_at) # => 2025-01-15 10:30:00 UTC
+last_read = Time.parse(msg.last_read_at) if msg.last_read_at # => Time or nil
 ```
 
 ### Message Headers
 
-PGMQ supports optional message headers via the `headers` JSONB column:
+PGMQ supports optional message headers via the `headers` JSONB column. Headers are useful for metadata like routing information, correlation IDs, and distributed tracing:
 
 ```ruby
-# Sending with headers requires direct SQL or a custom wrapper
-# (pgmq-ruby focuses on the core PGMQ API which doesn't have a send_with_headers function)
+# Sending a message with headers
+message = '{"order_id":123}'
+headers = '{"trace_id":"abc123","priority":"high","correlation_id":"req-456"}'
+
+msg_id = client.produce("orders", message, headers: headers)
+
+# Sending with headers and delay
+msg_id = client.produce("orders", message, headers: headers, delay: 60)
+
+# Batch produce with headers (one header object per message)
+messages = ['{"id":1}', '{"id":2}', '{"id":3}']
+headers = [
+  '{"priority":"high"}',
+  '{"priority":"medium"}',
+  '{"priority":"low"}'
+]
+msg_ids = client.produce_batch("orders", messages, headers: headers)
 
 # Reading messages with headers
-msg = client.read("queue", vt: 30)
+msg = client.read("orders", vt: 30)
 if msg.headers
   metadata = JSON.parse(msg.headers)
   trace_id = metadata["trace_id"]
+  priority = metadata["priority"]
   correlation_id = metadata["correlation_id"]
 end
 ```
 
+Common header use cases:
+- **Distributed tracing**: `trace_id`, `span_id`, `parent_span_id`
+- **Request correlation**: `correlation_id`, `causation_id`
+- **Routing**: `priority`, `region`, `tenant_id`
+- **Content metadata**: `content_type`, `encoding`, `version`
+
 ### Why Raw Values?
 
 This library follows the **rdkafka-ruby philosophy** - provide a thin, performant wrapper around the underlying system:
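As a pure-Ruby companion to the headers hunk above, a headers payload round-trips cleanly through JSON; the field names follow the README's examples:

```ruby
require "json"

# Headers are plain JSON strings; build them from a hash...
headers = JSON.generate(
  trace_id: "abc123",
  correlation_id: "req-456",
  priority: "high"
)

# ...and parse them back on the consumer side
metadata = JSON.parse(headers)
metadata["trace_id"]       # => "abc123"
metadata["correlation_id"] # => "req-456"
```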
@@ -538,14 +801,14 @@ PGMQ stores messages as JSONB in PostgreSQL. You must handle JSON serialization
 ```ruby
 # Simple hash
 msg = { order_id: 123, status: "pending" }
-client.send("orders", msg.to_json)
+client.produce("orders", msg.to_json)
 
 # Using JSON.generate for explicit control
-client.send("orders", JSON.generate(order_id: 123, status: "pending"))
+client.produce("orders", JSON.generate(order_id: 123, status: "pending"))
 
 # Pre-serialized JSON string
 json_str = '{"order_id":123,"status":"pending"}'
-client.send("orders", json_str)
+client.produce("orders", json_str)
 ```
 
 ### Reading Messages
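For the simple-hash case in the hunk above, `Hash#to_json` and `JSON.generate` produce identical output, so the choice is stylistic; a quick check:

```ruby
require "json"

msg = { order_id: 123, status: "pending" }

# Both serializations yield the same JSON string for plain hashes
msg.to_json        # => '{"order_id":123,"status":"pending"}'
JSON.generate(msg) # => '{"order_id":123,"status":"pending"}'

msg.to_json == JSON.generate(msg) # => true
```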
@@ -577,8 +840,8 @@ class QueueHelper
     @client = client
   end
 
-  def send(queue, data)
-    @client.send(queue, data.to_json)
+  def produce(queue, data)
+    @client.produce(queue, data.to_json)
   end
 
   def read(queue, vt:)
@@ -595,7 +858,7 @@ class QueueHelper
 end
 
 helper = QueueHelper.new(client)
-helper.send("orders", { order_id: 123 })
+helper.produce("orders", { order_id: 123 })
 msg = helper.read("orders", vt: 30)
 puts msg.data["order_id"] # => 123
 ```
data/Rakefile CHANGED
@@ -1,4 +1,73 @@
 # frozen_string_literal: true
 
-require 'bundler/setup'
-require 'bundler/gem_tasks'
+require "bundler/setup"
+require "bundler/gem_tasks"
+
+namespace :examples do
+  desc "Run all examples (validates gem functionality)"
+  task :run do
+    examples_dir = File.expand_path("spec/integration", __dir__)
+    example_files = Dir.glob(File.join(examples_dir, "*_spec.rb")).sort
+
+    puts "Running #{example_files.size} examples..."
+    puts
+
+    failed = []
+    example_files.each_with_index do |example, index|
+      name = File.basename(example)
+      puts "[#{index + 1}/#{example_files.size}] Running #{name}..."
+
+      success = system("bundle exec ruby #{example}")
+      if success.nil?
+        puts "Interrupted. Aborting."
+        exit(130)
+      elsif !success
+        failed << name
+        puts "FAILED: #{name}"
+      end
+      puts
+    end
+
+    puts "=" * 60
+    if failed.empty?
+      puts "All #{example_files.size} examples passed."
+    else
+      puts "#{failed.size} example(s) failed:"
+      failed.each { |f| puts "  - #{f}" }
+      exit(1)
+    end
+  end
+
+  desc "Run a specific example by name (e.g., rake examples:run_one[basic_produce_consume])"
+  task :run_one, [:name] do |_t, args|
+    examples_dir = File.expand_path("spec/integration", __dir__)
+    pattern = File.join(examples_dir, "*#{args[:name]}*_spec.rb")
+    matches = Dir.glob(pattern)
+
+    if matches.empty?
+      puts "No example found matching: #{args[:name]}"
+      exit(1)
+    end
+
+    exec("bundle exec ruby #{matches.first}")
+  end
+
+  desc "List all available examples"
+  task :list do
+    examples_dir = File.expand_path("spec/integration", __dir__)
+    example_files = Dir.glob(File.join(examples_dir, "*_spec.rb")).sort
+
+    puts "Available examples:"
+    example_files.each do |f|
+      name = File.basename(f, "_spec.rb")
+      puts "  #{name}"
+    end
+    puts
+    puts "Run with: bundle exec rake examples:run_one[NAME]"
+    puts "Example: bundle exec rake examples:run_one[basic_produce_consume]"
+  end
+end
+
+# Shorthand task
+desc "Run all examples"
+task examples: "examples:run"
data/docker-compose.yml CHANGED
@@ -2,7 +2,7 @@ version: '3.8'
 
 services:
   postgres:
-    image: ghcr.io/pgmq/pg18-pgmq:v1.7.0
+    image: ghcr.io/pgmq/pg18-pgmq:v1.9.0
     container_name: pgmq_postgres_test
     environment:
       POSTGRES_USER: postgres
@@ -11,7 +11,7 @@ services:
     ports:
       - "5433:5432" # Use port 5433 locally to avoid conflicts
     volumes:
-      - pgmq_data:/var/lib/postgresql/data
+      - pgmq_data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s