pgmq-ruby 0.4.0 → 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a6b6b3dcddd3785167a0fd8482fb0900f0d51c9c5cf68ddd23b60e84eb306012
- data.tar.gz: 94604bf7c55a2bd62a63486278641c247b0a9d9cf3b50379d30ee0b7c8c840f4
+ metadata.gz: 812f7254909aa9125fa79b5304cb3fad8461cce5372dd02206c7cfdf9b99afa9
+ data.tar.gz: 754f398b518800546fa2c2fe2bbaf3a3d3eb742ca5c162d9c89cd1415ac40a57
  SHA512:
- metadata.gz: 3ea99c27e92f96f137c9552985830949cdff6538409646ddf6b10e587c05c2b831c8c083570996ff2c0ffbefd09ae01fa84002e2ca9093f0a27ec6f7387a2b59
- data.tar.gz: 437713bad4d37ff821487493097c6229b8c2069b260b74e587b3452f74508392b1dba7cf37ee3dd91c52143a4c147db373035351a8dc0d86f1d6cd70a4370d4d
+ metadata.gz: 40b65d9001f469d2e88c04cc5f4bcf85b234b850a19ee4089e0b25f52b266bf45c8680398de43a61a042998a08760b867d3be1878aad518567caa6a0801cec80
+ data.tar.gz: 175b62d75d9fa46391570026e345b18c099ab75dabfe4518acd89e750de4fce97eebd01fcc8733ce771492343a74f50dbca9534c2305e47599eebe966dd04d9d
data/CHANGELOG.md CHANGED
@@ -1,5 +1,53 @@
  # Changelog
 
+ ## 0.6.0 (2026-04-02)
+
+ ### Breaking Changes
+ - **[Breaking]** Drop Ruby 3.2 support. The minimum required Ruby version is now 3.3.0.
+
+ ### Connection Management
+ - **[Breaking]** Detect a shared `PG::Connection` across pool slots at creation time. When a callable connection factory returns the same `PG::Connection` object to multiple pool slots, concurrent threads corrupt libpq's internal state, causing a nil `PG::Result` (`NoMethodError: undefined method 'ntuples' for nil`), segfaults, or wrong data. The pool now tracks connection identity via `ObjectSpace::WeakKeyMap` and immediately raises `PGMQ::Errors::ConfigurationError` with a descriptive message. WeakKeyMap entries are cleaned up automatically when connections are GC'd. **This change is breaking for configurations that intentionally share a single `PG::Connection` across multiple pool slots. Ensure your callable returns a distinct `PG::Connection` per pool slot, or configure `pool_size: 1` when reusing a single shared connection.**
+
+ ### Infrastructure
+ - **[Change]** Migrate the test framework from RSpec to Minitest/Spec with Mocha for mocking, aligning with the broader Karafka ecosystem conventions.
+ - **[Change]** Replace `rubocop-rspec` with `rubocop-minitest` for test linting.
+ - **[Change]** Add a `bin/integrations` runner script that centralizes integration spec execution. Specs no longer need `require_relative "support/example_helper"`; the runner injects it via the `-r` flag. Run all specs with `bin/integrations`, or specific ones with `bin/integrations spec/integration/foo_spec.rb`.
+
+ ## 0.5.0 (2026-02-24)
+
+ ### Breaking Changes
+ - **[Breaking]** Remove the `detach_archive(queue_name)` method. PGMQ 2.0 no longer requires archive table detachment because archive tables are no longer member objects; the server-side function was already a no-op in PGMQ 2.0+.
+ - **[Breaking]** Rename the `vt_offset:` parameter to `vt:` in the `set_vt`, `set_vt_batch`, and `set_vt_multi` methods. The `vt:` parameter now accepts either an integer offset (seconds from now) or an absolute `Time` object with PGMQ v1.11.0+.
+
+ ### PGMQ v1.11.0 Features
+ - **[Feature]** Add a `last_read_at` field to `PGMQ::Message`. It returns the timestamp of the last read operation for the message, or nil if the message has never been read, enabling tracking of when messages were last accessed (PGMQ v1.8.1+).
+ - **[Feature]** Add Grouped Round-Robin reading for fair message processing:
+   - `read_grouped_rr(queue_name, vt:, qty:)` - Read messages in round-robin order across groups
+   - `read_grouped_rr_with_poll(queue_name, vt:, qty:, max_poll_seconds:, poll_interval_ms:)` - Same, with long-polling
+
+   Messages are grouped by the first key in their JSON payload. This ensures fair processing
+   when multiple entities (users, orders, etc.) have messages in the queue, preventing any
+   single entity from monopolizing workers.
+ - **[Feature]** Add Topic Routing support (AMQP-like patterns). New methods in `PGMQ::Client`:
+   - `bind_topic(pattern, queue_name)` - Bind a topic pattern to a queue
+   - `unbind_topic(pattern, queue_name)` - Remove a topic binding
+   - `produce_topic(routing_key, message, headers:, delay:)` - Send a message via routing key
+   - `produce_batch_topic(routing_key, messages, headers:, delay:)` - Batch send via routing key
+   - `list_topic_bindings(queue_name:)` - List all topic bindings
+   - `test_routing(routing_key)` - Test which queues a routing key matches
+   - `validate_routing_key(routing_key)` - Validate a routing key
+   - `validate_topic_pattern(pattern)` - Validate a topic pattern
+
+   Topic patterns support wildcards: `*` (exactly one word) and `#` (zero or more words).
+   Requires PGMQ v1.11.0+.
+
+ ### Testing
+ - **[Feature]** Add Fiber Scheduler integration tests demonstrating compatibility with Ruby's Fiber Scheduler API and the `async` gem for concurrent I/O operations.
+
+ ### Infrastructure
+ - **[Fix]** Update the docker-compose.yml volume mount for PostgreSQL 18+ compatibility.
+ - **[Change]** Replace Coditsu with StandardRB for code linting, providing faster and more consistent linting based on the community Ruby Style Guide.
+
  ## 0.4.0 (2025-12-26)
 
  ### Breaking Changes
@@ -118,7 +166,7 @@ Initial release of pgmq-ruby - a low-level Ruby client for PGMQ (PostgreSQL Mess
  - [Enhancement] Example scripts demonstrating all features.
 
  ### Dependencies
- - Ruby >= 3.2.0
+ - Ruby >= 3.3.0
  - PostgreSQL >= 14 with PGMQ extension
  - `pg` gem (~> 1.5)
  - `connection_pool` gem (~> 2.4)
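The shared-connection detection described in the 0.6.0 changelog entry can be illustrated with a pure-Ruby sketch. All names here (`PoolSketch`, `Conn`, the local `ConfigurationError`) are hypothetical stand-ins, and a `Hash#compare_by_identity` substitutes for `ObjectSpace::WeakKeyMap` so the sketch runs on any recent Ruby; the real gem's internals are not shown.

```ruby
# Hypothetical sketch of identity-based duplicate-connection detection.
# A Hash with compare_by_identity stands in for ObjectSpace::WeakKeyMap
# (which additionally allows entries to be GC'd with their connections);
# Conn stands in for PG::Connection.
class ConfigurationError < StandardError; end

class PoolSketch
  def initialize(factory, pool_size)
    @seen = {}.compare_by_identity # track object identity, not equality
    pool_size.times do |slot|
      conn = factory.call
      if @seen.key?(conn)
        raise ConfigurationError,
              "factory returned the same connection for slots #{@seen[conn]} and #{slot}"
      end
      @seen[conn] = slot
    end
  end
end

Conn = Class.new # hypothetical stand-in for PG::Connection

# Distinct connection per slot: accepted
PoolSketch.new(-> { Conn.new }, 3)

# The same connection shared across slots: rejected at creation time
shared = Conn.new
begin
  PoolSketch.new(-> { shared }, 3)
rescue ConfigurationError => e
  puts e.message
end
```

Failing at creation time, rather than on first concurrent use, turns an intermittent libpq corruption into a deterministic configuration error.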
data/README.md CHANGED
@@ -25,11 +25,17 @@ PGMQ-Ruby is a Ruby client for PGMQ (PostgreSQL Message Queue). It provides dire
  - [Quick Start](#quick-start)
  - [Configuration](#configuration)
  - [API Reference](#api-reference)
+   - [Queue Management](#queue-management)
+   - [Sending Messages](#sending-messages)
+   - [Reading Messages](#reading-messages)
+   - [Grouped Round-Robin Reading](#grouped-round-robin-reading)
+   - [Message Lifecycle](#message-lifecycle)
+   - [Monitoring](#monitoring)
+   - [Transaction Support](#transaction-support)
+   - [Topic Routing](#topic-routing-amqp-like-patterns)
  - [Message Object](#message-object)
- - [Serializers](#serializers)
- - [Rails Integration](#rails-integration)
+ - [Working with JSON](#working-with-json)
  - [Development](#development)
- - [License](#license)
  - [Author](#author)
 
  ## PGMQ Feature Support
@@ -43,6 +49,8 @@ This gem provides complete support for all core PGMQ SQL functions. Based on the
  | **Reading** | `read` | Read single message with visibility timeout | ✅ |
  | | `read_batch` | Read multiple messages with visibility timeout | ✅ |
  | | `read_with_poll` | Long-polling for efficient message consumption | ✅ |
+ | | `read_grouped_rr` | Round-robin reading across message groups | ✅ |
+ | | `read_grouped_rr_with_poll` | Round-robin reading with long-polling | ✅ |
  | | `pop` | Atomic read + delete operation | ✅ |
  | | `pop_batch` | Atomic batch read + delete operation | ✅ |
  | **Deleting/Archiving** | `delete` | Delete single message | ✅ |
@@ -54,8 +62,13 @@ This gem provides complete support for all core PGMQ SQL functions. Based on the
  | | `create_partitioned` | Create partitioned queue (requires pg_partman) | ✅ |
  | | `create_unlogged` | Create unlogged queue (faster, no crash recovery) | ✅ |
  | | `drop_queue` | Delete queue and all messages | ✅ |
- | | `detach_archive` | Detach archive table from queue | ✅ |
- | **Utilities** | `set_vt` | Update message visibility timeout | ✅ |
+ | **Topic Routing** | `bind_topic` | Bind topic pattern to queue (AMQP-like) | ✅ |
+ | | `unbind_topic` | Remove topic binding | ✅ |
+ | | `produce_topic` | Send message via routing key | ✅ |
+ | | `produce_batch_topic` | Batch send via routing key | ✅ |
+ | | `list_topic_bindings` | List all topic bindings | ✅ |
+ | | `test_routing` | Test which queues match a routing key | ✅ |
+ | **Utilities** | `set_vt` | Update visibility timeout (integer or Time) | ✅ |
  | | `set_vt_batch` | Batch update visibility timeouts | ✅ |
  | | `set_vt_multi` | Update visibility timeouts across multiple queues | ✅ |
  | | `list_queues` | List all queues with metadata | ✅ |
@@ -75,6 +88,67 @@ This gem provides complete support for all core PGMQ SQL functions. Based on the
  - Ruby 3.2+
  - PostgreSQL 14-18 with PGMQ extension installed
 
+ ### Installing PGMQ Extension
+
+ PGMQ can be installed on your PostgreSQL instance in several ways:
+
+ #### Standard Installation (Self-hosted PostgreSQL)
+
+ For self-hosted PostgreSQL instances with filesystem access, install via [PGXN](https://pgxn.org/dist/pgmq/):
+
+ ```bash
+ pgxn install pgmq
+ ```
+
+ Or build from source:
+
+ ```bash
+ git clone https://github.com/pgmq/pgmq.git
+ cd pgmq/pgmq-extension
+ make && make install
+ ```
+
+ Then enable the extension:
+
+ ```sql
+ CREATE EXTENSION pgmq;
+ ```
+
+ #### Managed PostgreSQL Services (AWS RDS, Aurora, etc.)
+
+ For managed PostgreSQL services that don't allow native extension installation, PGMQ provides a **SQL-only installation** that works without filesystem access:
+
+ ```bash
+ git clone https://github.com/pgmq/pgmq.git
+ cd pgmq
+ psql -f pgmq-extension/sql/pgmq.sql postgres://user:pass@your-rds-host:5432/database
+ ```
+
+ This creates a `pgmq` schema with all required functions. See the [PGMQ Installation Guide](https://github.com/pgmq/pgmq/blob/main/INSTALLATION.md) for details.
+
+ **Comparison:**
+
+ | Feature | Extension | SQL-only |
+ |---------|-----------|----------|
+ | Version tracking | Yes | No |
+ | Upgrade path | Yes | Manual |
+ | Filesystem access | Required | Not needed |
+ | Managed cloud services | Limited | Full support |
+
+ #### Using pg_tle (Trusted Language Extensions)
+
+ If your managed PostgreSQL service supports [pg_tle](https://github.com/aws/pg_tle) (available on AWS RDS PostgreSQL 14.5+ and Aurora), you can potentially install PGMQ as a Trusted Language Extension, since PGMQ is written in PL/pgSQL and SQL (both supported by pg_tle).
+
+ To use pg_tle:
+
+ 1. Enable pg_tle on your instance (add it to `shared_preload_libraries`)
+ 2. Create the pg_tle extension: `CREATE EXTENSION pg_tle;`
+ 3. Use `pgtle.install_extension()` to install PGMQ's SQL functions
+
+ See the [AWS pg_tle documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL_trusted_language_extension.html) for setup instructions.
+
+ > **Note:** The SQL-only installation is simpler and recommended for most managed service use cases. pg_tle adds version management and extension lifecycle features if you need them.
+
  ## Installation
 
  Add to your Gemfile:
@@ -218,7 +292,7 @@ client = PGMQ::Client.new(
 
  **Connection Pool Benefits:**
  - **Thread-safe** - Multiple threads can safely share a single client
- - **Fiber-aware** - Works with Ruby 3.0+ Fiber Scheduler for non-blocking I/O
+ - **Fiber-aware** - Works with the Ruby 3.0+ Fiber Scheduler for non-blocking I/O (tested with the `async` gem)
  - **Auto-reconnect** - Recovers from lost connections (configurable)
  - **Health checks** - Verifies connections before use to prevent stale connection errors
  - **Monitoring** - Track pool utilization with `client.stats`
@@ -334,6 +408,50 @@ msg = client.pop("queue_name")
 
  # Pop batch (atomic read + delete for multiple messages)
  messages = client.pop_batch("queue_name", 10)
+
+ # Grouped round-robin reading (fair processing across entities)
+ # Messages are grouped by the first key in their JSON payload
+ messages = client.read_grouped_rr("queue_name", vt: 30, qty: 10)
+
+ # Grouped round-robin with long-polling
+ messages = client.read_grouped_rr_with_poll("queue_name",
+   vt: 30,
+   qty: 10,
+   max_poll_seconds: 5,
+   poll_interval_ms: 100
+ )
+ ```
+
+ #### Grouped Round-Robin Reading
+
+ When processing messages from multiple entities (users, orders, tenants), regular FIFO ordering can cause starvation: one entity with many messages can monopolize workers.
+
+ Grouped round-robin ensures fair processing by interleaving messages from different groups:
+
+ ```ruby
+ # Queue contains messages for different users:
+ # user_a: 5 messages, user_b: 2 messages, user_c: 1 message
+
+ # A regular read would process all user_a messages first (unfair)
+ messages = client.read_batch("tasks", vt: 30, qty: 8)
+ # => [user_a_1, user_a_2, user_a_3, user_a_4, user_a_5, user_b_1, user_b_2, user_c_1]
+
+ # Grouped round-robin ensures fair distribution
+ messages = client.read_grouped_rr("tasks", vt: 30, qty: 8)
+ # => [user_a_1, user_b_1, user_c_1, user_a_2, user_b_2, user_a_3, user_a_4, user_a_5]
+ ```
+
+ **How it works:**
+ - Messages are grouped by the **first key** in their JSON payload
+ - The first key should be your grouping identifier (e.g., `user_id`, `tenant_id`, `order_id`)
+ - PGMQ rotates through the groups, taking one message from each before repeating
+
+ **Message format for grouping:**
+ ```ruby
+ # Good - user_id is the first key, so it is used for grouping
+ client.produce("tasks", '{"user_id":"user_a","task":"process"}')
+
+ # The grouping key should come first in your JSON
  ```
 
  #### Conditional Message Filtering
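The interleaving that the README's grouped round-robin example shows can be sketched in pure Ruby. This is an illustration only: the real ordering happens server-side inside `pgmq.read_grouped_rr`, and `grouped_rr_order` is a hypothetical helper that groups payloads by the value of their first JSON key and takes one message per group per pass.

```ruby
require "json"

# Illustrative round-robin ordering over JSON payloads, grouped by the
# value of the first key (e.g. user_id). Hypothetical helper, not gem API.
def grouped_rr_order(payloads)
  groups = payloads.group_by { |p| JSON.parse(p).first.last }
  ordered = []
  # Take one message from each non-empty group per pass until all are drained
  until groups.values.all?(&:empty?)
    groups.each_value { |msgs| ordered << msgs.shift unless msgs.empty? }
  end
  ordered
end

queue = [
  '{"user_id":"user_a","task":1}',
  '{"user_id":"user_a","task":2}',
  '{"user_id":"user_b","task":1}',
  '{"user_id":"user_a","task":3}',
  '{"user_id":"user_c","task":1}'
]

grouped_rr_order(queue).each { |m| puts m }
# Interleaves user_a, user_b, user_c before returning to user_a's backlog
```

This mirrors why the grouping key must come first in the payload: it is the only field the grouping step inspects.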
@@ -398,17 +516,24 @@ client.archive("queue_name", msg_id)
  # Archive batch
  archived_ids = client.archive_batch("queue_name", [101, 102, 103])
 
- # Update visibility timeout
- msg = client.set_vt("queue_name", msg_id, vt_offset: 60)
+ # Update visibility timeout with integer offset (seconds from now)
+ msg = client.set_vt("queue_name", msg_id, vt: 60)
+
+ # Update visibility timeout with absolute Time (PGMQ v1.11.0+)
+ future_time = Time.now + 300 # 5 minutes from now
+ msg = client.set_vt("queue_name", msg_id, vt: future_time)
 
  # Batch update visibility timeout
- updated_msgs = client.set_vt_batch("queue_name", [101, 102, 103], vt_offset: 60)
+ updated_msgs = client.set_vt_batch("queue_name", [101, 102, 103], vt: 60)
+
+ # Batch update with absolute Time
+ updated_msgs = client.set_vt_batch("queue_name", [101, 102, 103], vt: Time.now + 120)
 
  # Update visibility timeout across multiple queues
  client.set_vt_multi({
    "orders" => [1, 2, 3],
    "notifications" => [5, 6]
- }, vt_offset: 120)
+ }, vt: 120)
 
  # Purge all messages
  count = client.purge_queue("queue_name")
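The dual-typed `vt:` argument above (integer offset or absolute `Time`) can be sketched as a small normalization step. `normalize_vt` is a hypothetical helper for illustration, not part of the gem's API; it just makes the two accepted forms concrete.

```ruby
require "time"

# Hypothetical sketch: integers are offsets from now, Time objects are
# absolute deadlines, anything else is rejected.
def normalize_vt(vt, now: Time.now)
  case vt
  when Integer then now + vt  # seconds from now
  when Time    then vt        # absolute timestamp, passed through
  else raise ArgumentError, "vt must be an Integer offset or a Time"
  end
end

base = Time.utc(2025, 1, 15, 10, 30, 0)
puts normalize_vt(60, now: base)         # 60 seconds after base
puts normalize_vt(base + 300, now: base) # absolute Time unchanged
```

Both forms resolve to the same thing: a point in time before which the message stays invisible to other consumers.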
@@ -512,6 +637,82 @@ end
  - Read operations with long visibility timeouts may cause lock contention
  - Consider using `pop()` for atomic read+delete in simple cases
 
+ ### Topic Routing (AMQP-like Patterns)
+
+ PGMQ v1.11.0+ supports AMQP-style topic routing, allowing messages to be delivered to multiple queues based on pattern matching.
+
+ #### Topic Patterns
+
+ Topic patterns support wildcards:
+ - `*` matches exactly one word (e.g., `orders.*` matches `orders.new` but not `orders.new.priority`)
+ - `#` matches zero or more words (e.g., `orders.#` matches `orders`, `orders.new`, and `orders.new.priority`)
+
+ ```ruby
+ # Create queues for different purposes
+ client.create("new_orders")
+ client.create("order_updates")
+ client.create("all_orders")
+ client.create("audit_log")
+
+ # Bind topic patterns to queues
+ client.bind_topic("orders.new", "new_orders")       # Exact match
+ client.bind_topic("orders.update", "order_updates") # Exact match
+ client.bind_topic("orders.*", "all_orders")         # Single-word wildcard
+ client.bind_topic("#", "audit_log")                 # Catch-all
+
+ # Send messages via routing key
+ # The message is delivered to ALL queues with matching patterns
+ count = client.produce_topic("orders.new", '{"order_id":123}')
+ # => 3 (delivered to: new_orders, all_orders, audit_log)
+
+ count = client.produce_topic("orders.update", '{"order_id":123,"status":"shipped"}')
+ # => 3 (delivered to: order_updates, all_orders, audit_log)
+
+ # Send with headers and delay
+ count = client.produce_topic("orders.new.priority",
+   '{"order_id":456}',
+   headers: '{"trace_id":"abc123"}',
+   delay: 0
+ )
+
+ # Batch send via topic routing
+ results = client.produce_batch_topic("orders.new", [
+   '{"order_id":1}',
+   '{"order_id":2}',
+   '{"order_id":3}'
+ ])
+ # => [{ queue_name: "new_orders", msg_id: "1" }, ...]
+
+ # List all topic bindings
+ bindings = client.list_topic_bindings
+ bindings.each do |b|
+   puts "#{b[:pattern]} -> #{b[:queue_name]}"
+ end
+
+ # List bindings for a specific queue
+ bindings = client.list_topic_bindings(queue_name: "all_orders")
+
+ # Test which queues a routing key would match (for debugging)
+ matches = client.test_routing("orders.new.priority")
+ # => [{ pattern: "#", queue_name: "audit_log" }]
+ # (only the catch-all matches a three-word key with the bindings above)
+
+ # Validate routing keys and patterns
+ client.validate_routing_key("orders.new.priority") # => true
+ client.validate_routing_key("orders.*")            # => false (wildcards not allowed in keys)
+ client.validate_topic_pattern("orders.*")          # => true
+ client.validate_topic_pattern("orders.#")          # => true
+
+ # Remove bindings when done
+ client.unbind_topic("orders.new", "new_orders")
+ client.unbind_topic("orders.*", "all_orders")
+ ```
+
+ **Use Cases:**
+ - **Event broadcasting**: Send events to multiple consumers based on event type
+ - **Multi-tenant routing**: Route messages to tenant-specific queues
+ - **Log aggregation**: Capture all messages in an audit queue while routing to specific handlers
+ - **Fan-out patterns**: Deliver one message to multiple processing pipelines
+
  ## Message Object
 
  PGMQ-Ruby is a **low-level transport library** - it returns raw values from PostgreSQL without any transformation. You are responsible for parsing JSON and type conversion.
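The `*`/`#` wildcard semantics described in the topic routing section can be made precise with a small pure-Ruby matcher. This is an illustration of the rules only: actual matching is performed by PGMQ's SQL functions, and `topic_match?` is a hypothetical helper.

```ruby
# Illustrative matcher for AMQP-style topic patterns:
#   * matches exactly one dot-separated word
#   # matches zero or more words
def topic_match?(pattern, routing_key)
  match_words(pattern.split("."), routing_key.split("."))
end

def match_words(pat, key)
  return key.empty? if pat.empty?
  case pat.first
  when "#"
    # '#' may consume zero, one, two, ... words
    (0..key.length).any? { |i| match_words(pat.drop(1), key.drop(i)) }
  when "*"
    # '*' must consume exactly one word
    !key.empty? && match_words(pat.drop(1), key.drop(1))
  else
    !key.empty? && pat.first == key.first && match_words(pat.drop(1), key.drop(1))
  end
end

puts topic_match?("orders.*", "orders.new")          # => true
puts topic_match?("orders.*", "orders.new.priority") # => false
puts topic_match?("orders.#", "orders")              # => true
puts topic_match?("orders.#", "orders.new.priority") # => true
puts topic_match?("#", "anything.at.all")            # => true
```

Note the asymmetry the README relies on: `orders.*` rejects both `orders` and `orders.new.priority`, which is why a three-word key in the example falls through to the `#` catch-all binding.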
@@ -524,6 +725,7 @@ msg.msg_id # => "123" (String, not Integer)
  msg.id # => "123" (alias for msg_id)
  msg.read_ct # => "1" (String, not Integer)
  msg.enqueued_at # => "2025-01-15 10:30:00+00" (String, not Time)
+ msg.last_read_at # => "2025-01-15 10:30:15+00" (String, or nil if never read)
  msg.vt # => "2025-01-15 10:30:30+00" (String, not Time)
  msg.message # => "{\"data\":\"value\"}" (Raw JSONB as JSON string)
  msg.headers # => "{\"trace_id\":\"abc123\"}" (Raw JSONB as JSON string, optional)
@@ -537,6 +739,7 @@ metadata = JSON.parse(msg.headers) if msg.headers # => { "trace_id" => "abc123"
  id = msg.msg_id.to_i # => 123
  read_count = msg.read_ct.to_i # => 1
  enqueued = Time.parse(msg.enqueued_at) # => 2025-01-15 10:30:00 UTC
+ last_read = Time.parse(msg.last_read_at) if msg.last_read_at # => Time or nil
  ```
 
  ### Message Headers
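Since the gem returns raw strings, the conversions shown above are often worth centralizing. The sketch below is hypothetical (neither `RawMessage` nor `decode` is part of the gem); it stands in for a `PGMQ::Message` to show the conversions, including nil-safe handling of `last_read_at`.

```ruby
require "json"
require "time"

# Stand-in for a PGMQ::Message with raw string fields (hypothetical)
RawMessage = Struct.new(:msg_id, :read_ct, :enqueued_at, :last_read_at, :message,
                        keyword_init: true)

# Convert raw strings into Ruby types in one place
def decode(msg)
  {
    id: msg.msg_id.to_i,
    read_ct: msg.read_ct.to_i,
    enqueued_at: Time.parse(msg.enqueued_at),
    last_read_at: msg.last_read_at && Time.parse(msg.last_read_at), # nil stays nil
    payload: JSON.parse(msg.message)
  }
end

raw = RawMessage.new(
  msg_id: "123",
  read_ct: "1",
  enqueued_at: "2025-01-15 10:30:00+00",
  last_read_at: nil, # never read
  message: '{"data":"value"}'
)

p decode(raw)
```

Keeping the transport layer string-typed and converting at the edge is a deliberate design of the gem; a wrapper like this keeps that conversion in one place per application.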
@@ -674,7 +877,13 @@ bundle install
  docker compose up -d
 
  # Run tests
- bundle exec rspec
+ bundle exec rake test
+
+ # Run all integration specs
+ bin/integrations
+
+ # Run a specific integration spec
+ bin/integrations spec/integration/basic_produce_consume_spec.rb
 
  # Run console
  bundle exec bin/console
@@ -33,12 +33,12 @@ module PGMQ
  result = with_connection do |conn|
    if conditional.empty?
      conn.exec_params(
-       'SELECT * FROM pgmq.read($1::text, $2::integer, $3::integer)',
+       "SELECT * FROM pgmq.read($1::text, $2::integer, $3::integer)",
        [queue_name, vt, 1]
      )
    else
      conn.exec_params(
-       'SELECT * FROM pgmq.read($1::text, $2::integer, $3::integer, $4::jsonb)',
+       "SELECT * FROM pgmq.read($1::text, $2::integer, $3::integer, $4::jsonb)",
        [queue_name, vt, 1, conditional.to_json]
      )
    end
@@ -82,12 +82,12 @@ module PGMQ
  result = with_connection do |conn|
    if conditional.empty?
      conn.exec_params(
-       'SELECT * FROM pgmq.read($1::text, $2::integer, $3::integer)',
+       "SELECT * FROM pgmq.read($1::text, $2::integer, $3::integer)",
        [queue_name, vt, qty]
      )
    else
      conn.exec_params(
-       'SELECT * FROM pgmq.read($1::text, $2::integer, $3::integer, $4::jsonb)',
+       "SELECT * FROM pgmq.read($1::text, $2::integer, $3::integer, $4::jsonb)",
        [queue_name, vt, qty, conditional.to_json]
      )
    end
@@ -135,12 +135,12 @@ module PGMQ
  result = with_connection do |conn|
    if conditional.empty?
      conn.exec_params(
-       'SELECT * FROM pgmq.read_with_poll($1::text, $2::integer, $3::integer, $4::integer, $5::integer)',
+       "SELECT * FROM pgmq.read_with_poll($1::text, $2::integer, $3::integer, $4::integer, $5::integer)",
        [queue_name, vt, qty, max_poll_seconds, poll_interval_ms]
      )
    else
-     sql = 'SELECT * FROM pgmq.read_with_poll($1::text, $2::integer, $3::integer, ' \
-       '$4::integer, $5::integer, $6::jsonb)'
+     sql = "SELECT * FROM pgmq.read_with_poll($1::text, $2::integer, $3::integer, " \
+       "$4::integer, $5::integer, $6::jsonb)"
      conn.exec_params(
        sql,
        [queue_name, vt, qty, max_poll_seconds, poll_interval_ms, conditional.to_json]
@@ -150,6 +150,79 @@ module PGMQ
 
  result.map { |row| Message.new(row) }
  end
+
+ # Reads messages using grouped round-robin ordering
+ #
+ # Messages are grouped by the first key in their JSON payload and returned
+ # in round-robin order across groups. This ensures fair processing when
+ # messages from different entities (users, orders, etc.) are in the queue.
+ #
+ # @param queue_name [String] name of the queue
+ # @param vt [Integer] visibility timeout in seconds
+ # @param qty [Integer] number of messages to read
+ # @return [Array<PGMQ::Message>] array of messages in round-robin order
+ #
+ # @example Fair processing across users
+ #   # Queue contains: user1_msg1, user1_msg2, user2_msg1, user3_msg1
+ #   messages = client.read_grouped_rr("tasks", vt: 30, qty: 4)
+ #   # Returns in round-robin: user1_msg1, user2_msg1, user3_msg1, user1_msg2
+ #
+ # @example Prevent a single entity from monopolizing a worker
+ #   loop do
+ #     messages = client.read_grouped_rr("orders", vt: 30, qty: 10)
+ #     break if messages.empty?
+ #     messages.each { |msg| process(msg) }
+ #   end
+ def read_grouped_rr(queue_name, vt: DEFAULT_VT, qty: 1)
+   validate_queue_name!(queue_name)
+
+   result = with_connection do |conn|
+     conn.exec_params(
+       "SELECT * FROM pgmq.read_grouped_rr($1::text, $2::integer, $3::integer)",
+       [queue_name, vt, qty]
+     )
+   end
+
+   result.map { |row| Message.new(row) }
+ end
+
+ # Reads messages using grouped round-robin with long-polling support
+ #
+ # Combines grouped round-robin ordering with long-polling for efficient
+ # and fair message consumption.
+ #
+ # @param queue_name [String] name of the queue
+ # @param vt [Integer] visibility timeout in seconds
+ # @param qty [Integer] number of messages to read
+ # @param max_poll_seconds [Integer] maximum time to poll in seconds
+ # @param poll_interval_ms [Integer] interval between polls in milliseconds
+ # @return [Array<PGMQ::Message>] array of messages in round-robin order
+ #
+ # @example Long-polling with fair ordering
+ #   messages = client.read_grouped_rr_with_poll("tasks",
+ #     vt: 30,
+ #     qty: 10,
+ #     max_poll_seconds: 5,
+ #     poll_interval_ms: 100
+ #   )
+ def read_grouped_rr_with_poll(
+   queue_name,
+   vt: DEFAULT_VT,
+   qty: 1,
+   max_poll_seconds: 5,
+   poll_interval_ms: 100
+ )
+   validate_queue_name!(queue_name)
+
+   result = with_connection do |conn|
+     conn.exec_params(
+       "SELECT * FROM pgmq.read_grouped_rr_with_poll($1::text, $2::integer, $3::integer, $4::integer, $5::integer)",
+       [queue_name, vt, qty, max_poll_seconds, poll_interval_ms]
+     )
+   end
+
+   result.map { |row| Message.new(row) }
+ end
  end
  end
  end
@@ -19,27 +19,10 @@ module PGMQ
  validate_queue_name!(queue_name)
 
  result = with_connection do |conn|
-   conn.exec_params('SELECT pgmq.purge_queue($1::text)', [queue_name])
+   conn.exec_params("SELECT pgmq.purge_queue($1::text)", [queue_name])
  end
 
- result[0]['purge_queue']
- end
-
- # Detaches the archive table from PGMQ management
- #
- # @param queue_name [String] name of the queue
- # @return [void]
- #
- # @example
- #   client.detach_archive("orders")
- def detach_archive(queue_name)
-   validate_queue_name!(queue_name)
-
-   with_connection do |conn|
-     conn.exec_params('SELECT pgmq.detach_archive($1::text)', [queue_name])
-   end
-
-   nil
+ result[0]["purge_queue"]
  end
 
  # Enables PostgreSQL NOTIFY when messages are inserted into a queue
@@ -65,7 +48,7 @@ module PGMQ
 
  with_connection do |conn|
    conn.exec_params(
-     'SELECT pgmq.enable_notify_insert($1::text, $2::integer)',
+     "SELECT pgmq.enable_notify_insert($1::text, $2::integer)",
      [queue_name, throttle_interval_ms]
    )
  end
@@ -84,7 +67,7 @@ module PGMQ
  validate_queue_name!(queue_name)
 
  with_connection do |conn|
-   conn.exec_params('SELECT pgmq.disable_notify_insert($1::text)', [queue_name])
+   conn.exec_params("SELECT pgmq.disable_notify_insert($1::text)", [queue_name])
  end
 
  nil