async-redis 0.11.2 → 0.13.0

@@ -0,0 +1,36 @@
+ # Automatically generated context index for Utopia::Project guides.
+ # Do not edit the files in this directory directly; instead, edit the guides and then run `bake utopia:project:agent:context:update`.
+ ---
+ description: A Redis client library.
+ metadata:
+   documentation_uri: https://socketry.github.io/async-redis/
+   source_code_uri: https://github.com/socketry/async-redis.git
+ files:
+ - path: getting-started.md
+   title: Getting Started
+   description: This guide explains how to use the `async-redis` gem to connect to
+     a Redis server and perform basic operations.
+ - path: transactions-and-pipelines.md
+   title: Transactions and Pipelines
+   description: This guide explains how to use Redis transactions and pipelines with
+     `async-redis` for atomic operations and improved performance.
+ - path: subscriptions.md
+   title: Subscriptions
+   description: This guide explains how to use Redis pub/sub functionality with `async-redis`
+     to publish and subscribe to messages.
+ - path: data-structures.md
+   title: Data Structures and Operations
+   description: This guide explains how to work with Redis data types and operations
+     using `async-redis`.
+ - path: streams.md
+   title: Streams
+   description: This guide explains how to use Redis streams with `async-redis` for
+     reliable message processing and event sourcing.
+ - path: scripting.md
+   title: Scripting
+   description: This guide explains how to use Redis Lua scripting with `async-redis`
+     for atomic operations and advanced data processing.
+ - path: client-architecture.md
+   title: Client Architecture
+   description: This guide explains the different client types available in `async-redis`
+     and when to use each one.
@@ -0,0 +1,243 @@
+ # Scripting
+
+ This guide explains how to use Redis Lua scripting with `async-redis` for atomic operations and advanced data processing.
+
+ Lua scripting moves complex logic to the Redis server, ensuring atomicity and reducing network round trips. This is essential for operations that need to read, compute, and write data atomically.
+
+ Critical for:
+ - **Atomic business logic**: Complex operations that must be consistent.
+ - **Performance optimization**: Reduce network calls for multi-step operations.
+ - **Race condition prevention**: Ensure operations complete without interference.
+ - **Custom data structures**: Implement specialized behaviors not available in standard Redis commands.
+
+ ## Basic Script Loading and Execution
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+   begin
+     # Simple Lua script:
+     increment_script = <<~LUA
+       local key = KEYS[1]
+       local increment = tonumber(ARGV[1])
+       local current = redis.call('GET', key) or 0
+       local new_value = tonumber(current) + increment
+       redis.call('SET', key, new_value)
+       return new_value
+     LUA
+
+     # Load and execute script:
+     script_sha = client.script("LOAD", increment_script)
+     puts "Script loaded with SHA: #{script_sha}"
+
+     # Execute script by SHA:
+     result = client.evalsha(script_sha, 1, "counter", 5)
+     puts "Counter incremented to: #{result}"
+
+     # Execute script directly:
+     result = client.eval(increment_script, 1, "counter", 3)
+     puts "Counter incremented to: #{result}"
+
+   ensure
+     client.close
+   end
+ end
+ ```
+
+ ## Parameter Passing and Return Values
+
+ ``` ruby
+ require "async/redis"
+ require "securerandom"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+   begin
+     # Script with multiple parameters and complex return:
+     session_update_script = <<~LUA
+       local session_key = KEYS[1]
+       local user_id = ARGV[1]
+       local username = ARGV[2]
+       local increment_activity = ARGV[3] == 'true'
+
+       -- Update session fields:
+       redis.call('HSET', session_key, 'user_id', user_id, 'username', username)
+
+       -- Conditionally increment activity count:
+       local activity_count = 0
+       if increment_activity then
+         activity_count = redis.call('HINCRBY', session_key, 'activity_count', 1)
+       else
+         activity_count = tonumber(redis.call('HGET', session_key, 'activity_count') or 0)
+       end
+
+       -- Update last activity timestamp:
+       redis.call('HSET', session_key, 'last_activity', redis.call('TIME')[1])
+
+       -- Return session data:
+       return {
+         user_id,
+         username,
+         activity_count,
+         redis.call('HGET', session_key, 'last_activity')
+       }
+     LUA
+
+     # Execute with parameters:
+     session_id = "session:#{SecureRandom.hex(16)}"
+     result = client.eval(session_update_script, 1, session_id, "12345", "alice", "true")
+     user_id, username, activity_count, last_activity = result
+
+     puts "Updated session for #{username} (ID: #{user_id})"
+     puts "Activity count: #{activity_count}, Last activity: #{last_activity}"
+
+   ensure
+     client.close
+   end
+ end
+ ```
+
+ ## Script Caching Pattern
+
+ Instead of a complex script manager, use a simple caching pattern: define scripts as constants, load them once at initialization, and execute them by SHA:
+
+ ``` ruby
+ require "async/redis"
+
+ class JobQueue
+   # Define scripts as class constants:
+   DEQUEUE_SCRIPT = <<~LUA
+     local queue_key = KEYS[1]
+     local processing_key = KEYS[2]
+     local current_time = ARGV[1]
+
+     -- Get job from queue:
+     local job = redis.call('LPOP', queue_key)
+     if not job then
+       return nil
+     end
+
+     -- Add to processing set with timestamp:
+     redis.call('ZADD', processing_key, current_time, job)
+
+     return job
+   LUA
+
+   COMPLETE_SCRIPT = <<~LUA
+     local processing_key = KEYS[1]
+     local job_data = ARGV[1]
+
+     -- Remove job from processing set:
+     local removed = redis.call('ZREM', processing_key, job_data)
+
+     return removed
+   LUA
+
+   def initialize(client, queue_name)
+     @client = client
+     @queue_name = queue_name
+     @processing_name = "#{queue_name}:processing"
+
+     # Load all scripts at initialization:
+     @dequeue_sha = @client.script("LOAD", DEQUEUE_SCRIPT)
+     @complete_sha = @client.script("LOAD", COMPLETE_SCRIPT)
+   end
+
+   def dequeue_job
+     @client.evalsha(@dequeue_sha, 2, @queue_name, @processing_name, Time.now.to_i)
+   end
+
+   def complete_job(job_data)
+     @client.evalsha(@complete_sha, 1, @processing_name, job_data)
+   end
+
+   def enqueue_job(job_data)
+     @client.rpush(@queue_name, job_data)
+   end
+
+   def cleanup_stale_jobs(timeout_seconds = 300)
+     cutoff_time = Time.now.to_i - timeout_seconds
+     stale_jobs = @client.zrangebyscore(@processing_name, "-inf", cutoff_time)
+
+     if stale_jobs.any?
+       puts "Found #{stale_jobs.length} stale jobs, requeueing..."
+
+       # Move stale jobs back to queue:
+       stale_jobs.each do |job|
+         @client.lpush(@queue_name, job)
+       end
+
+       # Remove from processing set:
+       @client.zremrangebyscore(@processing_name, "-inf", cutoff_time)
+     end
+   end
+ end
+
+ # Usage:
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+   begin
+     queue = JobQueue.new(client, "work_queue")
+
+     # Add some jobs:
+     5.times do |i|
+       queue.enqueue_job("job_#{i}")
+       puts "Enqueued job_#{i}"
+     end
+
+     # Process jobs:
+     3.times do
+       job = queue.dequeue_job
+       if job
+         puts "Processing: #{job}"
+         # Simulate work...
+         sleep 0.1
+
+         # Mark the job as complete so it leaves the processing set:
+         queue.complete_job(job)
+         puts "Completed: #{job}"
+       else
+         puts "No jobs available"
+       end
+     end
+
+     # Check for stale jobs:
+     queue.cleanup_stale_jobs(60)
+
+   ensure
+     client.close
+   end
+ end
+ ```
+
+ ## Best Practices
+
+ ### When to Use Scripts
+
+ Use Lua scripts when you need:
+ - **Atomic multi-step operations**: Multiple Redis commands that must succeed or fail together.
+ - **Complex conditional logic**: Operations that depend on current Redis state.
+ - **Performance optimization**: Reduce network round trips for complex operations.
+ - **Race condition prevention**: Ensure operations complete without interference (see the lock-release sketch below).
+
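+ For example, the classic lock-release pattern is only safe as a script: checking a lock token with `GET` and deleting it with `DEL` as two separate commands would let another client acquire the lock in between. A minimal sketch, assuming a connected `client` as in the examples above (the key and token values are hypothetical):
+
+ ``` ruby
+ # Delete the lock key only if it still holds our token - the comparison
+ # and the deletion execute as one atomic step on the server:
+ unlock_script = <<~LUA
+   if redis.call('GET', KEYS[1]) == ARGV[1] then
+     return redis.call('DEL', KEYS[1])
+   else
+     return 0
+   end
+ LUA
+
+ client.eval(unlock_script, 1, "locks:report_builder", "token-abc123")
+ ```
+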
+ ### When Not to Use Scripts
+
+ Avoid scripts for:
+ - **Simple operations**: Single Redis commands don't need scripting.
+ - **Long-running operations**: Scripts block the Redis server.
+ - **Operations with external dependencies**: Scripts can't make network calls.
+ - **Frequently changing logic**: Scripts are cached and harder to update.
+
+ ### Script Performance Tips
+
+ - **Keep scripts short**: Long scripts block other operations.
+ - **Use local variables**: Avoid repeated Redis calls for the same data.
+ - **Cache scripts**: Use `EVALSHA` instead of `EVAL` for better performance.
+ - **Handle script cache misses**: Implement fallback logic for `NOSCRIPT` errors, as sketched below.
+ - **Validate inputs early**: Check parameters before performing operations.
+
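+ The script cache is not durable: it is emptied by `SCRIPT FLUSH` and server restarts, after which `EVALSHA` fails with a `NOSCRIPT` error. A minimal fallback sketch, assuming the error surfaces as a `Protocol::Redis::ServerError` whose message contains `NOSCRIPT` (the helper name is hypothetical):
+
+ ``` ruby
+ # Try the cached SHA first; if the script has been evicted, fall back to
+ # EVAL, which re-caches the script as a side effect:
+ def eval_with_fallback(client, script, sha, key_count, *arguments)
+   client.evalsha(sha, key_count, *arguments)
+ rescue Protocol::Redis::ServerError => error
+   raise unless error.message.include?("NOSCRIPT")
+   client.eval(script, key_count, *arguments)
+ end
+ ```
+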
+ Scripts provide powerful atomic operations but should be used judiciously to maintain Redis performance and simplicity.
@@ -0,0 +1,317 @@
+ # Streams
+
+ This guide explains how to use Redis streams with `async-redis` for reliable message processing and event sourcing.
+
+ Streams are designed for high-throughput message processing and event sourcing. They provide durability, consumer groups for load balancing, and explicit message acknowledgment.
+
+ Use streams when you need:
+ - **Event sourcing**: Capture all changes to application state.
+ - **Message queues**: Reliable message delivery with consumer groups.
+ - **Audit logs**: An immutable record of system events.
+ - **Real-time analytics**: Process streams of user events or metrics.
+
+ ## Stream Creation and Consumption
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+   begin
+     # Add entries to stream:
+     events = [
+       { "type" => "user_signup", "user_id" => "123", "email" => "alice@example.com" },
+       { "type" => "purchase", "user_id" => "123", "amount" => "29.99" },
+       { "type" => "user_signup", "user_id" => "456", "email" => "bob@example.com" }
+     ]
+
+     events.each do |event|
+       entry_id = client.xadd("user_events", "*", event)
+       puts "Added event with ID: #{entry_id}"
+     end
+
+     # Read from stream:
+     entries = client.xrange("user_events", "-", "+")
+     puts "Stream entries:"
+     entries.each do |entry_id, fields|
+       puts " #{entry_id}: #{fields}"
+     end
+
+     # Read latest entries:
+     latest = client.xrevrange("user_events", "+", "-", count: 2)
+     puts "Latest 2 entries: #{latest}"
+
+   ensure
+     client.close
+   end
+ end
+ ```
+
+ ## Reading New Messages
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+   begin
+     # Add some initial events:
+     client.xadd("notifications", "*", "type" => "welcome", "user_id" => "123")
+     client.xadd("notifications", "*", "type" => "reminder", "user_id" => "456")
+
+     # Read only new messages (blocking):
+     puts "Waiting for new messages..."
+
+     # The "$" ID means only entries added after this call; this will block
+     # for up to 5 seconds waiting for new messages to arrive:
+     messages = client.xread("BLOCK", 5000, "STREAMS", "notifications", "$")
+
+     if messages && !messages.empty?
+       stream_name, entries = messages[0]
+       puts "Received #{entries.length} new messages:"
+       entries.each do |entry_id, fields|
+         puts " #{entry_id}: #{fields}"
+       end
+     else
+       puts "No new messages received within timeout"
+     end
+
+   ensure
+     client.close
+   end
+ end
+ ```
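+
+ Note that `"$"` only observes entries added while the call is blocked, so a naive polling loop can miss entries that arrive between calls. To read reliably without consumer groups, track the last delivered ID and pass it to the next call. A minimal sketch, assuming the `notifications` stream and connected `client` from above:
+
+ ``` ruby
+ # Start with "$" (only entries newer than now), then resume from the
+ # last delivered ID on each subsequent call:
+ last_id = "$"
+
+ 3.times do
+   messages = client.xread("BLOCK", 5000, "STREAMS", "notifications", last_id)
+
+   if messages && !messages.empty?
+     _stream_name, entries = messages[0]
+     entries.each do |entry_id, fields|
+       puts "#{entry_id}: #{fields}"
+       last_id = entry_id
+     end
+   end
+ end
+ ```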
+
+ ## Consumer Groups
+
+ Consumer groups enable multiple workers to process messages in parallel, with each message delivered to only one consumer in the group and retained as pending until it is acknowledged:
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+   begin
+     # Create consumer group:
+     begin
+       client.xgroup("CREATE", "user_events", "processors", "0", "MKSTREAM")
+       puts "Created consumer group 'processors'"
+     rescue Protocol::Redis::ServerError => e
+       puts "Consumer group already exists: #{e.message}"
+     end
+
+     # Add some test events:
+     3.times do |i|
+       client.xadd("user_events", "*", "event" => "test_#{i}", "timestamp" => Time.now.to_f)
+     end
+
+     # Consume messages:
+     consumer_name = "worker_1"
+     messages = client.xreadgroup("GROUP", "processors", consumer_name, "COUNT", 2, "STREAMS", "user_events", ">")
+
+     if messages && !messages.empty?
+       stream_name, entries = messages[0]
+       puts "Consumer #{consumer_name} received #{entries.length} messages:"
+
+       entries.each do |entry_id, fields|
+         puts " Processing #{entry_id}: #{fields}"
+
+         # Simulate message processing:
+         sleep 0.1
+
+         # Acknowledge message processing:
+         client.xack("user_events", "processors", entry_id)
+         puts " Acknowledged #{entry_id}"
+       end
+     else
+       puts "No new messages for consumer #{consumer_name}"
+     end
+
+   ensure
+     client.close
+   end
+ end
+ ```
140
+
141
+ ## Multiple Consumers
142
+
143
+ Demonstrate load balancing across multiple consumers:
144
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do |task|
+   begin
+     # Create consumer group:
+     begin
+       client.xgroup("CREATE", "work_queue", "workers", "0", "MKSTREAM")
+     rescue Protocol::Redis::ServerError
+       # Group already exists.
+     end
+
+     # Producer task - add work items:
+     producer = task.async do
+       10.times do |i|
+         client.xadd("work_queue", "*",
+           "task_id" => i,
+           "data" => "work_item_#{i}",
+           "priority" => rand(1..5)
+         )
+         puts "Added work item #{i}"
+         sleep 0.5
+       end
+     end
+
+     # Consumer tasks - process work items:
+     consumers = 3.times.map do |worker_id|
+       task.async do
+         consumer_name = "worker_#{worker_id}"
+
+         loop do
+           messages = client.xreadgroup(
+             "GROUP", "workers", consumer_name,
+             "COUNT", 1,
+             "BLOCK", 1000,
+             "STREAMS", "work_queue", ">"
+           )
+
+           if messages && !messages.empty?
+             stream_name, entries = messages[0]
+
+             entries.each do |entry_id, fields|
+               puts "#{consumer_name} processing: #{fields}"
+
+               # Simulate work:
+               sleep rand(0.1..0.5)
+
+               # Acknowledge completion:
+               client.xack("work_queue", "workers", entry_id)
+               puts "#{consumer_name} completed: #{entry_id}"
+             end
+           end
+         end
+       end
+     end
+
+     # Wait for producer to finish:
+     producer.wait
+
+     # Let consumers process remaining work:
+     sleep 3
+
+     # Stop all consumers:
+     consumers.each(&:stop)
+
+   ensure
+     client.close
+   end
+ end
+ ```
+
+ ## Message Acknowledgment and Recovery
+
+ Handle message acknowledgment and recover from failures:
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+   begin
+     # Check for pending messages:
+     pending_info = client.xpending("user_events", "processors")
+     if pending_info && pending_info[0] > 0
+       puts "Found #{pending_info[0]} pending messages"
+
+       # Get detailed pending information:
+       pending_details = client.xpending("user_events", "processors", "-", "+", 10)
+       pending_details.each do |entry_id, consumer, idle_time, delivery_count|
+         puts "Message #{entry_id} pending for #{idle_time}ms (delivered #{delivery_count} times to #{consumer})"
+
+         # Claim long-pending messages for reprocessing:
+         if idle_time > 60000 # 1 minute
+           claimed = client.xclaim("user_events", "processors", "recovery_worker", 60000, entry_id)
+           if claimed && !claimed.empty?
+             puts "Claimed message #{entry_id} for reprocessing"
+
+             # Process the claimed message:
+             claimed.each do |claimed_id, fields|
+               puts "Reprocessing: #{fields}"
+               # ... process message ...
+             end
+
+             # Acknowledge after successful processing:
+             client.xack("user_events", "processors", entry_id)
+             puts "Acknowledged recovered message #{entry_id}"
+           end
+         end
+       end
+     else
+       puts "No pending messages found"
+     end
+
+   ensure
+     client.close
+   end
+ end
+ ```
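+
+ On Redis 6.2 and later, `XAUTOCLAIM` combines the scan-and-claim steps above into a single command. A minimal sketch using the client's generic `call` interface rather than a dedicated method wrapper (which may not exist in all versions of the client), reusing the stream and group from above:
+
+ ``` ruby
+ # Claim up to 10 messages that have been idle for more than 60 seconds,
+ # scanning the pending entries list from the beginning ("0-0"). The reply
+ # is [next_cursor, claimed_entries] (plus deleted IDs on Redis 7+):
+ cursor, entries, *_rest = client.call("XAUTOCLAIM", "user_events", "processors", "recovery_worker", 60000, "0-0", "COUNT", 10)
+
+ entries.each do |entry_id, fields|
+   puts "Auto-claimed #{entry_id}: #{fields}"
+   # ... process message ...
+   client.xack("user_events", "processors", entry_id)
+ end
+ ```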
+
+ ## Stream Information and Management
+
+ Monitor and manage stream health:
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+   begin
+     # Get stream information:
+     stream_info = client.xinfo("STREAM", "user_events")
+     puts "Stream info: #{stream_info}"
+
+     # Get consumer group information:
+     begin
+       groups_info = client.xinfo("GROUPS", "user_events")
+       puts "Consumer groups:"
+       groups_info.each do |group|
+         group_data = Hash[*group]
+         puts " Group: #{group_data['name']}, Consumers: #{group_data['consumers']}, Pending: #{group_data['pending']}"
+       end
+     rescue Protocol::Redis::ServerError
+       puts "No consumer groups exist for this stream"
+     end
+
+     # Get consumers in a group:
+     begin
+       consumers_info = client.xinfo("CONSUMERS", "user_events", "processors")
+       puts "Consumers in 'processors' group:"
+       consumers_info.each do |consumer|
+         consumer_data = Hash[*consumer]
+         puts " Consumer: #{consumer_data['name']}, Pending: #{consumer_data['pending']}, Idle: #{consumer_data['idle']}ms"
+       end
+     rescue Protocol::Redis::ServerError
+       puts "Consumer group 'processors' does not exist"
+     end
+
+     # Trim stream to keep only recent messages:
+     trimmed = client.xtrim("user_events", "MAXLEN", "~", 1000)
+     puts "Trimmed #{trimmed} messages from stream"
+
+   ensure
+     client.close
+   end
+ end
+ ```