async-redis 0.12.0 → 0.13.0

This diff shows the changes between publicly released versions of this package, as they appear in its public registry. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 631249ff627c06ac8931e39bbebd3a357695b390452bc67e7fc69e98a8abbcf0
- data.tar.gz: 9f85726c02e706838bbf90d78d36233a4749901593e8c766ab008f797d18f381
+ metadata.gz: 2e409b3ef4f0989517bcf877880475280c64b979f63622ab347e030d0185b1e4
+ data.tar.gz: 74f33ea9534d6fd0337d818475a3addacafa7121bd988be2366aaebc7f1f1e44
  SHA512:
- metadata.gz: 4471fecf34ff2cc9a3ead17ac8a92978aafe7e4ef8abcc5b25a36c7a48223ac9f9734ce8972a337ee2a83fd684e790a8a2bbaf263fd09d8ac646f39520f79443
- data.tar.gz: 3d7388f8ab00d99aa5fd97d06ddec1f07c9027c7512fb8c194a08afa1becabf7c93407e73e85be25fc75c6af54129984fee5e638b550e22846e30c0c2b56f5fc
+ metadata.gz: 1c71e0975e0559084b767b213462d71489c74f03f58549fc38bda7ee05130a40bdfadd4e5cb784bb8cbf96b1f5b2b4bd00221f6da5c5d5068b36dc6344b5fb0d
+ data.tar.gz: 8627a5adee5087961d1174d18d0db1e967133cfcdc7d63fb646c157a2b17137ab19dcb528b7c6261ffe789713f3e9fef41b890c83f7fbfaa09528e2718cd1150
checksums.yaml.gz.sig CHANGED
Binary file
@@ -0,0 +1,124 @@
+ # Client Architecture
+
+ This guide explains the different client types available in `async-redis` and when to use each one.
+
+ ## Redis Deployment Patterns
+
+ Redis can be deployed in several configurations, each serving different scalability and availability needs:
+
+ ### Single Instance
+
+ A single Redis server handles all operations. Simple to set up and manage, but limited by the capacity of one machine.
+
+ **Use when:**
+ - **Development**: Local development and testing.
+ - **Small applications**: Low-traffic applications with simple caching needs.
+ - **Prototyping**: Getting started quickly without infrastructure complexity.
+
+ **Limitations:**
+ - **Single point of failure**: If Redis goes down, your application loses caching.
+ - **Memory constraints**: Limited by the memory of one machine.
+ - **CPU bottlenecks**: All operations are processed by one Redis instance.
+
+ Use {ruby Async::Redis::Client} to connect to a single Redis instance.
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		client.set("cache:page", "cached content")
+ 		content = client.get("cache:page")
+ 		puts "Retrieved: #{content}"
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
+
+ ### Cluster (Sharded)
+
+ Multiple Redis nodes work together, with data automatically distributed across nodes based on key hashing. Provides horizontal scaling and high availability.
+
+ **Use when:**
+ - **Large datasets**: Data doesn't fit in a single Redis instance's memory.
+ - **High throughput**: Need to distribute load across multiple machines.
+ - **Horizontal scaling**: Want to add capacity by adding more nodes.
+
+ **Benefits:**
+ - **Automatic sharding**: Data is distributed across nodes by hashing each key to one of 16384 fixed hash slots.
+ - **High availability**: Cluster continues operating if some nodes fail.
+ - **Linear scaling**: Add nodes to increase capacity and throughput.
+
+ Use {ruby Async::Redis::ClusterClient} to connect to a Redis cluster.
+
+ ``` ruby
+ require "async/redis"
+
+ cluster_endpoints = [
+ 	Async::Redis::Endpoint.new(hostname: "redis-1.example.com", port: 7000),
+ 	Async::Redis::Endpoint.new(hostname: "redis-2.example.com", port: 7001),
+ 	Async::Redis::Endpoint.new(hostname: "redis-3.example.com", port: 7002)
+ ]
+
+ cluster_client = Async::Redis::ClusterClient.new(cluster_endpoints)
+
+ Async do
+ 	begin
+ 		# Data automatically distributed across nodes:
+ 		cluster_client.set("cache:user:123", "user data")
+ 		cluster_client.set("cache:user:456", "other user data")
+
+ 		data = cluster_client.get("cache:user:123")
+ 		puts "Retrieved from cluster: #{data}"
+ 	ensure
+ 		cluster_client.close
+ 	end
+ end
+ ```
+
+ Note that the cluster client automatically routes requests to the correct shard where possible.
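+
+ Multi-key operations only work when all of the keys involved hash to the same slot. The standard Redis Cluster idiom for this is hash tags: only the portion of the key inside `{...}` is hashed, so related keys can be pinned to the same shard. A minimal sketch, continuing with the `cluster_client` above:
+
+ ``` ruby
+ # Only "user:123" (the part inside the braces) is hashed, so both
+ # keys are guaranteed to land on the same shard:
+ cluster_client.set("{user:123}:profile", "profile data")
+ cluster_client.set("{user:123}:settings", "settings data")
+ ```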
+
+ ### Sentinel (Master/Slave with Failover)
+
+ One master handles writes, multiple slaves handle reads, with sentinel processes monitoring the deployment for automatic failover.
+
+ **Use when:**
+ - **High availability**: Cannot tolerate Redis downtime.
+ - **Read scaling**: Many read operations, fewer writes.
+ - **Automatic failover**: Want automatic promotion of slaves to masters.
+
+ **Benefits:**
+ - **Automatic failover**: Sentinels promote a slave when the master fails.
+ - **Read/write separation**: Distribute read load across slave instances.
+ - **Monitoring**: Built-in health checks and failure detection.
+
+ Use {ruby Async::Redis::SentinelClient} to connect through Redis Sentinel.
+
+ ``` ruby
+ require "async/redis"
+
+ sentinel_endpoints = [
+ 	Async::Redis::Endpoint.new(hostname: "sentinel-1.example.com", port: 26379),
+ 	Async::Redis::Endpoint.new(hostname: "sentinel-2.example.com", port: 26379),
+ 	Async::Redis::Endpoint.new(hostname: "sentinel-3.example.com", port: 26379)
+ ]
+
+ sentinel_client = Async::Redis::SentinelClient.new(
+ 	sentinel_endpoints,
+ 	master_name: "mymaster"
+ )
+
+ Async do
+ 	begin
+ 		# Automatically connects to the current master:
+ 		sentinel_client.set("cache:critical", "important data")
+ 		data = sentinel_client.get("cache:critical")
+ 		puts "Retrieved from master: #{data}"
+ 	ensure
+ 		sentinel_client.close
+ 	end
+ end
+ ```
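+
+ For read scaling, a second client can be resolved to the replicas through the same sentinels. This is a sketch only, assuming the client accepts a `role:` option; verify against the {ruby Async::Redis::SentinelClient} documentation for your version:
+
+ ``` ruby
+ # Assumed role: option for selecting replicas (check your version's API):
+ replica_client = Async::Redis::SentinelClient.new(
+ 	sentinel_endpoints,
+ 	master_name: "mymaster",
+ 	role: :slave
+ )
+ ```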
@@ -0,0 +1,486 @@
+ # Data Structures and Operations
+
+ This guide explains how to work with Redis data types and operations using `async-redis`.
+
+ ## Strings
+
+ Strings are Redis's most versatile data type, perfect for caching values, storing user sessions, implementing counters, or holding configuration data. Despite the name, they can store any binary data up to 512MB.
+
+ Common use cases:
+ - **Caching**: Store computed results, API responses, or database query results.
+ - **Counters**: Page views, user scores, rate limiting.
+ - **Sessions**: User authentication tokens and session data.
+ - **Configuration**: Application settings and feature flags.
+
+ ### Basic Operations
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Cache API responses:
+ 		client.set("cache:api:users", '{"users": [{"id": 1, "name": "Alice"}]}')
+ 		client.expire("cache:api:users", 300) # 5 minute cache
+
+ 		cached_response = client.get("cache:api:users")
+ 		puts "Cached API response: #{cached_response}"
+
+ 		# Implement counters:
+ 		client.incr("stats:page_views")
+ 		client.incrby("stats:api_calls", 5)
+
+ 		page_views = client.get("stats:page_views")
+ 		api_calls = client.get("stats:api_calls")
+ 		puts "Page views: #{page_views}, API calls: #{api_calls}"
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
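+
+ The same counter commands implement simple rate limiting: increment a per-client key and start a time window on the first request. A minimal sketch reusing the `client` above (the key name and limits are illustrative):
+
+ ``` ruby
+ # Allow at most 100 requests per client in a 60 second window:
+ key = "rate:client:42"
+ count = client.incr(key)
+ client.expire(key, 60) if count == 1 # The first request starts the window.
+
+ if count > 100
+ 	puts "Rate limit exceeded."
+ else
+ 	puts "Request #{count} of 100 allowed."
+ end
+ ```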
+
+ ### Binary Data Handling
+
+ Redis strings can store any binary data, making them useful for caching images, files, or serialized objects:
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Cache serialized data:
+ 		user_data = { id: 123, name: "Alice", preferences: { theme: "dark" } }
+ 		serialized = Marshal.dump(user_data)
+
+ 		client.set("cache:user:123:full", serialized)
+ 		client.expire("cache:user:123:full", 1800) # 30 minutes
+
+ 		# Retrieve and deserialize:
+ 		cached_data = client.get("cache:user:123:full")
+ 		if cached_data
+ 			deserialized = Marshal.load(cached_data)
+ 			puts "Cached user: #{deserialized}"
+ 		end
+ 	rescue => error
+ 		puts "Cache error: #{error.message}"
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
+
+ ### Expiration and TTL
+
+ ``` ruby
+ require "async/redis"
+ require "securerandom"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Set temporary authentication tokens:
+ 		auth_token = SecureRandom.hex(32)
+ 		client.setex("auth:token:#{auth_token}", 3600, "user:12345")
+
+ 		# Cache with conditional setting (EXISTS returns an integer count):
+ 		cache_key = "cache:expensive_computation"
+ 		if client.exists(cache_key) == 0
+ 			# Simulate expensive computation:
+ 			result = "computed_result_#{rand(1000)}"
+ 			client.setex(cache_key, 600, result) # 10 minute cache
+ 			puts "Computed and cached: #{result}"
+ 		else
+ 			cached_result = client.get(cache_key)
+ 			puts "Using cached result: #{cached_result}"
+ 		end
+
+ 		# Check remaining TTL:
+ 		ttl = client.ttl(cache_key)
+ 		puts "Cache expires in #{ttl} seconds"
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
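+
+ Note that the check-then-set above has a small race between `exists` and `setex`; a single atomic `SET` with the `NX` and `EX` options avoids it. A sketch using the generic `call` interface rather than assuming a particular method signature:
+
+ ``` ruby
+ # Set only if absent, with a 10 minute TTL, in one atomic command:
+ was_set = client.call("SET", "cache:expensive_computation", "fresh_result", "NX", "EX", 600)
+ puts was_set ? "Computed and cached atomically." : "Already cached."
+ ```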
+
+ ## Hashes
+
+ Hashes are ideal for caching structured data with multiple fields. They're more memory-efficient than using separate string keys for each field and provide atomic operations on individual fields, making them perfect for temporary storage and caching scenarios.
+
+ Perfect for:
+ - **Session data**: Cache user session attributes and temporary state.
+ - **Row cache**: Store frequently accessed database rows to reduce query load.
+ - **Request cache**: Cache computed results from expensive operations or API calls.
+ - **User preferences**: Store frequently accessed settings that change occasionally.
+
+ ### Field Operations
+
+ ``` ruby
+ require "async/redis"
+ require "securerandom"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Store session data with individual fields:
+ 		session_id = "session:" + SecureRandom.hex(16)
+ 		client.hset(session_id, "user_id", "12345")
+ 		client.hset(session_id, "username", "alice")
+ 		client.hset(session_id, "roles", "admin,editor")
+ 		client.hset(session_id, "last_activity", Time.now.to_i)
+
+ 		# Set session expiration:
+ 		client.expire(session_id, 3600) # 1 hour
+
+ 		# Retrieve session fields:
+ 		user_id = client.hget(session_id, "user_id")
+ 		username = client.hget(session_id, "username")
+ 		roles = client.hget(session_id, "roles").split(",")
+
+ 		puts "Session for user #{username} (ID: #{user_id}), roles: #{roles.join(', ')}"
+
+ 		# Check if user has specific role:
+ 		has_admin = client.hexists(session_id, "roles") &&
+ 			client.hget(session_id, "roles").include?("admin")
+ 		puts "Has admin role: #{has_admin}"
+
+ 		# Update last activity timestamp:
+ 		client.hset(session_id, "last_activity", Time.now.to_i)
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
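+
+ Hash fields also support atomic increments, so counters can live alongside other fields without a read-modify-write cycle. A small sketch reusing the `client` above (key and field names are illustrative):
+
+ ``` ruby
+ # Atomically increment counter fields on a hash:
+ client.hincrby("stats:daily", "logins", 1)
+ client.hincrby("stats:daily", "page_views", 10)
+
+ puts "Today's stats: #{client.hgetall('stats:daily')}"
+ ```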
+
+ ### Bulk Operations
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Cache database row with multiple fields:
+ 		user_cache_key = "cache:user:456"
+ 		user_data = {
+ 			"name" => "Bob",
+ 			"email" => "bob@example.com",
+ 			"last_login" => Time.now.to_i.to_s,
+ 			"status" => "active"
+ 		}
+
+ 		# Set all fields at once:
+ 		client.hmset(user_cache_key, *user_data.to_a.flatten)
+ 		client.expire(user_cache_key, 1800) # 30 minutes
+
+ 		# Get specific fields:
+ 		name, email = client.hmget(user_cache_key, "name", "email")
+ 		puts "User: #{name} (#{email})"
+
+ 		# Get all cached data:
+ 		all_data = client.hgetall(user_cache_key)
+ 		puts "Full user cache: #{all_data}"
+
+ 		# Get field count:
+ 		field_count = client.hlen(user_cache_key)
+ 		puts "Cached #{field_count} user fields"
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
+
+ ## Lists
+
+ Lists maintain insertion order and allow duplicates, making them perfect for implementing queues, activity feeds, or recent item lists. They support efficient operations at both ends.
+
+ Essential for:
+ - **Task queues**: Background job processing with FIFO or LIFO behavior.
+ - **Activity feeds**: Recent user actions or timeline events.
+ - **Message queues**: Communication between application components.
+ - **Recent items**: Keep track of recently viewed or accessed items.
+
+ ### Queue Operations
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Producer: Add tasks to queue:
+ 		tasks = ["send_email:123", "process_payment:456", "generate_report:789"]
+ 		tasks.each do |task|
+ 			client.lpush("task_queue", task)
+ 			puts "Queued: #{task}"
+ 		end
+
+ 		# Consumer: Process tasks from queue (LPUSH + RPOP gives FIFO order):
+ 		while client.llen("task_queue") > 0
+ 			task = client.rpop("task_queue")
+ 			puts "Processing: #{task}"
+
+ 			# Simulate task processing:
+ 			sleep 0.1
+ 			puts "Completed: #{task}"
+ 		end
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
+
+ ### Recent Items List
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		user_id = "123"
+ 		recent_key = "recent:viewed:#{user_id}"
+
+ 		# User views different pages:
+ 		pages = ["/products/1", "/products/5", "/cart", "/products/1", "/checkout"]
+
+ 		pages.each do |page|
+ 			# Remove if already exists to avoid duplicates:
+ 			client.lrem(recent_key, 0, page)
+
+ 			# Add to front of list:
+ 			client.lpush(recent_key, page)
+
+ 			# Keep only last 5 items:
+ 			client.ltrim(recent_key, 0, 4)
+ 		end
+
+ 		# Get recent items:
+ 		recent_pages = client.lrange(recent_key, 0, -1)
+ 		puts "Recently viewed: #{recent_pages}"
+
+ 		# Set expiration for cleanup:
+ 		client.expire(recent_key, 86400) # 24 hours
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
+
+ ### Blocking Operations
+
+ Blocking operations let consumers wait for new items instead of constantly polling, making them perfect for real-time job processing:
+
+ ``` ruby
+ require "async/redis"
+ require "json"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do |task|
+ 	begin
+ 		# Producer task:
+ 		producer = task.async do
+ 			5.times do |i|
+ 				sleep 1
+ 				job_data = { id: i, action: "process_user_#{i}" }.to_json
+ 				client.lpush("job_queue", job_data)
+ 				puts "Produced job #{i}"
+ 			end
+ 		end
+
+ 		# Consumer task with blocking pop:
+ 		consumer = task.async do
+ 			5.times do
+ 				# Block for up to 2 seconds waiting for work:
+ 				result = client.brpop("job_queue", 2)
+
+ 				if result
+ 					queue_name, job_json = result
+ 					job = JSON.parse(job_json)
+ 					puts "Processing job #{job['id']}: #{job['action']}"
+ 					sleep 0.5 # Simulate work
+ 				else
+ 					puts "No work available, continuing..."
+ 				end
+ 			end
+ 		end
+
+ 		# Wait for both tasks to complete:
+ 		producer.wait
+ 		consumer.wait
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
+
+ ## Sets and Sorted Sets
+
+ Sets automatically handle uniqueness and provide fast membership testing, while sorted sets add scoring for rankings and range queries.
+
+ Sets are perfect for:
+ - **Tags and categories**: User interests, product categories.
+ - **Unique visitors**: Track unique users without duplicates.
+ - **Permissions**: User roles and access rights.
+ - **Cache invalidation**: Track which cache keys need updating.
+
+ Sorted sets excel at:
+ - **Leaderboards**: Game scores, user rankings.
+ - **Time-based data**: Recent events, scheduled tasks.
+ - **Priority queues**: Tasks with different priorities.
+ - **Range queries**: Find items within score ranges.
+
+ ### Set Operations
+
+ ``` ruby
+ require "async/redis"
+ require "date"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Track unique daily visitors:
+ 		today = Date.today.to_s
+ 		visitor_key = "visitors:#{today}"
+
+ 		# Add visitors (duplicates automatically ignored):
+ 		visitors = ["user:123", "user:456", "user:123", "user:789", "user:456"]
+ 		visitors.each do |visitor|
+ 			client.sadd(visitor_key, visitor)
+ 		end
+
+ 		# Get unique visitor count:
+ 		unique_count = client.scard(visitor_key)
+ 		puts "Unique visitors today: #{unique_count}"
+
+ 		# Check if specific user visited:
+ 		visited = client.sismember(visitor_key, "user:123")
+ 		puts "User 123 visited today: #{visited}"
+
+ 		# Get all unique visitors:
+ 		all_visitors = client.smembers(visitor_key)
+ 		puts "All visitors: #{all_visitors}"
+
+ 		# Set expiration for daily cleanup:
+ 		client.expire(visitor_key, 86400) # 24 hours
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
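+
+ Set operators answer cross-key questions directly; for example, intersection finds the members common to several sets. A brief sketch reusing the `client` above (key names are illustrative):
+
+ ``` ruby
+ # Visitors present on both days (intersection):
+ returning = client.sinter("visitors:2025-01-01", "visitors:2025-01-02")
+ puts "Returning visitors: #{returning}"
+
+ # Combined unique audience across both days (union):
+ combined = client.sunion("visitors:2025-01-01", "visitors:2025-01-02")
+ puts "Total unique visitors: #{combined.length}"
+ ```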
+
+ ### Scoring and Ranking
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Track user activity scores:
+ 		leaderboard_key = "leaderboard:weekly"
+
+ 		# Add user scores:
+ 		client.zadd(leaderboard_key, 150, "user:alice")
+ 		client.zadd(leaderboard_key, 200, "user:bob")
+ 		client.zadd(leaderboard_key, 175, "user:charlie")
+ 		client.zadd(leaderboard_key, 125, "user:david")
+
+ 		# Get top 3 users:
+ 		top_users = client.zrevrange(leaderboard_key, 0, 2, with_scores: true)
+ 		puts "Top 3 users this week:"
+ 		top_users.each_slice(2).with_index do |(user, score), index|
+ 			puts " #{index + 1}. #{user}: #{score.to_i} points"
+ 		end
+
+ 		# Get user's rank and score:
+ 		alice_rank = client.zrevrank(leaderboard_key, "user:alice")
+ 		alice_score = client.zscore(leaderboard_key, "user:alice")
+ 		puts "Alice: rank #{alice_rank + 1}, score #{alice_score.to_i}"
+
+ 		# Update scores:
+ 		client.zincrby(leaderboard_key, 25, "user:alice")
+ 		new_score = client.zscore(leaderboard_key, "user:alice")
+ 		puts "Alice's updated score: #{new_score.to_i}"
+
+ 		# Set weekly expiration:
+ 		client.expire(leaderboard_key, 604800) # 7 days
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```
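+
+ Sorted sets also make simple priority queues: the score is the priority and the lowest score is served first. A minimal sketch (names are illustrative; `ZPOPMIN` requires Redis 5+ and is issued here via the generic `call` interface):
+
+ ``` ruby
+ # Enqueue tasks with numeric priorities (lower = more urgent):
+ client.zadd("priority_queue", 1, "send_alert")
+ client.zadd("priority_queue", 5, "send_newsletter")
+ client.zadd("priority_queue", 3, "rebuild_index")
+
+ # Atomically pop the most urgent task:
+ task, priority = client.call("ZPOPMIN", "priority_queue")
+ puts "Next task: #{task} (priority #{priority})"
+ ```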
+
+ ### Time-Based Range Queries
+
+ ``` ruby
+ require "async/redis"
+
+ endpoint = Async::Redis.local_endpoint
+ client = Async::Redis::Client.new(endpoint)
+
+ Async do
+ 	begin
+ 		# Track user activity with timestamps:
+ 		activity_key = "activity:user:123"
+
+ 		# Add timestamped activities:
+ 		activities = [
+ 			{ action: "login", time: Time.now.to_f - 3600 },
+ 			{ action: "view_page", time: Time.now.to_f - 1800 },
+ 			{ action: "purchase", time: Time.now.to_f - 900 },
+ 			{ action: "logout", time: Time.now.to_f - 300 }
+ 		]
+
+ 		activities.each do |activity|
+ 			client.zadd(activity_key, activity[:time], activity[:action])
+ 		end
+
+ 		# Get activities from last hour:
+ 		one_hour_ago = Time.now.to_f - 3600
+ 		recent_activities = client.zrangebyscore(activity_key, one_hour_ago, "+inf")
+ 		puts "Recent activities: #{recent_activities}"
+
+ 		# Count activities in time range:
+ 		thirty_min_ago = Time.now.to_f - 1800
+ 		recent_count = client.zcount(activity_key, thirty_min_ago, "+inf")
+ 		puts "Activities in last 30 minutes: #{recent_count}"
+
+ 		# Clean up old activities:
+ 		two_hours_ago = Time.now.to_f - 7200
+ 		removed = client.zremrangebyscore(activity_key, "-inf", two_hours_ago)
+ 		puts "Removed #{removed} old activities"
+ 	ensure
+ 		client.close
+ 	end
+ end
+ ```