faye-redis-ng 1.0.5 → 1.0.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +76 -1
- data/README.md +52 -11
- data/lib/faye/redis/pubsub_coordinator.rb +14 -4
- data/lib/faye/redis/version.rb +1 -1
- data/lib/faye/redis.rb +129 -16
- metadata +1 -1
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 4a2f33a6f83547306e5a0e52a70e2e8e06a236703c5a56bffc7cf85a829d0a54
+  data.tar.gz: 0e62be0064f4307be4a87d94ddce9e424d7ee4dbad9a85fdf41c03c7ed5db854
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: f67fd292dd0bf0b9fb90a3af34fc7b8ae15627569090c303a06b58f89702124319a8ad96f75603cc38570c5881aa0f4ecaf0e0df39a43ad96ddb9483c3bac51e
+  data.tar.gz: 697a5b1bbd62ebd8f87da936ffc0aa3dc4cad84c9a010f4537dfd5e21c1ea1847ce0af757085bb3a4c7dc6f3a2952cf4c53e01d4a6f33e2fa4e714faffec61be
data/CHANGELOG.md
CHANGED
@@ -7,6 +7,79 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [1.0.7] - 2025-10-30
+
+### Fixed
+- **Critical: Publish Race Condition**: Fixed race condition in `publish` method where callback could be called multiple times
+  - Added `callback_called` flag to prevent duplicate callback invocations
+  - Properly track completion of all async operations before calling final callback
+  - Ensures `success` status is correctly aggregated from all operations
+  - **Impact**: Eliminates unreliable message delivery status in high-concurrency scenarios
+
+- **Critical: Thread Safety Issue**: Fixed thread safety issue in PubSubCoordinator message handling
+  - Changed `EventMachine.next_tick` to `EventMachine.schedule` for cross-thread safety
+  - Added reactor running check before scheduling
+  - Added error handling for subscriber callbacks
+  - **Impact**: Prevents undefined behavior when messages arrive from Redis pub/sub thread
+
+- **Message Deduplication**: Fixed duplicate message enqueue issue
+  - Local published messages were being enqueued twice (local + pub/sub echo)
+  - Added message ID tracking to filter out locally published messages from pub/sub
+  - Messages now include unique IDs for deduplication
+  - **Impact**: Eliminates duplicate messages in single-server deployments
+
+- **Batch Enqueue Logic**: Fixed `enqueue_messages_batch` to handle nil callbacks correctly
+  - Separated empty client list check from callback check
+  - Allows batch enqueue without callback (used by setup_message_routing)
+  - **Impact**: Fixes NoMethodError when enqueue is called without callback
+
+### Added
+- **Concurrency Test Suite**: Added comprehensive concurrency tests (spec/faye/redis_concurrency_spec.rb)
+  - Tests for callback guarantee (single invocation)
+  - Tests for concurrent publish operations
+  - Tests for multi-channel publishing
+  - Tests for error handling
+  - Stress test with 50 rapid publishes
+  - Thread safety tests
+
+### Technical Details
+**Publish Race Condition Fix**:
+- Before: Multiple async callbacks could decrement counter and call callback multiple times
+- After: Track completion with callback_called flag, ensure atomic callback invocation
+
+**Thread Safety Fix**:
+- Before: `EventMachine.next_tick` called from Redis subscriber thread (unsafe)
+- After: `EventMachine.schedule` safely queues work from any thread to EM reactor
+
+**Message Deduplication**:
+- Before: Message published locally → enqueued → published to Redis → received back → enqueued again
+- After: Track local message IDs, filter out self-published messages from pub/sub
+
+## [1.0.6] - 2025-10-30
+
+### Added
+- **Automatic Garbage Collection**: Implemented automatic GC timer that runs periodically to clean up expired clients and orphaned data
+  - New `gc_interval` configuration option (default: 60 seconds)
+  - Automatically starts when EventMachine is running
+  - Can be disabled by setting `gc_interval` to 0 or false
+  - Lazy initialization ensures timer starts even if engine is created before EventMachine starts
+  - Timer is properly stopped on disconnect to prevent resource leaks
+
+### Changed
+- **Improved User Experience**: No longer requires manual setup of periodic cleanup
+  - Memory leak prevention is now automatic by default
+  - Matches behavior of original faye-redis-ruby project
+  - Users can still manually call `cleanup_expired` if needed
+  - Custom GC schedules possible by disabling automatic GC
+
+### Technical Details
+The automatic GC timer:
+- Runs `cleanup_expired` every 60 seconds by default
+- Only starts when EventMachine reactor is running
+- Supports lazy initialization for engines created outside EM context
+- Properly handles cleanup on disconnect
+- Can be customized or disabled via `gc_interval` option
+
 ## [1.0.5] - 2025-10-30
 
 ### Fixed

@@ -126,7 +199,9 @@ For 100 subscribers receiving one message:
 ### Security
 - Client and message IDs now use `SecureRandom.uuid` instead of predictable time-based generation
 
-[Unreleased]: https://github.com/7a6163/faye-redis-ng/compare/v1.0.
+[Unreleased]: https://github.com/7a6163/faye-redis-ng/compare/v1.0.7...HEAD
+[1.0.7]: https://github.com/7a6163/faye-redis-ng/compare/v1.0.6...v1.0.7
+[1.0.6]: https://github.com/7a6163/faye-redis-ng/compare/v1.0.5...v1.0.6
 [1.0.5]: https://github.com/7a6163/faye-redis-ng/compare/v1.0.4...v1.0.5
 [1.0.4]: https://github.com/7a6163/faye-redis-ng/compare/v1.0.3...v1.0.4
 [1.0.3]: https://github.com/7a6163/faye-redis-ng/compare/v1.0.2...v1.0.3
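The Thread Safety Fix described above turns on how work crosses from the Redis subscriber thread into the EventMachine reactor. A minimal standalone sketch of that pattern (not code from the gem; a plain Ruby `Thread` stands in for the Redis subscriber thread):

```ruby
require 'eventmachine'

EM.run do
  # Stand-in for the Redis pub/sub subscriber thread.
  Thread.new do
    5.times do |i|
      sleep 0.1
      if EM.reactor_running?
        # EventMachine.schedule is safe to call from any thread:
        # the block is queued onto the reactor instead of running here.
        EM.schedule do
          puts "message #{i} handled on reactor thread? #{EM.reactor_thread?}"
        end
      else
        warn "reactor not running, dropping message #{i}"
      end
    end
    EM.schedule { EM.stop }
  end
end
```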
data/README.md
CHANGED
@@ -81,6 +81,9 @@ bayeux = Faye::RackAdapter.new(app, {
     client_timeout: 60,    # Client session timeout (seconds)
     message_ttl: 3600,     # Message TTL (seconds)
 
+    # Garbage collection
+    gc_interval: 60,       # Automatic GC interval (seconds), set to 0 or false to disable
+
     # Logging
     log_level: :info,      # Log level (:silent, :info, :debug)
 

@@ -236,11 +239,45 @@ The CI/CD pipeline will automatically:
 
 ## Memory Management
 
-### 
+### Automatic Garbage Collection
+
+**New in v1.0.6**: faye-redis-ng now includes automatic garbage collection that runs every 60 seconds by default. This automatically cleans up expired clients and orphaned subscription keys, preventing memory leaks without any manual intervention.
+
+```ruby
+bayeux = Faye::RackAdapter.new(app, {
+  mount: '/faye',
+  timeout: 25,
+  engine: {
+    type: Faye::Redis,
+    host: 'localhost',
+    port: 6379,
+    gc_interval: 60  # Run GC every 60 seconds (default)
+  }
+})
+```
+
+To customize the GC interval or disable it:
 
-
+```ruby
+engine: {
+  type: Faye::Redis,
+  host: 'localhost',
+  port: 6379,
+  gc_interval: 300  # Run GC every 5 minutes
+}
 
-
+# Or disable automatic GC
+engine: {
+  type: Faye::Redis,
+  host: 'localhost',
+  port: 6379,
+  gc_interval: 0  # Disabled - you'll need to call cleanup_expired manually
+}
+```
+
+### Manual Cleanup
+
+If you've disabled automatic GC, you can manually clean up expired clients:
 
 ```ruby
 # Get the engine instance

@@ -252,9 +289,9 @@ engine.cleanup_expired do |expired_count|
 end
 ```
 
-#### 
+#### Custom GC Schedule (Optional)
 
-
+If you need more control, you can disable automatic GC and implement your own schedule:
 
 ```ruby
 require 'eventmachine'

@@ -268,11 +305,12 @@ bayeux = Faye::RackAdapter.new(app, {
     type: Faye::Redis,
     host: 'localhost',
     port: 6379,
-    namespace: 'my-app'
+    namespace: 'my-app',
+    gc_interval: 0  # Disable automatic GC
   }
 })
 
-# 
+# Custom cleanup schedule - every 5 minutes
 EM.add_periodic_timer(300) do
   bayeux.get_engine.cleanup_expired do |count|
     puts "[#{Time.now}] Cleaned up #{count} expired clients" if count > 0

@@ -330,12 +368,15 @@ The `cleanup_expired` method removes:
 
 ### Memory Leak Prevention
 
-
+**v1.0.6+**: Automatic garbage collection is now enabled by default, preventing memory leaks from orphaned keys without any configuration needed.
+
+Without GC, abnormal client disconnections (crashes, network failures, etc.) can cause orphaned keys to accumulate:
 
-- **Before
-- **
+- **Before v1.0.5**: 10,000 orphaned clients × 5 channels = 50,000+ keys = 100-500 MB leaked
+- **v1.0.5**: Manual cleanup required via `cleanup_expired` method
+- **v1.0.6+**: Automatic GC runs every 60 seconds by default - no manual intervention needed
 
-
+The automatic GC ensures memory usage remains stable even with frequent client disconnections.
 
 ## Troubleshooting
 
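The key counts quoted in the Memory Leak Prevention section are easy to sanity-check against a running Redis. A small standalone sketch using the redis-rb client (the `faye*` match pattern is an assumption, since the engine's exact key layout is not shown in this diff; adjust it to your namespace):

```ruby
require 'redis'

redis = Redis.new(host: 'localhost', port: 6379)

# Count keys under the engine's namespace without blocking Redis (SCAN, not KEYS).
count = 0
redis.scan_each(match: 'faye*') { count += 1 }
puts "faye-related keys: #{count}"
```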
data/lib/faye/redis/pubsub_coordinator.rb
CHANGED

@@ -166,11 +166,21 @@ module Faye
       begin
         message = JSON.parse(message_json)
 
-        # Notify all subscribers
-        EventMachine.
-
-
+        # Notify all subscribers
+        # Use EventMachine.schedule to safely call from non-EM thread
+        # (handle_message is called from subscriber_thread, not EM reactor thread)
+        if EventMachine.reactor_running?
+          EventMachine.schedule do
+            @subscribers.dup.each do |subscriber|
+              begin
+                subscriber.call(channel, message)
+              rescue => e
+                log_error("Subscriber callback error for #{channel}: #{e.message}")
+              end
+            end
           end
+        else
+          log_error("Cannot handle message: EventMachine reactor not running")
         end
       rescue JSON::ParserError => e
         log_error("Failed to parse message from #{channel}: #{e.message}")
data/lib/faye/redis/version.rb
CHANGED
data/lib/faye/redis.rb
CHANGED
@@ -25,7 +25,8 @@ module Faye
       retry_delay: 1,
       client_timeout: 60,
       message_ttl: 3600,
-      namespace: 'faye'
+      namespace: 'faye',
+      gc_interval: 60  # Automatic garbage collection interval (seconds), set to 0 or false to disable
     }.freeze
 
     attr_reader :server, :options, :connection, :client_registry,

@@ -50,10 +51,16 @@ module Faye
 
       # Set up message routing
       setup_message_routing
+
+      # Start automatic garbage collection timer
+      start_gc_timer
     end
 
     # Create a new client
     def create_client(&callback)
+      # Ensure GC timer is started (lazy initialization)
+      ensure_gc_timer_started
+
       client_id = generate_client_id
       @client_registry.create(client_id) do |success|
         if success

@@ -98,34 +105,68 @@ module Faye
       channels = [channels] unless channels.is_a?(Array)
 
       begin
-
-
+        # Ensure message has an ID for deduplication
+        message = message.dup unless message.frozen?
+        message['id'] ||= generate_message_id
+
+        # Track this message as locally published
+        if @local_message_ids
+          if @local_message_ids_mutex
+            @local_message_ids_mutex.synchronize { @local_message_ids.add(message['id']) }
+          else
+            @local_message_ids.add(message['id'])
+          end
+        end
+
+        total_channels = channels.size
+        completed_channels = 0
+        callback_called = false
+        all_success = true
 
         channels.each do |channel|
           # Get subscribers and process in parallel
           @subscription_manager.get_subscribers(channel) do |client_ids|
-            #
+            # Track operations for this channel
+            pending_ops = 2 # pubsub + enqueue
+            channel_success = true
+            ops_completed = 0
+
+            complete_channel = lambda do
+              ops_completed += 1
+              if ops_completed == pending_ops
+                # This channel is complete
+                all_success &&= channel_success
+                completed_channels += 1
+
+                # Call final callback when all channels are done
+                if completed_channels == total_channels && !callback_called && callback
+                  callback_called = true
+                  EventMachine.next_tick { callback.call(all_success) }
+                end
+              end
+            end
+
+            # Publish to pub/sub
             @pubsub_coordinator.publish(channel, message) do |published|
-
+              channel_success &&= published
+              complete_channel.call
             end
 
-            # Enqueue for all subscribed clients
+            # Enqueue for all subscribed clients
            if client_ids.any?
              enqueue_messages_batch(client_ids, message) do |enqueued|
-
+                channel_success &&= enqueued
+                complete_channel.call
              end
-
-
-
-              remaining_operations -= 1
-              if remaining_operations == 0 && callback
-                EventMachine.next_tick { callback.call(success) }
+            else
+              # No clients, but still need to complete
+              complete_channel.call
            end
          end
        end
      rescue => e
        log_error("Failed to publish message to channels #{channels}: #{e.message}")
-        EventMachine.next_tick { callback.call(false) } if callback
+        EventMachine.next_tick { callback.call(false) } if callback && !callback_called
      end
    end

@@ -136,6 +177,9 @@ module Faye
 
     # Disconnect the engine
     def disconnect
+      # Stop GC timer if running
+      stop_gc_timer
+
       @pubsub_coordinator.disconnect
       @connection.disconnect
     end

@@ -159,9 +203,20 @@ module Faye
       SecureRandom.uuid
     end
 
+    def generate_message_id
+      SecureRandom.uuid
+    end
+
     # Batch enqueue messages to multiple clients using a single Redis pipeline
     def enqueue_messages_batch(client_ids, message, &callback)
-
+      # Handle empty client list
+      if client_ids.empty?
+        EventMachine.next_tick { callback.call(true) } if callback
+        return
+      end
+
+      # No callback provided, but still need to enqueue
+      # (setup_message_routing calls this without callback)
 
       message_json = message.to_json
       message_ttl = @options[:message_ttl] || 3600

@@ -242,10 +297,31 @@ module Faye
     end
 
     def setup_message_routing
+      # Track locally published message IDs to avoid duplicate enqueue
+      @local_message_ids = Set.new
+      @local_message_ids_mutex = Mutex.new if defined?(Mutex)
+
       # Subscribe to message events from other servers
       @pubsub_coordinator.on_message do |channel, message|
+        # Skip if this is a message we just published locally
+        # (Redis pub/sub echoes back messages to the publisher)
+        message_id = message['id']
+        is_local = false
+
+        if message_id
+          if @local_message_ids_mutex
+            @local_message_ids_mutex.synchronize do
+              is_local = @local_message_ids.delete(message_id)
+            end
+          else
+            is_local = @local_message_ids.delete(message_id)
+          end
+        end
+
+        next if is_local
+
+        # Enqueue for remote servers' messages only
         @subscription_manager.get_subscribers(channel) do |client_ids|
-          # Use batch enqueue for better performance
           enqueue_messages_batch(client_ids, message) if client_ids.any?
         end
       end

@@ -254,5 +330,42 @@ module Faye
     def log_error(message)
       @logger.error(message)
     end
+
+    # Start automatic garbage collection timer
+    def start_gc_timer
+      gc_interval = @options[:gc_interval]
+
+      # Skip if GC is disabled (0, false, or nil)
+      return if !gc_interval || gc_interval == 0
+
+      # Only start timer if EventMachine is running
+      return unless EventMachine.reactor_running?
+
+      @logger.info("Starting automatic GC timer with interval: #{gc_interval} seconds")
+
+      @gc_timer = EventMachine.add_periodic_timer(gc_interval) do
+        @logger.debug("Running automatic garbage collection")
+        cleanup_expired do |count|
+          @logger.debug("GC completed: #{count} expired clients cleaned") if count > 0
+        end
+      end
+    end
+
+    # Ensure GC timer is started (called lazily on first operation)
+    def ensure_gc_timer_started
+      return if @gc_timer # Already started
+      return if !@options[:gc_interval] || @options[:gc_interval] == 0 # Disabled
+
+      start_gc_timer
+    end
+
+    # Stop automatic garbage collection timer
+    def stop_gc_timer
+      if @gc_timer
+        EventMachine.cancel_timer(@gc_timer)
+        @gc_timer = nil
+        @logger.info("Stopped automatic GC timer")
+      end
+    end
   end
 end