postburner 1.0.0.pre.5 → 1.0.0.pre.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 38e2982c4478f99aa0d19c58ac7455cdf82ff1c3707a154d1dbd0c3a1546ceb8
4
- data.tar.gz: 0d17bb9da882d8e1cf2cce7c1cddd508ddbc7531c037dd56b7e0c6d34bd40f5b
3
+ metadata.gz: 1d37676412b662bd876f2d2d09ac57e09876e5f0d8a7186c37f4f71f3d40ad89
4
+ data.tar.gz: 7c9e1badc1628b5cab1edaa164e449d7a926e65198009773a702a3b6f49888cd
5
5
  SHA512:
6
- metadata.gz: 28421eb19f5e288e43e9d455364906dabe0ee3ab024a230efb9aca19842ac3677c4b602788d8f0f7c67253f64e1b382d12a4f23756ff01ef2e43a5b51d478d55
7
- data.tar.gz: f13d5452e5394c3e6d8b9468d8863344f35049972fbe5eac6c20cbede12ddc7e9eb2a17538bccb626a7a00a744c567d33e59558f5e7309ab3ecf775bea3db5b9
6
+ metadata.gz: b7bafbb9a871ce492a3f36537b237bbcb0a38e5291065384d2d2c2003bde2ebd078e9f2fd5455023bd9d81b38701a391b09d4dbeb336bfb7759e8c550497e5e9
7
+ data.tar.gz: 9515f40b1fe1e1782dd4a9fc6fd0c5e78e6a30c6678f656d6534eb2344360ec2c03ca745a0aedf7e3e3ec676a1d9edd092d8ff0db5b50966d24ade078496d440
data/README.md CHANGED
@@ -1333,7 +1333,25 @@ job.attempts # Array of attempt timestamps
1333
1333
 
1334
1334
  ## Beanstalkd Integration
1335
1335
 
1336
- Direct access to Beanstalkd for advanced operations:
1336
+ Postburner uses [Beaneater](https://github.com/beanstalkd/beaneater) as the Ruby client for Beanstalkd. You can access the underlying Beaneater connection directly for advanced operations.
1337
+
1338
+ ### Connection Methods
1339
+
1340
+ ```ruby
1341
+ # Get a cached Beaneater connection (returns Beaneater instance)
1342
+ conn = Postburner.connection
1343
+ conn.tubes.to_a # List all tubes
1344
+ conn.stats # Server statistics
1345
+
1346
+ # Block form - yields connection, recommended for one-off operations
1347
+ Postburner.connected do |conn|
1348
+ conn.tubes.to_a # List all tubes
1349
+ conn.tubes['postburner.production.critical'].stats
1350
+ conn.tubes['postburner.production.critical'].kick(10) # Kick 10 buried jobs
1351
+ end
1352
+ ```
1353
+
1354
+ ### Job-Level Access
1337
1355
 
1338
1356
  ```ruby
1339
1357
  # Get Beanstalkd job ID
@@ -1342,16 +1360,114 @@ job.bkid # => 12345
1342
1360
  # Access Beaneater job object
1343
1361
  job.bk.stats
1344
1362
  # => {"id"=>12345, "tube"=>"critical", "state"=>"ready", ...}
1363
+ ```
1345
1364
 
1346
- # Connection management
1365
+ ### Beaneater API
1366
+
1367
+ The connection object is a standard Beaneater instance. See the [Beaneater documentation](https://github.com/beanstalkd/beaneater) for full API details:
1368
+
1369
+ ```ruby
1347
1370
  Postburner.connected do |conn|
1348
- conn.tubes.to_a # List all tubes
1349
- conn.tubes['postburner.production.critical'].stats
1350
- conn.tubes['postburner.production.critical'].kick(10) # Kick 10 buried jobs
1371
+ # Tubes
1372
+ conn.tubes.to_a # List all tube names
1373
+ conn.tubes['my-tube'].stats # Tube statistics
1374
+ conn.tubes['my-tube'].peek(:ready) # Peek at next ready job
1375
+ conn.tubes['my-tube'].kick(10) # Kick 10 buried jobs
1376
+
1377
+ # Server
1378
+ conn.stats # Server statistics
1379
+
1380
+ # Jobs
1381
+ conn.jobs.find(12345) # Find job by ID
1351
1382
  end
1352
1383
  ```
1353
1384
 
1354
1385
 
1386
+ ### Tube Statistics and Management
1387
+
1388
+ Postburner provides methods to inspect and manage Beanstalkd tubes:
1389
+
1390
+ **View tube statistics:**
1391
+
1392
+ ```ruby
1393
+ # View all tubes on the Beanstalkd server
1394
+ stats = Postburner.stats
1395
+ # => {
1396
+ # tubes: [
1397
+ # { name: "postburner.production.default", ready: 10, delayed: 5, buried: 0, reserved: 2, total: 17 },
1398
+ # { name: "postburner.production.critical", ready: 0, delayed: 0, buried: 0, reserved: 1, total: 1 }
1399
+ # ],
1400
+ # totals: { ready: 10, delayed: 5, buried: 0, reserved: 3, total: 18 }
1401
+ # }
1402
+
1403
+ # View specific tubes only
1404
+ stats = Postburner.stats(['postburner.production.critical'])
1405
+ # => { tubes: [...], totals: {...} }
1406
+ ```
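
The returned hash is plain Ruby data, so it can be filtered directly. As a small illustrative sketch (not part of the packaged README), the per-tube entries can be scanned for buried jobs, assuming the `{ tubes: [...], totals: {...} }` shape shown above:

```ruby
# Illustrative sketch: report tubes that currently hold buried jobs,
# using the stats shape documented above.
Postburner.stats[:tubes]
  .select { |tube| tube[:buried] > 0 }
  .each { |tube| puts "#{tube[:name]}: #{tube[:buried]} buried" }
```
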
1407
+
1408
+ **Clear jobs from tubes:**
1409
+
1410
+ For safety, `clear_jobs!` requires you to explicitly specify which tubes to clear. This prevents accidentally clearing tubes from other applications sharing the same Beanstalkd server.
1411
+
1412
+ ```ruby
1413
+ # Collect stats only (no clearing)
1414
+ result = Postburner.clear_jobs!
1415
+ # => { tubes: [...], totals: {...}, cleared: false }
1416
+
1417
+ # Clear specific tubes (must be in config/postburner.yml)
1418
+ result = Postburner.clear_jobs!(['postburner.production.default'])
1419
+ # => { tubes: [...], totals: {...}, cleared: true }
1420
+
1421
+ # Pretty-print JSON output
1422
+ Postburner.clear_jobs!(['postburner.production.default'], silent: false)
1423
+ # Outputs formatted JSON to stdout
1424
+
1425
+ # Silent mode (no output, just return data)
1426
+ result = Postburner.clear_jobs!(['postburner.production.default'], silent: true)
1427
+ ```
1428
+
1429
+ **Safety validation:**
1430
+
1431
+ Only tubes defined in your loaded configuration can be cleared. This prevents mistakes in multi-tenant Beanstalkd environments:
1432
+
1433
+ ```ruby
1434
+ # Error: trying to clear tube not in config
1435
+ Postburner.clear_jobs!(['postburner.production.other-app'])
1436
+ # => ArgumentError: Cannot clear tubes not in configuration.
1437
+ # Invalid tubes: postburner.production.other-app
1438
+ # Configured tubes: postburner.production.default, postburner.production.critical
1439
+ ```
1440
+
1441
+ **Shortcut using watched_tube_names:**
1442
+
1443
+ Clear all configured tubes at once:
1444
+
1445
+ ```ruby
1446
+ # Get all tubes from current configuration
1447
+ watched_tubes = Postburner.watched_tube_names
1448
+ # => ["postburner.production.default", "postburner.production.critical", "postburner.production.mailers"]
1449
+
1450
+ # Clear all configured tubes
1451
+ Postburner.clear_jobs!(watched_tubes, silent: true)
1452
+ # or
1453
+ Postburner.clear_jobs!(Postburner.watched_tube_names, silent: true)
1454
+ ```
1455
+
1456
+ **Low-level Connection API:**
1457
+
1458
+ For programmatic use without output formatting, use `Connection#clear_tubes!`:
1459
+
1460
+ ```ruby
1461
+ Postburner.connected do |conn|
1462
+ # Returns data only (no puts)
1463
+ result = conn.clear_tubes!(Postburner.watched_tube_names)
1464
+ # => { tubes: [...], totals: {...}, cleared: true }
1465
+
1466
+ # Same validation - must be in configuration
1467
+ result = conn.clear_tubes!(['postburner.production.default'])
1468
+ end
1469
+ ```
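
Because `Postburner.stats`, `Postburner.clear_jobs!`, and `Connection#clear_tubes!` all operate on the same configured tube names, they compose naturally. A hedged sketch (not from the packaged README) that clears the configured tubes only when none of them hold buried jobs; the zero-buried policy is just an example choice:

```ruby
# Illustrative sketch: guard a cleanup so buried jobs are never discarded.
# Uses only the APIs documented above; the policy itself is an assumption.
tubes = Postburner.watched_tube_names
stats = Postburner.stats(tubes)

if stats[:totals][:buried].zero?
  Postburner.clear_jobs!(tubes, silent: true)
else
  puts "Skipping clear: #{stats[:totals][:buried]} buried job(s) remain"
end
```
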
1470
+
1355
1471
  ## Web UI
1356
1472
 
1357
1473
  Mount the inspection interface:
@@ -1472,6 +1588,15 @@ There is a CLAUDE.md file for guidance when using Claude Code. Please use it or
1472
1588
 
1473
1589
 We encourage AI tools, but do not vibe code, as the code must look like it was written by a human. Code that contains AI agent idioms will be rejected. Code that doesn't follow the project conventions will be rejected.
1474
1590
 
1591
+
1592
+ ### Testing
1593
+
1594
+ ```bash
1595
+ bundle install
1596
+ bundle exec rails test # requires beanstalkd listening on port 11300 (the default)
1597
+ bundle exec rails app:postburner:work # if you want to run the worker
1598
+ ```
1599
+
1475
1600
  ## License
1476
1601
 
1477
1602
  The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
@@ -159,9 +159,8 @@ module Postburner
159
159
  #debugger
160
160
 
161
161
  # Response must be a hash with an :id key (value can be nil)
162
- # Backburner returns symbol keys
163
162
  unless response.is_a?(Hash) && response.key?(:id)
164
- raise MalformedResponse, "Missing :id key in response: #{response.inspect}"
163
+ raise Postburner::Job::MalformedResponse, "Missing :id key in response: #{response.inspect}"
165
164
  end
166
165
 
167
166
  persist_metadata!(bkid: response[:id])
@@ -10,37 +10,48 @@ module Postburner
10
10
  # @example Programmatic configuration
11
11
  # Postburner.configure do |config|
12
12
  # config.beanstalk_url = 'beanstalk://localhost:11300'
13
- # config.worker_type = :threads_on_fork
14
13
  # config.logger = Rails.logger
14
+ # config.worker_config = { name: 'default', queues: ['default'], forks: 2, threads: 10 }
15
15
  # end
16
16
  #
17
17
  # @example Loading from YAML
18
- # config = Postburner::Configuration.load_yaml('config/postburner.yml', 'production')
18
+ # config = Postburner::Configuration.load_yaml('config/postburner.yml', 'production', 'imports')
19
19
  #
20
20
  class Configuration
21
- attr_accessor :beanstalk_url, :logger, :queues, :default_queue, :default_priority, :default_ttr, :default_threads, :default_forks, :default_gc_limit
21
+ # Global settings
22
+ attr_accessor :beanstalk_url, :logger, :default_queue, :default_priority, :default_ttr
23
+
24
+ # Worker-specific settings (loaded for a single worker)
25
+ attr_accessor :worker_config
22
26
 
23
27
  # @param options [Hash] Configuration options
24
28
  # @option options [String] :beanstalk_url Beanstalkd URL (default: ENV['BEANSTALK_URL'] or localhost)
25
29
  # @option options [Logger] :logger Logger instance (default: Rails.logger)
26
- # @option options [Hash] :queues Queue configurations
27
30
  # @option options [String] :default_queue Default queue name (default: 'default')
28
31
  # @option options [Integer] :default_priority Default job priority (default: 65536, lower = higher priority)
29
32
  # @option options [Integer] :default_ttr Default time-to-run in seconds (default: 300)
30
- # @option options [Integer] :default_threads Default thread count per fork (default: 1)
31
- # @option options [Integer] :default_forks Default fork count (default: 0, single process)
32
- # @option options [Integer] :default_gc_limit Default GC limit for worker restarts (default: nil, no limit)
33
+ # @option options [Hash] :worker_config Worker configuration hash with keys:
34
+ # - :name [String] Worker name
35
+ # - :queues [Array<String>] Queue/tube names to process
36
+ # - :forks [Integer] Number of forked processes (0 = single process mode)
37
+ # - :threads [Integer] Number of threads per fork
38
+ # - :gc_limit [Integer, nil] Jobs to process before restart (nil = unlimited)
39
+ # - :timeout [Integer] Reserve command timeout in seconds (1-10, default: 3)
33
40
  #
34
41
  def initialize(options = {})
35
42
  @beanstalk_url = options[:beanstalk_url] || ENV['BEANSTALK_URL'] || 'beanstalk://localhost:11300'
36
43
  @logger = options[:logger] || (defined?(Rails) ? Rails.logger : Logger.new(STDOUT))
37
- @queues = options[:queues] || { 'default' => {} }
38
44
  @default_queue = options[:default_queue] || 'default'
39
45
  @default_priority = options[:default_priority] || 65536
40
46
  @default_ttr = options[:default_ttr] || 300
41
- @default_threads = options[:default_threads] || 1
42
- @default_forks = options[:default_forks] || 0
43
- @default_gc_limit = options[:default_gc_limit]
47
+ @worker_config = options[:worker_config] || {
48
+ name: 'default',
49
+ queues: ['default'],
50
+ forks: 0,
51
+ threads: 1,
52
+ gc_limit: nil,
53
+ timeout: 3
54
+ }
44
55
  end
45
56
 
46
57
  # Loads configuration from a YAML file.
@@ -68,30 +79,31 @@ module Postburner
68
79
  # beanstalk_url: beanstalk://localhost:11300
69
80
  # default_priority: 131072 # change default priority from 65536 to 131072
70
81
  #
71
- # production: # <- environment config, i.e. defaults, NOT worker config
82
+ # production: # <- environment config, i.e. defaults
72
83
  # <<: *default
73
84
  # default_forks: 2
74
85
  # default_threads: 10
75
86
  # default_gc_limit: 5000
76
87
  # default_ttr: 300
77
- # workers: # <- worker config, i.e. overrides, NOT environment config
78
- # imports: # <- worker "group" name
88
+ # workers: # <- worker configs
89
+ # imports: # <- worker name
90
+ # timeout: 3 # Reserve timeout in seconds (1-10, default: 3)
91
+ # # Lower values enable faster graceful shutdowns
79
92
  # forks: 4 # Overrides default_forks
80
93
  # threads: 1 # Overrides default_threads
81
94
  # gc_limit: 500 # Overrides default_gc_limit
82
- # # ttr: 60 # Use default from production, i.e. 300 because not set
83
95
  # queues:
84
96
  # - imports
85
97
  # - data_processing
86
98
  #
87
99
  def self.load_yaml(path, env = 'development', worker_name = nil)
88
100
  yaml = YAML.load_file(path, aliases: true)
89
- # env_defaults = top-level environment config (development:, production:, etc.)
90
- env_defaults = yaml[env.to_s] || yaml[env.to_sym]
101
+ # env_config = top-level environment config (development:, production:, etc.)
102
+ env_config = yaml[env.to_s] || yaml[env.to_sym]
91
103
 
92
- raise ArgumentError, "Environment '#{env}' not found in #{path}" unless env_defaults
104
+ raise ArgumentError, "Environment '#{env}' not found in #{path}" unless env_config
93
105
 
94
- workers = env_defaults['workers']
106
+ workers = env_config['workers']
95
107
  raise ArgumentError, "No 'workers:' section found in #{path} for environment '#{env}'" unless workers
96
108
 
97
109
  # Auto-select single worker or validate worker_name
@@ -99,63 +111,54 @@ module Postburner
99
111
  if workers.size == 1
100
112
  worker_name = workers.keys.first
101
113
  else
102
- raise ArgumentError, "Configuration has multiple workers, but --worker not specified\nAvailable workers: #{workers.keys.join(', ')}\nUsage: bin/postburner --worker <name>"
114
+ raise ArgumentError, <<~ERROR
115
+ Configuration has multiple workers, but --worker not specified
116
+ Available workers: #{workers.keys.join(', ')}
117
+ Usage: bin/postburner --worker <name>
118
+ ERROR
103
119
  end
104
120
  else
105
121
  unless workers.key?(worker_name)
106
- raise ArgumentError, "Worker '#{worker_name}' not found in #{path}\nAvailable workers: #{workers.keys.join(', ')}"
122
+ raise ArgumentError, <<~ERROR
123
+ Worker '#{worker_name}' not found in #{path}
124
+ Available workers: #{workers.keys.join(', ')}
125
+ ERROR
107
126
  end
108
127
  end
109
128
 
110
- # worker_config = specific worker configuration (workers: imports:)
111
- worker_config = workers[worker_name]
112
-
113
- # Convert queue array to hash format (queues no longer have per-queue config)
114
- queue_list = worker_config['queues'] || []
115
- queues_hash = {}
116
- queue_list.each do |queue_name|
117
- queues_hash[queue_name] = {} # Empty hash - queues run in worker pool
118
- end
129
+ # worker_yaml = specific worker configuration from YAML (workers: imports:)
130
+ worker_yaml = workers[worker_name]
131
+
132
+ # Build worker_config hash - worker-level overrides env-level defaults
133
+ worker_config = {
134
+ name: worker_name,
135
+ queues: worker_yaml['queues'] || ['default'],
136
+ forks: worker_yaml['forks'] || env_config['default_forks'] || 0,
137
+ threads: worker_yaml['threads'] || env_config['default_threads'] || 1,
138
+ gc_limit: worker_yaml['gc_limit'] || env_config['default_gc_limit'],
139
+ timeout: worker_yaml['timeout'] || 3
140
+ }
119
141
 
120
- # Cascade: worker-level overrides env-level defaults
121
- # Worker uses: forks, threads, gc_limit, ttr, priority (NO default_ prefix)
122
- # Env uses: default_forks, default_threads, etc. (WITH default_ prefix)
123
142
  options = {
124
- beanstalk_url: env_defaults['beanstalk_url'],
125
- queues: queues_hash,
126
- default_queue: worker_config['default_queue'] || env_defaults['default_queue'],
127
- default_priority: worker_config['priority'] || env_defaults['default_priority'],
128
- default_ttr: worker_config['ttr'] || env_defaults['default_ttr'],
129
- default_threads: worker_config['threads'] || env_defaults['default_threads'],
130
- default_forks: worker_config['forks'] || env_defaults['default_forks'],
131
- default_gc_limit: worker_config['gc_limit'] || env_defaults['default_gc_limit']
143
+ beanstalk_url: env_config['beanstalk_url'],
144
+ default_queue: env_config['default_queue'],
145
+ default_priority: env_config['default_priority'],
146
+ default_ttr: env_config['default_ttr'],
147
+ worker_config: worker_config
132
148
  }
133
149
 
134
150
  new(options)
135
151
  end
136
152
 
137
- # Returns queue configuration for a specific queue name.
138
- #
139
- # @param queue_name [String, Symbol] Name of the queue
140
- #
141
- # @return [Hash] Queue configuration with threads, gc_limit, etc.
142
- #
143
- # @example
144
- # config.queue_config('critical') # => { threads: 1, gc_limit: 100 }
145
- #
146
- def queue_config(queue_name)
147
- @queues[queue_name.to_s] || @queues[queue_name.to_sym] || {}
148
- end
149
-
150
- # Returns array of all configured queue names.
153
+ # Returns array of queue names from worker config.
151
154
  #
152
155
  # @return [Array<String>] Queue names
153
156
  #
154
157
  # @example
155
- # config.queue_names # => ['default', 'critical', 'mailers']
158
+ # config.queue_names # => ['imports', 'data_processing']
156
159
  #
157
160
  def queue_names
158
- @queues.keys.map(&:to_s)
161
+ @worker_config[:queues].map(&:to_s)
159
162
  end
160
163
 
161
164
  # Expands queue name to full tube name with environment prefix.
@@ -188,7 +191,6 @@ module Postburner
188
191
  ].compact.join('.')
189
192
  end
190
193
 
191
-
192
194
  # Returns array of expanded tube names with environment prefix.
193
195
  #
194
196
  # @param env [String, Symbol, nil] Environment name (defaults to Rails.env or 'development')
@@ -196,7 +198,8 @@ module Postburner
196
198
  # @return [Array<String>] Array of expanded tube names
197
199
  #
198
200
  # @example
199
- # config.expanded_tube_names('production') # => ['postburner.production.default', 'postburner.production.critical']
201
+ # config.expanded_tube_names('production') # => ['postburner.production.imports', 'postburner.production.data_processing']
202
+ #
200
203
  def expanded_tube_names(env = nil)
201
204
  queue_names.map { |q| expand_tube_name(q, env) }
202
205
  end
@@ -220,7 +223,7 @@ module Postburner
220
223
  # @example
221
224
  # Postburner.configure do |config|
222
225
  # config.beanstalk_url = 'beanstalk://localhost:11300'
223
- # config.worker_type = :threads_on_fork
226
+ # config.worker_config = { name: 'default', queues: ['default'], forks: 2, threads: 10 }
224
227
  # end
225
228
  #
226
229
  def self.configure
@@ -88,6 +88,68 @@ module Postburner
88
88
  @pool = nil
89
89
  end
90
90
 
91
+ # Clears jobs from specified tubes or collects stats for all tubes.
92
+ #
93
+ # Low-level method that returns data only (no output to stdout).
94
+ # Delegates to Postburner.stats for collecting statistics.
95
+ # For user-facing output, use Postburner.clear_jobs! instead.
96
+ #
97
+ # SAFETY: Only allows clearing tubes that are defined in the loaded
98
+ # configuration (watched_tube_names). This prevents accidentally clearing
99
+ # tubes from other applications or environments.
100
+ #
101
+ # @param tube_names [Array<String>, nil] Array of tube names to clear, or nil to only collect stats
102
+ #
103
+ # @return [Hash] Statistics and results with keys:
104
+ # - tubes: Array of hashes with per-tube stats
105
+ # - totals: Hash with aggregated counts across all tubes
106
+ # - cleared: Boolean indicating if tubes were actually cleared
107
+ #
108
+ # @raise [ArgumentError] if tube_names contains tubes not in watched_tube_names
109
+ #
110
+ # @example Collect stats only (no clearing)
111
+ # result = conn.clear_tubes!
112
+ # result[:totals][:total] # => 42
113
+ #
114
+ # @example Clear configured tubes only
115
+ # result = conn.clear_tubes!(Postburner.watched_tube_names)
116
+ # result[:cleared] # => true
117
+ #
118
+ # @example Invalid tube raises error
119
+ # conn.clear_tubes!(['random-tube'])
120
+ # # => ArgumentError: Cannot clear tubes not in configuration
121
+ #
122
+ def clear_tubes!(tube_names = nil)
123
+ ensure_connected!
124
+
125
+ # Validate that tubes to clear are in the loaded configuration
126
+ if tube_names&.any?
127
+ watched = Postburner.watched_tube_names
128
+ invalid_tubes = tube_names - watched
129
+
130
+ if invalid_tubes.any?
131
+ raise ArgumentError, <<~ERROR
132
+ Cannot clear tubes not in configuration.
133
+ Invalid tubes: #{invalid_tubes.join(', ')}
134
+ Configured tubes: #{watched.join(', ')}
135
+ ERROR
136
+ end
137
+ end
138
+
139
+ # Get stats using Postburner.stats
140
+ result = Postburner.stats(tube_names)
141
+ result[:cleared] = tube_names&.any? ? true : false
142
+
143
+ # Actually clear if tube names were provided and validated
144
+ if tube_names&.any?
145
+ tube_names.each do |tube_name|
146
+ tubes[tube_name].clear
147
+ end
148
+ end
149
+
150
+ result
151
+ end
152
+
91
153
  private
92
154
 
93
155
  # Establishes connection to Beanstalkd.
@@ -87,7 +87,8 @@ module Postburner
87
87
  config.logger.info "[Postburner] Environment: #{options[:env]}"
88
88
  config.logger.info "[Postburner] Worker: #{options[:worker] || '(auto-selected)'}" if options[:worker] || options[:queues].nil?
89
89
  config.logger.info "[Postburner] Queues: #{config.queue_names.join(', ')}"
90
- config.logger.info "[Postburner] Defaults: forks=#{config.default_forks}, threads=#{config.default_threads}, gc_limit=#{config.default_gc_limit || 'none'}"
90
+ wc = config.worker_config
91
+ config.logger.info "[Postburner] Worker config: forks=#{wc[:forks]}, threads=#{wc[:threads]}, gc_limit=#{wc[:gc_limit] || 'none'}, timeout=#{wc[:timeout]}s"
91
92
  end
92
93
 
93
94
  # Returns root directory for config file resolution.
@@ -64,8 +64,10 @@ module Postburner
64
64
  #
65
65
  def travel_to(time, &block)
66
66
  unless defined?(ActiveSupport::Testing::TimeHelpers)
67
- raise "ActiveSupport::Testing::TimeHelpers not available. " \
68
- "Postburner::TimeHelpers requires Rails testing helpers for time travel."
67
+ raise <<~ERROR
68
+ ActiveSupport::Testing::TimeHelpers not available.
69
+ Postburner::TimeHelpers requires Rails testing helpers for time travel.
70
+ ERROR
69
71
  end
70
72
 
71
73
  helper = Object.new.extend(ActiveSupport::Testing::TimeHelpers)
@@ -1,3 +1,3 @@
1
1
  module Postburner
2
- VERSION = '1.0.0.pre.5'
2
+ VERSION = '1.0.0.pre.7'
3
3
  end
@@ -4,7 +4,7 @@ require 'concurrent'
4
4
 
5
5
  module Postburner
6
6
  module Workers
7
- # Puma-style worker with configurable forks and threads per queue.
7
+ # Puma-style worker with configurable forks and threads.
8
8
  #
9
9
  # This is the universal Postburner worker that scales from development
10
10
  # to production using forks and threads configuration. Just like Puma:
@@ -16,27 +16,18 @@ module Postburner
16
16
  # ### Single Process Mode (forks: 0)
17
17
  # ```
18
18
  # Main Process
19
- # ├─ Queue 'default' Thread Pool (10 threads)
20
- # ├─ Queue 'critical' Thread Pool (1 thread)
21
- # └─ Queue 'mailers' Thread Pool (5 threads)
19
+ # └─ Thread Pool (N threads watching all queues)
22
20
  # ```
23
21
  #
24
22
  # ### Multi-Process Mode (forks: 1+)
25
23
  # ```
26
24
  # Parent Process
27
- # ├─ Fork 0 (queue: critical)
28
- # │ └─ Thread 1
29
- # ├─ Fork 0 (queue: default)
30
- # │ ├─ Thread 1-10
31
- # ├─ Fork 1 (queue: default) # Puma-style: multiple forks of same queue
32
- # │ ├─ Thread 1-10
33
- # ├─ Fork 2 (queue: default)
34
- # │ └─ Thread 1-10
35
- # ├─ Fork 3 (queue: default)
36
- # │ └─ Thread 1-10
37
- # │ └─ Total: 40 concurrent jobs for 'default' (4 forks × 10 threads)
38
- # └─ Fork 0 (queue: mailers)
39
- # └─ Thread 1-5
25
+ # ├─ Fork 0
26
+ # │ └─ Thread Pool (N threads watching all queues)
27
+ # ├─ Fork 1
28
+ # │ └─ Thread Pool (N threads watching all queues)
29
+ # └─ Fork 2
30
+ # └─ Thread Pool (N threads watching all queues)
40
31
  # ```
41
32
  #
42
33
  # ## Scaling Strategy
@@ -60,38 +51,25 @@ module Postburner
60
51
  # forks: 4
61
52
  # threads: 10
62
53
  # ```
63
- # 4 processes × 10 threads = 40 concurrent jobs per queue
54
+ # 4 processes × 10 threads = 40 concurrent jobs
64
55
  #
65
56
  # ## Configuration
66
57
  #
67
58
  # @example Development (single-threaded)
68
- # development: # <- environment config, i.e. defaults
69
- # default_forks: 0
70
- # default_threads: 1
71
- # workers: # <- worker config, i.e. overrides
59
+ # development:
60
+ # workers:
72
61
  # default:
62
+ # forks: 0
63
+ # threads: 1
73
64
  # queues:
74
65
  # - default
75
66
  # - mailers
76
67
  #
77
- # @example Staging (multi-threaded, single process)
78
- # staging: # <- environment config, i.e. defaults
79
- # default_forks: 0
80
- # default_threads: 10
81
- # default_gc_limit: 5000
82
- # workers: # <- worker config, i.e. overrides
83
- # default:
84
- # queues:
85
- # - critical
86
- # - default
87
- # - mailers
88
- #
89
- # @example Production (Puma-style: forks × threads with worker overrides)
90
- # production: # <- environment config, i.e. defaults
68
+ # @example Production (Puma-style: forks × threads)
69
+ # production:
91
70
  # default_forks: 2
92
71
  # default_threads: 10
93
- # default_gc_limit: 5000
94
- # workers: # <- worker config, i.e. overrides
72
+ # workers:
95
73
  # default:
96
74
  # forks: 4 # Overrides default_forks
97
75
  # threads: 10 # Overrides default_threads
@@ -109,12 +87,13 @@ module Postburner
109
87
  # @return [void]
110
88
  #
111
89
  def start
112
- logger.info "[Postburner::Worker] Starting..."
90
+ logger.info "[Postburner::Worker] Starting worker '#{worker_config[:name]}'..."
113
91
  logger.info "[Postburner::Worker] Queues: #{config.queue_names.join(', ')}"
92
+ logger.info "[Postburner::Worker] Config: #{worker_config[:forks]} forks, #{worker_config[:threads]} threads, gc_limit: #{worker_config[:gc_limit] || 'unlimited'}, timeout: #{worker_config[:timeout]}s"
114
93
  logger.info "[Postburner] #{config.beanstalk_url} watching tubes: #{config.expanded_tube_names.join(', ')}"
115
94
 
116
95
  # Detect mode based on fork configuration
117
- if using_forks?
96
+ if worker_config[:forks] > 0
118
97
  start_forked_mode
119
98
  else
120
99
  start_single_process_mode
@@ -123,21 +102,17 @@ module Postburner
123
102
 
124
103
  private
125
104
 
126
- # Checks if any queue is configured to use forks.
105
+ # Returns the worker configuration hash.
127
106
  #
128
- # @return [Boolean] true if any queue has forks > 0
107
+ # @return [Hash] Worker config with :name, :queues, :forks, :threads, :gc_limit, :timeout
129
108
  #
130
- def using_forks?
131
- config.queue_names.any? do |queue_name|
132
- queue_config = config.queue_config(queue_name)
133
- fork_count = queue_config['forks'] || queue_config[:forks] || config.default_forks
134
- fork_count > 0
135
- end
109
+ def worker_config
110
+ config.worker_config
136
111
  end
137
112
 
138
113
  # Starts worker in single-process mode (forks: 0).
139
114
  #
140
- # Creates thread pools for each queue in the main process.
115
+ # Creates a thread pool that watches all configured queues.
141
116
  # Suitable for development and moderate concurrency needs.
142
117
  #
143
118
  # @return [void]
@@ -147,12 +122,17 @@ module Postburner
147
122
 
148
123
  # Track total jobs processed across all threads
149
124
  @jobs_processed = Concurrent::AtomicFixnum.new(0)
150
- @gc_limit = config.default_gc_limit
125
+ @gc_limit = worker_config[:gc_limit]
126
+
127
+ # Create thread pool
128
+ thread_count = worker_config[:threads]
129
+ @pool = Concurrent::FixedThreadPool.new(thread_count)
151
130
 
152
- # Create thread pools for each queue
153
- @pools = {}
154
- config.queue_names.each do |queue_name|
155
- spawn_queue_threads(queue_name)
131
+ # Spawn worker threads
132
+ thread_count.times do
133
+ @pool.post do
134
+ process_jobs
135
+ end
156
136
  end
157
137
 
158
138
  # Monitor for shutdown or GC limit
@@ -160,9 +140,10 @@ module Postburner
160
140
  sleep 0.5
161
141
  end
162
142
 
163
- # Shutdown pools gracefully
143
+ # Shutdown pool gracefully
164
144
  logger.info "[Postburner::Worker] Shutting down..."
165
- shutdown_pools
145
+ @pool.shutdown
146
+ @pool.wait_for_termination(30)
166
147
 
167
148
  if @gc_limit && @jobs_processed.value >= @gc_limit
168
149
  logger.info "[Postburner::Worker] Reached GC limit (#{@jobs_processed.value} jobs), exiting for restart..."
@@ -172,49 +153,24 @@ module Postburner
172
153
  end
173
154
  end
174
155
 
175
- # Spawns thread pool for a specific queue in single-process mode.
176
- #
177
- # @param queue_name [String] Name of the queue to process
178
- #
179
- # @return [void]
180
- #
181
- def spawn_queue_threads(queue_name)
182
- queue_config = config.queue_config(queue_name)
183
- thread_count = queue_config['threads'] || queue_config[:threads] || config.default_threads
184
-
185
- logger.info "[Postburner::Worker] Queue '#{queue_name}': #{thread_count} threads"
186
-
187
- # Create thread pool
188
- pool = Concurrent::FixedThreadPool.new(thread_count)
189
- @pools[queue_name] = pool
190
-
191
- # Spawn worker threads
192
- thread_count.times do
193
- pool.post do
194
- process_jobs_in_single_process(queue_name)
195
- end
196
- end
197
- end
198
-
199
- # Processes jobs in a single thread (single-process mode).
156
+ # Processes jobs in a single thread.
200
157
  #
201
158
  # Each thread has its own Beanstalkd connection and reserves jobs
202
- # from the specified queue.
203
- #
204
- # @param queue_name [String] Name of the queue to process
159
+ # from all configured queues.
205
160
  #
206
161
  # @return [void]
207
162
  #
208
- def process_jobs_in_single_process(queue_name)
163
+ def process_jobs
209
164
  connection = Postburner::Connection.new
165
+ timeout = worker_config[:timeout]
210
166
 
211
- # Watch only this queue
212
- watch_queues(connection, queue_name)
167
+ # Watch all configured queues
168
+ watch_queues(connection, *config.queue_names)
213
169
 
214
170
  until shutdown? || (@gc_limit && @jobs_processed.value >= @gc_limit)
215
171
  begin
216
- # Reserve with short timeout
217
- job = connection.beanstalk.tubes.reserve(timeout: 1)
172
+ # Reserve with configured timeout
173
+ job = connection.beanstalk.tubes.reserve(timeout: timeout)
218
174
 
219
175
  if job
220
176
  logger.debug "[Postburner::Worker] Thread #{Thread.current.object_id} reserved job #{job.id}"
@@ -238,34 +194,22 @@ module Postburner
238
194
  connection&.close rescue nil
239
195
  end
240
196
 
241
- # Gracefully shuts down all thread pools (single-process mode).
242
- #
243
- # @return [void]
244
- #
245
- def shutdown_pools
246
- @pools.each do |queue_name, pool|
247
- pool.shutdown
248
- pool.wait_for_termination(30)
249
- logger.info "[Postburner::Worker] Queue '#{queue_name}' shutdown complete"
250
- end
251
- end
252
-
253
197
  # Starts worker in forked mode (forks: 1+).
254
198
  #
255
- # Forks multiple child processes for each queue, each running
256
- # a thread pool. Parent process monitors children and restarts them when they exit.
199
+ # Forks multiple child processes, each running a thread pool.
200
+ # Parent process monitors children and restarts them when they exit.
257
201
  #
258
202
  # @return [void]
259
203
  #
260
204
  def start_forked_mode
261
- logger.info "[Postburner::Worker] Mode: Multi-process (forks: 1+)"
205
+ logger.info "[Postburner::Worker] Mode: Multi-process (#{worker_config[:forks]} forks)"
262
206
 
263
- # Track children: { pid => { queue: 'name', fork_num: 0 } }
207
+ # Track children: { pid => fork_num }
264
208
  @children = {}
265
209
 
266
- # Spawn configured number of forks for each queue
267
- config.queue_names.each do |queue_name|
268
- spawn_queue_workers(queue_name)
210
+ # Spawn configured number of forks
211
+ worker_config[:forks].times do |fork_num|
212
+ spawn_fork(fork_num)
269
213
  end
270
214
 
271
215
  # Parent process monitors children
@@ -274,18 +218,16 @@ module Postburner
274
218
  pid, status = Process.wait2(-1, Process::WNOHANG)
275
219
 
276
220
  if pid
277
- child_info = @children.delete(pid)
221
+ fork_num = @children.delete(pid)
278
222
  exit_code = status.exitstatus
279
- queue_name = child_info[:queue]
280
- fork_num = child_info[:fork_num]
281
223
 
282
224
  if exit_code == 99
283
225
  # GC restart - this is normal
284
- logger.info "[Postburner::Worker] Queue '#{queue_name}' fork #{fork_num} reached GC limit, restarting..."
285
- spawn_queue_worker(queue_name, fork_num) unless shutdown?
226
+ logger.info "[Postburner::Worker] Fork #{fork_num} reached GC limit, restarting..."
227
+ spawn_fork(fork_num) unless shutdown?
286
228
  else
287
- logger.error "[Postburner::Worker] Queue '#{queue_name}' fork #{fork_num} exited unexpectedly (code: #{exit_code})"
288
- spawn_queue_worker(queue_name, fork_num) unless shutdown?
229
+ logger.error "[Postburner::Worker] Fork #{fork_num} exited unexpectedly (code: #{exit_code})"
230
+ spawn_fork(fork_num) unless shutdown?
289
231
  end
290
232
  end
291
233
 
@@ -305,61 +247,36 @@ module Postburner
305
247
  logger.info "[Postburner::Worker] Shutdown complete"
306
248
  end
307
249
 
308
- # Spawns all forked worker processes for a specific queue.
309
- #
310
- # @param queue_name [String] Name of the queue to process
311
- #
312
- # @return [void]
313
- #
314
- def spawn_queue_workers(queue_name)
315
- queue_config = config.queue_config(queue_name)
316
- fork_count = queue_config['forks'] || queue_config[:forks] || config.default_forks
317
- thread_count = queue_config['threads'] || queue_config[:threads] || config.default_threads
318
-
319
- # Skip if this queue has 0 forks (shouldn't happen in forked mode, but be defensive)
320
- return if fork_count == 0
321
-
322
- total_concurrency = fork_count * thread_count
323
- logger.info "[Postburner::Worker] Queue '#{queue_name}': #{fork_count} forks × #{thread_count} threads = #{total_concurrency} total concurrency"
324
-
325
- fork_count.times do |fork_num|
326
- spawn_queue_worker(queue_name, fork_num)
327
- end
328
- end
329
-
330
- # Spawns a single forked worker process for a specific queue.
250
+ # Spawns a single forked worker process.
331
251
  #
332
- # @param queue_name [String] Name of the queue to process
333
252
  # @param fork_num [Integer] Fork number (0-indexed)
334
253
  #
335
254
  # @return [void]
336
255
  #
337
- def spawn_queue_worker(queue_name, fork_num)
256
+ def spawn_fork(fork_num)
338
257
  pid = fork do
339
258
  # Child process
340
- run_queue_worker(queue_name, fork_num)
259
+ run_fork(fork_num)
341
260
  end
342
261
 
343
- @children[pid] = { queue: queue_name, fork_num: fork_num }
344
- logger.info "[Postburner::Worker] Spawned worker for queue '#{queue_name}' fork #{fork_num} (pid: #{pid})"
262
+ @children[pid] = fork_num
263
+ logger.info "[Postburner::Worker] Spawned fork #{fork_num} (pid: #{pid})"
345
264
  end
346
265
 
347
- # Runs the thread pool worker for a specific queue fork.
266
+ # Runs the thread pool worker in a forked process.
348
267
  #
349
268
  # This runs in the child process. Creates a thread pool and processes
350
269
  # jobs until GC limit is reached or shutdown is requested.
351
270
  #
352
- # @param queue_name [String] Name of the queue to process
353
271
  # @param fork_num [Integer] Fork number (for logging)
354
272
  #
355
273
  # @return [void]
356
274
  #
357
- def run_queue_worker(queue_name, fork_num)
358
- queue_config = config.queue_config(queue_name)
359
- thread_count = queue_config['threads'] || queue_config[:threads] || config.default_threads
360
- gc_limit = queue_config['gc_limit'] || queue_config[:gc_limit] || config.default_gc_limit
275
+ def run_fork(fork_num)
276
+ thread_count = worker_config[:threads]
277
+ gc_limit = worker_config[:gc_limit]
361
278
 
362
- logger.info "[Postburner::Worker] Queue '#{queue_name}' fork #{fork_num}: #{thread_count} threads, GC limit #{gc_limit || 'unlimited'}"
279
+ logger.info "[Postburner::Worker] Fork #{fork_num}: #{thread_count} threads, GC limit #{gc_limit || 'unlimited'}"
363
280
 
364
281
  # Track jobs processed in this fork
365
282
  jobs_processed = Concurrent::AtomicFixnum.new(0)
@@ -370,7 +287,7 @@ module Postburner
370
287
  # Each thread needs its own Beanstalkd connection
371
288
  thread_count.times do
372
289
  pool.post do
373
- process_jobs_in_fork(queue_name, fork_num, jobs_processed, gc_limit)
290
+ process_jobs_in_fork(fork_num, jobs_processed, gc_limit)
374
291
  end
375
292
  end
376
293
 
@@ -384,43 +301,43 @@ module Postburner
384
301
  pool.wait_for_termination(30)
385
302
 
386
303
  if gc_limit && jobs_processed.value >= gc_limit
387
- logger.info "[Postburner::Worker] Queue '#{queue_name}' fork #{fork_num} reached GC limit (#{jobs_processed.value} jobs), exiting for restart..."
304
+ logger.info "[Postburner::Worker] Fork #{fork_num} reached GC limit (#{jobs_processed.value} jobs), exiting for restart..."
388
305
  exit 99 # Special exit code for GC restart
389
306
  else
390
- logger.info "[Postburner::Worker] Queue '#{queue_name}' fork #{fork_num} shutting down gracefully..."
307
+ logger.info "[Postburner::Worker] Fork #{fork_num} shutting down gracefully..."
391
308
  exit 0
392
309
  end
393
310
  rescue => e
394
- logger.error "[Postburner::Worker] Queue '#{queue_name}' fork #{fork_num} error: #{e.message}"
311
+ logger.error "[Postburner::Worker] Fork #{fork_num} error: #{e.message}"
395
312
  logger.error e.backtrace.join("\n")
396
313
  exit 1
397
314
  end
398
315
 
399
- # Processes jobs in a single thread within a fork (forked mode).
316
+ # Processes jobs in a single thread within a fork.
400
317
  #
401
318
  # Each thread has its own Beanstalkd connection and reserves jobs
402
- # from the specified queue.
319
+ # from all configured queues.
403
320
  #
404
- # @param queue_name [String] Name of the queue to process
405
321
  # @param fork_num [Integer] Fork number (for logging)
406
322
  # @param jobs_processed [Concurrent::AtomicFixnum] Shared counter of jobs processed
407
323
  # @param gc_limit [Integer, nil] Maximum jobs before triggering GC restart (nil = unlimited)
408
324
  #
409
325
  # @return [void]
410
326
  #
411
- def process_jobs_in_fork(queue_name, fork_num, jobs_processed, gc_limit)
327
+ def process_jobs_in_fork(fork_num, jobs_processed, gc_limit)
412
328
  connection = Postburner::Connection.new
329
+ timeout = worker_config[:timeout]
413
330
 
414
- # Watch only this queue
415
- watch_queues(connection, queue_name)
331
+ # Watch all configured queues
332
+ watch_queues(connection, *config.queue_names)
416
333
 
417
334
  until shutdown? || (gc_limit && jobs_processed.value >= gc_limit)
418
335
  begin
419
- # Reserve with short timeout
420
- job = connection.beanstalk.tubes.reserve(timeout: 1)
336
+ # Reserve with configured timeout
337
+ job = connection.beanstalk.tubes.reserve(timeout: timeout)
421
338
 
422
339
  if job
423
- logger.debug "[Postburner::Worker] Queue '#{queue_name}' fork #{fork_num} thread #{Thread.current.object_id} reserved job #{job.id}"
340
+ logger.debug "[Postburner::Worker] Fork #{fork_num} thread #{Thread.current.object_id} reserved job #{job.id}"
424
341
  execute_job(job)
425
342
  jobs_processed.increment
426
343
  end
data/lib/postburner.rb CHANGED
@@ -375,25 +375,50 @@ module Postburner
375
375
  end
376
376
  end
377
377
 
378
- # Removes all jobs from all tubes (not yet implemented).
378
+ # Clears jobs from specified tubes or shows stats for all tubes.
379
379
  #
380
- # This is a destructive operation intended for development/testing cleanup.
381
- # Requires confirmation string "CONFIRM" to prevent accidental execution.
380
+ # High-level method with formatted output. Delegates to Connection#clear_tubes!
381
+ # for the actual work, then pretty-prints the results.
382
382
  #
383
- # @param confirm [String] Must be exactly "CONFIRM" to execute
383
+ # SAFETY: Only allows clearing tubes that are defined in the loaded
384
+ # configuration. This prevents accidentally clearing tubes from other
385
+ # applications or environments sharing the same Beanstalkd server.
384
386
  #
385
- # @return [void]
387
+ # @param tube_names [Array<String>, nil] Array of tube names to clear, or nil to only show stats
388
+ # @param silent [Boolean] If true, suppress output to stdout (default: false)
386
389
  #
387
- # @example
388
- # Postburner.remove_all!("CONFIRM")
390
+ # @return [Hash] Statistics and results (see Connection#clear_tubes!)
391
+ #
392
+ # @raise [ArgumentError] if tube_names contains tubes not in watched_tube_names
393
+ #
394
+ # @example Show stats only (no clearing) - SAFE
395
+ # Postburner.clear_jobs!
396
+ # # Shows stats for ALL tubes on Beanstalkd, but doesn't clear anything
397
+ #
398
+ # @example Clear watched tubes only - SAFE
399
+ # Postburner.clear_jobs!(Postburner.watched_tube_names)
400
+ # # Only clears tubes defined in your config
389
401
  #
390
- # @note Currently a no-op - implementation pending
391
- # @todo Implement job removal from all tubes
402
+ # @example Trying to clear unconfigured tube - RAISES ERROR
403
+ # Postburner.clear_jobs!(['some-other-app-tube'])
404
+ # # => ArgumentError: Cannot clear tubes not in configuration
392
405
  #
393
- def self.remove_all!(confirm)
394
- return unless confirm == "CONFIRM"
406
+ # @example Silent mode (programmatic use)
407
+ # result = Postburner.clear_jobs!(Postburner.watched_tube_names, silent: true)
408
+ # result[:totals][:total] # => 42
409
+ #
410
+ # @see Connection#clear_tubes!
411
+ #
412
+ def self.clear_jobs!(tube_names = nil, silent: false)
413
+ require 'json'
414
+
415
+ result = connection.clear_tubes!(tube_names)
416
+
417
+ unless silent
418
+ puts JSON.pretty_generate(result)
419
+ end
395
420
 
396
- # TODO
421
+ result
397
422
  end
398
423
 
399
424
  # Returns array of watched tube names with environment prefix.
@@ -423,38 +448,82 @@ module Postburner
423
448
  @__watched_tubes ||= watched_tube_names.map { |tube_name| connection.tubes[tube_name] }
424
449
  end
425
450
 
426
- # Returns statistics and introspection data about Beanstalkd and configured queues.
451
+ # Returns detailed statistics about Beanstalkd tubes.
427
452
  #
428
- # Provides Beaneater tube instances for configured tubes and all tubes that exist
429
- # on the Beanstalkd server. Tube instances support introspection methods:
453
+ # Collects job counts (ready, delayed, buried, reserved) for each tube
454
+ # and provides aggregate totals across all tubes.
430
455
  #
431
- # - tube.name - Tube name
432
- # - tube.stats - Tube statistics hash (current-jobs-ready, current-jobs-buried, etc.)
433
- # - tube.peek_ready - Next ready job
434
- # - tube.peek_delayed - Next delayed job
435
- # - tube.peek_buried - Next buried job
436
- # - tube.kick(n) - Kick n buried jobs back to ready
437
- # - tube.pause(delay) - Pause tube for delay seconds
438
- # - tube.clear - Delete all jobs in tube
456
+ # @param tube_names [Array<String>, nil] Specific tube names to inspect, or nil for all tubes
439
457
  #
440
- # @return [Hash] Statistics hash with the following keys:
441
- # - watched_tubes: Array of configured/watched Beaneater::Tube instances
442
- # - tubes: Array of all Beaneater::Tube instances on the server
458
+ # @return [Hash] Statistics hash with keys:
459
+ # - tubes: Array of hashes with per-tube stats (name, ready, delayed, buried, reserved, total)
460
+ # - totals: Hash with aggregated counts across all tubes
443
461
  #
444
462
  # @raise [Beaneater::NotConnected] if connection to Beanstalkd fails
445
463
  #
446
- # @example
464
+ # @example Get stats for all tubes
447
465
  # stats = Postburner.stats
448
- # stats[:watched_tubes].each { |tube| puts "#{tube.name}: #{tube.stats}" }
449
- # stats[:tubes].first.peek_ready
466
+ # stats[:totals][:total] # => 42
467
+ # stats[:tubes].first[:name] # => "default"
468
+ #
469
+ # @example Get stats for specific tubes
470
+ # stats = Postburner.stats(Postburner.watched_tube_names)
471
+ # stats[:tubes].size # => 3
450
472
  #
451
- def self.stats
473
+ def self.stats(tube_names = nil)
452
474
  connected do |conn|
453
- {
454
- watched_tubes: self.watched_tubes,
455
- # Get all tube instances that exist on Beanstalkd
456
- tubes: conn.beanstalk.tubes.all
475
+ # Get tubes to inspect
476
+ tubes_to_inspect = if tube_names&.any?
477
+ tube_names.map { |name| conn.tubes[name] }
478
+ else
479
+ conn.beanstalk.tubes.all
480
+ end
481
+
482
+ result = {
483
+ tubes: [],
484
+ totals: {
485
+ ready: 0,
486
+ delayed: 0,
487
+ buried: 0,
488
+ reserved: 0,
489
+ total: 0
490
+ }
457
491
  }
492
+
493
+ # Collect stats from each tube
494
+ tubes_to_inspect.each do |tube|
495
+ begin
496
+ stats = tube.stats
497
+ # Beaneater returns a StatStruct; access the underlying hash
498
+ stats_hash = stats.instance_variable_get(:@hash) || {}
499
+
500
+ tube_data = {
501
+ name: tube.name,
502
+ ready: stats_hash['current-jobs-ready'] || 0,
503
+ delayed: stats_hash['current-jobs-delayed'] || 0,
504
+ buried: stats_hash['current-jobs-buried'] || 0,
505
+ reserved: stats_hash['current-jobs-reserved'] || 0,
506
+ total: (stats_hash['current-jobs-ready'] || 0) +
507
+ (stats_hash['current-jobs-delayed'] || 0) +
508
+ (stats_hash['current-jobs-buried'] || 0) +
509
+ (stats_hash['current-jobs-reserved'] || 0)
510
+ }
511
+ rescue Beaneater::NotFoundError
512
+ # Tube doesn't exist yet, skip it
513
+ next
514
+ end
515
+
516
+ result[:tubes] << tube_data
517
+
518
+ # Aggregate totals
519
+ result[:totals][:ready] += tube_data[:ready]
520
+ result[:totals][:delayed] += tube_data[:delayed]
521
+ result[:totals][:buried] += tube_data[:buried]
522
+ result[:totals][:reserved] += tube_data[:reserved]
523
+ result[:totals][:total] += tube_data[:total]
524
+ end
525
+
526
+ result
458
527
  end
459
528
  end
460
529
  end
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: postburner
3
3
  version: !ruby/object:Gem::Version
4
- version: 1.0.0.pre.5
4
+ version: 1.0.0.pre.7
5
5
  platform: ruby
6
6
  authors:
7
7
  - Matt Smith