fiber_job 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/CHANGELOG.md +29 -0
- data/README.md +290 -0
- data/bin/fiber_job +32 -0
- data/lib/fiber_job/client.rb +103 -0
- data/lib/fiber_job/concurrency.rb +18 -0
- data/lib/fiber_job/config.rb +74 -0
- data/lib/fiber_job/cron.rb +61 -0
- data/lib/fiber_job/cron_job.rb +41 -0
- data/lib/fiber_job/cron_parser.rb +65 -0
- data/lib/fiber_job/job.rb +143 -0
- data/lib/fiber_job/logger.rb +117 -0
- data/lib/fiber_job/process_manager.rb +18 -0
- data/lib/fiber_job/queue.rb +201 -0
- data/lib/fiber_job/version.rb +5 -0
- data/lib/fiber_job/worker.rb +195 -0
- data/lib/fiber_job.rb +108 -0
- metadata +85 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA256:
  metadata.gz: 8a6c46d9a9278acfb132e182abe3f176e96e1b948aeaee30a722793b4b47b222
  data.tar.gz: 70d45d9ee91164f8a5016bfc5cbae46dd4250de0a45d6658f442f4c4c78a72e2
SHA512:
  metadata.gz: f2a203f5d2124903dff417ca2562862a3326e496037e8d56e542629cbabbbfe1a129c9888bc31b047d522617db2afb18de22ec112d2c0ec9b37525f48046aedf
  data.tar.gz: fc605f346003929db83d530b44024cc2f402c503041d1bb8ef890281c5762c581b943f628b289f4bcd5437a5391607c73db0a80c76391ff1de13d7cc5609220f
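These digests can be reproduced with Ruby's stdlib `Digest` module. A minimal sketch of the verification step — the `checksum_ok?` helper is illustrative, not part of the gem:

```ruby
require 'digest'

# Hypothetical helper: compare a file's SHA256 against an expected digest,
# as one would to verify metadata.gz / data.tar.gz from checksums.yaml.
def checksum_ok?(path, expected_sha256)
  Digest::SHA256.file(path).hexdigest == expected_sha256
end

# The digest computation itself, shown on a string instead of a gem file;
# the result is a 64-character hex string like the SHA256 entries above.
puts Digest::SHA256.hexdigest('fiber_job')
```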
data/CHANGELOG.md
ADDED
@@ -0,0 +1,29 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.1.0] - 2025-07-20

### Added
- Initial release of FiberJob
- Hybrid Redis + Async::Queue architecture for optimal performance
- Fiber-based job processing with async/await patterns
- Job scheduling with delayed and scheduled execution
- Cron job support with standard cron expressions
- Built-in retry logic with exponential backoff
- Priority queue support
- Per-queue concurrency control with semaphores
- Comprehensive failure tracking and monitoring
- Production-ready logging system
- Full YARD documentation coverage

### Features
- **Core**: Job enqueueing, processing, and lifecycle management
- **Scheduling**: Immediate, delayed, and cron-based job execution
- **Concurrency**: Advanced fiber pools with semaphore control
- **Persistence**: Redis-backed job storage with atomic operations
- **Monitoring**: Queue statistics and failed job inspection
- **Configuration**: Flexible per-queue and global settings
data/README.md
ADDED
@@ -0,0 +1,290 @@
# FiberJob

A high-performance, Redis-based background job processing library for Ruby built on modern fiber-based concurrency. FiberJob combines the persistence of Redis with the speed of async fibers to deliver exceptional performance and reliability.

## Architecture Highlights

FiberJob is an experimental gem with an architecture that sets it apart from traditional job queues:

### Hybrid Redis + Async::Queue Design

- **Redis for persistence**: Durable job storage with atomic operations and scheduling
- **Async::Queue for speed**: Fast in-memory job processing with fiber-based concurrency
- **Best of both worlds**: the reliability of Redis plus the performance of in-memory queues

### Advanced Fiber Management

- **Separation of concerns**: Independent polling fibers fetch from Redis while processing fibers execute jobs
- **Per-queue fiber pools**: Isolated concurrency control with `Async::Semaphore` for optimal resource utilization
- **Non-blocking operations**: All I/O operations use async/await patterns for maximum throughput

### Production-Optimized Performance

- **Minimal Redis contention**: A single polling fiber per queue reduces Redis load
- **Fast job execution**: Jobs flow through an in-memory `Async::Queue` for sub-millisecond processing
- **Scalable concurrency**: Configurable fiber pools scale efficiently without thread overhead
## Features

- **Fiber-Based Job Processing**: Execute jobs using modern Ruby async/fiber concurrency
- **Hybrid Queue Architecture**: Redis persistence + in-memory async queues for optimal performance
- **Job Scheduling**: Schedule jobs for future execution with precise timing
- **Cron Jobs**: Define recurring jobs with cron expressions
- **Retry Logic**: Built-in exponential backoff with configurable retry policies
- **Priority Queues**: Support for high-priority job execution
- **Advanced Concurrency**: Per-queue fiber pools with semaphore-controlled execution
- **Failure Tracking**: Store and inspect failed jobs for debugging
- **Redis Integration**: Optimized Redis usage with connection pooling and atomic operations
## Installation

Add this line to your application's Gemfile:

```ruby
gem 'fiber_job'
```

And then execute:

```bash
$ bundle install
```

Or install it yourself as:

```bash
$ gem install fiber_job
```
## Quick Start

### 1. Configuration

```ruby
require 'fiber_job'

FiberJob.configure do |config|
  config.redis_url = 'redis://localhost:6379/0'
  config.queues = [:default, :high, :low]
  config.queue_concurrency = {
    default: 5,
    high: 10,
    low: 2
  }
  config.log_level = :info
end
```
### 2. Define a Job

```ruby
class EmailJob < FiberJob::Job
  def perform(user_id, message)
    user = User.find(user_id)
    UserMailer.notification(user, message).deliver_now
    FiberJob.logger.info "Email sent to user #{user_id}"
  end
end
```
### 3. Enqueue Jobs

```ruby
# Immediate execution
EmailJob.perform_async(123, "Welcome to our platform!")

# Delayed execution
EmailJob.perform_in(1.hour, 456, "Don't forget to complete your profile")

# Scheduled execution
EmailJob.perform_at(Date.tomorrow.beginning_of_day, 789, "Daily newsletter")
```
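Note that `1.hour` and `Date.tomorrow.beginning_of_day` are ActiveSupport helpers. Outside Rails, plain-Ruby equivalents work too, since the scheduling API ultimately takes seconds and `Time` values — a small sketch (the commented calls mirror the examples above):

```ruby
# Plain-Ruby stand-ins for the ActiveSupport helpers used above.
one_hour = 60 * 60                 # 3600 seconds instead of 1.hour
tomorrow = Time.now + 24 * 60 * 60 # a Time roughly one day out

# Equivalent calls (not executed here; they need a running FiberJob setup):
# EmailJob.perform_in(one_hour, 456, "Don't forget to complete your profile")
# EmailJob.perform_at(tomorrow, 789, "Daily newsletter")
puts one_hour # => 3600
```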
### 4. Start Workers

```ruby
# Start a worker for all configured queues
worker = FiberJob::Worker.new
worker.start

# Start a worker for specific queues
worker = FiberJob::Worker.new(queues: [:high, :default])
worker.start
```
## How the Hybrid Architecture Works

FiberJob's architecture combines Redis persistence with fiber-based in-memory processing:

```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Redis Queue   │───▶│  Polling Fiber  │───▶│  Async::Queue   │
│  (Persistent)   │    │ (Single/Queue)  │    │   (In-Memory)   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                                       │
                                                       ▼
                                              ┌─────────────────┐
                                              │ Processing Pool │
                                              │  (Fiber Pool +  │
                                              │   Semaphore)    │
                                              └─────────────────┘
```

### Fiber Pool Architecture

```ruby
# Each queue gets its own fiber ecosystem
Sync do |task|
  @queues.each do |queue_name|
    # 1. Single polling fiber per queue (minimal Redis load)
    task.async { poll_redis_queue(queue_name) }

    # 2. Multiple processing fibers per queue (parallel execution)
    concurrency.times do
      task.async { process_job_queue(queue_name) }
    end
  end
end

# Jobs flow: Redis → Polling Fiber → Async::Queue → Processing Fiber Pool
def poll_redis_queue(queue_name)
  while @running
    job_data = Queue.pop(queue_name, timeout: 1.0)        # Redis operation
    @job_queues[queue_name].enqueue(job_data) if job_data # Fast async queue
  end
end

def process_job_queue(queue_name)
  while @running
    job_data = @job_queues[queue_name].dequeue              # Instant fiber operation
    @managers[queue_name].execute { execute_job(job_data) } # Semaphore-controlled
  end
end
```
### Performance Benefits

- **Sub-millisecond job pickup**: Jobs are instantly available in `Async::Queue`
- **Reduced Redis load**: One polling fiber per queue instead of multiple workers competing
- **Optimal concurrency**: The semaphore ensures exact concurrency limits without oversubscription
- **Zero blocking**: All operations use async/await patterns for maximum throughput
## Job Configuration

### Custom Job Settings

```ruby
class ComplexJob < FiberJob::Job
  def initialize
    super
    @queue = :high_priority
    @max_retries = 5
    @timeout = 600 # 10 minutes
  end

  def perform(data)
    # Complex processing logic
  end

  def retry_delay(attempt)
    # Custom retry strategy: linear backoff
    attempt * 60 # 1min, 2min, 3min...
  end
end
```
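The built-in default is exponential backoff; overriding `retry_delay` swaps in any strategy. As a rough sketch of a common exponential variant — the base delay and cap here are illustrative, not FiberJob's actual values:

```ruby
# Illustrative exponential backoff with a cap; not FiberJob's exact formula.
def retry_delay(attempt)
  base = 30                       # seconds before the first retry (assumed)
  [base * (2**attempt), 3600].min # double each attempt, capped at 1 hour
end

puts retry_delay(0) # => 30
puts retry_delay(3) # => 240
```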
### Priority Retry

```ruby
class CriticalJob < FiberJob::Job
  def priority_retry?
    true # Failed jobs go to front of queue on retry
  end
end
```
## Cron Jobs

Define recurring jobs with cron expressions:

```ruby
class DailyReportJob < FiberJob::CronJob
  cron '0 9 * * *' # Every day at 9 AM

  def execute_cron_job
    # Generate and send daily reports
    Report.generate_daily
    FiberJob.logger.info "Daily report generated"
  end
end

class HourlyCleanupJob < FiberJob::CronJob
  cron '0 * * * *' # Every hour

  def execute_cron_job
    # Cleanup old data
    TempData.cleanup_old_records
  end
end

# Register cron jobs (automatically done on startup)
FiberJob.register_cron_jobs
```
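The expressions above use the standard five-field cron layout, read left to right as minute, hour, day-of-month, month, and day-of-week. A tiny sketch of that breakdown:

```ruby
# Standard five-field cron layout, as used by '0 9 * * *' above.
fields = '0 9 * * *'.split
labels = %w[minute hour day_of_month month day_of_week]
p labels.zip(fields).to_h
# => {"minute"=>"0", "hour"=>"9", "day_of_month"=>"*", "month"=>"*", "day_of_week"=>"*"}
```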
## Queue Management

### Queue Statistics

```ruby
stats = FiberJob::Queue.stats(:default)
puts "Queue size: #{stats[:size]}"
puts "Scheduled jobs: #{stats[:scheduled]}"
puts "Processing: #{stats[:processing]}"
```
### Failed Jobs

```ruby
# View failed jobs
failed_jobs = FiberJob::Queue.failed_jobs
failed_jobs.each do |job|
  puts "Failed: #{job['class']} - #{job['error']}"
end

# Note: the clear_failed_jobs method was removed as it was unused
```
## Configuration Options

| Option | Default | Description |
|--------|---------|-------------|
| `redis_url` | `redis://localhost:6379` | Redis connection URL |
| `concurrency` | `2` | Global default concurrency |
| `queues` | `[:default]` | List of queues to process |
| `queue_concurrency` | `{default: 2}` | Per-queue concurrency settings |
| `log_level` | `:info` | Logging level (debug, info, warn, error, fatal) |
## Environment Variables

- `REDIS_URL`: Redis connection URL
- `FIBER_JOB_LOG_LEVEL`: Logging level
## Requirements

- Ruby 3.1+
- Redis 5.0+
## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Roadmap

- [ ] Middleware support for job lifecycle hooks
- [ ] Dead letter queue for permanently failed jobs
- [ ] Metrics and monitoring integration
- [ ] ActiveJob adapter
- [ ] Web UI for job monitoring and management
## Support

- Create an issue on GitHub for bug reports or feature requests
- Check existing issues for solutions to common problems
- Review the documentation for detailed API information
data/bin/fiber_job
ADDED
@@ -0,0 +1,32 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

require_relative '../lib/fiber_job'

# Simple CLI for FiberJob
command = ARGV[0]

case command
when 'worker'
  puts "Starting FiberJob worker..."
  FiberJob::ProcessManager.start_worker
when 'version'
  puts "FiberJob version #{FiberJob::VERSION}"
when nil, 'help'
  puts <<~HELP
    FiberJob - High-performance fiber-based job processing

    Usage:
      fiber_job worker     Start a worker process
      fiber_job version    Show version
      fiber_job help       Show this help

    Examples:
      fiber_job worker                                      # Start worker with default configuration
      REDIS_URL=redis://localhost:6379/1 fiber_job worker   # Custom Redis URL
  HELP
else
  puts "Unknown command: #{command}"
  puts "Run 'fiber_job help' for usage information"
  exit 1
end
data/lib/fiber_job/client.rb
ADDED
@@ -0,0 +1,103 @@
# frozen_string_literal: true

module FiberJob
  # Client handles job enqueueing and scheduling operations.
  # Provides the core interface for adding jobs to queues with immediate,
  # delayed, or scheduled execution.
  #
  # This class is used internally by {FiberJob::Job} class methods and can also
  # be used directly for advanced job management scenarios.
  #
  # @example Direct usage
  #   FiberJob::Client.enqueue(MyJob, arg1, arg2)
  #   FiberJob::Client.enqueue_in(3600, MyJob, arg1, arg2) # 1 hour delay
  #   FiberJob::Client.enqueue_at(Time.parse("2024-01-01"), MyJob, arg1)
  #
  # @author FiberJob Team
  # @since 1.0.0
  # @see FiberJob::Job
  # @see FiberJob::Queue
  class Client
    # Enqueues a job for immediate execution.
    # The job will be added to the appropriate queue and processed
    # by the next available worker.
    #
    # @param job_class [Class] The job class to execute (must inherit from FiberJob::Job)
    # @param args [Array] Arguments to pass to the job's perform method
    # @return [String] Unique job identifier
    #
    # @example Enqueue a job immediately
    #   FiberJob::Client.enqueue(EmailJob, user.id, "Welcome!")
    #
    # @raise [ArgumentError] If job_class is not a valid job class
    def self.enqueue(job_class, *args)
      payload = {
        'class' => job_class.name,
        'args' => args,
        'enqueued_at' => Time.now.to_f
      }

      queue_name = job_class.queue
      Queue.push(queue_name, payload)

      FiberJob.logger.info "Enqueued #{job_class.name} with args: #{args.inspect}"
    end

    # Enqueues a job for execution after a specified delay.
    # The job will be scheduled for future execution and moved to the
    # regular queue when the delay period expires.
    #
    # @param delay_seconds [Numeric] Number of seconds to delay execution
    # @param job_class [Class] The job class to execute (must inherit from FiberJob::Job)
    # @param args [Array] Arguments to pass to the job's perform method
    # @return [String] Unique job identifier
    #
    # @example Enqueue with delay
    #   FiberJob::Client.enqueue_in(300, EmailJob, user.id, "5 minute reminder")
    #   FiberJob::Client.enqueue_in(1.hour, ReportJob, date: Date.today)
    #
    # @raise [ArgumentError] If delay_seconds is negative or job_class is invalid
    def self.enqueue_in(delay_seconds, job_class, *args)
      scheduled_at = Time.now.to_f + delay_seconds
      payload = {
        'class' => job_class.name,
        'args' => args,
        'enqueued_at' => Time.now.to_f
      }

      queue_name = job_class.queue
      Queue.schedule(queue_name, payload, scheduled_at)

      FiberJob.logger.info "Scheduled #{job_class.name} to run in #{delay_seconds}s"
    end

    # Enqueues a job for execution at a specific time.
    # The job will be scheduled and executed when the specified time is reached.
    #
    # @param timestamp [Time, Integer] Specific time or Unix timestamp for execution
    # @param job_class [Class] The job class to execute (must inherit from FiberJob::Job)
    # @param args [Array] Arguments to pass to the job's perform method
    # @return [String] Unique job identifier
    #
    # @example Enqueue at specific time
    #   tomorrow_9am = Time.parse("2024-01-02 09:00:00")
    #   FiberJob::Client.enqueue_at(tomorrow_9am, DailyReportJob, date: Date.today)
    #
    #   # Using Unix timestamp
    #   FiberJob::Client.enqueue_at(1672531200, MaintenanceJob, type: "cleanup")
    #
    # @raise [ArgumentError] If timestamp is in the past or job_class is invalid
    def self.enqueue_at(timestamp, job_class, *args)
      payload = {
        'class' => job_class.name,
        'args' => args,
        'enqueued_at' => Time.now.to_f
      }

      queue_name = job_class.queue
      Queue.schedule(queue_name, payload, timestamp.to_f)

      FiberJob.logger.info "Scheduled #{job_class.name} to run at #{Time.at(timestamp)}"
    end
  end
end
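Each of these methods builds the same payload shape: the job class name, the arguments, and an enqueue timestamp. Since Redis stores strings, the payload must survive serialization; a small sketch of that round trip (`Queue.push` itself is not involved here, and `EmailJob` is just a stand-in name):

```ruby
require 'json'

# Build a payload the same way Client.enqueue does.
payload = {
  'class' => 'EmailJob',
  'args' => [123, 'Welcome!'],
  'enqueued_at' => Time.now.to_f
}

# Serialize for storage, then parse back as a worker would.
restored = JSON.parse(JSON.dump(payload))

puts restored['class']        # => EmailJob
puts restored['args'].inspect # => [123, "Welcome!"]
```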
data/lib/fiber_job/concurrency.rb
ADDED
@@ -0,0 +1,18 @@
# frozen_string_literal: true

require 'async'
require 'async/semaphore'

module FiberJob
  # The ConcurrencyManager class is a lightweight wrapper around Async::Semaphore that
  # controls how many jobs can execute simultaneously within each queue.
  class ConcurrencyManager
    def initialize(max_concurrency: 5)
      @semaphore = Async::Semaphore.new(max_concurrency)
    end

    def execute(&block)
      @semaphore.async(&block)
    end
  end
end
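The limiting behaviour this wrapper provides can be illustrated with a stdlib analogue: a counting semaphore that caps how many blocks run at once. This sketch uses threads and `SizedQueue` instead of the async gem's fibers, purely to show the cap; `SimpleSemaphore` is not part of FiberJob:

```ruby
# Thread-based stand-in for Async::Semaphore: at most `max` blocks run at once.
class SimpleSemaphore
  def initialize(max)
    @slots = SizedQueue.new(max) # push blocks when all permits are taken
  end

  def execute
    @slots.push(:permit) # acquire (blocks if `max` blocks already running)
    yield
  ensure
    @slots.pop           # release the permit
  end
end

sem = SimpleSemaphore.new(2)
peak = 0
active = 0
lock = Mutex.new

threads = 6.times.map do
  Thread.new do
    sem.execute do
      lock.synchronize { active += 1; peak = [peak, active].max }
      sleep 0.01
      lock.synchronize { active -= 1 }
    end
  end
end
threads.each(&:join)

puts peak # never exceeds the limit of 2
```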
data/lib/fiber_job/config.rb
ADDED
@@ -0,0 +1,74 @@
# frozen_string_literal: true

require 'logger'

module FiberJob
  # Configuration class for FiberJob library settings.
  # Manages Redis connection, worker concurrency, queue configuration,
  # and logging settings. Supports both global and per-queue configuration.
  #
  # @example Basic configuration
  #   FiberJob.configure do |config|
  #     config.redis_url = 'redis://localhost:6379/0'
  #     config.concurrency = 5
  #     config.queues = [:default, :high, :low]
  #     config.log_level = :debug
  #   end
  #
  # @example Per-queue concurrency
  #   FiberJob.configure do |config|
  #     config.queue_concurrency = {
  #       default: 5,
  #       high: 10,
  #       low: 2
  #     }
  #   end
  #
  class Config
    # @!attribute [rw] redis_url
    #   @return [String] Redis connection URL
    # @!attribute [rw] concurrency
    #   @return [Integer] Global default concurrency level
    # @!attribute [rw] queues
    #   @return [Array<Symbol>] List of queue names to process
    # @!attribute [rw] queue_concurrency
    #   @return [Hash] Per-queue concurrency settings
    # @!attribute [rw] logger
    #   @return [Logger] Logger instance for application logging
    # @!attribute [rw] log_level
    #   @return [Symbol] Logging level (:debug, :info, :warn, :error)
    attr_accessor :redis_url, :concurrency, :queues, :queue_concurrency, :logger, :log_level

    # Initializes configuration with sensible defaults.
    # Values can be overridden through environment variables or configuration blocks.
    #
    # @return [void]
    #
    # Environment variables:
    # - REDIS_URL: Redis connection URL (default: redis://localhost:6379)
    # - FIBER_JOB_LOG_LEVEL: Logging level (default: info)
    def initialize
      @redis_url = ENV['REDIS_URL'] || 'redis://localhost:6379'
      @concurrency = 2 # Global default fallback
      @queues = [:default]
      @queue_concurrency = { default: 2 } # Per-queue concurrency
      @log_level = ENV['FIBER_JOB_LOG_LEVEL']&.to_sym || :info
      @logger = ::Logger.new($stdout)
      @logger.level = ::Logger.const_get(@log_level.to_s.upcase)
    end

    # Returns the concurrency setting for a specific queue.
    # Falls back to the global concurrency setting if no queue-specific
    # setting is configured.
    #
    # @param queue_name [String, Symbol] Name of the queue
    # @return [Integer] Concurrency level for the specified queue
    #
    # @example Get queue concurrency
    #   config.concurrency_for_queue(:high)    # => 10
    #   config.concurrency_for_queue(:unknown) # => 2 (global default)
    def concurrency_for_queue(queue_name)
      @queue_concurrency[queue_name.to_sym] || @concurrency
    end
  end
end
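The per-queue fallback in `concurrency_for_queue` is simple enough to demonstrate standalone; this sketch replicates just that lookup logic outside the gem, with illustrative settings:

```ruby
# Standalone replica of Config#concurrency_for_queue's fallback behaviour.
queue_concurrency = { default: 5, high: 10 } # per-queue settings (assumed values)
global_concurrency = 2                       # global fallback

lookup = lambda do |queue_name|
  queue_concurrency[queue_name.to_sym] || global_concurrency
end

puts lookup.call(:high)     # => 10
puts lookup.call('default') # => 5  (string names are normalized via to_sym)
puts lookup.call(:unknown)  # => 2  (falls back to the global default)
```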
data/lib/fiber_job/cron.rb
ADDED
@@ -0,0 +1,61 @@
# frozen_string_literal: true

module FiberJob
  class Cron
    def self.redis
      @redis ||= Redis.new(url: FiberJob.config.redis_url)
    end

    def self.register(cron_job_class)
      job_name = cron_job_class.name
      cron_expression = cron_job_class.cron_expression

      redis.hset('cron:jobs', job_name, JSON.dump({ 'class' => job_name,
                                                    'cron' => cron_expression,
                                                    'queue' => cron_job_class.new.queue,
                                                    'registered_at' => Time.now.to_f }))

      unless redis.exists?("cron:next_run:#{job_name}")
        next_time = cron_job_class.next_run_time
        schedule_job(cron_job_class, next_time)
      end

      FiberJob.logger.info "Registered cron job: #{job_name} (#{cron_expression})"
    end

    def self.schedule_job(cron_job_class, run_time)
      job_name = cron_job_class.name

      # Set next run time
      redis.set("cron:next_run:#{job_name}", run_time.to_f)

      # Add to sorted set for efficient scanning
      redis.zadd('cron:schedule', run_time.to_f, job_name)

      FiberJob.logger.debug "Scheduled #{job_name} for #{run_time}"
    end

    def self.due_jobs(current_time = Time.now)
      job_names = redis.zrangebyscore('cron:schedule', 0, current_time.to_f)

      job_names.map do |job_name|
        job_data = JSON.parse(redis.hget('cron:jobs', job_name))
        next unless job_data

        redis.zrem('cron:schedule', job_name)

        job_data
      end.compact
    end

    def self.registered_jobs
      jobs = redis.hgetall('cron:jobs')
      jobs.transform_values { |data| JSON.parse(data) }
    end

    def self.clear_all
      redis.del('cron:jobs', 'cron:schedule')
      redis.keys('cron:next_run:*').each { |key| redis.del(key) }
    end
  end
end
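`due_jobs` relies on a Redis sorted set keyed by run timestamp, fetched with `ZRANGEBYSCORE cron:schedule 0 now` and drained with `ZREM`. A small in-memory sketch of the same selection, with a hash standing in for the sorted set and made-up job names and timestamps:

```ruby
# In-memory stand-in for the 'cron:schedule' sorted set: name => run timestamp.
schedule = {
  'DailyReportJob'   => 1_000.0, # already due
  'HourlyCleanupJob' => 2_000.0, # already due
  'WeeklyDigestJob'  => 9_000.0  # still in the future
}
now = 5_000.0

# Equivalent of ZRANGEBYSCORE 0..now, then ZREM for each returned member.
due = schedule.select { |_name, at| at <= now }.keys
due.each { |name| schedule.delete(name) }

p due           # => ["DailyReportJob", "HourlyCleanupJob"]
p schedule.keys # => ["WeeklyDigestJob"]
```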
data/lib/fiber_job/cron_job.rb
ADDED
@@ -0,0 +1,41 @@
# frozen_string_literal: true

require_relative 'cron_parser'

module FiberJob
  class CronJob < Job
    def self.cron(cron_expression)
      @cron_expression = cron_expression
    end

    def self.cron_expression
      @cron_expression || raise("No cron expression defined for #{name}")
    end

    def self.next_run_time(from_time = Time.now)
      CronParser.next_run(@cron_expression, from_time)
    end

    def self.register
      Cron.register(self)
    end

    # Override perform to add automatic rescheduling
    def perform(*args)
      execute_cron_job(*args)
      schedule_next_run
    end

    # Subclasses implement this instead of perform
    def execute_cron_job(*args)
      raise NotImplementedError, 'Subclasses must implement execute_cron_job'
    end

    private

    def schedule_next_run
      next_time = self.class.next_run_time
      Cron.schedule_job(self.class, next_time)
    end
  end
end