sidejob 4.0.2 → 4.1.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: 8c508ddf20a2498ce9d4b37f1bb5a9ff46a0cf80
-   data.tar.gz: 8e5b8fbdb89070d8e01857734669883b8cbc6be5
+   metadata.gz: e731ae1460c300e29d0698535fd1b7108efd3568
+   data.tar.gz: be777ac02165ead0b44de9100cefb8dfbb3c16ac
  SHA512:
-   metadata.gz: 22a8451a4c19f630a9a26d72b771b8cf40879e7ce55daaed0f702137ccfe5d233e5eebe3edbfe80d15a514e63917c8ba5378d002e456ae578bef49e7bbd712dd
-   data.tar.gz: a61a49db836b89ddb5298b20b376f70fda787b2cb81a4e3ef052c309c996287f7461118ecdd1e9953ceb8593671c856c56559baee8bdfc77ac7d2760e75dabd6
+   metadata.gz: 82265dc59bf18956bff558a9e9c996d74ae4a7a2f70f396d6a6188a73c884743756c57fac40bbba958de9bc9fe95e5aca435a3ae4489d23b56700c8f3b330448
+   data.tar.gz: baac41c609404f39cc9083f13dd489856faaa34f76db2fc5774e1190cf0c9ddbbdab036d893a77a22749a92f5670a62cc04e033e9367eb3ab3f6b1ecf3368cac
data/README.md CHANGED
@@ -18,6 +18,7 @@ Jobs
  * This ID is used as Sidekiq's jid
  * Note: a job can be queued multiple times on Sidekiq's queues
  * Therefore, Sidekiq's jids are not unique
+ * Jobs can also have any number of globally unique string names as aliases
  * Jobs have a queue and class name
  * Jobs have any number of input and output ports
  * A job can have any number of named child jobs
@@ -48,6 +49,29 @@ Ports
  a port named `*` exists in which case new ports are dynamically created and inherit its options.
  * Currently, the only port option is a default value which is returned when a read is done on the port when it's empty.
 
+ Channels
+ --------
+
+ Channels provide a global, reliable pubsub system. Every port can be associated with any number of channels.
+ Writes to output ports publish the data to the associated channels. Any message published to a channel
+ is written to all input ports that have subscribed to that channel.
+
+ The pubsub system is reliable in that subscribed jobs do not need to be running to receive messages.
+ Other clients can also subscribe to channels via standard non-reliable Redis pubsub.
+
+ Channel names use slashes to indicate hierarchy. Messages published to a channel are also published to channels
+ up the hierarchy. For example, a message sent to the channel `/namespace/event` will be sent to the channels
+ `/namespace/event`, `/namespace` and `/`.
+
+ SideJob uses channels starting with `/sidejob`. The channels used by SideJob:
+
+ * `/sidejob/log` : Log messages. In the context of a running job, the job id is included in the log entry.
+   * { timestamp: (date), read: [{ job: (id), (in|out)port: (port), data: [...] }, ...], write: [{ job: (id), (in|out)port: (port), data: [...] }, ...] }
+   * { timestamp: (date), error: (message), backtrace: (exception backtrace) }
+   * { timestamp: (date), message: (message) }
+ * `/sidejob/workers/[queue]` : Worker registry updated for the queue
+ * `/sidejob/job/[id]` : Job messages
+
  Workers
  -------
 
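The channel hierarchy described in the README section above can be sketched with Ruby's `Pathname#ascend`, which is also what the new `SideJob.publish` implementation uses to walk up the hierarchy. The `channels_for` helper below is hypothetical, not part of the gem:

```ruby
require 'pathname'

# Hypothetical helper (not gem API): list every channel a message
# published to `channel` is delivered to, walking up the hierarchy.
def channels_for(channel)
  # Pathname#ascend yields the path itself, then each parent up to the root
  Pathname.new(channel).ascend.map(&:to_s)
end

channels_for('/namespace/event')  # => ["/namespace/event", "/namespace", "/"]
```

This matches the README example: a publish to `/namespace/event` also reaches subscribers of `/namespace` and `/`.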
@@ -55,6 +79,7 @@ Workers
  * Workers are required to register themselves
  * A Sidekiq process should only handle a single queue so all registered workers in the process are for the same queue
  * It should have a perform method that is called on each run
+ * It may have a startup method that is called once before the first run of the job
  * It may have a shutdown method that is called before the job is terminated
  * Workers should be idempotent as they may be run more than once for the same state
  * SideJob ensures only one worker thread runs for a given job at a time
@@ -94,28 +119,26 @@ Additional keys used by SideJob:
  * workers:(queue) - Hash mapping class name to worker configuration. A worker should define
    the inports and outports hashes that map port names to port options.
- * jobs:last_id - Stores the last job ID (we use incrementing integers from 1)
- * jobs:logs - List with JSON encoded logs.
-   * { timestamp: (date), job: (id), read: [{ job: (id), (in|out)port: (port), data: [...] }, ...], write: [{ job: (id), (in|out)port: (port), data: [...] }, ...] }
-   * { timestamp: (date), job: (id), error: (message), backtrace: (exception backtrace) }
- * jobs - Set with all job ids
- * job:(id) - Hash containing job state. Each value is JSON encoded.
-   * status - job status
-   * queue - queue name
-   * class - name of class
-   * args - array of arguments passed to worker's perform method
-   * parent - parent job ID
-   * created_at - timestamp that the job was first queued
-   * created_by - string indicating the entity that created the job. SideJob uses job:(id) for jobs created by another job.
-   * ran_at - timestamp of the start of the last run
-   * Any additional keys used by the worker to track internal job state
+ * jobs - Set with all job ids.
+ * jobs:last_id - Stores the last job ID (we use incrementing integers from 1).
+ * jobs:aliases - Hash mapping a name to job id.
+ * job:(id):worker - JSON encoded hash with queue, class, and args for calling the worker.
+ * job:(id):status - Job status
+ * job:(id):created_at - Timestamp that the job was first queued
+ * job:(id):created_by - The entity that created the job. SideJob uses job:(id) for jobs created by another job.
+ * job:(id):ran_at - Timestamp of the start of the last run
+ * job:(id):aliases - Set with job aliases
  * job:(id):in:(inport) and job:(id):out:(outport) - List with unread port data. New data is pushed on the right.
  * job:(id):inports and job:(id):outports - Set containing all existing port names.
  * job:(id):inports:default and job:(id):outports:default - Hash mapping port name to JSON encoded default value for port.
+ * job:(id):inports:channels and job:(id):outports:channels - Hash mapping port name to JSON encoded connected channels.
+ * job:(id):parent - Parent job ID
  * job:(id):children - Hash mapping child job name to child job ID
+ * job:(id):state - Hash containing job specific internal state. Each value is JSON encoded.
  * job:(id):rate:(timestamp) - Rate limiter used to prevent runaway execution of a job.
    Keys are automatically expired.
  * job:(id):lock - Used to control concurrent writes to a job.
    Auto expired to prevent stale locks.
  * job:(id):lock:worker - Used to indicate a worker is attempting to acquire the job lock.
    Auto expired to prevent stale locks.
+ * channel:(channel) - Set with job ids that may have ports subscribed to the channel.
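The key list above shows 4.1.0 splitting the single `job:(id)` hash of 4.0 into one Redis key per attribute. A minimal sketch of the new naming scheme (the `job_keys` helper is illustrative only, not gem API):

```ruby
# Illustrative helper (not part of the gem): build the 4.1.0 per-attribute
# Redis key names for a job id, matching the layout listed above.
def job_keys(id)
  base = "job:#{id}"
  %w{worker status state aliases parent children created_at created_by ran_at}
    .each_with_object({}) { |key, h| h[key] = "#{base}:#{key}" }
end

job_keys(42)['status']  # => "job:42:status"
```

Splitting the hash lets each attribute be read, written, or deleted independently (e.g. `GETSET` on status, `SMEMBERS` on aliases) instead of going through `HGET`/`HSET` on one hash.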
data/lib/sidejob.rb CHANGED
@@ -7,6 +7,7 @@ require 'sidejob/worker'
  require 'sidejob/server_middleware'
  require 'time' # for iso8601 method
  require 'securerandom'
+ require 'pathname'
 
  module SideJob
    # Configuration parameters
@@ -49,11 +50,17 @@ module SideJob
 
      # To prevent race conditions, we generate the id and set all data in redis before queuing the job to sidekiq
      # Otherwise, sidekiq may start the job too quickly
-     id = SideJob.redis.incr('jobs:last_id').to_s
+     id = SideJob.redis.incr('jobs:last_id')
      SideJob.redis.sadd 'jobs', id
      job = SideJob::Job.new(id)
 
-     job.set({queue: queue, class: klass, args: args, status: 'completed', created_by: by, created_at: SideJob.timestamp})
+     redis_key = job.redis_key
+     SideJob.redis.multi do |multi|
+       multi.set "#{redis_key}:worker", {queue: queue, class: klass, args: args}.to_json
+       multi.set "#{redis_key}:status", 'completed'
+       multi.set "#{redis_key}:created_at", SideJob.timestamp
+       multi.set "#{redis_key}:created_by", by
+     end
 
      if parent
        raise 'Missing name option for job with a parent' unless name
@@ -67,12 +74,11 @@ module SideJob
      job.run(at: at)
    end
 
-   # Finds a job by id
-   # @param job_id [Integer, nil] Job Id
+   # Finds a job by name or id.
+   # @param name_or_id [String, Integer] Job name or id
    # @return [SideJob::Job, nil] Job object or nil if it doesn't exist
-   def self.find(job_id)
-     return nil unless job_id
-     job = SideJob::Job.new(job_id) rescue nil
+   def self.find(name_or_id)
+     SideJob::Job.new(name_or_id) rescue nil
    end
 
    # Returns the current timestamp as an iso8601 string
@@ -81,31 +87,81 @@ module SideJob
      Time.now.utc.iso8601(9)
    end
 
-   # Adds a log entry to redis with current timestamp.
-   # @param entry [Hash] Log entry
+   # Publishes a log message using the current SideJob context.
+   # @param entry [Hash|Exception|String] Log entry
    def self.log(entry)
-     context = (Thread.current[:sidejob_log_context] || {}).merge(timestamp: SideJob.timestamp)
-     SideJob.redis.rpush 'jobs:logs', context.merge(entry).to_json
-   end
+     context = (Thread.current[:sidejob_context] || {}).merge(timestamp: SideJob.timestamp)
+
+     if entry.is_a?(Exception)
+       exception = entry
+       entry = { error: exception.message }
+       if exception.backtrace
+         # only store the backtrace until the first sidekiq line
+         entry[:backtrace] = exception.backtrace.take_while {|l| l !~ /sidekiq/}.join("\n")
+       end
+     elsif entry.is_a?(String)
+       entry = { message: entry }
+     end
 
-   # Return all job logs and optionally clears them.
-   # @param clear [Boolean] If true, delete logs after returning them (default true)
-   # @return [Array<Hash>] All logs with the oldest first
-   def self.logs(clear: true)
-     SideJob.redis.multi do |multi|
-       multi.lrange 'jobs:logs', 0, -1
-       multi.del 'jobs:logs' if clear
-     end[0].map {|log| JSON.parse(log)}
+     # Disable logging to prevent infinite publish loop for input ports subscribed to /sidejob/log which could generate log entries
+     SideJob::Port.group(log: false) do
+       SideJob.publish '/sidejob/log', context.merge(entry)
+     end
    end
 
-   # Adds the given metadata to all {SideJob.log} calls within the block.
-   # @param metadata [Hash] Metadata to be merged with each log entry
-   def self.log_context(metadata, &block)
-     previous = Thread.current[:sidejob_log_context]
-     Thread.current[:sidejob_log_context] = (previous || {}).merge(metadata.symbolize_keys)
+   # Adds to the current SideJob context within the block.
+   # @param data [Hash] Data to be merged into the current context
+   def self.context(data, &block)
+     previous = Thread.current[:sidejob_context]
+     Thread.current[:sidejob_context] = (previous || {}).merge(data.symbolize_keys)
      yield
    ensure
-     Thread.current[:sidejob_log_context] = previous
+     Thread.current[:sidejob_context] = previous
+   end
+
+   # Publishes a message up the channel hierarchy to jobs by writing to ports subscribed to the channel.
+   # Also publishes to the destination channel only via normal redis pubsub.
+   # @param channel [String] Channel is path-like, separated by / to indicate hierarchy
+   # @param message [Object] JSON encodable message
+   def self.publish(channel, message)
+     # We don't publish at every level up hierarchy via redis pubsub since a client can use redis psubscribe
+     SideJob.redis.publish channel, message.to_json
+
+     job_subs = {}
+
+     # Set the context to the original channel so that a job that subscribes to a higher channel can determine
+     # the original channel that the message was sent to.
+     SideJob.context({channel: channel}) do
+       # walk up the channel hierarchy
+       Pathname.new(channel).ascend do |channel|
+         channel = channel.to_s
+         jobs = SideJob.redis.smembers "channel:#{channel}"
+         jobs.each do |id|
+           job = SideJob.find(id)
+           if ! job_subs.has_key?(id)
+             job_subs[id] = {}
+             if job
+               SideJob.redis.hgetall("#{job.redis_key}:inports:channels").each_pair do |port, channels|
+                 channels = JSON.parse(channels)
+                 channels.each do |ch|
+                   job_subs[id][ch] ||= []
+                   job_subs[id][ch] << port
+                 end
+               end
+             end
+           end
+
+           if job && job_subs[id] && job_subs[id][channel]
+             job_subs[id][channel].each do |port|
+               job.input(port).write message
+             end
+           else
+             # Job is gone or no longer subscribed to this channel
+             SideJob.redis.srem "channel:#{channel}", id
+           end
+         end
+       end
+     end
    end
  end
 
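The new `SideJob.log` accepts a hash, an exception, or a plain string, and normalizes each to a hash before publishing to `/sidejob/log`. That normalization can be sketched standalone, without Redis (the `normalize_log_entry` name is hypothetical, not gem API):

```ruby
# Standalone sketch of the entry normalization in SideJob.log above
# (hypothetical helper name; mirrors the diff's logic without Redis).
def normalize_log_entry(entry)
  case entry
  when Exception
    out = { error: entry.message }
    if entry.backtrace
      # keep only backtrace lines before the first sidekiq frame
      out[:backtrace] = entry.backtrace.take_while { |l| l !~ /sidekiq/ }.join("\n")
    end
    out
  when String
    { message: entry }
  else
    entry  # hashes pass through unchanged
  end
end

normalize_log_entry('job started')  # => { message: "job started" }
```

This is what produces the three entry shapes (`message:`, `error:`/`backtrace:`, and pass-through hashes) listed for `/sidejob/log` in the README.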
data/lib/sidejob/job.rb CHANGED
@@ -33,13 +33,55 @@ module SideJob
    # Retrieve the job's status.
    # @return [String] Job status
    def status
-     get(:status)
+     check_exists
+     SideJob.redis.get "#{redis_key}:status"
    end
 
    # Set the job status.
    # @param status [String] The new job status
    def status=(status)
-     set({status: status})
+     check_exists
+     oldstatus = SideJob.redis.getset("#{redis_key}:status", status)
+     if oldstatus != status && worker_config['status_publish'] != false
+       SideJob::Port.group(log: false) do
+         publish({status: status})
+       end
+     end
+   end
+
+   # Returns all aliases for the job.
+   # @return [Array<String>] Job aliases
+   def aliases
+     SideJob.redis.smembers "#{redis_key}:aliases"
+   end
+
+   # Add an alias for the job.
+   # @param name [String] Alias for the job. Must begin with an alphabetic character.
+   # @raise [RuntimeError] Error if name is invalid or the name already refers to another job
+   def add_alias(name)
+     check_exists
+     raise "#{name} is not a valid alias" unless name =~ /^[[:alpha:]]/
+     current = SideJob.redis.hget('jobs:aliases', name)
+     if current
+       raise "#{name} is already used by job #{current}" if current.to_i != id
+     else
+       SideJob.redis.multi do |multi|
+         multi.hset 'jobs:aliases', name, id
+         multi.sadd "#{redis_key}:aliases", name
+       end
+     end
+   end
+
+   # Remove an alias for the job.
+   # @param name [String] Alias to remove for the job
+   # @raise [RuntimeError] Error if name is not an alias for this job
+   def remove_alias(name)
+     check_exists
+     raise "#{name} is not an alias for job #{id}" unless SideJob.redis.sismember("#{redis_key}:aliases", name)
+     SideJob.redis.multi do |multi|
+       multi.hdel 'jobs:aliases', name
+       multi.srem "#{redis_key}:aliases", name
+     end
    end
 
    # Run the job.
@@ -53,8 +95,6 @@ module SideJob
    # @param wait [Float] Run in the specified number of seconds
    # @return [SideJob::Job, nil] The job that was run or nil if no job was run
    def run(parent: false, force: false, at: nil, wait: nil)
-     check_exists
-
      if parent
        pj = self.parent
        return pj ? pj.run(force: force, at: at, wait: wait) : nil
@@ -110,6 +150,7 @@ module SideJob
    # Queues a child job, setting parent and by to self.
    # @see SideJob.queue
    def queue(queue, klass, **options)
+     check_exists
      SideJob.queue(queue, klass, options.merge({parent: self, by: "job:#{id}"}))
    end
 
@@ -129,9 +170,7 @@ module SideJob
    # Returns the parent job.
    # @return [SideJob::Job, nil] Parent job or nil if none
    def parent
-     parent = get(:parent)
-     parent = SideJob.find(parent) if parent
-     parent
+     SideJob.find(SideJob.redis.get("#{redis_key}:parent"))
    end
 
    # Disown a child job so that it no longer has a parent.
@@ -148,7 +187,7 @@ module SideJob
      end
 
      SideJob.redis.multi do |multi|
-       multi.hdel job.redis_key, 'parent'
+       multi.del "#{job.redis_key}:parent"
        multi.hdel "#{redis_key}:children", name
      end
    end
@@ -157,12 +196,13 @@ module SideJob
    # @param orphan [SideJob::Job] Job that has no parent
    # @param name [String] Name of child job (must be unique among children)
    def adopt(orphan, name)
+     check_exists
      raise "Job #{id} cannot adopt itself as a child" if orphan == self
      raise "Job #{id} cannot adopt job #{orphan.id} as it already has a parent" unless orphan.parent.nil?
      raise "Job #{id} cannot adopt job #{orphan.id} as child name #{name} is not unique" if name.nil? || ! child(name).nil?
 
      SideJob.redis.multi do |multi|
-       multi.hset orphan.redis_key, 'parent', id.to_json
+       multi.set "#{orphan.redis_key}:parent", id.to_json
        multi.hset "#{redis_key}:children", name, orphan.id
      end
    end
@@ -176,14 +216,16 @@ module SideJob
      parent.disown(self) if parent
 
      children = self.children
+     aliases = self.aliases
 
      # delete all SideJob keys and disown all children
      ports = inports.map(&:redis_key) + outports.map(&:redis_key)
      SideJob.redis.multi do |multi|
        multi.srem 'jobs', id
        multi.del redis_key
-       multi.del ports + %w{children inports outports inports:default outports:default}.map {|x| "#{redis_key}:#{x}" }
+       multi.del ports + %w{worker status state aliases parent children inports outports inports:default outports:default inports:channels outports:channels created_at created_by ran_at}.map {|x| "#{redis_key}:#{x}" }
        children.each_value { |child| multi.hdel child.redis_key, 'parent' }
+       aliases.each { |name| multi.hdel('jobs:aliases', name) }
      end
 
      # recursively delete all children
@@ -191,6 +233,7 @@ module SideJob
        child.delete
      end
 
+     publish({deleted: true})
      return true
    end
 
@@ -236,20 +279,39 @@ module SideJob
      set_ports :out, ports
    end
 
-   # Returns the entirety of the job's state with both standard and custom keys.
-   # @return [Hash{String => Object}] Job state
+   # Returns basic information about the job.
+   # @return [Hash] Worker info (queue, class, args), created_by, created_at, and ran_at
+   def info
+     check_exists
+     data = SideJob.redis.multi do |multi|
+       multi.get "#{redis_key}:worker"
+       multi.get "#{redis_key}:created_by"
+       multi.get "#{redis_key}:created_at"
+       multi.get "#{redis_key}:ran_at"
+     end
+
+     worker = JSON.parse(data[0])
+     {
+       queue: worker['queue'], class: worker['class'], args: worker['args'],
+       created_by: data[1], created_at: data[2], ran_at: data[3],
+     }
+   end
+
+   # Returns the entirety of the job's internal state.
+   # @return [Hash{String => Object}] Job internal state
    def state
-     state = SideJob.redis.hgetall(redis_key)
-     raise "Job #{id} does not exist!" if ! state
+     check_exists
+     state = SideJob.redis.hgetall("#{redis_key}:state")
      state.update(state) {|k,v| JSON.parse("[#{v}]")[0]}
      state
    end
 
-   # Returns some data from the job's state.
+   # Returns some data from the job's internal state.
    # @param key [Symbol,String] Retrieve value for the given key
    # @return [Object,nil] Value from the job state or nil if key does not exist
    def get(key)
-     val = SideJob.redis.hget(redis_key, key)
+     check_exists
+     val = SideJob.redis.hget("#{redis_key}:state", key)
      val ? JSON.parse("[#{val}]")[0] : nil
    end
 
@@ -259,7 +321,7 @@ module SideJob
    def set(data)
      check_exists
      return unless data.size > 0
-     SideJob.redis.hmset redis_key, *(data.map {|k,v| [k, v.to_json]}.flatten)
+     SideJob.redis.hmset "#{redis_key}:state", *(data.map {|k,v| [k, v.to_json]}.flatten)
    end
 
    # Unsets some fields in the job's internal state.
@@ -267,7 +329,7 @@ module SideJob
    # @raise [RuntimeError] Error raised if job no longer exists
    def unset(*fields)
      return unless fields.length > 0
-     SideJob.redis.hdel redis_key, fields
+     SideJob.redis.hdel "#{redis_key}:state", fields
    end
 
    # Acquire a lock on the job with a given expiration time.
@@ -276,6 +338,7 @@ module SideJob
    # @param retry_delay [Float] Maximum seconds to wait (actual will be randomized) before retry getting lock
    # @return [String, nil] Lock token that should be passed to {#unlock} or nil if lock was not acquired
    def lock(ttl, retries: 3, retry_delay: 0.2)
+     check_exists
      retries.times do
        token = SecureRandom.uuid
        if SideJob.redis.set("#{redis_key}:lock", token, {nx: true, ex: ttl})
@@ -291,6 +354,7 @@ module SideJob
    # @param ttl [Fixnum] Refresh lock expiration for the given time in seconds
    # @return [Boolean] Whether the timeout was set
    def refresh_lock(ttl)
+     check_exists
      SideJob.redis.expire "#{redis_key}:lock", ttl
    end
 
@@ -298,6 +362,7 @@ module SideJob
    # @param token [String] Token returned by {#lock}
    # @return [Boolean] Whether the job was unlocked
    def unlock(token)
+     check_exists
      return SideJob.redis.eval('
        if redis.call("get",KEYS[1]) == ARGV[1] then
          return redis.call("del",KEYS[1])
@@ -306,6 +371,12 @@ module SideJob
        end', { keys: ["#{redis_key}:lock"], argv: [token] }) == 1
    end
 
+   # Publishes a message to the job's channel.
+   # @param message [Object] JSON encodable message
+   def publish(message)
+     SideJob.publish "/sidejob/job/#{id}", message
+   end
+
    private
 
    # Queue or schedule this job using sidekiq.
@@ -314,21 +385,17 @@ module SideJob
      # Don't need to queue if a worker is already in process of running
      return if SideJob.redis.exists "#{redis_key}:lock:worker"
 
-     queue = get(:queue)
-
+     worker = JSON.parse(SideJob.redis.get("#{redis_key}:worker"))
      # Don't need to queue if the job is already in the queue (this does not include scheduled jobs)
      # When Sidekiq pulls job out from scheduled set, we can still get the same job queued multiple times
      # but the server middleware handles it
-     return if Sidekiq::Queue.new(queue).find_job(@id)
-
-     klass = get(:class)
-     args = get(:args)
+     return if Sidekiq::Queue.new(worker['queue']).find_job(@id)
 
-     if ! SideJob.redis.hexists("workers:#{queue}", klass)
+     if ! SideJob::Worker.config(worker['queue'], worker['class'])
        self.status = 'terminated'
-       raise "Worker no longer registered for #{klass} in queue #{queue}"
+       raise "Worker no longer registered for #{klass} in queue #{worker['queue']}"
      end
-     item = {'jid' => id, 'queue' => queue, 'class' => klass, 'args' => args || [], 'retry' => false}
+     item = {'jid' => id, 'queue' => worker['queue'], 'class' => worker['class'], 'args' => worker['args'] || [], 'retry' => false}
      item['at'] = time if time && time > Time.now.to_f
      Sidekiq::Client.push(item)
    end
@@ -338,17 +405,24 @@ module SideJob
      SideJob.redis.smembers("#{redis_key}:#{type}ports").reject {|name| name == '*'}.map {|name| SideJob::Port.new(self, type, name)}
    end
 
+   # Return the worker configuration
+   # @return [Hash] Worker config for the job
+   def worker_config
+     worker = JSON.parse(SideJob.redis.get("#{redis_key}:worker"))
+     SideJob::Worker.config(worker['queue'], worker['class']) || {}
+   end
+
    # Sets the input/outputs ports for the job and overwrites all current options.
    # The ports are merged with the worker configuration.
    # Any current ports that are not in the new port set are deleted (including any data on those ports).
    # @param type [:in, :out] Input or output ports
    # @param ports [Hash{Symbol,String => Hash}] Port configuration. Port name to options.
    def set_ports(type, ports)
+     check_exists
      current = SideJob.redis.smembers("#{redis_key}:#{type}ports") || []
-     config = SideJob::Worker.config(get(:queue), get(:class))
 
      ports ||= {}
-     ports = (config["#{type}ports"] || {}).merge(ports.dup.stringify_keys)
+     ports = (worker_config["#{type}ports"] || {}).merge(ports.dup.stringify_keys)
      ports.each_key do |port|
        ports[port] = ports[port].stringify_keys
      end
@@ -365,13 +439,27 @@ module SideJob
        # replace port defaults
        defaults = ports.map do |port, options|
          if options.has_key?('default')
-           [port, options['default'].to_json]
+           [port, SideJob::Port.encode_data(options['default'])]
          else
            nil
          end
        end.compact.flatten(1)
        multi.del "#{redis_key}:#{type}ports:default"
        multi.hmset "#{redis_key}:#{type}ports:default", *defaults if defaults.length > 0
+
+       # replace port channels
+       channels = ports.map do |port, options|
+         if options.has_key?('channels')
+           options['channels'].each do |channel|
+             multi.sadd "channel:#{channel}", id
+           end
+           [port, options['channels'].to_json]
+         else
+           nil
+         end
+       end.compact.flatten(1)
+       multi.del "#{redis_key}:#{type}ports:channels"
+       multi.hmset "#{redis_key}:#{type}ports:channels", *channels if channels.length > 0
      end
    end
 
@@ -386,9 +474,9 @@ module SideJob
  class Job
    include JobMethods
 
-   # @param id [Integer] Job id
-   def initialize(id)
-     @id = id.to_i
+   # @param alias_or_id [String, Integer] Job alias or id
+   def initialize(alias_or_id)
+     @id = (SideJob.redis.hget('jobs:aliases', alias_or_id.to_s) || alias_or_id).to_i
      check_exists
    end
  end
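The new constructor resolves an alias through the `jobs:aliases` Redis hash before falling back to treating the argument as an id. The lookup logic can be sketched with a plain Hash standing in for Redis (the `resolve_job_id` helper is illustrative only):

```ruby
# Sketch of the alias-or-id resolution in Job#initialize above, with a plain
# Hash standing in for the Redis 'jobs:aliases' hash (illustrative only).
def resolve_job_id(aliases, alias_or_id)
  # alias hit wins; otherwise the argument itself is taken as the job id
  (aliases[alias_or_id.to_s] || alias_or_id).to_i
end

aliases = { 'myjob' => '42' }
resolve_job_id(aliases, 'myjob')  # => 42
resolve_job_id(aliases, 7)        # => 7
```

Since alias names must begin with an alphabetic character (see `add_alias`) and ids are incrementing integers, the two namespaces cannot collide.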