sidejob 3.0.1 → 4.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: f39dce3d26065392bfa7e0d886b2976dc009a579
-  data.tar.gz: 7ad307e865f66fa57caf72f514d390589aa1f674
+  metadata.gz: b52b912e86a573ece661bf11ca1a5eadcd410a98
+  data.tar.gz: a545d0cafa16991899ee4890713a8fef114e6c7c
 SHA512:
-  metadata.gz: 8bfb90bec7b756867cebd04e4f40d0c3ed635c0235d3d4aeebf9fa343dff3bb3cfd23a3a6f17b26151c7ec34c5b3ade360ffa721db4aa3a052f716dd46454773
-  data.tar.gz: c65db5c4986995174e8f46637d81618229e3375b67d635680221524f93d0947b47192d8cb06ad4c536ca13eaca2e70356cb0c60d6de1a811121f209374c9541a
+  metadata.gz: 7c95368c762967a120677d1c01bfe073a73fd4d6e034d186d0d6d40dc1e18fe9663a7ebb110ed2a3b325e3093d8402b7178ec3a415c992aafb1e41ca03408d3e
+  data.tar.gz: 45a2d1e8a186af6236a000fb5ae66d4f9a4ea1238fbd578663c4827bbfc8c90cad733ca03cd914568815e6ee4294ce50e0e163d04832bb1da358e0bcb0bf15e6
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    sidejob (3.0.1)
+    sidejob (4.0.1)
       sidekiq (~> 3.2.5)

 GEM
@@ -10,18 +10,18 @@ GEM
     celluloid (0.15.2)
       timers (~> 1.1.0)
     coderay (1.1.0)
-    connection_pool (2.0.0)
+    connection_pool (2.2.0)
     diff-lcs (1.2.5)
     docile (1.1.5)
-    json (1.8.1)
+    json (1.8.2)
     method_source (0.8.2)
     multi_json (1.10.1)
     pry (0.9.12.6)
       coderay (~> 1.0)
       method_source (~> 0.8)
       slop (~> 3.4)
-    redis (3.1.0)
-    redis-namespace (1.5.1)
+    redis (3.2.1)
+    redis-namespace (1.5.2)
       redis (~> 3.0, >= 3.0.4)
     rspec (3.1.0)
       rspec-core (~> 3.1.0)
data/README.md CHANGED
@@ -46,13 +46,7 @@ Ports
 * Any object that can be JSON encoded can be written or read from any input or output port.
 * Ports must be explicitly specified for each job either by the worker configuration or when queuing new jobs unless
   a port named `*` exists in which case new ports are dynamically created and inherit its options.
-
-Port options:
-
-* mode
-  * Queue - This is the default operation mode. All data written is read in a first in first out manner.
-  * Memory - No data is stored on the port. The most recent value sets the port default value.
-* default - Default value when a read is done on the port with no data
+* Currently, the only port option is a default value which is returned when a read is done on the port when it's empty.

 Workers
 -------
@@ -103,22 +97,24 @@ Additional keys used by SideJob:
 * jobs:logs - List with JSON encoded logs.
   * { timestamp: (date), job: (id), read: [{ job: (id), (in|out)port: (port), data: [...] }, ...], write: [{ job: (id), (in|out)port: (port), data: [...] }, ...] }
   * { timestamp: (date), job: (id), error: (message), backtrace: (exception backtrace) }
-* jobs - Hash mapping active job IDs to JSON encoded job state.
+* jobs - Set with all job ids
+* job:(id) - Hash containing job state. Each value is JSON encoded.
+  * status - job status
   * queue - queue name
   * class - name of class
   * args - array of arguments passed to worker's perform method
+  * parent - parent job ID
   * created_at - timestamp that the job was first queued
   * created_by - string indicating the entity that created the job. SideJob uses job:(id) for jobs created by another job.
   * ran_at - timestamp of the start of the last run
-  * Any additional keys used by the worker to track internal state
-* job:(id):status - Job status as string.
+  * Any additional keys used by the worker to track internal job state
 * job:(id):in:(inport) and job:(id):out:(outport) - List with unread port data. New data is pushed on the right.
-* job:(id):inports:mode and job:(id):outports:mode - Hash mapping port name to port mode. All existing ports must be here.
+* job:(id):inports and job:(id):outports - Set containing all existing port names.
 * job:(id):inports:default and job:(id):outports:default - Hash mapping port name to JSON encoded default value for port.
-* job:(id):ancestors - List with parent job IDs up to the root job that has no parent.
-  Newer jobs are pushed on the left so the immediate parent is on the left and the root job is on the right.
 * job:(id):children - Hash mapping child job name to child job ID
 * job:(id):rate:(timestamp) - Rate limiter used to prevent run away executing of a job.
   Keys are automatically expired.
-* job:(id):lock - Used to prevent multiple worker threads from running a job.
+* job:(id):lock - Used to control concurrent writes to a job.
+  Auto expired to prevent stale locks.
+* job:(id):lock:worker - Used to indicate a worker is attempting to acquire the job lock.
   Auto expired to prevent stale locks.
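The new `job:(id)` hash stores every field JSON encoded, including bare scalars; the 4.0 `get`/`set` code reads them back with a `JSON.parse("[#{val}]")[0]` wrapper, presumably because older json gems only parse top-level arrays or objects. A minimal sketch of that round-trip convention, using a plain Ruby Hash as a stand-in for Redis (the helper names here are illustrative, not part of the gem):

```ruby
require 'json'

# Stand-in for the job:(id) Redis hash: every field value is stored as JSON,
# so strings, numbers, arrays, and nil all share one code path.
fake_job_hash = {}

def set_field(hash, key, value)
  hash[key.to_s] = value.to_json
end

def get_field(hash, key)
  val = hash[key.to_s]
  # Wrapping in [ ] lets JSON.parse accept bare scalars like "completed"
  val ? JSON.parse("[#{val}]")[0] : nil
end

set_field(fake_job_hash, :status, 'completed')
set_field(fake_job_hash, :args, [1, 2, 3])
get_field(fake_job_hash, :status) # => "completed"
get_field(fake_job_hash, :args)   # => [1, 2, 3]
```

The same encoding is why `adopt` writes the parent id with `id.to_json` rather than the raw integer.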
data/lib/sidejob.rb CHANGED
@@ -5,8 +5,15 @@ require 'sidejob/job'
 require 'sidejob/worker'
 require 'sidejob/server_middleware'
 require 'time' # for iso8601 method
+require 'securerandom'

 module SideJob
+  # Configuration parameters
+  CONFIGURATION = {
+      lock_expiration: 60, # workers should not run longer than this number of seconds
+      max_runs_per_minute: 600, # terminate jobs that run too often
+  }
+
   # Returns redis connection
   # If block is given, yields the redis connection
   # Otherwise, just returns the redis connection
@@ -39,44 +46,32 @@ module SideJob
   def self.queue(queue, klass, args: nil, parent: nil, name: nil, at: nil, by: nil, inports: nil, outports: nil)
     raise "No worker registered for #{klass} in queue #{queue}" unless SideJob::Worker.config(queue, klass)

-    log_options = {}
-    if parent
-      raise 'Missing name option for job with a parent' unless name
-      raise "Parent already has child job with name #{name}" if parent.child(name)
-      ancestry = [parent.id] + SideJob.redis.lrange("#{parent.redis_key}:ancestors", 0, -1)
-      log_options = {job: parent.id}
-    end
-
     # To prevent race conditions, we generate the id and set all data in redis before queuing the job to sidekiq
     # Otherwise, sidekiq may start the job too quickly
     id = SideJob.redis.incr('jobs:last_id').to_s
+    SideJob.redis.sadd 'jobs', id
     job = SideJob::Job.new(id)

-    SideJob.redis.multi do |multi|
-      multi.hset 'jobs', id, {queue: queue, class: klass, args: args, created_by: by, created_at: SideJob.timestamp}.to_json
+    job.set({queue: queue, class: klass, args: args, status: 'completed', created_by: by, created_at: SideJob.timestamp})

-      if parent
-        multi.rpush "#{job.redis_key}:ancestors", ancestry # we need to rpush to get the right order
-        multi.hset "#{parent.redis_key}:children", name, id
-      end
+    if parent
+      raise 'Missing name option for job with a parent' unless name
+      parent.adopt(job, name)
     end

     # initialize ports
-    job.group_port_logs(log_options) do
-      job.inports = inports
-      job.outports = outports
-    end
+    job.inports = inports
+    job.outports = outports

     job.run(at: at)
   end

   # Finds a job by id
-  # @param job_id [String, nil] Job Id
+  # @param job_id [Integer, nil] Job Id
   # @return [SideJob::Job, nil] Job object or nil if it doesn't exist
   def self.find(job_id)
     return nil unless job_id
-    job = SideJob::Job.new(job_id)
-    return job.exists? ? job : nil
+    job = SideJob::Job.new(job_id) rescue nil
   end

   # Returns the current timestamp as a iso8601 string
@@ -88,18 +83,29 @@ module SideJob
   # Adds a log entry to redis with current timestamp.
   # @param entry [Hash] Log entry
   def self.log(entry)
-    SideJob.redis.rpush 'jobs:logs', entry.merge(timestamp: SideJob.timestamp).to_json
+    context = (Thread.current[:sidejob_log_context] || {}).merge(timestamp: SideJob.timestamp)
+    SideJob.redis.rpush 'jobs:logs', context.merge(entry).to_json
  end

   # Return all job logs and optionally clears them.
   # @param clear [Boolean] If true, delete logs after returning them (default true)
-  # @return [Array<Hash>] All logs for the job with the oldest first
+  # @return [Array<Hash>] All logs with the oldest first
   def self.logs(clear: true)
     SideJob.redis.multi do |multi|
       multi.lrange 'jobs:logs', 0, -1
       multi.del 'jobs:logs' if clear
     end[0].map {|log| JSON.parse(log)}
   end
+
+  # Adds the given metadata to all {SideJob.log} calls within the block.
+  # @param metadata [Hash] Metadata to be merged with each log entry
+  def self.log_context(metadata, &block)
+    previous = Thread.current[:sidejob_log_context]
+    Thread.current[:sidejob_log_context] = (previous || {}).merge(metadata.symbolize_keys)
+    yield
+  ensure
+    Thread.current[:sidejob_log_context] = previous
+  end
 end

 # :nocov:
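The new `SideJob.log_context` replaces the removed per-job logger: it stashes metadata in a thread-local and merges it into every log entry written inside the block, with nesting and unwinding handled by `ensure`. A rough, self-contained sketch of the same pattern, appending to an in-memory array instead of the `jobs:logs` Redis list (and skipping the gem's `symbolize_keys`, which comes from ActiveSupport):

```ruby
require 'json'
require 'time'

LOGS = [] # stand-in for the jobs:logs Redis list

def log(entry)
  # Merge the current thread-local context under the explicit entry fields
  context = (Thread.current[:log_context] || {}).merge(timestamp: Time.now.utc.iso8601)
  LOGS << context.merge(entry).to_json
end

def log_context(metadata)
  previous = Thread.current[:log_context]
  Thread.current[:log_context] = (previous || {}).merge(metadata)
  yield
ensure
  # restore on exit so nested contexts unwind correctly
  Thread.current[:log_context] = previous
end

log_context(job: 42) do
  log(event: 'started')          # carries job: 42
  log_context(step: 'resize') do
    log(event: 'progress')       # carries job: 42 and step: 'resize'
  end
end
log(event: 'outside')            # carries no job metadata
```

Because the context lives in `Thread.current`, concurrent Sidekiq worker threads each see only their own metadata.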
data/lib/sidejob/job.rb CHANGED
@@ -2,7 +2,6 @@ module SideJob
   # Methods shared between {SideJob::Job} and {SideJob::Worker}.
   module JobMethods
     attr_reader :id
-    attr_accessor :logger

     # @return [Boolean] True if two jobs or workers have the same id
     def ==(other)
@@ -28,80 +27,41 @@ module SideJob
     # Returns if the job still exists.
     # @return [Boolean] Returns true if this job exists and has not been deleted
     def exists?
-      SideJob.redis.hexists 'jobs', id
-    end
-
-    # If a job logger is defined, call the log method on it with the log entry. Otherwise, call {SideJob.log}.
-    # @param entry [Hash] Log entry
-    def log(entry)
-      entry[:job] = id unless entry[:job]
-      (@logger || SideJob).log(entry)
-    end
-
-    # Groups all port reads and writes within the block into a single logged event.
-    # @param metadata [Hash] If provided, the metadata is merged into the final log entry
-    def group_port_logs(metadata={}, &block)
-      new_group = @logger.nil?
-      @logger ||= GroupPortLogs.new(self)
-      @logger.add_metadata metadata
-      yield
-    ensure
-      if new_group
-        @logger.done
-        @logger = nil
-      end
+      SideJob.redis.sismember 'jobs', id
     end

     # Retrieve the job's status.
     # @return [String] Job status
     def status
-      SideJob.redis.get "#{redis_key}:status"
+      get(:status)
     end

     # Set the job status.
     # @param status [String] The new job status
     def status=(status)
-      SideJob.redis.set "#{redis_key}:status", status
-    end
-
-    # Prepare to terminate the job. Sets status to 'terminating'.
-    # Then queues the job so that its shutdown method if it exists can be run.
-    # After shutdown, the status will be 'terminated'.
-    # If the job is currently running, it will finish running first.
-    # If the job is already terminated, it does nothing.
-    # To start the job after termination, call {#run} with force: true.
-    # @param recursive [Boolean] If true, recursively terminate all children (default false)
-    # @return [SideJob::Job] self
-    def terminate(recursive: false)
-      if status != 'terminated'
-        self.status = 'terminating'
-        sidekiq_queue
-      end
-      if recursive
-        children.each_value do |child|
-          child.terminate(recursive: true)
-        end
-      end
-      self
+      set({status: status})
     end

     # Run the job.
     # This method ensures that the job runs at least once from the beginning.
     # If the job is currently running, it will run again.
     # Just like sidekiq, we make no guarantees that the job will not be run more than once.
-    # Unless force is set, if the status is terminating or terminated, the job will not be run.
+    # Unless force is set, the job will only be run if the status is running, queued, suspended, or completed.
+    # @param parent [Boolean] Whether to run parent job instead of this one
     # @param force [Boolean] Whether to run if job is terminated (default false)
     # @param at [Time, Float] Time to schedule the job, otherwise queue immediately
     # @param wait [Float] Run in the specified number of seconds
-    # @return [SideJob::Job] self
-    def run(force: false, at: nil, wait: nil)
+    # @return [SideJob::Job, nil] The job that was run or nil if no job was run
+    def run(parent: false, force: false, at: nil, wait: nil)
       check_exists

-      case status
-      when 'terminating', 'terminated'
-        return unless force
+      if parent
+        pj = self.parent
+        return pj ? pj.run(force: force, at: at, wait: wait) : nil
       end

+      return nil unless force || %w{running queued suspended completed}.include?(status)
+
       self.status = 'queued'

       time = nil
@@ -116,6 +76,43 @@ module SideJob
       self
     end

+    # Returns if job and all children are terminated.
+    # @return [Boolean] True if this job and all children recursively are terminated
+    def terminated?
+      return false if status != 'terminated'
+      children.each_value do |child|
+        return false unless child.terminated?
+      end
+      return true
+    end
+
+    # Prepare to terminate the job. Sets status to 'terminating'.
+    # Then queues the job so that its shutdown method if it exists can be run.
+    # After shutdown, the status will be 'terminated'.
+    # If the job is currently running, it will finish running first.
+    # If the job is already terminated, it does nothing.
+    # To start the job after termination, call {#run} with force: true.
+    # @param recursive [Boolean] If true, recursively terminate all children (default false)
+    # @return [SideJob::Job] self
+    def terminate(recursive: false)
+      if status != 'terminated'
+        self.status = 'terminating'
+        sidekiq_queue
+      end
+      if recursive
+        children.each_value do |child|
+          child.terminate(recursive: true)
+        end
+      end
+      self
+    end
+
+    # Queues a child job, setting parent and by to self.
+    # @see SideJob.queue
+    def queue(queue, klass, **options)
+      SideJob.queue(queue, klass, options.merge({parent: self, by: "job:#{id}"}))
+    end
+
     # Returns a child job by name.
     # @param name [Symbol, String] Child job name to look up
     # @return [SideJob::Job, nil] Child job or nil if not found
@@ -129,28 +126,45 @@ module SideJob
       SideJob.redis.hgetall("#{redis_key}:children").each_with_object({}) {|child, hash| hash[child[0]] = SideJob.find(child[1])}
     end

-    # Returns all ancestor jobs.
-    # @return [Array<SideJob::Job>] Ancestors (parent will be first and root job will be last)
-    def ancestors
-      SideJob.redis.lrange("#{redis_key}:ancestors", 0, -1).map { |id| SideJob.find(id) }
-    end
-
     # Returns the parent job.
     # @return [SideJob::Job, nil] Parent job or nil if none
     def parent
-      parent = SideJob.redis.lindex("#{redis_key}:ancestors", 0)
+      parent = get(:parent)
       parent = SideJob.find(parent) if parent
       parent
     end

-    # Returns if job and all children are terminated.
-    # @return [Boolean] True if this job and all children recursively are terminated
-    def terminated?
-      return false if status != 'terminated'
-      children.each_value do |child|
-        return false unless child.terminated?
+    # Disown a child job so that it no longer has a parent.
+    # @param name_or_job [String, SideJob::Job] Name or child job to disown
+    def disown(name_or_job)
+      if name_or_job.is_a?(SideJob::Job)
+        job = name_or_job
+        name = children.rassoc(job)
+        raise "Job #{id} cannot disown job #{job.id} as it is not a child" unless name
+      else
+        name = name_or_job
+        job = child(name)
+        raise "Job #{id} cannot disown non-existent child #{name}" unless job
+      end
+
+      SideJob.redis.multi do |multi|
+        multi.hdel job.redis_key, 'parent'
+        multi.hdel "#{redis_key}:children", name
+      end
+    end
+
+    # Adopt a parent-less job as a child of this job.
+    # @param orphan [SideJob::Job] Job that has no parent
+    # @param name [String] Name of child job (must be unique among children)
+    def adopt(orphan, name)
+      raise "Job #{id} cannot adopt itself as a child" if orphan == self
+      raise "Job #{id} cannot adopt job #{orphan.id} as it already has a parent" unless orphan.parent.nil?
+      raise "Job #{id} cannot adopt job #{orphan.id} as child name #{name} is not unique" if name.nil? || ! child(name).nil?
+
+      SideJob.redis.multi do |multi|
+        multi.hset orphan.redis_key, 'parent', id.to_json
+        multi.hset "#{redis_key}:children", name, orphan.id
       end
-      return true
     end

     # Deletes the job and all children jobs (recursively) if all are terminated.
@@ -158,42 +172,46 @@ module SideJob
     def delete
       return false unless terminated?

-      # recursively delete all children first
-      children.each_value do |child|
-        child.delete
-      end
+      parent = self.parent
+      parent.disown(self) if parent

-      # delete all SideJob keys
+      children = self.children
+
+      # delete all SideJob keys and disown all children
       ports = inports.map(&:redis_key) + outports.map(&:redis_key)
       SideJob.redis.multi do |multi|
-        multi.hdel 'jobs', id
-        multi.del ports + %w{status children ancestors inports:mode outports:mode inports:default outports:default}.map {|x| "#{redis_key}:#{x}" }
+        multi.srem 'jobs', id
+        multi.del redis_key
+        multi.del ports + %w{children inports outports inports:default outports:default}.map {|x| "#{redis_key}:#{x}" }
+        children.each_value { |child| multi.hdel child.redis_key, 'parent' }
+      end
+
+      # recursively delete all children
+      children.each_value do |child|
+        child.delete
       end
-      reload
+
       return true
     end

     # Returns an input port.
     # @param name [Symbol,String] Name of the port
     # @return [SideJob::Port]
-    # @raise [RuntimeError] Error raised if port does not exist
     def input(name)
-      get_port :in, name
+      SideJob::Port.new(self, :in, name)
     end

     # Returns an output port
     # @param name [Symbol,String] Name of the port
     # @return [SideJob::Port]
-    # @raise [RuntimeError] Error raised if port does not exist
     def output(name)
-      get_port :out, name
+      SideJob::Port.new(self, :out, name)
     end

     # Gets all input ports.
     # @return [Array<SideJob::Port>] Input ports
     def inports
-      load_ports if ! @ports
-      @ports[:in].values
+      all_ports :in
     end

     # Sets the input ports for the job.
@@ -207,8 +225,7 @@ module SideJob
     # Gets all output ports.
     # @return [Array<SideJob::Port>] Output ports
     def outports
-      load_ports if ! @ports
-      @ports[:out].values
+      all_ports :out
     end

     # Sets the input ports for the job.
@@ -219,27 +236,74 @@ module SideJob
       set_ports :out, ports
     end

+    # Returns the entirety of the job's state with both standard and custom keys.
+    # @return [Hash{String => Object}] Job state
+    def state
+      state = SideJob.redis.hgetall(redis_key)
+      raise "Job #{id} does not exist!" if ! state
+      state.update(state) {|k,v| JSON.parse("[#{v}]")[0]}
+      state
+    end
+
     # Returns some data from the job's state.
-    # The job state is cached for the lifetime of the job object. Call {#reload} if the state may have changed.
     # @param key [Symbol,String] Retrieve value for the given key
     # @return [Object,nil] Value from the job state or nil if key does not exist
-    # @raise [RuntimeError] Error raised if job no longer exists
     def get(key)
-      load_state
-      @state[key.to_s]
+      val = SideJob.redis.hget(redis_key, key)
+      val ? JSON.parse("[#{val}]")[0] : nil
     end

-    # Clears the state and ports cache.
-    def reload
-      @state = nil
-      @ports = nil
-      @config = nil
+    # Sets values in the job's internal state.
+    # @param data [Hash{String,Symbol => Object}] Data to update: objects should be JSON encodable
+    # @raise [RuntimeError] Error raised if job no longer exists
+    def set(data)
+      check_exists
+      return unless data.size > 0
+      SideJob.redis.hmset redis_key, *(data.map {|k,v| [k, v.to_json]}.flatten)
     end

-    # Returns the worker configuration for the job.
-    # @see SideJob::Worker.config
-    def config
-      @config ||= SideJob::Worker.config(get(:queue), get(:class))
+    # Unsets some fields in the job's internal state.
+    # @param fields [Array<String,Symbol>] Fields to unset
+    # @raise [RuntimeError] Error raised if job no longer exists
+    def unset(*fields)
+      return unless fields.length > 0
+      SideJob.redis.hdel redis_key, fields
+    end
+
+    # Acquire a lock on the job with a given expiration time.
+    # @param ttl [Fixnum] Lock expiration in seconds
+    # @param retries [Fixnum] Number of attempts to retry getting lock
+    # @param retry_delay [Float] Maximum seconds to wait (actual will be randomized) before retry getting lock
+    # @return [String, nil] Lock token that should be passed to {#unlock} or nil if lock was not acquired
+    def lock(ttl, retries: 3, retry_delay: 0.2)
+      retries.times do
+        token = SecureRandom.uuid
+        if SideJob.redis.set("#{redis_key}:lock", token, {nx: true, ex: ttl})
+          return token # lock acquired
+        else
+          sleep Random.rand(retry_delay)
+        end
+      end
+      return nil # lock not acquired
+    end
+
+    # Refresh the lock expiration.
+    # @param ttl [Fixnum] Refresh lock expiration for the given time in seconds
+    # @return [Boolean] Whether the timeout was set
+    def refresh_lock(ttl)
+      SideJob.redis.expire "#{redis_key}:lock", ttl
+    end
+
+    # Unlock job by deleting the lock only if it equals the lock token.
+    # @param token [String] Token returned by {#lock}
+    # @return [Boolean] Whether the job was unlocked
+    def unlock(token)
+      return SideJob.redis.eval('
+        if redis.call("get",KEYS[1]) == ARGV[1] then
+          return redis.call("del",KEYS[1])
+        else
+          return 0
+        end', { keys: ["#{redis_key}:lock"], argv: [token] }) == 1
     end

     private
@@ -247,7 +311,16 @@ module SideJob
     # Queue or schedule this job using sidekiq.
     # @param time [Time, Float, nil] Time to schedule the job if specified
     def sidekiq_queue(time=nil)
+      # Don't need to queue if a worker is already in process of running
+      return if SideJob.redis.exists "#{redis_key}:lock:worker"
+
       queue = get(:queue)
+
+      # Don't need to queue if the job is already in the queue (this does not include scheduled jobs)
+      # When Sidekiq pulls job out from scheduled set, we can still get the same job queued multiple times
+      # but the server middleware handles it
+      return if Sidekiq::Queue.new(queue).find_job(@id)
+
       klass = get(:class)
       args = get(:args)
@@ -260,39 +333,9 @@ module SideJob
       Sidekiq::Client.push(item)
     end

-    # Caches all inports and outports.
-    def load_ports
-      @ports = {}
-      %i{in out}.each do |type|
-        @ports[type] = {}
-        SideJob.redis.hkeys("#{redis_key}:#{type}ports:mode").each do |name|
-          if name == '*'
-            @ports["#{type}*"] = SideJob::Port.new(self, type, name)
-          else
-            @ports[type][name] = SideJob::Port.new(self, type, name)
-          end
-        end
-      end
-    end
-
-    # Returns an input or output port.
-    # @param type [:in, :out] Input or output port
-    # @param name [Symbol,String] Name of the port
-    # @return [SideJob::Port]
-    def get_port(type, name)
-      load_ports if ! @ports
-      name = name.to_s
-      return @ports[type][name] if @ports[type][name]
-
-      if @ports["#{type}*"]
-        # create port with default port options for dynamic ports
-        port = SideJob::Port.new(self, type, name)
-        port.options = @ports["#{type}*"].options
-        @ports[type][name] = port
-        return port
-      else
-        raise "Unknown #{type}put port: #{name}"
-      end
+    # Return all ports of the given type
+    def all_ports(type)
+      SideJob.redis.smembers("#{redis_key}:#{type}ports").reject {|name| name == '*'}.map {|name| SideJob::Port.new(self, type, name)}
     end

     # Sets the input/outputs ports for the job and overwrites all current options.
@@ -301,30 +344,25 @@ module SideJob
     # @param type [:in, :out] Input or output ports
     # @param ports [Hash{Symbol,String => Hash}] Port configuration. Port name to options.
     def set_ports(type, ports)
-      current = SideJob.redis.hkeys("#{redis_key}:#{type}ports:mode") || []
+      current = SideJob.redis.smembers("#{redis_key}:#{type}ports") || []
+      config = SideJob::Worker.config(get(:queue), get(:class))

-      replace_port_data = []
-      ports = (ports || {}).stringify_keys
-      ports = (config["#{type}ports"] || {}).merge(ports)
+      ports ||= {}
+      ports = (config["#{type}ports"] || {}).merge(ports.dup.stringify_keys)
       ports.each_key do |port|
         ports[port] = ports[port].stringify_keys
-        replace_port_data << port if ports[port]['data']
       end

       SideJob.redis.multi do |multi|
         # remove data from old ports
-        ((current - ports.keys) | replace_port_data).each do |port|
+        (current - ports.keys).each do |port|
           multi.del "#{redis_key}:#{type}:#{port}"
         end

-        # completely replace the mode and default keys
-
-        multi.del "#{redis_key}:#{type}ports:mode"
-        modes = ports.map do |port, options|
-          [port, options['mode'] || 'queue']
-        end.flatten(1)
-        multi.hmset "#{redis_key}:#{type}ports:mode", *modes if modes.length > 0
+        multi.del "#{redis_key}:#{type}ports"
+        multi.sadd "#{redis_key}:#{type}ports", ports.keys if ports.length > 0

+        # replace port defaults
         defaults = ports.map do |port, options|
           if options.has_key?('default')
             [port, options['default'].to_json]
@@ -335,33 +373,11 @@ module SideJob
         multi.del "#{redis_key}:#{type}ports:default"
         multi.hmset "#{redis_key}:#{type}ports:default", *defaults if defaults.length > 0
       end
-
-      @ports = nil
-
-      group_port_logs do
-        ports.each_pair do |port, options|
-          if options['data']
-            port = get_port(type, port)
-            options['data'].each do |x|
-              port.write x
-            end
-          end
-        end
-      end
     end

     # @raise [RuntimeError] Error raised if job no longer exists
     def check_exists
-      raise "Job #{id} no longer exists!" unless exists?
-    end
-
-    def load_state
-      if ! @state
-        state = SideJob.redis.hget('jobs', id)
-        raise "Job #{id} no longer exists!" if ! state
-        @state = JSON.parse(state)
-      end
-      @state
+      raise "Job #{id} does not exist!" unless exists?
     end
   end

@@ -370,56 +386,10 @@ module SideJob
   class Job
     include JobMethods

-    # @param id [String] Job id
+    # @param id [Integer] Job id
     def initialize(id)
-      @id = id
-    end
-  end
-
-  # Logger that groups all port read/writes together.
-  # @see {JobMethods#group_port_logs}
-  class GroupPortLogs
-    def initialize(job)
-      @metadata = {job: job.id}
-    end
-
-    # If entry is not a port log, send it on to {SideJob.log}. Otherwise, collect the log until {#done} is called.
-    # @param entry [Hash] Log entry
-    def log(entry)
-      if entry[:read] && entry[:write]
-        # collect reads and writes by port and group data together
-        @port_events ||= {read: {}, write: {}} # {job: id, <in|out>port: port} -> data array
-        %i{read write}.each do |type|
-          entry[type].each do |event|
-            data = event.delete(:data)
-            @port_events[type][event] ||= []
-            @port_events[type][event].concat data
-          end
-        end
-      else
-        SideJob.log(entry)
-      end
-    end
-
-    # Merges the collected port read and writes and send logs to {SideJob.log}.
-    def done
-      return unless @port_events && (@port_events[:read].length > 0 || @port_events[:write].length > 0)
-
-      entry = {}
-      %i{read write}.each do |type|
-        entry[type] = @port_events[type].map do |port, data|
-          port.merge({data: data})
-        end
-      end
-
-      SideJob.log @metadata.merge(entry)
-      @port_events = nil
-    end
-
-    # Add metadata fields to the final log entry.
-    # @param metadata [Hash] Data to be merged with the existing metadata and final log entry
-    def add_metadata(metadata)
-      @metadata.merge!(metadata.symbolize_keys)
+      @id = id.to_i
+      check_exists
     end
   end
 end
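The 4.0 locking scheme is the standard single-instance Redis lock: acquire with `SET key token NX EX ttl` using a random token, and release with a Lua script that deletes the key only if the token still matches, so one worker can never free a lock that expired and was re-acquired by another. A simplified in-memory sketch of the token check, with a plain Hash standing in for Redis (expiration omitted for brevity; names here are illustrative):

```ruby
require 'securerandom'

store = {} # stand-in for Redis keys

# SET key token NX: only succeeds if the key is absent (lock is free)
def lock(store, key)
  token = SecureRandom.uuid
  return nil if store.key?(key) # someone else holds the lock
  store[key] = token
  token
end

# Delete only if the stored token matches, mirroring the Lua get/del
# script; a stale holder with an old token cannot release the lock.
def unlock(store, key, token)
  if store[key] == token
    store.delete(key)
    true
  else
    false
  end
end

token = lock(store, 'job:1:lock')     # acquired, returns a token
lock(store, 'job:1:lock')             # => nil, already held
unlock(store, 'job:1:lock', 'bogus')  # => false, wrong token
unlock(store, 'job:1:lock', token)    # => true
```

In the real gem the two steps happen inside Redis itself (`SET` with `nx: true, ex: ttl`, and `EVAL` for the conditional delete), which is what makes the check-and-delete atomic across processes.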