threaded 0.0.1 → 0.0.4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: db08fff84d920a28793a4f9d73340879fdcbb03c
- data.tar.gz: 884731a111fbf0c32bcbc3b5ee4e5f105e290566
+ metadata.gz: eb7c53b0ad0fb2545af7eb99e905600f84571e5b
+ data.tar.gz: d2f05cd3e73110409d4b72fd963f843930c31c58
  SHA512:
- metadata.gz: bdb4e659e9eab461a2eaf0939b6951f269f42d41d812da326cb4de43336d6633afeb8b6287a82bcf989a31bc4d98c29fea35073bb2ff8da36ba486be9c68435d
- data.tar.gz: 86e6fc98fbd9245ae59642ffd1832ffdba70abdcbeeb128cf25c7a6b153ec02a296bdf143cf8fbe55cc2e376ee3e7c0f62cd3fe794b4b1d6f5009b48c0543132
+ metadata.gz: 2ffb7a3eb19a8a8b21ec6ff6a7bb21e2fb192aebce6c9b23b525cfde92b09749237585efb46651d19fe33063995c073c4a33d3e9033c9a074b818ad9cce3d866
+ data.tar.gz: f253a4bacd86226d36dd188c17c4b418191bb27c18a16b55cbeb94fa418a8f9aacfae66aa739902e1db1180876310a1e0ab770175d49bb4391ac2b1761138b6e
data/.gitignore CHANGED
@@ -1,2 +1,3 @@
  .DS_Store
  *.gem
+ Gemfile.lock
data/CHANGELOG.md ADDED
@@ -0,0 +1,7 @@
+ ## 0.0.3
+
+ - Remove Gemfile.lock
+
+ ## 0.0.2
+
+ - Allow syncing stdout in promises and turn functionality on by default.
data/README.md CHANGED
@@ -4,9 +4,9 @@

  Simpler than actors, easier than threads. Get threaded!

- ## Why
+ ## What

- Projects like [Resque](https://github.com/resque/resque), [delayed job](https://github.com/collectiveidea/delayed_job), [queue classic](https://github.com/ryandotsmith/queue_classic), and [sidekiq](https://github.com/mperham/sidekiq) are great. They use data stores like postgres and redis to store information to be processed later. If you're prototyping a system or don't have access to a data store, you might still want to push off some work to a background process. If that's the case an in-memory threaded queue might be a good fit.
+ Why wait? If you're doing IO in MRI, or really anything in JRuby, you can speed up your programs dramatically by using threads. Threads, however, are a low-level primitive in Ruby that can be difficult to use. The `Threaded` library wraps a few common thread patterns in an easy-to-use interface. This lets you focus on writing your code while `Threaded` worries about running that code as fast as possible.

  ## Install

@@ -18,7 +18,130 @@ gem 'threaded'

  Then run `$ bundle install`

- ## Use it
+
+ # Simple Promises
+
+ Throw tasks you want worked on in the background into a `Threaded.later` block:
+
+ ```ruby
+ promise = Threaded.later do
+ require "YAML"
+ YAML.load `curl https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/ruby_versions.yml 2>/dev/null`
+ end
+ ```
+
+ Then, when you need the value, use the `value` method:
+
+ ```ruby
+ promise.value # => ["ruby-2.0.0", "ruby-1.9.3", # ...
+ ```
+
+ It's secretly doing all of that work in the background, letting your main Ruby thread focus on the work you care about most.
+
+ ## Keep your Promises
+
+ Promises block when executed inside of one another; this means you can put promises in your promises and they'll always be executed in the correct order.
+
+ ```ruby
+ curl = Threaded.later do
+ `curl https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/ruby_versions.yml 2>/dev/null`
+ end
+
+ yaml = Threaded.later do
+ require "YAML"
+ YAML.load curl.value
+ end
+ ```
+
+ This code guarantees that the block in `curl` gets executed before the `YAML.load`. Of course, the outcome is the same:
+
+ ```ruby
+ yaml.value # => ["ruby-2.0.0", "ruby-1.9.3", # ...
+ ```
+
+ While this is a contrived example, you can use this type of promise chaining to parallelize complex tasks.
+
+ By the way, if you call `Threaded.later` and never call `value` on the returned object, it may run, but it is not guaranteed to. So if you `value` your "promises", you'll always keep them.
+
+ ## Promise STDOUT behavior
+
+ A Threaded `later` block is supposed to look and feel like regular code. The biggest difference is that as soon as you define a `Threaded.later {}` block it begins to run in the background. Nothing throws off the illusion of "normal" code more than sporadic, random lines in your STDOUT. So by default Threaded captures all promise stdout and only outputs it when `value` is called.
+
+ ```ruby
+ task = Threaded.later do
+ puts "HEY YOU GUYS!!!!"
+ 10 * 10
+ end
+ ```
+
+ At this point the task has already run, but `"HEY YOU GUYS!!!!"` is nowhere to be seen in STDOUT. It will show up as soon as you request the value:
+
+
+ ```ruby
+ puts task.value
+ HEY YOU GUYS!!!!
+ # => 100
+ ```
+
+ If you don't like this voodoo and want to see really jumbled-up STDIO, that's okay. You can set `Threaded.sync_promise_io = false`:
+
+ ```ruby
+ task = Threaded.later do
+ puts "HEY YOU GUYS!!!!"
+ 10 * 10
+ end
+ # => "HEY YOU GUYS!!!"
+ task.value
+ # => 100
+ ```
+
+ You can also set this value in a `config` block.
+
+ ## Noisy Shell Sessions
+
+ Output from shelled-out commands will still show up as soon as they execute.
+
+ ```
+ Threaded.later do
+ `git clone git@github.com:schneems/threaded.git`
+ end
+ => #<Threaded::Promise:0x007fd51c0a9b98 @mutex=#<Mutex:0x007fd51c0a9b20>, @has_run=false, @running=true, @result=nil, @error=nil, @job=#<Proc:0x007fd51c0a9bc0@/Users/schneems/Documents/projects/threaded/lib/threaded.rb:71>>
+ remote: Counting objects: 221, done.
+ remote: Compressing objects: 100% (130/130), done.
+ remote: Total 221 (delta 115), reused 188 (delta 83)
+ Receiving objects: 100% (221/221), 29.62 KiB | 0 bytes/s, done.
+ Resolving deltas: 100% (115/115), done.
+ ```
+
+ You can get around this by redirecting the IO:
+
+ ```
+ out = `git clone git@github.com:schneems/threaded.git 2>/dev/null`
+ puts out
+ ```
+ In bash `1` is stdout and `2` is stderr. You can redirect one stream to another; for example, to merge stderr into stdout you can run a command with `2>&1` at the end. To completely discard output you can send it to `/dev/null` on Unix systems.
+
+ Note: git does some weird things when you try to redirect stderr to stdout with `2>&1` (http://stackoverflow.com/a/18006967/147390).
+
+ You can also use a library like `Open3` to direct the sub-shell's stdin, stdout, and stderr:
+
+ ```
+ require 'open3'
+ Open3.popen3("git clone git@github.com:schneems/threaded.git") do |stdin, stdout, stderr, wait_thr|
+ puts stdout.read.chomp
+ puts stderr.read.chomp
+ end
+ ```
+
+ Or by running the command with a `--quiet` option if it has one:
+
+ ```
+ `git clone git@github.com:schneems/threaded.git --quiet`
+ ```
+
+ ## Background Queue
+
+ The engine that powers `Threaded` promises is also a publicly available background queue! You may be familiar with `Resque` or `sidekiq`, which allow you to enqueue jobs to be run later; threaded has something like that. The main difference is that threaded does not persist jobs to a permanent store (like Redis or PostgreSQL). Here's how you use it.

  Define your task to be processed:

@@ -31,7 +154,7 @@ class Archive
  end
  ```

- It can be any object that responds to `call` but we recommend a class or module which makes switching to a durable queue later easier.
+ It can be any object that responds to `call`, but we recommend a class or module, which makes switching to a durable queue system (like Resque) easier.

  Then to enqueue a task to be run in the background use `Threaded.enqueue`:

@@ -43,7 +166,6 @@ Threaded.enqueue(Archive, repo.id, 'staging')
  The first argument is a class that defines the task to be processed and the rest of the arguments are passed to the task when it is run.


-
  # Configure

  The default number of worker threads is 16, you can configure that when you start your queue:
@@ -54,14 +176,6 @@ Threaded.config do |config|
  end
  ```

- By default jobs have a timeout value of 60 seconds. Since this is an in-memory queue (goes away when your process terminates) it is in your best interests to keep jobs small and quick, and not overload the queue. You can configure a different timeout on start:
-
- ```ruby
- Threaded.config do |config|
- config.timeout = 90 # timeout is in seconds
- end
- ```
-
  Want a different logger? Specify a different Logger:

  ```ruby
@@ -70,10 +184,10 @@ Threaded.config do |config|
  end
  ```

- As soon as you call `enqueue` a new thread will be started, if you wish to explicitly start all threads you can call `Threaded.start`. You can also inline your config if you want when you start the queue:
+ As soon as you call `enqueue`, a new thread will be added to your thread pool if it is needed. If you wish to explicitly start all threads you can call `Threaded.start`. You can also inline your config when you start the queue:

  ```ruby
- Threaded.start(size: 5, timeout: 90, logger: Logger.new(STDOUT))
+ Threaded.start(size: 5, logger: Logger.new(STDOUT))
  ```

  For testing or guaranteed code execution use the `inline` option:
@@ -88,19 +202,11 @@ This option bypasses the queue and executes code as it comes.

  This worker operates in the same process as your app, that means if your app is CPU bound, it will not be very useful. This worker uses threads which means that to be useful your app needs to either use IO (database calls, file writes/reads, shelling out, etc.) or run on JRuby or Rubinius.

- To make sure all items in your queue are processed you can add a condition `at_exit` to your program:
+ All other threading concerns remain true, so be careful when using things like `Dir.chdir` inside of Threaded, as it changes the directory for all threads. Also don't modify shared data (unless you've got a mutex around it and know what you're doing).

- ```ruby
- at_exit do
- Threaded.stop
- end
- ```
-
- This call takes an optional timeout value (in seconds).
+ It is possible to enqueue more things in your queue than can be processed before your program exits (you hit CTRL+C, or get an exception). When your program exits, all promises and enqueued jobs go away, as they are not persisted to disk. If you care about your work getting run, always call `value` on promises. When `value` is called on a promise that has not already started running, it is run immediately. This functionality is what allows promises to be chained.

- ```ruby
- Threaded.stop(42)
- ```
+ To truly preserve data you've `enqueue`d into Threaded's background queue, you need to switch to a durable queue backend like Resque. Alternatively, use a promise and call `value` on the `Threaded.later` promise object.

  ## License

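Pulling the promise and IO-sync pieces of the new README together, here is a minimal sketch of how they combine. The URL and the work inside the block are just placeholders; the calls themselves (`Threaded.later`, `value`, and `Threaded.sync_promise_io=`) are the ones documented above.

```ruby
require 'threaded'

# With the 0.0.4 default (sync_promise_io = true) any output from the block
# is captured and only replayed when `value` is called.
promise = Threaded.later do
  puts "downloading ruby_versions.yml..."   # captured, not printed yet
  `curl https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/ruby_versions.yml 2>/dev/null`
end

body = promise.value   # prints the captured line, then returns the curl output

# Opting out restores interleaved, as-it-happens output from promise blocks.
Threaded.sync_promise_io = false
```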
data/lib/threaded.rb CHANGED
@@ -1,56 +1,81 @@
  require 'thread'
  require 'timeout'
  require 'logger'
+ require 'stringio'

  require 'threaded/version'
- require 'threaded/timeout'

  module Threaded
  STOP_TIMEOUT = 10 # seconds
  extend self
- attr_accessor :inline, :logger, :size, :timeout
- alias :inline? :inline

  @mutex = Mutex.new

+ attr_reader :logger, :size, :inline, :sync_promise_io
+ alias :sync_promise_io? :sync_promise_io
+ alias :inline? :inline
+
+ def inline=(inline)
+ @mutex.synchronize { @inline = inline }
+ end
+
+ def logger=(logger)
+ @mutex.synchronize { @logger = logger }
+ end
+
+ def size=(size)
+ @mutex.synchronize { @size = size }
+ end
+
+ def sync_promise_io=(sync_promise_io)
+ @mutex.synchronize { @sync_promise_io = sync_promise_io }
+ end
+ @sync_promise_io = true
+
  def start(options = {})
- self.master = options
+ raise "Queue is already started, must configure queue before starting" if options.any? && started?
+ options.each do |k, v|
+ self.send(k, v)
+ end
  self.master.start
  return self
  end

- def configure(&block)
- raise "Queue is already started, must configure queue before starting" if started?
+ def master
  @mutex.synchronize do
- yield self
+ return @master if @master
+ @master = Master.new(logger: self.logger,
+ size: self.size)
  end
+ @master
+ end
+ alias :master= :master
+
+
+ def configure(&block)
+ raise "Queue is already started, must configure queue before starting" if started?
+ yield self
  end
  alias :config :configure

  def started?
- return false unless master
- master.alive?
+ !stopped?
  end

  def stopped?
- !started?
- end
-
- def master(options = {})
- @mutex.synchronize do
- return @master if @master
- self.logger = options[:logger] if options[:logger]
- self.size = options[:size] if options[:size]
- self.timeout = options[:timeout] if options[:timeout]
- @master = Master.new(logger: self.logger,
- size: self.size,
- timeout: self.timeout)
- end
+ master.stopping?
  end
- alias :master= :master

  def later(&block)
- Threaded::Promise.new(&block).later
+ job = if sync_promise_io?
+ Proc.new {
+ Thread.current[:stdout] = StringIO.new
+ block.call
+ }
+ else
+ block
+ end
+ Threaded::Promise.new(&job).later
  end

  def enqueue(job, *args)
@@ -74,6 +99,7 @@ Threaded.logger.level = Logger::INFO


  require 'threaded/errors'
+ require 'threaded/ext/stdout'
  require 'threaded/worker'
  require 'threaded/master'
  require 'threaded/promise'
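The rewritten module swaps the old `attr_accessor`s for mutex-guarded writers and makes configuration a stopped-queue-only operation (both `configure` and `start` with options raise once the queue is started). A rough sketch of that lifecycle, following the same stop-configure-start order the updated tests use:

```ruby
require 'logger'
require 'threaded'

Threaded.stop                        # configuration is only allowed while stopped
Threaded.configure do |config|
  config.size   = 8                  # each writer is wrapped in the module mutex
  config.logger = Logger.new(STDOUT)
end
Threaded.start                       # Master#start spins up the full worker pool

# Threaded.start(size: 2)            # would raise: the queue is already started
```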
data/lib/threaded/ext/stdout.rb ADDED
@@ -0,0 +1,12 @@
+ # Redirects STDOUT to `Thread.current[:stdout]` if present
+ $stdout.instance_eval do
+ alias :original_write :write
+ end
+
+ $stdout.define_singleton_method(:write) do |value|
+ if Thread.current[:stdout]
+ Thread.current[:stdout].write value
+ else
+ original_write value
+ end
+ end
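The extension above reroutes `$stdout.write` through a thread-local buffer when one is present. A standalone sketch of the same idea (independent of the gem, and written to accept multiple `write` arguments on current Rubies) shows why one thread's output can be captured without disturbing the others:

```ruby
require 'stringio'

# Keep the real write reachable, then consult a thread-local buffer on every call.
$stdout.instance_eval { alias :original_write :write }
$stdout.define_singleton_method(:write) do |*args|
  if Thread.current[:stdout]
    Thread.current[:stdout].write(*args)   # captured into the per-thread StringIO
  else
    original_write(*args)                  # everyone else prints as usual
  end
end

worker = Thread.new do
  Thread.current[:stdout] = StringIO.new
  puts "hello from the worker"             # lands in the StringIO
  Thread.current[:stdout].string
end

puts "hello from main"                     # printed immediately
captured = worker.value                    # => "hello from the worker\n"
puts captured
```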
data/lib/threaded/master.rb CHANGED
@@ -1,17 +1,14 @@
  module Threaded
  class Master
- include Threaded::Timeout
- attr_reader :workers, :logger
-
- DEFAULT_TIMEOUT = 60 # seconds, 1 minute
+ DEFAULT_TIMEOUT = 10 # seconds
  DEFAULT_SIZE = 16
+ attr_reader :workers, :logger

  def initialize(options = {})
  @queue = Queue.new
  @mutex = Mutex.new
  @stopping = false
  @max = options[:size] || DEFAULT_SIZE
- @timeout = options[:timeout] || DEFAULT_TIMEOUT
  @logger = options[:logger] || Threaded.logger
  @workers = []
  end
@@ -20,28 +17,24 @@ module Threaded
  @queue.enq([job, json])

  new_worker if needs_workers? && @queue.size > 0
- raise NoWorkersError unless alive?
+ raise NoWorkersError unless workers.detect {|w| w.alive? }
  return true
  end

- def alive?
- return false if workers.empty?
- workers.detect {|w| w.alive? }
- end
-
  def start
- return self if alive?
- @max.times { new_worker }
+ new_workers(@max, true)
  return self
  end

- def stop(timeout = 10)
- poison
- timeout(timeout, "waiting for workers to stop") do
- while self.alive?
- sleep 0.1
+ def stop(timeout = DEFAULT_TIMEOUT)
+ @mutex.synchronize do
+ @stopping = true
+ workers.each {|w| w.poison }
+ timeout(timeout, "waiting for workers to stop") do
+ while workers.any?
+ workers.reject! {|w| w.join if w.dead? }
+ end
  end
- join
  end
  return self
  end
@@ -50,31 +43,38 @@ module Threaded
  @workers.size
  end

+ def stopping?
+ @stopping
+ end
+
  private

- def needs_workers?
- size < @max
+ def timeout(timeout, message = "", &block)
+ ::Timeout.timeout(timeout) do
+ yield
+ end
+ rescue ::Timeout::Error
+ logger.error("Took longer than #{timeout} to #{message.inspect}")
  end

- def new_worker
- @mutex.synchronize do
- return false unless needs_workers?
- return false if @stopping
- @workers << Worker.new(@queue, timeout: @timeout)
- end
+ def needs_workers?
+ size < @max
  end

- def join
- workers.each {|w| w.join }
- return self
+ def max_workers?
+ !needs_workers?
  end

- def poison
+ def new_worker(num = 1, force_start = false)
  @mutex.synchronize do
- @stopping = true
+ @stopping = false if force_start
+ return false if stopping?
+ num.times do
+ next if max_workers?
+ @workers << Worker.new(@queue)
+ end
  end
- workers.each {|w| w.poison }
- return self
  end
+ alias :new_workers :new_worker
  end
  end
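The Master above spawns workers lazily (up to a cap) as jobs arrive and shuts them down by poisoning the shared queue. The following stripped-down pool is not the gem's implementation, only an illustration of that strategy:

```ruby
require 'thread'

# Minimal pool: workers are created on demand up to a maximum, and #stop
# hands each worker a poison message before joining it.
class MiniPool
  POISON = :poison

  def initialize(max)
    @queue   = Queue.new
    @mutex   = Mutex.new
    @workers = []
    @max     = max
  end

  def enqueue(&job)
    @queue << job
    @mutex.synchronize do
      @workers << Thread.new { run } if @workers.size < @max
    end
  end

  def stop
    @workers.size.times { @queue << POISON }   # one pill per worker
    @workers.each(&:join)
  end

  private

  def run
    while (job = @queue.pop) != POISON
      job.call
    end
  end
end

pool = MiniPool.new(4)
3.times { |i| pool.enqueue { puts "job #{i}" } }
pool.stop
```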
data/lib/threaded/promise.rb CHANGED
@@ -29,15 +29,14 @@ module Threaded
  @mutex.synchronize do
  return true if running? || has_run?
  begin
- if @job
- @running = true
- @result = @job.call
- else
- raise NoJobError
- end
+ raise NoJobError unless @job
+ @running = true
+ @result = @job.call
  rescue Exception => error
  @error = error
  ensure
+ @stdout = Thread.current[:stdout].dup if Thread.current[:stdout]
+ Thread.current[:stdout] = nil
  @has_run = true
  end
  end
@@ -46,12 +45,14 @@ module Threaded
  def now
  wait_for_it!
  raise error, error.message, error.backtrace if error
+ puts @stdout.string if @stdout
  @result
  end
  alias :join :now
  alias :value :now

  private
+
  def wait_for_it!
  return true if has_run?

@@ -63,18 +64,3 @@ module Threaded
  end
  end
  end
-
- # job = Threaded.later do
-
-
- # end
-
- # job.now
-
- # job = Threaded::Promise.new
- # job.enqueue do
-
-
- # end
-
- # job.now
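The Promise above runs its job at most once under a mutex, remembers any exception, and only surfaces the captured output and the error when the value is requested. A compact sketch of that contract (minus the stdout capture, which lives in the extension shown earlier):

```ruby
# Run-once promise: execute under a mutex, store the result or the error,
# and re-raise the original error with its backtrace when asked for the value.
class MiniPromise
  def initialize(&job)
    @mutex   = Mutex.new
    @job     = job
    @has_run = false
  end

  def value
    run
    raise @error, @error.message, @error.backtrace if @error
    @result
  end

  private

  def run
    @mutex.synchronize do
      return if @has_run
      begin
        @result = @job.call
      rescue Exception => error
        @error = error
      ensure
        @has_run = true
      end
    end
  end
end

p MiniPromise.new { 6 * 7 }.value   # => 42
```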
data/lib/threaded/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Threaded
- VERSION = "0.0.1"
+ VERSION = "0.0.4"
  end
data/lib/threaded/worker.rb CHANGED
@@ -1,13 +1,10 @@
  module Threaded
  class Worker
- DEFAULT_TIMEOUT = 60 # seconds, 1 minute
- POISON = "poison"
- include Threaded::Timeout
+ POISON = "poison"
  attr_reader :queue, :logger, :thread

  def initialize(queue, options = {})
  @queue = queue
- @timeout = options[:timeout] || DEFAULT_TIMEOUT
  @logger = options[:logger] || Threaded.logger
  @thread = create_thread
  end
@@ -20,6 +17,10 @@ module Threaded
  puts "start is deprecated, thread is started when worker created"
  end

+ def dead?
+ !alive?
+ end
+
  def alive?
  thread.alive?
  end
@@ -36,13 +37,11 @@ module Threaded
  payload = queue.pop
  job, json = *payload
  break if payload == POISON
-
- self.timeout(@timeout, "job: #{job.to_s}") do
- job.call(*json)
- end
+ job.call(*json)
  end
  logger.debug("Threaded In Memory Queue Worker '#{object_id}' stopped")
  }
  end
  end
  end
+
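Worker threads used to wrap every job in the (now deleted) `Threaded::Timeout` helper; in 0.0.4 the job is called directly. For reference, the removed guard boiled down to the following, where the job itself is a placeholder and the 60-second default comes from the old code:

```ruby
require 'timeout'
require 'logger'

logger  = Logger.new(STDOUT)
timeout = 60                        # the old DEFAULT_TIMEOUT
job     = ->(n) { sleep n }         # stand-in for a queued job

begin
  Timeout.timeout(timeout) { job.call(0.1) }   # give up if the job runs too long
rescue Timeout::Error
  logger.error("Took longer than #{timeout} to run the job")
end
```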
data/test/threaded/config_test.rb CHANGED
@@ -3,26 +3,27 @@ require 'stringio'

  class ConfigTest < Test::Unit::TestCase

- def teardown
+ def setup
  Threaded.stop
  end

+ def teardown
+ Threaded.start
+ end
+
  def test_config_works
  fake_out = StringIO.new
  logger = Logger.new(fake_out)
  size = rand(1..99)
- timeout = rand(1..99)

  Threaded.configure do |config|
  config.size = size
  config.logger = logger
- config.timeout = timeout
  end

  Threaded.start

  assert_equal size, Threaded.size
- assert_equal timeout, Threaded.timeout
  assert_equal logger, Threaded.logger
  end

@@ -32,9 +33,7 @@ class ConfigTest < Test::Unit::TestCase
  Threaded.configure do |config|
  config.size = size
  config.logger = logger
- config.timeout = timeout
  end
  end
  end
-
  end
data/test/threaded/promise_test.rb ADDED
@@ -0,0 +1,60 @@
+ require 'test_helper'
+
+ class PromiseTest < Test::Unit::TestCase
+
+ def test_promise_interface
+ Dummy.expects(:process).with(1).once
+ Dummy.expects(:process).with(2).once
+
+ promise1 = Threaded.later do
+ Dummy.process(1)
+ end
+
+ promise2 = Threaded.later do
+ Dummy.process(2)
+ end
+
+ promise1.value
+ promise2.value
+ end
+
+
+ def test_stdout_stdio
+ value = "foo"
+ promise = Threaded.later do
+ puts value
+ end
+ promise.join
+ assert_match value, promise.instance_variable_get("@stdout").string
+ end
+
+ def test_no_sync_stdio
+ Threaded.sync_promise_io = false
+
+ value = "foo"
+ promise = Threaded.later do
+ puts value
+ end
+ promise.join
+ assert_equal nil, promise.instance_variable_get("@stdout")
+ ensure
+ Threaded.sync_promise_io = true
+ end
+
+ class Dummy
+ def later(num)
+ Threaded.later do
+ process(num)
+ end
+ end
+
+ def process(num)
+ Dummy.process(num)
+ end
+ end
+
+ def test_scope
+ Dummy.expects(:process).with(1).once
+ Dummy.new.later(1).value
+ end
+ end
data/test/threaded_test.rb CHANGED (renamed from data/test/threaded.rb)
@@ -3,12 +3,16 @@
  class ThreadedTest < Test::Unit::TestCase

  def test_started?
- Threaded.start
  assert Threaded.started?
  Threaded.stop
  sleep 1
  assert Threaded.stopped?
  refute Threaded.started?
+
+ Threaded.start
+
+ refute Threaded.stopped?
+ assert Threaded.started?
  end

  def test_inline
@@ -18,7 +22,6 @@ class ThreadedTest < Test::Unit::TestCase
  Threaded.inline = true
  Threaded.enqueue(job, 1)
  assert Threaded.inline
- assert Threaded.stopped?
  ensure
  Threaded.inline = false
  end
@@ -27,14 +30,12 @@ class ThreadedTest < Test::Unit::TestCase
  Dummy.expects(:process).with(1).once
  Dummy.expects(:process).with(2).once

- Threaded.start
-
  job = Proc.new {|x| Dummy.process(x) }

  Threaded.enqueue(job, 1)
  Threaded.enqueue(job, 2)
  ensure
- Threaded.stop
+ Threaded.stop # gives time to process to finish
+ Threaded.start
  end
-
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: threaded
  version: !ruby/object:Gem::Version
- version: 0.0.1
+ version: 0.0.4
  platform: ruby
  authors:
  - Richard Schneeman
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2014-02-04 00:00:00.000000000 Z
+ date: 2014-02-06 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: rake
@@ -47,23 +47,23 @@ extra_rdoc_files: []
  files:
  - ".gitignore"
  - ".travis.yml"
+ - CHANGELOG.md
  - Gemfile
- - Gemfile.lock
  - README.md
  - Rakefile
  - lib/threaded.rb
  - lib/threaded/errors.rb
+ - lib/threaded/ext/stdout.rb
  - lib/threaded/master.rb
  - lib/threaded/promise.rb
- - lib/threaded/timeout.rb
  - lib/threaded/version.rb
  - lib/threaded/worker.rb
  - test/test_helper.rb
- - test/threaded.rb
  - test/threaded/config_test.rb
  - test/threaded/master_test.rb
  - test/threaded/promise_test.rb
  - test/threaded/worker_test.rb
+ - test/threaded_test.rb
  - threaded.gemspec
  homepage: https://github.com/schneems/threaded
  licenses:
@@ -91,9 +91,9 @@ specification_version: 4
  summary: Memory, Enqueue stuff you will
  test_files:
  - test/test_helper.rb
- - test/threaded.rb
  - test/threaded/config_test.rb
  - test/threaded/master_test.rb
  - test/threaded/promise_test.rb
  - test/threaded/worker_test.rb
+ - test/threaded_test.rb
  has_rdoc:
data/Gemfile.lock DELETED
@@ -1,20 +0,0 @@
- PATH
- remote: .
- specs:
- threaded (0.0.1)
-
- GEM
- remote: https://rubygems.org/
- specs:
- metaclass (0.0.2)
- mocha (1.0.0)
- metaclass (~> 0.0.1)
- rake (10.1.1)
-
- PLATFORMS
- ruby
-
- DEPENDENCIES
- mocha
- rake
- threaded!
data/lib/threaded/timeout.rb DELETED
@@ -1,11 +0,0 @@
- module Threaded
- module Timeout
- def timeout(timeout, message = "", &block)
- ::Timeout.timeout(timeout) do
- yield
- end
- rescue ::Timeout::Error
- logger.error("Took longer than #{timeout} to #{message.inspect}")
- end
- end
- end