concurrent_pipeline 0.1.0

checksums.yaml ADDED

---
SHA256:
  metadata.gz: 8857e4a50f34ecbe07e5f76ba1edb189004262de54692c723da56195dee181fe
  data.tar.gz: 506c7703669b8df8aec51ee68139b94c8ac8df8f7b73575f7267eebc8ab37d60
SHA512:
  metadata.gz: 8b324eb9ae9b7ec2bf96fba2813ee36ad3399f262990e7da2ac7b7ed014201a6cb53fcc4bdb5dfb9300883158785cd8c916c8b05425391e5acde30f851dc2f80
  data.tar.gz: ee71e62462e1f7b744c5816d2f36a49c09d8022cfb1c95f1c7b3fc33e5910ca98a58b7f652a61ae2273b572ac8e58ed1104460ade824aced05d63724d50d79e4
data/.rubocop.yml ADDED

AllCops:
  TargetRubyVersion: 3.2
  DisabledByDefault: true

Style/StringLiterals:
  Enabled: true
  EnforcedStyle: double_quotes

Style/StringLiteralsInInterpolation:
  Enabled: true
  EnforcedStyle: double_quotes

Layout/LineLength:
  Max: 120
data/.ruby-version ADDED

3.2.3
data/README.md ADDED

# ConcurrentPipeline

Define a bunch of steps, run them concurrently (or not), and recover your data from any step along the way. Rinse and repeat as needed.

### Problem it solves

Occasionally I need to write a one-off script that has a number of independent steps, maybe does a bit of data aggregation, and reports some results at the end. In total, the script might take a long time to run. First, I'll write the full flow against a small set of data and get it working. Then I'll run it against the full dataset and find out 10 minutes later that I didn't handle a certain edge case. So I rework the script and rerun... and 10 minutes later learn that there was another edge case I didn't handle.

The long feedback cycle here is painful and unnecessary. If I wrote all data to a file as it came in, then when I encountered a failure I could fix the handling in my code and resume where I left off, no longer needing to re-process all the steps that had already completed successfully. This gem is my attempt to build a solution to that scenario.

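The core idea (write results to disk as they arrive, skip already-completed steps on the next run) can be sketched without the gem at all. Everything below (the `checkpoint.yml` file name, the step names) is made up for illustration:

```ruby
require "yaml"

# Hypothetical checkpoint file for this sketch:
CHECKPOINT = "checkpoint.yml"

# Load results from a previous (crashed) run, if any.
done =
  if File.exist?(CHECKPOINT)
    YAML.safe_load(File.read(CHECKPOINT))
  else
    {}
  end

# Two made-up steps standing in for the slow parts of a script.
steps = {
  "fetch"     => -> { "fetched data" },
  "transform" => -> { "transformed data" },
}

steps.each do |name, step|
  next if done.key?(name) # completed on a prior run: skip it

  done[name] = step.call
  # Persist after every step, so a crash costs at most one step.
  File.write(CHECKPOINT, done.to_yaml)
end

File.delete(CHECKPOINT) # all steps finished; checkpoint no longer needed
puts done["transform"]
# transformed data
```

On a rerun after a failure, any step already recorded in the checkpoint is skipped, which is exactly the short feedback loop described above.
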
### Installation

Hey, it's a gem, you know what to do. Ugh, fine, I'll write it: _Run `gem install concurrent_pipeline` to install, or add it to your Gemfile!_

### Contributing

This code I've just written is already legacy code. Good luck!

### License

WTFPL (the website is down, but you can find it if you care)

## Guide and Code Examples

### Simplest Usage

Define a producer and add a pipeline to it.

```ruby
# Define your producer:

class MyProducer < ConcurrentPipeline::Producer
  pipeline do
    steps(:step_1, :step_2)

    def step_1
      puts "hi from step_1"
    end

    def step_2
      puts "hi from step_2"
    end
  end
end

# Run your producer:
producer = MyProducer.new
producer.call
# hi from step_1
# hi from step_2
```

Wow! What a convoluted way to just run two methods!

### Example of Processing data

The previous example just wrote to stdout. In general, we want to store all data in a dataset that we can review and potentially write to disk.

Pipelines provide three methods for you to use (I should really figure out ruby-doc and link there :| ):

- `Pipeline#store`: returns a Store
- `Pipeline#changeset`: returns a Changeset
- `Pipeline#stream`: returns a Stream (covered in a later example)

Here, we define a model and provide the producer some initial data. Models are stored in the `store`. The models themselves are immutable. In order to create or update a model, we must use the `changeset`.

```ruby
# Define your producer:

class MyProducer < ConcurrentPipeline::Producer
  model(:my_model) do
    attribute :id # an :id attribute is always required!
    attribute :status

    # You can add more methods here, but remember:
    # models are immutable. If you update an
    # attribute here, the change will be forgotten at
    # the end of the step. All models are re-created
    # from the store for every step.

    def updated?
      status == "updated"
    end
  end

  pipeline do
    steps(:step_1, :step_2)

    def step_1
      # An :id will be created automatically, or you can
      # pass your own:
      changeset.create(:my_model, id: 1, status: "created")
    end

    def step_2
      # You can find the model in the store:
      record = store.find(:my_model, 1)

      # or get them all and find it yourself if you prefer:
      record = store.all(:my_model).find { |r| r.id == 1 }

      changeset.update(record, status: "updated")
    end
  end
end

producer = MyProducer.new

# invoke it:
producer.call

# view results:
puts producer.data
# {
#   my_model: [
#     { id: 1, status: "updated" },
#   ]
# }
```

Future examples show how to pass your initial data to a producer.

### Example with Concurrency

There are a few ways to declare that things should be done concurrently:

- Put steps in an array to indicate that they can run concurrently
- Add two pipelines
- Pass the `each: {model_type}` option to the pipeline, indicating that it should be run for every record of that type

The following example contains all three.

```ruby
class MyProducer < ConcurrentPipeline::Producer
  model(:my_model) do
    attribute :id # an :id attribute is always required!
    attribute :status
  end

  pipeline do
    # Steps :step_2 and :step_3 will run concurrently.
    # Step :step_4 will only run when they have both
    # finished successfully.
    steps(
      :step_1,
      [:step_2, :step_3],
      :step_4
    )

    # noops, since we're just demonstrating usage here.
    def step_1; end
    def step_2; end
    def step_3; end
    def step_4; end
  end

  # This pipeline will run concurrently with the prior
  # pipeline.
  pipeline do
    steps(:step_1)
    def step_1; end
  end

  # Passing `each:` to the pipeline indicates that it
  # should be run for every record of that type. When
  # `each:` is specified, the record can be accessed
  # using the `record` method.
  #
  # Note: every record will be processed concurrently.
  # You can limit concurrency by passing the
  # `concurrency: {integer}` option. The default
  # concurrency is Infinite! INFINIIIIITE!!1!11!!!1!
  pipeline(each: :my_model, concurrency: 3) do
    steps(:process)

    def process
      changeset.update(record, status: "processed")
    end
  end
end

# Let's pass some initial data:
initial_data = {
  my_model: [
    { id: 1, status: "waiting" },
    { id: 2, status: "waiting" },
    { id: 3, status: "waiting" },
  ]
}
producer = MyProducer.new(data: initial_data)

# invoke it:
producer.call

# view results:
puts producer.data
# {
#   my_model: [
#     { id: 1, status: "processed" },
#     { id: 2, status: "processed" },
#     { id: 3, status: "processed" },
#   ]
# }
```

### Viewing history and recovering versions

A version is created each time a record is updated. This example shows how to view prior versions and rerun from one.

It's important to note that the system tracks which steps have been performed and which are still waiting to run by writing records to the store. If you change the structure of your producer (add/remove pipelines, or add/remove steps from a pipeline), then there's no guarantee that your data will be compatible across that change. If, however, you only change the body of a step method, then you should be able to rerun a prior version without issue.

```ruby
class MyProducer < ConcurrentPipeline::Producer
  model(:my_model) do
    attribute :id # an :id attribute is always required!
    attribute :status
  end

  pipeline(each: :my_model) do
    steps(:process)

    def process
      changeset.update(record, status: "processed")
    end
  end
end

initial_data = {
  my_model: [
    { id: 1, status: "waiting" },
    { id: 2, status: "waiting" },
  ]
}
producer = MyProducer.new(data: initial_data)
producer.call

# Access the versions like so:
puts producer.history.versions.count
# 5

# A version can tell you what diff it applied.
# Notice the :PipelineStep record here: that
# is how progress is tracked internally.
puts producer.history.versions[3].diff
# {
#   changes: [
#     {
#       action: :update,
#       id: 1,
#       type: :my_model,
#       delta: { status: "processed" }
#     },
#     {
#       action: :update,
#       id: "5d02ca83-0435-49b5-a812-d4da4eef080e",
#       type: :PipelineStep,
#       delta: {
#         completed_at: "2024-05-10T18:44:04+00:00",
#         result: :success
#       }
#     }
#   ]
# }

# Let's re-process using a previous version.
# This will just pick up where it left off:
re_producer = MyProducer.new(
  store: producer.history.versions[3].store
)
re_producer.call

# If you need to change the code, you'd probably
# want to write the data to disk and then read
# it back the next time you run:

File.write(
  "last_good_version.yml",
  producer.history.versions[3].store.data.to_yaml
)

# And then next time, load it like so:
re_producer = MyProducer.new(
  data: YAML.unsafe_load_file("last_good_version.yml")
)
```

### Monitoring progress

When you run a long-running script, it's nice to know that it's doing something (anything!). Staring at an unscrolling terminal might be good news, might be bad news, might be no news. How to know?

Models are immutable and changesets are only applied after a step has completed. If you want to get some data out during processing, you can just `puts` it. Or, if you'd like to be a bit more specific about what you track, you can push data to a centralized "stream".

Here's an example:

```ruby
class MyProducer < ConcurrentPipeline::Producer
  stream do
    on(:start) do |message|
      puts "Started processing #{message}"
    end

    on(:progress) do |data|
      puts "slept #{data[:slept]} times!"

      # You don't have to just "puts" here:
      # Audio.play(:jeopardy_music)
    end

    on(:finished) do
      # Notice you have access to the outer scope.
      #
      # Streams are really about monitoring progress,
      # so mutating state here is probably a recipe for
      # chaos and darkness, but hey, it's your code,
      # and I say fortune favors the bold (I've never
      # actually said that until now).
      some_other_object.reverse!
    end
  end

  pipeline do
    steps(:process)

    def process
      # The `push` method takes exactly two arguments:
      #   type: a Symbol
      #   payload: any object, go crazy...
      #   ...but remember...concurrency...
      stream.push(:start, "some_object!")
      sleep 1
      stream.push(:progress, { slept: 1 })
      sleep 1
      stream.push(:progress, { slept: 2 })

      # Don't feel pressured into sending a payload
      # if you don't feel like it.
      stream.push(:finished)
    end
  end
end

some_other_object = [1, 2, 3]

producer = MyProducer.new
producer.call
puts some_other_object.inspect

# Started processing some_object!
# slept 1 times!
# slept 2 times!
# [3, 2, 1]
```

### Halting, Blocking, Triggering, etc.

Perhaps you have a scenario where all ModelOnes have to be processed before any ModelTwo records. Or perhaps you want to find three ModelOnes that satisfy a certain condition, and as soon as you've found those three, you want the pipeline to halt.

To allow this kind of control, each pipeline can specify an `open` method: while it returns true, the pipeline starts and continues processing; once it returns false, the pipeline halts at the end of the current step.

A pipeline is always "closed" once all of its steps are complete, so pipelines cannot loop. If you want a pipeline to loop, you'd have to have an `each: :model` pipeline that creates a new model to be processed. The pipeline would then re-run (with the new model).

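That loop-via-new-records idea can be sketched with a plain work queue, no gem involved (the queue and record values here are made up):

```ruby
# A tiny work queue standing in for an `each:` pipeline:
# processing one record may enqueue a new record, which
# re-triggers the "pipeline" for the new record.
queue = [1] # start with one record
processed = []

until queue.empty?
  record = queue.shift
  processed << record
  # The step decides whether to create a follow-up record;
  # that is how an each-pipeline can effectively loop.
  queue << record + 1 if record < 3
end

puts processed.inspect
# [1, 2, 3]
```
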
Here's an example with some custom `open` methods:

```ruby
class MyProducer < ConcurrentPipeline::Producer
  model(:model_one) do
    attribute :id # an :id attribute is always required!
    attribute :valid
  end

  model(:model_two) do
    attribute :id # an :id attribute is always required!
    attribute :processed
  end

  pipeline(each: :model_one) do
    # We close this pipeline as soon as we've found at
    # least three valid :model_one records. Note that,
    # because of concurrency, we might not be able to
    # stop at *exactly* three valid models!
    open { store.all(:model_one).select(&:valid).count < 3 }

    steps(:process)

    def process
      sleep(rand(4))
      changeset.update(record, valid: true)
    end
  end

  pipeline(each: :model_two) do
    # This pipeline only opens once three :model_one
    # records are valid.
    open { store.all(:model_one).select(&:valid).count >= 3 }

    steps(:process)

    def process
      changeset.update(record, processed: true)
    end
  end

  pipeline do
    open {
      is_open = store.all(:model_two).all?(&:processed)
      stream.push(
        :stdout,
        "last pipeline is now: #{is_open ? :open : :closed}"
      )
      is_open
    }

    steps(:process)

    def process
      stream.push(:stdout, "all done")
    end
  end
end

initial_data = {
  model_one: [
    { id: 1, valid: false },
    { id: 2, valid: false },
    { id: 3, valid: false },
    { id: 4, valid: false },
    { id: 5, valid: false },
  ],
  model_two: [
    { id: 1, processed: false }
  ]
}
producer = MyProducer.new(data: initial_data)
producer.call
```

### Error Handling

What happens if a step raises an error? Theoretically, that particular pipeline should just halt, and the error will be logged in the corresponding PipelineStep record.

The return value of `Producer#call` is a boolean indicating whether all PipelineSteps succeeded.

It is possible that I've screwed this up and that an error leads to a deadlock. To guard against data loss, each update is written to a YAML file in a directory you can find at `Producer#dir`. You can also pass your own directory during initialization: `MyProducer.new(dir: "/tmp/my_dir")`.

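The halt-and-record behavior can be sketched without the gem: run steps in order, record a result per step, and stop the pipeline at the first failure. The step names and result shape below are made up for illustration:

```ruby
# Run steps in order; record a result per step and halt
# the pipeline at the first error, like a PipelineStep log.
def run_pipeline(steps)
  results = {}
  steps.each do |name, step|
    begin
      step.call
      results[name] = { result: :success }
    rescue => e
      results[name] = { result: :failure, error: e.message }
      break # halt this pipeline; later steps never run
    end
  end
  # Overall success mirrors the boolean return of #call.
  [results.values.all? { |r| r[:result] == :success }, results]
end

ok, results = run_pipeline(
  step_1: -> { :fine },
  step_2: -> { raise "boom" },
  step_3: -> { :never_runs }
)

puts ok
# false
```

Here `step_3` never runs: the pipeline halted at `step_2`, and its failure (with the error message) is recorded in the results.
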
### Other hints

You can pass your initial data to a producer in three ways:

- Passing a hash: `MyProducer.new(data: {...})`
- Passing a store: `MyProducer.new(store: store)`
- Passing a block, which applies changesets immediately:

```ruby
producer = MyProducer.new do |changeset|
  [:a, :b, :c].each do |item|
    changeset.create(:model, name: item)
  end
end
```

If you need a stream to have access to an outer scope, you can construct your own stream:

```ruby
my_stream = ConcurrentPipeline::Producer::Stream.new
outer_variable = :something
my_stream.on(:stdout) { puts "outer_variable: #{outer_variable}" }
MyProducer.new(stream: my_stream)
```

## Ruby can't do parallel processing, so concurrency here is useless

Yes and no, but maybe, but maybe not. I've pulled the explanation out into a [separate doc](./concurrency.md).

tl;dr: Concurrency will hurt you if you're crunching lots of numbers. It will help you if you're shelling out or hitting the network. And if you're really crunching that many numbers and performance is critical... maybe just use Go.
data/Rakefile ADDED

# frozen_string_literal: true

require "bundler/gem_tasks"
require "rake/testtask"

Rake::TestTask.new(:test) do |t|
  t.libs << "test"
  t.libs << "lib"
  t.test_files = FileList["test/**/test_*.rb"]
end

require "rubocop/rake_task"

RuboCop::RakeTask.new

task default: %i[test rubocop]
data/concurrency.md ADDED

## Ruby can't do parallel processing, so concurrency here is useless

This is something that gets said every now and then. "Concurrency is not parallelism" is also often heard. Whether concurrency will help or hurt the performance of your script depends mostly on what the script does.

tl;dr: Concurrency will hurt you if you're crunching lots of numbers. It will help you if you're shelling out or hitting the network. If you're really crunching that many numbers, maybe just use Go.

---

It's true: Ruby (and JavaScript, Python, etc.) does not allow more than one line of code to execute on separate CPUs (or rather, cores) at the same time. So if you're writing a computationally heavy script, then, given that Ruby can only perform one computation at a time, concurrency will probably slow things down, because it requires the VM to context-switch between the various threads/fibers. That switching is costly in this scenario.

However, there's one slightly odd method that maybe doesn't work the way you think: `sleep`. If my code hits a `sleep 1000` call and you ask most programmers what the code is doing, they'd say "it's sleeping." But "sleeping" requires no CPU. In fact, sleeping requires the Ruby VM *not* to do a certain thing, which it can achieve by... doing something else! So Ruby *can* do two things at once: it can do something and simultaneously not do something else! But wait, is that parallelism or concurrency? At the VM level, Ruby is only executing one line of code, so it's concurrency. However... time is an independent variable that is always on the move. Conceptually (IMO), given the way programmers think about programs, `sleep` is effectively parallel.

Here's a slightly different way to think about it: I have 14 cores on my computer, and probably thousands of executables on my hard drive. Are those executables *not* running in parallel or concurrently?

What's actually happening under the hood? I don't **really** know; I'm not a Ruby core contributor or anything. But here are some examples demonstrating how things behave:

```ruby
# A mutex will ensure that the block is executed all in
# one go. No thread switching allowed.
mutex = Mutex.new
proc = Proc.new do
  mutex.synchronize do
    i = 0
    while i < 200_000_000
      i += 1
    end
  end
end

start = Time.now.to_f
[Thread.new(&proc), Thread.new(&proc)].map(&:join)
puts "with mutex: #{Time.now.to_f - start}"

# Here we call Thread.pass, which tells the VM that now
# is a good time to switch threads if it would like.
proc_2 = Proc.new do
  i = 0
  while i < 200_000_000
    i += 1
    Thread.pass if i % 100_000 == 0
  end
end

start = Time.now.to_f
[Thread.new(&proc_2), Thread.new(&proc_2)].map(&:join)
puts "with pass: #{Time.now.to_f - start}"

# results:
# with mutex: 4.34
# with pass: 7.33
```

Notice the cost of switching threads! Now let's implement our own personal `sleep` method:

```ruby
mutex = Mutex.new
proc = Proc.new do
  # Here we "sleep" all in one go:
  mutex.synchronize do
    start_time = Time.now.to_f
    while Time.now.to_f - start_time < 3; end
  end
end

start = Time.now.to_f
[Thread.new(&proc), Thread.new(&proc)].map(&:join)
puts "with mutex: #{Time.now.to_f - start}"

proc_2 = Proc.new do
  start_time = Time.now.to_f

  # Use Thread.pass to indicate to the VM that
  # it can do something else if it likes.
  while Time.now.to_f - start_time < 3
    Thread.pass
  end
end

start = Time.now.to_f
[Thread.new(&proc_2), Thread.new(&proc_2)].map(&:join)
puts "with pass: #{Time.now.to_f - start}"

# results:
# with mutex: 6.00
# with pass: 3.00
```

Glorious. With our second implementation, we've made our own `sleep` functionality. And it's clear that two lines of code are never executing at the same time.

Oh well: whatever, concurrency, parallelism, etc. What matters is that if your code sleeps a lot, concurrency will improve performance. "But Pete," you ask, "why would I ever want to 'sleep' in a script!?" When you shell out to another process, Ruby sleeps while waiting for its results. When you hit the network, Ruby sleeps while waiting for the response. If you're doing that, then write a concurrent pipeline! If you're not, then don't bother. It turns out we can all find joy and happiness together.
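
Since the built-in `sleep` itself releases the VM to do other work, two sleeping threads finish in roughly the time of one. A quick self-contained check (timings are approximate):

```ruby
start = Time.now.to_f

# Two threads each sleep 1 second. Because a sleeping
# thread yields the VM, the sleeps overlap: total wall
# time is about 1 second, not 2.
[Thread.new { sleep 1 }, Thread.new { sleep 1 }].each(&:join)

elapsed = Time.now.to_f - start
puts format("elapsed: %.2fs", elapsed)
```

This is the same effect that makes shelling out or waiting on the network concurrency-friendly.
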
data/concurrent_pipeline.gemspec ADDED

# frozen_string_literal: true

require_relative "lib/concurrent_pipeline/version"

Gem::Specification.new do |spec|
  spec.name = "concurrent_pipeline"
  spec.version = ConcurrentPipeline::VERSION
  spec.authors = ["Pete Kinnecom"]
  spec.email = ["git@k7u7.com"]

  spec.summary = <<~TEXT.strip
    Define a pipeline of tasks, run them concurrently, and see a versioned \
    history of all changes along the way.
  TEXT
  spec.homepage = "https://github.com/petekinnecom/concurrent_pipeline"
  spec.license = "WTFPL"
  spec.required_ruby_version = ">= 3.0.0"
  spec.metadata["allowed_push_host"] = "https://rubygems.org"
  spec.metadata["homepage_uri"] = spec.homepage
  spec.metadata["source_code_uri"] = spec.homepage
  spec.metadata["changelog_uri"] = spec.homepage

  spec.files = Dir.chdir(__dir__) do
    `git ls-files -z`.split("\x0").reject do |f|
      (File.expand_path(f) == __FILE__) ||
        f.start_with?(*%w[bin/ test/ spec/ features/ .git .circleci appveyor Gemfile])
    end
  end
  spec.bindir = "exe"
  spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
  spec.require_paths = ["lib"]

  spec.add_dependency("concurrent-ruby-edge")
end