fractor 0.1.3 → 0.1.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.rubocop-https---raw-githubusercontent-com-riboseinc-oss-guides-main-ci-rubocop-yml +552 -0
- data/.rubocop.yml +14 -8
- data/.rubocop_todo.yml +154 -48
- data/README.adoc +1371 -317
- data/examples/auto_detection/README.adoc +52 -0
- data/examples/auto_detection/auto_detection.rb +170 -0
- data/examples/continuous_chat_common/message_protocol.rb +53 -0
- data/examples/continuous_chat_fractor/README.adoc +217 -0
- data/examples/continuous_chat_fractor/chat_client.rb +303 -0
- data/examples/continuous_chat_fractor/chat_common.rb +83 -0
- data/examples/continuous_chat_fractor/chat_server.rb +167 -0
- data/examples/continuous_chat_fractor/simulate.rb +345 -0
- data/examples/continuous_chat_server/README.adoc +135 -0
- data/examples/continuous_chat_server/chat_client.rb +303 -0
- data/examples/continuous_chat_server/chat_server.rb +359 -0
- data/examples/continuous_chat_server/simulate.rb +343 -0
- data/examples/hierarchical_hasher/hierarchical_hasher.rb +12 -8
- data/examples/multi_work_type/multi_work_type.rb +30 -29
- data/examples/pipeline_processing/pipeline_processing.rb +15 -15
- data/examples/producer_subscriber/producer_subscriber.rb +20 -16
- data/examples/scatter_gather/scatter_gather.rb +29 -28
- data/examples/simple/sample.rb +38 -6
- data/examples/specialized_workers/specialized_workers.rb +44 -37
- data/lib/fractor/continuous_server.rb +188 -0
- data/lib/fractor/result_aggregator.rb +1 -1
- data/lib/fractor/supervisor.rb +291 -108
- data/lib/fractor/version.rb +1 -1
- data/lib/fractor/work_queue.rb +68 -0
- data/lib/fractor/work_result.rb +1 -1
- data/lib/fractor/worker.rb +2 -1
- data/lib/fractor/wrapped_ractor.rb +12 -2
- data/lib/fractor.rb +2 -0
- metadata +17 -2
data/README.adoc
CHANGED
@@ -5,9 +5,9 @@ distributing computational work across multiple Ractors.
 
 == Introduction
 
-Fractor stands for *Function-driven Ractors framework*. It is a lightweight
-framework designed to simplify the process of distributing computational
-across multiple Ractors (Ruby's actor-like concurrency model).
+Fractor stands for *Function-driven Ractors framework*. It is a lightweight
+Ruby framework designed to simplify the process of distributing computational
+work across multiple Ractors (Ruby's actor-like concurrency model).
 
 The primary goal of Fractor is to provide a structured way to define work,
 process it in parallel using Ractors, and aggregate the results, while
@@ -108,85 +108,125 @@ component that manages the pool of workers and distributes work items
 
 component that collects and organizes work results from workers
 
+=== pipeline mode
 
+operating mode where Fractor processes a defined set of work items and then
+stops
 
+=== continuous mode
 
-
+operating mode where Fractor runs indefinitely, processing work items as they
+arrive
 
-=== General
 
-The Fractor framework consists of the following main classes, all residing
-within the `Fractor` module.
 
 
-
+== Understanding Fractor operating modes
 
-
+=== General
 
-
+Fractor supports two distinct operating modes, each optimized for different use
+cases. Understanding these modes is essential for choosing the right approach
+for your application.
 
-
-should return a `Fractor::WorkResult` object.
+=== Pipeline mode (batch processing)
 
-
+Pipeline mode is designed for processing a defined set of work items with a
+clear beginning and end.
 
-
+Characteristics:
 
-
+* Processes a predetermined batch of work items
+* Stops automatically when all work is completed
+* Results are collected and accessed after processing completes
+* Ideal for one-time computations or periodic batch jobs
 
-
+Common use cases:
 
-
+* Processing a file or dataset
+* Batch data transformations
+* One-time parallel computations
+* Scheduled batch jobs
+* Hierarchical or multi-stage processing
 
-
+=== Continuous mode (long-running servers)
 
-
-
+Continuous mode is designed for applications that need to run indefinitely,
+processing work items as they arrive.
 
-
+Characteristics:
 
-
+* Runs continuously without a predetermined end
+* Processes work items dynamically as they become available
+* Workers idle efficiently when no work is available
+* Results are processed via callbacks, not batch collection
+* Supports graceful shutdown and runtime monitoring
 
-
+Common use cases:
 
-
+* Chat servers and messaging systems
+* Background job processors
+* Real-time data stream processing
+* Web servers handling concurrent requests
+* Monitoring and alerting systems
+* Event-driven architectures
 
-
+=== Comparison
 
-
+[cols="1,2,2",options="header"]
+|===
+|Aspect |Pipeline Mode |Continuous Mode
 
-
+|Duration
+|Finite (stops when done)
+|Indefinite (runs until stopped)
 
-
+|Work arrival
+|All work known upfront
+|Work arrives dynamically
 
-
-
+|Result handling
+|Batch collection after completion
+|Callback-based processing
 
-
+|Typical lifetime
+|Seconds to minutes
+|Hours to days/weeks
 
-
+|Shutdown
+|Automatic on completion
+|Manual or signal-based
 
-
+|Best for
+|Batch jobs, file processing
+|Servers, streams, job queues
+|===
 
-
+=== Decision guide
 
-
-Ractors.
+Choose *Pipeline mode* when:
 
-
+* You have a complete dataset to process
+* Processing has a clear start and end
+* You need all results aggregated after completion
+* The task is one-time or scheduled periodically
 
-
+Choose *Continuous mode* when:
+
+* Work arrives over time from external sources
+* Your application runs as a long-lived server
+* You need to process items as they arrive
+* Results should be handled immediately via callbacks
 
-Handles graceful shutdown on `SIGINT` (Ctrl+C).
 
 
 
-== Quick start
+== Quick start: Pipeline mode
 
 === General
 
-This quick start guide shows the minimum steps needed to get
-
+This quick start guide shows the minimum steps needed to get parallel batch
+processing working with Fractor.
 
 === Step 1: Create a minimal Work class
 
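The difference between the two modes described above can be sketched in plain Ruby, without Fractor at all. The following is a hypothetical illustration using the standard library `Queue`: the batch helper drains a fixed set of items and then stops, while the continuous helper runs until a stop sentinel arrives and hands each result to a callback. The names `pipeline_demo` and `continuous_demo`, the doubling "work", and the sentinel values are all invented for this sketch.

```ruby
# Hypothetical illustration of the two operating modes using plain Ruby
# threads and the standard library Queue (this is not Fractor code).

# Pipeline-style: process a fixed batch, then stop automatically.
def pipeline_demo(items, workers: 2)
  input   = Queue.new
  results = Queue.new
  items.each { |i| input << i }
  workers.times { input << :done } # one stop sentinel per worker

  threads = Array.new(workers) do
    Thread.new do
      while (item = input.pop) != :done
        results << item * 2 # stand-in for Worker#process
      end
    end
  end
  threads.each(&:join)
  Array.new(results.size) { results.pop }
end

# Continuous-style: run until told to stop, handling results via a callback.
def continuous_demo(input, &on_result)
  Thread.new do
    while (item = input.pop) != :stop
      on_result.call(item * 2) # results handled as they arrive
    end
  end
end

batch = pipeline_demo([1, 2, 3]).sort
# batch == [2, 4, 6]

stream  = Queue.new
outputs = Queue.new
worker  = continuous_demo(stream) { |r| outputs << r }
stream << 5
stream << :stop
worker.join
# outputs.pop == 10
```

The batch call blocks until every item is processed, mirroring pipeline mode's "stops automatically" behaviour; the continuous worker only returns once it sees the stop sentinel, mirroring continuous mode's manual or signal-based shutdown.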
@@ -259,57 +299,233 @@ returns an error result.
 The Supervisor class orchestrates the entire framework, managing worker Ractors,
 distributing work, and collecting results.
 
-It initializes pools of Ractors, each running an instance of a Worker
-class. The Supervisor handles the communication between the main thread and
-the Ractors, including sending work items and receiving results.
-
-The Supervisor also manages the work queue and the ResultAggregator, which
-collects and organizes all results from the workers.
-
-To set up the Supervisor, you specify worker pools, each containing a Worker class
-and the number of workers to create. You can create multiple worker pools with
-different worker types to handle different kinds of work. Each worker pool can
-process any type of Work object that inherits from Fractor::Work.
-
 [source,ruby]
 ----
-# Create the supervisor
+# Create the supervisor with auto-detected number of workers
 supervisor = Fractor::Supervisor.new(
   worker_pools: [
-    { worker_class: MyWorker
+    { worker_class: MyWorker } # Number of workers auto-detected
   ]
 )
 
-# Add
-supervisor.add_work_item(MyWork.new(1))
-
-# Add multiple work items
+# Add work items (instances of Work subclasses)
 supervisor.add_work_items([
+  MyWork.new(1),
   MyWork.new(2),
   MyWork.new(3),
   MyWork.new(4),
   MyWork.new(5)
 ])
 
-# You can add different types of Work objects to the same supervisor
-supervisor.add_work_items([
-  MyWork.new(6),
-  OtherWork.new("data")
-])
-
 # Run the processing
 supervisor.run
 
-# Access results
+# Access results after completion
 puts "Results: #{supervisor.results.results.map(&:result)}"
 puts "Errors: #{supervisor.results.errors.size}"
 ----
 
 That's it! With these three simple steps, you have a working parallel processing
-system using Fractor.
+system using Fractor in pipeline mode.
+
+
+
+
+== Quick start: Continuous mode
+
+=== General
+
+This quick start guide shows how to build a long-running server using Fractor's
+high-level primitives for continuous mode. These primitives eliminate boilerplate
+code for thread management, queuing, and results processing.
+
+=== Step 1: Create Work and Worker classes
+
+Just like pipeline mode, you need Work and Worker classes:
+
+[source,ruby]
+----
+require 'fractor'
+
+class MessageWork < Fractor::Work
+  def initialize(client_id:, message:)
+    super({ client_id: client_id, message: message })
+  end
+
+  def client_id
+    input[:client_id]
+  end
+
+  def message
+    input[:message]
+  end
+end
+
+class MessageWorker < Fractor::Worker
+  def process(work)
+    # Process the message
+    processed = "Echo: #{work.message}"
+
+    Fractor::WorkResult.new(
+      result: { client_id: work.client_id, response: processed },
+      work: work
+    )
+  rescue => e
+    Fractor::WorkResult.new(error: e.message, work: work)
+  end
+end
+----
+
+=== Step 2: Set up WorkQueue
+
+Create a thread-safe work queue that will hold incoming work items:
+
+[source,ruby]
+----
+# Create a thread-safe work queue
+work_queue = Fractor::WorkQueue.new
+----
+
+=== Step 3: Set up ContinuousServer with callbacks
+
+The ContinuousServer handles all the boilerplate: thread management, signal
+handling, and results processing.
+
+[source,ruby]
+----
+# Create the continuous server
+server = Fractor::ContinuousServer.new(
+  worker_pools: [
+    { worker_class: MessageWorker, num_workers: 4 }
+  ],
+  work_queue: work_queue,     # Auto-registers as work source
+  log_file: 'logs/server.log' # Optional logging
+)
+
+# Define how to handle successful results
+server.on_result do |result|
+  client_id = result.result[:client_id]
+  response = result.result[:response]
+  puts "Sending to client #{client_id}: #{response}"
+  # Send response to client here
+end
+
+# Define how to handle errors
+server.on_error do |error_result|
+  puts "Error processing work: #{error_result.error}"
+end
+----
+
+=== Step 4: Run and add work dynamically
+
+Start the server and add work items as they arrive:
+
+[source,ruby]
+----
+# Start the server in a background thread
+server_thread = Thread.new { server.run }
+
+# Your application can now push work items dynamically
+# For example, when a client sends a message:
+work_queue << MessageWork.new(client_id: 1, message: "Hello")
+work_queue << MessageWork.new(client_id: 2, message: "World")
+
+# The server runs indefinitely, processing work as it arrives
+# Use Ctrl+C or send SIGTERM for graceful shutdown
+
+# Or stop programmatically
+sleep 10
+server.stop
+server_thread.join
+----
+
+That's it! The ContinuousServer handles all thread management, signal handling,
+and graceful shutdown automatically.
+
+
+
+
+== Core components
+
+=== General
+
+The Fractor framework consists of the following main classes, all residing
+within the `Fractor` module. These core components are used by both pipeline
+mode and continuous mode.
+
+
+=== Fractor::Worker
+
+The abstract base class for defining how work should be processed.
+
+Client code must subclass this and implement the `process(work)` method.
+
+The `process` method receives a `Fractor::Work` object (or a subclass) and
+should return a `Fractor::WorkResult` object.
+
+=== Fractor::Work
+
+The abstract base class for representing a unit of work.
+
+Typically holds the input data needed by the `Worker`.
+
+Client code should subclass this to define specific types of work items.
+
+=== Fractor::WorkResult
+
+A container object returned by the `Worker#process` method.
+
+Holds either the successful `:result` of the computation or an `:error`
+message if processing failed.
+
+Includes a reference back to the original `:work` item.
+
+Provides a `success?` method.
+
+=== Fractor::ResultAggregator
+
+Collects and stores all `WorkResult` objects generated by the workers.
+
+Separates results into `results` (successful) and `errors` arrays.
+
+=== Fractor::WrappedRactor
+
+Manages an individual Ruby `Ractor`.
+
+Instantiates the client-provided `Worker` subclass within the Ractor.
+
+Handles receiving `Work` items, calling the `Worker#process` method, and
+yielding `WorkResult` objects (or errors) back to the `Supervisor`.
+
+=== Fractor::Supervisor
+
+The main orchestrator of the framework.
+
+Initializes and manages a pool of `WrappedRactor` instances.
+
+Manages a `work_queue` of input data.
+
+Distributes work items (wrapped in the client's `Work` subclass) to available
+Ractors.
+
+Listens for results and errors from Ractors using `Ractor.select`.
+
+Uses `ResultAggregator` to store outcomes.
+
+Handles graceful shutdown on `SIGINT` (Ctrl+C).
+
+
+
+
+== Pipeline mode components
 
+=== General
+
+This section describes the components and their detailed usage specifically for
+pipeline mode (batch processing). For continuous mode, see the Continuous mode
+components section.
 
-
+Pipeline mode uses only the core components without any additional primitives.
 
 === Work class
 
@@ -362,11 +578,9 @@ end
 
 [TIP]
 ====
-====
 * Keep Work objects lightweight and serializable since they will be passed
 between Ractors
 * Implement a meaningful `to_s` method for better debugging
-====
 * Consider adding validation in the initializer to catch issues early
 ====
 
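The serializability advice in the TIP above can be checked without Fractor: when a work item crosses a Ractor boundary it is deep-copied, and a `Marshal` round trip is a quick proxy for that copy. `TinyWork` below is a hypothetical stand-in for a `Fractor::Work` subclass, invented for this sketch.

```ruby
# Hypothetical stand-in for a Work subclass; Marshal is used here as a proxy
# for the deep copy a work item undergoes when sent between Ractors.
class TinyWork
  attr_reader :input

  def initialize(input)
    raise ArgumentError, "input required" if input.nil? # validate early
    @input = input
  end

  def to_s
    "TinyWork(#{@input.inspect})" # meaningful to_s aids debugging
  end
end

work = TinyWork.new({ id: 1, payload: "hello" })
copy = Marshal.load(Marshal.dump(work)) # deep copy, like a Ractor send
# copy.input == work.input -> true  (same data survives the copy)
# copy.equal?(work)        -> false (distinct object after the copy)
```

A Work object that fails this round trip (for example, one holding an open file handle or a Proc) would also fail when handed to a worker Ractor, which is why lightweight, plain-data inputs are recommended.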
@@ -448,7 +662,10 @@ def process(work)
   Fractor::WorkResult.new(result: result, work: work)
 rescue StandardError => e
   # Catch and convert any unexpected exceptions to error results
-  Fractor::WorkResult.new(
+  Fractor::WorkResult.new(
+    error: "An unexpected error occurred: #{e.message}",
+    work: work
+  )
 end
 ----
 
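The `WorkResult` contract shown above (either a `:result` or an `:error`, plus a `success?` predicate) is what lets results be routed into separate successful and error collections. A miniature, Fractor-free sketch of that routing follows; `MiniResult` and `MiniAggregator` are invented names, not Fractor's implementation.

```ruby
# Miniature sketch of the WorkResult/ResultAggregator contract; these classes
# are invented for illustration and are not Fractor's own implementation.
MiniResult = Struct.new(:result, :error, :work, keyword_init: true) do
  def success?
    error.nil? # a result is successful when no error was recorded
  end
end

class MiniAggregator
  attr_reader :results, :errors

  def initialize
    @results = []
    @errors  = []
  end

  def add(work_result)
    # Route on success?, keeping the back-reference to the original work item.
    (work_result.success? ? @results : @errors) << work_result
  end
end

agg = MiniAggregator.new
agg.add(MiniResult.new(result: 42, work: :w1))
agg.add(MiniResult.new(error: "An unexpected error occurred: boom", work: :w2))
# agg.results.size == 1 and agg.errors.size == 1
```

Because every failure path returns an error result instead of raising, the aggregation step never has to rescue exceptions itself; it only inspects `success?`.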
@@ -460,277 +677,1118 @@ end
|
|
|
460
677
|
* Ensure all paths return a valid `WorkResult` object
|
|
461
678
|
====
|
|
462
679
|
|
|
463
|
-
===
|
|
680
|
+
=== Supervisor class for pipeline mode
|
|
464
681
|
|
|
465
682
|
==== Purpose and responsibilities
|
|
466
683
|
|
|
467
|
-
The `Fractor::
|
|
468
|
-
|
|
469
|
-
work item.
|
|
684
|
+
The `Fractor::Supervisor` class orchestrates the entire framework, managing
|
|
685
|
+
worker Ractors, distributing work, and collecting results.
|
|
470
686
|
|
|
471
|
-
====
|
|
687
|
+
==== Configuration options
|
|
472
688
|
|
|
473
|
-
|
|
689
|
+
When creating a Supervisor for pipeline mode, configure worker pools:
|
|
474
690
|
|
|
475
691
|
[source,ruby]
|
|
476
692
|
----
|
|
477
|
-
|
|
478
|
-
|
|
693
|
+
supervisor = Fractor::Supervisor.new(
|
|
694
|
+
worker_pools: [
|
|
695
|
+
# Pool 1 - for general data processing
|
|
696
|
+
{ worker_class: MyWorker, num_workers: 4 },
|
|
697
|
+
|
|
698
|
+
# Pool 2 - for specialized image processing
|
|
699
|
+
{ worker_class: ImageWorker, num_workers: 2 }
|
|
700
|
+
]
|
|
701
|
+
# Note: continuous_mode defaults to false for pipeline mode
|
|
702
|
+
)
|
|
479
703
|
----
|
|
480
704
|
|
|
481
|
-
|
|
705
|
+
==== Worker auto-detection
|
|
706
|
+
|
|
707
|
+
Fractor automatically detects the number of available processors on your system
|
|
708
|
+
and uses that value when `num_workers` is not specified. This provides optimal
|
|
709
|
+
resource utilization across different deployment environments without requiring
|
|
710
|
+
manual configuration.
|
|
482
711
|
|
|
483
712
|
[source,ruby]
|
|
484
713
|
----
|
|
485
|
-
#
|
|
486
|
-
Fractor::
|
|
714
|
+
# Auto-detect number of workers (recommended for most cases)
|
|
715
|
+
supervisor = Fractor::Supervisor.new(
|
|
716
|
+
worker_pools: [
|
|
717
|
+
{ worker_class: MyWorker } # Will use number of available processors
|
|
718
|
+
]
|
|
719
|
+
)
|
|
720
|
+
|
|
721
|
+
# Explicitly set number of workers (useful for specific requirements)
|
|
722
|
+
supervisor = Fractor::Supervisor.new(
|
|
723
|
+
worker_pools: [
|
|
724
|
+
{ worker_class: MyWorker, num_workers: 4 } # Always use exactly 4 workers
|
|
725
|
+
]
|
|
726
|
+
)
|
|
727
|
+
|
|
728
|
+
# Mix auto-detection and explicit configuration
|
|
729
|
+
supervisor = Fractor::Supervisor.new(
|
|
730
|
+
worker_pools: [
|
|
731
|
+
{ worker_class: FastWorker }, # Auto-detected
|
|
732
|
+
{ worker_class: HeavyWorker, num_workers: 2 } # Explicitly 2 workers
|
|
733
|
+
]
|
|
734
|
+
)
|
|
487
735
|
----
|
|
488
736
|
|
|
489
|
-
|
|
737
|
+
The auto-detection uses Ruby's `Etc.nprocessors` which returns the number of
|
|
738
|
+
available processors. If detection fails for any reason, it falls back to 2
|
|
739
|
+
workers.
|
|
740
|
+
|
|
741
|
+
[TIP]
|
|
742
|
+
====
|
|
743
|
+
* Use auto-detection for portable code that adapts to different environments
|
|
744
|
+
* Explicitly set `num_workers` when you need precise control over resource usage
|
|
745
|
+
* Consider system load and other factors when choosing explicit values
|
|
746
|
+
====
|
|
747
|
+
|
|
748
|
+
==== Adding work
|
|
490
749
|
|
|
491
|
-
You can
|
|
750
|
+
You can add work items individually or in batches:
|
|
492
751
|
|
|
493
752
|
[source,ruby]
|
|
494
753
|
----
|
|
495
|
-
|
|
496
|
-
|
|
497
|
-
|
|
498
|
-
|
|
499
|
-
|
|
500
|
-
|
|
501
|
-
|
|
754
|
+
# Add a single item
|
|
755
|
+
supervisor.add_work_item(MyWork.new(42))
|
|
756
|
+
|
|
757
|
+
# Add multiple items
|
|
758
|
+
supervisor.add_work_items([
|
|
759
|
+
MyWork.new(1),
|
|
760
|
+
MyWork.new(2),
|
|
761
|
+
MyWork.new(3),
|
|
762
|
+
MyWork.new(4),
|
|
763
|
+
MyWork.new(5)
|
|
764
|
+
])
|
|
765
|
+
|
|
766
|
+
# Add items of different work types
|
|
767
|
+
supervisor.add_work_items([
|
|
768
|
+
TextWork.new("Process this text"),
|
|
769
|
+
ImageWork.new({ width: 800, height: 600 })
|
|
770
|
+
])
|
|
502
771
|
----
|
|
503
772
|
|
|
504
|
-
|
|
773
|
+
The Supervisor can handle any Work object that inherits from Fractor::Work.
|
|
774
|
+
Workers must check the type of Work they receive and process it accordingly.
|
|
775
|
+
|
|
776
|
+
==== Running and monitoring
|
|
505
777
|
|
|
506
|
-
|
|
778
|
+
To start processing:
|
|
507
779
|
|
|
508
780
|
[source,ruby]
|
|
509
781
|
----
|
|
510
|
-
|
|
511
|
-
|
|
782
|
+
# Start processing and block until complete
|
|
783
|
+
supervisor.run
|
|
512
784
|
----
|
|
513
785
|
|
|
514
|
-
|
|
786
|
+
The Supervisor automatically handles:
|
|
787
|
+
|
|
788
|
+
* Starting the worker Ractors
|
|
789
|
+
* Distributing work items to available workers
|
|
790
|
+
* Collecting results and errors
|
|
791
|
+
* Graceful shutdown on completion or interruption (Ctrl+C)
|
|
792
|
+
|
|
793
|
+
=== ResultAggregator for pipeline mode
|
|
515
794
|
|
|
516
795
|
==== Purpose and responsibilities
|
|
517
796
|
|
|
518
797
|
The `Fractor::ResultAggregator` collects and organizes all results from the
|
|
519
798
|
workers, separating successful results from errors.
|
|
520
799
|
|
|
521
|
-
|
|
522
|
-
|
|
523
|
-
* For order independent results, the results may be utilized (popped) as they
|
|
524
|
-
are received.
|
|
525
|
-
|
|
526
|
-
* For order dependent results, the results are aggregated in the order they
|
|
527
|
-
are received. The order of results is important for re-assembly or
|
|
528
|
-
further processing.
|
|
529
|
-
|
|
530
|
-
* For results that require aggregation, the `ResultsAggregator` is used to determine
|
|
531
|
-
whether the results are completed, which signify that all work items have
|
|
532
|
-
been processed and ready for further processing.
|
|
533
|
-
|
|
800
|
+
In pipeline mode, results are collected throughout processing and accessed
|
|
801
|
+
after the supervisor finishes running.
|
|
534
802
|
|
|
535
803
|
==== Accessing results
|
|
536
804
|
|
|
537
|
-
|
|
805
|
+
After processing completes:
|
|
538
806
|
|
|
539
807
|
[source,ruby]
|
|
540
808
|
----
|
|
541
|
-
# Get
|
|
542
|
-
|
|
809
|
+
# Get the ResultAggregator
|
|
810
|
+
aggregator = supervisor.results
|
|
811
|
+
|
|
812
|
+
# Check counts
|
|
813
|
+
puts "Processed #{aggregator.results.size} items successfully"
|
|
814
|
+
puts "Encountered #{aggregator.errors.size} errors"
|
|
815
|
+
|
|
816
|
+
# Access successful results
|
|
817
|
+
aggregator.results.each do |result|
|
|
818
|
+
puts "Work item #{result.work.input} produced #{result.result}"
|
|
819
|
+
end
|
|
820
|
+
|
|
821
|
+
# Access errors
|
|
822
|
+
aggregator.errors.each do |error_result|
|
|
823
|
+
puts "Work item #{error_result.work.input} failed: #{error_result.error}"
|
|
824
|
+
end
|
|
825
|
+
----
|
|
826
|
+
|
|
827
|
+
To access successful results:
|
|
828
|
+
|
|
829
|
+
[source,ruby]
|
|
830
|
+
----
|
|
831
|
+
# Get all successful results
|
|
832
|
+
successful_results = supervisor.results.results
|
|
543
833
|
|
|
544
834
|
# Extract just the result values
|
|
545
835
|
result_values = successful_results.map(&:result)
|
|
546
836
|
----
|
|
547
837
|
|
|
548
|
-
To access errors:
|
|
838
|
+
To access errors:
|
|
839
|
+
|
|
840
|
+
[source,ruby]
|
|
841
|
+
----
|
|
842
|
+
# Get all error results
|
|
843
|
+
error_results = supervisor.results.errors
|
|
844
|
+
|
|
845
|
+
# Extract error messages
|
|
846
|
+
error_messages = error_results.map(&:error)
|
|
847
|
+
|
|
848
|
+
# Get the work items that failed
|
|
849
|
+
failed_work_items = error_results.map(&:work)
|
|
850
|
+
----
|
|
851
|
+
|
|
852
|
+
|
|
853
|
+
[TIP]
|
|
854
|
+
====
|
|
855
|
+
* Check both successful results and errors after processing completes
|
|
856
|
+
* Consider implementing custom reporting based on the aggregated results
|
|
857
|
+
====
|
|
858
|
+
|
|
859
|
+
|
|
860
|
+
|
|
861
|
+
|
|
862
|
+
== Pipeline mode patterns
|
|
863
|
+
|
|
864
|
+
=== Custom work distribution
|
|
865
|
+
|
|
866
|
+
For more complex scenarios, you might want to prioritize certain work items:
|
|
867
|
+
|
|
868
|
+
[source,ruby]
|
|
869
|
+
----
|
|
870
|
+
# Create Work objects for high priority items
|
|
871
|
+
high_priority_works = high_priority_items.map { |item| MyWork.new(item) }
|
|
872
|
+
|
|
873
|
+
# Add high-priority items first
|
|
874
|
+
supervisor.add_work_items(high_priority_works)
|
|
875
|
+
|
|
876
|
+
# Run with just enough workers for high-priority items
|
|
877
|
+
supervisor.run
|
|
878
|
+
|
|
879
|
+
# Create Work objects for lower priority items
|
|
880
|
+
low_priority_works = low_priority_items.map { |item| MyWork.new(item) }
|
|
881
|
+
|
|
882
|
+
# Add and process lower-priority items
|
|
883
|
+
supervisor.add_work_items(low_priority_works)
|
|
884
|
+
supervisor.run
|
|
885
|
+
----
|
|
886
|
+
|
|
887
|
+
=== Handling large datasets
|
|
888
|
+
|
|
889
|
+
For very large datasets, consider processing in batches:
|
|
890
|
+
|
|
891
|
+
[source,ruby]
|
|
892
|
+
----
|
|
893
|
+
large_dataset.each_slice(1000) do |batch|
|
|
894
|
+
# Convert batch items to Work objects
|
|
895
|
+
work_batch = batch.map { |item| MyWork.new(item) }
|
|
896
|
+
|
|
897
|
+
supervisor.add_work_items(work_batch)
|
|
898
|
+
supervisor.run
|
|
899
|
+
|
|
900
|
+
# Process this batch's results before continuing
|
|
901
|
+
process_batch_results(supervisor.results)
|
|
902
|
+
end
|
|
903
|
+
----
|
|
904
|
+
|
|
905
|
+
=== Multi-work type processing
|
|
906
|
+
|
|
907
|
+
The Multi-Work Type pattern demonstrates how a single supervisor and worker can
|
|
908
|
+
handle multiple types of work items.
|
|
909
|
+
|
|
910
|
+
[source,ruby]
|
|
911
|
+
----
|
|
912
|
+
class UniversalWorker < Fractor::Worker
|
|
913
|
+
def process(work)
|
|
914
|
+
case work
|
|
915
|
+
when TextWork
|
|
916
|
+
process_text(work)
|
|
917
|
+
when ImageWork
|
|
918
|
+
process_image(work)
|
|
919
|
+
else
|
|
920
|
+
Fractor::WorkResult.new(
|
|
921
|
+
error: "Unknown work type: #{work.class}",
|
|
922
|
+
work: work
|
|
923
|
+
)
|
|
924
|
+
end
|
|
925
|
+
end
|
|
926
|
+
|
|
927
|
+
private
|
|
928
|
+
|
|
929
|
+
def process_text(work)
|
|
930
|
+
result = work.text.upcase
|
|
931
|
+
Fractor::WorkResult.new(result: result, work: work)
|
|
932
|
+
end
|
|
933
|
+
|
|
934
|
+
def process_image(work)
|
|
935
|
+
result = { width: work.width * 2, height: work.height * 2 }
|
|
936
|
+
Fractor::WorkResult.new(result: result, work: work)
|
|
937
|
+
end
|
|
938
|
+
end
|
|
939
|
+
|
|
940
|
+
# Add different types of work
|
|
941
|
+
supervisor.add_work_items([
|
|
942
|
+
TextWork.new("hello"),
|
|
943
|
+
ImageWork.new(width: 100, height: 100),
|
|
944
|
+
TextWork.new("world")
|
|
945
|
+
])
|
|
946
|
+
----
|
|
947
|
+
|
|
948
|
+
=== Hierarchical work processing
|
|
949
|
+
|
|
950
|
+
The Producer/Subscriber pattern showcases processing that generates sub-work:
|
|
951
|
+
|
|
952
|
+
[source,ruby]
|
|
953
|
+
----
|
|
954
|
+
# First pass: Process documents
|
|
955
|
+
supervisor.add_work_items(documents.map { |doc| DocumentWork.new(doc) })
|
|
956
|
+
supervisor.run
|
|
957
|
+
|
|
958
|
+
# Collect sections generated from documents
|
|
959
|
+
sections = supervisor.results.results.flat_map do |result|
|
|
960
|
+
result.result[:sections]
|
|
961
|
+
end
|
|
962
|
+
|
|
963
|
+
# Second pass: Process sections
|
|
964
|
+
supervisor.add_work_items(sections.map { |section| SectionWork.new(section) })
|
|
965
|
+
supervisor.run
|
|
966
|
+
----
|
|
967
|
+
|
|
968
|
+
=== Pipeline stages
|
|
969
|
+
|
|
970
|
+
The Pipeline Processing pattern implements multi-stage transformation:
|
|
971
|
+
|
|
972
|
+
[source,ruby]
|
|
973
|
+
----
|
|
974
|
+
# Stage 1: Extract data
|
|
975
|
+
supervisor1 = Fractor::Supervisor.new(
|
|
976
|
+
worker_pools: [{ worker_class: ExtractionWorker }]
|
|
977
|
+
)
|
|
978
|
+
supervisor1.add_work_items(raw_data.map { |d| ExtractionWork.new(d) })
|
|
979
|
+
supervisor1.run
|
|
980
|
+
extracted = supervisor1.results.results.map(&:result)
|
|
981
|
+
|
|
982
|
+
# Stage 2: Transform data
|
|
983
|
+
supervisor2 = Fractor::Supervisor.new(
|
|
984
|
+
worker_pools: [{ worker_class: TransformWorker }]
|
|
985
|
+
)
|
|
986
|
+
supervisor2.add_work_items(extracted.map { |e| TransformWork.new(e) })
|
|
987
|
+
supervisor2.run
|
|
988
|
+
transformed = supervisor2.results.results.map(&:result)
|
|
989
|
+
|
|
990
|
+
# Stage 3: Load data
|
|
991
|
+
supervisor3 = Fractor::Supervisor.new(
|
|
992
|
+
worker_pools: [{ worker_class: LoadWorker }]
|
|
993
|
+
)
|
|
994
|
+
supervisor3.add_work_items(transformed.map { |t| LoadWork.new(t) })
|
|
995
|
+
supervisor3.run
|
|
996
|
+
----


== Continuous mode components

=== General

This section describes the components and their detailed usage specifically for
continuous mode (long-running servers). For pipeline mode, see the Pipeline mode
components section.

Continuous mode offers two approaches: a low-level API for manual control, and
high-level primitives that eliminate boilerplate code.

=== Low-level components

==== General

The low-level API provides manual control over continuous mode operation. This
approach is useful when you need fine-grained control over threading, work
sources, or results processing.

Use the low-level API when:

* You need custom thread management
* Your work source logic is complex
* You require precise control over the supervisor lifecycle
* You're integrating with existing thread pools or event loops

For most applications, the high-level primitives (described in the next section)
are recommended as they eliminate significant boilerplate code.

==== Supervisor with `continuous_mode: true`

To enable continuous mode, set the `continuous_mode` option:

[source,ruby]
----
supervisor = Fractor::Supervisor.new(
  worker_pools: [
    { worker_class: MyWorker, num_workers: 2 }
  ],
  continuous_mode: true # Enable continuous mode
)
----

==== Work source callbacks

Register a callback that provides new work on demand:

[source,ruby]
----
supervisor.register_work_source do
  # Return nil or an empty array if no work is available
  # Return a work item or array of work items when available
  items = get_next_work_items
  if items && !items.empty?
    # Convert to Work objects if needed
    items.map { |item| MyWork.new(item) }
  else
    nil
  end
end
----

The callback is polled every 100ms by an internal timer thread.
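The polling behavior itself can be illustrated with stdlib Ruby (a sketch of the described mechanism, not Fractor's internal code): a timer thread wakes on a fixed interval and drains whatever the source currently holds.

```ruby
# Stdlib sketch of interval polling (not Fractor's internals): a timer
# thread wakes every 100ms and drains whatever the source currently holds.
source    = Queue.new   # stands in for your work source
forwarded = []

source << :job1
source << :job2

poller = Thread.new do
  3.times do
    forwarded << source.pop until source.empty?
    sleep 0.1             # the 100ms interval described above
  end
end
poller.join
# forwarded now holds [:job1, :job2]
```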

==== Manual thread management

You must manually manage threads and results processing:

[source,ruby]
----
# Start supervisor in a background thread
supervisor_thread = Thread.new { supervisor.run }

# Start results processing thread
results_thread = Thread.new do
  loop do
    # Process results
    while (result = supervisor.results.results.shift)
      handle_result(result)
    end

    # Process errors
    while (error = supervisor.results.errors.shift)
      handle_error(error)
    end

    sleep 0.1
  end
end

# Ensure cleanup on shutdown
begin
  supervisor_thread.join
rescue Interrupt
  supervisor.stop
ensure
  results_thread.kill
  supervisor_thread.join
end
----

=== High-level components

==== General

Fractor provides high-level primitives that dramatically simplify continuous
mode applications by eliminating boilerplate code.

These primitives solve common problems:

* *Thread management*: Automatic supervisor and results processing threads
* *Queue synchronization*: Thread-safe work queue with automatic integration
* *Results processing*: Callback-based handling instead of manual loops
* *Signal handling*: Built-in support for SIGINT, SIGTERM, SIGUSR1/SIGBREAK
* *Graceful shutdown*: Coordinated cleanup across all threads

Real-world benefits:

* The chat server example was reduced from 279 lines to 167 lines (a 40% reduction)
* Eliminates ~112 lines of thread, queue, and signal handling boilerplate
* Simpler, more maintainable code with fewer error-prone details

==== Fractor::WorkQueue

===== Purpose and responsibilities

`Fractor::WorkQueue` provides a thread-safe queue for continuous mode
applications. It handles work item storage and integrates automatically with the
supervisor's work source mechanism.

===== Thread-safety

The WorkQueue is *thread-safe* but not *Ractor-safe*:

* *Thread-safe*: Multiple threads can safely push work items concurrently
* *Not Ractor-safe*: The queue lives in the main process and cannot be shared
across Ractor boundaries

This design is intentional. The WorkQueue operates in the main process where
your application code runs. Work items are retrieved by the Supervisor (also in
the main process) and then sent to worker Ractors.

.WorkQueue architecture
[source]
----
Main Process
├─→ Your application threads (push to WorkQueue)
├─→ WorkQueue (thread-safe, lives here)
├─→ Supervisor (polls WorkQueue)
│     └─→ Sends work to Worker Ractors
└─→ Worker Ractors (receive frozen/shareable work items)
----

===== Creating a WorkQueue

[source,ruby]
----
work_queue = Fractor::WorkQueue.new
----

===== Adding work items

Use the `<<` operator for thread-safe push operations:

[source,ruby]
----
# From any thread in your application
work_queue << MyWork.new(data)

# Thread-safe even from multiple threads
threads = 10.times.map do |i|
  Thread.new do
    100.times do |j|
      work_queue << MyWork.new("thread-#{i}-item-#{j}")
    end
  end
end
threads.each(&:join)
----

===== Checking queue status

[source,ruby]
----
# Check if queue is empty
if work_queue.empty?
  puts "No work available"
end

# Get current queue size
puts "Queue has #{work_queue.size} items"
----

===== Integration with Supervisor

The WorkQueue integrates automatically with ContinuousServer (see next section).
For manual integration with a Supervisor:

[source,ruby]
----
supervisor = Fractor::Supervisor.new(
  worker_pools: [{ worker_class: MyWorker }],
  continuous_mode: true
)

# Register the work queue as a work source
work_queue.register_with_supervisor(supervisor)

# Now the supervisor will automatically poll the queue for work
----

==== Fractor::ContinuousServer

===== Purpose and responsibilities

`Fractor::ContinuousServer` is a high-level wrapper that handles all the
complexity of running a continuous mode application. It manages:

* Supervisor thread lifecycle
* Results processing thread with callback system
* Signal handling (SIGINT, SIGTERM, SIGUSR1/SIGBREAK)
* Graceful shutdown coordination
* Optional logging

===== Creating a ContinuousServer

[source,ruby]
----
server = Fractor::ContinuousServer.new(
  worker_pools: [
    { worker_class: MessageWorker, num_workers: 4 }
  ],
  work_queue: work_queue,      # Optional, auto-registers if provided
  log_file: 'logs/server.log'  # Optional
)
----

Parameters:

* `worker_pools` (required): Array of worker pool configurations
* `work_queue` (optional): A Fractor::WorkQueue instance to auto-register
* `log_file` (optional): Path for log output

===== Registering callbacks

Define how to handle results and errors:

[source,ruby]
----
# Handle successful results
server.on_result do |result|
  # result is a Fractor::WorkResult with result.result containing your data
  puts "Success: #{result.result}"
  # Send response to client, update database, etc.
end

# Handle errors
server.on_error do |error_result|
  # error_result is a Fractor::WorkResult with error_result.error containing the message
  puts "Error: #{error_result.error}"
  # Log error, send notification, etc.
end
----

===== Running the server

[source,ruby]
----
# Blocking: Run the server (blocks until shutdown signal)
server.run

# Non-blocking: Run in background thread
server_thread = Thread.new { server.run }

# Your application continues here...
# Add work to queue as needed
work_queue << MyWork.new(data)

# Later, stop the server
server.stop
server_thread.join
----

===== Signal handling

The ContinuousServer automatically handles:

* *SIGINT* (Ctrl+C): Graceful shutdown
* *SIGTERM*: Graceful shutdown (production deployment)
* *SIGUSR1* (Unix) / *SIGBREAK* (Windows): Status output

No additional code is needed; signals work automatically.

===== Graceful shutdown

When a shutdown signal is received, the server:

. Stops accepting new work from the work queue
. Allows in-progress work to complete (within ~2 seconds)
. Processes remaining results through callbacks
. Cleans up all threads and resources
. Returns from the `run` method
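The drain-then-stop sequence can be sketched with stdlib Ruby (an illustration of the steps above, not Fractor's implementation): a shared flag stops intake, while the worker finishes whatever is already queued before exiting.

```ruby
# Stdlib sketch of drain-then-stop shutdown (not Fractor's implementation).
queue   = Queue.new
done    = []
running = true

worker = Thread.new do
  loop do
    item = begin
      queue.pop(true)          # non-blocking pop
    rescue ThreadError
      nil
    end
    if item
      done << item             # step 2: complete in-progress work
    elsif !running
      break                    # exit only when intake is stopped AND drained
    else
      sleep 0.01               # idle until work arrives or shutdown begins
    end
  end
end

3.times { |i| queue << i }
sleep 0.1                      # let the worker pick the items up
running = false                # step 1: stop accepting new work
worker.join                    # steps 3-5: drain completes, thread cleaned up
```

Because the worker only exits when the flag is down *and* the queue is empty, no accepted item is lost during shutdown.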

===== Programmatic shutdown

[source,ruby]
----
# Stop the server programmatically
server.stop

# The run method will return shortly after
----

==== Integration architecture

The high-level components work together seamlessly:

.Complete architecture diagram
[source]
----
┌───────────────────────────────────────────────────────────┐
│                       Main Process                        │
│                                                           │
│  ┌──────────────┐     ┌──────────────────────────────┐    │
│  │  Your App    │────>│  WorkQueue (thread-safe)     │    │
│  │ (any thread) │     │  - Thread::Queue internally  │    │
│  └──────────────┘     └──────────────────────────────┘    │
│                                    │                      │
│                                    │ polled every 100ms   │
│                                    ▼                      │
│  ┌────────────────────────────────────────────────────┐   │
│  │                 ContinuousServer                   │   │
│  │  ┌─────────────────────────────────────────────┐   │   │
│  │  │  Supervisor Thread                          │   │   │
│  │  │  - Manages worker Ractors                   │   │   │
│  │  │  - Distributes work                         │   │   │
│  │  │  - Coordinates shutdown                     │   │   │
│  │  └─────────────────────────────────────────────┘   │   │
│  │                      │                             │   │
│  │                      ▼                             │   │
│  │  ┌─────────────────────────────────────────────┐   │   │
│  │  │  Worker Ractors (parallel execution)        │   │   │
│  │  │  - Ractor 1: WorkerInstance.process(work)   │   │   │
│  │  │  - Ractor 2: WorkerInstance.process(work)   │   │   │
│  │  │  - Ractor N: WorkerInstance.process(work)   │   │   │
│  │  └─────────────────────────────────────────────┘   │   │
│  │                      │                             │   │
│  │                      ▼ (WorkResults)               │   │
│  │  ┌─────────────────────────────────────────────┐   │   │
│  │  │  Results Processing Thread                  │   │   │
│  │  │  - on_result callback for successes         │   │   │
│  │  │  - on_error callback for failures           │   │   │
│  │  └─────────────────────────────────────────────┘   │   │
│  │                                                    │   │
│  │  ┌─────────────────────────────────────────────┐   │   │
│  │  │  Signal Handler Thread                      │   │   │
│  │  │  - SIGINT/SIGTERM: Shutdown                 │   │   │
│  │  │  - SIGUSR1/SIGBREAK: Status                 │   │   │
│  │  └─────────────────────────────────────────────┘   │   │
│  └────────────────────────────────────────────────────┘   │
└───────────────────────────────────────────────────────────┘
----

Key points:

* WorkQueue lives in the main process (thread-safe, not Ractor-safe)
* Supervisor polls WorkQueue and distributes to Ractors
* Work items must be frozen/shareable to cross the Ractor boundary
* Results come back through callbacks, not batch collection
* All thread management is automatic
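The "frozen/shareable" requirement comes from Ruby itself, not from Fractor: only Ractor-shareable objects may cross a Ractor boundary. A plain-Ruby check (Ruby 3.0+):

```ruby
# Plain-Ruby illustration (Ruby 3.0+) of the shareability requirement:
# a mutable hash is not shareable until it is deep-frozen.
work = { id: 1, text: "hello" }
puts Ractor.shareable?(work)        # false: the hash and string are mutable

Ractor.make_shareable(work)         # deep-freezes the hash and its values
puts Ractor.shareable?(work)        # true: now safe to send to a worker Ractor
```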


== Continuous mode patterns

=== Basic server with callbacks

The most common pattern uses WorkQueue + ContinuousServer:

[source,ruby]
----
require 'fractor'

# Define work and worker
class RequestWork < Fractor::Work
  def initialize(request_id, data)
    super({ request_id: request_id, data: data })
  end
end

class RequestWorker < Fractor::Worker
  def process(work)
    # Process the request
    result = perform_computation(work.input[:data])

    Fractor::WorkResult.new(
      result: { request_id: work.input[:request_id], response: result },
      work: work
    )
  rescue => e
    Fractor::WorkResult.new(error: e.message, work: work)
  end

  private

  def perform_computation(data)
    # Your business logic here
    data.upcase
  end
end

# Set up server
work_queue = Fractor::WorkQueue.new

server = Fractor::ContinuousServer.new(
  worker_pools: [{ worker_class: RequestWorker, num_workers: 4 }],
  work_queue: work_queue
)

server.on_result { |result| puts "Success: #{result.result}" }
server.on_error { |error| puts "Error: #{error.error}" }

# Run server in a background thread
Thread.new { server.run }

# Application logic adds work as needed
work_queue << RequestWork.new(1, "hello")
work_queue << RequestWork.new(2, "world")

sleep # Keep main thread alive
----

=== Event-driven processing

Process events from external sources as they arrive:

[source,ruby]
----
# Event source (could be webhooks, message queue, etc.)
event_source = EventSource.new

# Set up work queue and server
work_queue = Fractor::WorkQueue.new
server = Fractor::ContinuousServer.new(
  worker_pools: [{ worker_class: EventWorker, num_workers: 8 }],
  work_queue: work_queue
)

server.on_result do |result|
  # Publish result to subscribers
  publish_event(result.result)
end

# Event loop adds work to queue
event_source.on_event do |event|
  work_queue << EventWork.new(event)
end

# Start server
server.run
----

=== Dynamic work sources

Combine multiple work sources:

[source,ruby]
----
work_queue = Fractor::WorkQueue.new

# Source 1: HTTP requests
http_server.on_request do |request|
  work_queue << HttpWork.new(request)
end

# Source 2: Message queue
message_queue.subscribe do |message|
  work_queue << MessageWork.new(message)
end

# Source 3: Scheduled tasks
scheduler.every('1m') do
  work_queue << ScheduledWork.new(Time.now)
end

# Single server processes all work types
server = Fractor::ContinuousServer.new(
  worker_pools: [
    { worker_class: HttpWorker, num_workers: 4 },
    { worker_class: MessageWorker, num_workers: 2 },
    { worker_class: ScheduledWorker, num_workers: 1 }
  ],
  work_queue: work_queue
)

server.run
----

=== Graceful shutdown strategies

==== Signal-based shutdown (production)

[source,ruby]
----
# Server automatically handles SIGTERM
server = Fractor::ContinuousServer.new(
  worker_pools: [{ worker_class: MyWorker }],
  work_queue: work_queue,
  log_file: '/var/log/myapp/server.log'
)

# Just run the server - signals are handled automatically
server.run

# In production:
#   systemctl stop myapp    # Sends SIGTERM
#   docker stop container   # Sends SIGTERM
#   kill -TERM <pid>        # Manual SIGTERM
----

==== Time-based shutdown

[source,ruby]
----
server_thread = Thread.new { server.run }

# Run for a specific duration
sleep 3600 # Run for 1 hour
server.stop
server_thread.join
----

==== Condition-based shutdown

[source,ruby]
----
server_thread = Thread.new { server.run }

# Monitor thread checks conditions
monitor = Thread.new do
  loop do
    if should_shutdown?
      server.stop
      break
    end
    sleep 10
  end
end

server_thread.join
monitor.kill
----

=== Before/after comparison

The chat server example demonstrates the real-world impact of using the
high-level primitives.

==== Before: Low-level API (279 lines)

Required manual management of:

* Supervisor thread creation and lifecycle (~15 lines)
* Results processing thread with loops (~50 lines)
* Queue creation and synchronization (~10 lines)
* Signal handling setup (~15 lines)
* Thread coordination and shutdown (~20 lines)
* IO.select event loop (~110 lines)
* Manual error handling throughout (~59 lines)

==== After: High-level primitives (167 lines)

Eliminated boilerplate:

* WorkQueue handles queue and synchronization (automatic)
* ContinuousServer manages all threads (automatic)
* Callbacks replace manual results loops (automatic)
* Signal handling built-in (automatic)
* Graceful shutdown coordinated (automatic)

Result: *40% code reduction* (112 fewer lines), simpler architecture, fewer
error-prone details.

See link:examples/continuous_chat_fractor/chat_server.rb[the refactored chat
server] for the complete example.


== Process monitoring and logging

=== Status monitoring and health checks

The SIGUSR1 signal (or SIGBREAK on Windows) can be used for health checks.

When the signal is received, the supervisor prints its current status to
standard output.

[example]
Sending the signal:

Unix:

[source,sh]
----
# Send SIGUSR1 to the supervisor process
kill -USR1 <pid>
----

Windows:

[source,sh]
----
# Send SIGBREAK to the supervisor process
kill -BREAK <pid>
----

Output:

[source]
----
=== Fractor Supervisor Status ===
Mode: Continuous
Running: true
Workers: 4
Idle workers: 2
Queue size: 15
Results: 127
Errors: 3
----
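Under the hood, this kind of status hook is ordinary `Signal.trap`. A stdlib sketch of the mechanism (not Fractor's actual handler; Unix only, since SIGUSR1 does not exist on Windows):

```ruby
# Stdlib sketch of a status signal handler (not Fractor's actual code).
# Unix only: SIGUSR1 is unavailable on Windows.
requests = []
Signal.trap('USR1') { requests << :status_requested }

Process.kill('USR1', Process.pid)   # what `kill -USR1 <pid>` does externally
sleep 0.05                          # give the trap a chance to run

puts "status requests received: #{requests.size}"
```

Trap handlers run at safe points in the main thread, which is why a long-running supervisor can answer status queries without interrupting its workers.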

=== Logging

Fractor supports logging of its operations to a specified log file.

For ContinuousServer, pass the `log_file` parameter:

[source,ruby]
----
server = Fractor::ContinuousServer.new(
  worker_pools: [{ worker_class: MyWorker }],
  work_queue: work_queue,
  log_file: 'logs/server.log'
)
----

For manual Supervisor usage, set the `FRACTOR_LOG_FILE` environment variable
before starting your application:

[source,sh]
----
export FRACTOR_LOG_FILE=/path/to/logs/server.log
ruby my_fractor_app.rb
----

The log file will contain detailed information about the supervisor's
operations, including worker activity, work distribution, results, and errors.

.Examples of accessing logs
[example]
[source,sh]
----
# Check if server is responsive (Unix/Linux/macOS)
kill -USR1 <pid> && tail -f /path/to/logs/server.log

# Monitor with systemd
systemctl status fractor-server
journalctl -u fractor-server -f

# Monitor with Docker
docker logs -f <container_id>
----


== Signal handling

=== General

Fractor provides production-ready signal handling for process control and
monitoring. The framework supports different signals depending on the operating
system, enabling graceful shutdown and runtime status monitoring.

=== Unix signals (Linux, macOS, Unix)

==== SIGINT (Ctrl+C)

Interactive interrupt signal for graceful shutdown.

Usage:

* Press `Ctrl+C` in the terminal running Fractor
* Behavior depends on mode:
** *Batch mode*: Stops immediately after current work completes
** *Continuous mode*: Initiates graceful shutdown
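The same signal can also be sent from another terminal instead of the keyboard (`<pid>` is the supervisor's process ID, as in the `kill` examples for the other signals):

```shell
# Equivalent to pressing Ctrl+C in the foreground terminal
kill -INT <pid>
```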

==== SIGTERM

Standard Unix termination signal, preferred for production deployments.

This ensures a graceful shutdown of the Fractor supervisor and its workers.

Usage:

[source,sh]
----
kill -TERM <pid>
# or simply
kill <pid> # SIGTERM is the default
----

Typical signals from service managers:

* Systemd sends SIGTERM on `systemctl stop`
* Docker sends SIGTERM on `docker stop`
* Kubernetes sends SIGTERM during pod termination

[source,ini]
----
# Example systemd service
[Service]
ExecStart=/usr/bin/ruby /path/to/fractor_server.rb
KillMode=process
KillSignal=SIGTERM
TimeoutStopSec=30
----

==== SIGUSR1

Real-time status monitoring without stopping the process.

Usage:

[source,sh]
----
kill -USR1 <pid>
----

Output example:

[example]
[source]
----
=== Fractor Supervisor Status ===
Mode: Continuous
Running: true
Workers: 4
Idle workers: 2
Queue size: 15
Results: 127
Errors: 3
----

=== Windows signals

==== SIGBREAK (Ctrl+Break)

Windows alternative to SIGUSR1 for status monitoring.

Usage:

* Press `Ctrl+Break` in the terminal running Fractor
* Same output as SIGUSR1 on Unix

[NOTE]
SIGUSR1 is not available on Windows. Use `Ctrl+Break` instead for status
monitoring on Windows platforms.


=== Signal behavior by mode

==== Batch mode

In batch processing mode:

* SIGINT/SIGTERM: Stops immediately after current work completes
* SIGUSR1/SIGBREAK: Displays current status

==== Continuous mode

In continuous mode (long-running servers):

* SIGINT/SIGTERM: Graceful shutdown within ~2 seconds
** Stops accepting new work
** Completes in-progress work
** Cleans up resources
* SIGUSR1/SIGBREAK: Displays current status
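A typical operational sequence against a continuous-mode server combines the two signals (`<pid>` is the server's process ID; Unix only):

```shell
# Inspect first, then shut down gracefully (Unix)
kill -USR1 <pid>   # print the status block to the server's stdout
kill -TERM <pid>   # graceful shutdown, completes within ~2 seconds
```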
|
|
1790
|
+
|
|
1791
|
+
|
|
734
1792
|
|
|
735
1793
|
|
|
736
1794
|
== Running a basic example
|
|
@@ -756,7 +1814,10 @@ class MyWorker < Fractor::Worker
|
|
|
756
1814
|
def process(work)
|
|
757
1815
|
if work.input == 5
|
|
758
1816
|
# Return a Fractor::WorkResult for errors
|
|
759
|
-
return Fractor::WorkResult.new(
|
|
1817
|
+
return Fractor::WorkResult.new(
|
|
1818
|
+
error: "Error processing work #{work.input}",
|
|
1819
|
+
work: work
|
|
1820
|
+
)
|
|
760
1821
|
end
|
|
761
1822
|
|
|
762
1823
|
calculated = work.input * 2
|
|
@@ -798,81 +1859,6 @@ the final aggregated results, including any errors encountered. Press `Ctrl+C`
 during execution to test the graceful shutdown.
 
 
-== Continuous mode
-
-=== General
-
-Fractor provides a powerful feature called "continuous mode" that allows
-supervisors to run indefinitely, processing work items as they arrive without
-stopping after the initial work queue is empty.
-
-=== Features
-
-* *Non-stopping Execution*: Supervisors run indefinitely until explicitly stopped
-* *On-demand Work*: Workers only process work when it's available
-* *Resource Efficiency*: Workers idle when no work is available, without consuming excessive resources
-* *Dynamic Work Addition*: New work can be added at any time through the work source callback
-* *Graceful Shutdown*: Resources are properly cleaned up when the supervisor is stopped
-
-Continuous mode is particularly useful for:
-
-* *Chat servers*: Processing incoming messages as they arrive
-* *Background job processors*: Handling tasks from a job queue
-* *Real-time data processing*: Analyzing data streams as they come in
-* *Web servers*: Responding to incoming requests in parallel
-* *Monitoring systems*: Continuously checking system statuses
-
-See the Chat Server example in the examples directory for a complete implementation of continuous mode.
-
-
-=== Using continuous mode
-
-==== Step 1. Create a supervisor with the `continuous_mode: true` option
-
-[source,ruby]
-----
-supervisor = Fractor::Supervisor.new(
-  worker_pools: [
-    { worker_class: MyWorker, num_workers: 2 }
-  ],
-  continuous_mode: true # Enable continuous mode
-)
-----
-
-==== Step 2. Register a work source callback that provides new work on demand
-
-[source,ruby]
-----
-supervisor.register_work_source do
-  # Return nil or empty array if no work is available
-  # Return a work item or array of work items when available
-  items = get_next_work_items
-  if items && !items.empty?
-    # Convert to Work objects if needed
-    items.map { |item| MyWork.new(item) }
-  else
-    nil
-  end
-end
-----
-
-==== Step 3. Run the supervisor in a non-blocking way
-
-Typically in a background thread.
-
-[source,ruby]
-----
-supervisor_thread = Thread.new { supervisor.run }
-----
-
-==== Step 4. Explicitly call `stop` on the supervisor to stop processing
-
-[source,ruby]
-----
-supervisor.stop
-supervisor_thread.join # Wait for the supervisor thread to finish
-----
-
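The contract of the `register_work_source` callback (return one or more work items when available, `nil` when idle) can be simulated without Fractor at all. In this sketch, `pending` and the doubling step are hypothetical stand-ins, and the `while` loop stands in for the supervisor's polling loop:

```ruby
# Illustrative stand-in for a continuous-mode work source: the callback
# returns an array of items when work exists, and nil to signal "idle".
pending = [1, 2, 3]

work_source = lambda do
  item = pending.shift
  item.nil? ? nil : [item] # nil tells the poller there is no work right now
end

# Minimal stand-in for the supervisor's polling loop: keep pulling from the
# work source until it reports idle, processing each batch as it arrives.
processed = []
while (items = work_source.call)
  processed.concat(items.map { |n| n * 2 })
end

puts processed.inspect # [2, 4, 6]
```

In a real continuous server the loop never exits on `nil`; it sleeps briefly and polls again, which is the behavior the callback's `nil` return is designed for.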
 
 
 == Example applications
@@ -883,7 +1869,9 @@ The Fractor gem comes with several example applications that demonstrate various
 patterns and use cases. Each example can be found in the `examples` directory of
 the gem repository. Detailed descriptions for these are provided below.
 
-===
+=== Pipeline mode examples
+
+==== Simple example
 
 The Simple Example (link:examples/simple/[examples/simple/]) demonstrates the
 basic usage of the Fractor framework. It shows how to create a simple Work
@@ -897,9 +1885,26 @@ Key features:
 * Simple Supervisor setup
 * Parallel processing of work items
 * Error handling and result aggregation
+* Auto-detection of available processors
 * Graceful shutdown on completion
 
-
+==== Auto-detection example
+
+The Auto-Detection Example
+(link:examples/auto_detection/[examples/auto_detection/]) demonstrates
+Fractor's automatic worker detection feature. It shows how to use
+auto-detection, explicit configuration, and mixed approaches for controlling
+the number of workers.
+
+Key features:
+
+* Automatic detection of available processors
+* Comparison of auto-detection vs explicit configuration
+* Mixed configuration with multiple worker pools
+* Best practices for worker configuration
+* Portable code that adapts to different environments
+
+==== Hierarchical hasher
 
 The Hierarchical Hasher example
 (link:examples/hierarchical_hasher/[examples/hierarchical_hasher/]) demonstrates
@@ -914,7 +1919,7 @@ Key features:
 * Independent processing of data segments
 * Aggregation of results to form a final output
 
-
+==== Multi-work type
 
 The Multi-Work Type example
 (link:examples/multi_work_type/[examples/multi_work_type/]) demonstrates how a
@@ -928,7 +1933,7 @@ Key features:
 * Polymorphic worker processing based on work type
 * Unified workflow for diverse tasks
 
-
+==== Pipeline processing
 
 The Pipeline Processing example
 (link:examples/pipeline_processing/[examples/pipeline_processing/]) implements a
@@ -942,7 +1947,7 @@ Key features:
 * Concurrent execution of different pipeline stages
 * Data transformation at each step of the pipeline
 
-
+==== Producer/subscriber
 
 The Producer/Subscriber example
 (link:examples/producer_subscriber/[examples/producer_subscriber/]) showcases a
@@ -956,7 +1961,7 @@ Key features:
 * Dynamic generation of sub-work based on initial processing
 * Construction of hierarchical result structures
 
-
+==== Scatter/gather
 
 The Scatter/Gather example
 (link:examples/scatter_gather/[examples/scatter_gather/]) illustrates how a
@@ -971,7 +1976,7 @@ Key features:
 * Concurrent processing of subtasks
 * Aggregation of partial results into a final result
 
-
+==== Specialized workers
 
 The Specialized Workers example
 (link:examples/specialized_workers/[examples/specialized_workers/]) demonstrates
@@ -986,10 +1991,59 @@ Key features:
 * Routing of work items to appropriately specialized workers
 * Optimization of resources and logic per task type
 
+=== Continuous mode examples
+
+==== Plain socket implementation
+
+The plain socket implementation
+(link:examples/continuous_chat_server/[examples/continuous_chat_server/])
+provides a baseline chat server using plain TCP sockets without Fractor. This
+serves as a comparison point to understand the benefits of using Fractor for
+continuous processing.
+
+==== Fractor-based implementation
+
+The Fractor-based implementation
+(link:examples/continuous_chat_fractor/[examples/continuous_chat_fractor/])
+demonstrates how to build a production-ready chat server using Fractor's
+continuous mode with high-level primitives.
+
+Key features:
+
+* *Continuous mode operation*: Server runs indefinitely processing messages as
+they arrive
+* *High-level primitives*: Uses WorkQueue and ContinuousServer to eliminate
+boilerplate
+* *Graceful shutdown*: Production-ready signal handling (SIGINT, SIGTERM,
+SIGUSR1/SIGBREAK)
+* *Callback-based results*: Clean separation of concerns with on_result and
+on_error callbacks
+* *Cross-platform support*: Works on Unix/Linux/macOS and Windows
+* *Process monitoring*: Runtime status checking via signals
+* *40% code reduction*: 167 lines vs 279 lines with low-level API
+
+The implementation includes:
+
+* `chat_common.rb`: Work and Worker class definitions for chat message
+processing
+* `chat_server.rb`: Main server using high-level primitives
+* `simulate.rb`: Test client simulator
+
+This example demonstrates production deployment patterns including:
+
+* Systemd service integration
+* Docker container deployment
+* Process monitoring and health checks
+* Graceful restart procedures
+
+See link:examples/continuous_chat_fractor/README.adoc[the chat server README]
+for detailed implementation documentation.
+
+
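The `on_result`/`on_error` callback split described for the Fractor-based chat example can be sketched with a toy dispatcher. `CallbackServer` below is hypothetical and self-contained; it borrows only the callback names from the feature list above and is not Fractor's `ContinuousServer` API:

```ruby
# Toy sketch of callback-based result handling: successes flow to on_result,
# failures to on_error, keeping processing separate from presentation.
class CallbackServer
  def initialize
    @on_result = ->(result) {} # no-op defaults until callbacks are registered
    @on_error  = ->(error) {}
  end

  def on_result(&block)
    @on_result = block
  end

  def on_error(&block)
    @on_error = block
  end

  # Process one work item with the given block, routing the outcome.
  def handle(work)
    @on_result.call(yield(work))
  rescue StandardError => e
    @on_error.call(e)
  end
end

server  = CallbackServer.new
results = []
errors  = []
server.on_result { |r| results << r }
server.on_error  { |e| errors << e.message }

server.handle(21) { |n| n * 2 }                        # success path
server.handle(0)  { |n| raise "bad input" if n.zero? } # error path

puts results.inspect # [42]
puts errors.inspect  # ["bad input"]
```

The point of the pattern is that the processing block never knows where its output goes, so the same worker logic can feed a socket, a log, or a test harness.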
 
 
 == Copyright and license
 
 Copyright Ribose.
 
-Licensed under the
+Licensed under the Ribose BSD 2-Clause License.