fractor 0.1.4 → 0.1.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. checksums.yaml +4 -4
  2. data/.rubocop-https---raw-githubusercontent-com-riboseinc-oss-guides-main-ci-rubocop-yml +552 -0
  3. data/.rubocop.yml +14 -8
  4. data/.rubocop_todo.yml +162 -46
  5. data/README.adoc +1364 -376
  6. data/examples/auto_detection/auto_detection.rb +9 -9
  7. data/examples/continuous_chat_common/message_protocol.rb +53 -0
  8. data/examples/continuous_chat_fractor/README.adoc +217 -0
  9. data/examples/continuous_chat_fractor/chat_client.rb +303 -0
  10. data/examples/continuous_chat_fractor/chat_common.rb +83 -0
  11. data/examples/continuous_chat_fractor/chat_server.rb +167 -0
  12. data/examples/continuous_chat_fractor/simulate.rb +345 -0
  13. data/examples/continuous_chat_server/README.adoc +135 -0
  14. data/examples/continuous_chat_server/chat_client.rb +303 -0
  15. data/examples/continuous_chat_server/chat_server.rb +359 -0
  16. data/examples/continuous_chat_server/simulate.rb +343 -0
  17. data/examples/hierarchical_hasher/hierarchical_hasher.rb +12 -8
  18. data/examples/multi_work_type/multi_work_type.rb +30 -29
  19. data/examples/pipeline_processing/pipeline_processing.rb +15 -15
  20. data/examples/producer_subscriber/producer_subscriber.rb +20 -16
  21. data/examples/scatter_gather/scatter_gather.rb +29 -28
  22. data/examples/simple/sample.rb +5 -5
  23. data/examples/specialized_workers/specialized_workers.rb +44 -37
  24. data/lib/fractor/continuous_server.rb +188 -0
  25. data/lib/fractor/result_aggregator.rb +1 -1
  26. data/lib/fractor/supervisor.rb +277 -104
  27. data/lib/fractor/version.rb +1 -1
  28. data/lib/fractor/work_queue.rb +68 -0
  29. data/lib/fractor/work_result.rb +1 -1
  30. data/lib/fractor/worker.rb +2 -1
  31. data/lib/fractor/wrapped_ractor.rb +12 -2
  32. data/lib/fractor.rb +2 -0
  33. metadata +15 -2
data/README.adoc CHANGED
@@ -5,9 +5,9 @@ distributing computational work across multiple Ractors.
5
5
 
6
6
  == Introduction
7
7
 
8
- Fractor stands for *Function-driven Ractors framework*. It is a lightweight Ruby
9
- framework designed to simplify the process of distributing computational work
10
- across multiple Ractors (Ruby's actor-like concurrency model).
8
+ Fractor stands for *Function-driven Ractors framework*. It is a lightweight
9
+ Ruby framework designed to simplify the process of distributing computational
10
+ work across multiple Ractors (Ruby's actor-like concurrency model).
11
11
 
12
12
  The primary goal of Fractor is to provide a structured way to define work,
13
13
  process it in parallel using Ractors, and aggregate the results, while
@@ -108,85 +108,125 @@ component that manages the pool of workers and distributes work items
108
108
 
109
109
  component that collects and organizes work results from workers
110
110
 
111
+ === pipeline mode
111
112
 
113
+ operating mode where Fractor processes a defined set of work items and then
114
+ stops
112
115
 
116
+ === continuous mode
113
117
 
114
- == Core components
118
+ operating mode where Fractor runs indefinitely, processing work items as they
119
+ arrive
115
120
 
116
- === General
117
121
 
118
- The Fractor framework consists of the following main classes, all residing
119
- within the `Fractor` module.
120
122
 
121
123
 
122
- === Fractor::Worker
124
+ == Understanding Fractor operating modes
123
125
 
124
- The abstract base class for defining how work should be processed.
126
+ === General
125
127
 
126
- Client code must subclass this and implement the `process(work)` method.
128
+ Fractor supports two distinct operating modes, each optimized for different use
129
+ cases. Understanding these modes is essential for choosing the right approach
130
+ for your application.
127
131
 
128
- The `process` method receives a `Fractor::Work` object (or a subclass) and
129
- should return a `Fractor::WorkResult` object.
132
+ === Pipeline mode (batch processing)
130
133
 
131
- === Fractor::Work
134
+ Pipeline mode is designed for processing a defined set of work items with a
135
+ clear beginning and end.
132
136
 
133
- The abstract base class for representing a unit of work.
137
+ Characteristics:
134
138
 
135
- Typically holds the input data needed by the `Worker`.
139
+ * Processes a predetermined batch of work items
140
+ * Stops automatically when all work is completed
141
+ * Results are collected and accessed after processing completes
142
+ * Ideal for one-time computations or periodic batch jobs
136
143
 
137
- Client code should subclass this to define specific types of work items.
144
+ Common use cases:
138
145
 
139
- === Fractor::WorkResult
146
+ * Processing a file or dataset
147
+ * Batch data transformations
148
+ * One-time parallel computations
149
+ * Scheduled batch jobs
150
+ * Hierarchical or multi-stage processing
140
151
 
141
- A container object returned by the `Worker#process` method.
152
+ === Continuous mode (long-running servers)
142
153
 
143
- Holds either the successful `:result` of the computation or an `:error`
144
- message if processing failed.
154
+ Continuous mode is designed for applications that need to run indefinitely,
155
+ processing work items as they arrive.
145
156
 
146
- Includes a reference back to the original `:work` item.
157
+ Characteristics:
147
158
 
148
- Provides a `success?` method.
159
+ * Runs continuously without a predetermined end
160
+ * Processes work items dynamically as they become available
161
+ * Workers idle efficiently when no work is available
162
+ * Results are processed via callbacks, not batch collection
163
+ * Supports graceful shutdown and runtime monitoring
149
164
 
150
- === Fractor::ResultAggregator
165
+ Common use cases:
151
166
 
152
- Collects and stores all `WorkResult` objects generated by the workers.
167
+ * Chat servers and messaging systems
168
+ * Background job processors
169
+ * Real-time data stream processing
170
+ * Web servers handling concurrent requests
171
+ * Monitoring and alerting systems
172
+ * Event-driven architectures
153
173
 
154
- Separates results into `results` (successful) and `errors` arrays.
174
+ === Comparison
155
175
 
156
- === Fractor::WrappedRactor
176
+ [cols="1,2,2",options="header"]
177
+ |===
178
+ |Aspect |Pipeline Mode |Continuous Mode
157
179
 
158
- Manages an individual Ruby `Ractor`.
180
+ |Duration
181
+ |Finite (stops when done)
182
+ |Indefinite (runs until stopped)
159
183
 
160
- Instantiates the client-provided `Worker` subclass within the Ractor.
184
+ |Work arrival
185
+ |All work known upfront
186
+ |Work arrives dynamically
161
187
 
162
- Handles receiving `Work` items, calling the `Worker#process` method, and
163
- yielding `WorkResult` objects (or errors) back to the `Supervisor`.
188
+ |Result handling
189
+ |Batch collection after completion
190
+ |Callback-based processing
164
191
 
165
- === Fractor::Supervisor
192
+ |Typical lifetime
193
+ |Seconds to minutes
194
+ |Hours to days/weeks
166
195
 
167
- The main orchestrator of the framework.
196
+ |Shutdown
197
+ |Automatic on completion
198
+ |Manual or signal-based
168
199
 
169
- Initializes and manages a pool of `WrappedRactor` instances.
200
+ |Best for
201
+ |Batch jobs, file processing
202
+ |Servers, streams, job queues
203
+ |===
170
204
 
171
- Manages a `work_queue` of input data.
205
+ === Decision guide
172
206
 
173
- Distributes work items (wrapped in the client's `Work` subclass) to available
174
- Ractors.
207
+ Choose *Pipeline mode* when:
175
208
 
176
- Listens for results and errors from Ractors using `Ractor.select`.
209
+ * You have a complete dataset to process
210
+ * Processing has a clear start and end
211
+ * You need all results aggregated after completion
212
+ * The task is one-time or scheduled periodically
177
213
 
178
- Uses `ResultAggregator` to store outcomes.
214
+ Choose *Continuous mode* when:
215
+
216
+ * Work arrives over time from external sources
217
+ * Your application runs as a long-lived server
218
+ * You need to process items as they arrive
219
+ * Results should be handled immediately via callbacks
179
220
 
180
- Handles graceful shutdown on `SIGINT` (Ctrl+C).
181
221
 
182
222
 
183
223
 
184
- == Quick start
224
+ == Quick start: Pipeline mode
185
225
 
186
226
  === General
187
227
 
188
- This quick start guide shows the minimum steps needed to get a simple parallel
189
- execution working with Fractor.
228
+ This quick start guide shows the minimum steps needed to get parallel batch
229
+ processing working with Fractor.
190
230
 
191
231
  === Step 1: Create a minimal Work class
192
232
 
@@ -259,20 +299,6 @@ returns an error result.
259
299
  The Supervisor class orchestrates the entire framework, managing worker Ractors,
260
300
  distributing work, and collecting results.
261
301
 
262
- It initializes pools of Ractors, each running an instance of a Worker
263
- class. The Supervisor handles the communication between the main thread and
264
- the Ractors, including sending work items and receiving results.
265
-
266
- The Supervisor also manages the work queue and the ResultAggregator, which
267
- collects and organizes all results from the workers.
268
-
269
- To set up the Supervisor, you specify worker pools, each containing a Worker class
270
- and optionally the number of workers to create. If you don't specify `num_workers`,
271
- Fractor will automatically detect the number of available processors on your system
272
- and use that value. You can create multiple worker pools with different worker types
273
- to handle different kinds of work. Each worker pool can process any type of Work
274
- object that inherits from Fractor::Work.
275
-
276
302
  [source,ruby]
277
303
  ----
278
304
  # Create the supervisor with auto-detected number of workers
@@ -282,43 +308,224 @@ supervisor = Fractor::Supervisor.new(
282
308
  ]
283
309
  )
284
310
 
285
- # Or explicitly specify the number of workers
286
- supervisor = Fractor::Supervisor.new(
287
- worker_pools: [
288
- { worker_class: MyWorker, num_workers: 4 } # Explicitly use 4 workers
289
- ]
290
- )
291
-
292
- # Add individual work items (instances of Work subclasses)
293
- supervisor.add_work_item(MyWork.new(1))
294
-
295
- # Add multiple work items
311
+ # Add work items (instances of Work subclasses)
296
312
  supervisor.add_work_items([
313
+ MyWork.new(1),
297
314
  MyWork.new(2),
298
315
  MyWork.new(3),
299
316
  MyWork.new(4),
300
317
  MyWork.new(5)
301
318
  ])
302
319
 
303
- # You can add different types of Work objects to the same supervisor
304
- supervisor.add_work_items([
305
- MyWork.new(6),
306
- OtherWork.new("data")
307
- ])
308
-
309
320
  # Run the processing
310
321
  supervisor.run
311
322
 
312
- # Access results
323
+ # Access results after completion
313
324
  puts "Results: #{supervisor.results.results.map(&:result)}"
314
325
  puts "Errors: #{supervisor.results.errors.size}"
315
326
  ----
316
327
 
317
328
  That's it! With these three simple steps, you have a working parallel processing
318
- system using Fractor.
329
+ system using Fractor in pipeline mode.
330
+
331
+
332
+
333
+
334
+ == Quick start: Continuous mode
335
+
336
+ === General
337
+
338
+ This quick start guide shows how to build a long-running server using Fractor's
339
+ high-level primitives for continuous mode. These primitives eliminate boilerplate
340
+ code for thread management, queuing, and results processing.
341
+
342
+ === Step 1: Create Work and Worker classes
343
+
344
+ Just like pipeline mode, you need Work and Worker classes:
345
+
346
+ [source,ruby]
347
+ ----
348
+ require 'fractor'
349
+
350
+ class MessageWork < Fractor::Work
351
+ def initialize(client_id, message)
352
+ super({ client_id: client_id, message: message })
353
+ end
354
+
355
+ def client_id
356
+ input[:client_id]
357
+ end
358
+
359
+ def message
360
+ input[:message]
361
+ end
362
+ end
363
+
364
+ class MessageWorker < Fractor::Worker
365
+ def process(work)
366
+ # Process the message
367
+ processed = "Echo: #{work.message}"
368
+
369
+ Fractor::WorkResult.new(
370
+ result: { client_id: work.client_id, response: processed },
371
+ work: work
372
+ )
373
+ rescue => e
374
+ Fractor::WorkResult.new(error: e.message, work: work)
375
+ end
376
+ end
377
+ ----
378
+
379
+ === Step 2: Set up WorkQueue
380
+
381
+ Create a thread-safe work queue that will hold incoming work items:
382
+
383
+ [source,ruby]
384
+ ----
385
+ # Create a thread-safe work queue
386
+ work_queue = Fractor::WorkQueue.new
387
+ ----
388
+
389
+ === Step 3: Set up ContinuousServer with callbacks
390
+
391
+ The ContinuousServer handles all the boilerplate: thread management, signal
392
+ handling, and results processing.
393
+
394
+ [source,ruby]
395
+ ----
396
+ # Create the continuous server
397
+ server = Fractor::ContinuousServer.new(
398
+ worker_pools: [
399
+ { worker_class: MessageWorker, num_workers: 4 }
400
+ ],
401
+ work_queue: work_queue, # Auto-registers as work source
402
+ log_file: 'logs/server.log' # Optional logging
403
+ )
404
+
405
+ # Define how to handle successful results
406
+ server.on_result do |result|
407
+ client_id = result.result[:client_id]
408
+ response = result.result[:response]
409
+ puts "Sending to client #{client_id}: #{response}"
410
+ # Send response to client here
411
+ end
412
+
413
+ # Define how to handle errors
414
+ server.on_error do |error_result|
415
+ puts "Error processing work: #{error_result.error}"
416
+ end
417
+ ----
418
+
419
+ === Step 4: Run and add work dynamically
420
+
421
+ Start the server and add work items as they arrive:
422
+
423
+ [source,ruby]
424
+ ----
425
+ # Start the server in a background thread
426
+ server_thread = Thread.new { server.run }
427
+
428
+ # Your application can now push work items dynamically
429
+ # For example, when a client sends a message:
430
+ work_queue << MessageWork.new(1, "Hello")
431
+ work_queue << MessageWork.new(2, "World")
432
+
433
+ # The server runs indefinitely, processing work as it arrives
434
+ # Use Ctrl+C or send SIGTERM for graceful shutdown
435
+
436
+ # Or stop programmatically
437
+ sleep 10
438
+ server.stop
439
+ server_thread.join
440
+ ----
441
+
442
+ That's it! The ContinuousServer handles all thread management, signal handling,
443
+ and graceful shutdown automatically.
444
+
445
+
446
+
447
+
448
+ == Core components
449
+
450
+ === General
451
+
452
+ The Fractor framework consists of the following main classes, all residing
453
+ within the `Fractor` module. These core components are used by both pipeline
454
+ mode and continuous mode.
455
+
456
+
457
+ === Fractor::Worker
458
+
459
+ The abstract base class for defining how work should be processed.
460
+
461
+ Client code must subclass this and implement the `process(work)` method.
462
+
463
+ The `process` method receives a `Fractor::Work` object (or a subclass) and
464
+ should return a `Fractor::WorkResult` object.
465
+
466
+ === Fractor::Work
319
467
 
468
+ The abstract base class for representing a unit of work.
469
+
470
+ Typically holds the input data needed by the `Worker`.
471
+
472
+ Client code should subclass this to define specific types of work items.
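+
+ A minimal sketch, assuming the base class stores whatever is passed to `super`
+ and exposes it through `input` (as the quick start examples above do);
+ `NumberWork` is an illustrative name:
+
+ [source,ruby]
+ ----
+ class NumberWork < Fractor::Work
+ end
+
+ work = NumberWork.new(21)
+ work.input # => 21
+ ----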
473
+
474
+ === Fractor::WorkResult
475
+
476
+ A container object returned by the `Worker#process` method.
477
+
478
+ Holds either the successful `:result` of the computation or an `:error`
479
+ message if processing failed.
480
+
481
+ Includes a reference back to the original `:work` item.
482
+
483
+ Provides a `success?` method.
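+
+ A minimal sketch of building and inspecting results, using the `result:`,
+ `error:`, and `work:` keywords shown in the quick start examples (`my_work`
+ stands in for any `Fractor::Work` instance):
+
+ [source,ruby]
+ ----
+ ok  = Fractor::WorkResult.new(result: 42, work: my_work)
+ bad = Fractor::WorkResult.new(error: "could not parse input", work: my_work)
+
+ ok.success?  # => true
+ ok.result    # => 42
+ bad.success? # => false
+ bad.error    # => "could not parse input"
+ bad.work     # => the original work item, useful for retries or reporting
+ ----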
484
+
485
+ === Fractor::ResultAggregator
486
+
487
+ Collects and stores all `WorkResult` objects generated by the workers.
488
+
489
+ Separates results into `results` (successful) and `errors` arrays.
490
+
491
+ === Fractor::WrappedRactor
492
+
493
+ Manages an individual Ruby `Ractor`.
494
+
495
+ Instantiates the client-provided `Worker` subclass within the Ractor.
496
+
497
+ Handles receiving `Work` items, calling the `Worker#process` method, and
498
+ yielding `WorkResult` objects (or errors) back to the `Supervisor`.
499
+
500
+ === Fractor::Supervisor
501
+
502
+ The main orchestrator of the framework.
503
+
504
+ Initializes and manages a pool of `WrappedRactor` instances.
505
+
506
+ Manages a `work_queue` of input data.
507
+
508
+ Distributes work items (wrapped in the client's `Work` subclass) to available
509
+ Ractors.
510
+
511
+ Listens for results and errors from Ractors using `Ractor.select`.
512
+
513
+ Uses `ResultAggregator` to store outcomes.
514
+
515
+ Handles graceful shutdown on `SIGINT` (Ctrl+C).
320
516
 
321
- == Usage
517
+
518
+
519
+
520
+ == Pipeline mode components
521
+
522
+ === General
523
+
524
+ This section describes the components and their detailed usage specifically for
525
+ pipeline mode (batch processing). For continuous mode, see the Continuous mode
526
+ components section.
527
+
528
+ Pipeline mode uses only the core components without any additional primitives.
322
529
 
323
530
  === Work class
324
531
 
@@ -371,11 +578,9 @@ end
371
578
 
372
579
  [TIP]
373
580
  ====
374
- ====
375
581
  * Keep Work objects lightweight and serializable since they will be passed
376
582
  between Ractors
377
583
  * Implement a meaningful `to_s` method for better debugging
378
- ====
379
584
  * Consider adding validation in the initializer to catch issues early
380
585
  ====
381
586
 
@@ -457,7 +662,10 @@ def process(work)
457
662
  Fractor::WorkResult.new(result: result, work: work)
458
663
  rescue StandardError => e
459
664
  # Catch and convert any unexpected exceptions to error results
460
- Fractor::WorkResult.new(error: "An unexpected error occurred: #{e.message}", work: work)
665
+ Fractor::WorkResult.new(
666
+ error: "An unexpected error occurred: #{e.message}",
667
+ work: work
668
+ )
461
669
  end
462
670
  ----
463
671
 
@@ -469,319 +677,1119 @@ end
469
677
  * Ensure all paths return a valid `WorkResult` object
470
678
  ====
471
679
 
472
- === WorkResult class
680
+ === Supervisor class for pipeline mode
473
681
 
474
682
  ==== Purpose and responsibilities
475
683
 
476
- The `Fractor::WorkResult` class is a container that holds either the successful
477
- result of processing or an error message, along with a reference to the original
478
- work item.
684
+ The `Fractor::Supervisor` class orchestrates the entire framework, managing
685
+ worker Ractors, distributing work, and collecting results.
479
686
 
480
- ==== Creating results
687
+ ==== Configuration options
481
688
 
482
- To create a successful result:
689
+ When creating a Supervisor for pipeline mode, configure worker pools:
483
690
 
484
691
  [source,ruby]
485
692
  ----
486
- # For successful processing
487
- Fractor::WorkResult.new(result: calculated_value, work: work_object)
488
- ----
489
-
490
- To create an error result:
693
+ supervisor = Fractor::Supervisor.new(
694
+ worker_pools: [
695
+ # Pool 1 - for general data processing
696
+ { worker_class: MyWorker, num_workers: 4 },
491
697
 
492
- [source,ruby]
493
- ----
494
- # For error conditions
495
- Fractor::WorkResult.new(error: "Error message", work: work_object)
698
+ # Pool 2 - for specialized image processing
699
+ { worker_class: ImageWorker, num_workers: 2 }
700
+ ]
701
+ # Note: continuous_mode defaults to false for pipeline mode
702
+ )
496
703
  ----
497
704
 
498
- ==== Checking result status
705
+ ==== Worker auto-detection
499
706
 
500
- You can check if a result was successful:
707
+ Fractor automatically detects the number of available processors on your system
708
+ and uses that value when `num_workers` is not specified. This provides optimal
709
+ resource utilization across different deployment environments without requiring
710
+ manual configuration.
501
711
 
502
712
  [source,ruby]
503
713
  ----
504
- if work_result.success?
505
- # Handle successful result
506
- processed_value = work_result.result
507
- else
508
- # Handle error
509
- error_message = work_result.error
510
- end
511
- ----
512
-
513
- ==== Accessing original work
714
+ # Auto-detect number of workers (recommended for most cases)
715
+ supervisor = Fractor::Supervisor.new(
716
+ worker_pools: [
717
+ { worker_class: MyWorker } # Will use number of available processors
718
+ ]
719
+ )
514
720
 
515
- The original work item is always available:
721
+ # Explicitly set number of workers (useful for specific requirements)
722
+ supervisor = Fractor::Supervisor.new(
723
+ worker_pools: [
724
+ { worker_class: MyWorker, num_workers: 4 } # Always use exactly 4 workers
725
+ ]
726
+ )
516
727
 
517
- [source,ruby]
518
- ----
519
- original_work = work_result.work
520
- input_value = original_work.input
728
+ # Mix auto-detection and explicit configuration
729
+ supervisor = Fractor::Supervisor.new(
730
+ worker_pools: [
731
+ { worker_class: FastWorker }, # Auto-detected
732
+ { worker_class: HeavyWorker, num_workers: 2 } # Explicitly 2 workers
733
+ ]
734
+ )
521
735
  ----
522
736
 
523
- === ResultAggregator class
524
-
525
- ==== Purpose and responsibilities
526
-
527
- The `Fractor::ResultAggregator` collects and organizes all results from the
528
- workers, separating successful results from errors.
529
-
530
- Completed work results may be order independent or order dependent.
531
-
532
- * For order independent results, the results may be utilized (popped) as they
533
- are received.
534
-
535
- * For order dependent results, the results are aggregated in the order they
536
- are received. The order of results is important for re-assembly or
537
- further processing.
538
-
539
- * For results that require aggregation, the `ResultsAggregator` is used to determine
540
- whether the results are completed, which signify that all work items have
541
- been processed and ready for further processing.
737
+ The auto-detection uses Ruby's `Etc.nprocessors`, which returns the number of
738
+ available processors. If detection fails for any reason, it falls back to 2
739
+ workers.
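+
+ The fallback behaviour is roughly equivalent to this sketch:
+
+ [source,ruby]
+ ----
+ require "etc"
+
+ # Roughly what Fractor does when num_workers is omitted
+ num_workers = begin
+   Etc.nprocessors
+ rescue StandardError
+   2 # fallback when processor detection is unavailable
+ end
+ ----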
542
740
 
741
+ [TIP]
742
+ ====
743
+ * Use auto-detection for portable code that adapts to different environments
744
+ * Explicitly set `num_workers` when you need precise control over resource usage
745
+ * Consider system load and other factors when choosing explicit values
746
+ ====
543
747
 
544
- ==== Accessing results
748
+ ==== Adding work
545
749
 
546
- To access successful results:
750
+ You can add work items individually or in batches:
547
751
 
548
752
  [source,ruby]
549
753
  ----
550
- # Get all successful results
551
- successful_results = supervisor.results.results
754
+ # Add a single item
755
+ supervisor.add_work_item(MyWork.new(42))
552
756
 
553
- # Extract just the result values
757
+ # Add multiple items
758
+ supervisor.add_work_items([
759
+ MyWork.new(1),
760
+ MyWork.new(2),
761
+ MyWork.new(3),
762
+ MyWork.new(4),
763
+ MyWork.new(5)
764
+ ])
765
+
766
+ # Add items of different work types
767
+ supervisor.add_work_items([
768
+ TextWork.new("Process this text"),
769
+ ImageWork.new({ width: 800, height: 600 })
770
+ ])
771
+ ----
772
+
773
+ The Supervisor can handle any Work object that inherits from Fractor::Work.
774
+ Workers must check the type of Work they receive and process it accordingly.
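+
+ A compact sketch of that check inside `process` (`MixedWorker` is an
+ illustrative name, and it assumes `TextWork` wraps a string while `ImageWork`
+ wraps the `{ width:, height: }` hash from the snippet above); the Multi-work
+ type pattern later in this document shows a fuller version:
+
+ [source,ruby]
+ ----
+ class MixedWorker < Fractor::Worker
+   def process(work)
+     case work
+     when TextWork
+       Fractor::WorkResult.new(result: work.input.upcase, work: work)
+     when ImageWork
+       Fractor::WorkResult.new(result: "#{work.input[:width]}x#{work.input[:height]}", work: work)
+     else
+       Fractor::WorkResult.new(error: "Unsupported work type: #{work.class}", work: work)
+     end
+   end
+ end
+ ----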
775
+
776
+ ==== Running and monitoring
777
+
778
+ To start processing:
779
+
780
+ [source,ruby]
781
+ ----
782
+ # Start processing and block until complete
783
+ supervisor.run
784
+ ----
785
+
786
+ The Supervisor automatically handles:
787
+
788
+ * Starting the worker Ractors
789
+ * Distributing work items to available workers
790
+ * Collecting results and errors
791
+ * Graceful shutdown on completion or interruption (Ctrl+C)
792
+
793
+ === ResultAggregator for pipeline mode
794
+
795
+ ==== Purpose and responsibilities
796
+
797
+ The `Fractor::ResultAggregator` collects and organizes all results from the
798
+ workers, separating successful results from errors.
799
+
800
+ In pipeline mode, results are collected throughout processing and accessed
801
+ after the supervisor finishes running.
802
+
803
+ ==== Accessing results
804
+
805
+ After processing completes:
806
+
807
+ [source,ruby]
808
+ ----
809
+ # Get the ResultAggregator
810
+ aggregator = supervisor.results
811
+
812
+ # Check counts
813
+ puts "Processed #{aggregator.results.size} items successfully"
814
+ puts "Encountered #{aggregator.errors.size} errors"
815
+
816
+ # Access successful results
817
+ aggregator.results.each do |result|
818
+ puts "Work item #{result.work.input} produced #{result.result}"
819
+ end
820
+
821
+ # Access errors
822
+ aggregator.errors.each do |error_result|
823
+ puts "Work item #{error_result.work.input} failed: #{error_result.error}"
824
+ end
825
+ ----
826
+
827
+ To access successful results:
828
+
829
+ [source,ruby]
830
+ ----
831
+ # Get all successful results
832
+ successful_results = supervisor.results.results
833
+
834
+ # Extract just the result values
554
835
  result_values = successful_results.map(&:result)
555
836
  ----
556
837
 
557
- To access errors:
838
+ To access errors:
839
+
840
+ [source,ruby]
841
+ ----
842
+ # Get all error results
843
+ error_results = supervisor.results.errors
844
+
845
+ # Extract error messages
846
+ error_messages = error_results.map(&:error)
847
+
848
+ # Get the work items that failed
849
+ failed_work_items = error_results.map(&:work)
850
+ ----
851
+
852
+
853
+ [TIP]
854
+ ====
855
+ * Check both successful results and errors after processing completes
856
+ * Consider implementing custom reporting based on the aggregated results
857
+ ====
858
+
859
+
860
+
861
+
862
+ == Pipeline mode patterns
863
+
864
+ === Custom work distribution
865
+
866
+ For more complex scenarios, you might want to prioritize certain work items:
867
+
868
+ [source,ruby]
869
+ ----
870
+ # Create Work objects for high priority items
871
+ high_priority_works = high_priority_items.map { |item| MyWork.new(item) }
872
+
873
+ # Add high-priority items first
874
+ supervisor.add_work_items(high_priority_works)
875
+
876
+ # Process the high-priority batch first
877
+ supervisor.run
878
+
879
+ # Create Work objects for lower priority items
880
+ low_priority_works = low_priority_items.map { |item| MyWork.new(item) }
881
+
882
+ # Add and process lower-priority items
883
+ supervisor.add_work_items(low_priority_works)
884
+ supervisor.run
885
+ ----
886
+
887
+ === Handling large datasets
888
+
889
+ For very large datasets, consider processing in batches:
890
+
891
+ [source,ruby]
892
+ ----
893
+ large_dataset.each_slice(1000) do |batch|
894
+ # Convert batch items to Work objects
895
+ work_batch = batch.map { |item| MyWork.new(item) }
896
+
897
+ supervisor.add_work_items(work_batch)
898
+ supervisor.run
899
+
900
+ # Process this batch's results before continuing
901
+ process_batch_results(supervisor.results)
902
+ end
903
+ ----
904
+
905
+ === Multi-work type processing
906
+
907
+ The Multi-Work Type pattern demonstrates how a single supervisor and worker can
908
+ handle multiple types of work items.
909
+
910
+ [source,ruby]
911
+ ----
912
+ class UniversalWorker < Fractor::Worker
913
+ def process(work)
914
+ case work
915
+ when TextWork
916
+ process_text(work)
917
+ when ImageWork
918
+ process_image(work)
919
+ else
920
+ Fractor::WorkResult.new(
921
+ error: "Unknown work type: #{work.class}",
922
+ work: work
923
+ )
924
+ end
925
+ end
926
+
927
+ private
928
+
929
+ def process_text(work)
930
+ result = work.text.upcase
931
+ Fractor::WorkResult.new(result: result, work: work)
932
+ end
933
+
934
+ def process_image(work)
935
+ result = { width: work.width * 2, height: work.height * 2 }
936
+ Fractor::WorkResult.new(result: result, work: work)
937
+ end
938
+ end
939
+
940
+ # Add different types of work
941
+ supervisor.add_work_items([
942
+ TextWork.new("hello"),
943
+ ImageWork.new(width: 100, height: 100),
944
+ TextWork.new("world")
945
+ ])
946
+ ----
947
+
948
+ === Hierarchical work processing
949
+
950
+ The Producer/Subscriber pattern showcases processing that generates sub-work:
951
+
952
+ [source,ruby]
953
+ ----
954
+ # First pass: Process documents
955
+ supervisor.add_work_items(documents.map { |doc| DocumentWork.new(doc) })
956
+ supervisor.run
957
+
958
+ # Collect sections generated from documents
959
+ sections = supervisor.results.results.flat_map do |result|
960
+ result.result[:sections]
961
+ end
962
+
963
+ # Second pass: Process sections
964
+ supervisor.add_work_items(sections.map { |section| SectionWork.new(section) })
965
+ supervisor.run
966
+ ----
967
+
968
+ === Pipeline stages
969
+
970
+ The Pipeline Processing pattern implements multi-stage transformation:
971
+
972
+ [source,ruby]
973
+ ----
974
+ # Stage 1: Extract data
975
+ supervisor1 = Fractor::Supervisor.new(
976
+ worker_pools: [{ worker_class: ExtractionWorker }]
977
+ )
978
+ supervisor1.add_work_items(raw_data.map { |d| ExtractionWork.new(d) })
979
+ supervisor1.run
980
+ extracted = supervisor1.results.results.map(&:result)
981
+
982
+ # Stage 2: Transform data
983
+ supervisor2 = Fractor::Supervisor.new(
984
+ worker_pools: [{ worker_class: TransformWorker }]
985
+ )
986
+ supervisor2.add_work_items(extracted.map { |e| TransformWork.new(e) })
987
+ supervisor2.run
988
+ transformed = supervisor2.results.results.map(&:result)
989
+
990
+ # Stage 3: Load data
991
+ supervisor3 = Fractor::Supervisor.new(
992
+ worker_pools: [{ worker_class: LoadWorker }]
993
+ )
994
+ supervisor3.add_work_items(transformed.map { |t| LoadWork.new(t) })
995
+ supervisor3.run
996
+ ----
997
+
998
+
999
+
1000
+
1001
+ == Continuous mode components
1002
+
1003
+ === General
1004
+
1005
+ This section describes the components and their detailed usage specifically for
1006
+ continuous mode (long-running servers). For pipeline mode, see the Pipeline mode
1007
+ components section.
1008
+
1009
+ Continuous mode offers two approaches: a low-level API for manual control, and
1010
+ high-level primitives that eliminate boilerplate code.
1011
+
1012
+ === Low-level components
1013
+
1014
+ ==== General
1015
+
1016
+ The low-level API provides manual control over continuous mode operation. This
1017
+ approach is useful when you need fine-grained control over threading, work
1018
+ sources, or results processing.
1019
+
1020
+ Use the low-level API when:
1021
+
1022
+ * You need custom thread management
1023
+ * Your work source logic is complex
1024
+ * You require precise control over the supervisor lifecycle
1025
+ * You're integrating with existing thread pools or event loops
1026
+
1027
+ For most applications, the high-level primitives (described in the next section)
1028
+ are recommended as they eliminate significant boilerplate code.
1029
+
1030
+ ==== Supervisor with continuous_mode: true
1031
+
1032
+ To enable continuous mode, set the `continuous_mode` option:
1033
+
1034
+ [source,ruby]
1035
+ ----
1036
+ supervisor = Fractor::Supervisor.new(
1037
+ worker_pools: [
1038
+ { worker_class: MyWorker, num_workers: 2 }
1039
+ ],
1040
+ continuous_mode: true # Enable continuous mode
1041
+ )
1042
+ ----
1043
+
1044
+ ==== Work source callbacks
1045
+
1046
+ Register a callback that provides new work on demand:
1047
+
1048
+ [source,ruby]
1049
+ ----
1050
+ supervisor.register_work_source do
1051
+ # Return nil or empty array if no work is available
1052
+ # Return a work item or array of work items when available
1053
+ items = get_next_work_items
1054
+ if items && !items.empty?
1055
+ # Convert to Work objects if needed
1056
+ items.map { |item| MyWork.new(item) }
1057
+ else
1058
+ nil
1059
+ end
1060
+ end
1061
+ ----
1062
+
1063
+ The callback is polled every 100ms by an internal timer thread.
1064
+
1065
+ ==== Manual thread management
1066
+
1067
+ You must manually manage threads and results processing:
1068
+
1069
+ [source,ruby]
1070
+ ----
1071
+ # Start supervisor in a background thread
1072
+ supervisor_thread = Thread.new { supervisor.run }
1073
+
1074
+ # Start results processing thread
1075
+ results_thread = Thread.new do
1076
+ loop do
1077
+ # Process results
1078
+ while (result = supervisor.results.results.shift)
1079
+ handle_result(result)
1080
+ end
1081
+
1082
+ # Process errors
1083
+ while (error = supervisor.results.errors.shift)
1084
+ handle_error(error)
1085
+ end
1086
+
1087
+ sleep 0.1
1088
+ end
1089
+ end
1090
+
1091
+ # Ensure cleanup on shutdown
1092
+ begin
1093
+ supervisor_thread.join
1094
+ rescue Interrupt
1095
+ supervisor.stop
1096
+ ensure
1097
+ results_thread.kill
1098
+ supervisor_thread.join
1099
+ end
1100
+ ----
1101
+
1102
+ === High-level components
1103
+
1104
+ ==== General
1105
+
1106
+ Fractor provides high-level primitives that dramatically simplify continuous
1107
+ mode applications by eliminating boilerplate code.
1108
+
1109
+ These primitives solve common problems:
1110
+
1111
+ * *Thread management*: Automatic supervisor and results processing threads
1112
+ * *Queue synchronization*: Thread-safe work queue with automatic integration
1113
+ * *Results processing*: Callback-based handling instead of manual loops
1114
+ * *Signal handling*: Built-in support for SIGINT, SIGTERM, SIGUSR1/SIGBREAK
1115
+ * *Graceful shutdown*: Coordinated cleanup across all threads
1116
+
1117
+ Real-world benefits:
1118
+
1119
+ * The chat server example was reduced from 279 lines to 167 lines (40% reduction)
1120
+ * Eliminates ~112 lines of thread, queue, and signal handling boilerplate
1121
+ * Simpler, more maintainable code with fewer error-prone details
1122
+
1123
+ ==== Fractor::WorkQueue
1124
+
1125
+ ===== Purpose and responsibilities
1126
+
1127
+ `Fractor::WorkQueue` provides a thread-safe queue for continuous mode
1128
+ applications. It handles work item storage and integrates automatically with the
1129
+ supervisor's work source mechanism.
1130
+
1131
+ ===== Thread-safety
1132
+
1133
+ The WorkQueue is *thread-safe* but not *Ractor-safe*:
1134
+
1135
+ * *Thread-safe*: Multiple threads can safely push work items concurrently
1136
+ * *Not Ractor-safe*: The queue lives in the main process and cannot be shared
1137
+ across Ractor boundaries
1138
+
1139
+ This design is intentional. The WorkQueue operates in the main process where
1140
+ your application code runs. Work items are retrieved by the Supervisor (also in
1141
+ the main process) and then sent to worker Ractors.
1142
+
1143
+ .WorkQueue architecture
1144
+ [source]
1145
+ ----
1146
+ Main Process
1147
+ ├─→ Your application threads (push to WorkQueue)
1148
+ ├─→ WorkQueue (thread-safe, lives here)
1149
+ ├─→ Supervisor (polls WorkQueue)
1150
+ │ └─→ Sends work to Worker Ractors
1151
+ └─→ Worker Ractors (receive frozen/shareable work items)
1152
+ ----
1153
+
1154
+ ===== Creating a WorkQueue
1155
+
1156
+ [source,ruby]
1157
+ ----
1158
+ work_queue = Fractor::WorkQueue.new
1159
+ ----
1160
+
1161
+ ===== Adding work items
1162
+
1163
+ Use the `<<` operator for thread-safe push operations:
1164
+
1165
+ [source,ruby]
1166
+ ----
1167
+ # From any thread in your application
1168
+ work_queue << MyWork.new(data)
1169
+
1170
+ # Thread-safe even from multiple threads
1171
+ threads = 10.times.map do |i|
1172
+ Thread.new do
1173
+ 100.times do |j|
1174
+ work_queue << MyWork.new("thread-#{i}-item-#{j}")
1175
+ end
1176
+ end
1177
+ end
1178
+ threads.each(&:join)
1179
+ ----
1180
+
1181
+ ===== Checking queue status
1182
+
1183
+ [source,ruby]
1184
+ ----
1185
+ # Check if queue is empty
1186
+ if work_queue.empty?
1187
+ puts "No work available"
1188
+ end
1189
+
1190
+ # Get current queue size
1191
+ puts "Queue has #{work_queue.size} items"
1192
+ ----
1193
+
1194
+ ===== Integration with Supervisor
1195
+
1196
+ The WorkQueue integrates automatically with ContinuousServer (see next section).
1197
+ For manual integration with a Supervisor:
1198
+
1199
+ [source,ruby]
1200
+ ----
1201
+ supervisor = Fractor::Supervisor.new(
1202
+ worker_pools: [{ worker_class: MyWorker }],
1203
+ continuous_mode: true
1204
+ )
1205
+
1206
+ # Register the work queue as a work source
1207
+ work_queue.register_with_supervisor(supervisor)
1208
+
1209
+ # Now the supervisor will automatically poll the queue for work
1210
+ ----
1211
+
1212
+ ==== Fractor::ContinuousServer
1213
+
1214
+ ===== Purpose and responsibilities
1215
+
1216
+ `Fractor::ContinuousServer` is a high-level wrapper that handles all the
1217
+ complexity of running a continuous mode application. It manages:
1218
+
1219
+ * Supervisor thread lifecycle
1220
+ * Results processing thread with callback system
1221
+ * Signal handling (SIGINT, SIGTERM, SIGUSR1/SIGBREAK)
1222
+ * Graceful shutdown coordination
1223
+ * Optional logging
1224
+
1225
+ ===== Creating a ContinuousServer
1226
+
1227
+ [source,ruby]
1228
+ ----
1229
+ server = Fractor::ContinuousServer.new(
1230
+ worker_pools: [
1231
+ { worker_class: MessageWorker, num_workers: 4 }
1232
+ ],
1233
+ work_queue: work_queue, # Optional, auto-registers if provided
1234
+ log_file: 'logs/server.log' # Optional
1235
+ )
1236
+ ----
1237
+
1238
+ Parameters:
1239
+
1240
+ * `worker_pools` (required): Array of worker pool configurations
1241
+ * `work_queue` (optional): A Fractor::WorkQueue instance to auto-register
1242
+ * `log_file` (optional): Path for log output
1243
+
1244
+ ===== Registering callbacks
1245
+
1246
+ Define how to handle results and errors:
1247
+
1248
+ [source,ruby]
1249
+ ----
1250
+ # Handle successful results
1251
+ server.on_result do |result|
1252
+ # result is a Fractor::WorkResult with result.result containing your data
1253
+ puts "Success: #{result.result}"
1254
+ # Send response to client, update database, etc.
1255
+ end
1256
+
1257
+ # Handle errors
1258
+ server.on_error do |error_result|
1259
+ # error_result is a Fractor::WorkResult with error_result.error containing the message
1260
+ puts "Error: #{error_result.error}"
1261
+ # Log error, send notification, etc.
1262
+ end
1263
+ ----
1264
+
1265
+ ===== Running the server
1266
+
1267
+ [source,ruby]
1268
+ ----
1269
+ # Blocking: Run the server (blocks until shutdown signal)
1270
+ server.run
1271
+
1272
+ # Non-blocking: Run in background thread
1273
+ server_thread = Thread.new { server.run }
1274
+
1275
+ # Your application continues here...
1276
+ # Add work to queue as needed
1277
+ work_queue << MyWork.new(data)
1278
+
1279
+ # Later, stop the server
1280
+ server.stop
1281
+ server_thread.join
1282
+ ----
1283
+
1284
+ ===== Signal handling
1285
+
1286
+ The ContinuousServer automatically handles:
1287
+
1288
+ * *SIGINT* (Ctrl+C): Graceful shutdown
1289
+ * *SIGTERM*: Graceful shutdown (production deployment)
1290
+ * *SIGUSR1* (Unix) / *SIGBREAK* (Windows): Status output
1291
+
1292
+ No additional code is needed; signals work automatically.
1293
+
1294
+ ===== Graceful shutdown
1295
+
1296
+ When a shutdown signal is received:
1297
+
1298
+ . Stops accepting new work from the work queue
1299
+ . Allows in-progress work to complete (within ~2 seconds)
1300
+ . Processes remaining results through callbacks
1301
+ . Cleans up all threads and resources
1302
+ . Returns from the `run` method
1303
+
1304
+ ===== Programmatic shutdown
1305
+
1306
+ [source,ruby]
1307
+ ----
1308
+ # Stop the server programmatically
1309
+ server.stop
1310
+
1311
+ # The run method will return shortly after
1312
+ ----
1313
+
1314
+ ==== Integration architecture
1315
+
1316
+ The high-level components work together seamlessly:
1317
+
1318
+ .Complete architecture diagram
1319
+ [source]
1320
+ ----
1321
+ ┌───────────────────────────────────────────────────────────┐
1322
+ │ Main Process │
1323
+ │ │
1324
+ │ ┌──────────────┐ ┌──────────────────────────────┐ │
1325
+ │ │ Your App │────>│ WorkQueue (thread-safe) │ │
1326
+ │ │ (any thread) │ │ - Thread::Queue internally │ │
1327
+ │ └──────────────┘ └──────────────────────────────┘ │
1328
+ │ │ │
1329
+ │ │ polled every 100ms │
1330
+ │ ▼ │
1331
+ │ ┌────────────────────────────────────────────────────┐ │
1332
+ │ │ ContinuousServer │ │
1333
+ │ │ ┌─────────────────────────────────────────────┐ │ │
1334
+ │ │ │ Supervisor Thread │ │ │
1335
+ │ │ │ - Manages worker Ractors │ │ │
1336
+ │ │ │ - Distributes work │ │ │
1337
+ │ │ │ - Coordinates shutdown │ │ │
1338
+ │ │ └─────────────────────────────────────────────┘ │ │
1339
+ │ │ │ │ │
1340
+ │ │ ▼ │ │
1341
+ │ │ ┌─────────────────────────────────────────────┐ │ │
1342
+ │ │ │ Worker Ractors (parallel execution) │ │ │
1343
+ │ │ │ - Ractor 1: WorkerInstance.process(work) │ │ │
1344
+ │ │ │ - Ractor 2: WorkerInstance.process(work) │ │ │
1345
+ │ │ │ - Ractor N: WorkerInstance.process(work) │ │ │
1346
+ │ │ └─────────────────────────────────────────────┘ │ │
1347
+ │ │ │ │ │
1348
+ │ │ ▼ (WorkResults) │ │
1349
+ │ │ ┌─────────────────────────────────────────────┐ │ │
1350
+ │ │ │ Results Processing Thread │ │ │
1351
+ │ │ │ - on_result callback for successes │ │ │
1352
+ │ │ │ - on_error callback for failures │ │ │
1353
+ │ │ └─────────────────────────────────────────────┘ │ │
1354
+ │ │ │ │
1355
+ │ │ ┌─────────────────────────────────────────────┐ │ │
1356
+ │ │ │ Signal Handler Thread │ │ │
1357
+ │ │ │ - SIGINT/SIGTERM: Shutdown │ │ │
1358
+ │ │ │ - SIGUSR1/SIGBREAK: Status │ │ │
1359
+ │ │ └─────────────────────────────────────────────┘ │ │
1360
+ │ └────────────────────────────────────────────────────┘ │
1361
+ └───────────────────────────────────────────────────────────┘
1362
+ ----
1363
+
1364
+ Key points:
1365
+
1366
+ * WorkQueue lives in main process (thread-safe, not Ractor-safe)
1367
+ * Supervisor polls WorkQueue and distributes to Ractors
1368
+ * Work items must be frozen/shareable to cross Ractor boundary
1369
+ * Results come back through callbacks, not batch collection
1370
+ * All thread management is automatic
1371
+
1372
+
1373
+
1374
+
1375
+ == Continuous mode patterns
1376
+
1377
+ === Basic server with callbacks
1378
+
1379
+ The most common pattern uses WorkQueue + ContinuousServer:
1380
+
1381
+ [source,ruby]
1382
+ ----
1383
+ require 'fractor'
1384
+
1385
+ # Define work and worker
1386
+ class RequestWork < Fractor::Work
1387
+ def initialize(request_id, data)
1388
+ super({ request_id: request_id, data: data })
1389
+ end
1390
+ end
1391
+
1392
+ class RequestWorker < Fractor::Worker
1393
+ def process(work)
1394
+ # Process the request
1395
+ result = perform_computation(work.input[:data])
1396
+
1397
+ Fractor::WorkResult.new(
1398
+ result: { request_id: work.input[:request_id], response: result },
1399
+ work: work
1400
+ )
1401
+ rescue => e
1402
+ Fractor::WorkResult.new(error: e.message, work: work)
1403
+ end
1404
+
1405
+ private
1406
+
1407
+ def perform_computation(data)
1408
+ # Your business logic here
1409
+ data.upcase
1410
+ end
1411
+ end
1412
+
1413
+ # Set up server
1414
+ work_queue = Fractor::WorkQueue.new
1415
+
1416
+ server = Fractor::ContinuousServer.new(
1417
+ worker_pools: [{ worker_class: RequestWorker, num_workers: 4 }],
1418
+ work_queue: work_queue
1419
+ )
1420
+
1421
+ server.on_result { |result| puts "Success: #{result.result}" }
1422
+ server.on_error { |error| puts "Error: #{error.error}" }
1423
+
1424
+ # Run server (blocks until shutdown)
1425
+ Thread.new { server.run }
1426
+
1427
+ # Application logic adds work as needed
1428
+ work_queue << RequestWork.new(1, "hello")
1429
+ work_queue << RequestWork.new(2, "world")
1430
+
1431
+ sleep # Keep main thread alive
1432
+ ----
1433
+
1434
+ === Event-driven processing
1435
+
1436
+ Process events from external sources as they arrive:
1437
+
1438
+ [source,ruby]
1439
+ ----
1440
+ # Event source (could be webhooks, message queue, etc.)
1441
+ event_source = EventSource.new
1442
+
1443
+ # Set up work queue and server
1444
+ work_queue = Fractor::WorkQueue.new
1445
+ server = Fractor::ContinuousServer.new(
1446
+ worker_pools: [{ worker_class: EventWorker, num_workers: 8 }],
1447
+ work_queue: work_queue
1448
+ )
1449
+
1450
+ server.on_result do |result|
1451
+ # Publish result to subscribers
1452
+ publish_event(result.result)
1453
+ end
1454
+
1455
+ # Event loop adds work to queue
1456
+ event_source.on_event do |event|
1457
+ work_queue << EventWork.new(event)
1458
+ end
1459
+
1460
+ # Start server
1461
+ server.run
1462
+ ----
1463
+
1464
+ === Dynamic work sources
1465
+
1466
+ Combine multiple work sources:
1467
+
1468
+ [source,ruby]
1469
+ ----
1470
+ work_queue = Fractor::WorkQueue.new
1471
+
1472
+ # Source 1: HTTP requests
1473
+ http_server.on_request do |request|
1474
+ work_queue << HttpWork.new(request)
1475
+ end
1476
+
1477
+ # Source 2: Message queue
1478
+ message_queue.subscribe do |message|
1479
+ work_queue << MessageWork.new(message)
1480
+ end
1481
+
1482
+ # Source 3: Scheduled tasks
1483
+ scheduler.every('1m') do
1484
+ work_queue << ScheduledWork.new(Time.now)
1485
+ end
1486
+
1487
+ # Single server processes all work types
1488
+ server = Fractor::ContinuousServer.new(
1489
+ worker_pools: [
1490
+ { worker_class: HttpWorker, num_workers: 4 },
1491
+ { worker_class: MessageWorker, num_workers: 2 },
1492
+ { worker_class: ScheduledWorker, num_workers: 1 }
1493
+ ],
1494
+ work_queue: work_queue
1495
+ )
1496
+
1497
+ server.run
1498
+ ----
1499
+
1500
+ === Graceful shutdown strategies
1501
+
1502
+ ==== Signal-based shutdown (production)
558
1503
 
559
1504
  [source,ruby]
560
1505
  ----
561
- # Get all error results
562
- error_results = supervisor.results.errors
1506
+ # Server automatically handles SIGTERM
1507
+ server = Fractor::ContinuousServer.new(
1508
+ worker_pools: [{ worker_class: MyWorker }],
1509
+ work_queue: work_queue,
1510
+ log_file: '/var/log/myapp/server.log'
1511
+ )
563
1512
 
564
- # Extract error messages
565
- error_messages = error_results.map(&:error)
1513
+ # Just run the server - signals handled automatically
1514
+ server.run
566
1515
 
567
- # Get the work items that failed
568
- failed_work_items = error_results.map(&:work)
1516
+ # In production:
1517
+ # systemctl stop myapp # Sends SIGTERM
1518
+ # docker stop container # Sends SIGTERM
1519
+ # kill -TERM <pid> # Manual SIGTERM
569
1520
  ----
570
1521
 
1522
+ ==== Time-based shutdown
571
1523
 
572
- [TIP]
573
- ====
574
- * Check both successful results and errors after processing completes
575
- * Consider implementing custom reporting based on the aggregated results
576
- ====
1524
+ [source,ruby]
1525
+ ----
1526
+ server_thread = Thread.new { server.run }
577
1527
 
1528
+ # Run for specific duration
1529
+ sleep 3600 # Run for 1 hour
1530
+ server.stop
1531
+ server_thread.join
1532
+ ----
578
1533
 
579
- === WrappedRactor class
1534
+ ==== Condition-based shutdown
580
1535
 
581
- ==== Purpose and responsibilities
1536
+ [source,ruby]
1537
+ ----
1538
+ server_thread = Thread.new { server.run }
1539
+
1540
+ # Monitor thread checks conditions
1541
+ monitor = Thread.new do
1542
+ loop do
1543
+ if should_shutdown?
1544
+ server.stop
1545
+ break
1546
+ end
1547
+ sleep 10
1548
+ end
1549
+ end
582
1550
 
583
- The `Fractor::WrappedRactor` class manages an individual Ruby Ractor, handling
584
- the communication between the Supervisor and the Worker instance running inside
585
- the Ractor.
1551
+ server_thread.join
1552
+ monitor.kill
1553
+ ----
586
1554
 
587
- ==== Usage notes
1555
+ === Before/after comparison
588
1556
 
589
- This class is primarily used internally by the Supervisor, but understanding its
590
- role helps with debugging:
1557
+ The chat server example demonstrates the real-world impact of using the
1558
+ high-level primitives.
591
1559
 
592
- * Each WrappedRactor creates and manages one Ractor
593
- * The Worker instance lives inside the Ractor
594
- * Work items are sent to the Ractor via the WrappedRactor's `send` method
595
- * Results are yielded back to the Supervisor
1560
+ ==== Before: Low-level API (279 lines)
596
1561
 
597
- ==== Error propagation
1562
+ Required manual management of:
598
1563
 
599
- The WrappedRactor handles error propagation in two ways:
1564
+ * Supervisor thread creation and lifecycle (~15 lines)
1565
+ * Results processing thread with loops (~50 lines)
1566
+ * Queue creation and synchronization (~10 lines)
1567
+ * Signal handling setup (~15 lines)
1568
+ * Thread coordination and shutdown (~20 lines)
1569
+ * IO.select event loop (~110 lines)
1570
+ * Manual error handling throughout (~59 lines)
600
1571
 
601
- . Errors from the Worker's `process` method are wrapped in a WorkResult and
602
- yielded back
603
- . Unexpected errors in the Ractor itself are caught and logged
1572
+ ==== After: High-level primitives (167 lines)
604
1573
 
1574
+ Eliminated boilerplate:
605
1575
 
606
- === Supervisor class
1576
+ * WorkQueue handles queue and synchronization (automatic)
1577
+ * ContinuousServer manages all threads (automatic)
1578
+ * Callbacks replace manual results loops (automatic)
1579
+ * Signal handling built-in (automatic)
1580
+ * Graceful shutdown coordinated (automatic)
607
1581
 
608
- ==== Purpose and responsibilities
1582
+ Result: *40% code reduction* (112 fewer lines), simpler architecture, fewer
1583
+ error-prone details.
609
1584
 
610
- The `Fractor::Supervisor` class orchestrates the entire framework, managing
611
- worker Ractors, distributing work, and collecting results.
1585
+ See link:examples/continuous_chat_fractor/chat_server.rb[the refactored chat
1586
+ server] for the complete example.
612
1587
 
613
- ==== Configuration options
614
1588
 
615
- When creating a Supervisor, you can configure:
616
1589
 
617
- [source,ruby]
618
- ----
619
- supervisor = Fractor::Supervisor.new(
620
- worker_pools: [
621
- # Pool 1 - for general data processing
622
- { worker_class: MyWorker, num_workers: 4 },
623
1590
 
624
- # Pool 2 - for specialized image processing
625
- { worker_class: ImageWorker, num_workers: 2 }
626
- ],
627
- continuous_mode: false # Optional: Run in continuous mode (default: false)
628
- )
629
- ----
1591
+ == Process monitoring and logging
630
1592
 
631
- ==== Worker auto-detection
1593
+ === Status monitoring and health checks
632
1594
 
633
- Fractor automatically detects the number of available processors on your system
634
- and uses that value when `num_workers` is not specified. This provides optimal
635
- resource utilization across different deployment environments without requiring
636
- manual configuration.
1595
+ The SIGUSR1 signal (or SIGBREAK on Windows) can be used for health checks.
637
1596
 
638
- [source,ruby]
1597
+ When the signal is received, the supervisor prints its current status to
1598
+ standard output.
1599
+
1600
+ [example]
1601
+ Sending the signal:
1602
+
1603
+ Unix:
1604
+
1605
+ [source,sh]
1606
+ ----
1607
+ # Send SIGUSR1 to the supervisor process
1608
+ kill -USR1 <pid>
639
1609
  ----
640
- # Auto-detect number of workers (recommended for most cases)
641
- supervisor = Fractor::Supervisor.new(
642
- worker_pools: [
643
- { worker_class: MyWorker } # Will use number of available processors
644
- ]
645
- )
646
1610
 
647
- # Explicitly set number of workers (useful for specific requirements)
648
- supervisor = Fractor::Supervisor.new(
649
- worker_pools: [
650
- { worker_class: MyWorker, num_workers: 4 } # Always use exactly 4 workers
651
- ]
652
- )
1611
+ Windows:
653
1612
 
654
- # Mix auto-detection and explicit configuration
655
- supervisor = Fractor::Supervisor.new(
656
- worker_pools: [
657
- { worker_class: FastWorker }, # Auto-detected
658
- { worker_class: HeavyWorker, num_workers: 2 } # Explicitly 2 workers
659
- ]
660
- )
1613
+ [source,sh]
1614
+ ----
1615
+ # Send SIGBREAK to the supervisor process
1616
+ kill -BREAK <pid>
661
1617
  ----
662
1618
 
663
- The auto-detection uses Ruby's `Etc.nprocessors` which returns the number of
664
- available processors. If detection fails for any reason, it falls back to 2
665
- workers.
1619
+ Output:
666
1620
 
667
- [TIP]
668
- * Use auto-detection for portable code that adapts to different environments
669
- * Explicitly set `num_workers` when you need precise control over resource usage
670
- * Consider system load and other factors when choosing explicit values
1621
+ [source]
1622
+ ----
1623
+ === Fractor Supervisor Status ===
1624
+ Mode: Continuous
1625
+ Running: true
1626
+ Workers: 4
1627
+ Idle workers: 2
1628
+ Queue size: 15
1629
+ Results: 127
1630
+ Errors: 3
1631
+ ----
671
1632
 
672
- ==== Adding work
1633
+ === Logging
673
1634
 
674
- You can add work items individually or in batches:
1635
+ Fractor supports logging of its operations to a specified log file.
1636
+
1637
+ For ContinuousServer, pass the `log_file` parameter:
675
1638
 
676
1639
  [source,ruby]
677
1640
  ----
678
- # Add a single item
679
- supervisor.add_work_item(MyWork.new(42))
1641
+ server = Fractor::ContinuousServer.new(
1642
+ worker_pools: [{ worker_class: MyWorker }],
1643
+ work_queue: work_queue,
1644
+ log_file: 'logs/server.log'
1645
+ )
1646
+ ----
680
1647
 
681
- # Add multiple items
682
- supervisor.add_work_items([
683
- MyWork.new(1),
684
- MyWork.new(2),
685
- MyWork.new(3),
686
- MyWork.new(4),
687
- MyWork.new(5)
688
- ])
1648
+ For manual Supervisor usage, set the `FRACTOR_LOG_FILE` environment variable
1649
+ before starting your application:
689
1650
 
690
- # Add items of different work types
691
- supervisor.add_work_items([
692
- TextWork.new("Process this text"),
693
- ImageWork.new({ width: 800, height: 600 })
694
- ])
1651
+ [source,sh]
1652
+ ----
1653
+ export FRACTOR_LOG_FILE=/path/to/logs/server.log
1654
+ ruby my_fractor_app.rb
695
1655
  ----
696
1656
 
697
- The Supervisor can handle any Work object that inherits from Fractor::Work.
698
- Workers must check the type of Work they receive and process it accordingly.
1657
+ The log file will contain detailed information about the supervisor's
1658
+ operations, including worker activity, work distribution, results, and errors.
699
1659
 
700
- ==== Running and monitoring
1660
+ .Examples of accessing logs
1661
+ [example]
1662
+ [source,sh]
1663
+ ----
1664
+ # Check if server is responsive (Unix/Linux/macOS)
1665
+ kill -USR1 <pid> && tail -f /path/to/logs/server.log
701
1666
 
702
- To start processing:
1667
+ # Monitor with systemd
1668
+ systemctl status fractor-server
1669
+ journalctl -u fractor-server -f
703
1670
 
704
- [source,ruby]
705
- ----
706
- # Start processing and block until complete
707
- supervisor.run
1671
+ # Monitor with Docker
1672
+ docker logs -f <container_id>
708
1673
  ----
709
1674
 
710
- The Supervisor automatically handles:
711
1675
 
712
- * Starting the worker Ractors
713
- * Distributing work items to available workers
714
- * Collecting results and errors
715
- * Graceful shutdown on completion or interruption (Ctrl+C)
716
1676
 
717
1677
 
718
- ==== Accessing results
1678
+ == Signal handling
719
1679
 
720
- After processing completes:
1680
+ === General
721
1681
 
722
- [source,ruby]
723
- ----
724
- # Get the ResultAggregator
725
- aggregator = supervisor.results
1682
+ Fractor provides production-ready signal handling for process control and
1683
+ monitoring. The framework supports different signals depending on the operating
1684
+ system, enabling graceful shutdown and runtime status monitoring.
726
1685
 
727
- # Check counts
728
- puts "Processed #{aggregator.results.size} items successfully"
729
- puts "Encountered #{aggregator.errors.size} errors"
1686
+ === Unix signals (Linux, macOS, Unix)
730
1687
 
731
- # Access successful results
732
- aggregator.results.each do |result|
733
- puts "Work item #{result.work.input} produced #{result.result}"
734
- end
1688
+ ==== SIGINT (Ctrl+C)
735
1689
 
736
- # Access errors
737
- aggregator.errors.each do |error_result|
738
- puts "Work item #{error_result.work.input} failed: #{error_result.error}"
739
- end
740
- ----
1690
+ Interactive interrupt signal for graceful shutdown.
741
1691
 
742
- == Advanced usage patterns
1692
+ Usage:
743
1693
 
744
- === Custom work distribution
1694
+ * Press `Ctrl+C` in the terminal running Fractor
1695
+ * Behavior depends on mode:
1696
+ ** *Batch mode*: Stops immediately after current work completes
1697
+ ** *Continuous mode*: Initiates graceful shutdown
745
1698
 
746
- For more complex scenarios, you might want to prioritize certain work items:
1699
+ ==== SIGTERM
747
1700
 
748
- [source,ruby]
749
- ----
750
- # Create Work objects for high priority items
751
- high_priority_works = high_priority_items.map { |item| MyWork.new(item) }
1701
+ Standard Unix termination signal, preferred for production deployments.
752
1702
 
753
- # Add high-priority items first
754
- supervisor.add_work_items(high_priority_works)
1703
+ Sending SIGTERM triggers a graceful shutdown of the Fractor supervisor and its workers.
755
1704
 
756
- # Run with just enough workers for high-priority items
757
- supervisor.run
1705
+ Usage:
758
1706
 
759
- # Create Work objects for lower priority items
760
- low_priority_works = low_priority_items.map { |item| MyWork.new(item) }
1707
+ [source,sh]
1708
+ ----
1709
+ kill -TERM <pid>
1710
+ # or simply
1711
+ kill <pid> # SIGTERM is the default
1712
+ ----
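+
+ Operators need the process ID to send these signals. One hypothetical
+ convention, not something Fractor manages for you, is to write a pidfile at
+ startup:
+
+ [source,ruby]
+ ----
+ require "fileutils"
+
+ # Hypothetical pidfile location; adjust for your deployment.
+ pid_file = "/var/run/fractor/server.pid"
+
+ FileUtils.mkdir_p(File.dirname(pid_file))
+ File.write(pid_file, Process.pid.to_s)
+ at_exit { File.delete(pid_file) if File.exist?(pid_file) }
+
+ # ... start the supervisor here ...
+ ----
+
+ With the pidfile in place, `kill -TERM "$(cat /var/run/fractor/server.pid)"`
+ addresses the right process.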
761
1713
 
762
- # Add and process lower-priority items
763
- supervisor.add_work_items(low_priority_works)
764
- supervisor.run
1714
+ Typical signals from service managers:
1715
+
1716
+ * Systemd sends SIGTERM on `systemctl stop`
1717
+ * Docker sends SIGTERM on `docker stop`
1718
+ * Kubernetes sends SIGTERM during pod termination
1719
+
1720
+ [source,ini]
1721
+ ----
1722
+ # Example systemd service
1723
+ [Service]
1724
+ ExecStart=/usr/bin/ruby /path/to/fractor_server.rb
1725
+ KillMode=process
1726
+ KillSignal=SIGTERM
1727
+ TimeoutStopSec=30
765
1728
  ----
766
1729
 
767
- === Handling large datasets
1730
+ ==== SIGUSR1
768
1731
 
769
- For very large datasets, consider processing in batches:
1732
+ Real-time status monitoring without stopping the process.
770
1733
 
771
- [source,ruby]
1734
+ Usage:
1735
+
1736
+ [source,sh]
1737
+ ----
1738
+ kill -USR1 <pid>
772
1739
  ----
773
- large_dataset.each_slice(1000) do |batch|
774
- # Convert batch items to Work objects
775
- work_batch = batch.map { |item| MyWork.new(item) }
776
1740
 
777
- supervisor.add_work_items(work_batch)
778
- supervisor.run
1741
+ Output example:
779
1742
 
780
- # Process this batch's results before continuing
781
- process_batch_results(supervisor.results)
782
- end
1743
+ [example]
1744
+ [source]
1745
+ ----
1746
+ === Fractor Supervisor Status ===
1747
+ Mode: Continuous
1748
+ Running: true
1749
+ Workers: 4
1750
+ Idle workers: 2
1751
+ Queue size: 15
1752
+ Results: 127
1753
+ Errors: 3
783
1754
  ----
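+
+ The same status check can be scripted; a minimal Ruby sketch, assuming the
+ pid is known (for example from a pidfile as suggested in the SIGTERM
+ section):
+
+ [source,ruby]
+ ----
+ pid = Integer(File.read("/var/run/fractor/server.pid").strip)
+
+ begin
+   # Ask the running supervisor to print its status report.
+   Process.kill("USR1", pid)
+ rescue Errno::ESRCH
+   warn "Fractor process #{pid} is not running."
+ end
+ ----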
784
1755
 
1756
+ === Windows signals
1757
+
1758
+ ==== SIGBREAK (Ctrl+Break)
1759
+
1760
+ Windows alternative to SIGUSR1 for status monitoring.
1761
+
1762
+ Usage:
1763
+
1764
+ * Press `Ctrl+Break` in the terminal running Fractor
1765
+ * Same output as SIGUSR1 on Unix
1766
+
1767
+ [NOTE]
1768
+ SIGUSR1 is not available on Windows. Use `Ctrl+Break` instead for status
1769
+ monitoring on Windows platforms.
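+
+ Tooling that targets both platforms can check which signals the host Ruby
+ supports before relying on them; a minimal sketch:
+
+ [source,ruby]
+ ----
+ # Signal.list returns the signal names supported by this Ruby platform.
+ if Signal.list.key?("USR1")
+   puts "Use `kill -USR1 <pid>` for status monitoring."
+ elsif Signal.list.key?("BREAK")
+   puts "Use Ctrl+Break in the server terminal for status monitoring."
+ else
+   puts "No status signal is available on this platform."
+ end
+ ----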
1770
+
1771
+
1772
+ === Signal behavior by mode
1773
+
1774
+ ==== Batch mode
1775
+
1776
+ In batch processing mode:
1777
+
1778
+ * SIGINT/SIGTERM: Stops immediately after current work completes
1779
+ * SIGUSR1/SIGBREAK: Displays current status
1780
+
1781
+ ==== Continuous mode
1782
+
1783
+ In continuous mode (long-running servers):
1784
+
1785
+ * SIGINT/SIGTERM: Graceful shutdown within ~2 seconds
1786
+ ** Stops accepting new work
1787
+ ** Completes in-progress work
1788
+ ** Cleans up resources
1789
+ * SIGUSR1/SIGBREAK: Displays current status
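+
+ Putting this together for a long-running server, the sketch below is a
+ minimal continuous-mode setup using the Supervisor API; the
+ `pending_messages` queue and the `MyWork`/`MyWorker` classes are
+ placeholders:
+
+ [source,ruby]
+ ----
+ supervisor = Fractor::Supervisor.new(
+   worker_pools: [
+     { worker_class: MyWorker, num_workers: 4 }
+   ],
+   continuous_mode: true
+ )
+
+ # Hypothetical thread-safe queue that the rest of the application fills.
+ pending_messages = Queue.new
+
+ supervisor.register_work_source do
+   next nil if pending_messages.empty?
+   MyWork.new(pending_messages.pop)
+ end
+
+ # Run in the background; SIGINT or SIGTERM triggers the graceful shutdown
+ # described above (stop accepting work, finish in-progress items, clean up).
+ supervisor_thread = Thread.new { supervisor.run }
+
+ # ... application pushes messages onto pending_messages ...
+
+ supervisor_thread.join
+ ----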
1790
+
1791
+
1792
+
785
1793
 
786
1794
  == Running a basic example
787
1795
 
@@ -806,7 +1814,10 @@ class MyWorker < Fractor::Worker
806
1814
  def process(work)
807
1815
  if work.input == 5
808
1816
  # Return a Fractor::WorkResult for errors
809
- return Fractor::WorkResult.new(error: "Error processing work #{work.input}", work: work)
1817
+ return Fractor::WorkResult.new(
1818
+ error: "Error processing work #{work.input}",
1819
+ work: work
1820
+ )
810
1821
  end
811
1822
 
812
1823
  calculated = work.input * 2
@@ -848,81 +1859,6 @@ the final aggregated results, including any errors encountered. Press `Ctrl+C`
848
1859
  during execution to test the graceful shutdown.
849
1860
 
850
1861
 
851
- == Continuous mode
852
-
853
- === General
854
-
855
- Fractor provides a powerful feature called "continuous mode" that allows
856
- supervisors to run indefinitely, processing work items as they arrive without
857
- stopping after the initial work queue is empty.
858
-
859
- === Features
860
-
861
- * *Non-stopping Execution*: Supervisors run indefinitely until explicitly stopped
862
- * *On-demand Work*: Workers only process work when it's available
863
- * *Resource Efficiency*: Workers idle when no work is available, without consuming excessive resources
864
- * *Dynamic Work Addition*: New work can be added at any time through the work source callback
865
- * *Graceful Shutdown*: Resources are properly cleaned up when the supervisor is stopped
866
-
867
- Continuous mode is particularly useful for:
868
-
869
- * *Chat servers*: Processing incoming messages as they arrive
870
- * *Background job processors*: Handling tasks from a job queue
871
- * *Real-time data processing*: Analyzing data streams as they come in
872
- * *Web servers*: Responding to incoming requests in parallel
873
- * *Monitoring systems*: Continuously checking system statuses
874
-
875
- See the Chat Server example in the examples directory for a complete implementation of continuous mode.
876
-
877
-
878
- === Using continuous mode
879
-
880
- ==== Step 1. Create a supervisor with the `continuous_mode: true` option
881
-
882
- [source,ruby]
883
- ----
884
- supervisor = Fractor::Supervisor.new(
885
- worker_pools: [
886
- { worker_class: MyWorker, num_workers: 2 }
887
- ],
888
- continuous_mode: true # Enable continuous mode
889
- )
890
- ----
891
-
892
- ==== Step 2. Register a work source callback that provides new work on demand
893
-
894
- [source,ruby]
895
- ----
896
- supervisor.register_work_source do
897
- # Return nil or empty array if no work is available
898
- # Return a work item or array of work items when available
899
- items = get_next_work_items
900
- if items && !items.empty?
901
- # Convert to Work objects if needed
902
- items.map { |item| MyWork.new(item) }
903
- else
904
- nil
905
- end
906
- end
907
- ----
908
-
909
- ==== Step 4. Run the supervisor in a non-blocking way
910
-
911
- Typically in a background thread.
912
-
913
- [source,ruby]
914
- ----
915
- supervisor_thread = Thread.new { supervisor.run }
916
- ----
917
-
918
- ==== Step 4. Explicitly call `stop` on the supervisor to stop processing
919
-
920
- [source,ruby]
921
- ----
922
- supervisor.stop
923
- supervisor_thread.join # Wait for the supervisor thread to finish
924
- ----
925
-
926
1862
 
927
1863
 
928
1864
  == Example applications
@@ -933,7 +1869,9 @@ The Fractor gem comes with several example applications that demonstrate various
933
1869
  patterns and use cases. Each example can be found in the `examples` directory of
934
1870
  the gem repository. Detailed descriptions for these are provided below.
935
1871
 
936
- === Simple example
1872
+ === Pipeline mode examples
1873
+
1874
+ ==== Simple example
937
1875
 
938
1876
  The Simple Example (link:examples/simple/[examples/simple/]) demonstrates the
939
1877
  basic usage of the Fractor framework. It shows how to create a simple Work
@@ -950,10 +1888,11 @@ Key features:
950
1888
  * Auto-detection of available processors
951
1889
  * Graceful shutdown on completion
952
1890
 
953
- === Auto-detection example
1891
+ ==== Auto-detection example
954
1892
 
955
- The Auto-Detection Example (link:examples/auto_detection/[examples/auto_detection/])
956
- demonstrates Fractor's automatic worker detection feature. It shows how to use
1893
+ The Auto-Detection Example
1894
+ (link:examples/auto_detection/[examples/auto_detection/]) demonstrates
1895
+ Fractor's automatic worker detection feature. It shows how to use
957
1896
  auto-detection, explicit configuration, and mixed approaches for controlling
958
1897
  the number of workers.
959
1898
 
@@ -965,7 +1904,7 @@ Key features:
965
1904
  * Best practices for worker configuration
966
1905
  * Portable code that adapts to different environments
967
1906
 
968
- === Hierarchical hasher
1907
+ ==== Hierarchical hasher
969
1908
 
970
1909
  The Hierarchical Hasher example
971
1910
  (link:examples/hierarchical_hasher/[examples/hierarchical_hasher/]) demonstrates
@@ -980,7 +1919,7 @@ Key features:
980
1919
  * Independent processing of data segments
981
1920
  * Aggregation of results to form a final output
982
1921
 
983
- === Multi-work type
1922
+ ==== Multi-work type
984
1923
 
985
1924
  The Multi-Work Type example
986
1925
  (link:examples/multi_work_type/[examples/multi_work_type/]) demonstrates how a
@@ -994,7 +1933,7 @@ Key features:
994
1933
  * Polymorphic worker processing based on work type
995
1934
  * Unified workflow for diverse tasks
996
1935
 
997
- === Pipeline processing
1936
+ ==== Pipeline processing
998
1937
 
999
1938
  The Pipeline Processing example
1000
1939
  (link:examples/pipeline_processing/[examples/pipeline_processing/]) implements a
@@ -1008,7 +1947,7 @@ Key features:
1008
1947
  * Concurrent execution of different pipeline stages
1009
1948
  * Data transformation at each step of the pipeline
1010
1949
 
1011
- === Producer/subscriber
1950
+ ==== Producer/subscriber
1012
1951
 
1013
1952
  The Producer/Subscriber example
1014
1953
  (link:examples/producer_subscriber/[examples/producer_subscriber/]) showcases a
@@ -1022,7 +1961,7 @@ Key features:
1022
1961
  * Dynamic generation of sub-work based on initial processing
1023
1962
  * Construction of hierarchical result structures
1024
1963
 
1025
- === Scatter/gather
1964
+ ==== Scatter/gather
1026
1965
 
1027
1966
  The Scatter/Gather example
1028
1967
  (link:examples/scatter_gather/[examples/scatter_gather/]) illustrates how a
@@ -1037,7 +1976,7 @@ Key features:
1037
1976
  * Concurrent processing of subtasks
1038
1977
  * Aggregation of partial results into a final result
1039
1978
 
1040
- === Specialized workers
1979
+ ==== Specialized workers
1041
1980
 
1042
1981
  The Specialized Workers example
1043
1982
  (link:examples/specialized_workers/[examples/specialized_workers/]) demonstrates
@@ -1052,10 +1991,59 @@ Key features:
1052
1991
  * Routing of work items to appropriately specialized workers
1053
1992
  * Optimization of resources and logic per task type
1054
1993
 
1994
+ === Continuous mode examples
1995
+
1996
+ ==== Plain socket implementation
1997
+
1998
+ The plain socket implementation
1999
+ (link:examples/continuous_chat_server/[examples/continuous_chat_server/])
2000
+ provides a baseline chat server built directly on TCP sockets, without
2002
+ Fractor. It serves as a comparison point for understanding the benefits of
2003
+ using Fractor for continuous processing.
2003
+
2004
+ ==== Fractor-based implementation
2005
+
2006
+ The Fractor-based implementation
2007
+ (link:examples/continuous_chat_fractor/[examples/continuous_chat_fractor/])
2008
+ demonstrates how to build a production-ready chat server using Fractor's
2009
+ continuous mode with high-level primitives.
2010
+
2011
+ Key features:
2012
+
2013
+ * *Continuous mode operation*: Server runs indefinitely processing messages as
2014
+ they arrive
2015
+ * *High-level primitives*: Uses WorkQueue and ContinuousServer to eliminate
2016
+ boilerplate
2017
+ * *Graceful shutdown*: Production-ready signal handling (SIGINT, SIGTERM,
2018
+ SIGUSR1/SIGBREAK)
2019
+ * *Callback-based results*: Clean separation of concerns with on_result and
2020
+ on_error callbacks
2021
+ * *Cross-platform support*: Works on Unix/Linux/macOS and Windows
2022
+ * *Process monitoring*: Runtime status checking via signals
2023
+ * *40% code reduction*: 167 lines versus 279 lines with the low-level API
2024
+
2025
+ The implementation includes:
2026
+
2027
+ * `chat_common.rb`: Work and Worker class definitions for chat message
2028
+ processing
2029
+ * `chat_server.rb`: Main server using high-level primitives
2030
+ * `simulate.rb`: Test client simulator
2031
+
2032
+ This example demonstrates production deployment patterns including:
2033
+
2034
+ * Systemd service integration
2035
+ * Docker container deployment
2036
+ * Process monitoring and health checks
2037
+ * Graceful restart procedures
2038
+
2039
+ See link:examples/continuous_chat_fractor/README.adoc[the chat server README]
2040
+ for detailed implementation documentation.
2041
+
2042
+
1055
2043
 
1056
2044
 
1057
2045
  == Copyright and license
1058
2046
 
1059
2047
  Copyright Ribose.
1060
2048
 
1061
- Licensed under the MIT License.
2049
+ Licensed under the Ribose BSD 2-Clause License.