fractor 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: e137bd644ad72869b5ac03f1afcfad823c8880bd31d148fca23afd990bb7647f
- data.tar.gz: b5c4e6a60e809f4da86be51a7b606dbd91b357d6c20ed726b170f5ca44c5d54c
+ metadata.gz: b9d15481939349c5d4ad4f3b09368d0221b690cf06a737007b5f247a20cda6e6
+ data.tar.gz: 66aca66a7c4b1ac1559a77fa97d20917bd52b3a9cf3c4da04906f2a6295ded75
  SHA512:
- metadata.gz: 67f40bc03a9b7bb2ca6c3c3c6fa98a4f692ef49c35608090fdafd06566d689d23085dead346e8560317d03179062f16405b0fd0e66ac5335ef941ee46d41396c
- data.tar.gz: 01c3622f2ef6e5d0bf55b76c4d7ec6beecd76f094110cc4ea22dd0916ea765aec387d55ea563d29aeaa4955c7f97d6f7756b1186b82b22ca2bc8fae47bbeaa9f
+ metadata.gz: 205fbcb518ea078314f5964e3e60a61212c95e3f2959bb3420e6a060bb17f8bbcef0fbd7a74e27ec5304718ead2c8defd41591f4707757f7ce25bcb22374d0c0
+ data.tar.gz: c66a1af867247746dfd5d63bd30348ae0cf18bb13186b549ccd1380eb3829dd4aa662f4dc359a9479d7666f418cb0e586e6d0b0caf0b74c45f17f29282048326
data/.rubocop.yml CHANGED
@@ -1,3 +1,5 @@
+ inherit_from: .rubocop_todo.yml
+
  AllCops:
  TargetRubyVersion: 3.0
 
data/.rubocop_todo.yml ADDED
@@ -0,0 +1,82 @@
+ # This configuration was generated by
+ # `rubocop --auto-gen-config`
+ # on 2025-05-06 11:12:48 UTC using RuboCop version 1.75.5.
+ # The point is for the user to remove these configuration records
+ # one by one as the offenses are removed from the code base.
+ # Note that changes in the inspected code, or installation of new
+ # versions of RuboCop, may require this file to be generated again.
+
+ # Offense count: 3
+ # Configuration parameters: AllowedMethods.
+ # AllowedMethods: enums
+ Lint/ConstantDefinitionInBlock:
+ Exclude:
+ - 'spec/fractor/integration_spec.rb'
+ - 'spec/fractor/work_spec.rb'
+
+ # Offense count: 1
+ Lint/HashCompareByIdentity:
+ Exclude:
+ - 'examples/producer_subscriber/producer_subscriber.rb'
+
+ # Offense count: 2
+ # Configuration parameters: AllowedParentClasses.
+ Lint/MissingSuper:
+ Exclude:
+ - 'examples/specialized_workers/specialized_workers.rb'
+
+ # Offense count: 3
+ Lint/RescueException:
+ Exclude:
+ - 'lib/fractor/wrapped_ractor.rb'
+
+ # Offense count: 15
+ # Configuration parameters: AllowedMethods, AllowedPatterns, CountRepeatedAttributes.
+ Metrics/AbcSize:
+ Max: 83
+
+ # Offense count: 8
+ # Configuration parameters: CountComments, CountAsOne, AllowedMethods, AllowedPatterns.
+ # AllowedMethods: refine
+ Metrics/BlockLength:
+ Max: 78
+
+ # Offense count: 2
+ # Configuration parameters: CountComments, CountAsOne.
+ Metrics/ClassLength:
+ Max: 155
+
+ # Offense count: 3
+ # Configuration parameters: AllowedMethods, AllowedPatterns.
+ Metrics/CyclomaticComplexity:
+ Max: 25
+
+ # Offense count: 32
+ # Configuration parameters: CountComments, CountAsOne, AllowedMethods, AllowedPatterns.
+ Metrics/MethodLength:
+ Max: 60
+
+ # Offense count: 2
+ # Configuration parameters: AllowedMethods, AllowedPatterns.
+ Metrics/PerceivedComplexity:
+ Max: 25
+
+ # Offense count: 1
+ Security/Eval:
+ Exclude:
+ - 'examples/multi_work_type/multi_work_type.rb'
+
+ # Offense count: 1
+ # Configuration parameters: AllowedConstants.
+ Style/Documentation:
+ Exclude:
+ - 'spec/**/*'
+ - 'test/**/*'
+ - 'examples/hierarchical_hasher/hierarchical_hasher.rb'
+
+ # Offense count: 8
+ # This cop supports safe autocorrection (--autocorrect).
+ # Configuration parameters: AllowHeredoc, AllowURI, URISchemes, IgnoreCopDirectives, AllowedPatterns, SplitStrings.
+ # URISchemes: http, https
+ Layout/LineLength:
+ Max: 160
data/README.adoc CHANGED
@@ -181,7 +181,7 @@ Handles graceful shutdown on `SIGINT` (Ctrl+C).
 
 
 
- == Quick start guide
+ == Quick start
 
  === General
 
@@ -198,16 +198,18 @@ encapsulates the input data needed for processing.
  require 'fractor'
 
  class MyWork < Fractor::Work
- # The base class already provides input storage and basic functionality
- # You can optionally override to_s for better debugging
+ # Store all properties in the input hash
+ def initialize(value)
+ super({ value: value })
+ end
 
- def initialize(input)
- super # This stores input in @input
- # Add any additional initialization or replace @input with your own logic
+ # Accessor method for the stored value
+ def value
+ input[:value]
  end
 
  def to_s
- "MyWork: #{@input}"
+ "MyWork: #{value}"
  end
  end
  ----
@@ -257,28 +259,43 @@ returns an error result.
  The Supervisor class orchestrates the entire framework, managing worker Ractors,
  distributing work, and collecting results.
 
- It initializes a pool of Ractors, each running an instance of the Worker
+ It initializes pools of Ractors, each running an instance of a Worker
  class. The Supervisor handles the communication between the main thread and
  the Ractors, including sending work items and receiving results.
 
  The Supervisor also manages the work queue and the ResultAggregator, which
  collects and organizes all results from the workers.
 
- To set up the Supervisor, you need to specify the Worker and Work classes you
- created earlier. You can also specify the number of parallel Ractors to use.
- The default is 2, but you can increase this for more parallelism.
+ To set up the Supervisor, you specify worker pools, each containing a Worker class
+ and the number of workers to create. You can create multiple worker pools with
+ different worker types to handle different kinds of work. Each worker pool can
+ process any type of Work object that inherits from Fractor::Work.
 
  [source,ruby]
  ----
  # Create the supervisor
  supervisor = Fractor::Supervisor.new(
- worker_class: MyWorker,
- work_class: MyWork,
- num_workers: 4 # Number of parallel Ractors
+ worker_pools: [
+ { worker_class: MyWorker, num_workers: 4 } # One pool with 4 workers
+ ]
  )
 
- # Add work items (raw data)
- supervisor.add_work([1, 2, 3, 4, 5].map { |i| MyWork.new(i) })
+ # Add individual work items (instances of Work subclasses)
+ supervisor.add_work_item(MyWork.new(1))
+
+ # Add multiple work items
+ supervisor.add_work_items([
+ MyWork.new(2),
+ MyWork.new(3),
+ MyWork.new(4),
+ MyWork.new(5)
+ ])
+
+ # You can add different types of Work objects to the same supervisor
+ supervisor.add_work_items([
+ MyWork.new(6),
+ OtherWork.new("data")
+ ])
 
  # Run the processing
  supervisor.run
@@ -292,7 +309,7 @@ That's it! With these three simple steps, you have a working parallel processing
  system using Fractor.
 
 
- == Detailed guides
+ == Usage
 
  === Work class
 
@@ -345,9 +362,11 @@ end
 
  [TIP]
  ====
+ ====
  * Keep Work objects lightweight and serializable since they will be passed
  between Ractors
  * Implement a meaningful `to_s` method for better debugging
+ ====
  * Consider adding validation in the initializer to catch issues early
  ====
 
@@ -415,7 +434,7 @@ def process(work)
  end
  ----
 
- === Unexpected errors caught by rescue
+ ===== Unexpected errors caught by rescue
 
  These are unexpected exceptions that may occur during processing. You should
  catch these and convert them into error results.
@@ -434,10 +453,12 @@ end
  ----
 
  [TIP]
+ ====
  * Keep the `process` method focused on a single responsibility
  * Use meaningful error messages that help diagnose issues
  * Consider adding logging within the `process` method for debugging
  * Ensure all paths return a valid `WorkResult` object
+ ====
 
  === WorkResult class
 
@@ -572,6 +593,7 @@ The WrappedRactor handles error propagation in two ways:
  yielded back
  . Unexpected errors in the Ractor itself are caught and logged
 
+
  === Supervisor class
 
  ==== Purpose and responsibilities
@@ -586,9 +608,14 @@ When creating a Supervisor, you can configure:
  [source,ruby]
  ----
  supervisor = Fractor::Supervisor.new(
- worker_class: MyWorker, # Required: Your Worker subclass
- work_class: MyWork, # Required: Your Work subclass
- num_workers: 4 # Optional: Number of Ractors (default: 2)
+ worker_pools: [
+ # Pool 1 - for general data processing
+ { worker_class: MyWorker, num_workers: 4 },
+
+ # Pool 2 - for specialized image processing
+ { worker_class: ImageWorker, num_workers: 2 }
+ ],
+ continuous_mode: false # Optional: Run in continuous mode (default: false)
  )
  ----
 
@@ -599,18 +626,27 @@ You can add work items individually or in batches:
  [source,ruby]
  ----
  # Add a single item
- supervisor.add_work([42])
+ supervisor.add_work_item(MyWork.new(42))
 
  # Add multiple items
- supervisor.add_work([1, 2, 3, 4, 5])
+ supervisor.add_work_items([
+ MyWork.new(1),
+ MyWork.new(2),
+ MyWork.new(3),
+ MyWork.new(4),
+ MyWork.new(5)
+ ])
 
- # Add complex items
- supervisor.add_work([
- {id: 1, data: "foo"},
- {id: 2, data: "bar"}
+ # Add items of different work types
+ supervisor.add_work_items([
+ TextWork.new("Process this text"),
+ ImageWork.new({ width: 800, height: 600 })
  ])
  ----
 
+ The Supervisor can handle any Work object that inherits from Fractor::Work.
+ Workers must check the type of Work they receive and process it accordingly.
+
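For illustration only, a worker that accepts both of the work types above could branch on the work's class. This is a sketch, not code from the gem: the `input` field names and the keyword arguments to `Fractor::WorkResult.new` are assumptions.

[source,ruby]
----
class MultiTypeWorker < Fractor::Worker
  def process(work)
    case work
    when TextWork
      # Hypothetical text handling: upcase the stored text
      output = work.input[:text].to_s.upcase
    when ImageWork
      # Hypothetical image handling: summarise the stored dimensions
      output = "#{work.input[:width]}x#{work.input[:height]}"
    else
      return Fractor::WorkResult.new(error: "Unsupported work type: #{work.class}", work: work)
    end
    Fractor::WorkResult.new(result: output, work: work)
  end
end
----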
  ==== Running and monitoring
 
  To start processing:
@@ -628,6 +664,7 @@ The Supervisor automatically handles:
  * Collecting results and errors
  * Graceful shutdown on completion or interruption (Ctrl+C)
 
+
  ==== Accessing results
 
  After processing completes:
@@ -652,33 +689,42 @@ aggregator.errors.each do |error_result|
  end
  ----
 
- ==== Advanced usage patterns
+ == Advanced usage patterns
 
- ===== Custom work distribution
+ === Custom work distribution
 
  For more complex scenarios, you might want to prioritize certain work items:
 
  [source,ruby]
  ----
+ # Create Work objects for high priority items
+ high_priority_works = high_priority_items.map { |item| MyWork.new(item) }
+
  # Add high-priority items first
- supervisor.add_work(high_priority_items)
+ supervisor.add_work_items(high_priority_works)
 
  # Run with just enough workers for high-priority items
  supervisor.run
 
+ # Create Work objects for lower priority items
+ low_priority_works = low_priority_items.map { |item| MyWork.new(item) }
+
  # Add and process lower-priority items
- supervisor.add_work(low_priority_items)
+ supervisor.add_work_items(low_priority_works)
  supervisor.run
  ----
 
- ===== Handling large datasets
+ === Handling large datasets
 
  For very large datasets, consider processing in batches:
 
  [source,ruby]
  ----
  large_dataset.each_slice(1000) do |batch|
- supervisor.add_work(batch)
+ # Convert batch items to Work objects
+ work_batch = batch.map { |item| MyWork.new(item) }
+
+ supervisor.add_work_items(work_batch)
  supervisor.run
 
  # Process this batch's results before continuing
@@ -686,11 +732,13 @@ large_dataset.each_slice(1000) do |batch|
  end
  ----
 
- == Running the example
+
+ == Running a basic example
 
  . Install the gem as described in the Installation section.
 
- . Create a new Ruby file (e.g., `my_fractor_example.rb`) with your implementation:
+ . Create a new Ruby file (e.g., `my_fractor_example.rb`) with your
+ implementation:
 
  [source,ruby]
  ----
@@ -717,15 +765,18 @@ class MyWorker < Fractor::Worker
  end
  end
 
- # Create supervisor
+ # Create supervisor with a worker pool
  supervisor = Fractor::Supervisor.new(
- worker_class: MyWorker,
- work_class: MyWork,
- num_workers: 2
+ worker_pools: [
+ { worker_class: MyWorker, num_workers: 2 }
+ ]
  )
 
- # Add work items (1..10)
- supervisor.add_work((1..10).to_a)
+ # Create Work objects
+ work_items = (1..10).map { |i| MyWork.new(i) }
+
+ # Add work items
+ supervisor.add_work_items(work_items)
 
  # Run processing
  supervisor.run
@@ -747,6 +798,195 @@ the final aggregated results, including any errors encountered. Press `Ctrl+C`
  during execution to test the graceful shutdown.
 
 
+ == Continuous mode
+
+ === General
+
+ Fractor provides a powerful feature called "continuous mode" that allows
+ supervisors to run indefinitely, processing work items as they arrive without
+ stopping after the initial work queue is empty.
+
+ === Features
+
+ * *Non-stopping Execution*: Supervisors run indefinitely until explicitly stopped
+ * *On-demand Work*: Workers only process work when it's available
+ * *Resource Efficiency*: Workers idle when no work is available, without consuming excessive resources
+ * *Dynamic Work Addition*: New work can be added at any time through the work source callback
+ * *Graceful Shutdown*: Resources are properly cleaned up when the supervisor is stopped
+
+ Continuous mode is particularly useful for:
+
+ * *Chat servers*: Processing incoming messages as they arrive
+ * *Background job processors*: Handling tasks from a job queue
+ * *Real-time data processing*: Analyzing data streams as they come in
+ * *Web servers*: Responding to incoming requests in parallel
+ * *Monitoring systems*: Continuously checking system statuses
+
+ See the Chat Server example in the examples directory for a complete implementation of continuous mode.
+
+
+ === Using continuous mode
+
+ ==== Step 1. Create a supervisor with the `continuous_mode: true` option
+
+ [source,ruby]
+ ----
+ supervisor = Fractor::Supervisor.new(
+ worker_pools: [
+ { worker_class: MyWorker, num_workers: 2 }
+ ],
+ continuous_mode: true # Enable continuous mode
+ )
+ ----
+
+ ==== Step 2. Register a work source callback that provides new work on demand
+
+ [source,ruby]
+ ----
+ supervisor.register_work_source do
+ # Return nil or empty array if no work is available
+ # Return a work item or array of work items when available
+ items = get_next_work_items
+ if items && !items.empty?
+ # Convert to Work objects if needed
+ items.map { |item| MyWork.new(item) }
+ else
+ nil
+ end
+ end
+ ----
+
+ ==== Step 3. Run the supervisor in a non-blocking way
+
+ Typically in a background thread.
+
+ [source,ruby]
+ ----
+ supervisor_thread = Thread.new { supervisor.run }
+ ----
+
+ ==== Step 4. Explicitly call `stop` on the supervisor to stop processing
+
+ [source,ruby]
+ ----
+ supervisor.stop
+ supervisor_thread.join # Wait for the supervisor thread to finish
+ ----
+
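Putting these steps together, one way (not part of Fractor's API) to back the work source is a thread-safe queue that other threads push to. A minimal sketch; `incoming` and the string messages are illustrative:

[source,ruby]
----
incoming = Queue.new  # Ruby's built-in thread-safe queue

supervisor.register_work_source do
  items = []
  begin
    # Drain whatever has arrived without blocking the supervisor loop
    loop { items << incoming.pop(true) }
  rescue ThreadError
    # Queue is empty
  end
  items.empty? ? nil : items.map { |msg| MyWork.new(msg) }
end

supervisor_thread = Thread.new { supervisor.run }

# Any other thread can now submit work at any time:
incoming << "first message"
incoming << "second message"
----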
+
+
+ == Example applications
+
+ === General
+
+ The Fractor gem comes with several example applications that demonstrate various
+ patterns and use cases. Each example can be found in the `examples` directory of
+ the gem repository. Detailed descriptions for these are provided below.
+
+ === Simple example
+
+ The Simple Example (link:examples/simple/[examples/simple/]) demonstrates the
+ basic usage of the Fractor framework. It shows how to create a simple Work
+ class, a Worker class, and a Supervisor to manage the processing of work items
+ in parallel. This example serves as a starting point for understanding how to
+ use Fractor.
+
+ Key features:
+
+ * Basic Work and Worker class implementation
+ * Simple Supervisor setup
+ * Parallel processing of work items
+ * Error handling and result aggregation
+ * Graceful shutdown on completion
+
+ === Hierarchical hasher
+
+ The Hierarchical Hasher example
+ (link:examples/hierarchical_hasher/[examples/hierarchical_hasher/]) demonstrates
+ how to use the Fractor framework to process a file in parallel by breaking it
+ into chunks, hashing each chunk independently, and then combining the results
+ into a final hash. This approach is useful for processing large files
+ efficiently.
+
+ Key features:
+
+ * Parallel data chunking for large files
+ * Independent processing of data segments
+ * Aggregation of results to form a final output
+
+ === Multi-work type
+
+ The Multi-Work Type example
+ (link:examples/multi_work_type/[examples/multi_work_type/]) demonstrates how a
+ single Fractor supervisor and worker can handle multiple types of work items
+ (e.g., `TextWork` and `ImageWork`). The worker intelligently adapts its
+ processing strategy based on the class of the incoming work item.
+
+ Key features:
+
+ * Support for multiple `Fractor::Work` subclasses
+ * Polymorphic worker processing based on work type
+ * Unified workflow for diverse tasks
+
+ === Pipeline processing
+
+ The Pipeline Processing example
+ (link:examples/pipeline_processing/[examples/pipeline_processing/]) implements a
+ multi-stage processing pipeline where data flows sequentially through a series
+ of transformations. The output of one stage becomes the input for the next, and
+ different stages can operate concurrently on different data items.
+
+ Key features:
+
+ * Sequential data flow through multiple processing stages
+ * Concurrent execution of different pipeline stages
+ * Data transformation at each step of the pipeline
+
+ === Producer/subscriber
+
+ The Producer/Subscriber example
+ (link:examples/producer_subscriber/[examples/producer_subscriber/]) showcases a
+ multi-stage document processing system where initial work (processing a
+ document) can generate additional sub-work items (processing sections of the
+ document). This creates a hierarchical processing pattern.
+
+ Key features:
+
+ * Implementation of producer-consumer patterns
+ * Dynamic generation of sub-work based on initial processing
+ * Construction of hierarchical result structures
+
+ === Scatter/gather
+
+ The Scatter/Gather example
+ (link:examples/scatter_gather/[examples/scatter_gather/]) illustrates how a
+ large task or dataset is broken down (scattered) into smaller, independent
+ subtasks. These subtasks are processed in parallel by multiple workers, and
+ their results are then collected (gathered) and combined to produce the final
+ output.
+
+ Key features:
+
+ * Distribution of a large task into smaller, parallelizable subtasks
+ * Concurrent processing of subtasks
+ * Aggregation of partial results into a final result
+
+ === Specialized workers
+
+ The Specialized Workers example
+ (link:examples/specialized_workers/[examples/specialized_workers/]) demonstrates
+ creating distinct worker types, each tailored to handle specific kinds of tasks
+ (e.g., `ComputeWorker` for CPU-intensive operations and `DatabaseWorker` for
+ I/O-bound database interactions). This allows for optimized resource utilization
+ and domain-specific logic.
+
+ Key features:
+
+ * Creation of worker classes for specific processing domains
+ * Routing of work items to appropriately specialized workers
+ * Optimization of resources and logic per task type
+
+
 
  == Copyright and license
 
@@ -0,0 +1,75 @@
+ = Hierarchical Hasher Example
+ :toc: macro
+ :toc-title: Table of Contents
+ :toclevels: 3
+
+ toc::[]
+
+ == Overview
+
+ The Hierarchical Hasher example demonstrates how to use the Fractor framework to process a file in parallel by breaking it into chunks, hashing each chunk independently, and then combining the results into a final hash.
+
+ This example is particularly useful for:
+
+ * Processing large files efficiently
+ * Demonstrating parallel data chunking patterns
+ * Showcasing result aggregation techniques
+
+ == Implementation Details
+
+ The example consists of the following key components:
+
+ === ChunkWork
+
+ A subclass of `Fractor::Work` that represents a chunk of a file to be hashed. Each `ChunkWork` instance contains:
+
+ * The chunk data
+ * The starting position within the file
+ * The length of the chunk
+
+ === HashWorker
+
+ A subclass of `Fractor::Worker` that processes `ChunkWork` instances by:
+
+ 1. Calculating a SHA-256 hash for the chunk
+ 2. Returning a work result containing the hash, start position, and length
+
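A rough sketch of the two classes described above (simplified; the `WorkResult` keyword arguments are assumed, and the example's actual source may differ):

[source,ruby]
----
require "digest"
require "fractor"

# A chunk of the file, with its position and size
class ChunkWork < Fractor::Work
  def initialize(data, start, length)
    super({ data: data, start: start, length: length })
  end
end

# Hashes a single chunk and reports where it came from
class HashWorker < Fractor::Worker
  def process(work)
    digest = Digest::SHA256.hexdigest(work.input[:data])
    Fractor::WorkResult.new(
      result: { hash: digest, start: work.input[:start], length: work.input[:length] },
      work: work
    )
  end
end
----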
37
+ === FileHasher
38
+
39
+ The main orchestration class that:
40
+
41
+ 1. Breaks a file into chunks of a specified size
42
+ 2. Creates a `Fractor::Supervisor` with the `HashWorker` and `ChunkWork` classes
43
+ 3. Processes all chunks in parallel
44
+ 4. Aggregates the results to create a final hash by combining all chunk hashes
45
+
46
+ == Usage
47
+
48
+ [source,ruby]
49
+ ----
50
+ # Basic usage
51
+ ruby hierarchical_hasher.rb <file_path> [worker_count]
52
+
53
+ # Examples
54
+ ruby hierarchical_hasher.rb sample.txt # Use default 4 workers
55
+ ruby hierarchical_hasher.rb large_file.dat 8 # Use 8 workers
56
+ ----
57
+
58
+ == How It Works
59
+
60
+ 1. The file is divided into 1KB chunks (configurable)
61
+ 2. Each chunk is assigned to a worker for processing
62
+ 3. Workers calculate SHA-256 hashes for their assigned chunks
63
+ 4. Results are collected and sorted by their original position in the file
64
+ 5. The individual chunk hashes are concatenated with newlines
65
+ 6. A final SHA-256 hash is calculated on the combined hash string
66
+
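Steps 4-6 amount to a few lines of aggregation. A sketch, assuming `chunk_results` holds the per-chunk result hashes collected from the workers:

[source,ruby]
----
require "digest"

# Assumed shape of the per-chunk results collected from the workers
chunk_results = [
  { hash: Digest::SHA256.hexdigest("chunk-b"), start: 1024, length: 1024 },
  { hash: Digest::SHA256.hexdigest("chunk-a"), start: 0,    length: 1024 }
]

sorted   = chunk_results.sort_by { |r| r[:start] }  # step 4: order by position in the file
combined = sorted.map { |r| r[:hash] }.join("\n")   # step 5: concatenate with newlines
final    = Digest::SHA256.hexdigest(combined)       # step 6: hash the combined string
puts final
----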
+ == Performance Considerations
+
+ * The chunk size can be adjusted to optimize performance for different file types
+ * The number of workers can be increased for better parallelization on multi-core systems
+ * Very small files may not benefit from parallelization due to the overhead
+
+ == Ractor Compatibility Note
+
+ This example uses SHA-256 instead of SHA3 because the SHA3 implementation in some Ruby versions is not Ractor-compatible.