fractor 0.1.3 → 0.1.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.rubocop-https---raw-githubusercontent-com-riboseinc-oss-guides-main-ci-rubocop-yml +552 -0
- data/.rubocop.yml +14 -8
- data/.rubocop_todo.yml +154 -48
- data/README.adoc +1371 -317
- data/examples/auto_detection/README.adoc +52 -0
- data/examples/auto_detection/auto_detection.rb +170 -0
- data/examples/continuous_chat_common/message_protocol.rb +53 -0
- data/examples/continuous_chat_fractor/README.adoc +217 -0
- data/examples/continuous_chat_fractor/chat_client.rb +303 -0
- data/examples/continuous_chat_fractor/chat_common.rb +83 -0
- data/examples/continuous_chat_fractor/chat_server.rb +167 -0
- data/examples/continuous_chat_fractor/simulate.rb +345 -0
- data/examples/continuous_chat_server/README.adoc +135 -0
- data/examples/continuous_chat_server/chat_client.rb +303 -0
- data/examples/continuous_chat_server/chat_server.rb +359 -0
- data/examples/continuous_chat_server/simulate.rb +343 -0
- data/examples/hierarchical_hasher/hierarchical_hasher.rb +12 -8
- data/examples/multi_work_type/multi_work_type.rb +30 -29
- data/examples/pipeline_processing/pipeline_processing.rb +15 -15
- data/examples/producer_subscriber/producer_subscriber.rb +20 -16
- data/examples/scatter_gather/scatter_gather.rb +29 -28
- data/examples/simple/sample.rb +38 -6
- data/examples/specialized_workers/specialized_workers.rb +44 -37
- data/lib/fractor/continuous_server.rb +188 -0
- data/lib/fractor/result_aggregator.rb +1 -1
- data/lib/fractor/supervisor.rb +291 -108
- data/lib/fractor/version.rb +1 -1
- data/lib/fractor/work_queue.rb +68 -0
- data/lib/fractor/work_result.rb +1 -1
- data/lib/fractor/worker.rb +2 -1
- data/lib/fractor/wrapped_ractor.rb +12 -2
- data/lib/fractor.rb +2 -0
- metadata +17 -2
@@ -0,0 +1,52 @@
= Auto-Detection Example

This example demonstrates Fractor's automatic worker detection feature.

== Purpose

Shows how Fractor can automatically detect the number of available processors on your system and create the optimal number of workers without manual configuration.

== What This Demonstrates

* How Fractor automatically detects the number of available processors
* Comparison between auto-detection and explicit worker configuration
* Mixed configuration (some pools with auto-detection, some explicit)
* How to verify the number of workers being used

== When to Use Auto-Detection

* For portable code that adapts to different environments
* When you want optimal resource utilization without manual tuning
* For development where the number of cores varies across machines

== When to Set Explicit Values

* When you need precise control over resource usage
* For production environments with specific requirements
* When limiting workers due to memory or other constraints

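In practice the choice is simply whether a pool passes `num_workers`. A minimal sketch of both configurations, mirroring the supervisors built in `auto_detection.rb` below (`ComputeWorker` is the worker class defined in that script):

[source,ruby]
----
require "fractor"

# Auto-detection: omit num_workers and Fractor sizes the pool
# from the number of processors on the machine.
auto = Fractor::Supervisor.new(
  worker_pools: [
    { worker_class: ComputeWorker },
  ],
)

# Explicit: pin the pool to exactly 4 workers, e.g. to cap memory use.
explicit = Fractor::Supervisor.new(
  worker_pools: [
    { worker_class: ComputeWorker, num_workers: 4 },
  ],
)
----
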
== Running the Example

[source,shell]
----
ruby examples/auto_detection/auto_detection.rb
----

== Expected Output

The script will:

1. Display the number of processors detected on your system
2. Run three examples:
* Example 1: Auto-detection (uses all available processors)
* Example 2: Explicit configuration (uses exactly 4 workers)
* Example 3: Mixed configuration (combines both approaches)
3. Show results from processing work items in parallel
4. Provide a summary of benefits for each approach

== Key Takeaways

* Auto-detection provides automatic adaptation to different environments
* Explicit configuration provides precise control when needed
* You can mix both approaches in the same supervisor
* Best practice: use auto-detection for development, tune for production if needed

@@ -0,0 +1,170 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

# =============================================================================
# Auto-Detection Example
# =============================================================================
#
# This example demonstrates Fractor's automatic worker detection feature.
#
# WHAT THIS DEMONSTRATES:
# - How Fractor automatically detects the number of available processors
# - Comparison between auto-detection and explicit worker configuration
# - Mixed configuration (some pools with auto-detection, some explicit)
# - How to verify the number of workers being used
#
# WHEN TO USE AUTO-DETECTION:
# - For portable code that adapts to different environments
# - When you want optimal resource utilization without manual tuning
# - For development where the number of cores varies across machines
#
# WHEN TO SET EXPLICIT VALUES:
# - When you need precise control over resource usage
# - For production environments with specific requirements
# - When limiting workers due to memory or other constraints
#
# HOW TO RUN:
#   ruby examples/auto_detection/auto_detection.rb
#
# WHAT TO EXPECT:
# - The script will show how many processors were auto-detected
# - It will create workers based on detection vs explicit configuration
# - Results will be processed in parallel across all workers
#
# =============================================================================

require_relative "../../lib/fractor"
require "etc"

# Simple work class for demonstration
class ComputeWork < Fractor::Work
  def initialize(value)
    super({ value: value })
  end

  def value
    input[:value]
  end

  def to_s
    "ComputeWork: #{value}"
  end
end

# Simple worker that squares numbers
class ComputeWorker < Fractor::Worker
  def process(work)
    result = work.value * work.value
    Fractor::WorkResult.new(result: result, work: work)
  rescue StandardError => e
    Fractor::WorkResult.new(error: e, work: work)
  end
end

# =============================================================================
# DEMONSTRATION
# =============================================================================

puts "=" * 80
puts "Fractor Auto-Detection Example"
puts "=" * 80
puts

# Show system information
num_processors = Etc.nprocessors
puts "System Information:"
puts "  Available processors: #{num_processors}"
puts

# Example 1: Auto-detection (recommended for most cases)
puts "-" * 80
puts "Example 1: Auto-Detection"
puts "-" * 80
puts "Creating supervisor WITHOUT specifying num_workers..."
puts "Fractor will automatically detect and use #{num_processors} workers"
puts

supervisor1 = Fractor::Supervisor.new(
  worker_pools: [
    { worker_class: ComputeWorker }, # No num_workers specified
  ],
)

# Add work items
work_items = (1..10).map { |i| ComputeWork.new(i) }
supervisor1.add_work_items(work_items)

puts "Processing 10 work items with auto-detected workers..."
supervisor1.run

puts "Results: #{supervisor1.results.results.map(&:result).sort.join(', ')}"
puts "✓ Auto-detection successful!"
puts

# Example 2: Explicit configuration
puts "-" * 80
puts "Example 2: Explicit Configuration"
puts "-" * 80
puts "Creating supervisor WITH explicit num_workers=4..."
puts

supervisor2 = Fractor::Supervisor.new(
  worker_pools: [
    { worker_class: ComputeWorker, num_workers: 4 },
  ],
)

supervisor2.add_work_items((11..20).map { |i| ComputeWork.new(i) })

puts "Processing 10 work items with 4 explicitly configured workers..."
supervisor2.run

puts "Results: #{supervisor2.results.results.map(&:result).sort.join(', ')}"
puts "✓ Explicit configuration successful!"
puts

# Example 3: Mixed configuration
puts "-" * 80
puts "Example 3: Mixed Auto-Detection and Explicit Configuration"
puts "-" * 80
puts "Creating supervisor with multiple worker pools:"
puts "  - Pool 1: Auto-detected workers"
puts "  - Pool 2: 2 explicitly configured workers"
puts

supervisor3 = Fractor::Supervisor.new(
  worker_pools: [
    { worker_class: ComputeWork​er }, # Auto-detected
    { worker_class: ComputeWorker, num_workers: 2 }, # Explicit
  ],
)

supervisor3.add_work_items((21..30).map { |i| ComputeWork.new(i) })

puts "Processing 10 work items with mixed configuration..."
supervisor3.run

puts "Results: #{supervisor3.results.results.map(&:result).sort.join(', ')}"
puts "✓ Mixed configuration successful!"
puts

# Summary
puts "=" * 80
puts "Summary"
puts "=" * 80
puts
puts "Auto-detection provides:"
puts "  ✓ Automatic adaptation to different environments"
puts "  ✓ Optimal resource utilization by default"
puts "  ✓ Less configuration needed"
puts "  ✓ Portability across machines with different CPU counts"
puts
puts "Explicit configuration provides:"
puts "  ✓ Precise control over worker count"
puts "  ✓ Ability to limit resource usage"
puts "  ✓ Predictable behavior in production"
puts
puts "Best practice: Use auto-detection for development and testing,"
puts "               then tune explicitly for production if needed."
puts
puts "=" * 80
@@ -0,0 +1,53 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

require "json"
require "time"

module ContinuousChat
  # Message packet class for handling protocol messages
  class MessagePacket
    attr_reader :type, :data, :timestamp

    def initialize(type, data, timestamp = Time.now.to_i)
      @type = type.to_sym
      @data = data
      @timestamp = timestamp
    end

    # Convert to JSON string
    def to_json(*_args)
      {
        type: @type,
        data: @data,
        timestamp: @timestamp,
      }.to_json
    end

    # String representation
    def to_s
      to_json
    end
  end

  # Helper module for message protocol
  module MessageProtocol
    # Create a packet of the given type with data
    def self.create_packet(type, data)
      MessagePacket.new(type, data).to_json
    end

    # Parse a JSON string into a message packet
    def self.parse_packet(json_string)
      data = JSON.parse(json_string)
      type = data["type"]&.to_sym
      content = data["data"]
      timestamp = data["timestamp"] || Time.now.to_i

      MessagePacket.new(type, content, timestamp)
    rescue JSON::ParserError => e
      puts "Error parsing message: #{e.message}"
      nil
    end
  end
end
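A quick round trip through these helpers, as a usage sketch (the chat client and server in the following files are the real consumers; the payload hash here is illustrative):

[source,ruby]
----
require_relative "message_protocol"

# Serialize a broadcast message into the JSON wire format...
json = ContinuousChat::MessageProtocol.create_packet(
  :broadcast, { from: "alice", text: "hello" }
)

# ...and parse it back into a MessagePacket (returns nil on malformed JSON).
packet = ContinuousChat::MessageProtocol.parse_packet(json)
packet.type       # => :broadcast
packet.data       # => {"from"=>"alice", "text"=>"hello"} (string keys after JSON)
packet.timestamp  # => Unix timestamp recorded at creation
----
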
@@ -0,0 +1,217 @@
= Continuous Chat Server Example (Fractor-based)

== Overview

This example demonstrates Fractor's continuous mode feature with a chat server implementation. Unlike the plain socket implementation in `examples/continuous_chat_server/`, this version uses Fractor's Worker and Supervisor classes to process chat messages concurrently.

The example shows how to:

* Use Fractor in continuous mode (`continuous_mode: true`)
* Register work source callbacks with `register_work_source`
* Process messages asynchronously using Ractor-based workers
* Coordinate between main thread socket handling and Fractor workers
* Implement graceful shutdown

== Key Concepts

* *Continuous Mode*: The Fractor supervisor runs indefinitely, processing work as it arrives
* *Work Sources*: Callback functions that provide new work items to the supervisor on demand
* *Asynchronous Processing*: ChatWorker processes messages concurrently in Ractors
* *Thread Coordination*: Multiple threads working together - main thread handles I/O, Fractor workers process messages
* *Message Logging*: All message processing is logged to demonstrate Fractor's work distribution

== Architecture

=== Fractor Components

The server uses the following Fractor components:

1. *ChatMessage (Fractor::Work)*: Represents a chat message as a unit of work
- Encapsulates the message packet and optional client socket reference
- Each message becomes a work item in the Fractor queue

2. *ChatWorker (Fractor::Worker)*: Processes chat messages
- Runs in a Ractor for parallel processing
- Handles different message types (broadcast, direct_message, server_message, user_list)
- Returns WorkResult with processing outcome

3. *Supervisor*: Orchestrates the workers
- Configured with `continuous_mode: true` to run indefinitely
- Uses 2 worker Ractors (auto-detected from system processors)
- Registered work source pulls from a thread-safe Queue

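In outline, these classes follow the same `Work`/`Worker` pattern as the auto-detection example. A sketch, not the full `chat_common.rb` implementation (the `client_id` field and the `continuous_mode:` keyword to `Supervisor.new` are assumptions based on the notes above):

[source,ruby]
----
# A chat message wrapped as a Fractor work item.
class ChatMessage < Fractor::Work
  def initialize(packet, client_id = nil)
    super({ packet: packet, client_id: client_id })
  end

  def packet
    input[:packet]
  end
end

# Worker that classifies the message and reports the outcome.
class ChatWorker < Fractor::Worker
  def process(work)
    Fractor::WorkResult.new(result: "processed #{work.packet}", work: work)
  rescue StandardError => e
    Fractor::WorkResult.new(error: e, work: work)
  end
end

# Continuous mode: the supervisor keeps running and pulls new messages
# from the registered work source instead of stopping when the queue drains.
supervisor = Fractor::Supervisor.new(
  worker_pools: [{ worker_class: ChatWorker, num_workers: 2 }],
  continuous_mode: true,
)
----
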
=== Thread Architecture

The server uses three concurrent components:

1. *Main Thread*: Handles socket I/O with `IO.select`
- Accepts new client connections
- Reads messages from client sockets
- Sends responses back to clients
- Puts received messages into the work queue

2. *Supervisor Thread*: Runs the Fractor supervisor
- Continuously pulls work from the queue via the work source callback
- Distributes work to available ChatWorker Ractors
- Collects results in the ResultAggregator

3. *Results Thread*: Processes completed work
- Monitors the ResultAggregator for new results
- Logs processing outcomes
- Handles errors from workers

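Wired together, the coordination looks roughly like this (a simplified sketch; `chat_server.rb` adds the real `IO.select` loop, logging, and shutdown handling, and the work source and results loops appear in the Implementation Notes below):

[source,ruby]
----
message_queue = Queue.new   # thread-safe hand-off between socket I/O and Fractor

# Supervisor thread: runs the continuous-mode supervisor; its registered
# work source (see "Work Source Callback" below) pops from message_queue.
supervisor_thread = Thread.new { supervisor.run }

# Results thread: polls the ResultAggregator and logs outcomes
# (see "Results Processing" below).
results_thread = Thread.new do
  # poll supervisor.results and log each completed WorkResult
end

# Main thread: IO.select over the server and client sockets; every decoded
# client line is pushed onto message_queue as a ChatMessage.
----
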
== Example Components

1. *chat_common.rb*: Shared code with Fractor classes
* `MessagePacket` class for message protocol
* `MessageProtocol` module for serialization
* `ChatMessage` class extending `Fractor::Work`
* `ChatWorker` class extending `Fractor::Worker`

2. *chat_server.rb*: Fractor-based chat server
* Socket handling in main thread
* Fractor supervisor in continuous mode
* Work source callback pulling from Queue
* Results processing thread

3. *chat_client.rb*: Simple chat client (reused from plain example)
* Connects to server via TCP socket
* Sends and receives JSON messages

4. *simulate.rb*: Automated simulation
* Creates server and multiple clients as processes
* Runs predefined message schedule
* Analyzes logs after completion

== Running the Example

=== Running the Simulation

To run the complete automated simulation:

[source,sh]
----
ruby examples/continuous_chat_fractor/simulate.rb
----

Optional parameters:
* `-p, --port PORT` - Specify server port (default: 3000)
* `-d, --duration SECONDS` - Duration of simulation in seconds (default: 10)
* `-l, --log-dir DIR` - Directory for log files (default: logs)
* `-h, --help` - Show help message

=== Running Server and Clients Separately

Run the server in one terminal:

[source,sh]
----
ruby examples/continuous_chat_fractor/chat_server.rb [PORT] [LOG_FILE]
----

Run clients in different terminals:

[source,sh]
----
ruby examples/continuous_chat_fractor/chat_client.rb [USERNAME] [PORT] [LOG_FILE]
----

== Features Demonstrated

* *Continuous Mode*: Supervisor runs indefinitely without stopping
* *Work Source Callback*: Dynamically provides work from a Queue
* *Concurrent Processing*: Multiple Ractor workers process messages in parallel
* *Thread Coordination*: Main thread, supervisor thread, and results thread work together
* *Message Logging*: All operations logged to files for verification
* *Graceful Shutdown*: Proper cleanup of Fractor supervisor and sockets

== Comparison with Plain Socket Implementation

The plain socket implementation (`examples/continuous_chat_server/`) uses:
- `IO.select` for non-blocking I/O
- Sequential message processing in the main thread
- Simple, straightforward architecture

The Fractor-based implementation demonstrates:
- Parallel message processing using Ractors
- Work queue pattern with work source callbacks
- Separation of concerns (I/O vs processing)
- Continuous mode supervisor pattern

Both implementations are functional. The Fractor version shows how to structure a long-running server using Fractor's continuous mode, which is useful for:
- CPU-intensive message processing
- Scaling message handling across cores
- Separating I/O from computation
- Learning Fractor's continuous mode patterns

== Expected Output

The simulation will show:
* Fractor supervisor starting with workers
* Clients connecting to the server
* Messages being sent between clients
* Messages being added to Fractor work queue (logged as "Received from...")
* Graceful shutdown of all components

NOTE: In this implementation, Fractor workers process messages in parallel for demonstration purposes (analyzing message types, logging processing), while the main thread handles actual message delivery to ensure real-time responsiveness. The work items are successfully queued and processed by workers - you can verify this by seeing that all messages are correctly broadcast/delivered to clients.

== Log Files

After running the simulation, check the `logs/` directory:

* `server_messages.log` - Server activity and Fractor processing
* `client_<username>_messages.log` - Client activity
* `client_<username>_send_messages.json` - Messages sent by client

== Implementation Notes

=== Why Thread-safe Queue?

The implementation uses Ruby's `Queue` class (thread-safe) to coordinate between:
- Main thread (producing work from socket I/O)
- Supervisor thread (consuming work via work source callback)

This is necessary because Ractors cannot directly share mutable objects with threads.

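The producer side is just a push from the socket loop; the consumer side is the work source callback shown in the next section. Sketch (`client_socket` stands for whichever socket `IO.select` reported as readable):

[source,ruby]
----
message_queue = Queue.new

# Main thread, inside the IO.select read loop:
if (line = client_socket.gets)
  packet = ContinuousChat::MessageProtocol.parse_packet(line)
  message_queue << ChatMessage.new(packet) if packet
end
----
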
=== Work Source Callback

The work source callback pulls up to 5 messages from the queue at once:

[source,ruby]
----
supervisor.register_work_source do
  messages = []
  5.times do
    break if message_queue.empty?
    msg = message_queue.pop(true) rescue nil
    messages << msg if msg
  end
  messages.empty? ? nil : messages
end
----

This batching improves efficiency by reducing callback overhead.

=== Results Processing

A separate thread monitors the ResultAggregator because:
- The main thread is busy with socket I/O
- The supervisor thread is running `supervisor.run`
- We want to log results as they complete

In a production system, you might process results differently (e.g., send notifications, update databases, etc.).

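A minimal version of that thread, assuming only what the other examples already use (`supervisor.results` returns the aggregator, whose `results` array grows as work completes):

[source,ruby]
----
results_thread = Thread.new do
  seen = 0
  loop do
    completed = supervisor.results.results
    completed[seen..].each { |r| puts "Processed: #{r.result}" }
    seen = completed.size
    sleep 0.1   # brief pause to avoid busy-waiting between polls
  end
end
----
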
== Continuous Mode Benefits

This example demonstrates key benefits of Fractor's continuous mode:

1. *Non-stopping Execution*: Server runs indefinitely, processing messages as they arrive
2. *Dynamic Work Addition*: Work source callback provides new work on demand
3. *Resource Efficiency*: Workers idle when no work available
4. *Parallel Processing*: Multiple messages processed concurrently
5. *Graceful Shutdown*: `supervisor.stop` cleanly terminates workers

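Shutdown can be driven by a signal flag checked in the main loop. A sketch (the only Fractor call assumed here is `supervisor.stop`; `supervisor_thread` and `server_socket` are the supervisor thread and listening socket from the server setup):

[source,ruby]
----
shutdown = false
Signal.trap("INT")  { shutdown = true }
Signal.trap("TERM") { shutdown = true }

until shutdown
  # ... accept connections, read sockets, enqueue ChatMessage work ...
  sleep 0.05
end

supervisor.stop          # cleanly terminate the worker Ractors
supervisor_thread.join   # wait for the continuous-mode loop to return
server_socket&.close     # release the listening socket
----
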
== See Also

* link:../continuous_chat_server/[Plain Socket Implementation] - Simpler approach without Fractor
* link:../../README.adoc#continuous-mode[Main README Continuous Mode Section]