async-container-supervisor 0.7.0 → 0.9.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: a2da6a39261568dcfdcd067bcbd7364397df19aad34826ec9d7b6745e74aa198
4
- data.tar.gz: b8510b8f17ac2fea393f12223604c96a8f9303d6e87b1fc2ac54fc6b0cdfdaeb
3
+ metadata.gz: 1b9684c9b4ef621c8b92411d251478b9751cc901e251fa1b35c3fca92af18763
4
+ data.tar.gz: 5e9f9a25b01f4de9c160aa194acd2a3467215d09ccbbdfbac178c2bb1f278e58
5
5
  SHA512:
6
- metadata.gz: bf79321c826f009edac43b3c1bf1313710ed6881863aa56dea8c7909c7c231cbb1e07b92b0ed2241f1f76cd137e934ec17fb0b6ed00b30ebe8778c430c61735c
7
- data.tar.gz: c469d508ec02830abe705ec935b6778b45f78923ed592e9af652f07189ad1929e2c61be389bed2ab2076697e7349398e199a43dd238b486e6ebbc899e4b5f9f7
6
+ metadata.gz: 7fa18caa63bb5dff847b3640c63ca5c2109a3003537bf36cf8ac2947d560a8c0dc5d201b79e98c789e991db415fb2b07aa2c8a9cf94f556f2d932d68d4044dd4
7
+ data.tar.gz: 235f540925cc1ea12f1f2ef89df5cfacca213e56c4a0b39514f610a92e93e7de995e087842eb827936daa954802631ced0f5ca89ae22f7441c69763f10d7bc6b
checksums.yaml.gz.sig CHANGED
Binary file
@@ -35,12 +35,6 @@ graph TD
35
35
  Worker1 -.->|connects via IPC| Supervisor
36
36
  Worker2 -.->|connects via IPC| Supervisor
37
37
  WorkerN -.->|connects via IPC| Supervisor
38
-
39
- style Controller fill:#e1f5ff
40
- style Supervisor fill:#fff4e1
41
- style Worker1 fill:#e8f5e9
42
- style Worker2 fill:#e8f5e9
43
- style WorkerN fill:#e8f5e9
44
38
  ```
45
39
 
46
40
  **Important:** The supervisor process is itself just another process managed by the root controller. If the supervisor crashes, the controller will restart it, and all worker processes will automatically reconnect to the new supervisor. This design ensures high availability and fault tolerance.
@@ -115,7 +109,13 @@ This will start:
115
109
 
116
110
  ### Adding Health Monitors
117
111
 
118
- You can add monitors to detect and respond to unhealthy conditions. For example, to add a memory monitor:
112
+ You can add monitors to observe worker health and automatically respond to issues. Monitors are useful for:
113
+
114
+ - **Memory leak detection**: Automatically restart workers consuming excessive memory.
115
+ - **Performance monitoring**: Track CPU and memory usage trends.
116
+ - **Capacity planning**: Understand resource requirements.
117
+
118
+ For example, to add monitoring:
119
119
 
120
120
  ```ruby
121
121
  service "supervisor" do
@@ -123,17 +123,22 @@ service "supervisor" do
123
123
 
124
124
  monitors do
125
125
  [
126
- # Restart workers that exceed 500MB of memory:
126
+ # Log process metrics for observability:
127
+ Async::Container::Supervisor::ProcessMonitor.new(
128
+ interval: 60
129
+ ),
130
+
131
+ # Restart workers exceeding memory limits:
127
132
  Async::Container::Supervisor::MemoryMonitor.new(
128
- interval: 10, # Check every 10 seconds
129
- limit: 1024 * 1024 * 500 # 500MB limit
133
+ interval: 10,
134
+ maximum_size_limit: 1024 * 1024 * 500 # 500MB limit per process
130
135
  )
131
136
  ]
132
137
  end
133
138
  end
134
139
  ```
135
140
 
136
- The {ruby Async::Container::Supervisor::MemoryMonitor} will periodically check worker memory usage and restart any workers that exceed the configured limit.
141
+ See the {ruby Async::Container::Supervisor::MemoryMonitor Memory Monitor} and {ruby Async::Container::Supervisor::ProcessMonitor Process Monitor} guides for detailed configuration options and best practices.
137
142
 
138
143
  ### Collecting Diagnostics
139
144
 
data/context/index.yaml CHANGED
@@ -10,3 +10,11 @@ files:
10
10
  title: Getting Started
11
11
  description: This guide explains how to get started with `async-container-supervisor`
12
12
  to supervise and monitor worker processes in your Ruby applications.
13
+ - path: memory-monitor.md
14
+ title: Memory Monitor
15
+ description: This guide explains how to use the <code class="language-ruby">Async::Container::Supervisor::MemoryMonitor</code>
16
+ to detect and restart workers that exceed memory limits or develop memory leaks.
17
+ - path: process-monitor.md
18
+ title: Process Monitor
19
+ description: This guide explains how to use the <code class="language-ruby">Async::Container::Supervisor::ProcessMonitor</code>
20
+ to log CPU and memory metrics for your worker processes.
@@ -0,0 +1,129 @@
1
+ # Memory Monitor
2
+
3
+ This guide explains how to use the {ruby Async::Container::Supervisor::MemoryMonitor} to detect and restart workers that exceed memory limits or develop memory leaks.
4
+
5
+ ## Overview
6
+
7
+ Long-running worker processes often accumulate memory over time, either through legitimate growth or memory leaks. Without intervention, workers can consume all available system memory, causing performance degradation or system crashes. The `MemoryMonitor` solves this by automatically detecting and restarting problematic workers before they impact system stability.
8
+
9
+ Use the `MemoryMonitor` when you need:
10
+
11
+ - **Memory leak protection**: Automatically restart workers that continuously accumulate memory.
12
+ - **Resource limits**: Enforce maximum memory usage per worker.
13
+ - **System stability**: Prevent runaway processes from exhausting system memory.
14
+ - **Leak diagnosis**: Capture memory samples when leaks are detected for debugging.
15
+
16
+ The monitor uses the `memory-leak` gem to track process memory usage over time, detecting abnormal growth patterns that indicate leaks.
17
+
18
+ ## Usage
19
+
20
+ Add a memory monitor to your supervisor service to automatically restart workers that exceed 500MB:
21
+
22
+ ```ruby
23
+ service "supervisor" do
24
+ include Async::Container::Supervisor::Environment
25
+
26
+ monitors do
27
+ [
28
+ Async::Container::Supervisor::MemoryMonitor.new(
29
+ # Check worker memory every 10 seconds:
30
+ interval: 10,
31
+
32
+ # Restart workers exceeding 500MB:
33
+ maximum_size_limit: 1024 * 1024 * 500
34
+ )
35
+ ]
36
+ end
37
+ end
38
+ ```
39
+
40
+ When a worker exceeds the limit:
41
+ 1. The monitor logs the leak detection.
42
+ 2. Optionally captures a memory sample for debugging.
43
+ 3. Sends `SIGINT` to gracefully shut down the worker.
44
+ 4. The container automatically spawns a replacement worker.
45
+
46
+ ## Configuration Options
47
+
48
+ The `MemoryMonitor` accepts the following options:
49
+
50
+ ### `interval`
51
+
52
+ The interval (in seconds) at which to check for memory leaks. Default: `10` seconds.
53
+
54
+ ```ruby
55
+ Async::Container::Supervisor::MemoryMonitor.new(interval: 30)
56
+ ```
57
+
58
+ ### `maximum_size_limit`
59
+
60
+ The maximum memory size (in bytes) per process. When a process exceeds this limit, it will be restarted.
61
+
62
+ ```ruby
63
+ # 500MB limit
64
+ Async::Container::Supervisor::MemoryMonitor.new(maximum_size_limit: 1024 * 1024 * 500)
65
+
66
+ # 1GB limit
67
+ Async::Container::Supervisor::MemoryMonitor.new(maximum_size_limit: 1024 * 1024 * 1024)
68
+ ```
69
+
70
+ ### `total_size_limit`
71
+
72
+ The total size limit (in bytes) for all monitored processes combined. If not specified, only per-process limits are enforced.
73
+
74
+ ```ruby
75
+ # Total limit of 2GB across all workers
76
+ Async::Container::Supervisor::MemoryMonitor.new(
77
+ maximum_size_limit: 1024 * 1024 * 500, # 500MB per process
78
+ total_size_limit: 1024 * 1024 * 1024 * 2 # 2GB total
79
+ )
80
+ ```
81
+
82
+ ### `memory_sample`
83
+
84
+ Options for capturing memory samples when a leak is detected. If `false` or `nil`, memory sampling is disabled.
85
+
86
+ Sampling is disabled by default; pass a hash such as `{duration: 30, timeout: 120}` to enable it.
87
+
88
+ ```ruby
89
+ # Customize memory sampling:
90
+ Async::Container::Supervisor::MemoryMonitor.new(
91
+ memory_sample: {
92
+ duration: 60, # Sample for 60 seconds
93
+ timeout: 180 # Timeout after 180 seconds
94
+ }
95
+ )
96
+
97
+ # Disable memory sampling:
98
+ Async::Container::Supervisor::MemoryMonitor.new(
99
+ memory_sample: nil
100
+ )
101
+ ```
102
+
103
+ ## Memory Leak Detection
104
+
105
+ When a memory leak is detected, the monitor will:
106
+
107
+ 1. Log the leak detection with process details.
108
+ 2. If `memory_sample` is configured, capture a memory sample from the worker.
109
+ 3. Send a `SIGINT` signal to gracefully restart the worker.
110
+ 4. The container will automatically restart the worker process.
111
+
112
+ ### Memory Sampling
113
+
114
+ When a memory leak is detected and `memory_sample` is configured, the monitor requests a lightweight memory sample from the worker. This sample:
115
+
116
+ - Tracks allocations during the sampling period.
117
+ - Forces a garbage collection.
118
+ - Returns a JSON report showing retained objects.
119
+
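Conceptually, such a sample can be built on Ruby's built-in allocation tracing from the `objspace` standard library; a simplified sketch of the idea, not the monitor's actual implementation:

```ruby
require "objspace"

# Track allocations during a sampling window:
ObjectSpace.trace_object_allocations_start
allocated = Array.new(1000) { |i| "sample-#{i}" }
ObjectSpace.trace_object_allocations_stop

# Retain a few objects and garbage-collect the rest:
retained = allocated.first(3)
allocated = nil
GC.start

# Report where the retained objects were allocated:
locations = retained.map do |object|
	"#{ObjectSpace.allocation_sourcefile(object)}:#{ObjectSpace.allocation_sourceline(object)}"
end
puts locations.inspect
```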
120
+ The report includes:
121
+ - `total_allocated`: Total allocated memory and object count.
122
+ - `total_retained`: Total retained memory and count after GC.
123
+ - `by_gem`: Breakdown by gem/library.
124
+ - `by_file`: Breakdown by source file.
125
+ - `by_location`: Breakdown by specific file:line locations.
126
+ - `by_class`: Breakdown by object class.
127
+ - `strings`: String allocation analysis.
128
+
129
+ This is much more efficient than a full heap dump using `ObjectSpace.dump_all`.
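As a sketch of how such a report might be consumed, the `by_gem` breakdown can be ranked to find likely leak sources (the report values below are invented for demonstration):

```ruby
require "json"

# A hypothetical report, shaped like the fields described above:
report = JSON.parse(<<~JSON)
	{
		"total_retained": {"memory": 262144, "count": 1200},
		"by_gem": {
			"json": {"memory": 4096, "count": 20},
			"activerecord": {"memory": 196608, "count": 900}
		}
	}
JSON

# Rank gems by retained memory after GC:
ranked = report["by_gem"].sort_by{|_name, stats| -stats["memory"]}

ranked.each do |name, stats|
	puts "#{name}: #{stats["memory"]} bytes retained across #{stats["count"]} objects"
end
```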
@@ -0,0 +1,91 @@
1
+ # Process Monitor
2
+
3
+ This guide explains how to use the {ruby Async::Container::Supervisor::ProcessMonitor} to log CPU and memory metrics for your worker processes.
4
+
5
+ ## Overview
6
+
7
+ Understanding how your workers consume resources over time is essential for performance optimization, capacity planning, and debugging. Without visibility into CPU and memory usage, you can't identify bottlenecks, plan infrastructure scaling, or diagnose production issues effectively.
8
+
9
+ The `ProcessMonitor` provides this observability by periodically capturing and logging comprehensive metrics for your entire application process tree.
10
+
11
+ Use the `ProcessMonitor` when you need:
12
+
13
+ - **Performance analysis**: Identify which workers consume the most CPU or memory.
14
+ - **Capacity planning**: Determine optimal worker counts and memory requirements.
15
+ - **Trend monitoring**: Track resource usage patterns over time.
16
+ - **Debugging assistance**: Correlate resource usage with application behavior.
17
+ - **Cost optimization**: Right-size infrastructure based on actual usage.
18
+
19
+ Unlike the {ruby Async::Container::Supervisor::MemoryMonitor}, which takes action when limits are exceeded, the `ProcessMonitor` is purely observational: it logs metrics without interfering with worker processes.
20
+
21
+ ## Usage
22
+
23
+ Add a process monitor to log resource usage every minute:
24
+
25
+ ```ruby
26
+ service "supervisor" do
27
+ include Async::Container::Supervisor::Environment
28
+
29
+ monitors do
30
+ [
31
+ # Log CPU and memory metrics for all processes:
32
+ Async::Container::Supervisor::ProcessMonitor.new(
33
+ interval: 60 # Capture metrics every minute
34
+ )
35
+ ]
36
+ end
37
+ end
38
+ ```
39
+
40
+ Each process's metrics are logged individually as structured fields, allowing you to easily search and filter in your logging platform:
41
+ - `general.process_id = 12347` - Find metrics for a specific process.
42
+ - `general.command = "worker-1"` - Find all metrics for worker processes.
43
+ - `general.processor_utilization > 50` - Find high CPU usage processes.
44
+ - `general.resident_size > 500000` - Find processes using more than 500MB.
45
+
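The same filters can be applied programmatically; a sketch over an illustrative metrics hash (process IDs and values are invented; `resident_size` is in kilobytes):

```ruby
# Hypothetical metrics, keyed by process ID:
metrics = {
	12345 => {command: "supervisor", processor_utilization: 2.5, resident_size: 80_000},
	12346 => {command: "worker-1", processor_utilization: 65.0, resident_size: 520_000},
	12347 => {command: "worker-2", processor_utilization: 12.0, resident_size: 310_000},
}

# Find high CPU usage processes:
hot = metrics.select{|_pid, general| general[:processor_utilization] > 50}.keys

# Find processes using more than 500MB (resident_size is in KB):
large = metrics.select{|_pid, general| general[:resident_size] > 500_000}.keys

puts hot.inspect
puts large.inspect
```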
46
+ ## Configuration Options
47
+
48
+ ### `interval`
49
+
50
+ The interval (in seconds) at which to capture and log process metrics. Default: `60` seconds.
51
+
52
+ ```ruby
53
+ # Log every 30 seconds
54
+ Async::Container::Supervisor::ProcessMonitor.new(interval: 30)
55
+
56
+ # Log every 5 minutes
57
+ Async::Container::Supervisor::ProcessMonitor.new(interval: 300)
58
+ ```
59
+
60
+ ## Captured Metrics
61
+
62
+ The `ProcessMonitor` captures the following metrics for each process:
63
+
64
+ ### Core Metrics
65
+
66
+ - **process_id**: Unique identifier for the process.
67
+ - **parent_process_id**: The parent process that spawned this one.
68
+ - **process_group_id**: Process group identifier.
69
+ - **command**: The command name.
70
+ - **processor_utilization**: CPU usage percentage.
71
+ - **resident_size**: Physical memory used (KB).
72
+ - **total_size**: Total memory space including shared memory (KB).
73
+ - **processor_time**: Total CPU time used (seconds).
74
+ - **elapsed_time**: How long the process has been running (seconds).
75
+
76
+ ### Detailed Memory Metrics
77
+
78
+ When available (OS-dependent), additional memory details are captured:
79
+
80
+ - **map_count**: Number of memory mappings (stacks, libraries, etc.).
81
+ - **proportional_size**: Memory usage accounting for shared memory (KB).
82
+ - **shared_clean_size**: Unmodified shared memory (KB).
83
+ - **shared_dirty_size**: Modified shared memory (KB).
84
+ - **private_clean_size**: Unmodified private memory (KB).
85
+ - **private_dirty_size**: Modified private memory (KB).
86
+ - **referenced_size**: Active page-cache (KB).
87
+ - **anonymous_size**: Memory not backed by files (KB).
88
+ - **swap_size**: Memory swapped to disk (KB).
89
+ - **proportional_swap_size**: Proportional swap usage (KB).
90
+ - **major_faults**: The number of page faults requiring I/O.
91
+ - **minor_faults**: The number of page faults that don't require I/O (e.g. CoW).
@@ -11,6 +11,9 @@ module Async
11
11
  module Supervisor
12
12
  # A client provides a mechanism to connect to a supervisor server in order to execute operations.
13
13
  class Client
14
+ # Initialize a new client.
15
+ #
16
+ # @parameter endpoint [IO::Endpoint] The supervisor endpoint to connect to.
14
17
  def initialize(endpoint: Supervisor.endpoint)
15
18
  @endpoint = endpoint
16
19
  end
@@ -4,12 +4,24 @@
4
4
  # Copyright, 2025, by Samuel Williams.
5
5
 
6
6
  require "json"
7
+ require "async"
7
8
 
8
9
  module Async
9
10
  module Container
10
11
  module Supervisor
12
+ # Represents a bidirectional communication channel between supervisor and worker.
13
+ #
14
+ # Handles message passing, call/response patterns, and connection lifecycle.
11
15
  class Connection
16
+ # Represents a remote procedure call over a connection.
17
+ #
18
+ # Manages the call lifecycle, response queueing, and completion signaling.
12
19
  class Call
20
+ # Initialize a new call.
21
+ #
22
+ # @parameter connection [Connection] The connection this call belongs to.
23
+ # @parameter id [Integer] The unique call identifier.
24
+ # @parameter message [Hash] The call message/parameters.
13
25
  def initialize(connection, id, message)
14
26
  @connection = connection
15
27
  @id = id
@@ -18,10 +30,16 @@ module Async
18
30
  @queue = ::Thread::Queue.new
19
31
  end
20
32
 
33
+ # Convert the call to a JSON-compatible hash.
34
+ #
35
+ # @returns [Hash] The message hash.
21
36
  def as_json(...)
22
37
  @message
23
38
  end
24
39
 
40
+ # Convert the call to a JSON string.
41
+ #
42
+ # @returns [String] The JSON representation.
25
43
  def to_json(...)
26
44
  as_json.to_json(...)
27
45
  end
@@ -32,14 +50,24 @@ module Async
32
50
  # @attribute [Hash] The message that initiated the call.
33
51
  attr :message
34
52
 
53
+ # Access a parameter from the call message.
54
+ #
55
+ # @parameter key [Symbol] The parameter name.
56
+ # @returns [Object] The parameter value.
35
57
  def [] key
36
58
  @message[key]
37
59
  end
38
60
 
61
+ # Push a response into the call's queue.
62
+ #
63
+ # @parameter response [Hash] The response data to push.
39
64
  def push(**response)
40
65
  @queue.push(response)
41
66
  end
42
67
 
68
+ # Pop a response from the call's queue.
69
+ #
70
+ # @returns [Hash, nil] The next response or nil if queue is closed.
43
71
  def pop(...)
44
72
  @queue.pop(...)
45
73
  end
@@ -49,12 +77,20 @@ module Async
49
77
  @queue.close
50
78
  end
51
79
 
52
- def each(&block)
53
- while response = self.pop
80
+ # Iterate over all responses from the call.
81
+ #
82
+ # @parameter timeout [Numeric, nil] How long to wait for each response, or nil to wait indefinitely.
+ # @yields {|response| ...} Each response from the queue.
83
+ def each(timeout: nil, &block)
84
+ while response = self.pop(timeout: timeout)
54
85
  yield response
55
86
  end
56
87
  end
57
88
 
89
+ # Finish the call with a final response.
90
+ #
91
+ # Closes the response queue after pushing the final response.
92
+ #
93
+ # @parameter response [Hash] The final response data.
58
94
  def finish(**response)
59
95
  # If the remote end has already closed the connection, we don't need to send a finished message:
60
96
  unless @queue.closed?
@@ -63,10 +99,16 @@ module Async
63
99
  end
64
100
  end
65
101
 
102
+ # Finish the call with a failure response.
103
+ #
104
+ # @parameter response [Hash] The error response data.
66
105
  def fail(**response)
67
106
  self.finish(failed: true, **response)
68
107
  end
69
108
 
109
+ # Check if the call's queue is closed.
110
+ #
111
+ # @returns [Boolean] True if the queue is closed.
70
112
  def closed?
71
113
  @queue.closed?
72
114
  end
@@ -74,7 +116,8 @@ module Async
74
116
  # Forward this call to another connection, proxying all responses back.
75
117
  #
76
118
  # This provides true streaming forwarding - intermediate responses flow through
77
- # in real-time rather than being buffered.
119
+ # in real-time rather than being buffered. The forwarding runs asynchronously
120
+ # to avoid blocking the dispatcher.
78
121
  #
79
122
  # @parameter target_connection [Connection] The connection to forward the call to.
80
123
  # @parameter operation [Hash] The operation request to forward (must include :do key).
@@ -92,6 +135,15 @@ module Async
92
135
  end
93
136
  end
94
137
 
138
+ # Dispatch a call to a target handler.
139
+ #
140
+ # Creates a call, dispatches it to the target, and streams responses back
141
+ # through the connection.
142
+ #
143
+ # @parameter connection [Connection] The connection to dispatch on.
144
+ # @parameter target [Dispatchable] The target handler.
145
+ # @parameter id [Integer] The call identifier.
146
+ # @parameter message [Hash] The call message.
95
147
  def self.dispatch(connection, target, id, message)
96
148
  Async do
97
149
  call = self.new(connection, id, message)
@@ -103,16 +155,27 @@ module Async
103
155
  connection.write(id: id, **response)
104
156
  end
105
157
  ensure
106
- # If the queue is closed, we don't need to send a finished message.
158
+ # Ensure the call is removed from the connection's calls hash, otherwise it will leak:
159
+ connection.calls.delete(id)
160
+
161
+ # If the queue is closed, we don't need to send a finished message:
107
162
  unless call.closed?
108
- connection.write(id: id, finished: true)
163
+ # If the above write failed, this is likely to fail too, and we can safely ignore it.
164
+ connection.write(id: id, finished: true) rescue nil
109
165
  end
110
-
111
- connection.calls.delete(id)
112
166
  end
113
167
  end
114
168
 
115
- def self.call(connection, **message, &block)
169
+ # Make a call on a connection and wait for responses.
170
+ #
171
+ # If a block is provided, yields each response. Otherwise, buffers intermediate
172
+ # responses and returns the final response.
173
+ #
174
+ # @parameter connection [Connection] The connection to call on.
175
+ # @parameter message [Hash] The call message/parameters.
176
+ # @yields {|response| ...} Each intermediate response if block given.
177
+ # @returns [Hash, Array] The final response or array of intermediate responses.
178
+ def self.call(connection, timeout: nil, **message, &block)
116
179
  id = connection.next_id
117
180
  call = self.new(connection, id, message)
118
181
 
@@ -121,11 +184,11 @@ module Async
121
184
  connection.write(id: id, **message)
122
185
 
123
186
  if block_given?
124
- call.each(&block)
187
+ call.each(timeout: timeout, &block)
125
188
  else
126
189
  intermediate = nil
127
190
 
128
- while response = call.pop
191
+ while response = call.pop(timeout: timeout)
129
192
  if response.delete(:finished)
130
193
  if intermediate
131
194
  if response.any?
@@ -149,6 +212,11 @@ module Async
149
212
  end
150
213
  end
151
214
 
215
+ # Initialize a new connection.
216
+ #
217
+ # @parameter stream [IO] The underlying IO stream.
218
+ # @parameter id [Integer] The starting call ID (default: 0).
219
+ # @parameter state [Hash] Initial connection state.
152
220
  def initialize(stream, id = 0, **state)
153
221
  @stream = stream
154
222
  @id = id
@@ -164,42 +232,49 @@ module Async
164
232
  # @attribute [Hash(Symbol, Object)] State associated with this connection, for example the process ID, etc.
165
233
  attr_accessor :state
166
234
 
235
+ # Generate the next unique call ID.
236
+ #
237
+ # @returns [Integer] The next call identifier.
167
238
  def next_id
168
239
  @id += 2
169
240
  end
170
241
 
242
+ # Write a message to the connection stream.
243
+ #
244
+ # @parameter message [Hash] The message to write.
171
245
  def write(**message)
172
246
  @stream.write(JSON.dump(message) << "\n")
173
247
  @stream.flush
174
248
  end
175
249
 
176
- def call(timeout: nil, **message)
177
- id = next_id
178
- calls[id] = ::Thread::Queue.new
179
-
180
- write(id: id, **message)
181
-
182
- return calls[id].pop(timeout: timeout)
183
- ensure
184
- calls.delete(id)
185
- end
186
-
250
+ # Read a message from the connection stream.
251
+ #
252
+ # @returns [Hash, nil] The parsed message or nil if stream is closed.
187
253
  def read
188
254
  if line = @stream&.gets
189
255
  JSON.parse(line, symbolize_names: true)
190
256
  end
191
257
  end
192
258
 
259
+ # Iterate over all messages from the connection.
260
+ #
261
+ # @yields {|message| ...} Each message read from the stream.
193
262
  def each
194
263
  while message = self.read
195
264
  yield message
196
265
  end
197
266
  end
198
267
 
268
+ # Make a synchronous call and wait for a single response.
199
269
  def call(...)
200
270
  Call.call(self, ...)
201
271
  end
202
272
 
273
+ # Run the connection, processing incoming messages.
274
+ #
275
+ # Dispatches incoming calls to the target and routes responses to waiting calls.
276
+ #
277
+ # @parameter target [Dispatchable] The target to dispatch calls to.
203
278
  def run(target)
204
279
  self.each do |message|
205
280
  if id = message.delete(:id)
@@ -219,12 +294,20 @@ module Async
219
294
  end
220
295
  end
221
296
 
297
+ # Run the connection in a background task.
298
+ #
299
+ # @parameter target [Dispatchable] The target to dispatch calls to.
300
+ # @parameter parent [Async::Task] The parent task.
301
+ # @returns [Async::Task] The background reader task.
222
302
  def run_in_background(target, parent: Task.current)
223
303
  @reader ||= parent.async do
224
304
  self.run(target)
225
305
  end
226
306
  end
227
307
 
308
+ # Close the connection and clean up resources.
309
+ #
310
+ # Stops the background reader, closes the stream, and closes all pending calls.
228
311
  def close
229
312
  if @reader
230
313
  @reader.stop
@@ -9,7 +9,15 @@ require_relative "endpoint"
9
9
  module Async
10
10
  module Container
11
11
  module Supervisor
12
+ # A mixin for objects that can dispatch calls.
13
+ #
14
+ # Provides automatic method dispatch based on the call's `:do` parameter.
12
15
  module Dispatchable
16
+ # Dispatch a call to the appropriate method.
17
+ #
18
+ # Routes calls to methods named `do_#{operation}` based on the call's `:do` parameter.
19
+ #
20
+ # @parameter call [Connection::Call] The call to dispatch.
13
21
  def dispatch(call)
14
22
  method_name = "do_#{call.message[:do]}"
15
23
  self.public_send(method_name, call)
@@ -8,6 +8,10 @@ require "io/endpoint/unix_endpoint"
8
8
  module Async
9
9
  module Container
10
10
  module Supervisor
11
+ # Get the supervisor IPC endpoint.
12
+ #
13
+ # @parameter path [String] The path for the Unix socket (default: "supervisor.ipc").
14
+ # @returns [IO::Endpoint] The Unix socket endpoint.
11
15
  def self.endpoint(path = "supervisor.ipc")
12
16
  ::IO::Endpoint.unix(path)
13
17
  end
@@ -10,6 +10,9 @@ require_relative "service"
10
10
  module Async
11
11
  module Container
12
12
  module Supervisor
13
+ # An environment mixin for supervisor services.
14
+ #
15
+ # Provides configuration and setup for supervisor processes that monitor workers.
13
16
  module Environment
14
17
  # The service class to use for the supervisor.
15
18
  # @returns [Class]
@@ -40,10 +43,18 @@ module Async
40
43
  {restart: true, count: 1, health_check_timeout: 30}
41
44
  end
42
45
 
46
+ # Get the list of monitors to run in the supervisor.
47
+ #
48
+ # Override this method to provide custom monitors.
49
+ #
50
+ # @returns [Array] The list of monitor instances.
43
51
  def monitors
44
52
  []
45
53
  end
46
54
 
55
+ # Create the supervisor server instance.
56
+ #
57
+ # @returns [Server] The supervisor server.
47
58
  def make_server(endpoint)
48
59
  Server.new(endpoint: endpoint, monitors: self.monitors)
49
60
  end
@@ -0,0 +1,36 @@
1
+ # frozen_string_literal: true
2
+
3
+ module Async
4
+ module Container
5
+ module Supervisor
6
+ # A helper for running loops at aligned intervals.
7
+ module Loop
8
+ # A robust loop that executes a block at aligned intervals.
9
+ #
10
+ # The alignment is modulo the current clock in seconds.
11
+ #
12
+ # If an error occurs during the execution of the block, it is logged and the loop continues.
13
+ #
14
+ # @parameter interval [Integer] The interval in seconds between executions of the block.
15
+ def self.run(interval: 60, &block)
16
+ while true
17
+ # Compute the wait time to the next interval:
18
+ wait = interval - (Time.now.to_f % interval)
19
+ if wait.positive?
20
+ # Sleep until the next interval boundary:
21
+ sleep(wait)
22
+ end
23
+
24
+ begin
25
+ yield
26
+ rescue => error
27
+ Console.error(self, "Loop error:", error)
28
+ end
29
+ end
30
+ end
31
+ end
32
+
33
+ private_constant :Loop
34
+ end
35
+ end
36
+ end
@@ -6,18 +6,21 @@
6
6
  require "memory/leak/cluster"
7
7
  require "set"
8
8
 
9
+ require_relative "loop"
10
+
9
11
  module Async
10
12
  module Container
11
13
  module Supervisor
14
+ # Monitors worker memory usage and restarts workers that exceed limits.
15
+ #
16
+ # Uses the `memory` gem to track process memory and detect leaks.
12
17
  class MemoryMonitor
13
- MEMORY_SAMPLE = {duration: 60, timeout: 60+20}
14
-
15
18
  # Create a new memory monitor.
16
19
  #
17
20
  # @parameter interval [Integer] The interval at which to check for memory leaks.
18
21
  # @parameter total_size_limit [Integer] The total size limit of all processes, or nil for no limit.
19
22
  # @parameter options [Hash] Options to pass to the cluster when adding processes.
20
- def initialize(interval: 10, total_size_limit: nil, memory_sample: MEMORY_SAMPLE, **options)
23
+ def initialize(interval: 10, total_size_limit: nil, memory_sample: false, **options)
21
24
  @interval = interval
22
25
  @cluster = Memory::Leak::Cluster.new(total_size_limit: total_size_limit)
23
26
 
@@ -29,6 +32,9 @@ module Async
29
32
  @processes = Hash.new{|hash, key| hash[key] = Set.new.compare_by_identity}
30
33
  end
31
34
 
35
+ # @attribute [Memory::Leak::Cluster] The cluster of processes being monitored.
36
+ attr_reader :cluster
37
+
32
38
  # Add a process to the memory monitor. You may override this to control how processes are added to the cluster.
33
39
  #
34
40
  # @parameter process_id [Integer] The process ID to add.
@@ -82,7 +88,7 @@ module Async
82
88
 
83
89
  if @memory_sample
84
90
  Console.info(self, "Capturing memory sample...", child: {process_id: process_id}, memory_sample: @memory_sample)
85
-
91
+
86
92
  # We are tracking multiple connections to the same process:
87
93
  connections = @processes[process_id]
88
94
 
@@ -95,8 +101,14 @@ module Async
95
101
  end
96
102
 
97
103
  # Kill the process gently:
98
- Console.info(self, "Killing process!", child: {process_id: process_id})
99
- Process.kill(:INT, process_id)
104
+ begin
105
+ Console.info(self, "Killing process!", child: {process_id: process_id})
106
+ Process.kill(:INT, process_id)
107
+ rescue Errno::ESRCH
108
+ # No such process - he's dead Jim.
109
+ rescue => error
110
+ Console.warn(self, "Failed to kill process!", child: {process_id: process_id}, exception: error)
111
+ end
100
112
 
101
113
  true
102
114
  end
@@ -106,14 +118,17 @@ module Async
106
118
  # @returns [Async::Task] The task that is running the memory monitor.
107
119
  def run
108
120
  Async do
109
- while true
121
+ Loop.run(interval: @interval) do
110
122
  # This block must return true if the process was killed.
111
123
  @cluster.check! do |process_id, monitor|
112
124
  Console.error(self, "Memory leak detected!", child: {process_id: process_id}, monitor: monitor)
113
- memory_leak_detected(process_id, monitor)
125
+
126
+ begin
127
+ memory_leak_detected(process_id, monitor)
128
+ rescue => error
129
+ Console.error(self, "Failed to handle memory leak!", child: {process_id: process_id}, exception: error)
130
+ end
114
131
  end
115
-
116
- sleep(@interval)
117
132
  end
118
133
  end
119
134
  end
@@ -0,0 +1,90 @@
1
+ # frozen_string_literal: true
2
+
3
+ # Released under the MIT License.
4
+ # Copyright, 2025, by Samuel Williams.
5
+
6
+ require "process/metrics"
7
+ require_relative "loop"
10
+
11
+ module Async
12
+ module Container
13
+ module Supervisor
14
+ # Monitors process metrics and logs them periodically.
15
+ #
16
+ # Uses the `process-metrics` gem to capture CPU and memory metrics for a process tree.
17
+ # Unlike {MemoryMonitor}, this monitor captures metrics for the entire process tree
18
+ # by tracking the parent process ID (ppid), which is more efficient than tracking
19
+ # individual processes.
20
+ class ProcessMonitor
21
+ # Create a new process monitor.
22
+ #
23
+ # @parameter interval [Integer] The interval in seconds at which to log process metrics.
24
+ # @parameter ppid [Integer] The parent process ID to monitor. If nil, uses the current process to capture its children.
25
+ def initialize(interval: 60, ppid: nil)
26
+ @interval = interval
27
+ @ppid = ppid || Process.ppid
28
+ end
29
+
30
+ # @attribute [Integer] The parent process ID being monitored.
31
+ attr :ppid
32
+
33
+ # Register a connection with the process monitor.
34
+ #
35
+ # This is provided for consistency with {MemoryMonitor}, but since we monitor
36
+ # the entire process tree via ppid, we don't need to track individual connections.
37
+ #
38
+ # @parameter connection [Connection] The connection to register.
39
+ def register(connection)
40
+ Console.debug(self, "Connection registered.", connection: connection, state: connection.state)
41
+ end
42
+
43
+ # Remove a connection from the process monitor.
44
+ #
45
+ # This is provided for consistency with {MemoryMonitor}, but since we monitor
46
+ # the entire process tree via ppid, we don't need to track individual connections.
47
+ #
48
+ # @parameter connection [Connection] The connection to remove.
49
+ def remove(connection)
50
+ Console.debug(self, "Connection removed.", connection: connection, state: connection.state)
51
+ end
52
+
53
+ # Capture current process metrics for the entire process tree.
54
+ #
55
+ # @returns [Hash] A hash mapping process IDs to their metrics.
56
+ def metrics
57
+ Process::Metrics::General.capture(ppid: @ppid)
58
+ end
59
+
60
+ # Dump the current status of the process monitor.
61
+ #
62
+ # @parameter call [Connection::Call] The call to respond to.
63
+ def status(call)
64
+ metrics = self.metrics
65
+
66
+ call.push(process_monitor: {ppid: @ppid, metrics: metrics})
67
+ end
68
+
69
+ # Run the process monitor.
70
+ #
71
+ # Periodically captures and logs process metrics for the entire process tree.
72
+ #
73
+ # @returns [Async::Task] The task that is running the process monitor.
74
+ def run
75
+ Async do
76
+ Loop.run(interval: @interval) do
77
+ metrics = self.metrics
78
+
79
+ # Log each process individually for better searchability in log platforms:
80
+ metrics.each do |process_id, general|
81
+ Console.info(self, "Process metrics captured.", general: general)
82
+ end
83
+ end
84
+ end
85
+ end
86
+ end
87
+ end
88
+ end
89
+ end
90
+
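Taken together, the class above logs tree-wide metrics every `interval` seconds. A minimal configuration sketch follows, assuming a service environment that exposes a `monitors` hook (the hook name and the `service` DSL are assumptions; only `ProcessMonitor.new(interval:)` comes from the code above):

```ruby
# Hypothetical service configuration: attach a ProcessMonitor to the
# supervisor. The `monitors` hook and `service` DSL are assumptions;
# interval: 30 logs metrics twice a minute instead of the default 60s.
require "async/container/supervisor"

service "supervisor" do
	include Async::Container::Supervisor::Environment

	monitors do
		[Async::Container::Supervisor::ProcessMonitor.new(interval: 30)]
	end
end
```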
@@ -16,6 +16,10 @@ module Async
16
16
  #
17
17
  # There are various tasks that can be executed by the server, such as restarting the process group, and querying the status of the processes. The server is also responsible for managing the lifecycle of the monitors, which can be used to monitor the status of the connected workers.
18
18
  class Server
19
+ # Initialize a new supervisor server.
20
+ #
21
+ # @parameter monitors [Array] The monitors to run.
22
+ # @parameter endpoint [IO::Endpoint] The endpoint to listen on.
19
23
  def initialize(monitors: [], endpoint: Supervisor.endpoint)
20
24
  @monitors = monitors
21
25
  @endpoint = endpoint
@@ -28,8 +32,16 @@ module Async
28
32
 
29
33
  include Dispatchable
30
34
 
35
+ # Register a worker connection with the supervisor.
36
+ #
37
+ # Assigns a unique connection ID and notifies all monitors of the new connection.
38
+ #
39
+ # @parameter call [Connection::Call] The registration call.
40
+ # @parameter call[:state] [Hash] The worker state to merge (e.g. process_id).
31
41
  def do_register(call)
32
- call.connection.state.merge!(call.message[:state])
42
+ if state = call.message[:state]
43
+ call.connection.state.merge!(state)
44
+ end
33
45
 
34
46
  connection_id = SecureRandom.uuid
35
47
  call.connection.state[:connection_id] = connection_id
@@ -42,14 +54,18 @@ module Async
42
54
  Console.error(self, "Error while registering process!", monitor: monitor, exception: error)
43
55
  end
44
56
  ensure
45
- call.finish
57
+ call.finish(connection_id: connection_id)
46
58
  end
47
59
 
48
60
  # Forward an operation to a worker connection.
49
61
  #
62
+ # This allows clients to invoke operations on specific worker processes by
63
+ # providing a connection_id. The operation is proxied through to the worker
64
+ # and responses are streamed back to the client.
65
+ #
50
66
  # @parameter call [Connection::Call] The call to handle.
51
- # @parameter operation [Hash] The operation to forward, must include :do key.
52
- # @parameter connection_id [String] The connection ID to target.
67
+ # @parameter call[:operation] [Hash] The operation to forward, must include :do key.
68
+ # @parameter call[:connection_id] [String] The connection ID to target.
53
69
  def do_forward(call)
54
70
  operation = call[:operation]
55
71
  connection_id = call[:connection_id]
@@ -82,6 +98,12 @@ module Async
82
98
  ::Process.kill(signal, ::Process.ppid)
83
99
  end
84
100
 
101
+ # Query the status of the supervisor and all connected workers.
102
+ #
103
+ # Returns information about all registered connections and delegates to
104
+ # monitors to provide additional status information.
105
+ #
106
+ # @parameter call [Connection::Call] The status call.
85
107
  def do_status(call)
86
108
  connections = @connections.map do |connection_id, connection|
87
109
  {
@@ -98,6 +120,11 @@ module Async
98
120
  call.finish(connections: connections)
99
121
  end
100
122
 
123
+ # Remove a worker connection from the supervisor.
124
+ #
125
+ # Notifies all monitors and removes the connection from tracking.
126
+ #
127
+ # @parameter connection [Connection] The connection to remove.
101
128
  def remove(connection)
102
129
  if connection_id = connection.state[:connection_id]
103
130
  @connections.delete(connection_id)
@@ -110,6 +137,11 @@ module Async
110
137
  end
111
138
  end
112
139
 
140
+ # Run the supervisor server.
141
+ #
142
+ # Starts all monitors and accepts connections from workers.
143
+ #
144
+ # @parameter parent [Async::Task] The parent task to run under.
113
145
  def run(parent: Task.current)
114
146
  parent.async do |task|
115
147
  @monitors.each do |monitor|
@@ -10,6 +10,9 @@ require "io/endpoint/bound_endpoint"
10
10
  module Async
11
11
  module Container
12
12
  module Supervisor
13
+ # The supervisor service implementation.
14
+ #
15
+ # Manages the lifecycle of the supervisor server and its monitors.
13
16
  class Service < Async::Service::Generic
14
17
  # Initialize the supervisor using the given environment.
15
18
  # @parameter environment [Build::Environment]
@@ -32,10 +35,18 @@ module Async
32
35
  super
33
36
  end
34
37
 
38
+ # Get the name of the supervisor service.
39
+ #
40
+ # @returns [String] The service name.
35
41
  def name
36
42
  @evaluator.name
37
43
  end
38
44
 
45
+ # Set up the supervisor service in the container.
46
+ #
47
+ # Creates and runs the supervisor server with configured monitors.
48
+ #
49
+ # @parameter container [Async::Container::Generic] The container to set up in.
39
50
  def setup(container)
40
51
  container_options = @evaluator.container_options
41
52
  health_check_timeout = container_options[:health_check_timeout]
@@ -8,6 +8,9 @@ require "async/service/environment"
8
8
  module Async
9
9
  module Container
10
10
  module Supervisor
11
+ # An environment mixin for supervised worker services.
12
+ #
13
+ # Enables workers to connect to and be supervised by the supervisor.
11
14
  module Supervised
12
15
  # The IPC path to use for communication with the supervisor.
13
16
  # @returns [String]
@@ -21,6 +24,10 @@ module Async
21
24
  ::IO::Endpoint.unix(supervisor_ipc_path)
22
25
  end
23
26
 
27
+ # Create a supervised worker for the given instance.
28
+ #
29
+ # @parameter instance [Async::Container::Instance] The container instance.
30
+ # @returns [Worker] The worker client.
24
31
  def make_supervised_worker(instance)
25
32
  Worker.new(instance, endpoint: supervisor_endpoint)
26
33
  end
@@ -3,10 +3,13 @@
3
3
  # Released under the MIT License.
4
4
  # Copyright, 2025, by Samuel Williams.
5
5
 
6
+ # @namespace
6
7
  module Async
8
+ # @namespace
7
9
  module Container
10
+ # @namespace
8
11
  module Supervisor
9
- VERSION = "0.7.0"
12
+ VERSION = "0.9.0"
10
13
  end
11
14
  end
12
15
  end
@@ -13,42 +13,70 @@ module Async
13
13
  #
14
14
  # There are various tasks that can be executed by the worker, such as dumping memory, threads, and garbage collection profiles.
15
15
  class Worker < Client
16
+ # Run a worker with the given state.
17
+ #
18
+ # @parameter state [Hash] The worker state (e.g. process_id, instance info).
19
+ # @parameter endpoint [IO::Endpoint] The supervisor endpoint to connect to.
16
20
  def self.run(...)
17
21
  self.new(...).run
18
22
  end
19
23
 
20
- def initialize(state, endpoint: Supervisor.endpoint)
24
+ # Initialize a new worker.
25
+ #
26
+ # @parameter state [Hash] The worker state to register with the supervisor.
27
+ # @parameter endpoint [IO::Endpoint] The supervisor endpoint to connect to.
28
+ def initialize(state = nil, endpoint: Supervisor.endpoint)
29
+ super(endpoint: endpoint)
21
30
  @state = state
22
- @endpoint = endpoint
23
31
  end
24
32
 
25
33
  include Dispatchable
26
34
 
27
- private def dump(call)
35
+ private def dump(call, buffer: true)
28
36
  if path = call[:path]
29
37
  File.open(path, "w") do |file|
30
38
  yield file
31
39
  end
32
40
 
33
41
  call.finish(path: path)
34
- else
42
+ elsif buffer
35
43
  buffer = StringIO.new
36
44
  yield buffer
37
45
 
38
- call.finish(data: buffer.string)
46
+ if message = call[:log]
47
+ Console.info(self, message, data: buffer.string)
48
+ call.finish
49
+ else
50
+ call.finish(data: buffer.string)
51
+ end
52
+ else
53
+ call.fail(error: {message: "Buffered output not supported!"})
39
54
  end
40
55
  end
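The new `log:` branch above redirects buffered dump output to the console instead of returning it to the caller. A hedged usage sketch (the client-side call API is an assumption; the `do:`, `path:`, and `log:` message keys come from the code above):

```ruby
# Hypothetical: request a thread dump from a worker, but have the
# worker log the output (via the log: branch of dump above) rather
# than send it back over the connection:
connection.call(do: :thread_dump, log: "Thread dump:")

# Or write it to a file on the worker's host instead:
connection.call(do: :thread_dump, path: "/tmp/threads.txt")
```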
41
56
 
57
+ # Dump the current fiber scheduler hierarchy.
58
+ #
59
+ # Generates a hierarchical view of all running fibers and their relationships.
60
+ #
61
+ # @parameter call [Connection::Call] The call to respond to.
62
+ # @parameter call[:path] [String] Optional file path to save the dump.
42
63
  def do_scheduler_dump(call)
43
64
  dump(call) do |file|
44
65
  Fiber.scheduler.print_hierarchy(file)
45
66
  end
46
67
  end
47
68
 
69
+ # Dump the entire object space to a file.
70
+ #
71
+ # This is a heavyweight operation that dumps all objects in the heap.
72
+ # Consider using {do_memory_sample} for lighter weight memory leak detection.
73
+ #
74
+ # @parameter call [Connection::Call] The call to respond to.
75
+ # @parameter call[:path] [String] Optional file path to save the dump.
48
76
  def do_memory_dump(call)
49
77
  require "objspace"
50
78
 
51
- dump(call) do |file|
79
+ dump(call, buffer: false) do |file|
52
80
  ObjectSpace.dump_all(output: file)
53
81
  end
54
82
  end
@@ -59,8 +87,12 @@ module Async
59
87
  # retained objects allocated during the sampling period. Late-lifecycle
60
88
  # allocations that are retained are likely memory leaks.
61
89
  #
90
+ # The method samples allocations for the specified duration, forces a garbage
91
+ # collection, and returns a JSON report showing allocated vs retained memory
92
+ # broken down by gem, file, location, and class.
93
+ #
62
94
  # @parameter call [Connection::Call] The call to respond to.
63
- # @parameter duration [Numeric] The duration in seconds to sample for (default: 10).
95
+ # @parameter call[:duration] [Numeric] The duration in seconds to sample for.
64
96
  def do_memory_sample(call)
65
97
  require "memory"
66
98
 
@@ -82,15 +114,21 @@ module Async
82
114
  # Stop sampling
83
115
  sampler.stop
84
116
 
85
- Console.info(self, "Memory sampling completed, generating report...", sampler: sampler)
86
-
87
- # Generate a report focused on retained objects (likely leaks):
88
117
  report = sampler.report
89
- call.finish(report: report.as_json)
118
+
119
+ dump(call) do |file|
120
+ file.puts(report.to_s)
121
+ end
90
122
  ensure
91
123
  GC.start
92
124
  end
93
125
 
126
+ # Dump information about all running threads.
127
+ #
128
+ # Includes thread inspection and backtraces for debugging.
129
+ #
130
+ # @parameter call [Connection::Call] The call to respond to.
131
+ # @parameter call[:path] [String] Optional file path to save the dump.
94
132
  def do_thread_dump(call)
95
133
  dump(call) do |file|
96
134
  Thread.list.each do |thread|
@@ -100,11 +138,22 @@ module Async
100
138
  end
101
139
  end
102
140
 
141
+ # Start garbage collection profiling.
142
+ #
143
+ # Enables the GC profiler to track garbage collection performance.
144
+ #
145
+ # @parameter call [Connection::Call] The call to respond to.
103
146
  def do_garbage_profile_start(call)
104
147
  GC::Profiler.enable
105
148
  call.finish(started: true)
106
149
  end
107
150
 
151
+ # Stop garbage collection profiling and return results.
152
+ #
153
+ # Disables the GC profiler and returns collected profiling data.
154
+ #
155
+ # @parameter call [Connection::Call] The call to respond to.
156
+ # @parameter call[:path] [String] Optional file path to save the profile.
108
157
  def do_garbage_profile_stop(call)
109
158
  dump(connection, message) do |file|
110
159
  file.puts GC::Profiler.result
@@ -118,6 +167,7 @@ module Async
118
167
 
119
168
  # Register the worker with the supervisor:
120
169
  connection.call(do: :register, state: @state)
170
+ # We ignore the response (it contains the `connection_id`).
121
171
  end
122
172
  end
123
173
  end
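Putting the pieces together: a worker registers as shown above, the server assigns it a `connection_id`, and a client can then target that worker through the server's `do_forward`. A hedged sketch (the `Client` connect API and response shapes are assumptions; the `do:`, `connection_id:`, and `operation:` keys are taken from the diff):

```ruby
# Hypothetical: query the supervisor's status, pick a worker, and
# forward an operation to it. The operation hash must include a :do
# key, as required by do_forward.
client = Async::Container::Supervisor::Client.new
client.connect do |connection|
	status = connection.call(do: :status)
	connection_id = status[:connections].first[:connection_id]

	connection.call(do: :forward, connection_id: connection_id,
		operation: {do: :scheduler_dump, path: "/tmp/scheduler.txt"})
end
```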
@@ -10,16 +10,7 @@ require_relative "supervisor/worker"
10
10
  require_relative "supervisor/client"
11
11
 
12
12
  require_relative "supervisor/memory_monitor"
13
+ require_relative "supervisor/process_monitor"
13
14
 
14
15
  require_relative "supervisor/environment"
15
16
  require_relative "supervisor/supervised"
16
-
17
- # @namespace
18
- module Async
19
- # @namespace
20
- module Container
21
- # @namespace
22
- module Supervisor
23
- end
24
- end
25
- end
data/readme.md CHANGED
@@ -18,10 +18,26 @@ Please see the [project documentation](https://socketry.github.io/async-containe
18
18
 
19
19
  - [Getting Started](https://socketry.github.io/async-container-supervisor/guides/getting-started/index) - This guide explains how to get started with `async-container-supervisor` to supervise and monitor worker processes in your Ruby applications.
20
20
 
21
+ - [Memory Monitor](https://socketry.github.io/async-container-supervisor/guides/memory-monitor/index) - This guide explains how to use the <code class="language-ruby">Async::Container::Supervisor::MemoryMonitor</code> to detect and restart workers that exceed memory limits or develop memory leaks.
22
+
23
+ - [Process Monitor](https://socketry.github.io/async-container-supervisor/guides/process-monitor/index) - This guide explains how to use the <code class="language-ruby">Async::Container::Supervisor::ProcessMonitor</code> to log CPU and memory metrics for your worker processes.
24
+
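A hedged configuration sketch for the memory monitor guide above (`maximum_size_limit:` is the keyword named in the release notes; the `interval:` option and construction context are assumptions):

```ruby
# Hypothetical: restart workers that exceed ~512MiB of memory.
# maximum_size_limit: is the documented keyword; interval: is assumed.
monitor = Async::Container::Supervisor::MemoryMonitor.new(
	interval: 10, # how often to check, in seconds (assumed option)
	maximum_size_limit: 512 * 1024 * 1024
)
```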
21
25
  ## Releases
22
26
 
23
27
  Please see the [project releases](https://socketry.github.io/async-container-supervisor/releases/index) for all releases.
24
28
 
29
+ ### v0.9.0
30
+
31
+ - Better handling of write failures in `Connection::Call.dispatch`, ensuring we don't leak calls.
32
+ - Robust monitor loop handling - restart on failure, and align loop iterations.
33
+ - Disable memory sampler by default and use text output format.
34
+ - Introduce support for redirecting dump output to logs.
35
+
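The "align loop iterations" change can be illustrated with a self-contained sketch (the `Loop` implementation itself is not shown in this diff, so this is an interpretation, not the gem's code): each iteration is scheduled on a fixed time grid, so a slow iteration does not drift subsequent wake-ups.

```ruby
# Sketch of interval alignment: wake-up times land on a fixed grid
# relative to the start time, regardless of how long each iteration
# of work takes. This mirrors the "align loop iterations" note above.
def aligned_intervals(start, interval, count)
	(1..count).map {|i| start + (i * interval)}
end

p aligned_intervals(100.0, 60, 3) # => [160.0, 220.0, 280.0]
```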
36
+ ### v0.8.0
37
+
38
+ - Add `Async::Container::Supervisor::ProcessMonitor` for logging CPU and memory metrics periodically.
39
+ - Fix documentation to use correct `maximum_size_limit:` parameter name for `MemoryMonitor` (was incorrectly documented as `limit:`).
40
+
25
41
  ### v0.7.0
26
42
 
27
43
  - If a memory leak is detected, sample memory usage for 60 seconds before exiting.
data/releases.md CHANGED
@@ -1,5 +1,17 @@
1
1
  # Releases
2
2
 
3
+ ## v0.9.0
4
+
5
+ - Better handling of write failures in `Connection::Call.dispatch`, ensuring we don't leak calls.
6
+ - Robust monitor loop handling - restart on failure, and align loop iterations.
7
+ - Disable memory sampler by default and use text output format.
8
+ - Introduce support for redirecting dump output to logs.
9
+
10
+ ## v0.8.0
11
+
12
+ - Add `Async::Container::Supervisor::ProcessMonitor` for logging CPU and memory metrics periodically.
13
+ - Fix documentation to use correct `maximum_size_limit:` parameter name for `MemoryMonitor` (was incorrectly documented as `limit:`).
14
+
3
15
  ## v0.7.0
4
16
 
5
17
  - If a memory leak is detected, sample memory usage for 60 seconds before exiting.
data.tar.gz.sig CHANGED
@@ -1,3 +1,4 @@
1
- upÑFC#\�l�� �>� Ea$�#��$�/��R�!wC�sk��.�͓v�5�Ng�ݨ��Lw�\T2r���=Q�w,�*�N#�o斲�s;�E� �4*��9�S�]��MEqx)X�q�p��76�y�����L�|����F�aF;p0�0`QȰ(h�-SL%�pR�ϭ��u�`��>���bKRN��iIS/XQ�i���]4����9S���
2
- %���+p&h���?美cF�����[��Cۏ�����;��1�6w���؅_0</v'�W
3
- �9>pw���җ��^�Ykc���'i�,D�%`T}��rOA��Z]0�=��y;��5#�LbZYV��^���=3T����9%������;Ըt
1
+ �9��b�l=�<���(Cܝy�뼍���fIl���Q�km1������� .���jW>�����k����q% I:�;%��� Ot�*�ϻ�wW3Im��Z�!�_�� ��<u��~������}[��y�ۤG( �a'��+k�H:˒���%�)l��J����ؠS
2
+ H��-�*C��R6g&�Kk��mL�7���(
3
+ '�U\� v�=B�;��Iv~b�6�~x��Wˋ&�����bke��y*6����D0>�ת@k22dR4bL����QZ�p,�TƸ ��U&�.���2v�X�02���?1��ヌ_Dv:D��X̄S�h�
4
+ ����|ܺ�H�R��:�[-��"x��v}^7�lgo
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: async-container-supervisor
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.7.0
4
+ version: 0.9.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Samuel Williams
@@ -94,6 +94,20 @@ dependencies:
94
94
  - - "~>"
95
95
  - !ruby/object:Gem::Version
96
96
  version: '0.5'
97
+ - !ruby/object:Gem::Dependency
98
+ name: process-metrics
99
+ requirement: !ruby/object:Gem::Requirement
100
+ requirements:
101
+ - - ">="
102
+ - !ruby/object:Gem::Version
103
+ version: '0'
104
+ type: :runtime
105
+ prerelease: false
106
+ version_requirements: !ruby/object:Gem::Requirement
107
+ requirements:
108
+ - - ">="
109
+ - !ruby/object:Gem::Version
110
+ version: '0'
97
111
  executables: []
98
112
  extensions: []
99
113
  extra_rdoc_files: []
@@ -101,13 +115,17 @@ files:
101
115
  - bake/async/container/supervisor.rb
102
116
  - context/getting-started.md
103
117
  - context/index.yaml
118
+ - context/memory-monitor.md
119
+ - context/process-monitor.md
104
120
  - lib/async/container/supervisor.rb
105
121
  - lib/async/container/supervisor/client.rb
106
122
  - lib/async/container/supervisor/connection.rb
107
123
  - lib/async/container/supervisor/dispatchable.rb
108
124
  - lib/async/container/supervisor/endpoint.rb
109
125
  - lib/async/container/supervisor/environment.rb
126
+ - lib/async/container/supervisor/loop.rb
110
127
  - lib/async/container/supervisor/memory_monitor.rb
128
+ - lib/async/container/supervisor/process_monitor.rb
111
129
  - lib/async/container/supervisor/server.rb
112
130
  - lib/async/container/supervisor/service.rb
113
131
  - lib/async/container/supervisor/supervised.rb
@@ -136,7 +154,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
136
154
  - !ruby/object:Gem::Version
137
155
  version: '0'
138
156
  requirements: []
139
- rubygems_version: 3.7.2
157
+ rubygems_version: 3.6.9
140
158
  specification_version: 4
141
159
  summary: A supervisor for managing multiple container processes.
142
160
  test_files: []
metadata.gz.sig CHANGED
Binary file