taski 0.9.0 → 0.9.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 1ad85c90c371fc1ea2f61e1d7e9817c7a9cc3fa9f258be2bf1154ced5f4c0349
-   data.tar.gz: a56399d990f0a3ecfddff0f24b5a45c8319ea88e58299a3d3a22a2336295f5cc
+   metadata.gz: 6cf783b65df1fdbbb8c8f19d78467243ea1d054c1b47097cae24b11d22d30c56
+   data.tar.gz: f5a88d4f88a6babb6e121b93cd22066691563a539c722cd8a3e336258ebfe495
  SHA512:
-   metadata.gz: f6969545ce4924434c3e031dca9fff39a974dfd53ee73fb610979cdc3fd09842f50b8871efec23a79dd40d1985bb97c92b651744655a38a12edbdeb9c0f05daa
-   data.tar.gz: 94a9f76f2da179d4b202b788f3abf27531b8a1df4ccd186eb84a5c695dfcbca3c78f75f2350ba052dcb98b68da7427e60b0a493e67340c1d04440278b4c3ab7f
+   metadata.gz: ff7ccc6b57e2ed321a84406e2e8e913aac4ec5d9cf2e6f3d00f28cc9b22bbb6609147d500a3fca1dde7fe37bb921cd8ba00380c9b52962f71ea5e2331bef8e16
+   data.tar.gz: 5978f76dab28ef8cc3f57338a89996e9ba45fce7c897cf7739e7ada9076e5fc0006bb5663a1723c5492140fec8e14f5811100a8325e96dc56ebdf35684aeca0b
data/CHANGELOG.md CHANGED
@@ -7,6 +7,31 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
  ## [Unreleased]
 
+ ## [0.9.2] - 2026-02-16
+
+ ### Added
+ - Progress::Config API for declarative layout/theme configuration ([#180](https://github.com/ahogappa/taski/pull/180))
+ - Split Layout::Tree into Tree::Live (TTY) and Tree::Event (non-TTY) with `Tree.for` factory ([#181](https://github.com/ahogappa/taski/pull/181))
+
+ ### Fixed
+ - Add base64 as runtime dependency ([#182](https://github.com/ahogappa/taski/pull/182))
+ - Use done_count instead of completed_count in completion display ([#179](https://github.com/ahogappa/taski/pull/179))
+ - Pass skipped_count in Simple layout's render_final ([#179](https://github.com/ahogappa/taski/pull/179))
+ - Show most recently started tasks first in simple progress display ([#178](https://github.com/ahogappa/taski/pull/178))
+
+ ## [0.9.1] - 2026-02-16
+
+ ### Changed
+ - Replace raw arrays and hashes with FiberProtocol Data classes for typed protocol messages ([#175](https://github.com/ahogappa/taski/pull/175))
+ - Add AST-guided start_dep speculative parallel execution for improved performance ([#175](https://github.com/ahogappa/taski/pull/175))
+ - Add TaskProxy for lazy dependency resolution with unsafe proxy usage detection ([#175](https://github.com/ahogappa/taski/pull/175))
+ - Remove static-graph-based task scheduling from Executor in favor of Fiber pull model ([#176](https://github.com/ahogappa/taski/pull/176))
+ - Replace inline `Class.new(Taski::Task)` with named fixture classes in tests ([#174](https://github.com/ahogappa/taski/pull/174))
+ - Add custom export methods section to README ([#172](https://github.com/ahogappa/taski/pull/172))
+
+ ### Fixed
+ - Fix data race on `@next_thread_index` in enqueue/enqueue_clean ([#175](https://github.com/ahogappa/taski/pull/175))
+
  ## [0.9.0] - 2026-02-08
 
  ### Added
data/README.md CHANGED
@@ -15,6 +15,7 @@
  - **Exports API**: Simple value sharing between tasks
  - **Real-time Progress**: Visual feedback with parallel task progress display
  - **Fiber-Based Execution**: Lightweight Fiber-based dependency resolution for efficient parallel execution
+ - **Lazy Dependency Resolution**: Dependencies return lightweight proxies that defer resolution until the value is actually used, enabling better parallelism
 
  ## Quick Start
 
@@ -77,6 +78,46 @@ class Server < Taski::Task
  end
  ```
 
+ ### Custom Export Methods
+
+ By default, `exports` generates a reader that returns the instance variable (e.g., `exports :value` reads `@value`). You can override this by defining your own instance method with the same name:
+
+ **Fixed values** — no computation needed in `run`:
+
+ ```ruby
+ class Config < Taski::Task
+   exports :timeout
+
+   def timeout
+     30
+   end
+
+   def run; end
+ end
+
+ Config.timeout # => 30
+ ```
+
+ **Shared logic between `run` and `clean`** — the method works as both an export and a regular instance method:
+
+ ```ruby
+ class DatabaseSetup < Taski::Task
+   exports :connection
+
+   def connection
+     @connection ||= Database.connect
+   end
+
+   def run
+     connection.setup_schema
+   end
+
+   def clean
+     connection.close
+   end
+ end
+ ```
+
  ### Conditional Logic - Runtime Selection
 
  Use `if` statements to switch behavior based on environment:
@@ -230,6 +271,8 @@ RandomTask.value # => 99 (different value - fresh execution)
  DoubleConsumer.run # RandomTask runs once, both accesses get same value
  ```
 
+ When a task accesses a dependency (e.g., `SomeDep.value`), the result may be a lightweight proxy. The actual resolution is deferred until the value is used, allowing independent dependencies to execute in parallel transparently. This is automatic and requires no changes to your task code. Dependencies used in conditions or as arguments are automatically resolved synchronously for safety.
+
  ### Error Handling
 
  When a task fails, Taski wraps the error with task-specific context. Each task class automatically gets a `::Error` subclass for targeted rescue:
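The proxy behavior the README paragraph describes can be sketched in a few lines. This is an illustration only, not taski's actual `TaskProxy` implementation: the `LazyProxy` class and its `__resolve__` helper are hypothetical names. The sketch shows just the deferred-resolution idea — a proxy wraps a thunk and forwards every method call to the resolved value, resolving at most once.

```ruby
# Minimal lazy proxy: resolution is deferred until the first method call.
# Illustrative only; a real task proxy would also coordinate with a scheduler.
class LazyProxy < BasicObject
  def initialize(&thunk)
    @thunk = thunk
    @resolved = false
  end

  def __resolve__
    unless @resolved
      @value = @thunk.call # runs the expensive work exactly once
      @resolved = true
    end
    @value
  end

  def method_missing(name, *args, &block)
    __resolve__.public_send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    __resolve__.respond_to?(name, include_private)
  end
end

calls = 0
proxy = LazyProxy.new { calls += 1; "raw data" }
before = calls        # nothing resolved yet
up = proxy.upcase     # resolution happens on first method call
len = proxy.length    # cached: the thunk ran only once
```

The `BasicObject` superclass keeps the proxy's own method surface tiny, so nearly every call falls through `method_missing` to the real value.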
data/docs/GUIDE.md CHANGED
@@ -7,6 +7,7 @@ This guide provides detailed documentation beyond the basics covered in the READ
  - [Error Handling](#error-handling)
  - [Lifecycle Management](#lifecycle-management)
  - [Progress Display](#progress-display)
+ - [Lazy Dependency Resolution](#lazy-dependency-resolution)
  - [Debugging](#debugging)
 
  ---
@@ -354,6 +355,44 @@ ruby build.rb > build.log 2>&1
 
  ---
 
+ ## Lazy Dependency Resolution
+
+ ### How It Works
+
+ When a task accesses a dependency's exported value (e.g., `DepTask.value`), Taski may return a lightweight **proxy object** instead of the actual value. This proxy defers dependency resolution until you call a method on it, at which point it transparently resolves the real value and forwards the method call.
+
+ ```ruby
+ class FetchData < Taski::Task
+   exports :data
+   def run
+     @data = expensive_api_call
+   end
+ end
+
+ class ProcessData < Taski::Task
+   exports :result
+   def run
+     raw = FetchData.data    # May return a proxy (no blocking yet)
+     setup_environment       # Task continues while FetchData runs
+     @result = raw.transform # Proxy resolves here — blocks if needed
+   end
+ end
+ ```
+
+ From the user's perspective, the proxy is completely transparent — it behaves exactly like the real value.
+
+ ### Why It Matters
+
+ Proxy-based resolution enables better parallelism. A task can continue executing setup logic while its dependencies are still running, only blocking when the dependency value is actually used. This can significantly reduce total execution time when tasks have independent setup work before they need their dependencies.
+
+ ### Automatic Safety
+
+ Taski uses static analysis (Prism AST parsing) to determine when proxy resolution is safe. Dependencies used in positions where the proxy could cause issues — such as conditions (`if dep_value`), method arguments, or other contexts where truthiness or identity matters — are automatically resolved synchronously instead of returning a proxy.
+
+ You do not need to think about this in normal usage. The static analyzer examines your task's `run` method and only enables proxy resolution for dependency accesses that are confirmed safe (e.g., simple assignments like `x = Dep.value` followed by method calls on `x`).
+
+ ---
+
  ## Debugging
 
  ### Structured Logging
@@ -390,4 +429,4 @@ end
 
  **Static Analysis Requirements**
 
- Tasks must be defined in source files (not dynamically with `Class.new`) because static analysis uses Prism AST parsing which requires actual source files.
+ Tasks must be defined in source files (not dynamically with `Class.new`) because static analysis uses Prism AST parsing which requires actual source files. Static analysis is used for dependency tree visualization, circular dependency detection, and optimizing dependency resolution (determining when lazy proxy resolution is safe vs. when synchronous resolution is required).
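As a rough sketch of the Prism side of this (not taski's actual `StartDepAnalyzer`; assumes Ruby 3.3+, where `prism` ships as a default gem), an AST visitor can enumerate the constants a `run` method reads. A real analyzer would additionally classify each reference by position (assignment, condition, argument) to decide proxy safety; this sketch only collects them:

```ruby
require "prism"

# Collect every constant read in a source snippet by walking the Prism AST.
class ConstCollector < Prism::Visitor
  attr_reader :consts

  def initialize
    @consts = []
    super()
  end

  def visit_constant_read_node(node)
    @consts << node.name
    super # continue walking child nodes
  end
end

source = <<~RUBY
  def run
    raw = FetchData.data                      # assignment: proxy-safe position
    @result = raw.transform if Flag.enabled   # condition: resolve synchronously
  end
RUBY

collector = ConstCollector.new
Prism.parse(source).value.accept(collector)
collector.consts # contains :FetchData and :Flag
```

This also makes the source-file requirement above concrete: `Prism.parse` needs real source text, which `Class.new`-built tasks do not have.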
@@ -5,8 +5,12 @@ require "etc"
  module Taski
    module Execution
      # Orchestrates run (Fiber-based) and clean (direct) phases of task execution.
-     # Delegates to Scheduler (dependency order), WorkerPool (worker threads),
-     # and ExecutionFacade (observer notifications).
+     # Delegates to Scheduler (state tracking / advisory proposals),
+     # WorkerPool (worker threads), and ExecutionFacade (observer notifications).
+     #
+     # Task execution is driven by the Fiber pull model — tasks start only when
+     # requested via Fiber.yield FiberProtocol::NeedDep. Scheduler may propose tasks,
+     # but Executor/Wrapper can reject proposals not backed by actual Fiber requests.
      class Executor
        class << self
          def execute(root_task_class, registry:, execution_facade:)
@@ -43,8 +47,6 @@
 
        @worker_pool.start
 
-       pre_start_leaf_tasks
-
        enqueue_root_if_needed(root_task_class)
 
        run_main_loop(root_task_class)
@@ -83,10 +85,6 @@
 
      # Run phase
 
-     def pre_start_leaf_tasks
-       @scheduler.next_ready_tasks.each { |task_class| enqueue_for_execution(task_class) }
-     end
-
      def enqueue_root_if_needed(root_task_class)
        return unless @scheduler.pending?(root_task_class)
 
@@ -98,24 +96,30 @@
        break if @registry.abort_requested? && !@scheduler.running_tasks?
 
        event = @completion_queue.pop
-       handle_completion(event)
+       case event
+       in FiberProtocol::StartDepNotify => notify
+         @scheduler.mark_running(notify.task_class)
+       in FiberProtocol::TaskCompleted | FiberProtocol::TaskFailed
+         handle_completion(event)
+       else
+         raise "[BUG] unexpected completion queue event: #{event.inspect}"
+       end
      end
    end
 
    def handle_completion(event)
-     task_class = event[:task_class]
+     task_class = event.task_class
      Taski::Logging.debug(Taski::Logging::Events::EXECUTOR_TASK_COMPLETED, task: task_class.name)
 
-     if event[:error]
-       @scheduler.mark_failed(task_class)
-       log_error_detail(task_class, event[:error])
-       skip_pending_dependents(task_class)
-     else
+     case event
+     in FiberProtocol::TaskFailed => failed
+       @scheduler.mark_failed(failed.task_class)
+       log_error_detail(failed.task_class, failed.error)
+       skip_pending_dependents(failed.task_class)
+     in FiberProtocol::TaskCompleted
        @scheduler.mark_completed(task_class)
-     end
-
-     @scheduler.next_ready_tasks.each do |ready_class|
-       enqueue_for_execution(ready_class)
+     else
+       raise "[BUG] unexpected run completion event: #{event.inspect}"
      end
    end
 
@@ -192,13 +196,18 @@
    end
 
    def handle_clean_completion(event)
-     task_class = event[:task_class]
+     task_class = event.task_class
      Taski::Logging.debug(Taski::Logging::Events::EXECUTOR_CLEAN_COMPLETED, task: task_class.name)
-     if event[:error]
-       @scheduler.mark_clean_failed(task_class)
-     else
+
+     case event
+     in FiberProtocol::CleanFailed => failed
+       @scheduler.mark_clean_failed(failed.task_class)
+     in FiberProtocol::CleanCompleted
        @scheduler.mark_clean_completed(task_class)
+     else
+       raise "[BUG] unexpected clean completion event: #{event.inspect}"
      end
+
      enqueue_ready_clean_tasks
    end
@@ -0,0 +1,27 @@
+ # frozen_string_literal: true
+
+ module Taski
+   module Execution
+     module FiberProtocol
+       # === Fiber yields (task -> worker pool) ===
+       StartDep = Data.define(:task_class)
+       NeedDep = Data.define(:task_class, :method)
+
+       # === Fiber resume error signal (worker pool -> task) ===
+       DepError = Data.define(:error)
+
+       # === Completion queue events (worker pool -> executor) ===
+       StartDepNotify = Data.define(:task_class)
+       TaskCompleted = Data.define(:task_class, :wrapper)
+       TaskFailed = Data.define(:task_class, :wrapper, :error)
+       CleanCompleted = Data.define(:task_class, :wrapper)
+       CleanFailed = Data.define(:task_class, :wrapper, :error)
+
+       # === Worker thread commands (pool -> worker thread) ===
+       Execute = Data.define(:task_class, :wrapper)
+       ExecuteClean = Data.define(:task_class, :wrapper)
+       Resume = Data.define(:fiber, :value)
+       ResumeError = Data.define(:fiber, :error)
+     end
+   end
+ end
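The definitions above are standard Ruby 3.2+ `Data.define` value objects, which is what enables the exhaustive `case`/`in` dispatch seen in the Executor and WorkerPool diffs. A standalone sketch of the idea (two message types copied from the protocol; the `describe` handler is hypothetical):

```ruby
# Typed protocol messages as immutable value objects (Ruby 3.2+).
TaskCompleted = Data.define(:task_class, :wrapper)
TaskFailed    = Data.define(:task_class, :wrapper, :error)

# Data instances support pattern matching out of the box, so a consumer
# destructures by message type instead of peeking at hash keys or array slots.
def describe(event)
  case event
  in TaskFailed(task_class:, error:)
    "#{task_class} failed: #{error.message}"
  in TaskCompleted(task_class:)
    "#{task_class} completed"
  else
    raise "[BUG] unexpected event: #{event.inspect}"
  end
end

ok  = describe(TaskCompleted.new("Build", nil))
bad = describe(TaskFailed.new("Deploy", nil, RuntimeError.new("boom")))
```

Compared with the old `{task_class:, wrapper:, error:}` hashes, a forgotten field or misspelled key now fails the pattern match loudly rather than silently yielding `nil`.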
@@ -297,13 +297,13 @@ module Taski
      def notify_fiber_waiters_completed(waiters)
        waiters.each do |thread_queue, fiber, method|
          value = @task.public_send(method)
-         thread_queue.push([:resume, fiber, value])
+         thread_queue.push(FiberProtocol::Resume.new(fiber, value))
        end
      end
 
      def notify_fiber_waiters_failed(waiters, error)
        waiters.each do |thread_queue, fiber, _method|
-         thread_queue.push([:resume_error, fiber, error])
+         thread_queue.push(FiberProtocol::ResumeError.new(fiber, error))
        end
      end
    end
@@ -10,19 +10,22 @@ module Taski
 
  # WorkerPool manages N threads, each with its own command Queue.
  # Tasks are executed within Fibers on worker threads.
- # When a Fiber yields [:need_dep, dep_class, method], the worker
- # resolves the dependency via TaskWrapper#request_value:
  #
- #   - :completed → resume Fiber immediately with the value
- #   - :wait → park the Fiber (it will be resumed later via the thread's queue)
- #   - :start → start the dependency as a nested Fiber on the same thread
+ # Fiber protocol supports two yield types (FiberProtocol Data classes):
+ #   - StartDep(task_class) → non-blocking. Starts dep on another
+ #     thread and resumes the Fiber immediately. Used for speculative prestart.
+ #   - NeedDep(task_class, method) → blocking. Resolves dependency via
+ #     TaskWrapper#request_value:
+ #     - :completed → resume Fiber immediately with the value
+ #     - :wait → park the Fiber (it will be resumed later via the thread's queue)
+ #     - :start → start the dependency as a nested Fiber on the same thread
  #
- # Worker threads process these commands:
- #   - [:execute, task_class, wrapper] → create and drive a new Fiber
- #   - [:execute_clean, task_class, wrapper] → run clean directly (no Fiber)
- #   - [:resume, fiber, value] → resume a parked Fiber with a value
- #   - [:resume_error, fiber, error] → resume a parked Fiber with an error
- #   - :shutdown → exit the worker loop
+ # Worker threads process these commands (FiberProtocol Data classes):
+ #   - Execute(task_class, wrapper) → create and drive a new Fiber
+ #   - ExecuteClean(task_class, wrapper) → run clean directly (no Fiber)
+ #   - Resume(fiber, value) → resume a parked Fiber with a value
+ #   - ResumeError(fiber, error) → resume a parked Fiber with an error
+ #   - :shutdown → exit the worker loop
  class WorkerPool
    attr_reader :worker_count
 
@@ -38,6 +41,7 @@
      @fiber_contexts = {}
      @task_start_times_mutex = Mutex.new
      @task_start_times = {}
+     @enqueue_mutex = Mutex.new
    end
 
    def start
@@ -52,17 +56,21 @@
 
    # Round-robins across worker threads.
    def enqueue(task_class, wrapper)
-     queue = @thread_queues[@next_thread_index % @worker_count]
-     @next_thread_index += 1
-     queue.push([:execute, task_class, wrapper])
-     Taski::Logging.debug(Taski::Logging::Events::WORKER_POOL_ENQUEUED, task: task_class.name, thread_index: (@next_thread_index - 1) % @worker_count)
+     @enqueue_mutex.synchronize do
+       queue = @thread_queues[@next_thread_index % @worker_count]
+       @next_thread_index += 1
+       queue.push(FiberProtocol::Execute.new(task_class, wrapper))
+       Taski::Logging.debug(Taski::Logging::Events::WORKER_POOL_ENQUEUED, task: task_class.name, thread_index: (@next_thread_index - 1) % @worker_count)
+     end
    end
 
    # Clean tasks run directly without Fiber wrapping.
    def enqueue_clean(task_class, wrapper)
-     queue = @thread_queues[@next_thread_index % @worker_count]
-     @next_thread_index += 1
-     queue.push([:execute_clean, task_class, wrapper])
+     @enqueue_mutex.synchronize do
+       queue = @thread_queues[@next_thread_index % @worker_count]
+       @next_thread_index += 1
+       queue.push(FiberProtocol::ExecuteClean.new(task_class, wrapper))
+     end
    end
 
    def shutdown
@@ -77,19 +85,17 @@
      cmd = queue.pop
      break if cmd == :shutdown
 
-     case cmd[0]
-     when :execute
-       _, task_class, wrapper = cmd
-       drive_fiber(task_class, wrapper, queue)
-     when :resume
-       _, fiber, value = cmd
-       resume_fiber(fiber, value, queue)
-     when :resume_error
-       _, fiber, error = cmd
-       resume_fiber_with_error(fiber, error, queue)
-     when :execute_clean
-       _, task_class, wrapper = cmd
-       execute_clean_task(task_class, wrapper)
+     case cmd
+     in FiberProtocol::Execute => exec
+       drive_fiber(exec.task_class, exec.wrapper, queue)
+     in FiberProtocol::Resume => res
+       resume_fiber(res.fiber, res.value, queue)
+     in FiberProtocol::ResumeError => err
+       resume_fiber_with_error(err.fiber, err.error, queue)
+     in FiberProtocol::ExecuteClean => clean
+       execute_clean_task(clean.task_class, clean.wrapper)
+     else
+       raise "[BUG] unexpected worker command: #{cmd.inspect}"
      end
    end
  end
@@ -99,9 +105,16 @@
    def drive_fiber(task_class, wrapper, queue)
      return if @registry.abort_requested?
 
+     analysis = Taski::StaticAnalysis::StartDepAnalyzer.analyze(task_class)
      fiber = Fiber.new do
        setup_run_thread_locals
-       wrapper.task.run
+       Thread.current[:taski_start_deps] = analysis.start_deps
+       (analysis.start_deps | analysis.sync_deps).each { |dep_class| Fiber.yield(FiberProtocol::StartDep.new(dep_class)) }
+       run_result = wrapper.task.run
+       resolve_proxy_exports(wrapper)
+       run_result
+     ensure
+       Thread.current[:taski_start_deps] = nil
      end
 
      now = Time.now
@@ -120,12 +133,16 @@
      result = fiber.resume(resume_value)
 
      while fiber.alive?
-       if result.is_a?(Array) && result[0] == :need_dep
-         _, dep_class, method = result
-         handle_dependency(dep_class, method, fiber, task_class, wrapper, queue)
+       case result
+       in FiberProtocol::StartDep => start_dep
+         handle_start_dep(start_dep.task_class)
+         result = fiber.resume
+         next
+       in FiberProtocol::NeedDep => need_dep
+         handle_dependency(need_dep.task_class, need_dep.method, fiber, task_class, wrapper, queue)
          return # Fiber is either continuing or parked
        else
-         break
+         break # task.run returned a non-protocol value (normal completion)
        end
      end
 
@@ -142,12 +159,13 @@
      when :completed
        drive_fiber_loop(fiber, task_class, wrapper, queue, status[1])
      when :failed
-       drive_fiber_loop(fiber, task_class, wrapper, queue, [:_taski_error, status[1]])
+       drive_fiber_loop(fiber, task_class, wrapper, queue, FiberProtocol::DepError.new(status[1]))
      when :wait
        store_fiber_context(fiber, task_class, wrapper)
      when :start
       store_fiber_context(fiber, task_class, wrapper)
-      start_dependency(dep_class, dep_wrapper, queue)
+      # dep_wrapper is already RUNNING (set atomically by request_value)
+      drive_fiber(dep_class, dep_wrapper, queue)
      end
    end
 
@@ -155,29 +173,52 @@
    # Restores fiber context before resuming since teardown_thread_locals
    # cleared thread-local state when the fiber was parked.
    def resume_fiber(fiber, value, queue)
-     context = get_fiber_context(fiber)
-     return unless context
-
-     task_class, wrapper = context
-     setup_run_thread_locals
-     start_output_capture(task_class)
-     drive_fiber_loop(fiber, task_class, wrapper, queue, value)
+     resume_fiber_with_value(fiber, value, queue)
    end
 
    def resume_fiber_with_error(fiber, error, queue)
+     resume_fiber_with_value(fiber, FiberProtocol::DepError.new(error), queue)
+   end
+
+   def resume_fiber_with_value(fiber, resume_value, queue)
      context = get_fiber_context(fiber)
      return unless context
 
      task_class, wrapper = context
      setup_run_thread_locals
      start_output_capture(task_class)
-     drive_fiber_loop(fiber, task_class, wrapper, queue, [:_taski_error, error])
+     drive_fiber_loop(fiber, task_class, wrapper, queue, resume_value)
    end
 
-   # Start a dependency task as a new Fiber on this thread.
-   # The wrapper is already RUNNING (set atomically by request_value).
-   def start_dependency(dep_class, dep_wrapper, queue)
-     drive_fiber(dep_class, dep_wrapper, queue)
+   # Handle :start_dep → speculatively start a dependency on another thread.
+   # Non-blocking: the calling Fiber is resumed immediately after enqueueing.
+   # Uses mark_running to prevent duplicate starts.
+   def handle_start_dep(dep_class)
+     dep_wrapper = @registry.create_wrapper(dep_class, execution_facade: @execution_facade)
+     return unless dep_wrapper.mark_running
+
+     # Notify Executor so Scheduler can track the running state.
+     # Must be pushed before the execute command to guarantee ordering.
+     @completion_queue.push(FiberProtocol::StartDepNotify.new(dep_class))
+
+     @enqueue_mutex.synchronize do
+       target_queue = @thread_queues[@next_thread_index % @worker_count]
+       @next_thread_index += 1
+       target_queue.push(FiberProtocol::Execute.new(dep_class, dep_wrapper))
+     end
+   end
+
+   # Resolve any TaskProxy instances stored in exported ivars.
+   # After task.run, proxies assigned to @value etc. must be resolved
+   # while still inside the Fiber context so Fiber.yield works.
+   def resolve_proxy_exports(wrapper)
+     wrapper.task.class.exported_methods.each do |method|
+       ivar = :"@#{method}"
+       val = wrapper.task.instance_variable_get(ivar)
+       next unless val.respond_to?(:__taski_proxy_resolve__)
+       resolved = val.__taski_proxy_resolve__
+       wrapper.task.instance_variable_set(ivar, resolved)
+     end
    end
 
    def complete_task(task_class, wrapper, result)
@@ -185,7 +226,7 @@
      duration = task_duration_ms(task_class)
      Taski::Logging.info(Taski::Logging::Events::TASK_COMPLETED, task: task_class.name, duration_ms: duration)
      wrapper.mark_completed(result)
-     @completion_queue.push({task_class: task_class, wrapper: wrapper})
+     @completion_queue.push(FiberProtocol::TaskCompleted.new(task_class, wrapper))
      teardown_thread_locals
    end
 
@@ -195,7 +236,7 @@
      duration = task_duration_ms(task_class)
      Taski::Logging.error(Taski::Logging::Events::TASK_FAILED, task: task_class.name, duration_ms: duration)
      wrapper.mark_failed(error)
-     @completion_queue.push({task_class: task_class, wrapper: wrapper, error: error})
+     @completion_queue.push(FiberProtocol::TaskFailed.new(task_class, wrapper, error))
      teardown_thread_locals
    end
 
@@ -213,13 +254,13 @@
      duration = ((Time.now - clean_start) * 1000).round(1)
      Taski::Logging.debug(Taski::Logging::Events::TASK_CLEAN_COMPLETED, task: task_class.name, duration_ms: duration)
      wrapper.mark_clean_completed(result)
-     @completion_queue.push({task_class: task_class, wrapper: wrapper, clean: true})
+     @completion_queue.push(FiberProtocol::CleanCompleted.new(task_class, wrapper))
    rescue => e
      @registry.request_abort! if e.is_a?(Taski::TaskAbortException)
      duration = ((Time.now - clean_start) * 1000).round(1) if clean_start
      Taski::Logging.warn(Taski::Logging::Events::TASK_CLEAN_FAILED, task: task_class.name, duration_ms: duration)
      wrapper.mark_clean_failed(e)
-     @completion_queue.push({task_class: task_class, wrapper: wrapper, error: e, clean: true})
+     @completion_queue.push(FiberProtocol::CleanFailed.new(task_class, wrapper, e))
    ensure
      stop_output_capture
      teardown_thread_locals
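The Fiber pull model that drive_fiber implements can be reduced to a minimal standalone sketch: the task body yields a typed request and the driver answers it. Here `NeedDep` mirrors the protocol message above, while `RESULTS` and `resolve` are hypothetical stand-ins for completed wrappers and `TaskWrapper#request_value`:

```ruby
NeedDep = Data.define(:task_class, :method)

# Pretend dependency store standing in for completed task wrappers.
RESULTS = { "FetchData" => { data: [1, 2, 3] } }

def resolve(request)
  RESULTS.fetch(request.task_class).fetch(request.method)
end

# Task body: pulls its dependency by yielding a protocol message.
fiber = Fiber.new do
  raw = Fiber.yield(NeedDep.new("FetchData", :data))
  raw.sum
end

# Driver loop: keep resuming until the fiber finishes, answering each
# NeedDep request as it surfaces; any other value ends the loop.
result = fiber.resume
while fiber.alive?
  case result
  in NeedDep => req then result = fiber.resume(resolve(req))
  else break
  end
end
result # final return value of the task body
```

The real pool adds parking (`:wait`), nested fibers (`:start`), and error resumption via `DepError`, but the yield/resume contract is this same shape.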
@@ -0,0 +1,90 @@
+ # frozen_string_literal: true
+
+ module Taski
+   module Progress
+     # Configuration for progress display.
+     # Holds class references for Layout and Theme, and builds display instances lazily.
+     #
+     # @example
+     #   Taski.progress.layout = Taski::Progress::Layout::Tree
+     #   Taski.progress.theme = Taski::Progress::Theme::Detail
+     class Config
+       attr_reader :layout, :theme, :output
+
+       # @param on_invalidate [Proc, nil] Called when config changes (to clear external caches)
+       def initialize(&on_invalidate)
+         @layout = nil
+         @theme = nil
+         @output = nil
+         @cached_display = nil
+         @on_invalidate = on_invalidate
+       end
+
+       def layout=(klass)
+         validate_layout!(klass) if klass
+         @layout = klass
+         invalidate!
+       end
+
+       def theme=(klass)
+         validate_theme!(klass) if klass
+         @theme = klass
+         invalidate!
+       end
+
+       def output=(io)
+         @output = io
+         invalidate!
+       end
+
+       # Build a Layout instance from the current config.
+       # Returns a cached instance if config hasn't changed.
+       def build
+         @cached_display ||= build_display
+       end
+
+       # Reset all settings to defaults.
+       def reset
+         @layout = nil
+         @theme = nil
+         @output = nil
+         invalidate!
+       end
+
+       private
+
+       def invalidate!
+         @cached_display = nil
+         @on_invalidate&.call
+       end
+
+       def build_display
+         layout_ref = @layout || Layout::Simple
+         args = {}
+         args[:theme] = @theme.new if @theme
+         args[:output] = @output if @output
+
+         if layout_ref.respond_to?(:for)
+           layout_ref.for(**args)
+         else
+           layout_ref.new(**args)
+         end
+       end
+
+       def validate_layout!(klass)
+         # Accept a Class that inherits from Base, or a Module with .for factory
+         valid = (klass.is_a?(Class) && klass <= Layout::Base) ||
+           (klass.is_a?(Module) && klass.respond_to?(:for))
+         unless valid
+           raise ArgumentError, "layout must be a Layout::Base subclass or a module with .for, got #{klass.inspect}"
+         end
+       end
+
+       def validate_theme!(klass)
+         unless klass.is_a?(Class) && klass <= Theme::Base
+           raise ArgumentError, "theme must be a subclass of Taski::Progress::Theme::Base, got #{klass.inspect}"
+         end
+       end
+     end
+   end
+ end
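One design note worth a usage sketch: `Config` memoizes the built display and every setter funnels through `invalidate!`, so a stale instance can never be served after a setting changes. The same cache-and-invalidate shape in miniature (a hypothetical `TinyConfig`, independent of taski's Layout/Theme classes):

```ruby
# Memoized builder with setter-driven invalidation, mirroring
# Config#build / Config#invalidate! in miniature.
class TinyConfig
  attr_reader :theme, :builds

  def initialize
    @theme = nil
    @cached = nil
    @builds = 0
  end

  def theme=(value)
    @theme = value
    @cached = nil # invalidate: next build re-creates the product
  end

  def build
    @cached ||= begin
      @builds += 1
      { theme: @theme || :default, id: @builds }
    end
  end
end

cfg = TinyConfig.new
a = cfg.build       # builds once
b = cfg.build       # cached: same object, no rebuild
cfg.theme = :detail # invalidates the cache
c = cfg.build       # rebuilds with the new theme
```

The `||=` memoization keeps repeated `build` calls cheap, while routing mutation through setters guarantees the cache is dropped exactly when it becomes stale.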