taski 0.9.0 → 0.9.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +13 -0
- data/README.md +43 -0
- data/docs/GUIDE.md +40 -1
- data/lib/taski/execution/executor.rb +32 -23
- data/lib/taski/execution/fiber_protocol.rb +27 -0
- data/lib/taski/execution/task_wrapper.rb +2 -2
- data/lib/taski/execution/worker_pool.rb +95 -54
- data/lib/taski/static_analysis/start_dep_analyzer.rb +400 -0
- data/lib/taski/task.rb +10 -5
- data/lib/taski/task_proxy.rb +59 -0
- data/lib/taski/test_helper.rb +1 -1
- data/lib/taski/version.rb +1 -1
- data/lib/taski.rb +2 -0
- metadata +4 -1
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7c7ea0740de2f0d1fcf24a72fe5b32e75128fa4d793718b3ac12a2c041c37628
+  data.tar.gz: 8fb45c4d390a98709d934f4b59f14a613d689ed7538df630cb4328bc405b2803
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7cba0b749c9f5b5e76f990731017d18c550407215f851877129d2c6b1c9872fccfb3d0e088b01db19a92064f92592971a085ca9eecca3bcd8742e44b7374a9e5
+  data.tar.gz: 211b4d9ed455e19688bbb29acc7beb5043150703e144bb38f973b68f6bbf5fb3aa942c7fd2d77faa4332df243cb5f0d50e4a06813c7786b7c954f5a8eb674900
data/CHANGELOG.md
CHANGED

@@ -7,6 +7,19 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.9.1] - 2026-02-16
+
+### Changed
+- Replace raw arrays and hashes with FiberProtocol Data classes for typed protocol messages ([#175](https://github.com/ahogappa/taski/pull/175))
+- Add AST-guided start_dep speculative parallel execution for improved performance ([#175](https://github.com/ahogappa/taski/pull/175))
+- Add TaskProxy for lazy dependency resolution with unsafe proxy usage detection ([#175](https://github.com/ahogappa/taski/pull/175))
+- Remove static-graph-based task scheduling from Executor in favor of Fiber pull model ([#176](https://github.com/ahogappa/taski/pull/176))
+- Replace inline `Class.new(Taski::Task)` with named fixture classes in tests ([#174](https://github.com/ahogappa/taski/pull/174))
+- Add custom export methods section to README ([#172](https://github.com/ahogappa/taski/pull/172))
+
+### Fixed
+- Fix data race on `@next_thread_index` in enqueue/enqueue_clean ([#175](https://github.com/ahogappa/taski/pull/175))
+
 ## [0.9.0] - 2026-02-08
 
 ### Added
data/README.md
CHANGED

@@ -15,6 +15,7 @@
 - **Exports API**: Simple value sharing between tasks
 - **Real-time Progress**: Visual feedback with parallel task progress display
 - **Fiber-Based Execution**: Lightweight Fiber-based dependency resolution for efficient parallel execution
+- **Lazy Dependency Resolution**: Dependencies return lightweight proxies that defer resolution until the value is actually used, enabling better parallelism
 
 ## Quick Start
 

@@ -77,6 +78,46 @@ class Server < Taski::Task
 end
 ```
 
+### Custom Export Methods
+
+By default, `exports` generates a reader that returns the instance variable (e.g., `exports :value` reads `@value`). You can override this by defining your own instance method with the same name:
+
+**Fixed values** — no computation needed in `run`:
+
+```ruby
+class Config < Taski::Task
+  exports :timeout
+
+  def timeout
+    30
+  end
+
+  def run; end
+end
+
+Config.timeout # => 30
+```
+
+**Shared logic between `run` and `clean`** — the method works as both an export and a regular instance method:
+
+```ruby
+class DatabaseSetup < Taski::Task
+  exports :connection
+
+  def connection
+    @connection ||= Database.connect
+  end
+
+  def run
+    connection.setup_schema
+  end
+
+  def clean
+    connection.close
+  end
+end
+```
+
 ### Conditional Logic - Runtime Selection
 
 Use `if` statements to switch behavior based on environment:

@@ -230,6 +271,8 @@ RandomTask.value # => 99 (different value - fresh execution)
 DoubleConsumer.run # RandomTask runs once, both accesses get same value
 ```
 
+When a task accesses a dependency (e.g., `SomeDep.value`), the result may be a lightweight proxy. The actual resolution is deferred until the value is used, allowing independent dependencies to execute in parallel transparently. This is automatic and requires no changes to your task code. Dependencies used in conditions or as arguments are automatically resolved synchronously for safety.
+
 ### Error Handling
 
 When a task fails, Taski wraps the error with task-specific context. Each task class automatically gets a `::Error` subclass for targeted rescue:
data/docs/GUIDE.md
CHANGED

@@ -7,6 +7,7 @@ This guide provides detailed documentation beyond the basics covered in the READ
 - [Error Handling](#error-handling)
 - [Lifecycle Management](#lifecycle-management)
 - [Progress Display](#progress-display)
+- [Lazy Dependency Resolution](#lazy-dependency-resolution)
 - [Debugging](#debugging)
 
 ---

@@ -354,6 +355,44 @@ ruby build.rb > build.log 2>&1
 
 ---
 
+## Lazy Dependency Resolution
+
+### How It Works
+
+When a task accesses a dependency's exported value (e.g., `DepTask.value`), Taski may return a lightweight **proxy object** instead of the actual value. This proxy defers dependency resolution until you call a method on it, at which point it transparently resolves the real value and forwards the method call.
+
+```ruby
+class FetchData < Taski::Task
+  exports :data
+  def run
+    @data = expensive_api_call
+  end
+end
+
+class ProcessData < Taski::Task
+  exports :result
+  def run
+    raw = FetchData.data    # May return a proxy (no blocking yet)
+    setup_environment       # Task continues while FetchData runs
+    @result = raw.transform # Proxy resolves here — blocks if needed
+  end
+end
+```
+
+From the user's perspective, the proxy is completely transparent — it behaves exactly like the real value.
+
+### Why It Matters
+
+Proxy-based resolution enables better parallelism. A task can continue executing setup logic while its dependencies are still running, only blocking when the dependency value is actually used. This can significantly reduce total execution time when tasks have independent setup work before they need their dependencies.
+
+### Automatic Safety
+
+Taski uses static analysis (Prism AST parsing) to determine when proxy resolution is safe. Dependencies used in positions where the proxy could cause issues — such as conditions (`if dep_value`), method arguments, or other contexts where truthiness or identity matters — are automatically resolved synchronously instead of returning a proxy.
+
+You do not need to think about this in normal usage. The static analyzer examines your task's `run` method and only enables proxy resolution for dependency accesses that are confirmed safe (e.g., simple assignments like `x = Dep.value` followed by method calls on `x`).
+
+---
+
 ## Debugging
 
 ### Structured Logging

@@ -390,4 +429,4 @@ end
 
 **Static Analysis Requirements**
 
-Tasks must be defined in source files (not dynamically with `Class.new`) because static analysis uses Prism AST parsing which requires actual source files.
+Tasks must be defined in source files (not dynamically with `Class.new`) because static analysis uses Prism AST parsing which requires actual source files. Static analysis is used for dependency tree visualization, circular dependency detection, and optimizing dependency resolution (determining when lazy proxy resolution is safe vs. when synchronous resolution is required).
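The transparent proxy described in this guide section can be sketched in plain Ruby. This is a hypothetical `LazyProxy`, not Taski's actual TaskProxy: it only shows the general pattern of deferring a computation until the first method call and then forwarding everything to the real value.

```ruby
# Minimal sketch of a lazy value proxy (hypothetical; not Taski's TaskProxy).
# The resolver block runs only once, on the first method call.
class LazyProxy < BasicObject
  def initialize(&resolver)
    @resolver = resolver
    @resolved = false
  end

  # Resolve on demand, then forward the call to the real value.
  def method_missing(name, *args, **kwargs, &block)
    __resolve__.public_send(name, *args, **kwargs, &block)
  end

  def respond_to_missing?(name, include_private = false)
    __resolve__.respond_to?(name, include_private)
  end

  private

  def __resolve__
    unless @resolved
      @value = @resolver.call
      @resolved = true
    end
    @value
  end
end

calls = 0
proxy = LazyProxy.new { calls += 1; [3, 1, 2] }
# No resolution has happened yet: calls == 0
sorted = proxy.sort # first method call triggers resolution; calls == 1
```

Subclassing `BasicObject` keeps the proxy's own method table nearly empty, so almost every call falls through to `method_missing` and reaches the resolved value.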
data/lib/taski/execution/executor.rb
CHANGED

@@ -5,8 +5,12 @@ require "etc"
 module Taski
   module Execution
     # Orchestrates run (Fiber-based) and clean (direct) phases of task execution.
-    # Delegates to Scheduler (
-    # and ExecutionFacade (observer notifications).
+    # Delegates to Scheduler (state tracking / advisory proposals),
+    # WorkerPool (worker threads), and ExecutionFacade (observer notifications).
+    #
+    # Task execution is driven by the Fiber pull model — tasks start only when
+    # requested via Fiber.yield FiberProtocol::NeedDep. Scheduler may propose tasks,
+    # but Executor/Wrapper can reject proposals not backed by actual Fiber requests.
     class Executor
       class << self
         def execute(root_task_class, registry:, execution_facade:)

@@ -43,8 +47,6 @@
 
         @worker_pool.start
 
-        pre_start_leaf_tasks
-
         enqueue_root_if_needed(root_task_class)
 
         run_main_loop(root_task_class)

@@ -83,10 +85,6 @@
 
      # Run phase
 
-      def pre_start_leaf_tasks
-        @scheduler.next_ready_tasks.each { |task_class| enqueue_for_execution(task_class) }
-      end
-
      def enqueue_root_if_needed(root_task_class)
        return unless @scheduler.pending?(root_task_class)
 

@@ -98,24 +96,30 @@
          break if @registry.abort_requested? && !@scheduler.running_tasks?
 
          event = @completion_queue.pop
-
+          case event
+          in FiberProtocol::StartDepNotify => notify
+            @scheduler.mark_running(notify.task_class)
+          in FiberProtocol::TaskCompleted | FiberProtocol::TaskFailed
+            handle_completion(event)
+          else
+            raise "[BUG] unexpected completion queue event: #{event.inspect}"
+          end
        end
      end
 
      def handle_completion(event)
-        task_class = event
+        task_class = event.task_class
        Taski::Logging.debug(Taski::Logging::Events::EXECUTOR_TASK_COMPLETED, task: task_class.name)
 
-
-
-
-
-
+        case event
+        in FiberProtocol::TaskFailed => failed
+          @scheduler.mark_failed(failed.task_class)
+          log_error_detail(failed.task_class, failed.error)
+          skip_pending_dependents(failed.task_class)
+        in FiberProtocol::TaskCompleted
          @scheduler.mark_completed(task_class)
-
-
-        @scheduler.next_ready_tasks.each do |ready_class|
-          enqueue_for_execution(ready_class)
+        else
+          raise "[BUG] unexpected run completion event: #{event.inspect}"
        end
      end
 

@@ -192,13 +196,18 @@
      end
 
      def handle_clean_completion(event)
-        task_class = event
+        task_class = event.task_class
        Taski::Logging.debug(Taski::Logging::Events::EXECUTOR_CLEAN_COMPLETED, task: task_class.name)
-
-
-
+
+        case event
+        in FiberProtocol::CleanFailed => failed
+          @scheduler.mark_clean_failed(failed.task_class)
+        in FiberProtocol::CleanCompleted
          @scheduler.mark_clean_completed(task_class)
+        else
+          raise "[BUG] unexpected clean completion event: #{event.inspect}"
        end
+
        enqueue_ready_clean_tasks
      end
 
data/lib/taski/execution/fiber_protocol.rb
ADDED

@@ -0,0 +1,27 @@
+# frozen_string_literal: true
+
+module Taski
+  module Execution
+    module FiberProtocol
+      # === Fiber yields (task -> worker pool) ===
+      StartDep = Data.define(:task_class)
+      NeedDep = Data.define(:task_class, :method)
+
+      # === Fiber resume error signal (worker pool -> task) ===
+      DepError = Data.define(:error)
+
+      # === Completion queue events (worker pool -> executor) ===
+      StartDepNotify = Data.define(:task_class)
+      TaskCompleted = Data.define(:task_class, :wrapper)
+      TaskFailed = Data.define(:task_class, :wrapper, :error)
+      CleanCompleted = Data.define(:task_class, :wrapper)
+      CleanFailed = Data.define(:task_class, :wrapper, :error)
+
+      # === Worker thread commands (pool -> worker thread) ===
+      Execute = Data.define(:task_class, :wrapper)
+      ExecuteClean = Data.define(:task_class, :wrapper)
+      Resume = Data.define(:fiber, :value)
+      ResumeError = Data.define(:fiber, :error)
+    end
+  end
+end
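The new file replaces raw positional arrays with typed messages. `Data.define` (Ruby 3.2+) value objects work directly with `case`/`in` pattern matching, which is what the Executor and WorkerPool dispatch loops in this release rely on. A standalone sketch with illustrative names, not tied to Taski internals:

```ruby
# Sketch: typed protocol messages via Data.define, dispatched with case/in.
# Requires Ruby 3.2+. Protocol, describe, and the message names are illustrative.
module Protocol
  TaskCompleted = Data.define(:task_class, :wrapper)
  TaskFailed    = Data.define(:task_class, :wrapper, :error)
end

def describe(event)
  case event
  in Protocol::TaskFailed => failed          # class check plus binding
    "failed: #{failed.task_class} (#{failed.error.message})"
  in Protocol::TaskCompleted
    "completed: #{event.task_class}"
  else
    raise "[BUG] unexpected event: #{event.inspect}"
  end
end

ok  = Protocol::TaskCompleted.new("Build", :wrapper)
err = Protocol::TaskFailed.new("Deploy", :wrapper, RuntimeError.new("boom"))
describe(ok)  # => "completed: Build"
describe(err) # => "failed: Deploy (boom)"
```

Compared to `[:task_completed, task_class, wrapper]` tuples, a typo in a message name or a missing field now fails loudly (`NoMethodError` or the `else` branch) instead of silently destructuring wrong.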
data/lib/taski/execution/task_wrapper.rb
CHANGED

@@ -297,13 +297,13 @@
    def notify_fiber_waiters_completed(waiters)
      waiters.each do |thread_queue, fiber, method|
        value = @task.public_send(method)
-        thread_queue.push(
+        thread_queue.push(FiberProtocol::Resume.new(fiber, value))
      end
    end
 
    def notify_fiber_waiters_failed(waiters, error)
      waiters.each do |thread_queue, fiber, _method|
-        thread_queue.push(
+        thread_queue.push(FiberProtocol::ResumeError.new(fiber, error))
      end
    end
  end
data/lib/taski/execution/worker_pool.rb
CHANGED

@@ -10,19 +10,22 @@
 
    # WorkerPool manages N threads, each with its own command Queue.
    # Tasks are executed within Fibers on worker threads.
-    # When a Fiber yields [:need_dep, dep_class, method], the worker
-    # resolves the dependency via TaskWrapper#request_value:
    #
-    #
-    # -
-    #
+    # Fiber protocol supports two yield types (FiberProtocol Data classes):
+    # - StartDep(task_class) → non-blocking. Starts dep on another
+    #   thread and resumes the Fiber immediately. Used for speculative prestart.
+    # - NeedDep(task_class, method) → blocking. Resolves dependency via
+    #   TaskWrapper#request_value:
+    #   - :completed → resume Fiber immediately with the value
+    #   - :wait → park the Fiber (it will be resumed later via the thread's queue)
+    #   - :start → start the dependency as a nested Fiber on the same thread
    #
-    # Worker threads process these commands:
-    # -
-    # -
-    # -
-    # -
-    # - :shutdown
+    # Worker threads process these commands (FiberProtocol Data classes):
+    # - Execute(task_class, wrapper) → create and drive a new Fiber
+    # - ExecuteClean(task_class, wrapper) → run clean directly (no Fiber)
+    # - Resume(fiber, value) → resume a parked Fiber with a value
+    # - ResumeError(fiber, error) → resume a parked Fiber with an error
+    # - :shutdown → exit the worker loop
    class WorkerPool
      attr_reader :worker_count
 

@@ -38,6 +41,7 @@
        @fiber_contexts = {}
        @task_start_times_mutex = Mutex.new
        @task_start_times = {}
+        @enqueue_mutex = Mutex.new
      end
 
      def start

@@ -52,17 +56,21 @@
 
      # Round-robins across worker threads.
      def enqueue(task_class, wrapper)
-
-
-
-
+        @enqueue_mutex.synchronize do
+          queue = @thread_queues[@next_thread_index % @worker_count]
+          @next_thread_index += 1
+          queue.push(FiberProtocol::Execute.new(task_class, wrapper))
+          Taski::Logging.debug(Taski::Logging::Events::WORKER_POOL_ENQUEUED, task: task_class.name, thread_index: (@next_thread_index - 1) % @worker_count)
+        end
      end
 
      # Clean tasks run directly without Fiber wrapping.
      def enqueue_clean(task_class, wrapper)
-
-
-
+        @enqueue_mutex.synchronize do
+          queue = @thread_queues[@next_thread_index % @worker_count]
+          @next_thread_index += 1
+          queue.push(FiberProtocol::ExecuteClean.new(task_class, wrapper))
+        end
      end
 
      def shutdown

@@ -77,19 +85,17 @@
          cmd = queue.pop
          break if cmd == :shutdown
 
-          case cmd
-
-
-
-
-
-
-
-
-
-            _, task_class, wrapper = cmd
-            execute_clean_task(task_class, wrapper)
+          case cmd
+          in FiberProtocol::Execute => exec
+            drive_fiber(exec.task_class, exec.wrapper, queue)
+          in FiberProtocol::Resume => res
+            resume_fiber(res.fiber, res.value, queue)
+          in FiberProtocol::ResumeError => err
+            resume_fiber_with_error(err.fiber, err.error, queue)
+          in FiberProtocol::ExecuteClean => clean
+            execute_clean_task(clean.task_class, clean.wrapper)
+          else
+            raise "[BUG] unexpected worker command: #{cmd.inspect}"
          end
        end
      end

@@ -99,9 +105,16 @@
      def drive_fiber(task_class, wrapper, queue)
        return if @registry.abort_requested?
 
+        analysis = Taski::StaticAnalysis::StartDepAnalyzer.analyze(task_class)
        fiber = Fiber.new do
          setup_run_thread_locals
-
+          Thread.current[:taski_start_deps] = analysis.start_deps
+          (analysis.start_deps | analysis.sync_deps).each { |dep_class| Fiber.yield(FiberProtocol::StartDep.new(dep_class)) }
+          run_result = wrapper.task.run
+          resolve_proxy_exports(wrapper)
+          run_result
+        ensure
+          Thread.current[:taski_start_deps] = nil
        end
 
        now = Time.now

@@ -120,12 +133,16 @@
        result = fiber.resume(resume_value)
 
        while fiber.alive?
-
-
-
+          case result
+          in FiberProtocol::StartDep => start_dep
+            handle_start_dep(start_dep.task_class)
+            result = fiber.resume
+            next
+          in FiberProtocol::NeedDep => need_dep
+            handle_dependency(need_dep.task_class, need_dep.method, fiber, task_class, wrapper, queue)
            return # Fiber is either continuing or parked
          else
-            break
+            break # task.run returned a non-protocol value (normal completion)
          end
        end
 

@@ -142,12 +159,13 @@
        when :completed
          drive_fiber_loop(fiber, task_class, wrapper, queue, status[1])
        when :failed
-          drive_fiber_loop(fiber, task_class, wrapper, queue,
+          drive_fiber_loop(fiber, task_class, wrapper, queue, FiberProtocol::DepError.new(status[1]))
        when :wait
          store_fiber_context(fiber, task_class, wrapper)
        when :start
          store_fiber_context(fiber, task_class, wrapper)
-
+          # dep_wrapper is already RUNNING (set atomically by request_value)
+          drive_fiber(dep_class, dep_wrapper, queue)
        end
      end
 

@@ -155,29 +173,52 @@
      # Restores fiber context before resuming since teardown_thread_locals
      # cleared thread-local state when the fiber was parked.
      def resume_fiber(fiber, value, queue)
-
-        return unless context
-
-        task_class, wrapper = context
-        setup_run_thread_locals
-        start_output_capture(task_class)
-        drive_fiber_loop(fiber, task_class, wrapper, queue, value)
+        resume_fiber_with_value(fiber, value, queue)
      end
 
      def resume_fiber_with_error(fiber, error, queue)
+        resume_fiber_with_value(fiber, FiberProtocol::DepError.new(error), queue)
+      end
+
+      def resume_fiber_with_value(fiber, resume_value, queue)
        context = get_fiber_context(fiber)
        return unless context
 
        task_class, wrapper = context
        setup_run_thread_locals
        start_output_capture(task_class)
-        drive_fiber_loop(fiber, task_class, wrapper, queue,
+        drive_fiber_loop(fiber, task_class, wrapper, queue, resume_value)
      end
 
-      #
-      #
-
-
+      # Handle :start_dep — speculatively start a dependency on another thread.
+      # Non-blocking: the calling Fiber is resumed immediately after enqueueing.
+      # Uses mark_running to prevent duplicate starts.
+      def handle_start_dep(dep_class)
+        dep_wrapper = @registry.create_wrapper(dep_class, execution_facade: @execution_facade)
+        return unless dep_wrapper.mark_running
+
+        # Notify Executor so Scheduler can track the running state.
+        # Must be pushed before the execute command to guarantee ordering.
+        @completion_queue.push(FiberProtocol::StartDepNotify.new(dep_class))
+
+        @enqueue_mutex.synchronize do
+          target_queue = @thread_queues[@next_thread_index % @worker_count]
+          @next_thread_index += 1
+          target_queue.push(FiberProtocol::Execute.new(dep_class, dep_wrapper))
+        end
+      end
+
+      # Resolve any TaskProxy instances stored in exported ivars.
+      # After task.run, proxies assigned to @value etc. must be resolved
+      # while still inside the Fiber context so Fiber.yield works.
+      def resolve_proxy_exports(wrapper)
+        wrapper.task.class.exported_methods.each do |method|
+          ivar = :"@#{method}"
+          val = wrapper.task.instance_variable_get(ivar)
+          next unless val.respond_to?(:__taski_proxy_resolve__)
+          resolved = val.__taski_proxy_resolve__
+          wrapper.task.instance_variable_set(ivar, resolved)
+        end
      end
 
      def complete_task(task_class, wrapper, result)

@@ -185,7 +226,7 @@
        duration = task_duration_ms(task_class)
        Taski::Logging.info(Taski::Logging::Events::TASK_COMPLETED, task: task_class.name, duration_ms: duration)
        wrapper.mark_completed(result)
-        @completion_queue.push(
+        @completion_queue.push(FiberProtocol::TaskCompleted.new(task_class, wrapper))
        teardown_thread_locals
      end
 

@@ -195,7 +236,7 @@
        duration = task_duration_ms(task_class)
        Taski::Logging.error(Taski::Logging::Events::TASK_FAILED, task: task_class.name, duration_ms: duration)
        wrapper.mark_failed(error)
-        @completion_queue.push(
+        @completion_queue.push(FiberProtocol::TaskFailed.new(task_class, wrapper, error))
        teardown_thread_locals
      end
 

@@ -213,13 +254,13 @@
        duration = ((Time.now - clean_start) * 1000).round(1)
        Taski::Logging.debug(Taski::Logging::Events::TASK_CLEAN_COMPLETED, task: task_class.name, duration_ms: duration)
        wrapper.mark_clean_completed(result)
-        @completion_queue.push(
+        @completion_queue.push(FiberProtocol::CleanCompleted.new(task_class, wrapper))
      rescue => e
        @registry.request_abort! if e.is_a?(Taski::TaskAbortException)
        duration = ((Time.now - clean_start) * 1000).round(1) if clean_start
        Taski::Logging.warn(Taski::Logging::Events::TASK_CLEAN_FAILED, task: task_class.name, duration_ms: duration)
        wrapper.mark_clean_failed(e)
-        @completion_queue.push(
+        @completion_queue.push(FiberProtocol::CleanFailed.new(task_class, wrapper, e))
      ensure
        stop_output_capture
        teardown_thread_locals
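The 0.9.1 changelog's "Fixed" entry is visible in the hunks above: the read-and-increment of `@next_thread_index` now happens under `@enqueue_mutex`. A standalone sketch of that round-robin pattern (illustrative class and names, not WorkerPool's full implementation) shows why the lock matters: without it, two threads can read the same index and pile work onto one queue.

```ruby
# Sketch of mutex-guarded round-robin enqueueing (illustrative names).
class RoundRobin
  def initialize(worker_count)
    @queues = Array.new(worker_count) { Queue.new }
    @next_index = 0
    @mutex = Mutex.new
  end

  # Read-and-increment must be atomic: the index selects the queue and is
  # bumped in the same critical section, so distribution stays even.
  def enqueue(item)
    @mutex.synchronize do
      queue = @queues[@next_index % @queues.size]
      @next_index += 1
      queue.push(item)
    end
  end

  def sizes
    @queues.map(&:size)
  end
end

rr = RoundRobin.new(4)
threads = 8.times.map { |i| Thread.new { 25.times { rr.enqueue(i) } } }
threads.each(&:join)
rr.sizes # 200 items spread exactly evenly: [50, 50, 50, 50]
```

Under the lock the increments are strictly sequential, so 200 enqueues always land as 50 per queue; with an unguarded `@next_index += 1` the totals can skew and updates can even be lost.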
data/lib/taski/static_analysis/start_dep_analyzer.rb
ADDED

@@ -0,0 +1,400 @@
+# frozen_string_literal: true
+
+require "prism"
+
+module Taski
+  module StaticAnalysis
+    # Analyzes a task's run method AST to find dependencies that are safe
+    # to speculatively pre-start (start_dep). Uses a whitelist approach:
+    # only confirmed patterns are collected; unknown patterns cause the
+    # analyzer to stop (returning what was collected so far up to that point).
+    #
+    # Currently handles variable assignment patterns only:
+    #   a = Dep.value   (LocalVariableWriteNode)
+    #   @a = Dep.value  (InstanceVariableWriteNode)
+    #
+    # This is a performance optimization only — if analysis fails or returns
+    # empty, tasks still work correctly via lazy Fiber pull (need_dep).
+    class StartDepAnalyzer
+      DepInfo = Data.define(:klass, :method_name)
+      AnalysisResult = Data.define(:start_deps, :sync_deps)
+
+      # AST node types that are known safe (not dependencies, won't stop scanning)
+      SAFE_TYPES = Set[
+        Prism::LocalVariableReadNode, Prism::InstanceVariableReadNode,
+        Prism::ConstantReadNode, Prism::ConstantPathNode,
+        Prism::IntegerNode, Prism::FloatNode, Prism::StringNode,
+        Prism::SymbolNode, Prism::NilNode, Prism::TrueNode, Prism::FalseNode,
+        Prism::SelfNode
+      ].freeze
+
+      @cache = {}
+      @cache_mutex = Mutex.new
+
+      class << self
+        # Analyze a task class and return deps safe to prestart.
+        # Results are cached per task class.
+        # @param task_class [Class] The task class to analyze
+        # @return [Array<DepInfo>] Deduplicated list of safe dependencies
+        def analyze(task_class)
+          @cache_mutex.synchronize do
+            return @cache[task_class] if @cache.key?(task_class)
+          end
+
+          result = new.analyze(task_class)
+
+          @cache_mutex.synchronize do
+            @cache[task_class] ||= result
+          end
+        end
+
+        # Clear cache (for testing)
+        def clear_cache!
+          @cache_mutex.synchronize { @cache.clear }
+        end
+      end
+
+      EMPTY_RESULT = AnalysisResult.new(start_deps: Set.new.freeze, sync_deps: Set.new.freeze).freeze
+
+      def initialize
+        @deps = []
+        @seen_classes = Set.new
+      end
+
+      # Analyze a task class's run method and return safe-to-prestart deps
+      # and sync_dep_classes (deps whose proxy variables are used unsafely).
+      # @param task_class [Class] The task class to analyze
+      # @return [AnalysisResult]
+      def analyze(task_class)
+        @task_class = task_class
+        @exported_ivars = Set.new(task_class.exported_methods.map { |m| :"@#{m}" })
+        source_location = task_class.instance_method(:run).source_location
+        return EMPTY_RESULT unless source_location
+
+        file_path, _line = source_location
+        parse_result = Prism.parse_file(file_path)
+
+        run_node = find_run_method(parse_result.value, task_class)
+        return EMPTY_RESULT unless run_node&.body
+
+        scan_statements(run_node.body)
+        unsafe_classes = detect_unsafe_proxy_usage(run_node.body)
+        all_dep_classes = Set.new(@deps.map(&:klass))
+        start_deps = all_dep_classes - unsafe_classes
+        sync_deps = unsafe_classes
+        AnalysisResult.new(start_deps: start_deps, sync_deps: sync_deps)
+      rescue NameError
+        EMPTY_RESULT
+      end
+
+      private
+
+      # Find the def run node inside the target class
+      def find_run_method(program_node, task_class)
+        target_name = task_class.name
+        find_run_in_tree(program_node, [], target_name)
+      end
+
+      def find_run_in_tree(node, namespace_path, target_name)
+        case node
+        when Prism::ProgramNode
+          node.statements.body.each do |child|
+            result = find_run_in_tree(child, namespace_path, target_name)
+            return result if result
+          end
+        when Prism::ModuleNode
+          name = node.constant_path.slice
+          new_path = namespace_path + [name]
+          node.body&.body&.each do |child|
+            result = find_run_in_tree(child, new_path, target_name)
+            return result if result
+          end
+        when Prism::ClassNode
+          name = node.constant_path.slice
+          new_path = namespace_path + [name]
+          full_name = new_path.join("::")
+
+          node.body&.body&.each do |child|
+            if full_name == target_name
+              return child if child.is_a?(Prism::DefNode) && child.name == :run
+            else
+              result = find_run_in_tree(child, new_path, target_name)
+              return result if result
+            end
+          end
+        when Prism::StatementsNode
+          node.body.each do |child|
+            result = find_run_in_tree(child, namespace_path, target_name)
+            return result if result
+          end
+        end
+
+        nil
+      end
+
+      # Scan statements, collecting deps. Stops at the first unknown pattern.
+      def scan_statements(node)
+        return unless node.is_a?(Prism::StatementsNode)
+        node.body.each { |stmt| break unless try_match(stmt) }
+      end
+
+      # Match a statement against known patterns.
+      # Returns true to continue scanning, false to stop.
+      def try_match(stmt)
+        case stmt
+        when Prism::LocalVariableWriteNode, Prism::InstanceVariableWriteNode
+          check_dep_call(stmt.value)
+          true
+        when *SAFE_TYPES
+          true
+        else
+          false
+        end
+      end
+
+      # Check if a node is a Task dependency call (Constant.method) and collect it.
+      def check_dep_call(node)
+        return unless node.is_a?(Prism::CallNode)
+        return unless node.receiver
+
+        case node.receiver
+        when Prism::ConstantReadNode, Prism::ConstantPathNode
+          constant_name = node.receiver.slice
+          resolved = resolve_constant(constant_name)
+          if resolved.is_a?(Class) && defined?(Taski::Task) && resolved < Taski::Task
+            collect_dep(node)
+          end
+        end
+      end
+
+      # Collect a dependency, deduplicating by class
+      def collect_dep(call_node)
+        constant_name = call_node.receiver.slice
+        method_name = call_node.name
+        klass = resolve_constant(constant_name)
+        return unless klass
+
+        @deps << DepInfo.new(klass: klass, method_name: method_name) if @seen_classes.add?(klass)
+      end
+
+      # Phase 2: Detect proxy variables used in unsafe contexts.
+      # Returns a Set of dep classes whose proxy variables are used unsafely.
+      # A proxy variable is a local variable assigned from a Taski::Task dep call
+      # (e.g., `a = Dep.value`). If such a variable is later used in an unsafe
+      # context (as argument, condition, array element, etc.), the dep class is
+      # added to sync_dep_classes so it will be resolved synchronously.
+      def detect_unsafe_proxy_usage(body_node)
+        proxy_vars = build_proxy_var_map(body_node)
+
+        unsafe_classes = Set.new
+        scan_for_unsafe_usage(body_node, proxy_vars, unsafe_classes)
+        unsafe_classes
+      end
+
+      # Build mapping of { local_var_name => dep_class } from assignment statements
+      def build_proxy_var_map(body_node)
+        proxy_vars = {}
+        return proxy_vars unless body_node.is_a?(Prism::StatementsNode)
+
+        body_node.body.each do |stmt|
+          next unless stmt.is_a?(Prism::LocalVariableWriteNode)
+
+          dep_class = extract_dep_class(stmt.value)
|
+
proxy_vars[stmt.name] = dep_class if dep_class
|
|
204
|
+
end
|
|
205
|
+
proxy_vars
|
|
206
|
+
end
|
|
207
|
+
|
|
208
|
+
# Extract the dep class from a call node if it's a Taski::Task dep call
|
|
209
|
+
def extract_dep_class(node)
|
|
210
|
+
return nil unless node.is_a?(Prism::CallNode)
|
|
211
|
+
return nil unless node.receiver
|
|
212
|
+
|
|
213
|
+
case node.receiver
|
|
214
|
+
when Prism::ConstantReadNode, Prism::ConstantPathNode
|
|
215
|
+
constant_name = node.receiver.slice
|
|
216
|
+
resolved = resolve_constant(constant_name)
|
|
217
|
+
if resolved.is_a?(Class) && defined?(Taski::Task) && resolved < Taski::Task
|
|
218
|
+
resolved
|
|
219
|
+
end
|
|
220
|
+
end
|
|
221
|
+
end
|
|
222
|
+
|
|
223
|
+
# Recursively scan AST for unsafe proxy variable usage.
|
|
224
|
+
# Safe contexts: receiver of CallNode, string interpolation,
|
|
225
|
+
# RHS of local/ivar assignment. Everything else is unsafe.
|
|
226
|
+
def scan_for_unsafe_usage(node, proxy_vars, unsafe_classes) # rubocop:disable Metrics/CyclomaticComplexity,Metrics/PerceivedComplexity
|
|
227
|
+
case node
|
|
228
|
+
when Prism::StatementsNode
|
|
229
|
+
node.body.each { |child| scan_for_unsafe_usage(child, proxy_vars, unsafe_classes) }
|
|
230
|
+
|
|
231
|
+
when Prism::LocalVariableWriteNode
|
|
232
|
+
if (dep_class = proxy_dep_class(node.value, proxy_vars))
|
|
233
|
+
# Reassignment or direct dep call: track the new variable name
|
|
234
|
+
proxy_vars[node.name] = dep_class
|
|
235
|
+
else
|
|
236
|
+
scan_for_unsafe_usage(node.value, proxy_vars, unsafe_classes)
|
|
237
|
+
end
|
|
238
|
+
|
|
239
|
+
when Prism::InstanceVariableWriteNode
|
|
240
|
+
if (dep_class = proxy_dep_class(node.value, proxy_vars))
|
|
241
|
+
if @exported_ivars.include?(node.name)
|
|
242
|
+
# @exported = proxy → safe (resolve_proxy_exports handles it)
|
|
243
|
+
else
|
|
244
|
+
# @non_exported = proxy → track for unsafe usage detection
|
|
245
|
+
proxy_vars[node.name] = dep_class
|
|
246
|
+
end
|
|
247
|
+
else
|
|
248
|
+
scan_for_unsafe_usage(node.value, proxy_vars, unsafe_classes)
|
|
249
|
+
end
|
|
250
|
+
|
|
251
|
+
when Prism::CallNode
|
|
252
|
+
# Receiver: proxy.foo → safe (method_missing fires)
|
|
253
|
+
if proxy_var_read?(node.receiver, proxy_vars)
|
|
254
|
+
# safe — don't flag receiver
|
|
255
|
+
elsif node.receiver
|
|
256
|
+
scan_for_unsafe_usage(node.receiver, proxy_vars, unsafe_classes)
|
|
257
|
+
end
|
|
258
|
+
# Arguments: foo(proxy) → UNSAFE
|
|
259
|
+
node.arguments&.arguments&.each do |arg|
|
|
260
|
+
if proxy_var_read?(arg, proxy_vars)
|
|
261
|
+
unsafe_classes.add(proxy_vars[arg.name])
|
|
262
|
+
else
|
|
263
|
+
scan_for_unsafe_usage(arg, proxy_vars, unsafe_classes)
|
|
264
|
+
end
|
|
265
|
+
end
|
|
266
|
+
scan_for_unsafe_usage(node.block, proxy_vars, unsafe_classes) if node.block
|
|
267
|
+
|
|
268
|
+
when Prism::IfNode
|
|
269
|
+
check_predicate_unsafe(node.predicate, proxy_vars, unsafe_classes)
|
|
270
|
+
scan_for_unsafe_usage(node.statements, proxy_vars, unsafe_classes) if node.statements
|
|
271
|
+
scan_for_unsafe_usage(node.subsequent, proxy_vars, unsafe_classes) if node.subsequent
|
|
272
|
+
|
|
273
|
+
when Prism::UnlessNode
|
|
274
|
+
check_predicate_unsafe(node.predicate, proxy_vars, unsafe_classes)
|
|
275
|
+
scan_for_unsafe_usage(node.statements, proxy_vars, unsafe_classes) if node.statements
|
|
276
|
+
scan_for_unsafe_usage(node.else_clause, proxy_vars, unsafe_classes) if node.else_clause
|
|
277
|
+
|
|
278
|
+
when Prism::WhileNode, Prism::UntilNode
|
|
279
|
+
check_predicate_unsafe(node.predicate, proxy_vars, unsafe_classes)
|
|
280
|
+
scan_for_unsafe_usage(node.statements, proxy_vars, unsafe_classes) if node.statements
|
|
281
|
+
|
|
282
|
+
when Prism::InterpolatedStringNode
|
|
283
|
+
node.parts.each do |part|
|
|
284
|
+
next unless part.is_a?(Prism::EmbeddedStatementsNode)
|
|
285
|
+
|
|
286
|
+
if part.statements&.body&.size == 1 &&
|
|
287
|
+
proxy_var_read?(part.statements.body[0], proxy_vars)
|
|
288
|
+
# safe — string interpolation calls to_s
|
|
289
|
+
else
|
|
290
|
+
scan_for_unsafe_usage(part, proxy_vars, unsafe_classes)
|
|
291
|
+
end
|
|
292
|
+
end
|
|
293
|
+
|
|
294
|
+
when Prism::EmbeddedStatementsNode
|
|
295
|
+
scan_for_unsafe_usage(node.statements, proxy_vars, unsafe_classes) if node.statements
|
|
296
|
+
|
|
297
|
+
when Prism::ArrayNode
|
|
298
|
+
node.elements.each do |elem|
|
|
299
|
+
if proxy_var_read?(elem, proxy_vars)
|
|
300
|
+
unsafe_classes.add(proxy_vars[elem.name])
|
|
301
|
+
else
|
|
302
|
+
scan_for_unsafe_usage(elem, proxy_vars, unsafe_classes)
|
|
303
|
+
end
|
|
304
|
+
end
|
|
305
|
+
|
|
306
|
+
when Prism::ElseNode
|
|
307
|
+
scan_for_unsafe_usage(node.statements, proxy_vars, unsafe_classes) if node.statements
|
|
308
|
+
|
|
309
|
+
when Prism::BeginNode
|
|
310
|
+
scan_for_unsafe_usage(node.statements, proxy_vars, unsafe_classes) if node.statements
|
|
311
|
+
scan_for_unsafe_usage(node.rescue_clause, proxy_vars, unsafe_classes) if node.rescue_clause
|
|
312
|
+
scan_for_unsafe_usage(node.ensure_clause, proxy_vars, unsafe_classes) if node.ensure_clause
|
|
313
|
+
|
|
314
|
+
when Prism::RescueNode
|
|
315
|
+
scan_for_unsafe_usage(node.statements, proxy_vars, unsafe_classes) if node.statements
|
|
316
|
+
scan_for_unsafe_usage(node.subsequent, proxy_vars, unsafe_classes) if node.subsequent
|
|
317
|
+
|
|
318
|
+
when Prism::EnsureNode
|
|
319
|
+
scan_for_unsafe_usage(node.statements, proxy_vars, unsafe_classes) if node.statements
|
|
320
|
+
|
|
321
|
+
when Prism::ParenthesesNode
|
|
322
|
+
scan_for_unsafe_usage(node.body, proxy_vars, unsafe_classes) if node.body
|
|
323
|
+
|
|
324
|
+
when Prism::LocalVariableReadNode, Prism::InstanceVariableReadNode
|
|
325
|
+
# Bare proxy variable read in unknown context → UNSAFE
|
|
326
|
+
unsafe_classes.add(proxy_vars[node.name]) if proxy_vars.key?(node.name)
|
|
327
|
+
|
|
328
|
+
else
|
|
329
|
+
# For any unhandled node type, recurse into children (safety-first)
|
|
330
|
+
if node.respond_to?(:compact_child_nodes)
|
|
331
|
+
node.compact_child_nodes.each do |child|
|
|
332
|
+
scan_for_unsafe_usage(child, proxy_vars, unsafe_classes)
|
|
333
|
+
end
|
|
334
|
+
end
|
|
335
|
+
end
|
|
336
|
+
end
|
|
337
|
+
|
|
338
|
+
# Check if a predicate node is an unsafe proxy variable read
|
|
339
|
+
def check_predicate_unsafe(predicate, proxy_vars, unsafe_classes)
|
|
340
|
+
if proxy_var_read?(predicate, proxy_vars)
|
|
341
|
+
unsafe_classes.add(proxy_vars[predicate_key(predicate)])
|
|
342
|
+
else
|
|
343
|
+
scan_for_unsafe_usage(predicate, proxy_vars, unsafe_classes)
|
|
344
|
+
end
|
|
345
|
+
end
|
|
346
|
+
|
|
347
|
+
# Return the dep class if the node reads a proxy variable (local or ivar)
|
|
348
|
+
# or is a direct dep call. Returns nil otherwise.
|
|
349
|
+
def proxy_dep_class(node, proxy_vars)
|
|
350
|
+
if proxy_var_read?(node, proxy_vars)
|
|
351
|
+
proxy_vars[predicate_key(node)]
|
|
352
|
+
else
|
|
353
|
+
extract_dep_class(node)
|
|
354
|
+
end
|
|
355
|
+
end
|
|
356
|
+
|
|
357
|
+
# Check if node is a proxy variable read (local var or ivar)
|
|
358
|
+
def proxy_var_read?(node, proxy_vars)
|
|
359
|
+
case node
|
|
360
|
+
when Prism::LocalVariableReadNode
|
|
361
|
+
proxy_vars.key?(node.name)
|
|
362
|
+
when Prism::InstanceVariableReadNode
|
|
363
|
+
proxy_vars.key?(node.name)
|
|
364
|
+
else
|
|
365
|
+
false
|
|
366
|
+
end
|
|
367
|
+
end
|
|
368
|
+
|
|
369
|
+
# Extract the proxy_vars key from a variable read node
|
|
370
|
+
def predicate_key(node)
|
|
371
|
+
node.name
|
|
372
|
+
end
|
|
373
|
+
|
|
374
|
+
# Resolve a constant name to the class, with namespace fallback.
|
|
375
|
+
def resolve_constant(constant_name)
|
|
376
|
+
Object.const_get(constant_name)
|
|
377
|
+
rescue NameError
|
|
378
|
+
resolve_with_namespace(constant_name)
|
|
379
|
+
end
|
|
380
|
+
|
|
381
|
+
def resolve_with_namespace(constant_name)
|
|
382
|
+
return nil unless @task_class
|
|
383
|
+
|
|
384
|
+
namespace_parts = @task_class.name.split("::")
|
|
385
|
+
namespace_parts.length.downto(0) do |i|
|
|
386
|
+
prefix = namespace_parts.take(i).join("::")
|
|
387
|
+
full_name = prefix.empty? ? constant_name : "#{prefix}::#{constant_name}"
|
|
388
|
+
|
|
389
|
+
begin
|
|
390
|
+
return Object.const_get(full_name)
|
|
391
|
+
rescue NameError
|
|
392
|
+
next
|
|
393
|
+
end
|
|
394
|
+
end
|
|
395
|
+
|
|
396
|
+
nil
|
|
397
|
+
end
|
|
398
|
+
end
|
|
399
|
+
end
|
|
400
|
+
end
|
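The analyzer's core move — walking a Prism AST and collecting `Constant.method` calls as candidate dependencies — can be shown in isolation. The sketch below is an illustrative reduction of the `check_dep_call`/`collect_dep` logic (the helper name and sample source are made up for this example; the real analyzer additionally resolves each constant and checks that it is `< Taski::Task`), using the `prism` parser that ships with recent Rubies:

```ruby
require "prism"

# Recursively collect [constant_name, method_name] pairs for calls whose
# receiver is a constant (e.g. `BuildDep.value`). Illustrative only.
def collect_constant_calls(node, found = [])
  if node.is_a?(Prism::CallNode) &&
     (node.receiver.is_a?(Prism::ConstantReadNode) ||
      node.receiver.is_a?(Prism::ConstantPathNode))
    found << [node.receiver.slice, node.name]
  end
  node.compact_child_nodes.each { |child| collect_constant_calls(child, found) }
  found
end

source = <<~RUBY
  def run
    a = BuildDep.value
    puts a
  end
RUBY

deps = collect_constant_calls(Prism.parse(source).value)
# deps => [["BuildDep", :value]]  (`puts a` has no constant receiver, so it is skipped)
```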
data/lib/taski/task.rb
CHANGED
@@ -6,6 +6,7 @@ require_relative "execution/registry"
 require_relative "execution/task_wrapper"
 require_relative "progress/layout/tree"
 require_relative "progress/theme/plain"
+require_relative "task_proxy"
 
 module Taski
   # Base class for all tasks in the Taski framework.
@@ -181,12 +182,16 @@ module Taski
       registry = Taski.current_registry
       if registry
         if Thread.current[:taski_fiber_context]
-
-
-
-
+          start_deps = Thread.current[:taski_start_deps]
+          if start_deps&.include?(self)
+            # Lazy resolution via proxy - safe dep confirmed by static analysis
+            TaskProxy.new(self, method)
+          else
+            # Synchronous resolution: dep not in allowlist (unknown or unsafe usage)
+            result = Fiber.yield(Taski::Execution::FiberProtocol::NeedDep.new(self, method))
+            raise result.error if result in Taski::Execution::FiberProtocol::DepError
+            result
+          end
         end
-        result
       else
         # Synchronous resolution (clean phase, outside Fiber)
         wrapper = registry.get_or_create(self) do
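The `FiberProtocol::NeedDep` and `DepError` values yielded above are the typed protocol messages the changelog describes: Ruby `Data` classes replacing raw arrays and hashes. A standalone sketch of that pattern — the definitions below are local stand-ins for this example, not the gem's actual classes:

```ruby
# Local stand-ins; the real message classes live under
# Taski::Execution::FiberProtocol.
NeedDep  = Data.define(:task_class, :method_name)
DepError = Data.define(:error)

message = NeedDep.new(task_class: :SomeTask, method_name: :value)

# Data instances deconstruct by keyword, so `case/in` can dispatch on the
# message type and bind its fields in one step.
label =
  case message
  in NeedDep(task_class:, method_name:)
    "need #{task_class}##{method_name}"
  in DepError(error:)
    "failed: #{error}"
  end
# label => "need SomeTask#value"
```

Compared with raw `[:need_dep, self, method]` tuples, a mistyped field name here raises immediately instead of silently producing `nil`.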
data/lib/taski/task_proxy.rb
ADDED
@@ -0,0 +1,59 @@
+# frozen_string_literal: true
+
+module Taski
+  # Lazy proxy that defers dependency resolution until the value is actually used.
+  # Inherits from BasicObject to minimize available methods, maximizing method_missing delegation.
+  class TaskProxy < BasicObject
+    def initialize(task_class, method)
+      @task_class = task_class
+      @method = method
+      @resolved = false
+      @value = nil
+      @error = nil
+    end
+
+    def __resolve__
+      ::Kernel.raise @error if @error
+      return @value if @resolved
+      @value = ::Fiber.yield(::Taski::Execution::FiberProtocol::NeedDep.new(@task_class, @method))
+      if @value in ::Taski::Execution::FiberProtocol::DepError
+        @error = @value.error
+        ::Kernel.raise @error
+      end
+      @resolved = true
+      @value
+    end
+
+    def __taski_proxy_resolve__
+      __resolve__
+    end
+
+    def method_missing(name, *args, **kwargs, &block)
+      __resolve__.__send__(name, *args, **kwargs, &block)
+    end
+
+    def respond_to_missing?(name, include_private = false)
+      name == :__taski_proxy_resolve__ || __resolve__.respond_to?(name, include_private)
+    end
+
+    def !
+      !__resolve__
+    end
+
+    def ==(other)
+      __resolve__ == other
+    end
+
+    def !=(other)
+      __resolve__ != other
+    end
+
+    def equal?(other)
+      __resolve__.equal?(other)
+    end
+
+    def respond_to?(name, include_private = false)
+      name == :__taski_proxy_resolve__ || __resolve__.respond_to?(name, include_private)
+    end
+  end
+end
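The mechanism TaskProxy relies on — a `BasicObject` subclass that resolves lazily on first use and delegates everything else via `method_missing` — can be shown without the fiber machinery. A simplified, self-contained sketch (not the library class; the resolver block stands in for the `Fiber.yield` round-trip):

```ruby
# Minimal lazy proxy: BasicObject leaves almost no methods defined, so nearly
# every call falls through to method_missing and triggers resolution.
class LazyProxy < BasicObject
  def initialize(&resolver)
    @resolver = resolver
    @resolved = false
  end

  def __resolve__
    unless @resolved
      @value = @resolver.call  # runs at most once; result is cached
      @resolved = true
    end
    @value
  end

  def method_missing(name, *args, **kwargs, &block)
    __resolve__.__send__(name, *args, **kwargs, &block)
  end

  def respond_to_missing?(name, include_private = false)
    __resolve__.respond_to?(name, include_private)
  end
end

calls = 0
proxy = LazyProxy.new { calls += 1; "hello" }
# Nothing has resolved yet; the first real method call triggers the resolver.
upcased = proxy.upcase   # => "HELLO"
length  = proxy.length   # => 5, resolver not invoked again
```

This is why TaskProxy must pin down operators like `==` and `!` explicitly: `BasicObject` defines them, so they would otherwise compare the proxy itself rather than the resolved value.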
data/lib/taski/test_helper.rb
CHANGED
@@ -69,7 +69,7 @@ module Taski
 
     if MockRegistry.mock_for(task_class)
       wrapper.mark_completed(nil) unless wrapper.completed?
-      @completion_queue.push(
+      @completion_queue.push(Taski::Execution::FiberProtocol::TaskCompleted.new(task_class, wrapper))
       return
     end
 
data/lib/taski/version.rb
CHANGED
data/lib/taski.rb
CHANGED
@@ -4,6 +4,8 @@ require_relative "taski/version"
 require_relative "taski/static_analysis/analyzer"
 require_relative "taski/static_analysis/visitor"
 require_relative "taski/static_analysis/dependency_graph"
+require_relative "taski/static_analysis/start_dep_analyzer"
+require_relative "taski/execution/fiber_protocol"
 require_relative "taski/execution/registry"
 require_relative "taski/execution/task_observer"
 require_relative "taski/execution/execution_facade"
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: taski
 version: !ruby/object:Gem::Version
-  version: 0.9.0
+  version: 0.9.1
 platform: ruby
 authors:
 - ahogappa
@@ -84,6 +84,7 @@ files:
 - lib/taski/env.rb
 - lib/taski/execution/execution_facade.rb
 - lib/taski/execution/executor.rb
+- lib/taski/execution/fiber_protocol.rb
 - lib/taski/execution/registry.rb
 - lib/taski/execution/scheduler.rb
 - lib/taski/execution/task_observer.rb
@@ -106,8 +107,10 @@ files:
 - lib/taski/progress/theme/plain.rb
 - lib/taski/static_analysis/analyzer.rb
 - lib/taski/static_analysis/dependency_graph.rb
+- lib/taski/static_analysis/start_dep_analyzer.rb
 - lib/taski/static_analysis/visitor.rb
 - lib/taski/task.rb
+- lib/taski/task_proxy.rb
 - lib/taski/test_helper.rb
 - lib/taski/test_helper/errors.rb
 - lib/taski/test_helper/minitest.rb