polyrun 1.1.0 → 1.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +19 -0
- data/README.md +1 -1
- data/lib/polyrun/cli/ci_shard_run_command.rb +97 -2
- data/lib/polyrun/cli/ci_shard_run_parse.rb +68 -0
- data/lib/polyrun/cli/config_command.rb +42 -0
- data/lib/polyrun/cli/default_run.rb +115 -0
- data/lib/polyrun/cli/help.rb +54 -0
- data/lib/polyrun/cli/helpers.rb +4 -31
- data/lib/polyrun/cli/plan_command.rb +22 -12
- data/lib/polyrun/cli/prepare_command.rb +2 -2
- data/lib/polyrun/cli/queue_command.rb +46 -19
- data/lib/polyrun/cli/run_shards_command.rb +62 -5
- data/lib/polyrun/cli/run_shards_plan_boot_phases.rb +1 -1
- data/lib/polyrun/cli/run_shards_plan_options.rb +1 -1
- data/lib/polyrun/cli/run_shards_planning.rb +8 -8
- data/lib/polyrun/cli/run_shards_run.rb +4 -2
- data/lib/polyrun/cli/start_bootstrap.rb +2 -6
- data/lib/polyrun/cli.rb +46 -50
- data/lib/polyrun/config/dotted_path.rb +21 -0
- data/lib/polyrun/config/effective.rb +72 -0
- data/lib/polyrun/config/resolver.rb +78 -0
- data/lib/polyrun/config.rb +7 -0
- data/lib/polyrun/coverage/collector.rb +15 -9
- data/lib/polyrun/coverage/collector_finish.rb +2 -0
- data/lib/polyrun/coverage/collector_fragment_meta.rb +57 -0
- data/lib/polyrun/partition/paths.rb +83 -2
- data/lib/polyrun/quick/runner.rb +26 -17
- data/lib/polyrun/templates/ci_matrix.polyrun.yml +3 -2
- data/lib/polyrun/version.rb +1 -1
- metadata +9 -1
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e3162ed760c231d4fa78ff396f55f708af13bda62808ba340f3c1494cf2dd97e
+  data.tar.gz: f401bd075462bafa14905c08a996fa46370025be9872f6a4d7aefbfdde251d2d
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 3d0bf0e8ac88d3d0a007f53ce340c324bcd65c6397b4b67905d428d104ebff3078bc41220873b0b7d741e6cc65be06d5d103f8b8abd64e18ce0dc2d8f3e55bc1
+  data.tar.gz: 42906169eeeae14b3531d870359ce8fe9bd59261770553a6e7e6872e48e47b7b1013709eedea5c176a94ccee5c1e498551e8d6285ffc68b37c51d05169b184b1
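The SHA256/SHA512 pairs above can be reproduced locally with Ruby's stdlib `Digest`; a quick sketch (the input bytes here are a placeholder, not the real `metadata.gz` / `data.tar.gz` artifacts):

```ruby
require "digest"

# Hex digests have fixed lengths: SHA256 -> 64 hex chars, SHA512 -> 128.
bytes = "placeholder for gem artifact bytes"
sha256 = Digest::SHA256.hexdigest(bytes)
sha512 = Digest::SHA512.hexdigest(bytes)
puts sha256.length # => 64
puts sha512.length # => 128
```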
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,24 @@
 # CHANGELOG
 
+## 1.3.0 (2026-04-15)
+
+- Add safe parsing for `ci-shard-run` / `ci-shard-rspec` `--shard-processes` and `--workers` (warn + exit 2 on missing or non-integer values).
+- Fix `shard_child_env` when `matrix_total > 1` and `matrix_index` is nil: omit `POLYRUN_SHARD_MATRIX_*` and warn (avoid `Integer(nil)`).
+- Document in `polyrun help` that `POLYRUN_SHARD_PROCESSES` and ci-shard `--workers` / `--shard-processes` are local processes per matrix job, distinct from `POLYRUN_WORKERS` / `run-shards`.
+- BREAKING: Multi-worker shard runs may emit coverage JSON fragments whose basenames include `shard*` and `worker*` segments; `merge-coverage` still matches `polyrun-fragment-*.json`.
+
+## 1.2.0 (2026-04-15)
+
+- Add `polyrun config <dotted.path>` to print values from `Polyrun::Config::Effective` (same effective tree as runtime: arbitrary YAML paths, merged `prepare.env.<KEY>` as for `polyrun prepare`, resolved `partition.shard_index`, `partition.shard_total`, `partition.timing_granularity`, and `workers`).
+- Memoize `Polyrun::Config::Effective.build` per thread (keyed by config path, object id, and env fingerprint) so repeated `dig` calls do not rebuild the merged tree.
+- Add `DISPATCH_SUBCOMMAND_NAMES` and `IMPLICIT_PATH_EXCLUSION_TOKENS`; route implicit path-only argv against one list (includes `ci-shard-*`, `help`, `version`); add spec that dispatch names match `when` branches in `lib/polyrun/cli.rb`.
+- Run `polyrun` with no subcommand to fan out parallel tests: pick RSpec (`start`), Minitest (`bundle exec rails test` or `bundle exec ruby -I test`), or Polyrun Quick (`bundle exec polyrun quick`) from `spec/**/*_spec.rb` vs `test/**/*_test.rb` vs Quick globs.
+- Accept path-only argv (and optional `run-shards` options before paths, e.g. `--workers`) to shard those files without naming a subcommand; infer suite from `_spec.rb` / `_test.rb` vs other `.rb` files.
+- Add optional `partition.suite` (`auto`, `rspec`, `minitest`, `quick`) when resolving globbed paths for `run-shards` / `parallel-rspec` / default runs.
+- Document implicit argv (known subcommand first vs path-like implicit parallel) and parallel Quick `bundle exec` from app root in `polyrun help` and `examples/README.md`.
+- Comment `detect_auto_suite` glob order in `lib/polyrun/partition/paths.rb` (RSpec/Minitest globs before Quick discovery).
+- Remove redundant `OptionParser` from `polyrun config` (no options; banner only).
+
 ## 1.1.0 (2026-04-15)
 
 - Add `ci-shard-run` / `ci-shard-rspec` for matrix-style sharding (one job per `POLYRUN_SHARD_INDEX` / `POLYRUN_SHARD_TOTAL`): resolve paths via the same plan as `polyrun plan`, then `exec` the given command with this shard’s paths (unlike `run-shards`, which fans out multiple workers on one host).
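The "safe parsing" entry in 1.3.0 relies on Ruby's non-raising integer conversion. A minimal standalone sketch of that contract (warn and signal exit 2 instead of raising; `parse_count` is an illustrative helper, not Polyrun's API):

```ruby
# Parse a flag value as an integer without raising: a missing or
# non-integer value warns and returns exit code 2 instead of crashing
# on ArgumentError, as the changelog describes for --shard-processes.
def parse_count(arg, flag)
  if arg.nil?
    warn "missing value for #{flag}"
    return [nil, 2]
  end
  n = Integer(arg, exception: false) # nil instead of ArgumentError
  if n.nil?
    warn "#{flag} must be an integer (got #{arg.inspect})"
    return [nil, 2]
  end
  [n, nil]
end

p parse_count("4", "--workers")    # => [4, nil]
p parse_count("four", "--workers") # => [nil, 2]
```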
data/README.md
CHANGED
@@ -19,7 +19,7 @@ Capybara and Playwright stay in your application; Polyrun does not replace brows
 
 ## How?
 
-1. Add the gem (path or RubyGems) and `require "polyrun"` where you integrate—for example coverage merge in CI or prepare hooks.
+1. Add the gem (path or RubyGems) and `require "polyrun"` where you integrate—for example coverage merge in CI or prepare hooks. To pin the executable in your app, run `bundle binstubs polyrun` (writes `bin/polyrun`; ensure `bin/` is on `PATH` or invoke `./bin/polyrun`).
 2. Add a `polyrun.yml` beside the app, or pass `-c` to point at one. Configure `partition` (paths, shard index and total, strategy), and optionally `databases` (Postgres template and `shard_db_pattern`), `prepare`, and `coverage`. If you use `partition.paths_build`, Polyrun can write `partition.paths_file` (for example `spec/spec_paths.txt`) from globs and ordered stages—substring priorities for integration specs, or a regex stage for “Rails-heavy files first”—without a per-project Ruby script. That step runs before `plan` and `run-shards`. Use `bin/polyrun build-paths` to refresh the paths file only.
 3. Run prepare once before fan-out—for example `script/ci_prepare` for Vite or webpack builds, and `Polyrun::Prepare::Assets` digest markers. See `examples/TESTING_REQUIREMENTS.md`.
 4. Run workers with `bin/polyrun run-shards --workers N -- bundle exec rspec`: N separate OS processes, each running RSpec with its own file list from `partition.paths_file`, or `spec/spec_paths.txt`, or else `spec/**/*_spec.rb`. Stderr shows where paths came from; after a successful multi-worker run it reminds you to run merge-coverage unless you use `parallel-rspec` or `run-shards --merge-coverage`.
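Step 4's fan-out gives each of the N worker processes its own file list. The simplest such split is round-robin (the strategy name `ci-shard-run` uses for its local plan); a standalone sketch, not Polyrun's actual `Partition::Plan` API:

```ruby
# Round-robin partition of test paths across N shards: path i goes to
# shard i % total. Polyrun's real planner also supports cost-based
# strategies; this sketch only illustrates the default split shape.
def round_robin_shards(paths, total)
  shards = Array.new(total) { [] }
  paths.each_with_index { |p, i| shards[i % total] << p }
  shards
end

p round_robin_shards(%w[a b c d e], 2) # => [["a", "c", "e"], ["b", "d"]]
```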
data/lib/polyrun/cli/ci_shard_run_command.rb
CHANGED
@@ -6,6 +6,11 @@ module Polyrun
     # workers on a single host. Runs +build-paths+, +plan+ for that shard, then +exec+ of a user command
     # with that shard's paths appended (same argv pattern as +run-shards+ after +--+).
     #
+    # With +--shard-processes M+ (or +partition.shard_processes+ / +POLYRUN_SHARD_PROCESSES+), fans out
+    # +M+ OS processes on this host, each running a subset of this shard's paths (NxM: +N+ matrix jobs × +M+
+    # processes). Child processes get local +POLYRUN_SHARD_INDEX+ / +POLYRUN_SHARD_TOTAL+ (+0..M-1+, +M+);
+    # when +N+ > 1, also +POLYRUN_SHARD_MATRIX_INDEX+ / +POLYRUN_SHARD_MATRIX_TOTAL+ for unique coverage fragments.
+    #
     # After +--+, prefer **multiple argv tokens** (+bundle+, +exec+, +rspec+, …). A single token that
     # contains spaces is split with +Shellwords+ (not a full shell); exotic quoting differs from +sh -c+.
     module CiShardRunCommand
@@ -25,6 +30,60 @@ module Polyrun
         [paths, 0]
       end
 
+      def ci_shard_local_plan!(paths, workers)
+        Polyrun::Partition::Plan.new(
+          items: paths,
+          total_shards: workers,
+          strategy: "round_robin",
+          root: Dir.pwd
+        )
+      end
+
+      # When +N+ > 1 and +M+ > 1, pass matrix index/total for coverage fragment names; else nil (see +shard_child_env+).
+      def ci_shard_matrix_context(pc, shard_processes)
+        n = resolve_shard_total(pc)
+        return [nil, nil] if n <= 1 || shard_processes <= 1
+
+        [resolve_shard_index(pc), n]
+      end
+
+      def ci_shard_run_fanout!(ctx)
+        pids = run_shards_spawn_workers(ctx)
+        return 1 if pids.empty?
+
+        run_shards_warn_interleaved(ctx[:parallel], pids.size)
+        shard_results = run_shards_wait_all_children(pids)
+        failed = shard_results.reject { |r| r[:success] }.map { |r| r[:shard] }
+
+        if failed.any?
+          Polyrun::Log.warn "polyrun ci-shard: finished #{pids.size} worker(s) (some failed)"
+          run_shards_log_failed_reruns(failed, shard_results, ctx[:plan], ctx[:parallel], ctx[:workers], ctx[:cmd])
+          return 1
+        end
+
+        Polyrun::Log.warn "polyrun ci-shard: finished #{pids.size} worker(s) (exit 0)"
+        0
+      end
+
+      def ci_shard_fanout_context(cfg:, pc:, paths:, shard_processes:, cmd:, config_path:)
+        plan = ci_shard_local_plan!(paths, shard_processes)
+        mx, mt = ci_shard_matrix_context(pc, shard_processes)
+        {
+          workers: shard_processes,
+          cmd: cmd,
+          cfg: cfg,
+          plan: plan,
+          run_t0: Process.clock_gettime(Process::CLOCK_MONOTONIC),
+          parallel: true,
+          merge_coverage: false,
+          merge_output: nil,
+          merge_format: nil,
+          config_path: config_path,
+          matrix_shard_index: mx,
+          matrix_shard_total: mt
+        }
+      end
+
       # Runner-agnostic matrix shard: +polyrun ci-shard-run [plan options] -- <command> [args...]+
       # Paths for this shard are appended after the command (like +run-shards+).
       def cmd_ci_shard_run(argv, config_path)
@@ -42,10 +101,27 @@
         end
         cmd = Shellwords.split(cmd.first) if cmd.size == 1 && cmd.first.include?(" ")
 
+        cfg = Polyrun::Config.load(path: config_path || ENV["POLYRUN_CONFIG"])
+        pc = cfg.partition
+        shard_processes, perr = ci_shard_parse_shard_processes!(plan_argv, pc)
+        return perr if perr
+
+        shard_processes, err = ci_shard_normalize_shard_processes(shard_processes)
+        return err if err
+
         paths, code = ci_shard_planned_paths!(plan_argv, config_path, command_label: "ci-shard-run")
         return code if code != 0
 
-
+        if shard_processes <= 1
+          exec(*cmd, *paths)
+          return 0
+        end
+
+        ctx = ci_shard_fanout_context(
+          cfg: cfg, pc: pc, paths: paths, shard_processes: shard_processes, cmd: cmd, config_path: config_path
+        )
+        Polyrun::Log.warn "polyrun ci-shard-run: #{paths.size} path(s) → #{shard_processes} process(es) on this host (NxM: matrix jobs × local processes)"
+        ci_shard_run_fanout!(ctx)
       end
 
       # Same as +ci-shard-run -- bundle exec rspec+ with an optional second segment for RSpec-only flags:
@@ -55,10 +131,29 @@
         plan_argv = sep ? argv[0...sep] : argv
         rspec_argv = sep ? argv[(sep + 1)..] : []
 
+        cfg = Polyrun::Config.load(path: config_path || ENV["POLYRUN_CONFIG"])
+        pc = cfg.partition
+        shard_processes, perr = ci_shard_parse_shard_processes!(plan_argv, pc)
+        return perr if perr
+
+        shard_processes, err = ci_shard_normalize_shard_processes(shard_processes)
+        return err if err
+
         paths, code = ci_shard_planned_paths!(plan_argv, config_path, command_label: "ci-shard-rspec")
         return code if code != 0
 
-
+        cmd = ["bundle", "exec", "rspec", *rspec_argv]
+
+        if shard_processes <= 1
+          exec(*cmd, *paths)
+          return 0
+        end
+
+        ctx = ci_shard_fanout_context(
+          cfg: cfg, pc: pc, paths: paths, shard_processes: shard_processes, cmd: cmd, config_path: config_path
+        )
+        Polyrun::Log.warn "polyrun ci-shard-rspec: #{paths.size} path(s) → #{shard_processes} process(es) on this host (NxM: matrix jobs × local processes)"
+        ci_shard_run_fanout!(ctx)
       end
     end
   end
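The fan-out-then-wait shape behind `ci_shard_run_fanout!` can be sketched standalone with `Process.spawn` and `Process.waitpid2` (illustrative only; the real code reuses the `run-shards` spawn/wait helpers and sets per-child `POLYRUN_SHARD_*` env):

```ruby
# Spawn one OS process per command, wait for all of them, and return a
# run-shards-style exit code: 0 if every child succeeded, else 1.
def fan_out(commands)
  pids = commands.map { |cmd| Process.spawn(*cmd) }
  results = pids.map do |pid|
    _, status = Process.waitpid2(pid)
    { pid: pid, success: status.success? }
  end
  results.all? { |r| r[:success] } ? 0 : 1
end

ruby = RbConfig.ruby # current interpreter, so the sketch is portable
p fan_out([[ruby, "-e", "exit 0"], [ruby, "-e", "exit 0"]]) # => 0
p fan_out([[ruby, "-e", "exit 1"]])                         # => 1
```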
data/lib/polyrun/cli/ci_shard_run_parse.rb
ADDED
@@ -0,0 +1,68 @@
+module Polyrun
+  class CLI
+    # Parsing for +ci-shard-run+ / +ci-shard-rspec+ plan argv (+--shard-processes+, +--workers+).
+    module CiShardRunParse
+      private
+
+      # Strips +--shard-processes+ / +--workers+ from +plan_argv+ and returns +[count, exit_code]+.
+      # +exit_code+ is +nil+ on success, +2+ on invalid or missing integer (no exception).
+      # Does not use +OptionParser+ so +plan+ flags (+--shard+, +--total+, …) pass through unchanged.
+      # Note: +--workers+ here means processes for this matrix job (+POLYRUN_SHARD_PROCESSES+), not +run-shards+ +POLYRUN_WORKERS+.
+      def ci_shard_parse_shard_processes!(plan_argv, pc)
+        workers = Polyrun::Config::Resolver.resolve_shard_processes(pc)
+        rest = []
+        i = 0
+        while i < plan_argv.size
+          case plan_argv[i]
+          when "--shard-processes"
+            n, err = ci_shard_parse_positive_int_flag!(plan_argv, i, "--shard-processes")
+            return [nil, err] if err
+
+            workers = n
+            i += 2
+          when "--workers"
+            n, err = ci_shard_parse_positive_int_flag!(plan_argv, i, "--workers")
+            return [nil, err] if err
+
+            workers = n
+            i += 2
+          else
+            rest << plan_argv[i]
+            i += 1
+          end
+        end
+        plan_argv.replace(rest)
+        [workers, nil]
+      end
+
+      # @return [Array(Integer or nil, Integer or nil)] +[value, exit_code]+ — +exit_code+ is +nil+ on success, +2+ on error
+      def ci_shard_parse_positive_int_flag!(argv, i, flag_name)
+        arg = argv[i + 1]
+        if arg.nil?
+          Polyrun::Log.warn "polyrun ci-shard: missing value for #{flag_name}"
+          return [nil, 2]
+        end
+        n = Integer(arg, exception: false)
+        if n.nil?
+          Polyrun::Log.warn "polyrun ci-shard: #{flag_name} must be an integer (got #{arg.inspect})"
+          return [nil, 2]
+        end
+        [n, nil]
+      end
+
+      # @return [Array(Integer, Integer or nil)] +[capped_workers, exit_code]+ — +exit_code+ is +nil+ when OK
+      def ci_shard_normalize_shard_processes(workers)
+        if workers < 1
+          Polyrun::Log.warn "polyrun ci-shard: --shard-processes / --workers must be >= 1"
+          return [workers, 2]
+        end
+        w = workers
+        if w > Polyrun::Config::MAX_PARALLEL_WORKERS
+          Polyrun::Log.warn "polyrun ci-shard: capping --shard-processes / --workers from #{w} to #{Polyrun::Config::MAX_PARALLEL_WORKERS}"
+          w = Polyrun::Config::MAX_PARALLEL_WORKERS
+        end
+        [w, nil]
+      end
+    end
+  end
+end
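The module above deliberately avoids `OptionParser`, because it must extract its own two flags while leaving every unrecognized `plan` flag untouched. The core loop is just "consume a flag and its value, pass everything else through"; a standalone sketch of that shape (`strip_flag` is illustrative, not a Polyrun method):

```ruby
# Remove one "--flag VALUE" pair from argv, returning the value and the
# remaining tokens in order. Unknown flags and "--" survive untouched,
# which is why this beats OptionParser for pass-through parsing.
def strip_flag(argv, flag)
  rest = []
  value = nil
  i = 0
  while i < argv.size
    if argv[i] == flag
      value = argv[i + 1]
      i += 2
    else
      rest << argv[i]
      i += 1
    end
  end
  [value, rest]
end

v, rest = strip_flag(%w[--shard 1 --workers 4 -- bundle exec rspec], "--workers")
p v    # => "4"
p rest # => ["--shard", "1", "--", "bundle", "exec", "rspec"]
```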
data/lib/polyrun/cli/config_command.rb
ADDED
@@ -0,0 +1,42 @@
+require "json"
+
+require_relative "../config/effective"
+
+module Polyrun
+  class CLI
+    module ConfigCommand
+      private
+
+      def cmd_config(argv, config_path)
+        dotted = argv.shift
+        if dotted.nil? || dotted.strip.empty?
+          Polyrun::Log.warn "polyrun config: need a dotted path (e.g. prepare.env.PLAYWRIGHT_ENV, partition.paths_file, workers)"
+          return 2
+        end
+        unless argv.empty?
+          Polyrun::Log.warn "polyrun config: unexpected arguments: #{argv.join(" ")}"
+          return 2
+        end
+
+        cfg = Polyrun::Config.load(path: config_path || ENV["POLYRUN_CONFIG"])
+        val = Polyrun::Config::Effective.dig(cfg, dotted)
+        if val.nil?
+          Polyrun::Log.warn "polyrun config: no value for #{dotted}"
+          return 1
+        end
+
+        Polyrun::Log.puts format_config_value(val)
+        0
+      end
+
+      def format_config_value(val)
+        case val
+        when Hash, Array
+          JSON.generate(val)
+        else
+          val.to_s
+        end
+      end
+    end
+  end
+end
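The dotted-path lookup that `cmd_config` delegates to `Polyrun::Config::Effective.dig` can be sketched over a plain nested hash (illustrative; the real `Effective` tree also merges `prepare.env` and resolved partition fields before digging):

```ruby
# Walk a nested hash by "a.b.c" style keys, tolerating string or symbol
# keys at each level and returning nil for any missing segment.
def dotted_dig(tree, dotted)
  dotted.split(".").reduce(tree) do |node, key|
    node.is_a?(Hash) ? (node[key] || node[key.to_sym]) : nil
  end
end

cfg = {
  "partition" => { "shard_total" => 4 },
  "prepare"   => { "env" => { "RAILS_ENV" => "test" } }
}
p dotted_dig(cfg, "partition.shard_total") # => 4
p dotted_dig(cfg, "prepare.env.RAILS_ENV") # => "test"
p dotted_dig(cfg, "missing.path")          # => nil
```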
data/lib/polyrun/cli/default_run.rb
ADDED
@@ -0,0 +1,115 @@
+require "tempfile"
+
+module Polyrun
+  class CLI
+    # No-subcommand default (`polyrun`) and path-only argv (implicit parallel run).
+    module DefaultRun
+      private
+
+      def dispatch_default_parallel!(config_path)
+        suite = Polyrun::Partition::Paths.detect_auto_suite(Dir.pwd)
+        unless suite
+          Polyrun::Log.warn "polyrun: no tests found (spec/**/*_spec.rb, test/**/*_test.rb, or Polyrun quick files). See polyrun help."
+          return 2
+        end
+
+        Polyrun::Log.warn "polyrun: default → parallel #{suite} (use `polyrun help` for subcommands)" if @verbose
+
+        case suite
+        when :rspec
+          cmd_start([], config_path)
+        when :minitest
+          cmd_parallel_minitest([], config_path)
+        when :quick
+          cmd_parallel_quick([], config_path)
+        else
+          2
+        end
+      end
+
+      # If +argv[0]+ is in {IMPLICIT_PATH_EXCLUSION_TOKENS}, treat as a normal subcommand. Otherwise, path-like
+      # tokens may trigger implicit parallel sharding (see +print_help+).
+      def implicit_parallel_run?(argv)
+        return false if argv.empty?
+        return false if Polyrun::CLI::IMPLICIT_PATH_EXCLUSION_TOKENS.include?(argv[0])
+
+        argv.any? { |a| cli_implicit_path_token?(a) }
+      end
+
+      def cli_implicit_path_token?(s)
+        return false if s.start_with?("-") && s != "-"
+        return true if s == "-"
+        return true if s.start_with?("./", "../", "/")
+        return true if s.end_with?(".rb")
+        return true if File.exist?(File.expand_path(s))
+        return true if /[*?\[]/.match?(s)
+
+        false
+      end
+
+      def dispatch_implicit_parallel_targets!(argv, config_path)
+        path_tokens = argv.select { |a| cli_implicit_path_token?(a) }
+        head = argv.reject { |a| cli_implicit_path_token?(a) }
+        expanded = expand_implicit_target_paths(path_tokens)
+        if expanded.empty?
+          Polyrun::Log.warn "polyrun: no files matched path arguments"
+          return 2
+        end
+
+        suite = Polyrun::Partition::Paths.infer_suite_from_paths(expanded)
+        if suite == :invalid
+          Polyrun::Log.warn "polyrun: mixing _spec.rb and _test.rb paths in one run is not supported"
+          return 2
+        end
+        if suite.nil?
+          Polyrun::Log.warn "polyrun: could not infer suite from paths"
+          return 2
+        end
+
+        tmp = Tempfile.new(["polyrun-paths-", ".txt"])
+        begin
+          tmp.write(expanded.join("\n") + "\n")
+          tmp.close
+          combined = head + ["--paths-file", tmp.path]
+          case suite
+          when :rspec
+            cmd_start(combined, config_path)
+          when :minitest
+            cmd_parallel_minitest(combined, config_path)
+          when :quick
+            cmd_parallel_quick(combined, config_path)
+          else
+            2
+          end
+        ensure
+          tmp.close! unless tmp.closed?
+          begin
+            File.unlink(tmp.path)
+          rescue Errno::ENOENT
+            # already removed
+          end
+        end
+      end
+
+      def expand_implicit_target_paths(path_tokens)
+        path_tokens.flat_map do |p|
+          abs = File.expand_path(p)
+          if File.directory?(abs)
+            spec = Dir.glob(File.join(abs, "**", "*_spec.rb")).sort
+            test = Dir.glob(File.join(abs, "**", "*_test.rb")).sort
+            quick = Dir.glob(File.join(abs, "**", "*.rb")).sort.reject do |f|
+              File.basename(f).end_with?("_spec.rb", "_test.rb")
+            end
+            spec + test + quick
+          elsif /[*?\[]/.match?(p)
+            Dir.glob(abs).sort
+          elsif File.file?(abs)
+            [abs]
+          else
+            []
+          end
+        end.uniq
+      end
+    end
+  end
+end
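The "path-like token" heuristic in `cli_implicit_path_token?` is worth seeing in isolation. This sketch mirrors it but drops the `File.exist?` probe so the result does not depend on the working directory (standalone copy for illustration, not the shipped method):

```ruby
# A token is path-like if it is "-" (stdin), a relative/absolute path
# prefix, a .rb file name, or contains glob metacharacters (* ? [).
# Anything starting with "-" (other than bare "-") is a flag, not a path.
def path_like?(s)
  return false if s.start_with?("-") && s != "-"
  return true if s == "-" || s.start_with?("./", "../", "/")
  return true if s.end_with?(".rb")
  /[*?\[]/.match?(s) ? true : false
end

p path_like?("spec/models/user_spec.rb") # => true
p path_like?("spec/**/*_spec.rb")        # => true
p path_like?("--workers")                # => false
p path_like?("plan")                     # => false
```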
data/lib/polyrun/cli/help.rb
ADDED
@@ -0,0 +1,54 @@
+module Polyrun
+  class CLI
+    module Help
+      def print_help
+        Polyrun::Log.puts <<~HELP
+          usage: polyrun [global options] [<command> | <paths...>]
+
+          With no command, runs parallel tests for the detected suite: RSpec under spec/, Minitest under test/, or Polyrun Quick (same discovery as polyrun quick). If the first argument is a known subcommand name, it is dispatched. Otherwise, path-like tokens (optionally with run-shards flags such as --workers) shard those files in parallel; see commands below.
+
+          global:
+            -c, --config PATH   polyrun.yml path (or POLYRUN_CONFIG)
+            -v, --verbose
+            -h, --help
+
+          Trace timing (stderr): DEBUG=1 or POLYRUN_DEBUG=1
+          Branch coverage in JSON fragments: POLYRUN_COVERAGE_BRANCHES=1 (stdlib Coverage; merge-coverage merges branches)
+          polyrun quick coverage: POLYRUN_COVERAGE=1 or (config/polyrun_coverage.yml + POLYRUN_QUICK_COVERAGE=1); POLYRUN_COVERAGE_DISABLE=1 skips
+          Merge wall time (stderr): POLYRUN_PROFILE_MERGE=1 (or verbose / DEBUG)
+          Post-merge formats (run-shards): POLYRUN_MERGE_FORMATS (default: json,lcov,cobertura,console,html)
+          Skip optional script/build_spec_paths.rb before start: POLYRUN_SKIP_BUILD_SPEC_PATHS=1
+          Skip start auto-prepare / auto DB provision: POLYRUN_START_SKIP_PREPARE=1, POLYRUN_START_SKIP_DATABASES=1
+          Skip writing paths_file from partition.paths_build: POLYRUN_SKIP_PATHS_BUILD=1
+          Warn if merge-coverage wall time exceeds N seconds (default 10): POLYRUN_MERGE_SLOW_WARN_SECONDS (0 disables)
+          Parallel RSpec workers: POLYRUN_WORKERS default 5, max 10 (run-shards / parallel-rspec / start); distinct from POLYRUN_SHARD_PROCESSES / ci-shard --shard-processes (local processes per CI matrix job)
+          Partition timing granularity (default file): POLYRUN_TIMING_GRANULARITY=file|example (experimental per-example; see partition.timing_granularity)
+
+          commands:
+            version            print version
+            plan               emit partition manifest JSON
+            prepare            run prepare recipe: default | assets (optional prepare.command overrides bin/rails assets:precompile) | shell (prepare.command required)
+            merge-coverage     merge SimpleCov JSON fragments (json/lcov/cobertura/console)
+            run-shards         fan out N parallel OS processes (POLYRUN_SHARD_*; not Ruby threads); optional --merge-coverage
+            parallel-rspec     run-shards + merge-coverage (defaults to: bundle exec rspec after --)
+            start              parallel-rspec; auto-runs prepare (shell/assets) and db:setup-* when polyrun.yml configures them; legacy script/build_spec_paths.rb if paths_build absent
+            ci-shard-run       CI matrix: build-paths + plan for POLYRUN_SHARD_INDEX / POLYRUN_SHARD_TOTAL (or config), then run your command with that shard's paths after --; optional --shard-processes M or --workers M (POLYRUN_SHARD_PROCESSES; not POLYRUN_WORKERS) for N×M jobs × processes on this host
+            ci-shard-rspec     same as ci-shard-run -- bundle exec rspec; optional --shard-processes / --workers / -- [rspec-only flags]
+            build-paths        write partition.paths_file from partition.paths_build (same as auto step before plan/run-shards)
+            init               write a starter polyrun.yml or POLYRUN.md from built-in templates (see docs/SETUP_PROFILE.md)
+            queue              file-backed batch queue: init (optional --shard/--total etc. as plan, then claim/ack); M workers share one dir; no duplicate paths across claims
+            quick              run Polyrun::Quick (describe/it, before/after, let, expect…to, assert_*; optional capybara!)
+            report-coverage    write all coverage formats from one JSON file
+            report-junit       RSpec JSON or Polyrun testcase JSON → JUnit XML (CI)
+            report-timing      print slow-file summary from merged timing JSON
+            merge-timing       merge polyrun_timing_*.json shards
+            config             print effective config by dotted path (see Polyrun::Config::Effective; same tree as YAML plus merged prepare.env, resolved partition shard fields, workers)
+            env                print shard + database env (see polyrun.yml databases)
+            db:setup-template  migrate template DB (PostgreSQL)
+            db:setup-shard     CREATE DATABASE shard FROM template (one POLYRUN_SHARD_INDEX)
+            db:clone-shards    migrate templates + DROP/CREATE all shard DBs (replaces clone_shard shell scripts)
+        HELP
+      end
+    end
+  end
+end
data/lib/polyrun/cli/helpers.rb
CHANGED
@@ -7,40 +7,16 @@ module Polyrun
     module Helpers
       private
 
-      def partition_int(pc, keys, default)
-        keys.each do |k|
-          v = pc[k] || pc[k.to_sym]
-          next if v.nil? || v.to_s.empty?
-
-          i = Integer(v, exception: false)
-          return i unless i.nil?
-        end
-        default
-      end
-
       def env_int(name, fallback)
-
-        return fallback if s.nil? || s.empty?
-
-        Integer(s, exception: false) || fallback
+        Polyrun::Config::Resolver.env_int(name, fallback)
       end
 
       def resolve_shard_index(pc)
-
-
-        ci = Polyrun::Env::Ci.detect_shard_index
-        return ci unless ci.nil?
-
-        partition_int(pc, %w[shard_index shard], 0)
+        Polyrun::Config::Resolver.resolve_shard_index(pc)
       end
 
      def resolve_shard_total(pc)
-
-
-        ci = Polyrun::Env::Ci.detect_shard_total
-        return ci unless ci.nil?
-
-        partition_int(pc, %w[shard_total total], 1)
+        Polyrun::Config::Resolver.resolve_shard_total(pc)
      end
 
      def expand_merge_input_pattern(path)
@@ -119,10 +95,7 @@ module Polyrun
 
       # CLI + polyrun.yml + POLYRUN_TIMING_GRANULARITY; default +:file+.
       def resolve_partition_timing_granularity(pc, cli_val)
-
-        raw ||= pc && (pc["timing_granularity"] || pc[:timing_granularity])
-        raw ||= ENV["POLYRUN_TIMING_GRANULARITY"]
-        Polyrun::Partition::TimingKeys.normalize_granularity(raw || "file")
+        Polyrun::Config::Resolver.resolve_partition_timing_granularity(pc, cli_val)
      end
    end
  end
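The `env_int` contract that `Helpers` now delegates to the resolver is simple: take the env value if it parses as an integer, else fall back. A standalone sketch of the same contract (the env hash is passed in explicitly here so the example stays deterministic):

```ruby
# Integer-from-env with fallback: blank or missing values and values
# that do not parse as integers both yield the fallback.
def env_int(env, name, fallback)
  s = env[name]
  return fallback if s.nil? || s.empty?
  Integer(s, exception: false) || fallback
end

p env_int({ "POLYRUN_WORKERS" => "8" }, "POLYRUN_WORKERS", 5)    # => 8
p env_int({}, "POLYRUN_WORKERS", 5)                              # => 5
p env_int({ "POLYRUN_WORKERS" => "lots" }, "POLYRUN_WORKERS", 5) # => 5
```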
data/lib/polyrun/cli/plan_command.rb
CHANGED
@@ -76,21 +76,31 @@ module Polyrun
         }
       end
 
+      # Partition flags shared by +polyrun plan+ and +queue init+ (excluding +--paths-file+, which each command registers once).
+      def plan_command_register_partition_options!(opts, ctx)
+        opts.on("--shard INDEX", Integer) { |v| ctx[:shard] = v }
+        opts.on("--total N", Integer) { |v| ctx[:total] = v }
+        opts.on("--strategy NAME", String) { |v| ctx[:strategy] = v }
+        opts.on("--seed VAL") { |v| ctx[:seed] = v }
+        opts.on("--constraints PATH", "YAML: pin / serial_glob (see spec_queue.md)") { |v| ctx[:constraints_path] = v }
+        opts.on("--timing PATH", "path => seconds JSON; implies cost_binpack unless strategy is cost-based or hrw") do |v|
+          ctx[:timing_path] = v
+        end
+        opts.on("--timing-granularity VAL", "file (default) or example (experimental: path:line items)") do |v|
+          ctx[:timing_granularity] = v
+        end
+      end
+
+      # Shared by +polyrun plan+ and +queue init+ so partition flags match +Partition::Plan+ / +plan+ JSON.
+      def plan_command_register_options!(opts, ctx)
+        opts.on("--paths-file PATH", String) { |v| ctx[:paths_file] = v }
+        plan_command_register_partition_options!(opts, ctx)
+      end
+
       def plan_command_parse_argv!(argv, ctx)
         OptionParser.new do |opts|
           opts.banner = "usage: polyrun plan [options] [--] [paths...]"
-          opts
-          opts.on("--total N", Integer) { |v| ctx[:total] = v }
-          opts.on("--strategy NAME", String) { |v| ctx[:strategy] = v }
-          opts.on("--seed VAL") { |v| ctx[:seed] = v }
-          opts.on("--paths-file PATH", String) { |v| ctx[:paths_file] = v }
-          opts.on("--constraints PATH", "YAML: pin / serial_glob (see spec_queue.md)") { |v| ctx[:constraints_path] = v }
-          opts.on("--timing PATH", "path => seconds JSON; implies cost_binpack unless strategy is cost-based or hrw") do |v|
-            ctx[:timing_path] = v
-          end
-          opts.on("--timing-granularity VAL", "file (default) or example (experimental: path:line items)") do |v|
-            ctx[:timing_granularity] = v
-          end
+          plan_command_register_options!(opts, ctx)
         end.parse!(argv)
       end
 
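The refactor above works because `OptionParser` registrations are just method calls on the parser object, so two commands can share one registration helper. A minimal sketch of the pattern with two hypothetical flags:

```ruby
# Register a shared set of flags on any OptionParser instance; each
# command builds its own parser and calls this plus its own extras.
require "optparse"

def register_shared!(opts, ctx)
  opts.on("--total N", Integer) { |v| ctx[:total] = v }
  opts.on("--strategy NAME", String) { |v| ctx[:strategy] = v }
end

ctx = {}
OptionParser.new { |o| register_shared!(o, ctx) }
            .parse!(%w[--total 4 --strategy round_robin])
p ctx[:total]    # => 4
p ctx[:strategy] # => "round_robin"
```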
data/lib/polyrun/cli/prepare_command.rb
CHANGED
@@ -19,8 +19,8 @@ module Polyrun
       cfg = Polyrun::Config.load(path: config_path || ENV["POLYRUN_CONFIG"])
       prep = cfg.prepare
       recipe = prep["recipe"] || prep[:recipe] || "default"
-      prep_env = (prep
-      child_env = prep_env.empty? ? nil :
+      prep_env = Polyrun::Config::Resolver.prepare_env_yaml_string_map(prep)
+      child_env = prep_env.empty? ? nil : Polyrun::Config::Resolver.merged_prepare_env(prep)
       manifest = prepare_build_manifest(recipe, dry, prep_env)
 
       exit_code = prepare_dispatch_recipe(manifest, prep, recipe, dry, child_env)