polyrun 1.2.0 → 1.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +15 -0
- data/README.md +58 -0
- data/lib/polyrun/cli/ci_shard_hooks.rb +121 -0
- data/lib/polyrun/cli/ci_shard_run_command.rb +81 -2
- data/lib/polyrun/cli/ci_shard_run_parse.rb +68 -0
- data/lib/polyrun/cli/help.rb +5 -4
- data/lib/polyrun/cli/hooks_command.rb +97 -0
- data/lib/polyrun/cli/plan_command.rb +22 -12
- data/lib/polyrun/cli/queue_command.rb +46 -19
- data/lib/polyrun/cli/run_shards_command.rb +13 -2
- data/lib/polyrun/cli/run_shards_parallel_children.rb +92 -0
- data/lib/polyrun/cli/run_shards_run.rb +55 -63
- data/lib/polyrun/cli.rb +7 -1
- data/lib/polyrun/config/effective.rb +2 -1
- data/lib/polyrun/config/resolver.rb +8 -0
- data/lib/polyrun/config.rb +5 -0
- data/lib/polyrun/coverage/collector.rb +15 -9
- data/lib/polyrun/coverage/collector_finish.rb +2 -0
- data/lib/polyrun/coverage/collector_fragment_meta.rb +57 -0
- data/lib/polyrun/hooks/dsl.rb +128 -0
- data/lib/polyrun/hooks/worker_runner.rb +27 -0
- data/lib/polyrun/hooks/worker_shell.rb +50 -0
- data/lib/polyrun/hooks.rb +185 -0
- data/lib/polyrun/templates/ci_matrix.polyrun.yml +3 -2
- data/lib/polyrun/version.rb +1 -1
- data/lib/polyrun.rb +1 -0
- metadata +10 -1
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 04bfc3ed1a2c01072864dd179decce4d13c68221bba4da0964f4ea600401b57c
+  data.tar.gz: df02fc828fb8ac8c9cf3792ae08420ee87d328a89d79a7eeec26c9e239506ee4
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 378921ebc46b80562c5ae4bb529a3035eed7366cbfbcffd9a5c4bcb1d18f1c1fe6b737dfcc580c9b7e3fb1ce4360ac9ad04c7a7edbc50e4763a2e5430a61c873
+  data.tar.gz: 552545582ab8c34e1411834d5f5ef6d1b6f1d022ead68746059b709cd04d733c6d778b13621c003ec2f4525367bdb5e849577d109e2e04835fcf9b12407bf2ef
data/CHANGELOG.md
CHANGED

@@ -1,5 +1,20 @@
 # CHANGELOG
 
+## 1.4.0 (2026-04-16)
+
+- Add `hooks:` in `polyrun.yml` — shell commands for `before_suite` / `after_suite`, `before_shard` / `after_shard`, `before_worker` / `after_worker` (RSpec-style YAML keys `before(:suite)`, `before(:all)`, `before(:each)` accepted). Wire hooks into `run-shards`, `parallel-rspec`, and `ci-shard-*`.
+- Add `hooks.ruby` / `hooks.ruby_file` and `Polyrun::Hooks::Dsl` (`before(:suite)` … `after(:each)` blocks); worker Ruby hooks run in the child via `ruby -e` + `POLYRUN_HOOKS_RUBY_FILE`.
+- Add `polyrun hook run <phase>` (`--shard` / `--total` optional). Set `POLYRUN_HOOKS_DISABLE=1` to skip hooks during orchestration only; `hook run` still executes.
+- On `ci-shard-run` / `ci-shard-rspec`, skip automatic `before_suite` / `after_suite` when `POLYRUN_SHARD_TOTAL` > 1 (matrix); run suite hooks once via `polyrun hook run` or set `POLYRUN_HOOKS_SUITE_PER_MATRIX_JOB=1` to run them on every matrix job.
+- Document hook phases, matrix vs suite, and `after_shard` ordering in `README.md`; list `Polyrun::Hooks` and `Polyrun::Hooks::Dsl` in the library section.
+
+## 1.3.0 (2026-04-15)
+
+- Add safe parsing for `ci-shard-run` / `ci-shard-rspec` `--shard-processes` and `--workers` (warn + exit 2 on missing or non-integer values).
+- Fix `shard_child_env` when `matrix_total > 1` and `matrix_index` is nil: omit `POLYRUN_SHARD_MATRIX_*` and warn (avoid `Integer(nil)`).
+- Document in `polyrun help` that `POLYRUN_SHARD_PROCESSES` and ci-shard `--workers` / `--shard-processes` are local processes per matrix job, distinct from `POLYRUN_WORKERS` / `run-shards`.
+- BREAKING: Multi-worker shard runs may emit coverage JSON fragments whose basenames include `shard*` and `worker*` segments; `merge-coverage` still matches `polyrun-fragment-*.json`.
+
 ## 1.2.0 (2026-04-15)
 
 - Add `polyrun config <dotted.path>` to print values from `Polyrun::Config::Effective` (same effective tree as runtime: arbitrary YAML paths, merged `prepare.env.<KEY>` as for `polyrun prepare`, resolved `partition.shard_index`, `partition.shard_total`, `partition.timing_granularity`, and `workers`).
data/README.md
CHANGED

@@ -25,6 +25,61 @@ Capybara and Playwright stay in your application; Polyrun does not replace brows
 4. Run workers with `bin/polyrun run-shards --workers N -- bundle exec rspec`: N separate OS processes, each running RSpec with its own file list from `partition.paths_file`, or `spec/spec_paths.txt`, or else `spec/**/*_spec.rb`. Stderr shows where paths came from; after a successful multi-worker run it reminds you to run merge-coverage unless you use `parallel-rspec` or `run-shards --merge-coverage`.
 5. Merge artifacts with `bin/polyrun merge-coverage` on `coverage/polyrun-fragment-*.json` (one fragment per `POLYRUN_SHARD_INDEX` when coverage is on), or use `bin/polyrun parallel-rspec` or `run-shards --merge-coverage` so Polyrun runs merge for you. Optional: `merge-timing`, `report-timing`, `report-junit`.
 
+### Hooks (`hooks:` in `polyrun.yml`)
+
+Optional **shell** commands and/or a **Ruby DSL** file for instrumentation (telemetry, Slack, logging, manual debugging). Names mirror RSpec’s API (`before(:suite)`, `before(:all)`, `before(:each)`), but **Polyrun hooks are about process orchestration**, not RSpec example groups. Below, **suite / shard / worker** mean Polyrun’s model unless stated otherwise.
+
+#### What “suite”, “shard”, and “worker” mean
+
+| Term | Process | Meaning |
+|------|---------|---------|
+| **Suite** | **Parent** only | One **orchestration run** on a single machine: a single `polyrun run-shards` / `parallel-rspec` / `ci-shard-run` (with **one** global shard, see below). `before_suite` runs once before any worker is started; `after_suite` runs once after all workers have exited and (when used) merge-coverage has finished. This is **not** the same as “the whole RSpec suite in one process”—with `--workers N`, RSpec runs in **N separate processes**, each with its own examples. |
+| **Shard** | **Parent** only | One **partition** of the path list for this run, identified by `POLYRUN_SHARD_INDEX` / `POLYRUN_SHARD_TOTAL` **for that parallel layout** (0 … N−1 for `run-shards` with N workers). `before_shard` runs in the parent **immediately before** `Process.spawn` for that index; `after_shard` runs **after** that child has exited. The parent **waits for workers in shard index order** (0, then 1, …), so `after_shard` runs in that order—not in “who finished first” order if workers overlap in time. Empty partitions are skipped (no spawn, no hooks). |
+| **Worker** | **Child** OS process | The process that runs your command after `--` (e.g. `bundle exec rspec …`). `before_worker` / `after_worker` run **in that child**, directly before and after the test command. **Individual examples run only after `before_worker` completes** (inside the same process as RSpec/Minitest). |
+
+**CI matrix** (`POLYRUN_SHARD_TOTAL` > 1, one job per index): each job is a **global shard**, not a full “suite” in the pipeline sense. **`ci-shard-run` / `ci-shard-rspec` with one process per job do not run `before_suite` / `after_suite` automatically**—otherwise they would run once per matrix cell. Put pipeline-wide setup/teardown in a **separate CI step** (e.g. `bin/polyrun hook run before_suite` and `hook run after_suite` once), or set `POLYRUN_HOOKS_SUITE_PER_MATRIX_JOB=1` to restore the old behaviour (suite hooks on every matrix job). **`before_shard` / `after_shard` / worker hooks still run** per job, with `POLYRUN_SHARD_INDEX` / `POLYRUN_SHARD_TOTAL` set from the matrix. A **single** non-matrix `ci-shard-run` (`POLYRUN_SHARD_TOTAL` is 1) still runs suite hooks like `run-shards --workers 1`. Fan-out on one host (`--shard-processes` > 1) still runs **`before_suite` / `after_suite` once** for that job, around the local workers.
+
+#### Lifecycle (typical `run-shards` with multiple workers)
+
+```text
+[Parent] before_suite
+[Parent] for each shard index i that has paths:
+           before_shard(i) → spawn worker i
+[Child i]    before_worker → test runner starts → examples run → runner exits
+[Parent]   after_shard(i)  (parent waits for children in shard index order 0…N−1, not global finish order)
+[Parent] merge-coverage (if requested)
+[Parent] after_suite
+```
+
+`after_worker` runs in the child after the test command exits, before the parent’s `after_shard`.
+
+#### Order and priority within one phase
+
+1. **Ruby DSL, then shell (YAML)** — For the same phase (e.g. `before_suite`), all **Ruby** blocks from `hooks.ruby` run first, then every **shell** command from YAML for that phase.
+2. **Multiple Ruby blocks** — In one DSL file, registrations run in **source order** (e.g. two `before(:suite)` blocks run top to bottom).
+3. **Multiple shell commands** — Use a **YAML list**; entries run in list order:
+   ```yaml
+   before_suite:
+     - echo first
+     - echo second
+   ```
+4. **Duplicate YAML keys** — Do **not** repeat the same key (e.g. two `before_suite:` lines). Parsers may keep only one value; behaviour is undefined. Prefer a **list** under a single key.
+5. **Failure** — If any step in a phase fails (non-zero exit from shell; uncaught error in Ruby), orchestration stops or marks failure per existing `run-shards` rules; `after_worker` shell steps use `|| true` so a failing teardown does not mask the test exit code (Ruby `after_worker` is wrapped similarly in the worker script).
+
+Environment includes `POLYRUN_HOOK_PHASE`, `POLYRUN_HOOK=1`, `POLYRUN_HOOK_ORCHESTRATOR` (`1` in parent, `0` in workers), `POLYRUN_SHARD_*`, and `POLYRUN_SUITE_EXIT_STATUS` on `after_suite`. Worker children get `POLYRUN_HOOKS_RUBY_FILE` when using the Ruby DSL. Set `POLYRUN_HOOKS_DISABLE=1` to skip hooks during `run-shards` / `parallel-rspec` / `ci-shard-*` (orchestration only); `polyrun hook run` still executes hooks. **`POLYRUN_HOOKS_SUITE_PER_MATRIX_JOB=1`** — when set, run `before_suite` / `after_suite` on every CI matrix job (not recommended for expensive global setup).
+
+**YAML keys (shell) and RSpec-style names in YAML:**
+
+| YAML key | DSL in `hooks.ruby` | When |
+|----------|----------------------|------|
+| `before_suite` / `after_suite` | `before(:suite)` / `after(:suite)` | Parent: once per orchestration on one host; **skipped** for `ci-shard-run` when `POLYRUN_SHARD_TOTAL` > 1 unless `POLYRUN_HOOKS_SUITE_PER_MATRIX_JOB=1` (see matrix paragraph above) |
+| `before_shard` / `after_shard` | `before(:all)` / `after(:all)` | Parent: per shard index (spawn / after exit) |
+| `before_worker` / `after_worker` | `before(:each)` / `after(:each)` | Child: around the test command |
+
+**Ruby DSL (`hooks.ruby` or `hooks.ruby_file`):** path to a `.rb` file (relative to the project root). Blocks receive a `Hash` with **string keys** (same env as shell hooks). Worker-phase Ruby hooks run in the child via `ruby -e 'require "polyrun"; …'`.
+
+Run one phase by hand: `bin/polyrun hook run before_suite` (optional `--shard N --total M`). YAML may use quoted keys such as `"before(:suite)"` instead of `before_suite`.
+
 Quick CLI samples:
 
 If the current directory already has `polyrun.yml` or `config/polyrun.yml`, you can omit `-c` (same as `Config.load` default discovery). Pass `-c PATH` or set `POLYRUN_CONFIG` when the file lives elsewhere or uses another name.
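The registration-order rules above (multiple `before(:suite)` blocks run top to bottom; blocks receive a string-keyed env Hash) can be modelled in a few lines. This is an illustrative sketch of the described semantics, not `Polyrun::Hooks::Dsl`'s actual source; the class name `HooksDslSketch` is hypothetical.

```ruby
# Minimal sketch of RSpec-style hook registration with source-order execution.
class HooksDslSketch
  def initialize
    # One list of blocks per (kind, scope) pair, e.g. [:before, :suite].
    @blocks = Hash.new { |h, k| h[k] = [] }
  end

  def before(scope, &block)
    @blocks[[:before, scope]] << block
  end

  def after(scope, &block)
    @blocks[[:after, scope]] << block
  end

  # Runs every block registered for [kind, scope] in registration order,
  # passing the string-keyed env Hash through to each block.
  def run(kind, scope, env)
    @blocks[[kind, scope]].each { |blk| blk.call(env) }
  end
end

order = []
dsl = HooksDslSketch.new
dsl.before(:suite) { |env| order << "first #{env["POLYRUN_SHARD_TOTAL"]}" }
dsl.before(:suite) { |_env| order << "second" }
dsl.run(:before, :suite, { "POLYRUN_SHARD_TOTAL" => "4" })
# order is ["first 4", "second"]: two before(:suite) blocks, top to bottom
```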
@@ -41,6 +96,7 @@ bin/polyrun env --shard 0 --total 4 # print DATABASE_URL exports from polyrun.
 bin/polyrun init --list
 bin/polyrun init --profile gem -o polyrun.yml # starter YAML; see docs/SETUP_PROFILE.md
 bin/polyrun quick # Polyrun::Quick examples under spec/polyrun_quick/ or test/polyrun_quick/
+bin/polyrun hook run before_suite # run hooks.before_suite from polyrun.yml (manual / CI)
 ```
 
 ### Matrix shards and timing

@@ -87,6 +143,8 @@ That single require loads the CLI and core library **without** loading RSpec or
 | `Polyrun::Prepare::Assets` | Digest trees, marker file, `assets:precompile`. |
 | `Polyrun::Database::Shard` | Shard env map, `%{shard}` DB names, URL path suffix for `postgres://`, `mysql2://`, `mongodb://`, etc. |
 | `Polyrun::Database::UrlBuilder` | URLs from `polyrun.yml` `databases:` — nested blocks or `adapter:` for common Rails stacks (`postgresql`, `mysql`/`mysql2`, `trilogy`, `sqlserver`/`mssql`, `sqlite3`/`sqlite`, `mongodb`/`mongo`). |
+| `Polyrun::Hooks` | Load from `Config#hooks`; `run_phase` / `run_phase_if_enabled`; `build_worker_shell_script` wraps the worker command. |
+| `Polyrun::Hooks::Dsl` | Ruby hook file (`hooks.ruby`); `before(:suite)` / `after(:each)` etc. in `config/polyrun_hooks.rb` (see README). |
 
 ## Development
data/lib/polyrun/cli/ci_shard_hooks.rb
ADDED

@@ -0,0 +1,121 @@
+module Polyrun
+  class CLI
+    # Suite / shard / worker shell hooks for +ci-shard-run+ / +ci-shard-rspec+.
+    module CiShardHooks
+      private
+
+      # rubocop:disable Metrics/AbcSize -- suite hooks + spawn/wait + failure paths
+      def ci_shard_run_fanout!(ctx)
+        hook_cfg = Polyrun::Hooks.from_config(ctx[:cfg])
+        suite_started = false
+        exit_code = 1
+
+        begin
+          env_suite = ENV.to_h.merge(
+            "POLYRUN_HOOK_ORCHESTRATOR" => "1",
+            "POLYRUN_SHARD_TOTAL" => ctx[:workers].to_s
+          )
+          code = hook_cfg.run_phase_if_enabled(:before_suite, env_suite)
+          return code if code != 0
+
+          suite_started = true
+
+          pids, spawn_err = run_shards_spawn_workers(ctx, hook_cfg)
+          if spawn_err
+            exit_code = spawn_err
+            return spawn_err
+          end
+          return 1 if pids.empty?
+
+          run_shards_warn_interleaved(ctx[:parallel], pids.size)
+          shard_results, wait_hook_err = run_shards_wait_all_children(pids, hook_cfg, ctx)
+          failed = shard_results.reject { |r| r[:success] }.map { |r| r[:shard] }
+
+          if failed.any?
+            Polyrun::Log.warn "polyrun ci-shard: finished #{pids.size} worker(s) (some failed)"
+            run_shards_log_failed_reruns(failed, shard_results, ctx[:plan], ctx[:parallel], ctx[:workers], ctx[:cmd])
+            exit_code = 1
+            exit_code = 1 if wait_hook_err != 0
+            return exit_code
+          end
+
+          exit_code = (wait_hook_err == 0) ? 0 : 1
+          Polyrun::Log.warn "polyrun ci-shard: finished #{pids.size} worker(s) (exit 0)" if exit_code == 0
+          exit_code
+        ensure
+          if suite_started
+            env_after = ENV.to_h.merge(
+              "POLYRUN_HOOK_ORCHESTRATOR" => "1",
+              "POLYRUN_SHARD_TOTAL" => ctx[:workers].to_s,
+              "POLYRUN_SUITE_EXIT_STATUS" => exit_code.to_s
+            )
+            hook_cfg.run_phase_if_enabled(:after_suite, env_after)
+          end
+        end
+      end
+      # rubocop:enable Metrics/AbcSize
+
+      # One matrix shard, one OS process: same hook phases as +run-shards+ with +--workers 1+ (no +exec+ when hooks exist).
+      # rubocop:disable Metrics/AbcSize -- suite / shard / worker lifecycle
+      def ci_shard_run_single!(cmd, paths, cfg, pc, _config_path)
+        hook_cfg = Polyrun::Hooks.from_config(cfg)
+        if hook_cfg.empty? || Polyrun::Hooks.disabled?
+          exec(*cmd, *paths)
+        end
+
+        si = Polyrun::Config::Resolver.resolve_shard_index(pc)
+        st = Polyrun::Config::Resolver.resolve_shard_total(pc)
+        suite_started = false
+        exit_code = 1
+        # Distributed CI matrix (N > 1 global shards): each job is one shard; suite hooks are pipeline-wide.
+        # Run them once via +polyrun hook run before_suite+ / +after_suite+ (e.g. dedicated job), or set
+        # +POLYRUN_HOOKS_SUITE_PER_MATRIX_JOB=1+ to run suite hooks on every matrix job.
+        matrix_shards = st > 1
+        run_suite_hooks = !matrix_shards || Polyrun::Hooks.suite_per_matrix_job?
+
+        begin
+          env_orch = ENV.to_h.merge(
+            "POLYRUN_HOOK_ORCHESTRATOR" => "1",
+            "POLYRUN_SHARD_INDEX" => si.to_s,
+            "POLYRUN_SHARD_TOTAL" => st.to_s
+          )
+          if run_suite_hooks
+            code = hook_cfg.run_phase_if_enabled(:before_suite, env_orch)
+            return code if code != 0
+
+            suite_started = true
+          end
+
+          code = hook_cfg.run_phase_if_enabled(:before_shard, env_orch)
+          return code if code != 0
+
+          mx, mt = ci_shard_matrix_context(pc, 1)
+          child_env = shard_child_env(cfg: cfg, workers: 1, shard: 0, matrix_index: mx, matrix_total: mt)
+          child_env = child_env.merge("POLYRUN_HOOK_ORCHESTRATOR" => "0")
+          child_env = hook_cfg.merge_worker_ruby_env(child_env)
+
+          if hook_cfg.worker_hooks? && !Polyrun::Hooks.disabled?
+            system(child_env, "sh", "-c", hook_cfg.build_worker_shell_script(cmd, paths))
+          else
+            system(child_env, *cmd, *paths)
+          end
+          exit_code = $?.exitstatus
+
+          rc = hook_cfg.run_phase_if_enabled(:after_shard, env_orch.merge(
+            "POLYRUN_WORKER_EXIT_STATUS" => exit_code.to_s
+          ))
+          exit_code = rc if rc != 0
+
+          exit_code
+        ensure
+          if suite_started
+            hook_cfg.run_phase_if_enabled(:after_suite, env_orch.merge(
+              "POLYRUN_SUITE_EXIT_STATUS" => exit_code.to_s
+            ))
+          end
+        end
+      end
+      # rubocop:enable Metrics/AbcSize
+    end
+  end
+end
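The README's failure rule — `after_worker` teardown runs with `|| true` so it cannot mask the test exit code — suggests what a worker wrapper script might look like. A hedged sketch of the idea; `worker_script` is a hypothetical stand-in, not `build_worker_shell_script`'s actual output format:

```ruby
require "shellwords"

# Build a sh script that: runs before_worker commands (failure aborts, via set -e),
# captures the test command's exit status, runs after_worker teardown with
# `|| true` so a failing teardown cannot mask the test status, then exits with it.
def worker_script(before_cmds, test_cmd, after_cmds)
  lines = ["set -e"]
  before_cmds.each { |c| lines << c }
  lines << "set +e"                       # stop aborting: we want the test's own status
  lines << Shellwords.join(test_cmd)
  lines << "status=$?"
  after_cmds.each { |c| lines << "( #{c} ) || true" }
  lines << "exit $status"
  lines.join("\n")
end

script = worker_script(["echo setup"], %w[bundle exec rspec], ["echo teardown"])
```

Running `sh -c script` would then report the test command's exit code even when `echo teardown` were replaced by a failing command.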
data/lib/polyrun/cli/ci_shard_run_command.rb
CHANGED

@@ -1,14 +1,23 @@
 require "shellwords"
 
+require_relative "ci_shard_hooks"
+
 module Polyrun
   class CLI
     # One CI matrix job = one global shard (POLYRUN_SHARD_INDEX / POLYRUN_SHARD_TOTAL), not +run-shards+
     # workers on a single host. Runs +build-paths+, +plan+ for that shard, then +exec+ of a user command
     # with that shard's paths appended (same argv pattern as +run-shards+ after +--+).
     #
+    # With +--shard-processes M+ (or +partition.shard_processes+ / +POLYRUN_SHARD_PROCESSES+), fans out
+    # +M+ OS processes on this host, each running a subset of this shard's paths (NxM: +N+ matrix jobs × +M+
+    # processes). Child processes get local +POLYRUN_SHARD_INDEX+ / +POLYRUN_SHARD_TOTAL+ (+0..M-1+, +M+);
+    # when +N+ > 1, also +POLYRUN_SHARD_MATRIX_INDEX+ / +POLYRUN_SHARD_MATRIX_TOTAL+ for unique coverage fragments.
+    #
     # After +--+, prefer **multiple argv tokens** (+bundle+, +exec+, +rspec+, …). A single token that
     # contains spaces is split with +Shellwords+ (not a full shell); exotic quoting differs from +sh -c+.
     module CiShardRunCommand
+      include CiShardHooks
+
       private
 
       # @return [Array(Array<String>, Integer)] [paths, 0] on success, or [nil, exit_code] on failure

@@ -25,6 +34,42 @@ module Polyrun
         [paths, 0]
       end
 
+      def ci_shard_local_plan!(paths, workers)
+        Polyrun::Partition::Plan.new(
+          items: paths,
+          total_shards: workers,
+          strategy: "round_robin",
+          root: Dir.pwd
+        )
+      end
+
+      # When +N+ > 1 and +M+ > 1, pass matrix index/total for coverage fragment names; else nil (see +shard_child_env+).
+      def ci_shard_matrix_context(pc, shard_processes)
+        n = resolve_shard_total(pc)
+        return [nil, nil] if n <= 1 || shard_processes <= 1
+
+        [resolve_shard_index(pc), n]
+      end
+
+      def ci_shard_fanout_context(cfg:, pc:, paths:, shard_processes:, cmd:, config_path:)
+        plan = ci_shard_local_plan!(paths, shard_processes)
+        mx, mt = ci_shard_matrix_context(pc, shard_processes)
+        {
+          workers: shard_processes,
+          cmd: cmd,
+          cfg: cfg,
+          plan: plan,
+          run_t0: Process.clock_gettime(Process::CLOCK_MONOTONIC),
+          parallel: true,
+          merge_coverage: false,
+          merge_output: nil,
+          merge_format: nil,
+          config_path: config_path,
+          matrix_shard_index: mx,
+          matrix_shard_total: mt
+        }
+      end
+
       # Runner-agnostic matrix shard: +polyrun ci-shard-run [plan options] -- <command> [args...]+
       # Paths for this shard are appended after the command (like +run-shards+).
       def cmd_ci_shard_run(argv, config_path)

@@ -42,10 +87,26 @@ module Polyrun
         end
         cmd = Shellwords.split(cmd.first) if cmd.size == 1 && cmd.first.include?(" ")
 
+        cfg = Polyrun::Config.load(path: config_path || ENV["POLYRUN_CONFIG"])
+        pc = cfg.partition
+        shard_processes, perr = ci_shard_parse_shard_processes!(plan_argv, pc)
+        return perr if perr
+
+        shard_processes, err = ci_shard_normalize_shard_processes(shard_processes)
+        return err if err
+
         paths, code = ci_shard_planned_paths!(plan_argv, config_path, command_label: "ci-shard-run")
         return code if code != 0
 
-
+        if shard_processes <= 1
+          return ci_shard_run_single!(cmd, paths, cfg, pc, config_path)
+        end
+
+        ctx = ci_shard_fanout_context(
+          cfg: cfg, pc: pc, paths: paths, shard_processes: shard_processes, cmd: cmd, config_path: config_path
+        )
+        Polyrun::Log.warn "polyrun ci-shard-run: #{paths.size} path(s) → #{shard_processes} process(es) on this host (NxM: matrix jobs × local processes)"
+        ci_shard_run_fanout!(ctx)
       end
 
       # Same as +ci-shard-run -- bundle exec rspec+ with an optional second segment for RSpec-only flags:

@@ -55,10 +116,28 @@ module Polyrun
         plan_argv = sep ? argv[0...sep] : argv
         rspec_argv = sep ? argv[(sep + 1)..] : []
 
+        cfg = Polyrun::Config.load(path: config_path || ENV["POLYRUN_CONFIG"])
+        pc = cfg.partition
+        shard_processes, perr = ci_shard_parse_shard_processes!(plan_argv, pc)
+        return perr if perr
+
+        shard_processes, err = ci_shard_normalize_shard_processes(shard_processes)
+        return err if err
+
         paths, code = ci_shard_planned_paths!(plan_argv, config_path, command_label: "ci-shard-rspec")
         return code if code != 0
 
-
+        cmd = ["bundle", "exec", "rspec", *rspec_argv]
+
+        if shard_processes <= 1
+          return ci_shard_run_single!(cmd, paths, cfg, pc, config_path)
+        end
+
+        ctx = ci_shard_fanout_context(
+          cfg: cfg, pc: pc, paths: paths, shard_processes: shard_processes, cmd: cmd, config_path: config_path
+        )
+        Polyrun::Log.warn "polyrun ci-shard-rspec: #{paths.size} path(s) → #{shard_processes} process(es) on this host (NxM: matrix jobs × local processes)"
+        ci_shard_run_fanout!(ctx)
       end
     end
   end
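`ci_shard_local_plan!` above passes `strategy: "round_robin"` when splitting one matrix shard's paths across local processes. A plausible reading of that strategy (an assumption about `Polyrun::Partition::Plan`, not its source) is dealing paths to shards by index modulo the shard count:

```ruby
# Round-robin partition sketch: path i goes to shard i % total_shards,
# so shard counts differ by at most one path.
def round_robin_shards(paths, total_shards)
  shards = Array.new(total_shards) { [] }
  paths.each_with_index { |path, i| shards[i % total_shards] << path }
  shards
end

round_robin_shards(%w[a b c d e], 2)  # → [["a", "c", "e"], ["b", "d"]]
```

Note that with more shards than paths, trailing shards come back empty — matching the README's "empty partitions are skipped (no spawn, no hooks)".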
data/lib/polyrun/cli/ci_shard_run_parse.rb
ADDED

@@ -0,0 +1,68 @@
+module Polyrun
+  class CLI
+    # Parsing for +ci-shard-run+ / +ci-shard-rspec+ plan argv (+--shard-processes+, +--workers+).
+    module CiShardRunParse
+      private
+
+      # Strips +--shard-processes+ / +--workers+ from +plan_argv+ and returns +[count, exit_code]+.
+      # +exit_code+ is +nil+ on success, +2+ on invalid or missing integer (no exception).
+      # Does not use +OptionParser+ so +plan+ flags (+--shard+, +--total+, …) pass through unchanged.
+      # Note: +--workers+ here means processes for this matrix job (+POLYRUN_SHARD_PROCESSES+), not +run-shards+ +POLYRUN_WORKERS+.
+      def ci_shard_parse_shard_processes!(plan_argv, pc)
+        workers = Polyrun::Config::Resolver.resolve_shard_processes(pc)
+        rest = []
+        i = 0
+        while i < plan_argv.size
+          case plan_argv[i]
+          when "--shard-processes"
+            n, err = ci_shard_parse_positive_int_flag!(plan_argv, i, "--shard-processes")
+            return [nil, err] if err
+
+            workers = n
+            i += 2
+          when "--workers"
+            n, err = ci_shard_parse_positive_int_flag!(plan_argv, i, "--workers")
+            return [nil, err] if err
+
+            workers = n
+            i += 2
+          else
+            rest << plan_argv[i]
+            i += 1
+          end
+        end
+        plan_argv.replace(rest)
+        [workers, nil]
+      end
+
+      # @return [Array(Integer or nil, Integer or nil)] +[value, exit_code]+ — +exit_code+ is +nil+ on success, +2+ on error
+      def ci_shard_parse_positive_int_flag!(argv, i, flag_name)
+        arg = argv[i + 1]
+        if arg.nil?
+          Polyrun::Log.warn "polyrun ci-shard: missing value for #{flag_name}"
+          return [nil, 2]
+        end
+        n = Integer(arg, exception: false)
+        if n.nil?
+          Polyrun::Log.warn "polyrun ci-shard: #{flag_name} must be an integer (got #{arg.inspect})"
+          return [nil, 2]
+        end
+        [n, nil]
+      end
+
+      # @return [Array(Integer, Integer or nil)] +[capped_workers, exit_code]+ — +exit_code+ is +nil+ when OK
+      def ci_shard_normalize_shard_processes(workers)
+        if workers < 1
+          Polyrun::Log.warn "polyrun ci-shard: --shard-processes / --workers must be >= 1"
+          return [workers, 2]
+        end
+        w = workers
+        if w > Polyrun::Config::MAX_PARALLEL_WORKERS
+          Polyrun::Log.warn "polyrun ci-shard: capping --shard-processes / --workers from #{w} to #{Polyrun::Config::MAX_PARALLEL_WORKERS}"
+          w = Polyrun::Config::MAX_PARALLEL_WORKERS
+        end
+        [w, nil]
+      end
+    end
+  end
+end
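The warn-and-exit-2 parsing pattern above (`Integer(arg, exception: false)` instead of a raising `Integer()`) is easy to lift out on its own. A standalone sketch of the same validation steps; `parse_positive_int` is an illustrative name, not the gem's API, and it returns a message string where the gem logs a warning:

```ruby
# Validate one flag value the way the CHANGELOG describes: missing value,
# non-integer, and < 1 each produce an error instead of raising.
# Returns [value, nil] on success or [nil, error_message] on failure.
def parse_positive_int(arg, flag)
  return [nil, "missing value for #{flag}"] if arg.nil?

  n = Integer(arg, exception: false)   # nil instead of ArgumentError (Ruby >= 2.6)
  return [nil, "#{flag} must be an integer (got #{arg.inspect})"] if n.nil?
  return [nil, "#{flag} must be >= 1"] if n < 1

  [n, nil]
end
```

The `[value, error]` pair mirrors how the gem's helpers thread exit codes back to the caller without exceptions.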
data/lib/polyrun/cli/help.rb
CHANGED

@@ -21,7 +21,7 @@ module Polyrun
   Skip start auto-prepare / auto DB provision: POLYRUN_START_SKIP_PREPARE=1, POLYRUN_START_SKIP_DATABASES=1
   Skip writing paths_file from partition.paths_build: POLYRUN_SKIP_PATHS_BUILD=1
   Warn if merge-coverage wall time exceeds N seconds (default 10): POLYRUN_MERGE_SLOW_WARN_SECONDS (0 disables)
-  Parallel RSpec workers: POLYRUN_WORKERS default 5, max 10 (run-shards / parallel-rspec / start)
+  Parallel RSpec workers: POLYRUN_WORKERS default 5, max 10 (run-shards / parallel-rspec / start); distinct from POLYRUN_SHARD_PROCESSES / ci-shard --shard-processes (local processes per CI matrix job)
   Partition timing granularity (default file): POLYRUN_TIMING_GRANULARITY=file|example (experimental per-example; see partition.timing_granularity)
 
 commands:

@@ -32,12 +32,13 @@ module Polyrun
   run-shards       fan out N parallel OS processes (POLYRUN_SHARD_*; not Ruby threads); optional --merge-coverage
   parallel-rspec   run-shards + merge-coverage (defaults to: bundle exec rspec after --)
   start            parallel-rspec; auto-runs prepare (shell/assets) and db:setup-* when polyrun.yml configures them; legacy script/build_spec_paths.rb if paths_build absent
-  ci-shard-run     CI matrix: build-paths + plan for POLYRUN_SHARD_INDEX / POLYRUN_SHARD_TOTAL (or config), then run your command with that shard's paths after -- (
-  ci-shard-rspec   same as ci-shard-run -- bundle exec rspec; optional -- [rspec-only flags]
+  ci-shard-run     CI matrix: build-paths + plan for POLYRUN_SHARD_INDEX / POLYRUN_SHARD_TOTAL (or config), then run your command with that shard's paths after --; optional --shard-processes M or --workers M (POLYRUN_SHARD_PROCESSES; not POLYRUN_WORKERS) for N×M jobs × processes on this host
+  ci-shard-rspec   same as ci-shard-run -- bundle exec rspec; optional --shard-processes / --workers / -- [rspec-only flags]
   build-paths      write partition.paths_file from partition.paths_build (same as auto step before plan/run-shards)
   init             write a starter polyrun.yml or POLYRUN.md from built-in templates (see docs/SETUP_PROFILE.md)
-  queue            file-backed batch queue
+  queue            file-backed batch queue: init (optional --shard/--total etc. as plan, then claim/ack); M workers share one dir; no duplicate paths across claims
   quick            run Polyrun::Quick (describe/it, before/after, let, expect…to, assert_*; optional capybara!)
+  hook run <phase> run one shell hook from polyrun.yml hooks: (e.g. before_suite); optional --shard/--total
   report-coverage  write all coverage formats from one JSON file
   report-junit     RSpec JSON or Polyrun testcase JSON → JUnit XML (CI)
   report-timing    print slow-file summary from merged timing JSON
@@ -0,0 +1,97 @@
+module Polyrun
+  class CLI
+    # +polyrun hook run <phase>+ — run one lifecycle phase from +polyrun.yml+ +hooks:+ (manual debugging / CI).
+    module HooksCommand
+      private
+
+      def cmd_hook(argv, config_path)
+        sub = argv.shift
+        case sub
+        when "run"
+          cmd_hook_run(argv, config_path)
+        when nil, "help", "-h", "--help"
+          print_hook_help
+          0
+        else
+          Polyrun::Log.warn "polyrun hook: unknown subcommand #{sub.inspect} (try: polyrun hook run <phase>)"
+          print_hook_help
+          2
+        end
+      end
+
+      def cmd_hook_run(argv, config_path)
+        phase = argv.shift
+        if phase.nil? || phase == "-h" || phase == "--help"
+          print_hook_help
+          return 2
+        end
+
+        shard, total = hook_run_parse_shard_flags!(argv)
+
+        unless argv.empty?
+          Polyrun::Log.warn "polyrun hook run: unexpected arguments: #{argv.inspect}"
+          return 2
+        end
+
+        phase_sym = Polyrun::Hooks.parse_phase(phase)
+        unless phase_sym && Polyrun::Hooks::PHASES.include?(phase_sym)
+          Polyrun::Log.warn "polyrun hook run: unknown phase #{phase.inspect} (expected: #{Polyrun::Hooks::PHASES.join(", ")})"
+          return 2
+        end
+
+        cfg = Polyrun::Config.load(path: config_path || ENV["POLYRUN_CONFIG"])
+        hook_cfg = Polyrun::Hooks.from_config(cfg)
+        env = hook_run_env(shard, total)
+
+        hook_cfg.run_phase(phase_sym, env)
+      rescue ArgumentError => e
+        Polyrun::Log.warn "polyrun hook run: #{e.message}"
+        2
+      end
+
+      def hook_run_parse_shard_flags!(argv)
+        shard = nil
+        total = nil
+        while (a = argv.first)
+          case a
+          when "--shard"
+            argv.shift
+            shard = Integer(argv.shift || (raise ArgumentError, "--shard needs a value"))
+          when "--total"
+            argv.shift
+            total = Integer(argv.shift || (raise ArgumentError, "--total needs a value"))
+          else
+            break
+          end
+        end
+        [shard, total]
+      end
+
+      def hook_run_env(shard, total)
+        env = ENV.to_h.merge(
+          "POLYRUN_HOOK_ORCHESTRATOR" => "1",
+          "POLYRUN_HOOK_CLI" => "1"
+        )
+        env["POLYRUN_SHARD_INDEX"] = shard.to_s unless shard.nil?
+        env["POLYRUN_SHARD_TOTAL"] = total.to_s unless total.nil?
+        env
+      end
+
+      def print_hook_help
+        Polyrun::Log.puts <<~HELP
+          usage: polyrun hook run <phase> [--shard N] [--total M]
+
+          Runs hook(s) from polyrun.yml: Ruby DSL (+hooks.ruby+) then shell strings for <phase> (same names as RSpec lifecycle:
+          before_suite / after_suite as before(:suite) / after(:suite); before_shard / after_shard as
+          before(:all) / after(:all); before_worker / after_worker as before(:each) / after(:each)).
+
+          Phases: #{Polyrun::Hooks::PHASES.join(", ")}
+
+          Optional --shard / --total set POLYRUN_SHARD_INDEX / POLYRUN_SHARD_TOTAL for the hook process.
+          POLYRUN_HOOKS_DISABLE=1 skips hooks during run-shards / ci-shard only; polyrun hook run still executes.
+          For CI matrix (POLYRUN_SHARD_TOTAL > 1), ci-shard-run skips before_suite / after_suite unless POLYRUN_HOOKS_SUITE_PER_MATRIX_JOB=1; run those phases here or in a dedicated CI job.
+        HELP
+      end
+    end
+  end
+end
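The `hook run` command parses `--shard` / `--total` destructively, consuming matched flags from `argv` and stopping at the first non-flag token so leftovers can be rejected. A minimal standalone sketch of that parsing loop (the method name and sample arguments here are illustrative, not part of the gem's API):

```ruby
# Sketch of the --shard/--total loop from hook_run_parse_shard_flags!:
# matched flags and their values are shift-ed off argv; the first
# unrecognized token stops the loop and stays in argv.
def parse_shard_flags!(argv)
  shard = nil
  total = nil
  while (a = argv.first)
    case a
    when "--shard"
      argv.shift
      shard = Integer(argv.shift || (raise ArgumentError, "--shard needs a value"))
    when "--total"
      argv.shift
      total = Integer(argv.shift || (raise ArgumentError, "--total needs a value"))
    else
      break # leave non-flag arguments in argv for the caller to inspect
    end
  end
  [shard, total]
end

args = ["--shard", "2", "--total", "8", "spec/models"]
shard, total = parse_shard_flags!(args)
puts shard        # => 2
puts total        # => 8
puts args.inspect # => ["spec/models"] (flags were consumed)
```

Because `argv` is mutated in place, the caller's `unless argv.empty?` check afterwards is what turns any trailing token into a usage error.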
@@ -76,21 +76,31 @@ module Polyrun
        }
      end
 
+      # Partition flags shared by +polyrun plan+ and +queue init+ (excluding +--paths-file+, which each command registers once).
+      def plan_command_register_partition_options!(opts, ctx)
+        opts.on("--shard INDEX", Integer) { |v| ctx[:shard] = v }
+        opts.on("--total N", Integer) { |v| ctx[:total] = v }
+        opts.on("--strategy NAME", String) { |v| ctx[:strategy] = v }
+        opts.on("--seed VAL") { |v| ctx[:seed] = v }
+        opts.on("--constraints PATH", "YAML: pin / serial_glob (see spec_queue.md)") { |v| ctx[:constraints_path] = v }
+        opts.on("--timing PATH", "path => seconds JSON; implies cost_binpack unless strategy is cost-based or hrw") do |v|
+          ctx[:timing_path] = v
+        end
+        opts.on("--timing-granularity VAL", "file (default) or example (experimental: path:line items)") do |v|
+          ctx[:timing_granularity] = v
+        end
+      end
+
+      # Shared by +polyrun plan+ and +queue init+ so partition flags match +Partition::Plan+ / +plan+ JSON.
+      def plan_command_register_options!(opts, ctx)
+        opts.on("--paths-file PATH", String) { |v| ctx[:paths_file] = v }
+        plan_command_register_partition_options!(opts, ctx)
+      end
+
      def plan_command_parse_argv!(argv, ctx)
        OptionParser.new do |opts|
          opts.banner = "usage: polyrun plan [options] [--] [paths...]"
-          opts
-          opts.on("--total N", Integer) { |v| ctx[:total] = v }
-          opts.on("--strategy NAME", String) { |v| ctx[:strategy] = v }
-          opts.on("--seed VAL") { |v| ctx[:seed] = v }
-          opts.on("--paths-file PATH", String) { |v| ctx[:paths_file] = v }
-          opts.on("--constraints PATH", "YAML: pin / serial_glob (see spec_queue.md)") { |v| ctx[:constraints_path] = v }
-          opts.on("--timing PATH", "path => seconds JSON; implies cost_binpack unless strategy is cost-based or hrw") do |v|
-            ctx[:timing_path] = v
-          end
-          opts.on("--timing-granularity VAL", "file (default) or example (experimental: path:line items)") do |v|
-            ctx[:timing_granularity] = v
-          end
+          plan_command_register_options!(opts, ctx)
        end.parse!(argv)
      end
 
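The refactor above extracts the flag registration that `polyrun plan` and `queue init` share into helper methods that write into a `ctx` hash, so each command's `OptionParser` block shrinks to one call. A self-contained sketch of the pattern using stdlib `OptionParser` (trimmed to three flags; the surrounding CLI module and the remaining flags are omitted):

```ruby
require "optparse"

# Shared registration: one method adds the common partition flags to any
# parser, accumulating parsed values into the given ctx hash.
def register_partition_options!(opts, ctx)
  opts.on("--shard INDEX", Integer) { |v| ctx[:shard] = v }
  opts.on("--total N", Integer)     { |v| ctx[:total] = v }
  opts.on("--strategy NAME", String) { |v| ctx[:strategy] = v }
end

ctx = {}
OptionParser.new do |opts|
  opts.banner = "usage: polyrun plan [options] [--] [paths...]"
  opts.on("--paths-file PATH", String) { |v| ctx[:paths_file] = v }
  register_partition_options!(opts, ctx)
end.parse!(["--shard", "1", "--total", "4", "--strategy", "hrw"])

p ctx[:shard]    # 1 (Integer, coerced by OptionParser)
p ctx[:strategy] # "hrw"
```

Because each command builds its own parser and passes its own `ctx`, the shared flags stay identical across commands while per-command flags like `--paths-file` remain registered exactly once, as the diff's comment notes.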