polyrun 1.0.0 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +23 -0
- data/README.md +22 -2
- data/docs/SETUP_PROFILE.md +1 -1
- data/lib/polyrun/cli/ci_shard_run_command.rb +65 -0
- data/lib/polyrun/cli/helpers.rb +19 -3
- data/lib/polyrun/cli/plan_command.rb +47 -17
- data/lib/polyrun/cli/prepare_command.rb +0 -1
- data/lib/polyrun/cli/prepare_recipe.rb +12 -7
- data/lib/polyrun/cli/queue_command.rb +17 -7
- data/lib/polyrun/cli/run_shards_plan_boot_phases.rb +2 -2
- data/lib/polyrun/cli/run_shards_plan_options.rb +17 -10
- data/lib/polyrun/cli/run_shards_planning.rb +8 -4
- data/lib/polyrun/cli.rb +12 -0
- data/lib/polyrun/database/provision.rb +12 -7
- data/lib/polyrun/partition/constraints.rb +15 -4
- data/lib/polyrun/partition/plan.rb +38 -28
- data/lib/polyrun/partition/timing_keys.rb +85 -0
- data/lib/polyrun/prepare/assets.rb +12 -5
- data/lib/polyrun/process_stdio.rb +91 -0
- data/lib/polyrun/rspec.rb +19 -0
- data/lib/polyrun/templates/POLYRUN.md +1 -1
- data/lib/polyrun/templates/ci_matrix.polyrun.yml +4 -1
- data/lib/polyrun/timing/merge.rb +2 -1
- data/lib/polyrun/timing/rspec_example_formatter.rb +53 -0
- data/lib/polyrun/version.rb +1 -1
- data/polyrun.gemspec +1 -1
- data/sig/polyrun/rspec.rbs +2 -0
- metadata +6 -1
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 65a84fc362b402e23a550b5ea4f9980a5f6fc896fb5d1c445c3c8d2b22849604
+  data.tar.gz: 55c8baa0261b1e012c5b82592c82ae68409481eb7bee01503a0408d7b837fafe
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 50b5673497a1454363faf29781f9c5fc6f231cb5e8f85b570da61e036af3cb78450e2b6dc4854db1fe99528c292fece8f022c4e6cf69638bc8c09881fdb63a18
+  data.tar.gz: b10cf9ec6d80d0aeecfda0965ec6d7473d4c69a70a520bb3fb5b911679a0ba00d28be163f90458b7d4c3f5a62aa1de94739fb70811a5e82fe136651ca2c07c14
data/CHANGELOG.md
ADDED
@@ -0,0 +1,23 @@
+# CHANGELOG
+
+## 1.1.0 (2026-04-15)
+
+- Add `ci-shard-run` / `ci-shard-rspec` for matrix-style sharding (one job per `POLYRUN_SHARD_INDEX` / `POLYRUN_SHARD_TOTAL`): resolve paths via the same plan as `polyrun plan`, then `exec` the given command with this shard’s paths (unlike `run-shards`, which fans out multiple workers on one host).
+- Add experimental per-example partition timing: `partition.timing_granularity` / `--timing-granularity` (`file` default, `example` for `path:line` items), `POLYRUN_TIMING_GRANULARITY`, merged timing JSON with `absolute_path:line` keys, `TimingKeys.load_costs_json_file`, constraints matching pins/globs on the file part of locators, queue init support, and optional `Polyrun::Timing::RSpecExampleFormatter` plus `Polyrun::RSpec.install_example_timing!`.
+- Add `Polyrun::ProcessStdio.inherit_stdio_spawn_wait` and `spawn_wait` for subprocesses with inherited stdio (or temp-file capture when `silent: true`) to avoid Open3 pipe-thread noise on interrupt; used by prepare (shell / custom assets), `Prepare::Assets.precompile!`, and `Provision.prepare_template!` (`bin/rails db:prepare`). On failure, `db:prepare` / `assets:precompile` embed captured stdout/stderr in `Polyrun::Error` (truncated when huge).
+- Refactor `polyrun plan` around `plan_command_compute_manifest` and `plan_command_build_manifest`; `cmd_plan` output stays aligned with `plan_command_compute_manifest` (tests guard drift).
+- `TimingKeys.load_costs_json_file` accepts optional `root:` for key normalization; warns when two JSON keys normalize to the same entry with different seconds; `TimingKeys.canonical_file_path` / `normalize_locator` resolve directory symlinks so `/var/…` and `/private/var/…` (macOS) map to one key.
+- `Polyrun::RSpec.install_example_timing!(output_path:)` no longer sets `ENV` when an explicit path is passed; formatter uses `timing_output_path` (override or `ENV` / default filename).
+- Fix noisy `IOError` / broken-pipe behavior when interrupting long-running prepare / Rails subprocesses that previously used `Open3.capture3`.
+
+## 1.0.0 (2026-04-14)
+
+- Initial stable release of Polyrun: parallel tests, SimpleCov-compatible coverage formatters, fixtures/snapshots, assets and DB provisioning with zero runtime gem dependencies.
+- Add `polyrun` CLI with `plan`, `run-shards`, partition/load balancing, coverage merge and reporting, database helpers, queue helpers, and the Quick runner.
+- Add coverage Rake tasks and YAML configuration for merged / Cobertura-style output.
+- Add database command flows using `db:prepare` (replacing earlier `db:migrate`-only paths) for provisioning-style runs.
+- Implement graceful shutdown for worker processes in `run_shards` when the parent is interrupted.
+- Expand Quick runner defaults and parallel shard database creation.
+- Add examples tree, specs for merge and queue behavior, and docs for cwd-relative configuration.
+- Add RSpec suite, RuboCop (including a FileLength cop), YAML templates, and a `bin/release` script.
+- Add RBS signatures under `sig/` and validate them in CI; expand documentation and specs.
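The per-example timing entry above keys costs by `absolute_path:line` locators while constraints match pins/globs on the file part. A standalone sketch of splitting such a locator — `split_locator` is a hypothetical helper for illustration, not the gem's `TimingKeys` API:

```ruby
# Hypothetical helper: split a "path:line" locator such as
# "spec/models/user_spec.rb:42" into the file part (for pin/glob matching)
# and the line number (for per-example cost lookup).
def split_locator(locator)
  if locator =~ /\A(.+):(\d+)\z/
    [Regexp.last_match(1), Regexp.last_match(2).to_i]
  else
    [locator, nil] # plain file path: a file-granularity key
  end
end
```

Matching constraints against only the file part means a file-level pin still captures every example of that file, even when costs are tracked per example.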
data/README.md
CHANGED
@@ -8,7 +8,7 @@ Running tests in parallel across processes still requires a single merged covera
 
 Polyrun provides:
 
-- Orchestration: `plan`, `run-shards`, and `parallel-rspec` (run-shards plus merge-coverage), with an optional on-disk queue and constraints for file lists and load balancing.
+- Orchestration: `plan`, `run-shards`, and `parallel-rspec` (run-shards plus merge-coverage), with an optional on-disk queue and constraints for file lists and load balancing. For **GitHub Actions-style matrix sharding** (one job per global shard), use `ci-shard-run -- …` (any test runner) or `ci-shard-rspec`—not `run-shards` / `parallel-rspec`, which fan out N workers on one machine.
 - Coverage: merge SimpleCov-compatible JSON fragments; emit JSON, LCOV, Cobertura, or console summaries (you can drop separate SimpleCov merge plugins for this path).
 - CI reporting: JUnit XML from RSpec JSON; slow-file reports from merged timing JSON.
 - Parallel hygiene: asset digest markers, SQL snapshots, YAML fixture batches, and DB URL or shard helpers aligned with `POLYRUN_SHARD_*`.
@@ -32,6 +32,8 @@ If the current directory already has `polyrun.yml` or `config/polyrun.yml`, you
 ```bash
 bin/polyrun version
 bin/polyrun build-paths # write spec/spec_paths.txt from partition.paths_build (uses polyrun.yml in cwd)
+bin/polyrun ci-shard-run -- bundle exec rspec # CI matrix: shard plan + append paths to the command after --
+bin/polyrun ci-shard-rspec # same as ci-shard-run -- bundle exec rspec
 bin/polyrun parallel-rspec --workers 5 # run-shards + merge-coverage (default: bundle exec rspec)
 bin/polyrun run-shards --workers 5 --merge-coverage -- bundle exec rspec
 bin/polyrun merge-coverage -i cov1.json -i cov2.json -o merged.json --format json,lcov,cobertura,console
@@ -41,6 +43,12 @@ bin/polyrun init --profile gem -o polyrun.yml # starter YAML; see docs/SETUP_P
 bin/polyrun quick # Polyrun::Quick examples under spec/polyrun_quick/ or test/polyrun_quick/
 ```
 
+### Matrix shards and timing
+
+- `ci-shard-run` — Pass the command as separate words after `--` (e.g. `ci-shard-run -- bundle exec rspec`). One combined string with spaces is split via `Shellwords`, not a full shell; shell-only quoting does not apply.
+- Timing JSON — Run `plan`, `queue init`, and `merge-timing` from the same repository root (cwd) you use when producing `polyrun_timing.json` so path keys normalize consistently. `Polyrun::Partition::Plan.load_timing_costs` and `TimingKeys.load_costs_json_file` accept `root:` to align keys to a fixed directory.
+- Per-example timing (`--timing-granularity example`) — Experimental. Cost maps and plan items scale with example count, not file count; expect larger memory use and slower planning than file mode on big suites.
+
 ### Adopting Polyrun (setup profile and scaffolds)
 
 - [docs/SETUP_PROFILE.md](docs/SETUP_PROFILE.md) — Checklist for project type (gem, Rails, Appraisal), parallelism target (one CI job with N workers, matrix shards, or a single non-matrix runner), database layout, prepare, spec order, coverage, and CI model A (single runner with `parallel-rspec`) versus model B (matrix plus a merge-coverage job). Treat `polyrun.yml` as the contract; bin scripts and `database.yml` are adapters.
@@ -131,7 +139,19 @@ See [`examples/README.md`](examples/README.md) for Rails apps (Capybara, Playwri
 
 You can replace SimpleCov and simplecov plugins, parallel_tests, and rspec_junit_formatter with Polyrun for those roles. Use `merge-timing`, `report-timing`, and `Data::FactoryCounts` (optionally with `Data::FactoryInstrumentation`) for slow-file and factory metrics. YAML fixture batches and bulk inserts can use `Data::Fixtures` and `ParallelProvisioning` for shard-aware seeding; wire your own `truncate` and `load_seed` in hooks.
 
-
+## License
+
+Released under the [MIT License](LICENSE). Copyright (c) 2026 Andrei Makarov.
+
+## Contributing
+
+Bug reports and pull requests are welcome. See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, tests, RuboCop, RBS, optional Trunk, and PR conventions. Community participation follows [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md).
+
+## Security
+
+Do not open public issues for security vulnerabilities. See [SECURITY.md](SECURITY.md) for how to report them.
+
+## Sponsors
 
 Sponsored by [Kisko Labs](https://www.kiskolabs.com).
 
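The README's `ci-shard-run` note says a single space-containing token after `--` is split with `Shellwords`, not a full shell. The difference is easy to demonstrate with Ruby's standard library:

```ruby
require "shellwords"

# Shellwords tokenizes like POSIX shell word splitting, but performs no
# expansion: no variables, no globs, no redirections.
Shellwords.split("bundle exec rspec")
# => ["bundle", "exec", "rspec"]

# Quoted words stay together as one argv token:
Shellwords.split('rspec --tag "slow suite"')
# => ["rspec", "--tag", "slow suite"]
```

Because there is no expansion step, constructs like `$HOME` or `*.rb` pass through literally; prefer separate argv words after `--` when in doubt.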
data/docs/SETUP_PROFILE.md
CHANGED
@@ -81,7 +81,7 @@ Model A — one job, N worker processes
 Model B — matrix of jobs (one shard per job)
 
 - Each job sets `POLYRUN_SHARD_INDEX` / `POLYRUN_SHARD_TOTAL` (and DB URLs per shard if needed).
-- Run `polyrun
+- Run `polyrun ci-shard-run -- bundle exec rspec` (or `ci-shard-rspec`), or `ci-shard-run -- bundle exec polyrun quick` / other runners; or the same steps manually (`bin/rspec_ci_shard` wrappers).
 - Upload `coverage/polyrun-fragment-*.json` (or named per shard).
 - A final `merge-coverage` job downloads artifacts and merges.
 
data/lib/polyrun/cli/ci_shard_run_command.rb
ADDED
@@ -0,0 +1,65 @@
+require "shellwords"
+
+module Polyrun
+  class CLI
+    # One CI matrix job = one global shard (POLYRUN_SHARD_INDEX / POLYRUN_SHARD_TOTAL), not +run-shards+
+    # workers on a single host. Runs +build-paths+, +plan+ for that shard, then +exec+ of a user command
+    # with that shard's paths appended (same argv pattern as +run-shards+ after +--+).
+    #
+    # After +--+, prefer **multiple argv tokens** (+bundle+, +exec+, +rspec+, …). A single token that
+    # contains spaces is split with +Shellwords+ (not a full shell); exotic quoting differs from +sh -c+.
+    module CiShardRunCommand
+      private
+
+      # @return [Array(Array<String>, Integer)] [paths, 0] on success, or [nil, exit_code] on failure
+      def ci_shard_planned_paths!(plan_argv, config_path, command_label:)
+        manifest, code = plan_command_compute_manifest(plan_argv, config_path)
+        return [nil, code] if code != 0
+
+        paths = manifest["paths"] || []
+        if paths.empty?
+          Polyrun::Log.warn "polyrun #{command_label}: no paths for this shard (check shard/total and paths list)"
+          return [nil, 2]
+        end
+
+        [paths, 0]
+      end
+
+      # Runner-agnostic matrix shard: +polyrun ci-shard-run [plan options] -- <command> [args...]+
+      # Paths for this shard are appended after the command (like +run-shards+).
+      def cmd_ci_shard_run(argv, config_path)
+        sep = argv.index("--")
+        unless sep
+          Polyrun::Log.warn "polyrun ci-shard-run: need -- before the command (e.g. ci-shard-run -- bundle exec rspec)"
+          return 2
+        end
+
+        plan_argv = argv[0...sep]
+        cmd = argv[(sep + 1)..].map(&:to_s)
+        if cmd.empty?
+          Polyrun::Log.warn "polyrun ci-shard-run: empty command after --"
+          return 2
+        end
+        cmd = Shellwords.split(cmd.first) if cmd.size == 1 && cmd.first.include?(" ")
+
+        paths, code = ci_shard_planned_paths!(plan_argv, config_path, command_label: "ci-shard-run")
+        return code if code != 0
+
+        exec(*cmd, *paths)
+      end
+
+      # Same as +ci-shard-run -- bundle exec rspec+ with an optional second segment for RSpec-only flags:
+      # +polyrun ci-shard-rspec [plan options] [-- [rspec args]]+
+      def cmd_ci_shard_rspec(argv, config_path)
+        sep = argv.index("--")
+        plan_argv = sep ? argv[0...sep] : argv
+        rspec_argv = sep ? argv[(sep + 1)..] : []
+
+        paths, code = ci_shard_planned_paths!(plan_argv, config_path, command_label: "ci-shard-rspec")
+        return code if code != 0
+
+        exec("bundle", "exec", "rspec", *rspec_argv, *paths)
+      end
+    end
+  end
+end
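The argv handling in `cmd_ci_shard_run` can be exercised in isolation. A minimal sketch — `split_on_separator` is a standalone illustration, not the gem's API:

```ruby
# Split argv at "--": plan options on the left, the command to exec (with
# this shard's paths appended) on the right. Returns nil when "--" is
# absent, mirroring the error path in cmd_ci_shard_run.
def split_on_separator(argv)
  sep = argv.index("--")
  return nil unless sep
  [argv[0...sep], argv[(sep + 1)..]]
end

plan_argv, cmd = split_on_separator(["--timing", "t.json", "--", "bundle", "exec", "rspec"])
```

Keeping the separator mandatory avoids guessing where plan options end and the user's command begins.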
data/lib/polyrun/cli/helpers.rb
CHANGED
@@ -1,5 +1,7 @@
 require "yaml"
 
+require_relative "../partition/timing_keys"
+
 module Polyrun
   class CLI
     module Helpers
@@ -95,9 +97,15 @@ module Polyrun
 
       # +default_weight+ should be precomputed when sorting many paths (e.g. +queue init+), matching
      # {Partition::Plan#default_weight} semantics: mean of known timing costs for missing paths.
-      def queue_weight_for(path, costs, default_weight = nil)
-
-
+      def queue_weight_for(path, costs, default_weight = nil, granularity: :file)
+        g = Polyrun::Partition::TimingKeys.normalize_granularity(granularity)
+        key =
+          if g == :example
+            Polyrun::Partition::TimingKeys.normalize_locator(path.to_s, Dir.pwd, :example)
+          else
+            File.expand_path(path.to_s, Dir.pwd)
+          end
+        return costs[key] if costs.key?(key)
 
         unless default_weight.nil?
           return default_weight
@@ -108,6 +116,14 @@ module Polyrun
 
         vals.sum / vals.size.to_f
       end
+
+      # CLI + polyrun.yml + POLYRUN_TIMING_GRANULARITY; default +:file+.
+      def resolve_partition_timing_granularity(pc, cli_val)
+        raw = cli_val
+        raw ||= pc && (pc["timing_granularity"] || pc[:timing_granularity])
+        raw ||= ENV["POLYRUN_TIMING_GRANULARITY"]
+        Polyrun::Partition::TimingKeys.normalize_granularity(raw || "file")
+      end
     end
   end
 end
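The default-weight semantics documented in `queue_weight_for` (paths missing from the timing map get the mean of the known costs) can be sketched standalone — hypothetical names, not the gem's internals:

```ruby
# Untimed paths get the mean of the known costs, so they neither dominate
# nor vanish when ordering heaviest-first for the queue.
def weight_for(path, costs, default_weight)
  costs.fetch(path, default_weight)
end

costs = { "a_spec.rb" => 2.0, "b_spec.rb" => 4.0 }
default = costs.values.sum / costs.size.to_f # mean of known costs: 3.0

# Heaviest first, with the path itself as a deterministic tie-breaker:
ordered = ["a_spec.rb", "b_spec.rb", "c_spec.rb"]
          .sort_by { |p| [-weight_for(p, costs, default), p] }
# => ["b_spec.rb", "c_spec.rb", "a_spec.rb"]
```

Precomputing `default` once matters when sorting many paths: recomputing the mean per comparison would be quadratic in the cost-map size.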
data/lib/polyrun/cli/plan_command.rb
CHANGED
@@ -7,6 +7,15 @@ module Polyrun
      private
 
      def cmd_plan(argv, config_path)
+        manifest, code = plan_command_compute_manifest(argv, config_path)
+        return code if code != 0
+
+        Polyrun::Log.puts JSON.generate(manifest)
+        0
+      end
+
+      # @return [Array(Hash, Integer)] manifest hash and exit code (+0+ on success, non-zero on failure)
+      def plan_command_compute_manifest(argv, config_path)
        cfg = Polyrun::Config.load(path: config_path || ENV["POLYRUN_CONFIG"])
        pc = cfg.partition
        ctx = plan_command_initial_context(pc)
@@ -14,30 +23,44 @@ module Polyrun
 
        paths_file = ctx[:paths_file] || (pc["paths_file"] || pc[:paths_file])
        code = Polyrun::Partition::PathsBuild.apply!(partition: pc, cwd: Dir.pwd)
-        return code if code != 0
+        return [nil, code] if code != 0
 
+        plan_command_manifest_from_paths(cfg, pc, argv, ctx, paths_file)
+      end
+
+      def plan_command_manifest_from_paths(cfg, pc, argv, ctx, paths_file)
        timing_path = plan_resolve_timing_path(pc, ctx[:timing_path], ctx[:strategy])
+        ctx[:timing_granularity] = resolve_partition_timing_granularity(pc, ctx[:timing_granularity])
        Polyrun::Log.warn "polyrun plan: using #{cfg.path}" if @verbose && cfg.path
 
-
-        return 2 if
-
-        loaded = plan_load_costs_and_strategy(timing_path, ctx[:strategy])
-        return 2 if loaded.nil?
-
-        costs, strategy = loaded
+        bundle = plan_command_items_costs_strategy(paths_file, argv, timing_path, ctx)
+        return [nil, 2] if bundle.nil?
 
+        items, costs, strategy = bundle
        constraints = load_partition_constraints(pc, ctx[:constraints_path])
 
-
+        manifest = plan_command_build_manifest(
          items: items,
          total: ctx[:total],
          strategy: strategy,
          seed: ctx[:seed],
          costs: costs,
          constraints: constraints,
-          shard: ctx[:shard]
+          shard: ctx[:shard],
+          timing_granularity: ctx[:timing_granularity]
        )
+        [manifest, 0]
+      end
+
+      def plan_command_items_costs_strategy(paths_file, argv, timing_path, ctx)
+        items = plan_plan_items(paths_file, argv)
+        return nil if items.nil?
+
+        loaded = plan_load_costs_and_strategy(timing_path, ctx[:strategy], ctx[:timing_granularity])
+        return nil if loaded.nil?
+
+        costs, strategy = loaded
+        [items, costs, strategy]
      end
 
      def plan_command_initial_context(pc)
@@ -48,7 +71,8 @@ module Polyrun
          seed: pc["seed"] || pc[:seed],
          paths_file: nil,
          timing_path: nil,
-          constraints_path: nil
+          constraints_path: nil,
+          timing_granularity: nil
        }
      end
 
@@ -64,10 +88,13 @@ module Polyrun
          opts.on("--timing PATH", "path => seconds JSON; implies cost_binpack unless strategy is cost-based or hrw") do |v|
            ctx[:timing_path] = v
          end
+          opts.on("--timing-granularity VAL", "file (default) or example (experimental: path:line items)") do |v|
+            ctx[:timing_granularity] = v
+          end
        end.parse!(argv)
      end
 
-      def
+      def plan_command_build_manifest(items:, total:, strategy:, seed:, costs:, constraints:, shard:, timing_granularity: :file)
        plan = Polyrun::Debug.time("Partition::Plan.new (plan command)") do
          Polyrun::Partition::Plan.new(
            items: items,
@@ -76,7 +103,8 @@ module Polyrun
            seed: seed,
            costs: costs,
            constraints: constraints,
-            root: Dir.pwd
+            root: Dir.pwd,
+            timing_granularity: timing_granularity
          )
        end
        Polyrun::Debug.log_kv(
@@ -86,8 +114,7 @@ module Polyrun
          strategy: strategy,
          path_count: items.size
        )
-
-        0
+        plan.manifest(shard)
      end
 
      def plan_resolve_timing_path(pc, timing_path, strategy)
@@ -110,9 +137,12 @@ module Polyrun
        end
      end
 
-      def plan_load_costs_and_strategy(timing_path, strategy)
+      def plan_load_costs_and_strategy(timing_path, strategy, timing_granularity)
        if timing_path
-          costs = Polyrun::Partition::Plan.load_timing_costs(
+          costs = Polyrun::Partition::Plan.load_timing_costs(
+            File.expand_path(timing_path.to_s, Dir.pwd),
+            granularity: timing_granularity
+          )
          if costs.empty?
            Polyrun::Log.warn "polyrun plan: timing file missing or has no entries: #{timing_path}"
            return nil
data/lib/polyrun/cli/prepare_recipe.rb
CHANGED
@@ -1,4 +1,4 @@
-
+require_relative "../process_stdio"
 
 module Polyrun
   class CLI
@@ -22,8 +22,7 @@ module Polyrun
          return [manifest, nil]
        end
        if custom && !custom.to_s.strip.empty?
-
-          prepare_log_stderr(err)
+          st = prepare_run_shell_inherit_stdio(child_env, custom.to_s, rails_root, silent: !@verbose)
          unless st.success?
            Polyrun::Log.warn "polyrun prepare: assets custom command failed (exit #{st.exitstatus})"
            return [manifest, 1]
@@ -59,8 +58,7 @@ module Polyrun
          return [manifest, nil]
        end
        lines.each_with_index do |line, i|
-
-          prepare_log_stderr(err)
+          st = prepare_run_shell_inherit_stdio(child_env, line, rails_root, silent: !@verbose)
          unless st.success?
            Polyrun::Log.warn "polyrun prepare: shell step #{i + 1} failed (exit #{st.exitstatus})"
            return [manifest, 1]
@@ -69,8 +67,15 @@ module Polyrun
        [manifest, nil]
      end
 
-      def
-        Polyrun::
+      def prepare_run_shell_inherit_stdio(child_env, script, rails_root, silent: false)
+        Polyrun::ProcessStdio.inherit_stdio_spawn_wait(
+          child_env,
+          "sh",
+          "-c",
+          script.to_s,
+          chdir: rails_root,
+          silent: silent
+        )
      end
    end
  end
data/lib/polyrun/cli/queue_command.rb
CHANGED
@@ -11,6 +11,7 @@ module Polyrun
        dir = ".polyrun-queue"
        paths_file = nil
        timing_path = nil
+        timing_granularity = nil
        worker = ENV["USER"] || "worker"
        batch = 5
        lease_id = nil
@@ -19,7 +20,7 @@ module Polyrun
        Polyrun::Debug.log("queue: subcommand=#{sub.inspect}")
        case sub
        when "init"
-          queue_cmd_init(argv, dir, paths_file, timing_path)
+          queue_cmd_init(argv, dir, paths_file, timing_path, timing_granularity)
        when "claim"
          queue_cmd_claim(argv, dir, worker, batch)
        when "ack"
@@ -32,29 +33,38 @@ module Polyrun
        end
      end
 
-      def queue_cmd_init(argv, dir, paths_file, timing_path)
+      def queue_cmd_init(argv, dir, paths_file, timing_path, timing_granularity)
        OptionParser.new do |opts|
-          opts.banner = "usage: polyrun queue init --paths-file P [--timing PATH] [--dir DIR]"
+          opts.banner = "usage: polyrun queue init --paths-file P [--timing PATH] [--timing-granularity VAL] [--dir DIR]"
          opts.on("--dir PATH") { |v| dir = v }
          opts.on("--paths-file PATH") { |v| paths_file = v }
          opts.on("--timing PATH") { |v| timing_path = v }
+          opts.on("--timing-granularity VAL") { |v| timing_granularity = v }
        end.parse!(argv)
        unless paths_file
          Polyrun::Log.warn "queue init: need --paths-file"
          return 2
        end
+        cfg = Polyrun::Config.load(path: ENV["POLYRUN_CONFIG"])
+        g = resolve_partition_timing_granularity(cfg.partition, timing_granularity)
        items = Polyrun::Partition::Paths.read_lines(paths_file)
-        costs =
-
+        costs =
+          if timing_path
+            Polyrun::Partition::Plan.load_timing_costs(
+              File.expand_path(timing_path, Dir.pwd),
+              granularity: g
+            )
+          end
+        ordered = queue_init_ordered_items(items, costs, g)
        Polyrun::Queue::FileStore.new(dir).init!(ordered)
        Polyrun::Log.puts JSON.generate({"dir" => File.expand_path(dir), "count" => ordered.size})
        0
      end
 
-      def queue_init_ordered_items(items, costs)
+      def queue_init_ordered_items(items, costs, granularity = :file)
        if costs && !costs.empty?
          dw = costs.values.sum / costs.size.to_f
-          items.sort_by { |p| [-queue_weight_for(p, costs, dw), p] }
+          items.sort_by { |p| [-queue_weight_for(p, costs, dw, granularity: granularity), p] }
        else
          items.sort
        end
data/lib/polyrun/cli/run_shards_plan_boot_phases.rb
CHANGED
@@ -28,13 +28,13 @@ module Polyrun
        items, paths_source, err = run_shards_resolve_items(o[:paths_file])
        return [err, nil] if err
 
-        costs, strategy, err = run_shards_resolve_costs(o[:timing_path], o[:strategy])
+        costs, strategy, err = run_shards_resolve_costs(o[:timing_path], o[:strategy], o[:timing_granularity])
        return [err, nil] if err
 
        run_shards_plan_ready_log(o, strategy, cmd, paths_source, items.size)
 
        constraints = load_partition_constraints(pc, o[:constraints_path])
-        plan = run_shards_make_plan(items, o[:workers], strategy, o[:seed], costs, constraints)
+        plan = run_shards_make_plan(items, o[:workers], strategy, o[:seed], costs, constraints, o[:timing_granularity])
 
        run_shards_debug_shard_sizes(plan, o[:workers])
        Polyrun::Log.warn "polyrun run-shards: #{items.size} paths → #{o[:workers]} workers (#{strategy})" if @verbose
data/lib/polyrun/cli/run_shards_plan_options.rb
CHANGED
@@ -9,6 +9,7 @@ module Polyrun
        st = run_shards_plan_options_state(pc)
        run_shards_plan_options_parse!(head, st)
        st[:paths_file] ||= pc["paths_file"] || pc[:paths_file]
+        st[:timing_granularity] = resolve_partition_timing_granularity(pc, st[:timing_granularity])
        st
      end
 
@@ -20,6 +21,7 @@ module Polyrun
          seed: pc["seed"] || pc[:seed],
          timing_path: nil,
          constraints_path: nil,
+          timing_granularity: nil,
          merge_coverage: false,
          merge_output: nil,
          merge_format: nil
@@ -28,18 +30,23 @@ module Polyrun
 
      def run_shards_plan_options_parse!(head, st)
        OptionParser.new do |opts|
-          opts
-          opts.on("--workers N", Integer) { |v| st[:workers] = v }
-          opts.on("--strategy NAME", String) { |v| st[:strategy] = v }
-          opts.on("--seed VAL") { |v| st[:seed] = v }
-          opts.on("--paths-file PATH", String) { |v| st[:paths_file] = v }
-          opts.on("--constraints PATH", String) { |v| st[:constraints_path] = v }
-          opts.on("--timing PATH", "merged polyrun_timing.json; implies cost_binpack unless hrw/cost") { |v| st[:timing_path] = v }
-          opts.on("--merge-coverage", "After success, merge coverage/polyrun-fragment-*.json (Polyrun coverage must be enabled)") { st[:merge_coverage] = true }
-          opts.on("--merge-output PATH", String) { |v| st[:merge_output] = v }
-          opts.on("--merge-format LIST", String) { |v| st[:merge_format] = v }
+          run_shards_plan_options_register!(opts, st)
        end.parse!(head)
      end
+
+      def run_shards_plan_options_register!(opts, st)
+        opts.banner = "usage: polyrun run-shards [--workers N] [--strategy NAME] [--paths-file P] [--timing P] [--timing-granularity VAL] [--constraints P] [--seed S] [--merge-coverage] [--merge-output P] [--merge-format LIST] [--] <command> [args...]"
+        opts.on("--workers N", Integer) { |v| st[:workers] = v }
+        opts.on("--strategy NAME", String) { |v| st[:strategy] = v }
+        opts.on("--seed VAL") { |v| st[:seed] = v }
+        opts.on("--paths-file PATH", String) { |v| st[:paths_file] = v }
+        opts.on("--constraints PATH", String) { |v| st[:constraints_path] = v }
+        opts.on("--timing PATH", "merged polyrun_timing.json; implies cost_binpack unless hrw/cost") { |v| st[:timing_path] = v }
+        opts.on("--timing-granularity VAL", "file (default) or example (experimental)") { |v| st[:timing_granularity] = v }
+        opts.on("--merge-coverage", "After success, merge coverage/polyrun-fragment-*.json (Polyrun coverage must be enabled)") { st[:merge_coverage] = true }
+        opts.on("--merge-output PATH", String) { |v| st[:merge_output] = v }
+        opts.on("--merge-format LIST", String) { |v| st[:merge_format] = v }
+      end
    end
  end
end
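The extraction of `run_shards_plan_options_register!` keeps the `OptionParser` wiring separate from the parse call. The same pattern in miniature — hypothetical flags and a plain state hash, not the gem's option set:

```ruby
require "optparse"

# A helper registers flags into a shared state hash; the caller only
# builds the parser and invokes parse!.
def register_options!(opts, st)
  opts.on("--workers N", Integer) { |v| st[:workers] = v }
  opts.on("--timing-granularity VAL") { |v| st[:timing_granularity] = v }
end

st = {}
OptionParser.new { |opts| register_options!(opts, st) }
            .parse!(["--workers", "4", "--timing-granularity", "example"])
# st => { workers: 4, timing_granularity: "example" }
```

Splitting registration out this way also keeps each method small, which matters under a file/method-length cop like the one the changelog mentions.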
data/lib/polyrun/cli/run_shards_planning.rb
CHANGED
@@ -70,9 +70,12 @@ module Polyrun
        [items, paths_source, nil]
      end
 
-      def run_shards_resolve_costs(timing_path, strategy)
+      def run_shards_resolve_costs(timing_path, strategy, timing_granularity)
        if timing_path
-          costs = Polyrun::Partition::Plan.load_timing_costs(
+          costs = Polyrun::Partition::Plan.load_timing_costs(
+            File.expand_path(timing_path.to_s, Dir.pwd),
+            granularity: timing_granularity
+          )
          if costs.empty?
            Polyrun::Log.warn "polyrun run-shards: timing file missing or empty: #{timing_path}"
            return [nil, nil, 2]
@@ -90,7 +93,7 @@ module Polyrun
        end
      end
 
-      def run_shards_make_plan(items, workers, strategy, seed, costs, constraints)
+      def run_shards_make_plan(items, workers, strategy, seed, costs, constraints, timing_granularity)
        Polyrun::Debug.time("Partition::Plan.new (partition #{items.size} paths → #{workers} shards)") do
          Polyrun::Partition::Plan.new(
            items: items,
@@ -99,7 +102,8 @@ module Polyrun
            seed: seed,
            costs: costs,
            constraints: constraints,
-            root: Dir.pwd
+            root: Dir.pwd,
+            timing_granularity: timing_granularity
          )
        end
      end
data/lib/polyrun/cli.rb
CHANGED
@@ -12,9 +12,15 @@ require_relative "cli/queue_command"
 require_relative "cli/timing_command"
 require_relative "cli/init_command"
 require_relative "cli/quick_command"
+require_relative "cli/ci_shard_run_command"

 module Polyrun
   class CLI
+    CI_SHARD_COMMANDS = {
+      "ci-shard-run" => :cmd_ci_shard_run,
+      "ci-shard-rspec" => :cmd_ci_shard_rspec
+    }.freeze
+
     include Helpers
     include PlanCommand
     include PrepareCommand
@@ -27,6 +33,7 @@ module Polyrun
     include TimingCommand
     include InitCommand
     include QuickCommand
+    include CiShardRunCommand

     def self.run(argv = ARGV)
       new.run(argv)
@@ -121,6 +128,8 @@ module Polyrun
       cmd_start(argv, config_path)
     when "build-paths"
       cmd_build_paths(config_path)
+    when *CI_SHARD_COMMANDS.keys
+      send(CI_SHARD_COMMANDS.fetch(command), argv, config_path)
     when "init"
       cmd_init(argv, config_path)
     when "queue"
@@ -152,6 +161,7 @@ module Polyrun
   Skip writing paths_file from partition.paths_build: POLYRUN_SKIP_PATHS_BUILD=1
   Warn if merge-coverage wall time exceeds N seconds (default 10): POLYRUN_MERGE_SLOW_WARN_SECONDS (0 disables)
   Parallel RSpec workers: POLYRUN_WORKERS default 5, max 10 (run-shards / parallel-rspec / start)
+  Partition timing granularity (default file): POLYRUN_TIMING_GRANULARITY=file|example (experimental per-example; see partition.timing_granularity)

 commands:
   version print version
@@ -161,6 +171,8 @@ module Polyrun
   run-shards fan out N parallel OS processes (POLYRUN_SHARD_*; not Ruby threads); optional --merge-coverage
   parallel-rspec run-shards + merge-coverage (defaults to: bundle exec rspec after --)
   start parallel-rspec; auto-runs prepare (shell/assets) and db:setup-* when polyrun.yml configures them; legacy script/build_spec_paths.rb if paths_build absent
+  ci-shard-run CI matrix: build-paths + plan for POLYRUN_SHARD_INDEX / POLYRUN_SHARD_TOTAL (or config), then run your command with that shard's paths after -- (like run-shards; not multi-worker)
+  ci-shard-rspec same as ci-shard-run -- bundle exec rspec; optional -- [rspec-only flags]
   build-paths write partition.paths_file from partition.paths_build (same as auto step before plan/run-shards)
   init write a starter polyrun.yml or POLYRUN.md from built-in templates (see docs/SETUP_PROFILE.md)
   queue file-backed batch queue (init / claim / ack / status)
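The `CI_SHARD_COMMANDS` hash plus `when *CI_SHARD_COMMANDS.keys` and `send` above is a table-driven dispatch pattern. A minimal standalone sketch of the same idea (the class and method names here are illustrative, not the gem's):

```ruby
# Table-driven command dispatch: a frozen name => method-symbol map,
# a splatted `when` over its keys, and `send` to invoke the handler.
COMMANDS = {
  "ci-shard-run" => :cmd_run,
  "ci-shard-rspec" => :cmd_rspec
}.freeze

class MiniCLI
  def cmd_run(argv) = "run #{argv.join(' ')}"
  def cmd_rspec(argv) = "rspec #{argv.join(' ')}"

  def dispatch(command, argv)
    case command
    when *COMMANDS.keys
      send(COMMANDS.fetch(command), argv)
    else
      "unknown command: #{command}"
    end
  end
end

puts MiniCLI.new.dispatch("ci-shard-run", ["--merge-coverage"])
```

Keeping the map `.freeze`-d and using `fetch` means an unregistered name fails loudly instead of silently dispatching to `nil`.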
data/lib/polyrun/database/provision.rb
CHANGED
@@ -1,5 +1,6 @@
 require "open3"
 require "shellwords"
+require_relative "../process_stdio"

 module Polyrun
   module Database
@@ -49,20 +50,24 @@ module Polyrun
     # Multi-DB Rails apps must pass all template URLs in one invocation so each DB uses its own +migrations_paths+.
     # Uses +db:prepare+ (not +db:migrate+ alone) so empty template databases load +schema.rb+ first;
     # apps that squash or archive migrations and keep only incremental files need that path.
+    #
+    # Streams stdout/stderr to the terminal by default. With +silent: true+, redirects child stdio
+    # to +File::NULL+ (no live output; non-interactive).
     def prepare_template!(rails_root:, env:, silent: true)
       exe = File.join(rails_root, "bin", "rails")
       raise Polyrun::Error, "Provision: missing #{exe}" unless File.executable?(exe)

       child_env = ENV.to_h.merge(env)
       child_env["RAILS_ENV"] ||= ENV["RAILS_ENV"] || "test"
-
-
+      st, out, err = Polyrun::ProcessStdio.spawn_wait(
+        child_env,
+        exe,
+        "db:prepare",
+        chdir: rails_root,
+        silent: silent
+      )
       unless st.success?
-
-        msg << "\n--- stderr ---\n#{err}" unless err.to_s.strip.empty?
-        # Rails often prints the first migration/SQL error on stdout; stderr may only show InFailedSqlTransaction.
-        msg << "\n--- stdout ---\n#{rails_out}" unless rails_out.to_s.strip.empty?
-        raise Polyrun::Error, msg
+        raise Polyrun::Error, Polyrun::ProcessStdio.format_failure_message("db:prepare", st, out, err)
       end

       true
data/lib/polyrun/partition/constraints.rb
CHANGED
@@ -1,3 +1,5 @@
+require_relative "timing_keys"
+
 module Polyrun
   module Partition
     # Hard constraints for plan assignment (spec_queue.md): pins, serial globs.
@@ -31,21 +33,30 @@ module Polyrun
     end

     # Returns Integer shard index if constrained, or nil if free to place by LPT/HRW.
+    # For +path:line+ items (example granularity), also matches pins/globs against the file path only.
     def forced_shard_for(path)
       rel = path.to_s
       abs = File.expand_path(rel, @root)
+      variants = [rel, abs]
+      if (fp = TimingKeys.file_part_for_constraint(rel))
+        variants << fp
+        variants << File.expand_path(fp, @root)
+      end
+      variants.uniq!

       @pin_map.each do |pattern, shard|
         next if pattern.to_s.empty?

-
-
+        variants.each do |rel_i|
+          abs_i = File.expand_path(rel_i, @root)
+          return shard if match_pattern?(pattern.to_s, rel_i, abs_i)
+        end
       end

       @serial_globs.each do |g|
-
-
+        variants.each do |rel_i|
+          abs_i = File.expand_path(rel_i, @root)
+          return @serial_shard if match_pattern?(g, rel_i, abs_i)
+        end
       end

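The constraint change above matches each pin or serial pattern against several variants of an item: the relative path, the absolute path, and, for `path:line` locators, the file part alone. A standalone sketch of that variant expansion, using `File.fnmatch?` as a stand-in for the gem's `match_pattern?` helper (which this diff does not show):

```ruby
# Expand an item into the variants a pin/serial glob is matched against,
# then test a glob pattern with File.fnmatch? (stand-in for match_pattern?).
def constraint_variants(item, root)
  rel = item.to_s
  variants = [rel, File.expand_path(rel, root)]
  if (m = rel.match(/\A(.+):(\d+)\z/))   # "path:line" → also try the file part
    variants << m[1] << File.expand_path(m[1], root)
  end
  variants.uniq
end

def pinned?(item, pattern, root)
  constraint_variants(item, root).any? do |v|
    File.fnmatch?(pattern, v, File::FNM_PATHNAME | File::FNM_EXTGLOB)
  end
end

puts pinned?("spec/models/user_spec.rb:42", "spec/models/*_spec.rb", "/repo")
```

With `FNM_PATHNAME`, `*` does not cross `/`, so a pin like `spec/models/*_spec.rb` stays scoped to one directory while still catching per-example `path:line` items through the file-part variant.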
data/lib/polyrun/partition/plan.rb
CHANGED
@@ -1,41 +1,48 @@
-
-
+require_relative "timing_keys"
 require_relative "constraints"
 require_relative "hrw"
 require_relative "min_heap"
 require_relative "stable_shuffle"
-
 module Polyrun
   module Partition
-    # Assigns discrete items (e.g. spec paths) to shards (spec_queue.md).
+    # Assigns discrete items (e.g. spec paths, or +path:line+ example locators) to shards (spec_queue.md).
     #
     # Strategies:
     # - +round_robin+ — sorted paths, assign by index mod +total_shards+.
     # - +random_round_robin+ — Fisher–Yates shuffle (optional +seed+), then same mod assignment.
-    # - +cost_binpack+ (+cost+, +binpack+, +timing+) — LPT greedy binpack using per-
+    # - +cost_binpack+ (+cost+, +binpack+, +timing+) — LPT greedy binpack using per-item weights;
     #   optional {Constraints} for pins / serial globs before LPT on the rest.
+    #   Default +timing_granularity+ is +file+ (one weight per spec file). Experimental +:example+
+    #   uses +path:line+ locators and per-example weights in the timing JSON.
     # - +hrw+ (+rendezvous+) — rendezvous hashing for minimal remapping when m changes; optional constraints.
     class Plan
       COST_STRATEGIES = %w[cost cost_binpack binpack timing].freeze
       HRW_STRATEGIES = %w[hrw rendezvous].freeze

-      attr_reader :items, :total_shards, :strategy, :seed, :constraints
+      attr_reader :items, :total_shards, :strategy, :seed, :constraints, :timing_granularity

-      def initialize(items:, total_shards:, strategy: "round_robin", seed: nil, costs: nil, constraints: nil, root: nil)
-        @
+      def initialize(items:, total_shards:, strategy: "round_robin", seed: nil, costs: nil, constraints: nil, root: nil, timing_granularity: :file)
+        @timing_granularity = TimingKeys.normalize_granularity(timing_granularity)
+        @root = root ? File.expand_path(root) : Dir.pwd
+        @items = items.map do |x|
+          if @timing_granularity == :example
+            TimingKeys.normalize_locator(x, @root, :example)
+          else
+            x.to_s.strip
+          end
+        end.freeze
         @total_shards = Integer(total_shards)
         raise Polyrun::Error, "total_shards must be >= 1" if @total_shards < 1

         @strategy = strategy.to_s
         @seed = seed
-        @root = root ? File.expand_path(root) : Dir.pwd
         @constraints = normalize_constraints(constraints)
         @costs = normalize_costs(costs)

         validate_constraints_strategy_combo!
         if cost_strategy? && (@costs.nil? || @costs.empty?)
           raise Polyrun::Error,
-            "strategy #{@strategy} requires a timing map (path => seconds), e.g. merged polyrun_timing.json"
+            "strategy #{@strategy} requires a timing map (path => seconds or path:line => seconds), e.g. merged polyrun_timing.json"
         end
       end

@@ -85,24 +92,14 @@ module Polyrun
         "seed" => seed,
         "paths" => shard(shard_index)
       }
+      m["timing_granularity"] = timing_granularity.to_s if timing_granularity == :example
       secs = shard_weight_totals
       m["shard_seconds"] = secs if cost_strategy? || (hrw_strategy? && secs.any? { |x| x > 0 })
       m
     end

-    def self.load_timing_costs(path)
-
-      return {} unless File.file?(abs)
-
-      data = JSON.parse(File.read(abs))
-      return {} unless data.is_a?(Hash)
-
-      out = {}
-      data.each do |k, v|
-        key = File.expand_path(k.to_s, Dir.pwd)
-        out[key] = v.to_f
-      end
-      out
+    def self.load_timing_costs(path, granularity: :file, root: nil)
+      TimingKeys.load_costs_json_file(path, granularity, root: root)
     end

     def self.cost_strategy?(name)
@@ -134,7 +131,12 @@ module Polyrun

       c = {}
       costs.each do |k, v|
-        key =
+        key =
+          if @timing_granularity == :example
+            TimingKeys.normalize_locator(k.to_s, @root, :example)
+          else
+            File.expand_path(k.to_s, @root)
+          end
         c[key] = v.to_f
       end
       c
@@ -161,19 +163,27 @@ module Polyrun
     end

     def weight_for(path)
-
-      return @costs[
+      key = cost_lookup_key(path.to_s)
+      return @costs[key] if @costs&.key?(key)

       default_weight
     end

     def weight_for_optional(path)
-
-      return @costs[
+      key = cost_lookup_key(path.to_s)
+      return @costs[key] if @costs&.key?(key)

       0.0
     end

+    def cost_lookup_key(path)
+      if @timing_granularity == :example
+        TimingKeys.normalize_locator(path, @root, :example)
+      else
+        File.expand_path(path, @root)
+      end
+    end
+
     def cost_shards
       @cost_shards ||= build_lpt_buckets
     end
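The `cost_binpack` strategy documented above is LPT (longest processing time first) greedy binpacking: sort items by descending weight, then always give the next item to the currently lightest shard. A minimal sketch of the strategy (the gem's real implementation uses its `MinHeap` and applies constraints before LPT; this flat version is for illustration only):

```ruby
# Sketch of LPT greedy binpacking: sort by descending weight, then assign
# each item to whichever shard currently has the smallest total.
def lpt_assign(weights, total_shards)
  shards = Array.new(total_shards) { { total: 0.0, items: [] } }
  weights.sort_by { |_item, w| -w }.each do |item, w|
    lightest = shards.min_by { |s| s[:total] }
    lightest[:items] << item
    lightest[:total] += w
  end
  shards
end

weights = { "a_spec.rb" => 9.0, "b_spec.rb" => 7.0, "c_spec.rb" => 5.0,
            "d_spec.rb" => 4.0, "e_spec.rb" => 3.0 }
lpt_assign(weights, 2).each do |s|
  puts "#{format('%.1f', s[:total])}s  #{s[:items].join(' ')}"
end
```

Sorting first is what makes the greedy pass effective: big items land early while shards are empty, and small items fill the remaining gaps, keeping the per-shard totals close.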
data/lib/polyrun/partition/timing_keys.rb
ADDED
@@ -0,0 +1,85 @@
+require "json"
+
+require_relative "../log"
+
+module Polyrun
+  module Partition
+    # Normalizes partition item keys and timing JSON keys for +file+ vs experimental +example+ granularity.
+    #
+    # * +file+ — one item per spec file; keys are absolute paths (see {#canonical_file_path}).
+    # * +example+ — one item per example (RSpec-style +path:line+); keys are +"#{absolute_path}:#{line}"+.
+    module TimingKeys
+      module_function
+
+      # Resolves the parent directory with +File.realpath+ so +/var/...+ and +/private/var/...+ (macOS
+      # tmpdirs) and symlink segments map to one key for the same file.
+      def canonical_file_path(abs_path)
+        dir = File.dirname(abs_path)
+        base = File.basename(abs_path)
+        File.join(File.realpath(dir), base)
+      rescue SystemCallError
+        abs_path
+      end
+
+      # @return [:file, :example]
+      def normalize_granularity(value)
+        case value.to_s.strip.downcase
+        when "example", "examples"
+          :example
+        else
+          :file
+        end
+      end
+
+      # File path only (for partition constraints) when +item+ is +path:line+.
+      def file_part_for_constraint(item)
+        s = item.to_s
+        m = s.match(/\A(.+):(\d+)\z/)
+        return nil unless m && m[2].match?(/\A\d+\z/)
+
+        m[1]
+      end
+
+      # Normalize a path or +path:line+ locator relative to +root+ for cost maps and +Plan+ items.
+      def normalize_locator(raw, root, granularity)
+        s = raw.to_s.strip
+        return canonical_file_path(File.expand_path(s, root)) if s.empty?
+
+        if granularity == :example && (m = s.match(/\A(.+):(\d+)\z/)) && m[2].match?(/\A\d+\z/)
+          fp = canonical_file_path(File.expand_path(m[1], root))
+          return "#{fp}:#{m[2]}"
+        end
+
+        canonical_file_path(File.expand_path(s, root))
+      end
+
+      # Loads merged timing JSON (+path => seconds+ or +path:line => seconds+).
+      #
+      # @param root [String, nil] directory for normalizing relative keys (default: +Dir.pwd+). Use the
+      #   same working directory (or pass the same +root+ as {Partition::Plan}'s +root+) as when
+      #   generating the timing file so keys align.
+      def load_costs_json_file(path, granularity, root: nil)
+        abs = File.expand_path(path.to_s, Dir.pwd)
+        return {} unless File.file?(abs)
+
+        data = JSON.parse(File.read(abs))
+        return {} unless data.is_a?(Hash)
+
+        g = normalize_granularity(granularity)
+        root = File.expand_path(root || Dir.pwd)
+        out = {}
+        data.each do |k, v|
+          key = normalize_locator(k.to_s, root, g)
+          fv = v.to_f
+          if out.key?(key) && out[key] != fv
+            Polyrun::Log.warn(
+              "polyrun: timing JSON duplicate key #{key.inspect} after normalize (#{out[key]} vs #{fv}); using #{fv}"
+            )
+          end
+          out[key] = fv
+        end
+        out
+      end
+    end
+  end
+end
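The new `TimingKeys.normalize_locator` treats a trailing `:digits` as a line number only at example granularity; at file granularity the whole string is treated as a path. A standalone sketch of just that split (without the `File.realpath` canonicalization the real code adds):

```ruby
# Sketch: split a "path:line" locator into [file, line]; plain paths
# come back as [path, nil]. The greedy (.+) keeps any earlier colons
# in the file part, matching the gem's regex.
def split_locator(raw)
  if (m = raw.to_s.strip.match(/\A(.+):(\d+)\z/))
    [m[1], Integer(m[2])]
  else
    [raw.to_s.strip, nil]
  end
end

p split_locator("spec/user_spec.rb:42")  # => ["spec/user_spec.rb", 42]
p split_locator("spec/user_spec.rb")     # => ["spec/user_spec.rb", nil]
```

Because `(.+)` is greedy, only the final `:digits` suffix is consumed, so unusual paths containing colons still resolve to the expected file part.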
data/lib/polyrun/prepare/assets.rb
CHANGED
@@ -1,6 +1,6 @@
 require "digest/md5"
 require "fileutils"
-
+require_relative "../process_stdio"

 module Polyrun
   module Prepare
@@ -41,14 +41,21 @@ module Polyrun
     end

     # Shells out to +bin/rails assets:precompile+ when +rails_root+ contains +bin/rails+.
+    # +silent: true+ discards child stdio (+File::NULL+); +silent: false+ inherits the terminal.
     def precompile!(rails_root:, silent: true)
       exe = File.join(rails_root, "bin", "rails")
       raise Polyrun::Error, "Prepare::Assets: no #{exe}" unless File.executable?(exe)

-
-
-
-
+      st, out, err = Polyrun::ProcessStdio.spawn_wait(
+        nil,
+        exe,
+        "assets:precompile",
+        chdir: rails_root,
+        silent: silent
+      )
+      unless st.success?
+        raise Polyrun::Error, Polyrun::ProcessStdio.format_failure_message("assets:precompile", st, out, err)
+      end

       true
     end
data/lib/polyrun/process_stdio.rb
ADDED
@@ -0,0 +1,91 @@
+require "tempfile"
+
+module Polyrun
+  # Run a subprocess without +Open3+ pipe reader threads (avoids noisy +IOError+s on SIGINT when
+  # streams close). By default stdin/stdout/stderr are inherited so output streams live and the
+  # child can use the TTY for prompts.
+  module ProcessStdio
+    MAX_FAILURE_CAPTURE_BYTES = 32_768
+
+    class << self
+      # @param env [Hash, nil] optional environment for the child (only forwarded when a Hash)
+      # @param argv [Array<String>] command argv
+      # @param silent [Boolean] if true, connect stdin/stdout/stderr to +File::NULL+ (no terminal output;
+      #   non-interactive). Still no Open3 pipe threads.
+      # @return [Process::Status]
+      def inherit_stdio_spawn_wait(env, *argv, chdir: nil, silent: false)
+        st, = spawn_wait(env, *argv, chdir: chdir, silent: silent)
+        st
+      end
+
+      # Like {#inherit_stdio_spawn_wait}, but returns captured stdout/stderr when +silent+ is true.
+      # On success those strings are empty (not read). When +silent+ is false, output goes to the TTY
+      # and returned captures are empty.
+      #
+      # @return [Array(Process::Status, String, String)] status, stdout capture, stderr capture
+      def spawn_wait(env, *argv, chdir: nil, silent: false)
+        args = spawn_argv(env, *argv)
+        return spawn_wait_inherit(args, chdir) unless silent
+
+        spawn_wait_silent(args, chdir)
+      end
+
+      # Builds a diagnostic string for failed subprocesses (used when +silent: true+ hid live output).
+      def format_failure_message(label, status, stdout, stderr)
+        msg = "#{label} failed (exit #{status.exitstatus})"
+        s = stdout.to_s
+        e = stderr.to_s
+        msg << "\n--- stdout ---\n#{s}" unless s.strip.empty?
+        msg << "\n--- stderr ---\n#{e}" unless e.strip.empty?
+        msg
+      end
+
+      private
+
+      def spawn_argv(env, *argv)
+        a = []
+        a << env if env.is_a?(Hash)
+        a.concat(argv)
+        a
+      end
+
+      def spawn_wait_inherit(args, chdir)
+        opts = {in: :in, out: :out, err: :err}
+        opts[:chdir] = chdir if chdir
+        pid = Process.spawn(*args, **opts)
+        st = Process.wait2(pid).last
+        [st, "", ""]
+      end
+
+      def spawn_wait_silent(args, chdir)
+        Tempfile.create("polyrun-out") do |tfout|
+          Tempfile.create("polyrun-err") do |tferr|
+            tfout.close
+            tferr.close
+            out_path = tfout.path
+            err_path = tferr.path
+            opts = {in: File::NULL, out: out_path, err: err_path}
+            opts[:chdir] = chdir if chdir
+            pid = Process.spawn(*args, **opts)
+            st = Process.wait2(pid).last
+            if st.success?
+              [st, "", ""]
+            else
+              out = File.binread(out_path)
+              err = File.binread(err_path)
+              [st, truncate_failure_capture(out), truncate_failure_capture(err)]
+            end
+          end
+        end
+      end
+
+      def truncate_failure_capture(bytes)
+        s = bytes.to_s
+        return s if s.bytesize <= MAX_FAILURE_CAPTURE_BYTES
+
+        tail = s.byteslice(-MAX_FAILURE_CAPTURE_BYTES, MAX_FAILURE_CAPTURE_BYTES)
+        "... (#{s.bytesize} bytes total; showing last #{MAX_FAILURE_CAPTURE_BYTES} bytes)\n" + tail
+      end
+    end
+  end
+end
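The tempfile approach in `spawn_wait_silent` above sidesteps Open3's pipe reader threads entirely: the child writes straight to files, and the parent reads them back only when the exit status is non-zero. A condensed standalone sketch of the same technique:

```ruby
require "rbconfig"
require "tempfile"

# Sketch: spawn a child with stdout/stderr redirected into temp files,
# reading the captures back only on failure (no pipe reader threads).
def run_silent(*argv)
  Tempfile.create("out") do |out|
    Tempfile.create("err") do |err|
      out.close
      err.close
      pid = Process.spawn(*argv, in: File::NULL, out: out.path, err: err.path)
      st = Process.wait2(pid).last
      st.success? ? [st, "", ""] : [st, File.read(out.path), File.read(err.path)]
    end
  end
end

st, = run_silent(RbConfig.ruby, "-e", "puts 'hidden'")
puts st.success?
```

`Tempfile.create` with a block unlinks the files afterward, and `Process.spawn`'s `out:`/`err:` options do the redirection in the child, so nothing in the parent has to pump IO while the command runs.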
data/lib/polyrun/rspec.rb
CHANGED
@@ -11,5 +11,24 @@ module Polyrun
       Polyrun::Data::ParallelProvisioning.run_suite_hooks!
     end
   end
+
+  # Experimental: add {Timing::RSpecExampleFormatter} and write per-example JSON (see +timing_granularity: example+).
+  # With +output_path:+, that path is used directly (no +ENV+ mutation). Without it, the formatter
+  # reads +ENV["POLYRUN_EXAMPLE_TIMING_OUT"]+ or defaults to +polyrun_timing_examples.json+.
+  def install_example_timing!(output_path: nil)
+    require_relative "timing/rspec_example_formatter"
+    fmt =
+      if output_path
+        op = output_path
+        Class.new(Polyrun::Timing::RSpecExampleFormatter) do
+          define_method(:timing_output_path) { op }
+        end
+      else
+        Polyrun::Timing::RSpecExampleFormatter
+      end
+    ::RSpec.configure do |config|
+      config.add_formatter fmt
+    end
+  end
 end
 end
data/docs/SETUP_PROFILE.md
CHANGED
@@ -25,7 +25,7 @@ Adjust `--workers` or use `bin/rspec_parallel` if your repo provides a wrapper.
 ### Model B — matrix: one shard per job

 - Matrix sets `POLYRUN_SHARD_INDEX` and `POLYRUN_SHARD_TOTAL` explicitly (many runners do not set `CI_NODE_*` by default).
-- Each job runs `polyrun
+- Each job runs `polyrun ci-shard-run -- …` (e.g. `-- bundle exec rspec` or `ci-shard-rspec`), or `build-paths` + `plan` + your runner manually. Legacy: `bin/polyrun-rspec` or `bin/rspec_ci_shard` patterns.
 - Upload `coverage/polyrun-fragment-<shard>.json` per job; a `merge-coverage` job downloads all fragments and merges.

 Do not combine Model A and Model B in one workflow without a documented reason (nested parallelism and duplicate merges).
data/lib/polyrun/templates/ci_matrix.polyrun.yml
CHANGED
@@ -1,5 +1,8 @@
 # Polyrun — partition contract for CI matrix (one job per POLYRUN_SHARD_INDEX).
-# Each matrix row: set POLYRUN_SHARD_INDEX and POLYRUN_SHARD_TOTAL; run
+# Each matrix row: set POLYRUN_SHARD_INDEX and POLYRUN_SHARD_TOTAL; run:
+#   bundle exec polyrun -c polyrun.yml ci-shard-run -- bundle exec rspec
+# (or ci-shard-rspec; or e.g. ci-shard-run -- bundle exec polyrun quick).
+# Equivalent to build-paths, plan --shard/--total, then run that command with this slice's paths.
 # A separate CI job downloads coverage/polyrun-fragment-*.json and runs merge-coverage.
 # Do not use parallel-rspec with multiple workers inside the same matrix row unless you intend nested parallelism.
 # See: docs/SETUP_PROFILE.md
data/lib/polyrun/timing/merge.rb
CHANGED
@@ -4,7 +4,8 @@ require_relative "../debug"

 module Polyrun
   module Timing
-    # Merges per-shard timing JSON files (spec2 §2.4): path => wall seconds (float)
+    # Merges per-shard timing JSON files (spec2 §2.4): path => wall seconds (float), or (experimental)
+    # +absolute_path:line+ => seconds for per-example timing.
     # Disjoint suites: values merged by taking the maximum per path when duplicates appear.
     module Merge
       module_function
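The merge rule above takes the maximum per key when the same path (or `path:line`) appears in more than one shard file: for disjoint suites a duplicate means the example was rerun, so the times should not be summed. A minimal sketch of that max-merge:

```ruby
# Sketch: merge timing hashes by keeping the per-key maximum, not the sum.
def merge_timings(*maps)
  maps.each_with_object({}) do |m, out|
    m.each { |k, v| out[k] = [out[k] || 0.0, v.to_f].max }
  end
end

a = { "spec/a_spec.rb" => 1.5, "spec/b_spec.rb" => 2.0 }
b = { "spec/b_spec.rb" => 3.5, "spec/c_spec.rb" => 0.4 }
p merge_timings(a, b)  # => {"spec/a_spec.rb"=>1.5, "spec/b_spec.rb"=>3.5, "spec/c_spec.rb"=>0.4}
```

Taking the max biases the planner toward the slowest observed run, which is the safer estimate when balancing shards.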
data/lib/polyrun/timing/rspec_example_formatter.rb
ADDED
@@ -0,0 +1,53 @@
+require "json"
+
+require "rspec/core/formatters/base_formatter"
+
+module Polyrun
+  module Timing
+    # Experimental: records +absolute_path:line_number+ => wall seconds per example for
+    # {Partition::Plan} +timing_granularity: :example+ and +merge-timing+.
+    #
+    # Use after RSpec is loaded:
+    #   require "polyrun/timing/rspec_example_formatter"
+    #   RSpec.configure { |c| c.add_formatter Polyrun::Timing::RSpecExampleFormatter }
+    # Or {Polyrun::RSpec.install_example_timing!} (+output_path:+ avoids touching +ENV+).
+    #
+    # Default output path: +ENV["POLYRUN_EXAMPLE_TIMING_OUT"]+ if set, else +polyrun_timing_examples.json+.
+    class RSpecExampleFormatter < RSpec::Core::Formatters::BaseFormatter
+      RSpec::Core::Formatters.register self, :example_finished, :close
+
+      def initialize(output)
+        super
+        @times = {}
+      end
+
+      def example_finished(notification)
+        ex = notification.example
+        result = ex.execution_result
+        return if result.pending?
+
+        t = result.run_time
+        return unless t
+
+        path = ex.metadata[:absolute_path]
+        return unless path
+
+        line = ex.metadata[:line_number]
+        return unless line
+
+        key = "#{File.expand_path(path)}:#{line}"
+        cur = @times[key]
+        @times[key] = cur ? [cur, t].max : t
+      end
+
+      def close(_notification)
+        File.write(timing_output_path, JSON.pretty_generate(@times))
+      end
+
+      # Override in a subclass from {Polyrun::RSpec.install_example_timing!(output_path: ...)}.
+      def timing_output_path
+        ENV["POLYRUN_EXAMPLE_TIMING_OUT"] || "polyrun_timing_examples.json"
+      end
+    end
+  end
+end
data/lib/polyrun/version.rb
CHANGED
data/polyrun.gemspec
CHANGED
@@ -10,7 +10,7 @@ Gem::Specification.new do |spec|
   spec.license = "MIT"
   spec.required_ruby_version = ">= 3.1.0"

-  spec.files = Dir["lib/**/*", "sig/**/*.rbs", "bin/polyrun", "README.md", "docs/SETUP_PROFILE.md", "LICENSE", "CONTRIBUTING.md", "CODE_OF_CONDUCT.md", "SECURITY.md", "polyrun.gemspec"].reject { |f| File.directory?(f) }
+  spec.files = Dir["lib/**/*", "sig/**/*.rbs", "bin/polyrun", "README.md", "CHANGELOG.md", "docs/SETUP_PROFILE.md", "LICENSE", "CONTRIBUTING.md", "CODE_OF_CONDUCT.md", "SECURITY.md", "polyrun.gemspec"].reject { |f| File.directory?(f) }
   spec.bindir = "bin"
   spec.executables = ["polyrun"]
   spec.require_paths = ["lib"]
data/sig/polyrun/rspec.rbs
CHANGED
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: polyrun
 version: !ruby/object:Gem::Version
-  version: 1.0.0
+  version: 1.1.0
 platform: ruby
 authors:
 - Andrei Makarov
@@ -156,6 +156,7 @@ executables:
 extensions: []
 extra_rdoc_files: []
 files:
+- CHANGELOG.md
 - CODE_OF_CONDUCT.md
 - CONTRIBUTING.md
 - LICENSE
@@ -165,6 +166,7 @@ files:
 - docs/SETUP_PROFILE.md
 - lib/polyrun.rb
 - lib/polyrun/cli.rb
+- lib/polyrun/cli/ci_shard_run_command.rb
 - lib/polyrun/cli/coverage_commands.rb
 - lib/polyrun/cli/coverage_merge_io.rb
 - lib/polyrun/cli/database_commands.rb
@@ -226,8 +228,10 @@ files:
 - lib/polyrun/partition/plan_lpt.rb
 - lib/polyrun/partition/plan_sharding.rb
 - lib/polyrun/partition/stable_shuffle.rb
+- lib/polyrun/partition/timing_keys.rb
 - lib/polyrun/prepare/artifacts.rb
 - lib/polyrun/prepare/assets.rb
+- lib/polyrun/process_stdio.rb
 - lib/polyrun/queue/file_store.rb
 - lib/polyrun/queue/file_store_pending.rb
 - lib/polyrun/quick.rb
@@ -248,6 +252,7 @@ files:
 - lib/polyrun/templates/minimal_gem.polyrun.yml
 - lib/polyrun/templates/rails_prepare.polyrun.yml
 - lib/polyrun/timing/merge.rb
+- lib/polyrun/timing/rspec_example_formatter.rb
 - lib/polyrun/timing/summary.rb
 - lib/polyrun/version.rb
 - polyrun.gemspec