polyharness 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 weijt606

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,455 @@
# PolyHarness

```text
  _____      _       _    _
 |  __ \    | |     | |  | |
 | |__) |__ | |_   _| |__| | __ _ _ __ _ __   ___  ___ ___
 |  ___/ _ \| | | | |  __  |/ _` | '__| '_ \ / _ \/ __/ __|
 | |  | (_) | | |_| | |  | | (_| | |  | | | |  __/\__ \__ \
 |_|   \___/|_|\__, |_|  |_|\__,_|_|  |_| |_|\___||___/___/
                __/ |
               |___/
```

**Make your AI Agent evolve automatically.**

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![Tests](https://img.shields.io/badge/tests-121%20passing-brightgreen.svg)]()
[![中文文档](https://img.shields.io/badge/文档-中文版-red.svg)](README_CN.md)

---

Your AI agent runs the same harness every time. Same prompts, same tool config, same strategy — no matter how many times it fails.

**PolyHarness addresses that.** It records each iteration, evaluates candidate harness changes, and uses the accumulated history to search for better-scoring configurations. You run one command to start the loop.

| | |
|---|---|
| **Self-Evolution** | Iteratively searches over harness changes and keeps the full evaluation history in one workspace. |
| **6 Agent Backends** | Claude Code · Claw Code · Codex · OpenCode · API direct · Local — plug in any CLI agent. |
| **Full History** | Every iteration's code, scores, and traces preserved. The Meta-Harness paper reports that non-Markovian search outperforms blind retries. |
| **Search Tree** | Visualize the optimization path. Compare any two candidates with per-task diffs. |
| **One-Command Setup** | `ph init --base-harness ... --task-dir ...` — copies files, configures workspace, done. |
| **Closed Loop** | init → run → inspect → apply. You choose when to write the best-scoring candidate back to your project. |

---

## Backstory

Stanford's [Meta-Harness paper](https://arxiv.org/abs/2603.28052) (IRIS Lab, 2026) reported a striking result: **harness design is the #1 lever for agent performance** — more impactful than model choice, prompt engineering, or fine-tuning.

The key insight? When you give an AI agent access to *full diagnostic history* — not just the latest score, but every past attempt's code, traces, and failure modes — it can *systematically evolve* its own harness configuration. The paper called this "non-Markovian search" and showed that it outperforms simple best-of-N sampling by a wide margin.

But the paper only released the final optimized artifact (`agent.py`). **The search framework itself was never open-sourced.**

PolyHarness fills that gap. It's the open-source engine that makes Meta-Harness search available to everyone — for any agent, any task, any evaluation pipeline.

> **Think of it this way:**
> - Memory tools (like Supermemory) give agents persistent **memory** across conversations.
> - **PolyHarness gives agents persistent self-evolution** — you get a repeatable way to refine how they work over time.

## What PolyHarness Is

PolyHarness is the open-source engine for iteratively searching over an agent's harness.

It builds on ideas from the Meta-Harness paper and the TBench2 results reported there, while focusing this repository on the optimization workflow itself — how harness variants are proposed, evaluated, and revised over repeated runs.

If tools like ForgeCode help you code, PolyHarness helps you search for task-specific harness improvements by iterating on prompts, tool use, and harness logic.

---

## Use PolyHarness

<table>
<tr>
<td width="50%" valign="top">

### I use AI coding agents

You have Claude Code, Codex, or another agent.
You want to tune it for your specific tasks — without manually tweaking prompts.

```bash
pip install polyharness
ph init --agent claude-code --task-dir ./my_tasks
ph run
ph apply
```

You now have a repeatable optimization workspace. Inspect the results, then apply the best-scoring candidate if it improves your evaluation.

**[→ Jump to Quick Start](#quick-start)**

</td>
<td width="50%" valign="top">

### I'm building agent frameworks

You're developing an AI agent or tool and want
to integrate automated optimization as a feature.

PolyHarness provides a pluggable adapter API —
implement a few methods and your agent can participate in the same search loop.

```python
class MyAgentAdapter(CLIAdapter):
    def build_command(self, prompt, cwd):
        return ["my-agent", "--prompt", prompt]
    def parse_output(self, stdout, stderr, code):
        return CLIResult(...)
```

**[→ Jump to Architecture](#how-it-works)**

</td>
</tr>
</table>

---

## Quick Start

### 1. Install

```bash
pip install polyharness       # Python >= 3.12
# or
npm install -g polyharness    # Node.js wrapper, auto-installs Python package
```

### 2. Check your environment

```bash
ph doctor
```

This auto-detects which agent backends (Claude Code, Codex, etc.) are installed and shows their status.

### 3. Initialize a workspace

```bash
ph init --agent claude-code \
  --base-harness ./my_harness/ \
  --task-dir ./my_tasks/ \
  --eval-script ./evaluate.py
```

This copies your harness code, test cases, and evaluation script into a structured workspace — and auto-configures everything. No manual YAML editing.

### 4. Run the optimization loop

```bash
ph run
```

The orchestrator: copies your harness → asks the Proposer agent for a candidate change → evaluates the result → stores everything → repeats.

### 5. Inspect and apply

```bash
ph status                  # progress table + elapsed + improvement rate
ph log                     # search tree with delta (Δ) column
ph best                    # best candidate details
ph leaderboard             # ranked table of all candidates (--tasks for drilldown)
ph compare 0 5             # diff two iterations (scores + code)
ph diff 5                  # shorthand for: compare 0 5
ph trace 3                 # view stdout/stderr/metrics for iter_3
ph report                  # generate a full markdown report

ph apply                   # write best harness back to base_harness/
ph export ./my-optimized   # or export to any directory
ph clean --keep-best       # remove candidates to free disk space
```

### Try it now (no API key needed)

```bash
cd examples/math-word-problems

ph init --agent local \
  --base-harness ./base_harness \
  --task-dir . \
  --workspace .ph_workspace

ph run --workspace .ph_workspace --max-iterations 5
ph log --workspace .ph_workspace

# Search Tree
# └── iter_0  0.3500
#     └── iter_1  0.5000
#         └── iter_2  0.6500
#             └── iter_3  0.9000 ★
```

The score path above is the current measured result of the bundled `math-word-problems` example with the repository's `local` backend, rounded for readability. It is not a paper benchmark or an external project result. The `local` backend is deterministic; no fixed score uplift is claimed here for Claude Code, Codex, or other real agent backends.

---

## How It Works

PolyHarness runs a **Meta-Harness-style search loop** — an iterative process where an AI agent proposes, evaluates, and stores harness changes:

```
You                             PolyHarness
 │
 ├── ph init ──────────────────→  Creates workspace
 │     (harness + tasks + eval)   Copies files
 │                                Injects CLAUDE.md
 │
 ├── ph run ───────────────────→  Starts search loop:
 │
 │      Step 1: SELECT parent     Best or Tournament
 │      Step 2: COPY harness      From parent → candidate
 │      Step 3: PROPOSE changes   Agent reads all history
 │      Step 4: EVALUATE          Run tasks, get scores
 │      Step 5: STORE results     Code + scores + traces
 │      Step 6: CHECK stopping    Improved? Patience left?
 │          └── loop back to Step 1
 │
 ├── ph log ───────────────────→  Shows search tree
 ├── ph compare 0 5 ───────────→  Score deltas + code diff
 └── ph apply ─────────────────→  Writes best back
```

### Why it works: non-Markovian search

Traditional approaches: run the agent → check the score → retry. Each attempt is independent.

**PolyHarness is different.** Every iteration stores:
- The complete candidate source code
- Per-task scores (not just the overall number)
- Full execution traces (stdout, stderr, exit codes)
- Metadata (parent candidate, proposer model, changes summary)

The Proposer reads **all of this** before generating the next candidate. It can see *why* a previous attempt failed, *which specific tasks* regressed, and *what code changes* caused it. This is why the Meta-Harness paper found that full-context search outperforms scores-only search by 15+ percentage points.

---

## Supported Agent Backends

| Backend | Command | Use case |
|---------|---------|----------|
| `api` | — | Default. Calls the Anthropic API directly; needs only `ANTHROPIC_API_KEY` |
| `claude-code` | `claude -p` | Official Claude Code CLI (Pro/Teams subscription) |
| `claw-code` | `claw -p` | Open-source Claw Code CLI |
| `codex` | `codex --quiet` | OpenAI Codex CLI |
| `opencode` | `opencode -p` | OpenCode CLI |
| `local` | — | Offline rule-based engine for development & testing |

`ph doctor` auto-detects all available backends and shows their status.

When you run `ph init --agent claude-code`, PolyHarness automatically generates a `CLAUDE.md` instruction file in the workspace, telling the agent how to behave as an optimization Proposer. Same for `CLAW.md`, `CODEX.md`, `OPENCODE.md` — each agent's native instruction format.

---

## Installation

### pip (recommended)

```bash
pip install polyharness   # Requires Python >= 3.12
ph --version
```

### npm / npx

```bash
npm install -g polyharness   # postinstall auto-installs Python package
npx polyharness doctor       # or run without global install
```

The npm package is a thin Node.js wrapper (`bin/ph.mjs`) that finds and invokes the Python CLI. It checks: `ph` on PATH → `python -m poly_harness` → auto-discovers `.venv` in parent directories.

### From source

```bash
git clone https://github.com/weijt606/polyharness.git
cd polyharness

python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
# or: pip install anthropic click pydantic pyyaml rich && export PYTHONPATH="$PWD/src"

python -m poly_harness --version
```

---

## CLI Reference

| Command | Description |
|---------|-------------|
| `ph doctor` | Detect installed agents and environment status |
| `ph init` | Initialize workspace with auto-copy of harness, tasks, eval script |
| `ph run` | Start the optimization search loop |
| `ph status` | Progress table with elapsed time, improvement rate, and delta |
| `ph log` | Search tree with delta (Δ) column (or `--flat` for table) |
| `ph best` | Show best candidate: score, per-task breakdown, changes summary |
| `ph compare A B` | Compare two iterations: score deltas + unified code diff |
| `ph diff <N>` | Shorthand for `compare 0 <N>` |
| `ph leaderboard` | Ranked table of all candidates (`--top N`, `--tasks` drilldown) |
| `ph trace <N>` | View stdout, stderr, metrics, exit code for an iteration |
| `ph report` | Generate a full markdown report with score trends and per-task table |
| `ph apply` | Copy best harness back to `base_harness/` (or `--target` dir) |
| `ph export <dir>` | Export candidate to any directory (with optional `--include-meta`) |
| `ph clean` | Remove candidate dirs to free disk space (`--keep-best`, `-y`) |
| `ph config show` | Display the current workspace configuration |
| `ph config set K V` | Modify a config value via dot-notation (with validation) |
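
`ph config set K V` takes a dot-notation key. As a generic illustration of what dot-notation addressing means — not PolyHarness's implementation, which per the table also validates the value, and with invented key names:

```python
def set_by_dots(config: dict, key: str, value) -> None:
    """Walk a nested dict along an "a.b.c" key and set the leaf value."""
    *parents, leaf = key.split(".")
    node = config
    for part in parents:
        node = node.setdefault(part, {})  # create intermediate tables as needed
    node[leaf] = value


config = {"search": {"max_iterations": 10, "strategy": "best"}}
set_by_dots(config, "search.strategy", "tournament")   # like: ph config set search.strategy tournament
```

The `setdefault` walk means setting a key under a section that does not exist yet simply creates it, which is the usual CLI-friendly behavior.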

### Global flags

```
-v, --verbose   Show detailed output
-q, --quiet     Suppress non-essential output
```

### `ph init` options

```
--agent <name>        Backend: claude-code | claw-code | codex | opencode | api | local
--workspace <dir>     Workspace directory (default: current dir)
--base-harness <dir>  Copy starting harness code into workspace
--task-dir <dir>      Copy tasks/ folder and evaluate.py into workspace
--eval-script <path>  Copy a specific evaluate.py into workspace
```

### `ph run` options

```
--max-iterations N   Override max iterations
--dry-run            Only evaluate the base harness, skip search
--resume             Continue an interrupted search from where it left off
--backend <name>     Override proposer backend without editing config
--strategy <name>    Override parent selection: best | tournament | all
```

---

## Examples

The score trajectories below are measured from the bundled examples using the current `local` backend and are rounded for readability. They are not borrowed from the Meta-Harness paper or from external benchmarks.

### Text Classification (sentiment analysis)

```bash
cd examples/text-classification
ph init --agent local --base-harness ./base_harness --task-dir . --workspace .ws
ph run --workspace .ws --max-iterations 3

# iter_0: 0.65 → iter_1: 1.00 ★  (naive word list → expanded lexicon)
```

### Math Word Problems (numerical reasoning)

```bash
cd examples/math-word-problems
ph init --agent local --base-harness ./base_harness --task-dir . --workspace .ws
ph run --workspace .ws --max-iterations 5

# iter_0: 0.35 → iter_1: 0.50 → iter_2: 0.65 → iter_3: 0.90 ★
# (naive multiply → operation detection → averages/% → multi-step reasoning)
```

### Code Generation (function synthesis)

```bash
cd examples/code-generation
ph init --agent local --base-harness ./base_harness --task-dir . --workspace .ws
ph run --workspace .ws --max-iterations 5

# iter_0: 0.27 → iter_1: 0.50 → iter_2: 0.68 → iter_3: 0.95 ★
# (5 keywords → 10 patterns → composite logic → comprehensive coverage)
```

### API Calling (endpoint routing + parameter extraction)

```bash
cd examples/api-calling
ph init --agent local --base-harness ./base_harness --task-dir . --workspace .ws
ph run --workspace .ws --max-iterations 5

# iter_0: 0.19 → iter_1: 0.55 → iter_2: 0.77 → iter_3: 0.87 ★
# (keyword matching → broad routing → param helpers → full regex extraction)
```

### RAG Question Answering (retrieval + answer extraction)

```bash
cd examples/rag-qa
ph init --agent local --base-harness ./base_harness --task-dir . --workspace .ws
ph run --workspace .ws --max-iterations 5

# iter_0: 0.51 → iter_1: 0.79 ★
# (word overlap → stopword-filtered retrieval + sentence scoring)
```

---

## Project Structure

```
src/poly_harness/
├── cli.py                  # Click CLI — 16 commands/subcommands
├── config.py               # Pydantic config models
├── orchestrator.py         # Meta-Harness search loop + progress bar + error recovery
├── workspace.py            # Filesystem workspace + agent instruction injection
├── search_log.py           # JSONL append-only search log
├── doctor.py               # Environment detection for all backends
├── evaluator/
│   └── evaluator.py        # PythonEvaluator (subprocess)
└── proposer/
    ├── api_proposer.py     # Anthropic API direct + tool-use loop
    ├── cli_proposer.py     # CLIProposer — unified subprocess management
    ├── local_proposer.py   # Offline rule-based (5 task types)
    └── adapters/           # Per-agent CLI adapters
        ├── claude_code.py  # claude -p
        ├── claw_code.py    # claw -p
        ├── codex.py        # codex --quiet --auto-edit
        └── opencode.py     # opencode -p

bin/
├── ph.mjs                  # npm wrapper
└── postinstall.mjs         # npm postinstall

examples/
├── text-classification/    # 20 test cases
├── math-word-problems/     # 20 test cases
├── code-generation/        # 20 tasks × 3 inputs
├── api-calling/            # 20 test cases
└── rag-qa/                 # 20 QA pairs + 10-doc knowledge base

tests/                      # 121 tests (pytest)
```

## Local Development

```bash
git clone https://github.com/weijt606/polyharness.git && cd polyharness
python -m venv .venv && source .venv/bin/activate
pip install anthropic click pydantic pyyaml rich pytest pytest-cov ruff
export PYTHONPATH="$PWD/src"

python -m pytest tests/   # run tests
ruff check src/ tests/    # lint
```

## Documentation

- [Product Development](docs/development/product-development.md) — roadmap, user scenarios, success metrics
- [Technical Architecture](docs/development/technical-architecture.md) — system design & data flow
- [Meta-Harness Paper](docs/research/references/meta-harness-paper.md) — theoretical foundation and paper-reported reference results

---

<p align="center"><strong>Give your agent self-evolution. It's about time.</strong></p>

## License

MIT
package/bin/ph.mjs ADDED
@@ -0,0 +1,61 @@
#!/usr/bin/env node

/**
 * ph — PolyHarness CLI wrapper for npm installations.
 *
 * This thin wrapper finds and invokes the Python `ph` CLI.
 * Resolution order:
 *   1. `ph` on PATH (pip-installed entry point)
 *   2. `python -m poly_harness` (PYTHONPATH / editable install)
 *   3. Local .venv (auto-detect venv in cwd or parents)
 */

import { execFileSync } from "node:child_process";
import { existsSync } from "node:fs";
import { dirname, join } from "node:path";
import process from "node:process";

const args = process.argv.slice(2);

/** Try running a command; return true on success. */
function tryExec(cmd, cmdArgs) {
  try {
    execFileSync(cmd, cmdArgs, { stdio: "inherit" });
    return true;
  } catch {
    return false;
  }
}

/** Walk up from cwd looking for .venv/bin/python. */
function findVenvPython() {
  let dir = process.cwd();
  for (let i = 0; i < 10; i++) {
    const candidate = join(dir, ".venv", "bin", "python");
    if (existsSync(candidate)) return candidate;
    const parent = dirname(dir);
    if (parent === dir) break; // reached the filesystem root
    dir = parent;
  }
  return null;
}

// Strategy 1: `ph` on PATH
if (tryExec("ph", args)) process.exit(0);

// Strategy 2: system python -m poly_harness
for (const py of ["python3", "python"]) {
  if (tryExec(py, ["-m", "poly_harness", ...args])) process.exit(0);
}

// Strategy 3: auto-detect .venv
const venvPy = findVenvPython();
if (venvPy && tryExec(venvPy, ["-m", "poly_harness", ...args])) {
  process.exit(0);
}

console.error(
  `Error: Could not find PolyHarness.\n\nInstall the Python package:\n  pip install polyharness\n\nOr install from source:\n  git clone https://github.com/weijt606/polyharness.git && cd polyharness\n  pip install -e .`
);
process.exit(1);
package/bin/postinstall.mjs ADDED
@@ -0,0 +1,54 @@
#!/usr/bin/env node

/**
 * postinstall — attempt to install the Python package automatically.
 * Runs after `npm install polyharness` (or `npm install -g polyharness`).
 * Silent failure is OK — the user can install the pip package manually.
 */

import { execSync } from "node:child_process";

function isInstalled() {
  try {
    execSync('python3 -c "import poly_harness"', { stdio: "ignore" });
    return true;
  } catch {
    return false;
  }
}

function tryInstall(cmd) {
  try {
    execSync(cmd, { stdio: "inherit" });
    return true;
  } catch {
    return false;
  }
}

if (isInstalled()) {
  console.log("✅ polyharness Python package already installed.");
  process.exit(0);
}

console.log("Installing polyharness Python package...");

// Try uv first (fast), then pip3, then pip
const strategies = [
  "uv pip install polyharness",
  "pip3 install polyharness",
  "pip install polyharness",
];

for (const cmd of strategies) {
  if (tryInstall(cmd)) {
    console.log("✅ polyharness installed successfully.");
    process.exit(0);
  }
}

console.warn(
  "⚠️ Could not auto-install Python package. Please run manually:\n" +
  "  pip install polyharness"
);
process.exit(0); // non-fatal — npm install should still succeed
package/package.json ADDED
@@ -0,0 +1,23 @@
{
  "name": "polyharness",
  "version": "0.1.0",
  "description": "Make your AI agent evolve automatically through iterative harness optimization.",
  "keywords": ["agent", "harness", "optimization", "meta-harness", "cli"],
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "https://github.com/weijt606/polyharness.git"
  },
  "bin": {
    "ph": "./bin/ph.mjs"
  },
  "files": [
    "bin/"
  ],
  "scripts": {
    "postinstall": "node bin/postinstall.mjs"
  },
  "engines": {
    "node": ">=18"
  }
}