all-hands-cli 0.1.12 → 0.1.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,203 +1,157 @@
  # Validation Tooling
 
- Per **Agentic Validation Tooling**, programmatic validation replaces human supervision. This reference covers how validation suites are created, structured, and how they compound from stochastic exploration into deterministic gates.
+ Programmatic validation replaces human supervision. Validation suites compound from stochastic exploration into deterministic gates.
 
  ## Crystallization Lifecycle
 
- Per **Agentic Validation Tooling**, validation compounds through a lifecycle:
-
- 1. **Stochastic exploration** — Agent-driven exploratory testing using model intuition discovers patterns
+ 1. **Stochastic exploration** — Agent-driven exploratory testing discovers patterns
  2. **Pattern crystallization** — Discovered patterns become deterministic checks
  3. **CI/CD entrenchment** — Deterministic checks gate releases
  4. **Frontier shift** — Stochastic exploration moves to new unknowns
 
- This is how validation compounds. Every domain has both a stochastic dimension (exploratory) and a deterministic dimension (binary pass/fail).
+ Every domain has both a stochastic dimension (exploratory) and a deterministic dimension (binary pass/fail).
 
  ## Suite Existence Threshold
 
- A validation suite must have a meaningful stochastic dimension to justify existing. Deterministic-only tools (type checking, linting, formatting) are test commands referenced directly in acceptance criteria and CI/CD — they are NOT suites.
+ A suite must have a meaningful stochastic dimension to justify existing. Deterministic-only tools (type checking, linting, formatting) are test commands in acceptance criteria and CI/CD — they are NOT suites.
 
  ## Repository Agnosticism
 
- This reference file is a generic rule file that ships with the harness. It MUST NOT contain references to project-specific validation suites, commands, or infrastructure. All examples must either:
- - Reference existing default validation suites shipped with this repo (currently: xcode-automation, browser-automation)
- - Use generic/hypothetical descriptions that any target repository can map to their own context
-
- When examples are needed, use **snippets from the existing default suites** rather than naming suites or commands that belong to a specific target project. Target repositories create their own suites for their domains — this file teaches how to create and structure them, not what they should be called.
+ This file MUST NOT contain project-specific references. All examples must either reference default suites shipped with this repo (currently: xcode-automation, browser-automation) or use generic descriptions. This file teaches patterns, not inventories.
 
- **Why**: Target repositories consume this file as authoritative guidance. Project-specific references create confusion (agents look for suites that don't exist), couple the harness to a single project, and violate the principle that this file teaches patterns, not inventories. If a pattern needs a concrete example, draw it from xcode-automation or browser-automation.
+ Project-specific references cause agents to look for suites that don't exist in target repos and couple the harness to a single project. If a pattern needs a concrete example, draw it from xcode-automation or browser-automation.
 
  ## Creating Validation Tooling
 
  Follow `.allhands/flows/shared/CREATE_VALIDATION_TOOLING_SPEC.md` for the full process. This creates a spec, not an implementation.
 
  ### Research Phase
- - Run `ah tavily search "<validation_type> testing tools"` for available tools
- - Run `ah perplexity research "best practices <validation_type> testing <technology>"` for best practices
+ - `ah tavily search "<validation_type> testing tools"` for available tools
+ - `ah perplexity research "best practices <validation_type> testing <technology>"` for best practices
  - Determine whether the domain has a meaningful stochastic dimension before proceeding
- - Run `ah tools --list` to check existing MCP integrations
+ - `ah tools --list` to check existing MCP integrations
 
  ### Tool Validation Phase
- Per **Agentic Validation Tooling**, research produces assumptions; running the tool produces ground truth:
+ Research produces assumptions; running the tool produces ground truth:
  - Install and verify tool responds to `--help`
  - Create a minimal test target (temp directory, not committed)
  - Execute representative stochastic workflows
- - Systematically try commands against codebase-relevant scenarios
+ - Try commands against codebase-relevant scenarios
  - Document divergences from researched documentation
 
  ### Suite Writing Philosophy
 
- Per **Frontier Models are Capable** and **Context is Precious**:
-
- - **`--help` as prerequisite**: Suites MUST instruct agents to pull `<tool> --help` before any exploration — command vocabulary shapes exploration quality. The suite MUST NOT replicate full command docs.
- - **Inline command examples**: Weave brief examples into use-case motivations as calibration anchors not exhaustive catalogs, not separated command reference sections.
- - **Motivation framing**: Frame around harness value: reducing human-in-loop supervision, verifying code quality, confirming implementation matches expectations.
- - **Exploration categories**: Describe with enough command specificity to orient. For untested territory, prefer motivations over prescriptive sequences — the agent extrapolates better from goals than rigid steps. For patterns verified through testing, state them authoritatively (see below).
+ - **`--help` as prerequisite**: Suites MUST instruct agents to run `<tool> --help` before exploration. Suites MUST NOT replicate full command docs.
+ - **Inline command examples**: Weave brief examples into use-case motivations as calibration anchors — not exhaustive catalogs.
+ - **Motivation framing**: Frame around reducing human-in-loop supervision, verifying quality, confirming implementation matches expectations.
+ - **Exploration categories**: Enough command specificity to orient. Untested territory: motivations over prescriptive sequences. Verified patterns: state authoritatively.
 
- Formula: **motivations backed by inline command examples + `--help` as prerequisite and progressive disclosure**. Commands woven into use cases give direction; `--help` reveals depth.
+ Formula: **motivations + inline command examples + `--help` for progressive disclosure**.
 
  ### Proven vs Untested Guidance
 
- Validation suites should be grounded in hands-on testing against the actual repo, not theoretical instructions. The level of authority in how guidance is written depends on whether it has been verified:
+ - **Proven patterns** (verified via Tool Validation Phase): State authoritatively within use-case motivations. Override generic tool docs when they conflict. Example: "`xctrace` requires `--device '<UDID>'` for simulator" is a hard requirement discovered through testing, stated directly alongside the motivation.
+ - **Untested edge cases**: Define the motivation and reference analogous solved examples. Do NOT write prescriptive steps for unverified scenarios — frontier models given clear motivation and a reference example extrapolate better than they follow rigid, untested instructions.
 
- - **Proven patterns** (verified via the Tool Validation Phase): State authoritatively within use-case motivations — the pattern is established fact, not a suggestion. These override generic tool documentation when they conflict. Example: "`xctrace` requires `--device '<UDID>'` for simulator" is a hard requirement discovered through testing, stated directly alongside the motivation (why: `xctrace` can't find simulator processes without it). The motivation formula still applies — proven patterns are *authoritative examples within motivations*, not raw command catalogs.
- - **Untested edge cases** (not yet exercised in this repo): Define the **motivation** (what the agent should achieve and why) and reference **analogous solved examples** from proven patterns. Do NOT write prescriptive step-by-step instructions for scenarios that haven't been verified — unverified prescriptions can mislead the agent into rigid sequences that don't match reality. Instead, trust that a frontier model given clear motivation and a reference example of how a similar problem was solved will extrapolate the correct approach through stochastic exploration.
-
- **Why this matters**: Frontier models produce emergent, adaptive behavior when given goals and reference points. Unverified prescriptive instructions constrain this emergence and risk encoding incorrect assumptions. Motivation + examples activate the model's reasoning about the problem space; rigid untested instructions bypass it. The Tool Validation Phase exists to convert untested guidance into proven patterns over time — the crystallization lifecycle in action.
+ The Tool Validation Phase converts untested guidance into proven patterns over time — the crystallization lifecycle in action.
 
  ### Evidence Capture
 
- Per **Quality Engineering**, two audiences require different artifacts:
-
- - **Agent (self-verification)**: Primitives used during the observe-act-verify loop (state checks, assertions, console output). Real-time, not recorded.
- - **Engineer (review artifacts)**: Trust evidence produced after exploration (recordings, screenshots, traces, reports).
+ - **Agent (self-verification)**: State checks, assertions, console output during observe-act-verify. Real-time, not recorded.
+ - **Engineer (review artifacts)**: Recordings, screenshots, traces, reports produced after exploration.
 
  Pattern: explore first, capture second.
 
  ## Validation Suite Schema
 
- Run `ah schema validation-suite` for the authoritative schema. Key sections in a suite:
+ Run `ah schema validation-suite` for the authoritative schema. Key sections:
 
- - **Stochastic Validation**: Agent-driven exploratory testing with model intuition
+ - **Stochastic Validation**: Agent-driven exploratory testing
  - **Deterministic Integration**: Binary pass/fail commands that gate completion
 
- List available suites: `ah validation-tools list`
-
  ## Integration with Prompt Execution
 
- Prompt files reference validation suites in their `validation_suites` frontmatter. During execution:
- 1. Agent reads suite's **Stochastic Validation** section during implementation for exploratory quality
- 2. Agent runs suite's **Deterministic Integration** section for acceptance criteria gating
+ Prompt files reference suites in `validation_suites` frontmatter. During execution:
+ 1. Agent reads **Stochastic Validation** during implementation
+ 2. Agent runs **Deterministic Integration** for acceptance criteria gating
  3. Validation review (`PROMPT_VALIDATION_REVIEW.md`) confirms pass/fail
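As a minimal sketch of what such frontmatter might look like (the YAML shape beyond the `validation_suites` key is an assumption; the suite names are the defaults shipped with this repo):

```yaml
# Hypothetical prompt-file frontmatter; only validation_suites is
# described in this document, and the exact layout is illustrative.
---
validation_suites:
  - browser-automation
  - xcode-automation
---
```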
 
  ## Command Documentation Principle
 
- Two categories of commands exist in validation suites, each requiring different documentation approaches:
-
- **External tooling commands — Document explicitly**: Commands from external tools (`xctrace`, `xcrun simctl`, `agent-browser`, `playwright`, `curl`, etc.) are stable, unfamiliar to agents by default, and unlikely to change with codebase evolution. Document specific commands, flags, and use cases inline with motivations. Example from xcode-automation: `xcrun xctrace record --template 'Time Profiler' --device '<UDID>' --attach '<PID>'` — the flags, ordering constraints, and PID discovery method are all external tool knowledge that the suite documents explicitly.
-
- **Internal codebase commands — Document patterns, not inventories**: Project-specific scripts, test commands, and codebase-specific CLI wrappers evolve rapidly. Instead:
- 1. **Document core infrastructure commands explicitly** — commands that boot services, manage environments, and are foundational to validation in the target project. These are stable and essential per-project, but suites should teach agents how to discover them (e.g., "check `package.json` scripts" or "run `--help`"), not hardcode specific script names.
- 2. **Teach patterns for everything else** — naming conventions, where to discover project commands, what categories mean, and how to build upon them.
- 3. **Document motivations** — why different test categories exist, when to use which, what confidence each provides.
-
- Per **Frontier Models are Capable**: An agent given patterns + motivations + discovery instructions outperforms one given stale command inventories. Suites that teach patterns age gracefully; suites that enumerate commands require maintenance on every change.
+ - **External tooling** (xctrace, simctl, playwright, etc.) — Document explicitly: commands, flags, use cases inline with motivations. Stable and unfamiliar to agents by default. Example from xcode-automation: `xcrun xctrace record --template 'Time Profiler' --device '<UDID>' --attach '<PID>'` — flags, ordering constraints, and PID discovery are external tool knowledge that belongs in the suite.
+ - **Internal codebase commands** — Document patterns, not inventories: teach discovery (`package.json` scripts, `--help`), naming conventions, motivations for test categories. Pattern-based suites age gracefully; command inventories require constant maintenance.
 
  ## Decision Tree Requirement
 
- Every validation suite MUST include a decision tree that routes agents to the correct validation approach based on their situation. Decision trees:
- - Distinguish which instructions are relevant to which validation scenario (e.g., UI-only test vs full E2E with native code changes)
- - Show where/when stochastic vs deterministic testing applies
- - Surface deterministic branch points where other validation suites must be utilized (e.g., "Does this branch have native code changes? → Yes → follow xcode-automation decision tree")
- - Cleanly articulate multiple expected use cases within a single suite
+ Every suite MUST include a decision tree routing agents to the correct validation approach:
+ - Distinguish relevant instructions per scenario (e.g., UI-only vs full E2E)
+ - Show where stochastic vs deterministic testing applies
+ - Surface branch points where other suites must be utilized (e.g., "Does this branch have native code changes? → Yes → follow xcode-automation decision tree")
 
- The decision tree replaces flat prerequisite lists with structured routing. An agent reads the tree and follows the branch matching their situation, skipping irrelevant setup and finding the right cross-references.
+ The decision tree replaces flat prerequisite lists with structured routing: an agent follows the branch matching its situation, skipping irrelevant setup.
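As a hypothetical illustration of such routing (the branch wording is invented; only the xcode-automation cross-reference and the browser-automation stochastic/deterministic split come from this file):

```text
Does this change touch native code?
├─ Yes → follow the xcode-automation decision tree (build + simulator verification)
└─ No  → Is it UI-only?
    ├─ Yes → stochastic browser exploration, then the deterministic Playwright gate
    └─ No  → deterministic checks only (types, lint, unit tests)
```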
 
  ## tmux Session Management Standard
 
- All suites that require long-running processes (dev servers, Expo servers, Flask API, Metro bundler) MUST use the tmux approach proven in xcode-automation:
+ Suites requiring long-running processes MUST use tmux:
 
  ```bash
- # CRITICAL: -t $TMUX_PANE pins split to agent's window, not user's focused window
+ # -t $TMUX_PANE pins split to agent's window, not user's focused window
  tmux split-window -h -d -t $TMUX_PANE \
  -c /path/to/repo '<command>'
  ```
 
- **Observability**: Agents MUST verify processes are running correctly via tmux pane capture (`tmux capture-pane -p -t <pane_id>`) before proceeding with validation. This prevents silent failures where a dev server fails to start but the agent proceeds to test against nothing.
-
- **Teardown**: Reverse order of setup. Kill processes via `tmux send-keys -t <pane_id> C-c` or kill the pane.
-
- **Worktree isolation**: Each worktree uses unique ports (via `.env.local`), so tmux sessions in different worktrees don't conflict. Agents must use the correct repo path (`-c`) for the worktree they're operating in.
+ - **Observability**: Verify via `tmux capture-pane -p -t <pane_id>` before proceeding
+ - **Teardown**: Reverse order. `tmux send-keys -t <pane_id> C-c` or kill the pane
+ - **Worktree isolation**: Unique ports per worktree (`.env.local`), correct repo path (`-c`)
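The observability check can be sketched as a shell fragment. The readiness string and the captured text are hypothetical; in a live session `captured` would come from `tmux capture-pane -p -t "$pane_id"`:

```shell
# Hypothetical pane output; in a real session this comes from:
#   captured=$(tmux capture-pane -p -t "$pane_id")
captured='> npm run dev
Listening on http://localhost:3000'

# Observability gate: proceed to validation only once the expected
# readiness line appears; otherwise the agent would test against nothing.
if printf '%s\n' "$captured" | grep -q 'Listening on'; then
  status=ready
else
  status=not-ready
fi
echo "$status"
```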
 
  Reference xcode-automation as the canonical tmux pattern.
 
  ## Hypothesis-First Validation Workflow
 
- New suites should be drafted, then tested hands-on on a feature branch before guidance is marked as proven. This aligns with the Proven vs Untested Guidance principle:
+ New suites: draft, then test on a feature branch before marking guidance as proven.
 
- 1. **Draft**: Write suite files based on plan and codebase analysis (mark unverified practices as hypotheses)
- 2. **Test on feature branch**: Check out a feature branch and exercise each suite's practices hands-on — boot services, run commands, verify workflows, test worktree isolation
- 3. **Verify & adjust**: Document what works, what doesn't, what needs adjustment. Worktree-specific concerns get explicit verification.
- 4. **Solidify**: Only after verification do practices become authoritative guidance. Unverified practices stay framed as motivations per the Proven vs Untested Guidance principle.
+ 1. **Draft**: Write suite based on plan/analysis (mark unverified practices as hypotheses)
+ 2. **Test on feature branch**: Exercise practices hands-on
+ 3. **Verify & adjust**: Document what works, what doesn't
+ 4. **Solidify**: Only verified practices become authoritative guidance
 
- The plan/handoff document persists as the hypothesis record. If implementation runs long, it serves as the handoff document for future work.
+ The plan/handoff document persists as the hypothesis record for future work.
 
  ## Cross-Referencing Between Suites
 
- **Reference** when complex multi-step setup is involved (e.g., simulator setup spanning multiple tools) — point to the authoritative suite's decision tree rather than duplicating instructions.
-
- **Inline** when the command is simple and stable (e.g., `xcrun simctl boot <UDID>`) — no need to send agents to another document for a single command.
-
- Decision trees are the natural place for cross-references — branch points that route to another suite's decision tree. Example from browser-automation: "Does the change affect native iOS rendering? → Yes → follow xcode-automation decision tree for build and simulator verification."
-
- ## Testing Scenario Matrix
-
- Target repositories should build a scenario matrix mapping their validation scenarios to suite combinations. The matrix documents which suites apply to which types of changes, so agents can quickly determine what validation is needed. Structure as a table:
+ - **Reference** for complex multi-step setup — point to the authoritative suite's decision tree
+ - **Inline** for simple, stable commands — no redirect needed for a single command
 
- | Scenario | Suite(s) | Notes |
- |----------|----------|-------|
- | _Description of change type_ | _Which suites apply_ | _Any special setup or cross-references_ |
+ Decision tree branch points are the natural place for cross-references.
 
- Example using this repo's default suites:
+ ## Suite Discoverability
 
- | Scenario | Suite(s) | Notes |
- |----------|----------|-------|
- | Browser UI changes only | browser-automation | Dev server must be running |
- | Native iOS/macOS changes | xcode-automation | Simulator setup via session defaults |
- | Cross-platform changes (web + native) | browser-automation + xcode-automation | Each suite's decision tree routes to the relevant validation path |
+ Suite discovery is programmatic, not manual. No maintained inventories or mapping tables.
 
- When a suite serves as a shared dependency for multiple scenarios (e.g., a database management suite referenced by both API and front-end suites), it should be cross-referenced via decision tree branch points rather than duplicated.
+ - **During creation**: Run `ah validation-tools list` to check for overlap and cross-reference points before creating a new suite.
+ - **During utilization**: Agents run `ah validation-tools list` to discover suites via glob patterns and descriptions. Decision trees handle routing.
 
  ## Environment Management Patterns
 
- Validation suites that depend on environment configuration should document these patterns for their domain:
-
- **ENV injection**: Document how the target project injects environment variables for different contexts (local development, testing, production). Suites should teach the pattern (e.g., "check for `.env.*` files and wrapper scripts") rather than hardcoding specific variable names.
-
- **Service isolation**: When validation requires running services (dev servers, databases, bundlers), document how to avoid port conflicts across concurrent worktrees or parallel agent sessions. Reference the suite's ENV Configuration table for relevant variables.
-
- **Worktree isolation**: Each worktree should use unique ports and isolated service instances where possible. Suites should document which resources need isolation and how to configure it (e.g., xcode-automation documents simulator isolation via dedicated simulator clones and derived data paths).
-
- ## Suite Creation Guidance
+ Suites depending on environment configuration should document:
 
- When creating a new validation suite for a new domain:
+ - **ENV injection**: Teach discovery patterns (e.g., "check `.env.*` files") rather than hardcoding variable names
+ - **Service isolation**: How to avoid port conflicts across concurrent worktrees/sessions
+ - **Worktree isolation**: Unique ports and isolated service instances per worktree
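A minimal sketch of the port-isolation idea, with entirely hypothetical variable names (the harness's actual `.env.local` keys are not specified in this document):

```shell
# Hypothetical .env.local content for one worktree; each worktree gets a
# unique slot so concurrent agent sessions never contend for the same port.
WORKTREE_SLOT=2
DEV_SERVER_PORT=$((3000 + WORKTREE_SLOT))
METRO_PORT=$((8081 + WORKTREE_SLOT))
echo "$DEV_SERVER_PORT $METRO_PORT"
```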
 
- **Engineer provides**: Testing scenarios, tooling requirements, CI/CD integration needs, cross-references to existing suites.
+ ## Suite Creation Checklist
 
- **Suite author follows**:
- 1. Follow the validation suite schema (`ah schema validation-suite`)
- 2. Validate the stochastic dimension meets the existence threshold
- 3. Apply the Command Documentation Principle — external tools explicit, internal commands via patterns + discovery
- 4. Include a Decision Tree routing agents to the correct validation path
- 5. Use tmux Session Management Standard for long-running processes
- 6. Document proven vs untested guidance per the Hypothesis-First Validation Workflow
+ 1. Follow `ah schema validation-suite`
+ 2. Validate stochastic dimension meets existence threshold
+ 3. External tools explicit, internal commands via patterns + discovery
+ 4. Include a Decision Tree
+ 5. Use tmux standard for long-running processes
+ 6. Mark proven vs untested guidance
  7. Cross-reference other suites at decision tree branch points
 
- **Structural templates** (reference the existing default suites for patterns):
- - xcode-automation — external-tool-heavy suite (MCP tools, xctrace, simctl). Reference for suites that primarily wrap external CLI tools with agent-driven exploration.
- - browser-automation — dual-dimension suite (agent-browser stochastic, Playwright deterministic). Reference for suites that have both agent-driven exploration and scripted CI-gated tests.
+ **Structural templates**: xcode-automation (external-tool-heavy), browser-automation (dual stochastic/deterministic).
 
  ## Related References
 
- - [`tools-commands-mcp-hooks.md`](tools-commands-mcp-hooks.md) — When validation uses hooks, CLI commands, or MCP research tools
- - [`knowledge-compounding.md`](knowledge-compounding.md) — When crystallized patterns need to compound into persistent knowledge
+ - [`tools-commands-mcp-hooks.md`](tools-commands-mcp-hooks.md) — Validation using hooks, CLI commands, or MCP tools
+ - [`knowledge-compounding.md`](knowledge-compounding.md) — Crystallized patterns compounding into persistent knowledge
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "all-hands-cli",
- "version": "0.1.12",
+ "version": "0.1.13",
  "description": "Agentic harness for model-first software development",
  "type": "module",
  "bin": {