all-hands-cli 0.1.12 → 0.1.14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,203 +1,157 @@
  # Validation Tooling

- Per **Agentic Validation Tooling**, programmatic validation replaces human supervision. This reference covers how validation suites are created, structured, and how they compound from stochastic exploration into deterministic gates.
+ Programmatic validation replaces human supervision. Validation suites compound from stochastic exploration into deterministic gates.

  ## Crystallization Lifecycle

- Per **Agentic Validation Tooling**, validation compounds through a lifecycle:
-
- 1. **Stochastic exploration** — Agent-driven exploratory testing using model intuition discovers patterns
+ 1. **Stochastic exploration** — Agent-driven exploratory testing discovers patterns
  2. **Pattern crystallization** — Discovered patterns become deterministic checks
  3. **CI/CD entrenchment** — Deterministic checks gate releases
  4. **Frontier shift** — Stochastic exploration moves to new unknowns

- This is how validation compounds. Every domain has both a stochastic dimension (exploratory) and a deterministic dimension (binary pass/fail).
+ Every domain has both a stochastic dimension (exploratory) and a deterministic dimension (binary pass/fail).

  ## Suite Existence Threshold

- A validation suite must have a meaningful stochastic dimension to justify existing. Deterministic-only tools (type checking, linting, formatting) are test commands referenced directly in acceptance criteria and CI/CD — they are NOT suites.
+ A suite must have a meaningful stochastic dimension to justify existing. Deterministic-only tools (type checking, linting, formatting) are test commands in acceptance criteria and CI/CD — they are NOT suites.

  ## Repository Agnosticism

- This reference file is a generic rule file that ships with the harness. It MUST NOT contain references to project-specific validation suites, commands, or infrastructure. All examples must either:
- - Reference existing default validation suites shipped with this repo (currently: xcode-automation, browser-automation)
- - Use generic/hypothetical descriptions that any target repository can map to their own context
-
- When examples are needed, use **snippets from the existing default suites** rather than naming suites or commands that belong to a specific target project. Target repositories create their own suites for their domains — this file teaches how to create and structure them, not what they should be called.
+ This file MUST NOT contain project-specific references. All examples must either reference default suites shipped with this repo (currently: xcode-automation, browser-automation) or use generic descriptions. This file teaches patterns, not inventories.

- **Why**: Target repositories consume this file as authoritative guidance. Project-specific references create confusion (agents look for suites that don't exist), couple the harness to a single project, and violate the principle that this file teaches patterns, not inventories. If a pattern needs a concrete example, draw it from xcode-automation or browser-automation.
+ Project-specific references cause agents to look for suites that don't exist in target repos and couple the harness to a single project. If a pattern needs a concrete example, draw it from xcode-automation or browser-automation.

  ## Creating Validation Tooling

  Follow `.allhands/flows/shared/CREATE_VALIDATION_TOOLING_SPEC.md` for the full process. This creates a spec, not an implementation.

  ### Research Phase
- - Run `ah tavily search "<validation_type> testing tools"` for available tools
- - Run `ah perplexity research "best practices <validation_type> testing <technology>"` for best practices
+ - `ah tavily search "<validation_type> testing tools"` for available tools
+ - `ah perplexity research "best practices <validation_type> testing <technology>"` for best practices
  - Determine whether the domain has a meaningful stochastic dimension before proceeding
- - Run `ah tools --list` to check existing MCP integrations
+ - `ah tools --list` to check existing MCP integrations

  ### Tool Validation Phase
- Per **Agentic Validation Tooling**, research produces assumptions; running the tool produces ground truth:
+ Research produces assumptions; running the tool produces ground truth:
  - Install and verify tool responds to `--help`
  - Create a minimal test target (temp directory, not committed)
  - Execute representative stochastic workflows
- - Systematically try commands against codebase-relevant scenarios
+ - Try commands against codebase-relevant scenarios
  - Document divergences from researched documentation

  ### Suite Writing Philosophy

- Per **Frontier Models are Capable** and **Context is Precious**:
-
- - **`--help` as prerequisite**: Suites MUST instruct agents to pull `<tool> --help` before any exploration — command vocabulary shapes exploration quality. The suite MUST NOT replicate full command docs.
- - **Inline command examples**: Weave brief examples into use-case motivations as calibration anchors not exhaustive catalogs, not separated command reference sections.
- - **Motivation framing**: Frame around harness value: reducing human-in-loop supervision, verifying code quality, confirming implementation matches expectations.
- - **Exploration categories**: Describe with enough command specificity to orient. For untested territory, prefer motivations over prescriptive sequences — the agent extrapolates better from goals than rigid steps. For patterns verified through testing, state them authoritatively (see below).
+ - **`--help` as prerequisite**: Suites MUST instruct agents to run `<tool> --help` before exploration. Suites MUST NOT replicate full command docs.
+ - **Inline command examples**: Weave brief examples into use-case motivations as calibration anchors — not exhaustive catalogs.
+ - **Motivation framing**: Frame around reducing human-in-loop supervision, verifying quality, confirming implementation matches expectations.
+ - **Exploration categories**: Give enough command specificity to orient. For untested territory, prefer motivations over prescriptive sequences; state verified patterns authoritatively.

- Formula: **motivations backed by inline command examples + `--help` as prerequisite and progressive disclosure**. Commands woven into use cases give direction; `--help` reveals depth.
+ Formula: **motivations + inline command examples + `--help` for progressive disclosure**.

  ### Proven vs Untested Guidance

- Validation suites should be grounded in hands-on testing against the actual repo, not theoretical instructions. The level of authority in how guidance is written depends on whether it has been verified:
+ - **Proven patterns** (verified via Tool Validation Phase): State authoritatively within use-case motivations. Override generic tool docs when they conflict. Example: "`xctrace` requires `--device '<UDID>'` for simulator" is a hard requirement discovered through testing, stated directly alongside the motivation.
+ - **Untested edge cases**: Define the motivation and reference analogous solved examples. Do NOT write prescriptive steps for unverified scenarios — frontier models given clear motivation and a reference example extrapolate better than they follow rigid untested instructions.

- - **Proven patterns** (verified via the Tool Validation Phase): State authoritatively within use-case motivations — the pattern is established fact, not a suggestion. These override generic tool documentation when they conflict. Example: "`xctrace` requires `--device '<UDID>'` for simulator" is a hard requirement discovered through testing, stated directly alongside the motivation (why: `xctrace` can't find simulator processes without it). The motivation formula still applies — proven patterns are *authoritative examples within motivations*, not raw command catalogs.
- - **Untested edge cases** (not yet exercised in this repo): Define the **motivation** (what the agent should achieve and why) and reference **analogous solved examples** from proven patterns. Do NOT write prescriptive step-by-step instructions for scenarios that haven't been verified — unverified prescriptions can mislead the agent into rigid sequences that don't match reality. Instead, trust that a frontier model given clear motivation and a reference example of how a similar problem was solved will extrapolate the correct approach through stochastic exploration.
-
- **Why this matters**: Frontier models produce emergent, adaptive behavior when given goals and reference points. Unverified prescriptive instructions constrain this emergence and risk encoding incorrect assumptions. Motivation + examples activate the model's reasoning about the problem space; rigid untested instructions bypass it. The Tool Validation Phase exists to convert untested guidance into proven patterns over time — the crystallization lifecycle in action.
+ The Tool Validation Phase converts untested guidance into proven patterns over time — the crystallization lifecycle in action.

  ### Evidence Capture

- Per **Quality Engineering**, two audiences require different artifacts:
-
- - **Agent (self-verification)**: Primitives used during the observe-act-verify loop (state checks, assertions, console output). Real-time, not recorded.
- - **Engineer (review artifacts)**: Trust evidence produced after exploration (recordings, screenshots, traces, reports).
+ - **Agent (self-verification)**: State checks, assertions, console output during observe-act-verify. Real-time, not recorded.
+ - **Engineer (review artifacts)**: Recordings, screenshots, traces, reports produced after exploration.

  Pattern: explore first, capture second.
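
A minimal sketch of the two-audience split, using commands that appear in the default suites (the pane id and output path are illustrative):

```bash
# Agent self-verification during the observe-act-verify loop: real-time, not recorded
tmux capture-pane -p -t %1 | tail -20   # %1 is an illustrative pane id

# Engineer review artifact, captured after exploration is done
xcrun simctl io booted screenshot /tmp/validation-evidence.png
```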

  ## Validation Suite Schema

- Run `ah schema validation-suite` for the authoritative schema. Key sections in a suite:
+ Run `ah schema validation-suite` for the authoritative schema. Key sections:

- - **Stochastic Validation**: Agent-driven exploratory testing with model intuition
+ - **Stochastic Validation**: Agent-driven exploratory testing
  - **Deterministic Integration**: Binary pass/fail commands that gate completion

- List available suites: `ah validation-tools list`
-
  ## Integration with Prompt Execution

- Prompt files reference validation suites in their `validation_suites` frontmatter. During execution:
- 1. Agent reads suite's **Stochastic Validation** section during implementation for exploratory quality
- 2. Agent runs suite's **Deterministic Integration** section for acceptance criteria gating
+ Prompt files reference suites in `validation_suites` frontmatter. During execution:
+ 1. Agent reads **Stochastic Validation** during implementation
+ 2. Agent runs **Deterministic Integration** for acceptance criteria gating
  3. Validation review (`PROMPT_VALIDATION_REVIEW.md`) confirms pass/fail
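
For illustration, a sketch of the frontmatter this refers to; the `validation_suites` field name comes from this section, the suite names are the repo defaults, and the exact YAML layout is an assumption:

```bash
# Hypothetical prompt-file frontmatter (layout assumed; field name from this doc)
cat <<'EOF'
---
validation_suites:
  - browser-automation
  - xcode-automation
---
EOF
```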

  ## Command Documentation Principle

- Two categories of commands exist in validation suites, each requiring different documentation approaches:
-
- **External tooling commands — Document explicitly**: Commands from external tools (`xctrace`, `xcrun simctl`, `agent-browser`, `playwright`, `curl`, etc.) are stable, unfamiliar to agents by default, and unlikely to change with codebase evolution. Document specific commands, flags, and use cases inline with motivations. Example from xcode-automation: `xcrun xctrace record --template 'Time Profiler' --device '<UDID>' --attach '<PID>'` — the flags, ordering constraints, and PID discovery method are all external tool knowledge that the suite documents explicitly.
-
- **Internal codebase commands — Document patterns, not inventories**: Project-specific scripts, test commands, and codebase-specific CLI wrappers evolve rapidly. Instead:
- 1. **Document core infrastructure commands explicitly** — commands that boot services, manage environments, and are foundational to validation in the target project. These are stable and essential per-project, but suites should teach agents how to discover them (e.g., "check `package.json` scripts" or "run `--help`"), not hardcode specific script names.
- 2. **Teach patterns for everything else** — naming conventions, where to discover project commands, what categories mean, and how to build upon them.
- 3. **Document motivations** — why different test categories exist, when to use which, what confidence each provides.
-
- Per **Frontier Models are Capable**: An agent given patterns + motivations + discovery instructions outperforms one given stale command inventories. Suites that teach patterns age gracefully; suites that enumerate commands require maintenance on every change.
+ - **External tooling** (xctrace, simctl, playwright, etc.) — Document explicitly: commands, flags, use cases inline with motivations. Stable and unfamiliar to agents by default. Example from xcode-automation: `xcrun xctrace record --template 'Time Profiler' --device '<UDID>' --attach '<PID>'` — flags, ordering constraints, and PID discovery are external tool knowledge that belongs in the suite.
+ - **Internal codebase commands** — Document patterns, not inventories: teach discovery (`package.json` scripts, `--help`), naming conventions, motivations for test categories. Pattern-based suites age gracefully; command inventories require constant maintenance.

  ## Decision Tree Requirement

- Every validation suite MUST include a decision tree that routes agents to the correct validation approach based on their situation. Decision trees:
- - Distinguish which instructions are relevant to which validation scenario (e.g., UI-only test vs full E2E with native code changes)
- - Show where/when stochastic vs deterministic testing applies
- - Surface deterministic branch points where other validation suites must be utilized (e.g., "Does this branch have native code changes? → Yes → follow xcode-automation decision tree")
- - Cleanly articulate multiple expected use cases within a single suite
+ Every suite MUST include a decision tree routing agents to the correct validation approach:
+ - Distinguish relevant instructions per scenario (e.g., UI-only vs full E2E)
+ - Show where stochastic vs deterministic testing applies
+ - Surface branch points where other suites must be utilized (e.g., "Does this branch have native code changes? → Yes → follow xcode-automation decision tree")

- The decision tree replaces flat prerequisite lists with structured routing. An agent reads the tree and follows the branch matching their situation, skipping irrelevant setup and finding the right cross-references.
+ The decision tree replaces flat prerequisite lists with structured routing: an agent follows the branch matching their situation, skipping irrelevant setup.

  ## tmux Session Management Standard

- All suites that require long-running processes (dev servers, Expo servers, Flask API, Metro bundler) MUST use the tmux approach proven in xcode-automation:
+ Suites requiring long-running processes MUST use tmux:

  ```bash
- # CRITICAL: -t $TMUX_PANE pins split to agent's window, not user's focused window
+ # -t $TMUX_PANE pins split to agent's window, not user's focused window
  tmux split-window -h -d -t $TMUX_PANE \
    -c /path/to/repo '<command>'
  ```

- **Observability**: Agents MUST verify processes are running correctly via tmux pane capture (`tmux capture-pane -p -t <pane_id>`) before proceeding with validation. This prevents silent failures where a dev server fails to start but the agent proceeds to test against nothing.
-
- **Teardown**: Reverse order of setup. Kill processes via `tmux send-keys -t <pane_id> C-c` or kill the pane.
-
- **Worktree isolation**: Each worktree uses unique ports (via `.env.local`), so tmux sessions in different worktrees don't conflict. Agents must use the correct repo path (`-c`) for the worktree they're operating in.
+ - **Observability**: Verify via `tmux capture-pane -p -t <pane_id>` before proceeding
+ - **Teardown**: Reverse order. `tmux send-keys -t <pane_id> C-c` or kill the pane
+ - **Worktree isolation**: Unique ports per worktree (`.env.local`), correct repo path (`-c`)

  Reference xcode-automation as the canonical tmux pattern.
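
To make the standard concrete, a hedged end-to-end sketch; the pane discovery, worktree path, and `npm run dev` command are illustrative, while the tmux flags are the ones mandated above:

```bash
# 1. Boot the long-running process in a detached split pinned to the agent's window
tmux split-window -h -d -t $TMUX_PANE -c /path/to/worktree 'npm run dev'

# 2. Observability: confirm the process actually started before validating against it
pane_id=$(tmux list-panes -t $TMUX_PANE -F '#{pane_id}' | tail -1)  # newest pane, heuristically
tmux capture-pane -p -t "$pane_id" | tail -20   # look for the server's ready line

# 3. Teardown in reverse order once validation completes
tmux send-keys -t "$pane_id" C-c
```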

  ## Hypothesis-First Validation Workflow

- New suites should be drafted, then tested hands-on on a feature branch before guidance is marked as proven. This aligns with the Proven vs Untested Guidance principle:
+ New suites: draft, then test on a feature branch before marking guidance as proven.

- 1. **Draft**: Write suite files based on plan and codebase analysis (mark unverified practices as hypotheses)
- 2. **Test on feature branch**: Check out a feature branch and exercise each suite's practices hands-on — boot services, run commands, verify workflows, test worktree isolation
- 3. **Verify & adjust**: Document what works, what doesn't, what needs adjustment. Worktree-specific concerns get explicit verification.
- 4. **Solidify**: Only after verification do practices become authoritative guidance. Unverified practices stay framed as motivations per the Proven vs Untested Guidance principle.
+ 1. **Draft**: Write suite based on plan/analysis (mark unverified practices as hypotheses)
+ 2. **Test on feature branch**: Exercise practices hands-on
+ 3. **Verify & adjust**: Document what works, what doesn't
+ 4. **Solidify**: Only verified practices become authoritative guidance

- The plan/handoff document persists as the hypothesis record. If implementation runs long, it serves as the handoff document for future work.
+ The plan/handoff document persists as the hypothesis record for future work.

  ## Cross-Referencing Between Suites

- **Reference** when complex multi-step setup is involved (e.g., simulator setup spanning multiple tools) — point to the authoritative suite's decision tree rather than duplicating instructions.
-
- **Inline** when the command is simple and stable (e.g., `xcrun simctl boot <UDID>`) — no need to send agents to another document for a single command.
-
- Decision trees are the natural place for cross-references — branch points that route to another suite's decision tree. Example from browser-automation: "Does the change affect native iOS rendering? → Yes → follow xcode-automation decision tree for build and simulator verification."
-
- ## Testing Scenario Matrix
-
- Target repositories should build a scenario matrix mapping their validation scenarios to suite combinations. The matrix documents which suites apply to which types of changes, so agents can quickly determine what validation is needed. Structure as a table:
+ - **Reference** for complex multi-step setup — point to the authoritative suite's decision tree
+ - **Inline** for simple, stable commands — no redirect needed for a single command

- | Scenario | Suite(s) | Notes |
- |----------|----------|-------|
- | _Description of change type_ | _Which suites apply_ | _Any special setup or cross-references_ |
+ Decision tree branch points are the natural place for cross-references.

- Example using this repo's default suites:
+ ## Suite Discoverability

- | Scenario | Suite(s) | Notes |
- |----------|----------|-------|
- | Browser UI changes only | browser-automation | Dev server must be running |
- | Native iOS/macOS changes | xcode-automation | Simulator setup via session defaults |
- | Cross-platform changes (web + native) | browser-automation + xcode-automation | Each suite's decision tree routes to the relevant validation path |
+ Suite discovery is programmatic, not manual. No maintained inventories or mapping tables.

- When a suite serves as a shared dependency for multiple scenarios (e.g., a database management suite referenced by both API and front-end suites), it should be cross-referenced via decision tree branch points rather than duplicated.
+ - **During creation**: Run `ah validation-tools list` to check for overlap and cross-reference points before creating a new suite.
+ - **During utilization**: Agents run `ah validation-tools list` to discover suites via glob patterns and descriptions. Decision trees handle routing.

  ## Environment Management Patterns

- Validation suites that depend on environment configuration should document these patterns for their domain:
-
- **ENV injection**: Document how the target project injects environment variables for different contexts (local development, testing, production). Suites should teach the pattern (e.g., "check for `.env.*` files and wrapper scripts") rather than hardcoding specific variable names.
-
- **Service isolation**: When validation requires running services (dev servers, databases, bundlers), document how to avoid port conflicts across concurrent worktrees or parallel agent sessions. Reference the suite's ENV Configuration table for relevant variables.
-
- **Worktree isolation**: Each worktree should use unique ports and isolated service instances where possible. Suites should document which resources need isolation and how to configure it (e.g., xcode-automation documents simulator isolation via dedicated simulator clones and derived data paths).
-
- ## Suite Creation Guidance
+ Suites depending on environment configuration should document:

- When creating a new validation suite for a new domain:
+ - **ENV injection**: Teach discovery patterns (e.g., "check `.env.*` files") rather than hardcoding variable names
+ - **Service isolation**: How to avoid port conflicts across concurrent worktrees/sessions
+ - **Worktree isolation**: Unique ports and isolated service instances per worktree

- **Engineer provides**: Testing scenarios, tooling requirements, CI/CD integration needs, cross-references to existing suites.
+ ## Suite Creation Checklist

- **Suite author follows**:
- 1. Follow the validation suite schema (`ah schema validation-suite`)
- 2. Validate the stochastic dimension meets the existence threshold
- 3. Apply the Command Documentation Principle — external tools explicit, internal commands via patterns + discovery
- 4. Include a Decision Tree routing agents to the correct validation path
- 5. Use tmux Session Management Standard for long-running processes
- 6. Document proven vs untested guidance per the Hypothesis-First Validation Workflow
+ 1. Follow `ah schema validation-suite`
+ 2. Validate stochastic dimension meets existence threshold
+ 3. External tools explicit, internal commands via patterns + discovery
+ 4. Include a Decision Tree
+ 5. Use tmux standard for long-running processes
+ 6. Mark proven vs untested guidance
  7. Cross-reference other suites at decision tree branch points

- **Structural templates** (reference the existing default suites for patterns):
- - xcode-automation — external-tool-heavy suite (MCP tools, xctrace, simctl). Reference for suites that primarily wrap external CLI tools with agent-driven exploration.
- - browser-automation — dual-dimension suite (agent-browser stochastic, Playwright deterministic). Reference for suites that have both agent-driven exploration and scripted CI-gated tests.
+ **Structural templates**: xcode-automation (external-tool-heavy), browser-automation (dual stochastic/deterministic).

  ## Related References

- - [`tools-commands-mcp-hooks.md`](tools-commands-mcp-hooks.md) — When validation uses hooks, CLI commands, or MCP research tools
- - [`knowledge-compounding.md`](knowledge-compounding.md) — When crystallized patterns need to compound into persistent knowledge
+ - [`tools-commands-mcp-hooks.md`](tools-commands-mcp-hooks.md) — Validation using hooks, CLI commands, or MCP tools
+ - [`knowledge-compounding.md`](knowledge-compounding.md) — Crystallized patterns compounding into persistent knowledge
package/.internal.json CHANGED
@@ -29,7 +29,8 @@
    ".reposearch/**",
    "concerns.md",
    "specs/**",
-   ".allhands/docs.local.json"
+   ".allhands/docs.local.json",
+   ".allhands/.sync-state.json"
  ],
  "initOnly": [
    ".allhands/skills/**",
package/bin/sync-cli.js CHANGED
@@ -4856,9 +4856,9 @@ var Yargs = YargsFactory(esm_default);
  var yargs_default = Yargs;

  // src/commands/sync.ts
- import { copyFileSync, existsSync as existsSync9, mkdirSync, readFileSync as readFileSync8, unlinkSync, writeFileSync as writeFileSync3 } from "fs";
+ import { copyFileSync, existsSync as existsSync10, mkdirSync as mkdirSync2, readFileSync as readFileSync9, unlinkSync, writeFileSync as writeFileSync4 } from "fs";
  import { homedir } from "os";
- import { basename as basename4, dirname as dirname7, join as join7, resolve as resolve6 } from "path";
+ import { basename as basename4, dirname as dirname8, join as join8, resolve as resolve6 } from "path";

  // src/lib/git.ts
  import { execSync, spawnSync } from "child_process";
@@ -4909,6 +4909,14 @@ function getFileBlobHash(filePath, repoPath) {
    const result = git(["hash-object", filePath], repoPath);
    return result.success ? result.stdout.trim() : null;
  }
+ function getHeadCommit(repoPath) {
+   const result = git(["rev-parse", "HEAD"], repoPath);
+   return result.success ? result.stdout.trim() : null;
+ }
+ function hasUncommittedChanges(repoPath) {
+   const result = git(["status", "--porcelain"], repoPath);
+   return result.success && result.stdout.length > 0;
+ }
  function fileExistsInHistory(relPath, blobHash, repoPath) {
    const result = git(["rev-list", "HEAD", "--objects", "--", relPath], repoPath);
    if (!result.success || !result.stdout) return false;
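
For orientation, a hedged shell equivalent of the history fallback these helpers feed into (`$ALLHANDS_ROOT`, `$LOCAL_FILE`, and `$REL_PATH` are illustrative placeholders):

```bash
# Does the local copy's blob hash match any version of this path in upstream history?
# Mirrors getFileBlobHash + fileExistsInHistory above; a hit means "not locally modified".
blob=$(git -C "$ALLHANDS_ROOT" hash-object "$LOCAL_FILE")
git -C "$ALLHANDS_ROOT" rev-list HEAD --objects -- "$REL_PATH" | grep -q "^$blob"
```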
@@ -6789,7 +6797,8 @@ function getNextBackupPath(filePath) {

  // src/lib/constants.ts
  var SYNC_CONFIG_FILENAME = ".allhands-sync-config.json";
- var PUSH_BLOCKLIST = ["CLAUDE.project.md", ".allhands-sync-config.json"];
+ var SYNC_STATE_FILENAME = ".allhands/.sync-state.json";
+ var PUSH_BLOCKLIST = ["CLAUDE.project.md", ".allhands-sync-config.json", ".allhands/.sync-state.json"];
  var SYNC_CONFIG_TEMPLATE = {
    $comment: "Customization for claude-all-hands push command",
    includes: [],
@@ -6884,6 +6893,52 @@ function ensureTargetLines(targetRoot, verbose = false) {
    return anyChanged;
  }

+ // src/lib/sync-state.ts
+ import { existsSync as existsSync9, mkdirSync, readFileSync as readFileSync8, writeFileSync as writeFileSync3 } from "fs";
+ import { dirname as dirname7, join as join7 } from "path";
+ function writeSyncState(targetRoot, allhandsRoot, syncedFiles) {
+   const files = {};
+   for (const relPath of [...syncedFiles].sort()) {
+     const sourceFile = join7(allhandsRoot, relPath);
+     if (!existsSync9(sourceFile)) continue;
+     const hash = getFileBlobHash(sourceFile, allhandsRoot);
+     if (hash) {
+       files[relPath] = hash;
+     }
+   }
+   const state = {
+     version: 1,
+     syncedAt: (/* @__PURE__ */ new Date()).toISOString(),
+     sourceCommit: getHeadCommit(allhandsRoot),
+     dirty: hasUncommittedChanges(allhandsRoot),
+     files
+   };
+   const outPath = join7(targetRoot, SYNC_STATE_FILENAME);
+   mkdirSync(dirname7(outPath), { recursive: true });
+   writeFileSync3(outPath, JSON.stringify(state, null, 2) + "\n");
+ }
+ function readSyncState(targetRoot) {
+   const stateFile = join7(targetRoot, SYNC_STATE_FILENAME);
+   if (!existsSync9(stateFile)) return null;
+   try {
+     const content = readFileSync8(stateFile, "utf-8");
+     const parsed = JSON.parse(content);
+     if (!parsed || typeof parsed !== "object" || parsed.version !== 1 || !parsed.files || typeof parsed.files !== "object") {
+       return null;
+     }
+     return parsed;
+   } catch {
+     return null;
+   }
+ }
+ function wasModifiedSinceSync(targetFilePath, relPath, syncState, repoPath) {
+   const manifestHash = syncState.files[relPath];
+   if (!manifestHash) return null;
+   const targetHash = getFileBlobHash(targetFilePath, repoPath);
+   if (!targetHash) return true;
+   return targetHash !== manifestHash;
+ }
+
  // src/commands/sync.ts
  var AH_SHIM_SCRIPT = `#!/bin/bash
  # AllHands CLI shim - finds and executes project-local .allhands/harness/ah
@@ -6902,34 +6957,34 @@ echo "hint: run 'npx all-hands sync .' to initialize this project" >&2
  exit 1
  `;
  function setupAhShim() {
-   const localBin = join7(homedir(), ".local", "bin");
-   const shimPath = join7(localBin, "ah");
+   const localBin = join8(homedir(), ".local", "bin");
+   const shimPath = join8(localBin, "ah");
    const pathEnv = process.env.PATH || "";
    const inPath = pathEnv.split(":").some(
-     (p) => p === localBin || p === join7(homedir(), ".local/bin")
+     (p) => p === localBin || p === join8(homedir(), ".local/bin")
    );
-   if (existsSync9(shimPath)) {
-     const existing = readFileSync8(shimPath, "utf-8");
+   if (existsSync10(shimPath)) {
+     const existing = readFileSync9(shimPath, "utf-8");
      if (existing.includes(".allhands/harness/ah")) {
        return { installed: false, path: shimPath, inPath };
      }
    }
-   mkdirSync(localBin, { recursive: true });
-   writeFileSync3(shimPath, AH_SHIM_SCRIPT, { mode: 493 });
+   mkdirSync2(localBin, { recursive: true });
+   writeFileSync4(shimPath, AH_SHIM_SCRIPT, { mode: 493 });
    return { installed: true, path: shimPath, inPath };
  }
  async function cmdSync(target = ".", autoYes = false, init = false) {
    const resolvedTarget = resolve6(process.cwd(), target);
    const allhandsRoot = getAllhandsRoot();
-   const targetAllhandsDir = join7(resolvedTarget, ".allhands");
-   const isFirstTime = !existsSync9(targetAllhandsDir);
+   const targetAllhandsDir = join8(resolvedTarget, ".allhands");
+   const isFirstTime = !existsSync10(targetAllhandsDir);
    if (isFirstTime) {
      console.log(`Initializing allhands in: ${resolvedTarget}`);
    } else {
      console.log(`Updating allhands in: ${resolvedTarget}`);
    }
    console.log(`Source: ${allhandsRoot}`);
-   if (!existsSync9(resolvedTarget)) {
+   if (!existsSync10(resolvedTarget)) {
      console.error(`Error: Target directory does not exist: ${resolvedTarget}`);
      return 1;
    }
@@ -6973,15 +7028,15 @@ async function cmdSync(target = ".", autoYes = false, init = false) {
    const conflicts = [];
    const deletedInSource = [];
    for (const relPath of distributable) {
-     const sourceFile = join7(allhandsRoot, relPath);
-     const targetFile = join7(resolvedTarget, relPath);
-     if (!existsSync9(sourceFile)) {
-       if (!isFirstTime && existsSync9(targetFile)) {
+     const sourceFile = join8(allhandsRoot, relPath);
+     const targetFile = join8(resolvedTarget, relPath);
+     if (!existsSync10(sourceFile)) {
+       if (!isFirstTime && existsSync10(targetFile)) {
          deletedInSource.push(relPath);
        }
        continue;
      }
-     if (existsSync9(targetFile)) {
+     if (existsSync10(targetFile)) {
        if (filesAreDifferent(sourceFile, targetFile)) {
          conflicts.push(relPath);
        }
@@ -7002,7 +7057,7 @@ Auto-overwriting ${conflicts.length} conflicting files (--yes mode)`);
    if (resolution === "backup") {
      console.log("\nCreating backups...");
      for (const relPath of conflicts) {
-       const targetFile = join7(resolvedTarget, relPath);
+       const targetFile = join8(resolvedTarget, relPath);
        const bkPath = getNextBackupPath(targetFile);
        copyFileSync(targetFile, bkPath);
        console.log(`  ${relPath} \u2192 ${basename4(bkPath)}`);
@@ -7011,12 +7066,14 @@ Auto-overwriting ${conflicts.length} conflicting files (--yes mode)`);
    }
    console.log("\nCopying allhands files...");
    console.log(`Found ${distributable.size} files to distribute`);
+   const syncedFiles = /* @__PURE__ */ new Set();
    for (const relPath of [...distributable].sort()) {
-     const sourceFile = join7(allhandsRoot, relPath);
-     const targetFile = join7(resolvedTarget, relPath);
-     if (!existsSync9(sourceFile)) continue;
-     mkdirSync(dirname7(targetFile), { recursive: true });
-     if (existsSync9(targetFile)) {
+     const sourceFile = join8(allhandsRoot, relPath);
+     const targetFile = join8(resolvedTarget, relPath);
+     if (!existsSync10(sourceFile)) continue;
+     syncedFiles.add(relPath);
+     mkdirSync2(dirname8(targetFile), { recursive: true });
+     if (existsSync10(targetFile)) {
        if (!filesAreDifferent(sourceFile, targetFile)) {
          skipped++;
          continue;
@@ -7029,6 +7086,7 @@ Auto-overwriting ${conflicts.length} conflicting files (--yes mode)`);
      }
    }
    restoreDotfiles(resolvedTarget);
+   writeSyncState(resolvedTarget, allhandsRoot, syncedFiles);
    if (!isFirstTime && deletedInSource.length > 0) {
      console.log(`
${deletedInSource.length} files removed from allhands source:`);
@@ -7038,8 +7096,8 @@ ${deletedInSource.length} files removed from allhands source:`);
    const shouldDelete = autoYes || await confirm("Delete these from target?");
    if (shouldDelete) {
      for (const f of deletedInSource) {
-       const targetFile = join7(resolvedTarget, f);
-       if (existsSync9(targetFile)) {
+       const targetFile = join8(resolvedTarget, f);
+       if (existsSync10(targetFile)) {
          unlinkSync(targetFile);
          console.log(`  Deleted: ${f}`);
        }
@@ -7050,9 +7108,9 @@ ${deletedInSource.length} files removed from allhands source:`);
    const targetLinesUpdated = ensureTargetLines(resolvedTarget, true);
    const envExamples = [".env.example", ".env.ai.example"];
    for (const envExample of envExamples) {
-     const sourceEnv = join7(allhandsRoot, envExample);
-     const targetEnv = join7(resolvedTarget, envExample);
-     if (existsSync9(sourceEnv)) {
+     const sourceEnv = join8(allhandsRoot, envExample);
+     const targetEnv = join8(resolvedTarget, envExample);
+     if (existsSync10(sourceEnv)) {
        console.log(`Copying ${envExample}`);
        copyFileSync(sourceEnv, targetEnv);
      }
@@ -7074,15 +7132,15 @@ ${deletedInSource.length} files removed from allhands source:`);
    }
    let syncConfigCreated = false;
    if (isFirstTime) {
-     const syncConfigPath = join7(resolvedTarget, SYNC_CONFIG_FILENAME);
-     if (existsSync9(syncConfigPath)) {
+     const syncConfigPath = join8(resolvedTarget, SYNC_CONFIG_FILENAME);
+     if (existsSync10(syncConfigPath)) {
        console.log(`
${SYNC_CONFIG_FILENAME} already exists - skipping`);
      } else if (!autoYes) {
        console.log("\nThe push command lets you contribute changes back to all-hands.");
        console.log("A sync config file lets you customize which files to include/exclude.");
        if (await confirm(`Create ${SYNC_CONFIG_FILENAME}?`)) {
-         writeFileSync3(syncConfigPath, JSON.stringify(SYNC_CONFIG_TEMPLATE, null, 2) + "\n");
+         writeFileSync4(syncConfigPath, JSON.stringify(SYNC_CONFIG_TEMPLATE, null, 2) + "\n");
          syncConfigCreated = true;
          console.log(`  Created ${SYNC_CONFIG_FILENAME}`);
        }
@@ -7113,21 +7171,21 @@ ${"=".repeat(60)}`);
  }

  // src/commands/pull-manifest.ts
- import { existsSync as existsSync10, writeFileSync as writeFileSync4 } from "fs";
- import { join as join8 } from "path";
+ import { existsSync as existsSync11, writeFileSync as writeFileSync5 } from "fs";
+ import { join as join9 } from "path";
  async function cmdPullManifest() {
    const cwd = process.cwd();
    if (!isGitRepo(cwd)) {
      console.error("Error: Not in a git repository");
      return 1;
    }
-   const configPath = join8(cwd, SYNC_CONFIG_FILENAME);
-   if (existsSync10(configPath)) {
+   const configPath = join9(cwd, SYNC_CONFIG_FILENAME);
+   if (existsSync11(configPath)) {
      console.error(`Error: ${SYNC_CONFIG_FILENAME} already exists`);
      console.error("Remove it first if you want to regenerate");
      return 1;
    }
-   writeFileSync4(configPath, JSON.stringify(SYNC_CONFIG_TEMPLATE, null, 2) + "\n");
+   writeFileSync5(configPath, JSON.stringify(SYNC_CONFIG_TEMPLATE, null, 2) + "\n");
    console.log(`Created ${SYNC_CONFIG_FILENAME}`);
    console.log("\nUsage:");
    console.log(' - Add file paths to "includes" to push additional files');
@@ -7137,9 +7195,9 @@ async function cmdPullManifest() {
  }

  // src/commands/push.ts
- import { copyFileSync as copyFileSync2, existsSync as existsSync11, mkdirSync as mkdirSync2, readFileSync as readFileSync9, rmSync } from "fs";
+ import { copyFileSync as copyFileSync2, existsSync as existsSync12, mkdirSync as mkdirSync3, readFileSync as readFileSync10, rmSync } from "fs";
  import { tmpdir } from "os";
- import { dirname as dirname8, join as join9 } from "path";
+ import { dirname as dirname9, join as join10 } from "path";
  import * as readline2 from "readline";

  // src/lib/gh.ts
@@ -7175,12 +7233,12 @@ function getGhUser() {

  // src/commands/push.ts
  function loadSyncConfig(cwd) {
-   const configPath = join9(cwd, SYNC_CONFIG_FILENAME);
-   if (!existsSync11(configPath)) {
+   const configPath = join10(cwd, SYNC_CONFIG_FILENAME);
+   if (!existsSync12(configPath)) {
      return null;
    }
    try {
-     const content = readFileSync9(configPath, "utf-8");
+     const content = readFileSync10(configPath, "utf-8");
      return JSON.parse(content);
    } catch (e) {
      throw new Error(`Failed to parse ${SYNC_CONFIG_FILENAME}: ${e instanceof Error ? e.message : String(e)}`);
@@ -7233,8 +7291,12 @@ function checkPrerequisites(cwd) {
    }
    return { success: true, ghUser };
  }
- function wasModifiedByTargetRepo(cwd, relPath, allhandsRoot) {
-   const localFile = join9(cwd, relPath);
+ function wasModifiedByTargetRepo(cwd, relPath, allhandsRoot, syncState) {
+   const localFile = join10(cwd, relPath);
+   if (syncState) {
+     const manifestResult = wasModifiedSinceSync(localFile, relPath, syncState, cwd);
+     if (manifestResult !== null) return manifestResult;
+   }
    const localBlobHash = getFileBlobHash(localFile, allhandsRoot);
    if (!localBlobHash) return true;
    return !fileExistsInHistory(relPath, localBlobHash, allhandsRoot);
@@ -7243,6 +7305,7 @@ function collectFilesToPush(cwd, finalIncludes, finalExcludes) {
    const allhandsRoot = getAllhandsRoot();
    const manifest = new Manifest(allhandsRoot);
    const upstreamFiles = manifest.getDistributableFiles();
+   const syncState = readSyncState(cwd);
    const filesToPush = [];
    const localGitFiles = new Set(getGitFiles(cwd));
    const deletedFiles = /* @__PURE__ */ new Set();
@@ -7268,11 +7331,11 @@ function collectFilesToPush(cwd, finalIncludes, finalExcludes) {
    if (!localGitFiles.has(relPath) && !deletedFiles.has(relPath)) {
      continue;
    }
-   const localFile = join9(cwd, relPath);
-   const upstreamFile = join9(allhandsRoot, relPath);
-   if (existsSync11(localFile)) {
+   const localFile = join10(cwd, relPath);
+   const upstreamFile = join10(allhandsRoot, relPath);
+   if (existsSync12(localFile)) {
      if (filesAreDifferent(localFile, upstreamFile)) {
-       if (wasModifiedByTargetRepo(cwd, relPath, allhandsRoot)) {
+       if (wasModifiedByTargetRepo(cwd, relPath, allhandsRoot, syncState)) {
          filesToPush.push({ path: relPath, type: "M" });
        }
      }
@@ -7287,9 +7350,9 @@ function collectFilesToPush(cwd, finalIncludes, finalExcludes) {
    if (PUSH_BLOCKLIST.includes(relPath)) continue;
    if (finalExcludes.some((p) => minimatch(relPath, p, { dot: true }))) continue;
    if (filesToPush.some((f) => f.path === relPath)) continue;
-   const localFile = join9(cwd, relPath);
-   const upstreamFile = join9(allhandsRoot, relPath);
-   if (existsSync11(upstreamFile) && !filesAreDifferent(localFile, upstreamFile)) {
+   const localFile = join10(cwd, relPath);
+   const upstreamFile = join10(allhandsRoot, relPath);
+   if (existsSync12(upstreamFile) && !filesAreDifferent(localFile, upstreamFile)) {
      continue;
    }
    filesToPush.push({ path: relPath, type: "A" });
@@ -7322,8 +7385,8 @@ async function createPullRequest(cwd, ghUser, filesToPush, title, body) {
      return 1;
    }
  }
- const tempDir = join9(tmpdir(), `allhands-push-${Date.now()}`);
- mkdirSync2(tempDir, { recursive: true });
+ const tempDir = join10(tmpdir(), `allhands-push-${Date.now()}`);
+ mkdirSync3(tempDir, { recursive: true });
  try {
    console.log("Cloning fork...");
    const cloneResult = gh(["repo", "clone", `${ghUser}/${repoName}`, tempDir, "--", "--depth=1"]);
@@ -7354,9 +7417,9 @@ async function createPullRequest(cwd, ghUser, filesToPush, title, body) {
    if (file.type === "D") {
      git(["rm", "--ignore-unmatch", file.path], tempDir);
    } else {
-     const src = join9(cwd, file.path);
-     const dest = join9(tempDir, file.path);
-     mkdirSync2(dirname8(dest), { recursive: true });
+     const src = join10(cwd, file.path);
+     const dest = join10(tempDir, file.path);
+     mkdirSync3(dirname9(dest), { recursive: true });
      copyFileSync2(src, dest);
    }
  }
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "all-hands-cli",
-   "version": "0.1.12",
+   "version": "0.1.14",
    "description": "Agentic harness for model-first software development",
    "type": "module",
    "bin": {
package/src/commands/push.ts CHANGED
@@ -9,6 +9,7 @@ import { Manifest, filesAreDifferent } from '../lib/manifest.js';
  import { getAllhandsRoot, UPSTREAM_REPO } from '../lib/paths.js';
  import { askQuestion, confirm } from '../lib/ui.js';
  import { PUSH_BLOCKLIST, SYNC_CONFIG_FILENAME } from '../lib/constants.js';
+ import { readSyncState, wasModifiedSinceSync, SyncState } from '../lib/sync-state.js';

  interface SyncConfig {
    includes?: string[];
@@ -98,12 +99,25 @@ function checkPrerequisites(cwd: string): PrerequisiteResult {
   * Determine if a file was actually modified by the target repo,
   * vs simply being out of date because upstream moved forward.
   *
-  * Compares the target file's content against all historical versions
-  * in the upstream repo. If it matches any previous version, the target
-  * repo hasn't modified it.
+  * Prefers the sync-state manifest (written during sync) which records the
+  * exact blob hash of each source file at sync time. Falls back to git
+  * history search for repos synced before the manifest existed.
   */
- function wasModifiedByTargetRepo(cwd: string, relPath: string, allhandsRoot: string): boolean {
+ function wasModifiedByTargetRepo(
+   cwd: string,
+   relPath: string,
+   allhandsRoot: string,
+   syncState: SyncState | null
+ ): boolean {
    const localFile = join(cwd, relPath);
+
+   // Prefer manifest check when available
+   if (syncState) {
+     const manifestResult = wasModifiedSinceSync(localFile, relPath, syncState, cwd);
+     if (manifestResult !== null) return manifestResult;
+   }
+
+   // Fall back to git history for legacy repos or files not in manifest
    const localBlobHash = getFileBlobHash(localFile, allhandsRoot);

    if (!localBlobHash) return true; // safe default: assume modified on error
@@ -119,6 +133,7 @@ function collectFilesToPush(
    const allhandsRoot = getAllhandsRoot();
    const manifest = new Manifest(allhandsRoot);
    const upstreamFiles = manifest.getDistributableFiles();
+   const syncState = readSyncState(cwd);
    const filesToPush: FileEntry[] = [];

    // Get non-ignored files in user's repo (respects .gitignore)
@@ -156,7 +171,7 @@ function collectFilesToPush(

    if (existsSync(localFile)) {
      if (filesAreDifferent(localFile, upstreamFile)) {
-       if (wasModifiedByTargetRepo(cwd, relPath, allhandsRoot)) {
+       if (wasModifiedByTargetRepo(cwd, relPath, allhandsRoot, syncState)) {
          filesToPush.push({ path: relPath, type: 'M' });
        }
      }
package/src/commands/sync.ts CHANGED
@@ -8,6 +8,7 @@ import { ConflictResolution, askConflictResolution, confirm, getNextBackupPath }
  import { SYNC_CONFIG_FILENAME, SYNC_CONFIG_TEMPLATE } from '../lib/constants.js';
  import { restoreDotfiles } from '../lib/dotfiles.js';
  import { ensureTargetLines } from '../lib/target-lines.js';
+ import { writeSyncState } from '../lib/sync-state.js';

  const AH_SHIM_SCRIPT = `#!/bin/bash
  # AllHands CLI shim - finds and executes project-local .allhands/harness/ah
@@ -169,12 +170,16 @@ export async function cmdSync(target: string = '.', autoYes: boolean = false, in
    console.log('\nCopying allhands files...');
    console.log(`Found ${distributable.size} files to distribute`);

+   const syncedFiles = new Set<string>();
+
    for (const relPath of [...distributable].sort()) {
      const sourceFile = join(allhandsRoot, relPath);
      const targetFile = join(resolvedTarget, relPath);

      if (!existsSync(sourceFile)) continue;

+     syncedFiles.add(relPath);
+
      mkdirSync(dirname(targetFile), { recursive: true });

      if (existsSync(targetFile)) {
@@ -193,6 +198,9 @@ export async function cmdSync(target: string = '.', autoYes: boolean = false, in
    // Restore dotfiles (gitignore → .gitignore, etc.)
    restoreDotfiles(resolvedTarget);

+   // Write sync-state manifest for push false-positive detection
+   writeSyncState(resolvedTarget, allhandsRoot, syncedFiles);
+
    // Update-only: Handle deleted files
    if (!isFirstTime && deletedInSource.length > 0) {
      console.log(`\n${deletedInSource.length} files removed from allhands source:`);
package/src/lib/constants.ts CHANGED
@@ -1,7 +1,8 @@
  export const SYNC_CONFIG_FILENAME = '.allhands-sync-config.json';
+ export const SYNC_STATE_FILENAME = '.allhands/.sync-state.json';

  // Files that should never be pushed back to upstream
- export const PUSH_BLOCKLIST = ['CLAUDE.project.md', '.allhands-sync-config.json'];
+ export const PUSH_BLOCKLIST = ['CLAUDE.project.md', '.allhands-sync-config.json', '.allhands/.sync-state.json'];

  export const SYNC_CONFIG_TEMPLATE = {
    $comment: 'Customization for claude-all-hands push command',
package/src/lib/git.ts CHANGED
@@ -70,6 +70,22 @@ export function getFileBlobHash(filePath: string, repoPath: string): string | nu
    return result.success ? result.stdout.trim() : null;
  }

+ /**
+  * Get the HEAD commit hash of a repo. Returns null if no commits exist.
+  */
+ export function getHeadCommit(repoPath: string): string | null {
+   const result = git(['rev-parse', 'HEAD'], repoPath);
+   return result.success ? result.stdout.trim() : null;
+ }
+
+ /**
+  * Check if a repo has uncommitted changes (staged or unstaged).
+  */
+ export function hasUncommittedChanges(repoPath: string): boolean {
+   const result = git(['status', '--porcelain'], repoPath);
+   return result.success && result.stdout.length > 0;
+ }
+
  /**
   * Check if a specific blob hash appears in the git history of a file path.
   * Uses a single `rev-list --objects` call instead of per-commit lookups.
package/src/lib/sync-state.ts ADDED
@@ -0,0 +1,93 @@
+ import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'fs';
+ import { dirname, join } from 'path';
+ import { getFileBlobHash } from './git.js';
+ import { getHeadCommit, hasUncommittedChanges } from './git.js';
+ import { SYNC_STATE_FILENAME } from './constants.js';
+
+ export interface SyncState {
+   version: 1;
+   syncedAt: string;
+   sourceCommit: string | null;
+   dirty: boolean;
+   files: Record<string, string>;
+ }
+
+ /**
+  * Write a sync-state manifest recording the blob hash of each synced file.
+  * This allows push to detect whether a target file was modified locally
+  * without relying on git history (which fails for uncommitted working-tree copies).
+  */
+ export function writeSyncState(
+   targetRoot: string,
+   allhandsRoot: string,
+   syncedFiles: Set<string>
+ ): void {
+   const files: Record<string, string> = {};
+
+   for (const relPath of [...syncedFiles].sort()) {
+     const sourceFile = join(allhandsRoot, relPath);
+     if (!existsSync(sourceFile)) continue;
+     const hash = getFileBlobHash(sourceFile, allhandsRoot);
+     if (hash) {
+       files[relPath] = hash;
+     }
+   }
+
+   const state: SyncState = {
+     version: 1,
+     syncedAt: new Date().toISOString(),
+     sourceCommit: getHeadCommit(allhandsRoot),
+     dirty: hasUncommittedChanges(allhandsRoot),
+     files,
+   };
+
+   const outPath = join(targetRoot, SYNC_STATE_FILENAME);
+   mkdirSync(dirname(outPath), { recursive: true });
+   writeFileSync(outPath, JSON.stringify(state, null, 2) + '\n');
+ }
+
+ /**
+  * Read the sync-state manifest from a target repo.
+  * Returns null if the manifest does not exist or cannot be parsed.
+  */
+ export function readSyncState(targetRoot: string): SyncState | null {
+   const stateFile = join(targetRoot, SYNC_STATE_FILENAME);
+   if (!existsSync(stateFile)) return null;
+   try {
+     const content = readFileSync(stateFile, 'utf-8');
+     const parsed = JSON.parse(content);
+     if (
+       !parsed ||
+       typeof parsed !== 'object' ||
+       parsed.version !== 1 ||
+       !parsed.files ||
+       typeof parsed.files !== 'object'
+     ) {
+       return null;
+     }
+     return parsed as SyncState;
+   } catch {
+     return null;
+   }
+ }
+
+ /**
+  * Check whether a target file was modified since the last sync.
+  * Returns null if the file is not in the manifest (caller should fall back).
+  * Returns true if the target's hash differs from the manifest.
+  * Returns false if the target's hash matches the manifest.
+  */
+ export function wasModifiedSinceSync(
+   targetFilePath: string,
+   relPath: string,
+   syncState: SyncState,
+   repoPath: string
+ ): boolean | null {
+   const manifestHash = syncState.files[relPath];
+   if (!manifestHash) return null;
+
+   const targetHash = getFileBlobHash(targetFilePath, repoPath);
+   if (!targetHash) return true; // safe default: assume modified on error
+
+   return targetHash !== manifestHash;
+ }
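
A hedged sketch of what a written manifest looks like and how to spot drift by hand; the file path and hash values are illustrative, while the fields and the comparison mirror `writeSyncState` and `wasModifiedSinceSync` above:

```bash
# Illustrative .allhands/.sync-state.json produced by `npx all-hands sync .`
# {
#   "version": 1,
#   "syncedAt": "2025-01-01T00:00:00.000Z",
#   "sourceCommit": "3f9c2ab...",
#   "dirty": false,
#   "files": { ".allhands/harness/ah": "9d2f8c1..." }
# }

# Manual drift check for one synced file (mirrors wasModifiedSinceSync)
rel=".allhands/harness/ah"   # illustrative path
recorded=$(jq -r --arg f "$rel" '.files[$f]' .allhands/.sync-state.json)
[ "$(git hash-object "$rel")" = "$recorded" ] && echo "unchanged since sync" || echo "locally modified"
```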