slash-do 1.1.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/bin/cli.js CHANGED
@@ -208,7 +208,11 @@ async function main() {
  }
  }
 
- main().catch(err => {
-   console.error('Error:', err.message);
-   process.exit(1);
- });
+ if (require.main === module) {
+   main().catch(err => {
+     console.error('Error:', err.message);
+     process.exit(1);
+   });
+ }
+
+ module.exports = { parseArgs };
@@ -47,13 +47,7 @@ When the resolved model is `opus`, **omit** the `model` parameter on the Agent/T
 
  ### Model Profile Rationale
 
- **Why Opus for audit in Quality?** Audit agents make judgment calls distinguishing real bugs from false positives across 30+ lines of context. Opus reduces false positive noise, meaning less wasted remediation work downstream.
-
- **Why Sonnet for audit in Balanced?** Sonnet handles code analysis well when given explicit checklists (which each audit agent has). The 30-line context requirement provides enough signal. Good cost/quality tradeoff with 7 parallel agents.
-
- **Why Haiku for audit in Budget?** Surface-level pattern matching is Haiku's strength. May produce more false positives, but the remediation agents (still Sonnet) will validate findings before fixing. Best for large codebases where you want a fast first pass.
-
- **Why never Haiku for remediation?** Remediation agents write code, run builds, and commit. Code generation quality matters — Haiku may produce fixes that don't compile or introduce subtle regressions. Sonnet is the floor for code-writing agents.
+ Opus reduces false positives in audit (judgment-heavy). Sonnet is the floor for code-writing agents (remediation). Haiku works for fast first-pass pattern scanning but may produce more false positives, which remediation agents (Sonnet+) validate before fixing.
 
  ## Phase 0: Discovery & Setup
 
@@ -83,7 +77,7 @@ Derive build and test commands from the project type:
  - Rust: `cargo build`, `cargo test`
  - Python: `pytest`, `python -m pytest`
  - Go: `go build ./...`, `go test ./...`
- - If ambiguous, check CLAUDE.md for documented commands
+ - If ambiguous, check project conventions already in context for documented commands
 
  Record as `BUILD_CMD` and `TEST_CMD`.
 
@@ -106,7 +100,7 @@ This ensures the browser is ready before we need it in Phase 6, avoiding interru
 
  ## Phase 1: Unified Audit
 
- Read the project's CLAUDE.md files first to understand conventions. Pass relevant conventions to each agent.
+ Project conventions are already in your context. Pass relevant conventions to each agent.
 
  Launch 7 Explore agents in two batches. Each agent must report findings in this format:
  ```
@@ -477,7 +471,6 @@ If `BROWSER_AUTHENTICATED` is not true (e.g., Phase 0e was skipped or failed):
  1. Navigate to the first PR URL using `browser_navigate`
  2. Check for user avatar/menu
  3. If not logged in: navigate to `https://github.com/login`, inform the user **"Please log in to GitHub in the browser. I'll wait for you to confirm."**, and use `AskUserQuestion` to wait
- 4. Do NOT close the browser at any point during this phase
 
  ### 6.1: Request Copilot reviews on all PRs
 
@@ -496,17 +489,16 @@ If this returns 422 ("not a collaborator"), **fall back to Playwright** for each
 
  ### 6.2: Poll for review completion
 
- For each PR, poll every 15 seconds, max 3 minutes (12 polls):
- ```bash
- gh api repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/reviews --jq '.[] | "\(.user.login): \(.state)"'
- ```
+ **Dynamic poll timing**: Before your first poll, check how long the most recent Copilot review on this PR took by comparing consecutive Copilot review `submittedAt` timestamps (or PR creation time for the first review). Use that duration as your expected wait. If no prior review exists, default to 5 minutes. Set poll interval to 60 seconds and max wait to **2x the expected duration** (minimum 5 minutes, maximum 20 minutes). Copilot reviews can take **10-15 minutes** for large diffs — do NOT give up early.
 
- Also check for inline comments:
+ For each PR, poll using GraphQL to check for a new Copilot review:
  ```bash
- gh api repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/comments --jq '.[] | "\(.user.login) [\(.path):\(.line)]: \(.body[:120])"'
+ echo '{"query":"{ repository(owner: \"OWNER\", name: \"REPO\") { pullRequest(number: PR_NUM) { reviews(last: 5) { nodes { state body author { login } submittedAt } } reviewThreads(first: 100) { nodes { id isResolved comments(first: 3) { nodes { body path line author { login } } } } } } } }"}' | gh api graphql --input -
  ```
 
- The review is complete when a `copilot[bot]` or `copilot-pull-request-reviewer[bot]` review appears.
+ The review is complete when a new `copilot-pull-request-reviewer[bot]` review appears with a `submittedAt` after your request. If no review appears after max wait, **ask the user** whether to continue waiting, re-request, or skip.
+
+ **Error detection**: After a review appears, check its `body` for error text such as "Copilot encountered an error" or "unable to review this pull request". If found, this is NOT a successful review — log a warning, re-request the review (step 6.1), and resume polling from 6.2. Allow up to 3 error retries per PR before asking the user whether to continue or skip.
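The dynamic poll-timing rule above (double the last observed review duration, clamped between 5 and 20 minutes) can be sketched as a small helper. All names here are illustrative, not part of the package:

```javascript
// Sketch of the dynamic poll-timing rule: max wait is 2x the expected
// duration, clamped to [5 min, 20 min]. A null duration means no prior
// Copilot review exists, so we fall back to the 5-minute default.
const MIN_WAIT_MS = 5 * 60 * 1000;
const MAX_WAIT_MS = 20 * 60 * 1000;

function maxWaitMs(lastReviewDurationMs) {
  const expected = lastReviewDurationMs ?? MIN_WAIT_MS;
  const doubled = expected * 2;
  return Math.min(MAX_WAIT_MS, Math.max(MIN_WAIT_MS, doubled));
}
```

So a PR whose last review took 15 minutes gets the full 20-minute cap, while a 1-minute review still gets the 5-minute floor.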
 
  ### 6.3: Check for unresolved threads
 
@@ -610,7 +602,6 @@ If merge fails (e.g., branch protection, merge conflicts from a prior PR):
  - **Copilot timeout** (review not received within 3 min): inform user, offer to merge without review approval or wait longer
  - **Copilot review loop exceeds 5 iterations per PR**: stop iterating on that PR, inform user, proceed to merge
  - **Existing worktree found at startup**: ask user — resume (reuse worktree) or cleanup (remove and start fresh)
- - **`gh auth status` / `glab auth status` failure**: halt and tell user to authenticate first
  - **No findings above LOW**: skip Phases 3-7, print "No actionable findings" with the LOW summary
  - **Browser not authenticated**: use `AskUserQuestion` to ask the user to log in — never skip this or close the browser
  - **Merge conflict after prior PR merged**: rebase the branch onto the updated default branch, push with `--force-with-lease`, re-run CI
@@ -626,8 +617,5 @@ If merge fails (e.g., branch protection, merge conflicts from a prior PR):
  - When extracting modules, always add backward-compatible re-exports in the original module to prevent cross-PR breakage
  - Version bump happens exactly once on the first category branch based on aggregate commit analysis
  - Only CRITICAL, HIGH, and MEDIUM findings are auto-remediated; LOW and Test Coverage remain tracked in PLAN.md
- - Do not include co-author or generated-by info in any commits, PRs, or output
  - GitLab projects skip the Copilot review loop entirely (Phase 6) and stop after MR creation
- - Always run `gh auth status` (or `glab auth status`) before any authenticated operation
- - The Playwright browser should be opened early (Phase 0e) and kept open throughout — never close it prematurely
  - CI must pass on each PR before requesting Copilot review or merging
@@ -94,14 +94,10 @@ gh pr create \
 
  - Write a clear title and rich description
  - If a PR template was found, follow its structure
- - Do NOT include co-author or "generated with" messages
  - Print the resulting PR URL so the user can review it
 
  ## Important
 
- - Never stage files you didn't edit
- - Never use `git add -A` or `git add .`
- - Do NOT bump versions or update changelogs — upstream maintainers control those
  - Do NOT merge the PR — upstream maintainers handle that
  - Do NOT run Copilot review loops — you don't control the upstream repo's review settings
  - If the fork is significantly behind upstream, warn the user about potential merge conflicts
package/commands/do/pr.md CHANGED
@@ -32,7 +32,7 @@ Before creating the PR, perform a thorough self-review. Read each changed file
  ## Open the PR
 
  - Create a PR from `{current_branch}` to `{default_branch}`
- - Create a rich PR description — no co-author or "generated with" messages
+ - Create a rich PR description
 
  **IMPORTANT**: During each fix cycle in the Copilot review loop below, after fixing all review comments and before pushing, also bump the patch version (`npm version patch --no-git-tag-version` or equivalent) and commit the version bump.
 
@@ -10,6 +10,7 @@ Commit and push all work from this session, updating documentation as needed.
 
  1. **Identify changes to commit**:
  - Run `git status` and `git diff --stat` to see what changed
+ - If there are no changes to commit, inform the user and stop
  - If you edited files in this session, commit only those files
  - If invoked without prior edit context, review all uncommitted changes
  - If there are files that should be added to .gitignore but are not yet there, ensure proper .gitignore coverage
@@ -47,11 +48,3 @@ Commit and push all work from this session, updating documentation as needed.
 
  5. **Push the changes**:
  - Use `git pull --rebase --autostash && git push` to push safely
-
- ## Important
-
- - Never stage files you didn't edit
- - Never use `git add -A` or `git add .`
- - Keep commit messages focused on the "why" not just the "what"
- - If there are no changes to commit, inform the user
- - Do NOT bump the version in package.json — `/release` handles versioning
@@ -6,22 +6,30 @@ description: Create a release PR using the project's documented release workflow
 
  Before doing anything, determine the project's source and target branches for releases. Do NOT hardcode branch names. Instead, discover them:
 
- 1. **Source branch** — run `gh repo view --json defaultBranchRef -q '.defaultBranchRef.name'` to get the repo's default branch
+ 1. **Source branch** — run `gh repo view --json defaultBranchRef -q '.defaultBranchRef.name'` to get the repo's default branch (typically `main`)
  2. **Target branch** — determine by reading (in priority order):
  - **GitHub Actions workflows** — check `.github/workflows/release.yml` (or similar) for `on: push: branches:` to find the branch that triggers the release pipeline
- - **Project CLAUDE.md** — look for git workflow sections, branch descriptions, or release instructions
+ - **Project conventions** (already in context) — look for git workflow sections, branch descriptions, or release instructions
  - **Versioning docs** — check `docs/VERSIONING.md`, `CONTRIBUTING.md`, or `RELEASING.md`
  - **Branch convention** — if a `release` branch exists, the target is `release`; otherwise ask the user
+ 3. **Ensure the target branch exists** — if not, create it from the last release tag:
+ ```bash
+ git branch release $(git describe --tags --abbrev=0)
+ git push -u origin release
+ ```
+ This ensures the PR diff shows ALL changes since the last release, not just the version bump.
 
  Print the detected workflow: `Detected release flow: {source} → {target}`
 
  If ambiguous, ask the user to confirm before proceeding.
 
+ **Important**: The PR direction is `{source}` → `{target}` (e.g., `main` → `release`). This gives Copilot the full diff of all changes since the last release for review. Do NOT create a branch from source and PR back into it — that only shows the version bump commit.
+
  ## Pre-Release Checks
 
  1. **Ensure you're on the source branch** — checkout if needed
  2. **Pull latest** — `git pull --rebase --autostash`
- 3. **Run tests** — execute the project's test suite (check CLAUDE.md or package.json for the command)
+ 3. **Run tests** — execute the project's test suite (per project conventions already in context, or check package.json)
  4. **Run build** — execute the project's build command if one exists
 
  ## Determine Version and Finalize Changelog
@@ -63,8 +71,11 @@ Perform a thorough self-review. Read each changed file — not just the diff —
 
  ## Open the Release PR
 
- - Push the source branch to remote
- - Create a PR from `{source}` to `{target}`
+ - Push the source branch to remote (it should already be up to date with the release commit)
+ - Create a PR from `{source}` → `{target}` (e.g., `main` → `release`)
+ ```bash
+ gh pr create --title "Release v{version}" --base {target} --head {source} --body "..."
+ ```
  - Title: `Release v{version}` (read version from package.json or equivalent)
  - Body: include the changelog content for this version if available, otherwise summarize commits since last release
  - Keep the description clean — no co-author or "generated with" messages
@@ -91,6 +102,12 @@ Perform a thorough self-review. Read each changed file — not just the diff —
 
  ## Post-Merge
 
- - Report the final status including version, PR URL, and merge state
- - Remind the user to check for the GitHub release once CI completes (if the project uses automated releases)
- - Switch back to the source branch locally: `git checkout {source} && git pull --rebase --autostash`
+ 1. **Tag the release** on the target branch to trigger the publish workflow:
+ ```bash
+ git fetch origin {target}
+ git tag v{version} origin/{target}
+ git push origin v{version}
+ ```
+ 2. **Switch back to the source branch** locally: `git checkout {source} && git pull --rebase --autostash`
+ 3. **Report the final status** including version, PR URL, tag, and merge state
+ 4. Remind the user to check for the GitHub release once CI completes (if the project uses automated releases)
@@ -13,19 +13,54 @@ argument-hint: "[base-branch]"
 
  If there are no changes, inform the user and stop.
 
- ## Read Project Conventions
+ ## Apply Project Conventions
 
- Read the project's CLAUDE.md (if it exists) to understand:
- - Code style rules, error handling patterns, logging conventions
- - Custom error classes, validation patterns, framework-specific rules
- - Any explicit security model or scope exclusions
-
- These conventions override generic best practices. For example, if CLAUDE.md says "no auth needed — internal tool", do not flag missing authentication.
+ CLAUDE.md is already loaded into your context. Use its rules (code style, error handling, logging, security model, scope exclusions) as overrides to generic best practices throughout this review. For example, if CLAUDE.md says "no auth needed — internal tool", do not flag missing authentication.
 
  ## Deep File Review
 
  For **each changed file** in the diff, read the **entire file** (not just diff hunks). Reviewing only the diff misses context bugs where new code interacts incorrectly with existing code.
 
+ ### Understand the Code Flow
+
+ Before checking individual files against the checklist, **map the flow of changed code across all files**. This means:
+
+ 1. **Trace call chains** — for each new or modified function/method, identify every caller and callee across the changed files. Read those files too if needed. You cannot evaluate whether code is duplicated or well-structured without knowing how it connects.
+ 2. **Identify shared data paths** — trace data from entry point (route handler, event listener, CLI arg) through transforms, storage, and output. Understand what each layer is responsible for.
+ 3. **Map responsibilities** — for each changed module/file, state its single responsibility in one sentence. If you can't, it may be doing too much.
+
+ ### Evaluate Software Engineering Principles
+
+ With the flow understood, evaluate the changed code against these principles:
+
+ **DRY (Don't Repeat Yourself)**
+ - Look for logic duplicated across changed files or between changed and existing code. Grep for similar function signatures, repeated conditional blocks, or copy-pasted patterns with minor variations.
+ - If two functions do nearly the same thing with small differences, they should likely share a common implementation with the differences parameterized.
+ - Duplicated validation, error formatting, or data transformation are common violations.
+
+ **YAGNI (You Ain't Gonna Need It)**
+ - Flag abstractions, config options, parameters, or extension points that serve no current use case. Code should solve the problem at hand, not hypothetical future problems.
+ - Unnecessary wrapper functions, premature generalization (e.g., a factory that produces one type), and unused feature flags are common violations.
+
+ **SOLID Principles**
+ - **Single Responsibility** — each module/function should have one reason to change. If a function handles both business logic and I/O formatting, flag it.
+ - **Open/Closed** — new behavior should be addable without modifying existing working code where practical (e.g., strategy patterns, plugin hooks).
+ - **Liskov Substitution** — if subclasses or interface implementations exist, verify they are fully substitutable without breaking callers.
+ - **Interface Segregation** — callers should not depend on methods they don't use. Large config objects or option bags passed through many layers are a smell.
+ - **Dependency Inversion** — high-level modules should not import low-level implementation details directly when an abstraction boundary would be cleaner.
+
+ **Separation of Concerns**
+ - Business logic should not be tangled with transport (HTTP, WebSocket), storage (SQL, file I/O), or presentation (HTML, JSON formatting).
+ - If a route handler contains business rules beyond simple delegation, flag it.
+
+ **Naming & Readability**
+ - Function and variable names should communicate intent. If you need to read the implementation to understand what a name means, it's poorly named.
+ - Boolean variables/params should read as predicates (`isReady`, `hasAccess`), not ambiguous nouns.
+
+ Only flag principle violations that are **concrete and actionable** in the changed code. Do not flag pre-existing design issues in untouched code unless the changes make them worse.
+
+ ### Per-File Checklist
+
  Check every file against this checklist:
 
  !`cat ~/.claude/lib/code-review-checklist.md`
@@ -50,7 +85,7 @@ For each issue found:
  1. Classify severity: **CRITICAL** (runtime crash, data leak, security) vs **IMPROVEMENT** (consistency, robustness, conventions)
  2. Fix all CRITICAL issues immediately
  3. For IMPROVEMENT issues, fix them too — the goal is to eliminate Copilot review round-trips
- 4. After fixes, run the project's test suite and build command (check CLAUDE.md for commands)
+ 4. After fixes, run the project's test suite and build command (per project conventions already in context)
  5. Commit fixes: `refactor: address code review findings`
 
  ## Report
@@ -4,7 +4,7 @@ description: Resolve PR review feedback with parallel agents
 
  # Resolve PR Review Feedback
 
- Address the latest code review feedback on the current branch's pull request using a team-based approach.
+ Address the latest code review feedback on the current branch's pull request using parallel sub-agents.
 
  ## Steps
 
@@ -18,23 +18,21 @@ Address the latest code review feedback on the current branch's pull request usi
  ```
  Save results to `/tmp/pr_threads.json` for parsing.
 
- 4. **Spawn a team to address feedback in parallel**:
- - Create a team with `TeamCreate`
- - Create a task for each unresolved review thread using `TaskCreate`
- - Create an additional task for an **independent code quality review** of all files changed in the PR (`gh pr diff --name-only`)
- - Spawn sub-agents (general-purpose type) as teammates to handle each task in parallel:
- - One agent per review thread (or group closely related threads on the same file)
- - One dedicated agent for the code quality review
- - Each agent should:
+ 4. **Spawn parallel sub-agents to address feedback**:
+ - For small PRs (1-3 unresolved threads), handle fixes inline instead of spawning agents
+ - For larger PRs, spawn one `Agent` call (general-purpose type) per review thread (or group closely related threads on the same file into one agent)
+ - Spawn one additional `Agent` call for an **independent code quality review** of all files changed in the PR (`gh pr diff --name-only`)
+ - Launch all Agent calls **in parallel** (multiple tool calls in a single response) and wait for all to return
+ - Each thread-fixing agent should:
  - Read the file and understand the context of the feedback
  - Make the requested code changes if they are accurate and warranted
  - Look for further opportunities to DRY up affected code
- - Report back what was changed and the thread ID that was addressed
+ - Return what was changed and the thread ID that was addressed
  - The code quality reviewer should:
  - Read all changed files in the PR
  - Check for: style violations, missing error handling, dead code, DRY violations, security issues
- - Apply fixes directly and report what was changed
- - Wait for all agents to complete, then review their changes
+ - Apply fixes directly and return what was changed
+ - After all agents return, review their changes for conflicts or overlapping edits
 
  5. **Run tests**: Run the project's test suite to verify all changes pass. Do not proceed if tests fail — fix issues first.
 
@@ -68,18 +66,19 @@ Verify the request was accepted by checking that `Copilot` appears in the respon
 
  ### Poll for review completion
 
- Poll every 30 seconds using GraphQL to check for a new review with a `submittedAt` timestamp after the request:
+ Poll using GraphQL to check for a new review with a `submittedAt` timestamp after the request:
  ```bash
  gh api graphql -f query='{ repository(owner: "OWNER", name: "REPO") { pullRequest(number: PR_NUM) { reviews(last: 3) { nodes { state body author { login } submittedAt } } reviewThreads(first: 100) { nodes { id isResolved comments(first: 3) { nodes { body path line author { login } } } } } } } }'
  ```
 
- Copilot reviews typically take 60-120 seconds. The review is complete when a new `copilot-pull-request-reviewer` review node appears.
+ **Dynamic poll timing**: Before your first poll, check how long the most recent Copilot review on this PR took by comparing consecutive Copilot review `submittedAt` timestamps (or PR creation time for the first review). Use that duration as your expected wait. If no prior review exists, default to 5 minutes. Set poll interval to 60 seconds and max wait to **2x the expected duration** (minimum 5 minutes, maximum 20 minutes). Copilot reviews can take **10-15 minutes** for large diffs — do NOT give up early.
+
+ The review is complete when a new `copilot-pull-request-reviewer` review node appears. If no review appears after max wait, **ask the user** whether to continue waiting, re-request, or skip.
+
+ **Error detection**: After a review appears, check its `body` for error text such as "Copilot encountered an error" or "unable to review this pull request". If found, this is NOT a successful review — log a warning, re-request the review (same API call above), and resume polling. Allow up to 3 error retries before asking the user whether to continue or skip.
 
  ## Notes
 
  - Only resolve threads where you've actually addressed the feedback
  - If feedback is unclear or incorrect, leave a reply comment instead of resolving
- - Do not include co-author info in commits
- - For small PRs (1-3 threads), sub-agents may be overkill — use judgment on whether to spawn a team or handle inline
  - Always run tests before committing — never push code with known failures
- - Shut down the team after all work is complete
@@ -0,0 +1,73 @@
+ #!/usr/bin/env node
+ // Check for slashdo updates in background, write result to cache
+ // Called by SessionStart hook - runs once per session
+
+ const fs = require('fs');
+ const path = require('path');
+ const os = require('os');
+ const { spawn } = require('child_process');
+
+ const homeDir = os.homedir();
+ const cacheDir = path.join(homeDir, '.claude', 'cache');
+ const cacheFile = path.join(cacheDir, 'slashdo-update-check.json');
+ const versionFile = path.join(homeDir, '.claude', '.slashdo-version');
+
+ // Best-effort: silently exit on any setup failure (permissions, read-only FS, etc.)
+ try {
+   if (!fs.existsSync(cacheDir)) {
+     fs.mkdirSync(cacheDir, { recursive: true });
+   }
+
+   // Spawn background process so we don't block session start
+   const child = spawn(process.execPath, ['-e', `
+     const fs = require('fs');
+     const { execSync } = require('child_process');
+
+     const cacheFile = ${JSON.stringify(cacheFile)};
+     const versionFile = ${JSON.stringify(versionFile)};
+
+     let installed = '0.0.0';
+     try {
+       if (fs.existsSync(versionFile)) {
+         installed = fs.readFileSync(versionFile, 'utf8').trim();
+       }
+     } catch (e) {}
+
+     // No version file means slashdo isn't installed — skip silently
+     if (installed === '0.0.0') {
+       process.exit(0);
+     }
+
+     let latest = null;
+     try {
+       latest = execSync('npm view slash-do version', { encoding: 'utf8', timeout: 5000, windowsHide: true }).trim();
+     } catch (e) {}
+
+     // Simple semver comparison: only flag update when latest > installed
+     let updateAvailable = false;
+     if (latest && latest !== installed) {
+       const parse = v => (v || '').replace(/^v/, '').replace(/-.+$/, '').split('.').map(Number);
+       const [iM, im, ip] = parse(installed);
+       const [lM, lm, lp] = parse(latest);
+       if ([iM, im, ip, lM, lm, lp].some(isNaN)) { updateAvailable = installed !== latest; }
+       else { updateAvailable = lM > iM || (lM === iM && (lm > im || (lm === im && lp > ip))); }
+     }
+
+     const result = {
+       update_available: updateAvailable,
+       installed,
+       latest: latest || 'unknown',
+       checked: Math.floor(Date.now() / 1000)
+     };
+
+     fs.writeFileSync(cacheFile, JSON.stringify(result));
+   `], {
+     stdio: 'ignore',
+     windowsHide: true,
+     detached: true
+   });
+
+   child.unref();
+ } catch (e) {
+   // Hook is best-effort — never break SessionStart
+ }
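The inline semver comparison embedded in the hook above can be lifted into a standalone helper for exercise outside the spawned process. This sketch mirrors its logic (the `isNewer` name is illustrative):

```javascript
// Mirrors the hook's minimal semver check: strip a leading "v" and any
// prerelease suffix, then compare major.minor.patch numerically.
// Falls back to plain string inequality if either version fails to parse.
function isNewer(installed, latest) {
  const parse = v => (v || '').replace(/^v/, '').replace(/-.+$/, '').split('.').map(Number);
  const [iM, im, ip] = parse(installed);
  const [lM, lm, lp] = parse(latest);
  if ([iM, im, ip, lM, lm, lp].some(isNaN)) return installed !== latest;
  return lM > iM || (lM === iM && (lm > im || (lm === im && lp > ip)));
}
```

Numeric comparison matters here: a naive string compare would call `1.9.1` newer than `1.10.0`.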
@@ -0,0 +1,123 @@
+ #!/usr/bin/env node
+ // slashdo statusline for Claude Code
+ // Shows: model | current task | directory | context usage | update notifications
+
+ const fs = require('fs');
+ const path = require('path');
+ const os = require('os');
+
+ // Read JSON from stdin
+ let input = '';
+ process.stdin.setEncoding('utf8');
+ process.stdin.on('data', chunk => input += chunk);
+ process.stdin.on('end', () => {
+   try {
+     const data = JSON.parse(input);
+     const model = data.model?.display_name || 'Claude';
+     const dir = data.workspace?.current_dir || process.cwd();
+     const session = data.session_id || '';
+     const remaining = data.context_window?.remaining_percentage;
+
+     // Context window display (shows USED percentage scaled to 80% limit)
+     // Claude Code enforces an 80% context limit, so we scale to show 100% at that point
+     let ctx = '';
+     if (remaining != null) {
+       const rem = Math.round(remaining);
+       const rawUsed = Math.max(0, Math.min(100, 100 - rem));
+       // Scale: 80% real usage = 100% displayed
+       const used = Math.min(100, Math.round((rawUsed / 80) * 100));
+
+       // Write context metrics to bridge file for context-monitor hooks
+       if (session) {
+         try {
+           const safeSession = session.replace(/[^a-zA-Z0-9_-]/g, '');
+           const bridgePath = path.join(os.tmpdir(), `claude-ctx-${safeSession}.json`);
+           const bridgeData = JSON.stringify({
+             session_id: session,
+             remaining_percentage: remaining,
+             used_pct: used,
+             timestamp: Math.floor(Date.now() / 1000)
+           });
+           fs.writeFileSync(bridgePath, bridgeData);
+         } catch (e) {
+           // Silent fail -- bridge is best-effort, don't break statusline
+         }
+       }
+
+       // Build progress bar (10 segments)
+       const filled = Math.floor(used / 10);
+       const bar = '█'.repeat(filled) + '░'.repeat(10 - filled);
+
+       // Color based on scaled usage
+       if (used < 63) {
+         ctx = ` \x1b[32m${bar} ${used}%\x1b[0m`;
+       } else if (used < 81) {
+         ctx = ` \x1b[33m${bar} ${used}%\x1b[0m`;
+       } else if (used < 95) {
+         ctx = ` \x1b[38;5;208m${bar} ${used}%\x1b[0m`;
+       } else {
+         ctx = ` \x1b[5;31m💀 ${bar} ${used}%\x1b[0m`;
+       }
+     }
+
+     // Current task from todos
+     let task = '';
+     const homeDir = os.homedir();
+     const todosDir = path.join(homeDir, '.claude', 'todos');
+     if (session && fs.existsSync(todosDir)) {
+       try {
+         const entries = fs.readdirSync(todosDir);
+         let latestFile = null;
+         let latestMtime = 0;
+         for (const f of entries) {
+           if (!f.startsWith(session) || !f.includes('-agent-') || !f.endsWith('.json')) continue;
+           try {
+             const mt = fs.statSync(path.join(todosDir, f)).mtimeMs;
+             if (mt > latestMtime) { latestMtime = mt; latestFile = f; }
+           } catch (e) {}
+         }
+
+         if (latestFile) {
+           try {
+             const todos = JSON.parse(fs.readFileSync(path.join(todosDir, latestFile), 'utf8'));
+             const inProgress = todos.find(t => t.status === 'in_progress');
+             if (inProgress) task = inProgress.activeForm || '';
+           } catch (e) {}
+         }
+       } catch (e) {
+         // Silently fail on file system errors - don't break statusline
+       }
+     }
+
+     // Update notifications (GSD + slashdo)
+     let updates = '';
+     const gsdCacheFile = path.join(homeDir, '.claude', 'cache', 'gsd-update-check.json');
+     if (fs.existsSync(gsdCacheFile)) {
+       try {
+         const cache = JSON.parse(fs.readFileSync(gsdCacheFile, 'utf8'));
+         if (cache.update_available) {
+           updates += '\x1b[33m⬆ /gsd:update\x1b[0m │ ';
+         }
+       } catch (e) {}
+     }
+     const slashdoCacheFile = path.join(homeDir, '.claude', 'cache', 'slashdo-update-check.json');
+     if (fs.existsSync(slashdoCacheFile)) {
+       try {
+         const cache = JSON.parse(fs.readFileSync(slashdoCacheFile, 'utf8'));
+         if (cache.update_available) {
+           updates += '\x1b[33m⬆ /do:update\x1b[0m │ ';
+         }
+       } catch (e) {}
+     }
+
+     // Output
+     const dirname = path.basename(dir);
+     if (task) {
+       process.stdout.write(`${updates}\x1b[2m${model}\x1b[0m │ \x1b[1m${task}\x1b[0m │ \x1b[2m${dirname}\x1b[0m${ctx}`);
+     } else {
+       process.stdout.write(`${updates}\x1b[2m${model}\x1b[0m │ \x1b[2m${dirname}\x1b[0m${ctx}`);
+     }
+   } catch (e) {
+     // Silent fail - don't break statusline on parse errors
+   }
+ });
package/install.sh CHANGED
@@ -1,6 +1,7 @@
  #!/usr/bin/env bash
  # slashdo — curl-based installer (no npm required)
  # Usage: curl -fsSL https://raw.githubusercontent.com/atomantic/slashdo/main/install.sh | bash
+ # shellcheck disable=SC2059,SC2207
  set -euo pipefail

  REPO="atomantic/slashdo"
@@ -36,6 +37,10 @@ LIBS=(
  code-review-checklist copilot-review-loop graphql-escaping
  )

+ HOOKS=(slashdo-check-update slashdo-statusline)
+
+ OLD_HOOKS=(update-check)
+
  detect_envs() {
  local envs=()
  [ -d "$HOME/.claude" ] && envs+=(claude)
@@ -48,7 +53,8 @@ detect_envs() {
  install_claude() {
  local target_cmd="$HOME/.claude/commands/do"
  local target_lib="$HOME/.claude/lib"
- mkdir -p "$target_cmd" "$target_lib"
+ local target_hooks="$HOME/.claude/hooks"
+ mkdir -p "$target_cmd" "$target_lib" "$target_hooks"

  printf " Installing to ${GREEN}Claude Code${RESET}...\n"

@@ -70,12 +76,107 @@ install_claude() {
  fi
  done

+ for hook in "${HOOKS[@]}"; do
+ printf " hook/%-19s" "$hook.js"
+ if curl -fsSL "$BASE_URL/hooks/$hook.js" -o "$target_hooks/$hook.js" 2>/dev/null; then
+ printf "${GREEN}ok${RESET}\n"
+ else
+ printf "failed\n"
+ fi
+ done
+
  for old in "${OLD_COMMANDS[@]}"; do
  if [ -f "$target_cmd/$old.md" ]; then
  rm -f "$target_cmd/$old.md"
  printf " migrated: /do:%-14s${GREEN}ok${RESET}\n" "$old"
  fi
  done
+
+ for old in "${OLD_HOOKS[@]}"; do
+ if [ -f "$target_hooks/$old.md" ]; then
+ rm -f "$target_hooks/$old.md"
+ printf " removed: hook/%-13s${GREEN}ok${RESET}\n" "$old.md"
+ fi
+ done
+
+ # Register hooks in settings.json (requires Node.js and successful hook downloads)
+ if command -v node &>/dev/null && [ -f "$target_hooks/slashdo-check-update.js" ]; then
+ printf " settings.json: "
+ local node_result
+ if ! node_result=$(node -e '
+ const fs = require("fs");
+ const path = require("path");
+ const home = require("os").homedir();
+ const settingsPath = path.join(home, ".claude", "settings.json");
+ const hooksDir = path.join(home, ".claude", "hooks");
+
+ let settings = {};
+ if (fs.existsSync(settingsPath)) {
+ try { settings = JSON.parse(fs.readFileSync(settingsPath, "utf8")); } catch (e) {
+ process.stdout.write("skipped (settings.json parse error)");
+ process.exit(0);
+ }
+ }
+
+ let modified = false;
+
+ // SessionStart hook (only if hook file exists)
+ const updateHookPath = path.join(hooksDir, "slashdo-check-update.js");
+ if (!settings.hooks || typeof settings.hooks !== "object" || Array.isArray(settings.hooks)) settings.hooks = {};
+ if (typeof settings.hooks.SessionStart === "undefined") {
+ settings.hooks.SessionStart = [];
+ } else if (!Array.isArray(settings.hooks.SessionStart)) {
+ process.stdout.write("skipped (settings.hooks.SessionStart has unexpected shape)");
+ process.exit(0);
+ }
+
+ const hookCmd = "node \"" + updateHookPath + "\"";
+ const alreadyRegistered = settings.hooks.SessionStart.some(function(g) {
+ return g && typeof g === "object" && Array.isArray(g.hooks) && g.hooks.some(function(h) {
+ return h && typeof h === "object" && typeof h.command === "string" && h.command.indexOf("slashdo-check-update") !== -1;
+ });
+ });
+
+ if (!alreadyRegistered) {
+ if (settings.hooks.SessionStart.length > 0) {
+ var firstGroup = settings.hooks.SessionStart[0];
+ if (!firstGroup || typeof firstGroup !== "object") {
+ firstGroup = {};
+ settings.hooks.SessionStart[0] = firstGroup;
+ }
+ if (!Array.isArray(firstGroup.hooks)) firstGroup.hooks = [];
+ firstGroup.hooks.push({ type: "command", command: hookCmd });
+ } else {
+ settings.hooks.SessionStart.push({ hooks: [{ type: "command", command: hookCmd }] });
+ }
+ modified = true;
+ }
+
+ // Statusline (only if none exists and hook file was downloaded)
+ const statuslineHookPath = path.join(hooksDir, "slashdo-statusline.js");
+ if (!settings.statusLine && fs.existsSync(statuslineHookPath)) {
+ const slCmd = "node \"" + statuslineHookPath + "\"";
+ settings.statusLine = { type: "command", command: slCmd };
+ modified = true;
+ }
+
+ if (modified) {
+ fs.writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + "\n");
+ }
+
+ process.stdout.write(modified ? "updated" : "already configured");
+ ' 2>/dev/null); then
+ printf " %sfailed%s\n" "$YELLOW" "$RESET"
+ elif echo "$node_result" | grep -q "^skipped"; then
+ printf "%s%s%s\n" "$YELLOW" "$node_result" "$RESET"
+ else
+ printf "%s %sok%s\n" "$node_result" "$GREEN" "$RESET"
+ fi
+ elif command -v node &>/dev/null; then
+ printf " ${DIM}settings.json: skipped (hook files not found)${RESET}\n"
+ else
+ printf " ${DIM}settings.json: skipped (node not found — hooks installed but not registered)${RESET}\n"
+ fi
  }

  install_opencode() {
@@ -1,5 +1,5 @@
  **Hygiene**
- - Leftover debug code (console.log without emoji prefix, debugger, TODO/FIXME/HACK)
+ - Leftover debug code (`console.log`, `debugger`, TODO/FIXME/HACK comments)
  - Hardcoded secrets, API keys, or credentials
  - Files that shouldn't be committed (.env, node_modules, build artifacts)
  - Overly broad changes that should be split into separate PRs
@@ -9,16 +9,20 @@
  - No unused imports introduced by the changes

  **Runtime correctness**
- - State/variables that are declared but never updated or only partially wired up (e.g. a state setter that's never called with `true`)
+ - State/variables that are declared but never updated or only partially wired up (e.g. a state setter that's never called)
  - Side effects during React render (setState, navigation, mutations outside useEffect)
  - Off-by-one errors, null/undefined access without guards
+ - `JSON.parse` on user-editable files (config, settings, cache) without error handling — corrupted files will crash the process
+ - Accessing properties/methods on parsed JSON objects without verifying expected structure (e.g., `obj.arr.push()` when `arr` might not be an array)
+ - Iterating arrays from external/user-editable sources without guarding each element — a `null` or wrong-type entry throws `TypeError` when treated as an object
+ - Version/string comparisons using `!==` when semantic ordering matters — use proper semver comparison for version checks

  **Resource management**
  - Event listeners, socket handlers, subscriptions, and timers are cleaned up on unmount/teardown
  - useEffect cleanup functions remove everything the effect sets up

  **HTTP status codes & error classification**
- - Service functions that throw generic `Error` for client-caused conditions (not found, invalid input) — these bubble as 500 when they should be 400/404. Use the project's error class (e.g., `ServerError`, `HttpError`, `createError`) with explicit status codes
+ - Service functions that throw generic `Error` for client-caused conditions (not found, invalid input) — these bubble as 500 when they should be 400/404. Use typed error classes with explicit status codes
  - Consistent error responses across similar endpoints — if one validates, all should
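
A minimal sketch of a typed, status-coded error (`HttpError` and `findUser` are hypothetical names for illustration); error-handling middleware can then map `err.status` instead of defaulting everything to 500:

```javascript
// Typed error carrying an HTTP status code.
class HttpError extends Error {
  constructor(status, message) {
    super(message);
    this.status = status;
  }
}

function findUser(users, id) {
  const user = users.find(u => u.id === id);
  // Client-caused condition: surface a 404, not a generic Error that bubbles as 500
  if (!user) throw new HttpError(404, `user ${id} not found`);
  return user;
}
```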

  **API & URL safety**
@@ -31,29 +35,69 @@

  **Input handling**
  - Trimming values where whitespace is significant (API keys, tokens, passwords, base64) — only trim identifiers/names, not secret values
- - Swallowed errors (empty `.catch(() => {})`) that hide failures from users — at minimum show a toast/notification on failure
+ - Swallowed errors (empty `.catch(() => {})`) that hide failures from users — at minimum surface a notification on failure
+ - Swallowed errors (empty `.catch(() => {})`) that hide failures from users — at minimum surface a notification on failure
35
39
 
36
40
  **Validation & consistency**
37
41
  - New endpoints/schemas match validation standards of similar existing endpoints (check for field limits, required fields, types)
38
42
  - New API routes have the same error handling patterns as existing routes
39
43
  - If validation exists on one endpoint for a param, the same param on other endpoints needs the same validation
40
- - Schema fields that accept values the rest of the system can't handle (e.g., managedSecrets accepting any string when the sync endpoint requires `[A-Z0-9_]`)
44
+ - Schema fields that accept values the rest of the system can't handle (e.g., a field accepts any string but downstream code requires a specific format)
45
+ - Numeric query params (`limit`, `offset`, `page`) parsed from strings without lower-bound clamping — `parseInt` can produce 0, negative, or `NaN` values that cause SQL errors or unexpected behavior. Always clamp to safe bounds (e.g., `Math.max(1, ...)`)
46
+ - Summary counters/accumulators that miss edge cases — if an item is removed, is the count updated? Are all branches counted?
47
+ - Silent operations in verbose sequences — when a series of operations each prints a status line, ensure all branches print consistent output
48
+ - Labels, comments, or status messages that describe behavior the code doesn't implement — e.g., a map named "renamed" that only deletes, or an action labeled "migrated" that never creates the target
49
+ - Registering references (config entries, settings pointers) to files or resources without verifying the resource actually exists — a failed download or missing file leaves dangling references that break later operations
50
+ - Error/catch handlers that exit cleanly (`exit 0`, `return`) without any user-visible output — makes failures look like successes; always print a skip/warning message explaining why the operation was skipped
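
The clamping item above can be sketched like this (`clampPageParams` and its defaults are hypothetical, not from this package):

```javascript
// Clamp user-supplied pagination params to safe bounds.
function clampPageParams(query) {
  // parseInt can yield NaN (missing/garbage input), 0, or negatives — clamp all of them
  const rawLimit = parseInt(query.limit, 10);
  const rawOffset = parseInt(query.offset, 10);
  return {
    limit: Math.min(100, Math.max(1, Number.isNaN(rawLimit) ? 20 : rawLimit)),
    offset: Math.max(0, Number.isNaN(rawOffset) ? 0 : rawOffset),
  };
}
```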

  **Concurrency & data integrity**
- - Shared mutable state (files, in-memory caches) accessed by concurrent requests without locking or atomic writes — if two requests can hit the same resource, consider a mutex or write-to-tmp-then-rename pattern
- - Multi-step read-modify-write cycles on JSON files or databases that can interleave with other requests
+ - Shared mutable state (files, in-memory caches) accessed by concurrent requests without locking or atomic writes
+ - Multi-step read-modify-write cycles on files or databases that can interleave with other requests
+ - Multi-table writes (e.g., parent row + relationship/link rows) without a transaction — FK violations or errors after the first insert leave partial state. Wrap all related writes in a single transaction
+ - Functions with early returns for "no primary fields to update" that silently skip secondary operations (relationship updates, link table writes) — ensure early-return guards don't bypass logic that should run independently of primary field changes
+
+ **Search & navigation**
+ - Search results that link to generic list pages instead of deep-linking to the specific record — include the record type and ID in the URL
+ - Search or query code that hardcodes one backend's implementation when the system supports multiple backends — use the active backend's capabilities so results aren't stale after a backend switch
+
+ **Sync & replication**
+ - Upsert/`ON CONFLICT UPDATE` clauses that only update a subset of the fields exported by the corresponding "get changes" query — omitted fields cause replicas to diverge. Deliberately omit only fields that should stay local (e.g., access stats), and document the decision
+ - Pagination using `COUNT(*)` to compute `hasMore` — this forces a full table scan on large tables. Use the `limit + 1` pattern: fetch one extra row to detect more pages, return only `limit` rows
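
The `limit + 1` pattern above, sketched with a hypothetical `splitPage` helper (the query itself would use `LIMIT $1` with `limit + 1` as the parameter):

```javascript
// Split the (limit + 1)-row result of a query into a page plus a hasMore flag.
function splitPage(rows, limit) {
  return {
    items: rows.slice(0, limit),
    // A full extra row means at least one more page exists — no COUNT(*) needed
    hasMore: rows.length > limit,
  };
}
```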
+
+ **SQL & database**
+ - Parameterized query placeholder indices (`$1`, `$2`, ...) must match the actual parameter array positions — especially when multiple queries share a param builder or when the index is computed dynamically
+ - Database triggers (e.g., `BEFORE UPDATE` setting `updated_at = NOW()`) that clobber explicitly-provided values — verify triggers don't interfere with replication/sync that sets fields to remote timestamps
+ - Auto-incrementing columns (`BIGSERIAL`, `SERIAL`) only auto-increment on INSERT, not UPDATE — if change-tracking relies on a sequence column, the UPDATE path must explicitly call `nextval()` to bump it
+ - Database functions that require specific extensions or minimum versions — verify the deployment target supports them and the init script enables the extension
+ - Full-text search with strict query parsers (`to_tsquery`) directly on user input — punctuation, quotes, and operators cause SQL errors. Use `websearch_to_tsquery` or `plainto_tsquery` for user-facing search
+ - Query results assigned to variables but never read — remove dead queries to avoid unnecessary database load
+ - N+1 query patterns inside transactions (SELECT + INSERT/UPDATE per row) — use batched upserts (`INSERT ... ON CONFLICT ... DO UPDATE`) to reduce round-trips and lock time
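
A sketch combining the placeholder-indexing and batched-upsert items: a hypothetical `buildBatchUpsert` helper that emits one `INSERT ... ON CONFLICT` statement with placeholder indices derived from the params array itself (assumed two-column table for illustration):

```javascript
// Build one batched upsert instead of a per-row SELECT+INSERT loop.
function buildBatchUpsert(table, rows) {
  const params = [];
  const tuples = rows.map(row => {
    params.push(row.id, row.name);
    // $ indices track the params array exactly — computing them any other way risks drift
    return `($${params.length - 1}, $${params.length})`;
  });
  const sql = `INSERT INTO ${table} (id, name) VALUES ${tuples.join(', ')} ` +
    'ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name';
  return { sql, params };
}
```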
+
+ **Lazy initialization & module loading**
+ - Cached state getters that return `null`/`undefined` before the module is initialized — code that checks the cached value before triggering initialization will get incorrect results. Provide an async initializer or ensure-style function
+ - Re-exporting constants from heavy modules defeats lazy loading — define shared constants in a lightweight module or inline them
+
+ **Data format portability**
+ - Values that cross serialization boundaries (JSON API → database, peer sync) may change format — e.g., arrays in JSON vs specialized string literals in the database. Convert consistently before writing to the target
+
+ **Shell script safety**
+ - Subprocess calls in shell scripts under `set -e` — if the subprocess fails, the script aborts. Check exit status and handle gracefully
+ - When the same data structure is manipulated in both application code and shell-inline scripts, apply identical guards in both places
+
+ **Cross-platform compatibility**
+ - Shell-specific commands (e.g., `sleep`) in Node.js setup/build scripts — use language-native alternatives for portability
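
The ensure-style initializer the lazy-initialization item describes can be sketched like this (`loadConfig` is a hypothetical loader standing in for a heavy import):

```javascript
let cachedConfig = null;
let configPromise = null;

async function loadConfig() {
  return { backend: 'default' }; // stand-in for an expensive load
}

// Callers await ensureConfig() instead of reading a cache that may still be null.
function ensureConfig() {
  if (cachedConfig) return Promise.resolve(cachedConfig);
  // Concurrent callers share one in-flight load
  if (!configPromise) {
    configPromise = loadConfig().then(cfg => {
      cachedConfig = cfg;
      return cfg;
    });
  }
  return configPromise;
}
```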

  **Test coverage**
  - New validation schemas, service functions, or business logic added without corresponding tests — especially when the project already has a test suite covering similar existing code
  - New error paths (404, 400) that are untestable because the service throws generic errors instead of typed/status-coded ones

  **Accessibility**
- - Interactive elements (buttons, toggles, custom controls) missing accessible names, roles, or ARIA states — screen readers can't interpret unnamed buttons or div-based toggles
- - Custom toggle/switch UI built from `<button>` or `<div>` instead of native `<input type="checkbox">` with appropriate labeling
+ - Interactive elements (buttons, toggles, custom controls) missing accessible names, roles, or ARIA states
+ - Custom toggle/switch UI built from `<button>` or `<div>` instead of native inputs with appropriate labeling

  **Configuration & hardcoding**
- - Hardcoded values (usernames, org names, limits) when a config field or env var already exists for that purpose — use the existing config
+ - Hardcoded values (usernames, org names, limits) when a config field or env var already exists for that purpose
  - Dead config fields that nothing reads — either wire them up or remove them
+ - Duplicated config/constants across modules — extract to a single shared module to prevent drift (watch for circular imports when choosing the shared location)

  **Style & conventions**
  - Naming and patterns consistent with the rest of the codebase
@@ -17,14 +17,16 @@ After the PR is created, run the Copilot review-and-fix loop:
  gh api graphql -f query='{ repository(owner: "OWNER", name: "REPO") { pullRequest(number: PR_NUM) { reviews(last: 3) { nodes { state body author { login } submittedAt } } reviewThreads(first: 100) { nodes { id isResolved comments(first: 3) { nodes { body path line author { login } } } } } } } }'
  ```
  - The review is complete when a new Copilot review node appears with a `submittedAt` after your latest push
+ - **Error detection**: After a review appears, check the review `body` for error text such as "Copilot encountered an error" or "unable to review this pull request". If the review body contains this error, it is NOT a successful review — re-request the review (step 1) and resume polling. Log a warning so the user knows a retry occurred. Apply a maximum of 3 error retries before asking the user whether to continue waiting or skip.
  - **Do NOT proceed until the re-requested review has actually posted** — "Awaiting requested review" means it is still in progress
- - Poll every 60 seconds; Copilot reviews can take **10-15 minutes** for large diffs do NOT give up early
- - **Continue polling for at least 15 minutes** before concluding the review won't arrive
- - If no review appears after 15 minutes, **ask the user** whether to continue waiting, re-request the review, or skip — **never proceed without user approval when the review loop fails**
+ - **Dynamic poll timing**: Before your first poll, check how long the most recent Copilot review on this PR took by comparing its `submittedAt` to the previous review's `submittedAt` (or to the PR creation time if it was the first review). Use that duration as your expected wait time. If no prior review exists, default to 5 minutes. Set poll interval to 60 seconds and max wait to **2x the expected duration** (minimum 5 minutes, maximum 20 minutes).
+ - Copilot reviews can take **10-15 minutes** for large diffs; do NOT give up early
+ - If no review appears after the max wait time, **ask the user** whether to continue waiting, re-request the review, or skip — **never proceed without user approval when the review loop fails**
  - If the review request silently disappears (reviewRequests becomes empty without a review being posted), re-request the review once and resume polling

  3. **Check for unresolved comments**
  - Filter review threads for `isResolved: false`
+ - **First, verify the review was successful**: check that the latest Copilot review body does NOT contain "Copilot encountered an error" or "unable to review". If it does, this is an error response — go back to step 1 (re-request) instead of proceeding. This check is critical because error reviews have no comments and no unresolved threads, making them look identical to a clean review.
  - Also count the total comments in the latest review (check the review body for "generated N comments")
  - If the latest review has **zero comments** (body says "generated 0 comments" or no unresolved threads exist): the PR is clean — exit the loop
  - If **there are unresolved comments**: proceed to fix them (step 4)
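
The dynamic poll-timing rule above can be sketched as a small helper (`computeMaxWaitMs` and its arguments are hypothetical names for illustration):

```javascript
const DEFAULT_EXPECTED_MS = 5 * 60 * 1000; // expected wait when no prior review exists
const MIN_WAIT_MS = 5 * 60 * 1000;
const MAX_WAIT_MS = 20 * 60 * 1000;

// Expected duration comes from the previous review's turnaround; the max wait
// is 2x that, clamped between 5 and 20 minutes.
function computeMaxWaitMs(prevStartedAt, prevSubmittedAt) {
  let expectedMs = DEFAULT_EXPECTED_MS;
  if (prevStartedAt && prevSubmittedAt) {
    expectedMs = new Date(prevSubmittedAt) - new Date(prevStartedAt);
  }
  return Math.min(MAX_WAIT_MS, Math.max(MIN_WAIT_MS, expectedMs * 2));
}
```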
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "slash-do",
- "version": "1.1.0",
+ "version": "1.2.0",
  "description": "Curated slash commands for AI coding assistants — Claude Code, OpenCode, Gemini CLI, and Codex",
  "author": "Adam Eivy <adam@eivy.com>",
  "license": "MIT",
@@ -35,7 +35,10 @@
  "install.sh",
  "uninstall.sh"
  ],
+ "scripts": {
+ "test": "node --test test/*.test.js"
+ },
  "engines": {
- "node": ">=16.7.0"
+ "node": ">=18.0.0"
  }
  }
@@ -12,6 +12,7 @@ const ENVIRONMENTS = {
  commandsDir: path.join(HOME, '.claude', 'commands'),
  libDir: path.join(HOME, '.claude', 'lib'),
  hooksDir: path.join(HOME, '.claude', 'hooks'),
+ settingsFile: path.join(HOME, '.claude', 'settings.json'),
  versionFile: path.join(HOME, '.claude', '.slashdo-version'),
  format: 'yaml-frontmatter',
  ext: '.md',
@@ -19,6 +20,7 @@
  libPathPrefix: '~/.claude/lib/',
  supportsHooks: true,
  supportsCatInclusion: true,
+ supportsTeams: true,
  },
  opencode: {
  name: 'OpenCode',
@@ -32,6 +34,7 @@
  libPathPrefix: '~/.config/opencode/lib/',
  supportsHooks: false,
  supportsCatInclusion: true,
+ supportsTeams: false,
  },
  gemini: {
  name: 'Gemini CLI',
@@ -45,6 +48,7 @@
  libPathPrefix: '~/.gemini/lib/',
  supportsHooks: false,
  supportsCatInclusion: true,
+ supportsTeams: false,
  },
  codex: {
  name: 'Codex',
@@ -58,6 +62,7 @@
  libPathPrefix: null,
  supportsHooks: false,
  supportsCatInclusion: false,
+ supportsTeams: false,
  },
  };

package/src/installer.js CHANGED
@@ -43,7 +43,7 @@ function collectHooks(hooksDir) {
  const files = [];
  const entries = fs.readdirSync(hooksDir, { withFileTypes: true });
  for (const entry of entries) {
- if (entry.isFile() && entry.name.endsWith('.md')) {
+ if (entry.isFile() && (entry.name.endsWith('.md') || entry.name.endsWith('.js'))) {
  files.push({
  relPath: entry.name,
  absPath: path.join(hooksDir, entry.name),
@@ -68,6 +68,164 @@ const RENAMED_COMMANDS = {
  'optimize-md': 'omd',
  };

+ // Old hooks to remove during install/uninstall (superseded or no longer needed)
+ const OBSOLETE_HOOKS = [
+ 'update-check.md',
+ ];
+
+ function registerHooksInSettings(env, hookFiles, dryRun) {
+ if (!env.settingsFile) return [];
+
+ const actions = [];
+ const settingsPath = env.settingsFile;
+
+ let settings = {};
+ if (fs.existsSync(settingsPath)) {
+ try {
+ settings = JSON.parse(fs.readFileSync(settingsPath, 'utf8'));
+ } catch (e) {
+ // Corrupted settings.json — skip registration to avoid data loss
+ actions.push({ name: 'settings.json', status: 'skipped (parse error)' });
+ return actions;
+ }
+ }
+
+ let modified = false;
+
+ // Register SessionStart hook for slashdo-check-update.js
+ const updateCheckHook = hookFiles.find(h => h.name === 'slashdo-check-update.js');
+ if (updateCheckHook) {
+ const hookCommand = `node "${path.join(env.hooksDir, updateCheckHook.name)}"`;
+
+ if (!settings.hooks) {
+ settings.hooks = {};
+ } else if (typeof settings.hooks !== 'object' || Array.isArray(settings.hooks)) {
+ actions.push({ name: 'settings/hooks', status: 'skipped (unexpected shape)' });
+ return actions;
+ }
+
+ // If SessionStart exists but isn't an array, skip hook registration (but continue to statusLine)
+ if (Object.prototype.hasOwnProperty.call(settings.hooks, 'SessionStart') &&
+ !Array.isArray(settings.hooks.SessionStart)) {
+ actions.push({ name: 'settings/SessionStart hook', status: 'skipped (unexpected shape)' });
+ } else {
+ if (!Array.isArray(settings.hooks.SessionStart)) settings.hooks.SessionStart = [];
+
+ const alreadyRegistered = settings.hooks.SessionStart.some(group =>
+ group &&
+ typeof group === 'object' &&
+ Array.isArray(group.hooks) &&
+ group.hooks.some(h => typeof h?.command === 'string' && h.command.includes('slashdo-check-update'))
+ );
+
+ if (!alreadyRegistered) {
+ if (settings.hooks.SessionStart.length > 0) {
+ let firstGroup = settings.hooks.SessionStart[0];
+ if (!firstGroup || typeof firstGroup !== 'object') {
+ firstGroup = { hooks: [] };
+ settings.hooks.SessionStart[0] = firstGroup;
+ }
+ if (!Array.isArray(firstGroup.hooks)) firstGroup.hooks = [];
+ firstGroup.hooks.push({
+ type: 'command',
+ command: hookCommand,
+ });
+ } else {
+ settings.hooks.SessionStart.push({
+ hooks: [{
+ type: 'command',
+ command: hookCommand,
+ }],
+ });
+ }
+ modified = true;
+ actions.push({ name: 'settings/SessionStart hook', status: dryRun ? 'would register' : 'registered' });
+ } else {
+ actions.push({ name: 'settings/SessionStart hook', status: 'already registered' });
+ }
+ }
+ }
+
+ // Configure statusline only if none exists
+ const statuslineHook = hookFiles.find(h => h.name === 'slashdo-statusline.js');
+ if (statuslineHook && !settings.statusLine) {
+ const statuslineCommand = `node "${path.join(env.hooksDir, statuslineHook.name)}"`;
+ settings.statusLine = {
+ type: 'command',
+ command: statuslineCommand,
+ };
+ modified = true;
+ actions.push({ name: 'settings/statusLine', status: dryRun ? 'would configure' : 'configured' });
+ } else if (statuslineHook && settings.statusLine) {
+ actions.push({ name: 'settings/statusLine', status: 'existing statusline preserved' });
+ }
+
+ if (!dryRun && modified) {
+ fs.writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + '\n', 'utf8');
+ }
+
+ return actions;
+ }
+
+ function deregisterHooksFromSettings(env, dryRun) {
+ if (!env.settingsFile) return [];
+
+ const settingsPath = env.settingsFile;
+ if (!fs.existsSync(settingsPath)) return [];
+
+ const actions = [];
+ let settings;
+ try {
+ settings = JSON.parse(fs.readFileSync(settingsPath, 'utf8'));
+ } catch (e) {
+ // Corrupted settings.json — skip deregistration to avoid data loss
+ actions.push({ name: 'settings.json', status: 'skipped (parse error)' });
+ return actions;
+ }
+ let modified = false;
+
+ // Remove SessionStart hook entries referencing slashdo
+ if (Array.isArray(settings.hooks?.SessionStart)) {
+ const emptiedByUs = new Set();
+ for (let i = 0; i < settings.hooks.SessionStart.length; i++) {
+ const group = settings.hooks.SessionStart[i];
+ if (!group || typeof group !== 'object') continue;
+ if (Array.isArray(group.hooks)) {
+ const before = group.hooks.length;
+ group.hooks = group.hooks.filter(h =>
+ !h || typeof h !== 'object' || typeof h.command !== 'string' || !h.command.includes('slashdo-check-update')
+ );
+ if (group.hooks.length < before) {
+ modified = true;
+ actions.push({ name: 'settings/SessionStart hook', status: dryRun ? 'would deregister' : 'deregistered' });
+ if (group.hooks.length === 0) emptiedByUs.add(i);
+ }
+ }
+ }
+ // Only remove groups that became empty as a result of removing slashdo entries
+ settings.hooks.SessionStart = settings.hooks.SessionStart.filter((_, i) => !emptiedByUs.has(i));
+ if (settings.hooks.SessionStart.length === 0) {
+ delete settings.hooks.SessionStart;
+ }
+ if (Object.keys(settings.hooks).length === 0) {
+ delete settings.hooks;
+ }
+ }
+
+ // Remove statusline if it references slashdo-statusline
+ if (settings.statusLine?.command?.includes('slashdo-statusline')) {
+ delete settings.statusLine;
+ modified = true;
+ actions.push({ name: 'settings/statusLine', status: dryRun ? 'would remove' : 'removed' });
+ }
+
+ if (!dryRun && modified) {
+ fs.writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + '\n', 'utf8');
+ }
+
+ return actions;
+ }
+
  function install({ env, packageDir, filterNames, dryRun, uninstall }) {
  const commandsDir = path.join(packageDir, 'commands');
  const libDir = path.join(packageDir, 'lib');
@@ -83,7 +241,7 @@ function install({ env, packageDir, filterNames, dryRun, uninstall }) {
  const results = { installed: 0, updated: 0, upToDate: 0, removed: 0, actions: [] };

  if (uninstall) {
- return doUninstall(filtered, libFiles, hookFiles, env, results, dryRun);
+ return doUninstall(filtered, libFiles, hookFiles, env, results, dryRun, filterNames);
  }

  for (const cmd of filtered) {
@@ -181,8 +339,15 @@ function install({ env, packageDir, filterNames, dryRun, uninstall }) {
  if (isNew) results.installed++;
  else results.updated++;
  }
+
+ // Register hooks in settings.json (only for full installs, not filtered command installs)
+ if (!filterNames?.length) {
+ const settingsActions = registerHooksInSettings(env, hookFiles, dryRun);
+ results.actions.push(...settingsActions);
+ }
  }

+ // Clean up renamed commands
  for (const [oldName, newName] of Object.entries(RENAMED_COMMANDS)) {
  const oldRelPath = path.join('do', oldName + '.md');
  const oldTargetRel = getTargetFilename(oldRelPath, env);
@@ -198,6 +363,23 @@ function install({ env, packageDir, filterNames, dryRun, uninstall }) {
  }
  }

+ // Clean up obsolete hooks from prior versions
+ if (env.supportsHooks && env.hooksDir) {
+ for (const oldName of OBSOLETE_HOOKS) {
+ const oldTargetPath = path.join(env.hooksDir, oldName);
+
+ if (fs.existsSync(oldTargetPath)) {
+ if (dryRun) {
+ results.actions.push({ name: `hook/${oldName}`, status: 'would remove (obsolete)' });
+ } else {
+ fs.unlinkSync(oldTargetPath);
+ results.actions.push({ name: `hook/${oldName}`, status: 'removed (obsolete)' });
+ }
+ results.removed++;
+ }
+ }
+ }
+
  if (!dryRun && env.versionFile) {
  const pkg = JSON.parse(fs.readFileSync(path.join(packageDir, 'package.json'), 'utf8'));
  fs.writeFileSync(env.versionFile, pkg.version, 'utf8');
@@ -206,7 +388,7 @@ function install({ env, packageDir, filterNames, dryRun, uninstall }) {
  return results;
  }

- function doUninstall(commands, libFiles, hookFiles, env, results, dryRun) {
+ function doUninstall(commands, libFiles, hookFiles, env, results, dryRun, filterNames) {
  for (const cmd of commands) {
  const targetRel = getTargetFilename(cmd.relPath, env);
  const targetPath = path.join(env.commandsDir, targetRel);
@@ -237,7 +419,7 @@ function doUninstall(commands, libFiles, hookFiles, env, results, dryRun) {
  }
  }

- if (env.supportsHooks && env.hooksDir) {
+ if (env.supportsHooks && env.hooksDir && !filterNames?.length) {
  for (const hook of hookFiles) {
  const targetPath = path.join(env.hooksDir, hook.relPath);
  if (!fs.existsSync(targetPath)) continue;
@@ -250,6 +432,35 @@ function doUninstall(commands, libFiles, hookFiles, env, results, dryRun) {
  }
  results.removed++;
  }
+
+ // Clean up obsolete hooks that may have been installed by prior versions
+ for (const oldName of OBSOLETE_HOOKS) {
+ const oldPath = path.join(env.hooksDir, oldName);
+ if (fs.existsSync(oldPath)) {
+ if (dryRun) {
+ results.actions.push({ name: `hook/${oldName}`, status: 'would remove (obsolete)' });
+ } else {
+ fs.unlinkSync(oldPath);
+ results.actions.push({ name: `hook/${oldName}`, status: 'removed (obsolete)' });
+ }
+ results.removed++;
+ }
+ }
+
+ // Deregister hooks and clean up cache
+ const settingsActions = deregisterHooksFromSettings(env, dryRun);
+ results.actions.push(...settingsActions);
+
+ const cacheFile = path.join(path.dirname(env.hooksDir), 'cache', 'slashdo-update-check.json');
+ if (fs.existsSync(cacheFile)) {
+ if (dryRun) {
+ results.actions.push({ name: 'cache/slashdo-update-check.json', status: 'would remove' });
+ } else {
+ fs.unlinkSync(cacheFile);
+ results.actions.push({ name: 'cache/slashdo-update-check.json', status: 'removed' });
+ }
+ results.removed++;
+ }
  }

  if (!dryRun && env.versionFile && fs.existsSync(env.versionFile)) {
package/uninstall.sh CHANGED
@@ -1,6 +1,7 @@
 #!/usr/bin/env bash
 # slashdo — curl-based uninstaller
 # Usage: curl -fsSL https://raw.githubusercontent.com/atomantic/slashdo/main/uninstall.sh | bash
+# shellcheck disable=SC2059,SC2207
 set -euo pipefail
 
 CYAN='\033[0;36m'
@@ -32,9 +33,14 @@ LIBS=(
   code-review-checklist copilot-review-loop graphql-escaping
 )
 
+HOOKS=(slashdo-check-update slashdo-statusline)
+
+OLD_HOOKS=(update-check)
+
 uninstall_claude() {
   local target_cmd="$HOME/.claude/commands/do"
   local target_lib="$HOME/.claude/lib"
+  local target_hooks="$HOME/.claude/hooks"
   local count=0
 
   printf "  Uninstalling from ${GREEN}Claude Code${RESET}...\n"
@@ -55,8 +61,93 @@ uninstall_claude() {
     fi
   done
 
+  for hook in "${HOOKS[@]}"; do
+    if [ -f "$target_hooks/$hook.js" ]; then
+      rm -f "$target_hooks/$hook.js"
+      printf "    removed: hook/%-17s${GREEN}ok${RESET}\n" "$hook.js"
+      count=$((count + 1))
+    fi
+  done
+
+  for old in "${OLD_HOOKS[@]}"; do
+    if [ -f "$target_hooks/$old.md" ]; then
+      rm -f "$target_hooks/$old.md"
+      printf "    removed: hook/%-17s${GREEN}ok${RESET}\n" "$old.md"
+      count=$((count + 1))
+    fi
+  done
+
+  # Remove cache file
+  if [ -f "$HOME/.claude/cache/slashdo-update-check.json" ]; then
+    rm -f "$HOME/.claude/cache/slashdo-update-check.json"
+    printf "    removed: cache/slashdo-update-check.json  ${GREEN}ok${RESET}\n"
+    count=$((count + 1))
+  fi
+
   if [ -f "$HOME/.claude/.slashdo-version" ]; then
     rm -f "$HOME/.claude/.slashdo-version"
+    printf "    removed: .slashdo-version  ${GREEN}ok${RESET}\n"
+    count=$((count + 1))
+  fi
+
+  # Deregister from settings.json (requires Node.js)
+  if command -v node &>/dev/null; then
+    if node -e '
+      const fs = require("fs");
+      const path = require("path");
+      const home = require("os").homedir();
+      const settingsPath = path.join(home, ".claude", "settings.json");
+
+      if (!fs.existsSync(settingsPath)) process.exit(0);
+
+      let settings;
+      try {
+        settings = JSON.parse(fs.readFileSync(settingsPath, "utf8"));
+      } catch (e) {
+        process.stdout.write("    skipped settings.json deregistration (parse error)\n");
+        process.exit(0);
+      }
+      let modified = false;
+
+      if (settings.hooks && Array.isArray(settings.hooks.SessionStart)) {
+        var emptiedByUs = {};
+        for (var i = 0; i < settings.hooks.SessionStart.length; i++) {
+          var group = settings.hooks.SessionStart[i];
+          if (!group || typeof group !== "object") continue;
+          if (Array.isArray(group.hooks)) {
+            var before = group.hooks.length;
+            group.hooks = group.hooks.filter(function(h) {
+              if (!h || typeof h !== "object") return true;
+              return typeof h.command !== "string" || h.command.indexOf("slashdo-check-update") === -1;
+            });
+            if (group.hooks.length < before) {
+              modified = true;
+              if (group.hooks.length === 0) emptiedByUs[i] = true;
+            }
+          }
+        }
+        settings.hooks.SessionStart = settings.hooks.SessionStart.filter(function(g, i) {
+          return !emptiedByUs[i];
+        });
+        if (settings.hooks.SessionStart.length === 0) delete settings.hooks.SessionStart;
+        if (Object.keys(settings.hooks).length === 0) delete settings.hooks;
+      }
+
+      if (settings.statusLine && settings.statusLine.command &&
+          settings.statusLine.command.indexOf("slashdo-statusline") !== -1) {
+        delete settings.statusLine;
+        modified = true;
+      }
+
+      if (modified) {
+        fs.writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + "\n");
+        process.stdout.write("    deregistered from settings.json\n");
+      }
+    '; then
+      :  # deregistration handled inside node
+    else
+      printf "    ${YELLOW}settings.json deregistration failed${RESET}\n"
+    fi
   fi
 
   if [ $count -eq 0 ]; then
@@ -1,21 +0,0 @@
----
-description: Check for slashdo updates on session start
-hooks:
-  - event: SessionStart
----
-
-# slashdo Update Check
-
-Check if a newer version of slashdo is available.
-
-## Steps
-
-1. Read the installed version from `~/.claude/.slashdo-version`
-2. If the file doesn't exist, skip silently (slashdo may not be installed)
-3. Run `npm view slash-do version` with a 3-second timeout
-4. Compare versions
-5. If the latest version is newer than the installed version, print:
-   ```
-   slashdo update available: v{installed} -> v{latest} (run /do:update)
-   ```
-6. If up to date or if the check fails (network error, timeout), do nothing — don't interrupt the user's session