slash-do 1.0.0 → 1.2.0

package/bin/cli.js CHANGED
@@ -208,7 +208,11 @@ async function main() {
    }
  }

- main().catch(err => {
-   console.error('Error:', err.message);
-   process.exit(1);
- });
+ if (require.main === module) {
+   main().catch(err => {
+     console.error('Error:', err.message);
+     process.exit(1);
+   });
+ }
+
+ module.exports = { parseArgs };
@@ -13,6 +13,42 @@ Parse `$ARGUMENTS` for:
  - **Path filter**: limit scanning scope to specific directories or files
  - **Focus areas**: e.g., "security only", "DRY and bugs"

+ ## Configuration
+
+ Before starting the pipeline, present the user with configuration options using `AskUserQuestion`:
+
+ ```
+ AskUserQuestion([
+   {
+     question: "Which model profile for audit and remediation agents?",
+     header: "Model",
+     multiSelect: false,
+     options: [
+       { label: "Quality", description: "Opus for all agents — fewest false positives, best fixes (highest cost, 7+ Opus agents)" },
+       { label: "Balanced (Recommended)", description: "Sonnet for audit and remediation — good quality at moderate cost" },
+       { label: "Budget", description: "Haiku for audit, Sonnet for remediation — fastest and cheapest" }
+     ]
+   }
+ ])
+ ```
+
+ Record the selection as `MODEL_PROFILE` and derive agent models from this table:
+
+ | Agent Role | Quality | Balanced | Budget |
+ |------------|---------|----------|--------|
+ | Audit agents (7 Explore agents, Phase 1) | opus | sonnet | haiku |
+ | Remediation agents (general-purpose, Phase 3) | opus | sonnet | sonnet |
+
+ Derive two variables:
+ - `AUDIT_MODEL`: `opus` / `sonnet` / `haiku` based on profile
+ - `REMEDIATION_MODEL`: `opus` / `sonnet` / `sonnet` based on profile
+
+ When the resolved model is `opus`, **omit** the `model` parameter on the Agent/Task call so the agent inherits the session's Opus version. This avoids version conflicts when organizations pin specific Opus versions.
+
+ ### Model Profile Rationale
+
+ Opus reduces false positives in audit (judgment-heavy). Sonnet is the floor for code-writing agents (remediation). Haiku works for fast first-pass pattern scanning but may produce more false positives — remediation agents (Sonnet+) validate before fixing.
+
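The profile-to-model table above reduces to a small lookup. A sketch, assuming the three profile labels exactly as offered in `AskUserQuestion`; the function name `deriveModels` and the `undefined`-means-omit convention are illustrative, not part of the command:

```javascript
// Sketch: derive AUDIT_MODEL / REMEDIATION_MODEL from the selected profile.
// Returning undefined encodes "omit the model parameter so the agent
// inherits the session's Opus version".
function deriveModels(profile) {
  const table = {
    'Quality':  { audit: 'opus',   remediation: 'opus' },
    'Balanced': { audit: 'sonnet', remediation: 'sonnet' },
    'Budget':   { audit: 'haiku',  remediation: 'sonnet' },
  };
  const row = table[profile];
  if (!row) throw new Error(`Unknown profile: ${profile}`);
  return {
    auditModel: row.audit === 'opus' ? undefined : row.audit,
    remediationModel: row.remediation === 'opus' ? undefined : row.remediation,
  };
}
```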
  ## Phase 0: Discovery & Setup

  Detect the project environment before any scanning or remediation.
@@ -41,7 +77,7 @@ Derive build and test commands from the project type:
  - Rust: `cargo build`, `cargo test`
  - Python: `pytest`, `python -m pytest`
  - Go: `go build ./...`, `go test ./...`
- - If ambiguous, check CLAUDE.md for documented commands
+ - If ambiguous, check project conventions already in context for documented commands

  Record as `BUILD_CMD` and `TEST_CMD`.

@@ -64,7 +100,7 @@ This ensures the browser is ready before we need it in Phase 6, avoiding interru

  ## Phase 1: Unified Audit

- Read the project's CLAUDE.md files first to understand conventions. Pass relevant conventions to each agent.
+ Project conventions are already in your context. Pass relevant conventions to each agent.

  Launch 7 Explore agents in two batches. Each agent must report findings in this format:
  ```
@@ -81,6 +117,8 @@ If the surrounding context shows the code is correct, do NOT flag it.

  ### Batch 1 (5 parallel Explore agents via Task tool):

+ **Model**: Pass `AUDIT_MODEL` as the `model` parameter on each agent. If `AUDIT_MODEL` is `opus`, omit the parameter to inherit from session.
+
  1. **Security & Secrets**
     Sources: authentication checks, credential exposure, infrastructure security, input validation, dependency health
     Focus: hardcoded credentials, API keys, exposed secrets, authentication bypasses, disabled security checks, PII exposure, injection vulnerabilities (SQL/command/path traversal), insecure CORS configurations, missing auth checks, unsanitized user input in file paths or queries, known CVEs in dependencies (check `npm audit` / `cargo audit` / `pip-audit` / `go vuln` output), abandoned or unmaintained dependencies, overly permissive dependency version ranges
@@ -103,6 +141,8 @@ If the surrounding context shows the code is correct, do NOT flag it.

  ### Batch 2 (2 agents after Batch 1 completes):

+ **Model**: Same `AUDIT_MODEL` as Batch 1.
+
  6. **Stack-Specific**
     Dynamically focus based on `PROJECT_TYPE` detected in Phase 0:
     - **Node/React**: missing cleanup in useEffect, stale closures, unstable deps arrays, duplicate hooks across components, re-created functions inside render, missing AbortController, bundle size concerns (large imports that could be tree-shaken or lazy-loaded)
@@ -223,7 +263,7 @@ If no shared utilities were identified, skip this step.
  - Bugs, Performance & Error Handling
  - Stack-Specific
  3. Only create tasks for categories that have actionable findings
- 4. Spawn up to 5 general-purpose agents as teammates
+ 4. Spawn up to 5 general-purpose agents as teammates. **Pass `REMEDIATION_MODEL` as the `model` parameter on each agent.** If `REMEDIATION_MODEL` is `opus`, omit the parameter to inherit from session.

  ### Agent instructions template:
  ```
@@ -431,7 +471,6 @@ If `BROWSER_AUTHENTICATED` is not true (e.g., Phase 0e was skipped or failed):
  1. Navigate to the first PR URL using `browser_navigate`
  2. Check for user avatar/menu
  3. If not logged in: navigate to `https://github.com/login`, inform the user **"Please log in to GitHub in the browser. I'll wait for you to confirm."**, and use `AskUserQuestion` to wait
- 4. Do NOT close the browser at any point during this phase

  ### 6.1: Request Copilot reviews on all PRs

@@ -450,17 +489,16 @@ If this returns 422 ("not a collaborator"), **fall back to Playwright** for each

  ### 6.2: Poll for review completion

- For each PR, poll every 15 seconds, max 3 minutes (12 polls):
- ```bash
- gh api repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/reviews --jq '.[] | "\(.user.login): \(.state)"'
- ```
+ **Dynamic poll timing**: Before your first poll, check how long the most recent Copilot review on this PR took by comparing consecutive Copilot review `submittedAt` timestamps (or PR creation time for the first review). Use that duration as your expected wait. If no prior review exists, default to 5 minutes. Set poll interval to 60 seconds and max wait to **2x the expected duration** (minimum 5 minutes, maximum 20 minutes). Copilot reviews can take **10-15 minutes** for large diffs — do NOT give up early.

- Also check for inline comments:
+ For each PR, poll using GraphQL to check for a new Copilot review:
  ```bash
- gh api repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/comments --jq '.[] | "\(.user.login) [\(.path):\(.line)]: \(.body[:120])"'
+ echo '{"query":"{ repository(owner: \"OWNER\", name: \"REPO\") { pullRequest(number: PR_NUM) { reviews(last: 5) { nodes { state body author { login } submittedAt } } reviewThreads(first: 100) { nodes { id isResolved comments(first: 3) { nodes { body path line author { login } } } } } } } }"}' | gh api graphql --input -
  ```

- The review is complete when a `copilot[bot]` or `copilot-pull-request-reviewer[bot]` review appears.
+ The review is complete when a new `copilot-pull-request-reviewer[bot]` review appears with a `submittedAt` after your request. If no review appears after max wait, **ask the user** whether to continue waiting, re-request, or skip.
+
+ **Error detection**: After a review appears, check its `body` for error text such as "Copilot encountered an error" or "unable to review this pull request". If found, this is NOT a successful review — log a warning, re-request the review (step 6.1), and resume polling from 6.2. Allow up to 3 error retries per PR before asking the user whether to continue or skip.
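The timing rule added above reduces to a small computation. A sketch with illustrative names, durations in milliseconds:

```javascript
// Sketch of the dynamic poll-timing rule: expected wait comes from the
// previous Copilot review's duration (difference between consecutive
// submittedAt timestamps), defaulting to 5 minutes when no prior review
// exists; max wait is 2x expected, clamped to the 5-20 minute range.
const MINUTE = 60 * 1000;

function pollPlan(prevReviewDurationMs) {
  const expected = prevReviewDurationMs ?? 5 * MINUTE; // no prior review
  const maxWaitMs = Math.min(20 * MINUTE, Math.max(5 * MINUTE, expected * 2));
  return { pollIntervalMs: 1 * MINUTE, maxWaitMs };
}
```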
 
  ### 6.3: Check for unresolved threads

@@ -564,7 +602,6 @@ If merge fails (e.g., branch protection, merge conflicts from a prior PR):
  - **Copilot timeout** (review not received within 3 min): inform user, offer to merge without review approval or wait longer
  - **Copilot review loop exceeds 5 iterations per PR**: stop iterating on that PR, inform user, proceed to merge
  - **Existing worktree found at startup**: ask user — resume (reuse worktree) or cleanup (remove and start fresh)
- - **`gh auth status` / `glab auth status` failure**: halt and tell user to authenticate first
  - **No findings above LOW**: skip Phases 3-7, print "No actionable findings" with the LOW summary
  - **Browser not authenticated**: use `AskUserQuestion` to ask the user to log in — never skip this or close the browser
  - **Merge conflict after prior PR merged**: rebase the branch onto the updated default branch, push with `--force-with-lease`, re-run CI
@@ -580,8 +617,5 @@ If merge fails (e.g., branch protection, merge conflicts from a prior PR):
  - When extracting modules, always add backward-compatible re-exports in the original module to prevent cross-PR breakage
  - Version bump happens exactly once on the first category branch based on aggregate commit analysis
  - Only CRITICAL, HIGH, and MEDIUM findings are auto-remediated; LOW and Test Coverage remain tracked in PLAN.md
- - Do not include co-author or generated-by info in any commits, PRs, or output
  - GitLab projects skip the Copilot review loop entirely (Phase 6) and stop after MR creation
- - Always run `gh auth status` (or `glab auth status`) before any authenticated operation
- - The Playwright browser should be opened early (Phase 0e) and kept open throughout — never close it prematurely
  - CI must pass on each PR before requesting Copilot review or merging
@@ -94,14 +94,10 @@ gh pr create \

  - Write a clear title and rich description
  - If a PR template was found, follow its structure
- - Do NOT include co-author or "generated with" messages
  - Print the resulting PR URL so the user can review it

  ## Important

- - Never stage files you didn't edit
- - Never use `git add -A` or `git add .`
- - Do NOT bump versions or update changelogs — upstream maintainers control those
  - Do NOT merge the PR — upstream maintainers handle that
  - Do NOT run Copilot review loops — you don't control the upstream repo's review settings
  - If the fork is significantly behind upstream, warn the user about potential merge conflicts
package/commands/do/pr.md CHANGED
@@ -32,7 +32,7 @@ Before creating the PR, perform a thorough self-review. Read each changed file
  ## Open the PR

  - Create a PR from `{current_branch}` to `{default_branch}`
- - Create a rich PR description — no co-author or "generated with" messages
+ - Create a rich PR description

  **IMPORTANT**: During each fix cycle in the Copilot review loop below, after fixing all review comments and before pushing, also bump the patch version (`npm version patch --no-git-tag-version` or equivalent) and commit the version bump.

@@ -10,6 +10,7 @@ Commit and push all work from this session, updating documentation as needed.

  1. **Identify changes to commit**:
     - Run `git status` and `git diff --stat` to see what changed
+    - If there are no changes to commit, inform the user and stop
     - If you edited files in this session, commit only those files
     - If invoked without prior edit context, review all uncommitted changes
     - If there are files that should be in .gitignore but are not yet listed, ensure proper .gitignore coverage
@@ -47,11 +48,3 @@ Commit and push all work from this session, updating documentation as needed.

  5. **Push the changes**:
     - Use `git pull --rebase --autostash && git push` to push safely
-
- ## Important
-
- - Never stage files you didn't edit
- - Never use `git add -A` or `git add .`
- - Keep commit messages focused on the "why" not just the "what"
- - If there are no changes to commit, inform the user
- - Do NOT bump the version in package.json — `/release` handles versioning
@@ -6,22 +6,30 @@ description: Create a release PR using the project's documented release workflow

  Before doing anything, determine the project's source and target branches for releases. Do NOT hardcode branch names. Instead, discover them:

- 1. **Source branch** — run `gh repo view --json defaultBranchRef -q '.defaultBranchRef.name'` to get the repo's default branch
+ 1. **Source branch** — run `gh repo view --json defaultBranchRef -q '.defaultBranchRef.name'` to get the repo's default branch (typically `main`)
  2. **Target branch** — determine by reading (in priority order):
     - **GitHub Actions workflows** — check `.github/workflows/release.yml` (or similar) for `on: push: branches:` to find the branch that triggers the release pipeline
-    - **Project CLAUDE.md** — look for git workflow sections, branch descriptions, or release instructions
+    - **Project conventions** (already in context) — look for git workflow sections, branch descriptions, or release instructions
     - **Versioning docs** — check `docs/VERSIONING.md`, `CONTRIBUTING.md`, or `RELEASING.md`
     - **Branch convention** — if a `release` branch exists, the target is `release`; otherwise ask the user
+ 3. **Ensure the target branch exists** — if not, create it from the last release tag:
+    ```bash
+    git branch release $(git describe --tags --abbrev=0)
+    git push -u origin release
+    ```
+    This ensures the PR diff shows ALL changes since the last release, not just the version bump.

  Print the detected workflow: `Detected release flow: {source} → {target}`

  If ambiguous, ask the user to confirm before proceeding.

+ **Important**: The PR direction is `{source}` → `{target}` (e.g., `main` → `release`). This gives Copilot the full diff of all changes since the last release for review. Do NOT create a branch from source and PR back into it — that only shows the version bump commit.
+
  ## Pre-Release Checks

  1. **Ensure you're on the source branch** — checkout if needed
  2. **Pull latest** — `git pull --rebase --autostash`
- 3. **Run tests** — execute the project's test suite (check CLAUDE.md or package.json for the command)
+ 3. **Run tests** — execute the project's test suite (per project conventions already in context, or check package.json)
  4. **Run build** — execute the project's build command if one exists

  ## Determine Version and Finalize Changelog
@@ -63,8 +71,11 @@ Perform a thorough self-review. Read each changed file — not just the diff —

  ## Open the Release PR

- - Push the source branch to remote
- - Create a PR from `{source}` to `{target}`
+ - Push the source branch to remote (it should already be up to date with the release commit)
+ - Create a PR from `{source}` → `{target}` (e.g., `main` → `release`)
+   ```bash
+   gh pr create --title "Release v{version}" --base {target} --head {source} --body "..."
+   ```
  - Title: `Release v{version}` (read version from package.json or equivalent)
  - Body: include the changelog content for this version if available, otherwise summarize commits since last release
  - Keep the description clean — no co-author or "generated with" messages
@@ -91,6 +102,12 @@ Perform a thorough self-review. Read each changed file — not just the diff —

  ## Post-Merge

- - Report the final status including version, PR URL, and merge state
- - Remind the user to check for the GitHub release once CI completes (if the project uses automated releases)
- - Switch back to the source branch locally: `git checkout {source} && git pull --rebase --autostash`
+ 1. **Tag the release** on the target branch to trigger the publish workflow:
+    ```bash
+    git fetch origin {target}
+    git tag v{version} origin/{target}
+    git push origin v{version}
+    ```
+ 2. **Switch back to the source branch** locally: `git checkout {source} && git pull --rebase --autostash`
+ 3. **Report the final status** including version, PR URL, tag, and merge state
+ 4. Remind the user to check for the GitHub release once CI completes (if the project uses automated releases)
@@ -13,19 +13,54 @@ argument-hint: "[base-branch]"

  If there are no changes, inform the user and stop.

- ## Read Project Conventions
+ ## Apply Project Conventions

- Read the project's CLAUDE.md (if it exists) to understand:
- - Code style rules, error handling patterns, logging conventions
- - Custom error classes, validation patterns, framework-specific rules
- - Any explicit security model or scope exclusions
-
- These conventions override generic best practices. For example, if CLAUDE.md says "no auth needed — internal tool", do not flag missing authentication.
+ CLAUDE.md is already loaded into your context. Use its rules (code style, error handling, logging, security model, scope exclusions) as overrides to generic best practices throughout this review. For example, if CLAUDE.md says "no auth needed — internal tool", do not flag missing authentication.

  ## Deep File Review

  For **each changed file** in the diff, read the **entire file** (not just diff hunks). Reviewing only the diff misses context bugs where new code interacts incorrectly with existing code.

+ ### Understand the Code Flow
+
+ Before checking individual files against the checklist, **map the flow of changed code across all files**. This means:
+
+ 1. **Trace call chains** — for each new or modified function/method, identify every caller and callee across the changed files. Read those files too if needed. You cannot evaluate whether code is duplicated or well-structured without knowing how it connects.
+ 2. **Identify shared data paths** — trace data from entry point (route handler, event listener, CLI arg) through transforms, storage, and output. Understand what each layer is responsible for.
+ 3. **Map responsibilities** — for each changed module/file, state its single responsibility in one sentence. If you can't, it may be doing too much.
+
+ ### Evaluate Software Engineering Principles
+
+ With the flow understood, evaluate the changed code against these principles:
+
+ **DRY (Don't Repeat Yourself)**
+ - Look for logic duplicated across changed files or between changed and existing code. Grep for similar function signatures, repeated conditional blocks, or copy-pasted patterns with minor variations.
+ - If two functions do nearly the same thing with small differences, they should likely share a common implementation with the differences parameterized.
+ - Duplicated validation, error formatting, or data transformation are common violations.
+
+ **YAGNI (You Ain't Gonna Need It)**
+ - Flag abstractions, config options, parameters, or extension points that serve no current use case. Code should solve the problem at hand, not hypothetical future problems.
+ - Unnecessary wrapper functions, premature generalization (e.g., a factory that produces one type), and unused feature flags are common violations.
+
+ **SOLID Principles**
+ - **Single Responsibility** — each module/function should have one reason to change. If a function handles both business logic and I/O formatting, flag it.
+ - **Open/Closed** — new behavior should be addable without modifying existing working code where practical (e.g., strategy patterns, plugin hooks).
+ - **Liskov Substitution** — if subclasses or interface implementations exist, verify they are fully substitutable without breaking callers.
+ - **Interface Segregation** — callers should not depend on methods they don't use. Large config objects or option bags passed through many layers are a smell.
+ - **Dependency Inversion** — high-level modules should not import low-level implementation details directly when an abstraction boundary would be cleaner.
+
+ **Separation of Concerns**
+ - Business logic should not be tangled with transport (HTTP, WebSocket), storage (SQL, file I/O), or presentation (HTML, JSON formatting).
+ - If a route handler contains business rules beyond simple delegation, flag it.
+
+ **Naming & Readability**
+ - Function and variable names should communicate intent. If you need to read the implementation to understand what a name means, it's poorly named.
+ - Boolean variables/params should read as predicates (`isReady`, `hasAccess`), not ambiguous nouns.
+
+ Only flag principle violations that are **concrete and actionable** in the changed code. Do not flag pre-existing design issues in untouched code unless the changes make them worse.
+
+ ### Per-File Checklist
+
  Check every file against this checklist:

  !`cat ~/.claude/lib/code-review-checklist.md`
@@ -50,7 +85,7 @@ For each issue found:
  1. Classify severity: **CRITICAL** (runtime crash, data leak, security) vs **IMPROVEMENT** (consistency, robustness, conventions)
  2. Fix all CRITICAL issues immediately
  3. For IMPROVEMENT issues, fix them too — the goal is to eliminate Copilot review round-trips
- 4. After fixes, run the project's test suite and build command (check CLAUDE.md for commands)
+ 4. After fixes, run the project's test suite and build command (per project conventions already in context)
  5. Commit fixes: `refactor: address code review findings`

  ## Report
@@ -4,7 +4,7 @@ description: Resolve PR review feedback with parallel agents

  # Resolve PR Review Feedback

- Address the latest code review feedback on the current branch's pull request using a team-based approach.
+ Address the latest code review feedback on the current branch's pull request using parallel sub-agents.

  ## Steps

@@ -18,23 +18,21 @@ Address the latest code review feedback on the current branch's pull request usi
  ```
  Save results to `/tmp/pr_threads.json` for parsing.

- 4. **Spawn a team to address feedback in parallel**:
-    - Create a team with `TeamCreate`
-    - Create a task for each unresolved review thread using `TaskCreate`
-    - Create an additional task for an **independent code quality review** of all files changed in the PR (`gh pr diff --name-only`)
-    - Spawn sub-agents (general-purpose type) as teammates to handle each task in parallel:
-      - One agent per review thread (or group closely related threads on the same file)
-      - One dedicated agent for the code quality review
-    - Each agent should:
+ 4. **Spawn parallel sub-agents to address feedback**:
+    - For small PRs (1-3 unresolved threads), handle fixes inline instead of spawning agents
+    - For larger PRs, spawn one `Agent` call (general-purpose type) per review thread (or group closely related threads on the same file into one agent)
+    - Spawn one additional `Agent` call for an **independent code quality review** of all files changed in the PR (`gh pr diff --name-only`)
+    - Launch all Agent calls **in parallel** (multiple tool calls in a single response) and wait for all to return
+    - Each thread-fixing agent should:
       - Read the file and understand the context of the feedback
       - Make the requested code changes if they are accurate and warranted
       - Look for further opportunities to DRY up affected code
-      - Report back what was changed and the thread ID that was addressed
+      - Return what was changed and the thread ID that was addressed
     - The code quality reviewer should:
       - Read all changed files in the PR
       - Check for: style violations, missing error handling, dead code, DRY violations, security issues
-      - Apply fixes directly and report what was changed
-    - Wait for all agents to complete, then review their changes
+      - Apply fixes directly and return what was changed
+    - After all agents return, review their changes for conflicts or overlapping edits

  5. **Run tests**: Run the project's test suite to verify all changes pass. Do not proceed if tests fail — fix issues first.

@@ -68,18 +66,19 @@ Verify the request was accepted by checking that `Copilot` appears in the respon

  ### Poll for review completion

- Poll every 30 seconds using GraphQL to check for a new review with a `submittedAt` timestamp after the request:
+ Poll using GraphQL to check for a new review with a `submittedAt` timestamp after the request:
  ```bash
  gh api graphql -f query='{ repository(owner: "OWNER", name: "REPO") { pullRequest(number: PR_NUM) { reviews(last: 3) { nodes { state body author { login } submittedAt } } reviewThreads(first: 100) { nodes { id isResolved comments(first: 3) { nodes { body path line author { login } } } } } } } }'
  ```

- Copilot reviews typically take 60-120 seconds. The review is complete when a new `copilot-pull-request-reviewer` review node appears.
+ **Dynamic poll timing**: Before your first poll, check how long the most recent Copilot review on this PR took by comparing consecutive Copilot review `submittedAt` timestamps (or PR creation time for the first review). Use that duration as your expected wait. If no prior review exists, default to 5 minutes. Set poll interval to 60 seconds and max wait to **2x the expected duration** (minimum 5 minutes, maximum 20 minutes). Copilot reviews can take **10-15 minutes** for large diffs — do NOT give up early.
+
+ The review is complete when a new `copilot-pull-request-reviewer` review node appears. If no review appears after max wait, **ask the user** whether to continue waiting, re-request, or skip.
+
+ **Error detection**: After a review appears, check its `body` for error text such as "Copilot encountered an error" or "unable to review this pull request". If found, this is NOT a successful review — log a warning, re-request the review (same API call above), and resume polling. Allow up to 3 error retries before asking the user whether to continue or skip.

  ## Notes

  - Only resolve threads where you've actually addressed the feedback
  - If feedback is unclear or incorrect, leave a reply comment instead of resolving
- - Do not include co-author info in commits
- - For small PRs (1-3 threads), sub-agents may be overkill — use judgment on whether to spawn a team or handle inline
  - Always run tests before committing — never push code with known failures
- - Shut down the team after all work is complete
@@ -0,0 +1,73 @@
+ #!/usr/bin/env node
+ // Check for slashdo updates in background, write result to cache
+ // Called by SessionStart hook - runs once per session
+
+ const fs = require('fs');
+ const path = require('path');
+ const os = require('os');
+ const { spawn } = require('child_process');
+
+ const homeDir = os.homedir();
+ const cacheDir = path.join(homeDir, '.claude', 'cache');
+ const cacheFile = path.join(cacheDir, 'slashdo-update-check.json');
+ const versionFile = path.join(homeDir, '.claude', '.slashdo-version');
+
+ // Best-effort: silently exit on any setup failure (permissions, read-only FS, etc.)
+ try {
+   if (!fs.existsSync(cacheDir)) {
+     fs.mkdirSync(cacheDir, { recursive: true });
+   }
+
+   // Spawn background process so we don't block session start
+   const child = spawn(process.execPath, ['-e', `
+     const fs = require('fs');
+     const { execSync } = require('child_process');
+
+     const cacheFile = ${JSON.stringify(cacheFile)};
+     const versionFile = ${JSON.stringify(versionFile)};
+
+     let installed = '0.0.0';
+     try {
+       if (fs.existsSync(versionFile)) {
+         installed = fs.readFileSync(versionFile, 'utf8').trim();
+       }
+     } catch (e) {}
+
+     // No version file means slashdo isn't installed — skip silently
+     if (installed === '0.0.0') {
+       process.exit(0);
+     }
+
+     let latest = null;
+     try {
+       latest = execSync('npm view slash-do version', { encoding: 'utf8', timeout: 5000, windowsHide: true }).trim();
+     } catch (e) {}
+
+     // Simple semver comparison: only flag update when latest > installed
+     let updateAvailable = false;
+     if (latest && latest !== installed) {
+       const parse = v => (v || '').replace(/^v/, '').replace(/-.+$/, '').split('.').map(Number);
+       const [iM, im, ip] = parse(installed);
+       const [lM, lm, lp] = parse(latest);
+       if ([iM, im, ip, lM, lm, lp].some(isNaN)) { updateAvailable = installed !== latest; }
+       else { updateAvailable = lM > iM || (lM === iM && (lm > im || (lm === im && lp > ip))); }
+     }
+
+     const result = {
+       update_available: updateAvailable,
+       installed,
+       latest: latest || 'unknown',
+       checked: Math.floor(Date.now() / 1000)
+     };
+
+     fs.writeFileSync(cacheFile, JSON.stringify(result));
+   `], {
+     stdio: 'ignore',
+     windowsHide: true,
+     detached: true
+   });
+
+   child.unref();
+ } catch (e) {
+   // Hook is best-effort — never break SessionStart
+ }
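The semver check embedded in the spawned script above can be exercised in isolation. A sketch extracting the same comparison; the function name `isUpdateAvailable` is illustrative:

```javascript
// Same logic as the hook: strip a leading "v" and any prerelease suffix,
// compare major.minor.patch numerically, and fall back to simple string
// inequality when any component fails to parse.
function isUpdateAvailable(installed, latest) {
  if (!latest || latest === installed) return false;
  const parse = v => (v || '').replace(/^v/, '').replace(/-.+$/, '').split('.').map(Number);
  const [iM, im, ip] = parse(installed);
  const [lM, lm, lp] = parse(latest);
  if ([iM, im, ip, lM, lm, lp].some(isNaN)) return installed !== latest;
  return lM > iM || (lM === iM && (lm > im || (lm === im && lp > ip)));
}
```

Note that stripping the prerelease suffix means `1.2.0-beta.1` compares equal to `1.2.0`, so prereleases of the installed version are not flagged as updates.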
@@ -0,0 +1,123 @@
+ #!/usr/bin/env node
+ // slashdo statusline for Claude Code
+ // Shows: model | current task | directory | context usage | update notifications
+
+ const fs = require('fs');
+ const path = require('path');
+ const os = require('os');
+
+ // Read JSON from stdin
+ let input = '';
+ process.stdin.setEncoding('utf8');
+ process.stdin.on('data', chunk => input += chunk);
+ process.stdin.on('end', () => {
+   try {
+     const data = JSON.parse(input);
+     const model = data.model?.display_name || 'Claude';
+     const dir = data.workspace?.current_dir || process.cwd();
+     const session = data.session_id || '';
+     const remaining = data.context_window?.remaining_percentage;
+
+     // Context window display (shows USED percentage scaled to 80% limit)
+     // Claude Code enforces an 80% context limit, so we scale to show 100% at that point
+     let ctx = '';
+     if (remaining != null) {
+       const rem = Math.round(remaining);
+       const rawUsed = Math.max(0, Math.min(100, 100 - rem));
+       // Scale: 80% real usage = 100% displayed
+       const used = Math.min(100, Math.round((rawUsed / 80) * 100));
+
+       // Write context metrics to bridge file for context-monitor hooks
+       if (session) {
+         try {
+           const safeSession = session.replace(/[^a-zA-Z0-9_-]/g, '');
+           const bridgePath = path.join(os.tmpdir(), `claude-ctx-${safeSession}.json`);
+           const bridgeData = JSON.stringify({
+             session_id: session,
+             remaining_percentage: remaining,
+             used_pct: used,
+             timestamp: Math.floor(Date.now() / 1000)
+           });
+           fs.writeFileSync(bridgePath, bridgeData);
+         } catch (e) {
+           // Silent fail -- bridge is best-effort, don't break statusline
+         }
+       }
+
+       // Build progress bar (10 segments)
+       const filled = Math.floor(used / 10);
+       const bar = '█'.repeat(filled) + '░'.repeat(10 - filled);
+
+       // Color based on scaled usage
+       if (used < 63) {
+         ctx = ` \x1b[32m${bar} ${used}%\x1b[0m`;
+       } else if (used < 81) {
+         ctx = ` \x1b[33m${bar} ${used}%\x1b[0m`;
+       } else if (used < 95) {
+         ctx = ` \x1b[38;5;208m${bar} ${used}%\x1b[0m`;
+       } else {
+         ctx = ` \x1b[5;31m💀 ${bar} ${used}%\x1b[0m`;
+       }
+     }
+
+     // Current task from todos
+     let task = '';
+     const homeDir = os.homedir();
+     const todosDir = path.join(homeDir, '.claude', 'todos');
+     if (session && fs.existsSync(todosDir)) {
+       try {
+         const entries = fs.readdirSync(todosDir);
+         let latestFile = null;
+         let latestMtime = 0;
+         for (const f of entries) {
+           if (!f.startsWith(session) || !f.includes('-agent-') || !f.endsWith('.json')) continue;
+           try {
+             const mt = fs.statSync(path.join(todosDir, f)).mtimeMs;
+             if (mt > latestMtime) { latestMtime = mt; latestFile = f; }
+           } catch (e) {}
+         }
+
+         if (latestFile) {
+           try {
+             const todos = JSON.parse(fs.readFileSync(path.join(todosDir, latestFile), 'utf8'));
+             const inProgress = todos.find(t => t.status === 'in_progress');
+             if (inProgress) task = inProgress.activeForm || '';
+           } catch (e) {}
+         }
+       } catch (e) {
+         // Silently fail on file system errors - don't break statusline
+       }
+     }
+
+     // Update notifications (GSD + slashdo)
+     let updates = '';
+     const gsdCacheFile = path.join(homeDir, '.claude', 'cache', 'gsd-update-check.json');
+     if (fs.existsSync(gsdCacheFile)) {
+       try {
+         const cache = JSON.parse(fs.readFileSync(gsdCacheFile, 'utf8'));
+         if (cache.update_available) {
+           updates += '\x1b[33m⬆ /gsd:update\x1b[0m │ ';
+         }
+       } catch (e) {}
+     }
+     const slashdoCacheFile = path.join(homeDir, '.claude', 'cache', 'slashdo-update-check.json');
+     if (fs.existsSync(slashdoCacheFile)) {
+       try {
+         const cache = JSON.parse(fs.readFileSync(slashdoCacheFile, 'utf8'));
+         if (cache.update_available) {
+           updates += '\x1b[33m⬆ /do:update\x1b[0m │ ';
+         }
+       } catch (e) {}
+     }
+
+     // Output
+     const dirname = path.basename(dir);
+     if (task) {
+       process.stdout.write(`${updates}\x1b[2m${model}\x1b[0m │ \x1b[1m${task}\x1b[0m │ \x1b[2m${dirname}\x1b[0m${ctx}`);
+     } else {
+       process.stdout.write(`${updates}\x1b[2m${model}\x1b[0m │ \x1b[2m${dirname}\x1b[0m${ctx}`);
+     }
+   } catch (e) {
+     // Silent fail - don't break statusline on parse errors
+   }
+ });