@kodrunhq/opencode-autopilot 1.4.0 → 1.5.0

@@ -0,0 +1,266 @@
1
+ ---
2
+ name: e2e-testing
3
+ description: End-to-end testing patterns for critical user flows -- test the system as a user would use it
4
+ stacks: []
5
+ requires: []
6
+ ---
7
+
8
+ # E2E Testing
9
+
10
+ End-to-end testing patterns for critical user flows. E2E tests verify the system works correctly from the user's perspective, exercising the full stack from UI (or API surface) through business logic to data storage and back. This skill covers when to write E2E tests, how to design them, and how to keep them reliable.
11
+
12
+ ## When to Use
13
+
14
+ - Critical user flows that must never break (login, signup, checkout, data creation)
15
+ - Integration points between multiple services or layers
16
+ - Flows where unit tests cannot catch the issue (routing, middleware chains, full request lifecycle)
17
+ - Before major releases as a confidence gate
18
+ - After infrastructure changes (database migrations, service upgrades, environment changes)
19
+ - When a bug was reported that unit and integration tests did not catch
20
+
21
+ ## When NOT to Use
22
+
23
+ - Pure logic that can be tested with unit tests (calculations, transformations, validators)
24
+ - Single API endpoint behavior (use integration tests)
25
+ - UI component rendering in isolation (use component tests)
26
+ - Performance benchmarking (use dedicated performance tests)
27
+
28
+ E2E tests are the most expensive tests to write and maintain. Use them surgically for flows where no other test type provides sufficient confidence.
29
+
30
+ ## E2E Test Design Principles
31
+
32
+ ### Test User Journeys, Not Components
33
+
34
+ An E2E test should mirror a real user workflow from start to finish:
35
+
36
+ ```js
37
+ // Good: Complete user journey
38
+ test("new user can sign up, verify email, and access dashboard", () => {
39
+ navigateTo("/signup")
40
+ fillForm({ email: "alice@example.com", password: "SecurePass123!" })
41
+ clickButton("Create Account")
42
+ verifyEmailLink()
43
+ expectRedirectTo("/dashboard")
44
+ expectVisible("Welcome, Alice")
45
+ })
46
+
47
+ // Bad: Testing a single component in E2E
48
+ test("signup button renders correctly", () => {
49
+ navigateTo("/signup")
50
+ expectVisible("Create Account") // This is a component test
51
+ })
52
+ ```
53
+
54
+ ### Use Realistic Data
55
+
56
+ - Use data that resembles production data in structure and edge cases
57
+ - Include special characters, long strings, and boundary values
58
+ - Avoid "test123" or "foo@bar.com" -- use realistic names and formats
59
+ - If the system has user roles, test each role's journey separately
60
+
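A small factory keeps data realistic and unique per run; the field names and values below are illustrative, not part of this skill's API:

```javascript
// Hypothetical user factory -- field names and policies are illustrative.
let seq = 0;

function makeTestUser(overrides = {}) {
  seq += 1;
  const unique = `${Date.now()}-${seq}`; // unique per run and per call
  return {
    email: `alice.martinez+${unique}@example.com`,
    name: "Alice Martínez",               // realistic, non-ASCII name
    password: "C0rrect-Horse-Battery!",   // meets common complexity rules
    bio: "x".repeat(500),                 // long-string boundary value
    ...overrides,                         // per-test tweaks win
  };
}
```

Because every call yields a unique email, tests built on this factory can run in parallel without colliding on shared records.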
61
+ ### Keep Tests Independent
62
+
63
+ - Each test should work regardless of execution order
64
+ - Never depend on state created by a previous test
65
+ - Set up all required state within the test itself (or in a beforeEach hook)
66
+ - Clean up created state after the test (or use isolated test environments)
67
+
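One way to make each test own its state end to end, sketched with an in-memory store standing in for the real backend (all names here are hypothetical):

```javascript
// In-memory store standing in for the real backend (hypothetical).
const db = new Map();
let nextId = 0;

function setupUser() {
  const user = { id: ++nextId, email: `user-${nextId}@example.com` };
  db.set(user.id, user);
  return user;
}

function teardownUser(user) {
  db.delete(user.id); // leave nothing behind for the next test
}

// Each test creates, uses, and removes its own user:
function runIsolatedTest(body) {
  const user = setupUser();
  try {
    body(user);
  } finally {
    teardownUser(user); // cleanup runs even if the test throws
  }
}
```

The `try/finally` is the important part: cleanup happens whether the test passes or fails, so no test can poison the next one.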
68
+ ### Use the Page Object Pattern
69
+
70
+ Encapsulate UI interactions behind a clean interface:
71
+
72
+ ```js
73
+ // Page object
74
+ const loginPage = {
75
+ navigate: () => goto("/login"),
76
+ fillEmail: (email) => fill("[data-testid=email]", email),
77
+ fillPassword: (password) => fill("[data-testid=password]", password),
78
+ submit: () => click("[data-testid=login-button]"),
79
+ getErrorMessage: () => getText("[data-testid=error-message]"),
80
+ }
81
+
82
+ // Test uses page object
83
+ test("login with valid credentials", () => {
84
+ loginPage.navigate()
85
+ loginPage.fillEmail("alice@example.com")
86
+ loginPage.fillPassword("SecurePass123!")
87
+ loginPage.submit()
88
+ expectRedirectTo("/dashboard")
89
+ })
90
+ ```
91
+
92
+ Benefits: tests are readable, selector changes only need one update, complex interactions are reusable.
93
+
94
+ ## Test Structure
95
+
96
+ Every E2E test follows the same four-phase structure:
97
+
98
+ ### Arrange
99
+
100
+ Set up the preconditions for the test:
101
+
102
+ - Create test data (users, records, configurations)
103
+ - Navigate to the starting page or API endpoint
104
+ - Ensure the system is in a clean, known state
105
+ - Set up any mocks for external services (payment gateways, email providers)
106
+
107
+ ### Act
108
+
109
+ Perform the user actions being tested:
110
+
111
+ - Click buttons, fill forms, navigate between pages
112
+ - Submit API requests with realistic payloads
113
+ - Wait for async operations to complete (use explicit waits, not sleep)
114
+
115
+ ### Assert
116
+
117
+ Verify the expected outcome:
118
+
119
+ - Check page content, URLs, and UI state
120
+ - Verify API responses (status codes, body content)
121
+ - Check that side effects occurred (database records created, emails sent)
122
+ - Verify error states when testing failure scenarios
123
+
124
+ ### Cleanup
125
+
126
+ Remove test artifacts:
127
+
128
+ - Delete created test data
129
+ - Reset any modified configurations
130
+ - Close opened connections or sessions
131
+ - Leave the system in the same state it was in before the test
132
+
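Put together, the four phases read as a single async test with cleanup guaranteed by `try/finally`; the `api` client and its methods here are hypothetical stand-ins:

```javascript
// Four-phase skeleton. `api` is a hypothetical client for your system.
async function testOrderFlow(api) {
  // Arrange: create the data this test owns
  const user = await api.createUser({ email: "mia+e2e@example.com" });
  try {
    // Act: perform the user action
    const order = await api.placeOrder(user.id, { sku: "BOOK-1", qty: 1 });
    // Assert: verify outcome and side effects
    if (order.status !== "confirmed") {
      throw new Error(`unexpected status: ${order.status}`);
    }
  } finally {
    // Cleanup: runs whether the assertions passed or not
    await api.deleteUser(user.id);
  }
}
```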
133
+ ## Common E2E Patterns
134
+
135
+ ### Pattern: Happy Path First
136
+
137
+ Always implement the successful flow before testing error cases.
138
+
139
+ 1. Write the happy path test (user completes the flow successfully)
140
+ 2. Verify the happy path is stable and passes consistently
141
+ 3. Then add error case tests (invalid input, network failures, auth failures)
142
+ 4. Then add edge case tests (concurrent requests, timeout scenarios)
143
+
144
+ Rationale: the happy path test validates that the entire stack works. If the happy path fails, error case tests are meaningless.
145
+
146
+ ### Pattern: Smoke Tests
147
+
148
+ A minimal set of E2E tests that verify the application starts and basic flows work:
149
+
150
+ - Application loads without errors
151
+ - Login works with valid credentials
152
+ - The primary feature is accessible and functional
153
+ - Critical API endpoints respond with expected status codes
154
+
155
+ Run smoke tests on every commit. They should complete in under 2 minutes. If a smoke test fails, the build is broken.
156
+
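A smoke pass can be as small as a status sweep over a handful of routes. In this sketch the transport is injected, so the same sweep works with `fetch` in CI or a stub in unit checks; the paths are illustrative:

```javascript
// Smoke sweep: every listed route must answer 200.
// `get(path)` is injected -- e.g. async p => (await fetch(base + p)).status
async function smokePass(get, paths = ["/healthz", "/login", "/api/status"]) {
  const failures = [];
  for (const path of paths) {
    const status = await get(path);
    if (status !== 200) failures.push(`${path} -> ${status}`);
  }
  return failures; // empty array means the build is not obviously broken
}
```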
157
+ ### Pattern: Critical Path Tests
158
+
159
+ Tests for the most important user flows -- the ones that directly impact revenue or user trust:
160
+
161
+ - User registration and onboarding
162
+ - Core feature workflow (the thing users pay for)
163
+ - Payment and billing (if applicable)
164
+ - Data export and deletion (compliance-critical)
165
+
166
+ Run critical path tests before every release and on the release candidate branch. They may take 5-15 minutes.
167
+
168
+ ### Pattern: Contract Tests for Service Boundaries
169
+
170
+ When your E2E tests span multiple services, use contract tests to verify the API contract between them:
171
+
172
+ - Producer tests: verify the API produces responses matching the contract
173
+ - Consumer tests: verify the client correctly handles the contract responses
174
+ - Contract changes require both sides to update
175
+
176
+ This reduces the need for full cross-service E2E tests, which are expensive and flaky.
177
+
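The core idea can be sketched as a contract object both suites share: the producer asserts its responses conform, the consumer feeds the same shapes into its client. The fields below are illustrative; real projects typically use a contract-testing tool such as Pact:

```javascript
// A shared contract both producer and consumer tests can check against.
// Field names are illustrative.
const userContract = {
  id: v => typeof v === "string" && v.length > 0,
  email: v => typeof v === "string" && v.includes("@"),
  createdAt: v => !Number.isNaN(Date.parse(v)),
};

function conforms(contract, payload) {
  return Object.entries(contract).every(([field, check]) => check(payload[field]));
}
```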
178
+ ## Anti-Pattern Catalog
179
+
180
+ ### Anti-Pattern: Testing Everything E2E
181
+
182
+ **What it looks like:** Writing E2E tests for every feature, including pure logic and simple CRUD operations.
183
+
184
+ **Why it is harmful:** E2E tests are slow (seconds to minutes per test), expensive to maintain, and brittle. A large E2E suite becomes a bottleneck that slows down development.
185
+
186
+ **Instead:** Use the testing pyramid -- unit tests for logic, integration tests for APIs and data access, E2E only for critical user flows. A healthy ratio is roughly 70% unit, 20% integration, 10% E2E.
187
+
188
+ ### Anti-Pattern: Flaky Tests
189
+
190
+ **What it looks like:** Tests that pass or fail randomly without any code change. Developers re-run the suite hoping for green.
191
+
192
+ **Why it is harmful:** Flaky tests destroy trust in the test suite. Teams start ignoring failures, and real bugs slip through.
193
+
194
+ **Instead:**
195
+ - Use explicit waits instead of arbitrary sleep/delays
196
+ - Ensure clean state before each test (no leftover data from previous runs)
197
+ - Avoid timing-dependent assertions (use polling with timeout instead)
198
+ - Run flaky tests in isolation to identify the root cause
199
+ - Quarantine flaky tests until fixed -- do not leave them in the main suite
200
+
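An explicit wait is a probe polled until truthy or a deadline, never a fixed sleep. A minimal sketch (the default timings are illustrative):

```javascript
// Poll `probe` until it returns truthy; fail loudly at the deadline.
async function waitFor(probe, { timeoutMs = 5000, intervalMs = 100 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await probe();
    if (value) return value;
    if (Date.now() >= deadline) throw new Error("waitFor: timed out");
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}
```

Unlike a sleep, this finishes as soon as the condition holds and fails with a clear error when it never does.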
201
+ ### Anti-Pattern: No Cleanup
202
+
203
+ **What it looks like:** Tests create data (users, records, files) but never clean up, causing subsequent tests to fail due to duplicate data or unexpected state.
204
+
205
+ **Why it is harmful:** Tests become order-dependent and fail in CI but pass locally (or vice versa).
206
+
207
+ **Instead:** Clean up in afterEach hooks. Use database transactions that roll back. Use unique identifiers (timestamps, UUIDs) for test data. Run each test in an isolated environment if possible.
208
+
209
+ ### Anti-Pattern: Hardcoded Selectors
210
+
211
+ **What it looks like:** Tests use CSS selectors like `.btn-primary`, `#submit`, or DOM structure paths like `div > form > button:nth-child(3)`.
212
+
213
+ **Why it is harmful:** Any UI restructuring or CSS class change breaks the tests, even if the functionality is unchanged.
214
+
215
+ **Instead:** Use dedicated test attributes (`data-testid="login-button"`) that are decoupled from styling and structure. These survive redesigns.
216
+
217
+ ### Anti-Pattern: No Test Data Strategy
218
+
219
+ **What it looks like:** Tests use the same hardcoded user ("admin@test.com") and assume it exists in the database.
220
+
221
+ **Why it is harmful:** Tests fail in fresh environments, cannot run in parallel (shared state conflicts), and mask data-dependent bugs.
222
+
223
+ **Instead:** Each test creates its own data, uses it, and cleans it up. Use factories or fixtures that generate unique, realistic test data.
224
+
225
+ ## Integration with Our Tools
226
+
227
+ ### Automated Review of E2E Tests
228
+
229
+ Use `oc_review` to check E2E test quality. The review engine evaluates:
230
+ - Test isolation (no shared state between tests)
231
+ - Proper cleanup (data created is data deleted)
232
+ - Realistic assertions (not just "page loaded")
233
+ - Flakiness risks (timing dependencies, non-deterministic data)
234
+
235
+ ### TDD for E2E Tests
236
+
237
+ Reference the tdd-workflow skill for the RED-GREEN-REFACTOR approach when writing E2E tests:
238
+ 1. **RED:** Write the E2E test for the user journey. It fails because the feature does not exist.
239
+ 2. **GREEN:** Implement the feature (building up from unit and integration tests). The E2E test passes.
240
+ 3. **REFACTOR:** Clean up the implementation. The E2E test still passes, confirming behavior is preserved.
241
+
242
+ ## Failure Modes
243
+
244
+ ### Test Is Flaky
245
+
246
+ **Symptom:** Test passes locally, fails in CI (or passes 9 out of 10 times).
247
+
248
+ **Diagnosis:** Check for timing dependencies (race conditions between UI updates and assertions), environment differences (ports, timeouts, screen resolution), and shared state (parallel test runs interfering).
249
+
250
+ **Fix:** Add explicit waits for async operations. Use unique test data per run. Ensure the CI environment matches local as closely as possible.
251
+
252
+ ### Test Passes Locally but Fails in CI
253
+
254
+ **Symptom:** Consistent pass on developer machines, consistent fail in CI.
255
+
256
+ **Diagnosis:** Environment differences -- different browser versions, screen sizes, network latency, database contents, or missing environment variables.
257
+
258
+ **Fix:** Run tests in containers that match the CI environment. Use headless browsers with fixed viewport sizes. Check that all environment variables are set in CI config.
259
+
260
+ ### Test Suite Is Too Slow
261
+
262
+ **Symptom:** E2E suite takes more than 15 minutes, blocking the deployment pipeline.
263
+
264
+ **Diagnosis:** Too many E2E tests, or tests are doing work that could be done at a lower level.
265
+
266
+ **Fix:** Move non-critical tests to integration level. Run smoke tests on every commit, critical path tests on release branches only. Parallelize test execution across multiple workers. Use shared authentication setup across tests instead of logging in for each one.
@@ -0,0 +1,296 @@
1
+ ---
2
+ name: git-worktrees
3
+ description: Git worktrees for isolated parallel development — work on multiple branches simultaneously without stashing
4
+ stacks: []
5
+ requires: []
6
+ ---
7
+
8
+ # Git Worktrees
9
+
10
+ Git worktrees let you check out multiple branches simultaneously in separate directories, all sharing the same repository. Instead of stashing, switching branches, and losing context, you create isolated workspaces where each branch has its own working directory.
11
+
12
+ ## When to Use
13
+
14
+ - **Working on multiple features simultaneously** — keep each feature in its own directory with its own running dev server, tests, and editor window
15
+ - **Need to switch context without stashing** — a stash drops context (you forget what you stashed and why); a worktree preserves the branch's full state exactly where you left it
16
+ - **Running long-running tests on one branch while developing on another** — tests run in worktree A while you code in worktree B
17
+ - **Reviewing a PR while keeping your current work intact** — check out the PR branch in a new worktree, review and test it, then return to your main worktree
18
+ - **Comparing implementations side-by-side** — two worktrees with different approaches, run benchmarks in both
19
+ - **Hotfix on production while mid-feature** — create a worktree from the release branch, apply the fix, merge it, then return to your feature
20
+
21
+ ## Git Worktrees Workflow
22
+
23
+ ### Step 1: Create a Worktree
24
+
25
+ ```bash
26
+ # Create a worktree for an existing branch
27
+ git worktree add ../project-feature-name feature-branch
28
+
29
+ # Create a worktree with a new branch
30
+ git worktree add -b new-feature ../project-new-feature main
31
+
32
+ # Create a worktree from a specific commit or tag
33
+ git worktree add ../project-hotfix v2.1.0
34
+ ```
35
+
36
+ **What happens:** Git creates a new directory (the worktree) that shares the same `.git` repository as your main checkout. Both directories can have different branches checked out, different staged changes, and different working directory state — but they share the same commit history, remotes, and configuration.
37
+
38
+ **Key detail:** In a worktree, `.git` is a file (not a directory) that points back to the main repository's `.git/worktrees/` folder. This is how Git knows the checkouts are linked.
39
+
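You can see the link directly. A self-contained demo in a throwaway repo (assumes git >= 2.28 for `init -b`; the output path is abbreviated):

```shell
# Build a throwaway repo and a worktree, then inspect the worktree's .git
demo=$(mktemp -d) && cd "$demo"
git init -q -b main repo && cd repo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git worktree add -b feature ../repo-feature >/dev/null 2>&1

cat ../repo-feature/.git
# gitdir: .../repo/.git/worktrees/repo-feature
```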
40
+ ### Step 2: Naming Convention
41
+
42
+ Use a consistent naming pattern so worktrees are easy to find and identify:
43
+
44
+ ```
45
+ ../project-branchname # Sibling directory pattern
46
+ ../myapp-fix-auth # Project name + branch purpose
47
+ ../myapp-feature-search # Project name + feature name
48
+ ../myapp-pr-review-142 # Project name + PR number
49
+ ```
50
+
51
+ **Rules:**
52
+ - Keep worktrees as siblings of the main repository (one directory up)
53
+ - Prefix with the project name to avoid confusion when working on multiple projects
54
+ - Include the purpose in the name (not just the branch name)
55
+ - Do not create worktrees inside the main repository directory
56
+
57
+ ### Step 3: Working in Worktrees
58
+
59
+ Each worktree is independent:
60
+
61
+ ```bash
62
+ # Switch to the worktree
63
+ cd ../project-feature-name
64
+
65
+ # Work normally — all git commands work as expected
66
+ git status
67
+ git add .
68
+ git commit -m "feat: implement search"
69
+ git push origin feature-branch
70
+
71
+ # Run project-specific tools
72
+ bun install # Each worktree needs its own node_modules
73
+ bun test # Tests run against this worktree's code
74
+ bun run dev # Dev server runs independently
75
+ ```
76
+
77
+ **What is shared:** Commit history, branches, remotes, tags, git config, hooks.
78
+
79
+ **What is NOT shared:** Working directory, staged changes (index), node_modules, build artifacts, .env files, editor state.
80
+
81
+ ### Step 4: Synchronize Between Worktrees
82
+
83
+ Changes committed in one worktree are immediately visible to the other (they share the same repository):
84
+
85
+ ```bash
86
+ # In worktree A: commit changes
87
+ git commit -m "feat: add user model"
88
+
89
+ # In worktree B: the commit exists (same repo)
90
+ git log --oneline # Shows the commit from worktree A
91
+
92
+ # To get worktree A's changes into worktree B's branch
93
+ git merge feature-a # Or git rebase, git cherry-pick
94
+ ```
95
+
96
+ **Important:** Do not check out the same branch in two worktrees. Git refuses, because two working trees tracking one branch ref would silently fall out of sync with each other. If you need to see the same code, use `git show` or `git diff` instead.
97
+
98
+ ### Step 5: Cleanup
99
+
100
+ ```bash
101
+ # List all worktrees
102
+ git worktree list
103
+
104
+ # Remove a worktree (deletes the directory)
105
+ git worktree remove ../project-feature-name
106
+
107
+ # Clean up stale worktree references (if directory was manually deleted)
108
+ git worktree prune
109
+
110
+ # Force-remove a worktree with uncommitted changes
111
+ git worktree remove --force ../project-feature-name
112
+ ```
113
+
114
+ **When to clean up:**
115
+ - After merging the branch (the worktree served its purpose)
116
+ - After closing the PR you were reviewing
117
+ - After the experiment failed and you want to discard it
118
+ - Regularly — run `git worktree list` weekly to find stale worktrees
119
+
120
+ ## Common Patterns
121
+
122
+ ### Pattern: Parallel Feature Development
123
+
124
+ **Scenario:** You are implementing feature A when a high-priority bug report comes in. You need to fix the bug immediately without losing your feature A context.
125
+
126
+ ```bash
127
+ # You are in the main worktree, mid-feature-A
128
+ # Create a worktree for the hotfix
129
+ git worktree add -b hotfix/auth-bypass ../myapp-hotfix main
130
+
131
+ # Switch to the hotfix worktree
132
+ cd ../myapp-hotfix
133
+ bun install
134
+
135
+ # Fix the bug, test it, commit, push, create PR
136
+ # ...
137
+
138
+ # Return to your feature work — everything is exactly where you left it
139
+ cd ../myapp
140
+ # Continue feature A with full context preserved
141
+ ```
142
+
143
+ **Benefit:** Zero context loss. No stashing, no branch switching, no re-running setup commands. Your feature A terminal, editor, and mental state are all preserved.
144
+
145
+ ### Pattern: Safe Experimentation
146
+
147
+ **Scenario:** You want to try a risky refactor but are not sure it will work. You do not want to pollute your branch with experimental commits.
148
+
149
+ ```bash
150
+ # Create an experimental worktree
151
+ git worktree add -b experiment/new-architecture ../myapp-experiment main
152
+
153
+ cd ../myapp-experiment
154
+ bun install
155
+
156
+ # Experiment freely — nothing affects your main worktree
157
+ # If it works: merge the branch into main
158
+ # If it fails: just remove the worktree
159
+ git worktree remove ../myapp-experiment
160
+ git branch -D experiment/new-architecture
161
+ ```
162
+
163
+ **Benefit:** Zero risk. The experiment is completely isolated. If it fails, cleanup is a single command. No need to `git reset --hard`, no dangling commits, no stash entries to forget about.
164
+
165
+ ### Pattern: PR Review with Full Testing
166
+
167
+ **Scenario:** You need to review a PR and want to run the tests locally, but you do not want to stop your current work.
168
+
169
+ ```bash
170
+ # Fetch the PR branch
171
+ git fetch origin pull/142/head:pr-142
172
+
173
+ # Create a worktree for the review
174
+ git worktree add ../myapp-pr-142 pr-142
175
+
176
+ cd ../myapp-pr-142
177
+ bun install
178
+ bun test
179
+ bun run dev # Test the feature manually
180
+
181
+ # Review complete — clean up
182
+ cd ../myapp
183
+ git worktree remove ../myapp-pr-142
184
+ git branch -D pr-142
185
+ ```
186
+
187
+ **Benefit:** Full local testing of the PR without disrupting your work. You can run the PR's dev server alongside your own.
188
+
189
+ ### Pattern: Comparison Testing
190
+
191
+ **Scenario:** You want to measure the performance impact of a change by comparing before and after.
192
+
193
+ ```bash
194
+ # Worktree A: the branch with your optimization
195
+ git worktree add ../myapp-optimized optimization-branch
196
+
197
+ # Worktree B: the baseline (main branch, already your main worktree)
198
+
199
+ # Run benchmarks in both
200
+ cd ../myapp && bun run benchmark > /tmp/baseline.txt
201
+ cd ../myapp-optimized && bun run benchmark > /tmp/optimized.txt
202
+
203
+ # Compare results
204
+ diff /tmp/baseline.txt /tmp/optimized.txt
205
+ ```
206
+
207
+ **Benefit:** True side-by-side comparison. No "run benchmarks, switch branches, run again, hope nothing changed" workflow.
208
+
209
+ ## Anti-Pattern Catalog
210
+
211
+ ### Anti-Pattern: Too Many Worktrees
212
+
213
+ **What goes wrong:** You create a worktree for every branch and end up with 10+ worktrees. You lose track of which ones are active, which are stale, and which have uncommitted work.
214
+
215
+ **Instead:** Limit yourself to 2-3 active worktrees at most. One for your main work, one for a hotfix or PR review, and optionally one for experimentation. Clean up worktrees as soon as their purpose is served.
216
+
217
+ **Check:** Run `git worktree list` regularly. If you see more than 3 entries, clean up.
218
+
219
+ ### Anti-Pattern: Forgetting to Clean Up
220
+
221
+ **What goes wrong:** Stale worktrees waste disk space (each has its own node_modules, build artifacts, etc.) and create confusion when you stumble upon them weeks later.
222
+
223
+ **Instead:** Clean up immediately after the worktree's purpose is served. Set a reminder if needed. Run `git worktree list` as part of your weekly routine.
224
+
225
+ **Check:** `du -sh ../myapp-*` to see how much space worktrees are consuming.
226
+
227
+ ### Anti-Pattern: Shared Dependencies
228
+
229
+ **What goes wrong:** You assume worktrees share node_modules (they do not). You run the project in a new worktree without installing dependencies and get cryptic errors.
230
+
231
+ **Instead:** Run `bun install` (or the project's dependency install command) in every new worktree. Each worktree has its own dependency tree. This is by design — different branches may have different dependencies.
232
+
233
+ ### Anti-Pattern: Checking Out the Same Branch
234
+
235
+ **What goes wrong:** You try to check out the same branch in two worktrees. Git stops you with an error. You bypass it with `--force`, and the two checkouts silently drift out of sync with the shared branch.
236
+
237
+ **Instead:** Never check out the same branch in two worktrees. If you need to see the same code in two places, use `git show branch:file` or create a copy. If you need to move a branch to a different worktree, first remove the old worktree.
238
+
239
+ ### Anti-Pattern: Worktrees Inside the Main Repo
240
+
241
+ **What goes wrong:** You create worktrees inside the main repository directory (`./worktrees/feature-name`). This confuses tools, IDEs, and sometimes Git itself.
242
+
243
+ **Instead:** Always create worktrees as siblings of the main repository (`../project-feature-name`). This keeps each worktree's directory tree clean and avoids nesting issues.
244
+
245
+ ## Failure Modes
246
+
247
+ ### "fatal: branch is already checked out"
248
+
249
+ **Cause:** You are trying to check out a branch that is already checked out in another worktree. Git prevents this to avoid index corruption.
250
+
251
+ **Fix:** Either: (a) use a different branch, (b) remove the other worktree first with `git worktree remove`, or (c) create a new branch from the desired commit with `git worktree add -b new-name ../path commit`.
252
+
253
+ ### "fatal: path already exists"
254
+
255
+ **Cause:** The target directory for the worktree already exists (from a previous worktree that was not properly cleaned up, or a coincidental name collision).
256
+
257
+ **Fix:** Either: (a) choose a different path, (b) remove the existing directory if it is safe to do so, or (c) run `git worktree prune` if the directory is a stale worktree reference.
258
+
259
+ ### Merge Conflicts When Syncing
260
+
261
+ **Cause:** Both worktrees modified the same files on different branches. When you try to merge or rebase, conflicts arise.
262
+
263
+ **Fix:** Handle like normal Git conflicts. The worktree does not change the conflict resolution process. Resolve in whichever worktree is doing the merge.
264
+
265
+ ### Node Modules Out of Sync
266
+
267
+ **Cause:** You pulled new changes in a worktree but did not re-install dependencies. The lockfile changed but node_modules is stale.
268
+
269
+ **Fix:** Run `bun install` after pulling changes. If issues persist, delete node_modules and reinstall: `rm -rf node_modules && bun install`.
270
+
271
+ ### IDE Not Recognizing Worktree
272
+
273
+ **Cause:** Some IDEs get confused when multiple directories share the same `.git` repository.
274
+
275
+ **Fix:** Open the worktree directory directly (not the parent). Most modern editors (VS Code, Cursor, Zed) handle worktrees correctly when you open the worktree root as the project.
276
+
277
+ ## Quick Reference
278
+
279
+ ```bash
280
+ # Create worktree (existing branch)
281
+ git worktree add ../project-branch branch-name
282
+
283
+ # Create worktree (new branch from base)
284
+ git worktree add -b new-branch ../project-branch base-branch
285
+
286
+ # List all worktrees
287
+ git worktree list
288
+
289
+ # Remove worktree
290
+ git worktree remove ../project-branch
291
+
292
+ # Clean stale references
293
+ git worktree prune
294
+ ```
295
+
296
+ **Remember:** Install dependencies in every new worktree. Clean up when done. Limit to 2-3 active worktrees.