@jgamaraalv/ts-dev-kit 3.1.2 → 3.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -12,7 +12,7 @@
  "name": "ts-dev-kit",
  "source": "./",
  "description": "15 specialized agents and 22 skills for TypeScript fullstack development",
- "version": "3.1.2",
+ "version": "3.2.0",
  "author": {
  "name": "jgamaraalv"
  },
@@ -1,6 +1,6 @@
  {
  "name": "ts-dev-kit",
- "version": "3.1.2",
+ "version": "3.2.0",
  "description": "15 specialized agents and 22 skills for TypeScript fullstack development with Fastify, Next.js, PostgreSQL, Redis, and more.",
  "author": {
  "name": "jgamaraalv",
package/CHANGELOG.md CHANGED
@@ -5,6 +5,26 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+ ## [3.2.0] - 2026-02-27
+
+ ### Added
+
+ - Cross-scope agent name resolution: `/execute-task` and `/debug` dispatch protocols now detect whether agents are registered with a plugin prefix (`ts-dev-kit:agent-name`) and use the correct `subagent_type` automatically
+ - `/yolo` Phase 1.5 — Ensure plugin availability: when ts-dev-kit is installed as a plugin, copies agents, skills, and agent-memory into the project before mounting the devcontainer so they're accessible inside the container
+ - `/codebase-adapter` scope-aware discovery: phase 2 now searches agents and skills across project scope (`.claude/`), plugin scope (`node_modules/`), and personal scope (`~/.claude/`)
+
+ ### Fixed
+
+ - Agent memory paths in all 13 agents now use dynamic resolution (`agent-memory/<name>/` at project root first, fallback to `.claude/agent-memory/<name>/`) instead of a hardcoded `.claude/` prefix
+ - `/core-web-vitals` visualize script path no longer hardcoded to `~/.claude/skills/` — now discovers the correct path across all installation scopes
+
+ ## [3.1.3] - 2026-02-27
+
+ ### Fixed
+
+ - `/yolo` firewall script: revert to strict upstream reference (`set -euo pipefail`, no fallbacks) — remove `exec > >(tee ...)` that caused VS Code to hang with "Unable to resolve resource", remove `|| true` fallbacks that masked failures, remove `| tee` from `postStartCommand`; keep only the `sort -u` dedup fix for duplicate DNS IPs
+ - `/yolo` SKILL.md: document critical anti-patterns (process substitution logging, loose error handling, piped postStartCommand)
+
  ## [3.1.2] - 2026-02-27

  ### Fixed
@@ -88,7 +88,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/accessibility-pro/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/accessibility-pro/` at the project root first, then fall back to `.claude/agent-memory/accessibility-pro/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

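The dynamic lookup order described in this change (project root first, then `.claude/`) can be sketched as a small shell helper. The function name is illustrative, not part of the kit:

```shell
# Resolve an agent's memory directory: prefer agent-memory/<name>/ at the
# project root, fall back to .claude/agent-memory/<name>/. Fails if neither
# exists. (Hypothetical helper for illustration only.)
resolve_memory_dir() {
  name="$1"
  if [ -d "agent-memory/$name" ]; then
    printf '%s\n' "agent-memory/$name"
  elif [ -d ".claude/agent-memory/$name" ]; then
    printf '%s\n' ".claude/agent-memory/$name"
  else
    return 1
  fi
}
```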
@@ -58,7 +58,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/api-builder/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/api-builder/` at the project root first, then fall back to `.claude/agent-memory/api-builder/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -70,7 +70,7 @@ APPROVE / REQUEST CHANGES / NEEDS DISCUSSION
  </output_format>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/code-reviewer/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/code-reviewer/` at the project root first, then fall back to `.claude/agent-memory/code-reviewer/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -69,7 +69,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/database-expert/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/database-expert/` at the project root first, then fall back to `.claude/agent-memory/database-expert/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -119,7 +119,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/debugger/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/debugger/` at the project root first, then fall back to `.claude/agent-memory/debugger/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -43,7 +43,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/docker-expert/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/docker-expert/` at the project root first, then fall back to `.claude/agent-memory/docker-expert/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -101,7 +101,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/performance-engineer/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/performance-engineer/` at the project root first, then fall back to `.claude/agent-memory/performance-engineer/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -90,7 +90,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/playwright-expert/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/playwright-expert/` at the project root first, then fall back to `.claude/agent-memory/playwright-expert/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -69,7 +69,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/react-specialist/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/react-specialist/` at the project root first, then fall back to `.claude/agent-memory/react-specialist/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -71,7 +71,7 @@ If implementing fixes, run the project's standard quality checks for every packa
  </quality_gates>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/security-scanner/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/security-scanner/` at the project root first, then fall back to `.claude/agent-memory/security-scanner/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -115,7 +115,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/test-generator/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/test-generator/` at the project root first, then fall back to `.claude/agent-memory/test-generator/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -110,7 +110,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/typescript-pro/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/typescript-pro/` at the project root first, then fall back to `.claude/agent-memory/typescript-pro/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

@@ -66,7 +66,7 @@ Report when done:
  </output>

  <agent-memory>
- You have a persistent memory directory at `.claude/agent-memory/ux-optimizer/`. Its contents persist across conversations.
+ You have a persistent memory directory. Its contents persist across conversations. To find it, look for `agent-memory/ux-optimizer/` at the project root first, then fall back to `.claude/agent-memory/ux-optimizer/`. Use whichever path exists.

  As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your agent memory for relevant notes — and if nothing is written yet, record what you learned.

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@jgamaraalv/ts-dev-kit",
- "version": "3.1.2",
+ "version": "3.2.0",
  "description": "Claude Code plugin: 15 agents + 22 skills for TypeScript fullstack development",
  "author": "jgamaraalv",
  "license": "MIT",
@@ -19,7 +19,7 @@ Working directory: !`pwd`

  Lockfile detected: !`ls bun.lock pnpm-lock.yaml yarn.lock package-lock.json 2>/dev/null | head -1 || echo "none"`

- Agents installed: !`ls .claude/agents/ 2>/dev/null | tr '\n' ' ' || echo "(not found)"`
+ Agents installed: !`ls .claude/agents/ 2>/dev/null | tr '\n' ' ' || ls agents/ 2>/dev/null | tr '\n' ' ' || echo "(not found)"`

  MCP servers configured: !`python3 -c "import json; s=json.load(open('.claude/settings.json')); print(', '.join(s.get('mcpServers',{}).keys()) or '(none)')" 2>/dev/null || echo "(not found)"`

@@ -91,9 +91,17 @@ Discover with Read, Glob, Grep — verify everything, assume nothing.
  - typecheck, lint, test, build command names
  - Compose the full run command per workspace (e.g., `pnpm --filter @acme/api typecheck`)

- **Available skills** — list directories in `[plugin-root]/skills/`
-
- **Available agents** list files in `[plugin-root]/.claude/agents/`
+ **Available skills** — search across all scopes in order:
+ 1. `[plugin-root]/skills/` (plugin or project-local)
+ 2. `.claude/skills/` in the project root (project scope)
+ 3. `~/.claude/skills/` (personal scope)
+ Merge and deduplicate — plugin-root takes priority.
+
+ **Available agents** — search across all scopes in order:
+ 1. `[plugin-root]/agents/` and `[plugin-root]/.claude/agents/` (plugin or project-local)
+ 2. `.claude/agents/` in the project root (project scope)
+ 3. `~/.claude/agents/` (personal scope)
+ Merge and deduplicate — plugin-root takes priority.

  **Available MCPs** — read `.claude/settings.json` in project root (and `~/.claude/settings.json` as fallback). Extract `mcpServers` keys.
  </phase_2_project_discovery>
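The merge-and-deduplicate rule this hunk adds (first scope wins, plugin-root first) can be sketched in shell. The function name and directory layout are illustrative assumptions, not part of the kit:

```shell
# List entry names across scope directories, in priority order; the first
# scope that defines a name wins. (Hypothetical helper for illustration.)
list_skills() {
  for scope in "$@"; do
    [ -d "$scope" ] && ls "$scope"
  done | awk '!seen[$0]++'   # deduplicate, keeping the first occurrence
}
```

Called as `list_skills plugin-root/skills .claude/skills ~/.claude/skills`, a skill present in both plugin and project scope is reported once, from the plugin.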
@@ -75,18 +75,25 @@ correlates with INP but does not replace field measurement.

  When the user provides metric values or a Lighthouse JSON file, generate an interactive HTML report and open it in the browser:

+ To locate the script, find `scripts/visualize.py` relative to this skill's directory. The path depends on how ts-dev-kit is installed:
+ - **Project scope**: `skills/core-web-vitals/scripts/visualize.py` or `.claude/skills/core-web-vitals/scripts/visualize.py`
+ - **Personal scope**: `~/.claude/skills/core-web-vitals/scripts/visualize.py`
+ - **Plugin scope**: resolve via `node_modules/@jgamaraalv/ts-dev-kit/skills/core-web-vitals/scripts/visualize.py`
+
+ Use `find` or `ls` to discover the actual path, then run:
+
  ```bash
- # From manual values
- python3 ~/.claude/skills/core-web-vitals/scripts/visualize.py \
+ # From manual values (replace SCRIPT_PATH with the discovered scripts/ directory)
+ python3 SCRIPT_PATH/visualize.py \
    --lcp 2.1 --inp 180 --cls 0.05 \
    --url https://example.com

  # From a Lighthouse JSON output
- python3 ~/.claude/skills/core-web-vitals/scripts/visualize.py \
+ python3 SCRIPT_PATH/visualize.py \
    --lighthouse lighthouse-report.json

  # Custom output path, no auto-open
- python3 ~/.claude/skills/core-web-vitals/scripts/visualize.py \
+ python3 SCRIPT_PATH/visualize.py \
    --lcp 3.8 --inp 420 --cls 0.12 \
    --output cwv-report.html --no-open
  ```
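The scope-ordered discovery this hunk describes can be sketched as a probe over the candidate paths. The helper name is hypothetical; the candidate list mirrors the scopes named above:

```shell
# Probe candidate locations in scope order; print the first existing path.
# (Illustrative sketch — the skill itself just says to use find or ls.)
find_visualize() {
  for candidate in \
    "skills/core-web-vitals/scripts/visualize.py" \
    ".claude/skills/core-web-vitals/scripts/visualize.py" \
    "$HOME/.claude/skills/core-web-vitals/scripts/visualize.py" \
    "node_modules/@jgamaraalv/ts-dev-kit/skills/core-web-vitals/scripts/visualize.py"; do
    if [ -f "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1
}
```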
@@ -72,7 +72,7 @@ After investigation, dispatch the appropriate **specialist agent** (not necessar
  [Specific changes needed — be precise about expected behavior]
  ```

- Use the agent type that matches the fix domain:
+ Use the agent type that matches the fix domain. If ts-dev-kit is installed as a plugin, use the prefixed name (e.g., `ts-dev-kit:api-builder`). Check which agents are available in your context and use the exact registered name:
  - API route fix -> `api-builder` (preloads fastify-best-practices)
  - Database/query fix -> `database-expert` (preloads drizzle-pg, postgresql)
  - Component fix -> `react-specialist` (preloads react-best-practices, composition-patterns)
@@ -84,6 +84,8 @@ Use the agent type that matches the fix domain:

  ## Role-specific prompts

+ > **Note:** All agent types below may be prefixed with `ts-dev-kit:` when the plugin is installed in plugin scope (e.g., `ts-dev-kit:debugger`). Check the available agents in your context and use the exact registered name.
+
  ### Backend debugger

  **Agent type**: `debugger`
@@ -166,9 +168,10 @@ Verification focus:
  # Dispatch: MULTI-LAYER

  # Wave 1: Parallel investigation
+ # Use "debugger" or "ts-dev-kit:debugger" depending on scope
  Task(
  description: "Debug resource creation API endpoint",
- subagent_type: "debugger",
+ subagent_type: "debugger", // or "ts-dev-kit:debugger" if plugin-scoped
  model: "sonnet",
  prompt: """
  ## Bug description
@@ -195,7 +198,7 @@ Returns 200, but GET /api/<resource> returns empty array.

  Task(
  description: "Debug resource list page",
- subagent_type: "debugger",
+ subagent_type: "debugger", // or "ts-dev-kit:debugger" if plugin-scoped
  model: "sonnet",
  prompt: """
  ## Bug description
@@ -219,12 +222,12 @@ stale data.
  )

  # Wave 2: Dispatch fixes using specialist agents matching the fix domain
- # e.g., api-builder for API fix, react-specialist for frontend fix
+ # e.g., api-builder (or ts-dev-kit:api-builder) for API fix

  # Wave 3: E2E verification
  Task(
  description: "Verify resource creation flow",
- subagent_type: "playwright-expert",
+ subagent_type: "playwright-expert", // or "ts-dev-kit:playwright-expert"
  model: "haiku",
  prompt: """
  ## Your task
@@ -176,11 +176,11 @@ Do not decompose if:
  Each role gets its own persona, skill set, context files, and success criteria.

  <example>
- Role A: Database specialist (sub-area: Database). Agent: database-expert. Task: design schema + migration for the new feature. Skills: drizzle-pg, postgresql.
- Role B: API endpoint developer (sub-area: Endpoints). Agent: api-builder. Task: build REST routes consuming the new schema. Skills: fastify-best-practices.
- Role C: Component architect (sub-area: Components). Agent: react-specialist. Task: build the result card and list components. Skills: react-best-practices, composition-patterns.
+ Role A: Database specialist (sub-area: Database). Agent: database-expert (or ts-dev-kit:database-expert if plugin-scoped). Task: design schema + migration for the new feature. Skills: drizzle-pg, postgresql.
+ Role B: API endpoint developer (sub-area: Endpoints). Agent: api-builder (or ts-dev-kit:api-builder). Task: build REST routes consuming the new schema. Skills: fastify-best-practices.
+ Role C: Component architect (sub-area: Components). Agent: react-specialist (or ts-dev-kit:react-specialist). Task: build the result card and list components. Skills: react-best-practices, composition-patterns.
  Role D: Page builder (sub-area: Pages/routing). Agent: general-purpose (ad-hoc). Task: wire components into the search results page with data fetching. Skills: nextjs-best-practices.
- Role E: TypeScript library developer. Agent: typescript-pro. Task: add shared schemas and types. Skills: none extra.
+ Role E: TypeScript library developer. Agent: typescript-pro (or ts-dev-kit:typescript-pro). Task: add shared schemas and types. Skills: none extra.
  </example>
  </rule_1_define_roles_independently>

@@ -188,8 +188,11 @@ Role E: TypeScript library developer. Agent: typescript-pro. Task: add shared sc
  For each role, spawn a specialized subagent via the Task tool.

  **Selecting the agent type:**
- 1. Check if a project agent exists in `.claude/agents/` that matches the sub-area (see domain_areas table above for the mapping).
- 2. If a matching agent exists, use its name as `subagent_type` (e.g., `database-expert`, `api-builder`, `react-specialist`).
+ 1. **Resolve the agent name for the current scope.** Agents may be registered under different names depending on how ts-dev-kit is installed:
+ - **Project scope** (`.claude/agents/`): short name, e.g., `database-expert`
+ - **Plugin scope**: prefixed with the plugin namespace, e.g., `ts-dev-kit:database-expert`
+ Before dispatching, check which agents are available in your context. If you see agents with a `ts-dev-kit:` prefix, use the prefixed name as `subagent_type`. If agents are available by short name, use the short name.
+ 2. If a matching agent exists (in any scope), use its resolved name as `subagent_type`.
  3. If no matching agent exists, use `general-purpose` as `subagent_type` and embed the full role definition directly in the prompt — this creates an ad-hoc specialist without needing a .md file.

  **Ad-hoc agent creation** — when `subagent_type: "general-purpose"` is used as a specialist surrogate, the prompt must include:
@@ -44,6 +44,8 @@ Agents with preloaded skills (via `skills` frontmatter) do NOT need Skill() call
  | typescript-pro | (none) | — |
  | playwright-expert | (none) | — |

+ > **Note:** When ts-dev-kit is installed as a plugin, agent names are prefixed with the plugin namespace (e.g., `ts-dev-kit:api-builder` instead of `api-builder`). Always check the available agents in your context and use the full registered name as `subagent_type`.
+
  ### Agent tool restrictions

  | Agent | Restriction |
@@ -68,9 +70,11 @@ All agents default to `sonnet`. Override with the Task tool's `model` parameter
  ## Dispatch example

  ```
+ // Use the agent name as registered in your context.
+ // Project-scoped: "api-builder". Plugin-scoped: "ts-dev-kit:api-builder".
  Task(
  description: "Build resource API routes",
- subagent_type: "api-builder",
+ subagent_type: "api-builder", // or "ts-dev-kit:api-builder" if plugin-scoped
  model: "sonnet",
  prompt: """
  ## Your task
@@ -97,7 +101,10 @@ Discover from the codebase:

  Before dispatching, resolve the agent type for each role:

- 1. **Check `.claude/agents/`** if a project agent matches the sub-area from the domain mapping (e.g., `database-expert` for DB work, `api-builder` for endpoints, `react-specialist` for components), use it as `subagent_type`.
+ 1. **Resolve the agent name for the current scope.** Agents may be registered with a plugin prefix depending on how ts-dev-kit is installed:
+ - **Project scope** (`.claude/agents/`): short name — e.g., `api-builder`
+ - **Plugin scope**: prefixed — e.g., `ts-dev-kit:api-builder`
+ Check which agents are available in your context and use the exact registered name as `subagent_type`.
  2. **Use a built-in agent type** — if the Task tool has a matching built-in type (e.g., `typescript-pro`, `performance-engineer`, `security-scanner`), use it directly.
  3. **Create an ad-hoc specialist** — if no existing agent matches, use `subagent_type: "general-purpose"` and embed the full specialist definition in the prompt.

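The scope-based resolution above can be sketched as a shell helper. The helper, its file-system probes, and the fallback to `general-purpose` are illustrative assumptions (the real check happens against the agents visible in the Task tool's context), not part of the kit:

```shell
# Derive a plausible subagent_type for a short agent name by probing scopes:
# project scope wins, then a plugin install implies the ts-dev-kit: prefix,
# else fall back to general-purpose. (Hypothetical sketch only.)
resolve_agent_name() {
  short="$1"
  if [ -f ".claude/agents/$short.md" ]; then
    printf '%s\n' "$short"                  # project scope: short name
  elif [ -d "node_modules/@jgamaraalv/ts-dev-kit/agents" ]; then
    printf 'ts-dev-kit:%s\n' "$short"       # plugin scope: prefixed name
  else
    printf 'general-purpose\n'              # no specialist found
  fi
}
```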
@@ -48,8 +48,11 @@ Follow this decision tree strictly. Each phase gates the next.
  START

  ├─ Phase 1: Detect devcontainer
- │ ├─ EXISTS → Phase 2
- │ └─ MISSING → Phase 5 (install) → Phase 2
+ │ ├─ EXISTS → Phase 1.5
+ │ └─ MISSING → Phase 5 (install) → Phase 1.5
+
+ ├─ Phase 1.5: Ensure plugin availability
+ │ └─ Copy agents/skills/agent-memory into project if missing → Phase 2

  ├─ Phase 2: Check Docker
  │ ├─ RUNNING → Phase 3
@@ -62,7 +65,7 @@ START
  │ └─ Start Docker Desktop/daemon → Phase 3

  └─ Phase 5: Install devcontainer
- └─ Scaffold .devcontainer/ → Phase 2
+ └─ Scaffold .devcontainer/ → Phase 1.5
  ```

  <phase_1_detect>
@@ -72,9 +75,9 @@ START
  Check whether `.devcontainer/devcontainer.json` exists in the project root.

  1. Read the `<live_context>` above for the pre-injected detection result.
- 2. If **YES** — announce to the user and proceed to Phase 2:
+ 2. If **YES** — announce to the user and proceed to Phase 1.5:

- > **DEVCONTAINER DETECTED** — `.devcontainer/` found. Checking Docker status...
+ > **DEVCONTAINER DETECTED** — `.devcontainer/` found. Checking plugin availability...

  3. If **NO** — announce and proceed to Phase 5:

@@ -82,6 +85,89 @@ Check whether `.devcontainer/devcontainer.json` exists in the project root.

  </phase_1_detect>

+ <phase_1_5_ensure_plugin>
+
+ ## Phase 1.5 — Ensure plugin availability
+
+ The devcontainer mounts only the project directory at `/workspace`. If ts-dev-kit is installed as a plugin (outside the project), its agents, skills, and agent-memory won't be available inside the container. This phase copies them into the project so they're accessible.
+
+ ### Step 1 — Check if agents and skills already exist in the project
+
+ ```bash
+ ls .claude/agents/*.md 2>/dev/null && echo "AGENTS_PRESENT" || echo "AGENTS_MISSING"
+ ls .claude/skills/*/SKILL.md 2>/dev/null && echo "SKILLS_PRESENT" || echo "SKILLS_MISSING"
+ ```
+
+ If both are present, skip to Phase 2:
+
+ > **PLUGIN FILES PRESENT** — agents and skills found in project. Checking Docker status...
+
+ ### Step 2 — Locate the plugin source
+
+ If agents or skills are missing, search for the plugin in these locations (in order):
+
+ 1. `node_modules/@jgamaraalv/ts-dev-kit/` (npm-installed plugin)
+ 2. The directory containing this skill file (if invoked from a known path)
+
+ ```bash
+ # Try npm location
+ PLUGIN_SRC="node_modules/@jgamaraalv/ts-dev-kit"
+ if [ ! -d "$PLUGIN_SRC/agents" ]; then
+ echo "Plugin not found in node_modules"
+ PLUGIN_SRC=""
+ fi
+ ```
+
+ If the plugin source cannot be found, inform the user and skip Step 3, proceeding to Phase 2 (the devcontainer will work but without ts-dev-kit agents):
+
+ > **PLUGIN NOT FOUND** — Could not locate ts-dev-kit plugin files. The devcontainer will work but agents and skills won't be available. Install the plugin with `npm install -D @jgamaraalv/ts-dev-kit` to fix this.
+
+ ### Step 3 — Copy plugin files into the project
+
+ ```bash
+ # Copy agents
+ mkdir -p .claude/agents
+ cp "$PLUGIN_SRC"/agents/*.md .claude/agents/
+
+ # Copy skill directories, skipping any that already exist
+ mkdir -p .claude/skills
+ for skill_dir in "$PLUGIN_SRC"/skills/*/; do
+ skill_name=$(basename "$skill_dir")
+ if [ ! -e ".claude/skills/$skill_name" ]; then
+ cp -r "$skill_dir" ".claude/skills/$skill_name"
+ fi
+ done
+
+ # Copy agent-memory scaffolds (don't overwrite existing memories)
+ if [ -d "$PLUGIN_SRC/agent-memory" ]; then
+ for mem_dir in "$PLUGIN_SRC"/agent-memory/*/; do
+ mem_name=$(basename "$mem_dir")
+ if [ ! -d "agent-memory/$mem_name" ]; then
+ mkdir -p "agent-memory/$mem_name"
+ cp -n "$mem_dir"* "agent-memory/$mem_name/" 2>/dev/null || true
+ fi
+ done
+ fi
+ ```
+
+ ### Step 4 — Suggest .gitignore update
+
+ Check if these copied directories are already in `.gitignore`. If not, inform the user:
+
+ > **PLUGIN FILES COPIED** — Agents, skills, and agent-memory copied into the project for devcontainer access.
+ >
+ > Consider adding these to `.gitignore` if you don't want to commit them:
+ > ```
+ > # ts-dev-kit plugin files (copied for devcontainer)
+ > .claude/agents/
+ > .claude/skills/
+ > agent-memory/
+ > ```
+
+ Proceed to Phase 2.
+
+ </phase_1_5_ensure_plugin>
+
  <phase_2_check_docker>

  ## Phase 2 — Check Docker
@@ -252,14 +338,23 @@ Write the file using the reference firewall script. See [references/init-firewal

  Key security features:

+ - `set -euo pipefail` — exits immediately on any error (strict mode)
  - Default-deny policy (DROP all INPUT, OUTPUT, FORWARD)
- - Whitelisted outbound only: npm registry, GitHub (dynamic IP fetch), Claude API, Sentry, StatsIG, VS Code marketplace
+ - Whitelisted outbound only: npm registry, GitHub (dynamic IP fetch via `api.github.com/meta`), Claude API, Sentry, StatsIG, VS Code marketplace
  - DNS and SSH allowed
  - Localhost and host network allowed
+ - Docker DNS rules preserved before flushing
+ - Uses `ipset hash:net` for efficient CIDR matching (required — do NOT add fallback logic)
  - Startup verification: confirms `example.com` is blocked and `api.github.com` is reachable
- - Docker DNS rules preserved
- - Falls back to iptables-only rules if `ipset` kernel module is unavailable (common on Docker Desktop for Mac)
- - Full diagnostic log at `/tmp/firewall-init.log` for troubleshooting
+ - Exits with error if any domain fails to resolve or any IP range is invalid
+
+ **Critical anti-patterns to avoid in this script:**
+ - Do NOT use `exec > >(tee ...)` for logging — process substitutions run via `sudo` never receive EOF, causing the script to hang indefinitely and VS Code to fail with "Unable to resolve resource"
+ - Do NOT remove the `-e` flag from `set -euo pipefail` — silent error handling masks failures
+ - Do NOT add `|| true` fallbacks for core iptables/ipset commands
+
+ **Critical anti-pattern to avoid in `devcontainer.json`:**
+ - Do NOT pipe `postStartCommand` through `tee` (e.g., `... | tee /tmp/firewall-init.log`) — this hides the script's real exit code and creates redundant I/O. The correct value is simply: `"postStartCommand": "sudo /usr/local/bin/init-firewall.sh"`

  ### Step 5 — Add `.devcontainer` to `.gitignore` (optional)

@@ -279,9 +374,9 @@ cat .devcontainer/devcontainer.json | head -5

  Announce completion:

- > **DEVCONTAINER INSTALLED** — `.devcontainer/` is ready with Dockerfile, firewall script, and configuration. Proceeding to check Docker...
+ > **DEVCONTAINER INSTALLED** — `.devcontainer/` is ready with Dockerfile, firewall script, and configuration. Checking plugin availability...

- Return to Phase 2.
+ Return to Phase 1.5.

  </phase_5_install_devcontainer>

@@ -57,7 +57,7 @@ Write this file to `.devcontainer/devcontainer.json`:
57
57
  },
58
58
  "workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,consistency=delegated",
59
59
  "workspaceFolder": "/workspace",
60
- "postStartCommand": "sudo /usr/local/bin/init-firewall.sh 2>&1 | tee /tmp/firewall-init.log",
60
+ "postStartCommand": "sudo /usr/local/bin/init-firewall.sh",
61
61
  "waitFor": "postStartCommand"
62
62
  }
63
63
  ```
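The exit-code masking behind this change is easy to reproduce in any POSIX shell: without `pipefail`, a pipeline's exit status is that of its last command, so a failing script piped through `tee` reports success. A quick illustration (not part of the package):

```shell
# `tee` succeeds even when the command feeding it fails, so the pipeline
# as a whole reports exit status 0 and the failure is hidden.
piped=$(sh -c 'false | tee /dev/null; echo $?')
echo "exit status seen through tee: $piped"    # 0

# With the pipe removed (the 3.2.0 form), the real status is visible.
direct=$(sh -c 'false; echo $?')
echo "exit status without tee: $direct"        # 1
```

This is exactly why `waitFor: "postStartCommand"` only fails the container start reliably once the `| tee` is dropped.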
@@ -4,279 +4,162 @@ Write this file to `.devcontainer/init-firewall.sh`:
 
  ```bash
  #!/bin/bash
- set -uo pipefail # Strict vars and pipelines, but handle errors manually
- IFS=$'\n\t'
-
- LOG="/tmp/firewall-init.log"
- exec > >(tee -a "$LOG") 2>&1
- echo "=== Firewall init started at $(date -u) ==="
-
- # --- Diagnostics -----------------------------------------------------------
- echo "--- Pre-flight checks ---"
- HAVE_IPTABLES=false
- HAVE_IPSET=false
-
- if iptables -L -n >/dev/null 2>&1; then
- HAVE_IPTABLES=true
- echo "[OK] iptables is functional"
- else
- echo "[FAIL] iptables is NOT functional — firewall cannot be configured"
- echo "Hint: ensure --cap-add=NET_ADMIN is set in devcontainer.json runArgs"
- exit 1
- fi
-
- if ipset list >/dev/null 2>&1 || ipset create _test hash:net 2>/dev/null; then
- ipset destroy _test 2>/dev/null || true
- HAVE_IPSET=true
- echo "[OK] ipset is functional"
- else
- echo "[WARN] ipset is NOT functional — falling back to iptables-only rules"
- fi
-
- if command -v dig >/dev/null 2>&1; then
- echo "[OK] dig is available"
- else
- echo "[WARN] dig not found — DNS resolution will use getent"
- fi
-
- if command -v aggregate >/dev/null 2>&1; then
- echo "[OK] aggregate is available"
- else
- echo "[WARN] aggregate not found — GitHub CIDRs will be added individually"
- fi
-
- # --- Helper: resolve domain to IPs ----------------------------------------
- resolve_domain() {
- local domain="$1"
- local ips=""
- if command -v dig >/dev/null 2>&1; then
- ips=$(dig +noall +answer +short A "$domain" 2>/dev/null | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | sort -u)
- fi
- if [ -z "$ips" ] && command -v getent >/dev/null 2>&1; then
- ips=$(getent ahostsv4 "$domain" 2>/dev/null | awk '{print $1}' | sort -u | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$')
- fi
- echo "$ips"
- }
-
- # --- 1. Preserve Docker DNS -----------------------------------------------
- echo "--- Preserving Docker DNS rules ---"
- DOCKER_DNS_RULES=$(iptables-save -t nat 2>/dev/null | grep "127\.0\.0\.11" || true)
-
- # --- 2. Flush existing rules -----------------------------------------------
- echo "--- Flushing existing rules ---"
- iptables -F || true
- iptables -X || true
- iptables -t nat -F || true
- iptables -t nat -X || true
- iptables -t mangle -F || true
- iptables -t mangle -X || true
- if [ "$HAVE_IPSET" = true ]; then
- ipset destroy allowed-domains 2>/dev/null || true
- fi
-
- # --- 3. Restore Docker DNS -------------------------------------------------
+ set -euo pipefail # Exit on error, undefined vars, and pipeline failures
+ IFS=$'\n\t' # Stricter word splitting
+
+ # 1. Extract Docker DNS info BEFORE any flushing
+ DOCKER_DNS_RULES=$(iptables-save -t nat | grep "127\.0\.0\.11" || true)
+
+ # Flush existing rules and delete existing ipsets
+ iptables -F
+ iptables -X
+ iptables -t nat -F
+ iptables -t nat -X
+ iptables -t mangle -F
+ iptables -t mangle -X
+ ipset destroy allowed-domains 2>/dev/null || true
+
+ # 2. Selectively restore ONLY internal Docker DNS resolution
  if [ -n "$DOCKER_DNS_RULES" ]; then
  echo "Restoring Docker DNS rules..."
  iptables -t nat -N DOCKER_OUTPUT 2>/dev/null || true
  iptables -t nat -N DOCKER_POSTROUTING 2>/dev/null || true
- echo "$DOCKER_DNS_RULES" | while read -r rule; do
- iptables -t nat $rule 2>/dev/null || true
- done
+ echo "$DOCKER_DNS_RULES" | xargs -L 1 iptables -t nat
  else
  echo "No Docker DNS rules to restore"
  fi
 
- # --- 4. Base rules (DNS, SSH, localhost) ------------------------------------
- echo "--- Setting base rules ---"
- # Outbound DNS
+ # First allow DNS and localhost before any restrictions
+ # Allow outbound DNS
  iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
- iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
- # Inbound DNS responses
+ # Allow inbound DNS responses
  iptables -A INPUT -p udp --sport 53 -j ACCEPT
- iptables -A INPUT -p tcp --sport 53 -j ACCEPT
- # Outbound SSH
+ # Allow outbound SSH
  iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
+ # Allow inbound SSH responses
  iptables -A INPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
- # Localhost
+ # Allow localhost
  iptables -A INPUT -i lo -j ACCEPT
  iptables -A OUTPUT -o lo -j ACCEPT
 
- # --- 5. Host network -------------------------------------------------------
- HOST_IP=$(ip route 2>/dev/null | grep default | head -1 | awk '{print $3}')
- if [ -n "$HOST_IP" ]; then
- HOST_NETWORK=$(echo "$HOST_IP" | sed "s/\.[0-9]*$/.0\/16/")
- echo "Host network: $HOST_NETWORK (via $HOST_IP)"
- iptables -A INPUT -s "$HOST_NETWORK" -j ACCEPT
- iptables -A OUTPUT -d "$HOST_NETWORK" -j ACCEPT
- else
- echo "[WARN] Could not detect host IP — allowing RFC1918 ranges for Docker connectivity"
- iptables -A INPUT -s 172.16.0.0/12 -j ACCEPT
- iptables -A OUTPUT -d 172.16.0.0/12 -j ACCEPT
- iptables -A INPUT -s 192.168.0.0/16 -j ACCEPT
- iptables -A OUTPUT -d 192.168.0.0/16 -j ACCEPT
- iptables -A INPUT -s 10.0.0.0/8 -j ACCEPT
- iptables -A OUTPUT -d 10.0.0.0/8 -j ACCEPT
- fi
+ # Create ipset with CIDR support
+ ipset create allowed-domains hash:net
 
- # --- 6. Allowed domains ----------------------------------------------------
- ALLOWED_DOMAINS=(
- "registry.npmjs.org"
- "api.anthropic.com"
- "sentry.io"
- "statsig.anthropic.com"
- "statsig.com"
- "marketplace.visualstudio.com"
- "vscode.blob.core.windows.net"
- "update.code.visualstudio.com"
- )
+ # Fetch GitHub meta information and aggregate + add their IP ranges
+ echo "Fetching GitHub IP ranges..."
+ gh_ranges=$(curl -s https://api.github.com/meta)
+ if [ -z "$gh_ranges" ]; then
+ echo "ERROR: Failed to fetch GitHub IP ranges"
+ exit 1
+ fi
 
- if [ "$HAVE_IPSET" = true ]; then
- # ---- ipset mode (preferred) ----
- echo "--- Building ipset allowlist ---"
- ipset create allowed-domains hash:net
+ if ! echo "$gh_ranges" | jq -e '.web and .api and .git' >/dev/null; then
+ echo "ERROR: GitHub API response missing required fields"
+ exit 1
+ fi
 
- # GitHub dynamic IPs
- echo "Fetching GitHub IP ranges..."
- gh_ranges=$(curl -sf --connect-timeout 10 https://api.github.com/meta 2>/dev/null || true)
- if [ -n "$gh_ranges" ] && echo "$gh_ranges" | jq -e '.web and .api and .git' >/dev/null 2>&1; then
- if command -v aggregate >/dev/null 2>&1; then
- while read -r cidr; do
- [[ "$cidr" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+$ ]] && ipset add allowed-domains "$cidr" 2>/dev/null || true
- done < <(echo "$gh_ranges" | jq -r '(.web + .api + .git)[]' | aggregate -q 2>/dev/null)
- else
- while read -r cidr; do
- [[ "$cidr" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+$ ]] && ipset add allowed-domains "$cidr" 2>/dev/null || true
- done < <(echo "$gh_ranges" | jq -r '(.web + .api + .git)[]')
- fi
- echo "[OK] GitHub IP ranges added"
- else
- echo "[WARN] Could not fetch GitHub IPs — resolving github.com directly"
- for gh_domain in "github.com" "api.github.com" "raw.githubusercontent.com" "objects.githubusercontent.com"; do
- ips=$(resolve_domain "$gh_domain")
- while read -r ip; do
- [ -n "$ip" ] && ipset add allowed-domains "$ip" 2>/dev/null || true
- done <<< "$ips"
- done
+ echo "Processing GitHub IPs..."
+ while read -r cidr; do
+ if [[ ! "$cidr" =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/[0-9]{1,2}$ ]]; then
+ echo "ERROR: Invalid CIDR range from GitHub meta: $cidr"
+ exit 1
  fi
-
- # Other allowed domains
- for domain in "${ALLOWED_DOMAINS[@]}"; do
- echo "Resolving $domain..."
- ips=$(resolve_domain "$domain")
- if [ -z "$ips" ]; then
- echo "[WARN] Failed to resolve $domain — skipping"
- continue
- fi
- while read -r ip; do
- [ -n "$ip" ] && ipset add allowed-domains "$ip" 2>/dev/null || true
- done <<< "$ips"
- done
-
- # Apply ipset match rule
- iptables -A OUTPUT -m set --match-set allowed-domains dst -j ACCEPT
-
- else
- # ---- iptables-only mode (fallback) ----
- echo "--- Building iptables allowlist (no ipset) ---"
-
- # GitHub IPs
- echo "Fetching GitHub IP ranges..."
- gh_ranges=$(curl -sf --connect-timeout 10 https://api.github.com/meta 2>/dev/null || true)
- if [ -n "$gh_ranges" ] && echo "$gh_ranges" | jq -e '.web and .api and .git' >/dev/null 2>&1; then
- while read -r cidr; do
- [[ "$cidr" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+$ ]] && \
- iptables -A OUTPUT -d "$cidr" -j ACCEPT 2>/dev/null || true
- done < <(echo "$gh_ranges" | jq -r '(.web + .api + .git)[]')
- echo "[OK] GitHub IP ranges added via iptables"
- else
- echo "[WARN] Could not fetch GitHub IPs — resolving github.com directly"
- for gh_domain in "github.com" "api.github.com" "raw.githubusercontent.com" "objects.githubusercontent.com"; do
- ips=$(resolve_domain "$gh_domain")
- while read -r ip; do
- [ -n "$ip" ] && iptables -A OUTPUT -d "$ip" -j ACCEPT 2>/dev/null || true
- done <<< "$ips"
- done
+ echo "Adding GitHub range $cidr"
+ ipset add allowed-domains "$cidr"
+ done < <(echo "$gh_ranges" | jq -r '(.web + .api + .git)[]' | sort -u | aggregate -q)
+
+ # Resolve and add other allowed domains
+ for domain in \
+ "registry.npmjs.org" \
+ "api.anthropic.com" \
+ "sentry.io" \
+ "statsig.anthropic.com" \
+ "statsig.com" \
+ "marketplace.visualstudio.com" \
+ "vscode.blob.core.windows.net" \
+ "update.code.visualstudio.com"; do
+ echo "Resolving $domain..."
+ ips=$(dig +noall +answer A "$domain" | awk '$4 == "A" {print $5}' | sort -u)
+ if [ -z "$ips" ]; then
+ echo "ERROR: Failed to resolve $domain"
+ exit 1
  fi
 
- # Other allowed domains
- for domain in "${ALLOWED_DOMAINS[@]}"; do
- echo "Resolving $domain..."
- ips=$(resolve_domain "$domain")
- if [ -z "$ips" ]; then
- echo "[WARN] Failed to resolve $domain — skipping"
- continue
+ while read -r ip; do
+ if [[ ! "$ip" =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
+ echo "ERROR: Invalid IP from DNS for $domain: $ip"
+ exit 1
  fi
- while read -r ip; do
- [ -n "$ip" ] && iptables -A OUTPUT -d "$ip" -j ACCEPT 2>/dev/null || true
- done <<< "$ips"
- done
+ echo "Adding $ip for $domain"
+ ipset add allowed-domains "$ip"
+ done < <(echo "$ips")
+ done
+
+ # Get host IP from default route
+ HOST_IP=$(ip route | grep default | cut -d" " -f3)
+ if [ -z "$HOST_IP" ]; then
+ echo "ERROR: Failed to detect host IP"
+ exit 1
  fi
 
- # --- 7. Default-deny policies ----------------------------------------------
- echo "--- Applying default-deny policies ---"
+ HOST_NETWORK=$(echo "$HOST_IP" | sed "s/\.[0-9]*$/.0\/24/")
+ echo "Host network detected as: $HOST_NETWORK"
+
+ # Set up remaining iptables rules
+ iptables -A INPUT -s "$HOST_NETWORK" -j ACCEPT
+ iptables -A OUTPUT -d "$HOST_NETWORK" -j ACCEPT
+
+ # Set default policies to DROP first
  iptables -P INPUT DROP
  iptables -P FORWARD DROP
  iptables -P OUTPUT DROP
 
- # Allow established connections
+ # First allow established connections for already approved traffic
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
 
- # Reject everything else with immediate feedback
- iptables -A OUTPUT -j REJECT --reject-with icmp-admin-prohibited
+ # Then allow only specific outbound traffic to allowed domains
+ iptables -A OUTPUT -m set --match-set allowed-domains dst -j ACCEPT
 
- # --- 8. Verification -------------------------------------------------------
- echo "--- Verifying firewall ---"
- VERIFIED=true
+ # Explicitly REJECT all other outbound traffic for immediate feedback
+ iptables -A OUTPUT -j REJECT --reject-with icmp-admin-prohibited
 
+ echo "Firewall configuration complete"
+ echo "Verifying firewall rules..."
  if curl --connect-timeout 5 https://example.com >/dev/null 2>&1; then
- echo "[FAIL] Firewall verification failed reached https://example.com (should be blocked)"
- VERIFIED=false
+ echo "ERROR: Firewall verification failed - was able to reach https://example.com"
+ exit 1
  else
- echo "[OK] https://example.com is blocked as expected"
+ echo "Firewall verification passed - unable to reach https://example.com as expected"
  fi
 
- if curl --connect-timeout 5 https://api.github.com/zen >/dev/null 2>&1; then
- echo "[OK] https://api.github.com is reachable as expected"
+ # Verify GitHub API access
+ if ! curl --connect-timeout 5 https://api.github.com/zen >/dev/null 2>&1; then
+ echo "ERROR: Firewall verification failed - unable to reach https://api.github.com"
+ exit 1
  else
- echo "[WARN] https://api.github.com is not reachable — GitHub IPs may have changed"
- echo " Claude Code will still work, but git operations may fail"
+ echo "Firewall verification passed - able to reach https://api.github.com as expected"
  fi
+ ```
 
- if [ "$VERIFIED" = false ]; then
- echo "=== FIREWALL VERIFICATION FAILED ==="
- exit 1
- fi
+ ## Differences from the upstream Claude Code reference
 
- echo "=== Firewall configured successfully ==="
- echo "Mode: $([ "$HAVE_IPSET" = true ] && echo 'ipset' || echo 'iptables-only')"
- echo "Log: $LOG"
- ```
+ The only change from the [upstream reference](https://github.com/anthropics/claude-code/blob/main/.devcontainer/init-firewall.sh) is `| sort -u` added to two pipelines to deduplicate IPs before `ipset add`:
 
- ## Key improvements over the original
+ 1. **GitHub CIDRs** (line with `aggregate -q`): `jq -r '...'[]' | sort -u | aggregate -q` — deduplicates before aggregation
+ 2. **Domain resolution** (line with `dig`): `awk ... | sort -u` — deduplicates IPs from DNS (e.g., `marketplace.visualstudio.com` returns the same IP twice)
 
- | Issue | Original behavior | Improved behavior |
- |-------|-------------------|-------------------|
- | Duplicate IPs from DNS | `ipset add` fatal error (exit 1) | Deduplicates via `sort -u` + `ipset add ... \|\| true` |
- | `ipset` not available | Fatal crash | Falls back to iptables-only rules |
- | `aggregate` not available | Fatal crash | Adds CIDRs individually without aggregation |
- | `dig` not available | Fatal crash | Falls back to `getent ahostsv4` |
- | GitHub API unreachable | Fatal crash | Resolves github.com/api.github.com directly |
- | Domain resolution fails | Fatal crash | Skips domain with warning, continues |
- | Host IP detection fails | Fatal crash | Allows all RFC1918 ranges as fallback |
- | Docker DNS restore fails | Silent + potential crash | Handles per-rule with `|| true` |
- | No diagnostic output | Blind exit code 1 | Full log at `/tmp/firewall-init.log` |
- | GitHub unreachable after setup | Fatal crash | Warning only (Claude API is the critical path) |
+ Without deduplication, `ipset add` fails with `"Element cannot be added to the set: it's already added"` and `set -euo pipefail` terminates the script.
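The dedup behavior is easy to verify in isolation, without `ipset` or root. A small sketch (the addresses are hypothetical stand-ins for a duplicate-bearing DNS answer):

```shell
# DNS can legitimately return the same A record twice; `sort -u` collapses
# duplicates so a strict consumer (here, `ipset add` under `set -e`) never
# sees a repeated element. Addresses are hypothetical.
ips=$'13.107.42.14\n13.107.42.14\n13.107.6.175'
deduped=$(printf '%s\n' "$ips" | sort -u)
echo "$deduped"    # two unique addresses remain
```

The same `sort -u` also feeds `aggregate -q` in the CIDR pipeline, where duplicate ranges would trip the strict mode in exactly the same way.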
 
  ## Firewall rules summary
 
  | Rule | Direction | Purpose |
  |------|-----------|---------|
- | DNS (UDP/TCP 53) | Outbound | Domain name resolution |
+ | DNS (UDP 53) | Outbound | Domain name resolution |
  | SSH (TCP 22) | Outbound | Git over SSH |
  | Localhost | Both | Container-internal communication |
  | Host network | Both | Docker host ↔ container communication |
- | GitHub IPs | Outbound | Git operations, GitHub API (dynamically fetched or resolved) |
+ | GitHub IPs | Outbound | Git operations, GitHub API (dynamically fetched from api.github.com/meta) |
  | npm registry | Outbound | Package installation |
  | api.anthropic.com | Outbound | Claude API calls |
  | sentry.io | Outbound | Error reporting |
@@ -287,7 +170,7 @@ echo "Log: $LOG"
  ## Verification on startup
 
  The script verifies the firewall by:
- 1. Confirming `https://example.com` is **blocked** (must fail — otherwise exit 1)
- 2. Confirming `https://api.github.com` is **reachable** (warning only if it fails)
+ 1. Confirming `https://example.com` is **blocked** (should fail)
+ 2. Confirming `https://api.github.com` is **reachable** (should succeed)
 
- All output is logged to `/tmp/firewall-init.log` for debugging.
+ If either check fails, the script exits with an error and the container will not start.
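The two-probe contract can be sketched as a small function. `probe_blocked` and `probe_allowed` below are stand-ins for the real `curl --connect-timeout 5 ...` calls, not names from the script:

```shell
# Fail-fast verification: the blocked probe must fail and the allowed
# probe must succeed, otherwise startup is aborted. Probe commands are
# hypothetical placeholders for the curl calls in init-firewall.sh.
verify_firewall() {
  local probe_blocked="$1" probe_allowed="$2"
  if $probe_blocked; then
    echo "ERROR: blocked URL was reachable"
    return 1
  fi
  if ! $probe_allowed; then
    echo "ERROR: allowed URL was unreachable"
    return 1
  fi
  echo "verification passed"
}

# Healthy firewall: the example.com probe fails, the api.github.com probe succeeds.
verify_firewall false true
```

Checking both directions matters: a firewall that blocks everything passes the negative check but would strand Claude Code, which is why the positive GitHub probe also hard-fails in 3.2.0.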