@shaykec/bridge 0.4.24 → 0.4.26

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (320)
  1. package/journeys/ai-engineer.yaml +34 -0
  2. package/journeys/backend-developer.yaml +36 -0
  3. package/journeys/business-analyst.yaml +37 -0
  4. package/journeys/devops-engineer.yaml +37 -0
  5. package/journeys/engineering-manager.yaml +44 -0
  6. package/journeys/frontend-developer.yaml +41 -0
  7. package/journeys/fullstack-developer.yaml +49 -0
  8. package/journeys/mobile-developer.yaml +42 -0
  9. package/journeys/product-manager.yaml +35 -0
  10. package/journeys/qa-engineer.yaml +37 -0
  11. package/journeys/ux-designer.yaml +43 -0
  12. package/modules/README.md +52 -0
  13. package/modules/accessibility-fundamentals/content.md +126 -0
  14. package/modules/accessibility-fundamentals/exercises.md +88 -0
  15. package/modules/accessibility-fundamentals/module.yaml +43 -0
  16. package/modules/accessibility-fundamentals/quick-ref.md +71 -0
  17. package/modules/accessibility-fundamentals/quiz.md +100 -0
  18. package/modules/accessibility-fundamentals/resources.md +29 -0
  19. package/modules/accessibility-fundamentals/walkthrough.md +80 -0
  20. package/modules/adr-writing/content.md +121 -0
  21. package/modules/adr-writing/exercises.md +81 -0
  22. package/modules/adr-writing/module.yaml +41 -0
  23. package/modules/adr-writing/quick-ref.md +57 -0
  24. package/modules/adr-writing/quiz.md +73 -0
  25. package/modules/adr-writing/resources.md +29 -0
  26. package/modules/adr-writing/walkthrough.md +64 -0
  27. package/modules/ai-agents/content.md +120 -0
  28. package/modules/ai-agents/exercises.md +82 -0
  29. package/modules/ai-agents/module.yaml +42 -0
  30. package/modules/ai-agents/quick-ref.md +60 -0
  31. package/modules/ai-agents/quiz.md +103 -0
  32. package/modules/ai-agents/resources.md +30 -0
  33. package/modules/ai-agents/walkthrough.md +85 -0
  34. package/modules/ai-assisted-research/content.md +136 -0
  35. package/modules/ai-assisted-research/exercises.md +80 -0
  36. package/modules/ai-assisted-research/module.yaml +42 -0
  37. package/modules/ai-assisted-research/quick-ref.md +67 -0
  38. package/modules/ai-assisted-research/quiz.md +73 -0
  39. package/modules/ai-assisted-research/resources.md +33 -0
  40. package/modules/ai-assisted-research/walkthrough.md +85 -0
  41. package/modules/ai-pair-programming/content.md +105 -0
  42. package/modules/ai-pair-programming/exercises.md +98 -0
  43. package/modules/ai-pair-programming/module.yaml +39 -0
  44. package/modules/ai-pair-programming/quick-ref.md +58 -0
  45. package/modules/ai-pair-programming/quiz.md +73 -0
  46. package/modules/ai-pair-programming/resources.md +34 -0
  47. package/modules/ai-pair-programming/walkthrough.md +117 -0
  48. package/modules/ai-test-generation/content.md +125 -0
  49. package/modules/ai-test-generation/exercises.md +98 -0
  50. package/modules/ai-test-generation/module.yaml +39 -0
  51. package/modules/ai-test-generation/quick-ref.md +65 -0
  52. package/modules/ai-test-generation/quiz.md +74 -0
  53. package/modules/ai-test-generation/resources.md +41 -0
  54. package/modules/ai-test-generation/walkthrough.md +100 -0
  55. package/modules/api-design/content.md +189 -0
  56. package/modules/api-design/exercises.md +84 -0
  57. package/modules/api-design/game.yaml +113 -0
  58. package/modules/api-design/module.yaml +45 -0
  59. package/modules/api-design/quick-ref.md +73 -0
  60. package/modules/api-design/quiz.md +100 -0
  61. package/modules/api-design/resources.md +55 -0
  62. package/modules/api-design/walkthrough.md +88 -0
  63. package/modules/clean-code/content.md +136 -0
  64. package/modules/clean-code/exercises.md +137 -0
  65. package/modules/clean-code/game.yaml +172 -0
  66. package/modules/clean-code/module.yaml +44 -0
  67. package/modules/clean-code/quick-ref.md +44 -0
  68. package/modules/clean-code/quiz.md +105 -0
  69. package/modules/clean-code/resources.md +40 -0
  70. package/modules/clean-code/walkthrough.md +78 -0
  71. package/modules/clean-code/workshop.yaml +149 -0
  72. package/modules/code-review/content.md +130 -0
  73. package/modules/code-review/exercises.md +95 -0
  74. package/modules/code-review/game.yaml +83 -0
  75. package/modules/code-review/module.yaml +42 -0
  76. package/modules/code-review/quick-ref.md +77 -0
  77. package/modules/code-review/quiz.md +105 -0
  78. package/modules/code-review/resources.md +40 -0
  79. package/modules/code-review/walkthrough.md +106 -0
  80. package/modules/daily-workflow/content.md +81 -0
  81. package/modules/daily-workflow/exercises.md +50 -0
  82. package/modules/daily-workflow/module.yaml +33 -0
  83. package/modules/daily-workflow/quick-ref.md +37 -0
  84. package/modules/daily-workflow/quiz.md +65 -0
  85. package/modules/daily-workflow/resources.md +38 -0
  86. package/modules/daily-workflow/walkthrough.md +83 -0
  87. package/modules/debugging-systematically/content.md +139 -0
  88. package/modules/debugging-systematically/exercises.md +91 -0
  89. package/modules/debugging-systematically/module.yaml +46 -0
  90. package/modules/debugging-systematically/quick-ref.md +59 -0
  91. package/modules/debugging-systematically/quiz.md +105 -0
  92. package/modules/debugging-systematically/resources.md +42 -0
  93. package/modules/debugging-systematically/walkthrough.md +84 -0
  94. package/modules/debugging-systematically/workshop.yaml +127 -0
  95. package/modules/demo-test/content.md +68 -0
  96. package/modules/demo-test/exercises.md +28 -0
  97. package/modules/demo-test/game.yaml +171 -0
  98. package/modules/demo-test/module.yaml +41 -0
  99. package/modules/demo-test/quick-ref.md +54 -0
  100. package/modules/demo-test/quiz.md +74 -0
  101. package/modules/demo-test/resources.md +21 -0
  102. package/modules/demo-test/walkthrough.md +122 -0
  103. package/modules/demo-test/workshop.yaml +31 -0
  104. package/modules/design-critique/content.md +93 -0
  105. package/modules/design-critique/exercises.md +71 -0
  106. package/modules/design-critique/module.yaml +41 -0
  107. package/modules/design-critique/quick-ref.md +63 -0
  108. package/modules/design-critique/quiz.md +73 -0
  109. package/modules/design-critique/resources.md +27 -0
  110. package/modules/design-critique/walkthrough.md +68 -0
  111. package/modules/design-patterns/content.md +335 -0
  112. package/modules/design-patterns/exercises.md +82 -0
  113. package/modules/design-patterns/game.yaml +55 -0
  114. package/modules/design-patterns/module.yaml +45 -0
  115. package/modules/design-patterns/quick-ref.md +44 -0
  116. package/modules/design-patterns/quiz.md +101 -0
  117. package/modules/design-patterns/resources.md +40 -0
  118. package/modules/design-patterns/walkthrough.md +64 -0
  119. package/modules/exploratory-testing/content.md +133 -0
  120. package/modules/exploratory-testing/exercises.md +88 -0
  121. package/modules/exploratory-testing/module.yaml +41 -0
  122. package/modules/exploratory-testing/quick-ref.md +68 -0
  123. package/modules/exploratory-testing/quiz.md +75 -0
  124. package/modules/exploratory-testing/resources.md +39 -0
  125. package/modules/exploratory-testing/walkthrough.md +87 -0
  126. package/modules/git/content.md +128 -0
  127. package/modules/git/exercises.md +53 -0
  128. package/modules/git/game.yaml +190 -0
  129. package/modules/git/module.yaml +44 -0
  130. package/modules/git/quick-ref.md +67 -0
  131. package/modules/git/quiz.md +89 -0
  132. package/modules/git/resources.md +49 -0
  133. package/modules/git/walkthrough.md +92 -0
  134. package/modules/git/workshop.yaml +145 -0
  135. package/modules/hiring-interviews/content.md +130 -0
  136. package/modules/hiring-interviews/exercises.md +88 -0
  137. package/modules/hiring-interviews/module.yaml +41 -0
  138. package/modules/hiring-interviews/quick-ref.md +68 -0
  139. package/modules/hiring-interviews/quiz.md +73 -0
  140. package/modules/hiring-interviews/resources.md +36 -0
  141. package/modules/hiring-interviews/walkthrough.md +75 -0
  142. package/modules/hooks/content.md +97 -0
  143. package/modules/hooks/exercises.md +69 -0
  144. package/modules/hooks/module.yaml +39 -0
  145. package/modules/hooks/quick-ref.md +93 -0
  146. package/modules/hooks/quiz.md +81 -0
  147. package/modules/hooks/resources.md +34 -0
  148. package/modules/hooks/walkthrough.md +105 -0
  149. package/modules/hooks/workshop.yaml +64 -0
  150. package/modules/incident-response/content.md +124 -0
  151. package/modules/incident-response/exercises.md +82 -0
  152. package/modules/incident-response/game.yaml +132 -0
  153. package/modules/incident-response/module.yaml +45 -0
  154. package/modules/incident-response/quick-ref.md +53 -0
  155. package/modules/incident-response/quiz.md +103 -0
  156. package/modules/incident-response/resources.md +40 -0
  157. package/modules/incident-response/walkthrough.md +82 -0
  158. package/modules/llm-fundamentals/content.md +114 -0
  159. package/modules/llm-fundamentals/exercises.md +83 -0
  160. package/modules/llm-fundamentals/module.yaml +42 -0
  161. package/modules/llm-fundamentals/quick-ref.md +64 -0
  162. package/modules/llm-fundamentals/quiz.md +103 -0
  163. package/modules/llm-fundamentals/resources.md +30 -0
  164. package/modules/llm-fundamentals/walkthrough.md +91 -0
  165. package/modules/one-on-ones/content.md +133 -0
  166. package/modules/one-on-ones/exercises.md +81 -0
  167. package/modules/one-on-ones/module.yaml +44 -0
  168. package/modules/one-on-ones/quick-ref.md +67 -0
  169. package/modules/one-on-ones/quiz.md +73 -0
  170. package/modules/one-on-ones/resources.md +37 -0
  171. package/modules/one-on-ones/walkthrough.md +69 -0
  172. package/modules/package.json +9 -0
  173. package/modules/prioritization-frameworks/content.md +130 -0
  174. package/modules/prioritization-frameworks/exercises.md +93 -0
  175. package/modules/prioritization-frameworks/module.yaml +41 -0
  176. package/modules/prioritization-frameworks/quick-ref.md +77 -0
  177. package/modules/prioritization-frameworks/quiz.md +73 -0
  178. package/modules/prioritization-frameworks/resources.md +32 -0
  179. package/modules/prioritization-frameworks/walkthrough.md +69 -0
  180. package/modules/prompt-engineering/content.md +123 -0
  181. package/modules/prompt-engineering/exercises.md +82 -0
  182. package/modules/prompt-engineering/game.yaml +101 -0
  183. package/modules/prompt-engineering/module.yaml +45 -0
  184. package/modules/prompt-engineering/quick-ref.md +65 -0
  185. package/modules/prompt-engineering/quiz.md +105 -0
  186. package/modules/prompt-engineering/resources.md +36 -0
  187. package/modules/prompt-engineering/walkthrough.md +81 -0
  188. package/modules/rag-fundamentals/content.md +111 -0
  189. package/modules/rag-fundamentals/exercises.md +80 -0
  190. package/modules/rag-fundamentals/module.yaml +45 -0
  191. package/modules/rag-fundamentals/quick-ref.md +58 -0
  192. package/modules/rag-fundamentals/quiz.md +75 -0
  193. package/modules/rag-fundamentals/resources.md +34 -0
  194. package/modules/rag-fundamentals/walkthrough.md +75 -0
  195. package/modules/react-fundamentals/content.md +140 -0
  196. package/modules/react-fundamentals/exercises.md +81 -0
  197. package/modules/react-fundamentals/game.yaml +145 -0
  198. package/modules/react-fundamentals/module.yaml +45 -0
  199. package/modules/react-fundamentals/quick-ref.md +62 -0
  200. package/modules/react-fundamentals/quiz.md +106 -0
  201. package/modules/react-fundamentals/resources.md +42 -0
  202. package/modules/react-fundamentals/walkthrough.md +89 -0
  203. package/modules/react-fundamentals/workshop.yaml +112 -0
  204. package/modules/react-native-fundamentals/content.md +141 -0
  205. package/modules/react-native-fundamentals/exercises.md +79 -0
  206. package/modules/react-native-fundamentals/module.yaml +42 -0
  207. package/modules/react-native-fundamentals/quick-ref.md +60 -0
  208. package/modules/react-native-fundamentals/quiz.md +61 -0
  209. package/modules/react-native-fundamentals/resources.md +24 -0
  210. package/modules/react-native-fundamentals/walkthrough.md +84 -0
  211. package/modules/registry.yaml +1650 -0
  212. package/modules/risk-management/content.md +162 -0
  213. package/modules/risk-management/exercises.md +86 -0
  214. package/modules/risk-management/module.yaml +41 -0
  215. package/modules/risk-management/quick-ref.md +82 -0
  216. package/modules/risk-management/quiz.md +73 -0
  217. package/modules/risk-management/resources.md +40 -0
  218. package/modules/risk-management/walkthrough.md +67 -0
  219. package/modules/running-effective-standups/content.md +119 -0
  220. package/modules/running-effective-standups/exercises.md +79 -0
  221. package/modules/running-effective-standups/module.yaml +40 -0
  222. package/modules/running-effective-standups/quick-ref.md +61 -0
  223. package/modules/running-effective-standups/quiz.md +73 -0
  224. package/modules/running-effective-standups/resources.md +36 -0
  225. package/modules/running-effective-standups/walkthrough.md +76 -0
  226. package/modules/solid-principles/content.md +154 -0
  227. package/modules/solid-principles/exercises.md +107 -0
  228. package/modules/solid-principles/module.yaml +42 -0
  229. package/modules/solid-principles/quick-ref.md +50 -0
  230. package/modules/solid-principles/quiz.md +102 -0
  231. package/modules/solid-principles/resources.md +39 -0
  232. package/modules/solid-principles/walkthrough.md +84 -0
  233. package/modules/sprint-planning/content.md +142 -0
  234. package/modules/sprint-planning/exercises.md +79 -0
  235. package/modules/sprint-planning/game.yaml +84 -0
  236. package/modules/sprint-planning/module.yaml +44 -0
  237. package/modules/sprint-planning/quick-ref.md +76 -0
  238. package/modules/sprint-planning/quiz.md +102 -0
  239. package/modules/sprint-planning/resources.md +39 -0
  240. package/modules/sprint-planning/walkthrough.md +75 -0
  241. package/modules/sql-fundamentals/content.md +160 -0
  242. package/modules/sql-fundamentals/exercises.md +87 -0
  243. package/modules/sql-fundamentals/game.yaml +105 -0
  244. package/modules/sql-fundamentals/module.yaml +45 -0
  245. package/modules/sql-fundamentals/quick-ref.md +53 -0
  246. package/modules/sql-fundamentals/quiz.md +103 -0
  247. package/modules/sql-fundamentals/resources.md +42 -0
  248. package/modules/sql-fundamentals/walkthrough.md +92 -0
  249. package/modules/sql-fundamentals/workshop.yaml +109 -0
  250. package/modules/stakeholder-communication/content.md +186 -0
  251. package/modules/stakeholder-communication/exercises.md +87 -0
  252. package/modules/stakeholder-communication/module.yaml +38 -0
  253. package/modules/stakeholder-communication/quick-ref.md +89 -0
  254. package/modules/stakeholder-communication/quiz.md +73 -0
  255. package/modules/stakeholder-communication/resources.md +41 -0
  256. package/modules/stakeholder-communication/walkthrough.md +74 -0
  257. package/modules/system-design/content.md +149 -0
  258. package/modules/system-design/exercises.md +83 -0
  259. package/modules/system-design/game.yaml +95 -0
  260. package/modules/system-design/module.yaml +46 -0
  261. package/modules/system-design/quick-ref.md +59 -0
  262. package/modules/system-design/quiz.md +102 -0
  263. package/modules/system-design/resources.md +46 -0
  264. package/modules/system-design/walkthrough.md +90 -0
  265. package/modules/team-topologies/content.md +166 -0
  266. package/modules/team-topologies/exercises.md +85 -0
  267. package/modules/team-topologies/module.yaml +41 -0
  268. package/modules/team-topologies/quick-ref.md +61 -0
  269. package/modules/team-topologies/quiz.md +101 -0
  270. package/modules/team-topologies/resources.md +37 -0
  271. package/modules/team-topologies/walkthrough.md +76 -0
  272. package/modules/technical-debt/content.md +111 -0
  273. package/modules/technical-debt/exercises.md +92 -0
  274. package/modules/technical-debt/module.yaml +39 -0
  275. package/modules/technical-debt/quick-ref.md +60 -0
  276. package/modules/technical-debt/quiz.md +73 -0
  277. package/modules/technical-debt/resources.md +25 -0
  278. package/modules/technical-debt/walkthrough.md +94 -0
  279. package/modules/technical-mentoring/content.md +128 -0
  280. package/modules/technical-mentoring/exercises.md +84 -0
  281. package/modules/technical-mentoring/module.yaml +41 -0
  282. package/modules/technical-mentoring/quick-ref.md +74 -0
  283. package/modules/technical-mentoring/quiz.md +73 -0
  284. package/modules/technical-mentoring/resources.md +33 -0
  285. package/modules/technical-mentoring/walkthrough.md +65 -0
  286. package/modules/test-strategy/content.md +136 -0
  287. package/modules/test-strategy/exercises.md +84 -0
  288. package/modules/test-strategy/game.yaml +99 -0
  289. package/modules/test-strategy/module.yaml +45 -0
  290. package/modules/test-strategy/quick-ref.md +66 -0
  291. package/modules/test-strategy/quiz.md +99 -0
  292. package/modules/test-strategy/resources.md +60 -0
  293. package/modules/test-strategy/walkthrough.md +97 -0
  294. package/modules/test-strategy/workshop.yaml +96 -0
  295. package/modules/typescript-fundamentals/content.md +127 -0
  296. package/modules/typescript-fundamentals/exercises.md +79 -0
  297. package/modules/typescript-fundamentals/game.yaml +111 -0
  298. package/modules/typescript-fundamentals/module.yaml +45 -0
  299. package/modules/typescript-fundamentals/quick-ref.md +55 -0
  300. package/modules/typescript-fundamentals/quiz.md +104 -0
  301. package/modules/typescript-fundamentals/resources.md +42 -0
  302. package/modules/typescript-fundamentals/walkthrough.md +71 -0
  303. package/modules/typescript-fundamentals/workshop.yaml +146 -0
  304. package/modules/user-story-mapping/content.md +123 -0
  305. package/modules/user-story-mapping/exercises.md +87 -0
  306. package/modules/user-story-mapping/module.yaml +41 -0
  307. package/modules/user-story-mapping/quick-ref.md +64 -0
  308. package/modules/user-story-mapping/quiz.md +73 -0
  309. package/modules/user-story-mapping/resources.md +29 -0
  310. package/modules/user-story-mapping/walkthrough.md +86 -0
  311. package/modules/writing-prds/content.md +133 -0
  312. package/modules/writing-prds/exercises.md +93 -0
  313. package/modules/writing-prds/game.yaml +83 -0
  314. package/modules/writing-prds/module.yaml +44 -0
  315. package/modules/writing-prds/quick-ref.md +77 -0
  316. package/modules/writing-prds/quiz.md +103 -0
  317. package/modules/writing-prds/resources.md +30 -0
  318. package/modules/writing-prds/walkthrough.md +87 -0
  319. package/package.json +5 -3
  320. package/src/server.js +17 -7
package/modules/ai-pair-programming/content.md
@@ -0,0 +1,105 @@
# AI Pair Programming — Working with Claude Code

<!-- hint:slides topic="AI pair programming: mental model, when AI helps vs struggles, prompting patterns, and the human-AI feedback loop" slides="5" -->

## Mental Model

Think of AI as a pair programmer: fast, knowledgeable, but without full context of your codebase, conventions, or constraints. You steer; the AI proposes. You verify; the AI iterates.

## When AI Helps vs When It Doesn't

### AI Excels At
- **Boilerplate** — repetitive setup, configs, scaffolding
- **Translating intent** — "add validation for email format" → code
- **Explaining** — "what does this regex do?"
- **Exploring options** — "show me three ways to implement X"
- **Fixing known patterns** — common bugs, linter fixes

### AI Struggles With
- **Project-specific context** — your naming, architecture, existing patterns
- **Live state** — what's running, current errors, env vars
- **Ambiguity** — "make it better" (better how?)
- **Multi-file refactors** — keeping the whole system consistent
- **Creative design** — you know the product; AI infers

## Effective Prompting Patterns

### Be Specific
❌ "Fix this function."
✅ "This function should return null when `input` is empty; it currently throws. Also add a JSDoc."

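To make the contrast concrete, here is a hypothetical function the specific prompt above might target — `parseTags` and its tag-parsing behavior are invented for illustration, not taken from any real codebase:

```javascript
// Hypothetical "before": throws on empty input, because String.match()
// returns null when nothing matches and .map() is then called on null.
function parseTags(input) {
  return input.match(/[^,\s]+/g).map((t) => t.toLowerCase());
}

// What the specific prompt should yield: null on empty input, plus a JSDoc.
/**
 * Splits a comma-separated tag string into lowercase tags.
 * @param {string} input - Raw tag string, e.g. "API, Web".
 * @returns {string[]|null} Parsed tags, or null when `input` is empty.
 */
function parseTagsFixed(input) {
  const matches = input.match(/[^,\s]+/g);
  return matches ? matches.map((t) => t.toLowerCase()) : null;
}
```

The vague prompt ("fix this function") gives the AI nothing to aim at; the specific one pins down the exact input, the exact expected output, and the documentation requirement.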
### Provide Context
❌ "Write a React component."
✅ "Write a React component for a user avatar. We use Tailwind, and `user` has `name` and `avatarUrl`. Match the pattern in `ProfileCard.jsx`."

### Show Examples
❌ "Use our API style."
✅ "Follow this pattern: `const { data, error } = useApi('/users');` — same hook, different endpoint."

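To see why one concrete example communicates so much, here is a rough plain-JS sketch of what a `useApi`-style helper might look like under the hood — the name, base URL, and `{ data, error }` shape are assumptions for illustration, not the actual module's API:

```javascript
// Hypothetical analogue of the useApi pattern above: every call
// resolves to { data, error } instead of throwing.
async function api(path, fetchImpl = fetch) {
  try {
    const res = await fetchImpl(`https://api.example.com${path}`);
    if (!res.ok) return { data: null, error: `HTTP ${res.status}` };
    return { data: await res.json(), error: null };
  } catch (err) {
    return { data: null, error: err.message };
  }
}
```

A single call like `useApi('/users')` lets the AI infer the whole shape: the destructured return, the error convention, and the endpoint style.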
### Break Down Large Tasks
❌ "Build a full auth flow."
✅ "Step 1: Add a login form component. Step 2: Wire it to the existing `auth.login` function. Step 3: Add error handling for invalid credentials."

## Iterating on Output

1. **Run and verify** — don't assume it works; run tests, try the flow
2. **Narrow feedback** — "The `fetch` URL is wrong; it should use `API_BASE` from config"
3. **Escalate gradually** — if the AI keeps missing, add more context or take over that part

## Reviewing AI-Generated Code

Treat it like any code review:

- **Correctness** — does it do what you asked?
- **Safety** — no hardcoded secrets, proper input validation
- **Fit** — matches your patterns, conventions, architecture
- **Maintainability** — clear, readable, not over-engineered

```javascript
// AI might produce this:
// const key = "sk-12345"; // ❌ Never commit secrets

// You ensure:
const key = process.env.API_KEY; // ✅ From env
```

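Hardcoded secrets are the classic miss; unvalidated input is the other. A minimal sketch of the kind of guard to look for during review — the `validateNewUser` helper, its fields, and the email regex are hypothetical:

```javascript
// Hypothetical request-body validator: check inputs explicitly rather
// than trusting AI-generated code to have done it.
function validateNewUser(body) {
  const errors = [];
  if (typeof body.email !== 'string' ||
      !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(body.email)) {
    errors.push('invalid email');
  }
  if (typeof body.name !== 'string' || body.name.trim().length === 0) {
    errors.push('name is required');
  }
  return { ok: errors.length === 0, errors };
}
```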
## Claude Code Features

### Skills
Skills are reusable procedures. If your task matches a skill, invoke it: the agent follows the skill's steps. Example: `spec-writer`, `argus-integration`.

### Subagents
Heavy or parallel work can be delegated. The main agent coordinates; subagents focus on specific batches (e.g., screen customization, testing).

### MCP (Model Context Protocol)
MCP provides tools like Argus (testing), appxray (inspection), and code search. Enable the right MCP for your task so the agent can inspect, test, and validate.

## When to Take Over Manually

- **Subtle bugs** — AI keeps proposing wrong fixes; you know the cause
- **Architecture decisions** — you own the design
- **Sensitive logic** — auth, billing, security-critical paths
- **Performance** — AI may not know your bottlenecks

## Human-AI Collaboration Loop

```mermaid
flowchart LR
    A[You: goal + context] --> B[AI: proposal]
    B --> C{You: verify}
    C -->|Pass| D[Ship it]
    C -->|Fail| E[You: specific feedback]
    E --> B
```

The loop tightens when feedback is **specific** and **iterative**. "Try again" rarely helps; "use `useCallback` here because the parent re-renders often" does.

---

## Key Takeaways

1. **You're the architect** — AI implements; you verify and integrate
2. **Context is king** — the more you give, the better the output
3. **Iterate with precision** — narrow, concrete feedback beats vague "fix this"
4. **Review everything** — AI code can have subtle bugs and security issues
5. **Know when to take over** — some work is yours to do
package/modules/ai-pair-programming/exercises.md
@@ -0,0 +1,98 @@
# AI Pair Programming Exercises

## Exercise 1: Improve a Vague Prompt

**Task:** Someone gave Claude Code this prompt: "Add tests for the user service." Rewrite it to be specific enough for useful output. Include: what to test, which file/module, and any existing test patterns.

**Validation:**
- [ ] Specifies the module/class/service under test
- [ ] Mentions what to test (e.g., happy path, error cases)
- [ ] References existing test style if applicable (e.g., Jest, Vitest)

**Hints:**
1. "User service" — which file? Which functions?
2. What behaviors matter? Create, update, validation errors?
3. "We use Vitest and `describe`/`it`" — pattern matters

---

## Exercise 2: Provide Context via Example

**Task:** Your API client uses a specific pattern for auth headers. Write a 3-sentence prompt that shows the pattern so the AI can add a new authenticated request.

Pattern: ``headers: { 'Authorization': `Bearer ${token}` }`` and token from `getAuth().token`.

**Validation:**
- [ ] Includes a concrete example of the header format
- [ ] Explains where the token comes from
- [ ] Specifies the new endpoint or use case

**Hints:**
1. Show one real request that uses the pattern
2. Name the auth helper or context
3. State the new endpoint or action needed

+ ---
36
+
37
+ ## Exercise 3: Give Narrow Feedback
38
+
39
+ **Task:** The AI produced a 50-line component with a bug: it doesn't handle the loading state. The loading prop exists but isn't used. Write feedback that fixes just this, without asking for a full rewrite.
40
+
41
+ **Validation:**
42
+ - [ ] Identifies the exact issue (loading state unused)
43
+ - [ ] Suggests where/how to use it (e.g., show spinner when loading)
44
+ - [ ] Doesn't ask for unrelated changes
45
+
46
+ **Hints:**
47
+ 1. Point to the prop: "The `loading` prop is passed but never used"
48
+ 2. Suggest the fix: "Show a spinner when loading is true"
49
+ 3. Optionally show a one-line example
50
+
51
+ ---
52
+
53
+ ## Exercise 4: Review AI Output for Security
54
+
55
+ **Task:** The AI suggested this code for a "forgot password" flow:
56
+
57
+ ```javascript
58
+ async function resetPassword(email, newPassword) {
59
+ await fetch('/api/reset', {
60
+ method: 'POST',
61
+ body: JSON.stringify({ email, newPassword })
62
+ });
63
+ }
64
+ ```
65
+
66
+ List three security or correctness issues and how you'd fix them.
67
+
68
+ **Validation:**
69
+ - [ ] Identifies at least: HTTPS, token/verification, plaintext password
70
+ - [ ] Proposes concrete fixes (e.g., token-based flow, hash on server)
71
+
72
+ **Hints:**
73
+ 1. Is the password sent in plaintext? How should reset work?
74
+ 2. Does the API verify the requester? Token in email link?
75
+ 3. HTTPS, rate limiting, validation?
76
+
77
+ ---
78
+
79
+ ## Exercise 5: Decide AI vs Manual
80
+
81
+ **Task:** For each scenario, choose AI / Manual / Both and write one sentence justifying your choice.
82
+
83
+ 1. Refactor a 200-line function into smaller functions
84
+ 2. Fix a build error: "Module not found: './utils'"
85
+ 3. Design the data model for a new feature
86
+ 4. Write integration tests for an existing API
87
+
88
+ **Validation:**
89
+ - [ ] Refactor: Both (AI can suggest structure; you verify logic)
90
+ - [ ] Build error: Manual (path/config is project-specific)
91
+ - [ ] Data model: Manual or Both (design is yours; AI can draft)
92
+ - [ ] Integration tests: Both (AI can scaffold; you verify coverage)
93
+
94
+ **Hints:**
95
+ 1. Refactoring needs your domain knowledge but benefits from AI suggestions
96
+ 2. Module resolution is often path/casing/config — very project-specific
97
+ 3. Design decisions are architectural — you own them
98
+ 4. Tests: AI can generate structure; you ensure they test the right things
package/modules/ai-pair-programming/module.yaml
@@ -0,0 +1,39 @@
slug: ai-pair-programming
title: "AI Pair Programming — Working with Claude Code"
version: 1.0.0
description: "Use Claude Code effectively — prompting, iteration, and when to lean on AI vs code yourself."
category: claude-code
tags: [ai, claude-code, pair-programming, productivity, prompting]
difficulty: intermediate

xp:
  read: 15
  walkthrough: 40
  exercise: 25
  quiz: 20
  quiz-perfect-bonus: 10

time:
  quick: 5
  read: 20
  guided: 50

prerequisites: [code-review]
related: [prompt-engineering, debugging-systematically]

triggers:
  - "How do I use Claude Code effectively?"
  - "What's the best way to pair program with AI?"
  - "How do I give good instructions to Claude Code?"
  - "When should I use AI vs code myself?"

visuals:
  diagrams: [diagram-mermaid, diagram-flow]
  quiz-types: [quiz-matching, quiz-timed-choice]
  playground: bash
  slides: true

sources:
  - url: "https://docs.anthropic.com"
    label: "Anthropic Claude Documentation"
    type: docs
package/modules/ai-pair-programming/quick-ref.md
@@ -0,0 +1,58 @@
# AI Pair Programming Quick Reference

## When AI Excels vs Struggles

| AI Excels At | AI Struggles With |
|--------------|-------------------|
| Boilerplate, configs, scaffolding | Project-specific context |
| Translating intent → code | Live state, env vars |
| Explaining code | Ambiguity ("make it better") |
| Exploring options | Multi-file refactors |
| Fixing known patterns | Creative design |

## Effective Prompting

| Principle | Bad | Good |
|-----------|-----|------|
| Be specific | "Fix this function." | "Return null when `input` is empty; add JSDoc." |
| Provide context | "Write a React component." | "Avatar component, Tailwind, `user.name` / `avatarUrl`, match `ProfileCard.jsx`." |
| Show examples | "Use our API style." | "Follow: `useApi('/users')` — same hook, different endpoint." |
| Break down tasks | "Build full auth flow." | "Step 1: Login form. Step 2: Wire to `auth.login`. Step 3: Error handling." |

## Iterating on Output

1. **Run and verify** — don't assume; run tests, try the flow
2. **Narrow feedback** — "The `fetch` URL should use `API_BASE` from config"
3. **Escalate** — if AI keeps missing, add context or take over

## Code Review Checklist

- [ ] **Correctness** — does it do what you asked?
- [ ] **Safety** — no hardcoded secrets, proper validation
- [ ] **Fit** — matches patterns, conventions, architecture
- [ ] **Maintainability** — clear, readable, not over-engineered

## When to Take Over

- Subtle bugs AI can't fix after 2–3 tries
- Architecture decisions
- Sensitive logic (auth, billing, security)
- Performance-critical paths

## Claude Code Features

| Feature | Purpose |
|---------|---------|
| **Skills** | Reusable procedures (spec-writer, argus-integration) |
| **Subagents** | Parallel work; main agent coordinates |
| **MCP** | Tools: Argus, appxray, code search |

## Human–AI Loop

```
You: goal + context → AI: proposal → You: verify
        ↑                                │
        └────── specific feedback ───────┘
```

**Key:** "Try again" rarely helps. "Use `useCallback` here because the parent re-renders often" does.
package/modules/ai-pair-programming/quiz.md
@@ -0,0 +1,73 @@
# AI Pair Programming — Quiz

## Question 1

When is AI *least* effective at helping?

A) Writing boilerplate and config files
B) Fixing a subtle race condition in WebSocket handling
C) Explaining what a regex does
D) Exploring multiple implementation options

<!-- ANSWER: B -->
<!-- EXPLANATION: Subtle debugging involves project-specific state, timing, and environment. AI lacks live context and may propose wrong fixes. Boilerplate, explanation, and option exploration are AI strengths. -->

## Question 2

What makes a prompt more effective?

A) Being short and generic
B) Including context, examples, and specific requirements
C) Using technical jargon only
D) Asking for "better" or "improved" output

<!-- ANSWER: B -->
<!-- EXPLANATION: Effective prompts provide context (where, what exists), examples (patterns to follow), and specific requirements. Vague prompts like "make it better" give the model little to work with. -->

## Question 3

The AI produced code that uses `var` and a `for` loop. You want `const` and `.map()`. What feedback works best?

A) "Use modern JS"
B) "Replace the for loop with .map() and use const instead of var"
C) "Fix this"
D) Rewriting the whole function yourself

<!-- ANSWER: B -->
<!-- EXPLANATION: Narrow, concrete feedback tells the AI exactly what to change. "Use modern JS" is ambiguous; "Fix this" gives no direction. Rewriting bypasses the iteration loop. -->

39
+ ## Question 4
40
+
41
+ When reviewing AI-generated code, you should check for:
42
+
43
+ A) Only correctness
44
+ B) Correctness, safety (no secrets, validation), fit (matches patterns), maintainability
45
+ C) Only whether it runs
46
+ D) Only style consistency
47
+
48
+ <!-- ANSWER: B -->
49
+ <!-- EXPLANATION: Treat AI output like any code review. Correctness matters, but safety (hardcoded secrets, input validation), fit (conventions, architecture), and maintainability are equally important. -->
50
+
51
+ ## Question 5
52
+
53
+ When should you take over from the AI manually?
54
+
55
+ A) After the first iteration
56
+ B) When the AI keeps proposing wrong fixes, or for architecture and security-critical decisions
57
+ C) Never — always iterate until it works
58
+ D) Only when the AI says it can't do something
59
+
60
+ <!-- ANSWER: B -->
61
+ <!-- EXPLANATION: Diminishing returns from repeated iteration signals time to take over. Architecture, security-critical logic, and subtle bugs you understand are better handled manually. -->
62
+
63
+ ## Question 6
64
+
65
+ What does a "skill" (e.g., spec-writer, argus-integration) provide that a freeform prompt doesn't?
66
+
67
+ A) Faster response time
68
+ B) A reusable procedure the agent follows step-by-step; standardized inputs and outputs
69
+ C) Access to private APIs
70
+ D) Lower cost per request
71
+
72
+ <!-- ANSWER: B -->
73
+ <!-- EXPLANATION: Skills encode procedures. When invoked, the agent follows the skill's steps rather than interpreting a freeform request. This reduces ambiguity and ensures consistent results. -->
@@ -0,0 +1,34 @@
1
+ # AI Pair Programming — Resources
2
+
3
+ ## Official Docs
4
+
5
+ - [Anthropic Claude Documentation](https://docs.anthropic.com) — Claude API, prompting, best practices.
6
+ - [Claude Code Documentation](https://docs.anthropic.com/en/docs/build-with-claude/claude-code) — Claude Code setup and usage.
7
+
8
+ ## Videos
9
+
10
+ - [How to Pair Program with AI](https://www.youtube.com/results?search_query=pair+program+AI) — Practical AI pairing workflows.
11
+ - [Fireship — Claude Code](https://www.youtube.com/results?search_query=Fireship+Claude) — Quick Claude Code overview.
12
+ - [Coding with AI Best Practices](https://www.youtube.com/results?search_query=coding+with+AI+best+practices) — Prompting and iteration.
13
+
14
+ ## Articles
15
+
16
+ - [Anthropic — Prompt Engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering) — How to write effective prompts.
17
+ - [The Art of Prompting](https://simonwillison.net/series/prompting/) — Simon Willison on AI prompting.
18
+ - [Anthropic — Working with Claude](https://docs.anthropic.com/en/docs/build-with-claude) — Effective AI collaboration and prompting.
19
+
20
+ ## Books
21
+
22
+ - **The AI-Assisted Developer** by Nathan Young — Practical AI coding workflows.
23
+ - **Prompt Engineering Guide** (various) — Prompt patterns and techniques.
24
+
25
+ ## Podcasts
26
+
27
+ - [Software Engineering Daily](https://softwareengineeringdaily.com/) — Episodes on AI-assisted development.
28
+ - [The Changelog](https://changelog.com/) — AI tools and pair programming discussions.
29
+
30
+ ## Tools
31
+
32
+ - [Claude Code](https://claude.ai) — AI pair programmer from Anthropic.
33
+ - [Cursor](https://cursor.com) — AI-first code editor.
34
+ - [GitHub Copilot](https://github.com/features/copilot) — Inline AI suggestions.
@@ -0,0 +1,117 @@
1
+ # AI Pair Programming Walkthrough — Learn by Doing
2
+
3
+ ## Before We Begin
4
+
5
+ **Diagnostic Question:** What AI coding tools have you used (Cursor, Copilot, ChatGPT, etc.)? For which tasks did they help most — and for which did they fall short?
6
+
7
+ **Checkpoint:** You have a sense of the AI tools landscape. You're ready to think about when AI helps and when human judgment is essential.
8
+
9
+ ---
10
+
11
+ ## Step 1: Identify the Right Tool
12
+
13
+ Not every task is best for AI.
14
+
15
+ <!-- hint:buttons type="multi" prompt="Which tasks suit AI assistance?" options="Repetitive code,Architecture decisions,Debugging,Documentation" -->
16
+
17
+ **Task:** For each scenario, decide: AI, Manual, or Both (AI drafts, you refine)?
18
+
19
+ 1. Add a new REST endpoint that mirrors an existing one
20
+ 2. Debug a race condition in your WebSocket handling
21
+ 3. Write a README for your project
22
+ 4. Choose between Redis vs Memcached for your caching layer
23
+
24
+ **Question:** What made you choose AI vs manual for each? What kind of context does the AI have (or lack) in each case?
25
+
26
+ **Checkpoint:** The user should recognize that repetitive/structured tasks (1, 3) suit AI; subtle debugging (2) and architecture (4) need human judgment. Both can mean AI drafts, human reviews.
27
+
28
+ ---
29
+
30
+ ## Step 2: Write a Specific Prompt
31
+
32
+ <!-- hint:list style="cards" -->
33
+
34
+ Vague prompts lead to generic output.
35
+
36
+ **Task:** You want to add input validation to a signup form. Your form has `email` and `password` fields. Write a prompt that gives the AI enough context to produce useful code.
37
+
38
+ Include: (a) what to validate, (b) any existing patterns (e.g., "we use `zod`"), (c) where the form lives (file/component).
39
+
40
+ **Question:** What would happen if you only said "add validation"? What extra information did you add and why?
41
+
42
+ **Checkpoint:** The user's prompt includes at least: fields to validate, validation rules (format, length), and where to put the code. They can explain why each piece matters.
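For reference, a well-contexted prompt from this step might yield something like the sketch below. The field rules shown (email shape, 8-character minimum) are illustrative assumptions, and plain JS stands in for a schema library such as `zod` if your codebase uses one:

```javascript
// Hypothetical AI output for a specific signup-validation prompt.
// The rules here are example requirements — state your real rules in the prompt.
function validateSignup({ email, password }) {
  const errors = {};
  // Basic shape check, not a full RFC 5322 validator
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email ?? "")) {
    errors.email = "Enter a valid email address";
  }
  if ((password ?? "").length < 8) {
    errors.password = "Password must be at least 8 characters";
  }
  return { valid: Object.keys(errors).length === 0, errors };
}

console.log(validateSignup({ email: "a@b.co", password: "hunter2!" }).valid); // true
```

Notice how much of this code is a direct answer to context in the prompt: the fields, the rules, and the return shape all had to come from you.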
43
+
44
+ ---
45
+
46
+ ## Step 3: Provide Examples
47
+
48
+ Show, don't just tell.
49
+
50
+ **Task:** Your codebase uses a custom `useApi` hook. Write a 2–3 sentence prompt that shows the pattern so the AI can add a new API call correctly.
51
+
52
+ Example hook usage: `const { data, loading, error } = useApi('/users', { method: 'GET' });`
53
+
54
+ **Question:** Why might "use our useApi hook" fail? What does the example give the AI that words alone don't?
55
+
56
+ **Checkpoint:** The user includes a concrete example of the hook in use. They understand that examples disambiguate style and reduce wrong guesses.
57
+
58
+ ---
59
+
60
+ ## Step 4: Iterate with Narrow Feedback
61
+
62
+ <!-- hint:card type="tip" title="Narrow feedback" -->
63
+
64
+ Practice giving feedback that helps the AI correct course.
65
+
66
+ **Task:** The AI produced a function that uses `var` and a `for` loop. You want `const` and `.map()`. Write the feedback you'd give — without rewriting the whole function yourself.
67
+
68
+ **Question:** What's the difference between "use modern JS" and "replace the for loop with .map() and use const instead of var"? Which gets better results?
69
+
70
+ **Checkpoint:** The user's feedback is specific (naming the constructs to change) and doesn't rewrite the code. They understand that narrow feedback is more actionable.
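Concretely, the exchange in this step might look like the following; the function itself is invented for illustration:

```javascript
// What the AI produced first:
function doubleAllDraft(nums) {
  var out = [];
  for (var i = 0; i < nums.length; i++) {
    out.push(nums[i] * 2);
  }
  return out;
}

// After the narrow feedback "replace the for loop with .map()
// and use const instead of var":
const doubleAll = (nums) => nums.map((n) => n * 2);

console.log(doubleAll([1, 2, 3])); // [ 2, 4, 6 ]
```

The feedback named the exact constructs to change, so the AI can apply it mechanically without guessing what "modern JS" means.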
71
+
72
+ ---
73
+
74
+ ## Step 5: Review AI Output
75
+
76
+ Treat AI output as a draft to verify.
77
+
78
+ <!-- hint:code language="javascript" highlight="1,4" -->
79
+
80
+ **Task:** The AI suggested this code:
81
+
82
+ ```javascript
83
+ async function fetchUser(id) {
84
+ const res = await fetch(`/api/users/${id}`);
85
+ return res.json();
86
+ }
87
+ ```
88
+
89
+ List at least 3 things you'd check before merging: correctness, safety, edge cases.
90
+
91
+ **Question:** What could go wrong in production with this code? What would you add or change?
92
+
93
+ **Checkpoint:** The user identifies: no error handling, no check for non-OK status, possible JSON parse errors, no loading/empty states. They propose at least one concrete fix (e.g., check `res.ok`).
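One possible hardened version, addressing the missing status check and error handling; this is a sketch, not the only reasonable fix:

```javascript
async function fetchUser(id) {
  const res = await fetch(`/api/users/${encodeURIComponent(id)}`);
  if (!res.ok) {
    // Surface HTTP errors instead of parsing an error body as a user
    throw new Error(`fetchUser failed with status ${res.status}`);
  }
  return res.json(); // still rejects if the body is not valid JSON
}
```

Callers should still decide how to handle the thrown error (retry, fallback UI, logging); that decision is yours, not the AI's.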
94
+
95
+ ---
96
+
97
+ ## Step 6: Use Skills and MCP
98
+
99
+ Leverage structured capabilities.
100
+
101
+ **Task:** Look up what skills or MCP tools are available in your setup (e.g., spec-writer, argus, code search). Pick one and describe a task where invoking it would help.
102
+
103
+ **Question:** When would you use a skill vs just asking in natural language? What does the skill give you that a freeform prompt doesn't?
104
+
105
+ **Checkpoint:** The user can name at least one skill or MCP tool and explain when it's useful (e.g., spec-writer after a feature change, argus for testing). They understand skills encode procedures.
106
+
107
+ ---
108
+
109
+ ## Step 7: Decide When to Take Over
110
+
111
+ Know when AI isn't the right tool.
112
+
113
+ **Task:** You've asked the AI to fix a bug three times. Each fix either doesn't work or introduces a new bug. What do you do next?
114
+
115
+ **Question:** At what point does iterating with the AI cost more than debugging yourself? What signals tell you to take over?
116
+
117
+ **Checkpoint:** The user proposes: gather more context (logs, repro steps), try a narrower prompt, or take over and fix manually. They recognize diminishing returns from repeated AI attempts.
@@ -0,0 +1,125 @@
1
+ # AI-Assisted Test Generation — Using Claude for Coverage
2
+
3
+ <!-- hint:slides topic="AI test generation: AI strengths and weaknesses, prompting for tests, review pitfalls, and human+AI workflow" slides="5" -->
4
+
5
+ ## What AI Is Good At (and Bad At)
6
+
7
+ **Good at:**
8
+ - Generating boilerplate (describe, it, expect)
9
+ - Suggesting edge cases (empty, null, boundary)
10
+ - Creating test data (fixtures, mocks)
11
+ - Drafting test doubles (stubs, mocks)
12
+ - Writing tests for well-structured, documented code
13
+
14
+ **Bad at:**
15
+ - Understanding *intent* without context
16
+ - Knowing what *matters* to your business
17
+ - Judging test quality (shallow vs deep assertions)
18
+ - Choosing what to test when time is limited
19
+
20
+ AI amplifies your judgment. You direct; AI drafts.
21
+
22
+ ## Prompting for Tests
23
+
24
+ ### Provide Context
25
+
26
+ - **Function under test**: Paste the code.
27
+ - **Framework**: Jest, Vitest, Playwright, etc.
28
+ - **Examples**: One good test as a style reference.
29
+ - **Constraints**: "No mocks for X", "Use testing-library for React".
30
+
31
+ ```javascript
32
+ // Example prompt context:
33
+ // "Here's my function. Generate unit tests with Vitest.
34
+ // I want: happy path, empty input, invalid input, and one edge case.
35
+ // Style: use describe/it, expect().toBe() for primitives."
36
+ ```
37
+
38
+ ### Example Prompts
39
+
40
+ - "Generate unit tests for this function. Include happy path, null/empty, and error cases."
41
+ - "Write 3 integration tests for this API endpoint. Use the existing test setup."
42
+ - "Suggest edge cases I might have missed for this validation function."
43
+ - "Generate a Jest mock for this service. I need to verify it's called with the right arguments."
44
+
45
+ ## Reviewing AI-Generated Tests
46
+
47
+ ### Common Pitfalls
48
+
49
+ | Pitfall | What to check |
50
+ |---------|----------------|
51
+ | **Shallow assertions** | `expect(result).toBeDefined()` — asserts little |
52
+ | **False confidence** | Tests pass but don't verify real behavior |
53
+ | **Over-mocking** | Mocks everything; integration value lost |
54
+ | **Missing edge cases** | AI often hits obvious cases, misses subtle ones |
55
+ | **Wrong framework** | Jest vs Vitest, RTL vs Enzyme — verify APIs |
56
+
57
+ ### Review Checklist
58
+
59
+ - [ ] Assertions verify *behavior*, not just "it didn't throw"
60
+ - [ ] Edge cases and error paths are covered
61
+ - [ ] Mocks are justified (isolate unit vs test integration)
62
+ - [ ] Test names describe what is being tested
63
+ - [ ] Tests are independent (no shared mutable state)
64
+
65
+ ## AI Across Test Layers
66
+
67
+ | Layer | AI Strength | Human Must |
68
+ |-------|-------------|------------|
69
+ | **Unit** | Boilerplate, edge cases | Choose what to test, verify assertions |
70
+ | **Integration** | Setup, fixtures, API contracts | Verify real flows, DB state |
71
+ | **E2E** | Selectors, steps | Scenarios that matter, flakiness |
72
+
73
+ ## Iterating on Generated Tests
74
+
75
+ 1. **Generate** — Get a first draft from AI.
76
+ 2. **Run** — Do they pass? Fix setup/import errors.
77
+ 3. **Review** — Strengthen weak assertions, add missing cases.
78
+ 4. **Refine prompt** — "Add tests for when the API returns 500" or "Use arrayContaining for the response."
79
+ 5. **Repeat** — Use feedback to improve future prompts.
80
+
81
+ ## AI for Test Data Generation
82
+
83
+ AI can generate:
84
+ - Realistic fake data (names, emails, dates)
85
+ - Boundary values (min, max, zero)
86
+ - Invalid inputs for negative tests
87
+ - Complex nested structures
88
+
89
+ ```javascript
90
+ // Prompt: "Generate 5 valid and 3 invalid sample inputs for
91
+ // this email validation function"
92
+ ```
93
+
94
+ ## AI for Accessibility Testing
95
+
96
+ AI can suggest:
97
+ - ARIA attribute checks
98
+ - Keyboard navigation tests
99
+ - Color contrast assertions
100
+ - Screen reader flow tests
101
+
102
+ Still verify with real assistive tech; AI can miss nuance.
103
+
104
+ ## Workflow: Human + AI
105
+
106
+ ```mermaid
107
+ flowchart LR
108
+ A[You: function + context] --> B[AI: draft tests]
109
+ B --> C[You: run + review]
110
+ C --> D{Good enough?}
111
+ D -->|No| E[You: refine prompt]
112
+ E --> B
113
+ D -->|Yes| F[You: commit]
114
+ ```
115
+
116
+ ---
117
+
118
+ ## Key Takeaways
119
+
120
+ 1. **AI drafts, you direct** — Provide context, examples, constraints.
121
+ 2. **Review critically** — Shallow assertions and over-mocking are common.
122
+ 3. **Use AI for boilerplate and suggestions** — Not for deciding what matters.
123
+ 4. **Iterate** — Refine prompts based on output quality.
124
+ 5. **Combine layers** — AI for unit draft; human for integration/e2e strategy.
125
+ 6. **Test data and accessibility** — AI can help; human verification still essential.