@shaykec/bridge 0.4.25 → 0.4.26
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/journeys/ai-engineer.yaml +34 -0
- package/journeys/backend-developer.yaml +36 -0
- package/journeys/business-analyst.yaml +37 -0
- package/journeys/devops-engineer.yaml +37 -0
- package/journeys/engineering-manager.yaml +44 -0
- package/journeys/frontend-developer.yaml +41 -0
- package/journeys/fullstack-developer.yaml +49 -0
- package/journeys/mobile-developer.yaml +42 -0
- package/journeys/product-manager.yaml +35 -0
- package/journeys/qa-engineer.yaml +37 -0
- package/journeys/ux-designer.yaml +43 -0
- package/modules/README.md +52 -0
- package/modules/accessibility-fundamentals/content.md +126 -0
- package/modules/accessibility-fundamentals/exercises.md +88 -0
- package/modules/accessibility-fundamentals/module.yaml +43 -0
- package/modules/accessibility-fundamentals/quick-ref.md +71 -0
- package/modules/accessibility-fundamentals/quiz.md +100 -0
- package/modules/accessibility-fundamentals/resources.md +29 -0
- package/modules/accessibility-fundamentals/walkthrough.md +80 -0
- package/modules/adr-writing/content.md +121 -0
- package/modules/adr-writing/exercises.md +81 -0
- package/modules/adr-writing/module.yaml +41 -0
- package/modules/adr-writing/quick-ref.md +57 -0
- package/modules/adr-writing/quiz.md +73 -0
- package/modules/adr-writing/resources.md +29 -0
- package/modules/adr-writing/walkthrough.md +64 -0
- package/modules/ai-agents/content.md +120 -0
- package/modules/ai-agents/exercises.md +82 -0
- package/modules/ai-agents/module.yaml +42 -0
- package/modules/ai-agents/quick-ref.md +60 -0
- package/modules/ai-agents/quiz.md +103 -0
- package/modules/ai-agents/resources.md +30 -0
- package/modules/ai-agents/walkthrough.md +85 -0
- package/modules/ai-assisted-research/content.md +136 -0
- package/modules/ai-assisted-research/exercises.md +80 -0
- package/modules/ai-assisted-research/module.yaml +42 -0
- package/modules/ai-assisted-research/quick-ref.md +67 -0
- package/modules/ai-assisted-research/quiz.md +73 -0
- package/modules/ai-assisted-research/resources.md +33 -0
- package/modules/ai-assisted-research/walkthrough.md +85 -0
- package/modules/ai-pair-programming/content.md +105 -0
- package/modules/ai-pair-programming/exercises.md +98 -0
- package/modules/ai-pair-programming/module.yaml +39 -0
- package/modules/ai-pair-programming/quick-ref.md +58 -0
- package/modules/ai-pair-programming/quiz.md +73 -0
- package/modules/ai-pair-programming/resources.md +34 -0
- package/modules/ai-pair-programming/walkthrough.md +117 -0
- package/modules/ai-test-generation/content.md +125 -0
- package/modules/ai-test-generation/exercises.md +98 -0
- package/modules/ai-test-generation/module.yaml +39 -0
- package/modules/ai-test-generation/quick-ref.md +65 -0
- package/modules/ai-test-generation/quiz.md +74 -0
- package/modules/ai-test-generation/resources.md +41 -0
- package/modules/ai-test-generation/walkthrough.md +100 -0
- package/modules/api-design/content.md +189 -0
- package/modules/api-design/exercises.md +84 -0
- package/modules/api-design/game.yaml +113 -0
- package/modules/api-design/module.yaml +45 -0
- package/modules/api-design/quick-ref.md +73 -0
- package/modules/api-design/quiz.md +100 -0
- package/modules/api-design/resources.md +55 -0
- package/modules/api-design/walkthrough.md +88 -0
- package/modules/clean-code/content.md +136 -0
- package/modules/clean-code/exercises.md +137 -0
- package/modules/clean-code/game.yaml +172 -0
- package/modules/clean-code/module.yaml +44 -0
- package/modules/clean-code/quick-ref.md +44 -0
- package/modules/clean-code/quiz.md +105 -0
- package/modules/clean-code/resources.md +40 -0
- package/modules/clean-code/walkthrough.md +78 -0
- package/modules/clean-code/workshop.yaml +149 -0
- package/modules/code-review/content.md +130 -0
- package/modules/code-review/exercises.md +95 -0
- package/modules/code-review/game.yaml +83 -0
- package/modules/code-review/module.yaml +42 -0
- package/modules/code-review/quick-ref.md +77 -0
- package/modules/code-review/quiz.md +105 -0
- package/modules/code-review/resources.md +40 -0
- package/modules/code-review/walkthrough.md +106 -0
- package/modules/daily-workflow/content.md +81 -0
- package/modules/daily-workflow/exercises.md +50 -0
- package/modules/daily-workflow/module.yaml +33 -0
- package/modules/daily-workflow/quick-ref.md +37 -0
- package/modules/daily-workflow/quiz.md +65 -0
- package/modules/daily-workflow/resources.md +38 -0
- package/modules/daily-workflow/walkthrough.md +83 -0
- package/modules/debugging-systematically/content.md +139 -0
- package/modules/debugging-systematically/exercises.md +91 -0
- package/modules/debugging-systematically/module.yaml +46 -0
- package/modules/debugging-systematically/quick-ref.md +59 -0
- package/modules/debugging-systematically/quiz.md +105 -0
- package/modules/debugging-systematically/resources.md +42 -0
- package/modules/debugging-systematically/walkthrough.md +84 -0
- package/modules/debugging-systematically/workshop.yaml +127 -0
- package/modules/demo-test/content.md +68 -0
- package/modules/demo-test/exercises.md +28 -0
- package/modules/demo-test/game.yaml +171 -0
- package/modules/demo-test/module.yaml +41 -0
- package/modules/demo-test/quick-ref.md +54 -0
- package/modules/demo-test/quiz.md +74 -0
- package/modules/demo-test/resources.md +21 -0
- package/modules/demo-test/walkthrough.md +122 -0
- package/modules/demo-test/workshop.yaml +31 -0
- package/modules/design-critique/content.md +93 -0
- package/modules/design-critique/exercises.md +71 -0
- package/modules/design-critique/module.yaml +41 -0
- package/modules/design-critique/quick-ref.md +63 -0
- package/modules/design-critique/quiz.md +73 -0
- package/modules/design-critique/resources.md +27 -0
- package/modules/design-critique/walkthrough.md +68 -0
- package/modules/design-patterns/content.md +335 -0
- package/modules/design-patterns/exercises.md +82 -0
- package/modules/design-patterns/game.yaml +55 -0
- package/modules/design-patterns/module.yaml +45 -0
- package/modules/design-patterns/quick-ref.md +44 -0
- package/modules/design-patterns/quiz.md +101 -0
- package/modules/design-patterns/resources.md +40 -0
- package/modules/design-patterns/walkthrough.md +64 -0
- package/modules/exploratory-testing/content.md +133 -0
- package/modules/exploratory-testing/exercises.md +88 -0
- package/modules/exploratory-testing/module.yaml +41 -0
- package/modules/exploratory-testing/quick-ref.md +68 -0
- package/modules/exploratory-testing/quiz.md +75 -0
- package/modules/exploratory-testing/resources.md +39 -0
- package/modules/exploratory-testing/walkthrough.md +87 -0
- package/modules/git/content.md +128 -0
- package/modules/git/exercises.md +53 -0
- package/modules/git/game.yaml +190 -0
- package/modules/git/module.yaml +44 -0
- package/modules/git/quick-ref.md +67 -0
- package/modules/git/quiz.md +89 -0
- package/modules/git/resources.md +49 -0
- package/modules/git/walkthrough.md +92 -0
- package/modules/git/workshop.yaml +145 -0
- package/modules/hiring-interviews/content.md +130 -0
- package/modules/hiring-interviews/exercises.md +88 -0
- package/modules/hiring-interviews/module.yaml +41 -0
- package/modules/hiring-interviews/quick-ref.md +68 -0
- package/modules/hiring-interviews/quiz.md +73 -0
- package/modules/hiring-interviews/resources.md +36 -0
- package/modules/hiring-interviews/walkthrough.md +75 -0
- package/modules/hooks/content.md +97 -0
- package/modules/hooks/exercises.md +69 -0
- package/modules/hooks/module.yaml +39 -0
- package/modules/hooks/quick-ref.md +93 -0
- package/modules/hooks/quiz.md +81 -0
- package/modules/hooks/resources.md +34 -0
- package/modules/hooks/walkthrough.md +105 -0
- package/modules/hooks/workshop.yaml +64 -0
- package/modules/incident-response/content.md +124 -0
- package/modules/incident-response/exercises.md +82 -0
- package/modules/incident-response/game.yaml +132 -0
- package/modules/incident-response/module.yaml +45 -0
- package/modules/incident-response/quick-ref.md +53 -0
- package/modules/incident-response/quiz.md +103 -0
- package/modules/incident-response/resources.md +40 -0
- package/modules/incident-response/walkthrough.md +82 -0
- package/modules/llm-fundamentals/content.md +114 -0
- package/modules/llm-fundamentals/exercises.md +83 -0
- package/modules/llm-fundamentals/module.yaml +42 -0
- package/modules/llm-fundamentals/quick-ref.md +64 -0
- package/modules/llm-fundamentals/quiz.md +103 -0
- package/modules/llm-fundamentals/resources.md +30 -0
- package/modules/llm-fundamentals/walkthrough.md +91 -0
- package/modules/one-on-ones/content.md +133 -0
- package/modules/one-on-ones/exercises.md +81 -0
- package/modules/one-on-ones/module.yaml +44 -0
- package/modules/one-on-ones/quick-ref.md +67 -0
- package/modules/one-on-ones/quiz.md +73 -0
- package/modules/one-on-ones/resources.md +37 -0
- package/modules/one-on-ones/walkthrough.md +69 -0
- package/modules/package.json +9 -0
- package/modules/prioritization-frameworks/content.md +130 -0
- package/modules/prioritization-frameworks/exercises.md +93 -0
- package/modules/prioritization-frameworks/module.yaml +41 -0
- package/modules/prioritization-frameworks/quick-ref.md +77 -0
- package/modules/prioritization-frameworks/quiz.md +73 -0
- package/modules/prioritization-frameworks/resources.md +32 -0
- package/modules/prioritization-frameworks/walkthrough.md +69 -0
- package/modules/prompt-engineering/content.md +123 -0
- package/modules/prompt-engineering/exercises.md +82 -0
- package/modules/prompt-engineering/game.yaml +101 -0
- package/modules/prompt-engineering/module.yaml +45 -0
- package/modules/prompt-engineering/quick-ref.md +65 -0
- package/modules/prompt-engineering/quiz.md +105 -0
- package/modules/prompt-engineering/resources.md +36 -0
- package/modules/prompt-engineering/walkthrough.md +81 -0
- package/modules/rag-fundamentals/content.md +111 -0
- package/modules/rag-fundamentals/exercises.md +80 -0
- package/modules/rag-fundamentals/module.yaml +45 -0
- package/modules/rag-fundamentals/quick-ref.md +58 -0
- package/modules/rag-fundamentals/quiz.md +75 -0
- package/modules/rag-fundamentals/resources.md +34 -0
- package/modules/rag-fundamentals/walkthrough.md +75 -0
- package/modules/react-fundamentals/content.md +140 -0
- package/modules/react-fundamentals/exercises.md +81 -0
- package/modules/react-fundamentals/game.yaml +145 -0
- package/modules/react-fundamentals/module.yaml +45 -0
- package/modules/react-fundamentals/quick-ref.md +62 -0
- package/modules/react-fundamentals/quiz.md +106 -0
- package/modules/react-fundamentals/resources.md +42 -0
- package/modules/react-fundamentals/walkthrough.md +89 -0
- package/modules/react-fundamentals/workshop.yaml +112 -0
- package/modules/react-native-fundamentals/content.md +141 -0
- package/modules/react-native-fundamentals/exercises.md +79 -0
- package/modules/react-native-fundamentals/module.yaml +42 -0
- package/modules/react-native-fundamentals/quick-ref.md +60 -0
- package/modules/react-native-fundamentals/quiz.md +61 -0
- package/modules/react-native-fundamentals/resources.md +24 -0
- package/modules/react-native-fundamentals/walkthrough.md +84 -0
- package/modules/registry.yaml +1650 -0
- package/modules/risk-management/content.md +162 -0
- package/modules/risk-management/exercises.md +86 -0
- package/modules/risk-management/module.yaml +41 -0
- package/modules/risk-management/quick-ref.md +82 -0
- package/modules/risk-management/quiz.md +73 -0
- package/modules/risk-management/resources.md +40 -0
- package/modules/risk-management/walkthrough.md +67 -0
- package/modules/running-effective-standups/content.md +119 -0
- package/modules/running-effective-standups/exercises.md +79 -0
- package/modules/running-effective-standups/module.yaml +40 -0
- package/modules/running-effective-standups/quick-ref.md +61 -0
- package/modules/running-effective-standups/quiz.md +73 -0
- package/modules/running-effective-standups/resources.md +36 -0
- package/modules/running-effective-standups/walkthrough.md +76 -0
- package/modules/solid-principles/content.md +154 -0
- package/modules/solid-principles/exercises.md +107 -0
- package/modules/solid-principles/module.yaml +42 -0
- package/modules/solid-principles/quick-ref.md +50 -0
- package/modules/solid-principles/quiz.md +102 -0
- package/modules/solid-principles/resources.md +39 -0
- package/modules/solid-principles/walkthrough.md +84 -0
- package/modules/sprint-planning/content.md +142 -0
- package/modules/sprint-planning/exercises.md +79 -0
- package/modules/sprint-planning/game.yaml +84 -0
- package/modules/sprint-planning/module.yaml +44 -0
- package/modules/sprint-planning/quick-ref.md +76 -0
- package/modules/sprint-planning/quiz.md +102 -0
- package/modules/sprint-planning/resources.md +39 -0
- package/modules/sprint-planning/walkthrough.md +75 -0
- package/modules/sql-fundamentals/content.md +160 -0
- package/modules/sql-fundamentals/exercises.md +87 -0
- package/modules/sql-fundamentals/game.yaml +105 -0
- package/modules/sql-fundamentals/module.yaml +45 -0
- package/modules/sql-fundamentals/quick-ref.md +53 -0
- package/modules/sql-fundamentals/quiz.md +103 -0
- package/modules/sql-fundamentals/resources.md +42 -0
- package/modules/sql-fundamentals/walkthrough.md +92 -0
- package/modules/sql-fundamentals/workshop.yaml +109 -0
- package/modules/stakeholder-communication/content.md +186 -0
- package/modules/stakeholder-communication/exercises.md +87 -0
- package/modules/stakeholder-communication/module.yaml +38 -0
- package/modules/stakeholder-communication/quick-ref.md +89 -0
- package/modules/stakeholder-communication/quiz.md +73 -0
- package/modules/stakeholder-communication/resources.md +41 -0
- package/modules/stakeholder-communication/walkthrough.md +74 -0
- package/modules/system-design/content.md +149 -0
- package/modules/system-design/exercises.md +83 -0
- package/modules/system-design/game.yaml +95 -0
- package/modules/system-design/module.yaml +46 -0
- package/modules/system-design/quick-ref.md +59 -0
- package/modules/system-design/quiz.md +102 -0
- package/modules/system-design/resources.md +46 -0
- package/modules/system-design/walkthrough.md +90 -0
- package/modules/team-topologies/content.md +166 -0
- package/modules/team-topologies/exercises.md +85 -0
- package/modules/team-topologies/module.yaml +41 -0
- package/modules/team-topologies/quick-ref.md +61 -0
- package/modules/team-topologies/quiz.md +101 -0
- package/modules/team-topologies/resources.md +37 -0
- package/modules/team-topologies/walkthrough.md +76 -0
- package/modules/technical-debt/content.md +111 -0
- package/modules/technical-debt/exercises.md +92 -0
- package/modules/technical-debt/module.yaml +39 -0
- package/modules/technical-debt/quick-ref.md +60 -0
- package/modules/technical-debt/quiz.md +73 -0
- package/modules/technical-debt/resources.md +25 -0
- package/modules/technical-debt/walkthrough.md +94 -0
- package/modules/technical-mentoring/content.md +128 -0
- package/modules/technical-mentoring/exercises.md +84 -0
- package/modules/technical-mentoring/module.yaml +41 -0
- package/modules/technical-mentoring/quick-ref.md +74 -0
- package/modules/technical-mentoring/quiz.md +73 -0
- package/modules/technical-mentoring/resources.md +33 -0
- package/modules/technical-mentoring/walkthrough.md +65 -0
- package/modules/test-strategy/content.md +136 -0
- package/modules/test-strategy/exercises.md +84 -0
- package/modules/test-strategy/game.yaml +99 -0
- package/modules/test-strategy/module.yaml +45 -0
- package/modules/test-strategy/quick-ref.md +66 -0
- package/modules/test-strategy/quiz.md +99 -0
- package/modules/test-strategy/resources.md +60 -0
- package/modules/test-strategy/walkthrough.md +97 -0
- package/modules/test-strategy/workshop.yaml +96 -0
- package/modules/typescript-fundamentals/content.md +127 -0
- package/modules/typescript-fundamentals/exercises.md +79 -0
- package/modules/typescript-fundamentals/game.yaml +111 -0
- package/modules/typescript-fundamentals/module.yaml +45 -0
- package/modules/typescript-fundamentals/quick-ref.md +55 -0
- package/modules/typescript-fundamentals/quiz.md +104 -0
- package/modules/typescript-fundamentals/resources.md +42 -0
- package/modules/typescript-fundamentals/walkthrough.md +71 -0
- package/modules/typescript-fundamentals/workshop.yaml +146 -0
- package/modules/user-story-mapping/content.md +123 -0
- package/modules/user-story-mapping/exercises.md +87 -0
- package/modules/user-story-mapping/module.yaml +41 -0
- package/modules/user-story-mapping/quick-ref.md +64 -0
- package/modules/user-story-mapping/quiz.md +73 -0
- package/modules/user-story-mapping/resources.md +29 -0
- package/modules/user-story-mapping/walkthrough.md +86 -0
- package/modules/writing-prds/content.md +133 -0
- package/modules/writing-prds/exercises.md +93 -0
- package/modules/writing-prds/game.yaml +83 -0
- package/modules/writing-prds/module.yaml +44 -0
- package/modules/writing-prds/quick-ref.md +77 -0
- package/modules/writing-prds/quiz.md +103 -0
- package/modules/writing-prds/resources.md +30 -0
- package/modules/writing-prds/walkthrough.md +87 -0
- package/package.json +1 -1

package/modules/code-review/resources.md
@@ -0,0 +1,40 @@

# Code Review — Resources

## Videos

- [Code Review Best Practices](https://www.youtube.com/watch?v=0l_7Fy5_wfc) — Google Developers, ~20 min. Covers what reviewers look for, CL author tips, and reviewer guidance from Google's eng-practices.
- [How to Give and Receive Code Reviews](https://www.youtube.com/watch?v=a9_2vv7C1_Y) — GOTO Conferences, ~45 min. Practical advice on feedback tone, prioritization, and team dynamics.

## Articles and Readings

- [Google Engineering Practices: Code Review](https://google.github.io/eng-practices/review/) — Google. Canonical guide: what to look for, CL author guide, reviewer guide. Key takeaway: correctness and design first; nits are optional.
- [The CL Author's Guide](https://google.github.io/eng-practices/review/developer/) — Google. How to write good descriptions, scope PRs, and respond to feedback.
- [How to Do a Code Review](https://google.github.io/eng-practices/review/reviewer/) — Google. Reviewer checklist, picking reviewers, in-person reviews.
- [Code Review at Microsoft](https://docs.microsoft.com/en-us/azure/devops/repos/git/pull-request-overview) — Microsoft. PR workflow and review practices in Azure DevOps.

## Books

- **The Clean Coder** by Robert C. Martin — Chapter on professional collaboration and accepting feedback. Why code review is part of professionalism.
- **Software Team Lead's Handbook** by Christen Gilly — Sections on code review as a knowledge-sharing and quality mechanism.

## Tools and Playgrounds

- [GitHub Pull Requests](https://docs.github.com/en/pull-requests) — Native PR and review flow. Inline comments, suggestions, approval workflows.
- [GitLab Merge Requests](https://docs.gitlab.com/ee/user/project/merge_requests/) — MR lifecycle, approval rules, code review features.
- [Gerrit Code Review](https://www.gerritcodereview.com/) — Patch-based review used by many large open-source projects. Strong for strict workflows.

## Podcasts

- [Software Engineering Radio — Code Review](https://www.se-radio.net/) — Episodes on review culture and process.
- [Changelog — Code Review Culture](https://changelog.com/podcast) — How top teams build effective review practices.
- [Developer Tea — Giving Feedback](https://developertea.com/) — Short episodes on communication and feedback skills essential for reviews.

## Interactive and Visual

- [Conventional Comments](https://conventionalcomments.org/) — Labeling system for review comments (praise, suggestion, nitpick, etc.) with examples.
- [ReviewBoard](https://www.reviewboard.org/) — Open-source code review tool with visual diff and comment threading.

## Courses

- [Google Engineering Practices — Full Guide (free)](https://google.github.io/eng-practices/) — Complete guide to Google's code review standards, reviewer and author guides.
- [Pluralsight — Code Review Best Practices](https://www.pluralsight.com/) — Structured course on effective code reviews.
package/modules/code-review/walkthrough.md
@@ -0,0 +1,106 @@

# Code Review Walkthrough — Learn by Doing

## Before We Begin

Code review works best when we treat feedback as a collaborative improvement, not a judgment. The goal is better code and shared learning — not winning an argument.

**Diagnostic question:** When you've given or received code feedback, what made it feel helpful? What made it feel adversarial?

**Checkpoint:** You can name one behavior that makes feedback land well and one that makes it land poorly.

---

## Step 1: Prepare Your Mindset

Before reviewing or being reviewed, set the right frame.

**Task:** Think of the last time someone critiqued your work (code or otherwise). How did you feel? What made it easier or harder to accept?

**Question:** Why do you think code review comments can feel personal, even when they're about the code? What's one way to mentally separate feedback from identity?

**Checkpoint:** The user should recognize that feedback targets the work, not the person, and that separating the two reduces defensiveness.

---

## Step 2: Write a PR Description

Good descriptions make reviews faster and more accurate.

**Task:** Pick a small change you've made recently (or imagine one). Write a PR description with "What", "Why", and "How" sections. Keep it under 10 lines.

**Question:** What information would a reviewer need that isn't in the diff itself? Why does "Why" matter as much as "What"?

**Checkpoint:** The user should include context (what problem it solves), rationale (why this approach), and enough technical detail for a reviewer to verify the change.
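As a sketch of the shape the task asks for, a description for a hypothetical retry change might look like this (the change itself is invented for illustration):

```
## What
Add retry with exponential backoff to the payment client.

## Why
Transient network errors currently fail the whole checkout; retrying
a few times resolves most of them without user impact.

## How
Wrapped the HTTP call in a retry helper (max 3 attempts, 200ms base
delay). No API changes; added unit tests for the backoff timing.
```

Note that the diff alone would show the retry helper but not the checkout-failure motivation — that context is what the description adds.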
---

## Step 3: Turn a Vague Comment into a Good One

Practice giving specific, constructive feedback.

<!-- hint:code language="javascript" highlight="2,4" -->

**Task:** You see this code in a PR:

```javascript
function findUser(id) {
  for (let i = 0; i < users.length; i++) {
    if (users[i].id === id) return users[i];
  }
  return null;
}
```

Write three possible review comments: one vague, one specific but unhelpful, one specific and constructive (with a suggestion).

**Question:** What makes the third comment more useful? How would it change the author's next steps?

**Checkpoint:** The user produces a comment that explains the issue, why it matters, and suggests an improvement (e.g., `find()` or a Map for repeated lookups).
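For reference after writing your comments, the improvements hinted at in the checkpoint can be sketched like this (here `users` is taken as a parameter rather than captured from an enclosing scope, and `makeUserIndex` is an illustrative helper name):

```javascript
// One-off lookup: Array.prototype.find states the intent directly.
function findUser(users, id) {
  return users.find((u) => u.id === id) ?? null;
}

// Repeated lookups: index the array once in a Map for O(1) access per call.
function makeUserIndex(users) {
  const byId = new Map(users.map((u) => [u.id, u]));
  return (id) => byId.get(id) ?? null;
}
```

A constructive comment would pair one of these with the reason: `find()` for readability on a single lookup, the Map when the function is called in a loop.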
---

## Step 4: Prioritize Feedback

Not all feedback is equal.

<!-- hint:buttons type="single" prompt="What's the priority of 'API key hardcoded'?" options="Blocking,Should fix,Nit" -->

**Task:** Given these hypothetical review comments, classify each as "blocking", "should fix", or "nit":

1. "There's a potential null reference on line 42 when `response` is undefined."
2. "Consider renaming `x` to `userCount` for clarity."
3. "This API key is hardcoded; it should come from env."
4. "I'd use `forEach` instead of a `for` loop here."

**Question:** What criteria help you decide whether something blocks the merge or is optional?

**Checkpoint:** The user correctly prioritizes: (1) blocking, (2) nit, (3) blocking, (4) nit — and can explain why security and correctness trump style.

---

## Step 5: Respond to Feedback

Practice receiving feedback gracefully.

**Task:** Imagine a reviewer wrote: "This function is too long and does too much. Hard to test."

Write two responses: one defensive, one constructive. The constructive response should either (a) ask a clarifying question, or (b) propose a concrete next step.

**Question:** What does a "constructive" response accomplish that a defensive one doesn't?

**Checkpoint:** The user's response acknowledges the concern and either asks for specifics or proposes refactoring into smaller, testable functions.

---

## Step 6: Build a Review Checklist

Make reviews consistent and thorough.

<!-- hint:list style="cards" -->
<!-- hint:card type="tip" title="Best practices: correctness first, then tests, then readability" -->

**Task:** Create a personal checklist of 5–7 items you'll check on every PR you review. Mix correctness, readability, and process (e.g., "PR has tests").

**Question:** How might a checklist change your review behavior? What might you overlook without one?

**Checkpoint:** The user's checklist covers at least correctness, tests, security considerations, and readability. They can explain why each item matters.
package/modules/daily-workflow/content.md
@@ -0,0 +1,81 @@

# A Day with Claude Code

> **Level: 🌱 Beginner** | *Story*

## 8:30 AM — Starting the Day

Maya opens her terminal and launches Claude Code in her team's monorepo. First thing she does every morning:

```
> /teach:stats
```

Her dashboard shows she's at Green Belt (180 XP) with 6 modules completed. She notices she's 220 XP away from Blue Belt. The level-up command suggests trying the "sub-agents" module next.

But first, work.

## 9:00 AM — Reviewing a PR

A teammate's PR came in overnight. Maya asks Claude to help review it:

```
> Review PR #347 — focus on error handling and edge cases
```

Claude reads the diff, identifies a potential null pointer in the payment handler, and suggests a fix. Maya copies the feedback into the PR review. That interaction earns her a few bonus XP from the hooks tracking her tool usage.

## 10:30 AM — Debugging a Production Issue

An alert fires. The checkout flow is returning 500 errors for a subset of users. Maya opens Claude Code in the service repo:

```
> There's a 500 error in the checkout flow for users with expired promo codes.
> Help me trace the issue.
```

Claude reads the error logs, follows the code path from the API endpoint through the promo code validator, and finds the bug: the validator throws an unhandled exception when the promo code's expiry date is null (legacy data).

Maya fixes it with a null check, Claude helps write the test, and the hotfix ships by lunch.

## 1:00 PM — Learning Something New

After lunch, Maya has a 30-minute learning slot. She decides to try the hooks module:

```
> /teach hooks
```

The Socratic tutor walks her through lifecycle events, asking questions like "What do you think happens when a PostToolUse hook fails?" Instead of giving her the answer, it lets her reason about it — and she discovers that hooks should fail gracefully because they run synchronously.
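That graceful-failure lesson shows up directly in how hooks are registered. As an illustrative sketch (the matcher and command are made up; check the hooks schema in your own Claude Code settings), a PostToolUse hook that never blocks the session might look like:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm run lint --silent || true"
          }
        ]
      }
    ]
  }
}
```

The trailing `|| true` is the graceful part: a failing lint run is still visible in the output, but its nonzero exit code never interrupts the tool call that triggered it.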
She gets through 4 of 7 walkthrough steps before her next meeting. Progress is saved automatically.

## 3:00 PM — Building a Feature

Back to feature work. Maya's building a new notification preferences page:

```
> I need a notification preferences page. Users should be able to toggle
> email, push, and SMS notifications independently for each notification type
> (marketing, transactional, security).
```

Claude scaffolds the component, creates the API endpoints, and writes the database migration. Maya reviews each piece, asks questions about the design choices, and makes a few adjustments.

## 4:30 PM — Wrapping Up

Before she leaves, Maya checks her stats one more time:

```
> /teach:stats
```

She earned 35 XP today — 15 from the hooks module walkthrough and 20 bonus from active tool usage (the production debugging session hit several categories). She's now at 215 XP, getting closer to Blue Belt.

Tomorrow she'll finish the hooks walkthrough and maybe start on sub-agents.

## What Made This Day Productive

1. **Real work and learning blend together** — the production debugging wasn't a tutorial, but the hooks module directly helped Maya understand the tooling better.
2. **Progress is visible** — XP and belts provide a tangible sense of growth.
3. **Learning fits into small gaps** — 30 minutes with the Socratic tutor is enough to make real progress.
4. **The AI adapts** — during the feature build, Claude is a productivity tool. During the learning slot, it switches to teaching mode.
@@ -0,0 +1,50 @@
# A Day with Claude Code — Exercises

## Exercise 1: Set Up a Claude Code Session and Check Stats

**Task:** Start a Claude Code session in a project (or a test directory). Run the stats command to see your current XP and belt. Note how far you are from the next belt and what the level-up suggestion recommends.

**Validation:**
- [ ] Claude Code session started successfully
- [ ] Stats command executed (e.g., `/teach:stats` or equivalent)
- [ ] You can state your current XP and belt level
- [ ] You can identify at least one suggested next step

**Hints:**
1. Ensure Claude Code is installed and configured for your workspace
2. Stats may be available via a plugin command or slash command — check your setup
3. If gamification isn't configured, the exercise still counts if you successfully ran the session

---

## Exercise 2: Use AI to Review a Real PR

**Task:** Pick a real pull request — your own, a teammate's, or from an open-source repo you have cloned. Ask Claude to review it with a specific focus (e.g., error handling, security, readability). Incorporate at least one piece of Claude's feedback into your understanding or a comment.

**Validation:**
- [ ] PR identified and context provided to Claude
- [ ] Review requested with a clear focus area
- [ ] Claude's feedback received and understood
- [ ] At least one actionable insight applied (comment written, or mental note for future PRs)

**Hints:**
1. Smaller PRs (under ~400 lines) work best for focused review
2. Specify the focus: "error handling," "edge cases," "naming" — it improves the quality of feedback
3. You can copy Claude's suggestions into GitHub/GitLab review comments

---

## Exercise 3: Use AI to Debug an Issue

**Task:** Find or create a bug — a failing test, a runtime error, or unexpected behavior. Describe it to Claude with relevant context (error message, stack trace, steps to reproduce). Work through Claude's suggested investigation steps, identify the root cause, and apply a fix.

**Validation:**
- [ ] Bug described with sufficient context (error, logs, or reproduction steps)
- [ ] Investigation followed (code path traced, hypothesis formed)
- [ ] Root cause identified
- [ ] Fix applied and verified (test passes or error resolved)

**Hints:**
1. Start with the error message — paste it verbatim
2. If Claude suggests a hypothesis, try it (add a null check, add a test) to validate it
3. Writing a test for the fix (with Claude's help) reinforces the debugging session
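The "paste it verbatim" hint can be made mechanical. A minimal sketch in the module's bash playground; the error message, file location, and reproduction steps are hypothetical placeholders, so substitute your own before pasting the result into your Claude session:

```shell
# Bundle the context Claude needs into one paste-able bug report.
# All values below are hypothetical placeholders.
error_msg="TypeError: Cannot read properties of undefined (reading 'total')"
location="src/checkout/cart.js:42"
repro="1. Add an item to the cart
2. Remove it
3. Open the order summary"

# Print a single report block: error first, then where, then how to reproduce.
printf '## Bug report\nError: %s\nWhere: %s\nSteps to reproduce:\n%s\n' \
  "$error_msg" "$location" "$repro"
```

Leading with the exact error text, then the location and reproduction steps, mirrors the validation checklist above: it gives the assistant a traceable starting point instead of a vague description.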
@@ -0,0 +1,33 @@
slug: daily-workflow
title: "A Day with Claude Code"
version: 1.0.0
description: "Story: follow a developer through a typical day using Claude Code for real tasks."
category: claude-code
tags: [claude-code, workflow, story, productivity]
difficulty: beginner
narrative: true

xp:
  read: 10
  walkthrough: 30
  exercise: 20
  quiz: 15
  quiz-perfect-bonus: 10

time:
  quick: 5
  read: 10
  guided: 45

prerequisites: []
related: [hooks, skills, sub-agents]

visuals:
  diagrams: [diagram-flow, diagram-mermaid]
  quiz-types: [quiz-matching, quiz-timed-choice]
  playground: bash

triggers:
  - "What does a typical day with Claude Code look like?"
  - "How do people actually use Claude Code?"
  - "Show me a real workflow with Claude Code"
@@ -0,0 +1,37 @@
# A Day with Claude Code — Quick Reference

## Claude Code Commands

| Command | Purpose |
|---|---|
| `/teach:stats` | View XP, belt level, and suggested next modules |
| `/teach:level-up` | See belt roadmap and personalized suggestions |
| `/teach <module>` | Start Socratic teaching for a module (e.g., `hooks`) |
| `/teach:canvas` | Open the visual canvas (diagrams, quizzes) |
| `/teach:learn <topic>` | Free-style learning — generate a module on any topic |

## Workflow Patterns

| Scenario | Approach |
|---|---|
| **Morning** | Run stats, note current belt and suggested next module |
| **PR review** | "Review PR #N — focus on error handling, security, or readability" |
| **Debugging** | Provide error, logs, steps to reproduce; ask Claude to trace and suggest fixes |
| **Learning slot** | `/teach <module>` — 30 min is enough for 3–4 walkthrough steps |
| **Feature build** | Describe feature in natural language; review each generated piece |
| **End of day** | Run stats again; reflect on XP earned from work vs. learning |

## XP Sources

| Source | Example |
|---|---|
| **Module completion** | Finishing walkthrough steps, passing quizzes |
| **Tool usage** | Production debugging, PR review, code generation |
| **Games & exercises** | Interactive exercises, workshop scenarios |

## Key Principles

- **Blend work and learning** — Use Claude for real tasks and for teaching in the same day
- **Progress is visible** — XP and belts make growth tangible
- **Learning fits gaps** — Short sessions (e.g., 30 min) are effective
- **AI adapts to context** — Productivity mode for features; teaching mode for `/teach`
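The PR review pattern above boils down to composing a focused prompt rather than a bare "review this." A sketch in the module's bash playground; the PR number (347, taken from this module's quiz) and the focus bullets are example values, not fixed syntax:

```shell
# Compose a focused review request with an explicit focus area.
# PR number and focus bullets are example values.
pr=347
focus="error handling"

prompt="Review PR #${pr}. Focus on ${focus}:
- unhandled errors and swallowed exceptions
- missing input validation at the boundaries
Keep feedback specific to the changed lines."

# Print the prompt, ready to paste into the session.
printf '%s\n' "$prompt"
```

Naming the focus area up front is what the quick reference recommends: it narrows the review to the dimension you care about and noticeably improves the feedback.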
@@ -0,0 +1,65 @@
# A Day with Claude Code — Quiz

## Question 1

What does `/teach:stats` show you?

A) Only your completed modules
B) XP, belt level, and suggestions for what to learn next
C) A list of all available modules
D) Your recent chat history

<!-- ANSWER: B -->
<!-- EXPLANATION: The stats dashboard displays your current XP total, your belt level (e.g., Green Belt), how far you are from the next belt, and personalized suggestions like "try the sub-agents module next." -->

## Question 2

<!-- VISUAL: matching -->

Match the scenario to the Claude Code feature that helps:

| Scenario | Feature |
|----------|---------|
| A) Teammate's PR needs review | 1) `/teach:stats` |
| B) Want to see learning progress | 2) Natural language PR review request |
| C) Need to learn hooks | 3) `/teach <module>` |
| D) Production 500 error | 4) Debugging prompt with logs and context |

<!-- ANSWER: A-2, B-1, C-3, D-4 -->
<!-- EXPLANATION: PR review uses natural language ("Review PR #347 — focus on error handling"). Stats uses /teach:stats. Learning uses /teach <module>. Debugging uses descriptive prompts with context. -->

## Question 3

Why might checking stats in the morning and again at the end of the day be useful?

A) It's required for Claude Code to work
B) It makes progress visible and reinforces productive habits
C) It resets your XP for the day
D) It syncs with team dashboards

<!-- ANSWER: B -->
<!-- EXPLANATION: Visible progress (XP gains, belt advancement) reinforces the blend of real work and learning. Seeing "I earned 35 XP today" makes both productive sessions and learning sessions feel tangible. -->

## Question 4

During a debugging session, what does the human typically provide that Claude needs?

A) Nothing — Claude reads everything automatically
B) Error logs, stack traces, and domain context
C) Only the file path to the bug
D) A pre-written fix to apply

<!-- ANSWER: B -->
<!-- EXPLANATION: Claude needs context to debug effectively: error messages, stack traces, which users are affected, and domain knowledge (e.g., "users with expired promo codes"). The human supplies this; Claude traces the code path and suggests fixes. -->

## Question 5

What is the main benefit of Socratic teaching (asking questions instead of giving answers) in a learning session?

A) It's faster than reading documentation
B) It forces active reasoning, which improves retention
C) It reduces the number of modules available
D) It requires less setup

<!-- ANSWER: B -->
<!-- EXPLANATION: When you reason through "What do you think happens when a PostToolUse hook fails?" instead of being told the answer, you discover it yourself. That active engagement leads to better retention than passive reading. -->
@@ -0,0 +1,38 @@
# A Day with Claude Code — Resources

## Official Docs

- [Claude Code Documentation](https://docs.anthropic.com/en/docs/claude-code) — Setup, commands, and configuration.
- [Claude Code Best Practices](https://docs.anthropic.com/en/docs/claude-code/best-practices) — Prompt tips and workflow patterns for productive sessions.
- [Anthropic Cookbook](https://github.com/anthropics/anthropic-cookbook) — Practical examples and patterns for building with Claude.

## Videos

- [Claude Code in 100 Seconds](https://www.youtube.com/results?search_query=claude+code+100+seconds) — Quick overview of what Claude Code is and does.
- [Cursor AI — Full Tutorial](https://www.youtube.com/results?search_query=cursor+ai+full+tutorial) — Demonstrates AI-assisted coding workflows similar to Claude Code.
- [AI Coding Assistants Compared](https://www.youtube.com/results?search_query=ai+coding+assistants+compared+2025) — Context on where Claude Code fits in the AI tooling landscape.

## Articles and Readings

- [Anthropic's Blog — Claude Code](https://www.anthropic.com/news) — Launch announcements and capability updates.
- [Simon Willison — AI-Assisted Development](https://simonwillison.net/series/ai/) — Practical insights on coding with AI assistants.
- [Addy Osmani — AI-Assisted Development](https://addyosmani.com/blog/) — Patterns for productive human-AI collaboration.
- [Swyx — AI Engineering](https://www.latent.space/) — Latent Space blog and podcast on AI tooling.

## Books

- **The Pragmatic Programmer** by Hunt & Thomas — Foundational productivity practices that pair well with AI assistance. [pragprog.com](https://pragprog.com/titles/tpp20/the-pragmatic-programmer-20th-anniversary-edition/).
- **Deep Work** by Cal Newport — Focus strategies relevant to getting the most from AI pairing sessions.

## Podcasts

- [Latent Space](https://www.latent.space/podcast) — AI engineering podcast covering coding agents and developer tools.
- [Changelog](https://changelog.com/podcast) — Developer-focused show with episodes on AI-assisted workflows.
- [Software Engineering Daily](https://softwareengineeringdaily.com/) — Technical deep dives, with frequent AI/ML tooling episodes.

## Tools

- [Claude Code](https://docs.anthropic.com/en/docs/claude-code) — The CLI agent featured in this module.
- [GitHub Copilot](https://github.com/features/copilot) — AI pair programmer for comparison and complementary use.
- [aider](https://aider.chat/) — Open-source AI coding assistant in the terminal.
- [Cursor](https://cursor.com/) — AI-native code editor with similar agent capabilities.
@@ -0,0 +1,83 @@
# Daily Workflow Walkthrough — Learn by Doing

## Before We Begin

<!-- hint:buttons type="multi" prompt="Which of these do you use regularly?" options="AI coding assistants,Code review tools,Automated testing,Learning platforms,None of these" -->
<!-- hint:card type="concept" title="AI-Assisted Development" -->

**Question:** How do you currently use your development tools throughout a typical day? Where do you spend the most time — writing code, reviewing, debugging, or learning?

**Checkpoint:** The user can describe their current workflow and identify at least one area where AI assistance could help.

---

## Step 1: Morning Setup and Stats

<!-- hint:terminal -->

**Task:** Open your terminal and run `/teach:stats` to check your current XP and belt. Then run `/teach list` to see available modules. Note which modules you've completed and which are recommended next.

**Question:** Why might checking your learning progress at the start of the day be useful? How does it connect to deliberate practice?

**Checkpoint:** The user has run both commands and can describe their current progress.

---

## Step 2: Reviewing Code with AI

<!-- hint:list style="numbered" -->

**Task:** Find a recent PR or code change (yours or a teammate's). Ask your AI assistant to review it, focusing on error handling and edge cases. Compare the AI's feedback to what you would have caught yourself.

**Question:** What did the AI catch that you might have missed? What did it miss that you noticed? When should you trust AI feedback vs. apply your own judgment?

**Checkpoint:** The user can articulate the strengths and limitations of AI-assisted code review.

---

## Step 3: Debugging with AI Assistance

<!-- hint:diagram mermaid-type="flowchart" topic="AI-assisted debugging flow: describe bug, AI reads code, trace path, identify root cause, fix and test" -->

**Task:** Think of a recent bug you encountered (or use a hypothetical one). Describe it to your AI assistant with context: the error message, the expected behavior, and the actual behavior. Walk through the debugging process together.

**Question:** How did providing context change the quality of the AI's suggestions? What's the minimum context needed for useful debugging help?

**Checkpoint:** The user understands that AI debugging quality depends heavily on the context provided.

---

## Step 4: Learning During the Day

<!-- hint:buttons type="single" prompt="How would you fit learning into your day?" options="Dedicated 30-min block,Between meetings,After finishing a task,During code review" -->

**Task:** Choose a module from `/teach list` that relates to something you're working on this week. Start the walkthrough and complete at least 2 steps. Notice how the Socratic method asks you to think before giving answers.

**Question:** How is learning through guided questions different from reading documentation? Which approach helps you retain more?

**Checkpoint:** The user has started a module and can explain the value of active recall over passive reading.

---

## Step 5: Building a Feature with AI

<!-- hint:code language="javascript" -->

**Task:** Pick a small feature or improvement in your project. Use your AI assistant to help plan it: describe what you want, ask it to outline the approach, then implement it yourself (with AI help for specific questions). Track how you divide the work.

**Question:** What parts of the feature were faster with AI help? What parts required your own expertise and judgment? Where's the line between productive AI use and over-reliance?

**Checkpoint:** The user can describe a healthy division of labor between themselves and AI tools.

---

## Step 6: End-of-Day Reflection

<!-- hint:buttons type="rating" prompt="How productive did you feel today with AI assistance?" -->
<!-- hint:celebrate -->

**Task:** Review what you accomplished today. Run `/teach:stats` again and compare to the morning. Write a brief note (mental or written) about: one thing AI helped with, one thing you did better yourself, and one thing you want to learn tomorrow.

**Question:** How would you design your ideal daily workflow that balances AI assistance, independent work, and continuous learning?

**Checkpoint:** The user can articulate a personal workflow that integrates AI tools without depending on them.