@shaykec/bridge 0.4.25 → 0.4.26
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/journeys/ai-engineer.yaml +34 -0
- package/journeys/backend-developer.yaml +36 -0
- package/journeys/business-analyst.yaml +37 -0
- package/journeys/devops-engineer.yaml +37 -0
- package/journeys/engineering-manager.yaml +44 -0
- package/journeys/frontend-developer.yaml +41 -0
- package/journeys/fullstack-developer.yaml +49 -0
- package/journeys/mobile-developer.yaml +42 -0
- package/journeys/product-manager.yaml +35 -0
- package/journeys/qa-engineer.yaml +37 -0
- package/journeys/ux-designer.yaml +43 -0
- package/modules/README.md +52 -0
- package/modules/accessibility-fundamentals/content.md +126 -0
- package/modules/accessibility-fundamentals/exercises.md +88 -0
- package/modules/accessibility-fundamentals/module.yaml +43 -0
- package/modules/accessibility-fundamentals/quick-ref.md +71 -0
- package/modules/accessibility-fundamentals/quiz.md +100 -0
- package/modules/accessibility-fundamentals/resources.md +29 -0
- package/modules/accessibility-fundamentals/walkthrough.md +80 -0
- package/modules/adr-writing/content.md +121 -0
- package/modules/adr-writing/exercises.md +81 -0
- package/modules/adr-writing/module.yaml +41 -0
- package/modules/adr-writing/quick-ref.md +57 -0
- package/modules/adr-writing/quiz.md +73 -0
- package/modules/adr-writing/resources.md +29 -0
- package/modules/adr-writing/walkthrough.md +64 -0
- package/modules/ai-agents/content.md +120 -0
- package/modules/ai-agents/exercises.md +82 -0
- package/modules/ai-agents/module.yaml +42 -0
- package/modules/ai-agents/quick-ref.md +60 -0
- package/modules/ai-agents/quiz.md +103 -0
- package/modules/ai-agents/resources.md +30 -0
- package/modules/ai-agents/walkthrough.md +85 -0
- package/modules/ai-assisted-research/content.md +136 -0
- package/modules/ai-assisted-research/exercises.md +80 -0
- package/modules/ai-assisted-research/module.yaml +42 -0
- package/modules/ai-assisted-research/quick-ref.md +67 -0
- package/modules/ai-assisted-research/quiz.md +73 -0
- package/modules/ai-assisted-research/resources.md +33 -0
- package/modules/ai-assisted-research/walkthrough.md +85 -0
- package/modules/ai-pair-programming/content.md +105 -0
- package/modules/ai-pair-programming/exercises.md +98 -0
- package/modules/ai-pair-programming/module.yaml +39 -0
- package/modules/ai-pair-programming/quick-ref.md +58 -0
- package/modules/ai-pair-programming/quiz.md +73 -0
- package/modules/ai-pair-programming/resources.md +34 -0
- package/modules/ai-pair-programming/walkthrough.md +117 -0
- package/modules/ai-test-generation/content.md +125 -0
- package/modules/ai-test-generation/exercises.md +98 -0
- package/modules/ai-test-generation/module.yaml +39 -0
- package/modules/ai-test-generation/quick-ref.md +65 -0
- package/modules/ai-test-generation/quiz.md +74 -0
- package/modules/ai-test-generation/resources.md +41 -0
- package/modules/ai-test-generation/walkthrough.md +100 -0
- package/modules/api-design/content.md +189 -0
- package/modules/api-design/exercises.md +84 -0
- package/modules/api-design/game.yaml +113 -0
- package/modules/api-design/module.yaml +45 -0
- package/modules/api-design/quick-ref.md +73 -0
- package/modules/api-design/quiz.md +100 -0
- package/modules/api-design/resources.md +55 -0
- package/modules/api-design/walkthrough.md +88 -0
- package/modules/clean-code/content.md +136 -0
- package/modules/clean-code/exercises.md +137 -0
- package/modules/clean-code/game.yaml +172 -0
- package/modules/clean-code/module.yaml +44 -0
- package/modules/clean-code/quick-ref.md +44 -0
- package/modules/clean-code/quiz.md +105 -0
- package/modules/clean-code/resources.md +40 -0
- package/modules/clean-code/walkthrough.md +78 -0
- package/modules/clean-code/workshop.yaml +149 -0
- package/modules/code-review/content.md +130 -0
- package/modules/code-review/exercises.md +95 -0
- package/modules/code-review/game.yaml +83 -0
- package/modules/code-review/module.yaml +42 -0
- package/modules/code-review/quick-ref.md +77 -0
- package/modules/code-review/quiz.md +105 -0
- package/modules/code-review/resources.md +40 -0
- package/modules/code-review/walkthrough.md +106 -0
- package/modules/daily-workflow/content.md +81 -0
- package/modules/daily-workflow/exercises.md +50 -0
- package/modules/daily-workflow/module.yaml +33 -0
- package/modules/daily-workflow/quick-ref.md +37 -0
- package/modules/daily-workflow/quiz.md +65 -0
- package/modules/daily-workflow/resources.md +38 -0
- package/modules/daily-workflow/walkthrough.md +83 -0
- package/modules/debugging-systematically/content.md +139 -0
- package/modules/debugging-systematically/exercises.md +91 -0
- package/modules/debugging-systematically/module.yaml +46 -0
- package/modules/debugging-systematically/quick-ref.md +59 -0
- package/modules/debugging-systematically/quiz.md +105 -0
- package/modules/debugging-systematically/resources.md +42 -0
- package/modules/debugging-systematically/walkthrough.md +84 -0
- package/modules/debugging-systematically/workshop.yaml +127 -0
- package/modules/demo-test/content.md +68 -0
- package/modules/demo-test/exercises.md +28 -0
- package/modules/demo-test/game.yaml +171 -0
- package/modules/demo-test/module.yaml +41 -0
- package/modules/demo-test/quick-ref.md +54 -0
- package/modules/demo-test/quiz.md +74 -0
- package/modules/demo-test/resources.md +21 -0
- package/modules/demo-test/walkthrough.md +122 -0
- package/modules/demo-test/workshop.yaml +31 -0
- package/modules/design-critique/content.md +93 -0
- package/modules/design-critique/exercises.md +71 -0
- package/modules/design-critique/module.yaml +41 -0
- package/modules/design-critique/quick-ref.md +63 -0
- package/modules/design-critique/quiz.md +73 -0
- package/modules/design-critique/resources.md +27 -0
- package/modules/design-critique/walkthrough.md +68 -0
- package/modules/design-patterns/content.md +335 -0
- package/modules/design-patterns/exercises.md +82 -0
- package/modules/design-patterns/game.yaml +55 -0
- package/modules/design-patterns/module.yaml +45 -0
- package/modules/design-patterns/quick-ref.md +44 -0
- package/modules/design-patterns/quiz.md +101 -0
- package/modules/design-patterns/resources.md +40 -0
- package/modules/design-patterns/walkthrough.md +64 -0
- package/modules/exploratory-testing/content.md +133 -0
- package/modules/exploratory-testing/exercises.md +88 -0
- package/modules/exploratory-testing/module.yaml +41 -0
- package/modules/exploratory-testing/quick-ref.md +68 -0
- package/modules/exploratory-testing/quiz.md +75 -0
- package/modules/exploratory-testing/resources.md +39 -0
- package/modules/exploratory-testing/walkthrough.md +87 -0
- package/modules/git/content.md +128 -0
- package/modules/git/exercises.md +53 -0
- package/modules/git/game.yaml +190 -0
- package/modules/git/module.yaml +44 -0
- package/modules/git/quick-ref.md +67 -0
- package/modules/git/quiz.md +89 -0
- package/modules/git/resources.md +49 -0
- package/modules/git/walkthrough.md +92 -0
- package/modules/git/workshop.yaml +145 -0
- package/modules/hiring-interviews/content.md +130 -0
- package/modules/hiring-interviews/exercises.md +88 -0
- package/modules/hiring-interviews/module.yaml +41 -0
- package/modules/hiring-interviews/quick-ref.md +68 -0
- package/modules/hiring-interviews/quiz.md +73 -0
- package/modules/hiring-interviews/resources.md +36 -0
- package/modules/hiring-interviews/walkthrough.md +75 -0
- package/modules/hooks/content.md +97 -0
- package/modules/hooks/exercises.md +69 -0
- package/modules/hooks/module.yaml +39 -0
- package/modules/hooks/quick-ref.md +93 -0
- package/modules/hooks/quiz.md +81 -0
- package/modules/hooks/resources.md +34 -0
- package/modules/hooks/walkthrough.md +105 -0
- package/modules/hooks/workshop.yaml +64 -0
- package/modules/incident-response/content.md +124 -0
- package/modules/incident-response/exercises.md +82 -0
- package/modules/incident-response/game.yaml +132 -0
- package/modules/incident-response/module.yaml +45 -0
- package/modules/incident-response/quick-ref.md +53 -0
- package/modules/incident-response/quiz.md +103 -0
- package/modules/incident-response/resources.md +40 -0
- package/modules/incident-response/walkthrough.md +82 -0
- package/modules/llm-fundamentals/content.md +114 -0
- package/modules/llm-fundamentals/exercises.md +83 -0
- package/modules/llm-fundamentals/module.yaml +42 -0
- package/modules/llm-fundamentals/quick-ref.md +64 -0
- package/modules/llm-fundamentals/quiz.md +103 -0
- package/modules/llm-fundamentals/resources.md +30 -0
- package/modules/llm-fundamentals/walkthrough.md +91 -0
- package/modules/one-on-ones/content.md +133 -0
- package/modules/one-on-ones/exercises.md +81 -0
- package/modules/one-on-ones/module.yaml +44 -0
- package/modules/one-on-ones/quick-ref.md +67 -0
- package/modules/one-on-ones/quiz.md +73 -0
- package/modules/one-on-ones/resources.md +37 -0
- package/modules/one-on-ones/walkthrough.md +69 -0
- package/modules/package.json +9 -0
- package/modules/prioritization-frameworks/content.md +130 -0
- package/modules/prioritization-frameworks/exercises.md +93 -0
- package/modules/prioritization-frameworks/module.yaml +41 -0
- package/modules/prioritization-frameworks/quick-ref.md +77 -0
- package/modules/prioritization-frameworks/quiz.md +73 -0
- package/modules/prioritization-frameworks/resources.md +32 -0
- package/modules/prioritization-frameworks/walkthrough.md +69 -0
- package/modules/prompt-engineering/content.md +123 -0
- package/modules/prompt-engineering/exercises.md +82 -0
- package/modules/prompt-engineering/game.yaml +101 -0
- package/modules/prompt-engineering/module.yaml +45 -0
- package/modules/prompt-engineering/quick-ref.md +65 -0
- package/modules/prompt-engineering/quiz.md +105 -0
- package/modules/prompt-engineering/resources.md +36 -0
- package/modules/prompt-engineering/walkthrough.md +81 -0
- package/modules/rag-fundamentals/content.md +111 -0
- package/modules/rag-fundamentals/exercises.md +80 -0
- package/modules/rag-fundamentals/module.yaml +45 -0
- package/modules/rag-fundamentals/quick-ref.md +58 -0
- package/modules/rag-fundamentals/quiz.md +75 -0
- package/modules/rag-fundamentals/resources.md +34 -0
- package/modules/rag-fundamentals/walkthrough.md +75 -0
- package/modules/react-fundamentals/content.md +140 -0
- package/modules/react-fundamentals/exercises.md +81 -0
- package/modules/react-fundamentals/game.yaml +145 -0
- package/modules/react-fundamentals/module.yaml +45 -0
- package/modules/react-fundamentals/quick-ref.md +62 -0
- package/modules/react-fundamentals/quiz.md +106 -0
- package/modules/react-fundamentals/resources.md +42 -0
- package/modules/react-fundamentals/walkthrough.md +89 -0
- package/modules/react-fundamentals/workshop.yaml +112 -0
- package/modules/react-native-fundamentals/content.md +141 -0
- package/modules/react-native-fundamentals/exercises.md +79 -0
- package/modules/react-native-fundamentals/module.yaml +42 -0
- package/modules/react-native-fundamentals/quick-ref.md +60 -0
- package/modules/react-native-fundamentals/quiz.md +61 -0
- package/modules/react-native-fundamentals/resources.md +24 -0
- package/modules/react-native-fundamentals/walkthrough.md +84 -0
- package/modules/registry.yaml +1650 -0
- package/modules/risk-management/content.md +162 -0
- package/modules/risk-management/exercises.md +86 -0
- package/modules/risk-management/module.yaml +41 -0
- package/modules/risk-management/quick-ref.md +82 -0
- package/modules/risk-management/quiz.md +73 -0
- package/modules/risk-management/resources.md +40 -0
- package/modules/risk-management/walkthrough.md +67 -0
- package/modules/running-effective-standups/content.md +119 -0
- package/modules/running-effective-standups/exercises.md +79 -0
- package/modules/running-effective-standups/module.yaml +40 -0
- package/modules/running-effective-standups/quick-ref.md +61 -0
- package/modules/running-effective-standups/quiz.md +73 -0
- package/modules/running-effective-standups/resources.md +36 -0
- package/modules/running-effective-standups/walkthrough.md +76 -0
- package/modules/solid-principles/content.md +154 -0
- package/modules/solid-principles/exercises.md +107 -0
- package/modules/solid-principles/module.yaml +42 -0
- package/modules/solid-principles/quick-ref.md +50 -0
- package/modules/solid-principles/quiz.md +102 -0
- package/modules/solid-principles/resources.md +39 -0
- package/modules/solid-principles/walkthrough.md +84 -0
- package/modules/sprint-planning/content.md +142 -0
- package/modules/sprint-planning/exercises.md +79 -0
- package/modules/sprint-planning/game.yaml +84 -0
- package/modules/sprint-planning/module.yaml +44 -0
- package/modules/sprint-planning/quick-ref.md +76 -0
- package/modules/sprint-planning/quiz.md +102 -0
- package/modules/sprint-planning/resources.md +39 -0
- package/modules/sprint-planning/walkthrough.md +75 -0
- package/modules/sql-fundamentals/content.md +160 -0
- package/modules/sql-fundamentals/exercises.md +87 -0
- package/modules/sql-fundamentals/game.yaml +105 -0
- package/modules/sql-fundamentals/module.yaml +45 -0
- package/modules/sql-fundamentals/quick-ref.md +53 -0
- package/modules/sql-fundamentals/quiz.md +103 -0
- package/modules/sql-fundamentals/resources.md +42 -0
- package/modules/sql-fundamentals/walkthrough.md +92 -0
- package/modules/sql-fundamentals/workshop.yaml +109 -0
- package/modules/stakeholder-communication/content.md +186 -0
- package/modules/stakeholder-communication/exercises.md +87 -0
- package/modules/stakeholder-communication/module.yaml +38 -0
- package/modules/stakeholder-communication/quick-ref.md +89 -0
- package/modules/stakeholder-communication/quiz.md +73 -0
- package/modules/stakeholder-communication/resources.md +41 -0
- package/modules/stakeholder-communication/walkthrough.md +74 -0
- package/modules/system-design/content.md +149 -0
- package/modules/system-design/exercises.md +83 -0
- package/modules/system-design/game.yaml +95 -0
- package/modules/system-design/module.yaml +46 -0
- package/modules/system-design/quick-ref.md +59 -0
- package/modules/system-design/quiz.md +102 -0
- package/modules/system-design/resources.md +46 -0
- package/modules/system-design/walkthrough.md +90 -0
- package/modules/team-topologies/content.md +166 -0
- package/modules/team-topologies/exercises.md +85 -0
- package/modules/team-topologies/module.yaml +41 -0
- package/modules/team-topologies/quick-ref.md +61 -0
- package/modules/team-topologies/quiz.md +101 -0
- package/modules/team-topologies/resources.md +37 -0
- package/modules/team-topologies/walkthrough.md +76 -0
- package/modules/technical-debt/content.md +111 -0
- package/modules/technical-debt/exercises.md +92 -0
- package/modules/technical-debt/module.yaml +39 -0
- package/modules/technical-debt/quick-ref.md +60 -0
- package/modules/technical-debt/quiz.md +73 -0
- package/modules/technical-debt/resources.md +25 -0
- package/modules/technical-debt/walkthrough.md +94 -0
- package/modules/technical-mentoring/content.md +128 -0
- package/modules/technical-mentoring/exercises.md +84 -0
- package/modules/technical-mentoring/module.yaml +41 -0
- package/modules/technical-mentoring/quick-ref.md +74 -0
- package/modules/technical-mentoring/quiz.md +73 -0
- package/modules/technical-mentoring/resources.md +33 -0
- package/modules/technical-mentoring/walkthrough.md +65 -0
- package/modules/test-strategy/content.md +136 -0
- package/modules/test-strategy/exercises.md +84 -0
- package/modules/test-strategy/game.yaml +99 -0
- package/modules/test-strategy/module.yaml +45 -0
- package/modules/test-strategy/quick-ref.md +66 -0
- package/modules/test-strategy/quiz.md +99 -0
- package/modules/test-strategy/resources.md +60 -0
- package/modules/test-strategy/walkthrough.md +97 -0
- package/modules/test-strategy/workshop.yaml +96 -0
- package/modules/typescript-fundamentals/content.md +127 -0
- package/modules/typescript-fundamentals/exercises.md +79 -0
- package/modules/typescript-fundamentals/game.yaml +111 -0
- package/modules/typescript-fundamentals/module.yaml +45 -0
- package/modules/typescript-fundamentals/quick-ref.md +55 -0
- package/modules/typescript-fundamentals/quiz.md +104 -0
- package/modules/typescript-fundamentals/resources.md +42 -0
- package/modules/typescript-fundamentals/walkthrough.md +71 -0
- package/modules/typescript-fundamentals/workshop.yaml +146 -0
- package/modules/user-story-mapping/content.md +123 -0
- package/modules/user-story-mapping/exercises.md +87 -0
- package/modules/user-story-mapping/module.yaml +41 -0
- package/modules/user-story-mapping/quick-ref.md +64 -0
- package/modules/user-story-mapping/quiz.md +73 -0
- package/modules/user-story-mapping/resources.md +29 -0
- package/modules/user-story-mapping/walkthrough.md +86 -0
- package/modules/writing-prds/content.md +133 -0
- package/modules/writing-prds/exercises.md +93 -0
- package/modules/writing-prds/game.yaml +83 -0
- package/modules/writing-prds/module.yaml +44 -0
- package/modules/writing-prds/quick-ref.md +77 -0
- package/modules/writing-prds/quiz.md +103 -0
- package/modules/writing-prds/resources.md +30 -0
- package/modules/writing-prds/walkthrough.md +87 -0
- package/package.json +1 -1
+++ package/modules/prioritization-frameworks/quick-ref.md
@@ -0,0 +1,77 @@

# Prioritization Quick Reference

## RICE Formula

```
RICE = (Reach × Impact × Confidence) ÷ Effort
```

| Factor | Type | Scale / Units |
|--------|------|---------------|
| Reach | Number | Users, events, or conversions per timeframe |
| Impact | 0.25–3 | 0.25 min, 0.5 low, 1 med, 2 high, 3 massive |
| Confidence | % | 50%, 80%, 100% |
| Effort | Person-months | Total team capacity |

**Higher RICE = higher priority.**

## MoSCoW Buckets

| Bucket | Meaning | Guideline |
|--------|---------|-----------|
| **M**ust | Launch blockers | ≤ 60% of scope |
| **S**hould | Important, not blocking | |
| **C**ould | Nice to have | If time permits |
| **W**on't | Out of scope | Capture for later |

## Impact Mapping Structure

```
Goal: [Business outcome]
├── Actor: [Who can help/block]
│   ├── Impact: [Behavior change needed]
│   │   ├── Deliverable: [What we build]
│   │   └── Deliverable: [...]
│   └── Impact: [...]
└── Actor: [...]
```

## Value vs Effort Matrix

| | Low Effort | High Effort |
|---|------------|-------------|
| **High Value** | Quick wins ✓ | Big bets |
| **Low Value** | Fill-ins | Avoid ✗ |

## Framework Comparison

| Use Case | Best Framework |
|----------|----------------|
| Ranked backlog, quantitative | RICE |
| Release scope, stakeholder alignment | MoSCoW |
| Goal alignment, assumption surfacing | Impact Mapping |
| Quick triage | Value/Effort |

## Communicating Priorities

1. **Show the model** — Don't hide; share how you scored
2. **Make inputs editable** — Disagreement? Change inputs, re-run
3. **Capture "Won't"** — Explicit out-of-scope prevents surprises
4. **Revisit often** — New info → new scores

## RICE Example

| Initiative | R | I | C | E | RICE |
|------------|---|---|---|---|------|
| Dark mode | 10k | 0.5 | 100% | 1 | 5,000 |
| Search | 8k | 2 | 80% | 3 | 4,267 |
| Export | 2k | 1 | 100% | 0.5 | 4,000 |

## Impact Map (Mermaid)

```mermaid
flowchart TB
    G[Goal] --> A[Actor]
    A --> I[Impact]
    I --> D[Deliverable]
```
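The formula and the example-table scores can be reproduced in a few lines. A minimal sketch (the `rice` helper and the initiative dict are illustrative, not part of this package):

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach × Impact × Confidence) ÷ Effort."""
    return (reach * impact * confidence) / effort

# The three initiatives from the RICE Example table (confidence as a fraction)
initiatives = {
    "Dark mode": rice(10_000, 0.5, 1.0, 1),   # 5000.0
    "Search":    rice(8_000, 2, 0.8, 3),      # ≈ 4266.7
    "Export":    rice(2_000, 1, 1.0, 0.5),    # 4000.0
}

# Higher RICE = higher priority
for name, score in sorted(initiatives.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:,.0f}")
```

Scoring in code keeps the inputs editable, which supports the "make inputs editable" practice above.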
+++ package/modules/prioritization-frameworks/quiz.md
@@ -0,0 +1,73 @@

# Prioritization — Quiz

## Question 1

In the RICE formula, which factor is the denominator?

A) Reach
B) Impact
C) Confidence
D) Effort

<!-- ANSWER: D -->
<!-- EXPLANATION: RICE = (Reach × Impact × Confidence) ÷ Effort. Reach, Impact, and Confidence are the "benefit" factors (multiplied); Effort is the "cost" (divisor). Higher effort lowers the score. -->

## Question 2

In MoSCoW prioritization, what is the main risk of having too many "Must have" items?

A) The document becomes too long
B) Nothing is truly prioritized—everything is "critical"
C) Stakeholders will reject the plan
D) Engineering estimates become inaccurate

<!-- ANSWER: B -->
<!-- EXPLANATION: If everything is Must, nothing is. Must should be reserved for true launch-blockers. A common guideline is to cap Must at ~60% of scope. Too many Musts means you're either over-scoped or under-prioritized. -->

## Question 3

In Impact Mapping, what does "Impact" represent?

A) The number of users affected
B) A behavior change required of an actor to support the goal
C) The revenue generated by a deliverable
D) The effort to build a feature

<!-- ANSWER: B -->
<!-- EXPLANATION: Impact is the middle layer: Goal → Actor → Impact → Deliverable. Impact answers: "How must this actor change their behavior to help us achieve the goal?" It's about cause and effect, not scope or effort. -->

## Question 4

A feature has Reach 4,000, Impact 2, Confidence 80%, and Effort 2. What is its RICE score?

A) 3,200
B) 1,280
C) 2,560
D) 6,400

<!-- ANSWER: A -->
<!-- EXPLANATION: RICE = (Reach × Impact × Confidence) ÷ Effort = (4000 × 2 × 0.8) ÷ 2 = 6400 ÷ 2 = 3200. -->

## Question 5

Which framework is best for "what's in this release vs next release"?

A) RICE
B) MoSCoW
C) Impact Mapping
D) Value vs Effort matrix

<!-- ANSWER: B -->
<!-- EXPLANATION: MoSCoW explicitly buckets work into Must/Should/Could/Won't, which maps cleanly to release scope. RICE ranks but doesn't define release boundaries. Impact Mapping is strategic; Value/Effort is a triage view. MoSCoW is designed for scope negotiation. -->

## Question 6

In a Value vs Effort matrix, where should "quick wins" go?

A) Low value, low effort
B) High value, high effort
C) High value, low effort
D) Low value, high effort

<!-- ANSWER: C -->
<!-- EXPLANATION: Quick wins are high value with low effort—they deliver impact without huge cost. Do these first. Low value + high effort = avoid. High value + high effort = big bets. Low value + low effort = fill-ins. -->
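The quadrant logic behind Question 6 can be made explicit in code. A hypothetical sketch (the `quadrant` helper and its labels follow the Value vs Effort matrix; what counts as "high" is context-dependent):

```python
def quadrant(value_high: bool, effort_high: bool) -> str:
    """Map a (value, effort) pair to its Value vs Effort quadrant."""
    if value_high and not effort_high:
        return "Quick win"   # do first
    if value_high and effort_high:
        return "Big bet"     # plan deliberately
    if not value_high and not effort_high:
        return "Fill-in"     # if time permits
    return "Avoid"           # low value, high effort

print(quadrant(value_high=True, effort_high=False))  # Quick win
```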
+++ package/modules/prioritization-frameworks/resources.md
@@ -0,0 +1,32 @@

# Prioritization — Resources

## Videos

- [RICE Prioritization Framework Explained](https://www.youtube.com/watch?v=8Q6o-3LbQPE) — Product Manager HQ, ~12 min. Step-by-step RICE walkthrough.
- [Impact Mapping in 15 Minutes](https://www.youtube.com/watch?v=8Q6o-3LbQPE) — Gojko Adzic. Quick intro to the impact mapping technique.

## Articles and Readings

- [RICE Scoring Model](https://www.productplan.com/glossary/rice-scoring-model/) — ProductPlan. RICE overview: formula, factors, and examples.
- [Where do product roadmaps come from?](https://medium.com/intercom-inside/where-do-product-roadmaps-come-from-b526a9c60493) — Intercom. Origin of RICE and roadmap thinking.
- [How to Use MoSCoW Prioritization](https://www.productplan.com/glossary/moscow-prioritization/) — ProductPlan. MoSCoW explained with examples.
- [Impact Mapping](https://www.impactmapping.org/) — Gojko Adzic. The official impact mapping site with guides and examples.
- [How to get the most out of impact mapping](https://gojko.net/2014/11/17/how-to-get-the-most-out-of-impact-mapping) — Gojko Adzic. Practical tips and anti-patterns.
- [Prioritization Frameworks for Product Managers](https://www.productplan.com/product-management-frameworks/) — ProductPlan. Comparison of RICE, MoSCoW, value scoring, and more.

## Books

- **Impact Mapping** by Gojko Adzic — The canonical guide to impact mapping; short, practical.
- **Inspired** by Marty Cagan — SVPG perspective on opportunity assessment and prioritization.
- **Escaping the Build Trap** by Melissa Perri — Prioritization in the context of outcome-driven product management.

## Podcasts

- [Lenny's Podcast: How to prioritize](https://www.lennyspodcast.com/) — Lenny Rachitsky. Episodes on roadmap and prioritization with PM leaders.
- [Product Talk](https://www.producttalk.org/) — Teresa Torres. Opportunity solution trees and evidence-based prioritization.

## Tools

- [ProductPlan](https://www.productplan.com/) — Roadmapping and prioritization software with RICE built in.
- [Aha!](https://www.aha.io/) — Product strategy and roadmap tools with scoring frameworks.
- [Miro Impact Mapping Template](https://miro.com/templates/impact-mapping/) — Collaborative impact mapping canvas.
+++ package/modules/prioritization-frameworks/walkthrough.md
@@ -0,0 +1,69 @@

# Prioritization Walkthrough — Learn by Doing

## Before We Begin

**Diagnostic:** Why prioritize at all? If you could only ship *one* initiative this quarter, how would you choose which one? What would "the right choice" mean in your context?

**Checkpoint:** You can name at least one trade-off you'd face (e.g., revenue vs. retention, quick win vs. strategic bet) and why gut feel isn't enough.

---

## Step 1: Compute RICE Scores

<!-- hint:diagram mermaid-type="flowchart" topic="RICE formula: Reach × Impact × Confidence ÷ Effort" -->

**Task:** Three initiatives: (A) Reach 5,000, Impact 2, Confidence 80%, Effort 2 person-months. (B) Reach 3,000, Impact 1, Confidence 100%, Effort 0.5. (C) Reach 8,000, Impact 0.5, Confidence 50%, Effort 4. Compute RICE for each. Rank them.

**Question:** Why does (B) rank highest despite lower Reach? What would change if you doubled (C)'s Confidence?

**Checkpoint:** The user computes: A ≈ 4,000, B ≈ 6,000, C ≈ 500. Rank: B, A, C. They understand Effort and Confidence strongly affect the score.

---

## Step 2: Apply MoSCoW to a Release

**Task:** You're planning a "v2 search" release. Assign these to M/S/C/W (max 3 Must, max 3 Should): basic keyword search, filters, search suggestions, export results, save searches, analytics dashboard, API for search, keyboard shortcuts.

**Question:** What's the risk of making "export results" a Must? How would you negotiate if stakeholders want 5 Musts?

**Checkpoint:** The user assigns items with justification. They recognize basic search is a Must; export might be Should or Could. They can explain why capping Must forces trade-offs.

---

## Step 3: Build an Impact Map

**Task:** Goal: "Reduce support tickets about 'how do I reset my password' by 30% in Q3." Build an impact map: 2 actors, 2 impacts per actor, 2 deliverables per impact.

**Question:** Why does Impact describe *behavior change* rather than *features*? What assumption are you making?

**Checkpoint:** The user produces: actors (e.g., users, support agents), impacts (e.g., "users self-serve reset"), deliverables (e.g., clearer UX, reset email). They state at least one assumption (e.g., "users will use the new flow").

---

## Step 4: Plot Value vs Effort

**Task:** Take 6 real or hypothetical initiatives from your work. Define "value" and "effort" for your context. Plot each on a 2×2. Label: 1 quick win, 1 big bet, 1 avoid.

**Question:** Who might disagree with your "avoid" placement? How would you defend it?

**Checkpoint:** The user has a matrix with 6 items. Quick win = high value, low effort. Avoid = low value, high effort. They can justify the avoid and anticipate stakeholder pushback.

---

## Step 5: Defend a Priority

**Task:** You prioritized "Dark mode" over "Export to PDF" using RICE. A stakeholder says: "Export is critical for enterprise users." Write your response in 4–6 sentences. Reference the model; acknowledge their concern; offer a path forward.

**Question:** What would you do if they provide new data (e.g., enterprise is 40% of revenue) that changes Reach or Impact?

**Checkpoint:** The user's response shows the numbers, explains why Dark mode scored higher, acknowledges the concern, and offers to re-score with updated inputs or add Export to next quarter.

---

## Step 6: Combine Frameworks

**Task:** For a single product initiative, use both RICE and Impact Mapping. First: write the goal and build a 2-layer impact map. Then: pick 3 deliverables from the map and score them with RICE. What does each framework reveal that the other doesn't?

**Question:** When would you lead with Impact Mapping vs RICE in a stakeholder meeting? Why?

**Checkpoint:** The user produces an impact map and RICE scores for 3 deliverables. They articulate: Impact Mapping shows *why* and *who*; RICE provides *rank order*. They can choose the right lead depending on audience (strategic vs tactical).
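Step 1's arithmetic can be checked with a short script. A minimal sketch (initiative values are taken from the Task in Step 1; the `rice` helper is illustrative):

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach × Impact × Confidence) ÷ Effort."""
    return reach * impact * confidence / effort

# Initiatives (A), (B), (C) from Step 1
scores = {
    "A": rice(5_000, 2, 0.8, 2),    # 4000.0
    "B": rice(3_000, 1, 1.0, 0.5),  # 6000.0
    "C": rice(8_000, 0.5, 0.5, 4),  # 500.0
}

# Rank: highest RICE first
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['B', 'A', 'C']
```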
@@ -0,0 +1,123 @@
# Prompt Engineering — Techniques That Get Better AI Outputs

<!-- hint:slides topic="Prompt engineering: prompt anatomy, zero-shot vs few-shot, chain-of-thought, structured output, and system prompts" slides="5" -->

## What Makes a Good Prompt?

Prompts are instructions you give to an LLM. A good prompt is **specific**, **structured**, and **contextual** — it tells the model exactly what you want, in what format, with enough context to succeed.

## Anatomy of a Prompt

Every effective prompt can be broken into five components:

```mermaid
flowchart TB
    subgraph prompt["Prompt Anatomy"]
        R[Role]
        C[Context]
        T[Task]
        F[Format]
        X[Constraints]
    end
    R --> C --> T --> F --> X
    X --> Out[Better Output]
```

| Component | Purpose | Example |
|-----------|---------|---------|
| **Role** | Sets the AI's persona and expertise | "You are an expert Python developer." |
| **Context** | Background the model needs | "The codebase uses FastAPI and Pydantic." |
| **Task** | The specific request | "Add validation for the email field." |
| **Format** | Desired output structure | "Return a JSON object with keys: valid, errors." |
| **Constraints** | Limits and rules | "Keep under 100 words. No external APIs." |
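
The five components compose naturally as ordered sections of one string. A minimal sketch in Python (the component text is the illustrative example from the table above):

```python
def build_prompt(role: str, context: str, task: str, fmt: str, constraints: str) -> str:
    """Assemble the five prompt components, in order, separated by blank lines."""
    return "\n\n".join([role, context, task, fmt, constraints])

prompt = build_prompt(
    role="You are an expert Python developer.",
    context="The codebase uses FastAPI and Pydantic.",
    task="Add validation for the email field.",
    fmt="Return a JSON object with keys: valid, errors.",
    constraints="Keep under 100 words. No external APIs.",
)
print(prompt)
```

Keeping the components as separate arguments makes it easy to vary one (say, the format) while holding the rest fixed during iteration.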

## Zero-Shot vs Few-Shot Prompting

**Zero-shot** — No examples. The model infers from the instruction alone.

```
Translate "Hello" to Spanish.
```

**Few-shot** — Provide 1–5 examples to establish the pattern.

```
Translate to Spanish:
Hello → Hola
Goodbye → Adiós
Please → Por favor
Thank you → ?
```

Few-shot dramatically improves accuracy for structured, style-sensitive, or edge-case tasks. The model learns the pattern from the examples.
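
Few-shot prompts can be generated programmatically by rendering example pairs above the new input. A sketch of the translation prompt above (the arrow format is just this document's convention):

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Render the instruction, each example as 'input → output', then the query."""
    lines = [instruction]
    lines += [f"{src} → {dst}" for src, dst in examples]
    lines.append(f"{query} → ?")
    return "\n".join(lines)

examples = [("Hello", "Hola"), ("Goodbye", "Adiós"), ("Please", "Por favor")]
print(few_shot_prompt("Translate to Spanish:", examples, "Thank you"))
```

Because the examples live in a plain list, it is cheap to add an edge-case pair when the model starts drifting.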

## Chain-of-Thought Reasoning

Ask the model to "think step by step" or "show your reasoning." This improves performance on logic, math, and multi-step tasks.

```
What is 17 × 24?

Think through each step before giving the final answer.
```

Chain-of-thought (CoT) elicits intermediate reasoning, which reduces errors and makes the output verifiable.
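
A practical companion to CoT is asking the model to end with a fixed marker line so the final answer can be separated from the reasoning. A sketch, assuming the prompt also said "end with a line starting 'Final answer:'" (the marker string is an arbitrary choice, not a model requirement):

```python
def split_reasoning(response: str, marker: str = "Final answer:") -> tuple[str, str]:
    """Split a chain-of-thought response into (reasoning, final answer).

    If the marker is absent, reasoning is empty and the whole response
    is treated as the answer.
    """
    reasoning, _, answer = response.rpartition(marker)
    return reasoning.strip(), answer.strip()

sample = "17 × 24 = 17 × 20 + 17 × 4 = 340 + 68.\nFinal answer: 408"
reasoning, answer = split_reasoning(sample)
```

This keeps the verifiable reasoning available for review while giving downstream code a clean answer field.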

## System Prompts

Many APIs support a **system prompt** — instructions that apply across the conversation, separate from user messages. Use it for:

- Role and persona
- Output format preferences
- Guardrails and constraints
- Domain knowledge
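
As a sketch of how the separation looks in practice, here is a request payload in the Anthropic-style shape, where `system` is a top-level field (other providers instead use a `{"role": "system", ...}` message; the model id here is a placeholder):

```python
def build_request(system: str, user_message: str, model: str = "example-model") -> dict:
    """Build a chat request with a conversation-wide system prompt
    (Anthropic-style top-level field; field names vary by provider)."""
    return {
        "model": model,          # placeholder id, not a real model name
        "max_tokens": 1024,
        "system": system,
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request(
    system="You are a concise technical reviewer. Answer in markdown bullet points.",
    user_message="Review this function for error handling.",
)
```

The system prompt stays fixed across turns; only the `messages` list grows as the conversation continues.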

## Structured Output

Request specific formats to make output machine-readable:

- **JSON** — `Return a JSON object with: name, email, score`
- **XML** — `Wrap the summary in <summary> tags`
- **Markdown** — `Format as a markdown table with columns A, B, C`
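
Even when asked for JSON only, models sometimes wrap the object in a markdown code fence, so a tolerant parser is a common defensive step. A sketch (the sample payload is illustrative):

```python
import json

FENCE = "`" * 3  # a markdown code fence, built indirectly to keep this block valid

def parse_json_output(text: str) -> dict:
    """Strip an optional markdown fence around the model's JSON, then parse it."""
    cleaned = text.strip()
    if cleaned.startswith(FENCE):
        cleaned = cleaned.split("\n", 1)[1]    # drop the opening fence line
        cleaned = cleaned.rsplit(FENCE, 1)[0]  # drop the closing fence
    return json.loads(cleaned)

raw = FENCE + 'json\n{"name": "Ada", "email": "ada@example.com", "score": 97}\n' + FENCE
record = parse_json_output(raw)
```

Pairing this with a prompt constraint like "No markdown, no explanation — JSON only" reduces how often the fallback stripping is needed.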

## Iterating on Prompts

1. **Start simple** — one clear task
2. **Add context** — if output is generic or wrong
3. **Add examples** — if the format or style drifts
4. **Add constraints** — if output is too long or off-topic
5. **Test edge cases** — ambiguous inputs, empty inputs

## Temperature and Creativity

- **Low (0–0.3)** — Deterministic, factual, consistent. Good for code, extraction, classification.
- **High (0.7–1.0)** — Creative, varied. Good for brainstorming, stories, varied phrasing.
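
These ranges can be encoded as per-task defaults so callers pick a task type rather than guessing a raw number. A sketch — the task names and exact values below are illustrative tuning choices, not API requirements:

```python
# Illustrative starting temperatures per task type; tune for your model and use case.
TEMPERATURE_BY_TASK = {
    "code": 0.0,
    "extraction": 0.1,
    "classification": 0.2,
    "summarization": 0.4,
    "brainstorming": 0.9,
    "creative-writing": 1.0,
}

def temperature_for(task: str, default: float = 0.5) -> float:
    """Look up a starting temperature for a task type, with a balanced fallback."""
    return TEMPERATURE_BY_TASK.get(task, default)
```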

## Common Mistakes

| Mistake | Fix |
|---------|-----|
| Vague instructions | Be explicit: "List 3 pros and 3 cons" |
| No examples | Add 1–2 few-shot examples for format/style |
| Too many constraints at once | Add constraints incrementally |
| Assuming the model knows your context | Provide relevant background |
| One giant prompt | Break into steps or sub-tasks |

## Prompting by Task Type

| Task | Strategy |
|------|----------|
| **Summarization** | Specify length, key points, audience |
| **Analysis** | Define criteria, structure (pros/cons, compare/contrast) |
| **Code** | Language, framework, patterns, input/output format |
| **Creative writing** | Tone, length, style, examples of desired voice |

---

## Key Takeaways

1. **Structure your prompts** — role, context, task, format, constraints
2. **Few-shot beats zero-shot** for structured or style-sensitive tasks
3. **Chain-of-thought** helps with reasoning and multi-step problems
4. **Iterate** — start simple, add context and examples as needed
5. **Avoid** vague instructions, missing examples, and constraint overload
@@ -0,0 +1,82 @@
# Prompt Engineering — Exercises

## Exercise 1: Improve a Vague Prompt

**Task:** Someone wrote: "Make this email sound professional." Rewrite it to be specific. Include: audience, tone, length, and one example of a phrase to avoid or use.

**Validation:**
- [ ] Specifies the audience (e.g., client, internal)
- [ ] Defines "professional" (formal? concise? friendly-but-polished?)
- [ ] Includes length or structure (short paragraph? bullet points?)
- [ ] Gives at least one example of desired or undesired phrasing

**Hints:**
1. "Professional" is vague — formal? concise? industry jargon?
2. "Sound professional for a CFO" narrows the audience
3. "Use 'regarding', not 'about'; avoid exclamation points" — concrete style

---

## Exercise 2: Add Few-Shot Examples

**Task:** You need the AI to convert product names to URL slugs. Zero-shot: "Convert to URL slug." Add 2–3 few-shot examples. Example format: "Blue Widget Pro" → "blue-widget-pro".

**Validation:**
- [ ] At least 2 examples showing input → output
- [ ] Examples cover edge cases (capitals, spaces, special characters)
- [ ] Output format is consistent (lowercase, hyphens)

**Hints:**
1. Show: "Hello World" → "hello-world"
2. Add: "Super Product 2024" → "super-product-2024" (numbers?)
3. Consider: "O'Reilly Book" → "oreilly-book" (apostrophes)
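
To spot-check the model's slugs against ground truth, a small reference implementation helps. This is one possible slug convention covering the hint cases, not the only valid one:

```python
import re

def slugify(name: str) -> str:
    """Reference slug: lowercase, drop apostrophes, hyphenate other non-alphanumerics."""
    s = name.lower()
    s = s.replace("'", "")                # O'Reilly -> oreilly
    s = re.sub(r"[^a-z0-9]+", "-", s)     # runs of other chars -> single hyphen
    return s.strip("-")                   # no leading/trailing hyphens
```

Compare the model's few-shot outputs to `slugify(...)` on a handful of tricky names to see which edge cases your examples still miss.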

---

## Exercise 3: Request Structured Output

**Task:** Ask the AI to analyze a short product review and return: sentiment (positive/negative/neutral), main topic (1–3 words), and one suggested action for the business. Request JSON with keys `sentiment`, `topic`, `action`.

**Validation:**
- [ ] JSON structure is specified
- [ ] Keys are named exactly
- [ ] Value types are clear (string, etc.)
- [ ] Handles edge case: ambiguous or empty input

**Hints:**
1. "Return a JSON object with keys: sentiment, topic, action"
2. "No markdown, no explanation — JSON only"
3. "If unclear, use 'neutral' for sentiment"
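
A validator for the exercise's contract makes it easy to test the prompt against many reviews. A sketch (the sample review analysis is made up for illustration):

```python
import json

VALID_SENTIMENTS = {"positive", "negative", "neutral"}

def validate_review_analysis(raw: str) -> dict:
    """Parse the model's JSON and check the keys/values this exercise requires."""
    data = json.loads(raw)
    if set(data) != {"sentiment", "topic", "action"}:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    if data["sentiment"] not in VALID_SENTIMENTS:
        raise ValueError(f"bad sentiment: {data['sentiment']!r}")
    if not 1 <= len(data["topic"].split()) <= 3:
        raise ValueError(f"topic must be 1-3 words: {data['topic']!r}")
    return data

ok = validate_review_analysis(
    '{"sentiment": "negative", "topic": "battery life", "action": "Investigate battery complaints"}'
)
```

Run it over the model's outputs for ambiguous and empty reviews to confirm the "use 'neutral' if unclear" instruction is being followed.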

---

## Exercise 4: Add Chain-of-Thought

**Task:** Give the AI a multi-step logic puzzle (e.g., "Three people: A, B, C. A is taller than B. B is taller than C. Who is shortest?"). First prompt: just ask for the answer. Second prompt: ask it to "reason step by step, then give the answer." Compare the outputs.

**Validation:**
- [ ] CoT prompt explicitly asks for reasoning first
- [ ] User observes that CoT yields a more reliable answer (or catches errors)
- [ ] User can state when CoT helps (logic, math, debugging)

**Hints:**
1. "Think through each step before answering"
2. "Show your reasoning, then state the final answer"
3. For complex tasks, CoT reduces confidently wrong answers

---

## Exercise 5: Fix Common Mistakes

**Task:** This prompt has problems: "Be creative and write something good about our product. Make it long and detailed. Use lots of adjectives." Rewrite it to avoid vagueness, conflicting constraints, and missing examples. Produce a better version.

**Validation:**
- [ ] Replaces "creative" and "good" with specifics (tone, audience, format)
- [ ] "Long" is quantified (e.g., 2 paragraphs, 150 words)
- [ ] Adds at least one example of desired style or output
- [ ] Removes or reconciles conflicting instructions

**Hints:**
1. "Creative" → "Convey enthusiasm without hype; avoid superlatives"
2. "Long" → "2 paragraphs, ~100 words"
3. Add an example: "Like X, not like Y"
@@ -0,0 +1,101 @@
games:
  - type: classify
    title: "Prompt Technique Classifier"
    categories:
      - name: "Zero-Shot"
        color: "#58a6ff"
      - name: "Few-Shot"
        color: "#3fb950"
      - name: "Chain-of-Thought"
        color: "#d29922"
      - name: "Structured Output"
        color: "#a371f7"
    items:
      - text: "Translate the following text to French: Hello, how are you?"
        category: "Zero-Shot"
      - text: "Here are 3 examples of sentiment: 'I love it' = positive, 'terrible' = negative, 'it's okay' = neutral. Now classify: 'Best product ever!'"
        category: "Few-Shot"
      - text: "Solve: If a store has 4 apples and sells half of them, then buys 5 more, how many does it have? Think step by step."
        category: "Chain-of-Thought"
      - text: "List the top 3 programming languages. Respond in JSON: {\"languages\": [{\"name\": \"...\", \"reason\": \"...\"}]}"
        category: "Structured Output"
      - text: "What is the capital of Japan?"
        category: "Zero-Shot"
      - text: "Example 1: 'hot' → temperature. Example 2: 'bright' → light. Example 3: 'loud' → sound. Categorize: 'fragrant'"
        category: "Few-Shot"
      - text: "A train leaves at 2pm going 60 mph. Another leaves at 3pm from the same station at 90 mph. When does the second catch the first? Show your reasoning."
        category: "Chain-of-Thought"
      - text: "Summarize this article. Output format: {\"title\": \"string\", \"keyPoints\": [\"...\"], \"wordCount\": number}"
        category: "Structured Output"
      - text: "Rewrite this sentence to be more professional."
        category: "Zero-Shot"
      - text: "Few-shot: 'run' → verb, 'quickly' → adverb, 'beautiful' → adjective. What part of speech is 'happiness'?"
        category: "Few-Shot"

  - type: speed-round
    title: "Prompt Fix Sprint"
    rounds:
      - question: "The model gives inconsistent formats. What's the best fix?"
        options:
          - "Add more context"
          - "Add a format constraint (e.g. JSON schema, template)"
          - "Use a different model"
          - "Retry multiple times"
        answer: 1
        timeLimit: 16
      - question: "The model fails on multi-step reasoning. What should you add?"
        options:
          - "More examples"
          - "Chain-of-thought (e.g. 'Think step by step')"
          - "A lower temperature"
          - "A shorter prompt"
        answer: 1
        timeLimit: 16
      - question: "Outputs vary wildly between runs. What should you try?"
        options:
          - "Add examples"
          - "Add a format constraint"
          - "Lower the temperature or set a seed"
          - "Use CoT"
        answer: 2
        timeLimit: 15
      - question: "The model misunderstands edge cases. Best fix?"
        options:
          - "Add few-shot examples covering the edge cases"
          - "Use structured output"
          - "Increase max tokens"
          - "Add a disclaimer"
        answer: 0
        timeLimit: 16
      - question: "Responses are too verbose. What helps?"
        options:
          - "Add 'Be concise' or a length constraint"
          - "Add more examples"
          - "Use chain-of-thought"
          - "Raise the temperature"
        answer: 0
        timeLimit: 15
      - question: "The model gives wrong answers on classification. Fix?"
        options:
          - "Add a format constraint"
          - "Add few-shot examples with correct labels"
          - "Lower the temperature only"
          - "Use CoT"
        answer: 1
        timeLimit: 16
      - question: "JSON output is malformed. What should you do?"
        options:
          - "Add examples"
          - "Add a format constraint / schema / 'return valid JSON'"
          - "Use CoT"
          - "Retry on failure"
        answer: 1
        timeLimit: 15
      - question: "The model skips steps in a procedure. Best approach?"
        options:
          - "Add a format constraint"
          - "Add an explicit chain-of-thought instruction"
          - "Lower the temperature"
          - "Add more examples"
        answer: 1
        timeLimit: 16
@@ -0,0 +1,45 @@
slug: prompt-engineering
title: "Prompt Engineering — Techniques That Get Better AI Outputs"
version: 1.0.0
description: "Learn what makes a good prompt, zero-shot vs few-shot prompting, chain-of-thought reasoning, and structured output for better AI results."
category: ai-and-llm
tags: [prompt-engineering, ai, llm, claude, prompting, few-shot]
difficulty: beginner

xp:
  read: 10
  walkthrough: 30
  exercise: 20
  quiz: 15
  quiz-perfect-bonus: 10
  game: 20
  game-perfect-bonus: 10

time:
  quick: 5
  read: 15
  guided: 45

prerequisites: []
related: [llm-fundamentals, ai-pair-programming]

triggers:
  - "How do I write better prompts?"
  - "What is prompt engineering?"
  - "How do I get better results from Claude?"
  - "What are few-shot examples?"

visuals:
  diagrams: [diagram-mermaid, diagram-flow]
  quiz-types: [quiz-drag-order, quiz-timed-choice, quiz-fill-blank]
  game-types: [classify, speed-round]
  playground: bash
  slides: true

sources:
  - url: "https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering"
    label: "Anthropic Prompt Engineering Guide"
    type: docs
  - url: "https://platform.openai.com/docs/guides/prompt-engineering"
    label: "OpenAI Prompt Engineering Guide"
    type: docs
@@ -0,0 +1,65 @@
# Prompt Engineering Quick Reference

## Prompt Anatomy

| Component | Purpose | Example |
|-----------|---------|---------|
| **Role** | AI persona | "You are an expert Python developer." |
| **Context** | Background | "Codebase uses FastAPI and Pydantic." |
| **Task** | The request | "Add validation for the email field." |
| **Format** | Output structure | "Return JSON with keys: valid, errors." |
| **Constraints** | Limits | "Under 100 words. No external APIs." |

## Zero-Shot vs Few-Shot

| Type | When to Use |
|------|-------------|
| **Zero-shot** | Simple, well-defined tasks |
| **Few-shot** | Structured output, style-sensitive tasks, edge cases |

## Chain-of-Thought

- Ask: "Think step by step" / "Show your reasoning"
- Improves: logic, math, multi-step tasks
- Reduces: confidently wrong answers

## Temperature

| Range | Use Case |
|-------|----------|
| 0–0.3 | Factual, code, extraction, classification |
| 0.5–0.7 | Balanced |
| 0.7–1.0 | Creative, varied, brainstorming |

## Structured Output

- **JSON** — `Return a JSON object with: name, email, score`
- **XML** — `Wrap the summary in <summary> tags`
- **Markdown** — `Format as a markdown table with columns A, B, C`

## Iteration Order

1. Start simple — one clear task
2. Add context — if output is generic
3. Add examples — if format/style drifts
4. Add constraints — if too long or off-topic
5. Test edge cases — ambiguous, empty input

## Common Mistakes

| Mistake | Fix |
|---------|-----|
| Vague instructions | Be explicit: "List 3 pros and 3 cons" |
| No examples | Add 1–2 few-shot examples |
| Too many constraints | Add incrementally |
| Assuming model knows context | Provide relevant background |
| One giant prompt | Break into steps or sub-tasks |

## By Task Type

| Task | Strategy |
|------|----------|
| Summarization | Specify length, key points, audience |
| Analysis | Define criteria, structure (pros/cons) |
| Code | Language, framework, patterns, I/O format |
| Creative | Tone, length, style, examples |