@shaykec/bridge 0.4.25 → 0.4.26

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (319)
  1. package/journeys/ai-engineer.yaml +34 -0
  2. package/journeys/backend-developer.yaml +36 -0
  3. package/journeys/business-analyst.yaml +37 -0
  4. package/journeys/devops-engineer.yaml +37 -0
  5. package/journeys/engineering-manager.yaml +44 -0
  6. package/journeys/frontend-developer.yaml +41 -0
  7. package/journeys/fullstack-developer.yaml +49 -0
  8. package/journeys/mobile-developer.yaml +42 -0
  9. package/journeys/product-manager.yaml +35 -0
  10. package/journeys/qa-engineer.yaml +37 -0
  11. package/journeys/ux-designer.yaml +43 -0
  12. package/modules/README.md +52 -0
  13. package/modules/accessibility-fundamentals/content.md +126 -0
  14. package/modules/accessibility-fundamentals/exercises.md +88 -0
  15. package/modules/accessibility-fundamentals/module.yaml +43 -0
  16. package/modules/accessibility-fundamentals/quick-ref.md +71 -0
  17. package/modules/accessibility-fundamentals/quiz.md +100 -0
  18. package/modules/accessibility-fundamentals/resources.md +29 -0
  19. package/modules/accessibility-fundamentals/walkthrough.md +80 -0
  20. package/modules/adr-writing/content.md +121 -0
  21. package/modules/adr-writing/exercises.md +81 -0
  22. package/modules/adr-writing/module.yaml +41 -0
  23. package/modules/adr-writing/quick-ref.md +57 -0
  24. package/modules/adr-writing/quiz.md +73 -0
  25. package/modules/adr-writing/resources.md +29 -0
  26. package/modules/adr-writing/walkthrough.md +64 -0
  27. package/modules/ai-agents/content.md +120 -0
  28. package/modules/ai-agents/exercises.md +82 -0
  29. package/modules/ai-agents/module.yaml +42 -0
  30. package/modules/ai-agents/quick-ref.md +60 -0
  31. package/modules/ai-agents/quiz.md +103 -0
  32. package/modules/ai-agents/resources.md +30 -0
  33. package/modules/ai-agents/walkthrough.md +85 -0
  34. package/modules/ai-assisted-research/content.md +136 -0
  35. package/modules/ai-assisted-research/exercises.md +80 -0
  36. package/modules/ai-assisted-research/module.yaml +42 -0
  37. package/modules/ai-assisted-research/quick-ref.md +67 -0
  38. package/modules/ai-assisted-research/quiz.md +73 -0
  39. package/modules/ai-assisted-research/resources.md +33 -0
  40. package/modules/ai-assisted-research/walkthrough.md +85 -0
  41. package/modules/ai-pair-programming/content.md +105 -0
  42. package/modules/ai-pair-programming/exercises.md +98 -0
  43. package/modules/ai-pair-programming/module.yaml +39 -0
  44. package/modules/ai-pair-programming/quick-ref.md +58 -0
  45. package/modules/ai-pair-programming/quiz.md +73 -0
  46. package/modules/ai-pair-programming/resources.md +34 -0
  47. package/modules/ai-pair-programming/walkthrough.md +117 -0
  48. package/modules/ai-test-generation/content.md +125 -0
  49. package/modules/ai-test-generation/exercises.md +98 -0
  50. package/modules/ai-test-generation/module.yaml +39 -0
  51. package/modules/ai-test-generation/quick-ref.md +65 -0
  52. package/modules/ai-test-generation/quiz.md +74 -0
  53. package/modules/ai-test-generation/resources.md +41 -0
  54. package/modules/ai-test-generation/walkthrough.md +100 -0
  55. package/modules/api-design/content.md +189 -0
  56. package/modules/api-design/exercises.md +84 -0
  57. package/modules/api-design/game.yaml +113 -0
  58. package/modules/api-design/module.yaml +45 -0
  59. package/modules/api-design/quick-ref.md +73 -0
  60. package/modules/api-design/quiz.md +100 -0
  61. package/modules/api-design/resources.md +55 -0
  62. package/modules/api-design/walkthrough.md +88 -0
  63. package/modules/clean-code/content.md +136 -0
  64. package/modules/clean-code/exercises.md +137 -0
  65. package/modules/clean-code/game.yaml +172 -0
  66. package/modules/clean-code/module.yaml +44 -0
  67. package/modules/clean-code/quick-ref.md +44 -0
  68. package/modules/clean-code/quiz.md +105 -0
  69. package/modules/clean-code/resources.md +40 -0
  70. package/modules/clean-code/walkthrough.md +78 -0
  71. package/modules/clean-code/workshop.yaml +149 -0
  72. package/modules/code-review/content.md +130 -0
  73. package/modules/code-review/exercises.md +95 -0
  74. package/modules/code-review/game.yaml +83 -0
  75. package/modules/code-review/module.yaml +42 -0
  76. package/modules/code-review/quick-ref.md +77 -0
  77. package/modules/code-review/quiz.md +105 -0
  78. package/modules/code-review/resources.md +40 -0
  79. package/modules/code-review/walkthrough.md +106 -0
  80. package/modules/daily-workflow/content.md +81 -0
  81. package/modules/daily-workflow/exercises.md +50 -0
  82. package/modules/daily-workflow/module.yaml +33 -0
  83. package/modules/daily-workflow/quick-ref.md +37 -0
  84. package/modules/daily-workflow/quiz.md +65 -0
  85. package/modules/daily-workflow/resources.md +38 -0
  86. package/modules/daily-workflow/walkthrough.md +83 -0
  87. package/modules/debugging-systematically/content.md +139 -0
  88. package/modules/debugging-systematically/exercises.md +91 -0
  89. package/modules/debugging-systematically/module.yaml +46 -0
  90. package/modules/debugging-systematically/quick-ref.md +59 -0
  91. package/modules/debugging-systematically/quiz.md +105 -0
  92. package/modules/debugging-systematically/resources.md +42 -0
  93. package/modules/debugging-systematically/walkthrough.md +84 -0
  94. package/modules/debugging-systematically/workshop.yaml +127 -0
  95. package/modules/demo-test/content.md +68 -0
  96. package/modules/demo-test/exercises.md +28 -0
  97. package/modules/demo-test/game.yaml +171 -0
  98. package/modules/demo-test/module.yaml +41 -0
  99. package/modules/demo-test/quick-ref.md +54 -0
  100. package/modules/demo-test/quiz.md +74 -0
  101. package/modules/demo-test/resources.md +21 -0
  102. package/modules/demo-test/walkthrough.md +122 -0
  103. package/modules/demo-test/workshop.yaml +31 -0
  104. package/modules/design-critique/content.md +93 -0
  105. package/modules/design-critique/exercises.md +71 -0
  106. package/modules/design-critique/module.yaml +41 -0
  107. package/modules/design-critique/quick-ref.md +63 -0
  108. package/modules/design-critique/quiz.md +73 -0
  109. package/modules/design-critique/resources.md +27 -0
  110. package/modules/design-critique/walkthrough.md +68 -0
  111. package/modules/design-patterns/content.md +335 -0
  112. package/modules/design-patterns/exercises.md +82 -0
  113. package/modules/design-patterns/game.yaml +55 -0
  114. package/modules/design-patterns/module.yaml +45 -0
  115. package/modules/design-patterns/quick-ref.md +44 -0
  116. package/modules/design-patterns/quiz.md +101 -0
  117. package/modules/design-patterns/resources.md +40 -0
  118. package/modules/design-patterns/walkthrough.md +64 -0
  119. package/modules/exploratory-testing/content.md +133 -0
  120. package/modules/exploratory-testing/exercises.md +88 -0
  121. package/modules/exploratory-testing/module.yaml +41 -0
  122. package/modules/exploratory-testing/quick-ref.md +68 -0
  123. package/modules/exploratory-testing/quiz.md +75 -0
  124. package/modules/exploratory-testing/resources.md +39 -0
  125. package/modules/exploratory-testing/walkthrough.md +87 -0
  126. package/modules/git/content.md +128 -0
  127. package/modules/git/exercises.md +53 -0
  128. package/modules/git/game.yaml +190 -0
  129. package/modules/git/module.yaml +44 -0
  130. package/modules/git/quick-ref.md +67 -0
  131. package/modules/git/quiz.md +89 -0
  132. package/modules/git/resources.md +49 -0
  133. package/modules/git/walkthrough.md +92 -0
  134. package/modules/git/workshop.yaml +145 -0
  135. package/modules/hiring-interviews/content.md +130 -0
  136. package/modules/hiring-interviews/exercises.md +88 -0
  137. package/modules/hiring-interviews/module.yaml +41 -0
  138. package/modules/hiring-interviews/quick-ref.md +68 -0
  139. package/modules/hiring-interviews/quiz.md +73 -0
  140. package/modules/hiring-interviews/resources.md +36 -0
  141. package/modules/hiring-interviews/walkthrough.md +75 -0
  142. package/modules/hooks/content.md +97 -0
  143. package/modules/hooks/exercises.md +69 -0
  144. package/modules/hooks/module.yaml +39 -0
  145. package/modules/hooks/quick-ref.md +93 -0
  146. package/modules/hooks/quiz.md +81 -0
  147. package/modules/hooks/resources.md +34 -0
  148. package/modules/hooks/walkthrough.md +105 -0
  149. package/modules/hooks/workshop.yaml +64 -0
  150. package/modules/incident-response/content.md +124 -0
  151. package/modules/incident-response/exercises.md +82 -0
  152. package/modules/incident-response/game.yaml +132 -0
  153. package/modules/incident-response/module.yaml +45 -0
  154. package/modules/incident-response/quick-ref.md +53 -0
  155. package/modules/incident-response/quiz.md +103 -0
  156. package/modules/incident-response/resources.md +40 -0
  157. package/modules/incident-response/walkthrough.md +82 -0
  158. package/modules/llm-fundamentals/content.md +114 -0
  159. package/modules/llm-fundamentals/exercises.md +83 -0
  160. package/modules/llm-fundamentals/module.yaml +42 -0
  161. package/modules/llm-fundamentals/quick-ref.md +64 -0
  162. package/modules/llm-fundamentals/quiz.md +103 -0
  163. package/modules/llm-fundamentals/resources.md +30 -0
  164. package/modules/llm-fundamentals/walkthrough.md +91 -0
  165. package/modules/one-on-ones/content.md +133 -0
  166. package/modules/one-on-ones/exercises.md +81 -0
  167. package/modules/one-on-ones/module.yaml +44 -0
  168. package/modules/one-on-ones/quick-ref.md +67 -0
  169. package/modules/one-on-ones/quiz.md +73 -0
  170. package/modules/one-on-ones/resources.md +37 -0
  171. package/modules/one-on-ones/walkthrough.md +69 -0
  172. package/modules/package.json +9 -0
  173. package/modules/prioritization-frameworks/content.md +130 -0
  174. package/modules/prioritization-frameworks/exercises.md +93 -0
  175. package/modules/prioritization-frameworks/module.yaml +41 -0
  176. package/modules/prioritization-frameworks/quick-ref.md +77 -0
  177. package/modules/prioritization-frameworks/quiz.md +73 -0
  178. package/modules/prioritization-frameworks/resources.md +32 -0
  179. package/modules/prioritization-frameworks/walkthrough.md +69 -0
  180. package/modules/prompt-engineering/content.md +123 -0
  181. package/modules/prompt-engineering/exercises.md +82 -0
  182. package/modules/prompt-engineering/game.yaml +101 -0
  183. package/modules/prompt-engineering/module.yaml +45 -0
  184. package/modules/prompt-engineering/quick-ref.md +65 -0
  185. package/modules/prompt-engineering/quiz.md +105 -0
  186. package/modules/prompt-engineering/resources.md +36 -0
  187. package/modules/prompt-engineering/walkthrough.md +81 -0
  188. package/modules/rag-fundamentals/content.md +111 -0
  189. package/modules/rag-fundamentals/exercises.md +80 -0
  190. package/modules/rag-fundamentals/module.yaml +45 -0
  191. package/modules/rag-fundamentals/quick-ref.md +58 -0
  192. package/modules/rag-fundamentals/quiz.md +75 -0
  193. package/modules/rag-fundamentals/resources.md +34 -0
  194. package/modules/rag-fundamentals/walkthrough.md +75 -0
  195. package/modules/react-fundamentals/content.md +140 -0
  196. package/modules/react-fundamentals/exercises.md +81 -0
  197. package/modules/react-fundamentals/game.yaml +145 -0
  198. package/modules/react-fundamentals/module.yaml +45 -0
  199. package/modules/react-fundamentals/quick-ref.md +62 -0
  200. package/modules/react-fundamentals/quiz.md +106 -0
  201. package/modules/react-fundamentals/resources.md +42 -0
  202. package/modules/react-fundamentals/walkthrough.md +89 -0
  203. package/modules/react-fundamentals/workshop.yaml +112 -0
  204. package/modules/react-native-fundamentals/content.md +141 -0
  205. package/modules/react-native-fundamentals/exercises.md +79 -0
  206. package/modules/react-native-fundamentals/module.yaml +42 -0
  207. package/modules/react-native-fundamentals/quick-ref.md +60 -0
  208. package/modules/react-native-fundamentals/quiz.md +61 -0
  209. package/modules/react-native-fundamentals/resources.md +24 -0
  210. package/modules/react-native-fundamentals/walkthrough.md +84 -0
  211. package/modules/registry.yaml +1650 -0
  212. package/modules/risk-management/content.md +162 -0
  213. package/modules/risk-management/exercises.md +86 -0
  214. package/modules/risk-management/module.yaml +41 -0
  215. package/modules/risk-management/quick-ref.md +82 -0
  216. package/modules/risk-management/quiz.md +73 -0
  217. package/modules/risk-management/resources.md +40 -0
  218. package/modules/risk-management/walkthrough.md +67 -0
  219. package/modules/running-effective-standups/content.md +119 -0
  220. package/modules/running-effective-standups/exercises.md +79 -0
  221. package/modules/running-effective-standups/module.yaml +40 -0
  222. package/modules/running-effective-standups/quick-ref.md +61 -0
  223. package/modules/running-effective-standups/quiz.md +73 -0
  224. package/modules/running-effective-standups/resources.md +36 -0
  225. package/modules/running-effective-standups/walkthrough.md +76 -0
  226. package/modules/solid-principles/content.md +154 -0
  227. package/modules/solid-principles/exercises.md +107 -0
  228. package/modules/solid-principles/module.yaml +42 -0
  229. package/modules/solid-principles/quick-ref.md +50 -0
  230. package/modules/solid-principles/quiz.md +102 -0
  231. package/modules/solid-principles/resources.md +39 -0
  232. package/modules/solid-principles/walkthrough.md +84 -0
  233. package/modules/sprint-planning/content.md +142 -0
  234. package/modules/sprint-planning/exercises.md +79 -0
  235. package/modules/sprint-planning/game.yaml +84 -0
  236. package/modules/sprint-planning/module.yaml +44 -0
  237. package/modules/sprint-planning/quick-ref.md +76 -0
  238. package/modules/sprint-planning/quiz.md +102 -0
  239. package/modules/sprint-planning/resources.md +39 -0
  240. package/modules/sprint-planning/walkthrough.md +75 -0
  241. package/modules/sql-fundamentals/content.md +160 -0
  242. package/modules/sql-fundamentals/exercises.md +87 -0
  243. package/modules/sql-fundamentals/game.yaml +105 -0
  244. package/modules/sql-fundamentals/module.yaml +45 -0
  245. package/modules/sql-fundamentals/quick-ref.md +53 -0
  246. package/modules/sql-fundamentals/quiz.md +103 -0
  247. package/modules/sql-fundamentals/resources.md +42 -0
  248. package/modules/sql-fundamentals/walkthrough.md +92 -0
  249. package/modules/sql-fundamentals/workshop.yaml +109 -0
  250. package/modules/stakeholder-communication/content.md +186 -0
  251. package/modules/stakeholder-communication/exercises.md +87 -0
  252. package/modules/stakeholder-communication/module.yaml +38 -0
  253. package/modules/stakeholder-communication/quick-ref.md +89 -0
  254. package/modules/stakeholder-communication/quiz.md +73 -0
  255. package/modules/stakeholder-communication/resources.md +41 -0
  256. package/modules/stakeholder-communication/walkthrough.md +74 -0
  257. package/modules/system-design/content.md +149 -0
  258. package/modules/system-design/exercises.md +83 -0
  259. package/modules/system-design/game.yaml +95 -0
  260. package/modules/system-design/module.yaml +46 -0
  261. package/modules/system-design/quick-ref.md +59 -0
  262. package/modules/system-design/quiz.md +102 -0
  263. package/modules/system-design/resources.md +46 -0
  264. package/modules/system-design/walkthrough.md +90 -0
  265. package/modules/team-topologies/content.md +166 -0
  266. package/modules/team-topologies/exercises.md +85 -0
  267. package/modules/team-topologies/module.yaml +41 -0
  268. package/modules/team-topologies/quick-ref.md +61 -0
  269. package/modules/team-topologies/quiz.md +101 -0
  270. package/modules/team-topologies/resources.md +37 -0
  271. package/modules/team-topologies/walkthrough.md +76 -0
  272. package/modules/technical-debt/content.md +111 -0
  273. package/modules/technical-debt/exercises.md +92 -0
  274. package/modules/technical-debt/module.yaml +39 -0
  275. package/modules/technical-debt/quick-ref.md +60 -0
  276. package/modules/technical-debt/quiz.md +73 -0
  277. package/modules/technical-debt/resources.md +25 -0
  278. package/modules/technical-debt/walkthrough.md +94 -0
  279. package/modules/technical-mentoring/content.md +128 -0
  280. package/modules/technical-mentoring/exercises.md +84 -0
  281. package/modules/technical-mentoring/module.yaml +41 -0
  282. package/modules/technical-mentoring/quick-ref.md +74 -0
  283. package/modules/technical-mentoring/quiz.md +73 -0
  284. package/modules/technical-mentoring/resources.md +33 -0
  285. package/modules/technical-mentoring/walkthrough.md +65 -0
  286. package/modules/test-strategy/content.md +136 -0
  287. package/modules/test-strategy/exercises.md +84 -0
  288. package/modules/test-strategy/game.yaml +99 -0
  289. package/modules/test-strategy/module.yaml +45 -0
  290. package/modules/test-strategy/quick-ref.md +66 -0
  291. package/modules/test-strategy/quiz.md +99 -0
  292. package/modules/test-strategy/resources.md +60 -0
  293. package/modules/test-strategy/walkthrough.md +97 -0
  294. package/modules/test-strategy/workshop.yaml +96 -0
  295. package/modules/typescript-fundamentals/content.md +127 -0
  296. package/modules/typescript-fundamentals/exercises.md +79 -0
  297. package/modules/typescript-fundamentals/game.yaml +111 -0
  298. package/modules/typescript-fundamentals/module.yaml +45 -0
  299. package/modules/typescript-fundamentals/quick-ref.md +55 -0
  300. package/modules/typescript-fundamentals/quiz.md +104 -0
  301. package/modules/typescript-fundamentals/resources.md +42 -0
  302. package/modules/typescript-fundamentals/walkthrough.md +71 -0
  303. package/modules/typescript-fundamentals/workshop.yaml +146 -0
  304. package/modules/user-story-mapping/content.md +123 -0
  305. package/modules/user-story-mapping/exercises.md +87 -0
  306. package/modules/user-story-mapping/module.yaml +41 -0
  307. package/modules/user-story-mapping/quick-ref.md +64 -0
  308. package/modules/user-story-mapping/quiz.md +73 -0
  309. package/modules/user-story-mapping/resources.md +29 -0
  310. package/modules/user-story-mapping/walkthrough.md +86 -0
  311. package/modules/writing-prds/content.md +133 -0
  312. package/modules/writing-prds/exercises.md +93 -0
  313. package/modules/writing-prds/game.yaml +83 -0
  314. package/modules/writing-prds/module.yaml +44 -0
  315. package/modules/writing-prds/quick-ref.md +77 -0
  316. package/modules/writing-prds/quiz.md +103 -0
  317. package/modules/writing-prds/resources.md +30 -0
  318. package/modules/writing-prds/walkthrough.md +87 -0
  319. package/package.json +1 -1
@@ -0,0 +1,105 @@
+ # Prompt Engineering — Quiz
+
+ ## Question 1
+
+ Which component of a prompt sets the AI's persona and expertise?
+
+ A) Context
+ B) Task
+ C) Role
+ D) Format
+
+ <!-- ANSWER: C -->
+ <!-- EXPLANATION: Role sets who the AI is (e.g., "You are an expert Python developer"). Context provides background; Task is the request; Format defines output structure. -->
+
+ ## Question 2
+
+ Few-shot prompting improves output most for:
+
+ A) Simple translation
+ B) Structured or style-sensitive tasks where the pattern matters
+ C) Very long documents
+ D) Questions with a single correct answer
+
+ <!-- ANSWER: B -->
+ <!-- EXPLANATION: Few-shot gives examples so the model learns the pattern. It helps when format, style, or edge-case handling matters. Simple translation often works with zero-shot. -->
+
+ ## Question 3
+
+ Chain-of-thought prompting is most useful for:
+
+ A) Short factual answers
+ B) Logic, math, and multi-step reasoning tasks
+ C) Creative writing
+ D) Summarization only
+
+ <!-- ANSWER: B -->
+ <!-- EXPLANATION: CoT asks the model to "think step by step," which improves performance on reasoning, logic, and multi-step problems. It reduces errors by surfacing intermediate reasoning. -->
+
+ ## Question 4
+
+ For extracting structured data (e.g., names, dates) from text, you should request:
+
+ A) Freeform prose
+ B) JSON or XML with specified keys
+ C) Markdown only
+ D) Bullet points
+
+ <!-- ANSWER: B -->
+ <!-- EXPLANATION: Structured output (JSON, XML) with specified keys makes the output machine-readable and parseable. Freeform prose requires extra parsing; structured format enables integration. -->
+
+ ## Question 5
+
+ Low temperature (0–0.3) is best for:
+
+ A) Brainstorming creative ideas
+ B) Factual extraction, code generation, classification
+ C) Varied storytelling
+ D) Open-ended exploration
+
+ <!-- ANSWER: B -->
+ <!-- EXPLANATION: Low temperature makes output deterministic and consistent — good for factual tasks, code, extraction. High temperature increases variety and creativity but also randomness. -->
+
+ ## Question 6
+
+ A prompt produces generic, off-topic output. The best first fix is usually:
+
+ A) Add more constraints
+ B) Add context and be more specific about the task
+ C) Increase temperature
+ D) Use a different model
+
+ <!-- ANSWER: B -->
+ <!-- EXPLANATION: Generic output often means insufficient context or vague task. Adding context and specificity usually helps before adding constraints. Temperature and model choice come later if needed. -->
+
+ ## Question 7
+
+ <!-- VISUAL: drag-order -->
+
+ Put these steps in the correct order for improving a vague prompt:
+
+ A) Add few-shot examples if the format matters
+ B) Define the role and context
+ C) Specify the output format (JSON, markdown, etc.)
+ D) Refine the task with concrete requirements
+
+ <!-- ANSWER: B,D,C,A -->
+ <!-- EXPLANATION: Start with role and context (B) to ground the AI. Then refine the task with specifics (D). Define output format (C) for parseability. Add examples (A) if style or structure is critical. -->
+
+ ## Question 8
+
+ <!-- VISUAL: fill-blank -->
+
+ Complete the prompt template for structured extraction:
+
+ ```
+ You are a ___0___ assistant. Extract the following from the text:
+ - name
+ - date
+ - amount
+
+ Return valid JSON with keys: name, date, amount.
+ ```
+
+ <!-- ANSWER: data extraction -->
+ <!-- EXPLANATION: The template sets role (data extraction assistant), defines the task (extract specific fields), and specifies output format (JSON with keys). "data extraction" or "structured data" fits the role. -->
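The Question 8 template, with the blank filled in, can be assembled programmatically; a minimal sketch (the `build_extraction_prompt` helper and its parameters are invented for illustration, using the field names from the question):

```python
# Fill the Question 8 fill-blank template: role + task + output format.
FIELDS = ["name", "date", "amount"]

def build_extraction_prompt(role: str, fields: list[str]) -> str:
    """Build a structured-extraction prompt with an explicit JSON contract."""
    field_lines = "\n".join(f"- {f}" for f in fields)
    return (
        f"You are a {role} assistant. Extract the following from the text:\n"
        f"{field_lines}\n\n"
        f"Return valid JSON with keys: {', '.join(fields)}."
    )

prompt = build_extraction_prompt("data extraction", FIELDS)
print(prompt)
```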
@@ -0,0 +1,36 @@
+ # Prompt Engineering — Resources
+
+ ## Official Docs
+
+ - [Anthropic — Prompt Engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering) — Claude prompting guide.
+ - [OpenAI — Prompt Engineering Guide](https://platform.openai.com/docs/guides/prompt-engineering) — OpenAI best practices.
+ - [Google — Introduction to Prompt Design](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/prompt-design-strategies) — Vertex AI prompting.
+
+ ## Videos
+
+ - [Anthropic — Prompt Engineering](https://www.youtube.com/results?search_query=Anthropic+prompt+engineering) — Claude prompting videos.
+ - [Andrej Karpathy — Prompt Engineering](https://www.youtube.com/results?search_query=Karpathy+prompt+engineering) — LLM prompting concepts.
+ - [Fireship — Prompt Engineering](https://www.youtube.com/results?search_query=Fireship+prompt+engineering) — Quick overview.
+
+ ## Articles
+
+ - [Learn Prompting](https://learnprompting.org/) — Free prompt engineering course.
+ - [Lilian Weng — Prompt Engineering](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/) — Technical overview.
+ - [Simon Willison — Prompting](https://simonwillison.net/series/prompting/) — Practical prompt patterns.
+ - [OpenAI — Prompt Engineering Techniques](https://platform.openai.com/docs/guides/prompt-engineering/prompting-guide) — Few-shot, CoT, structured output.
+
+ ## Books
+
+ - **Prompt Engineering Guide** (various) — Prompt patterns and techniques.
+ - **AI Engineering** by Chip Huyen — Production ML and LLM systems.
+
+ ## Podcasts
+
+ - [Practical AI](https://changelog.com/practicalai) — Episodes on prompting and LLM workflows.
+ - [The TWIML AI Podcast](https://twimlai.com/) — ML and LLM industry discussions.
+
+ ## Tools
+
+ - [OpenAI Playground](https://platform.openai.com/playground) — Experiment with prompts.
+ - [Claude API](https://docs.anthropic.com/en/api/getting-started) — Claude prompting and parameters.
+ - [LangSmith](https://smith.langchain.com/) — Prompt debugging and tracing.
@@ -0,0 +1,81 @@
+ # Prompt Engineering Walkthrough — Learn by Doing
+
+ ## Step 1: Structure a Basic Prompt
+
+ <!-- hint:code language="text" highlight="1,3" -->
+
+ **Task:** You want the AI to summarize a meeting transcript. Write a prompt that includes: role, context (what kind of meeting), task (summarize), format (bullet points? sections?), and one constraint (e.g., max 200 words).
+
+ **Question:** What would happen if you only said "Summarize this"? What does each component add?
+
+ **Checkpoint:** The user's prompt has at least role, context, task, format, and one constraint. They can explain why each piece improves output (e.g., format ensures machine-readable structure).
+
+ ---
+
+ ## Step 2: Zero-Shot vs Few-Shot
+
+ <!-- hint:buttons type="single" prompt="When do you need few-shot examples?" options="Simple classification,Style-sensitive tasks,Broad summarization" -->
+
+ **Task:** You need the AI to classify support tickets as "bug", "feature request", or "question". Write (a) a zero-shot prompt and (b) a few-shot prompt with 2–3 examples. Run both (or predict): which would you expect to perform better? Why?
+
+ **Question:** For which tasks would zero-shot be enough? When do you *need* few-shot?
+
+ **Checkpoint:** The user has both prompts. They understand few-shot establishes the pattern for structured or style-sensitive tasks. They can name cases where zero-shot suffices (simple translation, broad summarization).
+
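One way to sketch part (b): assemble the few-shot prompt from labeled examples so the model sees both the label set and the one-word output format. The example tickets below are invented for illustration:

```python
# Few-shot prompt for ticket classification: the examples establish the
# label vocabulary and the expected "Label:" answer format.
EXAMPLES = [  # hypothetical labeled tickets
    ("App crashes when I tap Save", "bug"),
    ("Please add dark mode", "feature request"),
    ("How do I export my data?", "question"),
]

def few_shot_prompt(ticket: str) -> str:
    lines = ["Classify each support ticket as bug, feature request, or question.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Ticket: {text}\nLabel: {label}\n")
    # End with the unlabeled ticket so the model completes the pattern.
    lines.append(f"Ticket: {ticket}\nLabel:")
    return "\n".join(lines)

print(few_shot_prompt("The login page shows a blank screen"))
```

The zero-shot variant would be just the first instruction line plus the final ticket; comparing the two on ambiguous tickets is where the few-shot version tends to pull ahead.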
+ ---
+
+ ## Step 3: Chain-of-Thought for Reasoning
+
+ <!-- hint:card type="concept" title="Chain-of-Thought" -->
+
+ **Task:** Ask the AI: "One train leaves Station A at 60 mph and another leaves Station B at 40 mph, heading toward each other; the stations are 200 miles apart. When do they meet?" Use a prompt that asks the model to "think step by step" before giving the answer. Compare to a prompt that doesn't.
+
+ **Question:** Why does "show your reasoning" help? What kinds of tasks benefit most from chain-of-thought?
+
+ **Checkpoint:** The user has used CoT. They observe: CoT reduces errors on logic and math; it makes output verifiable. They can name other CoT-suitable tasks (multi-step planning, debugging).
+
+ ---
+
+ ## Step 4: Structured Output
+
+ <!-- hint:diagram mermaid-type="flowchart" topic="prompt structure" -->
+
+ **Embed:** https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering
+
+ **Task:** Ask the AI to extract names, emails, and phone numbers from a short paragraph of text. Request the output as JSON with keys `names`, `emails`, `phones`. Run it. If the format drifts, add more explicit structure (e.g., "Return valid JSON only, no markdown").
+
+ **Question:** Why is JSON useful for downstream use? What if the model returns markdown-wrapped JSON?
+
+ **Checkpoint:** The user has gotten JSON output. They understand structured output enables parsing and integration. They can handle common drift (e.g., "valid JSON only").
+
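The markdown-wrapped-JSON drift from Step 4 can be handled defensively on the consuming side by stripping a code fence before parsing; a minimal sketch (the `parse_model_json` helper is an assumed name, not part of any API):

```python
import json

def parse_model_json(raw: str) -> dict:
    """Parse model output as JSON, tolerating a markdown code fence."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (possibly "```json") and, if present,
        # the closing fence line.
        lines = text.splitlines()
        if lines[-1].strip() == "```":
            lines = lines[1:-1]
        else:
            lines = lines[1:]
        text = "\n".join(lines)
    return json.loads(text)

# Simulated drifting output from the extraction task:
drifted = '```json\n{"names": ["Ada"], "emails": [], "phones": []}\n```'
print(parse_model_json(drifted))
```

Prompt-side fixes ("Return valid JSON only, no markdown") and parser-side tolerance like this are complementary; neither alone is fully reliable.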
+ ---
+
+ ## Step 5: Iterate on a Failing Prompt
+
+ **Task:** Your prompt for "write a product description" produces generic, marketing-fluffy text. Iterate: add (a) a constraint ("no buzzwords"), (b) an example of the tone you want, (c) a length limit. Which change helped most?
+
+ **Question:** When would you add an example vs a constraint? What's the risk of too many constraints at once?
+
+ **Checkpoint:** The user has iterated and seen improvement. They understand: examples establish style; constraints narrow scope. They recognize constraint overload can confuse the model.
+
+ ---
+
+ ## Step 6: Choose Temperature
+
+ <!-- hint:buttons type="single" prompt="Lower temperature reduces what?" options="Creativity,Hallucination risk,Speed" -->
+
+ **Task:** For each scenario, choose temperature (0, 0.3, 0.7, 1.0) and justify: (a) Extracting entity names from text. (b) Brainstorming 10 product names. (c) Writing unit tests from a spec. (d) Creative short story.
+
+ **Question:** Why does low temperature reduce hallucination risk? When is high temperature useful despite that risk?
+
+ **Checkpoint:** The user matches: extraction (0–0.3), brainstorming (0.7–1.0), tests (0–0.3), story (0.7–1.0). They connect low temp to factual consistency and high temp to creativity.
+
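The checkpoint pairings can be written down as a simple lookup, which is often how such defaults live in application code; a sketch (the task names and exact values are illustrative choices within the bands from the step):

```python
# Temperature defaults per task, following Step 6's bands:
# low (0-0.3) for factual/deterministic work, high (0.7-1.0) for creative work.
TEMPERATURE_BY_TASK = {
    "entity extraction": 0.0,
    "unit tests from a spec": 0.2,
    "brainstorming product names": 0.9,
    "creative short story": 1.0,
}

def pick_temperature(task: str) -> float:
    """Look up a default sampling temperature for a known task type."""
    return TEMPERATURE_BY_TASK[task]

print(pick_temperature("entity extraction"))
```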
+ ---
+
+ ## Step 7: System vs User Prompts
+
+ **Task:** Many APIs support a system prompt and user messages. Design a system prompt for a "helpful coding assistant" that: (a) sets the role, (b) specifies output format (code blocks, explain briefly), (c) adds one guardrail (e.g., "don't assume APIs we don't have"). Why put this in the system prompt vs the user message?
+
+ **Question:** What should *not* go in the system prompt? When would you put task-specific instructions in the user message instead?
+
+ **Checkpoint:** The user has a system prompt with role, format, and guardrail. They understand system = persistent context; user = per-request. They can distinguish what belongs where.
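The system/user split in Step 7 maps onto the message roles most chat APIs accept; a sketch of the request shape (the system prompt wording is illustrative, and `build_messages` is an assumed helper name):

```python
# System prompt: persistent role, output format, and one guardrail.
# User message: the per-request task.
SYSTEM_PROMPT = (
    "You are a helpful coding assistant. "
    "Answer with a code block, then a brief explanation. "
    "Do not assume APIs that are not shown in the user's code."
)

def build_messages(user_request: str) -> list[dict]:
    """Assemble a chat-style message list: persistent system context first,
    then the per-request user task."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_request},
    ]

msgs = build_messages("Write a function that reverses a string.")
print(msgs[0]["role"], "->", msgs[1]["role"])
```

Task-specific details (the particular function, the particular bug) stay in the user message; only instructions that should hold across every request belong in the system prompt.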
@@ -0,0 +1,111 @@
+ # RAG — Retrieval-Augmented Generation for Grounded AI
+
+ <!-- hint:slides topic="RAG pipeline: indexing, chunking, embedding, retrieval, augmented generation, and evaluation" slides="6" -->
+
+ ## The Problem RAG Solves
+
+ LLMs don't know your data. They have a **knowledge cutoff**, can **hallucinate**, and can't access private docs, live data, or domain-specific content. **RAG (Retrieval-Augmented Generation)** solves this by retrieving relevant context and augmenting the prompt with it.
+
+ ## How RAG Works
+
+ ```mermaid
+ flowchart TB
+   subgraph index["Indexing (offline)"]
+     D[Docs] --> C[Chunk]
+     C --> E[Embed]
+     E --> S[Store]
+   end
+   subgraph query["Query (online)"]
+     Q[Query] --> EQ[Embed Query]
+     EQ --> R[Retrieve]
+     S --> R
+     R --> A[Augment Prompt]
+     A --> G[Generate]
+     G --> Out[Answer]
+   end
+ ```
+
+ **Indexing:** Docs → Chunk → Embed → Store (vector DB)
+ **Query:** Query → Embed → Retrieve → Augment prompt → Generate
+
+ ## The Full RAG Pipeline
+
+ ```mermaid
+ flowchart LR
+   D[Docs] --> C[Chunk]
+   C --> E[Embed]
+   E --> V[Vector Store]
+   Q[Query] --> EQ[Embed Query]
+   EQ --> R[Retrieve]
+   V --> R
+   R --> A[Augment]
+   A --> P[Prompt]
+   P --> G[Generate]
+   G --> Out[Output]
+ ```
+
+ ## Embeddings
+
+ **Embeddings** convert text to vectors (lists of numbers) that capture semantic meaning. Similar texts → similar vectors. Use cosine similarity or dot product to find "nearest" chunks.
+
+ - "How do I reset my password?" ≈ "Password reset instructions"
+ - Different from keyword search: captures meaning, not just words
+
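The nearest-chunk lookup described above can be sketched in a few lines of plain Python; the 3-dimensional vectors below are toy stand-ins for real embedding-model output:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "index": chunk text -> embedding (a real store would hold model output).
chunks = {
    "Password reset instructions": [0.9, 0.1, 0.0],
    "Billing and invoices": [0.1, 0.9, 0.1],
    "API rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Top-k retrieval: rank stored chunks by similarity to the query vector."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
    return ranked[:k]

# A query vector near the password chunk retrieves it first:
print(retrieve([0.8, 0.2, 0.0], k=1))  # → ['Password reset instructions']
```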
54
+ ## Vector Databases
55
+
56
+ Store embeddings and support **similarity search**:
57
+
58
+ | Tool | Notes |
59
+ |------|-------|
60
+ | **Pinecone** | Managed, scalable |
61
+ | **Weaviate** | Open-source, hybrid search |
62
+ | **Chroma** | Lightweight, embedded |
63
+ | **pgvector** | PostgreSQL extension |
64
+
65
+ ## Chunking Strategies
66
+
67
+ | Strategy | When to Use |
68
+ |----------|-------------|
69
+ | **Fixed-size** | Simple; split every N tokens |
70
+ | **Semantic** | Split on meaning boundaries (paragraphs, sections) |
71
+ | **Recursive** | Hierarchical: try sentence → paragraph → section |
72
+ | **Overlap** | Overlap chunks to preserve context at boundaries |
73
+
74
+ Bad chunking = retrieval misses relevant context or returns fragments that don't make sense alone.
75
+
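Fixed-size chunking with overlap can be sketched in a few lines. This version counts characters for simplicity; production pipelines usually count tokens:

```python
def chunk_fixed(text: str, size: int, overlap: int) -> list[str]:
    """Split text into fixed-size chunks; each chunk repeats the last
    `overlap` characters of the previous one to preserve boundary context."""
    if not 0 <= overlap < size:
        raise ValueError("overlap must be non-negative and smaller than size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "RAG retrieves relevant context and augments the prompt before generation."
for chunk in chunk_fixed(doc, size=40, overlap=10):
    print(repr(chunk))
```

Note how the end of one chunk reappears at the start of the next; that redundancy is the price of not losing sentences that straddle a boundary.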
76
+ ## Retrieval Quality
77
+
78
+ - **Top-k** — Return k most similar chunks. Tune k (often 3–10).
79
+ - **Similarity threshold** — Only return chunks above a score. Filters noise.
80
+ - **Re-ranking** — Second pass: cross-encoder or LLM to rank top candidates. Improves precision.
81
+
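Top-k retrieval with a similarity threshold, sketched over a toy in-memory index. Dot-product scoring stands in for a vector DB query; the vectors and chunk texts are illustrative:

```python
def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Toy index of (embedding, chunk text) pairs; values are made up.
index = [
    ([0.9, 0.1], "Password reset instructions"),
    ([0.8, 0.3], "Account recovery FAQ"),
    ([0.1, 0.9], "Billing and invoices"),
    ([0.2, 0.8], "Pricing plans"),
]

def retrieve(query_vec, index, k=3, threshold=0.3):
    """Score every chunk, drop those below the threshold, return top-k."""
    scored = sorted(
        ((dot(query_vec, vec), text) for vec, text in index),
        reverse=True,
    )
    return [(score, text) for score, text in scored if score >= threshold][:k]

query = [1.0, 0.0]  # "how do I reset my password?" after embedding
print(retrieve(query, index, k=2))
```

The threshold filters the billing chunks out entirely; `k` then caps how many survivors reach the prompt.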
82
+ ## Hybrid Search
83
+
84
+ Combine **keyword** (BM25, full-text) with **semantic** (embeddings). Keyword finds exact terms; semantic finds paraphrases. Merge scores (e.g., weighted average, Reciprocal Rank Fusion).
85
+
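Reciprocal Rank Fusion is short enough to sketch directly. `k=60` is the constant from the original RRF formulation, and the document IDs are made up:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(doc) = sum over rankings of 1/(k + rank).
    Docs that appear high in multiple rankings float to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits  = ["doc-api-v2", "doc-errors", "doc-install"]   # BM25 order
semantic_hits = ["doc-timeouts", "doc-errors", "doc-api-v2"]  # embedding order
print(rrf([keyword_hits, semantic_hits]))
```

Because RRF works on ranks rather than raw scores, it needs no score normalization between the keyword and semantic retrievers.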
86
+ ## Evaluation
87
+
88
+ | Metric | What It Measures |
89
+ |--------|------------------|
90
+ | **Faithfulness** | Does the answer stay grounded in the retrieved context? |
91
+ | **Relevance** | Do retrieved chunks match the query? |
92
+ | **Answer correctness** | Is the final answer factually correct? |
93
+
94
+ ## Common Pitfalls
95
+
96
+ | Pitfall | Fix |
97
+ |---------|-----|
98
+ | **Bad chunking** | Use semantic or recursive; tune chunk size and overlap |
99
+ | **No metadata filtering** | Filter by source, date, type before retrieval |
100
+ | **Stuffing too much** | Limit context; use re-ranking; summarize if needed |
101
+ | **Wrong embedding model** | Match model to domain (e.g., code vs. prose) |
102
+
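The metadata-filtering fix can be sketched as a pre-filter step that runs before similarity scoring, so stale or off-product chunks never compete for top-k slots. The `product` and `version` fields here are illustrative:

```python
# Toy chunk store: text plus metadata captured at indexing time.
chunks = [
    {"text": "POST /v2/orders creates an order", "product": "api", "version": "v2"},
    {"text": "POST /v1/orders creates an order", "product": "api", "version": "v1"},
    {"text": "Invoices are emailed monthly",     "product": "billing", "version": "v2"},
]

def filter_chunks(chunks: list[dict], **wanted) -> list[dict]:
    """Keep only chunks whose metadata matches every wanted key/value."""
    return [c for c in chunks if all(c.get(k) == v for k, v in wanted.items())]

candidates = filter_chunks(chunks, product="api", version="v2")
print([c["text"] for c in candidates])  # only the v2 API chunk survives
```

Real vector DBs expose this as a metadata filter on the query itself, but the effect is the same: shrink the candidate pool first, then rank by similarity.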
103
+ ---
104
+
105
+ ## Key Takeaways
106
+
107
+ 1. **RAG** — Retrieve relevant docs → augment prompt → generate
108
+ 2. **Embeddings** — Text → vectors; similarity = semantic match
109
+ 3. **Chunking** — Fixed, semantic, or recursive; overlap helps
110
+ 4. **Retrieval** — Top-k, threshold, re-ranking
111
+ 5. **Evaluate** — Faithfulness, relevance, correctness
@@ -0,0 +1,80 @@
1
+ # RAG Fundamentals Exercises
2
+
3
+ ## Exercise 1: Choose a Chunking Strategy
4
+
5
+ **Task:** For each document type, pick a chunking strategy and justify in one sentence: (a) Legal contract, (b) FAQ page, (c) Codebase documentation, (d) Blog posts.
6
+
7
+ **Validation:**
8
+ - [ ] Legal: semantic/paragraph (preserve clause boundaries)
9
+ - [ ] FAQ: by Q&A pair (natural unit)
10
+ - [ ] Code docs: semantic by function/section
11
+ - [ ] Blog: paragraph or recursive
12
+
13
+ **Hints:**
14
+ 1. Legal: clauses matter; don't split mid-clause
15
+ 2. FAQ: each Q&A is a natural chunk
16
+ 3. Code: function, class, or section
17
+ 4. Blog: paragraphs or sections
18
+
19
+ ---
20
+
21
+ ## Exercise 2: Design Metadata for Retrieval
22
+
23
+ **Task:** You're indexing a product docs site. What metadata would you store with each chunk? How would you use it at query time? List 4 metadata fields and one filter scenario for each.
24
+
25
+ **Validation:**
26
+ - [ ] At least: source, section, product, last_updated
27
+ - [ ] Each has a filter use case (e.g., "only v2 docs", "only API reference")
28
+
29
+ **Hints:**
30
+ 1. `source`: "Which file/page"
31
+ 2. `section`: "API, Getting Started, etc."
32
+ 3. `product`: "Product A, B"
33
+ 4. `last_updated`: Filter stale content
34
+
35
+ ---
36
+
37
+ ## Exercise 3: Compare Keyword vs Semantic Search
38
+
39
+ **Task:** Write a query that keyword search would handle well, and one that semantic search would handle better. For each, explain why. Example: "exact error code XYZ" vs "how to fix connection timeout".
40
+
41
+ **Validation:**
42
+ - [ ] Keyword: exact phrase, specific term, part number
43
+ - [ ] Semantic: paraphrased, conceptual, "how do I...?"
44
+
45
+ **Hints:**
46
+ 1. Keyword good: "ERROR_CODE_404", "API v2.1"
47
+ 2. Semantic good: "payment failed" ≈ "transaction declined", "troubleshoot slow API"
48
+
49
+ ---
50
+
51
+ ## Exercise 4: Write an Augmented Prompt Template
52
+
53
+ **Task:** Write a prompt template for RAG. Placeholders: `{context}` (retrieved chunks), `{question}` (user query). Include instructions: use only the context, say "I don't know" if the answer isn't there, cite the source when possible.
54
+
55
+ **Validation:**
56
+ - [ ] Has {context} and {question}
57
+ - [ ] Instructs to use only context
58
+ - [ ] Handles "not in context" case
59
+ - [ ] Asks for citation/source when possible
60
+
61
+ **Hints:**
62
+ 1. "Use ONLY the following context. If the answer isn't there, say so."
63
+ 2. "When possible, cite which part of the context supports your answer."
64
+ 3. Template: "Context:\n{context}\n\nQuestion: {question}\n\nAnswer:"
65
+
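Putting the hints together, one possible shape for the template as a Python helper. This is a sketch, not the only correct answer; the numbered source markers are one way to make citation easy:

```python
RAG_TEMPLATE = """\
Use ONLY the following context to answer. If the answer is not in the
context, reply "I don't know." When possible, cite which source supports
your answer.

Context:
{context}

Question: {question}

Answer:"""

def build_prompt(chunks: list[str], question: str) -> str:
    # Tag each chunk with a numbered marker so the model can cite [1], [2], ...
    context = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(chunks, 1))
    return RAG_TEMPLATE.format(context=context, question=question)

print(build_prompt(["Reset your password under Settings."], "How do I reset my password?"))
```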
66
+ ---
67
+
68
+ ## Exercise 5: Propose Evaluation Metrics
69
+
70
+ **Task:** For a customer-support RAG bot, propose 3 metrics you'd track. For each: name, what it measures, how you'd compute it (human eval, model-as-judge, or automated).
71
+
72
+ **Validation:**
73
+ - [ ] Faithfulness: answer grounded in context (model-as-judge or human)
74
+ - [ ] Relevance: retrieved chunks match query (human or relevance model)
75
+ - [ ] Correctness: answer is factually right (human or comparison to gold)
76
+
77
+ **Hints:**
78
+ 1. Faithfulness: "Does the answer only use the provided context?"
79
+ 2. Relevance: "Do retrieved chunks address the question?"
80
+ 3. Correctness: "Is the answer factually correct?"
@@ -0,0 +1,45 @@
1
+ slug: rag-fundamentals
2
+ title: "RAG — Retrieval-Augmented Generation for Grounded AI"
3
+ version: 1.0.0
4
+ description: "Ground LLMs with your data using retrieval, embeddings, vector databases, and the full RAG pipeline."
5
+ category: ai-and-llm
6
+ tags: [rag, retrieval, embeddings, vector-database, grounding, knowledge-base]
7
+ difficulty: intermediate
8
+
9
+ xp:
10
+ read: 15
11
+ walkthrough: 40
12
+ exercise: 25
13
+ quiz: 20
14
+ quiz-perfect-bonus: 10
15
+
16
+ time:
17
+ quick: 5
18
+ read: 20
19
+ guided: 50
20
+
21
+ prerequisites: [llm-fundamentals]
22
+ related: [ai-agents, prompt-engineering]
23
+
24
+ triggers:
25
+ - "What is RAG?"
26
+ - "How do I ground AI with my own data?"
27
+ - "What are embeddings?"
28
+ - "How do vector databases work?"
29
+
30
+ visuals:
31
+ diagrams: [diagram-mermaid, diagram-architecture]
32
+ quiz-types: [quiz-drag-order, quiz-matching]
33
+ playground: bash
34
+ slides: true
35
+
36
+ sources:
37
+ - url: "https://docs.langchain.com"
38
+ label: "LangChain Documentation"
39
+ type: docs
40
+ - url: "https://docs.llamaindex.ai"
41
+ label: "LlamaIndex Documentation"
42
+ type: docs
43
+ - url: "https://www.pinecone.io/learn"
44
+ label: "Pinecone Learning Center"
45
+ type: docs
@@ -0,0 +1,58 @@
1
+ # RAG Fundamentals Quick Reference
2
+
3
+ ## RAG Pipeline
4
+
5
+ **Indexing:** Docs → Chunk → Embed → Store (vector DB)
6
+
7
+ **Query:** Query → Embed → Retrieve → Augment prompt → Generate
8
+
9
+ ## Embeddings
10
+
11
+ - Text → vectors (numbers)
12
+ - Similar meaning → similar vectors
13
+ - Use cosine similarity or dot product for retrieval
14
+
15
+ ## Vector Databases
16
+
17
+ | Tool     | Use Case            |
18
+ |----------|---------------------|
19
+ | Pinecone | Managed, scalable   |
20
+ | Weaviate | Open-source, hybrid |
21
+ | Chroma   | Lightweight         |
22
+ | pgvector | PostgreSQL          |
23
+
24
+ ## Chunking
25
+
26
+ | Strategy   | Pros / Cons                     |
27
+ |------------|---------------------------------|
28
+ | Fixed-size | Simple; can split mid-sentence  |
29
+ | Semantic   | Preserves meaning; uneven size  |
30
+ | Recursive  | Flexible; more config           |
31
+ | Overlap    | Keeps context at boundaries     |
32
+
33
+ ## Retrieval Tuning
34
+
35
+ - **Top-k** — How many chunks (often 3–10)
36
+ - **Threshold** — Minimum similarity score
37
+ - **Re-ranking** — Second pass for precision
38
+ - **Hybrid** — Keyword + semantic
39
+
40
+ ## Evaluation
41
+
42
+ - **Faithfulness** — Grounded in context?
43
+ - **Relevance** — Right chunks retrieved?
44
+ - **Correctness** — Factually right?
45
+
46
+ ## Common Pitfalls
47
+
48
+ - Bad chunking → tune size, overlap, strategy
49
+ - No metadata → add filters (source, date)
50
+ - Too much context → limit k, re-rank
51
+ - Wrong embeddings → match model to domain
52
+
53
+ ## One-Liners
54
+
55
+ - **RAG** — Retrieve → Augment → Generate.
56
+ - **Embeddings** — Semantic similarity, not just keywords.
57
+ - **Chunking** — Semantic beats fixed for coherence.
58
+ - **Evaluate** — Faithfulness, relevance, correctness.
@@ -0,0 +1,75 @@
1
+ # RAG Fundamentals Quiz
2
+
3
+ ## Question 1
4
+
5
+ What does RAG stand for and what problem does it solve?
6
+
7
+ A) Random Algorithm Generation — generates random code
8
+ B) Retrieval-Augmented Generation — grounds LLM answers with retrieved documents
9
+ C) Recursive Auto-Regression — a type of model training
10
+ D) Real-time Annotation Gateway — data labeling tool
11
+
12
+ <!-- ANSWER: B -->
13
+ <!-- EXPLANATION: RAG (Retrieval-Augmented Generation) retrieves relevant documents, augments the prompt with them, and generates answers grounded in that context. It addresses LLM knowledge cutoff and hallucination. -->
14
+
15
+ ## Question 2
16
+
17
+ What do embeddings represent?
18
+
19
+ A) Compressed versions of documents
20
+ B) Numerical vectors that capture semantic meaning; similar text → similar vectors
21
+ C) Encryption keys for secure storage
22
+ D) Token counts for pricing
23
+
24
+ <!-- ANSWER: B -->
25
+ <!-- EXPLANATION: Embeddings map text to vectors in a high-dimensional space. Semantically similar texts produce similar vectors, enabling similarity search. -->
26
+
27
+ ## Question 3
28
+
29
+ Drag the RAG pipeline steps into the correct order:
30
+
31
+ <!-- VISUAL: quiz-drag-order -->
32
+
33
+ A) Docs → Chunk → Embed → Store → Query → Embed → Retrieve → Augment → Generate
34
+ B) Query → Retrieve → Chunk → Embed → Generate
35
+ C) Embed → Chunk → Store → Retrieve → Generate
36
+ D) Chunk → Query → Embed → Retrieve → Augment → Generate
37
+
38
+ <!-- ANSWER: A -->
39
+ <!-- EXPLANATION: Indexing: Docs → Chunk → Embed → Store. Query: Query → Embed → Retrieve (from store) → Augment prompt → Generate. -->
40
+
41
+ ## Question 4
42
+
43
+ Which chunking strategy preserves meaning boundaries best?
44
+
45
+ A) Fixed-size only
46
+ B) Semantic (e.g., by paragraph or section)
47
+ C) Random split
48
+ D) Single chunk per document
49
+
50
+ <!-- ANSWER: B -->
51
+ <!-- EXPLANATION: Semantic chunking splits on natural boundaries (paragraphs, sections) so each chunk is a coherent unit. Fixed-size can split mid-sentence. -->
52
+
53
+ ## Question 5
54
+
55
+ What does "re-ranking" do in RAG?
56
+
57
+ A) Re-trains the embedding model
58
+ B) Re-ranks retrieved chunks with a second pass to improve precision before sending to the LLM
59
+ C) Reorders user queries by priority
60
+ D) Restores deleted chunks
61
+
62
+ <!-- ANSWER: B -->
63
+ <!-- EXPLANATION: Re-ranking takes the top-k retrieved chunks and uses a cross-encoder or LLM to re-score them, improving which chunks are passed to the generator. -->
64
+
65
+ ## Question 6
66
+
67
+ Which is a common RAG pitfall?
68
+
69
+ A) Using too few chunks (e.g., top-1 only)
70
+ B) Stuffing too much context into the prompt, diluting relevance
71
+ C) Using keyword search instead of embeddings
72
+ D) Evaluating with too many metrics
73
+
74
+ <!-- ANSWER: B -->
75
+ <!-- EXPLANATION: Stuffing too much retrieved context can add noise, exceed context limits, and dilute the most relevant information. Tune top-k and use re-ranking. -->
@@ -0,0 +1,34 @@
1
+ # RAG Fundamentals — Resources
2
+
3
+ ## Official Docs
4
+
5
+ - [LangChain](https://docs.langchain.com) — RAG chains, retrievers, vector stores.
6
+ - [LlamaIndex](https://docs.llamaindex.ai) — Data frameworks, indexing, and RAG pipelines.
7
+ - [Pinecone Learning Center](https://www.pinecone.io/learn/) — Vector DB concepts and tutorials.
8
+
9
+ ## Videos
10
+
11
+ - [3Blue1Brown — Neural Networks](https://www.youtube.com/watch?v=aircAruvnKk) — Foundation for understanding embeddings.
12
+ - [Andrej Karpathy — Embeddings](https://www.youtube.com/results?search_query=Andrej+Karpathy+embeddings) — Embedding intuition.
13
+ - [AI Explained — RAG](https://www.youtube.com/results?search_query=AI+Explained+RAG) — RAG overviews.
14
+ - [Fireship — RAG in 100 Seconds](https://www.youtube.com/results?search_query=Fireship+RAG) — Quick RAG intro.
15
+
16
+ ## Articles
17
+
18
+ - [Lilian Weng — LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) — Agent patterns, including retrieval as external memory.
19
+ - [Chip Huyen — RAG](https://huyenchip.com/) — Production RAG considerations.
20
+ - [Simon Willison — RAG](https://simonwillison.net/series/rag/) — Practical RAG examples.
21
+
22
+ ## Books
23
+
24
+ - **Build a Large Language Model (From Scratch)** by Sebastian Raschka — Embeddings and retrieval concepts.
25
+ - **AI Engineering** by Chip Huyen — RAG in production.
26
+
27
+ ## Tools
28
+
29
+ - [LangChain](https://docs.langchain.com) — RAG framework.
30
+ - [LlamaIndex](https://docs.llamaindex.ai) — Data and RAG framework.
31
+ - [Pinecone](https://www.pinecone.io/) — Vector database.
32
+ - [Chroma](https://www.trychroma.com/) — Lightweight vector DB.
33
+ - [OpenAI Embeddings](https://platform.openai.com/docs/guides/embeddings) — Text embedding API.
34
+ - [Hugging Face Sentence Transformers](https://www.sbert.net/) — Open embedding models.