rhachet-roles-ehmpathy 1.1.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (375)
  1. package/dist/_topublish/domain-glossary-brief/src/domain/objects/Catalog.js +2 -0
  2. package/dist/_topublish/domain-glossary-brief/src/domain/objects/Catalog.js.map +1 -0
  3. package/dist/_topublish/domain-glossary-brief/src/domain/objects/TriageCatalog.d.ts +18 -0
  4. package/dist/_topublish/domain-glossary-brief/src/domain/objects/TriageCatalog.js +3 -0
  5. package/dist/_topublish/domain-glossary-brief/src/domain/objects/TriageCatalog.js.map +1 -0
  6. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/Focus.d.ts +34 -0
  7. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/Focus.js +9 -0
  8. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/Focus.js.map +1 -0
  9. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/PonderCatalog.d.ts +9 -0
  10. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/PonderCatalog.js +3 -0
  11. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/PonderCatalog.js.map +1 -0
  12. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/Question.d.ts +1 -0
  13. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/Question.js +3 -0
  14. package/dist/_topublish/rhachet-roles-bhrain/src/domain/objects/Question.js.map +1 -0
  15. package/dist/logic/artifact/genStepSwapArtifact.d.ts +1 -1
  16. package/dist/logic/artifact/genStepSwapArtifact.js +1 -1
  17. package/dist/logic/roles/bhrain/.briefs/cognition/cog000.overview.and.premise.md +115 -0
  18. package/dist/logic/roles/bhrain/.briefs/cognition/cog021.coordinates.spherical.md +69 -0
  19. package/dist/logic/roles/bhrain/.briefs/cognition/cog021.metaphor.cauliflorous.md +44 -0
  20. package/dist/logic/roles/bhrain/.briefs/cognition/cog021.structs.catalog.md +51 -0
  21. package/dist/logic/roles/bhrain/.briefs/cognition/cog021.structs.vector.md +112 -0
  22. package/dist/logic/roles/bhrain/.briefs/cognition/cog101.concept.treestruct.coords.1.spherical.md +80 -0
  23. package/dist/logic/roles/bhrain/.briefs/cognition/cog101.concept.treestruct.coords.2.abstractive.md +59 -0
  24. package/dist/logic/roles/bhrain/.briefs/cognition/cog101.concept.treestruct.coords.3.descriptive.md +64 -0
  25. package/dist/logic/roles/bhrain/.briefs/cognition/{cog101.cortal.focus.p1.definition.md → cog201.cortal.focus.p1.definition.md} +77 -28
  26. package/dist/logic/roles/bhrain/.briefs/cognition/cog201.cortal.focus.p2.acuity.md +134 -0
  27. package/dist/logic/roles/bhrain/.briefs/cognition/cog201.cortal.focus.p2.breadth.md +151 -0
  28. package/dist/logic/roles/bhrain/.briefs/cognition/cog201.cortal.focus.p2.depth.md +147 -0
  29. package/dist/logic/roles/bhrain/.briefs/cognition/cog251.cortal.focus.p3.fabric.md +96 -0
  30. package/dist/logic/roles/bhrain/.briefs/cognition/cog251.cortal.focus.p3.usecases.md +76 -0
  31. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.traversal.1.motion.primitives._.md +155 -0
  32. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.traversal.1.motion.primitives.acuity.md +94 -0
  33. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.traversal.1.motion.primitives.breadth.md +114 -0
  34. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.traversal.1.motion.primitives.breadth.vary.md +105 -0
  35. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.traversal.1.motion.primitives.depth.md +132 -0
  36. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.traversal.2.motion.composites._.md +106 -0
  37. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.traversal.2.motion.composites.grammar.md +105 -0
  38. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.._.md +209 -0
  39. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.1.persp.as.berries.md +168 -0
  40. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.1.persp.as.vectors.md +74 -0
  41. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.1.persp.has.precision.tunable.md +80 -0
  42. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.2.1.primitives.rough._.md +99 -0
  43. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.2.1.primitives.rough.interrogative.md +108 -0
  44. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.2.1.primitives.rough.why.[article].md +55 -0
  45. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.2.2.composite.smooth._.md +83 -0
  46. package/dist/logic/roles/bhrain/.briefs/cognition/cog401.questions.2.2.composite.smooth.examples.md +101 -0
  47. package/dist/logic/roles/bhrain/.briefs/cognition/cog501.cortal.assemblylang.1.primitives._.md +134 -0
  48. package/dist/logic/roles/bhrain/.briefs/cognition/cog501.cortal.assemblylang.1.primitives.recall.md +149 -0
  49. package/dist/logic/roles/bhrain/.briefs/cognition/cog501.cortal.assemblylang.1.primitives.steer.md +146 -0
  50. package/dist/logic/roles/bhrain/.briefs/cognition/cog501.cortal.assemblylang.1.primitives.think.md +141 -0
  51. package/dist/logic/roles/bhrain/.briefs/cognition/cog501.cortal.assemblylang.2.composites.zoom.md +127 -0
  52. package/dist/logic/roles/bhrain/.briefs/cognition/cog501.cortal.assemblylang.3.catalogs.md +107 -0
  53. package/dist/logic/roles/bhrain/.briefs/cognition/cog501.cortal.assemblylang.3.grammar.md +124 -0
  54. package/dist/logic/roles/bhrain/.briefs/cognition/inflight/concept.vs.idea.md +70 -0
  55. package/dist/logic/roles/bhrain/.briefs/cognition/inflight/core.concept.adjectives.md +8 -0
  56. package/dist/logic/roles/bhrain/.briefs/knowledge/kno101.primitives.1.ontology.[article].frame.docs_as_materializations.md +63 -0
  57. package/dist/logic/roles/bhrain/.briefs/knowledge/kno101.primitives.1.ontology.[article].frame.docs_as_references.md +45 -0
  58. package/dist/logic/roles/bhrain/.briefs/knowledge/kno101.primitives.2.rel.many_to_many.[article].md +37 -0
  59. package/dist/logic/roles/bhrain/.briefs/knowledge/kno101.primitives.3.instances.[article].md +39 -0
  60. package/dist/logic/roles/bhrain/.briefs/knowledge/kno101.primitives.4.documents.[article].md +37 -0
  61. package/dist/logic/roles/bhrain/.briefs/knowledge/kno101.primitives.5.concepts.[article].md +39 -0
  62. package/dist/logic/roles/bhrain/.briefs/knowledge/kno201.documents._.[article].md +48 -0
  63. package/dist/logic/roles/bhrain/.briefs/knowledge/kno201.documents._.[catalog].md +52 -0
  64. package/dist/logic/roles/bhrain/.briefs/knowledge/kno201.documents.articles.[article].md +40 -0
  65. package/dist/logic/roles/bhrain/.briefs/knowledge/kno201.documents.catalogs.[article].md +41 -0
  66. package/dist/logic/roles/bhrain/.briefs/knowledge/kno201.documents.demos.[article].md +42 -0
  67. package/dist/logic/roles/bhrain/.briefs/knowledge/kno201.documents.lessons.[article].md +42 -0
  68. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.compression.1.refs._.[article].md +41 -0
  69. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.compression.2.kernels._.[article].i1.md +50 -0
  70. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.compression.3.briefs._.[article].md +40 -0
  71. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.compression._.[article].md +90 -0
  72. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.compression._.[catalog].persp.garden.md +64 -0
  73. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.1.from_instances.[article].md +45 -0
  74. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].algorithm.md +54 -0
  75. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].color.md +56 -0
  76. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].empathy.md +54 -0
  77. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].entropy.md +54 -0
  78. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].gravity.md +54 -0
  79. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].joke.md +56 -0
  80. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].value.md +54 -0
  81. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2._.[catalog].md +43 -0
  82. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2.articulate.[article].md +27 -0
  83. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2.articulate.[lesson].md +49 -0
  84. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2.catalogize.[article].md +27 -0
  85. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2.catalogize.[lesson].md +54 -0
  86. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2.demonstrate.[article].md +26 -0
  87. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2.demonstrate.[lesson].md +49 -0
  88. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2.lessonize.[article].md +26 -0
  89. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.enbrief.2.lessonize.[lesson].md +54 -0
  90. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.kernelize._.[article].md +58 -0
  91. package/dist/logic/roles/bhrain/.briefs/knowledge/kno301.doc.kernelize._.[lesson].md +88 -0
  92. package/dist/logic/roles/bhrain/.briefs/knowledge/kno351.docs.are_instances.[article].md +34 -0
  93. package/dist/logic/roles/bhrain/.briefs/knowledge/kno351.docs.recursion.[catalog].md +44 -0
  94. package/dist/logic/roles/bhrain/.briefs/knowledge/kno401.actors.1.role.author.[article].md +36 -0
  95. package/dist/logic/roles/bhrain/.briefs/knowledge/kno401.actors.1.role.librarian.[article].md +40 -0
  96. package/dist/logic/roles/bhrain/.briefs/knowledge/kno401.actors.2.interdependence.[article].md +52 -0
  97. package/dist/logic/roles/bhrain/.briefs/knowledge/kno501.doc.enbrief.catalog.structure1.[article].md +53 -0
  98. package/dist/logic/roles/bhrain/.briefs/knowledge/kno501.doc.enbrief.catalog.structure1.[lesson].template.md +101 -0
  99. package/dist/logic/roles/bhrain/.briefs/librarian.context/article.variant.vision.[article].md +60 -0
  100. package/dist/logic/roles/bhrain/.briefs/librarian.context/term.expectation.vs_assumption._.md +60 -0
  101. package/dist/logic/roles/bhrain/.briefs/librarian.context/term.frame.vs_perspective.[article].md +96 -0
  102. package/dist/logic/roles/bhrain/.briefs/librarian.context/term.invariant.[article].md +29 -0
  103. package/dist/logic/roles/bhrain/.briefs/librarian.context/term.lesson._vs_article.[article].md +36 -0
  104. package/dist/logic/roles/bhrain/.briefs/librarian.context/term.ref._vs_brief.md +90 -0
  105. package/dist/logic/roles/bhrain/.briefs/librarian.context/term.referent.[article].md +43 -0
  106. package/dist/logic/roles/bhrain/.briefs/librarian.context/usage.lesson_vs_article.[lesson].md +31 -0
  107. package/dist/logic/roles/bhrain/.briefs/librarian.context/usage.lesson_vs_article_vs_demo.[lesson].md +37 -0
  108. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/.readme.md +12 -0
  109. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.TriageCatalog.[gallery][example].structure.md +18 -0
  110. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>._.[article].frame.cognitive.md +33 -0
  111. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>._.[article].frame.tactical.md +45 -0
  112. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.tactic.[catalog].md +83 -0
  113. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.tactic.concept_dimension.examples.[article][seed].md +4 -0
  114. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.tactic.concept_dimension.invariants.[article].md +36 -0
  115. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.tactic.from.examples.md +44 -0
  116. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.tactic.from.seed.md +48 -0
  117. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.tactic.with.templates.[article].md +57 -0
  118. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.tactic.with.templates.[gallery][review].effective.md +1 -0
  119. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<articulate>.tone.bluecollar.[article][seed].md +5 -0
  120. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<catalogize>._.[article][seed].md +3 -0
  121. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<catalogize>.observation.via_clusterage_over_via_imagination.[seed].md +6 -0
  122. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<catalogize>.vs_diverge.[article].persp.save_compute.md +46 -0
  123. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>._.[article].frame.colloquial.i2.by_grok.md +64 -0
  124. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.[catalog].md +106 -0
  125. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.contrast.[demo].usecase.vs_userjourney.by_chatgpt.md +45 -0
  126. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.counter.[demo].usecase.flyer.by_chargpt.md +38 -0
  127. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.counter.[demo].walkability.phoenix.by_chargpt.md +41 -0
  128. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.example.[demo].shear_force.scissors.by_grok.md +52 -0
  129. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.example.[demo].tea.darjeeling.by_grok.md +50 -0
  130. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.example.[demo].usecase.book_flight.by_grok.md +54 -0
  131. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.example.[demo].usecase.order_food.by_chatgpt.md +40 -0
  132. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.example.[demo].walkability.portland.by_chatgpt.i3.md +42 -0
  133. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.example.[demo].walkability.portland.by_grok.i2.md +49 -0
  134. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.example.[lesson].howto.md +28 -0
  135. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.example.structure.[article].i2.md +73 -0
  136. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.narrative.[demo].usecase.order_online.by_chatgpt.md +34 -0
  137. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/<demonstrate>.variants.walkthrough.[demo].usecase.book_online.by_chatgpt.md +47 -0
  138. package/dist/logic/roles/bhrain/.briefs/librarian.tactics/[brief].verbiage.outline.over.narrative.md +55 -0
  139. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<cluster>._.[article].frame.tactical._.md +85 -0
  140. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<cluster>.vs_<diverge>.duality.[article].md +43 -0
  141. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<diverge>._.[article].frame.cognitive.[seed].md +4 -0
  142. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<diverge>._.[article].frame.tactical.md +89 -0
  143. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<elaborate>_vs_<elucidate>.[seed].md +1 -0
  144. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<enquestion>._.[article].md +113 -0
  145. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<enquestion>._.[gallery].plumber.diagnose.md +130 -0
  146. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<enquestion>._.[gallery].thinker.enquestion.md +125 -0
  147. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<enquestion>.tactic.perspectives.[article].md +36 -0
  148. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<triage>._.[article].frame.tactical.md +85 -0
  149. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<triage>.persp.grades_from_context.[article].md +48 -0
  150. package/dist/logic/roles/bhrain/.briefs/thinker.tactics/<triage>.persp.implicit_question.[article].md +65 -0
  151. package/dist/logic/roles/bhrain/.briefs/worders/core.matmuls_vecmuls_elemuls.md +93 -0
  152. package/dist/logic/roles/bhrain/.briefs/worders/core.transformers.as_origin.md +62 -0
  153. package/dist/logic/roles/bhrain/.briefs/worders/core.transformers.self_attention.[article].md +93 -0
  154. package/dist/logic/roles/bhrain/.briefs/worders/core.transformers.self_attention.[demo].ambig.bank.md +80 -0
  155. package/dist/logic/roles/bhrain/.briefs/worders/core.transformers.self_attention.[demo].cat_sat.md +67 -0
  156. package/dist/logic/roles/bhrain/.briefs/worders/force.repeat_input_structures.md +48 -0
  157. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.interdependence.[article].md +37 -0
  158. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.interdependence.[demo].domain.physics.md +30 -0
  159. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.internal_vs_external.[article].artist_vs_librarian.md +44 -0
  160. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.internal_vs_external.[demo].artist_vs_librarian.md +37 -0
  161. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.internal_vs_external.[demo].domain.physics.md +39 -0
  162. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.internalized.[article].md +35 -0
  163. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.internalized.[demo].artist.md +36 -0
  164. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.internalized.[demo].neural.md +37 -0
  165. package/dist/logic/roles/bhrain/.briefs/worders/knowledge.internalized.[demo].pianist.md +34 -0
  166. package/dist/logic/roles/bhrain/.briefs/worders/limits.rhyme.md +46 -0
  167. package/dist/logic/roles/bhrain/.briefs/worders/limits.spell.md +49 -0
  168. package/dist/logic/roles/bhrain/.briefs/worders/teach.via.library.examples.md +28 -0
  169. package/dist/logic/roles/bhrain/.briefs/worders/teach.via.library.explanations_vs_examples.md +40 -0
  170. package/dist/logic/roles/bhrain/.briefs/worders/trend.prefer_reuse.[seed].md +10 -0
  171. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.d.ts +61 -0
  172. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.integration.test.js +96 -0
  173. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.integration.test.js.map +1 -0
  174. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.js +94 -0
  175. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.js.map +1 -0
  176. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.skill.d.ts +31 -0
  177. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.skill.js +137 -0
  178. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.skill.js.map +1 -0
  179. package/dist/logic/roles/bhrain/brief.articulate/stepArticulate.template.md +129 -0
  180. package/dist/logic/roles/bhrain/brief.catalogize/stepCatalogize.d.ts +55 -0
  181. package/dist/logic/roles/bhrain/brief.catalogize/stepCatalogize.integration.test.js +118 -0
  182. package/dist/logic/roles/bhrain/brief.catalogize/stepCatalogize.integration.test.js.map +1 -0
  183. package/dist/logic/roles/bhrain/brief.catalogize/stepCatalogize.js +72 -0
  184. package/dist/logic/roles/bhrain/brief.catalogize/stepCatalogize.js.map +1 -0
  185. package/dist/logic/roles/bhrain/brief.catalogize/stepCatalogize.skill.d.ts +28 -0
  186. package/dist/logic/roles/bhrain/brief.catalogize/stepCatalogize.skill.js +119 -0
  187. package/dist/logic/roles/bhrain/brief.catalogize/stepCatalogize.skill.js.map +1 -0
  188. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.d.ts +59 -0
  189. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.integration.test.js +119 -0
  190. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.integration.test.js.map +1 -0
  191. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.js +103 -0
  192. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.js.map +1 -0
  193. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.skill.d.ts +30 -0
  194. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.skill.js +138 -0
  195. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.skill.js.map +1 -0
  196. package/dist/logic/roles/bhrain/brief.demonstrate/stepDemonstrate.template.md +135 -0
  197. package/dist/logic/roles/bhrain/getBhrainBrief.Options.codegen.d.ts +1 -1
  198. package/dist/logic/roles/bhrain/getBhrainBrief.Options.codegen.js +166 -0
  199. package/dist/logic/roles/bhrain/getBhrainBrief.Options.codegen.js.map +1 -1
  200. package/dist/logic/roles/bhrain/getBhrainRole.js +16 -2
  201. package/dist/logic/roles/bhrain/getBhrainRole.js.map +1 -1
  202. package/dist/logic/roles/bhrain/{primitive.idealogic.atomic/cluster → khue.cluster}/stepCluster.d.ts +18 -12
  203. package/dist/logic/roles/bhrain/khue.cluster/stepCluster.integration.test.js +140 -0
  204. package/dist/logic/roles/bhrain/khue.cluster/stepCluster.integration.test.js.map +1 -0
  205. package/dist/logic/roles/bhrain/khue.cluster/stepCluster.js +91 -0
  206. package/dist/logic/roles/bhrain/khue.cluster/stepCluster.js.map +1 -0
  207. package/dist/logic/roles/bhrain/khue.cluster/stepCluster.skill.d.ts +29 -0
  208. package/dist/logic/roles/bhrain/khue.cluster/stepCluster.skill.js +127 -0
  209. package/dist/logic/roles/bhrain/khue.cluster/stepCluster.skill.js.map +1 -0
  210. package/dist/logic/roles/bhrain/khue.cluster/stepCluster.template.md +134 -0
  211. package/dist/logic/roles/bhrain/{primitive.idealogic.atomic/diverge → khue.diverge}/stepDiverge.d.ts +16 -8
  212. package/dist/logic/roles/bhrain/khue.diverge/stepDiverge.integration.test.js +115 -0
  213. package/dist/logic/roles/bhrain/khue.diverge/stepDiverge.integration.test.js.map +1 -0
  214. package/dist/logic/roles/bhrain/khue.diverge/stepDiverge.js +92 -0
  215. package/dist/logic/roles/bhrain/khue.diverge/stepDiverge.js.map +1 -0
  216. package/dist/logic/roles/bhrain/khue.diverge/stepDiverge.skill.d.ts +29 -0
  217. package/dist/logic/roles/bhrain/khue.diverge/stepDiverge.skill.js +112 -0
  218. package/dist/logic/roles/bhrain/khue.diverge/stepDiverge.skill.js.map +1 -0
  219. package/dist/logic/roles/bhrain/khue.diverge/stepDiverge.template.md +110 -0
  220. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.d.ts +55 -0
  221. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.integration.test.js +119 -0
  222. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.integration.test.js.map +1 -0
  223. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.js +75 -0
  224. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.js.map +1 -0
  225. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.skill.d.ts +28 -0
  226. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.skill.js +119 -0
  227. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.skill.js.map +1 -0
  228. package/dist/logic/roles/bhrain/khue.instantiate/stepInstantiate.template.md +73 -0
  229. package/dist/logic/roles/bhrain/khue.triage/stepTriage.d.ts +57 -0
  230. package/dist/logic/roles/bhrain/khue.triage/stepTriage.integration.test.js +143 -0
  231. package/dist/logic/roles/bhrain/khue.triage/stepTriage.integration.test.js.map +1 -0
  232. package/dist/logic/roles/bhrain/khue.triage/stepTriage.js +93 -0
  233. package/dist/logic/roles/bhrain/khue.triage/stepTriage.js.map +1 -0
  234. package/dist/logic/roles/bhrain/khue.triage/stepTriage.skill.d.ts +29 -0
  235. package/dist/logic/roles/bhrain/khue.triage/stepTriage.skill.js +127 -0
  236. package/dist/logic/roles/bhrain/khue.triage/stepTriage.skill.js.map +1 -0
  237. package/dist/logic/roles/bhrain/khue.triage/stepTriage.template.md +128 -0
  238. package/dist/logic/roles/ecologist/.briefs/product/user.journey._.[article].i1.md +68 -0
  239. package/dist/logic/roles/ecologist/.briefs/product/user.journey.purpose.[article].i1.md +52 -0
  240. package/dist/logic/roles/ecologist/.briefs/product/user.journey.purpose.[article].md +52 -0
  241. package/dist/logic/roles/ecologist/getEcologistBrief.Options.codegen.d.ts +1 -1
  242. package/dist/logic/roles/ecologist/getEcologistBrief.Options.codegen.js +16 -12
  243. package/dist/logic/roles/ecologist/getEcologistBrief.Options.codegen.js.map +1 -1
  244. package/dist/logic/roles/mechanic/.briefs/codestyle/mech.args.input-inline.md +63 -0
  245. package/dist/logic/roles/mechanic/.briefs/codestyle/pit-of-success.via.minimize-surface-area.md +58 -0
  246. package/dist/logic/roles/mechanic/getMechanicBrief.Options.codegen.d.ts +1 -1
  247. package/dist/logic/roles/mechanic/getMechanicBrief.Options.codegen.js +2 -0
  248. package/dist/logic/roles/mechanic/getMechanicBrief.Options.codegen.js.map +1 -1
  249. package/dist/logic/roles/mechanic/write/loopWrite.d.ts +9 -9
  250. package/dist/logic/roles/mechanic/write/loopWrite.skill.d.ts +9 -9
  251. package/dist/logic/roles/mechanic/write/loopWrite.skill.js +1 -1
  252. package/dist/logic/roles/mechanic/write/stepWrite.js +1 -0
  253. package/dist/logic/roles/mechanic/write/stepWrite.js.map +1 -1
  254. package/dist/logic/roles/mechanic/write/stepWrite.template.md +4 -0
  255. package/package.json +3 -2
  256. package/readme.[seed].md +2 -0
  257. package/dist/logic/roles/bhrain/.briefs/cognition/cog101.cortal.focus.p2.acuity.md +0 -107
  258. package/dist/logic/roles/bhrain/.briefs/cognition/cog101.cortal.focus.p2.breadth.md +0 -118
  259. package/dist/logic/roles/bhrain/.briefs/cognition/cog101.cortal.focus.p2.depth.md +0 -121
  260. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.concept.traversal.p1.universal.md +0 -108
  261. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.concept.traversal.p2.relative.md +0 -76
  262. package/dist/logic/roles/bhrain/.briefs/cognition/cog301.concept.traversal.p3.directions.md +0 -42
  263. package/dist/logic/roles/bhrain/.briefs/tactic.<think>[idea]/composite/<triangulate>[concept].md +0 -66
  264. package/dist/logic/roles/bhrain/.briefs/tactic.<think>[idea]/primitive/<zoom>._.md +0 -124
  265. package/dist/logic/roles/bhrain/.briefs/tactic.<think>[idea]/primitive/<zoomin>[acuity]<sharpen>[concept].md +0 -53
  266. package/dist/logic/roles/bhrain/.briefs/tactic.<think>[idea]/primitive/<zoomin>[breadth]<decompose>[concept].md +0 -67
  267. package/dist/logic/roles/bhrain/.briefs/tactic.<think>[idea]/primitive/<zoomin>[depth]<abstractify>[concept].md +0 -124
  268. package/dist/logic/roles/bhrain/.briefs/tactic.<think>[idea]/primitive/<zoomout>[acuity]<blurren>[concept].md +0 -56
  269. package/dist/logic/roles/bhrain/.briefs/tactic.<think>[idea]/primitive/<zoomout>[breadth]<broaden>[concept].md +0 -71
  270. package/dist/logic/roles/bhrain/.briefs/tactic.<think>[idea]/primitive/<zoomout>[depth]<elaborate>[concept].md +0 -74
  271. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/cluster/stepCluster.integration.test.js +0 -102
  272. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/cluster/stepCluster.integration.test.js.map +0 -1
  273. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/cluster/stepCluster.js +0 -59
  274. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/cluster/stepCluster.js.map +0 -1
  275. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/cluster/stepCluster.template.md +0 -127
  276. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/collect/stepCollect.d.ts +0 -15
  277. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/collect/stepCollect.integration.test.js +0 -91
  278. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/collect/stepCollect.integration.test.js.map +0 -1
  279. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/collect/stepCollect.js +0 -33
  280. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/collect/stepCollect.js.map +0 -1
  281. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/diverge/stepDiverge.integration.test.js +0 -122
  282. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/diverge/stepDiverge.integration.test.js.map +0 -1
  283. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/diverge/stepDiverge.js +0 -59
  284. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/diverge/stepDiverge.js.map +0 -1
  285. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/diverge/stepDiverge.template.md +0 -125
  286. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/envision/stepEnvision.d.ts +0 -53
  287. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/envision/stepEnvision.integration.test.js +0 -126
  288. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/envision/stepEnvision.integration.test.js.map +0 -1
  289. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/envision/stepEnvision.js +0 -61
  290. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/envision/stepEnvision.js.map +0 -1
  291. package/dist/logic/roles/bhrain/primitive.idealogic.atomic/envision/stepEnvision.template.md +0 -105
  292. package/dist/logic/roles/bhrain/primitive.idealogic.composite/expand/stepExpand.d.ts +0 -24
  293. package/dist/logic/roles/bhrain/primitive.idealogic.composite/expand/stepExpand.integration.test.js +0 -118
  294. package/dist/logic/roles/bhrain/primitive.idealogic.composite/expand/stepExpand.integration.test.js.map +0 -1
  295. package/dist/logic/roles/bhrain/primitive.idealogic.composite/expand/stepExpand.js +0 -38
  296. package/dist/logic/roles/bhrain/primitive.idealogic.composite/expand/stepExpand.js.map +0 -1
  297. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.d.ts +0 -45
  298. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.integration.test.js +0 -115
  299. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.integration.test.js.map +0 -1
  300. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.js +0 -59
  301. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.js.map +0 -1
  302. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.skill.d.ts +0 -24
  303. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.skill.js +0 -64
  304. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.skill.js.map +0 -1
  305. package/dist/logic/roles/bhrain/primitive.strategic.atomic/interpret/stepInterpret.template.md +0 -143
  306. package/dist/logic/roles/ecologist/domain.sketch/loopStudyDomain.d.ts +0 -24
  307. package/dist/logic/roles/ecologist/domain.sketch/loopStudyDomain.integration.test.js +0 -57
  308. package/dist/logic/roles/ecologist/domain.sketch/loopStudyDomain.integration.test.js.map +0 -1
  309. package/dist/logic/roles/ecologist/domain.sketch/loopStudyDomain.js +0 -11
  310. package/dist/logic/roles/ecologist/domain.sketch/loopStudyDomain.js.map +0 -1
  311. package/dist/logic/roles/ecologist/domain.sketch/skillStudyDomain.d.ts +0 -25
  312. package/dist/logic/roles/ecologist/domain.sketch/skillStudyDomain.js +0 -90
  313. package/dist/logic/roles/ecologist/domain.sketch/skillStudyDomain.js.map +0 -1
  314. package/dist/logic/roles/ecologist/domain.sketch/stepStudyDomain.d.ts +0 -21
  315. package/dist/logic/roles/ecologist/domain.sketch/stepStudyDomain.integration.test.d.ts +0 -1
  316. package/dist/logic/roles/ecologist/domain.sketch/stepStudyDomain.integration.test.js +0 -65
  317. package/dist/logic/roles/ecologist/domain.sketch/stepStudyDomain.integration.test.js.map +0 -1
  318. package/dist/logic/roles/ecologist/domain.sketch/stepStudyDomain.js +0 -60
  319. package/dist/logic/roles/ecologist/domain.sketch/stepStudyDomain.js.map +0 -1
  320. package/dist/logic/roles/ecologist/domain.sketch/stepStudyDomain.template.md +0 -93
  321. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.d.ts +0 -45
  322. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.integration.test.d.ts +0 -1
  323. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.integration.test.js +0 -69
  324. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.integration.test.js.map +0 -1
  325. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.js +0 -67
  326. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.js.map +0 -1
  327. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.skill.d.ts +0 -25
  328. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.skill.js +0 -85
  329. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.skill.js.map +0 -1
  330. package/dist/logic/roles/ecologist/domain.term/stepCollectTermUsecases.template.md +0 -160
  331. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.d.ts +0 -47
  332. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.integration.test.d.ts +0 -1
  333. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.integration.test.js +0 -127
  334. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.integration.test.js.map +0 -1
  335. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.js +0 -68
  336. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.js.map +0 -1
  337. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.skill.d.ts +0 -26
  338. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.skill.js +0 -92
  339. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.skill.js.map +0 -1
  340. package/dist/logic/roles/ecologist/domain.term/stepDistillTerm.template.md +0 -173
  341. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.d.ts +0 -45
  342. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.integration.test.d.ts +0 -1
  343. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.integration.test.js +0 -69
  344. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.integration.test.js.map +0 -1
  345. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.js +0 -67
  346. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.js.map +0 -1
  347. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.skill.d.ts +0 -25
  348. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.skill.js +0 -85
  349. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.skill.js.map +0 -1
  350. package/dist/logic/roles/ecologist/domain.usecases/stepCollectUsecases.template.md +0 -160
  351. package/dist/logic/roles/ecologist/domain.usecases/stepDiscoverUsecases.d.ts +0 -45
  352. package/dist/logic/roles/ecologist/domain.usecases/stepDiscoverUsecases.js +0 -67
  353. package/dist/logic/roles/ecologist/domain.usecases/stepDiscoverUsecases.js.map +0 -1
  354. package/dist/logic/roles/ecologist/envision/stepEnvision.integration.test.js +0 -78
  355. package/dist/logic/roles/ecologist/envision/stepEnvision.integration.test.js.map +0 -1
  356. package/dist/logic/roles/ecologist/envision/stepEnvision.js +0 -96
  357. package/dist/logic/roles/ecologist/envision/stepEnvision.js.map +0 -1
  358. package/dist/logic/roles/ecologist/envision/stepEnvision.skill.js +0 -72
  359. package/dist/logic/roles/ecologist/envision/stepEnvision.skill.js.map +0 -1
  360. package/dist/logic/roles/ecologist/envision/stepEnvision.template.md +0 -92
  361. /package/dist/{logic/roles/ecologist/envision/stepEnvision.d.ts → _topublish/domain-glossary-brief/src/domain/objects/Catalog.d.ts} +0 -0
  362. /package/dist/logic/roles/bhrain/.briefs/cognition/{cog021.treestruct.md → cog021.structs.treestruct.md} +0 -0
  363. /package/dist/logic/roles/bhrain/.briefs/cognition/{cog101.concept.treestruct.gravity.md → cog151.concept.treestruct.gravity.md} +0 -0
  364. /package/dist/logic/roles/bhrain/.briefs/cognition/{cog101.cortal.focus.p1.examples.cont.md → cog201.cortal.focus.p1.examples.cont.md} +0 -0
  365. /package/dist/logic/roles/bhrain/.briefs/cognition/{cog101.cortal.focus.p3.mode.md → cog251.cortal.focus.p3.mode.md} +0 -0
  366. /package/dist/logic/roles/bhrain/.briefs/cognition/{cog101.cortal.focus.p3.rythm.md → cog251.cortal.focus.p3.rythm.md} +0 -0
  367. /package/dist/logic/roles/{ecologist/envision/stepEnvision.integration.test.d.ts → bhrain/.briefs/cognition/cog501.cortal.assemblylang_.md} +0 -0
  368. /package/dist/logic/roles/bhrain/{primitive.idealogic.atomic/cluster/stepCluster.integration.test.d.ts → brief.articulate/stepArticulate.integration.test.d.ts} +0 -0
  369. /package/dist/logic/roles/bhrain/{primitive.idealogic.atomic/collect/stepCollect.integration.test.d.ts → brief.catalogize/stepCatalogize.integration.test.d.ts} +0 -0
  370. /package/dist/logic/roles/bhrain/{primitive.idealogic.atomic/diverge/stepDiverge.integration.test.d.ts → brief.demonstrate/stepDemonstrate.integration.test.d.ts} +0 -0
  371. /package/dist/logic/roles/bhrain/{primitive.idealogic.atomic/envision/stepEnvision.integration.test.d.ts → khue.cluster/stepCluster.integration.test.d.ts} +0 -0
  372. /package/dist/logic/roles/bhrain/{primitive.idealogic.composite/expand/stepExpand.integration.test.d.ts → khue.diverge/stepDiverge.integration.test.d.ts} +0 -0
  373. /package/dist/logic/roles/bhrain/{primitive.strategic.atomic/interpret/stepInterpret.integration.test.d.ts → khue.instantiate/stepInstantiate.integration.test.d.ts} +0 -0
  374. /package/dist/logic/roles/{ecologist/domain.sketch/loopStudyDomain.integration.test.d.ts → bhrain/khue.triage/stepTriage.integration.test.d.ts} +0 -0
  375. /package/dist/logic/roles/ecologist/{envision/stepEnvision.skill.d.ts → .briefs/product/user.journey._.[article].md} +0 -0
@@ -0,0 +1,93 @@
1
+ # 🧩 .brief: `llm inference = matmuls + vecmuls + elemuls`
2
+
3
+ ## .what
4
+ llm inference is the process by which a trained language model transforms input tokens into output predictions. the computation can be reduced to three categories:
5
+
6
+ - **matmuls** → large matrix multiplications (heavy, parameter-rich ops)
7
+ - **vecmuls** → vector-scale multiplications/additions (normalization scale/shift, bias)
8
+ - **elemuls** → elementwise nonlinear operations (activations, softmax, residuals)
9
+
10
+ together, these form the skeleton of inference: heavy matmuls provide learned transformations, while vecmuls and elemuls act as the glue that gives nonlinearity and stability.
11
+
12
+ ---
13
+
14
+ ## 🎯 purpose
15
+ - apply billions of learned parameters (matmuls) to transform token inputs
16
+ - refine representations through normalization (vecmuls) and nonlinear activations (elemuls)
17
+ - output logits over vocabulary for next-token prediction
18
+
19
+ ---
20
+
21
+ ## ⚙️ method
22
+
23
+ ### 🔑 what’s happening inside llm inference
24
+
25
+ 1. **token embedding**
26
+ - input tokens → dense vectors via embedding matrix (**matmul**).
27
+
28
+ 2. **transformer layers**
29
+ - **linear projections:** weight matrices × input (**matmuls**).
30
+ - **attention mechanism:** query × key → attention weights (**matmul**), then weights × value (**matmul**).
31
+ - **feed-forward networks:** matmuls with intermediate activation (**elemuls**).
32
+
33
+ 3. **non-linearities & normalization**
34
+ - **activations:** per-element functions (relu, gelu, etc.) = **elemuls**.
35
+ - **normalization:** mean/variance across vector + learned scale/shift = **vecmuls + elemuls**.
36
+
37
+ 4. **output layer**
38
+ - final hidden state × vocab matrix → logits (**matmul**).
39
+
40
+ ---
41
+
42
+ ## 🧮 operation classes
43
+ - **matmuls:** embeddings, projections, attention (qkᵀ, softmax·v), feed-forward, output head.
44
+ - **vecmuls:** layernorm scale/shift, bias addition.
45
+ - **elemuls:** relu/gelu activations, softmax exponentials/divides, residual adds.
46
+
47
+ ---
48
+
49
+ ## 📊 insight
50
+ - **yes:** matmuls dominate compute and parameter count.
51
+ - **no:** inference is not *only* matmuls — vecmuls and elemuls are critical for expressivity and stability.
52
+ - **so:** inference = “giant chains of matmuls, with vecmuls and elemuls woven in.”
53
+
54
+ ---
55
+
56
+ ## 💻 toy pseudocode skeleton
57
+
58
+ ```python
59
+ def llm_inference(tokens, weights):
60
+ # 1. embedding lookup (matmul)
61
+ x = embed(tokens, weights["embedding"]) # matmul
62
+
63
+ for layer in weights["layers"]:
64
+ # 2. linear projections (matmuls)
65
+ q = matmul(x, layer["Wq"])
66
+ k = matmul(x, layer["Wk"])
67
+ v = matmul(x, layer["Wv"])
68
+
69
+ # 3. attention (matmuls + elemuls)
70
+ attn_scores = matmul(q, k.T) / sqrt(d) # matmul
71
+ attn_weights = softmax(attn_scores) # elemul
72
+ attn_output = matmul(attn_weights, v) # matmul
73
+
74
+ # 4. residual connection (elemul)
75
+ x = x + attn_output # elemul
76
+
77
+ # 5. normalization (vecmul + elemul)
78
+ x = layernorm(x, layer["gamma"], layer["beta"]) # vecmul + elemul
79
+
80
+ # 6. feed-forward network
81
+ h = matmul(x, layer["W1"]) # matmul
82
+ h = gelu(h) # elemul
83
+ h = matmul(h, layer["W2"]) # matmul
84
+ x = x + h # elemul
85
+
86
+ # 7. output projection (matmul)
87
+ logits = matmul(x, weights["output"]) # matmul
88
+ return logits
89
+ ```
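the skeleton above can be run end-to-end with numpy. this is a minimal sketch with random toy weights, not real model parameters — all shapes and names are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))      # elemul
    return e / e.sum(axis=axis, keepdims=True)           # elemul

def layernorm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta  # vecmul + elemul

def gelu(x):
    # tanh approximation of gelu (elemul)
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
seq, d, d_ff, vocab = 4, 8, 16, 32

x = rng.normal(size=(seq, d))                  # stand-in for embedded tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W1, W2 = rng.normal(size=(d, d_ff)), rng.normal(size=(d_ff, d))
gamma, beta = np.ones(d), np.zeros(d)
W_out = rng.normal(size=(d, vocab))

q, k, v = x @ Wq, x @ Wk, x @ Wv               # matmuls
attn = softmax(q @ k.T / np.sqrt(d))           # matmul + elemuls
x = layernorm(x + attn @ v, gamma, beta)       # matmul + elemul add + vecmul norm
x = x + gelu(x @ W1) @ W2                      # matmuls + elemul
logits = x @ W_out                             # matmul

print(logits.shape)  # (4, 32)
```

one layer, but every operation falls into exactly one of the three classes.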
90
+
91
+ ---
92
+
93
+ in short: **llm inference = matmuls (heavy lifting) + vecmuls (scaling/shift) + elemuls (nonlinear glue).**
@@ -0,0 +1,62 @@
1
+ # 🧩 .brief: `the transformer architecture — birth of llms`
2
+
3
+ ## .what
4
+ the **transformer** is the fundamental architecture that enabled large language models (llms). introduced in 2017 by vaswani et al. in *“attention is all you need”*, it replaced recurrence with **self-attention**, making it possible to train massive models on vast text corpora with efficient parallelization.
5
+
6
+ ---
7
+
8
+ ## 🎯 purpose
9
+ - overcome the sequential bottlenecks of rnn/lstm models
10
+ - capture long-range dependencies across entire sequences
11
+ - enable scalable, parallel training on gpus/tpus
12
+ - provide a flexible backbone that can grow with data and compute
13
+
14
+ ---
15
+
16
+ ## ⚙️ method
17
+
18
+ ### 1. **token embedding**
19
+ - words/subwords mapped into dense vectors (matmuls).
20
+
21
+ ### 2. **positional encoding**
22
+ - inject sequence order into embeddings, since attention is order-agnostic.
23
+
24
+ ### 3. **multi-head self-attention**
25
+ - queries, keys, and values projected via matmuls.
26
+ - attention scores = q·kᵀ → softmax → weighted sum with v.
27
+ - multiple heads let the model learn diverse relational patterns.
28
+
29
+ ### 4. **feed-forward networks**
30
+ - per-token mlps applied after attention.
31
+ - matmuls + nonlinear activations (elemuls).
32
+
33
+ ### 5. **residual connections + normalization**
34
+ - stabilize training, preserve gradients, and allow deep stacking.
35
+
36
+ ---
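step 2 can be made concrete: a minimal numpy sketch of the original paper's sinusoidal positional encoding (dimension sizes here are illustrative):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """sinusoidal encodings from the original transformer paper:
    pe[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    pe[pos, 2i+1] = cos(pos / 10000^(2i/d_model))"""
    pos = np.arange(seq_len)[:, None]              # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]           # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dims
    pe[:, 1::2] = np.cos(angles)                   # odd dims
    return pe

pe = positional_encoding(seq_len=10, d_model=16)
print(pe.shape)  # (10, 16)
```

the encoding is added to the embeddings so attention, which is otherwise order-agnostic, can see position.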
37
+
38
+ ## 🔑 why transformers were a leap
39
+
40
+ - **parallelism:** attention lets all tokens be processed simultaneously.
41
+ - **long-range context:** any token can directly attend to any other.
42
+ - **scalability:** depth, width, and data scale smoothly (scaling laws).
43
+ - **expressivity:** multi-head attention captures complex dependencies.
44
+
45
+ ---
46
+
47
+ ## 🌍 llm lineage
48
+
49
+ - **2017 — transformer** (*attention is all you need*)
50
+ - **2018 — bert, gpt-1** (first pretrained transformer language models)
51
+ - **2019 — gpt-2** (scaling shows surprising emergent abilities)
52
+ - **2020 — gpt-3** (175b parameters; llms become viable)
53
+ - **2022+ — instruction-tuned & rlhf models** (chatgpt, claude, etc.)
54
+
55
+ ---
56
+
57
+ ## 📊 insight
58
+ - the transformer is the **architectural skeleton** of llms.
59
+ - llms are “just” massive stacks of transformers trained on enormous corpora.
60
+ - rlhf, fine-tuning, and alignment methods refine the outputs — but the core engine is still the **transformer self-attention block**.
61
+
62
+ in short: **transformers are the soil from which llms grew.**
@@ -0,0 +1,93 @@
1
+ # 🧩 .brief.article: `self-attention`
2
+
3
+ ## 🔑 what is self-attention
4
+
5
+ self-attention is a mechanism that lets every token in a sequence **dynamically weigh its relationship to every other token** when computing its next representation.
6
+
7
+ each token generates three vectors:
8
+ - **query (q)** — what this token is “looking for”
9
+ - **key (k)** — what this token “offers”
10
+ - **value (v)** — the actual information carried
11
+
12
+ the similarity of query vs key determines how much attention a token pays to another token’s value.
13
+
14
+ mathematically:
15
+
16
+ \[
17
+ \text{Attention}(Q,K,V) = \text{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right) V
18
+ \]
19
+
20
+ - **qkᵀ** → pairwise similarity scores (**matmul**)
21
+ - **softmax** → normalize into attention weights
22
+ - **weights × v** → weighted sum of values = new representation
23
+
24
+ ---
25
+
26
+ ## 🎯 purpose
27
+ - let tokens reference and integrate information from anywhere in the sequence
28
+ - capture long-range dependencies in a single operation
29
+ - enable efficient, parallel computation across tokens
30
+ - provide multiple relational views through multi-head attention
31
+
32
+ ---
33
+
34
+ ## ⚙️ method
35
+
36
+ 1. compute q, k, v for each token via linear projections (**matmuls**)
37
+ 2. calculate similarity scores q·kᵀ (**matmul**)
38
+ 3. normalize scores with softmax (**elemuls**)
39
+ 4. use normalized weights to combine values (**matmul**)
40
+ 5. update each token representation with the weighted sum
41
+
42
+ ---
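the steps above map directly to code. a minimal numpy sketch of single-head self-attention (toy shapes; the weight names are illustrative):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv                 # step 1: projections (matmuls)
    scores = q @ k.T / np.sqrt(k.shape[-1])          # step 2: scaled q·kᵀ similarity
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)      # step 3: softmax (elemuls)
    return weights @ v                               # steps 4-5: weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))   # 5 tokens, hidden size 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

each output row is a context-aware mix of every token's value vector.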
43
+
44
+ ## 🔑 benefits
45
+
46
+ 1. **parallelism** — all q, k, v computed at once; no recurrence.
47
+ 2. **long-range context** — any token can directly attend to any other.
48
+ 3. **scalability** — uniform, repeatable structure scales with data/compute.
49
+ 4. **expressivity** — multi-head attention lets the model learn diverse relational patterns.
50
+
51
+ ---
52
+
53
+ ## 🧩 intuition example
54
+
55
+ sentence:
56
+ > “the cat sat on the mat because it was tired.”
57
+
58
+ let’s track the token **“it”** and show how its q, k, v vectors interact with **“the cat.”**
59
+ for illustration, assume a toy **5-dimensional hidden space**.
60
+
61
+ ### token: **“it”**
62
+ - query (**q_it**) = `[0.9, 0.1, 0.0, 0.2, 0.3]`
63
+ - “looking for an antecedent noun with certain features”
64
+ - key (**k_it**) = `[0.1, 0.3, 0.2, 0.0, 0.4]`
65
+ - “offers” self as a pronoun needing resolution
66
+ - value (**v_it**) = `[0.2, 0.5, 0.1, 0.0, 0.7]`
67
+ - the information carried by “it” itself
68
+
69
+ ### token: **“cat”**
70
+ - query (**q_cat**) = `[0.2, 0.4, 0.1, 0.3, 0.0]`
71
+ - key (**k_cat**) = `[0.8, 0.2, 0.0, 0.1, 0.3]`
72
+ - describes the features of “cat” as a noun subject
73
+ - value (**v_cat**) = `[0.7, 0.6, 0.2, 0.4, 0.1]`
74
+ - semantic content of “cat”
75
+
76
+ ### computing attention
77
+ 1. similarity score = q_it · k_cat =
78
+ `0.9*0.8 + 0.1*0.2 + 0.0*0.0 + 0.2*0.1 + 0.3*0.3 = 0.72 + 0.02 + 0 + 0.02 + 0.09 = 0.85`
79
+ 2. suppose the normalized (softmax) weight for “cat” = **0.70**, with the other tokens totaling 0.30.
80
+ 3. “it”’s updated representation =
81
+ `0.70 * v_cat + 0.30 * (weighted sum of other tokens’ values)`
82
+
83
+ so “it”’s final vector is now strongly composed of “cat”’s value vector, making the model understand that **“it” refers to “the cat.”**
84
+
85
+ ---
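the toy arithmetic above can be checked directly (vectors copied from the example; the 0.70 softmax weight is assumed, as in the text):

```python
import numpy as np

q_it = np.array([0.9, 0.1, 0.0, 0.2, 0.3])
k_cat = np.array([0.8, 0.2, 0.0, 0.1, 0.3])
v_cat = np.array([0.7, 0.6, 0.2, 0.4, 0.1])
v_it = np.array([0.2, 0.5, 0.1, 0.0, 0.7])   # stand-in for the other tokens' mix

score = q_it @ k_cat
print(round(score, 2))  # 0.85

# with the assumed softmax weight of 0.70 on "cat", "it" absorbs most of v_cat:
updated_it = 0.70 * v_cat + 0.30 * v_it
print(updated_it)
```

the updated vector for “it” is dominated by v_cat, which is the disambiguation in miniature.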
86
+
87
+ ## 📊 insight
88
+ - **queries = what a token seeks**
89
+ - **keys = what a token provides**
90
+ - **values = the information contributed**
91
+ - weighted connections between them create **context-aware representations**.
92
+
93
+ self-attention is how llms directly model relationships across a sequence — allowing “it” to learn its referent is “the cat.”
@@ -0,0 +1,80 @@
1
+ # 🧩 .brief.demo: `multi-head self-attention on ambiguous words (value vectors evolve)`
2
+
3
+ ## .what
4
+ this demo shows how two identical tokens — **“bank”** — begin with the same embedding but evolve into **different value vectors** after attention layers, depending on context. multi-head attention disambiguates their meaning dynamically.
5
+
6
+ ---
7
+
8
+ ## 🎯 purpose
9
+ - demonstrate that identical word embeddings diverge into context-specific meanings
10
+ - show how q, k, v vectors are derived from hidden states
11
+ - illustrate how attention causes “bank” (financial vs river) to separate
12
+
13
+ ---
14
+
15
+ ## ⚙️ demo setup
16
+
17
+ sentence:
18
+ > “the **bank** raised interest rates, and she sat by the **bank** of the river.”
19
+
20
+ toy hidden size = **4**, with **2 heads.**
21
+
22
+ ---
23
+
24
+ ### 🔹 step 1 — input embedding (same for both “bank”)
25
+
26
+ both tokens start with the **same embedding lookup** from the word embedding matrix:
27
+
28
+ - embedding(**bank**) = `[0.5, 0.7, 0.2, 0.1]`
29
+
30
+ this is identical for both occurrences, because embeddings are tied to vocabulary entries, not context.
31
+
32
+ ---
33
+
34
+ ### 🔹 step 2 — hidden states diverge
35
+
36
+ after one transformer block, the hidden state of each “bank” diverges due to attending to different neighbors:
37
+
38
+ - hidden(**bank_financial**) = `[0.8, 0.2, 0.6, 0.1]`
39
+ - hidden(**bank_river**) = `[0.1, 0.9, 0.2, 0.7]`
40
+
41
+ ---
42
+
43
+ ### 🔹 step 3 — compute q, k, v from hidden states
44
+
45
+ each hidden state is projected by learned matrices \( W_q, W_k, W_v \).
46
+
47
+ for simplicity, show only **value (v)** vectors here:
48
+
49
+ - v(**bank_financial**) = `[0.6, 0.5, 0.7, 0.2]`
50
+ - v(**bank_river**) = `[0.2, 0.8, 0.3, 0.6]`
51
+
52
+ note: originally both banks had the **same v** if taken right after embedding, but after context mixing, they diverge.
53
+
54
+ ---
55
+
56
+ ### 🔹 step 4 — multi-head specialization
57
+
58
+ - **head 1 (financial context):**
59
+ “bank” attends to “interest rates” → reinforces v(**bank_financial**).
60
+
61
+ - **head 2 (geographic context):**
62
+ “bank” attends to “river” → reinforces v(**bank_river**).
63
+
64
+ ---
65
+
66
+ ## 🧩 combined effect
67
+
68
+ - initially, both “bank” tokens share the same vector (embedding).
69
+ - after attention, their hidden states — and thus their q/k/v vectors — diverge.
70
+ - multi-head attention ensures each occurrence is contextualized differently.
71
+
72
+ ---
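the divergence can be sketched numerically: both occurrences start from the same embedding, but mixing in different neighbors' values yields different hidden states (the context vectors and attention weights below are toy assumptions, consistent with the demo):

```python
import numpy as np

bank = np.array([0.5, 0.7, 0.2, 0.1])     # shared embedding for both "bank"s
v_rates = np.array([0.9, 0.1, 0.8, 0.0])  # toy value vector for "interest rates"
v_river = np.array([0.0, 0.8, 0.1, 0.9])  # toy value vector for "river"

# each occurrence attends mostly to its own context (assumed weights):
bank_financial = 0.4 * bank + 0.6 * v_rates
bank_river     = 0.4 * bank + 0.6 * v_river

print(bank_financial)  # differs from...
print(bank_river)      # ...this, despite the identical start
```

same starting vector, different mixtures — which is exactly what the heads above do per context.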
73
+
74
+ ## 📊 insight
75
+ - **embeddings** are static (same for same word).
76
+ - **hidden states** are dynamic (different for each occurrence).
77
+ - **q, k, v** are projections of hidden states, so they also differ per occurrence.
78
+ - result: llms resolve word sense disambiguation in context by letting identical tokens evolve into different representations.
79
+
80
+ in short: **same word, same start — different meaning, different vectors.**
@@ -0,0 +1,67 @@
1
+ # 🧩 .brief.demo: `multi-head self-attention`
2
+
3
+ ## .what
4
+ this demo illustrates how **multi-head self-attention** allows a transformer to capture **different types of relationships simultaneously** by projecting the same tokens into multiple attention “heads.” each head learns its own query/key/value space, enabling diverse relational patterns (e.g., pronoun resolution, verb agreement, or topic continuity).
5
+
6
+ ---
7
+
8
+ ## 🎯 purpose
9
+ - show how multiple attention heads operate in parallel
10
+ - demonstrate how heads specialize in different linguistic/semantic functions
11
+ - highlight how combining heads yields richer token representations
12
+
13
+ ---
14
+
15
+ ## ⚙️ demo setup
16
+
17
+ sentence:
18
+ > “the cat sat on the mat because it was tired.”
19
+
20
+ toy hidden size = **6**, with **2 heads**, each head having its own projection of q, k, v.
21
+
22
+ ---
23
+
24
+ ### 🔹 head 1 — pronoun resolution (antecedent tracking)
25
+
26
+ focus: linking **“it”** → **“the cat.”**
27
+
28
+ - **q_it (head 1):** `[0.9, 0.1, 0.0]`
29
+ - **k_cat (head 1):** `[0.8, 0.2, 0.1]`
30
+ - **dot product:** `0.9*0.8 + 0.1*0.2 + 0.0*0.1 = 0.74`
31
+ - **attention weight:** score 0.74 → after softmax, dominates over other tokens
32
+ - **result:** “it” attends strongly to “cat,” resolving the pronoun.
33
+
34
+ ---
35
+
36
+ ### 🔹 head 2 — verb agreement (syntactic continuity)
37
+
38
+ focus: linking **subject** (“the cat”) → **verb** (“sat”).
39
+
40
+ - **q_cat (head 2):** `[0.2, 0.7, 0.3]`
41
+ - **k_sat (head 2):** `[0.1, 0.9, 0.2]`
42
+ - **dot product:** `0.2*0.1 + 0.7*0.9 + 0.3*0.2 = 0.02 + 0.63 + 0.06 = 0.71`
43
+ - **attention weight:** high score → strong weight after softmax
44
+ - **result:** “cat” attends to “sat,” enforcing subject-verb connection.
45
+
46
+ ---
47
+
48
+ ## 🧩 combined effect
49
+
50
+ after attention, outputs from all heads are concatenated and linearly transformed:
51
+
52
+ \[
53
+ \text{MultiHead}(Q,K,V) = \text{Concat}(\text{head}_1, \text{head}_2, \dots) W^O
54
+ \]
55
+
56
+ - **head 1 output:** captures **semantic resolution** (it ↔ cat).
57
+ - **head 2 output:** captures **syntactic relation** (cat ↔ sat).
58
+ - combined, the model encodes **both** types of context simultaneously.
59
+
60
+ ---
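the concat-and-project step can be sketched with numpy (two heads, toy sizes; all names are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def head(x, Wq, Wk, Wv):
    # one head: its own q/k/v projections, then scaled dot-product attention
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v

rng = np.random.default_rng(0)
seq, d, d_head = 9, 6, 3   # 9 tokens, hidden size 6, two heads of size 3

x = rng.normal(size=(seq, d))
heads = [head(x, *(rng.normal(size=(d, d_head)) for _ in range(3)))
         for _ in range(2)]
W_O = rng.normal(size=(2 * d_head, d))

multihead_out = np.concatenate(heads, axis=-1) @ W_O   # Concat(head_1, head_2) · W^O
print(multihead_out.shape)  # (9, 6)
```

each head sees the same tokens through its own projections; the output projection fuses their views back into one representation per token.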
61
+
62
+ ## 📊 insight
63
+ - each head = a separate lens on token relationships.
64
+ - heads specialize (some semantic, some syntactic, some positional).
65
+ - combining them creates **richer, multifaceted token embeddings.**
66
+
67
+ in short: **multi-head attention = parallel perspectives on the same sequence.**
@@ -0,0 +1,48 @@
1
+ # 🧩 .brief: `llm replication of input structures`
2
+
3
+ ## 🔑 what it implies
4
+
5
+ llms are trained to predict the **next token** given a sequence, so they are highly sensitive to **patterns in the immediate input context**. when a prompt contains an example with a certain structure (formatting, headings, bullet styles, code blocks), the model learns that continuing with the same structure minimizes prediction error.
6
+
7
+ this replication is not true “understanding” of the structure — it is a probabilistic continuation shaped by training data, where mimicking provided forms was often the correct next step.
8
+
9
+ ---
10
+
11
+ ## 🎯 implications for behavior
12
+
13
+ - **structural mimicry**
14
+ - the model mirrors bullet lists, markdown, code syntax, or prose style seen in the input.
15
+ - **consistency bias**
16
+ - once a format is established in the prompt, deviating feels “unlikely” under the learned distribution.
17
+ - **few-shot learning**
18
+ - demonstration examples act as templates; the model generalizes content into the same frame.
19
+ - **alignment with expectation**
20
+ - replication maximizes coherence with the input and aligns with user intent implicitly signaled by structure.
21
+
22
+ ---
23
+
24
+ ## 🧩 example
25
+
26
+ > input:
27
+ >
28
+ > “list three animals in this format:
29
+ > - mammal:
30
+ > - bird:
31
+ > - reptile:”
32
+
33
+ > output:
34
+ > - mammal: dog
35
+ > - bird: eagle
36
+ > - reptile: lizard
37
+
38
+ the model doesn’t reason about taxonomy per se — it reproduces the structure because that’s the **most probable continuation**.
39
+
40
+ ---
41
+
42
+ ## 📊 insight
43
+
44
+ - llms replicate structures because **continuation is their core mechanic**.
45
+ - training on diverse, structured text (tables, lists, markdown, code) reinforced the habit of format preservation.
46
+ - this property is what enables **few-shot prompting** and makes llms easy to steer by example.
47
+
48
+ in short: **llms copy input structures because structure itself is a strong predictive signal for next-token generation.**
@@ -0,0 +1,37 @@
1
+ # 🧩 .brief: `externalized knowledge requires internalized scaffolding`
2
+
3
+ ---
4
+
5
+ ## 🧠 what this concept means
6
+ - **externalized knowledge** (facts in a book, formulas on a sheet, data in a database) is inert on its own.
7
+ - for it to become usable, the system — human or llm — must have **internalized scaffolding**: conceptual frameworks, interpretive skills, and procedural fluency that make sense of the raw reference.
8
+ - without scaffolding, externalized knowledge is like symbols without meaning.
9
+
10
+ ---
11
+
12
+ ## 🎨 examples across domains
13
+ - **physics**: a formula table is only useful if the student has internalized what variables represent and when equations apply.
14
+ - **chemistry**: a reaction handbook is only actionable if the chemist has internalized fundamentals like valence, polarity, and stability.
15
+ - **language**: a dictionary provides definitions, but only someone with internalized grammar and semantics can apply them in real communication.
16
+ - **llms**: retrieved passages (RAG) are useful only because the model has internalized language patterns that allow it to interpret, paraphrase, and apply them.
17
+
18
+ ---
19
+
20
+ ## ⚙️ mechanics of the relationship
21
+ - **internalized scaffolding**: generalizable, embodied patterns (styles, rules, frameworks).
22
+ - **externalized reference**: precise, factual, bounded records.
23
+ - **interaction**: scaffolding enables comprehension and application; reference extends range and accuracy.
24
+
25
+ ---
26
+
27
+ ## 🔑 takeaways
28
+ - externalized knowledge alone = static, inert.
29
+ - internalized knowledge alone = fluent, adaptive, but fallible.
30
+ - **operability emerges only from their combination**: scaffolding activates references, references ground scaffolding.
31
+
32
+ ---
33
+
34
+ ## 📌 intuition anchor
35
+ a formula sheet without understanding is **dead weight**.
36
+ understanding without a sheet risks **slips and gaps**.
37
+ together, they form a **living system of knowledge** — internalized scaffolding breathing life into externalized records.
@@ -0,0 +1,30 @@
1
+ # 🧩 .brief.reflect: `interdependence of internalized and externalized knowledge`
2
+
3
+ ---
4
+
5
+ ## ⚛️ physics domain insight
6
+ - a **formula table** provides equations (externalized knowledge), but:
7
+ - without knowing what “\(F\)” or “\(a\)” represent, the student cannot meaningfully use \(F=ma\).
8
+ - they must already have **internalized fundamentals**: what force, mass, and acceleration *mean*, how they relate, and when the equation applies.
9
+ - only with that internalized base does the externalized table become **operable knowledge**.
10
+
11
+ ---
12
+
13
+ ## 🧠 mapping back to llms
14
+ - **retrieval (externalized)**: gives the llm a fact, equation, or passage.
15
+ - **weights (internalized)**: provide the interpretive machinery — language parsing, reasoning steps, contextual application.
16
+ - **together**: the llm can both **understand** what the retrieved info means and **apply** it productively in conversation.
17
+
18
+ ---
19
+
20
+ ## 🔑 reflection takeaways
21
+ - **internalized is prerequisite**: externalized knowledge is inert without an internalized framework to activate it.
22
+ - **externalized augments**: once the fundamentals are in place, external sources extend the system’s reach into precision and breadth.
23
+ - **true power lies in combination**: fluent intuition (internalized) guided by explicit reference (externalized).
24
+
25
+ ---
26
+
27
+ ## 📌 intuition anchor
28
+ a student with only a **formula sheet** but no understanding is stuck.
29
+ a student with only **intuition** risks errors.
30
+ but a student with **both** can solve a wide range of problems with accuracy and adaptability.
@@ -0,0 +1,44 @@
1
+ # 🧩 .brief: `internalized vs externalized knowledge`
2
+
3
+ ---
4
+
5
+ ## 🧠 what the contrast shows
6
+ knowledge can be understood in two fundamentally different forms:
7
+ - **internalized knowledge**: absorbed into the structure of a system (weights, habits, style)
8
+ - **externalized knowledge**: stored outside the system in explicit, retrievable records (databases, documents, libraries)
9
+
10
+ this contrast explains both the strengths and weaknesses of llms when relying solely on their weights versus when augmented with retrieval.
11
+
12
+ ---
13
+
14
+ ## 🎨 illustration frame
15
+ - **artist (internalized)**: paints in impressionist style without recalling a specific work. their training flows into expressive creation.
16
+ - **librarian (externalized)**: fetches the exact book that contains the answer. precision is guaranteed, but no new creation occurs.
17
+
18
+ ---
19
+
20
+ ## ⚙️ mapping to llms
21
+ - **internalized knowledge in weights**
22
+ - emerges through gradient descent across massive training corpora
23
+ - encoded as *distributed statistical patterns*, not explicit entries
24
+ - enables **generalization** and **creative recombination** of concepts
25
+ - limited by **fuzziness** and potential for **hallucination**
26
+
27
+ - **externalized knowledge in retrieval (rag, databases, tools)**
28
+ - stored explicitly, outside the model
29
+ - enables **accuracy** and **verifiability**
30
+ - limited by what is recorded and retrievable, no inherent *style* or *fluency*
31
+
32
+ ---
33
+
34
+ ## 🔑 contrast takeaways
35
+ - **internalized = embodied fluency**: adaptive, generative, pattern-driven
36
+ - **externalized = explicit record**: exact, grounded, reference-driven
37
+ - **hybrid strength**: combining both yields systems that can *express fluidly* while also *grounding in fact*.
38
+
39
+ ---
40
+
41
+ ## 📌 intuition anchor
42
+ an llm alone is like a **painter**: it embodies and expresses style.
43
+ with retrieval, it gains a **librarian**: grounding its creativity in precise references.
44
+ together, they demonstrate the full spectrum of knowledge handling.
@@ -0,0 +1,37 @@
1
+ # 🧩 .brief.demo: `contrasting internalized vs externalized knowledge`
2
+
3
+ ---
4
+
5
+ ## 🎨 illustration: artist vs photo album
6
+ - **internalized**: an artist paints a landscape in impressionist style. the scene is new, never painted before, but the *style* flows naturally from their training.
7
+ - **externalized**: another person, instead of painting, flips open a photo album to show a picture of a real landscape. it is precise, factual, but limited to what’s in the album.
8
+
9
+ ---
10
+
11
+ ## 🧠 llm parallel
12
+ - **internalized knowledge (in weights)**
13
+ - llms generate from *patterns encoded in their parameters*.
14
+ - they “know” paris is the capital of france not because a fact is stored explicitly, but because the statistical patterns linking “capital of france” → “paris” are internalized during training.
15
+ - *strengths*: generalization, stylistic fluency, application to unseen contexts.
16
+ - *limits*: fuzziness, hallucination, imprecision.
17
+
18
+ - **externalized knowledge (retrieval / database / rag)**
19
+ - systems query a source (wikipedia, sql table, search engine) to fetch the answer directly.
20
+ - paris is recalled exactly as written.
21
+ - *strengths*: precision, factual grounding, verifiability.
22
+ - *limits*: no fluency, no adaptation — must rely on what’s stored.
+
+ ---
+
+ ## ⚖️ contrast takeaways
+ - **internalized = style** → adaptive, expressive, compressed into structure.
+ - **externalized = record** → exact, reliable, stored outside the system.
+ - llms by default operate on **internalized knowledge**, but can be paired with external retrieval to combine *fluency* with *accuracy*.
+
+ ---
+
+ ## 🔑 intuition anchor
+ an llm is like a painter, not a librarian.
+ - the **painter** expresses patterns they’ve internalized into their craft.
+ - the **librarian** retrieves precise facts from shelves.
+ both approaches are useful, but they represent fundamentally different forms of knowledge.
@@ -0,0 +1,39 @@
+ # 🧩 .brief.demo: `contrasting knowledge via physics`
+
+ ---
+
+ ## ⚛️ illustration: physicist vs formula table
+ - **internalized (physicist’s intuition)**
+   - an experienced physicist can look at a problem — say, a ball rolling down an incline — and **intuitively anticipate** its acceleration or trajectory.
+   - they don’t consciously recall each derivation; years of practice have **internalized the principles** of mechanics.
+   - this allows them to solve novel problems, even ones never covered in class, by applying generalized patterns of motion, force, and energy.
+
+ - **externalized (formula reference table)**
+   - another student consults a table of equations: \(F=ma\), kinematic formulas, conservation laws.
+   - the reference ensures **exact correctness** when applied carefully, but it covers only the documented cases.
+   - without creativity, it cannot extend to unusual or unlisted scenarios.
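the formula-table side of this contrast behaves like a lookup: exact within its entries, silent outside them. a minimal sketch, with table contents invented for illustration:

```python
# an external "formula table": each entry applies a documented formula exactly
formula_table = {
    "force": lambda m, a: m * a,                     # F = m * a
    "kinetic_energy": lambda m, v: 0.5 * m * v ** 2, # KE = 1/2 m v^2
}

def solve(quantity, *args):
    """apply a stored formula exactly, or fail if the case is unlisted."""
    formula = formula_table.get(quantity)
    if formula is None:
        return None  # unlisted scenario: the table cannot extend itself
    return formula(*args)

print(solve("force", 2.0, 9.8))        # 19.6 — exact, documented case
print(solve("turbulent_drag", 2.0))    # None — not in the table
```

intuition, by contrast, would improvise an answer for the unlisted case — sometimes rightly, sometimes not. that is the trade the brief describes.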
+
+ ---
+
+ ## 🧠 llm parallel
+ - **internalized knowledge in weights**
+   - like the physicist, llms have absorbed statistical regularities across their training data.
+   - they can extend learned structures to **new prompts**, reasoning flexibly about unseen questions.
+   - limitation: they may make systematic errors when the internalized “intuition” misfires.
+
+ - **externalized knowledge in retrieval**
+   - like the formula table, retrieval provides **precise, explicit answers** or derivations.
+   - it guarantees correctness for known facts but cannot innovate beyond stored entries.
+
+ ---
+
+ ## 🔑 contrast takeaways
+ - **internalized = physicist’s intuition** → adaptive, generalizing, creative but fallible.
+ - **externalized = formula table** → precise, bounded, authoritative but rigid.
+ - llms alone resemble physicists reasoning from intuition; with retrieval, they combine intuition with the reliability of exact formulas.
+
+ ---
+
+ ## 📌 intuition anchor
+ an llm without retrieval is like a **physicist solving problems from intuition** — quick, adaptive, but prone to slips.
+ with retrieval, it becomes a physicist **plus a formula table** — intuition reinforced by exact references.
@@ -0,0 +1,35 @@
+ # 🧩 .brief: `internalized knowledge in llm weights`
+
+ ---
+
+ ## 🧠 how knowledge lives in llm weights
+
+ - **statistical associations**
+   during training, the model adjusts billions of parameters (weights) so that certain patterns of tokens predict others. what emerges are high-dimensional representations that capture regularities across text: facts, grammar, reasoning patterns, world structures.
+
+ - **distributed representation**
+   no single weight corresponds to a single fact (like “paris is the capital of france”). instead, many weights collectively encode the probability structure of phrases and contexts. knowledge is *smeared* across the parameter space.
+
+ - **emergent concepts**
+   neurons and attention heads sometimes specialize in representing particular structures (e.g. gendered pronouns, syntax boundaries, factual relationships). but these are still part of a network of overlapping functions, not isolated “fact cells.”
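the “smeared” quality of distributed representation can be made concrete with a toy sketch: the winning token is decided by a sum over many weight components acting together, so no single component is “the fact”. every vector here is invented for the demo.

```python
# a context vector, standing in for the model's state after reading
# "the capital of france is" (invented numbers)
context = [0.9, -0.3, 0.7, 0.1, -0.5, 0.4, 0.2, -0.8]

# candidate next-token vectors, standing in for rows of learned weights
token_vectors = {
    "paris":  [0.8, -0.2, 0.6, 0.0, -0.4, 0.5, 0.1, -0.7],  # well aligned
    "lyon":   [0.3, 0.4, -0.1, 0.2, 0.1, -0.3, 0.0, 0.2],   # weakly aligned
    "berlin": [-0.6, 0.5, -0.4, 0.1, 0.3, -0.2, 0.1, 0.4],  # misaligned
}

def score(ctx, vec):
    """dot product: every component contributes a little to the answer."""
    return sum(c * v for c, v in zip(ctx, vec))

def predict(ctx):
    """the 'fact' is whichever token the summed evidence favors."""
    return max(token_vectors, key=lambda t: score(ctx, token_vectors[t]))

print(predict(context))  # paris

# ablate one "weight" component: the prediction usually survives,
# because the evidence is spread across all components
ablated = context[:]
ablated[0] = 0.0
print(predict(ablated))  # still paris
```

real models work in thousands of dimensions with learned, not hand-picked, vectors, but the mechanism is the same: recall is a graded sum of evidence, which is also why nearby patterns can interfere and produce confusions.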
+
+ ---
+
+ ## 📊 what this implies
+
+ - **implicit knowledge**
+   the model *has* facts (e.g., it can usually complete “the capital of france is ___”), but it doesn’t “store” them in a database-like table. retrieval is probabilistic and context-dependent.
+
+ - **fuzziness & errors**
+   because knowledge is embedded as distributed patterns, recall isn’t guaranteed. the model may confuse similar facts or “hallucinate” when patterns overlap.
+
+ - **generalization**
+   knowledge in weights isn’t just memorized; it’s also compressed into *general rules*. that’s why llms can apply concepts to novel contexts they weren’t explicitly trained on.
+
+ ---
+
+ ## ⚖️ analogy
+
+ think of llm weights as the structure of a muscle trained through repetition:
+ - they don’t “remember” every exact movement, but they *embody* the regularities of movement.
+ - similarly, llm weights embody statistical regularities of language and facts.