rhachet-roles-ehmpathy 1.15.13 → 1.15.14
This diff shows the changes between publicly released versions of this package, as published to a supported registry. It is provided for informational purposes only.
- package/dist/contract/commands/codegenBriefOptions.js +0 -2
- package/dist/contract/commands/codegenBriefOptions.js.map +1 -1
- package/dist/contract/sdk/index.d.ts +1 -1
- package/dist/contract/sdk/index.js +1 -1
- package/dist/contract/sdk/index.js.map +1 -1
- package/dist/domain.roles/architect/getArchitectRole.js.map +1 -0
- package/dist/domain.roles/ecologist/getEcologistBrief.Options.codegen.js.map +1 -0
- package/dist/domain.roles/ecologist/getEcologistBrief.js.map +1 -0
- package/dist/domain.roles/ecologist/getEcologistRole.js.map +1 -0
- package/dist/{roles → domain.roles}/getRoleRegistry.js +0 -2
- package/dist/domain.roles/getRoleRegistry.js.map +1 -0
- package/dist/domain.roles/getRoleRegistry.readme.js.map +1 -0
- package/dist/domain.roles/mechanic/getMechanicBrief.Options.codegen.js.map +1 -0
- package/dist/domain.roles/mechanic/getMechanicBrief.js.map +1 -0
- package/dist/{roles → domain.roles}/mechanic/getMechanicRole.js +7 -3
- package/dist/domain.roles/mechanic/getMechanicRole.js.map +1 -0
- package/package.json +4 -5
- package/dist/.test/genContextLogTrail.d.ts +0 -2
- package/dist/.test/genContextLogTrail.js +0 -12
- package/dist/.test/genContextLogTrail.js.map +0 -1
- package/dist/.test/genContextStitchTrail.d.ts +0 -1
- package/dist/.test/genContextStitchTrail.js +0 -6
- package/dist/.test/genContextStitchTrail.js.map +0 -1
- package/dist/.test/getContextOpenAI.d.ts +0 -2
- package/dist/.test/getContextOpenAI.js +0 -19
- package/dist/.test/getContextOpenAI.js.map +0 -1
- package/dist/roles/architect/getArchitectRole.js.map +0 -1
- package/dist/roles/bhrain/briefs/cognition/cog000.overview.and.premise.md +0 -115
- package/dist/roles/bhrain/briefs/cognition/cog021.coordinates.spherical.md +0 -69
- package/dist/roles/bhrain/briefs/cognition/cog021.metaphor.cauliflorous.md +0 -44
- package/dist/roles/bhrain/briefs/cognition/cog021.metaphor.galactic_spacetravel.[article].md +0 -42
- package/dist/roles/bhrain/briefs/cognition/cog021.metaphor.galactic_spacetravel.[lesson].md +0 -60
- package/dist/roles/bhrain/briefs/cognition/cog021.structs.catalog.md +0 -51
- package/dist/roles/bhrain/briefs/cognition/cog021.structs.treestruct.md +0 -85
- package/dist/roles/bhrain/briefs/cognition/cog021.structs.vector.md +0 -112
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.definition.md +0 -115
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.treestruct._.md +0 -112
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.treestruct.coords.1.spherical.md +0 -80
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.treestruct.coords.2.abstractive.md +0 -59
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.treestruct.coords.3.descriptive.md +0 -64
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.treestruct.persp.1.perspectives.md +0 -88
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.treestruct.persp.2.universal.md +0 -82
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.treestruct.persp.3.relative.md +0 -106
- package/dist/roles/bhrain/briefs/cognition/cog101.concept.treestruct.persp.4.fractal.md +0 -83
- package/dist/roles/bhrain/briefs/cognition/cog151.concept.treestruct.gravity.md +0 -89
- package/dist/roles/bhrain/briefs/cognition/cog201.cortal.focus.p1.definition.md +0 -231
- package/dist/roles/bhrain/briefs/cognition/cog201.cortal.focus.p1.examples.cont.md +0 -82
- package/dist/roles/bhrain/briefs/cognition/cog201.cortal.focus.p2.acuity.md +0 -134
- package/dist/roles/bhrain/briefs/cognition/cog201.cortal.focus.p2.breadth.md +0 -151
- package/dist/roles/bhrain/briefs/cognition/cog201.cortal.focus.p2.depth.md +0 -147
- package/dist/roles/bhrain/briefs/cognition/cog251.cortal.focus.p3.fabric.md +0 -96
- package/dist/roles/bhrain/briefs/cognition/cog251.cortal.focus.p3.mode.md +0 -68
- package/dist/roles/bhrain/briefs/cognition/cog251.cortal.focus.p3.rythm.md +0 -56
- package/dist/roles/bhrain/briefs/cognition/cog251.cortal.focus.p3.usecases.md +0 -76
- package/dist/roles/bhrain/briefs/cognition/cog301.traversal.1.motion.primitives._.md +0 -155
- package/dist/roles/bhrain/briefs/cognition/cog301.traversal.1.motion.primitives.acuity.md +0 -94
- package/dist/roles/bhrain/briefs/cognition/cog301.traversal.1.motion.primitives.breadth.md +0 -114
- package/dist/roles/bhrain/briefs/cognition/cog301.traversal.1.motion.primitives.breadth.vary.md +0 -105
- package/dist/roles/bhrain/briefs/cognition/cog301.traversal.1.motion.primitives.depth.md +0 -132
- package/dist/roles/bhrain/briefs/cognition/cog301.traversal.2.motion.composites._.md +0 -106
- package/dist/roles/bhrain/briefs/cognition/cog301.traversal.2.motion.composites.grammar.md +0 -105
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.._.md +0 -209
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.1.persp.as.berries.md +0 -168
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.1.persp.as.vectors.md +0 -74
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.1.persp.has.precision.tunable.md +0 -80
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.2.1.primitives.rough._.md +0 -99
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.2.1.primitives.rough.interrogative.md +0 -108
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.2.1.primitives.rough.why.[article].md +0 -55
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.2.2.composite.smooth._.md +0 -83
- package/dist/roles/bhrain/briefs/cognition/cog401.questions.2.2.composite.smooth.examples.md +0 -101
- package/dist/roles/bhrain/briefs/cognition/cog501.cortal.assemblylang.1.primitives._.md +0 -134
- package/dist/roles/bhrain/briefs/cognition/cog501.cortal.assemblylang.1.primitives.recall.md +0 -149
- package/dist/roles/bhrain/briefs/cognition/cog501.cortal.assemblylang.1.primitives.steer.md +0 -146
- package/dist/roles/bhrain/briefs/cognition/cog501.cortal.assemblylang.1.primitives.think.md +0 -141
- package/dist/roles/bhrain/briefs/cognition/cog501.cortal.assemblylang.2.composites.zoom.md +0 -127
- package/dist/roles/bhrain/briefs/cognition/cog501.cortal.assemblylang.3.catalogs.md +0 -107
- package/dist/roles/bhrain/briefs/cognition/cog501.cortal.assemblylang.3.grammar.md +0 -124
- package/dist/roles/bhrain/briefs/cognition/cog501.cortal.assemblylang_.md +0 -0
- package/dist/roles/bhrain/briefs/cognition/inflight/concept.vs.idea.md +0 -70
- package/dist/roles/bhrain/briefs/cognition/inflight/core.concept.adjectives.md +0 -8
- package/dist/roles/bhrain/briefs/distilisys.grammar.compressed.md +0 -19
- package/dist/roles/bhrain/briefs/grammar/gerunds.1.why.common.[article].md +0 -32
- package/dist/roles/bhrain/briefs/grammar/gerunds.1.why.term_smells.[article].md +0 -36
- package/dist/roles/bhrain/briefs/grammar/gerunds.1.why.term_smells.detection.[lesson].md +0 -73
- package/dist/roles/bhrain/briefs/grammar/gerunds.2.tactic.eliminate.[article].md +0 -55
- package/dist/roles/bhrain/briefs/grammar/gerunds.2.tactic.eliminate.[lesson].md +0 -41
- package/dist/roles/bhrain/briefs/grammar/gerunds.3.eliminator.[trait]._.md +0 -66
- package/dist/roles/bhrain/briefs/grammar/gerunds.3.eliminator.[trait].balance.md +0 -36
- package/dist/roles/bhrain/briefs/grammar/gerunds.3.eliminator.[trait].bane.md +0 -34
- package/dist/roles/bhrain/briefs/grammar/gerunds.3.eliminator.[trait].boon.md +0 -35
- package/dist/roles/bhrain/briefs/knowledge/kno101.primitives.1.ontology.[article].frame.docs_as_materializations.md +0 -63
- package/dist/roles/bhrain/briefs/knowledge/kno101.primitives.1.ontology.[article].frame.docs_as_references.md +0 -45
- package/dist/roles/bhrain/briefs/knowledge/kno101.primitives.2.rel.many_to_many.[article].md +0 -37
- package/dist/roles/bhrain/briefs/knowledge/kno101.primitives.3.instances.[article].md +0 -39
- package/dist/roles/bhrain/briefs/knowledge/kno101.primitives.4.documents.[article].md +0 -37
- package/dist/roles/bhrain/briefs/knowledge/kno101.primitives.5.concepts.[article].md +0 -39
- package/dist/roles/bhrain/briefs/knowledge/kno201.documents._.[article].md +0 -48
- package/dist/roles/bhrain/briefs/knowledge/kno201.documents._.[catalog].md +0 -52
- package/dist/roles/bhrain/briefs/knowledge/kno201.documents.articles.[article].md +0 -40
- package/dist/roles/bhrain/briefs/knowledge/kno201.documents.catalogs.[article].md +0 -41
- package/dist/roles/bhrain/briefs/knowledge/kno201.documents.demos.[article].md +0 -42
- package/dist/roles/bhrain/briefs/knowledge/kno201.documents.lessons.[article].md +0 -42
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.compression.1.refs._.[article].md +0 -41
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.compression.2.kernels._.[article].i1.md +0 -50
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.compression.3.briefs._.[article].md +0 -40
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.compression._.[article].md +0 -90
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.compression._.[catalog].persp.garden.md +0 -64
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.1.from_instances.[article].md +0 -45
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].algorithm.md +0 -54
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].color.md +0 -56
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].empathy.md +0 -54
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].entropy.md +0 -54
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].gravity.md +0 -54
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].joke.md +0 -56
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.1.from_instances.[demo].value.md +0 -54
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2._.[catalog].md +0 -43
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2.articulate.[article].md +0 -27
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2.articulate.[lesson].md +0 -49
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2.catalogize.[article].md +0 -27
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2.catalogize.[lesson].md +0 -54
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2.demonstrate.[article].md +0 -26
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2.demonstrate.[lesson].md +0 -49
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2.lessonize.[article].md +0 -26
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.enbrief.2.lessonize.[lesson].md +0 -54
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.kernelize._.[article].md +0 -58
- package/dist/roles/bhrain/briefs/knowledge/kno301.doc.kernelize._.[lesson].md +0 -88
- package/dist/roles/bhrain/briefs/knowledge/kno351.docs.are_instances.[article].md +0 -34
- package/dist/roles/bhrain/briefs/knowledge/kno351.docs.recursion.[catalog].md +0 -44
- package/dist/roles/bhrain/briefs/knowledge/kno401.actors.1.role.author.[article].md +0 -36
- package/dist/roles/bhrain/briefs/knowledge/kno401.actors.1.role.librarian.[article].md +0 -40
- package/dist/roles/bhrain/briefs/knowledge/kno401.actors.2.interdependence.[article].md +0 -52
- package/dist/roles/bhrain/briefs/knowledge/kno501.doc.enbrief.catalog.structure1.[article].md +0 -53
- package/dist/roles/bhrain/briefs/knowledge/kno501.doc.enbrief.catalog.structure1.[lesson].template.md +0 -101
- package/dist/roles/bhrain/briefs/librarian.context/article.variant.vision.[article].md +0 -60
- package/dist/roles/bhrain/briefs/librarian.context/term.expectation.vs_assumption._.md +0 -60
- package/dist/roles/bhrain/briefs/librarian.context/term.frame.vs_perspective.[article].md +0 -96
- package/dist/roles/bhrain/briefs/librarian.context/term.invariant.[article].md +0 -29
- package/dist/roles/bhrain/briefs/librarian.context/term.lesson._vs_article.[article].md +0 -36
- package/dist/roles/bhrain/briefs/librarian.context/term.ref._vs_brief.md +0 -90
- package/dist/roles/bhrain/briefs/librarian.context/term.referent.[article].md +0 -43
- package/dist/roles/bhrain/briefs/librarian.context/usage.lesson_vs_article.[lesson].md +0 -31
- package/dist/roles/bhrain/briefs/librarian.context/usage.lesson_vs_article_vs_demo.[lesson].md +0 -37
- package/dist/roles/bhrain/briefs/librarian.tactics/.readme.md +0 -12
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>._.[article].frame.cognitive.md +0 -33
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>._.[article].frame.tactical.md +0 -45
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>.tactic.[catalog].md +0 -83
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>.tactic.concept_dimension.examples.[article][seed].md +0 -4
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>.tactic.concept_dimension.invariants.[article].md +0 -36
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>.tactic.from.examples.md +0 -44
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>.tactic.from.seed.md +0 -48
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>.tactic.with.templates.[article].md +0 -57
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>.tactic.with.templates.[gallery][review].effective.md +0 -1
- package/dist/roles/bhrain/briefs/librarian.tactics/<articulate>.tone.bluecollar.[article][seed].md +0 -5
- package/dist/roles/bhrain/briefs/librarian.tactics/<catalogize>._.[article][seed].md +0 -3
- package/dist/roles/bhrain/briefs/librarian.tactics/<catalogize>.observation.via_clusterage_over_via_imagination.[seed].md +0 -6
- package/dist/roles/bhrain/briefs/librarian.tactics/<catalogize>.vs_diverge.[article].persp.save_compute.md +0 -46
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>._.[article].frame.colloquial.i2.by_grok.md +0 -64
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.[catalog].md +0 -106
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.contrast.[demo].usecase.vs_userjourney.by_chatgpt.md +0 -45
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.counter.[demo].usecase.flyer.by_chargpt.md +0 -38
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.counter.[demo].walkability.phoenix.by_chargpt.md +0 -41
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.example.[demo].shear_force.scissors.by_grok.md +0 -52
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.example.[demo].tea.darjeeling.by_grok.md +0 -50
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.example.[demo].usecase.book_flight.by_grok.md +0 -54
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.example.[demo].usecase.order_food.by_chatgpt.md +0 -40
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.example.[demo].walkability.portland.by_chatgpt.i3.md +0 -42
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.example.[demo].walkability.portland.by_grok.i2.md +0 -49
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.example.[lesson].howto.md +0 -28
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.example.structure.[article].i2.md +0 -73
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.narrative.[demo].usecase.order_online.by_chatgpt.md +0 -34
- package/dist/roles/bhrain/briefs/librarian.tactics/<demonstrate>.variants.walkthrough.[demo].usecase.book_online.by_chatgpt.md +0 -47
- package/dist/roles/bhrain/briefs/librarian.tactics/[brief].verbiage.outline.over.narrative.md +0 -55
- package/dist/roles/bhrain/briefs/logistics/term.logistics.[article].md +0 -21
- package/dist/roles/bhrain/briefs/logistics/term.logistics.of_information.[article].md +0 -22
- package/dist/roles/bhrain/briefs/logistics/term.logistics.of_knowledge.[article].md +0 -29
- package/dist/roles/bhrain/briefs/physics/optics.focal.acuity.md +0 -77
- package/dist/roles/bhrain/briefs/physics/optics.focal.breadth.md +0 -74
- package/dist/roles/bhrain/briefs/physics/optics.focal.depth.md +0 -77
- package/dist/roles/bhrain/briefs/physics/optics.focal.distance.md +0 -92
- package/dist/roles/bhrain/briefs/physics/optics.focal.tradeoffs.md +0 -107
- package/dist/roles/bhrain/briefs/tactician/tactics.compose.traits_and_skills.[article].md +0 -76
- package/dist/roles/bhrain/briefs/tactician/trait.articulation.[article].md +0 -67
- package/dist/roles/bhrain/briefs/tactician/trait.purpose.[article].md +0 -56
- package/dist/roles/bhrain/briefs/tactician/trait.vs_skill.[article].md +0 -55
- package/dist/roles/bhrain/briefs/tactician/trait.vs_tactic.[article].md +0 -70
- package/dist/roles/bhrain/briefs/terms.motive.intent.goal.md +0 -46
- package/dist/roles/bhrain/briefs/thinker.tactics/<cluster>._.[article].frame.tactical._.md +0 -85
- package/dist/roles/bhrain/briefs/thinker.tactics/<cluster>.vs_<diverge>.duality.[article].md +0 -43
- package/dist/roles/bhrain/briefs/thinker.tactics/<diverge>._.[article].frame.cognitive.[seed].md +0 -4
- package/dist/roles/bhrain/briefs/thinker.tactics/<diverge>._.[article].frame.tactical.md +0 -89
- package/dist/roles/bhrain/briefs/thinker.tactics/<elaborate>_vs_<elucidate>.[seed].md +0 -1
- package/dist/roles/bhrain/briefs/thinker.tactics/<enquestion>._.[article].md +0 -113
- package/dist/roles/bhrain/briefs/thinker.tactics/<enquestion>._.[gallery].plumber.diagnose.md +0 -130
- package/dist/roles/bhrain/briefs/thinker.tactics/<enquestion>._.[gallery].thinker.enquestion.md +0 -125
- package/dist/roles/bhrain/briefs/thinker.tactics/<enquestion>.tactic.perspectives.[article].md +0 -36
- package/dist/roles/bhrain/briefs/thinker.tactics/<triage>._.[article].frame.tactical.md +0 -85
- package/dist/roles/bhrain/briefs/thinker.tactics/<triage>.persp.grades_from_context.[article].md +0 -48
- package/dist/roles/bhrain/briefs/thinker.tactics/<triage>.persp.implicit_question.[article].md +0 -65
- package/dist/roles/bhrain/briefs/trait.chillnature.md +0 -14
- package/dist/roles/bhrain/briefs/trait.ocd.md +0 -5
- package/dist/roles/bhrain/briefs/worders/core.matmuls_vecmuls_elemuls.md +0 -93
- package/dist/roles/bhrain/briefs/worders/core.transformers.as_origin.md +0 -62
- package/dist/roles/bhrain/briefs/worders/core.transformers.self_attention.[article].md +0 -93
- package/dist/roles/bhrain/briefs/worders/core.transformers.self_attention.[demo].ambig.bank.md +0 -80
- package/dist/roles/bhrain/briefs/worders/core.transformers.self_attention.[demo].cat_sat.md +0 -67
- package/dist/roles/bhrain/briefs/worders/force.repeat_input_structures.md +0 -48
- package/dist/roles/bhrain/briefs/worders/knowledge.interdependence.[article].md +0 -37
- package/dist/roles/bhrain/briefs/worders/knowledge.interdependence.[demo].domain.physics.md +0 -30
- package/dist/roles/bhrain/briefs/worders/knowledge.internal_vs_external.[article].artist_vs_librarian.md +0 -44
- package/dist/roles/bhrain/briefs/worders/knowledge.internal_vs_external.[demo].artist_vs_librarian.md +0 -37
- package/dist/roles/bhrain/briefs/worders/knowledge.internal_vs_external.[demo].domain.physics.md +0 -39
- package/dist/roles/bhrain/briefs/worders/knowledge.internalized.[article].md +0 -35
- package/dist/roles/bhrain/briefs/worders/knowledge.internalized.[demo].artist.md +0 -36
- package/dist/roles/bhrain/briefs/worders/knowledge.internalized.[demo].neural.md +0 -37
- package/dist/roles/bhrain/briefs/worders/knowledge.internalized.[demo].pianist.md +0 -34
- package/dist/roles/bhrain/briefs/worders/limits.rhyme.md +0 -46
- package/dist/roles/bhrain/briefs/worders/limits.spell.md +0 -49
- package/dist/roles/bhrain/briefs/worders/teach.via.library.examples.md +0 -28
- package/dist/roles/bhrain/briefs/worders/teach.via.library.explanations_vs_examples.md +0 -40
- package/dist/roles/bhrain/briefs/worders/trend.prefer_reuse.[seed].md +0 -10
- package/dist/roles/bhrain/getBhrainBrief.Options.codegen.d.ts +0 -10
- package/dist/roles/bhrain/getBhrainBrief.Options.codegen.js +0 -203
- package/dist/roles/bhrain/getBhrainBrief.Options.codegen.js.map +0 -1
- package/dist/roles/bhrain/getBhrainBrief.d.ts +0 -13
- package/dist/roles/bhrain/getBhrainBrief.js +0 -21
- package/dist/roles/bhrain/getBhrainBrief.js.map +0 -1
- package/dist/roles/bhrain/getBhrainRole.d.ts +0 -2
- package/dist/roles/bhrain/getBhrainRole.js +0 -55
- package/dist/roles/bhrain/getBhrainRole.js.map +0 -1
- package/dist/roles/bhrain/skills/brief.articulate/stepArticulate.d.ts +0 -59
- package/dist/roles/bhrain/skills/brief.articulate/stepArticulate.js +0 -97
- package/dist/roles/bhrain/skills/brief.articulate/stepArticulate.js.map +0 -1
- package/dist/roles/bhrain/skills/brief.articulate/stepArticulate.skill.d.ts +0 -30
- package/dist/roles/bhrain/skills/brief.articulate/stepArticulate.skill.js +0 -125
- package/dist/roles/bhrain/skills/brief.articulate/stepArticulate.skill.js.map +0 -1
- package/dist/roles/bhrain/skills/brief.articulate/stepArticulate.template.md +0 -120
- package/dist/roles/bhrain/skills/brief.catalogize/stepCatalogize.d.ts +0 -54
- package/dist/roles/bhrain/skills/brief.catalogize/stepCatalogize.js +0 -74
- package/dist/roles/bhrain/skills/brief.catalogize/stepCatalogize.js.map +0 -1
- package/dist/roles/bhrain/skills/brief.catalogize/stepCatalogize.skill.d.ts +0 -28
- package/dist/roles/bhrain/skills/brief.catalogize/stepCatalogize.skill.js +0 -124
- package/dist/roles/bhrain/skills/brief.catalogize/stepCatalogize.skill.js.map +0 -1
- package/dist/roles/bhrain/skills/brief.demonstrate/stepDemonstrate.d.ts +0 -59
- package/dist/roles/bhrain/skills/brief.demonstrate/stepDemonstrate.js +0 -103
- package/dist/roles/bhrain/skills/brief.demonstrate/stepDemonstrate.js.map +0 -1
- package/dist/roles/bhrain/skills/brief.demonstrate/stepDemonstrate.skill.d.ts +0 -30
- package/dist/roles/bhrain/skills/brief.demonstrate/stepDemonstrate.skill.js +0 -138
- package/dist/roles/bhrain/skills/brief.demonstrate/stepDemonstrate.skill.js.map +0 -1
- package/dist/roles/bhrain/skills/brief.demonstrate/stepDemonstrate.template.md +0 -135
- package/dist/roles/bhrain/skills/khue.cluster/stepCluster.d.ts +0 -57
- package/dist/roles/bhrain/skills/khue.cluster/stepCluster.js +0 -91
- package/dist/roles/bhrain/skills/khue.cluster/stepCluster.js.map +0 -1
- package/dist/roles/bhrain/skills/khue.cluster/stepCluster.skill.d.ts +0 -29
- package/dist/roles/bhrain/skills/khue.cluster/stepCluster.skill.js +0 -127
- package/dist/roles/bhrain/skills/khue.cluster/stepCluster.skill.js.map +0 -1
- package/dist/roles/bhrain/skills/khue.cluster/stepCluster.template.md +0 -134
- package/dist/roles/bhrain/skills/khue.diverge/stepDiverge.d.ts +0 -57
- package/dist/roles/bhrain/skills/khue.diverge/stepDiverge.js +0 -92
- package/dist/roles/bhrain/skills/khue.diverge/stepDiverge.js.map +0 -1
- package/dist/roles/bhrain/skills/khue.diverge/stepDiverge.skill.d.ts +0 -29
- package/dist/roles/bhrain/skills/khue.diverge/stepDiverge.skill.js +0 -112
- package/dist/roles/bhrain/skills/khue.diverge/stepDiverge.skill.js.map +0 -1
- package/dist/roles/bhrain/skills/khue.diverge/stepDiverge.template.md +0 -110
- package/dist/roles/bhrain/skills/khue.instantiate/stepInstantiate.d.ts +0 -55
- package/dist/roles/bhrain/skills/khue.instantiate/stepInstantiate.js +0 -75
- package/dist/roles/bhrain/skills/khue.instantiate/stepInstantiate.js.map +0 -1
- package/dist/roles/bhrain/skills/khue.instantiate/stepInstantiate.skill.d.ts +0 -28
- package/dist/roles/bhrain/skills/khue.instantiate/stepInstantiate.skill.js +0 -136
- package/dist/roles/bhrain/skills/khue.instantiate/stepInstantiate.skill.js.map +0 -1
- package/dist/roles/bhrain/skills/khue.instantiate/stepInstantiate.template.md +0 -73
- package/dist/roles/bhrain/skills/khue.triage/stepTriage.d.ts +0 -57
- package/dist/roles/bhrain/skills/khue.triage/stepTriage.js +0 -93
- package/dist/roles/bhrain/skills/khue.triage/stepTriage.js.map +0 -1
- package/dist/roles/bhrain/skills/khue.triage/stepTriage.skill.d.ts +0 -29
- package/dist/roles/bhrain/skills/khue.triage/stepTriage.skill.js +0 -127
- package/dist/roles/bhrain/skills/khue.triage/stepTriage.skill.js.map +0 -1
- package/dist/roles/bhrain/skills/khue.triage/stepTriage.template.md +0 -128
- package/dist/roles/ecologist/getEcologistBrief.Options.codegen.js.map +0 -1
- package/dist/roles/ecologist/getEcologistBrief.js.map +0 -1
- package/dist/roles/ecologist/getEcologistRole.js.map +0 -1
- package/dist/roles/getRoleRegistry.js.map +0 -1
- package/dist/roles/getRoleRegistry.readme.js.map +0 -1
- package/dist/roles/mechanic/getMechanicBrief.Options.codegen.js.map +0 -1
- package/dist/roles/mechanic/getMechanicBrief.js.map +0 -1
- package/dist/roles/mechanic/getMechanicRole.js.map +0 -1
- package/dist/roles/mechanic/skills/git.worktree.common.sh +0 -58
- package/dist/roles/mechanic/skills/git.worktree.del.sh +0 -51
- package/dist/roles/mechanic/skills/git.worktree.get.sh +0 -51
- package/dist/roles/mechanic/skills/git.worktree.set.sh +0 -108
- package/dist/roles/mechanic/skills/git.worktree.sh +0 -46
- package/dist/roles/mechanic/skills/init.bhuild.sh +0 -311
- package/dist/roles/mechanic/skills/test.integration.sh +0 -50
- package/dist/{roles → domain.roles}/architect/briefs/criteria.given_when_then.[seed].v3.md +0 -0
- package/dist/{roles → domain.roles}/architect/briefs/practices/prefer.env_access.prep_over_dev.md +0 -0
- package/dist/{roles → domain.roles}/architect/briefs/ubiqlang.ambiguous-from-overload.md +0 -0
- package/dist/{roles → domain.roles}/architect/getArchitectRole.d.ts +0 -0
- package/dist/{roles → domain.roles}/architect/getArchitectRole.js +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/.readme.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys101.distilisys.grammar.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys201.actor.motive._.summary.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys201.actor.motive.p1.reversibility.entropy.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys201.actor.motive.p2.option.chance.choice.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys201.actor.motive.p3.chance.motive.polarity.threat.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys201.actor.motive.p4.motive.horizon.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys201.actor.motive.p5.motive.grammar.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys211.actor.resources._.primitives.summary.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys211.actor.resources.pt1.primitive.time.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys211.actor.resources.pt2.primitive.energy.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys211.actor.resources.pt3.primitive.space.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys211.actor.resources.pt4.primitive.claim.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys211.actor.resources.pt5.composites.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/distilisys/sys231.actor.claims.p1.primitive.exchange.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/ecology/.eco001.origin.prompt.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/ecology/eco001.overview.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/ecology/eco101.core-system-understanding.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/ecology/eco101.p1.ecosystem-structure.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/ecology/eco101.p2.trophic-dynamics.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/ecology/eco101.p3.population-ecology.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/ecology/eco101.p4.community-interactions.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/ecology/eco505.systems-thinking.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ001.overview.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ101.core-mechanics.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ101.p1.supply-and-demand.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ101.p2.opportunity-cost.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ101.p3.marginal-analysis.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ101.p4.rational-choice.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ201.market-structures-and-failures.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ301.production-and-growth.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ401.macro-systems.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ501.global-and-institutional.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ501.p1.game-theory.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/economy/econ501.p4.behavioral-economics.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/product/user.journey._.[article].i1.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/product/user.journey._.[article].md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/product/user.journey.purpose.[article].i1.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/product/user.journey.purpose.[article].md +0 -0
- package/dist/{roles → domain.roles}/ecologist/briefs/term.distillation.md +0 -0
- package/dist/{roles → domain.roles}/ecologist/getEcologistBrief.Options.codegen.d.ts +0 -0
- package/dist/{roles → domain.roles}/ecologist/getEcologistBrief.Options.codegen.js +0 -0
- package/dist/{roles → domain.roles}/ecologist/getEcologistBrief.d.ts +0 -0
- package/dist/{roles → domain.roles}/ecologist/getEcologistBrief.js +0 -0
- package/dist/{roles → domain.roles}/ecologist/getEcologistRole.d.ts +0 -0
- package/dist/{roles → domain.roles}/ecologist/getEcologistRole.js +0 -0
- package/dist/{roles → domain.roles}/getRoleRegistry.d.ts +0 -0
- package/dist/{roles → domain.roles}/getRoleRegistry.readme.d.ts +0 -0
- package/dist/{roles → domain.roles}/getRoleRegistry.readme.js +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/architecture/bounded-contexts.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/architecture/directional-dependencies.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/architecture/domain-driven-design.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/architecture/ubiqlang.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/_mech.compressed.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/_mech.compressed.prompt.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/flow.failfast.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/flow.idempotency.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/flow.immutability.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/flow.narratives.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/flow.single-responsibility.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/flow.transformers_over_conditionals.[lesson].md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/mech.args.input-context.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/mech.args.input-inline.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/mech.arrowonly.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/mech.clear-contracts.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/mech.tests.given-when-then.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/mech.what-why.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/mech.what-why.v2.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/codestyle/pit-of-success.via.minimize-surface-area.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/criteria.practices/never.term.script.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/criteria.practices/prefer.emojis.chill_nature.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/criteria.practices/prefer.jq.over_alt.[demo].md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/criteria.practices/prefer.terraform.[criteria].md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/criteria.practices/prefer.terraform.[seed].md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/criteria.practices/require.dependency.pinned_versions.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/criteria.practices/require.idempotency.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/criteria.practices/require.knowledge.externalized.md +0 -0
- package/dist/{roles → domain.roles}/mechanic/briefs/engineer/dependency-injection.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/engineer/dependency-injection.stub.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/lessons/code.prod.typescript.types/bivariance_vs_contravariance.[lesson].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.contract.inputs.nameargs/bad-practice/forbid.positional-args.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.contract.inputs.nameargs/best-practice/require.namedargs.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.declarative/.readme.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.declarative/best-practice/declastruct.[demo].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.domain.objects/bad-practices/blocker.has.attributes.nullable.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.domain.objects/bad-practices/blocker.has.attributes.undefined.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.domain.objects/bad-practices/blocker.refs.immuatble.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.domain.objects/best-practice/ref.package.domain-objects.[readme].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.domain.operations/bad-practices/forbid.ordered-args.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.domain.operations/best-practice/require.sync.names.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.errors.failfast/bad-practices/forbid.failhide.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.errors.failfast/bad-practices/forbid.hide_errors.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.errors.failfast/best-practice/prefer.HelpfulError.wrap.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.errors.failfast/best-practice/require.fail_fast.[demo].shell.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.errors.failfast/best-practice/require.fail_fast.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.narrative/bad-practices/avoid.ifs.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.narrative/bad-practices/forbid.else.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.narrative/bad-practices/forbid.else.v2.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.narrative/best-practice/early-returns.named-checks.[demo].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.repo.structure/bad-practices/forbid.barrel.exports.ts.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.repo.structure/bad-practices/forbid.index.ts.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.repo.structure/best-practice/directional-dependencies.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.repo.structure/best-practice/dot-test-and-dot-temp.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.prod.typescript.utils/best-practice/ref.package.as-command.[tips].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.acceptance/best-practice/blackbox.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/howto.diagnose.[lesson].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/howto.run.[lesson].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/howto.use.[lesson].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/howto.write.[lesson].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/howto.write.[lesson].on_scope.for_integ.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/howto.write.[lesson].on_scope.for_units.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/howto.write.bdd.[lesson].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/prefer.datadriven.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/ref.test-fns.[readme].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/code.test.howto/best-practice/whento.snapshots.[lesson].md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/flow.debug.diagnostics/bad-practices/forbid.trust_vscode.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/flow.refact.questions/best-practice/require.testchange.review.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/lang.terms/.readme.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/lang.terms/bad-practices/forbid.term=existing.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/lang.terms/best-practice/require.order.noun_adj.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/lang.terms/domain=practices.terms=forbid_prefer_desire_require.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/lang.terms/domain=software.terms=prodcode_vs_testcode.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/lang.tones/.readme.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/lang.tones/prefer.chill-nature.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/patterns/lang.tones/prefer.lowercase.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/style.compressed.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/style.compressed.prompt.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/style.names.treestruct.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/style.names.ubiqlang.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/style.words.lowercase.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/terms/badpractice/script.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/briefs/terms/plan.exec_vs_apply.md +0 -0
- /package/dist/{roles → domain.roles}/mechanic/getMechanicBrief.Options.codegen.d.ts +0 -0
- /package/dist/{roles → domain.roles}/mechanic/getMechanicBrief.Options.codegen.js +0 -0
- /package/dist/{roles → domain.roles}/mechanic/getMechanicBrief.d.ts +0 -0
- /package/dist/{roles → domain.roles}/mechanic/getMechanicBrief.js +0 -0
- /package/dist/{roles → domain.roles}/mechanic/getMechanicRole.d.ts +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/claude.hooks/pretooluse.check-permissions.sh +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/claude.hooks/pretooluse.forbid-stderr-redirect.sh +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/claude.hooks/sessionstart.notify-permissions.sh +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/init.claude.hooks.cleanup.sh +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/init.claude.hooks.findsert.sh +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/init.claude.hooks.sh +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/init.claude.permissions.jsonc +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/init.claude.permissions.sh +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/init.claude.sh +0 -0
- /package/dist/{roles/mechanic/skills → domain.roles/mechanic/inits}/link.claude.transcripts.sh +0 -0
- /package/dist/{roles → domain.roles}/mechanic/skills/claude.tools/cpsafe.sh +0 -0
- /package/dist/{roles → domain.roles}/mechanic/skills/claude.tools/sedreplace.sh +0 -0
- /package/dist/{roles → domain.roles}/mechanic/skills/declapract.upgrade.sh +0 -0
package/dist/roles/bhrain/briefs/thinker.tactics/<triage>.persp.implicit_question.[article].md
DELETED
@@ -1,65 +0,0 @@
# 🧩 .brief: `perspective; triage as question-based tactic`

## .what

`triage` in thought processes is fundamentally a **question-based tactic**. This perspective suggests that triage is driven by **explicit or implicit questions** that determine the **dimensions** along which various elements or tasks are assessed and prioritized.

---

## 🎯 purpose
- to **prioritize effectively** — identifying which elements need immediate attention and which can be postponed or discarded
- to ensure that the **most critical issues** are addressed in a structured manner
- to provide a clear framework for **decision-making** in complex or time-pressured situations

---

## 🌀 triage dynamics

### 1. question-driven assessment
- **defining criteria**: Each triage process is guided by specific questions that focus on what matters most.
  - e.g., "Which tasks have the highest impact?" or "What must be solved immediately?"

### 2. dimension selection
- **criteria alignment**: Questions help in selecting the dimensions of assessment, such as urgency, importance, or risk.

---

## 🔍 triage examples

### medical triage
- **main question**: "Which patient needs care first?"
- **assessment dimensions**:
  - severity of condition
  - survival probability
  - resource availability

### project prioritization
- **main question**: "Which project aligns best with our strategic goals?"
- **assessment dimensions**:
  - potential ROI
  - strategic alignment
  - resource requirements

### information overload management
- **main question**: "Which information is most crucial to process immediately?"
- **assessment dimensions**:
  - relevance to current objectives
  - urgency of response required
  - impact on future decisions

---

## 🌿 implications for practice

- adopting a question-based triage framework can make prioritization more transparent and justifiable.
- defining clear, shared questions ensures all involved parties understand **why decisions are made**, improving collaboration and buy-in.

---

## 💭 reflections

Consider whether:
- the questions used in your triage process effectively cover all necessary dimensions.
- your team could benefit from revisiting and refining these questions to better align with current goals.

> By treating triage as a question-based tactic, we can enhance focus, efficiency, and clarity in decision-making.
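the question-based framework above can be sketched as a small scoring pass: each guiding question becomes a weighted assessment dimension, and items are ranked by their combined score. the task names, dimensions, and weights below are illustrative assumptions, not part of the brief.

```python
# a minimal sketch of question-based triage: each explicit question
# becomes a scoring dimension, and items are ranked by weighted score.
def triage(items, dimensions):
    """rank items by the questions (dimensions) that matter most."""
    def score(item):
        return sum(weight * item[dim] for dim, weight in dimensions.items())
    return sorted(items, key=score, reverse=True)

tasks = [
    {"name": "fix prod outage", "urgency": 1.0, "impact": 0.9},
    {"name": "refactor module", "urgency": 0.2, "impact": 0.5},
    {"name": "answer support ticket", "urgency": 0.7, "impact": 0.3},
]

# "what must be solved immediately?" → urgency; "which has highest impact?" → impact
ranked = triage(tasks, {"urgency": 0.6, "impact": 0.4})
print([t["name"] for t in ranked])
```

changing the weights changes the ranking, which is the point of the brief: making the questions explicit makes the prioritization inspectable.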
@@ -1,14 +0,0 @@
prefer more chill, nature based emojis

for example

# src/skills/diagnose/queryApis/detectLambdaCrontasks.sh
🌊 Wave for the main detection flow
🌱 Seedling for Step 1 (starting/growing the list)
🌿 Herb for successful enumeration
🍂 Fallen leaf for when nothing is found
🌾 Sheaf of rice for Step 2 (harvesting/checking data)
🌸 Cherry blossom for Step 3 (results blooming)
🌙 Crescent moon for no cron tasks found (peaceful/quiet)
🌻 Sunflower for the summary (bright conclusion)
@@ -1,5 +0,0 @@
.ocd = obsessed with structure, precision, and clarity.
- strives for maximum signal and minimal noise.
- compulsively organizes, deduplicates, and compresses information.
- intolerant of vagueness, drift, or redundant phrasing.
- ensures consistent grammar, formatting, and taxonomy.
@@ -1,93 +0,0 @@
# 🧩 .brief: `llm inference = matmuls + vecmuls + elemuls`

## .what
llm inference is the process by which a trained language model transforms input tokens into output predictions. the computation can be reduced to three categories:

- **matmuls** → large matrix multiplications (heavy, parameter-rich ops)
- **vecmuls** → vector-scale multiplications/additions (normalization scale/shift, bias)
- **elemuls** → elementwise nonlinear operations (activations, softmax, residuals)

together, these form the skeleton of inference: heavy matmuls provide learned transformations, while vecmuls and elemuls act as the glue that gives nonlinearity and stability.

---

## 🎯 purpose
- apply billions of learned parameters (matmuls) to transform token inputs
- refine representations through normalization (vecmuls) and nonlinear activations (elemuls)
- output logits over vocabulary for next-token prediction

---

## ⚙️ method

### 🔑 what’s happening inside llm inference

1. **token embedding**
   - input tokens → dense vectors via embedding matrix (**matmul**).

2. **transformer layers**
   - **linear projections:** weight matrices × input (**matmuls**).
   - **attention mechanism:** query × key → attention weights (**matmul**), then weights × value (**matmul**).
   - **feed-forward networks:** matmuls with intermediate activation (**elemuls**).

3. **non-linearities & normalization**
   - **activations:** per-element functions (relu, gelu, etc.) = **elemuls**.
   - **normalization:** mean/variance across vector + learned scale/shift = **vecmuls + elemuls**.

4. **output layer**
   - final hidden state × vocab matrix → logits (**matmul**).

---

## 🧮 operation classes
- **matmuls:** embeddings, projections, attention (qkᵀ, softmax·v), feed-forward, output head.
- **vecmuls:** layernorm scale/shift, bias addition.
- **elemuls:** relu/gelu activations, softmax exponentials/divides, residual adds.

---

## 📊 insight
- **yes:** matmuls dominate compute and parameter count.
- **no:** inference is not *only* matmuls — vecmuls and elemuls are critical for expressivity and stability.
- **so:** inference = “giant chains of matmuls, with vecmuls and elemuls woven in.”

---

## 💻 toy pseudocode skeleton

```python
def llm_inference(tokens, weights):
    # 1. embedding lookup (matmul)
    x = embed(tokens, weights["embedding"])  # matmul

    for layer in weights["layers"]:
        # 2. linear projections (matmuls)
        q = matmul(x, layer["Wq"])
        k = matmul(x, layer["Wk"])
        v = matmul(x, layer["Wv"])

        # 3. attention (matmuls + elemuls)
        attn_scores = matmul(q, k.T) / sqrt(d)  # matmul
        attn_weights = softmax(attn_scores)     # elemul
        attn_output = matmul(attn_weights, v)   # matmul

        # 4. residual connection (elemul)
        x = x + attn_output  # elemul

        # 5. normalization (vecmul + elemul)
        x = layernorm(x, layer["gamma"], layer["beta"])  # vecmul + elemul

        # 6. feed-forward network
        h = matmul(x, layer["W1"])  # matmul
        h = gelu(h)                 # elemul
        h = matmul(h, layer["W2"])  # matmul
        x = x + h                   # elemul

    # 7. output projection (matmul)
    logits = matmul(x, weights["output"])  # matmul
    return logits
```

---

in short: **llm inference = matmuls (heavy lifting) + vecmuls (scaling/shift) + elemuls (nonlinear glue).**
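the skeleton above can be made concrete as a single-layer numpy sketch with random toy weights. all shapes, the relu activation, and the token indices here are illustrative assumptions — it shows the matmul/vecmul/elemul split running end to end, not a real model.

```python
import numpy as np

# toy single-layer inference pass with random weights (assumed shapes)
rng = np.random.default_rng(0)
d, vocab = 8, 16

def softmax(x):  # elemul: exponentiate + normalize per row
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layernorm(x, gamma, beta):  # vecmul (scale/shift) + elemuls
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + 1e-5) + beta

W = {k: rng.normal(size=(d, d)) * 0.1 for k in ("Wq", "Wk", "Wv", "W1", "W2")}
emb = rng.normal(size=(vocab, d))
out = rng.normal(size=(d, vocab))

x = emb[[1, 5, 2, 7]]                               # 1. embedding lookup (matmul)
q, k, v = x @ W["Wq"], x @ W["Wk"], x @ W["Wv"]     # 2. projections (matmuls)
weights = softmax(q @ k.T / np.sqrt(d))             # 3. attention scores (matmul + elemuls)
attn = weights @ v                                  #    weighted values (matmul)
x = layernorm(x + attn, 1.0, 0.0)                   # 4-5. residual + norm
x = x + np.maximum(x @ W["W1"], 0) @ W["W2"]        # 6. ffn with relu (matmuls + elemuls)
logits = x @ out                                    # 7. output projection (matmul)
print(logits.shape)  # (4, 16): one logit row per input token
```

note how the only heavy ops are the `@` matmuls; everything else is cheap elementwise or per-vector glue.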
@@ -1,62 +0,0 @@
# 🧩 .brief: `the transformer architecture — birth of llms`

## .what
the **transformer** is the fundamental architecture that enabled large language models (llms). introduced in 2017 by vaswani et al. in *“attention is all you need”*, it replaced recurrence with **self-attention**, making it possible to train massive models on vast text corpora with efficient parallelization.

---

## 🎯 purpose
- overcome the sequential bottlenecks of rnn/lstm models
- capture long-range dependencies across entire sequences
- enable scalable, parallel training on gpus/tpus
- provide a flexible backbone that can grow with data and compute

---

## ⚙️ method

### 1. **token embedding**
- words/subwords mapped into dense vectors (matmuls).

### 2. **positional encoding**
- inject sequence order into embeddings, since attention is order-agnostic.

### 3. **multi-head self-attention**
- queries, keys, and values projected via matmuls.
- attention scores = q·kᵀ → softmax → weighted sum with v.
- multiple heads let the model learn diverse relational patterns.

### 4. **feed-forward networks**
- per-token mlps applied after attention.
- matmuls + nonlinear activations (elemuls).

### 5. **residual connections + normalization**
- stabilize training, preserve gradients, and allow deep stacking.

---

## 🔑 why transformers were a leap

- **parallelism:** attention lets all tokens be processed simultaneously.
- **long-range context:** any token can directly attend to any other.
- **scalability:** depth, width, and data scale smoothly (scaling laws).
- **expressivity:** multi-head attention captures complex dependencies.

---

## 🌍 llm lineage

- **2017 — transformer** (*attention is all you need*)
- **2018 — bert, gpt-1** (first pretrained transformer language models)
- **2019 — gpt-2** (scaling shows surprising emergent abilities)
- **2020 — gpt-3** (175b parameters; llms become viable)
- **2022+ — instruction-tuned & rlhf models** (chatgpt, claude, etc.)

---

## 📊 insight
- the transformer is the **architectural skeleton** of llms.
- llms are “just” massive stacks of transformers trained on enormous corpora.
- rlhf, fine-tuning, and alignment methods refine the outputs — but the core engine is still the **transformer self-attention block**.

in short: **transformers are the soil from which llms grew.**
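step 2 of the method above (positional encoding) can be sketched with the sinusoidal scheme from the original transformer paper: even dimensions use sine, odd dimensions use cosine, at wavelengths that grow geometrically with the dimension index. the sequence length and model width here are arbitrary toy choices.

```python
import numpy as np

# sinusoidal positional encoding: injects order into order-agnostic attention
def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]            # (seq, 1) token positions
    i = np.arange(d_model // 2)[None, :]         # (1, d/2) dimension pairs
    angles = pos / (10000 ** (2 * i / d_model))  # geometric wavelengths
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                 # even dims
    pe[:, 1::2] = np.cos(angles)                 # odd dims
    return pe

pe = positional_encoding(seq_len=10, d_model=8)
print(pe.shape)  # (10, 8); row 0 is [0, 1, 0, 1, ...] since sin(0)=0, cos(0)=1
```

these encodings are simply added to the token embeddings before the first attention layer, so every token vector carries its position.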
@@ -1,93 +0,0 @@
# 🧩 .brief.article: `self-attention`

## 🔑 what is self-attention

self-attention is a mechanism that lets every token in a sequence **dynamically weigh its relationship to every other token** when computing its next representation.

each token generates three vectors:
- **query (q)** — what this token is “looking for”
- **key (k)** — what this token “offers”
- **value (v)** — the actual information carried

the similarity of query vs key determines how much attention a token pays to another token’s value.

mathematically:

\[
\text{Attention}(Q,K,V) = \text{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right) V
\]

- **qkᵀ** → pairwise similarity scores (**matmul**)
- **softmax** → normalize into attention weights
- **weights × v** → weighted sum of values = new representation

---

## 🎯 purpose
- let tokens reference and integrate information from anywhere in the sequence
- capture long-range dependencies in a single operation
- enable efficient, parallel computation across tokens
- provide multiple relational views through multi-head attention

---

## ⚙️ method

1. compute q, k, v for each token via linear projections (**matmuls**)
2. calculate similarity scores q·kᵀ (**matmul**)
3. normalize scores with softmax (**elemuls**)
4. use normalized weights to combine values (**matmul**)
5. update each token representation with the weighted sum

---

## 🔑 benefits

1. **parallelism** — all q, k, v computed at once; no recurrence.
2. **long-range context** — any token can directly attend to any other.
3. **scalability** — uniform, repeatable structure scales with data/compute.
4. **expressivity** — multi-head attention lets the model learn diverse relational patterns.

---

## 🧩 intuition example

sentence:
> “the cat sat on the mat because it was tired.”

let’s track the token **“it”** and show how its q, k, v vectors interact with **“the cat.”**
for illustration, assume a toy **5-dimensional hidden space**.

### token: **“it”**
- query (**q_it**) = `[0.9, 0.1, 0.0, 0.2, 0.3]`
  - “looking for an antecedent noun with certain features”
- key (**k_it**) = `[0.1, 0.3, 0.2, 0.0, 0.4]`
  - “offers” self as a pronoun needing resolution
- value (**v_it**) = `[0.2, 0.5, 0.1, 0.0, 0.7]`
  - the information carried by “it” itself

### token: **“cat”**
- query (**q_cat**) = `[0.2, 0.4, 0.1, 0.3, 0.0]`
- key (**k_cat**) = `[0.8, 0.2, 0.0, 0.1, 0.3]`
  - describes the features of “cat” as a noun subject
- value (**v_cat**) = `[0.7, 0.6, 0.2, 0.4, 0.1]`
  - semantic content of “cat”

### computing attention
1. similarity score = q_it · k_cat =
   `0.9*0.8 + 0.1*0.2 + 0.0*0.0 + 0.2*0.1 + 0.3*0.3 = 0.72 + 0.02 + 0 + 0.02 + 0.09 = 0.85`
2. suppose the normalized (softmax) weight for “cat” = **0.70**, and the other tokens total 0.30.
3. “it”’s updated representation =
   `0.70 * v_cat + 0.30 * (weighted sum of other tokens’ values)`

so “it”’s final vector is now strongly composed of “cat”’s value vector, making the model understand that **“it” refers to “the cat.”**

---

## 📊 insight
- **queries = what a token seeks**
- **keys = what a token provides**
- **values = the information contributed**
- weighted connections between them create **context-aware representations**.

self-attention is how llms directly model relationships across a sequence — allowing “it” to learn its referent is “the cat.”
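the toy arithmetic in the intuition example can be checked directly. the 0.70 softmax weight in the walkthrough was stipulated rather than computed, so this sketch only verifies the raw similarity score and the weighted-sum step, reusing v_it as a stand-in for the rest of the sequence's values.

```python
import numpy as np

# verify the walkthrough's numbers: q_it · k_cat should come out to 0.85
q_it = np.array([0.9, 0.1, 0.0, 0.2, 0.3])
k_cat = np.array([0.8, 0.2, 0.0, 0.1, 0.3])
v_cat = np.array([0.7, 0.6, 0.2, 0.4, 0.1])
v_rest = np.array([0.2, 0.5, 0.1, 0.0, 0.7])  # v_it, standing in for "other tokens"

score = q_it @ k_cat
print(round(score, 2))  # 0.85

# with the stipulated weights (0.70 cat, 0.30 rest), "it" becomes mostly "cat":
updated_it = 0.70 * v_cat + 0.30 * v_rest
print(updated_it.round(2))
```

in a real layer the 0.70 would come from softmax over scores against every token, but the mechanics — dot products, then a weighted sum of values — are exactly these two lines.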
package/dist/roles/bhrain/briefs/worders/core.transformers.self_attention.[demo].ambig.bank.md
DELETED
@@ -1,80 +0,0 @@
# 🧩 .brief.demo: `multi-head self-attention on ambiguous words (value vectors evolve)`

## .what
this demo shows how two identical tokens — **“bank”** — begin with the same embedding but evolve into **different value vectors** after attention layers, depending on context. multi-head attention disambiguates their meaning dynamically.

---

## 🎯 purpose
- demonstrate that identical word embeddings diverge into context-specific meanings
- show how q, k, v vectors are derived from hidden states
- illustrate how attention causes “bank” (financial vs river) to separate

---

## ⚙️ demo setup

sentence:
> “the **bank** raised interest rates, and she sat by the **bank** of the river.”

toy hidden size = **4**, with **2 heads.**

---

### 🔹 step 1 — input embedding (same for both “bank”)

both tokens start with the **same embedding lookup** from the word embedding matrix:

- embedding(**bank**) = `[0.5, 0.7, 0.2, 0.1]`

this is identical for both occurrences, because embeddings are tied to vocabulary entries, not context.

---

### 🔹 step 2 — hidden states diverge

after one transformer block, the hidden state of each “bank” diverges due to attending to different neighbors:

- hidden(**bank_financial**) = `[0.8, 0.2, 0.6, 0.1]`
- hidden(**bank_river**) = `[0.1, 0.9, 0.2, 0.7]`

---

### 🔹 step 3 — compute q, k, v from hidden states

each hidden state is projected by learned matrices \( W_q, W_k, W_v \).

for simplicity, show only **value (v)** vectors here:

- v(**bank_financial**) = `[0.6, 0.5, 0.7, 0.2]`
- v(**bank_river**) = `[0.2, 0.8, 0.3, 0.6]`

note: originally both banks had the **same v** if taken right after embedding, but after context mixing, they diverge.

---

### 🔹 step 4 — multi-head specialization

- **head 1 (financial context):**
  “bank” attends to “interest rates” → reinforces v(**bank_financial**).

- **head 2 (geographic context):**
  “bank” attends to “river” → reinforces v(**bank_river**).

---

## 🧩 combined effect

- initially, both “bank” tokens share the same vector (embedding).
- after attention, their hidden states — and thus their q/k/v vectors — diverge.
- multi-head attention ensures each occurrence is contextualized differently.

---

## 📊 insight
- **embeddings** are static (same for same word).
- **hidden states** are dynamic (different for each occurrence).
- **q, k, v** are projections of hidden states, so they also differ per occurrence.
- result: llms resolve word sense disambiguation in context by letting identical tokens evolve into different representations.

in short: **same word, same start — different meaning, different vectors.**
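steps 1–2 of the demo can be checked in a few lines: both occurrences start from the demo's shared embedding, and mixing in different neighbors' value vectors produces different hidden states. the neighbor vectors and mixing weights below are illustrative assumptions, not the demo's numbers.

```python
import numpy as np

# same word, same start — different context mix, different vectors
bank = np.array([0.5, 0.7, 0.2, 0.1])     # shared embedding lookup (from the demo)
v_rates = np.array([0.9, 0.1, 0.8, 0.0])  # "interest rates" value (assumed)
v_river = np.array([0.0, 0.9, 0.1, 0.9])  # "river" value (assumed)

# each occurrence keeps some of itself and attends mostly to its own context:
bank_financial = 0.4 * bank + 0.6 * v_rates
bank_river = 0.4 * bank + 0.6 * v_river

print(np.array_equal(bank_financial, bank_river))  # False
```

the divergence is purely a consequence of different attention mixes over the same starting vector — no per-occurrence parameters are involved.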
@@ -1,67 +0,0 @@
|
|
|
1
|
-
# 🧩 .brief.demo: `multi-head self-attention`
|
|
2
|
-
|
|
3
|
-
## .what
|
|
4
|
-
this demo illustrates how **multi-head self-attention** allows a transformer to capture **different types of relationships simultaneously** by projecting the same tokens into multiple attention “heads.” each head learns its own query/key/value space, enabling diverse relational patterns (e.g., pronoun resolution, verb agreement, or topic continuity).
|
|
5
|
-
|
|
6
|
-
---
|
|
7
|
-
|
|
8
|
-
## 🎯 purpose
|
|
9
|
-
- show how multiple attention heads operate in parallel
|
|
10
|
-
- demonstrate how heads specialize in different linguistic/semantic functions
|
|
11
|
-
- highlight how combining heads yields richer token representations
|
|
12
|
-
|
|
13
|
-
---
|
|
14
|
-
|
|
15
|
-
## ⚙️ demo setup
|
|
16
|
-
|
|
17
|
-
sentence:
|
|
18
|
-
> “the cat sat on the mat because it was tired.”
|
|
19
|
-
|
|
20
|
-
toy hidden size = **6**, with **2 heads**, each head having its own projection of q, k, v.
|
|
21
|
-
|
|
22
|
-
---
|
|
23
|
-
|
|
24
|
-
### 🔹 head 1 — pronoun resolution (antecedent tracking)
|
|
25
|
-
|
|
26
|
-
focus: linking **“it”** → **“the cat.”**
|
|
27
|
-
|
|
28
|
-
- **q_it (head 1):** `[0.9, 0.1, 0.0]`
|
|
29
|
-
- **k_cat (head 1):** `[0.8, 0.2, 0.1]`
|
|
30
|
-
- **dot product:** `0.9*0.8 + 0.1*0.2 + 0.0*0.1 = 0.74`
|
|
31
|
-
- **attention weight:** 0.74 → dominates over other tokens
|
|
32
|
-
- **result:** “it” attends strongly to “cat,” resolving the pronoun.

---

### 🔹 head 2 — verb agreement (syntactic continuity)

focus: linking **subject** (“the cat”) → **verb** (“sat”).

- **q_cat (head 2):** `[0.2, 0.7, 0.3]`
- **k_sat (head 2):** `[0.1, 0.9, 0.2]`
- **dot product:** `0.2*0.1 + 0.7*0.9 + 0.3*0.2 = 0.02 + 0.63 + 0.06 = 0.71`
- **attention weight:** 0.71 is the strongest raw score for “cat,” so “sat” receives most of its attention after softmax
- **result:** “cat” attends to “sat,” enforcing the subject-verb connection.

---

## 🧩 combined effect

after attention, outputs from all heads are concatenated and linearly transformed:

\[
\text{MultiHead}(Q,K,V) = \text{Concat}(\text{head}_1, \text{head}_2, \dots) W^O
\]

- **head 1 output:** captures **semantic resolution** (it ↔ cat).
- **head 2 output:** captures **syntactic relation** (cat ↔ sat).
- combined, the model encodes **both** types of context simultaneously.
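the concat-and-project step can be sketched with the toy sizes above (2 heads of dimension 3, model dimension 6). every number here is a hypothetical illustration; the identity projection is chosen only to keep the result readable.

```python
# each head emits a 3-dim output for a token; concatenating 2 heads
# restores the model dimension of 6, then W^O mixes them together.
# all numbers below are hypothetical illustrations.

head_1_out = [0.7, 0.1, 0.2]   # e.g. encodes "it ↔ cat"
head_2_out = [0.1, 0.8, 0.1]   # e.g. encodes "cat ↔ sat"

concat = head_1_out + head_2_out  # length 6

# hypothetical 6x6 output projection W^O (identity, for readability)
W_O = [[1.0 if i == j else 0.0 for j in range(6)] for i in range(6)]

# row-vector times matrix: out[j] = sum_i concat[i] * W_O[i][j]
out = [sum(concat[i] * W_O[i][j] for i in range(6)) for j in range(6)]

print(out)  # both heads' signals live side by side in one 6-dim vector
```

in a real model W^O is learned, so the per-head signals are blended rather than kept in separate slots; the identity here just makes the concatenation visible.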

---

## 📊 insight
- each head = a separate lens on token relationships.
- heads specialize (some semantic, some syntactic, some positional).
- combining them creates **richer, multifaceted token embeddings.**

in short: **multi-head attention = parallel perspectives on the same sequence.**
@@ -1,48 +0,0 @@
# 🧩 .brief: `llm replication of input structures`

## 🔑 what it implies

llms are trained to predict the **next token** given a sequence, so they are highly sensitive to **patterns in the immediate input context**. when a prompt contains an example with a certain structure (formatting, headings, bullet styles, code blocks), the model learns that continuing with the same structure minimizes prediction error.

this replication is not true “understanding” of the structure — it is a probabilistic continuation shaped by training data, where mimicking provided forms was often the correct next step.

---

## 🎯 implications for behavior

- **structural mimicry**
  - the model mirrors bullet lists, markdown, code syntax, or prose style seen in the input.
- **consistency bias**
  - once a format is established in the prompt, deviating feels “unlikely” under the learned distribution.
- **few-shot learning**
  - demonstration examples act as templates; the model generalizes content into the same frame.
- **alignment with expectation**
  - replication maximizes coherence with the input and aligns with user intent implicitly signaled by structure.

---

## 🧩 example

> input:
>
> “list three animals in this format:
> - mammal:
> - bird:
> - reptile:”

> output:
> - mammal: dog
> - bird: eagle
> - reptile: lizard

the model doesn’t reason about taxonomy per se — it reproduces the structure because that’s the **most probable continuation**.
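the claim that structure itself is a strong predictive signal can be made concrete with a toy stand-in (not a real llm): a bigram count model trained on a few structured lines already prefers to continue the established list format. the corpus and tokenization here are hypothetical.

```python
from collections import Counter

# a tiny structured "training corpus" (hypothetical)
corpus = """- mammal: dog
- bird: eagle
- reptile: lizard
- mammal: cat
- bird: crow
- reptile: snake"""

# whitespace-token bigram counts
tokens = corpus.split()
bigrams = Counter(zip(tokens, tokens[1:]))

def most_probable_next(prev):
    # argmax over observed continuations of `prev`
    candidates = {b: c for (a, b), c in bigrams.items() if a == prev}
    return max(candidates, key=candidates.get)

# after any list item's content, the most probable next token is "-":
# the list structure itself dominates the continuation
print(most_probable_next("dog"))  # → -
```

even this trivial model "copies the format", because in its training data the bullet marker is the statistically dominant continuation, which is the same pressure that shapes an llm's output.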

---

## 📊 insight

- llms replicate structures because **continuation is their core mechanic**.
- training on diverse, structured text (tables, lists, markdown, code) reinforced the habit of format preservation.
- this property is what enables **few-shot prompting** and makes llms easy to steer by example.

in short: **llms copy input structures because structure itself is a strong predictive signal for next-token generation.**
@@ -1,37 +0,0 @@
# 🧩 .brief: `externalized knowledge requires internalized scaffolding`

---

## 🧠 what this concept means
- **externalized knowledge** (facts in a book, formulas on a sheet, data in a database) is inert on its own.
- for it to become usable, the system — human or llm — must have **internalized scaffolding**: conceptual frameworks, interpretive skills, and procedural fluency that make sense of the raw reference.
- without scaffolding, externalized knowledge is like symbols without meaning.

---

## 🎨 examples across domains
- **physics**: a formula table is only useful if the student has internalized what variables represent and when equations apply.
- **chemistry**: a reaction handbook is only actionable if the chemist has internalized fundamentals like valence, polarity, and stability.
- **language**: a dictionary provides definitions, but only someone with internalized grammar and semantics can apply them in real communication.
- **llms**: retrieved passages (RAG) are useful only because the model has internalized language patterns that allow it to interpret, paraphrase, and apply them.

---

## ⚙️ mechanics of the relationship
- **internalized scaffolding**: generalizable, embodied patterns (styles, rules, frameworks).
- **externalized reference**: precise, factual, bounded records.
- **interaction**: scaffolding enables comprehension and application; reference extends range and accuracy.

---

## 🔑 takeaways
- externalized knowledge alone = static, inert.
- internalized knowledge alone = fluent, adaptive, but fallible.
- **operability emerges only from their combination**: scaffolding activates references, references ground scaffolding.

---

## 📌 intuition anchor
a formula sheet without understanding is **dead weight**.
understanding without a sheet risks **slips and gaps**.
together, they form a **living system of knowledge** — internalized scaffolding breathing life into externalized records.
@@ -1,30 +0,0 @@
# 🧩 .brief.reflect: `interdependence of internalized and externalized knowledge`

---

## ⚛️ physics domain insight
- a **formula table** provides equations (externalized knowledge), but:
  - without knowing what “\(F\)” or “\(a\)” represent, the student cannot meaningfully use \(F=ma\).
  - they must already have **internalized fundamentals**: what force, mass, and acceleration *mean*, how they relate, and when the equation applies.
  - only with that internalized base does the externalized table become **operable knowledge**.

---

## 🧠 mapping back to llms
- **retrieval (externalized)**: gives the llm a fact, equation, or passage.
- **weights (internalized)**: provide the interpretive machinery — language parsing, reasoning steps, contextual application.
- **together**: the llm can both **understand** what the retrieved info means and **apply** it productively in conversation.

---

## 🔑 reflection takeaways
- **internalized is prerequisite**: externalized knowledge is inert without an internalized framework to activate it.
- **externalized augments**: once the fundamentals are in place, external sources extend the system’s reach into precision and breadth.
- **true power lies in combination**: fluent intuition (internalized) guided by explicit reference (externalized).

---

## 📌 intuition anchor
a student with only a **formula sheet** but no understanding is stuck.
a student with only **intuition** risks errors.
but a student with **both** can solve a wide range of problems with accuracy and adaptability.
@@ -1,44 +0,0 @@
# 🧩 .brief: `internalized vs externalized knowledge`

---

## 🧠 what the contrast shows
knowledge can be understood in two fundamentally different forms:
- **internalized knowledge**: absorbed into the structure of a system (weights, habits, style)
- **externalized knowledge**: stored outside the system in explicit, retrievable records (databases, documents, libraries)

this contrast explains both the strengths and weaknesses of llms when relying solely on their weights versus when augmented with retrieval.

---

## 🎨 illustration frame
- **artist (internalized)**: paints in impressionist style without recalling a specific work. their training flows into expressive creation.
- **librarian (externalized)**: fetches the exact book that contains the answer. precision is guaranteed, but no new creation occurs.

---

## ⚙️ mapping to llms
- **internalized knowledge in weights**
  - emerges through gradient descent across massive training corpora
  - encoded as *distributed statistical patterns*, not explicit entries
  - enables **generalization** and **creative recombination** of concepts
  - limited by **fuzziness** and potential for **hallucination**

- **externalized knowledge in retrieval (rag, databases, tools)**
  - stored explicitly, outside the model
  - enables **accuracy** and **verifiability**
  - limited by what is recorded and retrievable, no inherent *style* or *fluency*
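the weights/retrieval split can be sketched as two cooperating functions. everything here is hypothetical: `CORPUS` stands in for an external store, and `generate` is a stand-in for a model's internalized fluency, not a real llm call.

```python
# hypothetical external store: explicit, exact, bounded records
CORPUS = {
    "newton second law": "F = ma relates force, mass, and acceleration.",
    "ohms law": "V = IR relates voltage, current, and resistance.",
}

def retrieve(query: str) -> str:
    """externalized knowledge: exact lookup by keyword overlap."""
    def overlap(key: str) -> int:
        return len(set(query.lower().split()) & set(key.split()))
    best = max(CORPUS, key=overlap)
    return CORPUS[best]

def generate(question: str, passage: str) -> str:
    """internalized-scaffolding stand-in: weaves the retrieved fact
    into a fluent answer (a real llm would paraphrase and reason)."""
    return f"regarding '{question}': {passage}"

answer = generate("what is newton's second law?",
                  retrieve("newton second law"))
print(answer)
```

neither half suffices alone: `retrieve` returns an inert record, and `generate` has nothing grounded to say without it, which is the interaction the bullets above describe.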

---

## 🔑 contrast takeaways
- **internalized = embodied fluency**: adaptive, generative, pattern-driven
- **externalized = explicit record**: exact, grounded, reference-driven
- **hybrid strength**: combining both yields systems that can *express fluidly* while also *grounding in fact*.

---

## 📌 intuition anchor
an llm alone is like a **painter**: it embodies and expresses style.
with retrieval, it gains a **librarian**: grounding its creativity in precise references.
together, they demonstrate the full spectrum of knowledge handling.