arscontexta 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0
---
description: Student learning knowledge system — inspirational composition showing derived architecture for prerequisite graph construction, mastery tracking, and spaced retrieval scheduling
kind: example
domain: student-learning
topics: ["[[domain-compositions]]"]
---

# student learning uses prerequisite graphs with spaced retrieval

An inspirational composition showing what an agent-operated learning system looks like when derived from first principles. The killer feature is the prerequisite graph: every concept links to the concepts it depends on and the concepts that depend on it, and the agent detects knowledge gaps before they cascade into downstream confusion. This is not a template to copy but a worked example demonstrating how the 8 configuration dimensions compose into a system that turns passive note-taking into active learning architecture.

## Persona

**Priya, 22, third-year computer science student at a large research university.** She is taking four courses this semester: Operating Systems, Machine Learning, Discrete Mathematics II, and a humanities elective (Philosophy of Mind). She is strong in math but struggling with OS — the systems programming concepts feel disconnected from her programming experience, and she cannot tell whether she is struggling because the material is genuinely hard or because she is missing prerequisite knowledge from courses she passed without deeply understanding.

Priya's current study system is a mess. She has lecture notes in OneNote organized by date, a few Anki decks she made for Discrete Math I (abandoned after week 6), bookmarked Stack Overflow answers, and a folder of past exams she reviews the night before. She re-reads notes before exams, feels confident, then blanks on retrieval questions. Her grades do not reflect her effort because her study methods are high-effort, low-retention: re-reading creates an illusion of knowledge that active recall would expose.

Her agent's name is Sage. Sage maintains the concept graph across all four courses, tracks which concepts Priya actually understands (tested through retrieval, not self-report), and detects prerequisite gaps before they cascade. When Priya struggles with virtual memory in OS, Sage traces the prerequisite chain and discovers that Priya's understanding of memory addressing from Computer Architecture is shaky — a gap that has been silently compounding for two semesters.

Priya talks to Sage daily for 15-30 minutes (post-lecture processing and study sessions). Sage manages the review schedule, generates practice problems, and prepares study guides.
## Configuration

The 8 dimensions as derived for student learning:

| Dimension | Position | Rationale |
|-----------|----------|-----------|
| **Granularity** | Fine-grained — one note per concept | The prerequisite graph requires atomic concepts. "Virtual memory" must be its own note so that its prerequisites (memory addressing, page tables, address translation) can link to it individually. A compound "OS Chapter 7" note cannot participate in a prerequisite graph because it bundles concepts with different dependency structures. |
| **Organization** | Flat with course-based and concept-based MOCs | All concepts live in `notes/` regardless of course. A concept like "graph theory" appears in both Discrete Math and ML. Flat organization enables cross-course connections that course-folder silos would prevent. MOCs organize by course (for lecture tracking) and by concept domain (for understanding topology). |
| **Linking** | Dense explicit with prerequisite typing | Every concept links to its prerequisites with `prerequisite` relationship type and to downstream concepts with `enables` relationship type. Cross-course links are especially valuable: when Probability Theory appears in both Discrete Math and ML, the link reveals that mastering it once serves two courses. |
| **Metadata** | Medium-dense — mastery tracking and retrieval scheduling | Concepts need `mastery_level`, `last_tested`, `prerequisites`, `retrieval_strength`. Dense enough for programmatic gap detection and spaced repetition scheduling but not so dense that creating a concept note after lecture feels like paperwork. |
| **Processing** | Medium — lecture capture, concept extraction, prerequisite linking, retrieval testing | Content enters as lecture notes and gets processed into concept notes. Processing involves: identify distinct concepts, check for existing concept notes (avoid duplication), link prerequisites, generate retrieval questions. Heavier than personal assistant routing but lighter than research extraction because the concepts are largely defined by the course material. |
| **Formalization** | Convention-first with gradual automation | Start with written conventions for concept note creation and prerequisite linking. Automate retrieval scheduling (spaced repetition engine) and gap detection (prerequisite chain validation) as the graph grows. Schema validation for concept notes starts after the first month, once Priya's note-taking patterns stabilize. |
| **Review** | Dual cadence — daily retrieval practice and weekly concept review | Daily: spaced repetition review of due concepts (5-15 minutes). Weekly: assess progress across courses, identify stalled concepts, plan study priorities. Pre-exam: targeted review based on prerequisite completeness for exam topics. The agent manages the daily schedule; the weekly review is collaborative. |
| **Scope** | Multi-course, single semester with carryover | All four courses in one system. Concepts from previous semesters (prerequisites) are included when referenced but not exhaustively backfilled. The graph grows forward from this semester, pulling in prior knowledge only when gaps are detected. |
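The Linking and Formalization rows describe typed prerequisite links and automated gap detection. A minimal sketch of what that detection could look like, assuming the graph and mastery levels are read out of concept-note frontmatter (the data below mirrors the Priya/OS example in this document; the function itself is illustrative, not part of the package):

```python
# Illustrative sketch of prerequisite-gap detection over typed links.
# In a real vault, PREREQS and MASTERY would be parsed from frontmatter.
PREREQS = {
    "virtual-memory": ["memory-addressing", "page-tables", "address-translation"],
    "page-tables": ["memory-addressing"],
    "memory-addressing": ["binary-number-representation"],
}
MASTERY = {
    "virtual-memory": "familiar",
    "page-tables": "familiar",
    "address-translation": "practiced",
    "memory-addressing": "exposed",
    "binary-number-representation": "mastered",
}
WEAK = {"unaware", "exposed"}

def root_gaps(concept: str) -> list[str]:
    """Walk the prerequisite chain of a struggling concept and return the
    deepest weak ancestors -- the gaps worth fixing first."""
    gaps: list[str] = []
    for prereq in PREREQS.get(concept, []):
        deeper = root_gaps(prereq)
        if deeper:
            gaps.extend(deeper)           # blame the deeper gap, not this node
        elif MASTERY.get(prereq) in WEAK:
            gaps.append(prereq)
    return list(dict.fromkeys(gaps))      # dedupe, preserve discovery order

print(root_gaps("virtual-memory"))  # ['memory-addressing']
```

The recursion is what makes the diagnosis useful: when virtual memory is weak, the blame lands on memory addressing two levels down rather than on the symptom Priya noticed.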
## Vault Structure

```
vault/
├── self/
│   ├── identity.md               # Sage's role and Priya's learning patterns
│   ├── study-patterns.md         # When Priya learns best, energy patterns
│   └── memory/
│       ├── gap-history.md        # Prerequisite gaps detected and resolved
│       └── exam-postmortems.md   # What exams revealed about understanding
├── notes/                        # All concept notes (flat)
│   ├── index.md                  # Hub: all courses and concept domains
│   ├── operating-systems.md      # Course MOC
│   ├── machine-learning.md       # Course MOC
│   ├── discrete-math-ii.md       # Course MOC
│   ├── philosophy-of-mind.md     # Course MOC
│   ├── computation.md            # Concept domain MOC (cross-course)
│   ├── probability.md            # Concept domain MOC (cross-course)
│   ├── virtual-memory.md         # Concept note
│   ├── page-tables.md            # Concept note
│   ├── memory-addressing.md      # Concept note
│   ├── process-scheduling.md     # Concept note
│   ├── gradient-descent.md       # Concept note
│   ├── backpropagation.md        # Concept note
│   ├── bayes-theorem.md          # Concept note
│   ├── graph-coloring.md         # Concept note
│   ├── chinese-room-argument.md  # Concept note
│   ├── lecture-2026-02-10-os.md  # Lecture note
│   ├── lecture-2026-02-10-ml.md  # Lecture note
│   ├── problem-set-3-os.md       # Practice problems
│   ├── midterm-1-os-prep.md      # Study guide
│   └── ...
├── inbox/                        # Quick captures during lectures
│   └── ...
├── archive/                      # Past semester concept notes (still linkable)
│   └── ...
└── ops/
    ├── templates/
    │   ├── concept.md
    │   ├── lecture.md
    │   ├── practice-problem.md
    │   ├── study-guide.md
    │   └── exam-postmortem.md
    ├── derivation.md
    ├── retrieval-schedule.md     # Spaced repetition queue
    └── health/
        ├── prerequisite-gaps.md  # Detected gaps in the dependency chain
        └── mastery-dashboard.md  # Per-course mastery overview
```
## Note Schemas

### Concept

```yaml
---
description: A discrete idea or principle that can be understood, tested, and connected to other concepts
type: concept
domain: systems | ml | math | philosophy | programming
courses: ["[[operating-systems]]"]
mastery_level: unaware | exposed | familiar | practiced | mastered
last_tested: 2026-02-08
retrieval_strength: strong | moderate | weak | untested
prerequisites: ["[[memory-addressing]]", "[[page-tables]]"]
enables: ["[[demand-paging]]", "[[memory-mapped-io]]"]
retrieval_questions:
  - "Explain how virtual memory decouples logical from physical address space"
  - "What happens during a page fault? Walk through each step"
  - "Why does virtual memory require hardware support?"
topics: ["[[operating-systems]]", "[[computation]]"]
relevant_notes: ["[[memory-addressing]] -- prerequisite: must understand physical vs logical addresses first", "[[page-tables]] -- prerequisite: the data structure that maps virtual to physical", "[[process-scheduling]] -- related: virtual memory enables multiprogramming which scheduling manages"]
---
```
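The `mastery_level`, `last_tested`, and `retrieval_strength` fields are what make this schema programmable. A minimal sketch of how a retrieval scheduler could consume them (the interval table is an assumption — the composition names a spaced repetition engine but does not pin down its algorithm):

```python
from datetime import date, timedelta

# Hypothetical review intervals in days, keyed by retrieval_strength.
# A simple lookup table stands in for a real spacing algorithm.
INTERVALS = {"untested": 0, "weak": 1, "moderate": 4, "strong": 10}
PRIORITY = {"untested": 0, "weak": 1, "moderate": 2, "strong": 3}

def next_review(last_tested: date, strength: str) -> date:
    """Due date for one concept, computed from its frontmatter fields."""
    return last_tested + timedelta(days=INTERVALS[strength])

def due_concepts(concepts: list[dict], today: date) -> list[str]:
    """Names of concepts due for retrieval practice, weakest first."""
    due = [c for c in concepts
           if next_review(c["last_tested"], c["retrieval_strength"]) <= today]
    return [c["name"]
            for c in sorted(due, key=lambda c: PRIORITY[c["retrieval_strength"]])]

concepts = [
    {"name": "virtual-memory", "last_tested": date(2026, 2, 8), "retrieval_strength": "weak"},
    {"name": "system-calls", "last_tested": date(2026, 2, 8), "retrieval_strength": "strong"},
]
print(due_concepts(concepts, date(2026, 2, 10)))  # ['virtual-memory']
```

Sorting weakest-first is what turns the queue into the 5-15 minute daily review the Review dimension describes: weak concepts surface daily, strong ones drop back to roughly weekly.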
### Lecture

```yaml
---
description: Capture from a specific class session with key concepts and questions raised
type: lecture
course: "[[operating-systems]]"
date: 2026-02-10
instructor: Prof. Nakamura
key_concepts: ["[[virtual-memory]]", "[[page-tables]]", "[[page-fault-handling]]"]
questions_raised:
  - "How does the TLB interact with context switches?"
  - "Why is the page size usually 4KB and not larger?"
examples_given:
  - "Worked through address translation with a two-level page table"
topics: ["[[operating-systems]]"]
---
```
### Practice Problem

```yaml
---
description: An exercise testing specific concepts, with approach documentation and error analysis
type: practice-problem
concepts_tested: ["[[virtual-memory]]", "[[page-tables]]"]
course: "[[operating-systems]]"
source: Problem Set 3, Question 4
difficulty: medium
attempted: true
correct: false
time_taken: 25 minutes
solution_approach: Tried to compute page table size but confused virtual and physical address bits
errors_made: ["Confused virtual address space size with physical memory size", "Did not account for multi-level page table overhead"]
concepts_revealed_weak: ["[[memory-addressing]]"]
topics: ["[[operating-systems]]"]
---
```
### Study Guide

```yaml
---
description: Exam preparation document mapping exam topics to concept mastery status and study priorities
type: study-guide
course: "[[operating-systems]]"
exam: Midterm 1
date: 2026-02-28
topics_covered: ["[[process-scheduling]]", "[[virtual-memory]]", "[[page-tables]]", "[[memory-addressing]]", "[[system-calls]]", "[[concurrency-basics]]"]
mastery_summary:
  strong: ["[[system-calls]]", "[[process-scheduling]]"]
  moderate: ["[[concurrency-basics]]"]
  weak: ["[[virtual-memory]]", "[[page-tables]]"]
  gap: ["[[memory-addressing]]"]
study_priority: ["[[memory-addressing]]", "[[page-tables]]", "[[virtual-memory]]", "[[concurrency-basics]]"]
topics: ["[[operating-systems]]"]
---
```
### Exam Postmortem

```yaml
---
description: Post-exam analysis identifying what the exam revealed about actual understanding
type: exam-postmortem
course: "[[operating-systems]]"
exam: Midterm 1
date: 2026-02-28
grade: B-
expected_grade: B+
surprised_by:
  - "Could not explain TLB miss handling under time pressure — knew the concept but retrieval was slow"
  - "Multi-level page table question required combining three concepts simultaneously"
concepts_overestimated: ["[[virtual-memory]]"]
concepts_underestimated: ["[[system-calls]]"]
study_method_insights:
  - "Re-reading notes on virtual memory created false confidence — could recognize but not produce"
  - "Practice problems on system calls forced active recall, which held up under exam pressure"
topics: ["[[operating-systems]]"]
---
```
## Example Notes

### Concept: Virtual Memory

```markdown
---
description: Abstraction that decouples programs from physical memory layout by mapping virtual addresses to physical frames through page tables
type: concept
domain: systems
courses: ["[[operating-systems]]"]
mastery_level: familiar
last_tested: 2026-02-08
retrieval_strength: weak
prerequisites: ["[[memory-addressing]]", "[[page-tables]]", "[[address-translation]]"]
enables: ["[[demand-paging]]", "[[memory-mapped-io]]", "[[copy-on-write]]", "[[shared-memory]]"]
retrieval_questions:
  - "Explain how virtual memory decouples logical from physical address space"
  - "What happens during a page fault? Walk through each step"
  - "Why does virtual memory require hardware support (MMU)?"
  - "How does virtual memory enable multiprogramming?"
topics: ["[[operating-systems]]", "[[computation]]"]
relevant_notes: ["[[memory-addressing]] -- prerequisite: must understand physical vs logical addresses first", "[[page-tables]] -- prerequisite: the data structure that enables the mapping", "[[address-translation]] -- prerequisite: the mechanical process of converting addresses", "[[process-scheduling]] -- related: virtual memory enables the multiprogramming that scheduling manages", "[[demand-paging]] -- downstream: virtual memory makes demand paging possible"]
---

# virtual memory

Virtual memory is the abstraction that lets every process believe it has its own
contiguous address space starting from zero, regardless of how physical memory
is actually organized. The operating system and hardware (MMU) collaborate to
translate virtual addresses to physical addresses transparently.

## Why It Exists
Without virtual memory, every process would need to know where in physical memory
it was loaded. Programs would need to be relocated at load time, could not be
larger than physical memory, and could not share memory safely. Virtual memory
solves all three problems by adding a layer of indirection: the process uses
virtual addresses, the MMU translates them, and the OS manages the mapping.

## How It Works (Simplified)
1. Process generates a virtual address
2. MMU splits the address into page number + offset
3. Page number indexes into the [[page-tables]] to find the physical frame number
4. Frame number + offset = physical address
5. If the page is not in memory (page fault), the OS loads it from disk

This depends on understanding [[memory-addressing]] — specifically the difference
between an address space (the range of possible addresses) and physical memory
(the actual RAM installed). Virtual address space can be larger than physical
memory because not all pages need to be resident simultaneously.

## What I Struggle With
The multi-level page table structure. I understand why single-level page tables
waste memory (most of the virtual address space is unused, but the table must
have entries for all of it). I can describe two-level page tables in theory.
But when I try to compute page table sizes or walk through address translation
with specific bit counts, I get confused about which bits go where.

Sage's diagnosis: this struggle traces back to [[memory-addressing]], specifically
the relationship between address width (in bits), address space size (in bytes),
and page size. Resolving that prerequisite gap should make multi-level page
tables more tractable.

## Cross-Course Connection
The concept of indirection — adding a layer between what something appears to
be and what it actually is — appears in [[machine-learning]] as well. Feature
embeddings create a "virtual" representation space that decouples the learning
algorithm from the raw input format. This is not a deep analogy, but it points
at indirection as a recurring systems design tool.

---

Relevant Notes:
- [[memory-addressing]] -- prerequisite, gap detected: address width vs space size confusion
- [[page-tables]] -- prerequisite, the mechanism that makes this work
- [[process-scheduling]] -- virtual memory enables the multiprogramming context
- [[demand-paging]] -- the downstream concept this enables
```
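The five translation steps in the note can be made concrete with numbers. This sketch uses 4KB pages (so a 12-bit offset); the page-table contents are invented purely for illustration:

```python
PAGE_SIZE = 4096           # 4KB pages -> 12-bit offset
OFFSET_BITS = 12

# Tiny invented page table: virtual page number -> physical frame number.
# Real tables are indexed arrays; a dict keeps the sketch readable.
PAGE_TABLE = {0: 7, 1: 3, 2: 12}

def translate(vaddr: int) -> int:
    """Steps 2-4 of the note: split, look up, recombine."""
    vpn = vaddr >> OFFSET_BITS              # step 2: page number...
    offset = vaddr & (PAGE_SIZE - 1)        # ...and offset
    if vpn not in PAGE_TABLE:               # step 5: page fault, OS would load it
        raise LookupError(f"page fault on VPN {vpn}")
    frame = PAGE_TABLE[vpn]                 # step 3: page table lookup
    return (frame << OFFSET_BITS) | offset  # step 4: physical address

print(hex(translate(0x1234)))  # VPN 1 -> frame 3 -> 0x3234
```

Note that the offset passes through untouched — only the page number is remapped, which is exactly what "decouples logical from physical address space" means mechanically.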
### Concept: Memory Addressing (Prerequisite Gap)

```markdown
---
description: The relationship between address width in bits, addressable space in bytes, and physical memory size — the foundation that virtual memory builds on
type: concept
domain: systems
courses: ["[[operating-systems]]"]
mastery_level: exposed
last_tested: 2026-02-10
retrieval_strength: weak
prerequisites: ["[[binary-number-representation]]"]
enables: ["[[virtual-memory]]", "[[page-tables]]", "[[cache-organization]]"]
retrieval_questions:
  - "If a system has 32-bit addresses, how large is the addressable space?"
  - "A system has 16GB of physical RAM. How many bits does the physical address need?"
  - "Why can the virtual address space be larger than physical memory?"
  - "What is the difference between addressable space and installed memory?"
gap_detected: 2026-02-08
gap_source: "[[problem-set-3-os]]"
topics: ["[[operating-systems]]", "[[computation]]"]
relevant_notes: ["[[binary-number-representation]] -- prerequisite: must convert between bits and sizes", "[[virtual-memory]] -- downstream: this gap is blocking understanding of VM", "[[page-tables]] -- downstream: page table size calculations require address arithmetic"]
---

# memory addressing

Memory addressing is the relationship between three quantities that students
(including me) frequently conflate:

1. **Address width** — how many bits in an address (e.g., 32 bits, 64 bits)
2. **Addressable space** — how many unique locations can be named = 2^(address width)
3. **Physical memory** — how much RAM is actually installed (may be less than addressable space)

## Why This Matters
Every calculation in virtual memory depends on keeping these straight. When
Prof. Nakamura asks "how many entries does the page table need?" the answer
depends on the size of the VIRTUAL address space (determined by virtual address
width), not the physical memory. I keep mixing these up.

## The Gap
Sage flagged this as a prerequisite gap on Feb 8 after I got Problem Set 3,
Question 4 wrong. My error: I used physical memory size (4GB) to compute
page table entries when I should have used virtual address space size (2^32 for
a 32-bit system). The numbers happened to be the same in that problem (4GB
physical, 32-bit virtual = 4GB virtual), which masked my confusion. But in
the next problem where physical was 2GB and virtual was 32-bit, I got it wrong.

## Working Through It
- 32-bit address means 2^32 = 4,294,967,296 addressable bytes = 4 GB
- 64-bit address means 2^64 = 18.4 exabytes (far more than any physical RAM)
- A system with 16GB RAM needs at least 34-bit physical addresses (2^34 = 16GB)
- BUT the virtual address space can be much larger than physical RAM — that is
  the whole point of [[virtual-memory]]

## Practice Drills Sage Generated
1. 48-bit virtual address, 4KB pages. How many page table entries needed? (2^48 / 2^12 = 2^36)
2. 32-bit virtual, 16GB physical, 4KB pages. How many physical frames? (16GB / 4KB = 4M = 2^22)
3. Why is the number of page table entries determined by virtual, not physical, address space?

## Mastery Status
Exposed but not practiced. I can follow the explanation when reading it, but
under time pressure (exam conditions), I revert to the conflation error.
Need spaced practice with varied numbers until the distinction is automatic.

---

Relevant Notes:
- [[virtual-memory]] -- this gap blocks understanding of VM page table sizing
- [[binary-number-representation]] -- the prerequisite beneath this prerequisite
- [[page-tables]] -- computing page table size requires this addressing knowledge
```
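The arithmetic in the note's drills is mechanical enough to check directly, using only the numbers the note itself gives:

```python
# Checking the memory-addressing drills with bit arithmetic.
GB = 2 ** 30
KB = 2 ** 10

# Drill 1: 48-bit virtual address, 4KB pages -> number of page table entries
assert 2 ** 48 // (4 * KB) == 2 ** 36

# Drill 2: 16GB physical memory, 4KB pages -> number of physical frames
assert 16 * GB // (4 * KB) == 2 ** 22      # 4M frames

# The bullet-point facts: 32-bit addresses name exactly 4GB...
assert 2 ** 32 == 4 * GB == 4_294_967_296
# ...and 16GB of RAM needs at least 34 physical address bits
assert 2 ** 34 == 16 * GB

print("all addressing drills check out")
```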
### Lecture: Operating Systems, February 10

```markdown
---
description: Lecture on virtual memory implementation covering address translation, TLB, and multi-level page tables
type: lecture
course: "[[operating-systems]]"
date: 2026-02-10
instructor: Prof. Nakamura
key_concepts: ["[[virtual-memory]]", "[[page-tables]]", "[[address-translation]]", "[[tlb]]"]
questions_raised:
  - "How does the TLB interact with context switches? Does it flush?"
  - "Why is the page size usually 4KB and not larger or smaller?"
  - "What happens when the page table itself does not fit in memory?"
examples_given:
  - "Worked through address translation with a two-level page table on a 32-bit system"
  - "Showed TLB hit/miss timing comparison: 1ns TLB hit vs 100ns page table walk"
topics: ["[[operating-systems]]"]
relevant_notes: ["[[virtual-memory]] -- central concept of today's lecture", "[[memory-addressing]] -- my gap showed during the worked example: I confused address width with memory size again"]
---

# lecture 2026-02-10 — operating systems

Prof. Nakamura covered virtual memory implementation. Key takeaway: virtual
memory is not just about having more memory than physically installed —
it is about isolation, protection, and shared memory.

## Concepts Introduced or Deepened
- [[virtual-memory]] — deepened: how the abstraction actually works at hardware level
- [[address-translation]] — new: the step-by-step process the MMU follows
- [[tlb]] — new: the cache that makes address translation fast enough to be practical
- Multi-level page tables — deepened: why single-level wastes space, how two-level solves it

## Key Insight
The TLB is what makes virtual memory practical. Without it, every memory access
requires an additional memory access (to read the page table), which would
halve performance. The TLB converts the common case from 2 memory accesses
to 1 + tiny TLB lookup. This is an instance of a general systems principle:
expensive indirection becomes viable when the common case is cached.

## Questions to Follow Up
- TLB and context switches: Prof mentioned "TLB flush" — need to understand
  what this means for performance of context-heavy workloads
- Page size trade-offs: larger pages = fewer table entries but more internal
  fragmentation. Is there a formula for optimal page size?

## Gap Observation
During the worked example, I again stumbled on address width vs addressable
space. When asked "how many entries in the first-level page table?" I tried
to compute from physical memory size. The student next to me got it instantly
by dividing virtual address space by (page size * entries per second-level table).
My [[memory-addressing]] gap is directly blocking my ability to follow these
examples in real time.

---

Relevant Notes:
- [[virtual-memory]] -- today's focus concept
- [[memory-addressing]] -- gap surfaced again during worked example
- [[page-tables]] -- the mechanism discussed in depth today
```
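The lecture's "Key Insight" can be put in numbers with a standard effective-access-time calculation. The per-access costs (1ns TLB lookup, 100ns page table walk) come from the lecture note; the 99% hit rate and the exact cost model are assumptions, since the note gives only the two timings:

```python
# Effective access time with a TLB in front of the page table.
TLB_NS = 1      # TLB lookup, paid on every access (from the lecture)
WALK_NS = 100   # full page-table walk, paid only on a TLB miss (from the lecture)
MEM_NS = 100    # the data access itself (assumed, same order as the walk)

def effective_access(hit_rate: float) -> float:
    """Average cost of one memory reference under this simple model."""
    return TLB_NS + MEM_NS + (1 - hit_rate) * WALK_NS

print(effective_access(0.99))  # just over 102ns, vs 200ns with no TLB at all
```

At a 99% hit rate the indirection costs about 2ns per reference instead of doubling it — the lecture's "expensive indirection becomes viable when the common case is cached," in numbers.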
|
|
404
|
+
|
|
405
|
+
### Practice Problem: Page Table Sizing (Revealing Weakness)
|
|
406
|
+
|
|
407
|
+
```markdown
|
|
408
|
+
---
|
|
409
|
+
description: Problem Set 3 Question 4 — page table size calculation that exposed the memory addressing prerequisite gap
|
|
410
|
+
type: practice-problem
|
|
411
|
+
concepts_tested: ["[[virtual-memory]]", "[[page-tables]]", "[[memory-addressing]]"]
|
|
412
|
+
course: "[[operating-systems]]"
|
|
413
|
+
source: Problem Set 3, Question 4
|
|
414
|
+
difficulty: medium
|
|
415
|
+
attempted: true
|
|
416
|
+
correct: false
|
|
417
|
+
time_taken: 25 minutes
|
|
418
|
+
solution_approach: Tried to compute page table entries using physical memory size instead of virtual address space size
|
|
419
|
+
errors_made:
|
|
420
|
+
- "Used physical memory (4GB) to determine number of page table entries"
|
|
421
|
+
- "Conflated 'addresses the system can name' with 'memory the system has installed'"
|
|
422
|
+
- "Got the right answer by accident because 32-bit virtual = 4GB = physical size in this problem"
|
|
423
|
+
concepts_revealed_weak: ["[[memory-addressing]]"]
|
|
424
|
+
topics: ["[[operating-systems]]"]
|
|
425
|
+
relevant_notes: ["[[memory-addressing]] -- this problem exposed the gap", "[[virtual-memory]] -- the concept being tested"]
|
|
426
|
+
---
|
|
427
|
+
|
|
428
|
+
# problem set 3 — question 4
|
|
429
|
+
|
|
430
|
+
**Problem:** A system has 32-bit virtual addresses, 4GB physical memory, and
|
|
431
|
+
4KB pages. Calculate: (a) number of page table entries, (b) page table size
|
|
432
|
+
if each entry is 4 bytes, (c) why is this problematic?
|
|
433
|
+
|
|
434
|
+
## My Attempt
|
|
435
|
+
(a) Physical memory / page size = 4GB / 4KB = 1M entries
|
|
436
|
+
|
|
437
|
+
**Wrong reasoning.** The number of page table entries is determined by the
|
|
438
|
+
virtual address space, not physical memory. The page table must have an entry
|
|
439
|
+
for every possible virtual page, whether or not it maps to a physical frame.
|
|
440
|
+
|
|
441
|
+
Correct: Virtual address space / page size = 2^32 / 2^12 = 2^20 = 1M entries.
|
|
442
|
+
|
|
443
|
+
I got the right NUMBER (1M) because 2^32 = 4GB = the physical memory in this
|
|
444
|
+
problem. But my REASONING was wrong, and the next problem (where physical is
|
|
445
|
+
2GB but virtual is still 32-bit) proved it — I computed 512K entries instead
|
|
446
|
+
of the correct 1M.
|
|
447
|
+
|
|
448
|
+
(b) 1M * 4 bytes = 4MB page table. Correct (by accident).
|
|
449
|
+
|
|
450
|
+
(c) A 4MB page table per process is problematic because with 100 processes,
|
|
451
|
+
that is 400MB just for page tables. This motivates multi-level page tables.
|
|
452
|
+
|
|
453
|
+
## What This Revealed
|
|
454
|
+
The error is not about page tables — it is about [[memory-addressing]].
|
|
455
|
+
I do not have a solid intuition for the difference between "addresses that
|
|
456
|
+
exist" and "memory that is installed." Until I fix this, every VM calculation
|
|
457
|
+
will be unreliable even when I get the right answer by accident.
|
|
458
|
+
|
|
459
|
+
## Sage's Recommendation
|
|
460
|
+
Mastery sequence:
|
|
461
|
+
1. Drill [[memory-addressing]] with varied address widths and physical sizes
|
|
462
|
+
2. Re-attempt this problem with 48-bit virtual, 8GB physical, 4KB pages
|
|
463
|
+
(numbers where the wrong method gives a visibly wrong answer)
|
|
464
|
+
3. Then return to multi-level page table problems
|
|
465
|
+
|
|
466
|
+
---
|
|
467
|
+
|
|
468
|
+
Relevant Notes:
|
|
469
|
+
- [[memory-addressing]] -- the actual gap this problem exposed
|
|
470
|
+
- [[virtual-memory]] -- the concept I thought I was struggling with
|
|
471
|
+
- [[page-tables]] -- the structure being sized in this problem
|
|
472
|
+
```
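The sizing arithmetic in the problem above can be checked mechanically. A minimal shell sketch, using the example's numbers (this script is illustrative, not part of the package):

```shell
# Page table entries are fixed by the virtual address space, not by
# installed physical memory: entries = 2^(virtual_bits - offset_bits).
virtual_bits=32
offset_bits=12                   # log2(4KB page size)
entry_bytes=4
entries=$(( 1 << (virtual_bits - offset_bits) ))
table_bytes=$(( entries * entry_bytes ))
echo "entries=$entries"          # 2^20 = 1048576 (1M)
echo "table_bytes=$table_bytes"  # 4194304 bytes (4MB)
```

With 48-bit virtual addresses and the same page size, the wrong method (dividing physical memory by page size) and the correct one diverge visibly, which is why Sage's re-attempt numbers are chosen that way.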

### Cross-Course Connection: Indirection as Design Pattern

```markdown
---
description: The pattern of adding a layer of indirection to decouple interface from implementation appears across OS, ML, and philosophy — a meta-concept connecting three courses
type: concept
domain: systems
courses: ["[[operating-systems]]", "[[machine-learning]]", "[[philosophy-of-mind]]"]
mastery_level: familiar
last_tested: 2026-02-12
retrieval_strength: moderate
prerequisites: ["[[virtual-memory]]", "[[gradient-descent]]"]
enables: []
retrieval_questions:
  - "Name three instances of indirection across your courses and explain the common structure"
  - "What are the trade-offs of adding indirection? When does it help vs hurt?"
topics: ["[[computation]]"]
relevant_notes: ["[[virtual-memory]] -- OS instance: virtual addresses decouple programs from physical layout", "[[gradient-descent]] -- ML instance: loss functions decouple learning from raw prediction error", "[[chinese-room-argument]] -- Philosophy instance: Searle argues Chinese Room has indirection without understanding"]
---

# indirection as design pattern

Sage noticed that the concept of indirection — inserting a layer between what
something appears to be and what it actually is — shows up in three of my four
courses this semester. Worth noting because cross-course patterns are the kind
of deep understanding that transfers beyond any single exam.

## Instances
**Operating Systems:** [[virtual-memory]] adds a mapping layer between the
addresses a program uses and the physical memory locations those addresses
correspond to. The benefit: programs do not need to know about physical
layout, and the OS can rearrange physical memory without breaking programs.

**Machine Learning:** Loss functions and [[gradient-descent]] add a layer
between "how wrong is this prediction" and "what should the model do about
it." The loss function transforms raw error into a signal the optimizer
can follow. Different loss functions (MSE, cross-entropy) create different
optimization landscapes for the same underlying problem.

**Philosophy of Mind:** The [[chinese-room-argument]] turns on whether
indirection (the man following rules without understanding Chinese) constitutes
understanding. Searle argues the indirection layer (rule-following) does not
bridge to the thing it appears to produce (understanding). This is the same
structural question: does the mapping layer preserve the essential property
of what it maps?

## The Common Structure
In each case:
1. There is a "real" layer (physical memory, raw errors, Chinese language)
2. There is an interface layer (virtual addresses, loss gradients, rule outputs)
3. The interface layer decouples users from implementation details
4. The decoupling enables flexibility but introduces the question: is anything
   lost in the translation?

## Why This Matters for Learning
Recognizing indirection as a cross-domain pattern means I am not learning
three separate concepts — I am learning one structural principle with three
instantiations. This is the kind of transfer that Sage calls "far transfer":
understanding that travels across domain boundaries because the structure,
not the content, is what was learned.

---

Relevant Notes:
- [[virtual-memory]] -- OS instance of indirection
- [[gradient-descent]] -- ML instance of indirection
- [[chinese-room-argument]] -- philosophical instance questioning whether indirection preserves meaning
```

## Processing Workflow

Content flows through a capture-extract-connect-test cycle designed around the rhythms of student life:

### 1. Capture (During/After Lectures)
Priya takes rapid notes during lectures in `inbox/`. These are messy, incomplete, and timestamp-ordered. After class (or the same evening), she talks to Sage: "Sage, today's OS lecture covered virtual memory, address translation, and the TLB." Sage prompts for details, questions raised, and examples given. The result is a structured lecture note.

### 2. Extract (Post-Lecture Processing)
Sage processes the lecture note by:
- Identifying distinct concepts mentioned (virtual memory, address translation, TLB)
- Checking if concept notes already exist for each (avoid duplication)
- Creating new concept notes for genuinely new concepts
- Updating existing concept notes if the lecture deepened understanding
- Linking the lecture note to all concept notes it references

For each new concept, Sage asks: "What does this concept depend on? What does it enable?" This builds the prerequisite graph incrementally.
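The duplication check in step 2 amounts to a filename lookup before creating a note. A hypothetical sketch (the `concept_exists` helper and the temporary vault are illustrative, not shipped by the package):

```shell
# Build a stand-in vault with one existing concept note.
tmp=$(mktemp -d)
mkdir -p "$tmp/notes"
touch "$tmp/notes/virtual-memory.md"

# Hypothetical helper: does a concept note already exist for this name?
concept_exists() { find "$tmp/notes" -name "$1.md" | grep -q .; }

if concept_exists "virtual-memory"; then
  action="update existing note"
else
  action="create new note"
fi
echo "$action"
rm -rf "$tmp"
```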

### 3. Connect (Prerequisite Graph Construction)
After extraction, Sage updates the prerequisite graph:
- Links new concepts to their prerequisites (virtual memory requires memory addressing and page tables)
- Links new concepts to downstream concepts they enable (virtual memory enables demand paging)
- Checks for cross-course connections: does this concept appear in other courses?
- Generates retrieval questions for each new concept

When Sage detects a concept from a previous semester that Priya needs but may not have mastered (like memory addressing from Computer Architecture), Sage creates a concept note with `gap_detected` metadata and recommends targeted review.

### 4. Test (Active Retrieval)
Sage manages a spaced repetition schedule for all concepts with retrieval questions:
- New concepts get tested within 24 hours (initial encoding)
- Concepts Priya retrieves successfully get longer intervals (2 days, 5 days, 14 days, 30 days)
- Concepts Priya struggles with get shorter intervals and prerequisite chain investigation
- Practice problems provide deeper testing than retrieval questions

After each test session, Sage updates `mastery_level` and `retrieval_strength` for tested concepts. Failed retrievals trigger prerequisite chain analysis: "You could not retrieve virtual memory page fault handling. Checking prerequisites: memory addressing is weak. Recommend reviewing memory addressing before re-attempting virtual memory."
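The interval ladder above (24 hours, then 2, 5, 14, 30 days) can be sketched as a lookup on the consecutive-success count. This is a hypothetical reading of the schedule; the package does not specify its scheduler at this level:

```shell
# Hypothetical sketch of the spaced repetition ladder described above.
# Input: number of consecutive successful retrievals for a concept.
# Output: days until the next scheduled test.
next_interval() {
  case "$1" in
    0) echo 1 ;;    # new concept: test within 24 hours
    1) echo 2 ;;
    2) echo 5 ;;
    3) echo 14 ;;
    *) echo 30 ;;   # plateau; a failed retrieval would reset the count
  esac
}

echo "next review in $(next_interval 2) days"
```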

### 5. Synthesize (Exam Preparation)
Before exams, Sage generates study guides:
1. List all concepts the exam covers (from syllabus + lecture notes)
2. For each concept, check mastery_level and retrieval_strength
3. Trace prerequisite chains: if a tested concept is weak, check whether its prerequisites are solid
4. Prioritize study by: prerequisite gaps first (foundation), then weak concepts, then moderate concepts for reinforcement
5. Generate targeted practice problems for the weakest areas

The study guide is not "review everything" — it is "review these specific concepts in this specific order because this is where your understanding has gaps."
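The ordering in step 4 is a simple priority sort: gaps before weak concepts, weak before moderate. A sketch over illustrative (status, concept) input lines, not a shipped script:

```shell
# Rank illustrative (status, concept) pairs: prerequisite gaps first,
# then weak concepts, then moderate ones.
order=$(printf '%s\n' \
  'weak virtual-memory' \
  'gap memory-addressing' \
  'moderate page-tables' |
  awk 'BEGIN { prio["gap"]=0; prio["weak"]=1; prio["moderate"]=2 }
       { print prio[$1], $2 }' | sort -n | cut -d' ' -f2)
echo "$order"
```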

### 6. Reflect (Post-Exam Postmortem)
After each exam, Sage facilitates a postmortem:
- Which questions surprised you? (reveals concepts you overestimated)
- Which questions felt easy? (confirms mastery)
- What study methods worked? (refines future study strategy)
- What study methods gave false confidence? (specifically targeting re-reading)

Postmortem insights update Sage's understanding of Priya's learning patterns and inform future study recommendations.

## MOC Structure

### Hub: All Courses

```markdown
---
description: Navigation hub for all active courses and cross-course concept domains
type: moc
topics: []
---

# index

Four active courses this semester, plus cross-cutting concept domains that
span multiple courses.

## Active Courses
- [[operating-systems]] -- systems programming, Prof. Nakamura. Current gap: memory addressing prerequisite
- [[machine-learning]] -- statistical learning theory, Prof. Chen. On track, strong math foundation helps
- [[discrete-math-ii]] -- combinatorics and graph theory, Prof. Williams. Solid, building on Discrete Math I
- [[philosophy-of-mind]] -- consciousness and computation, Prof. Okafor. Engaging, surprising connections to CS

## Cross-Course Concept Domains
- [[computation]] -- indirection, abstraction layers, state machines (OS + ML + Philosophy)
- [[probability]] -- probabilistic reasoning (Discrete Math + ML)
- [[optimization]] -- gradient methods, search (ML + Discrete Math)

## Health
- Prerequisite gaps detected: ops/health/prerequisite-gaps.md
- Mastery dashboard: ops/health/mastery-dashboard.md
- Retrieval schedule: ops/retrieval-schedule.md
```

### Course MOC: Operating Systems

```markdown
---
description: Operating Systems (CS 350) — process management, memory systems, and concurrency, with prerequisite gap tracking for memory addressing
type: moc
course_code: CS 350
instructor: Prof. Nakamura
semester: Spring 2026
topics: ["[[index]]"]
---

# operating systems

Systems programming course covering process management, memory systems,
file systems, and concurrency. Currently the most challenging course because
several concepts depend on Computer Architecture knowledge that I passed
without deeply understanding.

## Core Concepts by Module

### Process Management
- [[process-lifecycle]] -- states, transitions, PCB structure (mastered)
- [[process-scheduling]] -- algorithms, criteria, trade-offs (practiced)
- [[system-calls]] -- interface between user and kernel mode (mastered)
- [[context-switching]] -- mechanism and cost (familiar)

### Memory Systems
- [[memory-addressing]] -- GAP DETECTED: address width vs space size confusion (exposed)
- [[page-tables]] -- single and multi-level structures (familiar)
- [[address-translation]] -- step-by-step MMU process (familiar)
- [[virtual-memory]] -- abstraction, benefits, implementation (familiar, blocked by addressing gap)
- [[tlb]] -- translation caching, flush on context switch (exposed)
- [[demand-paging]] -- not yet covered
- [[page-replacement]] -- not yet covered

### Concurrency (upcoming)
- [[concurrency-basics]] -- threads, race conditions (exposed from readings)
- Not yet covered in lectures

## Prerequisite Gaps
Sage has detected one critical gap in this course:
- [[memory-addressing]] — flagged 2026-02-08 via [[problem-set-3-os]]. This gap
  cascades into virtual-memory, page-tables, and all downstream memory concepts.
  Resolving this gap is the highest priority for OS study.

## Lectures
- [[lecture-2026-02-03-os]] -- process scheduling algorithms
- [[lecture-2026-02-05-os]] -- system calls and kernel mode
- [[lecture-2026-02-10-os]] -- virtual memory and address translation

## Exam Timeline
- Midterm 1: February 28 (process management + memory systems)
- Midterm 2: April 11 (concurrency + file systems)
- Final: May 15 (comprehensive)

---

Agent Notes:
The memory addressing gap is architecturally significant — it is a single
prerequisite failure that blocks understanding of 5+ downstream concepts.
Priya can "learn" virtual memory, page tables, and TLB as surface-level
definitions, but she cannot DO the calculations (page table sizing, address
translation) until addressing is solid. Prioritize this gap over new
lecture content.
```

## Graph Query Examples

```bash
# Find all prerequisite gaps (concepts with a gap_detected field)
rg '^gap_detected:' notes/ -l | while read -r f; do
  desc=$(rg '^description:' "$f" | head -1 | cut -d: -f2-)
  gap_date=$(rg '^gap_detected:' "$f" | head -1 | cut -d' ' -f2)
  echo "GAP ($gap_date):$desc — $(basename "$f")"
done

# Find all concepts at "weak" retrieval strength that enable downstream concepts
rg '^retrieval_strength: weak' notes/ -l | while read -r f; do
  enables=$(rg '^enables:' "$f" | head -1)
  if echo "$enables" | grep -q '\[\['; then
    echo "WEAK + ENABLING: $(basename "$f")"
    echo "  Enables: $enables"
  fi
done

# Cross-course concept detection: concepts appearing in 2+ courses
rg '^courses:.*\[\[.*\]\].*\[\[' notes/ -l | while read -r f; do
  courses=$(rg '^courses:' "$f" | head -1)
  echo "CROSS-COURSE: $(basename "$f") — $courses"
done

# Mastery distribution per course
COURSE="operating-systems"
echo "=== $COURSE mastery distribution ==="
for level in mastered practiced familiar exposed unaware; do
  count=$(rg "^mastery_level: $level" notes/ -l | \
    xargs -I{} rg "^courses:.*$COURSE" {} -l 2>/dev/null | wc -l | tr -d ' ')
  echo "  $level: $count"
done

# Prerequisite chain depth: how many prerequisites deep is a concept?
find_depth() {
  local concept="$1"
  local file
  file=$(find notes/ -name "$concept.md" 2>/dev/null | head -1)
  if [ -z "$file" ]; then echo 0; return; fi
  local prereqs
  prereqs=$(rg '^prerequisites:' "$file" | grep -o '\[\[[^]]*\]\]' | sed 's/\[\[//g;s/\]\]//g')
  if [ -z "$prereqs" ]; then echo 0; return; fi
  local max_depth=0
  for p in $prereqs; do
    local d
    d=$(find_depth "$p")
    if [ "$d" -gt "$max_depth" ]; then max_depth=$d; fi
  done
  echo $((max_depth + 1))
}
# Usage: find_depth "virtual-memory" → 2 (memory-addressing → binary-number-representation)
```

## What Makes This Domain Unique

**1. The prerequisite graph is a dependency graph with cascading failure modes.** In a research vault, a missing connection is a missed opportunity for synthesis — the note still stands on its own. In a learning system, a missing prerequisite is a structural failure that cascades forward. If Priya does not understand memory addressing, she cannot understand virtual memory, which means she cannot understand demand paging, which means the entire second half of Operating Systems is built on sand. The prerequisite graph is not a nice organizational feature — it is a diagnostic tool that reveals why a student is struggling by tracing symptoms back to root causes. This is fundamentally different from the associative linking in research systems where every note is independently valuable.

**2. Mastery must be tested, not self-reported.** In a personal assistant vault, an area is "green" or "red" based on engagement — a reasonable proxy because the user knows their own life. In a learning system, self-reported mastery is dangerously unreliable. Students who re-read notes feel confident but fail on recall. The research on metacognitive blindness (students confident in material they cannot retrieve) means the system must actively test mastery through retrieval questions and practice problems, not trust Priya's self-assessment. This makes testing a first-class operation, not an optional review — and makes the system inherently more adversarial than a personal assistant (which is always on the user's side).

**3. Cross-course connections create transfer learning that isolated study cannot.** Priya takes four courses as four separate workloads. But "indirection as a design pattern" spans OS, ML, and Philosophy. Discrete Math's probability concepts directly enable ML's Bayesian reasoning. These connections are invisible when courses are studied in isolation. The flat concept graph that spans all courses reveals structural similarities that would never surface in a course-folder organization — and these cross-course connections represent the kind of deep understanding that transfers beyond any single exam into lasting expertise. No course syllabus teaches this. The graph reveals it.

## Agent-Native Advantages

**Prerequisite gap detection before cascading failure.** When Priya struggles with virtual memory, she experiences it as "this topic is hard." A human tutor might ask probing questions and eventually identify the memory addressing gap, but only if they think to look there. Sage traces the prerequisite graph algorithmically: virtual-memory depends on memory-addressing, memory-addressing has `retrieval_strength: weak` and a failed practice problem. The diagnosis is not a guess — it is a graph traversal. This is the difference between a student spending 10 hours re-reading virtual memory notes (treating the symptom) and spending 2 hours on memory addressing drills (treating the cause). A human cannot maintain the prerequisite graph across 4 courses and 50+ concepts. An agent can, and the diagnostic accuracy directly converts to study time saved.
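The traversal described above reduces to: read the struggling concept's `prerequisites` list, then check each prerequisite's `retrieval_strength`. A self-contained sketch using `grep` for portability (the field names match the note frontmatter shown earlier; the temporary two-note vault is illustrative):

```shell
# Build a two-note stand-in vault, then trace the prerequisite edge.
tmp=$(mktemp -d)
mkdir -p "$tmp/notes"
printf 'prerequisites: ["[[memory-addressing]]"]\n' > "$tmp/notes/virtual-memory.md"
printf 'retrieval_strength: weak\n' > "$tmp/notes/memory-addressing.md"

# Extract prerequisite names from the wiki-links, keep only the weak ones.
found=$(
  grep '^prerequisites:' "$tmp/notes/virtual-memory.md" |
  grep -o '\[\[[^]]*\]\]' | tr -d '[]' |
  while read -r p; do
    grep -q '^retrieval_strength: weak' "$tmp/notes/$p.md" 2>/dev/null &&
      echo "$p"
  done
)
echo "root cause candidate: $found"
rm -rf "$tmp"
```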

**Spaced repetition scheduling across all courses simultaneously.** A human using Anki creates separate decks per course and reviews them independently. Sage manages one unified schedule that optimizes across all courses: "You have 20 minutes for review today. Based on retrieval decay curves: 3 OS concepts are due (one of which is a prerequisite for Friday's lecture), 2 ML concepts are due, and 1 Discrete Math concept is approaching its forgetting threshold. Priority order: the OS prerequisite (time-critical), then the concepts closest to their forgetting thresholds." No human maintains this optimization across four courses. They review whatever deck they feel like, or cram the night before. The agent transforms spaced repetition from a per-course habit (usually abandoned) into a unified, priority-aware system that schedules based on retrieval science, not mood.

**Practice problem targeting based on concept weakness, not chapter progression.** When Sage detects that memory addressing is weak, it does not recommend "do more Chapter 7 problems." It generates or selects problems that specifically test the confused concepts: "Given a 48-bit virtual address space and 8GB physical RAM, compute the number of page table entries." This problem is crafted to produce different answers depending on whether Priya uses virtual address space (correct) or physical memory (her error pattern). If she gets it right, the concept is strengthening. If she gets it wrong in the predicted way, the gap is confirmed and Sage adjusts. This is not generic "more practice" — it is diagnostic practice targeting the specific confusion the prerequisite graph revealed. A textbook's problem sets are ordered by chapter. The agent's problem selection is ordered by diagnostic value.

**Cross-course concept detection that isolated study cannot produce.** Sage detects that probability theory appears in both Discrete Math and ML. A human studying for each course separately would learn Bayes' theorem twice — once as a math proof, once as an ML algorithm — without connecting them. Sage links the concepts: "The [[bayes-theorem]] note in your Discrete Math course is the mathematical foundation for the [[naive-bayes-classifier]] in your ML course. Mastering the proof strengthens your understanding of the classifier, and seeing the classifier application deepens your intuition for the proof." This connection is invisible in a course-siloed system. It is obvious in a flat concept graph that spans courses. The agent does not just notice the connection — it schedules review so that the two instantiations reinforce each other rather than competing for study time.

**Exam postmortem analysis revealing study method failures, not just content failures.** After every exam, Sage analyzes the relationship between study methods and outcomes. "You studied virtual memory by re-reading notes (3 sessions, 2.5 hours total). You studied system calls by doing practice problems (2 sessions, 1.5 hours total). On the exam: virtual memory questions scored 65%, system calls scored 92%. This is consistent with research on re-reading vs. active retrieval: recognition (re-reading) creates confidence but not recall, while retrieval practice (problems) creates durable recall." A human student notices "I did badly on VM" and resolves to study harder. The agent notices "your study METHOD for VM was the problem, not your effort level" and recommends a specific change in approach. This meta-cognitive feedback is what separates students who improve from students who work harder at the same ineffective methods — and it is precisely the kind of pattern detection that requires tracking study methods alongside outcomes across multiple exam cycles, which no student naturally does.

---

Topics:
- [[domain-compositions]]
|