arscontexta 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,789 @@
---
description: product-management knowledge system — inspirational composition showing derived architecture for feedback-to-feature pipelines, experiment tracking, and customer voice intelligence
kind: example
domain: pm
topics: ["[[domain-compositions]]"]
---

# product management uses feedback pipelines with experiment tracking

A derived architecture for a product manager who needs to turn the noise of customer feedback, competitive signals, and experiment results into coherent product strategy. Not a feature tracking tool — a strategic intelligence system that connects what customers say to what experiments prove to what the roadmap should prioritize.

The agent translation here cuts to the core of the PM discipline. Humans are good at empathy, judgment, and stakeholder negotiation. They are terrible at remembering what 847 customers said across 12 channels over 6 months and detecting that 23% of them mentioned the same pain point using different words. An agent cannot feel what a frustrated user feels, but it can hold every customer voice simultaneously and surface the patterns that should drive decisions. The system gives the PM evidence-backed conviction instead of gut-feel confidence.

---

## Persona

**Nadia Reeves** is the Senior PM for the growth team at a mid-stage B2B SaaS company (Series B, 180 employees, $28M ARR) that sells a workflow automation platform. She owns the self-serve onboarding experience, the expansion motion for existing accounts, and the experiment program that tests growth hypotheses. On any given day she is balancing: user research interviews, experiment results, competitive moves, feature requests from customer success, OKR tracking, and roadmap negotiations with engineering.

Nadia's problem is not a lack of data — it is too much data with too little synthesis. She has Gong recordings from customer calls, Intercom transcripts from support, NPS survey responses, experiment results in Amplitude, feature requests in a Productboard backlog, competitive updates in a Google Doc that was last touched in October, and her own research notes in scattered Notion pages. Each channel captures genuine signal. Nothing connects them. When she argues for a roadmap item in the quarterly planning meeting, she pulls evidence from memory and whichever tool she happened to open that morning. She knows there is more evidence — she just cannot find it in time.

Nadia's agent operates as a strategic intelligence layer. It ingests feedback from every channel, clusters it into themes, connects themes to experiments and features, and surfaces evidence when Nadia needs to make a case. When she writes a PRD, the agent pulls: every piece of customer feedback related to this problem, every experiment that tested adjacent hypotheses, the competitive landscape for this feature area, and the OKR this feature would advance. The agent does not decide what to build — it ensures Nadia decides with complete evidence rather than partial recall.

---

## Configuration

| Dimension | Position | Rationale |
|-----------|----------|-----------|
| **Granularity** | Atomic for feedback and experiment results, compound for PRDs and personas | Each piece of customer feedback is a distinct signal that might connect to different themes. Atomizing feedback enables clustering. But PRDs and personas are inherently compound — splitting a PRD into atomic claims would destroy the narrative structure that stakeholders need. |
| **Organization** | Flat with type-based subdirectories | Feedback, experiments, PRDs, and competitive intelligence each have their own directory. Within directories, files are flat. The graph connects across: a feedback theme links to the PRD it informed, which links to the experiment that validated the hypothesis, which links to the OKR it advances. |
| **Linking** | Explicit with agent-suggested connections from feedback clustering | PRD-to-OKR links must be explicit and intentional — Nadia argues for this alignment in planning meetings. Feedback-to-theme connections are agent-suggested: the agent detects that seven support tickets use different words for the same pain point and suggests a theme. |
| **Metadata** | Dense for feedback (segment, channel, sentiment), medium for everything else | Feedback metadata enables the queries that matter most: "Show me all negative feedback from enterprise customers about onboarding in the last quarter." Dense metadata on feedback is the price of admission for pattern detection. PRDs and experiments need fewer structured fields because they are read holistically. |
| **Processing** | Heavy for feedback (clustering pipeline), light for decisions and PRDs | Raw feedback is high-volume, low-signal-per-item content. Without processing, 200 support tickets are just 200 tickets. The agent's clustering pipeline transforms them into five actionable themes with quantified demand signals. PRDs and decisions arrive near-final — processing is mostly connection-finding, not transformation. |
| **Formalization** | Medium — templates for feedback and experiments, flexible for strategy docs | Feedback entries must be consistently structured for clustering to work. Experiment results must follow a hypothesis-result-learning format for cross-experiment pattern detection. But strategy documents, roadmap rationales, and competitive analyses benefit from narrative flexibility. |
| **Review** | Continuous for feedback clustering, quarterly for strategy alignment | Feedback arrives continuously and should be clustered in near-real-time. But strategic coherence — are our experiments aligned with our OKRs? is our roadmap connected to customer evidence? — is a quarterly-cadence concern that matches planning cycles. |
| **Scope** | Team-shared — product, engineering, design, customer success all contribute | Customer feedback comes from CS. Experiment results come from engineering. Competitive intelligence comes from marketing. The knowledge system must be team-accessible because the value comes from cross-functional evidence synthesis. |

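The Metadata row above names the payoff query: all negative feedback from enterprise customers about onboarding in the last quarter. A minimal sketch of that filter over `notes/feedback/`, assuming flat `key: value` frontmatter as in the feedback schema; the helper names are illustrative, not part of the package:

```python
from datetime import date, timedelta
from pathlib import Path

def frontmatter(text: str) -> dict:
    """Parse flat 'key: value' pairs between the leading --- fences."""
    meta: dict[str, str] = {}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return meta
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip('"')
    return meta

def negative_enterprise_onboarding(vault: Path, today: date) -> list[dict]:
    """Negative or frustrated enterprise feedback about onboarding, last ~90 days."""
    cutoff = (today - timedelta(days=90)).isoformat()
    hits = []
    for note in (vault / "notes" / "feedback").glob("*.md"):
        meta = frontmatter(note.read_text(encoding="utf-8"))
        if (
            meta.get("customer_segment") == "enterprise"
            and meta.get("sentiment") in ("negative", "frustrated")
            and meta.get("feature_area") == "onboarding"
            and meta.get("date", "") >= cutoff  # ISO dates compare correctly as strings
        ):
            hits.append(meta)
    return sorted(hits, key=lambda m: m.get("date", ""), reverse=True)
```

Treating "last quarter" as a rolling 90 days (and `frustrated` as negative) are assumptions of this sketch; swap in fiscal-quarter bounds if that is what the planning cycle needs.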
---

## Vault Structure

```
vault/
├── self/
│   ├── identity.md                 # Agent identity, product philosophy
│   └── memory/
│       └── [operational learnings]
├── notes/
│   ├── index.md                    # Hub: entry point
│   ├── customer-voice.md           # MOC: feedback themes and demand signals
│   ├── experiments.md              # MOC: experiment program, patterns, learnings
│   ├── strategy.md                 # MOC: OKRs, roadmap rationale, bets
│   ├── competitive.md              # MOC: competitive landscape, positioning
│   ├── personas.md                 # MOC: user segments and JTBD
│   ├── features.md                 # MOC: feature areas with evidence connections
│   │
│   ├── feedback/
│   │   ├── fb-2026-02-14-onboarding-confusion-enterprise.md
│   │   ├── fb-2026-02-13-api-documentation-gaps.md
│   │   └── ...
│   ├── themes/
│   │   ├── theme-onboarding-time-to-value.md
│   │   ├── theme-api-developer-experience.md
│   │   └── ...
│   ├── experiments/
│   │   ├── exp-001-simplified-onboarding-wizard.md
│   │   ├── exp-002-interactive-api-playground.md
│   │   └── ...
│   ├── prds/
│   │   ├── prd-guided-onboarding-v2.md
│   │   ├── prd-api-playground.md
│   │   └── ...
│   ├── decisions/
│   │   ├── dec-001-prioritize-onboarding-over-expansion-q1.md
│   │   └── ...
│   ├── competitive/
│   │   ├── competitor-acme-workflows.md
│   │   ├── competitor-flowmatic.md
│   │   └── ...
│   ├── personas/
│   │   ├── persona-technical-evaluator.md
│   │   ├── persona-business-champion.md
│   │   └── ...
│   └── okrs/
│       ├── okr-q1-2026-reduce-onboarding-drop-off.md
│       └── ...
├── ops/
│   ├── templates/
│   │   ├── feedback.md
│   │   ├── theme.md
│   │   ├── experiment.md
│   │   ├── prd.md
│   │   ├── decision.md
│   │   ├── competitor.md
│   │   ├── persona.md
│   │   └── okr.md
│   ├── logs/
│   │   ├── feedback-digest.md      # Weekly feedback clustering summary
│   │   ├── experiment-patterns.md  # Cross-experiment learning log
│   │   └── evidence-gaps.md        # Where decisions lack supporting evidence
│   └── derivation.md
└── inbox/
    └── [raw feedback imports, interview transcripts, competitive screenshots]
```

---

## Note Schemas

### Customer Feedback

```yaml
---
description: [one sentence capturing the customer's core message]
feedback_id: FB-YYYY-MM-DD-NNN
date: YYYY-MM-DD
source: support-ticket | nps-survey | interview | sales-call | social-media | in-app
customer_segment: enterprise | mid-market | smb | free-tier
customer_name: [name or anonymous]
account_arr: NNNk | null
sentiment: positive | neutral | negative | frustrated
feature_area: onboarding | api | integrations | reporting | billing | general
verbatim: "[exact customer quote]"
themes: ["[[theme-name]]"]
topics: ["[[customer-voice]]"]
---
```

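Feedback frontmatter only enables clustering if it stays consistently structured, so it is worth validating on write. A hedged sketch of such a check; the field names and enum values come from the schema above, but the validator itself is illustrative, not something the package ships:

```python
REQUIRED = ("description", "feedback_id", "date", "source",
            "customer_segment", "sentiment", "feature_area", "verbatim")

ENUMS = {
    "source": {"support-ticket", "nps-survey", "interview",
               "sales-call", "social-media", "in-app"},
    "customer_segment": {"enterprise", "mid-market", "smb", "free-tier"},
    "sentiment": {"positive", "neutral", "negative", "frustrated"},
    "feature_area": {"onboarding", "api", "integrations",
                     "reporting", "billing", "general"},
}

def validate_feedback(meta: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the note passes."""
    problems = [f"missing field: {k}" for k in REQUIRED if not meta.get(k)]
    for field, allowed in ENUMS.items():
        value = meta.get(field)
        if value and value not in allowed:
            problems.append(f"{field}: {value!r} not in {sorted(allowed)}")
    return problems
```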
### Feedback Theme

```yaml
---
description: [one sentence describing the pattern across multiple feedback items]
theme_id: THEME-NNN
feedback_count: NN
first_seen: YYYY-MM-DD
last_seen: YYYY-MM-DD
segments_affected: ["enterprise", "mid-market"]
sentiment_distribution:
  negative: NN%
  neutral: NN%
  positive: NN%
demand_signal: critical | strong | moderate | weak
related_experiments: ["[[exp-nnn-title]]"]
related_prds: ["[[prd-title]]"]
related_okrs: ["[[okr-title]]"]
topics: ["[[customer-voice]]"]
---
```

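Every quantitative field in a theme's frontmatter is derivable from its member feedback, so the agent can recompute them instead of hand-maintaining them. A sketch assuming the clustered notes are already parsed into dicts; folding `frustrated` into the negative bucket is my assumption, not something the schema states:

```python
from collections import Counter

def theme_stats(feedback: list[dict]) -> dict:
    """Roll a cluster of feedback metadata up into theme frontmatter fields."""
    sentiments = Counter(m["sentiment"] for m in feedback)
    negative = sentiments["negative"] + sentiments["frustrated"]  # assumption
    total = len(feedback)
    dates = sorted(m["date"] for m in feedback)
    return {
        "feedback_count": total,
        "first_seen": dates[0],
        "last_seen": dates[-1],
        "segments_affected": sorted({m["customer_segment"] for m in feedback}),
        "sentiment_distribution": {
            "negative": round(100 * negative / total),
            "neutral": round(100 * sentiments["neutral"] / total),
            "positive": round(100 * sentiments["positive"] / total),
        },
    }
```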
### Experiment Result

```yaml
---
description: [one sentence stating the hypothesis and outcome]
exp_id: EXP-NNN
status: proposed | running | completed | abandoned
hypothesis: [if we do X, then Y will happen, measured by Z]
start_date: YYYY-MM-DD
end_date: YYYY-MM-DD | null
primary_metric: [metric name]
control_result: [value]
variant_result: [value]
lift: [+/-NN%]
statistical_significance: [p-value or confidence interval]
decision: ship | iterate | kill | inconclusive
learnings: ["key insight from this experiment"]
related_themes: ["[[theme-name]]"]
related_prds: ["[[prd-title]]"]
topics: ["[[experiments]]"]
---
```

### PRD (Product Requirements Document)

```yaml
---
description: [one sentence describing the feature and its primary user benefit]
prd_id: PRD-NNN
status: draft | in-review | approved | in-development | shipped | deprecated
problem_statement: [what problem this solves]
target_persona: "[[persona-name]]"
success_metrics: ["metric: target"]
related_themes: ["[[theme-name]] -- evidence connection"]
related_experiments: ["[[exp-nnn-title]] -- validation status"]
related_okrs: ["[[okr-title]] -- strategic alignment"]
evidence_strength: strong | moderate | weak | speculative
topics: ["[[features]]"]
---
```

### Product Decision

```yaml
---
description: [one sentence stating the decision and primary rationale]
dec_id: DEC-NNN
date: YYYY-MM-DD
deciders: ["name (role)"]
decision: [what was decided]
alternatives_considered: ["alternative -- why rejected"]
evidence_used: ["[[note]] -- what it showed"]
reversibility: one-way | two-way
review_date: YYYY-MM-DD | null
topics: ["[[strategy]]"]
---
```

### Competitor Profile

```yaml
---
description: [one sentence positioning this competitor relative to us]
competitor_name: [company name]
product: [product name]
market_position: leader | challenger | niche | emerging
last_updated: YYYY-MM-DD
strengths: ["strength"]
weaknesses: ["weakness"]
differentiators: ["what they do that we don't"]
our_advantages: ["what we do that they don't"]
recent_moves: ["YYYY-MM: what they did"]
threat_level: high | medium | low
topics: ["[[competitive]]"]
---
```

### OKR

```yaml
---
description: [the objective in one sentence]
okr_id: OKR-YYYY-QN-NNN
quarter: YYYY-QN
objective: [qualitative goal]
key_results:
  - metric: [what to measure]
    target: [number]
    current: [number]
status: on-track | at-risk | off-track
owner: [name]
related_features: ["[[prd-title]]"]
related_themes: ["[[theme-name]] -- customer evidence supporting this objective"]
topics: ["[[strategy]]"]
---
```

### User Persona

```yaml
---
description: [one sentence describing who this persona is and what they need]
persona_name: [archetype name]
segment: enterprise | mid-market | smb
role: [job title or function]
goals: ["what they want to achieve"]
pain_points: ["what frustrates them"]
jtbd: ["job to be done statement"]
behaviors: ["observable behavior pattern"]
preferred_channels: ["how they interact with the product"]
topics: ["[[personas]]"]
---
```

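The `related_*` fields in the schemas above are where the evidence chain lives, and a dangling wiki link breaks that chain silently. An illustrative integrity check (not one of the package's own validators) that walks `notes/` and reports `[[links]]` with no matching file:

```python
import re
from pathlib import Path

# Capture the target of a [[wiki link]], stopping at ]], |alias, or #heading
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def unresolved_links(vault: Path) -> dict[str, list[str]]:
    """Map each note to the [[wiki link]] targets that match no file under notes/."""
    notes = list((vault / "notes").rglob("*.md"))
    known = {p.stem for p in notes}
    broken: dict[str, list[str]] = {}
    for note in notes:
        targets = {t.strip() for t in WIKILINK.findall(note.read_text(encoding="utf-8"))}
        missing = sorted(targets - known)
        if missing:
            broken[note.name] = missing
    return broken
```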
---

## Example Notes

### Customer Feedback: Onboarding Confusion

```markdown
---
description: Enterprise customer VP of Engineering frustrated by onboarding wizard that assumes technical users but their first users are business analysts
feedback_id: FB-2026-02-14-001
date: 2026-02-14
source: interview
customer_segment: enterprise
customer_name: Marcus Webb
account_arr: 180k
sentiment: frustrated
feature_area: onboarding
verbatim: "We bought this for our business analysts, but the setup wizard asks about API keys and webhook configurations in the first three steps. My analysts don't know what a webhook is. They closed the tab and went back to spreadsheets. We almost churned before we started."
themes: ["[[theme-onboarding-time-to-value]]"]
topics: ["[[customer-voice]]"]
---

# Onboarding confusion — enterprise business analyst path missing

Marcus Webb (VP Engineering, Trellis Corp, $180K ARR) described a critical onboarding failure during a customer research interview. His company purchased the platform for business analysts, but the onboarding wizard assumes a technical user persona from step one.

The core issue is persona mismatch in the onboarding flow. The wizard was designed around [[persona-technical-evaluator]] but Trellis's actual first users are closer to [[persona-business-champion]] — business-side users who want outcomes, not infrastructure.

This is not an isolated complaint. It connects to [[theme-onboarding-time-to-value]], where 12 other feedback items describe variations of "the first experience assumes too much technical knowledge." Marcus's feedback is the strongest signal because it comes from a $180K ARR account that nearly churned — the stakes are concrete and quantified.

His specific suggestion was a role selector at the start of onboarding: "Are you a developer, a business analyst, or an admin? Just that one question would change everything that comes next." This aligns with [[exp-001-simplified-onboarding-wizard]], which tested a reduced-step wizard but did not test persona branching.

---

Topics:
- [[customer-voice]]
```

### Feedback Theme: Onboarding Time to Value

```markdown
---
description: Persistent feedback cluster showing that new users take too long to reach their first meaningful outcome — 13 feedback items across enterprise and mid-market segments, mostly negative sentiment
theme_id: THEME-001
feedback_count: 13
first_seen: 2025-09-22
last_seen: 2026-02-14
segments_affected: ["enterprise", "mid-market"]
sentiment_distribution:
  negative: 77%
  neutral: 15%
  positive: 8%
demand_signal: critical
related_experiments: ["[[exp-001-simplified-onboarding-wizard]]"]
related_prds: ["[[prd-guided-onboarding-v2]]"]
related_okrs: ["[[okr-q1-2026-reduce-onboarding-drop-off]]"]
topics: ["[[customer-voice]]"]
---

# Onboarding time-to-value

The most persistent negative feedback theme across the past six months. Thirteen distinct feedback items from ten different accounts describe the same core problem: new users cannot reach their first valuable outcome quickly enough.

## The pattern

The feedback clusters into three sub-patterns:

1. **Technical assumption mismatch** (5 items) — The onboarding wizard assumes technical users. Business analysts and non-technical champions encounter API keys, webhooks, and configuration steps they do not understand. Strongest signal: [[fb-2026-02-14-onboarding-confusion-enterprise]] from a $180K ARR account.

2. **Too many steps before value** (4 items) — Even technical users report that the wizard requires 14 steps before they can run their first automation. Competitors (specifically [[competitor-flowmatic]]) achieve "first automation" in 5 steps. The gap is quantifiable and competitive.

3. **No guided path to outcomes** (4 items) — Users complete onboarding but do not know what to do next. They have configured the tool but have not experienced its value. The drop-off between "onboarding complete" and "first automation run" is 38% (from Amplitude data).

## Evidence strength

This theme has the strongest evidence base of any active theme:
- 13 feedback items across two segments
- Quantified churn risk (Marcus Webb at Trellis, $180K ARR)
- Competitive benchmark ([[competitor-flowmatic]]: 5 steps vs our 14)
- Funnel data showing 38% post-onboarding drop-off
- One completed experiment ([[exp-001-simplified-onboarding-wizard]]) showing 22% lift from step reduction alone

## Strategic connection

This theme directly supports [[okr-q1-2026-reduce-onboarding-drop-off]] and has informed [[prd-guided-onboarding-v2]], which proposes persona-branched onboarding paths. The decision to prioritize onboarding over expansion this quarter ([[dec-001-prioritize-onboarding-over-expansion-q1]]) was partially driven by this theme's demand signal strength.

---

Relevant Notes:
- [[exp-001-simplified-onboarding-wizard]] -- tested step reduction, showed 22% lift, but did not test persona branching
- [[prd-guided-onboarding-v2]] -- the feature proposal this theme informs
- [[competitor-flowmatic]] -- competitive benchmark for onboarding speed
- [[persona-business-champion]] -- the persona most affected by this theme

Topics:
- [[customer-voice]]
```

### Experiment Result: Simplified Onboarding Wizard

```markdown
---
description: Reducing onboarding wizard from 14 steps to 7 improved completion rate by 22% but did not significantly improve time-to-first-automation, suggesting step count is necessary but not sufficient
exp_id: EXP-001
status: completed
hypothesis: If we reduce onboarding wizard steps from 14 to 7 by deferring non-essential configuration, then wizard completion rate will increase by 15%+
start_date: 2025-11-01
end_date: 2025-12-15
primary_metric: wizard completion rate
control_result: 61%
variant_result: 74.5%
lift: +22%
statistical_significance: "p < 0.01, n = 2,847"
decision: ship
learnings: ["Step reduction improves completion but the post-wizard experience still loses users", "Deferred configuration items (API keys, webhooks) caused confusion later when users needed them for advanced features", "The lift was strongest for mid-market segment (+28%) and weakest for enterprise (+14%), suggesting enterprise users face additional barriers beyond step count"]
related_themes: ["[[theme-onboarding-time-to-value]]"]
related_prds: ["[[prd-guided-onboarding-v2]]"]
topics: ["[[experiments]]"]
---

# EXP-001: Simplified onboarding wizard

## Background

Testing the most straightforward onboarding improvement: fewer steps. The existing 14-step wizard included API key setup, webhook configuration, team invitations, notification preferences, and integration setup — all before the user could create their first automation. The hypothesis was that deferring non-essential steps would reduce abandonment.

## Results

The primary metric (wizard completion rate) improved significantly: 61% to 74.5%, a 22% lift with high statistical confidence. This validated the core hypothesis that step count was a barrier.

However, the secondary metric (time-to-first-automation, measured as time from account creation to first automation run) improved only marginally: median 4.2 days to 3.8 days. This means we moved the bottleneck but did not remove it. Users now complete the wizard but still take nearly four days to actually do something useful.

## Segment breakdown

| Segment | Control | Variant | Lift |
|---------|---------|---------|------|
| SMB | 72% | 89% | +24% |
| Mid-market | 58% | 74% | +28% |
| Enterprise | 49% | 56% | +14% |

The enterprise segment showed the smallest lift, which aligns with [[fb-2026-02-14-onboarding-confusion-enterprise]] — enterprise users face persona-mismatch barriers that step reduction alone does not address. This is evidence for the persona-branching approach in [[prd-guided-onboarding-v2]].

## What this means for next experiments

Step reduction was necessary but not sufficient. The next experiment should test the post-wizard experience: guided paths to first automation, persona-specific templates, or an interactive walkthrough. The gap between "completed wizard" (74.5%) and "ran first automation within 7 days" (42%) is the new frontier.

---

Relevant Notes:
- [[theme-onboarding-time-to-value]] -- the feedback theme that motivated this experiment
- [[prd-guided-onboarding-v2]] -- the feature proposal incorporating these learnings
- [[okr-q1-2026-reduce-onboarding-drop-off]] -- the strategic objective this advances

Topics:
- [[experiments]]
```

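The headline numbers in EXP-001 are internally consistent and easy to sanity-check with a standard two-proportion z-test. A sketch assuming the reported n = 2,847 splits roughly evenly between arms, which the note does not actually say:

```python
from math import erfc, sqrt

def two_proportion_ztest(p1: float, n1: int, p2: float, n2: int) -> tuple[float, float]:
    """z statistic and two-sided p-value for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Control 61.0% vs variant 74.5%, assuming ~1,423 users per arm
z, p = two_proportion_ztest(0.610, 1423, 0.745, 1424)
relative_lift = (0.745 - 0.610) / 0.610  # ≈ 0.22, matching the reported +22% relative lift
```

Under that even-split assumption the z statistic lands far beyond the 99% threshold, consistent with the note's "p < 0.01".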
### Product Decision: Prioritize Onboarding Over Expansion Q1

```markdown
---
description: Chose to invest Q1 engineering capacity in self-serve onboarding improvements rather than expansion and upsell features because onboarding churn evidence was quantified while expansion opportunity remained speculative
dec_id: DEC-001
date: 2025-12-18
deciders: ["Nadia Reeves (Senior PM)", "Jake Morrison (VP Product)", "Chen Wei (Engineering Director)"]
decision: Allocate 70% of growth team Q1 capacity to onboarding improvements (guided wizard v2, persona branching, template library), 30% to expansion experiments
alternatives_considered: ["50/50 split between onboarding and expansion -- rejected because neither track would get sufficient investment to move metrics meaningfully", "Expansion-first with onboarding as fast-follow -- rejected because onboarding churn data showed immediate revenue risk while expansion opportunity was unquantified", "Full expansion focus -- rejected because losing $180K+ ARR accounts to onboarding friction has higher expected value destruction than missing expansion opportunity"]
evidence_used: ["[[theme-onboarding-time-to-value]] -- 13 feedback items showing persistent, cross-segment onboarding friction", "[[exp-001-simplified-onboarding-wizard]] -- 22% completion lift proves onboarding is improvable, not a fundamental product-market fit issue", "[[fb-2026-02-14-onboarding-confusion-enterprise]] -- $180K ARR account nearly churned, quantifying revenue risk", "[[competitor-flowmatic]] -- competitive benchmark showing 5-step vs our 14-step onboarding, creating switching risk"]
reversibility: two-way
review_date: 2026-03-15
topics: ["[[strategy]]"]
---

# DEC-001: Prioritize onboarding over expansion for Q1 2026

## The argument

The growth team has finite capacity. The question for Q1 was: invest in converting more of the people who show up (onboarding), or invest in getting more from the people who already converted (expansion)?

The evidence asymmetry made the decision. Onboarding friction is documented with 13 feedback items, a quantified churn risk ($180K ARR account), a competitive benchmark (Flowmatic's 5-step wizard), and a successful experiment showing the problem is tractable (EXP-001: 22% completion lift from step reduction). Expansion opportunity is real but speculative — we believe mid-market accounts have room to grow, but we have not quantified the opportunity or tested whether specific interventions move it.

When one option has evidence and the other has intuition, invest where you have evidence. The expansion case can be built during Q1 with research and small experiments using the 30% allocation.

## What we are watching

This decision reverses if:
- Onboarding improvements fail to move time-to-first-automation below 48 hours (currently 3.8 days median)
- Expansion signals become urgent (a competitor launches a feature that enables easy expansion, creating switching risk)
- Q1 mid-quarter review (Feb 15) shows no meaningful improvement in activation metrics

---

Relevant Notes:
- [[theme-onboarding-time-to-value]] -- the evidence base driving this decision
- [[okr-q1-2026-reduce-onboarding-drop-off]] -- the OKR this decision serves
- [[prd-guided-onboarding-v2]] -- the primary deliverable this decision enables

Topics:
- [[strategy]]
```

|
|
467
|
+
### Competitor Profile: Flowmatic
|
|
468
|
+
|
|
469
|
+
```markdown
|
|
470
|
+
---
|
|
471
|
+
description: Primary competitor in the SMB and mid-market workflow automation space — stronger onboarding, weaker enterprise features, aggressive pricing
|
|
472
|
+
competitor_name: Flowmatic
|
|
473
|
+
product: Flowmatic Platform
|
|
474
|
+
market_position: challenger
|
|
475
|
+
last_updated: 2026-02-01
|
|
476
|
+
strengths: ["5-step onboarding that gets users to first automation in under 10 minutes", "Template marketplace with 500+ pre-built automations", "Freemium model with generous free tier driving adoption", "Strong developer community and documentation"]
|
|
477
|
+
weaknesses: ["Limited enterprise features: no SSO, no audit logging, no SOC 2 compliance", "No role-based access control — single permission level per workspace", "API rate limits make high-volume use cases impractical", "Customer support is community-only for free and mid tiers"]
|
|
478
|
+
differentiators: ["Template-first approach vs our configuration-first approach", "Visual flow builder is more intuitive for non-technical users", "Public API is better documented with interactive playground"]
|
|
479
|
+
our_advantages: ["Enterprise security and compliance (SSO, RBAC, SOC 2, HIPAA)", "API throughput handles 10x their rate limits", "Custom integration framework for proprietary systems", "Dedicated customer success for enterprise accounts"]
|
|
480
|
+
recent_moves: ["2026-01: Launched 'Flowmatic for Teams' with basic collaboration features", "2025-11: Raised Series C at $450M valuation, aggressive hiring in enterprise sales", "2025-09: Released interactive API playground — directly addresses our documentation gap"]
|
|
481
|
+
threat_level: high
|
|
482
|
+
topics: ["[[competitive]]"]
|
|
483
|
+
---

# Flowmatic

Flowmatic is the most dangerous competitor in SMB and mid-market because they have solved the problem we are currently failing at: getting users to value quickly. Their 5-step onboarding is the benchmark [[theme-onboarding-time-to-value]] is measured against. Their template marketplace means users do not need to build automations from scratch — they start with something that works and customize it.

## Strategic implications

The competitive threat is migration-oriented, not head-to-head. We do not lose deals to Flowmatic at the enterprise level — our security, compliance, and throughput advantages are decisive there. We lose users who start with Flowmatic's free tier, build their workflows on Flowmatic's templates, and never evaluate us because they solved their problem without us. When those users' companies grow to enterprise scale, switching costs make migration prohibitive.

This means the onboarding investment ([[dec-001-prioritize-onboarding-over-expansion-q1]]) is also a competitive response: if we cannot match their time-to-value for non-technical users, we cede the bottom-up adoption motion that eventually becomes enterprise revenue.

## What to watch

Flowmatic's Series C and "Teams" launch signal an enterprise push. If they add SSO and audit logging within the next 6 months, our differentiation narrows significantly. The interactive API playground they launched in September 2025 is already better than our API documentation — this directly feeds [[theme-api-developer-experience]].

---

Relevant Notes:
- [[theme-onboarding-time-to-value]] -- Flowmatic's 5-step wizard is the benchmark
- [[dec-001-prioritize-onboarding-over-expansion-q1]] -- our strategic response
- [[theme-api-developer-experience]] -- their playground addresses a gap we haven't closed

Topics:
- [[competitive]]
```

---

## Processing Workflow

### Capture

Feedback enters from multiple channels with different capture patterns:

| Channel | Capture Method | Agent Role |
|---------|---------------|------------|
| Support tickets (Intercom) | Weekly batch import | Agent creates feedback notes, extracts verbatim quotes, classifies segment and sentiment |
| Customer interviews (Gong) | Post-interview processing | Agent extracts key insights, verbatim quotes, and follow-up items from transcript |
| NPS surveys | Monthly batch import | Agent creates feedback notes, clusters by score tier, extracts open-ended responses |
| Sales call notes | Ad-hoc capture | Agent processes CRM notes into structured feedback entries |
| Social media mentions | Weekly scan | Agent captures relevant mentions, classifies sentiment |
| In-app feedback | Real-time processing | Agent creates feedback notes as they arrive |

Nadia's primary capture moment is after customer interviews. She voice-dumps key insights immediately after hanging up, while emotional context is fresh. The agent processes the dump into a structured feedback note with verbatim quotes, segment classification, and initial theme connections.

### Process

The agent's core processing work is the feedback-to-theme pipeline:

1. **Classify incoming feedback** — assign segment, sentiment, feature area. These are structured fields that enable downstream queries.

2. **Cluster into themes** — when a new feedback item arrives, the agent searches existing themes semantically. If it matches an existing theme, the agent links it and updates the theme's feedback count, date range, and sentiment distribution. If it does not match any theme, the agent holds it until three or more similar items accumulate, then proposes a new theme.

3. **Quantify demand signals** — themes gain strength through feedback count, ARR concentration (are the accounts complaining large?), cross-segment presence (does this affect multiple segments?), and sentiment intensity. The agent maintains a `demand_signal` rating of critical, strong, moderate, or weak based on these factors.

4. **Connect themes to strategy** — when a theme reaches "strong" or "critical," the agent checks: does an experiment exist that tests this? Does a PRD exist that addresses this? Does an OKR exist that this would advance? Missing connections become evidence gaps logged for Nadia's attention.

5. **Process experiment results** — when an experiment completes, the agent connects results back to the themes and PRDs that motivated it, updates theme status based on whether the experiment validated or invalidated the hypothesis, and identifies what the next experiment should test.

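The demand-signal rating in step 3 can be sketched as a small script over theme frontmatter. This is a minimal illustration: the `feedback_count` and `arr_at_risk` field names and the thresholds are assumptions for the sketch, not the system's actual schema, and the real weighting also factors in cross-segment presence and sentiment intensity.

```bash
# Sketch: derive a demand_signal rating from theme frontmatter.
# `feedback_count` / `arr_at_risk` are hypothetical field names,
# and the thresholds are illustrative only.
notes=$(mktemp -d)
cat > "$notes/theme-onboarding.md" <<'EOF'
---
feedback_count: 13
arr_at_risk: 180000
---
EOF

rate_theme() {
  count=$(grep '^feedback_count:' "$1" | awk '{print $2}')
  arr=$(grep '^arr_at_risk:' "$1" | awk '{print $2}')
  if [ "$count" -ge 10 ] && [ "$arr" -ge 100000 ]; then
    echo critical
  elif [ "$count" -ge 6 ] || [ "$arr" -ge 100000 ]; then
    echo strong
  elif [ "$count" -ge 3 ]; then
    echo moderate
  else
    echo weak
  fi
}

rate_theme "$notes/theme-onboarding.md"   # prints: critical
```

Keeping the rating derivable from frontmatter means any rule change can be re-applied across all themes in one pass rather than by hand-editing each note.
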
### Connect

Cross-cutting intelligence connections:

- **Feedback-to-feature pipeline** — themes aggregate demand signals, PRDs cite themes as evidence, experiments test PRD hypotheses, decisions cite experiments. The chain from "a customer said X" to "we decided to build Y because of Z evidence" is fully traversable.
- **Competitive-to-strategy connections** — when a competitor makes a move relevant to an active theme, the agent links them. "Flowmatic launched an API playground" connects to [[theme-api-developer-experience]] and strengthens the demand signal.
- **OKR-to-evidence alignment** — each OKR links to the themes, experiments, and PRDs that advance it. The agent can answer: "What evidence do we have that this OKR is achievable?" and "What customer feedback says this OKR matters?"
- **Persona-to-feedback connections** — feedback items are linked to the personas they affect. The agent can answer: "What are the top pain points for the Technical Evaluator persona?" by traversing feedback items linked to that persona.

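The competitive-to-strategy connection above is checkable mechanically: scan competitor profiles for wiki-links into theme notes and flag profiles with none. A portable sketch (plain `grep` in place of `rg`; the `notes/competitive` layout and fixture files are illustrative):

```bash
# Sketch: which competitor profiles link to feedback themes?
# Directory layout and fixtures are illustrative.
notes=$(mktemp -d)
mkdir -p "$notes/competitive"
cat > "$notes/competitive/flowmatic.md" <<'EOF'
Their playground feeds [[theme-api-developer-experience]].
EOF
cat > "$notes/competitive/zapstack.md" <<'EOF'
No theme links yet.
EOF

# Profiles connected to at least one theme (these feed demand signals)
grep -rl '\[\[theme-' "$notes/competitive"
# Profiles with no theme connection (candidates for review)
grep -rL '\[\[theme-' "$notes/competitive"
```

The second list is the actionable one: a competitor profile with no theme links is intelligence that has not yet been connected to strategy.
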
### Verify

Periodic and triggered checks:

1. **Weekly feedback digest** — agent summarizes new feedback, theme changes, and demand signal shifts. Nadia reviews this instead of individual feedback items.
2. **Evidence gap detection** — monthly scan for PRDs without supporting customer evidence, OKRs without related experiments, and experiments without clear theme connections. These gaps represent decisions based on intuition rather than evidence.
3. **Competitive freshness check** — monthly scan for competitor profiles not updated in 60+ days. The competitive landscape changes faster than any other knowledge type.
4. **Experiment follow-through** — are experiment learnings incorporated into PRDs? Are "iterate" decisions followed by new experiments? The agent flags experiments that completed but led to no downstream action.
5. **Strategic coherence check** — quarterly: does the roadmap trace back to OKRs? Do OKRs trace back to customer evidence? The agent generates a coherence report showing which roadmap items have evidence chains and which are speculative.

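The competitive freshness check can be sketched by comparing each profile's `last_updated` frontmatter against a cutoff — ISO dates compare correctly as strings, so no date library is needed. The cutoff is hardcoded here for illustration; in practice it would be computed as today minus 60 days.

```bash
# Sketch: flag competitor profiles whose `last_updated` predates a
# cutoff. Fixture files and the cutoff value are illustrative.
notes=$(mktemp -d)
cat > "$notes/zapstack.md" <<'EOF'
---
last_updated: 2025-11-20
---
EOF
cat > "$notes/flowmatic.md" <<'EOF'
---
last_updated: 2026-02-01
---
EOF

# ISO dates sort lexically, so awk's string `<` is a date comparison
awk -v cutoff="2025-12-03" \
  '$1 == "last_updated:" && $2 < cutoff { print "STALE:", FILENAME }' \
  "$notes"/*.md
```
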
---

## MOC Structure

### Hub (index.md)

```markdown
---
description: Entry point for the product knowledge system — navigate by customer voice, experiments, strategy, or competitive landscape
type: moc
---

# index

## Core Intelligence
- [[customer-voice]] -- what customers are saying, clustered into actionable themes
- [[experiments]] -- what we have tested and what we have learned
- [[strategy]] -- OKRs, roadmap rationale, major decisions
- [[competitive]] -- who else is in the market and what they are doing

## Building Blocks
- [[features]] -- feature areas with evidence connections
- [[personas]] -- user segments and jobs to be done

## Maintenance
- [[feedback-digest]] -- weekly summary of incoming signal
- [[experiment-patterns]] -- cross-experiment learning log
- [[evidence-gaps]] -- where decisions lack supporting evidence
```

### Customer Voice MOC (customer-voice.md)

```markdown
---
description: Feedback themes clustered from all channels — the quantified voice of the customer organized by demand signal strength
type: moc
topics: ["[[index]]"]
---

# customer-voice

Every customer interaction produces signal. Most organizations capture that signal but never synthesize it — feedback lives in support tools, interview transcripts, and survey reports that are read once and forgotten. This MOC organizes feedback into themes with quantified demand signals, connecting the raw voice of the customer to the strategic decisions that should respond to it.

## Critical Demand Signals
- [[theme-onboarding-time-to-value]] -- 13 items, cross-segment, $180K ARR at risk, competitive benchmark disadvantage
- [[theme-api-developer-experience]] -- 8 items, concentrated in technical evaluator persona, competitor has already addressed this

## Strong Demand Signals
- [[theme-reporting-customization]] -- 7 items, enterprise-concentrated, tied to expansion motion
- [[theme-integration-marketplace]] -- 6 items, mid-market concentrated, competitive parity feature

## Moderate Demand Signals
- [[theme-team-collaboration]] -- 4 items, emerging pattern, may strengthen as Teams plans mature
- [[theme-mobile-access]] -- 3 items, SMB concentrated, low ARR impact

## Feedback sources

| Channel | Items last 30 days | Dominant sentiment |
|---------|-------------------|-------------------|
| Support tickets | 24 | Negative (62%) |
| Customer interviews | 8 | Mixed (neutral 50%) |
| NPS surveys | 12 | Mixed |
| Sales calls | 5 | Neutral |
| In-app | 15 | Negative (58%) |

---

Agent Notes:
- Demand signal strength is not just feedback count. A single feedback item from a $180K ARR account that nearly churned is stronger signal than ten items from free-tier users with no revenue stake. Weight by ARR concentration and churn proximity.
- When new feedback arrives, check existing themes before creating a new one. The same pain point often uses different vocabulary across segments: enterprise users say "workflow configuration" while SMB users say "setup."
- The weekly feedback digest should highlight new themes, signal strength changes, and newly surfaced verbatim quotes that are particularly vivid or specific.

Topics:
- [[index]]
```

### Experiments MOC (experiments.md)

```markdown
---
description: Experiment program tracking — active tests, completed results, cross-experiment patterns, and the learning velocity of the growth team
type: moc
topics: ["[[index]]"]
---

# experiments

Experiments are the bridge between customer signal and product conviction. A feedback theme tells you what customers want. An experiment tells you whether your solution actually works. This MOC tracks the experiment program's health: what we are testing, what we have learned, and what patterns emerge across experiments.

## Active Experiments
- [[exp-003-persona-branched-onboarding]] -- testing role selector as first onboarding step (started 2026-02-01)
- [[exp-004-template-recommendations]] -- testing ML-driven template suggestions post-onboarding (started 2026-02-10)

## Completed — Key Results
- [[exp-001-simplified-onboarding-wizard]] -- 22% completion lift from step reduction, shipped, but insufficient for time-to-first-automation
- [[exp-002-interactive-api-playground]] -- 35% increase in API adoption among developer persona, shipped

## Abandoned
- [[exp-005-gamified-onboarding]] -- abandoned after qualitative research showed enterprise users find gamification patronizing

## Cross-Experiment Patterns

Three patterns emerge from the completed experiment portfolio:

1. **Segment-specific effects matter more than average effects.** EXP-001 showed +28% for mid-market but only +14% for enterprise. Reporting average lift masks segment differences that should drive persona-specific solutions.

2. **Completion metrics mislead without outcome metrics.** EXP-001 improved wizard completion but barely moved time-to-first-automation. The metric that matters is the downstream outcome, not the intermediate funnel step.

3. **Developer-focused improvements have clearest ROI.** EXP-002's API playground showed 35% API adoption lift with minimal engineering investment. Technical users respond to self-serve tooling more reliably than non-technical users respond to UX simplification.

---

Agent Notes:
- When proposing new experiments, check whether the hypothesis conflicts with learnings from completed experiments. If EXP-001 showed that step reduction alone is insufficient, a new experiment proposing further step reduction without addressing post-wizard experience is likely to waste capacity.
- Connect every experiment to at least one feedback theme and one OKR. Experiments without customer evidence are fishing expeditions. Experiments without strategic alignment are curiosity projects.

Topics:
- [[index]]
```

---

## Graph Query Examples

```bash
# Find all feedback from enterprise customers with negative sentiment
rg '^customer_segment: enterprise' notes/feedback/ -l | xargs rg '^sentiment: negative' -l

# Find all themes with critical demand signal
rg '^demand_signal: critical' notes/themes/

# Find all experiments that resulted in a "ship" decision
rg '^decision: ship' notes/experiments/

# Find all PRDs without related experiment validation
for prd in notes/prds/*.md; do
  rg -q '^related_experiments:.*\[\[' "$prd" || echo "NO EXPERIMENT: $prd"
done

# Find evidence chains: which themes connect to OKRs?
rg '^related_okrs:' notes/themes/ | rg -v 'null'

# Find all feedback mentioning onboarding from accounts over $100K ARR
rg '^feature_area: onboarding' notes/feedback/ -l | xargs rg '^account_arr:' | \
  grep -iE '[0-9]{3,}k'

# Find competitor moves in the last 3 months
rg '^recent_moves:' notes/competitive/ -A 5 | rg '2025-1[12]|2026-0[12]'

# Find decisions where evidence strength is weak or speculative
rg '^evidence_strength: (weak|speculative)' notes/prds/

# Count feedback by feature area
rg '^feature_area:' notes/feedback/ | awk -F': ' '{print $2}' | sort | uniq -c | sort -rn
```

---

## What Makes This Domain Unique

### The feedback-to-feature pipeline is the primary value chain

In a research vault, value lives in synthesis — connecting claims to build higher-order understanding. In a product management system, value lives in the pipeline from raw customer signal to strategic action. Every feedback item should eventually contribute to a theme. Every theme with sufficient demand signal should inform a PRD or an experiment. Every experiment result should update the themes it tested and the PRDs it informs. Every decision should cite the evidence chain that justifies it. The pipeline is not metadata — it is the product manager's primary weapon against opinion-driven decision-making.

### Evidence strength is a first-class property

Research claims are evaluated by reasoning quality and connection density. Product decisions are evaluated by evidence strength — and evidence strength varies enormously. A PRD backed by 13 feedback items, a competitive benchmark, and a successful experiment has different credibility than a PRD backed by a hunch and a Slack thread. Making `evidence_strength` an explicit schema field forces Nadia (and her agent) to be honest about the foundation underneath each decision. The agent can generate an "evidence audit" of the roadmap: how many features have strong evidence, how many are speculative, and where the gaps are.

### Multi-channel synthesis is the core processing challenge

Research sources are typically documents — papers, articles, transcripts. Product feedback arrives through support tickets, surveys, interviews, sales calls, social media, and in-app mechanisms. Each channel has different signal-to-noise ratios, different vocabularies, and different biases (support tickets over-represent frustrated users; sales calls over-represent prospective rather than existing customers). The agent's clustering pipeline must normalize across these channels: recognizing that a support ticket saying "the setup is confusing" and an interview transcript saying "our analysts couldn't figure out where to start" are expressing the same pain point. This cross-channel normalization is where human synthesis breaks down and agent processing excels.

---

## Agent-Native Advantages

### Exhaustive feedback clustering across all channels

Nadia reads customer feedback selectively — the support tickets that get escalated, the interviews she personally conducts, the NPS comments with extreme scores. She misses the 80% of signal that does not cross her attention threshold. The agent reads everything. Every support ticket, every survey response, every sales call note. It clusters not by keyword matching but by semantic similarity: "the onboarding is confusing" and "I couldn't figure out how to get started" and "we almost gave up during setup" are the same pain point expressed by three different people in three different contexts.

At scale, this transforms the PM's relationship with customer evidence. Instead of arguing from "I have heard this from a few customers," Nadia argues from "13 customers across two segments have expressed this pain point, with combined ARR of $940K at risk and the strongest signal coming from accounts in the 90-day post-onboarding churn window." The quantification is not more rigorous because Nadia has better spreadsheets — it is more rigorous because the agent holds every data point simultaneously and the human never could.

### Cross-experiment pattern detection across the full portfolio

Individual experiments produce individual learnings. The pattern across experiments produces strategic insight. After ten completed experiments, the agent detects:

- **Segment response patterns** — "Enterprise users consistently show smaller lifts from UX changes than mid-market users. The bottleneck for enterprise is not UX simplicity but persona-mismatch in the initial experience."
- **Metric relationship patterns** — "Intermediate funnel metrics (wizard completion, page views) improve more easily than outcome metrics (time-to-first-automation, 30-day retention). The team should focus experiment design on outcome metrics even though they require larger samples."
- **Investment efficiency patterns** — "Developer-focused improvements (API playground, documentation, SDK) show 2-3x the effect size per engineering day invested compared to non-technical UX improvements. The developer persona is more responsive to self-serve tooling."

These patterns are invisible within any single experiment. They emerge only when someone holds the full experiment portfolio in mind and looks for structural regularities. No PM does this. The agent does it automatically after every experiment completion.

### Evidence chain traceability from decision back to customer voice

When Nadia defends a roadmap decision in a planning meeting, she needs to trace the chain: "We are building this because customers said X, we validated it with experiment Y, and it advances OKR Z." Today this chain lives partly in her memory, partly in scattered tools, and partly in slides she created last quarter. The agent maintains the chain explicitly and bidirectionally.

Forward traversal: customer feedback -> theme -> experiment -> PRD -> OKR. "This customer pain point clusters with these twelve other reports, was partially validated by this experiment, is addressed by this PRD, and advances this strategic objective."

Backward traversal: OKR -> PRD -> experiment -> theme -> customer feedback. "This OKR is advanced by these PRDs, which are informed by these experiments, which tested hypotheses from these themes, which are grounded in these specific customer voices."

The chain is valuable not just for defense but for gap detection. When a PRD has no theme connection, the agent flags it: "This feature request has no customer evidence. Is it based on competitive pressure, internal intuition, or stakeholder request?" When an OKR has no experiment validation, the agent flags it: "This objective has not been experimentally tested. What experiments would increase confidence?"

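Because the links live in frontmatter, the backward traversal is a pair of greps: find the PRDs that cite an OKR, then read the experiment links out of those PRDs. A portable sketch (plain `grep` in place of `rg`; the fixture note and `notes/prds` layout mirror the examples in this document):

```bash
# Sketch: backward traversal OKR -> PRDs -> experiments via
# frontmatter wiki-links. Fixture content is illustrative.
notes=$(mktemp -d)
mkdir -p "$notes/prds"
cat > "$notes/prds/prd-guided-onboarding-v2.md" <<'EOF'
---
related_okrs: ["[[okr-q1-2026-reduce-onboarding-drop-off]]"]
related_experiments: ["[[exp-001-simplified-onboarding-wizard]]"]
---
EOF

okr="okr-q1-2026-reduce-onboarding-drop-off"
# Step 1: PRDs that cite the OKR
prds=$(grep -rl "related_okrs:.*\[\[$okr\]\]" "$notes/prds")
echo "$prds" | xargs -n1 basename
# Step 2: experiments those PRDs cite
echo "$prds" | xargs grep -h '^related_experiments:' | \
  grep -o '\[\[[^]]*\]\]'
```

Each additional hop (experiment -> theme, theme -> feedback) repeats the same pattern, so the full chain down to individual customer quotes is a short pipeline rather than a manual search.
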
### Competitive intelligence with automatic strategic connection

Nadia tracks competitors episodically — she reads a competitor's blog post, skims a feature announcement, notices a pricing change mentioned on Twitter. Each observation is captured but not connected to her strategic decisions. The agent connects every competitive signal to the themes and decisions it affects.

When Flowmatic launches an interactive API playground, the agent does not just log the competitive move. It connects it to [[theme-api-developer-experience]] (our customers have been asking for this), to [[exp-002-interactive-api-playground]] (we tested our version and it showed strong results), and to the competitive positioning (Flowmatic now has feature parity on API tooling). The competitive move is instantly contextualized within Nadia's strategic framework.

Over time, the agent builds a competitive trajectory model: "Flowmatic has made three enterprise-oriented moves in the last six months (Teams launch, SSO roadmap mention, enterprise sales hiring). Their historical pace suggests enterprise feature parity within 12-18 months. Our current advantages in security and compliance are durable for now but narrowing."

No PM maintains this longitudinal competitive model. They react to individual moves. The agent accumulates every move into a trajectory that informs strategic timing: "If we are going to build our developer experience advantage, we need to move before Flowmatic's enterprise push closes our differentiation gap."

### Real-time evidence auditing for roadmap integrity

At any moment, the agent can generate a roadmap evidence audit:

| Roadmap Item | Theme Connection | Experiment Validation | OKR Alignment | Evidence Strength |
|-------------|-----------------|---------------------|---------------|-------------------|
| Guided onboarding v2 | [[theme-onboarding-time-to-value]] (critical, 13 items) | [[exp-001-simplified-onboarding-wizard]] (positive, partial) | [[okr-q1-2026-reduce-onboarding-drop-off]] | Strong |
| API playground | [[theme-api-developer-experience]] (strong, 8 items) | [[exp-002-interactive-api-playground]] (positive, shipped) | [[okr-q1-2026-reduce-onboarding-drop-off]] | Strong |
| Reporting customization | [[theme-reporting-customization]] (strong, 7 items) | None | None | Moderate — evidence exists but no validation |
| Mobile companion app | [[theme-mobile-access]] (moderate, 3 items) | None | None | Weak — limited evidence, no validation, no strategic alignment |

This audit is not a one-time exercise — it updates continuously as new feedback arrives, experiments complete, and themes shift. It is the difference between "we think our roadmap is evidence-based" and "we can prove which parts are evidence-based and which are not."

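The audit rows above can be regenerated mechanically by checking, for each PRD, which links in the chain are present. A minimal sketch — the `related_*` field names follow this document's schema examples, while the fixture file and output format are illustrative:

```bash
# Sketch: minimal evidence audit — for each PRD, report which links
# in the theme/experiment/OKR chain exist. Output format illustrative.
notes=$(mktemp -d)
cat > "$notes/prd-mobile-companion.md" <<'EOF'
---
related_themes: ["[[theme-mobile-access]]"]
related_experiments: []
related_okrs: []
---
EOF

for prd in "$notes"/*.md; do
  name=$(basename "$prd" .md)
  for field in related_themes related_experiments related_okrs; do
    # A field counts as present only if it contains a [[wiki-link]]
    if grep -q "^$field:.*\[\[" "$prd"; then
      echo "$name $field: present"
    else
      echo "$name $field: MISSING"
    fi
  done
done
```
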
---

Topics:
- [[domain-compositions]]
|