memorix 0.6.5 → 0.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,33 @@
 
 All notable changes to this project will be documented in this file.
 
+## [0.7.0] — 2026-02-21
+
+### Added
+- **Memory-Driven Skills Engine** (`memorix_skills` MCP tool):
+  - `list` — Discover all `SKILL.md` files across 7 agent directories
+  - `generate` — Auto-generate project-specific skills from observation patterns (gotchas, decisions, how-it-works)
+  - `inject` — Return full skill content directly to agent context
+  - Intelligent scoring: requires skill-worthy observation types, not just volume
+  - Write to any target agent with `write: true, target: "<agent>"`
+- **Transformers.js Embedding Provider**:
+  - Pure JavaScript fallback (`@huggingface/transformers`) — no native deps required
+  - Provider chain: `fastembed` → `transformers.js` → fulltext-only
+  - Quantized model (`q8`) for small footprint
+- **Dashboard Enhancements**:
+  - Canvas donut chart for observation type distribution
+  - Embedding provider status card (enabled/provider/dimensions)
+  - Search result highlighting with `<mark>` tags
+- **17 new tests** for Skills Engine (list, generate, inject, write, scoring, dedup)
+
+### Changed
+- Scoring algorithm requires at least one skill-worthy type (gotcha/decision/how-it-works/problem-solution/trade-off) — pure discovery/what-changed entities won't generate skills
+- Volume bonus reduced from 2×obs to 1×obs (capped at 5) to favor quality over quantity
+- Type diversity bonus increased from 2 to 3 points per unique skill-worthy type
+
+### Fixed
+- 422 tests passing (up from 405), 34 test files, zero regressions
+
 ## [0.5.0] — 2026-02-15
 
 ### Added
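The scoring changes listed under "Changed" above can be sketched as follows. This is a minimal reconstruction from the changelog notes, not the shipped implementation: the `scoreCluster` name and the minimum-observation gate mirror the bundled engine, while the additional per-gotcha and per-fact bonuses the engine also applies are omitted here for clarity.

```javascript
// Sketch of the 0.7.0 cluster scoring described in the changelog above.
const SKILL_WORTHY = new Set([
  "gotcha", "decision", "how-it-works", "problem-solution", "trade-off",
]);
const MIN_OBS_FOR_SKILL = 3;

function scoreCluster(observations) {
  // Gate 1: too few observations never produces a skill.
  if (observations.length < MIN_OBS_FOR_SKILL) return 0;
  const types = new Set(observations.map((o) => o.type));
  // Gate 2: at least one skill-worthy type is required (new in 0.7.0).
  if (![...types].some((t) => SKILL_WORTHY.has(t))) return 0;
  let score = 0;
  // Volume bonus: 1 point per observation, capped at 5 (was 2x per obs).
  score += Math.min(observations.length, 5);
  // Diversity bonus: 3 points per unique skill-worthy type (was 2).
  for (const t of types) if (SKILL_WORTHY.has(t)) score += 3;
  return score;
}

console.log(scoreCluster([
  { type: "gotcha" }, { type: "decision" }, { type: "discovery" },
])); // → 9 (3 volume + 3 + 3 diversity)
```

Note how the gates make the score quality-driven: a cluster of ten pure `discovery` observations still scores 0, while three observations with one gotcha and one decision already clear the threshold.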
package/README.md CHANGED
@@ -97,6 +97,7 @@ Then use `"command": "memorix"` instead of `"command": "npx"` in your config.
 - **MCP Config Migration** — Detect and migrate MCP server configs (merges — never overwrites)
 - **Rules Sync** — Scan → Deduplicate → Conflict detection → Cross-format generation
 - **Skills & Workflows** — Copy skill folders and workflow files across agents
+- **Memory-Driven Skills** — `memorix_skills` auto-generates project-specific `SKILL.md` from observation patterns (gotchas, decisions, how-it-works)
 - **Apply with Safety** — Backup `.bak` → Atomic write → Auto-rollback on failure
 
 ### 🔒 Project Isolation
@@ -112,8 +113,11 @@ Then use `"command": "memorix"` instead of `"command": "npx"` in your config.
 - **Web Dashboard** — `memorix_dashboard` opens a beautiful web UI at `http://localhost:3210`
 - **Project Switcher** — Dropdown to view any project's data without switching IDEs
 - **Knowledge Graph** — Interactive visualization of entities and relations
+- **Type Distribution Chart** — Canvas donut chart showing observation type breakdown
+- **Embedding Status** — Real-time display of vector search provider status (enabled/provider/dimensions)
 - **Retention Scores** — Exponential decay scoring with immunity status
-- **Observation Management** — Expand/collapse details, search, delete with confirmation, data export
+- **Observation Management** — Expand/collapse details, **search with text highlighting**, delete with confirmation, data export
+- **Batch Cleanup** — Auto-detect and bulk-delete low-quality observations
 - **Light/Dark Theme** — Premium glassmorphism design, bilingual (EN/中文)
 
 ### 🪝 Auto-Memory Hooks
@@ -327,15 +331,26 @@ Files: ["src/auth/jwt.ts", "src/config.ts"]
 
 ## 🔮 Optional: Vector Search
 
-Install `fastembed` for hybrid (BM25 + semantic) search:
+Memorix supports **hybrid search** (BM25 + semantic vectors) with a provider priority chain:
+
+| Priority | Provider | How to Enable | Notes |
+|----------|----------|---------------|-------|
+| 1st | `fastembed` | `npm install -g fastembed` | Fastest, native ONNX bindings |
+| 2nd | `transformers.js` | `npm install -g @huggingface/transformers` | Pure JS/WASM, cross-platform |
+| Fallback | Full-text (BM25) | Always available | Already very effective for code |
 
 ```bash
+# Option A: Native speed (recommended if it installs cleanly)
 npm install -g fastembed
+
+# Option B: Universal compatibility (works everywhere, no native deps)
+npm install -g @huggingface/transformers
 ```
 
-- **Without it** — BM25 full-text search (already very effective for code)
-- **With it** — Queries like "authentication" also match "login flow" via semantic similarity
-- Local ONNX inference, zero API calls, zero privacy risk
+- **Without either** — BM25 full-text search works great out of the box
+- **With any provider** — Queries like "authentication" also match "login flow" via semantic similarity
+- Both run **locally**: zero API calls, zero privacy risk, zero cost
+- The dashboard shows which provider is active in real-time
 
 ---
 
@@ -379,7 +394,8 @@ src/
 ├── memory/     # Graph, observations, retention, entity extraction
 ├── store/      # Orama search engine + disk persistence
 ├── compact/    # 3-layer Progressive Disclosure engine
-├── embedding/  # Optional fastembed vector provider
+├── embedding/  # Vector providers (fastembed → transformers.js → fallback)
+├── skills/     # Memory-driven project skills engine (list → generate → inject)
 ├── hooks/      # Auto-memory hooks (normalizer + pattern detector)
 ├── workspace/  # Cross-agent MCP/workflow/skills sync
 ├── rules/      # Cross-agent rules sync (7 adapters)
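The provider priority chain from the Vector Search section boils down to try-in-order dynamic imports: attempt each optional dependency, take the first that loads, and fall back to full-text search when none do. A minimal sketch follows; the `pickProvider` helper is hypothetical (the real bundle wires this logic through `getEmbeddingProvider` in `src/embedding/provider.ts`):

```javascript
// Try each candidate provider in priority order; return the first whose
// loader succeeds, or null to signal "full-text (BM25) only".
async function pickProvider(candidates) {
  for (const { name, load } of candidates) {
    try {
      await load(); // throws if the optional dependency is not installed
      return name;
    } catch {
      // Missing or broken dependency: keep falling through the chain.
    }
  }
  return null;
}

// The chain from the table above: native fastembed first, then the pure-JS
// Transformers.js fallback. Both loaders are optional global installs.
const chain = [
  { name: "fastembed", load: () => import("fastembed") },
  { name: "transformers-minilm", load: () => import("@huggingface/transformers") },
];
```

The design choice here is that failure is silent and ordered: a broken native install never blocks startup, it just demotes the process to the next provider, which is why the dashboard's status card is useful for seeing which one actually won.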
package/dist/cli/index.js CHANGED
@@ -476,7 +476,100 @@ var init_fastembed_provider = __esm({
   }
 });
 
+// src/embedding/transformers-provider.ts
+var transformers_provider_exports = {};
+__export(transformers_provider_exports, {
+  TransformersProvider: () => TransformersProvider
+});
+var cache2, MAX_CACHE_SIZE2, TransformersProvider;
+var init_transformers_provider = __esm({
+  "src/embedding/transformers-provider.ts"() {
+    "use strict";
+    init_esm_shims();
+    cache2 = /* @__PURE__ */ new Map();
+    MAX_CACHE_SIZE2 = 5e3;
+    TransformersProvider = class _TransformersProvider {
+      name = "transformers-minilm";
+      dimensions = 384;
+      extractor;
+      // Pipeline instance
+      constructor(extractor) {
+        this.extractor = extractor;
+      }
+      /**
+       * Initialize the Transformers.js provider.
+       * Downloads model on first use (~22MB quantized), cached locally after.
+       */
+      static async create() {
+        const { pipeline } = await import("@huggingface/transformers");
+        const extractor = await pipeline(
+          "feature-extraction",
+          "Xenova/all-MiniLM-L6-v2",
+          { dtype: "q8" }
+          // Quantized for small footprint
+        );
+        return new _TransformersProvider(extractor);
+      }
+      async embed(text) {
+        const cached = cache2.get(text);
+        if (cached) return cached;
+        const output = await this.extractor(text, {
+          pooling: "mean",
+          normalize: true
+        });
+        const result = Array.from(output.tolist()[0]);
+        if (result.length !== this.dimensions) {
+          throw new Error(`Expected ${this.dimensions}d embedding, got ${result.length}d`);
+        }
+        this.cacheSet(text, result);
+        return result;
+      }
+      async embedBatch(texts) {
+        const results = new Array(texts.length);
+        const uncachedIndices = [];
+        const uncachedTexts = [];
+        for (let i = 0; i < texts.length; i++) {
+          const cached = cache2.get(texts[i]);
+          if (cached) {
+            results[i] = cached;
+          } else {
+            uncachedIndices.push(i);
+            uncachedTexts.push(texts[i]);
+          }
+        }
+        if (uncachedTexts.length > 0) {
+          const output = await this.extractor(uncachedTexts, {
+            pooling: "mean",
+            normalize: true
+          });
+          const allVecs = output.tolist();
+          for (let i = 0; i < allVecs.length; i++) {
+            const vec = Array.from(allVecs[i]);
+            const originalIdx = uncachedIndices[i];
+            results[originalIdx] = vec;
+            this.cacheSet(uncachedTexts[i], vec);
+          }
+        }
+        return results;
+      }
+      cacheSet(key, value) {
+        if (cache2.size >= MAX_CACHE_SIZE2) {
+          const firstKey = cache2.keys().next().value;
+          if (firstKey !== void 0) cache2.delete(firstKey);
+        }
+        cache2.set(key, value);
+      }
+    };
+  }
+});
+
 // src/embedding/provider.ts
+var provider_exports = {};
+__export(provider_exports, {
+  getEmbeddingProvider: () => getEmbeddingProvider,
+  isVectorSearchAvailable: () => isVectorSearchAvailable,
+  resetProvider: () => resetProvider
+});
 async function getEmbeddingProvider() {
   if (initialized) return provider;
   initialized = true;
@@ -487,9 +580,24 @@ async function getEmbeddingProvider() {
     return provider;
   } catch {
   }
+  try {
+    const { TransformersProvider: TransformersProvider2 } = await Promise.resolve().then(() => (init_transformers_provider(), transformers_provider_exports));
+    provider = await TransformersProvider2.create();
+    console.error(`[memorix] Embedding provider: ${provider.name} (${provider.dimensions}d)`);
+    return provider;
+  } catch {
+  }
   console.error("[memorix] No embedding provider available \u2014 using fulltext search only");
   return null;
 }
+async function isVectorSearchAvailable() {
+  const p3 = await getEmbeddingProvider();
+  return p3 !== null;
+}
+function resetProvider() {
+  provider = null;
+  initialized = false;
+}
 var provider, initialized;
 var init_provider = __esm({
   "src/embedding/provider.ts"() {
@@ -3749,6 +3857,320 @@ var init_retention = __esm({
   }
 });
 
+// src/skills/engine.ts
+var engine_exports = {};
+__export(engine_exports, {
+  SkillsEngine: () => SkillsEngine
+});
+import { existsSync as existsSync5, readFileSync as readFileSync3, writeFileSync as writeFileSync2, mkdirSync as mkdirSync3, readdirSync as readdirSync2 } from "fs";
+import { join as join11 } from "path";
+import { homedir as homedir10 } from "os";
+var SKILLS_DIRS, SKILL_WORTHY_TYPES, MIN_OBS_FOR_SKILL, MIN_SCORE_FOR_SKILL, SkillsEngine;
+var init_engine3 = __esm({
+  "src/skills/engine.ts"() {
+    "use strict";
+    init_esm_shims();
+    SKILLS_DIRS = {
+      codex: [".codex/skills", ".agents/skills"],
+      cursor: [".cursor/skills", ".cursor/skills-cursor"],
+      windsurf: [".windsurf/skills"],
+      "claude-code": [".claude/skills"],
+      copilot: [".github/skills", ".copilot/skills"],
+      antigravity: [".agent/skills", ".gemini/skills", ".gemini/antigravity/skills"],
+      kiro: [".kiro/skills"]
+    };
+    SKILL_WORTHY_TYPES = /* @__PURE__ */ new Set([
+      "gotcha",
+      "decision",
+      "how-it-works",
+      "problem-solution",
+      "trade-off"
+    ]);
+    MIN_OBS_FOR_SKILL = 3;
+    MIN_SCORE_FOR_SKILL = 5;
+    SkillsEngine = class {
+      constructor(projectRoot, options) {
+        this.projectRoot = projectRoot;
+        this.skipGlobal = options?.skipGlobal ?? false;
+      }
+      skipGlobal;
+      // ============================================================
+      // List: Discover all available skills
+      // ============================================================
+      /**
+       * List all available skills from all agents + generated suggestions.
+       */
+      listSkills() {
+        const skills = [];
+        const seen = /* @__PURE__ */ new Set();
+        const home = homedir10();
+        for (const [agent, dirs] of Object.entries(SKILLS_DIRS)) {
+          for (const dir of dirs) {
+            const paths = [join11(this.projectRoot, dir)];
+            if (!this.skipGlobal) {
+              paths.push(join11(home, dir));
+            }
+            for (const skillsRoot of paths) {
+              if (!existsSync5(skillsRoot)) continue;
+              try {
+                const entries = readdirSync2(skillsRoot, { withFileTypes: true });
+                for (const entry of entries) {
+                  if (!entry.isDirectory()) continue;
+                  const name = entry.name;
+                  if (seen.has(name)) continue;
+                  const skillMd = join11(skillsRoot, name, "SKILL.md");
+                  if (!existsSync5(skillMd)) continue;
+                  try {
+                    const content = readFileSync3(skillMd, "utf-8");
+                    const description = this.parseDescription(content);
+                    skills.push({
+                      name,
+                      description,
+                      sourcePath: join11(skillsRoot, name),
+                      sourceAgent: agent,
+                      content,
+                      generated: false
+                    });
+                    seen.add(name);
+                  } catch {
+                  }
+                }
+              } catch {
+              }
+            }
+          }
+        }
+        return skills;
+      }
+      // ============================================================
+      // Generate: Create skills from observation patterns
+      // ============================================================
+      /**
+       * Analyze observations and generate SKILL.md content for entities with
+       * rich knowledge accumulation.
+       */
+      generateFromObservations(observations2) {
+        const clusters = this.clusterByEntity(observations2);
+        for (const cluster of clusters.values()) {
+          cluster.score = this.scoreCluster(cluster);
+        }
+        const results = [];
+        const sortedClusters = [...clusters.values()].filter((c) => c.score >= MIN_SCORE_FOR_SKILL).sort((a, b) => b.score - a.score).slice(0, 10);
+        for (const cluster of sortedClusters) {
+          const skill = this.clusterToSkill(cluster);
+          if (skill) results.push(skill);
+        }
+        return results;
+      }
+      /**
+       * Write a generated skill to the target agent's skills directory.
+       */
+      writeSkill(skill, target) {
+        const dirs = SKILLS_DIRS[target];
+        if (!dirs || dirs.length === 0) return null;
+        const targetDir = join11(this.projectRoot, dirs[0], skill.name);
+        try {
+          mkdirSync3(targetDir, { recursive: true });
+          writeFileSync2(join11(targetDir, "SKILL.md"), skill.content, "utf-8");
+          return join11(dirs[0], skill.name, "SKILL.md");
+        } catch {
+          return null;
+        }
+      }
+      // ============================================================
+      // Inject: Return skill content for direct agent consumption
+      // ============================================================
+      /**
+       * Get full content of a skill by name (for direct injection).
+       */
+      injectSkill(name) {
+        const all = this.listSkills();
+        return all.find((s) => s.name.toLowerCase() === name.toLowerCase()) || null;
+      }
+      // ============================================================
+      // Internal helpers
+      // ============================================================
+      parseDescription(content) {
+        const match = content.match(/^---[\s\S]*?description:\s*["']?(.+?)["']?\s*$/m);
+        return match ? match[1] : "";
+      }
+      clusterByEntity(observations2) {
+        const clusters = /* @__PURE__ */ new Map();
+        for (const obs of observations2) {
+          const entity = obs.entityName || "unknown";
+          let cluster = clusters.get(entity);
+          if (!cluster) {
+            cluster = { entity, observations: [], types: /* @__PURE__ */ new Set(), score: 0 };
+            clusters.set(entity, cluster);
+          }
+          cluster.observations.push(obs);
+          cluster.types.add(obs.type);
+        }
+        return clusters;
+      }
+      scoreCluster(cluster) {
+        let score = 0;
+        const obs = cluster.observations;
+        if (obs.length < MIN_OBS_FOR_SKILL) return 0;
+        let hasSkillWorthyType = false;
+        for (const type of cluster.types) {
+          if (SKILL_WORTHY_TYPES.has(type)) {
+            hasSkillWorthyType = true;
+            break;
+          }
+        }
+        if (!hasSkillWorthyType) return 0;
+        score += Math.min(obs.length, 5);
+        for (const type of cluster.types) {
+          if (SKILL_WORTHY_TYPES.has(type)) score += 3;
+        }
+        const gotchas = obs.filter((o) => o.type === "gotcha").length;
+        score += gotchas * 3;
+        const decisions = obs.filter((o) => o.type === "decision").length;
+        score += decisions * 2;
+        const totalFacts = obs.reduce((sum, o) => sum + (o.facts?.length || 0), 0);
+        score += Math.min(totalFacts, 5);
+        const totalFiles = new Set(obs.flatMap((o) => o.filesModified || [])).size;
+        score += Math.min(totalFiles, 5);
+        return score;
+      }
+      clusterToSkill(cluster) {
+        const { entity, observations: observations2 } = cluster;
+        const safeName = entity.replace(/[^a-zA-Z0-9_-]/g, "-").toLowerCase();
+        const gotchas = observations2.filter((o) => o.type === "gotcha");
+        const decisions = observations2.filter((o) => o.type === "decision");
+        const howItWorks = observations2.filter((o) => o.type === "how-it-works");
+        const problems = observations2.filter((o) => o.type === "problem-solution");
+        const tradeoffs = observations2.filter((o) => o.type === "trade-off");
+        const others = observations2.filter(
+          (o) => !["gotcha", "decision", "how-it-works", "problem-solution", "trade-off"].includes(o.type)
+        );
+        const allFacts = [...new Set(observations2.flatMap((o) => o.facts || []))];
+        const allConcepts = [...new Set(observations2.flatMap((o) => o.concepts || []))];
+        const allFiles = [...new Set(observations2.flatMap((o) => o.filesModified || []))];
+        const lines = [];
+        const description = this.generateDescription(cluster);
+        lines.push("---");
+        lines.push(`description: ${description}`);
+        lines.push("---");
+        lines.push("");
+        lines.push(`# ${entity}`);
+        lines.push("");
+        lines.push(`> Auto-generated from ${observations2.length} project observations by Memorix.`);
+        lines.push("> Adapt to your actual project context before relying on this skill.");
+        lines.push("");
+        if (allFiles.length > 0) {
+          lines.push("## Key Files");
+          lines.push("");
+          for (const f of allFiles.slice(0, 15)) {
+            lines.push(`- \`${f}\``);
+          }
+          lines.push("");
+        }
+        if (gotchas.length > 0) {
+          lines.push("## \u26A0\uFE0F Critical Gotchas");
+          lines.push("");
+          for (const g of gotchas) {
+            lines.push(`### ${g.title}`);
+            if (g.narrative) lines.push("", g.narrative);
+            if (g.facts && g.facts.length > 0) {
+              lines.push("", ...g.facts.map((f) => `- ${f}`));
+            }
+            lines.push("");
+          }
+        }
+        if (decisions.length > 0) {
+          lines.push("## \u{1F3D7}\uFE0F Architecture Decisions");
+          lines.push("");
+          for (const d of decisions) {
+            lines.push(`### ${d.title}`);
+            if (d.narrative) lines.push("", d.narrative);
+            if (d.facts && d.facts.length > 0) {
+              lines.push("", ...d.facts.map((f) => `- ${f}`));
+            }
+            lines.push("");
+          }
+        }
+        if (howItWorks.length > 0) {
+          lines.push("## \u{1F4D6} How It Works");
+          lines.push("");
+          for (const h of howItWorks) {
+            lines.push(`### ${h.title}`);
+            if (h.narrative) lines.push("", h.narrative);
+            lines.push("");
+          }
+        }
+        if (problems.length > 0) {
+          lines.push("## \u{1F527} Common Problems & Solutions");
+          lines.push("");
+          for (const p3 of problems) {
+            lines.push(`### ${p3.title}`);
+            if (p3.narrative) lines.push("", p3.narrative);
+            if (p3.facts && p3.facts.length > 0) {
+              lines.push("", ...p3.facts.map((f) => `- ${f}`));
+            }
+            lines.push("");
+          }
+        }
+        if (tradeoffs.length > 0) {
+          lines.push("## \u2696\uFE0F Trade-offs");
+          lines.push("");
+          for (const t of tradeoffs) {
+            lines.push(`### ${t.title}`);
+            if (t.narrative) lines.push("", t.narrative);
+            lines.push("");
+          }
+        }
+        if (others.length > 0) {
+          lines.push("## \u{1F4DD} Notes");
+          lines.push("");
+          for (const o of others.slice(0, 5)) {
+            lines.push(`- **${o.title}**: ${o.narrative?.split("\n")[0] || ""}`);
+          }
+          lines.push("");
+        }
+        if (allConcepts.length > 0) {
+          lines.push("## \u{1F3F7}\uFE0F Related Concepts");
+          lines.push("");
+          lines.push(allConcepts.map((c) => `\`${c}\``).join(", "));
+          lines.push("");
+        }
+        if (allFacts.length > 0) {
+          lines.push("## \u{1F4CC} Quick Facts");
+          lines.push("");
+          for (const f of allFacts.slice(0, 15)) {
+            lines.push(`- ${f}`);
+          }
+          lines.push("");
+        }
+        const content = lines.join("\n");
+        return {
+          name: safeName,
+          description,
+          sourcePath: "",
+          sourceAgent: "codex",
+          // generated skills follow SKILL.md standard
+          content,
+          generated: true
+        };
+      }
+      generateDescription(cluster) {
+        const parts = [];
+        const typeCounts = {};
+        for (const obs of cluster.observations) {
+          typeCounts[obs.type] = (typeCounts[obs.type] || 0) + 1;
+        }
+        if (typeCounts["gotcha"]) parts.push(`${typeCounts["gotcha"]} gotcha(s)`);
+        if (typeCounts["decision"]) parts.push(`${typeCounts["decision"]} decision(s)`);
+        if (typeCounts["how-it-works"]) parts.push(`${typeCounts["how-it-works"]} explanation(s)`);
+        if (typeCounts["problem-solution"]) parts.push(`${typeCounts["problem-solution"]} fix(es)`);
+        const summary = parts.length > 0 ? parts.join(", ") : `${cluster.observations.length} observations`;
+        return `Project patterns for ${cluster.entity}: ${summary}`;
+      }
+    };
+  }
+});
+
 // src/dashboard/server.ts
 var server_exports = {};
 __export(server_exports, {
@@ -3836,13 +4258,26 @@ async function handleApi(req, res, dataDir, projectId, projectName, baseDir) {
       typeCounts[t] = (typeCounts[t] || 0) + 1;
     }
     const sorted = [...observations2].sort((a, b) => (b.id || 0) - (a.id || 0)).slice(0, 10);
+    let embeddingStatus = { enabled: false, provider: "", dimensions: 0 };
+    try {
+      const { isEmbeddingEnabled: isEmbeddingEnabled2 } = await Promise.resolve().then(() => (init_orama_store(), orama_store_exports));
+      const { getEmbeddingProvider: getEmbeddingProvider2 } = await Promise.resolve().then(() => (init_provider(), provider_exports));
+      const provider2 = await getEmbeddingProvider2();
+      embeddingStatus = {
+        enabled: isEmbeddingEnabled2(),
+        provider: provider2?.name || "",
+        dimensions: provider2?.dimensions || 0
+      };
+    } catch {
+    }
     sendJson(res, {
       entities: graph.entities.length,
       relations: graph.relations.length,
       observations: observations2.length,
       nextId: nextId2,
       typeCounts,
-      recentObservations: sorted
+      recentObservations: sorted,
+      embedding: embeddingStatus
     });
     break;
   }
@@ -4643,6 +5078,114 @@ Entity: ${entityName} | Type: ${type} | Project: ${project.id}${enrichment}`
     };
   }
 );
+server.registerTool(
+  "memorix_skills",
+  {
+    title: "Project Skills",
+    description: `Memory-driven project skills. Action "list": show all available skills from all agents. Action "generate": auto-generate project-specific skills from observation patterns (gotchas, decisions, how-it-works). Action "inject": return a specific skill's full content for direct use. Generated skills follow the SKILL.md standard and can be synced across agents.`,
+    inputSchema: {
+      action: z.enum(["list", "generate", "inject"]).describe('Action: "list" to discover skills, "generate" to create from memory, "inject" to get skill content'),
+      name: z.string().optional().describe('Skill name (required for "inject")'),
+      target: z.enum(AGENT_TARGETS).optional().describe('Target agent to write generated skills to (optional for "generate")'),
+      write: z.boolean().optional().describe("Whether to write generated skills to disk (default: false, preview only)")
+    }
+  },
+  async ({ action, name, target, write }) => {
+    const { SkillsEngine: SkillsEngine2 } = await Promise.resolve().then(() => (init_engine3(), engine_exports));
+    const engine = new SkillsEngine2(project.rootPath);
+    if (action === "list") {
+      const skills = engine.listSkills();
+      if (skills.length === 0) {
+        return {
+          content: [{ type: "text", text: 'No skills found in any agent directory.\n\nSkills are discovered from:\n- `.cursor/skills/*/SKILL.md`\n- `.agents/skills/*/SKILL.md`\n- `.agent/skills/*/SKILL.md`\n- `.windsurf/skills/*/SKILL.md`\n- etc.\n\nUse action "generate" to auto-create skills from your project observations.' }]
+        };
+      }
+      const lines2 = [
+        `## Available Skills (${skills.length})`,
+        ""
+      ];
+      for (const sk of skills) {
+        lines2.push(`- **${sk.name}** (${sk.sourceAgent}): ${sk.description || "(no description)"}`);
+      }
+      lines2.push("", '> Use `action: "inject", name: "<skill-name>"` to get full skill content.');
+      return {
+        content: [{ type: "text", text: lines2.join("\n") }]
+      };
+    }
+    if (action === "inject") {
+      if (!name) {
+        return {
+          content: [{ type: "text", text: 'Error: `name` is required for inject action. Use `action: "list"` first to see available skills.' }],
+          isError: true
+        };
+      }
+      const skill = engine.injectSkill(name);
+      if (!skill) {
+        return {
+          content: [{ type: "text", text: `Skill "${name}" not found. Use \`action: "list"\` to see available skills.` }],
+          isError: true
+        };
+      }
+      return {
+        content: [{ type: "text", text: `## Skill: ${skill.name}
+**Source**: ${skill.sourceAgent}
+**Path**: ${skill.sourcePath}
+
+---
+
+${skill.content}` }]
+      };
+    }
+    const { loadObservationsJson: loadObservationsJson2 } = await Promise.resolve().then(() => (init_persistence(), persistence_exports));
+    const allObs = await loadObservationsJson2(projectDir2);
+    const obsData = allObs.map((o) => ({
+      id: o.id || 0,
+      entityName: o.entityName || "unknown",
+      type: o.type || "discovery",
+      title: o.title || "",
+      narrative: o.narrative || "",
+      facts: o.facts,
+      concepts: o.concepts,
+      filesModified: o.filesModified,
+      createdAt: o.createdAt
+    }));
+    const generated = engine.generateFromObservations(obsData);
+    if (generated.length === 0) {
+      return {
+        content: [{ type: "text", text: "No skill-worthy patterns found yet.\n\nSkills are auto-generated when entities accumulate enough observations (3+), especially gotchas, decisions, and how-it-works notes.\n\nKeep using memorix_store to build up project knowledge!" }]
+      };
+    }
+    const lines = [
+      `## Generated Skills (${generated.length})`,
+      "",
+      "Based on observation patterns in your project memory:",
+      ""
+    ];
+    for (const sk of generated) {
+      lines.push(`### ${sk.name}`);
+      lines.push(`- **Description**: ${sk.description}`);
+      lines.push(`- **Observations**: ${sk.content.split("\n").length} lines of knowledge`);
+      if (write && target) {
+        const path8 = engine.writeSkill(sk, target);
+        if (path8) {
+          lines.push(`- \u2705 **Written**: \`${path8}\``);
+        } else {
+          lines.push(`- \u274C Failed to write`);
+        }
+      }
+      lines.push("");
+    }
+    if (!write) {
+      lines.push('> Preview only. Add `write: true, target: "<agent>"` to save skills to disk.');
+    }
+    if (generated.length > 0) {
+      lines.push("", "---", "### Preview: " + generated[0].name, "", "```markdown", generated[0].content, "```");
+    }
+    return {
+      content: [{ type: "text", text: lines.join("\n") }]
+    };
+  }
+);
 let dashboardRunning = false;
 server.registerTool(
   "memorix_dashboard",