@utilarium/overcontext 0.0.6 → 0.0.7-dev.20260220025748.d61df74

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,6 +2,34 @@
 
  > Schema-driven framework for context management
 
+ ## Why Overcontext?
+
+ The world is waking up to the idea that **context is king** when interacting with Large Language Models. As Andrej Karpathy put it, "context engineering" is "the delicate art and science of filling the context window with just the right information." The industry is shifting from prompt engineering to context engineering -- and the tooling ecosystem is racing to catch up.
+
+ The current landscape is converging on a common pattern: **large collections of Markdown files**. Whether it's [CLAUDE.md](https://docs.anthropic.com/en/docs/claude-code/memory) for Claude Code, [AGENTS.md](https://github.com/agentsmd/agents.md) as an open standard under the Linux Foundation, [.cursor/rules/](https://docs.cursor.com/context/rules-for-ai) for Cursor, or [copilot-instructions.md](https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions) for GitHub Copilot -- the default answer to "how do I give my AI more context?" is always the same: write more Markdown.
+
+ Markdown is a fine starting point, but it has limits. It's unstructured, unsearchable at the entity level, hard to layer across scopes, and impossible to validate. As systems move toward more agentic interaction patterns -- where AI agents dynamically search for and compose context at runtime -- we need something more structured.
+
+ **Overcontext is a reaction to that.** It started as a strategy for managing structured context files about people, projects, and terminology to support transcription tools like [Protokoll](https://github.com/redaksjon/protokoll) and others still under development. Protokoll transforms voice memos into organized, context-aware notes by maintaining a knowledge base of people, projects, and organizations as YAML files -- using that context to correct proper names, classify content, and route transcriptions to the right places. As Protokoll's context needs grew, it became clear that a general-purpose framework was needed: one that could not only *store* context entities but *retrieve*, *search*, *layer*, and *validate* them with the rigor that agentic systems demand.
+
+ Overcontext provides that framework. It gives you schema-validated, type-safe context entities with hierarchical discovery, namespace isolation, and a query API -- the building blocks for context engineering that goes beyond flat files.
+
+ ### How Overcontext Compares
+
+ The context management landscape is broad. Here's where Overcontext fits:
+
+ | Approach | Examples | What They Do | What's Missing |
+ |---|---|---|---|
+ | **Agent instruction files** | CLAUDE.md, AGENTS.md, .cursorrules | Behavioral instructions for AI agents | No entity model, no search, no validation |
+ | **Codebase-to-prompt tools** | [Repomix](https://github.com/yamadashy/repomix), [files-to-prompt](https://github.com/simonw/files-to-prompt), [yek](https://github.com/mohsen1/yek) | Serialize code into LLM-friendly formats | One-shot dumps, no structured entities or queries |
+ | **AI memory services** | [Mem0](https://mem0.ai), [Letta](https://letta.com), [Zep](https://getzep.com) | Persistent memory via embeddings and knowledge graphs | Cloud-dependent, opaque storage, not file-based |
+ | **MCP servers** | [Model Context Protocol](https://modelcontextprotocol.io) | Standardized protocol for connecting AI to data sources | A transport layer, not a storage or schema framework |
+ | **Overcontext** | This library | Schema-validated entities with hierarchical discovery, namespaces, and search | Designed to be the structured layer *underneath* all of the above |
+
+ Overcontext isn't trying to replace Markdown instruction files or MCP -- it's the structured entity layer that those systems can build on. Your `CLAUDE.md` can reference Overcontext entities. Your MCP server can serve them. Your CLI tools can query them. The context you maintain is schema-validated, version-controllable, and yours.
+
+ ---
+
  Overcontext provides infrastructure for defining and managing custom entity schemas. Unlike a library with predefined types, overcontext lets you define your own entity schemas using Zod and provides the storage, validation, discovery, and CLI building blocks to work with them.
 
  ## Features
package/dist/api/slug.js CHANGED
@@ -37,6 +37,15 @@
  * Generate a unique ID, appending a number if needed.
  */ const generateUniqueId = async (baseName, exists)=>{
  const baseId = slugify(baseName);
+ if (!baseId) {
+ // slugify produced an empty string (input was all special characters).
+ // Fall back to a random ID to avoid creating entities with empty IDs.
+ const fallbackId = `entity-${Date.now()}-${Math.random().toString(36).substring(2, 8)}`;
+ if (!await exists(fallbackId)) {
+ return fallbackId;
+ }
+ return `${fallbackId}-${Math.random().toString(36).substring(2, 8)}`;
+ }
  if (!await exists(baseId)) {
  return baseId;
  }
@@ -47,8 +56,9 @@
  return candidateId;
  }
  }
- // Fallback to timestamp
- return `${baseId}-${Date.now()}`;
+ // Fallback to timestamp + random suffix to reduce predictability and
+ // avoid collisions in concurrent environments.
+ return `${baseId}-${Date.now()}-${Math.random().toString(36).substring(2, 8)}`;
  };
 
  export { generateUniqueId, slugify };
@@ -1 +1 @@
- {"version":3,"file":"slug.js","sources":["../../src/api/slug.ts"],"sourcesContent":["/**\n * Generate a slug from a name.\n * \"John Doe\" -> \"john-doe\"\n * \"API Reference\" -> \"api-reference\"\n */\nexport const slugify = (name: string): string => {\n const cleaned = name\n .toLowerCase()\n .trim()\n .replace(/[^\\w\\s-]/g, '') // Remove non-word chars\n .replace(/[\\s_]+/g, '-'); // Replace spaces/underscores\n \n // Single-pass algorithm to collapse hyphens and trim (avoid ReDoS)\n const result: string[] = [];\n let prevWasHyphen = false;\n let startIdx = 0;\n let endIdx = cleaned.length;\n \n // Skip leading hyphens\n while (startIdx < cleaned.length && cleaned[startIdx] === '-') {\n startIdx++;\n }\n \n // Skip trailing hyphens\n while (endIdx > startIdx && cleaned[endIdx - 1] === '-') {\n endIdx--;\n }\n \n // Build result, collapsing consecutive hyphens\n for (let i = startIdx; i < endIdx; i++) {\n const char = cleaned[i];\n if (char === '-') {\n if (!prevWasHyphen) {\n result.push('-');\n prevWasHyphen = true;\n }\n } else {\n result.push(char);\n prevWasHyphen = false;\n }\n }\n \n return result.join('');\n};\n\n/**\n * Generate a unique ID, appending a number if needed.\n */\nexport const generateUniqueId = async (\n baseName: string,\n exists: (id: string) => Promise<boolean>\n): Promise<string> => {\n const baseId = slugify(baseName);\n\n if (!await exists(baseId)) {\n return baseId;\n }\n\n // Try appending numbers\n for (let i = 2; i <= 100; i++) {\n const candidateId = `${baseId}-${i}`;\n if (!await exists(candidateId)) {\n return candidateId;\n }\n }\n\n // Fallback to timestamp\n return 
`${baseId}-${Date.now()}`;\n};\n"],"names":["slugify","name","cleaned","toLowerCase","trim","replace","result","prevWasHyphen","startIdx","endIdx","length","i","char","push","join","generateUniqueId","baseName","exists","baseId","candidateId","Date","now"],"mappings":"AAAA;;;;IAKO,MAAMA,OAAAA,GAAU,CAACC,IAAAA,GAAAA;IACpB,MAAMC,OAAAA,GAAUD,IAAAA,CACXE,WAAW,EAAA,CACXC,IAAI,GACJC,OAAO,CAAC,WAAA,EAAa,EAAA,CAAA;KACrBA,OAAO,CAAC,SAAA,EAAW,GAAA,CAAA,CAAA;;AAGxB,IAAA,MAAMC,SAAmB,EAAE;AAC3B,IAAA,IAAIC,aAAAA,GAAgB,KAAA;AACpB,IAAA,IAAIC,QAAAA,GAAW,CAAA;IACf,IAAIC,MAAAA,GAASP,QAAQQ,MAAM;;IAG3B,MAAOF,QAAAA,GAAWN,QAAQQ,MAAM,IAAIR,OAAO,CAACM,QAAAA,CAAS,KAAK,GAAA,CAAK;AAC3DA,QAAAA,QAAAA,EAAAA;AACJ,IAAA;;AAGA,IAAA,MAAOC,SAASD,QAAAA,IAAYN,OAAO,CAACO,MAAAA,GAAS,CAAA,CAAE,KAAK,GAAA,CAAK;AACrDA,QAAAA,MAAAA,EAAAA;AACJ,IAAA;;AAGA,IAAA,IAAK,IAAIE,CAAAA,GAAIH,QAAAA,EAAUG,CAAAA,GAAIF,QAAQE,CAAAA,EAAAA,CAAK;QACpC,MAAMC,IAAAA,GAAOV,OAAO,CAACS,CAAAA,CAAE;AACvB,QAAA,IAAIC,SAAS,GAAA,EAAK;AACd,YAAA,IAAI,CAACL,aAAAA,EAAe;AAChBD,gBAAAA,MAAAA,CAAOO,IAAI,CAAC,GAAA,CAAA;gBACZN,aAAAA,GAAgB,IAAA;AACpB,YAAA;QACJ,CAAA,MAAO;AACHD,YAAAA,MAAAA,CAAOO,IAAI,CAACD,IAAAA,CAAAA;YACZL,aAAAA,GAAgB,KAAA;AACpB,QAAA;AACJ,IAAA;IAEA,OAAOD,MAAAA,CAAOQ,IAAI,CAAC,EAAA,CAAA;AACvB;AAEA;;AAEC,IACM,MAAMC,gBAAAA,GAAmB,OAC5BC,QAAAA,EACAC,MAAAA,GAAAA;AAEA,IAAA,MAAMC,SAASlB,OAAAA,CAAQgB,QAAAA,CAAAA;IAEvB,IAAI,CAAC,MAAMC,MAAAA,CAAOC,MAAAA,CAAAA,EAAS;QACvB,OAAOA,MAAAA;AACX,IAAA;;AAGA,IAAA,IAAK,IAAIP,CAAAA,GAAI,CAAA,EAAGA,CAAAA,IAAK,KAAKA,CAAAA,EAAAA,CAAK;AAC3B,QAAA,MAAMQ,WAAAA,GAAc,CAAA,EAAGD,MAAAA,CAAO,CAAC,EAAEP,CAAAA,CAAAA,CAAG;QACpC,IAAI,CAAC,MAAMM,MAAAA,CAAOE,WAAAA,CAAAA,EAAc;YAC5B,OAAOA,WAAAA;AACX,QAAA;AACJ,IAAA;;AAGA,IAAA,OAAO,GAAGD,MAAAA,CAAO,CAAC,EAAEE,IAAAA,CAAKC,GAAG,EAAA,CAAA,CAAI;AACpC;;;;"}
+ {"version":3,"file":"slug.js","sources":["../../src/api/slug.ts"],"sourcesContent":["/**\n * Generate a slug from a name.\n * \"John Doe\" -> \"john-doe\"\n * \"API Reference\" -> \"api-reference\"\n */\nexport const slugify = (name: string): string => {\n const cleaned = name\n .toLowerCase()\n .trim()\n .replace(/[^\\w\\s-]/g, '') // Remove non-word chars\n .replace(/[\\s_]+/g, '-'); // Replace spaces/underscores\n \n // Single-pass algorithm to collapse hyphens and trim (avoid ReDoS)\n const result: string[] = [];\n let prevWasHyphen = false;\n let startIdx = 0;\n let endIdx = cleaned.length;\n \n // Skip leading hyphens\n while (startIdx < cleaned.length && cleaned[startIdx] === '-') {\n startIdx++;\n }\n \n // Skip trailing hyphens\n while (endIdx > startIdx && cleaned[endIdx - 1] === '-') {\n endIdx--;\n }\n \n // Build result, collapsing consecutive hyphens\n for (let i = startIdx; i < endIdx; i++) {\n const char = cleaned[i];\n if (char === '-') {\n if (!prevWasHyphen) {\n result.push('-');\n prevWasHyphen = true;\n }\n } else {\n result.push(char);\n prevWasHyphen = false;\n }\n }\n \n return result.join('');\n};\n\n/**\n * Generate a unique ID, appending a number if needed.\n */\nexport const generateUniqueId = async (\n baseName: string,\n exists: (id: string) => Promise<boolean>\n): Promise<string> => {\n const baseId = slugify(baseName);\n\n if (!baseId) {\n // slugify produced an empty string (input was all special characters).\n // Fall back to a random ID to avoid creating entities with empty IDs.\n const fallbackId = `entity-${Date.now()}-${Math.random().toString(36).substring(2, 8)}`;\n if (!await exists(fallbackId)) {\n return fallbackId;\n }\n return `${fallbackId}-${Math.random().toString(36).substring(2, 8)}`;\n }\n\n if (!await exists(baseId)) {\n return baseId;\n }\n\n // Try appending numbers\n for (let i = 2; i <= 100; i++) {\n const candidateId = `${baseId}-${i}`;\n if (!await exists(candidateId)) {\n return candidateId;\n }\n }\n\n // 
Fallback to timestamp + random suffix to reduce predictability and\n // avoid collisions in concurrent environments.\n return `${baseId}-${Date.now()}-${Math.random().toString(36).substring(2, 8)}`;\n};\n"],"names":["slugify","name","cleaned","toLowerCase","trim","replace","result","prevWasHyphen","startIdx","endIdx","length","i","char","push","join","generateUniqueId","baseName","exists","baseId","fallbackId","Date","now","Math","random","toString","substring","candidateId"],"mappings":"AAAA;;;;IAKO,MAAMA,OAAAA,GAAU,CAACC,IAAAA,GAAAA;IACpB,MAAMC,OAAAA,GAAUD,IAAAA,CACXE,WAAW,EAAA,CACXC,IAAI,GACJC,OAAO,CAAC,WAAA,EAAa,EAAA,CAAA;KACrBA,OAAO,CAAC,SAAA,EAAW,GAAA,CAAA,CAAA;;AAGxB,IAAA,MAAMC,SAAmB,EAAE;AAC3B,IAAA,IAAIC,aAAAA,GAAgB,KAAA;AACpB,IAAA,IAAIC,QAAAA,GAAW,CAAA;IACf,IAAIC,MAAAA,GAASP,QAAQQ,MAAM;;IAG3B,MAAOF,QAAAA,GAAWN,QAAQQ,MAAM,IAAIR,OAAO,CAACM,QAAAA,CAAS,KAAK,GAAA,CAAK;AAC3DA,QAAAA,QAAAA,EAAAA;AACJ,IAAA;;AAGA,IAAA,MAAOC,SAASD,QAAAA,IAAYN,OAAO,CAACO,MAAAA,GAAS,CAAA,CAAE,KAAK,GAAA,CAAK;AACrDA,QAAAA,MAAAA,EAAAA;AACJ,IAAA;;AAGA,IAAA,IAAK,IAAIE,CAAAA,GAAIH,QAAAA,EAAUG,CAAAA,GAAIF,QAAQE,CAAAA,EAAAA,CAAK;QACpC,MAAMC,IAAAA,GAAOV,OAAO,CAACS,CAAAA,CAAE;AACvB,QAAA,IAAIC,SAAS,GAAA,EAAK;AACd,YAAA,IAAI,CAACL,aAAAA,EAAe;AAChBD,gBAAAA,MAAAA,CAAOO,IAAI,CAAC,GAAA,CAAA;gBACZN,aAAAA,GAAgB,IAAA;AACpB,YAAA;QACJ,CAAA,MAAO;AACHD,YAAAA,MAAAA,CAAOO,IAAI,CAACD,IAAAA,CAAAA;YACZL,aAAAA,GAAgB,KAAA;AACpB,QAAA;AACJ,IAAA;IAEA,OAAOD,MAAAA,CAAOQ,IAAI,CAAC,EAAA,CAAA;AACvB;AAEA;;AAEC,IACM,MAAMC,gBAAAA,GAAmB,OAC5BC,QAAAA,EACAC,MAAAA,GAAAA;AAEA,IAAA,MAAMC,SAASlB,OAAAA,CAAQgB,QAAAA,CAAAA;AAEvB,IAAA,IAAI,CAACE,MAAAA,EAAQ;;;AAGT,QAAA,MAAMC,aAAa,CAAC,OAAO,EAAEC,IAAAA,CAAKC,GAAG,GAAG,CAAC,EAAEC,IAAAA,CAAKC,MAAM,GAAGC,QAAQ,CAAC,IAAIC,SAAS,CAAC,GAAG,CAAA,CAAA,CAAA,CAAI;QACvF,IAAI,CAAC,MAAMR,MAAAA,CAAOE,UAAAA,CAAAA,EAAa;YAC3B,OAAOA,UAAAA;AACX,QAAA;AACA,QAAA,OAAO,CAAA,EAAGA,UAAAA,CAAW,CAAC,EAAEG,IAAAA,CAAKC,MAAM,EAAA,CAAGC,QAAQ,CAAC,EAAA,CAAA,CAAIC,SAAS,CAAC,GAAG,CAAA,CAAA,CAAA,CAAI;AACxE,IAAA;IAEA,IAAI,CAAC,M
AAMR,MAAAA,CAAOC,MAAAA,CAAAA,EAAS;QACvB,OAAOA,MAAAA;AACX,IAAA;;AAGA,IAAA,IAAK,IAAIP,CAAAA,GAAI,CAAA,EAAGA,CAAAA,IAAK,KAAKA,CAAAA,EAAAA,CAAK;AAC3B,QAAA,MAAMe,WAAAA,GAAc,CAAA,EAAGR,MAAAA,CAAO,CAAC,EAAEP,CAAAA,CAAAA,CAAG;QACpC,IAAI,CAAC,MAAMM,MAAAA,CAAOS,WAAAA,CAAAA,EAAc;YAC5B,OAAOA,WAAAA;AACX,QAAA;AACJ,IAAA;;;AAIA,IAAA,OAAO,GAAGR,MAAAA,CAAO,CAAC,EAAEE,IAAAA,CAAKC,GAAG,GAAG,CAAC,EAAEC,IAAAA,CAAKC,MAAM,GAAGC,QAAQ,CAAC,IAAIC,SAAS,CAAC,GAAG,CAAA,CAAA,CAAA,CAAI;AAClF;;;;"}
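Taken together, the `slug.js` changes cover two failure modes: names that slugify to an empty string, and exhaustion of the numbered suffixes. The new behavior can be sketched standalone as below. Note the `slugify` here is a simplified stand-in that collapses hyphens with chained regexes for brevity; the published implementation uses a single-pass loop instead, to avoid ReDoS.

```javascript
// Simplified slugify: lowercase, trim, strip non-word chars, normalize
// separators. (The shipped code replaces the last two regexes with a
// single-pass loop.)
const slugify = (name) =>
  name
    .toLowerCase()
    .trim()
    .replace(/[^\w\s-]/g, '')  // remove non-word characters
    .replace(/[\s_]+/g, '-')   // spaces/underscores -> hyphens
    .replace(/-+/g, '-')       // collapse runs of hyphens
    .replace(/^-+|-+$/g, '');  // trim leading/trailing hyphens

// Mirrors generateUniqueId as it appears in the diff above.
const generateUniqueId = async (baseName, exists) => {
  const baseId = slugify(baseName);
  if (!baseId) {
    // New in 0.0.7-dev: all-special-character input no longer yields an empty ID.
    const fallbackId = `entity-${Date.now()}-${Math.random().toString(36).substring(2, 8)}`;
    if (!(await exists(fallbackId))) return fallbackId;
    return `${fallbackId}-${Math.random().toString(36).substring(2, 8)}`;
  }
  if (!(await exists(baseId))) return baseId;
  for (let i = 2; i <= 100; i++) {
    const candidateId = `${baseId}-${i}`;
    if (!(await exists(candidateId))) return candidateId;
  }
  // New in 0.0.7-dev: random suffix appended to the timestamp fallback.
  return `${baseId}-${Date.now()}-${Math.random().toString(36).substring(2, 8)}`;
};

const taken = new Set(['john-doe']);
const exists = async (id) => taken.has(id);
generateUniqueId('John Doe', exists).then((id) => console.log(id)); // john-doe-2
generateUniqueId('!!!', exists).then((id) => console.log(id.startsWith('entity-'))); // true
```

Under 0.0.6, the `'!!!'` case produced `baseId === ''` and could return an empty ID (or `-2`, `-3`, … suffixes on an empty base), which is exactly what the new guard prevents.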
package/dist/index.cjs CHANGED
@@ -411,6 +411,10 @@ class NamespaceNotFoundError extends StorageError {
  };
  };
 
+ /**
+ * Maximum number of entities allowed in a single batch operation.
+ * Prevents resource exhaustion from unbounded batch sizes.
+ */ const MAX_BATCH_SIZE = 1000;
  const createFileSystemProvider = async (options)=>{
  const { basePath, registry, createIfMissing = true, extension = '.yaml', readonly = false, defaultNamespace, filenameStrategy } = options;
  // --- Helper Functions ---
@@ -521,10 +525,8 @@ const createFileSystemProvider = async (options)=>{
  const prefix = id.substring(0, Math.min(8, id.length));
  const files = await fs__namespace.readdir(dir);
  const candidates = files.filter((f)=>(f.endsWith('.yaml') || f.endsWith('.yml')) && f.startsWith(prefix));
- if (candidates.length === 1) {
- return path__namespace.join(dir, candidates[0]);
- }
- // Multiple prefix matches — read and verify the id field
+ // Always verify the entity ID inside the file to prevent
+ // returning the wrong entity when IDs share a common prefix.
  for (const file of candidates){
  const filePath = path__namespace.join(dir, file);
  const entity = await readEntity(filePath, type);
@@ -541,8 +543,35 @@ const createFileSystemProvider = async (options)=>{
  });
  }
  };
+ // Cache the real basePath to handle symlinked temp directories (e.g., macOS /var/folders -> /private/var/folders).
+ // Computed lazily on first readEntity call once the directory exists.
+ let realBasePath;
  const readEntity = async (filePath, type)=>{
  try {
+ // Resolve symlinks and verify the real path is within basePath
+ // to prevent symlink-based path traversal.
+ let realFilePath;
+ try {
+ realFilePath = await fs__namespace.realpath(filePath);
+ } catch {
+ // If realpath fails (e.g., file doesn't exist), fall through to readFile
+ // which will throw ENOENT and be handled below.
+ realFilePath = filePath;
+ }
+ // Resolve basePath's real path to handle symlinked parent directories.
+ // Fall back to path.resolve if basePath doesn't exist yet.
+ if (!realBasePath) {
+ try {
+ realBasePath = await fs__namespace.realpath(basePath);
+ } catch {
+ realBasePath = path__namespace.resolve(basePath);
+ }
+ }
+ if (!realFilePath.startsWith(realBasePath + path__namespace.sep) && realFilePath !== realBasePath) {
+ // eslint-disable-next-line no-console
+ console.warn(`Symlink traversal attempt detected: ${filePath} resolves to ${realFilePath}`);
+ return undefined;
+ }
  const content = await fs__namespace.readFile(filePath, 'utf-8');
  let parsed;
  try {
@@ -555,14 +584,29 @@ const createFileSystemProvider = async (options)=>{
  if (!parsed || typeof parsed !== 'object') {
  return undefined;
  }
- // Protect against prototype pollution from malicious YAML
- // Only __proto__ is dangerous - constructor/prototype as string keys are safe
- const parsedObj = parsed;
- if (Object.prototype.hasOwnProperty.call(parsedObj, '__proto__')) {
+ // Protect against prototype pollution from malicious YAML.
+ // Recursively check for __proto__ keys at all nesting levels,
+ // since nested __proto__ can still cause pollution when objects
+ // are spread or assigned downstream.
+ const hasProtoKey = (obj)=>{
+ if (!obj || typeof obj !== 'object') return false;
+ if (Object.prototype.hasOwnProperty.call(obj, '__proto__')) return true;
+ for (const value of Object.values(obj)){
+ if (hasProtoKey(value)) return true;
+ }
+ if (Array.isArray(obj)) {
+ for (const item of obj){
+ if (hasProtoKey(item)) return true;
+ }
+ }
+ return false;
+ };
+ if (hasProtoKey(parsed)) {
  // eslint-disable-next-line no-console
  console.warn(`Potential prototype pollution attempt in ${filePath}: __proto__ key detected`);
  return undefined;
  }
+ const parsedObj = parsed;
  // Validate against registered schema
  const result = registry.validateAs(type, {
  ...parsedObj,
@@ -778,6 +822,9 @@ const createFileSystemProvider = async (options)=>{
  }
  },
  async saveBatch (entities, namespace) {
+ if (entities.length > MAX_BATCH_SIZE) {
+ throw new StorageAccessError(`Batch size ${entities.length} exceeds maximum of ${MAX_BATCH_SIZE}. Split into smaller batches.`);
+ }
  const saved = [];
  for (const entity of entities){
  saved.push(await this.save(entity, namespace));
@@ -785,6 +832,9 @@ const createFileSystemProvider = async (options)=>{
  return saved;
  },
  async deleteBatch (refs, namespace) {
+ if (refs.length > MAX_BATCH_SIZE) {
+ throw new StorageAccessError(`Batch size ${refs.length} exceeds maximum of ${MAX_BATCH_SIZE}. Split into smaller batches.`);
+ }
  let count = 0;
  for (const ref of refs){
  if (await this.delete(ref.type, ref.id, namespace)) {
@@ -1013,6 +1063,15 @@ const createMemoryProvider = (options)=>{
  * Generate a unique ID, appending a number if needed.
  */ const generateUniqueId = async (baseName, exists)=>{
  const baseId = slugify(baseName);
+ if (!baseId) {
+ // slugify produced an empty string (input was all special characters).
+ // Fall back to a random ID to avoid creating entities with empty IDs.
+ const fallbackId = `entity-${Date.now()}-${Math.random().toString(36).substring(2, 8)}`;
+ if (!await exists(fallbackId)) {
+ return fallbackId;
+ }
+ return `${fallbackId}-${Math.random().toString(36).substring(2, 8)}`;
+ }
  if (!await exists(baseId)) {
  return baseId;
  }
@@ -1023,8 +1082,9 @@ const createMemoryProvider = (options)=>{
  return candidateId;
  }
  }
- // Fallback to timestamp
- return `${baseId}-${Date.now()}`;
+ // Fallback to timestamp + random suffix to reduce predictability and
+ // avoid collisions in concurrent environments.
+ return `${baseId}-${Date.now()}-${Math.random().toString(36).substring(2, 8)}`;
  };
 
  /**
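The recursive `__proto__` guard added to `readEntity` in `index.cjs` can be exercised on its own. This sketch condenses `hasProtoKey` from the diff (`Object.values` already visits array elements, so the separate `Array.isArray` branch in the shipped code is folded in), and uses `JSON.parse` as a stand-in for YAML parsing, since both create an own `"__proto__"` data property rather than setting the prototype:

```javascript
// Returns true if any object in the tree carries an own "__proto__" key.
const hasProtoKey = (obj) => {
  if (!obj || typeof obj !== 'object') return false;
  if (Object.prototype.hasOwnProperty.call(obj, '__proto__')) return true;
  for (const value of Object.values(obj)) {
    if (hasProtoKey(value)) return true;
  }
  return false;
};

// Top-level keys were already caught in 0.0.6; nesting is what changes here.
console.log(hasProtoKey(JSON.parse('{"name": "ok"}')));                    // false
console.log(hasProtoKey(JSON.parse('{"meta": {"__proto__": {"x": 1}}}'))); // true
```

The nested case matters because, as the diff's comment notes, a `__proto__` key buried inside an entity can still cause pollution when the parsed objects are assigned or merged downstream.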