docdex 0.2.58 → 0.2.60

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,9 @@
  # Changelog

+ ## 0.2.60
+ - Deduplicate installer-managed Docdex client config on reinstall: JSON client configs now collapse stale `docdex` entries, Codex TOML converges to one canonical Docdex entry, and packaged Docdex instruction blocks replace older Codex/Gemini/Claude prompt blocks instead of duplicating them.
+ - Update the packaged daemon dependency set to remove the vulnerable `rustls-webpki` chain that caused the nightly security audit failure.
+
  ## 0.2.58
  - Export Docdex delegation savings in hourly mswarm telemetry packages and expose matching runtime/admin mswarm summaries for frontend visibility.

package/assets/agents.md CHANGED
@@ -1,4 +1,4 @@
- ---- START OF DOCDEX INFO V0.2.58 ----
+ ---- START OF DOCDEX INFO V0.2.60 ----
  Docdex URL: http://127.0.0.1:28491
  Use this base URL for Docdex HTTP endpoints.
  Health check endpoint: `GET /healthz` (not `/v1/health`).
@@ -11,6 +11,7 @@ Health check endpoint: `GET /healthz` (not `/v1/health`).
  - Use impact analysis for every code change: prefer MCP tools `docdex_impact_graph` / `docdex_dag_export` (IPC/HTTP). If shell networking is blocked, do not use curl; use MCP/IPC instead. If unavailable, state it and proceed cautiously.
  - Apply DAG reasoning for planning: prefer dependency graph facts (impact results and /v1/dag/export) to choose the right change order and scope.
  - Use Docdex tools intentionally: docdex_search/symbols/ast for repo truth; docdex_stats/files/repo_inspect/index for index health.
+ - When session history matters, use conversation-memory tools intentionally: archive/search/read with `docdex_conversation_*`, build compact context with `docdex_wakeup`, record durable notes with `docdex_diary_*`, and inspect temporal graph state with `docdex_kg_*` instead of replaying full transcripts.
  - For folder structure, use docdex_tree instead of raw `rg --files`/`find` to avoid noisy folders.
  - When you do not know something, run docdex_web_research (force_web=true). Web research is encouraged by default for non-repo facts and external APIs.
  - When a Docdex feature makes a task easier/safer, you MUST use it instead of ad-hoc inspection. Examples: `docdex_search` for context, `docdex_open`/`/v1/snippet` for file slices, `docdex_symbols`/`docdex_ast` for structure, `docdex_impact_graph`/`docdex_impact_diagnostics` for dependency safety, and `docdex_dag_export` to review session traces.
@@ -92,7 +93,20 @@ Precision tools for structural analysis. Do not rely on text search for definiti
  | docdex_save_preference | Store a global user preference (Style, Tooling, Constraint). |
  | docdex_get_profile | Retrieve global preferences. |

- ### D. Local Delegation (Cheap Models)
+ ### D. Conversation Memory + Temporal Knowledge Graph
+
+ Use these when prior sessions, durable notes, wake-up context, or graph-derived facts matter.
+
+ | MCP Tool / HTTP | Purpose |
+ | --- | --- |
+ | docdex_conversation_import / search / list / read / export / redact / delete | Manage archived conversations inside the current repo scope or an explicit conversation namespace. |
+ | docdex_conversation_prune | Preview or apply retention and compaction for sessions, diary entries, hook events, working memory, and episodic rollups. |
+ | docdex_diary_write / docdex_diary_read | Persist concise durable notes for an agent and read them back later. |
+ | docdex_conversation_hook | Trigger periodic or session-close summarization from transcript or summary payloads. |
+ | docdex_wakeup | Build a compact wake-up bundle from working memory, episodic summaries, KG facts, and transcript snippets. |
+ | docdex_kg_query / search_nodes / search_edges / search_episodes / timeline / neighborhood / entity_links / episode / delete_edge / delete_episode / rebuild / clear | Query and maintain the temporal knowledge graph derived from conversations. |
+
+ ### E. Local Delegation (Cheap Models)

  Use local delegation for low-complexity code-writing tasks and lightweight general questions to reduce paid-model usage.

@@ -125,7 +139,7 @@ Table output shows `USAGE`, `COMPLEXITY`, `RATING`, `REASON`, `COST/$1M`, and `H
  Use `agent: model:<ollama-model>` to force a specific local model (for example, `model:phi3.5:3.8b`).
  Avoid entries that only advertise `embedding` or `vision`.

- ### E. Index Health + File Access
+ ### F. Index Health + File Access

  Use these to verify index coverage, repo binding, and to read precise file slices.

@@ -146,6 +160,12 @@ Use these to verify index coverage, repo binding, and to read precise file slice
  - docdex_index: Reindex the full repo or ingest specific files when stale.
  - docdex_search diff: Limit search to working tree, staged, or ref ranges; filter by paths.
  - docdex_web_research knobs: force_web, skip_local_search, repo_only, no_cache, web_limit, llm_filter_local_results, llm_model.
+ - docdex_conversation_import/search/list/read/export/redact/delete: Archive and inspect scoped conversation sessions instead of depending on transient chat history. After redaction, read/export keep message slots but replace stored content with `[redacted]` placeholders.
+ - docdex_conversation_prune: Dry-run or apply retention across sessions, diary entries, hook events, working memory, and episodic rollups.
+ - docdex_diary_write/read: Persist concise durable notes tied to an agent and repo or conversation namespace.
+ - docdex_conversation_hook: Enqueue or synchronously process periodic/session-close summarization from transcript or summary input.
+ - docdex_wakeup: Build a bounded wake-up bundle before answering instead of replaying whole transcripts.
+ - docdex_kg_*: Inspect derived entities, edges, episodes, timelines, neighborhoods, entity links, and graph maintenance actions.
  - docdex_open: Read narrow file slices after targets are identified.
  - docdex_tree: Render a filtered folder tree (prefer this over `rg --files` / `find`).
  - docdex_impact_diagnostics: Scan dynamic imports when imports are unclear or failing.
@@ -160,6 +180,7 @@ Use these to verify index coverage, repo binding, and to read precise file slice
  - HTTP /v1/search/batch: Execute bounded multi-query retrieval in one request.
  - MCP tools `docdex_capabilities`, `docdex_rerank`, `docdex_batch_search`: optional capability/flow surfaces for Codali integration.
  - HTTP /v1/snippet: Fetch exact line-safe snippets for a doc_id returned by search.
+ - HTTP /v1/chat/completions: OpenAI-compatible chat can inject wake-up, profile, and cached project-map context; non-streaming responses may include `reasoning_trace`.
  - HTTP /v1/impact/diagnostics: Inspect unresolved/dynamic imports when impact graphs look incomplete.

  ## CLI Fallbacks (when MCP/IPC is unavailable)
@@ -171,6 +192,11 @@ Use these only when MCP tools cannot be called (e.g., blocked sandbox networking
  - `docdexd impact-graph --repo <path> --file <rel>`: impact graph (HTTP/local).
  - `docdexd dag view --repo <path> <session_id>` / `docdexd dag export --repo <path> <session_id>`: DAG export/render.
  - `docdexd search --repo <path> --query "<q>"`: /search equivalent (HTTP/local).
+ - `docdexd conversations import|list|search|read|export|redact|delete --repo <path>`: conversation archive management via the daemon HTTP API.
+ - `docdexd conversations prune --repo <path> [--apply] [--manual-retention-days ... --auto-capture-retention-days ... --diary-retention-days ... --hook-event-retention-days ... --working-memory-retention-days ... --episodic-rollup-retention-days ...]`: retention preview/apply.
+ - `docdexd diary write|read --repo <path>`: store and read agent diary entries in the current scope.
+ - `docdexd hook conversation --repo <path> --action <periodic_memory_save|pre_compaction_summarization|session_close_summarization>`: trigger conversation-memory hooks.
+ - `docdexd conversations kg-query|kg-search-nodes|kg-search-edges|kg-search-episodes|kg-neighborhood|kg-entity-links|kg-episode|kg-timeline|kg-delete-edge|kg-delete-episode|kg-rebuild|kg-clear --repo <path>`: temporal KG exploration and maintenance.
  - `docdexd delegation savings`: delegation telemetry (JSON: offloaded count, local/primary tokens & costs, savings).
  - `docdexd delegation agents --json`: list local delegation targets and capabilities (mcoda agents include `max_complexity`, `rating`, `cost_per_million`, `usage`, `reasoning_rating`, `health_status`).
  - `mcoda agent list --json --refresh-health`: preferred machine-consumer inventory command for fresh health; fallback to plain `--json` for older mcoda versions.
@@ -332,6 +358,35 @@ Do not guess fields; use these canonical shapes.
  - `docdex_save_preference`: `{ agent_id, category, content }`
  - `docdex_local_completion`: `{ task_type, instruction, context, max_tokens?, timeout_ms?, mode?, max_context_chars?, agent?, caller_agent_id?, caller_model?, primary_cost_per_million?, project_root?, repo_path? }`
  - `docdex_web_research`: `{ project_root, query, force_web, skip_local_search?, web_limit?, no_cache? }`
+ - `docdex_conversation_import`: `{ source?, source_session_id?, title?, agent_id?, transport?, started_at_ms?, ended_at_ms?, format?, messages?, transcript_text?, metadata?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_conversation_search`: `{ query, agent_id?, limit?, offset?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_conversation_list`: `{ agent_id?, limit?, offset?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_conversation_read`: `{ session_id, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_conversation_delete`: `{ session_id, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_conversation_export`: `{ session_id, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_conversation_redact`: `{ session_id, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_conversation_prune`: `{ manual_retention_days?, auto_capture_retention_days?, diary_retention_days?, hook_event_retention_days?, working_memory_retention_days?, episodic_rollup_retention_days?, apply?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_diary_write`: `{ content, agent_id?, entry_type?, source_session_id?, metadata?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_diary_read`: `{ agent_id?, limit?, offset?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_conversation_hook`: `{ action, source?, source_session_id?, title?, agent_id?, transport?, started_at_ms?, ended_at_ms?, format?, messages?, transcript_text?, summary_text?, metadata?, wait_for_processing?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_wakeup`: `{ agent_id?, query?, max_tokens?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_query`: `{ query, relation?, limit?, offset?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_search_nodes`: `{ query, entity_type?, limit?, offset?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_search_edges`: `{ query, relation?, limit?, offset?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_timeline`: `{ entity, relation?, limit?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_search_episodes`: `{ query, source_type?, limit?, offset?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_neighborhood`: `{ entity, relation?, limit?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_entity_links`: `{ entity, link_type?, limit?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_episode`: `{ episode_id?, id?, limit?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_delete_edge`: `{ edge_id, id?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_delete_episode`: `{ episode_id, id?, project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_rebuild`: `{ project_root?, repo_path?, conversation_namespace? }`
+ - `docdex_kg_clear`: `{ project_root?, repo_path?, conversation_namespace? }`
+
+ Notes:
+ - `docdex_conversation_import.format` must be one of `auto`, `plain_text`, `generic_json`, `codex_jsonl`, `claude_jsonl`, or `chatgpt_export`.
+ - `docdex_conversation_hook.action` must be one of `periodic_memory_save`, `pre_compaction_summarization`, or `session_close_summarization`.
+ - Conversation-memory MCP tools accept `conversation_namespace`; use it instead of `project_root` / `repo_path` for repo-less shared archives.

  ### 9) Common error fixes (do not guess)

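The enum constraints in the notes above can be checked on the client before issuing a call. A minimal sketch, assuming only the allowed values listed in the notes; the helper names (`isValidImportFormat`, `isValidHookAction`) are illustrative, not part of the Docdex API:

```javascript
// Allowed values copied from the agents.md notes above.
const IMPORT_FORMATS = new Set([
  "auto", "plain_text", "generic_json", "codex_jsonl", "claude_jsonl", "chatgpt_export",
]);
const HOOK_ACTIONS = new Set([
  "periodic_memory_save", "pre_compaction_summarization", "session_close_summarization",
]);

function isValidImportFormat(format) {
  // `format` is optional on docdex_conversation_import; omitted is acceptable.
  return format === undefined || IMPORT_FORMATS.has(format);
}

function isValidHookAction(action) {
  // `action` is required on docdex_conversation_hook.
  return HOOK_ACTIONS.has(action);
}
```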
@@ -340,6 +395,8 @@ Do not guess fields; use these canonical shapes.
  - Calling `/v1/initialize` on the multi-repo daemon with `rootUri`, then using the returned repo_id.
  - `missing_repo`: Supply repo_id (HTTP) or project_root (MCP), or call /v1/initialize.
  - `invalid_range` (docdex_open): Adjust start/end line to fit total_lines.
+ - `missing conversation scope`: Supply `project_root` / `repo_path` for repo-scoped archives or `conversation_namespace` for repo-less shared archives.
+ - `conflicting conversation scope`: Do not combine `repo_id` / `project_root` with `conversation_namespace` on the same request.

  ## Interaction Patterns

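The two conversation-scope errors above can be caught before a request is sent. A pre-flight sketch using the field names from the canonical shapes; the helper itself (`checkConversationScope`) is illustrative, not a Docdex API:

```javascript
// Returns the error name a request would trigger, or null when the scope is valid.
function checkConversationScope(params) {
  const hasRepoScope =
    params.repo_id !== undefined ||
    params.project_root !== undefined ||
    params.repo_path !== undefined;
  const hasNamespace = params.conversation_namespace !== undefined;
  if (!hasRepoScope && !hasNamespace) return "missing conversation scope";
  if (hasRepoScope && hasNamespace) return "conflicting conversation scope";
  return null; // exactly one scope style supplied
}
```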
@@ -350,10 +407,11 @@ When answering a complex coding query, follow this "Reasoning Trace":
  1. Retrieve Profile: Call docdex_get_profile to load user style/constraints (e.g., "Use functional components").
  2. Search Code: Call docdex_search or docdex_symbols to find the relevant code.
  3. Check Memory: Call docdex_memory_recall for project-specific caveats (e.g., "Auth logic was refactored last week").
- 4. Validate structure: Use docdex_ast/docdex_symbols to confirm targets before editing.
- 5. Read context: Use docdex_open to fetch minimal file slices after locating targets.
- 6. Plan with DAG: Use /v1/dag/export or /v1/graph/impact to order changes by dependencies.
- 7. Synthesize: Generate code that matches the Repo Truth while adhering to the Profile Style.
+ 4. If prior sessions matter: call `docdex_conversation_search`, `docdex_conversation_read`, or `docdex_wakeup` before relying on implicit chat history.
+ 5. Validate structure: Use docdex_ast/docdex_symbols to confirm targets before editing.
+ 6. Read context: Use docdex_open to fetch minimal file slices after locating targets.
+ 7. Plan with DAG: Use /v1/dag/export or /v1/graph/impact to order changes by dependencies.
+ 8. Synthesize: Generate code that matches the Repo Truth while adhering to the Profile Style.

  ### 2. Memory Capture (Mandatory)

@@ -362,8 +420,9 @@ Save more memories for both lobes during the task, not just at the end.
  1. Repo memory: After each meaningful discovery or code change, save at least one durable fact (file location, behavior, config, gotcha) via `docdex_memory_save`.
  2. Memory overrides: When a new repo memory replaces older facts, include `metadata.supersedes` with the prior memory id(s). Docdex marks the superseded entries with `supersededBy`/`supersededAtMs`, down-ranks them during recall, and they can be removed via `docdex memory compact` (dry-run unless `--apply`).
  3. Profile memory: When the user expresses a preference, constraint, or workflow correction, call `docdex_save_preference` immediately with the right category.
- 4. Keep it crisp: 1-3 short sentences, include file paths when relevant, avoid raw code blobs.
- 5. Safety: Never store secrets, tokens, or sensitive user data. Skip transient or speculative info.
+ 4. Use `docdex_diary_write` for concise session outcomes, handoff notes, or reminders that are useful later but are not durable repo facts.
+ 5. Keep it crisp: 1-3 short sentences, include file paths when relevant, avoid raw code blobs.
+ 6. Safety: Never store secrets, tokens, or sensitive user data. Skip transient or speculative info.

  ### 3. Index Health + Diff-Aware Search (Mandatory)

@@ -374,7 +433,19 @@ Use these when results look incomplete or when the task is about recent changes.
  3. Reindex if needed: Run docdex_index (or advise it) when stale_index/missing files appear.
  4. Use diff search: For change-specific tasks, use docdex_search with diff mode (working tree/staged/range).

- ### 4. Local Delegation Workflow (Cheap Models)
+ ### 4. Conversation Memory + Wake-Up Workflow (Mandatory when session history matters)
+
+ When the task depends on earlier sessions, durable notes, or handoff state, use the conversation archive instead of relying on implicit chat history.
+
+ 1. Use `docdex_conversation_search` / `docdex_conversation_list` / `docdex_conversation_read` to recover prior sessions in the current repo or explicit `conversation_namespace`.
+ 2. Use `docdex_wakeup` to assemble compact context from working memory, episodic summaries, KG facts, and transcript snippets before answering.
+ 3. Use `docdex_diary_write` to persist concise durable notes after important milestones; read them back with `docdex_diary_read` when resuming work.
+ 4. Use `docdex_conversation_hook` when importing external agent transcripts or firing periodic/session-close summarization actions.
+ 5. Use `docdex_kg_*` tools when you need entity timelines, provenance episodes, code-facing links, or graph cleanup/rebuild operations.
+ 6. Use `conversation_namespace` only for repo-less/shared archives. Do not combine it with `repo_id`, `project_root`, or `repo_path` on the same request.
+ 7. For `/v1/chat/completions`, set `docdex.agent_id` (or `x-docdex-agent-id`) so wake-up/profile context resolves correctly; responses may include `reasoning_trace` and automatically inject cached `Project map:` context when enabled.
+
+ ### 5. Local Delegation Workflow (Cheap Models)

  When you are an expensive/paid model, delegate eligible low-complexity code-writing tasks and lightweight general questions first unless the user opts out.
  Delegation is mandatory for eligible tasks. If you are expensive and delegation is enabled (explicitly or auto-enabled with local candidates), you MUST attempt local delegation before doing the work yourself. Only skip when delegation is disabled, no local candidate exists, or local output fails validation after a retry/fallback.
@@ -410,7 +481,7 @@ Local models cannot call tools. The leading agent must provide a complete, minim
  4. Boundaries: explicit files allowed to edit vs read-only; no new dependencies unless allowed.
  5. Guardrails: ask for clarification if context is insufficient; do not invent missing APIs; return only the requested format.

- ### 5. Graph + AST Usage (Mandatory for Code Changes)
+ ### 6. Graph + AST Usage (Mandatory for Code Changes)

  For any code change, use both AST and graph tools to reduce drift and hidden coupling.

@@ -420,7 +491,7 @@ For any code change, use both AST and graph tools to reduce drift and hidden cou
  4. Use docdex_impact_diagnostics when imports are dynamic or unresolved.
  5. If graph endpoints are unavailable, state it and proceed cautiously with extra local search.

- ### 6. Handling Corrections (Learning)
+ ### 7. Handling Corrections (Learning)

  If the user says: "I told you, we do not use Moment.js here, use date-fns!"

@@ -429,21 +500,21 @@ If the user says: "I told you, we do not use Moment.js here, use date-fns!"
  - content: "Do not use Moment.js; prefer date-fns."
  - agent_id: "default" (or active agent ID)

- ### 7. Impact Analysis
+ ### 8. Impact Analysis

  If the user asks: "Safe to delete getUser?"

  - Action: Call GET /v1/graph/impact?file=src/user.ts
  - Output: Analyze the inbound edges. If the list is not empty, it is unsafe.

- ### 8. Non-Repo Real-World Queries (Web First)
+ ### 9. Non-Repo Real-World Queries (Web First)

  If the user asks a non-repo, real-world question (weather, news, general facts), immediately call docdex_web_research with force_web=true.
  - Resolve relative dates ("yesterday", "last week") using system time by default.
  - Do not run docdex_search unless the user explicitly wants repo-local context.
  - Assume web access is allowed unless the user forbids it; if the web call fails, report the failure and ask for a source or permission.

- ### 9. Failure Handling (Missing Results or Errors)
+ ### 10. Failure Handling (Missing Results or Errors)

  - Ensure project_root or repo_path is set, or call /v1/initialize to bind a default root.
  - Use docdex_repo_inspect to confirm repo identity and normalized root.
@@ -666,8 +666,17 @@ function stripLegacyDocdexBodySegment(segment, body) {
  const normalizedSegment = String(segment || "").replace(/\r\n/g, "\n");
  const normalizedBody = String(body || "").replace(/\r\n/g, "\n");
  if (!normalizedBody.trim()) return normalizedSegment;
- const re = new RegExp(`\\n?${escapeRegExp(normalizedBody)}\\n?`, "g");
- return normalizedSegment.replace(re, "\n").replace(/\n{3,}/g, "\n\n");
+ let result = normalizedSegment;
+ let index = result.indexOf(normalizedBody);
+ while (index !== -1) {
+ let start = index;
+ let end = index + normalizedBody.length;
+ if (start > 0 && result[start - 1] === "\n") start -= 1;
+ if (end < result.length && result[end] === "\n") end += 1;
+ result = `${result.slice(0, start)}\n${result.slice(end)}`;
+ index = result.indexOf(normalizedBody);
+ }
+ return result.replace(/\n{3,}/g, "\n\n");
  }

  function stripLegacyDocdexBody(text, body) {
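The new removal loop above replaces the old regex-based approach with plain `indexOf`/`slice` passes. A standalone sketch of the same algorithm, runnable in isolation (`stripOccurrences` is an illustrative name; the packaged function also normalizes line endings first):

```javascript
// Delete every occurrence of `body`, absorbing one adjacent newline on each
// side, then collapse runs of three or more newlines, mirroring the loop in
// stripLegacyDocdexBodySegment above.
function stripOccurrences(segment, body) {
  let result = segment;
  let index = result.indexOf(body);
  while (index !== -1) {
    let start = index;
    let end = index + body.length;
    if (start > 0 && result[start - 1] === "\n") start -= 1;
    if (end < result.length && result[end] === "\n") end += 1;
    result = `${result.slice(0, start)}\n${result.slice(end)}`;
    index = result.indexOf(body);
  }
  return result.replace(/\n{3,}/g, "\n\n");
}
```

Because it scans with `indexOf`, every duplicate of the legacy block is removed in turn, and no regex-escaping of the block's content is needed.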
@@ -1082,6 +1091,7 @@ function upsertMcpServerJson(pathname, url, options = {}) {
  const { value } = readJson(pathname);
  if (typeof value !== "object" || value == null || Array.isArray(value)) return false;
  const root = value;
+ const before = JSON.stringify(root);
  const extra =
  options &&
  typeof options === "object" &&
@@ -1090,9 +1100,25 @@ function upsertMcpServerJson(pathname, url, options = {}) {
  !Array.isArray(options.extra)
  ? options.extra
  : {};
- const extraEntries = Object.entries(extra);
- const matchesExtras = (entry) =>
- extraEntries.every(([key, value]) => entry && entry[key] === value);
+ const isPlainObject = (entry) =>
+ typeof entry === "object" && entry != null && !Array.isArray(entry);
+ const removeDocdexFromSection = (key) => {
+ const section = root[key];
+ if (Array.isArray(section)) {
+ const filtered = section.filter((entry) => !(entry && entry.name === "docdex"));
+ if (filtered.length !== section.length) {
+ root[key] = filtered;
+ }
+ return;
+ }
+ if (!isPlainObject(section) || !Object.prototype.hasOwnProperty.call(section, "docdex")) {
+ return;
+ }
+ delete section.docdex;
+ if (Object.keys(section).length === 0) {
+ delete root[key];
+ }
+ };
  const pickSection = () => {
  if (root.mcpServers && typeof root.mcpServers === "object" && !Array.isArray(root.mcpServers)) {
  return { key: "mcpServers", section: root.mcpServers };
@@ -1103,29 +1129,39 @@ function upsertMcpServerJson(pathname, url, options = {}) {
  return null;
  };
  if (Array.isArray(root.mcpServers)) {
- const idx = root.mcpServers.findIndex((entry) => entry && entry.name === "docdex");
- if (idx >= 0) {
- const current = root.mcpServers[idx] || {};
- if (current.url === url && matchesExtras(current)) return false;
- root.mcpServers[idx] = { ...current, ...extra, url, name: "docdex" };
- writeJson(pathname, root);
- return true;
+ const nextEntries = [];
+ let insertIndex = -1;
+ let current = {};
+ for (const entry of root.mcpServers) {
+ if (entry && entry.name === "docdex") {
+ if (insertIndex === -1) {
+ insertIndex = nextEntries.length;
+ current = isPlainObject(entry) ? { ...entry } : {};
+ }
+ continue;
+ }
+ nextEntries.push(entry);
  }
- root.mcpServers.push({ ...extra, url, name: "docdex" });
- writeJson(pathname, root);
- return true;
- }
-
- const picked = pickSection();
- if (!picked) {
- root.mcpServers = {};
+ const nextEntry = { ...current, ...extra, url, name: "docdex" };
+ if (insertIndex === -1) {
+ nextEntries.push(nextEntry);
+ } else {
+ nextEntries.splice(insertIndex, 0, nextEntry);
+ }
+ root.mcpServers = nextEntries;
+ removeDocdexFromSection("mcp_servers");
+ } else {
+ const picked = pickSection();
+ const sectionKey = picked ? picked.key : "mcpServers";
+ if (!picked) {
+ root[sectionKey] = {};
+ }
+ const section = root[sectionKey];
+ const current = isPlainObject(section.docdex) ? section.docdex : {};
+ section.docdex = { ...current, ...extra, url };
+ removeDocdexFromSection(sectionKey === "mcpServers" ? "mcp_servers" : "mcpServers");
  }
- const section = picked ? picked.section : root.mcpServers;
- const current = section.docdex;
- if (current && current.url === url && matchesExtras(current)) return false;
- const base =
- current && typeof current === "object" && !Array.isArray(current) ? current : {};
- section.docdex = { ...base, ...extra, url };
+ if (JSON.stringify(root) === before) return false;
  writeJson(pathname, root);
  return true;
  }
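The array branch above is what collapses duplicate `docdex` entries while preserving the first entry's position. A reduced sketch of that behavior on a bare server list (`dedupeDocdexEntries` is an illustrative name; the installer version also handles object-form sections and file I/O):

```javascript
// Keep the first `docdex` entry's slot, drop later duplicates, and overlay
// the canonical url plus any extra fields, as in the installer's array branch.
function dedupeDocdexEntries(servers, url, extra = {}) {
  const next = [];
  let insertIndex = -1;
  let current = {};
  for (const entry of servers) {
    if (entry && entry.name === "docdex") {
      if (insertIndex === -1) {
        insertIndex = next.length; // remember where the first copy sat
        current = { ...entry };
      }
      continue; // later duplicates are dropped
    }
    next.push(entry);
  }
  const merged = { ...current, ...extra, url, name: "docdex" };
  if (insertIndex === -1) next.push(merged);
  else next.splice(insertIndex, 0, merged);
  return next;
}
```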
@@ -1273,19 +1309,25 @@ function upsertCodexConfig(pathname, url) {
  end += 1;
  }
  let updated = false;
- let docdexLine = -1;
+ const docdexLines = [];
  for (let i = start + 1; i < end; i += 1) {
  if (/^\s*docdex\s*=/.test(lines[i])) {
- docdexLine = i;
- break;
+ docdexLines.push(i);
  }
  }
- if (docdexLine === -1) {
+ if (docdexLines.length === 0) {
  lines.splice(end, 0, entryLine);
  updated = true;
- } else if (lines[docdexLine].trim() !== entryLine) {
- lines[docdexLine] = entryLine;
- updated = true;
+ } else {
+ const [firstDocdexLine, ...extraDocdexLines] = docdexLines;
+ if (lines[firstDocdexLine].trim() !== entryLine) {
+ lines[firstDocdexLine] = entryLine;
+ updated = true;
+ }
+ for (let i = extraDocdexLines.length - 1; i >= 0; i -= 1) {
+ lines.splice(extraDocdexLines[i], 1);
+ updated = true;
+ }
  }
  return { contents: lines.join("\n"), updated };
  };
@@ -1336,6 +1378,71 @@ function upsertCodexConfig(pathname, url) {
  return { contents: output.join("\n"), updated };
  };

+ const removeRootDocdexEntries = (text) => {
+ const lines = text.split(/\r?\n/);
+ const output = [];
+ let inRootSection = false;
+ let updated = false;
+ for (const line of lines) {
+ const section = line.match(/^\s*\[([^\]]+)\]\s*$/);
+ if (section) {
+ inRootSection = section[1].trim() === "mcp_servers";
+ output.push(line);
+ continue;
+ }
+ if (inRootSection && /^\s*docdex\s*=/.test(line)) {
+ updated = true;
+ continue;
+ }
+ output.push(line);
+ }
+ return { contents: output.join("\n"), updated };
+ };
+
+ const countRootDocdexEntries = (text) => {
+ const lines = text.split(/\r?\n/);
+ let inRootSection = false;
+ let count = 0;
+ for (const line of lines) {
+ const section = line.match(/^\s*\[([^\]]+)\]\s*$/);
+ if (section) {
+ inRootSection = section[1].trim() === "mcp_servers";
+ continue;
+ }
+ if (inRootSection && /^\s*docdex\s*=/.test(line)) {
+ count += 1;
+ }
+ }
+ return count;
+ };
+
+ const removeNestedDocdexSections = (text) => {
+ const lines = text.split(/\r?\n/);
+ const output = [];
+ let skipping = false;
+ let updated = false;
+ for (const line of lines) {
+ const isSection = /^\s*\[.+\]\s*$/.test(line);
+ if (skipping) {
+ if (isSection) {
+ skipping = false;
+ } else {
+ continue;
+ }
+ }
+ if (/^\s*\[mcp_servers\.docdex\]\s*$/.test(line)) {
+ skipping = true;
+ updated = true;
+ continue;
+ }
+ output.push(line);
+ }
+ return { contents: output.join("\n"), updated };
+ };
+
+ const countNestedDocdexSections = (text) =>
+ (text.match(/^\s*\[mcp_servers\.docdex\]\s*$/gm) || []).length;
+
  let contents = "";
  if (fs.existsSync(pathname)) {
  contents = fs.readFileSync(pathname, "utf8");
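The two counting helpers added above distinguish a root-table `docdex = ...` key from a nested `[mcp_servers.docdex]` section so the installer can prune whichever form is stale. An illustrative run against a sample TOML string (the sample config and helper copies here are for demonstration only):

```javascript
// Sample Codex-style TOML containing one of each Docdex entry form.
const sample = [
  "[mcp_servers]",
  'docdex = { url = "http://127.0.0.1:28491" }',
  "",
  "[mcp_servers.docdex]",
  'url = "http://127.0.0.1:28491"',
].join("\n");

// Count `[mcp_servers.docdex]` section headers (multiline regex, as above).
const countNested = (text) =>
  (text.match(/^\s*\[mcp_servers\.docdex\]\s*$/gm) || []).length;

// Count `docdex = ...` keys that sit inside a root `[mcp_servers]` table.
function countRoot(text) {
  let inRoot = false;
  let count = 0;
  for (const line of text.split(/\r?\n/)) {
    const section = line.match(/^\s*\[([^\]]+)\]\s*$/);
    if (section) {
      inRoot = section[1].trim() === "mcp_servers"; // nested headers reset this
      continue;
    }
    if (inRoot && /^\s*docdex\s*=/.test(line)) count += 1;
  }
  return count;
}
```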
@@ -1351,7 +1458,23 @@ function upsertCodexConfig(pathname, url) {
  contents = cleaned.contents;
  updated = updated || cleaned.updated;

- if (hasNestedMcpServers(contents)) {
+ const preferNested = hasNestedMcpServers(contents);
+ const rootDocdexCount = countRootDocdexEntries(contents);
+ const nestedDocdexCount = countNestedDocdexSections(contents);
+
+ if (preferNested && rootDocdexCount > 0) {
+ const prunedRoot = removeRootDocdexEntries(contents);
+ contents = prunedRoot.contents;
+ updated = updated || prunedRoot.updated;
+ }
+
+ if ((!preferNested && nestedDocdexCount > 0) || (preferNested && nestedDocdexCount > 1)) {
+ const prunedNested = removeNestedDocdexSections(contents);
+ contents = prunedNested.contents;
+ updated = updated || prunedNested.updated;
+ }
+
+ if (preferNested) {
  const nested = upsertDocdexNested(contents, url);
  contents = nested.contents;
  updated = updated || nested.updated;
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "docdex",
- "version": "0.2.58",
+ "version": "0.2.60",
  "mcpName": "io.github.bekirdag/docdex",
  "description": "Local-first documentation and code indexer with HTTP/MCP search, AST, and agent memory.",
  "bin": {