ownsearch 0.1.6 → 0.1.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -61,13 +61,20 @@ Typical agent workflows:
  - reranks and deduplicates result sets before returning them
  - lets agents retrieve ranked hits, exact chunks, or bundled grounded context
 
+ Incremental indexing behavior:
+
+ - if a file is unchanged, OwnSearch skips it
+ - if a file is updated, OwnSearch re-indexes only that file's chunks
+ - if a new file appears, OwnSearch indexes only the new file
+ - if a file is deleted, OwnSearch removes that file's chunks from the index
+
  ## Current power
 
  What is already strong in the current package:
 
  - local-first setup with Docker-backed Qdrant
  - deterministic readiness checks through `ownsearch doctor`
- - multi-platform MCP config generation
+ - multi-platform MCP config installation
  - bundled retrieval skill for better query planning
  - support for common text document formats
  - large plain text and code files are no longer blocked by the extracted-document size cap
@@ -134,12 +141,15 @@ ownsearch serve-mcp
  On first run, `ownsearch setup` can:
 
  - prompt for `GEMINI_API_KEY`
- - open Google AI Studio automatically
+ - explain the key-setup flow before opening Google AI Studio
+ - open Google AI Studio after the user confirms they are ready
  - save the key to `~/.ownsearch/.env`
  - validate the pasted key before saving it
  - ask whether setup output should be optimized for a human or an agent
+ - explain that the MCP server exposes built-in retrieval guidance for agents
  - print exact next commands for CLI and MCP usage
- - optionally print an MCP config snippet for a selected agent
+ - offer to install MCP config automatically for supported agents
+ - fall back to a manual config snippet inside setup if automatic installation is not supported or fails
 
  Gemini API usage is governed by Google’s current free-tier limits, quotas, and pricing.
 
@@ -170,17 +180,16 @@ It is less suitable when:
 
  ## Agent integration
 
- To print MCP config snippets:
+ To let OwnSearch install MCP config automatically:
 
  ```bash
- ownsearch print-agent-config codex
- ownsearch print-agent-config cursor
- ownsearch print-agent-config vscode
- ownsearch print-agent-config github-copilot
- ownsearch print-agent-config copilot-cli
- ownsearch print-agent-config windsurf
- ownsearch print-agent-config continue
- ownsearch print-agent-config claude-desktop
+ ownsearch install-agent-config codex
+ ownsearch install-agent-config cursor
+ ownsearch install-agent-config vscode
+ ownsearch install-agent-config github-copilot
+ ownsearch install-agent-config copilot-cli
+ ownsearch install-agent-config windsurf
+ ownsearch install-agent-config continue
  ```
 
  Supported config targets currently include:
@@ -196,8 +205,9 @@ Supported config targets currently include:
 
  Notes:
 
- - `claude-desktop` currently returns guidance rather than a raw JSON snippet because current Claude Desktop docs prefer desktop extensions (`.mcpb`) over manual JSON server configs
- - all other supported targets return concrete MCP config payloads
+ - `claude-desktop` is not auto-installed because current Claude Desktop docs prefer desktop extensions (`.mcpb`) over manual JSON server configs
+ - supported agents are installed with a safe merge that preserves existing MCP servers
+ - if automatic installation is not supported or fails, setup falls back to showing a manual config snippet
 
  ## Bundled skill
 
@@ -215,6 +225,14 @@ The skill is intended to help an agent:
  - avoid duplicate-heavy answer synthesis
  - stay grounded when retrieval is probabilistic
 
+ The MCP server also exposes the same guidance directly:
+
+ - resource: `ownsearch://skills/retrieval`
+ - prompt: `ownsearch-retrieval-guide`
+ - tool: `get_retrieval_skill`
+
+ That lets an attached agent load the OwnSearch retrieval playbook through MCP instead of relying on external repo knowledge.
+
  ## CLI commands
 
  - `ownsearch setup`
@@ -239,8 +257,8 @@ The skill is intended to help an agent:
  Shows collection status and vector configuration.
  - `ownsearch serve-mcp`
  Starts the stdio MCP server.
- - `ownsearch print-agent-config <agent>`
- Prints MCP config snippets or platform guidance.
+ - `ownsearch install-agent-config <agent>`
+ Safely merges OwnSearch into a supported agent MCP config when the platform can be updated automatically.
  - `ownsearch print-skill [skill]`
  Prints a bundled OwnSearch skill.
 
@@ -259,6 +277,11 @@ The MCP server currently exposes:
  - `delete_root`
  - `store_status`
 
+ In addition to tools, the MCP server exposes:
+
+ - resource: `ownsearch://skills/retrieval`
+ - prompt: `ownsearch-retrieval-guide`
+
  Recommended retrieval flow:
 
  1. Use `literal_search` when the user gives an exact title, name, identifier, or quoted phrase.
@@ -287,21 +310,144 @@ The repo also includes comparative retrieval evals:
 
  - `scripts/eval-grep-vs-ownsearch.mts`
  - `scripts/eval-adversarial-retrieval.mts`
+ - `scripts/eval-agent-tooling-efficiency.mts`
+ - `scripts/eval-dnd-agent-efficiency.mts`
 
  These evals are meant to expose where:
 
  - plain `grep` is still best
  - shallow semantic retrieval is too weak
  - deeper retrieval improves agent-facing RAG quality
+ - the retrieval layer improves agent efficiency compared with normal CLI-style tool usage
+
+ Run them with:
+
+ ```bash
+ npm run eval:agent-efficiency
+ npm run eval:dnd-agent-efficiency
+ ```
+
+ ### Benchmark sources
+
+ These benchmark results are from local corpora checked in under:
+
+ - `_testing/mireglass_test`
+ - a small synthetic archive corpus used to probe ambiguity, aliasing, contradiction handling, and source diversification
+ - `_testing/dnd_test`
+ - a larger PDF-heavy rules corpus containing:
+ - `phb.pdf`
+ - `PlayerDnDBasicRules_v0.2.pdf`
+ - `D&D 5e - DM's Basic Rules v 0.3.pdf`
+ - `Dungeon Master's Guide.pdf`
+
+ The eval scripts are designed to be reproducible from the repo, not hand-scored screenshots or one-off demos.
+
+ ### Mireglass retrieval benchmark
+
+ Command:
+
+ ```bash
+ npm run eval:agent-efficiency
+ ```
+
+ This benchmark compares:
+
+ - a CLI-agent style baseline that uses lexical search plus targeted file reads
+ - `search_context`
+ - `deep_search_context`
+
+ Latest Mireglass result:
+
+ | Method | Avg quality | Avg efficiency | Avg latency | Avg chars | Avg commands | Quality wins | Efficiency wins |
+ |---|---:|---:|---:|---:|---:|---:|---:|
+ | `cli_baseline` | `0.352` | `0.117` | `32.8 ms` | `2466.5` | `4.00` | `0/8` | `0/8` |
+ | `search_context` | `0.687` | `0.493` | `564.0 ms` | `8811.5` | `1.00` | `3/8` | `6/8` |
+ | `deep_search_context` | `0.722` | `0.436` | `1633.4 ms` | `9019.8` | `1.00` | `5/8` | `2/8` |
+
+ Quality bar chart:
 
- On the current Mireglass benchmark corpus, the latest comparative run produced:
+ ```text
+ cli_baseline 0.352 #######
+ search_context 0.687 ##############
+ deep_search_context 0.722 ##############
+ ```
+
+ Efficiency bar chart:
+
+ ```text
+ cli_baseline 0.117 ##
+ search_context 0.493 ##########
+ deep_search_context 0.436 #########
+ ```
 
- - `deep`: `69.2` average score
- - `grep`: `65.67` average score
- - `shallow`: `65.09` average score
+ Takeaway:
+
+ - `deep_search_context` was best on archive-style answer quality
+ - `search_context` was usually the best default on efficiency
+ - the CLI baseline needed more tool steps and still produced weaker evidence bundles
 
  The adversarial eval also showed that the current deep path reduced known noise-file leakage the most in this corpus.
 
+ ### D&D corpus benchmark
+
+ Command:
+
+ ```bash
+ npm run eval:dnd-agent-efficiency
+ ```
+
+ This benchmark compares:
+
+ - `cli_extract_cold`
+ - a realistic CLI-agent baseline that extracts PDF text fresh for each question, then does lexical ranking and excerpt selection
+ - `cli_extract_warm`
+ - the same baseline with the extracted corpus already in memory
+ - `search_context`
+ - `deep_search_context`
+
+ Latest D&D result:
+
+ | Method | Avg quality | Avg efficiency | Avg latency | Avg chars | Avg commands | Quality wins | Efficiency wins |
+ |---|---:|---:|---:|---:|---:|---:|---:|
+ | `cli_extract_cold` | `0.605` | `0.129` | `4850.7 ms` | `3692.8` | `4.00` | `0/6` | `0/6` |
+ | `cli_extract_warm` | `0.605` | `0.318` | `25.5 ms` | `3692.8` | `4.00` | `0/6` | `0/6` |
+ | `search_context` | `0.864` | `0.717` | `665.2 ms` | `9577.3` | `1.00` | `5/6` | `5/6` |
+ | `deep_search_context` | `0.880` | `0.716` | `1615.3 ms` | `7978.3` | `1.00` | `1/6` | `1/6` |
+
+ Quality bar chart:
+
+ ```text
+ cli_extract_cold 0.605 ############
+ cli_extract_warm 0.605 ############
+ search_context 0.864 #################
+ deep_search_context 0.880 ##################
+ ```
+
+ Efficiency bar chart:
+
+ ```text
+ cli_extract_cold 0.129 ###
+ cli_extract_warm 0.318 ######
+ search_context 0.717 ##############
+ deep_search_context 0.716 ##############
+ ```
+
+ Takeaway:
+
+ - on a larger PDF-heavy rules corpus, `search_context` was the best default for agent efficiency
+ - `deep_search_context` was slightly stronger on raw quality but usually not enough to justify the extra latency on straightforward rules questions
+ - even a warmed CLI extraction baseline was materially worse for grounded retrieval quality than the indexed search layer
+
+ ### Trust notes
+
+ These numbers are useful, but they are not universal truths.
+
+ - The benchmark corpora are local and finite.
+ - The scoring functions are explicit in the scripts and can be inspected or changed.
+ - The D&D benchmark favors grounded rules retrieval, not open-ended generation quality.
+ - The Mireglass benchmark favors multi-document archive reasoning and contradiction handling.
+ - For a new corpus, you should treat these as reference evals and add your own benchmark set before making strong deployment claims.
+
  ## Limitations
 
  This package is deploy-ready for text-first corpora, but it is not universal document intelligence.
@@ -0,0 +1,148 @@
+ import {
+ OwnSearchError
+ } from "./chunk-GDUOZGEP.js";
+
+ // src/agent-install.ts
+ import fs from "fs/promises";
+ import os from "os";
+ import path from "path";
+ import { execFile as execFileCallback } from "child_process";
+ import { promisify } from "util";
+ import TOML from "@iarna/toml";
+ var execFile = promisify(execFileCallback);
+ var OWNSEARCH_STDIO_CONFIG = {
+ command: "ownsearch",
+ args: ["serve-mcp"]
+ };
+ function getJsonServerConfig() {
+ return {
+ command: OWNSEARCH_STDIO_CONFIG.command,
+ args: OWNSEARCH_STDIO_CONFIG.args
+ };
+ }
+ async function ensureDir(dirPath) {
+ await fs.mkdir(dirPath, { recursive: true });
+ }
+ async function readJsonFile(filePath) {
+ try {
+ const raw = await fs.readFile(filePath, "utf8");
+ return JSON.parse(raw);
+ } catch {
+ return {};
+ }
+ }
+ async function writeJsonFile(filePath, value) {
+ await ensureDir(path.dirname(filePath));
+ await fs.writeFile(filePath, `${JSON.stringify(value, null, 2)}
+ `, "utf8");
+ }
+ function isRecord(value) {
+ return Boolean(value) && typeof value === "object" && !Array.isArray(value);
+ }
+ async function mergeJsonServerFile(filePath, topLevelKey) {
+ const parsed = await readJsonFile(filePath);
+ const next = { ...parsed };
+ const existingServers = isRecord(next[topLevelKey]) ? { ...next[topLevelKey] } : {};
+ existingServers.ownsearch = getJsonServerConfig();
+ next[topLevelKey] = existingServers;
+ await writeJsonFile(filePath, next);
+ return {
+ agent: "cursor",
+ method: "file-merge",
+ targetPath: filePath,
+ summary: `Updated ${filePath} and merged OwnSearch into ${topLevelKey}.`
+ };
+ }
+ async function installCodexConfig() {
+ const configPath = path.join(os.homedir(), ".codex", "config.toml");
+ await ensureDir(path.dirname(configPath));
+ let parsed = {};
+ try {
+ const raw = await fs.readFile(configPath, "utf8");
+ parsed = TOML.parse(raw);
+ } catch {
+ parsed = {};
+ }
+ const mcpServers = isRecord(parsed.mcp_servers) ? { ...parsed.mcp_servers } : {};
+ mcpServers.ownsearch = {
+ command: OWNSEARCH_STDIO_CONFIG.command,
+ args: OWNSEARCH_STDIO_CONFIG.args
+ };
+ parsed.mcp_servers = mcpServers;
+ await fs.writeFile(configPath, TOML.stringify(parsed), "utf8");
+ return {
+ agent: "codex",
+ method: "file-merge",
+ targetPath: configPath,
+ summary: `Updated ${configPath} and merged OwnSearch into [mcp_servers].`
+ };
+ }
+ async function installVsCodeConfig(agent) {
+ const payload = JSON.stringify({
+ name: "ownsearch",
+ command: OWNSEARCH_STDIO_CONFIG.command,
+ args: OWNSEARCH_STDIO_CONFIG.args
+ });
+ const commands = process.platform === "win32" ? ["code.cmd", "code"] : ["code"];
+ let lastError;
+ for (const command of commands) {
+ try {
+ await execFile(command, ["--add-mcp", payload], {
+ windowsHide: true
+ });
+ return {
+ agent,
+ method: "cli",
+ command,
+ summary: `Added OwnSearch to VS Code via \`${command} --add-mcp\`.`
+ };
+ } catch (error) {
+ lastError = error;
+ }
+ }
+ throw new OwnSearchError(
+ "Could not add OwnSearch to VS Code automatically because the `code` CLI was not found. Install the VS Code shell command or use `ownsearch print-agent-config vscode`."
+ );
+ }
+ async function installContinueConfig() {
+ const filePath = path.join(os.homedir(), ".continue", "mcpServers", "ownsearch.json");
+ await writeJsonFile(filePath, {
+ mcpServers: {
+ ownsearch: getJsonServerConfig()
+ }
+ });
+ return {
+ agent: "continue",
+ method: "file-merge",
+ targetPath: filePath,
+ summary: `Wrote Continue MCP config to ${filePath}.`
+ };
+ }
+ async function installAgentConfig(agent) {
+ switch (agent) {
+ case "codex":
+ return installCodexConfig();
+ case "vscode":
+ case "github-copilot":
+ return installVsCodeConfig(agent);
+ case "cursor": {
+ const result = await mergeJsonServerFile(path.join(os.homedir(), ".cursor", "mcp.json"), "mcpServers");
+ return { ...result, agent: "cursor" };
+ }
+ case "windsurf": {
+ const result = await mergeJsonServerFile(path.join(os.homedir(), ".codeium", "mcp_config.json"), "mcpServers");
+ return { ...result, agent: "windsurf" };
+ }
+ case "copilot-cli": {
+ const result = await mergeJsonServerFile(path.join(os.homedir(), ".copilot", "mcp-config.json"), "mcpServers");
+ return { ...result, agent: "copilot-cli" };
+ }
+ case "continue":
+ return installContinueConfig();
+ default:
+ throw new OwnSearchError(`Automatic MCP installation is not supported for ${agent}.`);
+ }
+ }
+ export {
+ installAgentConfig
+ };
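The "safe merge" the release notes promise lives in `mergeJsonServerFile` above: read the existing JSON config, shallow-copy the server map, add or replace only the `ownsearch` entry, and write back. The snippet below re-implements just the in-memory merge step (omitting the file I/O) to show that pre-existing MCP server entries survive; the function and sample data are illustrative, not part of the package:

```javascript
// Standalone illustration of the merge semantics used by mergeJsonServerFile:
// existing MCP servers are preserved, and only the "ownsearch" entry is
// added or replaced. Hypothetical sketch; the shipped function also reads
// and rewrites the JSON config file on disk.
function mergeOwnSearchServer(config, topLevelKey = "mcpServers") {
  const isRecord = (v) => Boolean(v) && typeof v === "object" && !Array.isArray(v);
  const next = { ...config };
  // Copy the existing server map (or start fresh if it is missing/malformed),
  // so sibling entries like "github" below are kept untouched.
  const servers = isRecord(next[topLevelKey]) ? { ...next[topLevelKey] } : {};
  servers.ownsearch = { command: "ownsearch", args: ["serve-mcp"] };
  next[topLevelKey] = servers;
  return next;
}

// Example: a config that already registers another MCP server.
const sample = { mcpServers: { github: { command: "gh-mcp" } } };
const merged = mergeOwnSearchServer(sample);
// merged.mcpServers now contains both "github" and "ownsearch"
```

The Codex path does the same dance for TOML under `[mcp_servers]`, and the VS Code path delegates to `code --add-mcp` instead of touching files directly.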
package/dist/cli.js CHANGED
@@ -26,21 +26,22 @@ loadOwnSearchEnv();
  var program = new Command();
  var PACKAGE_NAME = "ownsearch";
  var GEMINI_API_KEY_URL = "https://aistudio.google.com/apikey";
- var DOCKER_DESKTOP_WINDOWS_URL = "https://docs.docker.com/desktop/setup/install/windows-install/";
  var DOCKER_DESKTOP_OVERVIEW_URL = "https://docs.docker.com/desktop/";
+ var DOCKER_DESKTOP_WINDOWS_URL = "https://docs.docker.com/desktop/setup/install/windows-install/";
+ var DOCKER_DESKTOP_MAC_URL = "https://docs.docker.com/desktop/setup/install/mac-install/";
+ var DOCKER_ENGINE_LINUX_URL = "https://docs.docker.com/engine/install/";
  var BUNDLED_SKILL_NAME = "ownsearch-rag-search";
- var SUPPORTED_AGENTS = [
+ var SHOULD_SHOW_PROGRESS = process.stderr.isTTY;
+ var PACKAGE_VERSION = "0.1.8";
+ var AUTO_INSTALL_AGENTS = /* @__PURE__ */ new Set([
  "codex",
- "claude-desktop",
  "continue",
  "copilot-cli",
  "cursor",
  "github-copilot",
  "vscode",
  "windsurf"
- ];
- var SHOULD_SHOW_PROGRESS = process.stderr.isTTY;
- var PACKAGE_VERSION = "0.1.5";
+ ]);
  function requireGeminiKey() {
  if (!process.env.GEMINI_API_KEY) {
  throw new OwnSearchError("Set GEMINI_API_KEY before running OwnSearch.");
@@ -53,8 +54,26 @@ function progress(message, enabled = SHOULD_SHOW_PROGRESS) {
  process.stderr.write(`${message}
  `);
  }
+ function getDockerInstallLabel() {
+ if (process.platform === "win32") {
+ return "Windows install";
+ }
+ if (process.platform === "darwin") {
+ return "macOS install";
+ }
+ return "Linux install";
+ }
+ function getDockerInstallUrl() {
+ if (process.platform === "win32") {
+ return DOCKER_DESKTOP_WINDOWS_URL;
+ }
+ if (process.platform === "darwin") {
+ return DOCKER_DESKTOP_MAC_URL;
+ }
+ return DOCKER_ENGINE_LINUX_URL;
+ }
  async function loadDockerModule() {
- return import("./docker-YQNVOH2N.js");
+ return import("./docker-T4PNVYXI.js");
  }
  async function loadGeminiModule() {
  return import("./gemini-VERDPJ32.js");
@@ -74,6 +93,15 @@ async function loadContextModule() {
  async function loadRetrievalModule() {
  return import("./retrieval-ZCY6DDUD.js");
  }
+ async function loadAgentInstallModule() {
+ return import("./agent-install-DXYYHX4V.js");
+ }
+ function isAutoInstallAgent(agent) {
+ return AUTO_INSTALL_AGENTS.has(agent);
+ }
+ function isInstallableAgentName(agent) {
+ return AUTO_INSTALL_AGENTS.has(agent);
+ }
  function buildAgentConfig(agent) {
  const stdioConfig = {
  command: PACKAGE_NAME,
@@ -190,12 +218,18 @@ async function promptForGeminiKey() {
  output: process.stdout
  });
  try {
- console.log(`OwnSearch needs a Gemini API key for indexing and search.`);
- console.log("Gemini API usage is governed by Google\u2019s current free-tier limits, quotas, and pricing.");
- console.log(`Open Google AI Studio here: ${GEMINI_API_KEY_URL}`);
- console.log(`OwnSearch will save the key to ${getEnvPath()}`);
+ console.log("OwnSearch needs a Gemini API key for indexing and search.");
+ console.log("Gemini API usage is governed by Google's current free-tier limits, quotas, and pricing.");
+ console.log("");
+ console.log("Next step:");
+ console.log(" 1. OwnSearch will open Google AI Studio in your browser.");
+ console.log(" 2. You create or copy a Gemini API key there.");
+ console.log(` 3. You paste the key here, and OwnSearch saves it to ${getEnvPath()}.`);
+ console.log("");
+ console.log(`AI Studio URL: ${GEMINI_API_KEY_URL}`);
+ await rl.question("Press Enter when you are ready for OwnSearch to open AI Studio: ");
  openGeminiKeyPage();
- await rl.question("Press Enter after the AI Studio page is open and you are ready to paste the key: ");
+ await rl.question("Press Enter after AI Studio is open and you have copied the key: ");
  for (; ; ) {
  const apiKey = (await rl.question("Paste GEMINI_API_KEY and press Enter (Ctrl+C to cancel): ")).trim();
  if (!apiKey) {
@@ -317,14 +351,14 @@ function printSetupNextSteps() {
  console.log(' ownsearch deep-search-context "your question here" --final-limit 10 --max-chars 16000');
  console.log(" 6. Start the MCP server:");
  console.log(" ownsearch serve-mcp");
- console.log(" 7. Print agent-specific config:");
- console.log(" ownsearch print-agent-config codex");
+ console.log(" 7. Let OwnSearch install MCP config for a supported agent:");
+ console.log(" ownsearch install-agent-config codex");
  console.log(" 8. Print the bundled retrieval skill:");
  console.log(` ownsearch print-skill ${BUNDLED_SKILL_NAME}`);
  console.log("");
  console.log("Docker requirement");
- console.log(" OwnSearch requires Docker Desktop so it can run Qdrant locally.");
- console.log(` Windows install: ${DOCKER_DESKTOP_WINDOWS_URL}`);
+ console.log(" OwnSearch requires Docker so it can run Qdrant locally.");
+ console.log(` ${getDockerInstallLabel()}: ${getDockerInstallUrl()}`);
  console.log(` Docker docs: ${DOCKER_DESKTOP_OVERVIEW_URL}`);
  }
  function printAgentSetupNextSteps() {
@@ -340,12 +374,12 @@ function printAgentSetupNextSteps() {
  console.log(' ownsearch deep-search-context "your question here" --final-limit 10 --max-chars 16000');
  console.log(" Start the MCP server:");
  console.log(" ownsearch serve-mcp");
- console.log(" Print MCP config for the host agent:");
- console.log(" ownsearch print-agent-config codex");
+ console.log(" Let OwnSearch install MCP config automatically:");
+ console.log(" ownsearch install-agent-config codex");
  console.log("");
  console.log("Docker requirement");
- console.log(" OwnSearch requires Docker Desktop so it can run Qdrant locally.");
- console.log(` Windows install: ${DOCKER_DESKTOP_WINDOWS_URL}`);
+ console.log(" OwnSearch requires Docker so it can run Qdrant locally.");
+ console.log(` ${getDockerInstallLabel()}: ${getDockerInstallUrl()}`);
  console.log(` Docker docs: ${DOCKER_DESKTOP_OVERVIEW_URL}`);
  }
  async function promptForAgentChoice() {
@@ -431,12 +465,63 @@ function printAgentConfigSnippet(agent) {
  console.log(JSON.stringify(payload.config, null, 2));
  console.log("");
  console.log(`OwnSearch will load GEMINI_API_KEY from ${getEnvPath()} if you ran \`ownsearch setup\`.`);
+ if (isAutoInstallAgent(agent)) {
+ console.log(`To let OwnSearch install this automatically, run \`ownsearch install-agent-config ${agent}\`.`);
+ }
+ }
+ }
+ function printAgentInstallSummary(result) {
+ console.log("");
+ console.log(`OwnSearch installed MCP config for ${result.agent}`);
+ console.log(` Result: ${result.summary}`);
+ if (result.targetPath) {
+ console.log(` Config path: ${result.targetPath}`);
+ }
+ if (result.command) {
+ console.log(` Installer command: ${result.command}`);
+ }
+ console.log(" MCP guidance is built in:");
+ console.log(" resource: ownsearch://skills/retrieval");
+ console.log(" prompt: ownsearch-retrieval-guide");
+ console.log(" tool fallback: get_retrieval_skill");
+ }
+ async function maybeInstallAgentConfig(agent) {
+ if (!isAutoInstallAgent(agent) || !process.stdin.isTTY || !process.stdout.isTTY) {
+ printAgentConfigSnippet(agent);
+ return;
+ }
+ const rl = readline.createInterface({
+ input: process.stdin,
+ output: process.stdout
+ });
+ try {
+ console.log("");
+ console.log(`OwnSearch can install the MCP server for ${agent} automatically without removing other MCP servers.`);
+ const answer = (await rl.question("Install it now? [Y/n]: ")).trim().toLowerCase();
+ if (answer === "n" || answer === "no") {
+ printAgentConfigSnippet(agent);
+ return;
+ }
+ } finally {
+ rl.close();
+ }
+ try {
+ progress(`OwnSearch setup: installing MCP config for ${agent}...`);
+ const { installAgentConfig } = await loadAgentInstallModule();
+ const result = await installAgentConfig(agent);
+ printAgentInstallSummary(result);
+ } catch (error) {
+ console.log("");
+ console.log(`OwnSearch could not install MCP config for ${agent} automatically.`);
+ console.log(error instanceof Error ? error.message : String(error));
+ printAgentConfigSnippet(agent);
  }
  }
  function printSetupSummary(input) {
  console.log("OwnSearch setup complete");
  console.log(" Docker is required because OwnSearch runs Qdrant locally in Docker.");
- console.log(` Docker docs: ${DOCKER_DESKTOP_WINDOWS_URL}`);
+ console.log(` ${getDockerInstallLabel()}: ${getDockerInstallUrl()}`);
+ console.log(` Docker docs: ${DOCKER_DESKTOP_OVERVIEW_URL}`);
  console.log(` Config: ${input.configPath}`);
  console.log(` API key file: ${input.envPath}`);
  console.log(` Qdrant: ${input.qdrantUrl} (${input.qdrantStarted ? "started now" : "already running or reachable"})`);
@@ -448,16 +533,27 @@ function printSetupSummary(input) {
  } else {
  console.log(" Gemini API key: missing");
  }
+ console.log(" MCP guidance:");
+ console.log(" OwnSearch exposes a retrieval resource, prompt, and fallback tool so attached agents can learn the correct retrieval workflow immediately.");
+ console.log(" resource: ownsearch://skills/retrieval");
+ console.log(" prompt: ownsearch-retrieval-guide");
+ console.log(" tool fallback: get_retrieval_skill");
  }
  function printAgentSetupSummary(input) {
  console.log("OwnSearch setup ready for agent use");
  console.log(" Docker is required because OwnSearch runs Qdrant locally in Docker.");
- console.log(` Docker docs: ${DOCKER_DESKTOP_WINDOWS_URL}`);
+ console.log(` ${getDockerInstallLabel()}: ${getDockerInstallUrl()}`);
+ console.log(` Docker docs: ${DOCKER_DESKTOP_OVERVIEW_URL}`);
  console.log(` Config path: ${input.configPath}`);
  console.log(` Managed env path: ${input.envPath}`);
  console.log(` Qdrant endpoint: ${input.qdrantUrl}`);
  console.log(` Qdrant status: ${input.qdrantStarted ? "started during setup" : "already reachable"}`);
  console.log(` Gemini key: ${input.geminiApiKeyPresent ? `ready (${input.geminiApiKeySource})` : "missing"}`);
+ console.log(" MCP guidance is built in:");
+ console.log(" resource: ownsearch://skills/retrieval");
+ console.log(" prompt: ownsearch-retrieval-guide");
+ console.log(" tool fallback: get_retrieval_skill");
+ console.log(" Attached agents should load the resource or prompt first, then use literal_search, search_context, or deep_search_context as needed.");
  }
  program.name("ownsearch").description("Gemini-powered local search MCP server backed by Qdrant.").version(PACKAGE_VERSION);
  program.command("setup").description("Create config and start a local Qdrant Docker container.").option("--json", "Print machine-readable JSON output").option("--audience <audience>", "Choose output style: human or agent").action(async (options) => {
@@ -496,7 +592,7 @@ program.command("setup").description("Create config and start a local Qdrant Doc
  printSetupNextSteps();
  const agent = await promptForAgentChoice();
  if (agent) {
- printAgentConfigSnippet(agent);
+ await maybeInstallAgentConfig(agent);
  }
  }
  });
@@ -651,17 +747,14 @@ program.command("serve-mcp").description("Start the stdio MCP server.").action(a
  process.exitCode = code ?? 0;
  });
  });
- program.command("print-agent-config").argument("<agent>", SUPPORTED_AGENTS.join(" | ")).description("Print an MCP config snippet for a supported agent.").option("--json", "Print the full machine-readable payload").action(async (agent, options) => {
- if (SUPPORTED_AGENTS.includes(agent)) {
- const payload = buildAgentConfig(agent);
- if (options.json) {
- console.log(JSON.stringify(payload, null, 2));
- return;
- }
- printAgentConfigSnippet(agent);
- return;
- }
- throw new OwnSearchError(`Unsupported agent: ${agent}`);
+ program.command("install-agent-config").argument("<agent>", [...AUTO_INSTALL_AGENTS].join(" | ")).description("Safely install OwnSearch into a supported agent MCP config without removing other MCP servers.").action(async (agent) => {
+ if (!isInstallableAgentName(agent)) {
+ throw new OwnSearchError(`Automatic MCP installation is not supported for ${agent}.`);
+ }
+ progress(`OwnSearch: installing MCP config for ${agent}...`);
+ const { installAgentConfig } = await loadAgentInstallModule();
+ const result = await installAgentConfig(agent);
+ printAgentInstallSummary(result);
  });
  program.command("print-skill").argument("[skill]", `Bundled skill name (default ${BUNDLED_SKILL_NAME})`).description("Print a bundled OwnSearch skill that helps agents query retrieval tools more effectively.").action(async (skill) => {
  const skillName = skill?.trim() || BUNDLED_SKILL_NAME;