@arabold/docs-mcp-server 1.9.0 → 1.11.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +85 -241
- package/dist/{chunk-A5FW7XVC.js → chunk-VF2RUEVV.js} +779 -280
- package/dist/chunk-VF2RUEVV.js.map +1 -0
- package/dist/cli.js +46 -17
- package/dist/cli.js.map +1 -1
- package/dist/server.js +567 -366
- package/dist/server.js.map +1 -1
- package/package.json +6 -7
- package/dist/chunk-A5FW7XVC.js.map +0 -1
package/README.md
CHANGED
@@ -1,81 +1,50 @@
-#
+# Docs MCP Server: Enhance Your AI Coding Assistant
 
-
+AI coding assistants often struggle with outdated documentation, leading to incorrect suggestions or hallucinated code examples. Verifying AI responses against specific library versions can be time-consuming and inefficient.
 
-
-
-- 🌐 **Versatile Scraping:** Fetch documentation from diverse sources like websites, GitHub, npm, PyPI, or local files.
-- 🧠 **Intelligent Processing:** Automatically split content semantically and generate embeddings using your choice of models (OpenAI, Google Gemini, Azure OpenAI, AWS Bedrock, Ollama, and more).
-- 💾 **Optimized Storage:** Leverage SQLite with `sqlite-vec` for efficient vector storage and FTS5 for robust full-text search.
-- 🔍 **Powerful Hybrid Search:** Combine vector similarity and full-text search across different library versions for highly relevant results.
-- ⚙️ **Asynchronous Job Handling:** Manage scraping and indexing tasks efficiently with a background job queue and MCP/CLI tools.
-- 🐳 **Simple Deployment:** Get up and running quickly using Docker or npx.
-
-## Overview
-
-This project provides a Model Context Protocol (MCP) server designed to scrape, process, index, and search documentation for various software libraries and packages. It fetches content from specified URLs, splits it into meaningful chunks using semantic splitting techniques, generates vector embeddings using OpenAI, and stores the data in an SQLite database. The server utilizes `sqlite-vec` for efficient vector similarity search and FTS5 for full-text search capabilities, combining them for hybrid search results. It supports versioning, allowing documentation for different library versions (including unversioned content) to be stored and queried distinctly.
-
-The server exposes MCP tools for:
-
-- Starting a scraping job (`scrape_docs`): Returns a `jobId` immediately.
-- Checking job status (`get_job_status`): Retrieves the current status and progress of a specific job.
-- Listing active/completed jobs (`list_jobs`): Shows recent and ongoing jobs.
-- Cancelling a job (`cancel_job`): Attempts to stop a running or queued job.
-- Searching documentation (`search_docs`).
-- Listing indexed libraries (`list_libraries`).
-- Finding appropriate versions (`find_version`).
-- Removing indexed documents (`remove_docs`).
-- Fetching single URLs (`fetch_url`): Fetches a URL and returns its content as Markdown.
-
-## Configuration
-
-The following environment variables are supported to configure the embedding model behavior:
-
-### Embedding Model Configuration
-
-- `DOCS_MCP_EMBEDDING_MODEL`: **Optional.** Format: `provider:model_name` or just `model_name` (defaults to `text-embedding-3-small`). Supported providers and their required environment variables:
-
-- `openai` (default): Uses OpenAI's embedding models
-
-- `OPENAI_API_KEY`: **Required.** Your OpenAI API key
-- `OPENAI_ORG_ID`: **Optional.** Your OpenAI Organization ID
-- `OPENAI_API_BASE`: **Optional.** Custom base URL for OpenAI-compatible APIs (e.g., Ollama, Azure OpenAI)
+The **Docs MCP Server** addresses these challenges by providing a personal, always-current knowledge base for your AI assistant. It acts as a bridge, connecting your LLM directly to the **latest official documentation** from thousands of software libraries.
 
-
+By grounding AI responses in accurate, version-aware context, the Docs MCP Server enables you to receive concise and relevant integration details and code snippets, improving the reliability and efficiency of LLM-assisted development.
 
-
+It's **free**, **open-source**, runs **locally** for privacy, and integrates seamlessly with your workflow via the Model Context Protocol (MCP).
 
-
+## Why Use the Docs MCP Server?
 
-
+LLM-assisted coding promises speed and efficiency, but often falls short due to:
 
-
+- 🌀 **Stale Knowledge:** LLMs train on snapshots of the internet, quickly falling behind new library releases and API changes.
+- 👻 **Code Hallucinations:** AI can invent plausible-looking code that is syntactically correct but functionally wrong or uses non-existent APIs.
+- ❓ **Version Ambiguity:** Generic answers rarely account for the specific version dependencies in _your_ project, leading to subtle bugs.
+- ⏳ **Verification Overhead:** Developers spend valuable time double-checking AI suggestions against official documentation.
 
-
-- `AWS_SECRET_ACCESS_KEY`: **Required.** AWS secret key
-- `AWS_REGION` or `BEDROCK_AWS_REGION`: **Required.** AWS region for Bedrock
+**The Docs MCP Server tackles these problems head-on by:**
 
-
-
-
-
-- `AZURE_OPENAI_API_VERSION`: **Required.** Azure API version
+- ✅ **Providing Always Up-to-Date Context:** It fetches and indexes documentation _directly_ from official sources (websites, GitHub, npm, PyPI, local files) on demand.
+- 🎯 **Delivering Version-Specific Answers:** Search queries can target exact library versions, ensuring the information aligns with your project's dependencies.
+- 💡 **Reducing Hallucinations:** By grounding the LLM in real documentation, it provides accurate examples and integration details.
+- ⚡ **Boosting Productivity:** Get trustworthy answers faster, integrated directly into your AI assistant workflow.
 
-
-
-The database schema uses a fixed dimension of 1536 for embedding vectors. Only models that produce vectors with dimension ≤ 1536 are supported, except for certain providers (like Gemini) that support dimension reduction.
-
-For OpenAI-compatible APIs (like Ollama), use the `openai` provider with `OPENAI_API_BASE` pointing to your endpoint.
+## ✨ Key Features
 
-
+- **Up-to-Date Knowledge:** Fetches the latest documentation directly from the source.
+- **Version-Aware Search:** Get answers relevant to specific library versions (e.g., `react@18.2.0` vs `react@17.0.0`).
+- **Accurate Snippets:** Reduces AI hallucinations by using context from official docs.
+- **Broad Source Compatibility:** Scrapes websites, GitHub repos, package manager sites (npm, PyPI), and even local file directories.
+- **Intelligent Processing:** Automatically chunks documentation semantically and generates embeddings.
+- **Flexible Embedding Models:** Supports OpenAI (incl. compatible APIs like Ollama), Google Gemini/Vertex AI, Azure OpenAI, AWS Bedrock, and more.
+- **Powerful Hybrid Search:** Combines vector similarity with full-text search for relevance.
+- **Local & Private:** Runs entirely on your machine, keeping your data and queries private.
+- **Free & Open Source:** Built for the community, by the community.
+- **Simple Deployment:** Easy setup via Docker or `npx`.
+- **Seamless Integration:** Works with MCP-compatible clients (like Claude, Cline, Roo).
 
 ## Running the MCP Server
 
-
+Get up and running quickly!
 
-### Option 1: Using Docker
+### Option 1: Using Docker
 
-This
+This approach is easy, straightforward, and doesn't require Node.js to be installed.
 
 1. **Ensure Docker is installed and running.**
 2. **Configure your MCP settings:**
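The MCP settings block itself falls outside this hunk. As a rough, non-authoritative sketch of the command an MCP client would launch for the Docker option (the `docker run -i --rm \` fragment comes from the next hunk's context, and the image and volume names appear elsewhere in this README; the published settings block may use additional or different arguments):

```bash
# Sketch only -- the actual settings block is not shown in this diff.
docker run -i --rm \
  -e OPENAI_API_KEY="your-key-here" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest
```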
@@ -176,7 +145,7 @@ docker run -i --rm \
 
 ### Option 2: Using npx
 
-This approach is
+This approach is useful when you need local file access (e.g., indexing documentation from your local file system). While this can also be achieved by mounting paths into a Docker container, using npx is simpler but requires a Node.js installation.
 
 1. **Ensure Node.js is installed.**
 2. **Configure your MCP settings:**
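The npx settings block is likewise outside this hunk. As an illustrative sketch, assuming the `docs-server` binary shown in Option 3 below also provides the default stdio transport when `--protocol` is omitted:

```bash
# Illustrative only -- the published settings block may differ.
# Provider credentials (e.g., OPENAI_API_KEY) must be set in the environment.
npx -y --package=@arabold/docs-mcp-server docs-server
```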
@@ -204,10 +173,29 @@ This approach is recommended when you need local file access (e.g., indexing doc
 
 3. **That's it!** The server will now be available to your AI assistant.
 
+### Option 3: Using npx with HTTP Protocol
+
+Similar to Option 2, this uses `npx` to run the latest published package without needing Docker or a local clone. However, this option starts the server using the Streamable HTTP protocol instead of the default stdio, making it accessible via HTTP endpoints. This is useful if you have multiple clients, you work with multiple code assistants in parallel, or want to expose the server to other applications.
+
+1. **Ensure Node.js is installed.**
+2. **Run the command:**
+
+```bash
+# Ensure required environment variables like OPENAI_API_KEY are set
+npx --package=@arabold/docs-mcp-server docs-server --protocol http --port 8000
+```
+
+- `--protocol http`: Instructs the server to use the HTTP protocol.
+- `--port <number>`: Specifies the listening port (default: 8000).
+
+The server will expose endpoints like `/mcp` and `/sse` on the specified port.
+
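As a quick, hypothetical smoke test for the HTTP mode added above (assuming the default port of 8000), you can check that the SSE endpoint responds; how an HTTP-capable MCP client is pointed at `/mcp` or `/sse` depends entirely on the client you use.

```bash
# Hypothetical check that the server is listening on the default port.
# -N disables buffering so the SSE stream is printed as it arrives.
curl -N http://localhost:8000/sse
```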
 ## Using the CLI
 
 You can use the CLI to manage documentation directly, either via Docker or npx. **Important: Use the same method (Docker or npx) for both the server and CLI to ensure access to the same indexed documentation.**
 
+Here's how to invoke the CLI:
+
 ### Using Docker CLI
 
 If you're running the server with Docker, use Docker for the CLI as well:
@@ -232,151 +220,58 @@ npx -y --package=@arabold/docs-mcp-server docs-cli <command> [options]
 
 The npx approach will use the default data directory on your system (typically in your home directory), ensuring consistency between server and CLI.
 
-
-
-### CLI Command Reference
-
-The `docs-cli` provides commands for managing the documentation index. Access it either via Docker (`docker run -v docs-mcp-data:/data ghcr.io/arabold/docs-mcp-server:latest docs-cli ...`) or `npx` (`npx -y --package=@arabold/docs-mcp-server docs-cli ...`).
-
-**General Help:**
-
-```bash
-docs-cli --help
-# or
-npx -y --package=@arabold/docs-mcp-server docs-cli --help
-```
-
-**Command Specific Help:** (Replace `docs-cli` with the `npx...` command if not installed globally)
-
-```bash
-docs-cli scrape --help
-docs-cli search --help
-docs-cli fetch-url --help
-docs-cli find-version --help
-docs-cli remove --help
-docs-cli list --help
-```
-
-### Fetching Single URLs (`fetch-url`)
-
-Fetches a single URL and converts its content to Markdown. Unlike `scrape`, this command does not crawl links or store the content.
-
-```bash
-docs-cli fetch-url <url> [options]
-```
-
-**Options:**
-
-- `--no-follow-redirects`: Disable following HTTP redirects (default: follow redirects)
-
-**Examples:**
-
-```bash
-# Fetch a URL and convert to Markdown
-docs-cli fetch-url https://example.com/page.html
-```
-
-### Scraping Documentation (`scrape`)
-
-Scrapes and indexes documentation from a given URL for a specific library.
-
-```bash
-docs-cli scrape <library> <url> [options]
-```
-
-**Options:**
-
-- `-v, --version <string>`: The specific version to associate with the scraped documents.
-- Accepts full versions (`1.2.3`), pre-release versions (`1.2.3-beta.1`), or partial versions (`1`, `1.2` which are expanded to `1.0.0`, `1.2.0`).
-- If omitted, the documentation is indexed as **unversioned**.
-- `-p, --max-pages <number>`: Maximum pages to scrape (default: 1000).
-- `-d, --max-depth <number>`: Maximum navigation depth (default: 3).
-- `-c, --max-concurrency <number>`: Maximum concurrent requests (default: 3).
-- `--ignore-errors`: Ignore errors during scraping (default: true).
-
-**Examples:**
-
-```bash
-# Scrape React 18.2.0 docs
-docs-cli scrape react --version 18.2.0 https://react.dev/
-```
-
-### Searching Documentation (`search`)
-
-Searches the indexed documentation for a library, optionally filtering by version.
+The main commands available are:
 
-
-
-
+- `scrape`: Scrapes and indexes documentation from a URL.
+- `search`: Searches the indexed documentation.
+- `list`: Lists all indexed libraries.
+- `remove`: Removes indexed documentation.
+- `fetch-url`: Fetches a single URL and converts to Markdown.
+- `find-version`: Finds the best matching version for a library.
 
-
+See the [CLI Command Reference](#cli-command-reference) below for detailed command usage.
 
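For quick orientation, the invocations below are taken verbatim from the CLI reference that this diff removes; they still illustrate typical usage of the commands listed above.

```bash
# Show general help (via npx)
npx -y --package=@arabold/docs-mcp-server docs-cli --help

# Scrape React 18.2.0 docs
docs-cli scrape react --version 18.2.0 https://react.dev/

# Search latest React docs for 'hooks'
docs-cli search react 'hooks'
```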
-
-- Supports exact versions (`18.0.0`), partial versions (`18`), or ranges (`18.x`).
-- If omitted, searches the **latest** available indexed version.
-- If a specific version/range doesn't match, it falls back to the latest indexed version _older_ than the target.
-- To search **only unversioned** documents, explicitly pass an empty string: `--version ""`. (Note: Omitting `--version` searches latest, which _might_ be unversioned if no other versions exist).
-- `-l, --limit <number>`: Maximum number of results (default: 5).
-- `-e, --exact-match`: Only match the exact version specified (disables fallback and range matching) (default: false).
-
-**Examples:**
-
-```bash
-# Search latest React docs for 'hooks'
-docs-cli search react 'hooks'
-```
-
-### Finding Available Versions (`find-version`)
-
-Checks the index for the best matching version for a library based on a target, and indicates if unversioned documents exist.
-
-```bash
-docs-cli find-version <library> [options]
-```
+## Configuration
 
-
+The following environment variables are supported to configure the embedding model behavior:
 
-
+### Embedding Model Configuration
 
-**
+- `DOCS_MCP_EMBEDDING_MODEL`: **Optional.** Format: `provider:model_name` or just `model_name` (defaults to `text-embedding-3-small`). Supported providers and their required environment variables:
 
-
-# Find the latest indexed version for react
-docs-cli find-version react
-```
+- `openai` (default): Uses OpenAI's embedding models
 
-
+- `OPENAI_API_KEY`: **Required.** Your OpenAI API key
+- `OPENAI_ORG_ID`: **Optional.** Your OpenAI Organization ID
+- `OPENAI_API_BASE`: **Optional.** Custom base URL for OpenAI-compatible APIs (e.g., Ollama, Azure OpenAI)
 
-
+- `vertex`: Uses Google Cloud Vertex AI embeddings
 
-
-docs-cli list
-```
+- `GOOGLE_APPLICATION_CREDENTIALS`: **Required.** Path to service account JSON key file
 
-
+- `gemini`: Uses Google Generative AI (Gemini) embeddings
 
-
+- `GOOGLE_API_KEY`: **Required.** Your Google API key
 
-
-docs-cli remove <library> [options]
-```
+- `aws`: Uses AWS Bedrock embeddings
 
-**
+- `AWS_ACCESS_KEY_ID`: **Required.** AWS access key
+- `AWS_SECRET_ACCESS_KEY`: **Required.** AWS secret key
+- `AWS_REGION` or `BEDROCK_AWS_REGION`: **Required.** AWS region for Bedrock
 
--
+- `microsoft`: Uses Azure OpenAI embeddings
+- `AZURE_OPENAI_API_KEY`: **Required.** Azure OpenAI API key
+- `AZURE_OPENAI_API_INSTANCE_NAME`: **Required.** Azure instance name
+- `AZURE_OPENAI_API_DEPLOYMENT_NAME`: **Required.** Azure deployment name
+- `AZURE_OPENAI_API_VERSION`: **Required.** Azure API version
 
-
+### Vector Dimensions
 
-
-# Remove React 18.2.0 docs
-docs-cli remove react --version 18.2.0
-```
+The database schema uses a fixed dimension of 1536 for embedding vectors. Only models that produce vectors with dimension ≤ 1536 are supported, except for certain providers (like Gemini) that support dimension reduction.
 
-
+For OpenAI-compatible APIs (like Ollama), use the `openai` provider with `OPENAI_API_BASE` pointing to your endpoint.
 
-
-- **Searching/Finding:** Accepts specific versions, partials, or ranges (`X.Y.Z`, `X.Y`, `X`, `X.x`). Falls back to the latest older version if the target doesn't match. Omitting the version targets the latest available. Explicitly searching `--version ""` targets unversioned documents.
-- **Unversioned Docs:** Libraries can have documentation stored without a specific version (by omitting `--version` during scrape). These can be searched explicitly using `--version ""`. The `find-version` command will also report if unversioned docs exist alongside any semver matches.
+These variables can be set regardless of how you run the server (Docker, npx, or from source).
 
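A minimal sketch of how these variables fit together, assuming the default OpenAI provider and, alternatively, an OpenAI-compatible endpoint such as Ollama (the base URL and model name in the second block are illustrative assumptions, not taken from this README):

```bash
# Default: OpenAI embeddings (model name is the documented default)
export OPENAI_API_KEY="your-key-here"
export DOCS_MCP_EMBEDDING_MODEL="text-embedding-3-small"

# Alternative: an OpenAI-compatible endpoint such as Ollama.
# The URL and model are examples; use whatever your endpoint actually serves,
# and keep the embedding dimension at or below 1536.
export OPENAI_API_BASE="http://localhost:11434/v1"
export OPENAI_API_KEY="unused-but-required"
export DOCS_MCP_EMBEDDING_MODEL="openai:nomic-embed-text"
```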
 ## Development & Advanced Setup
 
@@ -386,39 +281,6 @@ This section covers running the server/CLI directly from the source code for dev
 
 This provides an isolated environment and exposes the server via HTTP endpoints.
 
-1. **Clone the repository:**
-```bash
-git clone https://github.com/arabold/docs-mcp-server.git # Replace with actual URL if different
-cd docs-mcp-server
-```
-2. **Create `.env` file:**
-Copy the example and add your OpenAI key (see "Environment Setup" below).
-```bash
-cp .env.example .env
-# Edit .env and add your OPENAI_API_KEY
-```
-3. **Build the Docker image:**
-```bash
-docker build -t docs-mcp-server .
-```
-4. **Run the Docker container:**
-
-```bash
-# Option 1: Using a named volume (recommended)
-# Docker automatically creates the volume 'docs-mcp-data' if it doesn't exist on first run.
-docker run -i --env-file .env -v docs-mcp-data:/data --name docs-mcp-server docs-mcp-server
-
-# Option 2: Mapping to a host directory
-# docker run -i --env-file .env -v /path/on/your/host:/data --name docs-mcp-server docs-mcp-server
-```
-
-- `-i`: Keep STDIN open even if not attached. This is crucial for interacting with the server over stdio.
-- `--env-file .env`: Loads environment variables (like `OPENAI_API_KEY`) from your local `.env` file.
-- `-v docs-mcp-data:/data` or `-v /path/on/your/host:/data`: **Crucial for persistence.** This mounts a Docker named volume (Docker creates `docs-mcp-data` automatically if needed) or a host directory to the `/data` directory inside the container. The `/data` directory is where the server stores its `documents.db` file (as configured by `DOCS_MCP_STORE_PATH` in the Dockerfile). This ensures your indexed documentation persists even if the container is stopped or removed.
-- `--name docs-mcp-server`: Assigns a convenient name to the container.
-
-The server inside the container now runs directly using Node.js and communicates over **stdio**.
-
 This method is useful for contributing to the project or running un-published versions.
 
 1. **Clone the repository:**
@@ -475,7 +337,7 @@ This method is useful for contributing to the project or running un-published ve
 # DOCS_MCP_STORE_PATH=/path/to/your/desired/storage/directory
 ```
 
-###
+### Testing (from Source)
 
 Since MCP servers communicate over stdio when run directly via Node.js, debugging can be challenging. We recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector), which is available as a package script after building:
 
@@ -485,24 +347,6 @@ npx @modelcontextprotocol/inspector node dist/server.js
 
 The Inspector will provide a URL to access debugging tools in your browser.
 
-### Releasing
-
-This project uses [semantic-release](https://github.com/semantic-release/semantic-release) and [Conventional Commits](https://www.conventionalcommits.org/) to automate the release process.
-
-**How it works:**
-
-1. **Commit Messages:** All commits merged into the `main` branch **must** follow the Conventional Commits specification.
-2. **Manual Trigger:** The "Release" GitHub Actions workflow can be triggered manually from the Actions tab when you're ready to create a new release.
-3. **`semantic-release` Actions:** Determines version, updates `CHANGELOG.md` & `package.json`, commits, tags, publishes to npm, and creates a GitHub Release.
-
-**What you need to do:**
-
-- Use Conventional Commits.
-- Merge changes to `main`.
-- Trigger a release manually when ready from the Actions tab in GitHub.
-
-**Automation handles:** Changelog, version bumps, tags, npm publish, GitHub releases.
-
 ### Architecture
 
 For details on the project's architecture and design principles, please see [ARCHITECTURE.md](ARCHITECTURE.md).