@dpesch/mantisbt-mcp-server 1.5.2 → 1.5.5
- package/CHANGELOG.md +30 -0
- package/README.de.md +10 -3
- package/README.md +10 -3
- package/dist/config.js +8 -0
- package/dist/index.js +13 -4
- package/dist/search/store.js +7 -2
- package/dist/search/sync.js +13 -0
- package/dist/tools/files.js +9 -2
- package/dist/tools/issues.js +25 -1
- package/package.json +4 -4
- package/scripts/hooks/pre-push.mjs +195 -0
- package/scripts/init.mjs +95 -0
- package/tests/config.test.ts +69 -0
- package/tests/helpers/search-mocks.ts +1 -0
- package/tests/search/store.test.ts +45 -0
- package/tests/search/sync.test.ts +54 -0
- package/tests/tools/files.test.ts +71 -1
- package/tests/tools/issues.test.ts +52 -0
package/CHANGELOG.md
CHANGED

```diff
@@ -7,6 +7,36 @@ This project adheres to [Semantic Versioning](https://semver.org/).
 
 ---
 
+## [1.5.5] – 2026-03-18
+
+### Fixed
+
+- Semantic search index sync: eliminated O(n²) disk write amplification during initial index builds. Previously `addBatch()` wrote the entire `index.json` to disk after every batch, causing n/batch_size complete rewrites for a full rebuild. Now `addBatch()` only updates the in-memory map; a new `flush()` method (atomic write via tmp file + rename) persists to disk. `SearchSyncService.sync()` calls `flush()` as a checkpoint every 100 indexed issues (`CHECKPOINT_INTERVAL=100`), limiting data loss on process kill to at most 100 issues, and performs a final flush after the loop for any remaining items.
+
+---
+
+## [1.5.4] – 2026-03-18
+
+### Fixed
+
+- `registerFileTools`: the `uploadDir` parameter was typed as required (`string | undefined`) instead of truly optional (`uploadDir?: string`), causing TypeScript errors in callers that omit the argument and breaking the CI typecheck step.
+
+### Changed
+
+- Added `npm run init` setup script (`scripts/init.mjs`): checks Node.js version (≥ 18), runs `npm install`, installs git hooks from `scripts/hooks/`, and runs a typecheck to verify the setup.
+- Git pre-push hook logic is now version-controlled in `scripts/hooks/pre-push.mjs`; the hook runs `npm run typecheck` before every push to catch type errors locally before they reach CI.
+
+---
+
+## [1.5.3] – 2026-03-17
+
+### Security
+
+- Removed unused `vectra` dependency. The package was listed in `dependencies` but never imported — `VectraStore` is a self-contained implementation. Removing it eliminates three transitive CVEs in the `openai` → `axios` chain (GHSA-jr5f-v2jv-69x6 SSRF/credential-leakage, GHSA-43fc-jf86-j433 DoS, GHSA-wf5p-g6vw-rhxx CSRF).
+- `upload_file`: new optional `MANTIS_UPLOAD_DIR` environment variable restricts `file_path` uploads to a configured directory. When set, any path that resolves outside the directory (including `../` traversal attempts) is rejected before the file is read. Without the variable the behaviour is unchanged (no restriction). The resolved directory prefix is computed once at server start, not per request.
+- HTTP transport now binds to `127.0.0.1` (localhost only) by default instead of `0.0.0.0` (all interfaces). This prevents unintended exposure on network interfaces when the server is started without explicit network configuration. Set `MCP_HTTP_HOST=0.0.0.0` to restore the previous behaviour (required for Docker and remote access).
+- New optional `MCP_HTTP_TOKEN` environment variable: when set, the `/mcp` endpoint requires an `Authorization: Bearer <token>` header. Requests without a valid token receive HTTP 401. The `/health` endpoint remains public regardless of this setting.
+- New optional `MCP_HTTP_HOST` environment variable: overrides the bind address for HTTP mode (default: `127.0.0.1`).
+- `update_issue`: the `fields` parameter now validates against an explicit allowlist of known MantisBT field names (`summary`, `description`, `steps_to_reproduce`, `additional_information`, `status`, `resolution`, `priority`, `severity`, `reproducibility`, `handler`, `category`, `version`, `fixed_in_version`, `target_version`, `view_state`, `tags`, `custom_fields`); unknown keys are rejected with a validation error. Reference objects (`status`, `handler`, `reproducibility`, `version`, `view_state`, etc.) must now contain at least `id` or `name` — empty objects `{}` are rejected. Previously any key was accepted and forwarded directly to the API.
+
+---
+
 ## [1.5.2] – 2026-03-17
 
 ### Fixed
```
package/README.de.md
CHANGED

````diff
@@ -38,7 +38,7 @@ In `~/.claude/claude_desktop_config.json` (Claude Desktop) oder der lokalen
 ```bash
 git clone https://codeberg.org/dpesch/mantisbt-mcp-server
 cd mantisbt-mcp-server
-npm
+npm run init
 npm run build
 ```
 
@@ -69,11 +69,14 @@ npm run build
 | `MANTIS_CACHE_TTL` | – | `3600` | Cache-Lebensdauer in Sekunden |
 | `TRANSPORT` | – | `stdio` | Transport-Modus: `stdio` oder `http` |
 | `PORT` | – | `3000` | Port für HTTP-Modus |
+| `MCP_HTTP_HOST` | – | `127.0.0.1` | Bind-Adresse für HTTP-Modus. **Geändert von `0.0.0.0` auf `127.0.0.1`** — der Server horcht standardmäßig nur auf localhost. Für Docker oder Remote-Zugriff `0.0.0.0` setzen. |
+| `MCP_HTTP_TOKEN` | – | – | Wenn gesetzt, muss jede `/mcp`-Anfrage den Header `Authorization: Bearer <token>` enthalten. `/health` ist immer öffentlich. |
 | `MANTIS_SEARCH_ENABLED` | – | `false` | Auf `true` setzen, um die semantische Suche zu aktivieren |
 | `MANTIS_SEARCH_BACKEND` | – | `vectra` | Vektorspeicher: `vectra` (reines JS) oder `sqlite-vec` (manuelle Installation erforderlich) |
 | `MANTIS_SEARCH_DIR` | – | `{MANTIS_CACHE_DIR}/search` | Verzeichnis für den Suchindex |
 | `MANTIS_SEARCH_MODEL` | – | `Xenova/paraphrase-multilingual-MiniLM-L12-v2` | Embedding-Modell (wird beim ersten Start einmalig heruntergeladen, ~80 MB) |
 | `MANTIS_SEARCH_THREADS` | – | `1` | Anzahl der ONNX-Intra-Op-Threads für das Embedding-Modell. Standard ist 1, um CPU-Sättigung auf Mehrkernsystemen und in WSL zu verhindern. Nur erhöhen, wenn die Indexierungsgeschwindigkeit kritisch ist und der Host ausschließlich für diese Last vorgesehen ist. |
+| `MANTIS_UPLOAD_DIR` | – | – | Schränkt `upload_file` auf Dateien in diesem Verzeichnis ein. Wenn gesetzt, wird jeder `file_path` außerhalb des Verzeichnisses abgelehnt (Pfad-Traversal-Versuche via `../` werden blockiert). Ohne diese Variable gilt keine Einschränkung. |
 
 ### Config-Datei (Fallback)
 
@@ -199,14 +202,18 @@ Für den Einsatz als eigenständiger Server (z.B. in Remote-Setups):
 
 ```bash
 MANTIS_BASE_URL=... MANTIS_API_KEY=... TRANSPORT=http PORT=3456 node dist/index.js
+
+# Mit Token-Authentifizierung und expliziter Bind-Adresse (erforderlich für Docker/Remote):
+# MCP_HTTP_TOKEN=secret MANTIS_BASE_URL=... MANTIS_API_KEY=... \
+#   TRANSPORT=http PORT=3456 MCP_HTTP_HOST=0.0.0.0 node dist/index.js
 ```
 
-Healthcheck: `GET http://localhost:3456/health`
+Healthcheck: `GET http://localhost:3456/health` (immer öffentlich, kein Token erforderlich)
 
 ## Entwicklung
 
 ```bash
-npm
+npm run init       # Ersteinrichtung: Abhängigkeiten, Git-Hooks, Typprüfung
 npm run build      # TypeScript → dist/ kompilieren
 npm run typecheck  # Typprüfung ohne Ausgabe
 npm run dev        # Watch-Modus für Entwicklung
````
package/README.md
CHANGED

````diff
@@ -38,7 +38,7 @@ Add to `~/.claude/claude_desktop_config.json` (Claude Desktop) or your local
 ```bash
 git clone https://codeberg.org/dpesch/mantisbt-mcp-server
 cd mantisbt-mcp-server
-npm
+npm run init
 npm run build
 ```
 
@@ -69,11 +69,14 @@ npm run build
 | `MANTIS_CACHE_TTL` | – | `3600` | Cache lifetime in seconds |
 | `TRANSPORT` | – | `stdio` | Transport mode: `stdio` or `http` |
 | `PORT` | – | `3000` | Port for HTTP mode |
+| `MCP_HTTP_HOST` | – | `127.0.0.1` | Bind address for HTTP mode. **Changed from `0.0.0.0` to `127.0.0.1`** — the server now listens on localhost only by default. Set to `0.0.0.0` for Docker or remote access. |
+| `MCP_HTTP_TOKEN` | – | – | When set, the `/mcp` endpoint requires `Authorization: Bearer <token>`. The `/health` endpoint is always public. |
 | `MANTIS_SEARCH_ENABLED` | – | `false` | Set to `true` to enable semantic search |
 | `MANTIS_SEARCH_BACKEND` | – | `vectra` | Vector store backend: `vectra` (pure JS) or `sqlite-vec` (requires manual install) |
 | `MANTIS_SEARCH_DIR` | – | `{MANTIS_CACHE_DIR}/search` | Directory for the search index |
 | `MANTIS_SEARCH_MODEL` | – | `Xenova/paraphrase-multilingual-MiniLM-L12-v2` | Embedding model name (downloaded once on first use, ~80 MB) |
 | `MANTIS_SEARCH_THREADS` | – | `1` | Number of ONNX intra-op threads for the embedding model. Default is 1 to prevent CPU saturation on multi-core machines and WSL. Increase only if index rebuild speed matters and the host is dedicated to this workload. |
+| `MANTIS_UPLOAD_DIR` | – | – | Restrict `upload_file` to files within this directory. When set, any `file_path` outside the directory is rejected (path traversal attempts via `../` are blocked). Without this variable there is no restriction. |
 
 ### Config file (fallback)
 
@@ -199,14 +202,18 @@ For use as a standalone server (e.g. in remote setups):
 
 ```bash
 MANTIS_BASE_URL=... MANTIS_API_KEY=... TRANSPORT=http PORT=3456 node dist/index.js
+
+# With token authentication and explicit bind address (required for Docker/remote):
+# MCP_HTTP_TOKEN=secret MANTIS_BASE_URL=... MANTIS_API_KEY=... \
+#   TRANSPORT=http PORT=3456 MCP_HTTP_HOST=0.0.0.0 node dist/index.js
 ```
 
-Health check: `GET http://localhost:3456/health`
+Health check: `GET http://localhost:3456/health` (always public, no token required)
 
 ## Development
 
 ```bash
-npm
+npm run init       # First-time setup: install deps, git hooks, typecheck
 npm run build      # Compile TypeScript → dist/
 npm run typecheck  # Type check without output
 npm run dev        # Watch mode for development
````
package/dist/config.js
CHANGED

```diff
@@ -77,11 +77,19 @@ export async function getConfig() {
     const searchModelName = process.env.MANTIS_SEARCH_MODEL ??
         'Xenova/paraphrase-multilingual-MiniLM-L12-v2';
     const searchNumThreads = Math.max(1, parseInt(process.env.MANTIS_SEARCH_THREADS ?? '', 10) || 1);
+    const uploadDir = process.env.MANTIS_UPLOAD_DIR;
+    const httpHost = process.env.MCP_HTTP_HOST ?? '127.0.0.1';
+    const httpPort = parseInt(process.env.PORT ?? '3000', 10);
+    const httpToken = process.env.MCP_HTTP_TOKEN;
     cachedConfig = {
         baseUrl: baseUrl.replace(/\/$/, ''), // strip trailing slash
         apiKey,
         cacheDir,
         cacheTtl,
+        uploadDir,
+        httpHost,
+        httpPort,
+        httpToken,
         search: {
             enabled: searchEnabled,
             backend: searchBackend,
```
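The `searchNumThreads` line above packs three behaviours into one expression: an unset variable, a non-numeric value, and `0` all fall back to `1`. A minimal sketch (the `parseThreads` helper is illustrative, not part of the package):

```javascript
// Sketch of the MANTIS_SEARCH_THREADS fallback chain:
// parseInt('' , 10) is NaN, and `NaN || 1` falls through to the default,
// while Math.max(1, …) clamps explicit zeros and negatives.
const parseThreads = raw => Math.max(1, parseInt(raw ?? '', 10) || 1);

console.log(parseThreads(undefined)); // 1 — variable unset
console.log(parseThreads('abc'));     // 1 — non-numeric
console.log(parseThreads('0'));       // 1 — clamped by Math.max
console.log(parseThreads('4'));       // 4
```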
package/dist/index.js
CHANGED

```diff
@@ -44,7 +44,7 @@ async function createMcpServer() {
     });
     registerIssueTools(server, client, cache);
     registerNoteTools(server, client);
-    registerFileTools(server, client);
+    registerFileTools(server, client, config.uploadDir);
     registerRelationshipTools(server, client);
     registerMonitorTools(server, client);
     registerProjectTools(server, client);
@@ -75,10 +75,19 @@ async function runStdio() {
     process.stdin.once('close', () => process.exit(0));
 }
 async function runHttp() {
+    const config = await getConfig();
     const server = await createMcpServer();
-    const port =
+    const port = config.httpPort;
     const httpServer = createServer(async (req, res) => {
         if (req.method === 'POST' && req.url === '/mcp') {
+            if (config.httpToken) {
+                const auth = req.headers['authorization'];
+                if (auth !== `Bearer ${config.httpToken}`) {
+                    res.writeHead(401, { 'Content-Type': 'application/json' });
+                    res.end(JSON.stringify({ error: 'Unauthorized' }));
+                    return;
+                }
+            }
             const chunks = [];
             req.on('data', (chunk) => chunks.push(chunk));
             req.on('end', async () => {
@@ -107,8 +116,8 @@ async function runHttp() {
             res.end();
         }
     });
-    httpServer.listen(port, () => {
-        console.error(`MantisBT MCP Server v${version} running on http
+    httpServer.listen(port, config.httpHost, () => {
+        console.error(`MantisBT MCP Server v${version} running on http://${config.httpHost}:${port}/mcp`);
     });
 }
 // ---------------------------------------------------------------------------
```
package/dist/search/store.js
CHANGED

```diff
@@ -1,4 +1,4 @@
-import { readFile, writeFile, mkdir, unlink } from 'node:fs/promises';
+import { readFile, writeFile, rename, mkdir, unlink } from 'node:fs/promises';
 import { join } from 'node:path';
 // ---------------------------------------------------------------------------
 // VectraStore
@@ -34,8 +34,10 @@ export class VectraStore {
     }
     async persist() {
         const indexFile = join(this.vectraDir, 'index.json');
+        const tmpFile = indexFile + '.tmp';
         const data = JSON.stringify([...this.items.values()]);
-        await writeFile(
+        await writeFile(tmpFile, data, 'utf-8');
+        await rename(tmpFile, indexFile);
     }
     async add(item) {
         await this.ensureLoaded();
@@ -47,6 +49,9 @@ export class VectraStore {
         for (const item of items) {
             this.items.set(item.id, item);
         }
+        // No persist() here — call flush() once after all batches are processed.
+    }
+    async flush() {
         await this.persist();
     }
     async search(vector, topN) {
```
package/dist/search/sync.js
CHANGED

```diff
@@ -3,6 +3,7 @@
 // ---------------------------------------------------------------------------
 const PAGE_SIZE = 50;
 const EMBED_BATCH_SIZE = 10;
+const CHECKPOINT_INTERVAL = 100; // flush to disk every N indexed issues
 export class SearchSyncService {
     client;
     store;
@@ -17,6 +18,7 @@ export class SearchSyncService {
         const { issues: allIssues, totalFromApi } = await this.fetchAllIssues(lastSyncedAt ?? undefined, projectId);
         let indexed = 0;
         let skipped = 0;
+        let indexedSinceCheckpoint = 0;
         // Process in batches of EMBED_BATCH_SIZE
         for (let i = 0; i < allIssues.length; i += EMBED_BATCH_SIZE) {
             const batch = allIssues.slice(i, i + EMBED_BATCH_SIZE);
@@ -43,6 +45,17 @@ export class SearchSyncService {
             }));
             await this.store.addBatch(batchItems);
             indexed += batchItems.length;
+            indexedSinceCheckpoint += batchItems.length;
+            // Checkpoint flush: persist every CHECKPOINT_INTERVAL issues to limit
+            // data loss if the process is killed before the final flush.
+            if (indexedSinceCheckpoint >= CHECKPOINT_INTERVAL) {
+                await this.store.flush();
+                indexedSinceCheckpoint = 0;
+            }
+        }
+        // Final flush for any remaining items not yet written by a checkpoint.
+        if (indexedSinceCheckpoint > 0) {
+            await this.store.flush();
         }
         await this.store.setLastSyncedAt(new Date().toISOString());
         // Persist the best known total for get_search_index_status.
```
package/dist/tools/files.js
CHANGED

```diff
@@ -1,5 +1,5 @@
 import { readFile } from 'node:fs/promises';
-import { basename } from 'node:path';
+import { basename, resolve, sep } from 'node:path';
 import { z } from 'zod';
 import { getVersionHint } from '../version-hint.js';
 function errorText(msg) {
@@ -8,7 +8,8 @@ function errorText(msg) {
     const hint = vh?.getUpdateHint();
     return hint ? `Error: ${msg}\n\n${hint}` : `Error: ${msg}`;
 }
-export function registerFileTools(server, client) {
+export function registerFileTools(server, client, uploadDir) {
+    const normalizedUploadDir = uploadDir ? resolve(uploadDir) + sep : undefined;
     // ---------------------------------------------------------------------------
     // list_issue_files
     // ---------------------------------------------------------------------------
@@ -78,6 +79,12 @@ The optional content_type parameter sets the MIME type (e.g. "image/png"). If om
             let fileBuffer;
             let fileName;
             if (file_path) {
+                if (normalizedUploadDir) {
+                    const normalizedPath = resolve(file_path);
+                    if (!normalizedPath.startsWith(normalizedUploadDir)) {
+                        return { content: [{ type: 'text', text: errorText('file_path is not allowed — access restricted to the designated upload directory') }], isError: true };
+                    }
+                }
                 fileBuffer = await readFile(file_path);
                 fileName = filename ?? basename(file_path);
             }
```
package/dist/tools/issues.js
CHANGED

```diff
@@ -213,6 +213,9 @@ export function registerIssueTools(server, client, cache) {
     // ---------------------------------------------------------------------------
     // update_issue
     // ---------------------------------------------------------------------------
+    // MantisBT reference shape: at least one of id or name must be provided
+    const ref = z.object({ id: z.number().optional(), name: z.string().optional() })
+        .refine(o => o.id !== undefined || o.name !== undefined, { message: "At least one of 'id' or 'name' must be provided" });
     server.registerTool('update_issue', {
         title: 'Update Issue',
         description: `Update one or more fields of an existing MantisBT issue using a partial PATCH.
@@ -220,19 +223,40 @@ export function registerIssueTools(server, client, cache) {
 The "fields" object accepts any combination of:
 - summary (string)
 - description (string)
+- steps_to_reproduce (string)
+- additional_information (string)
 - status: { name: "new"|"feedback"|"acknowledged"|"confirmed"|"assigned"|"resolved"|"closed" }
 - resolution: { id: 20 } (20 = fixed/resolved)
 - handler: { id: <user_id> } or { name: "<username>" }
 - priority: { name: "<priority_name>" }
 - severity: { name: "<severity_name>" }
+- reproducibility: { name: "<reproducibility_name>" }
 - category: { name: "<category_name>" }
+- version: { name: "<version_name>" } (affected version)
 - target_version: { name: "<version_name>" }
 - fixed_in_version: { name: "<version_name>" }
+- view_state: { name: "public"|"private" }
 
 Important: when resolving an issue, always set BOTH status and resolution to avoid leaving resolution as "open".`,
         inputSchema: z.object({
             id: z.coerce.number().int().positive().describe('Numeric issue ID to update'),
-            fields: z.
+            fields: z.object({
+                summary: z.string().optional(),
+                description: z.string().optional(),
+                steps_to_reproduce: z.string().optional(),
+                additional_information: z.string().optional(),
+                status: ref.optional(),
+                resolution: ref.optional(),
+                priority: ref.optional(),
+                severity: ref.optional(),
+                reproducibility: ref.optional(),
+                handler: ref.optional(),
+                category: ref.optional(),
+                version: ref.optional(),
+                target_version: ref.optional(),
+                fixed_in_version: ref.optional(),
+                view_state: ref.optional(),
+            }).strict().describe('Fields to update (partial update — only provided fields are changed; unknown keys are rejected)'),
         }),
         annotations: {
             readOnlyHint: false,
```
package/package.json
CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@dpesch/mantisbt-mcp-server",
-  "version": "1.5.2",
+  "version": "1.5.5",
   "description": "MCP server for MantisBT REST API – read and manage bug tracker issues",
   "author": "Dominik Pesch",
   "license": "MIT",
@@ -23,13 +23,13 @@
     "test": "vitest run",
     "test:watch": "vitest",
     "test:coverage": "vitest run --coverage",
-    "test:record": "tsx scripts/record-fixtures.ts"
+    "test:record": "tsx scripts/record-fixtures.ts",
+    "init": "node scripts/init.mjs"
   },
   "dependencies": {
     "@huggingface/transformers": "^3.0.0",
     "@modelcontextprotocol/sdk": "^1.0.0",
-
-    "zod": "^3.22.4"
+    "zod": "^3.22.4"
   },
   "devDependencies": {
     "@types/node": "^20.0.0",
```
@@ -0,0 +1,195 @@
|
|
|
1
|
+
#!/usr/bin/env node
|
|
2
|
+
// pre-push hook — typecheck gate + Codeberg .claude/ filter
|
|
3
|
+
//
|
|
4
|
+
// Typecheck runs for every push (origin and upstream) to catch type errors
|
|
5
|
+
// before they reach CI.
|
|
6
|
+
//
|
|
7
|
+
// Codeberg filter handles multi-commit pushes correctly:
|
|
8
|
+
// 1. Fetches the actual Codeberg tip via ls-remote
|
|
9
|
+
// 2. Finds the local commit whose filtered tree matches that tip (anchor)
|
|
10
|
+
// 3. Filters every commit between anchor and tip in order (oldest first)
|
|
11
|
+
// 4. Builds a proper filtered chain anchored to the actual remote tip
|
|
12
|
+
// 5. Pushes the filtered tip with --force
|
|
13
|
+
// 6. Exits 1 to block git's unfiltered push
|
|
14
|
+
//
|
|
15
|
+
// Branches are processed before tags so the shaMap is available for tags
|
|
16
|
+
// that point to commits already processed as part of the branch.
|
|
17
|
+
//
|
|
18
|
+
// ⚠ IMPORTANT: upstream (Codeberg) is a filtered mirror — push-only.
|
|
19
|
+
// Never run git pull, git fetch + merge, or git rebase against upstream.
|
|
20
|
+
// Filtered commits have different SHAs; pulling them back creates duplicate
|
|
21
|
+
// history. Use origin (Gitolite) as the authoritative source.
|
|
22
|
+
//
|
|
23
|
+
// --force is always required for Codeberg: filtered SHAs structurally diverge
|
|
24
|
+
// from local SHAs, so git rejects main as non-fast-forward before the hook
|
|
25
|
+
// runs unless --force bypasses that check.
|
|
26
|
+
|
|
27
|
+
import { execSync } from 'node:child_process';
|
|
28
|
+
import { createInterface } from 'node:readline';
|
|
29
|
+
|
|
30
|
+
// Recursion guard — must be first to avoid running typecheck during recursive pushes
|
|
31
|
+
if (process.env._PREPUSH_FILTER_ACTIVE) process.exit(0);
|
|
32
|
+
|
|
33
|
+
// Run typecheck before every push (catches type errors before they hit CI)
|
|
34
|
+
try {
|
|
35
|
+
execSync('npm run typecheck', { stdio: 'inherit' });
|
|
36
|
+
} catch {
|
|
37
|
+
console.error('✗ Typecheck failed — push aborted. Fix type errors first.');
|
|
38
|
+
process.exit(1);
|
|
39
|
+
}
|
|
40
|
+
|
|
41
|
+
const [,, , remoteUrl] = process.argv;
|
|
42
|
+
|
|
43
|
+
// Only intercept Codeberg pushes for the filter step
|
|
44
|
+
if (!remoteUrl?.includes('codeberg.org')) process.exit(0);
|
|
45
|
+
|
|
46
|
+
console.log('→ Codeberg push detected — filtering .claude/ directory...');
|
|
47
|
+
|
|
48
|
+
const ZERO_SHA = '0'.repeat(40);
|
|
49
|
+
|
|
50
|
+
const git = (cmd, opts = {}) => {
|
|
51
|
+
try {
|
|
52
|
+
return execSync(`git ${cmd}`, { encoding: 'utf8', ...opts }).trim();
|
|
53
|
+
} catch (err) {
|
|
54
|
+
console.error(`✗ git ${cmd}\n${err.stderr || err.message}`);
|
|
55
|
+
process.exit(1);
|
|
56
|
+
}
|
|
57
|
+
};
|
|
58
|
+
|
|
59
|
+
const gitOptional = (cmd, opts = {}) => {
|
|
60
|
+
try { return execSync(`git ${cmd}`, { encoding: 'utf8', ...opts }).trim(); }
|
|
61
|
+
catch { return ''; }
|
|
62
|
+
};
|
|
63
|
+
|
|
64
|
+
const isSha = s => /^[0-9a-f]{40}$/.test(s);
|
|
65
|
+
|
|
66
|
+
// Remove .claude/ from the root tree of a commit and return the new tree SHA.
|
|
67
|
+
const filterTree = commitSha => {
|
|
68
|
+
const entries = git(`ls-tree "${commitSha}^{tree}"`);
|
|
69
|
+
const filtered = entries.split('\n').filter(e => !e.match(/\t\.claude$/)).join('\n');
|
|
70
|
+
const tree = git('mktree', { input: filtered });
|
|
71
|
+
if (!isSha(tree)) { console.error(`✗ mktree failed for ${commitSha}`); process.exit(1); }
|
|
72
|
+
return tree;
|
|
73
|
+
};
|
|
74
|
+
|
|
75
|
+
// Create a filtered commit object preserving all original metadata.
|
|
76
|
+
const makeFilteredCommit = (commitSha, filteredTree, mappedParents) => {
|
|
77
|
+
const log = git(`log -1 --format=%an%n%ae%n%aI%n%cn%n%ce%n%cI%n%n%B "${commitSha}"`);
|
|
78
|
+
const [an, ae, aI, cn, ce, cI, , ...msgLines] = log.split('\n');
|
|
79
|
+
const commitMsg = msgLines.join('\n').trimEnd();
|
|
80
|
+
const parentFlags = mappedParents.map(p => `-p ${p}`).join(' ');
|
|
81
|
+
const env = {
|
|
82
|
+
...process.env,
|
|
83
|
+
GIT_AUTHOR_NAME: an, GIT_AUTHOR_EMAIL: ae, GIT_AUTHOR_DATE: aI,
|
|
84
|
+
GIT_COMMITTER_NAME: cn, GIT_COMMITTER_EMAIL: ce, GIT_COMMITTER_DATE: cI,
|
|
85
|
+
};
|
|
86
|
+
const result = git(`commit-tree "${filteredTree}" ${parentFlags}`, { env, input: commitMsg });
|
|
87
|
+
if (!isSha(result)) { console.error(`✗ commit-tree failed for ${commitSha}`); process.exit(1); }
|
|
88
|
+
return result;
|
|
89
|
+
};
|
|
90
|
+
|
|
91
|
+
// Scan ancestors of tipSha (newest first) and return the most recent one whose
|
|
92
|
+
// filtered root tree equals the tree of remoteSha. Returns null if not found.
|
|
93
|
+
const findLocalAnchor = (tipSha, remoteSha, maxDepth = 100) => {
|
|
94
|
+
const remoteTree = gitOptional(`rev-parse "${remoteSha}^{tree}"`);
|
|
95
|
+
if (!remoteTree) return null;
|
|
96
|
+
const candidates = gitOptional(`rev-list --max-count=${maxDepth} "${tipSha}"`);
|
|
97
|
+
if (!candidates) return null;
|
|
98
|
+
for (const sha of candidates.split('\n').filter(Boolean)) {
|
|
99
|
+
if (filterTree(sha) === remoteTree) return sha;
|
|
100
|
+
}
|
|
101
|
+
return null;
|
|
102
|
+
};
|
|
103
|
+
|
|
104
|
+
const rl = createInterface({ input: process.stdin });
|
|
105
|
+
const lines = [];
|
|
106
|
+
rl.on('line', line => { if (line.trim()) lines.push(line.trim()); });
|
|
107
|
+
|
|
108
|
+
rl.on('close', () => {
|
|
109
|
+
// Process branches before tags so shaMap is populated when tags are handled.
|
|
110
|
+
const branches = lines.filter(l => l.split(' ')[2]?.startsWith('refs/heads/'));
|
|
111
|
+
const others = lines.filter(l => !l.split(' ')[2]?.startsWith('refs/heads/'));
|
|
112
|
+
|
|
113
|
+
// localSha → filteredSha, accumulated across all refs in this push.
|
|
114
|
+
const shaMap = {};
|
|
115
|
+
let pushed = 0;
|
|
116
|
+
|
|
117
|
+
for (const line of [...branches, ...others]) {
|
|
118
|
+
const parts = line.split(' ');
|
|
119
|
+
if (parts.length < 3) { console.error(`✗ Malformed push input: ${line}`); process.exit(1); }
|
|
120
|
+
const [, localSha, remoteRef] = parts;
|
|
121
|
+
|
|
122
|
+
if (localSha === ZERO_SHA) continue; // deletion — skip
|
|
123
|
+
|
|
124
|
+
const label = remoteRef.replace('refs/heads/', '').replace('refs/tags/', '');
|
|
125
|
+
|
|
126
|
+
// If this commit was already filtered as part of a branch push, reuse it.
|
|
127
|
+
if (localSha in shaMap) {
|
|
128
|
+
try {
|
|
129
|
+
execSync(`git push "${remoteUrl}" "${shaMap[localSha]}:${remoteRef}" --force`,
|
|
130
|
+
{ env: { ...process.env, _PREPUSH_FILTER_ACTIVE: '1' }, stdio: 'inherit' });
|
|
131
|
+
} catch {
|
|
132
|
+
console.error(`✗ Push failed for ${label}`); process.exit(1);
|
|
133
|
+
}
|
|
134
|
+
console.log(`✓ ${label} pushed to Codeberg (without .claude/)`);
|
|
135
|
+
pushed++;
|
|
136
|
+
continue;
|
|
137
|
+
}
|
|
138
|
+
|
|
139
|
+
// Get the actual current SHA on Codeberg for this ref.
|
|
140
|
+
const lsOut = gitOptional(`ls-remote "${remoteUrl}" "${remoteRef}"`);
|
|
141
|
+
const actualRemoteSha = lsOut ? lsOut.split(/\s+/)[0] : '';
|
|
142
|
+
|
|
143
|
+
if (!actualRemoteSha) {
|
|
144
|
+
+      // New ref on Codeberg — filter the tip with no parent.
+      shaMap[localSha] = makeFilteredCommit(localSha, filterTree(localSha), []);
+    } else {
+      // Find the local commit that corresponds to the current Codeberg tip.
+      const localAnchor = findLocalAnchor(localSha, actualRemoteSha);
+
+      if (localAnchor) {
+        shaMap[localAnchor] = actualRemoteSha;
+
+        // Filter all commits between anchor and tip, oldest first.
+        const revList = gitOptional(`rev-list --reverse "${localAnchor}..${localSha}"`);
+        const commits = revList ? revList.split('\n').filter(Boolean) : [];
+
+        for (const sha of commits) {
+          const filteredTree = filterTree(sha);
+          const parents = gitOptional(`log -1 --format=%P "${sha}"`);
+          const parentShas = parents ? parents.split(' ').filter(Boolean) : [];
+          // Map each parent through shaMap; fall back to actualRemoteSha for
+          // parents outside the current range (already on the remote).
+          const mappedParents = parentShas
+            .map(p => shaMap[p] ?? actualRemoteSha)
+            .filter(Boolean);
+          shaMap[sha] = makeFilteredCommit(sha, filteredTree, mappedParents);
+        }
+      } else {
+        // Fallback: anchor not found — filter tip only, rooted at remote tip.
+        console.warn(`  ⚠ Could not find local base for ${label}, filtering tip only`);
+        shaMap[localSha] = makeFilteredCommit(localSha, filterTree(localSha), [actualRemoteSha]);
+      }
+    }
+
+    const filteredTip = shaMap[localSha];
+    if (!filteredTip) {
+      console.error(`✗ Could not compute filtered SHA for ${localSha}`);
+      process.exit(1);
+    }
+
+    try {
+      execSync(`git push "${remoteUrl}" "${filteredTip}:${remoteRef}" --force`,
+        { env: { ...process.env, _PREPUSH_FILTER_ACTIVE: '1' }, stdio: 'inherit' });
+    } catch {
+      console.error(`✗ Push to Codeberg failed for ${label}`);
+      process.exit(1);
+    }
+
+    console.log(`✓ ${label} pushed to Codeberg (without .claude/)`);
+    pushed++;
+  }
+
+  if (pushed === 0) process.exit(0);
+  console.log('→ Filtered push complete. Blocking unfiltered push.');
+  process.exit(1);
+});
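The parent-remapping rule inside the loop above can be sketched as a pure function (the name `remapParents` and the sample SHAs are hypothetical; the hook looks each parent up in `shaMap` and falls back to the remote tip for parents outside the filtered range):

```javascript
// Hypothetical extraction of the hook's parent-mapping step: parents that
// were already filtered resolve through shaMap; anything else is assumed to
// be on the remote already and maps to the remote tip.
function remapParents(parentShas, shaMap, remoteTip) {
  return parentShas.map(p => shaMap[p] ?? remoteTip).filter(Boolean);
}

const shaMap = { aaa: 'AAA', bbb: 'BBB' };
console.log(remapParents(['aaa', 'ccc'], shaMap, 'REMOTE')); // → ['AAA', 'REMOTE']
```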
package/scripts/init.mjs ADDED
@@ -0,0 +1,95 @@
+#!/usr/bin/env node
+// Project setup script — run via: npm run init
+//
+// Steps:
+// 1. Check Node.js version (requires >=18)
+// 2. Install dependencies (npm install)
+// 3. Install git hooks from scripts/hooks/ into .git/hooks/
+// 4. Run typecheck to verify the setup
+
+import { spawnSync } from 'node:child_process';
+import { copyFileSync, chmodSync, existsSync, mkdirSync } from 'node:fs';
+import { resolve, dirname, join } from 'node:path';
+import { fileURLToPath } from 'node:url';
+
+const root = resolve(dirname(fileURLToPath(import.meta.url)), '..');
+// On Windows, invoke .cmd files via cmd.exe — no shell:true needed, no deprecation warning
+const [npmBin, npmBaseArgs] = process.platform === 'win32'
+  ? ['cmd.exe', ['/c', 'npm']]
+  : ['npm', []];
+
+// ---------------------------------------------------------------------------
+// 1. Node.js version check
+// ---------------------------------------------------------------------------
+
+const [major] = process.versions.node.split('.').map(Number);
+if (major < 18) {
+  console.error(`✗ Node.js >= 18 required, found ${process.version}`);
+  process.exit(1);
+}
+console.log(`✓ Node.js ${process.version}`);
+
+// ---------------------------------------------------------------------------
+// 2. Install dependencies
+// ---------------------------------------------------------------------------
+
+console.log('\n→ Installing dependencies...');
+const install = spawnSync(npmBin, [...npmBaseArgs, 'install'], {
+  stdio: 'inherit',
+  cwd: root,
+});
+if (install.status !== 0) {
+  console.error('✗ npm install failed');
+  process.exit(1);
+}
+
+// ---------------------------------------------------------------------------
+// 3. Install git hooks
+// ---------------------------------------------------------------------------
+
+console.log('\n→ Installing git hooks...');
+
+const hooksSourceDir = resolve(root, 'scripts/hooks');
+const gitHooksDir = resolve(root, '.git/hooks');
+
+if (!existsSync(gitHooksDir)) {
+  mkdirSync(gitHooksDir, { recursive: true });
+}
+
+const hooks = ['pre-push'];
+for (const hook of hooks) {
+  const src = resolve(hooksSourceDir, `${hook}.mjs`);
+  const dest = resolve(gitHooksDir, hook);
+
+  if (!existsSync(src)) {
+    console.warn(`  ⚠ Hook source not found, skipping: scripts/hooks/${hook}.mjs`);
+    continue;
+  }
+
+  copyFileSync(src, dest);
+
+  // chmod +x (no-op on Windows — git runs hooks directly via shebang)
+  try {
+    chmodSync(dest, 0o755);
+  } catch {
+    // Silently ignore on platforms that don't support chmod
+  }
+
+  console.log(`  ✓ .git/hooks/${hook} installed`);
+}
+
+// ---------------------------------------------------------------------------
+// 4. Typecheck
+// ---------------------------------------------------------------------------
+
+console.log('\n→ Running typecheck...');
+const check = spawnSync(npmBin, [...npmBaseArgs, 'run', 'typecheck'], {
+  stdio: 'inherit',
+  cwd: root,
+});
+if (check.status !== 0) {
+  console.error('✗ Typecheck failed — check type errors above');
+  process.exit(1);
+}
+
+console.log('\n✓ Setup complete. Happy hacking!');
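The platform branch at the top of init.mjs can be factored into a small pure helper to show the intent (the name `npmCommand` is hypothetical; the script itself inlines the destructuring):

```javascript
// Build the spawnSync invocation for npm per platform: on Windows the npm
// .cmd shim is launched through cmd.exe /c, avoiding shell:true and its
// deprecation warning; elsewhere npm is spawned directly.
function npmCommand(platform, npmArgs) {
  const [bin, base] = platform === 'win32'
    ? ['cmd.exe', ['/c', 'npm']]
    : ['npm', []];
  return { bin, args: [...base, ...npmArgs] };
}

console.log(npmCommand('win32', ['install'])); // → { bin: 'cmd.exe', args: ['/c', 'npm', 'install'] }
console.log(npmCommand('linux', ['run', 'typecheck'])); // → { bin: 'npm', args: ['run', 'typecheck'] }
```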
package/tests/config.test.ts CHANGED
@@ -146,6 +146,75 @@ describe('getConfig() – errors', () => {
   });
 });
 
+// ---------------------------------------------------------------------------
+// HTTP transport configuration
+// ---------------------------------------------------------------------------
+
+describe('getConfig() – HTTP transport', () => {
+  it('uses 127.0.0.1 as default httpHost', async () => {
+    vi.stubEnv('MANTIS_BASE_URL', 'https://mantis.example.com');
+    vi.stubEnv('MANTIS_API_KEY', 'key');
+
+    const getConfig = await freshGetConfig();
+    const config = await getConfig();
+
+    expect(config.httpHost).toBe('127.0.0.1');
+  });
+
+  it('uses 3000 as default httpPort', async () => {
+    vi.stubEnv('MANTIS_BASE_URL', 'https://mantis.example.com');
+    vi.stubEnv('MANTIS_API_KEY', 'key');
+
+    const getConfig = await freshGetConfig();
+    const config = await getConfig();
+
+    expect(config.httpPort).toBe(3000);
+  });
+
+  it('uses PORT when set', async () => {
+    vi.stubEnv('MANTIS_BASE_URL', 'https://mantis.example.com');
+    vi.stubEnv('MANTIS_API_KEY', 'key');
+    vi.stubEnv('PORT', '8080');
+
+    const getConfig = await freshGetConfig();
+    const config = await getConfig();
+
+    expect(config.httpPort).toBe(8080);
+  });
+
+  it('uses MCP_HTTP_HOST when set', async () => {
+    vi.stubEnv('MANTIS_BASE_URL', 'https://mantis.example.com');
+    vi.stubEnv('MANTIS_API_KEY', 'key');
+    vi.stubEnv('MCP_HTTP_HOST', '0.0.0.0');
+
+    const getConfig = await freshGetConfig();
+    const config = await getConfig();
+
+    expect(config.httpHost).toBe('0.0.0.0');
+  });
+
+  it('leaves httpToken undefined when MCP_HTTP_TOKEN is not set', async () => {
+    vi.stubEnv('MANTIS_BASE_URL', 'https://mantis.example.com');
+    vi.stubEnv('MANTIS_API_KEY', 'key');
+
+    const getConfig = await freshGetConfig();
+    const config = await getConfig();
+
+    expect(config.httpToken).toBeUndefined();
+  });
+
+  it('reads httpToken from MCP_HTTP_TOKEN', async () => {
+    vi.stubEnv('MANTIS_BASE_URL', 'https://mantis.example.com');
+    vi.stubEnv('MANTIS_API_KEY', 'key');
+    vi.stubEnv('MCP_HTTP_TOKEN', 'secret-token');
+
+    const getConfig = await freshGetConfig();
+    const config = await getConfig();
+
+    expect(config.httpToken).toBe('secret-token');
+  });
+});
+
 // ---------------------------------------------------------------------------
 // Singleton caching
 // ---------------------------------------------------------------------------
package/tests/helpers/search-mocks.ts CHANGED
@@ -45,6 +45,7 @@ export function makeMockStore(options?: { lastSyncedAt?: string | null; itemCoun
     resetLastSyncedAt: vi.fn(async () => {}),
     getLastKnownTotal: vi.fn(async () => options?.lastKnownTotal ?? null),
     setLastKnownTotal: vi.fn(async () => {}),
+    flush: vi.fn(async () => {}),
   };
 }
 
package/tests/search/store.test.ts CHANGED
@@ -146,3 +146,48 @@ describe('VectraStore.resetLastSyncedAt', () => {
     expect(await store.getLastSyncedAt()).toBeNull();
   });
 });
+
+describe('VectraStore.addBatch', () => {
+  it('increases in-memory count without persisting to disk', async () => {
+    await store.addBatch([
+      { id: 1, vector: randomVector(), metadata: { summary: 'A' } },
+      { id: 2, vector: randomVector(), metadata: { summary: 'B' } },
+    ]);
+    expect(await store.count()).toBe(2);
+
+    // A new instance reading the same dir must not see the items yet
+    const store2 = new VectraStore(dir);
+    expect(await store2.count()).toBe(0);
+  });
+
+  it('persists items after flush()', async () => {
+    await store.addBatch([
+      { id: 10, vector: randomVector(), metadata: { summary: 'X' } },
+      { id: 11, vector: randomVector(), metadata: { summary: 'Y' } },
+    ]);
+    await store.flush();
+
+    const store2 = new VectraStore(dir);
+    expect(await store2.count()).toBe(2);
+  });
+
+  it('accumulates items across multiple addBatch calls before flush', async () => {
+    await store.addBatch([{ id: 1, vector: randomVector(), metadata: { summary: 'First' } }]);
+    await store.addBatch([{ id: 2, vector: randomVector(), metadata: { summary: 'Second' } }]);
+    await store.flush();
+
+    const store2 = new VectraStore(dir);
+    expect(await store2.count()).toBe(2);
+  });
+});
+
+describe('VectraStore.flush (atomic write)', () => {
+  it('leaves no .tmp file after a successful flush', async () => {
+    const { readdir } = await import('node:fs/promises');
+    await store.addBatch([{ id: 1, vector: randomVector(), metadata: { summary: 'Test' } }]);
+    await store.flush();
+
+    const files = await readdir(join(dir, 'vectra'));
+    expect(files.some(f => f.endsWith('.tmp'))).toBe(false);
+  });
+});
package/tests/search/sync.test.ts CHANGED
@@ -144,6 +144,60 @@ describe('SearchSyncService.sync – project_id', () => {
   });
 });
 
+// ---------------------------------------------------------------------------
+// flush / checkpoint behaviour
+// ---------------------------------------------------------------------------
+
+describe('SearchSyncService.sync – flush and checkpoint', () => {
+  function makeIssues(count: number) {
+    return Array.from({ length: count }, (_, i) => ({
+      id: i + 1,
+      summary: `Issue ${i + 1}`,
+      description: `Description ${i + 1}`,
+      updated_at: '2024-03-10T08:00:00Z',
+    }));
+  }
+
+  it('calls flush exactly once when fewer than 100 issues are indexed', async () => {
+    const store = makeMockStore({ lastSyncedAt: null });
+    vi.mocked(fetch).mockResolvedValue(
+      makeResponse(200, JSON.stringify({ issues: makeIssues(50), total_count: 50 }))
+    );
+
+    const service = new SearchSyncService(client, store, embedder);
+    await service.sync();
+
+    // Only the final flush, no checkpoint
+    expect(store.flush).toHaveBeenCalledTimes(1);
+  });
+
+  it('calls flush exactly once when exactly 100 issues are indexed (checkpoint covers all, no redundant final)', async () => {
+    const store = makeMockStore({ lastSyncedAt: null });
+    vi.mocked(fetch).mockResolvedValue(
+      makeResponse(200, JSON.stringify({ issues: makeIssues(100), total_count: 100 }))
+    );
+
+    const service = new SearchSyncService(client, store, embedder);
+    await service.sync();
+
+    // Checkpoint at 100 covers all items; indexedSinceCheckpoint resets to 0 → no redundant final flush
+    expect(store.flush).toHaveBeenCalledTimes(1);
+  });
+
+  it('calls flush twice when 110 issues are indexed (checkpoint at 100 + final for remaining 10)', async () => {
+    const store = makeMockStore({ lastSyncedAt: null });
+    vi.mocked(fetch).mockResolvedValue(
+      makeResponse(200, JSON.stringify({ issues: makeIssues(110), total_count: 110 }))
+    );
+
+    const service = new SearchSyncService(client, store, embedder);
+    await service.sync();
+
+    // Checkpoint at 100, then final flush for remaining 10
+    expect(store.flush).toHaveBeenCalledTimes(2);
+  });
+});
+
 // ---------------------------------------------------------------------------
 // total_count persistence (regression: MantisBT installations without total_count)
 // ---------------------------------------------------------------------------
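The flush counts these three tests assert follow one rule: one checkpoint flush per full block of 100 indexed issues, plus a final flush only when items remain since the last checkpoint. A small model of that rule (the function `expectedFlushCalls` is illustrative, not part of the package):

```javascript
// Expected number of flush() calls for n indexed issues with a checkpoint
// every CHECKPOINT_INTERVAL items: each full block triggers a checkpoint and
// resets the since-checkpoint counter, so the final flush only fires for a
// non-empty remainder.
const CHECKPOINT_INTERVAL = 100;

function expectedFlushCalls(n) {
  const checkpoints = Math.floor(n / CHECKPOINT_INTERVAL);
  const remainder = n % CHECKPOINT_INTERVAL;
  return checkpoints + (remainder > 0 ? 1 : 0);
}

console.log([50, 100, 110].map(expectedFlushCalls)); // → [1, 1, 2]
```

This matches the three cases above: 50 issues give no checkpoint and one final flush; 100 give exactly one checkpoint and no redundant final; 110 give a checkpoint plus a final flush for the last 10.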
package/tests/tools/files.test.ts CHANGED
@@ -1,3 +1,4 @@
+import path from 'node:path';
 import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 import { MantisClient } from '../../src/client.js';
 import { registerFileTools } from '../../src/tools/files.js';
@@ -20,7 +21,7 @@ let client: MantisClient;
 beforeEach(() => {
   mockServer = new MockMcpServer();
   client = new MantisClient('https://mantis.example.com', 'test-token');
-  registerFileTools(mockServer as never, client);
+  registerFileTools(mockServer as never, client, undefined);
   vi.stubGlobal('fetch', vi.fn());
 });
 
@@ -272,3 +273,72 @@ describe('upload_file (Base64)', () => {
     expect(result.content[0]!.text).toContain('Error:');
   });
 });
+
+// ---------------------------------------------------------------------------
+// upload_file – Path Traversal protection (uploadDir)
+// ---------------------------------------------------------------------------
+
+describe('upload_file (uploadDir restriction)', () => {
+  const uploadDir = path.resolve('/tmp/uploads');
+
+  beforeEach(() => {
+    // Override the server registered in the outer beforeEach with one that
+    // has uploadDir set.
+    mockServer = new MockMcpServer();
+    client = new MantisClient('https://mantis.example.com', 'test-token');
+    registerFileTools(mockServer as never, client, uploadDir);
+    vi.stubGlobal('fetch', vi.fn());
+  });
+
+  it('allows file_path inside uploadDir', async () => {
+    vi.mocked(readFile).mockResolvedValue(Buffer.from('content') as never);
+    vi.mocked(fetch).mockResolvedValue(makeResponse(200, JSON.stringify({ id: 5 })));
+
+    const result = await mockServer.callTool('upload_file', {
+      issue_id: 42,
+      file_path: path.join(uploadDir, 'report.pdf'),
+    });
+
+    expect(result.isError).toBeUndefined();
+    expect(readFile).toHaveBeenCalled();
+  });
+
+  it('blocks file_path outside uploadDir', async () => {
+    const result = await mockServer.callTool('upload_file', {
+      issue_id: 42,
+      file_path: '/etc/passwd',
+    });
+
+    expect(result.isError).toBe(true);
+    expect(result.content[0]!.text).toContain('not allowed');
+    expect(readFile).not.toHaveBeenCalled();
+  });
+
+  it('blocks path traversal escaping uploadDir', async () => {
+    const result = await mockServer.callTool('upload_file', {
+      issue_id: 42,
+      file_path: path.join(uploadDir, '..', 'secret.txt'),
+    });
+
+    expect(result.isError).toBe(true);
+    expect(result.content[0]!.text).toContain('not allowed');
+    expect(readFile).not.toHaveBeenCalled();
+  });
+
+  it('allows any file_path when uploadDir is undefined (no restriction)', async () => {
+    // Register a separate server without uploadDir (no restriction).
+    const unrestrictedServer = new MockMcpServer();
+    registerFileTools(unrestrictedServer as never, client, undefined);
+
+    vi.mocked(readFile).mockResolvedValue(Buffer.from('content') as never);
+    vi.mocked(fetch).mockResolvedValue(makeResponse(200, JSON.stringify({ id: 5 })));
+
+    const result = await unrestrictedServer.callTool('upload_file', {
+      issue_id: 42,
+      file_path: '/etc/passwd',
+    });
+
+    expect(result.isError).toBeUndefined();
+    expect(readFile).toHaveBeenCalledWith('/etc/passwd');
+  });
+});
package/tests/tools/issues.test.ts CHANGED
@@ -474,3 +474,55 @@ describe('list_issues – recorded fixtures', () => {
     expect(parsed.issues).toHaveLength(resolvedInFixture);
   });
 });
+
+// ---------------------------------------------------------------------------
+// update_issue – fields allowlist
+// ---------------------------------------------------------------------------
+
+describe('update_issue – fields allowlist', () => {
+  it('accepts known string fields (summary, description)', async () => {
+    vi.mocked(fetch).mockResolvedValue(makeResponse(200, JSON.stringify({ issue: { id: 1, summary: 'Updated' } })));
+
+    const result = await mockServer.callTool(
+      'update_issue',
+      { id: 1, fields: { summary: 'Updated', description: 'New desc' } },
+      { validate: true },
+    );
+
+    expect(result.isError).toBeUndefined();
+  });
+
+  it('accepts known object fields (status, resolution, handler)', async () => {
+    vi.mocked(fetch).mockResolvedValue(makeResponse(200, JSON.stringify({ issue: { id: 1 } })));
+
+    const result = await mockServer.callTool(
+      'update_issue',
+      { id: 1, fields: { status: { name: 'resolved' }, resolution: { id: 20 }, handler: { id: 5 } } },
+      { validate: true },
+    );
+
+    expect(result.isError).toBeUndefined();
+  });
+
+  it('rejects unknown fields without calling the API', async () => {
+    const result = await mockServer.callTool(
+      'update_issue',
+      { id: 1, fields: { reporter: { id: 99 } } },
+      { validate: true },
+    );
+
+    expect(result.isError).toBe(true);
+    expect(vi.mocked(fetch)).not.toHaveBeenCalled();
+  });
+
+  it('rejects fields with an unknown key mixed with known keys without calling the API', async () => {
+    const result = await mockServer.callTool(
+      'update_issue',
+      { id: 1, fields: { summary: 'ok', unknown_field: 'bad' } },
+      { validate: true },
+    );
+
+    expect(result.isError).toBe(true);
+    expect(vi.mocked(fetch)).not.toHaveBeenCalled();
+  });
+});
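The validation pattern these tests describe is a key allowlist checked before any network call. A minimal sketch, assuming a much smaller allowlist than the tool actually accepts (both `ALLOWED_FIELDS` and `findDisallowedFields` are illustrative names, not the package's API):

```javascript
// Reject the whole update if any key in `fields` falls outside the known
// set; a single unknown key poisons the request even when mixed with valid
// keys, and no fetch is attempted.
const ALLOWED_FIELDS = new Set(['summary', 'description', 'status', 'resolution', 'handler']);

function findDisallowedFields(fields) {
  return Object.keys(fields).filter(k => !ALLOWED_FIELDS.has(k));
}

console.log(findDisallowedFields({ summary: 'ok', unknown_field: 'bad' })); // → ['unknown_field']
console.log(findDisallowedFields({ status: { name: 'resolved' } })); // → []
```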