preflight-mcp 0.2.3 → 0.2.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -15,7 +15,7 @@ Each bundle contains:
 
  ## Features
 
- - **13 MCP tools** to create/update/repair/search/read bundles, generate evidence graphs, and manage trace links
+ - **15 MCP tools** to create/update/repair/search/read bundles, generate evidence graphs, and manage trace links
  - **5 MCP prompts** for interactive guidance: menu, analyze guide, search guide, manage guide, trace guide
  - **LLM-friendly outputs**: After bundle creation, prompts user to generate dependency graph for deeper analysis
  - **Proactive trace links**: LLM automatically discovers and records code↔test, code↔doc relationships
@@ -125,7 +125,7 @@ Run end-to-end smoke test:
  npm run smoke
  ```
 
- ## Tools (13 total)
+ ## Tools (15 total)
 
  ### `preflight_list_bundles`
  List bundle IDs in storage.
@@ -156,10 +156,15 @@ Input (example):
  ### `preflight_read_file`
  Read file(s) from bundle. Two modes:
  - **Batch mode** (omit `file`): Returns ALL key files (OVERVIEW.md, START_HERE.md, AGENTS.md, manifest.json, deps/dependency-graph.json, repo READMEs) in one call
- - **Single file mode** (provide `file`): Returns that specific file (e.g., `deps/dependency-graph.json` for dependency graph)
+ - **Single file mode** (provide `file`): Returns that specific file
+ - **Evidence citation**: Use `withLineNumbers: true` to get `N|line` format; use `ranges: ["20-80"]` to read specific lines
  - Triggers: "查看bundle", "bundle概览", "项目信息", "show bundle", "读取依赖图"
- - Use `file: "manifest.json"` to get bundle metadata (repos, timestamps, tags, etc.)
- - Use `file: "deps/dependency-graph.json"` to read the dependency graph (generated by `preflight_evidence_dependency_graph`)
+
+ ### `preflight_repo_tree`
+ Get repository structure overview without wasting tokens on search.
+ - Returns: ASCII directory tree, file count by extension/directory, entry point candidates
+ - Use BEFORE deep analysis to understand project layout
+ - Triggers: "show project structure", "what files are in this repo", "项目结构", "文件分布"
 
  ### `preflight_delete_bundle`
  Delete/remove a bundle permanently.
@@ -216,9 +221,14 @@ Create or update traceability links (code↔test, code↔doc, file↔requirement
  ### `preflight_trace_query`
  Query traceability links (code↔test, code↔doc, commit↔ticket).
  - **Proactive use**: LLM automatically queries trace links when analyzing specific files
- - Helps answer: "Does this code have tests?", "What requirements does this implement?"
+ - Returns `reason` and `nextSteps` when no edges found (helps LLM decide next action)
  - Fast when `bundleId` is provided; can scan across bundles when omitted.
 
+ ### `preflight_trace_export`
+ Export trace links to `trace/trace.json` for direct LLM reading.
+ - Note: Auto-exported after each `trace_upsert`, so only needed to manually refresh
+ - Triggers: "export trace", "refresh trace.json", "导出trace"
+
  ### `preflight_cleanup_orphans`
  Remove incomplete or corrupted bundles (bundles without valid manifest.json).
  - Triggers: "clean up broken bundles", "remove orphans", "清理孤儿bundle"
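The evidence-citation options described in the `preflight_read_file` entry above combine like this. The sketch below is a hypothetical client-side tool call, not code from the package; the argument names come from the diff, while the bundle ID value is a placeholder:

```javascript
// Hypothetical MCP tool call using the new withLineNumbers/ranges options.
const call = {
  name: 'preflight_read_file',
  arguments: {
    bundleId: '00000000-0000-4000-8000-000000000000', // placeholder UUID v4
    file: 'repos/owner/repo/norm/src/main.ts',
    withLineNumbers: true, // output lines as "N|line"
    ranges: ['50-100'],    // read only lines 50-100 (1-indexed, inclusive)
  },
};
// A matching citation would be: repos/owner/repo/norm/src/main.ts:50-100
console.log(JSON.stringify(call.arguments.ranges));
```

Combining both options keeps responses small while preserving exact line references for citations.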
package/README.zh-CN.md CHANGED
@@ -15,7 +15,7 @@
 
  ## Features
 
- - **13 个 MCP 工具**:create/update/repair/search/evidence/trace/read/cleanup(外加 resources)
+ - **15 个 MCP 工具**:create/update/repair/search/evidence/trace/read/cleanup(外加 resources)
  - **5 个 MCP prompts**:交互式引导(菜单、分析指南、搜索指南、管理指南、追溯指南)
  - **去重**:避免对相同的规范化输入重复索引
  - **可靠的 GitHub 获取**:可配置 git clone 超时 + GitHub archive(zipball)兜底
@@ -140,7 +140,7 @@ npm run smoke
  - 列表与清理逻辑只接受 UUID v4 作为 bundleId
  - 会自动过滤 `#recycle`、`tmp`、`.deleting` 等非 bundle 目录
 
- ## Tools (12 total)
+ ## Tools (15 total)
 
  ### `preflight_list_bundles`
  列出所有 bundle。
@@ -170,11 +170,16 @@ npm run smoke
 
  ### `preflight_read_file`
  从 bundle 读取文件。两种模式:
- - **批量模式**(省略 `file`):返回所有关键文件(OVERVIEW.md、START_HERE.md、AGENTS.md、manifest.json、deps/dependency-graph.json、repo READMEs)
- - **单文件模式**(提供 `file`):返回指定文件(如 `deps/dependency-graph.json` 获取依赖图)
+ - **批量模式**(省略 `file`):返回所有关键文件
+ - **单文件模式**(提供 `file`):返回指定文件
+ - **证据引用**:使用 `withLineNumbers: true` 获取 `N|行` 格式;使用 `ranges: ["20-80"]` 读取指定行
  - 触发词:「查看概览」「项目概览」「bundle详情」「读取依赖图」
- - 使用 `file: "manifest.json"` 获取 bundle 元数据
- - 使用 `file: "deps/dependency-graph.json"` 读取依赖图(由 `preflight_evidence_dependency_graph` 生成)
+
+ ### `preflight_repo_tree`
+ 获取仓库结构概览,避免浪费 token 搜索。
+ - 返回:ASCII 目录树、按扩展名/目录统计文件数、入口点候选
+ - 在深入分析前使用,了解项目布局
+ - 触发词:「项目结构」「文件分布」「show tree」
 
  ### `preflight_delete_bundle`
  永久删除/移除一个 bundle。
@@ -225,7 +230,14 @@ npm run smoke
  写入/更新 bundle 级 traceability links(commit↔ticket、symbol↔test、code↔doc 等)。
 
  ### `preflight_trace_query`
- 查询 traceability links(提供 `bundleId` 时更快;省略时可跨 bundle 扫描,带上限)。
+ 查询 traceability links
+ - 无匹配边时返回 `reason` 和 `nextSteps`(帮助 LLM 决定下一步)
+ - 提供 `bundleId` 时更快;省略时可跨 bundle 扫描
+
+ ### `preflight_trace_export`
+ 导出 trace links 到 `trace/trace.json`。
+ - 注意:每次 `trace_upsert` 后会自动导出,此工具仅用于手动刷新
+ - 触发词:「导出trace」「刷新trace.json」
 
  ### `preflight_cleanup_orphans`
  删除不完整或损坏的 bundle(缺少有效 manifest.json)。
@@ -0,0 +1,224 @@
+ import fs from 'node:fs/promises';
+ import path from 'node:path';
+ const ENTRY_POINT_PATTERNS = [
+ { pattern: /^readme\.md$/i, type: 'readme', priority: 100 },
+ { pattern: /^readme$/i, type: 'readme', priority: 95 },
+ { pattern: /^index\.(ts|js|tsx|jsx|py|go|rs)$/i, type: 'index', priority: 90 },
+ { pattern: /^main\.(ts|js|tsx|jsx|py|go|rs)$/i, type: 'main', priority: 85 },
+ { pattern: /^app\.(ts|js|tsx|jsx|py)$/i, type: 'app', priority: 80 },
+ { pattern: /^server\.(ts|js|tsx|jsx|py|go)$/i, type: 'server', priority: 75 },
+ { pattern: /^cli\.(ts|js|py)$/i, type: 'cli', priority: 70 },
+ { pattern: /^__init__\.py$/i, type: 'index', priority: 60 },
+ { pattern: /^mod\.rs$/i, type: 'index', priority: 60 },
+ { pattern: /^lib\.rs$/i, type: 'main', priority: 85 },
+ { pattern: /^package\.json$/i, type: 'config', priority: 50 },
+ { pattern: /^pyproject\.toml$/i, type: 'config', priority: 50 },
+ { pattern: /^cargo\.toml$/i, type: 'config', priority: 50 },
+ { pattern: /^go\.mod$/i, type: 'config', priority: 50 },
+ { pattern: /\.test\.(ts|js|tsx|jsx)$/i, type: 'test', priority: 30 },
+ { pattern: /_test\.(py|go)$/i, type: 'test', priority: 30 },
+ { pattern: /^test_.*\.py$/i, type: 'test', priority: 30 },
+ ];
+ function matchesGlob(filename, patterns) {
+ if (patterns.length === 0)
+ return true;
+ for (const pattern of patterns) {
+ // Simple glob matching: * matches any sequence, ** matches any path
+ const regexPattern = pattern
+ .replace(/\./g, '\\.')
+ .replace(/\*\*/g, '.*')
+ .replace(/\*/g, '[^/]*');
+ const regex = new RegExp(`^${regexPattern}$`, 'i');
+ if (regex.test(filename))
+ return true;
+ }
+ return false;
+ }
+ function shouldExclude(relativePath, excludePatterns) {
+ if (excludePatterns.length === 0)
+ return false;
+ for (const pattern of excludePatterns) {
+ // Simple exclusion matching
+ if (relativePath.includes(pattern))
+ return true;
+ if (pattern.startsWith('*') && relativePath.endsWith(pattern.slice(1)))
+ return true;
+ }
+ return false;
+ }
+ export async function generateRepoTree(bundleRootDir, bundleId, options = {}) {
+ const depth = options.depth ?? 4;
+ const includePatterns = options.include ?? [];
+ const excludePatterns = options.exclude ?? ['node_modules', '.git', '__pycache__', '.venv', 'venv', 'dist', 'build', '*.pyc'];
+ const reposDir = path.join(bundleRootDir, 'repos');
+ const stats = {
+ totalFiles: 0,
+ totalDirs: 0,
+ byExtension: {},
+ byTopDir: {},
+ };
+ const entryPointCandidates = [];
+ // Build tree recursively
+ async function buildTree(dir, currentDepth, relativePath) {
+ if (currentDepth > depth)
+ return null;
+ try {
+ const stat = await fs.stat(dir);
+ const name = path.basename(dir);
+ if (stat.isFile()) {
+ // Check include/exclude patterns
+ if (includePatterns.length > 0 && !matchesGlob(name, includePatterns)) {
+ return null;
+ }
+ if (shouldExclude(relativePath, excludePatterns)) {
+ return null;
+ }
+ stats.totalFiles++;
+ // Track extension stats
+ const ext = path.extname(name).toLowerCase() || '(no ext)';
+ stats.byExtension[ext] = (stats.byExtension[ext] ?? 0) + 1;
+ // Track top directory stats
+ const topDir = relativePath.split('/')[0] ?? '(root)';
+ stats.byTopDir[topDir] = (stats.byTopDir[topDir] ?? 0) + 1;
+ // Check for entry point candidates
+ for (const ep of ENTRY_POINT_PATTERNS) {
+ if (ep.pattern.test(name)) {
+ entryPointCandidates.push({
+ path: relativePath,
+ type: ep.type,
+ priority: ep.priority,
+ });
+ break;
+ }
+ }
+ return { name, type: 'file', size: stat.size };
+ }
+ if (stat.isDirectory()) {
+ // Check exclude patterns for directories
+ if (shouldExclude(name, excludePatterns) || shouldExclude(relativePath, excludePatterns)) {
+ return null;
+ }
+ stats.totalDirs++;
+ const entries = await fs.readdir(dir, { withFileTypes: true });
+ const children = [];
+ // Sort: directories first, then files, alphabetically
+ const sortedEntries = entries.sort((a, b) => {
+ if (a.isDirectory() && !b.isDirectory())
+ return -1;
+ if (!a.isDirectory() && b.isDirectory())
+ return 1;
+ return a.name.localeCompare(b.name);
+ });
+ for (const entry of sortedEntries) {
+ const childPath = path.join(dir, entry.name);
+ const childRelPath = relativePath ? `${relativePath}/${entry.name}` : entry.name;
+ const childNode = await buildTree(childPath, currentDepth + 1, childRelPath);
+ if (childNode) {
+ children.push(childNode);
+ }
+ }
+ return { name, type: 'dir', children: children.length > 0 ? children : undefined };
+ }
+ return null;
+ }
+ catch {
+ return null;
+ }
+ }
+ // Build tree starting from repos directory
+ let rootNode = null;
+ try {
+ await fs.access(reposDir);
+ rootNode = await buildTree(reposDir, 0, '');
+ }
+ catch {
+ // repos directory doesn't exist, try bundle root
+ rootNode = await buildTree(bundleRootDir, 0, '');
+ }
+ // Generate ASCII tree
+ function renderTree(node, prefix = '', isLast = true) {
+ const lines = [];
+ const connector = isLast ? '└── ' : '├── ';
+ const extension = isLast ? ' ' : '│ ';
+ lines.push(`${prefix}${connector}${node.name}${node.type === 'dir' ? '/' : ''}`);
+ if (node.children) {
+ const childCount = node.children.length;
+ node.children.forEach((child, index) => {
+ const childLines = renderTree(child, prefix + extension, index === childCount - 1);
+ lines.push(...childLines);
+ });
+ }
+ return lines;
+ }
+ let treeText = '';
+ if (rootNode) {
+ if (rootNode.children && rootNode.children.length > 0) {
+ treeText = `${rootNode.name}/\n`;
+ rootNode.children.forEach((child, index) => {
+ const childLines = renderTree(child, '', index === rootNode.children.length - 1);
+ treeText += childLines.join('\n') + '\n';
+ });
+ }
+ else {
+ treeText = `${rootNode.name}/ (empty or filtered out)`;
+ }
+ }
+ else {
+ treeText = '(no files found)';
+ }
+ // Sort entry point candidates by priority
+ entryPointCandidates.sort((a, b) => b.priority - a.priority);
+ return {
+ bundleId,
+ tree: treeText.trim(),
+ stats,
+ entryPointCandidates: entryPointCandidates.slice(0, 20), // Limit to top 20
+ };
+ }
+ /**
+ * Format tree result as human-readable text
+ */
+ export function formatTreeResult(result) {
+ const lines = [];
+ lines.push(`📂 Repository Structure for bundle: ${result.bundleId}`);
+ lines.push('');
+ lines.push('## Directory Tree');
+ lines.push('```');
+ lines.push(result.tree);
+ lines.push('```');
+ lines.push('');
+ lines.push('## Statistics');
+ lines.push(`- Total files: ${result.stats.totalFiles}`);
+ lines.push(`- Total directories: ${result.stats.totalDirs}`);
+ lines.push('');
+ // By extension (top 10)
+ const extEntries = Object.entries(result.stats.byExtension)
+ .sort((a, b) => b[1] - a[1])
+ .slice(0, 10);
+ if (extEntries.length > 0) {
+ lines.push('### Files by Extension');
+ for (const [ext, count] of extEntries) {
+ lines.push(`- ${ext}: ${count}`);
+ }
+ lines.push('');
+ }
+ // By top directory (top 10)
+ const dirEntries = Object.entries(result.stats.byTopDir)
+ .sort((a, b) => b[1] - a[1])
+ .slice(0, 10);
+ if (dirEntries.length > 0) {
+ lines.push('### Files by Top Directory');
+ for (const [dir, count] of dirEntries) {
+ lines.push(`- ${dir}: ${count}`);
+ }
+ lines.push('');
+ }
+ // Entry point candidates
+ if (result.entryPointCandidates.length > 0) {
+ lines.push('## Entry Point Candidates');
+ for (const ep of result.entryPointCandidates.slice(0, 10)) {
+ lines.push(`- \`${ep.path}\` (${ep.type})`);
+ }
+ }
+ return lines.join('\n');
+ }
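The `renderTree` helper added in the file above can be exercised in isolation. This is a trimmed, standalone copy for illustration (node shape `{ name, type, children? }` as in the diff); the connector indentation strings are an assumption here, since the diff viewer collapsed the original whitespace:

```javascript
// Standalone sketch of the ASCII tree rendering approach used above.
// Indentation widths ('    ' and '│   ') are assumed, not copied from the package.
function renderTree(node, prefix = '', isLast = true) {
  const lines = [];
  const connector = isLast ? '└── ' : '├── ';
  const extension = isLast ? '    ' : '│   ';
  lines.push(`${prefix}${connector}${node.name}${node.type === 'dir' ? '/' : ''}`);
  if (node.children) {
    node.children.forEach((child, i) => {
      lines.push(...renderTree(child, prefix + extension, i === node.children.length - 1));
    });
  }
  return lines;
}

const root = {
  name: 'src', type: 'dir',
  children: [
    { name: 'lib', type: 'dir', children: [{ name: 'util.ts', type: 'file' }] },
    { name: 'index.ts', type: 'file' },
  ],
};
console.log(renderTree(root).join('\n'));
// └── src/
//     ├── lib/
//     │   └── util.ts
//     └── index.ts
```

Directories sort before files in the real implementation, so sibling ordering in the rendered tree is deterministic.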
@@ -29,8 +29,17 @@ export const DependencyGraphInputSchema = {
  maxNodes: z.number().int().min(10).max(2000).default(300),
  maxEdges: z.number().int().min(10).max(5000).default(800),
  timeBudgetMs: z.number().int().min(1000).max(30_000).default(25_000),
+ /** Maximum file size in bytes. Files larger than this are skipped. Default 1MB. */
+ maxFileSizeBytes: z.number().int().min(10_000).max(50_000_000).default(1_000_000)
+ .describe('Max file size in bytes. Default 1MB. Increase if important large files are skipped.'),
+ /** Strategy for handling large files */
+ largeFileStrategy: z.enum(['skip', 'truncate']).default('skip')
+ .describe('How to handle files exceeding maxFileSizeBytes. skip=ignore entirely, truncate=read first N lines.'),
+ /** If largeFileStrategy=truncate, how many lines to read */
+ truncateLines: z.number().int().min(100).max(5000).default(500)
+ .describe('When largeFileStrategy=truncate, read this many lines. Default 500.'),
  })
- .default({ maxFiles: 200, maxNodes: 300, maxEdges: 800, timeBudgetMs: 25_000 }),
+ .default({ maxFiles: 200, maxNodes: 300, maxEdges: 800, timeBudgetMs: 25_000, maxFileSizeBytes: 1_000_000, largeFileStrategy: 'skip', truncateLines: 500 }),
  };
  function sha256Hex(text) {
  return crypto.createHash('sha256').update(text, 'utf8').digest('hex');
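The three new schema fields (`maxFileSizeBytes`, `largeFileStrategy`, `truncateLines`) interact as a small decision rule. The helper below is an illustrative sketch of that policy, not code from the package; the field names mirror the diff:

```javascript
// Sketch of the large-file policy implied by the new schema fields.
// planRead is a hypothetical helper; only the option names come from the diff.
function planRead(sizeBytes, { maxFileSizeBytes = 1_000_000, largeFileStrategy = 'skip', truncateLines = 500 } = {}) {
  if (sizeBytes <= maxFileSizeBytes) return { action: 'read' };           // within budget
  if (largeFileStrategy === 'truncate') return { action: 'truncate', lines: truncateLines };
  return { action: 'skip', reason: `file too large (>${Math.round(maxFileSizeBytes / 1024)}KB)` };
}

console.log(planRead(500_000));                                      // → read in full
console.log(planRead(2_000_000));                                    // → skipped with a reason
console.log(planRead(2_000_000, { largeFileStrategy: 'truncate' })); // → first 500 lines only
```

The `skip` default keeps graph generation fast; `truncate` trades completeness of large files for still capturing their top-of-file imports.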
@@ -919,6 +928,35 @@ async function generateGlobalDependencyGraph(ctx) {
  let truncatedReason;
  let filesProcessed = 0;
  let usedAstCount = 0;
+ // Coverage tracking
+ const perLanguage = {};
+ const perDir = {};
+ const skippedFiles = [];
+ let scannedFilesCount = 0;
+ // Helper to get language from extension
+ const extToLang = {
+ '.ts': 'TypeScript', '.tsx': 'TypeScript',
+ '.js': 'JavaScript', '.jsx': 'JavaScript', '.mjs': 'JavaScript', '.cjs': 'JavaScript',
+ '.py': 'Python',
+ '.go': 'Go',
+ '.rs': 'Rust',
+ '.java': 'Java',
+ '.rb': 'Ruby',
+ '.php': 'PHP',
+ };
+ const getLang = (filePath) => {
+ const ext = path.extname(filePath).toLowerCase();
+ return extToLang[ext] ?? 'Other';
+ };
+ const getTopDir = (filePath) => {
+ // Extract the top-level directory under repos/owner/repo/norm/
+ const parts = filePath.split('/');
+ // repos/owner/repo/norm/[topDir]/...
+ if (parts.length > 4 && parts[0] === 'repos' && parts[3] === 'norm') {
+ return parts[4] ?? '(root)';
+ }
+ return parts[0] ?? '(root)';
+ };
  // Collect all code files
  const codeExtensions = new Set(['.ts', '.tsx', '.js', '.jsx', '.mjs', '.cjs', '.py', '.go', '.rs', '.java', '.rb', '.php']);
  const codeFiles = [];
@@ -946,15 +984,29 @@ async function generateGlobalDependencyGraph(ctx) {
  }
  // Walk repos directory
  for await (const relPath of walkDir(paths.reposDir, 'repos')) {
+ scannedFilesCount++;
+ // Track per-directory stats
+ const topDir = getTopDir(relPath);
+ perDir[topDir] = (perDir[topDir] ?? 0) + 1;
+ // Track per-language stats
+ const lang = getLang(relPath);
+ if (!perLanguage[lang]) {
+ perLanguage[lang] = { scanned: 0, parsed: 0, edges: 0 };
+ }
+ perLanguage[lang].scanned++;
  if (codeFiles.length >= limits.maxFiles) {
  truncated = true;
  truncatedReason = 'maxFiles reached during discovery';
- break;
+ skippedFiles.push({ path: relPath, reason: 'maxFiles limit reached' });
+ continue;
  }
  // Only include files under norm/ directories
  if (relPath.includes('/norm/')) {
  codeFiles.push(relPath);
  }
+ else {
+ skippedFiles.push({ path: relPath, reason: 'not in norm/ directory' });
+ }
  }
  warnings.push({
  code: 'global_mode',
@@ -971,15 +1023,49 @@ async function generateGlobalDependencyGraph(ctx) {
  const fileId = `file:${filePath}`;
  addNode({ id: fileId, kind: 'file', name: filePath, file: filePath });
  // Read and extract imports
+ const lang = getLang(filePath);
  try {
  const absPath = safeJoin(paths.rootDir, filePath);
- const raw = await fs.readFile(absPath, 'utf8');
+ const stat = await fs.stat(absPath);
+ // Handle large files based on strategy
+ const maxSize = args.options.maxFileSizeBytes ?? 1_000_000;
+ const strategy = args.options.largeFileStrategy ?? 'skip';
+ const truncateLines = args.options.truncateLines ?? 500;
+ if (stat.size > maxSize) {
+ if (strategy === 'skip') {
+ skippedFiles.push({
+ path: filePath,
+ size: stat.size,
+ reason: `file too large (>${Math.round(maxSize / 1024)}KB). Use largeFileStrategy='truncate' or increase maxFileSizeBytes to include.`
+ });
+ continue;
+ }
+ // strategy === 'truncate': read first N lines only
+ }
+ let raw;
+ if (stat.size > maxSize && strategy === 'truncate') {
+ // Read file and take first N lines
+ const fullContent = await fs.readFile(absPath, 'utf8');
+ const allLines = fullContent.replace(/\r\n/g, '\n').split('\n');
+ raw = allLines.slice(0, truncateLines).join('\n');
+ warnings.push({
+ code: 'file_truncated',
+ message: `${filePath}: truncated to first ${truncateLines} lines (file size ${Math.round(stat.size / 1024)}KB exceeds ${Math.round(maxSize / 1024)}KB limit)`,
+ });
+ }
+ else {
+ raw = await fs.readFile(absPath, 'utf8');
+ }
  const normalized = raw.replace(/\r\n/g, '\n');
  const lines = normalized.split('\n');
  const extracted = await extractImportsForFile(cfg, filePath, normalized, lines, warnings);
  if (extracted.usedAst)
  usedAstCount++;
  filesProcessed++;
+ // Track parsed count per language
+ if (perLanguage[lang]) {
+ perLanguage[lang].parsed++;
+ }
  const fileRepo = parseRepoNormPath(filePath);
  if (!fileRepo)
  continue;
@@ -1016,11 +1102,17 @@ async function generateGlobalDependencyGraph(ctx) {
  sources: [source],
  notes: [...imp.notes, `resolved import "${imp.module}" to ${resolvedFile}`],
  });
+ // Track edges per language
+ if (perLanguage[lang]) {
+ perLanguage[lang].edges++;
+ }
  }
  }
  }
- catch {
- // Skip unreadable files
+ catch (err) {
+ // Track skipped files with reason
+ const reason = err instanceof Error ? err.message : 'unknown error';
+ skippedFiles.push({ path: filePath, reason: `read error: ${reason.slice(0, 100)}` });
  }
  }
  // Post-process warnings
@@ -1031,6 +1123,22 @@ async function generateGlobalDependencyGraph(ctx) {
  : 'Global graph used regex-based import extraction. Import resolution is best-effort. Only internal imports (resolved to files in the bundle) are shown.',
  });
  const importEdges = edges.filter((e) => e.type === 'imports_resolved').length;
+ // Build coverage report
+ const coverageReport = {
+ scannedFilesCount,
+ parsedFilesCount: filesProcessed,
+ perLanguage,
+ perDir,
+ skippedFiles: skippedFiles.slice(0, 50), // Limit to first 50 for output size
+ truncated,
+ truncatedReason,
+ limits: {
+ maxFiles: limits.maxFiles,
+ maxNodes: limits.maxNodes,
+ maxEdges: limits.maxEdges,
+ timeBudgetMs,
+ },
+ };
  return {
  meta: {
  requestId,
@@ -1060,6 +1168,7 @@ async function generateGlobalDependencyGraph(ctx) {
  },
  warnings,
  },
+ coverageReport,
  };
  }
  /**
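The `coverageReport` assembled above lets a client judge how complete the generated graph is. The sketch below is illustrative: the summarizer is a hypothetical helper, and the sample object is made-up data shaped after the fields in the diff:

```javascript
// Hypothetical consumer of the coverageReport structure added above.
function summarizeCoverage(report) {
  const ratio = report.scannedFilesCount === 0
    ? 0
    : report.parsedFilesCount / report.scannedFilesCount;
  return `parsed ${report.parsedFilesCount}/${report.scannedFilesCount} files (${Math.round(ratio * 100)}%), ${report.skippedFiles.length} skipped`;
}

// Sample data shaped like the report fields (scannedFilesCount, parsedFilesCount,
// perLanguage, perDir, skippedFiles, truncated); values here are invented.
const sample = {
  scannedFilesCount: 40,
  parsedFilesCount: 30,
  perLanguage: { TypeScript: { scanned: 25, parsed: 20, edges: 60 } },
  perDir: { src: 30 },
  skippedFiles: [{ path: 'repos/a/b/norm/big.js', reason: 'file too large (>977KB)' }],
  truncated: false,
};
console.log(summarizeCoverage(sample));
// parsed 30/40 files (75%), 1 skipped
```

A low parsed/scanned ratio plus a populated `skippedFiles` list signals that raising `maxFiles` or `maxFileSizeBytes` may improve the graph.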
package/dist/server.js CHANGED
@@ -1,4 +1,5 @@
  import fs from 'node:fs/promises';
+ import path from 'node:path';
  import { McpServer, ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js';
  import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
  import * as z from 'zod';
@@ -16,6 +17,7 @@ import { cleanupOnStartup, cleanupOrphanBundles } from './bundle/cleanup.js';
  import { startHttpServer } from './http/server.js';
  import { DependencyGraphInputSchema, generateDependencyGraph } from './evidence/dependencyGraph.js';
  import { TraceQueryInputSchema, TraceUpsertInputSchema, traceQuery, traceUpsert } from './trace/service.js';
+ import { generateRepoTree, formatTreeResult } from './bundle/tree.js';
  const CreateRepoInputSchema = z.union([
  z.object({
  kind: z.literal('github'),
@@ -100,7 +102,7 @@ export async function startServer() {
  startHttpServer(cfg);
  const server = new McpServer({
  name: 'preflight-mcp',
- version: '0.2.3',
+ version: '0.2.5',
  description: 'Create evidence-based preflight bundles for repositories (docs + code) with SQLite FTS search.',
  }, {
  capabilities: {
@@ -268,41 +270,54 @@
  '(1) Omit "file" param → returns ALL key files in one call. ' +
  '(2) Provide "file" param → returns that specific file. ' +
  'Use when: "查看bundle", "show bundle", "read overview", "bundle概览", "项目信息", "读取依赖图".\n\n' +
+ '⭐ **Evidence Citation Support:**\n' +
+ '- Use `withLineNumbers: true` to get output in `N|line` format for precise citations\n' +
+ '- Use `ranges: ["20-80", "100-120"]` to read only specific line ranges\n' +
+ '- Combine both for efficient evidence gathering: `{ file: "src/main.ts", withLineNumbers: true, ranges: ["50-100"] }`\n' +
+ '- Citation format: `repos/owner/repo/norm/src/main.ts:50-100`\n\n' +
+ '⭐ **Recommended Reading Order (AI-optimized summaries are better than raw README):**\n' +
+ '1. `OVERVIEW.md` - Project structure & architecture summary (START HERE)\n' +
+ '2. `START_HERE.md` - Key entry points & critical paths\n' +
+ '3. `AGENTS.md` - AI agent usage guide\n' +
+ '4. `analysis/FACTS.json` - Static analysis data (dependencies, exports, etc.)\n' +
+ '5. `deps/dependency-graph.json` - Import relationships (if generated)\n' +
+ '6. `repos/{owner}/{repo}/norm/README.md` - Original README (only if you need raw docs)\n\n' +
  '📁 **Bundle Structure:**\n' +
  '```\n' +
  'bundle-{id}/\n' +
- '├── manifest.json # Bundle metadata (JSON)\n' +
- '├── OVERVIEW.md # Project overview\n' +
- '├── START_HERE.md # Entry points guide\n' +
- '├── AGENTS.md # AI agent instructions\n' +
- '├── analysis/\n' +
- '│ └── FACTS.json # Static analysis facts (JSON) - use this tool to read\n' +
- '├── deps/\n' +
- '│ └── dependency-graph.json # Import graph (JSON) - generated by preflight_evidence_dependency_graph\n' +
- '├── trace/\n' +
- '│ ├── trace.sqlite3 # Trace links DB - query via preflight_trace_query\n' +
- '│ └── trace.json # Trace links export (JSON) - use this tool to read\n' +
- '├── indexes/\n' +
- '│ └── search.sqlite3 # FTS5 index - query via preflight_search_bundle\n' +
- '└── repos/{owner}/{repo}/\n' +
- ' ├── raw/ # Original files\n' +
- ' └── norm/ # Normalized source code\n' +
+ '├── OVERVIEW.md # Start here - AI-generated project summary\n' +
+ '├── START_HERE.md # Entry points & key files\n' +
+ '├── AGENTS.md # AI agent instructions\n' +
+ '├── manifest.json # Bundle metadata\n' +
+ '├── analysis/FACTS.json # Static analysis facts\n' +
+ '├── deps/dependency-graph.json # Import graph (generated on demand)\n' +
+ '├── trace/trace.json # Trace links export (auto-generated after trace_upsert)\n' +
+ '├── indexes/search.sqlite3 # FTS5 index (use preflight_search_bundle)\n' +
+ '└── repos/{owner}/{repo}/norm/ # Source code & original README\n' +
  '```\n\n' +
- '**File Access Methods:**\n' +
- '- README: `repos/{owner}/{repo}/norm/README.md` - use THIS tool\n' +
- '- JSON files (FACTS.json, dependency-graph.json, trace.json, manifest.json): Use THIS tool\n' +
- '- SQLite DBs (search.sqlite3): Use preflight_search_bundle\n' +
- '- SQLite DBs (trace.sqlite3): Use preflight_trace_query/preflight_trace_upsert\n' +
- '- Source code: `repos/{owner}/{repo}/norm/{path}` - use THIS tool',
+ '**File Access:**\n' +
+ '- Omit `file` param returns OVERVIEW + START_HERE + AGENTS + manifest (recommended)\n' +
+ '- Original README: `file: "repos/{owner}/{repo}/norm/README.md"`\n' +
+ '- Source code: `file: "repos/{owner}/{repo}/norm/{path}"`\n' +
+ '- Search code: use preflight_search_bundle instead',
  inputSchema: {
  bundleId: z.string().describe('Bundle ID to read.'),
  file: z.string().optional().describe('Specific file to read (e.g., "deps/dependency-graph.json"). If omitted, returns all key files including dependency graph if exists.'),
+ withLineNumbers: z.boolean().optional().default(false).describe('If true, prefix each line with line number in "N|" format for evidence citation.'),
+ ranges: z.array(z.string()).optional().describe('Line ranges to read, e.g. ["20-80", "100-120"]. Each range is "start-end" (1-indexed, inclusive). If omitted, reads entire file.'),
  },
  outputSchema: {
  bundleId: z.string(),
  file: z.string().optional(),
  content: z.string().optional(),
  files: z.record(z.string(), z.string().nullable()).optional(),
+ lineInfo: z.object({
+ totalLines: z.number(),
+ ranges: z.array(z.object({
+ start: z.number(),
+ end: z.number(),
+ })),
+ }).optional(),
  },
  annotations: {
  readOnlyHint: true,
@@ -315,13 +330,78 @@
  }
  const paths = getBundlePathsForId(storageDir, args.bundleId);
  const bundleRoot = paths.rootDir;
+ // Helper: parse range string "start-end" into { start, end }
+ const parseRange = (rangeStr) => {
+ const match = rangeStr.match(/^(\d+)-(\d+)$/);
+ if (!match)
+ return null;
+ const start = parseInt(match[1], 10);
+ const end = parseInt(match[2], 10);
+ if (start < 1 || end < start)
+ return null;
+ return { start, end };
+ };
+ // Helper: format content with optional line numbers and ranges
+ const formatContent = (rawContent, withLineNumbers, ranges) => {
+ const lines = rawContent.replace(/\r\n/g, '\n').split('\n');
+ const totalLines = lines.length;
+ let selectedLines = [];
+ if (ranges && ranges.length > 0) {
+ // Extract specified ranges
+ for (const range of ranges) {
+ const start = Math.max(1, range.start);
+ const end = Math.min(totalLines, range.end);
+ for (let i = start; i <= end; i++) {
+ selectedLines.push({ lineNo: i, text: lines[i - 1] ?? '' });
+ }
+ }
+ }
+ else {
+ // All lines
+ selectedLines = lines.map((text, idx) => ({ lineNo: idx + 1, text }));
+ }
+ // Format output
+ const formatted = withLineNumbers
+ ? selectedLines.map((l) => `${l.lineNo}|${l.text}`).join('\n')
+ : selectedLines.map((l) => l.text).join('\n');
+ const actualRanges = ranges && ranges.length > 0
+ ? ranges.map((r) => ({ start: Math.max(1, r.start), end: Math.min(totalLines, r.end) }))
+ : [{ start: 1, end: totalLines }];
+ return { content: formatted, lineInfo: { totalLines, ranges: actualRanges } };
+ };
  // Single file mode
  if (args.file) {
  const absPath = safeJoin(bundleRoot, args.file);
- const content = await fs.readFile(absPath, 'utf8');
- const out = { bundleId: args.bundleId, file: args.file, content };
+ const rawContent = await fs.readFile(absPath, 'utf8');
+ // Parse ranges if provided
+ let parsedRanges;
+ if (args.ranges && args.ranges.length > 0) {
+ parsedRanges = [];
+ for (const rangeStr of args.ranges) {
+ const parsed = parseRange(rangeStr);
+ if (!parsed) {
+ throw new Error(`Invalid range format: "${rangeStr}". Expected "start-end" (e.g., "20-80").`);
+ }
+ parsedRanges.push(parsed);
+ }
+ // Sort and merge overlapping ranges
+ parsedRanges.sort((a, b) => a.start - b.start);
+ }
+ const { content, lineInfo } = formatContent(rawContent, args.withLineNumbers ?? false, parsedRanges);
+ const out = {
+ bundleId: args.bundleId,
+ file: args.file,
+ content,
+ lineInfo,
+ };
+ // Build text with citation hint
+ let textOutput = content;
+ if (parsedRanges && parsedRanges.length > 0) {
+ const rangeStr = parsedRanges.map((r) => `${r.start}-${r.end}`).join(', ');
+ textOutput = `[${args.file}:${rangeStr}] (${lineInfo.totalLines} total lines)\n\n${content}`;
+ }
  return {
- content: [{ type: 'text', text: content }],
+ content: [{ type: 'text', text: textOutput }],
  structuredContent: out,
  };
  }
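The range selection and `N|line` formatting added above reduce to a few lines. This standalone sketch mirrors the behavior of the diff's `parseRange`/`formatContent` helpers for illustration (it is a re-implementation, not the shipped code):

```javascript
// Illustrative re-implementation of the "start-end" range parsing and
// N|line formatting behavior described in the hunk above.
function parseRange(rangeStr) {
  const m = rangeStr.match(/^(\d+)-(\d+)$/);
  if (!m) return null;
  const start = parseInt(m[1], 10);
  const end = parseInt(m[2], 10);
  return start >= 1 && end >= start ? { start, end } : null; // 1-indexed, inclusive
}

function formatWithLines(text, ranges) {
  const lines = text.split('\n');
  const picked = [];
  for (const r of ranges) {
    // Clamp the requested range to the file's actual line count.
    for (let i = Math.max(1, r.start); i <= Math.min(lines.length, r.end); i++) {
      picked.push(`${i}|${lines[i - 1]}`);
    }
  }
  return picked.join('\n');
}

const sampleText = 'alpha\nbeta\ngamma\ndelta';
console.log(formatWithLines(sampleText, [parseRange('2-3')]));
// 2|beta
// 3|gamma
```

Clamping out-of-bounds ranges instead of throwing means a stale citation still returns whatever lines exist, which suits the evidence-gathering workflow.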
@@ -380,6 +460,67 @@ export async function startServer() {
          throw wrapPreflightError(err);
        }
      });
+     server.registerTool('preflight_repo_tree', {
+       title: 'Repository tree & statistics',
+       description: 'Get repository structure overview with directory tree, file statistics, and entry point candidates. ' +
+         'Use this BEFORE deep analysis to understand project layout without wasting tokens on search. ' +
+         'Use when: "show project structure", "what files are in this repo", "项目结构", "文件分布", "show tree".\n\n' +
+         '**Output includes:**\n' +
+         '- ASCII directory tree (depth-limited)\n' +
+         '- File count by extension (.ts, .py, etc.)\n' +
+         '- File count by top-level directory\n' +
+         '- Entry point candidates (README, main, index, cli, server, etc.)\n\n' +
+         '**Recommended workflow:**\n' +
+         '1. Call preflight_repo_tree to understand structure\n' +
+         '2. Read OVERVIEW.md for AI-generated summary\n' +
+         '3. Use preflight_search_bundle to find specific code\n' +
+         '4. Use preflight_read_file with ranges for evidence gathering',
+       inputSchema: {
+         bundleId: z.string().describe('Bundle ID to analyze.'),
+         depth: z.number().int().min(1).max(10).default(4).describe('Maximum directory depth to traverse. Default 4.'),
+         include: z.array(z.string()).optional().describe('Glob patterns to include (e.g., ["*.ts", "*.py"]). If omitted, includes all files.'),
+         exclude: z.array(z.string()).optional().describe('Patterns to exclude (e.g., ["node_modules", "*.pyc"]). Defaults include common excludes.'),
+       },
+       outputSchema: {
+         bundleId: z.string(),
+         tree: z.string().describe('ASCII directory tree representation.'),
+         stats: z.object({
+           totalFiles: z.number(),
+           totalDirs: z.number(),
+           byExtension: z.record(z.string(), z.number()),
+           byTopDir: z.record(z.string(), z.number()),
+         }),
+         entryPointCandidates: z.array(z.object({
+           path: z.string(),
+           type: z.enum(['readme', 'main', 'index', 'cli', 'server', 'app', 'test', 'config']),
+           priority: z.number(),
+         })),
+       },
+       annotations: {
+         readOnlyHint: true,
+       },
+     }, async (args) => {
+       try {
+         const storageDir = await findBundleStorageDir(cfg.storageDirs, args.bundleId);
+         if (!storageDir) {
+           throw new BundleNotFoundError(args.bundleId);
+         }
+         const paths = getBundlePathsForId(storageDir, args.bundleId);
+         const result = await generateRepoTree(paths.rootDir, args.bundleId, {
+           depth: args.depth,
+           include: args.include,
+           exclude: args.exclude,
+         });
+         const textOutput = formatTreeResult(result);
+         return {
+           content: [{ type: 'text', text: textOutput }],
+           structuredContent: result,
+         };
+       }
+       catch (err) {
+         throw wrapPreflightError(err);
+       }
+     });
      server.registerTool('preflight_delete_bundle', {
        title: 'Delete bundle',
        description: 'Delete/remove a bundle permanently. Use when: "delete bundle", "remove bundle", "清除bundle", "删除索引", "移除仓库".',
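For illustration, the `byExtension`/`byTopDir` statistics declared in the repo-tree output schema could be aggregated from a flat list of bundle-relative file paths along these lines. `aggregateStats` is a hypothetical name; `generateRepoTree`'s real implementation is not part of this diff.

```javascript
// Sketch only: count files per extension and per top-level directory,
// matching the shape of the stats object in the output schema above.
function aggregateStats(filePaths) {
  const byExtension = {};
  const byTopDir = {};
  for (const p of filePaths) {
    const base = p.split('/').pop();
    const dot = base.lastIndexOf('.');
    // Treat dotless names (and dotfiles) as having no extension.
    const ext = dot > 0 ? base.slice(dot) : '(none)';
    byExtension[ext] = (byExtension[ext] ?? 0) + 1;
    const topDir = p.includes('/') ? p.split('/')[0] : '(root)';
    byTopDir[topDir] = (byTopDir[topDir] ?? 0) + 1;
  }
  return { totalFiles: filePaths.length, byExtension, byTopDir };
}
```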
@@ -832,12 +973,50 @@ export async function startServer() {
        'Generate an evidence-based dependency graph. IMPORTANT: Before running, ASK the user which bundle and which file/mode they want! ' +
        'Two modes: (1) TARGET MODE: analyze a specific file (provide target.file). (2) GLOBAL MODE: project-wide graph (omit target). ' +
        'Do NOT automatically choose bundle or mode - confirm with user first! ' +
-       'File path must be bundle-relative: repos/{owner}/{repo}/norm/{path}.',
+       'File path must be bundle-relative: repos/{owner}/{repo}/norm/{path}.\n\n' +
+       '📊 **Coverage Report (Global Mode):**\n' +
+       'The response includes a `coverageReport` explaining what was analyzed:\n' +
+       '- `scannedFilesCount` / `parsedFilesCount`: Files discovered vs successfully parsed\n' +
+       '- `perLanguage`: Statistics per programming language (TypeScript, Python, etc.)\n' +
+       '- `perDir`: File counts per top-level directory\n' +
+       '- `skippedFiles`: Files that were skipped, with reasons (too large, read error, etc.)\n' +
+       '- `truncated` / `truncatedReason`: Whether limits were hit\n\n' +
+       'Use this to understand graph completeness and identify gaps.\n\n' +
+       '📂 **Large File Handling (LLM Guidance):**\n' +
+       '- Default: files >1MB are skipped to avoid timeouts\n' +
+       '- If coverageReport.skippedFiles shows important files were skipped:\n' +
+       '  1. Try `largeFileStrategy: "truncate"` to read the first 500 lines\n' +
+       '  2. Or increase `maxFileSizeBytes` (e.g., 5000000 for 5MB)\n' +
+       '- Options: `{ maxFileSizeBytes: 5000000, largeFileStrategy: "truncate", truncateLines: 1000 }`\n' +
+       '- The user can override these settings if needed',
      inputSchema: DependencyGraphInputSchema,
      outputSchema: {
        meta: z.any(),
        facts: z.any(),
        signals: z.any(),
+       coverageReport: z.object({
+         scannedFilesCount: z.number(),
+         parsedFilesCount: z.number(),
+         perLanguage: z.record(z.string(), z.object({
+           scanned: z.number(),
+           parsed: z.number(),
+           edges: z.number(),
+         })),
+         perDir: z.record(z.string(), z.number()),
+         skippedFiles: z.array(z.object({
+           path: z.string(),
+           size: z.number().optional(),
+           reason: z.string(),
+         })),
+         truncated: z.boolean(),
+         truncatedReason: z.string().optional(),
+         limits: z.object({
+           maxFiles: z.number(),
+           maxNodes: z.number(),
+           maxEdges: z.number(),
+           timeBudgetMs: z.number(),
+         }),
+       }).optional().describe('Coverage report explaining what was analyzed and what was skipped (global mode only).'),
      },
      annotations: {
        readOnlyHint: true,
@@ -861,9 +1040,21 @@ export async function startServer() {
      description: 'Create or update traceability links (code↔test, code↔doc, file↔requirement). ' +
        '**Proactive use recommended**: When you discover relationships during code analysis ' +
        '(e.g., "this file has a corresponding test", "this module implements feature X"), ' +
-       'automatically create trace links to record these findings for future queries. ' +
-       'Common link types: tested_by, implements, documents, relates_to, depends_on. ' +
-       'Stores trace edges in a per-bundle SQLite database.',
+       'automatically create trace links to record these findings for future queries.\n\n' +
+       '📌 **When to Write Trace Links (LLM Rules):**\n' +
+       'Write trace links ONLY for these 3 high-value relationship types:\n' +
+       '1. **Entry ↔ Core module** (entrypoint_of): Main entry points and their critical paths\n' +
+       '2. **Implementation ↔ Test** (tested_by): Code files and their corresponding tests\n' +
+       '3. **Code ↔ Documentation** (documents/implements): Code implementing specs or documented in files\n\n' +
+       '⚠️ **Required Evidence:**\n' +
+       '- sources: Array of evidence with file path + line range or note\n' +
+       '- method: "exact" (parser-verified) or "heuristic" (name-based)\n' +
+       '- confidence: 0.0-1.0 (use 0.9 for exact matches, 0.6-0.8 for heuristics)\n\n' +
+       '❌ **Do NOT write:**\n' +
+       '- Pure import relationships (use dependency_graph instead)\n' +
+       '- Low-value or obvious relationships\n\n' +
+       '**Standard edge_types:** tested_by, documents, implements, relates_to, entrypoint_of, depends_on\n\n' +
+       '📤 **Auto-export:** trace.json is automatically exported to trace/trace.json after each upsert so the LLM can read it directly.',
      inputSchema: TraceUpsertInputSchema,
      outputSchema: {
        bundleId: z.string(),
@@ -911,6 +1102,10 @@ export async function startServer() {
          updatedAt: z.string(),
          bundleId: z.string().optional(),
        })),
+       reason: z.enum(['no_edges', 'no_matching_edges', 'not_initialized', 'no_matching_bundle']).optional()
+         .describe('Reason for empty results. no_edges=no trace links exist across bundles, no_matching_edges=links exist but none match the query, not_initialized=trace DB empty for this bundle, no_matching_bundle=no bundles found.'),
+       nextSteps: z.array(z.string()).optional()
+         .describe('Actionable guidance when edges is empty.'),
      },
      annotations: {
        readOnlyHint: true,
@@ -922,8 +1117,60 @@ export async function startServer() {
          await assertBundleComplete(cfg, args.bundleId);
        }
        const out = await traceQuery(cfg, args);
+       // Build a human-readable text output
+       let textOutput;
+       if (out.edges.length === 0 && out.reason) {
+         textOutput = `No trace links found.\nReason: ${out.reason}\n\nNext steps:\n${(out.nextSteps ?? []).map(s => `- ${s}`).join('\n')}`;
+       }
+       else {
+         textOutput = JSON.stringify(out, null, 2);
+       }
        return {
-         content: [{ type: 'text', text: JSON.stringify(out, null, 2) }],
+         content: [{ type: 'text', text: textOutput }],
+         structuredContent: out,
+       };
+     }
+     catch (err) {
+       throw wrapPreflightError(err);
+     }
+   });
+   server.registerTool('preflight_trace_export', {
+     title: 'Trace: export to JSON',
+     description: 'Export trace links to trace/trace.json for direct reading by the LLM. ' +
+       'Note: trace.json is auto-exported after each trace_upsert, so this tool is only needed to manually refresh or verify the export. ' +
+       'Use when: "export trace", "refresh trace.json", "导出trace", "刷新trace.json".',
+     inputSchema: {
+       bundleId: z.string().describe('Bundle ID to export trace links from.'),
+     },
+     outputSchema: {
+       bundleId: z.string(),
+       exported: z.number().int().describe('Number of edges exported.'),
+       jsonPath: z.string().describe('Bundle-relative path to the exported JSON file.'),
+     },
+     annotations: {
+       readOnlyHint: false,
+     },
+   }, async (args) => {
+     try {
+       await assertBundleComplete(cfg, args.bundleId);
+       const storageDir = await findBundleStorageDir(cfg.storageDirs, args.bundleId);
+       if (!storageDir) {
+         throw new BundleNotFoundError(args.bundleId);
+       }
+       const paths = getBundlePathsForId(storageDir, args.bundleId);
+       const traceDbPath = path.join(paths.rootDir, 'trace', 'trace.sqlite3');
+       // Lazily import and call exportTraceToJson
+       const { exportTraceToJson } = await import('./trace/store.js');
+       const result = await exportTraceToJson(traceDbPath);
+       // The export always lands at the bundle-relative trace/trace.json path
+       const jsonRelPath = 'trace/trace.json';
+       const out = {
+         bundleId: args.bundleId,
+         exported: result.exported,
+         jsonPath: jsonRelPath,
+       };
+       return {
+         content: [{ type: 'text', text: `Exported ${result.exported} trace edge(s) to ${jsonRelPath}` }],
          structuredContent: out,
        };
      }
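A consumer of the auto-exported `trace/trace.json` might group edges by type before presenting them. The edge shape assumed here (a top-level `edges` array whose items carry `type`, `source`, and `target`) mirrors the upsert example in the tool description and is an assumption, not the documented export schema.

```javascript
// Sketch only: bucket exported trace edges by their edge type
// (tested_by, documents, implements, ...) for easier review.
function groupEdgesByType(traceJson) {
  const groups = {};
  for (const edge of traceJson.edges ?? []) {
    (groups[edge.type] ??= []).push(edge);
  }
  return groups;
}
```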
@@ -70,6 +70,28 @@ export async function traceQuery(cfg, rawArgs) {
    if (args.bundleId) {
      const dbPath = await getTraceDbPathForBundleId(cfg, args.bundleId);
      const rows = queryTraceEdges(dbPath, { source, target, edgeType: args.edge_type, limit: args.limit });
+     // Add reason and nextSteps for empty results
+     if (rows.length === 0) {
+       // Check if trace DB has any edges at all
+       const allEdges = queryTraceEdges(dbPath, { limit: 1 });
+       const hasAnyEdges = allEdges.length > 0;
+       return {
+         bundleId: args.bundleId,
+         edges: [],
+         reason: hasAnyEdges ? 'no_matching_edges' : 'not_initialized',
+         nextSteps: hasAnyEdges
+           ? [
+             'Try a different source_type/source_id combination',
+             'Use preflight_search_bundle to find related files first',
+             'Check if the file path uses bundle-relative format: repos/{owner}/{repo}/norm/{path}',
+           ]
+           : [
+             'Use preflight_trace_upsert to create trace links',
+             'Trace links record relationships: code↔test (tested_by), code↔doc (documents), module↔requirement (implements)',
+             'Example: { edges: [{ type: "tested_by", source: { type: "file", id: "repos/.../src/main.ts" }, target: { type: "file", id: "repos/.../tests/main.test.ts" }, method: "exact", confidence: 0.9 }] }',
+           ],
+       };
+     }
      return { bundleId: args.bundleId, edges: rows };
    }
    // Slow path: scan across bundles (best-effort, capped)
@@ -100,9 +122,23 @@ export async function traceQuery(cfg, rawArgs) {
    }
    // Sort by updatedAt desc across bundles
    collected.sort((a, b) => new Date(b.updatedAt).getTime() - new Date(a.updatedAt).getTime());
-   return {
+   const result = {
      scannedBundles: bundleIds.length,
      truncated: truncated ? true : undefined,
      edges: collected.slice(0, args.limit),
    };
+   // Add reason and nextSteps for empty results
+   if (collected.length === 0) {
+     result.reason = bundleIds.length === 0 ? 'no_matching_bundle' : 'no_edges';
+     result.nextSteps = bundleIds.length === 0
+       ? [
+         'No bundles found. Create a bundle first using preflight_create_bundle.',
+       ]
+       : [
+         'No trace links found across any bundle.',
+         'Use preflight_trace_upsert with a specific bundleId to create trace links.',
+         'Trace links record relationships: code↔test (tested_by), code↔doc (documents), module↔requirement (implements)',
+       ];
+   }
+   return result;
  }
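Combining this slow path with the fast path earlier in `traceQuery`, the four empty-result reason codes follow one simple decision rule, condensed here into a standalone helper for readability (the helper itself is illustrative, not part of the package):

```javascript
// Sketch: pick the reason code for an empty trace query result.
// scopedToBundle = a bundleId was given; hasAnyEdges = that bundle's
// trace DB holds at least one edge; bundleCount = bundles scanned.
function emptyResultReason({ scopedToBundle, hasAnyEdges, bundleCount }) {
  if (scopedToBundle) {
    return hasAnyEdges ? 'no_matching_edges' : 'not_initialized';
  }
  return bundleCount === 0 ? 'no_matching_bundle' : 'no_edges';
}
```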
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "preflight-mcp",
-   "version": "0.2.3",
+   "version": "0.2.5",
    "description": "MCP server that creates evidence-based preflight bundles for GitHub repositories and library docs.",
    "type": "module",
    "license": "MIT",