@haaaiawd/anws 1.2.5 → 2.0.0
This diff compares the contents of publicly released versions of the package as published to a supported registry. It is provided for informational purposes only and reflects the changes between the two versions as they appear in that registry.
- package/README.md +208 -172
- package/bin/cli.js +22 -9
- package/lib/adapters/index.js +157 -0
- package/lib/agents.js +136 -1
- package/lib/changelog.js +187 -0
- package/lib/copy.js +72 -1
- package/lib/diff.js +270 -0
- package/lib/init.js +143 -125
- package/lib/install-state.js +195 -0
- package/lib/manifest.js +184 -42
- package/lib/output.js +185 -13
- package/lib/prompt.js +284 -0
- package/lib/resources/index.js +27 -0
- package/lib/update.js +291 -83
- package/package.json +10 -6
- package/templates/.agents/skills/concept-modeler/SKILL.md +176 -0
- package/templates/{.agent → .agents}/skills/design-reviewer/SKILL.md +6 -6
- package/templates/.agents/skills/nexus-mapper/SKILL.md +306 -0
- package/templates/.agents/skills/nexus-mapper/references/language-customization.md +164 -0
- package/templates/.agents/skills/nexus-mapper/references/output-schema.md +298 -0
- package/templates/.agents/skills/nexus-mapper/references/probe-protocol.md +246 -0
- package/templates/.agents/skills/nexus-mapper/scripts/extract_ast.py +706 -0
- package/templates/.agents/skills/nexus-mapper/scripts/git_detective.py +194 -0
- package/templates/.agents/skills/nexus-mapper/scripts/languages.json +127 -0
- package/templates/.agents/skills/nexus-mapper/scripts/query_graph.py +556 -0
- package/templates/.agents/skills/nexus-mapper/scripts/requirements.txt +6 -0
- package/templates/{.agent → .agents}/skills/report-template/SKILL.md +11 -14
- package/templates/.agents/skills/report-template/references/REPORT_TEMPLATE.md +100 -0
- package/templates/{.agent → .agents}/skills/runtime-inspector/SKILL.md +1 -1
- package/templates/.agents/skills/sequential-thinking/SKILL.md +166 -0
- package/templates/.agents/skills/spec-writer/SKILL.md +108 -0
- package/templates/{.agent → .agents}/skills/spec-writer/references/prd_template.md +1 -1
- package/templates/{.agent → .agents}/skills/system-architect/SKILL.md +3 -3
- package/templates/.agents/skills/system-architect/references/rfc_template.md +59 -0
- package/templates/{.agent → .agents}/skills/system-designer/SKILL.md +6 -6
- package/templates/{.agent → .agents}/skills/system-designer/references/system-design-template.md +75 -25
- package/templates/{.agent → .agents}/skills/task-planner/SKILL.md +1 -1
- package/templates/.agents/skills/task-planner/references/TASK_TEMPLATE.md +144 -0
- package/templates/{.agent → .agents}/skills/task-reviewer/SKILL.md +4 -3
- package/templates/{.agent → .agents}/skills/tech-evaluator/SKILL.md +2 -2
- package/templates/{.agent → .agents}/skills/tech-evaluator/references/ADR_TEMPLATE.md +10 -0
- package/templates/{.agent → .agents}/workflows/blueprint.md +32 -27
- package/templates/{.agent → .agents}/workflows/challenge.md +21 -15
- package/templates/{.agent → .agents}/workflows/change.md +23 -14
- package/templates/{.agent → .agents}/workflows/craft.md +8 -19
- package/templates/{.agent → .agents}/workflows/design-system.md +81 -54
- package/templates/{.agent → .agents}/workflows/explore.md +6 -19
- package/templates/{.agent → .agents}/workflows/forge.md +30 -32
- package/templates/{.agent → .agents}/workflows/genesis.md +68 -56
- package/templates/.agents/workflows/probe.md +168 -0
- package/templates/{.agent → .agents}/workflows/quickstart.md +7 -12
- package/templates/.agents/workflows/upgrade.md +192 -0
- package/templates/AGENTS.md +66 -45
- package/templates/.agent/skills/build-inspector/SKILL.md +0 -83
- package/templates/.agent/skills/complexity-guard/SKILL.md +0 -71
- package/templates/.agent/skills/complexity-guard/references/anti_patterns.md +0 -21
- package/templates/.agent/skills/concept-modeler/SKILL.md +0 -112
- package/templates/.agent/skills/concept-modeler/prompts/GLOSSARY_PROMPT.md +0 -40
- package/templates/.agent/skills/concept-modeler/references/ENTITY_EXTRACTION_PROMPT.md +0 -299
- package/templates/.agent/skills/concept-modeler/scripts/glossary_gen.py +0 -66
- package/templates/.agent/skills/git-forensics/SKILL.md +0 -74
- package/templates/.agent/skills/git-forensics/references/ANALYSIS_METHODOLOGY.md +0 -193
- package/templates/.agent/skills/git-forensics/scripts/__pycache__/git_forensics.cpython-313.pyc +0 -0
- package/templates/.agent/skills/git-forensics/scripts/git_forensics.py +0 -615
- package/templates/.agent/skills/git-forensics/scripts/git_hotspots.py +0 -118
- package/templates/.agent/skills/report-template/references/REPORT_TEMPLATE.md +0 -100
- package/templates/.agent/skills/spec-writer/SKILL.md +0 -108
- package/templates/.agent/skills/system-architect/references/rfc_template.md +0 -59
- package/templates/.agent/skills/task-planner/references/TASK_TEMPLATE.md +0 -144
- package/templates/.agent/workflows/scout.md +0 -139
- package/templates/{.agent → .agents}/skills/system-designer/references/system-design-detail-template.md +0 -0
package/templates/.agents/skills/nexus-mapper/scripts/query_graph.py
@@ -0,0 +1,556 @@
+#!/usr/bin/env python3
+"""
+query_graph.py — on-demand AST query tool
+
+Reads the ast_nodes.json produced by extract_ast.py, offers several query
+modes, and emits concise text that is easy for an agent to consume.
+
+Use cases:
+- Helping the REASON/OBJECT/EMIT stages of the PROBE flow generate cognition files
+- Bug investigation, change-impact assessment, and refactoring analysis during development
+
+Usage:
+    python query_graph.py <ast_nodes.json> --file <path>
+    python query_graph.py <ast_nodes.json> --who-imports <module_or_path>
+    python query_graph.py <ast_nodes.json> --impact <path>
+    python query_graph.py <ast_nodes.json> --hub-analysis [--top N]
+    python query_graph.py <ast_nodes.json> --summary
+"""
+
+import sys
+import json
+import argparse
+from pathlib import Path, PurePosixPath
+from collections import defaultdict
+
+
+class GitStats:
+    """Query helper for git_stats.json. Loaded optionally; does not affect core AST queries."""
+
+    def __init__(self, data: dict):
+        self.period_days: int = data.get('analysis_period_days', 90)
+        self.hotspots: dict[str, dict] = {}  # path → {changes, risk}
+        for h in data.get('hotspots', []):
+            self.hotspots[h['path']] = h
+        self.coupling: dict[str, list[dict]] = defaultdict(list)  # path → [{peer, co_changes, score}]
+        for c in data.get('coupling_pairs', []):
+            self.coupling[c['file_a']].append({
+                'peer': c['file_b'], 'co_changes': c['co_changes'],
+                'score': c['coupling_score'],
+            })
+            self.coupling[c['file_b']].append({
+                'peer': c['file_a'], 'co_changes': c['co_changes'],
+                'score': c['coupling_score'],
+            })
+
+    def file_risk(self, path: str) -> dict | None:
+        return self.hotspots.get(path)
+
+    def file_coupling(self, path: str) -> list[dict]:
+        return sorted(self.coupling.get(path, []), key=lambda x: x['score'], reverse=True)
+
+    RISK_ICON = {'high': '🔴', 'medium': '🟡', 'low': '🟢'}
+
+    def format_risk_block(self, path: str) -> list[str]:
+        """Build git-risk + coupling text lines for one file (an empty list means no data)."""
+        lines: list[str] = []
+        risk = self.file_risk(path)
+        if risk:
+            icon = self.RISK_ICON.get(risk['risk'], '⚪')
+            lines.append(f"Git risk: {icon} {risk['risk']} ({risk['changes']} changes in {self.period_days} days)")
+        coupling = self.file_coupling(path)
+        if coupling:
+            lines.append("Coupled files (co-change):")
+            for c in coupling[:5]:
+                lines.append(f"  - {c['peer']} (coupling: {c['score']:.2f}, {c['co_changes']} co-changes)")
+        return lines
+
+
+class ASTGraph:
+    """In-memory AST graph index supporting several query modes."""
+
+    SOURCE_ROOT_MARKERS = (
+        ('src',),
+        ('backend', 'src'),
+        ('frontend', 'src'),
+        ('client', 'src'),
+        ('src', 'main', 'python'),
+        ('src', 'test', 'python'),
+        ('src', 'main', 'java'),
+        ('src', 'test', 'java'),
+        ('src', 'main', 'kotlin'),
+        ('src', 'test', 'kotlin'),
+    )
+
+    def __init__(self, data: dict, git_stats: GitStats | None = None):
+        self.data = data
+        self.nodes: list[dict] = data.get('nodes', [])
+        self.edges: list[dict] = data.get('edges', [])
+        self.stats: dict = data.get('stats', {})
+        self.languages: list[str] = data.get('languages', [])
+        self.git: GitStats | None = git_stats
+
+        # Indexes
+        self.nodes_by_id: dict[str, dict] = {}
+        self.nodes_by_path: dict[str, list[dict]] = defaultdict(list)
+        self.modules_by_path: dict[str, dict] = {}
+        self.imports_forward: dict[str, set[str]] = defaultdict(set)
+        self.imports_reverse: dict[str, set[str]] = defaultdict(set)
+        self.internal_imports_forward: dict[str, set[str]] = defaultdict(set)
+        self.internal_imports_reverse: dict[str, set[str]] = defaultdict(set)
+        self.contains_children: dict[str, list[dict]] = defaultdict(list)
+        self.path_to_module_id: dict[str, str] = {}
+        self.alias_to_module_ids: dict[str, set[str]] = defaultdict(set)
+
+        self._build_index()
+
+    def _build_index(self) -> None:
+        for node in self.nodes:
+            nid = node['id']
+            self.nodes_by_id[nid] = node
+            path = node.get('path', '')
+            if path:
+                self.nodes_by_path[path].append(node)
+            if node['type'] == 'Module' and path:
+                self.modules_by_path[path] = node
+                self.path_to_module_id[path] = nid
+                for alias in self._module_aliases(nid, path):
+                    self.alias_to_module_ids[alias].add(nid)
+
+        for edge in self.edges:
+            src, tgt, etype = edge['source'], edge['target'], edge['type']
+            if etype == 'imports':
+                self.imports_forward[src].add(tgt)
+                self.imports_reverse[tgt].add(src)
+            elif etype == 'contains':
+                child = self.nodes_by_id.get(tgt)
+                if child:
+                    self.contains_children[src].append(child)
+
+        module_ids = {n['id'] for n in self.nodes if n['type'] == 'Module'}
+        for source, targets in self.imports_forward.items():
+            if source not in module_ids:
+                continue
+            for target in targets:
+                resolved = self.resolve_import_target(target)
+                if resolved and resolved in module_ids and resolved != source:
+                    self.internal_imports_forward[source].add(resolved)
+                    self.internal_imports_reverse[resolved].add(source)
+
+    def _module_aliases(self, module_id: str, path: str) -> set[str]:
+        aliases = {module_id}
+        parts = list(PurePosixPath(path.replace('\\', '/')).parts)
+        if not parts:
+            return aliases
+
+        stem = PurePosixPath(parts[-1]).stem
+        normalized_parts = parts[:-1] if stem == '__init__' else parts[:-1] + [stem]
+
+        for marker in self.SOURCE_ROOT_MARKERS:
+            if tuple(normalized_parts[:len(marker)]) == marker and len(normalized_parts) > len(marker):
+                aliases.add('.'.join(normalized_parts[len(marker):]))
+
+        for idx, part in enumerate(normalized_parts):
+            if part == 'src' and idx + 1 < len(normalized_parts):
+                aliases.add('.'.join(normalized_parts[idx + 1:]))
+
+        return {alias for alias in aliases if alias}
+
+    def resolve_import_target(self, target: str) -> str | None:
+        if target in self.nodes_by_id and self.nodes_by_id[target]['type'] == 'Module':
+            return target
+
+        direct = self.alias_to_module_ids.get(target)
+        if direct and len(direct) == 1:
+            return next(iter(direct))
+
+        parts = target.split('.')
+        while len(parts) > 1:
+            parts = parts[:-1]
+            candidate = '.'.join(parts)
+            matches = self.alias_to_module_ids.get(candidate)
+            if matches and len(matches) == 1:
+                return next(iter(matches))
+
+        return None
+
+    def _classify_imports(self, imports: set[str]) -> tuple[list[tuple[str, str]], list[str]]:
+        internal: list[tuple[str, str]] = []
+        external: list[str] = []
+        for imp in sorted(imports):
+            resolved = self.resolve_import_target(imp)
+            if resolved:
+                internal.append((imp, resolved))
+            else:
+                external.append(imp)
+        return internal, external
+
+    def resolve_to_module_id(self, query: str) -> str | None:
+        """Resolve a file path or module id uniformly to a module id."""
+        # Try it directly as a module id
+        if query in self.nodes_by_id and self.nodes_by_id[query]['type'] == 'Module':
+            return query
+        # Try it as a file path (accepting both \\ and /)
+        normalized = query.replace('\\', '/')
+        if normalized in self.path_to_module_id:
+            return self.path_to_module_id[normalized]
+        # Fuzzy match: tolerate a missing or extra repo-relative path prefix
+        for path, mid in self.path_to_module_id.items():
+            if path.endswith(normalized) or normalized.endswith(path):
+                return mid
+        return None
+
+    def resolve_to_path(self, module_id: str) -> str | None:
+        """Resolve a module id to a file path."""
+        node = self.nodes_by_id.get(module_id)
+        if node:
+            return node.get('path')
+        return None
+
+    # ── Query mode implementations ──────────────────────────────
+
+    def query_file(self, file_query: str) -> str:
+        """--file: show a file's full structure and import list."""
+        mid = self.resolve_to_module_id(file_query)
+        if not mid:
+            return f"[NOT FOUND] No module matching '{file_query}'"
+
+        module_node = self.nodes_by_id[mid]
+        path = module_node.get('path', mid)
+        lines = module_node.get('lines', '?')
+        lang = module_node.get('lang', '?')
+
+        out = [f"=== {path} ==="]
+        out.append(f"Module: {mid} ({lines} lines, {lang})")
+        out.append("")
+
+        # Classes and functions
+        classes = [n for n in self.contains_children.get(mid, []) if n['type'] == 'Class']
+        top_funcs = [n for n in self.contains_children.get(mid, []) if n['type'] == 'Function']
+
+        if classes:
+            out.append("Classes:")
+            for cls in classes:
+                sl = cls.get('start_line', '?')
+                el = cls.get('end_line', '?')
+                out.append(f"  {cls['label']} (L{sl}-L{el})")
+                methods = [n for n in self.contains_children.get(cls['id'], []) if n['type'] == 'Function']
+                for i, m in enumerate(methods):
+                    prefix = "└─" if i == len(methods) - 1 else "├─"
+                    ml = m.get('start_line', '?')
+                    me = m.get('end_line', '?')
+                    out.append(f"    {prefix} {m['label']} (L{ml}-L{me})")
+            out.append("")
+
+        if top_funcs:
+            out.append("Top-level Functions:")
+            for f in top_funcs:
+                sl = f.get('start_line', '?')
+                el = f.get('end_line', '?')
+                out.append(f"  {f['label']} (L{sl}-L{el})")
+            out.append("")
+
+        # Imports
+        imports = sorted(self.imports_forward.get(mid, set()))
+        if imports:
+            internal, external = self._classify_imports(set(imports))
+            out.append("Imports:")
+            for raw_imp, resolved_imp in internal:
+                imp_path = self.resolve_to_path(resolved_imp)
+                suffix = f" ({imp_path})" if imp_path else ""
+                if raw_imp == resolved_imp:
+                    out.append(f"  → {raw_imp}{suffix}")
+                else:
+                    out.append(f"  → {raw_imp} [resolved: {resolved_imp}{suffix}]")
+            for imp in external:
+                out.append(f"  → {imp} (external)")
+            out.append("")
+
+        if not classes and not top_funcs and not imports:
+            out.append("(no classes, functions, or imports detected)")
+
+        # Git stats (optional)
+        if self.git:
+            git_lines = self.git.format_risk_block(path)
+            if git_lines:
+                out.append("Git:")
+                out.extend(f"  {l}" for l in git_lines)
+                out.append("")
+
+        return "\n".join(out)
+
+    def query_who_imports(self, module_query: str) -> str:
+        """--who-imports: reverse dependency query."""
+        mid = self.resolve_to_module_id(module_query)
+
+        # Also try a direct lookup in imports_reverse (handles external package names, etc.)
+        if not mid:
+            # May be a partial match (e.g. 'flask' appearing as an imports target)
+            matches = set()
+            normalized = module_query.replace('\\', '/')
+            for target, sources in self.imports_reverse.items():
+                if target == normalized or target == module_query:
+                    matches.update(sources)
+            if matches:
+                return self._format_who_imports(module_query, matches)
+            return f"[NOT FOUND] No module matching '{module_query}'"
+
+        importers: set[str] = set(self.internal_imports_reverse.get(mid, set()))
+
+        return self._format_who_imports(mid, importers)
+
+    def _format_who_imports(self, query: str, importers: set[str]) -> str:
+        out = [f"=== Who imports {query}? ==="]
+        if not importers:
+            out.append("Not imported by any module in the project.")
+            return "\n".join(out)
+
+        out.append(f"Imported by {len(importers)} module(s):")
+        for imp in sorted(importers):
+            imp_path = self.resolve_to_path(imp)
+            suffix = f" ({imp_path})" if imp_path else ""
+            out.append(f"  ← {imp}{suffix}")
+        return "\n".join(out)
+
+    def query_impact(self, file_query: str) -> str:
+        """--impact: impact-radius analysis (upstream and downstream dependencies at a glance)."""
+        mid = self.resolve_to_module_id(file_query)
+        if not mid:
+            return f"[NOT FOUND] No module matching '{file_query}'"
+
+        module_node = self.nodes_by_id[mid]
+        path = module_node.get('path', mid)
+
+        out = [f"=== Impact radius: {path} ===", ""]
+
+        # Upstream: what this file imports
+        forward = sorted(self.imports_forward.get(mid, set()))
+        internal_forward, external_forward = self._classify_imports(set(forward))
+
+        out.append("Depends on (this file imports):")
+        if internal_forward:
+            for raw_dep, resolved_dep in internal_forward:
+                dep_path = self.resolve_to_path(resolved_dep)
+                suffix = f" ({dep_path})" if dep_path else ""
+                if raw_dep == resolved_dep:
+                    out.append(f"  → {raw_dep}{suffix}")
+                else:
+                    out.append(f"  → {raw_dep} [resolved: {resolved_dep}{suffix}]")
+        if external_forward:
+            for dep in external_forward:
+                out.append(f"  → {dep} (external)")
+        if not forward:
+            out.append("  (none)")
+        out.append("")
+
+        # Downstream: who imports this file
+        importers: set[str] = set(self.internal_imports_reverse.get(mid, set()))
+
+        out.append("Depended by (other files import this):")
+        if importers:
+            for imp in sorted(importers):
+                imp_path = self.resolve_to_path(imp)
+                suffix = f" ({imp_path})" if imp_path else ""
+                out.append(f"  ← {imp}{suffix}")
+        else:
+            out.append("  (none)")
+        out.append("")
+
+        downstream_count = len(importers)
+        upstream_count = len({resolved for _raw, resolved in internal_forward})
+        out.append(
+            f"Impact summary: {upstream_count} upstream dependencies, "
+            f"{downstream_count} downstream dependents"
+        )
+
+        # Git stats (optional)
+        if self.git:
+            out.append("")
+            git_lines = self.git.format_risk_block(path)
+            if git_lines:
+                out.extend(git_lines)
+
+        return "\n".join(out)
+
+    def query_hub_analysis(self, top_n: int = 10) -> str:
+        """--hub-analysis: identify high fan-in / high fan-out hub nodes."""
+        fan_in = {target: len(sources) for target, sources in self.internal_imports_reverse.items()}
+        fan_out = {source: len(targets) for source, targets in self.internal_imports_forward.items()}
+
+        out = ["=== Hub Analysis ===", ""]
+
+        # Top fan-in
+        top_fan_in = sorted(fan_in.items(), key=lambda x: x[1], reverse=True)[:top_n]
+        out.append("Top fan-in (most imported by others):")
+        if top_fan_in:
+            for i, (mid, count) in enumerate(top_fan_in, 1):
+                path = self.resolve_to_path(mid) or ""
+                out.append(f"  {i}. {mid} — imported by {count} module(s) [{path}]")
+        else:
+            out.append("  (no internal import relationships found)")
+        out.append("")
+
+        # Top fan-out
+        top_fan_out = sorted(fan_out.items(), key=lambda x: x[1], reverse=True)[:top_n]
+        out.append("Top fan-out (imports most others):")
+        if top_fan_out:
+            for i, (mid, count) in enumerate(top_fan_out, 1):
+                path = self.resolve_to_path(mid) or ""
+                out.append(f"  {i}. {mid} — imports {count} internal module(s) [{path}]")
+        else:
+            out.append("  (no internal import relationships found)")
+
+        return "\n".join(out)
+
+    def query_summary(self) -> str:
+        """--summary: structural summary aggregated by top-level directory."""
+        # Aggregate by the first or second directory level
+        dir_stats: dict[str, dict] = defaultdict(
+            lambda: {'modules': 0, 'classes': 0, 'functions': 0, 'lines': 0,
+                     'class_names': [], 'import_dirs': set()}
+        )
+
+        # Aggregation granularity: the first two directory levels of the path
+        def _dir_key(path: str) -> str:
+            parts = path.split('/')
+            if len(parts) <= 2:
+                return parts[0] + '/'
+            return '/'.join(parts[:2]) + '/'
+
+        for node in self.nodes:
+            path = node.get('path', '')
+            if not path:
+                continue
+            dk = _dir_key(path)
+            ntype = node['type']
+            if ntype == 'Module':
+                dir_stats[dk]['modules'] += 1
+                dir_stats[dk]['lines'] += node.get('lines', 0)
+            elif ntype == 'Class':
+                dir_stats[dk]['classes'] += 1
+                dir_stats[dk]['class_names'].append(node['label'])
+            elif ntype == 'Function':
+                dir_stats[dk]['functions'] += 1
+
+        # Collect the import-source directories for each directory
+        for mid, targets in self.imports_forward.items():
+            src_node = self.nodes_by_id.get(mid)
+            if not src_node or src_node['type'] != 'Module':
+                continue
+            src_path = src_node.get('path', '')
+            if not src_path:
+                continue
+            src_dk = _dir_key(src_path)
+            for t in targets:
+                t_node = self.nodes_by_id.get(t)
+                if t_node and t_node.get('path'):
+                    t_dk = _dir_key(t_node['path'])
+                    if t_dk != src_dk:
+                        dir_stats[src_dk]['import_dirs'].add(t_dk.rstrip('/'))
+
+        out = ["=== Directory Summary ===", ""]
+
+        for dk in sorted(dir_stats.keys()):
+            s = dir_stats[dk]
+            out.append(
+                f"{dk} ({s['modules']} modules, {s['classes']} classes, "
+                f"{s['functions']} functions, {s['lines']} lines)"
+            )
+            if s['class_names']:
+                # Show at most 8
+                names = s['class_names'][:8]
+                suffix = f" ... +{len(s['class_names']) - 8}" if len(s['class_names']) > 8 else ""
+                out.append(f"  Key classes: {', '.join(names)}{suffix}")
+            import_dirs = sorted(s['import_dirs'])
+            if import_dirs:
+                out.append(f"  Key imports from: {', '.join(import_dirs)}")
+            else:
+                out.append("  Key imports from: (none / external only)")
+            out.append("")
+
+        if not dir_stats:
+            out.append("(no modules found in ast_nodes.json)")
+
+        return "\n".join(out)
+
+
+def main() -> None:
+    parser = argparse.ArgumentParser(
+        description='Query AST graph from ast_nodes.json',
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+        epilog="""
+Examples:
+  %(prog)s ast_nodes.json --file src/server/handler.py
+  %(prog)s ast_nodes.json --who-imports src.server.handler
+  %(prog)s ast_nodes.json --impact src/server/handler.py
+  %(prog)s ast_nodes.json --hub-analysis --top 10
+  %(prog)s ast_nodes.json --summary
+""",
+    )
+    parser.add_argument('ast_json', help='Path to ast_nodes.json')
+    parser.add_argument('--file', dest='file_query', help='Show structure and imports of a file')
+    parser.add_argument('--who-imports', dest='who_imports', help='Find modules that import the given module')
+    parser.add_argument('--impact', dest='impact_query', help='Show impact radius (deps + dependents)')
+    parser.add_argument('--hub-analysis', action='store_true', help='Show top fan-in/fan-out modules')
+    parser.add_argument('--summary', action='store_true', help='Show per-directory structural summary')
+    parser.add_argument('--top', type=int, default=10, help='Number of results for hub-analysis (default: 10)')
+    parser.add_argument('--git-stats', dest='git_stats_path', metavar='GIT_STATS_JSON',
+                        help='Optional git_stats.json to enrich --file and --impact with risk/coupling data')
+
+    args = parser.parse_args()
+
+    # Require at least one query mode
+    has_query = any([args.file_query, args.who_imports, args.impact_query,
+                     args.hub_analysis, args.summary])
+    if not has_query:
+        parser.print_help()
+        sys.exit(1)
+
+    # Load the JSON
+    ast_path = Path(args.ast_json)
+    if not ast_path.exists():
+        sys.stderr.write(f"[ERROR] File not found: {ast_path}\n")
+        sys.exit(1)
+
+    try:
+        raw_text = ast_path.read_text(encoding='utf-8')
+        # Skip any stray stderr lines (e.g. [WARNING]); start parsing at the first '{'
+        json_start = raw_text.find('{')
+        if json_start < 0:
+            sys.stderr.write(f"[ERROR] No JSON object found in {ast_path}\n")
+            sys.exit(1)
+        data = json.loads(raw_text[json_start:])
+    except json.JSONDecodeError as e:
+        sys.stderr.write(f"[ERROR] Invalid JSON: {e}\n")
+        sys.exit(1)
+
+    # Optionally load git stats
+    git_stats: GitStats | None = None
+    if args.git_stats_path:
+        gs_path = Path(args.git_stats_path)
+        if not gs_path.exists():
+            sys.stderr.write(f"[WARNING] git_stats file not found: {gs_path}, ignoring\n")
+        else:
+            try:
+                gs_data = json.loads(gs_path.read_text(encoding='utf-8'))
+                git_stats = GitStats(gs_data)
+            except (json.JSONDecodeError, KeyError) as e:
+                sys.stderr.write(f"[WARNING] git_stats parse error: {e}, ignoring\n")
+
+    graph = ASTGraph(data, git_stats=git_stats)
+
+    # Run the query
+    if args.file_query:
+        print(graph.query_file(args.file_query))
+    elif args.who_imports:
+        print(graph.query_who_imports(args.who_imports))
+    elif args.impact_query:
+        print(graph.query_impact(args.impact_query))
+    elif args.hub_analysis:
+        print(graph.query_hub_analysis(args.top))
+    elif args.summary:
+        print(graph.query_summary())
+
+
+if __name__ == '__main__':
+    main()
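At the heart of the new `query_graph.py` is a reverse-import index built from the `nodes`/`edges` arrays of `ast_nodes.json`. A minimal, self-contained sketch of the fan-in half of `--hub-analysis`, using a toy payload (the field names mirror what the script reads; the sample module ids are made up for illustration):

```python
from collections import defaultdict

# Toy payload shaped like extract_ast.py's output (schema inferred from the diff).
data = {
    "nodes": [
        {"id": "app.main", "type": "Module", "path": "src/app/main.py"},
        {"id": "app.db", "type": "Module", "path": "src/app/db.py"},
        {"id": "app.api", "type": "Module", "path": "src/app/api.py"},
    ],
    "edges": [
        {"source": "app.main", "target": "app.db", "type": "imports"},
        {"source": "app.api", "target": "app.db", "type": "imports"},
    ],
}

# Reverse index: target module -> set of importers (mirrors imports_reverse).
imports_reverse = defaultdict(set)
for edge in data["edges"]:
    if edge["type"] == "imports":
        imports_reverse[edge["target"]].add(edge["source"])

# Fan-in ranking, as in query_hub_analysis: most-imported modules first.
fan_in = sorted(imports_reverse.items(), key=lambda kv: len(kv[1]), reverse=True)
print(fan_in[0][0], len(fan_in[0][1]))  # → app.db 2 (the only import target here)
```

The real script adds alias resolution (`_module_aliases`) on top of this so that dotted module names and file paths under `src/`-style roots resolve to the same node.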
package/templates/{.agent → .agents}/skills/report-template/SKILL.md
@@ -1,6 +1,6 @@
 ---
 name: report-template
-description: Synthesize
+description: Synthesize every Probe-phase analysis (nexus-mapper, runtime-inspector) into a decision-ready system risk report.
 ---
 
 # The Synthesizer's Manual
@@ -15,30 +15,28 @@ description: Synthesize every Scout-phase analysis (build-inspector, runtime-inspecto
 
 > [!IMPORTANT]
 > Before generating the report, you **must** run this self-check:
-> 1. "
-> 2. "
-> 3. "
+> 1. "Do the build boundaries found by nexus-mapper match the IPC boundaries found by runtime-inspector?"
+> 2. "Do the high-coupling file pairs found by nexus-mapper cross build boundaries?"
+> 3. "Are the missing components identified by nexus-mapper related to the risks already found?"
 > 4. "Is this report complete enough?"
 
 ---
 
 ## ⚡ Quick Start
 
-1. **Read the template (MANDATORY)**:
+1. **Read the template (MANDATORY)**: read `references/REPORT_TEMPLATE.md`. Your report **must** match its structure exactly.
 2. **Synthesize all findings**: aggregate the output from these sources:
-   * `
+   * `nexus-mapper` → Build Roots, Topology, Coupling Pairs, Hotspots, Entities, Missing Components
    * `runtime-inspector` → IPC Surfaces, Contract Status
-   * `git-forensics` → Coupling Pairs, Hotspots
-   * `concept-modeler` → Entities, Missing Components
 3. **Draft the report**: organize the logical connections following the template.
-4. **Publish (CRITICAL)**:
+4. **Publish (CRITICAL)**: you **must** create `.anws/v{N}/00_PROBE_REPORT.md` and write the full report into it. Printing only to chat is **forbidden**. Make sure the `.anws/v{N}/` directory exists.
 
 ---
 
 ## ✅ Completion Checklist
 
 Before moving to the next phase, verify:
-- [ ] Output file created:
+- [ ] Output file created: `.anws/v{N}/00_PROBE_REPORT.md`
 - [ ] Includes: System Fingerprint, Component Map, Risk Matrix, Feature Landing Guide
 - [ ] User has confirmed the findings
 
@@ -55,9 +53,9 @@ description: Synthesize every Scout-phase analysis (build-inspector, runtime-inspecto
 * Checklist: logging? error handling? CI/CD? secret management? version handshake?
 
 ### 3. Cross-Verification
-* **
-* **
-* **Conclusion**:
+* **nexus-mapper** says "managed as a unified workspace"?
+* **nexus-mapper** says "high coupling crosses build roots"?
+* **Conclusion**: hidden logical coupling found → a **refactoring target**.
 
 ### 4. Human Checkpoint
 * Force the user to confirm: "Is this report complete?"
@@ -83,6 +81,5 @@ description: Synthesize every Scout-phase analysis (build-inspector, runtime-inspecto
 
 The direct consumers of this report, in the `/blueprint` phase, are:
 * **System Architect**: relies on your risk list to design mitigation strategies.
-* **Complexity Guard**: relies on your findings to audit RFC complexity.
 
 The quality of your analysis **directly determines** the design quality of the next phase.