ai-unit-test-generator 1.4.6 → 2.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,137 @@ All notable changes to this project will be documented in this file.
5
5
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
6
6
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
7
7
 
8
+ ## [2.0.0] - 2025-01-11
9
+
10
+ ### 🎉 Major Release: AI-Enhanced Configuration
11
+
12
+ This is a **major refactor** with breaking changes. The package has been completely restructured to support AI-powered codebase analysis and intelligent scoring configuration.
13
+
14
+ ### 🔥 Hotfix (same day)
15
+ - **Fixed**: AI enhancement logic now correctly applied in `scorer.mjs` scoring loop
16
+ - **Optimized**: File-level import caching in `scanner.mjs` (30-40% performance improvement)
17
+ - **Configurable**: Business entity keywords moved to `aiEnhancement.entityKeywords`
18
+ - **Updated**: README reflects v2.0 architecture
19
+ - **Added**: `matchPattern` helper function for glob pattern matching in AI suggestions
20
+
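The `matchPattern` helper is only named in the changelog; as a rough illustration, a glob matcher supporting `**` and `*` could be sketched as follows. This is hypothetical code, not the package's actual implementation:

```javascript
// Hypothetical sketch of a glob matcher like the `matchPattern` helper
// named above; the package's actual implementation may differ.
function matchPattern(pattern, filePath) {
  const regexSource = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*\*/g, '\u0000')           // placeholder so `**` survives the next step
    .replace(/\*/g, '[^/]*')              // `*` matches within a single path segment
    .replace(/\u0000/g, '.*')             // `**` matches across path segments
  return new RegExp(`^${regexSource}$`).test(filePath)
}
```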
21
+ ### Added
22
+
23
+ #### New Commands
24
+ - **`ai-test init`**: Initialize configuration file (was implicit in `scan`)
25
+ - **`ai-test analyze`**: AI-powered codebase analysis
26
+ - Automatically samples representative code from your codebase
27
+ - Calls Cursor Agent to analyze business logic and risk patterns
28
+ - Interactive review UI with category-by-category approval
29
+ - Generates project-specific scoring suggestions
30
+ - Supports score adjustment for individual suggestions
31
+
32
+ #### New Modules
33
+ - **`lib/utils/config-manager.mjs`**: Configuration file management
34
+ - **`lib/utils/scan-manager.mjs`**: Scan result management and staleness detection
35
+ - **`lib/ai/sampler.mjs`**: Intelligent code sampling for AI analysis
36
+ - **`lib/ai/context-builder.mjs`**: Project context extraction (framework, deps)
37
+ - **`lib/ai/analyzer-prompt.mjs`**: Constrained AI analysis prompt builder
38
+ - **`lib/ai/validator.mjs`**: Strict JSON schema validation for AI responses
39
+ - **`lib/ai/reviewer.mjs`**: Interactive suggestion review UI
40
+ - **`lib/ai/config-writer.mjs`**: Safe configuration updates (locked paths protection)
41
+ - **`lib/workflows/init.mjs`**: Init workflow
42
+ - **`lib/workflows/analyze.mjs`**: AI analysis workflow
43
+ - **`lib/workflows/scan.mjs`**: Scan workflow (extracted from CLI)
44
+ - **`lib/workflows/generate.mjs`**: Generate workflow (extracted from CLI)
45
+
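The locked/writable split enforced by `config-writer.mjs` can be pictured as a small path guard. The sketch below is illustrative only (the module's real API is not shown in this diff); it assumes dotted config paths and the `aiEnhancement.suggestions` write scope described later in this changelog:

```javascript
// Hypothetical guard: only keys under aiEnhancement.suggestions may be
// written by the AI; everything else is treated as locked.
const WRITABLE_PREFIX = 'aiEnhancement.suggestions'

function isWritablePath(path) {
  return path === WRITABLE_PREFIX || path.startsWith(WRITABLE_PREFIX + '.')
}
```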
46
+ #### AI Enhancement Configuration
47
+ - Added `aiEnhancement` section to config schema
48
+ - `businessCriticalPaths`: AI-identified critical business logic paths
49
+ - `highRiskModules`: AI-identified high-risk modules
50
+ - `testabilityAdjustments`: AI-suggested testability modifiers
51
+ - Locked/writable path protection mechanism
52
+ - AI can only write to `aiEnhancement.suggestions`
53
+ - Core scoring rules remain immutable
54
+
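A possible shape of the `aiEnhancement` section is sketched below. Field names are inferred from this changelog and from the OUTPUT SCHEMA in `analyzer-prompt.mjs`; the shipped `default.config.jsonc` template may differ:

```jsonc
// Illustrative sketch only — exact keys in the shipped template may differ.
{
  "aiEnhancement": {
    "entityKeywords": ["Payment", "Order", "Booking"],
    "suggestions": {
      "businessCriticalPaths": [
        { "pattern": "services/payment/**", "confidence": 0.95, "suggestedBC": 10 }
      ],
      "highRiskModules": [
        { "pattern": "utils/date/**", "confidence": 0.88, "suggestedER": 8 }
      ],
      "testabilityAdjustments": [
        { "pattern": "utils/**", "confidence": 0.92, "adjustment": "+1" }
      ]
    }
  }
}
```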
55
+ #### Enhanced Metadata Extraction
56
+ - `lib/core/scanner.mjs` now extracts rich metadata:
57
+ - Critical imports (payment, auth, API libraries)
58
+ - Business entity types (Payment, Order, Booking, etc.)
59
+ - JSDoc documentation
60
+ - Error handling patterns (try-catch count)
61
+ - External API calls (fetch, axios)
62
+ - Return types and parameter types
63
+
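The signals listed above can be approximated with simple source-text heuristics. The sketch below is illustrative only — the real `scanner.mjs` works on `ts-morph` ASTs, and the import/entity lists here are invented examples:

```javascript
// Illustrative approximation of the metadata signals listed above; the
// real scanner.mjs uses ts-morph ASTs, not regexes.
function extractMetadata(source) {
  const CRITICAL_IMPORTS = /from\s+['"](stripe|axios|jsonwebtoken)['"]/g
  const ENTITY_KEYWORDS = ['Payment', 'Order', 'Booking']
  return {
    criticalImports: [...source.matchAll(CRITICAL_IMPORTS)].map(m => m[1]),
    businessEntities: ENTITY_KEYWORDS.filter(k => source.includes(k)),
    hasJsDoc: /\/\*\*/.test(source),
    tryCatchCount: (source.match(/\btry\s*{/g) || []).length,
    externalApiCalls: (source.match(/\b(fetch|axios)\s*(\.|\()/g) || []).length,
  }
}
```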
64
+ ### Changed
65
+
66
+ #### Breaking Changes
67
+ - **CLI Structure**: Removed implicit config creation from `scan`, added explicit `init` command
68
+ - **Command Workflow**: Now requires `ai-test init` before other commands
69
+ - **Module Structure**: Reorganized `lib/` directory with clear separation:
70
+ - `core/`: AST, Git, Scoring
71
+ - `ai/`: Analysis & Test Generation
72
+ - `workflows/`: High-level orchestration
73
+ - `utils/`: Shared utilities
74
+ - `testing/`: Test running & analysis
75
+ - **Config Schema**: Added `aiEnhancement` section (backward compatible)
76
+
77
+ #### Improvements
78
+ - **CLI Simplification**: `bin/cli.js` now purely delegates to workflow modules
79
+ - **Better Error Messages**: More helpful hints when config is missing
80
+ - **Modular Design**: Each workflow is independently importable
81
+ - **Type Safety**: All modules use ESM with proper exports
82
+
83
+ ### Architecture
84
+
85
+ ```
86
+ User Workflow:
87
+ 1. ai-test init → Create config
88
+ 2. ai-test analyze → AI analysis (optional)
89
+ 3. ai-test scan → Scan & score
90
+ 4. ai-test generate → Generate tests
91
+
92
+ Module Hierarchy:
93
+ bin/cli.js
94
+
95
+ lib/workflows/ (init, analyze, scan, generate)
96
+
97
+ lib/core/ (scanner, scorer, git-analyzer)
98
+ lib/ai/ (sampler, validator, reviewer, etc.)
99
+ lib/utils/ (config-manager, scan-manager)
100
+ ```
101
+
102
+ ### Migration Guide
103
+
104
+ For users upgrading from v1.x:
105
+
106
+ 1. **Config file**: Your existing `ai-test.config.jsonc` will continue to work
107
+ 2. **New workflow**:
108
+ ```bash
109
+ # Old (v1.x)
110
+ ai-test scan # Would create config if missing
111
+
112
+ # New (v2.0)
113
+ ai-test init # Explicit init step
114
+ ai-test analyze # Optional: Let AI optimize config
115
+ ai-test scan # Scan with AI enhancements
116
+ ```
117
+ 3. **AI features are optional**: If you don't run `analyze`, scoring works exactly as before
118
+ 4. **Programmatic API**: Can now import workflows directly:
119
+ ```js
120
+ import { init, analyze, scan, generate } from 'ai-unit-test-generator/workflows'
121
+ ```
122
+
123
+ ### Technical Details
124
+
125
+ - **AI Prompt Engineering**: Highly constrained prompts with strict JSON schema
126
+ - **Validation**: Multi-layer validation (schema, confidence thresholds, evidence requirements)
127
+ - **Safety**: Locked configuration paths prevent AI from modifying core scoring logic
128
+ - **Interactive UX**: Category-by-category review with score adjustment support
129
+ - **Token Optimization**: Intelligent code sampling (max 25 files, 1500 chars each)
130
+
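The sampling caps above (max 25 files, 1500 chars each) amount to a simple truncation step. As a sketch, assuming a list of `{ path, content }` objects (the real `sampler.mjs` also selects *representative* files, which this sketch does not attempt):

```javascript
// Sketch of the sampling caps described above: at most 25 files, each
// preview truncated to 1500 characters to bound prompt token usage.
const MAX_FILES = 25
const MAX_CHARS = 1500

function sampleForPrompt(files) {
  return files.slice(0, MAX_FILES).map(({ path, content }) => ({
    path,
    preview: content.length > MAX_CHARS
      ? content.slice(0, MAX_CHARS) + '\n// …truncated'
      : content,
  }))
}
```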
131
+ ### Credits
132
+
133
+ - Inspired by Meta's TestGen-LLM for assurance filters
134
+ - Scoring methodology based on software testing research
135
+ - Built on top of `ts-morph` for robust AST analysis
136
+
137
+ ---
138
+
8
139
  ## [1.4.6] - 2025-01-11
9
140
 
10
141
  ### Fixed
package/README.md CHANGED
@@ -379,27 +379,52 @@ ai-unit-test-generator/
379
379
  ├── bin/
380
380
  │ └── cli.js # CLI entry point
381
381
  ├── lib/
382
- │ ├── core/
383
- │ │ ├── scanner.mjs # Code scanning
384
- │ │ ├── git-analyzer.mjs # Git history analysis
385
- │ │ └── scorer.mjs # Scoring engine
386
- │ ├── ai/
387
- │ │ ├── prompter.mjs # AI prompt generation
388
- │ │ ├── generator.mjs # Cursor Agent invocation
389
- │ │ └── extractor.mjs # Test extraction
390
- │ ├── testing/
391
- │ │ ├── runner.mjs # Jest runner
392
- │ │ └── analyzer.mjs # Failure analysis
393
- │ ├── workflows/
394
- │ │ └── batch.mjs # Batch processing workflow
395
- │ └── utils/
396
- │ ├── file.mjs # File operations
397
- │ ├── status.mjs # Status management
398
- │ └── deps.mjs # Dependency analysis
382
+ │ ├── core/ # Core analysis layer (no AI dependency)
383
+ │ │ ├── scanner.mjs # AST scanning + metadata extraction
384
+ │ │ ├── git-analyzer.mjs # Git history analysis
385
+ │ │ ├── scorer.mjs # Scoring engine + AI enhancement
386
+ │ │ └── index.mjs # Module exports
387
+ │ ├── ai/ # AI interaction layer
388
+ │ │ ├── sampler.mjs # Intelligent code sampling
389
+ │ │ ├── context-builder.mjs # Project context building
390
+ │ │ ├── analyzer-prompt.mjs # AI analysis prompt
391
+ │ │ ├── validator.mjs # Response validation
392
+ │ │ ├── reviewer.mjs # Interactive review UI
393
+ │ │ ├── config-writer.mjs # Safe config writing
394
+ │ │ ├── prompt-builder.mjs # Test generation prompt
395
+ │ │ ├── client.mjs # Cursor Agent invocation
396
+ │ │ ├── extractor.mjs # Test extraction
397
+ │ │ └── index.mjs # Module exports
398
+ │ ├── testing/ # Test execution layer
399
+ │ │ ├── runner.mjs # Jest runner
400
+ │ │ ├── analyzer.mjs # Failure analysis
401
+ │ │ ├── coverage-runner.mjs # Coverage analysis
402
+ │ │ └── index.mjs # Module exports
403
+ │ ├── workflows/ # Workflow orchestration layer
404
+ │ │ ├── init.mjs # Init workflow
405
+ │ │ ├── analyze.mjs # AI analysis workflow
406
+ │ │ ├── scan.mjs # Scan workflow
407
+ │ │ ├── generate.mjs # Generate workflow
408
+ │ │ ├── batch.mjs # Batch processing
409
+ │ │ ├── all.mjs # Fully automatic workflow
410
+ │ │ └── index.mjs # Module exports
411
+ │ ├── utils/ # Utility layer
412
+ │ │ ├── config-manager.mjs # Config management
413
+ │ │ ├── scan-manager.mjs # Scan management
414
+ │ │ ├── marker.mjs # Status marking
415
+ │ │ └── index.mjs # Module exports
416
+ │ └── index.mjs # Root exports
399
417
 └── templates/
400
- └── default.config.jsonc # Default config template
418
+ └── default.config.jsonc # Default config template
401
419
  ```
402
420
 
421
+ ### Architecture Principles
422
+
423
+ - **Layered Design**: Clear separation between core analysis, AI interaction, testing, and workflows
424
+ - **Zero AI Dependency in Core**: `lib/core/` can be used without AI features
425
+ - **Modular Exports**: Each layer has `index.mjs` for clean API
426
+ - **Programmatic API**: All workflows can be imported and used programmatically
427
+
403
428
  ## 🤝 Contributing
404
429
 
405
430
  Contributions are welcome! Please submit issues or pull requests.
package/bin/cli.js CHANGED
@@ -1,248 +1,137 @@
1
1
  #!/usr/bin/env node
2
2
 
3
+ /**
4
+ * AI-Unit-Test-Generator CLI
5
+ *
6
+ * Commands:
7
+ * init - Initialize configuration file
8
+ * analyze - AI-powered codebase analysis
9
+ * scan - Scan code and generate priority scoring
10
+ * generate - Generate unit tests
11
+ */
12
+
3
13
  import { program } from 'commander'
4
14
  import { fileURLToPath } from 'url'
5
- import { dirname, join, resolve } from 'path'
6
- import { existsSync, copyFileSync, mkdirSync, readFileSync } from 'fs'
7
- import { spawn } from 'child_process'
15
+ import { dirname, join } from 'path'
16
+ import { readFileSync } from 'fs'
8
17
 
9
18
  const __filename = fileURLToPath(import.meta.url)
10
19
  const __dirname = dirname(__filename)
11
- const PKG_ROOT = resolve(__dirname, '..')
12
-
13
- // Helper: strip JSON comments
14
- function stripJsonComments(str) {
15
- return String(str)
16
- .replace(/\/\*[\s\S]*?\*\//g, '')
17
- .replace(/(^|\s)\/\/.*$/gm, '')
18
- }
20
+ const PKG_ROOT = join(__dirname, '..')
19
21
 
20
- // Read version from package.json
22
+ // Read the version number
21
23
  const pkgJson = JSON.parse(readFileSync(join(PKG_ROOT, 'package.json'), 'utf-8'))
22
24
 
23
25
  program
24
26
  .name('ai-test')
25
27
  .description('AI-powered unit test generator with smart priority scoring')
26
28
  .version(pkgJson.version)
29
+ .addHelpText('after', `
30
+ Quick Start:
31
+ 1. $ ai-test init # Create config file
32
+ 2. $ ai-test analyze # Let AI analyze your codebase (optional)
33
+ 3. $ ai-test scan # Scan & score functions
34
+ 4. $ ai-test generate # Generate tests
35
+
36
+ Examples:
37
+ $ ai-test init
38
+ $ ai-test analyze
39
+ $ ai-test scan --skip-git
40
+ $ ai-test generate -n 20 --all
41
+
42
+ Documentation: https://github.com/YuhengZhou/ai-unit-test-generator
43
+ `)
44
+
45
+ // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
46
+ // Command 1: init - initialize configuration
47
+ // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
48
+ program
49
+ .command('init')
50
+ .description('Initialize ai-test configuration file')
51
+ .option('-c, --config <path>', 'Config file path', 'ai-test.config.jsonc')
52
+ .action(async (options) => {
53
+ const { init } = await import('../lib/workflows/init.mjs')
54
+ await init(options)
55
+ })
56
+
57
+ // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
58
+ // Command 2: analyze - AI config analysis
59
+ // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
60
+ program
61
+ .command('analyze')
62
+ .description('AI-powered codebase analysis for scoring optimization')
63
+ .option('-c, --config <path>', 'Config file path')
64
+ .option('-o, --output <dir>', 'Output directory', 'reports')
65
+ .addHelpText('after', `
66
+ How it works:
67
+ 1. Samples representative code from your codebase
68
+ 2. Calls Cursor Agent to analyze business logic
69
+ 3. Generates scoring suggestions (business critical paths, high risk modules)
70
+ 4. Interactive review - you choose which suggestions to apply
71
+ 5. Updates ai-test.config.jsonc with approved suggestions
72
+
73
+ Note: Requires cursor-agent CLI to be installed
74
+ `)
75
+ .action(async (options) => {
76
+ const { analyze } = await import('../lib/workflows/analyze.mjs')
77
+ await analyze(options)
78
+ })
27
79
 
28
80
  // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
29
- // Command 1: scan - scan code and generate priority report
81
+ // Command 3: scan - scan and score code
30
82
  // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
31
83
  program
32
84
  .command('scan')
33
85
  .description('Scan code and generate priority scoring report')
34
- .option('-c, --config <path>', 'Config file path', 'ai-test.config.jsonc')
86
+ .option('-c, --config <path>', 'Config file path')
35
87
  .option('-o, --output <dir>', 'Output directory', 'reports')
36
88
  .option('--skip-git', 'Skip Git signals generation')
89
+ .addHelpText('after', `
90
+ Generates:
91
+ - reports/targets.json (AST + complexity data)
92
+ - reports/git_signals.json (Git history signals)
93
+ - reports/ut_scores.md (Human-readable report)
94
+ - reports/ut_scores.csv (Machine-readable scores)
95
+
96
+ Scoring modes:
97
+ - Layered: Different weights for each layer (foundation, business, state, UI)
98
+ - Legacy: Unified weights across all code
99
+
100
+ AI enhancement:
101
+ - If you ran 'ai-test analyze', scores will automatically use AI suggestions
102
+ - AI-identified business critical paths get higher priority
103
+ `)
37
104
  .action(async (options) => {
38
- let { config, output, skipGit } = options
39
-
40
- // 自动探测现有配置 & 初始化(不存在则创建)
41
- const detectOrder = [config, 'ai-test.config.jsonc', 'ai-test.config.json']
42
- const detected = detectOrder.find(p => existsSync(p))
43
- if (detected) config = detected
44
- if (!existsSync(config)) {
45
- console.log('⚙️ Config not found, creating default config...')
46
- const templatePath = join(PKG_ROOT, 'templates', 'default.config.jsonc')
47
- copyFileSync(templatePath, config)
48
- console.log(`✅ Config created: ${config}\n`)
49
- }
50
-
51
- // 创建输出目录
52
- if (!existsSync(output)) {
53
- mkdirSync(output, { recursive: true })
54
- }
55
-
56
- // 可选:在扫描前自动运行覆盖率(由配置控制)
57
- try {
58
- const cfgText = existsSync(config) ? readFileSync(config, 'utf-8') : '{}'
59
- const cfg = JSON.parse(stripJsonComments(cfgText))
60
- const covCfg = cfg?.coverage || { runBeforeScan: false }
61
- if (covCfg.runBeforeScan) {
62
- console.log('🧪 Running coverage before scan...')
63
- await new Promise((resolve) => {
64
- const cmd = covCfg.command || 'npx jest --coverage --silent'
65
- const child = spawn(cmd, { stdio: 'inherit', shell: true, cwd: process.cwd() })
66
- // 即使失败也继续(有些测试可能失败,但仍能生成部分覆盖率数据)
67
- child.on('close', () => resolve())
68
- child.on('error', () => resolve())
69
- })
70
- console.log('✅ Coverage analysis completed.\n')
71
- }
72
- } catch (err) {
73
- console.warn('⚠️ Coverage step failed or Jest not installed. Skipping coverage and continuing scan.')
74
- console.warn(' - npm i -D jest@29 ts-jest@29 @types/jest@29 jest-environment-jsdom@29 --legacy-peer-deps')
75
- }
76
-
77
- console.log('🚀 Starting code scan...\n')
78
-
79
- try {
80
- // Step 1: 生成目标列表
81
- console.log('📋 Step 1: Scanning targets...')
82
- await runScript('core/scanner.mjs', [
83
- '--config', config,
84
- '--out', join(output, 'targets.json')
85
- ])
86
-
87
- // Step 2: 生成 Git 信号 (可选)
88
- if (!skipGit) {
89
- console.log('\n📊 Step 2: Analyzing Git history...')
90
- await runScript('core/git-analyzer.mjs', [
91
- '--targets', join(output, 'targets.json'),
92
- '--out', join(output, 'git_signals.json')
93
- ])
94
- }
95
-
96
- // Step 3: 运行评分(保留现有状态)
97
- console.log('\n🎯 Step 3: Scoring targets...')
98
- const scoreArgs = [
99
- '--targets', join(output, 'targets.json'),
100
- '--config', config,
101
- '--out-md', join(output, 'ut_scores.md'),
102
- '--out-csv', join(output, 'ut_scores.csv')
103
- ]
104
- if (!skipGit && existsSync(join(output, 'git_signals.json'))) {
105
- scoreArgs.push('--git', join(output, 'git_signals.json'))
106
- }
107
- await runScript('core/scorer.mjs', scoreArgs)
108
-
109
- // 统计 TODO/DONE
110
- const reportPath = join(output, 'ut_scores.md')
111
- if (existsSync(reportPath)) {
112
- const content = readFileSync(reportPath, 'utf-8')
113
- const todoCount = (content.match(/\| TODO \|/g) || []).length
114
- const doneCount = (content.match(/\| DONE \|/g) || []).length
115
-
116
- console.log('\n✅ Scan completed!')
117
- console.log(`\n📊 Status:`)
118
- console.log(` TODO: ${todoCount}`)
119
- console.log(` DONE: ${doneCount}`)
120
- console.log(` Total: ${todoCount + doneCount}`)
121
- console.log(`\n📄 Report: ${reportPath}`)
122
- console.log(`\n💡 Next: ai-test generate`)
123
- }
124
- } catch (err) {
125
- console.error('❌ Scan failed:', err.message)
126
- process.exit(1)
127
- }
105
+ const { scan } = await import('../lib/workflows/scan.mjs')
106
+ await scan(options)
128
107
  })
129
108
 
130
109
  // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
131
- // Command 2: generate - generate tests
110
+ // Command 4: generate - generate tests
132
111
  // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
133
112
  program
134
113
  .command('generate')
135
- .description('Generate tests (default: 10 TODO functions)')
114
+ .description('Generate unit tests for untested functions')
136
115
  .option('-n, --count <number>', 'Number of functions to generate', parseInt, 10)
137
116
  .option('-p, --priority <level>', 'Priority filter (P0, P1, P2, P3)', 'P0')
138
117
  .option('--all', 'Generate all remaining TODO functions')
139
118
  .option('--report <path>', 'Report file path', 'reports/ut_scores.md')
119
+ .addHelpText('after', `
120
+ Examples:
121
+ $ ai-test generate # Generate 10 P0 functions
122
+ $ ai-test generate -n 20 # Generate 20 P0 functions
123
+ $ ai-test generate -p P1 -n 15 # Generate 15 P1 functions
124
+ $ ai-test generate --all # Generate all P0 TODO functions
125
+
126
+ Features:
127
+ - Automatic test generation using Cursor Agent
128
+ - Jest integration with coverage tracking
129
+ - Failure retry with hints
130
+ - Auto-marking DONE on success
131
+ `)
140
132
  .action(async (options) => {
141
- const { count, priority, all, report } = options
142
-
143
- // Check that the report exists
144
- if (!existsSync(report)) {
145
- console.error(`❌ Report not found: ${report}`)
146
- console.log(` Run: ai-test scan`)
147
- process.exit(1)
148
- }
149
-
150
- if (all) {
151
- // Keep generating until all TODO functions are done
152
- console.log(`🚀 Generating all ${priority} TODO functions...\n`)
153
-
154
- let batchNum = 1
155
- let totalGenerated = 0
156
- let totalPassed = 0
157
-
158
- while (true) {
159
- // 检查还有多少 TODO
160
- const content = readFileSync(report, 'utf-8')
161
- const lines = content.split('\n')
162
- const todoLines = lines.filter(line =>
163
- line.includes('| TODO |') && line.includes(`| ${priority} |`)
164
- )
165
-
166
- if (todoLines.length === 0) {
167
- console.log(`\n✅ All ${priority} functions completed!`)
168
- console.log(` Total generated: ${totalGenerated}`)
169
- console.log(` Total passed: ${totalPassed}`)
170
- break
171
- }
172
-
173
- console.log(`\n━━━━ Batch ${batchNum} (${todoLines.length} TODO remaining) ━━━━`)
174
-
175
- try {
176
- const result = await generateBatch(priority, Math.min(count, todoLines.length), 0, report)
177
- totalGenerated += result.generated
178
- totalPassed += result.passed
179
- batchNum++
180
- } catch (err) {
181
- console.error(`❌ Batch ${batchNum} failed:`, err.message)
182
- break
183
- }
184
- }
185
- } else {
186
- // Generate the specified count
187
- console.log(`🚀 Generating ${count} ${priority} functions...\n`)
188
- try {
189
- await generateBatch(priority, count, 0, report)
190
- } catch (err) {
191
- console.error('❌ Generation failed:', err.message)
192
- process.exit(1)
193
- }
194
- }
195
- })
196
-
197
- // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
198
- // Helpers
199
- // ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
200
-
201
- /**
202
- * Generate a single batch of tests
203
- */
204
- async function generateBatch(priority, count, skip, report) {
205
- const batchScript = join(PKG_ROOT, 'lib/workflows/batch.mjs')
206
-
207
- return new Promise((resolve, reject) => {
208
- const child = spawn('node', [batchScript, priority, String(count), String(skip), report], {
209
- stdio: 'inherit',
210
- cwd: process.cwd()
211
- })
212
-
213
- child.on('close', (code) => {
214
- if (code === 0) {
215
- resolve({ generated: count, passed: count }) // TODO: get actual numbers from batch output
216
- } else {
217
- reject(new Error(`Batch generation failed with code ${code}`))
218
- }
219
- })
220
-
221
- child.on('error', reject)
222
- })
223
- }
224
-
225
- /**
226
- * Run a script
227
- */
228
- function runScript(scriptPath, args) {
229
- return new Promise((resolve, reject) => {
230
- const fullPath = join(PKG_ROOT, 'lib', scriptPath)
231
- const child = spawn('node', [fullPath, ...args], {
232
- stdio: 'inherit',
233
- cwd: process.cwd()
234
- })
235
-
236
- child.on('close', (code) => {
237
- if (code === 0) {
238
- resolve()
239
- } else {
240
- reject(new Error(`Script ${scriptPath} exited with code ${code}`))
241
- }
242
- })
243
-
244
- child.on('error', reject)
133
+ const { generate } = await import('../lib/workflows/generate.mjs')
134
+ await generate(options)
245
135
  })
246
- }
247
136
 
248
137
  program.parse()
package/lib/ai/analyzer-prompt.mjs ADDED
@@ -0,0 +1,120 @@
1
+ /**
2
+ * AI analysis prompt builder
3
+ */
4
+
5
+ /**
6
+ * Build the analysis prompt
7
+ */
8
+ export function buildAnalysisPrompt(samples, stats, projectCtx) {
9
+ return `You are analyzing a ${projectCtx.framework} codebase to identify business-critical paths and high-risk modules.
10
+
11
+ ## 📊 Project Overview
12
+
13
+ - **Framework**: ${projectCtx.framework}
14
+ - **Total Files**: ${stats.totalFiles}
15
+ - **Total Lines**: ${stats.totalLines}
16
+ - **Key Dependencies**: ${projectCtx.criticalDeps.join(', ') || 'None detected'}
17
+
18
+ ## 📂 Code Samples (${samples.length} files)
19
+
20
+ ${samples.map((s, i) => `
21
+ ### Sample ${i + 1}: ${s.path}
22
+ **Layer**: ${s.layer}
23
+ **Reason**: ${s.reason}
24
+
25
+ \`\`\`typescript
26
+ ${s.preview}
27
+ \`\`\`
28
+ `).join('\n')}
29
+
30
+ ## 🎯 Your Task
31
+
32
+ **YOU HAVE ACCESS TO THE FULL CODEBASE** via Cursor's indexing. The samples above are just for quick reference.
33
+
34
+ Please analyze the ENTIRE codebase (not just the samples) and suggest:
35
+
36
+ 1. **businessCriticalPaths**: Which paths handle core business logic?
37
+ - Look for: payment, booking, pricing, checkout, order processing
38
+ - Use your codebase index to find all relevant files
39
+
40
+ 2. **highRiskModules**: Which modules have high error risk?
41
+ - Look for: date/time handling, external APIs, money calculations, parsing
42
+ - Check for complex logic, many try-catch blocks
43
+
44
+ 3. **testabilityAdjustments**: Which paths are easier/harder to test?
45
+ - Look for: pure functions (easier), heavy dependencies (harder)
46
+ - Consider side effects, I/O operations
47
+
48
+ ## 💡 Use Your Codebase Knowledge
49
+
50
+ - You can search the codebase using @codebase
51
+ - You know the full dependency graph
52
+ - You understand the business logic from code context
53
+
54
+ ## OUTPUT SCHEMA
55
+
56
+ \`\`\`json
57
+ {
58
+ "suggestions": {
59
+ "businessCriticalPaths": [
60
+ {
61
+ "pattern": "services/payment/**",
62
+ "confidence": 0.95,
63
+ "reason": "Handles Stripe payment processing",
64
+ "suggestedBC": 10,
65
+ "evidence": [
66
+ "Contains processPayment function with Stripe API calls",
67
+ "Referenced by checkout flow in multiple places",
68
+ "Handles money transactions"
69
+ ]
70
+ }
71
+ ],
72
+ "highRiskModules": [
73
+ {
74
+ "pattern": "utils/date/**",
75
+ "confidence": 0.88,
76
+ "reason": "Complex timezone and date calculations",
77
+ "suggestedER": 8,
78
+ "evidence": [
79
+ "Multiple timezone conversion functions",
80
+ "Handles daylight saving time"
81
+ ]
82
+ }
83
+ ],
84
+ "testabilityAdjustments": [
85
+ {
86
+ "pattern": "utils/**",
87
+ "confidence": 0.92,
88
+ "reason": "Pure utility functions with no side effects",
89
+ "adjustment": "+1",
90
+ "evidence": [
91
+ "All exports are pure functions",
92
+ "No external dependencies observed"
93
+ ]
94
+ }
95
+ ]
96
+ }
97
+ }
98
+ \`\`\`
99
+
100
+ ## CRITICAL RULES
101
+
102
+ 1. **Output ONLY JSON** - No explanations, no markdown wrapper
103
+ 2. **Match Schema Exactly** - Any deviation will be rejected
104
+ 3. **Stay Within Bounds** - All scores must be within specified ranges
105
+ 4. **Require Evidence** - Each suggestion needs 2-3 concrete evidence points
106
+ 5. **No Assumptions** - Only suggest what you can directly observe
107
+
108
+ ## CONSTRAINTS
109
+
110
+ - confidence ≥ 0.70 (businessCriticalPaths ≥ 0.85)
111
+ - 2-3 evidence items per suggestion
112
+ - Pattern must match actual paths in codebase
113
+ - Max 10 suggestions per category
114
+ - suggestedBC: 8 | 9 | 10
115
+ - suggestedER: 7 | 8 | 9 | 10
116
+ - adjustment: "-2" | "-1" | "+1" | "+2"
117
+
118
+ Output ONLY the JSON, no explanation.`
119
+ }
120
+
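The CONSTRAINTS section above maps directly onto checks a validator can enforce. A minimal sketch for one `businessCriticalPaths` suggestion follows; the package's `lib/ai/validator.mjs` performs fuller schema validation than shown here:

```javascript
// Minimal sketch of validating one businessCriticalPaths suggestion
// against the CONSTRAINTS above (confidence >= 0.85, suggestedBC in
// {8, 9, 10}, 2-3 evidence items).
function validateBusinessCritical(s) {
  const errors = []
  if (typeof s.pattern !== 'string' || s.pattern.length === 0) errors.push('pattern required')
  if (!(s.confidence >= 0.85)) errors.push('confidence must be >= 0.85 for businessCriticalPaths')
  if (![8, 9, 10].includes(s.suggestedBC)) errors.push('suggestedBC must be 8, 9, or 10')
  if (!Array.isArray(s.evidence) || s.evidence.length < 2 || s.evidence.length > 3) {
    errors.push('2-3 evidence items required')
  }
  return { ok: errors.length === 0, errors }
}
```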