ai-unit-test-generator 1.3.0

package/README.md ADDED
@@ -0,0 +1,432 @@
# ai-test-generator

> AI-powered unit test generator with smart priority scoring

[![npm version](https://img.shields.io/npm/v/ai-test-generator.svg)](https://www.npmjs.com/package/ai-test-generator)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## 🎯 Features

- **Layered Architecture**: Scores based on code layer (Foundation, Business Logic, State Management, UI)
- **AI-Optimized**: Designed for AI-powered test generation workflows
- **Batch Test Generation**: Built-in tools for generating tests with AI (ChatGPT, Claude, Cursor Agent, etc.)
- **Multiple Metrics**: Combines testability, complexity, dependency count, and more
- **Flexible Configuration**: JSON-based config with presets for different project types
- **CLI & Programmatic API**: Use as a command-line tool or integrate into your workflow
- **Rich Reports**: Generates Markdown and CSV reports with detailed scoring
- **Progress Tracking**: Mark functions as DONE/TODO/SKIP to track testing progress

## 📦 Installation

### Global Installation (CLI)

```bash
npm install -g ai-test-generator
```

### Project Installation

```bash
npm install --save-dev ai-test-generator
```

## 🚀 Quick Start

### 1. Initialize Configuration

```bash
ai-test init
```

This creates `ut_scoring_config.json` with default settings.

### 2. Scan and Score

```bash
ai-test scan
```

This will:
1. Scan your codebase for testable targets
2. Analyze Git history for error signals
3. Calculate priority scores
4. Generate reports in `reports/`

### 3. View Report

```bash
# View complete report
cat reports/ut_scores.md

# View P0 targets only
grep "| P0 |" reports/ut_scores.md

# Export P0 targets to file
grep "| P0 |" reports/ut_scores.md | awk -F'|' '{print $2}' > p0_targets.txt
```

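To make the `grep`/`awk` recipes above concrete, here is an illustrative excerpt of `reports/ut_scores.md`. The real column set comes from the scoring engine, so treat this as a sketch of the shape only; the commands above rely just on the function name sitting in the first column and on literal `| P0 |` and `| TODO |` cells. The function names are borrowed from the `mark-done` examples further down; the file paths are made up:

```markdown
| Function | File | Layer | Score | Priority | Status |
|----------|------|-------|-------|----------|--------|
| getMediumScale | src/utils/scale.ts | Foundation | 8.2 | P0 | TODO |
| disableDragBack | src/utils/gesture.ts | Foundation | 7.8 | P0 | DONE |
```
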
### 4. Generate Tests with AI

```bash
# Generate prompt for AI
ai-test gen-prompt -p P0 -n 10 > prompt.txt

# Copy prompt.txt to your AI tool (ChatGPT, Claude, Cursor, etc.)
# Get the response, save as response.txt

# Extract test files from AI response
ai-test extract-tests response.txt

# Run tests
npm test

# Mark as done
ai-test mark-done func1 func2 func3
```

## 📖 CLI Commands

### `ai-test init`

Initialize UT scoring configuration.

```bash
ai-test init [options]

Options:
  -p, --preset <type>  Preset config to use: default, react, or node (default: default)
  -o, --output <path>  Output config file path (default: ut_scoring_config.json)
```

### `ai-test scan`

Scan code and generate UT priority scoring report.

```bash
ai-test scan [options]

Options:
  -c, --config <path>  Config file path (default: ut_scoring_config.json)
  -o, --output <dir>   Output directory (default: reports)
  --skip-git           Skip Git signals generation
```

**Output Files:**
- `reports/targets.json` - List of all scanned targets
- `reports/git_signals.json` - Git analysis data
- `reports/ut_scores.md` - Priority scoring report (Markdown, sorted by score)
- `reports/ut_scores.csv` - Priority scoring report (CSV)

### `ai-test gen-prompt`

Generate AI prompt for batch test generation.

```bash
ai-test gen-prompt [options] > prompt.txt

Options:
  -p, --priority <level>  Filter by priority (P0, P1, P2, P3)
  -l, --layer <name>      Filter by layer (Foundation, Business, State, UI)
  -n, --limit <count>     Limit number of targets
  --skip <count>          Skip first N targets (for pagination)
  --min-score <score>     Minimum score threshold
  --report <path>         Report file path (default: reports/ut_scores.md)
  --framework <name>      Project framework (default: React + TypeScript)
  --hints <text>          Inject failure hints text into prompt
  --hints-file <path>     Inject failure hints from file (e.g., reports/hints.txt)
  --only-paths <csv>      Restrict to specific source paths (comma-separated)
```

**Example:**
```bash
# Generate prompt for 10 P0 functions
ai-test gen-prompt -p P0 -n 10 > prompt.txt

# Generate prompt for Foundation layer with high scores
ai-test gen-prompt -l Foundation --min-score 7.5 -n 5 > prompt.txt

# Generate next batch (skip first 10)
ai-test gen-prompt -p P0 --skip 10 -n 10 > prompt.txt
```

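Because `--skip` and `-n` paginate over the same sorted report, you can walk the whole backlog with a plain shell loop. A minimal sketch (the batch size and number of batches here are arbitrary):

```bash
# Emit prompts for three consecutive batches of 10 P0 targets
for skip in 0 10 20; do
  ai-test gen-prompt -p P0 -n 10 --skip "$skip" > "prompt_${skip}.txt"
done
```
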
### `ai-test extract-tests`

Extract test files from AI response.

```bash
ai-test extract-tests <response-file> [options]

Options:
  --overwrite  Overwrite existing test files
  --dry-run    Show what would be created without actually creating files
```

**Example:**
```bash
# Extract tests from AI response
ai-test extract-tests response.txt

# Dry-run to see what would be created
ai-test extract-tests response.txt --dry-run

# Overwrite existing tests
ai-test extract-tests response.txt --overwrite
```

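The extractor is manifest-first with a heading-based fallback (see `extract-tests.mjs` under Project Structure below), so the AI response has to name each test file explicitly. The exact protocol is whatever `ai-test gen-prompt` instructs the model to follow; the shape below is only a hypothetical illustration, not the actual format:

```text
{"files": ["src/utils/scale.test.ts"]}   <- JSON manifest (field name assumed)

### src/utils/scale.test.ts              <- per-file heading, followed by a fenced code block with the test
```
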
### `ai-test mark-done`

Mark functions as done in the scoring report.

```bash
ai-test mark-done <function-names...> [options]

Options:
  --report <path>    Report file path (default: reports/ut_scores.md)
  --status <status>  Mark status: DONE or SKIP (default: DONE)
  --dry-run          Show what would be changed without actually changing anything
```

**Example:**
```bash
# Mark functions as done
ai-test mark-done disableDragBack getMediumScale

# Mark as skipped
ai-test mark-done complexFunction --status SKIP

# Dry-run
ai-test mark-done func1 func2 --dry-run
```

## ⚙️ Configuration

### Basic Configuration

```json
{
  "scoringMode": "layered",
  "layers": {
    "foundation": {
      "name": "Foundation (base utility layer)",
      "patterns": ["utils/**", "helpers/**", "constants/**"],
      "weights": {
        "testability": 0.50,
        "dependencyCount": 0.30,
        "complexity": 0.20
      },
      "thresholds": {
        "P0": 7.5,
        "P1": 6.0,
        "P2": 4.0
      }
    }
  }
}
```

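The pieces above compose in a simple way: each metric is scored on a 0-10 scale (see "Metrics Explained" below), combined using the layer's weights (which sum to 1.0), and the weighted score is compared against the layer's thresholds. A minimal sketch of that arithmetic, not the actual `score-ut.mjs` engine (which also folds in Git signals and the optional coverage boost):

```javascript
// Illustrative only - mirrors the documented weights/thresholds, not score-ut.mjs itself
const foundation = {
  weights: { testability: 0.5, dependencyCount: 0.3, complexity: 0.2 },
  thresholds: { P0: 7.5, P1: 6.0, P2: 4.0 }
}

// Weighted sum of 0-10 metric scores
const score = (metrics, { weights }) =>
  Object.entries(weights).reduce((sum, [name, w]) => sum + w * (metrics[name] ?? 0), 0)

const priority = (s, { thresholds }) =>
  s >= thresholds.P0 ? 'P0' : s >= thresholds.P1 ? 'P1' : s >= thresholds.P2 ? 'P2' : 'P3'

// A pure, widely imported, moderately complex helper:
// 0.5 * 9 + 0.3 * 7 + 0.2 * 5 = 7.6, which clears the P0 threshold of 7.5
const s = score({ testability: 9, dependencyCount: 7, complexity: 5 }, foundation)
console.log(s.toFixed(1), priority(s, foundation)) // 7.6 P0
```
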
### Presets

- **default**: Balanced scoring for general projects
- **react**: Optimized for React applications
- **node**: Optimized for Node.js projects

## 📊 Priority Levels

| Priority | Score Range | Description | Action |
|----------|-------------|-------------|--------|
| **P0** | ≥7.5 | Must test | Generate immediately with AI |
| **P1** | 6.0-7.4 | High priority | Batch generate |
| **P2** | 4.0-5.9 | Medium priority | Generate with careful review |
| **P3** | <4.0 | Low priority | Optional coverage |

## 🏗️ Architecture

### Layered Scoring System

1. **Foundation Layer**: Utils, helpers, constants
   - High testability weight (50%)
   - Focus on pure functions

2. **Business Logic Layer**: Services, APIs, data processing
   - Balanced metrics
   - Critical for correctness

3. **State Management Layer**: Stores, contexts, reducers
   - Error risk weight emphasized

4. **UI Components Layer**: React components, views
   - Complexity and error risk balanced

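Which layer a file lands in is decided by the `patterns` globs from the config (`utils/**`, `helpers/**`, and so on). A minimal sketch of that lookup, assuming `minimatch`-style glob matching; the package's real matcher and fallback behavior may differ:

```javascript
import { minimatch } from 'minimatch'

// First layer whose glob list matches the file path wins
function layerFor(filePath, layers) {
  for (const [name, { patterns }] of Object.entries(layers)) {
    if (patterns.some(glob => minimatch(filePath, glob))) return name
  }
  return null // hypothetical: unmatched files would fall through to a default layer
}

// layerFor('utils/format.ts', config.layers) -> 'foundation'
```
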
## 🔧 Programmatic Usage

If you need to integrate it into a Node.js workflow:

```javascript
import { exec } from 'child_process'
import { promisify } from 'util'

const execAsync = promisify(exec)

// Run scoring workflow
const { stdout } = await execAsync('ai-test scan')
console.log(stdout)
```

**Recommended**: Use the CLI directly for simplicity.

## 🤖 AI Integration

This tool is designed for AI-powered test generation workflows:

1. **Run scoring** to get a prioritized target list
   ```bash
   ai-test scan
   ```

2. **Extract P0 targets**
   ```bash
   grep "| P0 |" reports/ut_scores.md | awk -F'|' '{print $2}' > p0_targets.txt
   ```

3. **Batch Automation with Cursor Agent**
   ```bash
   # One batch
   ai-test run-batch -p P0 -n 10 --skip 0

   # Run all batches until TODO=0
   ai-test run-all -p P0 -n 10
   ```

4. **Failure Feedback Loop**
   - After each batch, the Jest JSON report is parsed and hints are written to `reports/hints.txt` (see the sketch below)
   - The next batch injects those hints via `--hints-file reports/hints.txt`

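The hints file is plain text distilled from Jest's machine-readable output (`jest --json --outputFile=...`). The shipped `jest-failure-analyzer.mjs` classifies failures further (import paths, type errors, timers, selectors); the sketch below only shows the basic reduction from report to hints, assuming the standard Jest JSON shape and a hypothetical report path:

```javascript
import { readFileSync, writeFileSync } from 'fs'

// Reduce a Jest JSON report to one deduplicated hint line per failure
const report = JSON.parse(readFileSync('reports/jest-report.json', 'utf-8'))
const hints = new Set()
for (const suite of report.testResults) {
  for (const test of suite.assertionResults ?? []) {
    for (const msg of test.failureMessages ?? []) {
      hints.add(`${suite.name}: ${msg.split('\n')[0]}`) // suite.name is the test file path
    }
  }
}
writeFileSync('reports/hints.txt', [...hints].join('\n'))
```
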
### ✅ Quality Assurance (Default Enhanced Mode)

To balance quality and speed, ai-test-generator enables a light assurance layer by default (inspired by Meta's TestGen-LLM deployment at scale).

- Stability reruns (P0 only):
  - `run-batch` reruns Jest once for P0 targets to reduce flakiness (with Jest's `runInBand`)
- Coverage-aware prioritization:
  - `score-ut.mjs` supports `coverageBoost` to prioritize low-coverage files
- Failure hints loop:
  - `jest-failure-analyzer.mjs` produces actionable hints (import paths, type errors, timers, selectors)
  - Hints are injected into the next prompt automatically

Notes:
- Defaults are conservative (strong checks for P0, light for the rest) to avoid runtime explosion.
- You can further tune `coverageBoost` and thresholds in `ut_scoring_config.json`; a hypothetical example follows below.

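A hypothetical shape for that knob (the field names below are illustrative only; check the generated `ut_scoring_config.json` for the actual keys):

```json
{
  "coverageBoost": {
    "enabled": true,
    "maxBoost": 1.5,
    "coverageSummary": "coverage/coverage-summary.json"
  }
}
```
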
## 📈 Metrics Explained

### Testability (50% weight in Foundation layer)
- Pure functions: 10/10
- Simple mocks: 8-9/10
- Complex dependencies: 4-6/10

### Dependency Count (30% weight in Foundation layer)
- ≥10 modules depend on it: 10/10
- 5-9 modules: 10/10
- 3-4 modules: 9/10
- 1-2 modules: 7/10
- Not referenced: 5/10

### Complexity (20% weight in Foundation layer)
- Cyclomatic complexity 11-15: 10/10
- Cognitive complexity from ESLint
- Adjustments for nesting depth and branch count

## 🛠️ Development

### Project Structure

```
ut-priority-scorer/
├── bin/
│   └── cli.js                      # CLI entry point
├── lib/
│   ├── index.js                    # Main exports
│   ├── gen-targets.mjs             # Target generation
│   ├── gen-git-signals.mjs         # Git analysis
│   ├── score-ut.mjs                # Scoring engine (supports coverage boost)
│   ├── gen-test-prompt.mjs         # Build batch prompt (JSON manifest + code blocks)
│   ├── extract-tests.mjs           # Extract tests (manifest-first, fallback headings)
│   ├── ai-generate.mjs             # Call cursor-agent chat
│   ├── jest-runner.mjs             # Run Jest and collect reports/coverage
│   ├── jest-failure-analyzer.mjs   # Parse Jest JSON and produce hints
│   ├── run-batch.mjs               # Orchestrate one batch
│   └── run-all.mjs                 # Loop through all batches
├── templates/
│   ├── default.config.json         # Default preset
│   ├── react.config.json           # React preset
│   └── node.config.json            # Node.js preset
└── docs/
    ├── workflow.md                 # Detailed workflow
    └── architecture.md             # Architecture design
```
367
+
368
+ ## ๐Ÿ“ Examples
369
+
370
+ ### Example: Scoring a React Project
371
+
372
+ ```bash
373
+ # Initialize with default config
374
+ ai-test init
375
+
376
+ # Customize config if needed
377
+ vim ut_scoring_config.json
378
+
379
+ # Run scoring
380
+ ai-test scan
381
+
382
+ # View P0 targets
383
+ grep "| P0 |" reports/ut_scores.md
384
+ ```
385
+
386
+ ### Example: npm Scripts Integration
387
+
388
+ Add to your `package.json`:
389
+
390
+ ```json
391
+ {
392
+ "scripts": {
393
+ "test:priority": "ai-test scan",
394
+ "test:p0": "grep '| P0 |' reports/ut_scores.md"
395
+ }
396
+ }
397
+ ```
398
+
399
+ ## ๐Ÿค Contributing
400
+
401
+ Contributions are welcome! Please read our contributing guidelines.
402
+
403
+ ## ๐Ÿ“„ License
404
+
405
+ MIT ยฉ YuhengZhou
406
+
407
+ ## ๐Ÿ”— Links
408
+
409
+ - [GitHub Repository](https://github.com/temptrip/ai-test-generator)
410
+ - [npm Package](https://www.npmjs.com/package/ai-test-generator)
411
+ - [Issue Tracker](https://github.com/temptrip/ai-test-generator/issues)
412
+
413
+ ## ๐Ÿ“š Inspiration & Prior Art
414
+
415
+ We learned from and adapted ideas in the community:
416
+
417
+ - Meta TestGen-LLM โ€” assurance filters and at-scale practices: [Automated Unit Test Improvement using Large Language Models at Meta](https://arxiv.org/abs/2402.09171)
418
+ - ByteDance Midscene.js (NL to UI Test) โ€” natural language interface and stability practices: `https://github.com/web-infra-dev/midscene`
419
+ - Airbnb โ€” large-scale LLM-aided migration and batching: `https://medium.com/airbnb-engineering/accelerating-large-scale-test-migration-with-llms-9565c208023b`
420
+ - TestART (paper) โ€” iterative generation and template repairs: `https://arxiv.org/abs/2408.03095`
421
+
422
+ In ai-test-generator, these ideas shape:
423
+ - Strict output protocol (JSON manifest + per-file code blocks)
424
+ - Failure feedback loop (Jest JSON โ†’ actionable hints โ†’ next prompt)
425
+ - Batch processing with progress tracking (TODO/DONE/SKIP)
426
+ - Optional coverage-aware prioritization
427
+
428
+ ## ๐Ÿ’ฌ Support
429
+
430
+ - GitHub Issues: [Report bugs](https://github.com/temptrip/ut-priority-scorer/issues)
431
+ - Documentation: [Read the docs](https://github.com/temptrip/ut-priority-scorer/tree/main/docs)
432
+
package/bin/cli.js ADDED
@@ -0,0 +1,217 @@
#!/usr/bin/env node

import { program } from 'commander'
import { fileURLToPath } from 'url'
import { dirname, join, resolve } from 'path'
import { existsSync, copyFileSync, mkdirSync, readFileSync } from 'fs'
import { spawn } from 'child_process'

const __filename = fileURLToPath(import.meta.url)
const __dirname = dirname(__filename)
const PKG_ROOT = resolve(__dirname, '..')

// Read the version from package.json
const pkgJson = JSON.parse(readFileSync(join(PKG_ROOT, 'package.json'), 'utf-8'))

program
  .name('ai-test')
  .description('AI-powered unit test generator with smart priority scoring')
  .version(pkgJson.version)

// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Command 1: scan - scan the codebase and generate a priority report
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
program
  .command('scan')
  .description('Scan code and generate priority scoring report')
  .option('-c, --config <path>', 'Config file path', 'ut_scoring_config.json')
  .option('-o, --output <dir>', 'Output directory', 'reports')
  .option('--skip-git', 'Skip Git signals generation')
  .action(async (options) => {
    const { config, output, skipGit } = options

    // Auto-initialize the config if it does not exist yet
    if (!existsSync(config)) {
      console.log('⚙️ Config not found, creating default config...')
      const templatePath = join(PKG_ROOT, 'templates', 'default.config.json')
      copyFileSync(templatePath, config)
      console.log(`✅ Config created: ${config}\n`)
    }

    // Create the output directory
    if (!existsSync(output)) {
      mkdirSync(output, { recursive: true })
    }

    console.log('🚀 Starting code scan...\n')

    try {
      // Step 1: generate the target list
      console.log('📋 Step 1: Scanning targets...')
      await runScript('core/scanner.mjs', [
        '--config', config,
        '--out', join(output, 'targets.json')
      ])

      // Step 2: generate Git signals (optional)
      if (!skipGit) {
        console.log('\n📊 Step 2: Analyzing Git history...')
        await runScript('core/git-analyzer.mjs', [
          '--targets', join(output, 'targets.json'),
          '--out', join(output, 'git_signals.json')
        ])
      }

      // Step 3: run scoring (preserves existing DONE/SKIP status)
      console.log('\n🎯 Step 3: Scoring targets...')
      const scoreArgs = [
        '--targets', join(output, 'targets.json'),
        '--config', config,
        '--out-md', join(output, 'ut_scores.md'),
        '--out-csv', join(output, 'ut_scores.csv')
      ]
      if (!skipGit && existsSync(join(output, 'git_signals.json'))) {
        scoreArgs.push('--git', join(output, 'git_signals.json'))
      }
      await runScript('core/scorer.mjs', scoreArgs)

      // Count TODO/DONE rows in the Markdown report
      const reportPath = join(output, 'ut_scores.md')
      if (existsSync(reportPath)) {
        const content = readFileSync(reportPath, 'utf-8')
        const todoCount = (content.match(/\| TODO \|/g) || []).length
        const doneCount = (content.match(/\| DONE \|/g) || []).length

        console.log('\n✅ Scan completed!')
        console.log(`\n📊 Status:`)
        console.log(`   TODO: ${todoCount}`)
        console.log(`   DONE: ${doneCount}`)
        console.log(`   Total: ${todoCount + doneCount}`)
        console.log(`\n📄 Report: ${reportPath}`)
        console.log(`\n💡 Next: ai-test generate`)
      }
    } catch (err) {
      console.error('❌ Scan failed:', err.message)
      process.exit(1)
    }
  })

// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Command 2: generate - generate tests
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
program
  .command('generate')
  .description('Generate tests (default: 10 TODO functions)')
  .option('-n, --count <number>', 'Number of functions to generate', (v) => parseInt(v, 10), 10)
  .option('-p, --priority <level>', 'Priority filter (P0, P1, P2, P3)', 'P0')
  .option('--all', 'Generate all remaining TODO functions')
  .option('--report <path>', 'Report file path', 'reports/ut_scores.md')
  .action(async (options) => {
    const { count, priority, all, report } = options

    // Check that the report exists
    if (!existsSync(report)) {
      console.error(`❌ Report not found: ${report}`)
      console.log(`   Run: ai-test scan`)
      process.exit(1)
    }

    if (all) {
      // Keep generating until every TODO is done
      console.log(`🚀 Generating all ${priority} TODO functions...\n`)

      let batchNum = 1
      let totalGenerated = 0
      let totalPassed = 0

      while (true) {
        // Re-read the report to see how many TODO rows remain (batches mark rows DONE/SKIP)
        const content = readFileSync(report, 'utf-8')
        const lines = content.split('\n')
        const todoLines = lines.filter(line =>
          line.includes('| TODO |') && line.includes(`| ${priority} |`)
        )

        if (todoLines.length === 0) {
          console.log(`\n✅ All ${priority} functions completed!`)
          console.log(`   Total generated: ${totalGenerated}`)
          console.log(`   Total passed: ${totalPassed}`)
          break
        }

        console.log(`\n━━━━ Batch ${batchNum} (${todoLines.length} TODO remaining) ━━━━`)

        try {
          const result = await generateBatch(priority, Math.min(count, todoLines.length), 0, report)
          totalGenerated += result.generated
          totalPassed += result.passed
          batchNum++
        } catch (err) {
          console.error(`❌ Batch ${batchNum} failed:`, err.message)
          break
        }
      }
    } else {
      // Generate the specified number of functions
      console.log(`🚀 Generating ${count} ${priority} functions...\n`)
      try {
        await generateBatch(priority, count, 0, report)
      } catch (err) {
        console.error('❌ Generation failed:', err.message)
        process.exit(1)
      }
    }
  })

// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Helper functions
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

/**
 * Generate a single batch of tests
 */
async function generateBatch(priority, count, skip, report) {
  const batchScript = join(PKG_ROOT, 'lib/workflows/batch.mjs')

  return new Promise((resolve, reject) => {
    const child = spawn('node', [batchScript, priority, String(count), String(skip), report], {
      stdio: 'inherit',
      cwd: process.cwd()
    })

    child.on('close', (code) => {
      if (code === 0) {
        resolve({ generated: count, passed: count }) // TODO: read the actual numbers from the batch output
      } else {
        reject(new Error(`Batch generation failed with code ${code}`))
      }
    })

    child.on('error', reject)
  })
}

/**
 * Run a script from the package's lib/ directory as a child process
 */
function runScript(scriptPath, args) {
  return new Promise((resolve, reject) => {
    const fullPath = join(PKG_ROOT, 'lib', scriptPath)
    const child = spawn('node', [fullPath, ...args], {
      stdio: 'inherit',
      cwd: process.cwd()
    })

    child.on('close', (code) => {
      if (code === 0) {
        resolve()
      } else {
        reject(new Error(`Script ${scriptPath} exited with code ${code}`))
      }
    })

    child.on('error', reject)
  })
}

program.parse()