ai-unit-test-generator 1.4.5 → 1.4.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,13 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+ ## [1.4.6] - 2025-01-11
+
+ ### Fixed
+ - Fixed JSONC parsing in the CLI for the coverage config
+ - Fixed the coverage command to continue even if tests fail (partial coverage data is still useful)
+ - Fixed tsconfig.json to include `node_modules/@types` for Jest types
+
  ## [1.4.5] - 2025-01-11

  ### Fixed
package/README.md CHANGED
@@ -1,467 +1,427 @@
- # ai-test-generator
+ # ai-unit-test-generator

- > AI-powered unit test generator with smart priority scoring
+ > AI-powered unit test generator with intelligent priority scoring

- [![npm version](https://img.shields.io/npm/v/ai-test-generator.svg)](https://www.npmjs.com/package/ai-test-generator)
+ [![npm version](https://img.shields.io/npm/v/ai-unit-test-generator.svg)](https://www.npmjs.com/package/ai-unit-test-generator)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- ## 🎯 Features
+ ## 🎯 Key Features

- - **Layered Architecture**: Scores based on code layer (Foundation, Business Logic, State Management, UI)
- - **AI-Optimized**: Designed for AI-powered test generation workflows
- - **Batch Test Generation**: Built-in tools for generating tests with AI (ChatGPT, Claude, Cursor Agent, etc.)
- - **Multiple Metrics**: Combines testability, complexity, dependency count, and more
- - **Flexible Configuration**: JSON-based config with presets for different project types
- - **CLI & Programmatic API**: Use as a command-line tool or integrate into your workflow
- - **Rich Reports**: Generates Markdown and CSV reports with detailed scoring
- - **Progress Tracking**: Mark functions as DONE/TODO/SKIP to track testing progress
+ - **🏗️ Layered Architecture Scoring**: Intelligent scoring based on code layers (Foundation, Business, State, UI)
+ - **🤖 AI-Native Design**: Seamless integration with Cursor Agent, ChatGPT, Claude, and more
+ - **📊 Coverage-Aware**: Integrates code coverage data for smart prioritization (incremental & existing code)
+ - **🎨 Multi-Dimensional Scoring**: Combines testability, complexity, dependency count, business criticality, and error risk
+ - **⚡ Batch Generation**: Automated batch test generation with failure retry and progress tracking
+ - **📝 Rich Reports**: Generates detailed scoring reports in Markdown and CSV formats
+ - **🔄 Status Management**: Automatic progress tracking (TODO/DONE/SKIP)
 
  ## 📦 Installation

- ### Global Installation (CLI)
+ ### Global Installation (Recommended)

  ```bash
- npm install -g ai-test-generator
+ npm install -g ai-unit-test-generator
  ```

  ### Project Installation

  ```bash
- npm install --save-dev ai-test-generator
+ npm install --save-dev ai-unit-test-generator
  ```
 
  ## 🚀 Quick Start

- ### 1. Initialize Configuration
-
- ```bash
- ai-test init
- ```
-
- This creates `ut_scoring_config.json` with default settings.
-
- ### 2. Scan and Score
+ ### 1. Scan Code and Generate a Priority Report

  ```bash
  ai-test scan
  ```

- This will:
- 1. Scan your codebase for testable targets
- 2. Analyze Git history for error signals
- 3. Calculate priority scores
- 4. Generate reports in `reports/`
+ On the first run, this automatically generates a config file, `ai-test.config.jsonc`, with detailed comments.

- ### 3. View Report
+ The scan will:
+ 1. Analyze the code structure and identify testable targets
+ 2. Auto-run Jest coverage analysis (if Jest is installed)
+ 3. Analyze Git history for error signals
+ 4. Calculate multi-dimensional priority scores
+ 5. Generate a sorted report (`reports/ut_scores.md`)

- ```bash
- # View complete report
- cat reports/ut_scores.md
+ **Output Example:**

- # View P0 targets only
- grep "| P0 |" reports/ut_scores.md
-
- # Export P0 targets to file
- grep "| P0 |" reports/ut_scores.md | awk -F'|' '{print $2}' > p0_targets.txt
+ ```markdown
+ | Status | Score | Priority | Name | Type | Layer | Path | Coverage | CS | BC | CC | ER | Testability | DepCount |
+ |--------|-------|----------|------|------|-------|------|----------|----|----|----|----|-------------|----------|
+ | TODO | 8.5 | P0 | validatePayment | function | Business Logic | src/services/payment.ts | 0.0% | 10 | 10 | 8 | 7 | 9 | 12 |
+ | TODO | 7.8 | P0 | formatCurrency | function | Foundation | src/utils/format.ts | 15.0% | 8 | 6 | 5 | 8 | 10 | 8 |
  ```
 
- ### 4. Generate Tests with AI
+ ### 2. Generate Tests

  ```bash
- # Generate prompt for AI
- ai-test gen-prompt -p P0 -n 10 > prompt.txt
-
- # Copy prompt.txt to your AI tool (ChatGPT, Claude, Cursor, etc.)
- # Get the response, save as response.txt
+ # Generate tests for 10 functions (default)
+ ai-test generate

- # Extract test files from AI response
- ai-test extract-tests response.txt
-
- # Run tests
- npm test
-
- # Mark as done
- ai-test mark-done func1 func2 func3
+ # Generate tests for 20 functions
+ ai-test generate -n 20
  ```
 
- ## 📖 CLI Commands
-
- ### `ai-test init`
+ The command will:
+ 1. Select the top N highest-priority untested functions from the report
+ 2. Invoke Cursor Agent to auto-generate tests
+ 3. Extract and save the test files
+ 4. Run Jest to validate the tests
+ 5. Auto-update the report status to DONE
+ 6. Generate failure hints for the next iteration if tests fail

- Initialize UT scoring configuration.
+ ### 3. Track Progress

  ```bash
- ai-test init [options]
+ # View the report
+ cat reports/ut_scores.md

- Options:
-   -p, --preset <type>    Use a preset config (default)
-   -o, --output <path>    Output config file path (default: ut_scoring_config.json)
+ # View P0 priority tasks
+ grep "| P0 |" reports/ut_scores.md
+
+ # View pending tasks
+ grep "| TODO |" reports/ut_scores.md
  ```

+ ## 📖 CLI Commands
+
  ### `ai-test scan`

- Scan code and generate a UT priority scoring report.
+ Scan code and generate a priority report.

  ```bash
  ai-test scan [options]

  Options:
-   -c, --config <path>    Config file path (default: ut_scoring_config.json)
+   -c, --config <path>    Config file path (default: ai-test.config.jsonc)
    -o, --output <dir>     Output directory (default: reports)
-   --skip-git             Skip Git signals generation
+   --skip-git             Skip Git history analysis
  ```

  **Output Files:**
- - `reports/targets.json` - List of all scanned targets
+ - `reports/targets.json` - List of scanned targets
  - `reports/git_signals.json` - Git analysis data
- - `reports/ut_scores.md` - Priority scoring report (Markdown, sorted by score)
- - `reports/ut_scores.csv` - Priority scoring report (CSV)
-
- ### `ai-test gen-prompt`
-
- Generate an AI prompt for batch test generation.
-
- ```bash
- ai-test gen-prompt [options] > prompt.txt
-
- Options:
-   -p, --priority <level>   Filter by priority (P0, P1, P2, P3)
-   -l, --layer <name>       Filter by layer (Foundation, Business, State, UI)
-   -n, --limit <count>      Limit the number of targets
-   --skip <count>           Skip the first N targets (for pagination)
-   --min-score <score>      Minimum score threshold
-   --report <path>          Report file path (default: reports/ut_scores.md)
-   --framework <name>       Project framework (default: React + TypeScript)
-   --hints <text>           Inject failure hints text into the prompt
-   --hints-file <path>      Inject failure hints from a file (e.g., reports/hints.txt)
-   --only-paths <csv>       Restrict to specific source paths (comma-separated)
- ```
-
- **Example:**
- ```bash
- # Generate a prompt for 10 P0 functions
- ai-test gen-prompt -p P0 -n 10 > prompt.txt
-
- # Generate a prompt for the Foundation layer with high scores
- ai-test gen-prompt -l Foundation --min-score 7.5 -n 5 > prompt.txt
-
- # Generate the next batch (skip first 10)
- ai-test gen-prompt -p P0 --skip 10 -n 10 > prompt.txt
- ```
+ - `reports/ut_scores.md` - Markdown report (sorted by score)
+ - `reports/ut_scores.csv` - CSV report
 
- ### `ai-test extract-tests`
+ ### `ai-test generate`

- Extract test files from an AI response.
+ Generate unit tests (using Cursor Agent).

  ```bash
- ai-test extract-tests <response-file> [options]
+ ai-test generate [options]

  Options:
-   --overwrite    Overwrite existing test files
-   --dry-run      Show what would be created without actually creating files
+   -n, --count <number>    Number of functions to generate tests for (default: 10)
  ```

- **Example:**
- ```bash
- # Extract tests from an AI response
- ai-test extract-tests response.txt
-
- # Dry-run to see what would be created
- ai-test extract-tests response.txt --dry-run
-
- # Overwrite existing tests
- ai-test extract-tests response.txt --overwrite
- ```
-
- ### `ai-test mark-done`
-
- Mark functions as done in the scoring report.
-
- ```bash
- ai-test mark-done <function-names...> [options]
-
- Options:
-   --report <path>    Report file path (default: reports/ut_scores.md)
-   --status <status>  Mark status: DONE or SKIP (default: DONE)
-   --dry-run          Show what would be changed without actually changing anything
- ```
-
- **Example:**
- ```bash
- # Mark functions as done
- ai-test mark-done disableDragBack getMediumScale
-
- # Mark as skipped
- ai-test mark-done complexFunction --status SKIP
-
- # Dry-run
- ai-test mark-done func1 func2 --dry-run
- ```
+ The command automatically:
+ - Selects the highest-priority untested functions
+ - Generates an AI prompt and invokes Cursor Agent
+ - Extracts and saves the test files
+ - Runs tests for validation
+ - Updates the report status
 
  ## ⚙️ Configuration

- ### Basic Configuration
+ On the first run of `ai-test scan`, a config file, `ai-test.config.jsonc`, is auto-generated with detailed comments.

- ```json
+ ### Core Configuration
+
+ ```jsonc
  {
-   "scoringMode": "layered",
+   "scoringMode": "layered",          // Scoring mode: layered or legacy
+
+   // Coverage configuration
+   "coverage": {
+     "runBeforeScan": true,           // Auto-run Jest coverage before scan
+     "command": "npx jest --coverage --silent",
+     "summaryPath": "coverage/coverage-summary.json"
+   },
+
+   // Coverage scoring mapping
+   "coverageScoring": {
+     "naScore": 5,                    // Default score when coverage data is unavailable
+     "mapping": [
+       { "lte": 0, "score": 10 },     // 0% coverage → highest priority
+       { "lte": 40, "score": 8 },     // 1-40% → high priority
+       { "lte": 70, "score": 6 },     // 41-70% → medium priority
+       { "lte": 90, "score": 3 },     // 71-90% → low priority
+       { "lte": 100, "score": 1 }     // 91-100% → lowest priority
+     ]
+   },
+
+   // Layer configuration
    "layers": {
      "foundation": {
-       "name": "Foundation (Utility Layer)",
-       "patterns": ["utils/**", "helpers/**", "constants/**"],
+       "name": "Foundation (Utils & Helpers)",
+       "patterns": ["**/utils/**", "**/helpers/**", "**/constants/**"],
        "weights": {
-         "testability": 0.50,
-         "dependencyCount": 0.30,
-         "complexity": 0.20
+         "testability": 0.30,         // Testability weight
+         "dependencyCount": 0.25,     // Dependency count weight
+         "complexity": 0.15,          // Complexity weight
+         "BC": 0.10,                  // Business criticality weight
+         "ER": 0.10,                  // Error risk weight
+         "coverage": 0.10             // Coverage weight
        },
        "thresholds": {
-         "P0": 7.5,
-         "P1": 6.0,
-         "P2": 4.0
+         "P0": 8.0,                   // Score ≥8.0 = P0 (must test)
+         "P1": 6.5,                   // Score 6.5-7.9 = P1 (high priority)
+         "P2": 5.0                    // Score 5.0-6.4 = P2 (medium priority), <5.0 = P3
        }
      }
+     // ... other layer configurations
    }
  }
  ```

- ### Presets
-
- - **default**: Balanced scoring for general projects
- - **react**: Optimized for React applications
- - **node**: Optimized for Node.js projects
-
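Read together, the per-layer weights and thresholds above imply a weighted-sum scoring rule. A minimal sketch under that assumption (illustrative only: `layeredScore` and `priority` are hypothetical helper names, and the package's actual scorer may differ):

```javascript
// Foundation-layer weights from the config above; each metric score is on a 0-10 scale.
const weights = { testability: 0.30, dependencyCount: 0.25, complexity: 0.15, BC: 0.10, ER: 0.10, coverage: 0.10 }

// Weighted sum of the per-metric scores.
function layeredScore(metrics, weights) {
  return Object.entries(weights)
    .reduce((sum, [key, w]) => sum + w * (metrics[key] ?? 0), 0)
}

// Maps a score onto the P0-P3 buckets using the layer thresholds.
function priority(score, thresholds) {
  if (score >= thresholds.P0) return 'P0'
  if (score >= thresholds.P1) return 'P1'
  if (score >= thresholds.P2) return 'P2'
  return 'P3'
}

const metrics = { testability: 10, dependencyCount: 8, complexity: 5, BC: 6, ER: 8, coverage: 10 }
const score = layeredScore(metrics, weights)
console.log(score.toFixed(2), priority(score, { P0: 8.0, P1: 6.5, P2: 5.0 }))
```

With these sample metrics the weighted sum lands at about 8.15, which crosses the foundation layer's P0 threshold of 8.0.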
  ## 📊 Priority Levels

  | Priority | Score Range | Description | Action |
  |----------|-------------|-------------|--------|
- | **P0** | ≥7.5 | Must test | Generate immediately with AI |
- | **P1** | 6.0-7.4 | High priority | Batch generate |
- | **P2** | 4.0-5.9 | Medium priority | Generate with careful review |
- | **P3** | <4.0 | Low priority | Optional coverage |
+ | **P0** | ≥8.0 | Must test | Generate immediately |
+ | **P1** | 6.5-7.9 | High priority | Batch generate |
+ | **P2** | 5.0-6.4 | Medium priority | Generate with review |
+ | **P3** | <5.0 | Low priority | Optional coverage |
 
- ## 🏗️ Architecture
+ ## 🏗️ Layered Architecture

- ### Layered Scoring System
+ ### 1. Foundation Layer
+ - **Characteristics**: Utility functions, helpers, constants
+ - **Weights**: High testability weight (30%)
+ - **Threshold**: P0 ≥ 8.0

- 1. **Foundation Layer**: Utils, helpers, constants
-    - High testability weight (50%)
-    - Focus on pure functions
-
- 2. **Business Logic Layer**: Services, APIs, data processing
-    - Balanced metrics
-    - Critical for correctness
+ ### 2. Business Logic Layer
+ - **Characteristics**: Services, APIs, data processing
+ - **Weights**: Balanced multi-dimensional scoring
+ - **Threshold**: P0 ≥ 7.5

- 3. **State Management Layer**: Stores, contexts, reducers
-    - Error risk weight emphasized
-
- 4. **UI Components Layer**: React components, views
-    - Complexity and error risk balanced
+ ### 3. State Management Layer
+ - **Characteristics**: State stores, contexts, reducers
+ - **Weights**: Emphasizes error risk
+ - **Threshold**: P0 ≥ 7.0

- ## 🔧 Programmatic Usage
+ ### 4. UI Components Layer
+ - **Characteristics**: React components, views
+ - **Weights**: Balanced complexity and error risk
+ - **Threshold**: P0 ≥ 6.5
 
- If you need to integrate into your Node.js workflow:
+ ## 📈 Scoring Metrics Explained

- ```javascript
- import { exec } from 'child_process'
- import { promisify } from 'util'
+ ### Coverage Score (CS)

- const execAsync = promisify(exec)
+ **New Feature**: Integrates code coverage data for both incremental and existing code scenarios.

- // Run scoring workflow
- const { stdout } = await execAsync('ai-test scan')
- console.log(stdout)
- ```
+ - **Score Mapping**:
+   - 0% → 10 points (highest priority, urgently needs testing)
+   - 1-40% → 8 points (high priority)
+   - 41-70% → 6 points (medium priority)
+   - 71-90% → 3 points (low priority)
+   - 91-100% → 1 point (well covered)
+   - N/A → 5 points (no data available)

- **Recommended**: Use the CLI directly for simplicity.
+ - **Weight**: Configured per layer (typically 10-20%)

- ## 🤖 AI Integration
+ ### Testability

- This tool is designed for AI-powered test generation workflows:
+ - **Pure Functions**: 10/10 (no side effects, easy to test)
+ - **Simple Mocks**: 8-9/10 (dependencies are easy to mock)
+ - **Complex Dependencies**: 4-6/10 (requires complex test setup)

- 1. **Run scoring** to get a prioritized target list
-    ```bash
-    ai-test scan
-    ```
+ ### Dependency Count

- 2. **Extract P0 targets**
-    ```bash
-    grep "| P0 |" reports/ut_scores.md | awk -F'|' '{print $2}' > p0_targets.txt
-    ```
+ Based on reference count:
+ - **≥10 referencing modules**: 10/10 (core module)
+ - **5-9 modules**: 10/10
+ - **3-4 modules**: 9/10
+ - **1-2 modules**: 7/10
+ - **No references**: 5/10

- 3. **Batch Automation with Cursor Agent**
-    ```bash
-    # One batch
-    ai-test run-batch -p P0 -n 10 --skip 0
+ ### Complexity

-    # Run all batches until TODO=0
-    ai-test run-all -p P0 -n 10
-    ```
+ - **Cyclomatic Complexity**: 11-15 → 10/10 (medium complexity, worth testing)
+ - **Cognitive Complexity**: Analyzed via ESLint
+ - **Nesting Depth**: Adjusts the complexity score

- 4. **Failure Feedback Loop**
-    - After each batch, the Jest JSON report is parsed and hints are written into `reports/hints.txt`
-    - The next batch injects those hints via `--hints-file reports/hints.txt`
+ ### Business Criticality (BC)

- ### ✅ Quality Assurance (Default Enhanced Mode)
+ Based on Git history:
+ - Modification frequency
+ - Number of contributors
+ - Commit message keywords (fix, bug, hotfix)

- To balance quality and speed, ai-test-generator enables a light assurance layer by default (inspired by Meta's TestGen-LLM deployment at scale).
+ ### Error Risk (ER)

- - Stability reruns (P0 only):
-   - `run-batch` reruns Jest once for P0 targets to reduce flakiness (runInBand)
- - Coverage-aware prioritization:
-   - `score-ut.mjs` supports coverageBoost to prioritize low-coverage files
- - Failure hints loop:
-   - `jest-failure-analyzer.mjs` produces actionable hints (import paths, type errors, timers, selectors)
-   - Hints are injected into the next prompt automatically
+ Based on:
+ - Error-handling code
+ - Try-catch blocks
+ - Number of conditional branches
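The CS mapping above is the same bucket table as the `coverageScoring` config shown earlier, applied as a first-match lookup over the `lte` bounds. A small sketch (illustrative; `coverageScore` is a hypothetical helper name, not necessarily the package's actual function):

```javascript
// First-match lookup over the documented `lte` buckets; missing coverage
// data (null/undefined) falls back to naScore.
const coverageScoring = {
  naScore: 5,
  mapping: [
    { lte: 0, score: 10 },
    { lte: 40, score: 8 },
    { lte: 70, score: 6 },
    { lte: 90, score: 3 },
    { lte: 100, score: 1 },
  ],
}

function coverageScore(pct, cfg = coverageScoring) {
  if (pct == null) return cfg.naScore
  const bucket = cfg.mapping.find(m => pct <= m.lte)
  return bucket ? bucket.score : cfg.naScore
}

console.log([0, 15, 55, 80, 95, null].map(p => coverageScore(p)))
// prints [ 10, 8, 6, 3, 1, 5 ]
```

Because buckets are checked in order, 0% falls into the `lte: 0` bucket rather than `lte: 40`, giving untested files the top score.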
 
- Notes:
- - Defaults are conservative (P0 strong, others light) to avoid runtime explosion.
- - You can further tune coverageBoost and thresholds in `ut_scoring_config.json`.
+ ## 🤖 AI Integration

- ## 📈 Metrics Explained
+ ### Using Cursor Agent (Recommended)

- ### Coverage-aware Scoring (New)
+ ```bash
+ # Auto-generate tests (built-in Cursor Agent integration)
+ ai-test generate -n 10
+ ```

- We incorporate code coverage into prioritization, balancing both incremental (new code) and stock (existing code) scenarios.
+ ### Using Other AI Tools (ChatGPT, Claude)

- - Coverage Score (CS) mapping:
-   - 0% → 10 (highest)
-   - 1-40% → 8
-   - 41-70% → 6
-   - 71-90% → 3
-   - 91-100% → 1 (lowest)
-   - N/A → 5 (no data)
- - Default weights (legacy mode): `coverage: 0.20`
- - Layered weights: each layer includes a `coverage` weight (see the default config)
- - Optional `coverageBoost` provides a small additive boost for low-coverage files to break ties.
+ ```bash
+ # 1. Generate an AI prompt
+ ai-test scan
+ grep "| TODO |" reports/ut_scores.md | head -10

- Coverage is parsed from `coverage/coverage-summary.json` if present. You can run coverage before `scan` to leverage coverage-aware prioritization.
+ # 2. Manually copy the function info into your AI tool
+ # 3. Save the AI response to a file
+ # 4. Run tests for validation
+ npm test
+ ```

- Configuration snippet:
+ ## 🎬 Complete Workflow Example

- ```json
- {
-   "coverageScoring": {
-     "naScore": 5,
-     "mapping": [
-       { "lte": 0, "score": 10 },
-       { "lte": 40, "score": 8 },
-       { "lte": 70, "score": 6 },
-       { "lte": 90, "score": 3 },
-       { "lte": 100, "score": 1 }
-     ]
-   },
-   "coverageBoost": { "enable": true, "threshold": 60, "scale": 0.5, "maxBoost": 0.5 }
- }
- ```
+ ```bash
+ # 1. Install Jest (if not already installed)
+ npm i -D jest@29 ts-jest@29 @types/jest@29 jest-environment-jsdom@29
 
- ### Testability (50% weight in Foundation layer)
- - Pure functions: 10/10
- - Simple mocks: 8-9/10
- - Complex dependencies: 4-6/10
+ # 2. Scan the code
+ ai-test scan
+ # ✅ Auto-generates the config file
+ # ✅ Auto-runs coverage analysis
+ # ✅ Generates the priority report

- ### Dependency Count (30% weight in Foundation layer)
- - ≥10 modules depend on it: 10/10
- - 5-9 modules: 10/10
- - 3-4 modules: 9/10
- - 1-2 modules: 7/10
- - Not referenced: 5/10
+ # 3. View the report
+ cat reports/ut_scores.md

- ### Complexity (20% weight in Foundation layer)
- - Cyclomatic complexity 11-15: 10/10
- - Cognitive complexity from ESLint
- - Adjustments for nesting depth and branch count
+ # 4. Generate tests (10 functions)
+ ai-test generate

- ## 🛠️ Development
+ # 5. View the results
+ npm test

- ### Project Structure
+ # 6. Continue with the next batch
+ ai-test generate -n 10

- ```
- ut-priority-scorer/
- ├── bin/
- │   └── ut-score.js               # CLI entry point
- ├── lib/
- │   ├── index.js                  # Main exports
- │   ├── gen-targets.mjs           # Target generation
- │   ├── gen-git-signals.mjs       # Git analysis
- │   ├── score-ut.mjs              # Scoring engine (supports coverage boost)
- │   ├── gen-test-prompt.mjs       # Build batch prompt (JSON manifest + code blocks)
- │   ├── extract-tests.mjs         # Extract tests (manifest-first, fallback headings)
- │   ├── ai-generate.mjs           # Call cursor-agent chat
- │   ├── jest-runner.mjs           # Run Jest and collect reports/coverage
- │   ├── jest-failure-analyzer.mjs # Parse Jest JSON and produce hints
- │   ├── run-batch.mjs             # Orchestrate one batch
- │   └── run-all.mjs               # Loop through all batches
- ├── templates/
- │   ├── default.config.json       # Default preset
- │   ├── react.config.json         # React preset
- │   └── node.config.json          # Node.js preset
- └── docs/
-     ├── workflow.md               # Detailed workflow
-     └── architecture.md           # Architecture design
+ # 7. Repeat until all high-priority tests are complete
  ```
 
- ## 📝 Examples
+ ## 🛠️ Advanced Usage

- ### Example: Scoring a React Project
+ ### Jest Environment Requirements

- ```bash
- # Initialize with the default config
- ai-test init
+ First-time users need to install Jest:

- # Customize the config if needed
- vim ut_scoring_config.json
+ ```bash
+ npm i -D jest@29 ts-jest@29 @types/jest@29 jest-environment-jsdom@29
+ ```

- # Run scoring
- ai-test scan
+ Then add type support in `tsconfig.json`:

- # View P0 targets
- grep "| P0 |" reports/ut_scores.md
+ ```json
+ {
+   "compilerOptions": {
+     "typeRoots": ["node_modules/@types"]
+   }
+ }
  ```
 
- ### Example: npm Scripts Integration
+ ### Custom Coverage Command

- Add to your `package.json`:
+ Modify `ai-test.config.jsonc`:

- ```json
+ ```jsonc
  {
-   "scripts": {
-     "test:priority": "ai-test scan",
-     "test:p0": "grep '| P0 |' reports/ut_scores.md"
+   "coverage": {
+     "runBeforeScan": true,
+     "command": "npm run test:coverage"   // Custom command
    }
  }
  ```
 
- ## 🤝 Contributing
+ ### Skip Git Analysis

- Contributions are welcome! Please read our contributing guidelines.
+ If your project has no Git history, or Git signals are not needed:

- ## 📄 License
+ ```bash
+ ai-test scan --skip-git
+ ```

- MIT © YuhengZhou
+ ## 📚 Inspiration
 
- ## 🔗 Links
+ This project draws inspiration from the following research and practices:

- - [GitHub Repository](https://github.com/temptrip/ai-test-generator)
- - [npm Package](https://www.npmjs.com/package/ai-test-generator)
- - [Issue Tracker](https://github.com/temptrip/ai-test-generator/issues)
+ - **Meta TestGen-LLM**: Quality assurance filters and at-scale practices
+   [Automated Unit Test Improvement using Large Language Models at Meta](https://arxiv.org/abs/2402.09171)

- ## 📚 Inspiration & Prior Art
+ - **ByteDance Midscene.js**: Natural language interface and stability practices
+   https://github.com/web-infra-dev/midscene

- We learned from and adapted ideas in the community:
+ - **Airbnb**: Large-scale LLM-assisted migration and batching
+   https://medium.com/airbnb-engineering/accelerating-large-scale-test-migration-with-llms-9565c208023b

- - Meta TestGen-LLM (assurance filters and at-scale practices): [Automated Unit Test Improvement using Large Language Models at Meta](https://arxiv.org/abs/2402.09171)
- - ByteDance Midscene.js (natural language interface and stability practices): `https://github.com/web-infra-dev/midscene`
- - Airbnb (large-scale LLM-aided migration and batching): `https://medium.com/airbnb-engineering/accelerating-large-scale-test-migration-with-llms-9565c208023b`
- - TestART (iterative generation and template repairs): `https://arxiv.org/abs/2408.03095`
+ - **TestART**: Iterative generation and template repair
+   https://arxiv.org/abs/2408.03095
 
- In ai-test-generator, these ideas shape:
- - Strict output protocol (JSON manifest + per-file code blocks)
+ These ideas are reflected in `ai-unit-test-generator` as:
+ - A strict output protocol (JSON manifest + code blocks)
  - Failure feedback loop (Jest JSON → actionable hints → next prompt)
  - Batch processing with progress tracking (TODO/DONE/SKIP)
- - Optional coverage-aware prioritization
+ - Coverage-aware prioritization
+
+ ## 🔧 Project Structure
+
+ ```
+ ai-unit-test-generator/
+ ├── bin/
+ │   └── cli.js               # CLI entry point
+ ├── lib/
+ │   ├── core/
+ │   │   ├── scanner.mjs      # Code scanning
+ │   │   ├── git-analyzer.mjs # Git history analysis
+ │   │   └── scorer.mjs       # Scoring engine
+ │   ├── ai/
+ │   │   ├── prompter.mjs     # AI prompt generation
+ │   │   ├── generator.mjs    # Cursor Agent invocation
+ │   │   └── extractor.mjs    # Test extraction
+ │   ├── testing/
+ │   │   ├── runner.mjs       # Jest runner
+ │   │   └── analyzer.mjs     # Failure analysis
+ │   ├── workflows/
+ │   │   └── batch.mjs        # Batch processing workflow
+ │   └── utils/
+ │       ├── file.mjs         # File operations
+ │       ├── status.mjs       # Status management
+ │       └── deps.mjs         # Dependency analysis
+ └── templates/
+     └── default.config.jsonc # Default config template
+ ```
+
+ ## 🤝 Contributing
+
+ Contributions are welcome! Please submit issues or pull requests.
+
+ ## 📄 License
+
+ MIT © YuhengZhou
+
+ ## 🔗 Links
+
+ - [npm Package](https://www.npmjs.com/package/ai-unit-test-generator)
+ - [Technical Documentation](./tech_doc.md)
+ - [Changelog](./CHANGELOG.md)

  ## 💬 Support

- - GitHub Issues: [Report bugs](https://github.com/temptrip/ut-priority-scorer/issues)
- - Documentation: [Read the docs](https://github.com/temptrip/ut-priority-scorer/tree/main/docs)
+ Need help? Get support through:
+
+ - Reading the [Technical Documentation](./tech_doc.md)
+ - Checking the [Changelog](./CHANGELOG.md)
+ - Submitting [GitHub Issues](https://github.com/temptrip/ai-unit-test-generator/issues)
+
+ ---

+ **Tip**: First-time users should try the tool on a small project to get familiar with the workflow before applying it to larger projects.
package/bin/cli.js CHANGED
@@ -10,6 +10,13 @@ const __filename = fileURLToPath(import.meta.url)
  const __dirname = dirname(__filename)
  const PKG_ROOT = resolve(__dirname, '..')

+ // Helper: strip comments from JSONC before JSON.parse
+ function stripJsonComments(str) {
+   return String(str)
+     .replace(/\/\*[\s\S]*?\*\//g, '')
+     .replace(/(^|\s)\/\/.*$/gm, '')
+ }
+
  // Read the version from package.json
  const pkgJson = JSON.parse(readFileSync(join(PKG_ROOT, 'package.json'), 'utf-8'))

@@ -49,16 +56,18 @@ program
  // Optional: auto-run coverage before the scan (controlled by config)
  try {
    const cfgText = existsSync(config) ? readFileSync(config, 'utf-8') : '{}'
-   const cfg = JSON.parse(cfgText)
+   const cfg = JSON.parse(stripJsonComments(cfgText))
    const covCfg = cfg?.coverage || { runBeforeScan: false }
    if (covCfg.runBeforeScan) {
      console.log('🧪 Running coverage before scan...')
-     await new Promise((resolve, reject) => {
+     await new Promise((resolve) => {
        const cmd = covCfg.command || 'npx jest --coverage --silent'
        const child = spawn(cmd, { stdio: 'inherit', shell: true, cwd: process.cwd() })
-       child.on('close', code => code === 0 ? resolve(0) : reject(new Error(`coverage exited ${code}`)))
-       child.on('error', reject)
+       // Continue even on failure (some tests may fail yet still produce partial coverage data)
+       child.on('close', () => resolve())
+       child.on('error', () => resolve())
      })
+     console.log('✅ Coverage analysis completed.\n')
    }
  } catch (err) {
    console.warn('⚠️ Coverage step failed or Jest not installed. Skipping coverage and continuing scan.')
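The JSONC fix in the hunk above can be exercised on its own. A minimal sketch using the same two regexes as the diff (the sample config text is hypothetical). Note that the line-comment pattern requires a preceding line start or whitespace, so the `//` in URLs such as `https://...` is left alone, though a whitespace-preceded `//` inside a string value would also be stripped:

```javascript
// Same comment-stripping regexes as added to bin/cli.js in 1.4.6:
// first remove /* ... */ block comments, then whitespace-preceded // line comments.
function stripJsonComments(str) {
  return String(str)
    .replace(/\/\*[\s\S]*?\*\//g, '')
    .replace(/(^|\s)\/\/.*$/gm, '')
}

// Hypothetical JSONC input, shaped like ai-test.config.jsonc.
const jsonc = `{
  "scoringMode": "layered", // layered or legacy
  /* coverage settings */
  "coverage": { "runBeforeScan": true }
}`

const cfg = JSON.parse(stripJsonComments(jsonc))
console.log(cfg.scoringMode, cfg.coverage.runBeforeScan)
// prints: layered true
```

Without the stripping step, `JSON.parse` throws a SyntaxError on the first comment, which is exactly the 1.4.6 bug being fixed.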
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "ai-unit-test-generator",
-   "version": "1.4.5",
+   "version": "1.4.6",
    "description": "AI-powered unit test generator with smart priority scoring",
    "keywords": [
      "unit-test",