speccrew 0.5.13 → 0.5.15
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.speccrew/agents/speccrew-test-manager.md +361 -37
- package/.speccrew/skills/speccrew-test-reporter/SKILL.md +297 -0
- package/.speccrew/skills/{speccrew-test-execute → speccrew-test-reporter}/templates/BUG-REPORT-TEMPLATE.md +24 -1
- package/.speccrew/skills/{speccrew-test-execute → speccrew-test-reporter}/templates/TEST-REPORT-TEMPLATE.md +8 -1
- package/.speccrew/skills/{speccrew-test-execute → speccrew-test-runner}/SKILL.md +142 -104
- package/.speccrew/skills/speccrew-test-runner/templates/TEST-EXECUTION-RESULT-TEMPLATE.md +80 -0
- package/lib/utils.js +1 -0
- package/package.json +1 -1
package/.speccrew/skills/{speccrew-test-execute → speccrew-test-runner}/SKILL.md
CHANGED
@@ -1,42 +1,72 @@
 ---
-name: speccrew-test-
-description: Executes
-tools: Read, Write, Glob, Grep
+name: speccrew-test-runner
+description: "SpecCrew Test Runner. Executes test code, parses results, and detects deviations between expected and actual outcomes. Reads test code files and system design to run platform-specific test suites."
+tools: Read, Write, Bash, Glob, Grep
 ---

 # Trigger Scenarios

 - When speccrew-test-manager dispatches test execution after test code is confirmed
-- When
-- When user
+- When speccrew-test-reporter needs raw execution results to generate reports
+- When user explicitly requests "run tests", "execute tests", "run test suite"
+
+# Role Positioning
+
+**Primary Role**: Test Execution Engine
+
+**Responsibilities**:
+- Read and validate test code files
+- Perform environment pre-checks (runtime, dependencies, test framework)
+- Execute test commands for various platforms and frameworks
+- Parse test framework output into structured data
+- Detect deviations between expected and actual results
+- Output structured execution results for downstream consumers
+
+**Upstream Dependencies**: speccrew-test-manager, speccrew-test-code-generator
+**Downstream Consumers**: speccrew-test-reporter

 # Workflow

 ## Absolute Constraints

-> **These rules apply to ALL
+> **These rules apply to ALL execution steps. Violation = task failure.**
+
+1. **MUST: Environment First** — Always verify environment before running tests. Never skip pre-checks.
+
+2. **MUST: Complete Output Capture** — Capture both stdout and stderr for diagnostics.
+
+3. **MUST: TC ID Traceability** — Every test result must be traced back to its TC ID.
+
+4. **MUST NOT: Generate Reports** — This skill does NOT generate human-readable reports or bug documents. Only structured execution results.

-
+5. **MUST: Structured Output** — Output MUST be valid structured data (JSON/Markdown) that speccrew-test-reporter can consume.

-
+## Input Parameters

-
+| Parameter | Required | Description |
+|-----------|----------|-------------|
+| `test_code_plan_path` | Yes | Path to test code plan document with test file mappings |
+| `test_cases_path` | Yes | Path to test cases document with expected results |
+| `platform_id` | Yes | Target platform (frontend/backend/desktop/mobile) |
+| `output_dir` | Yes | Directory for execution results output |
+| `feature_name` | Yes | Feature name for output file naming |

 ## Step 1: Read Inputs

 Read the following documents in order:

-1. **Test
-   - Contains test case definitions with TC IDs, descriptions, and expected results
-   - Used for deviation detection against actual results
-
-2. **Test Code Plan**: `test_code_plan_path`
+1. **Test Code Plan**: `test_code_plan_path`
    - Contains mapping between TC IDs and test file locations
    - Identifies platform_id and corresponding test framework
+   - Lists all test files to be executed

-
-   -
-   -
+2. **Test Cases Document**: `test_cases_path`
+   - Contains test case definitions with TC IDs, descriptions, and expected results
+   - Used for deviation detection against actual results
+
+3. **Test Code Files**: Read each test file listed in the code plan
+   - Extract TC ID comments for traceability mapping
+   - Verify test file syntax and structure

 **Input Validation**:
 - Verify all required paths are provided and files exist
@@ -46,7 +76,17 @@ Read the following documents in order:

 Before executing tests, verify the environment is ready:

-### 2.1 Check
+### 2.1 Check Runtime Availability
+
+| Platform | Runtime Check |
+|----------|---------------|
+| Frontend | `node --version` |
+| Backend (Node) | `node --version` |
+| Backend (Python) | `python --version` or `python3 --version` |
+| Desktop | Platform-specific runtime check |
+| Mobile | Platform-specific runtime check |
+
+### 2.2 Check Test Dependencies

 | Platform | Dependency Check Command |
 |----------|--------------------------|
@@ -65,7 +105,7 @@ Environment Error: Missing test dependencies
 Please install dependencies before proceeding.
 ```

-### 2.
+### 2.3 Check Test Configuration Files

 Verify test configuration files exist:

@@ -76,7 +116,7 @@ Verify test configuration files exist:
 | Pytest | `pytest.ini`, `pyproject.toml`, or `setup.cfg` |
 | JUnit | `junit.xml` or build tool configuration |

-### 2.
+### 2.4 Check Service Dependencies

 Determine if the system under test requires:
 - Database service running
@@ -85,7 +125,7 @@ Determine if the system under test requires:

 **Checkpoint A**: If any environment check fails, report specific missing items and stop execution.

-
+**STOP IF FAILED**: IF any pre-check fails THEN:
 1. Stop workflow immediately
 2. Report all failures to user
 3. Do NOT proceed to test execution
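The Checkpoint A / STOP IF FAILED rule added in this hunk amounts to a fail-fast gate that still reports every failure at once. A sketch under that reading (all names are hypothetical, not from the package):

```javascript
// Sketch of Checkpoint A: run every pre-check, gather all failures,
// then either proceed or abort with one consolidated report.
function runPreChecks(checks) {
  const failures = [];
  for (const [name, check] of Object.entries(checks)) {
    const result = check();
    if (!result.ok) failures.push(`${name}: ${result.error}`);
  }
  if (failures.length > 0) {
    // STOP IF FAILED: report everything, do NOT proceed to execution.
    return {
      proceed: false,
      report: ['Environment Error:', ...failures].join('\n'),
    };
  }
  return { proceed: true, report: 'All environment pre-checks passed' };
}
```

Collecting all failures before stopping matches the "report all failures to user" step, rather than bailing on the first missing item.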
@@ -142,13 +182,16 @@ Parse test framework output to extract:
 | Skipped | Number of skipped tests |
 | Duration | Total execution time |

-### 4.2 Extract
+### 4.2 Extract Test Details

-For each
+For each test, extract:
 - **Test Function Name**: Full test name/suite
-- **
-- **
+- **TC ID**: Associated test case ID from comments
+- **Status**: PASS / FAIL / ERROR / SKIP
+- **Error Message**: Primary error description (if failed)
+- **Stack Trace**: Call stack leading to failure (if failed)
 - **Assertion Details**: Expected vs actual values (if available)
+- **Duration**: Individual test execution time

 ### 4.3 Map Tests to TC IDs

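Steps 4.2 and 4.3 hinge on tracing each framework result back to a TC ID via comments in the test files. One way that could look; the `TC-ID:` comment marker and the jest-style `test()`/`it()` pattern are assumptions for illustration, not conventions documented in this diff:

```javascript
// Sketch of Step 4.3: build a test-name -> TC ID map from comments in
// a test file's source. Assumes a "// TC-ID: TC-XXX" comment directly
// precedes each test() / it() declaration (assumed convention).
function extractTcIds(testFileSource) {
  const mapping = new Map(); // test name -> TC ID
  let pendingTcId = null;
  for (const line of testFileSource.split('\n')) {
    const tcMatch = line.match(/TC-ID:\s*(TC-[A-Z0-9-]+)/);
    if (tcMatch) {
      pendingTcId = tcMatch[1];
      continue;
    }
    const testMatch = line.match(/(?:test|it)\(\s*['"`](.+?)['"`]/);
    if (testMatch && pendingTcId) {
      mapping.set(testMatch[1], pendingTcId);
      pendingTcId = null; // each TC ID comment maps to one test
    }
  }
  return mapping;
}
```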
@@ -176,39 +219,40 @@ For each test case:

 | Type | Code | Description | Severity |
 |------|------|-------------|----------|
+| PASS | PASS | Test passed as expected | - |
 | FAIL | FAIL | Test assertion failed - actual result differs from expected | High |
 | ERROR | ERROR | Runtime error - code threw exception or crashed | Critical |
 | SKIP | SKIP | Test was skipped - preconditions not met | Medium |
 | FLAKY | FLAKY | Intermittent failure - non-deterministic behavior | High |

-### 5.3 Root Cause Analysis
+### 5.3 Root Cause Analysis (Basic)

-For each deviation,
+For each deviation, perform initial analysis:
 - **Assertion Failure**: Which specific assertion failed and why
 - **Runtime Error**: Exception type and location
 - **Skip Reason**: Why test could not execute
 - **Flaky Pattern**: Conditions causing intermittent failure

->
+> **Note**: Detailed root cause analysis with impact assessment is performed by speccrew-test-reporter.

-## Step 6: Generate
+## Step 6: Generate Execution Results

 ### 6.1 Read Template

-Read template: `
+Read template: `templates/TEST-EXECUTION-RESULT-TEMPLATE.md`

-### 6.2 Copy Template to
+### 6.2 Copy Template to Output Path

 1. **Read template** from Step 6.1
 2. **Replace top-level placeholders** (feature name, platform, execution date, etc.)
-3. **Create the document** using `create_file` at: `{output_dir}/{feature}-test-
+3. **Create the document** using `create_file` at: `{output_dir}/{feature}-test-execution-results.md`
 4. **Verify**: Document has complete section structure

 ### 6.3 Fill Each Section Using search_replace

 Fill each section with test execution data.

->
+> **CRITICAL CONSTRAINTS:**
 > - **FORBIDDEN: `create_file` to rewrite the entire document**
 > - **MUST use `search_replace` to fill each section individually**
 > - **All section titles MUST be preserved**
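The Step 5 deviation table maps directly onto a classification pass over parsed results. A sketch using the type-to-severity pairs from that table (the function name is illustrative):

```javascript
// Sketch of Step 5 deviation classification, mirroring the
// Type/Severity table: FAIL=High, ERROR=Critical, SKIP=Medium,
// FLAKY=High; PASS produces no deviation entry.
const SEVERITY = { FAIL: 'High', ERROR: 'Critical', SKIP: 'Medium', FLAKY: 'High' };

function classifyDeviations(testResults) {
  return testResults
    .filter((r) => r.status !== 'PASS')
    .map((r) => ({
      tc_id: r.tc_id,
      type: r.status,
      severity: SEVERITY[r.status] ?? 'Low',
      description: r.error_message ?? 'No details captured',
    }));
}
```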
@@ -217,66 +261,61 @@ Fill each section with test execution data.

 | Section | Content |
 |---------|---------|
-| **Execution Summary** | Feature name
-| **Results Overview** | Counts and percentages for all result types
-| **Results
-| **
-| **Coverage Status** | Requirement-to-test-case mapping, status per requirement |
+| **Execution Summary** | Feature name, platform, test framework, execution date, duration, overall pass rate |
+| **Results Overview** | Counts and percentages for all result types |
+| **Test Results Detail** | Table with TC ID, test name, status, duration, error message |
+| **Deviation Analysis** | List of all FAIL/ERROR/SKIP deviations with classification |
 | **Environment Information** | OS, runtime, framework versions, key dependencies |
-| **
-
-##
-
-###
[… 45 removed lines (old 231–275) not shown in the diff view …]
-- [ ] Clear reproduction steps
-- [ ] Expected vs actual result
-- [ ] Relevant log excerpt
-- [ ] Suggested fix direction
+| **Raw Output** | Captured stdout/stderr excerpts for debugging |
+
+## Output
+
+### Output Files
+
+| File | Path | Description |
+|------|------|-------------|
+| Execution Results | `{output_dir}/{feature}-test-execution-results.md` | Structured test execution data |
+
+### Output Format Contract
+
+The execution results document serves as the **input contract** for speccrew-test-reporter:
+
+```yaml
+execution_summary:
+  feature_name: string
+  platform_id: string
+  framework: string
+  execution_date: ISO8601
+  duration_ms: number
+  total_tests: number
+  passed: number
+  failed: number
+  errors: number
+  skipped: number
+  pass_rate: percentage
+
+test_results:
+  - tc_id: string
+    test_name: string
+    status: PASS|FAIL|ERROR|SKIP
+    duration_ms: number
+    error_message: string|null
+    stack_trace: string|null
+    expected_result: string
+    actual_result: string
+
+deviations:
+  - tc_id: string
+    type: FAIL|ERROR|SKIP|FLAKY
+    severity: Critical|High|Medium|Low
+    description: string
+
+environment:
+  os: string
+  runtime_version: string
+  framework_version: string
+  dependencies: list
+```

 # Key Rules

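Since the document above is the input contract for speccrew-test-reporter, a consumer could structurally validate it before report generation. A sketch only: the field names come from the contract, but this validator function itself is not part of the package, and it checks presence and status values rather than full schema conformance:

```javascript
// Sketch of a structural check against the Output Format Contract:
// verifies the fields speccrew-test-reporter would expect are present.
function validateExecutionResults(doc) {
  const errors = [];
  const summary = doc.execution_summary;
  if (!summary) errors.push('missing execution_summary');
  for (const field of ['feature_name', 'platform_id', 'framework', 'total_tests']) {
    if (summary && summary[field] === undefined) {
      errors.push(`execution_summary.${field} missing`);
    }
  }
  for (const r of doc.test_results ?? []) {
    if (!r.tc_id) errors.push('test result without tc_id'); // TC ID traceability
    if (!['PASS', 'FAIL', 'ERROR', 'SKIP'].includes(r.status)) {
      errors.push(`invalid status "${r.status}"`);
    }
  }
  return { valid: errors.length === 0, errors };
}
```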
@@ -284,21 +323,20 @@ Each bug report must include:
 |------|-------------|
 | **Environment First** | Always verify environment before running tests |
 | **Complete Output Capture** | Capture both stdout and stderr for diagnostics |
-| **TC ID Traceability** | Every
-| **
-| **
-| **
+| **TC ID Traceability** | Every test result must be traced back to its TC ID |
+| **No Report Generation** | This skill does NOT generate human-readable reports |
+| **Structured Output Only** | Output must be structured data for downstream consumption |
+| **Runner → Reporter Contract** | Output format must match speccrew-test-reporter input expectations |

 # Checklist

 - [ ] Environment pre-check passed before execution
 - [ ] All test files from code plan were executed
+- [ ] Test results are correctly parsed from framework output
 - [ ] Failed tests are correctly traced back to TC IDs
-- [ ]
-- [ ]
-- [ ]
-- [ ] All bug reports written to correct output path
-- [ ] Test report written to correct output path
+- [ ] Deviation classification completed for all non-PASS results
+- [ ] Execution results written to correct output path
+- [ ] Output format follows contract for speccrew-test-reporter

 ---

@@ -310,13 +348,13 @@ Upon completion (success or failure), output the following report format:
 ```
 ## Task Completion Report
 - **Status**: SUCCESS
-- **Task ID**: <from dispatch context, e.g., "test-
+- **Task ID**: <from dispatch context, e.g., "test-runner-web-vue">
 - **Platform**: <platform_id, e.g., "web-vue">
 - **Phase**: test_execution
 - **Output Files**:
-  - `
-  - `speccrew-workspace/iterations/{iteration}/05.system-test/bugs/[feature]-bug-{seq}.md` (if any failures)
+  - `{output_dir}/{feature}-test-execution-results.md`
 - **Summary**: Test execution completed with {passed}/{total} passed, {failed} failed, {skipped} skipped
+- **Next Step**: Dispatch to speccrew-test-reporter for report generation
 ```

 ## Failure Report
package/.speccrew/skills/speccrew-test-runner/templates/TEST-EXECUTION-RESULT-TEMPLATE.md
ADDED
@@ -0,0 +1,80 @@
+# Test Execution Results: {Feature Name}
+
+> **Document Type**: Structured Execution Results
+> **Consumer**: speccrew-test-reporter
+> **Format Version**: 1.0
+
+---
+
+## 1. Execution Summary
+
+| Item | Value |
+|------|-------|
+| Feature Name | {feature_name} |
+| Platform | {platform_id} |
+| Test Framework | {framework} |
+| Execution Date | {date} |
+| Total Duration | {duration} |
+| Executor | speccrew-test-runner |
+
+## 2. Results Overview
+
+| Metric | Count | Percentage |
+|--------|-------|------------|
+| Total | {total} | 100% |
+| Passed | {passed} | {pass_rate}% |
+| Failed | {failed} | {fail_rate}% |
+| Error | {error} | {error_rate}% |
+| Skipped | {skipped} | {skip_rate}% |
+
+## 3. Test Results Detail
+
+| TC ID | Test Name | Status | Duration | Error Message |
+|-------|-----------|--------|----------|---------------|
+| {tc_id} | {test_name} | PASS/FAIL/ERROR/SKIP | {duration} | {message_or_empty} |
+
+## 4. Deviation Analysis
+
+### 4.1 Failed Tests (FAIL)
+
+| TC ID | Test Name | Severity | Description |
+|-------|-----------|----------|-------------|
+| {tc_id} | {test_name} | High | {description} |
+
+### 4.2 Runtime Errors (ERROR)
+
+| TC ID | Test Name | Severity | Description |
+|-------|-----------|----------|-------------|
+| {tc_id} | {test_name} | Critical | {description} |
+
+### 4.3 Skipped Tests (SKIP)
+
+| TC ID | Test Name | Severity | Description |
+|-------|-----------|----------|-------------|
+| {tc_id} | {test_name} | Medium | {description} |
+
+## 5. Environment Information
+
+| Item | Value |
+|------|-------|
+| OS | {os} |
+| Runtime | {runtime_version} |
+| Test Framework | {framework_version} |
+| Dependencies | {key_dependencies} |
+
+## 6. Raw Output Excerpts
+
+### 6.1 Standard Output (stdout)
+```
+{stdout_excerpt}
+```
+
+### 6.2 Standard Error (stderr)
+```
+{stderr_excerpt}
+```
+
+---
+
+**Results Generated:** {timestamp}
+**Next Action:** Dispatch to speccrew-test-reporter for report generation
package/lib/utils.js
CHANGED
@@ -111,6 +111,7 @@ function removeDirRecursive(dirPath) {
 const DEPRECATED_SKILLS = [
   'speccrew-dev-desktop', // replaced by speccrew-dev-desktop-electron + speccrew-dev-desktop-tauri
   'speccrew-dev-review', // replaced by speccrew-dev-review-backend/frontend/mobile/desktop
+  'speccrew-test-execute', // replaced by speccrew-test-runner + speccrew-test-reporter
 ];

 function cleanDeprecatedSkills(destDir, deprecatedList) {