@ash-mallick/browserstack-sync 1.4.0 → 1.5.0

This diff shows the changes between publicly released versions of the package as published to a supported registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -1,6 +1,8 @@
  # @ash-mallick/browserstack-sync
 
- Sync Playwright/Cypress tests to BrowserStack Test Management with AI-powered test step extraction.
+ Sync **Playwright** and **Cypress** e2e specs to CSV (TC-001, TC-002, …) and optionally to **BrowserStack Test Management** (one folder per spec, create/update test cases).
+
+ **🦙 FREE AI-powered test step extraction** using **Ollama** — runs 100% locally, no data sent to cloud!
 
  **By Ashutosh Mallick**
 
@@ -12,183 +14,96 @@ Sync Playwright/Cypress tests to BrowserStack Test Management with AI-powered te
  npm install @ash-mallick/browserstack-sync
  ```
 
- ---
-
- ## Quick Setup
-
- Create `.env` file:
-
- ```env
- BROWSERSTACK_USERNAME=your_username
- BROWSERSTACK_ACCESS_KEY=your_access_key
- BROWSERSTACK_PROJECT_ID=PR-XXXX
- ```
+ Or run without installing: `npx @ash-mallick/browserstack-sync --csv-only`
 
  ---
 
- ## Commands
+ ## Run
 
- ### 1. Generate CSV Files Only (No BrowserStack)
+ From your project root (where your e2e specs live):
 
  ```bash
+ # Generate CSVs only
  npx am-browserstack-sync --csv-only
- ```
-
- Creates CSV files with test cases in `playwright/e2e-csv/` folder.
-
- ---
-
- ### 2. Onboard Single Spec to BrowserStack
-
- ```bash
- npx am-browserstack-sync --spec=login.spec
- ```
-
- Creates folder and test cases for the specified spec file.
-
- ---
-
- ### 3. Onboard All Specs to BrowserStack
 
- ```bash
+ # Sync all specs, no prompt (e.g. CI)
  npx am-browserstack-sync --all
- ```
-
- Creates folders and test cases for all spec files in your project.
-
- ---
-
- ### 4. Sync Test Run Results to BrowserStack
-
- Specify your report file location:
-
- ```bash
- npx am-browserstack-sync-runs --report=test-results.json
- ```
-
- **Supported formats (auto-detected):**
- - Playwright JSON
- - Cypress/Mochawesome JSON
- - JUnit XML
 
- **Configure report path** (so you don't need `--report` every time):
+ # Sync only certain specs
+ npx am-browserstack-sync --spec=login.spec,checkout.spec
 
- ```bash
- # Via environment variable
- export BROWSERSTACK_REPORT_PATH=test-results.json
+ # Disable AI analysis (use regex extraction only)
+ npx am-browserstack-sync --no-ai
 
- # Or in .am-browserstack-sync.json
- { "reportPath": "test-results.json" }
+ # Use a specific Ollama model
+ npx am-browserstack-sync --model=llama3.2
  ```
 
- **Generate reports** (add to your test command):
-
- ```bash
- # Playwright
- npx playwright test --reporter=json --output-file=test-results.json
+ **Scripts** in `package.json`:
 
- # Cypress
- npx cypress run --reporter mochawesome
- ```
-
- **With custom run name:**
-
- ```bash
- npx am-browserstack-sync-runs --report=test-results.json --run-name="Nightly Run"
+ ```json
+ {
+   "scripts": {
+     "sync:e2e": "am-browserstack-sync",
+     "sync:e2e-csv": "am-browserstack-sync --csv-only"
+   }
+ }
  ```
 
  ---
 
- ## GitLab CI Integration
+ ## 🦙 AI-Powered Step Analysis (FREE with Ollama)
 
- ### Option A: Your pipeline already generates reports
+ The tool uses **Ollama** to analyze your test code and generate **human-readable test steps**. Ollama runs **100% locally** on your machine — no data sent to cloud, completely free, no API key needed!
 
- If your existing test job generates JSON/JUnit reports, just add sync after:
+ **Example transformation:**
 
- ```yaml
- # Your existing test job
- test:
-   script:
-     - npm ci
-     - CENV=prod npx playwright test --reporter=allure-playwright,json --output-file=test-results.json
-   artifacts:
-     paths:
-       - test-results.json
-
- # Add this job to sync results to BrowserStack
- sync_browserstack:
-   needs: [test]
-   script:
-     - npm install @ash-mallick/browserstack-sync
-     - npx am-browserstack-sync-runs --report=test-results.json --run-name="Pipeline #$CI_PIPELINE_ID"
+ ```typescript
+ // Your test code:
+ test('should log in successfully', async ({ page }) => {
+   await page.goto('/login');
+   await page.getByLabel(/email/i).fill('user@example.com');
+   await page.getByLabel(/password/i).fill('validpassword');
+   await page.getByRole('button', { name: /sign in/i }).click();
+   await expect(page).toHaveURL(/\/dashboard/);
+ });
  ```
 
- ### Option B: Scheduled sync job (runs tests + syncs)
-
- ```yaml
- scheduled_browserstack_sync:
-   rules:
-     - if: $CI_PIPELINE_SOURCE == "schedule"
-   script:
-     - npm ci
-     - CENV=prod npx playwright test --reporter=allure-playwright,json --output-file=test-results.json || true
-     - npm install @ash-mallick/browserstack-sync
-     - npx am-browserstack-sync-runs --report=test-results.json
- ```
-
- ### Required CI/CD Variables
-
- Add in **Settings → CI/CD → Variables**:
-
- | Key | Value |
- |-----|-------|
- | `BROWSERSTACK_USERNAME` | your_username |
- | `BROWSERSTACK_ACCESS_KEY` | your_access_key |
- | `BROWSERSTACK_PROJECT_ID` | PR-XXXX |
-
- For scheduled jobs: **CI/CD → Schedules → New Schedule**.
-
- ---
-
- ## All Options Reference
-
- ### Onboarding Command (`am-browserstack-sync`)
-
- | Option | Description |
- |--------|-------------|
- | `--csv-only` | Generate CSVs only, do not sync to BrowserStack |
- | `--all` | Sync all spec files without prompting |
- | `--spec=name` | Sync specific spec(s), comma-separated |
- | `--no-ai` | Disable AI step analysis, use regex extraction |
- | `--model=name` | Ollama model to use (default: llama3.2) |
-
- ### Run Sync Command (`am-browserstack-sync-runs`)
-
- | Option | Description |
- |--------|-------------|
- | `--report=path` | Path to test report file (auto-detects if not specified) |
- | `--run-name=name` | Custom name for the test run |
- | `--format=type` | Report format: `playwright-json` or `junit` |
-
- ---
-
- ## AI-Powered Test Steps (Optional)
+ **Generated steps:**
 
- The tool uses Ollama to analyze your test code and generate human-readable test steps. Ollama runs locally - no data sent to cloud, completely free.
+ | Step | Expected Result |
+ |------|-----------------|
+ | Navigate to /login page | Login page loads successfully |
+ | Enter 'user@example.com' in the Email field | Email is entered |
+ | Enter 'validpassword' in the Password field | Password is masked and entered |
+ | Click the 'Sign In' button | Form is submitted |
+ | Verify URL | URL matches /dashboard |
 
  ### Setup Ollama
 
- 1. Download from [ollama.ai](https://ollama.ai)
- 2. Pull a model:
+ 1. Download and install Ollama from [ollama.ai](https://ollama.ai)
+
+ 2. Pull a model (llama3.2 recommended):
  ```bash
  ollama pull llama3.2
  ```
- 3. Start Ollama:
+
+ 3. Start Ollama (runs automatically on macOS, or run manually):
  ```bash
  ollama serve
  ```
 
- AI analysis runs automatically when Ollama is running. Use `--no-ai` to disable.
+ 4. Run the sync — AI analysis is automatic when Ollama is running!
+ ```bash
+ npx am-browserstack-sync --csv-only
+ ```
+
+ ### Options
+
+ - `--no-ai` — Disable AI, use regex-based extraction instead
+ - `--model=llama3.2` — Use a different Ollama model
+ - `OLLAMA_MODEL=llama3.2` — Set default model via env variable
+ - `OLLAMA_HOST=http://localhost:11434` — Custom Ollama host
 
  ### Recommended Models
 
@@ -197,16 +112,17 @@ AI analysis runs automatically when Ollama is running. Use `--no-ai` to disable.
  | `llama3.2` | 2GB | General purpose, fast (default) |
  | `codellama` | 4GB | Better code understanding |
  | `llama3.2:1b` | 1GB | Fastest, lower quality |
+ | `mistral` | 4GB | Good balance |
+
+ **Fallback:** If Ollama is not running, the tool automatically uses regex-based step extraction, which still provides meaningful steps.
 
  ---
 
- ## Configuration (Optional)
+ ## Config (optional)
 
- Default directories:
- - e2e specs: `playwright/e2e`
- - CSV output: `playwright/e2e-csv`
+ Defaults: e2e dir `playwright/e2e`, CSV dir `playwright/e2e-csv`. For Cypress use e.g. `cypress/e2e`.
 
- Override via `.am-browserstack-sync.json`:
+ Override via **`.am-browserstack-sync.json`** in project root:
 
  ```json
  {
@@ -215,27 +131,42 @@ Override via `.am-browserstack-sync.json`:
  }
  ```
 
- Or via environment variables:
- - `PLAYWRIGHT_BROWSERSTACK_E2E_DIR`
- - `PLAYWRIGHT_BROWSERSTACK_CSV_DIR`
+ Or **package.json**: `"amBrowserstackSync": { "e2eDir": "...", "csvOutputDir": "..." }`
+ Or env: `PLAYWRIGHT_BROWSERSTACK_E2E_DIR`, `PLAYWRIGHT_BROWSERSTACK_CSV_DIR`.
 
  ---
 
- ## What Gets Created
+ ## BrowserStack sync
+
+ Sync pushes your e2e tests into **BrowserStack Test Management** so you can track test cases, link runs, and keep specs in sync with one source of truth. Under your chosen project it creates **one folder per spec file** (e.g. `login.spec`, `checkout.spec`) and one **test case** per test, with title, description, steps, state (Active), type (Functional), automation status, and tags. Existing test cases are matched by title or TC-id and **updated**; new ones are **created**. No duplicates.
 
- When you run `npx am-browserstack-sync --all`:
+ **Setup:**
 
- 1. **CSV files** - One per spec file with test case details
- 2. **BrowserStack folders** - One folder per spec file
- 3. **Test cases** - Each test becomes a test case with:
-    - Title
-    - AI-generated steps (or regex-extracted)
-    - Expected results
-    - Tags
-    - Automation status
+ 1. In project root, create **`.env`** (do not commit):
 
- When you run `npx am-browserstack-sync-runs`:
+ ```env
+ BROWSERSTACK_USERNAME=your_username
+ BROWSERSTACK_ACCESS_KEY=your_access_key
+ BROWSERSTACK_PROJECT_ID=PR-XXXX
+ ```
+ Or use a single token: `BROWSERSTACK_API_TOKEN=your_token`
+
+ 2. Get credentials and project ID from [Test Management → API keys](https://test-management.browserstack.com/settings/api-keys). The project ID is in the project URL (e.g. `PR-1234`).
+
+ 3. **Install Ollama** for AI-powered step analysis (optional but recommended).
+
+ 4. Run **`npx am-browserstack-sync`** (without `--csv-only`). You'll be prompted to sync all specs or pick specific ones (unless you use `--all` or `--spec=...`). After sync, open your project in BrowserStack to see the new folders and test cases.
+
+ ---
+
+ ## What it does
+
+ - Finds **Playwright** (`*.spec.*`, `*.test.*`) and **Cypress** (`*.cy.*`) spec files in your e2e dir.
+ - Extracts test titles from `test('...')` / `it('...')`, assigns TC-001, TC-002, …
+ - **Analyzes test code** with Ollama AI (local, free) or regex to generate human-readable steps and expected results.
+ - Enriches with state (Active), type (Functional), automation (automated), tags (from spec + title).
+ - Writes **one CSV per spec** (test_case_id, title, state, case_type, steps, expected_results, jira_issues, automation_status, tags, description, spec_file).
+ - Optionally syncs to BrowserStack with description, steps, and tags.
+ ---
 
- 1. **Test Run** - Created in BrowserStack
- 2. **Results** - Pass/fail status for each test
- 3. **Link** - URL to view results in BrowserStack
+ **Author:** Ashutosh Mallick
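
The new README's "What it does" list describes scanning spec files for `test('...')` / `it('...')` titles and numbering them TC-001, TC-002, …. A minimal sketch of that idea — the function name `extractTestCases` and the regex are illustrative assumptions, not the package's actual implementation:

```javascript
// Hypothetical sketch: pull test/it titles out of spec source and
// assign sequential zero-padded TC ids, as the README describes.
function extractTestCases(specSource) {
  const titleRegex = /\b(?:test|it)\(\s*['"`]([^'"`]+)['"`]/g;
  const cases = [];
  let match;
  while ((match = titleRegex.exec(specSource)) !== null) {
    const id = `TC-${String(cases.length + 1).padStart(3, '0')}`;
    cases.push({ id, title: match[1] });
  }
  return cases;
}

const spec = `
test('should log in successfully', async ({ page }) => {});
test('should reject a wrong password', async ({ page }) => {});
`;
console.log(extractTestCases(spec));
```

A regex like this only sees statically written string titles; parameterized or computed titles would need real parsing.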
package/bin/cli.js CHANGED
@@ -2,67 +2,31 @@
  /**
   * CLI for @ash-mallick/browserstack-sync (am-browserstack-sync).
   * By Ashutosh Mallick. Run from your project root: npx am-browserstack-sync [options]
- *
- * This command onboards your Playwright/Cypress tests to BrowserStack:
- * - Generates CSV files with test cases
- * - Creates folders in BrowserStack (one per spec file)
- * - Creates/updates test cases with AI-generated steps
+ * Uses cwd as project root; config from .env, config file, or package.json.
  *
- * OPTIONS:
+ * Options:
  *   --csv-only     Generate CSVs only, do not sync to BrowserStack
  *   --all          Sync all spec files (no interactive prompt)
- *   --spec=name    Sync only these specs (comma-separated)
- *   --no-ai        Disable AI-powered step analysis
- *   --model=name   Ollama model to use (default: llama3.2)
+ *   --spec=name    Sync only these specs (comma-separated). e.g. --spec=login.spec,checkout.spec
+ *   --no-ai        Disable AI-powered step analysis (use regex extraction only)
+ *   --model=name   Ollama model to use (default: llama3.2). e.g. --model=codellama
  *
- * EXAMPLES:
- *   npx am-browserstack-sync --all        # Onboard all tests
- *   npx am-browserstack-sync --csv-only   # Generate CSVs only
- *   npx am-browserstack-sync --spec=login # Onboard specific spec
+ * AI Analysis (FREE, Local with Ollama):
+ *   Install Ollama from https://ollama.ai, then:
+ *     ollama pull llama3.2
+ *     ollama serve
  *
- * To sync test run results, use:
- *   npx am-browserstack-sync-runs
+ * AI analysis is automatic when Ollama is running! No API key needed.
+ * The AI will analyze your test code and generate human-readable steps like:
+ *   - "Navigate to /login page"
+ *   - "Enter 'user@test.com' in the Email field"
+ *   - "Click the 'Sign In' button"
+ *   - "Verify URL matches /dashboard"
  */
 
  import { runSync } from '../lib/index.js';
- import 'dotenv/config';
 
  const argv = process.argv.slice(2);
-
- // Show help
- if (argv.includes('--help') || argv.includes('-h')) {
-   console.log(`
- 📦 BrowserStack Test Case Sync
-
- Onboard your Playwright/Cypress tests to BrowserStack Test Management.
-
- USAGE:
-   npx am-browserstack-sync [options]
-
- OPTIONS:
-   --csv-only       Generate CSVs only, do not sync to BrowserStack
-   --all            Sync all spec files without prompting
-   --spec=<names>   Sync specific specs (comma-separated)
-   --no-ai          Disable AI step analysis (use regex)
-   --model=<name>   Ollama model (default: llama3.2)
-   --help, -h       Show this help
-
- EXAMPLES:
-   npx am-browserstack-sync --all
-   npx am-browserstack-sync --csv-only
-   npx am-browserstack-sync --spec=login.spec,checkout.spec
-
- TO SYNC TEST RUN RESULTS:
-   npx am-browserstack-sync-runs
-
- ENVIRONMENT VARIABLES:
-   BROWSERSTACK_USERNAME    Your BrowserStack username
-   BROWSERSTACK_ACCESS_KEY  Your BrowserStack access key
-   BROWSERSTACK_PROJECT_ID  Project ID (e.g., PR-1234)
- `);
-   process.exit(0);
- }
-
  const csvOnly = argv.includes('--csv-only');
  const all = argv.includes('--all');
  const noAI = argv.includes('--no-ai');
@@ -14,7 +14,7 @@ const OLLAMA_HOST = process.env.OLLAMA_HOST || 'http://localhost:11434';
  /**
   * System prompt for the AI to understand what we want.
   */
- const SYSTEM_PROMPT = `You are a test automation expert. Your job is to analyze Playwright or Cypress test code and extract human-readable test steps.
+ const SYSTEM_PROMPT = `You are a QA expert creating manual test steps from automated test code.
 
  CRITICAL RULES:
  1. Generate steps ONLY for what is explicitly in the test code provided
package/lib/browserstack.js CHANGED
@@ -200,109 +200,3 @@ export async function syncToBrowserStack(specsMap, projectId, auth) {
    console.log(`   Updated: ${totalUpdated} test case(s)`);
    console.log(`   Skipped: ${totalSkipped} test case(s) (no changes)`);
  }
-
- // ============================================================================
- // TEST RUN API - For syncing test execution results
- // ============================================================================
-
- /**
-  * Create a new Test Run in BrowserStack.
-  */
- export async function createTestRun(projectId, runName, description, auth) {
-   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs`;
-   const body = {
-     test_run: {
-       name: runName,
-       description: description || `Automated test run: ${runName}`,
-     },
-   };
-   const res = await bsRequest('POST', url, auth, body);
-   return {
-     runId: res.test_run?.id || res.id,
-     identifier: res.test_run?.identifier || res.identifier,
-   };
- }
-
- /**
-  * Add test cases to a Test Run.
-  */
- export async function addTestCasesToRun(projectId, runId, testCaseIds, auth) {
-   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}/test-cases`;
-   const body = {
-     test_case_identifiers: testCaseIds,
-   };
-   return bsRequest('POST', url, auth, body);
- }
-
- /**
-  * Update test result in a Test Run.
-  */
- export async function updateTestResult(projectId, runId, testCaseId, status, options = {}, auth) {
-   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}/test-results`;
-
-   const statusMap = {
-     passed: 'passed',
-     failed: 'failed',
-     skipped: 'skipped',
-     timedOut: 'failed',
-   };
-
-   const body = {
-     test_result: {
-       test_case_identifier: testCaseId,
-       status: statusMap[status] || status,
-       comment: options.comment || '',
-     },
-   };
-
-   if (options.duration) {
-     body.test_result.duration = Math.round(options.duration / 1000);
-   }
-
-   return bsRequest('POST', url, auth, body);
- }
-
- /**
-  * Complete a Test Run.
-  */
- export async function completeTestRun(projectId, runId, auth) {
-   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}`;
-   const body = {
-     test_run: {
-       state: 'done',
-     },
-   };
-   return bsRequest('PATCH', url, auth, body);
- }
-
- /**
-  * Get all test cases in a project.
-  */
- export async function getAllTestCases(projectId, auth) {
-   const all = [];
-   let page = 1;
-   let hasMore = true;
-
-   while (hasMore) {
-     const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-cases?p=${page}`;
-     const res = await bsRequest('GET', url, auth);
-     const list = res.test_cases || [];
-     all.push(...list);
-     const info = res.info || {};
-     hasMore = info.next != null;
-     page += 1;
-   }
-
-   return all;
- }
-
- /**
-  * Find test case by title.
-  */
- export function findTestCaseByTitle(testCases, title) {
-   const normalizedTitle = (title || '').trim().toLowerCase();
-   return testCases.find((tc) => {
-     const tcTitle = (tc.name || tc.title || '').trim().toLowerCase();
-     return tcTitle === normalizedTitle;
-   });
- }
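
The `findTestCaseByTitle` helper removed in the hunk above is a pure function, so it can be exercised standalone. Below it is reproduced from the diff with a small usage sketch; the sample test-case objects are illustrative, not real BrowserStack API output:

```javascript
// Copied from the removed helper: case-insensitive, whitespace-trimmed
// title match against test-case objects that may use `name` or `title`.
function findTestCaseByTitle(testCases, title) {
  const normalizedTitle = (title || '').trim().toLowerCase();
  return testCases.find((tc) => {
    const tcTitle = (tc.name || tc.title || '').trim().toLowerCase();
    return tcTitle === normalizedTitle;
  });
}

// Illustrative data only.
const cases = [
  { id: 'TC-1', name: 'Should Log In Successfully' },
  { id: 'TC-2', title: 'should reject a wrong password' },
];
console.log(findTestCaseByTitle(cases, '  should log in successfully ')?.id); // → 'TC-1'
```

Note the lookup tolerates surrounding whitespace and case differences, which is what made title-based matching between report entries and BrowserStack test cases workable.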
package/package.json CHANGED
@@ -1,13 +1,12 @@
  {
    "name": "@ash-mallick/browserstack-sync",
-   "version": "1.4.0",
-   "description": "Sync Playwright & Cypress e2e specs to CSV and BrowserStack Test Management with FREE AI-powered test step extraction using Ollama (local).",
+   "version": "1.5.0",
+   "description": "Sync Playwright & Cypress e2e specs to CSV and BrowserStack Test Management with FREE AI-powered test step extraction using Ollama (local)",
    "author": "Ashutosh Mallick",
    "type": "module",
    "main": "lib/index.js",
    "bin": {
-     "am-browserstack-sync": "bin/cli.js",
-     "am-browserstack-sync-runs": "bin/sync-runs.js"
+     "am-browserstack-sync": "bin/cli.js"
    },
    "files": [
      "bin",
@@ -15,9 +14,10 @@
    ],
    "scripts": {
      "test": "playwright test",
+     "test:e2e": "playwright test",
      "sync": "node bin/cli.js",
+     "sync:csv": "node bin/cli.js",
      "sync:csv-only": "node bin/cli.js --csv-only",
-     "sync:runs": "node bin/sync-runs.js",
      "prepublishOnly": "node bin/cli.js --csv-only",
      "publish:public": "npm publish --access public"
    },
@@ -28,11 +28,9 @@
      "e2e",
      "test-management",
      "sync",
-     "ci-cd",
-     "gitlab",
-     "github-actions",
      "ai",
      "ollama",
+     "llama",
      "test-steps",
      "free",
      "local",
@@ -48,8 +46,10 @@
      "ollama": "^0.6.3"
    },
    "devDependencies": {
-     "@playwright/test": "^1.49.0"
+     "@playwright/test": "^1.49.0",
+     "playwright": "^1.49.0"
    },
+   "peerDependenciesMeta": {},
    "engines": {
      "node": ">=18"
    }
package/bin/sync-runs.js DELETED
@@ -1,254 +0,0 @@
- #!/usr/bin/env node
- /**
-  * Simple CLI to sync test run results to BrowserStack.
-  * Add this to your existing CI pipeline after tests run.
-  *
-  * Usage:
-  *   npx am-browserstack-sync-runs
-  *   npx am-browserstack-sync-runs --report=custom-results.json
-  *   npx am-browserstack-sync-runs --run-name="Nightly Build #123"
-  *
-  * Environment Variables (required):
-  *   BROWSERSTACK_USERNAME
-  *   BROWSERSTACK_ACCESS_KEY
-  *   BROWSERSTACK_PROJECT_ID
-  *
-  * The command will:
-  *   1. Auto-detect Playwright report files (results.json, test-results.json, etc.)
-  *   2. Parse test results
-  *   3. Create a Test Run in BrowserStack
-  *   4. Sync pass/fail status for each test
-  *   5. Show a link to view results
-  */
-
- import 'dotenv/config';
- import fs from 'fs';
- import path from 'path';
- import { syncResultsFromReport } from '../lib/sync-results.js';
-
- const argv = process.argv.slice(2);
-
- // Show help
- if (argv.includes('--help') || argv.includes('-h')) {
-   console.log(`
- 📊 BrowserStack Test Run Sync
-
- Sync your Playwright/Cypress test results to BrowserStack Test Management.
- Specify your report file location - supports JSON and XML formats.
-
- USAGE:
-   npx am-browserstack-sync-runs --report=<path> [options]
-
- OPTIONS:
-   --report=<path>     Path to your test report file (required unless configured)
-   --run-name=<name>   Name for the test run (default: "CI Run - <timestamp>")
-   --format=<fmt>      Report format: playwright-json, mochawesome, cypress, junit (auto-detected)
-   --help, -h          Show this help
-
- CONFIGURE REPORT PATH (optional - so you don't need --report every time):
-   Environment variable: BROWSERSTACK_REPORT_PATH=test-results.json
-   Config file: .am-browserstack-sync.json  { "reportPath": "test-results.json" }
-   package.json: "amBrowserstackSync": { "reportPath": "test-results.json" }
-
- ENVIRONMENT VARIABLES (required):
-   BROWSERSTACK_USERNAME    Your BrowserStack username
-   BROWSERSTACK_ACCESS_KEY  Your BrowserStack access key
-   BROWSERSTACK_PROJECT_ID  Your project ID (e.g., PR-1234)
-
- SUPPORTED REPORT FORMATS:
-   - Playwright JSON:     npx playwright test --reporter=json --output-file=test-results.json
-   - Cypress Mochawesome: npx cypress run --reporter mochawesome
-   - JUnit XML:           npx playwright test --reporter=junit --output-file=junit.xml
-
- EXAMPLES:
-   # Specify report file
-   npx am-browserstack-sync-runs --report=test-results.json
-
-   # With custom run name (great for CI)
-   npx am-browserstack-sync-runs --report=test-results.json --run-name="Build #123"
-
-   # Sync JUnit XML results
-   npx am-browserstack-sync-runs --report=junit.xml
-
- CI PIPELINE EXAMPLE:
-   # GitLab CI
-   script:
-     - npm ci
-     - npx playwright test --reporter=json --output-file=test-results.json || true
-     - npx am-browserstack-sync-runs --report=test-results.json --run-name="Pipeline #\$CI_PIPELINE_ID"
- `);
-   process.exit(0);
- }
-
- // Check if a report file exists at the given path
- function checkReportExists(reportPath) {
-   if (!reportPath) return null;
-
-   const cwd = process.cwd();
-   const fullPath = path.isAbsolute(reportPath) ? reportPath : path.join(cwd, reportPath);
-
-   if (fs.existsSync(fullPath)) {
-     const stat = fs.statSync(fullPath);
-     const ageMinutes = Math.round((Date.now() - stat.mtime.getTime()) / 60000);
-     console.log(`📄 Using report: ${reportPath}`);
-     console.log(`   Modified: ${ageMinutes < 60 ? ageMinutes + ' minutes ago' : Math.round(ageMinutes / 60) + ' hours ago'}`);
-     return fullPath;
-   }
-
-   return null;
- }
-
- // Get report path from config or environment
- function getConfiguredReportPath() {
-   // Check environment variable first
-   if (process.env.BROWSERSTACK_REPORT_PATH) {
-     return process.env.BROWSERSTACK_REPORT_PATH;
-   }
-
-   // Check config file
-   const cwd = process.cwd();
-   const configPaths = [
-     path.join(cwd, '.am-browserstack-sync.json'),
-     path.join(cwd, 'browserstack-sync.json'),
-   ];
-
-   for (const configPath of configPaths) {
-     if (fs.existsSync(configPath)) {
-       try {
-         const config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
-         if (config.reportPath) {
-           return config.reportPath;
-         }
-       } catch {
-         // Ignore parse errors
-       }
-     }
-   }
-
-   // Check package.json
-   const pkgPath = path.join(cwd, 'package.json');
-   if (fs.existsSync(pkgPath)) {
-     try {
-       const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf-8'));
-       if (pkg.amBrowserstackSync?.reportPath) {
-         return pkg.amBrowserstackSync.reportPath;
-       }
-     } catch {
-       // Ignore parse errors
-     }
-   }
-
-   return null;
- }
-
- // Auto-detect format from file
- function detectFormat(reportPath) {
-   if (reportPath.endsWith('.xml')) {
-     return 'junit';
-   }
-   return 'playwright-json';
- }
-
- // Parse arguments
- const reportArg = argv.find((a) => a.startsWith('--report='));
- const formatArg = argv.find((a) => a.startsWith('--format='));
- const runNameArg = argv.find((a) => a.startsWith('--run-name='));
-
- let reportPath = reportArg ? reportArg.slice('--report='.length).trim() : null;
- let format = formatArg ? formatArg.slice('--format='.length).trim() : null;
- const runName = runNameArg
-   ? runNameArg.slice('--run-name='.length).trim()
-   : process.env.BROWSERSTACK_RUN_NAME || `CI Run - ${new Date().toISOString().slice(0, 16).replace('T', ' ')}`;
-
- // Determine report path: CLI arg > env var > config file
- if (!reportPath) {
-   reportPath = getConfiguredReportPath();
- }
-
- // Check if report exists
- if (reportPath) {
-   const resolvedPath = checkReportExists(reportPath);
-   if (!resolvedPath) {
-     console.error(`
- ❌ Report file not found: ${reportPath}
-
- Make sure your test run generates this report before syncing.
- `);
-     process.exit(1);
-   }
-   reportPath = resolvedPath;
- } else {
-   console.error(`
- ❌ No report path configured!
-
- Please specify your test report location using one of these methods:
-
- OPTION 1: Command line argument
-   npx am-browserstack-sync-runs --report=test-results.json
-
- OPTION 2: Environment variable
-   export BROWSERSTACK_REPORT_PATH=test-results.json
-   npx am-browserstack-sync-runs
-
- OPTION 3: Config file (.am-browserstack-sync.json)
-   {
-     "reportPath": "test-results.json"
-   }
-
- OPTION 4: package.json
-   {
-     "amBrowserstackSync": {
-       "reportPath": "test-results.json"
-     }
-   }
-
- GENERATING REPORTS:
-
- Playwright:
-   npx playwright test --reporter=json --output-file=test-results.json
-
- Cypress (mochawesome):
-   npx cypress run --reporter mochawesome
-
- JUnit XML (both):
-   npx playwright test --reporter=junit --output-file=junit.xml
- `);
-   process.exit(1);
- }
-
- // Auto-detect format if not specified
- if (!format) {
-   format = detectFormat(reportPath);
- }
-
- // Check required env vars
- const missing = [];
- if (!process.env.BROWSERSTACK_USERNAME && !process.env.BROWSERSTACK_API_TOKEN) {
-   missing.push('BROWSERSTACK_USERNAME');
- }
- if (!process.env.BROWSERSTACK_ACCESS_KEY && !process.env.BROWSERSTACK_API_TOKEN) {
-   missing.push('BROWSERSTACK_ACCESS_KEY');
- }
- if (!process.env.BROWSERSTACK_PROJECT_ID) {
-   missing.push('BROWSERSTACK_PROJECT_ID');
- }
-
- if (missing.length > 0) {
-   console.error(`
- ❌ Missing required environment variables:
-    ${missing.join('\n   ')}
-
- Set these in your CI/CD settings or .env file.
- `);
-   process.exit(1);
- }
-
- // Run sync
- syncResultsFromReport({
-   reportPath,
-   format,
-   runName,
- }).catch((err) => {
-   console.error('\n❌ Error:', err.message);
-   process.exit(1);
- });
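
The deleted CLI above resolved the report path with a fixed precedence: the `--report` flag, then `BROWSERSTACK_REPORT_PATH`, then a configured `reportPath`. A condensed, hypothetical sketch of that precedence as a pure function — `resolveReportPath` is an invented name, and the real script also read config files from disk:

```javascript
// Hypothetical condensation of the deleted script's precedence:
// CLI flag > environment variable > config object > null.
function resolveReportPath(argv, env, config) {
  const flag = argv.find((a) => a.startsWith('--report='));
  if (flag) return flag.slice('--report='.length).trim();
  if (env.BROWSERSTACK_REPORT_PATH) return env.BROWSERSTACK_REPORT_PATH;
  return config?.reportPath ?? null;
}

console.log(
  resolveReportPath(
    ['--report=a.json'],
    { BROWSERSTACK_REPORT_PATH: 'b.json' },
    { reportPath: 'c.json' },
  ),
); // → 'a.json'
```

Keeping the resolution pure (inputs passed in, no `process.env` or `fs` access inside) makes precedence rules like this trivially testable.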
@@ -1,473 +0,0 @@
1
- /**
2
- * Sync test results from CI pipeline to BrowserStack Test Management.
3
- *
4
- * This module reads test results from Playwright JSON report or JUnit XML
5
- * and syncs them to BrowserStack, creating a Test Run with pass/fail status.
6
- *
7
- * Usage:
8
- * npx am-browserstack-sync --sync-results --report=playwright-report.json
9
- * npx am-browserstack-sync --sync-results --report=junit.xml --format=junit
10
- */
11
-
12
- import fs from 'fs';
13
- import path from 'path';
14
- import {
15
- createTestRun,
16
- addTestCasesToRun,
17
- updateTestResult,
18
- completeTestRun,
19
- getAllTestCases,
20
- findTestCaseByTitle,
21
- } from './browserstack.js';
22
-
23
- /**
24
- * Parse Playwright JSON report format.
25
- * @param {string} reportPath - Path to the JSON report file
26
- * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
27
- */
28
- function parsePlaywrightJson(reportPath) {
29
- const report = JSON.parse(fs.readFileSync(reportPath, 'utf-8'));
30
- const results = [];
31
-
32
- // Playwright JSON report structure: { suites: [...], stats: {...} }
33
- function extractTests(suites) {
34
- for (const suite of suites || []) {
35
- for (const spec of suite.specs || []) {
36
- for (const test of spec.tests || []) {
37
- // Get the result from the test results array
38
- const result = test.results?.[0] || {};
39
- results.push({
40
- title: spec.title || test.title,
41
- fullTitle: `${suite.title} > ${spec.title}`,
42
- status: mapPlaywrightStatus(test.status || result.status),
43
- duration: result.duration || 0,
44
- error: result.error?.message || result.errors?.[0]?.message || '',
45
- file: suite.file || spec.file,
46
- });
47
- }
48
- }
49
-
50
- // Recursively process nested suites
51
- if (suite.suites) {
52
- extractTests(suite.suites);
53
- }
54
- }
55
- }
56
-
57
- extractTests(report.suites);
58
- return results;
59
- }
60
-
61
- /**
62
- * Parse JUnit XML report format.
63
- * @param {string} reportPath - Path to the JUnit XML file
64
- * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
65
- */
66
- function parseJunitXml(reportPath) {
67
- const xml = fs.readFileSync(reportPath, 'utf-8');
68
- const results = [];
69
-
70
- // Simple regex-based XML parsing for JUnit format
71
- // <testcase name="..." classname="..." time="...">
72
- // <failure message="...">...</failure>
73
- // <skipped/>
74
- // </testcase>
75
-
76
- const testcaseRegex = /<testcase\b[^>]*?\bname="([^"]+)"[^>]*?\btime="([^"]+)"[^>]*?(?:\/>|>([\s\S]*?)<\/testcase>)/gi; // \b keeps classname="…" from matching as name="…"; also accepts self-closing <testcase … />
77
- const failureRegex = /<failure[^>]*message="([^"]*)"[^>]*>/i;
78
- const skippedRegex = /<skipped\s*\/?>/i;
79
- const errorRegex = /<error[^>]*message="([^"]*)"[^>]*>/i;
80
-
81
- let match;
82
- while ((match = testcaseRegex.exec(xml)) !== null) {
83
- const [, name, time, content = ''] = match; // content is absent for self-closing testcases
84
-
85
- let status = 'passed';
86
- let error = '';
87
-
88
- if (skippedRegex.test(content)) {
89
- status = 'skipped';
90
- } else if (failureRegex.test(content)) {
91
- status = 'failed';
92
- const failMatch = content.match(failureRegex);
93
- error = failMatch?.[1] || 'Test failed';
94
- } else if (errorRegex.test(content)) {
95
- status = 'failed';
96
- const errMatch = content.match(errorRegex);
97
- error = errMatch?.[1] || 'Test error';
98
- }
99
-
100
- results.push({
101
- title: name,
102
- status,
103
- duration: Math.round(parseFloat(time) * 1000), // Convert seconds to ms
104
- error,
105
- });
106
- }
107
-
108
- return results;
109
- }
110
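A self-contained sketch of the same regex approach on a tiny JUnit fragment (the XML sample and variable names are illustrative only):

```javascript
// Illustrative JUnit fragment: one passing and one failing testcase.
const xml = `
<testsuite>
  <testcase name="login works" time="1.5"></testcase>
  <testcase name="logout works" time="0.4">
    <failure message="button not found">stack trace here</failure>
  </testcase>
</testsuite>`;

const tc = /<testcase\s+[^>]*name="([^"]+)"[^>]*time="([^"]+)"[^>]*>([\s\S]*?)<\/testcase>/gi;
const fail = /<failure[^>]*message="([^"]*)"[^>]*>/i;

const cases = [];
let m;
while ((m = tc.exec(xml)) !== null) {
  const [, name, time, content] = m;
  const f = content.match(fail);
  cases.push({
    title: name,
    status: f ? 'failed' : 'passed',
    duration: Math.round(parseFloat(time) * 1000), // seconds → ms
    error: f?.[1] || '',
  });
}
console.log(cases);
// → two entries: 'login works' passed (1500 ms), 'logout works' failed
```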
-
111
- /**
112
- * Parse Mochawesome JSON report format (commonly used with Cypress).
113
- * @param {string} reportPath - Path to the mochawesome JSON file
114
- * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
115
- */
116
- function parseMochawesome(reportPath) {
117
- const report = JSON.parse(fs.readFileSync(reportPath, 'utf-8'));
118
- const results = [];
119
-
120
- // Mochawesome structure: { results: [{ suites: [{ tests: [...] }] }] }
121
- function extractTests(suites) {
122
- for (const suite of suites || []) {
123
- for (const test of suite.tests || []) {
124
- results.push({
125
- title: test.title,
126
- fullTitle: test.fullTitle || `${suite.title} > ${test.title}`,
127
- status: mapMochawesomeStatus(test.state || test.pass),
128
- duration: test.duration || 0,
129
- error: test.err?.message || test.err?.estack || '',
130
- file: suite.file,
131
- });
132
- }
133
-
134
- // Recursively process nested suites
135
- if (suite.suites) {
136
- extractTests(suite.suites);
137
- }
138
- }
139
- }
140
-
141
- // Handle both mochawesome and mochawesome-merge formats
142
- if (report.results) {
143
- for (const result of report.results) {
144
- extractTests(result.suites);
145
- }
146
- } else if (report.suites) {
147
- extractTests(report.suites);
148
- }
149
-
150
- return results;
151
- }
152
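The nested-suite traversal can be sketched on an assumed minimal mochawesome payload (`{ results: [{ suites: [{ tests: [...] }] }] }`); the sample data is made up for illustration:

```javascript
// Assumed mochawesome shape with one suite of two tests.
const report = {
  results: [{
    suites: [{
      title: 'checkout',
      tests: [
        { title: 'adds item to cart', state: 'passed', duration: 300, err: {} },
        { title: 'rejects empty cart', state: 'failed', duration: 120,
          err: { message: 'expected 400, got 200' } },
      ],
      suites: [],
    }],
  }],
};

const out = [];
for (const r of report.results) {
  (function walk(suites) {
    for (const s of suites || []) {
      for (const t of s.tests || []) {
        out.push({ title: t.title, status: t.state, error: t.err?.message || '' });
      }
      if (s.suites) walk(s.suites); // mochawesome nests suites arbitrarily deep
    }
  })(r.suites);
}
console.log(out.map(t => `${t.status}: ${t.title}`));
// → [ 'passed: adds item to cart', 'failed: rejects empty cart' ]
```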
-
153
- /**
154
- * Parse Cypress JSON report (spec-based results).
155
- * @param {string} reportPath - Path to the Cypress JSON report
156
- * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
157
- */
158
- function parseCypressJson(reportPath) {
159
- const report = JSON.parse(fs.readFileSync(reportPath, 'utf-8'));
160
- const results = [];
161
-
162
- // Cypress native JSON structure
163
- // { runs: [{ tests: [...], spec: {...} }] }
164
- if (report.runs) {
165
- for (const run of report.runs) {
166
- for (const test of run.tests || []) {
167
- const attempts = test.attempts || [];
168
- const lastAttempt = attempts[attempts.length - 1] || {};
169
-
170
- results.push({
171
- title: Array.isArray(test.title) ? test.title.join(' > ') : test.title, // title may be a string or an array of describe/it segments
172
- status: mapCypressStatus(test.state || lastAttempt.state),
173
- duration: test.duration || lastAttempt.duration || 0,
174
- error: lastAttempt.error?.message || test.displayError || '',
175
- file: run.spec?.relative || run.spec?.name,
176
- });
177
- }
178
- }
179
- }
180
-
181
- // Cypress Dashboard JSON structure
182
- // { tests: [...] }
183
- if (report.tests && !report.runs) {
184
- for (const test of report.tests) {
185
- results.push({
186
- title: test.title || test.name,
187
- status: mapCypressStatus(test.state || test.status),
188
- duration: test.duration || 0,
189
- error: test.error?.message || '',
190
- });
191
- }
192
- }
193
-
194
- return results;
195
- }
196
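The `runs`-based branch can be sketched on an assumed `cypress run` JSON payload; titles arrive as arrays of describe/it segments and the last attempt carries the final state (sample data is illustrative):

```javascript
// Assumed Cypress run JSON: a retried, ultimately failing test.
const run = {
  runs: [{
    spec: { relative: 'cypress/e2e/login.cy.js' },
    tests: [{
      title: ['login', 'shows an error on bad password'],
      state: 'failed',
      attempts: [
        { state: 'failed', duration: 900, error: { message: 'AssertionError' } },
        { state: 'failed', duration: 850, error: { message: 'AssertionError' } },
      ],
    }],
  }],
};

const flat = [];
for (const r of run.runs) {
  for (const t of r.tests || []) {
    const attempts = t.attempts || [];
    const last = attempts[attempts.length - 1] || {}; // final retry wins
    flat.push({
      title: Array.isArray(t.title) ? t.title.join(' > ') : t.title,
      status: t.state || last.state,
      duration: last.duration || 0,
      file: r.spec?.relative,
    });
  }
}
console.log(flat[0].title); // 'login > shows an error on bad password'
```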
-
197
- /**
198
- * Map Mochawesome status to BrowserStack status.
199
- */
200
- function mapMochawesomeStatus(status) {
201
- if (status === true || status === 'passed') return 'passed';
202
- if (status === false || status === 'failed') return 'failed';
203
- if (status === 'pending' || status === 'skipped') return 'skipped';
204
- return status || 'passed';
205
- }
206
-
207
- /**
208
- * Map Cypress status to BrowserStack status.
209
- */
210
- function mapCypressStatus(status) {
211
- const map = {
212
- passed: 'passed',
213
- failed: 'failed',
214
- pending: 'skipped',
215
- skipped: 'skipped',
216
- };
217
- return map[status] || status;
218
- }
219
-
220
- /**
221
- * Parse a Playwright HTML report directory by locating its JSON results file.
222
- */
223
- function parsePlaywrightHtmlReport(reportDir) {
224
- // Try to find the JSON data in the HTML report directory
225
- const possiblePaths = [
226
- path.join(reportDir, 'report.json'),
227
- path.join(reportDir, 'data', 'report.json'),
228
- path.join(reportDir, 'results.json'),
229
- ];
230
-
231
- for (const p of possiblePaths) {
232
- if (fs.existsSync(p)) {
233
- return parsePlaywrightJson(p);
234
- }
235
- }
236
-
237
- throw new Error(`Could not find JSON data in Playwright HTML report directory: ${reportDir}`);
238
- }
239
-
240
- /**
241
- * Map Playwright status to BrowserStack status.
242
- */
243
- function mapPlaywrightStatus(status) {
244
- const map = {
245
- passed: 'passed',
246
- failed: 'failed',
247
- timedOut: 'failed',
248
- skipped: 'skipped',
249
- interrupted: 'blocked',
250
- expected: 'passed',
251
- unexpected: 'failed',
252
- flaky: 'retest',
253
- };
254
- return map[status] || status;
255
- }
256
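The mapping above, exercised on the non-obvious Playwright statuses (a standalone copy of the table; `toBs` is an illustrative name):

```javascript
// Copy of the Playwright → Test Management status table above.
const map = {
  passed: 'passed', failed: 'failed', timedOut: 'failed', skipped: 'skipped',
  interrupted: 'blocked', expected: 'passed', unexpected: 'failed', flaky: 'retest',
};
const toBs = (s) => map[s] || s; // unknown statuses pass through unchanged

console.log(['timedOut', 'flaky', 'interrupted'].map(toBs));
// → [ 'failed', 'retest', 'blocked' ]
```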
-
257
- /**
258
- * Load authentication from environment.
259
- */
260
- function loadAuth() {
261
- const username = process.env.BROWSERSTACK_USERNAME;
262
- const accessKey = process.env.BROWSERSTACK_ACCESS_KEY;
263
- const apiToken = process.env.BROWSERSTACK_API_TOKEN;
264
-
265
- if (apiToken) {
266
- return { apiToken };
267
- }
268
-
269
- if (username && accessKey) {
270
- return { username, accessKey };
271
- }
272
-
273
- return null;
274
- }
275
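The credential precedence above (an API token, when present, wins over a username/access-key pair) can be sketched as a pure function over an env object; `resolveAuth` is a hypothetical name, not exported by this module:

```javascript
// Same precedence as loadAuth, but over an explicit env object.
function resolveAuth(env) {
  if (env.BROWSERSTACK_API_TOKEN) {
    return { apiToken: env.BROWSERSTACK_API_TOKEN };
  }
  if (env.BROWSERSTACK_USERNAME && env.BROWSERSTACK_ACCESS_KEY) {
    return { username: env.BROWSERSTACK_USERNAME, accessKey: env.BROWSERSTACK_ACCESS_KEY };
  }
  return null; // no usable credentials
}

console.log(resolveAuth({ BROWSERSTACK_API_TOKEN: 't0k3n', BROWSERSTACK_USERNAME: 'u' }));
// → { apiToken: 't0k3n' }  (token takes precedence)
```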
-
276
- /**
277
- * Sync test results from a report file to BrowserStack.
278
- *
279
- * @param {Object} options
280
- * @param {string} options.reportPath - Path to the report file or directory
281
- * @param {string} options.format - Report format: 'playwright-json', 'junit', 'playwright-html'
282
- * @param {string} options.projectId - BrowserStack project ID
283
- * @param {string} options.runName - Name for the test run
284
- * @param {string} options.runDescription - Description for the test run
285
- */
286
- export async function syncResultsFromReport(options = {}) {
287
- const {
288
- reportPath,
289
- format = 'playwright-json',
290
- projectId = process.env.BROWSERSTACK_PROJECT_ID,
291
- runName = process.env.BROWSERSTACK_RUN_NAME || `CI Run - ${new Date().toISOString().slice(0, 16).replace('T', ' ')}`,
292
- runDescription = process.env.BROWSERSTACK_RUN_DESCRIPTION || 'Test results synced from CI pipeline',
293
- } = options;
294
-
295
- // Validate inputs
296
- if (!reportPath) {
297
- throw new Error('Report path is required. Use --report=<path>');
298
- }
299
-
300
- if (!projectId) {
301
- throw new Error('BROWSERSTACK_PROJECT_ID is required');
302
- }
303
-
304
- const auth = loadAuth();
305
- if (!auth) {
306
- throw new Error('BrowserStack credentials not found. Set BROWSERSTACK_USERNAME/ACCESS_KEY or API_TOKEN');
307
- }
308
-
309
- // Check if report exists
310
- if (!fs.existsSync(reportPath)) {
311
- throw new Error(`Report file not found: ${reportPath}`);
312
- }
313
-
314
- console.log('\n📊 BrowserStack Results Sync');
315
- console.log(' Report:', reportPath);
316
- console.log(' Format:', format);
317
- console.log(' Project:', projectId);
318
- console.log(' Run Name:', runName);
319
-
320
- // Parse the report based on format
321
- console.log('\n Parsing test results...');
322
- let testResults;
323
- let detectedFormat = format;
324
-
325
- const stat = fs.statSync(reportPath);
326
- if (stat.isDirectory()) {
327
- // Assume it's a Playwright HTML report directory
328
- testResults = parsePlaywrightHtmlReport(reportPath);
329
- detectedFormat = 'playwright-html';
330
- } else if (format === 'junit' || reportPath.endsWith('.xml')) {
331
- testResults = parseJunitXml(reportPath);
332
- detectedFormat = 'junit-xml';
333
- } else {
334
- // Try to auto-detect JSON format by reading the file
335
- const content = fs.readFileSync(reportPath, 'utf-8');
336
- const json = JSON.parse(content);
337
-
338
- if (format === 'mochawesome' || json.stats?.testsRegistered !== undefined || (json.results && json.results[0]?.suites)) {
339
- // Mochawesome format (Cypress)
340
- testResults = parseMochawesome(reportPath);
341
- detectedFormat = 'mochawesome';
342
- } else if (format === 'cypress' || json.runs) {
343
- // Cypress native JSON format
344
- testResults = parseCypressJson(reportPath);
345
- detectedFormat = 'cypress-json';
346
- } else if (json.suites || json.config?.projects) {
347
- // Playwright JSON format
348
- testResults = parsePlaywrightJson(reportPath);
349
- detectedFormat = 'playwright-json';
350
- } else {
351
- // Fallback: try Playwright first, then Mochawesome
352
- try {
353
- testResults = parsePlaywrightJson(reportPath);
354
- detectedFormat = 'playwright-json';
355
- } catch {
356
- try {
357
- testResults = parseMochawesome(reportPath);
358
- detectedFormat = 'mochawesome';
359
- } catch {
360
- testResults = parseCypressJson(reportPath);
361
- detectedFormat = 'cypress-json';
362
- }
363
- }
364
- }
365
- }
366
-
367
- console.log(` Detected format: ${detectedFormat}`);
368
-
369
- console.log(` Found ${testResults.length} test result(s)`);
370
-
371
- if (testResults.length === 0) {
372
- console.log('\n ⚠️ No test results found in report');
373
- return;
374
- }
375
-
376
- // Fetch all test cases from BrowserStack
377
- console.log('\n Fetching test cases from BrowserStack...');
378
- const allBsTestCases = await getAllTestCases(projectId, auth);
379
- console.log(` Found ${allBsTestCases.length} test case(s) in project`);
380
-
381
- // Map results to BrowserStack test cases
382
- const mappedResults = [];
383
- const unmappedResults = [];
384
-
385
- for (const result of testResults) {
386
- const bsTestCase = findTestCaseByTitle(allBsTestCases, result.title);
387
- if (bsTestCase) {
388
- mappedResults.push({ ...result, bsTestCase });
389
- } else {
390
- unmappedResults.push(result);
391
- }
392
- }
393
-
394
- console.log(` Mapped: ${mappedResults.length} test(s)`);
395
- if (unmappedResults.length > 0) {
396
- console.log(` Unmapped: ${unmappedResults.length} test(s) (not found in BrowserStack)`);
397
- }
398
-
399
- if (mappedResults.length === 0) {
400
- console.log('\n ⚠️ No tests matched. Make sure tests are synced first:');
401
- console.log(' npx am-browserstack-sync --all');
402
- return;
403
- }
404
-
405
- // Create test run
406
- console.log(`\n Creating test run: "${runName}"...`);
407
- const run = await createTestRun(projectId, runName, runDescription, auth);
408
- const runId = run.runId;
409
- const runIdentifier = run.identifier;
410
- console.log(` ✓ Test run created: ${runIdentifier}`);
411
-
412
- // Add test cases to run
413
- const testCaseIds = mappedResults.map(r => r.bsTestCase.identifier);
414
- console.log(` Adding ${testCaseIds.length} test case(s) to run...`);
415
- await addTestCasesToRun(projectId, runId, testCaseIds, auth);
416
-
417
- // Update results
418
- console.log('\n Syncing results:');
419
- const stats = { passed: 0, failed: 0, skipped: 0 };
420
-
421
- for (const result of mappedResults) {
422
- try {
423
- await updateTestResult(
424
- projectId,
425
- runId,
426
- result.bsTestCase.identifier,
427
- result.status,
428
- {
429
- duration: result.duration,
430
- comment: result.error || '',
431
- },
432
- auth
433
- );
434
-
435
- const icon = result.status === 'passed' ? '✓' : result.status === 'failed' ? '✗' : '○';
436
- console.log(` ${icon} ${result.title} → ${result.status}`);
437
-
438
- if (result.status === 'passed') stats.passed++;
439
- else if (result.status === 'failed') stats.failed++;
440
- else stats.skipped++;
441
- } catch (error) {
442
- console.error(` ✗ Failed to update ${result.title}: ${error.message}`);
443
- }
444
- }
445
-
446
- // Complete the run
447
- console.log('\n Completing test run...');
448
- await completeTestRun(projectId, runId, auth);
449
-
450
- const total = stats.passed + stats.failed + stats.skipped;
451
- console.log(`\n ✓ Test run completed: ${runIdentifier}`);
452
- console.log(` 📊 Results: ${stats.passed} passed, ${stats.failed} failed, ${stats.skipped} skipped (${total} total)`);
453
- console.log(` 🔗 View: https://test-management.browserstack.com/projects/${projectId}/runs/${runId}`);
454
-
455
- // Show unmapped tests
456
- if (unmappedResults.length > 0) {
457
- console.log('\n ⚠️ Tests not found in BrowserStack (sync them first):');
458
- unmappedResults.slice(0, 5).forEach(r => console.log(` - ${r.title}`));
459
- if (unmappedResults.length > 5) {
460
- console.log(` ... and ${unmappedResults.length - 5} more`);
461
- }
462
- }
463
-
464
- return {
465
- runId,
466
- runIdentifier,
467
- stats,
468
- mappedCount: mappedResults.length,
469
- unmappedCount: unmappedResults.length,
470
- };
471
- }
472
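The JSON-shape sniffing inside `syncResultsFromReport` can be isolated as a small function over already-parsed objects (the `sniffFormat` name and trimmed checks are a sketch, not this module's API):

```javascript
// Same detection order as above: mochawesome, then Cypress, then Playwright.
function sniffFormat(json) {
  if (json.stats?.testsRegistered !== undefined || json.results?.[0]?.suites) {
    return 'mochawesome';        // mochawesome / mochawesome-merge
  }
  if (json.runs) return 'cypress-json';          // native `cypress run` output
  if (json.suites || json.config?.projects) return 'playwright-json';
  return 'unknown';
}

console.log(sniffFormat({ suites: [] }));                    // 'playwright-json'
console.log(sniffFormat({ runs: [] }));                      // 'cypress-json'
console.log(sniffFormat({ stats: { testsRegistered: 3 } })); // 'mochawesome'
```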
-
473
- export { parsePlaywrightJson, parseJunitXml, parseMochawesome, parseCypressJson };