@ash-mallick/browserstack-sync 1.3.0 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,188 +1,194 @@
  # @ash-mallick/browserstack-sync
 
- Sync **Playwright** and **Cypress** e2e specs to **BrowserStack Test Management** with **AI-powered test step extraction**.
-
- **🦙 FREE AI analysis** using **Ollama** — runs 100% locally, no data sent to cloud!
+ Sync Playwright/Cypress tests to BrowserStack Test Management with AI-powered test step extraction.
 
  **By Ashutosh Mallick**
 
  ---
 
- ## 🚀 Quick Start (3 Steps)
+ ## Install
 
- ### Step 1: Install
  ```bash
  npm install @ash-mallick/browserstack-sync
  ```
 
- ### Step 2: Setup Environment
- Create `.env` file in your project root:
+ ---
+
+ ## Quick Setup
+
+ Create a `.env` file:
+
  ```env
  BROWSERSTACK_USERNAME=your_username
  BROWSERSTACK_ACCESS_KEY=your_access_key
  BROWSERSTACK_PROJECT_ID=PR-XXXX
  ```
 
- ### Step 3: Onboard Tests to BrowserStack
+ ---
+
+ ## Commands
+
+ ### 1. Generate CSV Files Only (No BrowserStack)
+
  ```bash
- npx am-browserstack-sync --all
+ npx am-browserstack-sync --csv-only
  ```
 
- This will:
- - ✅ Generate CSV files with test cases
- - ✅ Create folders in BrowserStack (one per spec file)
- - ✅ Create/update test cases with AI-generated steps
+ Creates CSV files with test cases in the `playwright/e2e-csv/` folder.
 
  ---
 
- ## 📊 Sync Test Run Results (CI Pipeline)
-
- Add this single line to your existing CI pipeline to sync test results:
+ ### 2. Onboard a Single Spec to BrowserStack
 
  ```bash
- npx am-browserstack-sync-runs
+ npx am-browserstack-sync --spec=login.spec
  ```
 
- ### GitLab CI Example
+ Creates a folder and test cases for the specified spec file.
 
- ```yaml
- test:
-   script:
-     # Your existing test command (add JSON reporter)
-     - npx playwright test --reporter=json --output-file=test-results.json
-     # Sync results to BrowserStack (add this line)
-     - npx am-browserstack-sync-runs --run-name="Pipeline #$CI_PIPELINE_ID"
- ```
+ ---
 
- ### GitHub Actions Example
+ ### 3. Onboard All Specs to BrowserStack
 
- ```yaml
- - run: npx playwright test --reporter=json --output-file=test-results.json
- - run: npx am-browserstack-sync-runs --run-name="Build #${{ github.run_number }}"
+ ```bash
+ npx am-browserstack-sync --all
  ```
 
- The command will:
- - 📄 Auto-detect your test report file
- - 📊 Parse test results (pass/fail/skip)
- - 🔗 Create a Test Run in BrowserStack
- - ✅ Sync each test result
- - 🔗 Provide a link to view results
+ Creates folders and test cases for all spec files in your project.
 
  ---
 
- ## Commands Overview
+ ### 4. Sync Test Run Results to BrowserStack
 
- | Command | Purpose |
- |---------|---------|
- | `npx am-browserstack-sync --all` | Onboard tests to BrowserStack (one-time setup) |
- | `npx am-browserstack-sync-runs` | Sync test run results (add to CI pipeline) |
- | `npx am-browserstack-sync --csv-only` | Generate CSVs only (no BrowserStack sync) |
+ Specify your report file location:
 
- ---
+ ```bash
+ npx am-browserstack-sync-runs --report=test-results.json
+ ```
 
- ## Detailed Usage
+ **Supported formats (auto-detected):**
+ - Playwright JSON
+ - Cypress/Mochawesome JSON
+ - JUnit XML
 
- ### Onboard Tests (am-browserstack-sync)
+ **Configure the report path** (so you don't need `--report` every time):
 
  ```bash
- # Onboard all spec files
- npx am-browserstack-sync --all
+ # Via environment variable
+ export BROWSERSTACK_REPORT_PATH=test-results.json
 
- # Onboard specific specs
- npx am-browserstack-sync --spec=login.spec,checkout.spec
+ # Or in .am-browserstack-sync.json
+ { "reportPath": "test-results.json" }
+ ```
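
> Editor's note: the lookup order implied by the options above (an explicit `--report` flag wins, then the `BROWSERSTACK_REPORT_PATH` environment variable, then a config-file value) can be sketched as a tiny helper. `resolveReportPath` is illustrative only, not part of the package's API:

```javascript
// Sketch of the report-path precedence described above.
// cliArg: value of --report; env/config: plain objects standing in for
// process.env and the parsed .am-browserstack-sync.json file.
function resolveReportPath({ cliArg, env, config }) {
  return cliArg || env.BROWSERSTACK_REPORT_PATH || config.reportPath || null;
}

const resolved = resolveReportPath({
  cliArg: null,
  env: { BROWSERSTACK_REPORT_PATH: 'test-results.json' },
  config: { reportPath: 'other.json' },
});
// resolved === 'test-results.json' (env beats config when no CLI flag is given)

const cliWins = resolveReportPath({
  cliArg: 'custom.json',
  env: { BROWSERSTACK_REPORT_PATH: 'test-results.json' },
  config: {},
});
// cliWins === 'custom.json'
```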
 
- # Generate CSVs only (no BrowserStack)
- npx am-browserstack-sync --csv-only
+ **Generate reports** (add to your test command):
 
- # Disable AI (use regex extraction)
- npx am-browserstack-sync --no-ai
+ ```bash
+ # Playwright
+ npx playwright test --reporter=json --output-file=test-results.json
 
- # Use specific Ollama model
- npx am-browserstack-sync --model=llama3.2
+ # Cypress
+ npx cypress run --reporter mochawesome
  ```
 
- ### Sync Run Results (am-browserstack-sync-runs)
+ **With a custom run name:**
 
  ```bash
- # Auto-detect report and sync
- npx am-browserstack-sync-runs
+ npx am-browserstack-sync-runs --report=test-results.json --run-name="Nightly Run"
+ ```
 
- # Specify report file
- npx am-browserstack-sync-runs --report=test-results.json
+ ---
+
+ ## GitLab CI Integration
 
- # Custom run name
- npx am-browserstack-sync-runs --run-name="Nightly Regression"
+ ### Option A: Your pipeline already generates reports
+
+ If your existing test job already generates JSON/JUnit reports, just add a sync job after it:
+
+ ```yaml
+ # Your existing test job
+ test:
+   script:
+     - npm ci
+     - CENV=prod npx playwright test --reporter=allure-playwright,json --output-file=test-results.json
+   artifacts:
+     paths:
+       - test-results.json
 
- # JUnit XML format
- npx am-browserstack-sync-runs --report=junit.xml --format=junit
+ # Add this job to sync results to BrowserStack
+ sync_browserstack:
+   needs: [test]
+   script:
+     - npm install @ash-mallick/browserstack-sync
+     - npx am-browserstack-sync-runs --report=test-results.json --run-name="Pipeline #$CI_PIPELINE_ID"
  ```
 
- ### Scripts in package.json
+ ### Option B: Scheduled sync job (runs tests + syncs)
 
- ```json
- {
-   "scripts": {
-     "test": "playwright test --reporter=json --output-file=test-results.json",
-     "sync:onboard": "am-browserstack-sync --all",
-     "sync:runs": "am-browserstack-sync-runs"
-   }
- }
+ ```yaml
+ scheduled_browserstack_sync:
+   rules:
+     - if: $CI_PIPELINE_SOURCE == "schedule"
+   script:
+     - npm ci
+     - CENV=prod npx playwright test --reporter=allure-playwright,json --output-file=test-results.json || true
+     - npm install @ash-mallick/browserstack-sync
+     - npx am-browserstack-sync-runs --report=test-results.json
  ```
 
+ ### Required CI/CD Variables
+
+ Add these in **Settings → CI/CD → Variables**:
+
+ | Key | Value |
+ |-----|-------|
+ | `BROWSERSTACK_USERNAME` | your_username |
+ | `BROWSERSTACK_ACCESS_KEY` | your_access_key |
+ | `BROWSERSTACK_PROJECT_ID` | PR-XXXX |
+
+ For scheduled jobs: **CI/CD → Schedules → New Schedule**.
+
  ---
 
- ## 🦙 AI-Powered Step Analysis (FREE with Ollama)
+ ## All Options Reference
 
- The tool uses **Ollama** to analyze your test code and generate **human-readable test steps**. Ollama runs **100% locally** on your machine — no data sent to cloud, completely free, no API key needed!
+ ### Onboarding Command (`am-browserstack-sync`)
 
- **Example transformation:**
+ | Option | Description |
+ |--------|-------------|
+ | `--csv-only` | Generate CSVs only, do not sync to BrowserStack |
+ | `--all` | Sync all spec files without prompting |
+ | `--spec=name` | Sync specific spec(s), comma-separated |
+ | `--no-ai` | Disable AI step analysis, use regex extraction |
+ | `--model=name` | Ollama model to use (default: llama3.2) |
 
- ```typescript
- // Your test code:
- test('should log in successfully', async ({ page }) => {
-   await page.goto('/login');
-   await page.getByLabel(/email/i).fill('user@example.com');
-   await page.getByLabel(/password/i).fill('validpassword');
-   await page.getByRole('button', { name: /sign in/i }).click();
-   await expect(page).toHaveURL(/\/dashboard/);
- });
- ```
+ ### Run Sync Command (`am-browserstack-sync-runs`)
 
- **Generated steps:**
+ | Option | Description |
+ |--------|-------------|
+ | `--report=path` | Path to the test report file (falls back to the configured path if not specified) |
+ | `--run-name=name` | Custom name for the test run |
+ | `--format=type` | Report format: `playwright-json`, `mochawesome`, `cypress`, or `junit` (auto-detected) |
 
- | Step | Expected Result |
- |------|-----------------|
- | Navigate to /login page | Login page loads successfully |
- | Enter 'user@example.com' in the Email field | Email is entered |
- | Enter 'validpassword' in the Password field | Password is masked and entered |
- | Click the 'Sign In' button | Form is submitted |
- | Verify URL | URL matches /dashboard |
+ ---
 
- ### Setup Ollama
+ ## AI-Powered Test Steps (Optional)
 
- 1. Download and install Ollama from [ollama.ai](https://ollama.ai)
+ The tool uses Ollama to analyze your test code and generate human-readable test steps. Ollama runs locally: no data is sent to the cloud, and it is completely free.
 
- 2. Pull a model (llama3.2 recommended):
+ ### Setup Ollama
+
+ 1. Download from [ollama.ai](https://ollama.ai)
+ 2. Pull a model:
   ```bash
   ollama pull llama3.2
   ```
-
- 3. Start Ollama (runs automatically on macOS, or run manually):
+ 3. Start Ollama:
   ```bash
   ollama serve
   ```
 
- 4. Run the sync — AI analysis is automatic when Ollama is running!
- ```bash
- npx am-browserstack-sync --csv-only
- ```
-
- ### Options
-
- - `--no-ai` — Disable AI, use regex-based extraction instead
- - `--model=llama3.2` — Use a different Ollama model
- - `OLLAMA_MODEL=llama3.2` — Set default model via env variable
- - `OLLAMA_HOST=http://localhost:11434` — Custom Ollama host
+ AI analysis runs automatically when Ollama is running. Use `--no-ai` to disable it.
 
  ### Recommended Models
 
@@ -191,17 +197,16 @@ test('should log in successfully', async ({ page }) => {
  | `llama3.2` | 2GB | General purpose, fast (default) |
  | `codellama` | 4GB | Better code understanding |
  | `llama3.2:1b` | 1GB | Fastest, lower quality |
- | `mistral` | 4GB | Good balance |
-
- **Fallback:** If Ollama is not running, the tool automatically uses regex-based step extraction, which still provides meaningful steps.
 
  ---
 
- ## Config (optional)
+ ## Configuration (Optional)
 
- Defaults: e2e dir `playwright/e2e`, CSV dir `playwright/e2e-csv`. For Cypress use e.g. `cypress/e2e`.
+ Default directories:
+ - e2e specs: `playwright/e2e`
+ - CSV output: `playwright/e2e-csv`
 
- Override via **`.am-browserstack-sync.json`** in project root:
+ Override via `.am-browserstack-sync.json`:
 
  ```json
  {
@@ -210,222 +215,27 @@ Override via **`.am-browserstack-sync.json`** in project root:
  }
  ```
 
- Or **package.json**: `"amBrowserstackSync": { "e2eDir": "...", "csvOutputDir": "..." }`
- Or env: `PLAYWRIGHT_BROWSERSTACK_E2E_DIR`, `PLAYWRIGHT_BROWSERSTACK_CSV_DIR`.
-
- ---
-
- ## BrowserStack sync
-
- Sync pushes your e2e tests into **BrowserStack Test Management** so you can track test cases, link runs, and keep specs in sync with one source of truth. Under your chosen project it creates **one folder per spec file** (e.g. `login.spec`, `checkout.spec`) and one **test case** per test, with title, description, steps, state (Active), type (Functional), automation status, and tags. Existing test cases are matched by title or TC-id and **updated**; new ones are **created**. No duplicates.
-
- **Setup:**
-
- 1. In project root, create **`.env`** (do not commit):
-
- ```env
- BROWSERSTACK_USERNAME=your_username
- BROWSERSTACK_ACCESS_KEY=your_access_key
- BROWSERSTACK_PROJECT_ID=PR-XXXX
- ```
- Or use a single token: `BROWSERSTACK_API_TOKEN=your_token`
-
- 2. Get credentials and project ID from [Test Management → API keys](https://test-management.browserstack.com/settings/api-keys). The project ID is in the project URL (e.g. `PR-1234`).
-
- 3. **Install Ollama** for AI-powered step analysis (optional but recommended).
-
- 4. Run **`npx am-browserstack-sync`** (without `--csv-only`). You'll be prompted to sync all specs or pick specific ones (unless you use `--all` or `--spec=...`). After sync, open your project in BrowserStack to see the new folders and test cases.
-
- ---
-
- ## What it does
-
- - Finds **Playwright** (`*.spec.*`, `*.test.*`) and **Cypress** (`*.cy.*`) spec files in your e2e dir.
- - Extracts test titles from `test('...')` / `it('...')`, assigns TC-001, TC-002, …
- - **Analyzes test code** with Ollama AI (local, free) or regex to generate human-readable steps and expected results.
- - Enriches with state (Active), type (Functional), automation (automated), tags (from spec + title).
- - Writes **one CSV per spec** (test_case_id, title, state, case_type, steps, expected_results, jira_issues, automation_status, tags, description, spec_file).
- - Optionally syncs to BrowserStack with description, steps, and tags.
-
- ---
-
- ## 📊 Test Run Tracking (NEW!)
-
- Run your Playwright tests and **automatically track results on BrowserStack**. Each test run is recorded with pass/fail status, duration, and links back to your test cases.
-
- ### Quick Start
-
- ```bash
- # Run all tests with BrowserStack tracking
- npx am-browserstack-sync --run
-
- # Run with a custom run name
- npx am-browserstack-sync --run --run-name="Nightly Regression"
-
- # Run specific spec files
- npx am-browserstack-sync --run --spec=login.spec.js,checkout.spec.js
-
- # Run tests matching a pattern
- npx am-browserstack-sync --run --grep="login"
- ```
-
- ### Setup
-
- 1. First, **sync your test cases** to BrowserStack (one-time or when tests change):
- ```bash
- npx am-browserstack-sync --all
- ```
-
- 2. Ensure your `.env` has the required credentials:
- ```env
- BROWSERSTACK_USERNAME=your_username
- BROWSERSTACK_ACCESS_KEY=your_access_key
- BROWSERSTACK_PROJECT_ID=PR-XXXX
- ```
-
- 3. Run tests with tracking:
- ```bash
- npx am-browserstack-sync --run
- ```
-
- ### What happens during a test run
-
- 1. **Creates a Test Run** in BrowserStack with your specified name
- 2. **Maps tests** to existing test cases by title
- 3. **Updates results in real-time** as each test passes/fails/skips
- 4. **Completes the run** with a summary and link to view results
-
- ### Example Output
-
- ```
- 🚀 Running Playwright tests with BrowserStack tracking...
-    Run Name: Nightly Regression
-
- 🔗 BrowserStack Test Management - Initializing...
-    Fetching test cases from BrowserStack...
-    Found 15 test case(s) in project PR-1234
-    Creating test run: "Nightly Regression"...
-    ✓ Test run created: TR-42 (ID: 12345)
-    Adding 15 test case(s) to the run...
-    ✓ Test cases added to run
- 🚀 Starting test execution with BrowserStack tracking
-
- ✓ [BS] should display login form → passed
- ✓ [BS] should log in successfully → passed
- ✗ [BS] should show error for invalid credentials → failed
-
- 🔗 BrowserStack Test Management - Completing run...
-
- ✓ Test run completed: TR-42
- 📊 Results: 2 passed, 1 failed, 0 skipped (3 total)
- 🔗 View in BrowserStack: https://test-management.browserstack.com/projects/PR-1234/runs/12345
- ```
-
- ### Using as a Playwright Reporter
-
- You can also use the BrowserStack reporter directly in your `playwright.config.js`:
-
- ```javascript
- // playwright.config.js
- export default {
-   reporter: [
-     ['list'],
-     ['@ash-mallick/browserstack-sync/reporter', {
-       projectId: 'PR-1234',
-       runName: 'CI Pipeline Run',
-     }],
-   ],
- };
- ```
-
- ### CI/CD Integration - GitLab
-
- **Sync test results from GitLab CI pipelines to BrowserStack!**
-
- #### Quick Setup
-
- 1. Add CI/CD variables in GitLab (Settings > CI/CD > Variables):
-    - `BROWSERSTACK_USERNAME`
-    - `BROWSERSTACK_ACCESS_KEY`
-    - `BROWSERSTACK_PROJECT_ID`
-
- 2. Run tests with JSON reporter and sync results:
-
- ```yaml
- # .gitlab-ci.yml
- test_and_sync:
-   image: mcr.microsoft.com/playwright:v1.40.0-jammy
-   script:
-     - npm ci
-     - npm install @ash-mallick/browserstack-sync
-     # Run tests with JSON reporter
-     - npx playwright test --reporter=json --output-file=test-results.json || true
-     # Sync results to BrowserStack
-     - npx am-browserstack-sync-runs --run-name="CI Pipeline #$CI_PIPELINE_ID"
-   artifacts:
-     paths:
-       - test-results.json
- ```
-
- #### Scheduled Runs (Nightly/Weekly)
-
- ```yaml
- nightly_tests:
-   rules:
-     - if: $CI_PIPELINE_SOURCE == "schedule"
-   variables:
-     BROWSERSTACK_RUN_NAME: "Nightly Regression - $CI_COMMIT_SHORT_SHA"
-   script:
-     - npx playwright test --reporter=json --output-file=test-results.json || true
-     - npx am-browserstack-sync-runs
- ```
-
- #### CLI Options
-
- ```bash
- # Auto-detect report and sync
- npx am-browserstack-sync-runs
-
- # Specify report file
- npx am-browserstack-sync-runs --report=custom-results.json
-
- # Custom run name
- npx am-browserstack-sync-runs --run-name="Nightly Build"
-
- # JUnit XML format
- npx am-browserstack-sync-runs --report=junit.xml --format=junit
- ```
-
- ### GitHub Actions Example
-
- ```yaml
- - name: Run tests and sync to BrowserStack
-   env:
-     BROWSERSTACK_USERNAME: ${{ secrets.BROWSERSTACK_USERNAME }}
-     BROWSERSTACK_ACCESS_KEY: ${{ secrets.BROWSERSTACK_ACCESS_KEY }}
-     BROWSERSTACK_PROJECT_ID: PR-1234
-   run: |
-     npx playwright test --reporter=json --output-file=test-results.json || true
-     npx am-browserstack-sync-runs --run-name="Build #${{ github.run_number }}"
- ```
+ Or via environment variables:
+ - `PLAYWRIGHT_BROWSERSTACK_E2E_DIR`
+ - `PLAYWRIGHT_BROWSERSTACK_CSV_DIR`
 
  ---
 
- ## Programmatic API
+ ## What Gets Created
 
- ```javascript
- import { runSync } from '@ash-mallick/browserstack-sync';
+ When you run `npx am-browserstack-sync --all`:
 
- await runSync({
-   cwd: '/path/to/project',
-   csvOnly: true,
-   all: true,
-   spec: ['login.spec'],
-   useAI: true,
-   model: 'llama3.2',
- });
- ```
+ 1. **CSV files** - One per spec file with test case details
+ 2. **BrowserStack folders** - One folder per spec file
+ 3. **Test cases** - Each test becomes a test case with:
+    - Title
+    - AI-generated steps (or regex-extracted)
+    - Expected results
+    - Tags
+    - Automation status
 
- ---
+ When you run `npx am-browserstack-sync-runs`:
 
- **Author:** Ashutosh Mallick
+ 1. **Test Run** - Created in BrowserStack
+ 2. **Results** - Pass/fail status for each test
+ 3. **Link** - URL to view results in BrowserStack
package/bin/sync-runs.js CHANGED
@@ -34,79 +34,108 @@ if (argv.includes('--help') || argv.includes('-h')) {
  📊 BrowserStack Test Run Sync
 
  Sync your Playwright/Cypress test results to BrowserStack Test Management.
- Add this command to your existing CI pipeline after tests complete.
+ Specify your report file location; JSON and XML formats are supported.
 
  USAGE:
-   npx am-browserstack-sync-runs [options]
+   npx am-browserstack-sync-runs --report=<path> [options]
 
  OPTIONS:
-   --report=<path>     Path to report file (auto-detects if not specified)
+   --report=<path>     Path to your test report file (required unless configured)
    --run-name=<name>   Name for the test run (default: "CI Run - <timestamp>")
-   --format=<fmt>      Report format: playwright-json, junit (default: auto)
+   --format=<fmt>      Report format: playwright-json, mochawesome, cypress, junit (auto-detected)
    --help, -h          Show this help
 
+ CONFIGURE REPORT PATH (optional, so you don't need --report every time):
+   Environment variable: BROWSERSTACK_REPORT_PATH=test-results.json
+   Config file: .am-browserstack-sync.json  { "reportPath": "test-results.json" }
+   package.json: "amBrowserstackSync": { "reportPath": "test-results.json" }
+
  ENVIRONMENT VARIABLES (required):
    BROWSERSTACK_USERNAME    Your BrowserStack username
    BROWSERSTACK_ACCESS_KEY  Your BrowserStack access key
    BROWSERSTACK_PROJECT_ID  Your project ID (e.g., PR-1234)
 
- EXAMPLES:
-   # Auto-detect report and sync
-   npx am-browserstack-sync-runs
+ SUPPORTED REPORT FORMATS:
+   - Playwright JSON: npx playwright test --reporter=json --output-file=test-results.json
+   - Cypress Mochawesome: npx cypress run --reporter mochawesome
+   - JUnit XML: npx playwright test --reporter=junit --output-file=junit.xml
 
+ EXAMPLES:
    # Specify report file
    npx am-browserstack-sync-runs --report=test-results.json
 
-   # Custom run name (great for CI)
-   npx am-browserstack-sync-runs --run-name="Build #\${CI_PIPELINE_ID}"
+   # With a custom run name (great for CI)
+   npx am-browserstack-sync-runs --report=test-results.json --run-name="Build #123"
 
    # Sync JUnit XML results
-   npx am-browserstack-sync-runs --report=junit.xml --format=junit
+   npx am-browserstack-sync-runs --report=junit.xml
 
- ADD TO YOUR CI PIPELINE:
+ CI PIPELINE EXAMPLE:
    # GitLab CI
    script:
-     - npm test  # Your existing test command
-     - npx am-browserstack-sync-runs --run-name="Pipeline #\$CI_PIPELINE_ID"
-
-   # GitHub Actions
-   - run: npm test
-   - run: npx am-browserstack-sync-runs --run-name="Build #\${{ github.run_number }}"
-
- PREREQUISITES:
-   1. First, onboard your tests to BrowserStack:
-      npx am-browserstack-sync --all
-
-   2. Then add run sync to your pipeline:
-      npx am-browserstack-sync-runs
+     - npm ci
+     - npx playwright test --reporter=json --output-file=test-results.json || true
+     - npx am-browserstack-sync-runs --report=test-results.json --run-name="Pipeline #\$CI_PIPELINE_ID"
  `);
  process.exit(0);
  }
 
- // Auto-detect report file
- function findReportFile() {
+ // Check if a report file exists at the given path
+ function checkReportExists(reportPath) {
+   if (!reportPath) return null;
+
+   const cwd = process.cwd();
+   const fullPath = path.isAbsolute(reportPath) ? reportPath : path.join(cwd, reportPath);
+
+   if (fs.existsSync(fullPath)) {
+     const stat = fs.statSync(fullPath);
+     const ageMinutes = Math.round((Date.now() - stat.mtime.getTime()) / 60000);
+     console.log(`📄 Using report: ${reportPath}`);
+     console.log(`   Modified: ${ageMinutes < 60 ? ageMinutes + ' minutes ago' : Math.round(ageMinutes / 60) + ' hours ago'}`);
+     return fullPath;
+   }
+
+   return null;
+ }
+
+ // Get report path from config or environment
+ function getConfiguredReportPath() {
+   // Check environment variable first
+   if (process.env.BROWSERSTACK_REPORT_PATH) {
+     return process.env.BROWSERSTACK_REPORT_PATH;
+   }
+
+   // Check config file
  const cwd = process.cwd();
- const possibleFiles = [
-   'test-results.json',
-   'results.json',
-   'playwright-report/results.json',
-   'playwright-report.json',
-   'report.json',
-   'junit.xml',
-   'test-results.xml',
+ const configPaths = [
+   path.join(cwd, '.am-browserstack-sync.json'),
+   path.join(cwd, 'browserstack-sync.json'),
  ];
 
- for (const file of possibleFiles) {
-   const fullPath = path.join(cwd, file);
-   if (fs.existsSync(fullPath)) {
-     return fullPath;
+ for (const configPath of configPaths) {
+   if (fs.existsSync(configPath)) {
+     try {
+       const config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
+       if (config.reportPath) {
+         return config.reportPath;
+       }
+     } catch {
+       // Ignore parse errors
+     }
    }
  }
 
- // Check for playwright-report directory
- const htmlReportDir = path.join(cwd, 'playwright-report');
- if (fs.existsSync(htmlReportDir) && fs.statSync(htmlReportDir).isDirectory()) {
-   return htmlReportDir;
+ // Check package.json
+ const pkgPath = path.join(cwd, 'package.json');
+ if (fs.existsSync(pkgPath)) {
+   try {
+     const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf-8'));
+     if (pkg.amBrowserstackSync?.reportPath) {
+       return pkg.amBrowserstackSync.reportPath;
+     }
+   } catch {
+     // Ignore parse errors
+   }
  }
 
  return null;
@@ -131,24 +160,60 @@ const runName = runNameArg
    ? runNameArg.slice('--run-name='.length).trim()
    : process.env.BROWSERSTACK_RUN_NAME || `CI Run - ${new Date().toISOString().slice(0, 16).replace('T', ' ')}`;
 
- // Auto-detect report if not specified
+ // Determine report path: CLI arg > env var > config file
  if (!reportPath) {
-   reportPath = findReportFile();
-   if (!reportPath) {
-     console.error(`
- ❌ No test report found!
-
- Make sure your test framework generates a report file:
+   reportPath = getConfiguredReportPath();
+ }
 
-   # Playwright - add to your test command:
-   npx playwright test --reporter=json --output-file=test-results.json
+ // Check if report exists
+ if (reportPath) {
+   const resolvedPath = checkReportExists(reportPath);
+   if (!resolvedPath) {
+     console.error(`
+ ❌ Report file not found: ${reportPath}
 
-   # Or specify the report path:
-   npx am-browserstack-sync-runs --report=path/to/results.json
+ Make sure your test run generates this report before syncing.
  `);
    process.exit(1);
  }
-   console.log(`📄 Auto-detected report: ${reportPath}`);
+   reportPath = resolvedPath;
+ } else {
+   console.error(`
+ ❌ No report path configured!
+
+ Please specify your test report location using one of these methods:
+
+ OPTION 1: Command line argument
+   npx am-browserstack-sync-runs --report=test-results.json
+
+ OPTION 2: Environment variable
+   export BROWSERSTACK_REPORT_PATH=test-results.json
+   npx am-browserstack-sync-runs
+
+ OPTION 3: Config file (.am-browserstack-sync.json)
+   {
+     "reportPath": "test-results.json"
+   }
+
+ OPTION 4: package.json
+   {
+     "amBrowserstackSync": {
+       "reportPath": "test-results.json"
+     }
+   }
+
+ GENERATING REPORTS:
+
+ Playwright:
+   npx playwright test --reporter=json --output-file=test-results.json
+
+ Cypress (mochawesome):
+   npx cypress run --reporter mochawesome
+
+ JUnit XML (both):
+   npx playwright test --reporter=junit --output-file=junit.xml
+ `);
+   process.exit(1);
  }
 
  // Auto-detect format if not specified
@@ -200,3 +200,109 @@ export async function syncToBrowserStack(specsMap, projectId, auth) {
    console.log(`  Updated: ${totalUpdated} test case(s)`);
    console.log(`  Skipped: ${totalSkipped} test case(s) (no changes)`);
  }
+
+ // ============================================================================
+ // TEST RUN API - For syncing test execution results
+ // ============================================================================
+
+ /**
+  * Create a new Test Run in BrowserStack.
+  */
+ export async function createTestRun(projectId, runName, description, auth) {
+   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs`;
+   const body = {
+     test_run: {
+       name: runName,
+       description: description || `Automated test run: ${runName}`,
+     },
+   };
+   const res = await bsRequest('POST', url, auth, body);
+   return {
+     runId: res.test_run?.id || res.id,
+     identifier: res.test_run?.identifier || res.identifier,
+   };
+ }
+
+ /**
+  * Add test cases to a Test Run.
+  */
+ export async function addTestCasesToRun(projectId, runId, testCaseIds, auth) {
+   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}/test-cases`;
+   const body = {
+     test_case_identifiers: testCaseIds,
+   };
+   return bsRequest('POST', url, auth, body);
+ }
+
+ /**
+  * Update a test result in a Test Run.
+  */
+ export async function updateTestResult(projectId, runId, testCaseId, status, options = {}, auth) {
+   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}/test-results`;
+
+   const statusMap = {
+     passed: 'passed',
+     failed: 'failed',
+     skipped: 'skipped',
+     timedOut: 'failed',
+   };
+
+   const body = {
+     test_result: {
+       test_case_identifier: testCaseId,
+       status: statusMap[status] || status,
+       comment: options.comment || '',
+     },
+   };
+
+   if (options.duration) {
+     body.test_result.duration = Math.round(options.duration / 1000);
+   }
+
+   return bsRequest('POST', url, auth, body);
+ }
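
> Editor's note: the result normalization inside `updateTestResult` above can be restated standalone. `normalize` is an illustrative helper (not exported by the package): Playwright's `timedOut` status collapses to `failed`, and durations arrive in milliseconds but are sent to the API as whole seconds.

```javascript
// Minimal re-statement of the status/duration normalization shown above.
const statusMap = { passed: 'passed', failed: 'failed', skipped: 'skipped', timedOut: 'failed' };

const normalize = (status, durationMs) => ({
  status: statusMap[status] || status,      // unknown statuses pass through
  duration: Math.round(durationMs / 1000),  // ms -> whole seconds
});

const sample = normalize('timedOut', 31500);
// sample → { status: 'failed', duration: 32 }
```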
+
+ /**
+  * Complete a Test Run.
+  */
+ export async function completeTestRun(projectId, runId, auth) {
+   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}`;
+   const body = {
+     test_run: {
+       state: 'done',
+     },
+   };
+   return bsRequest('PATCH', url, auth, body);
+ }
+
+ /**
+  * Get all test cases in a project.
+  */
+ export async function getAllTestCases(projectId, auth) {
+   const all = [];
+   let page = 1;
+   let hasMore = true;
+
+   while (hasMore) {
+     const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-cases?p=${page}`;
+     const res = await bsRequest('GET', url, auth);
+     const list = res.test_cases || [];
+     all.push(...list);
+     const info = res.info || {};
+     hasMore = info.next != null;
+     page += 1;
+   }
+
+   return all;
+ }
+
+ /**
+  * Find a test case by title.
+  */
+ export function findTestCaseByTitle(testCases, title) {
+   const normalizedTitle = (title || '').trim().toLowerCase();
+   return testCases.find((tc) => {
+     const tcTitle = (tc.name || tc.title || '').trim().toLowerCase();
+     return tcTitle === normalizedTitle;
+   });
+ }
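
> Editor's note: the matching rule used by `findTestCaseByTitle` above is worth spelling out: comparison is whitespace-trimmed and case-insensitive, and a test case's title may live in either its `name` or `title` field. A standalone sketch (`matchByTitle` is illustrative, not the package export):

```javascript
// Standalone version of the title matching logic shown above.
function matchByTitle(testCases, title) {
  const wanted = (title || '').trim().toLowerCase();
  return testCases.find(
    (tc) => (tc.name || tc.title || '').trim().toLowerCase() === wanted,
  );
}

const cases = [
  { id: 'TC-1', name: 'Should Log In Successfully' },
  { id: 'TC-2', title: 'should show error for invalid credentials' },
];

const hit = matchByTitle(cases, '  should log in successfully ');
// hit.id === 'TC-1' (case and surrounding whitespace are ignored)
```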
@@ -108,6 +108,115 @@ function parseJunitXml(reportPath) {
    return results;
  }
 
+ /**
+  * Parse Mochawesome JSON report format (commonly used with Cypress).
+  * @param {string} reportPath - Path to the mochawesome JSON file
+  * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
+  */
+ function parseMochawesome(reportPath) {
+   const report = JSON.parse(fs.readFileSync(reportPath, 'utf-8'));
+   const results = [];
+
+   // Mochawesome structure: { results: [{ suites: [{ tests: [...] }] }] }
+   function extractTests(suites) {
+     for (const suite of suites || []) {
+       for (const test of suite.tests || []) {
+         results.push({
+           title: test.title,
+           fullTitle: test.fullTitle || `${suite.title} > ${test.title}`,
+           status: mapMochawesomeStatus(test.state || test.pass),
+           duration: test.duration || 0,
+           error: test.err?.message || test.err?.estack || '',
+           file: suite.file,
+         });
+       }
+
+       // Recursively process nested suites
+       if (suite.suites) {
+         extractTests(suite.suites);
+       }
+     }
+   }
+
+   // Handle both mochawesome and mochawesome-merge formats
+   if (report.results) {
+     for (const result of report.results) {
+       extractTests(result.suites);
+     }
+   } else if (report.suites) {
+     extractTests(report.suites);
+   }
+
+   return results;
+ }
152
+
153
+ /**
154
+ * Parse Cypress JSON report (spec-based results).
155
+ * @param {string} reportPath - Path to the Cypress JSON report
156
+ * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
157
+ */
158
+ function parseCypressJson(reportPath) {
159
+ const report = JSON.parse(fs.readFileSync(reportPath, 'utf-8'));
160
+ const results = [];
161
+
162
+ // Cypress native JSON structure
163
+ // { runs: [{ tests: [...], spec: {...} }] }
164
+ if (report.runs) {
165
+ for (const run of report.runs) {
166
+ for (const test of run.tests || []) {
167
+ const attempts = test.attempts || [];
168
+ const lastAttempt = attempts[attempts.length - 1] || {};
169
+
170
+ results.push({
171
+ title: test.title?.join(' > ') || test.title,
172
+ status: mapCypressStatus(test.state || lastAttempt.state),
173
+ duration: test.duration || lastAttempt.duration || 0,
174
+ error: lastAttempt.error?.message || test.displayError || '',
175
+ file: run.spec?.relative || run.spec?.name,
176
+ });
177
+ }
178
+ }
179
+ }
180
+
181
+ // Cypress Dashboard JSON structure
182
+ // { tests: [...] }
183
+ if (report.tests && !report.runs) {
184
+ for (const test of report.tests) {
185
+ results.push({
186
+ title: test.title || test.name,
187
+ status: mapCypressStatus(test.state || test.status),
188
+ duration: test.duration || 0,
189
+ error: test.error?.message || '',
190
+ });
191
+ }
192
+ }
193
+
194
+ return results;
195
+ }
196
+
197
+ /**
198
+ * Map Mochawesome status to BrowserStack status.
199
+ */
200
+ function mapMochawesomeStatus(status) {
201
+ if (status === true || status === 'passed') return 'passed';
202
+ if (status === false || status === 'failed') return 'failed';
203
+ if (status === 'pending' || status === 'skipped') return 'skipped';
204
+ return status || 'passed';
205
+ }
206
+
207
+ /**
208
+ * Map Cypress status to BrowserStack status.
209
+ */
210
+ function mapCypressStatus(status) {
211
+ const map = {
212
+ passed: 'passed',
213
+ failed: 'failed',
214
+ pending: 'skipped',
215
+ skipped: 'skipped',
216
+ };
217
+ return map[status] || status;
218
+ }
219
+
111
220
  /**
112
221
  * Parse Playwright HTML report's index.html or results.json
113
222
  */
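The recursive suite walk in `parseMochawesome` can be illustrated with a hand-written report object. The shape below is a simplified sketch of Mochawesome output, and `extractTests` here is a stripped-down copy of the logic (no status mapping, no file handling):

```javascript
// Hand-written, simplified Mochawesome-style report for illustration only.
const report = {
  results: [{
    suites: [{
      title: 'Login',
      tests: [{ title: 'logs in', state: 'passed', duration: 120 }],
      suites: [{
        title: 'errors',
        tests: [{ title: 'rejects bad password', state: 'failed', err: { message: 'boom' } }],
      }],
    }],
  }],
};

const results = [];
function extractTests(suites) {
  for (const suite of suites || []) {
    for (const test of suite.tests || []) {
      results.push({
        title: test.title,
        status: test.state,
        duration: test.duration || 0,
        error: test.err?.message || '',
      });
    }
    // Recurse into nested describe blocks
    if (suite.suites) extractTests(suite.suites);
  }
}
for (const r of report.results) extractTests(r.suites);

console.log(results.map((t) => `${t.title}: ${t.status}`).join('\n'));
// logs in: passed
// rejects bad password: failed
```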
@@ -211,17 +320,52 @@ export async function syncResultsFromReport(options = {}) {
    // Parse the report based on format
    console.log('\n Parsing test results...');
    let testResults;
+   let detectedFormat = format;

    const stat = fs.statSync(reportPath);
    if (stat.isDirectory()) {
      // Assume it's a Playwright HTML report directory
      testResults = parsePlaywrightHtmlReport(reportPath);
+     detectedFormat = 'playwright-html';
    } else if (format === 'junit' || reportPath.endsWith('.xml')) {
      testResults = parseJunitXml(reportPath);
+     detectedFormat = 'junit-xml';
    } else {
-     testResults = parsePlaywrightJson(reportPath);
+     // Try to auto-detect the JSON format by reading the file
+     const content = fs.readFileSync(reportPath, 'utf-8');
+     const json = JSON.parse(content);
+
+     if (format === 'mochawesome' || json.stats?.testsRegistered !== undefined || (json.results && json.results[0]?.suites)) {
+       // Mochawesome format (Cypress)
+       testResults = parseMochawesome(reportPath);
+       detectedFormat = 'mochawesome';
+     } else if (format === 'cypress' || json.runs) {
+       // Cypress native JSON format
+       testResults = parseCypressJson(reportPath);
+       detectedFormat = 'cypress-json';
+     } else if (json.suites || json.config?.projects) {
+       // Playwright JSON format
+       testResults = parsePlaywrightJson(reportPath);
+       detectedFormat = 'playwright-json';
+     } else {
+       // Fallback: try Playwright first, then Mochawesome, then Cypress
+       try {
+         testResults = parsePlaywrightJson(reportPath);
+         detectedFormat = 'playwright-json';
+       } catch {
+         try {
+           testResults = parseMochawesome(reportPath);
+           detectedFormat = 'mochawesome';
+         } catch {
+           testResults = parseCypressJson(reportPath);
+           detectedFormat = 'cypress-json';
+         }
+       }
+     }
    }

+   console.log(` Detected format: ${detectedFormat}`);
+
    console.log(` Found ${testResults.length} test result(s)`);

    if (testResults.length === 0) {
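The auto-detection branch above keys on shape markers in the parsed JSON. A condensed sketch of those probes (`detectFormat` is an illustrative helper, not an export of the package):

```javascript
// Condensed sketch of the report-format probes used in the auto-detect branch;
// detectFormat is a hypothetical helper for illustration only.
function detectFormat(json) {
  if (json.stats?.testsRegistered !== undefined || (json.results && json.results[0]?.suites)) {
    return 'mochawesome'; // Mochawesome reports typically carry a stats block
  }
  if (json.runs) return 'cypress-json'; // Cypress native output has a runs[] array
  if (json.suites || json.config?.projects) return 'playwright-json'; // Playwright JSON reporter
  return 'unknown';
}

console.log(detectFormat({ stats: { testsRegistered: 4 } })); // mochawesome
console.log(detectFormat({ runs: [] }));                      // cypress-json
console.log(detectFormat({ config: { projects: [] } }));      // playwright-json
```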
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@ash-mallick/browserstack-sync",
-   "version": "1.3.0",
+   "version": "1.4.0",
    "description": "Sync Playwright & Cypress e2e specs to CSV and BrowserStack Test Management with FREE AI-powered test step extraction using Ollama (local).",
    "author": "Ashutosh Mallick",
    "type": "module",