@ash-mallick/browserstack-sync 1.1.3 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,51 +1,130 @@
1
1
  # @ash-mallick/browserstack-sync
2
2
 
3
- Sync **Playwright** and **Cypress** e2e specs to CSV (TC-001, TC-002, …) and optionally to **BrowserStack Test Management** (one folder per spec, create/update test cases).
3
+ Sync **Playwright** and **Cypress** e2e specs to **BrowserStack Test Management** with **AI-powered test step extraction**.
4
4
 
5
- **🦙 FREE AI-powered test step extraction** using **Ollama** — runs 100% locally, no data sent to cloud!
5
+ **🦙 FREE AI analysis** using **Ollama** — runs 100% locally, no data sent to cloud!
6
6
 
7
7
  **By Ashutosh Mallick**
8
8
 
9
9
  ---
10
10
 
11
- ## Install
11
+ ## 🚀 Quick Start (3 Steps)
12
12
 
13
+ ### Step 1: Install
13
14
  ```bash
14
15
  npm install @ash-mallick/browserstack-sync
15
16
  ```
16
17
 
17
- Or run without installing: `npx @ash-mallick/browserstack-sync --csv-only`
18
+ ### Step 2: Setup Environment
19
+ Create `.env` file in your project root:
20
+ ```env
21
+ BROWSERSTACK_USERNAME=your_username
22
+ BROWSERSTACK_ACCESS_KEY=your_access_key
23
+ BROWSERSTACK_PROJECT_ID=PR-XXXX
24
+ ```
25
+
26
+ ### Step 3: Onboard Tests to BrowserStack
27
+ ```bash
28
+ npx am-browserstack-sync --all
29
+ ```
30
+
31
+ This will:
32
+ - ✅ Generate CSV files with test cases
33
+ - ✅ Create folders in BrowserStack (one per spec file)
34
+ - ✅ Create/update test cases with AI-generated steps
18
35
 
19
36
  ---
20
37
 
21
- ## Run
38
+ ## 📊 Sync Test Run Results (CI Pipeline)
22
39
 
23
- From your project root (where your e2e specs live):
40
+ Add a single command to your existing CI pipeline to sync test results (your test command must also emit a JSON report, as the examples below show):
24
41
 
25
42
  ```bash
26
- # Generate CSVs only
27
- npx am-browserstack-sync --csv-only
43
+ npx am-browserstack-sync-runs
44
+ ```
45
+
46
+ ### GitLab CI Example
47
+
48
+ ```yaml
49
+ test:
50
+   script:
51
+     # Your existing test command (add JSON reporter)
52
+     - npx playwright test --reporter=json --output-file=test-results.json
53
+     # Sync results to BrowserStack (add this line)
54
+     - npx am-browserstack-sync-runs --run-name="Pipeline #$CI_PIPELINE_ID"
55
+ ```
56
+
57
+ ### GitHub Actions Example
58
+
59
+ ```yaml
60
+ - run: npx playwright test --reporter=json --output-file=test-results.json
61
+ - run: npx am-browserstack-sync-runs --run-name="Build #${{ github.run_number }}"
62
+ ```
63
+
64
+ The command will:
65
+ - 📄 Auto-detect your test report file
66
+ - 📊 Parse test results (pass/fail/skip)
67
+ - 🔗 Create a Test Run in BrowserStack
68
+ - ✅ Sync each test result
69
+ - 🔗 Provide a link to view results
70
+
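The result-sync flow above can be sketched in a few lines. This is an illustration of the counting idea only, not the package's actual parser; it assumes the nested `suites`/`specs`/`tests[].results[].status` shape that Playwright's JSON reporter produces.

```javascript
// Reduce a Playwright-style JSON report to pass/fail/skip counts.
// Assumption: report = { suites: [{ specs: [{ tests: [{ results: [{ status }] }] }], suites: [...] }] }
function countResults(report) {
  const counts = { passed: 0, failed: 0, skipped: 0 };
  const walk = (suite) => {
    for (const spec of suite.specs || []) {
      for (const test of spec.tests || []) {
        const results = test.results || [];
        // The last result reflects the final status after retries.
        const status = results.length ? results[results.length - 1].status : 'skipped';
        if (status === 'passed') counts.passed += 1;
        else if (status === 'skipped') counts.skipped += 1;
        else counts.failed += 1;
      }
    }
    for (const child of suite.suites || []) walk(child);
  };
  for (const suite of report.suites || []) walk(suite);
  return counts;
}

// Tiny hand-built report for illustration:
const report = {
  suites: [
    {
      specs: [
        { tests: [{ results: [{ status: 'passed' }] }] },
        { tests: [{ results: [{ status: 'failed' }] }] },
      ],
    },
  ],
};
console.log(countResults(report)); // { passed: 1, failed: 1, skipped: 0 }
```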
71
+ ---
72
+
73
+ ## Commands Overview
74
+
75
+ | Command | Purpose |
76
+ |---------|---------|
77
+ | `npx am-browserstack-sync --all` | Onboard tests to BrowserStack (one-time setup) |
78
+ | `npx am-browserstack-sync-runs` | Sync test run results (add to CI pipeline) |
79
+ | `npx am-browserstack-sync --csv-only` | Generate CSVs only (no BrowserStack sync) |
80
+
81
+ ---
82
+
83
+ ## Detailed Usage
28
84
 
29
- # Sync all specs, no prompt (e.g. CI)
85
+ ### Onboard Tests (am-browserstack-sync)
86
+
87
+ ```bash
88
+ # Onboard all spec files
30
89
  npx am-browserstack-sync --all
31
90
 
32
- # Sync only certain specs
91
+ # Onboard specific specs
33
92
  npx am-browserstack-sync --spec=login.spec,checkout.spec
34
93
 
35
- # Disable AI analysis (use regex extraction only)
94
+ # Generate CSVs only (no BrowserStack)
95
+ npx am-browserstack-sync --csv-only
96
+
97
+ # Disable AI (use regex extraction)
36
98
  npx am-browserstack-sync --no-ai
37
99
 
38
- # Use a specific Ollama model
100
+ # Use specific Ollama model
39
101
  npx am-browserstack-sync --model=llama3.2
40
102
  ```
41
103
 
42
- **Scripts** in `package.json`:
104
+ ### Sync Run Results (am-browserstack-sync-runs)
105
+
106
+ ```bash
107
+ # Auto-detect report and sync
108
+ npx am-browserstack-sync-runs
109
+
110
+ # Specify report file
111
+ npx am-browserstack-sync-runs --report=test-results.json
112
+
113
+ # Custom run name
114
+ npx am-browserstack-sync-runs --run-name="Nightly Regression"
115
+
116
+ # JUnit XML format
117
+ npx am-browserstack-sync-runs --report=junit.xml --format=junit
118
+ ```
119
+
120
+ ### Scripts in package.json
43
121
 
44
122
  ```json
45
123
  {
46
124
   "scripts": {
47
-     "sync:e2e": "am-browserstack-sync",
48
-     "sync:e2e-csv": "am-browserstack-sync --csv-only"
125
+     "test": "playwright test --reporter=json --output-file=test-results.json",
126
+     "sync:onboard": "am-browserstack-sync --all",
127
+     "sync:runs": "am-browserstack-sync-runs"
49
128
   }
50
129
  }
51
130
  ```
@@ -167,6 +246,186 @@ Sync pushes your e2e tests into **BrowserStack Test Management** so you can trac
167
246
  - Enriches with state (Active), type (Functional), automation (automated), tags (from spec + title).
168
247
  - Writes **one CSV per spec** (test_case_id, title, state, case_type, steps, expected_results, jira_issues, automation_status, tags, description, spec_file).
169
248
  - Optionally syncs to BrowserStack with description, steps, and tags.
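For illustration, a single row of such a CSV could look like the following (the values are hypothetical, not taken from a real project):

```csv
test_case_id,title,state,case_type,steps,expected_results,jira_issues,automation_status,tags,description,spec_file
TC-001,should display login form,Active,Functional,"1. Navigate to /login","The login form is visible",,automated,"login,e2e",Auto-generated from login.spec,login.spec.js
```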
249
+
250
+ ---
251
+
252
+ ## 📊 Test Run Tracking (NEW!)
253
+
254
+ Run your Playwright tests and **automatically track results on BrowserStack**. Each test run is recorded with pass/fail status, duration, and links back to your test cases.
255
+
256
+ ### Quick Start
257
+
258
+ ```bash
259
+ # Run all tests with BrowserStack tracking
260
+ npx am-browserstack-sync --run
261
+
262
+ # Run with a custom run name
263
+ npx am-browserstack-sync --run --run-name="Nightly Regression"
264
+
265
+ # Run specific spec files
266
+ npx am-browserstack-sync --run --spec=login.spec.js,checkout.spec.js
267
+
268
+ # Run tests matching a pattern
269
+ npx am-browserstack-sync --run --grep="login"
270
+ ```
271
+
272
+ ### Setup
273
+
274
+ 1. First, **sync your test cases** to BrowserStack (one-time or when tests change):
275
+ ```bash
276
+ npx am-browserstack-sync --all
277
+ ```
278
+
279
+ 2. Ensure your `.env` has the required credentials:
280
+ ```env
281
+ BROWSERSTACK_USERNAME=your_username
282
+ BROWSERSTACK_ACCESS_KEY=your_access_key
283
+ BROWSERSTACK_PROJECT_ID=PR-XXXX
284
+ ```
285
+
286
+ 3. Run tests with tracking:
287
+ ```bash
288
+ npx am-browserstack-sync --run
289
+ ```
290
+
291
+ ### What happens during a test run
292
+
293
+ 1. **Creates a Test Run** in BrowserStack with your specified name
294
+ 2. **Maps tests** to existing test cases by title
295
+ 3. **Updates results in real-time** as each test passes/fails/skips
296
+ 4. **Completes the run** with a summary and link to view results
297
+
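Step 2 above (mapping tests to existing test cases by title) can be pictured as a simple lookup table. This is a sketch of the idea under assumed field names (`identifier`, `title`), not the package's actual implementation:

```javascript
// Build an index of test cases keyed by trimmed title, then resolve a
// test to its BrowserStack case by the test's title.
function buildTitleIndex(testCases) {
  const index = new Map();
  for (const tc of testCases) {
    index.set((tc.title || '').trim(), tc);
  }
  return index;
}

const index = buildTitleIndex([
  { identifier: 'TC-1', title: 'should display login form ' },
  { identifier: 'TC-2', title: 'should log in successfully' },
]);
console.log(index.get('should display login form').identifier); // TC-1
```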
298
+ ### Example Output
299
+
300
+ ```
301
+ 🚀 Running Playwright tests with BrowserStack tracking...
302
+ Run Name: Nightly Regression
303
+
304
+ 🔗 BrowserStack Test Management - Initializing...
305
+ Fetching test cases from BrowserStack...
306
+ Found 15 test case(s) in project PR-1234
307
+ Creating test run: "Nightly Regression"...
308
+ ✓ Test run created: TR-42 (ID: 12345)
309
+ Adding 15 test case(s) to the run...
310
+ ✓ Test cases added to run
311
+ 🚀 Starting test execution with BrowserStack tracking
312
+
313
+ ✓ [BS] should display login form → passed
314
+ ✓ [BS] should log in successfully → passed
315
+ ✗ [BS] should show error for invalid credentials → failed
316
+
317
+ 🔗 BrowserStack Test Management - Completing run...
318
+
319
+ ✓ Test run completed: TR-42
320
+ 📊 Results: 2 passed, 1 failed, 0 skipped (3 total)
321
+ 🔗 View in BrowserStack: https://test-management.browserstack.com/projects/PR-1234/runs/12345
322
+ ```
323
+
324
+ ### Using as a Playwright Reporter
325
+
326
+ You can also use the BrowserStack reporter directly in your `playwright.config.js`:
327
+
328
+ ```javascript
329
+ // playwright.config.js
330
+ export default {
331
+   reporter: [
332
+     ['list'],
333
+     ['@ash-mallick/browserstack-sync/reporter', {
334
+       projectId: 'PR-1234',
335
+       runName: 'CI Pipeline Run',
336
+     }],
337
+   ],
338
+ };
339
+ ```
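A Playwright reporter is just a class implementing a few lifecycle hooks (`onBegin`, `onTestEnd`, `onEnd`). The sketch below shows the general shape such a reporter takes; it is a generic illustration, not this package's internal reporter code:

```javascript
// Minimal custom Playwright-style reporter sketch. A BrowserStack-syncing
// reporter would create the run in onBegin, post each result in onTestEnd,
// and close the run in onEnd.
class TrackingReporter {
  constructor(options = {}) {
    this.runName = options.runName || 'Unnamed run';
    this.results = [];
  }
  onBegin(config, suite) {
    console.log(`Starting run "${this.runName}"`);
  }
  onTestEnd(test, result) {
    this.results.push({ title: test.title, status: result.status });
  }
  onEnd() {
    const passed = this.results.filter((r) => r.status === 'passed').length;
    console.log(`${passed}/${this.results.length} passed`);
  }
}

// Simulated usage:
const rep = new TrackingReporter({ runName: 'demo' });
rep.onBegin({}, {});
rep.onTestEnd({ title: 'login works' }, { status: 'passed' });
rep.onEnd(); // prints "1/1 passed"
```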
340
+
341
+ ### CI/CD Integration - GitLab
342
+
343
+ **Sync test results from GitLab CI pipelines to BrowserStack!**
344
+
345
+ #### Quick Setup
346
+
347
+ 1. Add CI/CD variables in GitLab (Settings > CI/CD > Variables):
348
+ - `BROWSERSTACK_USERNAME`
349
+ - `BROWSERSTACK_ACCESS_KEY`
350
+ - `BROWSERSTACK_PROJECT_ID`
351
+
352
+ 2. Run tests with JSON reporter and sync results:
353
+
354
+ ```yaml
355
+ # .gitlab-ci.yml
356
+ test_and_sync:
357
+   image: mcr.microsoft.com/playwright:v1.40.0-jammy
358
+   script:
359
+     - npm ci
360
+     - npm install @ash-mallick/browserstack-sync
361
+     # Run tests with JSON reporter
362
+     - npx playwright test --reporter=json --output-file=test-results.json || true
363
+     # Sync results to BrowserStack
364
+     - npx am-browserstack-sync-runs --run-name="CI Pipeline #$CI_PIPELINE_ID"
365
+   artifacts:
366
+     paths:
367
+       - test-results.json
368
+ ```
369
+
370
+ #### Scheduled Runs (Nightly/Weekly)
371
+
372
+ ```yaml
373
+ nightly_tests:
374
+   rules:
375
+     - if: $CI_PIPELINE_SOURCE == "schedule"
376
+   variables:
377
+     BROWSERSTACK_RUN_NAME: "Nightly Regression - $CI_COMMIT_SHORT_SHA"
378
+   script:
379
+     - npx playwright test --reporter=json --output-file=test-results.json || true
380
+     - npx am-browserstack-sync-runs
381
+ ```
382
+
383
+ #### CLI Options
384
+
385
+ ```bash
386
+ # Auto-detect report and sync
387
+ npx am-browserstack-sync-runs
388
+
389
+ # Specify report file
390
+ npx am-browserstack-sync-runs --report=custom-results.json
391
+
392
+ # Custom run name
393
+ npx am-browserstack-sync-runs --run-name="Nightly Build"
394
+
395
+ # JUnit XML format
396
+ npx am-browserstack-sync-runs --report=junit.xml --format=junit
397
+ ```
398
+
399
+ ### GitHub Actions Example
400
+
401
+ ```yaml
402
+ - name: Run tests and sync to BrowserStack
403
+   env:
404
+     BROWSERSTACK_USERNAME: ${{ secrets.BROWSERSTACK_USERNAME }}
405
+     BROWSERSTACK_ACCESS_KEY: ${{ secrets.BROWSERSTACK_ACCESS_KEY }}
406
+     BROWSERSTACK_PROJECT_ID: PR-1234
407
+   run: |
408
+     npx playwright test --reporter=json --output-file=test-results.json || true
409
+     npx am-browserstack-sync-runs --run-name="Build #${{ github.run_number }}"
410
+ ```
411
+
412
+ ---
413
+
414
+ ## Programmatic API
415
+
416
+ ```javascript
417
+ import { runSync } from '@ash-mallick/browserstack-sync';
418
+
419
+ await runSync({
420
+   cwd: '/path/to/project',
421
+   csvOnly: true,
422
+   all: true,
423
+   spec: ['login.spec'],
424
+   useAI: true,
425
+   model: 'llama3.2',
426
+ });
427
+ ```
428
+
170
429
  ---
171
430
 
172
431
  **Author:** Ashutosh Mallick
package/bin/cli.js CHANGED
@@ -2,31 +2,67 @@
2
2
  /**
3
3
  * CLI for @ash-mallick/browserstack-sync (am-browserstack-sync).
4
4
  * By Ashutosh Mallick. Run from your project root: npx am-browserstack-sync [options]
5
- * Uses cwd as project root; config from .env, config file, or package.json.
5
+ *
6
+ * This command onboards your Playwright/Cypress tests to BrowserStack:
7
+ * - Generates CSV files with test cases
8
+ * - Creates folders in BrowserStack (one per spec file)
9
+ * - Creates/updates test cases with AI-generated steps
6
10
  *
7
- * Options:
11
+ * OPTIONS:
8
12
  * --csv-only Generate CSVs only, do not sync to BrowserStack
9
13
  * --all Sync all spec files (no interactive prompt)
10
- * --spec=name Sync only these specs (comma-separated). e.g. --spec=login.spec,checkout.spec
11
- * --no-ai Disable AI-powered step analysis (use regex extraction only)
12
- * --model=name Ollama model to use (default: llama3.2). e.g. --model=codellama
14
+ * --spec=name Sync only these specs (comma-separated)
15
+ * --no-ai Disable AI-powered step analysis
16
+ * --model=name Ollama model to use (default: llama3.2)
13
17
  *
14
- * AI Analysis (FREE, Local with Ollama):
15
- * Install Ollama from https://ollama.ai, then:
16
- * ollama pull llama3.2
17
- * ollama serve
18
+ * EXAMPLES:
19
+ * npx am-browserstack-sync --all # Onboard all tests
20
+ * npx am-browserstack-sync --csv-only # Generate CSVs only
21
+ * npx am-browserstack-sync --spec=login # Onboard specific spec
18
22
  *
19
- * AI analysis is automatic when Ollama is running! No API key needed.
20
- * The AI will analyze your test code and generate human-readable steps like:
21
- * - "Navigate to /login page"
22
- * - "Enter 'user@test.com' in the Email field"
23
- * - "Click the 'Sign In' button"
24
- * - "Verify URL matches /dashboard"
23
+ * To sync test run results, use:
24
+ * npx am-browserstack-sync-runs
25
25
  */
26
26
 
27
27
  import { runSync } from '../lib/index.js';
28
+ import 'dotenv/config';
28
29
 
29
30
  const argv = process.argv.slice(2);
31
+
32
+ // Show help
33
+ if (argv.includes('--help') || argv.includes('-h')) {
34
+ console.log(`
35
+ 📦 BrowserStack Test Case Sync
36
+
37
+ Onboard your Playwright/Cypress tests to BrowserStack Test Management.
38
+
39
+ USAGE:
40
+ npx am-browserstack-sync [options]
41
+
42
+ OPTIONS:
43
+ --csv-only Generate CSVs only, do not sync to BrowserStack
44
+ --all Sync all spec files without prompting
45
+ --spec=<names> Sync specific specs (comma-separated)
46
+ --no-ai Disable AI step analysis (use regex)
47
+ --model=<name> Ollama model (default: llama3.2)
48
+ --help, -h Show this help
49
+
50
+ EXAMPLES:
51
+ npx am-browserstack-sync --all
52
+ npx am-browserstack-sync --csv-only
53
+ npx am-browserstack-sync --spec=login.spec,checkout.spec
54
+
55
+ TO SYNC TEST RUN RESULTS:
56
+ npx am-browserstack-sync-runs
57
+
58
+ ENVIRONMENT VARIABLES:
59
+ BROWSERSTACK_USERNAME Your BrowserStack username
60
+ BROWSERSTACK_ACCESS_KEY Your BrowserStack access key
61
+ BROWSERSTACK_PROJECT_ID Project ID (e.g., PR-1234)
62
+ `);
63
+ process.exit(0);
64
+ }
65
+
30
66
  const csvOnly = argv.includes('--csv-only');
31
67
  const all = argv.includes('--all');
32
68
  const noAI = argv.includes('--no-ai');
@@ -0,0 +1,189 @@
1
+ #!/usr/bin/env node
2
+ /**
3
+ * Simple CLI to sync test run results to BrowserStack.
4
+ * Add this to your existing CI pipeline after tests run.
5
+ *
6
+ * Usage:
7
+ * npx am-browserstack-sync-runs
8
+ * npx am-browserstack-sync-runs --report=custom-results.json
9
+ * npx am-browserstack-sync-runs --run-name="Nightly Build #123"
10
+ *
11
+ * Environment Variables (required):
12
+ * BROWSERSTACK_USERNAME
13
+ * BROWSERSTACK_ACCESS_KEY
14
+ * BROWSERSTACK_PROJECT_ID
15
+ *
16
+ * The command will:
17
+ * 1. Auto-detect Playwright report files (results.json, test-results.json, etc.)
18
+ * 2. Parse test results
19
+ * 3. Create a Test Run in BrowserStack
20
+ * 4. Sync pass/fail status for each test
21
+ * 5. Show a link to view results
22
+ */
23
+
24
+ import 'dotenv/config';
25
+ import fs from 'fs';
26
+ import path from 'path';
27
+ import { syncResultsFromReport } from '../lib/sync-results.js';
28
+
29
+ const argv = process.argv.slice(2);
30
+
31
+ // Show help
32
+ if (argv.includes('--help') || argv.includes('-h')) {
33
+ console.log(`
34
+ 📊 BrowserStack Test Run Sync
35
+
36
+ Sync your Playwright/Cypress test results to BrowserStack Test Management.
37
+ Add this command to your existing CI pipeline after tests complete.
38
+
39
+ USAGE:
40
+ npx am-browserstack-sync-runs [options]
41
+
42
+ OPTIONS:
43
+ --report=<path> Path to report file (auto-detects if not specified)
44
+ --run-name=<name> Name for the test run (default: "CI Run - <timestamp>")
45
+ --format=<fmt> Report format: playwright-json, junit (default: auto)
46
+ --help, -h Show this help
47
+
48
+ ENVIRONMENT VARIABLES (required):
49
+ BROWSERSTACK_USERNAME Your BrowserStack username
50
+ BROWSERSTACK_ACCESS_KEY Your BrowserStack access key
51
+ BROWSERSTACK_PROJECT_ID Your project ID (e.g., PR-1234)
52
+
53
+ EXAMPLES:
54
+ # Auto-detect report and sync
55
+ npx am-browserstack-sync-runs
56
+
57
+ # Specify report file
58
+ npx am-browserstack-sync-runs --report=test-results.json
59
+
60
+ # Custom run name (great for CI)
61
+ npx am-browserstack-sync-runs --run-name="Build #\${CI_PIPELINE_ID}"
62
+
63
+ # Sync JUnit XML results
64
+ npx am-browserstack-sync-runs --report=junit.xml --format=junit
65
+
66
+ ADD TO YOUR CI PIPELINE:
67
+ # GitLab CI
68
+ script:
69
+ - npm test # Your existing test command
70
+ - npx am-browserstack-sync-runs --run-name="Pipeline #\$CI_PIPELINE_ID"
71
+
72
+ # GitHub Actions
73
+ - run: npm test
74
+ - run: npx am-browserstack-sync-runs --run-name="Build #\${{ github.run_number }}"
75
+
76
+ PREREQUISITES:
77
+ 1. First, onboard your tests to BrowserStack:
78
+ npx am-browserstack-sync --all
79
+
80
+ 2. Then add run sync to your pipeline:
81
+ npx am-browserstack-sync-runs
82
+ `);
83
+ process.exit(0);
84
+ }
85
+
86
+ // Auto-detect report file
87
+ function findReportFile() {
88
+   const cwd = process.cwd();
89
+   const possibleFiles = [
90
+     'test-results.json',
91
+     'results.json',
92
+     'playwright-report/results.json',
93
+     'playwright-report.json',
94
+     'report.json',
95
+     'junit.xml',
96
+     'test-results.xml',
97
+   ];
98
+
99
+   for (const file of possibleFiles) {
100
+     const fullPath = path.join(cwd, file);
101
+     if (fs.existsSync(fullPath)) {
102
+       return fullPath;
103
+     }
104
+   }
105
+
106
+   // Check for playwright-report directory
107
+   const htmlReportDir = path.join(cwd, 'playwright-report');
108
+   if (fs.existsSync(htmlReportDir) && fs.statSync(htmlReportDir).isDirectory()) {
109
+     return htmlReportDir;
110
+   }
111
+
112
+   return null;
113
+ }
114
+
115
+ // Auto-detect format from file
116
+ function detectFormat(reportPath) {
117
+   if (reportPath.endsWith('.xml')) {
118
+     return 'junit';
119
+   }
120
+   return 'playwright-json';
121
+ }
122
+
123
+ // Parse arguments
124
+ const reportArg = argv.find((a) => a.startsWith('--report='));
125
+ const formatArg = argv.find((a) => a.startsWith('--format='));
126
+ const runNameArg = argv.find((a) => a.startsWith('--run-name='));
127
+
128
+ let reportPath = reportArg ? reportArg.slice('--report='.length).trim() : null;
129
+ let format = formatArg ? formatArg.slice('--format='.length).trim() : null;
130
+ const runName = runNameArg
131
+   ? runNameArg.slice('--run-name='.length).trim()
132
+   : process.env.BROWSERSTACK_RUN_NAME || `CI Run - ${new Date().toISOString().slice(0, 16).replace('T', ' ')}`;
133
+
134
+ // Auto-detect report if not specified
135
+ if (!reportPath) {
136
+   reportPath = findReportFile();
137
+   if (!reportPath) {
138
+     console.error(`
139
+ ❌ No test report found!
140
+
141
+ Make sure your test framework generates a report file:
142
+
143
+ # Playwright - add to your test command:
144
+ npx playwright test --reporter=json --output-file=test-results.json
145
+
146
+ # Or specify the report path:
147
+ npx am-browserstack-sync-runs --report=path/to/results.json
148
+ `);
149
+     process.exit(1);
150
+   }
151
+   console.log(`📄 Auto-detected report: ${reportPath}`);
152
+ }
153
+
154
+ // Auto-detect format if not specified
155
+ if (!format) {
156
+   format = detectFormat(reportPath);
157
+ }
158
+
159
+ // Check required env vars
160
+ const missing = [];
161
+ if (!process.env.BROWSERSTACK_USERNAME && !process.env.BROWSERSTACK_API_TOKEN) {
162
+   missing.push('BROWSERSTACK_USERNAME');
163
+ }
164
+ if (!process.env.BROWSERSTACK_ACCESS_KEY && !process.env.BROWSERSTACK_API_TOKEN) {
165
+   missing.push('BROWSERSTACK_ACCESS_KEY');
166
+ }
167
+ if (!process.env.BROWSERSTACK_PROJECT_ID) {
168
+   missing.push('BROWSERSTACK_PROJECT_ID');
169
+ }
170
+
171
+ if (missing.length > 0) {
172
+   console.error(`
173
+ ❌ Missing required environment variables:
174
+ ${missing.join('\n ')}
175
+
176
+ Set these in your CI/CD settings or .env file.
177
+ `);
178
+   process.exit(1);
179
+ }
180
+
181
+ // Run sync
182
+ syncResultsFromReport({
183
+   reportPath,
184
+   format,
185
+   runName,
186
+ }).catch((err) => {
187
+   console.error('\n❌ Error:', err.message);
188
+   process.exit(1);
189
+ });
@@ -14,74 +14,78 @@ const OLLAMA_HOST = process.env.OLLAMA_HOST || 'http://localhost:11434';
14
14
  /**
15
15
  * System prompt for the AI to understand what we want.
16
16
  */
17
- const SYSTEM_PROMPT = `You are a QA expert creating detailed manual test steps from automated test code. Your goal is to write steps so clear and detailed that ANY tester can execute the test manually by just reading them.
18
-
19
- IMPORTANT: Output ONLY a valid JSON array. No markdown, no explanation, no extra text.
20
-
21
- Each step must have:
22
- - "step": A detailed, numbered action description. Include:
23
- - Exact UI element to interact with (button name, field label, section title)
24
- - Exact text/values to enter or look for
25
- - Location hints (e.g., "in the header", "in the Solutions section", "at the top of the page")
26
- - "result": The specific expected outcome that can be visually verified. Include:
27
- - Exact text that should be displayed
28
- - UI state changes (visible, enabled, selected, etc.)
29
- - URL changes if applicable
30
-
31
- RULES FOR DETAILED STEPS:
32
- 1. NAVIGATION: "Open the browser and navigate to [exact URL]. Wait for the page to fully load."
33
- 2. VISIBILITY CHECK: "Locate the [section name] section on the page. Verify it is visible and contains the text '[exact text]'."
34
- 3. ELEMENT VERIFICATION: "In the [section name], find the [element type] with text '[text]'. Verify it is displayed."
35
- 4. LINK VALIDATION: "Find the '[link text]' link in the [location]. Verify it has href pointing to '[URL pattern]'."
36
- 5. BUTTON CHECK: "Locate the '[button text]' button in the [section]. Verify it is visible and clickable."
37
- 6. CARD/TILE VALIDATION: "In the [section], locate the first card/tile. Verify it displays: title, image, and description."
38
- 7. SCROLL ACTION: "Scroll down to the [section name] section to bring it into view."
39
-
40
- EXAMPLE OUTPUT:
17
+ const SYSTEM_PROMPT = `You are a test automation expert. Your job is to analyze Playwright or Cypress test code and extract human-readable test steps.
18
+
19
+ CRITICAL RULES:
20
+ 1. Generate steps ONLY for what is explicitly in the test code provided
21
+ 2. DO NOT add steps for things not in the code
22
+ 3. DO NOT hallucinate or assume content - use ONLY text/values from the code
23
+ 4. Each expect() or assertion = one verification step
24
+ 5. Output ONLY valid JSON array, nothing else
25
+
26
+ STEP FORMAT:
27
+ {
28
+ "step": "Numbered action description with exact element and location",
29
+ "result": "Expected outcome using exact text from the code"
30
+ }
31
+
32
+ MAPPING CODE TO STEPS:
33
+
34
+ 1. getByTestId('section-name') → "Locate the section with test ID 'section-name'"
35
+ 2. locator('.class-name') → "Find the element with class 'class-name'"
36
+ 3. getByRole('link', { name: /text/i }) → "Find the link labeled 'text'"
37
+ 4. expect(element).toBeVisible() → Result: "The element is visible"
38
+ 5. expect(element).toContainText('exact text') → Result: "The element displays 'exact text'"
39
+ 6. expect(element).toHaveText('exact text') → Result: "The element shows exactly 'exact text'"
40
+ 7. expect(link).toHaveAttribute('href', '/path') → Result: "The link href is '/path'"
41
+ 8. scrollIntoViewIfNeeded() → "Scroll to bring the element into view"
42
+
43
+ EXAMPLE - For this code:
44
+ const scope = page.getByTestId("success-stories");
45
+ await expect(scope.locator(".band__headline")).toHaveText("Success stories");
46
+
47
+ OUTPUT:
41
48
  [
42
- {"step": "1. Open the browser and navigate to the application homepage (base URL + '/'). Wait for the page to fully load.", "result": "The homepage loads successfully. The masthead banner is visible at the top of the page."},
43
- {"step": "2. Locate the masthead section at the top of the page. Verify the main heading is displayed.", "result": "The heading 'Accelerate AI with an open ecosystem' is visible in the masthead."},
44
- {"step": "3. In the masthead section, find the 'Explore our AI ecosystem' call-to-action link.", "result": "The 'Explore our AI ecosystem' link is visible and has an href containing '/ai'."},
45
- {"step": "4. Scroll down to the 'Solutions' section. Verify the section headline is displayed.", "result": "The Solutions section is visible with headline 'Find the solution you're looking for, in less time'."},
46
- {"step": "5. In the Solutions section, locate the product slider/carousel. Verify the first product card is displayed.", "result": "The first product card shows: product title, product image, and product description."}
49
+ {"step": "1. Locate the section with test ID 'success-stories' on the page.", "result": "The success-stories section is found."},
50
+ {"step": "2. In the success-stories section, find the element with class 'band__headline'. Verify its text content.", "result": "The headline displays exactly 'Success stories'."}
47
51
  ]
48
52
 
49
- IMPORTANT:
50
- - Number each step sequentially (1, 2, 3...)
51
- - Be specific about WHAT to look for and WHERE
52
- - Include exact text strings from the code in quotes
53
- - Make expected results measurable and verifiable
54
- - Output ONLY the JSON array, nothing else`;
53
+ REMEMBER:
54
+ - ONLY use text/values that appear in the test code
55
+ - DO NOT make up text or values
56
+ - Number steps 1, 2, 3...
57
+ - Be precise and specific`;
55
58
 
56
59
  /**
57
60
  * Build the user prompt for AI analysis.
58
61
  */
59
62
  function buildUserPrompt(testTitle, testCode) {
60
- return `Create detailed manual test steps from this automated test. A QA tester should be able to execute this test manually by reading your steps.
63
+ return `Convert this automated test into manual test steps.
61
64
 
62
- TEST NAME: "${testTitle}"
65
+ TEST: "${testTitle}"
63
66
 
64
- AUTOMATED TEST CODE:
65
- \`\`\`javascript
67
+ CODE:
66
68
  ${testCode}
67
- \`\`\`
68
-
69
- ANALYZE THE CODE CAREFULLY:
70
- 1. Look at page.goto() calls to identify which page/URL to navigate to
71
- 2. Look at getByTestId(), locator(), getByRole(), getByLabel() to identify UI elements
72
- 3. Look at expect() assertions to understand what needs to be verified
73
- 4. Look at .toContainText(), .toHaveText(), .toBeVisible() for expected text/state
74
- 5. Look at .toHaveAttribute('href', ...) for link URL validations
75
- 6. Identify ALL assertions and convert them to expected results
76
-
77
- CREATE COMPREHENSIVE STEPS:
78
- - Include EVERY action from the test code
79
- - Include EVERY assertion/verification from the test code
80
- - Use exact text strings from .toContainText() and .toHaveText() in your expected results
81
- - Number steps sequentially (1, 2, 3...)
82
- - Make each step detailed enough for a manual tester
83
-
84
- OUTPUT: Return ONLY a valid JSON array, no other text.`;
69
+
70
+ INSTRUCTIONS:
71
+ 1. Create one step for EACH expect() assertion in the code
72
+ 2. Use ONLY the exact text strings that appear in the code (in quotes or regex)
73
+ 3. For each getByTestId(), locator(), or getByRole() - describe THAT element
74
+ 4. For each .toContainText('X') or .toHaveText('X') - the expected result is "displays 'X'"
75
+ 5. For each .toHaveAttribute('href', 'Y') - the expected result is "href is 'Y'"
76
+ 6. DO NOT add extra text or values that are not in the code
77
+
78
+ Example mapping:
79
+ - expect(page.getByTestId("masthead")).toBeVisible()
80
+ Step: "Locate the element with test ID 'masthead'." Result: "The masthead element is visible."
81
+
82
+ - expect(el).toContainText("Hello World")
83
+ Step: "Verify the element's text content." Result: "The element contains 'Hello World'."
84
+
85
+ - expect(link).toHaveAttribute("href", "/contact")
86
+ → Step: "Check the link's href attribute." Result: "The link href is '/contact'."
87
+
88
+ OUTPUT: Return ONLY a JSON array of {step, result} objects. No other text.`;
85
89
  }
86
90
 
87
91
  /**
@@ -112,32 +112,91 @@ export async function updateTestCase(projectId, testCaseId, testCase, auth) {
112
112
 
113
113
  /** Match existing BS test case to our local case: by title or by our TC-xxx tag */
114
114
  function findExisting(existingList, localCase) {
115
- const byTitle = existingList.find((t) => (t.title || '').trim() === (localCase.title || '').trim());
115
+ const byTitle = existingList.find((t) => (t.name || t.title || '').trim() === (localCase.title || '').trim());
116
116
  if (byTitle) return byTitle;
117
117
  const byTag = existingList.find((t) => (t.tags || []).includes(localCase.id));
118
118
  return byTag;
119
119
  }
120
120
 
121
+ /**
122
+  * Compare local test case with existing BrowserStack test case.
123
+  * Returns true if content has changed and update is needed.
124
+  */
125
+ function hasContentChanged(existing, localCase) {
126
+   // Compare steps - this is the main content that changes
127
+   const existingSteps = existing.test_case_steps || [];
128
+   const localSteps = Array.isArray(localCase.steps) ? localCase.steps : [];
129
+
130
+   // Different number of steps = changed
131
+   if (existingSteps.length !== localSteps.length) {
132
+     return true;
133
+   }
134
+
135
+   // Compare each step
136
+   for (let i = 0; i < localSteps.length; i++) {
137
+     const existingStep = existingSteps[i] || {};
138
+     const localStep = localSteps[i] || {};
139
+
140
+     const existingStepText = (existingStep.step || '').trim();
141
+     const localStepText = (localStep.step || '').trim();
142
+     const existingResult = (existingStep.result || '').trim();
143
+     const localResult = (localStep.result || '').trim();
144
+
145
+     if (existingStepText !== localStepText || existingResult !== localResult) {
146
+       return true;
147
+     }
148
+   }
149
+
150
+   // Compare title
151
+   const existingTitle = (existing.name || existing.title || '').trim();
152
+   const localTitle = (localCase.title || '').trim();
153
+   if (existingTitle !== localTitle) {
154
+     return true;
155
+   }
156
+
157
+   // No changes detected
158
+   return false;
159
+ }
160
+
121
161
  export async function syncToBrowserStack(specsMap, projectId, auth) {
122
162
  console.log('\nSyncing to BrowserStack project', projectId);
163
+
164
+   let totalCreated = 0;
165
+   let totalUpdated = 0;
166
+   let totalSkipped = 0;
167
+
123
168
  for (const [baseName, { specFile, cases }] of specsMap) {
124
169
  const folderName = baseName;
125
170
  const folderId = await ensureFolder(projectId, folderName, null, auth);
126
171
  const existing = await getTestCasesInFolder(projectId, folderId, auth);
127
- console.log('Folder:', folderName, '(id:', folderId, ')');
172
+     console.log('Folder:', folderName, '(id:', folderId, ') -', cases.length, 'test(s)');
173
+
128
174
  for (const tc of cases) {
129
175
  try {
130
176
  const found = findExisting(existing, tc);
131
177
  if (found) {
132
- await updateTestCase(projectId, found.identifier, tc, auth);
133
- console.log(' Updated', tc.id, tc.title, '->', found.identifier);
178
+           // Check if content has actually changed before updating
179
+           if (hasContentChanged(found, tc)) {
180
+             await updateTestCase(projectId, found.identifier, tc, auth);
181
+             console.log(' ✓ Updated', tc.id, tc.title, '->', found.identifier);
182
+             totalUpdated++;
183
+           } else {
184
+             console.log(' ○ Skipped', tc.id, tc.title, '(no changes)');
185
+             totalSkipped++;
186
+           }
134
187
  } else {
135
188
  const id = await createTestCase(projectId, folderId, tc, auth);
136
- console.log(' Created', tc.id, tc.title, '->', id);
189
+           console.log(' + Created', tc.id, tc.title, '->', id);
190
+           totalCreated++;
137
191
  }
138
192
  } catch (e) {
139
- console.error(' Failed', tc.id, tc.title, e.message);
193
+         console.error(' Failed', tc.id, tc.title, e.message);
140
194
  }
141
195
  }
142
196
  }
197
+
198
+   console.log('\nSync summary:');
199
+   console.log(` Created: ${totalCreated} test case(s)`);
200
+   console.log(` Updated: ${totalUpdated} test case(s)`);
201
+   console.log(` Skipped: ${totalSkipped} test case(s) (no changes)`);
143
202
  }
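The diff above shows only the tail of the new `hasContentChanged` helper. Its visible title comparison can be sketched standalone; the `titleDiffers` name and the sample cases below are illustrative, not part of the package:

```javascript
// Illustrative sketch (hypothetical name) of the title branch of
// hasContentChanged: trim both sides so trailing whitespace alone
// never counts as a change that would trigger an update call.
function titleDiffers(existingCase, localCase) {
  const existingTitle = (existingCase.title || '').trim();
  const localTitle = (localCase.title || '').trim();
  return existingTitle !== localTitle;
}

const whitespaceOnly = titleDiffers({ title: 'Login works ' }, { title: 'Login works' });
const realChange = titleDiffers({ title: 'Login works' }, { title: 'Login fails' });
console.log(whitespaceOnly, realChange); // → false true
```

With this guard in place, re-running the sync against an unchanged spec avoids redundant `updateTestCase` calls, which is what the new `totalSkipped` counter measures.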
package/lib/enrich.js CHANGED
@@ -40,6 +40,46 @@ function inferTags(specBaseName, title, isCypress) {
  return tags;
  }
 
+ /**
+ * Count the number of expect() assertions in test code.
+ */
+ function countAssertions(code) {
+ const expectMatches = code.match(/expect\s*\(/g) || [];
+ return expectMatches.length;
+ }
+
+ /**
+ * Merge AI steps with regex steps to ensure completeness.
+ * If AI generated fewer steps than assertions, supplement with unique regex steps.
+ */
+ function mergeStepsForCompleteness(aiSteps, regexSteps, assertionCount) {
+ if (!aiSteps || aiSteps.length === 0) {
+ return regexSteps;
+ }
+
+ // If AI generated enough steps, use AI steps
+ if (aiSteps.length >= assertionCount * 0.7) { // 70% coverage threshold
+ return aiSteps;
+ }
+
+ // If AI generated too few steps, supplement with unique regex steps
+ const aiStepTexts = new Set(aiSteps.map(s => s.step.toLowerCase()));
+ const supplementalSteps = regexSteps.filter(rs => {
+ // Only add regex steps that aren't similar to existing AI steps
+ const rsLower = rs.step.toLowerCase();
+ return !Array.from(aiStepTexts).some(aiText =>
+ aiText.includes(rsLower.slice(0, 30)) || rsLower.includes(aiText.slice(0, 30))
+ );
+ });
+
+ // Combine and renumber
+ const combined = [...aiSteps, ...supplementalSteps];
+ return combined.map((step, idx) => ({
+ step: step.step.replace(/^\d+\.\s*/, `${idx + 1}. `),
+ result: step.result,
+ }));
+ }
+
  /**
  * Enrich a single case with state, case_type, steps, expected_results, jira_issues, automation_status, tags, description.
  * @param {Object} case_ - Test case with id, title, code
@@ -54,13 +94,19 @@ export function enrichCase(case_, specBaseName, isCypress = false, aiSteps = nul
  const fw = isCypress ? FRAMEWORK.cypress : FRAMEWORK.playwright;
  const tags = inferTags(specBaseName, title, isCypress);
 
- // Determine steps: AI steps > regex-extracted steps > default
+ // Count assertions in code for completeness check
+ const assertionCount = code ? countAssertions(code) : 0;
+
+ // Get regex-based steps as fallback/supplement
+ const regexSteps = code ? extractStepsWithRegex(code, isCypress) : [];
+
+ // Determine steps: merge AI + regex for completeness, or use regex only
  let steps;
  if (aiSteps && aiSteps.length > 0) {
- steps = aiSteps;
- } else if (code) {
- // Try regex-based extraction from code
- steps = extractStepsWithRegex(code, isCypress);
+ // Merge AI steps with regex steps to ensure we don't miss assertions
+ steps = mergeStepsForCompleteness(aiSteps, regexSteps, assertionCount);
+ } else if (regexSteps.length > 0) {
+ steps = regexSteps;
  } else {
  // Fallback to generic step
  steps = [{ step: fw.step, result: DEFAULT_EXPECTED_RESULT }];
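The merge logic added to `enrich.js` can be seen in action with a standalone copy of the function and some hypothetical step data (the sample steps below are illustrative only). When the AI produced fewer steps than ~70% of the assertion count, unique regex-derived steps are appended and everything is renumbered:

```javascript
// Standalone copy of the mergeStepsForCompleteness logic from the diff above.
function mergeStepsForCompleteness(aiSteps, regexSteps, assertionCount) {
  if (!aiSteps || aiSteps.length === 0) return regexSteps;
  // 70% coverage threshold: enough AI steps means we trust them as-is
  if (aiSteps.length >= assertionCount * 0.7) return aiSteps;
  // Supplement with regex steps that don't overlap existing AI steps
  const aiStepTexts = new Set(aiSteps.map(s => s.step.toLowerCase()));
  const supplementalSteps = regexSteps.filter(rs => {
    const rsLower = rs.step.toLowerCase();
    return !Array.from(aiStepTexts).some(aiText =>
      aiText.includes(rsLower.slice(0, 30)) || rsLower.includes(aiText.slice(0, 30))
    );
  });
  // Combine and renumber 1..n
  return [...aiSteps, ...supplementalSteps].map((step, idx) => ({
    step: step.step.replace(/^\d+\.\s*/, `${idx + 1}. `),
    result: step.result,
  }));
}

// Hypothetical data: AI covered 1 of 4 assertions, so regex steps fill the gap.
const ai = [{ step: '1. Open the login page', result: 'Page loads' }];
const regex = [
  { step: '1. Open the login page', result: 'Page loads' },
  { step: '2. Submit invalid credentials', result: 'Error shown' },
];
const merged = mergeStepsForCompleteness(ai, regex, 4);
console.log(merged.map(s => s.step));
```

Note the duplicate "Open the login page" regex step is filtered out by the 30-character prefix similarity check, so only the genuinely new step is appended.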
@@ -0,0 +1,329 @@
+ /**
+ * Sync test results from CI pipeline to BrowserStack Test Management.
+ *
+ * This module reads test results from Playwright JSON report or JUnit XML
+ * and syncs them to BrowserStack, creating a Test Run with pass/fail status.
+ *
+ * Usage:
+ * npx am-browserstack-sync --sync-results --report=playwright-report.json
+ * npx am-browserstack-sync --sync-results --report=junit.xml --format=junit
+ */
+
+ import fs from 'fs';
+ import path from 'path';
+ import {
+ createTestRun,
+ addTestCasesToRun,
+ updateTestResult,
+ completeTestRun,
+ getAllTestCases,
+ findTestCaseByTitle,
+ } from './browserstack.js';
+
+ /**
+ * Parse Playwright JSON report format.
+ * @param {string} reportPath - Path to the JSON report file
+ * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
+ */
+ function parsePlaywrightJson(reportPath) {
+ const report = JSON.parse(fs.readFileSync(reportPath, 'utf-8'));
+ const results = [];
+
+ // Playwright JSON report structure: { suites: [...], stats: {...} }
+ function extractTests(suites) {
+ for (const suite of suites || []) {
+ for (const spec of suite.specs || []) {
+ for (const test of spec.tests || []) {
+ // Get the result from the test results array
+ const result = test.results?.[0] || {};
+ results.push({
+ title: spec.title || test.title,
+ fullTitle: `${suite.title} > ${spec.title}`,
+ status: mapPlaywrightStatus(test.status || result.status),
+ duration: result.duration || 0,
+ error: result.error?.message || result.errors?.[0]?.message || '',
+ file: suite.file || spec.file,
+ });
+ }
+ }
+
+ // Recursively process nested suites
+ if (suite.suites) {
+ extractTests(suite.suites);
+ }
+ }
+ }
+
+ extractTests(report.suites);
+ return results;
+ }
+
+ /**
+ * Parse JUnit XML report format.
+ * @param {string} reportPath - Path to the JUnit XML file
+ * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
+ */
+ function parseJunitXml(reportPath) {
+ const xml = fs.readFileSync(reportPath, 'utf-8');
+ const results = [];
+
+ // Simple regex-based XML parsing for JUnit format
+ // <testcase name="..." classname="..." time="...">
+ // <failure message="...">...</failure>
+ // <skipped/>
+ // </testcase>
+
+ const testcaseRegex = /<testcase\s+[^>]*name="([^"]+)"[^>]*time="([^"]+)"[^>]*>([\s\S]*?)<\/testcase>/gi;
+ const failureRegex = /<failure[^>]*message="([^"]*)"[^>]*>/i;
+ const skippedRegex = /<skipped\s*\/?>/i;
+ const errorRegex = /<error[^>]*message="([^"]*)"[^>]*>/i;
+
+ let match;
+ while ((match = testcaseRegex.exec(xml)) !== null) {
+ const [, name, time, content] = match;
+
+ let status = 'passed';
+ let error = '';
+
+ if (skippedRegex.test(content)) {
+ status = 'skipped';
+ } else if (failureRegex.test(content)) {
+ status = 'failed';
+ const failMatch = content.match(failureRegex);
+ error = failMatch?.[1] || 'Test failed';
+ } else if (errorRegex.test(content)) {
+ status = 'failed';
+ const errMatch = content.match(errorRegex);
+ error = errMatch?.[1] || 'Test error';
+ }
+
+ results.push({
+ title: name,
+ status,
+ duration: Math.round(parseFloat(time) * 1000), // Convert seconds to ms
+ error,
+ });
+ }
+
+ return results;
+ }
+
+ /**
+ * Parse Playwright HTML report's index.html or results.json
+ */
+ function parsePlaywrightHtmlReport(reportDir) {
+ // Try to find the JSON data in the HTML report directory
+ const possiblePaths = [
+ path.join(reportDir, 'report.json'),
+ path.join(reportDir, 'data', 'report.json'),
+ path.join(reportDir, 'results.json'),
+ ];
+
+ for (const p of possiblePaths) {
+ if (fs.existsSync(p)) {
+ return parsePlaywrightJson(p);
+ }
+ }
+
+ throw new Error(`Could not find JSON data in Playwright HTML report directory: ${reportDir}`);
+ }
+
+ /**
+ * Map Playwright status to BrowserStack status.
+ */
+ function mapPlaywrightStatus(status) {
+ const map = {
+ passed: 'passed',
+ failed: 'failed',
+ timedOut: 'failed',
+ skipped: 'skipped',
+ interrupted: 'blocked',
+ expected: 'passed',
+ unexpected: 'failed',
+ flaky: 'retest',
+ };
+ return map[status] || status;
+ }
+
+ /**
+ * Load authentication from environment.
+ */
+ function loadAuth() {
+ const username = process.env.BROWSERSTACK_USERNAME;
+ const accessKey = process.env.BROWSERSTACK_ACCESS_KEY;
+ const apiToken = process.env.BROWSERSTACK_API_TOKEN;
+
+ if (apiToken) {
+ return { apiToken };
+ }
+
+ if (username && accessKey) {
+ return { username, accessKey };
+ }
+
+ return null;
+ }
+
+ /**
+ * Sync test results from a report file to BrowserStack.
+ *
+ * @param {Object} options
+ * @param {string} options.reportPath - Path to the report file or directory
+ * @param {string} options.format - Report format: 'playwright-json', 'junit', 'playwright-html'
+ * @param {string} options.projectId - BrowserStack project ID
+ * @param {string} options.runName - Name for the test run
+ * @param {string} options.runDescription - Description for the test run
+ */
+ export async function syncResultsFromReport(options = {}) {
+ const {
+ reportPath,
+ format = 'playwright-json',
+ projectId = process.env.BROWSERSTACK_PROJECT_ID,
+ runName = process.env.BROWSERSTACK_RUN_NAME || `CI Run - ${new Date().toISOString().slice(0, 16).replace('T', ' ')}`,
+ runDescription = process.env.BROWSERSTACK_RUN_DESCRIPTION || 'Test results synced from CI pipeline',
+ } = options;
+
+ // Validate inputs
+ if (!reportPath) {
+ throw new Error('Report path is required. Use --report=<path>');
+ }
+
+ if (!projectId) {
+ throw new Error('BROWSERSTACK_PROJECT_ID is required');
+ }
+
+ const auth = loadAuth();
+ if (!auth) {
+ throw new Error('BrowserStack credentials not found. Set BROWSERSTACK_USERNAME/ACCESS_KEY or API_TOKEN');
+ }
+
+ // Check if report exists
+ if (!fs.existsSync(reportPath)) {
+ throw new Error(`Report file not found: ${reportPath}`);
+ }
+
+ console.log('\n📊 BrowserStack Results Sync');
+ console.log(' Report:', reportPath);
+ console.log(' Format:', format);
+ console.log(' Project:', projectId);
+ console.log(' Run Name:', runName);
+
+ // Parse the report based on format
+ console.log('\n Parsing test results...');
+ let testResults;
+
+ const stat = fs.statSync(reportPath);
+ if (stat.isDirectory()) {
+ // Assume it's a Playwright HTML report directory
+ testResults = parsePlaywrightHtmlReport(reportPath);
+ } else if (format === 'junit' || reportPath.endsWith('.xml')) {
+ testResults = parseJunitXml(reportPath);
+ } else {
+ testResults = parsePlaywrightJson(reportPath);
+ }
+
+ console.log(` Found ${testResults.length} test result(s)`);
+
+ if (testResults.length === 0) {
+ console.log('\n ⚠️ No test results found in report');
+ return;
+ }
+
+ // Fetch all test cases from BrowserStack
+ console.log('\n Fetching test cases from BrowserStack...');
+ const allBsTestCases = await getAllTestCases(projectId, auth);
+ console.log(` Found ${allBsTestCases.length} test case(s) in project`);
+
+ // Map results to BrowserStack test cases
+ const mappedResults = [];
+ const unmappedResults = [];
+
+ for (const result of testResults) {
+ const bsTestCase = findTestCaseByTitle(allBsTestCases, result.title);
+ if (bsTestCase) {
+ mappedResults.push({ ...result, bsTestCase });
+ } else {
+ unmappedResults.push(result);
+ }
+ }
+
+ console.log(` Mapped: ${mappedResults.length} test(s)`);
+ if (unmappedResults.length > 0) {
+ console.log(` Unmapped: ${unmappedResults.length} test(s) (not found in BrowserStack)`);
+ }
+
+ if (mappedResults.length === 0) {
+ console.log('\n ⚠️ No tests matched. Make sure tests are synced first:');
+ console.log(' npx am-browserstack-sync --all');
+ return;
+ }
+
+ // Create test run
+ console.log(`\n Creating test run: "${runName}"...`);
+ const run = await createTestRun(projectId, runName, runDescription, auth);
+ const runId = run.runId;
+ const runIdentifier = run.identifier;
+ console.log(` ✓ Test run created: ${runIdentifier}`);
+
+ // Add test cases to run
+ const testCaseIds = mappedResults.map(r => r.bsTestCase.identifier);
+ console.log(` Adding ${testCaseIds.length} test case(s) to run...`);
+ await addTestCasesToRun(projectId, runId, testCaseIds, auth);
+
+ // Update results
+ console.log('\n Syncing results:');
+ const stats = { passed: 0, failed: 0, skipped: 0 };
+
+ for (const result of mappedResults) {
+ try {
+ await updateTestResult(
+ projectId,
+ runId,
+ result.bsTestCase.identifier,
+ result.status,
+ {
+ duration: result.duration,
+ comment: result.error || '',
+ },
+ auth
+ );
+
+ const icon = result.status === 'passed' ? '✓' : result.status === 'failed' ? '✗' : '○';
+ console.log(` ${icon} ${result.title} → ${result.status}`);
+
+ if (result.status === 'passed') stats.passed++;
+ else if (result.status === 'failed') stats.failed++;
+ else stats.skipped++;
+ } catch (error) {
+ console.error(` ✗ Failed to update ${result.title}: ${error.message}`);
+ }
+ }
+
+ // Complete the run
+ console.log('\n Completing test run...');
+ await completeTestRun(projectId, runId, auth);
+
+ const total = stats.passed + stats.failed + stats.skipped;
+ console.log(`\n ✓ Test run completed: ${runIdentifier}`);
+ console.log(` 📊 Results: ${stats.passed} passed, ${stats.failed} failed, ${stats.skipped} skipped (${total} total)`);
+ console.log(` 🔗 View: https://test-management.browserstack.com/projects/${projectId}/runs/${runId}`);
+
+ // Show unmapped tests
+ if (unmappedResults.length > 0) {
+ console.log('\n ⚠️ Tests not found in BrowserStack (sync them first):');
+ unmappedResults.slice(0, 5).forEach(r => console.log(` - ${r.title}`));
+ if (unmappedResults.length > 5) {
+ console.log(` ... and ${unmappedResults.length - 5} more`);
+ }
+ }
+
+ return {
+ runId,
+ runIdentifier,
+ stats,
+ mappedCount: mappedResults.length,
+ unmappedCount: unmappedResults.length,
+ };
+ }
+
+ export { parsePlaywrightJson, parseJunitXml };
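The new JUnit parser can be exercised without touching the filesystem by running the same regexes over an inline XML string. This condensed re-implementation (dropping the `<error>` branch; the sample XML is invented for illustration) shows how pass/fail/skip status is derived:

```javascript
// Condensed illustration of the regex-based JUnit parsing above,
// operating on an inline XML string instead of a report file.
const xml = `
<testsuite>
  <testcase name="login works" classname="auth" time="1.25"></testcase>
  <testcase name="login fails" classname="auth" time="0.5">
    <failure message="expected 200, got 401">stack trace...</failure>
  </testcase>
  <testcase name="wip" classname="auth" time="0"><skipped/></testcase>
</testsuite>`;

const testcaseRegex = /<testcase\s+[^>]*name="([^"]+)"[^>]*time="([^"]+)"[^>]*>([\s\S]*?)<\/testcase>/gi;
const failureRegex = /<failure[^>]*message="([^"]*)"[^>]*>/i;
const skippedRegex = /<skipped\s*\/?>/i;

const results = [];
let match;
while ((match = testcaseRegex.exec(xml)) !== null) {
  const [, name, time, content] = match;
  let status = 'passed';
  let error = '';
  if (skippedRegex.test(content)) {
    status = 'skipped';
  } else if (failureRegex.test(content)) {
    status = 'failed';
    error = content.match(failureRegex)?.[1] || 'Test failed';
  }
  // Seconds → milliseconds, matching the parser above
  results.push({ title: name, status, duration: Math.round(parseFloat(time) * 1000), error });
}
console.log(results);
```

One caveat of this regex approach (in the package and in the sketch alike): it assumes the `name` attribute precedes `time` in each `<testcase>` tag, which is how most JUnit emitters order attributes but is not guaranteed by XML.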
package/package.json CHANGED
@@ -1,12 +1,13 @@
  {
  "name": "@ash-mallick/browserstack-sync",
- "version": "1.1.3",
- "description": "Sync Playwright & Cypress e2e specs to CSV and BrowserStack Test Management with FREE AI-powered test step extraction using Ollama (local)",
+ "version": "1.3.0",
+ "description": "Sync Playwright & Cypress e2e specs to CSV and BrowserStack Test Management with FREE AI-powered test step extraction using Ollama (local).",
  "author": "Ashutosh Mallick",
  "type": "module",
  "main": "lib/index.js",
  "bin": {
- "am-browserstack-sync": "bin/cli.js"
+ "am-browserstack-sync": "bin/cli.js",
+ "am-browserstack-sync-runs": "bin/sync-runs.js"
  },
  "files": [
  "bin",
@@ -14,10 +15,9 @@
  ],
  "scripts": {
  "test": "playwright test",
- "test:e2e": "playwright test",
  "sync": "node bin/cli.js",
- "sync:csv": "node bin/cli.js",
  "sync:csv-only": "node bin/cli.js --csv-only",
+ "sync:runs": "node bin/sync-runs.js",
  "prepublishOnly": "node bin/cli.js --csv-only",
  "publish:public": "npm publish --access public"
  },
@@ -28,9 +28,11 @@
  "e2e",
  "test-management",
  "sync",
+ "ci-cd",
+ "gitlab",
+ "github-actions",
  "ai",
  "ollama",
- "llama",
  "test-steps",
  "free",
  "local",
@@ -46,10 +48,8 @@
  "ollama": "^0.6.3"
  },
  "devDependencies": {
- "@playwright/test": "^1.49.0",
- "playwright": "^1.49.0"
+ "@playwright/test": "^1.49.0"
  },
- "peerDependenciesMeta": {},
  "engines": {
  "node": ">=18"
  }