@ash-mallick/browserstack-sync 1.1.4 → 1.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,8 +1,6 @@
  # @ash-mallick/browserstack-sync
 
- Sync **Playwright** and **Cypress** e2e specs to CSV (TC-001, TC-002, …) and optionally to **BrowserStack Test Management** (one folder per spec, create/update test cases).
-
- **🦙 FREE AI-powered test step extraction** using **Ollama** — runs 100% locally, no data sent to cloud!
+ Sync Playwright/Cypress tests to BrowserStack Test Management with AI-powered test step extraction.
 
  **By Ashutosh Mallick**
 
@@ -14,96 +12,140 @@ Sync **Playwright** and **Cypress** e2e specs to CSV (TC-001, TC-002, …) and o
  npm install @ash-mallick/browserstack-sync
  ```
 
- Or run without installing: `npx @ash-mallick/browserstack-sync --csv-only`
+ ---
+
+ ## Quick Setup
+
+ Create `.env` file:
+
+ ```env
+ BROWSERSTACK_USERNAME=your_username
+ BROWSERSTACK_ACCESS_KEY=your_access_key
+ BROWSERSTACK_PROJECT_ID=PR-XXXX
+ ```
 
  ---
 
- ## Run
+ ## Commands
 
- From your project root (where your e2e specs live):
+ ### 1. Generate CSV Files Only (No BrowserStack)
 
  ```bash
- # Generate CSVs only
  npx am-browserstack-sync --csv-only
+ ```
 
- # Sync all specs, no prompt (e.g. CI)
- npx am-browserstack-sync --all
+ Creates CSV files with test cases in `playwright/e2e-csv/` folder.
 
- # Sync only certain specs
- npx am-browserstack-sync --spec=login.spec,checkout.spec
+ ---
 
- # Disable AI analysis (use regex extraction only)
- npx am-browserstack-sync --no-ai
+ ### 2. Onboard Single Spec to BrowserStack
 
- # Use a specific Ollama model
- npx am-browserstack-sync --model=llama3.2
+ ```bash
+ npx am-browserstack-sync --spec=login.spec
  ```
 
- **Scripts** in `package.json`:
+ Creates folder and test cases for the specified spec file.
 
- ```json
- {
-   "scripts": {
-     "sync:e2e": "am-browserstack-sync",
-     "sync:e2e-csv": "am-browserstack-sync --csv-only"
-   }
- }
+ ---
+
+ ### 3. Onboard All Specs to BrowserStack
+
+ ```bash
+ npx am-browserstack-sync --all
  ```
 
+ Creates folders and test cases for all spec files in your project.
+
  ---
 
- ## 🦙 AI-Powered Step Analysis (FREE with Ollama)
+ ### 4. Sync Test Run Results to BrowserStack
+
+ First, run your tests with JSON reporter:
 
- The tool uses **Ollama** to analyze your test code and generate **human-readable test steps**. Ollama runs **100% locally** on your machine — no data sent to cloud, completely free, no API key needed!
+ ```bash
+ npx playwright test --reporter=list,json --output-file=test-results.json
+ ```
 
- **Example transformation:**
+ Then sync results:
 
- ```typescript
- // Your test code:
- test('should log in successfully', async ({ page }) => {
-   await page.goto('/login');
-   await page.getByLabel(/email/i).fill('user@example.com');
-   await page.getByLabel(/password/i).fill('validpassword');
-   await page.getByRole('button', { name: /sign in/i }).click();
-   await expect(page).toHaveURL(/\/dashboard/);
- });
+ ```bash
+ npx am-browserstack-sync-runs
  ```
 
- **Generated steps:**
+ Or with custom run name:
 
- | Step | Expected Result |
- |------|-----------------|
- | Navigate to /login page | Login page loads successfully |
- | Enter 'user@example.com' in the Email field | Email is entered |
- | Enter 'validpassword' in the Password field | Password is masked and entered |
- | Click the 'Sign In' button | Form is submitted |
- | Verify URL | URL matches /dashboard |
+ ```bash
+ npx am-browserstack-sync-runs --run-name="Nightly Run"
+ ```
 
- ### Setup Ollama
+ ---
+
+ ## GitLab CI Scheduled Job
+
+ Add to `.gitlab-ci.yml`:
+
+ ```yaml
+ scheduled_browserstack_sync:
+   rules:
+     - if: $CI_PIPELINE_SOURCE == "schedule"
+   script:
+     - npm ci
+     - npx playwright test --reporter=list,json --output-file=test-results.json || true
+     - npm install @ash-mallick/browserstack-sync
+     - npx am-browserstack-sync-runs
+ ```
 
- 1. Download and install Ollama from [ollama.ai](https://ollama.ai)
+ Add CI/CD Variables in GitLab (**Settings → CI/CD → Variables**):
 
- 2. Pull a model (llama3.2 recommended):
+ | Key | Value |
+ |-----|-------|
+ | `BROWSERSTACK_USERNAME` | your_username |
+ | `BROWSERSTACK_ACCESS_KEY` | your_access_key |
+ | `BROWSERSTACK_PROJECT_ID` | PR-XXXX |
+
+ Create schedule in **CI/CD → Schedules → New Schedule**.
+
+ ---
+
+ ## All Options Reference
+
+ ### Onboarding Command (`am-browserstack-sync`)
+
+ | Option | Description |
+ |--------|-------------|
+ | `--csv-only` | Generate CSVs only, do not sync to BrowserStack |
+ | `--all` | Sync all spec files without prompting |
+ | `--spec=name` | Sync specific spec(s), comma-separated |
+ | `--no-ai` | Disable AI step analysis, use regex extraction |
+ | `--model=name` | Ollama model to use (default: llama3.2) |
+
+ ### Run Sync Command (`am-browserstack-sync-runs`)
+
+ | Option | Description |
+ |--------|-------------|
+ | `--report=path` | Path to test report file (auto-detects if not specified) |
+ | `--run-name=name` | Custom name for the test run |
+ | `--format=type` | Report format: `playwright-json` or `junit` |
+
+ ---
+
+ ## AI-Powered Test Steps (Optional)
+
+ The tool uses Ollama to analyze your test code and generate human-readable test steps. Ollama runs locally - no data sent to cloud, completely free.
+
+ ### Setup Ollama
+
+ 1. Download from [ollama.ai](https://ollama.ai)
+ 2. Pull a model:
  ```bash
  ollama pull llama3.2
  ```
-
- 3. Start Ollama (runs automatically on macOS, or run manually):
+ 3. Start Ollama:
  ```bash
  ollama serve
  ```
 
- 4. Run the sync — AI analysis is automatic when Ollama is running!
-    ```bash
-    npx am-browserstack-sync --csv-only
-    ```
-
- ### Options
-
- - `--no-ai` — Disable AI, use regex-based extraction instead
- - `--model=llama3.2` — Use a different Ollama model
- - `OLLAMA_MODEL=llama3.2` — Set default model via env variable
- - `OLLAMA_HOST=http://localhost:11434` — Custom Ollama host
+ AI analysis runs automatically when Ollama is running. Use `--no-ai` to disable.
 
  ### Recommended Models
 
@@ -112,17 +154,16 @@ test('should log in successfully', async ({ page }) => {
  | `llama3.2` | 2GB | General purpose, fast (default) |
  | `codellama` | 4GB | Better code understanding |
  | `llama3.2:1b` | 1GB | Fastest, lower quality |
- | `mistral` | 4GB | Good balance |
-
- **Fallback:** If Ollama is not running, the tool automatically uses regex-based step extraction, which still provides meaningful steps.
 
  ---
 
- ## Config (optional)
+ ## Configuration (Optional)
 
- Defaults: e2e dir `playwright/e2e`, CSV dir `playwright/e2e-csv`. For Cypress use e.g. `cypress/e2e`.
+ Default directories:
+ - e2e specs: `playwright/e2e`
+ - CSV output: `playwright/e2e-csv`
 
- Override via **`.am-browserstack-sync.json`** in project root:
+ Override via `.am-browserstack-sync.json`:
 
  ```json
  {
@@ -131,42 +172,27 @@ Override via **`.am-browserstack-sync.json`** in project root:
  }
  ```
 
- Or **package.json**: `"amBrowserstackSync": { "e2eDir": "...", "csvOutputDir": "..." }`
- Or env: `PLAYWRIGHT_BROWSERSTACK_E2E_DIR`, `PLAYWRIGHT_BROWSERSTACK_CSV_DIR`.
+ Or via environment variables:
+ - `PLAYWRIGHT_BROWSERSTACK_E2E_DIR`
+ - `PLAYWRIGHT_BROWSERSTACK_CSV_DIR`
 
  ---
 
- ## BrowserStack sync
-
- Sync pushes your e2e tests into **BrowserStack Test Management** so you can track test cases, link runs, and keep specs in sync with one source of truth. Under your chosen project it creates **one folder per spec file** (e.g. `login.spec`, `checkout.spec`) and one **test case** per test, with title, description, steps, state (Active), type (Functional), automation status, and tags. Existing test cases are matched by title or TC-id and **updated**; new ones are **created**. No duplicates.
+ ## What Gets Created
 
- **Setup:**
+ When you run `npx am-browserstack-sync --all`:
 
- 1. In project root, create **`.env`** (do not commit):
-
-    ```env
-    BROWSERSTACK_USERNAME=your_username
-    BROWSERSTACK_ACCESS_KEY=your_access_key
-    BROWSERSTACK_PROJECT_ID=PR-XXXX
-    ```
-    Or use a single token: `BROWSERSTACK_API_TOKEN=your_token`
+ 1. **CSV files** - One per spec file with test case details
+ 2. **BrowserStack folders** - One folder per spec file
+ 3. **Test cases** - Each test becomes a test case with:
+    - Title
+    - AI-generated steps (or regex-extracted)
+    - Expected results
+    - Tags
+    - Automation status
 
- 2. Get credentials and project ID from [Test Management → API keys](https://test-management.browserstack.com/settings/api-keys). The project ID is in the project URL (e.g. `PR-1234`).
-
- 3. **Install Ollama** for AI-powered step analysis (optional but recommended).
-
- 4. Run **`npx am-browserstack-sync`** (without `--csv-only`). You'll be prompted to sync all specs or pick specific ones (unless you use `--all` or `--spec=...`). After sync, open your project in BrowserStack to see the new folders and test cases.
-
- ---
-
- ## What it does
-
- - Finds **Playwright** (`*.spec.*`, `*.test.*`) and **Cypress** (`*.cy.*`) spec files in your e2e dir.
- - Extracts test titles from `test('...')` / `it('...')`, assigns TC-001, TC-002, …
- - **Analyzes test code** with Ollama AI (local, free) or regex to generate human-readable steps and expected results.
- - Enriches with state (Active), type (Functional), automation (automated), tags (from spec + title).
- - Writes **one CSV per spec** (test_case_id, title, state, case_type, steps, expected_results, jira_issues, automation_status, tags, description, spec_file).
- - Optionally syncs to BrowserStack with description, steps, and tags.
- ---
+ When you run `npx am-browserstack-sync-runs`:
 
- **Author:** Ashutosh Mallick
+ 1. **Test Run** - Created in BrowserStack
+ 2. **Results** - Pass/fail status for each test
+ 3. **Link** - URL to view results in BrowserStack
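Editor's note: the hunk above elides the body of the `.am-browserstack-sync.json` example between `{` and `}`. Based on the keys named in the removed `package.json` line (`e2eDir`, `csvOutputDir`) and the old README's Cypress hint, a plausible sketch of the full file (values are illustrative, not from the diff):

```json
{
  "e2eDir": "cypress/e2e",
  "csvOutputDir": "cypress/e2e-csv"
}
```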
package/bin/cli.js CHANGED
@@ -2,31 +2,67 @@
  /**
   * CLI for @ash-mallick/browserstack-sync (am-browserstack-sync).
   * By Ashutosh Mallick. Run from your project root: npx am-browserstack-sync [options]
- * Uses cwd as project root; config from .env, config file, or package.json.
+ *
+ * This command onboards your Playwright/Cypress tests to BrowserStack:
+ * - Generates CSV files with test cases
+ * - Creates folders in BrowserStack (one per spec file)
+ * - Creates/updates test cases with AI-generated steps
   *
- * Options:
+ * OPTIONS:
   *   --csv-only     Generate CSVs only, do not sync to BrowserStack
   *   --all          Sync all spec files (no interactive prompt)
- *   --spec=name    Sync only these specs (comma-separated). e.g. --spec=login.spec,checkout.spec
- *   --no-ai        Disable AI-powered step analysis (use regex extraction only)
- *   --model=name   Ollama model to use (default: llama3.2). e.g. --model=codellama
+ *   --spec=name    Sync only these specs (comma-separated)
+ *   --no-ai        Disable AI-powered step analysis
+ *   --model=name   Ollama model to use (default: llama3.2)
   *
- * AI Analysis (FREE, Local with Ollama):
- *   Install Ollama from https://ollama.ai, then:
- *     ollama pull llama3.2
- *     ollama serve
+ * EXAMPLES:
+ *   npx am-browserstack-sync --all          # Onboard all tests
+ *   npx am-browserstack-sync --csv-only     # Generate CSVs only
+ *   npx am-browserstack-sync --spec=login   # Onboard specific spec
   *
- * AI analysis is automatic when Ollama is running! No API key needed.
- * The AI will analyze your test code and generate human-readable steps like:
- *   - "Navigate to /login page"
- *   - "Enter 'user@test.com' in the Email field"
- *   - "Click the 'Sign In' button"
- *   - "Verify URL matches /dashboard"
+ * To sync test run results, use:
+ *   npx am-browserstack-sync-runs
   */
 
  import { runSync } from '../lib/index.js';
+ import 'dotenv/config';
 
  const argv = process.argv.slice(2);
+
+ // Show help
+ if (argv.includes('--help') || argv.includes('-h')) {
+   console.log(`
+ 📦 BrowserStack Test Case Sync
+
+ Onboard your Playwright/Cypress tests to BrowserStack Test Management.
+
+ USAGE:
+   npx am-browserstack-sync [options]
+
+ OPTIONS:
+   --csv-only       Generate CSVs only, do not sync to BrowserStack
+   --all            Sync all spec files without prompting
+   --spec=<names>   Sync specific specs (comma-separated)
+   --no-ai          Disable AI step analysis (use regex)
+   --model=<name>   Ollama model (default: llama3.2)
+   --help, -h       Show this help
+
+ EXAMPLES:
+   npx am-browserstack-sync --all
+   npx am-browserstack-sync --csv-only
+   npx am-browserstack-sync --spec=login.spec,checkout.spec
+
+ TO SYNC TEST RUN RESULTS:
+   npx am-browserstack-sync-runs
+
+ ENVIRONMENT VARIABLES:
+   BROWSERSTACK_USERNAME    Your BrowserStack username
+   BROWSERSTACK_ACCESS_KEY  Your BrowserStack access key
+   BROWSERSTACK_PROJECT_ID  Project ID (e.g., PR-1234)
+ `);
+   process.exit(0);
+ }
+
  const csvOnly = argv.includes('--csv-only');
  const all = argv.includes('--all');
  const noAI = argv.includes('--no-ai');
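Editor's note: the CLI above parses boolean flags with `argv.includes()` and key=value flags with `startsWith()`/`slice()`. A self-contained sketch of that parsing style (`parseFlags` is illustrative only, not an export of the package):

```javascript
// Illustrative flag parser in the style of the CLI hunk above.
// Boolean flags are presence checks; --spec= and --model= carry values.
function parseFlags(argv) {
  const specArg = argv.find((a) => a.startsWith('--spec='));
  const modelArg = argv.find((a) => a.startsWith('--model='));
  return {
    csvOnly: argv.includes('--csv-only'),
    all: argv.includes('--all'),
    noAI: argv.includes('--no-ai'),
    // --spec accepts a comma-separated list, e.g. --spec=login.spec,checkout.spec
    specs: specArg ? specArg.slice('--spec='.length).split(',').map((s) => s.trim()) : null,
    model: modelArg ? modelArg.slice('--model='.length) : 'llama3.2',
  };
}
```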
@@ -0,0 +1,189 @@
+ #!/usr/bin/env node
+ /**
+  * Simple CLI to sync test run results to BrowserStack.
+  * Add this to your existing CI pipeline after tests run.
+  *
+  * Usage:
+  *   npx am-browserstack-sync-runs
+  *   npx am-browserstack-sync-runs --report=custom-results.json
+  *   npx am-browserstack-sync-runs --run-name="Nightly Build #123"
+  *
+  * Environment Variables (required):
+  *   BROWSERSTACK_USERNAME
+  *   BROWSERSTACK_ACCESS_KEY
+  *   BROWSERSTACK_PROJECT_ID
+  *
+  * The command will:
+  *   1. Auto-detect Playwright report files (results.json, test-results.json, etc.)
+  *   2. Parse test results
+  *   3. Create a Test Run in BrowserStack
+  *   4. Sync pass/fail status for each test
+  *   5. Show a link to view results
+  */
+
+ import 'dotenv/config';
+ import fs from 'fs';
+ import path from 'path';
+ import { syncResultsFromReport } from '../lib/sync-results.js';
+
+ const argv = process.argv.slice(2);
+
+ // Show help
+ if (argv.includes('--help') || argv.includes('-h')) {
+   console.log(`
+ 📊 BrowserStack Test Run Sync
+
+ Sync your Playwright/Cypress test results to BrowserStack Test Management.
+ Add this command to your existing CI pipeline after tests complete.
+
+ USAGE:
+   npx am-browserstack-sync-runs [options]
+
+ OPTIONS:
+   --report=<path>     Path to report file (auto-detects if not specified)
+   --run-name=<name>   Name for the test run (default: "CI Run - <timestamp>")
+   --format=<fmt>      Report format: playwright-json, junit (default: auto)
+   --help, -h          Show this help
+
+ ENVIRONMENT VARIABLES (required):
+   BROWSERSTACK_USERNAME    Your BrowserStack username
+   BROWSERSTACK_ACCESS_KEY  Your BrowserStack access key
+   BROWSERSTACK_PROJECT_ID  Your project ID (e.g., PR-1234)
+
+ EXAMPLES:
+   # Auto-detect report and sync
+   npx am-browserstack-sync-runs
+
+   # Specify report file
+   npx am-browserstack-sync-runs --report=test-results.json
+
+   # Custom run name (great for CI)
+   npx am-browserstack-sync-runs --run-name="Build #\${CI_PIPELINE_ID}"
+
+   # Sync JUnit XML results
+   npx am-browserstack-sync-runs --report=junit.xml --format=junit
+
+ ADD TO YOUR CI PIPELINE:
+   # GitLab CI
+   script:
+     - npm test   # Your existing test command
+     - npx am-browserstack-sync-runs --run-name="Pipeline #\$CI_PIPELINE_ID"
+
+   # GitHub Actions
+   - run: npm test
+   - run: npx am-browserstack-sync-runs --run-name="Build #\${{ github.run_number }}"
+
+ PREREQUISITES:
+   1. First, onboard your tests to BrowserStack:
+      npx am-browserstack-sync --all
+
+   2. Then add run sync to your pipeline:
+      npx am-browserstack-sync-runs
+ `);
+   process.exit(0);
+ }
+
+ // Auto-detect report file
+ function findReportFile() {
+   const cwd = process.cwd();
+   const possibleFiles = [
+     'test-results.json',
+     'results.json',
+     'playwright-report/results.json',
+     'playwright-report.json',
+     'report.json',
+     'junit.xml',
+     'test-results.xml',
+   ];
+
+   for (const file of possibleFiles) {
+     const fullPath = path.join(cwd, file);
+     if (fs.existsSync(fullPath)) {
+       return fullPath;
+     }
+   }
+
+   // Check for playwright-report directory
+   const htmlReportDir = path.join(cwd, 'playwright-report');
+   if (fs.existsSync(htmlReportDir) && fs.statSync(htmlReportDir).isDirectory()) {
+     return htmlReportDir;
+   }
+
+   return null;
+ }
+
+ // Auto-detect format from file
+ function detectFormat(reportPath) {
+   if (reportPath.endsWith('.xml')) {
+     return 'junit';
+   }
+   return 'playwright-json';
+ }
+
+ // Parse arguments
+ const reportArg = argv.find((a) => a.startsWith('--report='));
+ const formatArg = argv.find((a) => a.startsWith('--format='));
+ const runNameArg = argv.find((a) => a.startsWith('--run-name='));
+
+ let reportPath = reportArg ? reportArg.slice('--report='.length).trim() : null;
+ let format = formatArg ? formatArg.slice('--format='.length).trim() : null;
+ const runName = runNameArg
+   ? runNameArg.slice('--run-name='.length).trim()
+   : process.env.BROWSERSTACK_RUN_NAME || `CI Run - ${new Date().toISOString().slice(0, 16).replace('T', ' ')}`;
+
+ // Auto-detect report if not specified
+ if (!reportPath) {
+   reportPath = findReportFile();
+   if (!reportPath) {
+     console.error(`
+ ❌ No test report found!
+
+ Make sure your test framework generates a report file:
+
+   # Playwright - add to your test command:
+   npx playwright test --reporter=json --output-file=test-results.json
+
+   # Or specify the report path:
+   npx am-browserstack-sync-runs --report=path/to/results.json
+ `);
+     process.exit(1);
+   }
+   console.log(`📄 Auto-detected report: ${reportPath}`);
+ }
+
+ // Auto-detect format if not specified
+ if (!format) {
+   format = detectFormat(reportPath);
+ }
+
+ // Check required env vars
+ const missing = [];
+ if (!process.env.BROWSERSTACK_USERNAME && !process.env.BROWSERSTACK_API_TOKEN) {
+   missing.push('BROWSERSTACK_USERNAME');
+ }
+ if (!process.env.BROWSERSTACK_ACCESS_KEY && !process.env.BROWSERSTACK_API_TOKEN) {
+   missing.push('BROWSERSTACK_ACCESS_KEY');
+ }
+ if (!process.env.BROWSERSTACK_PROJECT_ID) {
+   missing.push('BROWSERSTACK_PROJECT_ID');
+ }
+
+ if (missing.length > 0) {
+   console.error(`
+ ❌ Missing required environment variables:
+   ${missing.join('\n   ')}
+
+ Set these in your CI/CD settings or .env file.
+ `);
+   process.exit(1);
+ }
+
+ // Run sync
+ syncResultsFromReport({
+   reportPath,
+   format,
+   runName,
+ }).catch((err) => {
+   console.error('\n❌ Error:', err.message);
+   process.exit(1);
+ });
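Editor's note: the default run name above trims the ISO timestamp to minute precision and swaps the `T` separator for a space. A self-contained sketch of that expression (`defaultRunName` is an illustrative name, not a package export):

```javascript
// Builds the fallback run name used when --run-name and
// BROWSERSTACK_RUN_NAME are both absent: "CI Run - YYYY-MM-DD HH:MM".
function defaultRunName(date) {
  return `CI Run - ${date.toISOString().slice(0, 16).replace('T', ' ')}`;
}
```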
@@ -14,7 +14,7 @@ const OLLAMA_HOST = process.env.OLLAMA_HOST || 'http://localhost:11434';
  /**
   * System prompt for the AI to understand what we want.
   */
- const SYSTEM_PROMPT = `You are a QA expert creating manual test steps from automated test code.
+ const SYSTEM_PROMPT = `You are a test automation expert. Your job is to analyze Playwright or Cypress test code and extract human-readable test steps.
 
  CRITICAL RULES:
  1. Generate steps ONLY for what is explicitly in the test code provided
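Editor's note: this hunk shows only the system prompt; the package's actual request code is not included in the diff. As a hedged sketch, a request body for Ollama's local generate endpoint could be assembled like this (`buildOllamaRequest` and the prompt wording are assumptions, not the package's code):

```javascript
// Illustrative: shape of a non-streaming request body for a local
// Ollama server (POST /api/generate), pairing the system prompt with
// the test code to analyze. Field names and wording are assumptions.
function buildOllamaRequest(model, systemPrompt, testCode) {
  return {
    model,                 // e.g. 'llama3.2'
    system: systemPrompt,  // the SYSTEM_PROMPT defined above
    prompt: `Extract human-readable test steps from this code:\n${testCode}`,
    stream: false,         // single JSON response instead of a stream
  };
}
```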
@@ -200,3 +200,109 @@ export async function syncToBrowserStack(specsMap, projectId, auth) {
    console.log(`   Updated: ${totalUpdated} test case(s)`);
    console.log(`   Skipped: ${totalSkipped} test case(s) (no changes)`);
  }
+
+ // ============================================================================
+ // TEST RUN API - For syncing test execution results
+ // ============================================================================
+
+ /**
+  * Create a new Test Run in BrowserStack.
+  */
+ export async function createTestRun(projectId, runName, description, auth) {
+   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs`;
+   const body = {
+     test_run: {
+       name: runName,
+       description: description || `Automated test run: ${runName}`,
+     },
+   };
+   const res = await bsRequest('POST', url, auth, body);
+   return {
+     runId: res.test_run?.id || res.id,
+     identifier: res.test_run?.identifier || res.identifier,
+   };
+ }
+
+ /**
+  * Add test cases to a Test Run.
+  */
+ export async function addTestCasesToRun(projectId, runId, testCaseIds, auth) {
+   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}/test-cases`;
+   const body = {
+     test_case_identifiers: testCaseIds,
+   };
+   return bsRequest('POST', url, auth, body);
+ }
+
+ /**
+  * Update test result in a Test Run.
+  */
+ export async function updateTestResult(projectId, runId, testCaseId, status, options = {}, auth) {
+   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}/test-results`;
+
+   const statusMap = {
+     passed: 'passed',
+     failed: 'failed',
+     skipped: 'skipped',
+     timedOut: 'failed',
+   };
+
+   const body = {
+     test_result: {
+       test_case_identifier: testCaseId,
+       status: statusMap[status] || status,
+       comment: options.comment || '',
+     },
+   };
+
+   if (options.duration) {
+     body.test_result.duration = Math.round(options.duration / 1000);
+   }
+
+   return bsRequest('POST', url, auth, body);
+ }
+
+ /**
+  * Complete a Test Run.
+  */
+ export async function completeTestRun(projectId, runId, auth) {
+   const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-runs/${runId}`;
+   const body = {
+     test_run: {
+       state: 'done',
+     },
+   };
+   return bsRequest('PATCH', url, auth, body);
+ }
+
+ /**
+  * Get all test cases in a project.
+  */
+ export async function getAllTestCases(projectId, auth) {
+   const all = [];
+   let page = 1;
+   let hasMore = true;
+
+   while (hasMore) {
+     const url = `${BS_BASE}/projects/${encodeURIComponent(projectId)}/test-cases?p=${page}`;
+     const res = await bsRequest('GET', url, auth);
+     const list = res.test_cases || [];
+     all.push(...list);
+     const info = res.info || {};
+     hasMore = info.next != null;
+     page += 1;
+   }
+
+   return all;
+ }
+
+ /**
+  * Find test case by title.
+  */
+ export function findTestCaseByTitle(testCases, title) {
+   const normalizedTitle = (title || '').trim().toLowerCase();
+   return testCases.find((tc) => {
+     const tcTitle = (tc.name || tc.title || '').trim().toLowerCase();
+     return tcTitle === normalizedTitle;
+   });
+ }
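Editor's note: `findTestCaseByTitle` above matches titles exactly but is case- and surrounding-whitespace-insensitive, and falls back from `name` to `title`. A self-contained copy of the function from the hunk, shown with illustrative data:

```javascript
// Verbatim logic from the hunk above: normalize both sides, then look
// for an exact match against either tc.name or tc.title.
function findTestCaseByTitle(testCases, title) {
  const normalizedTitle = (title || '').trim().toLowerCase();
  return testCases.find((tc) => {
    const tcTitle = (tc.name || tc.title || '').trim().toLowerCase();
    return tcTitle === normalizedTitle;
  });
}
```

A partial or fuzzy title change in a spec will therefore break the mapping; only exact (normalized) matches are linked to run results.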
@@ -0,0 +1,329 @@
+ /**
+  * Sync test results from CI pipeline to BrowserStack Test Management.
+  *
+  * This module reads test results from Playwright JSON report or JUnit XML
+  * and syncs them to BrowserStack, creating a Test Run with pass/fail status.
+  *
+  * Usage:
+  *   npx am-browserstack-sync --sync-results --report=playwright-report.json
+  *   npx am-browserstack-sync --sync-results --report=junit.xml --format=junit
+  */
+
+ import fs from 'fs';
+ import path from 'path';
+ import {
+   createTestRun,
+   addTestCasesToRun,
+   updateTestResult,
+   completeTestRun,
+   getAllTestCases,
+   findTestCaseByTitle,
+ } from './browserstack.js';
+
+ /**
+  * Parse Playwright JSON report format.
+  * @param {string} reportPath - Path to the JSON report file
+  * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
+  */
+ function parsePlaywrightJson(reportPath) {
+   const report = JSON.parse(fs.readFileSync(reportPath, 'utf-8'));
+   const results = [];
+
+   // Playwright JSON report structure: { suites: [...], stats: {...} }
+   function extractTests(suites) {
+     for (const suite of suites || []) {
+       for (const spec of suite.specs || []) {
+         for (const test of spec.tests || []) {
+           // Get the result from the test results array
+           const result = test.results?.[0] || {};
+           results.push({
+             title: spec.title || test.title,
+             fullTitle: `${suite.title} > ${spec.title}`,
+             status: mapPlaywrightStatus(test.status || result.status),
+             duration: result.duration || 0,
+             error: result.error?.message || result.errors?.[0]?.message || '',
+             file: suite.file || spec.file,
+           });
+         }
+       }
+
+       // Recursively process nested suites
+       if (suite.suites) {
+         extractTests(suite.suites);
+       }
+     }
+   }
+
+   extractTests(report.suites);
+   return results;
+ }
+
+ /**
+  * Parse JUnit XML report format.
+  * @param {string} reportPath - Path to the JUnit XML file
+  * @returns {Array<{title: string, status: string, duration: number, error?: string}>}
+  */
+ function parseJunitXml(reportPath) {
+   const xml = fs.readFileSync(reportPath, 'utf-8');
+   const results = [];
+
+   // Simple regex-based XML parsing for JUnit format
+   // <testcase name="..." classname="..." time="...">
+   //   <failure message="...">...</failure>
+   //   <skipped/>
+   // </testcase>
+
+   const testcaseRegex = /<testcase\s+[^>]*name="([^"]+)"[^>]*time="([^"]+)"[^>]*>([\s\S]*?)<\/testcase>/gi;
+   const failureRegex = /<failure[^>]*message="([^"]*)"[^>]*>/i;
+   const skippedRegex = /<skipped\s*\/?>/i;
+   const errorRegex = /<error[^>]*message="([^"]*)"[^>]*>/i;
+
+   let match;
+   while ((match = testcaseRegex.exec(xml)) !== null) {
+     const [, name, time, content] = match;
+
+     let status = 'passed';
+     let error = '';
+
+     if (skippedRegex.test(content)) {
+       status = 'skipped';
+     } else if (failureRegex.test(content)) {
+       status = 'failed';
+       const failMatch = content.match(failureRegex);
+       error = failMatch?.[1] || 'Test failed';
+     } else if (errorRegex.test(content)) {
+       status = 'failed';
+       const errMatch = content.match(errorRegex);
+       error = errMatch?.[1] || 'Test error';
+     }
+
+     results.push({
+       title: name,
+       status,
+       duration: Math.round(parseFloat(time) * 1000), // Convert seconds to ms
+       error,
+     });
+   }
+
+   return results;
+ }
+
+ /**
+  * Parse Playwright HTML report's index.html or results.json
+  */
+ function parsePlaywrightHtmlReport(reportDir) {
+   // Try to find the JSON data in the HTML report directory
+   const possiblePaths = [
+     path.join(reportDir, 'report.json'),
+     path.join(reportDir, 'data', 'report.json'),
+     path.join(reportDir, 'results.json'),
+   ];
+
+   for (const p of possiblePaths) {
+     if (fs.existsSync(p)) {
+       return parsePlaywrightJson(p);
+     }
+   }
+
+   throw new Error(`Could not find JSON data in Playwright HTML report directory: ${reportDir}`);
+ }
+
+ /**
+  * Map Playwright status to BrowserStack status.
+  */
+ function mapPlaywrightStatus(status) {
+   const map = {
+     passed: 'passed',
+     failed: 'failed',
+     timedOut: 'failed',
+     skipped: 'skipped',
+     interrupted: 'blocked',
+     expected: 'passed',
+     unexpected: 'failed',
+     flaky: 'retest',
+   };
+   return map[status] || status;
+ }
+
+ /**
+  * Load authentication from environment.
+  */
+ function loadAuth() {
+   const username = process.env.BROWSERSTACK_USERNAME;
+   const accessKey = process.env.BROWSERSTACK_ACCESS_KEY;
+   const apiToken = process.env.BROWSERSTACK_API_TOKEN;
+
+   if (apiToken) {
+     return { apiToken };
+   }
+
+   if (username && accessKey) {
+     return { username, accessKey };
+   }
+
+   return null;
+ }
+
+ /**
+  * Sync test results from a report file to BrowserStack.
+  *
+  * @param {Object} options
+  * @param {string} options.reportPath - Path to the report file or directory
+  * @param {string} options.format - Report format: 'playwright-json', 'junit', 'playwright-html'
+  * @param {string} options.projectId - BrowserStack project ID
+  * @param {string} options.runName - Name for the test run
+  * @param {string} options.runDescription - Description for the test run
+  */
+ export async function syncResultsFromReport(options = {}) {
+   const {
+     reportPath,
+     format = 'playwright-json',
+     projectId = process.env.BROWSERSTACK_PROJECT_ID,
+     runName = process.env.BROWSERSTACK_RUN_NAME || `CI Run - ${new Date().toISOString().slice(0, 16).replace('T', ' ')}`,
+     runDescription = process.env.BROWSERSTACK_RUN_DESCRIPTION || 'Test results synced from CI pipeline',
+   } = options;
+
+   // Validate inputs
+   if (!reportPath) {
+     throw new Error('Report path is required. Use --report=<path>');
+   }
+
+   if (!projectId) {
+     throw new Error('BROWSERSTACK_PROJECT_ID is required');
+   }
+
+   const auth = loadAuth();
+   if (!auth) {
+     throw new Error('BrowserStack credentials not found. Set BROWSERSTACK_USERNAME/ACCESS_KEY or API_TOKEN');
+   }
+
+   // Check if report exists
+   if (!fs.existsSync(reportPath)) {
+     throw new Error(`Report file not found: ${reportPath}`);
+   }
+
+   console.log('\n📊 BrowserStack Results Sync');
+   console.log('   Report:', reportPath);
+   console.log('   Format:', format);
+   console.log('   Project:', projectId);
+   console.log('   Run Name:', runName);
+
+   // Parse the report based on format
+   console.log('\n   Parsing test results...');
+   let testResults;
+
+   const stat = fs.statSync(reportPath);
+   if (stat.isDirectory()) {
+     // Assume it's a Playwright HTML report directory
+     testResults = parsePlaywrightHtmlReport(reportPath);
+   } else if (format === 'junit' || reportPath.endsWith('.xml')) {
+     testResults = parseJunitXml(reportPath);
+   } else {
+     testResults = parsePlaywrightJson(reportPath);
+   }
+
+   console.log(`   Found ${testResults.length} test result(s)`);
+
+   if (testResults.length === 0) {
+     console.log('\n   ⚠️ No test results found in report');
+     return;
+   }
+
+   // Fetch all test cases from BrowserStack
+   console.log('\n   Fetching test cases from BrowserStack...');
+   const allBsTestCases = await getAllTestCases(projectId, auth);
+   console.log(`   Found ${allBsTestCases.length} test case(s) in project`);
+
+   // Map results to BrowserStack test cases
+   const mappedResults = [];
+   const unmappedResults = [];
+
+   for (const result of testResults) {
+     const bsTestCase = findTestCaseByTitle(allBsTestCases, result.title);
+     if (bsTestCase) {
+       mappedResults.push({ ...result, bsTestCase });
+     } else {
+       unmappedResults.push(result);
+     }
+   }
+
+   console.log(`   Mapped: ${mappedResults.length} test(s)`);
+   if (unmappedResults.length > 0) {
+     console.log(`   Unmapped: ${unmappedResults.length} test(s) (not found in BrowserStack)`);
+   }
+
+   if (mappedResults.length === 0) {
+     console.log('\n   ⚠️ No tests matched. Make sure tests are synced first:');
+     console.log('   npx am-browserstack-sync --all');
+     return;
+   }
+
+   // Create test run
+   console.log(`\n   Creating test run: "${runName}"...`);
+   const run = await createTestRun(projectId, runName, runDescription, auth);
+   const runId = run.runId;
+   const runIdentifier = run.identifier;
+   console.log(`   ✓ Test run created: ${runIdentifier}`);
+
+   // Add test cases to run
+   const testCaseIds = mappedResults.map(r => r.bsTestCase.identifier);
+   console.log(`   Adding ${testCaseIds.length} test case(s) to run...`);
+   await addTestCasesToRun(projectId, runId, testCaseIds, auth);
+
+   // Update results
+   console.log('\n   Syncing results:');
+   const stats = { passed: 0, failed: 0, skipped: 0 };
+
+   for (const result of mappedResults) {
+     try {
+       await updateTestResult(
+         projectId,
+         runId,
+         result.bsTestCase.identifier,
283
+ result.status,
284
+ {
285
+ duration: result.duration,
286
+ comment: result.error || '',
287
+ },
288
+ auth
289
+ );
290
+
291
+ const icon = result.status === 'passed' ? '✓' : result.status === 'failed' ? '✗' : '○';
292
+ console.log(` ${icon} ${result.title} → ${result.status}`);
293
+
294
+ if (result.status === 'passed') stats.passed++;
295
+ else if (result.status === 'failed') stats.failed++;
296
+ else stats.skipped++;
297
+ } catch (error) {
298
+ console.error(` ✗ Failed to update ${result.title}: ${error.message}`);
299
+ }
300
+ }
301
+
302
+ // Complete the run
303
+ console.log('\n Completing test run...');
304
+ await completeTestRun(projectId, runId, auth);
305
+
306
+ const total = stats.passed + stats.failed + stats.skipped;
307
+ console.log(`\n ✓ Test run completed: ${runIdentifier}`);
308
+ console.log(` 📊 Results: ${stats.passed} passed, ${stats.failed} failed, ${stats.skipped} skipped (${total} total)`);
309
+ console.log(` 🔗 View: https://test-management.browserstack.com/projects/${projectId}/runs/${runId}`);
310
+
311
+ // Show unmapped tests
312
+ if (unmappedResults.length > 0) {
313
+ console.log('\n ⚠️ Tests not found in BrowserStack (sync them first):');
314
+ unmappedResults.slice(0, 5).forEach(r => console.log(` - ${r.title}`));
315
+ if (unmappedResults.length > 5) {
316
+ console.log(` ... and ${unmappedResults.length - 5} more`);
317
+ }
318
+ }
319
+
320
+ return {
321
+ runId,
322
+ runIdentifier,
323
+ stats,
324
+ mappedCount: mappedResults.length,
325
+ unmappedCount: unmappedResults.length,
326
+ };
327
+ }
328
+
329
+ export { parsePlaywrightJson, parseJunitXml };
package/package.json CHANGED
@@ -1,12 +1,13 @@
  {
    "name": "@ash-mallick/browserstack-sync",
-   "version": "1.1.4",
-   "description": "Sync Playwright & Cypress e2e specs to CSV and BrowserStack Test Management with FREE AI-powered test step extraction using Ollama (local)",
+   "version": "1.3.1",
+   "description": "Sync Playwright & Cypress e2e specs to CSV and BrowserStack Test Management with FREE AI-powered test step extraction using Ollama (local).",
    "author": "Ashutosh Mallick",
    "type": "module",
    "main": "lib/index.js",
    "bin": {
-     "am-browserstack-sync": "bin/cli.js"
+     "am-browserstack-sync": "bin/cli.js",
+     "am-browserstack-sync-runs": "bin/sync-runs.js"
    },
    "files": [
      "bin",
@@ -14,10 +15,9 @@
    ],
    "scripts": {
      "test": "playwright test",
-     "test:e2e": "playwright test",
      "sync": "node bin/cli.js",
-     "sync:csv": "node bin/cli.js",
      "sync:csv-only": "node bin/cli.js --csv-only",
+     "sync:runs": "node bin/sync-runs.js",
      "prepublishOnly": "node bin/cli.js --csv-only",
      "publish:public": "npm publish --access public"
    },
@@ -28,9 +28,11 @@
      "e2e",
      "test-management",
      "sync",
+     "ci-cd",
+     "gitlab",
+     "github-actions",
      "ai",
      "ollama",
-     "llama",
      "test-steps",
      "free",
      "local",
@@ -42,15 +44,12 @@
      "url": ""
    },
    "dependencies": {
-     "@ash-mallick/browserstack-sync": "^1.1.3",
      "dotenv": "^16.4.5",
      "ollama": "^0.6.3"
    },
    "devDependencies": {
-     "@playwright/test": "^1.49.0",
-     "playwright": "^1.49.0"
+     "@playwright/test": "^1.49.0"
    },
-   "peerDependenciesMeta": {},
    "engines": {
      "node": ">=18"
    }