@ash-mallick/browserstack-sync 1.1.1 → 1.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +6 -27
  2. package/bin/cli.js +0 -0
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -26,9 +26,6 @@ From your project root (where your e2e specs live):
  # Generate CSVs only
  npx am-browserstack-sync --csv-only

- # Sync to BrowserStack (interactive: choose all or specific specs)
- npx am-browserstack-sync
-
  # Sync all specs, no prompt (e.g. CI)
  npx am-browserstack-sync --all

@@ -39,7 +36,7 @@ npx am-browserstack-sync --spec=login.spec,checkout.spec
  npx am-browserstack-sync --no-ai

  # Use a specific Ollama model
- npx am-browserstack-sync --model=codellama
+ npx am-browserstack-sync --model=llama3.2
  ```

  **Scripts** in `package.json`:
@@ -84,19 +81,19 @@ test('should log in successfully', async ({ page }) => {

  ### Setup Ollama

- 1. **Download and install** Ollama from [ollama.ai](https://ollama.ai)
+ 1. Download and install Ollama from [ollama.ai](https://ollama.ai)

- 2. **Pull a model** (llama3.2 recommended):
+ 2. Pull a model (llama3.2 recommended):
  ```bash
  ollama pull llama3.2
  ```

- 3. **Start Ollama** (runs automatically on macOS, or run manually):
+ 3. Start Ollama (runs automatically on macOS, or run manually):
  ```bash
  ollama serve
  ```

- 4. **Run the sync** — AI analysis is automatic when Ollama is running!
+ 4. Run the sync — AI analysis is automatic when Ollama is running!
  ```bash
  npx am-browserstack-sync --csv-only
  ```
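Before running the sync, one quick way to confirm that a local Ollama server is actually reachable is to query its model-list endpoint; a minimal sketch, assuming a stock install on the default address (the `/api/tags` route belongs to Ollama itself, not to this package):

```bash
# Should return JSON listing locally pulled models (e.g. llama3.2) if Ollama is up
curl http://localhost:11434/api/tags
```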
@@ -104,7 +101,7 @@ test('should log in successfully', async ({ page }) => {
  ### Options

  - `--no-ai` — Disable AI, use regex-based extraction instead
- - `--model=codellama` — Use a different Ollama model
+ - `--model=llama3.2` — Use a different Ollama model
  - `OLLAMA_MODEL=llama3.2` — Set default model via env variable
  - `OLLAMA_HOST=http://localhost:11434` — Custom Ollama host
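For context, the flags and environment variables listed in this hunk compose; a minimal sketch of a single non-interactive run, assuming Ollama is serving locally on its default port:

```bash
# Generate CSVs only, pinning the model and host via the documented env variables
OLLAMA_MODEL=llama3.2 OLLAMA_HOST=http://localhost:11434 npx am-browserstack-sync --csv-only
```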
@@ -170,24 +167,6 @@ Sync pushes your e2e tests into **BrowserStack Test Management** so you can trac
  - Enriches with state (Active), type (Functional), automation (automated), tags (from spec + title).
  - Writes **one CSV per spec** (test_case_id, title, state, case_type, steps, expected_results, jira_issues, automation_status, tags, description, spec_file).
  - Optionally syncs to BrowserStack with description, steps, and tags.
-
- ---
-
- ## Programmatic API
-
- ```js
- import { runSync } from '@ash-mallick/browserstack-sync';
-
- await runSync({
-   cwd: '/path/to/project',
-   csvOnly: true,
-   all: true,
-   spec: ['login.spec'],
-   useAI: true, // Enable AI analysis (default: true if Ollama is running)
-   model: 'llama3.2', // Ollama model to use
- });
- ```
-
  ---

  **Author:** Ashutosh Mallick
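Given the column list in the hunk above, a generated CSV can be spot-checked after a run; a hypothetical sketch (the file name and exact header formatting are assumptions, since the package writes one CSV per spec):

```bash
# Print the header row of a generated CSV (file name is illustrative)
head -1 login.spec.csv
# Columns documented in the README (naming/order assumed):
# test_case_id,title,state,case_type,steps,expected_results,jira_issues,automation_status,tags,description,spec_file
```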
package/bin/cli.js CHANGED
File without changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@ash-mallick/browserstack-sync",
-   "version": "1.1.1",
+   "version": "1.1.2",
    "description": "Sync Playwright & Cypress e2e specs to CSV and BrowserStack Test Management with FREE AI-powered test step extraction using Ollama (local)",
    "author": "Ashutosh Mallick",
    "type": "module",