greenrun-cli 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +100 -0
- package/dist/api-client.d.ts +63 -0
- package/dist/api-client.js +105 -0
- package/dist/cli.d.ts +2 -0
- package/dist/cli.js +63 -0
- package/dist/commands/init.d.ts +1 -0
- package/dist/commands/init.js +243 -0
- package/dist/server.d.ts +1 -0
- package/dist/server.js +128 -0
- package/package.json +44 -0
- package/templates/claude-md.md +39 -0
- package/templates/commands/greenrun-sweep.md +71 -0
- package/templates/commands/greenrun.md +73 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 Greenrun
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,100 @@
+# greenrun-cli
+
+Browser test management for Claude Code. Connects Claude to the [Greenrun](https://app.greenrun.dev) API via MCP, enabling Claude to run, create, and manage browser tests directly from your terminal.
+
+## Prerequisites
+
+- **Node.js 18+**
+- **Claude Code CLI** - [Install guide](https://docs.anthropic.com/en/docs/claude-code)
+- **Claude in Chrome extension** - Required for browser test execution. [Install from Chrome Web Store](https://chromewebstore.google.com/detail/claude-in-chrome)
+
+## Quick Start
+
+```bash
+npx greenrun-cli init
+```
+
+This interactive wizard will:
+1. Connect your Greenrun API token
+2. Configure the MCP server for Claude Code
+3. Optionally install slash commands and project instructions
+
+## How It Works
+
+Greenrun CLI is an [MCP server](https://modelcontextprotocol.io/) that gives Claude Code access to the Greenrun API. Combined with the Claude in Chrome extension for browser automation, Claude can execute your browser tests end-to-end.
+
+**Flow:** Claude Code -> MCP Server -> Greenrun API -> Test instructions -> Browser automation via Chrome extension
+
+## Slash Commands
+
+After setup, two slash commands are available in Claude Code:
+
+### `/greenrun`
+
+Runs all browser tests for the current project. Optionally pass a test name to run a single test.
+
+### `/greenrun-sweep`
+
+Impact analysis - identifies which tests are affected by recent code changes and offers to run them.
+
+## MCP Tools
+
+The server exposes these tools to Claude:
+
+| Tool | Description |
+|------|-------------|
+| `list_projects` | List all projects |
+| `create_project` | Create a new project |
+| `get_project` | Get project details |
+| `list_pages` | List pages in a project |
+| `create_page` | Register a page URL |
+| `list_tests` | List tests (with latest run status) |
+| `get_test` | Get test details and instructions |
+| `create_test` | Create a new test case |
+| `update_test` | Update test instructions or status |
+| `sweep` | Find tests affected by specific pages |
+| `start_run` | Start a test run |
+| `complete_run` | Record test run result |
+| `get_run` | Get run details |
+| `list_runs` | List run history |
+
+## Manual Setup
+
+If you prefer to configure manually instead of using `init`:
+
+```bash
+claude mcp add --transport stdio -e GREENRUN_API_TOKEN=your_token greenrun -- npx -y greenrun-cli@latest
+```
+
+Or add to your project's `.mcp.json`:
+
+```json
+{
+  "mcpServers": {
+    "greenrun": {
+      "command": "npx",
+      "args": ["-y", "greenrun-cli@latest"],
+      "env": { "GREENRUN_API_TOKEN": "${GREENRUN_API_TOKEN}" }
+    }
+  }
+}
+```
+
+## CLI Usage
+
+```
+greenrun init         Interactive setup wizard
+greenrun serve        Start MCP server explicitly
+greenrun --version    Print version
+greenrun --help       Print help
+```
+
+Non-interactive init:
+
+```bash
+npx greenrun-cli init --token gr_xxx --scope local --no-claude-md --no-commands
+```
+
+## License
+
+MIT
package/dist/api-client.d.ts
ADDED
@@ -0,0 +1,63 @@
+export interface ApiConfig {
+    baseUrl: string;
+    token: string;
+}
+export declare class ApiClient {
+    private baseUrl;
+    private token;
+    constructor(config: ApiConfig);
+    private request;
+    listProjects(): Promise<unknown>;
+    createProject(data: {
+        name: string;
+        base_url?: string;
+        description?: string;
+        concurrency?: number;
+    }): Promise<unknown>;
+    getProject(id: string): Promise<unknown>;
+    updateProject(id: string, data: {
+        name?: string;
+        base_url?: string;
+        description?: string;
+        concurrency?: number;
+    }): Promise<unknown>;
+    deleteProject(id: string): Promise<unknown>;
+    listPages(projectId: string): Promise<unknown>;
+    createPage(projectId: string, data: {
+        url: string;
+        name?: string;
+    }): Promise<unknown>;
+    updatePage(id: string, data: {
+        url?: string;
+        name?: string;
+    }): Promise<unknown>;
+    deletePage(id: string): Promise<unknown>;
+    listTests(projectId: string): Promise<unknown>;
+    createTest(projectId: string, data: {
+        name: string;
+        instructions: string;
+        page_ids?: string[];
+        status?: string;
+        tags?: string[];
+    }): Promise<unknown>;
+    getTest(id: string): Promise<unknown>;
+    updateTest(id: string, data: {
+        name?: string;
+        instructions?: string;
+        page_ids?: string[];
+        status?: string;
+        tags?: string[];
+    }): Promise<unknown>;
+    deleteTest(id: string): Promise<unknown>;
+    sweep(projectId: string, params: {
+        pages?: string[];
+        url_pattern?: string;
+    }): Promise<unknown>;
+    startRun(testId: string): Promise<unknown>;
+    completeRun(runId: string, data: {
+        status: string;
+        result?: string;
+    }): Promise<unknown>;
+    getRun(runId: string): Promise<unknown>;
+    listRuns(testId: string): Promise<unknown>;
+}
package/dist/api-client.js
ADDED
@@ -0,0 +1,105 @@
+export class ApiClient {
+    baseUrl;
+    token;
+    constructor(config) {
+        this.baseUrl = config.baseUrl.replace(/\/+$/, '');
+        this.token = config.token;
+    }
+    async request(method, path, body) {
+        const url = `${this.baseUrl}/api/v1${path}`;
+        const headers = {
+            'Authorization': `Bearer ${this.token}`,
+            'Accept': 'application/json',
+            'Content-Type': 'application/json',
+        };
+        const response = await fetch(url, {
+            method,
+            headers,
+            body: body ? JSON.stringify(body) : undefined,
+        });
+        if (!response.ok) {
+            const text = await response.text();
+            let message;
+            try {
+                const json = JSON.parse(text);
+                message = json.message || text;
+            }
+            catch {
+                message = text;
+            }
+            throw new Error(`API ${method} ${path} failed (${response.status}): ${message}`);
+        }
+        return response.json();
+    }
+    // Projects
+    async listProjects() {
+        return this.request('GET', '/projects');
+    }
+    async createProject(data) {
+        return this.request('POST', '/projects', data);
+    }
+    async getProject(id) {
+        return this.request('GET', `/projects/${id}`);
+    }
+    async updateProject(id, data) {
+        return this.request('PUT', `/projects/${id}`, data);
+    }
+    async deleteProject(id) {
+        return this.request('DELETE', `/projects/${id}`);
+    }
+    // Pages
+    async listPages(projectId) {
+        return this.request('GET', `/projects/${projectId}/pages`);
+    }
+    async createPage(projectId, data) {
+        return this.request('POST', `/projects/${projectId}/pages`, data);
+    }
+    async updatePage(id, data) {
+        return this.request('PUT', `/pages/${id}`, data);
+    }
+    async deletePage(id) {
+        return this.request('DELETE', `/pages/${id}`);
+    }
+    // Tests
+    async listTests(projectId) {
+        return this.request('GET', `/projects/${projectId}/tests`);
+    }
+    async createTest(projectId, data) {
+        return this.request('POST', `/projects/${projectId}/tests`, data);
+    }
+    async getTest(id) {
+        return this.request('GET', `/tests/${id}`);
+    }
+    async updateTest(id, data) {
+        return this.request('PUT', `/tests/${id}`, data);
+    }
+    async deleteTest(id) {
+        return this.request('DELETE', `/tests/${id}`);
+    }
+    // Sweep
+    async sweep(projectId, params) {
+        const searchParams = new URLSearchParams();
+        if (params.pages) {
+            for (const page of params.pages) {
+                searchParams.append('pages[]', page);
+            }
+        }
+        if (params.url_pattern) {
+            searchParams.set('url_pattern', params.url_pattern);
+        }
+        return this.request('GET', `/projects/${projectId}/sweep?${searchParams.toString()}`);
+    }
+    // Test Runs
+    async startRun(testId) {
+        return this.request('POST', `/tests/${testId}/runs`);
+    }
+    async completeRun(runId, data) {
+        return this.request('PUT', `/runs/${runId}`, data);
+    }
+    async getRun(runId) {
+        return this.request('GET', `/runs/${runId}`);
+    }
+    async listRuns(testId) {
+        return this.request('GET', `/tests/${testId}/runs`);
+    }
+}
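As a side note on the `sweep` method: it serializes its filters with `URLSearchParams`, which percent-encodes the `pages[]` brackets and slashes but leaves `*` intact. A standalone sketch (plain Node, nothing package-specific) of that serialization:

```javascript
// Mirrors the query-building inside sweep(): repeated pages[] entries
// plus an optional url_pattern, serialized by URLSearchParams.
const searchParams = new URLSearchParams();
for (const page of ['/checkout', '/cart']) {
  searchParams.append('pages[]', page);
}
searchParams.set('url_pattern', '/checkout*');
console.log(searchParams.toString());
// pages%5B%5D=%2Fcheckout&pages%5B%5D=%2Fcart&url_pattern=%2Fcheckout*
```

The server side must therefore decode `pages%5B%5D` back into an array parameter; the `pages[]` key name suggests a Rails/PHP-style array convention.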
package/dist/cli.d.ts
ADDED
package/dist/cli.js
ADDED
@@ -0,0 +1,63 @@
+#!/usr/bin/env node
+import { readFileSync } from 'fs';
+import { fileURLToPath } from 'url';
+import { dirname, join } from 'path';
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = dirname(__filename);
+function getVersion() {
+    const pkgPath = join(__dirname, '..', 'package.json');
+    const pkg = JSON.parse(readFileSync(pkgPath, 'utf-8'));
+    return pkg.version;
+}
+function printHelp() {
+    const version = getVersion();
+    console.log(`greenrun-cli v${version} - Browser test management for Claude Code
+
+Usage:
+  greenrun init           Interactive setup wizard
+  greenrun serve          Start MCP server
+  greenrun --version, -v  Print version
+  greenrun --help, -h     Print this help
+
+When invoked with no arguments over a pipe (non-TTY stdin),
+the MCP server starts automatically (used by Claude Code).
+
+Quick start:
+  npx greenrun-cli init
+`);
+}
+async function main() {
+    const args = process.argv.slice(2);
+    const command = args[0];
+    if (command === '--version' || command === '-v') {
+        console.log(getVersion());
+        return;
+    }
+    if (command === '--help' || command === '-h') {
+        printHelp();
+        return;
+    }
+    if (command === 'init') {
+        const { runInit } = await import('./commands/init.js');
+        await runInit(args.slice(1));
+        return;
+    }
+    if (command === 'serve') {
+        const { startServer } = await import('./server.js');
+        await startServer();
+        return;
+    }
+    // No command: auto-detect mode
+    if (!process.stdin.isTTY) {
+        // Non-TTY stdin means Claude Code is invoking us as an MCP server
+        const { startServer } = await import('./server.js');
+        await startServer();
+        return;
+    }
+    // TTY with no command: show help
+    printHelp();
+}
+main().catch((error) => {
+    console.error('Fatal error:', error);
+    process.exit(1);
+});
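The dispatch rule in `cli.js` can be restated as a tiny pure function: an explicit subcommand always wins; with no subcommand, a non-TTY stdin selects the MCP server and a TTY shows help. A sketch (`pickMode` is a hypothetical name used only here, not a function in the package):

```javascript
// Restates cli.js's no-argument auto-detect as a testable pure function.
function pickMode(command, stdinIsTTY) {
  if (command) return command;       // explicit subcommand wins
  return stdinIsTTY ? 'help' : 'serve';
}
console.log(pickMode(undefined, false)); // serve (stdin piped, e.g. by Claude Code)
console.log(pickMode(undefined, true));  // help  (run by hand in a terminal)
console.log(pickMode('init', true));     // init
```

This is why `greenrun serve` exists at all: it lets you force server mode from an interactive terminal, where the auto-detect would otherwise print help.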
package/dist/commands/init.d.ts
ADDED
@@ -0,0 +1 @@
+export declare function runInit(args: string[]): Promise<void>;
package/dist/commands/init.js
ADDED
@@ -0,0 +1,243 @@
+import { createInterface } from 'readline';
+import { execSync } from 'child_process';
+import { existsSync, readFileSync, writeFileSync, mkdirSync, appendFileSync } from 'fs';
+import { join, dirname } from 'path';
+import { fileURLToPath } from 'url';
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = dirname(__filename);
+const TEMPLATES_DIR = join(__dirname, '..', '..', 'templates');
+const APP_URL = 'https://app.greenrun.dev';
+function parseFlags(args) {
+    const opts = { claudeMd: true, commands: true };
+    for (let i = 0; i < args.length; i++) {
+        const arg = args[i];
+        if (arg === '--token' && args[i + 1]) {
+            opts.token = args[++i];
+        }
+        else if (arg === '--scope' && args[i + 1]) {
+            const val = args[++i];
+            if (val === 'local' || val === 'project')
+                opts.scope = val;
+        }
+        else if (arg === '--no-claude-md') {
+            opts.claudeMd = false;
+        }
+        else if (arg === '--no-commands') {
+            opts.commands = false;
+        }
+    }
+    return opts;
+}
+function prompt(rl, question) {
+    return new Promise((resolve) => {
+        rl.question(question, (answer) => {
+            resolve(answer.trim());
+        });
+    });
+}
+function checkPrerequisites() {
+    let claude = false;
+    try {
+        execSync('claude --version', { stdio: 'pipe' });
+        claude = true;
+    }
+    catch {
+        // not installed
+    }
+    return { claude, chromeHint: true };
+}
+async function validateToken(token) {
+    try {
+        const response = await fetch(`${APP_URL}/api/v1/projects`, {
+            headers: {
+                'Authorization': `Bearer ${token}`,
+                'Accept': 'application/json',
+            },
+        });
+        if (!response.ok)
+            return { valid: false };
+        const data = await response.json();
+        const projects = Array.isArray(data) ? data : (data.data ?? []);
+        return { valid: true, projectCount: projects.length };
+    }
+    catch {
+        return { valid: false };
+    }
+}
+function configureMcpLocal(token) {
+    execSync(`claude mcp add --transport stdio -e GREENRUN_API_TOKEN=${token} greenrun -- npx -y greenrun-cli@latest`, { stdio: 'inherit' });
+}
+function configureMcpProject(token) {
+    const mcpConfig = {
+        mcpServers: {
+            greenrun: {
+                command: 'npx',
+                args: ['-y', 'greenrun-cli@latest'],
+                env: { GREENRUN_API_TOKEN: '${GREENRUN_API_TOKEN}' },
+            },
+        },
+    };
+    const mcpPath = join(process.cwd(), '.mcp.json');
+    let existing = {};
+    if (existsSync(mcpPath)) {
+        try {
+            existing = JSON.parse(readFileSync(mcpPath, 'utf-8'));
+        }
+        catch {
+            // overwrite invalid JSON
+        }
+    }
+    existing.mcpServers = existing.mcpServers || {};
+    existing.mcpServers.greenrun = mcpConfig.mcpServers.greenrun;
+    writeFileSync(mcpPath, JSON.stringify(existing, null, 2) + '\n');
+    console.log('  Created .mcp.json');
+    // Add token to .env
+    const envPath = join(process.cwd(), '.env');
+    const envLine = `GREENRUN_API_TOKEN=${token}`;
+    if (existsSync(envPath)) {
+        const envContent = readFileSync(envPath, 'utf-8');
+        if (!envContent.includes('GREENRUN_API_TOKEN=')) {
+            appendFileSync(envPath, `\n${envLine}\n`);
+            console.log('  Added GREENRUN_API_TOKEN to .env');
+        }
+        else {
+            console.log('  GREENRUN_API_TOKEN already in .env (not modified)');
+        }
+    }
+    else {
+        writeFileSync(envPath, `${envLine}\n`);
+        console.log('  Created .env with GREENRUN_API_TOKEN');
+    }
+}
+function installClaudeMd() {
+    const templatePath = join(TEMPLATES_DIR, 'claude-md.md');
+    if (!existsSync(templatePath)) {
+        console.log('  Warning: CLAUDE.md template not found, skipping');
+        return;
+    }
+    const snippet = readFileSync(templatePath, 'utf-8');
+    const claudeMdPath = join(process.cwd(), 'CLAUDE.md');
+    if (existsSync(claudeMdPath)) {
+        const existing = readFileSync(claudeMdPath, 'utf-8');
+        if (existing.includes('## Greenrun')) {
+            console.log('  CLAUDE.md already contains Greenrun section, skipping');
+            return;
+        }
+        appendFileSync(claudeMdPath, '\n' + snippet);
+        console.log('  Appended Greenrun instructions to CLAUDE.md');
+    }
+    else {
+        writeFileSync(claudeMdPath, snippet);
+        console.log('  Created CLAUDE.md with Greenrun instructions');
+    }
+}
+function installCommands() {
+    const commandsDir = join(process.cwd(), '.claude', 'commands');
+    mkdirSync(commandsDir, { recursive: true });
+    const commands = ['greenrun.md', 'greenrun-sweep.md'];
+    for (const cmd of commands) {
+        const src = join(TEMPLATES_DIR, 'commands', cmd);
+        if (!existsSync(src)) {
+            console.log(`  Warning: ${cmd} template not found, skipping`);
+            continue;
+        }
+        const dest = join(commandsDir, cmd);
+        writeFileSync(dest, readFileSync(src, 'utf-8'));
+        console.log(`  Installed /${cmd.replace('.md', '')}`);
+    }
+}
+export async function runInit(args) {
+    const opts = parseFlags(args);
+    const interactive = !opts.token;
+    console.log('\nGreenrun - Browser Test Management for Claude Code\n');
+    // Prerequisites
+    console.log('Prerequisites:');
+    const prereqs = checkPrerequisites();
+    if (prereqs.claude) {
+        console.log('  [x] Claude Code CLI installed');
+    }
+    else {
+        console.log('  [ ] Claude Code CLI not found');
+        console.log('      Install it: https://docs.anthropic.com/en/docs/claude-code');
+        if (interactive) {
+            console.log('\nClaude Code is required. Please install it and run this command again.');
+            process.exit(1);
+        }
+    }
+    console.log('  [i] Claude in Chrome extension required for browser test execution');
+    console.log('      Get it at: https://chromewebstore.google.com/detail/claude-in-chrome\n');
+    let token = opts.token;
+    let scope = opts.scope;
+    if (interactive) {
+        const rl = createInterface({ input: process.stdin, output: process.stdout });
+        // Step 1: Token
+        console.log('Step 1: API Token');
+        console.log(`  Get your token at: ${APP_URL}/tokens`);
+        token = await prompt(rl, '  Paste your token: ');
+        if (!token) {
+            console.log('  No token provided. Aborting.');
+            rl.close();
+            process.exit(1);
+        }
+        process.stdout.write('  Validating... ');
+        const validation = await validateToken(token);
+        if (!validation.valid) {
+            console.log('Failed! Invalid token or cannot reach the API.');
+            rl.close();
+            process.exit(1);
+        }
+        console.log(`Connected! (${validation.projectCount} project${validation.projectCount === 1 ? '' : 's'} found)\n`);
+        // Step 2: Scope
+        console.log('Step 2: MCP Configuration');
+        console.log('  [1] Local config (recommended) - token stored in ~/.claude.json');
+        console.log('  [2] Project config (.mcp.json) - token via env var');
+        const scopeChoice = await prompt(rl, '  Choice [1]: ');
+        scope = scopeChoice === '2' ? 'project' : 'local';
+        console.log();
+        // Step 3: Extras
+        console.log('Step 3: Extras (optional)');
+        const claudeMdAnswer = await prompt(rl, '  Add Greenrun instructions to CLAUDE.md? [Y/n]: ');
+        opts.claudeMd = claudeMdAnswer.toLowerCase() !== 'n';
+        const commandsAnswer = await prompt(rl, '  Install slash commands? [Y/n]: ');
+        opts.commands = commandsAnswer.toLowerCase() !== 'n';
+        console.log();
+        rl.close();
+    }
+    else {
+        // Non-interactive: validate token
+        if (!token) {
+            console.error('Error: --token is required for non-interactive mode');
+            process.exit(1);
+        }
+        process.stdout.write('Validating token... ');
+        const validation = await validateToken(token);
+        if (!validation.valid) {
+            console.log('Failed!');
+            process.exit(1);
+        }
+        console.log(`Connected! (${validation.projectCount} project${validation.projectCount === 1 ? '' : 's'} found)`);
+        scope = scope || 'local';
+    }
+    // Configure MCP
+    console.log('Configuring MCP server...');
+    if (scope === 'project') {
+        configureMcpProject(token);
+    }
+    else {
+        configureMcpLocal(token);
+    }
+    console.log('  MCP server configured.\n');
+    // Install extras
+    if (opts.claudeMd) {
+        installClaudeMd();
+    }
+    if (opts.commands) {
+        installCommands();
+    }
+    console.log(`
+Done! Restart Claude Code to connect.
+
+Make sure Chrome is open with the Claude in Chrome extension active
+before running /greenrun - Claude needs browser access to execute tests.
+`);
+}
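Worth noting in `configureMcpProject`: the `.env` handling is idempotent, appending the token line only when no `GREENRUN_API_TOKEN` entry exists yet, so re-running `init` never duplicates or overwrites the token. A sketch of that rule as a pure function (`upsertEnv` is a hypothetical helper for illustration, not a function in the package):

```javascript
// Pure-function version of init's .env guard: append the token line
// only when the file has no GREENRUN_API_TOKEN entry; otherwise leave
// the content exactly as it was.
function upsertEnv(envContent, token) {
  const envLine = `GREENRUN_API_TOKEN=${token}`;
  if (envContent.includes('GREENRUN_API_TOKEN=')) {
    return envContent; // existing entry wins, matching the package's behavior
  }
  return envContent + `\n${envLine}\n`;
}
console.log(upsertEnv('NODE_ENV=test\n', 'gr_xxx'));       // token line appended
console.log(upsertEnv('GREENRUN_API_TOKEN=old\n', 'gr_xxx')); // unchanged
```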
package/dist/server.d.ts
ADDED
@@ -0,0 +1 @@
+export declare function startServer(): Promise<void>;
package/dist/server.js
ADDED
|
@@ -0,0 +1,128 @@
|
|
|
1
|
+
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
|
|
2
|
+
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
|
|
3
|
+
import { z } from 'zod';
|
|
4
|
+
import { ApiClient } from './api-client.js';
|
|
5
|
+
export async function startServer() {
|
|
6
|
+
const GREENRUN_API_URL = process.env.GREENRUN_API_URL || 'https://app.greenrun.dev';
|
|
7
|
+
const GREENRUN_API_TOKEN = process.env.GREENRUN_API_TOKEN;
|
|
8
|
+
if (!GREENRUN_API_TOKEN) {
|
|
9
|
+
console.error('Error: GREENRUN_API_TOKEN environment variable is required');
|
|
10
|
+
process.exit(1);
|
|
11
|
+
}
|
|
12
|
+
const api = new ApiClient({
|
|
13
|
+
baseUrl: GREENRUN_API_URL,
|
|
14
|
+
token: GREENRUN_API_TOKEN,
|
|
15
|
+
});
|
|
16
|
+
const server = new McpServer({
|
|
17
|
+
name: 'greenrun',
|
|
18
|
+
version: '0.1.0',
|
|
19
|
+
});
|
|
20
|
+
// --- Projects ---
|
|
21
|
+
server.tool('list_projects', 'List all projects', {}, async () => {
|
|
22
|
+
const result = await api.listProjects();
|
|
23
|
+
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
|
|
24
|
+
});
|
|
25
|
+
server.tool('create_project', 'Create a new project', {
|
|
26
|
+
name: z.string().describe('Project name'),
|
|
27
|
+
base_url: z.string().optional().describe('Base URL of the site (e.g. https://myapp.com)'),
|
|
28
|
+
description: z.string().optional().describe('Project description'),
|
|
29
|
+
concurrency: z.number().int().min(1).max(20).optional().describe('Number of tests to run in parallel (default: 5)'),
|
|
30
|
+
}, async (args) => {
|
|
31
|
+
const result = await api.createProject(args);
|
|
32
|
+
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
|
|
33
|
+
});
|
|
34
|
+
server.tool('get_project', 'Get project details', { project_id: z.string().describe('Project UUID') }, async (args) => {
|
|
35
|
+
const result = await api.getProject(args.project_id);
|
|
36
|
+
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
|
|
37
|
+
});
|
|
38
|
+
// --- Pages ---
|
|
39
|
+
server.tool('list_pages', 'List pages in a project', { project_id: z.string().describe('Project UUID') }, async (args) => {
|
|
40
|
+
const result = await api.listPages(args.project_id);
|
|
41
|
+
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
|
|
42
|
+
});
|
|
43
|
+
server.tool('create_page', 'Register a page URL in a project', {
|
|
44
|
+
project_id: z.string().describe('Project UUID'),
|
|
45
|
+
url: z.string().describe('Page URL (absolute or relative to project base_url)'),
|
|
46
|
+
name: z.string().optional().describe('Human-friendly page name'),
|
|
47
|
+
}, async (args) => {
|
|
48
|
+
const result = await api.createPage(args.project_id, { url: args.url, name: args.name });
|
|
49
|
+
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
|
|
50
|
+
});
|
|
51
|
+
// --- Tests ---
|
|
52
|
+
server.tool('list_tests', 'List tests in a project (includes latest run status)', { project_id: z.string().describe('Project UUID') }, async (args) => {
|
|
53
|
+
const result = await api.listTests(args.project_id);
|
|
54
|
+
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
|
|
55
|
+
});
|
|
56
|
+
server.tool('get_test', 'Get test details including instructions, pages, and recent runs', { test_id: z.string().describe('Test UUID') }, async (args) => {
|
|
57
|
+
const result = await api.getTest(args.test_id);
|
|
58
|
+
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
|
|
59
|
+
});
|
|
60
|
+
server.tool('create_test', 'Store a new test case in a project', {
|
|
61
|
+
project_id: z.string().describe('Project UUID'),
|
|
62
|
+
name: z.string().describe('Test name (e.g. "User can log in")'),
|
|
63
|
+
instructions: z.string().describe('Complete test instructions as plain text'),
|
|
64
|
+
page_ids: z.array(z.string()).optional().describe('UUIDs of pages this test covers'),
|
|
65
|
+
status: z.enum(['draft', 'active', 'archived']).optional().describe('Test status (default: active)'),
|
|
66
|
+
tags: z.array(z.string()).optional().describe('Tag names for organizing tests (e.g. ["smoke", "auth"])'),
|
|
67
|
+
}, async (args) => {
|
|
68
|
+
const result = await api.createTest(args.project_id, {
|
|
69
|
+
name: args.name,
|
|
70
|
+
instructions: args.instructions,
|
|
71
|
+
page_ids: args.page_ids,
|
|
72
|
+
status: args.status,
|
|
73
|
+
tags: args.tags,
|
|
74
|
+
});
|
|
75
|
+
return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
|
|
76
|
+
});
|
|
77
|
+
server.tool('update_test', 'Update test instructions, name, status, or page associations', {
|
|
78
|
+
test_id: z.string().describe('Test UUID'),
|
|
79
|
+
name: z.string().optional().describe('Updated test name'),
|
|
80
|
+
instructions: z.string().optional().describe('Updated test instructions'),
|
|
81
|
+
```js
    page_ids: z.array(z.string()).optional().describe('Updated page UUIDs (replaces existing)'),
    status: z.enum(['draft', 'active', 'archived']).optional().describe('Updated status'),
    tags: z.array(z.string()).optional().describe('Updated tag names (replaces existing tags)'),
  }, async (args) => {
    const { test_id, ...data } = args;
    const result = await api.updateTest(test_id, data);
    return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
  });

  // --- Sweep ---
  server.tool('sweep', 'Find tests affected by specific pages (impact analysis). Use after making changes to determine which tests to re-run.', {
    project_id: z.string().describe('Project UUID'),
    pages: z.array(z.string()).optional().describe('Exact page URLs to match'),
    url_pattern: z.string().optional().describe('Glob-style URL pattern (e.g. /checkout*)'),
  }, async (args) => {
    const result = await api.sweep(args.project_id, {
      pages: args.pages,
      url_pattern: args.url_pattern,
    });
    return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
  });

  // --- Test Runs ---
  server.tool('start_run', 'Start a test run (sets status to running)', { test_id: z.string().describe('Test UUID') }, async (args) => {
    const result = await api.startRun(args.test_id);
    return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
  });

  server.tool('complete_run', 'Record the result of a test run', {
    run_id: z.string().describe('Run UUID'),
    status: z.enum(['passed', 'failed', 'error']).describe('Run result status'),
    result: z.string().optional().describe('Summary of what happened during the run'),
  }, async (args) => {
    const result = await api.completeRun(args.run_id, {
      status: args.status,
      result: args.result,
    });
    return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
  });

  server.tool('get_run', 'Get details of a specific test run', { run_id: z.string().describe('Run UUID') }, async (args) => {
    const result = await api.getRun(args.run_id);
    return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
  });

  server.tool('list_runs', 'List run history for a test (newest first)', { test_id: z.string().describe('Test UUID') }, async (args) => {
    const result = await api.listRuns(args.test_id);
    return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
  });

  // Start server
  const transport = new StdioServerTransport();
  await server.connect(transport);
}
```
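Every handler in server.js wraps its result in the same MCP text-content envelope. A small helper like the following (a sketch, not part of the package) would factor out that repetition:

```javascript
// Sketch only: the package repeats this envelope inline in every handler.
// Wraps arbitrary data in the text-content shape the tool callbacks return.
function jsonResult(data) {
  return { content: [{ type: 'text', text: JSON.stringify(data, null, 2) }] };
}

// A handler body could then shrink to:
//   const result = await api.getRun(args.run_id);
//   return jsonResult(result);
```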
package/package.json
ADDED
|
@@ -0,0 +1,44 @@
```json
{
  "name": "greenrun-cli",
  "version": "0.1.0",
  "description": "CLI and MCP server for Greenrun - browser test management for Claude Code",
  "type": "module",
  "main": "dist/server.js",
  "bin": {
    "greenrun": "dist/cli.js"
  },
  "files": [
    "dist",
    "templates"
  ],
  "scripts": {
    "build": "tsc",
    "dev": "tsc --watch",
    "start": "node dist/cli.js"
  },
  "keywords": [
    "greenrun",
    "mcp",
    "claude",
    "testing",
    "browser-testing",
    "claude-code"
  ],
  "author": "Greenrun",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/Add-Item-To/greenrun-cli.git"
  },
  "homepage": "https://app.greenrun.dev",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "typescript": "^5.7.0"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}
```
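The `engines` field declares Node 18 or newer. npm only warns on a mismatch, so a startup guard like this (hypothetical, not shipped in the package) would fail fast on older runtimes:

```javascript
// Hypothetical guard matching "engines": { "node": ">=18.0.0" } above.
// process.versions.node is a string like "20.11.1"; compare the major version only.
function meetsNodeEngine(version, minMajor = 18) {
  return Number(version.split('.')[0]) >= minMajor;
}

if (!meetsNodeEngine(process.versions.node)) {
  console.error(`greenrun-cli requires Node >= 18, found ${process.versions.node}`);
  process.exit(1);
}
```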
package/templates/claude-md.md
ADDED
@@ -0,0 +1,39 @@
```markdown
## Greenrun - Browser Test Management

### Prerequisites

- **Claude in Chrome extension** must be installed and active in your browser for test execution
- MCP server must be connected (check with `/mcp` in Claude Code)

### Available MCP Tools

The Greenrun MCP server provides these tools:

- **list_projects** / **get_project** / **create_project** - Manage projects
- **list_pages** / **create_page** - Manage page URLs within a project
- **list_tests** / **get_test** / **create_test** / **update_test** - Manage test cases
- **start_run** / **complete_run** / **get_run** / **list_runs** - Execute and track test runs
- **sweep** - Impact analysis: find tests affected by changed pages

### Running Tests

To run tests for this project:

1. Use `list_projects` to find the project, then `list_tests` to get all tests
2. For each test, call `get_test` to retrieve the full instructions
3. Call `start_run` to begin a run (returns a run ID)
4. Execute the test instructions using browser automation (Claude in Chrome)
5. Call `complete_run` with the run ID, status (passed/failed/error), and a result summary

Or use the `/greenrun` slash command to run all tests automatically.

### Creating Tests

1. Navigate to the page you want to test in Chrome
2. Write clear, step-by-step test instructions describing what to do and what to verify
3. Use `create_page` to register the page URL if not already registered
4. Use `create_test` with the instructions and page IDs

### Impact Analysis

After making code changes, use the `/greenrun-sweep` command or the `sweep` tool to find which tests are affected by the pages you changed. This helps you run only the relevant tests.
```
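The run lifecycle in "Running Tests" above can be sketched as one function. The `tools` object and `execute` callback are stand-ins for the MCP tool calls and the browser-automation step; only the tool names and status values come from the template:

```javascript
// Sketch of the lifecycle: get_test -> start_run -> execute -> complete_run.
// `tools` is a hypothetical client exposing the MCP tools by name;
// `execute` stands in for running the instructions in the browser.
async function runTest(tools, execute, testId) {
  const test = await tools.get_test({ test_id: testId });
  const { run_id } = await tools.start_run({ test_id: testId });
  try {
    // Assumed outcome shape: { ok: boolean, summary: string }
    const outcome = await execute(test.instructions);
    const status = outcome.ok ? 'passed' : 'failed';
    await tools.complete_run({ run_id, status, result: outcome.summary });
    return status;
  } catch (err) {
    // Execution was blocked entirely: record an error run.
    await tools.complete_run({ run_id, status: 'error', result: String(err) });
    return 'error';
  }
}
```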
package/templates/commands/greenrun-sweep.md
ADDED
@@ -0,0 +1,71 @@
````markdown
Run Greenrun impact analysis to find tests affected by recent code changes.

## Instructions

You are performing impact analysis to determine which browser tests need to be re-run based on code changes. Tests are linked to pages (URL paths), with tags as organizational metadata; sweep uses page associations to find affected tests.

### 1. Find changed files

Run `git diff --name-only HEAD~1` (or `git diff --name-only` for unstaged changes) to identify which files have changed. If the user specified a commit range as an argument ("$ARGUMENTS"), use that instead.

### 2. Find the project

Call `list_projects` to get all projects. Match the current project by name or base URL.

### 3. Map changes to pages

Call `list_pages` for the project. Look at the changed files and determine which page URLs they likely affect. Consider:
- View/template files -> the routes they render
- Controller/API files -> the pages that call those endpoints
- Component files -> pages that use those components
- CSS/JS assets -> pages that include them

### 4. Run sweep

Call `sweep` with the project ID and either:
- `pages`: specific page URLs that match the changes
- `url_pattern`: a glob pattern matching affected URLs

### 5. Report results

Present the affected tests:

| Test | Pages | Tags | Last Status |
|------|-------|------|-------------|
| Test name | Affected page URLs | tag1, tag2 | passed/failed/never run |

### 6. Offer to run

Ask the user if they want to run the affected tests. If yes, execute them **in parallel** using the same approach as the `/greenrun` command:

Use the project's `concurrency` setting (default: 5) to determine batch size. Split affected tests into batches and launch each batch simultaneously using the **Task tool** with `run_in_background: true`.

For each test in a batch, launch a background agent with this prompt:

```
You are executing a single Greenrun browser test. You have access to browser automation tools and Greenrun MCP tools.

**Test: {test_name}** (ID: {test_id})

Step 1: Call `get_test` with test_id "{test_id}" to get full instructions.
Step 2: Call `start_run` with test_id "{test_id}" to begin - save the returned `run_id`.
Step 3: Execute the test instructions using browser automation:
- Create a new browser tab for this test
- Follow each instruction step exactly as written
- The instructions will tell you where to navigate and what to do
- Observe results and take screenshots as needed for verification
Step 4: Call `complete_run` with:
- run_id: the run ID from step 2
- status: "passed" if all checks succeeded, "failed" if any check failed, "error" if execution was blocked
- result: a brief summary of what happened

Return a single line summary: {test_name} | {status} | {result_summary}
```

Wait for each batch to complete before launching the next. After all tests finish, present a summary table:

| Test | Pages | Tags | Status | Result |
|------|-------|------|--------|--------|
| Test name | Affected page URLs | tag1, tag2 | passed/failed/error | Brief summary |

Include the total count: "X passed, Y failed, Z errors out of N tests"
````
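The `url_pattern` argument in step 4 is glob-style (e.g. `/checkout*`). The template does not show how the server interprets it; a minimal client-side equivalent, assuming `*` is the only wildcard, might look like:

```javascript
// Assumption: '*' matches any run of characters and is the only wildcard.
// The real server-side matching for url_pattern may differ.
function globToRegExp(pattern) {
  // Escape regex metacharacters, then turn each '*' into '.*'.
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
}

const re = globToRegExp('/checkout*');
// Matches '/checkout' and '/checkout/payment', but not '/cart'.
```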
package/templates/commands/greenrun.md
ADDED
@@ -0,0 +1,73 @@
````markdown
Run Greenrun browser tests for this project in parallel.

## Instructions

You are executing browser tests managed by Greenrun. Tests run in parallel using background agents, each with its own browser tab. Follow these steps precisely:

### 1. Find the project

Call `list_projects` to get all projects. Match the current project by name or base URL. If no match is found, tell the user and stop.

Note the project's `concurrency` value (default: 5). This controls how many tests run simultaneously.

### 2. Get tests

Call `list_tests` with the project ID. Each test has associated pages and tags, which are organizational metadata for filtering.

If the user specified an argument ("$ARGUMENTS"), use it to filter tests:
- If it starts with `/` (e.g. `/checkout`), filter to tests linked to a page matching that URL
- If it starts with `tag:` (e.g. `tag:smoke`), filter to tests with that tag
- Otherwise, treat it as a test name filter

If no argument is given, run all active tests.

If there are no matching active tests, tell the user and stop.

### 3. Execute tests in parallel

Split the test list into batches of size `concurrency` (from the project settings).

For each batch, launch all tests simultaneously using the **Task tool** with `run_in_background: true`. Each background agent receives a prompt containing everything it needs to execute one test independently:

```
For each test in the current batch, call the Task tool with:
- subagent_type: "general-purpose"
- run_in_background: true
- prompt: (see below)
```

The prompt for each background agent should be:

```
You are executing a single Greenrun browser test. You have access to browser automation tools and Greenrun MCP tools.

**Test: {test_name}** (ID: {test_id})

Step 1: Call `get_test` with test_id "{test_id}" to get full instructions.
Step 2: Call `start_run` with test_id "{test_id}" to begin - save the returned `run_id`.
Step 3: Execute the test instructions using browser automation:
- Create a new browser tab for this test
- Follow each instruction step exactly as written
- The instructions will tell you where to navigate and what to do
- Observe results and take screenshots as needed for verification
Step 4: Call `complete_run` with:
- run_id: the run ID from step 2
- status: "passed" if all checks succeeded, "failed" if any check failed, "error" if execution was blocked
- result: a brief summary of what happened

Return a single line summary: {test_name} | {status} | {result_summary}
```

After launching all agents in a batch, wait for them all to complete (use `TaskOutput` to collect results) before launching the next batch.

### 4. Summarize results

After all batches complete, collect results from all background agents and present a summary table:

| Test | Pages | Tags | Status | Result |
|------|-------|------|--------|--------|
| Test name | /login, /dashboard | smoke, auth | passed/failed/error | Brief summary |

Include the total count: "X passed, Y failed, Z errors out of N tests"

If any tests failed, highlight what went wrong and suggest next steps.
````
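The batching scheme in step 3 - groups of at most `concurrency` tests, each group awaited before the next starts - can be sketched as:

```javascript
// Split items into consecutive batches of at most `size`.
function toBatches(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Run `runOne` over all tests with at most `concurrency` in flight,
// waiting for each batch to finish before starting the next.
// `runOne` stands in for launching one background agent.
async function runInBatches(tests, concurrency, runOne) {
  const results = [];
  for (const batch of toBatches(tests, concurrency)) {
    results.push(...(await Promise.all(batch.map(runOne))));
  }
  return results;
}
```

A fixed-batch scheme is simpler than a rolling worker pool, at the cost of idle slots while a batch's slowest test finishes.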