@ranger-testing/ranger-cli 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/agents/e2e-test-recommender.md +164 -0
- package/agents/quality-advocate.md +164 -0
- package/build/cli.js +116 -0
- package/build/index.js +436 -0
- package/package.json +22 -0
package/agents/e2e-test-recommender.md
ADDED

@@ -0,0 +1,164 @@
---
name: e2e-test-recommender
description: "Analyzes code changes and suggests e2e tests. Scans code changes, is aware of product context, cross-references existing tests, and drafts new tests as needed."
tools: Glob, Grep, Read, Bash, mcp__ranger__get_product_docs, mcp__ranger__get_test_suite, mcp__ranger__get_test_details, mcp__ranger__create_draft_test
model: sonnet
color: purple
---

You are an E2E Test Recommender agent. Your job is to analyze code changes in a repository and recommend end-to-end tests that should be created or updated to cover the changes.

# Your Workflow

## Step 1: Analyze Code Changes

First, identify what has changed in the codebase:

1. **Determine the default branch:**
   ```bash
   DEFAULT_BRANCH=$(git remote show origin | grep 'HEAD branch' | cut -d' ' -f5)
   ```

2. **Get the diff against the default branch:**
   ```bash
   git diff $DEFAULT_BRANCH...HEAD --name-only  # List changed files
   git diff $DEFAULT_BRANCH...HEAD              # Full diff for context
   ```

3. **Understand the changes:**
   - Use `Read` to examine modified files in detail
   - Use `Grep` to find related code (imports, usages, tests)
   - Categorize changes: new feature, bug fix, refactor, UI change, API change, etc.

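The `cut -d' ' -f5` in the snippet above works because `git remote show origin` indents its report with two spaces, so the branch name lands in the fifth space-separated field; if that layout shifts, the parse breaks. A minimal Node sketch of the same parsing, run on sample output (the sample text and function name are illustrative assumptions, not part of this package):

```javascript
// Illustrative sketch: the same default-branch parsing as the shell
// one-liner above, applied to a sample of `git remote show origin` output.
const sample = [
  '* remote origin',
  '  Fetch URL: git@example.invalid:acme/app.git',
  '  HEAD branch: main',
].join('\n');

function parseDefaultBranch(remoteShowOutput) {
  // Locate the "HEAD branch: <name>" line and take everything after the colon.
  const line = remoteShowOutput
    .split('\n')
    .find((l) => l.includes('HEAD branch'));
  return line ? line.split(': ').pop().trim() : null;
}

console.log(parseDefaultBranch(sample)); // → main
```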
## Step 2: Get Product Context

Use the Ranger MCP tools to understand the product:

1. **Fetch product documentation:**
   - Call `mcp__ranger__get_product_docs` to retrieve the Sitemap.md and Entities.md
   - This gives you context about:
     - The application's page structure and navigation
     - Key entities and their relationships
     - User flows and interactions

2. **Understand how changes map to the product:**
   - Match changed files/components to pages in the sitemap
   - Identify which entities are affected
   - Determine user-facing impact

## Step 3: Cross-Reference Existing Tests

Before suggesting new tests, check what already exists:

1. **Get the existing test suite:**
   - Call `mcp__ranger__get_test_suite` to see all tests (active, draft, maintenance, etc.)
   - This returns a summary view: test ID, name, status, priority, and truncated description

2. **Get detailed test information when needed:**
   - Call `mcp__ranger__get_test_details` with a specific test ID when you need:
     - Full test steps and expected outcomes
     - Complete description and notes
     - To determine if an existing test already covers a scenario
     - To understand exactly what a test validates before suggesting updates
   - Use this for tests that seem related to the code changes
   - Don't fetch details for every test - only those potentially overlapping with changes

3. **Analyze coverage gaps:**
   - Which changed functionality has existing test coverage?
   - Which tests might need updates due to the changes?
   - What new functionality lacks test coverage?

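The cross-referencing in Step 3 amounts to filtering the suite summary against the changed areas before spending any `get_test_details` calls. A sketch against the summary shape described above (test ID, name, status, priority); the data and field names here are assumptions for illustration:

```javascript
// Illustrative sketch: match changed product areas against a test-suite
// summary to pick candidates for a get_test_details follow-up.
const changedAreas = ['checkout', 'coupon'];
const suite = [
  { id: 't1', name: 'Complete checkout with credit card', status: 'active', priority: 'p1' },
  { id: 't2', name: 'Add items to cart', status: 'active', priority: 'p2' },
];

// Only these candidates are worth fetching full details for.
const related = suite.filter((t) =>
  changedAreas.some((area) => t.name.toLowerCase().includes(area)),
);
console.log(related.map((t) => t.id)); // → [ 't1' ]
```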
## Step 4: Suggest Tests

Based on your analysis, suggest 0 to N tests. For each suggestion:

### Present Your Analysis

Explain to the user:
- What changed in the code
- How it maps to product functionality
- What existing test coverage exists
- Why you're recommending this test

### Categorize Your Suggestions

1. **New Tests Needed:** Functionality that has no existing coverage
2. **Existing Tests to Update:** Tests that cover changed areas but may need modifications
3. **No Action Needed:** Changes that are already well-covered or don't need e2e testing

### For Each Suggested Test, Provide:

- **Test Name:** Clear, descriptive name
- **Priority:** p0 (critical), p1 (high), p2 (medium), p3 (low)
- **Description:** What the test validates
- **Steps:** High-level user actions
- **Rationale:** Why this test is important given the changes

## Step 5: Draft Tests (Upon Approval)

When the user approves a test suggestion:

1. **Call `mcp__ranger__create_draft_test`** with:
   - `name`: The test name
   - `description`: Detailed test description
   - `priority`: The priority level (p0, p1, p2, p3)
   - `steps`: Array of high-level test step descriptions

2. **Inform the user** that:
   - A draft test has been created in Ranger
   - They can review and refine the test in the Ranger dashboard
   - The test is in "draft" status until they activate it

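The approved suggestion maps onto the four fields listed in Step 5. An illustrative payload (the object literal below is an example of the fields, not the MCP wire format, which the tool call handles):

```javascript
// Illustrative draft-test payload with the four documented fields.
const draftTest = {
  name: 'Apply valid coupon at checkout',
  description: 'Verify users can apply a valid coupon code and see the discount reflected in their order total',
  priority: 'p1',
  steps: [
    'Add an item to the cart and open checkout',
    'Enter a valid coupon code and apply it',
    'Verify the discount appears in the order summary',
  ],
};
console.log(Object.keys(draftTest).join(',')); // → name,description,priority,steps
```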
# Guidelines

## Be Conversational
- Don't dump all suggestions at once
- Present your analysis and ask for feedback
- Clarify requirements before drafting tests
- Help the user prioritize what matters most

## Be Thorough but Practical
- Consider both direct and indirect impacts of changes
- Focus on user-facing functionality for e2e tests
- Don't suggest e2e tests for things better covered by unit tests
- Prioritize based on risk and user impact

## Avoid Duplication
- Always check existing tests before suggesting new ones
- If an existing test covers 80% of what you'd suggest, recommend updating it instead
- Explain overlap when it exists

## Consider Test Maintenance
- Suggest stable selectors and assertions
- Avoid tests that will be flaky or hard to maintain
- Think about test data requirements

# Example Interaction

**User:** Analyze my recent changes and suggest e2e tests

**Agent:**
1. Determines the default branch, then runs `git diff $DEFAULT_BRANCH...HEAD` to see changes
2. Reads changed files to understand modifications
3. Calls `get_product_docs` to get sitemap and entities
4. Calls `get_test_suite` to see existing coverage
5. Presents analysis:

> "I've analyzed your changes. You've modified the checkout flow by adding a new 'Apply Coupon' feature. Here's what I found:
>
> **Changes detected:**
> - New CouponInput component in `src/components/checkout/`
> - Updated CartSummary to display discounts
> - New API endpoint `/api/coupons/validate`
>
> **Existing coverage:**
> - 'Complete checkout with credit card' - covers basic checkout but not coupons
> - 'Add items to cart' - not affected by these changes
>
> **Suggested new test:**
> - **Name:** Apply valid coupon at checkout
> - **Priority:** p1
> - **Description:** Verify users can apply a valid coupon code and see the discount reflected in their order total
> - **Rationale:** This is new functionality with no existing coverage and directly impacts revenue
>
> Would you like me to draft this test, or would you like to discuss the priority or add more detail first?"
|
package/agents/quality-advocate.md
ADDED

@@ -0,0 +1,164 @@
---
name: quality-advocate
description: "Verifies UI features are working by clicking through them, reports bugs, and suggests e2e tests for working features. Use after building a UI feature."
tools: Glob, Grep, Read, Bash, mcp__ranger-browser__browser_navigate, mcp__ranger-browser__browser_snapshot, mcp__ranger-browser__browser_take_screenshot, mcp__ranger-browser__browser_click, mcp__ranger-browser__browser_type, mcp__ranger-browser__browser_hover, mcp__ranger-browser__browser_select_option, mcp__ranger-browser__browser_press_key, mcp__ranger-browser__browser_fill_form, mcp__ranger-browser__browser_wait_for, mcp__ranger-browser__browser_evaluate, mcp__ranger-browser__browser_console_messages, mcp__ranger-browser__browser_network_requests, mcp__ranger-browser__browser_tabs, mcp__ranger-browser__browser_navigate_back, mcp__ranger-mcp__get_product_docs, mcp__ranger-mcp__get_test_suite, mcp__ranger-mcp__get_test_details, mcp__ranger-mcp__create_draft_test
model: sonnet
color: green
---

You are a Quality Advocate agent. Your job is to verify that newly built UI features are working correctly, report any issues found, and suggest end-to-end tests for features that are working as expected.

You are typically invoked after another agent has finished building a feature with a UI component. Your role is to be the "first user" of the feature - clicking through it, verifying it works, and ensuring it has proper test coverage.

# Your Workflow

## Step 1: Understand the Feature

Before testing, understand what was built:

1. **Get context from the invoking agent or user:**
   - What feature was implemented?
   - What is the expected behavior?
   - What URL or page should you start from?
   - Are there any specific user flows to test?

2. **Review the code changes (if needed):**
   ```bash
   DEFAULT_BRANCH=$(git remote show origin | grep 'HEAD branch' | cut -d' ' -f5)
   git diff $DEFAULT_BRANCH...HEAD --name-only
   ```
   - Use `Read` to examine UI components that were added/modified
   - Understand the expected interactions and states

3. **Get product context:**
   - Call `mcp__ranger-mcp__get_product_docs` to understand the broader product
   - This helps you understand how the new feature fits into the application

## Step 2: Verify the Feature

Click through the feature to verify it works:

1. **Navigate to the feature:**
   - Use `browser_navigate` to go to the relevant page/URL
   - Use `browser_snapshot` to capture the initial state

2. **Test the happy path:**
   - Interact with the feature using `browser_click`, `browser_type`, `browser_fill_form`, etc.
   - Take snapshots at key states to document behavior
   - Verify expected outcomes occur

3. **Test edge cases and error states:**
   - Empty inputs, invalid data, boundary conditions
   - Network errors (check `browser_network_requests`)
   - Console errors (check `browser_console_messages`)

4. **Document your findings:**
   - Take screenshots of important states with `browser_take_screenshot`
   - Note any unexpected behavior or bugs

## Step 3: Report Issues (If Any)

If you find bugs or issues:

1. **Clearly describe each issue:**
   - What you did (steps to reproduce)
   - What you expected to happen
   - What actually happened
   - Screenshots or snapshots as evidence

2. **Categorize by severity:**
   - **Blocker:** Feature is completely broken, cannot be used
   - **Major:** Feature partially works but has significant issues
   - **Minor:** Feature works but has small issues or polish needed

3. **Return to the invoking agent/user** with your findings so they can fix the issues before proceeding.

**IMPORTANT:** If you find blocking or major issues, do NOT proceed to suggest tests. The feature needs to be fixed first.

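An issue report built from the fields Step 3 lists can also gate the workflow mechanically. A sketch, reusing the email-validation example from later in this file (the object shape is an illustrative assumption, not a Ranger API format):

```javascript
// Illustrative issue reports with the Step 3 fields: steps to reproduce,
// expected behavior, actual behavior, and a severity category.
const issues = [
  {
    severity: 'major',
    title: "Email validation allows invalid formats (e.g., 'test@' is accepted)",
    steps: ['Open the profile page', "Enter 'test@' in the email field", 'Click Save'],
    expected: 'A validation error is shown',
    actual: 'The change is saved without any error',
  },
  { severity: 'minor', title: 'No loading indicator when saving changes' },
];

// Per the rule above: blockers and majors stop the workflow before Step 4.
const mustFixFirst = issues.some((i) => i.severity === 'blocker' || i.severity === 'major');
console.log(mustFixFirst); // → true
```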
## Step 4: Suggest Tests (If Feature is Working)

Once you've verified the feature works correctly:

1. **Check existing test coverage:**
   - Call `mcp__ranger-mcp__get_test_suite` to see existing tests
   - Call `mcp__ranger-mcp__get_test_details` for tests that might overlap
   - Determine what's already covered vs. what needs new tests

2. **Identify test scenarios:**
   Based on your verification, identify tests that should exist:
   - **Happy path tests:** The main user flows you verified
   - **Edge case tests:** Boundary conditions and error handling
   - **Integration tests:** How this feature interacts with others

3. **Present test suggestions:**
   For each suggested test, provide:
   - **Test Name:** Clear, descriptive name
   - **Priority:** p0 (critical), p1 (high), p2 (medium), p3 (low)
   - **Description:** What the test validates
   - **Steps:** The user actions (based on what you just did manually)
   - **Rationale:** Why this test is important

4. **Draft tests upon approval:**
   When approved, call `mcp__ranger-mcp__create_draft_test` with:
   - `name`: The test name
   - `description`: Detailed description
   - `priority`: Priority level
   - `steps`: Array of test steps based on your manual verification

# Guidelines

## Be Thorough but Efficient
- Focus on user-visible behavior, not implementation details
- Test the most important paths first
- Don't spend too long on one area - cover breadth before depth

## Be a Good Reporter
- Screenshots are worth a thousand words
- Be specific about reproduction steps
- Separate facts (what happened) from interpretation (why it might have happened)

## Be Collaborative
- You're part of a team - communicate clearly with the invoking agent
- If something is ambiguous, ask for clarification
- Celebrate when things work well!

## Think Like a User
- What would a real user try to do?
- What mistakes might they make?
- What would confuse them?

# Example Interaction

**Invoking Agent:** "I've finished building the new user profile editing feature. Please verify it's working. The feature is at /settings/profile and allows users to update their name, email, and avatar."

**Quality Advocate:**
1. Gets product docs for context
2. Navigates to /settings/profile
3. Takes a snapshot to see the UI
4. Tests editing each field:
   - Changes name → saves → verifies update
   - Changes email → saves → verifies validation
   - Uploads avatar → verifies preview and save
5. Tests edge cases:
   - Empty name field
   - Invalid email format
   - Large image upload
6. Checks console for errors
7. Reports findings:

> "I've verified the profile editing feature. Here's what I found:
>
> **Working correctly:**
> - Name editing saves and displays correctly
> - Avatar upload works with preview
>
> **Issues found:**
> - **Major:** Email validation allows invalid formats (e.g., 'test@' is accepted)
> - **Minor:** No loading indicator when saving changes
>
> I recommend fixing the email validation before we add test coverage. Once fixed, I'll suggest tests for:
> 1. Edit profile name successfully
> 2. Edit profile with invalid email (should show error)
> 3. Upload and save profile avatar
>
> Would you like to fix the email validation issue first?"

package/build/cli.js
ADDED

@@ -0,0 +1,116 @@
#!/usr/bin/env node
import { readdir, mkdir, copyFile, writeFile } from 'fs/promises';
import { join, resolve, dirname } from 'path';
import { fileURLToPath } from 'url';
import { existsSync } from 'fs';
import yargs from 'yargs/yargs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

async function initAgents(targetDir, token, serverUrl) {
  const resolvedDir = resolve(targetDir);
  console.log(`Initializing Claude Code agents in: ${resolvedDir}`);

  // Validate API token by checking the /me endpoint
  const mcpServerUrl = serverUrl ||
    process.env.MCP_SERVER_URL ||
    'https://mcp-server-301751771437.us-central1.run.app';
  console.log('Validating API token...');
  try {
    const response = await fetch(`${mcpServerUrl}/me`, {
      method: 'GET',
      headers: {
        Authorization: `Bearer ${token}`,
      },
    });
    if (!response.ok) {
      console.error(`\n❌ Authentication failed: ${response.status} ${response.statusText}`);
      console.error('Please check your API token and try again.');
      process.exit(1);
    }
    console.log('✓ API token validated successfully');
  } catch (error) {
    console.error('\n❌ Failed to connect to MCP server:', error instanceof Error ? error.message : error);
    console.error('Please check your network connection and server URL.');
    process.exit(1);
  }

  // Create .claude/agents directory
  const claudeAgentsDir = join(resolvedDir, '.claude', 'agents');
  if (!existsSync(claudeAgentsDir)) {
    await mkdir(claudeAgentsDir, { recursive: true });
    console.log(`✓ Created directory: ${claudeAgentsDir}`);
  }

  // Copy all agent files from the agents directory.
  // When running tsx cli.ts: __dirname is packages/cli, so agents is './agents'
  // When built: __dirname is packages/cli/build, so agents is '../agents'
  const sourceAgentsDir = existsSync(join(__dirname, 'agents'))
    ? join(__dirname, 'agents')
    : join(__dirname, '..', 'agents');
  try {
    const agentFiles = await readdir(sourceAgentsDir);
    const mdFiles = agentFiles.filter((file) => file.endsWith('.md'));
    if (mdFiles.length === 0) {
      console.warn('Warning: No agent files found in source directory');
    }
    for (const file of mdFiles) {
      const sourcePath = join(sourceAgentsDir, file);
      const targetPath = join(claudeAgentsDir, file);
      await copyFile(sourcePath, targetPath);
      console.log(`✓ Copied agent: ${file}`);
    }
  } catch (error) {
    console.error('Error copying agent files:', error);
    process.exit(1);
  }

  // Create .mcp.json config
  const mcpConfig = {
    mcpServers: {
      ranger: {
        type: 'http',
        url: `${mcpServerUrl}/mcp`,
        headers: {
          Authorization: `Bearer ${token}`,
        },
      },
      'ranger-browser': {
        command: 'npx',
        args: ['@ranger-testing/playwright', 'run-mcp-server'],
      },
    },
  };
  const mcpConfigPath = join(resolvedDir, '.mcp.json');
  await writeFile(mcpConfigPath, JSON.stringify(mcpConfig, null, 2));
  console.log(`✓ Created .mcp.json configuration`);

  console.log('\n✅ Claude Code agents initialized successfully!');
  console.log('\nNext steps:');
  console.log('1. MCP server configured at', mcpServerUrl);
  console.log('2. Your Claude Code subagents are now available in .claude/agents/');
}

// Setup yargs CLI
yargs(process.argv.slice(2))
  .version('1.0.0')
  .command('init-agents [dir]', 'Initialize Claude Code agents in the specified directory', (yargs) => {
    return yargs
      .positional('dir', {
        type: 'string',
        description: 'Target directory (defaults to current directory)',
        default: process.cwd(),
      })
      .option('token', {
        alias: 't',
        type: 'string',
        description: 'Authentication token for the MCP server',
        demandOption: true,
      })
      .option('url', {
        alias: 'u',
        type: 'string',
        description: 'MCP server URL (defaults to MCP_SERVER_URL env or production server)',
      });
  }, async (argv) => {
    await initAgents(argv.dir, argv.token, argv.url);
  })
  .demandCommand(1, 'You must specify a command')
  .help()
  .alias('help', 'h')
  .parse();
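For reference, a standalone sketch of the `.mcp.json` shape that `initAgents` writes, with placeholder URL and token (the placeholders are this sketch's assumptions; the real values come from the CLI flags):

```javascript
// Illustrative: build the same .mcp.json structure with placeholder values.
const mcpServerUrl = 'https://example.invalid';
const token = 'PLACEHOLDER_TOKEN';
const mcpConfig = {
  mcpServers: {
    ranger: {
      type: 'http',
      url: `${mcpServerUrl}/mcp`,
      headers: { Authorization: `Bearer ${token}` },
    },
    'ranger-browser': {
      command: 'npx',
      args: ['@ranger-testing/playwright', 'run-mcp-server'],
    },
  },
};
console.log(JSON.stringify(mcpConfig, null, 2));
```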
package/build/index.js
ADDED
|
@@ -0,0 +1,436 @@
|
|
|
1
|
+
#!/usr/bin/env node
|
|
2
|
+
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
|
|
3
|
+
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
|
|
4
|
+
import { z } from 'zod';
|
|
5
|
+
import { createServer } from 'http';
|
|
6
|
+
const RANGER_API_BASE = process.env.RANGER_API_URL || 'http://localhost:8080';
|
|
7
|
+
const PORT = parseInt(process.env.PORT || '8080', 10);
|
|
8
|
+
// Tool classification - read tools can be used with mcp:read scope
|
|
9
|
+
const READ_TOOLS = ['get_test_runs', 'get_active_tests'];
|
|
10
|
+
const tokenInfoCache = new Map();
|
|
11
|
+
const CACHE_TTL_MS = 5 * 60 * 1000; // 5 minutes
|
|
12
|
+
// Create API request function bound to a specific token
|
|
13
|
+
function createApiClient(token) {
|
|
14
|
+
console.error(`[API Client] Created with customer token: ${token ? token.substring(0, 10) + '...' : 'none'}`);
|
|
15
|
+
// All requests now use customer token
|
|
16
|
+
const apiRequest = async (endpoint, options = {}) => {
|
|
17
|
+
const url = `${RANGER_API_BASE}${endpoint}`;
|
|
18
|
+
const headers = {
|
|
19
|
+
'Content-Type': 'application/json',
|
|
20
|
+
...(token ? { Authorization: `Bearer ${token}` } : {}),
|
|
21
|
+
...options.headers,
|
|
22
|
+
};
|
|
23
|
+
console.error(`[API Request] ${options.method || 'GET'} ${url}`);
|
|
24
|
+
const response = await fetch(url, {
|
|
25
|
+
...options,
|
|
26
|
+
headers,
|
|
27
|
+
});
|
|
28
|
+
if (!response.ok) {
|
|
29
|
+
const body = await response.text();
|
|
30
|
+
console.error(`[API Error] ${response.status} ${response.statusText}: ${body}`);
|
|
31
|
+
throw new Error(`API request failed: ${response.status} ${response.statusText}`);
|
|
32
|
+
}
|
|
33
|
+
return response.json();
|
|
34
|
+
};
|
|
35
|
+
const getTokenInfo = async () => {
|
|
36
|
+
if (!token) {
|
|
37
|
+
throw new Error('No authorization token provided');
|
|
38
|
+
}
|
|
39
|
+
// Check cache
|
|
40
|
+
const cached = tokenInfoCache.get(token);
|
|
41
|
+
if (cached && cached.expiresAt > Date.now()) {
|
|
42
|
+
return { orgId: cached.orgId, scope: cached.scope, apiKeyId: cached.apiKeyId };
|
|
43
|
+
}
|
|
44
|
+
// Fetch from API using customer token
|
|
45
|
+
const data = await apiRequest('/api/v1/me');
|
|
46
|
+
const orgId = data.organizationId || data.organization?.id;
|
|
47
|
+
const scope = (data.scope || 'api:all');
|
|
48
|
+
const apiKeyId = data.apiKeyId;
|
|
49
|
+
if (!orgId) {
|
|
50
|
+
throw new Error('Could not determine organization ID from token');
|
|
51
|
+
}
|
|
52
|
+
if (!apiKeyId) {
|
|
53
|
+
throw new Error('Could not determine API key ID from token');
|
|
54
|
+
}
|
|
55
|
+
// Cache the result
|
|
56
|
+
tokenInfoCache.set(token, { orgId, scope, apiKeyId, expiresAt: Date.now() + CACHE_TTL_MS });
|
|
57
|
+
return { orgId, scope, apiKeyId };
|
|
58
|
+
};
|
|
59
|
+
// Helper to log MCP tool calls
|
|
60
|
+
const logToolCall = async (params) => {
|
|
61
|
+
try {
|
|
62
|
+
await apiRequest('/api/v1/mcp/tool-calls', {
|
|
63
|
+
method: 'POST',
|
|
64
|
+
body: JSON.stringify(params),
|
|
65
|
+
});
|
|
66
|
+
}
|
|
67
|
+
catch (error) {
|
|
68
|
+
// Log but don't fail the tool call if logging fails
|
|
69
|
+
console.error(`[MCP] Failed to log tool call: ${error}`);
|
|
70
|
+
}
|
|
71
|
+
};
|
|
72
|
+
return { apiRequest, getTokenInfo, logToolCall };
|
|
73
|
+
}
|
|
74
|
+
// Register all tools on an MCP server instance
|
|
75
|
+
async function registerTools(server, token) {
|
|
76
|
+
const { apiRequest, getTokenInfo, logToolCall } = createApiClient(token);
|
|
77
|
+
// Get token info including scope
|
|
78
|
+
let tokenInfo;
|
|
79
|
+
try {
|
|
80
|
+
tokenInfo = await getTokenInfo();
|
|
81
|
+
}
|
|
82
|
+
catch (error) {
|
|
83
|
+
console.error(`[MCP] Failed to get token info: ${error}`);
|
|
84
|
+
// Register an error-only tool if we can't get token info
|
|
85
|
+
server.registerTool('error', {
|
|
86
|
+
description: 'Error tool - authentication failed',
|
|
87
|
+
inputSchema: {},
|
|
88
|
+
}, async () => ({
|
|
89
|
+
content: [{ type: 'text', text: `Authentication failed: ${error}` }],
|
|
90
|
+
}));
|
|
91
|
+
return;
|
|
92
|
+
}
|
|
93
|
+
const { orgId, scope } = tokenInfo;
|
|
94
|
+
// Check if scope allows MCP access
|
|
95
|
+
if (scope === 'api:all') {
|
|
96
|
+
console.error(`[MCP] Access denied: API key scope 'api:all' does not allow MCP access`);
|
|
97
|
+
server.registerTool('access_denied', {
|
|
98
|
+
description: 'Access denied - API key scope does not allow MCP access',
|
|
99
|
+
inputSchema: {},
|
|
100
|
+
}, async () => ({
|
|
101
|
+
content: [{
|
|
102
|
+
type: 'text',
|
|
103
|
+
text: 'Access denied: Your API key has scope "api:all" which does not allow MCP access. Please create an API key with scope "mcp:read" or "mcp:write" to use the MCP server.'
|
|
104
|
+
}],
|
|
105
|
+
}));
|
|
106
|
+
return;
|
|
107
|
+
}
|
|
108
|
+
// Determine which tools to register based on scope
|
|
109
|
+
const canRegisterTool = (toolName) => {
|
|
110
|
+
if (scope === 'mcp:write') {
|
|
111
|
+
return true; // Write scope can use all tools
|
|
112
|
+
}
|
|
113
|
+
if (scope === 'mcp:read') {
|
|
114
|
+
return READ_TOOLS.includes(toolName);
|
|
115
|
+
}
|
|
116
|
+
return false;
|
|
117
|
+
};
|
|
118
|
+
// Helper to wrap tool handlers with logging
|
|
119
|
+
const wrapWithLogging = (toolName, handler) => {
|
|
120
|
+
return async (args) => {
|
|
121
|
+
const startTime = Date.now();
|
|
122
|
+
try {
|
|
123
|
+
const result = await handler(args);
|
|
124
|
+
const durationMs = Date.now() - startTime;
|
|
125
|
+
await logToolCall({
|
|
126
|
+
toolName,
|
|
127
|
+
inputArgs: args,
|
|
128
|
+
success: true,
|
|
129
|
+
durationMs,
|
|
130
|
+
});
|
|
131
|
+
return result;
|
|
132
|
+
}
|
|
133
|
+
catch (error) {
|
|
134
|
+
const durationMs = Date.now() - startTime;
|
|
135
|
+
const errorMessage = error instanceof Error ? error.message : String(error);
|
|
136
|
+
await logToolCall({
|
|
137
|
+
toolName,
|
|
138
|
+
inputArgs: args,
|
|
139
|
+
success: false,
|
|
140
|
+
errorMessage,
|
|
141
|
+
durationMs,
|
|
142
|
+
});
|
|
143
|
+
throw error;
|
|
144
|
+
}
|
|
145
|
+
};
|
|
146
|
+
};
|
|
147
|
+
// Register tool: Get test runs
|
|
148
|
+
if (canRegisterTool('get_test_runs')) {
|
|
149
|
+
server.registerTool('get_test_runs', {
|
|
150
|
+
description: 'Get test runs for the organization associated with your API token',
|
|
151
|
+
inputSchema: {
|
|
152
|
+
limit: z
|
|
153
|
+
.number()
|
|
154
|
+
.optional()
|
|
155
|
+
.describe('Maximum number of runs to return (default: 10)'),
|
|
156
|
+
offset: z
|
|
157
|
+
.number()
|
|
158
|
+
.optional()
|
|
159
|
+
.describe('Pagination offset (default: 0)'),
|
|
160
|
+
},
|
|
161
|
+
}, wrapWithLogging('get_test_runs', async ({ limit, offset }) => {
|
|
162
|
+
const limitValue = limit || 10;
|
|
163
|
+
const offsetValue = offset || 0;
|
|
164
|
+
const data = await apiRequest(`/api/v1/mcp/test-runs?limit=${limitValue}&offset=${offsetValue}`);
|
|
165
|
+
return {
|
|
166
|
+
content: [
|
|
167
|
+
{
|
|
168
|
+
type: 'text',
|
|
169
|
+
text: JSON.stringify(data, null, 2),
|
|
170
|
+
},
|
|
171
|
+
],
|
|
172
|
+
};
|
|
173
|
+
}));
|
|
174
|
+
}
|
|
175
|
+
// Register tool: Generate test plan
|
|
176
|
+
if (canRegisterTool('generate_test_plan')) {
|
|
177
|
+
server.registerTool('generate_test_plan', {
|
|
178
|
+
description: 'Create a draft test and trigger TestDetailsWriter to generate detailed test steps',
|
|
179
|
+
inputSchema: {
|
|
180
|
+
name: z.string().describe('Test name/title'),
|
|
181
|
+
      description: z.string().optional().describe('Test description'),
      targetUrl: z
        .string()
        .optional()
        .describe('Target URL for the test'),
      additionalInstructions: z
        .string()
        .optional()
        .describe('Additional instructions for TestDetailsWriter'),
      priority: z
        .enum(['p0', 'p1', 'p2', 'p3'])
        .optional()
        .describe('Test priority (default: p2)'),
    },
  }, wrapWithLogging('generate_test_plan', async ({ name, description, targetUrl, additionalInstructions, priority }) => {
    // Step 1: Create draft test
    const createTestBody = {
      test: {
        name,
        description,
        priority: priority || 'p2',
      },
    };
    const createTestResponse = await apiRequest('/api/v1/mcp/tests', {
      method: 'POST',
      body: JSON.stringify(createTestBody),
    });
    const testId = createTestResponse.test.id;
    // Step 2: Trigger TestDetailsWriter
    const generateDetailsBody = {
      eventType: 'GenerateTestDetails',
      payload: {
        organizationId: orgId,
        testId,
        targetUrl,
        additionalInstructions,
        authenticationNeeded: true,
        updateStatusOnSuccess: true,
      },
    };
    const generateResponse = await apiRequest('/api/v1/mcp/agent-jobs', {
      method: 'POST',
      body: JSON.stringify(generateDetailsBody),
    });
    return {
      content: [
        {
          type: 'text',
          text: `Test plan generation initiated:\n\nTest Created:\n- ID: ${testId}\n- Name: ${name}\n- Organization: ${orgId}\n\nTestDetailsWriter Job:\n- Event ID: ${generateResponse.eventId}\n\nThe AI agent will now generate detailed test steps. You can check the test status using the test ID: ${testId}`,
        },
      ],
    };
  }));
}
// Register tool: Get active tests
if (canRegisterTool('get_active_tests')) {
  server.registerTool('get_active_tests', {
    description: 'Get active tests for the organization. Returns a summary of each test (id, name, status, priority). Use get_test_details for full test information.',
    inputSchema: {
      limit: z
        .number()
        .optional()
        .describe('Maximum number of tests to return (default: 20, max: 50)'),
      offset: z
        .number()
        .optional()
        .describe('Pagination offset (default: 0)'),
    },
  }, wrapWithLogging('get_active_tests', async ({ limit, offset }) => {
    const limitValue = Math.min(limit || 20, 50); // Cap at 50
    const offsetValue = offset || 0;
    const testStatuses = encodeURIComponent(JSON.stringify(['active']));
    const data = await apiRequest(`/api/v1/mcp/tests?testStatuses=${testStatuses}&limit=${limitValue}&offset=${offsetValue}`);
    // Return summarized test data to reduce response size
    const summarizedTests = (data.items || []).map((test) => ({
      id: test.id,
      name: test.name,
      status: test.status,
      priority: test.priority,
      description: test.description?.substring(0, 200) || null,
      lastUpdated: test.lastUpdated,
    }));
    const total = data.totalCount || data.total || summarizedTests.length;
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify({
            tests: summarizedTests,
            total,
            limit: limitValue,
            offset: offsetValue,
            hasMore: total > offsetValue + limitValue,
          }, null, 2),
        },
      ],
    };
  }));
}
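The get_active_tests handler caps the page size at 50 and derives `hasMore` from the raw counts. A minimal standalone sketch of that summarization logic, assuming the response shape shown in the handler (the `summarizeActiveTests` helper name is hypothetical; the handler inlines this):

```javascript
// Hypothetical helper mirroring the summarization in get_active_tests.
// `data` is the raw API response; `limit`/`offset` are the tool inputs.
function summarizeActiveTests(data, limit, offset) {
  const limitValue = Math.min(limit || 20, 50); // cap page size at 50
  const offsetValue = offset || 0;
  const tests = (data.items || []).map((test) => ({
    id: test.id,
    name: test.name,
    status: test.status,
    priority: test.priority,
    // Truncate long descriptions to keep the tool response small
    description: test.description?.substring(0, 200) || null,
    lastUpdated: test.lastUpdated,
  }));
  const total = data.totalCount || data.total || tests.length;
  return {
    tests,
    total,
    limit: limitValue,
    offset: offsetValue,
    hasMore: total > offsetValue + limitValue,
  };
}

// Example: 60 matching tests, caller asks for 100 - the cap kicks in
const page = summarizeActiveTests(
  { items: [{ id: 'test_1', name: 'Login flow', status: 'active', priority: 'p1' }], totalCount: 60 },
  100,
  0,
);
console.log(page.limit, page.hasMore); // 50 true
```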
// Register tool: Update test status
if (canRegisterTool('update_test_status')) {
  server.registerTool('update_test_status', {
    description: 'Bulk update the status of multiple tests',
    inputSchema: {
      testIds: z
        .array(z.string())
        .describe('Array of test IDs to update (e.g., ["test_xxx", "test_yyy"])'),
      status: z
        .enum([
          'active',
          'blocked_by_customer',
          'under_maintenance',
          'ignored',
          'draft',
          'ready_for_review',
          'requested',
          'expected_failure',
          'canceled',
          'suggested',
        ])
        .describe('The new status to set for all tests'),
      reason: z.string().describe('Reason for the status change'),
      category: z
        .string()
        .optional()
        .describe('Optional category for the status change'),
      updatedBy: z
        .string()
        .default('mcp-tool')
        .describe('Who is updating the status (default: mcp-tool)'),
      triggerMaintenanceJobs: z
        .boolean()
        .optional()
        .describe('Whether to trigger maintenance jobs (default: false)'),
      triggerCodegenJobs: z
        .boolean()
        .optional()
        .describe('Whether to trigger codegen jobs (default: false)'),
    },
  }, wrapWithLogging('update_test_status', async ({ testIds, status, reason, category, updatedBy, triggerMaintenanceJobs, triggerCodegenJobs }) => {
    const requestBody = {
      bulkUpdateType: 'status',
      ids: testIds,
      change: {
        status,
        reason,
        category,
        updatedBy: updatedBy || 'mcp-tool',
      },
      triggerMaintenanceJobs,
      triggerCodegenJobs,
    };
    const data = await apiRequest('/api/v1/mcp/tests/bulk-update', {
      method: 'PATCH',
      body: JSON.stringify(requestBody),
    });
    return {
      content: [
        {
          type: 'text',
          text: `Successfully updated ${data.updatedCount} test(s) to status "${status}".\n\nDetails:\n${JSON.stringify(data, null, 2)}`,
        },
      ],
    };
  }));
}
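The bulk-update endpoint always receives the same envelope: a `bulkUpdateType`, the target IDs, and a `change` object. A standalone sketch of that request-body construction, under the shape shown in the handler (`buildBulkStatusUpdate` is a hypothetical name; the handler builds the object inline):

```javascript
// Hypothetical helper building the same PATCH body the update_test_status
// handler sends to /api/v1/mcp/tests/bulk-update.
function buildBulkStatusUpdate({ testIds, status, reason, category, updatedBy }) {
  return {
    bulkUpdateType: 'status',
    ids: testIds,
    change: {
      status,
      reason,
      category,
      // zod applies the 'mcp-tool' default; re-applied here defensively
      updatedBy: updatedBy || 'mcp-tool',
    },
  };
}

const body = buildBulkStatusUpdate({
  testIds: ['test_abc', 'test_def'],
  status: 'under_maintenance',
  reason: 'Checkout flow under redesign',
});
console.log(body.ids.length, body.change.updatedBy); // 2 mcp-tool
```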
// Register tool: Skip tests from description
if (canRegisterTool('skip_tests_from_description')) {
  server.registerTool('skip_tests_from_description', {
    description: 'Skip tests based on a user description by changing their status to under_maintenance',
    inputSchema: {
      userRequest: z
        .string()
        .describe('The user request describing which tests to skip (e.g., "skip all login tests")'),
    },
  }, wrapWithLogging('skip_tests_from_description', async ({ userRequest }) => {
    return {
      content: [
        {
          type: 'text',
          text: `To skip tests based on the request: "${userRequest}"

Follow these steps:

1. Run get_active_tests to retrieve all active tests for the organization
2. Review the test names and descriptions in the results
3. Identify which test IDs are relevant to the user's request: "${userRequest}"
4. Call update_test_status with:
   - testIds: array of relevant test IDs you identified
   - status: "under_maintenance"
   - reason: "${userRequest}"
   - updatedBy: the user's name or "mcp-tool"

This will change the selected tests from "active" to "under_maintenance" status, effectively skipping them.`,
        },
      ],
    };
  }));
}
} // end registerTools
// Factory function to create a new MCP server instance
async function createMcpServer(token) {
  const mcpServer = new McpServer({
    name: 'ranger',
    version: '1.0.0',
  });
  // Register all tools on this server instance with the token
  await registerTools(mcpServer, token);
  return mcpServer;
}
// Start the server
async function main() {
  const httpServer = createServer(async (req, res) => {
    // Enable CORS
    res.setHeader('Access-Control-Allow-Origin', '*');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, DELETE, OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
    if (req.method === 'OPTIONS') {
      res.writeHead(204);
      res.end();
      return;
    }
    const url = new URL(req.url || '/', `http://localhost:${PORT}`);
    // Extract token from Authorization header (supports both "Bearer <token>" and raw token)
    const authHeader = req.headers.authorization;
    const token = authHeader?.startsWith('Bearer ')
      ? authHeader.slice(7)
      : authHeader || undefined;
    console.error(`[MCP] ${req.method} ${url.pathname} - Auth header: ${authHeader ? 'present' : 'missing'}, Token: ${token ? token.substring(0, 10) + '...' : 'none'}`);
    if (url.pathname === '/health') {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ status: 'ok' }));
    }
    else {
      // Route all other paths to MCP handler
      // Stateless mode - create a new server and transport for each request
      // This allows horizontal scaling across multiple Cloud Run instances
      const transport = new StreamableHTTPServerTransport({
        sessionIdGenerator: undefined, // Stateless mode
      });
      // Create server with token bound to all API calls
      const mcpServer = await createMcpServer(token);
      await mcpServer.connect(transport);
      await transport.handleRequest(req, res);
    }
  });
  httpServer.listen(PORT, () => {
    console.error(`Ranger MCP server running on http://localhost:${PORT}`);
    console.error(`  MCP endpoint: http://localhost:${PORT}/mcp`);
    console.error(`  Health check: http://localhost:${PORT}/health`);
  });
}
main().catch((error) => {
  console.error('Server error:', error);
  process.exit(1);
});
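The request handler accepts both `Bearer <token>` and a raw token in the Authorization header. That extraction is a small pure function worth isolating; a sketch of the same logic (`extractToken` is a hypothetical name; the handler does this inline):

```javascript
// Hypothetical helper matching the Authorization parsing in main():
// strips a "Bearer " prefix, passes raw tokens through unchanged, and
// yields undefined when the header is absent.
function extractToken(authHeader) {
  return authHeader?.startsWith('Bearer ')
    ? authHeader.slice(7) // 'Bearer ' is 7 characters
    : authHeader || undefined;
}

console.log(extractToken('Bearer abc123')); // abc123
console.log(extractToken('abc123'));        // abc123
console.log(extractToken(undefined));       // undefined
```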
package/package.json
ADDED

@@ -0,0 +1,22 @@
{
  "name": "@ranger-testing/ranger-cli",
  "version": "1.0.0",
  "type": "module",
  "bin": {
    "ranger": "./build/cli.js"
  },
  "scripts": {
    "build": "tsc && chmod 755 build/cli.js",
    "dev": "tsx cli.ts"
  },
  "files": ["build", "agents"],
  "dependencies": {
    "zod": "^3.23.8",
    "yargs": "^17.7.2"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "@types/yargs": "^17.0.32",
    "typescript": "^5.0.0"
  }
}