@alternative-path/x-mcp 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +23 -0
- package/README.md +364 -0
- package/dist/agents/test-planner-context.d.ts +7 -0
- package/dist/agents/test-planner-context.d.ts.map +1 -0
- package/dist/agents/test-planner-context.js +283 -0
- package/dist/agents/test-planner-context.js.map +1 -0
- package/dist/agents/test-planner-prompt.d.ts +34 -0
- package/dist/agents/test-planner-prompt.d.ts.map +1 -0
- package/dist/agents/test-planner-prompt.js +82 -0
- package/dist/agents/test-planner-prompt.js.map +1 -0
- package/dist/api-client.d.ts +52 -0
- package/dist/api-client.d.ts.map +1 -0
- package/dist/api-client.js +240 -0
- package/dist/api-client.js.map +1 -0
- package/dist/index.d.ts +3 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +159 -0
- package/dist/index.js.map +1 -0
- package/dist/tools/auth-tools.d.ts +17 -0
- package/dist/tools/auth-tools.d.ts.map +1 -0
- package/dist/tools/auth-tools.js +154 -0
- package/dist/tools/auth-tools.js.map +1 -0
- package/dist/tools/automation-tools.d.ts +25 -0
- package/dist/tools/automation-tools.d.ts.map +1 -0
- package/dist/tools/automation-tools.js +399 -0
- package/dist/tools/automation-tools.js.map +1 -0
- package/dist/tools/export-import-tools.d.ts +16 -0
- package/dist/tools/export-import-tools.d.ts.map +1 -0
- package/dist/tools/export-import-tools.js +62 -0
- package/dist/tools/export-import-tools.js.map +1 -0
- package/dist/tools/module-tools.d.ts +42 -0
- package/dist/tools/module-tools.d.ts.map +1 -0
- package/dist/tools/module-tools.js +302 -0
- package/dist/tools/module-tools.js.map +1 -0
- package/dist/tools/project-tools.d.ts +44 -0
- package/dist/tools/project-tools.d.ts.map +1 -0
- package/dist/tools/project-tools.js +67 -0
- package/dist/tools/project-tools.js.map +1 -0
- package/dist/tools/testcase-tools.d.ts +129 -0
- package/dist/tools/testcase-tools.d.ts.map +1 -0
- package/dist/tools/testcase-tools.js +762 -0
- package/dist/tools/testcase-tools.js.map +1 -0
- package/dist/tools/testgroup-launch-tools.d.ts +28 -0
- package/dist/tools/testgroup-launch-tools.d.ts.map +1 -0
- package/dist/tools/testgroup-launch-tools.js +332 -0
- package/dist/tools/testgroup-launch-tools.js.map +1 -0
- package/package.json +56 -0
package/LICENSE
ADDED
@@ -0,0 +1,23 @@
MIT License

Copyright (c) 2024 Product-X

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,364 @@
# X-MCP: Product-X Test Management MCP Server

X-MCP is a Model Context Protocol (MCP) server that provides comprehensive test case management capabilities for the Product-X Test Management System. It allows you to manage test cases, modules, components, and subcomponents directly from Cursor, VS Code, or any MCP-compatible client.

## Features

- **x-test-planner agent**: Standard, repeatable test planning via an MCP prompt that uses embedded context (no manual context file required)
- **Module Management**: Create, update, delete, and list modules, components, and subcomponents
- **Test Case Management**: Full CRUD operations for test cases
- **Test Case Operations**: Clone, move, and bulk create test cases
- **Export/Import**: Export test cases to Excel and import from Excel files
- **Permission-Aware**: Respects the same permissions as the web application
- **Session-Based Auth**: Works with your existing application session

## Installation

### Prerequisites

- Node.js 18.0.0 or higher
- Access to a Product-X Test Management System instance
- Valid user credentials for the system

### Quick Install (Recommended)

**Option 1: Using npx (No installation required)**

Simply use `npx` in your MCP configuration - it will download and run the package automatically:

```json
{
  "mcpServers": {
    "x-mcp": {
      "command": "npx",
      "args": ["-y", "@product-x/x-mcp"],
      "env": {
        "X_MCP_API_URL": "https://qa-path.com/api",
        "X_MCP_PROJECT_ID": "your-project-id",
        "X_MCP_EMAIL": "your-email@example.com",
        "X_MCP_PASSWORD": "your-password"
      }
    }
  }
}
```
**Option 2: Global Installation**

```bash
npm install -g @product-x/x-mcp
```

Then use in your MCP configuration:

```json
{
  "mcpServers": {
    "x-mcp": {
      "command": "x-mcp",
      "env": {
        "X_MCP_API_URL": "https://qa-path.com/api",
        "X_MCP_PROJECT_ID": "your-project-id",
        "X_MCP_EMAIL": "your-email@example.com",
        "X_MCP_PASSWORD": "your-password"
      }
    }
  }
}
```

**Option 3: Local Installation**

```bash
npm install @product-x/x-mcp
```

Then use the full path:

```json
{
  "mcpServers": {
    "x-mcp": {
      "command": "node",
      "args": ["./node_modules/@product-x/x-mcp/dist/index.js"],
      "env": {
        "X_MCP_API_URL": "https://qa-path.com/api",
        "X_MCP_PROJECT_ID": "your-project-id",
        "X_MCP_EMAIL": "your-email@example.com",
        "X_MCP_PASSWORD": "your-password"
      }
    }
  }
}
```
### Install from Source (Development)

```bash
git clone <repository-url>
cd x-mcp
npm install
npm run build
```

## Configuration

X-MCP requires configuration via environment variables or a `.env` file.

### Required Configuration

```bash
# API Base URL (required)
X_MCP_API_URL=http://localhost:3000/api

# Project ID (required for most operations)
X_MCP_PROJECT_ID=your-project-id-here
```

### Authentication Options

You can authenticate using one of the following methods:
#### Option 1: Username/Password Login (Recommended)

Log in directly with your email and password. The MCP server will create its own session:

```bash
X_MCP_EMAIL=your-email@example.com
X_MCP_PASSWORD=your-password
```

**Note:** The MCP server will automatically log in when it starts if these credentials are provided.

#### Option 2: Session Token

If you're already logged into the web application, you can extract your session token from the browser cookies:

```bash
X_MCP_SESSION_TOKEN=your-session-token-here
```

To get your session token:
1. Log into the web application
2. Open browser DevTools (F12)
3. Go to Application/Storage > Cookies
4. Copy the `token` cookie value
#### Option 3: API Key (if supported)

```bash
X_MCP_API_KEY=your-api-key-here
```

### Complete Example `.env` file

```bash
# API Configuration
X_MCP_API_URL=http://localhost:3000/api
X_MCP_PROJECT_ID=123e4567-e89b-12d3-a456-426614174000

# Authentication (choose one)
# Option 1: Username/Password (Recommended)
X_MCP_EMAIL=your-email@example.com
X_MCP_PASSWORD=your-password

# Option 2: Session Token
# X_MCP_SESSION_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

# Option 3: API Key
# X_MCP_API_KEY=your-api-key-here
```
## Usage with Cursor

1. Open Cursor settings
2. Navigate to MCP settings
3. Add the following configuration:

```json
{
  "mcpServers": {
    "x-mcp": {
      "command": "npx",
      "args": ["-y", "@product-x/x-mcp"],
      "env": {
        "X_MCP_API_URL": "http://localhost:3000/api",
        "X_MCP_PROJECT_ID": "your-project-id",
        "X_MCP_SESSION_TOKEN": "your-session-token"
      }
    }
  }
}
```

## Usage with VS Code

1. Install the MCP extension for VS Code
2. Configure the MCP server in your VS Code settings:

```json
{
  "mcp.servers": {
    "x-mcp": {
      "command": "npx",
      "args": ["-y", "@product-x/x-mcp"],
      "env": {
        "X_MCP_API_URL": "http://localhost:3000/api",
        "X_MCP_PROJECT_ID": "your-project-id",
        "X_MCP_EMAIL": "your-email@example.com",
        "X_MCP_PASSWORD": "your-password"
      }
    }
  }
}
```
## Prompts (Agents)

### x-test-planner

The **x-test-planner** is an MCP prompt (agent) that delivers a standard test-planning context behind the scenes. Use it instead of manually attaching a context file so test plans are consistent, repeatable, and not subject to human error or file manipulation.

- **How to use**: In Cursor (or your MCP client), list prompts and select **x-test-planner**, or invoke it with optional arguments.
- **Arguments** (all optional):
  - `project_summary` – Short description of the project/feature to plan tests for
  - `links` – BDD, TDD, JIRA, or Confluence links for the agent to use
  - `scope` – Test scope (in/out of scope, platforms, focus areas)

The agent uses the same structure, best practices, and checklists as the standard test-planning context (e.g. test plan structure, test case tables, automation workflows, deep-dive checklist), but the context is embedded in the server, not in a user-editable file.

---

## Available Tools

### Authentication Tools

- `login` - Log in to the system using email and password. Creates a session for all subsequent requests.
- `check_auth_status` - Check whether the MCP server is currently authenticated.
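For example, a `login` call can be issued like the tool-call examples later in this README. This is a sketch; the exact argument names (`email`, `password`) are an assumption based on the environment variables above:

```json
{
  "name": "login",
  "arguments": {
    "email": "your-email@example.com",
    "password": "your-password"
  }
}
```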
### Module Tools

- `list_modules` - List all modules, components, and subcomponents
- `create_module` - Create a new module, component, or subcomponent
- `update_module` - Update an existing module
- `delete_module` - Delete a module and all its children
- `get_module` - Get detailed information about a module
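As a quick smoke test after setup, `list_modules` can be called with no arguments, assuming the project comes from `X_MCP_PROJECT_ID` (a minimal sketch):

```json
{
  "name": "list_modules",
  "arguments": {}
}
```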
### Test Case Tools

- `list_test_cases` - List test cases with optional filtering
- `get_test_case` - Get detailed information about a test case
- `create_test_case` - Create a new test case
- `update_test_case` - Update an existing test case
- `delete_test_case` - Delete a test case
- `clone_test_case` - Clone one or more test cases
- `move_test_case` - Move a test case to a different module
- `bulk_create_test_cases` - Create multiple test cases at once
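A hypothetical `move_test_case` call, mirroring the argument style used elsewhere in this README; the parameter names shown here are illustrative, not confirmed:

```json
{
  "name": "move_test_case",
  "arguments": {
    "testCaseId": "test-case-id-here",
    "targetModuleId": "target-module-id"
  }
}
```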
### Export/Import Tools

- `get_export_structure` - Get the module structure and test case fields needed for export/import
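A minimal sketch of calling `get_export_structure`, assuming the project ID is taken from `X_MCP_PROJECT_ID`:

```json
{
  "name": "get_export_structure",
  "arguments": {}
}
```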
## Examples

### Create a Module

```json
{
  "name": "create_module",
  "arguments": {
    "name": "Authentication Module",
    "context": "Module for authentication-related test cases"
  }
}
```

### Create a Test Case

```json
{
  "name": "create_test_case",
  "arguments": {
    "title": "Verify user login with valid credentials",
    "description": "Test that a user can log in with valid email and password",
    "moduleId": "module-id-here",
    "type": "Functional Test",
    "status": "New",
    "priority": "High",
    "estimatedDuration": 15
  }
}
```

### Clone Test Cases

```json
{
  "name": "clone_test_case",
  "arguments": {
    "testCaseIds": ["test-case-id-1", "test-case-id-2"],
    "targetModuleId": "target-module-id"
  }
}
```

### Export Test Cases

```json
{
  "name": "export_test_cases",
  "arguments": {
    "outputPath": "/path/to/export.xlsx",
    "moduleId": "module-id-here"
  }
}
```

## Permissions

X-MCP respects the same permission system as the web application. Users can only perform actions they have permission for. If you receive a 403 Forbidden error, check your user permissions in the web application.

## Troubleshooting

### Authentication Errors

- **401 Unauthorized**: Check your `X_MCP_SESSION_TOKEN` or `X_MCP_API_KEY`
- **403 Forbidden**: You don't have permission for this action. Check your user role and permissions.

### Connection Errors

- **Network error**: Verify that `X_MCP_API_URL` is correct and the server is accessible
- **Timeout**: The server might be slow. Try increasing the timeout in the API client.

### Project ID Errors

- Make sure `X_MCP_PROJECT_ID` is set correctly
- You can also provide `projectId` as an argument to individual tool calls
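For instance, the per-call override can be sketched by passing `projectId` explicitly to a tool such as `list_test_cases` (the argument value here is illustrative):

```json
{
  "name": "list_test_cases",
  "arguments": {
    "projectId": "123e4567-e89b-12d3-a456-426614174000"
  }
}
```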
## Development

### Building from Source

```bash
npm install
npm run build
```

### Running in Development Mode

```bash
npm run dev
```

### Testing

```bash
npm test
```

## License

MIT

## Support

For issues and questions, please contact the Product-X team or open an issue in the repository.
package/dist/agents/test-planner-context.d.ts
ADDED
@@ -0,0 +1,7 @@
/**
* Embedded test planning context for the x-test-planner agent.
* This content is the same as CURSOR_CONTEXT_TEST_PLANNING.md but is bundled
* in the MCP server so the agent always uses a consistent, non-editable context.
*/
export declare const TEST_PLANNER_CONTEXT = "# Cursor Context for Test Planning\n\nThis file provides a comprehensive, reusable context for test planning, test case/scenario creation, and automation workflow design. Use this as a template for any new project or feature to ensure consistency, completeness, and efficiency in your QA process.\n\n---\n\n## 1. Test Plan Structure\n\nA standard test plan should include the following sections:\n\n- **Reference Template:**\n - [Test Plan Template (Revamped) - Confluence](https://reorg-research.atlassian.net/wiki/spaces/qe/pages/892338177/Test+Plan+Template+Revamped)\n\n- **Project & Stakeholder Details**\n - QA start dates, stakeholders, design/epic/BDD links, QA owner\n- **Objective**\n - Short summary of the project/feature and underlying changes\n- **Test Scope**\n - In Scope: Features, platforms, workflows to be tested\n - Out of Scope: Exclusions, legacy flows, non-impacted areas\n - In Scope - Automation: What will be automated\n - Out of Scope - Automation: What will not be automated and why\n- **Data Validation Coverage**\n - Types of data, edge cases, negative data, and how data coverage is ensured\n- **Assumptions and Dependencies**\n - Environment, APIs, roles, external systems, etc.\n- **Objectives**\n - Functional, performance, security, UI/UX, data validation\n- **Test Strategy**\n - Types of testing (unit, functional, UI, API, DB, performance, UAT, etc.)\n - Approach: Happy path, edge/negative, integration, data, exploratory\n- **Phase Plan**\n - Phased approach to testing (happy path, integration, edge, regression, etc.)\n- **Manual & Automated Testing**\n - What will be covered manually vs. 
automated\n- **Migration/Backfilling**\n - Data migration/validation if applicable\n- **Regression**\n - Impacted areas, regression suite, integration points\n- **Functional Testing Considerations**\n - Business rules, UI/UX, data, error handling, new/changed workflows\n- **Integration & Dependencies**\n - APIs, external systems, user management, logging\n- **Non-Functional Testing**\n - Performance, security, compatibility, scalability\n- **Automation & Test Coverage**\n - Reuse, prioritization, data, coverage goals\n- **Operational & Monitoring**\n - Audit logs, error monitoring, post-release validation\n- **User Scenarios & Test Cases**\n - Table of detailed test cases and scenarios\n- **Automation Workflows**\n - Table of automation flows covering multiple test cases\n- **Release and Revert Plan**\n- **Post Release Production Validation**\n- **Approval Table**\n\n---\n\n## 2. Test Case & Scenario Writing\n\n- **Test Case Table Columns:**\n - Test Case ID\n - Title\n - Objective\n - Steps (detailed, step-by-step)\n - Expected Result\n - Test Data (if applicable)\n\n- **Best Practices:**\n - Be specific and unambiguous\n - Cover both positive and negative scenarios\n - Include cases that are most likely to result into bugs\n - Include edge cases and data variations\n - Link to requirements/user stories where possible\n - Use clear, actionable language\n - Be as detailed as possible\n - Define the test steps with granularity and details\n - Include places to look out for in the expected results and be very specific.\n - If there are multiple places where an outcome can be or should be validated, include all those places as part of the expected result or create new test case for that.\n - While creating test cases add test steps as well keeping in mind that the person executing these testacses and steps can only access the application using the front-end. So, add steps that can be followed from the applicaiton/navigation. 
Try to be as accurate with the test steps as you can.\n\n- **Example Test Case Table (Markdown):**\n\n| Test Case ID | Title | Objective | Steps | Expected Result | Test Data |\n|--------------|-------|-----------|-------|----------------|-----------|\n| TC-01 | Upload Supported Document | Verify upload and queueing of supported document types | 1. Login as user<br>2. Upload Termsheet.pdf<br>3. Queue for extraction | Document appears in dashboard with correct status and metadata | Termsheet.pdf |\n\n---\n\n## 3. Automation Workflow Design\n\n- **Workflow Table Columns:**\n - Workflow Name\n - Test Cases Covered (from the list of test cases generated in above sections)\n - Workflow Steps (should cover multiple test cases per flow)\n\n- **Best Practices:**\n - Design workflows to maximize coverage with minimal complexity\n - Avoid overly complex flows; keep steps maintainable\n - Reuse login, navigation, and setup steps\n - Validate both UI and backend (API, DB) where possible\n\n- **Example Automation Workflow Table (Markdown):**\n\n| Workflow Name | Test Cases Covered | Workflow Steps |\n|--------------|--------------------|----------------|\n| Document Upload, Extraction, and Review | TC-01, TC-03, TC-04, ... | 1. Login<br>2. Upload document<br>3. Queue for extraction<br>... |\n\n---\n\n## 4. Test Data Management\n\n- Use a variety of data: valid, invalid, edge, corrupted, large/small, etc.\n- Document test data in the test case table or as a separate section\n- For automation, use data-driven approaches where possible\n\n---\n\n## 5. 
General Best Practices\n\n- Always link test cases to requirements/user stories/epics\n- Keep test cases atomic and independent\n- Regularly review and update test cases and workflows\n- Prioritize automation for repetitive, high-value, and regression scenarios\n- Ensure auditability and traceability (who tested what, when, and why)\n- Include negative, boundary, and exploratory tests\n- Validate error handling, security, and access control\n- Document assumptions, dependencies, and environment details\n\n---\n\n## 6. Example Section Templates\n\n### Test Case Table\n\n| Test Case ID | Title | Objective | Steps | Expected Result | Test Data |\n|--------------|-------|-----------|-------|----------------|-----------|\n| TC-01 | ... | ... | ... | ... | ... |\n\n### Automation Workflow Table\n\n| Workflow Name | Test Cases Covered | Workflow Steps |\n|---------------|-------------------|----------------|\n| ... | ... | ... |\n\n---\n\n## 7. Section for Abbreviations\n\n### Abbreviations\n - Include a section towards the end to have abbreviations and their full forms listed\n - List the abbreviations as bullet points\n - If you are you not able to find a full form for any abbreviation, the use the confluence or the internet to search for that term\n\n- **Example Abbreviation Section:**\nPCDO - Private Credit and Deal Orientation\nCLO - Collateralized Loan Obligation\n\n---\n\n## 8. How to Use This Context\n\n- When starting a new test planning effort, use this as your base context\n- Fill in project-specific details, requirements, and data as needed\n- Use the tables and best practices to guide your test plan, test case, and automation workflow creation\n- Make use of the details provided with the prompt\n- **If additional information is provided with the prompt (such as BDD links, TDD links, JIRA story/task links, or Confluence links), ensure you utilize this information by thoroughly reviewing all referenced artifacts. 
This includes:**\n - Visiting all provided BDD, TDD, JIRA, and Confluence links.\n - For Confluence pages, review all child pages and any referenced or linked Confluence pages.\n - For JIRA artifacts, review all linked JIRA issues and any referenced Confluence pages.\n - Read all comments on both Confluence and JIRA items.\n - Follow all references recursively to ensure complete context is gathered for the test plan.\n\n---\n\n## 8A. Test Planning Deep-Dive Checklist\n\n**When preparing the Test Plan, Test Cases, Testing Strategy, or any other relevant section, use the following checklist to guide your analysis. Answer as many of these as are applicable, and use the answers to enrich the relevant sections of your test plan.**\n\n### Functional Testing\n1. Does the design/change affect the current workflow that it is being developed for?\n2. Does the design/change affect any other transaction?\n - List of affected transactions\n3. Does the design/change affect any existing report/calculation?\n4. List of affected reports/calculations\n5. Is there a need to create separate reporting for the design/change?\n6. Is there any existing data point that could be affected?\n7. Does it affect any other functionality?\n - List of affected functionalities\n8. Are there any new business rules or validations introduced?\n9. How should the system behave in case of missing/invalid data?\n10. Are there any UI/UX changes? If yes, how do they impact usability?\n11. What are the expected inputs and outputs for each function or transaction?\n\n### Integration & Dependencies\n1. Any integration area affected?\n2. List of affected integration areas\n3. Does the change introduce any dependencies on third-party services or external systems?\n4. Are there any upstream or downstream data dependencies?\n5. Do any existing integrations require modification or additional validation?\n6. Are there any database schema changes? 
If yes, how will backward compatibility be handled?\n\n### Non-Functional Testing\n1. What are the performance benchmarks/considerations?\n2. How much time is allowed during processing?\n3. What are the payload size considerations?\n4. Are there security considerations (authentication, authorization, data encryption)?\n5. What are the scalability and concurrency requirements?\n6. Does the system handle expected and unexpected failures gracefully?\n7. Is failover or disaster recovery testing required?\n8. Are there any regulatory or compliance requirements?\n9. Are there browser/device compatibility considerations (for web or mobile applications)?\n\n### Automation & Test Coverage\n1. Are there reusable test cases from previous releases that apply to this change?\n2. Are there specific scenarios that should be prioritized for automation?\n3. What test data is required, and how will it be managed?\n4. Will exploratory testing be needed in addition to scripted test cases?\n5. Are API endpoints available for testing/automation?\n\n### Operational & Monitoring\n1. Are debug/audit logs available?\n2. How will errors be logged and monitored in production?\n3. Are there any alerting mechanisms in place for failures?\n4. What are the rollback procedures if the deployment fails?\n5. How will post-production validation be performed?\n6. What is the frequency of jobs run?\n7. What access/rights are required for QA teams?\n8. Is there a need for specific capabilities/entitlements for testing?\n\n---\n\n- **Test Strategy**\n - Types of testing (unit, functional, UI, API, DB, performance, UAT, etc.)\n - Approach: Happy path, edge/negative, integration, data, exploratory\n - **Sample Test Strategy (for inspiration):**\n\n We will start with the happy flow scenarios for the Transaction management. Here we plan to cover 65-70% of our functional requirements. Then we will move on to test the migrated data testing that will be performed at the Database layer. 
As migration is a major event in the functionality to be available to the users. This is critical to do as existing transaction need to be present in the system. Then, we will test integration areas such that the consumers of the transactional data are in sync with the changing transactions, for this API testing approach will be used as the integration will be done using APIs. This strategy ensures that the we have confidence in the end-to end functional, data and integration areas.\n\n Then, we will move on to covering the edge case scenarios around the transaction management, which are more prone to break the workflow. This phase is to make sure the system is resilient to failures and error.\n\n Finally, we will perform exploratory testing to uncover issues or perform validations to ensure completeness.\n\n **Phase 1: Happy Flow Validation (Foundation Testing)**\n We will begin by validating the happy flow scenarios for Transaction Management, covering approximately 65-70% of the functional requirements. This phase ensures that the fundamental workflows behave as expected under standard conditions.\n *Why this first?*\n - Establishes baseline functional correctness.\n - Allows early defect detection and reduces rework in later stages.\n - Enables smoother collaboration with development teams in case of issues.\n\n **Phase 2: Migrated Data Validation (Data Integrity Testing)**\n Once core functionality is validated, we will focus on migrated data testing at the database layer. 
Since data migration is a key event enabling this functionality, it is critical to verify that existing transactions are accurately present and available in the system post-migration.\n *Why now?*\n - Ensures historical transactions remain intact post-migration.\n - Prevents discrepancies before proceeding to further functional and integration tests.\n\n **Phase 3: Integration Testing (System Connectivity & Data Flow Testing)**\n Next, we will validate integration areas, ensuring that consumers of transactional data are in sync with the changing transactions. This will be performed through API testing, as integration is API-driven.\n *Why at this stage?*\n - By now, we have confidence in functional correctness and data integrity.\n - Ensures seamless interaction between the transaction system and external consumers.\n - Helps detect system-wide inconsistencies before moving to deeper validation.\n\n **Phase 4: Edge Case & Negative Scenario Testing (Resilience & Failure Handling)**\n At this stage, we will cover edge case scenarios, which are more prone to breaking the workflow. This includes boundary conditions, invalid inputs, concurrency issues, and error handling.\n *Why now?*\n - Functional and integration correctness is already established, allowing us to focus on resilience.\n - Helps assess the system's robustness in handling unexpected inputs and failures.\n\n **Phase 5: Exploratory Testing (Final Validations & Completeness Check)**\n Finally, we will perform exploratory testing to uncover issues that may not have been covered in structured test cases. This includes usability assessments, performance bottlenecks, and overlooked functional gaps.\n *Why at the end?*\n - Ensures real-world scenarios and human intuition-driven testing before deployment.\n - Helps identify potential last-minute gaps or overlooked issues.\n\n**This context is designed to make your test planning fast, consistent, and high quality.**";
//# sourceMappingURL=test-planner-context.d.ts.map
package/dist/agents/test-planner-context.d.ts.map
ADDED
@@ -0,0 +1 @@
{"version":3,"file":"test-planner-context.d.ts","sourceRoot":"","sources":["../../src/agents/test-planner-context.ts"],"names":[],"mappings":"AAAA;;;;GAIG;AAEH,eAAO,MAAM,oBAAoB,+xcAoR2D,CAAC"}