@bugzy-ai/bugzy 1.7.0 → 1.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (52)
  1. package/LICENSE +21 -21
  2. package/README.md +273 -273
  3. package/dist/cli/index.cjs +465 -15
  4. package/dist/cli/index.cjs.map +1 -1
  5. package/dist/cli/index.js +464 -14
  6. package/dist/cli/index.js.map +1 -1
  7. package/dist/index.cjs +460 -12
  8. package/dist/index.cjs.map +1 -1
  9. package/dist/index.js +460 -12
  10. package/dist/index.js.map +1 -1
  11. package/dist/subagents/index.cjs +392 -6
  12. package/dist/subagents/index.cjs.map +1 -1
  13. package/dist/subagents/index.js +392 -6
  14. package/dist/subagents/index.js.map +1 -1
  15. package/dist/subagents/metadata.cjs +27 -0
  16. package/dist/subagents/metadata.cjs.map +1 -1
  17. package/dist/subagents/metadata.js +27 -0
  18. package/dist/subagents/metadata.js.map +1 -1
  19. package/dist/tasks/index.cjs +30 -1
  20. package/dist/tasks/index.cjs.map +1 -1
  21. package/dist/tasks/index.js +30 -1
  22. package/dist/tasks/index.js.map +1 -1
  23. package/package.json +95 -95
  24. package/templates/init/.bugzy/runtime/knowledge-base.md +61 -61
  25. package/templates/init/.bugzy/runtime/knowledge-maintenance-guide.md +97 -97
  26. package/templates/init/.bugzy/runtime/project-context.md +35 -35
  27. package/templates/init/.bugzy/runtime/subagent-memory-guide.md +87 -87
  28. package/templates/init/.bugzy/runtime/templates/test-plan-template.md +50 -50
  29. package/templates/init/.bugzy/runtime/templates/test-result-schema.md +498 -498
  30. package/templates/init/.bugzy/runtime/test-execution-strategy.md +535 -535
  31. package/templates/init/.bugzy/runtime/testing-best-practices.md +724 -632
  32. package/templates/init/.env.testdata +18 -18
  33. package/templates/init/.gitignore-template +24 -24
  34. package/templates/init/AGENTS.md +155 -155
  35. package/templates/init/CLAUDE.md +157 -157
  36. package/templates/init/test-runs/README.md +45 -45
  37. package/templates/playwright/BasePage.template.ts +190 -190
  38. package/templates/playwright/auth.setup.template.ts +89 -89
  39. package/templates/playwright/dataGenerators.helper.template.ts +148 -148
  40. package/templates/playwright/dateUtils.helper.template.ts +96 -96
  41. package/templates/playwright/pages.fixture.template.ts +50 -50
  42. package/templates/playwright/playwright.config.template.ts +97 -97
  43. package/templates/playwright/reporters/bugzy-reporter.ts +454 -454
  44. package/dist/templates/init/.bugzy/runtime/knowledge-base.md +0 -61
  45. package/dist/templates/init/.bugzy/runtime/knowledge-maintenance-guide.md +0 -97
  46. package/dist/templates/init/.bugzy/runtime/project-context.md +0 -35
  47. package/dist/templates/init/.bugzy/runtime/subagent-memory-guide.md +0 -87
  48. package/dist/templates/init/.bugzy/runtime/templates/test-plan-template.md +0 -50
  49. package/dist/templates/init/.bugzy/runtime/templates/test-result-schema.md +0 -498
  50. package/dist/templates/init/.bugzy/runtime/test-execution-strategy.md +0 -535
  51. package/dist/templates/init/.bugzy/runtime/testing-best-practices.md +0 -632
  52. package/dist/templates/init/.gitignore-template +0 -25
@@ -1,157 +1,157 @@
- # Project Memory
-
- ## Project Overview
- **Type**: Test Management System
- **Purpose**: Autonomous QA testing with AI-based test generation and execution
-
- ## SECURITY NOTICE
-
- **CRITICAL: Never Read the .env File**
-
- All agents, subagents, and slash commands must follow this security policy:
-
- - **NEVER read the `.env` file** - It contains secrets, credentials, and sensitive data
- - **ALWAYS use `.env.testdata`** instead - It contains only variable names, never actual values
- - **Reference variables by name only** - e.g., TEST_BASE_URL, TEST_USER_EMAIL (never read the actual values)
- - **This is enforced** - `.claude/settings.json` has `Read(.env)` in the deny list
-
- When you need to know what environment variables are available:
- 1. Read `.env.testdata` to see variable names and structure
- 2. Reference variables by name in documentation and instructions
- 3. Trust that users will configure their own `.env` file from the example
- 4. Never attempt to access actual credential values
-
- This policy applies to all agents, commands, and any code execution.
-
- ## Agent System
-
- ### Active Agents
- <!-- AUTO-GENERATED: This section is populated during project creation -->
- <!-- Configured agents will be listed here based on project setup -->
-
- Agents configured for this project are stored in `.claude/agents/`. These are specialized sub-agents that handle specific domains. The active agents and their integrations will be configured during project setup.
-
- ### Agent Memory Index
- <!-- AUTO-GENERATED: Agent memory files are created during project setup -->
-
- Specialized memory files for domain experts will be created in `.bugzy/runtime/memory/` based on configured agents.
-
- ## Project Structure
-
- ### Core Components
- - **Test Plan**: `test-plan.md` - Generated via `/generate-test-plan` command using template
- - **Test Cases**: `./test-cases/` - Individual test case files generated via `/generate-test-cases`
- - **Exploration Reports**: `./exploration-reports/` - Structured reports from application exploration sessions
- - **Knowledge Base**: `.bugzy/runtime/knowledge-base.md` - Curated knowledge about the project (maintained using `.bugzy/runtime/knowledge-maintenance-guide.md`)
- - **Issue Tracking**: Managed by issue-tracker agent in your configured project management system
- - **Runtime**: `.bugzy/runtime/` - Project-specific runtime files:
- - `memory/` - Agent memory and knowledge base files
- - `templates/` - Document templates for test plan generation
- - `knowledge-base.md` - Accumulated insights
- - `project-context.md` - **CRITICAL**: Project SDLC, team information, QA workflow, and testing guidelines
- - **MCP Configuration**: `.bugzy/.mcp.json` - Template with all available MCP server configurations
-
- ### Available Commands
- 1. `/generate-test-plan` - Generate comprehensive test plan from product documentation using the documentation-researcher agent
- 2. `/explore-application` - Systematically explore the application to discover UI elements, workflows, and behaviors, updating test plan with findings
- 3. `/generate-test-cases` - Create E2E browser test cases (exploratory, functional, regression, smoke) based on documentation and test plan
- 4. `/run-tests` - Execute test cases using the test-runner agent
- 5. `/handle-message` - Process team responses and manage ongoing conversations with the product team using the team-communicator agent
-
- ### Git Workflow
-
- **Git operations are now handled automatically by the execution environment.**
-
- Agents and commands should NOT perform git operations (commit, push). Instead:
-
- 1. **Focus on core work**: Agents generate files and execute tests
- 2. **External service handles git**: After task completion, the Cloud Run environment:
- - Commits all changes with standardized messages
- - Pushes to the remote repository
- - Handles authentication and errors
-
- **Files Automatically Committed**:
- - `test-plan.md` - Test planning documents
- - `.env.testdata` - Environment variable templates
- - `./test-cases/*.md` - Individual test case files
- - `./exploration-reports/*.md` - Application exploration findings
- - `.bugzy/runtime/knowledge-base.md` - Accumulated testing insights
- - `.bugzy/runtime/memory/*.md` - Agent memory and knowledge base
- - `CLAUDE.md` - Updated project context (when modified)
- - `.bugzy/runtime/project-context.md` - Project-specific runtime context
-
- **Git-Ignored Files** (NOT committed):
- - `.env` - Local environment variables with secrets
- - `logs/` - Temporary log files
- - `tmp/` - Temporary files
- - `.playwright-mcp/` - Playwright videos (uploaded to GCS separately)
- - `node_modules/` - Node.js dependencies
-
- ### Workflow
-
- #### Testing Workflow
- 1. Generate test plan: `/generate-test-plan` command leverages the documentation-researcher agent to gather requirements
- 2. Explore application: `/explore-application --focus [area] --depth [shallow|deep]` discovers actual UI elements and behaviors
- 3. Create test cases: `/generate-test-cases --type [type] --focus [feature]` uses discovered documentation and exploration findings
- 4. Execute tests: `/run-tests [test-id|type|tag|all]` runs test cases via test-runner agent
- 5. Continuous refinement through exploration and test execution
-
- ### Testing Lifecycle Phases
- 1. **Initial Test Plan Creation** - From product documentation
- 2. **Application Exploration** - Systematic discovery via `/explore-application`
- 3. **Test Case Generation** - Stored as individual files with actual UI elements
- 4. **Test Execution** - Automated runs via `/run-tests`
- 5. **Continuous Refinement** - Through exploration and test execution
- 6. **Regression Testing** - Scheduled or triggered execution
-
- ## Key Project Insights
- <!-- Critical discoveries and patterns that apply across all domains -->
-
- ## Cross-Domain Knowledge
-
- ### Project Context Usage
- The `.bugzy/runtime/project-context.md` file contains critical project information that MUST be referenced by:
- - All agents during initialization to understand project specifics
- - All commands before processing to align with project requirements
- - Any custom implementations to maintain consistency
-
- Key information includes: QA workflow, story status management, bug reporting guidelines, testing environment details, and SDLC methodology.
-
- ### Test Execution Standards
-
- **Automated Test Execution** (via `/run-tests` command):
- - Test runs are stored in hierarchical structure: `./test-runs/YYYYMMDD-HHMMSS/TC-XXX/exec-N/`
- - Each test run session creates:
- - `execution-id.txt`: Contains BUGZY_EXECUTION_ID for session tracking
- - `manifest.json`: Overall run metadata with all test cases and executions
- - Per test case folders (`TC-001-login/`, etc.) containing execution attempts
- - Each execution attempt (`exec-1/`, `exec-2/`, `exec-3/`) contains:
- - `result.json`: Playwright test result format with status, duration, errors, attachments
- - `steps.json`: Step-by-step execution with video timestamps (if test uses `test.step()`)
- - `video.webm`: Video recording (copied from Playwright's temp location)
- - `trace.zip`: Trace file (only for failures)
- - `screenshots/`: Screenshots directory (only for failures)
- - Videos are recorded for ALL tests, traces/screenshots only for failures
- - Custom Bugzy reporter handles all artifact organization automatically
- - External service uploads videos from execution folders to GCS
-
- **Step-Level Tracking** (Automated Tests):
- - Tests using `test.step()` API generate steps.json files with video timestamps
- - Each step includes timestamp and videoTimeSeconds for easy video navigation
- - High-level steps recommended (3-7 per test) for manageable navigation
- - Steps appear in both result.json (Playwright format) and steps.json (user-friendly format)
- - Tests without `test.step()` still work but won't have steps.json
-
- **Manual Test Execution** (via test-runner agent):
- - Manual test cases use separate format: `./test-runs/YYYYMMDD-HHMMSS/TC-XXX/`
- - Each test run generates:
- - `summary.json`: Structured test result with video filename reference
- - `steps.json`: Step-by-step execution with timestamps and video synchronization
- - Video recording via Playwright MCP --save-video flag
- - Videos remain in `.playwright-mcp/` folder - external service uploads to GCS
-
- **General Rules**:
- - NO test-related files should be created in project root
- - DO NOT copy, move, or delete video files manually
- - DO NOT CREATE ANY SUMMARY, REPORT, OR ADDITIONAL FILES unless explicitly requested
- <!-- Additional cross-domain information below -->
+ # Project Memory
+
+ ## Project Overview
+ **Type**: Test Management System
+ **Purpose**: Autonomous QA testing with AI-based test generation and execution
+
+ ## SECURITY NOTICE
+
+ **CRITICAL: Never Read the .env File**
+
+ All agents, subagents, and slash commands must follow this security policy:
+
+ - **NEVER read the `.env` file** - It contains secrets, credentials, and sensitive data
+ - **ALWAYS use `.env.testdata`** instead - It contains only variable names, never actual values
+ - **Reference variables by name only** - e.g., TEST_BASE_URL, TEST_USER_EMAIL (never read the actual values)
+ - **This is enforced** - `.claude/settings.json` has `Read(.env)` in the deny list
+
+ When you need to know what environment variables are available:
+ 1. Read `.env.testdata` to see variable names and structure
+ 2. Reference variables by name in documentation and instructions
+ 3. Trust that users will configure their own `.env` file from the example
+ 4. Never attempt to access actual credential values
+
+ This policy applies to all agents, commands, and any code execution.
+
+ ## Agent System
+
+ ### Active Agents
+ <!-- AUTO-GENERATED: This section is populated during project creation -->
+ <!-- Configured agents will be listed here based on project setup -->
+
+ Agents configured for this project are stored in `.claude/agents/`. These are specialized sub-agents that handle specific domains. The active agents and their integrations will be configured during project setup.
+
+ ### Agent Memory Index
+ <!-- AUTO-GENERATED: Agent memory files are created during project setup -->
+
+ Specialized memory files for domain experts will be created in `.bugzy/runtime/memory/` based on configured agents.
+
+ ## Project Structure
+
+ ### Core Components
+ - **Test Plan**: `test-plan.md` - Generated via `/generate-test-plan` command using template
+ - **Test Cases**: `./test-cases/` - Individual test case files generated via `/generate-test-cases`
+ - **Exploration Reports**: `./exploration-reports/` - Structured reports from application exploration sessions
+ - **Knowledge Base**: `.bugzy/runtime/knowledge-base.md` - Curated knowledge about the project (maintained using `.bugzy/runtime/knowledge-maintenance-guide.md`)
+ - **Issue Tracking**: Managed by issue-tracker agent in your configured project management system
+ - **Runtime**: `.bugzy/runtime/` - Project-specific runtime files:
+ - `memory/` - Agent memory and knowledge base files
+ - `templates/` - Document templates for test plan generation
+ - `knowledge-base.md` - Accumulated insights
+ - `project-context.md` - **CRITICAL**: Project SDLC, team information, QA workflow, and testing guidelines
+ - **MCP Configuration**: `.bugzy/.mcp.json` - Template with all available MCP server configurations
+
+ ### Available Commands
+ 1. `/generate-test-plan` - Generate comprehensive test plan from product documentation using the documentation-researcher agent
+ 2. `/explore-application` - Systematically explore the application to discover UI elements, workflows, and behaviors, updating test plan with findings
+ 3. `/generate-test-cases` - Create E2E browser test cases (exploratory, functional, regression, smoke) based on documentation and test plan
+ 4. `/run-tests` - Execute test cases using the test-runner agent
+ 5. `/handle-message` - Process team responses and manage ongoing conversations with the product team using the team-communicator agent
+
+ ### Git Workflow
+
+ **Git operations are now handled automatically by the execution environment.**
+
+ Agents and commands should NOT perform git operations (commit, push). Instead:
+
+ 1. **Focus on core work**: Agents generate files and execute tests
+ 2. **External service handles git**: After task completion, the Cloud Run environment:
+ - Commits all changes with standardized messages
+ - Pushes to the remote repository
+ - Handles authentication and errors
+
+ **Files Automatically Committed**:
+ - `test-plan.md` - Test planning documents
+ - `.env.testdata` - Environment variable templates
+ - `./test-cases/*.md` - Individual test case files
+ - `./exploration-reports/*.md` - Application exploration findings
+ - `.bugzy/runtime/knowledge-base.md` - Accumulated testing insights
+ - `.bugzy/runtime/memory/*.md` - Agent memory and knowledge base
+ - `CLAUDE.md` - Updated project context (when modified)
+ - `.bugzy/runtime/project-context.md` - Project-specific runtime context
+
+ **Git-Ignored Files** (NOT committed):
+ - `.env` - Local environment variables with secrets
+ - `logs/` - Temporary log files
+ - `tmp/` - Temporary files
+ - `.playwright-mcp/` - Playwright videos (uploaded to GCS separately)
+ - `node_modules/` - Node.js dependencies
+
+ ### Workflow
+
+ #### Testing Workflow
+ 1. Generate test plan: `/generate-test-plan` command leverages the documentation-researcher agent to gather requirements
+ 2. Explore application: `/explore-application --focus [area] --depth [shallow|deep]` discovers actual UI elements and behaviors
+ 3. Create test cases: `/generate-test-cases --type [type] --focus [feature]` uses discovered documentation and exploration findings
+ 4. Execute tests: `/run-tests [test-id|type|tag|all]` runs test cases via test-runner agent
+ 5. Continuous refinement through exploration and test execution
+
+ ### Testing Lifecycle Phases
+ 1. **Initial Test Plan Creation** - From product documentation
+ 2. **Application Exploration** - Systematic discovery via `/explore-application`
+ 3. **Test Case Generation** - Stored as individual files with actual UI elements
+ 4. **Test Execution** - Automated runs via `/run-tests`
+ 5. **Continuous Refinement** - Through exploration and test execution
+ 6. **Regression Testing** - Scheduled or triggered execution
+
+ ## Key Project Insights
+ <!-- Critical discoveries and patterns that apply across all domains -->
+
+ ## Cross-Domain Knowledge
+
+ ### Project Context Usage
+ The `.bugzy/runtime/project-context.md` file contains critical project information that MUST be referenced by:
+ - All agents during initialization to understand project specifics
+ - All commands before processing to align with project requirements
+ - Any custom implementations to maintain consistency
+
+ Key information includes: QA workflow, story status management, bug reporting guidelines, testing environment details, and SDLC methodology.
+
+ ### Test Execution Standards
+
+ **Automated Test Execution** (via `/run-tests` command):
+ - Test runs are stored in hierarchical structure: `./test-runs/YYYYMMDD-HHMMSS/TC-XXX/exec-N/`
+ - Each test run session creates:
+ - `execution-id.txt`: Contains BUGZY_EXECUTION_ID for session tracking
+ - `manifest.json`: Overall run metadata with all test cases and executions
+ - Per test case folders (`TC-001-login/`, etc.) containing execution attempts
+ - Each execution attempt (`exec-1/`, `exec-2/`, `exec-3/`) contains:
+ - `result.json`: Playwright test result format with status, duration, errors, attachments
+ - `steps.json`: Step-by-step execution with video timestamps (if test uses `test.step()`)
+ - `video.webm`: Video recording (copied from Playwright's temp location)
+ - `trace.zip`: Trace file (only for failures)
+ - `screenshots/`: Screenshots directory (only for failures)
+ - Videos are recorded for ALL tests, traces/screenshots only for failures
+ - Custom Bugzy reporter handles all artifact organization automatically
+ - External service uploads videos from execution folders to GCS
+
+ **Step-Level Tracking** (Automated Tests):
+ - Tests using `test.step()` API generate steps.json files with video timestamps
+ - Each step includes timestamp and videoTimeSeconds for easy video navigation
+ - High-level steps recommended (3-7 per test) for manageable navigation
+ - Steps appear in both result.json (Playwright format) and steps.json (user-friendly format)
+ - Tests without `test.step()` still work but won't have steps.json
+
+ **Manual Test Execution** (via test-runner agent):
+ - Manual test cases use separate format: `./test-runs/YYYYMMDD-HHMMSS/TC-XXX/`
+ - Each test run generates:
+ - `summary.json`: Structured test result with video filename reference
+ - `steps.json`: Step-by-step execution with timestamps and video synchronization
+ - Video recording via Playwright MCP --save-video flag
+ - Videos remain in `.playwright-mcp/` folder - external service uploads to GCS
+
+ **General Rules**:
+ - NO test-related files should be created in project root
+ - DO NOT copy, move, or delete video files manually
+ - DO NOT CREATE ANY SUMMARY, REPORT, OR ADDITIONAL FILES unless explicitly requested
+ <!-- Additional cross-domain information below -->
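The SECURITY NOTICE in the template above states that `.claude/settings.json` carries `Read(.env)` in its deny list. The settings file itself is not part of this diff, so the surrounding structure below is an assumption; a deny rule of that kind typically sits under a `permissions.deny` array:

```json
{
  "permissions": {
    "deny": [
      "Read(.env)"
    ]
  }
}
```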
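For the Step-Level Tracking notes above, a minimal sketch of the `test.step()` convention they describe, using Playwright's standard API. The TC id, selectors, page content, and `TEST_USER_PASSWORD` are hypothetical placeholders; only `TEST_BASE_URL`, `TEST_USER_EMAIL`, and the 3-7 high-level-steps pattern come from the template text.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical spec showing a few high-level test.step() calls, which the
// template says are what the Bugzy reporter turns into steps.json entries.
test('TC-001: user can sign in', async ({ page }) => {
  await test.step('Navigate to the login page', async () => {
    await page.goto(process.env.TEST_BASE_URL ?? 'http://localhost:3000');
  });

  await test.step('Submit valid credentials', async () => {
    await page.getByLabel('Email').fill(process.env.TEST_USER_EMAIL ?? '');
    await page.getByLabel('Password').fill(process.env.TEST_USER_PASSWORD ?? '');
    await page.getByRole('button', { name: 'Sign in' }).click();
  });

  await test.step('Verify the dashboard is visible', async () => {
    await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
  });
});
```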
@@ -1,45 +1,45 @@
- # Test Runs
-
- This directory contains execution results and artifacts from test runs.
-
- ## Structure
-
- Test runs are organized by timestamp and test case:
- ```
- YYYYMMDD-HHMMSS/
- ├── TC-XXX/
- │ ├── summary.json
- │ ├── steps.json
- │ └── video.webm
- ```
-
- ## Contents
-
- Each test run includes:
- - **summary.json**: Structured test result with video metadata and status
- - **steps.json**: Detailed execution steps with timestamps and video synchronization
- - **video.webm**: Video recording of the entire test execution
-
- **Note**: All test information (status, failures, step details, observations) is captured in the structured JSON files. The video provides visual evidence synchronized with the steps.
-
- ## Test Result Schema
-
- All test results follow the schema defined in `.bugzy/runtime/templates/test-result-schema.md`.
-
- Key features:
- - Video recording with synchronized step navigation
- - Timestamped steps for precise playback control
- - Structured metadata for test status, type, and priority
- - User-friendly action descriptions for easy understanding
-
- ## Usage
-
- Test runs are automatically generated when:
- - Tests are executed via automation
- - Manual test execution is logged
- - Event processing triggers test validation
-
- Results are used for:
- - Issue tracking via the issue-tracker agent
- - Learning extraction for continuous improvement
- - Regression analysis and pattern recognition
+ # Test Runs
+
+ This directory contains execution results and artifacts from test runs.
+
+ ## Structure
+
+ Test runs are organized by timestamp and test case:
+ ```
+ YYYYMMDD-HHMMSS/
+ ├── TC-XXX/
+ │ ├── summary.json
+ │ ├── steps.json
+ │ └── video.webm
+ ```
+
+ ## Contents
+
+ Each test run includes:
+ - **summary.json**: Structured test result with video metadata and status
+ - **steps.json**: Detailed execution steps with timestamps and video synchronization
+ - **video.webm**: Video recording of the entire test execution
+
+ **Note**: All test information (status, failures, step details, observations) is captured in the structured JSON files. The video provides visual evidence synchronized with the steps.
+
+ ## Test Result Schema
+
+ All test results follow the schema defined in `.bugzy/runtime/templates/test-result-schema.md`.
+
+ Key features:
+ - Video recording with synchronized step navigation
+ - Timestamped steps for precise playback control
+ - Structured metadata for test status, type, and priority
+ - User-friendly action descriptions for easy understanding
+
+ ## Usage
+
+ Test runs are automatically generated when:
+ - Tests are executed via automation
+ - Manual test execution is logged
+ - Event processing triggers test validation
+
+ Results are used for:
+ - Issue tracking via the issue-tracker agent
+ - Learning extraction for continuous improvement
+ - Regression analysis and pattern recognition
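The README above names `.bugzy/runtime/templates/test-result-schema.md` as the authoritative schema for these files. As an illustration only, a hedged TypeScript sketch of the per-step records it describes: `videoTimeSeconds`, the timestamp, and the user-friendly description are mentioned in the text, while the field names and types below are assumptions.

```typescript
// Hypothetical shape of a steps.json entry, inferred from the README above.
// The real definition lives in .bugzy/runtime/templates/test-result-schema.md.
interface StepEntry {
  description: string;      // user-friendly action description
  timestamp: string;        // when the step ran (format assumed to be ISO 8601)
  videoTimeSeconds: number; // offset into video.webm for synchronized playback
}

type StepsFile = StepEntry[]; // steps.json: ordered list of executed steps
```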