@bugzy-ai/bugzy 1.18.0 → 1.18.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -21
- package/README.md +273 -273
- package/dist/cli/index.cjs +227 -5
- package/dist/cli/index.cjs.map +1 -1
- package/dist/cli/index.js +226 -4
- package/dist/cli/index.js.map +1 -1
- package/dist/index.cjs +214 -2
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +214 -2
- package/dist/index.js.map +1 -1
- package/dist/subagents/index.cjs.map +1 -1
- package/dist/subagents/index.js.map +1 -1
- package/dist/subagents/metadata.cjs.map +1 -1
- package/dist/subagents/metadata.js.map +1 -1
- package/dist/tasks/index.cjs +130 -2
- package/dist/tasks/index.cjs.map +1 -1
- package/dist/tasks/index.d.cts +1 -0
- package/dist/tasks/index.d.ts +1 -0
- package/dist/tasks/index.js +130 -2
- package/dist/tasks/index.js.map +1 -1
- package/package.json +95 -95
- package/templates/init/.bugzy/runtime/handlers/messages/feedback.md +178 -178
- package/templates/init/.bugzy/runtime/handlers/messages/question.md +122 -122
- package/templates/init/.bugzy/runtime/handlers/messages/status.md +146 -146
- package/templates/init/.bugzy/runtime/knowledge-base.md +61 -61
- package/templates/init/.bugzy/runtime/knowledge-maintenance-guide.md +97 -97
- package/templates/init/.bugzy/runtime/project-context.md +35 -35
- package/templates/init/.bugzy/runtime/subagent-memory-guide.md +87 -87
- package/templates/init/.bugzy/runtime/templates/event-examples.md +194 -194
- package/templates/init/.bugzy/runtime/templates/test-plan-template.md +50 -50
- package/templates/init/.bugzy/runtime/templates/test-result-schema.md +498 -498
- package/templates/init/.claude/settings.json +28 -28
- package/templates/init/.env.testdata +18 -18
- package/templates/init/.gitignore-template +24 -24
- package/templates/init/AGENTS.md +155 -155
- package/templates/init/CLAUDE.md +157 -157
- package/templates/init/test-runs/README.md +45 -45
- package/templates/init/tests/CLAUDE.md +193 -193
- package/templates/init/tests/docs/test-execution-strategy.md +535 -535
- package/templates/init/tests/docs/testing-best-practices.md +724 -724
- package/templates/playwright/BasePage.template.ts +190 -190
- package/templates/playwright/auth.setup.template.ts +89 -89
- package/templates/playwright/dataGenerators.helper.template.ts +148 -148
- package/templates/playwright/dateUtils.helper.template.ts +96 -96
- package/templates/playwright/pages.fixture.template.ts +50 -50
- package/templates/playwright/playwright.config.template.ts +97 -97
- package/templates/playwright/reporters/__tests__/bugzy-reporter-failure-classification.test.ts +299 -299
- package/templates/playwright/reporters/__tests__/bugzy-reporter-manifest-merge.test.ts +329 -329
- package/templates/playwright/reporters/__tests__/playwright.config.ts +5 -5
- package/templates/playwright/reporters/bugzy-reporter.ts +784 -784
- package/dist/templates/init/.bugzy/runtime/knowledge-base.md +0 -61
- package/dist/templates/init/.bugzy/runtime/knowledge-maintenance-guide.md +0 -97
- package/dist/templates/init/.bugzy/runtime/project-context.md +0 -35
- package/dist/templates/init/.bugzy/runtime/subagent-memory-guide.md +0 -87
- package/dist/templates/init/.bugzy/runtime/templates/test-plan-template.md +0 -50
- package/dist/templates/init/.bugzy/runtime/templates/test-result-schema.md +0 -498
- package/dist/templates/init/.bugzy/runtime/test-execution-strategy.md +0 -535
- package/dist/templates/init/.bugzy/runtime/testing-best-practices.md +0 -632
- package/dist/templates/init/.gitignore-template +0 -25

@@ -1,122 +1,122 @@  (package/templates/init/.bugzy/runtime/handlers/messages/question.md; old and new contents are identical)

# Question Message Handler

Instructions for processing questions about the project, tests, coverage, or testing status.

## Detection Criteria

This handler applies when:
- Message contains question words (what, how, which, where, why, when, do, does, is, are, can)
- Question relates to tests, test plan, coverage, test results, or project artifacts
- User is seeking information, NOT requesting an action
- Intent field from LLM layer is `question`

## Processing Steps

### Step 1: Classify Question Type

Analyze the question to determine the primary type:

| Type | Indicators | Primary Context Sources |
|------|------------|------------------------|
| **Coverage** | "what tests", "do we have", "is there a test for", "covered" | test-cases/, test-plan.md |
| **Results** | "did tests pass", "what failed", "test results", "how many" | test-runs/ |
| **Knowledge** | "how does", "what is", "explain", feature/component questions | knowledge-base.md |
| **Plan** | "what's in scope", "test plan", "testing strategy", "priorities" | test-plan.md |
| **Process** | "how do I", "when should", "what's the workflow" | project-context.md |
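The keyword matching in the table can be sketched as a small lookup, checked in table order with the LLM intent field as fallback. This is a hypothetical helper for illustration only; the actual handler is an LLM prompt, and `classifyQuestion` is not part of the package:

```typescript
// Hypothetical sketch of Step 1 keyword matching (the real handler is an
// LLM prompt, not code). The first matching type wins, in table order.
type QuestionType = "coverage" | "results" | "knowledge" | "plan" | "process";

const INDICATORS: Array<[QuestionType, string[]]> = [
  ["coverage", ["what tests", "do we have", "is there a test for", "covered"]],
  ["results", ["did tests pass", "what failed", "test results", "how many"]],
  ["knowledge", ["how does", "what is", "explain"]],
  ["plan", ["in scope", "test plan", "testing strategy", "priorities"]],
  ["process", ["how do i", "when should", "workflow"]],
];

function classifyQuestion(message: string): QuestionType | null {
  const text = message.toLowerCase();
  for (const [type, phrases] of INDICATORS) {
    if (phrases.some((p) => text.includes(p))) return type;
  }
  return null; // no indicator matched: fall back to the LLM intent field
}
```

For example, `classifyQuestion("Do we have tests for login?")` matches the "do we have" indicator and returns `"coverage"`.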

### Step 2: Load Relevant Context

Based on question type, load the appropriate files:

**For Coverage questions**:
1. Read `test-plan.md` for overall test strategy
2. List files in `./test-cases/` directory
3. Search test case files for relevant keywords

**For Results questions**:
1. List directories in `./test-runs/` (sorted by date, newest first)
2. Read `summary.json` from relevant test run directories
3. Extract pass/fail counts, failure reasons

**For Knowledge questions**:
1. Read `.bugzy/runtime/knowledge-base.md`
2. Search for relevant entries
3. Also check `test-plan.md` for feature descriptions

**For Plan questions**:
1. Read `test-plan.md`
2. Extract relevant sections (scope, priorities, features)

**For Process questions**:
1. Read `.bugzy/runtime/project-context.md`
2. Check for workflow documentation
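Taken together, the lists above amount to a mapping from question type to context sources. A minimal sketch (hypothetical helper; the paths are the ones named in this handler, and actually reading or searching the files is left out):

```typescript
// Hypothetical Step 2 lookup: files/directories to consult per question
// type. Paths mirror the lists above.
const CONTEXT_SOURCES: Record<string, string[]> = {
  coverage: ["test-plan.md", "./test-cases/"],
  results: ["./test-runs/"],
  knowledge: [".bugzy/runtime/knowledge-base.md", "test-plan.md"],
  plan: ["test-plan.md"],
  process: [".bugzy/runtime/project-context.md"],
};

function contextFor(questionType: string): string[] {
  return CONTEXT_SOURCES[questionType] ?? [];
}
```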

### Step 3: Formulate Answer

Compose the answer following these guidelines:

1. **Be specific**: Quote relevant sections from source files
2. **Cite sources**: Mention which files contain the information
3. **Structure clearly**: Use bullet points for multiple items
4. **Quantify when possible**: "We have 12 test cases covering login..."
5. **Acknowledge gaps**: If information is incomplete, say so

### Step 4: Offer Follow-up

End responses with:
- An offer to provide more detail if needed
- Suggestions for related information that might be helpful
- For coverage gaps, an offer to create test cases

## Response Guidelines

**Structure**:
```
[Direct answer to the question]

[Supporting details/evidence with file references]

[Optional: Related information or follow-up offer]
```

**Examples**:

For "Do we have tests for login?":
```
Yes, we have 4 test cases covering the login feature:
- TC-001: Successful login with valid credentials
- TC-002: Login failure with invalid password
- TC-003: Login with remember me option
- TC-004: Password reset flow

These are documented in ./test-cases/TC-001.md through TC-004.md.
Would you like details on any specific test case?
```

For "How many tests passed in the last run?":
```
The most recent test run (2024-01-15 14:30) results:
- Total: 24 tests
- Passed: 21 (87.5%)
- Failed: 3

Failed tests:
- TC-012: Checkout timeout (performance issue)
- TC-015: Image upload failed (file size validation)
- TC-018: Search pagination broken

Results are in ./test-runs/20240115-143000/summary.json
```
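The numbers in the results example come straight from `summary.json`. A sketch of deriving the pass-rate line, assuming the summary exposes `total`/`passed`/`failed` counts (the actual schema is defined in `test-result-schema.md` and may differ):

```typescript
// Assumed summary.json shape; see test-result-schema.md for the real schema.
interface RunSummary {
  total: number;
  passed: number;
  failed: number;
}

// Renders the "Total / Passed / Failed" line used in the results answer.
function formatPassRate(s: RunSummary): string {
  const rate = s.total === 0 ? 0 : (s.passed / s.total) * 100;
  return `Total: ${s.total}, Passed: ${s.passed} (${rate.toFixed(1)}%), Failed: ${s.failed}`;
}
```

For the run in the example above, `formatPassRate({ total: 24, passed: 21, failed: 3 })` yields `Total: 24, Passed: 21 (87.5%), Failed: 3`.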

## Context Loading Requirements

Required (based on question type):
- [ ] Test plan (`test-plan.md`) - for coverage, plan, knowledge questions
- [ ] Test cases (`./test-cases/`) - for coverage questions
- [ ] Test runs (`./test-runs/`) - for results questions
- [ ] Knowledge base (`.bugzy/runtime/knowledge-base.md`) - for knowledge questions
- [ ] Project context (`.bugzy/runtime/project-context.md`) - for process questions

## Memory Updates

None required - questions are read-only operations. No state changes needed.

@@ -1,146 +1,146 @@  (package/templates/init/.bugzy/runtime/handlers/messages/status.md; old and new contents are identical)

# Status Message Handler

Instructions for processing status requests about tests, tasks, or executions.

## Detection Criteria

This handler applies when:
- User asks about progress or status
- Keywords present: "status", "progress", "how is", "what happened", "results", "how did", "update on"
- Questions about test runs, task completion, or execution state
- Intent field from LLM layer is `status`

## Processing Steps

### Step 1: Identify Status Scope

Determine what the user is asking about:

| Scope | Indicators | Data Sources |
|-------|------------|--------------|
| **Latest test run** | "last run", "recent tests", "how did tests go" | Most recent test-runs/ directory |
| **Specific test** | Test ID mentioned (TC-XXX), specific feature name | test-runs/*/TC-XXX/, test-cases/TC-XXX.md |
| **All tests / Overall** | "overall", "all tests", "test coverage", "pass rate" | All test-runs/ summaries |
| **Specific feature** | Feature name mentioned | Filter test-runs by feature |
| **Task progress** | "is the task done", "what's happening with" | team-communicator memory |

### Step 2: Gather Status Data

**For Latest Test Run**:
1. List directories in `./test-runs/` sorted by name (newest first)
2. Read `summary.json` from the most recent directory
3. Extract: total tests, passed, failed, skipped, execution time
4. For failures, extract brief failure reasons

**For Specific Test**:
1. Find the test case file in `./test-cases/TC-XXX.md`
2. Search test-runs for directories containing this test ID
3. Get the most recent result for this specific test
4. Include: last run date, result, failure reason if failed

**For Overall Status**:
1. Read all `summary.json` files in test-runs/
2. Calculate aggregate statistics:
   - Total runs in period (last 7 days, 30 days, etc.)
   - Overall pass rate
   - Most commonly failing tests
   - Trend (improving/declining)

**For Task Progress**:
1. Read `.bugzy/runtime/memory/team-communicator.md`
2. Check for active tasks, blocked tasks, recently completed tasks
3. Extract the relevant task status
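Because run directories are named with `YYYYMMDD-HHMMSS` timestamps, "newest first" reduces to a reverse lexicographic sort on the names. A sketch (hypothetical helper; the actual directory listing and `summary.json` reads are omitted):

```typescript
// Run directories are timestamp-named (YYYYMMDD-HHMMSS), so sorting the
// names in descending order puts the most recent run first.
function newestRun(dirNames: string[]): string | undefined {
  return [...dirNames].sort().reverse()[0];
}
```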

### Step 3: Format Status Report

Present status clearly and concisely:

**For Latest Test Run**:
```
Test Run: [YYYYMMDD-HHMMSS]
Status: [Completed/In Progress]

Results:
- Total: [N] tests
- Passed: [N] ([%])
- Failed: [N] ([%])
- Skipped: [N]

[If failures exist:]
Failed Tests:
- [TC-XXX]: [Brief failure reason]
- [TC-YYY]: [Brief failure reason]

Duration: [X minutes]
```

**For Specific Test**:
```
Test: [TC-XXX] - [Test Name]

Latest Result: [Passed/Failed]
Run Date: [Date/Time]

[If failed:]
Failure Reason: [reason]
Last Successful: [date if known]

[If passed:]
Consecutive Passes: [N] (since [date])
```

**For Overall Status**:
```
Test Suite Overview (Last [N] Days)

Total Test Runs: [N]
Average Pass Rate: [%]

Trend: [Improving/Stable/Declining]

Most Reliable Tests:
- [TC-XXX]: [100%] pass rate
- [TC-YYY]: [100%] pass rate

Flaky/Failing Tests:
- [TC-ZZZ]: [40%] pass rate - [common failure reason]
- [TC-AAA]: [60%] pass rate - [common failure reason]

Last Run: [date/time] - [X/Y passed]
```
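The overview's aggregate fields can be computed from the per-run summaries. A sketch, assuming each summary carries `total`/`passed` counts and using a simple latest-run-vs-average heuristic for the trend (the handler text leaves the exact heuristic open):

```typescript
// Assumed per-run summary fields; the trend heuristic is illustrative only.
interface Summary {
  total: number;
  passed: number;
}

// Pass rate across all runs, as a percentage.
function overallPassRate(runs: Summary[]): number {
  const total = runs.reduce((n, r) => n + r.total, 0);
  const passed = runs.reduce((n, r) => n + r.passed, 0);
  return total === 0 ? 0 : (passed / total) * 100;
}

// "Improving" if the latest run beats the average by more than one point,
// "Declining" if it trails by more than one point, otherwise "Stable".
function trend(runs: Summary[]): "Improving" | "Stable" | "Declining" {
  if (runs.length < 2) return "Stable";
  const last = runs[runs.length - 1];
  const lastRate = last.total === 0 ? 0 : (last.passed / last.total) * 100;
  const avg = overallPassRate(runs);
  if (lastRate > avg + 1) return "Improving";
  if (lastRate < avg - 1) return "Declining";
  return "Stable";
}
```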

### Step 4: Provide Context and Recommendations

Based on the status:

**For failing tests**:
- Suggest reviewing the test case
- Mention if this is a new failure or recurring
- Link to relevant knowledge base entries if they exist

**For overall declining trends**:
- Highlight which tests are causing the decline
- Suggest investigation areas

**For good results**:
- Acknowledge the healthy state
- Mention any tests that were previously failing and are now passing

## Response Guidelines

- Lead with the most important information (pass/fail summary)
- Use clear formatting (bullet points, percentages)
- Include timestamps so users know data freshness
- Offer to drill down into specifics if a summary was given
- Keep responses scannable - use structure over paragraphs

## Context Loading Requirements

Required (based on scope):
- [ ] Test runs (`./test-runs/`) - for any test status
- [ ] Test cases (`./test-cases/`) - for specific test details
- [ ] Team communicator memory (`.bugzy/runtime/memory/team-communicator.md`) - for task status

## Memory Updates

None required - status checks are read-only operations. No state changes needed.

@@ -1,61 +1,61 @@  (package/templates/init/.bugzy/runtime/knowledge-base.md; old and new contents are identical)

# Knowledge Base

> A curated collection of factual knowledge about this project - what we currently know and believe to be true. This is NOT a historical log, but a living reference that evolves as understanding improves.

**Maintenance Guide**: See `knowledge-maintenance-guide.md` for instructions on how to maintain this knowledge base.

**Core Principle**: This document represents current understanding, not a history. When knowledge evolves, update existing entries rather than appending new ones.

---

## Project Knowledge

_This knowledge base will be populated as you work. Add new discoveries, update existing understanding, and remove outdated information following the maintenance guide principles._

### When to Update This File

- **ADD**: New factual information discovered, new patterns emerge, new areas become relevant
- **UPDATE**: Facts change, understanding deepens, multiple facts can be consolidated, language can be clarified
- **REMOVE**: Information becomes irrelevant, facts proven incorrect, entries superseded by better content

### Format Guidelines

- Use clear, declarative statements in present tense
- State facts confidently when known; flag uncertainty when it exists
- Write for someone reading this 6 months from now
- Keep entries relevant and actionable
- Favor consolidation over accumulation

---

## Example Structure

Below is an example structure. Feel free to organize knowledge in the way that makes most sense for this project:

### Architecture & Infrastructure

_System architecture, deployment patterns, infrastructure details_

### Testing Patterns

_Test strategies, common test scenarios, testing conventions_

### UI/UX Patterns

_User interface conventions, interaction patterns, design system details_

### Data & APIs

_Data models, API behaviors, integration patterns_

### Known Issues & Workarounds

_Current limitations, known bugs, and their workarounds_

### Domain Knowledge

_Business domain facts, terminology, rules, and context_

---

**Remember**: Every entry should answer "Will this help someone working on this project in 6 months?"