@bugzy-ai/bugzy 1.16.0 → 1.18.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -134,27 +134,12 @@ Example structure:
  {
  inline: true,
  title: "Generate All Manual Test Case Files",
- content: `Generate ALL manual test case markdown files in the \`./test-cases/\` directory BEFORE invoking the test-code-generator agent.
-
- **For each test scenario from the previous step:**
-
- 1. **Create test case file** in \`./test-cases/\` with format \`TC-XXX-feature-description.md\`
- 2. **Include frontmatter** with:
- - \`id:\` TC-XXX (sequential ID)
- - \`title:\` Clear, descriptive title
- - \`automated:\` true/false (based on automation decision)
- - \`automated_test:\` (leave empty - will be filled by subagent when automated)
- - \`type:\` exploratory/functional/regression/smoke
- - \`area:\` Feature area/component
- 3. **Write test case content**:
- - **Objective**: Clear description of what is being tested
- - **Preconditions**: Setup requirements, test data needed
- - **Test Steps**: Numbered, human-readable steps
- - **Expected Results**: What should happen at each step
- - **Test Data**: Environment variables to use (e.g., \${TEST_BASE_URL}, \${TEST_OWNER_EMAIL})
- - **Notes**: Any assumptions, clarifications needed, or special considerations
-
- **Output**: All manual test case markdown files created in \`./test-cases/\` with automation flags set`
+ content: `Generate ALL manual test case markdown files in \`./test-cases/\` BEFORE invoking the test-code-generator agent.
+
+ Create files using the \`TC-XXX-feature-description.md\` format. Follow the format of existing test cases in the directory. If none exist, include:
+ - Frontmatter with test case metadata (id, title, type, area, \`automated: true/false\`, \`automated_test:\` empty)
+ - Clear test steps with expected results
+ - Required test data references (use env var names, not values)`
  },
  // Step 11: Automate Test Cases (inline - detailed instructions for test-code-generator)
  {
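Editor's note: the 1.16.0 prompt text removed in this hunk spelled out the full frontmatter schema (id, title, automated, automated_test, type, area) that the compressed 1.18.0 text only summarizes. A minimal sketch of the file skeleton that schema describes, as a hypothetical helper (`renderTestCase` is illustrative, not part of bugzy's API):

```javascript
// Hypothetical helper sketching the test-case file skeleton from the removed
// prompt text. Frontmatter field names follow the 1.16.0 spec: id, title,
// automated, automated_test (left empty), type, area.
function renderTestCase({ id, title, type, area, automated }) {
  return [
    "---",
    `id: ${id}`,
    `title: ${title}`,
    `automated: ${automated}`,
    "automated_test:", // left empty; filled by the subagent when automated
    `type: ${type}`,
    `area: ${area}`,
    "---",
    "",
    "## Objective",
    "## Preconditions",
    "## Test Steps",
    "## Expected Results",
    "## Test Data",
    "## Notes",
  ].join("\n");
}

const md = renderTestCase({
  id: "TC-001",
  title: "Owner can log in",
  type: "functional",
  area: "auth",
  automated: true,
});
console.log(md.split("\n")[1]); // "id: TC-001"
```

The section headings mirror the removed "Write test case content" list; a real run would follow whatever format existing files in `./test-cases/` already use, per the new prompt text.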
@@ -239,76 +224,14 @@ Move to the next area and repeat until all areas are complete.
  {
  inline: true,
  title: "Team Communication",
- content: `{{INVOKE_TEAM_COMMUNICATOR}} to notify the product team about the new test cases and automated tests:
-
- \`\`\`
- 1. Post an update about test case and automation creation
- 2. Provide summary of coverage:
- - Number of manual test cases created
- - Number of automated tests created
- - Features covered by automation
- - Areas kept manual-only (and why)
- 3. Highlight key automated test scenarios
- 4. Share command to run automated tests (from \`./tests/CLAUDE.md\`)
- 5. Ask for team review and validation
- 6. Mention any areas needing exploration or clarification
- 7. Use appropriate channel and threading for the update
- \`\`\`
-
- The team communication should include:
- - **Test artifacts created**: Manual test cases + automated tests count
- - **Automation coverage**: Which features are now automated
- - **Manual-only areas**: Why some tests are kept manual (rare scenarios, exploratory)
- - **Key automated scenarios**: Critical paths now covered by automation
- - **Running tests**: Command to execute automated tests
- - **Review request**: Ask team to validate scenarios and review test code
- - **Next steps**: Plans for CI/CD integration or additional test coverage
-
- **Update team communicator memory:**
- - Record this communication
- - Note test case and automation creation
- - Track team feedback on automation approach
- - Document any clarifications requested`,
+ content: `{{INVOKE_TEAM_COMMUNICATOR}} to share test case and automation results with the team, highlighting coverage areas, automation vs manual-only decisions, and any unresolved clarifications. Ask for team review.`,
  conditionalOnSubagent: "team-communicator"
  },
  // Step 17: Final Summary (inline)
  {
  inline: true,
  title: "Final Summary",
- content: `Provide a comprehensive summary showing:
-
- **Manual Test Cases:**
- - Number of manual test cases created
- - List of test case files with IDs and titles
- - Automation status for each (automated: yes/no)
-
- **Automated Tests:**
- - Number of automated test scripts created
- - List of spec files with test counts
- - Page Objects created or updated
- - Fixtures and helpers added
-
- **Test Coverage:**
- - Features covered by manual tests
- - Features covered by automated tests
- - Areas kept manual-only (and why)
-
- **Next Steps:**
- - Command to run automated tests (from \`./tests/CLAUDE.md\`)
- - Instructions to run specific test file (from \`./tests/CLAUDE.md\`)
- - Note about copying .env.testdata to .env
- - Mention any exploration needed for edge cases
-
- **Important Notes:**
- - **Both Manual AND Automated**: Generate both artifacts - they serve different purposes
- - **Manual Test Cases**: Documentation, reference, can be executed manually when needed
- - **Automated Tests**: Fast, repeatable, for CI/CD and regression testing
- - **Automation Decision**: Not all test cases need automation - rare edge cases can stay manual
- - **Linking**: Manual test cases reference automated tests; automated tests reference manual test case IDs
- - **Two-Phase Workflow**: First generate all manual test cases, then automate area-by-area
- - **Ambiguity Handling**: Use exploration and clarification protocols before generating
- - **Environment Variables**: Use \`process.env.VAR_NAME\` in tests, update .env.testdata as needed
- - **Test Independence**: Each test must be runnable in isolation and in parallel`
+ content: `Provide a summary of created artifacts: manual test cases (count, IDs), automated tests (count, spec files), page objects and supporting files, coverage by area, and the command to run tests (from \`./tests/CLAUDE.md\`)`
  }
  ],
  requiredSubagents: ["browser-automation", "test-code-generator"],
@@ -475,28 +398,7 @@ After saving the test plan:
  {
  inline: true,
  title: "Team Communication",
- content: `{{INVOKE_TEAM_COMMUNICATOR}} to notify the product team about the new test plan:
-
- \`\`\`
- 1. Post an update about the test plan creation
- 2. Provide a brief summary of coverage areas and key features
- 3. Mention any areas that need exploration or clarification
- 4. Ask for team review and feedback on the test plan
- 5. Include a link or reference to the test-plan.md file
- 6. Use appropriate channel and threading for the update
- \`\`\`
-
- The team communication should include:
- - **Test plan scope**: Brief overview of what will be tested
- - **Coverage highlights**: Key features and user flows included
- - **Areas needing clarification**: Any uncertainties discovered during documentation research
- - **Review request**: Ask team to review and provide feedback
- - **Next steps**: Mention plan to generate test cases after review
-
- **Update team communicator memory:**
- - Record this communication in the team-communicator memory
- - Note this as a test plan creation communication
- - Track team response to this type of update`,
+ content: `{{INVOKE_TEAM_COMMUNICATOR}} to share the test plan with the team for review, highlighting coverage areas and any unresolved clarifications.`,
  conditionalOnSubagent: "team-communicator"
  },
  // Step 18: Final Summary (inline)
@@ -618,59 +520,7 @@ After processing the message through the handler and composing your response:
  // Step 7: Clarification Protocol (for ambiguous intents)
  "clarification-protocol",
  // Step 8: Knowledge Base Update (library)
- "update-knowledge-base",
- // Step 9: Key Principles (inline)
- {
- inline: true,
- title: "Key Principles",
- content: `## Key Principles
-
- ### Context Preservation
- - Always maintain full conversation context
- - Link responses back to original uncertainties
- - Preserve reasoning chain for future reference
-
- ### Actionable Responses
- - Convert team input into concrete actions
- - Don't let clarifications sit without implementation
- - Follow through on commitments made to team
-
- ### Learning Integration
- - Each interaction improves our understanding
- - Build knowledge base of team preferences
- - Refine communication approaches over time
-
- ### Quality Communication
- - Acknowledge team input appropriately
- - Provide updates on actions taken
- - Ask good follow-up questions when needed`
- },
- // Step 10: Important Considerations (inline)
- {
- inline: true,
- title: "Important Considerations",
- content: `## Important Considerations
-
- ### Thread Organization
- - Keep related discussions in same thread
- - Start new threads for new topics
- - Maintain clear conversation boundaries
-
- ### Response Timing
- - Acknowledge important messages promptly
- - Allow time for implementation before status updates
- - Don't spam team with excessive communications
-
- ### Action Prioritization
- - Address urgent clarifications first
- - Batch related updates when possible
- - Focus on high-impact changes
-
- ### Memory Maintenance
- - Keep active conversations visible and current
- - Archive resolved discussions appropriately
- - Maintain searchable history of resolutions`
- }
+ "update-knowledge-base"
  ],
  requiredSubagents: ["team-communicator"],
  optionalSubagents: [],
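Editor's note: the hunks above show that a task's `steps` array mixes library step ids (plain strings such as `"update-knowledge-base"`) with inline step objects (`{ inline: true, title, content, conditionalOnSubagent }`). A sketch of how such a heterogeneous list might be filtered against the configured subagents; the helper name and filtering behavior are assumptions for illustration, not bugzy's actual implementation:

```javascript
// Hypothetical filter over a bugzy-style steps array: library step ids
// (strings) always pass; inline steps gated by conditionalOnSubagent are
// kept only when that subagent is enabled.
function activeSteps(steps, enabledSubagents) {
  return steps.filter((step) => {
    if (typeof step === "string") return true; // library step id
    if (!step.conditionalOnSubagent) return true; // unconditional inline step
    return enabledSubagents.includes(step.conditionalOnSubagent);
  });
}

const steps = [
  "clarification-protocol",
  {
    inline: true,
    title: "Team Communication",
    content: "...",
    conditionalOnSubagent: "team-communicator",
  },
  "update-knowledge-base",
];

console.log(activeSteps(steps, []).length); // 2 (conditional step dropped)
console.log(activeSteps(steps, ["team-communicator"]).length); // 3
```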
@@ -1097,38 +947,7 @@ Create files if they don't exist:
  - \`.bugzy/runtime/memory/event-history.md\``
  },
  // Step 14: Knowledge Base Update (library)
- "update-knowledge-base",
- // Step 15: Important Considerations (inline)
- {
- inline: true,
- title: "Important Considerations",
- content: `## Important Considerations
-
- ### Contextual Intelligence
- - Never process events in isolation - always consider full context
- - Use knowledge base, history, and external system state to inform decisions
- - What seems like a bug might be expected behavior given the context
- - A minor event might be critical when seen as part of a pattern
-
- ### Adaptive Response
- - Same event type can require different actions based on context
- - Learn from each event to improve future decision-making
- - Build understanding of system behavior over time
- - Adjust responses based on business priorities and risk
-
- ### Smart Task Generation
- - NEVER execute action tasks directly \u2014 all action tasks go through blocked-task-queue for team confirmation
- - Knowledge base updates and event history logging are the only direct operations
- - Document why each decision was made with full context
- - Skip redundant actions (e.g., duplicate events, already-processed issues)
- - Escalate appropriately based on pattern recognition
-
- ### Continuous Learning
- - Each event adds to our understanding of the system
- - Update patterns when new correlations are discovered
- - Refine decision rules based on outcomes
- - Build institutional memory through event history`
- }
+ "update-knowledge-base"
  ],
  requiredSubagents: ["team-communicator"],
  optionalSubagents: ["documentation-researcher", "issue-tracker"],
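Editor's note: the removed "Smart Task Generation" text encodes a concrete dispatch rule: knowledge-base updates and event-history logging are the only operations executed directly, while every action task goes through the blocked-task-queue for team confirmation. A minimal sketch of that routing rule; the function and the task-type names are hypothetical, only the two-lane rule comes from the removed text:

```javascript
// Hypothetical dispatcher for the removed rule: only knowledge-base updates
// and event-history logging run directly; all other action tasks are queued
// for team confirmation via the blocked-task-queue.
const DIRECT_OPS = new Set(["update-knowledge-base", "log-event-history"]);

function route(task) {
  return DIRECT_OPS.has(task.type)
    ? { lane: "direct", task }
    : { lane: "blocked-task-queue", task }; // held until the team confirms
}

console.log(route({ type: "log-event-history" }).lane); // "direct"
console.log(route({ type: "file-bug" }).lane); // "blocked-task-queue"
```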
@@ -1394,33 +1213,13 @@ Store the detected trigger for use in output routing:
  title: "Coverage Gap vs. Ambiguity",
  content: `### Coverage Gap vs. Ambiguity
 
- When the trigger indicates a feature has been implemented and is ready for testing (Jira "Ready to Test", PR merged, CI/CD pipeline):
-
- **Missing test coverage for the referenced feature is a COVERAGE GAP, not an ambiguity.**
-
- - The developer/team is asserting the feature exists and is ready for testing
- - "Not yet explored" or "out of scope" in the test plan means the QA team hasn't tested it yet \u2014 it does NOT mean the feature doesn't exist
- - Do NOT classify as CRITICAL based on stale documentation or knowledge base gaps
- - If project-context.md or the Jira issue references the feature, assume it exists until browser exploration proves otherwise
- - Coverage gaps are handled in the "Create Tests for Coverage Gaps" step below \u2014 do NOT block here
+ When the trigger indicates a feature is ready for testing (Jira "Ready to Test", PR merged, CI/CD):
 
- ### If You Browse the App and Cannot Find the Referenced Feature
+ **Missing test coverage is a COVERAGE GAP, not an ambiguity.** The trigger asserts the feature exists. Do NOT block based on stale docs or knowledge base gaps. Coverage gaps are handled in "Create Tests for Coverage Gaps" below.
 
- Apply the Clarification Protocol's **"Execution Obstacle vs. Requirement Ambiguity"** principle:
+ **If you can't find the referenced feature in the browser:** Apply the Clarification Protocol's execution obstacle principle. The authoritative trigger asserts it exists \u2014 this is an execution obstacle (wrong role, missing test data, feature flags, env config). PROCEED to create tests, add placeholder env vars, notify team about the access issue. Tests may fail until resolved \u2014 that's expected.
 
- This is an **execution obstacle**, NOT a requirement ambiguity \u2014 because the authoritative trigger source (Jira issue, PR, team request) asserts the feature exists. Common causes for not finding it:
- - **Missing role/tier**: You're logged in as a basic user but the feature requires admin/premium access
- - **Missing test data**: Required test accounts or data haven't been configured in \`.env.testdata\`
- - **Feature flags**: The feature is behind a flag not enabled in the test environment
- - **Environment config**: The feature requires specific environment variables or deployment settings
-
- **Action: PROCEED to "Create Tests for Coverage Gaps".** Do NOT BLOCK.
- - Create test cases and specs that reference the feature as described in the trigger
- - Add placeholder env vars to \`.env.testdata\` for any missing credentials
- - Notify the team (via team-communicator) about the access obstacle and what needs to be configured
- - Tests may fail until the obstacle is resolved \u2014 this is expected and acceptable
-
- **Only classify as CRITICAL (and BLOCK) if NO authoritative trigger source claims the feature exists** \u2014 e.g., a vague manual request with no Jira/PR backing.`
+ **Only BLOCK if NO authoritative trigger source claims the feature exists** (e.g., vague manual request with no Jira/PR backing).`
  },
  // Step 6: Clarification Protocol (library)
  "clarification-protocol",
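Editor's note: both versions of this protocol tell the agent to add placeholder env vars to `.env.testdata` and let tests fail until access is configured. One way that pattern can look in a spec, sketched with a hypothetical helper (the env var names `TEST_BASE_URL` and `TEST_OWNER_EMAIL` are examples taken from the 1.16.0 prompt text elsewhere in this diff):

```javascript
// Hypothetical helper: resolve a test-data env var, falling back to an
// obvious placeholder so the spec can be generated (and fail loudly) before
// .env.testdata is fully configured.
function testVar(name) {
  const value = process.env[name];
  if (value === undefined || value === "") {
    return `PLACEHOLDER_${name}`; // test fails until the team fills this in
  }
  return value;
}

process.env.TEST_BASE_URL = "https://staging.example.com"; // simulated config
delete process.env.TEST_OWNER_EMAIL; // simulated missing credential

console.log(testVar("TEST_BASE_URL")); // "https://staging.example.com"
console.log(testVar("TEST_OWNER_EMAIL")); // "PLACEHOLDER_TEST_OWNER_EMAIL"
```

The placeholder value makes the resulting failure self-explanatory when the team investigates, which matches the protocol's "tests may fail until resolved" expectation.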
@@ -1811,44 +1610,11 @@ Post PR comment if GitHub context available.`,
  {
  inline: true,
  title: "Handle Special Cases",
- content: `**If no tests found for changed files:**
- - Inform user: "No automated tests found for changed files"
- - Recommend: "Run smoke test suite for basic validation"
- - Still generate manual verification checklist
-
- **If all tests skipped:**
- - Explain why (dependencies, environment issues)
- - Recommend: Check test configuration and prerequisites
-
- **If test execution fails:**
- - Report specific error (test framework not installed, env vars missing)
- - Suggest troubleshooting steps
- - Don't proceed with triage if tests didn't run
-
- ## Important Notes
-
- - This task handles **all trigger sources** with a single unified workflow
- - Trigger detection is automatic based on input format
- - Output is automatically routed to the appropriate channel
- - Automated tests are executed with **full triage and automatic fixing**
- - Manual verification checklists are generated for **non-automatable scenarios**
- - Product bugs are logged with **automatic duplicate detection**
- - Test issues are fixed automatically with **verification**
- - Results include both automated and manual verification items
-
- ## Success Criteria
-
- A successful verification includes:
- 1. Trigger source correctly detected
- 2. Context extracted completely
- 3. Tests executed (or skipped with explanation)
- 4. All failures triaged (product bug vs test issue)
- 5. Test issues fixed automatically (when possible)
- 6. Product bugs logged to issue tracker
- 7. Manual verification checklist generated
- 8. Results formatted for output channel
- 9. Results delivered to appropriate destination
- 10. Clear recommendation provided (merge / review / block)`
+ content: `**If no tests found for changed files:** recommend smoke test suite, still generate manual verification checklist.
+
+ **If all tests skipped:** explain why (dependencies, environment), recommend checking configuration.
+
+ **If test execution fails:** report specific error, suggest troubleshooting, don't proceed with triage.`
  }
  ],
  requiredSubagents: ["browser-automation", "test-debugger-fixer"],
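Editor's note: the retained special-case rule "don't proceed with triage if tests didn't run" distinguishes an execution error from test failures. A minimal sketch of that guard; the `nextAction` shape and field names are illustrative, not bugzy's code:

```javascript
// Hypothetical guard implementing the special-case rule: a run that never
// executed (framework missing, env vars absent) is reported as an error and
// triage is skipped; only real failures from an executed run are triaged.
function nextAction(run) {
  if (!run.executed) {
    return { action: "report-error", detail: run.error }; // halt, no triage
  }
  if (run.failures.length === 0) {
    return { action: "summarize" };
  }
  return { action: "triage", failures: run.failures };
}

console.log(nextAction({ executed: false, error: "playwright: command not found" }).action); // "report-error"
console.log(nextAction({ executed: true, failures: ["TC-003 login"] }).action); // "triage"
console.log(nextAction({ executed: true, failures: [] }).action); // "summarize"
```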