@bugzy-ai/bugzy 1.16.0 → 1.18.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -166,27 +166,12 @@ Example structure:
  {
  inline: true,
  title: "Generate All Manual Test Case Files",
- content: `Generate ALL manual test case markdown files in the \`./test-cases/\` directory BEFORE invoking the test-code-generator agent.
-
- **For each test scenario from the previous step:**
-
- 1. **Create test case file** in \`./test-cases/\` with format \`TC-XXX-feature-description.md\`
- 2. **Include frontmatter** with:
- - \`id:\` TC-XXX (sequential ID)
- - \`title:\` Clear, descriptive title
- - \`automated:\` true/false (based on automation decision)
- - \`automated_test:\` (leave empty - will be filled by subagent when automated)
- - \`type:\` exploratory/functional/regression/smoke
- - \`area:\` Feature area/component
- 3. **Write test case content**:
- - **Objective**: Clear description of what is being tested
- - **Preconditions**: Setup requirements, test data needed
- - **Test Steps**: Numbered, human-readable steps
- - **Expected Results**: What should happen at each step
- - **Test Data**: Environment variables to use (e.g., \${TEST_BASE_URL}, \${TEST_OWNER_EMAIL})
- - **Notes**: Any assumptions, clarifications needed, or special considerations
-
- **Output**: All manual test case markdown files created in \`./test-cases/\` with automation flags set`
+ content: `Generate ALL manual test case markdown files in \`./test-cases/\` BEFORE invoking the test-code-generator agent.
+
+ Create files using \`TC-XXX-feature-description.md\` format. Follow the format of existing test cases in the directory. If none exist, include:
+ - Frontmatter with test case metadata (id, title, type, area, \`automated: true/false\`, \`automated_test:\` empty)
+ - Clear test steps with expected results
+ - Required test data references (use env var names, not values)`
  },
  // Step 11: Automate Test Cases (inline - detailed instructions for test-code-generator)
  {
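The frontmatter contract this step describes (and the new, condensed prompt still implies) can be sketched as a small helper. This is a hypothetical illustration only — the function name and its shape are not part of @bugzy-ai/bugzy; the field names come from the prompt text above:

```javascript
// Hypothetical sketch of the test-case file format described in the prompt.
// Fields (id, title, automated, automated_test, type, area) mirror the
// frontmatter list above; the helper itself is illustrative, not package code.
function renderTestCase({ id, title, type, area, automated }) {
  return [
    "---",
    `id: ${id}`,
    `title: ${title}`,
    `automated: ${automated}`,
    "automated_test:", // left empty; filled by the subagent when automated
    `type: ${type}`,
    `area: ${area}`,
    "---",
    "",
    `# ${title}`,
  ].join("\n");
}

const md = renderTestCase({
  id: "TC-001",
  title: "Login with valid credentials",
  type: "functional",
  area: "auth",
  automated: true,
});
console.log(md.startsWith("---\nid: TC-001")); // → true
```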
@@ -271,76 +256,14 @@ Move to the next area and repeat until all areas are complete.
  {
  inline: true,
  title: "Team Communication",
- content: `{{INVOKE_TEAM_COMMUNICATOR}} to notify the product team about the new test cases and automated tests:
-
- \`\`\`
- 1. Post an update about test case and automation creation
- 2. Provide summary of coverage:
- - Number of manual test cases created
- - Number of automated tests created
- - Features covered by automation
- - Areas kept manual-only (and why)
- 3. Highlight key automated test scenarios
- 4. Share command to run automated tests (from \`./tests/CLAUDE.md\`)
- 5. Ask for team review and validation
- 6. Mention any areas needing exploration or clarification
- 7. Use appropriate channel and threading for the update
- \`\`\`
-
- The team communication should include:
- - **Test artifacts created**: Manual test cases + automated tests count
- - **Automation coverage**: Which features are now automated
- - **Manual-only areas**: Why some tests are kept manual (rare scenarios, exploratory)
- - **Key automated scenarios**: Critical paths now covered by automation
- - **Running tests**: Command to execute automated tests
- - **Review request**: Ask team to validate scenarios and review test code
- - **Next steps**: Plans for CI/CD integration or additional test coverage
-
- **Update team communicator memory:**
- - Record this communication
- - Note test case and automation creation
- - Track team feedback on automation approach
- - Document any clarifications requested`,
+ content: `{{INVOKE_TEAM_COMMUNICATOR}} to share test case and automation results with the team, highlighting coverage areas, automation vs manual-only decisions, and any unresolved clarifications. Ask for team review.`,
  conditionalOnSubagent: "team-communicator"
  },
  // Step 17: Final Summary (inline)
  {
  inline: true,
  title: "Final Summary",
- content: `Provide a comprehensive summary showing:
-
- **Manual Test Cases:**
- - Number of manual test cases created
- - List of test case files with IDs and titles
- - Automation status for each (automated: yes/no)
-
- **Automated Tests:**
- - Number of automated test scripts created
- - List of spec files with test counts
- - Page Objects created or updated
- - Fixtures and helpers added
-
- **Test Coverage:**
- - Features covered by manual tests
- - Features covered by automated tests
- - Areas kept manual-only (and why)
-
- **Next Steps:**
- - Command to run automated tests (from \`./tests/CLAUDE.md\`)
- - Instructions to run specific test file (from \`./tests/CLAUDE.md\`)
- - Note about copying .env.testdata to .env
- - Mention any exploration needed for edge cases
-
- **Important Notes:**
- - **Both Manual AND Automated**: Generate both artifacts - they serve different purposes
- - **Manual Test Cases**: Documentation, reference, can be executed manually when needed
- - **Automated Tests**: Fast, repeatable, for CI/CD and regression testing
- - **Automation Decision**: Not all test cases need automation - rare edge cases can stay manual
- - **Linking**: Manual test cases reference automated tests; automated tests reference manual test case IDs
- - **Two-Phase Workflow**: First generate all manual test cases, then automate area-by-area
- - **Ambiguity Handling**: Use exploration and clarification protocols before generating
- - **Environment Variables**: Use \`process.env.VAR_NAME\` in tests, update .env.testdata as needed
- - **Test Independence**: Each test must be runnable in isolation and in parallel`
+ content: `Provide a summary of created artifacts: manual test cases (count, IDs), automated tests (count, spec files), page objects and supporting files, coverage by area, and command to run tests (from \`./tests/CLAUDE.md\`).`
  }
  ],
  requiredSubagents: ["browser-automation", "test-code-generator"],
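The environment-variable rule removed above ("Use `process.env.VAR_NAME` in tests, update .env.testdata as needed") survives in the condensed prompt as "use env var names, not values". A minimal sketch of how a test might enforce it — the helper name and the fail-fast behavior are assumptions, not package API:

```javascript
// Hypothetical guard illustrating the "reference env vars by name" rule:
// a test reads process.env at runtime and fails fast naming the variable,
// rather than hard-coding a value. Variable names are examples from the prompt.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required test env var: ${name}`);
  }
  return value;
}

process.env.TEST_BASE_URL = "http://localhost:3000"; // stand-in for .env.testdata
console.log(requireEnv("TEST_BASE_URL")); // → http://localhost:3000
```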
@@ -507,28 +430,7 @@ After saving the test plan:
  {
  inline: true,
  title: "Team Communication",
- content: `{{INVOKE_TEAM_COMMUNICATOR}} to notify the product team about the new test plan:
-
- \`\`\`
- 1. Post an update about the test plan creation
- 2. Provide a brief summary of coverage areas and key features
- 3. Mention any areas that need exploration or clarification
- 4. Ask for team review and feedback on the test plan
- 5. Include a link or reference to the test-plan.md file
- 6. Use appropriate channel and threading for the update
- \`\`\`
-
- The team communication should include:
- - **Test plan scope**: Brief overview of what will be tested
- - **Coverage highlights**: Key features and user flows included
- - **Areas needing clarification**: Any uncertainties discovered during documentation research
- - **Review request**: Ask team to review and provide feedback
- - **Next steps**: Mention plan to generate test cases after review
-
- **Update team communicator memory:**
- - Record this communication in the team-communicator memory
- - Note this as a test plan creation communication
- - Track team response to this type of update`,
+ content: `{{INVOKE_TEAM_COMMUNICATOR}} to share the test plan with the team for review, highlighting coverage areas and any unresolved clarifications.`,
  conditionalOnSubagent: "team-communicator"
  },
  // Step 18: Final Summary (inline)
@@ -650,59 +552,7 @@ After processing the message through the handler and composing your response:
  // Step 7: Clarification Protocol (for ambiguous intents)
  "clarification-protocol",
  // Step 8: Knowledge Base Update (library)
- "update-knowledge-base",
- // Step 9: Key Principles (inline)
- {
- inline: true,
- title: "Key Principles",
- content: `## Key Principles
-
- ### Context Preservation
- - Always maintain full conversation context
- - Link responses back to original uncertainties
- - Preserve reasoning chain for future reference
-
- ### Actionable Responses
- - Convert team input into concrete actions
- - Don't let clarifications sit without implementation
- - Follow through on commitments made to team
-
- ### Learning Integration
- - Each interaction improves our understanding
- - Build knowledge base of team preferences
- - Refine communication approaches over time
-
- ### Quality Communication
- - Acknowledge team input appropriately
- - Provide updates on actions taken
- - Ask good follow-up questions when needed`
- },
- // Step 10: Important Considerations (inline)
- {
- inline: true,
- title: "Important Considerations",
- content: `## Important Considerations
-
- ### Thread Organization
- - Keep related discussions in same thread
- - Start new threads for new topics
- - Maintain clear conversation boundaries
-
- ### Response Timing
- - Acknowledge important messages promptly
- - Allow time for implementation before status updates
- - Don't spam team with excessive communications
-
- ### Action Prioritization
- - Address urgent clarifications first
- - Batch related updates when possible
- - Focus on high-impact changes
-
- ### Memory Maintenance
- - Keep active conversations visible and current
- - Archive resolved discussions appropriately
- - Maintain searchable history of resolutions`
- }
+ "update-knowledge-base"
  ],
  requiredSubagents: ["team-communicator"],
  optionalSubagents: [],
@@ -1129,38 +979,7 @@ Create files if they don't exist:
  - \`.bugzy/runtime/memory/event-history.md\``
  },
  // Step 14: Knowledge Base Update (library)
- "update-knowledge-base",
- // Step 15: Important Considerations (inline)
- {
- inline: true,
- title: "Important Considerations",
- content: `## Important Considerations
-
- ### Contextual Intelligence
- - Never process events in isolation - always consider full context
- - Use knowledge base, history, and external system state to inform decisions
- - What seems like a bug might be expected behavior given the context
- - A minor event might be critical when seen as part of a pattern
-
- ### Adaptive Response
- - Same event type can require different actions based on context
- - Learn from each event to improve future decision-making
- - Build understanding of system behavior over time
- - Adjust responses based on business priorities and risk
-
- ### Smart Task Generation
- - NEVER execute action tasks directly \u2014 all action tasks go through blocked-task-queue for team confirmation
- - Knowledge base updates and event history logging are the only direct operations
- - Document why each decision was made with full context
- - Skip redundant actions (e.g., duplicate events, already-processed issues)
- - Escalate appropriately based on pattern recognition
-
- ### Continuous Learning
- - Each event adds to our understanding of the system
- - Update patterns when new correlations are discovered
- - Refine decision rules based on outcomes
- - Build institutional memory through event history`
- }
+ "update-knowledge-base"
  ],
  requiredSubagents: ["team-communicator"],
  optionalSubagents: ["documentation-researcher", "issue-tracker"],
@@ -1426,33 +1245,13 @@ Store the detected trigger for use in output routing:
  title: "Coverage Gap vs. Ambiguity",
  content: `### Coverage Gap vs. Ambiguity

- When the trigger indicates a feature has been implemented and is ready for testing (Jira "Ready to Test", PR merged, CI/CD pipeline):
-
- **Missing test coverage for the referenced feature is a COVERAGE GAP, not an ambiguity.**
-
- - The developer/team is asserting the feature exists and is ready for testing
- - "Not yet explored" or "out of scope" in the test plan means the QA team hasn't tested it yet \u2014 it does NOT mean the feature doesn't exist
- - Do NOT classify as CRITICAL based on stale documentation or knowledge base gaps
- - If project-context.md or the Jira issue references the feature, assume it exists until browser exploration proves otherwise
- - Coverage gaps are handled in the "Create Tests for Coverage Gaps" step below \u2014 do NOT block here
+ When the trigger indicates a feature is ready for testing (Jira "Ready to Test", PR merged, CI/CD):

- ### If You Browse the App and Cannot Find the Referenced Feature
+ **Missing test coverage is a COVERAGE GAP, not an ambiguity.** The trigger asserts the feature exists. Do NOT block based on stale docs or knowledge base gaps. Coverage gaps are handled in "Create Tests for Coverage Gaps" below.

- Apply the Clarification Protocol's **"Execution Obstacle vs. Requirement Ambiguity"** principle:
+ **If you can't find the referenced feature in the browser:** Apply the Clarification Protocol's execution obstacle principle. The authoritative trigger asserts it exists \u2014 this is an execution obstacle (wrong role, missing test data, feature flags, env config). PROCEED to create tests, add placeholder env vars, notify team about the access issue. Tests may fail until resolved \u2014 that's expected.

- This is an **execution obstacle**, NOT a requirement ambiguity \u2014 because the authoritative trigger source (Jira issue, PR, team request) asserts the feature exists. Common causes for not finding it:
- - **Missing role/tier**: You're logged in as a basic user but the feature requires admin/premium access
- - **Missing test data**: Required test accounts or data haven't been configured in \`.env.testdata\`
- - **Feature flags**: The feature is behind a flag not enabled in the test environment
- - **Environment config**: The feature requires specific environment variables or deployment settings
-
- **Action: PROCEED to "Create Tests for Coverage Gaps".** Do NOT BLOCK.
- - Create test cases and specs that reference the feature as described in the trigger
- - Add placeholder env vars to \`.env.testdata\` for any missing credentials
- - Notify the team (via team-communicator) about the access obstacle and what needs to be configured
- - Tests may fail until the obstacle is resolved \u2014 this is expected and acceptable
-
- **Only classify as CRITICAL (and BLOCK) if NO authoritative trigger source claims the feature exists** \u2014 e.g., a vague manual request with no Jira/PR backing.`
+ **Only BLOCK if NO authoritative trigger source claims the feature exists** (e.g., vague manual request with no Jira/PR backing).`
  },
  // Step 6: Clarification Protocol (library)
  "clarification-protocol",
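The block/proceed rule that both versions of this prompt describe reduces to a small decision function. A hypothetical sketch — trigger names and return values are illustrative, not part of the package:

```javascript
// Hypothetical sketch of the rule above: an authoritative trigger source
// (Jira "Ready to Test", merged PR, CI/CD) means a missing feature is a
// coverage gap or execution obstacle, so work proceeds; only an unbacked
// request blocks. All identifiers here are illustrative.
const AUTHORITATIVE_TRIGGERS = new Set(["jira-ready-to-test", "pr-merged", "cicd"]);

function decide(trigger, featureFoundInBrowser) {
  if (!AUTHORITATIVE_TRIGGERS.has(trigger)) return "BLOCK"; // vague manual request
  if (featureFoundInBrowser) return "CREATE_TESTS";
  // Execution obstacle (role, test data, feature flag, env config): still proceed.
  return "CREATE_TESTS_AND_NOTIFY";
}

console.log(decide("pr-merged", false)); // → CREATE_TESTS_AND_NOTIFY
console.log(decide("manual-vague", false)); // → BLOCK
```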
@@ -1843,44 +1642,11 @@ Post PR comment if GitHub context available.
  {
  inline: true,
  title: "Handle Special Cases",
- content: `**If no tests found for changed files:**
- - Inform user: "No automated tests found for changed files"
- - Recommend: "Run smoke test suite for basic validation"
- - Still generate manual verification checklist
-
- **If all tests skipped:**
- - Explain why (dependencies, environment issues)
- - Recommend: Check test configuration and prerequisites
-
- **If test execution fails:**
- - Report specific error (test framework not installed, env vars missing)
- - Suggest troubleshooting steps
- - Don't proceed with triage if tests didn't run
-
- ## Important Notes
-
- - This task handles **all trigger sources** with a single unified workflow
- - Trigger detection is automatic based on input format
- - Output is automatically routed to the appropriate channel
- - Automated tests are executed with **full triage and automatic fixing**
- - Manual verification checklists are generated for **non-automatable scenarios**
- - Product bugs are logged with **automatic duplicate detection**
- - Test issues are fixed automatically with **verification**
- - Results include both automated and manual verification items
-
- ## Success Criteria
-
- A successful verification includes:
- 1. Trigger source correctly detected
- 2. Context extracted completely
- 3. Tests executed (or skipped with explanation)
- 4. All failures triaged (product bug vs test issue)
- 5. Test issues fixed automatically (when possible)
- 6. Product bugs logged to issue tracker
- 7. Manual verification checklist generated
- 8. Results formatted for output channel
- 9. Results delivered to appropriate destination
- 10. Clear recommendation provided (merge / review / block)`
+ content: `**If no tests found for changed files:** recommend smoke test suite, still generate manual verification checklist.
+
+ **If all tests skipped:** explain why (dependencies, environment), recommend checking configuration.
+
+ **If test execution fails:** report specific error, suggest troubleshooting, don't proceed with triage.`
  }
  ],
  requiredSubagents: ["browser-automation", "test-debugger-fixer"],
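The three special cases in this step amount to a small dispatch over run outcomes. A hypothetical sketch — the function, field names, and action strings are illustrative, not package code:

```javascript
// Hypothetical dispatch over the special cases above. An execution error is
// checked first because triage must not run when the tests themselves didn't.
function handleRunOutcome({ testsFound, allSkipped, executionError }) {
  if (executionError) return { action: "report-error", triage: false };
  if (!testsFound) return { action: "recommend-smoke-suite", checklist: true };
  if (allSkipped) return { action: "explain-skips", checklist: true };
  return { action: "triage-failures", triage: true };
}

console.log(handleRunOutcome({ testsFound: false }).action); // → recommend-smoke-suite
```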