agentic-qe 1.9.4 → 2.1.0

This diff compares the contents of publicly released versions of the package as they appear in their respective public registries. It is provided for informational purposes only.
Files changed (262)
  1. package/.claude/agents/qe-api-contract-validator.md +95 -1336
  2. package/.claude/agents/qe-chaos-engineer.md +152 -1211
  3. package/.claude/agents/qe-code-complexity.md +144 -707
  4. package/.claude/agents/qe-coverage-analyzer.md +147 -743
  5. package/.claude/agents/qe-deployment-readiness.md +143 -1496
  6. package/.claude/agents/qe-flaky-test-hunter.md +132 -1529
  7. package/.claude/agents/qe-fleet-commander.md +12 -12
  8. package/.claude/agents/qe-performance-tester.md +150 -886
  9. package/.claude/agents/qe-production-intelligence.md +155 -1396
  10. package/.claude/agents/qe-quality-analyzer.md +6 -6
  11. package/.claude/agents/qe-quality-gate.md +151 -648
  12. package/.claude/agents/qe-regression-risk-analyzer.md +132 -1150
  13. package/.claude/agents/qe-requirements-validator.md +149 -932
  14. package/.claude/agents/qe-security-scanner.md +157 -797
  15. package/.claude/agents/qe-test-data-architect.md +96 -1365
  16. package/.claude/agents/qe-test-executor.md +8 -8
  17. package/.claude/agents/qe-test-generator.md +145 -1540
  18. package/.claude/agents/qe-visual-tester.md +153 -1257
  19. package/.claude/agents/qx-partner.md +248 -0
  20. package/.claude/agents/subagents/qe-code-reviewer.md +40 -136
  21. package/.claude/agents/subagents/qe-coverage-gap-analyzer.md +40 -480
  22. package/.claude/agents/subagents/qe-data-generator.md +41 -125
  23. package/.claude/agents/subagents/qe-flaky-investigator.md +55 -411
  24. package/.claude/agents/subagents/qe-integration-tester.md +53 -141
  25. package/.claude/agents/subagents/qe-performance-validator.md +54 -130
  26. package/.claude/agents/subagents/qe-security-auditor.md +56 -114
  27. package/.claude/agents/subagents/qe-test-data-architect-sub.md +57 -548
  28. package/.claude/agents/subagents/qe-test-implementer.md +58 -551
  29. package/.claude/agents/subagents/qe-test-refactorer.md +65 -722
  30. package/.claude/agents/subagents/qe-test-writer.md +63 -726
  31. package/.claude/skills/accessibility-testing/SKILL.md +144 -692
  32. package/.claude/skills/agentic-quality-engineering/SKILL.md +176 -529
  33. package/.claude/skills/api-testing-patterns/SKILL.md +180 -560
  34. package/.claude/skills/brutal-honesty-review/SKILL.md +113 -603
  35. package/.claude/skills/bug-reporting-excellence/SKILL.md +116 -517
  36. package/.claude/skills/chaos-engineering-resilience/SKILL.md +127 -72
  37. package/.claude/skills/cicd-pipeline-qe-orchestrator/SKILL.md +209 -404
  38. package/.claude/skills/code-review-quality/SKILL.md +158 -608
  39. package/.claude/skills/compatibility-testing/SKILL.md +148 -38
  40. package/.claude/skills/compliance-testing/SKILL.md +132 -63
  41. package/.claude/skills/consultancy-practices/SKILL.md +114 -446
  42. package/.claude/skills/context-driven-testing/SKILL.md +117 -381
  43. package/.claude/skills/contract-testing/SKILL.md +176 -141
  44. package/.claude/skills/database-testing/SKILL.md +137 -130
  45. package/.claude/skills/exploratory-testing-advanced/SKILL.md +160 -629
  46. package/.claude/skills/holistic-testing-pact/SKILL.md +140 -188
  47. package/.claude/skills/localization-testing/SKILL.md +145 -33
  48. package/.claude/skills/mobile-testing/SKILL.md +132 -448
  49. package/.claude/skills/mutation-testing/SKILL.md +147 -41
  50. package/.claude/skills/performance-testing/SKILL.md +200 -546
  51. package/.claude/skills/quality-metrics/SKILL.md +164 -519
  52. package/.claude/skills/refactoring-patterns/SKILL.md +132 -699
  53. package/.claude/skills/regression-testing/SKILL.md +120 -926
  54. package/.claude/skills/risk-based-testing/SKILL.md +157 -660
  55. package/.claude/skills/security-testing/SKILL.md +199 -538
  56. package/.claude/skills/sherlock-review/SKILL.md +163 -699
  57. package/.claude/skills/shift-left-testing/SKILL.md +161 -465
  58. package/.claude/skills/shift-right-testing/SKILL.md +161 -519
  59. package/.claude/skills/six-thinking-hats/SKILL.md +175 -1110
  60. package/.claude/skills/skills-manifest.json +683 -0
  61. package/.claude/skills/tdd-london-chicago/SKILL.md +131 -448
  62. package/.claude/skills/technical-writing/SKILL.md +103 -154
  63. package/.claude/skills/test-automation-strategy/SKILL.md +166 -772
  64. package/.claude/skills/test-data-management/SKILL.md +126 -910
  65. package/.claude/skills/test-design-techniques/SKILL.md +179 -89
  66. package/.claude/skills/test-environment-management/SKILL.md +136 -91
  67. package/.claude/skills/test-reporting-analytics/SKILL.md +169 -92
  68. package/.claude/skills/testability-scoring/README.md +71 -0
  69. package/.claude/skills/testability-scoring/SKILL.md +245 -0
  70. package/.claude/skills/testability-scoring/resources/templates/config.template.js +84 -0
  71. package/.claude/skills/testability-scoring/resources/templates/testability-scoring.spec.template.js +532 -0
  72. package/.claude/skills/testability-scoring/scripts/generate-html-report.js +1007 -0
  73. package/.claude/skills/testability-scoring/scripts/run-assessment.sh +70 -0
  74. package/.claude/skills/visual-testing-advanced/SKILL.md +155 -78
  75. package/.claude/skills/xp-practices/SKILL.md +151 -587
  76. package/CHANGELOG.md +110 -0
  77. package/README.md +55 -21
  78. package/dist/agents/QXPartnerAgent.d.ts +146 -0
  79. package/dist/agents/QXPartnerAgent.d.ts.map +1 -0
  80. package/dist/agents/QXPartnerAgent.js +1831 -0
  81. package/dist/agents/QXPartnerAgent.js.map +1 -0
  82. package/dist/agents/index.d.ts +1 -0
  83. package/dist/agents/index.d.ts.map +1 -1
  84. package/dist/agents/index.js +82 -2
  85. package/dist/agents/index.js.map +1 -1
  86. package/dist/agents/lifecycle/AgentLifecycleManager.d.ts.map +1 -1
  87. package/dist/agents/lifecycle/AgentLifecycleManager.js +34 -31
  88. package/dist/agents/lifecycle/AgentLifecycleManager.js.map +1 -1
  89. package/dist/cli/commands/debug/agent.d.ts.map +1 -1
  90. package/dist/cli/commands/debug/agent.js +19 -6
  91. package/dist/cli/commands/debug/agent.js.map +1 -1
  92. package/dist/cli/commands/debug/health-check.js +20 -7
  93. package/dist/cli/commands/debug/health-check.js.map +1 -1
  94. package/dist/cli/commands/init-claude-md-template.d.ts +1 -0
  95. package/dist/cli/commands/init-claude-md-template.d.ts.map +1 -1
  96. package/dist/cli/commands/init-claude-md-template.js +18 -3
  97. package/dist/cli/commands/init-claude-md-template.js.map +1 -1
  98. package/dist/cli/commands/workflow/cancel.d.ts.map +1 -1
  99. package/dist/cli/commands/workflow/cancel.js +4 -3
  100. package/dist/cli/commands/workflow/cancel.js.map +1 -1
  101. package/dist/cli/commands/workflow/list.d.ts.map +1 -1
  102. package/dist/cli/commands/workflow/list.js +4 -3
  103. package/dist/cli/commands/workflow/list.js.map +1 -1
  104. package/dist/cli/commands/workflow/pause.d.ts.map +1 -1
  105. package/dist/cli/commands/workflow/pause.js +4 -3
  106. package/dist/cli/commands/workflow/pause.js.map +1 -1
  107. package/dist/cli/init/claude-config.d.ts.map +1 -1
  108. package/dist/cli/init/claude-config.js +3 -8
  109. package/dist/cli/init/claude-config.js.map +1 -1
  110. package/dist/cli/init/claude-md.d.ts.map +1 -1
  111. package/dist/cli/init/claude-md.js +44 -2
  112. package/dist/cli/init/claude-md.js.map +1 -1
  113. package/dist/cli/init/database-init.js +1 -1
  114. package/dist/cli/init/index.d.ts.map +1 -1
  115. package/dist/cli/init/index.js +13 -6
  116. package/dist/cli/init/index.js.map +1 -1
  117. package/dist/cli/init/skills.d.ts.map +1 -1
  118. package/dist/cli/init/skills.js +2 -1
  119. package/dist/cli/init/skills.js.map +1 -1
  120. package/dist/core/SwarmCoordinator.d.ts +180 -0
  121. package/dist/core/SwarmCoordinator.d.ts.map +1 -0
  122. package/dist/core/SwarmCoordinator.js +473 -0
  123. package/dist/core/SwarmCoordinator.js.map +1 -0
  124. package/dist/core/memory/AgentDBIntegration.d.ts +24 -6
  125. package/dist/core/memory/AgentDBIntegration.d.ts.map +1 -1
  126. package/dist/core/memory/AgentDBIntegration.js +66 -10
  127. package/dist/core/memory/AgentDBIntegration.js.map +1 -1
  128. package/dist/core/memory/UnifiedMemoryCoordinator.d.ts +341 -0
  129. package/dist/core/memory/UnifiedMemoryCoordinator.d.ts.map +1 -0
  130. package/dist/core/memory/UnifiedMemoryCoordinator.js +986 -0
  131. package/dist/core/memory/UnifiedMemoryCoordinator.js.map +1 -0
  132. package/dist/core/memory/index.d.ts +5 -0
  133. package/dist/core/memory/index.d.ts.map +1 -1
  134. package/dist/core/memory/index.js +23 -1
  135. package/dist/core/memory/index.js.map +1 -1
  136. package/dist/core/metrics/MetricsAggregator.d.ts +228 -0
  137. package/dist/core/metrics/MetricsAggregator.d.ts.map +1 -0
  138. package/dist/core/metrics/MetricsAggregator.js +482 -0
  139. package/dist/core/metrics/MetricsAggregator.js.map +1 -0
  140. package/dist/core/metrics/index.d.ts +5 -0
  141. package/dist/core/metrics/index.d.ts.map +1 -0
  142. package/dist/core/metrics/index.js +11 -0
  143. package/dist/core/metrics/index.js.map +1 -0
  144. package/dist/core/optimization/SwarmOptimizer.d.ts +190 -0
  145. package/dist/core/optimization/SwarmOptimizer.d.ts.map +1 -0
  146. package/dist/core/optimization/SwarmOptimizer.js +648 -0
  147. package/dist/core/optimization/SwarmOptimizer.js.map +1 -0
  148. package/dist/core/optimization/index.d.ts +9 -0
  149. package/dist/core/optimization/index.d.ts.map +1 -0
  150. package/dist/core/optimization/index.js +25 -0
  151. package/dist/core/optimization/index.js.map +1 -0
  152. package/dist/core/optimization/types.d.ts +53 -0
  153. package/dist/core/optimization/types.d.ts.map +1 -0
  154. package/dist/core/optimization/types.js +6 -0
  155. package/dist/core/optimization/types.js.map +1 -0
  156. package/dist/core/orchestration/AdaptiveScheduler.d.ts +190 -0
  157. package/dist/core/orchestration/AdaptiveScheduler.d.ts.map +1 -0
  158. package/dist/core/orchestration/AdaptiveScheduler.js +460 -0
  159. package/dist/core/orchestration/AdaptiveScheduler.js.map +1 -0
  160. package/dist/core/orchestration/PriorityQueue.d.ts +54 -0
  161. package/dist/core/orchestration/PriorityQueue.d.ts.map +1 -0
  162. package/dist/core/orchestration/PriorityQueue.js +122 -0
  163. package/dist/core/orchestration/PriorityQueue.js.map +1 -0
  164. package/dist/core/orchestration/WorkflowOrchestrator.d.ts +189 -0
  165. package/dist/core/orchestration/WorkflowOrchestrator.d.ts.map +1 -0
  166. package/dist/core/orchestration/WorkflowOrchestrator.js +845 -0
  167. package/dist/core/orchestration/WorkflowOrchestrator.js.map +1 -0
  168. package/dist/core/orchestration/index.d.ts +7 -0
  169. package/dist/core/orchestration/index.d.ts.map +1 -0
  170. package/dist/core/orchestration/index.js +11 -0
  171. package/dist/core/orchestration/index.js.map +1 -0
  172. package/dist/core/orchestration/types.d.ts +96 -0
  173. package/dist/core/orchestration/types.d.ts.map +1 -0
  174. package/dist/core/orchestration/types.js +6 -0
  175. package/dist/core/orchestration/types.js.map +1 -0
  176. package/dist/core/recovery/CircuitBreaker.d.ts +176 -0
  177. package/dist/core/recovery/CircuitBreaker.d.ts.map +1 -0
  178. package/dist/core/recovery/CircuitBreaker.js +382 -0
  179. package/dist/core/recovery/CircuitBreaker.js.map +1 -0
  180. package/dist/core/recovery/RecoveryOrchestrator.d.ts +186 -0
  181. package/dist/core/recovery/RecoveryOrchestrator.d.ts.map +1 -0
  182. package/dist/core/recovery/RecoveryOrchestrator.js +476 -0
  183. package/dist/core/recovery/RecoveryOrchestrator.js.map +1 -0
  184. package/dist/core/recovery/RetryStrategy.d.ts +127 -0
  185. package/dist/core/recovery/RetryStrategy.d.ts.map +1 -0
  186. package/dist/core/recovery/RetryStrategy.js +314 -0
  187. package/dist/core/recovery/RetryStrategy.js.map +1 -0
  188. package/dist/core/recovery/index.d.ts +8 -0
  189. package/dist/core/recovery/index.d.ts.map +1 -0
  190. package/dist/core/recovery/index.js +27 -0
  191. package/dist/core/recovery/index.js.map +1 -0
  192. package/dist/core/skills/DependencyResolver.d.ts +99 -0
  193. package/dist/core/skills/DependencyResolver.d.ts.map +1 -0
  194. package/dist/core/skills/DependencyResolver.js +260 -0
  195. package/dist/core/skills/DependencyResolver.js.map +1 -0
  196. package/dist/core/skills/DynamicSkillLoader.d.ts +96 -0
  197. package/dist/core/skills/DynamicSkillLoader.d.ts.map +1 -0
  198. package/dist/core/skills/DynamicSkillLoader.js +353 -0
  199. package/dist/core/skills/DynamicSkillLoader.js.map +1 -0
  200. package/dist/core/skills/ManifestGenerator.d.ts +114 -0
  201. package/dist/core/skills/ManifestGenerator.d.ts.map +1 -0
  202. package/dist/core/skills/ManifestGenerator.js +449 -0
  203. package/dist/core/skills/ManifestGenerator.js.map +1 -0
  204. package/dist/core/skills/index.d.ts +9 -0
  205. package/dist/core/skills/index.d.ts.map +1 -0
  206. package/dist/core/skills/index.js +24 -0
  207. package/dist/core/skills/index.js.map +1 -0
  208. package/dist/core/skills/types.d.ts +118 -0
  209. package/dist/core/skills/types.d.ts.map +1 -0
  210. package/dist/core/skills/types.js +7 -0
  211. package/dist/core/skills/types.js.map +1 -0
  212. package/dist/core/transport/QUICTransport.d.ts +320 -0
  213. package/dist/core/transport/QUICTransport.d.ts.map +1 -0
  214. package/dist/core/transport/QUICTransport.js +711 -0
  215. package/dist/core/transport/QUICTransport.js.map +1 -0
  216. package/dist/core/transport/index.d.ts +40 -0
  217. package/dist/core/transport/index.d.ts.map +1 -0
  218. package/dist/core/transport/index.js +46 -0
  219. package/dist/core/transport/index.js.map +1 -0
  220. package/dist/core/transport/quic-loader.d.ts +123 -0
  221. package/dist/core/transport/quic-loader.d.ts.map +1 -0
  222. package/dist/core/transport/quic-loader.js +293 -0
  223. package/dist/core/transport/quic-loader.js.map +1 -0
  224. package/dist/core/transport/quic.d.ts +154 -0
  225. package/dist/core/transport/quic.d.ts.map +1 -0
  226. package/dist/core/transport/quic.js +214 -0
  227. package/dist/core/transport/quic.js.map +1 -0
  228. package/dist/mcp/server.d.ts +9 -9
  229. package/dist/mcp/server.d.ts.map +1 -1
  230. package/dist/mcp/server.js +1 -2
  231. package/dist/mcp/server.js.map +1 -1
  232. package/dist/mcp/services/AgentRegistry.d.ts.map +1 -1
  233. package/dist/mcp/services/AgentRegistry.js +4 -1
  234. package/dist/mcp/services/AgentRegistry.js.map +1 -1
  235. package/dist/types/index.d.ts +2 -1
  236. package/dist/types/index.d.ts.map +1 -1
  237. package/dist/types/index.js +2 -0
  238. package/dist/types/index.js.map +1 -1
  239. package/dist/types/qx.d.ts +429 -0
  240. package/dist/types/qx.d.ts.map +1 -0
  241. package/dist/types/qx.js +71 -0
  242. package/dist/types/qx.js.map +1 -0
  243. package/dist/visualization/api/RestEndpoints.js +2 -2
  244. package/dist/visualization/api/RestEndpoints.js.map +1 -1
  245. package/dist/visualization/api/WebSocketServer.d.ts +44 -0
  246. package/dist/visualization/api/WebSocketServer.d.ts.map +1 -1
  247. package/dist/visualization/api/WebSocketServer.js +144 -23
  248. package/dist/visualization/api/WebSocketServer.js.map +1 -1
  249. package/dist/visualization/core/DataTransformer.d.ts +10 -0
  250. package/dist/visualization/core/DataTransformer.d.ts.map +1 -1
  251. package/dist/visualization/core/DataTransformer.js +60 -5
  252. package/dist/visualization/core/DataTransformer.js.map +1 -1
  253. package/dist/visualization/emit-event.d.ts +75 -0
  254. package/dist/visualization/emit-event.d.ts.map +1 -0
  255. package/dist/visualization/emit-event.js +213 -0
  256. package/dist/visualization/emit-event.js.map +1 -0
  257. package/dist/visualization/index.d.ts +1 -0
  258. package/dist/visualization/index.d.ts.map +1 -1
  259. package/dist/visualization/index.js +7 -1
  260. package/dist/visualization/index.js.map +1 -1
  261. package/docs/reference/skills.md +63 -1
  262. package/package.json +16 -58
package/.claude/skills/six-thinking-hats/SKILL.md
@@ -1,1215 +1,280 @@
  ---
- name: "Six Thinking Hats for Testing"
+ name: six-thinking-hats
  description: "Apply Edward de Bono's Six Thinking Hats methodology to software testing for comprehensive quality analysis. Use when designing test strategies, conducting test retrospectives, analyzing test failures, evaluating testing approaches, or facilitating testing discussions. Each hat provides a distinct testing perspective: facts (White), risks (Black), benefits (Yellow), creativity (Green), emotions (Red), and process (Blue)."
+ category: methodology
+ priority: medium
+ tokenEstimate: 1100
+ agents: [qe-quality-analyzer, qe-regression-risk-analyzer, qe-test-generator]
+ implementation_status: optimized
+ optimization_version: 1.0
+ last_optimized: 2025-12-03
+ dependencies: []
+ quick_reference_card: true
+ tags: [thinking, methodology, decision-making, collaboration, analysis]
  ---
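The rewritten frontmatter above replaces prose metadata with machine-readable fields, which the skill loader and manifest generator added under dist/core/skills in this release presumably consume. A minimal sketch of reading these fields, assuming the gray-matter parser; the SkillMeta shape is an illustration, not a type confirmed by this diff:

```typescript
// Sketch only: field names mirror the frontmatter above; the real loader
// (dist/core/skills/DynamicSkillLoader) is not shown in this diff.
import { readFileSync } from "node:fs";
import matter from "gray-matter";

interface SkillMeta {
  name: string;
  category: string;
  priority: "low" | "medium" | "high"; // assumed scale; only "medium" appears here
  tokenEstimate: number;
  agents: string[];
  dependencies: string[];
}

const file = readFileSync(".claude/skills/six-thinking-hats/SKILL.md", "utf8");
const meta = matter(file).data as SkillMeta;
console.log(`${meta.name} (${meta.category}): ~${meta.tokenEstimate} tokens, agents: ${meta.agents.join(", ")}`);
```

Aggregating such records across all skills would plausibly produce the new skills-manifest.json (file 60), though the actual generation logic in ManifestGenerator.js is not shown here.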

  # Six Thinking Hats for Testing

- ## What This Skill Does
+ <default_to_action>
+ When analyzing testing decisions:
+ 1. DEFINE focus clearly (specific testing question)
+ 2. APPLY each hat sequentially (5 min each)
+ 3. DOCUMENT insights per hat
+ 4. SYNTHESIZE into action plan

- Applies Edward de Bono's Six Thinking Hats thinking framework to software testing contexts, enabling structured exploration of quality concerns from six distinct perspectives. Each "hat" represents a specific mode of thinking that helps teams systematically analyze testing scenarios, uncover blind spots, and make better quality decisions.
-
- ## Prerequisites
-
- - Basic understanding of software testing concepts
- - Familiarity with your application under test
- - Team collaboration skills (for group sessions)
- - Open mindset to different perspectives
-
- ## Quick Start
-
- ### Basic Usage Pattern
-
- ```bash
- # 1. Define the testing focus
- FOCUS="Authentication module test strategy"
-
- # 2. Apply each hat sequentially (3-5 minutes each)
- # White Hat: What test data/metrics do we have?
- # Red Hat: What are our gut feelings about quality?
- # Black Hat: What could go wrong? What are the risks?
- # Yellow Hat: What are the benefits of our approach?
- # Green Hat: What creative test approaches could we try?
- # Blue Hat: How should we organize our testing process?
-
- # 3. Document insights from each perspective
- # 4. Synthesize into actionable test plan
+ **Quick Hat Rotation (30 min):**
+ ```markdown
+ 🤍 WHITE (5 min) - Facts only: metrics, data, coverage
+ ❤️ RED (3 min) - Gut feelings (no justification needed)
+ 🖤 BLACK (7 min) - Risks, gaps, what could go wrong
+ 💛 YELLOW (5 min) - Strengths, opportunities, what works
+ 💚 GREEN (7 min) - Creative ideas, alternatives
+ 🔵 BLUE (3 min) - Action plan, next steps
  ```

- ### Quick Example - API Testing
-
- **White Hat (Facts)**: We have 47 API endpoints, 30% test coverage, 12 integration tests, average response time 120ms.
+ **Example for "API Test Strategy":**
+ - 🤍 47 endpoints, 30% coverage, 12 integration tests
+ - ❤️ Anxious about security, confident on happy paths
+ - 🖤 No auth tests, rate limiting untested, edge cases missing
+ - 💛 Good docs, CI/CD integrated, team experienced
+ - 💚 Contract testing with Pact, chaos testing, property-based
+ - 🔵 Security tests first, contract testing next sprint
+ </default_to_action>
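The rotation in the block above is a fixed, timed schedule. A small sketch, assuming nothing beyond plain TypeScript, that encodes it as data so the 30-minute total can be checked; the HatPhase type is hypothetical, not part of the agentic-qe API:

```typescript
// Illustrative only: encodes the Quick Hat Rotation as data.
type Hat = "white" | "red" | "black" | "yellow" | "green" | "blue";

interface HatPhase {
  hat: Hat;
  minutes: number;
  focus: string;
}

const rotation: HatPhase[] = [
  { hat: "white",  minutes: 5, focus: "Facts only: metrics, data, coverage" },
  { hat: "red",    minutes: 3, focus: "Gut feelings (no justification needed)" },
  { hat: "black",  minutes: 7, focus: "Risks, gaps, what could go wrong" },
  { hat: "yellow", minutes: 5, focus: "Strengths, opportunities, what works" },
  { hat: "green",  minutes: 7, focus: "Creative ideas, alternatives" },
  { hat: "blue",   minutes: 3, focus: "Action plan, next steps" },
];

// 5 + 3 + 7 + 5 + 7 + 3 = 30 minutes, matching the schedule above.
const total = rotation.reduce((sum, phase) => sum + phase.minutes, 0);
console.log(`Rotation covers ${rotation.length} hats in ${total} minutes`);
```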

- **Black Hat (Risks)**: No authentication tests, rate limiting untested, error handling incomplete, edge cases missing.
+ ## Quick Reference Card

- **Yellow Hat (Benefits)**: Fast baseline tests exist, good documentation, CI/CD integrated, team has API testing experience.
+ ### The Six Hats

- **Green Hat (Creative)**: Could generate tests from OpenAPI spec, use contract testing with Pact, chaos testing for resilience, property-based testing for edge cases.
+ | Hat | Focus | Key Question |
+ |-----|-------|--------------|
+ | 🤍 **White** | Facts & Data | What do we KNOW? |
+ | ❤️ **Red** | Emotions | What do we FEEL? |
+ | 🖤 **Black** | Risks | What could go WRONG? |
+ | 💛 **Yellow** | Benefits | What's GOOD? |
+ | 💚 **Green** | Creativity | What ELSE could we try? |
+ | 🔵 **Blue** | Process | What should we DO? |

- **Red Hat (Emotions)**: Team feels confident about happy paths but anxious about security. Frustrated by flaky network tests.
+ ### When to Use Each Hat

- **Blue Hat (Process)**: Prioritize security tests first, add contract testing next sprint, dedicate 20% time to exploratory testing, schedule weekly test reviews.
+ | Hat | Use For |
+ |-----|---------|
+ | 🤍 White | Baseline metrics, test data inventory |
+ | ❤️ Red | Team confidence check, quality gut feel |
+ | 🖤 Black | Risk assessment, gap analysis, pre-mortems |
+ | 💛 Yellow | Strengths audit, quick win identification |
+ | 💚 Green | Test innovation, new approaches, brainstorming |
+ | 🔵 Blue | Strategy planning, retrospectives, decision-making |

  ---

- ## The Six Hats Explained for Testing
+ ## Hat Details

  ### 🤍 White Hat - Facts & Data
+ **Output: Quantitative testing baseline**

- **Focus**: Objective information, test metrics, data
-
- **Testing Questions**:
- - What test coverage do we currently have?
- - What metrics are we tracking? (pass/fail rate, execution time, defect density)
- - What test data is available?
- - What test environments exist?
+ Questions:
+ - What test coverage do we have?
+ - What is our pass/fail rate?
+ - What environments exist?
  - What is our defect history?
- - What performance benchmarks do we have?
- - How many test cases exist? (manual vs automated)
-
- **Deliverable**: Quantitative testing baseline

- **Example Output**:
  ```
- Coverage: 67% line coverage, 45% branch coverage
- Test Suite: 1,247 unit tests, 156 integration tests, 23 E2E tests
+ Example Output:
+ Coverage: 67% line, 45% branch
+ Test Suite: 1,247 unit, 156 integration, 23 E2E
  Execution Time: Unit 3min, Integration 12min, E2E 45min
  Defects: 23 open (5 critical, 8 major, 10 minor)
- Environments: Dev, Staging, Production
- Last Release: 98.5% pass rate, 2 critical bugs in production
  ```

- ---
-
  ### 🖤 Black Hat - Risks & Cautions
+ **Output: Comprehensive risk assessment**

- **Focus**: Critical judgment, potential problems, risks
-
- **Testing Questions**:
+ Questions:
  - What could go wrong in production?
  - What are we NOT testing?
- - Where are the coverage gaps?
  - What assumptions might be wrong?
- - What edge cases are we missing?
- - What security vulnerabilities exist?
- - What performance bottlenecks could occur?
- - What integration points could fail?
- - What technical debt impacts quality?
-
- **Deliverable**: Comprehensive risk assessment
+ - Where are the coverage gaps?

- **Example Output**:
  ```
  HIGH RISKS:
- - No load testing (potential production outage)
- - Authentication edge cases untested (security vulnerability)
+ - No load testing (production outage risk)
+ - Auth edge cases untested (security vulnerability)
  - Database failover never tested (data loss risk)
- - Mobile app on older OS versions untested (user impact)
-
- MEDIUM RISKS:
- - Flaky tests reducing CI/CD confidence
- - Manual regression testing taking 2 days
- - Limited error logging in production
-
- ASSUMPTIONS TO CHALLENGE:
- - "Users will always have internet" (offline mode untested)
- - "Data migrations will be backward compatible" (rollback untested)
  ```

- ---
-
  ### 💛 Yellow Hat - Benefits & Optimism
+ **Output: Strengths and opportunities**

- **Focus**: Positive thinking, opportunities, value
-
- **Testing Questions**:
- - What's working well in our testing?
+ Questions:
+ - What's working well?
  - What strengths can we leverage?
- - What value does our testing provide?
- - What opportunities exist to improve quality?
- - What tools/skills do we have?
- - What best practices are we following?
  - What quick wins are available?

- **Deliverable**: Strengths and opportunities assessment
-
- **Example Output**:
  ```
  STRENGTHS:
- - Strong CI/CD pipeline with automated testing
- - Team has expertise in test automation
- - Good test data management practices
- - Stakeholders value quality and testing
-
- OPPORTUNITIES:
- - Reuse existing test framework for new features
- - Leverage AI tools for test generation
- - Expand performance testing to prevent issues
- - Share test patterns across teams
+ - Strong CI/CD pipeline
+ - Team expertise in automation
+ - Stakeholders value quality

  QUICK WINS:
- - Add smoke tests to reduce production incidents
- - Automate manual regression tests (save 2 days/release)
- - Implement contract testing (improve team coordination)
+ - Add smoke tests (reduce incidents)
+ - Automate manual regression (save 2 days/release)
  ```

- ---
-
- ### 💚 Green Hat - Creativity & Alternatives
-
- **Focus**: New ideas, creative solutions, alternatives
+ ### 💚 Green Hat - Creativity
+ **Output: Innovative testing ideas**

- **Testing Questions**:
- - What innovative testing approaches could we try?
+ Questions:
  - How else could we test this?
- - What if we completely changed our approach?
- - What emerging testing techniques could we adopt?
- - How can we make testing more efficient/effective?
- - What tools or frameworks could we explore?
- - How can we test the "untestable"?
+ - What if we tried something completely different?
+ - What emerging techniques could we adopt?

- **Deliverable**: Innovative testing ideas
-
- **Example Output**:
  ```
- CREATIVE IDEAS:
-
- 1. AI-Powered Test Generation
- - Use LLMs to generate test cases from requirements
- - Generate edge cases from code analysis
- - Auto-generate test data with realistic patterns
-
- 2. Chaos Engineering
- - Randomly terminate services to test resilience
- - Inject network latency to test timeout handling
- - Corrupt data to test error recovery
-
- 3. Property-Based Testing
- - Define properties that should always hold
- - Generate thousands of random inputs
- - Uncover edge cases humans wouldn't think of
-
- 4. Visual Regression Testing
- - Screenshot comparison for UI changes
- - AI-powered visual anomaly detection
- - Cross-browser visual testing
-
- 5. Testing in Production
- - Canary deployments with real user traffic
- - Feature flags for gradual rollout
- - Synthetic monitoring for proactive detection
-
- 6. Exploratory Testing Sessions
- - Time-boxed unscripted testing
- - Bug bash events with whole team
- - User journey walkthroughs
+ IDEAS:
+ 1. AI-powered test generation
+ 2. Chaos engineering for resilience
+ 3. Property-based testing for edge cases
+ 4. Production traffic replay
+ 5. Synthetic monitoring
  ```

- ---
-
- ### ❤️ Red Hat - Emotions & Intuition
-
- **Focus**: Feelings, hunches, instincts (no justification needed)
+ ### ❤️ Red Hat - Emotions
+ **Output: Team gut feelings (NO justification needed)**

- **Testing Questions**:
- - How do you FEEL about the quality?
- - What's your gut reaction to the current test coverage?
- - Where do you feel uneasy or anxious?
- - What gives you confidence in the system?
- - What frustrates you about testing?
- - Where do you sense hidden problems?
- - What excites you about testing improvements?
-
- **Deliverable**: Emotional landscape of testing
+ Questions:
+ - How confident do you feel about quality?
+ - What makes you anxious?
+ - What gives you confidence?

- **Example Output**:
  ```
- FEELINGS ABOUT QUALITY:
-
- Confident About:
- - "The unit tests make me feel safe to refactor"
- - "I trust the CI/CD pipeline"
- - "The API tests are solid"
-
- Anxious About:
- - "I have a bad feeling about the authentication flow"
- - "Something feels off about the payment processing"
- - "I'm worried about the database migration"
-
- Frustrated By:
- - "The test suite is too slow"
- - "Flaky tests waste my time"
- - "Manual testing feels like groundhog day"
-
- Excited About:
- - "The new test framework looks promising"
- - "AI test generation could save us so much time"
-
- Gut Instincts:
- - "I don't think we're testing multi-user scenarios enough"
- - "The error handling feels brittle"
- - "Production is going to surprise us"
+ FEELINGS:
+ - Confident: Unit tests, API tests
+ - Anxious: Authentication flow, payment processing
+ - Frustrated: Flaky tests, slow E2E suite
  ```

- **Note**: Red Hat requires NO justification. Intuition often catches issues logic misses.
-
- ---
+ ### 🔵 Blue Hat - Process
+ **Output: Action plan with owners and timelines**

- ### 🔵 Blue Hat - Process & Organization
-
- **Focus**: Metacognition, process control, orchestration
-
- **Testing Questions**:
- - What testing process should we follow?
- - How should we organize our testing efforts?
- - What's our test strategy?
- - How do we prioritize testing?
- - What's the agenda for this testing discussion?
- - How do we measure testing success?
+ Questions:
+ - What's our strategy?
+ - How should we prioritize?
  - What's the next step?
- - How do we integrate testing into development?

- **Deliverable**: Structured test plan and process
-
- **Example Output**:
  ```
- TESTING PROCESS PLAN:
-
- 1. Test Strategy Definition
- Objective: Establish testing approach for Q2 release
- Approach: Risk-based testing with automation priority
- Success Criteria: 80% automated coverage, <5% production defects
-
- 2. Testing Prioritization
- P0: Security, authentication, payment processing
- P1: Core user journeys, data integrity
- P2: Performance, edge cases
- P3: UI polish, nice-to-have features
-
- 3. Testing Workflow
- Week 1-2: White Hat (gather facts), Black Hat (risk analysis)
- Week 3-4: Green Hat (design creative tests), Blue Hat (plan execution)
- Week 5-8: Execute tests, Yellow Hat (optimize), Red Hat (validate feel)
- Week 9: Final Blue Hat (retrospective, lessons learned)
-
- 4. Meeting Cadence
- - Daily: Test execution standup (15 min)
- - Weekly: Hat rotation session (90 min, different hat each week)
- - Bi-weekly: Test metrics review
- - Monthly: Testing retrospective
-
- 5. Decision Points
- - Go/No-Go decision requires all hats completed
- - Black Hat veto power for critical risks
- - Green Hat ideas evaluated monthly
- - Red Hat concerns investigated within 48 hours
-
- 6. Documentation
- - Test strategy document (Blue Hat)
- - Risk register (Black Hat)
- - Test metrics dashboard (White Hat)
- - Innovation backlog (Green Hat)
+ PRIORITIZED ACTIONS:
+ 1. [Critical] Address security testing gap - Owner: Alice
+ 2. [High] Implement contract testing - Owner: Bob
+ 3. [Medium] Reduce flaky tests - Owner: Carol
  ```

  ---

- ## Step-by-Step Guide
-
- ### Phase 1: Preparation (10 minutes)
-
- **Step 1: Define the Testing Focus**
- ```
- Be specific about what you're analyzing:
- ✅ GOOD: "Test strategy for user authentication feature"
- ✅ GOOD: "Root cause analysis of payment processing bug"
- ✅ GOOD: "Evaluate testing approach for API v2 migration"
- ❌ BAD: "Improve our testing" (too vague)
- ```
-
- **Step 2: Gather Context**
- ```
- Collect relevant information:
- - Current test coverage reports
- - Recent defect trends
- - Test execution metrics
- - Stakeholder concerns
- - Technical architecture diagrams
- ```
-
- **Step 3: Choose Format**
- - **Solo Session**: Apply hats sequentially, 3-5 min each (30 min total)
- - **Team Session**: Rotate hats as group, 10 min each (60 min total)
- - **Async Session**: Each person contributes to all hats over 2-3 days
-
- ---
-
- ### Phase 2: Hat Rotation (Main Work)
-
- **Approach 1: Sequential (Recommended for Solo)**
-
- Apply each hat in order, spending dedicated time in each mode:
+ ## Session Templates

+ ### Solo Session (30 min)
  ```markdown
- ## White Hat Session (5 minutes)
- Focus: Facts only, no opinions
- Output: [List all objective testing data]
-
- ## Red Hat Session (3 minutes)
- Focus: Gut feelings, no justification
- Output: [Capture instincts and emotions]
-
- ## Black Hat Session (7 minutes)
- Focus: Critical analysis, risks
- Output: [Comprehensive risk list]
+ # Six Hats Analysis: [Topic]

- ## Yellow Hat Session (5 minutes)
- Focus: Positive aspects, opportunities
- Output: [Strengths and possibilities]
+ ## 🤍 White Hat (5 min)
+ Facts: [list metrics, data]

- ## Green Hat Session (7 minutes)
- Focus: Creative alternatives
- Output: [Innovative testing ideas]
+ ## ❤️ Red Hat (3 min)
+ Feelings: [gut reactions, no justification]

- ## Blue Hat Session (5 minutes)
- Focus: Process and next steps
- Output: [Action plan and structure]
- ```
+ ## 🖤 Black Hat (7 min)
+ Risks: [what could go wrong]

- **Approach 2: Cycling (Good for Team Discussions)**
+ ## 💛 Yellow Hat (5 min)
+ Strengths: [what works, opportunities]

- Cycle through hats multiple times on different aspects:
+ ## 💚 Green Hat (7 min)
+ Ideas: [creative alternatives]

+ ## 🔵 Blue Hat (3 min)
+ Actions: [prioritized next steps]
  ```
- Round 1: All hats on "Current State"
- Round 2: All hats on "Proposed Solution A"
- Round 3: All hats on "Proposed Solution B"
- Round 4: All hats on "Implementation Plan"
- ```
-
- **Approach 3: Parallel (For Written Collaboration)**
-
- Team members work on different hats simultaneously, then share:

- ```
- Person 1: White Hat (gather all facts)
- Person 2: Black Hat (identify all risks)
- Person 3: Yellow Hat (find opportunities)
- Person 4: Green Hat (brainstorm alternatives)
- Person 5: Red Hat (gut check from fresh eyes)
- Facilitator: Blue Hat (synthesize findings)
- ```
+ ### Team Session (60 min)
+ - Each hat: 10 minutes
+ - Rotate through hats as group
+ - Document on shared whiteboard
+ - Blue Hat synthesizes at end

  ---

- ### Phase 3: Synthesis (15 minutes)
-
- **Step 1: Review All Hat Outputs**
-
- Create a summary document with all six perspectives:
-
- ```markdown
- # Testing Analysis: [Feature/Issue Name]
-
- ## 🤍 Facts (White Hat)
- [Objective data and metrics]
-
- ## ❤️ Feelings (Red Hat)
- [Team instincts and emotions]
+ ## Agent Integration

- ## 🖤 Risks (Black Hat)
- [Potential problems and gaps]
+ ```typescript
+ // Risk-focused analysis (Black Hat)
+ const risks = await Task("Identify Risks", {
+ scope: 'payment-module',
+ perspective: 'black-hat',
+ includeMitigation: true
+ }, "qe-regression-risk-analyzer");

- ## 💛 Benefits (Yellow Hat)
- [Strengths and opportunities]
+ // Creative test approaches (Green Hat)
+ const ideas = await Task("Generate Test Ideas", {
+ feature: 'new-auth-system',
+ perspective: 'green-hat',
+ includeEmergingTechniques: true
+ }, "qe-test-generator");

- ## 💚 Creative Ideas (Green Hat)
- [Innovative approaches]
-
- ## 🔵 Action Plan (Blue Hat)
- [Process and next steps]
+ // Comprehensive analysis (All Hats)
+ const analysis = await Task("Six Hats Analysis", {
+ topic: 'Q1 Test Strategy',
+ hats: ['white', 'black', 'yellow', 'green', 'red', 'blue']
+ }, "qe-quality-analyzer");
  ```

- **Step 2: Identify Patterns**
-
- Look for:
- - **Conflicts**: Black Hat risks vs Yellow Hat opportunities (trade-offs to evaluate)
- - **Alignments**: Red Hat feelings matching Black Hat risks (trust intuition)
- - **Gaps**: White Hat missing data needed for Blue Hat decisions
- - **Innovations**: Green Hat ideas that address Black Hat concerns
-
- **Step 3: Prioritize Actions**
-
- Use Blue Hat to create prioritized action plan:
-
- ```markdown
- ## Immediate Actions (This Sprint)
- 1. [Critical Black Hat risk] - Address [specific risk]
- 2. [White Hat gap] - Collect [missing data]
- 3. [Green Hat quick win] - Implement [creative idea]
-
- ## Short-Term (Next 2-4 Weeks)
- 1. [Yellow Hat opportunity]
- 2. [Green Hat innovation]
- 3. [Red Hat concern]
-
- ## Long-Term (Next Quarter)
- 1. [Strategic improvement]
- 2. [Process optimization]
- 3. [Capability building]
- ```
-
- ---
-
- ## Use Cases & Examples
-
- ### Use Case 1: Test Strategy for New Feature
-
- **Context**: Designing test approach for new real-time chat feature
-
- **White Hat (Facts)**:
- - Feature: WebSocket-based chat with 100+ concurrent users
- - Stack: Node.js backend, React frontend
- - Timeline: 6-week sprint
- - Team: 2 developers, 1 QE
- - Current: No WebSocket testing experience
-
- **Black Hat (Risks)**:
- - WebSocket connection stability untested
- - Concurrent user simulation challenging
- - Race conditions in message ordering
- - Browser compatibility (Safari WebSocket quirks)
- - No production WebSocket monitoring
-
- **Yellow Hat (Benefits)**:
- - Team eager to learn WebSocket testing
- - Good existing React testing framework
- - Can reuse API testing infrastructure
- - Early adopter advantage
-
- **Green Hat (Creative)**:
- - Socket.io test framework
- - Simulate 1000+ concurrent users with k6
- - Chaos testing: randomly disconnect clients
- - Visual testing for message ordering
- - Property-based testing for message invariants
- - Production shadowing (test in parallel)
-
- **Red Hat (Emotions)**:
- - Nervous about real-time complexity
- - Excited about learning new tech
- - Confident in team capability
- - Worried about timeline pressure
-
- **Blue Hat (Action Plan)**:
- 1. Week 1: Research WebSocket testing tools (White Hat)
- 2. Week 2: Spike Socket.io test framework (Green Hat)
- 3. Week 3-4: Build test suite (unit, integration, load)
- 4. Week 5: Chaos testing and edge cases (Black Hat)
- 5. Week 6: Production monitoring setup (Blue Hat)
- 6. Decision Point: Go/No-Go based on load test results
-
  ---

- ### Use Case 2: Flaky Test Analysis
-
- **Context**: 30% of CI/CD runs fail due to flaky tests
-
- **White Hat (Facts)**:
- - 47 tests marked as flaky (out of 1,200 total)
- - Failure rate: 30% of CI runs have at least one flaky test
- - Most flaky: API integration tests (network timeouts)
- - Impact: 2-hour delay per failed CI run
- - Cost: ~15 hours/week developer time investigating
-
- **Black Hat (Risks)**:
- - Team losing trust in test suite
- - Real bugs might be masked as "just flaky"
- - Developers skip test failures ("probably flaky")
- - Technical debt growing (band-aid fixes)
- - Risk of disabling tests (losing coverage)
-
- **Yellow Hat (Benefits)**:
- - We've identified the problem (awareness)
- - Team motivated to fix (pain point)
- - Good test infrastructure exists
- - Can learn flaky test patterns
- - Opportunity to improve test stability practices
-
- **Green Hat (Creative)**:
- - Quarantine flaky tests (separate CI job)
- - Retry with exponential backoff
- - Visual dashboard showing flaky test trends
- - AI-powered flaky test detection
- - Test in parallel to detect race conditions
- - Automatic flaky test regression (if test becomes flaky again)
- - Invest in test observability tools
-
- **Red Hat (Emotions)**:
- - Frustration: "These tests waste my time"
- - Distrust: "I ignore test failures now"
- - Anxiety: "Are we shipping bugs?"
- - Hope: "We can fix this"
-
- **Blue Hat (Action Plan)**:
- 1. **Immediate (This Week)**:
- - Enable test retries (max 3) in CI
- - Create flaky test dashboard
- - Document known flaky tests
-
- 2. **Short-Term (2 Weeks)**:
- - Dedicate 1 developer to fix top 10 flakiest tests
- - Add test stability metrics to definition of done
- - Implement quarantine for new flaky tests
-
- 3. **Long-Term (1 Month)**:
- - Establish flaky test SLO (<5% flaky rate)
- - Training: writing stable tests
- - Invest in test observability platform
- - Continuous monitoring and maintenance
-
- ---
-
- ### Use Case 3: Production Bug Retrospective
-
- **Context**: Critical payment bug reached production despite testing
-
- **White Hat (Facts)**:
- - Bug: Double-charging users in edge case (race condition)
- - Impact: 47 users affected, $12,340 refunded
- - Detection: 4 hours after deployment (user reports)
- - Root cause: Concurrent payment processing not tested
- - Test coverage: 85% overall, but missing concurrency tests
-
- **Black Hat (Why It Happened)**:
- - No load testing for payment flow
- - Race condition not considered in test design
- - Missing integration test for concurrent requests
- - Production monitoring missed the pattern
- - Assumption: "Database transactions prevent duplicates" (incorrect)
-
- **Yellow Hat (What Went Well)**:
- - Detected and fixed within 24 hours
- - Rollback process worked smoothly
- - Customer support handled well
- - Team transparent about issue
- - Incident documentation excellent
-
- **Green Hat (Prevention Ideas)**:
- - Chaos engineering for payment system
- - Concurrency testing framework
- - Property-based testing: "No duplicate charges"
- - Production traffic replay in staging
- - Automated canary deployments
- - Real-time anomaly detection
- - Synthetic transaction monitoring
-
- **Red Hat (Team Feelings)**:
- - Guilty: "We should have caught this"
- - Defensive: "The requirements didn't mention concurrency"
- - Vulnerable: "What else are we missing?"
- - Determined: "This won't happen again"
+ ## Agent Coordination Hints

- **Blue Hat (Action Plan)**:
- 1. **Immediate**:
- - Add concurrency tests for payment flow
- - Enable production monitoring for duplicate charges
- - Document race condition test patterns
-
- 2. **This Sprint**:
- - Concurrency testing framework (property-based)
- - Load testing for critical flows
- - Update test strategy to include concurrency
-
- 3. **Next Quarter**:
- - Chaos engineering capability
- - Production traffic replay
- - Team training: distributed systems testing
-
- 4. **Continuous**:
- - Monthly "What could go wrong?" sessions (Black Hat)
- - Quarterly chaos testing exercises
- - Incident retrospectives with Six Hats
-
- ---
-
- ## Integration with Existing QE Skills
-
- The Six Thinking Hats complements other QE skills:
-
- ### With agentic-quality-engineering
- ```
- Use Six Hats to:
- - Design autonomous testing strategies (Green Hat for creative approaches)
- - Evaluate agent performance (White Hat metrics, Red Hat intuition)
- - Identify risks in agent coordination (Black Hat)
- ```
-
- ### With risk-based-testing
+ ### Memory Namespace
  ```
- Use Six Hats to:
- - Black Hat: Identify risks comprehensively
- - White Hat: Quantify risk probability and impact
- - Blue Hat: Prioritize risk mitigation
+ aqe/six-hats/
+ ├── analyses/* - Complete hat analyses
+ ├── risks/* - Black hat findings
+ ├── opportunities/* - Yellow hat findings
+ └── innovations/* - Green hat ideas
  ```
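A sketch of how an agent might record a finding under this namespace, assuming a simple key-value memory API; the MemoryStore interface is hypothetical and stands in for whatever the package's memory layer (dist/core/memory) actually exposes:

```typescript
// Illustrative only: key layout follows the aqe/six-hats/* tree above.
interface MemoryStore {
  set(key: string, value: unknown): Promise<void>;
}

async function recordBlackHatRisk(
  memory: MemoryStore,
  analysisId: string,
  risk: string
): Promise<void> {
  // Black hat findings land under aqe/six-hats/risks/* per the namespace map.
  await memory.set(`aqe/six-hats/risks/${analysisId}`, {
    hat: "black",
    risk,
    recordedAt: new Date().toISOString(),
  });
}
```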
654
242
 
655
- ### With exploratory-testing-advanced
656
- ```
657
- Use Six Hats to:
658
- - Green Hat: Generate exploratory testing charters
659
- - Red Hat: Follow testing intuition
660
- - Blue Hat: Structure exploration sessions
661
- ```
662
-
663
- ### With performance-testing
664
- ```
665
- Use Six Hats to:
666
- - White Hat: Baseline performance metrics
667
- - Black Hat: Identify bottlenecks and limits
668
- - Green Hat: Creative performance optimization
669
- ```
670
-
671
- ### With api-testing-patterns
672
- ```
673
- Use Six Hats to:
674
- - White Hat: API contract facts
675
- - Black Hat: API failure modes
676
- - Green Hat: Creative contract testing approaches
677
- ```
678
-
679
- ### With context-driven-testing
680
- ```
681
- Six Hats IS a context-driven approach:
682
- - Each hat adapts to the testing context
683
- - No prescribed "best practice"
684
- - Acknowledges emotions and intuition
685
- - Balances multiple perspectives
243
+ ### Fleet Coordination
244
+ ```typescript
245
+ const analysisFleet = await FleetManager.coordinate({
246
+ strategy: 'six-hats-analysis',
247
+ agents: [
248
+ 'qe-quality-analyzer', // White + Blue hats
249
+ 'qe-regression-risk-analyzer', // Black hat
250
+ 'qe-test-generator' // Green hat
251
+ ],
252
+ topology: 'parallel'
253
+ });
686
254
  ```
687
255
 
688
256
  ---
689
257
 
690
- ## Advanced Techniques
691
-
692
- ### Technique 1: Hat Personas for Testing
693
-
694
- Assign team members to "wear" specific hats based on their strengths:
695
-
696
- ```
697
- White Hat Specialist: Data analyst, metrics expert
698
- Black Hat Specialist: Security expert, pessimist, devil's advocate
699
- Yellow Hat Specialist: Product manager, optimist, evangelist
700
- Green Hat Specialist: Innovation lead, creative thinker
701
- Red Hat Specialist: UX researcher, empathy expert
702
- Blue Hat Specialist: Test manager, facilitator, strategist
703
- ```
704
-
705
- Rotate personas quarterly to develop well-rounded thinking.
706
-
707
- ---
708
-
709
- ### Technique 2: Testing Checklists per Hat
710
-
711
- **White Hat Testing Checklist**:
712
- - [ ] Test coverage metrics collected
713
- - [ ] Pass/fail rates documented
714
- - [ ] Performance benchmarks established
715
- - [ ] Defect trends analyzed
716
- - [ ] Test execution time tracked
717
- - [ ] Environment inventory created
718
-
719
- **Black Hat Testing Checklist**:
720
- - [ ] Failure modes identified
721
- - [ ] Edge cases documented
722
- - [ ] Security threats assessed
723
- - [ ] Integration points analyzed
724
- - [ ] Assumptions challenged
725
- - [ ] Technical debt evaluated
726
-
727
- **Yellow Hat Testing Checklist**:
728
- - [ ] Testing strengths identified
729
- - [ ] Quick wins documented
730
- - [ ] Reusable assets cataloged
731
- - [ ] Team capabilities assessed
732
- - [ ] Opportunities listed
733
-
734
- **Green Hat Testing Checklist**:
735
- - [ ] 10+ creative test ideas generated
736
- - [ ] Alternative approaches explored
737
- - [ ] Emerging tools researched
738
- - [ ] Innovation backlog created
739
-
740
- **Red Hat Testing Checklist**:
741
- - [ ] Team gut feelings captured
742
- - [ ] Confidence levels assessed
743
- - [ ] Anxieties documented
744
- - [ ] Intuitions trusted
745
-
746
- **Blue Hat Testing Checklist**:
747
- - [ ] Test strategy defined
748
- - [ ] Process documented
749
- - [ ] Priorities established
750
- - [ ] Action plan created
751
- - [ ] Metrics defined
752
- - [ ] Next steps clear
258
+ ## Related Skills
259
+ - [risk-based-testing](../risk-based-testing/) - Black Hat deep dive
260
+ - [exploratory-testing-advanced](../exploratory-testing-advanced/) - Green Hat exploration
261
+ - [context-driven-testing](../context-driven-testing/) - Adapt to context
753
262
 
754
263
  ---
755
264
 
756
- ### Technique 3: Hat Rotation Cadence
265
+ ## Anti-Patterns
757
266
 
758
- **Daily Stand-up Hats**:
759
- - White Hat: What did you test yesterday? (facts)
760
- - Red Hat: How confident are you? (feelings)
761
- - Blue Hat: What will you test today? (process)
762
-
763
- **Sprint Planning Hats**:
764
- - White Hat: What's the current test coverage?
765
- - Black Hat: What are the biggest testing risks?
766
- - Green Hat: What innovative approaches should we try?
767
- - Blue Hat: What's our testing strategy for this sprint?
768
-
769
- **Sprint Retrospective Hats**:
770
- - White Hat: What were our testing metrics?
771
- - Red Hat: How did we feel about quality?
772
- - Black Hat: What testing failures occurred?
773
- - Yellow Hat: What testing successes did we have?
774
- - Green Hat: What should we try next sprint?
775
- - Blue Hat: What process improvements should we make?
776
-
777
- **Quarterly Review Hats**:
778
- - Full Six Hats session on overall testing strategy
779
- - Each hat gets 30-45 minutes
780
- - Document and publish findings
781
- - Update test strategy based on insights
267
+ | Avoid | Why | ✅ Instead |
268
+ |----------|-----|-----------|
269
+ | Mixing hats | Confuses thinking | One hat at a time |
270
+ | Justifying Red Hat | Kills intuition | State feelings only |
271
+ | Skipping hats | Misses insights | Use all six |
272
+ | Rushing | Shallow analysis | 5 min minimum per hat |
782
273
 
783
274
  ---
784
275
 
785
- ### Technique 4: Anti-Patterns to Avoid
276
+ ## Remember
786
277
 
787
- **❌ Hat Mixing**:
788
- ```
789
- BAD: "The tests are passing (White Hat), but I'm worried (Red Hat),
790
- because we're missing edge cases (Black Hat)"
791
- ```
792
- This mixes three hats simultaneously. Separate them:
793
- ```
794
- ✅ White Hat: "Our tests have 85% coverage, 1,200 passing tests"
795
- ✅ Red Hat: "I feel anxious about quality"
796
- ✅ Black Hat: "We're missing concurrent user edge cases"
797
- ```
798
-
799
- **❌ Justifying Red Hat**:
800
- ```
801
- BAD: "I feel worried because the tests are flaky" (justification)
802
- ✅ GOOD: "I feel worried" (no justification needed)
803
- ```
804
- Red Hat is intuition. Don't rationalize it. Trust it. Investigate it separately.
805
-
806
- **❌ Skipping Hats**:
807
- ```
808
- BAD: "We don't need Green Hat, we already know what to do"
809
- ```
810
- Every hat reveals insights. Even if you think you know, wear all hats.
811
-
812
- **❌ Rushing Hats**:
813
- ```
814
- BAD: 5 minutes total for all six hats
815
- ✅ GOOD: 5 minutes per hat minimum (30 minutes total)
816
- ```
817
-
818
- **❌ Judging Hat Contributions**:
819
- ```
820
- BAD: "That's a stupid Black Hat comment"
821
- ✅ GOOD: Accept all contributions, evaluate later in Blue Hat
822
- ```
823
-
824
- **❌ Using Hats as Weapons**:
825
- ```
826
- BAD: "I'm wearing my Black Hat to shoot down your idea"
827
- ✅ GOOD: "Let's all wear Black Hat to find risks we can mitigate"
828
- ```
829
-
830
- ---
831
-
832
- ## Troubleshooting
833
-
834
- ### Issue: Team Resists Wearing Hats
835
-
836
- **Symptoms**: Eye-rolling, "This is silly", reluctance to participate
837
-
838
- **Solution**:
839
- 1. Start with async/individual sessions (less awkward)
840
- 2. Don't use physical hats (optional prop)
841
- 3. Rename to "Perspectives Method" if "hats" feels childish
842
- 4. Show ROI: "This found X bugs in Y minutes"
843
- 5. Start with Black Hat (teams usually like risk analysis)
844
- 6. Make it optional: "Try it for one sprint"
845
-
846
- ---
847
-
848
- ### Issue: All Hats Sound the Same
849
-
850
- **Symptoms**: Every hat produces similar outputs
851
-
852
- **Solution**:
853
- 1. Use timer: Force strict 5-minute boundaries
854
- 2. Use hat-specific prompts (see templates)
855
- 3. Have facilitator enforce hat discipline
856
- 4. Practice individually first (develop hat-thinking muscle)
857
- 5. Review example outputs for each hat
858
-
859
- ---
860
-
861
- ### Issue: Conflicts Between Hats
862
-
863
- **Symptoms**: Black Hat says "This won't work" vs Yellow Hat says "This will work"
864
-
865
- **Solution**:
866
- - This is GOOD! It reveals trade-offs
867
- - Black Hat: "Risk of flaky tests with this approach"
868
- - Yellow Hat: "Benefit of faster execution with this approach"
869
- - Blue Hat: Synthesize: "Prototype with retries to mitigate flakiness while keeping speed"
870
-
871
- ---
872
-
873
- ### Issue: Green Hat Produces No Ideas
874
-
875
- **Symptoms**: Team stuck, no creative ideas
876
-
877
- **Solution**:
878
- 1. Use prompts: "What if we had unlimited time?" "What would Elon Musk do?"
879
- 2. Research: Look at what other teams/companies do
880
- 3. Crazy ideas first: "No idea is too wild during Green Hat"
881
- 4. Quantity over quality: Generate 20 ideas, even if 18 are bad
882
- 5. Combine ideas: Mix-and-match different approaches
883
-
884
- ---
885
-
886
- ### Issue: Red Hat Feels Uncomfortable
887
-
888
- **Symptoms**: Team silent during Red Hat, "I don't want to share feelings"
889
-
890
- **Solution**:
891
- 1. Make it anonymous: Write feelings on sticky notes
892
- 2. Frame it: "Professional instincts" instead of "emotions"
893
- 3. Go first as facilitator: Model vulnerability
894
- 4. Emphasize: Red Hat has caught many bugs ("trust your gut")
895
- 5. Make it optional: Some people prefer to skip Red Hat
896
- 6. Use scale: "Rate your confidence 1-10" (easier than feelings)
897
-
898
- ---
899
-
900
- ### Issue: Takes Too Long
901
-
902
- **Symptoms**: Six Hats session takes 3+ hours
903
-
904
- **Solution**:
905
- 1. Use timers: Strict 5 minutes per hat
906
- 2. Narrow focus: Be specific about what you're analyzing
907
- 3. Use templates: Pre-formatted hat outputs
908
- 4. Parallel work: Async contributions before meeting
909
- 5. Just-in-Time hats: Use only 2-3 hats as needed:
910
- - Quick risk check: Just Black Hat (5 min)
911
- - Ideation: Just Green Hat (10 min)
912
- - Feelings check: Just Red Hat (3 min)
913
-
- ---
-
- ## Templates & Resources
-
- ### Template 1: Solo Six Hats Session (~30 minutes)
-
- ```markdown
- # Six Hats Analysis: [Topic]
- Date: [Date]
- Facilitator: [Name]
- Focus: [Specific testing question or challenge]
-
- ---
-
- ## 🤍 White Hat - Facts (5 minutes)
- **Objective**: List only facts, data, metrics. No opinions.
-
- Facts:
- -
- -
- -
-
- Data:
- -
- -
-
- ---
-
- ## ❤️ Red Hat - Feelings (3 minutes)
- **Objective**: Gut instincts, emotions, intuitions. No justification needed.
-
- I feel:
- -
- -
- -
-
- My intuition says:
- -
-
- ---
-
- ## 🖤 Black Hat - Risks (7 minutes)
- **Objective**: Critical judgment, potential problems, what could go wrong.
-
- Risks:
- -
- -
- -
-
- Gaps:
- -
-
- Assumptions to challenge:
- -
-
- ---
-
- ## 💛 Yellow Hat - Benefits (5 minutes)
- **Objective**: Positive aspects, opportunities, strengths.
-
- Strengths:
- -
- -
-
- Opportunities:
- -
- -
-
- Quick wins:
- -
-
- ---
-
- ## 💚 Green Hat - Creativity (7 minutes)
- **Objective**: New ideas, alternatives, creative solutions. Go wild!
-
- Ideas:
- 1.
- 2.
- 3.
- 4.
- 5.
-
- Crazy ideas (that might work):
- -
- -
-
- ---
-
- ## 🔵 Blue Hat - Process (5 minutes)
- **Objective**: Action plan, next steps, process.
-
- Summary:
- -
-
- Prioritized actions:
- 1. [Immediate]
- 2. [Short-term]
- 3. [Long-term]
-
- Next steps:
- -
-
- ---
-
- **Key Insights**:
- -
-
- **Decisions**:
- -
- ```
-
- ---
-
- ### Template 2: Team Six Hats Session (90 minutes)
-
- ```markdown
- # Team Six Hats Session
- Date: [Date]
- Facilitator: [Name]
- Participants: [Names]
- Topic: [Testing challenge or decision]
-
- ## Pre-Session (10 minutes)
- - [ ] Define focus clearly
- - [ ] Gather relevant data (White Hat prep)
- - [ ] Set timer for each hat
- - [ ] Explain rules to new participants
-
- ---
-
- ## Session Agenda (60 minutes)
-
- ### 🤍 White Hat (10 minutes)
- Each person shares one fact. Go around the table until facts are exhausted.
-
- Documented facts:
- -
-
- ### ❤️ Red Hat (5 minutes)
- Silent individual reflection (2 min), then sharing (3 min). No justification.
-
- Team feelings:
- -
-
- ### 🖤 Black Hat (12 minutes)
- Brainstorm risks on a whiteboard. Group similar items.
-
- Risk categories:
- -
-
- ### 💛 Yellow Hat (8 minutes)
- What's working? What can we leverage?
-
- Strengths identified:
- -
-
- ### 💚 Green Hat (15 minutes)
- Rapid-fire idea generation. No idea too crazy. Build on others' ideas.
-
- Ideas generated:
- -
-
- ### 🔵 Blue Hat (10 minutes)
- Synthesize findings into an action plan with owners and timelines.
-
- Actions:
- | Action | Owner | Timeline | Priority |
- |--------|-------|----------|----------|
- | | | | |
-
- ---
-
- ## Post-Session (20 minutes)
- - [ ] Document findings
- - [ ] Share summary with stakeholders
- - [ ] Schedule follow-up
- - [ ] Add actions to backlog
-
- ---
-
- ## Retrospective
- What worked:
- -
-
- What to improve:
- -
-
- Next session changes:
- -
- ```
1105
-
1106
- ---
1107
-
1108
- ### Template 3: Hat-Specific Prompts
1109
-
1110
- **White Hat Prompts**:
1111
- - What test metrics do we have?
1112
- - What is our current coverage?
1113
- - How many tests exist? (unit, integration, E2E)
1114
- - What is our defect rate?
1115
- - What environments are available?
1116
- - What data do we need but don't have?
1117
-
1118
- **Red Hat Prompts**:
1119
- - How confident do you feel about quality?
1120
- - What makes you anxious?
1121
- - Where do you have a bad feeling?
1122
- - What gives you confidence?
1123
- - What frustrates you?
1124
- - If this were your product, would you ship it?
1125
-
1126
- **Black Hat Prompts**:
1127
- - What could go wrong in production?
1128
- - What are we NOT testing?
1129
- - What assumptions might be wrong?
1130
- - What edge cases could break?
1131
- - What security holes exist?
1132
- - What happens if [component] fails?
1133
-
1134
- **Yellow Hat Prompts**:
1135
- - What's going well?
1136
- - What strengths can we leverage?
1137
- - What opportunities exist?
1138
- - What value does our testing provide?
1139
- - What quick wins are available?
1140
- - What are we doing better than competitors?
1141
-
1142
- **Green Hat Prompts**:
1143
- - How else could we test this?
1144
- - What if we had unlimited time/budget?
1145
- - What would [company] do?
1146
- - What emerging tech could we use?
1147
- - What if we started from scratch?
1148
- - What's the opposite of our current approach?
1149
-
1150
- **Blue Hat Prompts**:
1151
- - What's our testing strategy?
1152
- - How should we prioritize?
1153
- - What's the next step?
1154
- - How do we measure success?
1155
- - What's the decision-making process?
1156
- - How do we track progress?
1157
-
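- The sketch below shows one hypothetical way to store and sample these prompts (the `hat-prompts.ts` name, the `randomPrompt` helper, and the trimmed prompt lists are illustrative, not an agentic-qe API):
-
- ```typescript
- // hat-prompts.ts: hypothetical sketch that samples one prompt per hat.
- const prompts = {
-   white: ["What test metrics do we have?", "What is our current coverage?"],
-   red: ["How confident do you feel about quality?", "What makes you anxious?"],
-   black: ["What could go wrong in production?", "What are we NOT testing?"],
-   yellow: ["What's going well?", "What quick wins are available?"],
-   green: ["How else could we test this?", "What if we started from scratch?"],
-   blue: ["What's the next step?", "How do we measure success?"],
- };
-
- // Pick a random prompt to seed the current hat's timebox.
- function randomPrompt(hat: keyof typeof prompts): string {
-   const options = prompts[hat];
-   return options[Math.floor(Math.random() * options.length)];
- }
-
- console.log(randomPrompt("green")); // e.g. "How else could we test this?"
- ```
-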
- ---
-
- ## Resources & Further Learning
-
- ### Books
- - **"Six Thinking Hats" by Edward de Bono** - Original methodology
- - **"Serious Creativity" by Edward de Bono** - Applied creativity techniques
- - **"Explore It!" by Elisabeth Hendrickson** - Exploratory testing (uses lateral thinking)
- - **"Lessons Learned in Software Testing" by Kaner, Bach, Pettichord** - Context-driven testing
-
- ### Articles
- - [Six Thinking Hats Official Website](https://www.edwdebono.com/six-thinking-hats)
- - "Using Six Hats for Test Design" - Ministry of Testing
- - "Parallel Thinking in Software Testing" - TestBash talks
-
- ### Related QE Skills
- - **context-driven-testing**: Choose practices based on context
- - **exploratory-testing-advanced**: Apply creativity to testing
- - **risk-based-testing**: Prioritize testing by risk (Black Hat)
- - **holistic-testing-pact**: Comprehensive quality model (all hats)
-
- ### Tools
- - **Miro/Mural**: Virtual whiteboards for remote Six Hats sessions
- - **Sticky notes**: For in-person Six Hats sessions
- - **Timer apps**: Enforce hat boundaries
- - **Recording**: Capture Red Hat intuitions
-
- ---
-
- ## Tips for Success
-
- 1. **Practice Solo First**: Get comfortable with each hat individually before facilitating group sessions.
-
- 2. **Start Small**: Try one or two hats first (Black + Yellow for a quick risk/opportunity analysis).
-
- 3. **Use Timers**: Strict time boundaries prevent endless discussions.
-
- 4. **Separate Hats Clearly**: Don't mix perspectives. Discipline improves quality.
-
- 5. **Trust Red Hat**: Intuition often catches issues that analysis misses.
-
- 6. **Document Everything**: Capture all outputs, especially Green Hat's wild ideas.
-
- 7. **Revisit Periodically**: Apply Six Hats quarterly to major testing challenges.
-
- 8. **Adapt to Context**: Solo vs team, 15 minutes vs 2 hours, all hats vs selective hats.
-
- 9. **Make It Safe**: Create psychological safety, especially for Red Hat.
-
- 10. **Close with Blue Hat**: Always end with the process review and an action plan.
-
- ---
+ **Separate thinking modes for clarity.** Each hat reveals different insights. Red Hat intuition often catches what Black Hat analysis misses.
 
- **Created**: 2025-11-13
- **Category**: Testing Methodologies
- **Difficulty**: Intermediate
- **Estimated Time**: 30-90 minutes (depending on format)
- **Best Used With**: context-driven-testing, exploratory-testing-advanced, risk-based-testing
+ **Everyone wears all hats.** This is parallel thinking, not role assignment. The goal is comprehensive analysis, not debate.