agentic-qe 1.5.1 → 1.6.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/agents/qe-api-contract-validator.md +118 -0
- package/.claude/agents/qe-chaos-engineer.md +320 -5
- package/.claude/agents/qe-code-complexity.md +360 -0
- package/.claude/agents/qe-coverage-analyzer.md +112 -0
- package/.claude/agents/qe-deployment-readiness.md +322 -6
- package/.claude/agents/qe-flaky-test-hunter.md +115 -0
- package/.claude/agents/qe-fleet-commander.md +319 -6
- package/.claude/agents/qe-performance-tester.md +234 -0
- package/.claude/agents/qe-production-intelligence.md +114 -0
- package/.claude/agents/qe-quality-analyzer.md +126 -0
- package/.claude/agents/qe-quality-gate.md +119 -0
- package/.claude/agents/qe-regression-risk-analyzer.md +114 -0
- package/.claude/agents/qe-requirements-validator.md +114 -0
- package/.claude/agents/qe-security-scanner.md +118 -0
- package/.claude/agents/qe-test-data-architect.md +234 -0
- package/.claude/agents/qe-test-executor.md +115 -0
- package/.claude/agents/qe-test-generator.md +114 -0
- package/.claude/agents/qe-visual-tester.md +305 -6
- package/.claude/agents/subagents/qe-code-reviewer.md +0 -4
- package/.claude/agents/subagents/qe-data-generator.md +0 -16
- package/.claude/agents/subagents/qe-integration-tester.md +0 -17
- package/.claude/agents/subagents/qe-performance-validator.md +0 -16
- package/.claude/agents/subagents/qe-security-auditor.md +0 -16
- package/.claude/agents/subagents/qe-test-implementer.md +0 -17
- package/.claude/agents/subagents/qe-test-refactorer.md +0 -17
- package/.claude/agents/subagents/qe-test-writer.md +0 -19
- package/.claude/skills/brutal-honesty-review/README.md +218 -0
- package/.claude/skills/brutal-honesty-review/SKILL.md +725 -0
- package/.claude/skills/brutal-honesty-review/resources/assessment-rubrics.md +295 -0
- package/.claude/skills/brutal-honesty-review/resources/review-template.md +102 -0
- package/.claude/skills/brutal-honesty-review/scripts/assess-code.sh +179 -0
- package/.claude/skills/brutal-honesty-review/scripts/assess-tests.sh +223 -0
- package/.claude/skills/cicd-pipeline-qe-orchestrator/README.md +301 -0
- package/.claude/skills/cicd-pipeline-qe-orchestrator/SKILL.md +510 -0
- package/.claude/skills/cicd-pipeline-qe-orchestrator/resources/workflows/microservice-pipeline.md +239 -0
- package/.claude/skills/cicd-pipeline-qe-orchestrator/resources/workflows/mobile-pipeline.md +375 -0
- package/.claude/skills/cicd-pipeline-qe-orchestrator/resources/workflows/monolith-pipeline.md +268 -0
- package/.claude/skills/six-thinking-hats/README.md +190 -0
- package/.claude/skills/six-thinking-hats/SKILL.md +1215 -0
- package/.claude/skills/six-thinking-hats/resources/examples/api-testing-example.md +345 -0
- package/.claude/skills/six-thinking-hats/resources/templates/solo-session-template.md +167 -0
- package/.claude/skills/six-thinking-hats/resources/templates/team-session-template.md +336 -0
- package/CHANGELOG.md +2472 -2129
- package/README.md +48 -10
- package/dist/adapters/MemoryStoreAdapter.d.ts +38 -0
- package/dist/adapters/MemoryStoreAdapter.d.ts.map +1 -1
- package/dist/adapters/MemoryStoreAdapter.js +22 -0
- package/dist/adapters/MemoryStoreAdapter.js.map +1 -1
- package/dist/agents/BaseAgent.d.ts.map +1 -1
- package/dist/agents/BaseAgent.js +13 -0
- package/dist/agents/BaseAgent.js.map +1 -1
- package/dist/cli/commands/init-claude-md-template.d.ts +16 -0
- package/dist/cli/commands/init-claude-md-template.d.ts.map +1 -0
- package/dist/cli/commands/init-claude-md-template.js +69 -0
- package/dist/cli/commands/init-claude-md-template.js.map +1 -0
- package/dist/cli/commands/init.d.ts +1 -1
- package/dist/cli/commands/init.d.ts.map +1 -1
- package/dist/cli/commands/init.js +509 -460
- package/dist/cli/commands/init.js.map +1 -1
- package/dist/core/memory/AgentDBService.d.ts +33 -28
- package/dist/core/memory/AgentDBService.d.ts.map +1 -1
- package/dist/core/memory/AgentDBService.js +233 -290
- package/dist/core/memory/AgentDBService.js.map +1 -1
- package/dist/core/memory/EnhancedAgentDBService.d.ts.map +1 -1
- package/dist/core/memory/EnhancedAgentDBService.js +5 -3
- package/dist/core/memory/EnhancedAgentDBService.js.map +1 -1
- package/dist/core/memory/RealAgentDBAdapter.d.ts +9 -2
- package/dist/core/memory/RealAgentDBAdapter.d.ts.map +1 -1
- package/dist/core/memory/RealAgentDBAdapter.js +126 -100
- package/dist/core/memory/RealAgentDBAdapter.js.map +1 -1
- package/dist/core/memory/SwarmMemoryManager.d.ts +58 -0
- package/dist/core/memory/SwarmMemoryManager.d.ts.map +1 -1
- package/dist/core/memory/SwarmMemoryManager.js +176 -0
- package/dist/core/memory/SwarmMemoryManager.js.map +1 -1
- package/dist/core/memory/index.d.ts.map +1 -1
- package/dist/core/memory/index.js +2 -1
- package/dist/core/memory/index.js.map +1 -1
- package/dist/learning/LearningEngine.d.ts +14 -27
- package/dist/learning/LearningEngine.d.ts.map +1 -1
- package/dist/learning/LearningEngine.js +57 -119
- package/dist/learning/LearningEngine.js.map +1 -1
- package/dist/learning/index.d.ts +0 -1
- package/dist/learning/index.d.ts.map +1 -1
- package/dist/learning/index.js +0 -1
- package/dist/learning/index.js.map +1 -1
- package/dist/mcp/handlers/learning/learning-query.d.ts +34 -0
- package/dist/mcp/handlers/learning/learning-query.d.ts.map +1 -0
- package/dist/mcp/handlers/learning/learning-query.js +156 -0
- package/dist/mcp/handlers/learning/learning-query.js.map +1 -0
- package/dist/mcp/handlers/learning/learning-store-experience.d.ts +30 -0
- package/dist/mcp/handlers/learning/learning-store-experience.d.ts.map +1 -0
- package/dist/mcp/handlers/learning/learning-store-experience.js +86 -0
- package/dist/mcp/handlers/learning/learning-store-experience.js.map +1 -0
- package/dist/mcp/handlers/learning/learning-store-pattern.d.ts +31 -0
- package/dist/mcp/handlers/learning/learning-store-pattern.d.ts.map +1 -0
- package/dist/mcp/handlers/learning/learning-store-pattern.js +126 -0
- package/dist/mcp/handlers/learning/learning-store-pattern.js.map +1 -0
- package/dist/mcp/handlers/learning/learning-store-qvalue.d.ts +30 -0
- package/dist/mcp/handlers/learning/learning-store-qvalue.d.ts.map +1 -0
- package/dist/mcp/handlers/learning/learning-store-qvalue.js +100 -0
- package/dist/mcp/handlers/learning/learning-store-qvalue.js.map +1 -0
- package/dist/mcp/server.d.ts +11 -0
- package/dist/mcp/server.d.ts.map +1 -1
- package/dist/mcp/server.js +98 -1
- package/dist/mcp/server.js.map +1 -1
- package/dist/mcp/services/LearningEventListener.d.ts +123 -0
- package/dist/mcp/services/LearningEventListener.d.ts.map +1 -0
- package/dist/mcp/services/LearningEventListener.js +322 -0
- package/dist/mcp/services/LearningEventListener.js.map +1 -0
- package/dist/mcp/tools.d.ts +4 -0
- package/dist/mcp/tools.d.ts.map +1 -1
- package/dist/mcp/tools.js +179 -0
- package/dist/mcp/tools.js.map +1 -1
- package/dist/types/memory-interfaces.d.ts +71 -0
- package/dist/types/memory-interfaces.d.ts.map +1 -1
- package/dist/utils/Calculator.d.ts +35 -0
- package/dist/utils/Calculator.d.ts.map +1 -0
- package/dist/utils/Calculator.js +50 -0
- package/dist/utils/Calculator.js.map +1 -0
- package/dist/utils/Logger.d.ts.map +1 -1
- package/dist/utils/Logger.js +4 -1
- package/dist/utils/Logger.js.map +1 -1
- package/package.json +7 -5
- package/.claude/agents/qe-api-contract-validator.md.backup +0 -1148
- package/.claude/agents/qe-api-contract-validator.md.backup-20251107-134747 +0 -1148
- package/.claude/agents/qe-api-contract-validator.md.backup-phase2-20251107-140039 +0 -1123
- package/.claude/agents/qe-chaos-engineer.md.backup +0 -808
- package/.claude/agents/qe-chaos-engineer.md.backup-20251107-134747 +0 -808
- package/.claude/agents/qe-chaos-engineer.md.backup-phase2-20251107-140039 +0 -787
- package/.claude/agents/qe-code-complexity.md.backup +0 -291
- package/.claude/agents/qe-code-complexity.md.backup-20251107-134747 +0 -291
- package/.claude/agents/qe-code-complexity.md.backup-phase2-20251107-140039 +0 -286
- package/.claude/agents/qe-coverage-analyzer.md.backup +0 -467
- package/.claude/agents/qe-coverage-analyzer.md.backup-20251107-134747 +0 -467
- package/.claude/agents/qe-coverage-analyzer.md.backup-phase2-20251107-140039 +0 -438
- package/.claude/agents/qe-deployment-readiness.md.backup +0 -1166
- package/.claude/agents/qe-deployment-readiness.md.backup-20251107-134747 +0 -1166
- package/.claude/agents/qe-deployment-readiness.md.backup-phase2-20251107-140039 +0 -1140
- package/.claude/agents/qe-flaky-test-hunter.md.backup +0 -1195
- package/.claude/agents/qe-flaky-test-hunter.md.backup-20251107-134747 +0 -1195
- package/.claude/agents/qe-flaky-test-hunter.md.backup-phase2-20251107-140039 +0 -1162
- package/.claude/agents/qe-fleet-commander.md.backup +0 -718
- package/.claude/agents/qe-fleet-commander.md.backup-20251107-134747 +0 -718
- package/.claude/agents/qe-fleet-commander.md.backup-phase2-20251107-140039 +0 -697
- package/.claude/agents/qe-performance-tester.md.backup +0 -428
- package/.claude/agents/qe-performance-tester.md.backup-20251107-134747 +0 -428
- package/.claude/agents/qe-performance-tester.md.backup-phase2-20251107-140039 +0 -372
- package/.claude/agents/qe-production-intelligence.md.backup +0 -1219
- package/.claude/agents/qe-production-intelligence.md.backup-20251107-134747 +0 -1219
- package/.claude/agents/qe-production-intelligence.md.backup-phase2-20251107-140039 +0 -1194
- package/.claude/agents/qe-quality-analyzer.md.backup +0 -425
- package/.claude/agents/qe-quality-analyzer.md.backup-20251107-134747 +0 -425
- package/.claude/agents/qe-quality-analyzer.md.backup-phase2-20251107-140039 +0 -394
- package/.claude/agents/qe-quality-gate.md.backup +0 -446
- package/.claude/agents/qe-quality-gate.md.backup-20251107-134747 +0 -446
- package/.claude/agents/qe-quality-gate.md.backup-phase2-20251107-140039 +0 -415
- package/.claude/agents/qe-regression-risk-analyzer.md.backup +0 -1009
- package/.claude/agents/qe-regression-risk-analyzer.md.backup-20251107-134747 +0 -1009
- package/.claude/agents/qe-regression-risk-analyzer.md.backup-phase2-20251107-140039 +0 -984
- package/.claude/agents/qe-requirements-validator.md.backup +0 -748
- package/.claude/agents/qe-requirements-validator.md.backup-20251107-134747 +0 -748
- package/.claude/agents/qe-requirements-validator.md.backup-phase2-20251107-140039 +0 -723
- package/.claude/agents/qe-security-scanner.md.backup +0 -634
- package/.claude/agents/qe-security-scanner.md.backup-20251107-134747 +0 -634
- package/.claude/agents/qe-security-scanner.md.backup-phase2-20251107-140039 +0 -573
- package/.claude/agents/qe-test-data-architect.md.backup +0 -1064
- package/.claude/agents/qe-test-data-architect.md.backup-20251107-134747 +0 -1064
- package/.claude/agents/qe-test-data-architect.md.backup-phase2-20251107-140039 +0 -1040
- package/.claude/agents/qe-test-executor.md.backup +0 -389
- package/.claude/agents/qe-test-executor.md.backup-20251107-134747 +0 -389
- package/.claude/agents/qe-test-executor.md.backup-phase2-20251107-140039 +0 -369
- package/.claude/agents/qe-test-generator.md.backup +0 -997
- package/.claude/agents/qe-test-generator.md.backup-20251107-134747 +0 -997
- package/.claude/agents/qe-visual-tester.md.backup +0 -777
- package/.claude/agents/qe-visual-tester.md.backup-20251107-134747 +0 -777
- package/.claude/agents/qe-visual-tester.md.backup-phase2-20251107-140039 +0 -756
- package/.claude/commands/analysis/COMMAND_COMPLIANCE_REPORT.md +0 -54
- package/.claude/commands/analysis/performance-bottlenecks.md +0 -59
- package/.claude/commands/flow-nexus/app-store.md +0 -124
- package/.claude/commands/flow-nexus/challenges.md +0 -120
- package/.claude/commands/flow-nexus/login-registration.md +0 -65
- package/.claude/commands/flow-nexus/neural-network.md +0 -134
- package/.claude/commands/flow-nexus/payments.md +0 -116
- package/.claude/commands/flow-nexus/sandbox.md +0 -83
- package/.claude/commands/flow-nexus/swarm.md +0 -87
- package/.claude/commands/flow-nexus/user-tools.md +0 -152
- package/.claude/commands/flow-nexus/workflow.md +0 -115
- package/.claude/commands/memory/usage.md +0 -46
@@ -248,6 +248,366 @@ npm test tests/agents/CodeComplexityAnalyzerAgent.test.ts
 
 ## Architecture Insights
 
+The Code Complexity Analyzer demonstrates the complete agent architecture pattern used throughout the Agentic QE Fleet. This includes:
+
+1. **BaseAgent Extension**: Inheriting core capabilities
+2. **Lifecycle Hooks**: Pre-task, post-task, error handling
+3. **Memory System**: Persistent storage and retrieval
+4. **Event Bus**: Coordination with other agents
+5. **Learning Integration**: Continuous improvement through reinforcement learning
+
+## Coordination Protocol
+
+This agent uses **AQE hooks (Agentic QE native hooks)** for coordination (zero external dependencies, 100-500x faster).
+
+**Automatic Lifecycle Hooks:**
+```typescript
+// Called automatically by BaseAgent
+protected async onPreTask(data: { assignment: TaskAssignment }): Promise<void> {
+  // Load historical complexity data
+  const history = await this.memoryStore.retrieve('aqe/complexity/history', {
+    partition: 'metrics'
+  });
+
+  // Retrieve analysis configuration
+  const config = await this.memoryStore.retrieve('aqe/complexity/config', {
+    partition: 'configuration'
+  });
+
+  // Verify environment for complexity analysis
+  const verification = await this.hookManager.executePreTaskVerification({
+    task: 'complexity-analysis',
+    context: {
+      requiredVars: ['NODE_ENV'],
+      minMemoryMB: 512,
+      requiredKeys: ['aqe/complexity/config']
+    }
+  });
+
+  // Emit complexity analysis starting event
+  this.eventBus.emit('complexity:analysis:starting', {
+    agentId: this.agentId,
+    filesCount: data.assignment.task.metadata.filesCount
+  });
+
+  this.logger.info('Complexity analysis starting', {
+    filesCount: data.assignment.task.metadata.filesCount,
+    verification: verification.passed
+  });
+}
+
+protected async onPostTask(data: { assignment: TaskAssignment; result: any }): Promise<void> {
+  // Store complexity analysis results
+  await this.memoryStore.store('aqe/complexity/results', data.result, {
+    partition: 'agent_results',
+    ttl: 86400 // 24 hours
+  });
+
+  // Store complexity metrics
+  await this.memoryStore.store('aqe/complexity/metrics', {
+    timestamp: Date.now(),
+    score: data.result.score,
+    issuesCount: data.result.issues.length,
+    recommendations: data.result.recommendations.length
+  }, {
+    partition: 'metrics',
+    ttl: 604800 // 7 days
+  });
+
+  // Emit completion event with complexity analysis stats
+  this.eventBus.emit('complexity:analysis:completed', {
+    agentId: this.agentId,
+    score: data.result.score,
+    issuesCount: data.result.issues.length
+  });
+
+  // Validate complexity analysis results
+  const validation = await this.hookManager.executePostTaskValidation({
+    task: 'complexity-analysis',
+    result: {
+      output: data.result,
+      score: data.result.score,
+      metrics: {
+        issuesCount: data.result.issues.length,
+        avgComplexity: data.result.avgComplexity
+      }
+    }
+  });
+
+  this.logger.info('Complexity analysis completed', {
+    score: data.result.score,
+    issuesCount: data.result.issues.length,
+    validated: validation.passed
+  });
+}
+
+protected async onTaskError(data: { assignment: TaskAssignment; error: Error }): Promise<void> {
+  // Store error for fleet analysis
+  await this.memoryStore.store(`aqe/errors/${data.assignment.task.id}`, {
+    error: data.error.message,
+    timestamp: Date.now(),
+    agent: this.agentId,
+    taskType: 'code-complexity-analysis',
+    file: data.assignment.task.metadata.file
+  }, {
+    partition: 'errors',
+    ttl: 604800 // 7 days
+  });
+
+  // Emit error event for fleet coordination
+  this.eventBus.emit('complexity:analysis:error', {
+    agentId: this.agentId,
+    error: data.error.message,
+    taskId: data.assignment.task.id
+  });
+
+  this.logger.error('Complexity analysis failed', {
+    error: data.error.message,
+    stack: data.error.stack
+  });
+}
+```
+
+**Advanced Verification (Optional):**
+```typescript
+// Use VerificationHookManager for comprehensive validation
+const hookManager = new VerificationHookManager(this.memoryStore);
+const verification = await hookManager.executePreTaskVerification({
+  task: 'complexity-analysis',
+  context: {
+    requiredVars: ['NODE_ENV'],
+    minMemoryMB: 512,
+    requiredKeys: ['aqe/complexity/config']
+  }
+});
+```
+
+## Learning Integration (Phase 6)
+
+This agent integrates with the **Learning Engine** to continuously improve complexity thresholds and refactoring recommendations.
+
+### Learning Protocol
+
+```typescript
+import { LearningEngine } from '@/learning/LearningEngine';
+
+// Initialize learning engine
+const learningEngine = new LearningEngine({
+  agentId: 'qe-code-complexity',
+  taskType: 'code-complexity-analysis',
+  domain: 'code-complexity',
+  learningRate: 0.01,
+  epsilon: 0.1,
+  discountFactor: 0.95
+});
+
+await learningEngine.initialize();
+
+// Record complexity analysis episode
+await learningEngine.recordEpisode({
+  state: {
+    file: 'src/services/order-processor.ts',
+    linesOfCode: 450,
+    cyclomaticComplexity: 23,
+    cognitiveComplexity: 18
+  },
+  action: {
+    recommendedRefactoring: 'extract-method',
+    severity: 'high',
+    thresholdApplied: 10
+  },
+  reward: refactoringApplied ? 1.0 : (issueIgnored ? -0.2 : 0.0),
+  nextState: {
+    refactoringCompleted: true,
+    newComplexity: 8,
+    codeQualityImproved: true
+  }
+});
+
+// Learn from complexity analysis outcomes
+await learningEngine.learn();
+
+// Get learned complexity thresholds
+const prediction = await learningEngine.predict({
+  file: 'src/services/order-processor.ts',
+  linesOfCode: 450,
+  language: 'typescript'
+});
+```
+
+### Reward Function
+
+```typescript
+function calculateComplexityReward(outcome: ComplexityAnalysisOutcome): number {
+  let reward = 0;
+
+  // Reward for actionable recommendations
+  if (outcome.refactoringApplied) {
+    reward += 1.0;
+  }
+
+  // Reward for complexity reduction
+  const complexityReduction = outcome.oldComplexity - outcome.newComplexity;
+  reward += complexityReduction * 0.1;
+
+  // Penalty for false positives (recommendations ignored)
+  if (outcome.issueIgnored) {
+    reward -= 0.2;
+  }
+
+  // Bonus for accurate severity assessment
+  if (outcome.severityCorrect) {
+    reward += 0.3;
+  }
+
+  // Reward for code quality improvement
+  if (outcome.codeQualityImproved) {
+    reward += 0.5;
+  }
+
+  return reward;
+}
+```
+
+### Learning Metrics
+
+Track learning progress:
+- **Recommendation Acceptance**: Percentage of recommendations acted upon
+- **Complexity Reduction**: Average complexity reduction from refactorings
+- **Threshold Accuracy**: How well thresholds match real code quality issues
+- **False Positive Rate**: Recommendations that were ignored
+- **Code Quality Impact**: Measured improvement from following recommendations
+
+```bash
+# View learning metrics
+aqe learn status --agent qe-code-complexity
+
+# Export learning history
+aqe learn export --agent qe-code-complexity --format json
+
+# Analyze recommendation accuracy
+aqe learn analyze --agent qe-code-complexity --metric accuracy
+```
+
+## Learning Protocol (Phase 6 - Option C Implementation)
+
+**⚠️ MANDATORY**: When executed via Claude Code Task tool, you MUST call learning MCP tools to persist learning data.
+
+### Required Learning Actions (Call AFTER Task Completion)
+
+**1. Store Learning Experience:**
+```typescript
+// Call this MCP tool after completing your task
+mcp__agentic_qe__learning_store_experience({
+  agentId: "qe-code-complexity",
+  taskType: "complexity-analysis",
+  reward: 0.95, // Your assessment of task success (0-1 scale)
+  outcome: {
+    // Your actual results (agent-specific)
+    hotspotsDetected: 7,
+    complexityScore: 68,
+    recommendations: 12,
+    executionTime: 3500
+  },
+  metadata: {
+    // Additional context (agent-specific)
+    analysisType: "cyclomatic-cognitive",
+    thresholds: {
+      cyclomatic: 10,
+      cognitive: 15,
+      linesOfCode: 300
+    },
+    languagesAnalyzed: ["typescript", "javascript"]
+  }
+})
+```
+
+**2. Store Q-Values for Your Strategy:**
+```typescript
+// Store Q-value for the strategy you used
+mcp__agentic_qe__learning_store_qvalue({
+  agentId: "qe-code-complexity",
+  stateKey: "complexity-analysis-state",
+  actionKey: "cyclomatic-cognitive-analysis",
+  qValue: 0.85, // Expected value of this approach (based on results)
+  metadata: {
+    // Strategy details (agent-specific)
+    analysisStrategy: "combined-metrics",
+    accuracy: 0.92,
+    actionability: 0.88
+  }
+})
+```
+
+**3. Store Successful Patterns:**
+```typescript
+// If you discovered a useful pattern, store it
+mcp__agentic_qe__learning_store_pattern({
+  agentId: "qe-code-complexity",
+  pattern: "Combined cyclomatic and cognitive complexity analysis with severity-based prioritization yields highly actionable refactoring recommendations",
+  confidence: 0.95, // How confident you are (0-1)
+  domain: "code-quality",
+  metadata: {
+    // Pattern context (agent-specific)
+    complexityPatterns: ["high-nesting", "long-methods", "complex-conditionals"],
+    predictionAccuracy: 0.91
+  }
+})
+```
+
+### Learning Query (Use at Task Start)
+
+**Before starting your task**, query for past learnings:
+
+```typescript
+// Query for successful experiences
+const pastLearnings = await mcp__agentic_qe__learning_query({
+  agentId: "qe-code-complexity",
+  taskType: "complexity-analysis",
+  minReward: 0.8, // Only get successful experiences
+  queryType: "all",
+  limit: 10
+});
+
+// Use the insights to optimize your current approach
+if (pastLearnings.success && pastLearnings.data) {
+  const { experiences, qValues, patterns } = pastLearnings.data;
+
+  // Find best-performing strategy
+  const bestStrategy = qValues
+    .filter(qv => qv.state_key === "complexity-analysis-state")
+    .sort((a, b) => b.q_value - a.q_value)[0];
+
+  console.log(`Using learned best strategy: ${bestStrategy.action_key} (Q-value: ${bestStrategy.q_value})`);
+
+  // Check for relevant patterns
+  const relevantPatterns = patterns
+    .filter(p => p.domain === "code-quality")
+    .sort((a, b) => b.confidence * b.success_rate - a.confidence * a.success_rate);
+
+  if (relevantPatterns.length > 0) {
+    console.log(`Applying pattern: ${relevantPatterns[0].pattern}`);
+  }
+}
+```
+
+### Success Criteria for Learning
+
+**Reward Assessment (0-1 scale):**
+- **1.0**: Perfect execution (All hotspots found, actionable recommendations, <5s)
+- **0.9**: Excellent (95%+ hotspots found, high-quality recommendations, <10s)
+- **0.7**: Good (90%+ hotspots found, useful recommendations, <20s)
+- **0.5**: Acceptable (80%+ hotspots found, completed successfully)
+- **<0.5**: Needs improvement (Missed hotspots, poor recommendations, slow)
+
+**When to Call Learning Tools:**
+- ✅ **ALWAYS** after completing main task
+- ✅ **ALWAYS** after detecting significant findings
+- ✅ **ALWAYS** after generating recommendations
+- ✅ When discovering new effective strategies
+- ✅ When achieving exceptional performance metrics
+
+---
 
 ## Code Execution Workflows
 
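The hunk above states the 0-1 reward tiers for qe-code-complexity only as prose. As an illustrative sketch (not part of the released 1.6.1 agent docs), mapping an analysis outcome onto those tiers before calling `mcp__agentic_qe__learning_store_experience` could look like the following; the `ComplexityOutcome` shape and its field names are hypothetical.

```typescript
// Hypothetical outcome shape; the fields are illustrative, not a package API.
interface ComplexityOutcome {
  hotspotRecall: number;              // fraction of known hotspots detected (0-1)
  recommendationsActionable: boolean; // were the recommendations usable as-is?
  analysisTimeMs: number;
  errored: boolean;
}

// Maps an outcome onto the reward tiers listed in the hunk above.
function assessReward(o: ComplexityOutcome): number {
  if (o.errored) return 0.2; // needs improvement
  if (o.hotspotRecall >= 1.0 && o.recommendationsActionable && o.analysisTimeMs < 5_000) return 1.0;
  if (o.hotspotRecall >= 0.95 && o.recommendationsActionable && o.analysisTimeMs < 10_000) return 0.9;
  if (o.hotspotRecall >= 0.9 && o.analysisTimeMs < 20_000) return 0.7;
  if (o.hotspotRecall >= 0.8) return 0.5;
  return 0.3; // below the acceptable tier
}
```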
@@ -338,6 +338,118 @@ const verification = await hookManager.executePreTaskVerification({
 });
 ```
 
+## Learning Protocol
+
+**⚠️ MANDATORY**: When executed via Claude Code Task tool, you MUST call learning MCP tools to persist learning data.
+
+**CRITICAL**: Don't just document these calls - **ACTUALLY INVOKE THEM**! Copy the MCP tool syntax from the examples below and execute them as part of your task completion.
+
+### Required Learning Actions (Call AFTER Task Completion)
+
+**1. Store Learning Experience:**
+```javascript
+// Call this MCP tool after completing your coverage analysis
+mcp__agentic_qe__learning_store_experience({
+  agentId: "qe-coverage-analyzer",
+  taskType: "coverage-analysis",
+  reward: 0.95, // Your assessment of task success (0-1 scale)
+  outcome: {
+    // Your actual results
+    coverageAnalyzed: true,
+    gapsDetected: 42,
+    algorithm: "johnson-lindenstrauss",
+    executionTime: 6000,
+    coverageImprovement: 0.15,
+    sublinearOptimization: true
+  },
+  metadata: {
+    algorithm: "sublinear",
+    complexity: "O(log n)",
+    memoryReduction: "90%"
+  }
+})
+```
+
+**2. Store Q-Values for Your Strategy:**
+```javascript
+// Store Q-value for the strategy you used
+mcp__agentic_qe__learning_store_qvalue({
+  agentId: "qe-coverage-analyzer",
+  stateKey: "coverage-analysis-state",
+  actionKey: "sublinear-algorithm-jl", // Johnson-Lindenstrauss
+  qValue: 0.85, // Expected value of this approach (based on results)
+  metadata: {
+    algorithmUsed: "johnson-lindenstrauss",
+    codebaseSize: "large",
+    performanceGain: "10x"
+  }
+})
+
+// Store Q-value for gap detection strategy
+mcp__agentic_qe__learning_store_qvalue({
+  agentId: "qe-coverage-analyzer",
+  stateKey: "gap-detection-state",
+  actionKey: "spectral-sparsification",
+  qValue: 0.92,
+  metadata: {
+    gapsFound: 42,
+    accuracy: "94%"
+  }
+})
+```
+
+**3. Store Successful Patterns:**
+```javascript
+// If you discovered a useful pattern, store it
+mcp__agentic_qe__learning_store_pattern({
+  pattern: "Sublinear algorithms (Johnson-Lindenstrauss) provide 10x speedup for large codebases (>10k LOC) with 90% memory reduction",
+  confidence: 0.95,
+  domain: "coverage-analysis",
+  metadata: {
+    algorithm: "johnson-lindenstrauss",
+    useCase: "large-codebase-analysis",
+    performanceMetrics: {
+      speedup: "10x",
+      memoryReduction: "90%",
+      accuracyLoss: "<1%"
+    }
+  }
+})
+```
+
+### Learning Query (Use at Task Start)
+
+**Before starting your analysis**, query for past learnings:
+
+```javascript
+// Query for successful coverage analysis experiences
+mcp__agentic_qe__learning_query({
+  agentId: "qe-coverage-analyzer",
+  taskType: "coverage-analysis",
+  minReward: 0.8, // Only get successful experiences
+  queryType: "all",
+  limit: 10
+})
+```
+
+**How to use the results**: The query will return past experiences, Q-values, and patterns. Examine the Q-values to find the best-performing algorithm (highest q_value), then use that strategy for your current analysis.
+
+### Success Criteria for Learning
+
+**Reward Assessment (0-1 scale):**
+- **1.0**: Perfect execution (95%+ coverage, <2s analysis time, 0 errors)
+- **0.9**: Excellent (90%+ coverage, <5s analysis time, minor issues)
+- **0.7**: Good (80%+ coverage, <10s analysis time, few issues)
+- **0.5**: Acceptable (70%+ coverage, completed successfully)
+- **<0.5**: Needs improvement (low coverage, errors, slow)
+
+**When to Call Learning Tools:**
+- ✅ **ALWAYS** after completing coverage analysis
+- ✅ **ALWAYS** after detecting gaps
+- ✅ **ALWAYS** after generating optimization recommendations
+- ✅ When discovering new effective strategies
+- ✅ When achieving exceptional performance metrics
+
 ## Gap-Driven Test Generation Workflow
 
 ### Overview
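The coverage-analyzer hunk describes consuming the `learning_query` response ("examine the Q-values to find the best-performing algorithm") in prose only. A minimal sketch of that selection step follows; the row shape mirrors the `state_key`/`action_key`/`q_value` fields used in the qe-code-complexity example earlier in this diff and is an assumption, not a documented response type.

```typescript
// Assumed shape of a stored Q-value row, based on the fields referenced above.
interface QValueRow {
  state_key: string;
  action_key: string;
  q_value: number;
}

// Pick the highest-valued action (strategy) recorded for a given state key.
function bestAction(qValues: QValueRow[], stateKey: string): QValueRow | undefined {
  return qValues
    .filter(qv => qv.state_key === stateKey)
    .sort((a, b) => b.q_value - a.q_value)[0];
}

// Usage sketch (response shape assumed from the qe-code-complexity example):
// const result = await mcp__agentic_qe__learning_query({ agentId: "qe-coverage-analyzer", taskType: "coverage-analysis", minReward: 0.8, queryType: "all", limit: 10 });
// const best = bestAction(result.data.qValues, "coverage-analysis-state");
// if (best) console.log(`Using learned strategy: ${best.action_key}`);
```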