@accelerationguy/accel 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CLAUDE.md +19 -0
- package/LICENSE +33 -0
- package/README.md +275 -0
- package/bin/install.js +661 -0
- package/docs/getting-started.md +164 -0
- package/docs/module-guide.md +139 -0
- package/modules/drive/LICENSE +21 -0
- package/modules/drive/PAUL-VS-GSD.md +171 -0
- package/modules/drive/README.md +555 -0
- package/modules/drive/assets/terminal.svg +67 -0
- package/modules/drive/bin/install.js +210 -0
- package/modules/drive/integration.js +76 -0
- package/modules/drive/package.json +38 -0
- package/modules/drive/src/commands/add-phase.md +36 -0
- package/modules/drive/src/commands/apply.md +83 -0
- package/modules/drive/src/commands/assumptions.md +37 -0
- package/modules/drive/src/commands/audit.md +57 -0
- package/modules/drive/src/commands/complete-milestone.md +36 -0
- package/modules/drive/src/commands/config.md +175 -0
- package/modules/drive/src/commands/consider-issues.md +41 -0
- package/modules/drive/src/commands/discover.md +48 -0
- package/modules/drive/src/commands/discuss-milestone.md +33 -0
- package/modules/drive/src/commands/discuss.md +34 -0
- package/modules/drive/src/commands/flows.md +73 -0
- package/modules/drive/src/commands/handoff.md +201 -0
- package/modules/drive/src/commands/help.md +525 -0
- package/modules/drive/src/commands/init.md +54 -0
- package/modules/drive/src/commands/map-codebase.md +34 -0
- package/modules/drive/src/commands/milestone.md +34 -0
- package/modules/drive/src/commands/pause.md +44 -0
- package/modules/drive/src/commands/plan-fix.md +216 -0
- package/modules/drive/src/commands/plan.md +36 -0
- package/modules/drive/src/commands/progress.md +138 -0
- package/modules/drive/src/commands/register.md +29 -0
- package/modules/drive/src/commands/remove-phase.md +37 -0
- package/modules/drive/src/commands/research-phase.md +209 -0
- package/modules/drive/src/commands/research.md +47 -0
- package/modules/drive/src/commands/resume.md +49 -0
- package/modules/drive/src/commands/status.md +78 -0
- package/modules/drive/src/commands/unify.md +87 -0
- package/modules/drive/src/commands/verify.md +60 -0
- package/modules/drive/src/references/checkpoints.md +234 -0
- package/modules/drive/src/references/context-management.md +219 -0
- package/modules/drive/src/references/git-strategy.md +206 -0
- package/modules/drive/src/references/loop-phases.md +254 -0
- package/modules/drive/src/references/plan-format.md +263 -0
- package/modules/drive/src/references/quality-principles.md +152 -0
- package/modules/drive/src/references/research-quality-control.md +247 -0
- package/modules/drive/src/references/sonarqube-integration.md +244 -0
- package/modules/drive/src/references/specialized-workflow-integration.md +186 -0
- package/modules/drive/src/references/subagent-criteria.md +179 -0
- package/modules/drive/src/references/tdd.md +219 -0
- package/modules/drive/src/references/work-units.md +161 -0
- package/modules/drive/src/rules/commands.md +108 -0
- package/modules/drive/src/rules/references.md +107 -0
- package/modules/drive/src/rules/style.md +123 -0
- package/modules/drive/src/rules/templates.md +51 -0
- package/modules/drive/src/rules/workflows.md +133 -0
- package/modules/drive/src/templates/CONTEXT.md +88 -0
- package/modules/drive/src/templates/DEBUG.md +164 -0
- package/modules/drive/src/templates/DISCOVERY.md +148 -0
- package/modules/drive/src/templates/HANDOFF.md +77 -0
- package/modules/drive/src/templates/ISSUES.md +93 -0
- package/modules/drive/src/templates/MILESTONES.md +167 -0
- package/modules/drive/src/templates/PLAN.md +328 -0
- package/modules/drive/src/templates/PROJECT.md +219 -0
- package/modules/drive/src/templates/RESEARCH.md +130 -0
- package/modules/drive/src/templates/ROADMAP.md +328 -0
- package/modules/drive/src/templates/SPECIAL-FLOWS.md +70 -0
- package/modules/drive/src/templates/STATE.md +210 -0
- package/modules/drive/src/templates/SUMMARY.md +221 -0
- package/modules/drive/src/templates/UAT-ISSUES.md +139 -0
- package/modules/drive/src/templates/codebase/architecture.md +259 -0
- package/modules/drive/src/templates/codebase/concerns.md +329 -0
- package/modules/drive/src/templates/codebase/conventions.md +311 -0
- package/modules/drive/src/templates/codebase/integrations.md +284 -0
- package/modules/drive/src/templates/codebase/stack.md +190 -0
- package/modules/drive/src/templates/codebase/structure.md +287 -0
- package/modules/drive/src/templates/codebase/testing.md +484 -0
- package/modules/drive/src/templates/config.md +181 -0
- package/modules/drive/src/templates/milestone-archive.md +236 -0
- package/modules/drive/src/templates/milestone-context.md +190 -0
- package/modules/drive/src/templates/paul-json.md +147 -0
- package/modules/drive/src/vector-config/PAUL +26 -0
- package/modules/drive/src/vector-config/PAUL.manifest +11 -0
- package/modules/drive/src/workflows/apply-phase.md +393 -0
- package/modules/drive/src/workflows/audit-plan.md +344 -0
- package/modules/drive/src/workflows/complete-milestone.md +479 -0
- package/modules/drive/src/workflows/configure-special-flows.md +283 -0
- package/modules/drive/src/workflows/consider-issues.md +172 -0
- package/modules/drive/src/workflows/create-milestone.md +268 -0
- package/modules/drive/src/workflows/debug.md +292 -0
- package/modules/drive/src/workflows/discovery.md +187 -0
- package/modules/drive/src/workflows/discuss-milestone.md +245 -0
- package/modules/drive/src/workflows/discuss-phase.md +231 -0
- package/modules/drive/src/workflows/init-project.md +698 -0
- package/modules/drive/src/workflows/map-codebase.md +459 -0
- package/modules/drive/src/workflows/pause-work.md +259 -0
- package/modules/drive/src/workflows/phase-assumptions.md +181 -0
- package/modules/drive/src/workflows/plan-phase.md +385 -0
- package/modules/drive/src/workflows/quality-gate.md +263 -0
- package/modules/drive/src/workflows/register-manifest.md +107 -0
- package/modules/drive/src/workflows/research.md +241 -0
- package/modules/drive/src/workflows/resume-project.md +200 -0
- package/modules/drive/src/workflows/roadmap-management.md +334 -0
- package/modules/drive/src/workflows/transition-phase.md +368 -0
- package/modules/drive/src/workflows/unify-phase.md +290 -0
- package/modules/drive/src/workflows/verify-work.md +241 -0
- package/modules/forge/README.md +281 -0
- package/modules/forge/bin/install.js +200 -0
- package/modules/forge/package.json +32 -0
- package/modules/forge/skillsmith/rules/checklists-rules.md +42 -0
- package/modules/forge/skillsmith/rules/context-rules.md +43 -0
- package/modules/forge/skillsmith/rules/entry-point-rules.md +44 -0
- package/modules/forge/skillsmith/rules/frameworks-rules.md +43 -0
- package/modules/forge/skillsmith/rules/tasks-rules.md +52 -0
- package/modules/forge/skillsmith/rules/templates-rules.md +43 -0
- package/modules/forge/skillsmith/skillsmith.md +82 -0
- package/modules/forge/skillsmith/tasks/audit.md +277 -0
- package/modules/forge/skillsmith/tasks/discover.md +145 -0
- package/modules/forge/skillsmith/tasks/distill.md +276 -0
- package/modules/forge/skillsmith/tasks/scaffold.md +349 -0
- package/modules/forge/specs/checklists.md +193 -0
- package/modules/forge/specs/context.md +223 -0
- package/modules/forge/specs/entry-point.md +320 -0
- package/modules/forge/specs/frameworks.md +228 -0
- package/modules/forge/specs/rules.md +245 -0
- package/modules/forge/specs/tasks.md +344 -0
- package/modules/forge/specs/templates.md +335 -0
- package/modules/forge/terminal.svg +70 -0
- package/modules/ignition/README.md +245 -0
- package/modules/ignition/bin/install.js +184 -0
- package/modules/ignition/checklists/planning-quality.md +55 -0
- package/modules/ignition/data/application/config.md +21 -0
- package/modules/ignition/data/application/guide.md +51 -0
- package/modules/ignition/data/application/skill-loadout.md +11 -0
- package/modules/ignition/data/campaign/config.md +18 -0
- package/modules/ignition/data/campaign/guide.md +36 -0
- package/modules/ignition/data/campaign/skill-loadout.md +10 -0
- package/modules/ignition/data/client/config.md +18 -0
- package/modules/ignition/data/client/guide.md +36 -0
- package/modules/ignition/data/client/skill-loadout.md +11 -0
- package/modules/ignition/data/utility/config.md +18 -0
- package/modules/ignition/data/utility/guide.md +31 -0
- package/modules/ignition/data/utility/skill-loadout.md +8 -0
- package/modules/ignition/data/workflow/config.md +19 -0
- package/modules/ignition/data/workflow/guide.md +41 -0
- package/modules/ignition/data/workflow/skill-loadout.md +10 -0
- package/modules/ignition/integration.js +54 -0
- package/modules/ignition/package.json +35 -0
- package/modules/ignition/seed.md +81 -0
- package/modules/ignition/tasks/add-type.md +164 -0
- package/modules/ignition/tasks/graduate.md +182 -0
- package/modules/ignition/tasks/ideate.md +221 -0
- package/modules/ignition/tasks/launch.md +137 -0
- package/modules/ignition/tasks/status.md +71 -0
- package/modules/ignition/templates/planning-application.md +193 -0
- package/modules/ignition/templates/planning-campaign.md +138 -0
- package/modules/ignition/templates/planning-client.md +149 -0
- package/modules/ignition/templates/planning-utility.md +112 -0
- package/modules/ignition/templates/planning-workflow.md +125 -0
- package/modules/ignition/terminal.svg +74 -0
- package/modules/mission-control/CONTEXT-CONTINUITY-SPEC.md +293 -0
- package/modules/mission-control/CONTEXT-ENGINEERING-GUIDE.md +282 -0
- package/modules/mission-control/README.md +91 -0
- package/modules/mission-control/assets/terminal.svg +80 -0
- package/modules/mission-control/examples/entities.example.json +133 -0
- package/modules/mission-control/examples/projects.example.json +318 -0
- package/modules/mission-control/examples/state.example.json +183 -0
- package/modules/mission-control/examples/vector.example.json +245 -0
- package/modules/mission-control/mission-control/checklists/install-verification.md +46 -0
- package/modules/mission-control/mission-control/frameworks/framework-registry.md +83 -0
- package/modules/mission-control/mission-control/mission-control.md +83 -0
- package/modules/mission-control/mission-control/tasks/insights.md +73 -0
- package/modules/mission-control/mission-control/tasks/install.md +194 -0
- package/modules/mission-control/mission-control/tasks/status.md +125 -0
- package/modules/mission-control/schemas/entities.schema.json +89 -0
- package/modules/mission-control/schemas/projects.schema.json +221 -0
- package/modules/mission-control/schemas/state.schema.json +108 -0
- package/modules/mission-control/schemas/vector.schema.json +200 -0
- package/modules/momentum/README.md +678 -0
- package/modules/momentum/bin/install.js +563 -0
- package/modules/momentum/integration.js +131 -0
- package/modules/momentum/package.json +42 -0
- package/modules/momentum/schemas/entities.schema.json +89 -0
- package/modules/momentum/schemas/projects.schema.json +221 -0
- package/modules/momentum/schemas/state.schema.json +108 -0
- package/modules/momentum/src/commands/audit-claude-md.md +31 -0
- package/modules/momentum/src/commands/audit.md +33 -0
- package/modules/momentum/src/commands/groom.md +35 -0
- package/modules/momentum/src/commands/history.md +27 -0
- package/modules/momentum/src/commands/pulse.md +33 -0
- package/modules/momentum/src/commands/scaffold.md +33 -0
- package/modules/momentum/src/commands/status.md +28 -0
- package/modules/momentum/src/commands/surface-convert.md +35 -0
- package/modules/momentum/src/commands/surface-create.md +34 -0
- package/modules/momentum/src/commands/surface-list.md +27 -0
- package/modules/momentum/src/commands/vector-hygiene.md +33 -0
- package/modules/momentum/src/framework/context/momentum-principles.md +71 -0
- package/modules/momentum/src/framework/frameworks/audit-strategies.md +53 -0
- package/modules/momentum/src/framework/frameworks/satellite-registration.md +44 -0
- package/modules/momentum/src/framework/tasks/audit-claude-md.md +68 -0
- package/modules/momentum/src/framework/tasks/audit.md +64 -0
- package/modules/momentum/src/framework/tasks/groom.md +164 -0
- package/modules/momentum/src/framework/tasks/history.md +34 -0
- package/modules/momentum/src/framework/tasks/pulse.md +83 -0
- package/modules/momentum/src/framework/tasks/scaffold.md +202 -0
- package/modules/momentum/src/framework/tasks/status.md +35 -0
- package/modules/momentum/src/framework/tasks/surface-convert.md +143 -0
- package/modules/momentum/src/framework/tasks/surface-create.md +184 -0
- package/modules/momentum/src/framework/tasks/surface-list.md +42 -0
- package/modules/momentum/src/framework/tasks/vector-hygiene.md +160 -0
- package/modules/momentum/src/framework/templates/workspace-json.md +96 -0
- package/modules/momentum/src/hooks/_template.py +129 -0
- package/modules/momentum/src/hooks/active-hook.py +178 -0
- package/modules/momentum/src/hooks/backlog-hook.py +115 -0
- package/modules/momentum/src/hooks/mission-control-insights.py +169 -0
- package/modules/momentum/src/hooks/momentum-pulse-check.py +351 -0
- package/modules/momentum/src/hooks/operator.py +53 -0
- package/modules/momentum/src/hooks/psmm-injector.py +67 -0
- package/modules/momentum/src/hooks/satellite-detection.py +248 -0
- package/modules/momentum/src/packages/momentum-mcp/index.js +119 -0
- package/modules/momentum/src/packages/momentum-mcp/package.json +10 -0
- package/modules/momentum/src/packages/momentum-mcp/tools/entities.js +226 -0
- package/modules/momentum/src/packages/momentum-mcp/tools/operator.js +106 -0
- package/modules/momentum/src/packages/momentum-mcp/tools/projects.js +322 -0
- package/modules/momentum/src/packages/momentum-mcp/tools/psmm.js +206 -0
- package/modules/momentum/src/packages/momentum-mcp/tools/state.js +199 -0
- package/modules/momentum/src/packages/momentum-mcp/tools/surfaces.js +404 -0
- package/modules/momentum/src/skill/momentum.md +111 -0
- package/modules/momentum/src/tasks/groom.md +164 -0
- package/modules/momentum/src/templates/operator.json +66 -0
- package/modules/momentum/src/templates/workspace.json +111 -0
- package/modules/momentum/terminal.svg +77 -0
- package/modules/radar/README.md +1552 -0
- package/modules/radar/commands/audit.md +233 -0
- package/modules/radar/commands/guardrails.md +194 -0
- package/modules/radar/commands/init.md +207 -0
- package/modules/radar/commands/playbook.md +176 -0
- package/modules/radar/commands/remediate.md +156 -0
- package/modules/radar/commands/report.md +172 -0
- package/modules/radar/commands/resume.md +176 -0
- package/modules/radar/commands/status.md +148 -0
- package/modules/radar/commands/transform.md +205 -0
- package/modules/radar/commands/validate.md +177 -0
- package/modules/radar/docs/ARCHITECTURE.md +336 -0
- package/modules/radar/docs/GETTING-STARTED.md +287 -0
- package/modules/radar/docs/standards/agents.md +197 -0
- package/modules/radar/docs/standards/commands.md +250 -0
- package/modules/radar/docs/standards/domains.md +191 -0
- package/modules/radar/docs/standards/personas.md +211 -0
- package/modules/radar/docs/standards/rules.md +218 -0
- package/modules/radar/docs/standards/runtime.md +445 -0
- package/modules/radar/docs/standards/schemas.md +269 -0
- package/modules/radar/docs/standards/tools.md +273 -0
- package/modules/radar/docs/standards/workflows.md +254 -0
- package/modules/radar/docs/terminal.svg +72 -0
- package/modules/radar/docs/validation/convention-compliance-report.md +183 -0
- package/modules/radar/docs/validation/cross-reference-report.md +195 -0
- package/modules/radar/docs/validation/validation-summary.md +118 -0
- package/modules/radar/docs/validation/version-manifest.yaml +363 -0
- package/modules/radar/install.sh +711 -0
- package/modules/radar/integration.js +53 -0
- package/modules/radar/src/core/agents/architect.md +25 -0
- package/modules/radar/src/core/agents/compliance-officer.md +25 -0
- package/modules/radar/src/core/agents/data-engineer.md +25 -0
- package/modules/radar/src/core/agents/devils-advocate.md +22 -0
- package/modules/radar/src/core/agents/performance-engineer.md +25 -0
- package/modules/radar/src/core/agents/principal-engineer.md +23 -0
- package/modules/radar/src/core/agents/reality-gap-analyst.md +22 -0
- package/modules/radar/src/core/agents/security-engineer.md +25 -0
- package/modules/radar/src/core/agents/senior-app-engineer.md +25 -0
- package/modules/radar/src/core/agents/sre.md +25 -0
- package/modules/radar/src/core/agents/staff-engineer.md +23 -0
- package/modules/radar/src/core/agents/test-engineer.md +25 -0
- package/modules/radar/src/core/personas/architect.md +111 -0
- package/modules/radar/src/core/personas/compliance-officer.md +104 -0
- package/modules/radar/src/core/personas/data-engineer.md +113 -0
- package/modules/radar/src/core/personas/devils-advocate.md +105 -0
- package/modules/radar/src/core/personas/performance-engineer.md +119 -0
- package/modules/radar/src/core/personas/principal-engineer.md +119 -0
- package/modules/radar/src/core/personas/reality-gap-analyst.md +111 -0
- package/modules/radar/src/core/personas/security-engineer.md +108 -0
- package/modules/radar/src/core/personas/senior-app-engineer.md +111 -0
- package/modules/radar/src/core/personas/sre.md +117 -0
- package/modules/radar/src/core/personas/staff-engineer.md +109 -0
- package/modules/radar/src/core/personas/test-engineer.md +109 -0
- package/modules/radar/src/core/workflows/disagreement-resolution.md +183 -0
- package/modules/radar/src/core/workflows/phase-0-context.md +148 -0
- package/modules/radar/src/core/workflows/phase-1-reconnaissance.md +169 -0
- package/modules/radar/src/core/workflows/phase-2-domain-audits.md +190 -0
- package/modules/radar/src/core/workflows/phase-3-cross-domain.md +177 -0
- package/modules/radar/src/core/workflows/phase-4-adversarial-review.md +165 -0
- package/modules/radar/src/core/workflows/phase-5-report.md +189 -0
- package/modules/radar/src/core/workflows/phase-checkpoint.md +222 -0
- package/modules/radar/src/core/workflows/session-handoff.md +152 -0
- package/modules/radar/src/domains/00-context.md +201 -0
- package/modules/radar/src/domains/01-architecture.md +248 -0
- package/modules/radar/src/domains/02-data.md +224 -0
- package/modules/radar/src/domains/03-correctness.md +230 -0
- package/modules/radar/src/domains/04-security.md +274 -0
- package/modules/radar/src/domains/05-compliance.md +228 -0
- package/modules/radar/src/domains/06-testing.md +228 -0
- package/modules/radar/src/domains/07-reliability.md +246 -0
- package/modules/radar/src/domains/08-performance.md +247 -0
- package/modules/radar/src/domains/09-maintainability.md +271 -0
- package/modules/radar/src/domains/10-operability.md +250 -0
- package/modules/radar/src/domains/11-change-risk.md +246 -0
- package/modules/radar/src/domains/12-team-risk.md +221 -0
- package/modules/radar/src/domains/13-risk-synthesis.md +202 -0
- package/modules/radar/src/rules/agent-boundaries.md +78 -0
- package/modules/radar/src/rules/disagreement-protocol.md +76 -0
- package/modules/radar/src/rules/epistemic-hygiene.md +78 -0
- package/modules/radar/src/schemas/confidence.md +185 -0
- package/modules/radar/src/schemas/disagreement.md +238 -0
- package/modules/radar/src/schemas/finding.md +287 -0
- package/modules/radar/src/schemas/report-section.md +150 -0
- package/modules/radar/src/schemas/signal.md +108 -0
- package/modules/radar/src/tools/checkov.md +463 -0
- package/modules/radar/src/tools/git-history.md +581 -0
- package/modules/radar/src/tools/gitleaks.md +447 -0
- package/modules/radar/src/tools/grype.md +611 -0
- package/modules/radar/src/tools/semgrep.md +378 -0
- package/modules/radar/src/tools/sonarqube.md +550 -0
- package/modules/radar/src/tools/syft.md +539 -0
- package/modules/radar/src/tools/trivy.md +439 -0
- package/modules/radar/src/transform/agents/change-risk-modeler.md +24 -0
- package/modules/radar/src/transform/agents/execution-validator.md +24 -0
- package/modules/radar/src/transform/agents/guardrail-generator.md +24 -0
- package/modules/radar/src/transform/agents/pedagogy-agent.md +24 -0
- package/modules/radar/src/transform/agents/remediation-architect.md +24 -0
- package/modules/radar/src/transform/personas/change-risk-modeler.md +95 -0
- package/modules/radar/src/transform/personas/execution-validator.md +95 -0
- package/modules/radar/src/transform/personas/guardrail-generator.md +103 -0
- package/modules/radar/src/transform/personas/pedagogy-agent.md +105 -0
- package/modules/radar/src/transform/personas/remediation-architect.md +95 -0
- package/modules/radar/src/transform/rules/change-risk-rules.md +87 -0
- package/modules/radar/src/transform/rules/safety-governance.md +87 -0
- package/modules/radar/src/transform/schemas/change-risk.md +139 -0
- package/modules/radar/src/transform/schemas/intervention-level.md +207 -0
- package/modules/radar/src/transform/schemas/playbook.md +205 -0
- package/modules/radar/src/transform/schemas/verification-plan.md +134 -0
- package/modules/radar/src/transform/workflows/phase-6-remediation.md +148 -0
- package/modules/radar/src/transform/workflows/phase-7-risk-validation.md +161 -0
- package/modules/radar/src/transform/workflows/phase-8-execution-planning.md +159 -0
- package/modules/radar/src/transform/workflows/transform-safety.md +158 -0
- package/modules/vector/.vector-template/sessions/.gitkeep +0 -0
- package/modules/vector/.vector-template/vector.json +72 -0
- package/modules/vector/AUDIT-CLAUDEMD.md +154 -0
- package/modules/vector/INSTALL.md +185 -0
- package/modules/vector/LICENSE +21 -0
- package/modules/vector/README.md +409 -0
- package/modules/vector/VECTOR-BLOCK.md +57 -0
- package/modules/vector/assets/terminal.svg +68 -0
- package/modules/vector/bin/install.js +455 -0
- package/modules/vector/bin/migrate-v1-to-v2.sh +492 -0
- package/modules/vector/commands/help.md +46 -0
- package/modules/vector/hooks/vector-hook.py +775 -0
- package/modules/vector/mcp/index.js +118 -0
- package/modules/vector/mcp/package.json +10 -0
- package/modules/vector/mcp/tools/decisions.js +269 -0
- package/modules/vector/mcp/tools/domains.js +361 -0
- package/modules/vector/mcp/tools/staging.js +252 -0
- package/modules/vector/mcp/tools/vector-json.js +647 -0
- package/modules/vector/package.json +38 -0
- package/modules/vector/schemas/vector.schema.json +237 -0
- package/package.json +39 -0
- package/shared/branding/branding.js +70 -0
- package/shared/config/defaults.json +59 -0
- package/shared/events/README.md +175 -0
- package/shared/events/event-bus.js +134 -0
- package/shared/events/event_bus.py +255 -0
- package/shared/events/integrations.js +161 -0
- package/shared/events/schemas/audit-complete.schema.json +21 -0
- package/shared/events/schemas/phase-progress.schema.json +23 -0
- package/shared/events/schemas/plan-created.schema.json +21 -0
@@ -0,0 +1,221 @@

---
id: domain-12
number: "12"
name: Team Ownership & Knowledge Risk
owner_agents: [staff-engineer]
---

## Overview

Team Ownership & Knowledge Risk evaluates the human sustainability of a codebase through authorship patterns, knowledge distribution, and collaborative practices. This domain identifies single points of failure in team knowledge, documentation gaps, and cultural risks around code review and ownership transitions. Systems fail socially before they fail technically: concentrated knowledge creates organizational fragility regardless of code quality. This domain does NOT cover code health metrics (domain 09), change impact analysis (domain 11), or risk synthesis across domains (domain 13).

## Audit Questions

- What is the bus factor for each critical module, and which components have single-author dependency?
- Where is tribal knowledge concentrated, and what documentation exists to mitigate knowledge loss?
- How many modules have no active maintainer or have been abandoned by their original authors?
- What percentage of changes receive meaningful code review, and what is the review-to-approval time distribution?
- Which subsystems have high author concentration (Gini coefficient >0.7), indicating knowledge silos?
- How many critical components lack comprehensive documentation or have documentation older than the code?
- What is the onboarding time for new contributors to become productive in each major subsystem?
- Are there modules where only one person has committed in the last 6 months?
- How consistent are review standards across teams, and are there bypassed review requirements?
- What percentage of commits include documentation updates, and what is the documentation debt ratio?
- How many knowledge transfer events have occurred in the past year (pairing sessions, design reviews, runbooks)?
- Which components have the longest time-to-first-external-contribution, indicating high entry barriers?
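The bus-factor and author-concentration questions above can be answered approximately from plain `git log` output. A minimal Python sketch, illustrative only and not part of this module: `gini` and `author_stats` are hypothetical helper names, and the 0.8 top-author threshold mirrors the Single-Author Modules pattern in this domain.

```python
from collections import Counter

def gini(counts):
    """Gini coefficient of per-author commit counts
    (0 = perfectly even, approaching 1 = one author owns everything)."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over sorted values with 1-based ranks.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def author_stats(log_lines):
    """log_lines: author emails, one per commit, e.g. the output of
    `git log --format=%ae -- <module-path>` split into lines."""
    counts = Counter(line.strip() for line in log_lines if line.strip())
    top_share = max(counts.values()) / sum(counts.values()) if counts else 0.0
    return {
        "authors": len(counts),
        "gini": gini(list(counts.values())),
        "top_author_share": top_share,
        # >80% single-author share, per the Single-Author Modules pattern
        "single_author_risk": top_share > 0.8,
    }
```

In practice this would be run once per module path, feeding the per-module results into the Gini >0.7 and single-author-dependency checks above.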
## Failure Patterns

### Single-Author Modules
- **Description:** Critical components where >80% of commits come from a single author, creating extreme bus factor risk and knowledge bottlenecks that threaten continuity.
- **Indicators:**
  - Author concentration Gini coefficient >0.8 for modules with >1000 LOC
  - No commits from secondary authors in the last 90 days
  - Module fails to build or deploy when primary author is unavailable
  - Onboarding documentation explicitly references "ask [person]" for critical knowledge
- **Severity Tendency:** high

### Knowledge Silos
- **Description:** Teams or subsystems where knowledge is concentrated within small groups with no cross-pollination, creating organizational fragility and collaboration barriers.
- **Indicators:**
  - <3 people have committed to a subsystem in the last 180 days
  - No cross-team code review activity in shared interfaces
  - Design decisions documented in private channels or not at all
  - New team members require >30 days to make first meaningful contribution
- **Severity Tendency:** high

### Missing Code Review Culture
- **Description:** Significant percentage of changes merged without peer review, bypassing quality gates and knowledge sharing opportunities that prevent defects and spread understanding.
- **Indicators:**
  - >20% of commits pushed directly to main branch without PR
  - Average review-to-approval time <10 minutes, indicating rubber-stamp reviews
  - PRs with >500 LOC changes approved with zero comments
  - Review bypass patterns during "crunch time" or by senior engineers
- **Severity Tendency:** medium
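The ">20% pushed directly to main" indicator can be estimated from first-parent history on the main branch, assuming GitHub-style conventions where squash merges keep a `(#N)` suffix and merge commits begin with `Merge pull request #N`. A heuristic sketch, not part of this module:

```python
import re

# Commits carrying either marker are assumed to have gone through a PR.
PR_REF = re.compile(r"\(#\d+\)|Merge pull request #\d+")

def direct_push_ratio(subjects):
    """subjects: commit subjects from
    `git log --first-parent --format=%s main`. A subject with no PR
    reference was likely pushed directly to the branch."""
    if not subjects:
        return 0.0
    direct = sum(1 for s in subjects if not PR_REF.search(s))
    return direct / len(subjects)
```

Repos that merge without squashing, or that rewrite subjects, will need a different signal (e.g. querying the forge API for PR-associated commits), so treat this as a first-pass estimate only.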
### Documentation Debt
- **Description:** Critical components lack current documentation, or documentation age significantly exceeds code age, creating barriers to understanding and maintenance.
- **Indicators:**
  - >40% of modules have no README or design documentation
  - Documentation last-modified date >12 months older than latest code changes
  - Onboarding requires >10 questions to senior engineers about undocumented subsystems
  - Zero runbooks or operational guides for production systems
- **Severity Tendency:** medium
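The documentation-age indicators lend themselves to a simple staleness scan: compare the last commit touching a module's docs against the last commit touching its code (both obtainable via `git log -1 --format=%cI -- <path>`). A sketch under those assumptions; `doc_debt` is a hypothetical helper, and the 365-day threshold mirrors the ">12 months" indicator:

```python
from datetime import datetime, timedelta

STALE = timedelta(days=365)  # docs >12 months older than the code

def doc_debt(modules):
    """modules: {name: (doc_last_commit, code_last_commit)} as datetimes;
    doc_last_commit is None when the module has no README/design doc.
    Returns (debt ratio, flagged module names)."""
    flagged = [
        name for name, (doc, code) in modules.items()
        if doc is None or (code - doc) > STALE
    ]
    ratio = len(flagged) / len(modules) if modules else 0.0
    return ratio, flagged
```

A ratio above 0.4 would trip the ">40% of modules have no README or design documentation" indicator above.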
### Tribal Knowledge Dependencies
- **Description:** Essential operational knowledge exists only in human memory or private notes, not in accessible artifacts, creating catastrophic failure risk when key people depart.
- **Indicators:**
  - Deployment procedures exist only as Slack messages or verbal instructions
  - Critical configuration parameters documented in personal notes or wikis with access restrictions
  - Incident response requires specific individuals with undocumented mental models
  - "Oral tradition" references in team communication about how systems actually work
- **Severity Tendency:** critical

### Abandoned Code Ownership
- **Description:** Modules with no clear current owner, where the original author has left or moved on, creating accountability gaps and maintenance neglect.
- **Indicators:**
  - CODEOWNERS file missing or >50% entries reference departed team members
  - Zero commits to module in last 180 days despite open bug reports
  - Pull requests to module sit unreviewed for >30 days
  - Module excluded from refactoring or upgrade initiatives due to "no one understands it"
- **Severity Tendency:** high
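The CODEOWNERS indicator above can be checked mechanically by cross-referencing each entry's owners against handles seen in recent history (for example, authors from the last 180 days). A sketch assuming the simple `pattern owner...` line format; `orphaned_entries` is a hypothetical helper, not part of this module:

```python
def orphaned_entries(codeowners_text, active_committers):
    """codeowners_text: contents of a CODEOWNERS file.
    active_committers: set of handles seen in recent commit history.
    Returns path patterns where no listed owner is still active,
    i.e. candidates for the '>50% entries reference departed team
    members' indicator."""
    orphaned = []
    for line in codeowners_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        pattern, *owners = line.split()
        if owners and not any(o.lstrip("@") in active_committers for o in owners):
            orphaned.append(pattern)
    return orphaned
```

Real CODEOWNERS files also allow team handles (`@org/team`) and email owners, which a production check would resolve through the forge's API rather than plain string matching.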
### Inconsistent Review Standards
- **Description:** Different teams or individuals apply wildly different review rigor, creating quality variance and cultural friction that undermines trust in the review process.
- **Indicators:**
  - Reviewer X approves 95% of PRs within 5 minutes; Reviewer Y averages 45 minutes with detailed feedback
  - Some teams require 2+ approvals; others allow single-approval merge
  - Security/performance issues caught in production that were visible in reviewed PRs
  - Team retrospectives mention "review lottery" or reviewer shopping behavior
- **Severity Tendency:** medium

## Best Practice Patterns

### Distributed Ownership
- **Replaces Failure Pattern:** Single-Author Modules
- **Abstract Pattern:** Critical components maintained by 3+ active contributors with balanced commit distribution, ensuring knowledge redundancy and sustainable maintenance through collaborative stewardship.
- **Framework Mappings:**
  - GitHub CODEOWNERS: Require 2+ reviewers from different teams for protected modules
  - GitLab Code Owners: Use group-based ownership with mandatory secondary reviewers
  - Azure DevOps: Configure branch policies requiring multi-team approval for critical paths
- **Language Patterns:**
  - Microservices (any language): Rotate on-call ownership quarterly to spread operational knowledge
  - Monorepo (TypeScript/Go): Use package-level CODEOWNERS with explicit fallback reviewers
  - Infrastructure (Terraform/Kubernetes): Require platform team + product team dual approval

### Cross-Team Knowledge Sharing
- **Replaces Failure Pattern:** Knowledge Silos
- **Abstract Pattern:** Scheduled knowledge transfer activities, cross-team pairing rotations, and shared documentation practices that build organizational resilience through deliberate knowledge distribution.
- **Framework Mappings:**
  - Team Topologies: Implement "community of practice" groups for shared technical domains
  - Spotify Model: Host guild meetings for architecture, security, and operations knowledge sharing
  - DevOps: Embed SRE engineers with product teams for bidirectional knowledge flow
- **Language Patterns:**
  - Backend teams: Monthly architecture deep-dives with rotating presenters from different teams
  - Frontend teams: Shared component library ownership with contribution guidelines requiring cross-team review
  - Data teams: Weekly data model review sessions with product and engineering representation

### Rigorous Review Process
- **Replaces Failure Pattern:** Missing Code Review Culture
- **Abstract Pattern:** Mandatory peer review with clear standards, review checklists, and metrics tracking review quality, ensuring every change receives thoughtful scrutiny before integration.
- **Framework Mappings:**
  - GitHub: Use required reviewers, status checks, and review assignment automation (CODEOWNERS)
  - GitLab: Configure approval rules with eligible approvers and required approval count
  - Gerrit: Enforce Verified+1 from CI and Code-Review+2 from a human reviewer before submit
- **Language Patterns:**
  - Pull request templates: Include security checklist, test coverage requirements, and breaking change assessment
  - Review bots: Automate size limits (<400 LOC), test coverage gates (>80%), and documentation checks
  - Review guidelines: Publish standards for response time (24h), approval criteria, and constructive feedback tone
130
|
+
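The review-bot gates above reduce to a simple predicate over pull-request statistics. A hedged sketch; the thresholds are the illustrative values from the bullet, and the function name and inputs are hypothetical:

```python
def review_gate(changed_loc, coverage_pct, docs_updated):
    """Return the list of failed automated gates for a pull request."""
    failures = []
    if changed_loc >= 400:    # size limit from the guideline above
        failures.append(f"PR too large: {changed_loc} LOC (limit 400)")
    if coverage_pct <= 80:    # test coverage gate
        failures.append(f"coverage {coverage_pct}% is below the 80% gate")
    if not docs_updated:      # documentation check
        failures.append("no documentation updated alongside the change")
    return failures

print(review_gate(120, 91, True))   # [] -- all gates pass
print(review_gate(650, 75, False))  # three failure messages
```

A real bot would source these numbers from the CI system and post the failure list as a PR comment.
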
### Living Documentation
- **Replaces Failure Pattern:** Documentation Debt
- **Abstract Pattern:** Documentation colocated with code, updated in the same commits as implementation changes, with CI enforcement and regular audits to maintain accuracy and relevance.
- **Framework Mappings:**
  - Docs-as-code: Markdown in repository, versioned with code, published via CI (Docusaurus, MkDocs, Sphinx)
  - ADR (Architecture Decision Records): Lightweight decisions in docs/adr/ directory, numbered and immutable
  - README-driven development: Write README before implementation, update in same PR as code changes
- **Language Patterns:**
  - Rust: Use `cargo doc` with enforced `#![warn(missing_docs)]` for public APIs
  - TypeScript: Generate API docs from TSDoc comments via TypeDoc, published automatically
  - Python: Maintain Sphinx documentation with docstring coverage >90%, checked in CI

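Staleness of colocated docs can be spot-checked by comparing last-modified dates, which is how the Documentation Freshness metric later in this document is defined. A minimal sketch; the function name and the sample dates are illustrative:

```python
from datetime import date

def doc_freshness(last_doc_change, last_code_change, today):
    """Days since the docs changed divided by days since the code changed."""
    doc_age = (today - last_doc_change).days
    code_age = (today - last_code_change).days
    return doc_age / max(code_age, 1)  # guard against same-day code changes

# Docs touched 30 days ago, code 122 days ago -> ratio well under 0.5.
ratio = doc_freshness(date(2024, 6, 1), date(2024, 3, 1), date(2024, 7, 1))
print(round(ratio, 2))  # 0.25
```

In practice the two dates would come from `git log -1 --format=%cs` on the docs path and the code path respectively.
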
### Explicit Knowledge Capture
- **Replaces Failure Pattern:** Tribal Knowledge Dependencies
- **Abstract Pattern:** Operational runbooks, architecture decision records, and incident postmortems as first-class artifacts, reviewed and updated with the same rigor as code.
- **Framework Mappings:**
  - SRE runbooks: Step-by-step procedures for deployment, rollback, and incident response in shared wiki
  - PagerDuty runbooks: Linked from alerts with copy-paste commands and decision trees
  - Incident retrospectives: Blameless postmortems with action items tracked in ticketing system
- **Language Patterns:**
  - Kubernetes: Document deployment topologies with diagrams, config explanations, and troubleshooting steps
  - AWS: Maintain infrastructure-as-code with inline comments explaining non-obvious design decisions
  - Database: Create migration runbooks documenting rollback procedures and data validation steps

### Active Maintainership
- **Replaces Failure Pattern:** Abandoned Code Ownership
- **Abstract Pattern:** Explicit ownership assignments with accountability for responsiveness, regularly audited and reassigned as team composition changes, ensuring every component has an engaged steward.
- **Framework Mappings:**
  - CODEOWNERS file: Updated quarterly with current team members, synced to org chart
  - Service catalog: Document owner, on-call rotation, and escalation path for each service
  - Ownership dashboard: Visualize modules with inactive owners, flagging reassignment needs
- **Language Patterns:**
  - Microservices: Each service has named owner team with SLO commitments and support tier
  - Libraries: Package.json or setup.py includes maintainers field, synced to GitHub team
  - Infrastructure: Terraform modules have provider blocks with owner annotations and review requirements

### Standardized Review Criteria
- **Replaces Failure Pattern:** Inconsistent Review Standards
- **Abstract Pattern:** Documented review guidelines with checklists, training for reviewers, and periodic calibration sessions to align expectations and maintain quality standards across teams.
- **Framework Mappings:**
  - Review guidelines doc: Published standards covering correctness, readability, security, and test coverage
  - Review training program: Onboard new reviewers with shadowing and calibration exercises
  - Review metrics dashboard: Track approval rates, comment depth, and review time by reviewer to identify outliers
- **Language Patterns:**
  - Code review checklist: Security (input validation, auth), performance (algorithmic complexity), maintainability (naming, modularity)
  - Review escalation: Require senior engineer review for PRs touching critical paths or changing interfaces
  - Review retrospectives: Quarterly calibration sessions where team reviews same PR and compares feedback

## Red Flags

- Single commit author for entire module with >5000 LOC
- CODEOWNERS file not updated in >12 months
- >30% of PRs merged with zero review comments
- Documentation directory last touched >2 years ago
- No ADRs or design docs for system with >50K LOC
- Critical deployment procedure exists only in Slack history
- On-call runbook says "call [specific person]" instead of providing steps
- Module has open PRs from 6+ months ago with no reviewer assignment
- New hire onboarding checklist includes >10 "ask [person]" items
- Team retrospectives repeatedly mention knowledge silos as blocker
- Production incident required 2+ hours to find someone who understood system

## Tool Affinities

| Tool ID | Signal Type | Relevance |
|---------|-------------|-----------|
| git-history | Author concentration per module, commit timeline by contributor, review participation | primary |
| git-history | Bus factor calculation, abandoned file detection (no commits in 180+ days) | primary |
| git-history | Documentation staleness (last-modified dates relative to code) | primary |
| SonarQube | Code review coverage metrics, PR decoration with quality gates | contextual |

## Standards & Frameworks

- **Team Topologies** — Stream-aligned teams, enabling teams, and community of practice models for knowledge sharing
- **DevOps Research (DORA)** — Elite performer characteristics include low bus factor and high review participation
- **Conway's Law** — System design mirrors communication structure; ownership patterns reveal architectural coupling
- **Spotify Model** — Guilds and chapters as knowledge-sharing mechanisms across squad boundaries
- **CODEOWNERS (GitHub/GitLab)** — Explicit ownership declarations with review enforcement
- **SRE Principles (Google)** — On-call rotations and runbook culture as operational knowledge distribution

## Metrics

| Metric | What It Measures | Healthy Range |
|--------|-----------------|---------------|
| Bus Factor (per module) | Minimum number of contributors who must leave before knowledge loss becomes critical | ≥3 for critical modules |
| Author Concentration (Gini) | Distribution inequality of commits across contributors (0=equal, 1=monopoly) | <0.6 for critical modules |
| Review Coverage | Percentage of commits that went through peer review before merge | >90% |
| Review Comment Depth | Average comments per reviewed PR, indicating engagement quality | 2-8 comments |
| Documentation Freshness | Ratio of documentation age to code age (days since doc update / days since code update) | <0.5 (docs at most half as old as the code they describe) |
| Time to First Contribution | Days for new contributor to land first merged PR, indicating entry barriers | <30 days |
| Abandoned Module Count | Modules with zero commits in last 180 days and open issues | 0 |
| Onboarding Dependency | Number of "ask [person]" items in onboarding checklist, indicating tribal knowledge | <3 |

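The Author Concentration row can be computed directly from per-contributor commit counts using the standard discrete Gini formula. A sketch; the commit counts are illustrative:

```python
def gini(commit_counts):
    """Gini coefficient of commits per contributor:
    0 = perfectly even distribution, values near 1 = single-author monopoly."""
    xs = sorted(commit_counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the sorted cumulative distribution.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([25, 25, 25, 25]))         # 0.0 -- balanced team
print(round(gini([97, 1, 1, 1]), 2))  # 0.72 -- over the 0.6 threshold
```
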
@@ -0,0 +1,202 @@
---
id: domain-13
number: "13"
name: Risk Synthesis & Forecasting
owner_agents: [principal-engineer]
---

## Overview

Risk Synthesis & Forecasting aggregates findings across all prior domains to identify compound risks, forecast time-to-failure scenarios, and prioritize remediation efforts based on likelihood-times-impact analysis. This is a SYNTHESIS domain—it consumes findings from domains 00-12 rather than generating primary findings. Its failure patterns describe breakdowns in the synthesis process itself (treating risks in isolation, missing cross-domain interactions, short-term bias), not defects in the codebase. Effective risk synthesis answers "what breaks first" and "what requires immediate action" by understanding how risks interact and evolve. This domain does NOT cover individual domain analysis (architecture, security, performance, team risk, etc.).

## Audit Questions

- How are risks from different domains weighted and combined into a unified risk score?
- Which cross-domain risk combinations create compound scenarios (e.g., security vulnerability + high coupling + single author)?
- What is the predicted time-to-failure for critical systems based on trend analysis across quality, security, and ownership metrics?
- Are risks assessed in isolation, or is there analysis of cascading failure modes and interaction effects?
- What criteria determine whether a risk is accepted vs remediated, and are these criteria documented and consistently applied?
- How is remediation sequenced—by severity, by time-to-failure, by dependency order, or by business priority?
- Does risk assessment consider multiple time horizons (immediate, 3-month, 12-month), or focus only on current state?
- Are there documented risk acceptance decisions with expiration dates and reassessment triggers?
- How often are risk profiles updated, and what triggers reevaluation (new findings, production incidents, architecture changes)?
- Are historical risk trends tracked to identify worsening patterns or validate remediation effectiveness?
- What percentage of identified risks have defined remediation plans vs sit unaddressed?
- How is risk communicated to stakeholders—raw scores, narrative summaries, or visual dashboards?

## Failure Patterns

### Risk Compartmentalization
- **Description:** Treating risks from different domains as independent factors, missing critical interactions where vulnerabilities in one domain amplify or trigger failures in another, leading to blind spots in compound risk scenarios.
- **Indicators:**
  - Risk reports organized by domain with no cross-domain correlation analysis
  - Security findings addressed independently of code ownership patterns (ignoring abandoned code with known CVEs)
  - Performance degradation trends not correlated with coupling metrics (missing cascading slowdown risks)
  - Remediation priorities set per-domain without considering interaction effects
- **Severity Tendency:** high

### Missing Compound Risk Analysis
- **Description:** Failure to identify scenarios where multiple moderate risks combine into critical failure modes, such as a security vulnerability in single-author legacy code with high coupling and no test coverage.
- **Indicators:**
  - No documented compound risk scenarios or cascading failure mode analysis
  - Risk scoring treats each finding as independent, ignoring multiplicative effects
  - Production incidents reveal risk combinations that were individually triaged as low-priority
  - Remediation planning focuses on highest individual scores without dependency or interaction consideration
- **Severity Tendency:** critical

### Short-Term Bias
- **Description:** Risk assessment focuses exclusively on current state without trend analysis or forecasting, missing accelerating decay patterns that will cause future failures even if current metrics appear acceptable.
- **Indicators:**
  - Risk dashboards show only current snapshots with no historical trend lines
  - No time-to-failure predictions or degradation rate calculations
  - Technical debt treated as static rather than accruing with interest
  - No differentiation between stable risks and rapidly worsening risks in remediation priority
- **Severity Tendency:** high

### Risk Normalization
- **Description:** Persistent exposure to risks leads to acceptance-as-default rather than deliberate risk acceptance decisions, causing critical risks to become invisible through familiarity rather than judgment.
- **Indicators:**
  - Increasing count of "accepted" risks without documented acceptance criteria or reassessment dates
  - Same risks appear in reports quarter after quarter with no remediation progress
  - Team communication treats high-severity findings as "known issues" without urgency
  - Production incidents greeted with "we knew that was fragile" rather than triggering remediation
- **Severity Tendency:** high

### Missing Risk Acceptance Criteria
- **Description:** No documented framework for deciding when to accept vs remediate risks, leading to inconsistent decisions driven by urgency or loudest voice rather than principled assessment of likelihood, impact, and cost.
- **Indicators:**
  - Risk triage meetings result in ad-hoc decisions without reference to criteria
  - Similar risks handled inconsistently (one accepted, another escalated)
  - No risk acceptance artifacts documenting rationale, owner, expiration date, or monitoring plan
  - Stakeholders surprised by production issues from risks assumed to be "handled"
- **Severity Tendency:** medium

### Remediation Without Prioritization
- **Description:** Attempting to fix all identified risks simultaneously or in arbitrary order, leading to resource waste, incomplete fixes, and missing the highest-impact or most time-sensitive issues.
- **Indicators:**
  - Remediation backlog with >50 items and no rank ordering or deadline assignments
  - Engineering time split across many small fixes without addressing critical compound risks
  - No dependency analysis (some fixes unblock others; some are prerequisites)
  - Remediation effort allocated proportionally to finding count per domain rather than impact or urgency
- **Severity Tendency:** medium

## Best Practice Patterns

### Cross-Domain Risk Correlation
- **Replaces Failure Pattern:** Risk Compartmentalization
- **Abstract Pattern:** Analyze relationships between findings across domains, identifying interaction effects and cascading failure modes through correlation matrices and dependency graphs to surface compound risks.
- **Framework Mappings:**
  - Risk Matrix (ISO 31000): Extend 2D likelihood-impact matrix with correlation layers for cross-domain dependencies
  - FAIR (Factor Analysis of Information Risk): Model compound risks using dependency trees and conditional probabilities
  - Bow-Tie Analysis: Map risk interactions with threat scenarios on left, barriers in center, consequences on right
- **Language Patterns:**
  - Python/Pandas: Build correlation matrices between domain metrics (e.g., security score vs ownership concentration)
  - SQL: Join findings tables across domains with shared code location keys to identify multi-domain hotspots
  - Graph databases (Neo4j): Model risks as nodes with interaction edges, query for critical path scenarios

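The join-on-code-location idea above needs no database to prototype. A minimal in-memory sketch; the finding tuples, module names, and the 3-domain cutoff are hypothetical:

```python
from collections import defaultdict

# Hypothetical findings: (code location, domain, severity).
findings = [
    ("billing/engine.py", "security", "high"),
    ("billing/engine.py", "team-risk", "high"),
    ("billing/engine.py", "performance", "medium"),
    ("web/views.py", "security", "low"),
]

def multi_domain_hotspots(findings, min_domains=3):
    """Group findings by code location; keep locations several domains hit."""
    by_module = defaultdict(set)
    for module, domain, _severity in findings:
        by_module[module].add(domain)
    return {m: sorted(d) for m, d in by_module.items() if len(d) >= min_domains}

print(multi_domain_hotspots(findings))
# {'billing/engine.py': ['performance', 'security', 'team-risk']}
```

The same grouping is what the SQL join in the bullet would produce at scale.
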
### Compound Risk Identification
- **Replaces Failure Pattern:** Missing Compound Risk Analysis
- **Abstract Pattern:** Define and detect high-priority compound risk scenarios where multiple domain findings intersect in the same code location, multiplying severity through interaction effects.
- **Framework Mappings:**
  - FAIR framework: Calculate compound risk as P(A ∩ B) with amplification factors when risks overlap
  - Risk scenario modeling: Define templates like "security vuln + abandoned owner + production-critical" as tier-1 scenarios
  - Failure Mode and Effects Analysis (FMEA): Score compound risks using detection difficulty × occurrence probability × impact severity
- **Language Patterns:**
  - Rule engines (Drools, Python rules): Define compound risk patterns as logical rules triggering on multi-domain findings
  - Risk scoring algorithms: Multiply base severity by amplification factors (e.g., 2x for single-author, 1.5x for high coupling)
  - Alert thresholds: Escalate when ≥3 high-severity findings from different domains affect same module

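The amplification-factor bullet can be sketched as a tiny scoring function. The 2x and 1.5x factors are the illustrative values from the text, not calibrated weights:

```python
def compound_score(base_severity, single_author=False, high_coupling=False):
    """Amplify a base severity when findings from other domains overlap
    at the same code location."""
    score = base_severity
    if single_author:
        score *= 2.0   # knowledge-loss amplifier
    if high_coupling:
        score *= 1.5   # blast-radius amplifier
    return score

# A moderate 5.0 security finding triples when it sits in
# single-author, highly coupled code.
print(compound_score(5.0))                                          # 5.0
print(compound_score(5.0, single_author=True, high_coupling=True))  # 15.0
```
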
### Time-Series Forecasting
- **Replaces Failure Pattern:** Short-Term Bias
- **Abstract Pattern:** Track risk metrics over time, fit trend models to detect acceleration or decay, and predict time-to-failure thresholds to prioritize risks by urgency rather than current severity alone.
- **Framework Mappings:**
  - Predictive analytics: Use linear regression, exponential smoothing, or ARIMA models on metric time series
  - Technical debt interest: Model accumulating debt as compound interest, forecasting when cost-to-fix exceeds cost-to-rewrite
  - Reliability growth models: Apply Weibull or exponential failure models to predict next incident based on historical MTBF trends
- **Language Patterns:**
  - Python statsmodels/Prophet: Fit forecasting models to security debt, test coverage decay, or performance degradation trends
  - Time-series databases (Prometheus, InfluxDB): Store historical risk metrics, query for moving averages and rate-of-change
  - Alerting on derivatives: Trigger when the rate of degradation is itself increasing (the trend is accelerating in the unhealthy direction)

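Before reaching for statsmodels or Prophet, a time-to-threshold estimate can come from a plain least-squares line. A self-contained sketch; the coverage series and the 60% floor are made up for illustration:

```python
def days_to_threshold(history, threshold):
    """Fit a least-squares line to (day, value) samples and estimate days
    remaining until the metric crosses `threshold` (None if the trend is flat)."""
    n = len(history)
    xs, ys = zip(*history)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in history)
             / sum((x - mx) ** 2 for x in xs))
    if slope == 0:
        return None
    crossing_day = (threshold - my) / slope + mx
    return max(crossing_day - max(xs), 0.0)

# Coverage falling one point per day from 80%: at day 10 it sits at 70%,
# so the 60% floor is ten days out.
coverage = [(0, 80.0), (5, 75.0), (10, 70.0)]
print(days_to_threshold(coverage, 60.0))  # 10.0
```

A linear fit is only a first approximation; the models named above handle seasonality and acceleration properly.
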
### Deliberate Risk Acceptance
- **Replaces Failure Pattern:** Risk Normalization
- **Abstract Pattern:** Treat risk acceptance as an explicit decision requiring documented rationale, owner assignment, expiration date, and monitoring plan, preventing invisible accumulation of unaddressed risks.
- **Framework Mappings:**
  - Risk register (ISO 31000): Formal log of accepted risks with fields for justification, owner, review date, and compensating controls
  - Risk acceptance matrix: Document criteria for accept vs mitigate vs transfer vs avoid based on likelihood-impact quadrant
  - Exception management: Use ticketing system with SLA for accepted risk reviews (e.g., quarterly reassessment)
- **Language Patterns:**
  - Issue tracking: Create "accepted risk" ticket type with required fields (rationale, owner, expiry, monitoring)
  - Automated expiration: Alert when accepted risk passes review date without reassessment
  - Compensating controls checklist: Require documentation of mitigations even for accepted risks (e.g., monitoring, incident runbooks)

### Risk Acceptance Framework
- **Replaces Failure Pattern:** Missing Risk Acceptance Criteria
- **Abstract Pattern:** Publish transparent criteria for risk triage decisions, including thresholds for automatic acceptance, escalation triggers, and stakeholder approval requirements, ensuring consistent and defensible risk management.
- **Framework Mappings:**
  - Risk appetite statement: Document organizational tolerance for risk categories (e.g., "no critical security vulns in production code")
  - Decision tree: Flowchart mapping severity × exploitability × impact to accept/mitigate/escalate outcomes
  - RACI matrix: Define who is Responsible, Accountable, Consulted, Informed for risk decisions by severity level
- **Language Patterns:**
  - Risk scoring rubrics: Published formulas for likelihood (1-5) × impact (1-5) with threshold rules (≥15 = escalate, 8-14 = mitigate, <8 = accept)
  - Approval workflows: Automate routing (e.g., critical = CTO approval, high = VP eng, medium = tech lead)
  - Audit trail: Log all risk decisions with timestamp, approver, and criteria reference for compliance review

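The scoring rubric above maps directly to code. A sketch using the example thresholds from the bullet (≥15 escalate, 8-14 mitigate, <8 accept); the function name is illustrative:

```python
def triage(likelihood, impact):
    """likelihood (1-5) x impact (1-5) with the published threshold rules."""
    score = likelihood * impact
    if score >= 15:
        return "escalate"
    if score >= 8:
        return "mitigate"
    return "accept"

print(triage(5, 4))  # escalate (score 20)
print(triage(3, 3))  # mitigate (score 9)
print(triage(2, 2))  # accept (score 4)
```

Because the thresholds cover every integer product, every finding gets exactly one outcome, which is what makes the rubric auditable.
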
### Impact-Driven Remediation Sequencing
- **Replaces Failure Pattern:** Remediation Without Prioritization
- **Abstract Pattern:** Order remediation by a composite score combining severity, time-to-failure, dependency blocking, and business criticality, ensuring highest-impact and most time-sensitive issues are addressed first.
- **Framework Mappings:**
  - Weighted shortest job first (WSJF): Prioritize by (business value + time criticality + risk reduction) / effort estimate
  - Dependency-aware scheduling: Use topological sort on remediation dependencies (fix A unblocks B and C)
  - Cost-benefit analysis: Rank by expected value of remediation (incident probability reduction × incident cost - fix cost)
- **Language Patterns:**
  - Priority scoring: Calculate composite = (severity × 0.4) + (urgency × 0.3) + (business criticality × 0.2) + (dependency unblock count × 0.1)
  - Gantt chart generation: Automate remediation roadmap with critical path highlighting and resource allocation
  - Kanban with WIP limits: Cap in-progress remediations to prevent fragmentation, focus on completing high-priority items

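The composite priority formula above, applied to a toy backlog. The weights are the ones in the bullet; the backlog items and their input scores are invented:

```python
def remediation_priority(severity, urgency, business_criticality, unblocks):
    """Composite score; inputs are 0-10 scales, `unblocks` counts the
    other fixes this remediation unblocks."""
    return (severity * 0.4 + urgency * 0.3
            + business_criticality * 0.2 + unblocks * 0.1)

backlog = {
    "rotate leaked credential": remediation_priority(9, 10, 8, 0),
    "rename confusing module": remediation_priority(2, 1, 2, 0),
    "upgrade shared framework": remediation_priority(6, 5, 6, 4),
}
# Highest composite score first.
for item, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{score:4.1f}  {item}")
```
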
## Red Flags

- Risk reports list findings by domain with no cross-references or interaction analysis
- Same high-severity risks appear in quarterly reports without status changes or remediation progress
- Production incident reveals risk combination that was visible across multiple domain reports
- No documented risk acceptance decisions, or "accepted" tag applied without approval artifacts
- Remediation backlog sorted alphabetically or by date found rather than by impact or urgency
- Team treats persistent risks as background noise ("yeah, we know about that")
- Stakeholders express surprise at production issues from risks assumed to be managed
- Risk metrics show only current state with no trend lines or historical context
- No time-to-failure estimates or projections for degrading metrics
- Remediation planning allocates effort equally across domains without dependency or interaction consideration
- No risk acceptance expiration dates or reassessment triggers

## Tool Affinities

| Tool ID | Signal Type | Relevance |
|---------|-------------|-----------|
| git-history | Trend analysis for ownership concentration, change frequency, and documentation staleness over time | primary |
| SonarQube | Historical quality metrics, technical debt trends, code smell accumulation rates | supporting |
| Trivy | CVE discovery date trends, vulnerability backlog age, exploit availability timeline | supporting |
| Semgrep | Security finding trends by rule category, regression detection in security patterns | supporting |
| Gitleaks | Secret exposure timeline, remediation velocity for leaked credentials | contextual |

## Standards & Frameworks

- **ISO 31000 Risk Management** — Risk identification, assessment, treatment, monitoring, and communication framework
- **FAIR (Factor Analysis of Information Risk)** — Quantitative risk modeling with probability distributions and Monte Carlo simulation
- **Bow-Tie Analysis** — Visual method for analyzing risk scenarios with threats, barriers, and consequences
- **FMEA (Failure Mode and Effects Analysis)** — Systematic approach for identifying potential failure modes and prioritizing by severity × occurrence × detection
- **NIST Risk Management Framework (RMF)** — Risk categorization, control selection, monitoring, and authorization
- **DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability)** — Microsoft threat modeling risk scoring
- **Technical Debt Quadrant (Fowler)** — Reckless/Prudent × Deliberate/Inadvertent classification for debt prioritization

## Metrics

| Metric | What It Measures | Healthy Range |
|--------|-----------------|---------------|
| Cross-Domain Risk Score | Composite severity index combining weighted findings across all domains | 0-100; <40 is healthy, >70 requires immediate action |
| Compound Risk Count | Number of code locations with ≥3 high-severity findings from different domains | 0 is ideal; >5 indicates critical compound risks |
| Predicted Time-to-Failure | Estimated days until critical metric crosses failure threshold based on trend analysis | >180 days for all critical systems |
| Risk Acceptance Ratio | Count of deliberately accepted risks / total identified risks | 10-30% (some acceptance is pragmatic; >50% indicates normalization) |
| Risk-Accepted with Expiry | Percentage of accepted risks with documented reassessment dates | 100% (all accepted risks require review cycles) |
| Remediation Velocity | High-severity findings closed per sprint, indicating throughput and prioritization effectiveness | ≥3 per sprint for teams with active risk backlog |
| Time-to-Remediation (P50) | Median days from finding identification to resolution for high-severity risks | <30 days for high, <7 days for critical |
| Worsening Risk Trend % | Percentage of tracked metrics with negative trajectory (increasing debt, decreasing coverage) | <20% (some churn is normal; >40% indicates systemic decay) |

@@ -0,0 +1,78 @@
---
id: agent-boundaries
name: Agent Boundaries
scope: all_agents
priority: critical
---

## Purpose

Radar agents are composed from personas + domains + schemas + rules. The power of this decomposed architecture depends entirely on agents staying within their compositional contracts. When an Architect agent starts opining on security, its security analysis is uninformed by the Security Engineer's threat model and mental models. When a Security Engineer produces performance recommendations, those recommendations lack the Performance Engineer's calibration for what constitutes an actual bottleneck.

Boundary enforcement prevents two failure modes: **dilution** (agent output becomes generic when agents try to cover everything) and **contradiction** (agents produce conflicting findings in domains they don't own, creating noise instead of signal). Strong boundaries produce strong, distinct analysis. Weak boundaries produce mediocre, overlapping analysis.

## Rules

### 1. Persona identity constraint

**Statement:** An agent must operate within its persona's defined thinking style, risk philosophy, and mental models. An SRE agent must not adopt a Security Engineer's threat model. A Compliance Officer must not reason like an Architect.

**Rationale:** Personas encode how agents think, not just what they know. The Security Engineer's paranoid, threat-first perspective produces different findings than the Architect's structural, pattern-first perspective — even when examining the same code. If agents adopt each other's thinking styles, the multi-agent approach adds no value over a single generalist.

**Enforcement:** Finding review checks that reasoning patterns in Layer 3 (interpretation) and Layer 7 (judgment) are consistent with the declared persona's characteristics. An SRE agent whose findings focus on SQL injection exploitability rather than service availability is operating outside its persona. Persona drift is flagged for Principal review.

### 2. Domain scope constraint

**Statement:** An agent must produce findings only in domains listed in its agent assembly manifest's `domains` field. A Security Engineer (Domain 04) must not produce findings in Domain 08 (Performance). The finding's `domain_number` must be in the agent's declared domain list.

**Rationale:** Domain boundaries ensure comprehensive coverage without overlap. If multiple agents produce findings in the same domain, the audit has redundant analysis in some areas and gaps in others. Each domain has one primary owner — the agent whose persona and expertise are calibrated for that domain's failure patterns.

**Enforcement:** @schema:finding validation checks that `domain_number` is in the producing agent's declared `domains` list. A finding from `security-engineer` with `domain_number: 08` (Performance) is a validation error.

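A sketch of the validation just described. The `domains` and `domain_number` field names follow the rule text; the manifest registry, agent ids, and function name are illustrative, not the actual schema tooling:

```python
MANIFESTS = {  # illustrative assembly manifests
    "security-engineer": {"domains": ["04"]},
    "performance-engineer": {"domains": ["08"]},
}

def validate_finding(agent_id, finding):
    """Rule 2 check: a finding's domain_number must be in the agent's list."""
    domains = MANIFESTS[agent_id]["domains"]
    if finding["domain_number"] not in domains:
        return [f"{agent_id} filed a finding in domain "
                f"{finding['domain_number']}, outside its scope {domains}"]
    return []

print(validate_finding("security-engineer", {"domain_number": "04"}))  # [] -- valid
print(validate_finding("security-engineer", {"domain_number": "08"}))  # one validation error
```
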
### 3. Cross-domain observation, not cross-domain judgment
|
|
33
|
+
|
|
34
|
+
**Statement:** An agent may observe signals outside its domains (a Security Engineer may note that a performance issue creates a DoS vector), but the finding must be filed under the agent's own domain with a cross-reference to the relevant domain. The agent provides the observation; the domain owner provides the judgment.
|
|
35
|
+
|
|
36
|
+
**Rationale:** Cross-domain observations are valuable — a Security Engineer noticing a DoS vector through a performance issue is exactly the kind of cross-cutting insight Radar is designed to surface. But the Security Engineer should file this as a security finding (Domain 04: "Performance issue X creates DoS vector Y") with a reference to Domain 08, not as a performance finding. The Performance Engineer then evaluates whether the performance issue is real; the Security Engineer evaluates whether the DoS vector is exploitable.
|
|
37
|
+
|
|
38
|
+
**Enforcement:** Findings that reference domains outside the agent's scope must use the agent's own domain number in `domain_number` and include the referenced domain in `references` (e.g., "See Domain 08 for performance characterization"). Findings with `domain_number` outside the agent's domains but containing cross-domain observations are reclassified to the agent's domain with a cross-reference.

### 4. Schema conformance constraint

**Statement:** All agent output must conform to the schemas declared in the agent's assembly manifest. No ad-hoc output formats, no free-form text blocks, no "summary notes" outside the schema structure.

**Rationale:** Schema conformance makes agent output composable. The disagreement resolution workflow expects @schema:finding instances. The report generation workflow expects @schema:disagreement instances. Ad-hoc output cannot be consumed by downstream workflows, creating dead zones in the analysis pipeline.

**Enforcement:** Output validation checks every finding against @schema:finding, every disagreement raised against @schema:disagreement, and every confidence assessment against @schema:confidence. Non-conformant output is rejected and the agent must reformat. Agents cannot produce output types not declared in their assembly manifest.

### 5. Tool signal consumption constraint

**Statement:** Agents must consume signals tagged with their domains (the `domain_relevance` field in @schema:signal) and must not give undue weight to signals irrelevant to their domains. Ignoring relevant signals is a coverage gap; overweighting irrelevant signals is scope creep.

**Rationale:** Phase 1 (Automated Signal Gathering) tags every signal with domain relevance. An agent that ignores signals relevant to its domains may miss evidence that would change its assessment. An agent that gives disproportionate weight to signals outside its domains is effectively auditing another agent's territory without that agent's calibration.

**Enforcement:** Audit trail checks that each agent consumed all signals where `domain_relevance` includes the agent's domains. Missing signal consumption (signal relevant to Domain 04 not referenced by security-engineer) is flagged as a potential coverage gap. Agents referencing signals with no relevance to their domains must justify the cross-domain reference.

## DO

- Security Engineer files finding F-04-015: "Unbounded retry mechanism at `src/http/client.ts:34` creates a potential denial-of-service amplification vector. See Domain 07 for reliability characterization of the retry behavior." (Cross-domain observation filed in the agent's own domain with reference to the relevant domain.)

- Data Engineer produces findings only in Domain 02 (Data & State Integrity), referencing signals S-SQ-003 and S-TRV-007 which both have `domain_relevance: [02]`. (Domain scope respected, relevant signals consumed.)

- Principal Engineer reviews all domain findings and produces synthesis in Domain 13 (Risk Synthesis), which is explicitly in the Principal's domain list. (Synthesis happens within the designated domain, not ad-hoc.)

- Agent output is a well-formed @schema:finding instance with all 7 layers, valid @schema:confidence vector, and proper ID format. (Schema conformance — no ad-hoc formats.)

## DON'T

- Architect agent produces finding F-04-022 in Domain 04 (Security): "Authentication tokens are not rotated."

  **Why this is wrong:** Domain 04 is the Security Engineer's domain. The Architect may observe that the authentication architecture lacks token rotation (filed under Domain 01: Architecture), but the security implications are the Security Engineer's judgment to make.

- Performance Engineer ignores signal S-SQ-015 (SonarQube complexity hotspot in `src/core/engine.ts`) which has `domain_relevance: [08]`, producing findings based only on manual code review.

  **Why this is wrong:** Ignoring relevant signals creates coverage gaps. The SonarQube complexity signal may reveal performance-impacting patterns the manual review missed.

- SRE agent produces a free-form text block: "General observations: The deployment pipeline seems fragile and the monitoring has gaps."

  **Why this is wrong:** "General observations" is not a schema-conformant output. This must be expressed as @schema:finding instances with all 7 layers. "Seems fragile" violates epistemic hygiene (Layer 1 must be factual observation, not impression).

- Security Engineer writes 3 findings in Domain 08 (Performance) because they noticed slow API responses during their security review.

  **Why this is wrong:** The Security Engineer may note that slow responses create timing-based attack vectors (a Domain 04 finding), but characterizing performance issues is the Performance Engineer's domain. Filing findings in another agent's domain creates overlap and undermines the multi-agent structure.
@@ -0,0 +1,76 @@

---
id: disagreement-protocol
name: Disagreement Protocol
scope: all_agents
priority: critical
---

## Purpose

Multi-agent systems are prone to false consensus. Without explicit protocol rules, disagreements get auto-resolved (silently discarded), averaged (split the difference), or hidden (buried in footnotes). These failure modes are catastrophic because disagreements are where risk hides — the high-severity, high-disagreement quadrant is precisely where leadership attention is most needed.

These rules ensure that every disagreement is a first-class object that must be explicitly raised, structurally recorded, categorized by root cause, and resolved by the Principal Engineer through a named reasoning model. The goal is not consensus — it is epistemic transparency.

## Rules

### 1. No auto-resolving disagreements

**Statement:** A disagreement cannot transition from `open` to any resolved status without the Principal Engineer's explicit response. No agent, workflow, or automated process may resolve a disagreement.

**Rationale:** Auto-resolution is the most dangerous anti-pattern because it is invisible. If an automated process decides two positions are "close enough" and marks the disagreement resolved, the audit record shows consensus where none existed. Risk is hidden, not reduced.

**Enforcement:** @schema:disagreement validation rejects any disagreement where `status` is not `open` and `principal_response` is empty or null. The workflow that transitions disagreement status must verify `principal_response` is substantive (not a placeholder like "acknowledged" or "noted").
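
The transition guard described above can be sketched as follows. The `principal_response` field and the example placeholders come from the rule; the function name and the exact placeholder list are assumptions:

```javascript
// Hypothetical transition guard: a disagreement may leave `open` only
// with a substantive `principal_response`. The placeholder list here is
// illustrative; the shipped validator may check more phrases.
const PLACEHOLDERS = ['acknowledged', 'noted'];

function canTransition(disagreement, newStatus) {
  if (newStatus === 'open') return true;
  const response = (disagreement.principal_response || '').trim().toLowerCase();
  return response.length > 0 && !PLACEHOLDERS.includes(response);
}
```

Because the guard treats a null or placeholder response identically, no workflow can sneak a disagreement into a resolved status without real Principal input.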

### 2. No averaging opinions

**Statement:** The Principal Engineer's resolution must not split the difference between positions. "Severity is between high and medium" is not a resolution — it is an abdication of judgment.

**Rationale:** Averaging creates a false middle ground that no agent actually holds. If one agent says "critical" and another says "low", the answer is not "medium". The answer is a reasoned judgment about which evidence and which threat model is more compelling, using a named resolution model.

**Enforcement:** Principal response text is checked for averaging patterns: "somewhere between", "compromise at", "split the difference", "partially agree with both". These phrases trigger a review flag. The resolution must name a resolution model (evidence_dominance, risk_asymmetry, reversibility, time_to_failure, blast_radius) and explain how it was applied.
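
The phrase list and the resolution-model names above come straight from the rule; a review-flag pass over a Principal response might look like this sketch (the function itself is an illustrative assumption, not the shipped validator):

```javascript
// Hypothetical averaging check: flag split-the-difference language and
// responses that name no resolution model. Both lists are taken from the
// rule text above.
const AVERAGING_PATTERNS = [
  /somewhere between/i,
  /compromise at/i,
  /split the difference/i,
  /partially agree with both/i,
];
const RESOLUTION_MODELS = [
  'evidence_dominance', 'risk_asymmetry', 'reversibility',
  'time_to_failure', 'blast_radius',
];

function reviewPrincipalResponse(text) {
  const flags = [];
  if (AVERAGING_PATTERNS.some(p => p.test(text))) flags.push('averaging-language');
  if (!RESOLUTION_MODELS.some(m => text.includes(m))) flags.push('no-named-resolution-model');
  return flags;
}
```

Note this is a heuristic: naming a model does not prove it was applied, so flagged responses go to review rather than being auto-rejected.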

### 3. No forcing consensus language

**Statement:** A resolution must not claim that agents now agree when their positions have not changed. Agents do not need to agree — the Principal decides, and dissenting positions are preserved.

**Rationale:** Forced consensus language ("after discussion, all agents agree...") falsifies the epistemic record. If the Security Engineer assessed critical severity and the resolution is medium, the Security Engineer's original position must remain in the record. The Principal's resolution overrides for decision-making purposes, but does not retroactively change what agents assessed.

**Enforcement:** Resolution records that claim consensus must show updated position entries from the agents involved. If no positions were updated, consensus language is a validation error. Dissenting positions are preserved permanently in @schema:disagreement regardless of final status.

### 4. No hiding disagreements in footnotes

**Statement:** Every disagreement referenced in a finding must have a corresponding @schema:disagreement record. Disagreements must appear in Report Section 4 (Cross-Validation Notes), not buried in finding footnotes or appendices.

**Rationale:** A finding that mentions "some agents disagree on severity" without a formal disagreement record is unresolvable. There is no record of who disagreed, why, or how it was resolved. The disagreement becomes gossip instead of structured analysis.

**Enforcement:** Cross-reference validation checks that every finding mentioning disagreement or conflicting assessment has a corresponding D-{NNN} disagreement record. Report validation checks that Section 4 contains all @schema:disagreement instances from the audit. Orphaned references (finding mentions disagreement but no record exists) are validation errors.
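
The orphaned-reference side of this check can be sketched as below. The `D-{NNN}` id format is from the rule; the finding shape (an `id` and a `text` field) is an illustrative assumption:

```javascript
// Hypothetical cross-reference scan: collect every D-NNN id mentioned in
// finding text and report those with no matching disagreement record.
function findOrphanedReferences(findings, disagreements) {
  const known = new Set(disagreements.map(d => d.id));
  const orphans = [];
  for (const f of findings) {
    for (const ref of f.text.match(/D-\d{3}/g) || []) {
      if (!known.has(ref)) orphans.push({ finding: f.id, ref });
    }
  }
  return orphans;
}
```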

### 5. No treating Devil's Advocate as optional

**Statement:** The Devil's Advocate review (Phase 4) must produce at least one disagreement record for every audit. If the Devil's Advocate finds nothing to challenge, the audit's epistemic diversity is suspect. Dismissing Devil's Advocate disagreements as `out_of_scope` without substantive rationale is a violation.

**Rationale:** The Devil's Advocate exists to stress-test the analysis. An audit where everyone agrees is an audit where something was missed or the Devil's Advocate was not sufficiently adversarial. The cost of false positives from the Devil's Advocate (challenges that turn out to be unfounded) is far lower than the cost of false negatives (risks that were never challenged).

**Enforcement:** Report generation checks for disagreement records where `agents_involved` includes `devils-advocate`. If zero such records exist, the audit is flagged as incomplete. Disagreements from `devils-advocate` resolved as `out_of_scope` must have `principal_rationale` explaining why the challenge is outside the audit's scope — a brief dismissal is insufficient.
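
Both halves of this check can be sketched together. The `agents_involved`, `status`, and `principal_rationale` field names come from the rule; the minimum-length heuristic for "substantive" is an assumption — a real validator would likely use human review rather than a character count:

```javascript
// Hypothetical completeness check: require at least one devils-advocate
// disagreement, and flag out_of_scope dismissals whose rationale is too
// short to be substantive (50 chars is an arbitrary illustrative floor).
function auditDevilsAdvocate(disagreements) {
  const da = disagreements.filter(d => d.agents_involved.includes('devils-advocate'));
  if (da.length === 0) return ['no-devils-advocate-disagreements'];
  return da
    .filter(d => d.status === 'out_of_scope' && (d.principal_rationale || '').length < 50)
    .map(d => `${d.id}: out_of_scope without substantive rationale`);
}
```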

## DO

- Principal responds to a disagreement: "Evidence dominance favors the security engineer's position. Three independent tools flagged this pattern, and the application engineer's mitigation (input validation) does not address the underlying architectural vulnerability. Downgrading severity from critical to high based on the existing mitigation, but recommending parameterized queries as the durable fix." (Named model, specific reasoning, preserved positions.)

- Devil's Advocate raises challenge: "Finding F-01-003 assumes circular dependencies will cause deployment failures, but the current deployment pipeline deploys as a monolith. The circular dependency is an architectural smell, not an operational risk until the team attempts to decompose into services." (Substantive challenge with specific reasoning.)

- Disagreement record preserves both positions even after resolution, with the Security Engineer's original `confidence: high` and the Application Engineer's original `confidence: medium` both visible in the final record.

## DON'T

- Disagreement resolved by workflow: "Both agents' findings were similar enough. Auto-resolved as mitigated."

  **Why this is wrong:** No agent or workflow may resolve disagreements. Only the Principal Engineer can transition status. "Similar enough" is averaging, not resolution.

- Principal response: "I've reviewed both positions and they're both valid, so the severity is medium-high."

  **Why this is wrong:** "Medium-high" is not a Radar severity value. This is averaging. The Principal must choose a severity from the enum and explain why using a named resolution model.

- Resolution states: "After careful consideration, all agents now agree this is a medium-severity issue."

  **Why this is wrong:** Unless the agents actually revised their positions (with updated position records), this is forced consensus. The original positions must be preserved even if the resolution disagrees with them.

- Devil's Advocate disagreement D-007 resolved as: "Out of scope — not relevant."

  **Why this is wrong:** "Not relevant" is not a substantive rationale. The Principal must explain specifically why the Devil's Advocate's challenge falls outside the audit scope. If it is relevant but the Principal disagrees, the correct status is `mitigated` or `accepted_risk`, not `out_of_scope`.