@wazir-dev/cli 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (629)
  1. package/AGENTS.md +111 -0
  2. package/CHANGELOG.md +14 -0
  3. package/CONTRIBUTING.md +101 -0
  4. package/LICENSE +21 -0
  5. package/README.md +314 -0
  6. package/assets/composition-engine.mmd +34 -0
  7. package/assets/demo-script.sh +17 -0
  8. package/assets/logo-dark.svg +14 -0
  9. package/assets/logo.svg +14 -0
  10. package/assets/pipeline.mmd +39 -0
  11. package/assets/record-demo.sh +51 -0
  12. package/docs/README.md +51 -0
  13. package/docs/adapters/context-mode.md +60 -0
  14. package/docs/concepts/architecture.md +87 -0
  15. package/docs/concepts/artifact-model.md +60 -0
  16. package/docs/concepts/composition-engine.md +36 -0
  17. package/docs/concepts/indexing-and-recall.md +160 -0
  18. package/docs/concepts/observability.md +41 -0
  19. package/docs/concepts/roles-and-workflows.md +59 -0
  20. package/docs/concepts/terminology-policy.md +27 -0
  21. package/docs/getting-started/01-installation.md +78 -0
  22. package/docs/getting-started/02-first-run.md +102 -0
  23. package/docs/getting-started/03-adding-to-project.md +15 -0
  24. package/docs/getting-started/04-host-setup.md +15 -0
  25. package/docs/guides/ci-integration.md +15 -0
  26. package/docs/guides/creating-skills.md +15 -0
  27. package/docs/guides/expertise-module-authoring.md +15 -0
  28. package/docs/guides/hook-development.md +15 -0
  29. package/docs/guides/memory-and-learnings.md +34 -0
  30. package/docs/guides/multi-host-export.md +15 -0
  31. package/docs/guides/troubleshooting.md +101 -0
  32. package/docs/guides/writing-custom-roles.md +15 -0
  33. package/docs/plans/2026-03-15-cli-pipeline-integration-design.md +592 -0
  34. package/docs/plans/2026-03-15-cli-pipeline-integration-plan.md +598 -0
  35. package/docs/plans/2026-03-15-docs-enforcement-plan.md +238 -0
  36. package/docs/readmes/INDEX.md +99 -0
  37. package/docs/readmes/features/expertise/README.md +171 -0
  38. package/docs/readmes/features/exports/README.md +222 -0
  39. package/docs/readmes/features/hooks/README.md +103 -0
  40. package/docs/readmes/features/hooks/loop-cap-guard.md +133 -0
  41. package/docs/readmes/features/hooks/post-tool-capture.md +121 -0
  42. package/docs/readmes/features/hooks/post-tool-lint.md +130 -0
  43. package/docs/readmes/features/hooks/pre-compact-summary.md +122 -0
  44. package/docs/readmes/features/hooks/pre-tool-capture-route.md +100 -0
  45. package/docs/readmes/features/hooks/protected-path-write-guard.md +128 -0
  46. package/docs/readmes/features/hooks/session-start.md +119 -0
  47. package/docs/readmes/features/hooks/stop-handoff-harvest.md +125 -0
  48. package/docs/readmes/features/roles/README.md +157 -0
  49. package/docs/readmes/features/roles/clarifier.md +152 -0
  50. package/docs/readmes/features/roles/content-author.md +190 -0
  51. package/docs/readmes/features/roles/designer.md +193 -0
  52. package/docs/readmes/features/roles/executor.md +184 -0
  53. package/docs/readmes/features/roles/learner.md +210 -0
  54. package/docs/readmes/features/roles/planner.md +182 -0
  55. package/docs/readmes/features/roles/researcher.md +164 -0
  56. package/docs/readmes/features/roles/reviewer.md +184 -0
  57. package/docs/readmes/features/roles/specifier.md +162 -0
  58. package/docs/readmes/features/roles/verifier.md +215 -0
  59. package/docs/readmes/features/schemas/README.md +178 -0
  60. package/docs/readmes/features/skills/README.md +63 -0
  61. package/docs/readmes/features/skills/brainstorming.md +96 -0
  62. package/docs/readmes/features/skills/debugging.md +148 -0
  63. package/docs/readmes/features/skills/design.md +120 -0
  64. package/docs/readmes/features/skills/prepare-next.md +109 -0
  65. package/docs/readmes/features/skills/run-audit.md +159 -0
  66. package/docs/readmes/features/skills/scan-project.md +109 -0
  67. package/docs/readmes/features/skills/self-audit.md +176 -0
  68. package/docs/readmes/features/skills/tdd.md +137 -0
  69. package/docs/readmes/features/skills/using-skills.md +92 -0
  70. package/docs/readmes/features/skills/verification.md +120 -0
  71. package/docs/readmes/features/skills/writing-plans.md +104 -0
  72. package/docs/readmes/features/tooling/README.md +320 -0
  73. package/docs/readmes/features/workflows/README.md +186 -0
  74. package/docs/readmes/features/workflows/author.md +181 -0
  75. package/docs/readmes/features/workflows/clarify.md +154 -0
  76. package/docs/readmes/features/workflows/design-review.md +171 -0
  77. package/docs/readmes/features/workflows/design.md +169 -0
  78. package/docs/readmes/features/workflows/discover.md +162 -0
  79. package/docs/readmes/features/workflows/execute.md +173 -0
  80. package/docs/readmes/features/workflows/learn.md +167 -0
  81. package/docs/readmes/features/workflows/plan-review.md +165 -0
  82. package/docs/readmes/features/workflows/plan.md +170 -0
  83. package/docs/readmes/features/workflows/prepare-next.md +167 -0
  84. package/docs/readmes/features/workflows/review.md +169 -0
  85. package/docs/readmes/features/workflows/run-audit.md +191 -0
  86. package/docs/readmes/features/workflows/spec-challenge.md +159 -0
  87. package/docs/readmes/features/workflows/specify.md +160 -0
  88. package/docs/readmes/features/workflows/verify.md +177 -0
  89. package/docs/readmes/packages/README.md +50 -0
  90. package/docs/readmes/packages/ajv.md +117 -0
  91. package/docs/readmes/packages/context-mode.md +118 -0
  92. package/docs/readmes/packages/gray-matter.md +116 -0
  93. package/docs/readmes/packages/node-test.md +137 -0
  94. package/docs/readmes/packages/yaml.md +112 -0
  95. package/docs/reference/configuration-reference.md +159 -0
  96. package/docs/reference/expertise-index.md +52 -0
  97. package/docs/reference/git-flow.md +43 -0
  98. package/docs/reference/hooks.md +87 -0
  99. package/docs/reference/host-exports.md +50 -0
  100. package/docs/reference/launch-checklist.md +172 -0
  101. package/docs/reference/marketplace-listings.md +76 -0
  102. package/docs/reference/release-process.md +34 -0
  103. package/docs/reference/roles-reference.md +77 -0
  104. package/docs/reference/skills.md +33 -0
  105. package/docs/reference/templates.md +29 -0
  106. package/docs/reference/tooling-cli.md +94 -0
  107. package/docs/truth-claims.yaml +222 -0
  108. package/expertise/PROGRESS.md +63 -0
  109. package/expertise/README.md +18 -0
  110. package/expertise/antipatterns/PROGRESS.md +56 -0
  111. package/expertise/antipatterns/backend/api-design-antipatterns.md +1271 -0
  112. package/expertise/antipatterns/backend/auth-antipatterns.md +1195 -0
  113. package/expertise/antipatterns/backend/caching-antipatterns.md +622 -0
  114. package/expertise/antipatterns/backend/database-antipatterns.md +1038 -0
  115. package/expertise/antipatterns/backend/index.md +24 -0
  116. package/expertise/antipatterns/backend/microservices-antipatterns.md +850 -0
  117. package/expertise/antipatterns/code/architecture-antipatterns.md +919 -0
  118. package/expertise/antipatterns/code/async-antipatterns.md +622 -0
  119. package/expertise/antipatterns/code/code-smells.md +1186 -0
  120. package/expertise/antipatterns/code/dependency-antipatterns.md +1209 -0
  121. package/expertise/antipatterns/code/error-handling-antipatterns.md +1360 -0
  122. package/expertise/antipatterns/code/index.md +27 -0
  123. package/expertise/antipatterns/code/naming-and-abstraction.md +1118 -0
  124. package/expertise/antipatterns/code/state-management-antipatterns.md +1076 -0
  125. package/expertise/antipatterns/code/testing-antipatterns.md +1053 -0
  126. package/expertise/antipatterns/design/accessibility-antipatterns.md +1136 -0
  127. package/expertise/antipatterns/design/dark-patterns.md +1121 -0
  128. package/expertise/antipatterns/design/index.md +22 -0
  129. package/expertise/antipatterns/design/ui-antipatterns.md +1202 -0
  130. package/expertise/antipatterns/design/ux-antipatterns.md +680 -0
  131. package/expertise/antipatterns/frontend/css-layout-antipatterns.md +691 -0
  132. package/expertise/antipatterns/frontend/flutter-antipatterns.md +1827 -0
  133. package/expertise/antipatterns/frontend/index.md +23 -0
  134. package/expertise/antipatterns/frontend/mobile-antipatterns.md +573 -0
  135. package/expertise/antipatterns/frontend/react-antipatterns.md +1128 -0
  136. package/expertise/antipatterns/frontend/spa-antipatterns.md +1235 -0
  137. package/expertise/antipatterns/index.md +31 -0
  138. package/expertise/antipatterns/performance/index.md +20 -0
  139. package/expertise/antipatterns/performance/performance-antipatterns.md +1013 -0
  140. package/expertise/antipatterns/performance/premature-optimization.md +623 -0
  141. package/expertise/antipatterns/performance/scaling-antipatterns.md +785 -0
  142. package/expertise/antipatterns/process/ai-coding-antipatterns.md +853 -0
  143. package/expertise/antipatterns/process/code-review-antipatterns.md +656 -0
  144. package/expertise/antipatterns/process/deployment-antipatterns.md +920 -0
  145. package/expertise/antipatterns/process/index.md +23 -0
  146. package/expertise/antipatterns/process/technical-debt-antipatterns.md +647 -0
  147. package/expertise/antipatterns/security/index.md +20 -0
  148. package/expertise/antipatterns/security/secrets-antipatterns.md +849 -0
  149. package/expertise/antipatterns/security/security-theater.md +843 -0
  150. package/expertise/antipatterns/security/vulnerability-patterns.md +801 -0
  151. package/expertise/architecture/PROGRESS.md +70 -0
  152. package/expertise/architecture/data/caching-architecture.md +671 -0
  153. package/expertise/architecture/data/data-consistency.md +574 -0
  154. package/expertise/architecture/data/data-modeling.md +536 -0
  155. package/expertise/architecture/data/event-streams-and-queues.md +634 -0
  156. package/expertise/architecture/data/index.md +25 -0
  157. package/expertise/architecture/data/search-architecture.md +663 -0
  158. package/expertise/architecture/data/sql-vs-nosql.md +708 -0
  159. package/expertise/architecture/decisions/architecture-decision-records.md +640 -0
  160. package/expertise/architecture/decisions/build-vs-buy.md +616 -0
  161. package/expertise/architecture/decisions/index.md +23 -0
  162. package/expertise/architecture/decisions/monolith-to-microservices.md +790 -0
  163. package/expertise/architecture/decisions/technology-selection.md +616 -0
  164. package/expertise/architecture/distributed/cap-theorem-and-tradeoffs.md +800 -0
  165. package/expertise/architecture/distributed/circuit-breaker-bulkhead.md +741 -0
  166. package/expertise/architecture/distributed/consensus-and-coordination.md +796 -0
  167. package/expertise/architecture/distributed/distributed-systems-fundamentals.md +564 -0
  168. package/expertise/architecture/distributed/idempotency-and-retry.md +796 -0
  169. package/expertise/architecture/distributed/index.md +25 -0
  170. package/expertise/architecture/distributed/saga-pattern.md +797 -0
  171. package/expertise/architecture/foundations/architectural-thinking.md +460 -0
  172. package/expertise/architecture/foundations/coupling-and-cohesion.md +770 -0
  173. package/expertise/architecture/foundations/design-principles-solid.md +649 -0
  174. package/expertise/architecture/foundations/domain-driven-design.md +719 -0
  175. package/expertise/architecture/foundations/index.md +25 -0
  176. package/expertise/architecture/foundations/separation-of-concerns.md +472 -0
  177. package/expertise/architecture/foundations/twelve-factor-app.md +797 -0
  178. package/expertise/architecture/index.md +34 -0
  179. package/expertise/architecture/integration/api-design-graphql.md +638 -0
  180. package/expertise/architecture/integration/api-design-grpc.md +804 -0
  181. package/expertise/architecture/integration/api-design-rest.md +892 -0
  182. package/expertise/architecture/integration/index.md +25 -0
  183. package/expertise/architecture/integration/third-party-integration.md +795 -0
  184. package/expertise/architecture/integration/webhooks-and-callbacks.md +1152 -0
  185. package/expertise/architecture/integration/websockets-realtime.md +791 -0
  186. package/expertise/architecture/mobile-architecture/index.md +22 -0
  187. package/expertise/architecture/mobile-architecture/mobile-app-architecture.md +780 -0
  188. package/expertise/architecture/mobile-architecture/mobile-backend-for-frontend.md +670 -0
  189. package/expertise/architecture/mobile-architecture/offline-first.md +719 -0
  190. package/expertise/architecture/mobile-architecture/push-and-sync.md +782 -0
  191. package/expertise/architecture/patterns/cqrs-event-sourcing.md +717 -0
  192. package/expertise/architecture/patterns/event-driven.md +797 -0
  193. package/expertise/architecture/patterns/hexagonal-clean-architecture.md +870 -0
  194. package/expertise/architecture/patterns/index.md +27 -0
  195. package/expertise/architecture/patterns/layered-architecture.md +736 -0
  196. package/expertise/architecture/patterns/microservices.md +753 -0
  197. package/expertise/architecture/patterns/modular-monolith.md +692 -0
  198. package/expertise/architecture/patterns/monolith.md +626 -0
  199. package/expertise/architecture/patterns/plugin-architecture.md +735 -0
  200. package/expertise/architecture/patterns/serverless.md +780 -0
  201. package/expertise/architecture/scaling/database-scaling.md +615 -0
  202. package/expertise/architecture/scaling/feature-flags-and-rollouts.md +757 -0
  203. package/expertise/architecture/scaling/horizontal-vs-vertical.md +606 -0
  204. package/expertise/architecture/scaling/index.md +24 -0
  205. package/expertise/architecture/scaling/multi-tenancy.md +800 -0
  206. package/expertise/architecture/scaling/stateless-design.md +787 -0
  207. package/expertise/backend/embedded-firmware.md +625 -0
  208. package/expertise/backend/go.md +853 -0
  209. package/expertise/backend/index.md +24 -0
  210. package/expertise/backend/java-spring.md +448 -0
  211. package/expertise/backend/node-typescript.md +625 -0
  212. package/expertise/backend/python-fastapi.md +724 -0
  213. package/expertise/backend/rust.md +458 -0
  214. package/expertise/backend/solidity.md +711 -0
  215. package/expertise/composition-map.yaml +443 -0
  216. package/expertise/content/foundations/content-modeling.md +395 -0
  217. package/expertise/content/foundations/editorial-standards.md +449 -0
  218. package/expertise/content/foundations/index.md +24 -0
  219. package/expertise/content/foundations/microcopy.md +455 -0
  220. package/expertise/content/foundations/terminology-governance.md +509 -0
  221. package/expertise/content/index.md +34 -0
  222. package/expertise/content/patterns/accessibility-copy.md +518 -0
  223. package/expertise/content/patterns/index.md +24 -0
  224. package/expertise/content/patterns/notification-content.md +433 -0
  225. package/expertise/content/patterns/sample-content.md +486 -0
  226. package/expertise/content/patterns/state-copy.md +439 -0
  227. package/expertise/design/PROGRESS.md +58 -0
  228. package/expertise/design/disciplines/dark-mode-theming.md +577 -0
  229. package/expertise/design/disciplines/design-systems.md +595 -0
  230. package/expertise/design/disciplines/index.md +25 -0
  231. package/expertise/design/disciplines/information-architecture.md +800 -0
  232. package/expertise/design/disciplines/interaction-design.md +788 -0
  233. package/expertise/design/disciplines/responsive-design.md +552 -0
  234. package/expertise/design/disciplines/usability-testing.md +516 -0
  235. package/expertise/design/disciplines/user-research.md +792 -0
  236. package/expertise/design/foundations/accessibility-design.md +796 -0
  237. package/expertise/design/foundations/color-theory.md +797 -0
  238. package/expertise/design/foundations/iconography.md +795 -0
  239. package/expertise/design/foundations/index.md +26 -0
  240. package/expertise/design/foundations/motion-and-animation.md +653 -0
  241. package/expertise/design/foundations/rtl-design.md +585 -0
  242. package/expertise/design/foundations/spacing-and-layout.md +607 -0
  243. package/expertise/design/foundations/typography.md +800 -0
  244. package/expertise/design/foundations/visual-hierarchy.md +761 -0
  245. package/expertise/design/index.md +32 -0
  246. package/expertise/design/patterns/authentication-flows.md +474 -0
  247. package/expertise/design/patterns/content-consumption.md +789 -0
  248. package/expertise/design/patterns/data-display.md +618 -0
  249. package/expertise/design/patterns/e-commerce.md +1494 -0
  250. package/expertise/design/patterns/feedback-and-states.md +642 -0
  251. package/expertise/design/patterns/forms-and-input.md +819 -0
  252. package/expertise/design/patterns/gamification.md +801 -0
  253. package/expertise/design/patterns/index.md +31 -0
  254. package/expertise/design/patterns/microinteractions.md +449 -0
  255. package/expertise/design/patterns/navigation.md +800 -0
  256. package/expertise/design/patterns/notifications.md +705 -0
  257. package/expertise/design/patterns/onboarding.md +700 -0
  258. package/expertise/design/patterns/search-and-filter.md +601 -0
  259. package/expertise/design/patterns/settings-and-preferences.md +768 -0
  260. package/expertise/design/patterns/social-and-community.md +748 -0
  261. package/expertise/design/platforms/desktop-native.md +612 -0
  262. package/expertise/design/platforms/index.md +25 -0
  263. package/expertise/design/platforms/mobile-android.md +825 -0
  264. package/expertise/design/platforms/mobile-cross-platform.md +983 -0
  265. package/expertise/design/platforms/mobile-ios.md +699 -0
  266. package/expertise/design/platforms/tablet.md +794 -0
  267. package/expertise/design/platforms/web-dashboard.md +790 -0
  268. package/expertise/design/platforms/web-responsive.md +550 -0
  269. package/expertise/design/psychology/behavioral-nudges.md +449 -0
  270. package/expertise/design/psychology/cognitive-load.md +1191 -0
  271. package/expertise/design/psychology/error-psychology.md +778 -0
  272. package/expertise/design/psychology/index.md +22 -0
  273. package/expertise/design/psychology/persuasive-design.md +736 -0
  274. package/expertise/design/psychology/user-mental-models.md +623 -0
  275. package/expertise/design/tooling/open-pencil.md +266 -0
  276. package/expertise/frontend/angular.md +1073 -0
  277. package/expertise/frontend/desktop-electron.md +546 -0
  278. package/expertise/frontend/flutter.md +782 -0
  279. package/expertise/frontend/index.md +27 -0
  280. package/expertise/frontend/native-android.md +409 -0
  281. package/expertise/frontend/native-ios.md +490 -0
  282. package/expertise/frontend/react-native.md +1160 -0
  283. package/expertise/frontend/react.md +808 -0
  284. package/expertise/frontend/vue.md +1089 -0
  285. package/expertise/humanize/domain-rules-code.md +79 -0
  286. package/expertise/humanize/domain-rules-content.md +67 -0
  287. package/expertise/humanize/domain-rules-technical-docs.md +56 -0
  288. package/expertise/humanize/index.md +35 -0
  289. package/expertise/humanize/self-audit-checklist.md +87 -0
  290. package/expertise/humanize/sentence-patterns.md +218 -0
  291. package/expertise/humanize/vocabulary-blacklist.md +105 -0
  292. package/expertise/i18n/PROGRESS.md +65 -0
  293. package/expertise/i18n/advanced/accessibility-and-i18n.md +28 -0
  294. package/expertise/i18n/advanced/bidirectional-text-algorithm.md +38 -0
  295. package/expertise/i18n/advanced/complex-scripts.md +30 -0
  296. package/expertise/i18n/advanced/performance-and-i18n.md +27 -0
  297. package/expertise/i18n/advanced/testing-i18n.md +28 -0
  298. package/expertise/i18n/content/content-adaptation.md +23 -0
  299. package/expertise/i18n/content/locale-specific-formatting.md +23 -0
  300. package/expertise/i18n/content/machine-translation-integration.md +28 -0
  301. package/expertise/i18n/content/translation-management.md +29 -0
  302. package/expertise/i18n/foundations/date-time-calendars.md +67 -0
  303. package/expertise/i18n/foundations/i18n-architecture.md +272 -0
  304. package/expertise/i18n/foundations/locale-and-language-tags.md +79 -0
  305. package/expertise/i18n/foundations/numbers-currency-units.md +61 -0
  306. package/expertise/i18n/foundations/pluralization-and-gender.md +109 -0
  307. package/expertise/i18n/foundations/string-externalization.md +236 -0
  308. package/expertise/i18n/foundations/text-direction-bidi.md +241 -0
  309. package/expertise/i18n/foundations/unicode-and-encoding.md +86 -0
  310. package/expertise/i18n/index.md +38 -0
  311. package/expertise/i18n/platform/backend-i18n.md +31 -0
  312. package/expertise/i18n/platform/flutter-i18n.md +148 -0
  313. package/expertise/i18n/platform/native-android-i18n.md +36 -0
  314. package/expertise/i18n/platform/native-ios-i18n.md +36 -0
  315. package/expertise/i18n/platform/react-i18n.md +103 -0
  316. package/expertise/i18n/platform/web-css-i18n.md +81 -0
  317. package/expertise/i18n/rtl/arabic-specific.md +175 -0
  318. package/expertise/i18n/rtl/hebrew-specific.md +149 -0
  319. package/expertise/i18n/rtl/rtl-animations-and-transitions.md +111 -0
  320. package/expertise/i18n/rtl/rtl-forms-and-input.md +161 -0
  321. package/expertise/i18n/rtl/rtl-fundamentals.md +211 -0
  322. package/expertise/i18n/rtl/rtl-icons-and-images.md +181 -0
  323. package/expertise/i18n/rtl/rtl-layout-mirroring.md +252 -0
  324. package/expertise/i18n/rtl/rtl-navigation-and-gestures.md +107 -0
  325. package/expertise/i18n/rtl/rtl-testing-and-qa.md +147 -0
  326. package/expertise/i18n/rtl/rtl-typography.md +160 -0
  327. package/expertise/index.md +113 -0
  328. package/expertise/index.yaml +216 -0
  329. package/expertise/infrastructure/cloud-aws.md +597 -0
  330. package/expertise/infrastructure/cloud-gcp.md +599 -0
  331. package/expertise/infrastructure/cybersecurity.md +816 -0
  332. package/expertise/infrastructure/database-mongodb.md +447 -0
  333. package/expertise/infrastructure/database-postgres.md +400 -0
  334. package/expertise/infrastructure/devops-cicd.md +787 -0
  335. package/expertise/infrastructure/index.md +27 -0
  336. package/expertise/performance/PROGRESS.md +50 -0
  337. package/expertise/performance/backend/api-latency.md +1204 -0
  338. package/expertise/performance/backend/background-jobs.md +506 -0
  339. package/expertise/performance/backend/connection-pooling.md +1209 -0
  340. package/expertise/performance/backend/database-query-optimization.md +515 -0
  341. package/expertise/performance/backend/index.md +23 -0
  342. package/expertise/performance/backend/rate-limiting-and-throttling.md +971 -0
  343. package/expertise/performance/foundations/algorithmic-complexity.md +954 -0
  344. package/expertise/performance/foundations/caching-strategies.md +489 -0
  345. package/expertise/performance/foundations/concurrency-and-parallelism.md +847 -0
  346. package/expertise/performance/foundations/index.md +24 -0
  347. package/expertise/performance/foundations/measuring-and-profiling.md +440 -0
  348. package/expertise/performance/foundations/memory-management.md +964 -0
  349. package/expertise/performance/foundations/performance-budgets.md +1314 -0
  350. package/expertise/performance/index.md +31 -0
  351. package/expertise/performance/infrastructure/auto-scaling.md +1059 -0
  352. package/expertise/performance/infrastructure/cdn-and-edge.md +1081 -0
  353. package/expertise/performance/infrastructure/index.md +22 -0
  354. package/expertise/performance/infrastructure/load-balancing.md +1081 -0
  355. package/expertise/performance/infrastructure/observability.md +1079 -0
  356. package/expertise/performance/mobile/index.md +23 -0
  357. package/expertise/performance/mobile/mobile-animations.md +544 -0
  358. package/expertise/performance/mobile/mobile-memory-battery.md +416 -0
  359. package/expertise/performance/mobile/mobile-network.md +452 -0
  360. package/expertise/performance/mobile/mobile-rendering.md +599 -0
  361. package/expertise/performance/mobile/mobile-startup-time.md +505 -0
  362. package/expertise/performance/platform-specific/flutter-performance.md +647 -0
  363. package/expertise/performance/platform-specific/index.md +22 -0
  364. package/expertise/performance/platform-specific/node-performance.md +1307 -0
  365. package/expertise/performance/platform-specific/postgres-performance.md +1366 -0
  366. package/expertise/performance/platform-specific/react-performance.md +1403 -0
  367. package/expertise/performance/web/bundle-optimization.md +1239 -0
  368. package/expertise/performance/web/image-and-media.md +636 -0
  369. package/expertise/performance/web/index.md +24 -0
  370. package/expertise/performance/web/network-optimization.md +1133 -0
  371. package/expertise/performance/web/rendering-performance.md +1098 -0
  372. package/expertise/performance/web/ssr-and-hydration.md +918 -0
  373. package/expertise/performance/web/web-vitals.md +1374 -0
  374. package/expertise/quality/accessibility.md +985 -0
  375. package/expertise/quality/evidence-based-verification.md +499 -0
  376. package/expertise/quality/index.md +24 -0
  377. package/expertise/quality/ml-model-audit.md +614 -0
  378. package/expertise/quality/performance.md +600 -0
  379. package/expertise/quality/testing-api.md +891 -0
  380. package/expertise/quality/testing-mobile.md +496 -0
  381. package/expertise/quality/testing-web.md +849 -0
  382. package/expertise/security/PROGRESS.md +54 -0
  383. package/expertise/security/agentic-identity.md +540 -0
  384. package/expertise/security/compliance-frameworks.md +601 -0
  385. package/expertise/security/data/data-encryption.md +364 -0
  386. package/expertise/security/data/data-privacy-gdpr.md +692 -0
  387. package/expertise/security/data/database-security.md +1171 -0
  388. package/expertise/security/data/index.md +22 -0
  389. package/expertise/security/data/pii-handling.md +531 -0
  390. package/expertise/security/foundations/authentication.md +1041 -0
  391. package/expertise/security/foundations/authorization.md +603 -0
  392. package/expertise/security/foundations/cryptography.md +1001 -0
  393. package/expertise/security/foundations/index.md +25 -0
  394. package/expertise/security/foundations/owasp-top-10.md +1354 -0
  395. package/expertise/security/foundations/secrets-management.md +1217 -0
  396. package/expertise/security/foundations/secure-sdlc.md +700 -0
  397. package/expertise/security/foundations/supply-chain-security.md +698 -0
  398. package/expertise/security/index.md +31 -0
  399. package/expertise/security/infrastructure/cloud-security-aws.md +1296 -0
  400. package/expertise/security/infrastructure/cloud-security-gcp.md +1376 -0
  401. package/expertise/security/infrastructure/container-security.md +721 -0
  402. package/expertise/security/infrastructure/incident-response.md +1295 -0
  403. package/expertise/security/infrastructure/index.md +24 -0
  404. package/expertise/security/infrastructure/logging-and-monitoring.md +1618 -0
  405. package/expertise/security/infrastructure/network-security.md +1337 -0
  406. package/expertise/security/mobile/index.md +23 -0
  407. package/expertise/security/mobile/mobile-android-security.md +1218 -0
  408. package/expertise/security/mobile/mobile-binary-protection.md +1229 -0
  409. package/expertise/security/mobile/mobile-data-storage.md +1265 -0
  410. package/expertise/security/mobile/mobile-ios-security.md +1401 -0
  411. package/expertise/security/mobile/mobile-network-security.md +1520 -0
  412. package/expertise/security/smart-contract-security.md +594 -0
  413. package/expertise/security/testing/index.md +22 -0
  414. package/expertise/security/testing/penetration-testing.md +1258 -0
  415. package/expertise/security/testing/security-code-review.md +1765 -0
  416. package/expertise/security/testing/threat-modeling.md +1074 -0
  417. package/expertise/security/testing/vulnerability-scanning.md +1062 -0
  418. package/expertise/security/web/api-security.md +586 -0
  419. package/expertise/security/web/cors-and-headers.md +433 -0
  420. package/expertise/security/web/csrf.md +562 -0
  421. package/expertise/security/web/file-upload.md +1477 -0
  422. package/expertise/security/web/index.md +25 -0
  423. package/expertise/security/web/injection.md +1375 -0
  424. package/expertise/security/web/session-management.md +1101 -0
  425. package/expertise/security/web/xss.md +1158 -0
  426. package/exports/README.md +17 -0
  427. package/exports/hosts/claude/.claude/agents/clarifier.md +42 -0
  428. package/exports/hosts/claude/.claude/agents/content-author.md +63 -0
  429. package/exports/hosts/claude/.claude/agents/designer.md +55 -0
  430. package/exports/hosts/claude/.claude/agents/executor.md +55 -0
  431. package/exports/hosts/claude/.claude/agents/learner.md +51 -0
  432. package/exports/hosts/claude/.claude/agents/planner.md +53 -0
  433. package/exports/hosts/claude/.claude/agents/researcher.md +43 -0
  434. package/exports/hosts/claude/.claude/agents/reviewer.md +54 -0
  435. package/exports/hosts/claude/.claude/agents/specifier.md +47 -0
  436. package/exports/hosts/claude/.claude/agents/verifier.md +71 -0
  437. package/exports/hosts/claude/.claude/commands/author.md +42 -0
  438. package/exports/hosts/claude/.claude/commands/clarify.md +38 -0
  439. package/exports/hosts/claude/.claude/commands/design-review.md +46 -0
  440. package/exports/hosts/claude/.claude/commands/design.md +44 -0
  441. package/exports/hosts/claude/.claude/commands/discover.md +37 -0
  442. package/exports/hosts/claude/.claude/commands/execute.md +48 -0
  443. package/exports/hosts/claude/.claude/commands/learn.md +38 -0
  444. package/exports/hosts/claude/.claude/commands/plan-review.md +42 -0
  445. package/exports/hosts/claude/.claude/commands/plan.md +39 -0
  446. package/exports/hosts/claude/.claude/commands/prepare-next.md +37 -0
  447. package/exports/hosts/claude/.claude/commands/review.md +40 -0
  448. package/exports/hosts/claude/.claude/commands/run-audit.md +41 -0
  449. package/exports/hosts/claude/.claude/commands/spec-challenge.md +41 -0
  450. package/exports/hosts/claude/.claude/commands/specify.md +38 -0
  451. package/exports/hosts/claude/.claude/commands/verify.md +37 -0
  452. package/exports/hosts/claude/.claude/settings.json +34 -0
  453. package/exports/hosts/claude/CLAUDE.md +19 -0
  454. package/exports/hosts/claude/export.manifest.json +38 -0
  455. package/exports/hosts/claude/host-package.json +67 -0
  456. package/exports/hosts/codex/AGENTS.md +19 -0
  457. package/exports/hosts/codex/export.manifest.json +38 -0
  458. package/exports/hosts/codex/host-package.json +41 -0
  459. package/exports/hosts/cursor/.cursor/hooks.json +16 -0
  460. package/exports/hosts/cursor/.cursor/rules/wazir-core.mdc +19 -0
  461. package/exports/hosts/cursor/export.manifest.json +38 -0
  462. package/exports/hosts/cursor/host-package.json +42 -0
  463. package/exports/hosts/gemini/GEMINI.md +19 -0
  464. package/exports/hosts/gemini/export.manifest.json +38 -0
  465. package/exports/hosts/gemini/host-package.json +41 -0
  466. package/hooks/README.md +18 -0
  467. package/hooks/definitions/loop_cap_guard.yaml +21 -0
  468. package/hooks/definitions/post_tool_capture.yaml +24 -0
  469. package/hooks/definitions/pre_compact_summary.yaml +19 -0
  470. package/hooks/definitions/pre_tool_capture_route.yaml +19 -0
  471. package/hooks/definitions/protected_path_write_guard.yaml +19 -0
  472. package/hooks/definitions/session_start.yaml +19 -0
  473. package/hooks/definitions/stop_handoff_harvest.yaml +20 -0
  474. package/hooks/loop-cap-guard +17 -0
  475. package/hooks/post-tool-lint +36 -0
  476. package/hooks/protected-path-write-guard +17 -0
  477. package/hooks/session-start +41 -0
  478. package/llms-full.txt +2355 -0
  479. package/llms.txt +43 -0
  480. package/package.json +79 -0
  481. package/roles/README.md +20 -0
  482. package/roles/clarifier.md +42 -0
  483. package/roles/content-author.md +63 -0
  484. package/roles/designer.md +55 -0
  485. package/roles/executor.md +55 -0
  486. package/roles/learner.md +51 -0
  487. package/roles/planner.md +53 -0
  488. package/roles/researcher.md +43 -0
  489. package/roles/reviewer.md +54 -0
  490. package/roles/specifier.md +47 -0
  491. package/roles/verifier.md +71 -0
  492. package/schemas/README.md +24 -0
  493. package/schemas/accepted-learning.schema.json +20 -0
  494. package/schemas/author-artifact.schema.json +156 -0
  495. package/schemas/clarification.schema.json +19 -0
  496. package/schemas/design-artifact.schema.json +80 -0
  497. package/schemas/docs-claim.schema.json +18 -0
  498. package/schemas/export-manifest.schema.json +20 -0
  499. package/schemas/hook.schema.json +67 -0
  500. package/schemas/host-export-package.schema.json +18 -0
  501. package/schemas/implementation-plan.schema.json +19 -0
  502. package/schemas/proposed-learning.schema.json +19 -0
  503. package/schemas/research.schema.json +18 -0
  504. package/schemas/review.schema.json +29 -0
  505. package/schemas/run-manifest.schema.json +18 -0
  506. package/schemas/spec-challenge.schema.json +18 -0
  507. package/schemas/spec.schema.json +20 -0
  508. package/schemas/usage.schema.json +102 -0
  509. package/schemas/verification-proof.schema.json +29 -0
  510. package/schemas/wazir-manifest.schema.json +173 -0
  511. package/skills/README.md +40 -0
  512. package/skills/brainstorming/SKILL.md +77 -0
  513. package/skills/debugging/SKILL.md +50 -0
  514. package/skills/design/SKILL.md +61 -0
  515. package/skills/dispatching-parallel-agents/SKILL.md +128 -0
  516. package/skills/executing-plans/SKILL.md +70 -0
  517. package/skills/finishing-a-development-branch/SKILL.md +169 -0
  518. package/skills/humanize/SKILL.md +123 -0
  519. package/skills/init-pipeline/SKILL.md +124 -0
  520. package/skills/prepare-next/SKILL.md +20 -0
  521. package/skills/receiving-code-review/SKILL.md +123 -0
  522. package/skills/requesting-code-review/SKILL.md +105 -0
  523. package/skills/requesting-code-review/code-reviewer.md +108 -0
  524. package/skills/run-audit/SKILL.md +197 -0
  525. package/skills/scan-project/SKILL.md +41 -0
  526. package/skills/self-audit/SKILL.md +153 -0
  527. package/skills/subagent-driven-development/SKILL.md +154 -0
  528. package/skills/subagent-driven-development/code-quality-reviewer-prompt.md +26 -0
  529. package/skills/subagent-driven-development/implementer-prompt.md +102 -0
  530. package/skills/subagent-driven-development/spec-reviewer-prompt.md +61 -0
  531. package/skills/tdd/SKILL.md +23 -0
  532. package/skills/using-git-worktrees/SKILL.md +163 -0
  533. package/skills/using-skills/SKILL.md +95 -0
  534. package/skills/verification/SKILL.md +22 -0
  535. package/skills/wazir/SKILL.md +463 -0
  536. package/skills/writing-plans/SKILL.md +30 -0
  537. package/skills/writing-skills/SKILL.md +157 -0
  538. package/skills/writing-skills/anthropic-best-practices.md +122 -0
  539. package/skills/writing-skills/persuasion-principles.md +50 -0
  540. package/templates/README.md +20 -0
  541. package/templates/artifacts/README.md +10 -0
  542. package/templates/artifacts/accepted-learning.md +19 -0
  543. package/templates/artifacts/accepted-learning.template.json +12 -0
  544. package/templates/artifacts/author.md +74 -0
  545. package/templates/artifacts/author.template.json +19 -0
  546. package/templates/artifacts/clarification.md +21 -0
  547. package/templates/artifacts/clarification.template.json +12 -0
  548. package/templates/artifacts/execute-notes.md +19 -0
  549. package/templates/artifacts/implementation-plan.md +21 -0
  550. package/templates/artifacts/implementation-plan.template.json +11 -0
  551. package/templates/artifacts/learning-proposal.md +19 -0
  552. package/templates/artifacts/next-run-handoff.md +21 -0
  553. package/templates/artifacts/plan-review.md +19 -0
  554. package/templates/artifacts/proposed-learning.template.json +12 -0
  555. package/templates/artifacts/research.md +21 -0
  556. package/templates/artifacts/research.template.json +12 -0
  557. package/templates/artifacts/review-findings.md +19 -0
  558. package/templates/artifacts/review.template.json +11 -0
  559. package/templates/artifacts/run-manifest.template.json +8 -0
  560. package/templates/artifacts/spec-challenge.md +19 -0
  561. package/templates/artifacts/spec-challenge.template.json +11 -0
  562. package/templates/artifacts/spec.md +21 -0
  563. package/templates/artifacts/spec.template.json +12 -0
  564. package/templates/artifacts/verification-proof.md +19 -0
  565. package/templates/artifacts/verification-proof.template.json +11 -0
  566. package/templates/examples/accepted-learning.example.json +14 -0
  567. package/templates/examples/author.example.json +152 -0
  568. package/templates/examples/clarification.example.json +15 -0
  569. package/templates/examples/docs-claim.example.json +8 -0
  570. package/templates/examples/export-manifest.example.json +7 -0
  571. package/templates/examples/host-export-package.example.json +11 -0
  572. package/templates/examples/implementation-plan.example.json +17 -0
  573. package/templates/examples/proposed-learning.example.json +13 -0
  574. package/templates/examples/research.example.json +15 -0
  575. package/templates/examples/research.example.md +6 -0
  576. package/templates/examples/review.example.json +17 -0
  577. package/templates/examples/run-manifest.example.json +9 -0
  578. package/templates/examples/spec-challenge.example.json +14 -0
  579. package/templates/examples/spec.example.json +21 -0
  580. package/templates/examples/verification-proof.example.json +21 -0
  581. package/templates/examples/wazir-manifest.example.yaml +65 -0
  582. package/templates/task-definition-schema.md +99 -0
  583. package/tooling/README.md +20 -0
  584. package/tooling/src/adapters/context-mode.js +50 -0
  585. package/tooling/src/capture/command.js +376 -0
  586. package/tooling/src/capture/store.js +99 -0
  587. package/tooling/src/capture/usage.js +270 -0
  588. package/tooling/src/checks/branches.js +50 -0
  589. package/tooling/src/checks/brand-truth.js +110 -0
  590. package/tooling/src/checks/changelog.js +231 -0
  591. package/tooling/src/checks/command-registry.js +36 -0
  592. package/tooling/src/checks/commits.js +102 -0
  593. package/tooling/src/checks/docs-drift.js +103 -0
  594. package/tooling/src/checks/docs-truth.js +201 -0
  595. package/tooling/src/checks/runtime-surface.js +156 -0
  596. package/tooling/src/cli.js +116 -0
  597. package/tooling/src/command-options.js +56 -0
  598. package/tooling/src/commands/validate.js +320 -0
  599. package/tooling/src/doctor/command.js +91 -0
  600. package/tooling/src/export/command.js +77 -0
  601. package/tooling/src/export/compiler.js +498 -0
  602. package/tooling/src/guards/loop-cap-guard.js +52 -0
  603. package/tooling/src/guards/protected-path-write-guard.js +67 -0
  604. package/tooling/src/index/command.js +152 -0
  605. package/tooling/src/index/storage.js +1061 -0
  606. package/tooling/src/index/summarizers.js +261 -0
  607. package/tooling/src/loaders.js +18 -0
  608. package/tooling/src/project-root.js +22 -0
  609. package/tooling/src/recall/command.js +225 -0
  610. package/tooling/src/schema-validator.js +30 -0
  611. package/tooling/src/state-root.js +40 -0
  612. package/tooling/src/status/command.js +71 -0
  613. package/wazir.manifest.yaml +135 -0
  614. package/workflows/README.md +19 -0
  615. package/workflows/author.md +42 -0
  616. package/workflows/clarify.md +38 -0
  617. package/workflows/design-review.md +46 -0
  618. package/workflows/design.md +44 -0
  619. package/workflows/discover.md +37 -0
  620. package/workflows/execute.md +48 -0
  621. package/workflows/learn.md +38 -0
  622. package/workflows/plan-review.md +42 -0
  623. package/workflows/plan.md +39 -0
  624. package/workflows/prepare-next.md +37 -0
  625. package/workflows/review.md +40 -0
  626. package/workflows/run-audit.md +41 -0
  627. package/workflows/spec-challenge.md +41 -0
  628. package/workflows/specify.md +38 -0
  629. package/workflows/verify.md +37 -0
@@ -0,0 +1,656 @@
# Code Review Anti-Patterns

> Code review is the single most impactful quality practice a team can adopt -- and the single most destructive when done poorly. These anti-patterns cover the ways reviews fail: through negligence, ego, process breakdown, or cultural dysfunction. Each pattern has been observed across organizations from two-person startups to Google-scale engineering teams. A dysfunctional review culture does not just miss bugs -- it erodes trust, demoralizes contributors, and quietly drives your best engineers out the door.

> **Domain:** Process
> **Anti-patterns covered:** 20
> **Highest severity:** Critical

## Anti-Patterns

### AP-01: Rubber Stamping

**Also known as:** LGTM-and-Move-On, Drive-Through Approval, Checkbox Review
**Frequency:** Very Common
**Severity:** Critical
**Detection difficulty:** Hard

**What it looks like:**

A PR is opened. Within 90 seconds, it has an approval with a one-word comment: "LGTM." The reviewer did not check out the branch, did not run the code, and may not have scrolled past the first file. The approval exists solely to satisfy a branch protection rule. Chromium's core team sent an internal memo titled "Please don't rubber stamp code reviews" after noticing that approvals were arriving faster than a human could physically read the diff.

**Why reviewers do it:**

Review work is invisible -- there is no dashboard tracking how thoroughly someone reviewed. Teams under delivery pressure treat reviews as a gate to clear, not a quality practice. When a senior engineer submits a PR, juniors assume the code is correct and approve out of deference -- "Well, I trust the senior reviewer!" A culture of reciprocal rubber stamping develops: "I approve yours quickly, you approve mine quickly."

**What goes wrong:**

Bugs that are visible in the diff -- copy-paste errors, missing null checks, hardcoded credentials -- ship to production because nobody actually read the code. A 2015 study by Czerwonka et al. at Microsoft found that reviews where the reviewer spent less than five minutes had a defect detection rate near zero. Rubber-stamped code at Knight Capital contributed to a $440 million loss in 45 minutes when untested deployment code went live. The review log creates a false audit trail suggesting diligence that never occurred. If you are in a culture where untested code gets waved through because someone powerful wanted it merged, you do not have a code review process -- you have theater.

**The fix:**

Require reviewers to leave at least one substantive comment (not "LGTM") before approving. Track review-to-approval time and flag approvals that arrive faster than 2 minutes per 100 lines changed. Rotate review assignments so that no single person is the default reviewer. Make review thoroughness a first-class performance metric alongside code output.

**Detection rule:**

Measure the median time between PR assignment and approval. If median approval time is under 3 minutes for PRs exceeding 100 lines, rubber stamping is systemic. Track the ratio of approvals with zero inline comments -- if it exceeds 40%, reviews are not substantive.
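
This rule can be sketched in a few lines, assuming PR metadata has already been exported from your Git host into plain records. The field names (`lines_changed`, `minutes_to_approval`, `inline_comments`) are illustrative, not a fixed schema, and the thresholds mirror the ones above:

```python
from statistics import median

def rubber_stamp_signals(prs, min_minutes=3.0, min_lines=100, max_zero_ratio=0.40):
    """Flag systemic rubber stamping from a list of PR records (dicts)."""
    # Median approval time, computed only over PRs large enough to need real review.
    large = [p for p in prs if p["lines_changed"] > min_lines]
    median_minutes = median(p["minutes_to_approval"] for p in large) if large else None
    # Share of approvals that carried no inline comments at all.
    zero_comment = sum(1 for p in prs if p["inline_comments"] == 0)
    zero_ratio = zero_comment / len(prs) if prs else 0.0
    return {
        "fast_approvals": median_minutes is not None and median_minutes < min_minutes,
        "hollow_approvals": zero_ratio > max_zero_ratio,
        "median_minutes_large_prs": median_minutes,
        "zero_comment_ratio": zero_ratio,
    }
```

Either signal alone is suggestive; both together are a strong indicator that approvals are a formality.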

---

### AP-02: Nitpick Focus

**Also known as:** Bikeshedding, Style Police, Missing the Forest for the Trees
**Frequency:** Very Common
**Severity:** High
**Detection difficulty:** Easy

**What it looks like:**

A PR introduces a new authentication flow with a subtle race condition in session handling. The review thread contains 14 comments, all about variable naming, brace placement, and whether to use single or double quotes. The race condition ships to production unnoticed. The reviewer spent 45 minutes on the review -- all of it on cosmetic concerns.

**Why reviewers do it:**

Style issues are easy to spot and unambiguous to comment on. Identifying logic errors, security flaws, or architectural problems requires deeper cognitive effort and domain knowledge. Commenting on style feels productive and demonstrates engagement without the vulnerability of being wrong about a substantive issue. Parkinson's Law of Triviality predicts exactly this: groups spend disproportionate time on trivial matters because everyone can have an opinion on them.

**What goes wrong:**

Substantive defects pass through while cosmetic ones are caught. Authors become frustrated because the feedback feels petty relative to the effort they invested. A study published at IEEE SANER 2021 on anti-patterns in modern code review found that superficial reviews focusing on style were strongly correlated with higher post-release defect density. Over time, developers learn that reviews are about style compliance, not quality, and they stop expecting reviews to catch real issues. The team develops a false sense of security: "we do thorough code reviews" -- but the reviews catch nothing that a linter could not.

**The fix:**

Automate all style enforcement with linters and formatters (Prettier, ESLint, Black, gofmt) in pre-commit hooks or CI. Establish a team agreement: if a linter can catch it, a human should not comment on it. Train reviewers to structure feedback in tiers: (1) blocking issues (bugs, security, correctness), (2) design concerns, (3) suggestions, (4) nitpicks -- and require at least one comment in tiers 1-2 before spending time on tier 4.

**Detection rule:**

Categorize review comments over a two-week period. If more than 60% of comments are style/formatting related and less than 10% address logic, design, or security, the team has a nitpick focus problem.
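
A rough way to run this categorization automatically is keyword tagging. The hint lists below are illustrative placeholders (real triage would need human labeling or a better classifier), but they are enough to trend the style-vs-substance mix over time:

```python
# Illustrative keyword lists -- tune these to your team's vocabulary.
STYLE_HINTS = ("naming", "rename", "typo", "format", "whitespace", "style", "quote")
SUBSTANCE_HINTS = ("bug", "race", "security", "null", "leak", "design", "logic", "crash")

def comment_mix(comments):
    """Rough split of review comments into style vs. substance by keyword.

    `comments` is a list of lowercase-able comment strings; each comment
    counts at most once per category.
    """
    style = sum(1 for c in comments if any(h in c.lower() for h in STYLE_HINTS))
    substance = sum(1 for c in comments if any(h in c.lower() for h in SUBSTANCE_HINTS))
    n = len(comments) or 1
    return {"style_share": style / n, "substance_share": substance / n}
```

Applying the thresholds above: flag the team when `style_share` exceeds 0.6 while `substance_share` stays under 0.1.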

---

### AP-03: Too-Late Review

**Also known as:** Post-Facto Review, Retroactive Approval, Ambush Review
**Frequency:** Common
**Severity:** Critical
**Detection difficulty:** Easy

**What it looks like:**

Code is merged to main before review is complete -- or the reviewer raises fundamental architectural objections after the author has invested days of implementation work. In the first case, the developer merges their own PR citing urgency, then asks someone to "take a look when you get a chance." In the second, the reviewer says "This should use event sourcing instead of CRUD" after the author has already built the entire CRUD layer. Either way, the review's value is destroyed by timing.

**Why reviewers do it:**

Teams without strict branch protection allow self-merges. Deployment pressure creates a culture where "ship first, review later" becomes normalized. When reviews are delayed (see AP-20), developers bypass the process out of frustration. In the design-objection variant, there was no design review or RFC process before implementation began, so the reviewer sees the approach for the first time in the PR.

**What goes wrong:**

Review findings after merge are psychologically demoted -- the cost of addressing them now includes a new PR, re-testing, and redeployment, so they are filed as "follow-up" tickets that never get prioritized. Late design objections force the author to discard significant work, collapsing morale. Teams that experience this repeatedly stop proposing new approaches and default to "safe" patterns, killing innovation. The practice undermines the entire review culture: if reviews can be skipped or overruled when inconvenient, they are optional by definition.

**The fix:**

Enable branch protection rules that require at least one approval before merge. Remove direct push access to main/master for all team members, including leads. For genuine emergencies, establish a documented "emergency merge" process that requires post-merge review within 24 hours with a tracking ticket. Institute lightweight design reviews or RFCs before implementation of any non-trivial feature -- a 30-minute design discussion before coding prevents a week of wasted implementation. Encourage early draft/WIP pull requests so reviewers can flag directional issues before the author is invested.

**Detection rule:**

Query your Git hosting platform for PRs merged without approval or PRs where approval was granted after the merge timestamp. Flag review comments that suggest a fundamentally different approach (keywords: "instead of," "should have used," "wrong pattern," "rewrite") on PRs where the author has already pushed more than 3 commits.
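
A sketch of both checks, assuming PR data has been fetched into dicts with hypothetical keys `merged_at` / `approved_at` (timestamps, `approved_at` may be `None`), `commits`, and `comments`:

```python
# Phrases that suggest a fundamental design objection rather than a local fix.
LATE_DESIGN_MARKERS = ("instead of", "should have used", "wrong pattern", "rewrite")

def too_late_flags(pr):
    """Flag one PR record for too-late-review symptoms."""
    # Approval missing, or granted only after the merge timestamp.
    merged_unreviewed = pr["approved_at"] is None or pr["approved_at"] > pr["merged_at"]
    # Design-level objection arriving after substantial work was already pushed.
    late_objection = pr["commits"] > 3 and any(
        marker in comment.lower()
        for comment in pr["comments"]
        for marker in LATE_DESIGN_MARKERS
    )
    return {
        "merged_without_prior_approval": merged_unreviewed,
        "late_design_objection": late_objection,
    }
```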

---

### AP-04: Huge PRs

**Also known as:** The Monster Diff, Wall of Code, Mega-Merge
**Frequency:** Very Common
**Severity:** High
**Detection difficulty:** Easy

**What it looks like:**

A PR appears with 1,200 lines changed across 35 files. The description says "Implement user management module." Reviewers open the diff, scroll through a few files, leave a comment on line 47, and approve -- because nobody has the stamina or context to review 1,200 lines meaningfully. SmartBear's study of Cisco code reviews found that reviewer effectiveness drops dramatically beyond 400 lines, and the rate of defects found per line approaches zero beyond 1,000 lines.

**Why reviewers do it:**

Developers batch work because creating small, incremental PRs requires more planning and discipline. Feature branches live too long without integration. The team lacks conventions for breaking work into reviewable chunks. Some developers view a large PR as a sign of productivity rather than a review burden.

**What goes wrong:**

Review quality collapses. Google's internal data shows that 90% of their code reviews involve fewer than 10 files, with most changes around 24 lines -- and this is by design, not accident. Large PRs also increase merge conflict risk, make `git bisect` less useful for debugging, and delay feedback because reviewers procrastinate on intimidating diffs. When a bug is found in a 1,200-line PR, isolating the problematic change is significantly harder than in a 50-line PR. Research confirms that thousand-line pull requests result in measurably more bugs escaping to production.

**The fix:**

Set a soft limit of 400 lines per PR and a hard limit of 800. Break features into vertical slices that each deliver incremental value. Use feature flags to merge incomplete features safely. Require PRs exceeding the soft limit to include a justification in the description. Use stacked PRs (tools like Graphite, ghstack) to keep individual reviews small while maintaining logical grouping.

**Detection rule:**

Track PR size distribution weekly. If the median PR exceeds 300 lines or more than 20% of PRs exceed 500 lines, the team has a large-PR problem. Alert when any single PR exceeds 800 lines changed.
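
The weekly check is a one-pass summary over line counts; the soft/hard limits below are the ones suggested above, passed in as defaults so teams can tune them:

```python
from statistics import median

def size_report(line_counts, hard=800):
    """Summarize a week's PR sizes against the detection thresholds.

    `line_counts` is a list of lines-changed totals, one per PR.
    """
    n = len(line_counts) or 1
    return {
        "median": median(line_counts) if line_counts else 0,        # alert if > 300
        "share_over_500": sum(1 for c in line_counts if c > 500) / n,  # alert if > 0.20
        "hard_limit_breaches": [c for c in line_counts if c > hard],   # alert on any
    }
```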

---

### AP-05: No PR Description

**Also known as:** The Blind Review, Context-Free Diff, Title-Only PR
**Frequency:** Very Common
**Severity:** Moderate
**Detection difficulty:** Easy

**What it looks like:**

The PR title is "Fix stuff" or "Updates." The description field is empty. The reviewer must reverse-engineer the intent of the change from the diff alone -- guessing at which behavior is intentional and which is accidental. There is no link to a ticket, no explanation of the approach, and no guidance on what to focus on during review.

**Why reviewers do it:**

This is an author anti-pattern that reviewers enable by accepting and approving description-less PRs without pushback. Authors skip descriptions because writing them feels like overhead, especially for changes they consider "obvious." Teams without PR templates have no structural prompt for descriptions.

**What goes wrong:**

Without context, reviewers cannot distinguish intentional design decisions from mistakes. Review quality degrades because the reviewer is simultaneously trying to understand *what* changed and evaluate *whether* the change is correct. Months later, when someone runs `git log` to understand why a change was made, the empty description provides no insight. Onboarding developers cannot learn from the team's PR history because it contains no rationale.

**The fix:**

Add a PR template that requires: (1) a link to the ticket/issue, (2) a summary of the approach, (3) testing performed, and (4) any areas where the author wants focused review. Configure CI to fail if the description is empty or matches the default template text. During review, if the description is missing, the first review comment should be: "Please add a description before I review."

**Detection rule:**

Query PRs merged in the last 30 days. If more than 25% have descriptions shorter than 50 characters, the team has a description problem. Track the percentage of PRs with no linked ticket or issue.
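
Both percentages fall out of a single pass over recent PRs. The record shape here (`description`, `linked_issue`) is a stand-in for whatever your Git host's API returns:

```python
def description_problems(prs, min_chars=50):
    """Share of merged PRs with thin descriptions or no linked issue.

    Each PR dict has hypothetical keys: description (str),
    linked_issue (str or None).
    """
    n = len(prs) or 1
    thin = sum(1 for p in prs if len(p["description"].strip()) < min_chars) / n
    unlinked = sum(1 for p in prs if not p["linked_issue"]) / n
    # Thresholds from the rule: flag when thin > 0.25; track unlinked over time.
    return {"thin_description_share": thin, "unlinked_share": unlinked}
```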

---

### AP-06: Blocking on Style Preferences

**Also known as:** Taste-Based Blocking, Personal Preference Paralysis, "I Would Have Done It Differently"
**Frequency:** Common
**Severity:** High
**Detection difficulty:** Moderate

**What it looks like:**

A reviewer marks a PR as "Request Changes" because the author used a for loop instead of a reduce, named a variable `userList` instead of `users`, or chose a ternary over an if-else. The code is correct, readable, tested, and follows team conventions -- the reviewer simply prefers a different style. The PR sits blocked for days while the author and reviewer debate preferences. As one engineering blog put it: "another developer's personal preference isn't a good enough argument."

**Why reviewers do it:**

Experienced developers have strong opinions formed over years of practice. It is psychologically difficult to approve code that differs from how you would write it. Some reviewers conflate "not my style" with "wrong." Without agreed team standards, every preference becomes a potential blocking issue.

**What goes wrong:**

Author morale degrades when correct code is blocked on taste. The review process becomes adversarial rather than collaborative. PR cycle time inflates as authors make changes they disagree with to appease reviewers, then silently revert them in future PRs. Team velocity drops because reviews become negotiations over style rather than evaluations of correctness. Senior developers who habitually block on preference become bottlenecks that the team routes around.

**The fix:**

Establish a written team style guide and agree that anything not in the guide is author's choice. Distinguish between "blocking" (must fix before merge) and "non-blocking" (suggestion, take it or leave it) feedback -- many teams use prefixes like `nit:` or `suggestion:`. Adopt Google's standard of code review: if the code is functional, well-tested, and follows team conventions, approve it even if you would have written it differently.

**Detection rule:**

Track review comments that result in "Request Changes" status. If more than 30% of blocking comments are about style preferences (not correctness, security, or convention violations), blocking standards are too subjective. Monitor PR cycle time -- if PRs with changes-requested take more than 3 days to resolve, preference-blocking may be the cause.
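
Classifying a blocking comment as "taste" is inherently fuzzy, so any automated version is only a first-pass screen for human review. This sketch uses a deliberately small, hypothetical phrase list:

```python
# Very rough markers of preference-based feedback -- a screening aid,
# not a verdict; borderline comments still need a human read.
PREFERENCE_HINTS = ("prefer", "i would", "personally")

def preference_blocking_share(blocking_comments):
    """Estimate what fraction of 'Request Changes' comments are taste, not substance."""
    n = len(blocking_comments) or 1
    taste = sum(
        1 for c in blocking_comments if any(h in c.lower() for h in PREFERENCE_HINTS)
    )
    # Rule of thumb from above: a share over 0.30 suggests blocking is too subjective.
    return taste / n
```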

---

### AP-07: Gatekeeping Reviews

**Also known as:** Ego Olympics, The Approval Bottleneck, Review Monarchy
**Frequency:** Common
**Severity:** Critical
**Detection difficulty:** Moderate

**What it looks like:**

One senior developer is the sole required reviewer for all PRs. They reject code that does not match their personal architecture vision, demand rewrites of working solutions, and use reviews as a platform to demonstrate technical superiority. PRs sit in their queue for days. Junior developers learn to write code that appeases this person rather than code that solves the problem well. As research on review power dynamics found: "it feels good to leave a blocking review because it feels like they're single-handedly protecting the quality of the codebase, and it's a way to indulge flexing their own technical knowledge."

**Why reviewers do it:**

Gatekeeping provides a sense of control and importance. Organizations that assign ownership of code review to a single person inadvertently create gatekeepers. The power dynamic is self-reinforcing: because juniors defer to the gatekeeper, the gatekeeper's belief that they are indispensable is confirmed. People with less structural power hesitate to push back against those with more power, even when they are technically correct, and the code review becomes performative.

**What goes wrong:**

The gatekeeper becomes a bottleneck that limits team throughput to their personal review bandwidth. Junior developers stop growing because they optimize for gatekeeper approval rather than engineering quality. Knowledge concentrates in one person, creating a catastrophic bus factor. When the gatekeeper leaves or is unavailable, the team cannot ship. Team members who disagree with the gatekeeper leave the team or the company. The culture becomes one of permission-seeking rather than ownership.

**The fix:**

Require review from any team member, not a specific person. Rotate review assignments using automated tools (GitHub CODEOWNERS with team-level, not individual, ownership). Establish a written escalation process for disagreements so that no single person has unilateral veto power. Make review distribution metrics visible -- if one person is reviewing more than 30% of all PRs, rebalance. Ensure that junior developers also review senior developers' code, normalizing bidirectional feedback.

**Detection rule:**

Analyze review distribution: if one person provides more than 40% of all review approvals, gatekeeping risk is high. Track how often a single reviewer's "Request Changes" blocks a PR for more than 48 hours. Survey the team anonymously: "Do you feel you need a specific person's approval to merge?"
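
The concentration check reduces to counting approvals per reviewer; the input here is just a list of reviewer names, one entry per approval, however you extract it from your host:

```python
from collections import Counter

def gatekeeper_risk(approvals, threshold=0.40):
    """Check whether one reviewer dominates approvals.

    `approvals` is a list of reviewer names, one entry per approval event.
    """
    if not approvals:
        return {"top_reviewer": None, "share": 0.0, "at_risk": False}
    top, count = Counter(approvals).most_common(1)[0]
    share = count / len(approvals)
    return {"top_reviewer": top, "share": share, "at_risk": share > threshold}
```

The 48-hour block tracking and the anonymous survey are complements; this only measures the structural half of the problem.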

---

### AP-08: Not Reviewing Tests

**Also known as:** Test Blindness, Green-Bar Assumption, "Tests Pass, Ship It"
**Frequency:** Very Common
**Severity:** High
**Detection difficulty:** Hard

**What it looks like:**

The reviewer carefully examines the production code but skips the test files entirely. The tests are treated as validation artifacts -- if CI is green, the tests must be fine. Nobody checks whether the tests actually assert meaningful behavior, whether edge cases are covered, or whether the tests would catch a regression if the production code changed. A test file that asserts `expect(true).toBe(true)` passes CI and passes review.

**Why reviewers do it:**

Test code feels secondary to production code. Reviewing tests requires understanding the intent of the production code deeply enough to evaluate whether the tests validate it. Test files are often long and repetitive, making them tedious to review. Reviewers trust the CI pipeline as a proxy for test quality rather than evaluating the tests themselves.

**What goes wrong:**

Tests that do not test anything meaningful create a false safety net. Code coverage metrics look healthy, but the tests are not actually validating behavior. Ding Yuan et al.'s study "Simple Testing Can Prevent Most Critical Failures" (OSDI 2014) found that the majority of catastrophic failures could have been prevented by simple testing -- yet the tests that existed did not cover the failure paths. When a future change introduces a bug, the tests still pass because they never tested the correct behavior in the first place. The team accumulates "test debt" -- a suite that is expensive to maintain but provides little confidence.

**The fix:**

Review test files with the same rigor as production code. Ask: "If I introduced a bug in the production code, would this test catch it?" Check for edge cases, error paths, and boundary conditions. Look for tests that are tautological (asserting the implementation rather than the behavior). Add mutation testing (Stryker, mutmut, pitest) to CI to measure whether tests actually detect changes in production code.

**Detection rule:**

Track the percentage of review comments on test files vs. production files. If fewer than 15% of review comments address test code, test review is being neglected. Run mutation testing quarterly -- if mutation survival rate exceeds 30%, tests are not catching regressions.
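
The comment-share metric only needs the file path each review comment was attached to. Test files are identified by path convention here; the marker tuple is an assumption to adapt to your repo layout:

```python
def share_of_comments_on_tests(comment_paths,
                               markers=("test_", "_test.", ".test.", "/tests/")):
    """Fraction of review comments that land on test files, by path convention.

    `comment_paths` is one file path per review comment.
    """
    n = len(comment_paths) or 1
    on_tests = sum(1 for path in comment_paths if any(m in path for m in markers))
    # Rule of thumb from above: below 0.15, test review is being neglected.
    return on_tests / n
```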

---

### AP-09: Only Reviewing Changed Files

**Also known as:** Tunnel Vision Review, Diff-Only Review, No Context Review
**Frequency:** Common
**Severity:** High
**Detection difficulty:** Hard

**What it looks like:**

The reviewer examines only the lines highlighted in the diff without understanding the surrounding code, the module's responsibilities, or how the changed code interacts with the rest of the system. A function signature change looks fine in isolation but breaks three callers outside the diff. A new utility function duplicates one that already exists two directories away. An API contract change is compatible with the changed client but incompatible with two other clients not shown in the diff.

**Why reviewers do it:**

Modern code review tools center the diff as the primary view. Expanding context requires deliberate effort -- clicking to load surrounding lines, navigating to other files, or checking out the branch locally. When review workload is high, reviewers stay within the diff to minimize time per review. The cognitive load of understanding the broader system is significant, especially for reviewers who are not domain experts in the changed module.

**What goes wrong:**

Integration bugs, contract violations, and behavioral regressions that are invisible in the diff reach production. Duplicate code proliferates because reviewers do not know what already exists outside the changed files. Security vulnerabilities in the interaction between components go undetected. The review catches local errors but misses systemic ones -- which are typically more expensive to fix.

**The fix:**

Encourage reviewers to check out the branch locally for non-trivial changes. Use review tools that show impact analysis -- which callers, dependents, or contracts are affected by the change. Include "areas of concern" in the PR description to guide reviewers toward interactions that matter. For architectural changes, require the author to document impacted components.

**Detection rule:**

Track bugs that escape review and categorize them: if more than 25% of post-merge bugs involve interactions between the changed code and unchanged code, reviewers are not examining context. Monitor whether reviewers ever view files outside the diff -- if review tool analytics show zero non-diff file views, tunnel vision is systemic.
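
This metric depends on a triage step: when a post-merge bug is root-caused, record whether it crossed the diff boundary. Given that (hypothetical) label, the check itself is trivial:

```python
def context_bug_share(escaped_bugs):
    """Share of post-merge bugs involving interaction with unchanged code.

    Each bug dict carries a hypothetical boolean `crosses_diff_boundary`,
    set manually during root-cause triage.
    """
    n = len(escaped_bugs) or 1
    crossing = sum(1 for bug in escaped_bugs if bug["crosses_diff_boundary"])
    # Rule of thumb from above: a share over 0.25 means reviewers are
    # not examining context beyond the diff.
    return crossing / n
```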

---

### AP-10: Toxic Comments

**Also known as:** Harsh Feedback, Code Shaming, The Personal Attack
**Frequency:** Common
**Severity:** Critical
**Detection difficulty:** Easy

**What it looks like:**

Review comments include phrases like "This is terrible," "Did you even think about this?", "Why would anyone write it this way?", or passive-aggressive emoji reactions. The feedback attacks the developer rather than the code. Sarcasm substitutes for constructive guidance. A junior developer's first PR receives 23 dismissive comments and no encouragement. Sandya Sankarram's widely cited talk "Unlearning Toxic Behaviors in a Code Review Culture" documented how these patterns suppress creativity and drive attrition.

**Why reviewers do it:**

Text-based communication strips tone and body language, making blunt feedback feel harsher than intended. Some reviewers use code review as a status display, demonstrating superiority through criticism. Frustration with repeated mistakes or time pressure leads to short, sharp comments. In some engineering cultures, brutal honesty is valorized and "thick skin" is expected -- a norm that disproportionately excludes underrepresented groups. Some developers pass off personal programming opinions as established fact, using code reviews as an opportunity to show off how clever they are.

**What goes wrong:**

Developers dread submitting PRs and delay them to avoid negative feedback. Psychological safety collapses: team members stop taking risks, proposing novel approaches, or asking questions in reviews. Toxic review culture is a leading driver of engineer attrition -- people leave teams, not codebases. Toxic behaviors during code reviews can be more unproductive than no code reviews at all, because they stifle the qualities developers need most: creativity and innovativeness. The team loses the diversity of thought that produces better solutions, because only those who can tolerate abuse participate.

**The fix:**

Establish a code review code of conduct. Train reviewers to comment on the code, never the person ("This function could handle the null case" vs. "You forgot to handle nulls"). Use the "yes, and" approach: acknowledge what works before suggesting improvements. Require that every "Request Changes" review includes at least one positive observation. When toxic comments are identified, address them immediately with the reviewer in private -- public correction creates its own toxicity.

**Detection rule:**

Periodically review a sample of review comments for tone. Flag comments containing personal pronouns in negative contexts ("you always," "you never," "you should have"). Run anonymous team surveys quarterly asking whether code reviews feel safe and constructive. Track attrition and exit interview data for mentions of review culture.
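A first pass of the pronoun-in-negative-context flag can be automated. The phrase list below is illustrative, not a vetted toxicity model -- it only surfaces candidates for a human tone review:

```python
import re

# Illustrative "personal pronoun in a negative context" patterns; extend
# with whatever phrasing your team actually wants to discourage.
NEGATIVE_PATTERNS = [
    r"\byou always\b",
    r"\byou never\b",
    r"\byou should have\b",
    r"\bdid you even\b",
    r"\bwhy would anyone\b",
]

def flag_tone(comment: str) -> bool:
    """Return True if a review comment matches a personal-negative pattern."""
    text = comment.lower()
    return any(re.search(pattern, text) for pattern in NEGATIVE_PATTERNS)

flag_tone("You always forget to handle nulls.")         # True: queue for tone review
flag_tone("This function could handle the null case.")  # False
```

False positives are fine here: a flag means "a human should look," not "this comment is toxic."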

---

### AP-11: Drive-By Reviews

**Also known as:** Hit-and-Run Comments, Seagull Review, The Dive-Bomb
**Frequency:** Common
**Severity:** Moderate
**Detection difficulty:** Moderate

**What it looks like:**

A reviewer who is not assigned to the PR drops a single comment -- often a nitpick or a vague concern like "Hmm, not sure about this approach" -- and disappears. They do not respond to follow-up questions, do not review the full PR, and do not approve or reject. Their comment creates uncertainty and blocks progress without adding clarity. The author does not know whether to address the concern or ignore it.

**Why reviewers do it:**

Drive-by reviewing feels like participating without the commitment of a full review. Some developers skim PRs in their feed out of curiosity and leave thoughts without intending to engage further. In open-source projects, drive-by comments from non-maintainers are common and expected -- but in team contexts, they create confusion about who is responsible for the review.

**What goes wrong:**

Unresolved drive-by comments create ambiguity: is this a blocking concern or idle musing? Authors waste time addressing vague feedback from someone who may never return to validate the fix. PR cycle time increases as the author waits for the drive-by reviewer to respond. Multiple drive-by comments from different people can create contradictory guidance without a clear path forward. Review responsibility diffuses -- everyone comments, nobody owns the review.

**The fix:**

Distinguish between assigned reviewers (who are responsible for a thorough review and approval decision) and optional reviewers (whose comments are advisory). If you leave a comment on a PR you are not assigned to, explicitly state whether your comment is blocking or informational. Teams can adopt a convention: unassigned reviewers prefix comments with `[non-blocking]` or `[FYI]`.
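The prefix convention can be enforced mechanically, for instance by a small bot that nudges unassigned reviewers. The field names here (`author`, `body`, `assigned_reviewers`) are illustrative, not any specific platform's API:

```python
# Sketch of a bot check for the advisory-prefix convention above.

ADVISORY_PREFIXES = ("[non-blocking]", "[fyi]")

def needs_prefix(author, body, assigned_reviewers) -> bool:
    """True when an unassigned reviewer's comment lacks an advisory prefix."""
    if author in assigned_reviewers:
        return False  # assigned reviewers own the review; no prefix expected
    return not body.lower().startswith(ADVISORY_PREFIXES)

assigned = {"alice"}
needs_prefix("bob", "Hmm, not sure about this approach", assigned)           # True -> nudge bob
needs_prefix("bob", "[non-blocking] a dataclass would be tidier", assigned)  # False
needs_prefix("alice", "Please add a timeout-path test", assigned)            # False
```

A nudge comment from the bot ("is this blocking?") resolves the ambiguity without shaming anyone.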

**Detection rule:**

Track review comments from non-assigned reviewers that receive no follow-up response from the commenter. If more than 30% of non-assigned reviewer comments go unresolved, drive-by reviewing is a pattern. Monitor PRs with comments but no approval or rejection from the commenter.

---

### AP-12: Syntax-Only Review

**Also known as:** Human Linter, Surface Review, Form Over Function
**Frequency:** Common
**Severity:** High
**Detection difficulty:** Moderate

**What it looks like:**

The reviewer checks that the code compiles, follows naming conventions, and uses the right patterns -- but never evaluates whether the logic is correct. They verify that the function has a return statement but not that it returns the right value. They confirm that error handling exists but not that it handles the right errors. The review is structurally thorough but semantically empty. This is distinct from AP-02 (Nitpick Focus) in that the reviewer genuinely believes they have done a complete review -- they just lack the depth to evaluate logic.

**Why reviewers do it:**

Evaluating syntax and structure is pattern matching -- a low-energy cognitive task. Evaluating logic requires building a mental model of the system's behavior, reasoning about state, and considering edge cases -- high-energy tasks that demand domain knowledge. Reviewers without context in the changed module default to what they can evaluate: form rather than function. Some teams assign reviewers without regard for domain expertise, making logic review impossible.

**What goes wrong:**

Logic bugs, off-by-one errors, race conditions, and incorrect business rule implementations pass review. The team's bug escape rate correlates with the complexity of the logic rather than the complexity of the syntax, because syntax gets caught and logic does not. Over time, the review process optimizes for form -- producing consistently formatted, consistently incorrect code. Microsoft's research found that the primary value of code review is knowledge transfer and shared understanding, not syntax checking -- when reviews stay at the syntax level, this value is lost entirely.

**The fix:**

Assign reviewers who have domain expertise in the changed module. Require reviewers to articulate what the code does in their own words before approving -- if they cannot explain the logic, they have not reviewed it. Include specific review prompts: "Does this handle the case where X is null?", "What happens if Y times out?", "Is the sort order correct for Z?" Supplement human review with static analysis tools that detect logic issues (null dereference, unchecked returns, race conditions).

**Detection rule:**

Categorize escaped bugs by type. If more than 50% of post-merge bugs are logic errors (incorrect conditions, wrong calculations, missing cases) rather than structural errors (missing imports, type mismatches), reviews are syntax-focused. Track whether reviewers ask questions about behavior vs. questions about style.

---

### AP-13: No Automation for Style and Lint

**Also known as:** Manual Style Enforcement, The Human Linter Pipeline, Formatters Are Optional
**Frequency:** Common
**Severity:** Moderate
**Detection difficulty:** Easy

**What it looks like:**

The team has a style guide but no automated enforcement. Every PR review includes 5-10 comments about formatting, import ordering, trailing whitespace, and naming conventions. These comments consume reviewer time, delay approvals, and frustrate authors who must make mechanical changes. The same style violations recur because there is no automated feedback loop.

**Why reviewers do it:**

Some teams resist automation because they believe human judgment is needed for all aspects of code quality. Others have not invested the setup time for linters and formatters. In polyglot codebases, configuring tools for every language feels like a large upfront cost. Some developers resist auto-formatters because they want control over their code's appearance.

**What goes wrong:**

Reviewer bandwidth that should be spent on logic and design is consumed by style enforcement. Authors receive a mix of substantive and stylistic feedback, making it harder to prioritize. Style comments feel personal and can trigger defensiveness. The team spends cumulative hours per week on work that a tool could do in milliseconds. Without automation, style enforcement is inconsistent -- different reviewers enforce different rules, and the same reviewer enforces different rules on different days.

**The fix:**

Configure linters (ESLint, Pylint, RuboCop) and formatters (Prettier, Black, gofmt) to run in pre-commit hooks and CI. Make the CI check blocking -- PRs with lint failures cannot be merged. Once automated enforcement is in place, add a team agreement: "Do not comment on anything a linter can catch." This frees reviewer attention for higher-value feedback.
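As one way to wire this up, a pre-commit hook can be a short script. The sketch below assumes a Python codebase checked with `black --check`; substitute your own linters and file filters:

```python
#!/usr/bin/env python3
# Illustrative pre-commit hook (save as .git/hooks/pre-commit and make it
# executable). Runs a formatter check over staged Python files; the tool
# and the suffix filter are examples, not a prescription.

import subprocess
import sys

def staged_files():
    """List files staged for commit (added/copied/modified only)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def lintable(files, suffix=".py"):
    """Keep only the staged files the formatter should see."""
    return [f for f in files if f.endswith(suffix)]

def main():
    targets = lintable(staged_files())
    if not targets:
        return 0
    # `black --check` exits non-zero on formatting drift, blocking the commit
    return subprocess.run(["black", "--check", *targets]).returncode

if __name__ == "__main__":
    sys.exit(main())
```

Teams usually graduate from a hand-rolled hook like this to a hook manager plus the same checks in CI, so the rule holds even when someone skips local hooks.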

**Detection rule:**

Count the number of review comments per PR that are about formatting or style. If this exceeds 3 per PR on average, automation is missing. Check CI configuration for linter and formatter steps -- if absent, this anti-pattern is guaranteed to be present.

---

### AP-14: Review Ping-Pong

**Also known as:** Endless Iterations, The Infinite Rally, Death by a Thousand Rounds
**Frequency:** Common
**Severity:** High
**Detection difficulty:** Easy

**What it looks like:**

A PR goes through 7 rounds of review. Each round, the reviewer finds new issues they could have mentioned in the first round. The author fixes round 1's comments, only to receive round 2's comments on different lines. The reviewer then spots a third set of issues introduced by the round 2 fixes. Simon Tatham described this as the "death by a thousand cuts" anti-pattern -- a reviewer who stops reading at the first nitpick rather than providing complete feedback in one pass.

**Why reviewers do it:**

Reviewers who find an issue early in the diff stop reading and submit feedback immediately, planning to continue "after the fix." Each round of changes introduces new context the reviewer feels compelled to examine. Some reviewers avoid giving too much feedback at once, fearing it will overwhelm the author -- an admirable intent that backfires by extending the cycle. If the team has not agreed on what "ready for review" means, a PR may be opened half-baked, forcing reviewers to question architecture decisions, test coverage, and formatting all at once -- prime conditions for multiple rounds.

**What goes wrong:**

PR cycle time balloons from hours to days or weeks. Author morale drops as they feel they can never satisfy the reviewer. The PR's diff accumulates fix-on-fix changes that obscure the original intent. Other team members waiting on the PR are blocked. A major change in the middle of code review effectively resets the entire review process. In extreme cases, the author abandons the PR and rewrites it from scratch, wasting all review effort.

**The fix:**

Reviewers should do a complete pass before leaving any comments. Batch all feedback into a single round. If an issue is minor, mark it `nit` and approve anyway -- do not block on nitpicks. Limit review rounds to a maximum of 3; if the PR is not ready after 3 rounds, schedule a synchronous discussion (call or pairing session) to resolve remaining issues. Establish clear expectations for what "ready for review" means so that PRs are not opened prematurely.

**Detection rule:**

Track the number of review rounds per PR. If the median exceeds 2 rounds or more than 15% of PRs take 4+ rounds, ping-pong is occurring. Measure the time between first review comment and final approval -- if this regularly exceeds 3 business days for PRs under 300 lines, the review process is cycling excessively.
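Both thresholds are easy to compute from per-PR round counts, however your review platform exposes them. A minimal sketch:

```python
from statistics import median

# Sketch of the ping-pong thresholds above, given review-round counts per PR.

def ping_pong_signals(rounds_per_pr):
    """Median rounds, share of PRs at 4+ rounds, and the combined flag."""
    med = median(rounds_per_pr)
    share_4_plus = sum(1 for r in rounds_per_pr if r >= 4) / len(rounds_per_pr)
    return {
        "median_rounds": med,
        "share_4_plus": share_4_plus,
        "ping_pong": med > 2 or share_4_plus > 0.15,
    }

signals = ping_pong_signals([1, 2, 2, 7, 5, 1, 3, 2])
# median is 2.0, but 2 of 8 PRs (25%) took 4+ rounds -> ping_pong is True
```

The 4+ share matters even when the median looks healthy: ping-pong tends to concentrate in a few painful PRs rather than spreading evenly.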

---

### AP-15: Ignoring Security Implications

**Also known as:** Security Blindness, "That's AppSec's Problem", Threat-Unaware Review
**Frequency:** Common
**Severity:** Critical
**Detection difficulty:** Hard

**What it looks like:**

A PR adds a new API endpoint that accepts user input and writes it to a database. The review focuses on code structure, test coverage, and error messages -- but nobody asks about input validation, SQL injection, authentication, authorization, or rate limiting. A PR modifies a Terraform configuration to open a security group, and the reviewer checks only that the syntax is valid. OWASP's Secure Code Review Guide emphasizes that security review requires deliberate focus on trust boundaries, data flow, and authentication -- none of which appear in a typical review checklist.

**Why reviewers do it:**

Most developers are not trained in security. Security concerns are invisible unless you are specifically looking for them. Teams assume that security is handled by a dedicated AppSec team, SAST tools, or penetration testing -- review is "just" for functionality. Security review requires understanding threat models and attack vectors, which is specialized knowledge that general-purpose reviewers lack.

**What goes wrong:**

Injection vulnerabilities, broken access control, exposed secrets, and insecure configurations pass through review. These are the OWASP Top 10 categories that dominate real-world breaches. The 2017 Equifax breach exploited an unpatched Apache Struts vulnerability that had a fix available for months -- the kind of dependency issue that a security-aware review process would have flagged. Automated security tools can identify coding errors, but experienced human reviewers are still capable of identifying issues that tools miss, particularly in business logic and authorization flows. The review process provides a false sense of security: "It was code-reviewed" does not mean "It was security-reviewed."

**The fix:**

Add a security checklist to the PR template for changes that touch authentication, authorization, data input, configuration, or infrastructure. Require at least one reviewer with security training for PRs in sensitive areas. Integrate SAST tools (Semgrep, CodeQL, Snyk Code) into CI to catch common vulnerabilities automatically. Conduct periodic threat modeling sessions so that all developers develop security intuition. Treat infrastructure-as-code (Terraform, Kubernetes manifests) as security-sensitive by default.

**Detection rule:**

Audit PRs that touch authentication, authorization, or user input handling. If fewer than 20% of these PRs have review comments addressing security concerns, security review is absent. Track whether SAST tools are integrated into CI -- if not, automated security coverage is zero.
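The audit can be approximated from exported PR data. The path fragments and keyword lists below are illustrative placeholders -- tune them to your codebase and your team's vocabulary:

```python
# Sketch of the audit above: find security-sensitive PRs that drew no
# security-flavored review comments.

SENSITIVE_PATH_PARTS = ("auth", "login", "permission", "token", "secret")
SECURITY_KEYWORDS = ("injection", "sanitize", "validate", "authorization", "csrf")

def touches_sensitive(paths) -> bool:
    return any(part in path.lower() for path in paths for part in SENSITIVE_PATH_PARTS)

def security_review_rate(prs) -> float:
    """Share of security-sensitive PRs with at least one security comment."""
    sensitive = [pr for pr in prs if touches_sensitive(pr["files"])]
    if not sensitive:
        return 1.0  # nothing sensitive to audit
    reviewed = sum(
        1 for pr in sensitive
        if any(kw in comment.lower() for comment in pr["comments"] for kw in SECURITY_KEYWORDS)
    )
    return reviewed / len(sensitive)

prs = [
    {"files": ["app/auth/session.py"], "comments": ["nit: rename this"]},
    {"files": ["app/auth/token.py"], "comments": ["Does this validate the issuer claim?"]},
]
rate = security_review_rate(prs)  # 0.5 -- flag the process if this dips below 0.2
```

Keyword matching undercounts thoughtful security feedback, so treat the number as a lower bound, not a grade.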

---

### AP-16: Not Reviewing Config and Infrastructure

**Also known as:** Config Blindness, "It's Just YAML", YAML Yolo
**Frequency:** Common
**Severity:** High
**Detection difficulty:** Moderate

**What it looks like:**

A PR includes changes to `docker-compose.yml`, Kubernetes manifests, Terraform files, CI/CD pipeline definitions, or environment configuration. The reviewer examines the application code changes in the PR but skips the infrastructure files entirely -- they are treated as boilerplate that "just works." An open S3 bucket, an overly permissive IAM role, or a missing resource limit passes through unreviewed.

**Why reviewers do it:**

Many application developers lack infrastructure expertise and feel unqualified to review config files. Infrastructure code looks like declarative boilerplate rather than logic, making it seem less important. Review tools often collapse YAML and JSON diffs, making them harder to read. The team may not have established standards for infrastructure code review.

**What goes wrong:**

Misconfigured infrastructure is a leading cause of cloud security incidents. Exposed storage buckets, overpermissive network rules, and missing encryption settings have caused major data breaches. Roughly half of Kubernetes deployments have been characterized as carrying technical debt from configurations copied without understanding, leading to security misconfigurations and resource waste. Resource limits omitted from Kubernetes manifests lead to noisy-neighbor problems and cost overruns. CI/CD pipeline changes that remove security scanning steps weaken the entire quality process. These issues are often invisible until an incident occurs because infrastructure misconfigurations rarely produce immediate errors.

**The fix:**

Treat infrastructure code as first-class code. Assign reviewers with infrastructure expertise to PRs that modify config files. Integrate infrastructure linting tools (tflint, checkov, kube-linter, hadolint) into CI. Establish infrastructure review checklists covering: access controls, encryption, resource limits, network exposure, and secret management. Require that any CI/CD pipeline change is reviewed by at least two people.

**Detection rule:**

Track review comments on infrastructure files vs. application files. If infrastructure files receive zero review comments in more than 50% of PRs that include them, config blindness is present. Run infrastructure scanning tools and compare findings against review comments -- if tools catch issues that reviewers did not mention, human review of infrastructure is insufficient.

---

### AP-17: Seniority Auto-Approval

**Also known as:** Trust-the-Title, The Untouchable's Code, Hierarchy-Driven Review
**Frequency:** Common
**Severity:** High
**Detection difficulty:** Hard

**What it looks like:**

When a staff engineer or tech lead submits a PR, it receives instant approval with minimal scrutiny. Junior reviewers assume the code is correct because of the author's seniority and feel uncomfortable pushing back. The same code from a junior developer would receive 15 comments; from the senior, it receives "LGTM" in under a minute. Authority bias -- the documented tendency to attribute greater accuracy to authority figures -- drives this pattern even when reviewers notice potential issues.

**Why reviewers do it:**

Questioning a senior engineer's code feels socially risky. Junior developers fear being wrong and looking foolish. Senior engineers do, on average, produce fewer bugs, which makes deference seem rational. Seniors may have earned trust through a track record of quality -- but trust should adjust review depth, not eliminate review. Some organizations create explicit review exemptions for senior staff, institutionalizing the anti-pattern. Psychology research confirms that people with less structural power hesitate to push back against those with more power, even when they are technically correct.

**What goes wrong:**

Senior engineers are not immune to bugs, oversights, or blind spots. They may be less familiar with recent changes to the codebase, less current on new security vulnerabilities, or simply tired. Code that bypasses review creates knowledge silos -- nobody else understands the senior's changes. The culture of deference prevents junior developers from developing review skills. When the senior makes a mistake (and they will), nobody catches it. Google's engineering practices explicitly state that even the most experienced developers should have their code reviewed.

**The fix:**

Make review requirements uniform regardless of author seniority. Actively encourage junior developers to review senior developers' code and publicly praise them when they catch issues. Frame review as knowledge sharing, not error detection -- "I want to understand this so I can maintain it" is a valid and non-threatening review stance. Senior developers should model humility by thanking reviewers for catching their mistakes.

**Detection rule:**

Compare review metrics by author seniority: approval time, number of review comments, and number of review rounds. If senior engineers' PRs are approved significantly faster with fewer comments than junior engineers' PRs of comparable size, seniority bias is present. Track whether junior developers ever leave "Request Changes" on senior developers' PRs -- if the rate is near zero, deference is suppressing legitimate feedback.
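Grouping review data by author seniority is a few lines once it is exported. Each record below is `(author_seniority, hours_to_approval, review_comment_count)`; the field layout is illustrative, not any platform's schema:

```python
from statistics import mean

# Sketch of the seniority comparison above.

def review_stats_by_seniority(reviews):
    """Average approval time and comment count, grouped by author seniority."""
    groups = {}
    for level, hours, comments in reviews:
        groups.setdefault(level, []).append((hours, comments))
    return {
        level: {
            "avg_hours_to_approval": mean(h for h, _ in rows),
            "avg_comments": mean(c for _, c in rows),
        }
        for level, rows in groups.items()
    }

stats = review_stats_by_seniority([
    ("senior", 0.2, 0), ("senior", 0.5, 1),
    ("junior", 6.0, 9), ("junior", 10.0, 14),
])
# Senior PRs approved in minutes with ~0.5 comments; junior PRs in hours with
# far more comments on comparable changes -> a seniority-bias signal worth a look
```

Control for PR size before reading too much into the gap: seniors may genuinely ship smaller, cleaner diffs.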

---

### AP-18: Not Reviewing Error Handling Paths

**Also known as:** Happy Path Only, Exception Blindness, "What Could Go Wrong?"
**Frequency:** Very Common
**Severity:** High
**Detection difficulty:** Hard

**What it looks like:**

The reviewer traces the main execution path and confirms it works correctly. They do not ask: "What happens if this API call fails?", "What if this file does not exist?", "What if the user provides an empty string?", "What if the database connection is lost mid-transaction?" Error handling code -- catch blocks, error responses, fallback logic, retry mechanisms -- receives no scrutiny. The happy path ships reviewed; the sad path ships unreviewed.

**Why reviewers do it:**

The happy path is the narrative of the code -- it tells the story of what the code is supposed to do, which is naturally what reviewers follow. Error paths are branches off the main narrative that require imagining failure scenarios. This requires pessimistic thinking that is cognitively expensive and emotionally unappealing. Error handling code is often boilerplate-looking (catch-log-rethrow), making it seem unimportant.

**What goes wrong:**

Error handling is where production incidents live. Ding Yuan et al. found that 92% of catastrophic failures in distributed systems were caused by incorrect error handling -- and that simple testing of error handlers would have prevented them. Missing error handling causes silent data corruption, unrecoverable state, cascading failures, and poor user experience. A swallowed exception that logs nothing means a production issue will be invisible until a user reports it. An error handler that retries infinitely without backoff will turn a transient failure into a denial-of-service. Incomplete transaction rollbacks leave databases in inconsistent states.

**The fix:**

Add explicit error-handling questions to the review checklist: "What happens on timeout?", "What happens on invalid input?", "Are errors logged with sufficient context?", "Are transactions rolled back on failure?", "Are retries bounded with backoff?" Require that test files include error-path tests -- if the tests only cover the happy path, the review should flag it. Use chaos engineering principles to think adversarially during review.
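To make the adversarial questions concrete, here is a toy example of the error-path coverage a reviewer should expect next to the happy-path test. The client and its retry budget are illustrative:

```python
# Toy client with bounded retries (all names illustrative), plus the
# error-path checks a reviewer should look for alongside the happy path.

class TransientError(Exception):
    """Stand-in for a timeout / connection-reset style failure."""

def fetch_with_retry(call, max_attempts=3):
    """Retry a flaky call a bounded number of times, then re-raise."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return call()
        except TransientError as exc:
            last_error = exc  # bounded: no infinite retry on persistent failure
    raise last_error

# Happy path
assert fetch_with_retry(lambda: "ok") == "ok"

# Error path 1: failure recovers within the retry budget
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("timeout")
    return "recovered"
assert fetch_with_retry(flaky) == "recovered"

# Error path 2: budget exhausted -> the error surfaces instead of being swallowed
def always_down():
    raise TransientError("service down")
try:
    fetch_with_retry(always_down)
except TransientError:
    pass  # expected: the caller can see and handle the failure
else:
    raise AssertionError("expected TransientError to propagate")
```

If a PR's tests contain only the first assertion, that is exactly the flag this anti-pattern describes. (A production version would also add backoff between attempts.)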
526
+
527
+ **Detection rule:**
528
+
529
+ Examine test files in PRs. If fewer than 20% of test cases cover error scenarios (timeout, invalid input, connection failure, permission denied), error path testing is neglected. Review catch/except blocks: if more than 50% contain only a log statement with no recovery logic, error handling is superficial.
530
+
531
+ ---
532
+
533
### AP-19: No Solution Suggestions

**Also known as:** Criticism Without Contribution, The Problem Pointer, Tear-Down Review
**Frequency:** Common
**Severity:** High
**Detection difficulty:** Easy

**What it looks like:**

Every review comment identifies a problem but offers no direction: "This won't scale," "This is wrong," "There's a better way to do this." The reviewer acts as a fault-finder rather than a collaborator. The author is left to guess at what the reviewer would accept. Multiple iterations follow as the author proposes solutions that the reviewer rejects -- again without suggesting alternatives. ACM research on effective teaching through code reviews found that explanatory rationale and sample solutions backed by standards significantly improved learning outcomes, while harsh comments and nonpragmatic reviewing that ignores authors' constraints hindered learning.

**Why reviewers do it:**

Identifying problems is easier than solving them. Some reviewers believe their job is to point out issues, and the author's job is to fix them. Offering solutions requires more effort and makes the reviewer vulnerable to having their suggestion criticized. Time pressure leads to terse comments. Some reviewers have an intuition that something is wrong but cannot articulate what the correct approach should be.

**What goes wrong:**

Authors feel attacked rather than supported. Without a suggested direction, they may "fix" the issue in a way that introduces a different problem, triggering another review round (see AP-14). The review becomes adversarial: reviewer as judge, author as defendant. Junior developers, who need guidance the most, receive the least actionable feedback. The team's collective problem-solving ability is wasted because reviewers contribute only half the value -- the diagnosis without the treatment.

**The fix:**

Adopt the "if you flag it, suggest a fix" convention. Review comments should follow the pattern: "I see [problem] because [reason]. Consider [alternative] -- here is an example: [code snippet]." If you cannot suggest a solution, frame the comment as a question: "I'm not sure this handles X -- could you walk me through the expected behavior?" For complex issues where a comment is insufficient, offer to pair with the author to work through the solution.

**Detection rule:**

Sample 50 review comments and categorize them as (a) problem only, (b) problem + suggestion, or (c) question. If category (a) exceeds 50%, the team has a solution-suggestion deficit. Track whether "Request Changes" reviews include at least one concrete suggestion -- if they do not, reviewers are blocking without contributing.

---

### AP-20: Review Delayed Weeks

**Also known as:** The Forgotten PR, PR Graveyard, Queue Rot
**Frequency:** Common
**Severity:** High
**Detection difficulty:** Easy

**What it looks like:**

A PR is submitted on Monday. By Friday, no reviewer has looked at it. The author pings the reviewer. The reviewer says "I'll get to it." The following Wednesday, the review arrives -- but the codebase has moved on, the PR has merge conflicts, and the author has context-switched to a different task. Resolving conflicts and re-engaging with the PR takes another day. A change that should have taken 2 days from code to merge takes 12. Google's research shows that their median review turnaround is about 4 hours -- they consider fast turnaround essential to developer productivity, and 97% of their developers report satisfaction with the review process.

**Why reviewers do it:**

Review is often not recognized as productive work. Teams that measure output by commits or story points implicitly devalue review time. Reviewers prioritize their own coding tasks and treat reviews as interruptible, low-priority work. Without SLAs or visibility into review queue depth, there is no accountability for delays. Some reviewers batch reviews to a single time slot per day or per week, which is efficient for the reviewer but costly for the author.

**What goes wrong:**

Context switching costs compound: the author forgets the details of their own PR and must re-engage. Merge conflicts increase with delay, sometimes requiring significant rework. The author's next task may depend on the PR being merged, creating a cascade of delays. Frustrated authors bypass the review process (see AP-03), merge without approval, or stop submitting small, frequent PRs in favor of large batches (see AP-04) to amortize the review wait time. Prolonged review cycles are one of the top drivers of developer dissatisfaction with the review process.

**The fix:**

Set a team SLA for initial review response: 4 hours for small PRs (under 200 lines), 24 hours for larger PRs. Make review queue depth visible on a team dashboard. Assign backup reviewers who take over if the primary does not respond within the SLA. Include review turnaround time in team health metrics alongside cycle time and deploy frequency. Schedule dedicated review time -- 30 minutes at the start of each day -- rather than treating reviews as interrupt-driven.

**Detection rule:**

Track time-to-first-review-comment for all PRs. If the median exceeds 24 hours or the 90th percentile exceeds 3 business days, review delays are systemic. Count PRs that are open for more than 5 business days without any review activity -- these are "forgotten PRs" and should trigger alerts.
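Both signals fall out of two timestamps per PR. In this sketch the field names are illustrative, and calendar days stand in for business days to keep it short:

```python
from datetime import datetime, timedelta
from statistics import median

# Sketch of the delay metrics above: median time to first review, plus
# "forgotten PRs" open more than 5 days with no review activity.

def review_delay_report(prs, now):
    delays = [
        pr["first_review"] - pr["opened"]
        for pr in prs
        if pr["first_review"] is not None
    ]
    forgotten = [
        pr for pr in prs
        if pr["first_review"] is None and now - pr["opened"] > timedelta(days=5)
    ]
    return {
        "median_delay": median(delays) if delays else None,
        "forgotten": forgotten,  # candidates for an alert
    }

report = review_delay_report(
    [
        {"opened": datetime(2024, 5, 13), "first_review": datetime(2024, 5, 14)},
        {"opened": datetime(2024, 5, 10), "first_review": None},
    ],
    now=datetime(2024, 5, 20),
)
# median_delay is 1 day; the second PR is a "forgotten PR" worth alerting on
```

Run this on a schedule and post the forgotten list to the team channel -- visibility alone tends to shrink the queue.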

---

+ ## Root Cause Analysis
592
+
593
+ | Root Cause | Anti-Patterns It Drives | Systemic Fix |
594
+ |---|---|---|
595
+ | Review treated as gate, not practice | AP-01, AP-03, AP-12, AP-17 | Reframe review as collaborative learning; track quality metrics, not just approval speed |
596
+ | No automated style enforcement | AP-02, AP-06, AP-13, AP-14 | Linters, formatters, and pre-commit hooks eliminate mechanical feedback |
597
+ | Power dynamics and hierarchy | AP-07, AP-10, AP-17, AP-19 | Psychological safety training, bidirectional review rotation, written escalation process |
598
+ | Missing security and infra expertise | AP-15, AP-16, AP-18 | Security checklists, SAST in CI, cross-training, infrastructure linting tools |
599
+ | No review SLAs or accountability | AP-01, AP-03, AP-11, AP-20 | Time-to-review dashboards, SLAs, backup reviewer rotation |
600
+ | Large changeset culture | AP-04, AP-05, AP-09, AP-14 | PR size limits, feature flags, stacked PRs, mandatory descriptions |
601
+ | Review effort is invisible | AP-01, AP-08, AP-12, AP-20 | Track review depth metrics, recognize review contributions in performance evaluations |
602
+ | Feedback quality not measured | AP-02, AP-10, AP-11, AP-19 | Periodic comment audits, feedback training, review code of conduct |
603
+ | Cognitive shortcuts under load | AP-08, AP-09, AP-12, AP-18 | Reduce review load through smaller PRs; assign domain-expert reviewers |
604
+ | Cultural normalization of shortcuts | AP-01, AP-03, AP-17 | Leadership modeling thorough review; celebrate caught bugs, not fast approvals |
605
+ | Missing design review phase | AP-03, AP-04, AP-14 | Lightweight RFC/design doc before implementation; draft PRs for early feedback |
606
+ | No PR standards enforced | AP-04, AP-05, AP-08 | PR templates; size limits in CI; test-coverage gates |
607
+ | Knowledge silos | AP-07, AP-09, AP-17 | Rotate reviewers; pair reviews; architecture documentation |
608
+
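+ Two of the fixes in the table, PR size limits and mandatory descriptions, are cheap to automate in CI. A minimal sketch of such a gate, with thresholds mirroring the ones used elsewhere in this document (a real job would read the line count and description from the PR event payload or `git diff --shortstat` rather than hard-coding them):
+
+ ```python
+ def check_pr(lines_changed: int, description: str,
+              max_lines: int = 400, min_description: int = 50) -> list:
+     """Return a list of gate violations; an empty list means the PR passes."""
+     problems = []
+     if lines_changed > max_lines:
+         problems.append(f"{lines_changed} lines changed (limit {max_lines}); "
+                         "split the work or use stacked PRs")
+     desc = description.strip()
+     if len(desc) < min_description:
+         problems.append(f"description is {len(desc)} chars "
+                         f"(minimum {min_description}); explain *why* the change is needed")
+     return problems
+
+ # A 612-line PR with a throwaway description fails both gates.
+ violations = check_pr(612, "fix stuff")
+ ```
+
+ Failing the CI job on a non-empty result turns two of the root-cause fixes into an enforced standard rather than a convention.
+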
+ ## Self-Check Questions
+
+ Use these during retrospectives, review process audits, or team health checks:
+
+ 1. When was the last time you left a substantive comment on a PR you approved? If you cannot remember, you may be rubber stamping.
+ 2. Of your last ten review comments, how many addressed logic, security, or design versus formatting and naming? If fewer than half addressed substance, you are likely over-indexing on style.
+ 3. What is your team's median time from PR submission to first review comment? Is it under 24 hours?
+ 4. What is the average size of PRs on your team? If it exceeds 400 lines, you are likely not decomposing work enough -- and reviews are suffering for it.
+ 5. Open your last five PRs. Do they each have a description that explains *why* the change was made? Could a new team member understand the context from the description alone?
+ 6. Have you ever blocked a PR solely for a style preference that is not in the team's style guide? If yes, consider whether a style guide update is more appropriate than blocking a PR.
+ 7. Is there one person on the team whose vacation would halt all merges? If so, you have a single point of failure in your review process.
+ 8. In your last review, did you read the test files line by line, or did you just verify that tests existed and CI was green?
+ 9. Can you name the OWASP Top 10 categories? If you reviewed a PR with a SQL injection vulnerability, would you recognize it in code?
+ 10. For the last PR you reviewed that included an API call or database query, did you verify what happens when that call fails? Did you check for timeout handling?
+ 11. When you review a PR that includes a Dockerfile, Terraform file, or CI pipeline change, do you review those files with the same rigor as application code?
+ 12. Read your last five review comments. For each one that identified a problem, did you suggest a solution or an alternative approach?
+ 13. Do you read the entire PR before leaving your first comment, or do you comment as you go and potentially miss issues that would change your earlier feedback?
+ 14. When was the last time you pushed back on a senior developer's PR? If the answer is "never," consider whether authority bias is influencing your reviews.
+ 15. When you leave comments on a PR, do you always return to verify the fixes and conclude your review, or do you sometimes leave PRs in an unresolved state?
+
+ ## Code Smell Quick Reference
630
+
631
+ | Smell | Typical Indicator | Related Anti-Pattern | Automated Detection |
632
+ |---|---|---|---|
633
+ | Instant approvals | Approval in < 2 min for 100+ line PRs | AP-01: Rubber Stamping | PR platform analytics (time-to-approve) |
634
+ | Style-dominated feedback | > 60% of comments on formatting | AP-02: Nitpick Focus | Comment categorization audit |
635
+ | Unreviewed merges | PRs merged before or without approval | AP-03: Too-Late Review | Branch protection audit, merge log analysis |
636
+ | Giant diffs | PRs with > 500 lines changed | AP-04: Huge PRs | PR size tracking in CI (Danger, PR Size Labeler) |
637
+ | Empty descriptions | PR description < 50 chars or blank | AP-05: No PR Description | CI check on description length |
638
+ | Preference blocking | "Request Changes" on stylistic grounds | AP-06: Blocking on Style | Review comment vs. status correlation analysis |
639
+ | Reviewer concentration | One person reviews > 40% of PRs | AP-07: Gatekeeping Reviews | Git platform reviewer distribution report |
640
+ | Test file neglect | Zero review comments on test files | AP-08: Not Reviewing Tests | Comment location analysis (test vs. production files) |
641
+ | Diff-only viewing | Reviewers never expand context or check out branch | AP-09: Only Reviewing Changed Files | Review tool analytics (context expansion rate) |
642
+ | Hostile language | Personal attacks, sarcasm, shame in comments | AP-10: Toxic Comments | NLP sentiment analysis on review comments |
643
+ | Orphaned comments | Comments from non-assigned reviewers with no follow-up | AP-11: Drive-By Reviews | Comment author vs. assigned reviewer comparison |
644
+ | No logic questions | Zero "what happens if" or "why" questions in review | AP-12: Syntax-Only Review | Comment content pattern analysis |
645
+ | No lint in CI | Style comments that a linter would catch | AP-13: No Automation | CI configuration audit for linter steps |
646
+ | High round count | > 3 review rounds per PR | AP-14: Review Ping-Pong | PR review round counter |
647
+ | No security comments | PRs touching auth/input with zero security discussion | AP-15: Ignoring Security | Comment topic analysis on security-tagged PRs |
648
+ | Skipped config files | Zero comments on YAML/Terraform/Dockerfile changes | AP-16: Not Reviewing Config | Comment location vs. file type analysis |
649
+ | Seniority speed gap | Senior PRs approved 5x faster than junior PRs | AP-17: Seniority Auto-Approval | Approval time segmented by author level |
650
+ | Happy-path-only tests | < 20% of test cases cover error scenarios | AP-18: Not Reviewing Error Handling | Test case categorization audit |
651
+ | Criticism without direction | > 50% of comments identify problems without suggestions | AP-19: No Solution Suggestions | Comment structure analysis |
652
+ | Stale PR queue | PRs open > 5 days without review activity | AP-20: Review Delayed Weeks | PR age dashboard, SLA violation alerts |
653
+
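+ Several of the detection columns above reduce to simple queries over exported review metadata. A minimal sketch for two of them, instant approvals (AP-01) and reviewer concentration (AP-07), with invented data standing in for a platform export:
+
+ ```python
+ from collections import Counter
+
+ # Hypothetical export: (reviewer, lines changed, minutes to approval).
+ reviews = [
+     ("dana", 340, 1.5), ("dana", 120, 25.0), ("dana", 80, 12.0),
+     ("sam", 210, 40.0), ("lee", 150, 1.0),
+ ]
+
+ # AP-01 signal: approval in under 2 minutes on a 100+ line PR.
+ instant = [(who, size) for who, size, mins in reviews
+            if size >= 100 and mins < 2]
+
+ # AP-07 signal: one person handling more than 40% of all reviews.
+ counts = Counter(who for who, _, _ in reviews)
+ gatekeepers = [who for who, c in counts.items() if c / len(reviews) > 0.4]
+ ```
+
+ Run over a rolling 30- or 90-day window, these two lists feed the dashboards and SLA alerts referenced in the table.
+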
+ ---
+
+ *Researched: 2026-03-08 | Sources: [Modern Code Review: A Case Study at Google (ICSE 2018)](https://sback.it/publications/icse2018seip.pdf), [Expectations, Outcomes, and Challenges of Modern Code Review (Microsoft Research)](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ICSE202013-codereview.pdf), [Google Engineering Practices: The Standard of Code Review](https://google.github.io/eng-practices/review/reviewer/standard.html), [Code Reviews at Google (Dr. Michaela Greiler)](https://www.michaelagreiler.com/code-reviews-at-google/), [30 Proven Code Review Best Practices from Microsoft (Dr. Michaela Greiler)](https://www.michaelagreiler.com/code-review-best-practices/), [Unlearning Toxic Behaviors in a Code Review Culture (Sandya Sankarram)](https://medium.com/@sandya.sankarram/unlearning-toxic-behaviors-in-a-code-review-culture-b7c295452a3c), [Code Review Anti-Patterns (DEV Community)](https://dev.to/adam_b/code-review-anti-patterns-2e6a), [Code Review Antipatterns (Simon Tatham)](https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/code-review-antipatterns/), [5 Code Review Anti-Patterns (CodeRabbit)](https://www.coderabbit.ai/blog/5-code-review-anti-patterns-you-can-eliminate-with-ai), [Effective Teaching through Code Reviews: Patterns and Anti-Patterns (ACM)](https://dl.acm.org/doi/10.1145/3660764), [Why Code Reviews Shouldn't Be Gatekeeping](https://medium.com/@madhav2002/why-code-reviews-shouldnt-be-gatekeeping-7770384c0f67), [Please Don't Rubber Stamp Code Reviews (Chromium)](https://groups.google.com/a/chromium.org/g/chromium-dev/c/b0Lb_mXfp0Y), [The Rubber Stamp Engineer](https://virtuallyscott.medium.com/the-rubber-stamp-engineer-how-bad-code-review-culture-kills-good-engineers-46a4ae224e9f), [Proof Thousand-Line PRs Create More Bugs](https://tekin.co.uk/2020/05/proof-your-thousand-line-pull-requests-create-more-bugs), [Code-Review Ping-Pong (Level Up Coding)](https://levelup.gitconnected.com/code-review-ping-pong-why-it-happens-and-how-to-end-the-rally-0e13d3af72b1), [OWASP Secure Code Review Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Secure_Code_Review_Cheat_Sheet.html), [5 Signs of a Toxic Code Review Culture (SubMain)](https://blog.submain.com/toxic-code-review-culture/), [What is Rubber Stamping (Core Security)](https://www.coresecurity.com/blog/what-rubber-stamping-and-why-it-serious-cybersecurity-concern), [30% Less is More: Code Review Strategies (GitClear)](https://www.gitclear.com/research_studies/pull_request_diff_methods_comparison_faster_review), [The Psychology of Code Reviews (Java Code Geeks)](https://www.javacodegeeks.com/2026/01/the-psychology-of-code-reviews-why-smart-developers-accept-bad-suggestions.html), [Every Developer Should Review Code (Zenika)](https://dev.to/zenika/every-developer-should-review-code-not-just-seniors-2abc), [Simple Testing Can Prevent Most Critical Failures (Yuan et al., OSDI 2014)](https://www.usenix.org/conference/osdi14/technical-sessions/presentation/yuan), [Czerwonka et al., Code Reviews Do Not Find Bugs (Microsoft, 2015)](https://www.microsoft.com/en-us/research/publication/code-reviews-do-not-find-bugs-how-the-current-code-review-best-practice-slows-us-down/), [IEEE SANER 2021, Anti-patterns in Modern Code Review](https://ieeexplore.ieee.org/document/9425999), SmartBear/Cisco Code Review Study, Knight Capital post-mortem*