cov-loupe 3.0.0 → 4.0.0.pre

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (281)
  1. checksums.yaml +4 -4
  2. data/AGENTS.md +230 -0
  3. data/CLAUDE.md +5 -0
  4. data/CODE_OF_CONDUCT.md +62 -0
  5. data/CONTRIBUTING.md +102 -0
  6. data/GEMINI.md +5 -0
  7. data/README.md +154 -51
  8. data/RELEASE_NOTES.md +452 -0
  9. data/dev/images/cov-loupe-icon-lores.png +0 -0
  10. data/dev/images/cov-loupe-icon-square.png +0 -0
  11. data/dev/images/cov-loupe-icon.png +0 -0
  12. data/dev/images/cov-loupe-logo.png +0 -0
  13. data/dev/prompts/README.md +74 -0
  14. data/dev/prompts/archive/architectural-review-and-actions-prompt.md +53 -0
  15. data/dev/prompts/archive/investigate-and-report-issues-prompt.md +33 -0
  16. data/dev/prompts/archive/produce-action-items-prompt.md +25 -0
  17. data/dev/prompts/guidelines/ai-code-evaluator-guidelines.md +337 -0
  18. data/dev/prompts/improve/refactor-test-suite.md +18 -0
  19. data/dev/prompts/improve/simplify-code-logic.md +133 -0
  20. data/dev/prompts/improve/update-documentation.md +21 -0
  21. data/dev/prompts/review/comprehensive-codebase-review.md +176 -0
  22. data/dev/prompts/review/identify-action-items.md +143 -0
  23. data/dev/prompts/review/verify-code-changes.md +54 -0
  24. data/dev/prompts/validate/create-screencast-outline.md +234 -0
  25. data/dev/prompts/validate/test-documentation-examples.md +180 -0
  26. data/docs/QUICKSTART.md +63 -0
  27. data/docs/assets/images/cov-loupe-logo-lores.png +0 -0
  28. data/docs/assets/images/cov-loupe-logo.png +0 -0
  29. data/docs/assets/images/favicon.png +0 -0
  30. data/docs/assets/stylesheets/branding.css +16 -0
  31. data/docs/assets/stylesheets/extra.css +15 -0
  32. data/docs/code_of_conduct.md +1 -0
  33. data/docs/contributing.md +1 -0
  34. data/docs/dev/ARCHITECTURE.md +56 -11
  35. data/docs/dev/DEVELOPMENT.md +116 -12
  36. data/docs/dev/FUTURE_ENHANCEMENTS.md +14 -0
  37. data/docs/dev/README.md +3 -2
  38. data/docs/dev/RELEASING.md +2 -0
  39. data/docs/dev/arch-decisions/README.md +10 -7
  40. data/docs/dev/arch-decisions/application-architecture.md +259 -0
  41. data/docs/dev/arch-decisions/coverage-data-quality.md +193 -0
  42. data/docs/dev/arch-decisions/output-character-mode.md +217 -0
  43. data/docs/dev/arch-decisions/path-resolution.md +90 -0
  44. data/docs/dev/arch-decisions/{004-x-arch-decision.md → policy-validation.md} +32 -28
  45. data/docs/dev/arch-decisions/{005-x-arch-decision.md → simplecov-integration.md} +47 -44
  46. data/docs/dev/presentations/cov-loupe-presentation.md +15 -13
  47. data/docs/examples/mcp-inputs.md +3 -0
  48. data/docs/examples/prompts.md +3 -0
  49. data/docs/examples/success_predicates.md +3 -0
  50. data/docs/fixtures/demo_project/.resultset.json +170 -0
  51. data/docs/fixtures/demo_project/README.md +6 -0
  52. data/docs/fixtures/demo_project/app/controllers/admin/audit_logs_controller.rb +19 -0
  53. data/docs/fixtures/demo_project/app/controllers/orders_controller.rb +26 -0
  54. data/docs/fixtures/demo_project/app/models/order.rb +20 -0
  55. data/docs/fixtures/demo_project/app/models/user.rb +19 -0
  56. data/docs/fixtures/demo_project/lib/api/client.rb +22 -0
  57. data/docs/fixtures/demo_project/lib/ops/jobs/cleanup_job.rb +16 -0
  58. data/docs/fixtures/demo_project/lib/ops/jobs/report_job.rb +17 -0
  59. data/docs/fixtures/demo_project/lib/payments/processor.rb +15 -0
  60. data/docs/fixtures/demo_project/lib/payments/refund_service.rb +15 -0
  61. data/docs/fixtures/demo_project/lib/payments/reporting/exporter.rb +16 -0
  62. data/docs/index.md +1 -0
  63. data/docs/license.md +3 -0
  64. data/docs/release_notes.md +3 -0
  65. data/docs/user/ADVANCED_USAGE.md +208 -115
  66. data/docs/user/CLI_FALLBACK_FOR_LLMS.md +2 -0
  67. data/docs/user/CLI_USAGE.md +276 -101
  68. data/docs/user/ERROR_HANDLING.md +4 -4
  69. data/docs/user/EXAMPLES.md +121 -128
  70. data/docs/user/INSTALLATION.md +9 -28
  71. data/docs/user/LIBRARY_API.md +227 -122
  72. data/docs/user/MCP_INTEGRATION.md +114 -203
  73. data/docs/user/README.md +5 -1
  74. data/docs/user/TROUBLESHOOTING.md +49 -27
  75. data/docs/user/installing-a-prelease-version-of-covloupe.md +43 -0
  76. data/docs/user/{V2-BREAKING-CHANGES.md → migrations/MIGRATING_TO_V2.md} +62 -72
  77. data/docs/user/migrations/MIGRATING_TO_V3.md +72 -0
  78. data/docs/user/migrations/MIGRATING_TO_V4.md +591 -0
  79. data/docs/user/migrations/README.md +22 -0
  80. data/docs/user/prompts/README.md +9 -0
  81. data/docs/user/prompts/non-web-coverage-analysis-prompt.md +103 -0
  82. data/docs/user/prompts/rails-coverage-analysis-prompt.md +94 -0
  83. data/docs/user/prompts/use-cli-not-mcp-prompt.md +53 -0
  84. data/examples/cli_demo.sh +77 -0
  85. data/examples/filter_and_table_demo-output.md +114 -0
  86. data/examples/filter_and_table_demo.rb +174 -0
  87. data/examples/fixtures/demo_project/coverage/.resultset.json +10 -0
  88. data/examples/mcp-inputs/README.md +66 -0
  89. data/examples/mcp-inputs/coverage_detailed.json +1 -0
  90. data/examples/mcp-inputs/coverage_raw.json +1 -0
  91. data/examples/mcp-inputs/coverage_summary.json +1 -0
  92. data/examples/mcp-inputs/list.json +1 -0
  93. data/examples/mcp-inputs/uncovered_lines.json +1 -0
  94. data/examples/prompts/README.md +27 -0
  95. data/examples/prompts/custom_resultset.txt +2 -0
  96. data/examples/prompts/detailed_with_source.txt +2 -0
  97. data/examples/prompts/list_lowest.txt +2 -0
  98. data/examples/prompts/summary.txt +2 -0
  99. data/examples/prompts/uncovered.txt +2 -0
  100. data/examples/success_predicates/README.md +198 -0
  101. data/examples/success_predicates/all_files_above_threshold_predicate.rb +21 -0
  102. data/examples/success_predicates/directory_specific_thresholds_predicate.rb +30 -0
  103. data/examples/success_predicates/project_coverage_minimum_predicate.rb +6 -0
  104. data/lib/cov_loupe/base_tool.rb +229 -20
  105. data/lib/cov_loupe/cli.rb +132 -23
  106. data/lib/cov_loupe/commands/base_command.rb +25 -6
  107. data/lib/cov_loupe/commands/command_factory.rb +0 -1
  108. data/lib/cov_loupe/commands/detailed_command.rb +10 -5
  109. data/lib/cov_loupe/commands/list_command.rb +2 -1
  110. data/lib/cov_loupe/commands/raw_command.rb +7 -5
  111. data/lib/cov_loupe/commands/summary_command.rb +12 -7
  112. data/lib/cov_loupe/commands/totals_command.rb +74 -10
  113. data/lib/cov_loupe/commands/uncovered_command.rb +7 -5
  114. data/lib/cov_loupe/commands/validate_command.rb +11 -3
  115. data/lib/cov_loupe/commands/version_command.rb +6 -4
  116. data/lib/cov_loupe/{app_config.rb → config/app_config.rb} +13 -5
  117. data/lib/cov_loupe/config/app_context.rb +43 -0
  118. data/lib/cov_loupe/config/boolean_type.rb +91 -0
  119. data/lib/cov_loupe/config/logger.rb +92 -0
  120. data/lib/cov_loupe/{option_normalizers.rb → config/option_normalizers.rb} +55 -24
  121. data/lib/cov_loupe/{option_parser_builder.rb → config/option_parser_builder.rb} +46 -24
  122. data/lib/cov_loupe/coverage/coverage_calculator.rb +53 -0
  123. data/lib/cov_loupe/coverage/coverage_reporter.rb +63 -0
  124. data/lib/cov_loupe/coverage/coverage_table_formatter.rb +133 -0
  125. data/lib/cov_loupe/{error_handler.rb → errors/error_handler.rb} +21 -33
  126. data/lib/cov_loupe/{errors.rb → errors/errors.rb} +48 -71
  127. data/lib/cov_loupe/formatters/formatters.rb +75 -0
  128. data/lib/cov_loupe/formatters/source_formatter.rb +18 -7
  129. data/lib/cov_loupe/formatters/table_formatter.rb +80 -0
  130. data/lib/cov_loupe/loaders/all.rb +15 -0
  131. data/lib/cov_loupe/loaders/all_cli.rb +10 -0
  132. data/lib/cov_loupe/loaders/all_mcp.rb +23 -0
  133. data/lib/cov_loupe/loaders/resultset_loader.rb +147 -0
  134. data/lib/cov_loupe/mcp_server.rb +3 -2
  135. data/lib/cov_loupe/model/model.rb +520 -0
  136. data/lib/cov_loupe/model/model_data.rb +13 -0
  137. data/lib/cov_loupe/model/model_data_cache.rb +116 -0
  138. data/lib/cov_loupe/option_parsers/env_options_parser.rb +17 -6
  139. data/lib/cov_loupe/option_parsers/error_helper.rb +16 -10
  140. data/lib/cov_loupe/output_chars.rb +192 -0
  141. data/lib/cov_loupe/paths/glob_utils.rb +100 -0
  142. data/lib/cov_loupe/{path_relativizer.rb → paths/path_relativizer.rb} +5 -13
  143. data/lib/cov_loupe/paths/path_utils.rb +265 -0
  144. data/lib/cov_loupe/paths/volume_case_sensitivity.rb +173 -0
  145. data/lib/cov_loupe/presenters/base_coverage_presenter.rb +9 -13
  146. data/lib/cov_loupe/presenters/coverage_payload_presenter.rb +21 -0
  147. data/lib/cov_loupe/presenters/payload_caching.rb +23 -0
  148. data/lib/cov_loupe/presenters/project_coverage_presenter.rb +73 -21
  149. data/lib/cov_loupe/presenters/project_totals_presenter.rb +16 -10
  150. data/lib/cov_loupe/repositories/coverage_repository.rb +149 -0
  151. data/lib/cov_loupe/resolvers/coverage_line_resolver.rb +90 -76
  152. data/lib/cov_loupe/resolvers/{resolver_factory.rb → resolver_helpers.rb} +6 -5
  153. data/lib/cov_loupe/resolvers/resultset_path_resolver.rb +40 -12
  154. data/lib/cov_loupe/scripts/command_execution.rb +113 -0
  155. data/lib/cov_loupe/scripts/latest_ci_status.rb +97 -0
  156. data/lib/cov_loupe/scripts/pre_release_check.rb +164 -0
  157. data/lib/cov_loupe/scripts/setup_doc_server.rb +23 -0
  158. data/lib/cov_loupe/scripts/start_doc_server.rb +24 -0
  159. data/lib/cov_loupe/staleness/stale_status.rb +23 -0
  160. data/lib/cov_loupe/staleness/staleness_checker.rb +328 -0
  161. data/lib/cov_loupe/staleness/staleness_message_formatter.rb +91 -0
  162. data/lib/cov_loupe/tools/coverage_detailed_tool.rb +14 -15
  163. data/lib/cov_loupe/tools/coverage_raw_tool.rb +14 -14
  164. data/lib/cov_loupe/tools/coverage_summary_tool.rb +16 -16
  165. data/lib/cov_loupe/tools/coverage_table_tool.rb +139 -21
  166. data/lib/cov_loupe/tools/coverage_totals_tool.rb +31 -13
  167. data/lib/cov_loupe/tools/help_tool.rb +16 -20
  168. data/lib/cov_loupe/tools/list_tool.rb +65 -0
  169. data/lib/cov_loupe/tools/uncovered_lines_tool.rb +14 -14
  170. data/lib/cov_loupe/tools/validate_tool.rb +18 -24
  171. data/lib/cov_loupe/tools/version_tool.rb +8 -3
  172. data/lib/cov_loupe/version.rb +1 -1
  173. data/lib/cov_loupe.rb +83 -55
  174. metadata +184 -154
  175. data/docs/dev/BRANCH_ONLY_COVERAGE.md +0 -158
  176. data/docs/dev/arch-decisions/001-x-arch-decision.md +0 -95
  177. data/docs/dev/arch-decisions/002-x-arch-decision.md +0 -159
  178. data/docs/dev/arch-decisions/003-x-arch-decision.md +0 -165
  179. data/lib/cov_loupe/app_context.rb +0 -26
  180. data/lib/cov_loupe/constants.rb +0 -22
  181. data/lib/cov_loupe/coverage_reporter.rb +0 -31
  182. data/lib/cov_loupe/formatters.rb +0 -51
  183. data/lib/cov_loupe/mode_detector.rb +0 -56
  184. data/lib/cov_loupe/model.rb +0 -339
  185. data/lib/cov_loupe/presenters/coverage_detailed_presenter.rb +0 -14
  186. data/lib/cov_loupe/presenters/coverage_raw_presenter.rb +0 -14
  187. data/lib/cov_loupe/presenters/coverage_summary_presenter.rb +0 -14
  188. data/lib/cov_loupe/presenters/coverage_uncovered_presenter.rb +0 -14
  189. data/lib/cov_loupe/resultset_loader.rb +0 -131
  190. data/lib/cov_loupe/staleness_checker.rb +0 -247
  191. data/lib/cov_loupe/table_formatter.rb +0 -64
  192. data/lib/cov_loupe/tools/all_files_coverage_tool.rb +0 -51
  193. data/lib/cov_loupe/util.rb +0 -88
  194. data/spec/MCP_INTEGRATION_TESTS_README.md +0 -111
  195. data/spec/TIMESTAMPS.md +0 -48
  196. data/spec/all_files_coverage_tool_spec.rb +0 -53
  197. data/spec/app_config_spec.rb +0 -142
  198. data/spec/base_tool_spec.rb +0 -62
  199. data/spec/cli/show_default_report_spec.rb +0 -33
  200. data/spec/cli_enumerated_options_spec.rb +0 -90
  201. data/spec/cli_error_spec.rb +0 -184
  202. data/spec/cli_format_spec.rb +0 -123
  203. data/spec/cli_json_options_spec.rb +0 -50
  204. data/spec/cli_source_spec.rb +0 -44
  205. data/spec/cli_spec.rb +0 -192
  206. data/spec/cli_table_spec.rb +0 -28
  207. data/spec/cli_usage_spec.rb +0 -42
  208. data/spec/commands/base_command_spec.rb +0 -107
  209. data/spec/commands/command_factory_spec.rb +0 -76
  210. data/spec/commands/detailed_command_spec.rb +0 -34
  211. data/spec/commands/list_command_spec.rb +0 -28
  212. data/spec/commands/raw_command_spec.rb +0 -69
  213. data/spec/commands/summary_command_spec.rb +0 -34
  214. data/spec/commands/totals_command_spec.rb +0 -34
  215. data/spec/commands/uncovered_command_spec.rb +0 -55
  216. data/spec/commands/validate_command_spec.rb +0 -213
  217. data/spec/commands/version_command_spec.rb +0 -38
  218. data/spec/constants_spec.rb +0 -61
  219. data/spec/cov_loupe/formatters/source_formatter_spec.rb +0 -267
  220. data/spec/cov_loupe/formatters_spec.rb +0 -76
  221. data/spec/cov_loupe/presenters/base_coverage_presenter_spec.rb +0 -79
  222. data/spec/cov_loupe_model_spec.rb +0 -454
  223. data/spec/cov_loupe_module_spec.rb +0 -37
  224. data/spec/cov_loupe_opts_spec.rb +0 -185
  225. data/spec/coverage_reporter_spec.rb +0 -102
  226. data/spec/coverage_table_tool_spec.rb +0 -59
  227. data/spec/coverage_totals_tool_spec.rb +0 -37
  228. data/spec/error_handler_spec.rb +0 -197
  229. data/spec/error_mode_spec.rb +0 -139
  230. data/spec/errors_edge_cases_spec.rb +0 -312
  231. data/spec/errors_stale_spec.rb +0 -83
  232. data/spec/file_based_mcp_tools_spec.rb +0 -99
  233. data/spec/help_tool_spec.rb +0 -26
  234. data/spec/integration_spec.rb +0 -789
  235. data/spec/logging_fallback_spec.rb +0 -128
  236. data/spec/mcp_logging_spec.rb +0 -44
  237. data/spec/mcp_server_integration_spec.rb +0 -23
  238. data/spec/mcp_server_spec.rb +0 -106
  239. data/spec/mode_detector_spec.rb +0 -153
  240. data/spec/model_error_handling_spec.rb +0 -269
  241. data/spec/model_staleness_spec.rb +0 -79
  242. data/spec/option_normalizers_spec.rb +0 -203
  243. data/spec/option_parsers/env_options_parser_spec.rb +0 -221
  244. data/spec/option_parsers/error_helper_spec.rb +0 -222
  245. data/spec/path_relativizer_spec.rb +0 -98
  246. data/spec/presenters/coverage_detailed_presenter_spec.rb +0 -19
  247. data/spec/presenters/coverage_raw_presenter_spec.rb +0 -15
  248. data/spec/presenters/coverage_summary_presenter_spec.rb +0 -15
  249. data/spec/presenters/coverage_uncovered_presenter_spec.rb +0 -16
  250. data/spec/presenters/project_coverage_presenter_spec.rb +0 -87
  251. data/spec/presenters/project_totals_presenter_spec.rb +0 -144
  252. data/spec/resolvers/coverage_line_resolver_spec.rb +0 -282
  253. data/spec/resolvers/resolver_factory_spec.rb +0 -61
  254. data/spec/resolvers/resultset_path_resolver_spec.rb +0 -60
  255. data/spec/resultset_loader_spec.rb +0 -167
  256. data/spec/shared_examples/README.md +0 -115
  257. data/spec/shared_examples/coverage_presenter_examples.rb +0 -66
  258. data/spec/shared_examples/file_based_mcp_tools.rb +0 -179
  259. data/spec/shared_examples/formatted_command_examples.rb +0 -64
  260. data/spec/shared_examples/mcp_tool_text_json_response.rb +0 -16
  261. data/spec/spec_helper.rb +0 -127
  262. data/spec/staleness_checker_spec.rb +0 -374
  263. data/spec/staleness_more_spec.rb +0 -42
  264. data/spec/support/cli_helpers.rb +0 -22
  265. data/spec/support/control_flow_helpers.rb +0 -20
  266. data/spec/support/fake_mcp.rb +0 -40
  267. data/spec/support/io_helpers.rb +0 -29
  268. data/spec/support/mcp_helpers.rb +0 -35
  269. data/spec/support/mcp_runner.rb +0 -66
  270. data/spec/support/mocking_helpers.rb +0 -30
  271. data/spec/table_format_spec.rb +0 -70
  272. data/spec/tools/validate_tool_spec.rb +0 -132
  273. data/spec/tools_error_handling_spec.rb +0 -130
  274. data/spec/util_spec.rb +0 -154
  275. data/spec/version_spec.rb +0 -123
  276. data/spec/version_tool_spec.rb +0 -141
  277. /data/{spec/fixtures/project1 → examples/fixtures/demo_project}/lib/bar.rb +0 -0
  278. /data/{spec/fixtures/project1 → examples/fixtures/demo_project}/lib/foo.rb +0 -0
  279. /data/lib/cov_loupe/{config_parser.rb → config/config_parser.rb} +0 -0
  280. /data/lib/cov_loupe/{predicate_evaluator.rb → config/predicate_evaluator.rb} +0 -0
  281. /data/lib/cov_loupe/{error_handler_factory.rb → errors/error_handler_factory.rb} +0 -0
@@ -0,0 +1,176 @@
+ # State of the Code Base Prompt
+
+ ### Preconditions
+
+ Before you begin the report:
+
+ 1. Always open the report by citing the most recent git commit at the time you begin writing.
+ 2. If `git status` shows uncommitted changes, inform me, ask for confirmation to proceed, and—if I consent—include those `git status` details immediately after the commit citation.
+ 3. Limit the review strictly to git-tracked files.
+
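For example, the opening commit citation and the uncommitted-change check could be gathered with standard git commands (shown here for illustration only):

```bash
git log -1 --oneline   # most recent commit, for the opening citation
git status --short     # uncommitted changes, if any, to report after the citation
git ls-files           # the git-tracked files that bound the review
```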
+ ----
+
+ - You are a senior software architect and code reviewer.
+ - Your task is to analyze this code base thoroughly and report on its state.
+ - Focus on identifying weaknesses, risks, and areas for improvement.
+ - Disregard any issues included in dev/prompts/guidelines/ai-code-evaluator-guidelines.md, unless your objections are not covered in that document.
+ - Repeating for emphasis: **Disregard any issues included in dev/prompts/guidelines/ai-code-evaluator-guidelines.md, unless your objections are not covered in that document.**
+ - For architectural issues, consult `docs/dev/arch-decisions` to see if the issue has already been considered.
+ - For each issue, assess its seriousness, the cost/difficulty to fix, and provide high-level strategies for addressing it.
+ - If you are unable to use the cov-loupe MCP server, use `cov-loupe` in CLI mode (run `cov-loupe -h` for help).
+ - To Codex: investigate thoroughly for real issues (you are excellent at that), but do not be excessively critical:
+ - Do not list issues that are not real issues.
+ - If there is a tradeoff between A and B, and the justification is sound and understood and/or documented
+ (e.g. in guidelines/ai-code-evaluator-guidelines.md), do not penalize the code base for that tradeoff.
+ - Be balanced in your scoring; do not deduct several points for a trivial issue.
+ - If you find zero defects in a category, score it a 10; if your review of that category was only a spot check, you may say so.
+
+
+ Write your analysis in a Markdown file whose name is:
+ - today's date in UTC `%Y-%m-%d-%H-%M` format +
+ - '-state-of-the-code-base-' +
+ - your name (e.g. 'codex', 'claude', 'gemini', 'zai') +
+ - the `.md` extension.
+
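A minimal shell sketch of that file-name construction (the agent name is illustrative):

```bash
agent="codex"   # substitute your own name
echo "$(date -u +%Y-%m-%d-%H-%M)-state-of-the-code-base-${agent}.md"
# => e.g. 2025-06-01-14-30-state-of-the-code-base-codex.md
```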
+ The file should have the following structure:
+
+ ---
+
+ ### Executive Summary
+ - Provide a concise overview of the overall health of the code base.
+ - Identify the strongest areas and the weakest areas.
+ - Give a **one-line summary verdict** (e.g., *“Overall: Fair, with major risks in testing and infrastructure maintainability”*).
+ - **Overall Weighted Score (1–10):** Show the score at the end of this summary.
+
+ ---
+
+ ### Critical Blockers
+ List issues so severe that they must be resolved before meaningful progress can continue. For each blocker, include:
+ - **Description**
+ - **Impact**
+ - **Urgency**
+ - **Estimated Cost-to-Fix** (High/Medium/Low)
+
+ ---
+
+ ### Architecture & Design
+ - Summarize the overall architecture (monolith, microservices, layered, etc.).
+ - Identify strengths and weaknesses.
+ - Highlight areas where complexity, coupling, or technical debt are high.
+ - Assess maintainability, scalability, and clarity.
+ - **Score (1–10)**
+
+ ---
+
+ ### Code Quality
+ - Identify recurring issues (duplication, inconsistent style, long methods, deeply nested logic, etc.).
+ - Point out readability and maintainability concerns.
+ - **Score (1–10)**
+
+ ---
+
+ ### Infrastructure Code
+ - Evaluate Dockerfiles, CI/CD pipelines, and Infrastructure-as-Code (Terraform, Ansible, etc.).
+ - Highlight brittle or outdated configurations.
+ - Identify risks in automation, deployment, or scaling.
+ - **Score (1–10)**
+
+ ---
+
+ ### Dependencies & External Integrations
+ - List major dependencies (frameworks, libraries, services).
+ - Note outdated or risky dependencies and upgrade costs.
+ - Assess vendor lock-in and integration fragility.
+ - **Score (1–10)**
+
+ ---
+
+ ### Test Coverage
+ - Using the **cov-loupe MCP server**, analyze the test coverage:
+ - Include a summary table of coverage by file/module.
+ - Report coverage at a high and general level.
+ - Rank risks of lacking coverage in **descending order of magnitude**.
+ - Highlight untested critical paths and potential consequences.
+ - Do not output the entire table in the report; you may include the 10 least-covered files if you believe that would be helpful.
+ - **Score (1–10)**
+
+ ---
+
+ ### Security & Reliability
+ - Identify insecure coding practices, hardcoded secrets, or missing validations.
+ - Assess error handling, fault tolerance, and resilience.
+ - **Score (1–10)**
+
+ ---
+
+ ### Documentation & Onboarding
+ - Evaluate inline docs, README quality, and onboarding flow.
+ - Identify missing/outdated documentation.
+ - **Score (1–10)**
+
+ ---
+
+ ### Performance & Efficiency
+ - Highlight bottlenecks or inefficient patterns.
+ - Suggest whether optimizations are low-cost or high-cost.
+ - **Score (1–10)**
+
+ ---
+
+ ### Formatting & Style Conformance
+ - Report **bad or erroneous formatting** (inconsistent whitespace, broken Markdown).
+ - Note whether style is consistent enough for maintainability.
+ - **Score (1–10)**
+
+ ---
+
+ ### Best Practices & Conciseness
+ - Assess whether the code follows recognized best practices (naming, modularization, separation of concerns).
+ - Evaluate verbosity vs. clarity — is the code concise without being cryptic?
+ - **Score (1–10)**
+
+ ---
+
+ ### Prioritized Issue List
+ Provide a table of the top issues found, with the following columns:
+
+ | Issue | Severity | Cost-to-Fix | Impact if Unaddressed |
+ |-------|----------|-------------|------------------------|
+ | Example issue description | High | Medium | Major reliability risks |
+
+ The order should take both severity and cost-to-fix into account, so that addressing the issues from top to bottom delivers the most value the fastest.
+
+ ---
+
+ ### High-Level Recommendations
+ - Suggest general strategies for improvement (e.g., refactoring approach, improving test coverage, upgrading dependencies, modularization).
+ - Highlight where incremental vs. large-scale changes are most appropriate.
+
+ ---
+
+ ### Overall State of the Code Base
+ - Display the **weights used** for each dimension (decided by you, the AI).
+ - Show the **weights table** and the weighted score calculation.
+
+ ### Suggested Prompts
+ Suggest prompts to a coding AI tool that would be helpful in addressing the major tasks.
+
+ #### Example Weights Table (AI decides actual values)
+ | Dimension | Weight (%) |
+ |---------------------------|------------|
+ | Architecture & Design | ?% |
+ | Code Quality | ?% |
+ | Infrastructure Code | ?% |
+ | Dependencies | ?% |
+ | Test Coverage | ?% |
+ | Security & Reliability | ?% |
+ | Documentation | ?% |
+ | Performance & Efficiency | ?% |
+ | Formatting & Style | ?% |
+ | Best Practices & Conciseness | ?% |
+
+ - **Weighted Score Calculation:** Multiply each section's score by its chosen weight, then sum to compute the **Overall Weighted Score (1–10)**.
+ - Report the final **Overall Weighted Score** with justification.
+
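To make the weighted-score calculation concrete, here is a quick check with invented numbers (the scores and weights below are hypothetical, not recommendations):

```bash
# Hypothetical section scores (1-10) and weights (summing to 1.0):
#   Architecture 8 (weight 0.5), Code Quality 7 (0.3), Test Coverage 6 (0.2)
awk 'BEGIN { print 8*0.5 + 7*0.3 + 6*0.2 }'   # => 7.3
```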
+ ### Summarize suggested changes
@@ -0,0 +1,143 @@
+ # Identify Action Items
+
+ **Purpose:** Identify and prioritize issues that need fixing NOW. This is a problems-focused review (not a balanced assessment).
+
+ ## When to Use This
+
+ - Planning sprints or development cycles
+ - Identifying technical debt to tackle
+ - Preparing bug-fix releases
+ - Answering: "What should we fix next?"
+
+ For a balanced assessment showing strengths AND weaknesses, use `comprehensive-codebase-review.md` instead.
+
+ ---
+
+ ## Preconditions
+
+ Before you begin the report:
+
+ 1. Always open the report by citing the most recent git commit at the time you begin writing.
+ 2. Limit the review strictly to git-tracked files.
+ 3. If `git status` shows uncommitted changes, inform me, ask for confirmation to proceed, and—if I consent—include those `git status` details immediately after the commit citation.
+
+ ---
+
+ ## Your Role & Task
+
+ - You are a senior software architect and code reviewer.
+ - Your task is to analyze this code base thoroughly and identify issues needing immediate attention.
+ - Focus on: bugs, defects, security vulnerabilities, performance bottlenecks, technical debt, weaknesses, risks, ambiguities, and other areas requiring improvement.
+ - **This is NOT a balanced review** - focus on problems, not strengths.
+
+ ---
+
+ ## Exclusions & Balance Guidance
+
+ ### Consult Guidelines First
+
+ **CRITICAL:**
+
+ - Disregard any issues included in `dev/prompts/guidelines/ai-code-evaluator-guidelines.md` unless your objections are not covered in that document.
+ - For architectural issues, consult `docs/dev/arch-decisions` to see if the issue has already been considered.
+
+ **Repeating for emphasis:** Disregard any issues included in `dev/prompts/guidelines/ai-code-evaluator-guidelines.md` unless your objections are not covered in that document.
+
+ ### Be Balanced (Not Excessively Critical)
+
+ - Do not list issues that are not real issues.
+ - If there is a tradeoff between A and B, and the justification is sound and documented (e.g., in ai-code-evaluator-guidelines.md), do not penalize the code base for that tradeoff.
+ - Be fair in your severity assessments; sometimes trivial issues are overweighted.
+ - Investigate thoroughly for real issues, but maintain perspective on their actual impact.
+
+ ---
+
+ ## Tooling
+
+ - Use the `cov-loupe` MCP server *as an MCP server* (not a command line application with args) to find information about test coverage.
+ - Only if you are unable to use the cov-loupe MCP server, use `cov-loupe` in CLI mode (run `cov-loupe -h` for help).
+
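A rough illustration of the CLI fallback (these invocations are the ones shown elsewhere in these docs; `cov-loupe -h` is the authoritative reference):

```bash
cov-loupe -h                      # list the available commands and options
cov-loupe list --raise-on-stale   # per-file coverage listing, failing if the coverage data is stale
```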
+ ---
+
+ ## Report Structure
+
+ ### For Each Issue Found
+
+ Delimit each issue with horizontal lines and headlines. Number each issue.
+
+ **Required sections:**
+
+ 1. **Headline & Description:** Clear, concise explanation of the issue.
+ 2. **Assessment:**
+ - **Severity:** High/Medium/Low
+ - **Effort to Fix:** High/Medium/Low
+ - **Impact if Unaddressed:** What happens if we don't fix this?
+ 3. **Strategy:** High-level approach for addressing the issue.
+ 4. **Actionable Prompt:** Provide a specific, copy-paste-ready prompt that can be given to an AI coding agent to fix or improve this specific issue.
+
+ **Example format:**
+
+ ```markdown
+ ---
+
+ ## Issue: Insecure Password Storage
+
+ ### Description
+ User passwords are stored in plain text in the database.
+
+ ### Assessment
+ - **Severity:** High
+ - **Effort to Fix:** Medium
+ - **Impact if Unaddressed:** Critical security vulnerability; user accounts easily compromised.
+
+ ### Strategy
+ Replace plain text storage with bcrypt hashing. Update authentication logic to hash passwords on registration and verify hashed passwords on login.
+
+ ### Actionable Prompt
+ ```
+ Update the user authentication system to use bcrypt for password hashing. Specifically:
+ 1. Add bcrypt gem to Gemfile
+ 2. Update User model to hash passwords before saving
+ 3. Update authentication logic to compare hashed passwords
+ 4. Add migration to hash existing plain text passwords
+ ```
+ ```
+
+ ---
+
+ ### Summary Table
+
+ At the end of the file, produce a markdown table that summarizes ALL issues, ordered by priority (considering severity, effort, and impact):
+
+ | Brief Description (<= 50 chars) | Severity (H/M/L) | Effort (H/M/L) | Impact if Unaddressed | Link to Detail |
+ | :--- | :---: | :---: | :--- | :--- |
+ | ... | ... | ... | ... | [See below](#issue-title) |
+
+ **Priority ordering:** Issues should be ordered to maximize value. Generally this means:
+ - Critical severity with low-to-medium effort → highest priority
+ - High severity regardless of effort → high priority
+ - Medium severity with low effort → medium-high priority
+ - Low severity with high effort → lowest priority
+
+ Use your judgment to order issues for optimal value delivery.
+
+ ---
+
+ ## Output File
+
+ Write your analysis in a Markdown file whose name is:
+
+ - today's date in UTC `%Y-%m-%d-%H-%M` format +
+ - `-prioritized-action-items-` +
+ - your name (e.g. `codex`, `claude`, `gemini`, `zai`) +
+ - the `.md` extension.
+
+ **Example:** `2026-01-08-19-45-prioritized-action-items-claude.md`
+
+ ---
+
+ ## Constraints
+
+ - **DO NOT MAKE ANY CODE CHANGES. REVIEW ONLY.**
+ - Focus exclusively on identifying and prioritizing problems.
+ - Do not include "strengths" or "what's going well" sections.
@@ -0,0 +1,54 @@
+ # Verify Code Changes
+
+ I need you to test that the intention of a code change was accomplished successfully, i.e. that the change:
+
+ - is correct
+ - is complete
+ - is concise
+ - conforms to best practices
+ - is as simple as possible
+ - is not more easily accomplished using available tools that might not have been considered, especially since coding tools do not generally search the web
+ - is adequately tested (for Ruby SimpleCov coverage, use the cov-loupe MCP server)
+
+ ### Parameters (I Will Give You...)
+
+ - Comparison Specification (an argument to `git diff`) - the point of reference from which you can do a git diff to see the changes. Examples:
+ - a commit (e.g. HEAD, HEAD~~, HEAD~4, 45076963221647d724b9b52faa3690a6d83ae8d1)
+ - a branch name
+ - a tag
+ - anything else that can be passed to `git diff`
+ - A description of the intended change. Examples of changes:
+ - Implemented feature
+ - Fixed bug
+ - Test code added
+ - Documentation task
+
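For instance, any of the following comparison specifications would work (illustrative invocations of standard git; the branch and tag names are placeholders):

```bash
git diff HEAD~4 --stat                              # the last four commits, summarized
git diff main                                       # against a branch
git diff v3.0.0                                     # against a tag
git diff 45076963221647d724b9b52faa3690a6d83ae8d1   # against a specific commit
```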
+ If I do not give you the compare point in this prompt, ask me for it.
+
+ If I do not give you the intent to examine, use the commit message(s) and say that
+ you are doing so, so that I can correct you in case that was not my intent.
+
+ Be mindful of the signal-to-noise ratio. Do not add anything to the report
+ unless it adds value to the reader. Here is an example of a time-wasting comment:
+
+ "Approach X could have been used, but the current implementation is a better fit."
+
+ ----
+
+ > Sure, I can verify the code changes for you. I will need you to give me:
+ > - the commit, branch, etc. for me to use as the starting point of the comparison
+ > - a description of the code change intention to verify
+ ----
+
+ ### Your Response
+
+ Examine the diff thoroughly. In your response, be balanced, fair, organized, and thorough.
+
+ Write your response as markdown text and save it to a file whose name is:
+
+ - today's date in UTC `%Y-%m-%d-%H-%M` format +
+ - your name (e.g. 'codex', 'claude', 'gemini', 'zai') +
+ - "-code-review-#{change_intention_phrase}.md"
+
+
+
@@ -0,0 +1,234 @@
+ # Create Screencast Outline
+
+ **Purpose:** Design a compelling 2-5 minute screencast that showcases cov-loupe's value, especially its AI integration capabilities.
+
+ ## When to Use This
+
+ - Planning marketing materials
+ - Preparing demo videos
+ - Creating tutorial content
+ - Showcasing the tool at conferences or meetups
+
+ ## Goals
+
+ 1. **Highlight AI Integration:** Show how AI assistants can use cov-loupe to reason deeply about test coverage
+ 2. **Demonstrate Unique Value:** Illustrate advantages over directly analyzing `.resultset.json` files
+ 3. **Show Practical Use Cases:** Present real-world scenarios where cov-loupe solves actual problems
+ 4. **Keep It Engaging:** Maintain viewer interest with concrete, impressive examples
+ 5. **Stay Concise:** Fit key points into 2-5 minutes
+
+ ## Research Phase
+
+ Before creating the outline, study:
+
+ ### Documentation to Review
+ - `README.md` - Overview and main features
+ - `docs/user/**/*.md` - User-facing functionality
+ - `docs/dev/**/*.md` - Advanced features and architecture
+
+ ### Key Questions to Answer
+ 1. **What makes cov-loupe special?** What can it do that manual JSON inspection can't?
+ 2. **What are the "wow moments"?** Features that clearly demonstrate value
+ 3. **How does AI integration work?** What prompts leverage cov-loupe effectively?
+ 4. **What problems does it solve?** Real pain points in coverage analysis
+
+ ## Deliverable Structure
+
+ Create an outline with the following sections:
+
+ ### 1. Hook (10-15 seconds)
+ - **Goal:** Grab attention immediately
+ - **Content:** One compelling statement or question
+ - **Example:** "What if your AI assistant could deeply understand your test coverage?"
+
+ ### 2. The Problem (20-30 seconds)
+ - **Goal:** Establish why this matters
+ - **Content:** The pain points cov-loupe addresses
+ - **Questions to answer:**
+ - What's frustrating about traditional coverage analysis?
+ - Why is raw `.resultset.json` analysis problematic?
+ - What questions go unanswered with existing tools?
+
+ ### 3. The Solution (30-45 seconds)
+ - **Goal:** Introduce cov-loupe
+ - **Content:** Quick overview of what it does
+ - **Key points:**
+ - CLI + MCP server architecture
+ - Structured data for AI assistants
+ - Multiple output formats
+ - Staleness detection
+
+ ### 4. Demo - "Wow Moments" (90-120 seconds)
+ - **Goal:** Show concrete, impressive use cases
+ - **Content:** 2-3 demonstrations that showcase unique value
+
+ **For each demo:**
+ - **Setup:** Briefly describe the scenario (5-10 sec)
+ - **Action:** Show the command or AI prompt (10-15 sec)
+ - **Result:** Highlight the valuable output (10-15 sec)
+ - **Value:** Explain why this matters (5-10 sec)
+
+ **Suggested demo types:**
+ - AI assistant analyzing coverage and suggesting where to add tests
+ - Identifying stale coverage data automatically
+ - Finding files with coverage < 80% and generating actionable insights
+ - Comparing coverage across different areas of the codebase
+ - AI-generated test strategy based on coverage gaps
+
+ ### 5. AI Integration Highlight (45-60 seconds)
+ - **Goal:** Showcase the MCP server + AI workflow
+ - **Content:** Show an AI assistant using cov-loupe as a tool
+ - **Example flow:**
+ - User asks: "Which files need more test coverage?"
+ - AI uses cov-loupe tools to analyze
+ - AI provides prioritized list with reasoning
+ - AI suggests specific test scenarios
+
+ ### 6. Call to Action (15-20 seconds)
+ - **Goal:** Drive next steps
+ - **Content:** How to get started
+ - **Include:**
+ - Installation command
+ - Link to documentation
+ - Where to find more examples
+
+ ## Amazing Use Cases to Find
+
+ Look for scenarios that demonstrate:
+
+ ### Deep Analysis
+ - **Coverage gap identification:** Finding untested edge cases
+ - **Risk assessment:** Identifying high-value files with low coverage
+ - **Trend analysis:** Tracking coverage changes over time
+
+ ### AI-Powered Insights
+ - **Smart prioritization:** AI ranks files by coverage urgency + business impact
+ - **Test strategy generation:** AI suggests what types of tests to write
+ - **Coverage archaeology:** AI explains why certain code is untested
+
+ ### Developer Workflow
+ - **Pre-commit checks:** Catching coverage drops before they merge
+ - **Code review assistance:** AI comments on coverage in PRs
+ - **Refactoring safety:** Verifying test coverage before major changes
+
+ ### Advantages Over Raw JSON
+ - **Structured queries:** Easy access to specific metrics
+ - **Path resolution:** Handles relative vs absolute paths
+ - **Staleness detection:** Knows when coverage is out of date
+ - **Multiple formats:** JSON, YAML, tables, detailed views
+ - **AI-friendly:** Purpose-built for tool integration
+
+ ## Output Format
+
+ Produce a structured outline like:
+
+ ```markdown
+ # Screencast Outline: "cov-loupe + AI: Intelligent Test Coverage Analysis"
+
+ **Total Duration:** 4:30
+
+ ## Section 1: Hook (0:00 - 0:15)
+ "Your AI assistant can now understand your test coverage better than ever."
+
+ [Screen: Show AI assistant interface with cov-loupe integration]
+
+ ## Section 2: The Problem (0:15 - 0:45)
+ - Coverage reports are just numbers
+ - .resultset.json is hard to parse
+ - No context about staleness
+ - AI assistants struggle with raw JSON
+
+ [Screen: Show confusing .resultset.json file]
+
+ ## Section 3: The Solution (0:45 - 1:15)
+ cov-loupe provides:
+ - Clean CLI interface
+ - MCP server for AI integration
+ - Multiple output formats
+ - Smart staleness detection
+
+ [Screen: Quick demo of `cov-loupe list` command]
+
+ ## Section 4: Demo 1 - AI-Powered Coverage Analysis (1:15 - 2:00)
+ **Scenario:** Find files that need tests
+
+ **Action:**
+ User: "Which files have the worst test coverage?"
+
+ AI (using cov-loupe):
+ - Queries coverage data
+ - Identifies bottom 5 files
+ - Provides context about each
+
+ **Result:**
+ Prioritized list with:
+ - Coverage percentages
+ - File purposes
+ - Suggested test scenarios
+
+ **Value:** Goes beyond numbers to actionable insights
+
+ [Screen: Show AI interaction]
+
+ ## Section 5: Demo 2 - Stale Coverage Detection (2:00 - 2:35)
+ **Scenario:** Verify coverage is up-to-date
+
+ **Action:** `cov-loupe list --raise-on-stale`
+
+ **Result:**
+ - Detects modified files
+ - Shows timestamp mismatches
+ - Prevents false confidence
+
+ **Value:** Ensures coverage data reflects current code
+
+ [Screen: Show stale detection in action]
+
+ ## Section 6: AI Integration Deep Dive (2:35 - 3:30)
+ **Scenario:** Generate test strategy for uncovered code
+
+ **Action:**
+ User: "Help me improve coverage for the authentication module"
+
+ AI workflow:
+ 1. Uses `coverage_summary_tool` for auth files
+ 2. Identifies uncovered lines
+ 3. Analyzes code context
+ 4. Suggests specific test cases
+
+ **Result:**
+ Detailed test plan with:
+ - Edge cases to cover
+ - Mock requirements
+ - Assertion suggestions
+
+ **Value:** AI + cov-loupe = smarter test planning
+
+ [Screen: Show full AI interaction]
+
+ ## Section 7: Call to Action (3:30 - 4:30)
+ Get started:
+ ```bash
+ gem install cov-loupe
+ cov-loupe --help
+ ```
+
+ Learn more:
+ - Documentation: [link]
+ - MCP integration guide: [link]
+ - Example prompts: [link]
+
+ [Screen: Show website/GitHub repo]
+ ```
+
+ ## Notes
+
+ - **Find real examples:** Use actual cov-loupe features, not hypotheticals
+ - **Show, don't tell:** Prefer demonstrations over explanations
+ - **Keep pace brisk:** 2-5 minutes goes quickly; cut ruthlessly
+ - **Emphasize AI value:** This is the key differentiator
+ - **Make it reproducible:** Viewers should be able to try examples themselves
+
+ ## Time Estimates
+
+ Provide timing for each section to ensure the total stays within 2-5 minutes. Be realistic about how long demonstrations take.