@indiekitai/pg-dash 0.3.3 → 0.3.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,22 +2,45 @@
2
2
 
3
3
  # pg-dash
4
4
 
5
- **Lightweight PostgreSQL monitoring dashboard.** One command to start, built-in web UI, actionable fix suggestions.
5
+ **The AI-native PostgreSQL health checker.** One command to audit your database, 14 MCP tools for AI-assisted optimization, CI integration for automated checks.
6
6
 
7
- Think **pganalyze for indie devs**: no Grafana, no Prometheus, no Docker. Just `npx` and go.
7
+ Not another monitoring dashboard. pg-dash is built to fit into your **AI coding workflow**:
8
+
9
+ ```
10
+ Developer writes a migration → CI runs pg-dash check →
11
+ Finds missing indexes → MCP tool suggests fix → PR comment
12
+ ```
8
13
 
9
14
  ```bash
10
- npx @indiekitai/pg-dash postgres://user:pass@host/db
15
+ # One-shot health check
16
+ npx @indiekitai/pg-dash check postgres://user:pass@host/db
17
+
18
+ # AI assistant (Claude/Cursor) via MCP
19
+ pg-dash-mcp postgres://user:pass@host/db
20
+
21
+ # CI pipeline with diff
22
+ npx @indiekitai/pg-dash check $DATABASE_URL --ci --diff --format md
11
23
  ```
12
24
 
13
- ## Why?
25
+ ## Philosophy
26
+
27
+ **Developer tools are use-and-go.** You don't stare at a PostgreSQL dashboard all day. You run a check, fix the issues, and move on. pg-dash embraces this:
28
+
29
+ - **Health check** → Find problems, get actionable SQL fixes, done
30
+ - **MCP tools** → Let your AI assistant query and fix your database directly (unique — pganalyze/pgwatch don't have this)
31
+ - **CI integration** → Catch issues automatically on every migration, not when production is on fire
32
+ - **Smart diff** → See what changed since last run, track your progress
14
33
 
15
- | Tool | Price | Setup | For |
16
- |------|-------|-------|-----|
17
- | pganalyze | $149+/mo | SaaS signup | Enterprises |
18
- | Grafana+Prometheus | Free | 3 services to configure | DevOps teams |
19
- | pgAdmin | Free | Complex UI | DBAs |
20
- | **pg-dash** | **Free** | **One command** | **Developers** |
34
+ The dashboard is there when you need it, but the real power is in the CLI, MCP, and CI.
35
+
36
+ ## Why pg-dash?
37
+
38
+ | Tool | Price | Setup | AI-native | CI-ready |
39
+ |------|-------|-------|-----------|----------|
40
+ | pganalyze | $149+/mo | SaaS signup | ❌ | ❌ |
41
+ | Grafana+Prometheus | Free | 3 services | ❌ | ❌ |
42
+ | pgAdmin | Free | Complex UI | ❌ | ❌ |
43
+ | **pg-dash** | **Free** | **One command** | **14 MCP tools** | **`--ci --diff`** |
21
44
 
22
45
  ## Features
23
46
 
@@ -144,7 +167,142 @@ pg-dash-mcp postgres://user:pass@host/db
144
167
  PG_DASH_CONNECTION_STRING=postgres://... pg-dash-mcp
145
168
  ```
146
169
 
147
- Available tools: `pg_dash_overview`, `pg_dash_health`, `pg_dash_tables`, `pg_dash_table_detail`, `pg_dash_activity`, `pg_dash_schema_changes`, `pg_dash_fix`, `pg_dash_alerts`
170
+ ### Available Tools (14)
171
+
172
+ | Tool | Description |
173
+ |------|-------------|
174
+ | `pg_dash_overview` | Database overview (version, uptime, size, connections) |
175
+ | `pg_dash_health` | Health advisor report with score, grade, and issues |
176
+ | `pg_dash_tables` | List all tables with sizes and row counts |
177
+ | `pg_dash_table_detail` | Detailed info about a specific table |
178
+ | `pg_dash_activity` | Current database activity (active queries, connections) |
179
+ | `pg_dash_schema_changes` | Recent schema changes |
180
+ | `pg_dash_fix` | Execute a safe fix (VACUUM, ANALYZE, REINDEX, etc.) |
181
+ | `pg_dash_alerts` | Alert history |
182
+ | `pg_dash_explain` | Run EXPLAIN ANALYZE on a SELECT query (read-only) |
183
+ | `pg_dash_batch_fix` | Get batch fix SQL for issues, optionally filtered by category |
184
+ | `pg_dash_slow_queries` | Top slow queries from pg_stat_statements |
185
+ | `pg_dash_table_sizes` | Table sizes with data/index breakdown (top 30) |
186
+ | `pg_dash_export` | Export full health report (JSON or Markdown) |
187
+ | `pg_dash_diff` | Compare current health with last saved snapshot |
188
+
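+ For reference, a raw MCP `tools/call` request for one of these tools looks roughly like this (the request shape follows the MCP JSON-RPC spec; the tool arguments shown are illustrative, not a documented schema):
+
+ ```json
+ {
+   "jsonrpc": "2.0",
+   "id": 1,
+   "method": "tools/call",
+   "params": {
+     "name": "pg_dash_table_detail",
+     "arguments": { "table": "users" }
+   }
+ }
+ ```
+
+ In practice you never write this by hand; Claude Desktop or Cursor issues these calls for you once the server is configured below.
+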
189
+ ## MCP Setup
190
+
191
+ Connect pg-dash to Claude Desktop or Cursor for AI-assisted database management.
192
+
193
+ ### Claude Desktop
194
+
195
+ Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
196
+
197
+ ```json
+ {
+   "mcpServers": {
+     "pg-dash": {
+       "command": "npx",
+       "args": ["-y", "-p", "@indiekitai/pg-dash", "pg-dash-mcp", "postgresql://user:pass@host/db"]
+     }
+   }
+ }
+ ```
207
+
208
+ ### Cursor
209
+
210
+ Add to `.cursor/mcp.json` in your project:
211
+
212
+ ```json
+ {
+   "mcpServers": {
+     "pg-dash": {
+       "command": "npx",
+       "args": ["-y", "-p", "@indiekitai/pg-dash", "pg-dash-mcp", "postgresql://user:pass@host/db"]
+     }
+   }
+ }
+ ```
222
+
223
+ ### Example Conversations
224
+
225
+ Once connected, you can ask your AI assistant:
226
+
227
+ **Diagnosis:**
228
+ - "What's wrong with my database right now?"
229
+ - "Why is my `users` table slow? Check for missing indexes."
230
+ - "Show me the top 5 slowest queries this week."
231
+
232
+ **Optimization:**
233
+ - "Generate SQL to fix all missing FK indexes in one go."
234
+ - "EXPLAIN this query for me: SELECT * FROM orders WHERE user_id = 123"
235
+ - "Which tables are taking up the most space?"
236
+
237
+ **Pre-migration check:**
238
+ - "Run a health check and tell me if it's safe to deploy."
239
+ - "What changed in the schema since last week?"
240
+ - "Check if there are any idle connections blocking my migration."
241
+
242
+ ## CI Integration
243
+
244
+ ### GitHub Actions
245
+
246
+ Add `--ci` and `--diff` flags to integrate with CI pipelines:
247
+
248
+ ```bash
249
+ # GitHub Actions annotations (::error::, ::warning::)
250
+ pg-dash check postgres://... --ci
251
+
252
+ # Markdown report for PR comments
253
+ pg-dash check postgres://... --ci --format md
254
+
255
+ # Compare with previous run
256
+ pg-dash check postgres://... --diff
257
+
258
+ # All together
259
+ pg-dash check postgres://... --ci --diff --format md
260
+ ```
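+
+ If you don't need snapshots or reports, the exit code alone can gate a pipeline. A minimal step (assuming, per the CLI help, that `check` exits non-zero when the score falls below `--threshold`):
+
+ ```yaml
+ - name: Gate on database health
+   run: npx @indiekitai/pg-dash check "$DATABASE_URL" --threshold 80
+   env:
+     DATABASE_URL: ${{ secrets.DATABASE_URL }}
+ ```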
261
+
262
+ Sample workflow (`.github/workflows/pg-check.yml`):
263
+
264
+ ```yaml
+ name: Database Health Check
+ on:
+   push:
+     paths: ['migrations/**', 'prisma/**', 'drizzle/**', 'supabase/migrations/**']
+   pull_request:
+     paths: ['migrations/**', 'prisma/**', 'drizzle/**', 'supabase/migrations/**']
+   schedule:
+     - cron: '0 8 * * 1' # Weekly Monday 8am UTC
+ jobs:
+   db-health:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       # Cache snapshot across ephemeral runners for --diff to work
+       - name: Restore health snapshot
+         uses: actions/cache@v4
+         with:
+           path: .pg-dash-cache
+           key: pg-dash-snapshot-${{ github.ref }}
+           restore-keys: pg-dash-snapshot-
+       - name: Run pg-dash health check
+         id: pg-check
+         run: |
+           mkdir -p .pg-dash-cache
+           set +e  # keep running on failure so the exit code can be captured below
+           npx @indiekitai/pg-dash check ${{ secrets.DATABASE_URL }} \
+             --ci --diff --snapshot-path ./.pg-dash-cache/last-check.json \
+             --format md > pg-dash-report.md 2>&1
+           echo "exit_code=$?" >> $GITHUB_OUTPUT
+         continue-on-error: true
+       - name: Save health snapshot
+         uses: actions/cache/save@v4
+         if: always()
+         with:
+           path: .pg-dash-cache
+           key: pg-dash-snapshot-${{ github.ref }}-${{ github.run_id }}
+       - name: Fail if unhealthy
+         if: steps.pg-check.outputs.exit_code != '0'
+         run: exit 1
+ ```
304
+
305
+ See [`examples/github-actions-pg-check.yml`](examples/github-actions-pg-check.yml) for a full workflow with PR comments.
148
306
 
149
307
  ## Health Checks
150
308
 
package/README.zh-CN.md CHANGED
@@ -2,22 +2,45 @@
2
2
 
3
3
  # pg-dash
4
4
 
5
- **轻量级 PostgreSQL 监控面板。** 一条命令启动,内置 Web UI,提供可操作的修复建议。
5
+ **AI 原生的 PostgreSQL 健康检查工具。** 一条命令审计数据库,14 个 MCP 工具让 AI 帮你优化,CI 集成自动检查。
6
6
 
7
- 可以理解为**给独立开发者的 pganalyze** —— 不需要 Grafana,不需要 Prometheus,不需要 Docker。只需 `npx` 即可运行。
7
+ 不是又一个监控面板 —— pg-dash 是为 **AI 编程工作流** 设计的:
8
+
9
+ ```
10
+ 开发者写了一个 migration → CI 跑 pg-dash check →
11
+ 发现缺失索引 → MCP 工具建议修复 → PR comment
12
+ ```
8
13
 
9
14
  ```bash
10
- npx @indiekitai/pg-dash postgres://user:pass@host/db
15
+ # 一次性健康检查
16
+ npx @indiekitai/pg-dash check postgres://user:pass@host/db
17
+
18
+ # AI 助手(Claude/Cursor)通过 MCP 调用
19
+ pg-dash-mcp postgres://user:pass@host/db
20
+
21
+ # CI 流水线 + 差异对比
22
+ npx @indiekitai/pg-dash check $DATABASE_URL --ci --diff --format md
11
23
  ```
12
24
 
25
+ ## 设计理念
26
+
27
+ **开发者工具就是用完即走的。** 你不会整天盯着 PostgreSQL 监控面板。你跑一次检查,修掉问题,然后继续干活。pg-dash 就是为此设计的:
28
+
29
+ - **健康检查** → 发现问题,拿到可执行的 SQL 修复建议,搞定
30
+ - **MCP 工具** → 让 AI 助手直接查询和修复你的数据库(独一份 —— pganalyze/pgwatch 都没有)
31
+ - **CI 集成** → 每次 migration 自动检查,不要等到生产环境出事
32
+ - **智能 diff** → 看到上次以来的变化,追踪改进进度
33
+
34
+ Dashboard 需要时可以用。但真正的核心能力在 CLI、MCP 和 CI。
35
+
13
36
  ## 为什么选 pg-dash?
14
37
 
15
- | 工具 | 价格 | 部署 | 适合 |
16
- |------|------|------|------|
17
- | pganalyze | $149+/月 | SaaS 注册 | 企业 |
18
- | Grafana+Prometheus | 免费 | 配置 3 个服务 | DevOps 团队 |
19
- | pgAdmin | 免费 | 界面复杂 | DBA |
20
- | **pg-dash** | **免费** | **一条命令** | **开发者** |
38
+ | 工具 | 价格 | 部署 | AI 原生 | CI 就绪 |
39
+ |------|------|------|---------|---------|
40
+ | pganalyze | $149+/月 | SaaS 注册 | ❌ | ❌ |
41
+ | Grafana+Prometheus | 免费 | 配置 3 个服务 | ❌ | ❌ |
42
+ | pgAdmin | 免费 | 界面复杂 | ❌ | ❌ |
43
+ | **pg-dash** | **免费** | **一条命令** | **14 个 MCP 工具** | **`--ci --diff`** |
21
44
 
22
45
  ## 功能
23
46
 
@@ -144,7 +167,142 @@ pg-dash-mcp postgres://user:pass@host/db
144
167
  PG_DASH_CONNECTION_STRING=postgres://... pg-dash-mcp
145
168
  ```
146
169
 
147
- 可用工具:`pg_dash_overview`、`pg_dash_health`、`pg_dash_tables`、`pg_dash_table_detail`、`pg_dash_activity`、`pg_dash_schema_changes`、`pg_dash_fix`、`pg_dash_alerts`
170
+ ### 可用工具(14 个)
171
+
172
+ | 工具 | 描述 |
173
+ |------|------|
174
+ | `pg_dash_overview` | 数据库概览(版本、运行时间、大小、连接数) |
175
+ | `pg_dash_health` | 健康报告(评分、等级、问题列表) |
176
+ | `pg_dash_tables` | 所有表的大小和行数 |
177
+ | `pg_dash_table_detail` | 单个表的详细信息 |
178
+ | `pg_dash_activity` | 当前活动(查询、连接) |
179
+ | `pg_dash_schema_changes` | 最近的 schema 变更 |
180
+ | `pg_dash_fix` | 执行安全修复(VACUUM、ANALYZE、REINDEX 等) |
181
+ | `pg_dash_alerts` | 告警历史 |
182
+ | `pg_dash_explain` | 对 SELECT 查询运行 EXPLAIN ANALYZE(只读) |
183
+ | `pg_dash_batch_fix` | 获取批量修复 SQL,可按类别过滤 |
184
+ | `pg_dash_slow_queries` | pg_stat_statements 中的慢查询 |
185
+ | `pg_dash_table_sizes` | 表大小(数据/索引拆分,前 30) |
186
+ | `pg_dash_export` | 导出完整健康报告(JSON 或 Markdown) |
187
+ | `pg_dash_diff` | 与上次快照对比当前健康状态 |
188
+
189
+ ## MCP 配置
190
+
191
+ 将 pg-dash 接入 Claude Desktop 或 Cursor,实现 AI 辅助的数据库管理。
192
+
193
+ ### Claude Desktop
194
+
195
+ 在 macOS 上编辑 `~/Library/Application Support/Claude/claude_desktop_config.json`,Windows 上编辑 `%APPDATA%\Claude\claude_desktop_config.json`:
196
+
197
+ ```json
+ {
+   "mcpServers": {
+     "pg-dash": {
+       "command": "npx",
+       "args": ["-y", "-p", "@indiekitai/pg-dash", "pg-dash-mcp", "postgresql://user:pass@host/db"]
+     }
+   }
+ }
+ ```
207
+
208
+ ### Cursor
209
+
210
+ 在项目的 `.cursor/mcp.json` 中添加:
211
+
212
+ ```json
+ {
+   "mcpServers": {
+     "pg-dash": {
+       "command": "npx",
+       "args": ["-y", "-p", "@indiekitai/pg-dash", "pg-dash-mcp", "postgresql://user:pass@host/db"]
+     }
+   }
+ }
+ ```
222
+
223
+ ### 示例对话
224
+
225
+ 连接后,你可以直接问 AI 助手:
226
+
227
+ **诊断问题:**
228
+ - "我的数据库现在有什么问题?"
229
+ - "为什么我的 `users` 表这么慢?检查一下缺失的索引。"
230
+ - "显示本周最慢的 5 条查询。"
231
+
232
+ **性能优化:**
233
+ - "一次性生成 SQL,修复所有缺失的外键索引。"
234
+ - "帮我分析这条查询:SELECT * FROM orders WHERE user_id = 123"
235
+ - "哪些表占用空间最多?"
236
+
237
+ **迁移前检查:**
238
+ - "跑一次健康检查,告诉我现在部署安不安全。"
239
+ - "上周以来 schema 有哪些变化?"
240
+ - "检查是否有空闲连接会阻塞我的迁移。"
241
+
242
+ ## CI 集成
243
+
244
+ ### GitHub Actions
245
+
246
+ 使用 `--ci` 和 `--diff` 标志集成到 CI 流水线:
247
+
248
+ ```bash
249
+ # GitHub Actions 注解(::error::、::warning::)
250
+ pg-dash check postgres://... --ci
251
+
252
+ # 适合 PR 评论的 Markdown 报告
253
+ pg-dash check postgres://... --ci --format md
254
+
255
+ # 与上次运行对比
256
+ pg-dash check postgres://... --diff
257
+
258
+ # 全部组合
259
+ pg-dash check postgres://... --ci --diff --format md
260
+ ```
261
+
262
+ 示例工作流(`.github/workflows/pg-check.yml`):
263
+
264
+ ```yaml
+ name: Database Health Check
+ on:
+   push:
+     paths: ['migrations/**', 'prisma/**', 'drizzle/**', 'supabase/migrations/**']
+   pull_request:
+     paths: ['migrations/**', 'prisma/**', 'drizzle/**', 'supabase/migrations/**']
+   schedule:
+     - cron: '0 8 * * 1' # 每周一 UTC 早 8 点
+ jobs:
+   db-health:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       # 缓存快照,解决 ephemeral runner 丢失 ~/.pg-dash 的问题
+       - name: Restore health snapshot
+         uses: actions/cache@v4
+         with:
+           path: .pg-dash-cache
+           key: pg-dash-snapshot-${{ github.ref }}
+           restore-keys: pg-dash-snapshot-
+       - name: Run pg-dash health check
+         id: pg-check
+         run: |
+           mkdir -p .pg-dash-cache
+           set +e  # 失败时不中断脚本,便于下面捕获退出码
+           npx @indiekitai/pg-dash check ${{ secrets.DATABASE_URL }} \
+             --ci --diff --snapshot-path ./.pg-dash-cache/last-check.json \
+             --format md > pg-dash-report.md 2>&1
+           echo "exit_code=$?" >> $GITHUB_OUTPUT
+         continue-on-error: true
+       - name: Save health snapshot
+         uses: actions/cache/save@v4
+         if: always()
+         with:
+           path: .pg-dash-cache
+           key: pg-dash-snapshot-${{ github.ref }}-${{ github.run_id }}
+       - name: Fail if unhealthy
+         if: steps.pg-check.outputs.exit_code != '0'
+         run: exit 1
+ ```
304
+
305
+ 完整工作流(包含 PR 评论)请参考 [`examples/github-actions-pg-check.yml`](examples/github-actions-pg-check.yml)。
148
306
 
149
307
  ## 健康检查
150
308
 
package/dist/cli.js CHANGED
@@ -754,6 +754,54 @@ var init_advisor = __esm({
754
754
  }
755
755
  });
756
756
 
757
+ // src/server/snapshot.ts
758
+ var snapshot_exports = {};
759
+ __export(snapshot_exports, {
760
+ diffSnapshots: () => diffSnapshots2,
761
+ loadSnapshot: () => loadSnapshot,
762
+ saveSnapshot: () => saveSnapshot
763
+ });
764
+ import fs4 from "fs";
765
+ import path4 from "path";
766
+ // Strip a trailing numeric suffix (e.g. "unused-index-3" -> "unused-index") so
+ // the same issue is matched across runs even when its ordinal changes.
+ function normalizeIssueId(id) {
767
+ return id.replace(/-\d+$/, "");
768
+ }
769
+ function saveSnapshot(snapshotPath, result) {
770
+ fs4.mkdirSync(path4.dirname(snapshotPath), { recursive: true });
771
+ const snapshot = { timestamp: (/* @__PURE__ */ new Date()).toISOString(), result };
772
+ fs4.writeFileSync(snapshotPath, JSON.stringify(snapshot, null, 2));
773
+ }
774
+ function loadSnapshot(snapshotPath) {
775
+ if (!fs4.existsSync(snapshotPath)) return null;
776
+ try {
777
+ return JSON.parse(fs4.readFileSync(snapshotPath, "utf-8"));
778
+ } catch {
779
+ return null;
780
+ }
781
+ }
782
+ function diffSnapshots2(prev, current) {
783
+ const prevNormIds = new Set(prev.issues.map((i) => normalizeIssueId(i.id)));
784
+ const currNormIds = new Set(current.issues.map((i) => normalizeIssueId(i.id)));
785
+ const newIssues = current.issues.filter((i) => !prevNormIds.has(normalizeIssueId(i.id)));
786
+ const resolvedIssues = prev.issues.filter((i) => !currNormIds.has(normalizeIssueId(i.id)));
787
+ const unchanged = current.issues.filter((i) => prevNormIds.has(normalizeIssueId(i.id)));
788
+ return {
789
+ scoreDelta: current.score - prev.score,
790
+ previousScore: prev.score,
791
+ currentScore: current.score,
792
+ previousGrade: prev.grade,
793
+ currentGrade: current.grade,
794
+ newIssues,
795
+ resolvedIssues,
796
+ unchanged
797
+ };
798
+ }
799
+ var init_snapshot = __esm({
800
+ "src/server/snapshot.ts"() {
801
+ "use strict";
802
+ }
803
+ });
804
+
757
805
  // src/cli.ts
758
806
  import { parseArgs } from "util";
759
807
 
@@ -2659,7 +2707,7 @@ import { WebSocketServer, WebSocket } from "ws";
2659
2707
  import http from "http";
2660
2708
  var __dirname = path3.dirname(fileURLToPath(import.meta.url));
2661
2709
  async function startServer(opts) {
2662
- const pool = new Pool({ connectionString: opts.connectionString });
2710
+ const pool = new Pool({ connectionString: opts.connectionString, connectionTimeoutMillis: 1e4 });
2663
2711
  try {
2664
2712
  const client = await pool.connect();
2665
2713
  client.release();
@@ -2983,8 +3031,8 @@ async function startServer(opts) {
2983
3031
  }
2984
3032
 
2985
3033
  // src/cli.ts
2986
- import fs4 from "fs";
2987
- import path4 from "path";
3034
+ import fs5 from "fs";
3035
+ import path5 from "path";
2988
3036
  import { fileURLToPath as fileURLToPath2 } from "url";
2989
3037
  process.on("uncaughtException", (err) => {
2990
3038
  console.error("Uncaught exception:", err);
@@ -3018,13 +3066,16 @@ var { values, positionals } = parseArgs({
3018
3066
  help: { type: "boolean", short: "h" },
3019
3067
  version: { type: "boolean", short: "v" },
3020
3068
  threshold: { type: "string" },
3021
- format: { type: "string", short: "f" }
3069
+ format: { type: "string", short: "f" },
3070
+ ci: { type: "boolean", default: false },
3071
+ diff: { type: "boolean", default: false },
3072
+ "snapshot-path": { type: "string" }
3022
3073
  }
3023
3074
  });
3024
3075
  if (values.version) {
3025
3076
  try {
3026
- const __dirname2 = path4.dirname(fileURLToPath2(import.meta.url));
3027
- const pkg = JSON.parse(fs4.readFileSync(path4.resolve(__dirname2, "../package.json"), "utf-8"));
3077
+ const __dirname2 = path5.dirname(fileURLToPath2(import.meta.url));
3078
+ const pkg = JSON.parse(fs5.readFileSync(path5.resolve(__dirname2, "../package.json"), "utf-8"));
3028
3079
  console.log(`pg-dash v${pkg.version}`);
3029
3080
  } catch {
3030
3081
  console.log("pg-dash v0.1.0");
@@ -3064,6 +3115,9 @@ Options:
3064
3115
  --long-query-threshold <min> Long query threshold in minutes (default: 5)
3065
3116
  --threshold <score> Health score threshold for check command (default: 70)
3066
3117
  -f, --format <fmt> Output format: text|json|md (default: text)
3118
+ --ci Output GitHub Actions compatible annotations
3119
+ --diff Compare with previous run (saves snapshot for next run)
3120
+ --snapshot-path <path> Path to snapshot file for --diff (default: ~/.pg-dash/last-check.json)
3067
3121
  -v, --version Show version
3068
3122
  -h, --help Show this help
3069
3123
 
@@ -3094,50 +3148,121 @@ if (subcommand === "check") {
3094
3148
  const connectionString = resolveConnectionString(1);
3095
3149
  const threshold = parseInt(values.threshold || "70", 10);
3096
3150
  const format = values.format || "text";
3151
+ const ci = values.ci || false;
3152
+ const useDiff = values.diff || false;
3097
3153
  const { Pool: Pool2 } = await import("pg");
3098
3154
  const { getAdvisorReport: getAdvisorReport2 } = await Promise.resolve().then(() => (init_advisor(), advisor_exports));
3099
- const pool = new Pool2({ connectionString });
3155
+ const { saveSnapshot: saveSnapshot2, loadSnapshot: loadSnapshot2, diffSnapshots: diffSnapshots3 } = await Promise.resolve().then(() => (init_snapshot(), snapshot_exports));
3156
+ const os4 = await import("os");
3157
+ const pool = new Pool2({ connectionString, connectionTimeoutMillis: 1e4 });
3158
+ const checkDataDir = values["data-dir"] || path5.join(os4.homedir(), ".pg-dash");
3159
+ const snapshotPath = values["snapshot-path"] || path5.join(checkDataDir, "last-check.json");
3100
3160
  try {
3101
3161
  const lqt = parseInt(values["long-query-threshold"] || process.env.PG_DASH_LONG_QUERY_THRESHOLD || "5", 10);
3102
3162
  const report = await getAdvisorReport2(pool, lqt);
3163
+ let diff = null;
3164
+ if (useDiff) {
3165
+ const prev = loadSnapshot2(snapshotPath);
3166
+ if (prev) {
3167
+ diff = diffSnapshots3(prev.result, report);
3168
+ }
3169
+ saveSnapshot2(snapshotPath, report);
3170
+ }
3103
3171
  if (format === "json") {
3104
- console.log(JSON.stringify(report, null, 2));
3105
- } else if (format === "md") {
3106
- console.log(`# pg-dash Health Report
3172
+ const output = { ...report };
3173
+ if (diff) output.diff = diff;
3174
+ console.log(JSON.stringify(output, null, 2));
3175
+ } else if (format === "md" || ci && format !== "text") {
3176
+ console.log(`## \u{1F3E5} pg-dash Health Report
3107
3177
  `);
3108
- console.log(`Generated: ${(/* @__PURE__ */ new Date()).toISOString()}
3178
+ if (diff) {
3179
+ const sign = diff.scoreDelta >= 0 ? "+" : "";
3180
+ console.log(`**Score: ${diff.previousScore} \u2192 ${report.score} (${sign}${diff.scoreDelta})**
3109
3181
  `);
3110
- console.log(`## Health Score: ${report.score}/100 (Grade: ${report.grade})
3182
+ } else {
3183
+ console.log(`**Score: ${report.score}/100 (${report.grade})**
3111
3184
  `);
3112
- console.log(`| Category | Grade | Score | Issues |`);
3185
+ }
3186
+ console.log(`| Category | Score | Grade | Issues |`);
3113
3187
  console.log(`|----------|-------|-------|--------|`);
3114
3188
  for (const [cat, b] of Object.entries(report.breakdown)) {
3115
- console.log(`| ${cat} | ${b.grade} | ${b.score}/100 | ${b.count} |`);
3189
+ console.log(`| ${cat} | ${b.score} | ${b.grade} | ${b.count} |`);
3190
+ }
3191
+ if (diff) {
3192
+ if (diff.resolvedIssues.length > 0) {
3193
+ console.log(`
3194
+ ### \u2705 Resolved (${diff.resolvedIssues.length})`);
3195
+ for (const i of diff.resolvedIssues) console.log(`- ~~${i.title}~~`);
3196
+ }
3197
+ if (diff.newIssues.length > 0) {
3198
+ console.log(`
3199
+ ### \u{1F195} New Issues (${diff.newIssues.length})`);
3200
+ for (const i of diff.newIssues) {
3201
+ const icon = i.severity === "critical" ? "\u{1F534}" : i.severity === "warning" ? "\u{1F7E1}" : "\u{1F535}";
3202
+ console.log(`- ${icon} [${i.severity}] ${i.title}`);
3203
+ }
3204
+ }
3116
3205
  }
3117
3206
  if (report.issues.length > 0) {
3118
3207
  console.log(`
3119
- ### Issues (${report.issues.length})
3208
+ ### \u26A0\uFE0F Issues (${report.issues.length})
3120
3209
  `);
3121
3210
  for (const issue of report.issues) {
3122
- const icon = issue.severity === "critical" ? "\u{1F534}" : issue.severity === "warning" ? "\u{1F7E1}" : "\u{1F535}";
3123
- console.log(`#### ${icon} [${issue.severity}] ${issue.title}
3124
- `);
3125
- console.log(`${issue.description}
3126
- `);
3127
- console.log(`**Fix**:
3128
- \`\`\`sql
3129
- ${issue.fix}
3130
- \`\`\`
3131
- `);
3211
+ const sev = issue.severity === "critical" ? "error" : issue.severity === "warning" ? "warning" : "notice";
3212
+ console.log(`- [${sev}] ${issue.title}`);
3132
3213
  }
3133
3214
  } else {
3134
3215
  console.log(`
3135
3216
  \u2705 No issues found!`);
3136
3217
  }
3137
- } else {
3218
+ if (report.batchFixes.length > 0) {
3219
+ console.log(`
3220
+ ### \u{1F527} Batch Fixes
3221
+ `);
3222
+ console.log("```sql");
3223
+ for (const fix of report.batchFixes) {
3224
+ console.log(`-- ${fix.title}`);
3225
+ console.log(fix.sql);
3226
+ }
3227
+ console.log("```");
3228
+ }
3229
+ } else if (ci) {
3230
+ for (const issue of report.issues) {
3231
+ const level = issue.severity === "critical" ? "error" : issue.severity === "warning" ? "warning" : "notice";
3232
+ console.log(`::${level}::${issue.title}: ${issue.description}`);
3233
+ }
3138
3234
  console.log(`
3235
+ Health Score: ${report.score}/100 (${report.grade})`);
3236
+ for (const [cat, b] of Object.entries(report.breakdown)) {
3237
+ console.log(` ${cat.padEnd(14)} ${b.grade} (${b.score}/100) \u2014 ${b.count} issue${b.count !== 1 ? "s" : ""}`);
3238
+ }
3239
+ if (diff) {
3240
+ const sign = diff.scoreDelta >= 0 ? "+" : "";
3241
+ console.log(`
3242
+ Score: ${diff.previousScore} \u2192 ${report.score} (${sign}${diff.scoreDelta})`);
3243
+ console.log(`Resolved: ${diff.resolvedIssues.length} issues`);
3244
+ console.log(`New: ${diff.newIssues.length} issues`);
3245
+ }
3246
+ } else {
3247
+ if (diff) {
3248
+ const sign = diff.scoreDelta >= 0 ? "+" : "";
3249
+ console.log(`
3250
+ Score: ${diff.previousScore} \u2192 ${report.score} (${sign}${diff.scoreDelta})
3251
+ `);
3252
+ if (diff.resolvedIssues.length > 0) {
3253
+ console.log(` \u2705 Resolved: ${diff.resolvedIssues.length} issues`);
3254
+ for (const i of diff.resolvedIssues) console.log(` - ${i.title}`);
3255
+ }
3256
+ if (diff.newIssues.length > 0) {
3257
+ console.log(` \u{1F195} New: ${diff.newIssues.length} issues`);
3258
+ for (const i of diff.newIssues) console.log(` - ${i.title}`);
3259
+ }
3260
+ console.log();
3261
+ } else {
3262
+ console.log(`
3139
3263
  Health Score: ${report.score}/100 (Grade: ${report.grade})
3140
3264
  `);
3265
+ }
3141
3266
  for (const [cat, b] of Object.entries(report.breakdown)) {
3142
3267
  console.log(` ${cat.padEnd(14)} ${b.grade} (${b.score}/100) \u2014 ${b.count} issue${b.count !== 1 ? "s" : ""}`);
3143
3268
  }
@@ -3161,9 +3286,9 @@ ${issue.fix}
3161
3286
  }
3162
3287
  } else if (subcommand === "schema-diff") {
3163
3288
  const connectionString = resolveConnectionString(1);
3164
- const dataDir = values["data-dir"] || path4.join((await import("os")).homedir(), ".pg-dash");
3165
- const schemaDbPath = path4.join(dataDir, "schema.db");
3166
- if (!fs4.existsSync(schemaDbPath)) {
3289
+ const dataDir = values["data-dir"] || path5.join((await import("os")).homedir(), ".pg-dash");
3290
+ const schemaDbPath = path5.join(dataDir, "schema.db");
3291
+ if (!fs5.existsSync(schemaDbPath)) {
3167
3292
  console.error("No schema tracking data found. Run pg-dash server first to collect schema snapshots.");
3168
3293
  process.exit(1);
3169
3294
  }