@indiekitai/pg-dash 0.3.9 → 0.4.1

This diff shows the changes between publicly released versions of the package, as they appear in their public registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -2,7 +2,7 @@
 
 # pg-dash
 
- **The AI-native PostgreSQL health checker.** One command to audit your database, 18 MCP tools for AI-assisted optimization, CI integration for automated checks.
+ **The AI-native PostgreSQL health checker.** One command to audit your database, 23 MCP tools for AI-assisted optimization, CI integration for automated checks.
 
 Not another monitoring dashboard — pg-dash is built to fit into your **AI coding workflow**:
 
@@ -47,7 +47,7 @@ The Dashboard is there when you need it. But the real power is in the CLI, MCP,
 | pganalyze | $149+/mo | SaaS signup | ❌ | ❌ |
 | Grafana+Prometheus | Free | 3 services | ❌ | ❌ |
 | pgAdmin | Free | Complex UI | ❌ | ❌ |
- | **pg-dash** | **Free** | **One command** | **18 MCP tools** | **`--ci --diff`** |
+ | **pg-dash** | **Free** | **One command** | **23 MCP tools** | **`--ci --diff`** |
 
 ## Features
 
@@ -117,8 +117,15 @@ The Dashboard is there when you need it. But the real power is in the CLI, MCP,
 - `--health` flag adds health score comparison and unique issues per environment
 - `pg_dash_compare_env` MCP tool: ask your AI "what's different between local and staging?"
 
+ ### 🔧 Production Readiness Audit
+ - **Unused indexes** — Find indexes with 0 scans since last stats reset; suggests safe `DROP INDEX CONCURRENTLY` SQL
+ - **Table bloat** — Dead tuple ratio per table (≥10%); surfaces both `last_autovacuum` and `last_vacuum` timestamps
+ - **Autovacuum health** — Classifies each table as `ok` / `stale` / `overdue` / `never`; shows autovacuum settings with units
+ - **Lock monitoring** — Active lock-wait chains (who is blocking whom) + long-running queries >5s
+ - **Config recommendations** — Audits `shared_buffers`, `work_mem`, `checkpoint_completion_target`, `random_page_cost`, `idle_in_transaction_session_timeout`, and 5 more settings with severity-tagged recommendations
+
 ### 🤖 MCP Server
- - 18 tools for AI agent integration
+ - 23 tools for AI agent integration
 - `pg-dash-mcp postgres://...` — works with Claude, Cursor, etc.
 
 ### 🖥️ CLI
@@ -202,7 +209,7 @@ pg-dash-mcp postgres://user:pass@host/db
 PG_DASH_CONNECTION_STRING=postgres://... pg-dash-mcp
 ```
 
- ### Available Tools (18)
+ ### Available Tools (23)
 
 | Tool | Description |
 |------|-------------|
@@ -224,6 +231,11 @@ PG_DASH_CONNECTION_STRING=postgres://... pg-dash-mcp
 | `pg_dash_analyze_query` | Deep EXPLAIN analysis with automatic index suggestions |
 | `pg_dash_query_regressions` | Detect queries that degraded >50% vs historical baseline |
 | `pg_dash_compare_env` | Compare schema and health between two database environments |
+ | `pg_dash_unused_indexes` | Find unused indexes that waste space and slow down writes |
+ | `pg_dash_bloat` | Detect table bloat (dead tuples) that slows down queries |
+ | `pg_dash_autovacuum` | Check autovacuum health — which tables are stale or never vacuumed |
+ | `pg_dash_locks` | Show active lock waits and long-running blocking queries |
+ | `pg_dash_config_check` | Audit PostgreSQL configuration and get tuning recommendations |
 
 ## MCP Setup
 
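The "Table bloat" check described above boils down to a dead-tuple ratio per table read from `pg_stat_user_tables`. As a standalone sketch (the one-decimal rounding and the ≥10% display threshold mirror the `getBloatReport` helper shipped in `dist/mcp.js`; the function name `bloatPercent` here is illustrative):

```javascript
// Dead-tuple bloat percentage, rounded to one decimal place.
// The shipped report skips tables whose result is below 10%.
function bloatPercent(liveRows, deadRows) {
  const total = liveRows + deadRows;
  if (total === 0) return 0;
  return Math.round((deadRows / total) * 1000) / 10;
}

console.log(bloatPercent(700, 300));  // 30
console.log(bloatPercent(9500, 500)); // 5 — below the 10% display threshold
```

Note that these counters come from the statistics collector, so they are estimates and reset along with the rest of the table stats.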
package/README.zh-CN.md CHANGED
@@ -2,7 +2,7 @@
 
 # pg-dash
 
- **AI 原生的 PostgreSQL 健康检查工具。** 一条命令审计数据库,18 个 MCP 工具让 AI 帮你优化,CI 集成自动检查。
+ **AI 原生的 PostgreSQL 健康检查工具。** 一条命令审计数据库,23 个 MCP 工具让 AI 帮你优化,CI 集成自动检查。
 
 不是又一个监控面板 —— pg-dash 是为 **AI 编程工作流** 设计的:
 
@@ -47,7 +47,7 @@ Dashboard 需要时可以用。但真正的核心能力在 CLI、MCP 和 CI。
 | pganalyze | $149+/月 | SaaS 注册 | ❌ | ❌ |
 | Grafana+Prometheus | 免费 | 配置 3 个服务 | ❌ | ❌ |
 | pgAdmin | 免费 | 界面复杂 | ❌ | ❌ |
- | **pg-dash** | **免费** | **一条命令** | **18 个 MCP 工具** | **`--ci --diff`** |
+ | **pg-dash** | **免费** | **一条命令** | **23 个 MCP 工具** | **`--ci --diff`** |
 
 ## 功能
 
@@ -117,8 +117,15 @@ Dashboard 需要时可以用。但真正的核心能力在 CLI、MCP 和 CI。
 - `--health` 参数额外对比健康分和各环境独有的问题
 - `pg_dash_compare_env` MCP 工具:直接问 AI "本地和预发有什么差异?"
 
+ ### 🔧 生产就绪审计
+ - **废弃索引检测** — 找出从未被使用(0 次扫描)的索引,自动生成带引号的 `DROP INDEX CONCURRENTLY` SQL
+ - **表膨胀检测** — 统计每张表的 dead tuple 比例(≥10% 才展示),同时显示 `last_autovacuum` 和 `last_vacuum` 时间戳
+ - **Autovacuum 健康** — 将每张表分类为 `ok` / `stale` / `overdue` / `never`,展示带单位的 autovacuum 配置
+ - **锁监控** — 活跃的锁等待链(谁在阻塞谁)+ 超过 5 秒的长查询
+ - **配置建议** — 审计 `shared_buffers`、`work_mem`、`checkpoint_completion_target`、`random_page_cost`、`idle_in_transaction_session_timeout` 等 10 项配置,给出带严重级别的调优建议
+
 ### 🤖 MCP Server
- - 18 个工具,支持 AI Agent 集成
+ - 23 个工具,支持 AI Agent 集成
 - `pg-dash-mcp postgres://...` —— 可配合 Claude、Cursor 等使用
 
 ### 🖥️ CLI
@@ -202,7 +209,7 @@ pg-dash-mcp postgres://user:pass@host/db
 PG_DASH_CONNECTION_STRING=postgres://... pg-dash-mcp
 ```
 
- ### 可用工具(18 个)
+ ### 可用工具(23 个)
 
 | 工具 | 描述 |
 |------|------|
@@ -224,6 +231,11 @@ PG_DASH_CONNECTION_STRING=postgres://... pg-dash-mcp
 | `pg_dash_analyze_query` | 深度 EXPLAIN 分析,自动生成索引建议 |
 | `pg_dash_query_regressions` | 检测比历史基线慢超过 50% 的查询 |
 | `pg_dash_compare_env` | 对比两个数据库环境的 Schema 和健康状态 |
+ | `pg_dash_unused_indexes` | 发现从未被使用的索引(浪费空间、拖慢写入) |
+ | `pg_dash_bloat` | 检测表膨胀(dead tuples 过多) |
+ | `pg_dash_autovacuum` | Autovacuum 健康状态——哪些表长期未 vacuum |
+ | `pg_dash_locks` | 显示活跃锁等待链和长时间阻塞的查询 |
+ | `pg_dash_config_check` | 审计 PostgreSQL 配置,给出调优建议 |
 
 ## MCP 配置
 
package/dist/mcp.js CHANGED
@@ -1902,6 +1902,480 @@ async function analyzeMigration(sql, pool2) {
   };
 }
 
+ // src/server/unused-indexes.ts
+ function formatBytes(bytes) {
+   if (bytes < 1024) return "< 1 KB";
+   if (bytes < 1024 * 1024) return `${Math.round(bytes / 1024)} KB`;
+   if (bytes < 1024 * 1024 * 1024) return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
+   if (bytes < 1024 ** 4) return `${(bytes / 1024 ** 3).toFixed(1)} GB`;
+   return `${(bytes / 1024 ** 4).toFixed(1)} TB`;
+ }
+ async function getUnusedIndexes(pool2) {
+   const [indexResult, bgwriterResult] = await Promise.all([
+     pool2.query(`
+       SELECT
+         s.schemaname,
+         s.relname AS table_name,
+         s.indexrelname AS index_name,
+         pg_relation_size(s.indexrelid) AS index_size_bytes,
+         s.idx_scan,
+         i.indexdef
+       FROM pg_stat_user_indexes s
+       JOIN pg_indexes i ON s.schemaname = i.schemaname
+         AND s.relname = i.tablename
+         AND s.indexrelname = i.indexname
+       WHERE s.schemaname = 'public'
+         AND s.idx_scan = 0
+         AND i.indexdef NOT LIKE '%UNIQUE%'
+         AND s.indexrelname NOT LIKE '%_pkey'
+       ORDER BY pg_relation_size(s.indexrelid) DESC
+     `),
+     pool2.query(`SELECT stats_reset FROM pg_stat_bgwriter`)
+   ]);
+   const statsReset = bgwriterResult.rows[0]?.stats_reset ? new Date(bgwriterResult.rows[0].stats_reset).toISOString() : null;
+   const filteredRows = indexResult.rows.filter((row) => {
+     const def = row.indexdef ?? "";
+     if (def.includes(" WHERE ")) return false;
+     const colStart = def.indexOf("(");
+     const colEnd = def.lastIndexOf(")");
+     if (colStart !== -1 && colEnd !== -1) {
+       const cols = def.slice(colStart + 1, colEnd);
+       if (cols.includes("(")) return false;
+     }
+     return true;
+   });
+   const indexes = filteredRows.map((row) => {
+     const sizeBytes = parseInt(row.index_size_bytes, 10) || 0;
+     const index = row.index_name;
+     const table = row.table_name;
+     return {
+       schema: row.schemaname,
+       table,
+       index,
+       indexSize: formatBytes(sizeBytes),
+       indexSizeBytes: sizeBytes,
+       scans: parseInt(row.idx_scan, 10) || 0,
+       lastUsed: statsReset,
+       suggestion: `Index ${index} on ${table} has never been used (0 scans). Consider dropping it: DROP INDEX CONCURRENTLY "${index.replace(/"/g, '""')}"`
+     };
+   });
+   const totalWastedBytes = indexes.reduce((sum, idx) => sum + idx.indexSizeBytes, 0);
+   return {
+     indexes,
+     totalWastedBytes,
+     totalWasted: formatBytes(totalWastedBytes),
+     checkedAt: (/* @__PURE__ */ new Date()).toISOString()
+   };
+ }
+
+ // src/server/bloat.ts
+ function getSuggestion(table, bloatPercent) {
+   if (bloatPercent >= 50) {
+     return `HIGH bloat on ${table} (${bloatPercent}% dead rows). Run: VACUUM ANALYZE ${table}`;
+   } else if (bloatPercent >= 20) {
+     return `Moderate bloat on ${table} (${bloatPercent}% dead rows). Consider VACUUM ANALYZE ${table}`;
+   } else {
+     return `Minor bloat on ${table} (${bloatPercent}% dead rows). Autovacuum should handle this.`;
+   }
+ }
+ async function getBloatReport(pool2) {
+   const result = await pool2.query(`
+     SELECT
+       schemaname,
+       relname AS table_name,
+       n_live_tup,
+       n_dead_tup,
+       last_autovacuum,
+       last_vacuum
+     FROM pg_stat_user_tables
+     WHERE schemaname = 'public'
+       AND (n_live_tup + n_dead_tup) > 0
+     ORDER BY (n_dead_tup::float / (n_live_tup + n_dead_tup)) DESC
+   `);
+   const tables = [];
+   for (const row of result.rows) {
+     const live = parseInt(row.n_live_tup, 10) || 0;
+     const dead = parseInt(row.n_dead_tup, 10) || 0;
+     const total = live + dead;
+     if (total === 0) continue;
+     const bloatPercent = Math.round(dead / total * 1e3) / 10;
+     if (bloatPercent < 10) continue;
+     const table = row.table_name;
+     tables.push({
+       schema: row.schemaname,
+       table,
+       liveRows: live,
+       deadRows: dead,
+       bloatPercent,
+       lastAutoVacuum: row.last_autovacuum ? new Date(row.last_autovacuum).toISOString() : null,
+       lastVacuum: row.last_vacuum ? new Date(row.last_vacuum).toISOString() : null,
+       suggestion: getSuggestion(table, bloatPercent)
+     });
+   }
+   tables.sort((a, b) => b.bloatPercent - a.bloatPercent);
+   return {
+     tables,
+     checkedAt: (/* @__PURE__ */ new Date()).toISOString()
+   };
+ }
+
+ // src/server/autovacuum.ts
+ function classifyStatus(lastAutoVacuum, deadTuples, vacuumCount) {
+   if (lastAutoVacuum === null) return "never";
+   const daysSince = (Date.now() - lastAutoVacuum.getTime()) / (1e3 * 60 * 60 * 24);
+   if (daysSince > 7 && deadTuples > 1e4) return "overdue";
+   if (daysSince > 3) return "stale";
+   return "ok";
+ }
+ function getSuggestion2(status, table) {
+   switch (status) {
+     case "never":
+       return `Table ${table} has never been autovacuumed. Check if autovacuum is enabled and the table has enough churn.`;
+     case "overdue":
+       return `Table ${table} is overdue for vacuum and has many dead tuples. Run: VACUUM ANALYZE ${table}`;
+     case "stale":
+       return `Table ${table} hasn't been vacuumed in over 3 days. Monitor for bloat.`;
+     case "ok":
+       return null;
+   }
+ }
+ async function getAutovacuumReport(pool2) {
+   const [tableResult, settingsResult] = await Promise.all([
+     pool2.query(`
+       SELECT
+         schemaname, relname,
+         last_autovacuum, last_autoanalyze,
+         n_dead_tup, n_live_tup,
+         autovacuum_count, autoanalyze_count
+       FROM pg_stat_user_tables
+       WHERE schemaname = 'public'
+       ORDER BY n_dead_tup DESC
+     `),
+     pool2.query(`
+       SELECT name, setting
+       FROM pg_settings
+       WHERE name IN ('autovacuum', 'autovacuum_vacuum_cost_delay', 'autovacuum_max_workers', 'autovacuum_naptime')
+     `)
+   ]);
+   const tables = tableResult.rows.map((row) => {
+     const lastAutoVacuumDate = row.last_autovacuum ? new Date(row.last_autovacuum) : null;
+     const deadTuples = parseInt(row.n_dead_tup, 10) || 0;
+     const liveTuples = parseInt(row.n_live_tup, 10) || 0;
+     const vacuumCount = parseInt(row.autovacuum_count, 10) || 0;
+     const analyzeCount = parseInt(row.autoanalyze_count, 10) || 0;
+     const status = classifyStatus(lastAutoVacuumDate, deadTuples, vacuumCount);
+     const table = row.relname;
+     return {
+       schema: row.schemaname,
+       table,
+       lastAutoVacuum: lastAutoVacuumDate ? lastAutoVacuumDate.toISOString() : null,
+       lastAutoAnalyze: row.last_autoanalyze ? new Date(row.last_autoanalyze).toISOString() : null,
+       deadTuples,
+       liveTuples,
+       vacuumCount,
+       analyzeCount,
+       status,
+       suggestion: getSuggestion2(status, table)
+     };
+   });
+   const settingsMap = /* @__PURE__ */ new Map();
+   for (const row of settingsResult.rows) {
+     settingsMap.set(row.name, row.setting);
+   }
+   return {
+     tables,
+     settings: {
+       autovacuumEnabled: settingsMap.get("autovacuum") !== "off",
+       vacuumCostDelay: `${settingsMap.get("autovacuum_vacuum_cost_delay") ?? "2"}ms`,
+       autovacuumMaxWorkers: parseInt(settingsMap.get("autovacuum_max_workers") ?? "3", 10),
+       autovacuumNaptime: `${settingsMap.get("autovacuum_naptime") ?? "60"}s`
+     },
+     checkedAt: (/* @__PURE__ */ new Date()).toISOString()
+   };
+ }
+
+ // src/server/locks.ts
+ function formatDurationSecs(secs) {
+   const h = Math.floor(secs / 3600);
+   const m = Math.floor(secs % 3600 / 60);
+   const s = secs % 60;
+   return [
+     String(h).padStart(2, "0"),
+     String(m).padStart(2, "0"),
+     String(s).padStart(2, "0")
+   ].join(":");
+ }
+ async function getLockReport(pool2) {
+   const [locksResult, longResult] = await Promise.all([
+     pool2.query(`
+       SELECT
+         blocked.pid AS blocked_pid,
+         blocked.query AS blocked_query,
+         EXTRACT(EPOCH FROM (NOW() - blocked.query_start))::int AS blocked_secs,
+         blocking.pid AS blocking_pid,
+         blocking.query AS blocking_query,
+         EXTRACT(EPOCH FROM (NOW() - blocking.query_start))::int AS blocking_secs,
+         blocked_locks.relation::regclass::text AS table_name,
+         blocked_locks.locktype
+       FROM pg_catalog.pg_locks blocked_locks
+       JOIN pg_catalog.pg_stat_activity blocked ON blocked.pid = blocked_locks.pid
+       JOIN pg_catalog.pg_locks blocking_locks
+         ON blocking_locks.locktype = blocked_locks.locktype
+         AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
+         AND blocking_locks.pid != blocked_locks.pid
+         AND blocking_locks.granted = true
+       JOIN pg_catalog.pg_stat_activity blocking ON blocking.pid = blocking_locks.pid
+       WHERE NOT blocked_locks.granted
+     `),
+     pool2.query(`
+       SELECT
+         pid,
+         EXTRACT(EPOCH FROM (NOW() - query_start))::int AS duration_secs,
+         query,
+         state,
+         wait_event_type
+       FROM pg_stat_activity
+       WHERE state != 'idle'
+         AND query_start IS NOT NULL
+         AND EXTRACT(EPOCH FROM (NOW() - query_start)) > 5
+         AND query NOT LIKE '%pg_stat_activity%'
+       ORDER BY duration_secs DESC
+       LIMIT 20
+     `)
+   ]);
+   const seen = /* @__PURE__ */ new Set();
+   const waitingLocks = [];
+   for (const row of locksResult.rows) {
+     const key = `${row.blocked_pid}:${row.blocking_pid}`;
+     if (!seen.has(key)) {
+       seen.add(key);
+       waitingLocks.push({
+         blockedPid: parseInt(row.blocked_pid, 10),
+         blockedQuery: row.blocked_query,
+         blockedDuration: formatDurationSecs(parseInt(row.blocked_secs, 10) || 0),
+         blockingPid: parseInt(row.blocking_pid, 10),
+         blockingQuery: row.blocking_query,
+         blockingDuration: formatDurationSecs(parseInt(row.blocking_secs, 10) || 0),
+         table: row.table_name ?? null,
+         lockType: row.locktype
+       });
+     }
+   }
+   const longRunningQueries = longResult.rows.map((row) => ({
+     pid: parseInt(row.pid, 10),
+     duration: formatDurationSecs(parseInt(row.duration_secs, 10) || 0),
+     query: row.query,
+     state: row.state,
+     waitEventType: row.wait_event_type ?? null
+   }));
+   return {
+     waitingLocks,
+     longRunningQueries,
+     checkedAt: (/* @__PURE__ */ new Date()).toISOString()
+   };
+ }
+
+ // src/server/config-checker.ts
+ function settingToBytes(value, unit) {
+   const v = parseFloat(value);
+   if (!unit) return v;
+   switch (unit.toLowerCase()) {
+     case "b":
+       return v;
+     case "kb":
+       return v * 1024;
+     case "8kb":
+       // shared_buffers, effective_cache_size
+       return v * 8 * 1024;
+     case "mb":
+       return v * 1024 * 1024;
+     case "gb":
+       return v * 1024 * 1024 * 1024;
+     default:
+       return v;
+   }
+ }
+ function settingToMb(value, unit) {
+   return settingToBytes(value, unit) / (1024 * 1024);
+ }
+ function formatMemSetting(rawValue, unit) {
+   if (!rawValue) return "unknown";
+   const bytes = settingToBytes(rawValue, unit ?? "");
+   if (bytes <= 0 || isNaN(bytes)) return rawValue;
+   if (bytes >= 1024 ** 3) return `${(bytes / 1024 ** 3).toFixed(1)}GB`;
+   if (bytes >= 1024 ** 2) return `${Math.round(bytes / 1024 ** 2)}MB`;
+   if (bytes >= 1024) return `${Math.round(bytes / 1024)}KB`;
+   return `${bytes}B`;
+ }
+ async function getConfigReport(pool2) {
+   const result = await pool2.query(`
+     SELECT name, setting, unit
+     FROM pg_settings
+     WHERE name IN (
+       'max_connections', 'shared_buffers', 'work_mem',
+       'effective_cache_size', 'maintenance_work_mem', 'wal_buffers',
+       'checkpoint_completion_target', 'random_page_cost',
+       'autovacuum_vacuum_scale_factor', 'autovacuum_analyze_scale_factor',
+       'log_min_duration_statement', 'idle_in_transaction_session_timeout',
+       'effective_io_concurrency'
+     )
+   `);
+   const settings = {};
+   for (const row of result.rows) {
+     settings[row.name] = { setting: row.setting, unit: row.unit ?? void 0 };
+   }
+   const recommendations = [];
+   const get = (name) => settings[name]?.setting ?? null;
+   const getUnit = (name) => settings[name]?.unit;
+   const sharedBuffersSetting = get("shared_buffers");
+   if (sharedBuffersSetting !== null) {
+     const mb = settingToMb(sharedBuffersSetting, getUnit("shared_buffers"));
+     if (mb < 128) {
+       recommendations.push({
+         setting: "shared_buffers",
+         currentValue: `${Math.round(mb)}MB`,
+         recommendedValue: "256MB",
+         reason: "shared_buffers should be at least 25% of RAM; typical starting point is 256MB\u20131GB",
+         severity: "warning",
+         docs: "https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-SHARED-BUFFERS"
+       });
+     }
+   }
+   const workMemSetting = get("work_mem");
+   if (workMemSetting !== null) {
+     const mb = settingToMb(workMemSetting, getUnit("work_mem"));
+     if (mb <= 4) {
+       recommendations.push({
+         setting: "work_mem",
+         currentValue: `${mb % 1 === 0 ? mb : mb.toFixed(1)}MB`,
+         recommendedValue: "16MB",
+         reason: "work_mem of 4MB is conservative; consider 16MB\u201364MB for analytical queries (but multiply by max_connections for total)",
+         severity: "info",
+         docs: "https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM"
+       });
+     }
+   }
+   const cctSetting = get("checkpoint_completion_target");
+   if (cctSetting !== null) {
+     const v = parseFloat(cctSetting);
+     if (v < 0.9) {
+       recommendations.push({
+         setting: "checkpoint_completion_target",
+         currentValue: cctSetting,
+         recommendedValue: "0.9",
+         reason: "Set to 0.9 to spread checkpoint I/O over 90% of checkpoint interval",
+         severity: "warning",
+         docs: "https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-COMPLETION-TARGET"
+       });
+     }
+   }
+   const rpcSetting = get("random_page_cost");
+   if (rpcSetting !== null) {
+     const v = parseFloat(rpcSetting);
+     if (v > 2) {
+       recommendations.push({
+         setting: "random_page_cost",
+         currentValue: rpcSetting,
+         recommendedValue: "1.1",
+         reason: "If using SSDs, set random_page_cost=1.1 (default 4.0 is tuned for spinning disks)",
+         severity: "info",
+         docs: "https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-RANDOM-PAGE-COST"
+       });
+     }
+   }
+   const avsfSetting = get("autovacuum_vacuum_scale_factor");
+   if (avsfSetting !== null) {
+     const v = parseFloat(avsfSetting);
+     if (v >= 0.2) {
+       recommendations.push({
+         setting: "autovacuum_vacuum_scale_factor",
+         currentValue: avsfSetting,
+         recommendedValue: "0.05",
+         reason: "Consider lowering to 0.05\u20130.1 for large tables to vacuum more frequently",
+         severity: "info",
+         docs: "https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR"
+       });
+     }
+   }
+   const lmdsSetting = get("log_min_duration_statement");
+   if (lmdsSetting !== null && parseInt(lmdsSetting, 10) === -1) {
+     recommendations.push({
+       setting: "log_min_duration_statement",
+       currentValue: "-1",
+       recommendedValue: "1000",
+       reason: "Consider setting to 1000 (log queries > 1s) for performance monitoring",
+       severity: "info",
+       docs: "https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT"
+     });
+   }
+   const iitsSetting = get("idle_in_transaction_session_timeout");
+   if (iitsSetting !== null && parseInt(iitsSetting, 10) === 0) {
+     recommendations.push({
+       setting: "idle_in_transaction_session_timeout",
+       currentValue: "0",
+       recommendedValue: "60000",
+       reason: "Set idle_in_transaction_session_timeout=60000 (60s) to prevent stuck transactions from holding locks",
+       severity: "warning",
+       docs: "https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT"
+     });
+   }
+   const eicSetting = get("effective_io_concurrency");
+   if (eicSetting !== null && parseInt(eicSetting, 10) === 1) {
+     recommendations.push({
+       setting: "effective_io_concurrency",
+       currentValue: "1",
+       recommendedValue: "200",
+       reason: "If using SSDs, set effective_io_concurrency=200 for better parallel I/O",
+       severity: "info",
+       docs: "https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-EFFECTIVE-IO-CONCURRENCY"
+     });
+   }
+   const mwmSetting = get("maintenance_work_mem");
+   if (mwmSetting !== null) {
+     const mb = settingToMb(mwmSetting, getUnit("maintenance_work_mem"));
+     if (mb <= 64) {
+       recommendations.push({
+         setting: "maintenance_work_mem",
+         currentValue: `${mb % 1 === 0 ? mb : mb.toFixed(1)}MB`,
+         recommendedValue: "256MB",
+         reason: "Consider 256MB for faster VACUUM and index builds",
+         severity: "info",
+         docs: "https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM"
+       });
+     }
+   }
+   const maxConnSetting = get("max_connections");
+   if (maxConnSetting !== null) {
+     const maxConn = parseInt(maxConnSetting, 10);
+     if (maxConn > 200) {
+       recommendations.push({
+         setting: "max_connections",
+         currentValue: String(maxConn),
+         recommendedValue: "100",
+         reason: `max_connections=${maxConn} is high. Each connection uses ~5\u201310MB RAM. Without a connection pooler (PgBouncer), this leads to memory pressure and context-switch overhead. Consider lowering to 100 and using a pooler.`,
+         severity: "warning",
+         docs: "https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS"
+       });
+     }
+   }
+   const serverInfo = {
+     maxConnections: maxConnSetting !== null ? parseInt(maxConnSetting, 10) : 0,
+     sharedBuffers: formatMemSetting(sharedBuffersSetting, getUnit("shared_buffers")),
+     workMem: formatMemSetting(workMemSetting, getUnit("work_mem")),
+     effectiveCacheSize: formatMemSetting(get("effective_cache_size"), getUnit("effective_cache_size")),
+     maintenanceWorkMem: formatMemSetting(mwmSetting, getUnit("maintenance_work_mem")),
+     walBuffers: get("wal_buffers") ?? "",
+     checkpointCompletionTarget: cctSetting ?? "",
+     randomPageCost: rpcSetting ?? "",
+     autovacuumVacuumScaleFactor: avsfSetting ?? ""
+   };
+   return {
+     recommendations,
+     serverInfo,
+     checkedAt: (/* @__PURE__ */ new Date()).toISOString()
+   };
+ }
+
 // src/mcp.ts
 import Database2 from "better-sqlite3";
 import path3 from "path";
@@ -2253,6 +2727,46 @@ server.tool(
   }
 }
 );
+ server.tool("pg_dash_unused_indexes", "Find unused indexes that waste space and slow down writes", {}, async () => {
+   try {
+     const report = await getUnusedIndexes(pool);
+     return { content: [{ type: "text", text: JSON.stringify(report, null, 2) }] };
+   } catch (err) {
+     return { content: [{ type: "text", text: `Error: ${err.message}` }], isError: true };
+   }
+ });
+ server.tool("pg_dash_bloat", "Detect table bloat (dead tuples) that slow down queries", {}, async () => {
+   try {
+     const report = await getBloatReport(pool);
+     return { content: [{ type: "text", text: JSON.stringify(report, null, 2) }] };
+   } catch (err) {
+     return { content: [{ type: "text", text: `Error: ${err.message}` }], isError: true };
+   }
+ });
+ server.tool("pg_dash_autovacuum", "Check autovacuum health \u2014 which tables are stale or never vacuumed", {}, async () => {
+   try {
+     const report = await getAutovacuumReport(pool);
+     return { content: [{ type: "text", text: JSON.stringify(report, null, 2) }] };
+   } catch (err) {
+     return { content: [{ type: "text", text: `Error: ${err.message}` }], isError: true };
+   }
+ });
+ server.tool("pg_dash_locks", "Show active lock waits and long-running queries blocking the database", {}, async () => {
+   try {
+     const report = await getLockReport(pool);
+     return { content: [{ type: "text", text: JSON.stringify(report, null, 2) }] };
+   } catch (err) {
+     return { content: [{ type: "text", text: `Error: ${err.message}` }], isError: true };
+   }
+ });
+ server.tool("pg_dash_config_check", "Audit PostgreSQL configuration settings and get tuning recommendations", {}, async () => {
+   try {
+     const report = await getConfigReport(pool);
+     return { content: [{ type: "text", text: JSON.stringify(report, null, 2) }] };
+   } catch (err) {
+     return { content: [{ type: "text", text: `Error: ${err.message}` }], isError: true };
+   }
+ });
 var transport = new StdioServerTransport();
 await server.connect(transport);
 //# sourceMappingURL=mcp.js.map
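One subtlety worth noting in the new config checker: `pg_settings.setting` is a bare number whose meaning depends on the `unit` column, and page-sized memory settings such as `shared_buffers` and `effective_cache_size` report in 8 kB pages. This is exactly what the bundle's `settingToBytes` handles; a standalone sketch of the conversion:

```javascript
// Convert a pg_settings value to bytes, honoring its reported unit.
// shared_buffers / effective_cache_size report in 8 kB pages ("8kB" unit).
function settingToBytes(value, unit) {
  const v = parseFloat(value);
  if (!unit) return v; // unitless settings (e.g. checkpoint_completion_target)
  switch (unit.toLowerCase()) {
    case "b":   return v;
    case "kb":  return v * 1024;
    case "8kb": return v * 8 * 1024;
    case "mb":  return v * 1024 * 1024;
    case "gb":  return v * 1024 * 1024 * 1024;
    default:    return v;
  }
}

// The stock shared_buffers default of 16384 pages is 128 MB:
console.log(settingToBytes("16384", "8kB") / (1024 * 1024)); // 128
```

Doing the unit conversion before comparing against thresholds is what lets the checker flag, say, a sub-128 MB `shared_buffers` regardless of whether the server reports it in pages or kilobytes.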