postgres-scout-mcp 1.0.0 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +174 -141
  2. package/package.json +3 -2
package/README.md CHANGED
@@ -2,31 +2,91 @@
  
  Scout your PostgreSQL databases with AI - A production-ready Model Context Protocol server with built-in safety features, monitoring, and data quality tools.
  
- ## Setup
+ [![npm](https://img.shields.io/npm/v/postgres-scout-mcp)](https://www.npmjs.com/package/postgres-scout-mcp) [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](LICENSE)
  
- ### Claude Desktop / Claude Code
+ ## What You Get
  
- Add to your MCP config (`~/.config/claude-desktop/config.json` or `~/.claude.json`):
+ You ask:
+
+ > *"How healthy is my production database? Any urgent issues?"*
+
+ Postgres Scout returns:
+
+ ---
+
+ ### Overall Health Score: 78/100
+
+ **Component Breakdown**
+ | Component | Score | Status |
+ |-----------|-------|--------|
+ | Cache Performance | 94/100 | Healthy |
+ | Index Efficiency | 82/100 | Good |
+ | Table Bloat | 61/100 | Needs Attention |
+ | Connection Usage | 75/100 | Fair |
+
+ **Issues Found**
+ - **HIGH** — Table `orders` has 34% bloat (2.1 GB wasted). VACUUM FULL recommended.
+ - **MEDIUM** — 3 unused indexes on `sessions` consuming 890 MB.
+ - **LOW** — Cache hit ratio for `analytics_events` is 71% (target: >90%).
+
+ **Recommendations**
+ - Run `VACUUM FULL orders` during a maintenance window
+ - Drop unused indexes: `idx_sessions_legacy`, `idx_sessions_old_token`, `idx_sessions_temp`
+ - Consider adding `analytics_events` to shared_buffers or partitioning by date
+
+ ---
+
+ That's `getHealthScore` — one of 38 tools covering exploration, diagnostics, optimization, monitoring, data quality, and safe writes.
+
+ ## Quick Start
+
+ ### Claude Code
+
+ ```bash
+ claude mcp add postgres-scout -- npx -y postgres-scout-mcp postgresql://localhost:5432/mydb
+ ```
+
+ Then ask: *"Show me the largest tables and whether they have any bloat issues."*
+
+ <details>
+ <summary>Claude Desktop</summary>
+
+ Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
  
  ```json
  {
    "mcpServers": {
      "postgres-scout": {
        "command": "npx",
-       "args": [
-         "-y",
-         "postgres-scout-mcp",
-         "postgresql://localhost:5432/mydb"
-       ],
+       "args": ["-y", "postgres-scout-mcp", "postgresql://localhost:5432/mydb"],
        "type": "stdio"
      }
    }
  }
  ```
  
- ### Recommended: Separate Read-Only and Read-Write Instances
+ </details>
+
+ <details>
+ <summary>Cursor / VS Code</summary>
+
+ Add to your MCP settings:
+
+ ```json
+ {
+   "postgres-scout": {
+     "command": "npx",
+     "args": ["-y", "postgres-scout-mcp", "postgresql://localhost:5432/mydb"]
+   }
+ }
+ ```
+
+ </details>
  
- The server runs in **read-only mode by default** for safety. For write operations, use a separate instance:
+ <details>
+ <summary>Read-Only vs Read-Write</summary>
+
+ The server runs in **read-only mode by default**. For write operations, run a separate instance:
  
  ```json
  {
@@ -45,179 +105,152 @@ The server runs in **read-only mode by default** for safety. For write operation
  }
  ```
  
- This gives you:
- - **postgres-scout-readonly**: Safe exploration without risk of data modification
- - **postgres-scout-readwrite**: Write operations available when explicitly needed
- - Clear separation of capabilities
- - Option to point read-write to a development database for extra safety
+ - **postgres-scout-readonly**: Safe exploration, no risk of data modification
+ - **postgres-scout-readwrite**: Write operations when explicitly needed
  
- ### Global Install
+ </details>
  
- ```bash
- npm install -g postgres-scout-mcp
- postgres-scout-mcp postgresql://localhost:5432/mydb
- ```
+ ## Tools
  
- ### Standalone Usage
+ ### Explore — understand your database
  
- ```bash
- # Default read-only mode (safest)
- postgres-scout-mcp
+ - `listDatabases` — databases the user has access to
+ - `getDatabaseStats` — size, cache hit ratio, connection info
+ - `listSchemas` — all schemas in the current database
+ - `listTables` — tables with size and row statistics
+ - `describeTable` — columns, constraints, indexes, and more
  
- # Explicitly enable read-write mode (use with caution)
- postgres-scout-mcp --read-write
+ ### Query — run and analyze
  
- # With custom URI in read-only mode
- postgres-scout-mcp postgresql://localhost:5432/mydb
+ - `executeQuery` — run SELECT queries (or writes in read-write mode)
+ - `explainQuery` — EXPLAIN plans for performance analysis
+ - `optimizeQuery` — optimization recommendations for a specific query
  
- # Read-write mode with custom connection
- postgres-scout-mcp --read-write postgresql://localhost:5432/mydb
- ```
+ ### Diagnose — find problems before they find you
  
- ### Command Line Options
+ - `getHealthScore` — overall health score with component breakdown
+ - `detectAnomalies` — anomalies in performance, connections, and data
+ - `analyzeTableBloat` — bloat analysis for VACUUM planning
+ - `getSlowQueries` — slow query analysis (requires pg_stat_statements)
+ - `suggestVacuum` — VACUUM recommendations based on dead tuples and bloat
  
- ```
- --read-only    Run server in read-only mode (default)
- --read-write   Run server in read-write mode (enables all write operations)
- --mode <mode>  Set mode: 'read-only' or 'read-write'
- ```
+ ### Optimize — make it faster
  
- ### Environment Variables
+ - `suggestIndexes` — missing index recommendations from query patterns
+ - `suggestPartitioning` — partitioning strategies for large tables
+ - `getIndexUsage` — identify unused or underused indexes
  
- ```bash
- # Security
- QUERY_TIMEOUT=30000 # milliseconds (default: 30s)
- MAX_RESULT_ROWS=10000 # prevent memory exhaustion
- ENABLE_RATE_LIMIT=true
- RATE_LIMIT_MAX_REQUESTS=100
- RATE_LIMIT_WINDOW_MS=60000 # 1 minute
-
- # Logging
- LOG_DIR=./logs
- LOG_LEVEL=info # debug, info, warn, error
-
- # Connection Pool
- PGMAXPOOLSIZE=10
- PGMINPOOLSIZE=2
- PGIDLETIMEOUT=10000
- ```
+ ### Monitor — watch it live
  
- ## Security
+ - `getCurrentActivity` — active queries and connections
+ - `analyzeLocks` — lock contention and blocking queries
+ - `getLiveMetrics` — real-time metrics over a time window
+ - `getHottestTables` — tables with highest activity
+ - `getTableMetrics` — comprehensive per-table I/O and scan stats
  
- - **Read-only by default** — write operations must be explicitly enabled
- - All queries use parameterized values
- - SQL injection prevention with input validation and pattern detection
- - Identifier sanitization for table/column names
- - Rate limiting on all operations
- - Query timeouts to prevent long-running queries
- - Response size limits to prevent memory exhaustion
+ ### Data Quality — trust your data
  
- ## Available Tools
+ - `findDuplicates` — duplicate rows by column combination
+ - `findMissingValues` — NULL analysis across columns
+ - `findOrphans` — orphaned records with invalid foreign keys
+ - `checkConstraintViolations` — test constraints before adding them
+ - `analyzeTypeConsistency` — type inconsistencies in text columns
  
- ### Read Operations (both modes)
+ ### Relationships — follow the connections
  
- - **Database**: `listDatabases`, `getDatabaseStats`
- - **Schema**: `listSchemas`, `listTables`, `describeTable`
- - **Query**: `executeQuery`, `explainQuery`
+ - `exploreRelationships` — multi-hop foreign key traversal
+ - `analyzeForeignKeys` — foreign key health and performance
  
- ### Data Quality
+ ### Time Series — temporal analysis
  
- - `findDuplicates` — find duplicate rows based on column combinations
- - `findMissingValues` — NULL analysis across columns
- - `findOrphans` — find orphaned records via foreign key references
- - `checkConstraintViolations` — detect constraint issues
- - `analyzeTypeConsistency` — find type inconsistencies across rows
+ - `findRecent` — rows within a time window
+ - `analyzeTimeSeries` — window functions and anomaly detection
+ - `detectSeasonality` — seasonal pattern detection
  
- ### Export
+ ### Export — get data out
  
- - `exportTable` — export to CSV, JSON, JSONL, or SQL
- - `generateInsertStatements` — generate INSERT statements with batching
+ - `exportTable` — CSV, JSON, JSONL, or SQL
+ - `generateInsertStatements` — INSERT statements for migration
  
- ### Relationships
+ ### Write (read-write only) — safe modifications
  
- - `exploreRelationships` — follow multi-hop relationships and discover dependencies
- - `analyzeForeignKeys` — analyze foreign key structure
+ - `previewUpdate` / `previewDelete` — see what would change before committing
+ - `safeUpdate` — UPDATE with dry-run, row limits, empty WHERE protection
+ - `safeDelete` — DELETE with dry-run, row limits, empty WHERE protection
+ - `safeInsert` — INSERT with validation, batching, ON CONFLICT support
  
- ### Temporal Queries
+ ## Security
  
- - `findRecent` — find rows within a time window
- - `analyzeTimeSeries` — time series analysis with anomaly detection
- - `detectSeasonality` — detect seasonal patterns
+ - **Read-only by default** — write operations must be explicitly enabled
+ - All queries use parameterized values
+ - SQL injection prevention with input validation and pattern detection
+ - Identifier sanitization for table/column names
+ - Rate limiting on all operations
+ - Query timeouts to prevent long-running queries
+ - Response size limits to prevent memory exhaustion
  
- ### Monitoring
+ ## Examples
  
- - `getCurrentActivity` — active queries and connections
- - `analyzeLocks` — lock analysis
- - `getIndexUsage` — index usage statistics
+ > *"What are the largest tables and do they have bloat?"*
  
- ### Live Monitoring
+ ```
+ listTables({ schema: "public" })
+ analyzeTableBloat({ schema: "public", minSizeMb: 100 })
+ ```
  
- - `getLiveMetrics` — real-time performance metrics
- - `getHottestTables` — identify most active tables
- - `getTableMetrics` — detailed metrics per table
+ > *"Find duplicate emails in the users table."*
  
- ### Maintenance & Optimization
+ ```
+ findDuplicates({ table: "users", columns: ["email"] })
+ ```
  
- - `getHealthScore` — overall database health score
- - `getSlowQueries` — slow query analysis (requires `pg_stat_statements`)
- - `analyzeTableBloat` — table bloat analysis
- - `suggestVacuum` — VACUUM recommendations
- - `suggestIndexes` — index recommendations
- - `suggestPartitioning` — partitioning suggestions
- - `detectAnomalies` — anomaly detection
- - `optimizeQuery` — query optimization suggestions
+ > *"Which queries are slowest and how can I speed them up?"*
  
- ### Write Operations (read-write mode only)
+ ```
+ getSlowQueries({ minDurationMs: 100, limit: 10 })
+ suggestIndexes({ schema: "public" })
+ ```
  
- - `previewUpdate`, `previewDelete` — preview changes before applying
- - `safeUpdate` — UPDATE with row limits and preview
- - `safeDelete` — DELETE with row limits and preview
- - `safeInsert` — INSERT with validation
+ > *"Show me what's happening on the database right now."*
  
- ## Logging
+ ```
+ getCurrentActivity()
+ getLiveMetrics({ metrics: ["queries", "connections", "cache"], duration: 30000, interval: 1000 })
+ getHottestTables({ limit: 5, orderBy: "seq_scan" })
+ ```
  
- File logging is **disabled by default**. Enable it with the `ENABLE_LOGGING=true` environment variable:
+ > *"Find orphaned orders that reference deleted customers."*
  
- ```json
- {
-   "mcpServers": {
-     "postgres-scout": {
-       "command": "npx",
-       "args": ["-y", "postgres-scout-mcp", "postgresql://localhost:5432/mydb"],
-       "env": { "ENABLE_LOGGING": "true", "LOG_DIR": "./logs" },
-       "type": "stdio"
-     }
-   }
- }
+ ```
+ findOrphans({ table: "orders", foreignKey: "customer_id", referenceTable: "customers", referenceColumn: "id" })
  ```
  
- When enabled, two log files are created in `LOG_DIR` (defaults to `./logs`):
+ ## Configuration
  
- - **tool-usage.log**: Every tool call with timestamp, tool name, and arguments
- - **error.log**: Errors with stack traces and the arguments that caused them
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `QUERY_TIMEOUT` | `30000` | Query timeout in milliseconds |
+ | `MAX_RESULT_ROWS` | `10000` | Maximum rows returned per query |
+ | `ENABLE_RATE_LIMIT` | `true` | Enable rate limiting |
+ | `RATE_LIMIT_MAX_REQUESTS` | `100` | Requests per window |
+ | `RATE_LIMIT_WINDOW_MS` | `60000` | Rate limit window (ms) |
+ | `PGMAXPOOLSIZE` | `10` | Connection pool max size |
+ | `PGMINPOOLSIZE` | `2` | Connection pool min size |
+ | `PGIDLETIMEOUT` | `10000` | Idle connection timeout (ms) |
+ | `ENABLE_LOGGING` | `false` | Enable file logging |
+ | `LOG_DIR` | `./logs` | Log file directory |
+ | `LOG_LEVEL` | `info` | Log verbosity: debug, info, warn, error |
  
- ## Examples
+ CLI flags: `--read-only` (default), `--read-write`, `--mode <mode>`
  
- ### Basic Operations
- ```
- executeQuery({ query: "SELECT id, email FROM users WHERE status = $1 LIMIT 10", params: ["active"] })
- explainQuery({ query: "SELECT * FROM orders WHERE customer_id = $1", params: [123], analyze: true })
- listTables({ schema: "public" })
- ```
+ ## Logging
  
- ### Data Quality
- ```
- findDuplicates({ table: "users", columns: ["email"] })
- findOrphans({ table: "orders", column: "customer_id", referencedTable: "customers", referencedColumn: "id" })
- findMissingValues({ table: "users", columns: ["email", "phone"] })
- ```
+ File logging is disabled by default. Set `ENABLE_LOGGING=true` to enable. Two log files are created in `LOG_DIR`:
  
- ### Monitoring
- ```
- getLiveMetrics({ metrics: ["queries", "connections", "cache"], duration: 30000, interval: 1000 })
- getHottestTables({ limit: 5, orderBy: "seq_scan" })
- getSlowQueries({ minDurationMs: 100, limit: 10 })
- ```
+ - **tool-usage.log** — every tool call with timestamp, name, and arguments
+ - **error.log** — errors with stack traces
+
+ Connection strings are automatically redacted in all output.
  
  ## Development
  
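Aside: the README documents rate limiting only through its environment variables (`RATE_LIMIT_MAX_REQUESTS` requests per `RATE_LIMIT_WINDOW_MS` milliseconds); the package's actual implementation is not part of this diff. A minimal sliding-window sketch in TypeScript, assuming those semantics (class name hypothetical):

```typescript
// Hypothetical sliding-window rate limiter matching the documented
// defaults (100 requests per 60000 ms). Not the package's own code.
class RateLimiter {
  private timestamps: number[] = [];

  constructor(
    private maxRequests = 100, // RATE_LIMIT_MAX_REQUESTS
    private windowMs = 60000,  // RATE_LIMIT_WINDOW_MS
  ) {}

  // Returns true if the request is allowed, false if rate-limited.
  allow(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxRequests) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Under the documented defaults this rejects the 101st call made inside one minute and allows it again once earlier calls fall outside the window.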
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "postgres-scout-mcp",
-   "version": "1.0.0",
+   "version": "1.0.1",
    "description": "Scout your PostgreSQL databases with AI - A production-ready MCP server with safety features, monitoring, and data quality tools",
    "main": "dist/index.js",
    "type": "module",
@@ -54,5 +54,6 @@
    "@types/pg": "^8.10.9",
    "typescript": "^5.9.3",
    "vitest": "^4.0.18"
-   }
+   },
+   "mcpName": "io.github.bluwork/postgres-scout-mcp"
  }
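Aside: in the sample `getHealthScore` output near the top of this diff, the 78/100 headline equals the plain average of the four component scores (94, 82, 61, 75). The package's real weighting is not documented here; a TypeScript sketch under that equal-weights assumption (function name hypothetical):

```typescript
// Equal-weights composite health score. This is an assumption for
// illustration, not the package's documented formula; it happens to
// reproduce the sample README output (94, 82, 61, 75 -> 78).
function compositeHealthScore(components: Record<string, number>): number {
  const scores = Object.values(components);
  if (scores.length === 0) throw new Error("no component scores");
  const sum = scores.reduce((total, score) => total + score, 0);
  return Math.round(sum / scores.length);
}
```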