mcp-db-analyzer 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Dmytro Lisnichenko
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,381 @@
+ [![npm version](https://img.shields.io/npm/v/mcp-db-analyzer)](https://www.npmjs.com/package/mcp-db-analyzer)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
+
+ # MCP DB Analyzer
+
+ A Model Context Protocol (MCP) server that gives AI assistants deep visibility into your databases. It inspects schemas, detects index problems, analyzes table bloat/fragmentation, and explains query plans — so your AI can give you actionable database optimization advice instead of generic suggestions.
+
+ Supports **PostgreSQL**, **MySQL**, and **SQLite**.
+
+ ## Why This Tool?
+
+ There are dozens of database MCP servers — most are **CRUD gateways** (run queries, list tables). This tool **analyzes** your database: schema problems, missing indexes, bloated tables, slow queries, vacuum health.
+
+ Other analytical MCP servers (CrystalDBA, pg-dash, MCP-PostgreSQL-Ops) cover PostgreSQL only. **MCP DB Analyzer is the only analytical MCP server that supports PostgreSQL, MySQL, and SQLite** in a single `npx` install — no Python, no Go, no Docker.
+
+ ## Features
+
+ - **9 MCP tools** for comprehensive database analysis
+ - **PostgreSQL + MySQL + SQLite** support via `--driver` flag
+ - **Read-only by design** — all queries wrapped in READ ONLY transactions
+ - **Markdown output** optimized for LLM consumption
+ - **Zero configuration** — just set `DATABASE_URL`
+
+ ## Installation
+
26
+ ```bash
+ npx mcp-db-analyzer
+ ```
+
+ Or install globally:
+
+ ```bash
+ npm install -g mcp-db-analyzer
+ ```
+
+ ## Configuration
+
+ Set the `DATABASE_URL` environment variable:
+
+ ```bash
+ export DATABASE_URL="postgresql://user:password@localhost:5432/mydb"
+ ```
+
+ Or use individual PG variables: `PGHOST`, `PGPORT`, `PGDATABASE`, `PGUSER`, `PGPASSWORD`.
+
+ ### MySQL
+
+ Set `DATABASE_URL` with a MySQL connection string and pass `--driver mysql`:
+
+ ```bash
+ export DATABASE_URL="mysql://user:password@localhost:3306/mydb"
+ mcp-db-analyzer --driver mysql
+ ```
+
+ Or use individual MySQL variables: `MYSQL_HOST`, `MYSQL_PORT`, `MYSQL_DATABASE`, `MYSQL_USER`, `MYSQL_PASSWORD`.
+
+ You can also set `DB_DRIVER=mysql` as an environment variable instead of passing the flag.
+
+ ### SQLite
+
+ Pass a file path via `DATABASE_URL` and use `--driver sqlite`:
+
+ ```bash
+ export DATABASE_URL="/path/to/database.db"
+ mcp-db-analyzer --driver sqlite
+ ```
+
68
+ ### Claude Desktop (PostgreSQL)
+
+ Add to your Claude Desktop configuration file (`claude_desktop_config.json`; typically under `~/Library/Application Support/Claude/` on macOS or `%APPDATA%\Claude\` on Windows):
71
+
+ ```json
+ {
+   "mcpServers": {
+     "db-analyzer": {
+       "command": "npx",
+       "args": ["-y", "mcp-db-analyzer"],
+       "env": {
+         "DATABASE_URL": "postgresql://user:password@localhost:5432/mydb"
+       }
+     }
+   }
+ }
+ ```
+
+ ### Claude Desktop (MySQL)
+
+ ```json
+ {
+   "mcpServers": {
+     "db-analyzer": {
+       "command": "npx",
+       "args": ["-y", "mcp-db-analyzer", "--driver", "mysql"],
+       "env": {
+         "DATABASE_URL": "mysql://user:password@localhost:3306/mydb"
+       }
+     }
+   }
+ }
+ ```
+
+ ### Claude Desktop (SQLite)
+
+ ```json
+ {
+   "mcpServers": {
+     "db-analyzer": {
+       "command": "npx",
+       "args": ["-y", "mcp-db-analyzer", "--driver", "sqlite"],
+       "env": {
+         "DATABASE_URL": "/path/to/database.db"
+       }
+     }
+   }
+ }
+ ```
+
+ ## Quick Demo
+
+ Once configured, try these prompts in Claude:
+
+ 1. **"Show me the schema and how tables are related"** — Returns table structures, foreign keys, and identifies orphan tables
+ 2. **"Are there any slow queries or missing indexes?"** — Ranks slow queries by execution time and suggests indexes to add
+ 3. **"How many connections are active? Are any queries blocked?"** — Shows connection pool utilization, idle-in-transaction sessions, and blocked queries
+
+ ## Tools
+
+ ### `inspect_schema`
+
+ List all tables with row counts and sizes, or drill into a specific table's columns, types, constraints, and foreign keys.
+
+ **Parameters:**
+ - `table` (optional) — Table name to inspect. Omit to list all tables.
+ - `schema` (default: `"public"`) — Database schema.
+
+ ```
+ > inspect_schema
+
+ ## Tables in schema 'public'
+
+ | Table | Rows (est.) | Total Size |
+ |-------------|-------------|------------|
+ | users | 12,450 | 3.2 MB |
+ | orders | 89,100 | 18.4 MB |
+ | order_items | 245,000 | 12.1 MB |
+ ```
+
+ ```
+ > inspect_schema table="users"
+
+ ## Table: public.users
+
+ - **Rows (est.)**: 12,450
+ - **Total size**: 3.2 MB
+
+ ### Columns
+ | # | Column | Type | Nullable | Default |
+ |---|--------|---------------|----------|---------|
+ | 1 | id | integer | NO | nextval |
+ | 2 | email | varchar(255) | NO | - |
+ | 3 | name | varchar(100) | YES | - |
+ ```
+
+ ### `analyze_indexes`
+
+ Find unused indexes wasting disk space and missing indexes causing slow sequential scans. Also detects unindexed foreign keys.
+
+ **Parameters:**
+ - `schema` (default: `"public"`) — Database schema.
+ - `mode` (`"usage"` | `"missing"` | `"all"`, default: `"all"`) — Analysis mode.
+
+ ```
+ > analyze_indexes
+
+ ### Unused Indexes (2 found)
+ | Table | Index | Size | Definition |
+ |-------|--------------------|--------|-------------------------------|
+ | users | idx_users_legacy | 1.2 MB | CREATE INDEX ... (old_col) |
+
+ ### Unindexed Foreign Keys (1 found)
+ | Table | Column | FK → | Constraint |
+ |-------------|---------|--------|-------------------|
+ | order_items | user_id | users | fk_items_user_id |
+ ```
+
+ ### `explain_query`
+
+ Run EXPLAIN on a SQL query and get a formatted execution plan with cost estimates, node types, and optimization warnings. Optionally run EXPLAIN ANALYZE for actual timing (SELECT queries only).
+
+ **Parameters:**
+ - `sql` — The SQL query to explain.
+ - `analyze` (default: `false`) — Run EXPLAIN ANALYZE (executes the query; SELECT only).
+
+ ```
+ > explain_query sql="SELECT * FROM orders WHERE status = 'pending'"
+
+ ## Query Plan Analysis
+
+ - **Estimated Total Cost**: 1234.56
+ - **Estimated Rows**: 500
+
+ ### Plan Tree
+ → Seq Scan on orders (cost=0..1234.56 rows=500)
+   Filter: (status = 'pending')
+
+ ### Potential Issues
+ - **Sequential Scan** on `orders` (~500 rows). Consider adding an index.
+ ```
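The Seq Scan warning above can be derived mechanically from the plan tree. A minimal standalone sketch, operating on PostgreSQL `EXPLAIN (FORMAT JSON)` output rather than this package's internals (the function name is illustrative):

```javascript
// Walk a PostgreSQL EXPLAIN (FORMAT JSON) plan tree and collect
// sequential scans as optimization candidates.
function findSeqScans(plan, found = []) {
  if (plan["Node Type"] === "Seq Scan") {
    found.push({ table: plan["Relation Name"], rows: plan["Plan Rows"] });
  }
  for (const child of plan["Plans"] ?? []) {
    findSeqScans(child, found);
  }
  return found;
}
```

Each hit corresponds to one "Consider adding an index" line in the report.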
209
+
+ ### `analyze_table_bloat`
+
+ Analyze table bloat by checking dead tuple ratios, vacuum history, and table sizes. Recommends VACUUM ANALYZE for tables with >10% dead tuples.
+
+ **Parameters:**
+ - `schema` (default: `"public"`) — Database schema.
+
+ ```
+ > analyze_table_bloat
+
+ ### Tables Needing VACUUM (1 found)
+ | Table | Live Tuples | Dead Tuples | Bloat % | Size | Last Vacuum |
+ |-----------|-------------|-------------|---------|-------|-------------|
+ | audit_log | 8,000 | 2,000 | 20.0% | 10 MB | Never |
+
+ ### Recommended Actions
+ VACUUM ANALYZE public.audit_log;
+ ```
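The bloat percentage shown above is dead tuples over total tuples, the ratio the shipped code derives from `pg_stat_user_tables` counters. A standalone sketch of that arithmetic (the function name is illustrative):

```javascript
// Bloat % = dead tuples / (live + dead tuples), as a percentage.
// 8,000 live and 2,000 dead gives the 20.0% shown for audit_log.
function bloatPct(liveTuples, deadTuples) {
  const total = liveTuples + deadTuples;
  return total > 0 ? (deadTuples / total) * 100 : 0;
}
```

Tables above 10% by this measure are flagged for `VACUUM ANALYZE`.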
228
+
+ ### `suggest_missing_indexes`
+
+ Find tables with high sequential scan counts and zero index usage, cross-referenced with unused indexes wasting space. Provides actionable CREATE INDEX and DROP INDEX recommendations.
+
+ **Parameters:**
+ - `schema` (default: `"public"`) — Database schema.
+
+ ```
+ > suggest_missing_indexes
+
+ ### Tables Missing Indexes (1 found)
+ | Table | Seq Scans | Index Scans | Rows | Size |
+ |--------|-----------|-------------|--------|-------|
+ | events | 5,000 | 0 | 50,000 | 25 MB |
+
+ ### Unused Indexes (1 found)
+ | Table | Index | Size | Definition |
+ |-------|------------------|------|----------------------------------|
+ | users | idx_users_legacy | 8 kB | CREATE INDEX ... (legacy_col) |
+
+ DROP INDEX public.idx_users_legacy;
+ ```
+
+ ### `analyze_slow_queries`
+
+ Find the slowest queries using `pg_stat_statements` (PostgreSQL) or `performance_schema` (MySQL). Shows execution times, call counts, and identifies optimization candidates.
+
+ **Parameters:**
+ - `schema` (default: `"public"`) — Database schema.
+ - `limit` (default: `10`) — Number of slow queries to return.
+
+ ```
+ > analyze_slow_queries
+
+ ## Slow Query Analysis (by avg execution time)
+
+ | # | Avg Time | Total Time | Calls | Avg Rows | Query |
+ |---|----------|------------|-------|----------|-------|
+ | 1 | 150.0ms | 750000ms | 5000 | 5 | `SELECT * FROM orders WHERE status = $1` |
+ | 2 | 200.0ms | 40000ms | 200 | 2 | `SELECT u.* FROM users u JOIN orders o...` |
+
+ ### Recommendations
+ - **2 high-impact queries** — called >100 times with >100ms avg
+ - **2 queries returning few rows but slow** — likely missing indexes
+ ```
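The "high-impact" classification above follows directly from the per-query counters: more than 100 calls with more than 100 ms average time. A sketch of that stated rule (the function name and field names are illustrative):

```javascript
// Flag a query as high-impact when both its call count and its average
// latency (total time / calls) exceed the thresholds quoted above.
function isHighImpact(stats) {
  return stats.calls > 100 && stats.totalTimeMs / stats.calls > 100;
}
```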
274
+
+ ### `analyze_connections`
+
+ Analyze active database connections. Detects idle-in-transaction sessions, long-running queries, lock contention, and connection pool utilization. PostgreSQL and MySQL only.
+
+ ```
+ > analyze_connections
+
+ ## Connection Analysis (PostgreSQL)
+
+ ### Connection States
+ | State | Count |
+ |-------|-------|
+ | active | 3 |
+ | idle | 12 |
+ | idle in transaction | 2 |
+ | **Total** | **17** |
+
+ **Max connections**: 100
+ **Utilization**: 17.0%
+
+ ### Idle-in-Transaction Connections
+ | PID | User | Duration | Query |
+ |------|------|----------|-------|
+ | 1234 | app | 00:05:30 | UPDATE orders SET status = $1 |
+ ```
+
+ ### `analyze_table_relationships`
+
+ Analyze foreign key relationships between tables. Builds a dependency graph showing entity connectivity, orphan tables (no FKs), cascading delete chains, and hub entities.
+
+ **Parameters:**
+ - `schema` (default: `"public"`) — Database schema.
+
+ ```
+ > analyze_table_relationships
+
+ ## Table Relationships
+
+ **Tables**: 5
+ **Foreign Keys**: 4
+
+ ### Entity Connectivity
+ | Table | Incoming FKs | Outgoing FKs | Total |
+ |-------|--------------|--------------|-------|
319
+ | users **hub** | 3 | 0 | 3 |
+ | orders | 1 | 1 | 2 |
321
+
+ ### Orphan Tables (no FK relationships)
+ - `audit_log`
+
+ ### Cascading Delete Chains
+ - **users** → cascades to: orders, addresses
+ - **orders** → further cascades to: order_items
+ ```
+
+ ### `analyze_vacuum`
+
+ Analyze PostgreSQL VACUUM maintenance status. Checks dead tuple ratios, vacuum staleness, autovacuum configuration, and identifies tables needing manual VACUUM. **PostgreSQL only.**
+
+ ```
+ > analyze_vacuum
+ ```
+
+ **Detects:**
+ - Tables with high dead tuple ratios (>10% warning, >20% critical)
+ - Tables never vacuumed or analyzed
+ - Autovacuum disabled globally
+ - Autovacuum configuration issues
+
+ **Output includes:**
+ - Findings grouped by severity (CRITICAL / WARNING / INFO)
+ - Tables needing VACUUM with dead tuple percentages
+ - Full vacuum history per table
+ - Autovacuum configuration settings
+
+ ## Security
+
+ - All queries are wrapped in READ ONLY transactions by default
+ - `EXPLAIN ANALYZE` is restricted to `SELECT` queries only
+ - DDL/DML statements are rejected in ANALYZE mode
+ - No data modification queries are allowed
+
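As a rough illustration of the SELECT-only restriction described above, a guard of this shape suffices. This is a hypothetical sketch, not the package's actual implementation:

```javascript
// Hypothetical guard: reject anything but a single plain SELECT before
// allowing EXPLAIN ANALYZE (which actually executes the query).
function isAnalyzeSafe(sql) {
  // Strip line and block comments, then trim.
  const stripped = sql
    .replace(/--.*$/gm, "")
    .replace(/\/\*[\s\S]*?\*\//g, "")
    .trim();
  // Reject multi-statement input.
  const statements = stripped.split(";").filter((s) => s.trim().length > 0);
  if (statements.length !== 1) return false;
  // Only SELECT is allowed (WITH is excluded since, in PostgreSQL,
  // a CTE can embed DML).
  return /^select\b/i.test(statements[0].trim());
}
```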
357
+ ## Contributing
+
+ 1. Clone the repo
+ 2. `npm install`
+ 3. `npm run build` — TypeScript compilation
+ 4. `npm test` — Run unit tests (vitest)
+ 5. `npm run dev` — Watch mode for development
+
+ ## Limitations & Known Issues
+
+ - **Read-only**: All queries use read-only connections. Cannot modify data or schema.
+ - **pg_stat_statements required**: Slow query analysis on PostgreSQL requires the `pg_stat_statements` extension to be installed and loaded.
+ - **MySQL performance_schema**: Index usage and scan statistics require `performance_schema` to be enabled (off by default in some MySQL installations).
+ - **SQLite**: No index usage statistics available (SQLite doesn't track this). Sequential scan analysis and slow query detection are not supported for SQLite.
+ - **Large databases**: Schema inspection on databases with 500+ tables may produce very long output. Use the `schema` parameter to limit scope.
+ - **Table name parameterization**: SQLite PRAGMA statements use string interpolation for table names (SQLite does not support parameterized PRAGMAs). Table names are sourced from the `sqlite_master` system table.
+ - **Cross-database queries**: Cannot analyze queries that span multiple databases or use database links.
+ - **Estimated row counts**: MySQL `TABLE_ROWS` in `information_schema` is an estimate, not exact.
+ - **Schema scope**: All tools default to the `public` schema. Non-public schemas require explicit specification, and multi-schema analysis requires running the tools once per schema.
+ - **Connection analysis**: `analyze_connections` is PostgreSQL/MySQL only. Not available for SQLite databases.
+ - **Vacuum analysis**: `analyze_vacuum` is PostgreSQL only. For MySQL, use `OPTIMIZE TABLE` or `analyze_table_bloat`.
+
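The PRAGMA-interpolation caveat above can be contained by allow-listing names before interpolating. A hypothetical sketch (the package's actual validation may differ):

```javascript
// Hypothetical sketch: since PRAGMAs cannot take bind parameters, only
// interpolate a table name after checking it against names previously
// read from sqlite_master, and quote the identifier defensively.
function pragmaTableInfoSql(tableName, knownTables) {
  if (!knownTables.includes(tableName)) {
    throw new Error(`Unknown table: ${tableName}`);
  }
  // Double any embedded double quotes inside the quoted identifier.
  return `PRAGMA table_info("${tableName.replace(/"/g, '""')}")`;
}
```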
379
+ ## License
+
+ MIT
@@ -0,0 +1,150 @@
+ import { query, getDriverType } from "../db.js";
+ /**
+  * Analyze table bloat.
+  * PostgreSQL: dead tuple ratios and vacuum history.
+  * MySQL: InnoDB fragmentation (DATA_FREE) and table sizes.
+  */
+ export async function analyzeTableBloat(schema = "public") {
+     const driver = getDriverType();
+     if (driver === "sqlite") {
+         return analyzeTableBloatSqlite();
+     }
+     if (driver === "mysql") {
+         return analyzeTableBloatMysql(schema);
+     }
+     const result = await query(`
+         SELECT
+             relname AS table_name,
+             n_live_tup::text,
+             n_dead_tup::text,
+             pg_size_pretty(pg_table_size(quote_ident(schemaname) || '.' || quote_ident(relname))) AS table_size,
+             last_vacuum::text,
+             last_autovacuum::text,
+             last_analyze::text
+         FROM pg_stat_user_tables
+         WHERE schemaname = $1
+         ORDER BY n_dead_tup DESC
+     `, [schema]);
+     if (result.rows.length === 0) {
+         return `No user tables found in schema '${schema}'.`;
+     }
+     const lines = [`## Table Bloat Analysis — schema '${schema}'\n`];
+     const bloated = result.rows.filter((r) => {
+         const live = parseInt(r.n_live_tup, 10) || 0;
+         const dead = parseInt(r.n_dead_tup, 10) || 0;
+         const total = live + dead;
+         return total > 0 && (dead / total) > 0.10;
+     });
+     if (bloated.length > 0) {
+         lines.push(`### Tables Needing VACUUM (${bloated.length} found)\n`);
+         lines.push("These tables have >10% dead tuples. Run `VACUUM ANALYZE` to reclaim space and update statistics.\n");
+         lines.push("| Table | Live Tuples | Dead Tuples | Bloat % | Size | Last Vacuum |");
+         lines.push("|-------|-------------|-------------|---------|------|-------------|");
+         for (const row of bloated) {
+             const live = parseInt(row.n_live_tup, 10) || 0;
+             const dead = parseInt(row.n_dead_tup, 10) || 0;
+             const total = live + dead;
+             const bloatPct = total > 0 ? ((dead / total) * 100).toFixed(1) : "0.0";
+             const lastVacuum = row.last_vacuum || row.last_autovacuum || "Never";
+             lines.push(`| ${row.table_name} | ${row.n_live_tup} | ${row.n_dead_tup} | ${bloatPct}% | ${row.table_size} | ${lastVacuum} |`);
+         }
+         lines.push("\n### Recommended Actions\n");
+         for (const row of bloated) {
+             lines.push(`\`\`\`sql\nVACUUM ANALYZE ${schema}.${row.table_name};\n\`\`\``);
+         }
+         lines.push("");
+     }
+     else {
+         lines.push("### No significant bloat detected.\n");
+         lines.push("All tables have <10% dead tuples. Autovacuum appears to be working well.\n");
+     }
+     lines.push("### All Tables\n");
+     lines.push("| Table | Live Tuples | Dead Tuples | Bloat % | Size | Last Vacuum | Last Analyze |");
+     lines.push("|-------|-------------|-------------|---------|------|-------------|--------------|");
+     for (const row of result.rows) {
+         const live = parseInt(row.n_live_tup, 10) || 0;
+         const dead = parseInt(row.n_dead_tup, 10) || 0;
+         const total = live + dead;
+         const bloatPct = total > 0 ? ((dead / total) * 100).toFixed(1) : "0.0";
+         const lastVacuum = row.last_vacuum || row.last_autovacuum || "Never";
+         const lastAnalyze = row.last_analyze || "Never";
+         lines.push(`| ${row.table_name} | ${row.n_live_tup} | ${row.n_dead_tup} | ${bloatPct}% | ${row.table_size} | ${lastVacuum} | ${lastAnalyze} |`);
+     }
+     return lines.join("\n");
+ }
+ async function analyzeTableBloatSqlite() {
+     // SQLite fragmentation: compare page_count vs freelist_count
+     const pageSize = await query(`PRAGMA page_size`);
+     const pageCount = await query(`PRAGMA page_count`);
+     const freelistCount = await query(`PRAGMA freelist_count`);
+     const ps = pageSize.rows[0]?.page_size ?? 4096;
+     const pc = pageCount.rows[0]?.page_count ?? 0;
+     const fc = freelistCount.rows[0]?.freelist_count ?? 0;
+     const totalSizeKb = Math.round((pc * ps) / 1024);
+     const freeSpaceKb = Math.round((fc * ps) / 1024);
+     const fragPct = pc > 0 ? ((fc / pc) * 100).toFixed(1) : "0.0";
+     const lines = [`## Table Bloat Analysis (SQLite)\n`];
+     lines.push(`- **Database size**: ${totalSizeKb} KB (${pc} pages x ${ps} bytes)`);
+     lines.push(`- **Free space**: ${freeSpaceKb} KB (${fc} free pages)`);
+     lines.push(`- **Fragmentation**: ${fragPct}%`);
+     lines.push("");
+     if (fc > 0 && parseFloat(fragPct) > 10) {
+         lines.push("### Recommendation\n");
+         lines.push("Run `VACUUM` to reclaim free space and defragment the database file.\n");
+         lines.push("```sql\nVACUUM;\n```");
+     }
+     else {
+         lines.push("### No significant fragmentation detected.\n");
+     }
+     return lines.join("\n");
+ }
+ async function analyzeTableBloatMysql(schema) {
+     const result = await query(`
+         SELECT
+             TABLE_NAME AS table_name,
+             CAST(TABLE_ROWS AS CHAR) AS table_rows,
+             CONCAT(ROUND(DATA_LENGTH / 1024 / 1024, 2), ' MB') AS data_size,
+             CONCAT(ROUND(DATA_FREE / 1024 / 1024, 2), ' MB') AS data_free,
+             CAST(
+                 CASE WHEN DATA_LENGTH > 0
+                     THEN ROUND(DATA_FREE / DATA_LENGTH * 100, 1)
+                     ELSE 0
+                 END
+             AS CHAR) AS frag_pct
+         FROM information_schema.TABLES
+         WHERE TABLE_SCHEMA = ?
+             AND TABLE_TYPE = 'BASE TABLE'
+             AND ENGINE = 'InnoDB'
+         ORDER BY DATA_FREE DESC
+     `, [schema]);
+     if (result.rows.length === 0) {
+         return `No InnoDB tables found in schema '${schema}'.`;
+     }
+     const lines = [`## Table Fragmentation Analysis — schema '${schema}' (MySQL/InnoDB)\n`];
+     const fragmented = result.rows.filter((r) => parseFloat(r.frag_pct) > 10);
+     if (fragmented.length > 0) {
+         lines.push(`### Fragmented Tables (${fragmented.length} found)\n`);
+         lines.push("These tables have >10% free space (fragmentation). Run `OPTIMIZE TABLE` to reclaim space.\n");
+         lines.push("| Table | Rows | Data Size | Free Space | Fragmentation % |");
+         lines.push("|-------|------|-----------|------------|-----------------|");
+         for (const row of fragmented) {
+             lines.push(`| ${row.table_name} | ${row.table_rows} | ${row.data_size} | ${row.data_free} | ${row.frag_pct}% |`);
+         }
+         lines.push("\n### Recommended Actions\n");
+         for (const row of fragmented) {
+             lines.push(`\`\`\`sql\nOPTIMIZE TABLE ${schema}.${row.table_name};\n\`\`\``);
+         }
+         lines.push("");
+     }
+     else {
+         lines.push("### No significant fragmentation detected.\n");
+         lines.push("All tables have <10% free space. InnoDB is managing space well.\n");
+     }
+     lines.push("### All Tables\n");
+     lines.push("| Table | Rows | Data Size | Free Space | Fragmentation % |");
+     lines.push("|-------|------|-----------|------------|-----------------|");
+     for (const row of result.rows) {
+         lines.push(`| ${row.table_name} | ${row.table_rows} | ${row.data_size} | ${row.data_free} | ${row.frag_pct}% |`);
+     }
+     return lines.join("\n");
+ }