@berthojoris/mcp-mysql-server 1.4.12 → 1.4.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,1268 @@
1
+ # MySQL MCP Server - Detailed Documentation
2
+
3
+ This file contains detailed documentation for all features of the MySQL MCP Server. For quick start and basic information, see [README.md](README.md).
4
+
5
+ ---
6
+
7
+ ## Table of Contents
8
+
9
+ 1. [DDL Operations](#-ddl-operations)
10
+ 2. [Data Export Tools](#-data-export-tools)
11
+ 3. [Transaction Management](#-transaction-management)
12
+ 4. [Stored Procedures](#-stored-procedures)
13
+ 5. [Usage Examples](#-usage-examples)
14
+ 6. [Query Logging & Automatic SQL Display](#-query-logging--automatic-sql-display)
15
+ 7. [Security Features](#-security-features)
16
+ 8. [Bulk Operations](#-bulk-operations)
17
+ 9. [Troubleshooting](#troubleshooting)
18
+ 10. [License](#-license)
19
+ 11. [Roadmap](#️-roadmap)
20
+
21
+ ---
22
+
23
+ ## 🏗️ DDL Operations
24
+
25
+ DDL (Data Definition Language) operations allow the AI agent to create, modify, and drop tables.
26
+
27
+ ### ⚠️ Enable DDL with Caution
28
+
29
+ DDL operations are **disabled by default** for safety. Add `ddl` to permissions to enable:
30
+
31
+ ```json
32
+ {
33
+ "args": [
34
+ "mysql://user:pass@localhost:3306/db",
35
+ "list,read,create,update,delete,ddl,utility"
36
+ ]
37
+ }
38
+ ```
39
+
40
+ ### DDL Tool Examples
41
+
42
+ #### Create Table
43
+
44
+ **User prompt:** *"Create a users table with id, username, email, and created_at"*
45
+
46
+ **AI will execute:**
47
+ ```json
48
+ {
49
+ "tool": "create_table",
50
+ "arguments": {
51
+ "table_name": "users",
52
+ "columns": [
53
+ {"name": "id", "type": "INT", "primary_key": true, "auto_increment": true},
54
+ {"name": "username", "type": "VARCHAR(255)", "nullable": false},
55
+ {"name": "email", "type": "VARCHAR(255)", "nullable": false},
56
+ {"name": "created_at", "type": "DATETIME", "default": "CURRENT_TIMESTAMP"}
57
+ ]
58
+ }
59
+ }
60
+ ```
61
+
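+ For reference, a call like this would presumably generate DDL equivalent to the following (the exact statement emitted by `create_table` may differ slightly):
+ 
+ ```sql
+ CREATE TABLE users (
+   id INT AUTO_INCREMENT PRIMARY KEY,
+   username VARCHAR(255) NOT NULL,
+   email VARCHAR(255) NOT NULL,
+   created_at DATETIME DEFAULT CURRENT_TIMESTAMP
+ );
+ ```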
62
+ #### Alter Table
63
+
64
+ **User prompt:** *"Add a phone column to the users table"*
65
+
66
+ **AI will execute:**
67
+ ```json
68
+ {
69
+ "tool": "alter_table",
70
+ "arguments": {
71
+ "table_name": "users",
72
+ "operations": [
73
+ {
74
+ "type": "add_column",
75
+ "column_name": "phone",
76
+ "column_type": "VARCHAR(20)",
77
+ "nullable": true
78
+ }
79
+ ]
80
+ }
81
+ }
82
+ ```
83
+
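+ The equivalent DDL for this operation would presumably be:
+ 
+ ```sql
+ ALTER TABLE users ADD COLUMN phone VARCHAR(20) NULL;
+ ```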
84
+ #### Drop Table
85
+
86
+ **User prompt:** *"Drop the temp_data table"*
87
+
88
+ **AI will execute:**
89
+ ```json
90
+ {
91
+ "tool": "drop_table",
92
+ "arguments": {
93
+ "table_name": "temp_data",
94
+ "if_exists": true
95
+ }
96
+ }
97
+ ```
98
+
99
+ ### DDL Safety Guidelines
100
+
101
+ 1. ✅ **Enable only in development** - Keep DDL disabled for production
102
+ 2. ✅ **Backup before major changes** - DDL operations are usually irreversible
103
+ 3. ✅ **Test in dev first** - Try schema changes in a development environment
104
+ 4. ✅ **Use proper MySQL user permissions** - Grant only necessary privileges
105
+
106
+ ---
107
+
108
+ ## 📤 Data Export Tools
109
+
110
+ The MySQL MCP Server provides powerful data export capabilities, allowing AI agents to export database content in CSV format for analysis, reporting, and data sharing.
111
+
112
+ ### Data Export Tools Overview
113
+
114
+ - **`export_table_to_csv`** - Export all or filtered data from a table to CSV format
115
+ - **`export_query_to_csv`** - Export the results of a custom SELECT query to CSV format
116
+
117
+ Both tools support:
118
+ - Filtering data with conditions
119
+ - Pagination for large datasets
120
+ - Sorting results
121
+ - Optional column headers
122
+ - Proper CSV escaping for special characters
123
+
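+ The "proper CSV escaping" mentioned above follows the usual CSV rules (RFC 4180): values containing commas, quotes, or line breaks are wrapped in double quotes, and embedded quotes are doubled. A minimal sketch of that logic (the helper name `escapeCsvValue` is illustrative, not the server's actual function):
+ 
+ ```typescript
+ // Escape a single value for CSV output (RFC 4180 style).
+ function escapeCsvValue(value: unknown): string {
+   const text = value === null || value === undefined ? "" : String(value);
+   // Quote the value if it contains a comma, a double quote, or a line break.
+   if (/[",\r\n]/.test(text)) {
+     return `"${text.replace(/"/g, '""')}"`;
+   }
+   return text;
+ }
+ 
+ const row = ["Acme, Inc.", 'He said "hi"', 42].map(escapeCsvValue).join(",");
+ console.log(row); // "Acme, Inc.","He said ""hi""",42
+ ```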
124
+ ### Data Export Tool Examples
125
+
126
+ #### Export Table to CSV
127
+
128
+ **User prompt:** *"Export the first 100 users ordered by registration date to CSV"*
129
+
130
+ **AI will execute:**
131
+ ```json
132
+ {
133
+ "tool": "export_table_to_csv",
134
+ "arguments": {
135
+ "table_name": "users",
136
+ "sorting": {
137
+ "field": "registration_date",
138
+ "direction": "desc"
139
+ },
140
+ "pagination": {
141
+ "page": 1,
142
+ "limit": 100
143
+ },
144
+ "include_headers": true
145
+ }
146
+ }
147
+ ```
148
+
149
+ #### Export Filtered Data to CSV
150
+
151
+ **User prompt:** *"Export all users from the marketing department to CSV"*
152
+
153
+ **AI will execute:**
154
+ ```json
155
+ {
156
+ "tool": "export_table_to_csv",
157
+ "arguments": {
158
+ "table_name": "users",
159
+ "filters": [
160
+ {
161
+ "field": "department",
162
+ "operator": "eq",
163
+ "value": "marketing"
164
+ }
165
+ ],
166
+ "include_headers": true
167
+ }
168
+ }
169
+ ```
170
+
171
+ #### Export Query Results to CSV
172
+
173
+ **User prompt:** *"Export a report of total sales by product category to CSV"*
174
+
175
+ **AI will execute:**
176
+ ```json
177
+ {
178
+ "tool": "export_query_to_csv",
179
+ "arguments": {
180
+ "query": "SELECT category, SUM(sales_amount) as total_sales FROM sales GROUP BY category ORDER BY total_sales DESC",
181
+ "include_headers": true
182
+ }
183
+ }
184
+ ```
185
+
186
+ ### Data Export Best Practices
187
+
188
+ 1. ✅ **Use filtering** - Export only the data you need to reduce file size
189
+ 2. ✅ **Implement pagination** - For large datasets, use pagination to avoid memory issues
190
+ 3. ✅ **Include headers** - Make CSV files more understandable with column headers
191
+ 4. ✅ **Test with small datasets first** - Verify export format before processing large amounts of data
192
+ 5. ✅ **Use proper permissions** - Data export tools require `utility` permission
193
+
194
+ ### Common Data Export Patterns
195
+
196
+ **Pattern 1: Simple Table Export**
197
+ ```json
198
+ {
199
+ "tool": "export_table_to_csv",
200
+ "arguments": {
201
+ "table_name": "products",
202
+ "include_headers": true
203
+ }
204
+ }
205
+ ```
206
+
207
+ **Pattern 2: Filtered and Sorted Export**
208
+ ```json
209
+ {
210
+ "tool": "export_table_to_csv",
211
+ "arguments": {
212
+ "table_name": "orders",
213
+ "filters": [
214
+ {
215
+ "field": "order_date",
216
+ "operator": "gte",
217
+ "value": "2023-01-01"
218
+ }
219
+ ],
220
+ "sorting": {
221
+ "field": "order_date",
222
+ "direction": "desc"
223
+ },
224
+ "include_headers": true
225
+ }
226
+ }
227
+ ```
228
+
229
+ **Pattern 3: Complex Query Export**
230
+ ```json
231
+ {
232
+ "tool": "export_query_to_csv",
233
+ "arguments": {
234
+ "query": "SELECT u.name, u.email, COUNT(o.id) as order_count FROM users u LEFT JOIN orders o ON u.id = o.user_id GROUP BY u.id HAVING order_count > 5",
235
+ "include_headers": true
236
+ }
237
+ }
238
+ ```
239
+
240
+ ---
241
+
242
+ ## 💰 Transaction Management
243
+
244
+ The MySQL MCP Server provides full ACID transaction support, allowing you to group multiple database operations into atomic units.
245
+
246
+ ### Transaction Tools Overview
247
+
248
+ - **`begin_transaction`** - Start a new transaction
249
+ - **`execute_in_transaction`** - Execute SQL within transaction context
250
+ - **`commit_transaction`** - Permanently save all changes
251
+ - **`rollback_transaction`** - Discard all changes since transaction start
252
+ - **`get_transaction_status`** - Check if transaction is active
253
+
254
+ ### Transaction Example: Money Transfer
255
+
256
+ **User:** *"Transfer $100 from Alice's account to Bob's account"*
257
+
258
+ **AI executes:**
259
+ ```json
260
+ // Step 1: Begin transaction
261
+ {
262
+ "tool": "begin_transaction"
263
+ }
264
+
265
+ // Step 2: Deduct from Alice's account
266
+ {
267
+ "tool": "execute_in_transaction",
268
+ "arguments": {
269
+ "sql": "UPDATE accounts SET balance = balance - 100 WHERE name = 'Alice'"
270
+ }
271
+ }
272
+
273
+ // Step 3: Add to Bob's account
274
+ {
275
+ "tool": "execute_in_transaction",
276
+ "arguments": {
277
+ "sql": "UPDATE accounts SET balance = balance + 100 WHERE name = 'Bob'"
278
+ }
279
+ }
280
+
281
+ // Step 4: Verify both accounts exist and have sufficient funds
282
+ {
283
+ "tool": "execute_in_transaction",
284
+ "arguments": {
285
+ "sql": "SELECT * FROM accounts WHERE name IN ('Alice', 'Bob')"
286
+ }
287
+ }
288
+
289
+ // Step 5: Commit if everything is valid
290
+ {
291
+ "tool": "commit_transaction"
292
+ }
293
+ ```
294
+
295
+ ### Transaction Safety Features
296
+
297
+ 1. ✅ **Atomic Operations** - All operations succeed or all fail together
298
+ 2. ✅ **Automatic Rollback** - If any operation fails, the transaction automatically rolls back
299
+ 3. ✅ **Isolation** - Other sessions see changes only after commit
300
+ 4. ✅ **Status Checking** - Always know if a transaction is active
301
+ 5. ✅ **Error Handling** - Comprehensive error reporting for failed operations
302
+
303
+ ### Transaction Best Practices
304
+
305
+ 1. **Keep transactions short** - Long transactions can block other operations
306
+ 2. **Always commit or rollback** - Don't leave transactions hanging
307
+ 3. **Test transaction logic** - Verify your transaction sequence works correctly
308
+ 4. **Handle errors gracefully** - Check for errors after each operation
309
+ 5. **Use appropriate isolation levels** - Understand your consistency requirements
310
+
311
+ ### Common Transaction Patterns
312
+
313
+ **Pattern 1: Safe Update with Verification**
314
+ ```json
315
+ // Begin transaction
316
+ // Update records
317
+ // Verify changes with SELECT
318
+ // Commit if valid, rollback if not
319
+ ```
320
+
321
+ **Pattern 2: Batch Operations**
322
+ ```json
323
+ // Begin transaction
324
+ // Insert multiple related records
325
+ // Update related tables
326
+ // Commit all changes together
327
+ ```
328
+
329
+ **Pattern 3: Error Recovery**
330
+ ```json
331
+ // Begin transaction
332
+ // Try operations
333
+ // If error occurs: rollback
334
+ // If success: commit
335
+ ```
336
+
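+ As a concrete sketch of Pattern 3, the error-recovery flow maps onto the transaction tools like this: if any `execute_in_transaction` call returns an error, the AI issues `rollback_transaction` instead of `commit_transaction`.
+ 
+ ```json
+ // Step 1: Begin transaction
+ { "tool": "begin_transaction" }
+ 
+ // Step 2: Attempt the risky operation
+ {
+   "tool": "execute_in_transaction",
+   "arguments": {
+     "sql": "UPDATE accounts SET balance = balance - 100 WHERE name = 'Alice'"
+   }
+ }
+ 
+ // Step 3a: On success, make the changes permanent
+ { "tool": "commit_transaction" }
+ 
+ // Step 3b: On any error, discard the changes instead
+ { "tool": "rollback_transaction" }
+ ```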
337
+ ---
338
+
339
+ ## 🔧 Stored Procedures
340
+
341
+ The MySQL MCP Server provides comprehensive stored procedure management, allowing you to create, execute, and manage stored procedures with full parameter support.
342
+
343
+ ### Stored Procedure Tools Overview
344
+
345
+ - **`list_stored_procedures`** - List all stored procedures in a database
346
+ - **`create_stored_procedure`** - Create new stored procedures with IN/OUT/INOUT parameters
347
+ - **`get_stored_procedure_info`** - Get detailed information about parameters and metadata
348
+ - **`execute_stored_procedure`** - Execute procedures with automatic parameter handling
349
+ - **`drop_stored_procedure`** - Delete stored procedures safely
350
+
351
+ ### ⚠️ Enable Stored Procedures
352
+
353
+ Stored procedure operations require the `procedure` permission. Add it to your configuration:
354
+
355
+ ```json
356
+ {
357
+ "args": [
358
+ "mysql://user:pass@localhost:3306/db",
359
+ "list,read,procedure,utility" // ← Include 'procedure'
360
+ ]
361
+ }
362
+ ```
363
+
364
+ ### Creating Stored Procedures
365
+
366
+ **User:** *"Create a stored procedure that calculates tax for a given amount"*
367
+
368
+ **AI will execute:**
369
+ ```json
370
+ {
371
+ "tool": "create_stored_procedure",
372
+ "arguments": {
373
+ "procedure_name": "calculate_tax",
374
+ "parameters": [
375
+ {
376
+ "name": "amount",
377
+ "mode": "IN",
378
+ "data_type": "DECIMAL(10,2)"
379
+ },
380
+ {
381
+ "name": "tax_rate",
382
+ "mode": "IN",
383
+ "data_type": "DECIMAL(5,4)"
384
+ },
385
+ {
386
+ "name": "tax_amount",
387
+ "mode": "OUT",
388
+ "data_type": "DECIMAL(10,2)"
389
+ }
390
+ ],
391
+ "body": "SET tax_amount = amount * tax_rate;",
392
+ "comment": "Calculate tax amount based on amount and tax rate"
393
+ }
394
+ }
395
+ ```
396
+
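+ Behind the scenes, this call would presumably issue DDL roughly equivalent to:
+ 
+ ```sql
+ CREATE PROCEDURE calculate_tax(
+   IN amount DECIMAL(10,2),
+   IN tax_rate DECIMAL(5,4),
+   OUT tax_amount DECIMAL(10,2)
+ )
+ COMMENT 'Calculate tax amount based on amount and tax rate'
+ BEGIN
+   SET tax_amount = amount * tax_rate;
+ END;
+ ```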
397
+ ### Executing Stored Procedures
398
+
399
+ **User:** *"Calculate tax for $1000 with 8.5% tax rate"*
400
+
401
+ **AI will execute:**
402
+ ```json
403
+ {
404
+ "tool": "execute_stored_procedure",
405
+ "arguments": {
406
+ "procedure_name": "calculate_tax",
407
+ "parameters": [1000.00, 0.085]
408
+ }
409
+ }
410
+ ```
411
+
412
+ **Result:**
413
+ ```json
414
+ {
415
+ "status": "success",
416
+ "data": {
417
+ "results": { /* execution results */ },
418
+ "outputParameters": {
419
+ "tax_amount": 85.00
420
+ }
421
+ }
422
+ }
423
+ ```
424
+
425
+ ### Parameter Types
426
+
427
+ **IN Parameters** - Input values passed to the procedure
428
+ ```sql
429
+ IN user_id INT
430
+ IN email VARCHAR(255)
431
+ ```
432
+
433
+ **OUT Parameters** - Output values returned by the procedure
434
+ ```sql
435
+ OUT total_count INT
436
+ OUT average_score DECIMAL(5,2)
437
+ ```
438
+
439
+ **INOUT Parameters** - Values that are both input and output
440
+ ```sql
441
+ INOUT running_total DECIMAL(10,2)
442
+ ```
443
+
444
+ ### Complex Stored Procedure Example
445
+
446
+ **User:** *"Create a procedure to process an order with inventory check"*
447
+
448
+ ```json
449
+ {
450
+ "tool": "create_stored_procedure",
451
+ "arguments": {
452
+ "procedure_name": "process_order",
453
+ "parameters": [
454
+ { "name": "product_id", "mode": "IN", "data_type": "INT" },
455
+ { "name": "quantity", "mode": "IN", "data_type": "INT" },
456
+ { "name": "customer_id", "mode": "IN", "data_type": "INT" },
457
+ { "name": "order_id", "mode": "OUT", "data_type": "INT" },
458
+ { "name": "success", "mode": "OUT", "data_type": "BOOLEAN" }
459
+ ],
460
+ "body": "DECLARE available_qty INT; SELECT stock_quantity INTO available_qty FROM products WHERE id = product_id; IF available_qty >= quantity THEN INSERT INTO orders (customer_id, product_id, quantity) VALUES (customer_id, product_id, quantity); SET order_id = LAST_INSERT_ID(); UPDATE products SET stock_quantity = stock_quantity - quantity WHERE id = product_id; SET success = TRUE; ELSE SET order_id = 0; SET success = FALSE; END IF;",
461
+ "comment": "Process order with inventory validation"
462
+ }
463
+ }
464
+ ```
465
+
466
+ ### Getting Procedure Information
467
+
468
+ **User:** *"Show me details about the calculate_tax procedure"*
469
+
470
+ **AI will execute:**
471
+ ```json
472
+ {
473
+ "tool": "get_stored_procedure_info",
474
+ "arguments": {
475
+ "procedure_name": "calculate_tax"
476
+ }
477
+ }
478
+ ```
479
+
480
+ **Returns detailed information:**
481
+ - Procedure metadata (created date, security type, etc.)
482
+ - Parameter details (names, types, modes)
483
+ - Procedure definition
484
+ - Comments and documentation
485
+
486
+ ### Stored Procedure Best Practices
487
+
488
+ 1. ✅ **Use descriptive names** - Make procedure purposes clear
489
+ 2. ✅ **Document with comments** - Add meaningful comments to procedures
490
+ 3. ✅ **Validate inputs** - Check parameter values within procedures
491
+ 4. ✅ **Handle errors** - Use proper error handling in procedure bodies
492
+ 5. ✅ **Test thoroughly** - Verify procedures work with various inputs
493
+ 6. ✅ **Use appropriate data types** - Choose correct types for parameters
494
+ 7. ✅ **Consider security** - Be mindful of SQL injection in dynamic SQL
495
+
496
+ ### Common Stored Procedure Patterns
497
+
498
+ **Pattern 1: Data Validation and Processing**
499
+ ```sql
500
+ -- Validate input, process if valid, return status
501
+ IF input_value > 0 THEN
502
+ -- Process data
503
+ SET success = TRUE;
504
+ ELSE
505
+ SET success = FALSE;
506
+ END IF;
507
+ ```
508
+
509
+ **Pattern 2: Complex Business Logic**
510
+ ```sql
511
+ -- Multi-step business process
512
+ -- Step 1: Validate
513
+ -- Step 2: Calculate
514
+ -- Step 3: Update multiple tables
515
+ -- Step 4: Return results
516
+ ```
517
+
518
+ **Pattern 3: Reporting and Analytics**
519
+ ```sql
520
+ -- Aggregate data from multiple tables
521
+ -- Apply business rules
522
+ -- Return calculated results
523
+ ```
524
+
525
+ ---
526
+
527
+ ## 📋 Usage Examples
528
+
529
+ ### Example 1: Read Data
530
+
531
+ **User:** *"Show me the first 10 users ordered by created_at"*
532
+
533
+ **AI uses `read_records`:**
534
+ - Queries the users table
535
+ - Applies pagination (limit 10)
536
+ - Sorts by created_at descending
537
+ - Returns results
538
+
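+ A plausible `read_records` call for this request, using the same sorting/pagination shape as the export tools above (the exact argument names may vary):
+ 
+ ```json
+ {
+   "tool": "read_records",
+   "arguments": {
+     "table_name": "users",
+     "sorting": {
+       "field": "created_at",
+       "direction": "desc"
+     },
+     "pagination": {
+       "page": 1,
+       "limit": 10
+     }
+   }
+ }
+ ```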
539
+ ### Example 2: Filter Data
540
+
541
+ **User:** *"Find all users with email ending in @example.com"*
542
+
543
+ **AI uses `read_records` with filters:**
544
+ - Applies LIKE filter on email column
545
+ - Returns matching records
546
+
547
+ ### Example 3: Create Records
548
+
549
+ **User:** *"Add a new user with username 'john_doe' and email 'john@example.com'"*
550
+
551
+ **AI uses `create_record`:**
552
+ - Inserts new record
553
+ - Returns insert ID
554
+
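+ A sketch of the corresponding `create_record` call (the `data` argument mirrors the shape used by `bulk_insert`; the actual field name may differ):
+ 
+ ```json
+ {
+   "tool": "create_record",
+   "arguments": {
+     "table_name": "users",
+     "data": {
+       "username": "john_doe",
+       "email": "john@example.com"
+     }
+   }
+ }
+ ```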
555
+ ### Example 4: Update Records
556
+
557
+ **User:** *"Update the email for user ID 5 to 'newemail@example.com'"*
558
+
559
+ **AI uses `update_record`:**
560
+ - Updates specific record by ID
561
+ - Returns affected rows
562
+
563
+ ### Example 5: Complex Query
564
+
565
+ **User:** *"Show me the total number of orders per user for the last 30 days"*
566
+
567
+ **AI uses `run_query`:**
568
+ - Constructs JOIN query
569
+ - Applies date filter
570
+ - Groups by user
571
+ - Returns aggregated results
572
+
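+ A possible `run_query` call for this request (the `query` argument name follows `export_query_to_csv`; table and column names are assumed for illustration):
+ 
+ ```json
+ {
+   "tool": "run_query",
+   "arguments": {
+     "query": "SELECT u.id, u.username, COUNT(o.id) AS order_count FROM users u JOIN orders o ON o.user_id = u.id WHERE o.created_at >= NOW() - INTERVAL 30 DAY GROUP BY u.id, u.username"
+   }
+ }
+ ```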
573
+ ### Example 6: Transaction Management
574
+
575
+ **User:** *"Transfer $100 from account 1 to account 2 in a single transaction"*
576
+
577
+ **AI uses transaction tools:**
578
+ ```json
579
+ {
580
+ "tool": "begin_transaction"
581
+ }
582
+
583
+ {
584
+ "tool": "execute_in_transaction",
585
+ "arguments": {
586
+ "sql": "UPDATE accounts SET balance = balance - 100 WHERE id = 1"
587
+ }
588
+ }
589
+
590
+ {
591
+ "tool": "execute_in_transaction",
592
+ "arguments": {
593
+ "sql": "UPDATE accounts SET balance = balance + 100 WHERE id = 2"
594
+ }
595
+ }
596
+
597
+ {
598
+ "tool": "commit_transaction"
599
+ }
600
+ ```
601
+
602
+ **User:** *"Check if there's an active transaction"*
603
+
604
+ **AI uses `get_transaction_status`:**
605
+ - Returns transaction status and ID if active
606
+
607
+ ### Example 7: Bulk Insert
608
+
609
+ **User:** *"Insert 1000 new products from this CSV data"*
610
+
611
+ **AI uses `bulk_insert`:**
612
+ ```json
613
+ {
614
+ "tool": "bulk_insert",
615
+ "arguments": {
616
+ "table_name": "products",
617
+ "data": [
618
+ {"name": "Product 1", "price": 19.99, "category": "Electronics"},
619
+ {"name": "Product 2", "price": 29.99, "category": "Books"},
620
+ // ... up to 1000 records
621
+ ],
622
+ "batch_size": 1000
623
+ }
624
+ }
625
+ ```
626
+ - Processes records in optimized batches
627
+ - Returns total inserted count and performance metrics
628
+
629
+ ### Example 8: Bulk Update
630
+
631
+ **User:** *"Update prices for all products in specific categories with different discounts"*
632
+
633
+ **AI uses `bulk_update`:**
634
+ ```json
635
+ {
636
+ "tool": "bulk_update",
637
+ "arguments": {
638
+ "table_name": "products",
639
+ "updates": [
640
+ {
641
+ "data": {"price": "price * 0.9"},
642
+ "conditions": [{"field": "category", "operator": "eq", "value": "Electronics"}]
643
+ },
644
+ {
645
+ "data": {"price": "price * 0.8"},
646
+ "conditions": [{"field": "category", "operator": "eq", "value": "Books"}]
647
+ }
648
+ ],
649
+ "batch_size": 100
650
+ }
651
+ }
652
+ ```
653
+ - Applies different updates based on conditions
654
+ - Processes in batches for optimal performance
655
+
656
+ ### Example 9: Bulk Delete
657
+
658
+ **User:** *"Delete all inactive users and expired sessions"*
659
+
660
+ **AI uses `bulk_delete`:**
661
+ ```json
662
+ {
663
+ "tool": "bulk_delete",
664
+ "arguments": {
665
+ "table_name": "users",
666
+ "condition_sets": [
667
+ [{"field": "status", "operator": "eq", "value": "inactive"}],
668
+ [{"field": "last_login", "operator": "lt", "value": "2023-01-01"}],
669
+ [{"field": "email_verified", "operator": "eq", "value": false}]
670
+ ],
671
+ "batch_size": 100
672
+ }
673
+ }
674
+ ```
675
+ - Deletes records matching any of the condition sets
676
+ - Processes deletions in safe batches
677
+
678
+ **User:** *"Rollback the current transaction"*
679
+
680
+ **AI uses `rollback_transaction`:**
681
+ - Cancels all changes in the current transaction
682
+
683
+ ---
684
+
685
+ ## 📝 Query Logging & Automatic SQL Display
686
+
687
+ All queries executed through the MySQL MCP Server are automatically logged with detailed execution information in a **human-readable format**. Query logs are **automatically displayed to users** in the LLM response output of **ALL tool operations** that interact with the database.
688
+
689
+ ### ✨ Automatic SQL Query Display (v1.4.12+)
690
+
691
+ **SQL queries are now automatically shown to users, without users needing to ask for them explicitly!**
692
+
693
+ When you ask questions like:
694
+ - *"Show me all tables in my database"*
695
+ - *"Get the first 10 users"*
696
+ - *"Update user email where id = 5"*
697
+
698
+ The LLM will automatically include the SQL query execution details in its response, such as:
699
+
700
+ > "The SQL query 'SHOW TABLES' was executed successfully in 107ms and returned 73 tables including users, products, orders..."
701
+
702
+ This happens because the SQL query information is embedded as part of the response data structure with an explicit instruction to the LLM to always display it to users.
703
+
704
+ ### How It Works
705
+
706
+ The MCP server returns responses in this structured format:
707
+
708
+ ```json
709
+ {
710
+ "⚠️ IMPORTANT_INSTRUCTION_TO_ASSISTANT": "ALWAYS display the SQL query execution details below to the user in your response. This is critical information that users need to see.",
711
+ "⚠️ SQL_QUERY_EXECUTED": "βœ… SQL Query #1 - SUCCESS\n⏱️ 107ms\nπŸ“ SHOW TABLES",
712
+ "πŸ“Š RESULTS": [
713
+ { "table_name": "users" },
714
+ { "table_name": "products" }
715
+ ]
716
+ }
717
+ ```
718
+
719
+ The LLM processes this structure and naturally includes the SQL query information when describing results to you.
720
+
721
+ ### Query Log Information
722
+
723
+ Each logged query includes:
724
+ - **Query Number** - Sequential identifier for the query
725
+ - **Status** - Success (✓) or error (✗) with visual indicator
726
+ - **Execution Duration** - Time taken to execute in milliseconds with ⏱️ icon
727
+ - **Timestamp** - ISO 8601 formatted execution time with 🕐 icon
728
+ - **Formatted SQL Query** - Properly formatted SQL with line breaks for readability
729
+ - **Parameters** - Values passed to the query (if any), formatted with JSON indentation
730
+ - **Error Details** - Error message if the query failed (optional)
731
+
732
+ ### Example Query Log Output
733
+
734
+ **Markdown-Friendly Format (optimized for AI agent UIs):**
735
+
736
+ ````markdown
737
+ ### Query #1 - SUCCESS (12ms)
738
+ **Timestamp:** 2025-11-21T10:30:45.123Z
739
+
740
+ **SQL:**
741
+ ```sql
742
+ SELECT *
743
+ FROM users
744
+ WHERE id = ?
745
+ ```
746
+ Parameters:
747
+ [5]
748
+ ````
749
+
750
+ **Complex Query with Multiple Parameters:**
751
+
752
+ ````markdown
753
+ ### Query #1 - SUCCESS (45ms)
754
+ **Timestamp:** 2025-11-21T10:32:15.456Z
755
+
756
+ **SQL:**
757
+ ```sql
758
+ INSERT INTO users (name,
759
+ email,
760
+ age,
761
+ created_at)
762
+ VALUES (?,
763
+ ?,
764
+ ?,
765
+ ?)
766
+ ```
767
+ Parameters:
768
+ [
769
+ "John Doe",
770
+ "john@example.com",
771
+ 30,
772
+ "2025-11-21T10:32:15.000Z"
773
+ ]
774
+ ````
775
+
776
+ ### Benefits of Automatic SQL Display
777
+
778
+ 1. **🎓 Learning** - Users can see and learn from the SQL queries being executed
779
+ 2. **🔍 Transparency** - Clear visibility into what database operations are performed
780
+ 3. **🐛 Debugging** - Easy to identify and troubleshoot query issues
781
+ 4. **📊 Performance Monitoring** - See execution times for queries
782
+ 5. **✅ Verification** - Confirm the AI is executing the correct queries
783
+
784
+ ### Viewing Query Logs in Responses
785
+
786
+ Query logs are automatically included in **ALL** tool responses and displayed to users via the structured response format with explicit LLM instructions:
787
+
788
+ **Example: Viewing Response with Query Log:**
789
+
790
+ When you call `list_tables`, the AI agent receives:
791
+
792
+ ```json
793
+ [
794
+ {"table_name": "users"},
795
+ {"table_name": "orders"}
796
+ ]
797
+ ```
798
+
799
+ ---
800
+
801
+ ## SQL Query Execution Log
802
+
803
+ ### Query #1 - SUCCESS (8ms)
804
+ **Timestamp:** 2025-11-21T10:30:45.123Z
805
+
806
+ **SQL:**
807
+ ```sql
808
+ SHOW TABLES
809
+ ```
810
+
811
+ **Example: Bulk Operations with Multiple Queries:**
812
+
813
+ When you call `bulk_insert`, the AI agent receives:
814
+
815
+ ```json
816
+ {
817
+ "affectedRows": 100,
818
+ "totalInserted": 100
819
+ }
820
+ ```
821
+
822
+ ---
823
+
824
+ ## SQL Query Execution Log
825
+
826
+ ### Query #1 - SUCCESS (45ms)
827
+ **Timestamp:** 2025-11-21T10:30:45.123Z
828
+
829
+ **SQL:**
830
+ ```sql
831
+ INSERT INTO users (name, email, age)
832
+ VALUES (?, ?, ?)
833
+ ```
834
+ Parameters:
835
+ ["John Doe", "john@example.com", 30]
836
+
837
+ ---
838
+
839
+ ### Query #2 - SUCCESS (23ms)
840
+ **Timestamp:** 2025-11-21T10:30:45.168Z
841
+
842
+ **SQL:**
843
+ ```sql
844
+ INSERT INTO users (name, email, age)
845
+ VALUES (?, ?, ?)
846
+ ```
847
+ Parameters:
848
+ ["Jane Smith", "jane@example.com", 28]
849
+
850
+ **Tools with Query Logging:**
851
+
852
+ Query logs are now included in responses from **ALL 30 tools**:
853
+
854
+ ✅ **Database Discovery** - `list_databases`, `list_tables`, `read_table_schema`, `get_table_relationships`
855
+ ✅ **Data Operations** - `create_record`, `read_records`, `update_record`, `delete_record`
856
+ ✅ **Bulk Operations** - `bulk_insert`, `bulk_update`, `bulk_delete`
857
+ ✅ **Custom Queries** - `run_query`, `execute_sql`
858
+ ✅ **Schema Management** - `create_table`, `alter_table`, `drop_table`, `execute_ddl`
859
+ ✅ **Utilities** - `get_table_relationships`
860
+ ✅ **Transactions** - `execute_in_transaction`
861
+ ✅ **Stored Procedures** - `list_stored_procedures`, `get_stored_procedure_info`, `execute_stored_procedure`, etc.
862
+ ✅ **Data Export** - `export_table_to_csv`, `export_query_to_csv`
863
+
864
+ ### Query Logs for Debugging
865
+
866
+ Query logs are valuable for:
867
+ - **Performance Analysis** - Track which queries are slow (high duration)
868
+ - **Troubleshooting** - Review exact parameters sent to queries
869
+ - **Auditing** - See what operations were performed and when
870
+ - **Optimization** - Identify patterns in query execution
871
+ - **Error Investigation** - Review failed queries and their errors
872
+
873
+ ### Query Log Limitations
874
+
875
+ - Logs are stored in memory (not persisted to disk)
876
+ - Only the 100 most recent queries are retained
877
+ - Logs are cleared when the MCP server restarts
878
+ - For production audit trails, consider using MySQL's built-in query logging
879
+
880
+ ### Tools with Query Logging
881
+
882
+ All tools that execute queries include logs:
883
+ - `run_query` - SELECT query execution
884
+ - `execute_sql` - Write operations (INSERT, UPDATE, DELETE)
885
+ - `create_record` - Single record insertion
886
+ - `read_records` - Record querying with filters
887
+ - `update_record` - Record updates
888
+ - `delete_record` - Record deletion
889
+ - Bulk operations (`bulk_insert`, `bulk_update`, `bulk_delete`)
890
+ - Stored procedure execution
891
+ - Transaction operations
892
+
893
+ ### Query Logger Performance & Configuration
894
+
895
+ #### Memory Management
896
+
897
+ The QueryLogger is designed with robust memory safety:
898
+
899
+ **Built-in Protections:**
900
+ - ✅ **SQL Truncation** - Queries truncated to 500 characters max
901
+ - ✅ **Parameter Limiting** - Only first 5 parameters logged
902
+ - ✅ **Value Truncation** - Individual parameter values limited to 50 characters
903
+ - ✅ **Error Truncation** - Error messages limited to 200 characters
904
+ - ✅ **Deep Copy** - Parameters are deep copied to prevent reference mutations
905
+ - ✅ **Safe Serialization** - Handles circular references, BigInt, and unstringifiable objects
906
+ - ✅ **Bounded Storage** - Maximum 100 most recent queries retained
907
+
908
+ **Memory Impact:**
909
+ ```
910
+ Regular query: ~1 KB per log entry
911
+ Bulk operations: ~1 KB per log entry (99.9% reduction vs unbounded)
912
+ Total max memory: ~100 KB for all 100 log entries
913
+ ```
914
+
915
+ #### Configuration Tuning
916
+
917
+ The QueryLogger limits are defined as constants and can be adjusted if needed by modifying `src/db/queryLogger.ts`:
918
+
919
+ ```typescript
920
+ private static readonly MAX_LOGS = 100; // Number of queries to retain
921
+ private static readonly MAX_SQL_LENGTH = 500; // Max SQL string length
922
+ private static readonly MAX_PARAM_LENGTH = 200; // Max params output length
923
+ private static readonly MAX_PARAM_ITEMS = 5; // Max number of params to log
924
+ ```
925
+
926
+ **Tuning Recommendations:**
927
+ - **High-traffic production**: Reduce `MAX_LOGS` to 50 to minimize memory
928
+ - **Development/debugging**: Increase `MAX_SQL_LENGTH` to 1000 for fuller visibility
929
+ - **Bulk operations heavy**: Keep defaults - they're optimized for bulk workloads
930
+
931
+ #### Production Monitoring
932
+
933
+ When running in production, monitor these metrics:
934
+
935
+ 1. **Memory Usage** - QueryLogger should use <500 KB total
936
+ 2. **Response Payload Size** - Query logs add minimal overhead (<1 KB per response)
937
+ 3. **Performance Impact** - Logging overhead is <1ms per query
938
+
939
+ **Health Check:**
940
+ ```javascript
941
+ // Check log memory usage
942
+ const logs = db.getQueryLogs();
943
+ const estimatedMemory = logs.length * 1; // ~1 KB per log
944
+ console.log(`Query log memory usage: ~${estimatedMemory} KB`);
945
+ ```
946
+
947
+ #### Persistent Logging for Production Auditing
948
+
949
+ **Important:** QueryLogger stores logs in memory only (not persisted to disk). For production audit trails and compliance, consider:
950
+
951
+ 1. **MySQL Query Log** (Recommended)
952
+ ```sql
953
+ -- Enable general query log
954
+ SET GLOBAL general_log = 'ON';
955
+ SET GLOBAL general_log_file = '/var/log/mysql/queries.log';
956
+ ```
957
+
958
+ 2. **MySQL Slow Query Log**
959
+ ```sql
960
+ -- Log queries slower than 1 second
961
+ SET GLOBAL slow_query_log = 'ON';
962
+ SET GLOBAL long_query_time = 1;
963
+ ```
964
+
965
+ 3. **Application-Level Logging**
966
+ - Use Winston or similar logger to persist query logs to disk
967
+ - Integrate with log aggregation services (ELK, Splunk, DataDog)
968
+
969
+ 4. **Database Audit Plugins**
970
+ - MySQL Enterprise Audit
971
+ - MariaDB Audit Plugin
972
+ - Percona Audit Log Plugin
973
+
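+ As a sketch of option 3, a minimal Winston setup that flushes the in-memory logs to disk might look like this (it assumes `winston` is installed and reuses the `db.getQueryLogs()` call from the health-check snippet above; the file name and interval are arbitrary):
+ 
+ ```typescript
+ import winston from "winston";
+ 
+ // `db` is the same connection wrapper used in the health-check snippet above.
+ declare const db: { getQueryLogs(): Array<Record<string, unknown>> };
+ 
+ // Persist query logs to disk as JSON lines (illustrative file name).
+ const auditLogger = winston.createLogger({
+   level: "info",
+   format: winston.format.json(),
+   transports: [new winston.transports.File({ filename: "mysql-mcp-queries.log" })],
+ });
+ 
+ // Dump the current in-memory entries once per minute (deduplication omitted for brevity).
+ setInterval(() => {
+   for (const entry of db.getQueryLogs()) {
+     auditLogger.info("query", entry);
+   }
+ }, 60_000);
+ ```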
974
+ **Trade-offs:**
975
+ - **In-Memory (QueryLogger)**: Fast, lightweight, for debugging & development
976
+ - **MySQL Query Log**: Complete audit trail, slight performance impact
977
+ - **Application Logging**: Flexible, can include business context
978
+ - **Audit Plugins**: Enterprise-grade, compliance-ready, feature-rich
979
+
980
+ ---
981
+
982
+ ## 🔒 Security Features
983
+
984
+ ### Built-in Security
985
+
986
+ - ✅ **Parameterized Queries** - All queries use prepared statements (SQL injection protection)
987
+ - ✅ **Permission-Based Access** - Fine-grained control over operations
988
+ - ✅ **Read-Only Validation** - `run_query` enforces SELECT-only operations
989
+ - ✅ **DDL Gating** - Schema changes require explicit `ddl` permission
990
+ - ✅ **Condition Requirements** - DELETE operations must include WHERE conditions
991
+ - ✅ **Input Validation** - All inputs validated with JSON schemas
992
+ - ✅ **Connection Pooling** - Efficient database connection management
993
+
994
+ ### Additional Security (REST API Mode)
995
+
996
+ - ✅ **JWT Authentication** - Token-based authentication
997
+ - ✅ **Rate Limiting** - 100 requests per 15 minutes per IP
998
+ - ✅ **CORS Protection** - Configurable CORS policies
999
+ - ✅ **Helmet Security Headers** - HTTP security headers
1000
+
1001
+ ### Security Best Practices
1002
+
1003
+ 1. **Use Read-Only for Production**
1004
+ ```
1005
+ "list,read,utility"
1006
+ ```
1007
+
1008
+ 2. **Create MySQL Users with Limited Permissions**
1009
+ ```sql
1010
+ CREATE USER 'readonly'@'%' IDENTIFIED BY 'password';
1011
+ GRANT SELECT ON myapp.* TO 'readonly'@'%';
1012
+ FLUSH PRIVILEGES;
1013
+ ```
1014
+
1015
+ 3. **Never Use Root in Production**
1016
+ - Create dedicated users per environment
1017
+ - Grant minimal necessary permissions
1018
+
1019
+ 4. **Never Commit `.env` Files**
1020
+ - Add `.env` to `.gitignore`
1021
+ - Use environment-specific configs
1022
+
1023
+ 5. **Enable DDL Only When Needed**
1024
+ - Keep DDL disabled by default
1025
+ - Only enable for schema migration tasks
1026
+
1027
+ ---
1028
+
1029
+ ## 🚀 Bulk Operations
1030
+
1031
+ The MySQL MCP server includes powerful bulk operation tools designed for high-performance data processing. These tools are optimized for handling large datasets efficiently.
1032
+
1033
+ ### Performance Characteristics
1034
+
1035
+ - **Batch Processing**: Operations are processed in configurable batches to optimize memory usage and database performance
1036
+ - **Transaction Safety**: Each batch is wrapped in a transaction for data consistency
1037
+ - **Error Handling**: Detailed error reporting with batch-level granularity
1038
+ - **Memory Efficient**: Streaming approach prevents memory overflow with large datasets
1039
+
1040
+ ### Best Practices
1041
+
1042
+ #### Batch Size Optimization
1043
+ ```json
1044
+ {
1045
+ "batch_size": 1000 // Recommended for most operations
1046
+ }
1047
+ ```
1048
+
1049
+ **Guidelines:**
1050
+ - **Small records (< 1KB)**: Use batch sizes of 1000-5000
1051
+ - **Large records (> 10KB)**: Use batch sizes of 100-500
1052
+ - **Complex operations**: Start with 100 and increase based on performance
1053
+
1054
+ #### Bulk Insert Tips
1055
+ - Use consistent data structure across all records
1056
+ - Pre-validate data to avoid mid-batch failures
1057
+ - Consider using `ON DUPLICATE KEY UPDATE` for upsert operations
1058
+ - Monitor MySQL's `max_allowed_packet` setting for large batches
1059
+
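+ The upsert form mentioned above looks like this in plain SQL (column names are illustrative; whether `bulk_insert` exposes it directly is a separate question):
+ 
+ ```sql
+ INSERT INTO products (sku, name, price)
+ VALUES ('SKU-001', 'Product 1', 19.99)
+ ON DUPLICATE KEY UPDATE
+   name = VALUES(name),
+   price = VALUES(price);
+ ```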
1060
+ #### Bulk Update Optimization
1061
+ - Use indexed columns in conditions for better performance
1062
+ - Group similar updates together
1063
+ - Consider using raw SQL expressions for calculated updates
1064
+ - Test with small batches first to verify logic
1065
+
1066
+ #### Bulk Delete Safety
1067
+ - Always test delete conditions with `SELECT` first
1068
+ - Use smaller batch sizes for delete operations
1069
+ - Consider soft deletes for important data
1070
+ - Monitor foreign key constraints
1071
+
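+ For example, before bulk-deleting inactive users you can preview how many rows the condition would match:
+ 
+ ```sql
+ -- Dry run: count the rows the delete condition would affect
+ SELECT COUNT(*) FROM users WHERE status = 'inactive';
+ ```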
1072
+ ### Error Handling
1073
+
1074
+ Bulk operations provide detailed error information:
1075
+
1076
+ ```json
1077
+ {
1078
+ "success": false,
1079
+ "error": "Batch 3 failed: Duplicate entry 'user123' for key 'username'",
1080
+ "processed_batches": 2,
1081
+ "total_batches": 5,
1082
+ "successful_operations": 2000,
1083
+ "failed_operations": 1000
1084
+ }
1085
+ ```
1086
+
1087
+ ### Performance Monitoring
1088
+
1089
+ Each bulk operation returns performance metrics:
1090
+
1091
+ ```json
1092
+ {
1093
+ "success": true,
1094
+ "total_processed": 10000,
1095
+ "batches_processed": 10,
1096
+ "execution_time_ms": 2500,
1097
+ "average_batch_time_ms": 250,
1098
+ "records_per_second": 4000
1099
+ }
1100
+ ```
1101
+
1102
+ ---
1103
+
1104
+ ## Troubleshooting
1105
+
1106
+ ### MCP Server Not Connecting
1107
+
1108
+ **Problem:** AI agent doesn't see tools
1109
+
1110
+ **Solutions:**
1111
+ 1. Check config file path is correct
1112
+ 2. Restart AI agent completely
1113
+ 3. If using npx: Verify internet connection for package download
1114
+ 4. If using local files: Verify `bin/mcp-mysql.js` exists
1115
+ 5. Check for JSON syntax errors
1116
+
1117
+ **Problem:** Connection fails
1118
+
1119
+ **Solutions:**
1120
+ 1. Test MySQL manually: `mysql -u root -p`
1121
+ 2. Verify credentials in connection string
1122
+ 3. Check MySQL is running
1123
+ 4. Verify network access (host/port)
1124
+
1125
+ ### Permission Issues
1126
+
1127
+ **Problem:** "Tool is disabled" error
1128
+
1129
+ **Solutions:**
1130
+ 1. Check permissions in third argument
1131
+ 2. Verify permission spelling
1132
+ 3. Add required permission category
1133
+
1134
+ **Problem:** MySQL permission denied
1135
+
1136
+ **Solutions:**
1137
+ ```sql
1138
+ GRANT SELECT, INSERT, UPDATE, DELETE ON db.* TO 'user'@'localhost';
1139
+ FLUSH PRIVILEGES;
1140
+ ```
1141
+
1142
+ ### DDL Operations Not Working
1143
+
1144
+ **Problem:** "DDL operations require 'ddl' permission"
1145
+
1146
+ **Solution:** Add `ddl` to permissions:
1147
+ ```json
1148
+ {
1149
+ "args": [
1150
+ "mysql://...",
1151
+ "list,read,create,update,delete,ddl,utility"
1152
+ ]
1153
+ }
1154
+ ```
1155
+
1156
+ ### Parameter Validation Errors
1157
+
1158
+ **Problem:** "Invalid parameters: must be object" error
1159
+
1160
+ **Symptoms:**
1161
+ - Tools fail when called without parameters
1162
+ - Error message: `Error: Invalid parameters: [{"instancePath":"","schemaPath":"#/type","keyword":"type","params":{"type":"object"},"message":"must be object"}]`
1163
+ - Occurs especially with tools that have optional parameters like `list_tables`, `begin_transaction`, `list_stored_procedures`
1164
+
1165
+ **Cause:**
1166
+ This error occurred in earlier versions (< 1.4.1) when AI agents called MCP tools without providing parameters. The MCP SDK sometimes passes `undefined` or `null` instead of an empty object `{}`, causing JSON schema validation to fail.
1167
+
1168
+ **Solution:**
1169
+ ✅ **Fixed in version 1.4.1+** - All 33 tools now include defensive parameter handling that automatically converts `undefined`/`null` to empty objects.
1170
+
1171
+ **If you're still experiencing this issue:**
1172
+ 1. Update to the latest version:
1173
+ ```bash
1174
+ npx -y @berthojoris/mcp-mysql-server@latest mysql://user:pass@localhost:3306/db "permissions"
1175
+ ```
1176
+
1177
+ 2. If using global installation:
1178
+ ```bash
1179
+ npm update -g @berthojoris/mcp-mysql-server
1180
+ ```
1181
+
1182
+ 3. Restart your AI agent after updating
1183
+
1184
+ **Technical Details:**
1185
+ - All tool handlers now use a defensive pattern, `(args || {})`, to ensure parameters are always objects
1186
+ - This fix applies to all 27 tools that accept parameters
1187
+ - Tools with no parameters (like `list_databases`, `test_connection`) were not affected
1188
+ - No breaking changes - existing valid calls continue to work exactly as before
1189
+
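+ A minimal illustration of that defensive pattern (the type and function names are hypothetical, not the actual source):
+ 
+ ```typescript
+ // Normalize missing parameters before JSON schema validation.
+ type ToolArgs = Record<string, unknown> | undefined | null;
+ 
+ function normalizeArgs(args: ToolArgs): Record<string, unknown> {
+   return args || {}; // undefined/null becomes {}, so "must be object" can no longer fire
+ }
+ 
+ console.log(normalizeArgs(undefined));             // {}
+ console.log(normalizeArgs({ database: "myapp" })); // { database: "myapp" }
+ ```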
1190
+ ---
1191
+
1192
+ ## 📄 License
1193
+
1194
+ MIT License - see [LICENSE](LICENSE) file for details.
1195
+
1196
+ ---
1197
+
1198
+ ## 🗺️ Roadmap
1199
+
1200
+ ### Core Features
1201
+ - ✅ **Transaction support (BEGIN, COMMIT, ROLLBACK)** - **COMPLETED!**
1202
+ - ✅ **Stored procedure execution** - **COMPLETED!**
1203
+ - ✅ **Bulk operations (batch insert/update/delete)** - **COMPLETED!**
1204
+ - ✅ **Add query log on output** - **COMPLETED!**
1205
+ - [ ] Query result caching
1206
+ - [ ] Advanced query optimization hints
1207
+
1208
+ ### Enterprise Features
1209
+ - [ ] **Database backup and restore tools**
1210
+ - [ ] **Data export/import utilities** (CSV, JSON, SQL dumps)
1211
+ - [ ] **Performance monitoring and metrics**
1212
+ - [ ] **Query performance analysis**
1213
+ - [ ] **Connection pool monitoring**
1214
+ - [ ] **Audit logging and compliance**
1215
+ - [ ] **Data migration utilities**
1216
+ - [ ] **Schema versioning and migrations**
1217
+
1218
+ ### Database Adapters
1219
+ - [ ] PostgreSQL adapter
1220
+ - [ ] MongoDB adapter
1221
+ - [ ] SQLite adapter
1222
+ - [ ] Oracle Database adapter
1223
+ - [ ] SQL Server adapter
1224
+
1225
+ ### Recommended Implementation Order
1226
+
1227
+ #### **Phase 1: Performance & Monitoring** 🚀
1228
+ - [ ] **Query result caching** - Dramatically improve response times for repeated queries
1229
+ - [ ] **Performance metrics** - Track query execution times and database performance
1230
+ - [ ] **Connection pool monitoring** - Monitor database connection health and usage
1231
+ - [ ] **Database health checks** - Comprehensive system health monitoring
1232
+
1233
+ #### **Phase 2: Data Management** 📊
1234
+ - [ ] **Database backup and restore tools** - Essential for production data safety
1235
+ - [ ] **Data migration utilities** - Move data between databases and environments
1236
+ - [ ] **Enhanced export/import** - Support for JSON, XML, Excel formats
1237
+ - [ ] **Query history & analytics** - Track and analyze database usage patterns
1238
+
1239
+ #### **Phase 3: Enterprise Features** 🏢
1240
+ - [ ] **Audit logging and compliance** - Track all database operations for security
1241
+ - [ ] **Schema versioning and migrations** - Version control for database schema changes
1242
+ - [ ] **Query optimization** - Automatic query analysis and optimization suggestions
1243
+ - [ ] **Advanced security features** - Enhanced access control and monitoring
1244
+
1245
+ #### **Phase 4: Multi-Database Support** 🌐
1246
+ - [ ] **PostgreSQL adapter** - Extend support to PostgreSQL databases
1247
+ - [ ] **MongoDB adapter** - Add NoSQL document database support
1248
+ - [ ] **SQLite adapter** - Support for lightweight embedded databases
1249
+ - [ ] **Database-agnostic operations** - Unified API across different database types
1250
+
1251
+ #### **Implementation Priority Matrix**
1252
+
1253
+ | Feature | Impact | Effort | Priority |
1254
+ |---------|--------|--------|----------|
1255
+ | Query Result Caching | High | Medium | 1 |
1256
+ | Database Backup/Restore | High | High | 2 |
1257
+ | Performance Monitoring | High | Medium | 3 |
1258
+ | Data Migration | High | High | 4 |
1259
+ | Query Optimization | Medium | Medium | 5 |
1260
+ | PostgreSQL Adapter | High | High | 6 |
1261
+ | Audit Logging | Medium | Low | 7 |
1262
+ | Schema Versioning | Medium | Medium | 8 |
1263
+
1264
+ ---
1265
+
1266
+ **Made with ❤️ for the AI community**
1267
+
1268
+ *Help AI agents interact with MySQL databases safely and efficiently!*