omgkit 2.3.0 → 2.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +7 -4
- package/lib/cli.js +7 -2
- package/package.json +1 -1
- package/plugin/skills/databases/database-management/SKILL.md +288 -0
- package/plugin/skills/databases/database-migration/SKILL.md +285 -0
- package/plugin/skills/databases/database-schema-design/SKILL.md +195 -0
- package/plugin/skills/databases/supabase/SKILL.md +283 -0
- package/templates/devlogs/README.md +29 -0
- package/templates/stdrules/README.md +34 -0
- package/templates/stdrules/SKILL_STANDARDS.md +490 -0
package/README.md
CHANGED
@@ -7,7 +7,7 @@
 [](LICENSE)
 
 > **AI Team System for Claude Code**
-> 23 Agents •
+> 23 Agents • 58 Commands • 76 Skills • 9 Modes
 > *"Think Omega. Build Omega. Be Omega."*
 
 OMGKIT transforms Claude Code into an autonomous AI development team with sprint management, specialized agents, and Omega-level thinking for 10x-1000x productivity improvements.
@@ -17,8 +17,8 @@ OMGKIT transforms Claude Code into an autonomous AI development team with sprint
 | Component | Count | Description |
 |-----------|-------|-------------|
 | **Agents** | 23 | Specialized AI team members |
-| **Commands** |
-| **Skills** |
+| **Commands** | 58 | Slash commands for every task |
+| **Skills** | 76 | Domain expertise modules |
 | **Modes** | 9 | Behavioral configurations |
 | **Sprint Management** | ✅ | Vision, backlog, team autonomy |
 | **Omega Thinking** | ✅ | 7 modes for 10x-1000x solutions |
@@ -229,7 +229,10 @@ your-project/
 │   │   └── backlog.yaml       # Task backlog
 │   ├── plans/                 # Generated plans
 │   ├── docs/                  # Generated docs
-│
+│   ├── logs/                  # Activity logs
+│   ├── devlogs/               # Development logs, planning, tracking (git-ignored)
+│   └── stdrules/              # Standards & rules for the project
+│       └── SKILL_STANDARDS.md
 └── OMEGA.md                   # Project context
 ```
package/lib/cli.js
CHANGED
@@ -173,7 +173,9 @@ export function initProject(options = {}) {
     '.omgkit/sprints',
     '.omgkit/plans',
     '.omgkit/docs',
-    '.omgkit/logs'
+    '.omgkit/logs',
+    '.omgkit/devlogs',
+    '.omgkit/stdrules'
   ];
 
   dirs.forEach(dir => {
@@ -191,7 +193,10 @@ export function initProject(options = {}) {
     { src: 'OMEGA.md', dest: 'OMEGA.md' },
     { src: 'vision.yaml', dest: '.omgkit/sprints/vision.yaml' },
     { src: 'backlog.yaml', dest: '.omgkit/sprints/backlog.yaml' },
-    { src: 'settings.json', dest: '.omgkit/settings.json' }
+    { src: 'settings.json', dest: '.omgkit/settings.json' },
+    { src: 'devlogs/README.md', dest: '.omgkit/devlogs/README.md' },
+    { src: 'stdrules/README.md', dest: '.omgkit/stdrules/README.md' },
+    { src: 'stdrules/SKILL_STANDARDS.md', dest: '.omgkit/stdrules/SKILL_STANDARDS.md' }
   ];
 
   templates.forEach(({ src, dest }) => {
package/package.json
CHANGED

-  "version": "2.3.0",
+  "version": "2.4.0",

package/plugin/skills/databases/database-management/SKILL.md
ADDED

@@ -0,0 +1,288 @@
---
name: managing-databases
description: AI agent performs database administration tasks including backup/restore, monitoring, replication, security hardening, and maintenance operations. Use when managing production databases, troubleshooting performance, or implementing high availability.
category: databases
triggers:
  - DBA
  - database management
  - backup
  - restore
  - replication
  - database monitoring
  - VACUUM
  - maintenance
---

# Managing Databases

## Purpose

Operate and maintain production databases with reliability and performance:

- Implement backup and disaster recovery strategies
- Configure monitoring and alerting
- Manage replication and high availability
- Perform routine maintenance operations
- Troubleshoot performance issues

## Quick Start

```bash
# PostgreSQL backup
pg_dump -Fc -d mydb > backup_$(date +%Y%m%d).dump

# Restore
pg_restore -d mydb backup_20241230.dump

# Check database health
psql -c "SELECT pg_database_size('mydb');"
psql -c "SELECT * FROM pg_stat_activity;"
```

## Features

| Feature | Description | Tools/Commands |
|---------|-------------|----------------|
| Backup/Restore | Point-in-time recovery, full/incremental | pg_dump, pg_basebackup, WAL archiving |
| Monitoring | Connections, queries, locks, replication | pg_stat_*, Prometheus, Grafana |
| Replication | Master-replica, synchronous/async | streaming replication, logical replication |
| Security | Users, roles, encryption, audit | pg_hba.conf, SSL, pgaudit |
| Maintenance | VACUUM, ANALYZE, reindex | autovacuum tuning, pg_repack |
| Connection Pooling | Reduce connection overhead | PgBouncer, pgpool-II |

## Common Patterns

### Backup Strategies

```bash
# Full backup with compression
pg_dump -Fc -Z9 -d production > backup_$(date +%Y%m%d_%H%M%S).dump

# Parallel backup for large databases (parallel dump requires the directory format)
pg_dump -Fd -j 4 -f backup_dir -d production

# Base backup for PITR (Point-in-Time Recovery)
pg_basebackup -D /backups/base -Fp -Xs -P -R

# Continuous WAL archiving (postgresql.conf)
archive_mode = on
archive_command = 'cp %p /archive/%f'

# Restore to specific point in time (recovery configuration)
recovery_target_time = '2024-12-30 14:30:00'
```

```sql
-- Verify recovery/standby state after restore
SELECT pg_is_in_recovery();
SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();
```
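
A backup only counts once a restore of it has succeeded. A minimal restore drill in shell, assuming a scratch database and the dump name from the Quick Start above (pg_verifybackup ships with PostgreSQL 13+ and checks a base backup against its manifest):

```bash
# Verify a pg_basebackup directory against its manifest (PostgreSQL 13+)
pg_verifybackup /backups/base

# Restore drill: never let an outage be your first restore
createdb restore_test
pg_restore -d restore_test backup_20241230.dump
psql -d restore_test -c "SELECT count(*) FROM pg_stat_user_tables;"  # sanity check
dropdb restore_test
```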

### Monitoring Queries

```sql
-- Active connections and queries
SELECT pid, usename, application_name, state, query,
       now() - query_start AS duration
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY duration DESC;

-- Table sizes and bloat
SELECT schemaname, tablename,
       pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
       pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) AS table_size,
       pg_size_pretty(pg_indexes_size(schemaname||'.'||tablename)) AS index_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Slow queries (requires pg_stat_statements)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;

-- Index usage (pg_stat_user_indexes exposes relname/indexrelname)
SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC; -- Unused indexes at top

-- Lock monitoring: blocked vs. blocking sessions
SELECT blocked_locks.pid AS blocked_pid,
       blocking_locks.pid AS blocking_pid,
       blocked_activity.query AS blocked_query,
       blocking_activity.query AS blocking_query
FROM pg_locks blocked_locks
JOIN pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_locks blocking_locks
  ON blocking_locks.locktype = blocked_locks.locktype
 AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
 AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
 AND blocking_locks.pid != blocked_locks.pid
 AND blocking_locks.granted
JOIN pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;
```
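
The slow-query view above returns nothing until the extension is actually enabled; a quick setup check, assuming superuser access:

```bash
# pg_stat_statements must be preloaded at server start...
psql -c "SHOW shared_preload_libraries;"   # should include pg_stat_statements

# ...and created once per database
psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"

# Reset counters after tuning so the numbers reflect current behavior
psql -d mydb -c "SELECT pg_stat_statements_reset();"
```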

### Replication Setup

```sql
-- On primary: Create replication user
CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'secret';

-- pg_hba.conf on primary:
-- host  replication  replicator  replica_ip/32  scram-sha-256
```

```bash
# On replica: Initialize from primary
pg_basebackup -h primary_host -U replicator -D /var/lib/postgresql/data -Fp -Xs -P -R

# Verify replication status (on primary)
psql -c "SELECT client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn
         FROM pg_stat_replication;"

# Check replication lag (on replica)
psql -c "SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;"
```
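
Replication only pays off if failover has been rehearsed. A minimal promotion sketch, assuming PostgreSQL 12+ and the data directory from the example above:

```bash
# Promote the replica to primary (PostgreSQL 12+)
psql -c "SELECT pg_promote();"
# ...or with pg_ctl on the replica host:
pg_ctl promote -D /var/lib/postgresql/data

# Confirm the node has left recovery (should return f)
psql -c "SELECT pg_is_in_recovery();"
```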

### Connection Pooling (PgBouncer)

```ini
# pgbouncer.ini
[databases]
mydb = host=localhost port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; pool_mode: transaction, session, or statement
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
```
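
Applications then connect to port 6432 instead of 5432, and pool health is visible through PgBouncer's admin console. A sketch, assuming an admin user (here `pgbouncer_admin`, a hypothetical name) listed in `admin_users`:

```bash
# Route the application through PgBouncer, not straight at PostgreSQL
psql "host=localhost port=6432 dbname=mydb user=app_user"

# Inspect pool state via the virtual 'pgbouncer' admin database
psql -h localhost -p 6432 -U pgbouncer_admin pgbouncer -c "SHOW POOLS;"
psql -h localhost -p 6432 -U pgbouncer_admin pgbouncer -c "SHOW STATS;"
```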

### Maintenance Operations

```sql
-- Manual VACUUM and ANALYZE
VACUUM ANALYZE orders;

-- Aggressive vacuum for bloat
VACUUM FULL orders; -- Locks table, use pg_repack instead

-- Reindex without locking (PostgreSQL 12+)
REINDEX INDEX CONCURRENTLY idx_orders_status;

-- Tune autovacuum per table (high-churn tables)
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_analyze_scale_factor = 0.005
);

-- Check autovacuum status
SELECT schemaname, relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

```bash
# pg_repack: Online VACUUM FULL alternative
pg_repack -d mydb -t orders
```

### Security Hardening

```sql
-- Create role with minimal privileges
CREATE ROLE app_user WITH LOGIN PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE mydb TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;

-- Read-only user for reporting
CREATE ROLE readonly WITH LOGIN PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE mydb TO readonly;
GRANT USAGE ON SCHEMA public TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

-- Revoke public access
REVOKE ALL ON DATABASE mydb FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM PUBLIC;
```

```bash
# pg_hba.conf - Secure access rules
# TYPE   DATABASE   USER       ADDRESS      METHOD
local    all        postgres                peer
host     mydb       app_user   10.0.0.0/8   scram-sha-256
hostssl  mydb       app_user   0.0.0.0/0    scram-sha-256
```
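
One gap worth closing: `GRANT ... ON ALL TABLES` only covers tables that exist at grant time. A follow-up sketch for future tables, using the roles above (default privileges apply to objects created by the role that runs the statement):

```bash
psql -d mydb <<'SQL'
-- Future tables created in public get the same grants automatically
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO readonly;
SQL
```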

## Use Cases

- Setting up production database infrastructure
- Troubleshooting slow queries and locks
- Implementing disaster recovery plans
- Scaling with read replicas
- Security audits and compliance

## Best Practices

| Do | Avoid |
|----|-------|
| Test restore procedures regularly | Assuming backups work without testing |
| Use connection pooling in production | Direct connections from all app instances |
| Enable pg_stat_statements for query analysis | Waiting for problems to investigate queries |
| Set up replication before you need it | Single point of failure in production |
| Use CONCURRENTLY for index operations | Blocking operations during peak hours |
| Create least-privilege database users | Using superuser for applications |
| Monitor replication lag actively | Discovering lag during failover |
| Document and automate runbooks (see the sketch below) | Manual, ad-hoc maintenance |
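
In that spirit, a nightly-backup runbook reduced to a cron-able script; a sketch, where the paths and the 14-day retention are assumptions:

```bash
#!/usr/bin/env bash
# Nightly logical backup with simple retention
set -euo pipefail

STAMP=$(date +%Y%m%d_%H%M%S)
pg_dump -Fc -Z9 -d production > "/backups/backup_${STAMP}.dump"

# Drop dumps older than 14 days
find /backups -name 'backup_*.dump' -mtime +14 -delete
```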

## Daily Health Check

```sql
-- Run this checklist daily
-- 1. Database size and growth
SELECT pg_size_pretty(pg_database_size('mydb'));

-- 2. Connection count
SELECT count(*) FROM pg_stat_activity;

-- 3. Long-running queries (>5 min)
SELECT * FROM pg_stat_activity
WHERE state != 'idle' AND query_start < now() - interval '5 minutes';

-- 4. Replication lag
SELECT now() - pg_last_xact_replay_timestamp() AS lag;

-- 5. Bloat check (dead tuples)
SELECT relname, n_dead_tup FROM pg_stat_user_tables
WHERE n_dead_tup > 10000 ORDER BY n_dead_tup DESC;

-- 6. Stuck prepared transactions
SELECT * FROM pg_prepared_xacts;
```

## Emergency Procedures

```sql
-- Kill long-running queries
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE query_start < now() - interval '30 minutes' AND state != 'idle';

-- Cancel a query without killing its connection
SELECT pg_cancel_backend(12345);  -- replace with the offending pid

-- Emergency: Kill all connections to database
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE datname = 'mydb' AND pid != pg_backend_pid();
```
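
Many of these emergencies can be prevented outright with per-role timeouts; the values below are illustrative assumptions, not recommendations:

```bash
# Session guardrails for the application role
psql -c "ALTER ROLE app_user SET statement_timeout = '30s';"
psql -c "ALTER ROLE app_user SET lock_timeout = '5s';"
psql -c "ALTER ROLE app_user SET idle_in_transaction_session_timeout = '60s';"
```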

## Related Skills

See also these related skill documents:

- **optimizing-databases** - Query and index optimization
- **managing-database-migrations** - Safe schema changes
- **designing-database-schemas** - Schema architecture

package/plugin/skills/databases/database-migration/SKILL.md
ADDED

@@ -0,0 +1,285 @@
---
name: managing-database-migrations
description: AI agent implements safe database migrations with zero-downtime strategies, rollback plans, and expand-contract patterns. Use when creating migrations, deploying schema changes, or implementing zero-downtime database updates.
category: databases
triggers:
  - migration
  - schema change
  - database migration
  - zero downtime
  - rollback
  - prisma migrate
  - flyway
---

# Managing Database Migrations

## Purpose

Implement safe, reversible database migrations for production environments:

- Apply expand-contract pattern for zero-downtime changes
- Design rollback strategies for every migration
- Handle large table alterations safely
- Integrate migrations with CI/CD pipelines
- Test migrations before production deployment

## Quick Start

```bash
# Prisma
npx prisma migrate dev --name add_user_roles
npx prisma migrate deploy  # Production

# TypeORM
npm run typeorm migration:generate -- -n AddUserRoles
npm run typeorm migration:run

# Raw SQL with naming convention
# migrations/20241230_001_add_user_roles.sql
```

```sql
-- Prisma migration example
-- prisma/migrations/20241230_add_user_roles/migration.sql
ALTER TABLE "users" ADD COLUMN "role" VARCHAR(50) DEFAULT 'user';
CREATE INDEX "idx_users_role" ON "users"("role");
```

## Features

| Feature | Strategy | When to Use |
|---------|----------|-------------|
| Add Column | Add nullable, then backfill, then NOT NULL | Always safe |
| Remove Column | Stop using, deploy, then remove | Expand-contract |
| Rename Column | Add new, copy data, remove old | Zero-downtime required |
| Change Type | Add new column, migrate, swap | Data transformation |
| Add Index | CREATE CONCURRENTLY | Large tables (>1M rows) |
| Drop Table | Rename first, drop after verification (sketch below) | Reversible delete |
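
The "Drop Table" row deserves a concrete shape, since it never reappears in the patterns below. A sketch, with table names assumed:

```bash
# Reversible drop: rename, wait out a verification window, then drop
psql -d mydb -c "ALTER TABLE legacy_reports RENAME TO legacy_reports_dropme_20241230;"
# ...days later, once nothing has broken:
psql -d mydb -c "DROP TABLE legacy_reports_dropme_20241230;"
```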

## Common Patterns

### Expand-Contract Pattern

```sql
-- EXPAND: Add new structure (backward compatible)
-- Migration 1: Add new column
ALTER TABLE users ADD COLUMN full_name VARCHAR(255);

-- Deploy application code that writes to BOTH columns, then backfill:
UPDATE users SET full_name = first_name || ' ' || last_name;

-- Deploy application code that reads from the new column

-- CONTRACT: Remove old structure
-- Migration 2 (after app deployed):
ALTER TABLE users DROP COLUMN first_name;
ALTER TABLE users DROP COLUMN last_name;
```

### Safe Column Addition

```sql
-- Step 1: Add nullable column
ALTER TABLE orders ADD COLUMN shipping_method VARCHAR(50);

-- Step 2: Backfill in batches to avoid long locks; repeat until no rows match
-- (a shell driver for this loop follows after this block)
UPDATE orders SET shipping_method = 'standard'
WHERE id IN (SELECT id FROM orders WHERE shipping_method IS NULL LIMIT 10000);

-- Step 3: Add NOT NULL constraint and default
ALTER TABLE orders ALTER COLUMN shipping_method SET NOT NULL;
ALTER TABLE orders ALTER COLUMN shipping_method SET DEFAULT 'standard';
```
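
Step 2 is a single batch; in practice something has to loop it. A driver sketch in shell, reusing the statement above (database name assumed):

```bash
# Re-run the batched backfill until no rows remain
while :; do
  n=$(psql -At -d mydb -c "
    WITH upd AS (
      UPDATE orders SET shipping_method = 'standard'
      WHERE id IN (SELECT id FROM orders WHERE shipping_method IS NULL LIMIT 10000)
      RETURNING 1
    ) SELECT count(*) FROM upd;")
  [ "$n" -eq 0 ] && break
  sleep 1  # breathe between batches
done
```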

### Safe Column Removal

```sql
-- Step 1: Stop application from using column
-- (Deploy code that no longer reads/writes the column)

-- Step 2: Drop default and constraints
ALTER TABLE users ALTER COLUMN legacy_field DROP DEFAULT;
ALTER TABLE users ALTER COLUMN legacy_field DROP NOT NULL;

-- Step 3: Remove column (after verification period)
ALTER TABLE users DROP COLUMN legacy_field;
```

### Safe Index Creation (Large Tables)

```sql
-- WRONG: Blocks writes to the table for the duration
CREATE INDEX idx_orders_status ON orders(status);

-- CORRECT: Non-blocking for PostgreSQL
CREATE INDEX CONCURRENTLY idx_orders_status ON orders(status);

-- Note: CONCURRENTLY cannot run inside a transaction.
-- For Prisma, use a raw SQL migration:
-- prisma/migrations/xxx/migration.sql
-- /* Disable transaction for this migration */
CREATE INDEX CONCURRENTLY idx_orders_status ON orders(status);
```
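
One caveat worth knowing: if CREATE INDEX CONCURRENTLY fails partway, it leaves an INVALID index behind that must be dropped and re-created. A check, assuming the index name above:

```bash
# Find invalid indexes left behind by failed concurrent builds
psql -d mydb -c "SELECT indexrelid::regclass FROM pg_index WHERE NOT indisvalid;"

# Drop and retry (DROP INDEX also supports CONCURRENTLY)
psql -d mydb -c "DROP INDEX CONCURRENTLY idx_orders_status;"
psql -d mydb -c "CREATE INDEX CONCURRENTLY idx_orders_status ON orders(status);"
```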

### Prisma Migration Workflow

```bash
# Development: Create and apply migration
npx prisma migrate dev --name add_user_roles

# Preview SQL without applying
npx prisma migrate diff \
  --from-schema-datamodel prisma/schema.prisma \
  --to-schema-datamodel prisma/schema.prisma.new \
  --script

# Production deployment
npx prisma migrate deploy

# Reset for development (DESTRUCTIVE)
npx prisma migrate reset
```

```prisma
// schema.prisma - Example with relations
model User {
  id        String   @id @default(uuid())
  email     String   @unique
  role      Role     @default(USER)
  posts     Post[]
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  @@index([email])
}

enum Role {
  USER
  ADMIN
}
```

### TypeORM Migration

```typescript
// migrations/1703936400000-AddUserRoles.ts
import { MigrationInterface, QueryRunner } from 'typeorm';

export class AddUserRoles1703936400000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`
      ALTER TABLE "users" ADD COLUMN "role" VARCHAR(50) DEFAULT 'user'
    `);
    await queryRunner.query(`
      CREATE INDEX "idx_users_role" ON "users"("role")
    `);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP INDEX "idx_users_role"`);
    await queryRunner.query(`ALTER TABLE "users" DROP COLUMN "role"`);
  }
}
```

### Data Migration Pattern

```typescript
// Separate data migration from schema migration
import { MigrationInterface, QueryRunner } from 'typeorm';

export class MigrateUserNames1703936500000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    // Batch process to avoid long locks and memory issues
    const batchSize = 1000;

    while (true) {
      // PostgreSQL has no UPDATE ... LIMIT, so bound each batch with a keyed
      // subquery and use RETURNING to see how many rows were touched
      const rows: unknown[] = await queryRunner.query(`
        UPDATE users
        SET full_name = first_name || ' ' || last_name
        WHERE id IN (
          SELECT id FROM users WHERE full_name IS NULL LIMIT ${batchSize}
        )
        RETURNING id
      `);

      if (rows.length === 0) break;

      // Optional: Add delay to reduce load
      await new Promise(r => setTimeout(r, 100));
    }
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    // Data migrations often aren't reversible
    console.log('Data migration rollback: manual intervention required');
  }
}
```

## Use Cases

- Adding new features requiring schema changes
- Refactoring database structure safely
- Splitting or merging tables
- Changing column data types
- Large-scale data migrations

## Best Practices

| Do | Avoid |
|----|-------|
| Test migrations on production-like data | Testing only on empty databases |
| Use expand-contract for breaking changes | Direct column renames in production |
| Use CREATE CONCURRENTLY for large table indexes | Blocking index creation on live tables |
| Batch large data updates | Updating millions of rows in one transaction |
| Include rollback in every migration | Forward-only migrations without escape |
| Run migrations in CI before deploy | Manual migration execution |
| Version control all migrations | Modifying applied migrations |
| Separate schema and data migrations | Mixing DDL and large DML in one migration |

## Migration Checklist

```
Pre-Migration:
[ ] Migration tested on staging with production data volume
[ ] Rollback script written and tested
[ ] Estimated execution time documented
[ ] Backup verified

During Migration:
[ ] Monitor database locks and connections
[ ] Check application error rates
[ ] Verify migration progress

Post-Migration:
[ ] Verify data integrity
[ ] Check application functionality
[ ] Monitor performance metrics
[ ] Document completion
```
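
The first and third pre-migration items can be rehearsed together by restoring a production-sized dump and timing the migration against it. A sketch, where the dump and migration file names are assumptions:

```bash
# Stand up a staging copy and time the migration against it
createdb staging_rehearsal
pg_restore -d staging_rehearsal /backups/backup_20241230.dump
time psql -d staging_rehearsal -f migrations/20241230_001_add_user_roles.sql
dropdb staging_rehearsal
```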

## CI/CD Integration

```yaml
# GitHub Actions example
- name: Run Migrations
  run: |
    # Wait for healthy database
    until pg_isready -h $DB_HOST; do sleep 1; done

    # Run migrations with timeout
    timeout 600 npx prisma migrate deploy

    # Verify migration status
    npx prisma migrate status
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
```

## Related Skills

See also these related skill documents:

- **designing-database-schemas** - Schema design principles
- **managing-databases** - DBA operations and maintenance
- **optimizing-databases** - Performance tuning