omgkit 2.3.0 → 2.3.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +3 -3
- package/package.json +1 -1
- package/plugin/skills/databases/database-management/SKILL.md +288 -0
- package/plugin/skills/databases/database-migration/SKILL.md +285 -0
- package/plugin/skills/databases/database-schema-design/SKILL.md +195 -0
- package/plugin/skills/databases/supabase/SKILL.md +283 -0
package/README.md
CHANGED

@@ -7,7 +7,7 @@
 [](LICENSE)
 
 > **AI Team System for Claude Code**
-> 23 Agents •
+> 23 Agents • 58 Commands • 76 Skills • 9 Modes
 > *"Think Omega. Build Omega. Be Omega."*
 
 OMGKIT transforms Claude Code into an autonomous AI development team with sprint management, specialized agents, and Omega-level thinking for 10x-1000x productivity improvements.

@@ -17,8 +17,8 @@ OMGKIT transforms Claude Code into an autonomous AI development team with sprint
 | Component | Count | Description |
 |-----------|-------|-------------|
 | **Agents** | 23 | Specialized AI team members |
-| **Commands** |
-| **Skills** |
+| **Commands** | 58 | Slash commands for every task |
+| **Skills** | 76 | Domain expertise modules |
 | **Modes** | 9 | Behavioral configurations |
 | **Sprint Management** | ✅ | Vision, backlog, team autonomy |
 | **Omega Thinking** | ✅ | 7 modes for 10x-1000x solutions |

package/package.json
CHANGED

package/plugin/skills/databases/database-management/SKILL.md
ADDED

@@ -0,0 +1,288 @@
---
name: managing-databases
description: AI agent performs database administration tasks including backup/restore, monitoring, replication, security hardening, and maintenance operations. Use when managing production databases, troubleshooting performance, or implementing high availability.
category: databases
triggers:
- DBA
- database management
- backup
- restore
- replication
- database monitoring
- VACUUM
- maintenance
---

# Managing Databases

## Purpose

Operate and maintain production databases with reliability and performance:

- Implement backup and disaster recovery strategies
- Configure monitoring and alerting
- Manage replication and high availability
- Perform routine maintenance operations
- Troubleshoot performance issues

## Quick Start

```bash
# PostgreSQL backup
pg_dump -Fc -d mydb > backup_$(date +%Y%m%d).dump

# Restore
pg_restore -d mydb backup_20241230.dump

# Check database health
psql -c "SELECT pg_database_size('mydb');"
psql -c "SELECT * FROM pg_stat_activity;"
```

## Features

| Feature | Description | Tools/Commands |
|---------|-------------|----------------|
| Backup/Restore | Point-in-time recovery, full/incremental | pg_dump, pg_basebackup, WAL archiving |
| Monitoring | Connections, queries, locks, replication | pg_stat_*, Prometheus, Grafana |
| Replication | Primary-replica, synchronous/async | streaming replication, logical replication |
| Security | Users, roles, encryption, audit | pg_hba.conf, SSL, pgaudit |
| Maintenance | VACUUM, ANALYZE, reindex | autovacuum tuning, pg_repack |
| Connection Pooling | Reduce connection overhead | PgBouncer, pgpool-II |

## Common Patterns

### Backup Strategies

```bash
# Full backup with compression
pg_dump -Fc -Z9 -d production > backup_$(date +%Y%m%d_%H%M%S).dump

# Parallel backup for large databases (parallel jobs require directory format)
pg_dump -Fd -j 4 -f backup_dir -d production

# Base backup for PITR (Point-in-Time Recovery)
pg_basebackup -D /backups/base -Fp -Xs -P -R

# Continuous WAL archiving (postgresql.conf)
archive_mode = on
archive_command = 'cp %p /archive/%f'

# Restore to specific point in time (recovery settings)
recovery_target_time = '2024-12-30 14:30:00'
```

```sql
-- Verify recovery/replication state (e.g., on a restored standby)
SELECT pg_is_in_recovery();
SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();
```
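
If `archive_command` starts failing silently, PITR backups quietly stop being restorable. A quick check against the `pg_stat_archiver` view (available since PostgreSQL 9.4) surfaces archiving failures; a minimal sketch:

```sql
-- WAL archiving health: failed_count should stay at 0, and on a
-- busy system last_archived_time should be recent.
SELECT archived_count,
       last_archived_wal,
       last_archived_time,
       failed_count,
       last_failed_wal,
       last_failed_time
FROM pg_stat_archiver;
```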

### Monitoring Queries

```sql
-- Active connections and queries
SELECT pid, usename, application_name, state, query,
       now() - query_start AS duration
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY duration DESC;

-- Table sizes and bloat
SELECT schemaname, tablename,
       pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
       pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) AS table_size,
       pg_size_pretty(pg_indexes_size(schemaname||'.'||tablename)) AS index_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Slow queries (requires pg_stat_statements)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;

-- Index usage
SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC; -- Unused indexes at top

-- Lock monitoring (simplified; see the PostgreSQL wiki for the full
-- lock-blocking query that matches on all lock fields)
SELECT blocked_locks.pid AS blocked_pid,
       blocking_locks.pid AS blocking_pid,
       blocked_activity.query AS blocked_query
FROM pg_locks blocked_locks
JOIN pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_locks blocking_locks
  ON blocking_locks.locktype = blocked_locks.locktype
 AND blocking_locks.pid != blocked_locks.pid
JOIN pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted
  AND blocking_locks.granted;
```
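
A complementary signal worth watching alongside the queries above is the buffer cache hit ratio; on an OLTP workload, sustained values well below ~99% often mean the working set no longer fits in `shared_buffers`. A minimal sketch:

```sql
-- Buffer cache hit ratio per database (higher is better)
SELECT datname,
       round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname NOT LIKE 'template%'
ORDER BY cache_hit_pct;
```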

### Replication Setup

```sql
-- On primary: Create replication user
CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'secret';
```

```bash
# pg_hba.conf on primary
host replication replicator replica_ip/32 scram-sha-256

# On replica: Initialize from primary
pg_basebackup -h primary_host -U replicator -D /var/lib/postgresql/data -Fp -Xs -P -R
```

```sql
-- Verify replication status (on primary)
SELECT client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn
FROM pg_stat_replication;

-- Check replication lag (on replica)
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```
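
If the primary is lost, the replica has to be promoted to accept writes. On PostgreSQL 12+ this can be done from SQL (older versions use `pg_ctl promote` or a trigger file); a sketch:

```sql
-- On the replica being promoted (PostgreSQL 12+).
-- Returns true once the server leaves recovery and starts
-- accepting writes; repoint any remaining replicas afterwards.
SELECT pg_promote(wait => true, wait_seconds => 60);
```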

### Connection Pooling (PgBouncer)

```ini
# pgbouncer.ini
[databases]
mydb = host=localhost port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
# pool_mode: transaction, session, or statement
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
```
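
PgBouncer exposes a virtual admin database for inspecting pool health: connect with psql to the `pgbouncer` database (as a user listed in `admin_users` or `stats_users` — an assumption about your config, since the file above doesn't set them) and run its SHOW commands:

```sql
-- psql -h localhost -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;    -- per-pool client/server connection counts and waiting clients
SHOW STATS;    -- query/transaction throughput per database
SHOW CLIENTS;  -- individual client connections
```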

### Maintenance Operations

```sql
-- Manual VACUUM and ANALYZE
VACUUM ANALYZE orders;

-- Aggressive vacuum for bloat
VACUUM FULL orders; -- Locks table, use pg_repack instead

-- Reindex without locking (PostgreSQL 12+)
REINDEX INDEX CONCURRENTLY idx_orders_status;

-- Tune autovacuum per table (high-churn tables)
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_analyze_scale_factor = 0.005
);

-- Check autovacuum status
SELECT schemaname, relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

```bash
# pg_repack: Online VACUUM FULL alternative
pg_repack -d mydb -t orders
```
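
One maintenance risk the checks above don't cover is transaction ID wraparound: if autovacuum can't keep `datfrozenxid` young, PostgreSQL eventually forces aggressive vacuums and, at the extreme, refuses writes. A periodic check:

```sql
-- Transaction ID age per database; values approaching
-- autovacuum_freeze_max_age (default 200,000,000) need attention.
SELECT datname,
       age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```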

### Security Hardening

```sql
-- Create role with minimal privileges
CREATE ROLE app_user WITH LOGIN PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE mydb TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;

-- Read-only user for reporting
CREATE ROLE readonly WITH LOGIN PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE mydb TO readonly;
GRANT USAGE ON SCHEMA public TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

-- Revoke public access
REVOKE ALL ON DATABASE mydb FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM PUBLIC;
```

```bash
# pg_hba.conf - Secure access rules
# TYPE   DATABASE  USER      ADDRESS      METHOD
local    all       postgres               peer
host     mydb      app_user  10.0.0.0/8   scram-sha-256
hostssl  mydb      app_user  0.0.0.0/0    scram-sha-256
```
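
Note that `GRANT ... ON ALL TABLES` only covers tables that already exist; tables created later get no grants. Default privileges close that gap — a sketch, assuming schema changes run as a hypothetical `migrator` role:

```sql
-- Grants applied automatically to tables that `migrator`
-- (hypothetical role name) creates in `public` from now on.
ALTER DEFAULT PRIVILEGES FOR ROLE migrator IN SCHEMA public
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;

ALTER DEFAULT PRIVILEGES FOR ROLE migrator IN SCHEMA public
  GRANT SELECT ON TABLES TO readonly;
```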

## Use Cases

- Setting up production database infrastructure
- Troubleshooting slow queries and locks
- Implementing disaster recovery plans
- Scaling with read replicas
- Security audits and compliance

## Best Practices

| Do | Avoid |
|----|-------|
| Test restore procedures regularly | Assuming backups work without testing |
| Use connection pooling in production | Direct connections from all app instances |
| Enable pg_stat_statements for query analysis | Waiting for problems to investigate queries |
| Set up replication before you need it | Single point of failure in production |
| Use CONCURRENTLY for index operations | Blocking operations during peak hours |
| Create least-privilege database users | Using superuser for applications |
| Monitor replication lag actively | Discovering lag during failover |
| Document and automate runbooks | Manual, ad-hoc maintenance |

## Daily Health Check

```sql
-- Run this checklist daily
-- 1. Database size and growth
SELECT pg_size_pretty(pg_database_size('mydb'));

-- 2. Connection count
SELECT count(*) FROM pg_stat_activity;

-- 3. Long-running queries (>5 min)
SELECT * FROM pg_stat_activity
WHERE state != 'idle' AND query_start < now() - interval '5 minutes';

-- 4. Replication lag
SELECT now() - pg_last_xact_replay_timestamp() AS lag;

-- 5. Bloat check (dead tuples)
SELECT relname, n_dead_tup FROM pg_stat_user_tables
WHERE n_dead_tup > 10000 ORDER BY n_dead_tup DESC;

-- 6. Orphaned prepared (two-phase) transactions
SELECT * FROM pg_prepared_xacts;
```

## Emergency Procedures

```sql
-- Kill long-running queries
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE query_start < now() - interval '30 minutes' AND state != 'idle';

-- Cancel a query without killing its connection (pass the target pid)
SELECT pg_cancel_backend(12345);

-- Emergency: Kill all connections to database
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE datname = 'mydb' AND pid != pg_backend_pid();
```
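
Many of these emergencies can be prevented outright by capping runtime at the role level, so runaway queries cancel themselves; a conservative sketch (values are illustrative):

```sql
-- Cap runaway queries for the application role only,
-- leaving DBA sessions unrestricted. Values are examples.
ALTER ROLE app_user SET statement_timeout = '30s';
ALTER ROLE app_user SET idle_in_transaction_session_timeout = '60s';
```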

## Related Skills

See also these related skill documents:

- **optimizing-databases** - Query and index optimization
- **managing-database-migrations** - Safe schema changes
- **designing-database-schemas** - Schema architecture

package/plugin/skills/databases/database-migration/SKILL.md
ADDED

@@ -0,0 +1,285 @@
---
name: managing-database-migrations
description: AI agent implements safe database migrations with zero-downtime strategies, rollback plans, and expand-contract patterns. Use when creating migrations, deploying schema changes, or implementing zero-downtime database updates.
category: databases
triggers:
- migration
- schema change
- database migration
- zero downtime
- rollback
- prisma migrate
- flyway
---

# Managing Database Migrations

## Purpose

Implement safe, reversible database migrations for production environments:

- Apply expand-contract pattern for zero-downtime changes
- Design rollback strategies for every migration
- Handle large table alterations safely
- Integrate migrations with CI/CD pipelines
- Test migrations before production deployment

## Quick Start

```bash
# Prisma
npx prisma migrate dev --name add_user_roles
npx prisma migrate deploy  # Production

# TypeORM
npm run typeorm migration:generate -- -n AddUserRoles
npm run typeorm migration:run

# Raw SQL with naming convention
# migrations/20241230_001_add_user_roles.sql
```

```sql
-- Prisma migration example
-- prisma/migrations/20241230_add_user_roles/migration.sql
ALTER TABLE "users" ADD COLUMN "role" VARCHAR(50) DEFAULT 'user';
CREATE INDEX "idx_users_role" ON "users"("role");
```

## Features

| Feature | Strategy | When to Use |
|---------|----------|-------------|
| Add Column | Add nullable, then backfill, then NOT NULL | Always safe |
| Remove Column | Stop using, deploy, then remove | Expand-contract |
| Rename Column | Add new, copy data, remove old | Zero-downtime required |
| Change Type | Add new column, migrate, swap | Data transformation |
| Add Index | CREATE CONCURRENTLY | Large tables (>1M rows) |
| Drop Table | Rename first, drop after verification | Reversible delete |

## Common Patterns

### Expand-Contract Pattern

```sql
-- EXPAND: Add new structure (backward compatible)
-- Migration 1: Add new column
ALTER TABLE users ADD COLUMN full_name VARCHAR(255);

-- Application: Write to BOTH columns
UPDATE users SET full_name = first_name || ' ' || last_name;

-- Deploy application that reads from new column

-- CONTRACT: Remove old structure
-- Migration 2 (after app deployed):
ALTER TABLE users DROP COLUMN first_name;
ALTER TABLE users DROP COLUMN last_name;
```
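
During the window between expand and contract, writers that haven't been redeployed yet still update only the old columns. A database-side trigger can keep the new column consistent in the meantime — a minimal sketch, assuming the `users` table above:

```sql
-- Keep full_name in sync while old writers still update
-- first_name/last_name; drop the trigger at the contract step.
CREATE OR REPLACE FUNCTION sync_full_name()
RETURNS TRIGGER AS $$
BEGIN
  NEW.full_name := NEW.first_name || ' ' || NEW.last_name;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_sync_full_name
  BEFORE INSERT OR UPDATE ON users
  FOR EACH ROW EXECUTE FUNCTION sync_full_name();
```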

### Safe Column Addition

```sql
-- Step 1: Add nullable column
ALTER TABLE orders ADD COLUMN shipping_method VARCHAR(50);

-- Step 2: Backfill in batches (avoid long locks); repeat until 0 rows updated
UPDATE orders SET shipping_method = 'standard'
WHERE id IN (SELECT id FROM orders WHERE shipping_method IS NULL LIMIT 10000);

-- Step 3: Set the default first (so new rows are populated), then NOT NULL
ALTER TABLE orders ALTER COLUMN shipping_method SET DEFAULT 'standard';
ALTER TABLE orders ALTER COLUMN shipping_method SET NOT NULL;
```
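
On very large tables, `SET NOT NULL` holds an exclusive lock while it scans every row. A common PostgreSQL workaround is to add a CHECK constraint as NOT VALID, validate it under a lighter lock, and then set NOT NULL — on PostgreSQL 12+ the validated constraint lets the planner skip the full rescan. A sketch:

```sql
-- Step 3 alternative for very large tables:
-- 1. Add the constraint without scanning (brief lock only)
ALTER TABLE orders
  ADD CONSTRAINT orders_shipping_method_not_null
  CHECK (shipping_method IS NOT NULL) NOT VALID;

-- 2. Validate with a SHARE UPDATE EXCLUSIVE lock (writes continue)
ALTER TABLE orders VALIDATE CONSTRAINT orders_shipping_method_not_null;

-- 3. On PostgreSQL 12+, SET NOT NULL now skips the full-table scan
ALTER TABLE orders ALTER COLUMN shipping_method SET NOT NULL;
ALTER TABLE orders DROP CONSTRAINT orders_shipping_method_not_null;
```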

### Safe Column Removal

```sql
-- Step 1: Stop application from using column
-- (Deploy code that no longer reads/writes the column)

-- Step 2: Drop default and constraints
ALTER TABLE users ALTER COLUMN legacy_field DROP DEFAULT;
ALTER TABLE users ALTER COLUMN legacy_field DROP NOT NULL;

-- Step 3: Remove column (after verification period)
ALTER TABLE users DROP COLUMN legacy_field;
```

### Safe Index Creation (Large Tables)

```sql
-- WRONG: Blocks writes to the table for the duration
CREATE INDEX idx_orders_status ON orders(status);

-- CORRECT: Non-blocking for PostgreSQL
CREATE INDEX CONCURRENTLY idx_orders_status ON orders(status);

-- Note: CONCURRENTLY cannot run inside a transaction
-- For Prisma, use a raw SQL migration:
-- prisma/migrations/xxx/migration.sql
-- /* Disable transaction for this migration */
CREATE INDEX CONCURRENTLY idx_orders_status ON orders(status);
```
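
If a `CREATE INDEX CONCURRENTLY` fails or is cancelled, it leaves behind an INVALID index that still slows writes; check for and rebuild these after a failed run:

```sql
-- Find indexes left INVALID by a failed CONCURRENTLY build
SELECT n.nspname AS schema, c.relname AS index_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE NOT i.indisvalid;

-- Then: DROP INDEX CONCURRENTLY <index_name>; and retry the build
```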

### Prisma Migration Workflow

```bash
# Development: Create and apply migration
npx prisma migrate dev --name add_user_roles

# Preview SQL without applying
npx prisma migrate diff \
  --from-schema-datamodel prisma/schema.prisma \
  --to-schema-datamodel prisma/schema.prisma.new \
  --script

# Production deployment
npx prisma migrate deploy

# Reset for development (DESTRUCTIVE)
npx prisma migrate reset
```

```prisma
// schema.prisma - Example with relations
model User {
  id        String   @id @default(uuid())
  email     String   @unique
  role      Role     @default(USER)
  posts     Post[]
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  @@index([email])
}

enum Role {
  USER
  ADMIN
}
```

### TypeORM Migration

```typescript
// migrations/1703936400000-AddUserRoles.ts
import { MigrationInterface, QueryRunner } from 'typeorm';

export class AddUserRoles1703936400000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`
      ALTER TABLE "users" ADD COLUMN "role" VARCHAR(50) DEFAULT 'user'
    `);
    await queryRunner.query(`
      CREATE INDEX "idx_users_role" ON "users"("role")
    `);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP INDEX "idx_users_role"`);
    await queryRunner.query(`ALTER TABLE "users" DROP COLUMN "role"`);
  }
}
```

### Data Migration Pattern

```typescript
// Separate data migration from schema migration
import { MigrationInterface, QueryRunner } from 'typeorm';

export class MigrateUserNames1703936500000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    // Batch process to avoid long transactions and memory issues.
    // PostgreSQL has no UPDATE ... LIMIT, so select ids in a subquery
    // and use RETURNING to count how many rows each batch touched.
    const batchSize = 1000;

    while (true) {
      const rows = await queryRunner.query(`
        UPDATE users
        SET full_name = first_name || ' ' || last_name
        WHERE id IN (
          SELECT id FROM users WHERE full_name IS NULL LIMIT ${batchSize}
        )
        RETURNING id
      `);

      if (rows.length === 0) break;

      // Optional: Add delay to reduce load
      await new Promise((r) => setTimeout(r, 100));
    }
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    // Data migrations often aren't reversible
    console.log('Data migration rollback: manual intervention required');
  }
}
```

## Use Cases

- Adding new features requiring schema changes
- Refactoring database structure safely
- Splitting or merging tables
- Changing column data types
- Large-scale data migrations

## Best Practices

| Do | Avoid |
|----|-------|
| Test migrations on production-like data | Testing only on empty databases |
| Use expand-contract for breaking changes | Direct column renames in production |
| Create CONCURRENTLY for large table indexes | Blocking index creation on live tables |
| Batch large data updates | Updating millions of rows in one transaction |
| Include rollback in every migration | Forward-only migrations without escape |
| Run migrations in CI before deploy | Manual migration execution |
| Version control all migrations | Modifying applied migrations |
| Separate schema and data migrations | Mixing DDL and large DML in one migration |

## Migration Checklist

```
Pre-Migration:
[ ] Migration tested on staging with production data volume
[ ] Rollback script written and tested
[ ] Estimated execution time documented
[ ] Backup verified

During Migration:
[ ] Monitor database locks and connections
[ ] Check application error rates
[ ] Verify migration progress

Post-Migration:
[ ] Verify data integrity
[ ] Check application functionality
[ ] Monitor performance metrics
[ ] Document completion
```
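
One guard worth wiring into the checklist: run DDL with a short `lock_timeout`, so a migration that can't get its lock fails fast instead of queueing behind a long transaction and blocking all traffic on the table. A sketch of session settings for a migration transaction (the DDL and values are illustrative):

```sql
-- Fail fast if the ALTER can't acquire its lock within 5s,
-- rather than blocking every other query behind the lock queue.
BEGIN;
SET LOCAL lock_timeout = '5s';
SET LOCAL statement_timeout = '10min';
ALTER TABLE orders ADD COLUMN shipping_notes TEXT; -- example DDL
COMMIT;
```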

## CI/CD Integration

```yaml
# GitHub Actions example
- name: Run Migrations
  run: |
    # Wait for healthy database
    until pg_isready -h $DB_HOST; do sleep 1; done

    # Run migrations with timeout
    timeout 600 npx prisma migrate deploy

    # Verify migration status
    npx prisma migrate status
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
```

## Related Skills

See also these related skill documents:

- **designing-database-schemas** - Schema design principles
- **managing-databases** - DBA operations and maintenance
- **optimizing-databases** - Performance tuning

package/plugin/skills/databases/database-schema-design/SKILL.md
ADDED

@@ -0,0 +1,195 @@
---
name: designing-database-schemas
description: AI agent designs production-grade database schemas with proper normalization, indexing strategies, and data modeling patterns. Use when creating new databases, designing tables, modeling relationships, or reviewing schema architecture.
category: databases
triggers:
- schema design
- database design
- data modeling
- ERD
- entity relationship
- table design
- normalization
---

# Designing Database Schemas

## Purpose

Design scalable, maintainable database schemas that balance normalization with query performance:

- Apply normalization principles (1NF-BCNF) appropriately
- Choose optimal data types and constraints
- Design efficient indexing strategies
- Implement common patterns (audit trails, soft deletes, multi-tenancy)
- Create clear entity relationships

## Quick Start

```sql
-- Well-designed table with proper constraints
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email VARCHAR(255) NOT NULL UNIQUE,
  username VARCHAR(50) NOT NULL UNIQUE,
  password_hash VARCHAR(255) NOT NULL,
  status VARCHAR(20) NOT NULL DEFAULT 'active' CHECK (status IN ('active', 'suspended', 'deleted')),
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  deleted_at TIMESTAMPTZ -- Soft delete
);

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_status ON users(status) WHERE deleted_at IS NULL;
```
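
A child table referencing `users` rounds out the quick start, showing foreign-key indexing and the cascade choice called out in the Features table below; a minimal sketch:

```sql
-- Posts are owned by their author: deleting the user removes them.
CREATE TABLE posts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  title VARCHAR(255) NOT NULL,
  body TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Index the foreign key: PostgreSQL does not do this automatically.
CREATE INDEX idx_posts_user_id ON posts(user_id);
```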

## Features

| Feature | Description | Pattern |
|---------|-------------|---------|
| Normalization | Eliminate redundancy while maintaining query efficiency | 3NF for OLTP, denormalize for read-heavy |
| Primary Keys | UUID vs serial, natural vs surrogate keys | UUID for distributed, serial for simple apps |
| Foreign Keys | Referential integrity with cascade options | CASCADE for owned data, RESTRICT for referenced |
| Indexes | Query optimization with minimal write overhead | B-tree default, GIN for JSONB/arrays |
| Constraints | Data integrity at database level | CHECK, UNIQUE, NOT NULL, EXCLUSION |
| Partitioning | Horizontal scaling for large tables | Range (time), List (category), Hash (even dist) |
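
Partitioning appears in the table above but nowhere else in this document; a range-partitioned events table is the typical shape (declarative partitioning, PostgreSQL 10+). A sketch with an illustrative table:

```sql
-- Range partitioning by time: old partitions can be detached or
-- dropped cheaply instead of DELETE-ing millions of rows.
CREATE TABLE events (
  id BIGINT GENERATED ALWAYS AS IDENTITY,
  occurred_at TIMESTAMPTZ NOT NULL,
  payload JSONB DEFAULT '{}',
  PRIMARY KEY (id, occurred_at) -- partition key must be part of the PK
) PARTITION BY RANGE (occurred_at);

CREATE TABLE events_2024_12 PARTITION OF events
  FOR VALUES FROM ('2024-12-01') TO ('2025-01-01');
```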

## Common Patterns

### Audit Trail Pattern

```sql
-- Add to every auditable table
ALTER TABLE orders
  ADD COLUMN created_by UUID REFERENCES users(id),
  ADD COLUMN created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  ADD COLUMN updated_by UUID REFERENCES users(id),
  ADD COLUMN updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW();

-- Automatic updated_at trigger
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_updated_at
  BEFORE UPDATE ON orders
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();
```

### Multi-Tenancy (Row-Level)

```sql
-- Tenant isolation with RLS
CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID NOT NULL REFERENCES tenants(id),
  name VARCHAR(255) NOT NULL,
  -- Always include tenant_id in indexes
  UNIQUE (tenant_id, name)
);

CREATE INDEX idx_projects_tenant ON projects(tenant_id);

-- Row Level Security
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON projects
  USING (tenant_id = current_setting('app.tenant_id')::uuid);
```
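
The policy above reads `app.tenant_id` from the session, so the application must set it at the start of every request or transaction. A sketch with a placeholder UUID:

```sql
-- Per-transaction tenant context; the third argument `true` scopes
-- the setting to the current transaction, which is what you want
-- behind a transaction-mode pooler like PgBouncer.
BEGIN;
SELECT set_config('app.tenant_id', '00000000-0000-0000-0000-000000000001', true);
SELECT * FROM projects;  -- RLS now filters to this tenant
COMMIT;
```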

### Polymorphic Associations

```sql
-- Option 1: Separate junction tables (recommended)
CREATE TABLE comments (
  id UUID PRIMARY KEY,
  body TEXT NOT NULL,
  author_id UUID REFERENCES users(id)
);

CREATE TABLE post_comments (
  comment_id UUID PRIMARY KEY REFERENCES comments(id),
  post_id UUID NOT NULL REFERENCES posts(id)
);

CREATE TABLE task_comments (
  comment_id UUID PRIMARY KEY REFERENCES comments(id),
  task_id UUID NOT NULL REFERENCES tasks(id)
);

-- Option 2: JSONB for flexible relations (when schema varies)
CREATE TABLE activities (
  id UUID PRIMARY KEY,
  subject_type VARCHAR(50) NOT NULL,
  subject_id UUID NOT NULL,
  metadata JSONB DEFAULT '{}',
  CHECK (subject_type IN ('post', 'task', 'comment'))
);
CREATE INDEX idx_activities_subject ON activities(subject_type, subject_id);
```

### JSONB for Semi-Structured Data

```sql
-- Good: Configuration, metadata, varying attributes
CREATE TABLE products (
  id UUID PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  base_price DECIMAL(10,2) NOT NULL,
  attributes JSONB DEFAULT '{}' -- Color, size, custom fields
);

CREATE INDEX idx_products_attrs ON products USING GIN (attributes);

-- Query JSONB
SELECT * FROM products
WHERE attributes @> '{"color": "red"}'
  AND (attributes->>'size')::int > 10;
```
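
When one JSONB key is filtered on constantly with a typed comparison (like `size` above), a B-tree expression index on that key usually serves the range query better than the GIN index, which still handles containment (`@>`). A sketch:

```sql
-- B-tree expression index for the typed `size` comparison above
CREATE INDEX idx_products_size ON products (((attributes->>'size')::int));
```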

## Use Cases

- Greenfield database design for new applications
- Schema reviews and optimization recommendations
- Migration from NoSQL to relational or vice versa
- Multi-tenant SaaS database architecture
- Audit and compliance requirements implementation

## Best Practices

| Do | Avoid |
|----|-------|
| Use UUID for distributed systems, serial for simple apps | Auto-incrementing IDs exposed to users (enumeration risk) |
| Apply 3NF for OLTP, denormalize strategically for reads | Over-normalizing lookup tables (country codes, etc.) |
| Create indexes matching query WHERE/ORDER BY patterns | Indexing every column (write performance penalty) |
| Use CHECK constraints for enum-like values | Storing booleans as strings or integers |
| Add NOT NULL unless truly optional | Nullable columns without clear semantics |
| Prefix indexes with table name: `idx_users_email` | Generic index names like `index1` |
| Use TIMESTAMPTZ for all timestamps | Storing timestamps without timezone |
| Design for the 80% use case first | Premature optimization for edge cases |

## Schema Review Checklist

```
[ ] All tables have primary keys
[ ] Foreign keys have appropriate ON DELETE actions
[ ] Indexes exist for all foreign keys
[ ] Indexes match common query patterns
[ ] No nullable columns without clear use case
[ ] Timestamps use TIMESTAMPTZ
[ ] Audit columns (created_at, updated_at) present
[ ] Naming follows consistent convention
[ ] JSONB used only for truly variable schema
[ ] Partitioning considered for tables > 10M rows
```

## Related Skills

See also these related skill documents:

- **managing-database-migrations** - Safe schema evolution patterns
- **optimizing-databases** - Query and index optimization
- **building-with-supabase** - PostgreSQL with RLS patterns

package/plugin/skills/databases/supabase/SKILL.md
ADDED

@@ -0,0 +1,283 @@
---
name: building-with-supabase
description: AI agent builds full-stack applications with Supabase PostgreSQL, authentication, Row Level Security, Edge Functions, and real-time subscriptions. Use when building apps with Supabase, implementing RLS policies, or setting up Supabase Auth.
category: databases
triggers:
- supabase
- RLS
- row level security
- supabase auth
- edge functions
- real-time
- supabase storage
---

# Building with Supabase

## Purpose

Build secure, scalable applications using Supabase's PostgreSQL platform:

- Design tables with proper Row Level Security (RLS)
- Implement authentication flows (email, OAuth, magic link)
- Create real-time subscriptions for live updates
- Build Edge Functions for serverless logic
- Manage file storage with security policies

## Quick Start

```typescript
// Initialize Supabase client
import { createClient } from '@supabase/supabase-js';
import { Database } from './types/supabase';

export const supabase = createClient<Database>(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// Server-side with service role (bypasses RLS) -- keep this in a
// separate server-only module; never ship the key to the client
export const supabaseAdmin = createClient<Database>(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);
```

## Features

| Feature | Description | Guide |
|---------|-------------|-------|
| PostgreSQL | Full Postgres with extensions (pgvector, PostGIS) | Direct SQL or Supabase client |
| Row Level Security | Per-row access control policies | Enable RLS + create policies |
| Authentication | Email, OAuth, magic link, phone OTP | Built-in auth.users table |
| Real-time | Live database change subscriptions | Channel subscriptions |
| Edge Functions | Deno serverless functions | TypeScript at edge |
| Storage | S3-compatible file storage | Buckets with RLS policies |

## Common Patterns

### RLS Policy Patterns

```sql
-- Enable RLS on table
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

-- Owner-based access
CREATE POLICY "Users can CRUD own posts" ON posts
  FOR ALL
  USING (auth.uid() = user_id)
  WITH CHECK (auth.uid() = user_id);

-- Public read, authenticated write
CREATE POLICY "Anyone can read posts" ON posts
  FOR SELECT USING (published = true);

CREATE POLICY "Authenticated users can create" ON posts
  FOR INSERT
  WITH CHECK (auth.uid() IS NOT NULL);

-- Team-based access
CREATE POLICY "Team members can access" ON documents
  FOR ALL
  USING (
    team_id IN (
      SELECT team_id FROM team_members
      WHERE user_id = auth.uid()
    )
  );

-- Role-based access using JWT claims
CREATE POLICY "Admins can do anything" ON users
  FOR ALL
  USING (auth.jwt() ->> 'role' = 'admin');
```
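
`FOR ALL` policies apply one predicate to every command; splitting commands apart and restricting a policy to the `authenticated` role gives finer control. A sketch in the same style as the policies above:

```sql
-- Split read and write rules, scoped to logged-in users only
CREATE POLICY "Authors can update own posts" ON posts
  FOR UPDATE
  TO authenticated
  USING (auth.uid() = user_id)        -- which rows may be targeted
  WITH CHECK (auth.uid() = user_id);  -- what the row may become
```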

### Authentication Flow

```typescript
// Sign up with email
const { data, error } = await supabase.auth.signUp({
  email: 'user@example.com',
  password: 'secure-password',
  options: {
    data: { full_name: 'John Doe' }, // Custom user metadata
    emailRedirectTo: 'https://app.com/auth/callback',
  },
});

// OAuth sign in
const { data: oauthData, error: oauthError } = await supabase.auth.signInWithOAuth({
  provider: 'google',
  options: {
    redirectTo: 'https://app.com/auth/callback',
    scopes: 'email profile',
  },
});

// Magic link
const { error: otpError } = await supabase.auth.signInWithOtp({
  email: 'user@example.com',
  options: { emailRedirectTo: 'https://app.com/auth/callback' },
});

// Get current user
const { data: { user } } = await supabase.auth.getUser();

// Sign out
await supabase.auth.signOut();
```

### Real-time Subscriptions

```typescript
// Subscribe to table changes
const channel = supabase
  .channel('posts-changes')
  .on(
    'postgres_changes',
    {
      event: '*', // INSERT, UPDATE, DELETE, or *
      schema: 'public',
      table: 'posts',
      filter: 'user_id=eq.' + userId, // Optional filter
    },
    (payload) => {
      console.log('Change:', payload.eventType, payload.new);
    }
  )
  .subscribe();

// Cleanup
channel.unsubscribe();
```

### Edge Functions

```typescript
// supabase/functions/process-webhook/index.ts
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts';
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';

serve(async (req) => {
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  );

  const { record } = await req.json();

  // Process webhook...
  await supabase.from('processed').insert({ data: record });

  return new Response(JSON.stringify({ success: true }), {
    headers: { 'Content-Type': 'application/json' },
  });
});
```

### Storage with Policies

```sql
-- Create bucket
INSERT INTO storage.buckets (id, name, public)
VALUES ('avatars', 'avatars', true);

-- Storage policies
CREATE POLICY "Users can upload own avatar" ON storage.objects
  FOR INSERT WITH CHECK (
    bucket_id = 'avatars' AND
    auth.uid()::text = (storage.foldername(name))[1]
  );

CREATE POLICY "Anyone can view avatars" ON storage.objects
  FOR SELECT USING (bucket_id = 'avatars');
```

```typescript
// Upload file
const { data, error } = await supabase.storage
  .from('avatars')
  .upload(`${userId}/avatar.png`, file, {
    cacheControl: '3600',
    upsert: true,
  });

// Get public URL
const { data: { publicUrl } } = supabase.storage
  .from('avatars')
  .getPublicUrl(`${userId}/avatar.png`);
```
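
The two SQL policies above allow upload and read but not replacement or cleanup; matching UPDATE/DELETE policies complete the owner-only story (the `upsert: true` call relies on update permission). A sketch following the same folder convention:

```sql
-- Owners may overwrite (needed for upsert: true) and delete
CREATE POLICY "Users can update own avatar" ON storage.objects
  FOR UPDATE USING (
    bucket_id = 'avatars' AND
    auth.uid()::text = (storage.foldername(name))[1]
  );

CREATE POLICY "Users can delete own avatar" ON storage.objects
  FOR DELETE USING (
    bucket_id = 'avatars' AND
    auth.uid()::text = (storage.foldername(name))[1]
  );
```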

### Next.js Server Components

```typescript
// app/api/posts/route.ts
import { createRouteHandlerClient } from '@supabase/auth-helpers-nextjs';
import { cookies } from 'next/headers';

export async function GET() {
  const supabase = createRouteHandlerClient({ cookies });
  const { data: posts } = await supabase.from('posts').select('*');
  return Response.json(posts);
}
```

```typescript
// Server Component (separate file)
import { createServerComponentClient } from '@supabase/auth-helpers-nextjs';
import { cookies } from 'next/headers';

export default async function Page() {
  const supabase = createServerComponentClient({ cookies });
  const { data: posts } = await supabase.from('posts').select('*');
  return <PostList posts={posts} />;
}
```

## Use Cases

- Building SaaS applications with multi-tenant RLS
- Real-time collaborative applications
- Mobile app backends with authentication
- Serverless APIs with Edge Functions
- File upload systems with access control

## Best Practices

| Do | Avoid |
|----|-------|
| Enable RLS on all tables | Disabling RLS "temporarily" in production |
| Use `auth.uid()` in policies, not session data | Trusting client-side user ID |
| Create service role client only server-side | Exposing service role key to client |
| Use TypeScript types from `supabase gen types` | Manual type definitions |
| Filter subscriptions to reduce bandwidth | Subscribing to entire tables |
| Use `supabase db push` for dev, migrations for prod | Pushing directly to production |
| Set up proper bucket policies | Public buckets for sensitive files |
| Use `signInWithOAuth` for social auth | Custom OAuth implementations |

## CLI Commands

```bash
# Local development
supabase start                      # Start local Supabase
supabase db reset                   # Reset with migrations + seed

# Migrations
supabase migration new add_posts    # Create migration
supabase db push                    # Push to linked project (dev only)
supabase db diff --use-migra        # Generate migration from diff

# Type generation
supabase gen types typescript --local > types/supabase.ts

# Edge Functions
supabase functions serve            # Local development
supabase functions deploy my-func   # Deploy to production
```

## Related Skills

See also these related skill documents:

- **designing-database-schemas** - Schema design patterns
- **managing-database-migrations** - Migration strategies
- **implementing-oauth** - OAuth flow details