bps-kit 1.2.2 → 1.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (51)
  1. package/.bps-kit.json +4 -4
  2. package/README.md +3 -0
  3. package/implementation_plan.md.resolved +37 -0
  4. package/package.json +2 -2
  5. package/templates/agents-template/ARCHITECTURE.md +21 -9
  6. package/templates/agents-template/agents/automation-specialist.md +157 -0
  7. package/templates/agents-template/rules/GEMINI.md +2 -10
  8. package/templates/agents-template/workflows/automate.md +153 -0
  9. package/templates/skills_normal/n8n-code-javascript/BUILTIN_FUNCTIONS.md +764 -0
  10. package/templates/skills_normal/n8n-code-javascript/COMMON_PATTERNS.md +1110 -0
  11. package/templates/skills_normal/n8n-code-javascript/DATA_ACCESS.md +782 -0
  12. package/templates/skills_normal/n8n-code-javascript/ERROR_PATTERNS.md +763 -0
  13. package/templates/skills_normal/n8n-code-javascript/README.md +350 -0
  14. package/templates/skills_normal/n8n-code-javascript/SKILL.md +699 -0
  15. package/templates/skills_normal/n8n-code-python/COMMON_PATTERNS.md +794 -0
  16. package/templates/skills_normal/n8n-code-python/DATA_ACCESS.md +702 -0
  17. package/templates/skills_normal/n8n-code-python/ERROR_PATTERNS.md +601 -0
  18. package/templates/skills_normal/n8n-code-python/README.md +386 -0
  19. package/templates/skills_normal/n8n-code-python/SKILL.md +748 -0
  20. package/templates/skills_normal/n8n-code-python/STANDARD_LIBRARY.md +974 -0
  21. package/templates/skills_normal/n8n-expression-syntax/COMMON_MISTAKES.md +393 -0
  22. package/templates/skills_normal/n8n-expression-syntax/EXAMPLES.md +483 -0
  23. package/templates/skills_normal/n8n-expression-syntax/README.md +93 -0
  24. package/templates/skills_normal/n8n-expression-syntax/SKILL.md +516 -0
  25. package/templates/skills_normal/n8n-mcp-tools-expert/README.md +99 -0
  26. package/templates/skills_normal/n8n-mcp-tools-expert/SEARCH_GUIDE.md +374 -0
  27. package/templates/skills_normal/n8n-mcp-tools-expert/SKILL.md +642 -0
  28. package/templates/skills_normal/n8n-mcp-tools-expert/VALIDATION_GUIDE.md +442 -0
  29. package/templates/skills_normal/n8n-mcp-tools-expert/WORKFLOW_GUIDE.md +618 -0
  30. package/templates/skills_normal/n8n-node-configuration/DEPENDENCIES.md +789 -0
  31. package/templates/skills_normal/n8n-node-configuration/OPERATION_PATTERNS.md +913 -0
  32. package/templates/skills_normal/n8n-node-configuration/README.md +364 -0
  33. package/templates/skills_normal/n8n-node-configuration/SKILL.md +785 -0
  34. package/templates/skills_normal/n8n-validation-expert/ERROR_CATALOG.md +943 -0
  35. package/templates/skills_normal/n8n-validation-expert/FALSE_POSITIVES.md +720 -0
  36. package/templates/skills_normal/n8n-validation-expert/README.md +290 -0
  37. package/templates/skills_normal/n8n-validation-expert/SKILL.md +689 -0
  38. package/templates/skills_normal/n8n-workflow-patterns/README.md +251 -0
  39. package/templates/skills_normal/n8n-workflow-patterns/SKILL.md +411 -0
  40. package/templates/skills_normal/n8n-workflow-patterns/ai_agent_workflow.md +784 -0
  41. package/templates/skills_normal/n8n-workflow-patterns/database_operations.md +785 -0
  42. package/templates/skills_normal/n8n-workflow-patterns/http_api_integration.md +734 -0
  43. package/templates/skills_normal/n8n-workflow-patterns/scheduled_tasks.md +773 -0
  44. package/templates/skills_normal/n8n-workflow-patterns/webhook_processing.md +545 -0
  45. package/templates/vault/n8n-code-javascript/SKILL.md +10 -10
  46. package/templates/vault/n8n-code-python/SKILL.md +11 -11
  47. package/templates/vault/n8n-expression-syntax/SKILL.md +4 -4
  48. package/templates/vault/n8n-mcp-tools-expert/SKILL.md +9 -9
  49. package/templates/vault/n8n-node-configuration/SKILL.md +2 -2
  50. package/templates/vault/n8n-validation-expert/SKILL.md +3 -3
  51. package/templates/vault/n8n-workflow-patterns/SKILL.md +11 -11
@@ -0,0 +1,785 @@
# Database Operations Pattern

**Use Case**: Read, write, sync, and manage database data in workflows.

---

## Pattern Structure

```
Trigger → [Query/Read] → [Transform] → [Write/Update] → [Verify/Log]
```

**Key Characteristic**: Data persistence and synchronization

---

## Core Components

### 1. Trigger
**Options**:
- **Schedule** - Periodic sync/maintenance (most common)
- **Webhook** - Event-driven writes
- **Manual** - One-time operations

### 2. Database Read Nodes
**Supported databases**:
- Postgres
- MySQL
- MongoDB
- Microsoft SQL
- SQLite
- Redis
- And more via community nodes

### 3. Transform
**Purpose**: Map between different database schemas or formats

**Typical nodes**:
- **Set** - Field mapping
- **Code** - Complex transformations
- **Merge** - Combine data from multiple sources

### 4. Database Write Nodes
**Operations**:
- INSERT - Create new records
- UPDATE - Modify existing records
- UPSERT - Insert or update
- DELETE - Remove records

### 5. Verification
**Purpose**: Confirm operations succeeded

**Methods**:
- Query to verify records
- Count rows affected
- Log results

---

## Common Use Cases

### 1. Data Synchronization
**Flow**: Schedule → Read Source DB → Transform → Write Target DB → Log

**Example** (Postgres to MySQL sync):
```
1. Schedule (every 15 minutes)
2. Postgres (SELECT * FROM users WHERE updated_at > {{$json.last_sync}})
3. IF (check if records exist)
4. Set (map Postgres schema to MySQL schema)
5. MySQL (INSERT or UPDATE users)
6. Postgres (UPDATE sync_log SET last_sync = NOW())
7. Slack (notify: "Synced X users")
```

**Incremental sync query**:
```sql
SELECT *
FROM users
WHERE updated_at > $1
ORDER BY updated_at ASC
LIMIT 1000
```

**Parameters**:
```javascript
{
  "parameters": [
    "={{$node['Get Last Sync'].json.last_sync}}"
  ]
}
```
### 2. ETL (Extract, Transform, Load)
**Flow**: Extract from multiple sources → Transform → Load into warehouse

**Example** (Consolidate data):
```
1. Schedule (daily at 2 AM)
2. [Parallel branches]
   ├─ Postgres (SELECT orders)
   ├─ MySQL (SELECT customers)
   └─ MongoDB (SELECT products)
3. Merge (combine all data)
4. Code (transform to warehouse schema)
5. Postgres (warehouse - INSERT into fact_sales)
6. Email (send summary report)
```

### 3. Data Validation & Cleanup
**Flow**: Schedule → Query → Validate → Update/Delete invalid records

**Example** (Clean orphaned records):
```
1. Schedule (weekly)
2. Postgres (SELECT users WHERE email IS NULL OR email = '')
3. IF (invalid records exist)
4. Postgres (UPDATE users SET status='inactive' WHERE email IS NULL)
5. Postgres (DELETE FROM users WHERE created_at < NOW() - INTERVAL '1 year' AND status='inactive')
6. Slack (alert: "Cleaned X invalid records")
```

### 4. Backup & Archive
**Flow**: Schedule → Query → Export → Store

**Example** (Archive old records):
```
1. Schedule (monthly)
2. Postgres (SELECT * FROM orders WHERE created_at < NOW() - INTERVAL '2 years')
3. Code (convert to JSON)
4. Write File (save to archive.json)
5. Google Drive (upload archive)
6. Postgres (DELETE FROM orders WHERE created_at < NOW() - INTERVAL '2 years')
```

### 5. Real-time Data Updates
**Flow**: Webhook → Parse → Update Database

**Example** (Update user status):
```
1. Webhook (receive status update)
2. Postgres (UPDATE users SET status = {{$json.body.status}} WHERE id = {{$json.body.user_id}})
3. IF (rows affected > 0)
4. Redis (SET user:{{$json.body.user_id}}:status {{$json.body.status}})
5. Webhook Response ({"success": true})
```

---
## Database Node Configuration

### Postgres

#### SELECT Query
```javascript
{
  operation: "executeQuery",
  query: "SELECT id, name, email FROM users WHERE created_at > $1 LIMIT $2",
  parameters: [
    "={{$json.since_date}}",
    "100"
  ]
}
```

#### INSERT
```javascript
{
  operation: "insert",
  table: "users",
  columns: "id, name, email, created_at",
  values: [
    {
      id: "={{$json.id}}",
      name: "={{$json.name}}",
      email: "={{$json.email}}",
      created_at: "={{$now}}"
    }
  ]
}
```

#### UPDATE
```javascript
{
  operation: "update",
  table: "users",
  updateKey: "id",
  columns: "name, email, updated_at",
  values: {
    id: "={{$json.id}}",
    name: "={{$json.name}}",
    email: "={{$json.email}}",
    updated_at: "={{$now}}"
  }
}
```

#### UPSERT (INSERT ... ON CONFLICT)
```javascript
{
  operation: "executeQuery",
  query: `
    INSERT INTO users (id, name, email)
    VALUES ($1, $2, $3)
    ON CONFLICT (id)
    DO UPDATE SET name = $2, email = $3, updated_at = NOW()
  `,
  parameters: [
    "={{$json.id}}",
    "={{$json.name}}",
    "={{$json.email}}"
  ]
}
```

### MySQL

#### SELECT with JOIN
```javascript
{
  operation: "executeQuery",
  query: `
    SELECT u.id, u.name, o.order_id, o.total
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
    WHERE u.created_at > ?
  `,
  parameters: [
    "={{$json.since_date}}"
  ]
}
```

#### Bulk INSERT
```javascript
{
  operation: "insert",
  table: "orders",
  columns: "user_id, total, status",
  values: $json.orders // Array of objects
}
```

### MongoDB

#### Find Documents
```javascript
{
  operation: "find",
  collection: "users",
  query: JSON.stringify({
    created_at: { $gt: new Date($json.since_date) },
    status: "active"
  }),
  limit: 100
}
```

#### Insert Document
```javascript
{
  operation: "insert",
  collection: "users",
  document: JSON.stringify({
    name: $json.name,
    email: $json.email,
    created_at: new Date()
  })
}
```

#### Update Document
```javascript
{
  operation: "update",
  collection: "users",
  query: JSON.stringify({ _id: $json.user_id }),
  update: JSON.stringify({
    $set: {
      status: $json.status,
      updated_at: new Date()
    }
  })
}
```
---

## Batch Processing

### Pattern 1: Split In Batches
**Use when**: Processing large datasets to avoid memory issues

```
Postgres (SELECT 10000 records)
→ Split In Batches (100 items per batch)
→ Transform
→ MySQL (write batch)
→ Loop (until all processed)
```
### Pattern 2: Paginated Queries
**Use when**: Database has millions of records

```
Set (initialize: offset=0, limit=1000)
→ Loop Start
→ Postgres (SELECT * FROM large_table LIMIT {{$json.limit}} OFFSET {{$json.offset}})
→ IF (records returned)
   ├─ Process records
   ├─ Set (increment offset by 1000)
   └─ Loop back
└─ [No records] → End
```

**Query**:
```sql
SELECT * FROM large_table
ORDER BY id
LIMIT $1 OFFSET $2
```
### Pattern 3: Cursor-Based Pagination
**Better performance for large datasets**:

```
Set (initialize: last_id=0)
→ Loop Start
→ Postgres (SELECT * FROM table WHERE id > {{$json.last_id}} ORDER BY id LIMIT 1000)
→ IF (records returned)
   ├─ Process records
   ├─ Code (get max id from batch)
   └─ Loop back
└─ [No records] → End
```

**Query**:
```sql
SELECT * FROM table
WHERE id > $1
ORDER BY id ASC
LIMIT 1000
```
---

## Transaction Handling

### Pattern 1: BEGIN/COMMIT/ROLLBACK
**For databases that support transactions**:

```javascript
// Node 1: Begin Transaction
{
  operation: "executeQuery",
  query: "BEGIN"
}

// Node 2-N: Your operations
{
  operation: "executeQuery",
  query: "INSERT INTO ...",
  continueOnFail: true
}

// Node N+1: Commit or Rollback
{
  operation: "executeQuery",
  query: "={{$node['Operation'].json.error ? 'ROLLBACK' : 'COMMIT'}}"
}
```

### Pattern 2: Atomic Operations
**Use database features for atomicity**:

```sql
-- Upsert example (atomic)
INSERT INTO inventory (product_id, quantity)
VALUES ($1, $2)
ON CONFLICT (product_id)
DO UPDATE SET quantity = inventory.quantity + $2
```

### Pattern 3: Error Rollback
**Manual rollback on error**:

```
Try Operations:
  Postgres (INSERT orders)
  MySQL (INSERT order_items)

Error Trigger:
  Postgres (DELETE FROM orders WHERE id = {{$json.order_id}})
  MySQL (DELETE FROM order_items WHERE order_id = {{$json.order_id}})
```

---
## Data Transformation

### Schema Mapping
```javascript
// Code node - map schemas
const sourceData = $input.all();

return sourceData.map(item => ({
  json: {
    // Source → Target mapping
    user_id: item.json.id,
    full_name: `${item.json.first_name} ${item.json.last_name}`,
    email_address: item.json.email,
    registration_date: new Date(item.json.created_at).toISOString(),
    // Computed fields
    is_premium: item.json.plan_type === 'pro',
    // Default values
    status: item.json.status || 'active'
  }
}));
```

### Data Type Conversions
```javascript
// Code node - convert data types
return $input.all().map(item => ({
  json: {
    // String to number
    user_id: parseInt(item.json.user_id, 10),
    // String to date
    created_at: new Date(item.json.created_at),
    // Number to boolean
    is_active: item.json.active === 1,
    // JSON string to object
    metadata: JSON.parse(item.json.metadata || '{}'),
    // Null handling
    email: item.json.email || null
  }
}));
```

### Aggregation
```javascript
// Code node - aggregate data
const items = $input.all();

const summary = items.reduce((acc, item) => {
  const date = item.json.created_at.split('T')[0];
  if (!acc[date]) {
    acc[date] = { count: 0, total: 0 };
  }
  acc[date].count++;
  acc[date].total += item.json.amount;
  return acc;
}, {});

return Object.entries(summary).map(([date, data]) => ({
  json: {
    date,
    count: data.count,
    total: data.total,
    average: data.total / data.count
  }
}));
```

---
## Performance Optimization

### 1. Use Indexes
Ensure the database has indexes that match your workflow's queries:

```sql
-- Add index for sync queries
CREATE INDEX idx_users_updated_at ON users(updated_at);

-- Add index for lookups
CREATE INDEX idx_orders_user_id ON orders(user_id);
```

### 2. Limit Result Sets
Always use LIMIT:

```sql
-- ✅ Good
SELECT * FROM large_table
WHERE created_at > $1
LIMIT 1000

-- ❌ Bad (unbounded)
SELECT * FROM large_table
WHERE created_at > $1
```

### 3. Use Prepared Statements
Parameterized queries are faster:

```javascript
// ✅ Good - prepared statement
{
  query: "SELECT * FROM users WHERE id = $1",
  parameters: ["={{$json.id}}"]
}

// ❌ Bad - string concatenation
{
  query: "SELECT * FROM users WHERE id = '={{$json.id}}'"
}
```

### 4. Batch Writes
Write multiple records at once:

```javascript
// ✅ Good - batch insert
{
  operation: "insert",
  table: "orders",
  values: $json.items // Array of 100 items
}

// ❌ Bad - individual inserts in loop
// 100 separate INSERT statements
```

### 5. Connection Pooling
Configure in credentials:

```javascript
{
  host: "db.example.com",
  database: "mydb",
  user: "user",
  password: "pass",
  // Connection pool settings
  min: 2,
  max: 10,
  idleTimeoutMillis: 30000
}
```

---
## Error Handling

### Pattern 1: Check Rows Affected
```
Database Operation (UPDATE users...)
→ IF ({{$json.rowsAffected === 0}})
   └─ Alert: "No rows updated - record not found"
```
### Pattern 2: Constraint Violations
```javascript
// Database operation with continueOnFail: true
{
  operation: "insert",
  continueOnFail: true
}

// Next node: Check for errors
IF ({{$json.error !== undefined}})
→ IF ({{$json.error.includes('duplicate key')}})
   └─ Log: "Record already exists - skipping"
→ ELSE
   └─ Alert: "Database error: {{$json.error}}"
```
### Pattern 3: Rollback on Error
```
Try Operations:
→ Database Write 1
→ Database Write 2
→ Database Write 3

Error Trigger:
→ Rollback Operations
→ Alert Admin
```

---

## Security Best Practices

### 1. Use Parameterized Queries (Prevent SQL Injection)
```javascript
// ✅ SAFE - parameterized
{
  query: "SELECT * FROM users WHERE email = $1",
  parameters: ["={{$json.email}}"]
}

// ❌ DANGEROUS - SQL injection risk
{
  query: "SELECT * FROM users WHERE email = '={{$json.email}}'"
}
```

### 2. Least Privilege Access
**Create dedicated workflow user**:

```sql
-- ✅ Good - limited permissions
CREATE USER n8n_workflow WITH PASSWORD 'secure_password';
GRANT SELECT, INSERT, UPDATE ON orders TO n8n_workflow;
GRANT SELECT ON users TO n8n_workflow;

-- ❌ Bad - too much access
GRANT ALL PRIVILEGES TO n8n_workflow;
```

### 3. Validate Input Data
```javascript
// Code node - validate before write
const email = $json.email;
const name = $json.name;

// Validation
if (!email || !email.includes('@')) {
  throw new Error('Invalid email address');
}

if (!name || name.length < 2) {
  throw new Error('Invalid name');
}

// Sanitization
return [{
  json: {
    email: email.toLowerCase().trim(),
    name: name.trim()
  }
}];
```

### 4. Encrypt Sensitive Data
```javascript
// Code node - encrypt before storage
const crypto = require('crypto');

const algorithm = 'aes-256-cbc';
const key = Buffer.from($credentials.encryptionKey, 'hex');
const iv = crypto.randomBytes(16);

const cipher = crypto.createCipheriv(algorithm, key, iv);
let encrypted = cipher.update($json.sensitive_data, 'utf8', 'hex');
encrypted += cipher.final('hex');

return [{
  json: {
    encrypted_data: encrypted,
    iv: iv.toString('hex')
  }
}];
```
---

## Common Gotchas

### 1. ❌ Wrong: Unbounded queries
```sql
SELECT * FROM large_table -- Could return millions
```

### ✅ Correct: Use LIMIT
```sql
SELECT * FROM large_table
ORDER BY created_at DESC
LIMIT 1000
```

### 2. ❌ Wrong: String concatenation in queries
```javascript
query: "SELECT * FROM users WHERE id = '{{$json.id}}'"
```

### ✅ Correct: Parameterized queries
```javascript
query: "SELECT * FROM users WHERE id = $1",
parameters: ["={{$json.id}}"]
```

### 3. ❌ Wrong: No transaction for multi-step operations
```
INSERT into orders
INSERT into order_items // Fails → orphaned order record
```

### ✅ Correct: Use transaction
```
BEGIN
INSERT into orders
INSERT into order_items
COMMIT (or ROLLBACK on error)
```

### 4. ❌ Wrong: Processing all items at once
```
SELECT 1000000 records → Process all → OOM error
```

### ✅ Correct: Batch processing
```
SELECT records → Split In Batches (1000) → Process → Loop
```

---
## Real Template Examples

From the n8n template library (456 database templates):

**Data Sync**:
```
Schedule → Postgres (SELECT new records) → Transform → MySQL (INSERT)
```

**ETL Pipeline**:
```
Schedule → [Multiple DB reads] → Merge → Transform → Warehouse (INSERT)
```

**Backup**:
```
Schedule → Postgres (SELECT all) → JSON → Google Drive (upload)
```

Use `search_templates({query: "database"})` to find more!

---

## Checklist for Database Workflows

### Planning
- [ ] Identify source and target databases
- [ ] Understand schema differences
- [ ] Plan transformation logic
- [ ] Consider batch size for large datasets
- [ ] Design error handling strategy

### Implementation
- [ ] Use parameterized queries (never concatenate)
- [ ] Add LIMIT to all SELECT queries
- [ ] Use appropriate operation (INSERT/UPDATE/UPSERT)
- [ ] Configure credentials properly
- [ ] Test with small dataset first

### Performance
- [ ] Add database indexes for queries
- [ ] Use batch operations
- [ ] Implement pagination for large datasets
- [ ] Configure connection pooling
- [ ] Monitor query execution times

### Security
- [ ] Use parameterized queries (SQL injection prevention)
- [ ] Least privilege database user
- [ ] Validate and sanitize input
- [ ] Encrypt sensitive data
- [ ] Never log sensitive data

### Reliability
- [ ] Add transaction handling if needed
- [ ] Check rows affected
- [ ] Handle constraint violations
- [ ] Implement retry logic
- [ ] Add Error Trigger workflow

---

## Summary

**Key Points**:
1. **Always use parameterized queries** (prevent SQL injection)
2. **Batch processing** for large datasets
3. **Transaction handling** for multi-step operations
4. **Limit result sets** to avoid memory issues
5. **Validate input data** before writes

**Pattern**: Trigger → Query → Transform → Write → Verify

**Related**:
- [http_api_integration.md](http_api_integration.md) - Fetching data to store in DB
- [scheduled_tasks.md](scheduled_tasks.md) - Periodic database maintenance