bps-kit 1.2.2 → 1.3.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.bps-kit.json +4 -4
- package/README.md +3 -0
- package/implementation_plan.md.resolved +37 -0
- package/package.json +2 -2
- package/templates/agents-template/ARCHITECTURE.md +21 -9
- package/templates/agents-template/agents/automation-specialist.md +157 -0
- package/templates/agents-template/rules/GEMINI.md +2 -10
- package/templates/agents-template/workflows/automate.md +153 -0
- package/templates/skills_normal/n8n-code-javascript/BUILTIN_FUNCTIONS.md +764 -0
- package/templates/skills_normal/n8n-code-javascript/COMMON_PATTERNS.md +1110 -0
- package/templates/skills_normal/n8n-code-javascript/DATA_ACCESS.md +782 -0
- package/templates/skills_normal/n8n-code-javascript/ERROR_PATTERNS.md +763 -0
- package/templates/skills_normal/n8n-code-javascript/README.md +350 -0
- package/templates/skills_normal/n8n-code-javascript/SKILL.md +699 -0
- package/templates/skills_normal/n8n-code-python/COMMON_PATTERNS.md +794 -0
- package/templates/skills_normal/n8n-code-python/DATA_ACCESS.md +702 -0
- package/templates/skills_normal/n8n-code-python/ERROR_PATTERNS.md +601 -0
- package/templates/skills_normal/n8n-code-python/README.md +386 -0
- package/templates/skills_normal/n8n-code-python/SKILL.md +748 -0
- package/templates/skills_normal/n8n-code-python/STANDARD_LIBRARY.md +974 -0
- package/templates/skills_normal/n8n-expression-syntax/COMMON_MISTAKES.md +393 -0
- package/templates/skills_normal/n8n-expression-syntax/EXAMPLES.md +483 -0
- package/templates/skills_normal/n8n-expression-syntax/README.md +93 -0
- package/templates/skills_normal/n8n-expression-syntax/SKILL.md +516 -0
- package/templates/skills_normal/n8n-mcp-tools-expert/README.md +99 -0
- package/templates/skills_normal/n8n-mcp-tools-expert/SEARCH_GUIDE.md +374 -0
- package/templates/skills_normal/n8n-mcp-tools-expert/SKILL.md +642 -0
- package/templates/skills_normal/n8n-mcp-tools-expert/VALIDATION_GUIDE.md +442 -0
- package/templates/skills_normal/n8n-mcp-tools-expert/WORKFLOW_GUIDE.md +618 -0
- package/templates/skills_normal/n8n-node-configuration/DEPENDENCIES.md +789 -0
- package/templates/skills_normal/n8n-node-configuration/OPERATION_PATTERNS.md +913 -0
- package/templates/skills_normal/n8n-node-configuration/README.md +364 -0
- package/templates/skills_normal/n8n-node-configuration/SKILL.md +785 -0
- package/templates/skills_normal/n8n-validation-expert/ERROR_CATALOG.md +943 -0
- package/templates/skills_normal/n8n-validation-expert/FALSE_POSITIVES.md +720 -0
- package/templates/skills_normal/n8n-validation-expert/README.md +290 -0
- package/templates/skills_normal/n8n-validation-expert/SKILL.md +689 -0
- package/templates/skills_normal/n8n-workflow-patterns/README.md +251 -0
- package/templates/skills_normal/n8n-workflow-patterns/SKILL.md +411 -0
- package/templates/skills_normal/n8n-workflow-patterns/ai_agent_workflow.md +784 -0
- package/templates/skills_normal/n8n-workflow-patterns/database_operations.md +785 -0
- package/templates/skills_normal/n8n-workflow-patterns/http_api_integration.md +734 -0
- package/templates/skills_normal/n8n-workflow-patterns/scheduled_tasks.md +773 -0
- package/templates/skills_normal/n8n-workflow-patterns/webhook_processing.md +545 -0
- package/templates/vault/n8n-code-javascript/SKILL.md +10 -10
- package/templates/vault/n8n-code-python/SKILL.md +11 -11
- package/templates/vault/n8n-expression-syntax/SKILL.md +4 -4
- package/templates/vault/n8n-mcp-tools-expert/SKILL.md +9 -9
- package/templates/vault/n8n-node-configuration/SKILL.md +2 -2
- package/templates/vault/n8n-validation-expert/SKILL.md +3 -3
- package/templates/vault/n8n-workflow-patterns/SKILL.md +11 -11
package/templates/skills_normal/n8n-code-python/COMMON_PATTERNS.md

@@ -0,0 +1,794 @@

# Common Patterns - Python Code Node

Production-tested Python patterns for n8n Code nodes.

---

## ⚠️ Important: JavaScript First

**Use JavaScript for 95% of use cases.**

Python in n8n has **NO external libraries** (no requests, pandas, numpy).

Only use Python when:
- You have complex Python-specific logic
- You need Python's standard library features
- You're more comfortable with Python than JavaScript

For most workflows, **JavaScript is the better choice**.

---

## Pattern Overview

These 10 patterns cover common n8n Code node scenarios using Python:

1. **Multi-Source Data Aggregation** - Combine data from multiple nodes
2. **Regex-Based Filtering** - Filter items using pattern matching
3. **Markdown to Structured Data** - Parse markdown into structured format
4. **JSON Object Comparison** - Compare two JSON objects for changes
5. **CRM Data Transformation** - Transform CRM data to standard format
6. **Release Notes Processing** - Parse and categorize release notes
7. **Array Transformation** - Reshape arrays and extract fields
8. **Dictionary Lookup** - Create and use lookup dictionaries
9. **Top N Filtering** - Get top items by score/value
10. **String Aggregation** - Aggregate strings with formatting

---
## Pattern 1: Multi-Source Data Aggregation

**Use case**: Combine data from multiple sources (APIs, webhooks, databases).

**Scenario**: Aggregate news articles from multiple sources.

### Implementation

```python
from datetime import datetime

all_items = _input.all()
processed_articles = []

for item in all_items:
    source_name = item["json"].get("name", "Unknown")
    source_data = item["json"]

    # Process Hacker News source
    if source_name == "Hacker News" and source_data.get("hits"):
        for hit in source_data["hits"]:
            processed_articles.append({
                "title": hit.get("title", "No title"),
                "url": hit.get("url", ""),
                "summary": hit.get("story_text") or "No summary",
                "source": "Hacker News",
                "score": hit.get("points", 0),
                "fetched_at": datetime.now().isoformat()
            })

    # Process Reddit source
    elif source_name == "Reddit" and source_data.get("data"):
        for post in source_data["data"].get("children", []):
            post_data = post.get("data", {})
            processed_articles.append({
                "title": post_data.get("title", "No title"),
                "url": post_data.get("url", ""),
                "summary": post_data.get("selftext", "")[:200],
                "source": "Reddit",
                "score": post_data.get("score", 0),
                "fetched_at": datetime.now().isoformat()
            })

# Sort by score descending
processed_articles.sort(key=lambda x: x["score"], reverse=True)

# Return as n8n items
return [{"json": article} for article in processed_articles]
```

### Key Techniques

- Process multiple data sources in one loop
- Normalize different data structures
- Use datetime for timestamps
- Sort by criteria
- Return properly formatted items

---
## Pattern 2: Regex-Based Filtering

**Use case**: Filter items based on pattern matching in text fields.

**Scenario**: Filter support tickets by priority keywords.

### Implementation

```python
import re

all_items = _input.all()
priority_tickets = []

# High priority keywords pattern
high_priority_pattern = re.compile(
    r'\b(urgent|critical|emergency|asap|down|outage|broken)\b',
    re.IGNORECASE
)

for item in all_items:
    ticket = item["json"]

    # Check subject and description
    subject = ticket.get("subject", "")
    description = ticket.get("description", "")
    combined_text = f"{subject} {description}"

    # Find matches
    matches = high_priority_pattern.findall(combined_text)

    if matches:
        priority_tickets.append({
            "json": {
                **ticket,
                "priority": "high",
                "matched_keywords": list(set(matches)),
                "keyword_count": len(matches)
            }
        })
    else:
        priority_tickets.append({
            "json": {
                **ticket,
                "priority": "normal",
                "matched_keywords": [],
                "keyword_count": 0
            }
        })

# Sort by keyword count (most urgent first)
priority_tickets.sort(key=lambda x: x["json"]["keyword_count"], reverse=True)

return priority_tickets
```

### Key Techniques

- Use re.compile() for reusable patterns
- re.IGNORECASE for case-insensitive matching
- Combine multiple text fields for searching
- Extract and deduplicate matches
- Sort by priority indicators

---
## Pattern 3: Markdown to Structured Data

**Use case**: Parse markdown text into structured data.

**Scenario**: Extract tasks from markdown checklist.

### Implementation

```python
import re

markdown_text = _input.first()["json"]["body"].get("markdown", "")

# Parse markdown checklist
tasks = []
lines = markdown_text.split("\n")

for line in lines:
    # Match: - [ ] Task or - [x] Task
    match = re.match(r'^\s*-\s*\[([ x])\]\s*(.+)$', line, re.IGNORECASE)

    if match:
        checked = match.group(1).lower() == 'x'
        task_text = match.group(2).strip()

        # Extract priority if present (e.g., [P1], [HIGH])
        priority_match = re.search(r'\[(P\d|HIGH|MEDIUM|LOW)\]', task_text, re.IGNORECASE)
        priority = priority_match.group(1).upper() if priority_match else "NORMAL"

        # Remove priority tag from text
        clean_text = re.sub(r'\[(P\d|HIGH|MEDIUM|LOW)\]', '', task_text, flags=re.IGNORECASE).strip()

        tasks.append({
            "text": clean_text,
            "completed": checked,
            "priority": priority,
            "original_line": line.strip()
        })

return [{
    "json": {
        "tasks": tasks,
        "total": len(tasks),
        "completed": sum(1 for t in tasks if t["completed"]),
        "pending": sum(1 for t in tasks if not t["completed"])
    }
}]
```

### Key Techniques

- Line-by-line parsing
- Multiple regex patterns for extraction
- Extract metadata from text
- Calculate summary statistics
- Return structured data

---
## Pattern 4: JSON Object Comparison

**Use case**: Compare two JSON objects to find differences.

**Scenario**: Compare old and new user profile data.

### Implementation

```python
all_items = _input.all()

# Assume first item is old data, second is new data
old_data = all_items[0]["json"] if len(all_items) > 0 else {}
new_data = all_items[1]["json"] if len(all_items) > 1 else {}

changes = {
    "added": {},
    "removed": {},
    "modified": {},
    "unchanged": {}
}

# Find all unique keys
all_keys = set(old_data.keys()) | set(new_data.keys())

for key in all_keys:
    old_value = old_data.get(key)
    new_value = new_data.get(key)

    if key not in old_data:
        # Added field
        changes["added"][key] = new_value
    elif key not in new_data:
        # Removed field
        changes["removed"][key] = old_value
    elif old_value != new_value:
        # Modified field
        changes["modified"][key] = {
            "old": old_value,
            "new": new_value
        }
    else:
        # Unchanged field
        changes["unchanged"][key] = old_value

return [{
    "json": {
        "changes": changes,
        "summary": {
            "added_count": len(changes["added"]),
            "removed_count": len(changes["removed"]),
            "modified_count": len(changes["modified"]),
            "unchanged_count": len(changes["unchanged"]),
            "has_changes": len(changes["added"]) > 0 or len(changes["removed"]) > 0 or len(changes["modified"]) > 0
        }
    }
}]
```

### Key Techniques

- Set operations for key comparison
- Dictionary .get() for safe access
- Categorize changes by type
- Create summary statistics
- Return detailed comparison
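The comparison above is shallow: when a nested object changes one inner field, the whole object lands in `modified`. If field-level detail inside nested dicts is needed, the same idea can be applied recursively — a minimal sketch (the `deep_diff` helper and the sample data are ours, not part of the pattern above):

```python
def deep_diff(old, new, prefix=""):
    """Recursively diff two dicts, reporting dotted-path keys for nested changes."""
    changes = {"added": {}, "removed": {}, "modified": {}}
    for key in set(old) | set(new):
        path = f"{prefix}{key}"
        if key not in old:
            changes["added"][path] = new[key]
        elif key not in new:
            changes["removed"][path] = old[key]
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            # Recurse into nested dicts and merge child changes into the result
            child = deep_diff(old[key], new[key], prefix=f"{path}.")
            for kind in changes:
                changes[kind].update(child[kind])
        elif old[key] != new[key]:
            changes["modified"][path] = {"old": old[key], "new": new[key]}
    return changes

old = {"name": "Ada", "address": {"city": "London", "zip": "N1"}}
new = {"name": "Ada", "address": {"city": "Leeds", "zip": "N1"}, "active": True}
result = deep_diff(old, new)
# result["modified"] reports "address.city" rather than all of "address"
```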
---
## Pattern 5: CRM Data Transformation

**Use case**: Transform CRM data to standard format.

**Scenario**: Normalize data from different CRM systems.

### Implementation

```python
from datetime import datetime
import re

all_items = _input.all()
normalized_contacts = []

for item in all_items:
    raw_contact = item["json"]
    source = raw_contact.get("source", "unknown")

    # Normalize email
    email = raw_contact.get("email", "").lower().strip()

    # Normalize phone (remove non-digits)
    phone_raw = raw_contact.get("phone", "")
    phone = re.sub(r'\D', '', phone_raw)

    # Parse name
    if "full_name" in raw_contact:
        name_parts = raw_contact["full_name"].split(" ", 1)
        first_name = name_parts[0] if len(name_parts) > 0 else ""
        last_name = name_parts[1] if len(name_parts) > 1 else ""
    else:
        first_name = raw_contact.get("first_name", "")
        last_name = raw_contact.get("last_name", "")

    # Normalize status
    status_raw = raw_contact.get("status", "").lower()
    status = "active" if status_raw in ["active", "enabled", "true", "1"] else "inactive"

    # Create normalized contact
    normalized_contacts.append({
        "json": {
            "id": raw_contact.get("id", ""),
            "first_name": first_name.strip(),
            "last_name": last_name.strip(),
            "full_name": f"{first_name} {last_name}".strip(),
            "email": email,
            "phone": phone,
            "status": status,
            "source": source,
            "normalized_at": datetime.now().isoformat(),
            "original_data": raw_contact
        }
    })

return normalized_contacts
```

### Key Techniques

- Multiple field name variations handling
- String cleaning and normalization
- Regex for phone number cleaning
- Name parsing logic
- Status normalization
- Preserve original data

---
## Pattern 6: Release Notes Processing

**Use case**: Parse release notes and categorize changes.

**Scenario**: Extract features, fixes, and breaking changes from release notes.

### Implementation

```python
import re

release_notes = _input.first()["json"]["body"].get("notes", "")

categories = {
    "features": [],
    "fixes": [],
    "breaking": [],
    "other": []
}

# Split into lines
lines = release_notes.split("\n")

for line in lines:
    line = line.strip()

    # Skip empty lines and headers
    if not line or line.startswith("#"):
        continue

    # Remove bullet points
    clean_line = re.sub(r'^[\*\-\+]\s*', '', line)

    # Categorize
    if re.search(r'\b(feature|add|new)\b', clean_line, re.IGNORECASE):
        categories["features"].append(clean_line)
    elif re.search(r'\b(fix|bug|patch|resolve)\b', clean_line, re.IGNORECASE):
        categories["fixes"].append(clean_line)
    elif re.search(r'\b(breaking|deprecated|remove)\b', clean_line, re.IGNORECASE):
        categories["breaking"].append(clean_line)
    else:
        categories["other"].append(clean_line)

return [{
    "json": {
        "categories": categories,
        "summary": {
            "features": len(categories["features"]),
            "fixes": len(categories["fixes"]),
            "breaking": len(categories["breaking"]),
            "other": len(categories["other"]),
            "total": sum(len(v) for v in categories.values())
        }
    }
}]
```

### Key Techniques

- Line-by-line parsing
- Pattern-based categorization
- Bullet point removal
- Skip headers and empty lines
- Summary statistics

---
## Pattern 7: Array Transformation

**Use case**: Reshape arrays and extract specific fields.

**Scenario**: Transform user data array to extract specific fields.

### Implementation

```python
all_items = _input.all()

# Extract and transform
transformed = []

for item in all_items:
    user = item["json"]

    # Extract nested fields
    profile = user.get("profile", {})
    settings = user.get("settings", {})

    transformed.append({
        "json": {
            "user_id": user.get("id"),
            "email": user.get("email"),
            "name": profile.get("name", "Unknown"),
            "avatar": profile.get("avatar_url"),
            "bio": profile.get("bio", "")[:100],  # Truncate to 100 chars
            "notifications_enabled": settings.get("notifications", True),
            "theme": settings.get("theme", "light"),
            "created_at": user.get("created_at"),
            "last_login": user.get("last_login_at")
        }
    })

return transformed
```

### Key Techniques

- Field extraction from nested objects
- Default values with .get()
- String truncation
- Flattening nested structures

---
## Pattern 8: Dictionary Lookup

**Use case**: Create lookup dictionary for fast data access.

**Scenario**: Look up user details by ID.

### Implementation

```python
all_items = _input.all()

# Build lookup dictionary
users_by_id = {}

for item in all_items:
    user = item["json"]
    user_id = user.get("id")

    if user_id:
        users_by_id[user_id] = {
            "name": user.get("name"),
            "email": user.get("email"),
            "status": user.get("status")
        }

# Example: Look up specific users
lookup_ids = [1, 3, 5]
looked_up = []

for user_id in lookup_ids:
    if user_id in users_by_id:
        looked_up.append({
            "json": {
                "id": user_id,
                **users_by_id[user_id],
                "found": True
            }
        })
    else:
        looked_up.append({
            "json": {
                "id": user_id,
                "found": False
            }
        })

return looked_up
```

### Key Techniques

- Dictionary comprehension alternative
- O(1) lookup time
- Handle missing keys gracefully
- Preserve lookup order
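The "dictionary comprehension alternative" noted above can replace the build loop with a single expression. A sketch with the same field names (the inline sample data stands in for `_input.all()` so the snippet runs outside n8n):

```python
all_items = [  # stand-in for _input.all() outside n8n
    {"json": {"id": 1, "name": "Ada", "email": "ada@example.com", "status": "active"}},
    {"json": {"id": 3, "name": "Grace", "email": "grace@example.com", "status": "inactive"}},
]

# One-expression equivalent of the build loop above;
# the `if` clause plays the role of the `if user_id:` guard
users_by_id = {
    item["json"]["id"]: {
        "name": item["json"].get("name"),
        "email": item["json"].get("email"),
        "status": item["json"].get("status"),
    }
    for item in all_items
    if item["json"].get("id")
}
```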
---
## Pattern 9: Top N Filtering

**Use case**: Get top items by score or value.

**Scenario**: Get top 10 products by sales.

### Implementation

```python
all_items = _input.all()

# Extract products with sales
products = []

for item in all_items:
    product = item["json"]
    products.append({
        "id": product.get("id"),
        "name": product.get("name"),
        "sales": product.get("sales", 0),
        "revenue": product.get("revenue", 0.0),
        "category": product.get("category")
    })

# Sort by sales descending
products.sort(key=lambda p: p["sales"], reverse=True)

# Get top 10
top_10 = products[:10]

return [
    {
        "json": {
            **product,
            "rank": index + 1
        }
    }
    for index, product in enumerate(top_10)
]
```

### Key Techniques

- List sorting with custom key
- Slicing for top N
- Add ranking information
- Enumerate for index
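When inputs are large, sorting the whole list just to keep the first N does extra work. The standard library's `heapq.nlargest` (available here, since it is stdlib) returns the top N directly — a sketch with made-up sample data:

```python
import heapq

# Hypothetical product list standing in for the extracted items above
products = [
    {"id": i, "name": f"Product {i}", "sales": sales}
    for i, sales in enumerate([5, 42, 17, 99, 3, 60])
]

# Top 3 by sales without sorting the whole list (O(n log N) vs O(n log n))
top_3 = heapq.nlargest(3, products, key=lambda p: p["sales"])
```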
---
## Pattern 10: String Aggregation

**Use case**: Aggregate strings with formatting.

**Scenario**: Create summary text from multiple items.

### Implementation

```python
all_items = _input.all()

# Collect messages
messages = []

for item in all_items:
    data = item["json"]

    user = data.get("user", "Unknown")
    message = data.get("message", "")
    timestamp = data.get("timestamp", "")

    # Format each message
    formatted = f"[{timestamp}] {user}: {message}"
    messages.append(formatted)

# Join with newlines
summary = "\n".join(messages)

# Create statistics
total_length = sum(len(msg) for msg in messages)
average_length = total_length / len(messages) if messages else 0

return [{
    "json": {
        "summary": summary,
        "message_count": len(messages),
        "total_characters": total_length,
        "average_length": round(average_length, 2)
    }
}]
```

### Key Techniques

- String formatting with f-strings
- Join lists with separator
- Calculate string statistics
- Handle empty lists

---
## Pattern Comparison: Python vs JavaScript

### Data Access

```python
# Python
all_items = _input.all()
first_item = _input.first()
current = _input.item
webhook_data = _json["body"]
```

```javascript
// JavaScript
const allItems = $input.all();
const firstItem = $input.first();
const current = $input.item;
const webhookData = $json.body;
```

### Dictionary/Object Access

```python
# Python - Dictionary key access
name = user["name"]           # May raise KeyError
name = user.get("name", "?")  # Safe with default
```

```javascript
// JavaScript - Object property access
const name = user.name;         // May be undefined
const name = user.name || "?";  // Safe with default
```

### Array Operations

```python
# Python - List comprehension
filtered = [item for item in items if item["active"]]
```

```javascript
// JavaScript - Array methods
const filtered = items.filter(item => item.active);
```

### Sorting

```python
# Python
items.sort(key=lambda x: x["score"], reverse=True)
```

```javascript
// JavaScript
items.sort((a, b) => b.score - a.score);
```
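One more contrast worth a note: multi-key sorting. Python tuple keys handle "ascending on one field, descending on another" cleanly by negating the numeric key — a small sketch (field names are illustrative):

```python
items = [
    {"category": "b", "score": 10},
    {"category": "a", "score": 10},
    {"category": "a", "score": 30},
]

# Category ascending, then score descending: negate the numeric key
items.sort(key=lambda x: (x["category"], -x["score"]))
```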
---
## Best Practices

### 1. Use .get() for Safe Access

```python
# ✅ SAFE: Use .get() with defaults
name = user.get("name", "Unknown")
email = user.get("email", "no-email@example.com")

# ❌ RISKY: Direct key access
name = user["name"]  # KeyError if missing!
```

### 2. Handle Empty Lists

```python
# ✅ SAFE: Check before processing
items = _input.all()
if items:
    first = items[0]
else:
    return [{"json": {"error": "No items"}}]

# ❌ RISKY: Assume items exist
first = items[0]  # IndexError if empty!
```

### 3. Use List Comprehensions

```python
# ✅ PYTHONIC: List comprehension
active = [item for item in items if item["json"].get("active")]

# ❌ VERBOSE: Traditional loop
active = []
for item in items:
    if item["json"].get("active"):
        active.append(item)
```

### 4. Return Proper Format

```python
# ✅ CORRECT: Array of objects with "json" key
return [{"json": {"field": "value"}}]

# ❌ WRONG: Just the data
return {"field": "value"}

# ❌ WRONG: Array without "json" wrapper
return [{"field": "value"}]
```

### 5. Use Standard Library

```python
# ✅ GOOD: Use standard library
import statistics
average = statistics.mean(numbers)

# ✅ ALSO GOOD: Built-in functions
average = sum(numbers) / len(numbers) if numbers else 0

# ❌ CAN'T DO: External libraries
import numpy as np  # ModuleNotFoundError!
```

---
## When to Use Each Pattern

| Pattern | When to Use |
|---------|-------------|
| Multi-Source Aggregation | Combining data from different nodes/sources |
| Regex Filtering | Text pattern matching, validation, extraction |
| Markdown Parsing | Processing formatted text into structured data |
| JSON Comparison | Detecting changes between objects |
| CRM Transformation | Normalizing data from different systems |
| Release Notes | Categorizing text by keywords |
| Array Transformation | Reshaping data, extracting fields |
| Dictionary Lookup | Fast ID-based lookups |
| Top N Filtering | Getting best/worst items by criteria |
| String Aggregation | Creating formatted text summaries |

---

## Summary

**Key Takeaways**:
- Use `.get()` for safe dictionary access
- List comprehensions are pythonic and efficient
- Handle empty lists/None values
- Use standard library (json, datetime, re)
- Return proper n8n format: `[{"json": {...}}]`

**Remember**:
- JavaScript is recommended for 95% of use cases
- Python has NO external libraries
- Use n8n nodes for complex operations
- Code node is for data transformation, not API calls

**See Also**:
- [SKILL.md](SKILL.md) - Python Code overview
- [DATA_ACCESS.md](DATA_ACCESS.md) - Data access patterns
- [STANDARD_LIBRARY.md](STANDARD_LIBRARY.md) - Available modules
- [ERROR_PATTERNS.md](ERROR_PATTERNS.md) - Avoid common mistakes