@createlex/figgen 1.4.4 → 1.4.5
@@ -36,18 +36,45 @@ RULES:
 10. Emit best-effort code for anything complex — never emit TODO comments or placeholder stubs.
 11. If reusableComponents are present, output each as a separate <file name="ComponentName.swift"> tag.
 12. FONTS: Always use .font(.system(size: X, weight: .bold)) — NEVER reference custom font names like "Inter-Bold", "Roboto", or any fontName from Figma. Custom fonts are not bundled in the Xcode project and will cause runtime errors. System font only.
-13. IMAGES: Every node listed in assetExportPlan MUST be referenced as Image("assetName").resizable()
+13. IMAGES: Every node listed in assetExportPlan MUST be referenced as Image("assetName").resizable() — NEVER replace with Rectangle(), Color, or shapes. These PNGs are pre-exported to Assets.xcassets.
 14. BLEND MODES: If an assetExportPlan candidate has a non-null blendModeSwiftUI value, you MUST append that modifier to the Image() call, e.g.: Image("Mockup_GroupView2").resizable().scaledToFit().frame(...).blendMode(.multiply). Without this, the image background will not blend with the dark frame background and will show as an opaque rectangle. Map: MULTIPLY→.multiply, SCREEN→.screen, OVERLAY→.overlay, DARKEN→.darken, LIGHTEN→.lighten, COLOR_DODGE→.colorDodge, COLOR_BURN→.colorBurn, HARD_LIGHT→.hardLight, SOFT_LIGHT→.softLight, DIFFERENCE→.difference, EXCLUSION→.exclusion.
+15. IMAGE POSITIONING — always use renderBounds when present in the ASSET MANIFEST:
+    Each image asset may include renderBounds {x, y, width, height} — the actual visible clip region relative to the root frame. If renderBounds is present:
+    - Display at .resizable().frame(width: renderBounds.width * sx, height: renderBounds.height * sy)
+    - Position with .offset(x: renderBounds.x * sx, y: renderBounds.y * sy)
+    This is critical for overflow images (e.g. a 839×632 node at x=-197 clipped to 375×516 visible) — the plugin pre-exports the clipped region, not the full node. Using the node dimensions instead of renderBounds will show the wrong crop.
+16. FULL-SCREEN DEVICE SCALING: When the Figma canvas is a fixed iPhone size (e.g. CANVAS_WIDTH=375, CANVAS_HEIGHT=812) and the view should fill the device screen, wrap the entire body in GeometryReader and scale every pixel value:
+    struct MyView: View {
+      var body: some View {
+        GeometryReader { geo in
+          let sx = geo.size.width / CANVAS_WIDTH
+          let sy = geo.size.height / CANVAS_HEIGHT
+          ZStack(alignment: .topLeading) {
+            // children — multiply ALL px values by sx or sy:
+            // .frame(width: w * sx, height: h * sy)
+            // .offset(x: x * sx, y: y * sy)
+            // .font(.system(size: size * sx, weight: ...))
+            // .cornerRadius(r * sx)
+            // strokeWidth * sx, shadow radius * sx, shadow offset * sx/sy
+          }
+          .frame(width: geo.size.width, height: geo.size.height)
+          .cornerRadius(cornerRadius * sx)
+          .clipped()
+          .shadow(color: ..., radius: r * sx, x: ox * sx, y: oy * sy)
+        }
+        .ignoresSafeArea()
+      }
+    }
+    Apply this rule whenever CANVAS_WIDTH/CANVAS_HEIGHT are standard iPhone dimensions (375, 390, 393, 430 wide).
 
 RESPONSIVE LAYOUT RULES:
 - Root frame with FILL sizing → .frame(maxWidth: .infinity)
-- Font sizes → @ScaledMetric var: e.g. @ScaledMetric var titleSize: CGFloat = 34
 - Horizontal scrolling children → ScrollView(.horizontal, showsIndicators: false)
 - Vertical root scroll behavior → wrap body in ScrollView
 - ViewThatFits for HStack→VStack fallback on narrow devices when content may wrap
 - VStack(spacing:) / HStack(spacing:) from Figma itemSpacing — not fixed .padding() for inter-item spacing
 - .ignoresSafeArea() only for background color/image layers, never for foreground content
-- Hardcoded values are acceptable only for:
+- Hardcoded values are acceptable only for: icon sizes ≤24pt
 
 DESIGN TOKENS:
 If designTokens.colors or designTokens.fonts are non-empty, output a <file name="DesignTokens.swift"> containing:
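Rules 15 and 16 reduce to a single piece of arithmetic: scale the renderBounds rectangle by sx/sy before framing and offsetting the image. A minimal JavaScript sketch of that math — the `scaledPlacement` helper is illustrative only, not part of the package:

```javascript
// Hypothetical helper: given a renderBounds entry from the ASSET MANIFEST and
// the sx/sy scale factors from Rule 16, compute the frame and offset values
// the generated SwiftUI view should use.
function scaledPlacement(renderBounds, sx, sy) {
  return {
    frame: { width: renderBounds.width * sx, height: renderBounds.height * sy },
    offset: { x: renderBounds.x * sx, y: renderBounds.y * sy },
  };
}

// Overflow image: the plugin exported only the visible 375×516 clip region,
// so only that region is scaled — never the full oversized node.
const placement = scaledPlacement({ x: 0, y: 120, width: 375, height: 516 }, 2, 2);
console.log(placement.frame);  // { width: 750, height: 1032 }
console.log(placement.offset); // { x: 0, y: 240 }
```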
@@ -203,11 +230,17 @@ function buildUserMessage(context, generationMode) {
     throw new Error('Could not extract root node from design context');
   }
 
+  const isPhoneCanvas = [375, 390, 393, 430].includes(promptCtx.canvasWidth);
+  const deviceHint = isPhoneCanvas
+    ? `\nDEVICE_SCALING: Canvas is a fixed iPhone size. Apply Rule 16 — wrap body in GeometryReader, compute sx/sy, and scale ALL pixel values so the view fills any device screen.`
+    : '';
+
   return `Generate SwiftUI code for this Figma design.
 
 OUTPUT_STRUCT_NAME: ${promptCtx.structName}
 CANVAS_WIDTH: ${promptCtx.canvasWidth}
-
+CANVAS_HEIGHT: ${promptCtx.canvasHeight}
+GENERATION_MODE: ${generationMode}${deviceHint}
 
 DESIGN CONTEXT:
 ${JSON.stringify(promptCtx, null, 2)}
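The new device-hint branch only fires for the four standard iPhone widths. A stand-alone sketch of that gate — function names here are illustrative, not the package's exports, and the hint string is abbreviated:

```javascript
// Only standard iPhone canvas widths trigger the DEVICE_SCALING instruction
// in the generated prompt; any other canvas gets an empty hint.
const isPhoneCanvas = (canvasWidth) => [375, 390, 393, 430].includes(canvasWidth);

function deviceHint(canvasWidth) {
  return isPhoneCanvas(canvasWidth)
    ? '\nDEVICE_SCALING: Canvas is a fixed iPhone size. Apply Rule 16.'
    : '';
}

console.log(deviceHint(393).startsWith('\nDEVICE_SCALING')); // true
console.log(deviceHint(1024) === ''); // true (iPad-sized canvas gets no hint)
```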
package/companion/mcp-server.mjs
CHANGED
@@ -1069,12 +1069,53 @@ server.registerTool('figma_to_swiftui', {
     selectionNames: generated.selection?.names ?? [],
   });
 
-  // Build
-
-
-
-
-
+  // Build node lookup map for render bounds resolution
+  function buildNodeMap(node, map = new Map()) {
+    if (!node) return map;
+    if (node.id) map.set(node.id, node);
+    if (Array.isArray(node.children)) node.children.forEach((c) => buildNodeMap(c, map));
+    return map;
+  }
+  const nodeMap = buildNodeMap(context?.metadata);
+  const rootAbs = context?.metadata?.geometry?.absoluteBoundingBox;
+
+  // Build asset manifest: use actual written image names + render bounds so the AI
+  // positions clipped images correctly (not using the oversized node dimensions).
+  // Match each written image name back to its assetExportPlan candidate by suggestedAssetName suffix.
+  const candidateBySuffix = new Map(
+    (context?.assetExportPlan?.candidates ?? []).map((c) => [
+      (c.suggestedAssetName || c.nodeName || '').toLowerCase(),
+      c,
+    ])
+  );
+
+  const assetManifest = imageNames.map((imageName) => {
+    // Find matching candidate — image names are "{StructName}_{suggestedAssetName}"
+    const suffix = imageName.replace(/^[^_]+_/, '').toLowerCase();
+    const candidate = candidateBySuffix.get(suffix) ?? candidateBySuffix.get(imageName.toLowerCase());
+
+    // Look up render bounds from node tree (the actual visible clip region)
+    let renderBounds = null;
+    if (candidate?.nodeId && rootAbs) {
+      const node = nodeMap.get(candidate.nodeId);
+      const absRender = node?.geometry?.absoluteRenderBounds;
+      if (absRender) {
+        renderBounds = {
+          x: Math.round(absRender.x - rootAbs.x),
+          y: Math.round(absRender.y - rootAbs.y),
+          width: Math.round(absRender.width),
+          height: Math.round(absRender.height),
+        };
+      }
+    }
+
+    return {
+      name: imageName,
+      blendMode: candidate?.blendModeSwiftUI ?? null,
+      type: candidate?.kind ?? 'image',
+      ...(renderBounds ? { renderBounds } : {}),
+    };
+  });
 
   return jsonResult({
     tier: 'ai-native',
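The renderBounds resolution in this hunk can be exercised with a small fixture. `buildNodeMap` and the Math.round translation are copied from the diff; the node shapes, ids, and the `assetSuffix` helper are hypothetical stand-ins for real Figma plugin output:

```javascript
// Copied from the diff: recursively index a node tree by id.
function buildNodeMap(node, map = new Map()) {
  if (!node) return map;
  if (node.id) map.set(node.id, node);
  if (Array.isArray(node.children)) node.children.forEach((c) => buildNodeMap(c, map));
  return map;
}

// Hypothetical suffix matcher mirroring the "{StructName}_{suggestedAssetName}" naming.
function assetSuffix(imageName) {
  return imageName.replace(/^[^_]+_/, '').toLowerCase();
}

// Hypothetical fixture: a root frame at (100, 200) with one clipped child.
const root = {
  id: '1:1',
  geometry: { absoluteBoundingBox: { x: 100, y: 200 } },
  children: [
    { id: '1:2', geometry: { absoluteRenderBounds: { x: 100, y: 320, width: 375.4, height: 515.6 } } },
  ],
};

const nodeMap = buildNodeMap(root);
const abs = nodeMap.get('1:2').geometry.absoluteRenderBounds;
const rootAbs = root.geometry.absoluteBoundingBox;

// Same translation as the diff: absolute render bounds → root-relative, rounded.
const renderBounds = {
  x: Math.round(abs.x - rootAbs.x),
  y: Math.round(abs.y - rootAbs.y),
  width: Math.round(abs.width),
  height: Math.round(abs.height),
};
console.log(renderBounds); // { x: 0, y: 120, width: 375, height: 516 }
console.log(assetSuffix('Mockup_GroupView2')); // groupview2
```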
@@ -1105,6 +1146,8 @@ server.registerTool('figma_to_swiftui', {
     'Then call write_generated_swiftui_to_xcode with the generated code and images:[] (assets are already on disk).',
     '⚠️ Use .font(.system(size:weight:)) ONLY — never custom font names.',
     '⚠️ Reference every asset as Image("name") — never Rectangle() or Color() placeholders.',
+    '⚠️ ASSET POSITIONING: use renderBounds {x,y,width,height} from the manifest (not node dimensions) to frame and offset each image — the plugin exports the clipped visible region, not the full oversized node.',
+    '⚠️ DEVICE SCALING: if CANVAS_WIDTH is a standard iPhone width (375/390/393/430), wrap the body in GeometryReader and multiply every px value by sx=geo.size.width/CANVAS_WIDTH and sy=geo.size.height/CANVAS_HEIGHT so the design fills the target device screen.',
   ].filter(Boolean),
 });
 });
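The manifest's blendMode field carries the mapping that Rule 14 spells out for the model: Figma blend constants to SwiftUI `.blendMode(...)` modifiers. A hypothetical lookup sketch of that table — the helper is not part of the package, only the constant pairs come from the rule text:

```javascript
// Figma blend-mode constants → SwiftUI modifiers, as listed in Rule 14.
const FIGMA_TO_SWIFTUI_BLEND = {
  MULTIPLY: '.multiply',
  SCREEN: '.screen',
  OVERLAY: '.overlay',
  DARKEN: '.darken',
  LIGHTEN: '.lighten',
  COLOR_DODGE: '.colorDodge',
  COLOR_BURN: '.colorBurn',
  HARD_LIGHT: '.hardLight',
  SOFT_LIGHT: '.softLight',
  DIFFERENCE: '.difference',
  EXCLUSION: '.exclusion',
};

// Hypothetical helper: modifier string to append to an Image() call,
// or '' when the candidate has no (or an unmapped) blend mode.
function blendModifier(figmaBlendMode) {
  const swiftui = FIGMA_TO_SWIFTUI_BLEND[figmaBlendMode];
  return swiftui ? `.blendMode(${swiftui})` : '';
}

console.log(blendModifier('MULTIPLY')); // .blendMode(.multiply)
console.log(blendModifier(null) === ''); // true (no modifier appended)
```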