@leejungkiin/awkit 1.3.8 → 1.4.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/awk.js +630 -52
- package/bin/claude-generators.js +122 -0
- package/core/AGENTS.md +54 -0
- package/core/CLAUDE.md +155 -0
- package/core/GEMINI.md +44 -9
- package/core/GEMINI.md.bak +126 -199
- package/package.json +1 -1
- package/skills/ai-sprite-maker/SKILL.md +81 -0
- package/skills/ai-sprite-maker/scripts/animate_sprite.py +102 -0
- package/skills/ai-sprite-maker/scripts/process_sprites.py +140 -0
- package/skills/awf-session-restore/SKILL.md +12 -2
- package/skills/brainstorm-agent/SKILL.md +11 -8
- package/skills/code-review/SKILL.md +21 -33
- package/skills/gitnexus/gitnexus-cli/SKILL.md +82 -0
- package/skills/gitnexus/gitnexus-debugging/SKILL.md +89 -0
- package/skills/gitnexus/gitnexus-exploring/SKILL.md +78 -0
- package/skills/gitnexus/gitnexus-guide/SKILL.md +64 -0
- package/skills/gitnexus/gitnexus-impact-analysis/SKILL.md +97 -0
- package/skills/gitnexus/gitnexus-refactoring/SKILL.md +121 -0
- package/skills/lucylab-tts/SKILL.md +64 -0
- package/skills/lucylab-tts/resources/voices_library.json +908 -0
- package/skills/lucylab-tts/scripts/.env +1 -0
- package/skills/lucylab-tts/scripts/lucylab_tts.py +506 -0
- package/skills/nm-memory-sync/SKILL.md +14 -1
- package/skills/orchestrator/SKILL.md +5 -38
- package/skills/ship-to-code/SKILL.md +115 -0
- package/skills/short-maker/SKILL.md +150 -0
- package/skills/short-maker/_backup/storyboard.html +106 -0
- package/skills/short-maker/_backup/video_mixer.py +296 -0
- package/skills/short-maker/outputs/fitbite-promo/background.jpg +0 -0
- package/skills/short-maker/outputs/fitbite-promo/final/promo-final.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/script.md +19 -0
- package/skills/short-maker/outputs/fitbite-promo/segments/scene-01.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/segments/scene-02.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/segments/scene-03.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/segments/scene-04.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/storyboard/scene-01.png +0 -0
- package/skills/short-maker/outputs/fitbite-promo/storyboard/scene-02.png +0 -0
- package/skills/short-maker/outputs/fitbite-promo/storyboard/scene-03.png +0 -0
- package/skills/short-maker/outputs/fitbite-promo/storyboard/scene-04.png +0 -0
- package/skills/short-maker/outputs/fitbite-promo/storyboard.html +133 -0
- package/skills/short-maker/outputs/fitbite-promo/storyboard.json +38 -0
- package/skills/short-maker/outputs/fitbite-promo/temp/merged_chroma.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/temp/merged_crossfaded.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/temp/ready_00.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/temp/ready_01.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/temp/ready_02.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/temp/ready_03.mp4 +0 -0
- package/skills/short-maker/outputs/fitbite-promo/tts/manifest.json +31 -0
- package/skills/short-maker/outputs/fitbite-promo/tts/scene-01.wav +0 -0
- package/skills/short-maker/outputs/fitbite-promo/tts/scene-02.wav +0 -0
- package/skills/short-maker/outputs/fitbite-promo/tts/scene-03.wav +0 -0
- package/skills/short-maker/outputs/fitbite-promo/tts/scene-04.wav +0 -0
- package/skills/short-maker/outputs/fitbite-promo/tts_script.txt +11 -0
- package/skills/short-maker/scripts/google-flow-cli/.project-identity +41 -0
- package/skills/short-maker/scripts/google-flow-cli/.trae/rules/project_rules.md +52 -0
- package/skills/short-maker/scripts/google-flow-cli/CODEBASE.md +67 -0
- package/skills/short-maker/scripts/google-flow-cli/GoogleFlowCli.code-workspace +29 -0
- package/skills/short-maker/scripts/google-flow-cli/README.md +168 -0
- package/skills/short-maker/scripts/google-flow-cli/docs/specs/PROJECT.md +12 -0
- package/skills/short-maker/scripts/google-flow-cli/docs/specs/REQUIREMENTS.md +22 -0
- package/skills/short-maker/scripts/google-flow-cli/docs/specs/ROADMAP.md +16 -0
- package/skills/short-maker/scripts/google-flow-cli/docs/specs/TECH-SPEC.md +13 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/__init__.py +3 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/api/__init__.py +19 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/api/client.py +1921 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/api/models.py +64 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/api/rpc_ids.py +98 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/auth/__init__.py +15 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/auth/browser_auth.py +692 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/auth/humanizer.py +417 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/auth/proxy_ext.py +120 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/auth/recaptcha.py +482 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/batchexecute/__init__.py +5 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/batchexecute/client.py +414 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/cli/__init__.py +1 -0
- package/skills/short-maker/scripts/google-flow-cli/gflow/cli/main.py +1075 -0
- package/skills/short-maker/scripts/google-flow-cli/pyproject.toml +36 -0
- package/skills/short-maker/scripts/google-flow-cli/script.txt +22 -0
- package/skills/short-maker/scripts/google-flow-cli/tests/__init__.py +0 -0
- package/skills/short-maker/scripts/google-flow-cli/tests/test_batchexecute.py +113 -0
- package/skills/short-maker/scripts/google-flow-cli/tests/test_client.py +190 -0
- package/skills/short-maker/templates/aida_script.md +40 -0
- package/skills/short-maker/templates/mimic_analyzer.md +29 -0
- package/skills/single-flow-task-execution/SKILL.md +412 -0
- package/skills/single-flow-task-execution/code-quality-reviewer-prompt.md +20 -0
- package/skills/single-flow-task-execution/implementer-prompt.md +78 -0
- package/skills/single-flow-task-execution/spec-reviewer-prompt.md +61 -0
- package/skills/skill-creator/SKILL.md +44 -0
- package/skills/spm-build-analysis/SKILL.md +92 -0
- package/skills/spm-build-analysis/references/build-optimization-sources.md +155 -0
- package/skills/spm-build-analysis/references/recommendation-format.md +85 -0
- package/skills/spm-build-analysis/references/spm-analysis-checks.md +105 -0
- package/skills/spm-build-analysis/scripts/check_spm_pins.py +118 -0
- package/skills/symphony-enforcer/SKILL.md +83 -97
- package/skills/symphony-orchestrator/SKILL.md +1 -1
- package/skills/trello-sync/SKILL.md +52 -45
- package/skills/verification-gate/SKILL.md +13 -2
- package/skills/xcode-build-benchmark/SKILL.md +88 -0
- package/skills/xcode-build-benchmark/references/benchmark-artifacts.md +94 -0
- package/skills/xcode-build-benchmark/references/benchmarking-workflow.md +67 -0
- package/skills/xcode-build-benchmark/schemas/build-benchmark.schema.json +230 -0
- package/skills/xcode-build-benchmark/scripts/benchmark_builds.py +308 -0
- package/skills/xcode-build-fixer/SKILL.md +218 -0
- package/skills/xcode-build-fixer/references/build-settings-best-practices.md +216 -0
- package/skills/xcode-build-fixer/references/fix-patterns.md +290 -0
- package/skills/xcode-build-fixer/references/recommendation-format.md +85 -0
- package/skills/xcode-build-fixer/scripts/benchmark_builds.py +308 -0
- package/skills/xcode-build-orchestrator/SKILL.md +156 -0
- package/skills/xcode-build-orchestrator/references/benchmark-artifacts.md +94 -0
- package/skills/xcode-build-orchestrator/references/build-settings-best-practices.md +216 -0
- package/skills/xcode-build-orchestrator/references/orchestration-report-template.md +143 -0
- package/skills/xcode-build-orchestrator/references/recommendation-format.md +85 -0
- package/skills/xcode-build-orchestrator/scripts/benchmark_builds.py +308 -0
- package/skills/xcode-build-orchestrator/scripts/diagnose_compilation.py +273 -0
- package/skills/xcode-build-orchestrator/scripts/generate_optimization_report.py +533 -0
- package/skills/xcode-compilation-analyzer/SKILL.md +89 -0
- package/skills/xcode-compilation-analyzer/references/build-optimization-sources.md +155 -0
- package/skills/xcode-compilation-analyzer/references/code-compilation-checks.md +106 -0
- package/skills/xcode-compilation-analyzer/references/recommendation-format.md +85 -0
- package/skills/xcode-compilation-analyzer/scripts/diagnose_compilation.py +273 -0
- package/skills/xcode-project-analyzer/SKILL.md +76 -0
- package/skills/xcode-project-analyzer/references/build-optimization-sources.md +155 -0
- package/skills/xcode-project-analyzer/references/build-settings-best-practices.md +216 -0
- package/skills/xcode-project-analyzer/references/project-audit-checks.md +101 -0
- package/skills/xcode-project-analyzer/references/recommendation-format.md +85 -0
- package/templates/CODEBASE.md +26 -42
- package/templates/configs/trello-config.json +2 -2
- package/templates/workflow_dual_mode_template.md +5 -5
- package/workflows/_uncategorized/conductor-codex.md +125 -0
- package/workflows/_uncategorized/conductor.md +97 -0
- package/workflows/_uncategorized/ship-to-code.md +85 -0
- package/workflows/_uncategorized/trello-sync.md +52 -0
- package/workflows/context/codebase-sync.md +10 -87
- package/workflows/quality/visual-debug.md +66 -12
|
@@ -0,0 +1,533 @@
|
|
|
1
|
+
#!/usr/bin/env python3
|
|
2
|
+
|
|
3
|
+
"""Generate a Markdown optimization report from benchmark and diagnostics artifacts."""
|
|
4
|
+
|
|
5
|
+
import argparse
|
|
6
|
+
import json
|
|
7
|
+
import re
|
|
8
|
+
from datetime import datetime, timezone
|
|
9
|
+
from pathlib import Path
|
|
10
|
+
from typing import Any, Dict, List, Optional, Tuple
|
|
11
|
+
|
|
12
|
+
|
|
13
|
+
# ---------------------------------------------------------------------------
|
|
14
|
+
# pbxproj helpers
|
|
15
|
+
# ---------------------------------------------------------------------------
|
|
16
|
+
|
|
17
|
+
_SETTING_RE = re.compile(r"^\s*([A-Z_][A-Z_0-9]*)\s*=\s*(.+?)\s*;", re.MULTILINE)
|
|
18
|
+
|
|
19
|
+
_CONFIG_ID_RE = re.compile(r"([0-9A-F]{24})\s*/\*\s*(Debug|Release)\s*\*/")
|
|
20
|
+
|
|
21
|
+
_CONFIG_LIST_RE = re.compile(
|
|
22
|
+
r"([0-9A-F]{24})\s*/\*\s*Build configuration list for "
|
|
23
|
+
r"(?P<kind>PBXProject|PBXNativeTarget)\s+\"(?P<name>[^\"]+)\"\s*\*/"
|
|
24
|
+
)
|
|
25
|
+
|
|
26
|
+
|
|
27
|
+
def _parse_all_build_configs(pbxproj: str) -> Dict[str, Tuple[str, Dict[str, str]]]:
|
|
28
|
+
"""Return {config_id: (config_name, {key: value})} for every XCBuildConfiguration."""
|
|
29
|
+
configs: Dict[str, Tuple[str, Dict[str, str]]] = {}
|
|
30
|
+
for match in re.finditer(
|
|
31
|
+
r"([0-9A-F]{24})\s*/\*\s*(Debug|Release)\s*\*/\s*=\s*\{\s*"
|
|
32
|
+
r"isa\s*=\s*XCBuildConfiguration;\s*buildSettings\s*=\s*\{([^}]*)\}",
|
|
33
|
+
pbxproj,
|
|
34
|
+
re.DOTALL,
|
|
35
|
+
):
|
|
36
|
+
config_id = match.group(1)
|
|
37
|
+
config_name = match.group(2)
|
|
38
|
+
body = match.group(3)
|
|
39
|
+
settings: Dict[str, str] = {}
|
|
40
|
+
for s in _SETTING_RE.finditer(body):
|
|
41
|
+
val = s.group(2).strip().strip('"')
|
|
42
|
+
settings[s.group(1)] = val
|
|
43
|
+
configs[config_id] = (config_name, settings)
|
|
44
|
+
return configs
|
|
45
|
+
|
|
46
|
+
|
|
47
|
+
def _resolve_config_list(
|
|
48
|
+
pbxproj: str, all_configs: Dict[str, Tuple[str, Dict[str, str]]], kind: str
|
|
49
|
+
) -> Dict[str, Dict[str, Dict[str, str]]]:
|
|
50
|
+
"""Resolve configuration lists for a given kind (PBXProject or PBXNativeTarget)."""
|
|
51
|
+
results: Dict[str, Dict[str, Dict[str, str]]] = {}
|
|
52
|
+
for list_match in _CONFIG_LIST_RE.finditer(pbxproj):
|
|
53
|
+
if list_match.group("kind") != kind:
|
|
54
|
+
continue
|
|
55
|
+
entity_name = list_match.group("name")
|
|
56
|
+
list_id = list_match.group(1)
|
|
57
|
+
block_start = pbxproj.find(f"{list_id} /*", list_match.end())
|
|
58
|
+
if block_start == -1:
|
|
59
|
+
block_start = list_match.start()
|
|
60
|
+
block = pbxproj[block_start : block_start + 500]
|
|
61
|
+
configs: Dict[str, Dict[str, str]] = {}
|
|
62
|
+
for cid_match in _CONFIG_ID_RE.finditer(block):
|
|
63
|
+
cid = cid_match.group(1)
|
|
64
|
+
if cid in all_configs:
|
|
65
|
+
cname, settings = all_configs[cid]
|
|
66
|
+
configs[cname] = settings
|
|
67
|
+
if configs:
|
|
68
|
+
results[entity_name] = configs
|
|
69
|
+
return results
|
|
70
|
+
|
|
71
|
+
|
|
72
|
+
def _parse_project_level_configs(pbxproj: str) -> Dict[str, Dict[str, str]]:
|
|
73
|
+
"""Extract project-level Debug and Release build settings."""
|
|
74
|
+
all_configs = _parse_all_build_configs(pbxproj)
|
|
75
|
+
resolved = _resolve_config_list(pbxproj, all_configs, "PBXProject")
|
|
76
|
+
if resolved:
|
|
77
|
+
return next(iter(resolved.values()))
|
|
78
|
+
return {}
|
|
79
|
+
|
|
80
|
+
|
|
81
|
+
def _parse_target_configs(pbxproj: str) -> Dict[str, Dict[str, Dict[str, str]]]:
|
|
82
|
+
"""Extract per-target Debug and Release build settings."""
|
|
83
|
+
all_configs = _parse_all_build_configs(pbxproj)
|
|
84
|
+
return _resolve_config_list(pbxproj, all_configs, "PBXNativeTarget")
|
|
85
|
+
|
|
86
|
+
|
|
87
|
+
# ---------------------------------------------------------------------------
|
|
88
|
+
# Best-practices audit
|
|
89
|
+
# ---------------------------------------------------------------------------
|
|
90
|
+
|
|
91
|
+
_DEBUG_EXPECTATIONS: List[Tuple[str, str, str]] = [
|
|
92
|
+
("SWIFT_COMPILATION_MODE", "incremental", "Incremental recompiles only changed files"),
|
|
93
|
+
("SWIFT_OPTIMIZATION_LEVEL", "-Onone", "Optimization passes add compile time without debug benefit"),
|
|
94
|
+
("GCC_OPTIMIZATION_LEVEL", "0", "C/ObjC optimization adds compile time without debug benefit"),
|
|
95
|
+
("ONLY_ACTIVE_ARCH", "YES", "Building all architectures multiplies compile and link time"),
|
|
96
|
+
("DEBUG_INFORMATION_FORMAT", "dwarf", "dwarf-with-dsym generates a separate dSYM, adding overhead"),
|
|
97
|
+
("ENABLE_TESTABILITY", "YES", "Required for @testable import during development"),
|
|
98
|
+
("EAGER_LINKING", "YES", "Allows linker to start before all compilation finishes, reducing wall-clock time"),
|
|
99
|
+
]
|
|
100
|
+
|
|
101
|
+
_GENERAL_EXPECTATIONS: List[Tuple[str, str, str]] = [
|
|
102
|
+
("COMPILATION_CACHING", "YES", "Caches compilation results so repeat builds of unchanged inputs are served from cache. Measured 5-14% faster clean builds across tested projects; benefit compounds during branch switching and pulling changes"),
|
|
103
|
+
]
|
|
104
|
+
|
|
105
|
+
_RELEASE_EXPECTATIONS: List[Tuple[str, str, str]] = [
|
|
106
|
+
("SWIFT_COMPILATION_MODE", "wholemodule", "Whole-module optimization produces faster runtime code"),
|
|
107
|
+
("SWIFT_OPTIMIZATION_LEVEL", "-O", "Optimized binaries for production (-Osize also acceptable)"),
|
|
108
|
+
("GCC_OPTIMIZATION_LEVEL", "s", "Optimizes C/ObjC for size in release"),
|
|
109
|
+
("ONLY_ACTIVE_ARCH", "NO", "Release builds must include all architectures for distribution"),
|
|
110
|
+
("DEBUG_INFORMATION_FORMAT", "dwarf-with-dsym", "dSYM bundles are needed for crash symbolication"),
|
|
111
|
+
("ENABLE_TESTABILITY", "NO", "Removes internal-symbol export overhead from release builds"),
|
|
112
|
+
]
|
|
113
|
+
|
|
114
|
+
_CONSISTENCY_KEYS = [
|
|
115
|
+
"SWIFT_COMPILATION_MODE",
|
|
116
|
+
"SWIFT_OPTIMIZATION_LEVEL",
|
|
117
|
+
"ONLY_ACTIVE_ARCH",
|
|
118
|
+
"DEBUG_INFORMATION_FORMAT",
|
|
119
|
+
]
|
|
120
|
+
|
|
121
|
+
|
|
122
|
+
def _effective_value(
|
|
123
|
+
project: Dict[str, str], target: Dict[str, str], key: str
|
|
124
|
+
) -> Optional[str]:
|
|
125
|
+
return target.get(key, project.get(key))
|
|
126
|
+
|
|
127
|
+
|
|
128
|
+
def _check(actual: Optional[str], expected: str) -> bool:
|
|
129
|
+
if actual is None:
|
|
130
|
+
if expected in ("incremental",):
|
|
131
|
+
return True
|
|
132
|
+
return False
|
|
133
|
+
if expected == "-O" and actual in ("-O", '"-O"', '"-Osize"', "-Osize"):
|
|
134
|
+
return True
|
|
135
|
+
return actual.strip('"') == expected
|
|
136
|
+
|
|
137
|
+
|
|
138
|
+
def _merged_project_settings(
|
|
139
|
+
project_configs: Dict[str, Dict[str, str]],
|
|
140
|
+
) -> Dict[str, str]:
|
|
141
|
+
"""Return a flat dict of all settings across Debug and Release for general checks."""
|
|
142
|
+
merged: Dict[str, str] = {}
|
|
143
|
+
for config in project_configs.values():
|
|
144
|
+
merged.update(config)
|
|
145
|
+
return merged
|
|
146
|
+
|
|
147
|
+
|
|
148
|
+
def _audit_config(
|
|
149
|
+
project_settings: Dict[str, str],
|
|
150
|
+
expectations: List[Tuple[str, str, str]],
|
|
151
|
+
config_name: str,
|
|
152
|
+
) -> List[str]:
|
|
153
|
+
lines: List[str] = []
|
|
154
|
+
for key, expected, _reason in expectations:
|
|
155
|
+
actual = project_settings.get(key)
|
|
156
|
+
display_actual = actual if actual else "(unset)"
|
|
157
|
+
passed = _check(actual, expected)
|
|
158
|
+
mark = "[x]" if passed else "[ ]"
|
|
159
|
+
lines.append(f"- {mark} `{key}`: `{display_actual}` (recommended: `{expected}`)")
|
|
160
|
+
return lines
|
|
161
|
+
|
|
162
|
+
|
|
163
|
+
def _audit_consistency(
|
|
164
|
+
project_configs: Dict[str, Dict[str, str]],
|
|
165
|
+
target_configs: Dict[str, Dict[str, Dict[str, str]]],
|
|
166
|
+
) -> List[str]:
|
|
167
|
+
lines: List[str] = []
|
|
168
|
+
for key in _CONSISTENCY_KEYS:
|
|
169
|
+
overrides = []
|
|
170
|
+
for target_name, configs in target_configs.items():
|
|
171
|
+
for config_name in ("Debug", "Release"):
|
|
172
|
+
target_settings = configs.get(config_name, {})
|
|
173
|
+
if key in target_settings:
|
|
174
|
+
proj_val = project_configs.get(config_name, {}).get(key, "(unset)")
|
|
175
|
+
tgt_val = target_settings[key]
|
|
176
|
+
if tgt_val != proj_val:
|
|
177
|
+
overrides.append(
|
|
178
|
+
f"{target_name} ({config_name}): `{tgt_val}` vs project `{proj_val}`"
|
|
179
|
+
)
|
|
180
|
+
if overrides:
|
|
181
|
+
lines.append(f"- [ ] `{key}` has target-level overrides:")
|
|
182
|
+
for o in overrides:
|
|
183
|
+
lines.append(f" - {o}")
|
|
184
|
+
else:
|
|
185
|
+
lines.append(f"- [x] `{key}` is consistent across all targets")
|
|
186
|
+
return lines
|
|
187
|
+
|
|
188
|
+
|
|
189
|
+
# ---------------------------------------------------------------------------
|
|
190
|
+
# Auto-generated recommendations from audit
|
|
191
|
+
# ---------------------------------------------------------------------------
|
|
192
|
+
|
|
193
|
+
|
|
194
|
+
def _auto_recommendations_from_audit(
|
|
195
|
+
project_configs: Dict[str, Dict[str, str]],
|
|
196
|
+
) -> Dict[str, Any]:
|
|
197
|
+
"""Generate basic recommendations from failing build settings audit checks."""
|
|
198
|
+
items: List[Dict[str, str]] = []
|
|
199
|
+
|
|
200
|
+
debug_settings = project_configs.get("Debug", {})
|
|
201
|
+
for key, expected, reason in _DEBUG_EXPECTATIONS:
|
|
202
|
+
if not _check(debug_settings.get(key), expected):
|
|
203
|
+
actual = debug_settings.get(key, "(unset)")
|
|
204
|
+
items.append({
|
|
205
|
+
"title": f"Set `{key}` to `{expected}` for Debug",
|
|
206
|
+
"category": "build-settings",
|
|
207
|
+
"observed_evidence": f"Current value: `{actual}`. {reason}.",
|
|
208
|
+
"estimated_impact": "Medium",
|
|
209
|
+
"confidence": "High",
|
|
210
|
+
"risk_level": "Low",
|
|
211
|
+
})
|
|
212
|
+
|
|
213
|
+
merged = {}
|
|
214
|
+
for config in project_configs.values():
|
|
215
|
+
merged.update(config)
|
|
216
|
+
for key, expected, reason in _GENERAL_EXPECTATIONS:
|
|
217
|
+
if not _check(merged.get(key), expected):
|
|
218
|
+
actual = merged.get(key, "(unset)")
|
|
219
|
+
items.append({
|
|
220
|
+
"title": f"Enable `{key} = {expected}`",
|
|
221
|
+
"category": "build-settings",
|
|
222
|
+
"observed_evidence": f"Current value: `{actual}`. {reason}.",
|
|
223
|
+
"estimated_impact": "High",
|
|
224
|
+
"confidence": "High",
|
|
225
|
+
"risk_level": "Low",
|
|
226
|
+
})
|
|
227
|
+
|
|
228
|
+
release_settings = project_configs.get("Release", {})
|
|
229
|
+
for key, expected, reason in _RELEASE_EXPECTATIONS:
|
|
230
|
+
if not _check(release_settings.get(key), expected):
|
|
231
|
+
actual = release_settings.get(key, "(unset)")
|
|
232
|
+
items.append({
|
|
233
|
+
"title": f"Set `{key}` to `{expected}` for Release",
|
|
234
|
+
"category": "build-settings",
|
|
235
|
+
"observed_evidence": f"Current value: `{actual}`. {reason}.",
|
|
236
|
+
"estimated_impact": "Medium",
|
|
237
|
+
"confidence": "High",
|
|
238
|
+
"risk_level": "Low",
|
|
239
|
+
})
|
|
240
|
+
|
|
241
|
+
if not items:
|
|
242
|
+
return {"recommendations": []}
|
|
243
|
+
return {"recommendations": items}
|
|
244
|
+
|
|
245
|
+
|
|
246
|
+
# ---------------------------------------------------------------------------
|
|
247
|
+
# Report generation
|
|
248
|
+
# ---------------------------------------------------------------------------
|
|
249
|
+
|
|
250
|
+
|
|
251
|
+
def _section_context(benchmark: Dict[str, Any]) -> str:
|
|
252
|
+
build = benchmark.get("build", {})
|
|
253
|
+
env = benchmark.get("environment", {})
|
|
254
|
+
lines = [
|
|
255
|
+
"## Project Context\n",
|
|
256
|
+
f"- **Project:** `{build.get('path', 'unknown')}`",
|
|
257
|
+
f"- **Scheme:** `{build.get('scheme', 'unknown')}`",
|
|
258
|
+
f"- **Configuration:** `{build.get('configuration', 'unknown')}`",
|
|
259
|
+
f"- **Destination:** `{build.get('destination', 'unknown')}`",
|
|
260
|
+
f"- **Xcode:** {env.get('xcode_version', 'unknown').replace(chr(10), ' ')}",
|
|
261
|
+
f"- **macOS:** {env.get('macos_version', 'unknown')}",
|
|
262
|
+
f"- **Date:** {benchmark.get('created_at', 'unknown')}",
|
|
263
|
+
f"- **Benchmark artifact:** `{benchmark.get('_artifact_path', 'unknown')}`",
|
|
264
|
+
]
|
|
265
|
+
return "\n".join(lines)
|
|
266
|
+
|
|
267
|
+
|
|
268
|
+
def _section_baseline(benchmark: Dict[str, Any]) -> str:
|
|
269
|
+
summary = benchmark.get("summary", {})
|
|
270
|
+
clean = summary.get("clean", {})
|
|
271
|
+
cached_clean = summary.get("cached_clean", {})
|
|
272
|
+
incremental = summary.get("incremental", {})
|
|
273
|
+
has_cached = bool(cached_clean and cached_clean.get("count", 0) > 0)
|
|
274
|
+
|
|
275
|
+
if has_cached:
|
|
276
|
+
lines = [
|
|
277
|
+
"## Baseline Benchmarks\n",
|
|
278
|
+
"| Metric | Clean | Cached Clean | Incremental |",
|
|
279
|
+
"|--------|-------|-------------|-------------|",
|
|
280
|
+
f"| Median | {clean.get('median_seconds', 0):.3f}s | {cached_clean.get('median_seconds', 0):.3f}s | {incremental.get('median_seconds', 0):.3f}s |",
|
|
281
|
+
f"| Min | {clean.get('min_seconds', 0):.3f}s | {cached_clean.get('min_seconds', 0):.3f}s | {incremental.get('min_seconds', 0):.3f}s |",
|
|
282
|
+
f"| Max | {clean.get('max_seconds', 0):.3f}s | {cached_clean.get('max_seconds', 0):.3f}s | {incremental.get('max_seconds', 0):.3f}s |",
|
|
283
|
+
f"| Runs | {clean.get('count', 0)} | {cached_clean.get('count', 0)} | {incremental.get('count', 0)} |",
|
|
284
|
+
]
|
|
285
|
+
lines.append(
|
|
286
|
+
"\n> **Cached Clean** = clean build with a warm compilation cache. "
|
|
287
|
+
"This is the realistic scenario for branch switching, pulling changes, or "
|
|
288
|
+
"Clean Build Folder. The compilation cache lives outside DerivedData and "
|
|
289
|
+
"survives product deletion.\n"
|
|
290
|
+
)
|
|
291
|
+
else:
|
|
292
|
+
lines = [
|
|
293
|
+
"## Baseline Benchmarks\n",
|
|
294
|
+
"| Metric | Clean | Incremental |",
|
|
295
|
+
"|--------|-------|-------------|",
|
|
296
|
+
f"| Median | {clean.get('median_seconds', 0):.3f}s | {incremental.get('median_seconds', 0):.3f}s |",
|
|
297
|
+
f"| Min | {clean.get('min_seconds', 0):.3f}s | {incremental.get('min_seconds', 0):.3f}s |",
|
|
298
|
+
f"| Max | {clean.get('max_seconds', 0):.3f}s | {incremental.get('max_seconds', 0):.3f}s |",
|
|
299
|
+
f"| Runs | {clean.get('count', 0)} | {incremental.get('count', 0)} |",
|
|
300
|
+
]
|
|
301
|
+
|
|
302
|
+
build_types = ["clean", "cached_clean", "incremental"] if has_cached else ["clean", "incremental"]
|
|
303
|
+
label_map = {"clean": "Clean", "cached_clean": "Cached Clean", "incremental": "Incremental"}
|
|
304
|
+
for build_type in build_types:
|
|
305
|
+
runs = benchmark.get("runs", {}).get(build_type, [])
|
|
306
|
+
all_cats: Dict[str, Dict] = {}
|
|
307
|
+
for run in runs:
|
|
308
|
+
for cat in run.get("timing_summary_categories", []):
|
|
309
|
+
name = cat["name"]
|
|
310
|
+
if name not in all_cats:
|
|
311
|
+
all_cats[name] = {"seconds": 0.0, "task_count": 0}
|
|
312
|
+
all_cats[name]["seconds"] += cat["seconds"]
|
|
313
|
+
all_cats[name]["task_count"] += cat.get("task_count", 0)
|
|
314
|
+
if all_cats:
|
|
315
|
+
count = len(runs) or 1
|
|
316
|
+
ranked = sorted(all_cats.items(), key=lambda x: x[1]["seconds"], reverse=True)
|
|
317
|
+
label = label_map.get(build_type, build_type.title())
|
|
318
|
+
lines.append(f"\n### {label} Build Timing Summary\n")
|
|
319
|
+
lines.append(
|
|
320
|
+
"> **Note:** These are aggregated task times across all CPU cores. "
|
|
321
|
+
"Because Xcode runs many tasks in parallel, these totals typically exceed "
|
|
322
|
+
"the actual build wait time shown above. A large number here does not mean "
|
|
323
|
+
"it is blocking your build.\n"
|
|
324
|
+
)
|
|
325
|
+
lines.append("| Category | Tasks | Seconds |")
|
|
326
|
+
lines.append("|----------|------:|--------:|")
|
|
327
|
+
for name, data in ranked:
|
|
328
|
+
avg_sec = data["seconds"] / count
|
|
329
|
+
tasks = data["task_count"] // count if data["task_count"] else ""
|
|
330
|
+
lines.append(f"| {name} | {tasks} | {avg_sec:.3f}s |")
|
|
331
|
+
|
|
332
|
+
return "\n".join(lines)
|
|
333
|
+
|
|
334
|
+
|
|
335
|
+
def _section_settings_audit(
|
|
336
|
+
project_configs: Dict[str, Dict[str, str]],
|
|
337
|
+
target_configs: Dict[str, Dict[str, Dict[str, str]]],
|
|
338
|
+
) -> str:
|
|
339
|
+
lines = ["## Build Settings Audit\n"]
|
|
340
|
+
|
|
341
|
+
lines.append("### Debug Configuration\n")
|
|
342
|
+
lines.extend(_audit_config(project_configs.get("Debug", {}), _DEBUG_EXPECTATIONS, "Debug"))
|
|
343
|
+
|
|
344
|
+
lines.append("\n### General (All Configurations)\n")
|
|
345
|
+
merged = _merged_project_settings(project_configs)
|
|
346
|
+
lines.extend(_audit_config(merged, _GENERAL_EXPECTATIONS, "General"))
|
|
347
|
+
|
|
348
|
+
lines.append("\n### Release Configuration\n")
|
|
349
|
+
lines.extend(_audit_config(project_configs.get("Release", {}), _RELEASE_EXPECTATIONS, "Release"))
|
|
350
|
+
|
|
351
|
+
lines.append("\n### Cross-Target Consistency\n")
|
|
352
|
+
lines.extend(_audit_consistency(project_configs, target_configs))
|
|
353
|
+
|
|
354
|
+
return "\n".join(lines)
|
|
355
|
+
|
|
356
|
+
|
|
357
|
+
def _section_diagnostics(diagnostics: Optional[Dict[str, Any]]) -> str:
|
|
358
|
+
if diagnostics is None:
|
|
359
|
+
return "## Compilation Diagnostics\n\nNo diagnostics artifact provided. Run `diagnose_compilation.py` to identify type-checking hotspots."
|
|
360
|
+
warnings = diagnostics.get("warnings", [])
|
|
361
|
+
summary = diagnostics.get("summary", {})
|
|
362
|
+
threshold = diagnostics.get("threshold_ms", 100)
|
|
363
|
+
lines = [
|
|
364
|
+
"## Compilation Diagnostics\n",
|
|
365
|
+
f"Threshold: {threshold}ms | "
|
|
366
|
+
f"Total warnings: {summary.get('total_warnings', 0)} | "
|
|
367
|
+
f"Function bodies: {summary.get('function_body_warnings', 0)} | "
|
|
368
|
+
f"Expressions: {summary.get('expression_warnings', 0)}\n",
|
|
369
|
+
]
|
|
370
|
+
if warnings:
|
|
371
|
+
lines.append("| Duration | Kind | File | Line | Name |")
|
|
372
|
+
lines.append("|---------:|------|------|-----:|------|")
|
|
373
|
+
for w in warnings[:30]:
|
|
374
|
+
short_file = Path(w["file"]).name
|
|
375
|
+
name = w.get("name", "") or "(expression)"
|
|
376
|
+
lines.append(
|
|
377
|
+
f"| {w['duration_ms']}ms | {w['kind']} | {short_file} | {w['line']} | {name} |"
|
|
378
|
+
)
|
|
379
|
+
if len(warnings) > 30:
|
|
380
|
+
lines.append(f"\n*... and {len(warnings) - 30} more warnings (see full artifact)*")
|
|
381
|
+
else:
|
|
382
|
+
lines.append("No type-checking hotspots found above threshold.")
|
|
383
|
+
return "\n".join(lines)
|
|
384
|
+
|
|
385
|
+
|
|
386
|
+
def _section_recommendations(recommendations: Optional[Dict[str, Any]]) -> str:
|
|
387
|
+
if recommendations is None:
|
|
388
|
+
return "## Prioritized Recommendations\n\nNo recommendations artifact provided."
|
|
389
|
+
items = recommendations.get("recommendations", [])
|
|
390
|
+
if not items:
|
|
391
|
+
return "## Prioritized Recommendations\n\nNo recommendations found."
|
|
392
|
+
lines = ["## Prioritized Recommendations\n"]
|
|
393
|
+
for i, item in enumerate(items, 1):
|
|
394
|
+
title = item.get("title", "Untitled")
|
|
395
|
+
lines.append(f"### {i}. {title}\n")
|
|
396
|
+
for field, label in [
|
|
397
|
+
("wait_time_impact", "Wait-Time Impact"),
|
|
398
|
+
("actionability", "Actionability"),
|
|
399
|
+
("category", "Category"),
|
|
400
|
+
("observed_evidence", "Evidence"),
|
|
401
|
+
("estimated_impact", "Impact"),
|
|
402
|
+
("confidence", "Confidence"),
|
|
403
|
+
("risk_level", "Risk"),
|
|
404
|
+
("scope", "Scope"),
|
|
405
|
+
]:
|
|
406
|
+
val = item.get(field)
|
|
407
|
+
if val is None:
|
|
408
|
+
continue
|
|
409
|
+
if isinstance(val, list):
|
|
410
|
+
lines.append(f"**{label}:**")
|
|
411
|
+
for entry in val:
|
|
412
|
+
lines.append(f"- {entry}")
|
|
413
|
+
else:
|
|
414
|
+
lines.append(f"**{label}:** {val}")
|
|
415
|
+
lines.append("")
|
|
416
|
+
return "\n".join(lines)
|
|
417
|
+
|
|
418
|
+
|
|
419
|
+
def _section_approval(recommendations: Optional[Dict[str, Any]]) -> str:
|
|
420
|
+
if recommendations is None:
|
|
421
|
+
return "## Approval Checklist\n\nNo recommendations to approve."
|
|
422
|
+
items = recommendations.get("recommendations", [])
|
|
423
|
+
if not items:
|
|
424
|
+
return "## Approval Checklist\n\nNo recommendations to approve."
|
|
425
|
+
lines = ["## Approval Checklist\n"]
|
|
426
|
+
for i, item in enumerate(items, 1):
|
|
427
|
+
title = item.get("title", "Untitled")
|
|
428
|
+
wait_impact = item.get("wait_time_impact", "")
|
|
429
|
+
impact = item.get("estimated_impact", "")
|
|
430
|
+
risk = item.get("risk_level", "")
|
|
431
|
+
actionability = item.get("actionability", "")
|
|
432
|
+
impact_str = wait_impact if wait_impact else impact
|
|
433
|
+
actionability_str = f" | Actionability: {actionability}" if actionability else ""
|
|
434
|
+
lines.append(f"- [ ] **{i}. {title}** -- Impact: {impact_str}{actionability_str} | Risk: {risk}")
|
|
435
|
+
return "\n".join(lines)
|
|
436
|
+
|
|
437
|
+
|
|
438
|
+
def _section_next_steps(benchmark: Dict[str, Any]) -> str:
    build = benchmark.get("build", {})
    command = build.get("command", "xcodebuild build")
    lines = [
        "## Next Steps\n",
        "After implementing approved changes, re-benchmark with the same inputs:\n",
        "```bash",
        "python3 scripts/benchmark_builds.py \\",
    ]
    if build.get("entrypoint") == "workspace":
        lines.append(f" --workspace {build.get('path', 'App.xcworkspace')} \\")
    else:
        lines.append(f" --project {build.get('path', 'App.xcodeproj')} \\")
    lines.extend([
        f" --scheme {build.get('scheme', 'App')} \\",
        f" --configuration {build.get('configuration', 'Debug')} \\",
    ])
    if build.get("destination"):
        lines.append(f' --destination "{build["destination"]}" \\')
    lines.append(" --output-dir .build-benchmark")
    lines.append("```\n")
    lines.append("Compare the new wall-clock medians against the baseline. Report results as:")
    lines.append('"Your [clean/incremental] build now takes X.Xs (was Y.Ys) -- Z.Zs faster/slower."')
    return "\n".join(lines)


# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Generate a Markdown build optimization report.")
    parser.add_argument("--benchmark", required=True, help="Path to benchmark JSON artifact")
    parser.add_argument("--recommendations", help="Path to recommendations JSON")
    parser.add_argument("--diagnostics", help="Path to diagnostics JSON")
    parser.add_argument("--project-path", help="Path to .xcodeproj for build settings audit")
    parser.add_argument("--output", help="Output Markdown path (default: stdout)")
    return parser.parse_args()


def main() -> int:
    args = parse_args()

    benchmark = json.loads(Path(args.benchmark).read_text())
    benchmark["_artifact_path"] = args.benchmark

    recommendations = None
    if args.recommendations:
        recommendations = json.loads(Path(args.recommendations).read_text())

    diagnostics = None
    if args.diagnostics:
        diagnostics = json.loads(Path(args.diagnostics).read_text())

    project_configs: Dict[str, Dict[str, str]] = {}
    target_configs: Dict[str, Dict[str, Dict[str, str]]] = {}
    if args.project_path:
        pbxproj_path = Path(args.project_path) / "project.pbxproj"
        if pbxproj_path.exists():
            pbxproj = pbxproj_path.read_text()
            project_configs = _parse_project_level_configs(pbxproj)
            target_configs = _parse_target_configs(pbxproj)

    if recommendations is None and project_configs:
        auto = _auto_recommendations_from_audit(project_configs)
        if auto["recommendations"]:
            recommendations = auto

    sections = [
        "# Xcode Build Optimization Plan\n",
        _section_context(benchmark),
        _section_baseline(benchmark),
    ]

    if project_configs:
        sections.append(_section_settings_audit(project_configs, target_configs))

    sections.append(_section_diagnostics(diagnostics))
    sections.append(_section_recommendations(recommendations))
    sections.append(_section_approval(recommendations))
    sections.append(_section_next_steps(benchmark))

    report = "\n\n".join(sections) + "\n"

    if args.output:
        Path(args.output).write_text(report)
        print(f"Saved optimization report: {args.output}")
    else:
        print(report, end="")

    return 0


if __name__ == "__main__":
    raise SystemExit(main())
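The `--recommendations` artifact consumed above is not schematized anywhere in this diff. From the `.get` calls in `_section_approval`, a minimal compatible shape can be sketched; the titles and impact values below are made-up illustrations, not real tool output:

```python
# Minimal recommendations artifact matching the keys _section_approval reads.
# All concrete values here are hypothetical examples.
recommendations = {
    "recommendations": [
        {
            "title": "Enable explicit module builds",
            "wait_time_impact": "~4s faster clean build",
            "estimated_impact": "reduces clean build time",
            "risk_level": "low",
            "actionability": "high",
        }
    ]
}

# _section_approval prefers wait_time_impact and falls back to estimated_impact:
item = recommendations["recommendations"][0]
impact_str = item["wait_time_impact"] if item["wait_time_impact"] else item["estimated_impact"]
print(impact_str)  # -> ~4s faster clean build
```

Writing the artifact in this shape is enough for the approval checklist to render one checkbox per recommendation.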
@@ -0,0 +1,89 @@
---
name: xcode-compilation-analyzer
description: Analyzes compilation hotspots in Swift and mixed-language projects using the build timing summary and Swift frontend diagnostics, then produces a source-level optimization proposal. Use when a developer reports slow Xcode compiles, hits type-checking warnings, sees `CompileSwiftSources` dominating build time, or needs to speed up Swift type checking.
---

# Xcode Compilation Analyzer

Use this skill when compile time, not just general project configuration, looks like the bottleneck.

## Core Rules

- Start from evidence, ideally a recent `.build-benchmark/` artifact or raw timing-summary output.
- Prefer analysis-only compiler flags over persistent project edits during investigation.
- Rank findings by expected **wall-clock** impact, not cumulative compile-time impact. When compile tasks are heavily parallelized (sum of compile categories >> wall-clock median), note that fixing individual hotspots may improve parallel efficiency without reducing build wait time.
- When the evidence points to parallelized work rather than serial bottlenecks, label recommendations as "Reduces compiler workload (parallel)" rather than "Reduces build time."
- Do not edit source or build settings without explicit developer approval.

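The parallelization rule above can be made concrete: compare summed compile-category time against the wall-clock median before promising wait-time savings. A minimal sketch, where the 1.5x ratio threshold and the category names are illustrative assumptions rather than part of the skill:

```python
def classify_impact(category_seconds: dict, wall_clock_median: float) -> str:
    """Decide how to label a hotspot fix based on build parallelism."""
    total_compile = sum(category_seconds.values())
    # If summed compile work far exceeds wall-clock time, compilation is
    # heavily parallelized and a single hotspot fix may not shorten the build.
    if total_compile > 1.5 * wall_clock_median:
        return "Reduces compiler workload (parallel)"
    return "Reduces build time"

# 120s of summed compile work finishing inside a 35s build => parallelized.
print(classify_impact({"CompileSwiftSources": 90.0, "SwiftEmitModule": 30.0}, 35.0))
```

A serial build (sum roughly equal to wall-clock) would instead earn the "Reduces build time" label.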
## What To Inspect

- `Build Timing Summary` output from clean and incremental builds
- long-running `CompileSwiftSources` or per-file compilation tasks
- `SwiftEmitModule` time -- can reach 60s+ after a single-line change in large modules; if it dominates incremental builds, the module is likely too large or macro-heavy
- `Planning Swift module` time -- if this category is disproportionately large in incremental builds (up to 30s per module), it signals unexpected input invalidation or macro-related rebuild cascading
- ad hoc runs with:
  - `-Xfrontend -warn-long-expression-type-checking=<ms>`
  - `-Xfrontend -warn-long-function-bodies=<ms>`
- deeper diagnostic flags for thorough investigation:
  - `-Xfrontend -debug-time-compilation` -- per-file compile times to rank the slowest files
  - `-Xfrontend -debug-time-function-bodies` -- per-function compile times (unfiltered, complements the threshold-based warning flags)
  - `-Xswiftc -driver-time-compilation` -- driver-level timing to isolate driver overhead
  - `-Xfrontend -stats-output-dir <path>` -- detailed compiler statistics (JSON) per compilation unit for root-cause analysis
- mixed Swift and Objective-C surfaces that increase bridging work

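Output from the `-warn-long-*` flags above lands in the build log as ordinary warnings, so a small parser can rank them. The warning text below approximates the common Swift diagnostic shape; verify it against your toolchain's actual output, since the wording can differ across compiler versions:

```python
import re

# Assumed diagnostic shape: "... warning: <what> took <N>ms to type-check (limit: <M>ms)"
PATTERN = re.compile(r"warning: (.+?) took (\d+)ms to type-check \(limit: \d+ms\)")

# Sample log lines (hypothetical file paths) in the assumed format.
log = """\
App/Views/Feed.swift:42:5: warning: getter 'body' took 412ms to type-check (limit: 100ms)
App/Models/Parser.swift:10:1: warning: function 'parse(_:)' took 180ms to type-check (limit: 100ms)
"""

# Rank hotspots by type-check time, slowest first.
hotspots = sorted(
    ((int(ms), what) for what, ms in PATTERN.findall(log)),
    reverse=True,
)
for ms, what in hotspots:
    print(f"{ms:>6}ms  {what}")
```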
## Analysis Workflow

1. Identify whether the main issue is broad compilation volume or a few extreme hotspots.
2. Parse timing-summary categories and rank the biggest compile contributors.
3. Run the diagnostics script to surface type-checking hotspots:

   ```bash
   python3 scripts/diagnose_compilation.py \
     --project App.xcodeproj \
     --scheme MyApp \
     --configuration Debug \
     --destination "platform=iOS Simulator,name=iPhone 16" \
     --threshold 100 \
     --output-dir .build-benchmark
   ```

   This produces a ranked list of functions and expressions that exceed the millisecond threshold. Use the diagnostics artifact alongside source inspection to focus on the most expensive files first.
4. Map the evidence to a concrete recommendation list.
5. Separate code-level suggestions from project-level or module-level suggestions.

## Apple-Derived Checks

Look for these patterns first:

- missing explicit type information in expensive expressions
- complex chained or nested expressions that are hard to type-check
- delegate properties typed as `AnyObject` instead of a concrete protocol
- oversized Objective-C bridging headers or generated Swift-to-Objective-C surfaces
- header imports that skip framework qualification and miss module-cache reuse
- classes missing `final` that are never subclassed
- overly broad access control (`public`/`open`) on internal-only symbols
- monolithic SwiftUI `body` properties that should be decomposed into subviews
- long method chains or closures without intermediate type annotations

## Reporting Format

For each recommendation, include:

- observed evidence
- likely affected file or module
- expected wait-time impact (e.g. "Expected to reduce your clean build by ~2s" or "Reduces parallel compile work but unlikely to reduce build wait time")
- confidence
- whether approval is required before applying it

If the evidence points to project configuration instead of source, hand off to [`xcode-project-analyzer`](../xcode-project-analyzer/SKILL.md) by reading its SKILL.md and applying its workflow to the same project context.

## Preferred Tactics

- Suggest ad hoc flag injection through the build command before recommending persistent build-setting changes.
- Prefer narrowing giant view builders, closures, or result-builder expressions into smaller typed units.
- Recommend explicit imports and protocol typing when they reduce compiler search space.
- Call out when mixed-language boundaries are the real issue rather than Swift syntax alone.

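Ad hoc flag injection, the first tactic above, means passing diagnostic flags on the build command line so nothing is persisted in the project. A sketch composing such an invocation (the project and scheme names are placeholders; `OTHER_SWIFT_FLAGS` is the standard Xcode build setting, with `$(inherited)` kept so existing flags survive):

```python
import subprocess

# One-off build with type-checking diagnostics enabled; nothing is written
# back to the .xcodeproj. Names below are illustrative placeholders.
cmd = [
    "xcodebuild", "build",
    "-project", "App.xcodeproj",
    "-scheme", "MyApp",
    "-configuration", "Debug",
    "OTHER_SWIFT_FLAGS=$(inherited) -Xfrontend -warn-long-function-bodies=100",
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the build
```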
## Additional Resources

- For the detailed audit checklist, see [references/code-compilation-checks.md](references/code-compilation-checks.md)
- For the shared recommendation structure, see [references/recommendation-format.md](references/recommendation-format.md)
- For source citations, see [references/build-optimization-sources.md](references/build-optimization-sources.md)