@leejungkiin/awkit 1.4.0 → 1.4.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (119)
  1. package/bin/awk.js +458 -7
  2. package/bin/claude-generators.js +122 -0
  3. package/core/AGENTS.md +16 -0
  4. package/core/CLAUDE.md +155 -0
  5. package/core/GEMINI.md +44 -9
  6. package/package.json +1 -1
  7. package/skills/ai-sprite-maker/SKILL.md +81 -0
  8. package/skills/ai-sprite-maker/scripts/animate_sprite.py +102 -0
  9. package/skills/ai-sprite-maker/scripts/process_sprites.py +140 -0
  10. package/skills/code-review/SKILL.md +21 -33
  11. package/skills/lucylab-tts/SKILL.md +64 -0
  12. package/skills/lucylab-tts/resources/voices_library.json +908 -0
  13. package/skills/lucylab-tts/scripts/.env +1 -0
  14. package/skills/lucylab-tts/scripts/lucylab_tts.py +506 -0
  15. package/skills/orchestrator/SKILL.md +5 -0
  16. package/skills/short-maker/SKILL.md +150 -0
  17. package/skills/short-maker/_backup/storyboard.html +106 -0
  18. package/skills/short-maker/_backup/video_mixer.py +296 -0
  19. package/skills/short-maker/outputs/fitbite-promo/background.jpg +0 -0
  20. package/skills/short-maker/outputs/fitbite-promo/final/promo-final.mp4 +0 -0
  21. package/skills/short-maker/outputs/fitbite-promo/script.md +19 -0
  22. package/skills/short-maker/outputs/fitbite-promo/segments/scene-01.mp4 +0 -0
  23. package/skills/short-maker/outputs/fitbite-promo/segments/scene-02.mp4 +0 -0
  24. package/skills/short-maker/outputs/fitbite-promo/segments/scene-03.mp4 +0 -0
  25. package/skills/short-maker/outputs/fitbite-promo/segments/scene-04.mp4 +0 -0
  26. package/skills/short-maker/outputs/fitbite-promo/storyboard/scene-01.png +0 -0
  27. package/skills/short-maker/outputs/fitbite-promo/storyboard/scene-02.png +0 -0
  28. package/skills/short-maker/outputs/fitbite-promo/storyboard/scene-03.png +0 -0
  29. package/skills/short-maker/outputs/fitbite-promo/storyboard/scene-04.png +0 -0
  30. package/skills/short-maker/outputs/fitbite-promo/storyboard.html +133 -0
  31. package/skills/short-maker/outputs/fitbite-promo/storyboard.json +38 -0
  32. package/skills/short-maker/outputs/fitbite-promo/temp/merged_chroma.mp4 +0 -0
  33. package/skills/short-maker/outputs/fitbite-promo/temp/merged_crossfaded.mp4 +0 -0
  34. package/skills/short-maker/outputs/fitbite-promo/temp/ready_00.mp4 +0 -0
  35. package/skills/short-maker/outputs/fitbite-promo/temp/ready_01.mp4 +0 -0
  36. package/skills/short-maker/outputs/fitbite-promo/temp/ready_02.mp4 +0 -0
  37. package/skills/short-maker/outputs/fitbite-promo/temp/ready_03.mp4 +0 -0
  38. package/skills/short-maker/outputs/fitbite-promo/tts/manifest.json +31 -0
  39. package/skills/short-maker/outputs/fitbite-promo/tts/scene-01.wav +0 -0
  40. package/skills/short-maker/outputs/fitbite-promo/tts/scene-02.wav +0 -0
  41. package/skills/short-maker/outputs/fitbite-promo/tts/scene-03.wav +0 -0
  42. package/skills/short-maker/outputs/fitbite-promo/tts/scene-04.wav +0 -0
  43. package/skills/short-maker/outputs/fitbite-promo/tts_script.txt +11 -0
  44. package/skills/short-maker/scripts/google-flow-cli/.project-identity +41 -0
  45. package/skills/short-maker/scripts/google-flow-cli/.trae/rules/project_rules.md +52 -0
  46. package/skills/short-maker/scripts/google-flow-cli/CODEBASE.md +67 -0
  47. package/skills/short-maker/scripts/google-flow-cli/GoogleFlowCli.code-workspace +29 -0
  48. package/skills/short-maker/scripts/google-flow-cli/README.md +168 -0
  49. package/skills/short-maker/scripts/google-flow-cli/docs/specs/PROJECT.md +12 -0
  50. package/skills/short-maker/scripts/google-flow-cli/docs/specs/REQUIREMENTS.md +22 -0
  51. package/skills/short-maker/scripts/google-flow-cli/docs/specs/ROADMAP.md +16 -0
  52. package/skills/short-maker/scripts/google-flow-cli/docs/specs/TECH-SPEC.md +13 -0
  53. package/skills/short-maker/scripts/google-flow-cli/gflow/__init__.py +3 -0
  54. package/skills/short-maker/scripts/google-flow-cli/gflow/api/__init__.py +19 -0
  55. package/skills/short-maker/scripts/google-flow-cli/gflow/api/client.py +1921 -0
  56. package/skills/short-maker/scripts/google-flow-cli/gflow/api/models.py +64 -0
  57. package/skills/short-maker/scripts/google-flow-cli/gflow/api/rpc_ids.py +98 -0
  58. package/skills/short-maker/scripts/google-flow-cli/gflow/auth/__init__.py +15 -0
  59. package/skills/short-maker/scripts/google-flow-cli/gflow/auth/browser_auth.py +692 -0
  60. package/skills/short-maker/scripts/google-flow-cli/gflow/auth/humanizer.py +417 -0
  61. package/skills/short-maker/scripts/google-flow-cli/gflow/auth/proxy_ext.py +120 -0
  62. package/skills/short-maker/scripts/google-flow-cli/gflow/auth/recaptcha.py +482 -0
  63. package/skills/short-maker/scripts/google-flow-cli/gflow/batchexecute/__init__.py +5 -0
  64. package/skills/short-maker/scripts/google-flow-cli/gflow/batchexecute/client.py +414 -0
  65. package/skills/short-maker/scripts/google-flow-cli/gflow/cli/__init__.py +1 -0
  66. package/skills/short-maker/scripts/google-flow-cli/gflow/cli/main.py +1075 -0
  67. package/skills/short-maker/scripts/google-flow-cli/pyproject.toml +36 -0
  68. package/skills/short-maker/scripts/google-flow-cli/script.txt +22 -0
  69. package/skills/short-maker/scripts/google-flow-cli/tests/__init__.py +0 -0
  70. package/skills/short-maker/scripts/google-flow-cli/tests/test_batchexecute.py +113 -0
  71. package/skills/short-maker/scripts/google-flow-cli/tests/test_client.py +190 -0
  72. package/skills/short-maker/templates/aida_script.md +40 -0
  73. package/skills/short-maker/templates/mimic_analyzer.md +29 -0
  74. package/skills/single-flow-task-execution/SKILL.md +9 -6
  75. package/skills/skill-creator/SKILL.md +44 -0
  76. package/skills/spm-build-analysis/SKILL.md +92 -0
  77. package/skills/spm-build-analysis/references/build-optimization-sources.md +155 -0
  78. package/skills/spm-build-analysis/references/recommendation-format.md +85 -0
  79. package/skills/spm-build-analysis/references/spm-analysis-checks.md +105 -0
  80. package/skills/spm-build-analysis/scripts/check_spm_pins.py +118 -0
  81. package/skills/symphony-enforcer/SKILL.md +51 -83
  82. package/skills/symphony-orchestrator/SKILL.md +1 -1
  83. package/skills/trello-sync/SKILL.md +27 -28
  84. package/skills/verification-gate/SKILL.md +13 -2
  85. package/skills/xcode-build-benchmark/SKILL.md +88 -0
  86. package/skills/xcode-build-benchmark/references/benchmark-artifacts.md +94 -0
  87. package/skills/xcode-build-benchmark/references/benchmarking-workflow.md +67 -0
  88. package/skills/xcode-build-benchmark/schemas/build-benchmark.schema.json +230 -0
  89. package/skills/xcode-build-benchmark/scripts/benchmark_builds.py +308 -0
  90. package/skills/xcode-build-fixer/SKILL.md +218 -0
  91. package/skills/xcode-build-fixer/references/build-settings-best-practices.md +216 -0
  92. package/skills/xcode-build-fixer/references/fix-patterns.md +290 -0
  93. package/skills/xcode-build-fixer/references/recommendation-format.md +85 -0
  94. package/skills/xcode-build-fixer/scripts/benchmark_builds.py +308 -0
  95. package/skills/xcode-build-orchestrator/SKILL.md +156 -0
  96. package/skills/xcode-build-orchestrator/references/benchmark-artifacts.md +94 -0
  97. package/skills/xcode-build-orchestrator/references/build-settings-best-practices.md +216 -0
  98. package/skills/xcode-build-orchestrator/references/orchestration-report-template.md +143 -0
  99. package/skills/xcode-build-orchestrator/references/recommendation-format.md +85 -0
  100. package/skills/xcode-build-orchestrator/scripts/benchmark_builds.py +308 -0
  101. package/skills/xcode-build-orchestrator/scripts/diagnose_compilation.py +273 -0
  102. package/skills/xcode-build-orchestrator/scripts/generate_optimization_report.py +533 -0
  103. package/skills/xcode-compilation-analyzer/SKILL.md +89 -0
  104. package/skills/xcode-compilation-analyzer/references/build-optimization-sources.md +155 -0
  105. package/skills/xcode-compilation-analyzer/references/code-compilation-checks.md +106 -0
  106. package/skills/xcode-compilation-analyzer/references/recommendation-format.md +85 -0
  107. package/skills/xcode-compilation-analyzer/scripts/diagnose_compilation.py +273 -0
  108. package/skills/xcode-project-analyzer/SKILL.md +76 -0
  109. package/skills/xcode-project-analyzer/references/build-optimization-sources.md +155 -0
  110. package/skills/xcode-project-analyzer/references/build-settings-best-practices.md +216 -0
  111. package/skills/xcode-project-analyzer/references/project-audit-checks.md +101 -0
  112. package/skills/xcode-project-analyzer/references/recommendation-format.md +85 -0
  113. package/templates/project-identity/android.json +0 -10
  114. package/templates/project-identity/backend-nestjs.json +0 -10
  115. package/templates/project-identity/expo.json +0 -10
  116. package/templates/project-identity/ios.json +0 -10
  117. package/templates/project-identity/web-nextjs.json +0 -10
  118. package/workflows/_uncategorized/ship-to-code.md +85 -0
  119. package/workflows/context/codebase-sync.md +10 -87
package/skills/lucylab-tts/scripts/.env
@@ -0,0 +1 @@
+ LUCYLAB_BEARER=eyJhbGciOiJSUzI1NiIsImtpZCI6IjM3MzAwNzY5YTA3ZTA1MTE2ZjdlNTEzOGZhOTA5MzY4NWVlYmMyNDAiLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoiTmd1eeG7hW4gVHXhuqVuIiwicGljdHVyZSI6Imh0dHBzOi8vbGgzLmdvb2dsZXVzZXJjb250ZW50LmNvbS9hL0FDZzhvY0lWNUR3X3dONnpNYmNzOEZkT3IwUWw5ZjlWU1VhMlhPbTMxdEkzc3VMMmI2MzJBUFk9czk2LWMiLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vbHVjeS1jNjU0MyIsImF1ZCI6Imx1Y3ktYzY1NDMiLCJhdXRoX3RpbWUiOjE3NzQ3MDMxNDMsInVzZXJfaWQiOiJzd1RuUHhicGxJT0F3N2Z6NWtTY3Y2S08wdFMyIiwic3ViIjoic3dUblB4YnBsSU9BdzdmejVrU2N2NktPMHRTMiIsImlhdCI6MTc3NDc2MDExOSwiZXhwIjoxNzc0NzYzNzE5LCJlbWFpbCI6InNreW5ldHgzM0BnbWFpbC5jb20iLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExODQyMTU3Mzg2NTk3NDU3OTQ4MCJdLCJlbWFpbCI6WyJza3luZXR4MzNAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.nfeWlALjyxuuW5vefLX221BLbEsi9OqypL26fBIgQkiP19TqEuzW6upm-HRz64pcDXnLSOk2ocCvKNLu6RzxDjbjh5T39TAWj1cU-XkGyyPUKFoq7nd2UWyhuTL54_UtUijtYr6YYei_BRwFvPCJ8W9wjNYhbZ6jBypmqZY_vkMKbQK-j3cT_Xom9FzT0L3xCMB5VVzzZ3eST_qyIdyANCEWHc_KAKWlbmcRNWIVkSXkf0eGK2FYzWgViyqqBj59UazHLEvOkvGxlZ20XkGu76uGBIb6t6j1nkTrF3L_-efxO2e90j8E_KdnF9S-Jpu5A1tc9-d9e8VaqahGL3p-Gw
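Reviewer note (not part of the package): this one-line `.env` commits a live Firebase ID token. A JWT payload is plain base64url, so its claims can be read without verification; the committed token's `iat`/`exp` claims show a one-hour lifetime. A minimal sketch of reading those claims (the demo token below is fabricated, reusing the same `iat`/`exp` values):

```python
import base64
import json


def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))


# Build a throwaway header.payload.signature token to demonstrate.
def _seg(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")


demo = f"{_seg({'alg': 'none'})}.{_seg({'exp': 1774763719, 'iat': 1774760119})}.sig"
print(jwt_payload(demo)["exp"] - jwt_payload(demo)["iat"])  # → 3600
```

The short expiry limits the damage here, but the Firebase user id and email in the payload are still disclosed, and the file signals that refreshed tokens may be committed again.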
package/skills/lucylab-tts/scripts/lucylab_tts.py
@@ -0,0 +1,506 @@
+ from __future__ import annotations
+
+ import argparse
+ import base64
+ import json
+ import os
+ import re
+ import time
+ from pathlib import Path
+ from typing import Any
+
+ import requests
+
+
+ def _load_json_documents(path: Path) -> list[Any]:
+     text = path.read_text(encoding="utf-8")
+     decoder = json.JSONDecoder()
+     docs: list[Any] = []
+     i = 0
+     while i < len(text):
+         while i < len(text) and text[i].isspace():
+             i += 1
+         if i >= len(text):
+             break
+         doc, end = decoder.raw_decode(text, idx=i)
+         docs.append(doc)
+         i = end
+     return docs
+
+
+ def _sanitize_filename(value: str) -> str:
+     value = value.strip().lower()
+     value = re.sub(r"\s+", "-", value)
+     value = re.sub(r"[^a-z0-9._-]+", "", value)
+     return value or "voice"
+
+
+ def _load_dotenv_value(dotenv_path: Path, key: str) -> str | None:
+     if not dotenv_path.exists():
+         return None
+     for raw_line in dotenv_path.read_text(encoding="utf-8").splitlines():
+         line = raw_line.strip()
+         if not line or line.startswith("#"):
+             continue
+         if line.startswith("export "):
+             line = line[len("export ") :].strip()
+         if "=" not in line:
+             continue
+         k, v = line.split("=", 1)
+         if k.strip() != key:
+             continue
+         value = v.strip()
+         if not value:
+             return ""
+         if value[0] in ("'", '"') and len(value) >= 2 and value[-1] == value[0]:
+             return value[1:-1]
+         value = value.split(" #", 1)[0].split("\t#", 1)[0].strip()
+         return value
+     return None
+
+
+ def _extract_bearer_from_curl(curl_text: str) -> str | None:
+     m = re.search(r"-H\s+'authorization:\s*Bearer\s+([^']+)'", curl_text, flags=re.IGNORECASE)
+     if m:
+         return m.group(1).strip()
+     m = re.search(r'-H\s+"authorization:\s*Bearer\s+([^"]+)"', curl_text, flags=re.IGNORECASE)
+     if m:
+         return m.group(1).strip()
+     return None
+
+
+ def _extract_headers_from_curl(curl_text: str) -> dict[str, str]:
+     headers: dict[str, str] = {}
+     for m in re.finditer(r"-H\s+'([^']+)'", curl_text):
+         raw = m.group(1)
+         if ":" not in raw:
+             continue
+         k, v = raw.split(":", 1)
+         headers[k.strip()] = v.strip()
+     for m in re.finditer(r'-H\s+"([^"]+)"', curl_text):
+         raw = m.group(1)
+         if ":" not in raw:
+             continue
+         k, v = raw.split(":", 1)
+         headers[k.strip()] = v.strip()
+     return headers
+
+
+ def _extract_endpoint_from_curl(curl_text: str) -> str | None:
+     m = re.search(r"curl\s+'([^']+)'", curl_text)
+     if m:
+         return m.group(1).strip()
+     m = re.search(r'curl\s+"([^"]+)"', curl_text)
+     if m:
+         return m.group(1).strip()
+     return None
+
+
+ def _load_voices(voice_json_path: Path) -> list[dict[str, Any]]:
+     voices: list[dict[str, Any]] = []
+     for doc in _load_json_documents(voice_json_path):
+         if not isinstance(doc, dict):
+             continue
+         items: Any = doc.get("items")
+         if items is None:
+             items = doc.get("result", {}).get("items", [])
+         if not isinstance(items, list):
+             continue
+         for v in items:
+             if not isinstance(v, dict):
+                 continue
+             if v.get("id") and v.get("name"):
+                 voices.append(v)
+     return voices
+
+
+ def _select_voices(voices: list[dict[str, Any]], selectors: list[str], limit: int) -> list[dict[str, Any]]:
+     if not selectors:
+         return voices[:limit]
+
+     selected: list[dict[str, Any]] = []
+     used_ids: set[str] = set()
+     for sel in selectors:
+         sel_norm = sel.strip().lower()
+         for v in voices:
+             vid = str(v.get("id", ""))
+             if not vid or vid in used_ids:
+                 continue
+             name = str(v.get("name", "")).lower()
+             slug = str(v.get("slug", "")).lower()
+             if sel_norm == vid.lower() or sel_norm in name or (slug and sel_norm in slug):
+                 selected.append(v)
+                 used_ids.add(vid)
+     return selected
+
+
+ def _extract_scene_speeches(script_text: str) -> list[tuple[int, str]]:
+     scenes: list[tuple[int, str]] = []
+     for m in re.finditer(
+         r"SCENE\s+(\d+):.*?tông giọng[^:]*:\s*'([^']+)'",
+         script_text,
+         flags=re.IGNORECASE | re.DOTALL,
+     ):
+         idx = int(m.group(1))
+         speech = m.group(2).strip()
+         scenes.append((idx, speech))
+     scenes.sort(key=lambda x: x[0])
+     return scenes
+
+
+ def _find_audio_url(obj: Any) -> str | None:
+     if isinstance(obj, dict):
+         for k in ("cdnUrl", "audioUrl", "url", "fileUrl", "downloadUrl"):
+             v = obj.get(k)
+             if isinstance(v, str) and v.startswith("http"):
+                 return v
+         for v in obj.values():
+             found = _find_audio_url(v)
+             if found:
+                 return found
+     if isinstance(obj, list):
+         for v in obj:
+             found = _find_audio_url(v)
+             if found:
+                 return found
+     return None
+
+
+ def _find_audio_base64(obj: Any) -> str | None:
+     if isinstance(obj, dict):
+         for k in ("audioBase64", "base64", "dataBase64"):
+             v = obj.get(k)
+             if isinstance(v, str) and len(v) > 200:
+                 return v
+         for v in obj.values():
+             found = _find_audio_base64(v)
+             if found:
+                 return found
+     if isinstance(obj, list):
+         for v in obj:
+             found = _find_audio_base64(v)
+             if found:
+                 return found
+     return None
+
+
+ def _post_tts(
+     session: requests.Session,
+     *,
+     endpoint: str,
+     headers: dict[str, str],
+     bearer_token: str,
+     text: str,
+     user_voice_id: str,
+     speed: float,
+     block_version: int,
+     timeout_s: float,
+ ) -> dict[str, Any]:
+     req_headers = dict(headers)
+     req_headers["authorization"] = f"Bearer {bearer_token}"
+     req_headers["content-type"] = "application/json"
+
+     payload = {
+         "method": "tts",
+         "input": {
+             "text": text,
+             "userVoiceId": user_voice_id,
+             "speed": speed,
+             "blockVersion": block_version,
+         },
+     }
+     resp = session.post(endpoint, headers=req_headers, json=payload, timeout=timeout_s)
+     resp.raise_for_status()
+     return resp.json()
+
+
+ def _write_audio_from_result(
+     session: requests.Session,
+     result: dict[str, Any],
+     out_path_base: Path,
+     timeout_s: float,
+ ) -> Path:
+     audio_url = _find_audio_url(result)
+     if audio_url:
+         suffix = Path(audio_url.split("?", 1)[0]).suffix.lower()
+         out_path = out_path_base.with_suffix(suffix if suffix else ".mp3")
+         with session.get(audio_url, stream=True, timeout=timeout_s) as r:
+             r.raise_for_status()
+             out_path.parent.mkdir(parents=True, exist_ok=True)
+             with out_path.open("wb") as f:
+                 for chunk in r.iter_content(chunk_size=1024 * 128):
+                     if chunk:
+                         f.write(chunk)
+         return out_path
+
+     audio_b64 = _find_audio_base64(result)
+     if audio_b64:
+         out_path = out_path_base.with_suffix(".wav")
+         out_path.parent.mkdir(parents=True, exist_ok=True)
+         out_path.write_bytes(base64.b64decode(audio_b64))
+         return out_path
+
+     out_path = out_path_base.with_suffix(".json")
+     out_path.parent.mkdir(parents=True, exist_ok=True)
+     out_path.write_text(json.dumps(result, ensure_ascii=False, indent=2), encoding="utf-8")
+     return out_path
+
+
+ def main() -> int:
+     parser = argparse.ArgumentParser(prog="lucylab-tts")
+     parser.add_argument("--endpoint", default="https://api.lucylab.io/json-rpc")
+     parser.add_argument("--curl-file", default="")
+     parser.add_argument("--header", action="append", default=[])
+     parser.add_argument("--bearer", default="")
+     parser.add_argument("--voice-json", default="")
+     parser.add_argument("--export-voice-library", default="")
+     parser.add_argument("--out-dir", default="outputs/tts-lucylab")
+     parser.add_argument("--text", default="")
+     parser.add_argument("--text-file", default="")
+     parser.add_argument("--voices", action="append", default=[])
+     parser.add_argument("--voice", action="append", default=[])
+     parser.add_argument("--limit", type=int, default=5)
+     parser.add_argument("--speed", type=float, default=1.0)
+     parser.add_argument("--block-version", type=int, default=0)
+     parser.add_argument("--sleep", type=float, default=0.25)
+     parser.add_argument("--timeout", type=float, default=60.0)
+     parser.add_argument("--mode", choices=("auto", "plain", "script-scenes"), default="auto")
+     args = parser.parse_args()
+
+     if args.export_voice_library:
+         src = Path(args.voice_json) if args.voice_json else Path("voice.json")
+         if not src.exists():
+             raise SystemExit("Missing source voice json. Provide --voice-json or create voice.json.")
+         voices = _load_voices(src)
+         def normalize_desc(desc: str) -> str:
+             d = (desc or "").strip()
+             d_lower = d.lower()
+             if d_lower.startswith("đây là một giọng nói hay"):
+                 return ""
+             return d
+
+         def compact_item(v: dict[str, Any]) -> dict[str, Any] | None:
+             vid = str(v.get("id", "")).strip()
+             name = str(v.get("name", "")).strip()
+             if not vid or not name:
+                 return None
+             tags: Any = v.get("tag")
+             if not isinstance(tags, list):
+                 tags = v.get("tags")
+             if not isinstance(tags, list):
+                 tags = []
+             return {
+                 "id": vid,
+                 "name": name,
+                 "description": normalize_desc(str(v.get("description") or "")),
+                 "tag": tags,
+             }
+
+         def categorize(tags: list[str]) -> str:
+             t = {str(x).strip().lower() for x in tags if str(x).strip()}
+             region = "other"
+             if "miền bắc" in t:
+                 region = "north"
+             elif "miền nam" in t:
+                 region = "south"
+             gender = "other"
+             if "nam" in t:
+                 gender = "male"
+             elif "nữ" in t:
+                 gender = "female"
+             if region in ("north", "south") and gender in ("male", "female"):
+                 return f"{region}_{gender}"
+             return "other"
+
+         out_path = Path(args.export_voice_library)
+         if out_path.suffix.lower() != ".json":
+             out_dir = out_path
+             out_dir.mkdir(parents=True, exist_ok=True)
+             by_cat: dict[str, list[dict[str, Any]]] = {
+                 "north_male": [],
+                 "north_female": [],
+                 "south_male": [],
+                 "south_female": [],
+                 "other": [],
+             }
+             for v in voices:
+                 if not isinstance(v, dict):
+                     continue
+                 it = compact_item(v)
+                 if it is None:
+                     continue
+                 cid = categorize([str(x) for x in it.get("tag", [])])
+                 by_cat[cid].append(it)
+
+             name_map = {
+                 "north_male": "Nam miền Bắc",
+                 "north_female": "Nữ miền Bắc",
+                 "south_male": "Nam miền Nam",
+                 "south_female": "Nữ miền Nam",
+                 "other": "Khác / không rõ",
+             }
+             index: dict[str, Any] = {"version": 1, "categories": []}
+             for cid, cat_items in by_cat.items():
+                 (out_dir / f"{cid}.json").write_text(
+                     json.dumps({"version": 1, "items": cat_items}, ensure_ascii=False, indent=2) + "\n",
+                     encoding="utf-8",
+                 )
+                 index["categories"].append(
+                     {"id": cid, "name": name_map[cid], "file": f"{cid}.json", "count": len(cat_items)}
+                 )
+             (out_dir / "index.json").write_text(json.dumps(index, ensure_ascii=False, indent=2) + "\n", encoding="utf-8")
+             print(f"OK: {out_dir}")
+             return 0
+
+         items: list[dict[str, Any]] = []
+         for v in voices:
+             if not isinstance(v, dict):
+                 continue
+             it = compact_item(v)
+             if it is not None:
+                 items.append(it)
+
+         out_path.write_text(json.dumps({"version": 1, "items": items}, ensure_ascii=False, indent=2) + "\n", encoding="utf-8")
+         print(f"OK: {out_path}")
+         return 0
+
+     out_dir = Path(args.out_dir)
+
+     endpoint = args.endpoint.strip() or "https://api.lucylab.io/json-rpc"
+     headers: dict[str, str] = {"accept": "*/*"}
+
+     curl_text = ""
+     if args.curl_file:
+         curl_path = Path(args.curl_file)
+         if curl_path.exists():
+             curl_text = curl_path.read_text(encoding="utf-8")
+             endpoint = _extract_endpoint_from_curl(curl_text) or endpoint
+             headers.update(_extract_headers_from_curl(curl_text))
+
+     for h in args.header:
+         raw = str(h).strip()
+         if not raw or ":" not in raw:
+             raise SystemExit("Invalid --header. Expected format: 'Key: Value'")
+         k, v = raw.split(":", 1)
+         headers[k.strip()] = v.strip()
+
+     bearer_token = args.bearer.strip() or os.environ.get("LUCYLAB_BEARER", "").strip()
+     if not bearer_token:
+         bearer_token = _load_dotenv_value(Path.cwd() / ".env", "LUCYLAB_BEARER") or ""
+     if not bearer_token:
+         bearer_token = _load_dotenv_value(Path(__file__).resolve().with_name(".env"), "LUCYLAB_BEARER") or ""
+     if not bearer_token:
+         bearer_token = _extract_bearer_from_curl(curl_text) or ""
+     if not bearer_token:
+         raise SystemExit("Missing bearer token. Set LUCYLAB_BEARER or pass --bearer.")
+
+     voice_specs: list[dict[str, Any]] = []
+     for spec in args.voice:
+         raw = str(spec).strip()
+         if not raw:
+             continue
+         if ":" in raw:
+             voice_id, voice_name = raw.split(":", 1)
+             voice_id = voice_id.strip()
+             voice_name = voice_name.strip() or voice_id
+         else:
+             voice_id = raw
+             voice_name = voice_id
+         if voice_id:
+             voice_specs.append({"id": voice_id, "name": voice_name, "slug": _sanitize_filename(voice_name)})
+
+     selected_voices: list[dict[str, Any]] = []
+     if args.voice_json:
+         voice_path = Path(args.voice_json)
+         if voice_path.exists():
+             voices = _load_voices(voice_path)
+             selected_voices = _select_voices(voices, args.voices, args.limit)
+
+     if not selected_voices:
+         if voice_specs:
+             selected_voices = voice_specs
+         else:
+             raise SystemExit(
+                 "No voices selected. Provide --voice-json + --voices, or pass explicit --voice <id>[:name]."
+             )
+
+     text = args.text.strip()
+     if args.text_file:
+         text = Path(args.text_file).read_text(encoding="utf-8").strip()
+     if not text:
+         raise SystemExit("Provide --text or --text-file.")
+
+     mode = args.mode
+     if mode == "auto":
+         mode = "script-scenes" if re.search(r"\bSCENE\s+\d+\b", text, flags=re.IGNORECASE) else "plain"
+
+     items: list[tuple[str, str]] = []
+     if mode == "plain":
+         items = [("full", text)]
+     else:
+         scenes = _extract_scene_speeches(text)
+         if not scenes:
+             raise SystemExit("No SCENE thoại found in text. Use --mode plain or check script format.")
+         items = [(f"scene-{idx:02d}", speech) for idx, speech in scenes]
+
+     session = requests.Session()
+
+     manifest: dict[str, Any] = {
+         "endpoint": endpoint,
+         "speed": args.speed,
+         "blockVersion": args.block_version,
+         "mode": mode,
+         "voices": [],
+         "items": [],
+     }
+
+     for voice in selected_voices:
+         voice_id = str(voice["id"])
+         voice_name = str(voice.get("name", voice_id))
+         voice_slug = _sanitize_filename(str(voice.get("slug", voice_name)))
+         manifest["voices"].append({"id": voice_id, "name": voice_name, "slug": voice_slug})
+
+     for label, speech in items:
+         manifest["items"].append({"label": label, "text": speech})
+
+     out_dir.mkdir(parents=True, exist_ok=True)
+     (out_dir / "manifest.json").write_text(json.dumps(manifest, ensure_ascii=False, indent=2), encoding="utf-8")
+
+     for voice in selected_voices:
+         voice_id = str(voice["id"])
+         voice_name = str(voice.get("name", voice_id))
+         voice_slug = _sanitize_filename(str(voice.get("slug", voice_name)))
+
+         for label, speech in items:
+             base = out_dir / voice_slug / label
+             attempt = 0
+             while True:
+                 attempt += 1
+                 try:
+                     result = _post_tts(
+                         session,
+                         endpoint=endpoint,
+                         headers=headers,
+                         bearer_token=bearer_token,
+                         text=speech,
+                         user_voice_id=voice_id,
+                         speed=args.speed,
+                         block_version=args.block_version,
+                         timeout_s=args.timeout,
+                     )
+                     _write_audio_from_result(session, result, base, args.timeout)
+                     break
+                 except Exception:
+                     if attempt >= 3:
+                         raise
+                     time.sleep(1.5 * attempt)
+
+             if args.sleep > 0:
+                 time.sleep(args.sleep)
+
+     print(f"OK: {out_dir}")
+     return 0
+
+
+ if __name__ == "__main__":
+     raise SystemExit(main())
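For reviewers skimming the script above: `_load_json_documents` exists because the captured `voice.json` may contain several JSON documents concatenated in one file. The same `raw_decode` loop as a standalone sketch:

```python
import json
from typing import Any


def load_json_documents(text: str) -> list[Any]:
    """Decode any number of concatenated JSON documents from one string."""
    decoder = json.JSONDecoder()
    docs: list[Any] = []
    i = 0
    while i < len(text):
        # Skip whitespace between documents.
        while i < len(text) and text[i].isspace():
            i += 1
        if i >= len(text):
            break
        # raw_decode returns (object, index just past the object).
        doc, i = decoder.raw_decode(text, idx=i)
        docs.append(doc)
    return docs


print(load_json_documents('{"a": 1}\n[2, 3]  "x"'))  # → [{'a': 1}, [2, 3], 'x']
```

Plain `json.loads` would raise on such input; `JSONDecoder.raw_decode` is the stdlib escape hatch for stream-like captures.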
package/skills/orchestrator/SKILL.md
@@ -54,5 +54,10 @@ No match → Ask clarifying question (max 2 times)
  Still unclear → Suggest `/help`
  ```

+ ### 5. Post-Action Rules
+ ```
+ Build completed successfully (no errors) → Automatically run git commit.
+ ```
+
  ## Auto-Activation
  This skill is always active. It runs as the first layer before any other processing.
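The new Post-Action rule reduces to "commit only when the build exits cleanly". A hedged sketch of that gate (the command wiring in the comment is illustrative, not the skill's actual implementation):

```python
from typing import Callable


def post_action(build: Callable[[], int], commit: Callable[[], None]) -> bool:
    """Run the commit hook only if the build returns exit code 0 (no errors)."""
    if build() == 0:
        commit()
        return True
    return False


# Example wiring (illustrative commands, not from the package):
# import subprocess
# post_action(lambda: subprocess.run(["xcodebuild", "build"]).returncode,
#             lambda: subprocess.run(["git", "commit", "-am", "auto: green build"]))
```

Injecting the build and commit steps as callables keeps the gate itself trivially testable without touching a real repository.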
package/skills/short-maker/SKILL.md
@@ -0,0 +1,150 @@
+ ---
+ name: short-maker
+ description: |
+   AI director for producing App promo videos. Automates the AIDA script-writing workflow,
+   manages storyboards through the ShortMaker Studio MCP Server, and calls Google Flow (Veo 3)
+   to render real-environment footage with Native Voice. Supports a Mimic Mode that dissects
+   trending content from YouTube/TikTok.
+ metadata:
+   stage: workflow
+   version: "2.0"
+   requires: "ShortMaker Studio (MCP Server), google-flow-cli, ffmpeg"
+   tags: [video, marketing, ads, app-promo, veo, gflow, tiktok, youtube, shorts, mcp]
+   trigger: explicit
+   activation_keywords:
+     - "/short"
+     - "/promo"
+     - "/mimic"
+     - "làm video tiktok"
+     - "chạy video flow"
+ ---
+
+ # 🎬 Short Maker v2.0 (MCP Client Mode)
+
+ > **Goal**: Maximize free traffic from short-form video (TikTok, YouTube Shorts, Reels)
+ > by automatically producing App promo videos with an AIDA structure or by copying (Mimic) viral hooks.
+
+ ## ⚠️ Prerequisites (MANDATORY)
+
+ This skill operates as an **MCP Client** — every project/storyboard/render operation goes through the **ShortMaker Studio MCP Server**.
+
+ **Before starting, CHECK:**
+ 1. The ShortMaker Studio MCP Server is running (the IDE is connected via MCP config)
+ 2. Try calling `shortmaker_list_projects` — if it succeeds → MCP Server OK
+ 3. Google Flow auth: check that `~/.gflow/env` exists. If not:
+    ```bash
+    cd ~/Dev2/MacOS/Shortmaker/scripts/google-flow-cli && PYTHONPATH=. python3 gflow/cli/main.py auth
+    ```
+
+ ## 📡 Operating Modes (Triggers)
+
+ - **Original Mode (`/promo`, `/short`)**: The user enters [App name + features]. The AI acts as a Marketing Director and drafts an AIDA script.
+ - **Mimic Mode (`/mimic`)**: The user provides a [YouTube/TikTok link]. The AI extracts the transcript, analyzes Hook/Pacing, then clones that script structure and applies it to the user's App.
+
+ ## 🧱 Workflow (Mandatory Flow)
+
+ **Credit protection**: Every Veo 3 video costs 20 credits. NEVER call render before the user has confirmed via the Storyboard Review step.
+
+ ### Phase 1: Bootstrap Project
+
+ Call the MCP tool to create a project:
+ ```
+ shortmaker_create_project(name: "FitWitness Promo", appName: "FitWitness", description: "30s TikTok promo")
+ ```
+ → Returns `projectId` and `path`. Use `projectId` for all subsequent steps.
+
+ ### Phase 2: Script & Character Casting
+
+ 1. Generate the `script.md` script (using the `aida_script.md` or `mimic_analyzer.md` template).
+ 2. **Character Setup** — call the MCP tool:
+    ```
+    shortmaker_setup_character(projectId: "...", prompt: "A young Vietnamese woman, long black hair...", seed: "123456")
+    ```
+ 3. **Reference image handling**:
+    - User provides an image → the AI analyzes it, writes the `character_prompt`, and copies the image into the project dir
+    - User has no image → the AI designs the character and calls `generate-image` to create a sample
+ 4. **Actor Approval**: Present the image to the user. ONLY CONTINUE once the user signs off on the character.
+
+ ### Phase 3: Storyboard (0 Cost)
+
+ Call the MCP tool once per scene:
+ ```
+ shortmaker_add_scene(
+   projectId: "...",
+   prompt: "A woman standing in a modern gym, looking at her phone...",
+   speech: "Tired of forgetting your workouts?",
+   duration: 8,
+   transition: "fade",
+   sceneType: "hook"
+ )
+ ```
+
+ **Prompt rules**:
+ - **PREFER REAL ENVIRONMENTS**: NEVER default to greenscreen/chroma key
+ - The character prompt is auto-prepended by the MCP Server
+ - The locked `--seed` MUST be passed when rendering
+
+ After all scenes are added, direct the user to open **ShortMaker Studio** to review the storyboard visually.
+ Or the AI can review it itself with:
+ ```
+ shortmaker_get_storyboard(projectId: "...")
+ ```
+
+ Edit a scene if needed:
+ ```
+ shortmaker_update_scene(projectId: "...", sceneId: "scene-01", speech: "Updated narration")
+ ```
+
+ **ONLY CONTINUE once the user confirms the Storyboard is approved.**
+
+ ### Phase 4: Render (Batch)
+
+ Call the MCP tool:
+ ```
+ shortmaker_trigger_render(projectId: "...", fadeDuration: 1.0, bgmVolume: 0.1)
+ ```
+
+ The user can follow progress in the ShortMaker Studio GUI.
+
+ ## 🔀 Transitions
+
+ Supported effects (pass as `transition` in `add_scene`):
+ - `fade` — Crossfade (default)
+ - `slideleft` / `slideright` — Slide left/right
+ - `wipeleft` / `wiperight` — Wipe left/right
+ - `circlecrop` — Circular crop
+ - `dissolve` — Dissolve
+ - `none` — Hard cut, no effect
+
+ ## 🌿 Green Screen (Fallback Option)
+
+ If the user SPECIFICALLY requests greenscreen:
+ - Add "on a solid chroma green screen background" to the scene prompt
+ - The render pipeline handles chroma keying against the provided background automatically
+
+ ## 📁 Output Convention
+
+ Projects are stored at `~/ShortMaker-Projects/<project-id>/`:
+ ```
+ <project-id>/
+ ├── shortmaker.config.json   # Project config (auto-managed)
+ ├── storyboard.json          # Scene data (auto-managed)
+ ├── assets/                  # Character ref, BGM
+ ├── storyboard/              # Scene preview images
+ ├── segments/                # Rendered video segments
+ ├── tts/                     # TTS audio files
+ ├── temp/                    # Temporary processing files
+ └── final/                   # Final mixed output
+ ```
+
+ ## 🔧 MCP Tools Reference
+
+ | Tool | Purpose |
+ |------|---------|
+ | `shortmaker_list_projects` | List existing projects |
+ | `shortmaker_create_project` | Create a new project |
+ | `shortmaker_setup_character` | Lock the character (prompt + seed) |
+ | `shortmaker_add_scene` | Add a scene to the storyboard |
+ | `shortmaker_update_scene` | Edit an existing scene |
+ | `shortmaker_get_storyboard` | View the full storyboard |
+ | `shortmaker_trigger_render` | Start the render pipeline (async, background) |
+ | `shortmaker_get_render_status` | Check a running render's progress (polling) |
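Reviewer note: the transition names in this skill match ffmpeg's `xfade` filter transitions, which is presumably what the render pipeline's ffmpeg step uses. A minimal sketch of mapping a scene's `transition` field to a filter string (the function name and the fall-back-to-`fade` behavior are illustrative, not the package's actual code):

```python
def xfade_filter(transition: str, duration: float, offset: float) -> str:
    """Build an ffmpeg xfade filter expression for joining two clips.

    `transition` is an xfade transition name; `offset` is where in the
    first clip the transition starts, in seconds.
    """
    if transition == "none":
        return ""  # hard cut: concatenate without a filter
    supported = {"fade", "slideleft", "slideright", "wipeleft",
                 "wiperight", "circlecrop", "dissolve"}
    if transition not in supported:
        transition = "fade"  # fall back to the skill's default
    return f"xfade=transition={transition}:duration={duration}:offset={offset}"


print(xfade_filter("dissolve", 1.0, 7.0))
# → xfade=transition=dissolve:duration=1.0:offset=7.0
```

The `fadeDuration` parameter of `shortmaker_trigger_render` would map to `duration` here; `xfade` additionally requires both inputs to share resolution and frame rate, which a pipeline normally normalizes first.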