myaidev-method 0.3.3 → 0.3.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (132)
  1. package/.claude-plugin/plugin.json +0 -1
  2. package/.env.example +5 -4
  3. package/CHANGELOG.md +2 -2
  4. package/CONTENT_CREATION_GUIDE.md +489 -3211
  5. package/DEVELOPER_USE_CASES.md +1 -1
  6. package/MODULAR_INSTALLATION.md +2 -2
  7. package/README.md +39 -33
  8. package/TECHNICAL_ARCHITECTURE.md +1 -1
  9. package/USER_GUIDE.md +242 -190
  10. package/agents/content-editor-agent.md +90 -0
  11. package/agents/content-planner-agent.md +97 -0
  12. package/agents/content-research-agent.md +62 -0
  13. package/agents/content-seo-agent.md +101 -0
  14. package/agents/content-writer-agent.md +69 -0
  15. package/agents/infographic-analyzer-agent.md +63 -0
  16. package/agents/infographic-designer-agent.md +72 -0
  17. package/bin/cli.js +777 -535
  18. package/{content-rules.example.md → content-rules-example.md} +2 -2
  19. package/dist/mcp/health-check.js +82 -68
  20. package/dist/mcp/mcp-config.json +8 -0
  21. package/dist/mcp/openstack-server.js +1746 -1262
  22. package/dist/server/.tsbuildinfo +1 -1
  23. package/extension.json +21 -4
  24. package/package.json +181 -184
  25. package/skills/company-config/SKILL.md +133 -0
  26. package/skills/configure/SKILL.md +1 -1
  27. package/skills/myai-configurator/SKILL.md +77 -0
  28. package/skills/myai-configurator/content-creation-configurator/SKILL.md +516 -0
  29. package/skills/myai-configurator/content-maintenance-configurator/SKILL.md +397 -0
  30. package/skills/myai-content-enrichment/SKILL.md +114 -0
  31. package/skills/myai-content-ideation/SKILL.md +288 -0
  32. package/skills/myai-content-ideation/evals/evals.json +182 -0
  33. package/skills/myai-content-production-coordinator/SKILL.md +946 -0
  34. package/skills/{content-rules-setup → myai-content-rules-setup}/SKILL.md +1 -1
  35. package/skills/{content-verifier → myai-content-verifier}/SKILL.md +1 -1
  36. package/skills/myai-content-writer/SKILL.md +333 -0
  37. package/skills/myai-content-writer/agents/editor-agent.md +138 -0
  38. package/skills/myai-content-writer/agents/planner-agent.md +121 -0
  39. package/skills/myai-content-writer/agents/research-agent.md +83 -0
  40. package/skills/myai-content-writer/agents/seo-agent.md +139 -0
  41. package/skills/myai-content-writer/agents/visual-planner-agent.md +110 -0
  42. package/skills/myai-content-writer/agents/writer-agent.md +85 -0
  43. package/skills/{infographic → myai-infographic}/SKILL.md +1 -1
  44. package/skills/myai-proprietary-content-verifier/SKILL.md +175 -0
  45. package/skills/myai-proprietary-content-verifier/evals/evals.json +36 -0
  46. package/skills/myai-skill-builder/SKILL.md +699 -0
  47. package/skills/myai-skill-builder/agents/analyzer-agent.md +137 -0
  48. package/skills/myai-skill-builder/agents/comparator-agent.md +77 -0
  49. package/skills/myai-skill-builder/agents/grader-agent.md +103 -0
  50. package/skills/myai-skill-builder/assets/eval_review.html +131 -0
  51. package/skills/myai-skill-builder/references/schemas.md +211 -0
  52. package/skills/myai-skill-builder/scripts/aggregate_benchmark.py +190 -0
  53. package/skills/myai-skill-builder/scripts/generate_review.py +381 -0
  54. package/skills/myai-skill-builder/scripts/package_skill.py +91 -0
  55. package/skills/myai-skill-builder/scripts/run_eval.py +105 -0
  56. package/skills/myai-skill-builder/scripts/run_loop.py +211 -0
  57. package/skills/myai-skill-builder/scripts/utils.py +123 -0
  58. package/skills/myai-visual-generator/SKILL.md +125 -0
  59. package/skills/myai-visual-generator/evals/evals.json +155 -0
  60. package/skills/myai-visual-generator/references/infographic-pipeline.md +73 -0
  61. package/skills/myai-visual-generator/references/research-visuals.md +57 -0
  62. package/skills/myai-visual-generator/references/services.md +89 -0
  63. package/skills/myai-visual-generator/scripts/visual-generation-utils.js +1272 -0
  64. package/skills/myaidev-analyze/agents/dependency-mapper-agent.md +236 -0
  65. package/skills/myaidev-analyze/agents/pattern-detector-agent.md +240 -0
  66. package/skills/myaidev-analyze/agents/structure-scanner-agent.md +171 -0
  67. package/skills/myaidev-analyze/agents/tech-profiler-agent.md +291 -0
  68. package/skills/myaidev-architect/agents/compliance-checker-agent.md +287 -0
  69. package/skills/myaidev-architect/agents/requirements-analyst-agent.md +194 -0
  70. package/skills/myaidev-architect/agents/system-designer-agent.md +315 -0
  71. package/skills/myaidev-coder/agents/implementer-agent.md +185 -0
  72. package/skills/myaidev-coder/agents/integration-agent.md +168 -0
  73. package/skills/myaidev-coder/agents/pattern-scanner-agent.md +161 -0
  74. package/skills/myaidev-coder/agents/self-reviewer-agent.md +168 -0
  75. package/skills/myaidev-debug/agents/fix-agent-debug.md +317 -0
  76. package/skills/myaidev-debug/agents/hypothesis-agent.md +226 -0
  77. package/skills/myaidev-debug/agents/investigator-agent.md +250 -0
  78. package/skills/myaidev-debug/agents/symptom-collector-agent.md +231 -0
  79. package/skills/myaidev-documenter/agents/code-reader-agent.md +172 -0
  80. package/skills/myaidev-documenter/agents/doc-validator-agent.md +174 -0
  81. package/skills/myaidev-documenter/agents/doc-writer-agent.md +379 -0
  82. package/skills/myaidev-figma/SKILL.md +212 -0
  83. package/skills/myaidev-figma/capture.js +133 -0
  84. package/skills/myaidev-figma/crawl.js +130 -0
  85. package/skills/myaidev-figma-configure/SKILL.md +130 -0
  86. package/skills/myaidev-migrate/agents/migration-planner-agent.md +237 -0
  87. package/skills/myaidev-migrate/agents/migration-writer-agent.md +248 -0
  88. package/skills/myaidev-migrate/agents/schema-analyzer-agent.md +190 -0
  89. package/skills/myaidev-performance/agents/benchmark-agent.md +281 -0
  90. package/skills/myaidev-performance/agents/optimizer-agent.md +277 -0
  91. package/skills/myaidev-performance/agents/profiler-agent.md +252 -0
  92. package/skills/myaidev-refactor/agents/refactor-executor-agent.md +221 -0
  93. package/skills/myaidev-refactor/agents/refactor-planner-agent.md +213 -0
  94. package/skills/myaidev-refactor/agents/regression-guard-agent.md +242 -0
  95. package/skills/myaidev-refactor/agents/smell-detector-agent.md +233 -0
  96. package/skills/myaidev-reviewer/agents/auto-fixer-agent.md +238 -0
  97. package/skills/myaidev-reviewer/agents/code-analyst-agent.md +220 -0
  98. package/skills/myaidev-reviewer/agents/security-scanner-agent.md +262 -0
  99. package/skills/myaidev-tester/agents/coverage-analyst-agent.md +163 -0
  100. package/skills/myaidev-tester/agents/tdd-driver-agent.md +242 -0
  101. package/skills/myaidev-tester/agents/test-runner-agent.md +176 -0
  102. package/skills/myaidev-tester/agents/test-strategist-agent.md +154 -0
  103. package/skills/myaidev-tester/agents/test-writer-agent.md +242 -0
  104. package/skills/myaidev-workflow/agents/analyzer-agent.md +317 -0
  105. package/skills/myaidev-workflow/agents/coordinator-agent.md +253 -0
  106. package/skills/openstack-manager/SKILL.md +1 -1
  107. package/skills/payloadcms-publisher/SKILL.md +141 -77
  108. package/skills/payloadcms-publisher/references/field-mapping.md +142 -0
  109. package/skills/payloadcms-publisher/references/lexical-format.md +97 -0
  110. package/skills/security-auditor/SKILL.md +1 -1
  111. package/src/cli/commands/addon.js +184 -123
  112. package/src/config/workflows.js +172 -228
  113. package/src/lib/ascii-banner.js +197 -182
  114. package/src/lib/{content-coordinator.js → content-production-coordinator.js} +649 -459
  115. package/src/lib/installation-detector.js +93 -59
  116. package/src/lib/payloadcms-utils.js +285 -510
  117. package/src/lib/update-manager.js +120 -61
  118. package/src/lib/workflow-installer.js +55 -0
  119. package/src/mcp/health-check.js +82 -68
  120. package/src/mcp/openstack-server.js +1746 -1262
  121. package/src/scripts/configure-visual-apis.js +224 -173
  122. package/src/scripts/configure-wordpress-mcp.js +96 -66
  123. package/src/scripts/init/install.js +109 -85
  124. package/src/scripts/init-project.js +138 -67
  125. package/src/scripts/utils/write-content.js +67 -52
  126. package/src/scripts/wordpress/publish-to-wordpress.js +128 -128
  127. package/src/templates/claude/CLAUDE.md +131 -0
  128. package/hooks/hooks.json +0 -26
  129. package/skills/content-coordinator/SKILL.md +0 -130
  130. package/skills/content-enrichment/SKILL.md +0 -80
  131. package/skills/content-writer/SKILL.md +0 -285
  132. package/skills/visual-generator/SKILL.md +0 -140
@@ -0,0 +1,105 @@
+ #!/usr/bin/env python3
+ """Run a single description trigger evaluation.
+
+ Tests whether a given prompt would trigger a skill based on its description.
+ Used by run_loop.py for description optimization.
+
+ Usage:
+     python3 run_eval.py --description "Skill description text" --query "User prompt to test"
+
+ Output:
+     JSON with trigger prediction and reasoning.
+ """
+
+ import argparse
+ import json
+ import subprocess
+ import sys
+ import tempfile
+ from pathlib import Path
+
+
+ def evaluate_trigger(description, query):
+     """Evaluate whether a skill description would trigger for a given query.
+
+     Uses a simple heuristic: check if the key concepts in the description
+     match the query's intent. For more accurate results, use claude -p.
+     """
+     # Normalize for comparison
+     desc_lower = description.lower()
+     query_lower = query.lower()
+
+     # Extract "when" clause keywords from description
+     when_idx = desc_lower.find("when")
+     if when_idx >= 0:
+         when_clause = desc_lower[when_idx:]
+     else:
+         when_clause = desc_lower
+
+     # Simple keyword overlap scoring
+     desc_words = set(when_clause.split())
+     query_words = set(query_lower.split())
+
+     # Remove common stop words
+     stop_words = {"a", "an", "the", "is", "are", "was", "were", "be", "been",
+                   "being", "have", "has", "had", "do", "does", "did", "will",
+                   "would", "could", "should", "may", "might", "can", "to",
+                   "of", "in", "for", "on", "with", "at", "by", "from", "or",
+                   "and", "not", "no", "but", "if", "this", "that", "it", "i",
+                   "you", "use", "when"}
+     desc_words -= stop_words
+     query_words -= stop_words
+
+     overlap = desc_words & query_words
+     score = len(overlap) / max(len(desc_words), 1)
+
+     return {
+         "would_trigger": score > 0.15,
+         "confidence": min(score * 2, 1.0),
+         "matching_keywords": list(overlap),
+         "score": score
+     }
+
+
+ def evaluate_trigger_with_claude(description, query):
+     """Use claude CLI to evaluate trigger accuracy (more accurate but slower)."""
+     prompt = f"""Given this skill description:
+ "{description}"
+
+ Would this user message trigger the skill? Answer with ONLY "yes" or "no":
+ "{query}"
+ """
+     try:
+         result = subprocess.run(
+             ["claude", "-p", prompt, "--output-format", "text"],
+             capture_output=True, text=True, timeout=30
+         )
+         answer = result.stdout.strip().lower()
+         return {
+             "would_trigger": answer.startswith("yes"),
+             "confidence": 0.9 if answer in ("yes", "no") else 0.5,
+             "raw_answer": answer
+         }
+     except (subprocess.TimeoutExpired, FileNotFoundError):
+         # Fall back to heuristic
+         return evaluate_trigger(description, query)
+
+
+ def main():
+     parser = argparse.ArgumentParser(description="Evaluate a single description trigger")
+     parser.add_argument("--description", required=True, help="Skill description text")
+     parser.add_argument("--query", required=True, help="User prompt to test")
+     parser.add_argument("--use-claude", action="store_true",
+                         help="Use claude CLI for evaluation (slower but more accurate)")
+     args = parser.parse_args()
+
+     if args.use_claude:
+         result = evaluate_trigger_with_claude(args.description, args.query)
+     else:
+         result = evaluate_trigger(args.description, args.query)
+
+     print(json.dumps(result, indent=2))
+
+
+ if __name__ == "__main__":
+     main()
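The keyword-overlap heuristic in the hunk above (it appears to match the +105 `run_eval.py` entry in the file list) can be exercised in isolation. A minimal standalone sketch of the same scoring idea — abbreviated stop-word list, illustrative inputs, and `score_trigger` is an invented name, not part of the package API:

```python
# Standalone sketch of the "when"-clause keyword-overlap scoring above.
# STOP_WORDS is abbreviated and the 0.15 threshold mirrors the diff.
STOP_WORDS = {"a", "an", "the", "to", "of", "in", "for", "on", "with",
              "or", "and", "when", "use", "i", "you", "it", "this", "that"}

def score_trigger(description, query):
    desc = description.lower()
    # Prefer the "when ..." clause if present, as the script does
    when_idx = desc.find("when")
    clause = desc[when_idx:] if when_idx >= 0 else desc
    desc_words = set(clause.split()) - STOP_WORDS
    query_words = set(query.lower().split()) - STOP_WORDS
    overlap = desc_words & query_words
    score = len(overlap) / max(len(desc_words), 1)
    return {"would_trigger": score > 0.15,
            "score": score,
            "matching_keywords": sorted(overlap)}

result = score_trigger(
    "Use when the user wants to generate an infographic or flowchart",
    "generate a flowchart for my API",
)
print(result)  # would_trigger=True; overlap {flowchart, generate}, score 0.4
```

Note how the score is normalized by the description's keyword count, not the query's, so verbose queries don't dilute the signal.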
@@ -0,0 +1,211 @@
+ #!/usr/bin/env python3
+ """Description optimization loop with train/test split.
+
+ Generates trigger eval queries, tests the current description,
+ and suggests improvements based on accuracy metrics.
+
+ Usage:
+     python3 run_loop.py --skill-dir .claude/skills/my-skill
+     python3 run_loop.py --skill-dir .claude/skills/my-skill --use-claude
+     python3 run_loop.py --queries queries.json --description "Skill desc"
+
+ Input:
+     Either a --skill-dir (reads SKILL.md frontmatter) or explicit --description.
+     Optionally --queries pointing to a JSON file with pre-defined trigger queries.
+
+ Output:
+     Prints accuracy metrics and saves results to description_eval.json in skill dir.
+ """
+
+ import argparse
+ import json
+ import random
+ import sys
+ from pathlib import Path
+
+ sys.path.insert(0, str(Path(__file__).parent))
+ from utils import load_json, save_json, now_iso
+ from run_eval import evaluate_trigger, evaluate_trigger_with_claude
+
+
+ def parse_frontmatter(skill_path):
+     """Extract name and description from SKILL.md frontmatter."""
+     path = Path(skill_path) / "SKILL.md"
+     if not path.exists():
+         print(f"Error: {path} not found", file=sys.stderr)
+         sys.exit(1)
+
+     content = path.read_text()
+     if not content.startswith("---"):
+         print("Error: No frontmatter found in SKILL.md", file=sys.stderr)
+         sys.exit(1)
+
+     end = content.index("---", 3)
+     frontmatter = content[3:end]
+
+     name = ""
+     description = ""
+     for line in frontmatter.strip().split("\n"):
+         if line.startswith("name:"):
+             name = line.split(":", 1)[1].strip().strip('"').strip("'")
+         elif line.startswith("description:"):
+             description = line.split(":", 1)[1].strip().strip('"').strip("'")
+
+     return name, description
+
+
+ def load_queries(queries_path):
+     """Load trigger queries from a JSON file."""
+     data = load_json(queries_path)
+     return data.get("should_trigger", []), data.get("should_not_trigger", [])
+
+
+ def train_test_split(items, train_ratio=0.7):
+     """Split items into train and test sets."""
+     shuffled = items[:]
+     random.shuffle(shuffled)
+     split_idx = max(1, int(len(shuffled) * train_ratio))
+     return shuffled[:split_idx], shuffled[split_idx:]
+
+
+ def evaluate_description(description, should_trigger, should_not_trigger, use_claude=False):
+     """Evaluate a description against trigger queries."""
+     eval_fn = evaluate_trigger_with_claude if use_claude else evaluate_trigger
+
+     results = {"true_positive": 0, "false_positive": 0,
+                "true_negative": 0, "false_negative": 0,
+                "details": []}
+
+     for query in should_trigger:
+         result = eval_fn(description, query)
+         triggered = result["would_trigger"]
+         if triggered:
+             results["true_positive"] += 1
+         else:
+             results["false_negative"] += 1
+         results["details"].append({
+             "query": query, "expected": True, "actual": triggered,
+             "correct": triggered, "confidence": result.get("confidence", 0)
+         })
+
+     for query in should_not_trigger:
+         result = eval_fn(description, query)
+         triggered = result["would_trigger"]
+         if not triggered:
+             results["true_negative"] += 1
+         else:
+             results["false_positive"] += 1
+         results["details"].append({
+             "query": query, "expected": False, "actual": triggered,
+             "correct": not triggered, "confidence": result.get("confidence", 0)
+         })
+
+     total = len(should_trigger) + len(should_not_trigger)
+     correct = results["true_positive"] + results["true_negative"]
+     results["accuracy"] = correct / total if total > 0 else 0
+     results["precision"] = (
+         results["true_positive"] / (results["true_positive"] + results["false_positive"])
+         if (results["true_positive"] + results["false_positive"]) > 0 else 0
+     )
+     results["recall"] = (
+         results["true_positive"] / (results["true_positive"] + results["false_negative"])
+         if (results["true_positive"] + results["false_negative"]) > 0 else 0
+     )
+
+     return results
+
+
+ def main():
+     parser = argparse.ArgumentParser(description="Description optimization loop")
+     parser.add_argument("--skill-dir", help="Path to skill directory")
+     parser.add_argument("--description", help="Explicit description to test")
+     parser.add_argument("--queries", help="Path to queries JSON file")
+     parser.add_argument("--use-claude", action="store_true",
+                         help="Use claude CLI for evaluation")
+     parser.add_argument("--train-only", action="store_true",
+                         help="Only evaluate against training set")
+     args = parser.parse_args()
+
+     # Get description
+     if args.description:
+         description = args.description
+         skill_name = "unknown"
+     elif args.skill_dir:
+         skill_name, description = parse_frontmatter(args.skill_dir)
+     else:
+         print("Error: Provide --skill-dir or --description", file=sys.stderr)
+         sys.exit(1)
+
+     if not description:
+         print("Error: No description found", file=sys.stderr)
+         sys.exit(1)
+
+     # Get queries
+     if args.queries:
+         should_trigger, should_not_trigger = load_queries(args.queries)
+     else:
+         print("Error: Provide --queries with trigger eval queries", file=sys.stderr)
+         sys.exit(1)
+
+     # Split into train/test
+     train_yes, test_yes = train_test_split(should_trigger)
+     train_no, test_no = train_test_split(should_not_trigger)
+
+     print(f"Skill: {skill_name}")
+     print(f"Description: {description[:80]}...")
+     print(f"Queries: {len(should_trigger)} should-trigger, {len(should_not_trigger)} should-not-trigger")
+     print(f"Train/Test split: {len(train_yes)+len(train_no)} / {len(test_yes)+len(test_no)}")
+     print()
+
+     # Evaluate on training set
+     print("=== Training Set ===")
+     train_results = evaluate_description(description, train_yes, train_no, args.use_claude)
+     print(f"Accuracy: {train_results['accuracy']:.1%}")
+     print(f"Precision: {train_results['precision']:.1%}")
+     print(f"Recall: {train_results['recall']:.1%}")
+
+     # Show misclassifications
+     misclassified = [d for d in train_results["details"] if not d["correct"]]
+     if misclassified:
+         print(f"\nMisclassified ({len(misclassified)}):")
+         for m in misclassified:
+             label = "FN" if m["expected"] else "FP"
+             print(f"  [{label}] {m['query']}")
+
+     # Evaluate on test set (unless train-only)
+     if not args.train_only and (test_yes or test_no):
+         print("\n=== Test Set ===")
+         test_results = evaluate_description(description, test_yes, test_no, args.use_claude)
+         print(f"Accuracy: {test_results['accuracy']:.1%}")
+         print(f"Precision: {test_results['precision']:.1%}")
+         print(f"Recall: {test_results['recall']:.1%}")
+     else:
+         test_results = None
+
+     # Save results
+     output = {
+         "timestamp": now_iso(),
+         "skill_name": skill_name,
+         "description": description,
+         "train_results": {
+             "accuracy": train_results["accuracy"],
+             "precision": train_results["precision"],
+             "recall": train_results["recall"],
+             "misclassified": misclassified
+         }
+     }
+     if test_results:
+         output["test_results"] = {
+             "accuracy": test_results["accuracy"],
+             "precision": test_results["precision"],
+             "recall": test_results["recall"],
+         }
+
+     if args.skill_dir:
+         save_json(Path(args.skill_dir) / "description_eval.json", output)
+     else:
+         print(json.dumps(output, indent=2))
+
+
+ if __name__ == "__main__":
+     main()
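The confusion-matrix bookkeeping in `evaluate_description` above (this hunk matches the +211 `run_loop.py` entry in the file list) reduces to the standard accuracy/precision/recall formulas. A minimal sketch with made-up counts, not real eval results:

```python
# Accuracy, precision, and recall exactly as evaluate_description
# computes them from TP/FP/TN/FN tallies; counts below are illustrative.
def metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total if total else 0,
        "precision": tp / (tp + fp) if (tp + fp) else 0,
        "recall": tp / (tp + fn) if (tp + fn) else 0,
    }

m = metrics(tp=8, fp=1, tn=9, fn=2)
print(f"{m['accuracy']:.1%} {m['precision']:.1%} {m['recall']:.1%}")
# 85.0% 88.9% 80.0%
```

For description tuning, recall measures how many should-trigger prompts the description actually catches, while precision measures how often a predicted trigger is correct — the loop's FN/FP labels map directly to these two numbers.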
@@ -0,0 +1,123 @@
+ #!/usr/bin/env python3
+ """Shared utilities for skill-builder scripts."""
+
+ import json
+ import os
+ import sys
+ from pathlib import Path
+ from datetime import datetime, timezone
+
+
+ def load_json(path):
+     """Load and parse a JSON file. Exit with error if not found or invalid."""
+     path = Path(path)
+     if not path.exists():
+         print(f"Error: {path} not found", file=sys.stderr)
+         sys.exit(1)
+     try:
+         with open(path, "r") as f:
+             return json.load(f)
+     except json.JSONDecodeError as e:
+         print(f"Error: Invalid JSON in {path}: {e}", file=sys.stderr)
+         sys.exit(1)
+
+
+ def save_json(path, data, indent=2):
+     """Save data as formatted JSON."""
+     path = Path(path)
+     path.parent.mkdir(parents=True, exist_ok=True)
+     with open(path, "w") as f:
+         json.dump(data, f, indent=indent)
+     print(f"Saved: {path}")
+
+
+ def find_workspace(skill_dir):
+     """Find or create the workspace directory for a skill."""
+     skill_dir = Path(skill_dir)
+     slug = skill_dir.name
+     workspace = skill_dir.parent.parent.parent / f"{slug}-workspace"
+     return workspace
+
+
+ def find_latest_iteration(workspace):
+     """Find the highest iteration number in a workspace."""
+     workspace = Path(workspace)
+     if not workspace.exists():
+         return 0
+     iterations = [
+         int(d.name.split("-")[1])
+         for d in workspace.iterdir()
+         if d.is_dir() and d.name.startswith("iteration-")
+     ]
+     return max(iterations) if iterations else 0
+
+
+ def next_iteration_dir(workspace):
+     """Get the path for the next iteration directory."""
+     workspace = Path(workspace)
+     n = find_latest_iteration(workspace) + 1
+     return workspace / f"iteration-{n}"
+
+
+ def collect_grading_files(iteration_dir):
+     """Collect all grading.json files from an iteration directory."""
+     iteration_dir = Path(iteration_dir)
+     results = []
+     for eval_dir in sorted(iteration_dir.iterdir()):
+         if not eval_dir.is_dir() or not eval_dir.name.startswith("eval-"):
+             continue
+         for config in ["with_skill", "without_skill"]:
+             grading_path = eval_dir / config / "grading.json"
+             if grading_path.exists():
+                 results.append(load_json(grading_path))
+     return results
+
+
+ def collect_timing_files(iteration_dir):
+     """Collect all timing.json files from an iteration directory."""
+     iteration_dir = Path(iteration_dir)
+     results = []
+     for eval_dir in sorted(iteration_dir.iterdir()):
+         if not eval_dir.is_dir() or not eval_dir.name.startswith("eval-"):
+             continue
+         for config in ["with_skill", "without_skill"]:
+             timing_path = eval_dir / config / "timing.json"
+             if timing_path.exists():
+                 results.append(load_json(timing_path))
+     return results
+
+
+ def now_iso():
+     """Return current time as ISO 8601 string."""
+     return datetime.now(timezone.utc).isoformat()
+
+
+ def mean(values):
+     """Calculate mean of a list of numbers."""
+     if not values:
+         return 0
+     return sum(values) / len(values)
+
+
+ def stddev(values):
+     """Calculate population standard deviation."""
+     if len(values) < 2:
+         return 0
+     m = mean(values)
+     variance = sum((x - m) ** 2 for x in values) / len(values)
+     return variance ** 0.5
+
+
+ def format_percent(value):
+     """Format a float as a percentage string."""
+     return f"{value * 100:.1f}%"
+
+
+ def format_duration(ms):
+     """Format milliseconds as a human-readable duration."""
+     if ms < 1000:
+         return f"{ms}ms"
+     elif ms < 60000:
+         return f"{ms / 1000:.1f}s"
+     else:
+         return f"{ms / 60000:.1f}m"
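A few of the helpers above (this hunk matches the +123 `utils.py` entry) are easy to sanity-check by hand. This standalone sketch re-implements `stddev` (population form, matching the `/ len(values)` divisor in the diff) and `format_duration` so their expected behavior is visible:

```python
# Population standard deviation and millisecond formatting, re-implemented
# standalone to mirror the utils.py helpers above.
def stddev(values):
    if len(values) < 2:
        return 0
    m = sum(values) / len(values)
    return (sum((x - m) ** 2 for x in values) / len(values)) ** 0.5

def format_duration(ms):
    if ms < 1000:
        return f"{ms}ms"
    elif ms < 60000:
        return f"{ms / 1000:.1f}s"
    return f"{ms / 60000:.1f}m"

print(stddev([2, 4, 4, 4, 5, 5, 7, 9]))  # 2.0 (population sigma)
print(format_duration(450), format_duration(1500), format_duration(90000))
# 450ms 1.5s 1.5m
```

Note the population divisor `len(values)` rather than the sample form `len(values) - 1`; for the small eval batches these scripts aggregate, the two can differ noticeably.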
@@ -0,0 +1,125 @@
+ ---
+ name: myai-visual-generator
+ description: "Generates high-quality images, videos, infographics, and research visuals using AI APIs (Gemini, Imagen, DALL-E, GPT Image, FLUX, Veo) with fal.ai MCP model discovery. Use when creating hero images, blog post visuals, diagrams, infographics, architecture diagrams, flowcharts, sequence diagrams, data visualizations, or video content. Also use when the user mentions generating images, creating visuals, making diagrams, or needs any kind of visual content for articles, presentations, or projects."
+ argument-hint: "[prompt] [--visual-type=image] [--type=hero] [--service=gemini]"
+ allowed-tools: [Read, Write, Bash, Task, "mcp__fal-ai__*"]
+ context: fork
+ ---
+
+ # Visual Generator
+
+ Generate images, videos, infographics, and research visuals using AI generation APIs, Mermaid rendering, HTML conversion, and PaperBanana.
+
+ ## Bundled Scripts
+
+ The generation SDK is bundled at `${CLAUDE_SKILL_DIR}/scripts/visual-generation-utils.js`. It provides:
+ - `validateAPIKeys()` — check which services are available
+ - `generateImage(prompt, options)` — auto-routed image generation
+ - `generateVideo(prompt, options)` — Veo 3 video generation
+ - `estimateCost(service, options)` — cost estimation
+ - `selectBestService(preferred)` — smart service selection with fallback
+ - `buildInfographicPrompt(config)` — structured infographic prompt builder
+ - `getServiceInfo(service)` — service metadata lookup
+
+ Run it via Node.js:
+ ```bash
+ node -e "import('${CLAUDE_SKILL_DIR}/scripts/visual-generation-utils.js').then(m => m.generateImage('prompt', {type: 'hero'}).then(r => console.log(r)))"
+ ```
+
+ Or reference it from a script you write during generation.
+
+ ## Quick Start
+
+ ```
+ /myai-visual-generator "Modern cloud-native architecture" --type=hero
+ /myai-visual-generator "API request lifecycle" --visual-type=infographic --type=flowchart
+ /myai-visual-generator "SaaS KPI dashboard" --visual-type=infographic --type=infographic-data
+ /myai-visual-generator "Product demo" --visual-type=video
+ ```
+
+ ## Arguments
+
+ Parse from: `$ARGUMENTS`
+
+ | Parameter | Description | Default |
+ |-----------|-------------|---------|
+ | `prompt` | What the visual should depict | Required |
+ | `--visual-type` | `image`, `video`, `infographic`, `research-visuals` | image |
+ | `--type` | Sub-type (see routing below) | hero |
+ | `--service` | Preferred generation service | auto |
+ | `--quality` | low, medium, high | medium |
+ | `--size` | 1024x1024, 1792x1024, 1024x1792 | 1024x1024 |
+
+ ## Visual Type Routing
+
+ 1. Ask **visual type** first: `image`, `video`, `infographic`, or `research-visuals`
+ 2. Then ask the relevant **sub-type** only:
+    - **Image**: `hero` or `illustration`
+    - **Video**: user intent (demo, tutorial, walkthrough)
+    - **Infographic**: `flowchart` or `sequence-diagram` (quick mode)
+    - **Research**: `research-diagram`, `research-plot`, or `research-evaluation`
+ 3. Collect only mandatory inputs; use defaults for everything else
+
+ ### Routing Table
+
+ | Visual Type + Sub-type | Path |
+ |----------------------|------|
+ | Image (hero, illustration) | AI generation via bundled SDK |
+ | Video | Veo 3 via bundled SDK |
+ | Infographic (from article `*.md`) | Mermaid analyze → design → render → insert |
+ | Infographic (standalone) | Mermaid/Gemini diagram path or HTML conversion |
+ | Research visuals | PaperBanana CLI |
+ | Text-heavy data visuals | HTML-to-screenshot (Playwright) |
+
+ **Critical routing rule**: When the user needs pixel-perfect text/numbers (metrics, data, labels), use HTML conversion instead of AI generation. AI models can hallucinate text. See [services reference](references/services.md) for the full decision matrix.
+
+ ## Workflow
+
+ 1. **Check config**: Run `validateAPIKeys()` from the bundled SDK to see what's available
+ 2. **Gather requirements**: Visual type, sub-type, prompt, preferences (minimum questions)
+ 3. **Route**: Pick the generation path based on the routing table
+ 4. **Estimate cost**: Show estimated cost and check against budget before generating
+ 5. **Generate**: Execute via the appropriate path
+ 6. **Save**: Organize into `content-assets/images/YYYY-MM-DD/` or `content-assets/videos/YYYY-MM-DD/`
+ 7. **Return**: Provide markdown embed code and generation report
+
+ ## File Organization
+
+ ```
+ content-assets/
+ ├── images/
+ │   └── YYYY-MM-DD/
+ │       └── {type}-{description}-{id}.png
+ └── videos/
+     └── YYYY-MM-DD/
+         └── video-{description}-{id}.mp4
+ ```
+
+ ## Additional Resources
+
+ - **Service catalog, pricing, MCP tools**: See [references/services.md](references/services.md)
+ - **Infographic/Mermaid pipeline details**: See [references/infographic-pipeline.md](references/infographic-pipeline.md)
+ - **Research visuals (PaperBanana)**: See [references/research-visuals.md](references/research-visuals.md)
+ - **Generation SDK source**: See [scripts/visual-generation-utils.js](scripts/visual-generation-utils.js) for the full API
+
+ ## Quality Guardrails
+
+ - Verify all visible text in generated images is free from typos
+ - For text-heavy visuals, prefer GPT Image 1.5 or HTML conversion
+ - If generated text has mistakes, regenerate with clearer constraints or switch to HTML conversion
+ - Validate rendered Mermaid outputs exist and are >1KB before inserting references
+
+ ## Error Handling
+
+ - **No API keys**: Show which keys are missing and how to configure them (`/myai-configure visual`)
+ - **Rate limiting**: Wait 60s and retry, or switch service
+ - **Budget exceeded**: Show usage and suggest increasing limits in `.env`
+ - **MCP unavailable**: Fall back to SDK-only approach automatically
+ - **Mermaid render fails**: Save source `.mmd` files and provide manual render instructions
+
+ ## Integration
+
+ - Works with `content-writer` skill via `--with-images` flag
+ - Automatic asset organization and cost tracking
+ - Markdown reference generation for easy embedding
+ - fal.ai MCP model discovery when `FAL_KEY` is configured
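The File Organization convention in the SKILL hunk above can be made concrete with a small path builder. This is an illustrative sketch only — the slug rules and `asset_path` helper are assumptions, not something the package ships:

```python
import re
from datetime import datetime, timezone

# Illustrative builder for the content-assets/{images,videos}/YYYY-MM-DD/
# layout described in the SKILL; naming details here are assumptions.
def asset_path(kind, visual_type, description, asset_id):
    assert kind in ("images", "videos")
    # Slugify the description: lowercase, non-alphanumerics collapsed to "-"
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    ext = "png" if kind == "images" else "mp4"
    # The tree shows videos prefixed with the literal "video"
    prefix = "video" if kind == "videos" else visual_type
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return f"content-assets/{kind}/{day}/{prefix}-{slug}-{asset_id}.{ext}"

print(asset_path("images", "hero", "Cloud Native Architecture", "a1b2"))
```

Date-bucketed directories like this keep each generation run's assets grouped, which is what makes the SKILL's "automatic asset organization and cost tracking" integration point practical.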