opencode-skills-collection 3.0.7 → 3.0.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (36) hide show
  1. package/bundled-skills/.antigravity-install-manifest.json +6 -1
  2. package/bundled-skills/aomi-transact/SKILL.md +127 -0
  3. package/bundled-skills/docs/integrations/jetski-cortex.md +3 -3
  4. package/bundled-skills/docs/integrations/jetski-gemini-loader/README.md +1 -1
  5. package/bundled-skills/docs/maintainers/repo-growth-seo.md +3 -3
  6. package/bundled-skills/docs/maintainers/skills-update-guide.md +1 -1
  7. package/bundled-skills/docs/users/bundles.md +1 -1
  8. package/bundled-skills/docs/users/claude-code-skills.md +1 -1
  9. package/bundled-skills/docs/users/gemini-cli-skills.md +1 -1
  10. package/bundled-skills/docs/users/getting-started.md +1 -1
  11. package/bundled-skills/docs/users/kiro-integration.md +1 -1
  12. package/bundled-skills/docs/users/usage.md +4 -4
  13. package/bundled-skills/docs/users/visual-guide.md +4 -4
  14. package/bundled-skills/git-pr-review/SKILL.md +12 -0
  15. package/bundled-skills/kubestellar-console/SKILL.md +14 -5
  16. package/bundled-skills/loki-mode/examples/todo-app-generated/backend/package-lock.json +9 -8
  17. package/bundled-skills/loki-mode/examples/todo-app-generated/backend/package.json +2 -1
  18. package/bundled-skills/mock-hunter/SKILL.md +144 -0
  19. package/bundled-skills/multi-agent-architect/SKILL.md +361 -0
  20. package/bundled-skills/production-audit/SKILL.md +9 -8
  21. package/bundled-skills/rich-elicitation/SKILL.md +213 -0
  22. package/bundled-skills/skill-writer/references/authoring-path.md +26 -0
  23. package/bundled-skills/skill-writer/references/description-optimization.md +30 -0
  24. package/bundled-skills/skill-writer/references/design-principles.md +26 -0
  25. package/bundled-skills/skill-writer/references/evaluation-path.md +28 -0
  26. package/bundled-skills/skill-writer/references/examples/workflow-process.md +27 -0
  27. package/bundled-skills/skill-writer/references/iteration-path.md +28 -0
  28. package/bundled-skills/skill-writer/references/mode-selection.md +35 -0
  29. package/bundled-skills/skill-writer/references/output-patterns.md +34 -0
  30. package/bundled-skills/skill-writer/references/registration-validation.md +33 -0
  31. package/bundled-skills/skill-writer/references/skill-patterns.md +50 -0
  32. package/bundled-skills/skill-writer/references/synthesis-path.md +31 -0
  33. package/bundled-skills/skill-writer/references/workflow-patterns.md +36 -0
  34. package/bundled-skills/unity-ai-game-creator/SKILL.md +299 -0
  35. package/package.json +1 -1
  36. package/skills_index.json +111 -1
@@ -0,0 +1,361 @@
1
+ ---
2
+ name: multi-agent-architect
3
+ description: "Design and optimize production-grade multi-agent systems with LangGraph, LangChain, and DeepAgents for complex AI workflows."
4
+ risk: safe
5
+ source: community
6
+ metadata:
7
+ category: ai-engineering
8
+ source_repo: pravin-python/antigravity-awesome-skills
9
+ source_type: community
10
+ date_added: "2025-05-07"
11
+ author: community
12
+ tags: [langgraph, langchain, multi-agent, orchestration, deepagents, rag, tool-calling]
13
+ tools: [claude, cursor, gemini]
14
+ license: "MIT"
15
+ license_source: "https://github.com/pravin-python/antigravity-awesome-skills/blob/main/LICENSE"
16
+ ---
17
+
18
+
19
+ # Multi-Agent Architect & Updater Skill
20
+
21
+ ## Overview
22
+
23
+ This skill turns Claude into a Senior AI Multi-Agent Architect specialized in LangGraph, LangChain, and DeepAgents. It provides structured workflows for creating and updating production-grade multi-agent systems — including supervisor agents, planners, researchers, coders, and memory-backed autonomous pipelines. Use it whenever you need to design, build, debug, or scale any multi-agent AI system.
24
+
25
+ If this skill adapts material from an external GitHub repository, declare both:
26
+
27
+ - `source_repo: owner/repo`
28
+ - `source_type: official` or `source_type: community`
29
+
30
+ ## When to Use This Skill
31
+
32
+ - Use when you need to create a new agent or multi-agent workflow from scratch
33
+ - Use when working with LangGraph state graphs, nodes, edges, or conditional routing
34
+ - Use when the user asks about agent communication, memory systems, or tool-calling pipelines
35
+ - Use when debugging or optimizing an existing LangChain/LangGraph agent system
36
+ - Use when architecting supervisor, planner, research, coding, or validation agent roles
37
+ - Use when integrating DeepAgents with hierarchical planning and delegation
38
+
39
+ ## How It Works
40
+
41
+ ### Step 1: Understand the Goal
42
+
43
+ Before writing any code, clarify:
44
+ - What is the **business objective** this agent system must achieve?
45
+ - What **agent roles** are needed (supervisor, planner, researcher, coder, validator)?
46
+ - What **tools** does each agent require?
47
+ - What **memory** strategy is needed (Redis, Vector DB, LangChain Memory)?
48
+ - What **communication protocol** connects agents (shared state, message passing)?
49
+
50
+ ### Step 2: Define the State Schema
51
+
52
+ All agents share a typed state object passed through the graph:
53
+
54
+ ```python
55
+ from typing import TypedDict
56
+
57
+ class AgentState(TypedDict):
58
+ user_goal: str
59
+ tasks: list[str]
60
+ completed_tasks: list[str]
61
+ next_agent: str
62
+ context: dict
63
+ step_count: int # guards against infinite loops
64
+ error: str | None
65
+ ```
66
+
67
+ ### Step 3: Define Agent Nodes
68
+
69
+ Each agent is an **async function** that reads from state and returns an updated state:
70
+
71
+ ```python
72
+ import logging
73
+ from langchain_openai import ChatOpenAI
74
+
75
+ logger = logging.getLogger(__name__)
76
+
77
+ async def research_node(state: AgentState) -> AgentState:
78
+ logger.info("research_node: starting")
79
+ llm = ChatOpenAI(model="gpt-4o")
80
+ result = await llm.bind_tools(research_tools).ainvoke(state["user_goal"])
81
+ state["context"]["research"] = result.content
82
+ state["next_agent"] = "coder"
83
+ return state
84
+ ```
85
+
86
+ ### Step 4: Build the LangGraph
87
+
88
+ Wire nodes together with edges and conditional routing:
89
+
90
+ ```python
91
+ from langgraph.graph import StateGraph, END
92
+ from langgraph.prebuilt import ToolNode
93
+
94
+ def build_graph() -> StateGraph:
95
+ graph = StateGraph(AgentState)
96
+
97
+ graph.add_node("supervisor", supervisor_node)
98
+ graph.add_node("research", research_node)
99
+ graph.add_node("coder", coding_node)
100
+ graph.add_node("validator", validation_node)
101
+ graph.add_node("tools", ToolNode(all_tools))
102
+
103
+ graph.set_entry_point("supervisor")
104
+
105
+ graph.add_conditional_edges(
106
+ "supervisor",
107
+ route_next,
108
+ {"research": "research", "coder": "coder", "end": END}
109
+ )
110
+
111
+ graph.add_edge("research", "supervisor")
112
+ graph.add_edge("coder", "validator")
113
+ graph.add_edge("validator", "supervisor")
114
+
115
+ return graph.compile()
116
+
117
+ def route_next(state: AgentState) -> str:
118
+ if state["step_count"] > 20:
119
+ return "end"
120
+ return state["next_agent"]
121
+ ```
122
+
123
+ ### Step 5: Add Memory
124
+
125
+ ```python
126
+ from langchain_community.chat_message_histories import RedisChatMessageHistory
127
+
128
+ def get_memory(session_id: str):
129
+ return RedisChatMessageHistory(
130
+ session_id=session_id,
131
+ url=os.getenv("REDIS_URL"),
132
+ ttl=3600
133
+ )
134
+ ```
135
+
136
+ ### Step 6: Run the Graph
137
+
138
+ ```python
139
+ async def run(user_goal: str, session_id: str):
140
+ graph = build_graph()
141
+ initial_state = AgentState(
142
+ user_goal=user_goal,
143
+ tasks=[],
144
+ completed_tasks=[],
145
+ next_agent="supervisor",
146
+ context={},
147
+ step_count=0,
148
+ error=None,
149
+ )
150
+ return await graph.ainvoke(initial_state)
151
+ ```
152
+
153
+ ### Step 7: Expose via FastAPI (optional)
154
+
155
+ ```python
156
+ from fastapi import FastAPI
157
+ from pydantic import BaseModel
158
+
159
+ app = FastAPI()
160
+
161
+ class RunRequest(BaseModel):
162
+ goal: str
163
+ session_id: str
164
+
165
+ @app.post("/run")
166
+ async def run_agent(req: RunRequest):
167
+ result = await run(req.goal, req.session_id)
168
+ return {"result": result}
169
+ ```
170
+
171
+ ---
172
+
173
+ ## Updating an Existing Agent
174
+
175
+ When the user wants to update or debug an existing agent, structure the response as:
176
+
177
+ ```
178
+ ## Existing Issue
179
+ [Describe the current problem]
180
+
181
+ ## Root Cause
182
+ [Identify why it's happening in the architecture]
183
+
184
+ ## Proposed Update
185
+ [Outline the changes at architecture level]
186
+
187
+ ## Updated Code
188
+ [Generate only the changed modules]
189
+
190
+ ## Migration Notes
191
+ [What breaks, what's backward-compatible]
192
+
193
+ ## Performance Impact
194
+ [Latency / token / memory delta]
195
+ ```
196
+
197
+ ---
198
+
199
+ ## Standard Folder Structure
200
+
201
+ Always generate code in this layout:
202
+
203
+ ```
204
+ multi_agent_system/
205
+ ├── agents/ # One file per agent role
206
+ ├── tools/ # Tool definitions and wrappers
207
+ ├── memory/ # Redis, VectorDB, LangChain memory helpers
208
+ ├── prompts/ # Prompt templates (one per agent)
209
+ ├── workflows/ # High-level orchestration logic
210
+ ├── graphs/ # LangGraph state + compiled graph definitions
211
+ ├── api/ # FastAPI routes (optional)
212
+ ├── configs/ # Config loader — no secrets in code
213
+ ├── tests/ # Unit + integration tests per agent
214
+ └── main.py
215
+ ```
216
+
217
+ ---
218
+
219
+ ## Examples
220
+
221
+ ### Example 1: Research + Coding Multi-Agent Workflow
222
+
223
+ ```python
224
+ # agents/research_agent.py
225
+ async def research_node(state: AgentState) -> AgentState:
226
+ llm = ChatOpenAI(model="gpt-4o").bind_tools([web_search, rag_search])
227
+ response = await llm.ainvoke(
228
+ f"Research the following and return structured findings:\n{state['user_goal']}"
229
+ )
230
+ state["context"]["research"] = response.content
231
+ state["next_agent"] = "coder"
232
+ return state
233
+
234
+ # agents/coding_agent.py
235
+ async def coding_node(state: AgentState) -> AgentState:
236
+ llm = ChatOpenAI(model="gpt-4o").bind_tools([python_repl, github_tool])
237
+ response = await llm.ainvoke(
238
+ f"Given this research:\n{state['context']['research']}\n\nWrite production Python code."
239
+ )
240
+ state["context"]["code"] = response.content
241
+ state["next_agent"] = "validator"
242
+ return state
243
+ ```
244
+
245
+ ### Example 2: Supervisor with Dynamic Delegation
246
+
247
+ ```python
248
+ # agents/supervisor_agent.py
249
+ DELEGATION_PROMPT = """
250
+ You are a supervisor. Given the current state, decide the next agent.
251
+ Available agents: research, coder, validator, end.
252
+ Respond with ONLY the agent name.
253
+
254
+ Goal: {goal}
255
+ Completed: {completed}
256
+ Context keys available: {context}
257
+ """
258
+
259
+ async def supervisor_node(state: AgentState) -> AgentState:
260
+ state["step_count"] += 1
261
+ llm = ChatOpenAI(model="gpt-4o")
262
+ decision = await llm.ainvoke(
263
+ DELEGATION_PROMPT.format(
264
+ goal=state["user_goal"],
265
+ completed=state["completed_tasks"],
266
+ context=list(state["context"].keys()),
267
+ )
268
+ )
269
+ next_agent = decision.content.strip().lower()
270
+ # Validate against allowlist before setting
271
+ allowed = {"research", "coder", "validator", "end"}
272
+ state["next_agent"] = next_agent if next_agent in allowed else "end"
273
+ return state
274
+ ```
275
+
276
+ ### Example 3: DeepAgents Reflection Loop
277
+
278
+ ```python
279
+ async def reflection_node(state: AgentState) -> AgentState:
280
+ llm = ChatOpenAI(model="gpt-4o")
281
+ critique = await llm.ainvoke(
282
+ f"Evaluate this output critically:\n{state['context'].get('code', '')}\n"
283
+ "List any bugs, gaps, or improvements. Be concise."
284
+ )
285
+ state["context"]["critique"] = critique.content
286
+ state["next_agent"] = "coder" if "bug" in critique.content.lower() else "end"
287
+ return state
288
+ ```
289
+
290
+ ---
291
+
292
+ ## Best Practices
293
+
294
+ - ✅ One agent = one responsibility — never combine planning + coding + testing in one node
295
+ - ✅ Use `TypedDict` for all state schemas — enables type checking and graph validation
296
+ - ✅ Bind only the tools each agent needs — reduces hallucinated tool calls
297
+ - ✅ Always add a `step_count` guard to prevent infinite routing loops
298
+ - ✅ Use `async`/`await` throughout — LangGraph supports async natively
299
+ - ✅ Store all secrets in environment variables loaded via `os.getenv()`
300
+ - ✅ Set TTLs on all Redis keys scoped to `session_id`
301
+ - ✅ Log at every node entry and tool call for observability
302
+ - ✅ Validate supervisor routing output against an allowlist of agent names
303
+ - ❌ Don't hardcode API keys, model names, or Redis URLs
304
+ - ❌ Don't share tool lists across agents that don't need them
305
+ - ❌ Don't skip error handling — tool failures and empty LLM responses are common
306
+ - ❌ Don't trust unvalidated LLM routing decisions — always check against an allowlist
307
+
308
+ ---
309
+
310
+ ## Limitations
311
+
312
+ - This skill does not replace environment-specific testing, load testing, or security review before production deployment.
313
+ - Generated LangGraph code targets the current stable API — always verify method signatures against your installed version (`pip show langgraph`).
314
+ - Stop and ask for clarification if the agent's goal, tool permissions, or routing logic is ambiguous before generating a full architecture.
315
+ - DeepAgents integration patterns assume the library is installed and configured in the target environment.
316
+
317
+ ---
318
+
319
+ ## Security & Safety Notes
320
+
321
+ - Never expose API keys in generated code. All secrets must use environment variables:
322
+ ```python
323
+ OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") # ✅ correct
324
+ OPENAI_API_KEY = "sk-..." # ❌ never do this
325
+ ```
326
+ - Always validate and sanitize user inputs before injecting them into agent prompts — treat all user input as untrusted.
327
+ - Add a permission layer before allowing agents to execute shell commands or write to filesystems.
328
+ - If generating a Python REPL tool node, document that it must only run in a sandboxed, isolated environment.
329
+ <!-- security-allowlist: python_repl tool examples are for sandboxed execution environments only -->
330
+ - For production deployments, add rate-limit handling and exponential backoff on all LLM and external API calls.
331
+ - Scope all Redis session keys to `session_id` and set a TTL to prevent memory leaks across sessions.
332
+
333
+ ---
334
+
335
+ ## Common Pitfalls
336
+
337
+ - **Problem:** Agent loops indefinitely between supervisor and sub-agents
338
+ **Solution:** Add `step_count: int` to state; return `"end"` in `route_next()` when `step_count > N`
339
+
340
+ - **Problem:** Supervisor routes to a non-existent agent name
341
+ **Solution:** Validate the LLM's routing output against a hardcoded allowlist before setting `next_agent`
342
+
343
+ - **Problem:** Memory leaks across user sessions
344
+ **Solution:** Scope Redis keys to `session_id` and always set a TTL (`ttl=3600`)
345
+
346
+ - **Problem:** Tool results are ignored by the next agent
347
+ **Solution:** Always write tool output into `state["context"]` and confirm the next node reads it
348
+
349
+ - **Problem:** Agents share too many tools and hallucinate wrong tool calls
350
+ **Solution:** Use `.bind_tools([only_relevant_tools])` per agent instead of a global tool list
351
+
352
+ - **Problem:** Graph fails silently on API rate limits
353
+ **Solution:** Wrap LLM calls in retry logic with exponential backoff using `tenacity`
354
+
355
+ ---
356
+
357
+ ## Related Skills
358
+
359
+ - `@langchain-rag` - When you need retrieval-augmented generation pipelines specifically
360
+ - `@fastapi-backend` - When deploying agent systems as production REST APIs
361
+ - `@python-async` - When deepening async/await patterns used throughout agent nodes
@@ -2,7 +2,7 @@
2
2
  name: production-audit
3
3
  description: "Audit a shipped repo for production-readiness gaps across RLS, webhooks, secrets, grants, Stripe idempotency, mobile UX, and deployment health."
4
4
  category: security
5
- risk: safe
5
+ risk: critical
6
6
  source: community
7
7
  source_repo: commitshow/production-audit
8
8
  source_type: community
@@ -22,7 +22,7 @@ A skill that runs an external audit on a shipped repo's deployed state — live
22
22
 
23
23
  This is **complementary** to in-session security skills (`security-review`, OWASP-style, VibeSec, Trail of Bits). Those scan the editor buffer at write-time. This scans the deployed product after you commit. Different timing, different inputs, different findings. Run both for serious launches.
24
24
 
25
- The skill wraps the [commit.show](https://commit.show) audit engine via the public CLI (`npx commitshow audit . --json`). Stable JSON envelope (`schema_version: "1"`, additive-only). Writes a `.commitshow/audit.{md,json}` sidecar so future agent sessions can read prior state without re-running the engine.
25
+ The skill wraps the [commit.show](https://commit.show) audit engine via the public CLI (`npx commitshow@0.3.23 audit . --json`). Stable JSON envelope (`schema_version: "1"`, additive-only). Writes a `.commitshow/audit.{md,json}` sidecar so future agent sessions can read prior state without re-running the engine.
26
26
 
27
27
  ## When to Use This Skill
28
28
 
@@ -42,11 +42,11 @@ The skill wraps the [commit.show](https://commit.show) audit engine via the publ
42
42
 
43
43
  ### Step 1: Run the audit
44
44
 
45
- From the repo root. The CLI is pinned to a known-good range (an attacker-pushed `0.4.x` won't be picked up silently bumping the floor is a deliberate edit), the sidecar directory is created up-front, and stderr is split off so install/deprecation warnings can't corrupt the JSON envelope:
45
+ From the repo root. The CLI is pinned to an exact reviewed version so future npm releases are not selected silently. Because `npx` downloads and runs npm package code locally with the current user's permissions, run it only after the user explicitly approves this external execution and only in a repository where local files and environment variables are safe for that process to access. The sidecar directory is created up-front, and stderr is split off so install/deprecation warnings can't corrupt the JSON envelope:
46
46
 
47
47
  ```bash
48
48
  mkdir -p .commitshow
49
- npx commitshow@^0.3.23 audit . --json \
49
+ npx commitshow@0.3.23 audit . --json \
50
50
  > .commitshow/audit.json \
51
51
  2> .commitshow/audit.stderr.log
52
52
  ```
@@ -57,7 +57,7 @@ If the user pointed at a remote URL instead of `.`, swap `.` for the URL — kee
57
57
 
58
58
  ```bash
59
59
  mkdir -p .commitshow
60
- npx commitshow@^0.3.23 audit github.com/owner/repo --json \
60
+ npx commitshow@0.3.23 audit github.com/owner/repo --json \
61
61
  > .commitshow/audit.json \
62
62
  2> .commitshow/audit.stderr.log
63
63
  ```
@@ -111,7 +111,7 @@ After applying a fix, suggest re-running with `--refresh` (same canonical form a
111
111
 
112
112
  ```bash
113
113
  mkdir -p .commitshow
114
- npx commitshow@^0.3.23 audit . --json --refresh \
114
+ npx commitshow@0.3.23 audit . --json --refresh \
115
115
  > .commitshow/audit.json \
116
116
  2> .commitshow/audit.stderr.log
117
117
  ```
@@ -122,7 +122,7 @@ npx commitshow@^0.3.23 audit . --json --refresh \
122
122
 
123
123
  ```bash
124
124
  mkdir -p .commitshow
125
- npx commitshow@^0.3.23 audit . --json \
125
+ npx commitshow@0.3.23 audit . --json \
126
126
  > .commitshow/audit.json \
127
127
  2> .commitshow/audit.stderr.log
128
128
  ```
@@ -172,7 +172,8 @@ Find the file path in the bullet, read it, confirm the gap matches.
172
172
 
173
173
  ## Security & Safety Notes
174
174
 
175
- - The skill executes `npx commitshow@latest audit ...` which is a network call to a public API at `https://api.commit.show` (proxied to Supabase Edge Functions). No credentials are sent anonymous usage subject to per-IP / per-URL / global rate limits.
175
+ - The skill executes `npx commitshow@0.3.23 audit ...`, which downloads and runs that exact npm package version locally, then calls the public API at `https://api.commit.show` (proxied to Supabase Edge Functions). Do not replace the exact version with `latest` or a semver range during normal use.
176
+ - Treat the CLI as external code with local process privileges. It must not be run in repositories containing secrets or sensitive uncommitted files unless the user has explicitly accepted that risk. No credentials are intentionally sent to the API, but the local process can access files and environment variables available to the current user.
176
177
  - The CLI writes `.commitshow/audit.{md,json}` in the current working directory. These files are safe to commit (no secrets) but conventionally gitignored as transient artifacts.
177
178
  - The audit engine **only reads** public GitHub signals. It does not modify the user's repo or push commits.
178
179
  - All per-finding fix proposals must be shown as diffs and approved by the user before any edit. Never apply without explicit confirmation.
@@ -0,0 +1,213 @@
1
+ ---
2
+ name: rich-elicitation
3
+ description: "Asks clarifying questions in multiple rounds before starting ambiguous tasks. Fires when 2+ task dimensions each have 3+ viable answers."
4
+ category: productivity
5
+ risk: none
6
+ source: self
7
+ source_type: self
8
+ date_added: "2026-05-07"
9
+ author: abubakar
10
+ tags: [elicitation, clarifying-questions, ambiguity, multi-round, prompt-engineering]
11
+ tools: [antigravity]
12
+ ---
13
+
14
+ # Rich Elicitation Skill
15
+
16
+ ## Overview
17
+
18
+ This skill governs how Antigravity resolves task ambiguity before starting work. When a user's request has too many unanswered dimensions — each with several reasonable answers — Antigravity asks targeted clarifying questions across multiple rounds rather than silently picking defaults.
19
+
20
+ The goal is a correct first draft, not a generic answer that requires three revision cycles. Rounds are capped at three; anything still unclear after Round 3 gets a stated assumption and Antigravity proceeds.
21
+
22
+ ---
23
+
24
+ ## When to Use This Skill
25
+
26
+ - Use when a request has 2 or more dimensions that are ambiguous and each has 3+ viable options
27
+ - Use when the user's likely intent is unclear across scope, audience, tone, format, or strategy
28
+ - Use when an early answer would meaningfully change the structure or direction of the output
29
+ - Use when working on writing, planning, design, recommendations, or creative tasks with open-ended scope
30
+ - Use when a Round 1 answer unlocks a new set of meaningful choices that need resolving before proceeding
31
+
32
+ Do **not** trigger for:
33
+ - Simple factual lookups or math
34
+ - Clearly scoped requests with a single obvious interpretation
35
+ - Minor unknowns where a safe default exists
36
+
37
+ ---
38
+
39
+ ## How It Works
40
+
41
+ ### Step 1: Run the Trigger Checklist
42
+
43
+ Before starting any task, mentally check how many of these apply:
44
+
45
+ | Signal | Action |
46
+ |---|---|
47
+ | Multiple valid output formats | Ask about format |
48
+ | Audience is unknown | Ask about audience |
49
+ | Tone is ambiguous | Ask about tone |
50
+ | Scope could be narrow or broad | Ask about depth/length |
51
+ | Technical vs. simple treatment unclear | Ask about technical level |
52
+ | Multiple strategic directions exist | Ask which direction |
53
+ | User's constraints are unknown | Ask about constraints |
54
+
55
+ **If 2+ rows apply → trigger this skill.**
56
+
57
+ ### Step 2: Ask Round 1 Questions
58
+
59
+ Ask up to 3 questions using `ask_user_input_v0`. Group related questions in a single call. Lead with 1–2 sentences explaining why you're asking. Mark one option per question as **(Recommended)**.
60
+
61
+ ### Step 3: Re-run the Checklist
62
+
63
+ After Round 1 answers, re-run the checklist on what's still unresolved. If 2+ rows still apply, run Round 2. Otherwise, proceed.
64
+
65
+ ### Step 4: Run Follow-up Rounds (if needed)
66
+
67
+ | Round | Purpose | Max questions |
68
+ |---|---|---|
69
+ | Round 1 | Blocking questions — shape the entire output | 3 |
70
+ | Round 2 | Follow-ups unlocked by Round 1 answers | 3 |
71
+ | Round 3 | Final details — use sparingly | 2 |
72
+
73
+ Transition between rounds naturally. Don't announce "Round 2" mechanically. Use phrasing like:
74
+ > "Got it — that helps a lot. One more thing before I start:"
75
+
76
+ ### Step 5: Proceed
77
+
78
+ After Round 3 (or earlier, if enough context exists), state any remaining assumptions briefly and begin the task.
79
+
80
+ ---
81
+
82
+ ## Examples
83
+
84
+ ### Example 1: Multi-Round — Investor Pitch Deck
85
+
86
+ **User:** "Help me create a presentation for my project."
87
+
88
+ **Round 1** — Three blocking questions:
89
+
90
+ ```
91
+ Framing: "This could go a lot of different ways. Quick questions before I start:"
92
+
93
+ Q1: Who is the audience?
94
+ - Internal team / colleagues
95
+ - External clients or partners
96
+ - Investors or stakeholders (Recommended)
97
+ - General / public audience
98
+
99
+ Q2: What's the primary goal?
100
+ - Inform and update
101
+ - Persuade and drive a decision (Recommended)
102
+ - Teach or explain a concept
103
+ - Pitch and raise funding
104
+
105
+ Q3: How much content do you already have?
106
+ - Starting from scratch
107
+ - Rough notes or an outline (Recommended)
108
+ - Full draft — just needs polish
109
+ - Previous version to update
110
+ ```
111
+
112
+ *User answers: Investors | Pitch | Rough notes → Round 2 is warranted.*
113
+
114
+ **Round 2** — Unlocked by Round 1:
115
+
116
+ ```
117
+ Framing: "Perfect — investor pitch it is. A couple more things:"
118
+
119
+ Q1: What stage is this raise?
120
+ - Pre-seed / idea stage
121
+ - Seed round (Recommended)
122
+ - Series A or later
123
+ - Strategic partnership / grant
124
+
125
+ Q2: How long should the deck be?
126
+ - Short and punchy, 8–10 slides (Recommended)
127
+ - Standard, 12–15 slides
128
+ - Comprehensive, 20+ slides
129
+ ```
130
+
131
+ *No Round 3 needed — proceed.*
132
+
133
+ ---
134
+
135
+ ### Example 2: Single-Round — Business Email
136
+
137
+ **User:** "Write a business email to follow up on a proposal."
138
+
139
+ **Round 1 only:**
140
+
141
+ ```
142
+ Framing: "Two quick questions to nail the tone:"
143
+
144
+ Q1: What tone should this email strike?
145
+ - Formal and professional (Recommended)
146
+ - Friendly but direct
147
+ - Urgent and firm
148
+ - Warm and relationship-focused
149
+
150
+ Q2: What's the primary goal?
151
+ - Request action / get a response (Recommended)
152
+ - Share information only
153
+ - Repair or maintain the relationship
154
+ - Negotiate or push back
155
+ ```
156
+
157
+ *Enough context. No Round 2 needed.*
158
+
159
+ ---
160
+
161
+ ## Best Practices
162
+
163
+ - ✅ Always mark one option per question as **(Recommended)**
164
+ - ✅ Lead with a 1–2 sentence framing before the question widget
165
+ - ✅ Group up to 3 related questions in a single `ask_user_input_v0` call
166
+ - ✅ Re-evaluate after each round — stop as soon as you have enough context
167
+ - ✅ Use `single_select` for mutually exclusive choices, `multi_select` when combinations are valid
168
+ - ✅ State remaining assumptions explicitly before proceeding after Round 3
169
+ - ❌ Don't ask 6 separate question calls when 2 grouped calls would do
170
+ - ❌ Don't mark two options as Recommended in the same question
171
+ - ❌ Don't use vague option labels like "Other" or "It depends" without elaborating
172
+ - ❌ Don't mechanically label rounds in the UI ("Round 1:", "Round 2:")
173
+ - ❌ Don't run a follow-up round for minor details that have safe defaults
174
+
175
+ ---
176
+
177
+ ## Limitations
178
+
179
+ - This skill does not validate whether the user's answers are internally consistent — it trusts them as given.
180
+ - Round structure is a guideline, not a rigid contract; judgment is required on when to stop.
181
+ - Works best with `ask_user_input_v0` — in environments without that tool, question quality may degrade.
182
+ - Does not handle tasks where ambiguity can only be resolved by fetching external information (e.g., reading a file the user hasn't uploaded).
183
+ - Not designed for real-time or high-latency-sensitive workflows where any question overhead is unacceptable.
184
+
185
+ ---
186
+
187
+ ## Security & Safety Notes
188
+
189
+ This skill is pure reasoning — it issues no shell commands, reads no files, makes no network requests, and mutates no state. Risk level is `none`.
190
+
191
+ No `npm run security:docs` review is required for this skill.
192
+
193
+ ---
194
+
195
+ ## Common Pitfalls
196
+
197
+ - **Problem:** Antigravity asks one good question, gets an answer, then proceeds without checking if new unknowns emerged.
198
+ **Solution:** Always re-run the trigger checklist mentally after each round before deciding to proceed.
199
+
200
+ - **Problem:** All options in a question look equally valid so Antigravity marks none as Recommended.
201
+ **Solution:** Pick the option that works for most users or is lowest-risk and mark it. "No preference" is rarely true.
202
+
203
+ - **Problem:** Antigravity runs 4+ rounds trying to eliminate every unknown.
204
+ **Solution:** Hard cap at 3 rounds. After Round 3, state assumptions and proceed.
205
+
206
+ - **Problem:** Round 2 questions cover the same category as Round 1 (e.g., tone again).
207
+ **Solution:** Each round should unlock new dimensions, not re-ask resolved ones.
208
+
209
+ ---
210
+
211
+ ## Related Skills
212
+
213
+ - `@ask-user-questions` — Single-round elicitation with recommended options. Use that skill for simpler tasks; use rich-elicitation when answers to early questions open up new meaningful choices.
@@ -0,0 +1,26 @@
1
+ # Authoring Path
2
+
3
+ Use this path to create or update `SKILL.md` and supporting files.
4
+
5
+ ## SKILL.md checklist
6
+
7
+ - Valid YAML frontmatter with `name` and trigger-rich `description`.
8
+ - Clear title.
9
+ - Short purpose statement.
10
+ - Task routing or ordered steps.
11
+ - Output contract.
12
+ - `## When to Use`.
13
+ - `## Limitations`.
14
+
15
+ ## Supporting files
16
+
17
+ Add `references/` when optional detail would make SKILL.md too long or distract from routing.
18
+ Add `scripts/` only when deterministic execution is useful and safer than rewriting code each time.
19
+ Add `assets/` only for files used directly in produced outputs.
20
+
21
+ ## Editing rules
22
+
23
+ - Prefer small, scoped updates over rewrites.
24
+ - Preserve existing naming conventions.
25
+ - Keep examples copy-pasteable when commands are included.
26
+ - Avoid unsafe install, credential, or destructive command guidance unless prerequisites and warnings are explicit.