claude-master-toolkit 0.1.3 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,350 @@
---
name: judgment-day
description: >
  Parallel adversarial review protocol that launches two independent blind judge sub-agents
  simultaneously to review the same target, synthesizes their findings, applies fixes,
  and re-judges until both pass, asking the user whether to continue after 2 iterations.
  Trigger: When user says "judgment day", "judgment-day", "review adversarial", "dual review",
  "doble review", "juzgar", "que lo juzguen".
license: Apache-2.0
metadata:
  author: gentleman-programming
  version: "1.4"
---

## When to Use

- User explicitly asks for "judgment day", "judgment-day", or equivalent trigger phrases
- After significant implementations, before merging
- When a high-confidence review of code, features, or architecture is needed
- When a single reviewer might miss edge cases or have blind spots
- When the cost of a production bug is higher than the cost of two review rounds

## Critical Patterns

### Pattern 0: Skill Resolution (BEFORE launching judges)

Follow the **Skill Resolver Protocol** (`_shared/skill-resolver.md`) before launching ANY sub-agent:

1. Obtain the skill registry: search engram (`mem_search(query: "skill-registry", project: "{project}")`) → fall back to `.atl/skill-registry.md` from the project root → skip if none
2. Identify the target files/scope — what code will the judges review?
3. Match relevant skills from the registry's **Compact Rules** by:
   - **Code context**: file extensions/paths of the target (e.g., `.go` → go-testing; `.tsx` → react-19, typescript)
   - **Task context**: "review code" → framework/language skills; "create PR" → branch-pr skill
4. Build a `## Project Standards (auto-resolved)` block with the matching compact rules
5. Inject this block into BOTH Judge prompts AND the Fix Agent prompt (identical for all)

This ensures judges review against project-specific standards, not just generic best practices.

**If no registry exists**: warn the user ("No skill registry found — judges will review without project-specific standards. Run `skill-registry` to fix this.") and proceed with a generic review only.
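
Step 3's matching can be sketched as a lookup from file extension to compact-rule names. A minimal sketch, assuming a hypothetical flattened registry shape (the real Compact Rules format lives in the registry itself):

```ts
// Hypothetical registry shape: extension → skill names. The real
// format is whatever .atl/skill-registry.md defines.
const registry: Record<string, string[]> = {
  ".go": ["go-testing"],
  ".tsx": ["react-19", "typescript"],
  ".ts": ["typescript"],
};

// Collect the compact rules that apply to the review targets.
function matchSkills(targetFiles: string[]): string[] {
  const matched = new Set<string>();
  for (const file of targetFiles) {
    const ext = file.slice(file.lastIndexOf("."));
    for (const skill of registry[ext] ?? []) matched.add(skill);
  }
  return [...matched];
}

// Build the block injected into both Judge prompts and the Fix Agent prompt.
function buildStandardsBlock(skills: string[]): string {
  if (skills.length === 0) return "";
  return "## Project Standards (auto-resolved)\n" + skills.map((s) => `- ${s}`).join("\n");
}
```

The empty-result case maps to the "no registry" warning path: an empty block means the judges get no Project Standards section at all.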

### Pattern 1: Parallel Blind Review

- Launch **TWO** sub-agents via `delegate` (async, parallel — never sequential)
- Each agent receives the **same target** but works **independently**
- **Neither agent knows about the other** — no cross-contamination
- Both use identical review criteria but may find different issues
- NEVER do the review yourself as the orchestrator — your job is coordination only

### Pattern 2: Verdict Synthesis

The **orchestrator** (NOT a sub-agent) compares results after both `delegation_read` calls return:

```
Confirmed     → found by BOTH agents → high confidence, fix immediately
Suspect A     → found ONLY by Judge A → needs triage
Suspect B     → found ONLY by Judge B → needs triage
Contradiction → agents DISAGREE on the same thing → flag for manual decision
```

Present findings as a structured verdict table (see Output Format).

### Pattern 3: Warning Classification

Judges MUST classify every WARNING into one of two sub-types:

```
WARNING (real)        → Causes a bug, data loss, security hole, or incorrect behavior
                        in a realistic production scenario. Fix required.
WARNING (theoretical) → Requires a contrived scenario, corrupted input, or conditions
                        that cannot arise through normal usage. Report but do NOT block.
```

**How to classify**: ask "Can a normal user, using the tool as intended, trigger this?" If YES → real. If it requires a malicious manifest, renamed home dir, two clicks in <1ms, or Windows volume root edge case → theoretical.

**Theoretical warnings are reported as INFO** in the verdict table. They are NOT fixed, do NOT trigger re-judgment, and do NOT count toward the convergence threshold. The orchestrator includes them in the final report for awareness.
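
The synthesis buckets can be sketched as a set comparison over the two finding lists. This assumes findings can be keyed by an exact string; real triage is semantic, and contradiction detection (two judges disagreeing about the same code) is omitted here because it cannot be reduced to string equality:

```ts
type Verdict = "Confirmed" | "Suspect A" | "Suspect B";

// Bucket findings by who reported them. Identity here is the finding key
// (e.g. "auth.go:42 missing nil check"), an illustrative simplification.
function synthesize(judgeA: string[], judgeB: string[]): Map<string, Verdict> {
  const verdict = new Map<string, Verdict>();
  for (const f of judgeA) verdict.set(f, judgeB.includes(f) ? "Confirmed" : "Suspect A");
  for (const f of judgeB) if (!verdict.has(f)) verdict.set(f, "Suspect B");
  return verdict;
}
```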

### Pattern 4: Fix and Re-judge

1. If **confirmed CRITICALs or real WARNINGs** exist → delegate a **Fix Agent** (separate delegation)
2. After Fix Agent completes → re-launch **both judges in parallel** (same blind protocol, fresh delegates)
3. **After 2 fix iterations**, if issues remain → present findings to user and ASK: "¿Querés que siga iterando? / Should I continue iterating?" If YES → continue fix+judge cycle. If NO → JUDGMENT: ESCALATED.
4. If both judges return clean → JUDGMENT: APPROVED ✅

### Pattern 5: Convergence Threshold

**Round 1**: Present the verdict table to the user. ASK: "These are the confirmed issues. Want me to fix them?" Only fix after user confirms. Then re-judge with full scope.

**Round 2+**: Only re-judge if there are **confirmed CRITICALs**. For anything else:
- **Real WARNINGs** (confirmed): Fix inline, do NOT re-launch judges. Report as "fixed without re-judge" in the verdict.
- **Theoretical WARNINGs**: Report as INFO. Do NOT fix, do NOT re-judge.
- **SUGGESTIONs**: Fix inline if trivial (dead code, style). Do NOT re-judge.

**APPROVED criteria after Round 1**: 0 confirmed CRITICALs + 0 confirmed real WARNINGs = APPROVED. Theoretical warnings and suggestions may remain.

This prevents the diminishing-returns cycle where each fix round introduces minor artifacts that trigger another round of nit-picking.
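
The APPROVED criteria reduce to a small predicate over confirmed findings. A sketch, with the finding record shape assumed for illustration:

```ts
type Severity = "CRITICAL" | "WARNING (real)" | "WARNING (theoretical)" | "SUGGESTION";

interface Finding {
  severity: Severity;
  confirmed: boolean; // found by BOTH judges
}

// 0 confirmed CRITICALs + 0 confirmed real WARNINGs = APPROVED.
// Theoretical warnings and suggestions may remain.
function isApproved(findings: Finding[]): boolean {
  return !findings.some(
    (f) => f.confirmed && (f.severity === "CRITICAL" || f.severity === "WARNING (real)")
  );
}
```

Note that a suspect (single-judge) CRITICAL does not block approval under the stated criteria; it is triaged and escalated to the user instead.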

---

## Decision Tree

```
User asks for "judgment day"

├── Target is specific files/feature/component?
│   ├── YES → continue
│   └── NO → ask user to specify scope before proceeding

Resolve skills (Pattern 0): read registry → match by code + task context → build Project Standards block

Launch Judge A + Judge B in parallel (delegate, async) — with Project Standards injected

Wait for both to complete (delegation_read both)

Synthesize verdict

├── No issues found?
│   └── JUDGMENT: APPROVED ✅ (stop here)

├── Issues found (confirmed, suspect, or contradictions)?
│   └── Present verdict table to user
│       ▼
│       ASK: "¿Arreglo los issues confirmados? / Fix confirmed issues?"
│       ▼
│       ├── User says YES → Delegate Fix Agent with confirmed issues list
│       ├── User says NO → JUDGMENT: ESCALATED (user chose not to fix)
│       └── User gives specific feedback → adjust fix list accordingly
│       ▼
│       Wait for Fix Agent to complete
│       ▼
│       Re-launch Judge A + Judge B in parallel (Round 2)
│       ▼
│       Synthesize verdict
│       │
│       ├── Clean → JUDGMENT: APPROVED ✅
│       │
│       └── Still issues → Delegate Fix Agent again (Round 3 / iteration 2)
│           ▼
│           Re-launch Judge A + Judge B in parallel (Round 3)
│           ▼
│           Synthesize verdict
│           │
│           ├── Clean → JUDGMENT: APPROVED ✅
│           └── Still issues → ASK USER: "Issues remain after 2 iterations. Continue iterating?"

├── User says YES → repeat fix + judge cycle (no limit)
└── User says NO → JUDGMENT: ESCALATED ⚠️ (report to user)
```

---

## Sub-Agent Prompt Templates

### Judge Prompt (use for BOTH Judge A and Judge B — identical)

```
You are an adversarial code reviewer. Your ONLY job is to find problems.

## Target
{describe target: files, feature, architecture, component}

{if compact rules were resolved in Pattern 0, inject the following block — otherwise OMIT this entire section}
## Project Standards (auto-resolved)
{paste matching compact rules blocks from the skill registry}

## Review Criteria
- Correctness: Does the code do what it claims? Are there logical errors?
- Edge cases: What inputs or states aren't handled?
- Error handling: Are errors caught, propagated, and logged properly?
- Performance: Any N+1 queries, inefficient loops, unnecessary allocations?
- Security: Any injection risks, exposed secrets, improper auth checks?
- Naming & conventions: Does it follow the project's established patterns AND the Project Standards above?
{if user provided custom criteria, add here}

## Return Format
Return a structured list of findings ONLY. No praise, no approval.

Each finding:
- Severity: CRITICAL | WARNING (real) | WARNING (theoretical) | SUGGESTION
- File: path/to/file.ext (line N if applicable)
- Description: What is wrong and why it matters
- Suggested fix: one-line description of the fix (not code, just intent)

**WARNING classification rule**: Ask "Can a normal user, using the tool as intended, trigger this?"
- YES → `WARNING (real)` — e.g., silent error on disk full, data corruption on normal input
- NO → `WARNING (theoretical)` — e.g., requires malicious manifest, renamed home dir, race condition in <1ms, OS-specific edge case that doesn't apply to the project's target platforms

Always include at the end: **Skill Resolution**: {injected|fallback-registry|fallback-path|none} — {details}

If you find NO issues, return:
VERDICT: CLEAN — No issues found.

## Instructions
Be thorough and adversarial. Assume the code has bugs until proven otherwise.
Your job is to find problems, NOT to approve. Do not summarize. Do not praise.
```

### Fix Agent Prompt

```
You are a surgical fix agent. You apply ONLY the confirmed issues listed below.

## Confirmed Issues to Fix
{paste the confirmed findings table from the verdict synthesis}

{if compact rules were resolved in Pattern 0, inject the following block — otherwise OMIT this entire section}
## Project Standards (auto-resolved)
{paste matching compact rules blocks from the skill registry}

## Context
- Original review criteria: {paste same criteria used for judges}
- Target: {same target description}

## Instructions
- Fix ONLY the confirmed issues listed above
- Do NOT refactor beyond what is strictly needed to fix each issue
- Do NOT change code that was not flagged
- **Scope rule**: If you fix a pattern in one file (e.g., add error logging for a silent discard), search for the SAME pattern in ALL other files touched by this change and fix them ALL. Inconsistent fixes across files are the #1 cause of unnecessary re-judge rounds.
- After each fix, note: file changed, line changed, what was done

Return a summary:
## Fixes Applied
- [file:line] — {what was fixed}

**Skill Resolution**: {injected|fallback-registry|fallback-path|none} — {details}
```

---

## Output Format

```markdown
## Judgment Day — {target}

### Round {N} — Verdict

| Finding | Judge A | Judge B | Severity | Status |
|---------|---------|---------|----------|--------|
| Missing null check in auth.go:42 | ✅ | ✅ | CRITICAL | Confirmed |
| Race condition in worker.go:88 | ✅ | ❌ | WARNING (real) | Suspect (A only) |
| Windows volume root edge case | ❌ | ✅ | WARNING (theoretical) | INFO — reported |
| Naming mismatch in handler.go:15 | ❌ | ✅ | SUGGESTION | Suspect (B only) |
| Error swallowed in db.go:201 | ✅ | ✅ | WARNING (real) | Confirmed |

**Confirmed issues**: 1 CRITICAL, 1 WARNING (real)
**Suspect issues**: 1 WARNING, 1 SUGGESTION
**Contradictions**: none

### Fixes Applied (Round {N})
- `auth.go:42` — Added nil check before dereferencing user pointer
- `db.go:201` — Propagated error instead of silently returning nil

### Round {N+1} — Re-judgment
- Judge A: PASS ✅ — No issues found
- Judge B: PASS ✅ — No issues found

---

### JUDGMENT: APPROVED ✅
Both judges pass clean. The target is cleared for merge.
```

### Escalation Format (user chose to stop)

```markdown
## Judgment Day — {target}

### JUDGMENT: ESCALATED ⚠️

User chose to stop after {N} fix iterations. Issues remain.
Manual review required before proceeding.

### Remaining Issues
| Finding | Judge A | Judge B | Severity |
|---------|---------|---------|----------|
| {description} | ✅ | ✅ | CRITICAL |

### History
- Round 1: {N} confirmed issues found
- Fix 1: applied {list}
- Round 2: {N} issues remain
- Fix 2: applied {list}
- Round 3: {N} issues remain → escalated

Recommend: human review of the remaining issues above before re-running judgment day.
```

---

## Skill Resolution Feedback

After every delegation that returns a result, check the `**Skill Resolution**` field in each judge/fix-agent response:
- `injected` → skills were passed correctly ✅
- `fallback-registry`, `fallback-path`, or `none` → the skill cache was lost (likely compaction). Re-read the registry immediately and inject compact rules in all subsequent delegations.

This is a self-correction mechanism. Do NOT ignore fallback reports.

---

## Language

- **Spanish input → Rioplatense**: "Juicio iniciado", "Los jueces están trabajando en paralelo...", "Los jueces coinciden", "Juicio terminado — Aprobado", "Escalado — necesita revisión humana"
- **English input**: "Judgment initiated", "Both judges are working in parallel...", "Both judges agree", "Judgment complete — Approved", "Escalated — requires human review"

---

## Blocking Rules (MANDATORY — override all other instructions)

These rules cannot be skipped, overridden, or deprioritized under any circumstances:

1. **MUST NOT** declare `JUDGMENT: APPROVED` until: Round 1 judges return CLEAN, OR Round 2 judges confirm 0 CRITICALs + 0 confirmed real WARNINGs (theoretical warnings and suggestions may remain)
2. **MUST NOT** run `git push`, `git commit`, or any code-modifying action after fixes until re-judgment completes
3. **MUST NOT** save a session summary or tell the user "done" until every JD reaches a terminal state (APPROVED or ESCALATED)
4. **After the Fix Agent returns**, your IMMEDIATE next action is re-launching judges in parallel for re-judgment. Do NOT push or commit before re-judgment completes.
5. **When running multiple JDs in parallel**, each JD is independent. One JD completing does NOT allow skipping rounds on another.

---

## Self-Check (before ANY terminal action)

Before pushing, committing, summarizing, or telling the user "done":

1. List every active JD target
2. For each: is it in state APPROVED or ESCALATED?
3. If ANY JD had fixes applied, did Round 2 run?
4. If Round 2 found issues, did you ASK the user whether to continue? Did you respect their answer?

**If ANY answer is "no"** → you skipped a step. Go back and complete it before proceeding.

---

## Rules

- The **orchestrator NEVER reviews code itself** — it only launches judges, reads results, and synthesizes
- Judges MUST be launched via `delegate` (async) so they run in **parallel**
- The **Fix Agent is a separate delegation** — never use one of the judges as the fixer
- If the user provides **custom review criteria**, include them in BOTH judge prompts (identical)
- If the target scope is **unclear**, stop and ask before launching — partial reviews are useless
- **After 2 fix iterations**, ASK the user before continuing. Never escalate automatically — the user decides when to stop.
- Always wait for BOTH judges to complete before synthesizing — never accept a partial verdict
- Suspect findings (only one judge) are reported but NOT automatically fixed — triage and escalate to the user if needed

---

## Commands

```bash
# No CLI commands — this is a pure orchestration protocol.
# Execution happens via delegate() and delegation_read() tool calls.
```
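
Detecting a fallback report is a single pattern match on the response text, assuming the `**Skill Resolution**: {status} — {details}` format defined in the prompt templates above:

```ts
// Extract the status token from a line like:
// **Skill Resolution**: injected — {details}
function skillResolutionStatus(response: string): string | null {
  const m = response.match(/\*\*Skill Resolution\*\*:\s*([a-z-]+)/);
  return m ? m[1] : null;
}

// Anything other than "injected" means the skill cache was lost
// and the registry must be re-read before the next delegation.
function needsRegistryReload(response: string): boolean {
  const status = skillResolutionStatus(response);
  return status !== null && status !== "injected";
}
```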
@@ -0,0 +1,26 @@
---
name: judgment-day-minimal
description: >
  Parallel adversarial review protocol using minimal structured language.
  Two blind judges review the same target in parallel and return concise structured findings
  with all critical context; a separate agent applies fixes, and the target is re-judged iteratively.
  Optimized for token efficiency while retaining full precision.
trigger: ["judgment day", "judgment-day", "review adversarial", "dual review",
  "doble review", "juzgar", "que lo juzguen"]
license: Apache-2.0
metadata:
  author: gentleman-programming
  version: "1.0-minimal"
---

## Key Differences vs Full JD
- Uses **structured minimal output** instead of verbose text
- Keeps all essential context (file, line, severity, description, fix)
- No free-form prose; avoids unnecessary tokens
- Suitable for Claude or other LLMs where token cost matters
- Compatible with original JD orchestration (delegate + fix agent + re-judge)

---

## Judge Prompt (Minimal, Token-Efficient)

@@ -0,0 +1,124 @@
---
name: repo-layer-master
description: >
  Automates the creation of repository layers using a simplified singleton pattern.
  No dependency injection, no constructors. Direct access to the DB (e.g., Prisma) and
  an exported instance for immediate usage. Triggered when user says "repo-layer",
  "repository", "create repo", "export instance".
  ALSO validates existing repositories to ensure they follow the singleton pattern.
  Triggered when user says "repo-layer", "validate repo", "check repository", "audit repo".
license: Apache-2.0
metadata:
  author: gentleman-programming
  version: "1.1"
---

## When to Use

- User explicitly asks for repository layer scaffolding
- Creating new services or modules that need database access
- Updating repository logic or methods consistently across the codebase
- Ensuring singleton repository instances are exported for shared use

## Critical Patterns

### Pattern 0: Skill Resolution (BEFORE instantiating)

1. Check the skill registry (`mem_search(query: "skill-registry", project: "{project}")`)
2. Match by language/framework:
   - TypeScript / Node.js → class-based repositories with singleton export
   - Go → struct + interface + constructor pattern
   - Python → class + module-level instance
3. Inject project-specific DB connection, entity models, and context if available
4. Warn if no registry → fall back to default repository conventions

---

### Pattern 1: Repository Class Instantiation

- Define **a single repository class per model/entity**
- **No constructor, no dependency injection**: the class accesses the shared DB client (e.g., Prisma) directly, per the Validation Rules below
- Export a singleton instance so consumers can import it and use it immediately
- Example (TypeScript):

```ts
import { prisma } from "../db";

export class UserRepository {
  // Direct access to the shared DB client; no constructor, no DI
  async findById(id: string) {
    return prisma.user.findUnique({ where: { id } });
  }
}

// Exported singleton instance for immediate usage
export const userRepository = new UserRepository();
```

### Pattern 6: Repository Audit (VALIDATION MODE)

When user provides existing repository code:

1. Analyze the repository structure
2. Validate compliance with required patterns
3. Report violations (NO fixing unless explicitly requested)

### Validation Rules (STRICT)

The repository MUST:

- ✅ Export a singleton instance: `export const xRepository = new XRepository();`
- ✅ NOT use a constructor
- ✅ NOT use `@Injectable()`
- ✅ Use direct prisma/global DB access
- ✅ Keep methods focused on data access only
- ✅ Use consistent naming (`findById`, `findMany`, etc.)

### Violations

Classify findings:

- CRITICAL:
  - Missing singleton export
  - Using `@Injectable()`
  - Using constructor injection
- WARNING:
  - Inconsistent naming
  - Mixed business logic inside the repo
  - Poor query structure
- SUGGESTION:
  - Optimization opportunities

## Repository Audit — {file}

### Compliance

| Rule | Status |
|------|--------|
| Singleton export | ✅ |
| No constructor | ❌ |
| No Injectable | ✅ |
| Direct DB usage | ✅ |

### Issues

| Severity | Description |
|----------|-------------|
| CRITICAL | Repository uses constructor injection |
| WARNING | Method naming inconsistent |
| SUGGESTION | Could optimize query with select |

### Verdict

- COMPLIANT ✅
- NON-COMPLIANT ❌
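
The CRITICAL checks can be approximated with plain-text scans of the repository source. A rough sketch using regex heuristics (a real audit would parse the AST rather than pattern-match text):

```ts
// Heuristic checks for the CRITICAL rules above.
function auditRepository(source: string): string[] {
  const violations: string[] = [];
  // Rule: must export a singleton instance like
  // `export const xRepository = new XRepository()`
  if (!/export const \w+Repository = new \w+Repository\(\)/.test(source)) {
    violations.push("Missing singleton export");
  }
  if (/@Injectable\(\)/.test(source)) violations.push("Using @Injectable()");
  // A constructor with at least one parameter implies constructor injection.
  if (/constructor\s*\(\s*[^)\s]/.test(source)) violations.push("Using constructor injection");
  return violations;
}
```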
@@ -0,0 +1,135 @@
---
name: workctl-task-master
description: >
  Automates task management using the workctl CLI. Handles creation, search, updates,
  bulk operations, and workflow orchestration across ClickUp/Jira-style task systems.
  Converts natural language requests into safe, structured CLI commands using JSON mode.
  Triggered when user mentions: "task", "ticket", "issue", "workctl", "update task",
  "create task", "bulk update", "search tasks".
license: Apache-2.0
metadata:
  author: agent-systems
  version: "1.0"
---

# workctl-task-master Skill

## When to Use

- User requests task creation, updates, deletion, or movement
- User asks to search or filter tasks across the workspace
- User wants bulk operations on tasks (status changes, priority updates)
- User is building automation around ClickUp/Jira-style workflows
- User explicitly mentions `workctl` or CLI task orchestration

---

## Critical Gotchas (learned the hard way)

These are NOT documented in `--help` and will cost you 5+ failed commands if you don't know them:

1. **`workctl task view` does NOT return the list ID.** The JSON output only has `id, name, description, status, priority, assignees, url, createdAt, updatedAt`. There is no `list`, `listId`, or `parent` field. If you need the list ID for a known task, fall through to the ClickUp REST API (see Pattern 6).

2. **`workctl task search` has NO `--query`, `--name`, or text-search flag.** It only filters by `--status`, `--assignee`, `--include-closed`, and date ranges. You CANNOT search for a task by name or custom ID through `task search`. Do not try `--query "DEV-672"` — it will fail with `Nonexistent flag: --query`.

3. **`workctl task create` ALWAYS requires `--list <LIST_ID>`, even when creating a subtask with `--parent`.** Passing only `--parent` fails with `Missing required flag list`. You must supply the parent's list ID.

4. **Custom IDs vs internal IDs both work** in `task view <ID>` and as `--parent` values (e.g. `DEV-672` and `86ageqyxf` are interchangeable). But `task create --parent` needs the companion `--list`.

5. **Tables in markdown descriptions get mangled** by ClickUp into `[table-embed:...]` blocks on create. The content is preserved but rendering is lossy. For rich formatting, prefer `--description-file` with plain markdown and avoid complex tables if fidelity matters.

6. **Use `--description-file` for any multi-line / code-fence / special-char description.** Inline `--description` with backticks or newlines is fragile to shell quoting. Write to `/tmp/foo.md`, then pass `--description-file /tmp/foo.md`.

---

## Pattern 0: Skill Resolution (BEFORE executing any command)

1. Detect intent type:
   - Create → `workctl task create`
   - Read/Search → `workctl task search` or `task list`
   - Update → `workctl task update`
   - Bulk ops → `workctl task bulk`
   - Move → `workctl task move`
   - Inspect → `workctl task view`

2. Always enforce:
   - Use `--format json`
   - Prefer `task search` over `task list` unless the list ID is known
   - Use `task view` before destructive updates when the ID is uncertain

3. Validate required parameters:
   - create → requires `--list` (even for subtasks with `--parent`)
   - list → requires `--list`
   - move → requires `--to-list`
   - bulk → requires at least one `--set-*`

---

## Pattern 1: Task Creation

Convert natural language into a structured CLI create command.

```bash
workctl task create "Fix login bug" \
  --list <LIST_ID> \
  --priority high \
  --description-file /tmp/desc.md \
  --format json
```

### Creating a subtask (common flow)

To create a subtask, you need BOTH `--parent` and `--list`. Since `task view` does not return the list ID, use the ClickUp REST API fallback:

```bash
# 1. Get the parent's list ID via the ClickUp API (NOT workctl)
API_KEY=$(python3 -c "import json; print(json.load(open('$HOME/.config/workctl/config.json'))['clickup']['apiKey'])")
LIST_ID=$(curl -s "https://api.clickup.com/api/v2/task/<PARENT_ID>" \
  -H "Authorization: $API_KEY" \
  | python3 -c "import json,sys; print(json.load(sys.stdin)['list']['id'])")

# 2. Write the description to a file (safer than inline)
cat > /tmp/subtask.md << 'EOF'
## Objetivo
...markdown body with acceptance criteria...
EOF

# 3. Create the subtask
workctl task create "Subtask title" \
  --list "$LIST_ID" \
  --parent <PARENT_ID> \
  --priority high \
  --description-file /tmp/subtask.md \
  --format json
```

The parent ID accepts both forms: `DEV-672` (custom) or `86ageqyxf` (internal).

---

## Pattern 6: Resolving the list ID from a task (fallback via ClickUp REST API)

When `workctl task view` doesn't give you enough context (list ID, folder, space, custom fields, parent relationships, due dates, tags), drop down to the ClickUp REST API. The workctl config at `~/.config/workctl/config.json` holds the API key and team ID:

```bash
cat ~/.config/workctl/config.json
# { "provider": "clickup", "clickup": { "apiKey": "pk_...", "teamId": "..." } }

curl -s "https://api.clickup.com/api/v2/task/<TASK_ID>" \
  -H "Authorization: <apiKey>"
```

The REST response includes `list.id`, `list.name`, `folder.id`, `space.id`, `parent`, `top_level_parent`, `custom_fields`, `custom_id`, and more — everything `workctl task view` strips out.

Use this sparingly — prefer workctl commands for routine ops — but it is the canonical escape hatch when workctl's JSON is insufficient.

---

## Pattern 1 (original snippet, kept for reference)

```bash
workctl task create "Fix login bug" \
  --list <LIST_ID> \
  --priority high \
  --description "SSO login failure on production" \
  --format json
```
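
The required-parameter rules above can be checked before shelling out. A sketch: the flag names follow this doc, but the validator shape itself is hypothetical:

```ts
// Required-flag rules per subcommand, mirroring the list above.
const rules: Record<string, (flags: Record<string, string>) => string | null> = {
  create: (f) => ("list" in f ? null : "Missing required flag list"),
  list: (f) => ("list" in f ? null : "Missing required flag list"),
  move: (f) => ("to-list" in f ? null : "Missing required flag to-list"),
  bulk: (f) =>
    Object.keys(f).some((k) => k.startsWith("set-")) ? null : "Need at least one --set-* flag",
};

// Returns an error message, or null when the command may run.
function validate(cmd: string, flags: Record<string, string>): string | null {
  return rules[cmd]?.(flags) ?? null;
}
```

Catching `create --parent` without `--list` locally avoids the `Missing required flag list` round-trip described in Gotcha 3.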
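
Pulling the list ID out of the REST response is a plain JSON access. A sketch over an abbreviated payload; only the fields this doc names (`list.id`, `list.name`, `custom_id`) are assumed, and the sample values are hypothetical:

```ts
// The fields workctl's `task view` strips out but the REST API returns.
interface ClickUpTask {
  list: { id: string; name: string };
  custom_id?: string;
}

// Parse a raw REST response body and return the list ID
// needed for `workctl task create --list`.
function listIdFromTask(json: string): string {
  const task = JSON.parse(json) as ClickUpTask;
  return task.list.id;
}
```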
@@ -0,0 +1,24 @@
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash $HOME/.claude/hooks/session-start.sh"
          }
        ]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash $HOME/.claude/hooks/user-prompt-submit.sh"
          }
        ]
      }
    ]
  }
}