@yeongjaeyou/claude-code-config 0.8.0 → 0.9.0

@@ -2,7 +2,7 @@
  name: python-pro
  description: Write idiomatic Python code with advanced features like decorators, generators, and async/await. Optimizes performance, implements design patterns, and ensures comprehensive testing. Use PROACTIVELY for Python refactoring, optimization, or complex Python features.
  tools: Read, Write, Edit, Bash
- model: sonnet
+ model: opus
  ---
 
  You are a Python expert specializing in clean, performant, and idiomatic Python code.
@@ -1,7 +1,7 @@
  ---
  name: web-researcher
  description: Use this agent when you need to conduct comprehensive research on technical topics across multiple platforms (Reddit, GitHub, Stack Overflow, Hugging Face, arXiv, etc.) and generate a synthesized report. (project)
- model: sonnet
+ model: opus
  ---
 
  # Web Research Expert Agent
@@ -39,7 +39,8 @@ Detect keywords in the prompt to determine execution mode:
 
  ## Sub-Agent Output Schema
 
- Each platform search Task returns results in this structure:
+ **MANDATORY**: Each platform search Task **MUST** return results in this exact structure.
+ Sub-agents that fail to follow this schema will have their results rejected by the coordinator.
 
  ### Required Fields (Quick & Deep)
 
@@ -80,8 +81,9 @@ suggested_followups:
 
  | Platform | Purpose | Tool |
  |----------|---------|------|
- | **GitHub** | Code, issues, PRs | `gh` CLI |
- | **Hugging Face** | ML models, datasets, Spaces | `huggingface_hub` API |
+ | **Google** | General web search, recent discussions | WebSearch |
+ | **GitHub** | Code, issues, PRs | `/code-explorer` skill |
+ | **Hugging Face** | ML models, datasets, Spaces | `/code-explorer` skill |
  | **Reddit** | Community discussions, experiences | WebSearch |
  | **Stack Overflow** | Q&A, solutions | WebSearch |
  | **Context7** | Official library documentation | MCP |
@@ -121,6 +123,13 @@ Generate **3-5 query variations**:
  - How-to vs comparison vs best practices
  - Specific tool/framework names
 
+ ### 5. Dynamic Context Awareness
+
+ **Avoid hardcoded model names in queries.** Model versions change rapidly:
+ - Before searching for model comparisons, verify the current date
+ - Use generic terms like "latest", "current", "{year}" in queries
+ - If specific models are needed, search for "latest LLM models {year}" first to identify current versions
+
  ---
 
  ## Research Workflow
@@ -147,6 +156,12 @@ Parallel execution (Task tool, run_in_background: true):
  └── Task: arXiv + general web (if needed)
  ```
 
+ **Each Task prompt MUST include:**
+ ```
+ Format your response according to the Sub-Agent Output Schema.
+ Return findings as a YAML block with: platform, query_used, findings, sources, confidence.
+ ```
+
  Collect results → Proceed to Phase 3
 
  ---
@@ -164,7 +179,15 @@ Parallel execution (all platforms):
  └── Task: Academic Agent - arXiv (structured output)
  ```
 
- Each Task returns results following Sub-Agent Output Schema
+ **Each Task prompt MUST include:**
+ ```
+ Format your response according to the Sub-Agent Output Schema.
+ Return findings as a YAML block with ALL fields:
+ - platform, query_used, findings, sources, confidence (required)
+ - gaps, conflicts, suggested_followups (Deep Mode required)
+ ```
+
+ Each Task returns results following Sub-Agent Output Schema. **Reject non-compliant responses.**
 
  ---
 
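To make the schema enforcement concrete, here is a minimal Python sketch of the check a coordinator could run on each Task result before accepting it. The required-field lists come from the schema above; the function itself and the rejection mechanics are illustrative assumptions, not part of the agent:

```python
# Illustrative validator for the Sub-Agent Output Schema (not part of the package).
import yaml  # PyYAML

REQUIRED = {"platform", "query_used", "findings", "sources", "confidence"}
DEEP_REQUIRED = REQUIRED | {"gaps", "conflicts", "suggested_followups"}

def validate_task_result(raw: str, deep: bool = False) -> tuple[bool, list[str]]:
    """Return (accepted, missing_fields) for one sub-agent YAML response."""
    try:
        data = yaml.safe_load(raw)
    except yaml.YAMLError:
        return False, ["<unparseable YAML>"]
    if not isinstance(data, dict):
        return False, ["<not a mapping>"]
    required = DEEP_REQUIRED if deep else REQUIRED
    missing = sorted(required - data.keys())
    return (not missing, missing)
```

Results where `accepted` is false would be rejected and the missing fields echoed back in the re-prompt.
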
@@ -350,42 +373,26 @@ Handling:
 
  ## Platform-Specific Search Guides
 
- ### 1. GitHub Search (gh CLI)
+ ### 1. GitHub Search
 
- ```bash
- # Repository search
- gh search repos "object detection" --sort stars --limit 10
- gh search repos "gradio app" --language python --limit 5
-
- # Code search
- gh search code "Qwen2VL" --extension py
-
- # JSON output
- gh search repos "keyword" --limit 10 --json fullName,description,stargazersCount,url
- ```
+ Use `/code-explorer` skill for GitHub repository and code search.
 
- **Analysis order:**
- 1. README.md (usage)
- 2. Main entry point (app.py, main.py)
- 3. Dependencies (requirements.txt, pyproject.toml)
+ **Capabilities:**
+ - Repository search with quality filters (stars, language, date)
+ - Code search across repositories
+ - Long-tail keyword optimization
+ - Multi-query search patterns
 
  ---
 
  ### 2. Hugging Face Search
 
- ```python
- from huggingface_hub import HfApi
- api = HfApi()
+ Use `/code-explorer` skill for Hugging Face resources search.
 
- # Model search
- models = api.list_models(search="object detection", limit=10, sort="downloads")
-
- # Dataset search
- datasets = api.list_datasets(search="coco", limit=10, sort="downloads")
-
- # Spaces search
- spaces = api.list_spaces(search="gradio demo", limit=10, sort="likes")
- ```
+ **Capabilities:**
+ - Models, Datasets, Spaces search
+ - Download via `uvx hf` CLI
+ - Search quality principles (Long-tail, Multi-Query)
 
  ---
 
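For context, the query-variation and long-tail guidance earlier in this agent could look roughly like the following sketch; the helper function and the specific query templates are assumptions for illustration, not part of the agent:

```python
# Illustrative multi-query builder (assumed helper, not part of the agent).
from datetime import date

def build_queries(topic: str, n: int = 5) -> list[str]:
    """Generate 3-5 long-tail variations without hardcoding model names."""
    year = date.today().year  # dynamic context instead of stale version strings
    variations = [
        f"{topic} best practices {year}",       # how-to angle
        f"{topic} vs alternatives comparison",  # comparison angle
        f"how to {topic} step by step",         # tutorial angle
        f"{topic} benchmark latest",            # recency angle
        f"{topic} site:reddit.com discussion",  # community angle
    ]
    return variations[:max(3, min(n, 5))]

print(build_queries("LLM structured output"))
```
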
@@ -4,11 +4,12 @@ description: Consult multiple AI models and synthesize collective wisdom (LLM Co
 
  # LLM Council
 
- Inspired by Andrej Karpathy's LLM Council: query multiple AI models with the same question, anonymize their responses, and synthesize collective wisdom.
+ Inspired by Andrej Karpathy's LLM Council: query multiple AI models with the same question, anonymize their responses, and synthesize collective wisdom through multi-round deliberation.
 
  **Core Philosophy:**
  - Collective intelligence > single expert opinion
  - Anonymization prevents model favoritism
+ - Multi-round deliberation resolves conflicts and fills gaps
  - Diverse perspectives lead to better answers
 
  ---
@@ -19,17 +20,51 @@ Inspired by Andrej Karpathy's LLM Council: query multiple AI models with the sam
 
  1. **No arguments**: `/council` - Prompt user for question
  2. **Question only**: `/council How should I structure this API?`
- 3. **With evaluate flag**: `/council --evaluate What's the best approach?`
- 4. **With deep reasoning**: `/council --deep What's the best architecture?`
- 5. **Combined**: `/council --deep --evaluate Complex design decision?`
+ 3. **Quick mode**: `/council --quick What's the best approach?`
 
  **Flags:**
- - `--evaluate`: Enable Stage 3 peer evaluation where each model ranks the others
- - `--deep`: Enable maximum reasoning depth (Codex: xhigh + gpt-5.1-codex-max)
+ - `--quick`: Quick mode - reduced reasoning depth, faster convergence
+
+ **Default behavior (no flags):**
+ - Maximum reasoning depth (Codex: xhigh + gpt-5.1-codex-max)
+ - Full multi-round deliberation (up to 3 rounds)
 
  ---
 
- ## Context Gathering (Before Stage 1)
+ ## Council Member Output Schema
+
+ All council members MUST return responses in this structured format:
+
+ ```yaml
+ council_member:
+   model: "opus" | "sonnet" | "codex" | "gemini"
+   response:
+     summary: "1-2 sentence core answer"
+     detailed_answer: "full response content"
+     confidence: 0.0-1.0
+     key_points:
+       - point: "key insight"
+         evidence: "file:line or reasoning"
+     code_references:
+       - file: "/absolute/path/to/file.py"
+         lines: "42-58"
+         context: "why this is relevant"
+     caveats:
+       - "potential limitation or edge case"
+     # Round 2+ additional fields
+     gaps:
+       - "aspect not fully addressed"
+     conflicts:
+       - "disagrees with [model] on [topic]: [reason]"
+ ```
+
+ **Schema enforcement:**
+ - Sub-agents that fail to follow this schema will have their results flagged
+ - Missing required fields trigger re-query in next round
+
+ ---
+
+ ## Context Gathering (Before Round 1)
 
  Before querying models, collect relevant context:
 
@@ -45,17 +80,14 @@ Model-specific guidelines (project root):
  - ./gemini.md (Gemini)
  ```
 
- **File Path Inclusion:**
- When the question involves specific files or images, provide **exact absolute paths** so each model can read them directly:
-
+ **File Path Inclusion (MANDATORY format):**
  ```
- Prompt addition:
- "Relevant files for this question:
- - /absolute/path/to/file.py (source code)
+ Relevant files for this question:
+ - /absolute/path/to/file.py:45-78 (authentication logic)
+ - /absolute/path/to/model.py:12-35 (User model definition)
  - /absolute/path/to/screenshot.png (UI reference)
- - /absolute/path/to/diagram.jpg (architecture)
 
- You can use Read tool to examine these files if needed."
+ Use your file access tools to READ these files directly.
  ```
 
  **Model-specific file access:**
@@ -84,124 +116,243 @@ Directories: node_modules/, __pycache__/, .git/
 
  ## Execution
 
- ### Stage 1: Collect Responses
+ ### Round 1: Collect Initial Responses
+
+ Query all 4 models **in parallel** using Task tool with sub-agents:
+
+ **Claude Opus:**
+ ```
+ Task(model="opus", subagent_type="general-purpose", run_in_background: true):
+   prompt: |
+     You are participating in an LLM Council deliberation as Claude Opus.
+
+     ## Guidelines
+     Read and follow ./CLAUDE.md project guidelines.
+     You have access to MCP tools. Use them actively to gather accurate information.
+
+     ## Question
+     [USER_QUESTION]
+
+     ## Context Files (READ directly using exact paths)
+     [FILE_LIST_WITH_LINE_NUMBERS]
+
+     ## Current Changes
+     [git diff summary]
 
- Query all 4 models **in parallel** with identical prompt:
+     ## Instructions
+     Provide your best answer following the Council Member Output Schema.
+     Be concise but thorough. Focus on accuracy and actionable insights.
 
+     ## Output (YAML format required)
+     [COUNCIL_MEMBER_SCHEMA]
  ```
- Models:
- - Claude Opus → Task(model="opus", subagent_type="general-purpose")
- - Claude Sonnet → Task(model="sonnet", subagent_type="general-purpose")
- - Codex → mcp__codex__codex(sandbox="read-only", reasoningEffort="high")
- - Gemini → Bash: cat <<'EOF' | gemini -p -
+
+ **Claude Sonnet:**
+ ```
+ Task(model="sonnet", subagent_type="general-purpose", run_in_background: true):
+   prompt: [Same structure as Opus]
+ ```
+
+ **Codex:**
  ```
+ Task(subagent_type="general-purpose", run_in_background: true):
+   prompt: |
+     You are participating in an LLM Council deliberation as Codex.
 
- **Reasoning Control:**
- | Model | Default | --deep |
- |-------|---------|--------|
- | Claude Opus | standard | same |
- | Claude Sonnet | standard | same |
- | Codex | `reasoningEffort="high"` | `xhigh` + `gpt-5.1-codex-max` |
- | Gemini | CLI control N/A | CLI control N/A |
+     ## Tool Usage
+     Use mcp__codex__codex tool with:
+     - sandbox: "read-only"
+     - workingDirectory: "{PROJECT_ROOT}"
+     - reasoningEffort: "xhigh" (or "high" with --quick)
+     - model: "gpt-5.1-codex-max"
 
- Note:
- - Codex `xhigh` only works with `gpt-5.1-codex-max` or `gpt-5.2`
- - Gemini CLI doesn't support thinking level control yet (Issue #6693)
+     ## Guidelines
+     Read and follow ./AGENTS.md project guidelines.
+     You have access to MCP tools. Use them actively to gather accurate information.
 
- **Prompt template for each model:**
+     ## Question
+     [USER_QUESTION]
+
+     ## Context Files
+     [FILE_LIST_WITH_LINE_NUMBERS]
+
+     ## Instructions
+     Parse Codex's response and return structured YAML following the schema.
+
+     ## Output (YAML format required)
+     [COUNCIL_MEMBER_SCHEMA]
  ```
- You are participating in an LLM Council deliberation.
 
- ## Guidelines
- Read and follow your project guidelines before answering:
- - Claude models: Read ./CLAUDE.md
- - Codex: Read ./AGENTS.md
- - Gemini: Read ./gemini.md
+ **Gemini:**
+ ```
+ Task(subagent_type="general-purpose", run_in_background: true):
+   prompt: |
+     You are participating in an LLM Council deliberation as Gemini.
 
- ## Question
- [USER_QUESTION]
+     ## Tool Usage
+     Use Bash tool to invoke Gemini CLI:
+     ```bash
+     cat <<'EOF' | gemini -p -
+     [GEMINI_PROMPT_WITH_CONTEXT]
+     EOF
+     ```
 
- ## Context
- Working Directory: [ABSOLUTE_PATH]
+     ## Guidelines
+     Read and follow ./gemini.md project guidelines.
+     You have access to MCP tools. Use them actively to gather accurate information.
 
- ## Relevant Files (READ these directly for accurate context)
- - [/absolute/path/to/file1.ext] - [brief description]
- - [/absolute/path/to/image.png] - [screenshot/diagram description]
- - [/absolute/path/to/file2.ext] - [brief description]
+     ## Question
+     [USER_QUESTION]
 
- Do NOT ask for file contents. Use your file access tools to READ them directly.
+     ## Context Files (include content since Gemini has limited file access)
+     [FILE_CONTENTS_WITH_LINE_NUMBERS]
 
- ## Current Changes (if applicable)
- [git diff summary or key changes]
+     ## Instructions
+     Parse Gemini's response and return structured YAML following the schema.
 
- ## Instructions
- Provide your best answer. Be concise but thorough.
- Focus on accuracy, practicality, and actionable insights.
+     ## Output (YAML format required)
+     [COUNCIL_MEMBER_SCHEMA]
  ```
 
  **Important:**
- - Use `run_in_background: true` for Task calls to enable true parallelism
- - Set timeout: 120000ms for each call
+ - Use `run_in_background: true` for true parallelism
+ - Timeout: 120000ms per call
  - Continue with successful responses if some models fail
 
- ### Stage 2: Anonymize
+ ### Round 1.5: Coordinator Analysis
 
- After collecting responses:
+ After collecting responses, the main agent (coordinator) performs analysis:
 
- 1. **Shuffle** the response order randomly
- 2. **Assign labels**: Response A, Response B, Response C, Response D
- 3. **Create mapping** (keep internal, reveal later):
-    ```
+ **1. Anonymize Responses:**
+ ```
+ 1. Shuffle response order randomly
+ 2. Assign labels: Response A, B, C, D
+ 3. Create internal mapping:
     label_to_model = {
       "Response A": "gemini",
       "Response B": "opus",
       "Response C": "sonnet",
       "Response D": "codex"
     }
-    ```
+ ```
 
- 4. **Display** anonymized responses to user in a structured format
+ **2. Gap Analysis:**
+ ```yaml
+ gaps_detected:
+   - model: "opus"
+     gap: "performance benchmarks not addressed"
+     severity: "medium"
+   - model: "gemini"
+     gap: "security implications missing"
+     severity: "high"
+ ```
 
- ### Stage 3: Peer Evaluation (if --evaluate)
+ **3. Conflict Detection:**
+ ```yaml
+ conflicts_detected:
+   - topic: "recommended approach"
+     positions:
+       - model: "opus"
+         position: "use library A"
+         evidence: "official docs recommend"
+       - model: "codex"
+         position: "use library B"
+         evidence: "better performance"
+     resolution_needed: true
+ ```
 
- If `--evaluate` flag is present, query each model again with all anonymized responses:
+ **4. Convergence Check:**
+ ```yaml
+ convergence_status:
+   confidence_average: 0.815
+   new_information_ratio: 0.15
+   gaps_remaining: 2
+   conflicts_remaining: 1
+   decision: "proceed_to_round_2" | "terminate_and_synthesize"
+ ```
+
+ ### Round 2: Targeted Re-queries (Conditional)
 
- **Evaluation prompt:**
+ If convergence criteria are not met, re-query only the models with gaps/conflicts:
+
+ **Re-query prompt template:**
  ```
- You are evaluating responses from an LLM Council.
+ ## Previous Round Summary
+ Round 1 produced the following positions:
+
+ ### Response A (Confidence: 0.92)
+ - Position: [summary]
+ - Key points: [list]
 
- Original Question: [USER_QUESTION]
+ ### Response B (Confidence: 0.85)
+ - Position: [summary]
+ - Key points: [list]
 
- [ANONYMIZED_RESPONSES]
+ [... other responses ...]
 
- Instructions:
- 1. Evaluate each response for accuracy, completeness, and usefulness
- 2. Identify strengths and weaknesses of each
- 3. End with "FINAL RANKING:" section
+ ## Gaps Identified
+ - [gap 1]
+ - [gap 2]
 
- FINAL RANKING:
- 1. Response [X] - [brief reason]
- 2. Response [Y] - [brief reason]
- 3. Response [Z] - [brief reason]
- 4. Response [W] - [brief reason]
+ ## Conflicts Detected
+ - Topic: [topic]
+ - Position A: [description]
+ - Position B: [description]
+
+ ## Re-query Focus
+ Please address specifically:
+ 1. [specific gap or conflict to resolve]
+ 2. [specific gap or conflict to resolve]
+
+ Provide evidence and reasoning for your position.
+ Update your confidence score based on new information.
+
+ ## Output (YAML format required)
+ [COUNCIL_MEMBER_SCHEMA with gaps/conflicts fields]
  ```
 
- **Calculate aggregate rankings:**
- - Parse each model's ranking
- - Sum rank positions for each response
- - Lower total = higher collective approval
+ ### Round 2.5: Coordinator Analysis
+
+ Same as Round 1.5. Check convergence again.
 
- ### Stage 4: Synthesize
+ ### Round 3: Final Cross-Validation (Conditional)
 
- As the main agent (moderator), synthesize the final answer:
+ If still not converged after Round 2:
+ - Focus on resolving remaining conflicts
+ - Models see other models' positions (still anonymized)
+ - Final opportunity for consensus
+
+ ### Synthesis
+
+ After convergence or max rounds:
 
  1. **Reveal** the label-to-model mapping
  2. **Analyze** all responses:
     - Consensus points (where models agree)
-    - Disagreements (where they differ)
+    - Resolved conflicts (with reasoning)
+    - Remaining disagreements (with analysis)
     - Unique insights (valuable points from individual models)
- 3. **If --evaluate**: Include aggregate rankings and "street cred" scores
- 4. **Produce** final verdict combining best elements
+ 3. **Produce** final verdict combining best elements
+
+ ---
+
+ ## Termination Criteria
+
+ ### Hard Limits (Mandatory Termination)
+ | Condition | Value |
+ |-----------|-------|
+ | Max rounds | 3 |
+ | Max total time | 10 min |
+ | All models failed | immediate |
+
+ ### Soft Limits (Convergence - any triggers termination)
+ | Condition | Threshold |
+ |-----------|-----------|
+ | Average confidence | > 0.9 |
+ | New information ratio | < 10% |
+ | All gaps resolved | 0 remaining |
+ | Strong consensus | 3+ models agree |
+ | Conflicts irreconcilable | Cannot be resolved with more queries |
 
  ---
 
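To ground the coordinator logic added above, here is a minimal Python sketch of the anonymization step and a convergence check against the hard and soft limits; the data shapes and helper names are illustrative assumptions, not part of the command:

```python
# Illustrative coordinator helpers (assumed shapes, not part of the package).
import random

def anonymize(responses: dict[str, str]) -> tuple[dict[str, str], dict[str, str]]:
    """Shuffle model responses and assign 'Response A'..'Response D' labels.

    The label_to_model mapping stays internal until synthesis reveals it.
    """
    models = list(responses)
    random.shuffle(models)
    label_to_model = {f"Response {chr(ord('A') + i)}": m for i, m in enumerate(models)}
    labeled = {label: responses[model] for label, model in label_to_model.items()}
    return labeled, label_to_model

def should_terminate(round_no: int, confidences: list[float],
                     new_info_ratio: float, gaps_remaining: int,
                     agreeing_models: int) -> bool:
    """Apply the hard/soft limits from the Termination Criteria tables.

    (Max total time and irreconcilable conflicts are omitted for brevity.)
    """
    if round_no >= 3:                      # hard limit: max rounds
        return True
    avg = sum(confidences) / len(confidences)
    return (avg > 0.9                      # average confidence > 0.9
            or new_info_ratio < 0.10       # new information ratio < 10%
            or gaps_remaining == 0         # all gaps resolved
            or agreeing_models >= 3)       # strong consensus: 3+ models agree
```
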
@@ -213,70 +364,88 @@ As the main agent (moderator), synthesize the final answer:
  ### Question
  [Original user question]
 
+ ### Deliberation Process
+ | Round | Models Queried | Convergence | Status |
+ |-------|---------------|-------------|--------|
+ | 1 | All (4) | 65% | Gaps detected |
+ | 2 | Codex, Gemini | 85% | Conflict on approach |
+ | 3 | Codex | 95% | Converged |
+
  ### Individual Responses (Anonymized)
 
- #### Response A
+ #### Response A (Confidence: 0.92)
  [Content]
 
- #### Response B
+ **Key Points:**
+ - [point 1] (evidence: file:line)
+ - [point 2] (evidence: file:line)
+
+ #### Response B (Confidence: 0.85)
  [Content]
 
- #### Response C
+ #### Response C (Confidence: 0.78)
  [Content]
 
- #### Response D
+ #### Response D (Confidence: 0.71)
  [Content]
 
  ### Model Reveal
- | Label | Model |
- |-------|-------|
- | Response A | [model name] |
- | Response B | [model name] |
- | Response C | [model name] |
- | Response D | [model name] |
-
- ### Aggregate Rankings (if --evaluate)
- | Rank | Model | Avg Position | Votes |
- |------|-------|--------------|-------|
+ | Label | Model | Final Confidence |
+ |-------|-------|-----------------|
+ | Response A | codex | 0.92 |
+ | Response B | opus | 0.85 |
+ | Response C | sonnet | 0.78 |
+ | Response D | gemini | 0.71 |
+
+ ### Coordinator Analysis
+
+ #### Gaps Addressed
+ | Gap | Resolved By | Round |
+ |-----|-------------|-------|
+ | Performance benchmarks | Codex | 2 |
+ | Security considerations | Opus | 1 |
+
+ #### Conflicts Resolved
+ | Topic | Final Position | Reasoning |
+ |-------|---------------|-----------|
+ | Library choice | Library A | Official docs + 3 model consensus |
+
+ #### Remaining Disagreements
+ | Topic | Positions | Analysis |
+ |-------|-----------|----------|
+ | [topic] | A: [pos], B: [pos] | [why unresolved] |
 
  ### Council Synthesis
 
  #### Consensus
- [Points where all/most models agree]
-
- #### Disagreements
- [Points of divergence with analysis]
+ [Points where all/most models agree - with evidence]
 
- #### Unique Insights
- [Valuable contributions from specific models]
+ #### Key Insights by Model
+ | Model | Unique Contribution |
+ |-------|-------------------|
+ | Codex | [insight] |
+ | Opus | [insight] |
 
  ### Final Verdict
- [Synthesized answer combining collective wisdom]
+ [Synthesized answer combining collective wisdom with confidence level and caveats]
+
+ ### Code References
+ | File | Lines | Context |
+ |------|-------|---------|
+ | /path/to/file.py | 45-78 | Authentication logic |
  ```
 
  ---
 
  ## Error Handling
 
- - **Model timeout**: Continue with successful responses, note failures
- - **All models fail**: Report error, suggest retry
- - **Parse failure in rankings**: Use fallback regex extraction
- - **Empty response**: Exclude from synthesis, note in output
-
- ---
-
- ## Examples
-
- ```bash
- # Basic council consultation
- /council What's the best way to implement caching in this API?
-
- # With peer evaluation for important decisions
- /council --evaluate Should we use microservices or monolith for this project?
-
- # Architecture review
- /council --evaluate Review the current authentication flow and suggest improvements
- ```
+ | Error | Response |
+ |-------|----------|
+ | Model timeout | Continue with successful responses, note failures |
+ | All models fail | Report error, suggest retry |
+ | Parse failure | Use fallback extraction, flag for re-query |
+ | Empty response | Exclude from synthesis, note in output |
+ | Schema violation | Flag and request re-query in next round |
 
  ---
 
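As a rough illustration of the "fallback extraction" row above, a coordinator might try strict YAML parsing first and fall back to a line-oriented regex scan; the function and patterns are assumptions for illustration, not part of the command:

```python
# Hypothetical fallback parser for council responses (illustrative only).
import re
import yaml  # PyYAML

def extract_fields(raw: str) -> dict | None:
    """Strict YAML parse first; regex fallback for a few scalar fields."""
    try:
        data = yaml.safe_load(raw)
        if isinstance(data, dict):
            return data
    except yaml.YAMLError:
        pass
    # Fallback: pull simple `key: value` pairs line by line.
    fields = {}
    for key in ("model", "summary", "confidence"):
        m = re.search(rf'^\s*{key}:\s*"?(.+?)"?\s*$', raw, re.MULTILINE)
        if m:
            fields[key] = m.group(1)
    return fields or None  # None -> schema violation: flag for re-query
```
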
@@ -284,35 +453,50 @@ As the main agent (moderator), synthesize the final answer:
 
  Use `AskUserQuestion` tool when clarification is needed:
 
- **Before Stage 1:**
+ **Before Round 1:**
  - Question is ambiguous or too broad
  - Missing critical context (e.g., "review this code" but no file specified)
  - Multiple interpretations possible
 
- **During Execution:**
- - Conflicting requirements detected
- - Need to confirm scope (e.g., "Should I include performance considerations?")
+ **During Deliberation:**
+ - Strong disagreement between models that cannot be resolved
+ - New information discovered that changes the question scope
 
  **After Synthesis:**
- - Models strongly disagree and user input would help decide
+ - Remaining disagreements require user input to decide
  - Actionable next steps require user confirmation
 
  **Example questions:**
  ```
  - "Your question mentions 'the API' - which specific endpoint or service?"
- - "Should the council focus on: (1) Code quality, (2) Architecture, (3) Performance, or (4) All aspects?"
  - "Models disagree on X vs Y approach. Which aligns better with your constraints?"
+ - "Should the council prioritize performance or maintainability?"
  ```
 
  **Important:** Never assume or guess when context is unclear. Ask first, then proceed.
 
  ---
 
+ ## Examples
+
+ ```bash
+ # Standard council consultation (full multi-round, max reasoning)
+ /council What's the best way to implement caching in this API?
+
+ # Quick mode for simpler questions
+ /council --quick Should we use tabs or spaces for indentation?
+
+ # Architecture review
+ /council Review the current authentication flow and suggest improvements
+ ```
+
+ ---
+
  ## Guidelines
  - Respond in the same language as the user's question
  - No emojis in code or documentation
  - If context is needed, gather it before querying models
- - For code-related questions, include relevant file snippets in the prompt
+ - For code-related questions, include relevant file snippets with line numbers
  - Respect `@CLAUDE.md` project conventions
  - **Never assume unclear context - use AskUserQuestion to clarify**
package/README.md CHANGED
@@ -64,7 +64,7 @@ A collection of custom slash commands, agents, and skills for Claude Code CLI.
  | `/ask-deepwiki` | Deep query GitHub repositories via DeepWiki MCP |
  | `/ask-codex` | Request code review via Codex MCP (with Claude cross-check) |
  | `/ask-gemini` | Request code review via Gemini CLI (with Claude cross-check) |
- | `/council` | Consult multiple AI models and synthesize collective wisdom |
+ | `/council` | Consult multiple AI models with multi-round deliberation |
 
  ### GitHub Workflow Commands (`/gh/`)
 
@@ -89,8 +89,8 @@ A collection of custom slash commands, agents, and skills for Claude Code CLI.
 
  | Agent | Description |
  |-------|-------------|
- | `web-researcher` | Multi-platform tech research (Reddit, GitHub, SO, HF, arXiv, etc.) |
- | `python-pro` | Python advanced features expert (decorators, generators, async/await) |
+ | `web-researcher` | Multi-platform tech research with opus model (Reddit, GitHub, HF, arXiv, etc.) |
+ | `python-pro` | Python expert with opus model (decorators, generators, async/await, optimization) |
 
  ## Skills
 
@@ -247,8 +247,10 @@ cp node_modules/@yeongjaeyou/claude-code-config/.mcp.json .
 
  ### `/council` - LLM Council
  - Query multiple AI models (Opus, Sonnet, Codex, Gemini) in parallel
+ - Multi-round deliberation with gap analysis and conflict resolution
  - Anonymize responses for unbiased evaluation
- - Synthesize collective wisdom into consensus
+ - Convergence-based termination (confidence > 0.9, consensus reached)
+ - Use `--quick` flag for faster single-round responses
 
  ### `web-researcher` Agent
  - Multi-platform search: GitHub (`gh` CLI), Hugging Face, Reddit, SO, arXiv
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@yeongjaeyou/claude-code-config",
-   "version": "0.8.0",
+   "version": "0.9.0",
    "description": "Claude Code CLI custom commands, agents, and skills",
    "bin": {
      "claude-code-config": "./bin/cli.js"