@yeongjaeyou/claude-code-config 0.14.0 → 0.16.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -23,11 +23,53 @@ Inspired by Andrej Karpathy's LLM Council: query multiple AI models with the sam
 3. **Quick mode**: `/council --quick What's the best approach?`

 **Flags:**
-- `--quick`: Quick mode
+- `--quick`: Quick mode (see below)

 **Default behavior (no flags):**
-- Maximum reasoning depth (Codex: xhigh
+- Maximum reasoning depth (Codex: reasoningEffort=xhigh, model=gpt-5.1-codex-max)
 - Full multi-round deliberation (up to 3 rounds)
+- YAML schema enforced
+
+**Quick mode (`--quick`):**
+- All 4 models queried (Opus, Sonnet, Codex, Gemini)
+- Single round only (Round 1 -> direct Synthesis, no Round 1.5 analysis)
+- YAML schema not enforced (free-form responses accepted)
+- Codex: reasoningEffort=high (instead of xhigh)
+
+---
+
+## Pre-flight Check
+
+Before querying models, verify environment parity:
+
+**1. CLI Installation:**
+```bash
+command -v claude && echo "Claude Code: OK" || echo "Claude Code: Missing"
+command -v codex && echo "Codex CLI: OK" || echo "Codex CLI: Missing"
+command -v gemini && echo "Gemini CLI: OK" || echo "Gemini CLI: Missing"
+```
+
+**2. Guidelines Files:**
+```bash
+[ -f ./CLAUDE.md ] && echo "CLAUDE.md: OK" || echo "CLAUDE.md: Missing"
+[ -f ./AGENTS.md ] && echo "AGENTS.md: OK" || echo "AGENTS.md: Missing"
+[ -f ./GEMINI.md ] && echo "GEMINI.md: OK" || echo "GEMINI.md: Missing"
+```
+
+**3. MCP Configuration:**
+| CLI | Config Location | Check Command |
+|-----|-----------------|---------------|
+| Claude Code | `.mcp.json`, `~/.claude.json` | `claude mcp list` |
+| Codex CLI | `~/.codex/config.toml` | `codex mcp --help` |
+| Gemini CLI | `~/.gemini/settings.json`, `.gemini/settings.json` | `gemini mcp list` |
+
+**Warning conditions (proceed with caution):**
+- Guidelines file missing: model runs without project context
+- MCP not configured: model has limited tool access
+- CLI not installed: model excluded from council
+
+**If warnings detected:**
+Use AskUserQuestion to confirm whether to proceed or fix issues first.

 ---

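The three pre-flight checks above can also be run from one place. A minimal Python sketch, assuming it is executed from the project root (the helper name and warning strings are illustrative, not part of the command spec):

```python
# Sketch: aggregate the documented pre-flight checks (CLI presence, guidelines files).
# Assumes it runs from the project root; names are illustrative.
import shutil
from pathlib import Path

def preflight_warnings() -> list[str]:
    warnings = []
    for cli, label in [("claude", "Claude Code"), ("codex", "Codex CLI"), ("gemini", "Gemini CLI")]:
        if shutil.which(cli) is None:
            warnings.append(f"{label} not installed: model excluded from council")
    for guideline in ("CLAUDE.md", "AGENTS.md", "GEMINI.md"):
        if not Path(guideline).is_file():
            warnings.append(f"{guideline} missing: model runs without project context")
    return warnings

if __name__ == "__main__":
    for warning in preflight_warnings():
        print(f"WARNING: {warning}")
```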
@@ -41,15 +83,14 @@ council_member:
   response:
     summary: "1-2 sentence core answer"
     detailed_answer: "full response content"
-    confidence: 0.0-1.0
     key_points:
       - point: "key insight"
         evidence: "file:line or reasoning"
-    code_references:
+    code_references: # optional
       - file: "/absolute/path/to/file.py"
         lines: "42-58"
         context: "why this is relevant"
-    caveats:
+    caveats: # optional
       - "potential limitation or edge case"
     # Round 2+ additional fields
     gaps:
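For reference, a minimal sketch of validating a member reply against this schema, assuming PyYAML is available and treating the `# optional` and Round 2+ fields (`gaps`, `conflicts`) as optional (function name illustrative):

```python
# Sketch: check a council member's YAML reply against the schema above.
# Required vs. optional follows the "# optional" annotations; gaps/conflicts are Round 2+ fields.
import yaml

REQUIRED = {"summary", "detailed_answer", "key_points"}
OPTIONAL = {"code_references", "caveats", "gaps", "conflicts"}

def validate_member_reply(raw: str) -> list[str]:
    errors = []
    doc = yaml.safe_load(raw) or {}
    response = doc.get("council_member", {}).get("response", {})
    missing = REQUIRED - set(response)
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    unknown = set(response) - REQUIRED - OPTIONAL
    if unknown:
        errors.append(f"unexpected fields: {sorted(unknown)}")
    for kp in response.get("key_points", []):
        if not {"point", "evidence"} <= set(kp):
            errors.append(f"key_point missing point/evidence: {kp}")
    return errors
```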
@@ -72,7 +113,6 @@ Before querying models, collect relevant context:
 ```
 - git status / git diff (current changes)
 - Directory structure (tree -L 2)
-- Files mentioned in conversation (Read/Edit history)

 Model-specific guidelines (project root):
 - ./CLAUDE.md (Claude Opus/Sonnet)
@@ -80,6 +120,38 @@ Model-specific guidelines (project root):
 - ./gemini.md (Gemini)
 ```

+**Conditional Code Exploration:**
+
+When relevant files are unclear from the question, spawn Explore agents to discover them:
+
+```
+Trigger conditions:
+- Question mentions code/architecture/structure without specific files
+- Question asks about "this", "the code", "current implementation" ambiguously
+- UI/UX questions that need component/style file identification
+
+Skip exploration when:
+- User provides specific file paths or permalinks
+- Question is conceptual (no code context needed)
+- Files are obvious from recent git diff
+```
+
+```
+Task(subagent_type="Explore", run_in_background: true):
+  prompt: |
+    Find files related to: [USER_QUESTION]
+
+    Return results in this format:
+    - /absolute/path/file.ext:LINE-LINE (brief context)
+
+    Focus on:
+    - Direct implementation files
+    - Related tests
+    - Configuration if relevant
+```
+
+After exploration, use discovered paths in the File Path Inclusion format below.
+
 **File Path Inclusion (MANDATORY format):**
 ```
 Relevant files for this question:
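A rough heuristic for the trigger/skip conditions above, as an illustrative sketch only (the keyword list is an assumption, not part of the command spec):

```python
# Sketch: decide whether to spawn Explore agents, per the trigger/skip conditions above.
# Keyword and phrase lists are illustrative only.
import re

AMBIGUOUS_PHRASES = ("this", "the code", "current implementation")

def should_explore(question: str, file_paths_given: bool, files_obvious_from_diff: bool) -> bool:
    if file_paths_given or files_obvious_from_diff:
        return False  # skip: user gave paths, or files are clear from the recent git diff
    q = question.lower()
    if re.search(r"\b(code|architecture|structure|component|style|ui|ux)\b", q):
        return True   # code-shaped question with no specific files named
    return any(phrase in q for phrase in AMBIGUOUS_PHRASES)
```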
@@ -95,7 +167,7 @@ Use your file access tools to READ these files directly.
 |-------|-------------------|
 | Claude Opus/Sonnet | Read tool (images supported) |
 | Codex | sandbox read-only file access |
-| Gemini |
+| Gemini | MCP tools or Bash file read (MCP supported since 2025) |

 **Sensitive Data Filtering (exclude from prompts):**
 ```
@@ -114,6 +186,41 @@ Directories: node_modules/, __pycache__/, .git/

 ---

+## Progress Tracking
+
+Use TodoWrite to show progress at each stage:
+
+**Round 1 start:**
+```yaml
+todos:
+  - content: "[Council] Query Opus"
+    status: "in_progress"
+    activeForm: "Querying Opus"
+  - content: "[Council] Query Sonnet"
+    status: "in_progress"
+    activeForm: "Querying Sonnet"
+  - content: "[Council] Query Codex"
+    status: "in_progress"
+    activeForm: "Querying Codex"
+  - content: "[Council] Query Gemini"
+    status: "in_progress"
+    activeForm: "Querying Gemini"
+  - content: "[Council] Analyze responses"
+    status: "pending"
+    activeForm: "Analyzing responses"
+  - content: "[Council] Synthesize"
+    status: "pending"
+    activeForm: "Synthesizing"
+```
+
+**Update rules:**
+- Model response received -> mark that model's todo as "completed"
+- All models done -> "[Council] Analyze responses" to "in_progress"
+- Round 2 needed -> add re-query todos for specific models
+- Analysis done -> "[Council] Synthesize" to "in_progress"
+
+---
+
 ## Execution

 ### Round 1: Collect Initial Responses
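The update rules above amount to a small state transition over the todo list. A sketch of that transition in plain Python over dicts shaped like the YAML block (TodoWrite is the actual tool that records the todos; this helper is illustrative):

```python
# Sketch: apply the documented update rules to an in-memory todo list.
# Todos are dicts shaped like the YAML block above.
def mark_model_done(todos: list[dict], model: str) -> None:
    for todo in todos:
        if todo["content"] == f"[Council] Query {model}":
            todo["status"] = "completed"
    queries = [t for t in todos if t["content"].startswith("[Council] Query")]
    if queries and all(t["status"] == "completed" for t in queries):
        for todo in todos:
            if todo["content"] == "[Council] Analyze responses":
                todo["status"] = "in_progress"
```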
@@ -160,7 +267,7 @@ Task(subagent_type="general-purpose", run_in_background: true):
 You are participating in an LLM Council deliberation as Codex.

 ## Tool Usage
-Use
+Use mcp__codex-cli__codex tool with:
 - sandbox: "read-only"
 - workingDirectory: "{PROJECT_ROOT}"
 - reasoningEffort: "xhigh" (or "high" with --quick)
@@ -196,6 +303,8 @@ Task(subagent_type="general-purpose", run_in_background: true):
 [GEMINI_PROMPT_WITH_CONTEXT]
 EOF
 ```
+Note: Gemini CLI supports MCP (since 2025). If MCP is configured,
+Gemini can access project files directly via MCP tools.

 ## Guidelines
 Read and follow ./gemini.md project guidelines.
@@ -204,8 +313,8 @@ Task(subagent_type="general-purpose", run_in_background: true):
 ## Question
 [USER_QUESTION]

-## Context Files (
-[
+## Context Files (READ directly using exact paths)
+[FILE_LIST_WITH_LINE_NUMBERS]

 ## Instructions
 Parse Gemini's response and return structured YAML following the schema.
@@ -232,15 +341,15 @@ Skipping Round 1.5 defeats the purpose of multi-round deliberation.

 **1. Anonymize Responses:**
 ```
-1.
-2.
-3. Create internal mapping:
+1. Assign labels in response arrival order: Response A, B, C, D
+2. Create internal mapping:
    label_to_model = {
-     "Response A": "
-     "Response B": "
-     "Response C": "
-     "Response D": "
+     "Response A": "[first arrived]",
+     "Response B": "[second arrived]",
+     "Response C": "[third arrived]",
+     "Response D": "[fourth arrived]"
    }
+3. Present responses by label only (hide model names until synthesis)
 ```

 **2. Gap Analysis:**
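As a sketch, the anonymization step in plain Python (function and variable names are illustrative; `arrived` holds `(model, response_text)` pairs in arrival order):

```python
# Sketch: label responses in arrival order and hide model names until synthesis.
import string

def anonymize(arrived: list[tuple[str, str]]) -> tuple[dict[str, str], dict[str, str]]:
    labels = [f"Response {letter}" for letter in string.ascii_uppercase]
    label_to_model = {}  # internal mapping, revealed only in the Model Reveal section
    label_to_text = {}   # what the coordinator analyzes during Round 1.5
    for label, (model, text) in zip(labels, arrived):
        label_to_model[label] = model
        label_to_text[label] = text
    return label_to_model, label_to_text
```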
@@ -271,16 +380,17 @@ conflicts_detected:
 **4. Convergence Check (REQUIRED before synthesis):**
 ```yaml
 convergence_status:
-
-  new_information_ratio: 0.15
+  agreement_count: 3 # models with same core conclusion
   gaps_remaining: 2
   conflicts_remaining: 1
   decision: "proceed_to_round_2" | "terminate_and_synthesize"
 ```

 **Decision logic:**
-- If `
+- If `agreement_count >= 3` → `terminate_and_synthesize` (strong consensus)
+- If `gaps_remaining == 0` AND `conflicts_remaining == 0` → `terminate_and_synthesize`
 - If `conflicts_remaining > 0` AND round < 3 → `proceed_to_round_2`
+- If `gaps_remaining > 0` AND round < 3 → `proceed_to_round_2`
 - Otherwise → `terminate_and_synthesize`

 ### Round 2: Targeted Re-queries (Conditional)
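The decision logic above is mechanical enough to state directly. A minimal sketch, with field names mirroring `convergence_status` and the 3-round cap from the spec (the function itself is illustrative):

```python
# Sketch: convergence decision per the rules above.
# Field names mirror convergence_status; MAX_ROUNDS matches the "up to 3 rounds" limit.
MAX_ROUNDS = 3

def decide(agreement_count: int, gaps_remaining: int, conflicts_remaining: int, round_num: int) -> str:
    if agreement_count >= 3:
        return "terminate_and_synthesize"  # strong consensus
    if gaps_remaining == 0 and conflicts_remaining == 0:
        return "terminate_and_synthesize"
    if conflicts_remaining > 0 and round_num < MAX_ROUNDS:
        return "proceed_to_round_2"
    if gaps_remaining > 0 and round_num < MAX_ROUNDS:
        return "proceed_to_round_2"
    return "terminate_and_synthesize"
```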
@@ -292,11 +402,11 @@ If convergence criteria not met, re-query only models with gaps/conflicts:
 ## Previous Round Summary
 Round 1 produced the following positions:

-### Response A
+### Response A
 - Position: [summary]
 - Key points: [list]

-### Response B
+### Response B
 - Position: [summary]
 - Key points: [list]

@@ -317,7 +427,6 @@ Please address specifically:
 2. [specific gap or conflict to resolve]

 Provide evidence and reasoning for your position.
-Update your confidence score based on new information.

 ## Output (YAML format required)
 [COUNCIL_MEMBER_SCHEMA with gaps/conflicts fields]
@@ -362,10 +471,9 @@ After convergence or max rounds:
 ### Soft Limits (Convergence - any triggers termination)
 | Condition | Threshold |
 |-----------|-----------|
-
-| New information ratio | < 10% |
+| Strong consensus | 3+ models agree on core conclusion |
 | All gaps resolved | 0 remaining |
-
+| All conflicts resolved | 0 remaining |
 | Conflicts irreconcilable | Cannot be resolved with more queries |

 ---
@@ -387,29 +495,29 @@ After convergence or max rounds:

 ### Individual Responses (Anonymized)

-#### Response A
+#### Response A
 [Content]

 **Key Points:**
 - [point 1] (evidence: file:line)
 - [point 2] (evidence: file:line)

-#### Response B
+#### Response B
 [Content]

-#### Response C
+#### Response C
 [Content]

-#### Response D
+#### Response D
 [Content]

 ### Model Reveal
-| Label | Model |
-
-| Response A | codex |
-| Response B | opus |
-| Response C | sonnet |
-| Response D | gemini |
+| Label | Model |
+|-------|-------|
+| Response A | codex |
+| Response B | opus |
+| Response C | sonnet |
+| Response D | gemini |

 ### Coordinator Analysis

@@ -25,16 +25,22 @@ Before using this skill, ensure the following setup is complete:

 ### 2. Environment Configuration

-Set the API key in the environment:
+Set the API key in the environment. The script automatically loads `.env` files if `python-dotenv` is installed:

 ```bash
-# In .env file
+# In .env file (recommended - auto-loaded)
 NOTION_API_KEY=ntn_xxxxx

 # Or export directly
 export NOTION_API_KEY=ntn_xxxxx
 ```

+To install dotenv support:
+
+```bash
+uv add python-dotenv
+```
+
 ### 3. Page Access

 Share the target parent page with the integration:
@@ -65,6 +71,8 @@ uv run python .claude/skills/notion-md-uploader/scripts/upload_md.py \

 ### Dry Run (Preview)

+Preview parsing results and validate local images before uploading:
+
 ```bash
 uv run python .claude/skills/notion-md-uploader/scripts/upload_md.py \
   docs/analysis.md \
@@ -72,6 +80,11 @@ uv run python .claude/skills/notion-md-uploader/scripts/upload_md.py \
   --dry-run
 ```

+The dry run validates:
+- Markdown block parsing
+- Local image file existence
+- Conversion to Notion blocks
+
 ## Supported Markdown Elements

 | Element | Markdown Syntax | Notion Block |
@@ -18,10 +18,17 @@ import sys
 from pathlib import Path
 from typing import Any

+# Load .env file if python-dotenv is available
+try:
+    from dotenv import load_dotenv
+    load_dotenv()
+except ImportError:
+    pass  # python-dotenv not installed, use env vars directly
+
 # Add scripts directory to path for imports
 sys.path.insert(0, str(Path(__file__).parent))

-from markdown_parser import MarkdownParser
+from markdown_parser import BlockType, MarkdownParser
 from notion_client import NotionClient, NotionAPIError, NotionConfig
 from notion_converter import NotionBlockConverter

@@ -236,8 +243,8 @@ Setup:
     print(f"Parsing: {args.md_file}")
     content = md_path.read_text(encoding="utf-8")

-
-    blocks =
+    md_parser = MarkdownParser(base_path=str(md_path.parent))
+    blocks = md_parser.parse(content)

     print(f"Found {len(blocks)} blocks:")
     for i, block in enumerate(blocks[:10]):
@@ -251,6 +258,34 @@ Setup:
     if len(blocks) > 10:
         print(f" ... and {len(blocks) - 10} more blocks")

+    # Validate image files
+    base_path = md_path.parent
+    image_blocks = [b for b in blocks if b.block_type == BlockType.IMAGE]
+    missing_images = []
+    found_images = []
+
+    for block in image_blocks:
+        img_src = block.metadata.get("url", "")
+        if img_src and not img_src.startswith(("http://", "https://")):
+            img_path = base_path / img_src
+            if img_path.exists():
+                found_images.append(str(img_src))
+            else:
+                missing_images.append(str(img_src))
+
+    if found_images:
+        print(f"\nLocal images ({len(found_images)}):")
+        for img in found_images[:5]:
+            print(f" [OK] {img}")
+        if len(found_images) > 5:
+            print(f" ... and {len(found_images) - 5} more")
+
+    if missing_images:
+        print(f"\nMissing images ({len(missing_images)}):")
+        for img in missing_images:
+            print(f" [MISSING] {img}")
+        print("\nWarning: Missing images will cause upload to fail.")
+
     converter = NotionBlockConverter(base_path=str(md_path.parent))
     notion_blocks = converter.convert_blocks(blocks)
     print(f"\nConverted to {len(notion_blocks)} Notion blocks")
@@ -272,10 +307,13 @@ Setup:

         page_url = page.get("url", "")
         page_id = page.get("id", "")
+        image_count = len(uploader._uploaded_images)

         print(f"\nSuccess!")
         print(f"Page ID: {page_id}")
         print(f"URL: {page_url}")
+        if image_count > 0:
+            print(f"Images uploaded: {image_count}")

     except NotionAPIError as e:
         print(f"\nNotion API Error: {e}")
|