@yeongjaeyou/claude-code-config 0.3.1 → 0.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/commands/ask-codex.md +131 -345
- package/.claude/commands/ask-deepwiki.md +15 -15
- package/.claude/commands/ask-gemini.md +134 -352
- package/.claude/commands/code-review.md +41 -40
- package/.claude/commands/commit-and-push.md +35 -36
- package/.claude/commands/council.md +318 -0
- package/.claude/commands/edit-notebook.md +34 -33
- package/.claude/commands/gh/create-issue-label.md +19 -17
- package/.claude/commands/gh/decompose-issue.md +66 -65
- package/.claude/commands/gh/init-project.md +46 -52
- package/.claude/commands/gh/post-merge.md +74 -79
- package/.claude/commands/gh/resolve-issue.md +38 -46
- package/.claude/commands/plan.md +15 -14
- package/.claude/commands/tm/convert-prd.md +53 -53
- package/.claude/commands/tm/post-merge.md +92 -112
- package/.claude/commands/tm/resolve-issue.md +148 -154
- package/.claude/commands/tm/review-prd-with-codex.md +272 -279
- package/.claude/commands/tm/sync-to-github.md +189 -212
- package/.claude/guidelines/cv-guidelines.md +30 -0
- package/.claude/guidelines/id-reference.md +34 -0
- package/.claude/guidelines/work-guidelines.md +17 -0
- package/.claude/skills/notion-md-uploader/SKILL.md +252 -0
- package/.claude/skills/notion-md-uploader/references/notion_block_types.md +323 -0
- package/.claude/skills/notion-md-uploader/references/setup_guide.md +156 -0
- package/.claude/skills/notion-md-uploader/scripts/__pycache__/markdown_parser.cpython-311.pyc +0 -0
- package/.claude/skills/notion-md-uploader/scripts/__pycache__/notion_client.cpython-311.pyc +0 -0
- package/.claude/skills/notion-md-uploader/scripts/__pycache__/notion_converter.cpython-311.pyc +0 -0
- package/.claude/skills/notion-md-uploader/scripts/markdown_parser.py +607 -0
- package/.claude/skills/notion-md-uploader/scripts/notion_client.py +337 -0
- package/.claude/skills/notion-md-uploader/scripts/notion_converter.py +477 -0
- package/.claude/skills/notion-md-uploader/scripts/upload_md.py +298 -0
- package/.claude/skills/skill-creator/LICENSE.txt +202 -0
- package/.claude/skills/skill-creator/SKILL.md +209 -0
- package/.claude/skills/skill-creator/scripts/init_skill.py +303 -0
- package/.claude/skills/skill-creator/scripts/package_skill.py +110 -0
- package/.claude/skills/skill-creator/scripts/quick_validate.py +65 -0
- package/.mcp.json +35 -0
- package/README.md +35 -0
- package/package.json +2 -1
package/.claude/commands/code-review.md

````diff
@@ -1,75 +1,76 @@
 ---
-description:
+description: Pre-commit code review analysis and auto-fix
 ---
 
-#
+# Code Review Analysis
 
-##
+## Purpose
 
-
+Review and process code review feedback from external tools like CodeRabbit.
 
-
+**Core workflow:** Receive external tool review results -> Analyze -> Auto-fix safe changes immediately -> Request user confirmation for items requiring judgment
 
-##
+## Processing Workflow
 
-1.
-2.
-3.
+1. **Analyze review content**: Evaluate validity and impact scope of suggested changes
+2. **Apply auto-fixes**: Immediately apply clear and safe fixes
+3. **Provide checklist**: Request user confirmation for items requiring judgment
 
-##
+## Auto-fix vs Checklist Criteria
 
-###
--
--
--
-- race
+### Auto-fix Targets
+- Obvious bug fixes (null checks, conditional errors, etc.)
+- Coding convention violations (formatting, naming, etc.)
+- Simple typos and unnecessary code removal
+- Clear defects like race conditions, memory leaks
 
-###
--
--
--
--
+### Checklist Targets
+- Architecture or design changes
+- Business logic modifications (functional behavior changes)
+- Performance optimizations (with trade-offs)
+- Unclear or controversial review suggestions
 
-##
+## Input Format
 
-
+Provide CodeRabbit review results as arguments:
 
 ```
-
+Warning: Potential issue | Major
 
-[
-[
-[
+[Problem description]
+[Details]
+[Suggested changes: diff format]
 
-
-
-[
+Committable suggestion
+Prompt for AI Agents
+[Filename, line number, fix summary]
 ```
 
-###
+### Examples
 
 ```
-
+Warning: Style | Minor
 
-
+Variable name does not follow camelCase.
 
 - const user_name = "John";
 + const userName = "John";
 ```
 
 ```
-
+Warning: Logic | Critical
 
-
+State is missing from useEffect dependency array.
 
 - useEffect(() => { fetch(url); }, []);
 + useEffect(() => { fetch(url); }, [url]);
 ```
 
-##
+## Guidelines
+
+- **Critical review**: Validate review suggestions rather than accepting blindly
+- **Project guidelines first**: Follow `@CLAUDE.md` conventions and architecture principles
+- **Safety first**: Use `AskUserQuestion` tool to confirm with user when uncertain
+- **Interactive confirmation**: Use `AskUserQuestion` tool to provide options for items requiring judgment
+- **Commit after completion**: Commit with appropriate message and push after all fixes are complete
 
-- **비판적 검토**: 리뷰 내용을 무조건 수용하지 말고 타당성 검증
-- **프로젝트 지침 우선**: `@CLAUDE.md`의 컨벤션 및 아키텍처 원칙 준수
-- **안전 우선**: 확신이 없으면 `AskUserQuestion` 도구로 사용자에게 확인
-- **인터랙티브 확인**: 판단이 필요한 항목은 `AskUserQuestion` 도구를 사용하여 선택지 제공
-- **완료 후 커밋**: 모든 수정이 완료되면 적절한 커밋 메시지로 커밋 후 push
````
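The auto-fix vs checklist routing in the new code-review.md could key off the `Warning: <category> | <severity>` header shown in its Input Format; a minimal sketch, where the category and severity sets are illustrative assumptions rather than anything the command specifies:

```python
import re

# Hypothetical triage helper for CodeRabbit-style review items.
AUTO_FIX_CATEGORIES = {"Style", "Typo"}        # safe, mechanical fixes
CHECKLIST_SEVERITIES = {"Major", "Critical"}   # always confirm with the user

def triage(review_item: str) -> str:
    """Return 'auto-fix' or 'checklist' for one review item."""
    m = re.match(r"Warning:\s*(?P<category>[^|]+?)\s*\|\s*(?P<severity>\w+)", review_item)
    if m is None:
        return "checklist"  # unparseable items always need human judgment
    category, severity = m.group("category"), m.group("severity")
    if category in AUTO_FIX_CATEGORIES and severity not in CHECKLIST_SEVERITIES:
        return "auto-fix"
    return "checklist"

print(triage("Warning: Style | Minor\nVariable name does not follow camelCase."))
print(triage("Warning: Potential issue | Major\n[Problem description]"))
```

Anything that fails to parse falls through to the checklist, matching the command's safety-first guideline.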
package/.claude/commands/commit-and-push.md

````diff
@@ -1,25 +1,25 @@
 ---
-description: Git
+description: Analyze Git changes and commit with appropriate message, then push
 ---
 
-#
+# Commit & Push
 
-
+Analyze only the files provided as arguments, create an appropriate commit message, commit, and push.
 
-##
+## Workflow
 
-1.
--
--
--
--
--
-2.
-
+1. **Analyze changes**: Determine the purpose of changes in the provided files only
+   - New feature addition
+   - Bug fix
+   - Refactoring
+   - Documentation update
+   - Style/formatting
+2. **Write commit message**: Write clearly in Conventional Commits format
+3. **Commit & Push**: `git add` -> `git commit` -> `git push`
 
-##
+## Commit Message Format
 
-Conventional Commits
+Follow Conventional Commits rules:
 
 ```
 <type>: <subject>
````
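Step 1 of the workflow above (determine the purpose of the changes) could be approximated with a path heuristic before inspecting the diff itself; a rough sketch, where the rules are illustrative assumptions and the real command reasons over diff content:

```python
def guess_commit_type(paths):
    """Very rough mapping from changed file paths to a Conventional Commits type."""
    if all(p.endswith((".md", ".rst")) for p in paths):
        return "docs"      # documentation update
    if any("test" in p for p in paths):
        return "test"      # test code addition/modification
    if any(p.endswith((".json", ".yml", ".yaml", ".toml")) for p in paths):
        return "chore"     # build/configuration file changes
    return "feat"          # default guess for source changes

print(guess_commit_type(["README.md"]))
print(guess_commit_type(["src/api.py", "tests/test_api.py"]))
```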
````diff
@@ -27,26 +27,25 @@ Conventional Commits 규칙 준수:
 [optional body]
 ```
 
-###
-- `feat`:
-- `fix`:
-- `refactor`:
-- `docs`:
-- `style`:
-- `test`:
-- `chore`:
-
-### Subject
--
--
--
-- 50
-
-##
-
--
--
-- **CLAUDE.md
--
-
+### Types
+- `feat`: New feature addition
+- `fix`: Bug fix
+- `refactor`: Code refactoring
+- `docs`: Documentation changes
+- `style`: Code formatting, missing semicolons, etc.
+- `test`: Test code addition/modification
+- `chore`: Build, configuration file changes
+
+### Subject Guidelines
+- Use imperative, present tense
+- Start with lowercase
+- No period at the end
+- Keep it concise (under 50 characters)
+
+## Guidelines
+
+- **Do not analyze files other than those provided as arguments**
+- **Clarity**: Clearly communicate what was changed and why
+- **Follow CLAUDE.md**: Check project guidelines in `@CLAUDE.md`
+- **Single purpose**: One commit should contain only one logical change
 
````
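The subject-line rules in the new commit-and-push.md (known type, lowercase start, no trailing period, under 50 characters) are mechanical enough to check; a minimal sketch, with the function name being an assumption for illustration:

```python
import re

TYPES = {"feat", "fix", "refactor", "docs", "style", "test", "chore"}

def valid_subject_line(line: str) -> bool:
    """Check a '<type>: <subject>' line against the guidelines above."""
    m = re.fullmatch(r"(?P<type>\w+): (?P<subject>.+)", line)
    if m is None or m.group("type") not in TYPES:
        return False
    subject = m.group("subject")
    return (
        subject[0].islower()           # start with lowercase
        and not subject.endswith(".")  # no period at the end
        and len(subject) <= 50         # keep it concise
    )

print(valid_subject_line("feat: add retry logic to API client"))
print(valid_subject_line("feat: Add retry logic."))
```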
package/.claude/commands/council.md (new file)

````diff
@@ -0,0 +1,318 @@
+---
+description: Consult multiple AI models and synthesize collective wisdom (LLM Council)
+---
+
+# LLM Council
+
+Inspired by Andrej Karpathy's LLM Council: query multiple AI models with the same question, anonymize their responses, and synthesize collective wisdom.
+
+**Core Philosophy:**
+- Collective intelligence > single expert opinion
+- Anonymization prevents model favoritism
+- Diverse perspectives lead to better answers
+
+---
+
+## Arguments
+
+`$ARGUMENTS` parsing:
+
+1. **No arguments**: `/council` - Prompt user for question
+2. **Question only**: `/council How should I structure this API?`
+3. **With evaluate flag**: `/council --evaluate What's the best approach?`
+4. **With deep reasoning**: `/council --deep What's the best architecture?`
+5. **Combined**: `/council --deep --evaluate Complex design decision?`
+
+**Flags:**
+- `--evaluate`: Enable Stage 3 peer evaluation where each model ranks the others
+- `--deep`: Enable maximum reasoning depth (Codex: xhigh + gpt-5.1-codex-max)
+
+---
+
+## Context Gathering (Before Stage 1)
+
+Before querying models, collect relevant context:
+
+**Auto-collect:**
+```
+- git status / git diff (current changes)
+- Directory structure (tree -L 2)
+- Files mentioned in conversation (Read/Edit history)
+
+Model-specific guidelines (project root):
+- ./CLAUDE.md (Claude Opus/Sonnet)
+- ./AGENTS.md (Codex)
+- ./gemini.md (Gemini)
+```
+
+**File Path Inclusion:**
+When the question involves specific files or images, provide **exact absolute paths** so each model can read them directly:
+
+```
+Prompt addition:
+"Relevant files for this question:
+- /absolute/path/to/file.py (source code)
+- /absolute/path/to/screenshot.png (UI reference)
+- /absolute/path/to/diagram.jpg (architecture)
+
+You can use Read tool to examine these files if needed."
+```
+
+**Model-specific file access:**
+| Model | File Access Method |
+|-------|-------------------|
+| Claude Opus/Sonnet | Read tool (images supported) |
+| Codex | sandbox read-only file access |
+| Gemini | Include file content in prompt, or instruct to read |
+
+**Sensitive Data Filtering (exclude from prompts):**
+```
+Files: .env*, secrets*, *credentials*, *.pem, *.key
+Patterns: sk-[a-zA-Z0-9]+, Bearer tokens, passwords
+Directories: node_modules/, __pycache__/, .git/
+```
+
+**Prompt Size Management:**
+```
+- Large files (>500 lines): include only relevant sections or diff
+- Max 5 files per prompt
+- Prefer git diff over full file content
+- If timeout occurs: reduce context, retry
+```
+
+---
+
+## Execution
+
+### Stage 1: Collect Responses
+
+Query all 4 models **in parallel** with identical prompt:
+
+```
+Models:
+- Claude Opus → Task(model="opus", subagent_type="general-purpose")
+- Claude Sonnet → Task(model="sonnet", subagent_type="general-purpose")
+- Codex → mcp__codex__codex(sandbox="read-only", reasoningEffort="high")
+- Gemini → Bash: cat <<'EOF' | gemini -p -
+```
+
+**Reasoning Control:**
+| Model | Default | --deep |
+|-------|---------|--------|
+| Claude Opus | standard | same |
+| Claude Sonnet | standard | same |
+| Codex | `reasoningEffort="high"` | `xhigh` + `gpt-5.1-codex-max` |
+| Gemini | CLI control N/A | CLI control N/A |
+
+Note:
+- Codex `xhigh` only works with `gpt-5.1-codex-max` or `gpt-5.2`
+- Gemini CLI doesn't support thinking level control yet (Issue #6693)
+
+**Prompt template for each model:**
+```
+You are participating in an LLM Council deliberation.
+
+## Guidelines
+Read and follow your project guidelines before answering:
+- Claude models: Read ./CLAUDE.md
+- Codex: Read ./AGENTS.md
+- Gemini: Read ./gemini.md
+
+## Question
+[USER_QUESTION]
+
+## Context
+Working Directory: [ABSOLUTE_PATH]
+
+## Relevant Files (READ these directly for accurate context)
+- [/absolute/path/to/file1.ext] - [brief description]
+- [/absolute/path/to/image.png] - [screenshot/diagram description]
+- [/absolute/path/to/file2.ext] - [brief description]
+
+Do NOT ask for file contents. Use your file access tools to READ them directly.
+
+## Current Changes (if applicable)
+[git diff summary or key changes]
+
+## Instructions
+Provide your best answer. Be concise but thorough.
+Focus on accuracy, practicality, and actionable insights.
+```
+
+**Important:**
+- Use `run_in_background: true` for Task calls to enable true parallelism
+- Set timeout: 120000ms for each call
+- Continue with successful responses if some models fail
+
+### Stage 2: Anonymize
+
+After collecting responses:
+
+1. **Shuffle** the response order randomly
+2. **Assign labels**: Response A, Response B, Response C, Response D
+3. **Create mapping** (keep internal, reveal later):
+```
+label_to_model = {
+    "Response A": "gemini",
+    "Response B": "opus",
+    "Response C": "sonnet",
+    "Response D": "codex"
+}
+```
+
+4. **Display** anonymized responses to user in a structured format
+
+### Stage 3: Peer Evaluation (if --evaluate)
+
+If `--evaluate` flag is present, query each model again with all anonymized responses:
+
+**Evaluation prompt:**
+```
+You are evaluating responses from an LLM Council.
+
+Original Question: [USER_QUESTION]
+
+[ANONYMIZED_RESPONSES]
+
+Instructions:
+1. Evaluate each response for accuracy, completeness, and usefulness
+2. Identify strengths and weaknesses of each
+3. End with "FINAL RANKING:" section
+
+FINAL RANKING:
+1. Response [X] - [brief reason]
+2. Response [Y] - [brief reason]
+3. Response [Z] - [brief reason]
+4. Response [W] - [brief reason]
+```
+
+**Calculate aggregate rankings:**
+- Parse each model's ranking
+- Sum rank positions for each response
+- Lower total = higher collective approval
+
+### Stage 4: Synthesize
+
+As the main agent (moderator), synthesize the final answer:
+
+1. **Reveal** the label-to-model mapping
+2. **Analyze** all responses:
+   - Consensus points (where models agree)
+   - Disagreements (where they differ)
+   - Unique insights (valuable points from individual models)
+3. **If --evaluate**: Include aggregate rankings and "street cred" scores
+4. **Produce** final verdict combining best elements
+
+---
+
+## Output Format
+
+```markdown
+## LLM Council Deliberation
+
+### Question
+[Original user question]
+
+### Individual Responses (Anonymized)
+
+#### Response A
+[Content]
+
+#### Response B
+[Content]
+
+#### Response C
+[Content]
+
+#### Response D
+[Content]
+
+### Model Reveal
+| Label | Model |
+|-------|-------|
+| Response A | [model name] |
+| Response B | [model name] |
+| Response C | [model name] |
+| Response D | [model name] |
+
+### Aggregate Rankings (if --evaluate)
+| Rank | Model | Avg Position | Votes |
+|------|-------|--------------|-------|
+
+### Council Synthesis
+
+#### Consensus
+[Points where all/most models agree]
+
+#### Disagreements
+[Points of divergence with analysis]
+
+#### Unique Insights
+[Valuable contributions from specific models]
+
+### Final Verdict
+[Synthesized answer combining collective wisdom]
+```
+
+---
+
+## Error Handling
+
+- **Model timeout**: Continue with successful responses, note failures
+- **All models fail**: Report error, suggest retry
+- **Parse failure in rankings**: Use fallback regex extraction
+- **Empty response**: Exclude from synthesis, note in output
+
+---
+
+## Examples
+
+```bash
+# Basic council consultation
+/council What's the best way to implement caching in this API?
+
+# With peer evaluation for important decisions
+/council --evaluate Should we use microservices or monolith for this project?
+
+# Architecture review
+/council --evaluate Review the current authentication flow and suggest improvements
+```
+
+---
+
+## User Interaction
+
+Use `AskUserQuestion` tool when clarification is needed:
+
+**Before Stage 1:**
+- Question is ambiguous or too broad
+- Missing critical context (e.g., "review this code" but no file specified)
+- Multiple interpretations possible
+
+**During Execution:**
+- Conflicting requirements detected
+- Need to confirm scope (e.g., "Should I include performance considerations?")
+
+**After Synthesis:**
+- Models strongly disagree and user input would help decide
+- Actionable next steps require user confirmation
+
+**Example questions:**
+```
+- "Your question mentions 'the API' - which specific endpoint or service?"
+- "Should the council focus on: (1) Code quality, (2) Architecture, (3) Performance, or (4) All aspects?"
+- "Models disagree on X vs Y approach. Which aligns better with your constraints?"
+```
+
+**Important:** Never assume or guess when context is unclear. Ask first, then proceed.
+
+---
+
+## Guidelines
+
+- Respond in the same language as the user's question
+- No emojis in code or documentation
+- If context is needed, gather it before querying models
+- For code-related questions, include relevant file snippets in the prompt
+- Respect `@CLAUDE.md` project conventions
+- **Never assume unclear context - use AskUserQuestion to clarify**
````
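The Stage 2 anonymization and Stage 3 rank aggregation in the new council.md are mechanical enough to sketch; a minimal illustration, where the function names, the seeded shuffle, and the fallback regex are assumptions for demonstration rather than the command's actual implementation:

```python
import random
import re
from collections import defaultdict

def anonymize(responses, seed=None):
    """Stage 2: shuffle model responses and label them Response A..D."""
    rng = random.Random(seed)
    models = list(responses)
    rng.shuffle(models)
    return {f"Response {chr(65 + i)}": m for i, m in enumerate(models)}

def parse_ranking(evaluation):
    """Stage 3 fallback: pull ordered labels out of a FINAL RANKING section."""
    section = evaluation.split("FINAL RANKING:")[-1]
    return re.findall(r"\d+\.\s*(Response [A-D])", section)

def aggregate(rankings):
    """Sum rank positions per label; lower total = higher collective approval."""
    totals = defaultdict(int)
    for ranking in rankings:
        for position, label in enumerate(ranking, start=1):
            totals[label] += position
    return sorted(totals.items(), key=lambda kv: kv[1])

mapping = anonymize({"opus": "...", "sonnet": "...", "codex": "...", "gemini": "..."}, seed=7)
votes = [
    "FINAL RANKING:\n1. Response B - thorough\n2. Response A - solid\n3. Response D - thin\n4. Response C - vague",
    "FINAL RANKING:\n1. Response B - best\n2. Response D - good\n3. Response A - ok\n4. Response C - weak",
]
ranked = aggregate([parse_ranking(v) for v in votes])
print(ranked[0][0])  # Response B
```

The mapping stays internal until the Model Reveal step; only the sorted totals feed the Aggregate Rankings table.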
package/.claude/commands/edit-notebook.md

````diff
@@ -1,42 +1,43 @@
 ---
-description: Jupyter Notebook(.ipynb)
+description: Safely edit Jupyter Notebook (.ipynb) files
 ---
 
-# Notebook
+# Notebook Editing
 
-Jupyter Notebook
+Safely edit Jupyter Notebook files using the correct tools.
 
-##
+## Required Rules
 
-1.
-- `Edit`, `Write`, `search_replace`
-- .ipynb
+1. **Tool usage**: Use only the `NotebookEdit` tool
+   - Do not use text editing tools like `Edit`, `Write`, `search_replace`
+   - .ipynb files are JSON structures and require dedicated tools
 
-2.
--
--
+2. **Cell insertion order**: Ensure correct order when creating new notebooks
+   - **Important**: Cells are inserted at the beginning if `cell_id` is not specified
+   - **Method 1 (recommended)**: Track previous cell's cell_id for sequential insertion
 ```
-
-
-
+Insert first cell -> Returns cell_id='abc123'
+Insert second cell with cell_id='abc123' -> Inserted after first cell
+Insert third cell with cell_id='def456' -> Inserted after second cell
 ```
--
--
-
-3. **
--
--
--
-
-4.
-- JSON
--
--
--
-
-##
-
--
--
--
-- **CLAUDE.md
+   - **Method 2**: Insert in reverse order (from last cell to first)
+   - **Never**: Insert sequentially without cell_id (causes reverse order)
+
+3. **Source format**: Source may be saved as string when using NotebookEdit
+   - **Problem**: `"line1\\nline2"` (string) -> `\n` displayed literally in Jupyter
+   - **Correct**: `["line1\n", "line2\n"]` (list of strings)
+   - **Solution**: When creating new notebooks, directly write JSON using Python
+
+4. **Post-edit verification**: Always verify after modifications
+   - Validate JSON syntax
+   - Confirm cell execution order is preserved (check first 30 lines with Read)
+   - Check for missing functions/imports/dependencies
+   - Preserve cell outputs unless explicitly instructed otherwise
+
+## Guidelines
+
+- **Use dedicated tool only**: Never use editing tools other than NotebookEdit
+- **Preserve structure**: Maintain existing cell order and outputs
+- **Verification required**: Immediately verify changes after editing
+- **Follow CLAUDE.md**: Adhere strictly to project guidelines
+
````
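Rule 3 of the new edit-notebook.md suggests writing new notebooks as JSON directly from Python so that `source` is stored as a list of strings; a minimal sketch, assuming nbformat 4 and an illustrative `make_code_cell` helper:

```python
import json

def make_code_cell(lines):
    """Build a code cell whose source is a list of strings, one per line."""
    source = [line + "\n" for line in lines[:-1]] + lines[-1:]
    return {"cell_type": "code", "metadata": {}, "execution_count": None,
            "outputs": [], "source": source}

notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [make_code_cell(["import math", "print(math.pi)"])],
}

# Writing the JSON directly avoids the literal "\n" problem described above.
with open("demo.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)

print(notebook["cells"][0]["source"])
```

Re-reading the file with `json.load` afterwards doubles as the JSON-syntax check from rule 4.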
package/.claude/commands/gh/create-issue-label.md

````diff
@@ -1,29 +1,31 @@
-##
+## Create Issue Labels
 
-
+Analyze project structure and create appropriate GitHub issue labels. Follow project guidelines in `@CLAUDE.md`.
 
-##
-1. 프로젝트 분석: `package.json`, `README.md`, 코드 구조 파악
-2. 기술 스택 확인: 사용된 프레임워크, 라이브러리, 도구 식별
-3. 프로젝트 영역 분류: 프론트엔드, 백엔드, API, 인프라 등
-4. 라벨 생성: 타입, 영역, 난이도 기반으로 핵심 라벨만 생성
+## Workflow
 
-
+1. Analyze project: Examine `package.json`, `README.md`, and code structure
+2. Identify tech stack: Detect frameworks, libraries, and tools in use
+3. Classify project areas: Frontend, backend, API, infrastructure, etc.
+4. Create labels: Generate essential labels based on type, area, and complexity
 
-
+## Label Guidelines
 
-
+**The following are examples; adjust based on your project needs.**
+
+### Type
 - `type: feature`, `type: bug`, `type: enhancement`, `type: documentation`, `type: refactor`
 
-###
+### Area
 - `frontend` `backend` `api` `devops`, `crawling` `ai` `database` `infrastructure`
 
-###
+### Complexity
 - `complexity: easy` `complexity: medium` `complexity: hard`
 
-##
+## Example Commands
+
 ```bash
-gh label create "type: feature" --color "0e8a16" --description "
-gh label create "frontend" --color "1d76db" --description "
-gh label create "complexity: easy" --color "7057ff" --description "
-```
+gh label create "type: feature" --color "0e8a16" --description "New feature addition"
+gh label create "frontend" --color "1d76db" --description "Frontend-related work"
+gh label create "complexity: easy" --color "7057ff" --description "Simple task"
+```
````