@yeongjaeyou/claude-code-config 0.4.0 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/.claude/commands/ask-codex.md +131 -345
  2. package/.claude/commands/ask-deepwiki.md +15 -15
  3. package/.claude/commands/ask-gemini.md +134 -352
  4. package/.claude/commands/code-review.md +41 -40
  5. package/.claude/commands/commit-and-push.md +35 -36
  6. package/.claude/commands/council.md +318 -0
  7. package/.claude/commands/edit-notebook.md +34 -33
  8. package/.claude/commands/gh/create-issue-label.md +19 -17
  9. package/.claude/commands/gh/decompose-issue.md +66 -65
  10. package/.claude/commands/gh/init-project.md +46 -52
  11. package/.claude/commands/gh/post-merge.md +74 -79
  12. package/.claude/commands/gh/resolve-issue.md +38 -46
  13. package/.claude/commands/plan.md +15 -14
  14. package/.claude/commands/tm/convert-prd.md +53 -53
  15. package/.claude/commands/tm/post-merge.md +92 -112
  16. package/.claude/commands/tm/resolve-issue.md +148 -154
  17. package/.claude/commands/tm/review-prd-with-codex.md +272 -279
  18. package/.claude/commands/tm/sync-to-github.md +189 -212
  19. package/.claude/guidelines/cv-guidelines.md +30 -0
  20. package/.claude/guidelines/id-reference.md +34 -0
  21. package/.claude/guidelines/work-guidelines.md +17 -0
  22. package/.claude/skills/notion-md-uploader/SKILL.md +252 -0
  23. package/.claude/skills/notion-md-uploader/references/notion_block_types.md +323 -0
  24. package/.claude/skills/notion-md-uploader/references/setup_guide.md +156 -0
  25. package/.claude/skills/notion-md-uploader/scripts/__pycache__/markdown_parser.cpython-311.pyc +0 -0
  26. package/.claude/skills/notion-md-uploader/scripts/__pycache__/notion_client.cpython-311.pyc +0 -0
  27. package/.claude/skills/notion-md-uploader/scripts/__pycache__/notion_converter.cpython-311.pyc +0 -0
  28. package/.claude/skills/notion-md-uploader/scripts/markdown_parser.py +607 -0
  29. package/.claude/skills/notion-md-uploader/scripts/notion_client.py +337 -0
  30. package/.claude/skills/notion-md-uploader/scripts/notion_converter.py +477 -0
  31. package/.claude/skills/notion-md-uploader/scripts/upload_md.py +298 -0
  32. package/.claude/skills/skill-creator/LICENSE.txt +202 -0
  33. package/.claude/skills/skill-creator/SKILL.md +209 -0
  34. package/.claude/skills/skill-creator/scripts/init_skill.py +303 -0
  35. package/.claude/skills/skill-creator/scripts/package_skill.py +110 -0
  36. package/.claude/skills/skill-creator/scripts/quick_validate.py +65 -0
  37. package/package.json +1 -1
package/.claude/commands/code-review.md
@@ -1,75 +1,76 @@
  ---
- description: 사전 코드 리뷰 검토 자동 수정
+ description: Pre-commit code review analysis and auto-fix
  ---

- # 코드 리뷰 검토
+ # Code Review Analysis

- ## 목적
+ ## Purpose

- CodeRabbit 등의 도구에서 이미 진행된 코드 리뷰 내용을 검토하고 처리합니다.
+ Review and process code review feedback from external tools like CodeRabbit.

- **핵심:** 외부 도구의 리뷰 결과를 받아서 분석 자동 수정 가능한 것은 즉시 적용 판단이 필요한 것은 사용자에게 확인
+ **Core workflow:** Receive external tool review results -> Analyze -> Auto-fix safe changes immediately -> Request user confirmation for items requiring judgment

- ## 처리 프로세스
+ ## Processing Workflow

- 1. **리뷰 내용 분석**: 제안된 변경사항의 타당성과 영향 범위 평가
- 2. **자동 수정 적용**: 명확하고 안전한 수정사항 즉시 반영
- 3. **체크리스트 제공**: 판단이 필요한 항목은 사용자 확인 요청
+ 1. **Analyze review content**: Evaluate validity and impact scope of suggested changes
+ 2. **Apply auto-fixes**: Immediately apply clear and safe fixes
+ 3. **Provide checklist**: Request user confirmation for items requiring judgment

- ## 자동 수정 vs 체크리스트 기준
+ ## Auto-fix vs Checklist Criteria

- ### 자동 수정 대상
- - 명백한 버그 수정 (null check, 조건문 오류 )
- - 코딩 컨벤션 위반 (포맷팅, 네이밍 )
- - 단순 오타 불필요한 코드 제거
- - race condition, memory leak 등 명확한 결함
+ ### Auto-fix Targets
+ - Obvious bug fixes (null checks, conditional errors, etc.)
+ - Coding convention violations (formatting, naming, etc.)
+ - Simple typos and unnecessary code removal
+ - Clear defects like race conditions, memory leaks

- ### 체크리스트 반환 대상
- - 아키텍처 또는 설계 변경
- - 비즈니스 로직 수정 (기능 동작 변경)
- - 성능 최적화 (트레이드오프 있는 경우)
- - 리뷰 내용이 불명확하거나 논쟁의 여지가 있는 경우
+ ### Checklist Targets
+ - Architecture or design changes
+ - Business logic modifications (functional behavior changes)
+ - Performance optimizations (with trade-offs)
+ - Unclear or controversial review suggestions

- ## 입력 형식
+ ## Input Format

- 아규먼트로 CodeRabbit 리뷰 결과를 제공합니다:
+ Provide CodeRabbit review results as arguments:

  ```
- ⚠️ Potential issue | 🟠 Major
+ Warning: Potential issue | Major

- [문제 설명]
- [상세 내용]
- [제안 변경사항: diff 형식]
+ [Problem description]
+ [Details]
+ [Suggested changes: diff format]

- 📝 Committable suggestion
- 🤖 Prompt for AI Agents
- [파일명, 번호, 수정 내용 요약]
+ Committable suggestion
+ Prompt for AI Agents
+ [Filename, line number, fix summary]
  ```

- ### 예시 (간단)
+ ### Examples

  ```
- ⚠️ Style | 🟡 Minor
+ Warning: Style | Minor

- 변수명이 camelCase를 따르지 않습니다.
+ Variable name does not follow camelCase.

  - const user_name = "John";
  + const userName = "John";
  ```

  ```
- ⚠️ Logic | 🔴 Critical
+ Warning: Logic | Critical

- useEffect의 의존성 배열에 state가 누락되었습니다.
+ State is missing from useEffect dependency array.

  - useEffect(() => { fetch(url); }, []);
  + useEffect(() => { fetch(url); }, [url]);
  ```

- ## 처리 가이드라인
+ ## Guidelines
+
+ - **Critical review**: Validate review suggestions rather than accepting blindly
+ - **Project guidelines first**: Follow `@CLAUDE.md` conventions and architecture principles
+ - **Safety first**: Use `AskUserQuestion` tool to confirm with user when uncertain
+ - **Interactive confirmation**: Use `AskUserQuestion` tool to provide options for items requiring judgment
+ - **Commit after completion**: Commit with appropriate message and push after all fixes are complete

- - **비판적 검토**: 리뷰 내용을 무조건 수용하지 말고 타당성 검증
- - **프로젝트 지침 우선**: `@CLAUDE.md`의 컨벤션 및 아키텍처 원칙 준수
- - **안전 우선**: 확신이 없으면 `AskUserQuestion` 도구로 사용자에게 확인
- - **인터랙티브 확인**: 판단이 필요한 항목은 `AskUserQuestion` 도구를 사용하여 선택지 제공
- - **완료 후 커밋**: 모든 수정이 완료되면 적절한 커밋 메시지로 커밋 후 push
package/.claude/commands/commit-and-push.md
@@ -1,25 +1,25 @@
  ---
- description: Git 변경사항 분석 적절한 커밋 메시지로 커밋 및 푸시
+ description: Analyze Git changes and commit with appropriate message, then push
  ---

- # 커밋 & 푸시
+ # Commit & Push

- 아규먼트로 받은 파일들만을 분석하여 적절한 커밋 메시지를 작성하고 커밋 푸시합니다.
+ Analyze only the files provided as arguments, create an appropriate commit message, commit, and push.

- ## 작업 순서
+ ## Workflow

- 1. **변경 분석**: 아규먼트로 받은 파일들만의 변경 내용이 어떤 목적인지 분석
- - 새로운 기능 추가
- - 버그 수정
- - 리팩토링
- - 문서 업데이트
- - 스타일/포맷팅
- 2. **커밋 메시지 작성**: Conventional Commits 형식으로 명확하게 작성
- 4. **커밋 & 푸시**: `git add` `git commit` `git push`
+ 1. **Analyze changes**: Determine the purpose of changes in the provided files only
+ - New feature addition
+ - Bug fix
+ - Refactoring
+ - Documentation update
+ - Style/formatting
+ 2. **Write commit message**: Write clearly in Conventional Commits format
+ 3. **Commit & Push**: `git add` -> `git commit` -> `git push`

- ## 커밋 메시지 형식
+ ## Commit Message Format

- Conventional Commits 규칙 준수:
+ Follow Conventional Commits rules:

  ```
  <type>: <subject>
@@ -27,26 +27,25 @@ Conventional Commits 규칙 준수:
  [optional body]
  ```

- ### Type 종류
- - `feat`: 새로운 기능 추가
- - `fix`: 버그 수정
- - `refactor`: 코드 리팩토링
- - `docs`: 문서 변경
- - `style`: 코드 포맷팅, 세미콜론 누락
- - `test`: 테스트 코드 추가/수정
- - `chore`: 빌드, 설정 파일 변경
-
- ### Subject 작성 원칙
- - 명령형, 현재 시제 사용
- - 글자 소문자
- - 마침표 없음
- - 50 이내로 간결하게
-
- ## 작성 원칙
- - **한글로 작성 및 답변**
- - **아규먼트로 전달된 파일 외의 파일은 분석하지 않음**
- - **명확성**: 무엇을, 변경했는지 명확히 전달
- - **CLAUDE.md 준수**: `@CLAUDE.md`의 프로젝트 지침 확인
- - **단일 목적**: 하나의 커밋은 하나의 논리적 변경만 포함
-
+ ### Types
+ - `feat`: New feature addition
+ - `fix`: Bug fix
+ - `refactor`: Code refactoring
+ - `docs`: Documentation changes
+ - `style`: Code formatting, missing semicolons, etc.
+ - `test`: Test code addition/modification
+ - `chore`: Build, configuration file changes
+
+ ### Subject Guidelines
+ - Use imperative, present tense
+ - Start with lowercase
+ - No period at the end
+ - Keep it concise (under 50 characters)
+
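A hypothetical commit message following the format and subject rules above (illustrative only, not taken from any real change in the package):

```
feat: add retry logic to crawler scheduler

Retry transient network errors up to three times with exponential backoff
so scheduled crawls recover without manual intervention.
```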
+ ## Guidelines
+
+ - **Do not analyze files other than those provided as arguments**
+ - **Clarity**: Clearly communicate what was changed and why
+ - **Follow CLAUDE.md**: Check project guidelines in `@CLAUDE.md`
+ - **Single purpose**: One commit should contain only one logical change

package/.claude/commands/council.md
@@ -0,0 +1,318 @@
+ ---
+ description: Consult multiple AI models and synthesize collective wisdom (LLM Council)
+ ---
+
+ # LLM Council
+
+ Inspired by Andrej Karpathy's LLM Council: query multiple AI models with the same question, anonymize their responses, and synthesize collective wisdom.
+
+ **Core Philosophy:**
+ - Collective intelligence > single expert opinion
+ - Anonymization prevents model favoritism
+ - Diverse perspectives lead to better answers
+
+ ---
+
+ ## Arguments
+
+ `$ARGUMENTS` parsing:
+
+ 1. **No arguments**: `/council` - Prompt user for question
+ 2. **Question only**: `/council How should I structure this API?`
+ 3. **With evaluate flag**: `/council --evaluate What's the best approach?`
+ 4. **With deep reasoning**: `/council --deep What's the best architecture?`
+ 5. **Combined**: `/council --deep --evaluate Complex design decision?`
+
+ **Flags:**
+ - `--evaluate`: Enable Stage 3 peer evaluation where each model ranks the others
+ - `--deep`: Enable maximum reasoning depth (Codex: xhigh + gpt-5.1-codex-max)
+
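The command itself is a Claude Code slash command, so no parser ships with the package; purely as an illustration of the flag handling described above, a Python sketch with a hypothetical helper name might look like:

```python
def parse_arguments(arguments: str) -> tuple[bool, bool, str]:
    """Split $ARGUMENTS into the --evaluate / --deep flags and the question text."""
    tokens = arguments.split()
    evaluate = "--evaluate" in tokens
    deep = "--deep" in tokens
    question = " ".join(t for t in tokens if t not in ("--evaluate", "--deep"))
    return evaluate, deep, question  # an empty question means: prompt the user

# parse_arguments("--deep --evaluate Complex design decision?")
# -> (True, True, "Complex design decision?")
```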
+ ---
+
+ ## Context Gathering (Before Stage 1)
+
+ Before querying models, collect relevant context:
+
+ **Auto-collect:**
+ ```
+ - git status / git diff (current changes)
+ - Directory structure (tree -L 2)
+ - Files mentioned in conversation (Read/Edit history)
+
+ Model-specific guidelines (project root):
+ - ./CLAUDE.md (Claude Opus/Sonnet)
+ - ./AGENTS.md (Codex)
+ - ./gemini.md (Gemini)
+ ```
+
+ **File Path Inclusion:**
+ When the question involves specific files or images, provide **exact absolute paths** so each model can read them directly:
+
+ ```
+ Prompt addition:
+ "Relevant files for this question:
+ - /absolute/path/to/file.py (source code)
+ - /absolute/path/to/screenshot.png (UI reference)
+ - /absolute/path/to/diagram.jpg (architecture)
+
+ You can use Read tool to examine these files if needed."
+ ```
+
+ **Model-specific file access:**
+ | Model | File Access Method |
+ |-------|-------------------|
+ | Claude Opus/Sonnet | Read tool (images supported) |
+ | Codex | sandbox read-only file access |
+ | Gemini | Include file content in prompt, or instruct to read |
+
+ **Sensitive Data Filtering (exclude from prompts):**
+ ```
+ Files: .env*, secrets*, *credentials*, *.pem, *.key
+ Patterns: sk-[a-zA-Z0-9]+, Bearer tokens, passwords
+ Directories: node_modules/, __pycache__/, .git/
+ ```
+
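A minimal Python sketch of the context collection and filtering described above; the helper names and exact pattern lists are illustrative assumptions, not the package's implementation:

```python
import fnmatch
import re
import subprocess

SENSITIVE_FILES = [".env*", "secrets*", "*credentials*", "*.pem", "*.key"]
SENSITIVE_PATTERNS = [re.compile(r"sk-[a-zA-Z0-9]+"), re.compile(r"Bearer\s+\S+")]
EXCLUDED_DIRS = ("node_modules/", "__pycache__/", ".git/")

def collect_context() -> str:
    """Gather git status and diff as the shared context for all models."""
    status = subprocess.run(["git", "status", "--short"], capture_output=True, text=True).stdout
    diff = subprocess.run(["git", "diff"], capture_output=True, text=True).stdout
    return f"## git status\n{status}\n## git diff\n{diff}"

def is_sensitive(path: str) -> bool:
    """Exclude secret-bearing files and noisy directories from prompts."""
    if any(path.startswith(d) or f"/{d}" in path for d in EXCLUDED_DIRS):
        return True
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch.fnmatch(name, pattern) for pattern in SENSITIVE_FILES)

def redact(text: str) -> str:
    """Mask API keys and bearer tokens before they reach any model."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```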
+ **Prompt Size Management:**
+ ```
+ - Large files (>500 lines): include only relevant sections or diff
+ - Max 5 files per prompt
+ - Prefer git diff over full file content
+ - If timeout occurs: reduce context, retry
+ ```
+
+ ---
+
+ ## Execution
+
+ ### Stage 1: Collect Responses
+
+ Query all 4 models **in parallel** with identical prompt:
+
+ ```
+ Models:
+ - Claude Opus → Task(model="opus", subagent_type="general-purpose")
+ - Claude Sonnet → Task(model="sonnet", subagent_type="general-purpose")
+ - Codex → mcp__codex__codex(sandbox="read-only", reasoningEffort="high")
+ - Gemini → Bash: cat <<'EOF' | gemini -p -
+ ```
+
+ **Reasoning Control:**
+ | Model | Default | --deep |
+ |-------|---------|--------|
+ | Claude Opus | standard | same |
+ | Claude Sonnet | standard | same |
+ | Codex | `reasoningEffort="high"` | `xhigh` + `gpt-5.1-codex-max` |
+ | Gemini | CLI control N/A | CLI control N/A |
+
+ Note:
+ - Codex `xhigh` only works with `gpt-5.1-codex-max` or `gpt-5.2`
+ - Gemini CLI doesn't support thinking level control yet (Issue #6693)
+
+ **Prompt template for each model:**
+ ```
+ You are participating in an LLM Council deliberation.
+
+ ## Guidelines
+ Read and follow your project guidelines before answering:
+ - Claude models: Read ./CLAUDE.md
+ - Codex: Read ./AGENTS.md
+ - Gemini: Read ./gemini.md
+
+ ## Question
+ [USER_QUESTION]
+
+ ## Context
+ Working Directory: [ABSOLUTE_PATH]
+
+ ## Relevant Files (READ these directly for accurate context)
+ - [/absolute/path/to/file1.ext] - [brief description]
+ - [/absolute/path/to/image.png] - [screenshot/diagram description]
+ - [/absolute/path/to/file2.ext] - [brief description]
+
+ Do NOT ask for file contents. Use your file access tools to READ them directly.
+
+ ## Current Changes (if applicable)
+ [git diff summary or key changes]
+
+ ## Instructions
+ Provide your best answer. Be concise but thorough.
+ Focus on accuracy, practicality, and actionable insights.
+ ```
+
+ **Important:**
+ - Use `run_in_background: true` for Task calls to enable true parallelism
+ - Set timeout: 120000ms for each call
+ - Continue with successful responses if some models fail
+
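In the command itself this fan-out runs through Claude Code's Task, MCP, and Bash tools; purely as an illustration of parallel collection with a per-call timeout and partial-failure tolerance, a Python sketch with a hypothetical `query_model` adapter could look like:

```python
from concurrent.futures import ThreadPoolExecutor, wait

MODELS = ["opus", "sonnet", "codex", "gemini"]
TIMEOUT_SECONDS = 120  # mirrors the 120000ms per-call timeout above

def query_model(model: str, prompt: str) -> str:
    """Hypothetical adapter: send the shared prompt to one model and return its answer."""
    raise NotImplementedError  # Task / mcp__codex__codex / gemini CLI in the real command

def collect_responses(prompt: str) -> dict[str, str]:
    """Query all models in parallel; keep whatever succeeds and note failures."""
    responses: dict[str, str] = {}
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {pool.submit(query_model, m, prompt): m for m in MODELS}
        done, not_done = wait(futures, timeout=TIMEOUT_SECONDS)
        for future in done:
            model = futures[future]
            try:
                responses[model] = future.result()
            except Exception as exc:            # model returned an error
                print(f"{model} failed: {exc}")
        for future in not_done:                 # model did not finish in time
            print(f"{futures[future]} timed out, continuing without it")
            future.cancel()
    return responses
```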
+ ### Stage 2: Anonymize
+
+ After collecting responses:
+
+ 1. **Shuffle** the response order randomly
+ 2. **Assign labels**: Response A, Response B, Response C, Response D
+ 3. **Create mapping** (keep internal, reveal later):
+ ```
+ label_to_model = {
+ "Response A": "gemini",
+ "Response B": "opus",
+ "Response C": "sonnet",
+ "Response D": "codex"
+ }
+ ```
+
+ 4. **Display** anonymized responses to user in a structured format
+
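A small sketch of the shuffle-and-label step; the mapping shown above is just one possible outcome of the shuffle:

```python
import random
import string

def anonymize(responses: dict[str, str]) -> tuple[dict[str, str], dict[str, str]]:
    """Shuffle model responses and relabel them Response A, B, C, ..."""
    items = list(responses.items())
    random.shuffle(items)
    labeled, label_to_model = {}, {}
    for letter, (model, text) in zip(string.ascii_uppercase, items):
        label = f"Response {letter}"
        labeled[label] = text
        label_to_model[label] = model  # kept internal until the reveal in Stage 4
    return labeled, label_to_model
```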
+ ### Stage 3: Peer Evaluation (if --evaluate)
+
+ If `--evaluate` flag is present, query each model again with all anonymized responses:
+
+ **Evaluation prompt:**
+ ```
+ You are evaluating responses from an LLM Council.
+
+ Original Question: [USER_QUESTION]
+
+ [ANONYMIZED_RESPONSES]
+
+ Instructions:
+ 1. Evaluate each response for accuracy, completeness, and usefulness
+ 2. Identify strengths and weaknesses of each
+ 3. End with "FINAL RANKING:" section
+
+ FINAL RANKING:
+ 1. Response [X] - [brief reason]
+ 2. Response [Y] - [brief reason]
+ 3. Response [Z] - [brief reason]
+ 4. Response [W] - [brief reason]
+ ```
+
+ **Calculate aggregate rankings:**
+ - Parse each model's ranking
+ - Sum rank positions for each response
+ - Lower total = higher collective approval
+
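The aggregation, including the regex-based parsing that the Error Handling section below falls back on, could be sketched roughly as follows; the function names are illustrative assumptions rather than the package's implementation:

```python
import re

RANK_LINE = re.compile(r"^\s*(\d+)\.\s*Response\s+([A-Z])", re.MULTILINE)

def parse_ranking(evaluation: str) -> dict[str, int]:
    """Extract {label: position} from the text after 'FINAL RANKING:'."""
    _, _, tail = evaluation.partition("FINAL RANKING:")
    return {f"Response {label}": int(pos) for pos, label in RANK_LINE.findall(tail)}

def aggregate(evaluations: list[str]) -> list[tuple[str, int]]:
    """Sum rank positions across evaluators; a lower total means higher approval."""
    totals: dict[str, int] = {}
    for evaluation in evaluations:
        for label, position in parse_ranking(evaluation).items():
            totals[label] = totals.get(label, 0) + position
    return sorted(totals.items(), key=lambda item: item[1])
```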
+ ### Stage 4: Synthesize
+
+ As the main agent (moderator), synthesize the final answer:
+
+ 1. **Reveal** the label-to-model mapping
+ 2. **Analyze** all responses:
+ - Consensus points (where models agree)
+ - Disagreements (where they differ)
+ - Unique insights (valuable points from individual models)
+ 3. **If --evaluate**: Include aggregate rankings and "street cred" scores
+ 4. **Produce** final verdict combining best elements
+
+ ---
+
+ ## Output Format
+
+ ```markdown
+ ## LLM Council Deliberation
+
+ ### Question
+ [Original user question]
+
+ ### Individual Responses (Anonymized)
+
+ #### Response A
+ [Content]
+
+ #### Response B
+ [Content]
+
+ #### Response C
+ [Content]
+
+ #### Response D
+ [Content]
+
+ ### Model Reveal
+ | Label | Model |
+ |-------|-------|
+ | Response A | [model name] |
+ | Response B | [model name] |
+ | Response C | [model name] |
+ | Response D | [model name] |
+
+ ### Aggregate Rankings (if --evaluate)
+ | Rank | Model | Avg Position | Votes |
+ |------|-------|--------------|-------|
+
+ ### Council Synthesis
+
+ #### Consensus
+ [Points where all/most models agree]
+
+ #### Disagreements
+ [Points of divergence with analysis]
+
+ #### Unique Insights
+ [Valuable contributions from specific models]
+
+ ### Final Verdict
+ [Synthesized answer combining collective wisdom]
+ ```
+
+ ---
+
+ ## Error Handling
+
+ - **Model timeout**: Continue with successful responses, note failures
+ - **All models fail**: Report error, suggest retry
+ - **Parse failure in rankings**: Use fallback regex extraction
+ - **Empty response**: Exclude from synthesis, note in output
+
+ ---
+
+ ## Examples
+
+ ```bash
+ # Basic council consultation
+ /council What's the best way to implement caching in this API?
+
+ # With peer evaluation for important decisions
+ /council --evaluate Should we use microservices or monolith for this project?
+
+ # Architecture review
+ /council --evaluate Review the current authentication flow and suggest improvements
+ ```
+
+ ---
+
+ ## User Interaction
+
+ Use `AskUserQuestion` tool when clarification is needed:
+
+ **Before Stage 1:**
+ - Question is ambiguous or too broad
+ - Missing critical context (e.g., "review this code" but no file specified)
+ - Multiple interpretations possible
+
+ **During Execution:**
+ - Conflicting requirements detected
+ - Need to confirm scope (e.g., "Should I include performance considerations?")
+
+ **After Synthesis:**
+ - Models strongly disagree and user input would help decide
+ - Actionable next steps require user confirmation
+
+ **Example questions:**
+ ```
+ - "Your question mentions 'the API' - which specific endpoint or service?"
+ - "Should the council focus on: (1) Code quality, (2) Architecture, (3) Performance, or (4) All aspects?"
+ - "Models disagree on X vs Y approach. Which aligns better with your constraints?"
+ ```
+
+ **Important:** Never assume or guess when context is unclear. Ask first, then proceed.
+
+ ---
+
+ ## Guidelines
+
+ - Respond in the same language as the user's question
+ - No emojis in code or documentation
+ - If context is needed, gather it before querying models
+ - For code-related questions, include relevant file snippets in the prompt
+ - Respect `@CLAUDE.md` project conventions
+ - **Never assume unclear context - use AskUserQuestion to clarify**
package/.claude/commands/edit-notebook.md
@@ -1,42 +1,43 @@
  ---
- description: Jupyter Notebook(.ipynb) 파일 안전 편집
+ description: Safely edit Jupyter Notebook (.ipynb) files
  ---

- # Notebook 편집
+ # Notebook Editing

- Jupyter Notebook 파일을 올바른 도구로 안전하게 편집합니다.
+ Safely edit Jupyter Notebook files using the correct tools.

- ## 필수 규칙
+ ## Required Rules

- 1. **도구 사용**: `NotebookEdit` 도구만 사용
- - `Edit`, `Write`, `search_replace` 등 텍스트 편집 도구 사용 금지
- - .ipynb JSON 구조이므로 전용 도구 필수
+ 1. **Tool usage**: Use only the `NotebookEdit` tool
+ - Do not use text editing tools like `Edit`, `Write`, `search_replace`
+ - .ipynb files are JSON structures and require dedicated tools

- 2. **셀 삽입 순서**: 노트북 작성 순서 보장 필수
- - **중요**: `cell_id` 미지정 항상 맨 앞에 삽입됨
- - **방법 1 (권장)**: 이전 셀의 cell_id 추적하여 순차 삽입
+ 2. **Cell insertion order**: Ensure correct order when creating new notebooks
+ - **Important**: Cells are inserted at the beginning if `cell_id` is not specified
+ - **Method 1 (recommended)**: Track previous cell's cell_id for sequential insertion
  ```
- 삽입 cell_id='abc123' 반환
- 번째 삽입 cell_id='abc123' 지정 다음에 삽입
- 번째 삽입 cell_id='def456' 지정 번째 셀 다음에 삽입
+ Insert first cell -> Returns cell_id='abc123'
+ Insert second cell with cell_id='abc123' -> Inserted after first cell
+ Insert third cell with cell_id='def456' -> Inserted after second cell
  ```
- - **방법 2**: 역순 삽입 (마지막 셀부터 셀까지)
- - **절대 금지**: cell_id 없이 순차 삽입 (역순 결과 발생)
-
- 3. **source 형식**: NotebookEdit 사용 source가 string으로 저장될 있음
- - **문제**: `"line1\\nline2"` (string) Jupyter에서 `\n`이 그대로 표시됨
- - **정상**: `["line1\n", "line2\n"]` (list of strings)
- - **해결**: 노트북 생성 Python으로 직접 JSON 작성 권장
-
- 4. **편집 검증**: 수정 반드시 확인
- - JSON 구문 유효성 검증
- - 실행 순서 보존 확인 (Read로 30 확인)
- - 함수/임포트/의존성 누락 확인
- - 명시적 지시가 없으면 출력 보존
-
- ## 처리 원칙
-
- - **전용 도구만 사용**: NotebookEdit 다른 편집 도구 절대 금지
- - **구조 보존**: 기존 순서 출력 유지
- - **검증 필수**: 편집 즉시 변경사항 확인
- - **CLAUDE.md 준수**: 프로젝트 지침 엄수
+ - **Method 2**: Insert in reverse order (from last cell to first)
+ - **Never**: Insert sequentially without cell_id (causes reverse order)
+
+ 3. **Source format**: Source may be saved as string when using NotebookEdit
+ - **Problem**: `"line1\\nline2"` (string) -> `\n` displayed literally in Jupyter
+ - **Correct**: `["line1\n", "line2\n"]` (list of strings)
+ - **Solution**: When creating new notebooks, directly write JSON using Python
+
+ 4. **Post-edit verification**: Always verify after modifications
+ - Validate JSON syntax
+ - Confirm cell execution order is preserved (check first 30 lines with Read)
+ - Check for missing functions/imports/dependencies
+ - Preserve cell outputs unless explicitly instructed otherwise
+
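As an illustration of the approach recommended in rule 3, writing the notebook JSON directly from Python so that each cell's `source` is a list of strings (a minimal sketch, not the package's code):

```python
import json

def make_code_cell(lines: list[str]) -> dict:
    """Build a code cell whose source is a list of strings, one per line."""
    return {
        "cell_type": "code",
        "metadata": {},
        "execution_count": None,
        "outputs": [],
        "source": [line if line.endswith("\n") else line + "\n" for line in lines],
    }

notebook = {
    "cells": [
        make_code_cell(["import pandas as pd", "df = pd.DataFrame({'x': [1, 2, 3]})"]),
        make_code_cell(["df.describe()"]),
    ],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 5,
}

# Cells are written in document order, so no cell_id bookkeeping is needed.
with open("example.ipynb", "w", encoding="utf-8") as f:
    json.dump(notebook, f, ensure_ascii=False, indent=1)
```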
+ ## Guidelines
+
+ - **Use dedicated tool only**: Never use editing tools other than NotebookEdit
+ - **Preserve structure**: Maintain existing cell order and outputs
+ - **Verification required**: Immediately verify changes after editing
+ - **Follow CLAUDE.md**: Adhere strictly to project guidelines
+
package/.claude/commands/gh/create-issue-label.md
@@ -1,29 +1,31 @@
- ## 이슈 라벨 생성하기
+ ## Create Issue Labels

- 프로젝트 구조를 분석하여 적절한 GitHub 이슈 라벨을 생성합니다. `@CLAUDE.md`의 프로젝트 지침을 준수할 것.
+ Analyze project structure and create appropriate GitHub issue labels. Follow project guidelines in `@CLAUDE.md`.

- ## 작업 순서
- 1. 프로젝트 분석: `package.json`, `README.md`, 코드 구조 파악
- 2. 기술 스택 확인: 사용된 프레임워크, 라이브러리, 도구 식별
- 3. 프로젝트 영역 분류: 프론트엔드, 백엔드, API, 인프라 등
- 4. 라벨 생성: 타입, 영역, 난이도 기반으로 핵심 라벨만 생성
+ ## Workflow

- ## 라벨 생성 기준
+ 1. Analyze project: Examine `package.json`, `README.md`, and code structure
+ 2. Identify tech stack: Detect frameworks, libraries, and tools in use
+ 3. Classify project areas: Frontend, backend, API, infrastructure, etc.
+ 4. Create labels: Generate essential labels based on type, area, and complexity

- **아래는 예시이며, 프로젝트에 맞게 조정해서 생성해주세요.**
+ ## Label Guidelines

- ### 타입 (Type)
+ **The following are examples; adjust based on your project needs.**
+
+ ### Type
  - `type: feature`, `type: bug`, `type: enhancement`, `type: documentation`, `type: refactor`

- ### 영역 (Area)
+ ### Area
  - `frontend` `backend` `api` `devops`, `crawling` `ai` `database` `infrastructure`

- ### 난이도 (Complexity)
+ ### Complexity
  - `complexity: easy` `complexity: medium` `complexity: hard`

- ## 생성 명령어 예시
+ ## Example Commands
+
  ```bash
- gh label create "type: feature" --color "0e8a16" --description "새로운 기능 추가"
- gh label create "frontend" --color "1d76db" --description "프론트엔드 관련 작업"
- gh label create "complexity: easy" --color "7057ff" --description "간단한 작업"
- ```
+ gh label create "type: feature" --color "0e8a16" --description "New feature addition"
+ gh label create "frontend" --color "1d76db" --description "Frontend-related work"
+ gh label create "complexity: easy" --color "7057ff" --description "Simple task"
+ ```