@yeongjaeyou/claude-code-config 0.5.0 → 0.5.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,146 +1,176 @@
  # Claude Code Config
 
- A collection of custom slash commands, agents, and skills for Claude Code CLI.
+ A collection of custom slash commands, agents, and skills for Claude Code CLI.
 
- ## Structure
+ ## Structure
 
  ```
  .claude/
- ├── commands/ # Slash commands
- │ ├── ask-codex.md # Request code review via Codex MCP
- │ ├── ask-deepwiki.md # Deep query GitHub repos via DeepWiki
- │ ├── ask-gemini.md # Request code review via Gemini CLI
- │ ├── code-review.md # Process external CodeRabbit reviews
- │ ├── commit-and-push.md # Automate Git commit and push
- │ ├── edit-notebook.md # Safely edit Jupyter Notebooks
- │ ├── plan.md # Draft implementation plan (before writing code)
+ ├── commands/ # Slash commands
+ │ ├── ask-codex.md # Request code review via Codex MCP
+ │ ├── ask-deepwiki.md # Deep query GitHub repos via DeepWiki
+ │ ├── ask-gemini.md # Request code review via Gemini CLI
+ │ ├── code-review.md # Process external code reviews (CodeRabbit, etc.)
+ │ ├── commit-and-push.md # Automate Git commit and push
+ │ ├── council.md # Consult multiple AI models (LLM Council)
+ │ ├── edit-notebook.md # Safely edit Jupyter Notebooks
+ │ ├── generate-llmstxt.md # Generate llms.txt from URL or directory
  │ ├── gh/
- │ │ ├── create-issue-label.md # Create GitHub issue labels
- │ │ ├── decompose-issue.md # Decompose work into issues
- │ │ ├── init-project.md # Initialize GitHub Project board
- │ │ ├── post-merge.md # Post-merge cleanup
- │ │ └── resolve-issue.md # GitHub issue resolution workflow
+ │ │ ├── create-issue-label.md # Create GitHub issue labels
+ │ │ ├── decompose-issue.md # Decompose large work into issues
+ │ │ ├── init-project.md # Initialize GitHub Project board
+ │ │ ├── post-merge.md # Post-merge cleanup
+ │ │ └── resolve-issue.md # GitHub issue resolution workflow
  │ └── tm/
- │ ├── convert-prd.md # Convert PRD draft to TaskMaster format
- │ ├── post-merge.md # TaskMaster-integrated post-merge cleanup
- │ ├── resolve-issue.md # TaskMaster-based issue resolution
- │ ├── review-prd-with-codex.md # Review PRD with Codex
- │ └── sync-to-github.md # Sync TaskMaster -> GitHub
- ├── agents/ # Custom agents
- │ ├── web-researcher.md # Multi-platform web research
- │ ├── python-pro.md # Python expert
- │ ├── generate-llmstxt.md # Generate llms.txt
- │ └── langconnect-rag-expert.md # RAG-based document retrieval
- └── skills/ # Skills (reusable tool collections)
- └── code-explorer/ # GitHub/HuggingFace code exploration
+ │ ├── convert-prd.md # Convert PRD draft to TaskMaster format
+ │ ├── post-merge.md # TaskMaster-integrated post-merge cleanup
+ │ ├── resolve-issue.md # TaskMaster-based issue resolution
+ │ └── sync-to-github.md # Sync TaskMaster -> GitHub
+ ├── guidelines/ # Shared guidelines
+ │ ├── work-guidelines.md # Common work guidelines
+ │ └── id-reference.md # GitHub/TaskMaster ID reference
+ ├── agents/ # Custom agents
+ │ ├── web-researcher.md # Multi-platform web research
+ │ └── python-pro.md # Python expert
+ └── skills/ # Skills (reusable tool collections)
+ ├── code-explorer/ # GitHub/HuggingFace code exploration
+ │ ├── SKILL.md
+ │ ├── scripts/
+ │ └── references/
+ ├── feature-implementer/ # TDD-based feature planning
+ │ ├── SKILL.md
+ │ └── plan-template.md
+ ├── notion-md-uploader/ # Upload Markdown to Notion
+ │ ├── SKILL.md
+ │ ├── scripts/
+ │ └── references/
+ └── skill-creator/ # Guide for creating skills
  ├── SKILL.md
- ├── scripts/
- │ ├── search_github.py
- │ └── search_huggingface.py
- └── references/
- ├── github_api.md
- └── huggingface_api.md
+ └── scripts/
  ```
 
- ## Slash Commands
+ ## Slash Commands
 
- ### General Commands
+ ### General Commands
 
- | Command | Description |
- |--------|------|
- | `/plan` | Analyze requirements and draft an implementation plan only (no code written) |
- | `/commit-and-push` | Analyze changed files and commit/push in Conventional Commits format |
- | `/code-review` | Analyze external CodeRabbit review results and apply auto-fixes |
- | `/edit-notebook` | Safely edit Jupyter Notebook files with the NotebookEdit tool |
- | `/ask-deepwiki` | Deep query GitHub repositories via DeepWiki MCP |
- | `/ask-codex` | Request review of current work via Codex MCP (with Claude cross-check) |
- | `/ask-gemini` | Request review of current work via Gemini CLI (with Claude cross-check) |
+ | Command | Description |
+ |---------|-------------|
+ | `/commit-and-push` | Analyze changes and commit with Conventional Commits format |
+ | `/code-review` | Process external code review results and apply auto-fixes |
+ | `/edit-notebook` | Safely edit Jupyter Notebook files with NotebookEdit tool |
+ | `/generate-llmstxt` | Generate llms.txt from URL or local directory |
+ | `/ask-deepwiki` | Deep query GitHub repositories via DeepWiki MCP |
+ | `/ask-codex` | Request code review via Codex MCP (with Claude cross-check) |
+ | `/ask-gemini` | Request code review via Gemini CLI (with Claude cross-check) |
+ | `/council` | Consult multiple AI models and synthesize collective wisdom |
 
- ### GitHub Workflow Commands (`/gh/`)
+ ### GitHub Workflow Commands (`/gh/`)
 
- | Command | Description |
- |--------|------|
- | `/gh/create-issue-label` | Analyze project structure and create appropriate GitHub issue labels |
- | `/gh/decompose-issue` | Decompose work into manageable, independent issues |
- | `/gh/init-project` | Initialize and configure a GitHub Project board |
- | `/gh/post-merge` | Clean up branches and update CLAUDE.md after PR merge |
- | `/gh/resolve-issue` | Systematically analyze and resolve a GitHub issue by number |
+ | Command | Description |
+ |---------|-------------|
+ | `/gh/create-issue-label` | Analyze project and create appropriate GitHub issue labels |
+ | `/gh/decompose-issue` | Decompose large work into manageable independent issues |
+ | `/gh/init-project` | Initialize and configure GitHub Project board |
+ | `/gh/post-merge` | Clean up branch and update CLAUDE.md after PR merge |
+ | `/gh/resolve-issue` | Systematically analyze and resolve GitHub issues |
 
- ### TaskMaster Integration Commands (`/tm/`)
+ ### TaskMaster Integration Commands (`/tm/`)
 
- | Command | Description |
- |--------|------|
- | `/tm/convert-prd` | Convert a PRD draft to TaskMaster PRD format |
- | `/tm/sync-to-github` | Sync TaskMaster tasks.json to GitHub Issues/Milestones |
- | `/tm/resolve-issue` | Systematically resolve GitHub Issues by TaskMaster subtask units |
- | `/tm/review-prd-with-codex` | Review a PRD via Codex MCP with Claude cross-check |
- | `/tm/post-merge` | Update TaskMaster status and clean up branches after PR merge |
+ | Command | Description |
+ |---------|-------------|
+ | `/tm/convert-prd` | Convert PRD draft to TaskMaster PRD format |
+ | `/tm/sync-to-github` | Sync TaskMaster tasks.json to GitHub Issues/Milestones |
+ | `/tm/resolve-issue` | Resolve GitHub Issues by TaskMaster subtask units |
+ | `/tm/post-merge` | TaskMaster status update and branch cleanup after PR merge |
 
- ## Agents
+ ## Agents
 
- | Agent | Description |
- |----------|------|
- | `web-researcher` | Conduct tech research across platforms (Reddit, GitHub, SO, HF, arXiv, etc.) |
- | `python-pro` | Expert in advanced Python features (decorators, generators, async/await) |
- | `generate-llmstxt` | Generate llms.txt docs from a website or local directory |
- | `langconnect-rag-expert` | Retrieve and synthesize information from RAG-based document collections |
+ | Agent | Description |
+ |-------|-------------|
+ | `web-researcher` | Multi-platform tech research (Reddit, GitHub, SO, HF, arXiv, etc.) |
+ | `python-pro` | Python advanced features expert (decorators, generators, async/await) |
 
- ## Skills
+ ## Skills
 
  ### code-explorer
 
- A skill for searching and analyzing code/models/datasets on GitHub and Hugging Face.
+ Search and analyze code/models/datasets on GitHub and Hugging Face.
 
  ```bash
- # Search GitHub repositories
+ # Search GitHub repositories
  python scripts/search_github.py "object detection" --limit 10
 
- # Search Hugging Face models/datasets/Spaces
+ # Search Hugging Face models/datasets/Spaces
  python scripts/search_huggingface.py "qwen vl" --type models
  ```
 
- ## Installation
+ ### feature-implementer
 
- ### Install with npx (Recommended)
+ TDD-based feature planning with quality gates.
+
+ - Phase-based plans with 1-4 hour increments
+ - Test-First Development (Red-Green-Refactor)
+ - Quality gates before each phase transition
+ - Risk assessment and rollback strategies
+
+ ### notion-md-uploader
+
+ Upload Markdown files to Notion pages with full formatting support.
+
+ - Headings, lists, code blocks, images, tables, callouts, todos
+ - Automatic image uploads
+ - Preserved formatting
+
+ ### skill-creator
+
+ Guide for creating effective Claude Code skills.
+
+ - Skill structure templates
+ - Best practices
+ - Validation scripts
+
+ ## Installation
+
+ ### Install with npx (Recommended)
 
  ```bash
- # Install to the current project
+ # Install to current project
  npx @yeongjaeyou/claude-code-config
 
- # Global install (for use in all projects)
+ # Global install (available in all projects)
  npx @yeongjaeyou/claude-code-config --global
  ```
 
- ### Manual Installation
+ ### Manual Installation
 
  ```bash
- # Method 1: Direct copy
+ # Method 1: Direct copy
  git clone https://github.com/YoungjaeDev/claude-code-config.git
  cp -r claude-code-config/.claude /path/to/your/project/
 
- # Method 2: Symbolic link
+ # Method 2: Symbolic link
  ln -s /path/to/claude-code-config/.claude /path/to/your/project/.claude
  ```
 
- ### CLI Options
+ ### CLI Options
 
  ```bash
- npx @yeongjaeyou/claude-code-config [options]
+ npx @yeongjaeyou/claude-code-config [options]
 
- Options:
-   -g, --global   Global install (~/.claude/)
-   -h, --help     Show help
-   -v, --version  Show version
+ Options:
+   -g, --global   Global install (~/.claude/)
+   -h, --help     Show help
+   -v, --version  Show version
  ```
 
- ## MCP Server Configuration
+ ## MCP Server Configuration
 
- You can copy the bundled `.mcp.json` into your project root.
+ The package includes `.mcp.json` that can be copied to your project root.
 
- ### Prerequisites
+ ### Prerequisites
 
- - **Node.js**: for the `npx` command
- - **Python uv**: [Install uv](https://docs.astral.sh/uv/getting-started/installation/)
+ - **Node.js**: For npx command
+ - **Python uv**: [Install uv](https://docs.astral.sh/uv/getting-started/installation/)
  ```bash
  # macOS/Linux
  curl -LsSf https://astral.sh/uv/install.sh | sh
@@ -149,68 +179,78 @@ npx @yeongjaeyou/claude-code-config [options]
  powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
  ```
 
- ### Included MCP Servers
+ ### Included MCP Servers
 
- | Server | Description | API |
- |------|------|--------|
- | mcpdocs | Claude Code, Cursor docs search (mcp-cache wrapper) | Not required |
- | deepwiki | GitHub repository documentation queries | Not required |
- | serena | LSP-based code analysis tool | Not required |
+ | Server | Description | API Key |
+ |--------|-------------|---------|
+ | mcpdocs | Claude Code, Cursor docs search (mcp-cache wrapper) | Not required |
+ | deepwiki | GitHub repository documentation query | Not required |
+ | serena | LSP-based code analysis tool | Not required |
 
- ### Usage
+ ### Usage
 
  ```bash
- # Copy .mcp.json
+ # Copy .mcp.json
  cp node_modules/@yeongjaeyou/claude-code-config/.mcp.json .
 
- # Or merge its contents into your existing .mcp.json
+ # Or merge contents if you have an existing .mcp.json
  ```
 
- > **Note**: If you have your own `.mcp.json`, manually merge the `mcpServers` object.
- 
- ## Usage
+ > **Note**: If you have your own `.mcp.json`, manually merge the `mcpServers` object.
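The manual merge the note describes can be sketched in Python. This is a minimal illustration only, assuming the standard `.mcp.json` shape with a top-level `mcpServers` object; the server entries below are hypothetical values, and the package does not ship such a merge helper:

```python
import json

def merge_mcp_config(existing: dict, packaged: dict) -> dict:
    """Merge the packaged mcpServers entries into an existing .mcp.json dict,
    letting the project's own entries win on name collisions."""
    merged = dict(existing)
    merged["mcpServers"] = {
        **packaged.get("mcpServers", {}),
        **existing.get("mcpServers", {}),  # existing entries take precedence
    }
    return merged

# Hypothetical example values, not the package's actual server definitions
mine = {"mcpServers": {"serena": {"command": "uvx"}}}
packaged = {"mcpServers": {"deepwiki": {"command": "npx"}}}
merged = merge_mcp_config(mine, packaged)
print(json.dumps(sorted(merged["mcpServers"])))  # ["deepwiki", "serena"]
```

Reading both files with `json.load`, merging, and writing the result back preserves any local server configuration while picking up the packaged servers.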
 
- ### Examples
+ ## Usage Examples
 
  ```bash
- # Draft an implementation plan
- /plan Implement a new authentication system
+ # Generate llms.txt from website
+ /generate-llmstxt https://docs.example.com
 
- # Commit and push
+ # Commit and push
  /commit-and-push src/auth.ts src/utils.ts
 
- # Resolve GitHub issue
+ # Resolve GitHub issue
  /gh/resolve-issue 42
+
+ # Consult AI council
+ /council "Should we use REST or GraphQL for this API?"
  ```
 
- ## Key Features
+ ## Key Features
+
+ ### `/generate-llmstxt` - LLM Documentation
+ - Generate llms.txt from URL or local directory
+ - Use Firecrawl MCP for web scraping
+ - Organize content into logical sections
+
+ ### `/commit-and-push` - Git Automation
+ - Follow Conventional Commits format (feat, fix, refactor, docs, etc.)
+ - Analyze changes and generate appropriate commit message
+ - Selectively commit specified files only
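A Conventional Commits subject line has the shape `type(scope): description`. A rough check of that shape can be sketched as follows (an illustrative simplification; the command's actual validation logic is not published in this README, and the messages are made-up examples):

```python
import re

# Simplified Conventional Commits pattern: type, optional scope, colon, text
CC_PATTERN = re.compile(r"^(feat|fix|refactor|docs|test|chore)(\([a-z0-9-]+\))?: .+")

def is_conventional(subject: str) -> bool:
    """Return True if the commit subject matches the simplified pattern."""
    return bool(CC_PATTERN.match(subject))

print(is_conventional("feat(auth): add OAuth2 token refresh"))  # True
print(is_conventional("updated stuff"))                         # False
```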
 
- ### `/plan` - Implementation Planning
- - Understand the intent of the requirements and ask questions when unclear
- - Investigate and understand the relevant codebase
- - Write a step-by-step execution plan
- - **Only plans; does not write code right away**
+ ### `/edit-notebook` - Jupyter Notebook Editing
+ - Use only `NotebookEdit` tool (protect JSON structure)
+ - Track cell_id for correct insertion order
+ - Include source format issue resolution guide
 
- ### `/commit-and-push` - Git Automation
- - Follows Conventional Commits format (feat, fix, refactor, docs, etc.)
- - Analyzes changes and generates an appropriate commit message
- - Selectively commits only the specified files
+ ### `/council` - LLM Council
+ - Query multiple AI models (Opus, Sonnet, Codex, Gemini) in parallel
+ - Anonymize responses for unbiased evaluation
+ - Synthesize collective wisdom into consensus
 
- ### `/edit-notebook` - Jupyter Notebook Editing
- - Uses only the `NotebookEdit` tool (protects JSON structure)
- - Tracks cell_id to guarantee insertion order
- - Includes a guide for resolving source format issues
+ ### `web-researcher` Agent
+ - Multi-platform search: GitHub (`gh` CLI), Hugging Face, Reddit, SO, arXiv
+ - Official documentation via Context7/DeepWiki
+ - Auto-generate research reports
 
- ### `web-researcher` Agent
- - Multi-platform search: GitHub (`gh` CLI), Hugging Face (`huggingface_hub`), Reddit, SO, arXiv, etc.
- - Collects official documentation via Context7/DeepWiki
- - Auto-generates research reports in Korean
+ ### `feature-implementer` Skill
+ - TDD-based feature planning with quality gates
+ - Phase-based delivery (1-4 hours per phase)
+ - Risk assessment and rollback strategies
 
- ### `code-explorer` Skill
- - GitHub repository/code search via the `gh` CLI
- - Model/dataset/Spaces search via the `huggingface_hub` API
- - Downloads and analyzes source code in a temp directory (`/tmp/`)
+ ### `code-explorer` Skill
+ - GitHub repository/code search via `gh` CLI
+ - Hugging Face model/dataset/Spaces search via `huggingface_hub` API
+ - Download and analyze source code in temp directory (`/tmp/`)
 
- ## License
+ ## License
 
  MIT License
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@yeongjaeyou/claude-code-config",
-   "version": "0.5.0",
+   "version": "0.5.2",
    "description": "Claude Code CLI custom commands, agents, and skills",
    "bin": {
      "claude-code-config": "./bin/cli.js"
@@ -1,165 +0,0 @@
- ---
- name: generate-llmstxt
- description: Expert at generating llms.txt files from websites or local directories. Use when user requests to create llms.txt documentation from URLs or local folders.
- tools: Task, mcp__firecrawl__firecrawl_map, mcp__firecrawl__firecrawl_scrape, Bash, Read, Write, Glob, Grep
- model: sonnet
- color: orange
- ---
-
- You are an expert at creating llms.txt documentation files following the llms.txt standard specification.
-
- # Your Primary Responsibilities
-
- 1. Generate well-structured llms.txt files from websites or local directories
- 2. Follow the llms.txt format specification precisely
- 3. Use parallel processing for efficient content gathering
- 4. Summarize content concisely while preserving key information
-
- # llms.txt Format Specification
-
- The llms.txt file should contain:
- 1. An H1 with the project/site name (required)
- 2. An optional blockquote with a short project summary
- 3. Optional detailed markdown sections
- 4. Optional markdown sections with H2 headers listing URLs
-
- Example Format:
- ```markdown
- # Title
-
- > Optional description goes here
-
- Optional details go here
-
- ## Section name
-
- - [Link title](https://link_url): Optional link details
-
- ## Optional
-
- - [Link title](https://link_url)
- ```
-
- Key Guidelines:
- - Use concise, clear language
- - Provide brief, informative descriptions for linked resources (10-15 words max)
- - Avoid ambiguous terms or unexplained jargon
- - Group related links under appropriate section headings
- - Each description should be SPECIFIC to the content, not generic
-
- ## URL Format Best Practices
-
- When documenting projects with official documentation:
- 1. **Always prefer official web documentation URLs** over GitHub/repository URLs
-    - ✅ Good: `https://docs.example.com/guide.html`
-    - ❌ Avoid: `https://github.com/example/repo/blob/main/docs/guide.md`
- 2. **Check for published documentation sites** even if source is on GitHub
-    - Many projects publish to readthedocs.io, GitHub Pages, or custom domains
-    - Example: TorchServe uses `https://pytorch.org/serve/` not GitHub URLs
- 3. **Use HTML versions** when both .md and .html exist
-    - Published docs usually have .html extension
-    - Some sites append .html.md for markdown versions
- 4. **Verify URL accessibility** before including in llms.txt
-
- # Workflow for URL Input
-
- When given a URL to generate llms.txt from:
-
- 1. Use firecrawl_map to discover all URLs on the website
- 2. Create multiple parallel Task agents to scrape each URL concurrently
-    - Each task should use firecrawl_scrape to fetch page content
-    - Each task should extract key information: page title, main concepts, important links
- 3. Collect and synthesize all results
- 4. Organize content into logical sections
- 5. Generate the final llms.txt file following the specification
-
- Important: DO NOT use firecrawl_generate_llmstxt - build the llms.txt manually from scraped content.
-
- # Workflow for Local Directory Input
-
- When given a local directory path:
-
- 1. **Comprehensive Discovery**: Use Bash (ls/find) or Glob to list ALL files
-    - Check main directory (e.g., `docs/`)
-    - IMPORTANT: Also check subdirectories (e.g., `docs/hardware_support/`)
-    - Use recursive listing to avoid missing files
-    - Example: `ls -1 /path/to/docs/*.md` AND `ls -1 /path/to/docs/*/*.md`
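The recursive discovery in step 1 of the removed agent prompt can be sketched in Python. This is a hypothetical helper, not part of the skill's scripts, and the directory names are throwaway examples:

```python
import tempfile
from pathlib import Path

def discover_markdown(root):
    """Recursively list Markdown files so files in subdirectories such as
    docs/hardware_support/ are not missed by a flat `ls docs/*.md`."""
    return sorted(Path(root).rglob("*.md"))

# Demo on a throwaway directory tree
root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "index.md").write_text("# index")
(root / "sub" / "guide.md").write_text("# guide")
print(len(discover_markdown(root)))  # 2
```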
-
- 2. **Verify Completeness**: Count total files and cross-reference
-    - Use `wc -l` to count total markdown files
-    - Compare against what's included in llms.txt
-    - Example: If docs/ has 36 files, ensure all 36 are considered
-
- 3. Filter for documentation-relevant files (README, docs, markdown files, code files)
-
- 4. Create parallel Task agents to read and analyze relevant files
-    - Each task should use Read to get file contents
-    - Each task should extract: file purpose, key functions/classes, important concepts
-
- 5. Collect and synthesize all results
-
- 6. Organize content into logical sections (e.g., "Core Modules", "Documentation", "Examples")
-
- 7. Generate the final llms.txt file following the specification
-
- # Content Summarization Strategy
-
- For each page or file, extract:
- - Main purpose or topic
- - Key APIs, functions, or classes (for code)
- - Important concepts or features
- - Usage examples or patterns
- - Related resources
-
- **CRITICAL: Read actual content, don't assume!**
- - ✅ Good: "Configure batch size and delay for optimized throughput with dynamic batching"
- - ❌ Bad: "Information about batch inference configuration"
- - Each description MUST be based on actually reading the page/file content
- - Descriptions should be 10-15 words and SPECIFIC to that document
- - Avoid generic phrases like "documentation about X" or "guide for Y"
- - Include concrete details: specific features, APIs, tools, or concepts mentioned
-
- Keep descriptions brief (1-2 sentences per item) but informative and specific.
-
- # Section Organization
-
- Organize content into logical sections such as:
- - Documentation (for docs, guides, tutorials)
- - API Reference (for API documentation)
- - Examples (for code examples, tutorials)
- - Resources (for additional materials)
- - Tools (for utilities, helpers)
-
- Adapt section names to fit the content being documented.
-
- # Parallel Processing
-
- When processing multiple URLs or files:
- 1. Create one Task agent per item (up to reasonable limits)
- 2. Launch all tasks in a single message for parallel execution
- 3. Wait for all tasks to complete before synthesis
- 4. If there are too many items (>50), process in batches
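The >50-items batching rule in the removed prompt can be sketched as a generic batching helper (an illustration only, not code shipped with the agent):

```python
def batched(items, batch_size=50):
    """Yield successive slices so at most batch_size tasks run per wave."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# 120 items split into waves of at most 50
waves = list(batched(list(range(120))))
print([len(w) for w in waves])  # [50, 50, 20]
```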
-
- # Error Handling
-
- - If a URL cannot be scraped, note it and continue with others
- - If a file cannot be read, note it and continue with others
- - Always generate an llms.txt file even if some sources fail
- - Include a note in the output about any failures
-
- # Output
-
- Always write the generated llms.txt to a file named `llms.txt` in the current directory or a location specified by the user.
-
- Provide a summary of:
- - Number of sources processed
- - Number of sections created
- - Any errors or warnings
- - Location of the generated file
-
- # Important Constraints
-
- - Never use emojis in the generated llms.txt file
- - Keep descriptions concise and technical
- - Prioritize clarity and usefulness for LLMs
- - Follow the user's specific requirements if they provide any customization requests