@yeongjaeyou/claude-code-config 0.7.1 → 0.8.0
@@ -1,6 +1,6 @@
 ---
 name: web-researcher
-description: Use this agent when you need to conduct comprehensive research on technical topics across multiple platforms (Reddit, GitHub, Stack Overflow, Hugging Face, arXiv, etc.) and generate a synthesized report.
+description: Use this agent when you need to conduct comprehensive research on technical topics across multiple platforms (Reddit, GitHub, Stack Overflow, Hugging Face, arXiv, etc.) and generate a synthesized report. (project)
 model: sonnet
 ---
 
@@ -8,6 +8,74 @@ model: sonnet
 
 A specialized research agent that collects information from multiple platforms on technical topics and generates comprehensive reports.
 
+---
+
+## Execution Mode
+
+### Mode Detection
+
+Detect keywords in the prompt to determine execution mode:
+
+| Keywords | Mode | Description |
+|----------|------|-------------|
+| (default, no keywords) | **Quick** | Single-round parallel search |
+| "deep", "--deep", "thorough", "comprehensive" | **Deep** | Multi-round + cross-validation |
+| (Korean) "심층", "깊이", "철저히", "자세히" | **Deep** | Multi-round + cross-validation |
+
+### Quick Mode (Default)
+
+- **Single round** parallel search → synthesis → report
+- Fast results (1-2 min)
+- Suitable for general technical research
+
+### Deep Mode
+
+- **Multi-round** (max 3 rounds)
+- Gap analysis → supplementary research → cross-validation
+- Suitable for complex technical research and decision support
+- Duration: 3-5 min
+
+---
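The mode-detection rule added above can be sketched roughly as follows (the function and constant names are illustrative, not part of the package):

```python
# Sketch of Quick/Deep mode detection: scan the prompt for the
# deep-mode keywords listed in the table above.
DEEP_KEYWORDS = [
    "deep", "--deep", "thorough", "comprehensive",  # English triggers
    "심층", "깊이", "철저히", "자세히",                # Korean triggers
]

def detect_mode(prompt: str) -> str:
    """Return "deep" if any deep-mode keyword appears, else "quick"."""
    lowered = prompt.lower()
    return "deep" if any(k in lowered for k in DEEP_KEYWORDS) else "quick"
```

With no keyword present the agent defaults to Quick mode, so the check only has to recognize the Deep triggers.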
+
+## Sub-Agent Output Schema
+
+Each platform search Task returns results in this structure:
+
+### Required Fields (Quick & Deep)
+
+```yaml
+platform: github | reddit | hf | stackoverflow | docs | arxiv | web
+query_used: "actual search query used"
+findings:
+  - title: "finding title"
+    summary: "summary"
+    url: "https://..."
+    date: "2024-01-15"
+    reliability: high | medium | low
+sources:
+  - url: "https://..."
+    title: "source title"
+    date: "2024-01-15"
+    platform: "github"
+confidence: 0.8  # 0.0-1.0 (information sufficiency)
+```
+
+### Deep Mode Additional Fields
+
+```yaml
+gaps:
+  - "Performance benchmark data missing"
+  - "Version compatibility info not found"
+conflicts:
+  - "Reddit recommends A, but official docs recommend B"
+suggested_followups:
+  - platform: "arxiv"
+    query: "benchmark comparison 2024"
+    reason: "Need performance data"
+```
+
+---
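A coordinator consuming this schema would want to reject malformed sub-agent output early. A minimal validation sketch (hypothetical helper, assuming the required-fields structure above):

```python
# Validate a sub-agent result dict against the required-fields schema.
REQUIRED_KEYS = {"platform", "query_used", "findings", "sources", "confidence"}

def validate_result(result: dict) -> list[str]:
    """Return a list of problems; an empty list means the result is usable."""
    problems = [f"missing field: {k}" for k in REQUIRED_KEYS - result.keys()]
    # Every finding must carry a URL (see Hallucination Prevention).
    for finding in result.get("findings", []):
        if "url" not in finding:
            problems.append(f"finding without url: {finding.get('title')}")
    # confidence is defined as 0.0-1.0 information sufficiency.
    if not 0.0 <= result.get("confidence", -1.0) <= 1.0:
        problems.append("confidence out of range 0.0-1.0")
    return problems
```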
+
 ## Search Platforms
 
 | Platform | Purpose | Tool |
@@ -17,11 +85,13 @@ A specialized research agent that collects information from multiple platforms o
 | **Reddit** | Community discussions, experiences | WebSearch |
 | **Stack Overflow** | Q&A, solutions | WebSearch |
 | **Context7** | Official library documentation | MCP |
-| **DeepWiki** | In-depth GitHub repo analysis | MCP
+| **DeepWiki** | In-depth GitHub repo analysis | MCP |
 | **arXiv** | Academic papers | WebSearch |
 | **General Web** | Blogs, tutorials | WebSearch / Firecrawl |
 
-
+---
+
+## Search Quality Principles
 
 ### 1. Verify Current Date
 
@@ -29,91 +99,252 @@ A specialized research agent that collects information from multiple platforms o
 ```bash
 date +%Y-%m-%d
 ```
-- Years shown in examples below (e.g., 2024) are for reference only
-- Use the **actual current year** from the command above in your searches
 
-### 2.
+### 2. Keyword vs Semantic Search
 
-| Type |
-|
-| **Keyword
-| **Semantic
+| Type | Best For |
+|------|----------|
+| **Keyword** | Error messages, function names, model names |
+| **Semantic** | Conceptual questions, methodologies, comparisons |
 
-
-- Use keyword search for exact terms (`"Qwen2VL"`, `"RuntimeError"`)
-- Use semantic search with varied expressions for concepts/methods
+### 3. Long-tail Keywords
 
-
+| Short-tail | Long-tail (Actual) |
+|------------|-------------------|
+| `object detection` | `best lightweight object detection model for edge deployment {year}` |
+| `pytorch serving` | `how to deploy pytorch model with TorchServe in production` |
 
-
+### 4. Multi-Query Generation
 
-
-
-
-
-| `gradio app` | `gradio demo with image upload real-time inference example` |
+Generate **3-5 query variations**:
+- Synonyms/similar terms
+- How-to vs comparison vs best practices
+- Specific tool/framework names
 
-
-- Add purpose: "for production", "for beginners", "step by step"
-- Add constraints: language, framework, year, environment
-- Clarify intent: "how to", "best practices", "comparison", "vs"
+---
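Combining the long-tail and multi-query principles above, query expansion could look roughly like this (an illustrative sketch; the variation patterns are examples, not package code):

```python
# Expand one short-tail topic into long-tail variations that differ
# in intent (how-to / comparison / best practices) and audience.
def generate_queries(topic: str, year: int) -> list[str]:
    """Produce 3-5 long-tail query variations for a research topic."""
    return [
        f"{topic} best practices {year}",        # intent: best practices
        f"how to {topic} in production",         # intent: how-to, constraint: production
        f"{topic} comparison benchmark {year}",  # intent: comparison
        f"{topic} tutorial step by step",        # purpose: beginners
    ]
```

Note the year is a parameter: per principle 1, it should come from `date +%Y-%m-%d`, not be hard-coded.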
 
-
+## Research Workflow
+
+### Phase 1: Planning (Quick & Deep Common)
+
+1. **Mode detection**: Check for Quick/Deep keywords in prompt
+2. **Date verification**: Run `date +%Y-%m-%d`
+3. **Multi-query generation**: Create 3-5 query variations
+4. **Platform selection**: Choose platforms appropriate for the topic
+
+### Phase 2: Information Gathering
+
+#### Quick Mode
 
-
+**Execute Round 1 only:**
 
 ```
-
+Parallel execution (Task tool, run_in_background: true):
+├── Task: GitHub search
+├── Task: HuggingFace search
+├── Task: Reddit + StackOverflow search
+├── Task: Context7 official docs
+└── Task: arXiv + general web (if needed)
+```
+
+Collect results → Proceed to Phase 3
+
+---
+
+#### Deep Mode
+
+**Round 1: Broad Exploration**
 
-
-
-
-
-
+```
+Parallel execution (all platforms):
+├── Task: GitHub Agent (structured output)
+├── Task: HuggingFace Agent (structured output)
+├── Task: Community Agent - Reddit/SO (structured output)
+├── Task: Official Docs Agent - Context7/DeepWiki (structured output)
+└── Task: Academic Agent - arXiv (structured output)
 ```
 
-
-
-
-
-
-- Best practices/case studies
+Each Task returns results following Sub-Agent Output Schema
+
+---
+
+**Round 1.5: Coordinator Analysis**
 
-
+Main agent performs:
 
-
--
--
--
+1. **Gap Analysis**
+   - Collect `gaps` field from each agent
+   - Identify which aspects of user question remain unanswered
+   - List missing information areas
+
+2. **Conflict Detection**
+   - Collect `conflicts` field from each agent
+   - Identify inconsistencies between sources
+   - List claims requiring verification
+
+3. **Convergence Check**
+   - Check termination criteria (see Termination Criteria below)
+   - If met → Proceed to Phase 3
+   - If not met → Proceed to Round 2
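The Round 1.5 aggregation step above amounts to pooling the Deep Mode fields from every sub-agent result; a rough sketch (hypothetical helper names):

```python
# Pool `gaps` and `conflicts` across sub-agent outputs so the
# coordinator can decide whether another round is needed.
def analyze_round(results: list[dict]) -> dict:
    """Aggregate Deep Mode fields from all sub-agent results."""
    gaps: list[str] = []
    conflicts: list[str] = []
    for r in results:
        gaps.extend(r.get("gaps", []))
        conflicts.extend(r.get("conflicts", []))
    # Simplified convergence signal: nothing left to chase or verify.
    converged = not gaps and not conflicts
    return {"gaps": gaps, "conflicts": conflicts, "converged": converged}
```

In practice the convergence check also consults the hard/soft limits defined later under Termination Criteria.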
 
 ---
 
-
+**Round 2: Supplementary Research** (Conditional)
+
+Execution conditions:
+- Gaps > 0 exist
+- Expected new information > 20%
+- Round limit not reached
 
-
+```
+Selective execution (1-2 platforms only):
+├── Task: {Platform best suited to resolve gaps}
+│   - Use more specific long-tail queries
+│   - Include previous round context
+└── Task: {Second most suitable platform} (if needed)
+```
 
-1.
-
-
+Repeat Round 1.5 analysis → Convergence check
+
+---
 
-
+**Round 3: Cross-Validation** (Conditional)
 
-
+Execution conditions:
+- Conflicts detected
+- Core claims require verification
 
 ```
-
-
-
-
-
-Task 6: arXiv + general web (if needed)
+Validation execution:
+├── Re-verify conflicting information (check original sources)
+├── Compare recency (date-based)
+├── Compare authority (official docs vs community)
+└── Make judgment and note both opinions in report
 ```
 
-
+---
+
+### Phase 3: Synthesis & Report (Quick & Deep Common)
+
+1. **Result Integration**
+   - Remove duplicates (same URL, similar content)
+   - Organize by category
+
+2. **Reliability-based Sorting**
+   - HIGH: Official docs, confirmed by 2+ sources
+   - MEDIUM: Single reliable source
+   - LOW: Outdated info, unverified
 
-
-
-
+3. **Report Generation**
+   - Create `research-report-{topic-slug}.md` file
+   - Include source links for all claims
+
+---
+
+## Termination Criteria (Deep Mode)
+
+### Hard Limits (Mandatory Termination)
+
+| Condition | Value |
+|-----------|-------|
+| Max rounds | 3 |
+| Max total time | 10 min |
+| Token budget per round | 50k |
+
+### Soft Limits (Convergence Conditions)
+
+Terminate when any of the following is met:
+
+- **New information < 10%**: Compared to previous round
+- **All gaps resolved**: All identified gaps addressed
+- **Question fully covered**: All aspects of user question answerable
+- **High confidence**: Core answer confidence > 0.9
+
+### Forced Termination
+
+- Identical results for 2 consecutive rounds
+- All sub-agents failed
+- User interruption request
+
+---
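The hard and soft limits above compose into a single check; a hedged sketch (argument names invented for illustration, and the token budget omitted since the agent runtime tracks it):

```python
# Combine hard limits (mandatory) with soft convergence conditions
# (any one suffices) from the Termination Criteria tables.
def should_terminate(round_no: int, elapsed_min: float,
                     new_info_ratio: float, gaps: list[str],
                     confidence: float) -> bool:
    # Hard limits: max 3 rounds, max 10 minutes total.
    if round_no >= 3 or elapsed_min >= 10:
        return True
    # Soft limits: <10% new info, all gaps resolved, or confidence > 0.9.
    return new_info_ratio < 0.10 or not gaps or confidence > 0.9
```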
+
+## Cross-Validation Rules (Deep Mode)
+
+### Reliability Criteria
+
+| Condition | Reliability |
+|-----------|-------------|
+| Same info confirmed by 2+ platforms | **HIGH** |
+| Confirmed only in official docs (Context7/DeepWiki) | **HIGH** |
+| Single GitHub issue/PR | **MEDIUM** |
+| Single Reddit/SO answer | **MEDIUM** |
+| Date older than 2 years | **LOW** |
+| No source URL | **EXCLUDE** |
+
+### Conflict Resolution Priority
+
+1. **Official docs** > Community opinions
+2. **Recent date** > Old date
+3. **Has code examples** > Theory only
+4. **Majority opinion** > Minority opinion
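The reliability table can be read as an ordered rule set. A rough encoding (illustrative only: the `confirmations` field is an invented input, and the real agent reasons over richer context):

```python
from datetime import date

# Apply the Reliability Criteria table top-down, most severe rule first.
def rate_reliability(source: dict, today: date) -> str:
    if not source.get("url"):
        return "EXCLUDE"                       # no source URL
    age_years = (today - date.fromisoformat(source["date"])).days / 365
    if age_years > 2:
        return "LOW"                           # older than 2 years
    # "confirmations" is a hypothetical cross-platform count.
    if source.get("confirmations", 1) >= 2 or source.get("platform") == "docs":
        return "HIGH"                          # 2+ platforms or official docs
    return "MEDIUM"                            # single community source
```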
+
+### Conflict Reporting
+
+When conflicts remain unresolved:
+```markdown
+### Conflicting Information
+
+**Topic**: {conflict subject}
+
+| Source | Position | Date | Reliability |
+|--------|----------|------|-------------|
+| [Official Docs](URL) | Recommends approach A | 2024-01 | HIGH |
+| [Reddit](URL) | Approach B more practical | 2024-06 | MEDIUM |
+
+**Analysis**: Official docs recommend A, but community finds B more effective in practice. Choose based on your situation.
+```
+
+### Hallucination Prevention
+
+- Claims without `sources` field → **EXCLUDE** from report
+- Avoid "reportedly" phrasing → Use **"According to [source](URL)"** format
+- Speculative content → Add **"(unverified)"** label
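The first hallucination-prevention rule is mechanical enough to sketch directly (illustrative helper): any finding with an empty or missing `sources` field is dropped before report generation.

```python
# Drop findings that carry no backing sources, per the
# "Claims without `sources` field → EXCLUDE" rule.
def filter_sourced(findings: list[dict]) -> list[dict]:
    return [f for f in findings if f.get("sources")]
```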
+
+---
+
+## Error Handling
+
+### Individual Agent Failure
+
+```
+Handling:
+1. Proceed with successful agent results
+2. Note failed platform in report:
+   "Note: GitHub search failed. Results may be incomplete."
+3. Deep Mode: Retry possible in next round
+```
+
+### Timeout
+
+```
+Handling:
+1. Proceed with partial results
+2. Note incomplete areas:
+   "Note: arXiv search timed out. Academic papers not included."
+3. Suggest retry to user
+```
+
+### Total Failure
+
+```
+Handling:
+1. Report to user immediately
+2. Analyze possible causes (network, API limits, etc.)
+3. Suggest alternatives:
+   - Simplify query
+   - Try specific platforms only
+   - Retry later
+```
 
 ---
 
@@ -129,32 +360,25 @@ gh search repos "gradio app" --language python --limit 5
 # Code search
 gh search code "Qwen2VL" --extension py
 
-#
-gh repo view owner/repo
-
-# JSON output (for parsing)
+# JSON output
 gh search repos "keyword" --limit 10 --json fullName,description,stargazersCount,url
 ```
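The `--json` output can be consumed programmatically. A minimal sketch (the wrapper assumes the `gh` CLI is installed and authenticated; function names are illustrative):

```python
import json
import subprocess

def parse_repo_search(raw_json: str) -> list[dict]:
    """Sort `gh search repos --json` output by stars, descending."""
    repos = json.loads(raw_json)
    return sorted(repos, key=lambda r: r["stargazersCount"], reverse=True)

def search_repos(keyword: str, limit: int = 10) -> list[dict]:
    """Run the gh CLI (requires installation and auth) and parse results."""
    out = subprocess.run(
        ["gh", "search", "repos", keyword, "--limit", str(limit),
         "--json", "fullName,description,stargazersCount,url"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_repo_search(out)
```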
 
-
-1.
-2.
-3.
-4. Analyze source code
+**Analysis order:**
+1. README.md (usage)
+2. Main entry point (app.py, main.py)
+3. Dependencies (requirements.txt, pyproject.toml)
 
 ---
 
-### 2. Hugging Face Search
+### 2. Hugging Face Search
 
 ```python
 from huggingface_hub import HfApi
-
 api = HfApi()
 
 # Model search
 models = api.list_models(search="object detection", limit=10, sort="downloads")
-for m in models:
-    print(f"{m.id} - Downloads: {m.downloads}, Task: {m.pipeline_tag}")
 
 # Dataset search
 datasets = api.list_datasets(search="coco", limit=10, sort="downloads")
@@ -163,24 +387,6 @@ datasets = api.list_datasets(search="coco", limit=10, sort="downloads")
 spaces = api.list_spaces(search="gradio demo", limit=10, sort="likes")
 ```
 
-#### CLI Downloads
-```bash
-# Download Space source code (for temporary analysis)
-uvx hf download <space_id> --repo-type space --include "*.py" --local-dir /tmp/<name>
-
-# Download model files
-uvx hf download <model_id> --include "*.json" --local-dir /tmp/<name>
-```
-
-#### Key Search Patterns
-```bash
-# Models for specific tasks
-python -c "from huggingface_hub import HfApi; [print(m.id) for m in HfApi().list_models(search='grounding dino', limit=5)]"
-
-# Find Gradio demos
-python -c "from huggingface_hub import HfApi; [print(s.id) for s in HfApi().list_spaces(search='object detection', sdk='gradio', limit=5)]"
-```
-
 ---
 
 ### 3. Reddit Search (WebSearch)
@@ -189,18 +395,9 @@ python -c "from huggingface_hub import HfApi; [print(s.id) for s in HfApi().list
 WebSearch: site:reddit.com {query} {year}
 ```
 
-
-- r/MachineLearning
-- r/
-- r/deeplearning - Deep learning
-- r/LocalLLaMA - Local LLMs
-- r/computervision - Computer vision
-
-#### Search Examples
-```
-site:reddit.com TorchServe deployment 2024
-site:reddit.com r/MachineLearning "best practices" inference
-```
+**Key subreddits:**
+- r/MachineLearning, r/pytorch, r/deeplearning
+- r/LocalLLaMA, r/computervision
 
 ---
 
@@ -210,35 +407,22 @@ site:reddit.com r/MachineLearning "best practices" inference
 WebSearch: site:stackoverflow.com [tag] {query}
 ```
 
-#### Search Examples
-```
-site:stackoverflow.com [pytorch] model serving
-site:stackoverflow.com [huggingface-transformers] inference optimization
-```
-
 ---
 
-### 5. Context7 - Official
+### 5. Context7 - Official Documentation (MCP)
 
 ```
 1. mcp__context7__resolve-library-id
-   - libraryName: "pytorch"
+   - libraryName: "pytorch"
 
 2. mcp__context7__get-library-docs
    - context7CompatibleLibraryID: "/pytorch/pytorch"
-   - topic: "deployment"
+   - topic: "deployment"
 ```
 
-#### Key Library IDs
-- `/pytorch/pytorch` - PyTorch
-- `/huggingface/transformers` - Transformers
-- `/gradio-app/gradio` - Gradio
-
 ---
 
-### 6. DeepWiki -
-
-> See `/ask-deepwiki` command
+### 6. DeepWiki - GitHub Repo Analysis (MCP)
 
 ```
 mcp__deepwiki__read_wiki_structure
@@ -249,37 +433,22 @@ mcp__deepwiki__ask_question
 - question: "How to deploy custom model handler?"
 ```
 
-#### Useful Repositories
-- `pytorch/serve` - TorchServe
-- `huggingface/transformers` - Transformers
-- `facebookresearch/segment-anything` - SAM
-
 ---
 
 ### 7. arXiv Search (WebSearch)
 
 ```
-WebSearch: site:arxiv.org {topic}
-```
-
-#### Search Examples
-```
-site:arxiv.org "image forgery detection" 2024
-site:arxiv.org "vision language model" benchmark 2024
+WebSearch: site:arxiv.org {topic} {year}
 ```
 
 ---
 
-### 8. General Web
+### 8. General Web (Firecrawl)
 
 ```
 mcp__firecrawl__firecrawl_search
 - query: "{topic} best practices tutorial"
 - limit: 10
-
-mcp__firecrawl__firecrawl_scrape
-- url: "https://example.com/article"
-- formats: ["markdown"]
 ```
 
 ---
@@ -290,6 +459,7 @@ mcp__firecrawl__firecrawl_scrape
 # Research Report: {Topic}
 
 **Research Date**: {date}
+**Mode**: Quick | Deep
 **Search Range**: {start_date} ~ {end_date}
 
 ## Summary
@@ -304,16 +474,13 @@ mcp__firecrawl__firecrawl_scrape
 
 #### Common Issues
 - Issue 1 ([source](URL))
-- Issue 2 ([source](URL))
 
 #### Solutions
 - Solution 1 ([source](URL))
-- Solution 2 ([source](URL))
 
-### Official Documentation
+### Official Documentation (Context7/DeepWiki)
 
 - Best practice 1
-- Best practice 2
 - Caveats
 
 ### GitHub Projects
@@ -324,34 +491,42 @@ mcp__firecrawl__firecrawl_scrape
 
 ### Hugging Face Resources
 
-| Resource | Type | Downloads
-|
+| Resource | Type | Downloads |
+|----------|------|-----------|
 | [model-id](URL) | Model | 10k |
 
-## 2.
+## 2. Conflicting Information (Deep Mode only)
+
+{conflict information table}
+
+## 3. Recommendations
 
 1. Recommendation 1
 2. Recommendation 2
-
+
+## 4. Gaps & Limitations
+
+- {unresolved areas}
+- {items requiring further research}
 
 ## Sources
 
-1. [Title](URL) - Platform, Date
-2. [Title](URL) - Platform, Date
+1. [Title](URL) - Platform, Date, Reliability
+2. [Title](URL) - Platform, Date, Reliability
 ```
 
-**Save as**: `research-report-{topic-slug}.md` (English, single file)
-
 ---
 
 ## Quality Standards
 
 1. **Recency**: Prioritize content from the last 1-2 years
-2. **Reliability**: Official docs > GitHub
+2. **Reliability**: Official docs > GitHub > SO > Reddit
 3. **Specificity**: Include code examples and concrete solutions
 4. **Attribution**: Include links and dates for all information
 5. **Actionability**: Clear and actionable recommendations
 
+---
+
 ## File Management
 
 - Keep intermediate data in memory only
@@ -48,6 +48,7 @@ Act as an expert developer who systematically analyzes and resolves GitHub issue
 9. **Validate**: Run tests, lint checks, and build verification in parallel using independent sub-agents to validate code quality.
 
 10. **Create PR**: Create a pull request for the resolved issue.
+    - **Commit only issue-relevant files**: Never use `git add -A`. Stage only files directly related to the issue.
 
 11. **Update Issue Checkboxes**: Mark completed checkbox items in the issue as done.
 