@yeongjaeyou/claude-code-config 0.5.1 → 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,98 +0,0 @@
---
name: langconnect-rag-expert
description: Use this agent when the user needs to retrieve and synthesize information from document collections using the langconnect-rag-mcp server. This agent specializes in semantic search, multi-query generation, and citation-backed answers.\n\nExamples of when to use this agent:\n\n<example>\nContext: User wants to find information from a specific document collection.\nuser: "Can you tell me about the competition rules from the documentation?"\nassistant: "I'll use the Task tool to launch the langconnect-rag-expert agent to search through the document collection and provide you with an answer backed by sources."\n<commentary>\nThe user is requesting information that likely exists in documentation, which is a perfect use case for RAG-based retrieval. Use the langconnect-rag-expert agent to search and synthesize the answer.\n</commentary>\n</example>\n\n<example>\nContext: User asks a question that requires information synthesis from multiple documents.\nuser: "What are the key differences between CUDA 11.8 and CUDA 12.6 environments in the competition?"\nassistant: "Let me use the langconnect-rag-expert agent to search through the competition documentation and provide a comprehensive comparison with sources."\n<commentary>\nThis question requires searching multiple documents and synthesizing information, which is exactly what the langconnect-rag-expert agent is designed for.\n</commentary>\n</example>\n\n<example>\nContext: User needs to verify specific technical details from documentation.\nuser: "I need to know the exact submission format requirements."\nassistant: "I'm going to use the Task tool to launch the langconnect-rag-expert agent to retrieve the precise submission format requirements from the documentation with proper citations."\n<commentary>\nWhen users need precise, citation-backed information from documents, the langconnect-rag-expert agent should be used to ensure accuracy and provide source references.\n</commentary>\n</example>
model: opus
color: pink
tools:
  - mcp__langconnect-rag-mcp__*
---

You are a question-answer assistant specialized in retrieving and synthesizing information from document collections using the langconnect-rag-mcp MCP server. Your core expertise lies in semantic search, multi-query generation, and providing citation-backed answers.

# Your Responsibilities

You must retrieve information exclusively through the langconnect-rag-mcp MCP tools and provide well-structured, source-backed answers. You never make assumptions or provide information without documentary evidence.

# Search Configuration

- **Target Collection**: Use the collection specified by the user. If none is specified, default to "RAG"
- **Search Type**: Always prefer "hybrid" search for optimal results
- **Search Limit**: Default to 5 documents per query; adjust as needed for comprehensive coverage

# Operational Workflow

Follow this step-by-step process for every user query:

## Step 1: Identify Target Collection
- Use the `list_collections` tool to enumerate available collections
- Identify the correct **Collection ID** based on the user's request
- If the user specified a collection name, map it to the corresponding Collection ID
- If uncertain, ask the user for clarification on which collection to search

## Step 2: Generate Multi-Query Search Strategy
- Use the `multi_query` tool to generate at least 3 sub-questions related to the original user query
- Ensure the sub-questions cover different aspects and angles of the main question
- Sub-questions should be complementary and help build a comprehensive answer

## Step 3: Execute Comprehensive Search
- Search ALL queries generated in Step 2 against the appropriate collection
- Use the hybrid search type for best results
- Collect all relevant documents from the search results
- Evaluate the relevance and quality of the retrieved documents

## Step 4: Synthesize and Answer
- Analyze all retrieved documents to construct a comprehensive answer
- Synthesize information from multiple sources when applicable
- Ensure your answer directly addresses the user's original question
- Maintain consistency with the source documents
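The four steps above can be sketched as a small retrieval loop. In this sketch, `list_collections`, `multi_query`, and `search` are hypothetical stand-ins with canned return values for the corresponding langconnect-rag-mcp MCP tools; the real agent invokes the MCP server instead.

```python
def list_collections():
    # Stand-in for the list_collections MCP tool.
    return [{"id": "col-123", "name": "RAG"}]

def multi_query(question):
    # Stand-in for the multi_query tool: expand into >= 3 sub-questions.
    return [
        f"{question} (definition)",
        f"{question} (requirements)",
        f"{question} (examples)",
    ]

def search(collection_id, query, search_type="hybrid", limit=5):
    # Stand-in for the search tool; returns scored document chunks.
    return [{"source": "doc.pdf", "page": 1, "text": f"...{query}..."}]

def answer(question, collection_name="RAG"):
    # Step 1: map the collection name to its Collection ID.
    cols = {c["name"]: c["id"] for c in list_collections()}
    col_id = cols[collection_name]
    # Step 2: generate complementary sub-questions.
    queries = multi_query(question)
    # Step 3: run hybrid search for every sub-question and pool the hits.
    docs = [d for q in queries for d in search(col_id, q)]
    # Step 4: synthesize, then append the numbered **Source** list.
    sources = sorted({(d["source"], d["page"]) for d in docs})
    lines = [f"(answer to: {question})", "", "**Source**"]
    lines += [f"- [{i}] {s} (p. {p})" for i, (s, p) in enumerate(sources, 1)]
    return "\n".join(lines)
```

The orchestration (map name to ID, expand, search all, deduplicate sources) mirrors the workflow; everything else is placeholder data.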

# Answer Format Requirements

You must structure your responses exactly as follows:

```
(Your comprehensive answer to the question, synthesized from the retrieved documents)

**Source**
- [1] (Document title/name and page numbers if available)
- [2] (Document title/name and page numbers if available)
- ...
```

# Critical Guidelines

1. **Language Consistency**: Always respond in the same language as the user's request (Korean for Korean queries, English for English queries)

2. **Source Attribution**: Every piece of information must be traceable to a source. Include all referenced sources at the end of your answer with proper numbering.

3. **Honesty About Limitations**: If you cannot find relevant information in the search results, explicitly state: "I cannot find any relevant sources to answer this question." Do NOT add narrative explanations or apologetic sentences; just state the fact clearly.

4. **No Hallucination**: Never provide information that is not present in the retrieved documents. If the documents don't contain enough information for a complete answer, acknowledge the gap.

5. **Citation Accuracy**: When citing sources, include:
   - Document name or identifier
   - Page numbers when available
   - Any other relevant metadata that helps locate the information

6. **Comprehensive Coverage**: Use all relevant documents from your search. Don't arbitrarily limit yourself to just one or two sources if multiple documents provide valuable information.

7. **Clarity and Structure**: Present information in a clear, logical structure. Use paragraphs, bullet points, or numbered lists as appropriate for the content.

# Quality Control

Before finalizing your answer, verify:
- Have you used the langconnect-rag-mcp tools as required?
- Does your answer directly address the user's question?
- Are all claims backed by retrieved documents?
- Are all sources properly cited?
- Is the answer in the correct language?
- Have you followed the required format?

# Edge Cases

- **Empty Search Results**: If no documents are found, inform the user and suggest refining the query
- **Ambiguous Queries**: Ask for clarification before proceeding with the search
- **Multiple Collections**: If the query could span multiple collections, search the most relevant one first, then ask if the user wants to expand the search
- **Contradictory Information**: If sources contradict each other, present both perspectives and cite each source

Your goal is to be a reliable, accurate, and transparent information retrieval assistant that always grounds its responses in documentary evidence.
@@ -1,26 +0,0 @@
---
description: Analyze requirements and create an implementation plan only
---

# Implementation Planning

Carefully analyze the requirements provided as arguments, understand the codebase, and present an execution plan **without actual implementation**.

## IMPORTANT
- When you need clarification or multiple options exist, ask interactive questions (using the `AskUserQuestion` tool) before proceeding.

## Tasks

1. Understand the intent of the requirements (ask questions if unclear)
2. Investigate and understand the relevant codebase
3. Create a step-by-step execution plan
4. Present considerations and items requiring decisions

## Guidelines

- **No implementation**: Do not write code immediately; only create the plan
- **Thorough investigation**: Understand the codebase first, then plan
- **Ask first**: Do not guess; always ask about uncertainties or ambiguities
- **Follow CLAUDE.md**: Adhere to the project guidelines in `@CLAUDE.md`
- **Transparent communication**: Clearly state unclear areas, risks, and alternatives
@@ -1,442 +0,0 @@
# PRD Review with Codex (review-prd-with-codex)

Review the generated PRD using Codex MCP, with Claude performing cross-check validation to deliver a consensus conclusion.
Ping-pong up to 3 times until consensus is reached.

**Core Principles:**
- Codex has limited context/tools, so Claude must verify
- On disagreement, re-query Codex (with context re-transmission)
- No emoji usage
- Use generic expressions (avoid project-specific terminology)

---

## Arguments

`$ARGUMENTS` receives the PRD file path.
- Example: `/tm:review-prd-with-codex .taskmaster/docs/prd.md`
- If missing, request path input via AskUserQuestion

---

## Codex Context Configuration Principles

### 1. Codex Usage Scope for This Workflow
- This command uses the **Codex MCP tool** (`mcp__codex__codex`)
- Set sandbox to `read-only`
- **No network/web search** (for reproducibility and safety)
- Include the PRD/context directly in the prompt for reproducibility (don't rely on file exploration)
- Git history is not accessible (Claude's cross-check compensates)

### 2. Required Information (Rich Context)
- Full PRD text (with line numbers, in `nl -ba` format)
- **Full CLAUDE.md** (include the entire content without limits)
- **Detailed project tech stack** (package.json, requirements.txt contents)
- **Directory structure** (main folder layout)
- **PRD-related code summary** (optional: symbol overview of files mentioned in the PRD)
- Explicit review criteria
- Enforced output format

### 3. Information to Exclude
- Claude-specific tools (AskUserQuestion, TodoWrite, etc.)
- Internal workflow details

### 4. Prompt Construction Principles
- **Include the full CLAUDE.md** (remove the 1000-character limit)
- Review the entire PRD at once (don't split it)
- Provide project context with the tech stack and directory structure
- Consider reducing the prompt only on timeout

### 5. Enforced Output Format
- Specify a structured format (tables, lists)
- State "Must use this format"
- Require line number references

---

## Workflow Steps

### Step 1: Gather Pre-requisite Information (Rich Context)

#### 1.1 Read PRD File (with line numbers)
```bash
nl -ba "$ARGUMENTS"
```
- `nl -ba`: numbers every line, including blank lines (unlike plain `nl`, whose default body numbering skips blanks)
- Output an error message and exit if the file doesn't exist

#### 1.2 Verify PRD File Exists
```bash
test -f "$ARGUMENTS" && echo "File exists" || echo "File not found"
```

#### 1.3 Read Full CLAUDE.md
- Check CLAUDE.md in the project root
- **Include the entire content in the prompt** (no 1000-character limit)
- Convey all project conventions, tech stack details, and caveats

#### 1.4 Collect Detailed Project Tech Stack
Claude reads and summarizes these files:
- `package.json` (frontend dependencies)
- `requirements.txt` (backend dependencies)
- `go.mod`, `Cargo.toml`, etc. (if applicable)
- Specify major framework/library versions
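As a rough illustration of step 1.4, the dependency summary can be derived mechanically from `package.json`. The helper below is a hypothetical sketch (the command itself just reads and summarizes the files); the output shape is illustrative, not a required format.

```python
import json

def summarize_package_json(text):
    # Collapse dependencies and devDependencies into "name version" pairs
    # suitable for the Tech Stack section of the Codex prompt.
    data = json.loads(text)
    deps = {}
    for key in ("dependencies", "devDependencies"):
        deps.update(data.get(key, {}))
    return [f"{name} {version}" for name, version in sorted(deps.items())]

# Illustrative input only; a real run reads the project's package.json.
sample = '{"dependencies": {"next": "15.0.0", "react": "19.0.0"}}'
```

The same idea applies to `requirements.txt` (split lines on `==`) or `go.mod`.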

#### 1.5 Collect Directory Structure
```bash
# Main directory structure (depth 2-3)
tree -L 3 --dirsfirst -I 'node_modules|__pycache__|.git|dist|build|.next'
```
Or use combinations of `ls -la` to understand the structure

#### 1.6 Summarize PRD-Related Code Files (Optional)
- If major files/modules are mentioned in the PRD, collect a symbol overview
- Example: for an "Authentication System Improvement" PRD, check `routes/auth.py`, `middleware.ts`, etc.
- Helps understand the relationship between the existing implementation and the PRD design

#### 1.7 Reference TaskMaster PRD Template
`.taskmaster/templates/example_prd.txt` structure:
```
<context>
# Overview
# Core Features
# User Experience
</context>
<PRD>
# Technical Architecture
# Development Roadmap
# Logical Dependency Chain
# Risks and Mitigations
# Appendix
</PRD>
```
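One way to use the template above before invoking Codex is a quick structural pre-check on the draft. This is a hypothetical sketch: the heading strings mirror the `example_prd.txt` section list and should be adjusted to the project's actual template.

```python
# Required headings, taken from the TaskMaster example_prd.txt structure.
REQUIRED_SECTIONS = [
    "# Overview", "# Core Features", "# User Experience",
    "# Technical Architecture", "# Development Roadmap",
    "# Logical Dependency Chain", "# Risks and Mitigations", "# Appendix",
]

def missing_sections(prd_text):
    # Return the required headings that never appear in the PRD draft.
    return [s for s in REQUIRED_SECTIONS if s not in prd_text]
```

Missing headings map directly onto review criterion 1 (Structure) and criterion 4 (Completeness).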

### Step 2: Construct Codex Prompt (Rich Context)

Use this template to write the prompt:

```
## Role
You are a PRD (Product Requirements Document) review expert.

## Project Context

### Tech Stack
[Summary of package.json / requirements.txt contents]
Example:
- Frontend: Next.js 15, React 19, Tailwind CSS v4, shadcn/ui
- Backend: FastAPI, Google Gemini API, Decord
- Deployment: Docker + Cloudflare Tunnel

### Project Structure
[tree or ls output]
```
frontend/
├── src/app/        # App Router
├── src/components/ # UI components
└── src/hooks/      # Custom hooks

services/
├── base_video_processor.py
├── video_processor.py
└── child_safety_processor.py
```

### Project Guidelines (CLAUDE.md)
[Full CLAUDE.md - include entire content without limits]

### Existing Code Related to PRD (Optional)
[Symbol overview of major files mentioned in PRD]

## Review Target
File: [PRD file path]

### PRD Content (with line numbers)
[Full PRD - in nl -ba format]

## TaskMaster PRD Format Criteria
<context>: Overview, Core Features, User Experience
<PRD>: Technical Architecture, Development Roadmap (by Phase),
       Logical Dependency Chain, Risks and Mitigations, Appendix

## Review Criteria
1. Structure: Compliance with TaskMaster PRD format
2. Clarity: Ambiguous expressions, undefined terms, unmeasurable goals
3. Feasibility: Implementation-level specificity, technical realism
4. Completeness: Missing sections (User Stories, Acceptance Criteria,
   Success Metrics, Risk/Dependencies, etc.)
5. Consistency: Internal contradictions, duplicate definitions, version mismatches

## Output Format (Must use this format)

### Strengths
- [Item]: [Description] (line number)

### Issues
| Item | Location (line) | Problem | Recommended Fix |
|------|-----------------|---------|-----------------|

### Open Questions
- [Question 1]
- [Question 2]

### Overall Assessment
[1-2 sentence summary]
```
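The template above can also be assembled programmatically from the Step 1 context. The builder below is a minimal sketch with illustrative parameter names; it abbreviates the criteria and output-format sections rather than reproducing them in full.

```python
def build_review_prompt(tech_stack, tree, claude_md, prd_path, prd_numbered):
    # Each section mirrors the template: role, project context, review
    # target, then the enforced output format.
    return "\n".join([
        "## Role",
        "You are a PRD (Product Requirements Document) review expert.",
        "",
        "## Project Context",
        "### Tech Stack", tech_stack,
        "### Project Structure", tree,
        "### Project Guidelines (CLAUDE.md)", claude_md,  # full text, never truncated
        "",
        "## Review Target",
        f"File: {prd_path}",
        "### PRD Content (with line numbers)", prd_numbered,  # nl -ba output
        "",
        "## Output Format (Must use this format)",
        "### Strengths / ### Issues / ### Open Questions / ### Overall Assessment",
    ])
```

Building the prompt as data makes the timeout fallback (drop or shorten individual sections) a one-line change.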

### Step 3: Execute Codex MCP (First Run)

**Call the mcp__codex__codex tool:**

Parameters:
- `prompt`: Full prompt constructed in Step 2
- `sandbox`: "read-only"
- `cwd`: Project root path (optional)

**Call example:**
```
mcp__codex__codex tool call:
- prompt: [constructed prompt]
- sandbox: "read-only"
```

**Notes:**
- Parse the text results from the MCP tool response
- Reduce the prompt length on timeout

### Step 4: Receive and Parse Codex Feedback

Organize the Codex response into this structure:
- Strengths: List of valid points
- Issues: Table of items needing improvement
- Open Questions: List of open questions
- Summary: Overall assessment

### Step 5: Claude Cross-check

**Verify items Codex may have missed:**

1. **Check for already-resolved issues**
   ```bash
   git log --oneline -20
   ```
   - Verify whether issues mentioned in the PRD were already resolved via commits

2. **Validate package/dependency existence**
   - npm: `npm view [package-name]`
   - pip: `pip show [package-name]` or a PyPI search
   - Check for mentions of non-existent packages

3. **Verify codebase-PRD synchronization**
   - Confirm that files/modules mentioned in the PRD actually exist
   - Check alignment between the existing implementation and the PRD design

4. **Verify CLAUDE.md guideline compliance**
   - Project convention adherence
   - TaskMaster workflow compatibility

5. **Identify Codex errors**
   - List incorrect items found during validation
   - Prepare evidence (git commit, actual files, etc.)
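Since the Codex sandbox has no network access, cross-check item 2 can be approximated offline by comparing package names mentioned in the PRD against the project's declared dependencies. A minimal sketch, assuming `package.json` is the manifest of record (`npm view` / `pip show` remain the authoritative checks when the network is available):

```python
import json

def undeclared_packages(mentioned, package_json_text):
    # Flag PRD-mentioned packages absent from dependencies/devDependencies.
    data = json.loads(package_json_text)
    declared = set(data.get("dependencies", {})) | set(data.get("devDependencies", {}))
    return sorted(set(mentioned) - declared)
```

A non-empty result feeds the [ISSUE] table ("Non-existent package mentioned") with the manifest as evidence.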

### Step 6: Disagreement Check and Re-review (Context Re-transmission)

**If there are disagreements:**

Call `mcp__codex__codex` as a new session, **including the previous conversation context in the prompt**:

#### Re-review Prompt Template:
```
## Role
You are a PRD review expert.

## Previous Review Context

### First Review Summary
[Key content from Codex's first response - Strengths, Issues, Questions, Summary]

### Claude Cross-check Results
[Disagreement items and evidence]

## Re-review Request
Please re-review only the following items, reflecting the cross-check results:
1. [Disagreement item 1]
2. [Disagreement item 2]

Respond only with the modified sections from the original assessment.

## Reference Information
[Additional context if needed - related code, git log, etc.]
```

**MCP Call:**
```
mcp__codex__codex tool call:
- prompt: [re-review prompt]
- sandbox: "read-only"
```

**Ping-pong termination conditions:**
- Consensus reached (no disagreement items)
- Maximum of 3 iterations reached
- Codex accepts Claude's evidence

**Iteration tracking:**
- Round 1: Initial review
- Round 2: First re-review (includes previous context)
- Round 3: Second re-review (final)
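The termination conditions above amount to a bounded loop. A sketch of that control flow, where `codex_review` and `cross_check` are hypothetical callables standing in for the `mcp__codex__codex` call and Claude's verification:

```python
def review_with_pingpong(codex_review, cross_check, prompt, max_rounds=3):
    # Run up to max_rounds review rounds, stopping early on consensus.
    context = ""
    feedback = ""
    for round_no in range(1, max_rounds + 1):
        # Each round is a NEW Codex session; prior context is re-transmitted
        # inside the prompt (MCP has no equivalent of `codex resume`).
        feedback = codex_review(prompt + context)
        disagreements = cross_check(feedback)      # Claude verifies the claims
        if not disagreements:                      # consensus reached
            return feedback, round_no
        context = ("\n## Previous Review Context\n"
                   f"{feedback}\n### Claude Cross-check Results\n{disagreements}")
    return feedback, max_rounds                    # deliver best effort after 3
```

The final result (feedback plus the round count) populates the "Ping-pong iterations" and "Consensus status" fields of the Step 7 output.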

### Step 7: Reach Consensus and Deliver Results

**Output Format:**

```markdown
## PRD Review Results (Codex + Claude Consensus)

### Review Process
- Ping-pong iterations: [N]
- Consensus status: [Full consensus / Partial consensus / Claude determination]

### [VALID] Valid Feedback
| Item | Description | Source |
|------|-------------|--------|

### [ISSUE] Items Needing Improvement
| Item | Problem | Recommended Fix | Source |
|------|---------|-----------------|--------|

### [CORRECTION] Codex Error Corrections
| Codex Claim | Actual Situation | Evidence |
|-------------|------------------|----------|

### [DECISION] Items Requiring Decision
(If there are options, ask via AskUserQuestion)

### [SUMMARY] Final Conclusion
[Final summary]
```

---

> See [Work Guidelines](../guidelines/work-guidelines.md)

---

## Error Handling

- **PRD file not found**: "File not found. Please verify the path."
- **Codex MCP call failed**: "Codex MCP tool call failed. Please check MCP server status."
- **Timeout**: "Codex response timeout. Please reduce prompt length or try again."
- **Re-review needed**: "Proceeding with re-review in a new session including previous context."

---

## Codex MCP Tool Reference

### MCP Tools Used by Claude
| Tool | Parameters | Description |
|------|------------|-------------|
| `mcp__codex__codex` | `prompt`, `sandbox`, `cwd`, `model`, etc. | Start a new Codex session |

**Key Parameters:**
- `prompt` (required): Initial prompt
- `sandbox`: "read-only" (only file reading allowed; safe)
- `cwd`: Working directory (optional)
- `model`: Model specification (optional, e.g., "o3", "o4-mini")

**Notes:**
- For ping-pong, call a new session with the previous context included in the prompt
- Unlike the CLI's `codex resume`, MCP uses the context re-transmission method

---

## Cross-check Checklist

| Verification Item | Method | Example |
|-------------------|--------|---------|
| Already-resolved issues | `git log --grep="issue-number"` | Specific issue already resolved via commit |
| Package existence | `npm view` / `pip show` | Non-existent SDK mentioned |
| File/module existence | `ls`, `find`, `grep` | Specific adapter file location |
| Version match | `package.json`, `requirements.txt` | Specified version vs actual version |

---

## Usage Examples

```bash
# Review after PRD generation
/tm:convert-prd .taskmaster/docs/my-idea.md
# prd.md generated

/tm:review-prd-with-codex .taskmaster/docs/prd.md
# Codex MCP review + Claude cross-check + ping-pong + consensus results output
```

---

## Workflow Diagram

```
[Step 1: Gather Rich Context]
        |
        v
[Step 2: Construct Prompt]
        |
        v
[Step 3: Codex MCP First Review] ──────────────────────────┐
        | (mcp__codex__codex)                              |
        v                                                  |
[Step 4: Parse Feedback]                                   |
        |                                                  |
        v                                                  |
[Step 5: Claude Cross-check]                               |
        |                                                  |
        v                                                  |
[Step 6: Disagreements?]                                   |
        |                                                  |
        ├─ YES & iterations < 3 ─> [New MCP call + context]┘
        |                          (includes conversation summary)
        |
        └─ NO or iterations >= 3 ─> [Step 7: Deliver Consensus]
```

**Core Flow:**
1. Execute the Codex MCP review
2. Claude validates via cross-check
3. On disagreement, make a new MCP call (include the previous context in the prompt)
4. Repeat until consensus or a maximum of 3 iterations
5. Deliver the final results

---

## Expected Output Example

```markdown
## PRD Review Results (Codex + Claude Consensus)

### Review Process
- Ping-pong iterations: 2
- Consensus status: Full consensus

### [VALID] Valid Feedback
| Item | Description | Source |
|------|-------------|--------|
| No success metrics | Goals list only functional goals; no performance/cost criteria defined | Codex |
| Missing error handling | No exception handling in API call code | Codex + Claude |
| Test scenarios | Only the happy path exists; no failure cases | Codex |

### [ISSUE] Items Needing Improvement
| Item | Problem | Recommended Fix | Source |
|------|---------|-----------------|--------|
| Dependency list | Non-existent package mentioned | Update to the actual package | Claude verification |
| Milestone status | Includes already-resolved issues | Mark complete or remove | Claude verification |

### [CORRECTION] Codex Error Corrections
| Codex Claim | Actual Situation | Evidence |
|-------------|------------------|----------|
| Feature not implemented | Feature already exists | Verified in src/modules/ folder |

### [SUMMARY] Final Conclusion
The PRD is structurally sound, but non-functional requirements (performance, security, error handling)
and dependency information need updates. Recommend removing already-resolved issues from the Milestone.
```