@whatasoda/agent-tools 0.1.0 → 0.1.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -172,12 +172,7 @@ async function main() {
   const sessionPart = await buildSessionPart(parsed.sessionJsonlPath, tmpDir);
   const prompt = `${parsed.instruction}: ${reviewFile} (ref: ${refPath})${sessionPart}`;
   try {
-    const { output, exitCode } = await runCodex([
-      "exec",
-      "-m",
-      "gpt-5.4",
-      prompt
-    ]);
+    const { output, exitCode } = await runCodex(["exec", "-m", "gpt-5.4", prompt]);
     if (exitCode !== 0) {
       if (exitCode === 126 || exitCode === 127) {
         console.error(`⚠ codex レビューをスキップします(codex コマンドが見つからない、または実行権限がありません — exit code: ${exitCode})`);
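The hunk above only collapses a multi-line argument array onto one line; the argv passed to the CLI is identical. The package's actual `runCodex` implementation is not shown in this diff, but a helper of that shape can be sketched roughly as follows (hypothetical: the name `runCommand`, the use of `spawnSync`, and the merged stdout/stderr are assumptions, not the published code):

```typescript
import { spawnSync } from "node:child_process";

// Hypothetical sketch of a runCodex-style helper: invoke a CLI with an
// argument array and return its combined output and exit code.
function runCommand(
  cmd: string,
  args: string[],
): { output: string; exitCode: number } {
  const res = spawnSync(cmd, args, { encoding: "utf8" });
  // A null status means the process was killed by a signal; treat as failure.
  return {
    output: (res.stdout ?? "") + (res.stderr ?? ""),
    exitCode: res.status ?? 1,
  };
}

// Demonstration with the current Node binary instead of the codex CLI:
// both the multi-line and single-line array literals produce this same call.
const { output, exitCode } = runCommand(process.execPath, [
  "-e",
  "console.log('ok')",
]);
```

Exit codes 126/127, handled in the context lines above, are the conventional shell codes for "found but not executable" and "command not found", which is why the diff treats them as a skip rather than a hard failure.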
@@ -21,12 +21,10 @@ Identify the core topic from $ARGUMENTS. Determine:
 
 ### Step 2: Survey Investigation
 
-Launch a sub-agent (Task, subagent_type: Explore) to survey the relevant area.
-
 **Sub-agent prompt constraints**: Every sub-agent prompt MUST begin with:
 > You are a research-only agent. Do NOT use AskUserQuestion, EnterPlanMode, or any interactive/planning tools. Return your findings in the output format specified below.
 
-**
+**Codebase investigation output contract**: Every codebase sub-agent prompt MUST end with:
 > Return findings in this exact format:
 > ### Files
 > - `path/to/file` — relevance to the topic
|
@@ -37,7 +35,42 @@ Launch a sub-agent (Task, subagent_type: Explore) to survey the relevant area.
 > ### Open Questions
 > - question — what remains unclear from this investigation alone
 
-
+**External research output contract**: Every external research sub-agent prompt MUST end with:
+> Return findings in this exact format:
+> ### Official Documentation
+> - library/service name — key API, configuration, version-specific notes
+> ### Best Practices
+> - practice — source and context
+> ### Patterns & Examples
+> - pattern — description with code snippets if available
+> ### Caveats
+> - caveat — gotchas, known issues, version incompatibilities
+
+#### External Research Trigger
+
+Before launching sub-agents, evaluate whether the topic involves external technologies:
+
+- If the topic references **named external libraries, frameworks, or services** (e.g., "Drizzle", "NextAuth", "Stripe"), **technology selection or comparison** (e.g., "which ORM", "migrate from X to Y"), or **external API integration** (e.g., "Slack API", "GitHub webhook") → launch an external research sub-agent **in parallel** with the codebase survey.
+- If the topic is purely about modifying existing code with no new external dependencies → skip external research.
+
+**External research sub-agent prompt template**:
+> You are a research-only agent. Do NOT use AskUserQuestion, EnterPlanMode, or any interactive/planning tools. Return your findings in the output format specified below.
+>
+> ## Research Task
+> Investigate external documentation and resources for: [topic extracted from $ARGUMENTS]
+>
+> ## Research Strategy
+> 1. Use Context7 MCP (resolve-library-id → get-library-docs) for each identified library/framework
+> 2. Use WebSearch for broader context: best practices, migration guides, comparison articles, known issues
+> 3. Synthesize findings — prioritize official documentation over community content
+>
+> [External research output contract]
+
+#### Codebase Survey
+
+Launch a sub-agent (Task, subagent_type: Explore) to survey the relevant area. The survey prompt should include the topic and ask for: relevant files, existing patterns, dependencies, and current state of the area.
+
+If external research is triggered, launch both sub-agents in a single message (parallel execution).
 
 ### Step 3: Focused Investigation (optional)
 
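The trigger in the hunk above is ultimately a judgment call made by the agent, but its decision rule can be approximated as a predicate over the topic text. The function name, signal patterns, and `namedLibraries` parameter below are purely illustrative — nothing in the package implements this as code:

```typescript
// Illustrative approximation of the "External Research Trigger" rule:
// named libraries always trigger; otherwise look for selection/comparison
// or API-integration phrasing in the topic text.
const EXTERNAL_SIGNALS: RegExp[] = [
  /which (orm|framework|library|database)/i, // technology selection
  /migrate from .+ to .+/i,                  // technology comparison
  /\b(api|webhook|sdk)\b/i,                  // external integration
];

function needsExternalResearch(
  topic: string,
  namedLibraries: string[],
): boolean {
  if (namedLibraries.length > 0) return true; // e.g. "Drizzle", "Stripe"
  return EXTERNAL_SIGNALS.some((re) => re.test(topic));
}
```

A purely internal task like "rename an internal helper function" matches none of the signals and would skip external research, consistent with the second bullet of the trigger.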
@@ -53,6 +86,7 @@ Synthesize findings into a Discussion Briefing block:
 ## Discussion Briefing
 - **Topic**: what needs to be discussed
 - **Background**: relevant codebase findings and current state
+- **External Context** (include if external research was performed): key findings from official docs, best practices, and caveats
 - **Key Questions**: 2-4 questions that should guide the discussion (ordered by dependency)
 - **Constraints**: known technical or design constraints
 ```
@@ -171,6 +171,18 @@ These are flexible guidance, not mandatory steps. Adapt to the conversation.
 
 - **Start with understanding**: Grasp what the user wants to explore. If `$ARGUMENTS` is vague, ask clarifying questions — but don't over-interrogate. One or two questions is usually enough to get started.
 - **Investigate with sub-agents**: Use sub-agents (Task, subagent_type: Explore) for codebase investigation. Apply the standard constraint block (above). Investigation informs the discussion but doesn't replace it.
+- **External research when needed**: When the discussion involves external libraries, technology comparison, or API integration, launch an external research sub-agent in parallel with any codebase investigation. The external research sub-agent should use Context7 MCP (resolve-library-id → get-library-docs) for official documentation and WebSearch for broader context (best practices, migration guides, known issues). Apply the standard constraint block. Use the external research output contract:
+  > Return findings in this exact format:
+  > ### Official Documentation
+  > - library/service name — key API, configuration, version-specific notes
+  > ### Best Practices
+  > - practice — source and context
+  > ### Patterns & Examples
+  > - pattern — description with code snippets if available
+  > ### Caveats
+  > - caveat — gotchas, known issues, version incompatibilities
+
+  Present external research findings alongside codebase findings before asking for decisions — consistent with the "データが先、判断が後" principle.
 - **Iterate naturally**: Some discussions need multiple investigation rounds; others converge quickly. Follow the conversation's natural rhythm.
 - **Record decisions immediately**: When the user approves a design decision, persist it to the DB via `sd decision create`. The Bash tool permission prompt serves as a natural approval checkpoint. Include `--repo-owner` and `--repo-name` when working in a git repository, and `--tag topic:<topic-slug>` for grouping.
 - **Capture memos as text during discussion**: When a notable idea or insight emerges, present it as text ("メモ:〇〇") without writing to the DB. Memos are batched and written during wrap-up.
@@ -14,7 +14,7 @@ If $ARGUMENTS is empty, ask the user what they want to implement before proceedi
 - Extract decisions as **mandatory constraints** — every applicable decision must be traceable to a plan step
 - Extract `rejected_alternatives` from each decision as **exclusion constraints** — approaches listed as rejected must not be re-proposed
 - If no decisions found: proceed normally (conversation-context-based input)
-- **
+- **Codebase investigation output contract**: Every codebase sub-agent prompt MUST end with the following output format requirement:
 > Return findings in this exact format:
 > ### Files
 > - `path/to/file` — relevance to the task
@@ -24,11 +24,36 @@ If $ARGUMENTS is empty, ask the user what they want to implement before proceedi
 > - dependency — how it affects the task
 > ### Open Questions
 > - question — what remains unclear from this investigation alone
-
-
-
+- **External research output contract**: Every external research sub-agent prompt MUST end with:
+  > Return findings in this exact format:
+  > ### Official Documentation
+  > - library/service name — key API, configuration, version-specific notes
+  > ### Best Practices
+  > - practice — source and context
+  > ### Patterns & Examples
+  > - pattern — description with code snippets if available
+  > ### Caveats
+  > - caveat — gotchas, known issues, version incompatibilities
+- **External Research Trigger**: Before launching the survey sub-agent, evaluate whether the task involves external technologies:
+  - If the task references **named external libraries, frameworks, or services**, **technology selection or comparison**, or **external API integration** → launch an external research sub-agent **in parallel** with the codebase survey.
+  - If the task is purely about modifying existing code with no new external dependencies → skip external research.
+  - External research sub-agent prompt:
+    > You are a research-only agent. Do NOT use AskUserQuestion, EnterPlanMode, or any interactive/planning tools. Return your findings in the output format specified below.
+    >
+    > ## Research Task
+    > Investigate external documentation and resources for: [topic]
+    >
+    > ## Research Strategy
+    > 1. Use Context7 MCP (resolve-library-id → get-library-docs) for each identified library/framework
+    > 2. Use WebSearch for broader context: best practices, migration guides, comparison articles, known issues
+    > 3. Synthesize findings — prioritize official documentation over community content
+    >
+    > [External research output contract]
 - **Sub-agent prompt constraints**: Every sub-agent prompt (both survey and focused) MUST begin with the following constraint block:
 > You are a research-only agent. Do NOT use AskUserQuestion, EnterPlanMode, or any interactive/planning tools. Return your findings in the output format specified below.
+- Launch a sub-agent (Task, subagent_type: Explore) to survey project structure, dependencies, and conventions relevant to the task. If external research is triggered, launch both sub-agents in a single message (parallel execution).
+- Summarize findings from all agents into a Common Context block. If external research was performed, include a dedicated "External Context" section in the Common Context block.
+- Based on findings, optionally launch 1-2 focused sub-agents in parallel. Each prompt must include the Common Context block (summarized, not raw output), the specific investigation question, and both the constraint block and output contract.
 - Summarize investigation results before proceeding.
 - If investigation reveals multiple fundamentally different approaches, use AskUserQuestion to let the user decide: "方針を調整して続行" / "ここで中断して方針を整理". Do not choose an approach autonomously.
 2. **Branch Strategy**: Use AskUserQuestion to determine branch strategy before planning:
package/package.json (CHANGED)
@@ -1,14 +1,21 @@
 {
   "name": "@whatasoda/agent-tools",
-  "version": "0.1.0",
+  "version": "0.1.1",
   "private": false,
   "type": "module",
+  "repository": {
+    "type": "git",
+    "url": "https://github.com/whatasoda/agent-extensions"
+  },
   "bin": {
     "sd": "dist/src/cli.js"
   },
   "scripts": {
     "build": "bun scripts/build.ts",
-    "test": "bun test"
+    "test": "bun test",
+    "typecheck": "tsc --noEmit",
+    "lint": "oxlint src/ scripts/",
+    "format:check": "oxfmt --check src/ scripts/"
   },
   "files": [
     "dist/"
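The manifest hunk above adds three quality-gate scripts (`typecheck`, `lint`, `format:check`) alongside the existing `build` and `test`. A CI pipeline consuming this package's repository might want to verify those script names exist before invoking them; a minimal sketch (the `missingScripts` helper and `REQUIRED_SCRIPTS` list are hypothetical, not part of the package):

```typescript
// Sketch: check a package.json-shaped manifest for the quality-gate
// scripts introduced in 0.1.1 before a pipeline runs them.
const REQUIRED_SCRIPTS = ["build", "test", "typecheck", "lint", "format:check"];

function missingScripts(manifest: {
  scripts?: Record<string, string>;
}): string[] {
  return REQUIRED_SCRIPTS.filter((name) => !(manifest.scripts ?? {})[name]);
}

// Sample manifest mirroring the hunk above (abbreviated).
const sample = {
  scripts: {
    build: "bun scripts/build.ts",
    test: "bun test",
    typecheck: "tsc --noEmit",
    lint: "oxlint src/ scripts/",
    "format:check": "oxfmt --check src/ scripts/",
  },
};
```

Under bun, each of these would be run as `bun run <name>`; the `oxlint` and `oxfmt` binaries come from the devDependencies added in the next hunk.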
@@ -21,9 +28,11 @@
     "ulid": "latest"
   },
   "devDependencies": {
-    "bun-types": "latest",
     "@types/react": "latest",
-    "
-    "ink-testing-library": "latest"
+    "bun-types": "latest",
+    "ink-testing-library": "latest",
+    "oxfmt": "^0.43.0",
+    "oxlint": "^1.58.0",
+    "typescript": "latest"
   }
 }