@jahonn/research-agent-skill 1.0.0

package/README.md ADDED
@@ -0,0 +1,62 @@
# Research Agent Skill

Deep research and analysis on any topic. Five modes: Quick, Deep Dive, Compare, Landscape, Evaluate. Structured reports with source evaluation.

Works with any [Agent Skills](https://agentskills.io)-compatible tool: Claude Code, OpenAI Codex, Trae, Cursor, Gemini CLI, and more.

## Problem

You ask your AI "what is X?" and get a generic, unsourced paragraph. You ask "should we use X or Y?" and get "it depends."

**This skill gives you structured, sourced, opinionated research.**

## Modes

| Mode | You Say | Output |
|------|---------|--------|
| **Quick** | "What is gstack?" | 1-paragraph summary + 3 key facts |
| **Deep Dive** | "Research AI coding agents" | Full report in `RESEARCH.md` |
| **Compare** | "Claude Code vs Cursor vs Codex" | Scored comparison matrix + recommendation |
| **Landscape** | "What's out there for PM tools?" | Categorized market map |
| **Evaluate** | "Should we adopt X?" | Decision framework with scoring |

## Install

```bash
# Via agentskills-cli
npm install -g @jahonn/agentskills-cli
agentskills install ./research-agent-skill -t all

# Via ClawHub
clawhub install research-agent
```

## Example

**Input:** "Research the AI coding agent landscape and compare the top 5"

**Output:**
1. Web search across official docs, Reddit, HN, GitHub
2. Source evaluation (credibility, recency, bias)
3. Categorized landscape map (direct, adjacent, emerging)
4. Comparison matrix scored across 5 dimensions
5. Recommendation: "For solo devs → X, for teams → Y"
6. All findings sourced with URLs

## Research Quality

- Source hierarchy (official > community > opinion)
- Bias detection (sponsor, recency, popularity)
- Cross-referencing across multiple sources
- Confidence scoring per finding
- "Gaps & caveats" section for what we couldn't verify

## Related

- [PM Agent Skill](https://github.com/jahonn/pm-agent-skill) — uses research as Phase 1
- [Dev Workflow Skill](https://github.com/jahonn/dev-workflow-skill) — uses research in Think phase
- [Agent Skills Spec](https://agentskills.io/specification)

## License

MIT
package/SKILL.md ADDED
@@ -0,0 +1,153 @@
---
name: research-agent
description: Deep research and analysis agent for any topic. Use when the user wants to research a topic, analyze competitors, evaluate technologies, compare tools, investigate trends, do market research, or get a comprehensive analysis of anything. Triggers on phrases like "research", "investigate", "analyze", "compare", "what is", "tell me about", "look into", "deep dive", "competitor analysis", "market research", "technology evaluation", "find alternatives", "landscape analysis", "pros and cons", "should I use". Produces structured research reports with sources.
---

# Research Agent — Deep Investigation on Any Topic

A structured research workflow that turns a vague question into a comprehensive analysis. Five research modes, each with a clear output format. Supports web search, source evaluation, and structured reporting.

## Research Modes

| Mode | Trigger | Output |
|------|---------|--------|
| **Quick** | "What is X?" / "Tell me about X" | 1-paragraph summary + 3 key facts |
| **Deep Dive** | "Research X" / "Deep dive into X" | Full analysis report |
| **Compare** | "Compare X vs Y" / "X or Y?" | Comparison matrix + recommendation |
| **Landscape** | "What's out there for X?" / "Alternatives to X" | Market map + positioning |
| **Evaluate** | "Should we use X?" / "Is X worth it?" | Decision framework with scoring |

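The trigger-to-mode mapping above can be sketched as a first-match router. This is a hypothetical illustration, not the skill's actual dispatch logic; the phrase lists are abbreviated from the table:

```python
# Hypothetical first-match router for the five research modes.
# More specific triggers are checked before the generic "what is"
# quick lookup, and "research X" before "landscape" so that
# "Research the AI coding agent landscape" routes to Deep Dive.
MODE_TRIGGERS = [
    ("compare",   [" vs ", "compare "]),
    ("evaluate",  ["should we", "should i", "worth it", "is it worth"]),
    ("deep_dive", ["research ", "deep dive"]),
    ("landscape", ["what's out there", "alternatives to", "landscape"]),
    ("quick",     ["what is", "tell me about"]),
]

def pick_mode(request: str) -> str:
    """Return the first mode whose trigger phrase appears in the request."""
    text = request.lower()
    for mode, phrases in MODE_TRIGGERS:
        if any(p in text for p in phrases):
            return mode
    return "quick"  # default to the cheapest mode
```

For example, `pick_mode("Claude Code vs Cursor vs Codex")` routes to Compare, while an unmatched request falls back to Quick.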
## How to Use

### Quick Research (30 seconds)
```
"What is gstack?"
"Tell me about Claude Code skills"
```
→ Web search, extract key facts, 1-paragraph summary. No fluff.

### Deep Dive (2-5 minutes)
```
"Research the AI coding agent landscape"
"Deep dive into the Agent Skills standard"
```
→ Spawn a subagent (Sonnet) with the Deep Dive prompt. It searches multiple sources, cross-references them, identifies patterns, and writes `RESEARCH.md`.

### Compare (1-3 minutes)
```
"Claude Code vs Cursor vs Codex"
"RICE vs Kano vs ICE for prioritization"
"Notion vs Linear vs Jira"
```
→ Side-by-side comparison table with scoring across key dimensions. Includes a recommendation with reasoning.

### Landscape Analysis (3-5 minutes)
```
"What open source projects exist for X?"
"Map the competitive landscape for X"
"What tools do PMs use for X?"
```
→ Categorized map of existing solutions. For each: what it does, what it misses, where the gap is.

### Evaluate (2-3 minutes)
```
"Should we build on X or Y?"
"Is it worth adopting X?"
"Pros and cons of using X for our case"
```
→ Decision matrix scoring across dimensions (cost, effort, risk, fit, longevity). Recommendation with confidence level.

## Phase Details

### Deep Dive Prompt

Spawn a subagent (Sonnet) with this research methodology:

1. **Define the question.** Restate the research question. What specifically are we trying to find out?

2. **Source gathering.** Search for:
   - Official docs / primary sources (most reliable)
   - Community discussions (Reddit, HN, Discord — real user opinions)
   - Technical analysis (blog posts, benchmarks, comparisons)
   - GitHub metrics (stars, activity, issues, contributors)
   - Commercial context (funding, team, business model)

3. **Source evaluation.** For each source:
   - Credibility: official vs community vs opinion
   - Recency: when was this published or last updated?
   - Bias: does the author have a stake in the outcome?

4. **Pattern extraction.** What themes emerge across sources?
   - Points of agreement (high confidence)
   - Points of disagreement (need further investigation)
   - Gaps in available information

5. **Structured output.** Write `RESEARCH.md` with:
   - Executive summary (3-5 sentences)
   - Key findings (numbered, with sources)
   - Detailed analysis (organized by theme)
   - Gaps and caveats (what we couldn't verify)
   - Recommendation (if applicable)
   - Sources (with URLs)
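The structured-output step can be generated mechanically. A minimal sketch (the helper name and section list are illustrative, mirroring the bullets in step 5):

```python
from datetime import date

# Section order mirrors the structured-output list in step 5.
SECTIONS = [
    "Executive Summary",
    "Key Findings",
    "Detailed Analysis",
    "Gaps & Caveats",
    "Recommendation",
    "Sources",
]

def research_skeleton(topic: str, question: str) -> str:
    """Return a Markdown skeleton for RESEARCH.md with TODO placeholders."""
    lines = [
        f"# Research: {topic}",
        "",
        f"**Date:** {date.today().isoformat()}",
        f"**Question:** {question}",
        "",
    ]
    for section in SECTIONS:
        lines += [f"## {section}", "", "_TODO_", ""]
    return "\n".join(lines)
```

The subagent then fills each `_TODO_` placeholder as findings accumulate.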

### Compare Prompt

For comparing N items across M dimensions:

1. **Define the comparison axes.** What dimensions matter for this decision?
   - Functional: what can it do?
   - Performance: how fast and reliable is it?
   - Cost: pricing model, free tier?
   - Ecosystem: integrations, community, docs?
   - Maturity: how battle-tested is it?

2. **Score each item** (1-5 per dimension):
   ```
   | Dimension     | Option A | Option B | Option C |
   |---------------|----------|----------|----------|
   | Feature set   | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
   | Ease of use   | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ |
   ```

3. **Context-specific recommendation.** Not "A is best" but "A is best IF you need X, B if you need Y."

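The step-2 matrix can be rendered from raw scores. A minimal sketch, assuming scores are integers 1-5 (the helper itself is hypothetical):

```python
def comparison_table(dimensions: list[str], scores: dict[str, list[int]]) -> str:
    """Render the step-2 Markdown matrix.

    `scores` maps option name -> one 1-5 score per dimension.
    """
    options = list(scores)
    header = "| Dimension | " + " | ".join(options) + " |"
    sep = "|" + "---|" * (len(options) + 1)
    rows = [
        "| {} | {} |".format(dim, " | ".join("⭐" * scores[opt][i] for opt in options))
        for i, dim in enumerate(dimensions)
    ]
    return "\n".join([header, sep] + rows)
```

For example, `comparison_table(["Feature set", "Ease of use"], {"Option A": [4, 5], "Option B": [3, 3]})` reproduces the example rows above, minus the column padding.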
114
+ ### Landscape Prompt
115
+
116
+ For mapping a space:
117
+
118
+ 1. **Categorize solutions:**
119
+ - Direct competitors (same approach, same users)
120
+ - Adjacent tools (different approach, overlapping use case)
121
+ - Workarounds (not products, but how people solve it today)
122
+ - Emerging (new, not proven yet)
123
+
124
+ 2. **For each solution:**
125
+ - What it does (1 sentence)
126
+ - What it does well (strength)
127
+ - What it misses (gap)
128
+ - Who should use it (ideal user)
129
+
130
+ 3. **Identify the gap.** Where is nobody doing a good job? That's the opportunity.
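One way to keep landscape entries uniform is a small record type. A sketch under stated assumptions (the class and field names are illustrative, not part of the skill):

```python
from dataclasses import dataclass

# Categories from step 1 of the Landscape Prompt.
CATEGORIES = ("direct", "adjacent", "workaround", "emerging")

@dataclass
class Solution:
    """One entry in the landscape map; fields mirror step 2."""
    name: str
    category: str     # one of CATEGORIES
    does: str         # what it does (1 sentence)
    strength: str     # what it does well
    gap: str          # what it misses
    ideal_user: str   # who should use it

    def __post_init__(self) -> None:
        # Reject anything outside the four step-1 categories.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category!r}")
```

Grouping a list of `Solution` records by `category` yields the map; scanning the `gap` fields surfaces the step-3 opportunity.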
131
+
132
+ ## Output Files
133
+
134
+ - `RESEARCH.md` — Deep dive report (full analysis with sources)
135
+ - Comparison results go to stdout (capture in conversation)
136
+ - Landscape maps go to stdout or `LANDSCAPE.md` if long
137
+
138
+ ## Model Selection
139
+
140
+ | Mode | Model | Why |
141
+ |------|-------|-----|
142
+ | Quick | Haiku | Simple lookup, fast answer |
143
+ | Deep Dive | Sonnet | Needs reasoning, source evaluation |
144
+ | Compare | Sonnet | Needs judgment for scoring |
145
+ | Landscape | Sonnet | Needs categorization and pattern recognition |
146
+ | Evaluate | Sonnet | Needs decision-making framework |
147
+
148
+ ## Tips
149
+
150
+ - **Be specific.** "Research AI" is too broad. "Research AI coding agents for solo developers" is actionable.
151
+ - **State your goal.** "I need to decide between X and Y" gives the research direction.
152
+ - **Time-box it.** "Give me the top 5, not top 50" keeps it focused.
153
+ - **Ask for sources.** "Show me where you found this" for verification.
package/package.json ADDED
@@ -0,0 +1,23 @@
{
  "name": "@jahonn/research-agent-skill",
  "version": "1.0.0",
  "description": "Deep research and analysis agent for any topic. 5 modes: Quick, Deep Dive, Compare, Landscape, Evaluate. Structured reports with source evaluation. Works with Claude Code, Codex, Trae, Cursor, Gemini CLI and 20+ AI tools (Agent Skills standard).",
  "keywords": [
    "agent-skills",
    "research",
    "analysis",
    "competitor-analysis",
    "market-research",
    "comparison",
    "claude-code",
    "codex",
    "trae",
    "ai-coding"
  ],
  "license": "MIT",
  "files": [
    "SKILL.md",
    "README.md",
    "references/"
  ]
}
@@ -0,0 +1,110 @@
# Research Agent — Methodology Reference

## Source Hierarchy

Rank sources by reliability:

| Level | Source Type | Confidence | Example |
|-------|-------------|------------|---------|
| 1 | Official docs / primary source | High | Project README, official API docs |
| 2 | Code / data | High | GitHub repo, benchmarks, usage stats |
| 3 | Expert analysis | Medium-High | Technical blog posts, conference talks |
| 4 | Community discussion | Medium | Reddit, HN, Discord, Stack Overflow |
| 5 | News / press | Medium-Low | TechCrunch, press releases |
| 6 | Opinion / speculation | Low | Personal blogs, social media hot takes |

**Rule:** Always cite the highest-level source available. If you only have Level 4-6 sources, flag the finding as "unverified."
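The rule above reduces to a lookup on the best available level, with Levels 4-6 flagged. A hypothetical helper (labels follow the table's Confidence column):

```python
# Level -> confidence label, from the source-hierarchy table.
CONFIDENCE = {
    1: "high", 2: "high", 3: "medium-high",
    4: "medium", 5: "medium-low", 6: "low",
}

def finding_status(source_levels: list[int]) -> str:
    """Label a finding by its best (lowest-numbered) source level.

    Findings backed only by Level 4-6 sources are flagged "unverified".
    """
    best = min(source_levels)
    label = CONFIDENCE[best]
    return f"unverified ({label})" if best >= 4 else label
```

So a finding backed by an official doc and a Reddit thread reads "high", while one backed only by community and press sources is flagged.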

## Research Report Template

```markdown
# Research: [Topic]

**Date:** YYYY-MM-DD
**Question:** [Original research question]
**Confidence:** High / Medium / Low

## Executive Summary

[3-5 sentences. What did we find? What's the bottom line?]

## Key Findings

1. **[Finding]** — [1-sentence explanation] (Source: [URL])
2. **[Finding]** — [1-sentence explanation] (Source: [URL])
3. **[Finding]** — [1-sentence explanation] (Source: [URL])

## Detailed Analysis

### [Theme 1]
[Analysis with supporting evidence]

### [Theme 2]
[Analysis with supporting evidence]

## Gaps & Caveats

- [What we couldn't verify]
- [What might have changed since the source was published]
- [Author biases in our sources]

## Recommendation

[If applicable. Clear, actionable, with reasoning.]

## Sources

1. [Title](URL) — [credibility level, date]
2. [Title](URL) — [credibility level, date]
```

## Comparison Matrix Template

```markdown
| Dimension   | Weight | Option A | Option B | Option C |
|-------------|--------|----------|----------|----------|
| [Feature]   | 30%    | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| [Cost]      | 20%    | ⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ |
| [Ease]      | 25%    | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ |
| [Ecosystem] | 25%    | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| **Total**   | 100%   | **3.6**  | **3.7**  | **3.6**  |

**Recommendation:**
- Choose A if [condition]
- Choose B if [condition]
- Choose C if [condition]
```
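The **Total** row is the weighted sum of star counts per option (weight × stars, summed over dimensions). Checking the example values as a quick sketch:

```python
# Weights and star counts from the example matrix above.
weights = [0.30, 0.20, 0.25, 0.25]   # Feature, Cost, Ease, Ecosystem
stars = {
    "Option A": [4, 2, 5, 3],
    "Option B": [3, 4, 3, 5],
    "Option C": [5, 3, 2, 4],
}

# Weighted total per option, rounded to 2 decimals.
totals = {
    option: round(sum(w * s for w, s in zip(weights, counts)), 2)
    for option, counts in stars.items()
}
```

With these inputs the totals come out to 3.6, 3.7, and 3.6 for Options A, B, and C respectively.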

## Web Search Strategy

When researching a topic, run these searches in order:

1. **Official:** `"[topic]" site:github.com` or `"[topic]" official docs`
2. **Community:** `"[topic]" site:reddit.com` or `"[topic]" site:news.ycombinator.com`
3. **Comparison:** `"[topic]" vs alternatives comparison 2026`
4. **Critical:** `"[topic]" problems limitations issues`
5. **Stats:** `"[topic]" stars users adoption metrics`

This gives you: official story → real user opinions → competitive context → honest downsides → market validation.
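The five-step order can be sketched as a query builder. This helper is hypothetical; only the first query variant from each step is included:

```python
def search_queries(topic: str, year: int = 2026) -> list[str]:
    """Return the five searches in order: official, community, comparison, critical, stats."""
    t = f'"{topic}"'
    return [
        f"{t} site:github.com",                    # 1. official story
        f"{t} site:reddit.com",                    # 2. real user opinions
        f"{t} vs alternatives comparison {year}",  # 3. competitive context
        f"{t} problems limitations issues",        # 4. honest downsides
        f"{t} stars users adoption metrics",       # 5. market validation
    ]
```

Running the list in order preserves the progression described above.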

## Bias Detection

Watch for these biases in sources:

| Bias | Signal | Counter |
|------|--------|---------|
| **Sponsor bias** | "In partnership with..." | Find independent reviews |
| **Recency bias** | "Latest and greatest" | Check whether older solutions still work |
| **Popularity bias** | "Most popular = best" | Best for whom? For what? |
| **Confirmation bias** | Only positive mentions | Actively search for criticism |
| **Author bias** | Author works for the company | Weight community sources higher |

## When to Stop Researching

- **Quick mode:** 1-2 sources. Done in 30 seconds.
- **Deep Dive mode:** 5-10 sources. Stop when themes repeat.
- **Compare mode:** 1 source per option + 1 independent comparison. Done.
- **Landscape mode:** Enough to categorize all major players. Usually 10-20 sources.
- **Evaluate mode:** Enough to score each dimension with confidence. Usually 5-8 sources.

**Diminishing returns rule:** If your last 3 sources didn't change your conclusion, stop.