sc-research 1.0.13 → 1.0.14
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +132 -63
- package/package.json +1 -1
- package/templates/base/commands/controversy.md +8 -20
- package/templates/base/commands/deep-research.md +8 -19
- package/templates/base/commands/discovery.md +10 -22
- package/templates/base/commands/quick.md +6 -7
- package/templates/base/commands/rank.md +8 -19
- package/templates/base/commands/research.md +12 -6
- package/templates/base/commands/sentiment.md +8 -20
- package/templates/base/commands/trend.md +8 -19
- package/templates/base/commands/visualize.md +7 -7
- package/templates/base/skills/social_media_controversy.md +53 -23
- package/templates/base/skills/social_media_discovery.md +55 -43
- package/templates/base/skills/social_media_fetch.md +81 -49
- package/templates/base/skills/social_media_rank.md +49 -20
- package/templates/base/skills/social_media_schema.md +105 -19
- package/templates/base/skills/social_media_sentiment.md +59 -23
- package/templates/base/skills/social_media_trend.md +60 -20
- package/templates/base/skills/using_social_media_research.md +92 -74
package/README.md
CHANGED

````diff
@@ -1,109 +1,178 @@
-#
+# sc-research
 
-
+AI-ready social research toolkit for Reddit + X with a template installer for agent workflows.
 
-
+Use it to:
 
-
+- fetch normalized social discussion data,
+- run deep or quick research from CLI,
+- scaffold command/skill templates into agent folders,
+- visualize classified outputs in a local dashboard.
 
-##
+## Why This Project
 
-
-- **Unified Data Model**: Normalizes data from different platforms into a single `ResearchItem` schema.
-- **AI-Native Output**: optimized for LLM consumption (JSON).
-- **Engagement-Based Sorting**: Prioritizes content with high community interaction.
-- **Multi-Platform Bootstrap**: Install AI assistant templates into any repo via CLI (`claude`, `cursor`, `windsurf`, `antigravity`).
+`sc-research` is designed for **AI-agent pipelines** where research and analysis are separated:
 
-
+- **Fetch layer**: collect raw social signals (`reddit_data.json`, `x_data.json`)
+- **Analysis layer**: agent skills produce strict `classified_*.json` files
+- **Presentation layer**: dashboard auto-loads available classified files
+
+This keeps the data pipeline composable and tool-agnostic.
+
+## Install
 
 ```bash
 npm install -g sc-research
 ```
 
-
+Check version:
 
 ```bash
-
+sc-research --version
+```
 
-
-sc-research init --ai claude,cursor --dry-run
+## Quick Start
 
-
-sc-research init --ai claude
+### 1) Configure API keys
 
-
-sc-research init --ai claude,cursor
+Create `.sc-research` in your project root:
 
-
-
+```env
+OPENAI_API_KEY=...
+XAI_API_KEY=...
 ```
 
-
-
+Notes:
+
+- If a key is missing, that source is skipped.
+- You can override env path with `SC_RESEARCH_ENV_FILE`.
+
+### 2) Run research
+
+```bash
+# quick
+sc-research research "best budget iem"
+
+# deep
+sc-research research:deep "best budget iem"
 
-
--
-
+# deep discovery mode
+sc-research research:deep "DISCOVERY_WEEKLY" --mode=discovery
+```
 
-
+### 3) Visualize
 
-
+```bash
+sc-research visualize
+```
 
-
-sc-research init --ai claude
-```
+Dashboard reads available `classified_*.json` files and enables tabs accordingly.
 
-
-- `sc-research research "<topic>"`
-- `sc-research research:deep "<topic>"`
-- `sc-research visualize`
+## CLI Commands
 
-
+```bash
+sc-research init --ai <targets> [--dry-run] [--force] [--project-root <path>]
+sc-research research "<topic>" [--source=reddit|x|both] [--from=YYYY-MM-DD --to=YYYY-MM-DD] [--mode=discovery]
+sc-research research:deep "<topic>" [same flags as research]
+sc-research visualize [path/to/classified_rank.json]
+```
 
-
+Supported init targets:
 
-
-
-
-
-
-- `/controversy` – Build controversy map
-- `/discovery` – Discover viral topics
-- `/visualize` – Launch dashboard
+- `claude`
+- `cursor`
+- `windsurf`
+- `antigravity`
+- `all`
 
-##
+## Agent Template Bootstrap
 
-
+Install templates into a target project:
 
 ```bash
-
-
+cd /path/to/your/project
+
+# preview
+sc-research init --ai claude,cursor --dry-run
 
-#
-
+# apply
+sc-research init --ai claude,cursor
 ```
 
-
+Canonical template source:
+
+- `templates/base/`
+
+Platform mapping config:
+
+- `templates/platforms/*.json`
+
+## Data Outputs
+
+### Raw fetch outputs
+
+- `reddit_data.json`
+- `x_data.json`
+
+### Classified analysis outputs
+
+- `classified_rank.json`
+- `classified_sentiment.json`
+- `classified_trend.json`
+- `classified_controversy.json`
+- `classified_discovery.json`
+
+## Discovery Mode (Special Flow)
 
-
-import { SocialMediaResearch } from 'social-media-research-skill';
+Discovery has dedicated behavior in runtime:
 
-
-
+- `DISCOVERY_WEEKLY` + discovery mode fetches Reddit weekly trending (`r/popular/top`).
+- Other discovery topics map to candidate subreddits, then fetch subreddit top/search.
+- X still runs through X search flow when X is enabled.
 
-
+Recommended commands:
+
+```bash
+# broad weekly discovery
+sc-research research:deep "DISCOVERY_WEEKLY" --mode=discovery
+
+# topic-focused discovery
+sc-research research:deep "wireless earbuds" --mode=discovery
 ```
 
-##
+## Repository Structure
+
+```text
+src/
+  core/      # fetch clients + research/data services
+  cli/       # init system, adapters, template rendering
+  entries/   # CLI entrypoints (research, visualize, cli)
+templates/
+  base/      # source-of-truth command + skill templates
+  platforms/ # platform folder mapping config
+web/         # dashboard frontend
+docs/        # usage notes and orchestrator test cases
+```
 
-
+## Development
 
-```
-
-
+```bash
+# run source entrypoints
+bun run research "topic"
+bun run research:deep "topic"
+bun run visualize
+
+# tests
+bun test
+
+# build all distributables
+bun run build
 ```
 
-
+## Notes
+
+- Requires Node.js `>=20` for packaged CLI runtime.
+- Project uses Bun scripts for local development/build.
 
 ## License
 
````
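The README states that the dashboard reads whichever `classified_*.json` files exist and enables tabs accordingly. As a rough sketch of that behavior (the function, tab labels, and file-to-tab mapping here are illustrative assumptions, not the package's actual dashboard code):

```python
from pathlib import Path

# File names come from the README's "Classified analysis outputs" list;
# the tab labels and this helper are hypothetical.
CLASSIFIED_FILES = {
    "classified_rank.json": "Rank",
    "classified_sentiment.json": "Sentiment",
    "classified_trend.json": "Trend",
    "classified_controversy.json": "Controversy",
    "classified_discovery.json": "Discovery",
}

def enabled_tabs(data_dir: str) -> list[str]:
    """Return the dashboard tabs whose classified file exists in data_dir."""
    root = Path(data_dir)
    return [tab for name, tab in CLASSIFIED_FILES.items() if (root / name).exists()]
```

A missing file simply drops its tab, which matches the README's claim that the pipeline stays composable: each analysis skill can run (or not) independently.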
package/package.json
CHANGED

````diff
-  "version": "1.0.13",
+  "version": "1.0.14",
````

package/templates/base/commands/controversy.md
CHANGED

````diff
@@ -1,28 +1,16 @@
 ---
-description:
+description: Generate a controversy map from social media data.
 ---
 
-
+This command produces a controversy analysis. It follows the orchestrated pipeline with controversy as the target.
 
-
+1. **Ensure raw data exists**
 
-
+   > Check if `reddit_data.json` or `x_data.json` exists. If missing or stale, delegate to the `social_media_fetch` skill to fetch fresh data with deep depth.
 
-
+2. **Run analysis**
 
-
+   > Read the `social_media_controversy` skill instructions and follow them to generate `classified_controversy.json`.
 
-
-
-4. Validate output schema
-
-   > Ensure `classified_controversy.json` includes:
-   >
-   > - `topic`
-   > - `overall_divisiveness`
-   > - `controversies` array
-   > - each controversy contains `topic`, `heat_score`, `divisiveness`, `side_a`, and `side_b`
-   > - each side contains `position`, `supporter_count`, and `sample_quote` with `text`, `author`, `link`
-
-5. Display the results
-   > Read `classified_controversy.json` and display controversies with side-by-side opposing views and heat scores.
+3. **Present results**
+   > Display controversies with side-by-side opposing views and heat scores from `classified_controversy.json`.
````
package/templates/base/commands/deep-research.md
CHANGED

````diff
@@ -1,25 +1,14 @@
 ---
-description:
+description: Run deep research on a topic and produce a classified analysis report. This command triggers the full orchestrated pipeline.
 ---
 
-
+This command runs the full research pipeline. Follow the `using_social_media_research` orchestrator skill to execute all steps.
 
-
+**Pipeline overview** (orchestrator handles the details):
 
-
+1. **Resolve intent** — determine which analysis type fits the user's question (rank, sentiment, trend, controversy, discovery, or all).
+2. **Fetch data** — the orchestrator delegates to `social_media_fetch` to run `sc-research research:deep "ARGUMENTS"` (add `--from=YYYY-MM-DD --to=YYYY-MM-DD` if the user specifies a date range).
+3. **Analyze** — the orchestrator delegates to the matched worker skill(s) to produce `classified_*.json` output.
+4. **Present results** — display the analysis summary to the user.
 
-
-   >
-   > - If the user explicitly asks for "full analysis", "everything", or "all views": run all 4 templates in order (`rank` -> `sentiment` -> `trend` -> `controversy`).
-   > - Otherwise: pick the SINGLE most suitable template for the user's question.
-
-3. Validate outputs before presenting
-
-   > Confirm the generated `classified_*.json` file(s):
-   >
-   > - Match the requested topic (or close variant of the same topic).
-   > - Match requested date window when `--from/--to` was provided.
-   > - Include required fields from `../skills/social_media_schema/SKILL.md`.
-
-4. Display the results
-   > Present whichever template output(s) were selected after validation.
+Read the `using_social_media_research` skill now and follow its Step 1 through Step 6.
````
package/templates/base/commands/discovery.md
CHANGED

````diff
@@ -1,31 +1,19 @@
 ---
-description: Discover viral topics and emerging themes from
+description: Discover viral topics and emerging themes from social media data.
 ---
 
-
+This command produces a discovery analysis. It follows the orchestrated pipeline with discovery as the target.
 
-
+1. **Ensure raw data exists**
 
-
-
-   > Confirm raw data matches the requested topic and date window (if provided). If it does not match, re-fetch with **deep research discovery mode**:
+   > Check if `reddit_data.json` or `x_data.json` exists. If missing or stale, delegate to the `social_media_fetch` skill to fetch data with `--mode=discovery`:
    >
-   > - Broad weekly feed:
-   > - Topic-focused:
-   > - Optional filters: add `--source=reddit|x|both --from=YYYY-MM-DD --to=YYYY-MM-DD`
-
-3. Run the discovery skill
+   > - Broad weekly feed: topic = `DISCOVERY_WEEKLY`
+   > - Topic-focused: topic = user's query
 
-
+2. **Run analysis**
 
-
-
-   > Ensure `classified_discovery.json` includes:
-   >
-   > - `period`
-   > - `total_posts_analyzed`
-   > - `trending_topics` array
-   > - per topic: `id`, `topic_name`, `description`, `category`, `engagement_score`, `sentiment`, `key_posts`
+   > Read the `social_media_discovery` skill instructions and follow them to generate `classified_discovery.json`.
 
-
-   >
+3. **Present results**
+   > Display discovered topics with engagement scores, sentiment, and top posts from `classified_discovery.json`.
````
package/templates/base/commands/quick.md
CHANGED

````diff
@@ -2,15 +2,14 @@
 description: Quick research a topic and get a fast text answer (Reddit only, skips X).
 ---
 
-1.
+1. **Fetch data (Reddit only)**
 
-   > `
-   >
-   > This fetches Reddit data only and skips X for speed. Use `/deep-research` if you need deeper multi-source analysis.
+   > Delegate to the `social_media_fetch` skill with: topic = `"ARGUMENTS"`, depth = `quick`, source = `reddit`.
+   > This fetches Reddit data only and skips X for speed. Use `/deep-research` for deeper multi-source analysis.
 
-2. Read the raw data
+2. **Read the raw data**
 
    > Read `reddit_data.json` only. Ignore `x_data.json` even if it exists from a prior run.
 
-3. Provide a concise answer
-   > Synthesize a 3
+3. **Provide a concise answer**
+   > Synthesize a 3–5 sentence answer directly from the Reddit data. Mention the community favorite, 1–2 alternatives, and a real user quote. Do NOT generate any classified files.
````
package/templates/base/commands/rank.md
CHANGED

````diff
@@ -1,27 +1,16 @@
 ---
-description:
+description: Generate a ranking report from social media data.
 ---
 
-
+This command produces a ranked analysis. It follows the orchestrated pipeline with rank as the target.
 
-
+1. **Ensure raw data exists**
 
-
+   > Check if `reddit_data.json` or `x_data.json` exists. If missing or stale, delegate to the `social_media_fetch` skill to fetch fresh data with deep depth.
 
-
+2. **Run analysis**
 
-
+   > Read the `social_media_rank` skill instructions and follow them to generate `classified_rank.json`.
 
-
-
-4. Validate output schema
-
-   > Ensure `classified_rank.json` includes:
-   >
-   > - `topic`
-   > - `key_insights` (array)
-   > - `products` (array)
-   > - per product: `rank`, `name`, `sentiment`, `mentions`, `estimated_engagement_score`, `consensus`, `pros`, `cons`
-
-5. Display the results
-   > Read `classified_rank.json` and display key insights plus a summary of the ranked products.
+3. **Present results**
+   > Display key insights and ranked products from `classified_rank.json`.
````
package/templates/base/commands/research.md
CHANGED

````diff
@@ -1,10 +1,16 @@
-
+---
+description: Research a topic and get a concise text answer.
+---
 
-
+1. **Fetch data**
 
-
+   > Delegate to the `social_media_fetch` skill with **quick** depth:
+   > Topic = `"ARGUMENTS"`, depth = `quick`.
+   > Optionally add `--from=YYYY-MM-DD --to=YYYY-MM-DD` if user specifies a date range.
 
-
+2. **Analyze the results**
 
-
-
+   > Read the generated `reddit_data.json` and `x_data.json`.
+
+3. **Provide an answer**
+   > Based on the data, provide a concise answer including the Community Favorite, decent alternatives, and a representative quote. Do not produce any `classified_*.json` file.
````
package/templates/base/commands/sentiment.md
CHANGED

````diff
@@ -1,28 +1,16 @@
 ---
-description:
+description: Generate a sentiment analysis report from social media data.
 ---
 
-
+This command produces a sentiment breakdown. It follows the orchestrated pipeline with sentiment as the target.
 
-
+1. **Ensure raw data exists**
 
-
+   > Check if `reddit_data.json` or `x_data.json` exists. If missing or stale, delegate to the `social_media_fetch` skill to fetch fresh data with deep depth.
 
-
+2. **Run analysis**
 
-
+   > Read the `social_media_sentiment` skill instructions and follow them to generate `classified_sentiment.json`.
 
-
-
-4. Validate output schema
-
-   > Ensure `classified_sentiment.json` includes:
-   >
-   > - `topic`
-   > - `overall_mood`
-   > - `distribution` (`very_positive`, `positive`, `mixed`, `negative`)
-   > - `by_source` with both `reddit` and `x`
-   > - `product_sentiments`
-
-5. Display the results
-   > Read `classified_sentiment.json` and display the overall mood, sentiment distribution, and per-product sentiment breakdown.
+3. **Present results**
+   > Display overall mood, sentiment distribution, and per-product breakdown from `classified_sentiment.json`.
````
|
@@ -1,27 +1,16 @@
|
|
|
1
1
|
---
|
|
2
|
-
description:
|
|
2
|
+
description: Generate a trend timeline report from social media data.
|
|
3
3
|
---
|
|
4
4
|
|
|
5
|
-
|
|
5
|
+
This command produces a trend analysis. It follows the orchestrated pipeline with trend as the target.
|
|
6
6
|
|
|
7
|
-
|
|
7
|
+
1. **Ensure raw data exists**
|
|
8
8
|
|
|
9
|
-
|
|
9
|
+
> Check if `reddit_data.json` or `x_data.json` exists. If missing or stale, delegate to the `social_media_fetch` skill to fetch fresh data with deep depth.
|
|
10
10
|
|
|
11
|
-
|
|
11
|
+
2. **Run analysis**
|
|
12
12
|
|
|
13
|
-
|
|
13
|
+
> Read the `social_media_trend` skill instructions and follow them to generate `classified_trend.json`.
|
|
14
14
|
|
|
15
|
-
|
|
16
|
-
|
|
17
|
-
4. Validate output schema
|
|
18
|
-
|
|
19
|
-
> Ensure `classified_trend.json` includes:
|
|
20
|
-
>
|
|
21
|
-
> - `topic`
|
|
22
|
-
> - `date_range` with `from` and `to`
|
|
23
|
-
> - `timeline` entries with `period`, `post_count`, `total_engagement`, `reddit_posts`, `x_posts`
|
|
24
|
-
> - `key_moments` entries with `date`, `event`, `significance`
|
|
25
|
-
|
|
26
|
-
5. Display the results
|
|
27
|
-
> Read `classified_trend.json` and display timeline activity, engagement trends, and key moments.
|
|
15
|
+
3. **Present results**
|
|
16
|
+
> Display timeline activity, engagement trends, and key moments from `classified_trend.json`.
|
|
@@ -1,14 +1,14 @@
|
|
|
1
1
|
---
|
|
2
|
-
description: Launch the visualization dashboard.
|
|
2
|
+
description: Launch the visualization dashboard for classified research data.
|
|
3
3
|
---
|
|
4
4
|
|
|
5
|
-
1. Check for classified data
|
|
5
|
+
1. **Check for classified data**
|
|
6
6
|
|
|
7
|
-
|
|
7
|
+
> Verify at least one `classified_*.json` file exists (`classified_rank.json`, `classified_sentiment.json`, `classified_trend.json`, `classified_controversy.json`, `classified_discovery.json`). If none exist, suggest running `/deep-research` first.
|
|
8
8
|
|
|
9
|
-
2. Launch the
|
|
9
|
+
2. **Launch the dashboard**
|
|
10
10
|
|
|
11
|
-
|
|
11
|
+
> `sc-research visualize`
|
|
12
12
|
|
|
13
|
-
3. Inform the user
|
|
14
|
-
|
|
13
|
+
3. **Inform the user**
|
|
14
|
+
> Tell the user to open the URL provided by the command (usually `http://localhost:5173`). The dashboard shows tabs for each available classified file.
|
|
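The `social_media_controversy` skill below pins its output to strict enum rules: Title Case divisiveness values, `heat_score` as an integer in 0-100, and a non-null `supporter_count`. A minimal sketch of checking those rules, assuming the `classified_controversy.json` shape documented in that skill (this validator is illustrative, not code shipped in the package):

```python
def validate_controversy(data: dict) -> list[str]:
    """Collect violations of the controversy skill's documented enum rules."""
    errors = []
    # overall_divisiveness must be Title Case Low/Medium/High.
    if data.get("overall_divisiveness") not in ("Low", "Medium", "High"):
        errors.append("overall_divisiveness must be 'Low', 'Medium', or 'High'")
    for c in data.get("controversies", []):
        if c.get("divisiveness") not in ("Low", "Medium", "High"):
            errors.append(f"{c.get('topic')}: divisiveness must be Title Case")
        hs = c.get("heat_score")
        if not isinstance(hs, int) or not 0 <= hs <= 100:
            errors.append(f"{c.get('topic')}: heat_score must be an int in 0-100")
        for side_key in ("side_a", "side_b"):
            sc = c.get(side_key, {}).get("supporter_count")
            # Rule says: use 0 if unknown, never null.
            if not isinstance(sc, int) or sc < 0:
                errors.append(f"{c.get('topic')}/{side_key}: supporter_count must be an int >= 0")
    return errors
```

An empty list means the file satisfies the enum rules; anything else would break the dashboard's controversy tab per the contract below.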
package/templates/base/skills/social_media_controversy.md
CHANGED

````diff
@@ -1,36 +1,19 @@
 ---
 name: social_media_controversy
-description:
+description: Analysis-only worker. Reads existing raw Reddit/X data to identify divisive topics and generates `classified_controversy.json` with strict `ControversyData` output. Use for debate, disagreement, or polarizing-topic requests.
 ---
 
 # Social Media Controversy Skill
 
-This worker maps where social media opinions conflict and presents both sides with evidence.
+This worker maps where social media opinions conflict and presents both sides with evidence. It performs **analysis only** — fetching is handled by the orchestrator via `social_media_fetch`.
 
-##
+## Prerequisites
 
-
+The following files must already exist (produced by `social_media_fetch`):
 
-- `reddit_data.json`
-- `x_data.json`
+- `reddit_data.json` and/or `x_data.json`
 
-At least one valid source file must
-
-## Command Execution Flow
-
-Use this sequence for controversy analysis:
-
-1. Fetch or refresh raw data (outside this worker):
-
-   - `sc-research research:deep "TOPIC"`
-   - Optional filters: `--source=reddit|x|both --from=YYYY-MM-DD --to=YYYY-MM-DD`
-
-2. Run this `social_media_controversy` worker to generate `classified_controversy.json`.
-3. Optional visualization:
-
-   - `sc-research visualize`
-
-If raw files are missing, stale, or mismatched for the requested topic/date range, run step 1 first.
+At least one valid source file must be present. If both are missing, **stop and report failure** — do not attempt to fetch data.
 
 ## Step 1: Preflight Validation
 
````
````diff
@@ -94,6 +77,53 @@ Save strict JSON to:
 
 - `classified_controversy.json`
 
+## Output Type Contract
+
+Your output MUST match this exact shape. The dashboard detects controversy data by checking for `overall_divisiveness` (string) + `controversies` (array). Missing either field = broken tab.
+
+```json
+{
+  "topic": "AI replacing developers",
+  "overall_divisiveness": "High",
+  "controversies": [
+    {
+      "topic": "Will AI make junior devs obsolete?",
+      "heat_score": 82,
+      "divisiveness": "High",
+      "side_a": {
+        "position": "AI will eliminate most entry-level coding jobs within 5 years",
+        "supporter_count": 45,
+        "sample_quotes": [
+          {
+            "text": "Why would a company hire juniors when Copilot can do 80% of their work?",
+            "author": "u/startup_cto",
+            "link": "https://reddit.com/r/programming/comments/ghi789"
+          }
+        ]
+      },
+      "side_b": {
+        "position": "AI is a tool that augments developers, not replaces them",
+        "supporter_count": 62,
+        "sample_quotes": [
+          {
+            "text": "Every generation says the same thing. We still need people who understand the problem domain.",
+            "author": "u/senior_eng_20yr",
+            "link": "https://reddit.com/r/ExperiencedDevs/comments/jkl012"
+          }
+        ]
+      }
+    }
+  ]
+}
+```
+
+### Enum Rules for Controversy
+
+- `overall_divisiveness`: **Title Case** — `"Low"`, `"Medium"`, `"High"`. NEVER use `"low"`, `"medium"`, `"high"`.
+- `divisiveness` on each controversy: **Title Case** — same as above.
+- `heat_score`: integer `0-100`. NEVER use a value outside this range.
+- `supporter_count`: integer `>= 0`. Use `0` if unknown, NEVER use null.
+
 ## Final Validation Checklist
 
 - JSON parse succeeds.
````
|