sc-research 1.0.11 → 1.0.13

package/dist/cli.js CHANGED
@@ -523,11 +523,12 @@ function printHelp() {
  sc-research - Social Communities Research bootstrap CLI

  Usage:
- sc-research init --ai <target[,target...]> [options]
+ sc-research <command> [options]

  Commands:
  init Initialize SC-Research support files for a project
  research Run research engine (same as project "research" script)
+ research:deep Run deep research engine
  visualize Launch visualization app

  Options:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "sc-research",
- "version": "1.0.11",
+ "version": "1.0.13",
  "description": "Headless Social Media Research Data Provider for AI Agents",
  "type": "module",
  "main": "dist/index.js",
@@ -4,24 +4,28 @@ description: Discover viral topics and emerging themes from existing research da

  1. Check for data existence

- > Ensure `reddit_data.json` or `x_data.json` exists in the current directory.
+ > Ensure `reddit_data.json` or `x_data.json` exists in the current directory.

  2. Check freshness for the current request

- > Confirm raw data matches the requested topic and date window (if provided). If it does not match, re-fetch with **deep research**: `sc-research research:deep "TOPIC" --mode=discovery`.
+ > Confirm raw data matches the requested topic and date window (if provided). If it does not match, re-fetch with **deep research discovery mode**:
+ >
+ > - Broad weekly feed: `sc-research research:deep "DISCOVERY_WEEKLY" --mode=discovery`
+ > - Topic-focused: `sc-research research:deep "TOPIC" --mode=discovery`
+ > - Optional filters: add `--source=reddit|x|both --from=YYYY-MM-DD --to=YYYY-MM-DD`

  3. Run the discovery skill

- > Use the `social_media_discovery` skill to cluster posts by topic and generate `classified_discovery.json`.
+ > Use the `social_media_discovery` skill to cluster posts by topic and generate `classified_discovery.json`.

  4. Validate output schema

- > Ensure `classified_discovery.json` includes:
- >
- > - `period`
- > - `total_posts_analyzed`
- > - `trending_topics` array
- > - per topic: `id`, `topic_name`, `description`, `category`, `engagement_score`, `sentiment`, `key_posts`
+ > Ensure `classified_discovery.json` includes:
+ >
+ > - `period`
+ > - `total_posts_analyzed`
+ > - `trending_topics` array
+ > - per topic: `id`, `topic_name`, `description`, `category`, `engagement_score`, `sentiment`, `key_posts`

  5. Display the results
- > Read `classified_discovery.json` and display the discovered topics with their engagement scores, sentiment, and top posts.
+ > Read `classified_discovery.json` and display the discovered topics with their engagement scores, sentiment, and top posts.
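The step-2 freshness check lends itself to a small helper; a minimal sketch, assuming the raw JSON carries a `topic` field and using file mtime with a 24-hour window as the staleness signal (both are assumptions — this diff does not show the raw-file schema):

```typescript
// Sketch of the step-2 freshness gate. The `topic` field and the 24-hour
// window are assumptions; the package's actual raw-file schema is not shown here.
import { existsSync, readFileSync, statSync } from "node:fs";

function needsRefetch(file: string, topic: string, maxAgeHours = 24): boolean {
  if (!existsSync(file)) return true;
  const ageHours = (Date.now() - statSync(file).mtimeMs) / 3_600_000;
  if (ageHours > maxAgeHours) return true;
  const raw = JSON.parse(readFileSync(file, "utf8"));
  // A mismatched topic means the cached data answers a different question.
  return raw.topic?.toLowerCase() !== topic.toLowerCase();
}

if (needsRefetch("reddit_data.json", "example topic")) {
  console.log('Stale or missing; run: sc-research research:deep "example topic" --mode=discovery');
}
```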
@@ -3,11 +3,13 @@ description: Quick research a topic and get a fast text answer (Reddit only, ski
  ---

  1. Run the quick research fetcher (Reddit only)
- > `sc-research research "ARGUMENTS"`
+
+ > `sc-research research "ARGUMENTS" --source=reddit`
  >
- > This fetches Reddit data only and skips X for speed. Use `/research` if you need both sources.
+ > This fetches Reddit data only and skips X for speed. Use `/deep-research` if you need deeper multi-source analysis.

  2. Read the raw data
+
  > Read `reddit_data.json` only. Ignore `x_data.json` even if it exists from a prior run.

  3. Provide a concise answer
@@ -16,6 +16,22 @@ Use existing raw files only:

  At least one valid source file must exist.

+ ## Command Execution Flow
+
+ Use this sequence for controversy analysis:
+
+ 1. Fetch or refresh raw data (outside this worker):
+
+ - `sc-research research:deep "TOPIC"`
+ - Optional filters: `--source=reddit|x|both --from=YYYY-MM-DD --to=YYYY-MM-DD`
+
+ 2. Run this `social_media_controversy` worker to generate `classified_controversy.json`.
+ 3. Optional visualization:
+
+ - `sc-research visualize`
+
+ If raw files are missing, stale, or mismatched for the requested topic/date range, run step 1 first.
+
  ## Step 1: Preflight Validation

  1. Parse each available source file.
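Taken together, the added flow can be scripted end to end; a minimal sketch, assuming `sc-research` is on `PATH` and with the worker step left as a placeholder (it is an agent skill, not a CLI command):

```typescript
// Illustrative driver for the controversy flow above; the topic is made up
// and step 2 is a placeholder because the worker is an agent skill, not a CLI call.
import { execFileSync } from "node:child_process";
import { existsSync } from "node:fs";

const topic = "example topic";

// 1. Fetch or refresh raw data.
execFileSync("sc-research", ["research:deep", topic], { stdio: "inherit" });

// 2. The social_media_controversy worker would now read reddit_data.json /
//    x_data.json and write classified_controversy.json.

// 3. Optional visualization once the classified output exists.
if (existsSync("classified_controversy.json")) {
  execFileSync("sc-research", ["visualize"], { stdio: "inherit" });
}
```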
@@ -16,6 +16,43 @@ Use existing raw files only:

  At least one valid source file must exist.

+ ## Command Execution Flow
+
+ Use this sequence when running discovery end-to-end:
+
+ 1. Fetch or refresh discovery raw data (outside this worker):
+
+ - Broad weekly discovery feed: `sc-research research:deep "DISCOVERY_WEEKLY" --mode=discovery`
+ - Topic-focused discovery: `sc-research research:deep "TOPIC" --mode=discovery`
+ - Optional filters: `--source=reddit|x|both --from=YYYY-MM-DD --to=YYYY-MM-DD`
+
+ 2. Run this `social_media_discovery` worker to analyze existing raw files and produce `classified_discovery.json`.
+ 3. Optional visualization step:
+
+ - `sc-research visualize`
+
+ If raw files are missing, stale, or mismatched for the requested topic/date range, instruct the caller to run step 1 first.
+
+ ## Discovery Fetch Behavior (Code-Aligned)
+
+ When `--mode=discovery` is used, runtime behavior differs by topic:
+
+ 1. If topic is exactly `DISCOVERY_WEEKLY` and Reddit is enabled:
+
+ - Fetches Reddit trending posts from `r/popular/top` with `t=week` and limit `25`.
+
+ 2. For other topics in discovery mode and Reddit enabled:
+
+ - Maps topic to candidate subreddits first.
+ - Per subreddit, uses either top posts (`week`, limit `5`) or subreddit search (`week`, limit `5`) based on topic/subreddit match.
+ - If mapping returns no subreddits, falls back to legacy Reddit keyword-thread flow.
+
+ 3. X source behavior:
+
+ - X fetch still runs through normal X search flow (`maxItems` based on depth), even in discovery mode.
+
+ This skill consumes the resulting `reddit_data.json` / `x_data.json`; it does not perform fetching itself.
+
  ## Step 1: Preflight Validation

  1. Parse each available source file.
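The `DISCOVERY_WEEKLY` branch described above maps onto Reddit's public listing endpoint; a minimal sketch of an equivalent fetch (standard Reddit JSON API usage, not the package's actual code, which may authenticate differently):

```typescript
// Sketch of the DISCOVERY_WEEKLY behavior described above: top posts from
// r/popular over the past week, capped at 25. Not the package's actual code.
interface TrendingPost {
  title: string;
  permalink: string;
  score: number;
  subreddit: string;
}

async function fetchWeeklyTrending(): Promise<TrendingPost[]> {
  const url = "https://www.reddit.com/r/popular/top.json?t=week&limit=25";
  const res = await fetch(url, {
    headers: { "User-Agent": "sc-research-sketch/0.0" },
  });
  if (!res.ok) throw new Error(`Reddit listing failed: ${res.status}`);
  const json: any = await res.json();
  return json.data.children.map((c: any) => ({
    title: c.data.title,
    permalink: `https://www.reddit.com${c.data.permalink}`,
    score: c.data.score,
    subreddit: c.data.subreddit,
  }));
}
```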
@@ -16,6 +16,16 @@ This worker is the data-ingestion step for the pipeline. It fetches raw social d

  At least one output file must be produced for a successful fetch.

+ ## Command Execution Flow
+
+ Run one of these commands based on requested depth:
+
+ - Quick fetch: `sc-research research "TOPIC"`
+ - Deep fetch: `sc-research research:deep "TOPIC"`
+ - Discovery fetch: `sc-research research:deep "TOPIC" --mode=discovery`
+
+ Then validate outputs (`reddit_data.json` / `x_data.json`) before handing off to analysis workers.
+
  ## Step 1: Choose Fetch Mode

  - **Quick mode** (faster, lighter coverage):
@@ -33,6 +43,8 @@ Use flags only when requested:
  - `--from=YYYY-MM-DD --to=YYYY-MM-DD`
  - `--mode=discovery`

+ When `--source` is omitted, runtime attempts all enabled sources (based on available API keys). A source without its required key is skipped.
+
  Examples:

  ```bash
@@ -73,10 +85,10 @@ Return:

  ## Error Handling

- | Scenario | Symptom | Action |
- |---|---|---|
- | Missing `OPENAI_API_KEY` | Auth failure on Reddit fetch | Set valid `OPENAI_API_KEY` in `.sc-research` |
- | Missing `XAI_API_KEY` | X file missing/empty while Reddit succeeds | Set `XAI_API_KEY` in `.sc-research` to enable X fetching |
- | No relevant results | `items` is empty | Broaden topic keywords and retry |
- | Rate limit / transient API failure | Timeout or provider error | Wait, then retry once with same parameters |
- | Malformed output | JSON parse failure or missing `items` | Re-run fetch; if repeated, report failure explicitly |
+ | Scenario | Symptom | Action |
+ | ---------------------------------- | ------------------------------------------ | -------------------------------------------------------- |
+ | Missing `OPENAI_API_KEY` | Auth failure on Reddit fetch | Set valid `OPENAI_API_KEY` in `.sc-research` |
+ | Missing `XAI_API_KEY` | X file missing/empty while Reddit succeeds | Set `XAI_API_KEY` in `.sc-research` to enable X fetching |
+ | No relevant results | `items` is empty | Broaden topic keywords and retry |
+ | Rate limit / transient API failure | Timeout or provider error | Wait, then retry once with same parameters |
+ | Malformed output | JSON parse failure or missing `items` | Re-run fetch; if repeated, report failure explicitly |
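The key-to-source pairing in this table implies a simple gating rule for the new skip-on-missing-key behavior; a minimal sketch (the pairing comes from the table above; the function itself is illustrative):

```typescript
// Sketch of the documented gating: a source whose required key is absent is
// skipped rather than treated as a fatal error. Pairing follows the table above.
const SOURCE_KEYS: Record<string, string> = {
  reddit: "OPENAI_API_KEY",
  x: "XAI_API_KEY",
};

function enabledSources(requested?: "reddit" | "x" | "both"): string[] {
  const wanted = !requested || requested === "both" ? ["reddit", "x"] : [requested];
  return wanted.filter((source) => Boolean(process.env[SOURCE_KEYS[source]]));
}

console.log(enabledSources()); // e.g. ["reddit"] when only OPENAI_API_KEY is set
```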
@@ -16,6 +16,19 @@ Use existing files only:

  At least one valid source file must exist.

+ ## Command Execution Flow
+
+ Use this sequence for ranking:
+
+ 1. Fetch or refresh raw data (outside this worker):
+ - `sc-research research:deep "TOPIC"`
+ - Optional filters: `--source=reddit|x|both --from=YYYY-MM-DD --to=YYYY-MM-DD`
+ 2. Run this `social_media_rank` worker to generate `classified_rank.json`.
+ 3. Optional visualization:
+ - `sc-research visualize`
+
+ If raw files are missing, stale, or mismatched for the requested topic/date range, run step 1 first.
+
  ## Step 1: Preflight Validation

  Before analysis:
  Before analysis:
@@ -15,153 +15,160 @@ Use this file as the canonical schema source for all classified outputs:

  If another skill instruction conflicts with this file, this file wins.

+ ## Command Execution Note
+
+ This is a reference-only skill. There is no direct CLI command to run this skill.
+
+ - Use it by reading this schema before writing any `classified_*.json` output.
+ - Runnable commands belong to fetch/analysis/visualize skills (for example `sc-research research:deep "TOPIC"` and `sc-research visualize`).
+
  ## Canonical Type Definitions

  ```typescript
  export interface ClassifiedData {
- topic: string;
- source_file?: string;
- products: Product[];
- key_insights: string[];
+ topic: string;
+ source_file?: string;
+ products: Product[];
+ key_insights: string[];
  }

  export interface Product {
- rank: number;
- name: string;
- sentiment: SentimentLabel;
- mentions: number;
- estimated_engagement_score: number;
- consensus: string;
- pros: string[];
- cons: string[];
- highlight_quotes: Array<{
- text: string;
- author: string;
- link: string;
- context?: "pro" | "con" | "general";
- }>;
+ rank: number;
+ name: string;
+ sentiment: SentimentLabel;
+ mentions: number;
+ estimated_engagement_score: number;
+ consensus: string;
+ pros: string[];
+ cons: string[];
+ highlight_quotes: Array<{
+ text: string;
+ author: string;
+ link: string;
+ context?: "pro" | "con" | "general";
+ }>;
  }

  export type SentimentLabel =
- | "Positive"
- | "Negative"
- | "Mixed"
- | "Very Positive";
+ | "Positive"
+ | "Negative"
+ | "Mixed"
+ | "Very Positive";

  export interface SentimentData {
- topic: string;
- overall_mood: SentimentLabel;
- distribution: {
- very_positive: number;
- positive: number;
- mixed: number;
- negative: number;
- };
- by_source: {
- reddit: SourceSentiment;
- x: SourceSentiment;
- };
- product_sentiments: ProductSentiment[];
- }
-
- export interface SourceSentiment {
+ topic: string;
+ overall_mood: SentimentLabel;
+ distribution: {
  very_positive: number;
  positive: number;
  mixed: number;
  negative: number;
+ };
+ by_source: {
+ reddit: SourceSentiment;
+ x: SourceSentiment;
+ };
+ product_sentiments: ProductSentiment[];
+ }
+
+ export interface SourceSentiment {
+ very_positive: number;
+ positive: number;
+ mixed: number;
+ negative: number;
  }

  export interface ProductSentiment {
- name: string;
- overall: SentimentLabel;
- reddit_sentiment: SentimentLabel | null;
- x_sentiment: SentimentLabel | null;
- evidence_quotes: Array<{
- text: string;
- author: string;
- link: string;
- sentiment: SentimentLabel;
- }>;
+ name: string;
+ overall: SentimentLabel;
+ reddit_sentiment: SentimentLabel | null;
+ x_sentiment: SentimentLabel | null;
+ evidence_quotes: Array<{
+ text: string;
+ author: string;
+ link: string;
+ sentiment: SentimentLabel;
+ }>;
  }

  export interface TrendData {
- topic: string;
- date_range: {
- from: string;
- to: string;
- };
- granularity?: "day" | "week" | "month";
- timeline: TimelinePoint[];
- key_moments: KeyMoment[];
+ topic: string;
+ date_range: {
+ from: string;
+ to: string;
+ };
+ granularity?: "day" | "week" | "month";
+ timeline: TimelinePoint[];
+ key_moments: KeyMoment[];
  }

  export interface TimelinePoint {
- period: string;
- post_count: number;
- total_engagement: number;
- reddit_posts: number;
- x_posts: number;
+ period: string;
+ post_count: number;
+ total_engagement: number;
+ reddit_posts: number;
+ x_posts: number;
  }

  export interface KeyMoment {
- date: string;
- event: string;
- significance: "high" | "medium" | "low";
- url?: string;
+ date: string;
+ event: string;
+ significance: "high" | "medium" | "low";
+ url?: string;
  }

  export interface ControversyData {
- topic: string;
- overall_divisiveness: "Low" | "Medium" | "High";
- controversies: Controversy[];
+ topic: string;
+ overall_divisiveness: "Low" | "Medium" | "High";
+ controversies: Controversy[];
  }

  export interface Controversy {
- topic: string;
- heat_score: number;
- divisiveness: "Low" | "Medium" | "High";
- side_a: ControversySide;
- side_b: ControversySide;
+ topic: string;
+ heat_score: number;
+ divisiveness: "Low" | "Medium" | "High";
+ side_a: ControversySide;
+ side_b: ControversySide;
  }

  export interface ControversySide {
- position: string;
- supporter_count: number;
- sample_quotes: Array<{
- text: string;
- author: string;
- link: string;
- }>;
+ position: string;
+ supporter_count: number;
+ sample_quotes: Array<{
+ text: string;
+ author: string;
+ link: string;
+ }>;
  }

  export interface DiscoveryData {
- topic: string;
- period: string;
- total_posts_analyzed: number;
- trending_topics: DiscoveryTopic[];
+ topic: string;
+ period: string;
+ total_posts_analyzed: number;
+ trending_topics: DiscoveryTopic[];
  }

  export interface DiscoveryTopic {
- id: string;
- topic_name: string;
- description: string;
- category: string;
- engagement_score: number;
- sentiment: "positive" | "negative" | "neutral" | "mixed";
- key_posts: KeyPost[];
- highlight_comments: Array<{
- text: string;
- author: string;
- link: string;
- platform: "reddit" | "x";
- }>;
+ id: string;
+ topic_name: string;
+ description: string;
+ category: string;
+ engagement_score: number;
+ sentiment: "positive" | "negative" | "neutral" | "mixed";
+ key_posts: KeyPost[];
+ highlight_comments: Array<{
+ text: string;
+ author: string;
+ link: string;
+ platform: "reddit" | "x";
+ }>;
  }

  export interface KeyPost {
- title: string;
- url: string;
- platform: "reddit" | "x";
- engagement: number;
- thumbnail?: string;
+ title: string;
+ url: string;
+ platform: "reddit" | "x";
+ engagement: number;
+ thumbnail?: string;
  }
  ```
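For concreteness, a minimal value satisfying the `ClassifiedData` shape defined in this schema file (every field value here is invented, and `"./schema"` stands in for wherever the types actually live):

```typescript
// Example object conforming to ClassifiedData as defined above; all values
// are made up, and the import path is illustrative.
import type { ClassifiedData } from "./schema";

const example: ClassifiedData = {
  topic: "example topic",
  products: [
    {
      rank: 1,
      name: "Example Product",
      sentiment: "Positive",
      mentions: 42,
      estimated_engagement_score: 87,
      consensus: "Generally well liked.",
      pros: ["reliable"],
      cons: ["pricey"],
      highlight_quotes: [
        {
          text: "Works great.",
          author: "u/example",
          link: "https://example.com/post",
          context: "pro",
        },
      ],
    },
  ],
  key_insights: ["Positive consensus overall."],
};
```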
@@ -16,6 +16,22 @@ Use existing raw files only:

  At least one valid source file must exist.

+ ## Command Execution Flow
+
+ Use this sequence for sentiment analysis:
+
+ 1. Fetch or refresh raw data (outside this worker):
+
+ - `sc-research research:deep "TOPIC"`
+ - Optional filters: `--source=reddit|x|both --from=YYYY-MM-DD --to=YYYY-MM-DD`
+
+ 2. Run this `social_media_sentiment` worker to generate `classified_sentiment.json`.
+ 3. Optional visualization:
+
+ - `sc-research visualize`
+
+ If raw files are missing, stale, or mismatched for the requested topic/date range, run step 1 first.
+
  ## Step 1: Preflight Validation

  1. Parse each available source file.
@@ -16,6 +16,19 @@ Use existing files only:

  At least one valid source file must exist.

+ ## Command Execution Flow
+
+ Use this sequence for trend analysis:
+
+ 1. Fetch or refresh raw data (outside this worker):
+ - `sc-research research:deep "TOPIC"`
+ - Optional filters: `--source=reddit|x|both --from=YYYY-MM-DD --to=YYYY-MM-DD`
+ 2. Run this `social_media_trend` worker to generate `classified_trend.json`.
+ 3. Optional visualization:
+ - `sc-research visualize`
+
+ If raw files are missing, stale, or mismatched for the requested topic/date range, run step 1 first.
+
  ## Step 1: Preflight and Date Parsing

  1. Parse each available source file and validate `items` arrays.
@@ -11,13 +11,23 @@ This worker launches the local web UI and renders the current classification out

  Before launch:

- 1. At least one of these files exists and is parseable:
+ 1. Prefer having at least one of these files available:
  - `classified_rank.json`
  - `classified_sentiment.json`
  - `classified_trend.json`
  - `classified_controversy.json`
  - `classified_discovery.json`
- 2. If none exist, report that analysis must run first.
+ 2. If none exist, runtime still launches the dashboard in empty mode; inform the user to run analysis skills to populate tabs.
+
+ ## Command Execution Flow
+
+ Use this sequence for visualization:
+
+ 1. Ensure at least one classified output exists (for example by running `social_media_rank`, `social_media_sentiment`, `social_media_trend`, `social_media_controversy`, or `social_media_discovery`).
+ 2. Launch dashboard:
+ - `sc-research visualize`
+ 3. Optional single-file mode:
+ - `sc-research visualize path/to/classified_rank.json`

  ## Usage

@@ -51,13 +61,13 @@ Return:
  ## Critical Rules

  1. **Visualization only**: this skill does not fetch or classify data.
- 2. **No silent fallback confusion**: if no valid classified files exist, clearly report failure.
+ 2. **No silent fallback confusion**: if no valid classified files exist, clearly explain that dashboard is running in empty mode.
  3. **Use actual runtime URL**: if default port is taken, report the new port.

  ## Troubleshooting

- | Issue | Symptom | Action |
- |---|---|---|
- | No classified files | command exits early or empty dashboard | run a worker skill first (rank/sentiment/trend/controversy/discovery) |
- | Port already in use | server starts on another port | report actual URL shown by runtime |
- | Malformed JSON file | tab fails to render or command errors | regenerate that specific `classified_*.json` file |
+ | Issue | Symptom | Action |
+ | ------------------- | ------------------------------------- | --------------------------------------------------------------------- |
+ | No classified files | empty dashboard (server still starts) | run a worker skill first (rank/sentiment/trend/controversy/discovery) |
+ | Port already in use | server starts on another port | report actual URL shown by runtime |
+ | Malformed JSON file | tab fails to render or command errors | regenerate that specific `classified_*.json` file |
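The relaxed preflight can be expressed directly; a minimal sketch that warns instead of blocking, matching the new empty-mode behavior (the file list comes from the preflight above; the script itself is illustrative and assumes `sc-research` is on `PATH`):

```typescript
// Sketch of the relaxed visualize preflight: warn when no classified output
// exists, then launch anyway, since the dashboard now starts in empty mode.
import { existsSync } from "node:fs";
import { execFileSync } from "node:child_process";

const classifiedFiles = [
  "classified_rank.json",
  "classified_sentiment.json",
  "classified_trend.json",
  "classified_controversy.json",
  "classified_discovery.json",
];

if (!classifiedFiles.some((f) => existsSync(f))) {
  console.warn("No classified outputs found; dashboard will start empty.");
}
execFileSync("sc-research", ["visualize"], { stdio: "inherit" });
```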
@@ -24,20 +24,37 @@ Treat this as the entrypoint skill when user requests map to social-media resear
  - For analysis intents, run exactly one worker by default.
  - Run multiple workers only when the user explicitly asks for multi-view output (for example: "full analysis", "all views", "run everything").
  - Use deep fetch for every worker analysis.
- - Use quick fetch for explicit quick-answer requests and ambiguity fallback.
- - If intent is still ambiguous after routing rules, use quick-answer mode (no classified file).
+ - Use quick fetch for explicit quick-answer requests.
+ - If intent is still ambiguous after routing rules, use rank as the default overview route.
  - Reuse existing raw data only if it still matches topic, source, and date range; otherwise refetch.

+ ## Command Execution Summary
+
+ After route selection, execute commands as follows:
+
+ - Rank / Sentiment / Trend / Controversy routes:
+ - `sc-research research:deep "TOPIC" [--source=...] [--from=YYYY-MM-DD --to=YYYY-MM-DD]`
+ - then run the selected worker skill to produce the matching `classified_*.json`.
+ - Discovery route:
+ - Broad weekly feed: `sc-research research:deep "DISCOVERY_WEEKLY" --mode=discovery [--source=...] [--from=... --to=...]`
+ - Topic-focused: `sc-research research:deep "TOPIC" --mode=discovery [--source=...] [--from=... --to=...]`
+ - then run `social_media_discovery`.
+ - Quick answer route:
+ - `sc-research research "TOPIC" --source=reddit [--from=... --to=...]`
+ - then synthesize a short text answer (no classified file).
+ - Optional dashboard view after any classified output:
+ - `sc-research visualize`
+
  ## Step 1: Resolve Intent (Strict Precedence)

  Apply rules top-to-bottom:

  1. **Explicit multi-analysis request**
  - If user asks for "full analysis", "all views", or equivalent, run:
- 1) `social_media_rank`
- 2) `social_media_sentiment`
- 3) `social_media_trend`
- 4) `social_media_controversy`
+ 1. `social_media_rank`
+ 2. `social_media_sentiment`
+ 3. `social_media_trend`
+ 4. `social_media_controversy`
  - Include `social_media_discovery` only if the user also asks for emerging/viral topic discovery.
  2. **Explicit template request**
  - If the user names a template ("sentiment", "trend", "controversy", "discovery", "rank"), that route wins.
@@ -46,35 +63,37 @@ Apply rules top-to-bottom:
  4. **Inferred strongest intent**
  - Map by primary question intent (keywords table below).
  5. **Fallback**
- - If still ambiguous, default to quick-answer mode (no classified file).
+
+ - If still ambiguous, default to `social_media_rank`.

  ## Step 2: Map Intent to Route

- | Route | Typical trigger phrases | Worker | Output |
- |---|---|---|---|
- | Rank | best, top, compare, recommendation, which one | `social_media_rank` | `classified_rank.json` |
- | Sentiment | feel, sentiment, opinion, positive/negative | `social_media_sentiment` | `classified_sentiment.json` |
- | Trend | timeline, over time, peak, growth, decline | `social_media_trend` | `classified_trend.json` |
- | Controversy | debate, divisive, disagreement, polarizing, vs | `social_media_controversy` | `classified_controversy.json` |
- | Discovery | trending topics, viral, discover themes, clusters | `social_media_discovery` | `classified_discovery.json` |
- | Quick Answer | quick answer, short summary, brief | none | direct text answer |
+ | Route | Typical trigger phrases | Worker | Output |
+ | ------------ | ------------------------------------------------- | -------------------------- | ----------------------------- |
+ | Rank | best, top, compare, recommendation, which one | `social_media_rank` | `classified_rank.json` |
+ | Sentiment | feel, sentiment, opinion, positive/negative | `social_media_sentiment` | `classified_sentiment.json` |
+ | Trend | timeline, over time, peak, growth, decline | `social_media_trend` | `classified_trend.json` |
+ | Controversy | debate, divisive, disagreement, polarizing, vs | `social_media_controversy` | `classified_controversy.json` |
+ | Discovery | trending topics, viral, discover themes, clusters | `social_media_discovery` | `classified_discovery.json` |
+ | Quick Answer | quick answer, short summary, brief | none | direct text answer |

  ## Step 3: Detect Source and Fetch Depth

  ### Source Detection

- | User wording | Source flag |
- |---|---|
- | "on Reddit", "subreddit", "Redditors" | `--source=reddit` |
- | "on X", "on Twitter", "tweets" | `--source=x` |
- | no explicit source | no source flag (default behavior) |
+ | User wording | Source flag |
+ | ------------------------------------- | ----------------------------------------------------------------------------- |
+ | "on Reddit", "subreddit", "Redditors" | `--source=reddit` |
+ | "on X", "on Twitter", "tweets" | `--source=x` |
+ | no explicit source | no source flag (runtime uses all enabled sources based on available API keys) |

  ### Fetch Strategy

  - **Worker routes (rank/sentiment/trend/controversy):**
  - `sc-research research:deep "TOPIC" [--source=...] [--from=YYYY-MM-DD --to=YYYY-MM-DD]`
  - **Discovery route:**
- - `sc-research research:deep "TOPIC" --mode=discovery [--source=...] [--from=... --to=...]`
+ - Broad weekly feed: `sc-research research:deep "DISCOVERY_WEEKLY" --mode=discovery [--source=...] [--from=... --to=...]`
+ - Topic-focused: `sc-research research:deep "TOPIC" --mode=discovery [--source=...] [--from=... --to=...]`
  - **Quick answer route:**
  - `sc-research research "TOPIC" --source=reddit [--from=... --to=...]`

@@ -107,10 +126,10 @@ If any check fails, run a fresh fetch.

  ## File Map

- | File | Producer |
- |---|---|
- | `classified_rank.json` | `social_media_rank` |
- | `classified_sentiment.json` | `social_media_sentiment` |
- | `classified_trend.json` | `social_media_trend` |
+ | File | Producer |
+ | ----------------------------- | -------------------------- |
+ | `classified_rank.json` | `social_media_rank` |
+ | `classified_sentiment.json` | `social_media_sentiment` |
+ | `classified_trend.json` | `social_media_trend` |
  | `classified_controversy.json` | `social_media_controversy` |
- | `classified_discovery.json` | `social_media_discovery` |
+ | `classified_discovery.json` | `social_media_discovery` |
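The Step 2 keyword table plus the new rank fallback reduce to a first-match scan; a minimal sketch (trigger phrases come from the table; the lowercase substring matching and scan order are assumptions, and the explicit-request precedence rules from Step 1 are out of scope here):

```typescript
// Sketch of Step 2 keyword routing with the new rank fallback. Triggers come
// from the routing table; the matching strategy itself is an assumption.
const ROUTES: Array<{ route: string; triggers: string[] }> = [
  { route: "rank", triggers: ["best", "top", "compare", "recommendation", "which one"] },
  { route: "sentiment", triggers: ["feel", "sentiment", "opinion", "positive", "negative"] },
  { route: "trend", triggers: ["timeline", "over time", "peak", "growth", "decline"] },
  { route: "controversy", triggers: ["debate", "divisive", "disagreement", "polarizing", "vs"] },
  { route: "discovery", triggers: ["trending topics", "viral", "discover themes", "clusters"] },
  { route: "quick-answer", triggers: ["quick answer", "short summary", "brief"] },
];

function resolveRoute(request: string): string {
  const text = request.toLowerCase();
  const match = ROUTES.find((r) => r.triggers.some((t) => text.includes(t)));
  return match?.route ?? "rank"; // ambiguous intent now defaults to rank
}

console.log(resolveRoute("How do people feel about it?")); // "sentiment"
```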