reddit-demand-kit 0.2.0 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +190 -0
  2. package/package.json +5 -5
package/README.md ADDED
# Reddit Demand Kit

> **"The gap isn't finding threads. It's knowing what to say when you get there. Most tools stop at step 1."**
> — r/SaaS, 2026

RDK is the step 2 tool: MCP-native demand intelligence that lets your AI agent do the Reddit research, classification, and outreach planning — without opening another dashboard tab.

## Why This Exists

In November 2025, **GummySearch** — the category king with 135,000 users and 10,000 paying customers — died overnight when Reddit revoked their commercial API access. Four years of work, gone in a weekend. Every replacement tool since has been built on the same fragile foundation: a Reddit API key that can be pulled at any time.

RDK takes the opposite bet: **RSS feeds**. A 20-year open standard that needs no authentication, no API keys, no approval, and cannot be revoked. The same architecture that let blogs survive every platform shift is now the safest way to build on Reddit.
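
The RSS bet is easy to sketch. Reddit serves an Atom feed for any subreddit at `/<name>/.rss` with no credentials; here is a minimal TypeScript sketch of pulling post titles out of one. Note that `extractEntries` is an illustrative helper written for this README, not RDK's internal API:

```typescript
// Sketch only: RDK's real fetch layer is compiled and proprietary. This
// shows the underlying idea — Reddit's /<subreddit>/.rss endpoint returns
// an Atom feed that any unauthenticated HTTP client can read.
type FeedEntry = { title: string; link: string };

function extractEntries(atomXml: string): FeedEntry[] {
  const entries: FeedEntry[] = [];
  const entryRe = /<entry>([\s\S]*?)<\/entry>/g;
  for (const [, body] of atomXml.matchAll(entryRe)) {
    const title = body.match(/<title>([\s\S]*?)<\/title>/)?.[1] ?? "";
    const link = body.match(/<link href="([^"]*)"/)?.[1] ?? "";
    entries.push({ title, link });
  }
  return entries;
}

// Live usage (requires network):
//   const xml = await (await fetch("https://www.reddit.com/r/SaaS/.rss")).text();
//   console.log(extractEntries(xml).map((e) => e.title));
```

A regex is enough here because Atom entries are flat; a production parser would use a real XML library.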
13
+
14
+ Four more things that make RDK different from the 30+ tools fighting over GummySearch's corpse:
15
+
16
+ - **MCP-native, not a dashboard.** Your AI agent runs RDK inside the conversation you're already having about your product. Zero context switches.
17
+ - **Code does plumbing. Prompts do thinking.** Tools fetch raw data. Analysis frameworks — demand scoring, competitor sentiment, market scans, outreach playbooks — are MCP prompts your agent executes with your context. No keyword-matching pseudo-AI.
18
+ - **Free tier that actually works.** RSS gives you content. Pro ($9/mo) adds real upvote scores, comment counts, and full comments via the Tap browser bridge — a second structural moat competitors would need years to build.
19
+ - **Proprietary, on purpose.** Code is compiled and obfuscated (the prompts are the real IP). Methodology is published freely. Read the long-form posts to learn how to think; install the tool when you want your agent to do it automatically.
20
+
21
+ ## Install
22
+
23
+ ```bash
24
+ # Node / npm
25
+ npm install -g reddit-demand-kit
26
+
27
+ # Deno 2.0+
28
+ deno install -g --allow-all -n rdk npm:reddit-demand-kit
29
+ ```
30
+
31
+ Requires Node.js 16+ or Deno 2.0+. Deno users install from the same npm package — Deno's npm compatibility layer resolves the platform binary automatically, no separate registry needed.
32
+
33
+ ## MCP Setup
34
+
35
+ Add to Claude Desktop / Claude Code config:
36
+
37
+ ```json
38
+ {
39
+ "mcpServers": {
40
+ "rdk": {
41
+ "command": "rdk",
42
+ "args": ["mcp"]
43
+ }
44
+ }
45
+ }
46
+ ```
47
+
48
+ Then ask your AI: *"Search Reddit for pain points about CRM tools and score which ones are worth building for."*
49
+
50
+ ## CLI
51
+
52
+ ```bash
53
+ rdk mcp # Start MCP server (stdio)
54
+ rdk status # Show tier + data source
55
+ rdk activate <key> # Activate Pro license
56
+ rdk upgrade # Show Pro features + pricing
57
+ ```
58
+
59
+ ## MCP Tools — Code Does Plumbing
60
+
61
+ Tools fetch raw Reddit data with objective filters only (`must_contain`, `exclude`). Subjective analysis is done by prompts, not by keyword matching.
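
To make "objective filters only" concrete, here is a minimal sketch. `applyFilters` and `Post` are invented names for this README, not RDK's internals; the point is that the tool layer does only deterministic substring checks and leaves every subjective judgment to prompts:

```typescript
// Illustrative only: deterministic filtering, no scoring, no sentiment.
// Whether a surviving post is a real pain point is the prompts' job.
type Post = { title: string; body: string };

function applyFilters(
  posts: Post[],
  opts: { must_contain?: string; exclude?: string[] },
): Post[] {
  return posts.filter((p) => {
    const text = `${p.title} ${p.body}`.toLowerCase();
    if (opts.must_contain && !text.includes(opts.must_contain.toLowerCase())) {
      return false;
    }
    return !(opts.exclude ?? []).some((t) => text.includes(t.toLowerCase()));
  });
}
```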

### `reddit_search`
Search Reddit posts with exact-phrase filtering and term exclusion. Free tier returns content; Pro tier adds real upvote scores and comment counts.

### `reddit_post`
Fetch a single post with its comments. Use it for comment archaeology — the highest-signal layer for demand analysis.

### `reddit_subreddit`
Fetch subreddit metadata and recent posts. Survey a community before engaging.

### `reddit_discover`
Find subreddits related to a topic or audience. Use it before deep research to confirm your target users actually hang out there.

### `reddit_account_health` *(Pro)*
Diagnose a Reddit account's standing with Reddit's site-wide anti-spam system before you promote anything from it. Returns a verdict (`healthy` / `mod-heavy` / `shadow-filtered` / `platform-flagged` / `suspended`) plus hard metrics: post removal rate split by `removed_by_category=reddit` vs moderator removal, comment score distribution, karma, and account age. It reads signals that only exist in Reddit's authenticated API, so there is no Free fallback. Run it before `posting_strategy` or `outreach_playbook` — strategy is wasted on a flagged account, since Reddit will silently filter whatever it posts.
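
A hedged sketch of the kind of verdict logic described: `removed_by_category` is a real field on posts in Reddit's authenticated listings, but the thresholds and the `verdict` helper below are invented for illustration and are not RDK's actual rules:

```typescript
// Hypothetical: splits removals by who removed them, then maps rates to the
// verdict labels above. Thresholds (0.5, 0.2, 0.3) are made up for the demo.
type Removal = { removed_by_category: string | null };

function removalRate(posts: Removal[], by: string): number {
  if (posts.length === 0) return 0;
  return posts.filter((p) => p.removed_by_category === by).length / posts.length;
}

function verdict(posts: Removal[], suspended: boolean): string {
  if (suspended) return "suspended";
  const byReddit = removalRate(posts, "reddit"); // site-wide anti-spam
  const byMods = removalRate(posts, "moderator"); // per-community rules
  if (byReddit > 0.5) return "platform-flagged";
  if (byReddit > 0.2) return "shadow-filtered";
  if (byMods > 0.3) return "mod-heavy";
  return "healthy";
}
```

The reddit-vs-moderator split matters because mod removals mean you broke a community's rules, while `removed_by_category=reddit` means the platform itself is filtering you.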

## MCP Prompts — Prompts Do Thinking

Prompts encode analysis frameworks. Your AI agent executes them with full context of your product and goals. This is where the subjective judgment lives — and where RDK's methodology is.

### `demand_analysis`
5-dimension demand scoring (pain intensity, engagement, willingness to pay, competitor weakness, recurrence) + L2–L5 demand ladder classification.
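
To make the scoring concrete, here is a toy version of a 5-dimension score. The dimension names come from the prompt description above; the 1–5 scale, equal weights, and 0–100 normalization are assumptions for illustration, not RDK's rubric:

```typescript
// Toy rubric: the agent would assign each dimension from the thread's
// content, then combine them. Equal weighting is an assumption.
type DemandScores = {
  painIntensity: number; // 1–5
  engagement: number;
  willingnessToPay: number;
  competitorWeakness: number;
  recurrence: number;
};

function demandScore(s: DemandScores): number {
  const dims = [
    s.painIntensity,
    s.engagement,
    s.willingnessToPay,
    s.competitorWeakness,
    s.recurrence,
  ];
  // Mean across the five dimensions, normalized to a 0–100 score.
  const mean = dims.reduce((a, b) => a + b, 0) / dims.length;
  return Math.round((mean / 5) * 100);
}
```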

### `competitor_analysis`
Aggregate competitor sentiment, extract weakness patterns, identify switching signals.

### `market_scan`
Subreddit opportunity scanning: pain point identification, tool request clustering, untapped-niche flagging.

### `audience_discovery`
Find target communities and evaluate their quality as an audience.

### `posting_strategy`
Posting time analysis, rule compliance, Modmail outreach, anti-spam safety rules.

### `outreach_playbook`
Target prioritization and 3-channel outreach strategy. Includes classification of posts (help-seeking / confession / method-sharing / tool-recommendation) and decision trees for DM vs public reply.

## Compiled Pipes — Zero-Cost Analysis

Prompts are flexible but LLM-expensive. Once an analysis stabilizes, RDK compiles it into a **pipe** — a pre-forged DAG workflow that executes via the `tap` runtime with **zero LLM tokens per invocation**.

```bash
rdk install-pipes  # one-time: register compiled pipes with tap
rdk compile market-scan --subreddit SaaS --limit 10
```

### The three levels of thinking

| Level | Mechanism | Cost per run | Latency | When to use |
|-------|-----------|--------------|---------|-------------|
| **L1 Tools** | Raw data fetch | $0 | 1–2s | Exploratory one-off queries |
| **L2 Prompts** | LLM executes framework | ~$0.15 | 25–60s | Novel analyses you haven't compiled yet |
| **L3 Pipes** | Pre-compiled DAG | **$0** | 0.8ms – 3s | High-frequency repeat workflows |

### Why this matters

Every other Reddit research tool charges for each AI analysis. RDK lets you pay the reasoning cost **once** (when the pipe is forged) and reuse it infinitely. A `market-scan` pipe running daily against 10 subreddits costs about $45/month in LLM tokens via the prompt version (10 subreddits × 30 days × ~$0.15 per run ≈ $45). As a compiled pipe: $0.

### Two execution paths

RDK runs compiled pipes in two ways. The CLI uses the first by default; the second is a programmatic API for products that embed RDK inside their own Deno process.

- **Subprocess** — shells out to `tap rdk <name> --json`. Universal: works for every pipe, including browser-dependent ones like `market-scan`. ~80–230ms fork overhead.
- **In-process** — imports `runTap` directly from `@taprun/executor` and runs the pipe inside the caller's Deno process. **Measured at 0.8ms vs 233ms subprocess on `demo-transform`, a 293× speedup.** Currently works for pure-transform pipes (filter/sort/limit over pre-fetched rows); browser-dependent pipes still use the subprocess path until RDK ships its own daemon-backed `RpcSend`.

The forged pipes ship with the RDK binary. Users install them with a single `rdk install-pipes` call. Current pipes:

### `market-scan` (pipe)
Replaces the `market_scan` prompt. Runs `reddit/sub-intel`, `reddit/pain-points`, and `reddit/hot` in parallel via `tap.pipe`'s DAG scheduler, filters for pain-point rows, sorts by engagement, returns structured data. Break-even in the file header: *"one real run pays back the entire forge cost."*

### `demo-transform` (pipe)
A pure-transform dogfood pipe that proves the in-process executor path end-to-end. Takes a literal `rows` array, applies filter + sort + limit, returns top-N. Used by `src/test/compile_test.ts` to benchmark subprocess vs in-process wall-clock (the test fails if in-process isn't at least 2× faster — regression guard for the executor package).
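
The pure-transform idea behind `demo-transform` can be sketched without the real executor. The `Step`, `runPipe`, and step-helper names below are invented for illustration and are not the `@taprun/executor` API; they only show why a filter → sort → limit pipeline over pre-fetched rows needs no LLM and no I/O:

```typescript
// Conceptual sketch: each stage is a pure function over rows, so the whole
// pipeline is just composition — no network, no tokens, sub-millisecond.
type Row = { score: number; title: string };
type Step = (rows: Row[]) => Row[];

const filterStep = (pred: (r: Row) => boolean): Step => (rows) =>
  rows.filter(pred);
const sortStep = (key: "score"): Step => (rows) =>
  [...rows].sort((a, b) => b[key] - a[key]);
const limitStep = (n: number): Step => (rows) => rows.slice(0, n);

// Run the stages in order over an initial row set.
function runPipe(steps: Step[], rows: Row[]): Row[] {
  return steps.reduce((acc, step) => step(acc), rows);
}
```

Purity is what makes the in-process path safe: a step can't leave state behind in the caller's Deno process.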

### Forging new pipes

Pipes are forged using Tap's `forge` lifecycle. When you identify a repeat workflow that's burning LLM tokens, compile it: the reasoning is done once, the output is deterministic forever.

## Pro

Free tier uses RSS data. Pro ($9/mo) adds real upvote scores, comment counts, full comments, and subscriber counts.

```bash
rdk activate <your-license-key>
```

Get your license: https://rdk.taprun.dev

## Development

```bash
deno task dev       # Run CLI in dev mode
deno task mcp       # Start MCP server
deno task build     # Compile to single binary
deno task test      # Run tests
deno task test:e2e  # Run E2E tests (hits real Reddit)
```

### Build & Publish

```bash
deno task bundle       # esbuild + obfuscate → dist/rdk-bundled.js
deno task build:all    # Bundle + compile for all 4 platforms
deno task npm:dry-run  # Validate npm publish
deno task npm:publish  # Publish all 5 npm packages
```

### Deploy Landing Page

```bash
npx wrangler pages deploy site --project-name rdk
```

Live at: https://rdk.taprun.dev

### CI/CD

- **`release.yml`** — Push tag `v*` → bundle + compile 4 platforms → GitHub Release
- **`npm-publish.yml`** — Manual trigger → build + publish 5 npm packages

## Limitations

- **No upvote scores** (Free): RSS doesn't include scores. Demand scoring uses content analysis instead.
- **~25 results per query**: RSS feed limit. Use multiple targeted queries for broader coverage.
- **Read-only**: Intelligence tool only. No posting or commenting.

## License

Proprietary. All rights reserved.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "reddit-demand-kit",
-   "version": "0.2.0",
+   "version": "0.2.1",
    "description": "MCP-first demand intelligence for Reddit. Discover pain points, track competitors, find what's worth building.",
    "homepage": "https://rdk.taprun.dev",
    "repository": {
@@ -23,9 +23,9 @@
      "access": "public"
    },
    "optionalDependencies": {
-     "@taprun/rdk-darwin-arm64": "0.2.0",
-     "@taprun/rdk-darwin-x64": "0.2.0",
-     "@taprun/rdk-linux-x64": "0.2.0",
-     "@taprun/rdk-win32-x64": "0.2.0"
+     "@taprun/rdk-darwin-arm64": "0.2.1",
+     "@taprun/rdk-darwin-x64": "0.2.1",
+     "@taprun/rdk-linux-x64": "0.2.1",
+     "@taprun/rdk-win32-x64": "0.2.1"
    }
  }