freshcontext-mcp 0.3.13 → 0.3.15

@@ -0,0 +1,121 @@
+ # FreshContext — Session Save V5
+ **Date:** 2026-03-27
+ **npm:** freshcontext-mcp@0.3.13
+ **Tools:** 19 live
+
+ ---
+
+ ## What Was Done This Session
+
+ - npm tokens renewed: freshcontext-publish + NPM_TOKEN (NPM_TOKEN also updated in GitHub secrets)
+ - GitHub granular token renewed
+ - v0.3.13 pushed — extract_gebiz added (Singapore GeBIZ procurement via data.gov.sg)
+ - README fully rewritten — 19 tools, company landscape section with PLTR demo, all unique adapters documented
+ - GovTech Singapore follow-up sent (delivered on commitment: GeBIZ tool built and live)
+ - Palantir follow-up sent (references PLTR company landscape report, $1.1B contracts, Rule of 40 127%)
+ - 10 follow-up drafts created in Gmail (PatSnap, Mistral, HF, Klarna, SAP, Moonshot, MiniMax, Apify, Cloudflare, Sea)
+ - PLTR company landscape report confirmed working — full intelligence output documented
+ - Apify rebuild needed for v0.3.13 (manual trigger required)
+
+ ---
+
+ ## Current Tool Count: 19
+
+ **Standard (11):** extract_github, extract_hackernews, extract_scholar, extract_arxiv,
+ extract_reddit, extract_yc, extract_producthunt, search_repos, package_trends,
+ extract_finance, search_jobs
+
+ **Composite landscapes (4):**
+ - extract_landscape — YC + GitHub + HN + Reddit + Product Hunt + npm
+ - extract_gov_landscape — govcontracts + HN + GitHub + changelog
+ - extract_finance_landscape — finance + HN + Reddit + GitHub + changelog
+ - extract_company_landscape — SEC + govcontracts + GDELT + changelog + finance
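A composite landscape of this shape is, in essence, a parallel fan-out over the single-source adapters with graceful degradation. A minimal sketch (the adapter type and merge step are illustrative, not the worker's actual code):

```typescript
// Illustrative composite-landscape fan-out: query several single-source
// adapters in parallel and keep whatever succeeds, so one upstream outage
// degrades the landscape instead of failing it outright.
type AdapterResult = { adapter: string; content: string };
type Adapter = (query: string) => Promise<AdapterResult>;

async function compositeLandscape(
  query: string,
  adapters: Adapter[],
): Promise<AdapterResult[]> {
  const settled = await Promise.allSettled(adapters.map((a) => a(query)));
  return settled
    .filter((s): s is PromiseFulfilledResult<AdapterResult> => s.status === "fulfilled")
    .map((s) => s.value);
}
```

`Promise.allSettled` (rather than `Promise.all`) is the key design choice: a landscape report with one missing source is still useful, while one rejected promise failing the whole call would not be.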
+
+ **Unique — not in any other MCP server (4 + GeBIZ):**
+ - extract_changelog — release history from any repo/package/site
+ - extract_govcontracts — US federal contract awards (USASpending.gov)
+ - extract_sec_filings — 8-K material event disclosures (SEC EDGAR)
+ - extract_gdelt — global news events (GDELT Project, 100+ languages)
+ - extract_gebiz — Singapore Government procurement (data.gov.sg) ← NEW
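For context on the new adapter: data.gov.sg publishes procurement datasets behind a CKAN-style `datastore_search` endpoint. A hedged sketch of the query URL such an adapter would build — the endpoint shape is an assumption from CKAN conventions, and the resource id is a placeholder, not the one the real adapter uses:

```typescript
// Hypothetical query-URL builder for a data.gov.sg CKAN datastore_search
// call. The resourceId argument stands in for the real GeBIZ dataset id.
function gebizSearchUrl(resourceId: string, query: string, limit = 20): string {
  const u = new URL("https://data.gov.sg/api/action/datastore_search");
  u.searchParams.set("resource_id", resourceId); // which dataset to search
  u.searchParams.set("q", query);                // free-text filter
  u.searchParams.set("limit", String(limit));    // cap on returned rows
  return u.toString();
}
```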
+
+ ---
+
+ ## Outreach Status
+
+ **Active threads:**
+ - GovTech Singapore — delivered GeBIZ tool, awaiting response
+ - Palantir — follow-up sent with PLTR company landscape report
+
+ **Follow-up drafts sitting in Gmail (10):**
+ - PatSnap — contact@patsnap.com
+ - Mistral AI — contact@mistral.ai
+ - Hugging Face — api-enterprise@huggingface.co
+ - Klarna — partnerships@klarna.com
+ - SAP Startups — startups@sap.com
+ - Moonshot AI — support@moonshot.cn
+ - MiniMax — contact@minimax.io
+ - Apify / Jan — jan@apify.com
+ - Cloudflare Startups — startups@cloudflare.com
+ - Sea / Shopee — ir@sea.com
+
+ **Bounced — need correct addresses (9):**
+ - Revolut — bd@revolut.com + press@revolut.com both dead
+ - Zalando — partnerships@zalando.de + tech@zalando.de both dead
+ - Celonis — partnerships@celonis.com dead
+ - Grab — partnerships@grab.com + developer@grab.com both dead
+ - Sea Limited — partnerships@sea.com dead (ir@sea.com delivered but wrong team)
+ - Zhipu AI — bd@zhipuai.cn + contact@zhipuai.cn both dead
+ - MiniMax — bd@minimaxi.com dead (contact@minimax.io delivered)
+ - Moonshot AI — business@moonshot.cn dead (support@moonshot.cn delivered)
+ - Apollo — hello@apollo.io dead
+
+ **New targets identified (not yet contacted):**
+ - IMDA Singapore (Infocomm Media Development Authority)
+ - Australian Digital Transformation Agency
+ - UK Government Digital Service (GDS)
+ - LangChain / LangSmith
+ - LlamaIndex
+ - CrewAI
+ - Vercel AI SDK
+ - FactSet
+ - Morningstar
+
+ **Correct addresses to find for bounced:**
+ - Revolut → try partnerships@revolut.com
+ - Grab → try business@grab.com
+ - Celonis → try hello@celonis.com
+ - Apollo.io → try partnerships@apollo.io
+ - Zhipu AI → LinkedIn outreach (email dead)
+
+ ---
+
+ ## Pending Items
+
+ - Send 10 follow-up drafts from Gmail Drafts folder
+ - Trigger Apify rebuild for v0.3.13
+ - Find correct addresses for 9 bounced companies
+ - Contact new targets: IMDA, GDS, LangChain, LlamaIndex, CrewAI, FactSet
+ - GKG upgrade for extract_gdelt (tone scores, Goldstein scale) — deferred
+ - Agnost AI analytics integration — sign up at app.agnost.ai, one line in server.ts
+ - Synthesis endpoint (/briefing/now) — needs ANTHROPIC_KEY + $5 credits
+
+ ---
+
+ ## Demo Assets
+ 1. intelligence-report.html — Anthropic/OpenAI/Palantir government intelligence report
+ 2. PLTR company landscape — Q4 2025 earnings, $1.1B contracts, Rule of 40 127%
+
+ Both are best-in-class product demos. Use these in outreach.
+
+ ---
+
+ ## Resume Prompt
+ "I'm building freshcontext-mcp — 19 tools live at v0.3.13. Last session: built extract_gebiz
+ (Singapore GeBIZ), sent GovTech and Palantir follow-ups, drafted 10 follow-up emails.
+ Next: send the 10 drafts, fix bounced email addresses, contact new targets.
+ See SESSION_SAVE_V5.md."
+
+ ---
+
+ *"The work isn't gone. It's just waiting to be continued."*
+ *— Prince Gabriel, Grootfontein, Namibia 🇳🇦*
@@ -0,0 +1,194 @@
+ # FreshContext — Session Save V6
+ **Date:** 2026-04-07
+ **Version:** 0.3.14 (npm) / 0.21 (Apify)
+ **Tools:** 20 live
+ **Spec:** v1.1
+ **Author:** Immanuel Gabriel (Prince Gabriel), Grootfontein, Namibia 🇳🇦
+
+ ---
+
+ ## RESUME PROMPT FOR NEXT CHAT
+
+ Paste this at the start of a new conversation:
+
+ "I'm Immanuel Gabriel from Grootfontein, Namibia. I'm building FreshContext
+ — a web intelligence MCP server and open data freshness standard.
+ 20 tools, v0.3.14, live at https://freshcontext-mcp.gimmanuel73.workers.dev/mcp
+
+ Read SESSION_SAVE_V6.md in
+ C:\Users\Immanuel Gabriel\Downloads\freshcontext-mcp\
+ to get full context. Then continue exactly where we left off."
+
+ ---
+
+ ## WHAT WAS DONE THIS SESSION
+
+ ### FreshContext
+ - README updated to 20 tools, extract_idea_landscape added, standard framing sharpened
+ - FRESHCONTEXT_SPEC.md updated to v1.1:
+   - Reference implementation updated 11 → 20 adapters
+   - Domain-specific decay rate table added (financial 5.0 → academic 0.3)
+   - Composite Adapters section added
+   - Compatibility Levels table added (compatible / aware / scored)
+   - Changelog section added
+   - Apify Store + MCP Registry added to reference listings
+ - Apify build fixed — was failing due to GitHub clone block
+   - Switched to `apify push` from local machine (bypasses GitHub)
+   - Version format fixed: 0.3.14 → 0.21 (Apify only accepts MAJOR.MINOR)
+   - Build 0.21.2 successful and tested — clean FreshContext envelope output confirmed
+ - All changes pushed to GitHub (commits: 22ae867, 3186561)
+ - npm downloads: 191 organic in first week (zero marketing)
+ - HN post: SUBMITTED ✅
+ - OpenAI partner intake form: SUBMITTED ✅
+ - Calm partnerships form: SUBMITTED ✅
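For reference, the envelope the spec standardises can be sketched from the fields the tools already emit. Field names below mirror the Apify Actor's dataset output; FRESHCONTEXT_SPEC.md v1.1 is the normative definition, and the decay values are only the two endpoints quoted above:

```typescript
// Illustrative FreshContext envelope, mirroring the fields the Apify Actor
// pushes per result. The normative schema lives in FRESHCONTEXT_SPEC.md v1.1.
interface FreshContextEnvelope {
  adapter: string;              // tool that produced the result
  source_url: string;           // where the content came from
  content: string;              // extracted payload
  retrieved_at: string;         // ISO-8601 retrieval timestamp
  content_date: string | null;  // best-effort publication date
  freshness_confidence: string; // how confident the date estimate is
}

// Endpoints of the spec's domain-specific decay-rate table
// ("financial 5.0 → academic 0.3"); intermediate domains omitted here.
const DECAY_RATE_ENDPOINTS = { financial: 5.0, academic: 0.3 };
```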
+
+ ### Outreach — Emails Sent This Session
+ **AGI leasing pitch (all sent):**
+ - OpenAI — partnerships@openai.com (auto-reply → partner form submitted)
+ - Anthropic — partnerships@anthropic.com
+ - Google (DeepMind) — partnerships@google.com (deepmind.com bounced)
+ - xAI — partnerships@x.ai
+ - Cohere — partnerships@cohere.com
+ - Meta AI — ai-partnerships@meta.com
+ - Perplexity — partnerships@perplexity.ai
+ - Mistral — partnerships@mistral.ai (upgrade from old email)
+ - Hugging Face — partnerships@huggingface.co (upgrade)
+ - DeepSeek — partnerships@deepseek.com
+
+ **Bounced/dead (do not retry by email):**
+ - Adept — acquired by Amazon, defunct
+ - DeepMind direct — partnerships@deepmind.com dead, use partnerships@google.com
+
+ **Corrected and sent:**
+ - LlamaIndex — hello@llamaindex.ai (contact@ bounced)
+ - CrewAI — joao@crewai.com founder direct (contact@ bounced)
+ - Zalando — opensource@zalando.de (tech@ bounced)
+ - Celonis — press@celonis.com (hello@ bounced)
+
+ ### Catatonica Outreach — All Sent
+ **Wellness / wearables:**
+ - Calm — partnerships@calm.com (+ Calm Partnerships Monday.com form submitted)
+ - Headspace — partnerships@headspace.com
+ - Whoop — partnerships@whoop.com
+ - Oura — partnerships@ouraring.com
+ - WellHub — partnerships@wellhub.com
+ - Eight Sleep — partnerships@eightsleep.com
+
+ **Japan:**
+ - Recruit Holdings — partnerships@recruit.co.jp
+ - LY Corporation (LINE) — partnerships@lycorp.co.jp
+ - Mercari — bd@mercari.com
+ - DeNA — biz-dev@dena.com
+ - KDDI — partnerships@kddi.com
+ - Meiji Yasuda Life — wellness@meijiyasuda.co.jp
+
+ ---
+
+ ## CURRENT STATUS
+
+ ### Pending / Next Actions
+ | Task | Status |
+ |---|---|
+ | HN post | ✅ Submitted |
+ | OpenAI partner intake | ✅ Submitted |
+ | Calm partnerships form | ✅ Submitted |
+ | LinkedIn post | ⏳ Post this week — drafts ready (see below) |
+ | LinkedIn group posts (AGI groups) | ⏳ After LinkedIn profile post |
+ | Apify store description update | ⏳ Still shows old tool count — update to 20 |
+ | Follow-up emails (no reply >1 week) | ⏳ Due ~April 14 |
+
+ ### LinkedIn Posts — Ready to Publish
+
+ **POST 1 (origin story — most shareable):**
+ > I asked Claude to help me find a job.
+ > It gave me listings. I applied to three of them.
+ > Two didn't exist anymore. One had been closed for two years.
+ > Claude had no idea. It presented everything with the same confidence as results from this morning.
+ > That's not a Claude problem. That's a structural problem — AI agents have no standard way to know how old their data is.
+ > So I built one.
+ > FreshContext is a data freshness layer for AI agents — an open standard that wraps every piece of retrieved web data in a structured envelope: when it was retrieved, where it came from, how confident we are the date is accurate.
+ > 20 tools. No API keys. Live on Cloudflare's global edge.
+ > Built alone. From Grootfontein, Namibia.
+ > 191 organic downloads in the first week with zero marketing.
+ > The spec is MIT. If you're building agents that retrieve external data, this is the layer that makes that data trustworthy.
+ > → github.com/PrinceGabriel-lgtm/freshcontext-mcp
+
+ **POST 2 (standard framing — AGI/technical audience):**
+ > There is no standard for how fresh AI-retrieved data is.
+ > Every agent pipeline in production is solving this privately — adding their own timestamps, their own confidence signals, their own staleness logic — and none of it is interoperable.
+ > The problem isn't retrieval. Retrieval is solved. The problem is trust.
+ > FreshContext is the attempt to name that problem and fix it before the fragmentation sets in.
+ > Open standard. MIT licensed. 20 adapters. SEC filings, US federal contracts, global news in 100+ languages, Singapore government procurement, and more.
+ > The window to be early is still open.
+ > Built by Prince Gabriel — Grootfontein, Namibia 🇳🇦
+ > → github.com/PrinceGabriel-lgtm/freshcontext-mcp/blob/main/FRESHCONTEXT_SPEC.md
+
+ **LinkedIn GROUP post (discussion opener for AI/AGI groups):**
+ > Title: Should there be a standard for data freshness in AI agent pipelines?
+ > Every agent that retrieves external data faces the same invisible problem: it can't tell how old the data is.
+ > A result from this morning and one from two years ago look identical to the model without explicit metadata. For agents making real decisions — job recommendations, market analysis, competitive intelligence — this is a silent reliability gap.
+ > I've been working on a proposed standard called the FreshContext Specification. The idea: wrap every retrieved result in a structured envelope with a retrieval timestamp, publication date estimate, and confidence level.
+ > Curious whether others building agent systems have hit this problem — and whether a shared standard makes sense or whether everyone's better off solving it internally.
+ > Spec is MIT: github.com/PrinceGabriel-lgtm/freshcontext-mcp/blob/main/FRESHCONTEXT_SPEC.md
+
+ **LinkedIn groups to post in:**
+ - "Artificial Intelligence" (largest)
+ - "AI Professionals"
+ - "Machine Learning & Data Science"
+ - "Future of AI"
+ - "Model Context Protocol" (search for this — MCP-specific group if it exists)
+
+ ---
+
+ ## INFRASTRUCTURE STATE
+
+ | Layer | Status |
+ |---|---|
+ | npm | freshcontext-mcp@0.3.14 — auto-publishes via GitHub Actions |
+ | Cloudflare Worker | Live — global edge, KV cache, rate limiting, relevancy scoring |
+ | D1 Database | 18 watched queries, 6h cron, hash-based dedup |
+ | Apify Actor | v0.21 live, build 0.21.2 tested and confirmed working |
+ | MCP Registry | Listed — io.github.PrinceGabriel-lgtm/freshcontext |
+ | GitHub Actions | Live — push to main = auto build + publish |
+ | Spec | v1.1 — composite adapters, decay rates, compatibility levels |
+
+ ---
+
+ ## DEAL BIBLE — VALUATIONS
+
+ ### FreshContext
+ - White-label: Ask $8K/mo, accept $2–3K/mo, walk below $1,500/mo
+ - Acquisition: Ask $500K, accept $80–150K, walk below $50K
+ - Good deal signals: want you involved post-deal, commit to spec maintenance, 12-mo minimum
+ - Bad deal signals: want code not spec, month-to-month, under $50K full ownership
+
+ ### Catatonica
+ - White-label: Ask $5K/mo, accept $1.5–2.5K/mo, walk below $800/mo
+ - Acquisition: Ask $250K, accept $30–75K, walk below $20K
+ - Good deal signals: reference Cataton mechanic specifically, understand Japan angle
+ - Bad deal signals: "just another mindfulness app", want codebase not philosophy
+
+ ---
+
+ ## KEY ASSETS
+
+ - Deal Bible artifact: prince-gabriel-deal-bible.html (in /mnt/user-data/outputs/)
+ - Intelligence report: AI Government Intelligence Report — March 2026.html (Downloads)
+ - HANDOFF.md — complete transfer guide for acquisition/partnership
+ - FRESHCONTEXT_SPEC.md v1.1 — the open standard
+ - ROADMAP.md — 10-layer product vision
+
+ ---
+
+ ## CATATONICA
+
+ Live at: https://catatonica.pages.dev
+ Stack: Vanilla JS, Cloudflare Pages, Supabase (magic link auth), Stripe
+ Pricing: Free / $9/mo Deep / $29/mo The Order
+ Philosophy: The Art of Doing Nothing — structured stillness practice for high-intensity minds
+ Mechanics: Situations → Sessions → Catatons → Planned Obsolescence → Chronicle
+
+ ---
+
+ *"The work isn't gone. It's just waiting to be continued."*
+ *— Prince Gabriel, Grootfontein, Namibia*
@@ -0,0 +1,170 @@
+ # FreshContext — Session Save V9 (Updated)
+ **Date:** 2026-04-29
+ **Version:** 0.3.15
+ **Status:** DAR engine LIVE. Bot filter shipped. Workers Paid — $5/mo. Cron running.
+
+ ---
+
+ ## SESSION TIMELINE
+
+ ### Earlier in this session
+ 1. DAR engine shipped (`intelligence.ts`) — exponential decay scoring with proprietary λ constants
+ 2. `worker.ts` rewritten — DAR wired into cron, intel feed endpoint live
+ 3. ToolResult type fix + ScheduledController + ok() helper
+ 4. Semantic deduplication shipped
+ 5. METHODOLOGY.md written — formal IP documentation
+ 6. CONTEXT_SKILL.md created — token-efficient session resumption
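The production λ constants in `intelligence.ts` are proprietary (per METHODOLOGY.md), but the decay shape itself is plain exponential decay. A sketch with an illustrative λ only:

```typescript
// Decay-at-retrieval sketch: R_t = R_0 * e^(-lambda * ageHours).
// The lambda default below is illustrative; the real per-domain
// constants are proprietary and live in intelligence.ts.
function decayScore(r0: number, ageHours: number, lambda = 0.05): number {
  return r0 * Math.exp(-lambda * ageHours);
}
```

Fresh signals keep essentially their full base score; as age grows the score decays monotonically toward zero, faster in high-λ domains.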
+
+ ### This continuation
+ 7. **Diagnosed the 59k errors** — they're bot-noise + unhandled paths falling through to MCP transport
+ 8. **Fixed the bot-error noise** — added `GET /` landing page, `GET /health`, and clean 404s for unknown paths
+ 9. **Enhanced `/debug/db`** — now shows DAR engine coverage stats (signals_scored, unique_fingerprints, scoring_coverage %)
+ 10. **README.md updated** — added "Intelligence Layer (v0.3.15)" section with DAR math, provenance schema, endpoint table; updated roadmap
+ 11. **HANDOFF.md updated** — bumped to v0.3.15, added Intelligence Layer section, updated Pending Items
+ 12. **LAUNCH_POSTS_V9.md drafted** — Show HN, LinkedIn (long + short), Twitter thread, posting strategy
+
+ ### Two weeks later (2026-04-29)
+ 13. **309k requests / 309k errors over 24h** observed on Cloudflare dashboard — bot saturation on the `.workers.dev` URL
+ 14. **Daily request limit (100k) hit** — Workers Paid plan activated for $5/month
+ 15. **Bot filter shipped to worker.ts** — BLOCKED_PATH_PATTERNS + BLOCKED_USER_AGENTS + isBotProbe() runs FIRST in fetch handler
+     - Returns 410 Gone for known scanner paths (wp-, .env, .git, .php, owa, ecp, _ignition, etc.)
+     - Returns 410 Gone for known scanner user-agents (masscan, nmap, sqlmap, nikto, etc.)
+     - Zero KV/DB calls before reject — cheapest possible CPU footprint
+     - Expected outcome: error rate drops dramatically + paid CPU costs minimised
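A sketch of that filter — the pattern lists are a subset of the ones named above, and the signature is simplified to pure strings for illustration:

```typescript
// Bot-probe check run before any KV/D1 work. A hit gets an immediate
// 410 Gone in the fetch handler, so probes cost almost no CPU.
const BLOCKED_PATH_PATTERNS: RegExp[] = [
  /^\/wp-/, /\.env$/, /\.git/, /\.php$/, /^\/owa/, /^\/ecp/, /_ignition/,
];
const BLOCKED_USER_AGENTS = ["masscan", "nmap", "sqlmap", "nikto"];

function isBotProbe(path: string, userAgent: string): boolean {
  const ua = userAgent.toLowerCase();
  return (
    BLOCKED_PATH_PATTERNS.some((re) => re.test(path)) ||
    BLOCKED_USER_AGENTS.some((sig) => ua.includes(sig))
  );
}
```

In the handler, a hit short-circuits to `new Response(null, { status: 410 })` before any binding is touched.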
+
+ ---
+
+ ## THE 59K ERRORS EXPLAINED
+
+ The Cloudflare dashboard showed ~100% error rate across 24 hours. Diagnosis:
+
+ The fetch handler routes explicit paths (/mcp, /briefing, /v1/intel/feed/*, /debug/*, etc.) and falls through everything else to the MCP transport. Bots, crawlers, OPTIONS preflights, and any non-MCP traffic hitting the Worker triggered the MCP SDK to throw 500s.
+
+ **Real MCP traffic worked fine** — the wrangler tail log confirmed POST /mcp returning Ok consistently. The errors are the noise floor of a discoverable public workers.dev URL.
+
+ **Fix shipped in this session:**
+ - `GET /` returns service info JSON instead of falling through
+ - `GET /health` returns a liveness check
+ - Any path that isn't `/mcp` or `/mcp/` returns a clean 404 before reaching the transport
+ - Expected outcome: error rate drops from ~100% to <5% within 24 hours of redeploy
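The routing change can be sketched as a pure function over the path; the handler shape is simplified here and `handleMcp` stands in for the real MCP transport:

```typescript
// Explicit routes for "/" and "/health", the MCP endpoint passed through,
// and a clean 404 for everything else — stray traffic never reaches the
// MCP transport, so it can no longer trigger 500s.
async function route(
  path: string,
  handleMcp: () => Promise<string>,
): Promise<{ status: number; body: string }> {
  if (path === "/") {
    return { status: 200, body: JSON.stringify({ service: "freshcontext-mcp", status: "live" }) };
  }
  if (path === "/health") return { status: 200, body: "ok" };
  if (path === "/mcp" || path === "/mcp/") return { status: 200, body: await handleMcp() };
  return { status: 404, body: "not found" };
}
```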
+
+ ---
+
+ ## DEPLOY COMMANDS (run these now)
+
+ ```powershell
+ cd "C:\Users\Immanuel Gabriel\Downloads\freshcontext-mcp\worker"
+ npx wrangler deploy
+ ```
+
+ ```powershell
+ cd "C:\Users\Immanuel Gabriel\Downloads\freshcontext-mcp"
+ git add worker/src/worker.ts README.md HANDOFF.md LAUNCH_POSTS_V9.md SESSION_SAVE_V9.md
+ git commit -m "v0.3.15: bot-error fix + landing page + /health; README/HANDOFF: Intelligence Layer; launch posts drafted"
+ git push origin main
+ ```
+
+ ---
+
+ ## VERIFICATION COMMANDS
+
+ After deploy:
+
+ ```powershell
+ # Test the new landing page
+ curl.exe https://freshcontext-mcp.gimmanuel73.workers.dev/
+
+ # Liveness
+ curl.exe https://freshcontext-mcp.gimmanuel73.workers.dev/health
+
+ # DAR engine coverage stats
+ curl.exe https://freshcontext-mcp.gimmanuel73.workers.dev/debug/db
+
+ # Intel feed
+ curl.exe "https://freshcontext-mcp.gimmanuel73.workers.dev/v1/intel/feed/default?limit=5"
+
+ # Bot probe — expect 410 Gone from the bot filter (other unknown paths get 404)
+ curl.exe -v https://freshcontext-mcp.gimmanuel73.workers.dev/wp-admin
+ ```
+
+ D1 inspection:
+
+ ```powershell
+ cd worker
+
+ # Total signal count
+ npx wrangler d1 execute freshcontext-db --remote --command "SELECT COUNT(*) FROM scrape_results"
+
+ # DAR coverage by adapter
+ npx wrangler d1 execute freshcontext-db --remote --command "SELECT adapter, COUNT(*) as total, AVG(rt_score) as avg_rt, AVG(base_score) as avg_r0 FROM scrape_results WHERE rt_score IS NOT NULL GROUP BY adapter"
+
+ # Top 10 highest R_t signals right now
+ npx wrangler d1 execute freshcontext-db --remote --command "SELECT adapter, query, rt_score, entropy_level, published_at FROM scrape_results WHERE rt_score IS NOT NULL ORDER BY rt_score DESC LIMIT 10"
+
+ # Dedup effectiveness
+ npx wrangler d1 execute freshcontext-db --remote --command "SELECT COUNT(*) as total_signals, COUNT(DISTINCT semantic_fingerprint) as unique_stories FROM scrape_results WHERE semantic_fingerprint IS NOT NULL"
+
+ # Backup the entire D1 database
+ npx wrangler d1 export freshcontext-db --remote --output=backup-2026-04-14.sql
+
+ # Live tail
+ npx wrangler tail
+ ```
+
+ ---
+
+ ## CURRENT INFRASTRUCTURE STATE
+
+ | Layer | Status |
+ |---|---|
+ | npm @0.3.15 | LIVE |
+ | Cloudflare Worker | LIVE — DAR + new endpoints deployed in this session |
+ | D1 freshcontext-db | LIVE — accumulating with new schema columns |
+ | KV RATE_LIMITER + CACHE | LIVE |
+ | Cron 0 */6 * * * | RUNNING — every 6 hours |
+ | Spec site freshcontext-site.pages.dev | LIVE |
+ | GitHub Actions auto-publish | LIVE |
+ | ANTHROPIC_KEY | NOT SET — formatBriefing() fallback active |
+ | Apify Actor | NEEDS REBUILD (apify push) |
+
+ ---
+
+ ## NEW FILE STATE
+
+ ```
+ worker/src/intelligence.ts   [DAR engine — written V9 part 1]
+ worker/src/worker.ts         [REWRITTEN + bot-error fix this turn]
+ worker/src/synthesize.ts     [unchanged]
+ METHODOLOGY.md               [V9 part 1 — IP documentation]
+ README.md                    [UPDATED this turn — Intelligence Layer section]
+ HANDOFF.md                   [UPDATED this turn — v0.3.15]
+ LAUNCH_POSTS_V9.md           [NEW this turn — HN + LinkedIn + Twitter drafts]
+ SESSION_SAVE_V9.md           [UPDATED this turn]
+ CONTEXT_SKILL.md             [V9 part 1 — token-efficient resumption]
+ ```
+
+ ---
+
+ ## NEXT BUILD PRIORITIES
+
+ 1. **Run the deploy + verify commands above** (ensure 404 fix is live, error rate drops)
+ 2. **Wait 24h then check Cloudflare dashboard** — error count should be way down
+ 3. **Post Show HN** — Tuesday/Wednesday 09:00-10:30 ET. Use draft in LAUNCH_POSTS_V9.md
+ 4. **Post LinkedIn** — 24h after HN
+ 5. **Post Twitter thread** — same day as LinkedIn
+ 6. **Webhook trigger system** — push high-R_t low-entropy signals to user webhooks
+ 7. **Mining/industrial domain queries** — the moat
+ 8. **Profile creation API** — populate user_profiles table
+ 9. **Apify rebuild** — `apify push` from local
+ 10. **tsconfig skipLibCheck** — silence the 130 cosmetic node_modules conflicts
+
+ ---
+
+ ## RESUME PROMPT FOR NEXT SESSION
+
+ "Load FreshContext context. Read CONTEXT_SKILL.md and SESSION_SAVE_V9.md from C:\Users\Immanuel Gabriel\Downloads\freshcontext-mcp\, generate the context map, then ask me what we're working on today."
+
+ ---
+
+ *"The work isn't gone. It's just waiting to be continued."*
+ *— Prince Gabriel, Grootfontein, Namibia*
package/dist/apify.js ADDED
@@ -0,0 +1,133 @@
+ #!/usr/bin/env node
+ /**
+  * Apify Actor entry point — FreshContext MCP v0.3.13
+  *
+  * Reads Actor input, calls the appropriate adapter, pushes to dataset, exits.
+  * Supports the 16 single-source tools; the composite landscape tools are not
+  * wired into this entry point. Robust error handling throughout.
+  */
+ import { Actor } from "apify";
+ import { githubAdapter } from "./adapters/github.js";
+ import { hackerNewsAdapter } from "./adapters/hackernews.js";
+ import { scholarAdapter } from "./adapters/scholar.js";
+ import { arxivAdapter } from "./adapters/arxiv.js";
+ import { redditAdapter } from "./adapters/reddit.js";
+ import { ycAdapter } from "./adapters/yc.js";
+ import { productHuntAdapter } from "./adapters/productHunt.js";
+ import { repoSearchAdapter } from "./adapters/repoSearch.js";
+ import { packageTrendsAdapter } from "./adapters/packageTrends.js";
+ import { financeAdapter } from "./adapters/finance.js";
+ import { jobsAdapter } from "./adapters/jobs.js";
+ import { changelogAdapter } from "./adapters/changelog.js";
+ import { govContractsAdapter } from "./adapters/govcontracts.js";
+ import { secFilingsAdapter } from "./adapters/secFilings.js";
+ import { gdeltAdapter } from "./adapters/gdelt.js";
+ import { gebizAdapter } from "./adapters/gebiz.js";
+ import { stampFreshness } from "./tools/freshnessStamp.js";
+ async function main() {
+     await Actor.init();
+     let input;
+     try {
+         const raw = await Actor.getInput();
+         if (!raw || !raw.tool) {
+             await Actor.fail("Missing input. Provide a 'tool' field. E.g. { \"tool\": \"extract_hackernews\", \"url\": \"https://news.ycombinator.com\" }");
+             return;
+         }
+         input = raw;
+     }
+     catch (err) {
+         const msg = err instanceof Error ? err.message : String(err);
+         await Actor.fail(`Failed to read input: ${msg}`);
+         return;
+     }
+     // Resolve the primary string input — different tools use different field names
+     const url = input.url ?? input.query ?? input.topic ?? input.company ?? input.tickers ?? "";
+     const maxLength = input.max_length ?? 8000;
+     console.log(`FreshContext Actor | tool: ${input.tool} | input: "${url}"`);
+     try {
+         let result;
+         switch (input.tool) {
+             // ── Standard tools ────────────────────────────────────────────
+             case "extract_github":
+                 result = await githubAdapter({ url, maxLength });
+                 break;
+             case "extract_hackernews":
+                 result = await hackerNewsAdapter({ url, maxLength });
+                 break;
+             case "extract_scholar":
+                 result = await scholarAdapter({ url, maxLength });
+                 break;
+             case "extract_arxiv":
+                 result = await arxivAdapter({ url, maxLength });
+                 break;
+             case "extract_reddit":
+                 result = await redditAdapter({ url, maxLength });
+                 break;
+             case "extract_yc":
+                 result = await ycAdapter({ url, maxLength });
+                 break;
+             case "extract_producthunt":
+                 result = await productHuntAdapter({ url, maxLength });
+                 break;
+             case "search_repos":
+                 result = await repoSearchAdapter({ url, maxLength });
+                 break;
+             case "package_trends":
+                 result = await packageTrendsAdapter({ url, maxLength });
+                 break;
+             case "extract_finance":
+                 result = await financeAdapter({ url, maxLength });
+                 break;
+             case "search_jobs":
+                 result = await jobsAdapter({ url, maxLength });
+                 break;
+             case "extract_changelog":
+                 result = await changelogAdapter({ url, maxLength });
+                 break;
+             // ── Unique tools ──────────────────────────────────────────────
+             case "extract_govcontracts":
+                 result = await govContractsAdapter({ url, maxLength });
+                 break;
+             case "extract_sec_filings":
+                 result = await secFilingsAdapter({ url, maxLength });
+                 break;
+             case "extract_gdelt":
+                 result = await gdeltAdapter({ url, maxLength });
+                 break;
+             case "extract_gebiz":
+                 result = await gebizAdapter({ url, maxLength });
+                 break;
+             default:
+                 await Actor.fail(`Unknown tool: "${input.tool}". Valid tools: ` +
+                     "extract_github, extract_hackernews, extract_scholar, extract_arxiv, " +
+                     "extract_reddit, extract_yc, extract_producthunt, search_repos, " +
+                     "package_trends, extract_finance, search_jobs, extract_changelog, " +
+                     "extract_govcontracts, extract_sec_filings, extract_gdelt, extract_gebiz");
+                 return;
+         }
+         const ctx = stampFreshness(result, { url, maxLength }, input.tool);
+         await Actor.pushData({
+             tool: ctx.adapter,
+             source_url: ctx.source_url,
+             content: ctx.content,
+             retrieved_at: ctx.retrieved_at,
+             content_date: ctx.content_date ?? null,
+             freshness_confidence: ctx.freshness_confidence,
+         });
+         console.log(`✓ Done | retrieved: ${ctx.retrieved_at} | confidence: ${ctx.freshness_confidence}`);
+         await Actor.exit();
+     }
+     catch (err) {
+         const message = err instanceof Error ? err.message : String(err);
+         console.error(`FreshContext error: ${message}`);
+         await Actor.fail(message);
+     }
+ }
+ main().catch(async (err) => {
+     const message = err instanceof Error ? err.message : String(err);
+     console.error(`Fatal error: ${message}`);
+     try {
+         await Actor.fail(message);
+     }
+     catch { /* ignore */ }
+     process.exit(1);
+ });