freshcontext-mcp 0.3.12 → 0.3.14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/.actor/Dockerfile CHANGED
@@ -1,16 +1,16 @@
- FROM apify/actor-node:20
+ FROM apify/actor-node-playwright-chrome:20
 
  # Copy package files first for better Docker layer caching
  COPY package*.json ./
 
- # Install all dependencies including apify SDK
+ # Install dependencies (Playwright and Chromium are already in the base image)
  RUN npm install --include=dev
 
  # Copy source and pre-built dist
  COPY . ./
 
- # Rebuild TypeScript (dist/ from repo is fallback if this fails)
+ # Rebuild TypeScript
  RUN npm run build || echo "Build had warnings, using pre-compiled dist/"
 
- # Tell Apify to run the Actor entry point, not the MCP server
+ # Run the Actor entry point
  CMD ["node", "dist/apify.js"]
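The `|| echo` fallback in the build step means a failed `npm run build` never fails the image build. A minimal shell sketch of that pattern (`fake_build` is a stand-in for a failing build, not part of the Dockerfile):

```shell
# Stand-in for a failing `npm run build`
fake_build() { return 1; }

# The `||` branch runs only when the left side fails, and the compound
# command exits 0 either way — so the Docker layer still succeeds.
out=$(fake_build || echo "Build had warnings, using pre-compiled dist/")
echo "$out"
```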
package/README.md CHANGED
@@ -30,7 +30,7 @@ Claude now knows the difference between something from this morning and somethin
 
  ---
 
- ## 13 tools. No API keys.
+ ## 19 tools. No API keys.
 
  ### Intelligence
  | Tool | What it gets you |
@@ -52,22 +52,51 @@ Claude now knows the difference between something from this morning and somethin
  ### Market data
  | Tool | What it gets you |
  |---|---|
- | `extract_finance` | Live stock data — price, market cap, P/E, 52w range |
+ | `extract_finance` | Live stock data — price, market cap, P/E, 52w range. Up to 5 tickers. |
+ | `search_jobs` | Remote job listings from Remotive + HN "Who is Hiring" — every listing dated |
+
+ ### Composites — multiple sources, one call
+ | Tool | Sources | What it gets you |
+ |---|---|---|
+ | `extract_landscape` | 6 | YC + GitHub + HN + Reddit + Product Hunt + npm in parallel |
+ | `extract_gov_landscape` | 4 | Gov contracts + HN + GitHub repos + changelog |
+ | `extract_finance_landscape` | 5 | Finance + HN + Reddit + GitHub + changelog |
+ | `extract_company_landscape` | 5 | **The full picture on any company** — see below |
+
+ ### Unique — not available in any other MCP server
+ | Tool | Source | What it gets you |
+ |---|---|---|
+ | `extract_changelog` | GitHub Releases API / npm / auto-discover | Update history from any repo, package, or website |
+ | `extract_govcontracts` | USASpending.gov | US federal contract awards — company, amount, agency, period |
+ | `extract_sec_filings` | SEC EDGAR | 8-K filings — legally mandated material event disclosures |
+ | `extract_gdelt` | GDELT Project | Global news intelligence — 100+ languages, every country, 15-min updates |
+ | `extract_gebiz` | data.gov.sg | Singapore Government procurement tenders — open dataset, no auth |
 
- ### Composite
- | Tool | What it gets you |
- |---|---|
- | `extract_landscape` | One call. YC + GitHub + HN + Reddit + Product Hunt + npm in parallel. Full timestamped picture. |
+ ---
 
- ### Update intelligence — unique to FreshContext
- | Tool | What it gets you |
- |---|---|
- | `extract_changelog` | Update history from any GitHub repo, npm package, or website. Accepts a GitHub URL (uses the Releases API), an npm package name, or any website URL — auto-discovers `/changelog`, `/releases`, and `CHANGELOG.md`. Returns version numbers, release dates, and entry content, all timestamped. Use this to check if a dependency is still actively maintained, or to find out exactly when a feature shipped before referencing it. |
+ ## extract_company_landscape
 
- ### Government intelligence unique to FreshContext
- | Tool | What it gets you |
- |---|---|
- | `extract_govcontracts` | US federal contract awards pulled live from USASpending.gov — the official US Treasury database, updated daily. Search by company name, keyword, or NAICS code. Returns award amounts, awarding agency, period of performance, and contract description, all timestamped. A company that just won a $10M DoD contract is actively hiring and spending — that is a buying intent signal no other MCP server surfaces. |
+ The most complete single-call company analysis available in any MCP server. Five sources fired in parallel:
+
+ 1. **SEC EDGAR** — what did they legally just disclose (8-K filings)
+ 2. **USASpending.gov** — who is giving them government money
+ 3. **GDELT** — what is global news saying right now
+ 4. **Changelog** — are they actually shipping product
+ 5. **Yahoo Finance** — what is the market pricing in
+
+ ```
+ Use extract_company_landscape with company "Palantir" and ticker "PLTR"
+ ```
+
+ Real output from March 26, 2026:
+
+ > **Q4 2025:** Revenue $1.407B (+70% YoY). US commercial +137%. Rule of 40 score: **127%**.
+ > **Federal contracts:** $292.7M Army Maven Smart System · $252.5M CDAO · $145M ICE · $130M Air Force · more
+ > **SEC filing:** Q4 earnings 8-K filed Feb 3, 2026 — GAAP net income $609M, 43% margin
+ > **GDELT:** ICE/Medicaid data controversy, UK MoD security warning, NHS opposition — all timestamped
+ > **PLTR:** ~$154–157 · Market cap ~$370B · P/E 244x · 52w range $66 → $207
+
+ Bloomberg Terminal doesn't read commit history as a company health signal. This does.
 
  ---
 
@@ -160,17 +189,35 @@ Use extract_landscape with topic "cashflow prediction saas"
  ```
  Returns who's funded, what's trending, what repos exist, what packages are moving — all timestamped.
 
- **What's the community actually saying right now?**
+ **Full company intelligence in one call:**
  ```
- Use extract_reddit on r/MachineLearning
- Use extract_hackernews to search "mcp server 2026"
+ Use extract_company_landscape with company "Palantir" and ticker "PLTR"
+ ```
+ SEC filings + federal contracts + global news + changelog + market data. The complete picture.
+
+ **What's Singapore's government procuring right now?**
+ ```
+ Use extract_gebiz with url "artificial intelligence"
+ ```
+ Returns live tenders from the Ministry of Finance open dataset — agency, amount, closing date, all timestamped.
+
+ **Did that company just disclose something material?**
+ ```
+ Use extract_sec_filings with url "Palantir Technologies"
  ```
+ 8-K filings are legally mandated within 4 business days of any material event — CEO change, acquisition, breach, major contract.
 
- **Did that company actually ship recently?**
+ **What is global news saying about a company?**
+ ```
+ Use extract_gdelt with url "Palantir"
+ ```
+ 100+ languages, every country, updated every 15 minutes. Surfaces what Western sources miss.
+
+ **What's the community actually saying right now?**
  ```
- Use extract_github on https://github.com/some-org/some-repo
+ Use extract_reddit on r/MachineLearning
+ Use extract_hackernews to search "mcp server 2026"
  ```
- Check `Published` vs `Retrieved`. If the gap is 18 months, Claude will tell you.
 
  **Is this dependency still actively maintained?**
  ```
@@ -182,7 +229,7 @@ Returns the last 8 releases with exact dates. If the last release was 18 months
  ```
  Use extract_govcontracts with url "artificial intelligence"
  ```
- Returns the largest recent federal contract awards matching that keyword — company name, amount, agency, and award date. Pure buying intent signal.
+ Largest recent federal contract awards matching that keyword — company, amount, agency, award date.
 
  ---
 
@@ -195,10 +242,15 @@ freshcontext treats **retrieval time as first-class metadata**. Every adapter re
  - `retrieved_at` — exact ISO timestamp of the fetch
  - `content_date` — best estimate of when the content was originally published
  - `freshness_confidence` — `high`, `medium`, or `low` based on signal quality
+ - `freshness_score` — numeric 0–100 score with domain-specific decay rates
  - `adapter` — which source the data came from
 
  When confidence is `high`, the date came from a structured field (API, metadata). When it's `medium` or `low`, freshcontext tells you why.
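A score with domain-specific decay might look like the following minimal sketch. It assumes exponential decay with per-adapter half-lives; the `freshnessScore` helper and the half-life values are illustrative, not the shipped implementation:

```javascript
// Illustrative only — half-lives and the exponential model are assumptions.
const HALF_LIFE_DAYS = {
  finance: 1,   // market data goes stale in hours
  gdelt: 3,     // news cycles turn over in days
  github: 30,   // repo activity ages over weeks
  scholar: 365, // papers age slowly
};

function freshnessScore(contentDate, adapter, now = new Date()) {
  if (!contentDate) return 0; // no date signal, no score
  const ageDays = (now - new Date(contentDate)) / 86_400_000;
  const halfLife = HALF_LIFE_DAYS[adapter] ?? 30;
  // Score halves every `halfLife` days, clamped to 0–100.
  return Math.max(0, Math.min(100, Math.round(100 * Math.pow(0.5, ageDays / halfLife))));
}
```

Under this model, content retrieved the day it was published scores 100, and a 30-day-old GitHub result scores 50.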
 
+ The FreshContext Specification v1.0 is published as an open standard under MIT license. Any tool or agent that wraps retrieved data in the `[FRESHCONTEXT]` envelope is FreshContext-compatible.
+
+ → [Read the spec](./FRESHCONTEXT_SPEC.md)
+
  ---
 
  ## Security
@@ -212,28 +264,41 @@ When confidence is `high`, the date came from a structured field (API, metadata)
 
  ## Roadmap
 
- - [x] GitHub, HN, Scholar, YC, Reddit, Product Hunt, Finance, arXiv adapters
+ - [x] GitHub, HN, Scholar, YC, Reddit, Product Hunt, Finance, arXiv, Jobs adapters
  - [x] `extract_landscape` — 6-source composite tool
- - [x] Cloudflare Workers deployment
- - [x] KV-backed global rate limiting
- - [x] Listed on official MCP Registry
  - [x] `extract_changelog` — update cadence from any repo, package, or website
  - [x] `extract_govcontracts` — US federal contract intelligence via USASpending.gov
+ - [x] `extract_sec_filings` — SEC EDGAR 8-K material event filings
+ - [x] `extract_gdelt` — GDELT global news intelligence (100+ languages)
+ - [x] `extract_gebiz` — Singapore Government procurement via data.gov.sg
+ - [x] `extract_gov_landscape` — gov contracts + HN + GitHub + changelog composite
+ - [x] `extract_finance_landscape` — finance + HN + Reddit + GitHub + changelog composite
+ - [x] `extract_company_landscape` — 5-source company intelligence composite
+ - [x] `freshness_score` numeric metric (0–100) with domain-specific decay rates
+ - [x] Cloudflare Workers deployment — global edge with KV caching
+ - [x] D1 database — 18 watched queries running on 6-hour cron
+ - [x] Listed on official MCP Registry
  - [x] Listed on Apify Store
  - [x] FreshContext Specification v1.0 published
+ - [x] GitHub Actions CI/CD — auto-publish to npm on every push
+ - [ ] GKG upgrade for `extract_gdelt` — tone scores, Goldstein scale, event codes
  - [ ] TTL-based caching layer
- - [ ] `freshness_score` numeric metric (0–100)
- - [ ] `extract_devto`developer article sentiment
- - [ ] `extract_npm_releases` — package release velocity
+ - [ ] Dashboard — React frontend for the D1 intelligence pipeline
+ - [ ] Synthesis endpoint `/briefing/now` — AI-generated intelligence briefings
 
  ---
 
  ## Contributing
 
- PRs welcome. New adapters are the highest-value contribution — see `src/adapters/` for the pattern.
+ PRs welcome. New adapters are the highest-value contribution — see `src/adapters/` for the pattern and `FRESHCONTEXT_SPEC.md` for the contract any adapter must fulfill.
 
  ---
 
  ## License
 
  MIT
+
+ ---
+
+ *Built by Prince Gabriel — Grootfontein, Namibia 🇳🇦*
+ *"The work isn't gone. It's just waiting to be continued."*
@@ -0,0 +1,121 @@
+ # FreshContext — Session Save V5
+ **Date:** 2026-03-27
+ **npm:** freshcontext-mcp@0.3.13
+ **Tools:** 19 live
+
+ ---
+
+ ## What Was Done This Session
+
+ - npm tokens renewed: freshcontext-publish + NPM_TOKEN (both renewed, NPM_TOKEN updated in GitHub secrets)
+ - GitHub granular token renewed
+ - v0.3.13 pushed — extract_gebiz added (Singapore GeBIZ procurement via data.gov.sg)
+ - README fully rewritten — 19 tools, company landscape section with PLTR demo, all unique adapters documented
+ - GovTech Singapore follow-up sent (delivered on commitment: GeBIZ tool built and live)
+ - Palantir follow-up sent (references PLTR company landscape report, $1.1B contracts, Rule of 40 127%)
+ - 10 follow-up drafts created in Gmail (PatSnap, Mistral, HF, Klarna, SAP, Moonshot, MiniMax, Apify, Cloudflare, Sea)
+ - PLTR company landscape report confirmed working — full intelligence output documented
+ - Apify rebuild needed for v0.3.13 (manual trigger required)
+
+ ---
+
+ ## Current Tool Count: 19
+
+ **Standard (11):** extract_github, extract_hackernews, extract_scholar, extract_arxiv,
+ extract_reddit, extract_yc, extract_producthunt, search_repos, package_trends,
+ extract_finance, search_jobs
+
+ **Composite landscapes (4):**
+ - extract_landscape — YC + GitHub + HN + Reddit + Product Hunt + npm
+ - extract_gov_landscape — govcontracts + HN + GitHub + changelog
+ - extract_finance_landscape — finance + HN + Reddit + GitHub + changelog
+ - extract_company_landscape — SEC + govcontracts + GDELT + changelog + finance
+
+ **Unique — not in any other MCP server (4 + GeBIZ):**
+ - extract_changelog — release history from any repo/package/site
+ - extract_govcontracts — US federal contract awards (USASpending.gov)
+ - extract_sec_filings — 8-K material event disclosures (SEC EDGAR)
+ - extract_gdelt — global news events (GDELT Project, 100+ languages)
+ - extract_gebiz — Singapore Government procurement (data.gov.sg) ← NEW
+
+ ---
+
+ ## Outreach Status
+
+ **Active threads:**
+ - GovTech Singapore — delivered GeBIZ tool, awaiting response
+ - Palantir — follow-up sent with PLTR company landscape report
+
+ **Follow-up drafts sitting in Gmail (10):**
+ - PatSnap — contact@patsnap.com
+ - Mistral AI — contact@mistral.ai
+ - Hugging Face — api-enterprise@huggingface.co
+ - Klarna — partnerships@klarna.com
+ - SAP Startups — startups@sap.com
+ - Moonshot AI — support@moonshot.cn
+ - MiniMax — contact@minimax.io
+ - Apify / Jan — jan@apify.com
+ - Cloudflare Startups — startups@cloudflare.com
+ - Sea / Shopee — ir@sea.com
+
+ **Bounced — need correct addresses (9):**
+ - Revolut — bd@revolut.com + press@revolut.com both dead
+ - Zalando — partnerships@zalando.de + tech@zalando.de both dead
+ - Celonis — partnerships@celonis.com dead
+ - Grab — partnerships@grab.com + developer@grab.com both dead
+ - Sea Limited — partnerships@sea.com dead (ir@sea.com delivered but wrong team)
+ - Zhipu AI — bd@zhipuai.cn + contact@zhipuai.cn both dead
+ - MiniMax — bd@minimaxi.com dead (contact@minimax.io delivered)
+ - Moonshot AI — business@moonshot.cn dead (support@moonshot.cn delivered)
+ - Apollo — hello@apollo.io dead
+
+ **New targets identified (not yet contacted):**
+ - IMDA Singapore (Infocomm Media Development Authority)
+ - Australian Digital Transformation Agency
+ - UK Government Digital Service (GDS)
+ - LangChain / LangSmith
+ - LlamaIndex
+ - CrewAI
+ - Vercel AI SDK
+ - FactSet
+ - Morningstar
+
+ **Correct addresses to find for bounced:**
+ - Revolut → try partnerships@revolut.com
+ - Grab → try business@grab.com
+ - Celonis → try hello@celonis.com
+ - Apollo.io → try partnerships@apollo.io
+ - Zhipu AI → LinkedIn outreach (email dead)
+
+ ---
+
+ ## Pending Items
+
+ - Send 10 follow-up drafts from Gmail Drafts folder
+ - Trigger Apify rebuild for v0.3.13
+ - Find correct addresses for 9 bounced companies
+ - Contact new targets: IMDA, GDS, LangChain, LlamaIndex, CrewAI, FactSet
+ - GKG upgrade for extract_gdelt (tone scores, Goldstein scale) — deferred
+ - Agnost AI analytics integration — sign up at app.agnost.ai, one line in server.ts
+ - Synthesis endpoint (/briefing/now) — needs ANTHROPIC_KEY + $5 credits
+
+ ---
+
+ ## Demo Assets
+ 1. intelligence-report.html — Anthropic/OpenAI/Palantir government intelligence report
+ 2. PLTR company landscape — Q4 2025 earnings, $1.1B contracts, Rule of 40 127%
+
+ Both are best-in-class product demos. Use these in outreach.
+
+ ---
+
+ ## Resume Prompt
+ "I'm building freshcontext-mcp — 19 tools live at v0.3.13. Last session: built extract_gebiz
+ (Singapore GeBIZ), sent GovTech and Palantir follow-ups, drafted 10 follow-up emails.
+ Next: send the 10 drafts, fix bounced email addresses, contact new targets.
+ See SESSION_SAVE_V5.md."
+
+ ---
+
+ *"The work isn't gone. It's just waiting to be continued."*
+ *— Prince Gabriel, Grootfontein, Namibia 🇳🇦*
@@ -4,8 +4,8 @@
  * No other MCP server has this. GDELT monitors broadcast, print, and web news
  * from every country in 100+ languages, updated every 15 minutes. Free, no auth.
  *
- * Returns structured geopolitical intelligence not just headlines, but event codes,
- * actor tags, tone scores, goldstein scale (impact measure), location, timestamp.
+ * Returns structured global news articles with source country, language, domain, and
+ * timestamp. Covers every country, 100+ languages, updated every 15 minutes.
  *
  * API: https://api.gdeltproject.org/api/v2/doc/doc
  */
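The request this adapter makes can be sketched as a URL builder. Parameter names (`query`, `mode=artlist`, `format=json`, `maxrecords`, `sort`) follow the public GDELT DOC 2.0 API; the defaults chosen here are assumptions, not necessarily the adapter's actual values:

```javascript
// Sketch of a GDELT DOC 2.0 article-list query — defaults are illustrative.
const GDELT_DOC = "https://api.gdeltproject.org/api/v2/doc/doc";

function buildGdeltUrl(query, maxRecords = 25) {
  const params = new URLSearchParams({
    query,
    mode: "artlist",              // one row per matching article
    format: "json",
    maxrecords: String(maxRecords),
    sort: "datedesc",             // newest first — freshness is the point
  });
  return `${GDELT_DOC}?${params}`;
}
```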
@@ -0,0 +1,139 @@
+ /**
+  * GeBIZ adapter — fetches Singapore Government procurement opportunities
+  * from data.gov.sg (Ministry of Finance official open dataset)
+  *
+  * No other MCP server has this. GeBIZ is Singapore's One-Stop E-Procurement
+  * Portal. All open tenders from government agencies since FY2020 are published
+  * here as structured open data.
+  *
+  * Free, no auth, updated continuously.
+  *
+  * API: https://data.gov.sg/api/action/datastore_search
+  * Dataset: d_acde1106003906a75c3fa052592f2fcb
+  *
+  * Accepts:
+  * - Keyword search: "AI", "software", "data analytics"
+  * - Agency name: "GovTech", "MOH", "MAS"
+  * - Empty string: returns latest tenders across all agencies
+  */
+ const DATASET_ID = "d_acde1106003906a75c3fa052592f2fcb";
+ const BASE_URL = "https://data.gov.sg/api/action/datastore_search";
+ const HEADERS = {
+ "Accept": "application/json",
+ "User-Agent": "freshcontext-mcp/1.0 contact@freshcontext.dev",
+ };
+ function formatDate(raw) {
+ if (!raw)
+ return "N/A";
+ // Dates come as DD/MM/YYYY or ISO
+ return raw.slice(0, 10);
+ }
+ function formatAmt(raw) {
+ if (!raw || raw === "NA" || raw === "")
+ return "N/A";
+ const n = parseFloat(raw.replace(/[^0-9.]/g, ""));
+ if (isNaN(n))
+ return raw;
+ if (n >= 1_000_000)
+ return `S$${(n / 1_000_000).toFixed(2)}M`;
+ if (n >= 1_000)
+ return `S$${(n / 1_000).toFixed(1)}K`;
+ return `S$${n.toFixed(0)}`;
+ }
+ async function fetchGeBIZ(query, limit = 15) {
+ const params = new URLSearchParams({
+ resource_id: DATASET_ID,
+ limit: String(limit),
+ sort: "_id desc", // most recent first
+ });
+ // Add full-text search if query provided
+ if (query.trim()) {
+ params.set("q", query.trim());
+ }
+ const url = `${BASE_URL}?${params}`;
+ const controller = new AbortController();
+ const timeout = setTimeout(() => controller.abort(), 20000);
+ try {
+ const res = await fetch(url, {
+ headers: HEADERS,
+ signal: controller.signal,
+ });
+ if (!res.ok) {
+ const text = await res.text().catch(() => "");
+ throw new Error(`GeBIZ API HTTP ${res.status}: ${text.slice(0, 200)}`);
+ }
+ return await res.json();
+ }
+ finally {
+ clearTimeout(timeout);
+ }
+ }
+ function formatRecords(data, query, maxLength) {
+ const records = data.result?.records ?? [];
+ const total = data.result?.total ?? 0;
+ if (!records.length) {
+ return {
+ raw: `No GeBIZ tenders found for "${query}".\n\nTips:\n- Try a broader keyword: "software" or "data"\n- Try an agency name: "GovTech" or "MOH"\n- Leave query empty to see all recent tenders`,
+ content_date: null,
+ freshness_confidence: "high",
+ };
+ }
+ const lines = [
+ `GeBIZ Singapore Government Procurement — ${query || "All Recent Tenders"}`,
+ `${total.toLocaleString()} total records found (showing ${records.length})`,
+ `Source: data.gov.sg — Ministry of Finance open dataset`,
+ "",
+ ];
+ let latestDate = null;
+ records.forEach((r, i) => {
+ const awardDate = formatDate(r.awarded_date);
+ const closeDate = formatDate(r.tender_close_date);
+ const dateStr = r.awarded_date ?? r.tender_close_date ?? null;
+ if (dateStr && dateStr !== "NA") {
+ // Parse DD/MM/YYYY
+ const parts = dateStr.split("/");
+ let iso = null;
+ if (parts.length === 3) {
+ iso = `${parts[2]}-${parts[1]}-${parts[0]}`;
+ }
+ else if (dateStr.length >= 10) {
+ iso = dateStr.slice(0, 10);
+ }
+ if (iso && (!latestDate || iso > latestDate))
+ latestDate = iso;
+ }
+ const desc = (r.description ?? "No description").slice(0, 300);
+ const agency = r.agency ?? "N/A";
+ const tenderNo = r.tender_no ?? "N/A";
+ const status = r.tender_detail_status ?? "N/A";
+ const category = r.procurement_category ?? "N/A";
+ const supplier = r.supplier_name ?? "N/A";
+ const amount = formatAmt(r.awarded_amt);
+ lines.push(`[${i + 1}] ${desc}`);
+ lines.push(` Agency: ${agency}`);
+ lines.push(` Tender No: ${tenderNo}`);
+ lines.push(` Category: ${category}`);
+ lines.push(` Status: ${status}`);
+ if (supplier !== "N/A")
+ lines.push(` Supplier: ${supplier}`);
+ if (amount !== "N/A")
+ lines.push(` Amount: ${amount}`);
+ lines.push(` Close Date: ${closeDate}`);
+ if (awardDate !== "N/A")
+ lines.push(` Awarded: ${awardDate}`);
+ lines.push("");
+ });
+ lines.push(`Full dataset: https://data.gov.sg/datasets/d_acde1106003906a75c3fa052592f2fcb/view`);
+ lines.push(`Register as supplier: https://www.gebiz.gov.sg/cmw/content/getstart.html`);
+ return {
+ raw: lines.join("\n").slice(0, maxLength),
+ content_date: latestDate,
+ freshness_confidence: "high",
+ };
+ }
+ export async function gebizAdapter(options) {
+ const query = (options.url ?? "").trim();
+ const maxLength = options.maxLength ?? 6000;
+ const data = await fetchGeBIZ(query);
+ return formatRecords(data, query, maxLength);
+ }
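The per-record date handling in `formatRecords` can be exercised in isolation. `toIso` below is a hypothetical standalone version of the inline DD/MM/YYYY logic, not exported by the adapter; ISO-8601 strings compare correctly with plain `>`, which is why the adapter can track `latestDate` lexicographically:

```javascript
// Hypothetical standalone helper mirroring the adapter's inline date logic.
function toIso(raw) {
  if (!raw || raw === "NA") return null;
  const parts = raw.split("/");
  if (parts.length === 3) {
    // GeBIZ dates arrive as DD/MM/YYYY
    return `${parts[2]}-${parts[1]}-${parts[0]}`;
  }
  // Otherwise assume an ISO-ish string and keep the date portion
  return raw.length >= 10 ? raw.slice(0, 10) : null;
}
```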
package/dist/apify.js ADDED
@@ -0,0 +1,133 @@
+ #!/usr/bin/env node
+ /**
+  * Apify Actor entry point — FreshContext MCP v0.3.13
+  *
+  * Reads Actor input, calls the appropriate adapter, pushes to dataset, exits.
+  * All 19 tools supported. Robust error handling throughout.
+  */
+ import { Actor } from "apify";
+ import { githubAdapter } from "./adapters/github.js";
+ import { hackerNewsAdapter } from "./adapters/hackernews.js";
+ import { scholarAdapter } from "./adapters/scholar.js";
+ import { arxivAdapter } from "./adapters/arxiv.js";
+ import { redditAdapter } from "./adapters/reddit.js";
+ import { ycAdapter } from "./adapters/yc.js";
+ import { productHuntAdapter } from "./adapters/productHunt.js";
+ import { repoSearchAdapter } from "./adapters/repoSearch.js";
+ import { packageTrendsAdapter } from "./adapters/packageTrends.js";
+ import { financeAdapter } from "./adapters/finance.js";
+ import { jobsAdapter } from "./adapters/jobs.js";
+ import { changelogAdapter } from "./adapters/changelog.js";
+ import { govContractsAdapter } from "./adapters/govcontracts.js";
+ import { secFilingsAdapter } from "./adapters/secFilings.js";
+ import { gdeltAdapter } from "./adapters/gdelt.js";
+ import { gebizAdapter } from "./adapters/gebiz.js";
+ import { stampFreshness } from "./tools/freshnessStamp.js";
+ async function main() {
+ await Actor.init();
+ let input;
+ try {
+ const raw = await Actor.getInput();
+ if (!raw || !raw.tool) {
+ await Actor.fail("Missing input. Provide a 'tool' field. E.g. { \"tool\": \"extract_hackernews\", \"url\": \"https://news.ycombinator.com\" }");
+ return;
+ }
+ input = raw;
+ }
+ catch (err) {
+ const msg = err instanceof Error ? err.message : String(err);
+ await Actor.fail(`Failed to read input: ${msg}`);
+ return;
+ }
+ // Resolve the primary string input — different tools use different field names
+ const url = input.url ?? input.query ?? input.topic ?? input.company ?? input.tickers ?? "";
+ const maxLength = input.max_length ?? 8000;
+ console.log(`FreshContext Actor | tool: ${input.tool} | input: "${url}"`);
+ try {
+ let result;
+ switch (input.tool) {
+ // ── Standard tools ────────────────────────────────────────────
+ case "extract_github":
+ result = await githubAdapter({ url, maxLength });
+ break;
+ case "extract_hackernews":
+ result = await hackerNewsAdapter({ url, maxLength });
+ break;
+ case "extract_scholar":
+ result = await scholarAdapter({ url, maxLength });
+ break;
+ case "extract_arxiv":
+ result = await arxivAdapter({ url, maxLength });
+ break;
+ case "extract_reddit":
+ result = await redditAdapter({ url, maxLength });
+ break;
+ case "extract_yc":
+ result = await ycAdapter({ url, maxLength });
+ break;
+ case "extract_producthunt":
+ result = await productHuntAdapter({ url, maxLength });
+ break;
+ case "search_repos":
+ result = await repoSearchAdapter({ url, maxLength });
+ break;
+ case "package_trends":
+ result = await packageTrendsAdapter({ url, maxLength });
+ break;
+ case "extract_finance":
+ result = await financeAdapter({ url, maxLength });
+ break;
+ case "search_jobs":
+ result = await jobsAdapter({ url, maxLength });
+ break;
+ case "extract_changelog":
+ result = await changelogAdapter({ url, maxLength });
+ break;
+ // ── Unique tools ──────────────────────────────────────────────
+ case "extract_govcontracts":
+ result = await govContractsAdapter({ url, maxLength });
+ break;
+ case "extract_sec_filings":
+ result = await secFilingsAdapter({ url, maxLength });
+ break;
+ case "extract_gdelt":
+ result = await gdeltAdapter({ url, maxLength });
+ break;
+ case "extract_gebiz":
+ result = await gebizAdapter({ url, maxLength });
+ break;
+ default:
+ await Actor.fail(`Unknown tool: "${input.tool}". Valid tools: ` +
+ "extract_github, extract_hackernews, extract_scholar, extract_arxiv, " +
+ "extract_reddit, extract_yc, extract_producthunt, search_repos, " +
+ "package_trends, extract_finance, search_jobs, extract_changelog, " +
+ "extract_govcontracts, extract_sec_filings, extract_gdelt, extract_gebiz");
+ return;
+ }
+ const ctx = stampFreshness(result, { url, maxLength }, input.tool);
+ await Actor.pushData({
+ tool: ctx.adapter,
+ source_url: ctx.source_url,
+ content: ctx.content,
+ retrieved_at: ctx.retrieved_at,
+ content_date: ctx.content_date ?? null,
+ freshness_confidence: ctx.freshness_confidence,
+ });
+ console.log(`✓ Done | retrieved: ${ctx.retrieved_at} | confidence: ${ctx.freshness_confidence}`);
+ await Actor.exit();
+ }
+ catch (err) {
+ const message = err instanceof Error ? err.message : String(err);
+ console.error(`FreshContext error: ${message}`);
+ await Actor.fail(message);
+ }
+ }
+ main().catch(async (err) => {
+ const message = err instanceof Error ? err.message : String(err);
+ console.error(`Fatal error: ${message}`);
+ try {
+ await Actor.fail(message);
+ }
+ catch { /* ignore */ }
+ process.exit(1);
+ });
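The entry point reads a single JSON input: a `tool` field plus the tool's primary string (`url`, and optionally `max_length`). A minimal example for the new GeBIZ tool (field values are illustrative):

```json
{
  "tool": "extract_gebiz",
  "url": "artificial intelligence",
  "max_length": 6000
}
```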
package/dist/server.js CHANGED
@@ -9,12 +9,14 @@ import { ycAdapter } from "./adapters/yc.js";
  import { repoSearchAdapter } from "./adapters/repoSearch.js";
  import { packageTrendsAdapter } from "./adapters/packageTrends.js";
  import { redditAdapter } from "./adapters/reddit.js";
+ import { productHuntAdapter } from "./adapters/productHunt.js";
  import { financeAdapter } from "./adapters/finance.js";
  import { jobsAdapter } from "./adapters/jobs.js";
  import { changelogAdapter } from "./adapters/changelog.js";
  import { govContractsAdapter } from "./adapters/govcontracts.js";
  import { secFilingsAdapter } from "./adapters/secFilings.js";
  import { gdeltAdapter } from "./adapters/gdelt.js";
+ import { gebizAdapter } from "./adapters/gebiz.js";
  import { stampFreshness, formatForLLM } from "./tools/freshnessStamp.js";
  import { formatSecurityError } from "./security.js";
  const server = new McpServer({
@@ -347,7 +349,7 @@ server.registerTool("extract_sec_filings", {
  // codes, actor tags, tone scores, goldstein scale (impact), location, timestamp.
  // Unique: no other MCP server has this.
  server.registerTool("extract_gdelt", {
- description: "Fetch global news intelligence from the GDELT Project. GDELT monitors broadcast, print, and web news from every country in 100+ languages, updated every 15 minutes. Returns structured articles with source country, language, and publication date. Free, no auth. Pass any company name, topic, or keyword. Unique: not available in any other MCP server.",
+ description: "Fetch global news intelligence from the GDELT Project. GDELT monitors broadcast, print, and web news from every country in 100+ languages, updated every 15 minutes. Returns articles with title, source domain, country of origin, language, and publication date — covering news worldwide that Western sources miss. Free, no auth. Pass any company name, topic, or keyword. Unique: not available in any other MCP server.",
  inputSchema: z.object({
  url: z.string().describe("Query: company name, topic, or keyword e.g. 'Palantir', 'artificial intelligence', 'MCP server'"),
  max_length: z.number().optional().default(6000),
@@ -408,6 +410,87 @@ server.registerTool("extract_company_landscape", {
408
410
  ].join("\n\n");
409
411
  return { content: [{ type: "text", text: combined }] };
410
412
  });
413
+ // ─── Tool: extract_gebiz ────────────────────────────────────────────────────
414
+ // Singapore Government procurement tenders via data.gov.sg open API.
415
+ // Ministry of Finance official dataset — all open tenders since FY2020.
416
+ // Free, no auth, structured. Unique: no other MCP server has this.
417
+ server.registerTool("extract_gebiz", {
418
+ description: "Fetch Singapore Government procurement opportunities from GeBIZ via the data.gov.sg open API (Ministry of Finance official dataset). Returns open tenders, awarded contracts, agencies, amounts, and closing dates. Search by keyword (e.g. 'software', 'AI', 'data analytics'), agency name (e.g. 'GovTech', 'MOH'), or leave blank for all recent tenders. Free, no auth. Unique: not available in any other MCP server.",
419
+ inputSchema: z.object({
420
+ url: z.string().describe("Search keyword, agency name, or leave empty for all recent tenders. E.g. 'artificial intelligence', 'GovTech', 'cybersecurity'"),
421
+ max_length: z.number().optional().default(6000),
422
+ }),
423
+ annotations: { readOnlyHint: true, openWorldHint: true },
424
+ }, async ({ url, max_length }) => {
425
+ try {
426
+ const result = await gebizAdapter({ url, maxLength: max_length });
427
+ const ctx = stampFreshness(result, { url, maxLength: max_length }, "gebiz");
428
+ return { content: [{ type: "text", text: formatForLLM(ctx) }] };
429
+ }
430
+ catch (err) {
431
+ return { content: [{ type: "text", text: formatSecurityError(err) }] };
432
+ }
433
+ });
+ // ─── Tool: extract_idea_landscape ───────────────────────────────────────────
+ // Idea validation composite — 6 sources that answer: should I build this?
+ // HN pain points + YC funded competitors + GitHub crowding + job market signal
+ // + package ecosystem adoption + Product Hunt recent launches.
+ // The job market section is the key differentiator — companies paying salaries
+ // around a problem is the strongest signal a real market exists.
+ server.registerTool("extract_idea_landscape", {
+ description: "Idea validation composite tool for developers and founders. Given a project idea or keyword, simultaneously queries 6 sources to answer: Is this problem real? Is the market crowded? Is there funding? Are companies hiring? What just launched? Sources: (1) Hacker News — what developers are actively complaining about and discussing, (2) YC companies — who has already received funding in this space, (3) GitHub repos — how crowded the open source landscape is, (4) Job listings — hiring signal showing real company spend around this problem, (5) npm/PyPI package trends — ecosystem adoption and velocity, (6) Product Hunt — what just launched and how it was received. Returns a unified 6-source idea validation report.",
+ inputSchema: z.object({
+ idea: z.string().describe("Your idea, problem space, or keyword. E.g. 'data freshness for AI agents', 'procurement intelligence', 'developer observability'"),
+ max_length: z.number().optional().default(14000),
+ }),
+ annotations: { readOnlyHint: true, openWorldHint: true },
+ }, async ({ idea, max_length }) => {
+ const perSection = Math.floor((max_length ?? 14000) / 6);
+ const [hnResult, ycResult, repoResult, jobsResult, pkgResult, phResult] = await Promise.allSettled([
+ // 1. Pain signal — what are developers actively complaining about
+ hackerNewsAdapter({
+ url: `https://hn.algolia.com/api/v1/search?query=${encodeURIComponent(idea)}&tags=story&hitsPerPage=10`,
+ maxLength: perSection,
+ }),
+ // 2. Funding signal — who has already raised money in this space
+ ycAdapter({
+ url: `https://www.ycombinator.com/companies?query=${encodeURIComponent(idea)}`,
+ maxLength: perSection,
+ }),
+ // 3. Crowding signal — how many GitHub repos exist, how active
+ repoSearchAdapter({ url: idea, maxLength: perSection }),
+ // 4. Market signal — companies paying salaries = real spend on this problem
+ jobsAdapter({ url: idea, maxLength: perSection }),
+ // 5. Ecosystem signal — npm/PyPI adoption and release velocity
+ packageTrendsAdapter({ url: idea, maxLength: perSection }),
+ // 6. Launch signal — what just shipped and how the market received it
+ productHuntAdapter({ url: idea, maxLength: perSection }),
+ ]);
+ const section = (label, result) => result.status === "fulfilled"
+ ? `## ${label}\n${result.value.raw}`
+ : `## ${label}\n[Unavailable: ${result.reason}]`;
+ const combined = [
+ `# Idea Validation Landscape: "${idea}"`,
+ `Generated: ${new Date().toISOString()}`,
+ `Sources: Hacker News · YC Companies · GitHub · Job Listings · npm/PyPI · Product Hunt`,
+ "",
+ `## ℹ️ How to read this report`,
+ `Pain signal (HN): Are developers actively discussing this problem?`,
+ `Funding signal (YC): Has this already attracted institutional money?`,
+ `Crowding signal (GitHub): How many repos exist — empty = opportunity, crowded = validation.`,
+ `Market signal (Jobs): Companies hiring around this = real budget allocated = real market.`,
+ `Ecosystem signal (npm/PyPI): Are packages being built and adopted?`,
+ `Launch signal (Product Hunt): What just shipped — community reception and timing.`,
+ "",
+ section("🗣️ Pain Signal — Developer Discussions (Hacker News)", hnResult),
+ section("💰 Funding Signal — Backed Companies (YC)", ycResult),
+ section("📦 Crowding Signal — Open Source Landscape (GitHub)", repoResult),
+ section("💼 Market Signal — Hiring Activity (Job Listings)", jobsResult),
+ section("🔧 Ecosystem Signal — Package Adoption (npm/PyPI)", pkgResult),
+ section("🚀 Launch Signal — Recent Launches (Product Hunt)", phResult),
+ ].join("\n\n");
+ return { content: [{ type: "text", text: combined }] };
+ });
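The composite leans on `Promise.allSettled` so that one failing source degrades into an `[Unavailable: …]` section instead of rejecting the whole report. The same pattern in isolation, with illustrative names rather than the package's actual exports:

```typescript
// Sketch of the fan-out used above: run labelled async sources in
// parallel and render each outcome, fulfilled or rejected, as a
// markdown section.
async function gatherSections(
  sources: Array<[label: string, task: Promise<string>]>
): Promise<string> {
  const settled = await Promise.allSettled(sources.map(([, task]) => task));
  return settled
    .map((result, i) =>
      result.status === "fulfilled"
        ? `## ${sources[i][0]}\n${result.value}`
        : `## ${sources[i][0]}\n[Unavailable: ${result.reason}]`
    )
    .join("\n\n");
}
```

With `Promise.all` a single rejected adapter would discard the five successful ones; `allSettled` keeps every outcome, which is the right trade-off for a best-effort multi-source report.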
  // ─── Start ───────────────────────────────────────────────────────────────────
  async function main() {
  const transport = new StdioServerTransport();
package/input_schema.json CHANGED
@@ -1,45 +1,44 @@
  {
- "title": "FreshContext MCP Input",
+ "title": "FreshContext Input",
  "type": "object",
  "schemaVersion": 1,
  "properties": {
  "tool": {
  "title": "Tool",
  "type": "string",
- "description": "The FreshContext tool to run.",
+ "description": "The FreshContext tool to run. See README for full descriptions.",
  "enum": [
- "extract_github",
  "extract_hackernews",
- "extract_scholar",
+ "extract_github",
+ "extract_govcontracts",
+ "extract_sec_filings",
+ "extract_gdelt",
+ "extract_gebiz",
+ "extract_finance",
+ "extract_changelog",
  "extract_arxiv",
+ "extract_scholar",
  "extract_reddit",
  "extract_yc",
  "extract_producthunt",
  "search_repos",
  "package_trends",
- "extract_finance",
- "extract_landscape"
+ "search_jobs"
  ],
- "default": "extract_landscape",
+ "default": "extract_hackernews",
  "editor": "select"
  },
  "url": {
- "title": "URL",
- "type": "string",
- "description": "URL to extract from. Required for: extract_github, extract_hackernews, extract_scholar, extract_reddit. E.g. https://github.com/owner/repo",
- "editor": "textfield"
- },
- "query": {
- "title": "Query",
+ "title": "URL or Query",
  "type": "string",
- "description": "Search query. Required for: extract_landscape, search_repos, extract_yc, extract_producthunt, package_trends, extract_finance.",
+ "description": "The main input for the tool. For extract_github: full GitHub URL. For extract_hackernews: HN URL or search URL. For extract_govcontracts/extract_sec_filings/extract_gdelt/extract_gebiz/extract_finance: company name, keyword, or ticker. For search_repos/package_trends/search_jobs: search keyword.",
  "editor": "textfield"
  },
  "max_length": {
  "title": "Max content length",
  "type": "integer",
- "description": "Maximum characters returned per result. Default: 6000.",
- "default": 6000,
+ "description": "Maximum characters returned. Default: 8000.",
+ "default": 8000,
  "minimum": 500,
  "maximum": 20000,
  "editor": "number"
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "freshcontext-mcp",
  "mcpName": "io.github.PrinceGabriel-lgtm/freshcontext",
- "version": "0.3.12",
+ "version": "0.3.14",
  "description": "Real-time web extraction MCP server with freshness timestamps for AI agents",
  "keywords": [
  "mcp",