reading-feed-mcp 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
# /reading-feed — Search your reading queue from Claude Code

Trigger: user says /reading-feed, "search my reading", "what did I save about X", "find that article I saved"

## What this skill does

Searches across all configured reading sources (Burn 451, Readwise, Karakeep, RSS, local markdown notes) using the `reading-feed-mcp` server.

## Setup

Install the MCP server once:
```bash
claude mcp add reading-feed -- npx reading-feed-mcp
```

Set env vars for whatever sources you use (at least one is required):
```bash
export BURN_MCP_TOKEN=bmcp_xxx            # burn451.cloud
export READWISE_TOKEN=xxx                 # readwise.io/access_token
export KARAKEEP_URL=https://xxx.app
export KARAKEEP_TOKEN=xxx
export READING_FEED_RSS="https://feed1,https://feed2"
export READING_FEED_MARKDOWN_DIR=~/notes
```

## When to use

- **Active research** — the user asks about a topic they might have saved context on
- **"That article I read..."** — the user vaguely remembers something
- **Before writing** — search their own saved knowledge before inventing from scratch
- **Morning context load** — pull recent items as session context

## Example prompts

### Search everything
```
> Find articles I saved about MCP authentication

Calls: search_reading(query="MCP authentication")
Result: combined results from every configured source
```

### Restrict to a specific source
```
> Show me the 10 most recent Readwise items

Calls: list_recent(source="readwise", limit=10)
```

### Check what's configured
```
> What reading sources do I have set up?

Calls: list_sources()
```

## Display format

Return results as a compact table grouped by source:

```markdown
### 🔥 From Burn (3 matches)
| Title | URL | Status | Summary |
|---|---|---|---|
| ... | ... | flame 2h | ... |

### 📚 From Readwise (2 matches)
...

### 📡 From RSS feeds (1 match)
...
```

Include the URL so the user can click through. Always include the summary — it's the whole point: a fast decision without opening each link.

## Tips

- If `search_reading` returns nothing, retry with broader, single-word keywords
- Burn results include `status` (flame = urgent, vault = kept, spark = read) — surface this
- For "what's new", use `list_recent` per source instead of `search_reading`
package/LICENSE ADDED

MIT License

Copyright (c) 2026 Fisher Hawking (@hawking520)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED

<div align="center">

# reading-feed-mcp

**Your reading queue, searchable from Claude Code. Stop re-googling what you already saved.**

[![npm version](https://img.shields.io/npm/v/reading-feed-mcp.svg)](https://www.npmjs.com/package/reading-feed-mcp)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![X: @hawking520](https://img.shields.io/twitter/follow/hawking520?style=social)](https://x.com/hawking520)

One MCP server. Five reading sources. Zero vendor lock-in.

</div>

---

## The problem

Every Claude Code session starts amnesiac. You saved 300 articles this year. You remember reading something about the exact pattern you're implementing right now. But it's in Readwise. Or Pocket. Or Karakeep. Or Burn. Or that folder of markdown notes.

So you re-google. Again.

[claude-code#27298](https://github.com/anthropics/claude-code/issues/27298) has 22 comments about this. Readwise built [their own MCP](https://github.com/readwiseio/readwise-mcp) — but it only works if you use Readwise. Everyone else is stuck.

## The fix

`reading-feed-mcp` is a **vendor-neutral MCP server** that makes your reading queue queryable from any MCP client (Claude Code, Claude Desktop, Cursor, Windsurf).

Configure one or more sources. Get three tools:

- `search_reading(query)` — full-text search across everything
- `list_recent(source)` — latest items from a source
- `list_sources()` — what you've wired up

That's it. No new account. No cloud. No lock-in.

## Supported sources

| Source | What you need | Status |
|---|---|---|
| **[Burn 451](https://burn451.cloud)** | `BURN_MCP_TOKEN` | ✅ |
| **[Readwise Reader](https://readwise.io)** | `READWISE_TOKEN` | ✅ |
| **[Karakeep](https://karakeep.app)** (self-hosted) | `KARAKEEP_URL` + `KARAKEEP_TOKEN` | ✅ |
| **RSS feeds** | `READING_FEED_RSS` (comma-separated URLs) | ✅ |
| **Local markdown** | `READING_FEED_MARKDOWN_DIR` | ✅ |
| Raindrop / Omnivore / Pocket export | — | 🔜 v0.2 |

Configure any combination. Search spans all of them.
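
Cross-source search is a simple fan-out: the server queries every configured adapter in parallel, flattens the results, and captures per-source failures instead of failing the whole call. A minimal sketch of that pattern (the adapter objects below are hypothetical stand-ins; the real wiring lives in `src/index.mjs`):

```javascript
// Fan-out search across configured source adapters (sketch).
// Each adapter exposes search({ query, limit }); a failing source
// yields an error record instead of rejecting the whole query.
async function searchAll(adapters, query, limit = 10) {
  const perSource = await Promise.all(
    Object.entries(adapters).map(async ([name, adapter]) => {
      try {
        return await adapter.search({ query, limit })
      } catch (e) {
        return [{ source: name, error: e.message }]
      }
    })
  )
  return perSource.flat()
}

// Hypothetical stand-in adapters, for illustration only:
const adapters = {
  markdown: { search: async () => [{ source: 'markdown', title: 'MCP notes' }] },
  rss: { search: async () => { throw new Error('feed unreachable') } },
}
```

One broken source (an expired token, an unreachable feed) therefore degrades to an inline error entry rather than an empty answer.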
49
+
50
+ ## Setup (60 seconds)
51
+
52
+ ### 1. Install
53
+
54
+ ```bash
55
+ claude mcp add reading-feed -- npx reading-feed-mcp@latest
56
+ ```
57
+
58
+ ### 2. Set sources via env vars
59
+
60
+ At least one required. Use whichever you actually have:
61
+
62
+ ```bash
63
+ # Burn 451 — get token at burn451.cloud/settings
64
+ export BURN_MCP_TOKEN=bmcp_xxx
65
+
66
+ # Readwise Reader — get token at readwise.io/access_token
67
+ export READWISE_TOKEN=xxx
68
+
69
+ # Karakeep (self-hosted)
70
+ export KARAKEEP_URL=https://karakeep.your-domain.com
71
+ export KARAKEEP_TOKEN=xxx
72
+
73
+ # RSS feeds (comma-separated)
74
+ export READING_FEED_RSS="https://simonwillison.net/atom/everything/,https://karpathy.bearblog.dev/feed/"
75
+
76
+ # Local markdown notes
77
+ export READING_FEED_MARKDOWN_DIR=~/notes
78
+ ```
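
At startup the server turns these variables into an enabled-sources map; `READING_FEED_RSS`, for instance, is split on commas with whitespace trimmed. A minimal sketch of that parsing, mirroring `getConfig()` in `src/index.mjs` (adapter wiring omitted):

```javascript
// Derive the enabled-sources config from env vars (sketch of
// getConfig() in src/index.mjs, without the adapter references).
function parseSources(env) {
  const cfg = {}
  if (env.BURN_MCP_TOKEN) cfg.burn = { token: env.BURN_MCP_TOKEN }
  if (env.READWISE_TOKEN) cfg.readwise = { token: env.READWISE_TOKEN }
  if (env.KARAKEEP_URL && env.KARAKEEP_TOKEN) {
    cfg.karakeep = { url: env.KARAKEEP_URL, token: env.KARAKEEP_TOKEN }
  }
  if (env.READING_FEED_RSS) {
    // comma-separated URLs, whitespace-tolerant
    cfg.rss = { feeds: env.READING_FEED_RSS.split(',').map(s => s.trim()) }
  }
  if (env.READING_FEED_MARKDOWN_DIR) {
    cfg.markdown = { dir: env.READING_FEED_MARKDOWN_DIR }
  }
  return cfg
}
```

With no variables set the map is empty, which is why the server answers "No sources configured" until you export at least one.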
### 3. Use it

In Claude Code, just ask:

```
> Find articles I saved about MCP authentication
```

Claude calls `search_reading(query="MCP authentication")` and returns matches across all your sources.

## Example output

```json
{
  "query": "MCP authentication",
  "total": 3,
  "results": [
    {
      "source": "burn",
      "title": "MCP Auth Patterns in 2026",
      "url": "https://example.com/mcp-auth",
      "status": "active",
      "summary": "Compares OAuth device flow, API keys, and JWT exchange for MCP servers..."
    },
    {
      "source": "readwise",
      "title": "Building Secure MCP Servers",
      "url": "https://example.com/secure-mcp",
      "status": "later",
      "summary": "Step-by-step guide to token exchange..."
    },
    {
      "source": "rss",
      "title": "Simon Willison: MCP Server Deep Dive",
      "url": "https://simonwillison.net/...",
      "feed": "Simon Willison's Weblog"
    }
  ]
}
```

## Why "vendor-neutral" matters

Most reading tools want you to live in their ecosystem. Readwise MCP is Readwise-only. Pocket is dead. Omnivore is dead. If your reading tool dies, your reading dies with it.

`reading-feed-mcp` is the opposite: **bring your own source.** Switch services, keep the MCP. We don't store your data. We don't host anything. We're a thin query layer.

## Best with Burn — but works with anything

[Burn 451](https://burn451.cloud) is recommended because:

- The 24h countdown surfaces urgency (search results carry `status: "active"` with time remaining)
- AI summaries are embedded in every article (search works on summaries, not just titles)
- Free tier, no credit card

But the whole point is that you don't have to use Burn. If you live in Readwise, use Readwise. Live in markdown files? Point the MCP at your folder.

## How it's built

- Zero-dependency Node.js (except `@modelcontextprotocol/sdk`)
- Each source is one file (<100 lines) — add a new one via PR in 20 minutes
- Stdio transport (the standard MCP pattern)
- Token caching (5-min TTL) so you don't thrash upstream APIs
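
The caching bullet works the way the Burn adapter's JWT exchange does: the bearer token is held in memory and only re-fetched once its TTL lapses. A stripped-down sketch of that pattern (`exchange` stands in for the real HTTP call, and the injectable clock exists only to make the sketch testable):

```javascript
// In-memory token cache with a TTL, as used by the Burn adapter
// (src/sources/burn.mjs). `exchange` stands in for the real POST
// to the token-exchange endpoint; `now` is injectable for testing.
function makeTokenCache(exchange, ttlMs = 270_000, now = Date.now) {
  let cached = null
  let validUntil = 0
  return async function getToken(apiToken) {
    if (cached && now() < validUntil) return cached // still fresh, no network call
    cached = await exchange(apiToken)               // refresh from upstream
    validUntil = now() + ttlMs
    return cached
  }
}
```

Repeated tool calls inside the TTL window reuse the cached token, so a burst of searches costs one exchange round-trip instead of one per call.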

## Part of the Burn ecosystem

| Tool | What it does |
|---|---|
| **reading-feed-mcp** (this) | Vendor-neutral MCP for any reading source |
| [burn-mcp-server](https://github.com/Fisher521/burn-mcp-server) | Burn-specific MCP (26 tools, full control) |
| [burn451 CLI](https://github.com/Fisher521/burn451-cli) | Terminal queue management |
| [burn-claude-skill](https://github.com/Fisher521/burn-claude-skill) | /burn command for Claude Code |
| [burn-daily-triage](https://github.com/Fisher521/burn-daily-triage) | Daily Claude Code routine template |
| [morning-brief](https://github.com/Fisher521/morning-brief) | Daily briefing CLI |
| [reading-digest](https://github.com/Fisher521/reading-digest) | Weekly digest generator |

## Contributing

Adding a new source adapter? Each one lives in `src/sources/<name>.mjs` and exports `search({...})` and `listRecent({...})`. See `src/sources/burn.mjs` for a complete example (~70 lines).
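
A contributed adapter only has to normalize its backend's items into the shared result shape. A hypothetical skeleton (`example.mjs`, its item fields, and the in-memory data are all made up for illustration; a real adapter would fetch from its API):

```javascript
// src/sources/example.mjs — hypothetical adapter skeleton.
// A real adapter would fetch from its backend; an in-memory array
// stands in here so the normalization contract is visible.
// (In the real file, prefix both functions with `export`.)
const FAKE_ITEMS = [
  { id: 1, name: 'MCP auth notes', link: 'https://example.com/a', added: '2026-01-02' },
  { id: 2, name: 'Rust io_uring', link: 'https://example.com/b', added: '2026-01-01' },
]

async function search({ query, limit = 20 }) {
  const q = (query || '').toLowerCase()
  return FAKE_ITEMS
    .filter(i => !q || i.name.toLowerCase().includes(q))
    .slice(0, limit)
    .map(i => ({
      source: 'example',   // source key shown in combined results
      id: i.id,
      title: i.name,
      url: i.link,
      saved_at: i.added,
      summary: '',
    }))
}

async function listRecent(cfg) {
  return search({ ...cfg, query: null })
}
```

Once the file exists and exports both functions, wiring it into `getConfig()` in `src/index.mjs` behind an env var makes it searchable alongside the others.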

## License

MIT
package/package.json ADDED

{
  "name": "reading-feed-mcp",
  "version": "0.1.0",
  "description": "Vendor-neutral MCP server: search your reading queue across Burn/Readwise/Karakeep/RSS/markdown from Claude Code or any MCP client.",
  "type": "module",
  "main": "src/index.mjs",
  "bin": {
    "reading-feed-mcp": "src/index.mjs"
  },
  "scripts": {
    "start": "node src/index.mjs"
  },
  "keywords": [
    "mcp",
    "mcp-server",
    "reading",
    "bookmarks",
    "claude",
    "claude-code",
    "anthropic",
    "readwise",
    "karakeep",
    "burn",
    "rss",
    "read-later",
    "knowledge-management",
    "ai-tools"
  ],
  "repository": {
    "type": "git",
    "url": "https://github.com/Fisher521/reading-feed-mcp.git"
  },
  "homepage": "https://github.com/Fisher521/reading-feed-mcp",
  "license": "MIT",
  "author": "hawking520 <hawking520@gmail.com> (https://x.com/hawking520)",
  "engines": {
    "node": ">=18"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0"
  }
}
package/src/index.mjs ADDED

#!/usr/bin/env node
// reading-feed-mcp — Vendor-neutral MCP server for reading queues
//
// Exposes 3 tools:
//   search_reading — full-text search across all configured sources
//   list_recent    — latest items from a specific source
//   list_sources   — which sources are configured

import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { CallToolRequestSchema, ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js'

import * as burn from './sources/burn.mjs'
import * as readwise from './sources/readwise.mjs'
import * as karakeep from './sources/karakeep.mjs'
import * as rss from './sources/rss.mjs'
import * as markdown from './sources/markdown.mjs'

// --- Build enabled-sources map from env ---
function getConfig() {
  const cfg = {}
  if (process.env.BURN_MCP_TOKEN) {
    cfg.burn = { token: process.env.BURN_MCP_TOKEN, adapter: burn }
  }
  if (process.env.READWISE_TOKEN) {
    cfg.readwise = { token: process.env.READWISE_TOKEN, adapter: readwise }
  }
  if (process.env.KARAKEEP_URL && process.env.KARAKEEP_TOKEN) {
    cfg.karakeep = { url: process.env.KARAKEEP_URL, token: process.env.KARAKEEP_TOKEN, adapter: karakeep }
  }
  if (process.env.READING_FEED_RSS) {
    cfg.rss = { feeds: process.env.READING_FEED_RSS.split(',').map(s => s.trim()), adapter: rss }
  }
  if (process.env.READING_FEED_MARKDOWN_DIR) {
    cfg.markdown = { dir: process.env.READING_FEED_MARKDOWN_DIR, adapter: markdown }
  }
  return cfg
}

const server = new Server(
  { name: 'reading-feed-mcp', version: '0.1.0' },
  { capabilities: { tools: {} } }
)

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'search_reading',
      description: 'Full-text search across all configured reading sources (Burn/Readwise/Karakeep/RSS/markdown). Returns matching articles with title, URL, source, summary.',
      inputSchema: {
        type: 'object',
        properties: {
          query: { type: 'string', description: 'Search query (keywords)' },
          sources: { type: 'array', items: { type: 'string' }, description: 'Optional: restrict to specific sources (burn, readwise, karakeep, rss, markdown)' },
          limit: { type: 'number', description: 'Max results per source (default 10)', default: 10 },
        },
        required: ['query'],
      },
    },
    {
      name: 'list_recent',
      description: 'List the most recent articles from a specific source. Use this to see what is new in your reading queue.',
      inputSchema: {
        type: 'object',
        properties: {
          source: { type: 'string', description: 'Source name (burn, readwise, karakeep, rss, markdown)' },
          limit: { type: 'number', description: 'Max results (default 20)', default: 20 },
        },
        required: ['source'],
      },
    },
    {
      name: 'list_sources',
      description: 'List all configured reading sources. Use this to know what sources are available.',
      inputSchema: { type: 'object', properties: {} },
    },
  ],
}))

server.setRequestHandler(CallToolRequestSchema, async (req) => {
  const { name, arguments: args = {} } = req.params
  const cfg = getConfig()

  if (name === 'list_sources') {
    return {
      content: [{ type: 'text', text: JSON.stringify({ configured: Object.keys(cfg), available_env_vars: ['BURN_MCP_TOKEN', 'READWISE_TOKEN', 'KARAKEEP_URL+KARAKEEP_TOKEN', 'READING_FEED_RSS', 'READING_FEED_MARKDOWN_DIR'] }, null, 2) }],
    }
  }

  if (name === 'search_reading') {
    const { query, sources, limit = 10 } = args
    const activeSources = sources && sources.length ? sources.filter(s => cfg[s]) : Object.keys(cfg)
    if (activeSources.length === 0) {
      return { content: [{ type: 'text', text: 'No sources configured. Set env vars (see list_sources).' }] }
    }
    const results = await Promise.all(activeSources.map(async s => {
      try {
        return await cfg[s].adapter.search({ ...cfg[s], query, limit })
      } catch (e) {
        return [{ source: s, error: e.message }]
      }
    }))
    const flat = results.flat()
    return { content: [{ type: 'text', text: JSON.stringify({ query, total: flat.length, results: flat }, null, 2) }] }
  }

  if (name === 'list_recent') {
    const { source, limit = 20 } = args
    if (!cfg[source]) {
      return { content: [{ type: 'text', text: `Source "${source}" not configured. Available: ${Object.keys(cfg).join(', ') || 'none'}` }] }
    }
    try {
      const items = await cfg[source].adapter.listRecent({ ...cfg[source], limit })
      return { content: [{ type: 'text', text: JSON.stringify({ source, count: items.length, items }, null, 2) }] }
    } catch (e) {
      return { content: [{ type: 'text', text: `Error: ${e.message}` }] }
    }
  }

  return { content: [{ type: 'text', text: `Unknown tool: ${name}` }], isError: true }
})

const transport = new StdioServerTransport()
await server.connect(transport)
console.error('[reading-feed-mcp] ready — stdio')
package/src/sources/burn.mjs ADDED

// Burn 451 source — https://burn451.cloud
// Requires BURN_MCP_TOKEN (burn451.cloud/settings → MCP Server)

const EXCHANGE_URL = 'https://api.burn451.cloud/api/mcp-exchange'
const SUPABASE_URL = 'https://juqtxylquemiuvvmgbej.supabase.co'
const ANON_KEY = 'sb_publishable_reVgmmCC6ndIo6jFRMM2LQ_wujj5FrO'

let cachedJwt = null
let cachedUntil = 0

async function getJwt(token) {
  if (cachedJwt && Date.now() < cachedUntil) return cachedJwt
  const resp = await fetch(EXCHANGE_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ token }),
  })
  if (!resp.ok) throw new Error(`Burn token exchange failed: ${resp.status}`)
  const { access_token } = await resp.json()
  cachedJwt = access_token
  cachedUntil = Date.now() + 270000 // 4.5 min
  return access_token
}

export async function search({ token, query, limit = 20 }) {
  if (!token) throw new Error('BURN_MCP_TOKEN required')
  const jwt = await getJwt(token)
  const headers = { apikey: ANON_KEY, Authorization: `Bearer ${jwt}` }

  let path
  if (query) {
    const q = encodeURIComponent(`*${query}*`)
    path = `/bookmarks?or=(title.ilike.${q},url.ilike.${q})&order=created_at.desc&limit=${limit}&select=id,title,url,status,created_at,countdown_expires_at,content_metadata`
  } else {
    path = `/bookmarks?order=created_at.desc&limit=${limit}&select=id,title,url,status,created_at,countdown_expires_at,content_metadata`
  }

  const resp = await fetch(`${SUPABASE_URL}/rest/v1${path}`, { headers })
  if (!resp.ok) throw new Error(`Burn API ${resp.status}`)
  const items = await resp.json()
  return items.map(toResult)
}

export async function listRecent({ token, status = 'active', limit = 20 }) {
  if (!token) throw new Error('BURN_MCP_TOKEN required')
  const jwt = await getJwt(token)
  const headers = { apikey: ANON_KEY, Authorization: `Bearer ${jwt}` }
  const path = `/bookmarks?status=eq.${status}&order=created_at.desc&limit=${limit}&select=id,title,url,status,created_at,countdown_expires_at,content_metadata`
  const resp = await fetch(`${SUPABASE_URL}/rest/v1${path}`, { headers })
  if (!resp.ok) throw new Error(`Burn API ${resp.status}`)
  const items = await resp.json()
  return items.map(toResult)
}

// Normalize a Burn bookmark row into the shared result shape
function toResult(b) {
  return {
    source: 'burn',
    id: b.id,
    title: b.title || '(untitled)',
    url: b.url,
    status: b.status,
    saved_at: b.created_at,
    expires_at: b.countdown_expires_at,
    summary: b.content_metadata?.ai_summary_en || b.content_metadata?.ai_summary || '',
  }
}
package/src/sources/karakeep.mjs ADDED

// Karakeep source (self-hosted bookmarks, formerly Hoarder) — https://karakeep.app
// Requires KARAKEEP_URL + KARAKEEP_TOKEN

export async function search({ url, token, query, limit = 20 }) {
  if (!url || !token) throw new Error('KARAKEEP_URL + KARAKEEP_TOKEN required')
  const baseUrl = url.replace(/\/$/, '')
  const endpoint = query
    ? `${baseUrl}/api/v1/bookmarks/search?q=${encodeURIComponent(query)}&limit=${limit}`
    : `${baseUrl}/api/v1/bookmarks?limit=${limit}`
  const resp = await fetch(endpoint, {
    headers: { Authorization: `Bearer ${token}` },
  })
  if (!resp.ok) throw new Error(`Karakeep API ${resp.status}`)
  const data = await resp.json()
  // The API may return a bare array or an object wrapper; never fall back
  // to the raw object itself, which has no .slice/.map
  const items = Array.isArray(data) ? data : (data.bookmarks || data.results || [])
  return items.slice(0, limit).map(b => ({
    source: 'karakeep',
    id: b.id,
    title: b.title || b.content?.title || '(untitled)',
    url: b.content?.url || b.url || '',
    status: b.archived ? 'archived' : 'active',
    saved_at: b.createdAt,
    summary: b.note || b.content?.description || b.summary || '',
    tags: (b.tags || []).map(t => t.name || t),
  }))
}

export async function listRecent({ url, token, limit = 20 }) {
  return search({ url, token, query: null, limit })
}
package/src/sources/markdown.mjs ADDED

// Local markdown source — read URLs from a local markdown notes directory
// Requires: dir (path to directory with .md files containing URLs)

import { readFileSync, readdirSync, statSync } from 'node:fs'
import { join } from 'node:path'

export async function search({ dir, query, limit = 20 }) {
  if (!dir) throw new Error('markdown.dir required')

  const files = walkDir(dir).filter(f => f.endsWith('.md'))
  const items = []

  for (const file of files) {
    try {
      const content = readFileSync(file, 'utf8')
      const stat = statSync(file)
      const urls = content.match(/https?:\/\/[^\s)\]]+/g) || []
      // Extract title from the first heading, or fall back to the filename
      const titleMatch = content.match(/^#\s+(.+)$/m)
      const title = titleMatch ? titleMatch[1] : file.split('/').pop()

      for (const url of urls) {
        items.push({
          source: 'markdown',
          title,
          url,
          file: file.replace(dir, '').replace(/^\//, ''),
          saved_at: stat.mtime.toISOString(),
          summary: content.slice(0, 500),
        })
      }
    } catch {} // skip unreadable files
  }

  const q = (query || '').toLowerCase()
  const filtered = q
    ? items.filter(i => i.title.toLowerCase().includes(q) || i.summary.toLowerCase().includes(q) || i.url.toLowerCase().includes(q))
    : items
  return filtered
    .sort((a, b) => new Date(b.saved_at || 0) - new Date(a.saved_at || 0))
    .slice(0, limit)
}

export async function listRecent(cfg) {
  return search({ ...cfg, query: null })
}

// Recursively collect all file paths under dir
function walkDir(dir, files = []) {
  try {
    for (const entry of readdirSync(dir)) {
      const full = join(dir, entry)
      const s = statSync(full)
      if (s.isDirectory()) walkDir(full, files)
      else files.push(full)
    }
  } catch {} // ignore unreadable directories
  return files
}
package/src/sources/readwise.mjs ADDED

// Readwise source — https://readwise.io
// Requires READWISE_TOKEN (readwise.io/access_token)

const API_BASE = 'https://readwise.io/api/v3/list/'

export async function search({ token, query, limit = 20 }) {
  if (!token) throw new Error('READWISE_TOKEN required')
  // Readwise Reader API doesn't offer full-text search; fetch a page and filter client-side
  const url = `${API_BASE}?pageSize=50`
  const resp = await fetch(url, { headers: { Authorization: `Token ${token}` } })
  if (!resp.ok) throw new Error(`Readwise API ${resp.status}`)
  const data = await resp.json()
  const docs = data.results || []
  const q = (query || '').toLowerCase()
  const filtered = q
    ? docs.filter(d => (d.title || '').toLowerCase().includes(q) || (d.summary || '').toLowerCase().includes(q))
    : docs
  return filtered.slice(0, limit).map(toResult)
}

export async function listRecent({ token, location = 'later', limit = 20 }) {
  if (!token) throw new Error('READWISE_TOKEN required')
  const url = `${API_BASE}?location=${encodeURIComponent(location)}&pageSize=${limit}`
  const resp = await fetch(url, { headers: { Authorization: `Token ${token}` } })
  if (!resp.ok) throw new Error(`Readwise API ${resp.status}`)
  const data = await resp.json()
  return (data.results || []).map(toResult)
}

// Normalize a Reader document into the shared result shape
function toResult(d) {
  return {
    source: 'readwise',
    id: d.id,
    title: d.title || '(untitled)',
    url: d.source_url || d.url || '',
    status: d.location,
    saved_at: d.saved_at,
    summary: (d.summary || '').slice(0, 500),
  }
}
package/src/sources/rss.mjs ADDED

// RSS source — pull from any RSS/Atom feed URL
// Requires: feeds array (strings)

export async function search({ feeds = [], query, limit = 20 }) {
  if (!Array.isArray(feeds) || feeds.length === 0) {
    throw new Error('rss requires a feeds array')
  }
  const all = []
  await Promise.all(feeds.map(async feed => {
    try {
      const resp = await fetch(feed, { headers: { 'User-Agent': 'reading-feed-mcp/0.1' } })
      if (!resp.ok) return
      const xml = await resp.text()
      const feedTitle = extractTag(xml.slice(0, 4000), 'title') || new URL(feed).hostname
      for (const m of xml.matchAll(/<item[\s\S]*?<\/item>|<entry[\s\S]*?<\/entry>/gi)) {
        const block = m[0]
        const title = extractTag(block, 'title') || '(untitled)'
        // Atom uses <link href="...">, RSS uses <link>...</link>
        const linkMatch = block.match(/<link[^>]*href=["']([^"']+)["']/i) || block.match(/<link>([^<]+)<\/link>/i)
        const link = linkMatch ? linkMatch[1] : ''
        const pubDate = extractTag(block, 'pubDate') || extractTag(block, 'updated') || extractTag(block, 'published')
        const summary = stripHtml(extractTag(block, 'description') || extractTag(block, 'summary') || extractTag(block, 'content') || '')
        all.push({
          source: 'rss',
          title,
          url: link,
          feed: feedTitle,
          saved_at: pubDate,
          summary: summary.slice(0, 500),
        })
      }
    } catch {} // skip unreachable or malformed feeds
  }))

  const q = (query || '').toLowerCase()
  const filtered = q
    ? all.filter(i => i.title.toLowerCase().includes(q) || i.summary.toLowerCase().includes(q))
    : all
  return filtered
    .sort((a, b) => new Date(b.saved_at || 0) - new Date(a.saved_at || 0))
    .slice(0, limit)
}

export async function listRecent(cfg) {
  return search({ ...cfg, query: null })
}

// Pull the inner text of the first <tag>…</tag>, unwrapping CDATA
function extractTag(s, tag) {
  const m = s.match(new RegExp(`<${tag}(?:\\s[^>]*)?>([\\s\\S]*?)<\\/${tag}>`, 'i'))
  if (!m) return ''
  return m[1].replace(/<!\[CDATA\[([\s\S]*?)\]\]>/g, '$1').trim()
}

function stripHtml(s) {
  return s.replace(/<[^>]+>/g, '').replace(/\s+/g, ' ').trim()
}