roam-research-mcp 0.35.1 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +15 -19
- package/build/Roam_Markdown_Cheatsheet.md +62 -4
- package/build/config/environment.js +2 -3
- package/build/index.js +4 -1
- package/build/markdown-utils.js +5 -2
- package/build/search/datomic-search.js +40 -1
- package/build/search/tag-search.js +72 -12
- package/build/search/text-search.js +51 -12
- package/build/server/roam-server.js +24 -41
- package/build/tools/operations/pages.js +29 -16
- package/build/tools/schemas.js +65 -12
- package/build/tools/tool-handlers.js +27 -12
- package/package.json +4 -4
package/README.md
CHANGED
@@ -7,18 +7,17 @@
  [](https://opensource.org/licenses/MIT)
  [](https://github.com/2b3pro/roam-research-mcp/blob/main/LICENSE)

- A Model Context Protocol (MCP) server that provides comprehensive access to Roam Research's API functionality. This server enables AI assistants like Claude to interact with your Roam Research graph through a standardized interface. It supports standard input/output (stdio)
+ A Model Context Protocol (MCP) server that provides comprehensive access to Roam Research's API functionality. This server enables AI assistants like Claude to interact with your Roam Research graph through a standardized interface. It supports standard input/output (stdio) and HTTP Stream communication. (A WORK-IN-PROGRESS, personal project not officially endorsed by Roam Research)

  <a href="https://glama.ai/mcp/servers/fzfznyaflu"><img width="380" height="200" src="https://glama.ai/mcp/servers/fzfznyaflu/badge" alt="Roam Research MCP server" /></a>
  <a href="https://mseep.ai/app/2b3pro-roam-research-mcp"><img width="380" height="200" src="https://mseep.net/pr/2b3pro-roam-research-mcp-badge.png" alt="MseeP.ai Security Assessment Badge" /></a>

  ## Installation and Usage

- This MCP server supports
+ This MCP server supports two primary communication methods:

  1. **Stdio (Standard Input/Output):** Ideal for local inter-process communication, command-line tools, and direct integration with applications running on the same machine. This is the default communication method when running the server directly.
  2. **HTTP Stream:** Provides network-based communication, suitable for web-based clients, remote applications, or scenarios requiring real-time updates over HTTP. The HTTP Stream endpoint runs on port `8088` by default.
- 3. **SSE (Server-Sent Events):** A transport for legacy clients that require SSE. The SSE endpoint runs on port `8087` by default. (NOTE: ⚠️ DEPRECATED: The SSE Transport has been deprecated as of MCP specification version 2025-03-26. HTTP Stream Transport preferred.)

  ### Running with Stdio

@@ -41,16 +40,16 @@ npm start

  ### Running with HTTP Stream

- To run the server with HTTP Stream
+ To run the server with HTTP Stream support, you can either:

- 1. **Use the default ports:** Run `npm start` after building (as shown above). The server will automatically listen on port `8088` for HTTP Stream
- 2. **Specify custom ports:** Set the `HTTP_STREAM_PORT`
+ 1. **Use the default ports:** Run `npm start` after building (as shown above). The server will automatically listen on port `8088` for HTTP Stream.
+ 2. **Specify custom ports:** Set the `HTTP_STREAM_PORT` environment variable before starting the server.

  ```bash
- HTTP_STREAM_PORT=9000
+ HTTP_STREAM_PORT=9000 npm start
  ```

- Or, if using a `.env` file, add `HTTP_STREAM_PORT=9000`
+ Or, if using a `.env` file, add `HTTP_STREAM_PORT=9000` to it.

  ## Docker

@@ -66,16 +65,15 @@ docker build -t roam-research-mcp .

  ### Run the Docker Container

- To run the Docker container and map the necessary ports, you must also provide the required environment variables. Use the `-e` flag to pass `ROAM_API_TOKEN`, `ROAM_GRAPH_NAME`, and optionally `MEMORIES_TAG
+ To run the Docker container and map the necessary ports, you must also provide the required environment variables. Use the `-e` flag to pass `ROAM_API_TOKEN`, `ROAM_GRAPH_NAME`, and optionally `MEMORIES_TAG` and `HTTP_STREAM_PORT`:

  ```bash
- docker run -p 3000:3000 -p 8088:8088
+ docker run -p 3000:3000 -p 8088:8088 \
  -e ROAM_API_TOKEN="your-api-token" \
  -e ROAM_GRAPH_NAME="your-graph-name" \
  -e MEMORIES_TAG="#[[LLM/Memories]]" \
  -e CUSTOM_INSTRUCTIONS_PATH="/path/to/your/custom_instructions_file.md" \
  -e HTTP_STREAM_PORT="8088" \
- -e SSE_PORT="8087" \
  roam-research-mcp
  ```

@@ -117,19 +115,19 @@ The server provides powerful tools for interacting with Roam Research:
  6. `roam_create_outline`: Add a structured outline to an existing page or block, with support for `children_view_type`. Best for simpler, sequential outlines. For complex nesting (e.g., tables), consider `roam_process_batch_actions`. If `page_title_uid` and `block_text_uid` are both blank, content defaults to the daily page. (Internally uses `roam_process_batch_actions`.)
  7. `roam_search_block_refs`: Search for block references within a page or across the entire graph.
  8. `roam_search_hierarchy`: Search for parent or child blocks in the block hierarchy.
- 9. `roam_find_pages_modified_today`: Find pages that have been modified today (since midnight).
- 10. `roam_search_by_text`: Search for blocks containing specific text.
+ 9. `roam_find_pages_modified_today`: Find pages that have been modified today (since midnight), with pagination and sorting options.
+ 10. `roam_search_by_text`: Search for blocks containing specific text across all pages or within a specific page. This tool supports pagination via the `limit` and `offset` parameters.
  11. `roam_search_by_status`: Search for blocks with a specific status (TODO/DONE) across all pages or within a specific page.
  12. `roam_search_by_date`: Search for blocks or pages based on creation or modification dates.
- 13. `roam_search_for_tag`: Search for blocks containing a specific tag and optionally filter by blocks that also contain another tag nearby.
+ 13. `roam_search_for_tag`: Search for blocks containing a specific tag and optionally filter by blocks that also contain another tag nearby or exclude blocks with a specific tag. This tool supports pagination via the `limit` and `offset` parameters.
  14. `roam_remember`: Add a memory or piece of information to remember. (Internally uses `roam_process_batch_actions`.)
  15. `roam_recall`: Retrieve all stored memories.
- 16. `roam_datomic_query`: Execute a custom Datomic query on the Roam graph for advanced data retrieval beyond the available search tools.
+ 16. `roam_datomic_query`: Execute a custom Datomic query on the Roam graph for advanced data retrieval beyond the available search tools. Now supports client-side regex filtering for enhanced post-query processing. Optimal for complex filtering (including regex), highly complex boolean logic, arbitrary sorting criteria, and proximity search.
  17. `roam_markdown_cheatsheet`: Provides the content of the Roam Markdown Cheatsheet resource, optionally concatenated with custom instructions if `CUSTOM_INSTRUCTIONS_PATH` environment variable is set.
  18. `roam_process_batch_actions`: Execute a sequence of low-level block actions (create, update, move, delete) in a single, non-transactional batch. Provides granular control for complex nesting like tables. (Note: For actions on existing blocks or within a specific page context, it is often necessary to first obtain valid page or block UIDs using tools like `roam_fetch_page_by_title`.)

  **Deprecated Tools**:
- The following tools have been deprecated as of `
+ The following tools have been deprecated as of `v0.36.2` in favor of the more powerful and flexible `roam_process_batch_actions`:

  - `roam_create_block`: Use `roam_process_batch_actions` with the `create-block` action.
  - `roam_update_block`: Use `roam_process_batch_actions` with the `update-block` action.

@@ -246,7 +244,6 @@ This demonstrates moving a block from one location to another and simultaneously
  MEMORIES_TAG='#[[LLM/Memories]]'
  CUSTOM_INSTRUCTIONS_PATH='/path/to/your/custom_instructions_file.md'
  HTTP_STREAM_PORT=8088 # Or your desired port for HTTP Stream communication
- SSE_PORT=8087 # Or your desired port for SSE communication
  ```

  Option 2: Using MCP settings (Alternative method)

@@ -266,8 +263,7 @@ This demonstrates moving a block from one location to another and simultaneously
  "ROAM_GRAPH_NAME": "your-graph-name",
  "MEMORIES_TAG": "#[[LLM/Memories]]",
  "CUSTOM_INSTRUCTIONS_PATH": "/path/to/your/custom_instructions_file.md",
- "HTTP_STREAM_PORT": "8088"
- "SSE_PORT": "8087"
+ "HTTP_STREAM_PORT": "8088"
  }
  }
  }
package/build/Roam_Markdown_Cheatsheet.md
CHANGED

@@ -113,12 +113,70 @@ The provided markdown structure represents a Roam Research Kanban board. It starts
  This markdown structure allows embedding custom HTML or other content using Hiccup syntax. The `:hiccup` keyword is followed by a Clojure-like vector defining the HTML elements and their attributes in one block. This provides a powerful way to inject dynamic or custom components into your Roam graph. Example: `:hiccup [:iframe {:width "600" :height "400" :src "https://www.example.com"}]`

  ## Specific notes and preferences concerning my Roam Research graph
- ###
+ ### When creating new pages

-
+ - After creating a new page, ensure you add a new block on the daily page that references it if one doesn't already exist: `Created page: [[…]]`

- ###
+ ### What To Tag

-
+ - Always think in terms of Future Me: How will I find this gem of information in this block in the future? What about this core idea might I be looking for? Aim for high relevance and helpfulness.
+ - My notes in Roam are interconnected via wrapped terms/expressions [[…]] and hashtags (#…). The general salience guidelines are as follows:
+
+ - What greater domain would this core idea be best categorization?
+ - If block is parent of children blocks, can a word/phrase in the parent be wrapped in brackets.
+ - If not, could a hashtag be appended to the block? (This is like a "please see more"-on-this-subject categorization).
+ - Wrap any proper names—no titles—(Dr. [[Werner Erhard]]), organizations, abbreviations e.g. `[NASA]([[National Aeronautics and Space Administration (NASA)]])`
+ - Any synonyms or multi-worded phrases of already established categories/pages, e.g. `[frameworks for making quality decisions]([[decision-making frameworks]])`
+
+ #### Advanced tagging methodology combining [[zettelkasten]] principles with serendipity engineering for maximum intellectual discovery potential. #[[knowledge management]] #[[tagging methodology]]
+
+ - # Core Philosophy
+ - Tag for **intellectual collision** and **future conversation**, not just categorization. Every tag should maximize the potential for unexpected discoveries and cross-domain insights.
+ - Traditional academic categorization creates silos; serendipity-engineered tagging creates **conceptual magnets** that attract surprising connections.
+ - ## Zettelkasten-Informed Principles
+ - **Atomic Thought Units**: Each tagged concept should function as a standalone intellectual object capable of connecting to ANY other domain
+ - **Conversation-Driven Linking**: Tag by the intellectual dialogues and debates a concept could inform, not just its subject matter
+ - **Temporal Intellectual Journey**: Tag for different phases of understanding - beginner discoveries, expert refinements, teaching moments
+ - **Intellectual Genealogy**: Create concept lineages that trace how ideas evolve and connect across domains
+ - ## Serendipity Engineering Techniques
+ - **Conceptual Collision Matrix**: Force unlikely intellectual meetings by tagging across disparate domains
+ - Example: #[[parenting techniques]] + [[cognitive engineering]] = developing children's meta-cognitive awareness
+ - Example: #[[cooking methods]] + [[cultural rotation]] = how different cultures balance flavors → balancing competing AI instructions
+ - **Anti-Obvious Tagging**: Deliberately tag against natural categorization to maximize surprise
+ - Tag by **structural similarities** rather than surface content: #[[has feedback loops]], #[[requires calibration]], #[[exhibits emergence]]
+ - Connect concepts through **metaphorical bridges**: meditation techniques + bias detection (both about noticing the unnoticed)
+ - **Question Cascade Strategy**: Tag by what questions a concept could answer across unexpected domains
+ - Instead of "this is about X," ask "what problems could this solve that I haven't thought of yet?"
+ - Examples: #[[how do experts prevent tunnel vision?]], #[[what creates cognitive flexibility?]], #[[how do systems self-correct?]]
+ - **Future Collision Potential**: Tag with temporal discovery triggers
+ - [[will be relevant in 5 years]], #[[connects to unborn projects]], #[[solves problems I don't know I have yet]]
+ - ## Practical Implementation
+ - **The Serendipity Test**: Before tagging, ask "Could this concept surprise me by connecting to something completely unrelated?"
+ - **Cross-Domain Bridge Tags**: Use structural rather than content-based categorization
+ - [[flow dynamics]] - connects fluid mechanics, music, conversation, AI prompt sequences
+ - [[calibration processes]] - bridges instrument tuning, relationships, AI parameters, personal habits
+ - [[perspective switching]] - connects photography, negotiation, cultural analysis, prompt engineering
+ - **Problem-Solution Pairing**: Tag by the problems solved rather than methods used
+ - [[breaking cognitive constraints]], #[[expanding solution spaces]], #[[preventing expert blindness]]
+ - **Intellectual State Triggers**: Tag for when you'd actually search based on mental/emotional context
+ - [[feeling stuck in patterns]], #[[needing fresh perspective]], #[[frustrated with conventional approaches]]
+ - ## Tag Maintenance Strategy
+ - **Regular Collision Audits**: Periodically review tags to identify missed connection opportunities
+ - **Surprise Discovery Log**: Track when unexpected tag connections lead to insights, then engineer more of those patterns
+ - **Question Evolution**: Update question-based tags as your intellectual interests and problems evolve
+ - **Cross-Reference Integration**: Ensure new tagging approach complements existing page reference `[[…]]` and hashtag `#[[…]]` conventions
+
+ ### How to Tag (nuances)
+
+ - Consider that when linking a parent block, all children blocks will also accompany future search results. Linking a childblock will only link the child block, not siblings.
+ - The convention for tags is lower case unless proper nouns. When tagging might affect the capitalization within a sentence, use aliases. e.g. `[Cognitive biases]([[cognitive biases]]) of the person are listed…"
+ - Can longer phrases and expressions be aliased to existing notes? `How one can leverage [[cognitive dissonance]] to [hypnotically influence others using language]([[hypnotic language techniques]])`
+ - Reformat quotes in this format structure: `<quote> —[[<Quoted Person>]] #quote <hashtags>` with 2-3 additional relevant hashtag links
+ - Anything that needs follow-up, further research, preface with `{{[[TODO]]}}`
+ - Scheduled review: Tag future dates in ordinal format so that it can come up for review when relevant, e.g. `[[For review]]: [[August 12th, 2026]]` Optionally, prepend with label ("Deadline", "For review", "Approved", "Pending", "Deferred", "Postponed until"). Place on parent block if relevant (if children blocks are related).
+
+ ### Constraints
+
+ - Don't overtag.

  ⭐️📋 END (Cheat Sheet LOADED) < < < 📋⭐️

package/build/config/environment.js
CHANGED

@@ -29,7 +29,7 @@ if (!API_TOKEN || !GRAPH_NAME) {
  ' "mcpServers": {\n' +
  ' "roam-research": {\n' +
  ' "command": "node",\n' +
- ' "args": ["/path/to/roam-research/build/index.js"],\n' +
+ ' "args": ["/path/to/roam-research-mcp/build/index.js"],\n' +
  ' "env": {\n' +
  ' "ROAM_API_TOKEN": "your-api-token",\n' +
  ' "ROAM_GRAPH_NAME": "your-graph-name"\n' +

@@ -42,6 +42,5 @@ if (!API_TOKEN || !GRAPH_NAME) {
  ' ROAM_GRAPH_NAME=your-graph-name');
  }
  const HTTP_STREAM_PORT = process.env.HTTP_STREAM_PORT || '8088'; // Default to 8088
- const SSE_PORT = process.env.SSE_PORT || '8087'; // Default to 8087
  const CORS_ORIGIN = process.env.CORS_ORIGIN || 'http://localhost:5678';
- export { API_TOKEN, GRAPH_NAME, HTTP_STREAM_PORT,
+ export { API_TOKEN, GRAPH_NAME, HTTP_STREAM_PORT, CORS_ORIGIN };
package/build/index.js
CHANGED
package/build/markdown-utils.js
CHANGED
@@ -1,3 +1,4 @@
+ import { randomBytes } from 'crypto';
  /**
   * Check if text has a traditional markdown table
   */

@@ -240,11 +241,13 @@ function parseTableRows(lines) {
  return tableNodes;
  }
  function generateBlockUid() {
-     // Generate a random string of 9 characters (Roam's format)
+     // Generate a random string of 9 characters (Roam's format) using crypto for better randomness
      const chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_';
+     // 64 chars, which divides 256 evenly (256 = 64 * 4), so simple modulo is unbiased
+     const bytes = randomBytes(9);
      let uid = '';
      for (let i = 0; i < 9; i++) {
-         uid += chars
+         uid += chars[bytes[i] % 64];
      }
      return uid;
  }
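For reference, here is a self-contained sketch of the crypto-backed UID generation shown in the hunk above (the alphabet and 9-character length come from the diff; the standalone function name is ours):

```js
import { randomBytes } from 'crypto';

// Same 64-character alphabet as generateBlockUid() above; 256 % 64 === 0, so bytes[i] % 64 is unbiased.
const chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_';

function generateBlockUidSketch() {
  const bytes = randomBytes(9);
  let uid = '';
  for (let i = 0; i < 9; i++) {
    uid += chars[bytes[i] % 64];
  }
  return uid; // e.g. "aZ3_k9Qx-" (9 characters, Roam-style)
}

console.log(generateBlockUidSketch());
```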
package/build/search/datomic-search.js
CHANGED

@@ -8,7 +8,46 @@ export class DatomicSearchHandler extends BaseSearchHandler {
  async execute() {
  try {
  // Execute the datomic query using the Roam API
-
+ let results = await q(this.graph, this.params.query, this.params.inputs || []);
+ if (this.params.regexFilter) {
+ let regex;
+ try {
+ regex = new RegExp(this.params.regexFilter, this.params.regexFlags);
+ }
+ catch (e) {
+ return {
+ success: false,
+ matches: [],
+ message: `Invalid regex filter provided: ${e instanceof Error ? e.message : String(e)}`
+ };
+ }
+ results = results.filter(result => {
+ if (this.params.regexTargetField && this.params.regexTargetField.length > 0) {
+ for (const field of this.params.regexTargetField) {
+ // Access nested fields if path is provided (e.g., "prop.nested")
+ const fieldPath = field.split('.');
+ let value = result;
+ for (const part of fieldPath) {
+ if (typeof value === 'object' && value !== null && part in value) {
+ value = value[part];
+ }
+ else {
+ value = undefined; // Field not found
+ break;
+ }
+ }
+ if (typeof value === 'string' && regex.test(value)) {
+ return true;
+ }
+ }
+ return false;
+ }
+ else {
+ // Default to stringifying the whole result if no target field is specified
+ return regex.test(JSON.stringify(result));
+ }
+ });
+ }
  return {
  success: true,
  matches: results.map(result => ({
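For illustration, a hypothetical argument object for `roam_datomic_query` using the new client-side regex filtering (parameter names come from the schema changes later in this diff; the query itself is only an example):

```js
// Hypothetical roam_datomic_query arguments; regexFilter/regexFlags are applied after the query runs,
// against JSON.stringify(result) when no regexTargetField is given (as in the handler above).
const datomicArgs = {
  query: '[:find ?block-string :where [?block :block/string ?block-string]]',
  regexFilter: 'hypnosis|trance',
  regexFlags: 'i',
};
```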
package/build/search/tag-search.js
CHANGED

@@ -8,42 +8,102 @@ export class TagSearchHandler extends BaseSearchHandler {
  this.params = params;
  }
  async execute() {
- const { primary_tag, page_title_uid, near_tag, exclude_tag } = this.params;
+ const { primary_tag, page_title_uid, near_tag, exclude_tag, case_sensitive = false, limit = -1, offset = 0 } = this.params;
+ let nearTagUid;
+ if (near_tag) {
+ nearTagUid = await SearchUtils.findPageByTitleOrUid(this.graph, near_tag);
+ if (!nearTagUid) {
+ return {
+ success: false,
+ matches: [],
+ message: `Near tag "${near_tag}" not found.`,
+ total_count: 0
+ };
+ }
+ }
+ let excludeTagUid;
+ if (exclude_tag) {
+ excludeTagUid = await SearchUtils.findPageByTitleOrUid(this.graph, exclude_tag);
+ if (!excludeTagUid) {
+ return {
+ success: false,
+ matches: [],
+ message: `Exclude tag "${exclude_tag}" not found.`,
+ total_count: 0
+ };
+ }
+ }
  // Get target page UID if provided for scoped search
  let targetPageUid;
  if (page_title_uid) {
  targetPageUid = await SearchUtils.findPageByTitleOrUid(this.graph, page_title_uid);
  }
-
-
+ const searchTags = [];
+ if (case_sensitive) {
+ searchTags.push(primary_tag);
+ }
+ else {
+ searchTags.push(primary_tag);
+ searchTags.push(primary_tag.charAt(0).toUpperCase() + primary_tag.slice(1));
+ searchTags.push(primary_tag.toUpperCase());
+ searchTags.push(primary_tag.toLowerCase());
+ }
+ const tagWhereClauses = searchTags.map(tag => {
+ // Roam tags can be [[tag name]] or #tag-name or #[[tag name]]
+ // The :node/title for a tag page is just the tag name without any # or [[ ]]
+ return `[?ref-page :node/title "${tag}"]`;
+ }).join(' ');
+ let inClause = `:in $`;
+ let queryLimit = limit === -1 ? '' : `:limit ${limit}`;
+ let queryOffset = offset === 0 ? '' : `:offset ${offset}`;
+ let queryOrder = `:order ?page-edit-time asc ?block-uid asc`; // Sort by page edit time, then block UID
  let queryWhereClauses = `
-
- [(clojure.string/lower-case ?title-match) ?lower-title]
- [(clojure.string/lower-case ?title) ?search-title]
- [(= ?lower-title ?search-title)]
+ (or ${tagWhereClauses})
  [?b :block/refs ?ref-page]
  [?b :block/string ?block-str]
  [?b :block/uid ?block-uid]
  [?b :block/page ?p]
- [?p :node/title ?page-title]
-
+ [?p :node/title ?page-title]
+ [?p :edit/time ?page-edit-time]`; // Fetch page edit time for sorting
+ if (nearTagUid) {
+ queryWhereClauses += `
+ [?b :block/refs ?near-tag-page]
+ [?near-tag-page :block/uid "${nearTagUid}"]`;
+ }
+ if (excludeTagUid) {
+ queryWhereClauses += `
+ (not [?b :block/refs ?exclude-tag-page])
+ [?exclude-tag-page :block/uid "${excludeTagUid}"]`;
+ }
  if (targetPageUid) {
  inClause += ` ?target-page-uid`;
- queryArgs.push(targetPageUid);
  queryWhereClauses += `
  [?p :block/uid ?target-page-uid]`;
  }
  const queryStr = `[:find ?block-uid ?block-str ?page-title
- ${inClause}
+ ${inClause} ${queryLimit} ${queryOffset} ${queryOrder}
  :where
  ${queryWhereClauses}]`;
+ const queryArgs = [];
+ if (targetPageUid) {
+ queryArgs.push(targetPageUid);
+ }
  const rawResults = await q(this.graph, queryStr, queryArgs);
+ // Query to get total count without limit
+ const countQueryStr = `[:find (count ?b)
+ ${inClause}
+ :where
+ ${queryWhereClauses.replace(/\[\?p :edit\/time \?page-edit-time\]/, '')}]`; // Remove edit time for count query
+ const totalCountResults = await q(this.graph, countQueryStr, queryArgs);
+ const totalCount = totalCountResults[0] ? totalCountResults[0][0] : 0;
  // Resolve block references in content
  const resolvedResults = await Promise.all(rawResults.map(async ([uid, content, pageTitle]) => {
  const resolvedContent = await resolveRefs(this.graph, content);
  return [uid, resolvedContent, pageTitle];
  }));
  const searchDescription = `referencing "${primary_tag}"`;
-
+ const formattedResults = SearchUtils.formatSearchResults(resolvedResults, searchDescription, !targetPageUid);
+ formattedResults.total_count = totalCount;
+ return formattedResults;
  }
  }
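A hypothetical `roam_search_for_tag` call exercising the new exclusion and pagination options (names match the updated input schema; values are examples only):

```js
// Hypothetical roam_search_for_tag arguments.
const tagSearchArgs = {
  primary_tag: 'book notes',   // matched case-insensitively by default (see the case variants above)
  exclude_tag: 'archived',     // blocks referencing this tag are dropped via the (not ...) clause
  limit: 20,                   // page size; -1 disables the limit
  offset: 20,                  // skip the first 20 matches (i.e. the second page)
};
```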
package/build/search/text-search.js
CHANGED

@@ -8,29 +8,68 @@ export class TextSearchHandler extends BaseSearchHandler {
  this.params = params;
  }
  async execute() {
- const { text, page_title_uid } = this.params;
+ const { text, page_title_uid, case_sensitive = false, limit = -1, offset = 0 } = this.params;
  // Get target page UID if provided for scoped search
  let targetPageUid;
  if (page_title_uid) {
  targetPageUid = await SearchUtils.findPageByTitleOrUid(this.graph, page_title_uid);
  }
-
-
-
-
-
-
-
-
-
-
+ const searchTerms = [];
+ if (case_sensitive) {
+ searchTerms.push(text);
+ }
+ else {
+ searchTerms.push(text);
+ // Add capitalized version (e.g., "Hypnosis")
+ searchTerms.push(text.charAt(0).toUpperCase() + text.slice(1));
+ // Add all caps version (e.g., "HYPNOSIS")
+ searchTerms.push(text.toUpperCase());
+ // Add all lowercase version (e.g., "hypnosis")
+ searchTerms.push(text.toLowerCase());
+ }
+ const whereClauses = searchTerms.map(term => `[(clojure.string/includes? ?block-str "${term}")]`).join(' ');
+ let queryStr;
+ let queryParams = [];
+ let queryLimit = limit === -1 ? '' : `:limit ${limit}`;
+ let queryOffset = offset === 0 ? '' : `:offset ${offset}`;
+ let queryOrder = `:order ?page-edit-time asc ?block-uid asc`; // Sort by page edit time, then block UID
+ let baseQueryWhereClauses = `
+ [?b :block/string ?block-str]
+ (or ${whereClauses})
+ [?b :block/uid ?block-uid]
+ [?b :block/page ?p]
+ [?p :node/title ?page-title]
+ [?p :edit/time ?page-edit-time]`; // Fetch page edit time for sorting
+ if (targetPageUid) {
+ queryStr = `[:find ?block-uid ?block-str ?page-title
+ :in $ ?page-uid ${queryLimit} ${queryOffset} ${queryOrder}
+ :where
+ ${baseQueryWhereClauses}
+ [?p :block/uid ?page-uid]]`;
+ queryParams = [targetPageUid];
+ }
+ else {
+ queryStr = `[:find ?block-uid ?block-str ?page-title
+ :in $ ${queryLimit} ${queryOffset} ${queryOrder}
+ :where
+ ${baseQueryWhereClauses}]`;
+ }
  const rawResults = await q(this.graph, queryStr, queryParams);
+ // Query to get total count without limit
+ const countQueryStr = `[:find (count ?b)
+ :in $
+ :where
+ ${baseQueryWhereClauses.replace(/\[\?p :edit\/time \?page-edit-time\]/, '')}]`; // Remove edit time for count query
+ const totalCountResults = await q(this.graph, countQueryStr, queryParams);
+ const totalCount = totalCountResults[0] ? totalCountResults[0][0] : 0;
  // Resolve block references in content
  const resolvedResults = await Promise.all(rawResults.map(async ([uid, content, pageTitle]) => {
  const resolvedContent = await resolveRefs(this.graph, content);
  return [uid, resolvedContent, pageTitle];
  }));
  const searchDescription = `containing "${text}"`;
-
+ const formattedResults = SearchUtils.formatSearchResults(resolvedResults, searchDescription, !targetPageUid);
+ formattedResults.total_count = totalCount;
+ return formattedResults;
  }
  }
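To make the case-handling concrete, here is a small sketch of how the handler derives the Datalog `(or ...)` clause from case variants of the search text (same logic as the hunk above, extracted for illustration):

```js
// Sketch: building the (or ...) clause from case variants, as in TextSearchHandler.execute() above.
function buildTextClauses(text, caseSensitive = false) {
  const terms = caseSensitive
    ? [text]
    : [text, text.charAt(0).toUpperCase() + text.slice(1), text.toUpperCase(), text.toLowerCase()];
  return terms.map(term => `[(clojure.string/includes? ?block-str "${term}")]`).join(' ');
}

console.log(buildTextClauses('hypnosis'));
// => [(clojure.string/includes? ?block-str "hypnosis")] [(clojure.string/includes? ?block-str "Hypnosis")] ...
```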
package/build/server/roam-server.js
CHANGED

@@ -1,7 +1,7 @@
  import { Server } from '@modelcontextprotocol/sdk/server/index.js';
  import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
  import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
- import { CallToolRequestSchema, ErrorCode, ListResourcesRequestSchema, ReadResourceRequestSchema, McpError, ListToolsRequestSchema, } from '@modelcontextprotocol/sdk/types.js';
+ import { CallToolRequestSchema, ErrorCode, ListResourcesRequestSchema, ReadResourceRequestSchema, McpError, ListToolsRequestSchema, ListPromptsRequestSchema, } from '@modelcontextprotocol/sdk/types.js';
  import { initializeGraph } from '@roam-research/roam-api-sdk';
  import { API_TOKEN, GRAPH_NAME, HTTP_STREAM_PORT } from '../config/environment.js';
  import { toolSchemas } from '../tools/schemas.js';

@@ -20,7 +20,6 @@ const packageJson = JSON.parse(readFileSync(packageJsonPath, 'utf8'));
  const serverVersion = packageJson.version;
  export class RoamServer {
  constructor() {
- // console.log('RoamServer: Constructor started.');
  try {
  this.graph = initializeGraph({
  token: API_TOKEN,

@@ -42,7 +41,23 @@ export class RoamServer {
  if (Object.keys(toolSchemas).length === 0) {
  throw new McpError(ErrorCode.InternalError, 'No tool schemas defined in src/tools/schemas.ts');
  }
-
+ }
+ // Helper to create and configure MCP server instance
+ createMcpServer(nameSuffix = '') {
+ const server = new Server({
+ name: `roam-research${nameSuffix}`,
+ version: serverVersion,
+ }, {
+ capabilities: {
+ tools: {
+ ...Object.fromEntries(Object.keys(toolSchemas).map((toolName) => [toolName, toolSchemas[toolName].inputSchema])),
+ },
+ resources: {}, // No resources exposed via capabilities
+ prompts: {}, // No prompts exposed via capabilities
+ },
+ });
+ this.setupRequestHandlers(server);
+ return server;
  }
  // Refactored to accept a Server instance
  setupRequestHandlers(mcpServer) {

@@ -59,6 +74,10 @@ export class RoamServer {
  mcpServer.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  throw new McpError(ErrorCode.InternalError, `Resource not found: ${request.params.uri}`);
  });
+ // List available prompts
+ mcpServer.setRequestHandler(ListPromptsRequestSchema, async () => {
+ return { prompts: [] };
+ });
  // Handle tool calls
  mcpServer.setRequestHandler(CallToolRequestSchema, async (request) => {
  try {

@@ -206,49 +225,15 @@ export class RoamServer {
  });
  }
  async run() {
- // console.log('RoamServer: run() method started.');
  try {
-
- const stdioMcpServer = new Server({
- name: 'roam-research',
- version: serverVersion,
- }, {
- capabilities: {
- tools: {
- ...Object.fromEntries(Object.keys(toolSchemas).map((toolName) => [toolName, toolSchemas[toolName].inputSchema])),
- },
- resources: {
- 'roam-markdown-cheatsheet.md': {}
- }
- },
- });
- // console.log('RoamServer: stdioMcpServer created. Setting up request handlers...');
- this.setupRequestHandlers(stdioMcpServer);
- // console.log('RoamServer: stdioMcpServer handlers setup complete. Connecting transport...');
+ const stdioMcpServer = this.createMcpServer();
  const stdioTransport = new StdioServerTransport();
  await stdioMcpServer.connect(stdioTransport);
-
- const httpMcpServer = new Server({
- name: 'roam-research-http', // A distinct name for the HTTP server
- version: serverVersion,
- }, {
- capabilities: {
- tools: {
- ...Object.fromEntries(Object.keys(toolSchemas).map((toolName) => [toolName, toolSchemas[toolName].inputSchema])),
- },
- resources: {
- 'roam-markdown-cheatsheet.md': {}
- }
- },
- });
- // console.log('RoamServer: httpMcpServer created. Setting up request handlers...');
- this.setupRequestHandlers(httpMcpServer);
- // console.log('RoamServer: httpMcpServer handlers setup complete. Connecting transport...');
+ const httpMcpServer = this.createMcpServer('-http');
  const httpStreamTransport = new StreamableHTTPServerTransport({
  sessionIdGenerator: () => Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15),
  });
  await httpMcpServer.connect(httpStreamTransport);
- // console.log('RoamServer: httpStreamTransport connected.');
  const httpServer = createServer(async (req, res) => {
  // Set CORS headers
  res.setHeader('Access-Control-Allow-Origin', CORS_ORIGIN);

@@ -264,7 +249,6 @@ export class RoamServer {
  await httpStreamTransport.handleRequest(req, res);
  }
  catch (error) {
- // // console.error('HTTP Stream Server error:', error);
  if (!res.headersSent) {
  res.writeHead(500, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: 'Internal Server Error' }));

@@ -273,7 +257,6 @@ export class RoamServer {
  });
  const availableHttpPort = await findAvailablePort(parseInt(HTTP_STREAM_PORT));
  httpServer.listen(availableHttpPort, () => {
- // // console.log(`MCP Roam Research server running HTTP Stream on port ${availableHttpPort}`);
  });
  }
  catch (error) {
package/build/tools/operations/pages.js
CHANGED

@@ -18,7 +18,7 @@ export class PageOperations {
  constructor(graph) {
  this.graph = graph;
  }
- async findPagesModifiedToday(
+ async findPagesModifiedToday(limit = 50, offset = 0, sort_order = 'desc') {
  // Define ancestor rule for traversing block hierarchy
  const ancestorRule = `[
  [ (ancestor ?b ?a)

@@ -31,15 +31,21 @@ export class PageOperations {
  const startOfDay = new Date();
  startOfDay.setHours(0, 0, 0, 0);
  try {
- // Query for pages modified today
-
+ // Query for pages modified today, including modification time for sorting
+ let query = `[:find ?title ?time
  :in $ ?start_of_day %
  :where
  [?page :node/title ?title]
  (ancestor ?block ?page)
  [?block :edit/time ?time]
- [(> ?time ?start_of_day)]]
-
+ [(> ?time ?start_of_day)]]`;
+ if (limit !== -1) {
+ query += ` :limit ${limit}`;
+ }
+ if (offset > 0) {
+ query += ` :offset ${offset}`;
+ }
+ const results = await q(this.graph, query, [startOfDay.getTime(), ancestorRule]);
  if (!results || results.length === 0) {
  return {
  success: true,

@@ -47,7 +53,16 @@ export class PageOperations {
  message: 'No pages have been modified today'
  };
  }
- //
+ // Sort results by modification time
+ results.sort((a, b) => {
+ if (sort_order === 'desc') {
+ return b[1] - a[1]; // Newest first
+ }
+ else {
+ return a[1] - b[1]; // Oldest first
+ }
+ });
+ // Extract unique page titles from sorted results
  const uniquePages = Array.from(new Set(results.map(([title]) => title)));
  return {
  success: true,

@@ -173,21 +188,19 @@ export class PageOperations {
  throw new McpError(ErrorCode.InvalidRequest, 'title is required');
  }
  // Try different case variations
+ // Generate variations to check
  const variations = [
  title, // Original
  capitalizeWords(title), // Each word capitalized
  title.toLowerCase() // All lowercase
  ];
-
-
-
-
-
-
-
- if (uid)
- break;
- }
+ // Create OR clause for query
+ const orClause = variations.map(v => `[?e :node/title "${v}"]`).join(' ');
+ const searchQuery = `[:find ?uid .
+ :where [?e :block/uid ?uid]
+ (or ${orClause})]`;
+ const result = await q(this.graph, searchQuery, []);
+ const uid = (result === null || result === undefined) ? null : String(result);
  if (!uid) {
  throw new McpError(ErrorCode.InvalidRequest, `Page with title "${title}" not found (tried original, capitalized words, and lowercase)`);
  }
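A brief usage sketch for the new signature, assuming a `PageOperations` instance constructed with an initialized Roam graph as elsewhere in this package:

```js
// Sketch: paging through today's modified pages with the new parameters.
async function listTodaysPages(pageOps) {
  const first = await pageOps.findPagesModifiedToday(10, 0, 'desc');   // 10 most recently edited pages
  const second = await pageOps.findPagesModifiedToday(10, 10, 'desc'); // next 10 (offset by one page)
  return { first, second };
}
```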
package/build/tools/schemas.js
CHANGED
@@ -165,7 +165,7 @@ export const toolSchemas = {
  },
  roam_search_for_tag: {
  name: 'roam_search_for_tag',
- description: 'Search for blocks containing a specific tag and optionally filter by blocks that also contain another tag nearby.
+ description: 'Search for blocks containing a specific tag and optionally filter by blocks that also contain another tag nearby or exclude blocks with a specific tag. This tool supports pagination via the `limit` and `offset` parameters. Use this tool to search for memories tagged with the MEMORIES_TAG.',
  inputSchema: {
  type: 'object',
  properties: {

@@ -180,6 +180,21 @@ export const toolSchemas = {
  near_tag: {
  type: 'string',
  description: 'Optional: Another tag to filter results by - will only return blocks where both tags appear',
+ },
+ case_sensitive: {
+ type: 'boolean',
+ description: 'Optional: Whether the search should be case-sensitive. If false, it will search for the provided tag, capitalized versions, and first word capitalized versions.',
+ default: false
+ },
+ limit: {
+ type: 'integer',
+ description: 'Optional: The maximum number of results to return. Defaults to 50. Use -1 for no limit, but be aware that very large results sets can impact performance.',
+ default: 50
+ },
+ offset: {
+ type: 'integer',
+ description: 'Optional: The number of results to skip before returning matches. Useful for pagination. Defaults to 0.',
+ default: 0
  }
  },
  required: ['primary_tag']

@@ -259,21 +274,32 @@ export const toolSchemas = {
  },
  roam_find_pages_modified_today: {
  name: 'roam_find_pages_modified_today',
- description: 'Find pages that have been modified today (since midnight), with
+ description: 'Find pages that have been modified today (since midnight), with pagination and sorting options.',
  inputSchema: {
  type: 'object',
  properties: {
-
+ limit: {
  type: 'integer',
- description: '
+ description: 'The maximum number of pages to retrieve (default: 50). Use -1 for no limit, but be aware that very large result sets can impact performance.',
  default: 50
  },
+ offset: {
+ type: 'integer',
+ description: 'The number of pages to skip before returning matches. Useful for pagination. Defaults to 0.',
+ default: 0
+ },
+ sort_order: {
+ type: 'string',
+ description: 'Sort order for pages based on modification date. "desc" for most recent first, "asc" for oldest first.',
+ enum: ['asc', 'desc'],
+ default: 'desc'
+ }
  }
  }
  },
  roam_search_by_text: {
  name: 'roam_search_by_text',
- description: 'Search for blocks containing specific text across all pages or within a specific page.',
+ description: 'Search for blocks containing specific text across all pages or within a specific page. This tool supports pagination via the `limit` and `offset` parameters.',
  inputSchema: {
  type: 'object',
  properties: {

@@ -284,6 +310,21 @@ export const toolSchemas = {
  page_title_uid: {
  type: 'string',
  description: 'Optional: Title or UID of the page to search in (UID is preferred for accuracy). If not provided, searches across all pages.'
+ },
+ case_sensitive: {
+ type: 'boolean',
+ description: 'Optional: Whether the search should be case-sensitive. If false, it will search for the provided text, capitalized versions, and first word capitalized versions.',
+ default: false
+ },
+ limit: {
+ type: 'integer',
+ description: 'Optional: The maximum number of results to return. Defaults to 50. Use -1 for no limit, but be aware that very large results sets can impact performance.',
+ default: 50
+ },
+ offset: {
+ type: 'integer',
+ description: 'Optional: The number of results to skip before returning matches. Useful for pagination. Defaults to 0.',
+ default: 0
  }
  },
  required: ['text']

@@ -354,7 +395,7 @@ export const toolSchemas = {
  },
  roam_recall: {
  name: 'roam_recall',
- description: 'Retrieve all stored memories on page titled MEMORIES_TAG, or tagged block content with the same name. Returns a combined, deduplicated list of memories. Optionally filter blcoks with a
+ description: 'Retrieve all stored memories on page titled MEMORIES_TAG, or tagged block content with the same name. Returns a combined, deduplicated list of memories. Optionally filter blcoks with a specific tag and sort by creation date.',
  inputSchema: {
  type: 'object',
  properties: {

@@ -373,13 +414,13 @@ export const toolSchemas = {
  },
  roam_datomic_query: {
  name: 'roam_datomic_query',
- description: 'Execute a custom Datomic query on the Roam graph for advanced data retrieval beyond the available search tools. This provides direct access to Roam\'s query engine. Note: Roam graph is case-sensitive.\nList of some of Roam\'s data model Namespaces and Attributes: ancestor (descendants), attrs (lookup), block (children, heading, open, order, page, parents, props, refs, string, text-align, uid), children (view-type), create (email, time), descendant (ancestors), edit (email, seen-by, time), entity (attrs), log (id), node (title), page (uid, title), refs (text).\nPredicates (clojure.string/includes?, clojure.string/starts-with?, clojure.string/ends-with?, <, >, <=, >=, =, not=, !=).\nAggregates (distinct, count, sum, max, min, avg, limit).\nTips: Use :block/parents for all ancestor levels, :block/children for direct descendants only; combine clojure.string for complex matching, use distinct to deduplicate, leverage Pull patterns for hierarchies, handle case-sensitivity carefully, and chain ancestry rules for multi-level queries.',
+ description: 'Execute a custom Datomic query on the Roam graph for advanced data retrieval beyond the available search tools. This provides direct access to Roam\'s query engine. Note: Roam graph is case-sensitive.\n\n__Optimal Use Cases for `roam_datomic_query`:__\n- __Advanced Filtering (including Regex):__ Use for scenarios requiring complex filtering, including regex matching on results post-query, which Datalog does not natively support for all data types. It can fetch broader results for client-side post-processing.\n- __Highly Complex Boolean Logic:__ Ideal for intricate combinations of "AND", "OR", and "NOT" conditions across multiple terms or attributes.\n- __Arbitrary Sorting Criteria:__ The go-to for highly customized sorting needs beyond default options.\n- __Proximity Search:__ For advanced search capabilities involving proximity, which are difficult to implement efficiently with simpler tools.\n\nList of some of Roam\'s data model Namespaces and Attributes: ancestor (descendants), attrs (lookup), block (children, heading, open, order, page, parents, props, refs, string, text-align, uid), children (view-type), create (email, time), descendant (ancestors), edit (email, seen-by, time), entity (attrs), log (id), node (title), page (uid, title), refs (text).\nPredicates (clojure.string/includes?, clojure.string/starts-with?, clojure.string/ends-with?, <, >, <=, >=, =, not=, !=).\nAggregates (distinct, count, sum, max, min, avg, limit).\nTips: Use :block/parents for all ancestor levels, :block/children for direct descendants only; combine clojure.string for complex matching, use distinct to deduplicate, leverage Pull patterns for hierarchies, handle case-sensitivity carefully, and chain ancestry rules for multi-level queries.',
  inputSchema: {
  type: 'object',
  properties: {
  query: {
  type: 'string',
- description: 'The Datomic query to execute (in Datalog syntax)'
+ description: 'The Datomic query to execute (in Datalog syntax). Example: `[:find ?block-string :where [?block :block/string ?block-string] (or [(clojure.string/includes? ?block-string "hypnosis")] [(clojure.string/includes? ?block-string "trance")] [(clojure.string/includes? ?block-string "suggestion")]) :limit 25]`'
  },
  inputs: {
  type: 'array',

@@ -387,6 +428,21 @@ export const toolSchemas = {
  items: {
  type: 'string'
  }
+ },
+ regexFilter: {
+ type: 'string',
+ description: 'Optional: A regex pattern to filter the results client-side after the Datomic query. Applied to JSON.stringify(result) or specific fields if regexTargetField is provided.'
+ },
+ regexFlags: {
+ type: 'string',
+ description: 'Optional: Flags for the regex filter (e.g., "i" for case-insensitive, "g" for global).',
+ },
+ regexTargetField: {
+ type: 'array',
+ items: {
+ type: 'string'
+ },
+ description: 'Optional: An array of field paths (e.g., ["block_string", "page_title"]) within each Datomic result object to apply the regex filter to. If not provided, the regex is applied to the stringified full result.'
  }
  },
  required: ['query']

@@ -445,10 +501,7 @@ export const toolSchemas = {
  description: 'The UID of the parent block or page.'
  },
  "order": {
-
- { type: 'integer', description: 'Zero-indexed position.' },
- { type: 'string', enum: ['first', 'last'], description: 'Position keyword.' }
- ],
+ type: ['integer', 'string'],
  description: 'The position of the block under its parent (e.g., 0, 1, 2) or a keyword ("first", "last").'
  }
  }
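For illustration, a hypothetical `roam_search_by_text` argument object using the new schema fields (values are examples; the page UID is a placeholder):

```js
// Hypothetical roam_search_by_text arguments with the new pagination and case options.
const textSearchArgs = {
  text: 'deliberate practice',
  page_title_uid: 'abc123XYZ', // optional: restrict to one page (UID preferred); placeholder value
  case_sensitive: false,       // default: also matches Capitalized / UPPERCASE / lowercase variants
  limit: 50,                   // default page size; -1 disables the limit
  offset: 0,                   // increase in multiples of `limit` to paginate
};
```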
package/build/tools/tool-handlers.js
CHANGED

@@ -13,6 +13,7 @@ import { DatomicSearchHandlerImpl } from './operations/search/handlers.js';
  export class ToolHandlers {
  constructor(graph) {
  this.graph = graph;
+ this.cachedCheatsheet = null;
  this.pageOps = new PageOperations(graph);
  this.blockOps = new BlockOperations(graph);
  this.blockRetrievalOps = new BlockRetrievalOperations(graph); // Initialize new instance

@@ -23,8 +24,8 @@ export class ToolHandlers {
  this.batchOps = new BatchOperations(graph);
  }
  // Page Operations
- async findPagesModifiedToday(
- return this.pageOps.findPagesModifiedToday(
+ async findPagesModifiedToday(limit = 50, offset = 0, sort_order = 'desc') {
+ return this.pageOps.findPagesModifiedToday(limit, offset, sort_order);
  }
  async createPage(title, content) {
  return this.pageOps.createPage(title, content);

@@ -83,20 +84,34 @@ export class ToolHandlers {
  return this.batchOps.processBatch(actions);
  }
  async getRoamMarkdownCheatsheet() {
+ if (this.cachedCheatsheet) {
+ return this.cachedCheatsheet;
+ }
  const __filename = fileURLToPath(import.meta.url);
  const __dirname = path.dirname(__filename);
  const cheatsheetPath = path.join(__dirname, '../../Roam_Markdown_Cheatsheet.md');
-
-
-
-
-
-
-
-
-
+ try {
+ let cheatsheetContent = await fs.promises.readFile(cheatsheetPath, 'utf-8');
+ const customInstructionsPath = process.env.CUSTOM_INSTRUCTIONS_PATH;
+ if (customInstructionsPath) {
+ try {
+ // Check if file exists asynchronously
+ await fs.promises.access(customInstructionsPath);
+ const customInstructionsContent = await fs.promises.readFile(customInstructionsPath, 'utf-8');
+ cheatsheetContent += `\n\n${customInstructionsContent}`;
+ }
+ catch (error) {
+ // File doesn't exist or is not readable, ignore custom instructions
+ if (error.code !== 'ENOENT') {
+ console.warn(`Could not read custom instructions file at ${customInstructionsPath}: ${error}`);
+ }
+ }
  }
+ this.cachedCheatsheet = cheatsheetContent;
+ return cheatsheetContent;
+ }
+ catch (error) {
+ throw new Error(`Failed to read cheatsheet: ${error}`);
  }
- return cheatsheetContent;
  }
  }
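A minimal sketch of the read-once caching pattern introduced in `getRoamMarkdownCheatsheet()` above, under the assumption that repeated reads within one process should reuse the in-memory copy:

```js
import fs from 'fs';

let cachedCheatsheet = null;

// Sketch of the cache-then-concatenate behavior; missing custom instructions are ignored, as above.
async function getCheatsheet(cheatsheetPath, customInstructionsPath = process.env.CUSTOM_INSTRUCTIONS_PATH) {
  if (cachedCheatsheet) return cachedCheatsheet; // later calls skip disk I/O
  let content = await fs.promises.readFile(cheatsheetPath, 'utf-8');
  if (customInstructionsPath) {
    try {
      content += `\n\n${await fs.promises.readFile(customInstructionsPath, 'utf-8')}`;
    } catch {
      // file missing or unreadable: fall back to the cheatsheet alone
    }
  }
  cachedCheatsheet = content;
  return content;
}
```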
package/package.json
CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "roam-research-mcp",
- "version": "0.
+ "version": "1.0.0",
  "description": "A Model Context Protocol (MCP) server for Roam Research API integration",
  "private": false,
  "repository": {

@@ -22,13 +22,13 @@
  "homepage": "https://github.com/2b3pro/roam-research-mcp#readme",
  "type": "module",
  "bin": {
- "roam-research": "./build/index.js"
+ "roam-research-mcp": "./build/index.js"
  },
  "files": [
  "build"
  ],
  "scripts": {
- "build": "tsc && cat Roam_Markdown_Cheatsheet.md .roam/${CUSTOM_INSTRUCTIONS_PREFIX}custom-instructions.md > build/Roam_Markdown_Cheatsheet.md && chmod 755 build/index.js",
+ "build": "echo \"Using custom instructions: .roam/${CUSTOM_INSTRUCTIONS_PREFIX}custom-instructions.md\" && tsc && cat Roam_Markdown_Cheatsheet.md .roam/${CUSTOM_INSTRUCTIONS_PREFIX}custom-instructions.md > build/Roam_Markdown_Cheatsheet.md && chmod 755 build/index.js",
  "clean": "rm -rf build",
  "watch": "tsc --watch",
  "inspector": "npx @modelcontextprotocol/inspector build/index.js",

@@ -48,4 +48,4 @@
  "ts-node": "^10.9.2",
  "typescript": "^5.3.3"
  }
- }
+ }