roam-research-mcp 0.35.1 → 0.36.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +16 -2
- package/build/Roam_Markdown_Cheatsheet.md +62 -4
- package/build/search/tag-search.js +72 -12
- package/build/search/text-search.js +51 -12
- package/build/tools/schemas.js +36 -9
- package/package.json +2 -2
package/README.md
CHANGED
@@ -121,7 +121,7 @@ The server provides powerful tools for interacting with Roam Research:
 10. `roam_search_by_text`: Search for blocks containing specific text.
 11. `roam_search_by_status`: Search for blocks with a specific status (TODO/DONE) across all pages or within a specific page.
 12. `roam_search_by_date`: Search for blocks or pages based on creation or modification dates.
-13. `roam_search_for_tag`: Search for blocks containing a specific tag and optionally filter by blocks that also contain another tag nearby.
+13. `roam_search_for_tag`: Search for blocks containing a specific tag and optionally filter by blocks that also contain another tag nearby or exclude blocks with a specific tag.
 14. `roam_remember`: Add a memory or piece of information to remember. (Internally uses `roam_process_batch_actions`.)
 15. `roam_recall`: Retrieve all stored memories.
 16. `roam_datomic_query`: Execute a custom Datomic query on the Roam graph for advanced data retrieval beyond the available search tools.
@@ -129,7 +129,7 @@ The server provides powerful tools for interacting with Roam Research:
 18. `roam_process_batch_actions`: Execute a sequence of low-level block actions (create, update, move, delete) in a single, non-transactional batch. Provides granular control for complex nesting like tables. (Note: For actions on existing blocks or within a specific page context, it is often necessary to first obtain valid page or block UIDs using tools like `roam_fetch_page_by_title`.)
 
 **Deprecated Tools**:
-The following tools have been deprecated as of `
+The following tools have been deprecated as of `v1.36.0` in favor of the more powerful and flexible `roam_process_batch_actions`:
 
 - `roam_create_block`: Use `roam_process_batch_actions` with the `create-block` action.
 - `roam_update_block`: Use `roam_process_batch_actions` with the `update-block` action.
@@ -176,6 +176,20 @@ Please note that while the `roam_process_batch_actions` tool can set block headings
 
 ---
 
+## Proposed Improvements
+
+### Pagination for Search Tools
+
+The `roam_search_for_tag` and `roam_search_by_text` tools now support `limit` and `offset` parameters, enabling basic pagination. To achieve full, robust pagination (e.g., retrieving "page 2" of results), the client consuming these tools would need to:
+
+1. Make an initial call with `limit` and `offset=0` to get the first set of results and the `total_count`.
+2. Calculate the total number of pages based on `total_count` and the desired `limit`.
+3. Make subsequent calls, incrementing the `offset` by `limit` for each "page" of results.
+
+Example: To get the second page of 10 results, the call would be `roam_search_by_text(text: "your query", limit: 10, offset: 10)`.
+
+---
+
 ## Example Prompts
 
 Here are some examples of how to creatively use the Roam tool in an LLM to interact with your Roam graph, particularly leveraging `roam_process_batch_actions` for complex operations.
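The pagination recipe described in the README changes above can be sketched as a small client-side loop. This is a minimal sketch, not part of the package: `searchByText` is a hypothetical stand-in for an MCP call to `roam_search_by_text`, mocked here over an in-memory array so the loop itself is runnable.

```javascript
// Hypothetical in-memory data standing in for blocks in a Roam graph.
const ALL_MATCHES = Array.from({ length: 23 }, (_, i) => `block-${i}`);

// Mocked stand-in for an MCP call to roam_search_by_text. A real call would
// query the graph; the response shape (matches + total_count) mirrors what
// the search handlers return.
async function searchByText({ text, limit, offset }) {
  return {
    matches: ALL_MATCHES.slice(offset, offset + limit),
    total_count: ALL_MATCHES.length,
  };
}

async function fetchAllPages(text, limit = 10) {
  const pages = [];
  // 1. Initial call with offset=0 yields the first page and total_count.
  const first = await searchByText({ text, limit, offset: 0 });
  pages.push(first.matches);
  // 2. Derive the number of pages from total_count and the chosen limit.
  const totalPages = Math.ceil(first.total_count / limit);
  // 3. Increment offset by limit for each subsequent "page" of results.
  for (let page = 1; page < totalPages; page++) {
    const next = await searchByText({ text, limit, offset: page * limit });
    pages.push(next.matches);
  }
  return pages;
}
```

With 23 mock matches and a limit of 10, the loop issues three calls (offsets 0, 10, and 20), matching the README's "second page of 10 results is `offset: 10`" example.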
package/build/Roam_Markdown_Cheatsheet.md
CHANGED

@@ -113,12 +113,70 @@ The provided markdown structure represents a Roam Research Kanban board. It starts
 This markdown structure allows embedding custom HTML or other content using Hiccup syntax. The `:hiccup` keyword is followed by a Clojure-like vector defining the HTML elements and their attributes in one block. This provides a powerful way to inject dynamic or custom components into your Roam graph. Example: `:hiccup [:iframe {:width "600" :height "400" :src "https://www.example.com"}]`
 
 ## Specific notes and preferences concerning my Roam Research graph
-###
+### When creating new pages
 
-
+- After creating a new page, ensure you add a new block on the daily page that references it if one doesn't already exist: `Created page: [[…]]`
 
-###
+### What To Tag
 
-
+- Always think in terms of Future Me: How will I find this gem of information in this block in the future? What about this core idea might I be looking for? Aim for high relevance and helpfulness.
+- My notes in Roam are interconnected via wrapped terms/expressions [[…]] and hashtags (#…). The general salience guidelines are as follows:
+
+  - Under what greater domain would this core idea best be categorized?
+  - If the block is a parent of child blocks, can a word/phrase in the parent be wrapped in brackets?
+  - If not, could a hashtag be appended to the block? (This is like a "please see more"-on-this-subject categorization.)
+  - Wrap any proper names—no titles—(Dr. [[Werner Erhard]]), organizations, and abbreviations, e.g. `[NASA]([[National Aeronautics and Space Administration (NASA)]])`
+  - Alias any synonyms or multi-word phrases of already established categories/pages, e.g. `[frameworks for making quality decisions]([[decision-making frameworks]])`
+
+#### Advanced tagging methodology combining [[zettelkasten]] principles with serendipity engineering for maximum intellectual discovery potential. #[[knowledge management]] #[[tagging methodology]]
+
+- # Core Philosophy
+- Tag for **intellectual collision** and **future conversation**, not just categorization. Every tag should maximize the potential for unexpected discoveries and cross-domain insights.
+- Traditional academic categorization creates silos; serendipity-engineered tagging creates **conceptual magnets** that attract surprising connections.
+- ## Zettelkasten-Informed Principles
+- **Atomic Thought Units**: Each tagged concept should function as a standalone intellectual object capable of connecting to ANY other domain
+- **Conversation-Driven Linking**: Tag by the intellectual dialogues and debates a concept could inform, not just its subject matter
+- **Temporal Intellectual Journey**: Tag for different phases of understanding - beginner discoveries, expert refinements, teaching moments
+- **Intellectual Genealogy**: Create concept lineages that trace how ideas evolve and connect across domains
+- ## Serendipity Engineering Techniques
+- **Conceptual Collision Matrix**: Force unlikely intellectual meetings by tagging across disparate domains
+  - Example: #[[parenting techniques]] + [[cognitive engineering]] = developing children's meta-cognitive awareness
+  - Example: #[[cooking methods]] + [[cultural rotation]] = how different cultures balance flavors → balancing competing AI instructions
+- **Anti-Obvious Tagging**: Deliberately tag against natural categorization to maximize surprise
+  - Tag by **structural similarities** rather than surface content: #[[has feedback loops]], #[[requires calibration]], #[[exhibits emergence]]
+  - Connect concepts through **metaphorical bridges**: meditation techniques + bias detection (both about noticing the unnoticed)
+- **Question Cascade Strategy**: Tag by what questions a concept could answer across unexpected domains
+  - Instead of "this is about X," ask "what problems could this solve that I haven't thought of yet?"
+  - Examples: #[[how do experts prevent tunnel vision?]], #[[what creates cognitive flexibility?]], #[[how do systems self-correct?]]
+- **Future Collision Potential**: Tag with temporal discovery triggers
+  - [[will be relevant in 5 years]], #[[connects to unborn projects]], #[[solves problems I don't know I have yet]]
+- ## Practical Implementation
+- **The Serendipity Test**: Before tagging, ask "Could this concept surprise me by connecting to something completely unrelated?"
+- **Cross-Domain Bridge Tags**: Use structural rather than content-based categorization
+  - [[flow dynamics]] - connects fluid mechanics, music, conversation, AI prompt sequences
+  - [[calibration processes]] - bridges instrument tuning, relationships, AI parameters, personal habits
+  - [[perspective switching]] - connects photography, negotiation, cultural analysis, prompt engineering
+- **Problem-Solution Pairing**: Tag by the problems solved rather than methods used
+  - [[breaking cognitive constraints]], #[[expanding solution spaces]], #[[preventing expert blindness]]
+- **Intellectual State Triggers**: Tag for when you'd actually search based on mental/emotional context
+  - [[feeling stuck in patterns]], #[[needing fresh perspective]], #[[frustrated with conventional approaches]]
+- ## Tag Maintenance Strategy
+- **Regular Collision Audits**: Periodically review tags to identify missed connection opportunities
+- **Surprise Discovery Log**: Track when unexpected tag connections lead to insights, then engineer more of those patterns
+- **Question Evolution**: Update question-based tags as your intellectual interests and problems evolve
+- **Cross-Reference Integration**: Ensure the new tagging approach complements existing page reference `[[…]]` and hashtag `#[[…]]` conventions
+
+### How to Tag (nuances)
+
+- Consider that when linking a parent block, all child blocks will also accompany future search results. Linking a child block will only link that block, not its siblings.
+- The convention for tags is lower case unless proper nouns. When tagging might affect the capitalization within a sentence, use aliases, e.g. `[Cognitive biases]([[cognitive biases]]) of the person are listed…`
+- Can longer phrases and expressions be aliased to existing notes? `How one can leverage [[cognitive dissonance]] to [hypnotically influence others using language]([[hypnotic language techniques]])`
+- Reformat quotes in this structure: `<quote> —[[<Quoted Person>]] #quote <hashtags>`, with 2-3 additional relevant hashtag links
+- Anything that needs follow-up or further research, preface with `{{[[TODO]]}}`
+- Scheduled review: Tag future dates in ordinal format so that they come up for review when relevant, e.g. `[[For review]]: [[August 12th, 2026]]`. Optionally, prepend a label ("Deadline", "For review", "Approved", "Pending", "Deferred", "Postponed until"). Place on the parent block if relevant (if the child blocks are related).
+
+### Constraints
+
+- Don't overtag.
 
 ⭐️📋 END (Cheat Sheet LOADED) < < < 📋⭐️
package/build/search/tag-search.js
CHANGED

@@ -8,42 +8,102 @@ export class TagSearchHandler extends BaseSearchHandler {
         this.params = params;
     }
     async execute() {
-        const { primary_tag, page_title_uid, near_tag, exclude_tag } = this.params;
+        const { primary_tag, page_title_uid, near_tag, exclude_tag, case_sensitive = false, limit = -1, offset = 0 } = this.params;
+        let nearTagUid;
+        if (near_tag) {
+            nearTagUid = await SearchUtils.findPageByTitleOrUid(this.graph, near_tag);
+            if (!nearTagUid) {
+                return {
+                    success: false,
+                    matches: [],
+                    message: `Near tag "${near_tag}" not found.`,
+                    total_count: 0
+                };
+            }
+        }
+        let excludeTagUid;
+        if (exclude_tag) {
+            excludeTagUid = await SearchUtils.findPageByTitleOrUid(this.graph, exclude_tag);
+            if (!excludeTagUid) {
+                return {
+                    success: false,
+                    matches: [],
+                    message: `Exclude tag "${exclude_tag}" not found.`,
+                    total_count: 0
+                };
+            }
+        }
         // Get target page UID if provided for scoped search
         let targetPageUid;
         if (page_title_uid) {
             targetPageUid = await SearchUtils.findPageByTitleOrUid(this.graph, page_title_uid);
         }
-
-
+        const searchTags = [];
+        if (case_sensitive) {
+            searchTags.push(primary_tag);
+        }
+        else {
+            searchTags.push(primary_tag);
+            searchTags.push(primary_tag.charAt(0).toUpperCase() + primary_tag.slice(1));
+            searchTags.push(primary_tag.toUpperCase());
+            searchTags.push(primary_tag.toLowerCase());
+        }
+        const tagWhereClauses = searchTags.map(tag => {
+            // Roam tags can be [[tag name]] or #tag-name or #[[tag name]]
+            // The :node/title for a tag page is just the tag name without any # or [[ ]]
+            return `[?ref-page :node/title "${tag}"]`;
+        }).join(' ');
+        let inClause = `:in $`;
+        let queryLimit = limit === -1 ? '' : `:limit ${limit}`;
+        let queryOffset = offset === 0 ? '' : `:offset ${offset}`;
+        let queryOrder = `:order ?page-edit-time asc ?block-uid asc`; // Sort by page edit time, then block UID
         let queryWhereClauses = `
-
-            [(clojure.string/lower-case ?title-match) ?lower-title]
-            [(clojure.string/lower-case ?title) ?search-title]
-            [(= ?lower-title ?search-title)]
+            (or ${tagWhereClauses})
             [?b :block/refs ?ref-page]
             [?b :block/string ?block-str]
             [?b :block/uid ?block-uid]
             [?b :block/page ?p]
-            [?p :node/title ?page-title]
-
+            [?p :node/title ?page-title]
+            [?p :edit/time ?page-edit-time]`; // Fetch page edit time for sorting
+        if (nearTagUid) {
+            queryWhereClauses += `
+            [?b :block/refs ?near-tag-page]
+            [?near-tag-page :block/uid "${nearTagUid}"]`;
+        }
+        if (excludeTagUid) {
+            queryWhereClauses += `
+            (not [?b :block/refs ?exclude-tag-page])
+            [?exclude-tag-page :block/uid "${excludeTagUid}"]`;
+        }
         if (targetPageUid) {
             inClause += ` ?target-page-uid`;
-            queryArgs.push(targetPageUid);
             queryWhereClauses += `
             [?p :block/uid ?target-page-uid]`;
         }
         const queryStr = `[:find ?block-uid ?block-str ?page-title
-            ${inClause}
+            ${inClause} ${queryLimit} ${queryOffset} ${queryOrder}
             :where
             ${queryWhereClauses}]`;
+        const queryArgs = [];
+        if (targetPageUid) {
+            queryArgs.push(targetPageUid);
+        }
         const rawResults = await q(this.graph, queryStr, queryArgs);
+        // Query to get total count without limit
+        const countQueryStr = `[:find (count ?b)
+            ${inClause}
+            :where
+            ${queryWhereClauses.replace(/\[\?p :edit\/time \?page-edit-time\]/, '')}]`; // Remove edit time for count query
+        const totalCountResults = await q(this.graph, countQueryStr, queryArgs);
+        const totalCount = totalCountResults[0] ? totalCountResults[0][0] : 0;
         // Resolve block references in content
         const resolvedResults = await Promise.all(rawResults.map(async ([uid, content, pageTitle]) => {
             const resolvedContent = await resolveRefs(this.graph, content);
             return [uid, resolvedContent, pageTitle];
         }));
         const searchDescription = `referencing "${primary_tag}"`;
-
+        const formattedResults = SearchUtils.formatSearchResults(resolvedResults, searchDescription, !targetPageUid);
+        formattedResults.total_count = totalCount;
+        return formattedResults;
     }
 }
package/build/search/text-search.js
CHANGED

@@ -8,29 +8,68 @@ export class TextSearchHandler extends BaseSearchHandler {
         this.params = params;
     }
     async execute() {
-        const { text, page_title_uid } = this.params;
+        const { text, page_title_uid, case_sensitive = false, limit = -1, offset = 0 } = this.params;
         // Get target page UID if provided for scoped search
         let targetPageUid;
         if (page_title_uid) {
             targetPageUid = await SearchUtils.findPageByTitleOrUid(this.graph, page_title_uid);
         }
-
-
-
-
-
-
-
-
-
-
+        const searchTerms = [];
+        if (case_sensitive) {
+            searchTerms.push(text);
+        }
+        else {
+            searchTerms.push(text);
+            // Add capitalized version (e.g., "Hypnosis")
+            searchTerms.push(text.charAt(0).toUpperCase() + text.slice(1));
+            // Add all caps version (e.g., "HYPNOSIS")
+            searchTerms.push(text.toUpperCase());
+            // Add all lowercase version (e.g., "hypnosis")
+            searchTerms.push(text.toLowerCase());
+        }
+        const whereClauses = searchTerms.map(term => `[(clojure.string/includes? ?block-str "${term}")]`).join(' ');
+        let queryStr;
+        let queryParams = [];
+        let queryLimit = limit === -1 ? '' : `:limit ${limit}`;
+        let queryOffset = offset === 0 ? '' : `:offset ${offset}`;
+        let queryOrder = `:order ?page-edit-time asc ?block-uid asc`; // Sort by page edit time, then block UID
+        let baseQueryWhereClauses = `
+            [?b :block/string ?block-str]
+            (or ${whereClauses})
+            [?b :block/uid ?block-uid]
+            [?b :block/page ?p]
+            [?p :node/title ?page-title]
+            [?p :edit/time ?page-edit-time]`; // Fetch page edit time for sorting
+        if (targetPageUid) {
+            queryStr = `[:find ?block-uid ?block-str ?page-title
+                :in $ ?page-uid ${queryLimit} ${queryOffset} ${queryOrder}
+                :where
+                ${baseQueryWhereClauses}
+                [?p :block/uid ?page-uid]]`;
+            queryParams = [targetPageUid];
+        }
+        else {
+            queryStr = `[:find ?block-uid ?block-str ?page-title
+                :in $ ${queryLimit} ${queryOffset} ${queryOrder}
+                :where
+                ${baseQueryWhereClauses}]`;
+        }
         const rawResults = await q(this.graph, queryStr, queryParams);
+        // Query to get total count without limit
+        const countQueryStr = `[:find (count ?b)
+            :in $
+            :where
+            ${baseQueryWhereClauses.replace(/\[\?p :edit\/time \?page-edit-time\]/, '')}]`; // Remove edit time for count query
+        const totalCountResults = await q(this.graph, countQueryStr, queryParams);
+        const totalCount = totalCountResults[0] ? totalCountResults[0][0] : 0;
         // Resolve block references in content
         const resolvedResults = await Promise.all(rawResults.map(async ([uid, content, pageTitle]) => {
             const resolvedContent = await resolveRefs(this.graph, content);
             return [uid, resolvedContent, pageTitle];
         }));
         const searchDescription = `containing "${text}"`;
-
+        const formattedResults = SearchUtils.formatSearchResults(resolvedResults, searchDescription, !targetPageUid);
+        formattedResults.total_count = totalCount;
+        return formattedResults;
     }
 }
package/build/tools/schemas.js
CHANGED
@@ -165,7 +165,7 @@ export const toolSchemas = {
     },
     roam_search_for_tag: {
         name: 'roam_search_for_tag',
-        description: 'Search for blocks containing a specific tag and optionally filter by blocks that also contain another tag nearby.',
+        description: 'Search for blocks containing a specific tag and optionally filter by blocks that also contain another tag nearby or exclude blocks with a specific tag. This tool supports pagination via the `limit` and `offset` parameters. Use this tool to search for memories tagged with the MEMORIES_TAG.',
         inputSchema: {
             type: 'object',
             properties: {
@@ -180,6 +180,21 @@ export const toolSchemas = {
                 near_tag: {
                     type: 'string',
                     description: 'Optional: Another tag to filter results by - will only return blocks where both tags appear',
+                },
+                case_sensitive: {
+                    type: 'boolean',
+                    description: 'Optional: Whether the search should be case-sensitive. If false, it will search for the provided tag, capitalized versions, and first word capitalized versions.',
+                    default: false
+                },
+                limit: {
+                    type: 'integer',
+                    description: 'Optional: The maximum number of results to return. Defaults to 1. Use -1 for no limit.',
+                    default: 1
+                },
+                offset: {
+                    type: 'integer',
+                    description: 'Optional: The number of results to skip before returning matches. Useful for pagination. Defaults to 0.',
+                    default: 0
                 }
             },
             required: ['primary_tag']
@@ -273,7 +288,7 @@ export const toolSchemas = {
     },
     roam_search_by_text: {
         name: 'roam_search_by_text',
-        description: 'Search for blocks containing specific text across all pages or within a specific page.',
+        description: 'Search for blocks containing specific text across all pages or within a specific page. This tool supports pagination via the `limit` and `offset` parameters.',
         inputSchema: {
             type: 'object',
             properties: {
@@ -284,6 +299,21 @@ export const toolSchemas = {
                 page_title_uid: {
                     type: 'string',
                     description: 'Optional: Title or UID of the page to search in (UID is preferred for accuracy). If not provided, searches across all pages.'
+                },
+                case_sensitive: {
+                    type: 'boolean',
+                    description: 'Optional: Whether the search should be case-sensitive. If false, it will search for the provided text, capitalized versions, and first word capitalized versions.',
+                    default: false
+                },
+                limit: {
+                    type: 'integer',
+                    description: 'Optional: The maximum number of results to return. Defaults to 1. Use -1 for no limit.',
+                    default: 1
+                },
+                offset: {
+                    type: 'integer',
+                    description: 'Optional: The number of results to skip before returning matches. Useful for pagination. Defaults to 0.',
+                    default: 0
                 }
             },
             required: ['text']
@@ -354,7 +384,7 @@ export const toolSchemas = {
     },
     roam_recall: {
         name: 'roam_recall',
-        description: 'Retrieve all stored memories on page titled MEMORIES_TAG, or tagged block content with the same name. Returns a combined, deduplicated list of memories. Optionally filter blcoks with a
+        description: 'Retrieve all stored memories on page titled MEMORIES_TAG, or tagged block content with the same name. Returns a combined, deduplicated list of memories. Optionally filter blocks with a specific tag and sort by creation date.',
         inputSchema: {
             type: 'object',
             properties: {
@@ -373,13 +403,13 @@ export const toolSchemas = {
     },
     roam_datomic_query: {
         name: 'roam_datomic_query',
-        description: 'Execute a custom Datomic query on the Roam graph for advanced data retrieval beyond the available search tools. This provides direct access to Roam\'s query engine. Note: Roam graph is case-sensitive.\nList of some of Roam\'s data model Namespaces and Attributes: ancestor (descendants), attrs (lookup), block (children, heading, open, order, page, parents, props, refs, string, text-align, uid), children (view-type), create (email, time), descendant (ancestors), edit (email, seen-by, time), entity (attrs), log (id), node (title), page (uid, title), refs (text).\nPredicates (clojure.string/includes?, clojure.string/starts-with?, clojure.string/ends-with?, <, >, <=, >=, =, not=, !=).\nAggregates (distinct, count, sum, max, min, avg, limit).\nTips: Use :block/parents for all ancestor levels, :block/children for direct descendants only; combine clojure.string for complex matching, use distinct to deduplicate, leverage Pull patterns for hierarchies, handle case-sensitivity carefully, and chain ancestry rules for multi-level queries.',
+        description: 'Execute a custom Datomic query on the Roam graph for advanced data retrieval beyond the available search tools. This provides direct access to Roam\'s query engine. Note: Roam graph is case-sensitive.\n\n__Optimal Use Cases for `roam_datomic_query`:__\n- __Regex Search:__ Use for scenarios requiring regex, as Datalog does not natively support full regular expressions. It can fetch broader results for client-side post-processing.\n- __Highly Complex Boolean Logic:__ Ideal for intricate combinations of "AND", "OR", and "NOT" conditions across multiple terms or attributes.\n- __Arbitrary Sorting Criteria:__ The go-to for highly customized sorting needs beyond default options.\n- __Proximity Search:__ For advanced search capabilities involving proximity, which are difficult to implement efficiently with simpler tools.\n\nList of some of Roam\'s data model Namespaces and Attributes: ancestor (descendants), attrs (lookup), block (children, heading, open, order, page, parents, props, refs, string, text-align, uid), children (view-type), create (email, time), descendant (ancestors), edit (email, seen-by, time), entity (attrs), log (id), node (title), page (uid, title), refs (text).\nPredicates (clojure.string/includes?, clojure.string/starts-with?, clojure.string/ends-with?, <, >, <=, >=, =, not=, !=).\nAggregates (distinct, count, sum, max, min, avg, limit).\nTips: Use :block/parents for all ancestor levels, :block/children for direct descendants only; combine clojure.string for complex matching, use distinct to deduplicate, leverage Pull patterns for hierarchies, handle case-sensitivity carefully, and chain ancestry rules for multi-level queries.',
         inputSchema: {
             type: 'object',
             properties: {
                 query: {
                     type: 'string',
-                    description: 'The Datomic query to execute (in Datalog syntax)'
+                    description: 'The Datomic query to execute (in Datalog syntax). Example: `[:find ?block-string :where [?block :block/string ?block-string] (or [(clojure.string/includes? ?block-string "hypnosis")] [(clojure.string/includes? ?block-string "trance")] [(clojure.string/includes? ?block-string "suggestion")]) :limit 25]`'
                 },
                 inputs: {
                     type: 'array',
@@ -445,10 +475,7 @@ export const toolSchemas = {
                     description: 'The UID of the parent block or page.'
                 },
                 "order": {
-
-                    { type: 'integer', description: 'Zero-indexed position.' },
-                    { type: 'string', enum: ['first', 'last'], description: 'Position keyword.' }
-                    ],
+                    type: ['integer', 'string'],
                     description: 'The position of the block under its parent (e.g., 0, 1, 2) or a keyword ("first", "last").'
                 }
             }
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "roam-research-mcp",
-  "version": "0.35.1",
+  "version": "0.36.0",
   "description": "A Model Context Protocol (MCP) server for Roam Research API integration",
   "private": false,
   "repository": {
@@ -28,7 +28,7 @@
     "build"
   ],
   "scripts": {
-    "build": "tsc && cat Roam_Markdown_Cheatsheet.md .roam/${CUSTOM_INSTRUCTIONS_PREFIX}custom-instructions.md > build/Roam_Markdown_Cheatsheet.md && chmod 755 build/index.js",
+    "build": "echo \"Using custom instructions: .roam/${CUSTOM_INSTRUCTIONS_PREFIX}custom-instructions.md\" && tsc && cat Roam_Markdown_Cheatsheet.md .roam/${CUSTOM_INSTRUCTIONS_PREFIX}custom-instructions.md > build/Roam_Markdown_Cheatsheet.md && chmod 755 build/index.js",
     "clean": "rm -rf build",
     "watch": "tsc --watch",
     "inspector": "npx @modelcontextprotocol/inspector build/index.js",