@sunub/obsidian-mcp-server 0.0.10 → 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -14,6 +14,8 @@ Build an AI-usable knowledge base from an Obsidian Vault
  ## Key features
 
  - **Advanced document exploration**: the `vault` tool offers keyword search, full listing, reading specific documents, statistics, and other exploration features.
+ - **Context collection / memory packet generation**: `vault collect_context` batch-collects documents, compresses them, issues continuation tokens, and produces a memory packet (canonical JSON).
+ - **Recalling stored memory**: `vault load_memory` quickly loads `memory/resume_context.v1.md` so it can be reused as context for the next turn.
  - **AI-based property generation**: the `generate_property` tool analyzes a document body and automatically generates suitable frontmatter properties such as `title`, `tags`, and `summary`.
  - **Safe property updates**: the `write_property` tool merges generated properties with the existing frontmatter and writes them to the file safely.
  - **Automatic attachment organization**: the `organize_attachments` tool detects attachments linked to a document (e.g. images), moves them into a folder matching the document title, and updates the links.
@@ -32,6 +34,8 @@ Core tool for exploring and analyzing vault documents. `action`
  - **`search`**: searches document titles, content, and tags by keyword.
  - **`read`**: reads a specific file and returns its frontmatter and body.
  - **`stats`**: provides statistics (word count, character count, etc.) for all documents in the vault.
+ - **`collect_context`**: batch-processes documents to build a memory packet and, when requested, saves it to `memory/resume_context.v1.md`.
+ - **`load_memory`**: parses the canonical JSON block of a stored memory note and returns a payload for fast re-injection.
 
  ### `generate_property`
 
@@ -49,6 +53,70 @@ Core tool for exploring and analyzing vault documents. `action`
 
  Finds documents by keyword, moves every attachment linked to them into an `images/{document title}` folder, and automatically updates the links inside the documents.
 
+ ## Memory operating principles
+
+ ### Separation of server and agent responsibilities
+
+ - **MCP server (data plane)**: responsible for search, read, compression, continuation, and memory-packet generation/storage.
+ - **Agent runtime (memory plane)**: responsible for detecting user intent, calling `load_memory` automatically, and pre-injecting the next turn's prompt.
+
+ Important: the server alone cannot guarantee "memory is automatically applied on the next turn". This behavior must be implemented in the client/agent runtime.
+
+ ### Memory artifact format
+
+ - Default save path: `memory/resume_context.v1.md`
+ - Composition: a human-readable Markdown summary plus a canonical JSON code block for AI parsing
+ - Schema keys: `schema_version`, `generated_at`, `source_hash`, `documents[].doc_hash`, `memory_packet`
+
+ ## Recommended collect_context presets
+
+ | Goal | Key parameters | Recommended values |
+ | --- | --- | --- |
+ | Quick topic scan | `scope`, `maxDocs`, `maxCharsPerDoc`, `compressionMode` | `topic`, `8`, `700`, `aggressive` |
+ | Building résumé context | `scope`, `maxDocs`, `maxCharsPerDoc`, `memoryMode`, `compressionMode` | `all`, `20`, `1200`, `both`, `balanced` |
+ | Staged processing of a long vault | `maxDocs`, `maxCharsPerDoc`, `maxOutputChars` | `10`, `900`, `2800` |
+
+ When the output cap is exceeded, the guardrails shrink the response in this order: `backlinks -> per-doc chars -> doc count -> continuation`.
+
+ ## Example MCP requests (3)
+
+ Below are examples of the `arguments` passed to an MCP client's `callTool`.
+
+ ### 1) Start building memory from the entire vault
+
+ ```json
+ {
+   "action": "collect_context",
+   "scope": "all",
+   "maxDocs": 20,
+   "maxCharsPerDoc": 1200,
+   "memoryMode": "both",
+   "compressionMode": "balanced"
+ }
+ ```
+
+ ### 2) Continue collecting the next batch with continuationToken
+
+ ```json
+ {
+   "action": "collect_context",
+   "continuationToken": "<previous_response.batch.continuation_token>",
+   "compressionMode": "balanced"
+ }
+ ```
+
+ ### 3) Fast load of stored memory (quiet)
+
+ ```json
+ {
+   "action": "load_memory",
+   "memoryPath": "memory/resume_context.v1.md",
+   "quiet": true
+ }
+ ```
+
+ See `docs/CLIENT_INJECTION_GUIDE.md` for client auto-injection rules.
+
  ## Installation and usage
 
  ### MCP client configuration
@@ -138,6 +206,30 @@ npm test
  npm run test:watch
  ```
 
+ ### Cost instrumentation (B1)
+
+ When `VAULT_METRICS_LOG_PATH` is set, the metrics below are appended as JSONL for every `vault` tool response.
+
+ - `estimated_tokens`
+ - `mode`
+ - `truncated`
+ - `doc_count`
+
+ Example:
+
+ ```bash
+ # 1) Set the metrics log path
+ export VAULT_METRICS_LOG_PATH=.tmp/vault-metrics.jsonl
+
+ # 2) Run MCP scenarios as usual (search/read/collect_context/load_memory)
+ npm run inspector
+
+ # 3) After the scenario ends, generate the report
+ npm run metrics:report -- .tmp/vault-metrics.jsonl
+ ```
+
+ The report prints `count`, `total_tokens`, `avg/p95_tokens`, `avg_doc_count`, and `truncated_rate(%)` per action.
+
  ### Code quality
 
  ```bash
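Editor's note: the report step described in this hunk can also be approximated ad hoc. A minimal sketch in plain Node, not the package's `metrics:report` script; the sample log lines are illustrative, not real output:

```javascript
// Aggregate a JSONL metrics log per action: request count and total estimated tokens.
function summarize(jsonl) {
  const byAction = new Map();
  for (const line of jsonl.split("\n").filter(Boolean)) {
    const metric = JSON.parse(line);
    const agg = byAction.get(metric.action) ?? { count: 0, total_tokens: 0 };
    agg.count += 1;
    agg.total_tokens += metric.estimated_tokens;
    byAction.set(metric.action, agg);
  }
  return Object.fromEntries(byAction);
}

// Two hypothetical lines in the shape the README describes.
const sample = [
  '{"action":"search","mode":"balanced","estimated_tokens":120,"truncated":false,"doc_count":5}',
  '{"action":"search","mode":"balanced","estimated_tokens":80,"truncated":true,"doc_count":3}',
].join("\n");

console.log(summarize(sample)); // { search: { count: 2, total_tokens: 200 } }
```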
package/build/server.js CHANGED
@@ -12,17 +12,18 @@ export default function createMcpServer() {
          tools: { listChanged: false },
      },
      instructions: `
-     This server provides access to Obsidian vault documents and related tools.
-
-     Available tools:
-     - obsidian_content_getter: Search, read, and analyze vault documents
-
-     Available resources:
-     - docs://(unknown): Read specific documents from the vault
-
-     Environment requirements:
-     - VAULT_DIR_PATH: Path to your Obsidian vault directory
-     `,
+     This server provides access to Obsidian vault documents and related tools.
+
+     Available tools:
+     - vault: Search, read, and list markdown documents in the vault
+     - generate_property: Generate frontmatter property suggestions from document content
+     - write_property: Write frontmatter properties to a markdown file
+     - create_document_with_properties: Two-step workflow for AI-generated properties and write
+     - organize_attachments: Move linked attachments and update markdown links
+
+     Environment requirements:
+     - VAULT_DIR_PATH: Path to your Obsidian vault directory
+     `,
  });
  for (const tool of Object.values(tools)) {
      tool.register(mcpServer);
@@ -9,11 +9,12 @@ export const annotations = {
      openWorldHint: true,
  };
  export const description = `
-   Initiates an integrated workflow to read a document, guide an AI to generate properties, and then write those properties to a file.
+   Starts and completes a two-step workflow for AI-generated frontmatter properties.
 
-   This tool acts as a workflow manager for an AI agent. It reads the content of a specified document and returns a structured, multi-step plan. The AI agent must follow this plan by first calling the 'generate_obsidian_property' tool to get the document's content for analysis, and then, after generating the properties, calling the 'write_obsidian_property' tool to save them.
+   Step 1: Call this tool with sourcePath (and optional outputPath). It returns a structured instruction payload and a content preview for AI analysis.
+   Step 2: Call this same tool again with aiGeneratedProperties. The tool then writes those properties by executing the same write logic used by the 'write_property' tool.
 
-   Use this tool to start the end-to-end process of enriching a document with AI-generated metadata.
+   Use this tool when an AI agent should orchestrate analysis and write in a consistent workflow.
  `;
  export const register = (mcpServer) => {
      mcpServer.registerTool(name, {
@@ -3,43 +3,19 @@ import { getGlobalVaultManager } from "../../utils/getVaultManager.js";
  import { obsidianPropertyQueryParamsSchema, } from "./params.js";
  export const name = "generate_property";
  export const annotations = {
-     title: "Obsidian Property Writer",
+     title: "Generate Obsidian Property",
      openWorldHint: true,
  };
  export const description = `
-   Analyzes the content of a specified Obsidian Markdown file to automatically generate the most suitable properties (frontmatter) and updates the file directly.
+   Reads a target markdown document and returns an AI-facing payload for generating frontmatter properties.
+
+   This tool does not write to disk. It returns content_preview and a target output schema so an AI can produce a valid property object.
 
    Use Cases:
-
-   - After Completing a Draft: Use when the body of the text is complete, and you want to generate all properties at once.
-   - Updating Information: Use when you want to update existing properties with more accurate information reflecting the latest content.
-   - Completing Missing Info: Use when you want to automatically add missing properties like tags or a summary to a document that only has a title.
-
-   Parameters:
-
-   filename: The name or path of the file to analyze and add properties to (e.g., "my-first-post.md").
-   overwrite: If set to true, existing properties will be overwritten by the AI-generated content. Default: false.
-
-   Generated Properties:
-
-   The AI analyzes the context of the content to generate the following properties:
-
-   - aliases: An array of alternative names or synonyms based on the content.
-   - title: A title that best represents the core topic of the document.
-   - tags: An array of tags extracted from the core keywords of the content (e.g., [AI, Obsidian, productivity]).
-   - summary: A one to two-sentence summary of the entire document.
-   - slug: A hyphenated-string suitable for URLs, containing the core keywords from the content.
-   - date: The event date or creation date inferred from the content (in ISO 8601 format).
-   - completed: A boolean (true or false) indicating whether the content is considered a final version.
-
-   Return Value:
-
-   Upon success, returns a JSON object containing a success message that includes the modified filename.
-   { "status": "success", "message": "Successfully updated properties for my-first-post.md" }
-
-   Requirements:
-
-   The user's absolute path to the Obsidian vault must be correctly set in an environment variable.
+   - After completing a draft, when you need property suggestions from content.
+   - When missing frontmatter fields (title, tags, summary, slug, date, category, completed) should be generated.
+
+   To apply generated properties to a file, call 'write_property' with the resulting JSON.
  `;
  export const register = (mcpServer) => {
      mcpServer.registerTool(name, {
@@ -68,7 +44,7 @@ export const execute = async (params) => {
      content_preview: `${document.content.substring(0, 300).replace(/\s+/g, " ")}...`,
      instructions: {
          purpose: "Generate or update the document's frontmatter properties based on its content.",
-         usage: "Analyze the provided content_preview. If more detail is needed to generate accurate properties, you MUST first call the 'obsidian_vault' tool with the 'read' action to get the full document content.",
+         usage: "Analyze the provided content_preview. If more detail is needed to generate accurate properties, you MUST call the 'vault' tool with action='read' to get the full document content.",
          content_type: "markdown",
          overwrite: params.overwrite || false,
          output_format: "Return a JSON object with the following structure",
@@ -1,8 +1,9 @@
  import state from "../../config.js";
  import { createToolError } from "../../utils/createToolError.js";
  import { getGlobalVaultManager } from "../../utils/getVaultManager.js";
+ import { recordVaultResponseMetric } from "./metrics.js";
  import { obsidianContentQueryParamsZod, } from "./params.js";
- import { listAllDocuments, readSpecificFile, searchDocuments, statsAllDocuments, } from "./utils.js";
+ import { collectContext, listAllDocuments, loadMemory, readSpecificFile, searchDocuments, statsAllDocuments, } from "./utils.js";
  export const name = "vault";
  export const annotations = {
      title: "Obsidian Content Getter",
@@ -44,24 +45,37 @@ export const execute = async (params) => {
          return createToolError(e.message);
      }
      try {
+         let result;
          switch (params.action) {
              case "search":
                  if (!params.keyword?.trim()) {
                      return createToolError("keyword parameter is required for search action", 'Provide a keyword, e.g. { action: "search", keyword: "project" }');
                  }
-                 return await searchDocuments(vaultManager, params);
+                 result = await searchDocuments(vaultManager, params);
+                 break;
              case "read":
                  if (!params.filename?.trim()) {
                      return createToolError("filename parameter is required for read action", 'Provide a filename, e.g. { action: "read", filename: "meeting-notes.md" }');
                  }
-                 return await readSpecificFile(vaultManager, params);
+                 result = await readSpecificFile(vaultManager, params);
+                 break;
              case "list_all":
-                 return await listAllDocuments(vaultManager, params);
+                 result = await listAllDocuments(vaultManager, params);
+                 break;
              case "stats":
-                 return await statsAllDocuments(vaultManager);
+                 result = await statsAllDocuments(vaultManager);
+                 break;
+             case "collect_context":
+                 result = await collectContext(vaultManager, params);
+                 break;
+             case "load_memory":
+                 result = await loadMemory(vaultManager, params);
+                 break;
              default:
-                 return createToolError(`Unknown action: ${params.action}`, "Valid actions are: search, read, list_all, stats");
+                 return createToolError(`Unknown action: ${params.action}`, "Valid actions are: search, read, list_all, stats, collect_context, load_memory");
          }
+         await recordVaultResponseMetric(params.action, result);
+         return result;
      }
      catch (error) {
          return createToolError(`Execution failed: ${error instanceof Error ? error.message : String(error)}`);
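Editor's note: the refactor in this hunk changes each case from an early `return` to `result = …; break` so that a single metrics hook runs before the return. The shape can be sketched as follows, with hypothetical handler and recorder names standing in for the real utilities:

```javascript
// Sketch of the dispatch shape: every known action assigns to `result`,
// so one post-processing hook (metrics) runs exactly once before returning.
async function dispatch(action, handlers, record) {
  let result;
  switch (action) {
    case "search":
    case "read":
    case "list_all":
    case "stats":
    case "collect_context":
    case "load_memory":
      result = await handlers[action]();
      break;
    default:
      // Unknown actions return early and are never recorded.
      return { isError: true, message: `Unknown action: ${action}` };
  }
  await record(action, result);
  return result;
}
```

Per-action parameter validation (keyword, filename) is omitted here; the point is only the single exit path shared by all successful actions.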
@@ -0,0 +1,126 @@
+ import { appendFile, mkdir } from "node:fs/promises";
+ import { dirname } from "node:path";
+ function isJsonObject(value) {
+     return !!value && typeof value === "object" && !Array.isArray(value);
+ }
+ function parseFirstTextPayload(result) {
+     if (!Array.isArray(result.content)) {
+         return null;
+     }
+     let textPayload = null;
+     for (const chunk of result.content) {
+         if (chunk.type !== "text") {
+             continue;
+         }
+         if (!("text" in chunk) || typeof chunk.text !== "string") {
+             continue;
+         }
+         textPayload = chunk.text;
+         break;
+     }
+     if (!textPayload)
+         return null;
+     try {
+         const parsed = JSON.parse(textPayload);
+         return isJsonObject(parsed) ? parsed : null;
+     }
+     catch {
+         return null;
+     }
+ }
+ function parseCompression(compression) {
+     if (!isJsonObject(compression)) {
+         return null;
+     }
+     const mode = compression.mode;
+     const estimatedTokens = compression.estimated_tokens;
+     const truncated = compression.truncated;
+     const outputChars = compression.output_chars;
+     const sourceChars = compression.source_chars;
+     const maxOutputChars = compression.max_output_chars;
+     if ((mode !== "aggressive" && mode !== "balanced" && mode !== "none") ||
+         typeof estimatedTokens !== "number" ||
+         typeof truncated !== "boolean" ||
+         typeof outputChars !== "number" ||
+         typeof sourceChars !== "number" ||
+         (maxOutputChars !== null && typeof maxOutputChars !== "number")) {
+         return null;
+     }
+     return {
+         mode,
+         estimated_tokens: Math.max(0, Math.floor(estimatedTokens)),
+         truncated,
+         output_chars: Math.max(0, Math.floor(outputChars)),
+         source_chars: Math.max(0, Math.floor(sourceChars)),
+         max_output_chars: maxOutputChars === null ? null : Math.max(0, Math.floor(maxOutputChars)),
+     };
+ }
+ function inferDocCount(action, payload) {
+     if (Array.isArray(payload.documents)) {
+         return payload.documents.length;
+     }
+     if (typeof payload.documents_count === "number") {
+         return Math.max(0, Math.floor(payload.documents_count));
+     }
+     if (action === "search" && typeof payload.found === "number") {
+         return Math.max(0, Math.floor(payload.found));
+     }
+     if (action === "read" &&
+         (typeof payload.filename === "string" ||
+             typeof payload.fullPath === "string" ||
+             typeof payload.filePath === "string")) {
+         return 1;
+     }
+     return 0;
+ }
+ function parseCacheHit(payload) {
+     if (!isJsonObject(payload.cache)) {
+         return undefined;
+     }
+     if (typeof payload.cache.hit !== "boolean") {
+         return undefined;
+     }
+     return payload.cache.hit;
+ }
+ export function buildVaultResponseMetric(action, result) {
+     if (result.isError) {
+         return null;
+     }
+     const payload = parseFirstTextPayload(result);
+     if (!payload) {
+         return null;
+     }
+     const compression = parseCompression(payload.compression);
+     if (!compression) {
+         return null;
+     }
+     return {
+         timestamp: new Date().toISOString(),
+         action,
+         mode: compression.mode,
+         estimated_tokens: compression.estimated_tokens,
+         truncated: compression.truncated,
+         doc_count: inferDocCount(action, payload),
+         output_chars: compression.output_chars,
+         source_chars: compression.source_chars,
+         max_output_chars: compression.max_output_chars,
+         cache_hit: parseCacheHit(payload),
+     };
+ }
+ export async function recordVaultResponseMetric(action, result) {
+     const logPath = process.env.VAULT_METRICS_LOG_PATH?.trim();
+     if (!logPath) {
+         return;
+     }
+     const metric = buildVaultResponseMetric(action, result);
+     if (!metric) {
+         return;
+     }
+     try {
+         await mkdir(dirname(logPath), { recursive: true });
+         await appendFile(logPath, `${JSON.stringify(metric)}\n`, "utf8");
+     }
+     catch {
+         // Metrics logging should never fail tool execution.
+     }
+ }
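Editor's note: to illustrate the parsing path of `buildVaultResponseMetric` above, here is a reduced sketch (not the package's exported API) that pulls only `estimated_tokens` out of a tool result; the sample result object is fabricated for the example:

```javascript
// A successful vault response carries a { type: "text", text: "<json>" } chunk
// whose parsed payload includes a `compression` object; extract its token count.
function extractEstimatedTokens(result) {
  if (result.isError || !Array.isArray(result.content)) return null;
  const chunk = result.content.find(
    (c) => c.type === "text" && typeof c.text === "string"
  );
  if (!chunk) return null;
  let payload;
  try {
    payload = JSON.parse(chunk.text);
  } catch {
    return null; // non-JSON text chunks are skipped, as in the module above
  }
  const tokens = payload?.compression?.estimated_tokens;
  return typeof tokens === "number" ? Math.max(0, Math.floor(tokens)) : null;
}

const sampleResult = {
  isError: false,
  content: [
    {
      type: "text",
      text: JSON.stringify({
        compression: { mode: "balanced", estimated_tokens: 812.4, truncated: false },
      }),
    },
  ],
};

console.log(extractEstimatedTokens(sampleResult)); // 812
```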
@@ -2,14 +2,30 @@ import { z } from "zod";
  export const responseTypeSchema = z
      .enum(["text", "audio", "image", "resource", "resource_link"])
      .describe("The type of content being returned");
+ export const compressionModeSchema = z
+     .enum(["aggressive", "balanced", "none"])
+     .default("balanced")
+     .describe("Compression strategy for tool output. aggressive: smallest output, balanced: default, none: keep as much original content as possible.");
+ const maxOutputCharsSchema = z
+     .number()
+     .min(500)
+     .max(12000)
+     .describe("Optional hard cap for output size in characters. Helps control token cost in long responses.");
  const quietMode = z
      .boolean()
      .default(true)
      .describe("If true, suppresses non-error output messages. Default is false.");
  // input properties schema
  export const obsidianContentActions = z
-     .enum(["search", "read", "list_all", "stats"])
-     .describe("The action to perform: search documents, read specific file, list all content, or get stats");
+     .enum([
+         "search",
+         "read",
+         "list_all",
+         "stats",
+         "collect_context",
+         "load_memory",
+     ])
+     .describe("The action to perform: search documents, read specific file, list all content, get stats, collect contextual memory packets, or load stored memory");
  export const obsidianContentKeyword = z
      .string()
      .describe("Keyword to search for in documents (required for search action)");
@@ -35,6 +51,39 @@ export const obsidianContentExcerptLength = z
      .max(2000)
      .default(500)
      .describe("Length of content excerpt to include in search results (default: 500)");
+ export const obsidianContentTopic = z
+     .string()
+     .min(1)
+     .describe("Topic to collect contextual memory for (collect_context action)");
+ export const obsidianContentScope = z
+     .enum(["topic", "all"])
+     .default("topic")
+     .describe("Scope for collect_context. topic: collect docs relevant to topic, all: collect from the entire vault.");
+ export const obsidianContentMaxDocs = z
+     .number()
+     .int()
+     .min(1)
+     .max(100)
+     .default(20)
+     .describe("Maximum number of documents to process for collect_context");
+ export const obsidianContentMaxCharsPerDoc = z
+     .number()
+     .int()
+     .min(200)
+     .max(8000)
+     .default(1800)
+     .describe("Maximum number of characters extracted per document for collect_context");
+ export const obsidianContentMemoryMode = z
+     .enum(["response_only", "vault_note", "both"])
+     .default("response_only")
+     .describe("Memory output mode for collect_context. response_only: return packet only, vault_note: save to vault note only, both: return and save.");
+ export const obsidianContentContinuationToken = z
+     .string()
+     .min(1)
+     .describe("Continuation token to resume a previous collect_context batch operation");
+ export const obsidianContentMemoryPath = z
+     .string()
+     .describe("Path to a stored memory note for load_memory (default: memory/resume_context.v1.md)");
  // input schema
  export const obsidianContentQueryParamsZod = z.object({
      action: obsidianContentActions,
@@ -44,6 +93,15 @@ export const obsidianContentQueryParamsZod = z.object({
      includeContent: obsidianContentIncludeContent.optional(),
      includeFrontmatter: obsidianContentIncludeFrontmatter.optional(),
      excerptLength: obsidianContentExcerptLength.optional(),
+     topic: obsidianContentTopic.optional(),
+     scope: obsidianContentScope.optional(),
+     maxDocs: obsidianContentMaxDocs.optional(),
+     maxCharsPerDoc: obsidianContentMaxCharsPerDoc.optional(),
+     memoryMode: obsidianContentMemoryMode.optional(),
+     continuationToken: obsidianContentContinuationToken.optional(),
+     memoryPath: obsidianContentMemoryPath.optional(),
+     compressionMode: compressionModeSchema.optional(),
+     maxOutputChars: maxOutputCharsSchema.optional(),
      quiet: quietMode.optional(),
  });
  export const aiInstructionsSchema = z
  export const aiInstructionsSchema = z
@@ -0,0 +1,102 @@
1
+ import { z } from "zod";
2
+ import { compressionModeSchema, responseTypeSchema } from "../params.js";
3
+ export const collectContextScopeSchema = z.enum(["topic", "all"]);
4
+ export const collectContextMemoryModeSchema = z.enum([
5
+ "response_only",
6
+ "vault_note",
7
+ "both",
8
+ ]);
9
+ export const collectContextRelevanceSchema = z.enum(["high", "medium", "low"]);
10
+ export const collectContextTokenV1Schema = z.object({
11
+ v: z.literal(1),
12
+ cursor: z.number().int().min(0),
13
+ scope: collectContextScopeSchema,
14
+ topic: z.string().nullable(),
15
+ maxDocs: z.number().int().min(1),
16
+ maxCharsPerDoc: z.number().int().min(200),
17
+ memoryMode: collectContextMemoryModeSchema,
18
+ });
19
+ export const collectContextDocumentSchema = z.object({
20
+ filename: z.string(),
21
+ fullPath: z.string(),
22
+ title: z.string(),
23
+ tags: z.array(z.string()),
24
+ doc_hash: z.string(),
25
+ summary: z.string(),
26
+ excerpt: z.string(),
27
+ evidence_snippets: z.array(z.string()),
28
+ relevance: collectContextRelevanceSchema,
29
+ stats: z.object({
30
+ contentLength: z.number().int().nonnegative(),
31
+ wordCount: z.number().int().nonnegative(),
32
+ hasContent: z.boolean(),
33
+ }),
34
+ backlinks_count: z.number().int().nonnegative(),
35
+ truncated: z.boolean(),
36
+ });
37
+ export const collectContextMemoryPacketSchema = z.object({
38
+ topicSummary: z.string(),
39
+ keyFacts: z.array(z.string()),
40
+ experienceBullets: z.array(z.string()),
41
+ sourceRefs: z.array(z.object({
42
+ filePath: z.string(),
43
+ title: z.string(),
44
+ relevance: collectContextRelevanceSchema,
45
+ evidenceSnippets: z.array(z.string()),
46
+ })),
47
+ openQuestions: z.array(z.string()),
48
+ confidence: z.number().min(0).max(1),
49
+ });
50
+ export const collectContextPayloadSchema = z.object({
51
+ action: z.literal("collect_context"),
52
+ scope: collectContextScopeSchema,
53
+ topic: z.string().nullable(),
54
+ matched_total: z.number().int().nonnegative(),
55
+ total_in_vault: z.number().int().nonnegative(),
56
+ documents: z.array(collectContextDocumentSchema),
57
+ memory_packet: collectContextMemoryPacketSchema,
58
+ memory_mode: collectContextMemoryModeSchema,
59
+ memory_write: z.object({
60
+ requested: z.boolean(),
61
+ status: z.enum(["not_requested", "written", "failed"]),
62
+ note_path: z.string().optional(),
63
+ generated_at: z.string().optional(),
64
+ source_hash: z.string().optional(),
65
+ reason: z.string().optional(),
66
+ }),
67
+ cache: z
68
+ .object({
69
+ key: z.string(),
70
+ hit: z.boolean(),
71
+ schema_version: z.string(),
72
+ topic: z.string().nullable(),
73
+ doc_hash: z.string(),
74
+ mode: collectContextMemoryModeSchema,
75
+ })
76
+ .optional(),
77
+ batch: z.object({
78
+ start_cursor: z.number().int().nonnegative(),
79
+ processed_docs: z.number().int().nonnegative(),
80
+ consumed_candidates: z.number().int().nonnegative(),
81
+ max_docs: z.number().int().positive(),
82
+ max_chars_per_doc: z.number().int().min(200),
83
+ has_more: z.boolean(),
84
+ continuation_token: z.string().nullable(),
85
+ }),
86
+ });
87
+ export const collectContextCompressionSchema = z.object({
88
+ mode: compressionModeSchema,
89
+ source_chars: z.number().int().nonnegative(),
90
+ output_chars: z.number().int().nonnegative(),
91
+ estimated_tokens: z.number().int().nonnegative(),
92
+ max_output_chars: z.number().int().positive().nullable(),
93
+ truncated: z.boolean(),
94
+ expand_hint: z.string(),
95
+ });
96
+ export const collectContextResponseDataSchema = collectContextPayloadSchema.extend({
97
+ compression: collectContextCompressionSchema,
98
+ });
99
+ export const collectContextResponseSchema = z.object({
100
+ type: responseTypeSchema,
101
+ text: collectContextResponseDataSchema,
102
+ });
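Editor's note: the `load_memory` action is described earlier in this diff as parsing the canonical JSON block embedded in `memory/resume_context.v1.md`. A minimal extraction sketch, not the package's actual parser; the note content below is hypothetical:

```javascript
// Pull the first fenced ```json block out of a memory note and parse it.
function extractCanonicalJson(markdown) {
  const match = markdown.match(/```json\s*\n([\s\S]*?)\n```/);
  if (!match) return null;
  try {
    return JSON.parse(match[1]);
  } catch {
    return null;
  }
}

// Hypothetical note: human-readable summary plus a canonical JSON code block.
const note = [
  "# Resume Context",
  "",
  "Human-readable summary of the collected documents...",
  "",
  "```json",
  '{ "schema_version": "v1", "memory_packet": { "keyFacts": [] } }',
  "```",
].join("\n");

console.log(extractCanonicalJson(note).schema_version); // v1
```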