samuraizer 0.0.1 → 0.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +32 -0
- package/LICENSE +20 -0
- package/README.md +160 -112
- package/dist/cli/index.js +19 -1
- package/dist/mcp/server.js +188 -0
- package/dist/tools/context.js +7 -0
- package/dist/tools/types.js +1 -0
- package/package.json +29 -10
package/CHANGELOG.md
ADDED
@@ -0,0 +1,32 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [0.2.0] - 2026-05-06
+
+### Added
+- MCP server support
+- MCP tools for:
+  - audio normalization
+  - transcription
+  - transcript summarization
+  - action items extraction
+  - decision extraction
+  - full recording processing pipeline
+- Compatibility with MCP clients such as Claude Desktop and MCP Inspector
+
+
+## [0.1.0] - 2026-04-26
+
+### Added
+
+- Initial public release
+- `process` command: full pipeline from audio to transcript, summary, action items, and decisions
+- Individual commands: `normalize`, `summarize`, `actions`, `decisions`
+- Configuration system: `samuraizer init` and `samuraizer config` commands
+- Resume processing: skip steps with existing outputs (use `--force` to recompute)
+- Local-first: uses Whisper (whisper.cpp) and Ollama, no cloud APIs required
+- Output formats: plain text (`transcript.txt`, `summary.txt`, `report.txt`) and JSON (`action-items.json`, `decisions.json`)
package/LICENSE
ADDED
@@ -0,0 +1,20 @@
+MIT License
+
+Copyright (c) 2026 UladzKha
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is furnished
+to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE THEREOF.
package/README.md
CHANGED
@@ -1,72 +1,106 @@
 # Samuraizer
 
-
+Turn meeting recordings into transcripts, summaries, action items, and decisions — entirely on your machine. No cloud, no subscriptions, no data leaving your network.
 
-Samuraizer
-- transcript
-- summary
-- action items
-- decisions
-- report
+
 
-
+## Why Samuraizer
 
+- **Fully local.** Your recordings never leave your machine.
+- **CLI-first.** Scriptable, automatable, integrates with cron, Git hooks, Obsidian workflows.
+- **Resumable.** Crashed mid-pipeline? Re-run picks up where it left off.
+- **Model-agnostic.** Works with any Ollama-compatible LLM — pick what fits your hardware.
+- **Free.** No subscriptions, no per-minute pricing.
 
-##
+## 💻 System Requirements
 
-
-
-
-
-
-- 🔧 Simple CLI + config system
-- 🔒 Local-first (no cloud required)
+| RAM    | Recommended model       |
+| ------ | ----------------------- |
+| 8 GB   | `qwen2.5:3b`            |
+| 16 GB  | `qwen2.5:7b`            |
+| 32 GB+ | `qwen2.5:14b` (default) |
 
+Apple Silicon (M1/M2/M3/M4) and recent x86 CPUs with AVX2 are recommended.
+Whisper transcription is CPU/Metal-accelerated; LLM inference uses Ollama's defaults.
 
-##
+## ⚙️ Prerequisites
 
-
-npm install -g samuraizer
-```
+Install the required tools:
 
-
+- **Node.js** ≥ 20 — [nodejs.org](https://nodejs.org/)
+- **ffmpeg** — for audio processing
+- **whisper-cli** — from [whisper.cpp](https://github.com/ggerganov/whisper.cpp)
+- **Ollama** — [ollama.com](https://ollama.com/)
+
+Start Ollama and pull a model:
 
-Make sure you have installed:
 ```bash
-Node.js >= 20
-ffmpeg
-whisper-cli (whisper.cpp)
-Ollama
-Start Ollama
 ollama serve
 ollama pull qwen2.5:14b
 ```
 
+## 📦 Installation
+
+```bash
+npm install -g samuraizer
+```
+
 ## 🚀 Quick Start
+
 ```bash
 samuraizer init
 samuraizer process meeting.m4a
 ```
 
-
+On a 30-minute recording this typically takes 3–5 minutes on Apple Silicon and 8–15 minutes on x86 CPUs, depending on the model.
 
-
+## 🧪 Commands
+
+### Process an audio file
 
-### Initialize config
 ```bash
-samuraizer
+samuraizer process meeting.m4a                    # full pipeline
+samuraizer process meeting.m4a --verbose          # show detailed metadata
+samuraizer process meeting.m4a --force            # recompute all steps
+samuraizer process meeting.m4a --verbose --force
 ```
 
-###
-
-
-
+### Run individual steps
+
+```bash
+samuraizer normalize input.m4a output.wav   # normalize audio for Whisper
+samuraizer summarize transcript.txt         # generate summary from transcript
+samuraizer actions transcript.txt           # extract action items
+samuraizer decisions transcript.txt         # extract decisions
+```
+
+### Configuration
 
-### View config
 ```bash
-samuraizer config
+samuraizer init          # create default config file
+samuraizer config path   # show config file location
+samuraizer config get    # print resolved config as JSON
 ```
+
+### Other
+
+```bash
+samuraizer --help
+samuraizer --version
+```
+
+## ⚙️ Configuration
+
+Samuraizer uses a global JSON config file.
+
+### Config location
+
+- **macOS**: `~/Library/Application Support/samuraizer/config.json`
+- **Linux**: `~/.config/samuraizer/config.json`
+- **Windows**: `%AppData%/samuraizer/config.json`
+
 ### Example config
+
 ```json
 {
   "model": "qwen2.5:14b",
@@ -77,7 +111,7 @@ samuraizer config get
 }
 ```
 
-###
+### Config fields
 
 - **model** — LLM model used for analysis (summary, action items, decisions)
 - **ollamaBaseUrl** — URL where Ollama is running
@@ -85,122 +119,136 @@ samuraizer config get
 - **ffmpegCommand** — Command used for audio processing
 - **ffprobeCommand** — Command used for audio inspection
 
+## 📂 Example output
 
-
-
-### Show help
+After processing, you'll find structured files in `output/<recording-name>/`:
 
-```bash
-samuraizer --help
 ```
-
-
-
-
-
+output/meeting/
+  transcript.txt
+  summary.txt
+  action-items.json
+  decisions.json
+  report.txt
 ```
 
-
+**`summary.txt`**
 
-### Process an audio recording:
-```bash
-samuraizer process meeting.m4a
 ```
-
-
-```bash
-samuraizer process meeting.m4a --verbose
+Team standup focused on Q2 roadmap and infrastructure migration.
+The frontend team will start the Next.js upgrade next week...
 ```
 
-
-```bash
-samuraizer process meeting.m4a --force
-```
+**`action-items.json`**
 
-
-
-
+```json
+[
+  {
+    "owner": "Alice",
+    "task": "Set up staging environment for migration testing",
+    "deadline": "by end of week"
+  },
+  {
+    "owner": "Bob",
+    "task": "Review the auth refactor PR",
+    "deadline": null
+  }
+]
+```
+
+**`decisions.json`**
+
+```json
+[
+  {
+    "decision": "Adopt Next.js 15 for the new dashboard",
+    "rationale": "Better SSR and built-in App Router support"
+  }
+]
 ```
 
-##
+## 🔁 Resume behavior
 
-
-```bash
-samuraizer normalize input.m4a output.wav
-```
+Samuraizer skips steps whose output files already exist. If processing crashes or you stop it mid-pipeline, just re-run the same command — completed steps are reused.
 
-
-```bash
-samuraizer summarize transcript.txt
-```
+Use `--force` to recompute everything from scratch.
 
-
-
-
-```
+## ⚠️ Troubleshooting
+
+### Ollama not running
 
-### Extract decisions from a transcript file:
 ```bash
-
+ollama serve
 ```
 
+### Ollama on a non-default port
 
-
+Update `ollamaBaseUrl` in your config:
 
-
-
-
+```json
+{
+  "ollamaBaseUrl": "http://127.0.0.1:11500"
+}
 ```
 
-###
-
-
-```
+### Out of memory during analysis
+
+Switch to a smaller model:
 
-### Print resolved config as JSON:
 ```bash
-
+ollama pull qwen2.5:7b
 ```
 
+Then update `model` in your config to `qwen2.5:7b` (or `qwen2.5:3b` on machines with 8 GB RAM).
 
-###
-
-
-transcript.txt
-summary.txt
-action-items.json
-decisions.json
-report.txt
-```
+### Model not found
+
+Make sure the model in your config is actually pulled:
 
+```bash
+ollama list
+ollama pull <model-name>
+```
 
-###
+### `whisper-cli` not in PATH
 
-
+Build [whisper.cpp](https://github.com/ggerganov/whisper.cpp) and ensure the binary is on your `PATH`, or set the absolute path in `whisperCommand` in your config.
 
-
+### `ffmpeg` not found
 
+**macOS:**
 
-### ⚠️ Common Issues
-#### Ollama not running
 ```bash
-
+brew install ffmpeg
 ```
 
-
+**Linux:**
 
-macOS:
 ```bash
-
+# Debian / Ubuntu
+sudo apt install ffmpeg
+
+# Arch / CachyOS
+sudo pacman -S ffmpeg
+
+# Fedora
+sudo dnf install ffmpeg
 ```
 
-
-
-
+**Windows:**
+
+```powershell
+winget install Gyan.FFmpeg
+```
 
-
+## 📝 Changelog
+
+See [CHANGELOG.md](./CHANGELOG.md) for release history.
+
+## 📄 License
+
+MIT — see [LICENSE](./LICENSE).
 
-
+## 🔗 Source code
 
-
+Available on GitHub: [github.com/UladzKha/samuraizer-cli](https://github.com/UladzKha/samuraizer-cli)
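The resume behavior the README describes (skip a step when its output file already exists, recompute with `--force`) can be sketched as follows. This is a minimal illustration, not samuraizer's actual internal API; `runStep` is a hypothetical helper.

```javascript
// Sketch of resume-style step skipping, as described in the README.
// `runStep` is a hypothetical helper, not samuraizer's real internals.
import { existsSync, writeFileSync, mkdtempSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

function runStep(outputPath, force, compute) {
  // Reuse an existing output unless the caller forces recomputation.
  if (!force && existsSync(outputPath)) {
    return { skipped: true, outputPath };
  }
  writeFileSync(outputPath, compute());
  return { skipped: false, outputPath };
}

// Demo: the second run finds summary.txt on disk and skips the work.
const dir = mkdtempSync(join(tmpdir(), 'samuraizer-demo-'));
const out = join(dir, 'summary.txt');
const first = runStep(out, false, () => 'summary text');
const second = runStep(out, false, () => 'summary text');
console.log(first.skipped, second.skipped); // false true
```

Passing `true` for `force` would overwrite the file on every run, mirroring `--force`.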
package/dist/cli/index.js
CHANGED
@@ -7,11 +7,17 @@ import { getConfigFilePath } from "../config/paths.js";
 import { processMeeting } from "../orchestrators/process-meeting.js";
 import { runTool } from "../shared/tool-definition.js";
 import { tools } from "../shared/tool-registry.js";
+import { startMcpServer } from "../mcp/server.js";
+import { readFileSync } from 'node:fs';
+import { fileURLToPath } from 'node:url';
+import { dirname, join } from 'node:path';
+const __dirname = dirname(fileURLToPath(import.meta.url));
+const pkg = JSON.parse(readFileSync(join(__dirname, '../../package.json'), 'utf-8'));
 const program = new Command();
 program
     .name("samuraizer")
     .description("Transform meeting recordings into structured knowledge")
-    .version(
+    .version(pkg.version);
 program
     .command("process")
     .description("Run the full pipeline on an audio recording")
@@ -172,4 +178,16 @@ configCommand
         process.exitCode = 1;
     }
 });
+program
+    .command("mcp")
+    .description("Start the MCP server (stdio transport)")
+    .action(async () => {
+    try {
+        await startMcpServer();
+    }
+    catch (error) {
+        console.error(`Error: ${error instanceof Error ? error.message : String(error)}`);
+        process.exitCode = 1;
+    }
+});
 program.parse(process.argv);
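The `.version(...)` change above swaps a hard-coded string for the package's own manifest, read at runtime via the ESM `__dirname` reconstruction. A standalone sketch of that pattern, using a throwaway `package.json` in a temp directory rather than samuraizer's real `'../../package.json'` path:

```javascript
// ESM modules have no __dirname; rebuild it from import.meta.url,
// then read a manifest relative to it, as the CLI diff above does.
import { readFileSync, writeFileSync, mkdtempSync } from 'node:fs';
import { fileURLToPath } from 'node:url';
import { dirname, join } from 'node:path';
import { tmpdir } from 'node:os';

const __dirname = dirname(fileURLToPath(import.meta.url));
console.log(typeof __dirname); // string

// Throwaway manifest standing in for the real package.json:
const dir = mkdtempSync(join(tmpdir(), 'pkg-demo-'));
writeFileSync(join(dir, 'package.json'), JSON.stringify({ version: '0.2.0' }));
const pkg = JSON.parse(readFileSync(join(dir, 'package.json'), 'utf-8'));
console.log(pkg.version); // 0.2.0
```

Reading the version this way keeps `samuraizer --version` in sync with the published manifest, with no string to forget during a release.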
package/dist/mcp/server.js
ADDED
@@ -0,0 +1,188 @@
+import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
+import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
+import { z } from 'zod';
+import { createContext } from '../tools/context.js';
+import { runTool } from '../shared/tool-definition.js';
+import { tools } from '../shared/tool-registry.js';
+const server = new McpServer({
+    name: 'samuraizer',
+    version: '0.1.0',
+}, {
+    instructions: 'Use process_recording for the full pipeline. ' +
+        'Use individual tools (normalize_audio, transcribe_audio, summarize_transcript, ' +
+        'extract_action_items, extract_decisions) for step-by-step processing.',
+});
+server.registerTool('normalize_audio', {
+    title: 'Normalize Audio',
+    description: 'Normalize an audio file to 16kHz mono PCM WAV format required by Whisper.',
+    inputSchema: {
+        inputPath: z.string().describe('Absolute path to the source audio file'),
+        outputPath: z.string().describe('Absolute path for the output WAV file'),
+    },
+}, async ({ inputPath, outputPath }) => {
+    const ctx = await createContext();
+    try {
+        const result = await runTool(tools.normalize_audio, {
+            inputPath,
+            outputPath,
+            ffmpegCommand: ctx.config.ffmpegCommand,
+        });
+        return {
+            content: [{ type: 'text', text: result.normalizedAudioPath }],
+        };
+    }
+    catch (err) {
+        const message = err instanceof Error ? err.message : String(err);
+        return {
+            content: [{ type: 'text', text: `Error normalizing audio: ${message}` }],
+            isError: true,
+        };
+    }
+});
+server.registerTool('transcribe_audio', {
+    title: 'Transcribe Audio',
+    description: 'Transcribe an audio file using whisper.cpp. Returns the transcript text.',
+    inputSchema: {
+        filePath: z.string().describe('Absolute path to the audio file'),
+    },
+}, async ({ filePath }) => {
+    const ctx = await createContext();
+    try {
+        const result = await runTool(tools.transcribe_audio, {
+            audioPath: filePath,
+            outputDir: ctx.config.outputDir ?? 'output',
+            modelPath: ctx.config.whisperModelPath,
+            language: ctx.config.language,
+            whisperCommand: ctx.config.whisperCommand,
+        });
+        return {
+            content: [{ type: 'text', text: result.text }],
+        };
+    }
+    catch (err) {
+        const message = err instanceof Error ? err.message : String(err);
+        return {
+            content: [{ type: 'text', text: `Error transcribing audio: ${message}` }],
+            isError: true,
+        };
+    }
+});
+server.registerTool('summarize_transcript', {
+    title: 'Summarize Transcript',
+    description: 'Generate a concise meeting summary from transcript text.',
+    inputSchema: {
+        transcriptText: z.string().describe('Full transcript text to summarize'),
+        model: z.string().optional().describe('Ollama model to use (optional)'),
+    },
+}, async ({ transcriptText, model }) => {
+    const ctx = await createContext();
+    try {
+        const result = await runTool(tools.summarize_transcript, {
+            transcriptText,
+            model: model ?? ctx.config.model,
+            ollamaBaseUrl: ctx.config.ollamaBaseUrl,
+        });
+        return {
+            content: [{ type: 'text', text: result.summary }],
+        };
+    }
+    catch (err) {
+        const message = err instanceof Error ? err.message : String(err);
+        return {
+            content: [{ type: 'text', text: `Error summarizing transcript: ${message}` }],
+            isError: true,
+        };
+    }
+});
+server.registerTool('extract_action_items', {
+    title: 'Extract Action Items',
+    description: 'Extract action items from a meeting transcript. Returns a JSON list of tasks with owner and due date.',
+    inputSchema: {
+        transcriptText: z.string().describe('Full transcript text'),
+        model: z.string().optional().describe('Ollama model to use (optional)'),
+    },
+}, async ({ transcriptText, model }) => {
+    const ctx = await createContext();
+    try {
+        const result = await runTool(tools.extract_action_items, {
+            transcriptText,
+            model: model ?? ctx.config.model,
+            ollamaBaseUrl: ctx.config.ollamaBaseUrl,
+        });
+        return {
+            content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
+        };
+    }
+    catch (err) {
+        const message = err instanceof Error ? err.message : String(err);
+        return {
+            content: [{ type: 'text', text: `Error extracting action items: ${message}` }],
+            isError: true,
+        };
+    }
+});
+server.registerTool('extract_decisions', {
+    title: 'Extract Decisions',
+    description: 'Extract confirmed decisions from a meeting transcript. Returns a JSON list of decisions.',
+    inputSchema: {
+        transcriptText: z.string().describe('Full transcript text'),
+        model: z.string().optional().describe('Ollama model to use (optional)'),
+    },
+}, async ({ transcriptText, model }) => {
+    const ctx = await createContext();
+    try {
+        const result = await runTool(tools.extract_decisions, {
+            transcriptText,
+            model: model ?? ctx.config.model,
+            ollamaBaseUrl: ctx.config.ollamaBaseUrl,
+        });
+        return {
+            content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
+        };
+    }
+    catch (err) {
+        const message = err instanceof Error ? err.message : String(err);
+        return {
+            content: [{ type: 'text', text: `Error extracting decisions: ${message}` }],
+        };
+    }
+});
+server.registerTool('process_recording', {
+    title: 'Process Recording',
+    description: 'Run the full Samuraizer pipeline on an audio file. Returns summary, action items, decisions, and output file paths.',
+    inputSchema: {
+        filePath: z.string().describe('Absolute path to the audio file'),
+        model: z.string().optional().describe('Ollama model to use (optional)'),
+    },
+}, async ({ filePath, model }) => {
+    const ctx = await createContext();
+    try {
+        const { processMeeting } = await import('../orchestrators/process-meeting.js');
+        const result = await processMeeting({
+            inputPath: filePath,
+            outputRootDir: ctx.config.outputDir,
+            model: model ?? ctx.config.model,
+            ollamaBaseUrl: ctx.config.ollamaBaseUrl,
+            whisperCommand: ctx.config.whisperCommand,
+            whisperModelPath: ctx.config.whisperModelPath,
+            language: ctx.config.language,
+            ffmpegCommand: ctx.config.ffmpegCommand,
+            ffprobeCommand: ctx.config.ffprobeCommand,
+        });
+        return {
+            content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
+        };
+    }
+    catch (err) {
+        const message = err instanceof Error ? err.message : String(err);
+        return {
+            content: [{ type: 'text', text: `Error processing recording: ${message}` }],
+            isError: true,
+        };
+    }
+});
+export async function startMcpServer() {
+    const transport = new StdioServerTransport();
+    await server.connect(transport);
+    process.stderr.write('Samuraizer MCP server started and listening for requests...\n');
+}
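With the stdio transport above, the server can be wired into an MCP client. A sketch of a Claude Desktop entry, assuming the standard `mcpServers` format of `claude_desktop_config.json` and a globally installed `samuraizer` binary (the `"samuraizer"` key name is arbitrary):

```json
{
  "mcpServers": {
    "samuraizer": {
      "command": "samuraizer",
      "args": ["mcp"]
    }
  }
}
```

MCP Inspector can exercise the same stdio server interactively for debugging.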
package/dist/tools/types.js
ADDED
@@ -0,0 +1 @@
+export {};
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "samuraizer",
-  "version": "0.0.1",
+  "version": "0.2.0",
   "type": "module",
   "bin": {
     "samuraizer": "./dist/cli/index.js"
@@ -10,26 +10,45 @@
     "process": "tsx src/cli/index.ts process",
     "build": "tsc",
     "typecheck": "tsc --noEmit",
-    "prepack": "npm run build"
+    "prepack": "npm run build",
+    "prepublishOnly": "npm run typecheck && npm run build",
+    "mcp": "tsx src/cli/index.ts mcp",
+    "mcp:dist": "node dist/cli/index.js mcp"
   },
   "files": [
     "dist",
-    "README.md"
+    "README.md",
+    "LICENSE",
+    "CHANGELOG.md"
   ],
   "engines": {
     "node": ">=20"
   },
   "keywords": [
-    "cli",
     "transcription",
-    "
-    "ollama",
+    "meeting-notes",
     "whisper",
-    "
+    "ollama",
+    "local-llm",
+    "speech-to-text",
+    "cli",
+    "self-hosted",
+    "privacy"
   ],
-  "license": "
-  "description": "CLI
+  "license": "MIT",
+  "description": "Local-first CLI that turns meeting recordings into transcripts, summaries, action items, and decisions",
+  "author": "Uladz Kha <UladzKha@users.noreply.github.com>",
+  "funding": "https://github.com/sponsors/UladzKha",
+  "repository": {
+    "type": "git",
+    "url": "git+https://github.com/UladzKha/samuraizer-cli.git"
+  },
+  "homepage": "https://github.com/UladzKha/samuraizer-cli#readme",
+  "bugs": {
+    "url": "https://github.com/UladzKha/samuraizer-cli/issues"
+  },
   "dependencies": {
+    "@modelcontextprotocol/sdk": "^1.29.0",
     "commander": "^14.0.3",
     "execa": "^9.6.1",
     "zod": "^4.3.6"
@@ -39,4 +58,4 @@
     "tsx": "^4.21.0",
     "typescript": "^5.9.3"
   }
-}
+}