@henryavila/mdprobe 0.1.0 → 0.2.0

@@ -1,358 +1,223 @@
  ---
- name: mdprobe
- description: Use mdprobe to render markdown in the browser and collect structured human feedback (annotations, section approvals) via YAML sidecar files
+ name: mdProbe
+ description: Human review tool for any content >20 lines. BEFORE asking for
+ feedback on findings, specs, plans, analysis, or any long output, save to
+ file and open with mdprobe_view. Renders markdown with annotations, section
+ approval, and structured feedback via YAML sidecars.
  ---

- # mdprobe — Markdown Viewer & Reviewer
-
- Render markdown in the browser. Collect structured feedback from humans. Read it back as YAML.
+ # mdProbe — Markdown Reviewer (MCP)

  ## When to Use

- - Output longer than 40-60 lines (specs, RFCs, ADRs, design docs)
+ - ANY content >20 lines that needs human review (findings, specs, plans, analysis, validation lists)
+ - Generating, editing, or referencing `.md` files
  - Tables, Mermaid diagrams, math/LaTeX, syntax-highlighted code
- - When you need the human to **review and annotate** before you proceed
- - When you need **section-level approval** (approved/rejected per heading)
+ - Human needs to **review and annotate** before you proceed
+ - You need **section-level approval** (approved/rejected per heading)

  ## When NOT to Use

  - Short answers or code snippets (< 40 lines)
- - Simple text responses
+ - Simple text responses with no markdown files involved
  - Interactive debugging sessions

- ---
+ ## Anti-pattern: Inline Review

- ## View Mode — render and continue working
+ **NEVER present content >20 lines inline in conversation for human review.**

- Write markdown to a file, launch mdprobe in the background. The human reads while you keep working.
+ This includes specs, findings, plans, analysis, validation lists — any long output
+ that the human needs to read and evaluate. Terminal scrolling is bad UX: no annotations,
+ no section approval, no rendered tables/diagrams.

- ```bash
- # Write your output
- cat > output.md << 'EOF'
- # Your spec here
- EOF
+ **Decision rule:**
+ - Content >20 lines AND purpose is review/feedback?
+ Format as markdown → `mdprobe_view({ content, filename })` → wait for feedback
+ - Content <20 lines OR purely informational (no review needed)?
+ → Show inline in conversation

- # Launch viewer (browser opens automatically, process runs in background)
- mdprobe output.md &
- ```
+ If you catch yourself pasting a long code block, spec, or findings list in the
+ conversation and asking "what do you think?" — STOP. Save it to a file and use mdProbe.

- Use `run_in_background: true` when calling via Bash tool. Add `--no-open` if you don't want the browser to auto-open.
+ ---

- The server watches for file changes — if you update the `.md` file, the browser hot-reloads automatically.
+ ## MCP Tools

- ## Review Mode — block until human finishes
+ | Tool | Input | Output | Purpose |
+ |------|-------|--------|---------|
+ | `mdprobe_view` | `{ paths?, content?, filename?, open? }` | `{ url, files, savedTo? }` | Open content in browser for human review |
+ | `mdprobe_annotations` | `{ path }` | `{ source, sections, annotations, summary }` | Read annotations after human review |
+ | `mdprobe_update` | `{ path, actions[] }` | `{ updated, annotations, summary }` | Resolve, reopen, reply, add, or delete annotations |
+ | `mdprobe_status` | `{}` | `{ running, url?, files? }` | Check if server is running |

- When you need the human to review and annotate before you continue:
+ ---

- ```bash
- # This BLOCKS until the human clicks "Finish Review" in the browser
- mdprobe spec.md --once
- ```
+ ## Rules

- The process prints paths to generated `.annotations.yaml` files on exit. Exit code 0 means review complete.
+ ### Rule 1 — Always show URL when citing .md

- ### Full agent workflow
+ Whenever you mention a `.md` file path in output, call `mdprobe_view` and show the URL.

- ```
- 1. Agent writes spec.md
- 2. Agent runs: mdprobe spec.md --once (process BLOCKS here)
- 3. Human opens browser, reads the rendered markdown
- 4. Human selects text → adds annotations (bug, question, suggestion, nitpick)
- 5. Human approves/rejects sections via heading buttons
- 6. Human clicks "Finish Review"
- 7. Process unblocks, prints YAML paths to stdout
- 8. Agent reads spec.annotations.yaml
- 9. Agent addresses each annotation
- ```
+ **Single file:**

- ---
+ > mdprobe_view({ paths: ["docs/spec.md"] })

- ## Reading Annotations
-
- After review, load the YAML sidecar and process the feedback:
-
- ```javascript
- import { AnnotationFile } from '@henryavila/mdprobe/annotations'
-
- const af = await AnnotationFile.load('spec.annotations.yaml')
-
- // Query annotations
- const open = af.getOpen() // all unresolved annotations
- const bugs = af.getByTag('bug') // only bugs
- const questions = af.getByTag('question')
- const mine = af.getByAuthor('Alice')
- const resolved = af.getResolved() // already handled
- const one = af.getById('a1b2c3d4') // specific annotation
-
- // Each annotation has:
- // {
- // id, selectors: { position: { startLine, startColumn, endLine, endColumn },
- // quote: { exact, prefix, suffix } },
- // comment, tag, status, author, created_at, updated_at,
- // replies: [{ author, comment, created_at }]
- // }
-
- // Process feedback
- for (const ann of open) {
- console.log(`[${ann.tag}] Line ${ann.selectors.position.startLine}: ${ann.comment}`)
- if (ann.replies.length > 0) {
- for (const reply of ann.replies) {
- console.log(` ↳ ${reply.author}: ${reply.comment}`)
- }
- }
- }
-
- // Mark as handled
- af.resolve(bugs[0].id)
- await af.save('spec.annotations.yaml')
- ```
+ Show: `📄 docs/spec.md → http://{urlStyle}:{port}/spec.md`

- ## Checking Section Approvals
+ **Multiple files in one response — one combined call:**

- The human can approve or reject each section (heading) of the document. Check the status:
+ > mdprobe_view({ paths: ["docs/spec.md", "docs/batch-1.md", "docs/batch-2.md"] })

- ```javascript
- const af = await AnnotationFile.load('spec.annotations.yaml')
+ Show: `📄 3 files → http://{urlStyle}:{port}`

- // sections: [{ heading, level, status }]
- // status is: 'approved', 'rejected', or 'pending'
- for (const section of af.sections) {
- console.log(`${section.heading}: ${section.status}`)
- }
+ ### Rule 2 — Review opens automatically
 
- // Check if all sections were approved
- const allApproved = af.sections.every(s => s.status === 'approved')
+ When the context is review (human needs to read and give feedback now):

- // Find rejected sections that need rework
- const rejected = af.sections.filter(s => s.status === 'rejected')
- ```
+ > mdprobe_view({ paths: ["spec.md"], open: true })

- Approval cascades: if the human approves a parent heading (e.g., H2), all child headings (H3, H4...) under it are also approved. Same for reject and reset.
+ Show:

- ## Export Formats
-
- ```javascript
- import { exportJSON, exportSARIF, exportReport } from '@henryavila/mdprobe/export'
- import { readFile } from 'node:fs/promises'
-
- const af = await AnnotationFile.load('spec.annotations.yaml')
- const source = await readFile('spec.md', 'utf-8')
-
- const json = exportJSON(af) // plain JS object
- const sarif = exportSARIF(af, 'spec.md') // SARIF 2.1.0 (open annotations only)
- const report = exportReport(af, source) // markdown review report
+ ```
+ 📄 Opened for review in the browser.
+ Add your comments. Let me know when you're done.
  ```

- SARIF maps tags to severity: `bug` = error, `suggestion` = warning, `question`/`nitpick` = note.
+ For multiple files: `📄 3 specs opened for review in the browser.`
 
- ## Annotation Tags
+ ### Rule 3 — Read, address, and resolve annotations

- | Tag | Meaning | When the human uses it |
- |-----|---------|----------------------|
- | `bug` | Something is wrong | Factual errors, incorrect logic, broken examples |
- | `question` | Needs clarification | Ambiguous requirements, missing context |
- | `suggestion` | Improvement idea | Better approach, additional feature, alternative |
- | `nitpick` | Minor style/wording | Typos, formatting, naming preferences |
+ When the human says they finished reviewing:

- ## Interacting with Annotations
+ 1. Read annotations for each reviewed file:
+ > mdprobe_annotations({ path: "spec.md" })

- ### Resolving annotations after you fix them
+ 2. Process each open annotation:
+ - `bug` — fix the issue
+ - `question` — answer or clarify
+ - `suggestion` — evaluate and implement or justify skipping
+ - `nitpick` — fix if trivial

- After addressing an annotation, mark it as resolved so the human knows it's been handled:
+ 3. Report what was addressed and **ask the human to confirm** before resolving.

- ```javascript
- import { AnnotationFile } from '@henryavila/mdprobe/annotations'
+ 4. After confirmation, resolve:
+ > mdprobe_update({ path: "spec.md", actions: [
+ > { action: "resolve", id: "a1b2c3d4" },
+ > { action: "resolve", id: "e5f6g7h8" }
+ > ]})

- const af = await AnnotationFile.load('spec.annotations.yaml')
+ 5. Human sees resolved annotations in real-time in the browser (greyed out).

- for (const ann of af.getOpen()) {
- // Process the annotation (fix the bug, answer the question, etc.)
+ **Never resolve without confirmation.** Always ask first.

- // Mark as resolved
- af.resolve(ann.id)
- }
+ ### Rule 4 — Reply to explain decisions

- // Persist changes — the human will see these as resolved in the UI
- await af.save('spec.annotations.yaml')
- ```
+ When the fix isn't self-evident, add a reply explaining what was done before resolving:

- ### Replying to annotations
+ > mdprobe_update({ path: "spec.md", actions: [
+ > { action: "reply", id: "a1b2", comment: "Changed to PostgreSQL per ADR-003" },
+ > { action: "resolve", id: "a1b2" }
+ > ]})

- Add a reply to explain what you did, ask for clarification, or acknowledge the feedback:
+ When you **disagree** with a suggestion, reply with justification but do NOT resolve — leave it open for the human to decide:

- ```javascript
- const af = await AnnotationFile.load('spec.annotations.yaml')
+ > mdprobe_update({ path: "spec.md", actions: [
+ > { action: "reply", id: "c3d4", comment: "Keeping current approach because X. Let me know if you still want to change." }
+ > ]})

- const bugs = af.getByTag('bug')
- for (const bug of bugs) {
- af.addReply(bug.id, {
- author: 'Agent',
- comment: `Fixed in commit abc123. Changed line ${bug.selectors.position.startLine}.`,
- })
- af.resolve(bug.id)
- }
+ ### Rule 5 — Pre-annotate areas of uncertainty

- await af.save('spec.annotations.yaml')
- ```
+ Before asking for review, create annotations to guide the human's attention:

- ### Creating annotations before human review
+ > mdprobe_update({ path: "spec.md", actions: [
+ > { action: "add",
+ > selectors: { position: { startLine: 42, startColumn: 1, endLine: 42, endColumn: 50 },
+ > quote: { exact: "Rate limit: 100/min", prefix: "", suffix: "" } },
+ > comment: "Is 100/min enough? Load test showed 300/min spikes.",
+ > tag: "question" },
+ > { action: "add",
+ > selectors: { position: { startLine: 78, startColumn: 1, endLine: 78, endColumn: 30 },
+ > quote: { exact: "Auth via JWT", prefix: "", suffix: "" } },
+ > comment: "Two options: JWT or session cookies. I went with JWT for statelessness. OK?",
+ > tag: "suggestion" }
+ > ]})

- Pre-annotate sections you're unsure about, so the human knows where to focus:
+ The human sees these annotations already in the browser when they start reviewing.

- ```javascript
- const af = await AnnotationFile.load('spec.annotations.yaml')
+ ### Rule 6 — Delete only own annotations

- af.add({
- selectors: {
- position: { startLine: 42, startColumn: 1, endLine: 42, endColumn: 60 },
- quote: { exact: 'Rate limit: 100 requests per minute', prefix: '', suffix: '' },
- },
- comment: 'Is 100/min enough? The load test showed spikes of 300/min.',
- tag: 'question',
- author: 'Agent',
- })
+ You may delete your own annotations (where `author` matches your name) if they become irrelevant after changes. **Never delete human annotations** — resolve or reply instead.

- await af.save('spec.annotations.yaml')
- ```
+ > mdprobe_update({ path: "spec.md", actions: [
+ > { action: "delete", id: "my-annotation-id" }
+ > ]})

- ### Interacting via HTTP API (while server is running)
-
- If the server is running (view mode), you can interact without touching the YAML file directly:
-
- ```bash
- # Create an annotation
- curl -X POST http://127.0.0.1:3000/api/annotations -H 'Content-Type: application/json' -d '{
- "file": "spec.md",
- "action": "add",
- "data": {
- "selectors": {
- "position": { "startLine": 10, "startColumn": 1, "endLine": 10, "endColumn": 40 },
- "quote": { "exact": "text to annotate", "prefix": "", "suffix": "" }
- },
- "comment": "This needs work",
- "tag": "suggestion",
- "author": "Agent"
- }
- }'
-
- # Resolve an annotation
- curl -X POST http://127.0.0.1:3000/api/annotations -H 'Content-Type: application/json' -d '{
- "file": "spec.md",
- "action": "resolve",
- "data": { "id": "a1b2c3d4" }
- }'
-
- # Add a reply
- curl -X POST http://127.0.0.1:3000/api/annotations -H 'Content-Type: application/json' -d '{
- "file": "spec.md",
- "action": "reply",
- "data": { "id": "a1b2c3d4", "author": "Agent", "comment": "Fixed." }
- }'
-
- # Approve a section
- curl -X POST http://127.0.0.1:3000/api/sections -H 'Content-Type: application/json' -d '{
- "file": "spec.md",
- "action": "approve",
- "heading": "Requirements"
- }'
- ```
+ ### Rule 7 — No `--once` in Claude Code

- The browser auto-updates when annotations change — the human sees your replies and resolutions in real time.
+ The `--once` blocking mode is for scripted/CI use. In Claude Code, the human signals "done" via chat. Read annotations on demand via `mdprobe_annotations`.

- ### Iterative review loop
+ Do NOT run `mdprobe spec.md --once` in Claude Code sessions.

- When the first review produces feedback, fix the issues and re-launch for a second pass:
+ ### Rule 8 — Draft and review in one step

- ```
- Round 1:
- 1. Agent writes spec.md
- 2. mdprobe spec.md --once → human annotates 5 bugs, 3 questions
- 3. Agent reads feedback, fixes all 5 bugs, answers 3 questions
- 4. Agent marks all 8 as resolved, adds replies explaining fixes
-
- Round 2:
- 5. Agent re-launches: mdprobe spec.md --once
- 6. Human sees resolved items (greyed out), reviews fixes
- 7. Human adds 1 new nitpick, approves all sections
- 8. Agent reads feedback — 1 nitpick to fix, all sections approved
- 9. Done — proceed to implementation
- ```
+ When you have ANY content >20 lines that needs human review, use the `content`
+ parameter instead of presenting it inline in the conversation:

- ```javascript
- // After fixing issues from round 1:
- const af = await AnnotationFile.load('spec.annotations.yaml')
-
- // Mark everything as resolved with explanations
- for (const ann of af.getOpen()) {
- af.addReply(ann.id, {
- author: 'Agent',
- comment: 'Addressed in updated spec.',
- })
- af.resolve(ann.id)
- }
- await af.save('spec.annotations.yaml')
-
- // Re-launch for round 2
- // exec: mdprobe spec.md --once
- ```
+ > mdprobe_view({ content: "# Analysis\n\n| Finding | Severity |\n...", filename: "analysis.md", open: true })

- ## Drift Detection
+ This saves the file AND opens it for review in one call.
+ Format the content as markdown for best rendering (headings, lists, tables, code blocks).
+ You generate the content, so you control the format — there's no parser limitation.

- If you modify the source `.md` after annotations were created, mdprobe warns the human that the source has changed (annotations may be stale). The hash is stored in the YAML:
+ ---

- ```yaml
- source_hash: "sha256:abc123..."
- ```
+ ## Review Workflow
+
+ 1. Agent writes or edits `.md` file(s).
+ 2. Agent calls `mdprobe_view({ paths, open: true })` — browser opens automatically.
+ 3. Agent shows review message and waits for the human.
+ 4. Human reads rendered markdown in the browser.
+ 5. Human selects text and adds annotations (bug, question, suggestion, nitpick).
+ 6. Human approves/rejects sections via heading buttons.
+ 7. Human tells the agent they are done (via chat).
+ 8. Agent calls `mdprobe_annotations({ path })` for each file.
+ 9. Agent processes each open annotation — fixes bugs, answers questions, evaluates suggestions.
+ 10. Agent reports what was addressed and asks the human to confirm.
+ 11. After confirmation, agent calls `mdprobe_update` to resolve annotations with replies.
+ 12. Human sees resolved annotations in real-time (greyed out in browser).
+ 13. If new issues remain, repeat from step 4.

- ## Schema Validation
+ ---

- A JSON Schema is available for validating annotation YAML files:
+ ## Annotation Tags

- ```javascript
- import schema from '@henryavila/mdprobe/schema.json'
- ```
+ | Tag | Meaning | When used |
+ |-----|---------|-----------|
+ | `bug` | Something is wrong | Factual errors, incorrect logic, broken examples |
+ | `question` | Needs clarification | Ambiguous requirements, missing context |
+ | `suggestion` | Improvement idea | Better approach, additional feature, alternative |
+ | `nitpick` | Minor style/wording | Typos, formatting, naming preferences |

  ---

- ## Recommended Patterns
+ ## Section Approval

- ### Pattern: spec review before implementation
+ The human can approve or reject each section (heading) of the document via buttons in the browser UI.

- ```bash
- # 1. Write the spec
- cat > spec.md << 'SPEC'
- # Feature: User Authentication
- ## Requirements
- ...
- SPEC
+ **Cascade behavior:** Approving a parent heading (e.g., H2) automatically approves all child headings (H3, H4, ...) under it. Same for reject and reset.

- # 2. Get human review (blocks until done)
- mdprobe spec.md --once
+ **Checking approval status:**

- # 3. Read feedback
- node -e "
- import { AnnotationFile } from '@henryavila/mdprobe/annotations'
- const af = await AnnotationFile.load('spec.annotations.yaml')
- console.log(JSON.stringify(af.getOpen(), null, 2))
- "
- ```
-
- ### Pattern: background viewer while working
+ > mdprobe_annotations({ path: "spec.md" })

- ```bash
- # Start viewer in background
- mdprobe docs/ --no-open &
+ The response includes a `sections` array:

- # Continue working — browser shows rendered docs with live reload
- # Human reads at their own pace
  ```
-
- ### Pattern: check if human approved all sections
-
- ```javascript
- const af = await AnnotationFile.load('spec.annotations.yaml')
- const pending = af.sections.filter(s => s.status !== 'approved')
- if (pending.length > 0) {
- console.log('Sections not yet approved:', pending.map(s => s.heading))
- }
+ sections: [
+ { heading: "Requirements", level: 2, status: "approved" },
+ { heading: "Architecture", level: 2, status: "rejected" },
+ { heading: "API Design", level: 3, status: "pending" }
+ ]
  ```
+
+ All sections must be `approved` and all annotations resolved before the document is considered fully reviewed.
@@ -403,7 +403,7 @@ export class AnnotationFile {
  {
  tool: {
  driver: {
- name: 'mdprobe',
+ name: 'mdProbe',
  version: '0.1.0',
  informationUri: 'https://github.com/henryavila/mdprobe',
  },
package/src/export.js CHANGED
@@ -179,7 +179,7 @@ export function exportSARIF(af, sourceFilePath) {
  {
  tool: {
  driver: {
- name: 'mdprobe',
+ name: 'mdProbe',
  version: '0.1.0',
  informationUri: 'https://github.com/henryavila/mdprobe',
  },
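The `exportSARIF` driver renamed in these hunks maps annotation tags to SARIF severities, per the 0.1.0 docs removed above (`bug` = error, `suggestion` = warning, `question`/`nitpick` = note). A minimal sketch of that mapping — illustrative only, not the package's actual implementation:

```javascript
// Tag → SARIF result.level mapping as described in the 0.1.0 docs.
// Illustrative sketch — exportSARIF's internals may differ.
function sarifLevelForTag(tag) {
  switch (tag) {
    case 'bug':        return 'error';
    case 'suggestion': return 'warning';
    case 'question':
    case 'nitpick':    return 'note';
    default:           return 'none'; // unknown tags carry no severity
  }
}

console.log(sarifLevelForTag('bug')); // error
```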