@maxkle1nz/m1nd 0.9.0-beta.0

@@ -0,0 +1,172 @@
1
+ # m1nd Routing Playbooks
2
+
3
+ This file is the operational map: which `m1nd` tools to choose for each kind of job, and in what order.
4
+
5
+ ## Decision Rules
6
+
7
+ - Start with `m1nd` unless the task is obviously outside its lane.
8
+ - Use `search` when the user already knows the string or regex.
9
+ - Use `glob` when the user knows the path shape but not the contents.
10
+ - Use `view` when the file is already known and bounded reading is enough.
11
+ - Use `seek` when the user knows what the code does but not where it lives.
12
+ - Use `activate` when the user needs a subsystem or neighborhood map.
13
+ - Use `perspective_*` only when navigation state is worth the overhead.
14
+ - Use `impact`, `validate_plan`, and the surgical tools before risky edits.
15
+ - Use `trace` when you already have failure text.
16
+ - Use `document_*` when docs, wiki pages, PDFs, or specs must stay grounded in code.
17
+ - Use daemon, alerts, trails, and locks when the session is long-lived or multi-agent.
18
+
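The decision rules above can be sketched as a tiny first-pass router. Everything here is illustrative: the boolean task-descriptor keys are invented for this sketch, and only the returned tool names come from the playbook itself.

```python
# Hypothetical first-pass router over the decision rules above.
# The descriptor keys are invented; only the returned tool names
# are real m1nd tools from this playbook.

def route(task: dict) -> str:
    if task.get("knows_string"):       # exact string or regex in hand
        return "search"
    if task.get("knows_path_shape"):   # path pattern, unknown contents
        return "glob"
    if task.get("knows_file"):         # bounded read of a known file
        return "view"
    if task.get("knows_behavior"):     # knows what the code does, not where
        return "seek"
    if task.get("has_failure_text"):   # runtime error output available
        return "trace"
    return "activate"                  # default: subsystem or topic map
```

The real rules are softer than this (several can apply at once); the sketch only captures the priority feel of the list.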
19
+ ## Workflow 1: First Contact With An Unfamiliar Repo
20
+
21
+ 1. `health`
22
+ 2. `ingest`
23
+ 3. `audit`
24
+ 4. `search`, `seek`, or `activate`
25
+ 5. `batch_view` or `view`
26
+ 6. `coverage_session` if the investigation is getting wide
27
+
28
+ Choose `audit` when you want the fastest high-level orientation. Choose `activate` after `audit` when the next question is subsystem-shaped. Choose `seek` after `audit` when the next question is intent-shaped.
29
+
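The first-contact sequence above can be driven through the bundled probe helper in one `run` invocation. The `agent_id` and repo path below are placeholders, and the argument shapes are assumptions; the live `help` tool is the authority on real schemas.

```python
import json

# Workflow 1 as steps for `scripts/probe_m1nd.py run`.
# "codex-m1nd" and "/path/to/repo" are placeholder values, and the
# argument shapes are assumptions; check the live `help` tool.
steps = [
    {"name": "health", "arguments": {"agent_id": "codex-m1nd"}},
    {"name": "ingest", "arguments": {"agent_id": "codex-m1nd", "path": "/path/to/repo"}},
    {"name": "audit", "arguments": {"agent_id": "codex-m1nd"}},
]
steps_json = json.dumps(steps)  # pass as the run subcommand's single argument
```

Running the whole list through `run` (rather than three `call` invocations) keeps one process alive, so the ingested graph survives into `audit`.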
30
+ ## Workflow 2: Find The Code That Does X
31
+
32
+ 1. `seek`
33
+ 2. `view` or `batch_view`
34
+ 3. `activate` if the best result needs neighborhood expansion
35
+ 4. `learn` after the result was actually useful
36
+
37
+ Use this instead of `activate` when you know the job and want a specific implementation surface, not a cluster.
38
+
39
+ ## Workflow 3: Explore A Subsystem Or Topic
40
+
41
+ 1. `warmup` if the session is focused on one area for a while
42
+ 2. `activate`
43
+ 3. `why` for specific dependencies
44
+ 4. `resonate` when you need coherent clusters or refactor boundaries
45
+ 5. `learn` after useful results
46
+
47
+ Use `include_structural_holes` in `activate` or call `missing` separately if the question smells like absence rather than presence.
48
+
49
+ ## Workflow 4: Triage A Bug From Error Output
50
+
51
+ 1. `trace`
52
+ 2. `impact` on the strongest suspect if the fix may be risky
53
+ 3. `why` for a specific suspect-to-effect chain
54
+ 4. `validate_plan`
55
+ 5. `surgical_context_v2`
56
+ 6. local edit path or `edit_preview` / `edit_commit`
57
+ 7. `predict`
58
+
59
+ Use `trace` instead of manually chasing top stack frames when the runtime output is already available.
60
+
61
+ ## Workflow 5: Plan A Risky Change
62
+
63
+ 1. `seek` or `activate` to find the entry surface
64
+ 2. `impact`
65
+ 3. `validate_plan`
66
+ 4. `surgical_context_v2`
67
+ 5. optional `heuristics_surface` for why a file ranks as risky
68
+ 6. optional `edit_preview` for two-phase commit
69
+ 7. optional `apply_batch` when an external m1nd write path is actually desired
70
+ 8. `predict`
71
+
72
+ Interpretation:
73
+
74
+ - `impact` tells you what the root file touches
75
+ - `validate_plan` tells you what your proposed change list is missing
76
+ - `surgical_context_v2` gives connected source context in one shot
77
+ - `predict` is the post-edit follow-through check
78
+
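The interpretation above suggests a simple gate before editing. The payload fields below (`missing`, `risk`) are invented for illustration only; the real `validate_plan` response schema should be read from the live runtime.

```python
# Hypothetical gate over a validate_plan-style payload. The "missing"
# and "risk" field names are invented for illustration; they are not
# the documented m1nd response schema.

def plan_is_ready(payload: dict, risk_threshold: float = 0.7) -> bool:
    missing = payload.get("missing", [])
    risk = payload.get("risk", 0.0)
    return not missing and risk < risk_threshold
```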
79
+ ## Workflow 6: Review Whether A Plan Or PR Is Incomplete
80
+
81
+ 1. `validate_plan`
82
+ 2. `impact` for any central modified file
83
+ 3. `predict` for changed nodes that still look under-covered
84
+ 4. `scan` or `scan_all` if structural quality or safety patterns matter
85
+ 5. `heuristics_surface` when you need a reasoned explanation of why a file is risky
86
+
87
+ This is the review-time route when the important question is not "what changed?" but "what is still missing?"
88
+
89
+ ## Workflow 7: Implement From A Spec, Wiki, Or PDF
90
+
91
+ 1. choose the doc lane:
92
+    - authored semantic markdown -> `ingest` with `adapter: "light"`
93
+    - ordinary docs -> `ingest` with `adapter: "universal"` or `adapter: "auto"`
94
+ 2. for `universal`, use `document_provider_health` if richer extraction may matter
95
+ 3. for `universal`, use `document_resolve`
96
+ 4. for `universal`, use `document_bindings`
97
+ 5. for `universal`, use `document_drift`
98
+ 6. for both lanes, use `search`, `seek`, or `activate` on the merged graph
99
+ 7. `impact` or `validate_plan`
100
+ 8. `surgical_context_v2`
101
+
102
+ Use this when a document must be connected to code, not merely quoted or summarized. Use `light` when the spec itself is graph-native. Use `universal` when the source must first be canonicalized.
103
+
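The lane choice above is editorial, but it can be caricatured mechanically. The `graph_native` flag below stands in for knowing that a markdown file was authored for the `L1GHT` protocol; nothing here is a real m1nd API.

```python
# Caricature of the Workflow 7 lane choice. "graph_native" stands in
# for editorial knowledge that the markdown was authored for L1GHT.

def choose_adapter(path: str, graph_native: bool = False) -> str:
    if path.endswith(".md") and graph_native:
        return "light"      # authored semantic markdown
    return "universal"      # ordinary docs: canonicalize first
```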
104
+ ## Workflow 8: Multi-Repo Questions
105
+
106
+ 1. `federate` when the repo list is already known
107
+ 2. `federate_auto` when only import/path/contract evidence suggests sibling repos
108
+ 3. `activate`, `seek`, `impact`, or `trace` on the federated graph
109
+
110
+ Use federation before pretending the current repo is the whole truth.
111
+
112
+ ## Workflow 9: Multi-Agent Exploration
113
+
114
+ 1. Give each agent a stable `agent_id`
115
+ 2. Use `perspective_start` per agent when route-state navigation is useful
116
+ 3. Use `perspective_branch` for parallel exploration
117
+ 4. Use `perspective_compare` to diff findings
118
+ 5. Use `trail_save` and `trail_merge` for handoff and convergence
119
+ 6. Use `lock_create`, `lock_watch`, `lock_diff`, `lock_rebase`, `lock_release` when concurrent edits or baselines matter
120
+
121
+ Rules:
122
+
123
+ - graph learning is shared
124
+ - perspectives are isolated per agent
125
+ - trails are shareable
126
+ - locks are for coordination, not for solo reading
127
+
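Steps 1-3 above can be sketched as probe-helper steps for two agents. Argument names beyond `agent_id` and `query` are assumptions; confirm real schemas with `help`.

```python
import json

# Two agents starting isolated perspectives over the shared graph.
# Argument shapes beyond agent_id/query are assumptions.
def perspective_steps(agent_id: str, query: str) -> list[dict]:
    return [
        {"name": "perspective_start", "arguments": {"agent_id": agent_id, "query": query}},
        {"name": "perspective_branch", "arguments": {"agent_id": agent_id}},
    ]

steps = perspective_steps("agent-a", "session management") + perspective_steps("agent-b", "auth flow")
steps_json = json.dumps(steps)
```

Graph learning stays shared between the two; only the perspective state is per-agent.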
128
+ ## Workflow 10: Long-Lived Monitoring
129
+
130
+ 1. `daemon_start`
131
+ 2. `daemon_status`
132
+ 3. `daemon_tick` when an explicit reconciliation pass is needed
133
+ 4. `alerts_list`
134
+ 5. `alerts_ack`
135
+ 6. `persist`
136
+
137
+ This is the right lane when the task is ongoing drift detection, not one-shot analysis.
138
+
139
+ ## Workflow 11: Session Resume Or Handoff
140
+
141
+ 1. `trail_list`
142
+ 2. `trail_resume`
143
+ 3. `drift` if the graph may have changed
144
+ 4. `warmup` if you only need gentle re-priming
145
+ 5. `boot_memory` for tiny durable facts that should stay hot
146
+
147
+ Use `trail_resume` for real investigation continuity. Use `boot_memory` only for small canonical facts, not full investigative state.
148
+
149
+ ## Workflow 12: Deep Structural Or Risk Analysis
150
+
151
+ Choose among these when basic search/impact is not enough:
152
+
153
+ - `hypothesize`: test a structural claim
154
+ - `counterfactual`: simulate node or module removal
155
+ - `differential`: compare two graph snapshots
156
+ - `diverge`: compare against a baseline and surface drift
157
+ - `fingerprint` or `twins`: find structural equivalence
158
+ - `resonate`: find reinforcing clusters
159
+ - `layers` and `layer_inspect`: inspect architectural layering and violations
160
+ - `ghost_edges`: use temporal co-change edges
161
+ - `runtime_overlay`: bring in runtime heat and errors
162
+ - `taint_trace`: follow taint propagation
163
+ - `flow_simulate`: reason about concurrency turbulence
164
+ - `epidemic`, `trust`, `tremor`, `antibody_*`: risk and defect-propagation surfaces
165
+
166
+ ## What Not To Force Through m1nd
167
+
168
+ - Exact text already known: use `search` or `rg`
169
+ - One known file with a simple question: use `view` or direct read
170
+ - Compiler or test failure truth: use the compiler or test runner
171
+ - Runtime behavior truth: use logs, traces, debugger, or profiler
172
+ - Final local file mutation inside Codex: usually use `apply_patch`
@@ -0,0 +1,135 @@
1
+ # m1nd Runtime And Refresh Notes
2
+
3
+ This file captures the local installation facts, the source-of-truth docs that were studied, and how to refresh them later without re-deriving everything.
4
+
5
+ ## Local Installation
6
+
7
+ Most hosts should point their MCP config at the same native binary:
8
+
9
+ - MCP server name: `m1nd`
10
+ - Binary name: `m1nd-mcp`
11
+ - Default launch args: `--stdio --no-gui`
12
+ - Optional env override: `M1ND_MCP_BINARY=/absolute/path/to/m1nd-mcp`
13
+
14
+ Use the bundled helper script to verify the current runtime instead of trusting memory:
15
+
16
+ ```bash
17
+ python3 scripts/probe_m1nd.py tools
18
+ python3 scripts/probe_m1nd.py call health '{"agent_id":"codex-m1nd"}'
19
+ ```
20
+
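The helper resolves the binary in the same order as the bullets above; a minimal mirror of that resolution logic:

```python
import os
import shutil

# Resolution order: explicit env override first, then a PATH lookup,
# then the bare name (letting the OS fail loudly if it is absent).

def resolve_binary() -> str:
    return (
        os.environ.get("M1ND_MCP_BINARY")
        or shutil.which("m1nd-mcp")
        or "m1nd-mcp"
    )
```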
21
+ ## Official Sources Used
22
+
23
+ Primary source repo:
24
+
25
+ - [maxkle1nz/m1nd](https://github.com/maxkle1nz/m1nd)
26
+
27
+ Docs that mattered most:
28
+
29
+ - `README.md`
30
+ - `EXAMPLES.md`
31
+ - `docs/deployment.md`
32
+ - `docs/wiki/src/tool-matrix.md`
33
+ - `docs/wiki/src/faq.md`
34
+ - `docs/wiki/src/benchmarks.md`
35
+ - `docs/wiki/src/tutorials/quickstart.md`
36
+ - `docs/wiki/src/tutorials/first-query.md`
37
+ - `docs/wiki/src/tutorials/multi-agent.md`
38
+ - `docs/wiki/src/architecture/overview.md`
39
+ - `docs/wiki/src/architecture/graph-engine.md`
40
+ - `docs/wiki/src/architecture/ingest.md`
41
+ - `docs/wiki/src/architecture/mcp-server.md`
42
+ - `docs/wiki/src/concepts/spreading-activation.md`
43
+ - `docs/wiki/src/concepts/hebbian-plasticity.md`
44
+ - `docs/wiki/src/concepts/xlr-noise-cancellation.md`
45
+ - `docs/wiki/src/concepts/structural-holes.md`
46
+ - `docs/wiki/src/api-reference/overview.md`
47
+ - `docs/wiki/src/api-reference/activation.md`
48
+ - `docs/wiki/src/api-reference/analysis.md`
49
+ - `docs/wiki/src/api-reference/memory.md`
50
+ - `docs/wiki/src/api-reference/exploration.md`
51
+ - `docs/wiki/src/api-reference/perspectives.md`
52
+ - `docs/wiki/src/api-reference/lifecycle.md`
53
+
54
+ Reference timestamp for this study:
55
+
56
+ - Date: 2026-04-20
57
+ - Official repo commit checked locally: `d4a84000a3ae3b9848f8ce9505fab3ab00acd871`
58
+ - Local clone matched `origin/HEAD` at the time of study. Re-check the live runtime with `tools/list`, `trust_selftest`, or `scripts/probe_m1nd.py` before relying on exact tool counts.
61
+
62
+ ## Important Truth Hierarchy
63
+
64
+ When sources disagree, use this order:
65
+
66
+ 1. Live `tools/list` from the local binary
67
+ 2. `tool-matrix.md`
68
+ 3. API reference pages
69
+ 4. README, tutorials, FAQ, prose pages
70
+
71
+ Reason: prose pages in the repo already contain stale counts in some places.
72
+
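The hierarchy above is mechanical enough to encode: given the set of sources at hand, prefer the highest-ranked one. The labels are shorthand for the four tiers.

```python
from typing import Optional

# Tiers from the truth hierarchy above, most authoritative first.
TRUTH_ORDER = ["tools/list", "tool-matrix.md", "api-reference", "prose"]

def most_authoritative(available: set) -> Optional[str]:
    for source in TRUTH_ORDER:
        if source in available:
            return source
    return None
```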
73
+ ## Count Discrepancy To Remember
74
+
75
+ Official docs repeatedly describe the current surface as 93 tools.
76
+
77
+ The local binary on this machine returned 92 canonical tool names via `tools/list` on 2026-04-20.
78
+
79
+ Operational rule:
80
+
81
+ - never hardcode the count
82
+ - use the live runtime when counts or exact names matter
83
+ - use the docs for intent, workflow, and semantics
84
+
85
+ ## What m1nd Is Best At
86
+
87
+ - graph-grounded structural retrieval
88
+ - blast-radius and hidden-neighbor analysis
89
+ - pre-flight validation for risky changes
90
+ - session continuity and cross-agent handoff
91
+ - document-to-code bindings
92
+ - long-lived structural monitoring
93
+
94
+ ## What Still Belongs Elsewhere
95
+
96
+ - exact text lookup: `search` or `rg`
97
+ - one known file: `view` or direct file read
98
+ - compiler truth: compiler and tests
99
+ - runtime truth: logs, traces, debugger, profiler
100
+
101
+ ## Live Probe Helper
102
+
103
+ The helper script is intentionally generic so future sessions can refresh the environment quickly.
104
+
105
+ Examples:
106
+
107
+ ```bash
108
+ python3 scripts/probe_m1nd.py tools
109
+ python3 scripts/probe_m1nd.py call help '{"agent_id":"codex-m1nd","tool_name":"validate_plan"}'
110
+ python3 scripts/probe_m1nd.py call ingest '{"agent_id":"codex-m1nd","path":"/path/to/repo"}'
111
+ python3 scripts/probe_m1nd.py run '[{"name":"ingest","arguments":{"agent_id":"codex-m1nd","path":"/path/to/repo"}},{"name":"activate","arguments":{"agent_id":"codex-m1nd","query":"session management","top_k":5}}]'
112
+ ```
113
+
114
+ Behavior:
115
+
116
+ - launches the configured local `m1nd-mcp` binary
117
+ - performs MCP `initialize`
118
+ - runs `tools/list` or `tools/call`
119
+ - `run` keeps one `m1nd` process alive across multiple calls so in-memory graph state survives `ingest -> query` flows
120
+ - prints parsed JSON instead of raw MCP envelopes when possible
121
+
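The "parsed JSON instead of raw MCP envelopes" behavior amounts to this unwrapping step, a standalone mirror of `parse_embedded_json` plus the envelope handling in the script:

```python
import json
from typing import Any

# Unwrap a tools/call response: if the first content item is text,
# try to parse it as JSON; otherwise fall back to the raw envelope.

def unwrap(response: dict) -> Any:
    result = response.get("result", {})
    content = result.get("content", [])
    if content and content[0].get("type") == "text":
        text = content[0].get("text", "")
        try:
            parsed = json.loads(text)
        except json.JSONDecodeError:
            parsed = text
        return {"isError": result.get("isError", False), "payload": parsed}
    return response
```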
122
+ ## Refresh Procedure
123
+
124
+ When this skill starts feeling stale:
125
+
126
+ 1. Re-clone or pull the official `maxkle1nz/m1nd` repo in a scratch workspace.
127
+ 2. Re-read `tool-matrix.md`, `api-reference/overview.md`, `EXAMPLES.md`, `faq.md`, `benchmarks.md`, and any changed API pages.
128
+ 3. Run `python3 scripts/probe_m1nd.py tools` against the local installed binary.
129
+ 4. Update this skill only where the live runtime or official docs actually changed.
130
+
131
+ ## Host-Specific Note For Codex
132
+
133
+ At the protocol level, `m1nd`'s canonical tool names are bare names like `activate` and `validate_plan`.
134
+
135
+ If Codex exposes the MCP server as first-class tools, the host wrapper may namespace them differently, but the underlying routing logic in this skill should still be based on the canonical tool semantics above.
@@ -0,0 +1,168 @@
1
+ # m1nd Tool Families
2
+
3
+ This is the compact capability inventory for the current `m1nd` shape. Use the live runtime, not this file, when exact counts or schemas matter.
4
+
5
+ ## Foundation And Search
6
+
7
+ - `ingest`: load or refresh the graph from code, JSON, memory, or universal docs.
8
+ - `health`: confirm server and graph state.
9
+ - `search`: exact text, regex, or graph-aware semantic search with line context.
10
+ - `glob`: path-pattern search over indexed files.
11
+ - `view`: bounded line-numbered file read.
12
+ - `batch_view`: stable multi-file read surface.
13
+ - `help`: self-documenting tool reference and recovery guidance.
14
+ - `audit`: one-call orientation for unfamiliar repos.
15
+ - `coverage_session`: what this agent has already visited.
16
+ - `cross_verify`: graph-vs-disk verification.
17
+ - `external_references`: explicit paths outside ingest roots.
18
+ - `report`: session summary and savings.
19
+ - `savings`: token-economy report.
20
+ - `metrics`: structural metrics per file/function/module.
21
+ - `type_trace`: follow a type across the graph.
22
+ - `diagram`: generate Mermaid or DOT graph slices.
23
+ - `panoramic`: ranked panorama of file-level risk across the repo.
24
+
25
+ ## Retrieval By Intent, Topic, Or Pattern
26
+
27
+ - `seek`: find code by intent or purpose.
28
+ - `activate`: find a neighborhood around a topic via spreading activation.
29
+ - `warmup`: prime the graph for a task.
30
+ - `resonate`: find mutually reinforcing clusters.
31
+ - `why`: shortest path and dependency explanation between two nodes.
32
+ - `missing`: surface structural holes or absent abstractions.
33
+ - `scan`: run one structural pattern family.
34
+ - `scan_all`: run all structural patterns in one call.
35
+ - `timeline`: temporal history, churn, velocity, stability, and co-change shape.
36
+
37
+ ## Change-Risk And Plan Validation
38
+
39
+ - `impact`: blast radius from a node.
40
+ - `predict`: likely co-change partners after an edit.
41
+ - `validate_plan`: pre-flight risk, gaps, missing tests, and suggested additions.
42
+ - `heuristics_surface`: explain why a file or node ranked risky or important.
43
+ - `hypothesize`: test a structural claim.
44
+ - `counterfactual`: simulate node or module removal.
45
+ - `differential`: compare two graph snapshots.
46
+ - `diverge`: compare current graph against a baseline.
47
+
48
+ ## Memory, Learning, And Continuity
49
+
50
+ - `learn`: reinforce or weaken paths based on feedback.
51
+ - `drift`: see what changed since the last session or baseline.
52
+ - `trail_save`: persist an investigation.
53
+ - `trail_resume`: restore an investigation and re-inject boosts.
54
+ - `trail_list`: browse saved investigations.
55
+ - `trail_merge`: merge investigations across agents or sessions.
56
+ - `persist`: force graph-sidecar persistence.
57
+ - `boot_memory`: store tiny canonical hot-state values.
58
+
59
+ Rules:
60
+
61
+ - `learn` is how the graph adapts.
62
+ - `trail_*` is for full investigative continuity.
63
+ - `boot_memory` is for tiny durable facts, not full reasoning trails.
64
+
65
+ ## Stateful Navigation
66
+
67
+ - `perspective_start`: create a navigable route surface from a query.
68
+ - `perspective_routes`: paginate route options.
69
+ - `perspective_inspect`: expand a route with score breakdown and provenance.
70
+ - `perspective_peek`: preview code or doc content at a route target.
71
+ - `perspective_follow`: move focus to the route target.
72
+ - `perspective_suggest`: ask the graph which route to follow next.
73
+ - `perspective_affinity`: inspect probable affinities for the current route target.
74
+ - `perspective_branch`: fork a perspective for parallel exploration.
75
+ - `perspective_back`: go back one navigation step.
76
+ - `perspective_compare`: diff two perspectives.
77
+ - `perspective_list`: list active perspectives for the agent.
78
+ - `perspective_close`: release perspective state.
79
+
80
+ Use this family only when navigation state is worth maintaining.
81
+
82
+ ## Docs, Wiki, PDF, And Universal Document Runtime
83
+
84
+ - `ingest` with `adapter: "light"`: graph-native semantic markdown via the `L1GHT` protocol.
85
+ - `document_resolve`: resolve canonical local artifacts for an ingested document.
86
+ - `document_provider_health`: inspect optional provider availability and install hints.
87
+ - `document_bindings`: map document claims or sections to code.
88
+ - `document_drift`: detect stale, missing, or ambiguous document/code links.
89
+ - `auto_ingest_start`: start local-first document watchers.
90
+ - `auto_ingest_status`: inspect watcher state and provider/runtime counters.
91
+ - `auto_ingest_tick`: force one deterministic document reconciliation pass.
92
+ - `auto_ingest_stop`: stop watchers and persist manifest state.
93
+
94
+ Use this family when docs must be graph-grounded, not merely read. Distinguish the lanes:
95
+
96
+ - `light` for authored graph-native semantic markdown
97
+ - `universal` for ordinary docs that need canonicalization and binding/drift surfaces
98
+
99
+ ## Surgical And Write Surfaces
100
+
101
+ - `surgical_context`: single-file edit context with callers, callees, and neighbors.
102
+ - `surgical_context_v2`: multi-file connected edit context, including source excerpts of related files.
103
+ - `apply`: one-file write through m1nd.
104
+ - `apply_batch`: atomic multi-file write through m1nd.
105
+ - `edit_preview`: two-phase preview without touching disk.
106
+ - `edit_commit`: freshness-checked commit for a preview.
107
+
108
+ In Codex, prefer this family for analysis and change-prep. Use local `apply_patch` for actual file mutation unless the task explicitly wants or benefits from m1nd's own write lane.
109
+
110
+ ## Multi-Repo And Cross-Boundary Work
111
+
112
+ - `federate`: combine known repos into one graph.
113
+ - `federate_auto`: discover likely sibling repos from evidence, then optionally federate them.
114
+
115
+ Use this before acting as if the current repo is the whole system.
116
+
117
+ ## Coordination, Locks, And Monitoring
118
+
119
+ - `lock_create`: snapshot a region.
120
+ - `lock_watch`: define a watch strategy on a lock.
121
+ - `lock_diff`: compare current state with the lock baseline.
122
+ - `lock_rebase`: accept current state as the new baseline.
123
+ - `lock_release`: free the lock.
124
+ - `daemon_start`: start long-lived structural monitoring.
125
+ - `daemon_stop`: stop monitoring without deleting alert history.
126
+ - `daemon_status`: inspect monitor liveness and counters.
127
+ - `daemon_tick`: force one reconciliation pass.
128
+ - `alerts_list`: list durable daemon or proactive alerts.
129
+ - `alerts_ack`: acknowledge alerts so they stop resurfacing.
130
+
131
+ Use locks when coordination or baselines matter. Use daemon and alerts when the task is ongoing.
132
+
133
+ ## Extended Risk, Architecture, And RETROBUILDER
134
+
135
+ - `antibody_scan`: scan for known bug shapes.
136
+ - `antibody_list`: inspect stored antibody patterns.
137
+ - `antibody_create`: create, enable, disable, or delete antibody patterns.
138
+ - `flow_simulate`: concurrency flow simulation.
139
+ - `epidemic`: predict bug spread from known buggy nodes.
140
+ - `tremor`: accelerating change-frequency detection.
141
+ - `trust`: actuarial trust scores from defect history.
142
+ - `layers`: infer architectural layers and violations.
143
+ - `layer_inspect`: inspect a specific layer.
144
+ - `ghost_edges`: temporal co-change ghost edges.
145
+ - `taint_trace`: taint propagation over the graph.
146
+ - `twins`: structural equivalence or near-equivalence.
147
+ - `refactor_plan`: graph-native refactor proposals.
148
+ - `runtime_overlay`: runtime heat and error overlays.
149
+
150
+ Use these when basic retrieval and blast-radius work are no longer enough.
151
+
152
+ ## Current Live Surface On This Machine
153
+
154
+ As validated on 2026-04-20 through the installed local binary, `tools/list` returned 92 canonical tool names:
155
+
156
+ - `activate`, `impact`, `missing`, `why`, `warmup`, `counterfactual`, `predict`, `fingerprint`, `drift`, `learn`
157
+ - `ingest`, `document_resolve`, `document_provider_health`, `document_bindings`, `document_drift`
158
+ - `auto_ingest_start`, `auto_ingest_stop`, `auto_ingest_status`, `auto_ingest_tick`, `resonate`, `health`
159
+ - `perspective_start`, `perspective_routes`, `perspective_inspect`, `perspective_peek`, `perspective_follow`, `perspective_suggest`, `perspective_affinity`, `perspective_branch`, `perspective_back`, `perspective_compare`, `perspective_list`, `perspective_close`
160
+ - `seek`, `scan`, `timeline`, `diverge`
161
+ - `trail_save`, `trail_resume`, `trail_merge`, `trail_list`
162
+ - `hypothesize`, `differential`, `trace`, `validate_plan`, `federate`
163
+ - `antibody_scan`, `antibody_list`, `antibody_create`, `flow_simulate`, `epidemic`, `tremor`, `trust`, `layers`, `layer_inspect`
164
+ - `ghost_edges`, `taint_trace`, `twins`, `refactor_plan`, `runtime_overlay`
165
+ - `heuristics_surface`, `surgical_context`, `apply`, `view`, `batch_view`, `surgical_context_v2`, `apply_batch`, `edit_preview`, `edit_commit`
166
+ - `search`, `glob`, `scan_all`, `cross_verify`, `coverage_session`, `external_references`, `federate_auto`, `help`, `report`, `audit`
167
+ - `daemon_start`, `daemon_stop`, `daemon_status`, `daemon_tick`, `alerts_list`, `alerts_ack`
168
+ - `panoramic`, `savings`, `persist`, `boot_memory`, `metrics`, `type_trace`, `diagram`
@@ -0,0 +1,189 @@
+ #!/usr/bin/env python3
+
+ import argparse
+ import json
+ import os
+ import shlex
+ import subprocess
+ import sys
+ import shutil
+ from typing import Any
+
+
+ DEFAULT_BINARY = os.environ.get("M1ND_MCP_BINARY") or shutil.which("m1nd-mcp") or "m1nd-mcp"
+ DEFAULT_ARGS = shlex.split(os.environ.get("M1ND_MCP_ARGS", "--stdio --no-gui"))
+
+
+ class McpClient:
+     def __init__(self, binary: str, extra_args: list[str]) -> None:
+         self.proc = subprocess.Popen(
+             [binary, *extra_args],
+             stdin=subprocess.PIPE,
+             stdout=subprocess.PIPE,
+             stderr=subprocess.PIPE,
+             text=True,
+         )
+         # JSON-RPC request ids must be unique within a session, so
+         # assign them centrally instead of hardcoding them per call.
+         self._next_id = 0
+         self._initialize()
+
+     def _request(self, payload: dict[str, Any]) -> dict[str, Any]:
+         assert self.proc.stdin is not None
+         assert self.proc.stdout is not None
+         payload = {**payload, "id": self._next_id}
+         self._next_id += 1
+         self.proc.stdin.write(json.dumps(payload) + "\n")
+         self.proc.stdin.flush()
+         line = self.proc.stdout.readline()
+         if not line:
+             stderr = ""
+             if self.proc.stderr is not None:
+                 stderr = self.proc.stderr.read().strip()
+             raise RuntimeError(f"no response from m1nd-mcp; stderr={stderr}")
+         return json.loads(line)
+
+     def _initialize(self) -> None:
+         response = self._request(
+             {"jsonrpc": "2.0", "method": "initialize", "params": {}}
+         )
+         if "error" in response:
+             raise RuntimeError(f"initialize failed: {json.dumps(response, indent=2)}")
+
+     def tools(self) -> dict[str, Any]:
+         return self._request(
+             {"jsonrpc": "2.0", "method": "tools/list", "params": {}}
+         )
+
+     def call(self, tool_name: str, arguments: dict[str, Any]) -> dict[str, Any]:
+         return self._request(
+             {
+                 "jsonrpc": "2.0",
+                 "method": "tools/call",
+                 "params": {"name": tool_name, "arguments": arguments},
+             }
+         )
+
+     def close(self) -> None:
+         if self.proc.poll() is None:
+             self.proc.terminate()
+             try:
+                 self.proc.wait(timeout=2)
+             except subprocess.TimeoutExpired:
+                 self.proc.kill()
+
+
+ def parse_embedded_json(text: str) -> Any:
+     try:
+         return json.loads(text)
+     except json.JSONDecodeError:
+         return text
+
+
+ def print_json(value: Any) -> None:
+     print(json.dumps(value, indent=2, ensure_ascii=True))
+
+
+ def main() -> int:
+     parser = argparse.ArgumentParser(description="Probe the local m1nd MCP runtime.")
+     parser.add_argument("--binary", default=DEFAULT_BINARY, help="Path to m1nd-mcp.")
+     parser.add_argument(
+         "--binary-args",
+         default=" ".join(DEFAULT_ARGS),
+         help='Arguments for the binary, default: "--stdio --no-gui".',
+     )
+
+     subparsers = parser.add_subparsers(dest="command", required=True)
+
+     tools_parser = subparsers.add_parser("tools", help="List the live tool surface.")
+     tools_parser.add_argument(
+         "--names-only", action="store_true", help="Print one tool name per line."
+     )
+
+     call_parser = subparsers.add_parser("call", help="Call one m1nd tool.")
+     call_parser.add_argument("tool_name", help="Canonical tool name, e.g. health.")
+     call_parser.add_argument(
+         "arguments_json",
+         nargs="?",
+         default="{}",
+         help='JSON object with tool arguments, e.g. \'{"agent_id":"codex"}\'.',
+     )
+
+     run_parser = subparsers.add_parser(
+         "run", help="Run multiple tool calls against the same m1nd process."
+     )
+     run_parser.add_argument(
+         "steps_json",
+         help=(
+             "JSON array of step objects, e.g. "
+             '\'[{"name":"ingest","arguments":{"agent_id":"codex","path":"/repo"}}]\''
+         ),
+     )
+
+     args = parser.parse_args()
+
+     binary_args = shlex.split(args.binary_args)
+     client = McpClient(args.binary, binary_args)
+     try:
+         if args.command == "tools":
+             response = client.tools()
+             tools = response["result"]["tools"]
+             if args.names_only:
+                 for tool in tools:
+                     print(tool["name"])
+                 return 0
+             print_json({"count": len(tools), "names": [tool["name"] for tool in tools]})
+             return 0
+
+         if args.command == "call":
+             try:
+                 tool_arguments = json.loads(args.arguments_json)
+             except json.JSONDecodeError as exc:
+                 raise SystemExit(f"invalid arguments_json: {exc}") from exc
+             response = client.call(args.tool_name, tool_arguments)
+             result = response.get("result", {})
+             content = result.get("content", [])
+             if content and content[0].get("type") == "text":
+                 parsed = parse_embedded_json(content[0].get("text", ""))
+                 output = {"isError": result.get("isError", False), "payload": parsed}
+                 print_json(output)
+                 return 0
+             print_json(response)
+             return 0
+
+         if args.command == "run":
+             try:
+                 steps = json.loads(args.steps_json)
+             except json.JSONDecodeError as exc:
+                 raise SystemExit(f"invalid steps_json: {exc}") from exc
+             if not isinstance(steps, list):
+                 raise SystemExit("steps_json must be a JSON array")
+
+             outputs = []
+             for index, step in enumerate(steps, start=1):
+                 if not isinstance(step, dict):
+                     raise SystemExit(f"step {index} must be an object")
+                 tool_name = step.get("name")
+                 tool_arguments = step.get("arguments", {})
+                 if not isinstance(tool_name, str):
+                     raise SystemExit(f"step {index} is missing string field 'name'")
+                 if not isinstance(tool_arguments, dict):
+                     raise SystemExit(
+                         f"step {index} field 'arguments' must be a JSON object"
+                     )
+                 response = client.call(tool_name, tool_arguments)
+                 result = response.get("result", {})
+                 content = result.get("content", [])
+                 payload: Any = response
+                 if content and content[0].get("type") == "text":
+                     payload = {
+                         "isError": result.get("isError", False),
+                         "payload": parse_embedded_json(content[0].get("text", "")),
+                     }
+                 outputs.append({"step": index, "name": tool_name, "result": payload})
+             print_json(outputs)
+             return 0
+
+         raise SystemExit(f"unsupported command: {args.command}")
+     finally:
+         client.close()
+
+
+ if __name__ == "__main__":
+     sys.exit(main())