proteum 2.4.1 → 2.4.2

package/AGENTS.md CHANGED
@@ -63,7 +63,7 @@ npx prisma migrate dev --config ./prisma.config.ts --name <migration name>
  - `/Users/gaetan/Desktop/Projets/klair.work/apps/worker`
  - Inspect how the relevant reference apps currently use the touched feature, runtime, API, compiler behavior, or generated output before proposing or implementing changes.
  - Keep the developer-facing contract synchronized when framework work changes CLI commands, profiler capabilities, or the `proteum dev` banner. Update the live surfaces together in the same pass: CLI command/help definitions, profiler panels and dev-only endpoints, banner text/examples, and the most relevant agent docs that describe them, especially `AGENTS.md`, `agents/project/AGENTS.md`, `agents/project/root/AGENTS.md`, `agents/project/app-root/AGENTS.md`, `agents/project/diagnostics.md`, and any narrower `agents/project/**/AGENTS.md` file that mentions the changed workflow.
- - Proteum MCP contract: `proteum mcp` is the machine-scope router agents register once, and `proteum dev` exposes each app runtime at `/__proteum/mcp`. `proteum dev` ensures one managed machine MCP daemon is running; do not start a second managed daemon. Agents should start with MCP `workflow_start` using `cwd` or a known `projectId`; ambiguous routing or offline app candidates use `project_resolve { cwd }`, and follow-up live app tools require the returned `projectId`. Dev-hosted app tools are already rooted to their own runtime. Keep MCP tools/resources compact, typed, capped, paginated for full trace detail, and read-only unless a future task explicitly expands the mutation contract. MCP payloads are compact single-line `proteum-mcp-v1` JSON, not pretty-printed human output. Do not implement MCP tools as thin CLI process wrappers when the data is available through manifest readers, tracked sessions, or dev runtime registries.
+ - Proteum MCP contract: `proteum mcp` is the machine-scope router agents register once, and `proteum dev` exposes each app runtime at `/__proteum/mcp`. `proteum dev` ensures one managed machine MCP daemon is running; do not start a second managed daemon. Agents should start with MCP `workflow_start` using `cwd` or a known `projectId`; ambiguous routing or offline app candidates use `project_resolve { cwd }`, and follow-up live app tools require the returned `projectId`. Dev-hosted app tools are already rooted to their own runtime. Keep MCP tools/resources compact, typed, capped, paginated for full trace detail, and read-only unless a future task explicitly expands the mutation contract. The database diagnostic exception is still read-only: MCP `db_query` and CLI `proteum db query` allow one capped `SELECT`, `SHOW`, or `EXPLAIN` statement only and return rows plus elapsed milliseconds. MCP payloads are compact single-line `proteum-mcp-v1` JSON, not pretty-printed human output. Do not implement MCP tools as thin CLI process wrappers when the data is available through manifest readers, tracked sessions, or dev runtime registries.
  - Keep the same-system trace contract explicit when request instrumentation changes: `TRACE_*` controls the retained dev trace store plus the trace/perf CLI, dev-only HTTP endpoints, and bottom profiler, while `ENABLE_PROFILER` enables the reduced request-local `request.profiling` snapshot and `request.finished` hook payload without retaining finished requests globally unless dev trace is also enabled.
  - Current CLI banner contract: only the bare `proteum build` and bare `proteum dev` commands print the welcome banner and include the active Proteum installation method. Any extra argument or option skips the banner. Only `proteum dev` clears the interactive terminal before rendering, exposes `CTRL+R` reload plus `CTRL+C` shutdown hotkeys in its session UI, and reports connected app names plus successful connected `/ping` checks in the ready banner. Every `proteum dev` start ensures tracked instruction files contain the current managed `# Proteum Instructions` section before the dev loop begins.
  - Keep core changes aligned with the explicit controller/page architecture in `agents/project/root/AGENTS.md` and its standalone composition in `agents/project/AGENTS.md`.
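The read-only database contract introduced in the hunk above (one capped `SELECT`, `SHOW`, or `EXPLAIN` statement, multi-statement input rejected) can be sketched as a small statement gate. This is an illustrative helper written for this note, not proteum's actual validator:

```typescript
// Illustrative sketch of the read-only gate described above (assumption:
// proteum's real validator may differ). A statement is accepted only when it
// is a single SELECT, SHOW, or EXPLAIN.
type TReadStatementKind = 'select' | 'show' | 'explain';

const getReadStatementKind = (sql: string): TReadStatementKind | undefined => {
  // Drop trailing semicolons, then reject anything still multi-statement.
  const trimmed = sql.trim().replace(/;+\s*$/, '');
  if (trimmed.includes(';')) return undefined;

  const keyword = trimmed.split(/\s+/, 1)[0]?.toLowerCase();
  if (keyword === 'select' || keyword === 'show' || keyword === 'explain') return keyword;

  return undefined;
};
```

A caller would run the query only when `getReadStatementKind(sql)` returns a kind, and surface a usage error otherwise.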
@@ -27,7 +27,7 @@ Managed compact root routers must use trigger -> canonical instruction file refe
  - If the user asks to implement a feature, first inspect the relevant existing surface and state any implementation problem, pain point, attention point, or question you see. If a concern is blocking, or it can materially change product behavior, API shape, architecture, data model, cost, privacy, security, or UX, ask before editing; otherwise state the assumption and continue implementing.
  - If the task is ambiguous, generated, connected, or multi-repo, start with MCP `workflow_start` and then MCP `orient { projectId, query }` only if the bootstrap did not return a sufficient owner or next action; use `npx proteum orient <query>` only when MCP is unavailable or terminal evidence is required.
  - Treat Proteum CLI and MCP output as the workflow router. Treat instruction previews returned by MCP `workflow_start` or `instructions_resolve { projectId }` as the allowed instruction scope for read-only discovery and diagnostics. Read full file contents only before edits or git writes, when returned `fullRead`/`fullReadPolicy` requires it, or when the compact preview is insufficient. Do not read broad instruction folders or every managed instruction file up front.
- - When a Proteum MCP client is available, first call MCP `workflow_start` with `cwd` or a known `projectId`. If it is ambiguous or returns offline app candidates, call `project_resolve { cwd }`, select the intended app root, start exactly one dev server from that app root when needed, then retry `workflow_start`. Pass the returned live `projectId` to every follow-up app-bound MCP tool. `npx proteum dev` ensures one managed machine MCP daemon is running; do not start a second managed daemon. Prefer MCP `runtime_status`, `orient`, `instructions_resolve`, `explain_summary`, `route_candidates`, `doctor`, `diagnose`, `trace_show`, `perf_request`, and `logs_tail` for read-only runtime/status/orientation/owner/route/trace/perf/log reads. Do not run CLI equivalents after a successful MCP result for the same read. Do not run broad source searches for route/page/controller ownership after MCP returns the owner. Use CLI commands when you need reproducible terminal validation, dev/build/check workflows, fallback repair, or output to share with a human.
+ - When a Proteum MCP client is available, first call MCP `workflow_start` with `cwd` or a known `projectId`. If it is ambiguous or returns offline app candidates, call `project_resolve { cwd }`, select the intended app root, start exactly one dev server from that app root when needed, then retry `workflow_start`. Pass the returned live `projectId` to every follow-up app-bound MCP tool. `npx proteum dev` ensures one managed machine MCP daemon is running; do not start a second managed daemon. Prefer MCP `runtime_status`, `orient`, `instructions_resolve`, `explain_summary`, `route_candidates`, `doctor`, `diagnose`, `trace_show`, `perf_request`, `logs_tail`, and `db_query` for read-only runtime/status/orientation/owner/route/trace/perf/log/database reads. Do not run CLI equivalents after a successful MCP result for the same read. Do not run broad source searches for route/page/controller ownership after MCP returns the owner. Use CLI commands when you need reproducible terminal validation, dev/build/check workflows, fallback repair, or output to share with a human.
  - MCP payloads are compact single-line `proteum-mcp-v1` JSON with capped and paginated detail. Do not expand MCP output for human readability.
  - For every non-trivial coding task, load and follow root-level `DOCUMENTATION.md` before coding.
  - If the user reports an issue, or the agent encounters one during exploration, implementation, verification, or runtime reproduction, load and follow root-level `diagnostics.md`.
@@ -231,6 +231,7 @@ Verify at the correct layer:
  ## Hard Stops

  - Never run schema-mutating SQL such as `ALTER TABLE`, `CREATE TABLE`, `DROP TABLE`, or `CREATE INDEX` to change database structure.
+ - For read-only SQL diagnosis, use MCP `db_query` or `npx proteum db query "<sql>"`; only one capped `SELECT`, `SHOW`, or `EXPLAIN` statement is allowed.
  - Do not run `prisma *` yourself. If a schema change requires migration, ask the user to run `npx prisma migrate dev --config ./prisma.config.ts --name <migration name>` and wait for `continue`.
  - Do not run `git restore` or `git reset`.
  - Do not run write-mode git commands by default. The built-in exception is an exact `commit` reply, which allows `git add` and `git commit` in every affected repository or worktree touched during the whole conversation. Any other write-mode git action requires an explicit user request.
@@ -4,7 +4,7 @@ This file is the canonical source of truth for diagnostics, temporary instrument

  ## Initial Triage

- - Start with compact machine-readable app state before reading large parts of the codebase. When a Proteum MCP client is available, call MCP `workflow_start` with `cwd` or a known `projectId`; if it is ambiguous or returns offline app candidates, call `project_resolve { cwd }`, select the intended app root, start exactly one dev server from that app root when needed, then retry `workflow_start`. Use the returned live `projectId` for follow-up MCP `runtime_status`, `orient`, `instructions_resolve`, `explain_summary`, `route_candidates`, `doctor`, `diagnose`, `trace_latest`, `trace_show`, `perf_top`, `perf_request`, and `logs_tail`.
+ - Start with compact machine-readable app state before reading large parts of the codebase. When a Proteum MCP client is available, call MCP `workflow_start` with `cwd` or a known `projectId`; if it is ambiguous or returns offline app candidates, call `project_resolve { cwd }`, select the intended app root, start exactly one dev server from that app root when needed, then retry `workflow_start`. Use the returned live `projectId` for follow-up MCP `runtime_status`, `orient`, `instructions_resolve`, `explain_summary`, `route_candidates`, `doctor`, `diagnose`, `trace_latest`, `trace_show`, `perf_top`, `perf_request`, `logs_tail`, and `db_query`.
  - Do not run CLI equivalents after a successful MCP result for the same read. Do not run broad source searches for route/page/controller ownership after MCP returns the owner. Use compact CLI commands such as `npx proteum orient <query>`, `npx proteum runtime status`, `npx proteum connect`, `npx proteum explain`, `npx proteum doctor`, and `npx proteum doctor --contracts` when MCP is unavailable, when generated artifacts or manifest-owned files may be stale, or when you need a reproducible shell command, validation step, or CI-like output.
  - MCP payloads are compact `proteum-mcp-v1` JSON and are capped/paginated by default. Use selected instruction previews for read-only discovery and diagnostics; read full files or request full MCP detail only when returned `fullRead`/`fullReadPolicy` or omitted-detail hints require it.
  - Use full-detail escape hatches only after compact output identifies the missing detail: `npx proteum explain --manifest`, `npx proteum diagnose <target> --full`, `npx proteum trace show <requestId> --events`, or `npx proteum perf request <requestId> --full`.
@@ -39,7 +39,7 @@ This file is the canonical source of truth for diagnostics, temporary instrument
  ## Temporary Instrumentation

  - When manifest inspection, trace data, browser console output, and server errors are still insufficient, add temporary targeted logs in the code to confirm control flow, payload shape, query shape, or branch selection.
- - If SQL is needed during diagnosis, keep it read-only. Never use SQL to change database structure or execute schema-mutating DDL.
+ - If SQL is needed during diagnosis, use MCP `db_query { projectId, sql }` or `npx proteum db query "<sql>"` against a running dev server. Keep it read-only: only one capped `SELECT`, `SHOW`, or `EXPLAIN` statement is allowed, and the response includes rows, columns, elapsed milliseconds, and cap metadata. Never use SQL to change database structure or execute schema-mutating DDL.
  - Keep temporary logs narrow, contextual, and easy to remove. Do not leave broad debug noise in shared execution paths.
  - Re-run only the smallest relevant repro, request, or test after adding temporary instrumentation.
  - Temporary logs added in the code for diagnosis must be cleaned at the end of tests or the repro cycle and must never be committed.
@@ -19,7 +19,7 @@ When tradeoffs exist inside optimization work, optimize in this order:
  - Prefer established, flexible, well-typed, widely adopted, actively maintained packages.
  - Build custom or keep custom infrastructure only when packages would clearly hurt bundle size, SSR behavior, performance, typing quality, flexibility, licensing, explicit contracts, or long-term maintainability.
  - If you choose custom over a package, state briefly why.
- - For agent-facing repeated diagnostics, prefer the read-only Proteum MCP surface over adding broader CLI output. MCP should expose compact single-line `proteum-mcp-v1` JSON with capped, typed, paginated reads; the CLI should stay compact and reproducible.
+ - For agent-facing repeated diagnostics, prefer the read-only Proteum MCP surface over adding broader CLI output. MCP should expose compact single-line `proteum-mcp-v1` JSON with capped, typed, paginated reads; the CLI should stay compact and reproducible. Database diagnostics are limited to one capped `SELECT`, `SHOW`, or `EXPLAIN` read through MCP `db_query` or CLI `proteum db query`.

  ## SSR And Page Size

@@ -18,7 +18,7 @@ Managed compact root routers must use trigger -> canonical instruction file refe
  - If the user asks to implement a feature, first inspect the relevant existing surface and state any implementation problem, pain point, attention point, or question you see. If a concern is blocking, or it can materially change product behavior, API shape, architecture, data model, cost, privacy, security, or UX, ask before editing; otherwise state the assumption and continue implementing.
  - If the task is ambiguous, generated, connected, or multi-repo, start with MCP `workflow_start` and then MCP `orient { projectId, query }` only if the bootstrap did not return a sufficient owner or next action; use `npx proteum orient <query>` only when MCP is unavailable or terminal evidence is required.
  - Treat Proteum CLI and MCP output as the workflow router. Treat instruction previews returned by MCP `workflow_start` or `instructions_resolve { projectId }` as the allowed instruction scope for read-only discovery and diagnostics. Read full file contents only before edits or git writes, when returned `fullRead`/`fullReadPolicy` requires it, or when the compact preview is insufficient. Do not read broad instruction folders or every managed instruction file up front.
- - When a Proteum MCP client is available, first call MCP `workflow_start` with `cwd` or a known `projectId`. If it is ambiguous or returns offline app candidates, call `project_resolve { cwd }`, select the intended app root, start exactly one dev server from that app root when needed, then retry `workflow_start`. Pass the returned live `projectId` to every follow-up app-bound MCP tool. `npx proteum dev` ensures one managed machine MCP daemon is running; do not start a second managed daemon. Prefer MCP `runtime_status`, `orient`, `instructions_resolve`, `explain_summary`, `route_candidates`, `doctor`, `diagnose`, `trace_show`, `perf_request`, and `logs_tail` for read-only runtime/status/orientation/owner/route/trace/perf/log reads. Do not run CLI equivalents after a successful MCP result for the same read. Do not run broad source searches for route/page/controller ownership after MCP returns the owner. Use CLI commands when you need reproducible terminal validation, dev/build/check workflows, fallback repair, or output to share with a human.
+ - When a Proteum MCP client is available, first call MCP `workflow_start` with `cwd` or a known `projectId`. If it is ambiguous or returns offline app candidates, call `project_resolve { cwd }`, select the intended app root, start exactly one dev server from that app root when needed, then retry `workflow_start`. Pass the returned live `projectId` to every follow-up app-bound MCP tool. `npx proteum dev` ensures one managed machine MCP daemon is running; do not start a second managed daemon. Prefer MCP `runtime_status`, `orient`, `instructions_resolve`, `explain_summary`, `route_candidates`, `doctor`, `diagnose`, `trace_show`, `perf_request`, `logs_tail`, and `db_query` for read-only runtime/status/orientation/owner/route/trace/perf/log/database reads. Do not run CLI equivalents after a successful MCP result for the same read. Do not run broad source searches for route/page/controller ownership after MCP returns the owner. Use CLI commands when you need reproducible terminal validation, dev/build/check workflows, fallback repair, or output to share with a human.
  - MCP payloads are compact single-line `proteum-mcp-v1` JSON with capped and paginated detail. Do not expand MCP output for human readability.
  - For every non-trivial coding task, load and follow root-level `DOCUMENTATION.md` before coding.
  - If the user reports an issue, or the agent encounters one during exploration, implementation, verification, or runtime reproduction, load and follow root-level `diagnostics.md`.
@@ -222,6 +222,7 @@ Verify at the correct layer:
  ## Hard Stops

  - Never run schema-mutating SQL such as `ALTER TABLE`, `CREATE TABLE`, `DROP TABLE`, or `CREATE INDEX` to change database structure.
+ - For read-only SQL diagnosis, use MCP `db_query` or `npx proteum db query "<sql>"`; only one capped `SELECT`, `SHOW`, or `EXPLAIN` statement is allowed.
  - Do not run `prisma *` yourself. If a schema change requires migration, ask the user to run `npx prisma migrate dev --config ./prisma.config.ts --name <migration name>` and wait for `continue`.
  - Do not run `git restore` or `git reset`.
  - Do not run write-mode git commands by default. The built-in exception is an exact `commit` reply, which allows `git add` and `git commit` in every affected repository or worktree touched during the whole conversation. Any other write-mode git action requires an explicit user request.
@@ -31,6 +31,7 @@ Diagnostics source of truth: root-level `diagnostics.md`.
  - In database queries, prefer explicit `select` or narrow `include`.
  - For database structure changes, edit the app's `schema.prisma` only. Never create or edit migration files manually.
  - Never use raw SQL DDL or other schema-mutating SQL to change database structure.
+ - For read-only SQL diagnosis, use MCP `db_query` or `npx proteum db query "<sql>"`; only one capped `SELECT`, `SHOW`, or `EXPLAIN` statement is allowed.
  - Prefer inferred return types such as `Awaited<ReturnType<MyService['methodName']>>` over manual DTO duplication.

  ## Errors
@@ -0,0 +1,160 @@
+ import fs from 'fs-extra';
+ import got from 'got';
+ import path from 'path';
+ import { UsageError } from 'clipanion';
+
+ import cli from '..';
+ import {
+   defaultDatabaseReadTimeoutMs,
+   maxDatabaseReadLimit,
+   maxDatabaseReadTimeoutMs,
+   type TDatabaseReadQueryResponse,
+ } from '../../common/dev/database';
+ import { printAgentResponse, printJson, quoteCommandArgument } from '../utils/agentOutput';
+
+ type TDbAction = 'query';
+
+ const allowedActions = new Set<TDbAction>(['query']);
+ const normalizeBaseUrl = (value: string) => value.replace(/\/+$/, '');
+
+ const getRouterPortFromManifest = () => {
+   const manifestFilepath = path.join(cli.args.workdir as string, '.proteum', 'manifest.json');
+   if (!fs.existsSync(manifestFilepath)) return undefined;
+
+   const manifest = fs.readJsonSync(manifestFilepath, { throws: false }) as
+     | { env?: { resolved?: { routerPort?: number } } }
+     | undefined;
+   const port = manifest?.env?.resolved?.routerPort;
+
+   if (typeof port !== 'number' || port <= 0) return undefined;
+
+   return String(port);
+ };
+
+ const getRouterPort = () => {
+   const overridePort = typeof cli.args.port === 'string' && cli.args.port ? cli.args.port : '';
+   if (overridePort) return overridePort;
+
+   const envPort = process.env.PORT?.trim();
+   if (envPort) return envPort;
+
+   const manifestPort = getRouterPortFromManifest();
+   if (manifestPort) return manifestPort;
+
+   throw new UsageError(
+     `Could not determine the router port from PORT or .proteum/manifest.json in ${cli.args.workdir as string}. Pass --port or --url explicitly.`,
+   );
+ };
+
+ const getRouterBaseUrls = () => {
+   const explicitUrl = typeof cli.args.url === 'string' && cli.args.url ? cli.args.url.trim() : '';
+   if (explicitUrl) return [normalizeBaseUrl(explicitUrl)];
+
+   const port = getRouterPort();
+   return [...new Set([`http://127.0.0.1:${port}`, `http://localhost:${port}`, `http://[::1]:${port}`])];
+ };
+
+ const parsePositiveInteger = (value: unknown, label: string, max: number) => {
+   if (typeof value !== 'string' || !value.trim()) return undefined;
+
+   const parsed = Number(value);
+   if (!Number.isInteger(parsed) || parsed <= 0) throw new UsageError(`${label} must be a positive integer.`);
+   if (parsed > max) throw new UsageError(`${label} must be ${max} or lower.`);
+
+   return parsed;
+ };
+
+ const requestJson = async <TResponse>(pathname: string, json: object) => {
+   const attempts: string[] = [];
+
+   for (const baseUrl of getRouterBaseUrls()) {
+     try {
+       const response = await got(`${baseUrl}${pathname}`, {
+         method: 'POST',
+         json,
+         responseType: 'json',
+         retry: { limit: 0 },
+         throwHttpErrors: false,
+       });
+
+       if (response.statusCode >= 400) {
+         const body = response.body as { error?: string } | undefined;
+         throw new UsageError(body?.error || `Database query failed with status ${response.statusCode}.`);
+       }
+
+       return response.body as TResponse;
+     } catch (error) {
+       if (error instanceof UsageError) throw error;
+
+       const message = error instanceof Error ? error.message : String(error);
+       attempts.push(`${baseUrl}${pathname}: ${message}`);
+     }
+   }
+
+   throw new UsageError(
+     [
+       'Could not reach the Proteum database diagnostics server.',
+       ...attempts.map((attempt) => `- ${attempt}`),
+       'Make sure the app is running with `proteum dev`, or pass `--url http://host:port` if it is bound elsewhere.',
+     ].join('\n'),
+   );
+ };
+
+ const buildFullCommand = (sql: string) =>
+   [
+     'proteum db query',
+     quoteCommandArgument(sql),
+     typeof cli.args.limit === 'string' && cli.args.limit ? `--limit ${cli.args.limit}` : '',
+     typeof cli.args.timeout === 'string' && cli.args.timeout ? `--timeout ${cli.args.timeout}` : '',
+     typeof cli.args.port === 'string' && cli.args.port ? `--port ${cli.args.port}` : '',
+     typeof cli.args.url === 'string' && cli.args.url ? `--url ${quoteCommandArgument(cli.args.url)}` : '',
+     '--full',
+   ]
+     .filter(Boolean)
+     .join(' ');
+
+ export const run = async () => {
+   const action = typeof cli.args.action === 'string' && cli.args.action ? cli.args.action : 'query';
+   if (!allowedActions.has(action as TDbAction)) {
+     throw new UsageError(`Unsupported db action "${action}". Expected: query.`);
+   }
+
+   const sql = typeof cli.args.sql === 'string' ? cli.args.sql.trim() : '';
+   if (!sql) throw new UsageError('A SELECT, SHOW, or EXPLAIN SQL statement is required.');
+
+   const limit = parsePositiveInteger(cli.args.limit, '--limit', maxDatabaseReadLimit);
+   const timeoutMs = parsePositiveInteger(cli.args.timeout, '--timeout', maxDatabaseReadTimeoutMs);
+   const response = await requestJson<TDatabaseReadQueryResponse>('/__proteum/db/query', {
+     sql,
+     ...(limit !== undefined ? { limit } : {}),
+     ...(timeoutMs !== undefined ? { timeoutMs } : {}),
+   });
+
+   if (cli.args.full === true) {
+     printJson(response);
+     return;
+   }
+
+   printAgentResponse({
+     summary: `${response.kind.toUpperCase()} returned ${response.rows.length}/${response.rowCount} rows in ${response.elapsedMs} ms${response.limited ? ` (limited to ${response.limit})` : ''}.`,
+     data: {
+       kind: response.kind,
+       elapsedMs: response.elapsedMs,
+       rowCount: response.rowCount,
+       returnedRowCount: response.rows.length,
+       limit: response.limit,
+       limited: response.limited,
+       columns: response.columns,
+       rows: response.rows,
+     },
+     fullDetailCommand: buildFullCommand(sql),
+     omitted: response.limited
+       ? [
+           {
+             reason: `Rows are capped at ${response.limit}. Raise --limit up to ${maxDatabaseReadLimit} or narrow the query for more detail.`,
+             command: `proteum db query ${quoteCommandArgument(sql)} --limit ${Math.min(response.limit * 2, maxDatabaseReadLimit)} --timeout ${timeoutMs || defaultDatabaseReadTimeoutMs}`,
+           },
+         ]
+       : undefined,
+   });
+ };
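The new command file above resolves its target port in a fixed fallback order: an explicit `--port` flag wins, then the `PORT` environment variable, then the `routerPort` recorded in `.proteum/manifest.json`. That order condenses to the following sketch; `resolveRouterPort` is a hypothetical standalone helper written for illustration, not part of the package:

```typescript
// Illustrative condensation of the port-resolution order used by the db
// command above: flag, then PORT env var, then manifest routerPort.
// Returns undefined when no source yields a usable port (the real command
// throws a UsageError at that point).
const resolveRouterPort = (
  flagPort: string | undefined,
  envPort: string | undefined,
  manifestPort: number | undefined,
): string | undefined => {
  if (flagPort?.trim()) return flagPort.trim();
  if (envPort?.trim()) return envPort.trim();
  if (typeof manifestPort === 'number' && manifestPort > 0) return String(manifestPort);
  return undefined;
};
```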
package/cli/mcp/router.ts CHANGED
@@ -58,6 +58,8 @@ const readOnlyAnnotations = {
  };

  const detailSchema = z.enum(['compact', 'full']).optional();
+ const databaseLimitSchema = z.number().int().min(1).max(500).optional();
+ const databaseTimeoutSchema = z.number().int().min(100).max(30_000).optional();
  const logsLevelSchema = z.enum(['silly', 'log', 'info', 'warn', 'error']).optional();
  const offsetSchema = z.number().int().min(0).max(10_000).optional();
  const positiveLimitSchema = z.number().int().min(1).max(100).optional();
@@ -824,6 +826,22 @@ export const createProteumMachineMcpServer = ({ createDevMcpClient, version }: T
    async (input) => await forwardTool('logs_tail', input),
  );

+ server.registerTool(
+   'db_query',
+   {
+     annotations: readOnlyAnnotations,
+     description: 'Run one capped read-only database diagnostic query for one live Proteum project.',
+     inputSchema: {
+       limit: databaseLimitSchema,
+       projectId: projectIdSchema,
+       sql: z.string().min(1).describe('One SELECT, SHOW, or EXPLAIN SQL statement.'),
+       timeoutMs: databaseTimeoutSchema,
+     },
+     title: 'Proteum Database Query',
+   },
+   async (input) => await forwardTool('db_query', input),
+ );
+
  const closeServer = server.close.bind(server);
  server.close = async () => {
    await closeAllClients();
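Per the zod schemas above, `db_query` input accepts an optional `limit` of 1 to 500 rows and an optional `timeoutMs` of 100 to 30000 milliseconds. The same bounds can be expressed as a plain predicate; this is an illustrative check written for this note, not the package's validation path:

```typescript
// Illustrative bounds check mirroring the db_query zod schemas above:
// limit is an optional integer in [1, 500], timeoutMs in [100, 30000].
const isValidDbQueryInput = (limit?: number, timeoutMs?: number): boolean => {
  const limitOk =
    limit === undefined || (Number.isInteger(limit) && limit >= 1 && limit <= 500);
  const timeoutOk =
    timeoutMs === undefined || (Number.isInteger(timeoutMs) && timeoutMs >= 100 && timeoutMs <= 30_000);
  return limitOk && timeoutOk;
};
```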
@@ -21,6 +21,7 @@ export const proteumCommandNames = [
    'orient',
    'diagnose',
    'perf',
+   'db',
    'runtime',
    'mcp',
    'trace',
@@ -59,7 +60,7 @@ export const proteumRecommendedFlow: TRow[] = [
  export const proteumCommandGroups: Array<{ title: string; names: TProteumCommandName[] }> = [
    { title: 'Daily workflow', names: ['dev', 'refresh', 'build'] },
    { title: 'Quality gates', names: ['typecheck', 'lint', 'check', 'e2e'] },
-   { title: 'Manifest and contracts', names: ['connect', 'doctor', 'explain', 'orient', 'diagnose', 'perf', 'runtime', 'mcp', 'trace', 'command', 'session', 'verify'] },
+   { title: 'Manifest and contracts', names: ['connect', 'doctor', 'explain', 'orient', 'diagnose', 'perf', 'db', 'runtime', 'mcp', 'trace', 'command', 'session', 'verify'] },
    { title: 'Project scaffolding', names: ['init', 'configure', 'create', 'migrate'] },
  ];

@@ -452,6 +453,26 @@ export const proteumCommands: Record<TProteumCommandName, TProteumCommandDoc> =
    ],
    status: 'experimental',
  },
+ db: {
+   name: 'db',
+   category: 'Manifest and contracts',
+   summary: 'Run one capped read-only database diagnostic query against a running Proteum dev server.',
+   usage: 'proteum db [query] <sql> [--limit <rows>] [--timeout <ms>] [--port <port>|--url <baseUrl>] [--full]',
+   bestFor:
+     'Inspecting live MySQL or MariaDB state during diagnosis without giving agents a write-capable SQL execution surface.',
+   examples: [
+     { description: 'Run a small SELECT diagnostic', command: 'proteum db query "SELECT id, email FROM User LIMIT 5"' },
+     { description: 'Inspect table metadata', command: 'proteum db "SHOW TABLES"' },
+     { description: 'Explain a query plan', command: 'proteum db query "EXPLAIN SELECT * FROM User WHERE id = 1"' },
+   ],
+   notes: [
+     'Only SELECT, SHOW, and EXPLAIN statements are allowed.',
+     'The dev runtime executes the query with the app DATABASE_URL and returns rows, columns, elapsedMs, and cap metadata.',
+     'Multi-statement SQL, EXPLAIN ANALYZE, locking reads, LOAD_FILE, SELECT INTO OUTFILE, sleep, and benchmark functions are rejected.',
+     'Default output is compact `proteum-agent-v1` JSON with capped rows; use `--full` for the raw dev endpoint payload.',
+   ],
+   status: 'experimental',
+ },
  runtime: {
    name: 'runtime',
    category: 'Manifest and contracts',
@@ -695,6 +695,38 @@ class PerfCommand extends ProteumCommand {
    }
  }

+ class DbCommand extends ProteumCommand {
+   public static paths = [['db']];
+
+   public static usage = buildUsage('db');
+
+   public port = Option.String('--port', { description: 'Target an existing dev server on the given port.' });
+   public url = Option.String('--url', { description: 'Target an existing dev server at the given base URL.' });
+   public limit = Option.String('--limit', { description: 'Maximum number of result rows to return, up to 500.' });
+   public timeout = Option.String('--timeout', { description: 'Database query timeout in milliseconds, up to 30000.' });
+   public json = Option.Boolean('--json', false, { description: 'Compatibility flag; compact JSON is the default output.' });
+   public full = Option.Boolean('--full', false, { description: 'Print the full database query payload.' });
+   public args = Option.Rest();
+
+   public async execute() {
+     const [first = '', ...restArgs] = this.args;
+     const sql = first === 'query' ? restArgs.join(' ').trim() : [first, ...restArgs].join(' ').trim();
+
+     this.setCliArgs({
+       action: 'query',
+       full: this.full,
+       json: this.json,
+       limit: this.limit ?? '',
+       port: this.port ?? '',
+       sql,
+       timeout: this.timeout ?? '',
+       url: this.url ?? '',
+     });
+
+     await runCommandModule(() => import('../commands/db'));
+   }
+ }
+
  class RuntimeCommand extends ProteumCommand {
    public static paths = [['runtime']];

@@ -859,6 +891,7 @@ export const registeredCommands = {
    orient: OrientCommand,
    diagnose: DiagnoseCommand,
    perf: PerfCommand,
+   db: DbCommand,
    runtime: RuntimeCommand,
    mcp: McpCommand,
    trace: TraceCommand,
@@ -894,6 +927,7 @@ export const createCli = (version: string) => {
894
927
  clipanion.register(OrientCommand);
895
928
  clipanion.register(DiagnoseCommand);
896
929
  clipanion.register(PerfCommand);
930
+ clipanion.register(DbCommand);
897
931
  clipanion.register(RuntimeCommand);
898
932
  clipanion.register(McpCommand);
899
933
  clipanion.register(TraceCommand);
@@ -612,7 +612,7 @@ function renderEmbeddedProjectInstructions({ appRoot, coreRoot, includeMonorepoR
612
612
  '',
613
613
  '1. When a Proteum MCP client is available, call MCP `workflow_start` first. Pass `cwd` when `projectId` is not known, or pass the stable `projectId` from `projects_list` when it is known.',
614
614
  '2. Use the `projectId` returned by live `workflow_start` for every follow-up app-bound MCP tool. If `workflow_start` is ambiguous or returns offline candidates, call MCP `project_resolve { cwd }`, select the intended app root, follow its port-inspected next action when needed, then retry `workflow_start`.',
615
- '3. After `projectId` is selected, use MCP `runtime_status`, `orient`, `instructions_resolve`, `explain_summary`, `route_candidates`, `doctor`, `diagnose`, `trace_show`, `perf_request`, and `logs_tail` for read-only runtime, owner, instruction, route, trace, perf, and log reads.',
615
+ '3. After `projectId` is selected, use MCP `runtime_status`, `orient`, `instructions_resolve`, `explain_summary`, `route_candidates`, `doctor`, `diagnose`, `trace_show`, `perf_request`, `logs_tail`, and `db_query` for read-only runtime, owner, instruction, route, trace, perf, log, and database reads.',
616
616
  '4. Do not run CLI equivalents after a successful MCP result for the same read. Do not run broad source searches for route/page/controller ownership after `workflow_start`, `orient`, or `explain_summary` already returned the owner.',
617
617
  '5. Treat selected instruction previews returned by MCP as the instruction source for read-only discovery and diagnostics. Read full files only before edits or git writes, when the returned `fullRead`/`fullReadPolicy` requires it, or when the preview is insufficient.',
618
618
  '6. Use `npx proteum runtime status` before starting a dev server only when MCP runtime status is unavailable, so an existing tracked session can be reused and the configured router/HMR ports can be checked without probing page bodies. If it says health is unreachable, do not run `diagnose`, `trace`, or `perf`; stop/repair/start the dev session first.',
@@ -631,6 +631,7 @@ function renderEmbeddedProjectInstructions({ appRoot, coreRoot, includeMonorepoR
631
631
  '- Never edit generated files under `.proteum`.',
632
632
  '- Never create or edit Prisma migration files manually.',
633
633
  '- Never run schema-mutating SQL such as `ALTER TABLE`, `CREATE TABLE`, `DROP TABLE`, or `CREATE INDEX`.',
634
+ '- For read-only SQL diagnosis, use MCP `db_query` or `npx proteum db query "<sql>"`; only one capped `SELECT`, `SHOW`, or `EXPLAIN` statement is allowed.',
634
635
  '- If `schema.prisma` changes, ask the user to run `npx prisma migrate dev --config ./prisma.config.ts --name <migration name>` and wait for `continue` before validation.',
635
636
  '- For production changes, add or update focused unit tests for touched behavior when applicable, targeting 100% meaningful coverage for changed production paths.',
636
637
  '- Do not run `git restore` or `git reset`.',
@@ -0,0 +1,226 @@
+ export type TDatabaseReadQueryKind = 'explain' | 'select' | 'show';
+
+ export type TDatabaseReadQueryInput = {
+   limit?: number;
+   sql: string;
+   timeoutMs?: number;
+ };
+
+ export type TDatabaseReadQueryColumn = {
+   name: string;
+   table?: string;
+   type?: number | string;
+ };
+
+ export type TDatabaseReadQueryValue = boolean | number | string | null;
+
+ export type TDatabaseReadQueryRow = Record<string, TDatabaseReadQueryValue>;
+
+ export type TDatabaseReadQueryResponse = {
+   columns: TDatabaseReadQueryColumn[];
+   elapsedMs: number;
+   kind: TDatabaseReadQueryKind;
+   limit: number;
+   limited: boolean;
+   rowCount: number;
+   rows: TDatabaseReadQueryRow[];
+   sql: string;
+ };
+
+ export type TValidatedDatabaseReadQuery = {
+   kind: TDatabaseReadQueryKind;
+   sql: string;
+ };
+
+ export const defaultDatabaseReadLimit = 50;
+ export const maxDatabaseReadLimit = 500;
+ export const defaultDatabaseReadTimeoutMs = 5_000;
+ export const maxDatabaseReadTimeoutMs = 30_000;
+
+ const allowedQueryKinds = new Set<TDatabaseReadQueryKind>(['explain', 'select', 'show']);
+ const sqlKeywordPattern = /^[A-Za-z]+/;
+ const sqlCommentPattern = /\/\*[\s\S]*?\*\//g;
+ const sqlLineCommentPattern = /(?:^|\n)\s*(?:--|#).*?(?=\n|$)/g;
+
+ const clampInteger = ({ fallback, max, min, value }: { fallback: number; max: number; min: number; value?: number }) => {
+   if (value === undefined || !Number.isInteger(value) || value < min) return fallback;
+
+   return Math.min(value, max);
+ };
+
+ export const normalizeDatabaseReadLimit = (limit?: number) =>
+   clampInteger({
+     fallback: defaultDatabaseReadLimit,
+     max: maxDatabaseReadLimit,
+     min: 1,
+     value: limit,
+   });
+
+ export const normalizeDatabaseReadTimeoutMs = (timeoutMs?: number) =>
+   clampInteger({
+     fallback: defaultDatabaseReadTimeoutMs,
+     max: maxDatabaseReadTimeoutMs,
+     min: 100,
+     value: timeoutMs,
+   });
+
+ const skipLeadingTrivia = (sql: string) => {
+   let index = 0;
+
+   while (index < sql.length) {
+     const char = sql[index];
+     const next = sql[index + 1];
+
+     if (/\s/.test(char)) {
+       index += 1;
+       continue;
+     }
+
+     if (char === '-' && next === '-') {
+       index = sql.indexOf('\n', index + 2);
+       if (index === -1) return sql.length;
+       continue;
+     }
+
+     if (char === '#') {
+       index = sql.indexOf('\n', index + 1);
+       if (index === -1) return sql.length;
+       continue;
+     }
+
+     if (char === '/' && next === '*') {
+       const end = sql.indexOf('*/', index + 2);
+       if (end === -1) throw new Error('SQL contains an unterminated block comment.');
+       index = end + 2;
+       continue;
+     }
+
+     return index;
+   }
+
+   return index;
+ };
+
+ const stripSqlComments = (sql: string) =>
+   sql.replace(sqlCommentPattern, ' ').replace(sqlLineCommentPattern, '\n');
+
+ const findFirstStatementEnd = (sql: string) => {
+   let quote: "'" | '"' | '`' | undefined;
+   let lineComment = false;
+   let blockComment = false;
+
+   for (let index = 0; index < sql.length; index += 1) {
+     const char = sql[index];
+     const next = sql[index + 1];
+
+     if (lineComment) {
+       if (char === '\n') lineComment = false;
+       continue;
+     }
+
+     if (blockComment) {
+       if (char === '*' && next === '/') {
+         blockComment = false;
+         index += 1;
+       }
+       continue;
+     }
+
+     if (quote) {
+       if (char === '\\') {
+         index += 1;
+         continue;
+       }
+       if (char === quote) quote = undefined;
+       continue;
+     }
+
+     if (char === '-' && next === '-') {
+       lineComment = true;
+       index += 1;
+       continue;
+     }
+
+     if (char === '#') {
+       lineComment = true;
+       continue;
+     }
+
+     if (char === '/' && next === '*') {
+       blockComment = true;
+       index += 1;
+       continue;
+     }
+
+     if (char === '\'' || char === '"' || char === '`') {
+       quote = char;
+       continue;
+     }
+
+     if (char === ';') return index;
+   }
+
+   if (quote) throw new Error('SQL contains an unterminated quoted string.');
+   if (blockComment) throw new Error('SQL contains an unterminated block comment.');
+
+   return -1;
+ };
+
+ const assertSingleStatement = (sql: string) => {
+   const end = findFirstStatementEnd(sql);
+   if (end === -1) return sql.trim();
+
+   const first = sql.slice(0, end).trim();
+   const rest = stripSqlComments(sql.slice(end + 1)).trim();
+   if (rest) throw new Error('Only one read-only SQL statement may be executed.');
+
+   return first;
+ };
+
+ const getReadQueryKind = (sql: string): TDatabaseReadQueryKind => {
+   const start = skipLeadingTrivia(sql);
+   const keyword = sql.slice(start).match(sqlKeywordPattern)?.[0]?.toLowerCase();
+
+   if (!keyword || !allowedQueryKinds.has(keyword as TDatabaseReadQueryKind)) {
+     throw new Error('Only SELECT, SHOW, and EXPLAIN SQL statements are allowed.');
+   }
+
+   return keyword as TDatabaseReadQueryKind;
+ };
+
+ const assertAllowedReadQueryShape = (sql: string) => {
+   const normalized = stripSqlComments(sql).replace(/\s+/g, ' ').trim().toLowerCase();
+
+   if (/\bexplain\s+analyze\b/.test(normalized)) {
+     throw new Error('EXPLAIN ANALYZE is not allowed because it executes the target query.');
+   }
+
+   if (/\binto\s+(?:out|dump)file\b/.test(normalized)) {
+     throw new Error('SELECT INTO OUTFILE and SELECT INTO DUMPFILE are not allowed.');
+   }
+
+   if (/\bload_file\s*\(/.test(normalized)) {
+     throw new Error('LOAD_FILE is not allowed in database diagnostics.');
+   }
+
+   if (/\bfor\s+update\b/.test(normalized) || /\block\s+in\s+share\s+mode\b/.test(normalized)) {
+     throw new Error('Locking read statements are not allowed in database diagnostics.');
+   }
+
+   if (/\b(?:sleep|benchmark)\s*\(/.test(normalized)) {
+     throw new Error('Sleep and benchmark functions are not allowed in database diagnostics.');
+   }
+ };
+
+ export const validateDatabaseReadQuery = (rawSql: string): TValidatedDatabaseReadQuery => {
+   const normalizedSql = rawSql.replace(/^\uFEFF/, '').trim();
+   if (!normalizedSql) throw new Error('SQL query is required.');
+   if (normalizedSql.length > 20_000) throw new Error('SQL query is too long for database diagnostics.');
+
+   const sql = assertSingleStatement(normalizedSql);
+   const kind = getReadQueryKind(sql);
+
+   assertAllowedReadQueryShape(sql);
+
+   return { kind, sql };
+ };
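The single-statement and keyword checks in the validator above can be exercised end to end. The sketch below is a deliberately simplified, standalone re-implementation of the same policy for illustration only (the function name is hypothetical, and the real module also strips comments and scans quoted strings before splitting on `;`):

```typescript
// Minimal sketch of the read-only SQL gate, under simplifying assumptions:
// no comment stripping and no quote-aware scanning, unlike the real validator.
type ReadKind = 'select' | 'show' | 'explain';

const gateReadOnlySql = (raw: string): ReadKind => {
  const sql = raw.trim();

  // Reject anything after a statement terminator (multi-statement SQL).
  const [first, ...rest] = sql.split(';');
  if (rest.join('').trim()) throw new Error('Only one read-only SQL statement may be executed.');

  // Only the three read-only statement kinds are allowed.
  const keyword = first.trim().match(/^[A-Za-z]+/)?.[0]?.toLowerCase();
  if (keyword !== 'select' && keyword !== 'show' && keyword !== 'explain') {
    throw new Error('Only SELECT, SHOW, and EXPLAIN SQL statements are allowed.');
  }

  // EXPLAIN ANALYZE executes the target query, so it is rejected too.
  if (/\bexplain\s+analyze\b/i.test(first)) throw new Error('EXPLAIN ANALYZE is not allowed.');

  return keyword;
};

console.log(gateReadOnlySql('SELECT 1;')); // 'select'
```

Calling `gateReadOnlySql('SELECT 1; DROP TABLE t')` throws on the trailing statement, which is the same shape of rejection the package's `assertSingleStatement` performs.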
@@ -1,4 +1,5 @@
  import type { TDevConsoleLogLevel, TDevConsoleLogsResponse } from './console';
+ import type { TDatabaseReadQueryResponse } from './database';
  import type { TDoctorResponse } from './diagnostics';
  import { buildExplainSummaryItems } from './diagnostics';
  import { explainOwner, type TDiagnoseResponse, type TExplainOwnerResponse, type TOrientResponse } from './inspection';
@@ -733,6 +734,30 @@ export const compactLogsResponse = ({
      : undefined,
  });

+ export const compactDatabaseReadQueryResponse = (response: TDatabaseReadQueryResponse) =>
+   createMcpPayload({
+     summary: `${response.kind.toUpperCase()} returned ${response.rows.length}/${response.rowCount} rows in ${response.elapsedMs} ms${response.limited ? ` (limited to ${response.limit})` : ''}.`,
+     data: {
+       kind: response.kind,
+       elapsedMs: response.elapsedMs,
+       rowCount: response.rowCount,
+       returnedRowCount: response.rows.length,
+       limit: response.limit,
+       limited: response.limited,
+       columns: response.columns,
+       rows: response.rows,
+     },
+     omitted: response.limited
+       ? [
+           {
+             reason: `Rows are capped at ${response.limit}. Raise the limit up to 500 or make the read query narrower if more detail is needed.`,
+             tool: 'db_query',
+             toolArgs: { sql: response.sql, limit: Math.min(response.limit * 2, 500) },
+           },
+         ]
+       : undefined,
+   });
+
  const readPreview = (filepath: string) => {
    if (fs === undefined) return undefined;
    try {
@@ -7,6 +7,7 @@ import { stringifyMcpPayload, type TProteumMcpPayload } from './mcpPayloads';
  export type TProteumMcpDetail = 'compact' | 'full';

  export type TProteumMcpProvider = {
+   dbQuery: (input: { limit?: number; sql: string; timeoutMs?: number }) => Promise<TProteumMcpPayload>;
    diagnose: (input: {
      logsLevel?: 'silly' | 'log' | 'info' | 'warn' | 'error';
      logsLimit?: number;
@@ -64,6 +65,8 @@ const detailSchema = z.enum(['compact', 'full']).optional();
  const logsLevelSchema = z.enum(['silly', 'log', 'info', 'warn', 'error']).optional();
  const positiveLimitSchema = z.number().int().min(1).max(100).optional();
  const offsetSchema = z.number().int().min(0).max(10_000).optional();
+ const databaseLimitSchema = z.number().int().min(1).max(500).optional();
+ const databaseTimeoutSchema = z.number().int().min(100).max(30_000).optional();

  export const createProteumMcpServer = ({ provider, version }: TCreateProteumMcpServerArgs) => {
    const server = new McpServer(
@@ -264,6 +267,21 @@ export const createProteumMcpServer = ({ provider, version }: TCreateProteumMcpS
      async ({ level, limit }) => jsonToolResult(await provider.logsTail({ level, limit })),
    );

+   server.registerTool(
+     'db_query',
+     {
+       annotations: readOnlyAnnotations,
+       description: 'Run one capped read-only database diagnostic query. Only SELECT, SHOW, and EXPLAIN are allowed.',
+       inputSchema: {
+         limit: databaseLimitSchema,
+         sql: z.string().min(1).describe('One SELECT, SHOW, or EXPLAIN SQL statement.'),
+         timeoutMs: databaseTimeoutSchema,
+       },
+       title: 'Proteum Database Query',
+     },
+     async ({ limit, sql, timeoutMs }) => jsonToolResult(await provider.dbQuery({ limit, sql, timeoutMs })),
+   );
+
    for (const [name, uri, description] of [
      ['runtime-status', 'proteum://runtime/status', 'Current compact runtime status.'],
      ['instructions-router', 'proteum://instructions/router', 'Current instruction routing contract.'],
package/docs/mcp.md CHANGED
@@ -44,6 +44,7 @@ Example tool calls:
  {"tool":"route_candidates","arguments":{"projectId":"prj_0123abcd4567","query":"dashboard","limit":8}}
  {"tool":"explain_summary","arguments":{"projectId":"prj_0123abcd4567","query":"/dashboard"}}
  {"tool":"diagnose","arguments":{"projectId":"prj_0123abcd4567","path":"/dashboard"}}
+ {"tool":"db_query","arguments":{"projectId":"prj_0123abcd4567","sql":"SELECT id, email FROM User LIMIT 5","limit":5}}
  ```

  `workflow_start` is the only app-bound bootstrap tool that may resolve from `cwd` when `projectId` is not known. It may return offline app candidates when no matching dev server is running yet. Other app-bound tools require a live `projectId`; if they omit it, the router returns a compact error that tells the agent to call `projects_list` or `project_resolve`. There is no single-project fallback, because wrong-project reads are worse than an explicit routing retry.
@@ -131,6 +132,7 @@ App-bound tools require `projectId` when called through `proteum mcp`:
  | `perf_top` | Hot-path perf rollup |
  | `perf_request` | One-request waterfall and attribution |
  | `logs_tail` | Capped recent server logs |
+ | `db_query` | Capped read-only database diagnostics for one `SELECT`, `SHOW`, or `EXPLAIN` statement |

  ## CLI Boundary

@@ -145,6 +147,7 @@ proteum diagnose /dashboard --port 3101
  proteum verify request /dashboard --port 3101
  proteum trace show <requestId> --events --port 3101
  proteum explain owner /dashboard
+ proteum db query "SELECT id, email FROM User LIMIT 5" --port 3101
  proteum explain --routes --controllers --full # only when the raw route/controller arrays are required
  ```

@@ -163,10 +166,14 @@ trace_show { projectId, requestId }
  trace_latest { projectId }
  perf_request { projectId, query }
  logs_tail { projectId }
+ db_query { projectId, sql, limit? }
  ```

  After an MCP read succeeds, do not run the equivalent CLI command for the same state, and do not run broad source searches for ownership that MCP already returned. CLI output is for fallback, validation, command evidence, and human-shareable reproductions.

+ Database diagnostics are intentionally read-only. `db_query` and `proteum db query` accept only one `SELECT`, `SHOW`, or `EXPLAIN` statement, return rows, columns, elapsed milliseconds, and cap metadata, and reject multi-statement SQL, `EXPLAIN ANALYZE`, locking reads, file reads/writes, sleep, and benchmark functions.
+
+
  ## Benchmark

  The Product `/domains` diagnostic loop measured on May 7, 2026 used `ceil(UTF-8 bytes / 4)` as an output-token estimate:
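The cap metadata this contract describes follows a simple integer clamp. A standalone sketch, assuming the defaults the new `database.ts` module declares (50-row default, 500-row maximum); the helper name here is illustrative, not the package's export:

```typescript
// Sketch of the row-limit clamp behind db_query's cap metadata
// (default 50, max 500, per the constants in common/dev/database.ts).
// Non-integers and values below 1 fall back to the default.
const clampReadLimit = (limit?: number): number => {
  const fallback = 50;
  const max = 500;
  if (limit === undefined || !Number.isInteger(limit) || limit < 1) return fallback;
  return Math.min(limit, max);
};

console.log(clampReadLimit(999)); // 500
console.log(clampReadLimit(undefined)); // 50
```

So an agent asking for 999 rows silently gets the 500-row cap, and the response's `limited` flag tells it whether rows were actually dropped.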
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "proteum",
    "description": "LLM-first Opinionated Typescript Framework for web applications.",
-   "version": "2.4.1",
+   "version": "2.4.2",
    "author": "Gaetan Le Gac (https://github.com/gaetanlegac)",
    "repository": "git://github.com/gaetanlegac/proteum.git",
    "license": "MIT",
@@ -1,8 +1,19 @@
  import fs from 'fs-extra';
+ import mysql from 'mysql2/promise';
  import path from 'path';
+ import { performance } from 'perf_hooks';

  import type { Application } from './index';
  import type { TDevConsoleLogLevel, TDevConsoleLogsResponse } from '@common/dev/console';
+ import {
+   normalizeDatabaseReadLimit,
+   normalizeDatabaseReadTimeoutMs,
+   validateDatabaseReadQuery,
+   type TDatabaseReadQueryInput,
+   type TDatabaseReadQueryResponse,
+   type TDatabaseReadQueryRow,
+   type TDatabaseReadQueryValue,
+ } from '@common/dev/database';
  import {
    buildDoctorResponse,
    explainSectionNames,
@@ -30,12 +41,31 @@ import {
  } from '@common/dev/inspection';
  import type { TProteumManifest } from '@common/dev/proteumManifest';
  import type { TRequestTrace } from '@common/dev/requestTrace';
+ import { parseMariaDbDatabaseUrl } from '@server/services/prisma/mariadb';

  const isExplainSectionName = (value: string): value is TExplainSectionName =>
    explainSectionNames.includes(value as TExplainSectionName);
  const isConsoleLogLevel = (value: string): value is TDevConsoleLogLevel =>
    ['silly', 'log', 'info', 'warn', 'error'].includes(value);

+ const normalizeDatabaseValue = (value: unknown): TDatabaseReadQueryValue => {
+   if (value === null || value === undefined) return null;
+   if (typeof value === 'string' || typeof value === 'number' || typeof value === 'boolean') return value;
+   if (typeof value === 'bigint') return value.toString();
+   if (value instanceof Date) return value.toISOString();
+   if (Buffer.isBuffer(value)) return `[Buffer ${value.byteLength} bytes]`;
+
+   return JSON.stringify(value);
+ };
+
+ const normalizeDatabaseRow = (row: unknown): TDatabaseReadQueryRow => {
+   if (!row || typeof row !== 'object' || Array.isArray(row)) return {};
+
+   return Object.fromEntries(
+     Object.entries(row).map(([key, value]) => [key, normalizeDatabaseValue(value)]),
+   ) as TDatabaseReadQueryRow;
+ };
+
  export default class DevDiagnosticsRegistry {
    public constructor(private app: Application) {}

@@ -88,6 +118,68 @@ export default class DevDiagnosticsRegistry {
      return { logs: this.app.container.Console.listLogs(limit, isConsoleLogLevel(minimumLevel) ? minimumLevel : 'log') };
    }

+   public async databaseReadQuery({
+     limit: rawLimit,
+     sql: rawSql,
+     timeoutMs: rawTimeoutMs,
+   }: TDatabaseReadQueryInput): Promise<TDatabaseReadQueryResponse> {
+     const databaseUrl = process.env.DATABASE_URL;
+     if (!databaseUrl) throw new Error('DATABASE_URL is required before running database diagnostics.');
+
+     const { kind, sql } = validateDatabaseReadQuery(rawSql);
+     const limit = normalizeDatabaseReadLimit(rawLimit);
+     const timeoutMs = normalizeDatabaseReadTimeoutMs(rawTimeoutMs);
+     const connectionConfig = parseMariaDbDatabaseUrl(databaseUrl);
+     const connection = await mysql.createConnection({
+       host: connectionConfig.host,
+       port: connectionConfig.port,
+       user: connectionConfig.user,
+       password: connectionConfig.password,
+       database: connectionConfig.database,
+       connectTimeout: connectionConfig.connectTimeout,
+       multipleStatements: false,
+       supportBigNumbers: true,
+       bigNumberStrings: true,
+       dateStrings: true,
+     });
+     const startedAt = performance.now();
+
+     try {
+       await connection.query('START TRANSACTION READ ONLY');
+       const [rows, fields] = await connection.query({ sql, timeout: timeoutMs });
+       await connection.rollback();
+
+       const rowList = Array.isArray(rows) ? rows : [];
+       const normalizedRows = rowList.map(normalizeDatabaseRow);
+       const elapsedMs = Math.max(0, Math.round(performance.now() - startedAt));
+
+       return {
+         columns: Array.isArray(fields)
+           ? fields.map((field) => ({
+               name: field.name,
+               table: field.table || undefined,
+               type: field.type,
+             }))
+           : [],
+         elapsedMs,
+         kind,
+         limit,
+         limited: normalizedRows.length > limit,
+         rowCount: normalizedRows.length,
+         rows: normalizedRows.slice(0, limit),
+         sql,
+       };
+     } catch (error) {
+       try {
+         await connection.rollback();
+       } catch (_rollbackError) {}
+
+       throw error;
+     } finally {
+       await connection.end();
+     }
+   }
+
    private resolveRequestTrace({ path, requestId }: { path?: string; requestId?: string }): TRequestTrace | undefined {
      if (requestId) return this.app.container.Trace.getRequest(requestId);
      if (!path) return this.app.container.Trace.getLatestRequest();
@@ -2,6 +2,7 @@ import { buildContractsDoctorResponse } from '@common/dev/contractsDoctor';
  import { buildDoctorResponse } from '@common/dev/diagnostics';
  import { buildOrientationResponse, explainOwner } from '@common/dev/inspection';
  import {
+   compactDatabaseReadQueryResponse,
    buildRuntimeStatusPayload,
    compactDiagnoseResponse,
    compactDoctorResponse,
@@ -189,6 +190,15 @@ export const createRuntimeProteumMcpProvider = ({
        response: diagnostics().readLogs(limit, level),
      });
    },
+   async dbQuery({ limit, sql, timeoutMs }) {
+     return compactDatabaseReadQueryResponse(
+       await diagnostics().databaseReadQuery({
+         limit,
+         sql,
+         timeoutMs,
+       }),
+     );
+   },
    async readResource(uri) {
      if (uri === 'proteum://runtime/status') return await provider.runtimeStatus({});
      if (uri === 'proteum://instructions/router') return await provider.instructionsResolve({});
@@ -19,7 +19,7 @@ const decodeUrlSegment = (value: string) => {
19
19
  return decodeURIComponent(value);
20
20
  };
21
21
 
22
- export const createMariaDbAdapter = (databaseUrl: string) => {
22
+ export const parseMariaDbDatabaseUrl = (databaseUrl: string) => {
23
23
  const url = new URL(databaseUrl);
24
24
 
25
25
  if (url.protocol !== 'mysql:' && url.protocol !== 'mariadb:')
@@ -34,7 +34,7 @@ export const createMariaDbAdapter = (databaseUrl: string) => {
34
34
  const connectTimeoutSeconds = parseInteger(url.searchParams.get('connect_timeout'));
35
35
  const idleTimeoutSeconds = parseInteger(url.searchParams.get('max_idle_connection_lifetime'));
36
36
 
37
- return new PrismaMariaDb({
37
+ return {
38
38
  host: url.hostname,
39
39
  port: parseInteger(url.port) ?? defaultPort,
40
40
  user: decodeUrlSegment(url.username),
@@ -43,5 +43,9 @@ export const createMariaDbAdapter = (databaseUrl: string) => {
43
43
  connectTimeout: connectTimeoutSeconds ? connectTimeoutSeconds * 1_000 : defaultConnectTimeout,
44
44
  idleTimeout: idleTimeoutSeconds ?? defaultIdleTimeout,
45
45
  ...(connectionLimit !== undefined ? { connectionLimit } : {}),
46
- });
46
+ };
47
+ };
48
+
49
+ export const createMariaDbAdapter = (databaseUrl: string) => {
50
+ return new PrismaMariaDb(parseMariaDbDatabaseUrl(databaseUrl));
47
51
  };
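The extracted `parseMariaDbDatabaseUrl` leans on the WHATWG `URL` class, which already splits a `mysql://` connection string into credentials, host, port, path, and query parameters. A self-contained sketch of that extraction, with an illustrative sample URL and defaults rather than the package's actual configuration:

```typescript
// Sketch of pulling MariaDB connection fields out of a DATABASE_URL
// with the WHATWG URL API, as the refactor above does. The sample URL,
// field names, and fallback values here are illustrative assumptions.
const url = new URL('mysql://app_user:s3cret@db.internal:3307/klair?connect_timeout=5');

const config = {
  host: url.hostname,                        // 'db.internal'
  port: Number(url.port) || 3306,            // 3307
  user: decodeURIComponent(url.username),    // 'app_user'
  password: decodeURIComponent(url.password),
  database: url.pathname.replace(/^\//, ''), // 'klair'
  // connect_timeout is given in seconds on the URL; driver wants ms.
  connectTimeoutMs: Number(url.searchParams.get('connect_timeout') ?? 10) * 1_000,
};

console.log(config.port, config.database);
```

Because `mysql:` is a non-special scheme, the `URL` parser still handles the `//` authority section, so userinfo decoding and port parsing come for free; the package's version adds protocol validation and defaults on top.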
@@ -606,6 +606,31 @@ export default class HttpServer<TRouter extends TServerRouter = TServerRouter> {
      }
    });

+   routes.post('/__proteum/db/query', async (req, res) => {
+     const sql = typeof req.body?.sql === 'string' ? req.body.sql : '';
+     const rawLimit = Number(req.body?.limit);
+     const rawTimeoutMs = Number(req.body?.timeoutMs);
+
+     try {
+       res.json(
+         await this.app.getDevDiagnostics().databaseReadQuery({
+           sql,
+           limit: Number.isFinite(rawLimit) ? rawLimit : undefined,
+           timeoutMs: Number.isFinite(rawTimeoutMs) ? rawTimeoutMs : undefined,
+         }),
+       );
+     } catch (error) {
+       const message = error instanceof Error ? error.message : String(error);
+       const isUsageError =
+         message.includes('SQL') ||
+         message.includes('allowed') ||
+         message.includes('not allowed') ||
+         message.includes('DATABASE_URL');
+
+       res.status(isUsageError ? 400 : 500).json({ error: message });
+     }
+   });
+
    routes.get('/__proteum/diagnose', (req, res) => {
      const readString = (value: unknown) => (Array.isArray(value) ? value[0] : value);
      const readNumber = (value: unknown, fallback: number) => {
@@ -110,6 +110,19 @@ test('mcp help describes projectId routing', () => {
    assert.match(output, /--stdio/);
  });

+ test('db help describes read-only SQL diagnostics', () => {
+   const result = spawnSync(process.execPath, [cliBin, 'db', '--help'], {
+     cwd: coreRoot,
+     encoding: 'utf8',
+   });
+   const output = `${result.stdout}\n${result.stderr}`;
+
+   assert.equal(result.status, 0);
+   assert.match(output, /SELECT, SHOW, and EXPLAIN/);
+   assert.match(output, /--limit/);
+   assert.match(output, /--timeout/);
+ });
+
  test('explain help describes compact section summaries', () => {
    const result = spawnSync(process.execPath, [cliBin, 'explain', '--help'], {
      cwd: coreRoot,
@@ -219,6 +232,58 @@ test('runtime status reports occupied configured port without probing page bodie
    }
  });

+ test('db query posts one read-only SQL statement to the running dev endpoint', async () => {
+   const repoRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'proteum-cli-db-'));
+   const appRoot = path.join(repoRoot, 'apps', 'product');
+   let receivedBody = '';
+   const server = http.createServer((req, res) => {
+     if (req.url === '/__proteum/db/query' && req.method === 'POST') {
+       req.on('data', (chunk) => {
+         receivedBody += chunk.toString();
+       });
+       req.on('end', () => {
+         res.setHeader('content-type', 'application/json');
+         res.end(
+           JSON.stringify({
+             kind: 'select',
+             sql: 'SELECT 1',
+             elapsedMs: 7,
+             limit: 5,
+             limited: false,
+             rowCount: 1,
+             columns: [{ name: 'value', type: 3 }],
+             rows: [{ value: 1 }],
+           }),
+         );
+       });
+       return;
+     }
+
+     res.statusCode = 404;
+     res.end('not found');
+   });
+   const port = await listen(server);
+
+   try {
+     createProteumApp(appRoot, { routerPort: port });
+
+     const result = await runCli(['db', 'query', 'SELECT 1', '--limit', '5'], {
+       cwd: appRoot,
+     });
+     const payload = JSON.parse(result.stdout);
+     const body = JSON.parse(receivedBody);
+
+     assert.equal(result.status, 0);
+     assert.equal(body.sql, 'SELECT 1');
+     assert.equal(body.limit, 5);
+     assert.equal(payload.ok, true);
+     assert.equal(payload.data.elapsedMs, 7);
+     assert.deepEqual(payload.data.rows, [{ value: 1 }]);
+   } finally {
+     await closeServer(server);
+   }
+ });
+
  test('runtime status avoids starting a second dev server when the same app owns the port', async () => {
    const repoRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'proteum-cli-same-port-'));
    const appRoot = path.join(repoRoot, 'apps', 'product');
@@ -21,6 +21,10 @@ const {
    resolveInstructionRouting,
  } = require('../common/dev/mcpPayloads.ts');
  const { createProteumMcpServer } = require('../common/dev/mcpServer.ts');
+ const {
+   normalizeDatabaseReadLimit,
+   validateDatabaseReadQuery,
+ } = require('../common/dev/database.ts');
  const { createProteumMachineMcpServer } = require('../cli/mcp/router.ts');
  const {
    createDevSessionRecord,
@@ -126,6 +130,21 @@ test('instruction routing returns compact selected files for a page query', () =
    assert.equal(payload.data.readWhen.some((entry) => entry.file && entry.file.endsWith('diagnostics.md')), true);
  });

+ test('database read query policy allows only capped SELECT SHOW and EXPLAIN diagnostics', () => {
+   assert.deepEqual(validateDatabaseReadQuery(' SELECT 1; '), { kind: 'select', sql: 'SELECT 1' });
+   assert.deepEqual(validateDatabaseReadQuery('/* plan */ EXPLAIN SELECT * FROM User'), {
+     kind: 'explain',
+     sql: '/* plan */ EXPLAIN SELECT * FROM User',
+   });
+   assert.deepEqual(validateDatabaseReadQuery('SHOW TABLES'), { kind: 'show', sql: 'SHOW TABLES' });
+   assert.equal(normalizeDatabaseReadLimit(999), 500);
+
+   assert.throws(() => validateDatabaseReadQuery('UPDATE User SET role = "admin"'), /Only SELECT, SHOW, and EXPLAIN/);
+   assert.throws(() => validateDatabaseReadQuery('SELECT 1; DROP TABLE User'), /Only one read-only SQL statement/);
+   assert.throws(() => validateDatabaseReadQuery('EXPLAIN ANALYZE SELECT * FROM User'), /EXPLAIN ANALYZE/);
+   assert.throws(() => validateDatabaseReadQuery('SELECT LOAD_FILE("/etc/passwd")'), /LOAD_FILE/);
+ });
+
  test('instruction routing promotes triggered full instruction files', () => {
    const appRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'proteum-mcp-trigger-app-'));
    const fallbackRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'proteum-mcp-trigger-core-'));
@@ -367,6 +386,7 @@ test('trace payload keeps default output compact and paginates full details', ()
  test('MCP server registers the Proteum read-only tool contract', async () => {
    const payload = createMcpPayload({ summary: 'ok', data: { value: 1 } });
    const provider = {
+     dbQuery: async () => payload,
      diagnose: async () => payload,
      doctor: async () => payload,
      explainSummary: async () => payload,
@@ -396,6 +416,7 @@ test('MCP server registers the Proteum read-only tool contract', async () => {
    assert.equal(tools.tools.some((tool) => tool.name === 'runtime_status'), true);
    assert.equal(tools.tools.some((tool) => tool.name === 'workflow_start'), true);
    assert.equal(tools.tools.some((tool) => tool.name === 'route_candidates'), true);
+   assert.equal(tools.tools.some((tool) => tool.name === 'db_query'), true);
    assert.match(result.content[0].text, /proteum-mcp-v1/);
    assert.match(resource.contents[0].text, /proteum-mcp-v1/);