@chaprola/mcp-server 1.6.3 → 1.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/dist/index.js CHANGED
@@ -102,11 +102,11 @@ const server = new McpServer({
  - **Run programs:** chaprola_run (single execution), chaprola_run_each (per-record batch), chaprola_report (published reports)
  - **Email:** chaprola_email_send, chaprola_email_inbox, chaprola_email_read
  - **Web:** chaprola_search (Brave API), chaprola_fetch (URL → markdown)
- - **Schema:** chaprola_format (inspect fields), chaprola_alter (add/widen/rename/drop fields)
+ - **Schema:** chaprola_format (inspect fields), chaprola_alter (add/widen/rename/drop fields — NON-DESTRUCTIVE for in-place schema edits). Re-imports now preserve and widen existing schemas automatically when targeting an existing file.
  - **Export:** chaprola_export (JSON or FHIR — full round-trip: FHIR in, process, FHIR out)
  - **Schedule:** chaprola_schedule (cron jobs for any endpoint)

- **The programming language** is small and focused — about 15 commands. Read chaprola://cookbook before writing source code. Common patterns: aggregation, filtering, scoring, report formatting. Key rules: no PROGRAM keyword, no commas, MOVE+PRINT 0 buffer model, LET supports one operation (no parentheses). Named parameters: PARAM.name reads URL query params as strings; LET x = PARAM.name converts to numeric. Named output positions: U.name instead of U1-U20.
+ **The programming language** is small and focused — about 15 commands. Read chaprola://cookbook before writing source code. Common patterns: aggregation, filtering, scoring, report formatting. Key rules: no PROGRAM keyword, no commas, reports can use either the classic MOVE-to-U-buffer then PRINT 0 pattern or one-line PRINT concatenation, and LET supports one operation (no parentheses). Named parameters: PARAM.name reads URL query params as strings; LET x = PARAM.name converts to numeric. Named output positions: U.name instead of U1-U20.

  **Common misconceptions:**
  - "No JOINs" → Wrong. chaprola_query supports JOIN with hash and merge methods across files. Use chaprola_index to build indexes for fast lookups on join fields.
@@ -155,7 +155,7 @@ function readRef(filename) {
  server.resource("cookbook", "chaprola://cookbook", { description: "Chaprola language cookbook — syntax patterns, complete examples, and the import→compile→run workflow. READ THIS before writing any Chaprola source code.", mimeType: "text/markdown" }, async () => ({
  contents: [{ uri: "chaprola://cookbook", mimeType: "text/markdown", text: readRef("cookbook.md") }],
  }));
- server.resource("gotchas", "chaprola://gotchas", { description: "Common Chaprola mistakes — no parentheses in LET, no commas in PRINT, MOVE length must match field width, DEFINE names must not collide with fields. READ THIS before writing code.", mimeType: "text/markdown" }, async () => ({
+ server.resource("gotchas", "chaprola://gotchas", { description: "Common Chaprola mistakes — no parentheses in LET, no commas in PRINT, DEFINE names must not collide with fields, always pass primary_format to compile. READ THIS before writing code.", mimeType: "text/markdown" }, async () => ({
  contents: [{ uri: "chaprola://gotchas", mimeType: "text/markdown", text: readRef("gotchas.md") }],
  }));
  server.resource("endpoints", "chaprola://endpoints", { description: "Chaprola API endpoint reference — all 40 endpoints with request/response shapes", mimeType: "text/markdown" }, async () => ({
@@ -174,6 +174,9 @@ server.resource("ref-import", "chaprola://ref/import", { description: "Import, e
  server.resource("ref-query", "chaprola://ref/query", { description: "Query, sort, index, merge, record CRUD — data operations", mimeType: "text/markdown" }, async () => ({
  contents: [{ uri: "chaprola://ref/query", mimeType: "text/markdown", text: readRef("ref-query.md") }],
  }));
+ server.resource("ref-schema", "chaprola://ref/schema", { description: "Schema operations — /format (inspect), /alter (widen/rename/add/drop fields). CRITICAL: Use /alter for schema changes, NOT /import.", mimeType: "text/markdown" }, async () => ({
+ contents: [{ uri: "chaprola://ref/schema", mimeType: "text/markdown", text: readRef("ref-schema.md") }],
+ }));
  server.resource("ref-pivot", "chaprola://ref/pivot", { description: "Pivot tables (GROUP BY) — row, column, aggregate functions", mimeType: "text/markdown" }, async () => ({
  contents: [{ uri: "chaprola://ref/pivot", mimeType: "text/markdown", text: readRef("ref-pivot.md") }],
  }));
@@ -195,9 +198,12 @@ server.resource("ref-email", "chaprola://ref/email", { description: "Email syste
  server.resource("ref-gotchas", "chaprola://ref/gotchas", { description: "Common Chaprola mistakes — language, API, and secondary file pitfalls", mimeType: "text/markdown" }, async () => ({
  contents: [{ uri: "chaprola://ref/gotchas", mimeType: "text/markdown", text: readRef("ref-gotchas.md") }],
  }));
- server.resource("ref-auth", "chaprola://ref/auth", { description: "Authentication details — registration, login, BAA, MCP env vars", mimeType: "text/markdown" }, async () => ({
+ server.resource("ref-auth", "chaprola://ref/auth", { description: "Authentication details — registration, login, BAA, cross-user sharing, MCP env vars", mimeType: "text/markdown" }, async () => ({
  contents: [{ uri: "chaprola://ref/auth", mimeType: "text/markdown", text: readRef("ref-auth.md") }],
  }));
+ server.resource("ref-apps", "chaprola://ref/apps", { description: "Building apps on Chaprola — React/frontend architecture, site keys, single-owner vs multi-user, enterprise proxy pattern", mimeType: "text/markdown" }, async () => ({
+ contents: [{ uri: "chaprola://ref/apps", mimeType: "text/markdown", text: readRef("ref-apps.md") }],
+ }));
  // --- MCP Prompts ---
  server.prompt("chaprola-guide", "Essential guide for working with Chaprola. Read this before writing any Chaprola source code.", async () => ({
  messages: [{
@@ -210,7 +216,7 @@ server.prompt("chaprola-guide", "Essential guide for working with Chaprola. Read
  "- NO `PROGRAM` keyword — programs start directly with commands\n" +
  "- NO commas anywhere — all arguments are space-separated\n" +
  "- NO parentheses in LET — only `LET var = a OP b` (one operation)\n" +
- "- Output uses MOVE + PRINT 0 buffer model, NOT `PRINT field`\n" +
+ "- Output can use classic MOVE + PRINT 0 buffers or one-line PRINT concatenation (`PRINT \"label\" + P.field + rec`)\n" +
  "- Field addressing: P.fieldname (primary), S.fieldname (secondary)\n" +
  "- Loop pattern: `LET rec = 1` → `SEEK rec` → `IF EOF GOTO end` → process → `LET rec = rec + 1` → `GOTO loop`\n\n" +
  "## Minimal Example\n" +
@@ -219,10 +225,7 @@ server.prompt("chaprola-guide", "Essential guide for working with Chaprola. Read
  "LET rec = 1\n" +
  "100 SEEK rec\n" +
  " IF EOF GOTO 900\n" +
- " MOVE BLANKS U.1 40\n" +
- " MOVE P.name U.1 8\n" +
- " MOVE P.value U.12 6\n" +
- " PRINT 0\n" +
+ " PRINT P.name + \" — \" + P.value\n" +
  " LET rec = rec + 1\n" +
  " GOTO 100\n" +
  "900 END\n" +
@@ -243,6 +246,10 @@ server.tool("chaprola_hello", "Health check — verify the Chaprola API is runni
  const res = await fetch(url);
  return textResult(res);
  });
+ server.tool("chaprola_help", "Get the full Chaprola documentation bundle from POST /help. Call this before guessing when compile or run fails. No auth required.", {}, async () => {
+ const res = await publicFetch("POST", "/help", {});
+ return textResult(res);
+ });
  server.tool("chaprola_register", "Register a new Chaprola account. Returns an API key — save it immediately", {
  username: z.string().describe("3-40 chars, alphanumeric + hyphens/underscores, starts with letter"),
  passcode: z.string().describe("16-128 characters. Use a long, unique passcode"),
@@ -277,8 +284,9 @@ server.tool("chaprola_report", "Run a published program and return output. No au
  project: z.string().describe("Project containing the program"),
  name: z.string().describe("Name of the published .PR file"),
  token: z.string().optional().describe("Action token (act_...) for writable reports. Required to persist WRITE/DELETE/QUERY operations. Provided when program was published with writable=true."),
- params: z.record(z.union([z.string(), z.number()])).optional().describe("Parameters to inject before execution. Named params (e.g., {deck: \"kanji\", level: 3}) are read in programs via PARAM.name. Legacy R-variables (r1-r20) also supported. Use chaprola_report_params to discover what params a report accepts."),
- }, async ({ userid, project, name, token, params }) => {
+ params: z.string().optional().describe("JSON object of named params, e.g. {\"deck\": \"kanji\", \"level\": 3}. Named params are read in programs via PARAM.name. Legacy R-variables (r1-r20) also supported. Use chaprola_report_params to discover what params a report accepts."),
+ }, async ({ userid, project, name, token, params: paramsStr }) => {
+ const params = typeof paramsStr === 'string' ? JSON.parse(paramsStr) : paramsStr;
  const urlParams = new URLSearchParams();
  urlParams.set("userid", userid);
  urlParams.set("project", project);
@@ -330,13 +338,14 @@ server.tool("chaprola_baa_status", "Check whether the authenticated user has sig
  return textResult(res);
  });
  // --- Import ---
- server.tool("chaprola_import", "Import JSON data into Chaprola format files (.F + .DA). Sign BAA first if handling PHI", {
+ server.tool("chaprola_import", "Import JSON data into Chaprola format files (.F + .DA). If the target file already exists, Chaprola preserves the existing schema, widens matching fields as needed, keeps legacy fields as blanks, and appends new fields at the end. Use chaprola_alter for explicit in-place schema surgery. Sign BAA first if handling PHI", {
  project: z.string().describe("Project name"),
  name: z.string().describe("File name (without extension)"),
- data: z.array(z.record(z.any())).describe("Array of flat JSON objects to import"),
+ data: z.string().describe("JSON array of record objects to import"),
  format: z.enum(["json", "fhir"]).optional().describe("Data format: json (default) or fhir"),
  expires_in_days: z.number().optional().describe("Days until data expires (default: 90)"),
- }, async ({ project, name, data, format, expires_in_days }) => withBaaCheck(async () => {
+ }, async ({ project, name, data: dataStr, format, expires_in_days }) => withBaaCheck(async () => {
+ const data = typeof dataStr === 'string' ? JSON.parse(dataStr) : dataStr;
  const { username } = getCredentials();
  const body = { userid: username, project, name, data };
  if (format)
@@ -346,7 +355,7 @@ server.tool("chaprola_import", "Import JSON data into Chaprola format files (.F
  const res = await authedFetch("/import", body);
  return textResult(res);
  }));
- server.tool("chaprola_import_url", "Get a presigned S3 upload URL for large files (bypasses 6MB API Gateway limit)", {
+ server.tool("chaprola_import_url", "Get a presigned S3 upload URL for large files (bypasses 6MB API Gateway limit). The subsequent chaprola_import_process preserves and widens an existing schema automatically when importing into an existing file.", {
  project: z.string().describe("Project name"),
  name: z.string().describe("File name (without extension)"),
  }, async ({ project, name }) => withBaaCheck(async () => {
@@ -354,7 +363,7 @@ server.tool("chaprola_import_url", "Get a presigned S3 upload URL for large file
  const res = await authedFetch("/import-url", { userid: username, project, name });
  return textResult(res);
  }));
- server.tool("chaprola_import_process", "Process a file previously uploaded to S3 via presigned URL. Generates .F + .DA files", {
+ server.tool("chaprola_import_process", "Process a file previously uploaded to S3 via presigned URL. Generates .F + .DA files. If the target file already exists, the existing schema is preserved and widened automatically as needed.", {
  project: z.string().describe("Project name"),
  name: z.string().describe("File name (without extension)"),
  format: z.enum(["json", "fhir"]).optional().describe("Data format: json (default) or fhir"),
@@ -366,10 +375,10 @@ server.tool("chaprola_import_process", "Process a file previously uploaded to S3
  const res = await authedFetch("/import-process", body);
  return textResult(res);
  }));
- server.tool("chaprola_import_download", "Import data directly from a public URL (CSV, TSV, JSON, NDJSON, Parquet, Excel). Optional AI-powered schema inference", {
+ server.tool("chaprola_import_download", "Import data directly from a public URL (CSV, TSV, JSON, NDJSON, Parquet, Excel). Optional AI-powered schema inference.", {
  project: z.string().describe("Project name"),
  name: z.string().describe("Output file name (without extension)"),
- url: z.string().url().describe("Public URL to download (http/https only)"),
+ url: z.string().describe("Public URL to download (http/https only)"),
  instructions: z.string().optional().describe("Natural language instructions for AI-powered field selection and transforms"),
  max_rows: z.number().optional().describe("Maximum rows to import (default: 5,000,000)"),
  }, async ({ project, name, url, instructions, max_rows }) => withBaaCheck(async () => {
@@ -404,12 +413,12 @@ server.tool("chaprola_list", "List files in a project with optional wildcard pat
  return textResult(res);
  }));
  // --- Compile ---
- server.tool("chaprola_compile", "Compile Chaprola source (.CS) to bytecode (.PR). READ chaprola://cookbook BEFORE writing source. Key syntax: no PROGRAM keyword (start with commands), no commas, MOVE+PRINT 0 buffer model (not PRINT field), SEEK for primary records, OPEN/READ/WRITE/CLOSE for secondary files, LET supports one operation (no parentheses), field addressing via P.field/S.field requires primary_format/secondary_format params.", {
+ server.tool("chaprola_compile", "Compile Chaprola source (.CS) to bytecode (.PR). READ chaprola://cookbook BEFORE writing source. Key syntax: no PROGRAM keyword (start with commands), no commas, reports can use MOVE+PRINT 0 buffers or one-line PRINT concatenation, SEEK for primary records, OPEN/READ/WRITE/CLOSE for secondary files, LET supports one operation (no parentheses). Use primary_format to enable P.fieldname addressing (recommended) — the compiler resolves field names to positions and lengths from the format file. If compile fails, call chaprola_help before retrying.", {
  project: z.string().describe("Project name"),
  name: z.string().describe("Program name (without extension)"),
  source: z.string().describe("Chaprola source code"),
- primary_format: z.string().optional().describe("Primary data file name (enables P.fieldname addressing)"),
- secondary_format: z.string().optional().describe("Secondary format file name (enables S.fieldname addressing)"),
+ primary_format: z.string().optional().describe("Primary data file name enables P.fieldname addressing (recommended for all programs that reference data fields)"),
+ secondary_format: z.string().optional().describe("Secondary data file name enables S.fieldname addressing (required if using S.fieldname references)"),
  }, async ({ project, name, source, primary_format, secondary_format }) => withBaaCheck(async () => {
  const { username } = getCredentials();
  const body = { userid: username, project, name, source };
@@ -421,7 +430,7 @@ server.tool("chaprola_compile", "Compile Chaprola source (.CS) to bytecode (.PR)
  return textResult(res);
  }));
  // --- Run ---
- server.tool("chaprola_run", "Execute a compiled .PR program. Use async:true for large datasets (>100K records)", {
+ server.tool("chaprola_run", "Execute a compiled .PR program. Use async:true for large datasets (>100K records). If runtime errors occur, call chaprola_help before retrying.", {
  project: z.string().describe("Project name"),
  name: z.string().describe("Program name (without extension)"),
  primary_file: z.string().optional().describe("Primary data file to load"),
@@ -473,6 +482,20 @@ server.tool("chaprola_run_each", "Run a compiled .PR program against every recor
  const res = await authedFetch("/run-each", body);
  return textResult(res);
  }));
+ // --- Systemhelp ---
+ server.tool("chaprola_systemhelp", "Send your program name and error message. Chaprola will read your source, intent file, and data schema to diagnose and fix the problem. See POST /help for examples.", {
+ project: z.string().describe("Project name"),
+ name: z.string().describe("Program name (without extension)"),
+ error: z.string().optional().describe("Error message from compile or runtime (copy verbatim if available)"),
+ request: z.string().describe("Plain-language description of the problem. Include context: what changed, what you expected, what happened instead."),
+ }, async ({ project, name, error, request }) => withBaaCheck(async () => {
+ const { username } = getCredentials();
+ const body = { userid: username, project, name, request };
+ if (error)
+ body.error = error;
+ const res = await authedFetch("/systemhelp", body);
+ return textResult(res);
+ }));
  // --- Publish ---
  server.tool("chaprola_publish", "Publish a compiled program for public access via /report", {
  project: z.string().describe("Project name"),
@@ -539,16 +562,22 @@ server.tool("chaprola_download", "Get a presigned S3 URL to download any file yo
  server.tool("chaprola_query", "SQL-free data query with WHERE, SELECT, aggregation, ORDER BY, JOIN, pivot, and Mercury scoring", {
  project: z.string().describe("Project name"),
  file: z.string().describe("Data file to query"),
- where: z.record(z.any()).optional().describe("Filter: {field, op, value}. Ops: eq, ne, gt, ge, lt, le, between, contains, starts_with"),
+ where: z.string().optional().describe("JSON object of filter conditions, e.g. {\"field\": \"status\", \"op\": \"eq\", \"value\": \"active\"}. Ops: eq, ne, gt, ge, lt, le, between, contains, starts_with"),
  select: z.array(z.string()).optional().describe("Fields to include in output"),
- aggregate: z.array(z.record(z.any())).optional().describe("Aggregation: [{field, func}]. Funcs: count, sum, avg, min, max, stddev"),
- order_by: z.array(z.record(z.any())).optional().describe("Sort: [{field, dir}]"),
+ aggregate: z.string().optional().describe("JSON array of aggregation specs, e.g. [{\"field\": \"amount\", \"func\": \"sum\"}]. Funcs: count, sum, avg, min, max, stddev"),
+ order_by: z.string().optional().describe("JSON array of sort specs, e.g. [{\"field\": \"name\", \"dir\": \"asc\"}]"),
  limit: z.number().optional().describe("Max results to return"),
  offset: z.number().optional().describe("Skip this many results"),
- join: z.record(z.any()).optional().describe("Join: {file, on, type, method}"),
- pivot: z.record(z.any()).optional().describe("Pivot: {row, column, values, totals, grand_total}"),
- mercury: z.record(z.any()).optional().describe("Mercury scoring: {fields: [{field, target, weight}]}"),
- }, async ({ project, file, where, select, aggregate, order_by, limit, offset, join, pivot, mercury }) => withBaaCheck(async () => {
+ join: z.string().optional().describe("JSON object of join config, e.g. {\"file\": \"other\", \"on\": \"id\", \"type\": \"inner\"}"),
+ pivot: z.string().optional().describe("JSON object of pivot config, e.g. {\"row\": \"category\", \"column\": \"month\", \"values\": \"sales\"}"),
+ mercury: z.string().optional().describe("JSON object of mercury scoring config, e.g. {\"fields\": [{\"field\": \"score\", \"target\": 100, \"weight\": 1.0}]}"),
+ }, async ({ project, file, where: whereStr, select, aggregate: aggregateStr, order_by: orderByStr, limit, offset, join: joinStr, pivot: pivotStr, mercury: mercuryStr }) => withBaaCheck(async () => {
+ const where = typeof whereStr === 'string' ? JSON.parse(whereStr) : whereStr;
+ const aggregate = typeof aggregateStr === 'string' ? JSON.parse(aggregateStr) : aggregateStr;
+ const order_by = typeof orderByStr === 'string' ? JSON.parse(orderByStr) : orderByStr;
+ const join = typeof joinStr === 'string' ? JSON.parse(joinStr) : joinStr;
+ const pivot = typeof pivotStr === 'string' ? JSON.parse(pivotStr) : pivotStr;
+ const mercury = typeof mercuryStr === 'string' ? JSON.parse(mercuryStr) : mercuryStr;
  const { username } = getCredentials();
  const body = { userid: username, project, file };
  if (where)
@@ -609,7 +638,7 @@ server.tool("chaprola_merge", "Merge two sorted data files into one. Both must s
  return textResult(res);
  }));
  // --- Schema: Format + Alter ---
- server.tool("chaprola_format", "Inspect a data file's schema — returns field names, positions, lengths, types, and PHI flags", {
+ server.tool("chaprola_format", "Inspect a data file's schema — returns field names, types, and PHI flags. Use this to understand the data structure before writing programs.", {
  project: z.string().describe("Project name"),
  name: z.string().describe("Data file name (without .F extension)"),
  }, async ({ project, name }) => withBaaCheck(async () => {
@@ -617,7 +646,7 @@ server.tool("chaprola_format", "Inspect a data file's schema — returns field n
  const res = await authedFetch("/format", { userid: username, project, name });
  return textResult(res);
  }));
- server.tool("chaprola_alter", "Modify a data file's schema: widen/narrow/rename fields, add new fields, drop fields. Transforms existing data to match the new schema.", {
+ server.tool("chaprola_alter", "Modify a data file's schema: widen/narrow/rename fields, add new fields, drop fields. NON-DESTRUCTIVE: Transforms existing data to match the new schema. Use this for explicit schema surgery on existing data files; re-imports now preserve and widen existing schemas automatically, but chaprola_alter still handles rename/drop/narrow operations.", {
  project: z.string().describe("Project name"),
  name: z.string().describe("Data file name (without extension)"),
  alter: z.array(z.object({
@@ -756,7 +785,7 @@ server.tool("chaprola_search", "Search the web via Brave Search API. Returns tit
  }));
  // --- Fetch ---
  server.tool("chaprola_fetch", "Fetch any URL and return clean content. HTML pages converted to markdown. SSRF-protected. Rate limit: 20/day per user", {
- url: z.string().url().describe("URL to fetch (http:// or https://)"),
+ url: z.string().describe("URL to fetch (http:// or https://)"),
  format: z.enum(["markdown", "text", "html", "json"]).optional().describe("Output format (default: markdown)"),
  max_length: z.number().optional().describe("Max output characters (default: 50000, max: 200000)"),
  }, async ({ url, format, max_length }) => withBaaCheck(async () => {
@@ -773,9 +802,10 @@ server.tool("chaprola_schedule", "Create a scheduled job that runs a Chaprola en
  name: z.string().describe("Unique name for this schedule (alphanumeric + hyphens/underscores)"),
  cron: z.string().describe("Standard 5-field cron expression (min hour day month weekday). Minimum interval: 15 minutes"),
  endpoint: z.enum(["/import-download", "/run", "/export-report", "/search", "/fetch", "/query", "/email/send", "/export", "/report", "/list"]).describe("Target endpoint to call"),
- body: z.record(z.any()).describe("Request body for the target endpoint. userid is injected automatically"),
+ body: z.string().describe("JSON object of the schedule request body. userid is injected automatically"),
  skip_if_unchanged: z.boolean().optional().describe("Skip when response matches previous run (SHA-256 hash). Default: false"),
- }, async ({ name, cron, endpoint, body, skip_if_unchanged }) => withBaaCheck(async () => {
+ }, async ({ name, cron, endpoint, body: bodyStr, skip_if_unchanged }) => withBaaCheck(async () => {
+ const body = typeof bodyStr === 'string' ? JSON.parse(bodyStr) : bodyStr;
  const reqBody = { name, cron, endpoint, body };
  if (skip_if_unchanged !== undefined)
  reqBody.skip_if_unchanged = skip_if_unchanged;
@@ -796,8 +826,9 @@ server.tool("chaprola_schedule_delete", "Delete a scheduled job by name", {
  server.tool("chaprola_insert_record", "Insert a new record into a data file's merge file (.MRG). The record appears at the end of the file until consolidation.", {
  project: z.string().describe("Project name"),
  file: z.string().describe("Data file name (without extension)"),
- record: z.record(z.string()).describe("Field name value pairs. Unspecified fields default to blanks."),
- }, async ({ project, file, record }) => withBaaCheck(async () => {
+ record: z.string().describe("JSON object of the record to insert, e.g. {\"name\": \"foo\", \"status\": \"active\"}. Unspecified fields default to blanks."),
+ }, async ({ project, file, record: recordStr }) => withBaaCheck(async () => {
+ const record = typeof recordStr === 'string' ? JSON.parse(recordStr) : recordStr;
  const { username } = getCredentials();
  const res = await authedFetch("/insert-record", { userid: username, project, file, record });
  return textResult(res);
@@ -805,9 +836,11 @@ server.tool("chaprola_insert_record", "Insert a new record into a data file's me
  server.tool("chaprola_update_record", "Update fields in a single record matched by a where clause. If no sort-key changes, updates in place; otherwise marks old record ignored and appends to merge file.", {
  project: z.string().describe("Project name"),
  file: z.string().describe("Data file name (without extension)"),
- where: z.record(z.string()).describe("Field name value pairs to identify exactly one record"),
- set: z.record(z.string()).describe("Field name new value pairs to update"),
- }, async ({ project, file, where: whereClause, set }) => withBaaCheck(async () => {
+ where: z.string().describe("JSON object of filter conditions for which records to update, e.g. {\"id\": \"123\"}"),
+ set: z.string().describe("JSON object of fields to update, e.g. {\"status\": \"done\"}"),
+ }, async ({ project, file, where: whereStr, set: setStr }) => withBaaCheck(async () => {
+ const whereClause = typeof whereStr === 'string' ? JSON.parse(whereStr) : whereStr;
+ const set = typeof setStr === 'string' ? JSON.parse(setStr) : setStr;
  const { username } = getCredentials();
  const res = await authedFetch("/update-record", { userid: username, project, file, where: whereClause, set });
  return textResult(res);
@@ -815,8 +848,9 @@ server.tool("chaprola_update_record", "Update fields in a single record matched
  server.tool("chaprola_delete_record", "Delete a single record matched by a where clause. Marks the record as ignored (.IGN). Physically removed on consolidation.", {
  project: z.string().describe("Project name"),
  file: z.string().describe("Data file name (without extension)"),
- where: z.record(z.string()).describe("Field name value pairs to identify exactly one record"),
- }, async ({ project, file, where: whereClause }) => withBaaCheck(async () => {
+ where: z.string().describe("JSON object of filter conditions for which records to delete, e.g. {\"id\": \"123\"}"),
+ }, async ({ project, file, where: whereStr }) => withBaaCheck(async () => {
+ const whereClause = typeof whereStr === 'string' ? JSON.parse(whereStr) : whereStr;
  const { username } = getCredentials();
  const res = await authedFetch("/delete-record", { userid: username, project, file, where: whereClause });
  return textResult(res);
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "@chaprola/mcp-server",
- "version": "1.6.3",
- "description": "MCP server for Chaprola — agent-first data platform. Gives AI agents 46 tools for structured data storage, record CRUD, querying, schema inspection, web search, URL fetching, scheduled jobs, and execution via plain HTTP.",
+ "version": "1.7.0",
+ "description": "MCP server for Chaprola — agent-first data platform. Gives AI agents tools for structured data storage, record CRUD, querying, schema inspection, documentation lookup, web search, URL fetching, scheduled jobs, scoped site keys, and execution via plain HTTP.",
  "type": "module",
  "main": "dist/index.js",
  "bin": {
@@ -51,6 +51,39 @@ If you do handle PHI, sign the BAA once per account:

  These are read by the MCP server and injected into every authenticated request automatically.

+ ## Cross-User Project Sharing
+
+ By default, only the project owner can read and write their data. To grant another user access, create an `access.json` file in the project root:
+
+ ```
+ s3://chaprola-2026/{owner}/{project}/access.json
+ ```
+
+ **Format:**
+ ```json
+ {
+ "owner": "tawni",
+ "project": "social",
+ "discoverable": false,
+ "description": "Content calendar and social media",
+ "writers": [
+ {"username": "cal", "granted_at": "2026-04-06T17:25:16Z"},
+ {"username": "nora", "granted_at": "2026-04-06T17:25:16Z"}
+ ]
+ }
+ ```
+
+ **How it works:**
+ - Any user in the `writers` list gets full read+write access to that project through all API endpoints (query, update-record, insert-record, delete-record, export, compile, run, run-each, etc.)
+ - The shared user passes the **owner's** userid in requests: `{"userid": "tawni", "project": "social", ...}` — authenticated with their own API key
+ - Every protected endpoint checks: (1) does the API key's username match the `userid`? If not, (2) is the user listed as a writer in `access.json`?
+ - No `access.json` = no shared access (owner-only)
+ - `discoverable` is reserved for future use (project discovery/listing)
+
+ **To set up sharing:** Create or update the `access.json` file via S3 directly or through the `/sv` admin interface.
+
+ **To revoke access:** Remove the user from the `writers` array, or delete `access.json` entirely.
+
  ## Credential Recovery

  If your API key stops working (403):
@@ -19,43 +19,48 @@ POST /run {userid, project, name: "REPORT", primary_file: "STAFF", record: 1}
  |-------|---------|--------------------------|
  | R1–R20 | HULDRA elements (parameters) | No — HULDRA overwrites these |
  | R21–R40 | HULDRA objectives (error metrics) | No — HULDRA reads these |
- | R41–R50 | Scratch space | **Yes — always use R41–R50 for DEFINE VARIABLE** |
+ | R41–R99 | Scratch space | **Yes — always use R41–R99 for DEFINE VARIABLE** |

- For non-HULDRA programs, R1–R40 are technically available but using R41–R50 is a good habit.
+ For non-HULDRA programs, R1–R40 are technically available but using R41–R99 is a good habit.

- ## PRINT: Output from U Buffer
+ ## PRINT: Preferred Output Methods

- ```
- PRINT 0 — output the ENTIRE U buffer contents, then clear it
- PRINT N — output exactly N characters from U buffer (no clear)
+ **Concatenation (preferred):**
+ ```chaprola
+ PRINT P.name + " " + P.department + " $" + R41
+ PRINT "Total: " + R42
+ PRINT P.last_name // single field, auto-trimmed
+ PRINT "Hello from Chaprola!" // literal string
  ```

- Use `PRINT N` when you've placed data at specific positions and want clean output without trailing garbage. Use `PRINT 0` for quick output of everything.
+ - String literals are copied as-is.
+ - P./S./U./X. fields are auto-trimmed (trailing spaces removed).
+ - R-variables print as integers when no fractional part, otherwise as floats.
+ - Concatenation auto-flushes the line.

+ **U buffer output (for fixed-width columnar reports only):**
  ```chaprola
- MOVE "Hello" U.1 5
- PRINT 5 // Outputs "Hello" — exactly 5 chars, no trailing spaces
+ CLEAR U
+ MOVE P.name U.1 20
+ PUT sal INTO U.22 10 D 2
+ PRINT 0 // output entire U buffer, then clear
  ```

  ## Hello World (no data file)

  ```chaprola
- MOVE "Hello from Chaprola!" U.1 20
- PRINT 0
+ PRINT "Hello from Chaprola!"
  END
  ```

  ## Loop Through All Records

  ```chaprola
- DEFINE VARIABLE rec R1
+ DEFINE VARIABLE rec R41
  LET rec = 1
  100 SEEK rec
  IF EOF GOTO 900
- MOVE BLANKS U.1 40
- MOVE P.name U.1 8
57
- MOVE P.salary U.12 6
58
- PRINT 0
63
+ PRINT P.name + " — " + P.salary
59
64
  LET rec = rec + 1
60
65
  GOTO 100
61
66
  900 END
@@ -65,10 +70,8 @@ LET rec = 1
65
70
 
66
71
  ```chaprola
67
72
  GET sal FROM P.salary
68
- IF sal LT 80000 GOTO 200 // skip low earners
69
- MOVE P.name U.1 8
70
- PUT sal INTO U.12 10 D 0 // D=dollar format
71
- PRINT 0
73
+ IF sal LT 80000 GOTO 200 // skip low earners
74
+ PRINT P.name + " — " + sal
72
75
  200 LET rec = rec + 1
73
76
  ```
74
77
 
@@ -76,10 +79,10 @@ PRINT 0
76
79
 
77
80
  ```chaprola
78
81
  OPEN "DEPARTMENTS" 0
79
- FIND match FROM S.dept_code 3 USING P.dept_code
80
- IF match EQ 0 GOTO 200 // no match
81
- READ match // load matched secondary record
82
- MOVE S.dept_name U.12 15 // now accessible
82
+ FIND match FROM S.dept_code USING P.dept_code
83
+ IF match EQ 0 GOTO 200 // no match
84
+ READ match // load matched secondary record
85
+ PRINT P.name + " " + S.dept_name
83
86
  ```
84
87
 
85
88
  Compile with both formats so the compiler resolves fields from both files:
@@ -105,14 +108,47 @@ IF EQUAL U.200 U.180 12 GOTO 200 // match — jump to handler
105
108
  ## Read-Modify-Write (UPDATE)
106
109
 
107
110
  ```chaprola
108
- READ match // load record
109
- GET bal FROM S.balance // read current value
110
- LET bal = bal + amt // modify
111
- PUT bal INTO S.balance 8 F 0 // write back to S memory
112
- WRITE match // flush to disk
113
- CLOSE // flush all at end
111
+ READ match // load record
112
+ GET bal FROM S.balance // read current value
113
+ LET bal = bal + amt // modify
114
+ PUT bal INTO S.balance F 0 // write back to S memory (length auto-filled)
115
+ WRITE match // flush to disk
116
+ CLOSE // flush all at end
117
+ ```
118
+
119
+ ## Date Arithmetic
120
+
121
+ ```chaprola
122
+ GET DATE R41 FROM X.primary_modified // when was file last changed?
123
+ GET DATE R42 FROM X.utc_time // what time is it now?
124
+ LET R43 = R42 - R41 // difference in seconds
125
+ LET R43 = R43 / 86400 // convert to days
126
+ IF R43 GT 30 PRINT "WARNING: file is over 30 days old" ;
114
127
  ```
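For comparison, a JavaScript sketch of the same staleness computation: parse two timestamps to epoch seconds, subtract, and divide by 86400 seconds per day. The sample dates are invented for illustration.

```javascript
// Same arithmetic as the Chaprola snippet above, with invented sample dates.
const modified = Date.parse('2026-03-01T00:00:00Z') / 1000 // epoch seconds
const now = Date.parse('2026-04-06T00:00:00Z') / 1000
const ageDays = (now - modified) / 86400 // seconds → days
console.log(ageDays)      // 36
console.log(ageDays > 30) // true: over 30 days old
```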
115
128
 
129
+ ## Get Current User
130
+
131
+ ```chaprola
132
+ PRINT "Logged in as: " + X.username
133
+ ```
134
+
135
+ ## System Text Properties (X.)
136
+
137
+ Access system metadata by property name — no numeric positions needed:
138
+
139
+ | Property | Description |
140
+ |----------|-------------|
141
+ | `X.year` | Year (four digits) |
142
+ | `X.julian` | Julian date (1–366) |
143
+ | `X.hour` | Hour (military time) |
144
+ | `X.minute` | Minute (0–59) |
145
+ | `X.username` | Authenticated user |
146
+ | `X.record_num` | Primary file record number |
147
+ | `X.utc_time` | UTC datetime (ISO 8601) |
148
+ | `X.elapsed` | Elapsed execution time |
149
+ | `X.primary_modified` | Primary file Last-Modified |
150
+ | `X.secondary_modified` | Secondary file Last-Modified |
151
+
116
152
  ## Async for Large Datasets
117
153
 
118
154
  ```bash
@@ -140,9 +176,7 @@ SEEK 1
140
176
  GOTO 300
141
177
  200 GET cardlvl FROM P.level
142
178
  IF cardlvl NE lvl GOTO 300 // filter by level param
143
- MOVE P.kanji U.1 4
144
- MOVE P.reading U.6 10
145
- PRINT 0
179
+ PRINT P.kanji + " — " + P.reading
146
180
  300 LET rec = rec + 1
147
181
  SEEK rec
148
182
  GOTO 100
@@ -208,7 +242,7 @@ POST /query {
208
242
  | `I` | Integer (right-justified) | ` 1234` |
209
243
  | `E` | Scientific notation | `1.23E+03` |
210
244
 
211
- Syntax: `PUT R1 INTO U.30 10 D 2` (R-var, location, width, format, decimals)
245
+ Syntax: `PUT R41 INTO P.salary D 2` (R-var, field name, format, decimals — length auto-filled)
212
246
 
213
247
  ## Common Field Widths
214
248
 
@@ -227,19 +261,19 @@ Use these when sizing MOVE lengths and U buffer positions.
227
261
 
228
262
  | Prefix | Description |
229
263
  |--------|-------------|
230
- | `P` | Primary data file (current record) |
231
- | `S` | Secondary data file (current record) |
264
+ | `P` | Primary data file (use field names: `P.salary`, `P.name`) |
265
+ | `S` | Secondary data file (use field names: `S.dept`, `S.emp_id`) |
232
266
  | `U` | User buffer (scratch for output) |
233
- | `X` | System text (date, time, filenames) |
267
+ | `X` | System text use property names: `X.username`, `X.utc_time` |
234
268
 
235
269
  ## Math Intrinsics
236
270
 
237
271
  ```chaprola
238
- LET R2 = EXP R1 // e^R1
239
- LET R2 = LOG R1 // ln(R1)
240
- LET R2 = SQRT R1 // √R1
241
- LET R2 = ABS R1 // |R1|
242
- LET R3 = POW R1 R2 // R1^R2
272
+ LET R42 = EXP R41 // e^R41
273
+ LET R42 = LOG R41 // ln(R41)
274
+ LET R42 = SQRT R41 // √R41
275
+ LET R42 = ABS R41 // |R41|
276
+ LET R43 = POW R41 R42 // R41^R42
243
277
  ```
244
278
 
245
279
  ## Import-Download: URL → Dataset (Parquet, Excel, CSV, JSON)
@@ -281,7 +315,7 @@ HULDRA finds the best parameter values for a mathematical model by minimizing th
281
315
  |-------|---------|-------------|
282
316
  | R1–R20 | **Elements** (parameters to optimize) | HULDRA sets these before each VM run |
283
317
  | R21–R40 | **Objectives** (error metrics) | Your program computes and stores these |
284
- | R41–R50 | **Scratch space** | Your program uses these for temp variables |
318
+ | R41–R99 | **Scratch space** | Your program uses these for temp variables |
285
319
 
286
320
  ### Complete Example: Fit a Linear Model
287
321
 
@@ -11,21 +11,60 @@ LET temp = qty + bonus
11
11
  LET result = price * temp
12
12
  ```
13
13
 
14
+ ### No built-in functions
15
+ There are NO functions with parentheses. No STR(), INT(), ABS(), LEN(), TRIM(), SUBSTR(), CONCAT(), FORMAT(), TOSTRING(), etc.
16
+ - To convert number to text for output: `PRINT "Total: " + R41`
17
+ - To write number to a field: `PUT R41 INTO P.salary D 2`
18
+ - To convert text to number: `GET R41 FROM P.salary`
19
+ - To clear a field: `MOVE BLANKS P.notes` or `CLEAR U`
20
+
21
+ ### Use field names, not numeric positions
22
+ Always use `P.salary`, `S.dept`, `X.username` — not `P.63 10`, `S.30 4`, `X.28 8`. The compiler auto-fills positions and lengths from the format file.
23
+ ```chaprola
24
+ // PREFERRED: field names — readable, resilient to format changes
25
+ GET R41 FROM P.salary
26
+ PRINT P.name + " — " + P.department
27
+ MOVE X.username U.1
28
+
29
+ // AVOID: numeric positions — fragile, unreadable
30
+ GET R41 FROM P.63 10
31
+ ```
32
+
33
+ ### Use PRINT concatenation, not MOVE + PRINT 0
34
+ ```chaprola
35
+ // PREFERRED: direct concatenation
36
+ PRINT P.name + " earns $" + R41
37
+
38
+ // AVOID: old MOVE-to-buffer pattern
39
+ MOVE BLANKS U.1 80
40
+ MOVE P.name U.1 20
41
+ PUT R41 INTO U.22 10 D 2
42
+ PRINT 0
43
+ ```
44
+
45
+ ### Use CLEAR, not MOVE BLANKS for full regions
46
+ ```chaprola
47
+ CLEAR U // clear entire user buffer
48
+ CLEAR P // clear entire primary region
49
+ CLEAR S // clear entire secondary region
50
+ MOVE BLANKS P.notes // clear a single field (length auto-filled)
51
+ ```
52
+
14
53
  ### IF EQUAL compares a literal to a location
15
- Cannot compare two memory locations. Copy to U buffer first.
54
+ Cannot compare two memory locations directly. Copy to U buffer first.
16
55
  ```chaprola
17
56
  MOVE P.txn_type U.76 6
18
57
  IF EQUAL "CREDIT" U.76 GOTO 200
19
58
  ```
20
59
 
21
- ### MOVE length must match field width
22
- `MOVE P.name U.1 20` copies 20 chars starting at the field; if `name` is 8 chars wide, the extra 12 bleed into adjacent fields. Always match the format file width.
60
+ ### MOVE literal auto-pads to field width
61
+ `MOVE "Jones" P.name` auto-fills the rest of the field with blanks. No need to clear first.
23
62
 
24
63
  ### DEFINE VARIABLE names must not collide with field names
25
- If the format has a `balance` field, don't `DEFINE VARIABLE balance R3`. Use `bal` instead. The compiler confuses the alias with the field name.
64
+ If the format has a `balance` field, don't `DEFINE VARIABLE balance R41`. Use `bal` instead. The compiler confuses the alias with the field name.
26
65
 
27
66
  ### R-variables are floating point
28
- All R1–R50 are 64-bit floats. `7 / 2 = 3.5`. Use PUT with `I` format to display as integer.
67
+ All R1–R99 are 64-bit floats. `7 / 2 = 3.5`. Use PUT with `I` format to display as integer.
29
68
 
30
69
  ### Statement numbers are labels, not line numbers
31
70
  Only number lines that are GOTO/CALL targets. Don't number every line.
@@ -36,6 +75,13 @@ Always check `IF match EQ 0` after FIND before calling READ.
36
75
  ### PRINT 0 clears the U buffer
37
76
  After PRINT 0, the buffer is empty. No need to manually clear between prints unless reusing specific positions.
38
77
 
78
+ ### GET DATE / PUT DATE — no FOR keyword needed with property names
79
+ ```chaprola
80
+ GET DATE R41 FROM X.utc_time // correct — length auto-filled
81
+ GET DATE R42 FROM X.primary_modified // correct — length auto-filled
82
+ PUT DATE R41 INTO U.1 20 // U buffer needs explicit length
83
+ ```
84
+
39
85
  ## Import
40
86
 
41
87
  ### Field widths come from the longest value
@@ -74,8 +120,8 @@ Always CLOSE before END if you wrote to the secondary file. Unflushed writes are
74
120
 
75
121
  ## HULDRA Optimization
76
122
 
77
- ### Use R41–R50 for scratch variables, not R1–R20
78
- R1–R20 are reserved for HULDRA elements. R21–R40 are reserved for objectives. Your VALUE program's DEFINE VARIABLE declarations must use R41–R50 only.
123
+ ### Use R41–R99 for scratch variables, not R1–R20
124
+ R1–R20 are reserved for HULDRA elements. R21–R40 are reserved for objectives. Your VALUE program's DEFINE VARIABLE declarations must use R41–R99 only.
79
125
  ```chaprola
80
126
  // WRONG: DEFINE VARIABLE counter R1 (HULDRA will overwrite this)
81
127
  // RIGHT: DEFINE VARIABLE counter R41
@@ -0,0 +1,193 @@
1
+ # Building Apps on Chaprola — Architecture Reference
2
+
3
+ ## Overview
4
+
5
+ Chaprola is a backend for frontend apps. Your React, Vue, Svelte, or Laravel app calls `api.chaprola.org` directly from the browser. No proxy server, no middleware, no infrastructure to manage.
6
+
7
+ ## Two Architectures
8
+
9
+ ### 1. Single-Owner App (simplest)
10
+
11
+ One Chaprola account owns all the data. The app uses a **site key** locked to its domain. All users see the same data.
12
+
13
+ **Use cases:** dashboards, public reports, internal tools, data viewers, portfolio sites.
14
+
15
+ ```
16
+ Browser → React App → api.chaprola.org (site key in Authorization header)
17
+ ```
18
+
19
+ **Setup:**
20
+ 1. Register a Chaprola account: `POST /register`
21
+ 2. Sign the BAA if handling health data: `POST /sign-baa`
22
+ 3. Create a site key locked to your domain:
23
+ ```json
24
+ POST /create-site-key
25
+ {"userid": "myapp", "origin": "https://myapp.example.com", "label": "production"}
26
+ ```
27
+ Response: `{"site_key": "site_a1b2c3..."}`
28
+ 4. Use the site key in your frontend:
29
+ ```javascript
30
+ const resp = await fetch('https://api.chaprola.org/query', {
31
+ method: 'POST',
32
+ headers: {
33
+ 'Authorization': 'Bearer site_a1b2c3...',
34
+ 'Content-Type': 'application/json'
35
+ },
36
+ body: JSON.stringify({ userid: 'myapp', project: 'main', file: 'products', where: [] })
37
+ })
38
+ ```
39
+
40
+ **Security model:** The site key is checked against the `Origin` HTTP header, which browsers set automatically. This prevents other websites from using your key (CORS-level protection). However, Origin headers are trivially spoofable from non-browser clients (curl, Postman, scripts). Anyone who extracts the site key from your JavaScript has full access to the account's data. **Use this pattern only for public or semi-public data** — dashboards, product catalogs, published reports. For private data, use the multi-user pattern (each user authenticates individually) or the enterprise proxy pattern.
41
+
42
+ ### 2. Multi-User App (each user has their own account)
43
+
44
+ Each app user registers their own Chaprola account. The app stores their API key in the browser session. Each user has their own data silo.
45
+
46
+ **Use cases:** SaaS apps, multi-tenant platforms, apps where users own their data.
47
+
48
+ ```
49
+ Browser → React App → api.chaprola.org (user's own API key)
50
+ ```
51
+
52
+ **Setup:**
53
+ 1. Your app's registration form calls Chaprola directly:
54
+ ```javascript
55
+ // User signs up
56
+ const resp = await fetch('https://api.chaprola.org/register', {
57
+ method: 'POST',
58
+ headers: { 'Content-Type': 'application/json' },
59
+ body: JSON.stringify({ username: 'alice', passcode: 'their-secure-passcode' })
60
+ })
61
+ const { api_key } = await resp.json()
62
+ sessionStorage.setItem('chaprola_key', api_key)
63
+ ```
64
+
65
+ 2. Your app's login form:
66
+ ```javascript
67
+ const resp = await fetch('https://api.chaprola.org/login', {
68
+ method: 'POST',
69
+ headers: { 'Content-Type': 'application/json' },
70
+ body: JSON.stringify({ username: 'alice', passcode: 'their-secure-passcode' })
71
+ })
72
+ const { api_key } = await resp.json()
73
+ sessionStorage.setItem('chaprola_key', api_key)
74
+ ```
75
+
76
+ 3. All subsequent API calls use the user's key:
77
+ ```javascript
78
+ const key = sessionStorage.getItem('chaprola_key')
79
+ const resp = await fetch('https://api.chaprola.org/query', {
80
+ method: 'POST',
81
+ headers: {
82
+ 'Authorization': `Bearer ${key}`,
83
+ 'Content-Type': 'application/json'
84
+ },
85
+ body: JSON.stringify({ userid: 'alice', project: 'mydata', file: 'tasks', where: [] })
86
+ })
87
+ ```
88
+
89
+ **Security model:** Each user authenticates individually. User A cannot access User B's data (userid enforcement). No shared secrets in the frontend. API keys are per-user, stored in the browser session.
90
+
91
+ **Team data sharing:** If users need to share a project, the project owner creates an `access.json` file listing writers. See the auth reference for details.
92
+
93
+ ## Which Architecture Should I Use?
94
+
95
+ | Question | Single-Owner | Multi-User |
96
+ |----------|:---:|:---:|
97
+ | One person/org owns all the data? | Yes | |
98
+ | Users create accounts and own their data? | | Yes |
99
+ | Public dashboard or report viewer? | Yes | |
100
+ | SaaS with multiple tenants? | | Yes |
101
+ | Internal tool for a small team? | Yes | |
102
+ | App where privacy between users matters? | | Yes |
103
+ | Data is sensitive or private? | No — use Multi-User or Enterprise | Yes |
104
+
105
+ **Rule of thumb:** If you'd be uncomfortable with someone viewing all the data in that Chaprola account, don't use the single-owner pattern. The site key will be in your JavaScript source — assume it's public.
106
+
107
+ ## Enterprise Customers
108
+
109
+ If your enterprise requires that API keys never touch the browser:
110
+
111
+ Run your own backend proxy. Your React/Laravel frontend authenticates users through your own auth system (OAuth, SSO, SAML). Your backend holds the Chaprola API key server-side and proxies requests.
112
+
113
+ ```
114
+ User → Enterprise App → Enterprise Backend (holds API key) → api.chaprola.org
115
+ ```
116
+
117
+ This is the same pattern used with Stripe, Twilio, and other APIs. Chaprola does not provide a managed proxy — your infrastructure team owns this layer.
118
+
119
+ **Most apps do not need this.** The site key + per-user auth model handles the vast majority of use cases without any backend infrastructure.
120
+
121
+ ## Site Keys Reference
122
+
123
+ Site keys are API keys locked to a specific browser origin. They prevent your key from being used on other websites.
124
+
125
+ ```json
126
+ // Create
127
+ POST /create-site-key
128
+ {"userid": "myapp", "origin": "https://myapp.example.com", "label": "production"}
129
+ // Response: {"site_key": "site_...", "origin": "https://myapp.example.com"}
130
+
131
+ // List
132
+ POST /list-site-keys
133
+ {"userid": "myapp"}
134
+
135
+ // Delete
136
+ POST /delete-site-key
137
+ {"userid": "myapp", "site_key": "site_..."}
138
+ ```
139
+
140
+ **Key facts:**
141
+ - Format: `site_` prefix + 64 hex chars
142
+ - Locked to one origin (exact match on the `Origin` HTTP header)
143
+ - Never expire — persist until explicitly deleted
144
+ - Not affected by `/login` (login rotates the main API key, not site keys)
145
+ - Same permissions as the main API key for that account
146
+ - Create multiple site keys for different environments (dev, staging, production)
147
+
148
+ ## Deploying Your App
149
+
150
+ For static React/Vue/Svelte apps, Chaprola can host them:
151
+
152
+ ```json
153
+ POST /app/deploy
154
+ {"userid": "myapp", "project": "main", "app_name": "myapp"}
155
+ ```
156
+
157
+ Your app is served at `https://chaprola.org/apps/{userid}/{app_name}/`. Or deploy anywhere (Vercel, Netlify, S3) and use a site key locked to that domain.
158
+
159
+ ## Common Patterns
160
+
161
+ ### Import data, then query it
162
+ ```javascript
163
+ // Import
164
+ await fetch('https://api.chaprola.org/import', {
165
+ method: 'POST',
166
+ headers: { 'Authorization': `Bearer ${key}`, 'Content-Type': 'application/json' },
167
+ body: JSON.stringify({ userid: 'myapp', project: 'main', name: 'products', data: [...] })
168
+ })
169
+
170
+ // Query
171
+ const resp = await fetch('https://api.chaprola.org/query', {
172
+ method: 'POST',
173
+ headers: { 'Authorization': `Bearer ${key}`, 'Content-Type': 'application/json' },
174
+ body: JSON.stringify({ userid: 'myapp', project: 'main', file: 'products', where: [{ field: 'category', op: 'eq', value: 'electronics' }] })
175
+ })
176
+ const { records } = await resp.json()
177
+ ```
178
+
179
+ ### Insert a record
180
+ ```javascript
181
+ await fetch('https://api.chaprola.org/insert-record', {
182
+ method: 'POST',
183
+ headers: { 'Authorization': `Bearer ${key}`, 'Content-Type': 'application/json' },
184
+ body: JSON.stringify({ userid: 'myapp', project: 'main', file: 'tasks', record: { title: 'New task', status: 'open', due: '2026-04-15' } })
185
+ })
186
+ ```
187
+
188
+ ### Run a published report
189
+ ```javascript
190
+ // No auth needed for public reports
191
+ const resp = await fetch('https://api.chaprola.org/report?userid=myapp&project=main&name=DASHBOARD&format=html')
192
+ const html = await resp.text()
193
+ ```
@@ -2,15 +2,21 @@
2
2
 
3
3
  ## Language
4
4
  - **No parentheses in LET.** `LET result = price * qty` only. For `price * (qty + bonus)`: use `LET temp = qty + bonus` then `LET result = price * temp`.
5
+ - **No built-in functions.** No STR(), INT(), ABS(), LEN(), TRIM(), SUBSTR(), etc. Use PUT/GET/MOVE/PRINT concatenation instead.
6
+ - **Use field names, not numeric positions.** `P.salary` not `P.63 10`. `X.username` not `X.15 10`. The compiler auto-fills positions and lengths.
7
+ - **Use PRINT concatenation for output.** `PRINT P.name + " — " + R41` instead of MOVE-to-U-buffer + PRINT 0.
8
+ - **Use CLEAR for full regions.** `CLEAR U` instead of `MOVE BLANKS U.1 65536`. Also `CLEAR P`, `CLEAR S`.
9
+ - **MOVE literal auto-pads.** `MOVE "Jones" P.name` fills the rest of the field with blanks. No need to clear first.
5
10
  - **IF EQUAL compares literal to location.** To compare two locations, copy both to U buffer first.
6
- - **MOVE length must match field width.** If `name` is 8 chars wide, `MOVE P.name U.1 20` bleeds into adjacent fields.
7
11
  - **DEFINE VARIABLE names must not collide with field names.** If format has `balance`, don't `DEFINE VARIABLE balance R41`.
8
12
  - **R-variables are 64-bit floats.** `7 / 2 = 3.5`. Use PUT with `I` format for integer display.
9
13
  - **FIND returns 0 on no match.** Always check `IF match EQ 0` before READ.
10
14
  - **PRINT 0 clears the U buffer.** No need to manually clear between prints.
11
15
  - **Statement numbers are labels, not line numbers.** Only number GOTO/CALL targets.
16
+ - **GET DATE / PUT DATE need no FOR keyword** with property names: `GET DATE R41 FROM X.utc_time`
12
17
 
13
18
  ## API
19
+ - **NEVER use `/import` to change field widths.** `/import` REPLACES existing data. Use `/alter` to widen/narrow fields while preserving data. See `chaprola://ref/schema`.
14
20
  - **userid must match authenticated user.** 403 on mismatch.
15
21
  - **Login invalidates the old key.** Save the new one immediately.
16
22
  - **Async for large datasets.** `/run` with `async: true` for >100K records (API Gateway 30s timeout).
@@ -8,7 +8,7 @@ HULDRA finds optimal parameter values for a mathematical model by minimizing the
8
8
  |-------|---------|-------------|
9
9
  | R1–R20 | Elements (parameters to optimize) | HULDRA sets before each run |
10
10
  | R21–R40 | Objectives (error metrics) | Your program computes these |
11
- | R41–R50 | Scratch space | Your program's temp variables |
11
+ | R41–R99 | Scratch space | Your program's temp variables |
12
12
 
13
13
  ## POST /optimize
14
14
  ```json
@@ -10,6 +10,14 @@ POST /import {userid, project, name: "STAFF", data: [{"name": "Alice", "salary":
10
10
 
11
11
  Field widths auto-sized from longest value. Default expiry: 90 days. Override with `expires_in_days`.
12
12
 
13
+ ### ⚠️ DESTRUCTIVE WARNING
14
+
15
+ **`/import` REPLACES both the format (.F) and data (.DA) files if they already exist. All existing data will be lost.**
16
+
17
+ - **Use `/import` ONLY for:** Creating brand new data files or intentionally replacing entire datasets
18
+ - **DO NOT use `/import` to:** Change field widths or modify schema on existing data
19
+ - **Use `/alter` instead** to modify field widths/schema while preserving existing data (see `chaprola://ref/schema`)
20
+
13
21
  ## Large File Upload (presigned URL)
14
22
  ```bash
15
23
  POST /import-url {userid, project, name} → {upload_url, staging_key}
@@ -17,11 +25,15 @@ POST /import-url {userid, project, name} → {upload_url, staging_key}
17
25
  POST /import-process {userid, project, name, staging_key} → same as /import
18
26
  ```
19
27
 
28
+ **WARNING:** `/import-process` is also DESTRUCTIVE and replaces existing data. Use `/alter` for schema changes.
29
+
20
30
  ## POST /import-download
21
31
  `{userid, project, name, url, instructions?, max_rows?}`
22
32
  Imports directly from URL. Supports: CSV, TSV, JSON, NDJSON, Parquet, Excel (.xlsx/.xls).
23
33
  Optional `instructions` for AI schema inference. Max 1M records.
24
34
 
35
+ **WARNING:** `/import-download` is also DESTRUCTIVE and replaces existing data. Use `/alter` for schema changes.
36
+
25
37
  ## POST /export
26
38
  `{userid, project, name, format?}` → `{data: [...records]}`
27
39
  Optional `format: "fhir"` for FHIR JSON reconstruction.
@@ -12,33 +12,53 @@ POST /publish {userid, project, name, primary_file, acl?: "public|authenticated|
12
12
 
13
13
  | Prefix | Description |
14
14
  |--------|-------------|
15
- | `P` | Primary data file (current record) |
16
- | `S` | Secondary data file (current record) |
17
- | `U` | User buffer (output scratch) |
18
- | `X` | System text (date, time, filenames) |
15
+ | `P` | Primary data file (access fields by name: `P.salary`, `P.last_name`) |
16
+ | `S` | Secondary data file (access fields by name: `S.dept`, `S.emp_id`) |
17
+ | `U` | User buffer (scratch for intermediate data) |
18
+ | `X` | System text (access by property name: `X.username`, `X.utc_time`, `X.record_num`) |
19
+
20
+ ### System Text Properties (X.)
21
+
22
+ | Property | Description |
23
+ |----------|-------------|
24
+ | `X.year` | Year (four digits) |
25
+ | `X.julian` | Julian date (1–366) |
26
+ | `X.hour` | Hour (military time, 0–23) |
27
+ | `X.minute` | Minute (0–59) |
28
+ | `X.username` | Authenticated user (first 10 chars) |
29
+ | `X.record_num` | Record number of primary file |
30
+ | `X.display_file` | Display filename |
31
+ | `X.data_file` | Data filename |
32
+ | `X.proc_file` | Procedure filename |
33
+ | `X.utc_time` | UTC datetime (ISO 8601, 20 chars) |
34
+ | `X.elapsed` | Elapsed execution time (SSSSS.CC, 9 chars) |
35
+ | `X.primary_modified` | Primary file Last-Modified (ISO 8601, 20 chars) |
36
+ | `X.secondary_modified` | Secondary file Last-Modified (ISO 8601, 20 chars) |
19
37
 
20
38
  ## Language Essentials
21
39
 
22
40
  ```chaprola
23
- // Loop through records
41
+ // Loop through records and print with concatenation
24
42
  DEFINE VARIABLE rec R41
25
43
  LET rec = 1
26
44
  100 SEEK rec
27
45
  IF EOF GOTO 900
28
- MOVE P.name U.1 20 // copy field to output buffer
29
- GET sal FROM P.salary // numeric field R variable
30
- PUT sal INTO U.22 10 D 2 // R variable → formatted output
31
- PRINT 0 // output full U buffer, clear it
46
+ GET sal FROM P.salary
47
+ PRINT P.name + " — " + P.department + " $" + sal
32
48
  LET rec = rec + 1
33
49
  GOTO 100
34
50
  900 END
35
51
  ```
36
52
 
37
- - `PRINT 0` output entire U buffer and clear. `PRINT N` — output exactly N chars.
38
- - `MOVE BLANKS U.1 80` — clear a region. `MOVE "literal" U.1 7` — move literal.
39
- - `IF EQUAL "text" U.50 4 GOTO 200` — compare literal to memory location.
40
- - `U.name` — named positions (auto-allocated by compiler): `MOVE P.name U.name 20`
41
- - `DEFINE VARIABLE counter R41` — alias R-variable. **Use R41-R50** (R1-R40 reserved for HULDRA).
53
+ - **PRINT concatenation (preferred):** `PRINT P.name + " earns " + R41` — auto-trims fields, auto-formats numbers, auto-flushes.
54
+ - `PRINT P.fieldname` — output a single field (auto-flush, auto-trim).
55
+ - `PRINT "literal"` — output a literal string (auto-flush).
56
+ - `PRINT 0` — output entire U buffer (less common, for columnar reports).
57
+ - `CLEAR U` — clear entire user buffer. `CLEAR P` / `CLEAR S` — clear primary/secondary region.
58
+ - `MOVE BLANKS P.notes` — clear a single field (length auto-filled).
59
+ - `MOVE "Active" P.status` — write literal to field (auto-padded to field width).
60
+ - `IF EQUAL "text" P.status GOTO 200` — compare literal to field.
61
+ - `DEFINE VARIABLE counter R41` — alias R-variable. **Use R41-R99** (R1-R40 reserved for HULDRA).
42
62
 
43
63
  ## PUT Format Codes
44
64
 
@@ -49,7 +69,7 @@ LET rec = 1
49
69
  | `I` | Integer (right-justified) | ` 1234` |
50
70
  | `E` | Scientific notation | `1.23E+03` |
51
71
 
52
- Syntax: `PUT R41 INTO U.30 10 D 2` — (R-var, location, width, format, decimals)
72
+ Syntax: `PUT R41 INTO P.salary D 2` — (R-var, location, width auto-filled from field name, format, decimals)
53
73
 
54
74
  ## Math
55
75
 
@@ -59,14 +79,24 @@ LET R43 = EXP R41 // also: LOG, SQRT, ABS
59
79
  LET R44 = POW R41 R42 // R41^R42
60
80
  ```
61
81
 
82
+ ## Date Arithmetic
83
+
84
+ ```chaprola
85
+ GET DATE R41 FROM X.primary_modified // parse timestamp → epoch seconds
86
+ GET DATE R42 FROM X.utc_time // current UTC time
87
+ LET R43 = R42 - R41 // difference in seconds
88
+ LET R43 = R43 / 86400 // convert to days
89
+ PUT DATE R42 INTO U.1 20 // write epoch as ISO 8601 string
90
+ ```
91
+
62
92
  ## Secondary Files (FIND/JOIN)
63
93
 
64
94
  ```chaprola
65
95
  OPEN "DEPARTMENTS" 0 // open secondary file
66
- FIND match FROM S.dept_code 3 USING P.dept_code
96
+ FIND match FROM S.dept_code USING P.dept_code
67
97
  IF match EQ 0 GOTO 200 // 0 = no match
68
98
  READ match // load matched record
69
- MOVE S.dept_name U.30 15
99
+ PRINT P.name + " — " + S.dept_name
70
100
  WRITE match // write back if modified
71
101
  CLOSE // flush + close
72
102
  ```
@@ -0,0 +1,166 @@
1
+ # Schema Inspection & Modification
2
+
3
+ ## POST /format
4
+ Inspect a data file's schema — returns field names, positions, lengths, types, and PHI flags.
5
+
6
+ ```bash
7
+ POST /format {userid, project, name: "STAFF"}
8
+ ```
9
+
10
+ Returns:
11
+ ```json
12
+ {
13
+ "format_file": "s3://chaprola-2026/userid/project/format/STAFF.F",
14
+ "fields": [
15
+ {"name": "name", "position": 1, "length": 50, "type": "text", "phi": false},
16
+ {"name": "salary", "position": 51, "length": 10, "type": "numeric", "phi": false}
17
+ ],
18
+ "record_length": 60
19
+ }
20
+ ```
21
+
22
+ Use this to inspect the current schema before making changes with `/alter`.
23
+
24
+ ---
25
+
26
+ ## POST /alter
27
+ **NON-DESTRUCTIVE schema modification.** Modifies field widths, renames fields, adds new fields, or drops fields. Existing data is preserved and reformatted to match the new schema.
28
+
29
+ ### ⚠️ CRITICAL: Use /alter for schema changes, NOT /import
30
+
31
+ **DO NOT use `/import` to change field widths or schema on existing data files.** `/import` REPLACES both the format (.F) and data (.DA) files, destroying all existing data.
32
+
33
+ **Use `/alter` when you need to:**
34
+ - Widen or narrow field widths
35
+ - Rename fields
36
+ - Add new fields to existing data
37
+ - Drop unused fields
38
+ - Change field types
39
+
40
+ **Use `/import` only when:**
41
+ - Creating a brand new data file
42
+ - Intentionally replacing an entire dataset
43
+
44
+ ---
45
+
46
+ ## /alter Request Format
47
+
48
+ ```json
49
+ {
50
+ "userid": "...",
51
+ "project": "...",
52
+ "name": "STAFF",
53
+ "alter": [
54
+ {"field": "name", "width": 100}, // widen from 50 to 100
55
+ {"field": "dept", "rename": "department"}
56
+ ],
57
+ "add": [
58
+ {"name": "email", "width": 80, "type": "text", "after": "name"}
59
+ ],
60
+ "drop": ["old_field"],
61
+ "output": "STAFF_V2" // optional: create new file instead of in-place
62
+ }
63
+ ```
64
+
65
+ ### Parameters
66
+
67
+ **`alter`** (optional): Array of field modifications
68
+ - `field` (required): Field name to modify
69
+ - `width` (optional): New width (can widen or narrow)
70
+ - `rename` (optional): New field name
71
+ - `type` (optional): Change type (`"text"` or `"numeric"`)
72
+
73
+ **`add`** (optional): Array of new fields to add
74
+ - `name` (required): New field name
75
+ - `width` (required): Field width
76
+ - `type` (optional): `"text"` (default) or `"numeric"`
77
+ - `after` (optional): Insert after this field (default: end of record)
78
+
79
+ **`drop`** (optional): Array of field names to remove
80
+
81
+ **`output`** (optional): Output file name. If not specified, modifies in-place.
82
+
83
+ ---
84

## Examples

### Widen a field (most common case)
```bash
# Widen 'description' field from 100 to 500 characters
POST /alter {
  userid, project, name: "ITEMS",
  alter: [{"field": "description", "width": 500}]
}
```

### Add a new field
```bash
# Add 'created_at' field after 'id'
POST /alter {
  userid, project, name: "RECORDS",
  add: [{"name": "created_at", "width": 24, "after": "id"}]
}
```

### Rename and widen
```bash
# Rename 'dept' to 'department' and widen to 50
POST /alter {
  userid, project, name: "STAFF",
  alter: [{"field": "dept", "rename": "department", "width": 50}]
}
```

### Complex schema change
```bash
# Multiple operations in one call
POST /alter {
  userid, project, name: "EMPLOYEES",
  alter: [
    {"field": "name", "width": 100},
    {"field": "dept_id", "rename": "department_id"}
  ],
  add: [
    {"name": "email", "width": 80, "after": "name"},
    {"name": "hire_date", "width": 10, "type": "text"}
  ],
  drop: ["legacy_field", "unused_column"]
}
```

---

## How /alter Works Internally

1. **Reads the old format (.F)** to understand current field positions
2. **Creates a new format (.F)** with your requested changes
3. **Reads all records from the old data (.DA)**
4. **Reformats each record** to match the new schema:
   - Widened fields: existing data left-aligned, space-padded
   - Narrowed fields: truncated to new width
   - New fields: filled with spaces
   - Dropped fields: removed from record
5. **Writes reformatted records** to the new data file
6. **Replaces the old files** (if in-place) or creates new files (if `output` specified)

This is a **safe, non-destructive transformation** of existing data. No data is lost unless you explicitly narrow fields or drop them.

---
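The per-record reformatting in steps 3–5 can be sketched as a pure function over one fixed-width record. This is an illustration of the padding/truncation rules, not the server's actual code; schemas are given as `(name, width)` pairs, and renames are assumed to be resolved so names match across the old and new schemas:

```python
def reformat_record(record: str, old_fields: list, new_fields: list) -> str:
    """Rebuild one fixed-width record for a new schema (illustrative sketch)."""
    # Slice the old record into per-field values using the old widths.
    values, pos = {}, 0
    for name, width in old_fields:
        values[name] = record[pos:pos + width]
        pos += width
    # Emit fields in new-schema order; fields absent from new_fields are dropped.
    out = []
    for name, width in new_fields:
        v = values.get(name, "")                 # brand-new fields start blank
        out.append(v.rstrip().ljust(width)[:width])  # pad if widened, truncate if narrowed
    return "".join(out)
```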

## Common Gotchas

1. **Narrowing fields truncates data** — if you narrow a field from 100 to 50, any values longer than 50 characters will be truncated. Use `/format` first to check max field lengths.

2. **Field order matters for `after`** — when adding multiple fields, they're processed in order. If you add field B after field A, and also add field C after field A, field B will come first (because it was added first).

3. **Cannot rename and drop the same field** — if you rename a field, don't also include it in the `drop` array.

4. **Output file must not exist** — if you specify an `output` name and that file already exists, the operation will fail.

---
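Gotchas 2 and 3 can be flagged on the client before sending the request. `preflight` is a hypothetical helper, not part of the API:

```python
def preflight(alter=None, add=None, drop=None):
    """Return warnings for the gotchas above (illustrative client-side check)."""
    alter, add, drop = alter or [], add or [], drop or []
    problems = []
    # Gotcha 3: a field must not be both renamed and dropped.
    renamed = {a["field"] for a in alter if "rename" in a}
    for f in sorted(renamed & set(drop)):
        problems.append(f"field '{f}' is both renamed and dropped")
    # Gotcha 2: several adds sharing one 'after' anchor are order-sensitive.
    anchors = [a["after"] for a in add if "after" in a]
    for anchor in sorted({x for x in anchors if anchors.count(x) > 1}):
        problems.append(f"multiple fields added after '{anchor}'; array order decides placement")
    return problems
```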

## See Also

- `/format` — inspect current schema before making changes
- `/import` — create NEW data files (DESTRUCTIVE if file exists)
- `/query` — query data after schema changes