json-object-editor 0.10.653 → 0.10.657

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "json-object-editor",
- "version": "0.10.653",
+ "version": "0.10.657",
  "description": "JOE the Json Object Editor | Platform Edition",
  "main": "app.js",
  "scripts": {
package/readme.md CHANGED
@@ -5,6 +5,27 @@ JOE is software that allows you to manage data models via JSON objects. There ar



+ ## What’s new in 0.10.657 (brief)
+ - MCP everywhere (prompts, autofill, widget):
+   - **Prompts** (`ai_prompt`): a new MCP config block (`mcp_enabled`, `mcp_toolset`, `mcp_selected_tools`, `mcp_instructions_mode`) lets you turn MCP tools on per‑prompt, pick a toolset (`read-only`, `minimal`, `all`, or `custom`), and auto‑generate short tool instructions.
+   - **Autofill fields**: the same MCP keys are now supported under a field’s `ai` config, so autofill runs can optionally call MCP tools with the same toolset/playlist model.
+   - **Audit**: `ai_response` records the MCP config and the actual tool calls (`mcp_tools_used[]`) plus `used_openai_file_ids[]`, so you can see which tools and files were used for any run.
+ - Uploader file roles + AI‑aware attachments:
+   - Uploader fields can define `file_roles` (e.g. `{ value:'transcript', label:'Intake Transcript', default:true }`), and JOE renders a per‑file role `<select>` that saves to `file_role` on each file object.
+   - `executeJOEAiPrompt` now sends a compact `uploaded_files[]` header (including `itemtype`, `field`, `filename`, `file_role`, and `openai_file_id`) alongside the Responses input, so prompts can reason about “transcript vs summary” sources while the OpenAI Files integration still handles raw content.
+   - Responses+tools (`runWithTools`) now attaches files on both the initial tool‑planning call and the final answer call, so MCP runs see the same attachments end‑to‑end.
+ - History safety:
+   - Hardened `JOE.Storage.save` history diffing to avoid a `craydent-object` edge case where comparing `null`/`undefined` values could throw on `.toString()`. This only affects `_history.changes`, not what is saved.
+
+ ## What’s new in 0.10.654 (brief)
+ - OpenAI Files are mirrored on S3 upload; uploader tiles show the `openai_file_id`. Retry upload is available per file.
+ - Responses integration improvements:
+   - Per‑prompt `attachments_mode` on `ai_prompt` (`direct` vs `file_search`). Direct sends `input_file` parts; file search auto‑creates a vector store and attaches it.
+   - Safe retry if a model rejects `temperature/top_p` (we strip them and retry once).
+ - Select Prompt lists active prompts whose `datasets[]` or `content_items[].itemtype` matches the current object.
+ - `ai_response` now shows `used_openai_file_ids` and correctly records `referenced_objects` for Select Prompt runs.
+ - UX: “Run AI Prompt” and “Run Thought Agent” buttons disable and pulse while running to avoid double‑submits.
+
  ## Architecture & Mental Model (Server)

  - Global JOE
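For reference, the per‑prompt MCP block described in the notes above can be sketched as a plain object. Only the `mcp_*` key names come from the changelog; the prompt name and the tool names in `mcp_selected_tools` are hypothetical placeholders.

```javascript
// Hypothetical ai_prompt record illustrating the MCP config block.
// Key names (mcp_enabled, mcp_toolset, mcp_selected_tools,
// mcp_instructions_mode) are from the release notes; everything else
// here is a placeholder.
const summarizeIntakePrompt = {
  name: 'Summarize intake call',
  mcp_enabled: true,                 // turn MCP tools on for this prompt
  mcp_toolset: 'custom',             // 'read-only' | 'minimal' | 'all' | 'custom'
  mcp_selected_tools: ['search_objects', 'get_object'], // only used with 'custom'
  mcp_instructions_mode: 'auto'      // auto-generate short tool instructions
};

const mcpKeys = Object.keys(summarizeIntakePrompt).filter(k => k.startsWith('mcp_'));
console.log(mcpKeys.length); // prints 4
```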
@@ -176,18 +197,67 @@ JOE is software that allows you to manage data models via JSON objects. There ar
  - Each Thought run persists an `ai_response` with `response_type:'thought_generation'`, `referenced_objects:[scope_id]`, and `generated_thoughts[]` containing the ids of created Thought records.
  - In any schema UI you can include core fields `proposeThought` and `ai_responses` to (a) trigger a Thought run for the current object and (b) list all related `ai_response` records for audit and reuse.

- ## File uploads (S3)
+ ## File uploads (S3 + OpenAI Files)
  - Uploader field options:
    - `allowmultiple: true|false` — allow selecting multiple files.
    - `url_field: 'image_url'` — on success, sets this property to the remote URL and rerenders that field.
    - `ACL: 'public-read'` — optional per-field ACL. When omitted, the server currently defaults to `public-read` (temporary during migration).
  - Flow:
    - Client posts `{ Key, base64, contentType, ACL? }` to `/API/plugin/awsConnect`.
-   - Server uploads with AWS SDK v3 and returns `{ url, Key, bucket, etag }` (HTTP 200).
-   - Client uses `response.url`; if `url_field` is set, it assigns and rerenders that field.
+   - Server uploads to S3 (AWS SDK v3) and, if `OPENAI_API_KEY` is configured, also uploads the same bytes to OpenAI Files (purpose=`assistants`).
+   - Response shape: `{ url, Key, bucket, etag, openai_file_id?, openai_purpose?, openai_error? }`.
+   - Client:
+     - Sets the `url` on the file object; if `url_field` is set on the schema field, it assigns that property and rerenders.
+     - Persists OpenAI metadata on the file object: `openai_file_id`, `openai_purpose`, `openai_status`, `openai_error`.
+     - Renders the OpenAI file id under the filename on each uploader tile. The “OpenAI: OK” banner has been removed.
+     - Shows a per‑file “Upload to OpenAI” / “Retry OpenAI” action when no id is present or when an error occurred. This calls `POST /API/plugin/chatgpt/filesRetryFromUrl` with `{ url, filename, contentType }` and updates the file metadata.
  - Errors:
    - If bucket or region config is missing, the server returns 400 with a clear message.
    - If the bucket has ACLs disabled, the server returns 400: “Bucket has ACLs disabled… remove ACL or switch to presigned/proxy access.”
+   - If the OpenAI upload fails, the uploader shows `OpenAI error: <message>` inline; you can retry from the file row.
+
+ - Using OpenAI file ids:
+   - File ids are private; there is no public URL to view them.
+   - Use the OpenAI Files API (with your API key) to retrieve metadata or download content:
+     - Metadata: `GET /v1/files/{file_id}`
+     - Content: `GET /v1/files/{file_id}/content`
+   - Node example:
+ ```js
+ const OpenAI = require('openai');
+ const fs = require('fs');
+
+ (async () => {
+   const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
+   // Retrieve file metadata, then download the raw bytes to disk.
+   const meta = await client.files.retrieve('file_abc123');
+   const stream = await client.files.content('file_abc123');
+   const buf = Buffer.from(await stream.arrayBuffer());
+   fs.writeFileSync('downloaded.bin', buf);
+ })();
+ ```
+
+ ### File roles on uploader fields
+ - **Schema configuration**:
+   - Any uploader field can declare `file_roles` as an array of `{ value, label?, default? }` objects, for example:
+     - `{ value:'transcript', label:'Intake Transcript', default:true }`
+     - `{ value:'summary', label:'Intake Summary' }`
+   - `label` is optional; it falls back to `value`. At most one role should have `default:true`.
+ - **Runtime behavior**:
+   - JOE renders a role `<select>` next to each uploaded file, with a blank option and one option per configured role.
+   - The select updates the file object’s `file_role` property in the parent object (e.g. `client.files[].file_role`).
+   - Existing uploads show the role selector on first render as long as `file_roles` is configured on the field.
+   - When OpenAI Files are enabled, uploader files still receive `openai_file_id`, `openai_purpose`, `openai_status`, and `openai_error` as before; `file_role` is an additional, JOE‑level label.
+ - **AI integration**:
+   - When running an AI prompt via `executeJOEAiPrompt`, JOE inspects referenced objects for uploader fields and builds an `uploaded_files[]` header:
+     - Each entry includes `{ itemtype, field, name, role, openai_file_id }`.
+     - This header is merged into the user input so prompts can explicitly reason about which files are “transcripts”, “summaries”, etc., while the actual file bytes are attached via Responses `input_file` parts / `file_search` tool resources.
+
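The `file_roles` shape documented above can be sketched as a plain schema field. The `{ value, label?, default? }` entry shape and the label‑falls‑back‑to‑value rule follow the docs; the field name and the `type:'uploader'` value are illustrative assumptions.

```javascript
// Sketch of an uploader field declaring file_roles. The file_roles entry
// shape ({ value, label?, default? }) follows the docs above; the field
// name and type value are illustrative assumptions, not confirmed API.
const filesField = {
  name: 'files',
  type: 'uploader',        // assumed type name for an uploader field
  allowmultiple: true,
  file_roles: [
    { value: 'transcript', label: 'Intake Transcript', default: true },
    { value: 'summary' }   // label omitted: UI falls back to the value
  ]
};

// Resolve a role label the way the docs describe (label falls back to value).
function roleLabel(role) { return role.label || role.value; }
const defaultRole = filesField.file_roles.find(r => r.default) || null;
console.log(roleLabel(filesField.file_roles[1])); // prints "summary"
```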
+ ### Related endpoints (server/plugins)
+
+ - `POST /API/plugin/awsConnect` – S3 upload (and OpenAI mirror when configured)
+   - Input: `{ Key, base64, contentType, ACL? }`
+   - Output: `{ url, Key, bucket, etag, openai_file_id?, openai_purpose?, openai_error? }`
+
+ - `POST /API/plugin/chatgpt/filesRetryFromUrl` – (Re)upload an existing S3 file to OpenAI
+   - Input: `{ url, filename?, contentType? }`
+   - Output: `{ success, openai_file_id?, openai_purpose?, error? }`

  ## SERVER/PLATFORM mode
  check port 2099
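A minimal sketch of calling the `awsConnect` upload endpoint described above from a Node client. The endpoint path and payload keys come from the docs; the host, `Key`, and file contents are placeholders.

```javascript
// Build the awsConnect payload described above. Endpoint path and payload
// keys come from the docs; host, Key, and contents are placeholders.
function buildAwsConnectPayload(key, contents, contentType, acl) {
  const payload = {
    Key: key,
    base64: Buffer.from(contents).toString('base64'),
    contentType: contentType
  };
  if (acl) payload.ACL = acl; // optional; server currently defaults to 'public-read'
  return payload;
}

const payload = buildAwsConnectPayload('uploads/notes.txt', 'hello', 'text/plain');
console.log(payload.base64); // prints "aGVsbG8="

// The actual request would then be (placeholder host):
// fetch('https://your-joe-host/API/plugin/awsConnect', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(payload)
// }).then(r => r.json()).then(({ url, openai_file_id, openai_error }) => { /* ... */ });
```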
@@ -690,4 +760,4 @@ To help you develop and debug the widget + plugin in your instance, JOE exposes
  - Added a Responses‑based tool runner for `<joe-ai-widget>` that wires `ai_assistant.tools` into MCP functions via `chatgpt.runWithTools`.
  - Enhanced widget UX: assistant/user bubble theming (using `assistant_color` and user `color`), inline “tools used this turn” meta messages, and markdown rendering for assistant replies.
  - Expanded the AI widget test page with an assistant picker, live tool JSON viewer, a clickable conversation history list (resume existing `ai_widget_conversation` threads), and safer user handling (widget conversations now store user id/name/color explicitly and OAuth token‑exchange errors from Google are surfaced clearly during login).
- - Added field-level AI autofill support: schemas can declare `ai` config on a field (e.g. `{ name:'ai_summary', type:'rendering', ai:{ prompt:'Summarize the project in a few sentences.' } }`), which renders an inline “AI” button that calls `_joe.Ai.populateField('ai_summary')` and posts to `/API/plugin/chatgpt/autofill` to compute a JSON `patch` and update the UI (with confirmation before overwriting non-empty values).
+ - Added field-level AI autofill support: schemas can declare `ai` config on a field (e.g. `{ name:'ai_summary', type:'rendering', ai:{ prompt:'Summarize the project in a few sentences.' } }`), which renders an inline “AI” button that calls `_joe.Ai.populateField('ai_summary')` and posts to `/API/plugin/chatgpt/autofill` to compute a JSON `patch` and update the UI (with confirmation before overwriting non-empty values).
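The field‑level `ai` config from that entry can be sketched as follows. The `ai.prompt` key and the example field come from the note itself; adding `mcp_*` keys under `ai` is per the 0.10.657 notes, and combining them here is illustrative.

```javascript
// Example field definition with AI autofill, following the changelog entry.
// ai.prompt is from the note; the mcp_* keys mirror the prompt-level MCP
// config added in 0.10.657. Combining them in one field is illustrative.
const aiSummaryField = {
  name: 'ai_summary',
  type: 'rendering',
  ai: {
    prompt: 'Summarize the project in a few sentences.',
    mcp_enabled: true,        // optional: let the autofill run call MCP tools
    mcp_toolset: 'read-only'
  }
};

// The inline “AI” button would trigger something like:
// _joe.Ai.populateField('ai_summary');  // posts to /API/plugin/chatgpt/autofill
console.log(aiSummaryField.ai.mcp_toolset); // prints "read-only"
```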
@@ -143,7 +143,7 @@ var fields = {
    }
    return 'new item';
  }},
- status:{type:'select',rerender:'status',icon:'status',
+ status:{type:'select',rerender:'status',icon:'status',reloadable:true,
  after:function(item){
    if(item.joeUpdated){
      var cont =`
@@ -523,6 +523,7 @@ var fields = {
  type: "select",
  display: "Ai Model",
  values: [
+ {value:"gpt-5.2", name: "GPT-5.2 (Strong, 128k)" },
  { value:"gpt-5.1", name: "GPT-5.1 (Strong, 128k)" },
  { value:"gpt-5", name: "GPT-5 (Strong, 128K)" },
  { value:"gpt-5-mini", name: "GPT-5-mini (Cheap, 1M)" },
@@ -533,11 +534,13 @@ var fields = {
  { value: "gpt-4.1-nano", name: "4.1-nano (Fastest, light tasks)" }
  ],
  tooltip:`Ai Model Guide -
- GPT-4o is the default for fast, responsive tasks and supports up to 128k tokens. It’s ideal for short completions, summaries, and dynamic UI tools.
+ GPT-5.2 is the strongest option, with a 128k token context. It’s ideal for complex analysis, large datasets, and detailed reasoning.
+ GPT-5-mini is the default: cheap, with a 1M token context. It’s ideal for quick completions, summaries, and dynamic UI tools.
+ GPT-4o is best for fast, responsive tasks and supports up to 128k tokens. It’s ideal for short completions, summaries, and dynamic UI tools.
  GPT-4.1 and 4.1-mini support a massive 1 million token context, making them perfect for large inputs like full business profiles, long strategy texts, and multi-object analysis.
  4.1-mini is significantly cheaper than full 4.1, with great balance for most structured AI workflows.
  4.1-nano is best for lightweight classification or routing logic where speed and cost matter more than depth.`,
- default: "gpt-4o",
+ default: "gpt-5-mini",
  },
  objectChat:{
  type:'button',
@@ -560,10 +563,50 @@ var fields = {
    return _joe.schemas.ai_response.methods.listResponses(obj);
  }
  },
+ select_prompt:{
+   display:'Run AI Prompt',
+   type:'content',
+   reloadable:true,
+   icon:'ai_prompt',
+   run:function(obj){
+     if(!obj || !obj._id){
+       return '<joe-text>Save this item before running AI prompts.</joe-text>';
+     }
+     var itemtype = obj.itemtype || (_joe.current && _joe.current.schema && _joe.current.schema.name) || null;
+     // Active ai_prompt statuses
+     var activeStatuses = (_joe.getDataset('status')||[]).filter(function(s){
+       return Array.isArray(s.datasets) && s.datasets.includes('ai_prompt') && s.active;
+     }).map(function(s){ return s._id; });
+     // Filter prompts by dataset match (datasets[] OR content_items[].itemtype) and active status
+     var prompts = (_joe.getDataset('ai_prompt')||[]).filter(function(p){
+       var okStatus = !p.status || activeStatuses.indexOf(p.status) !== -1;
+       var matchByContentItems = (p.content_items||[]).some(function(ci){ return ci && ci.itemtype === itemtype; });
+       var matchByDatasets = Array.isArray(p.datasets) && p.datasets.indexOf(itemtype) !== -1;
+       var okDataset = matchByContentItems || matchByDatasets;
+       return okStatus && okDataset;
+     });
+     var selId = 'select_prompt_'+obj._id;
+     var filesSelId = 'select_prompt_files_'+obj._id;
+     var html = '';
+     html += '<div class="joe-field-comment">Select prompt</div>';
+     html += '<select id="'+selId+'" style="width:100%;">';
+     prompts.forEach(function(p){
+       var name = (p && p.name) || '';
+       html += '<option value="'+p._id+'">'+name+'</option>';
+     });
+     html += '</select>';
+     html += '<div class="joe-field-comment" style="margin-top:8px;">Attach files (optional)</div>';
+     html += '<select id="'+filesSelId+'" multiple class="joe-prompt-select"></select>';
+     html += '<script>(function(){ try{ _joe && _joe.Ai && _joe.Ai.renderFilesSelector && _joe.Ai.renderFilesSelector("'+filesSelId+'",{ cap:10, disableWithoutOpenAI:true }); }catch(e){} })();</script>';
+     html += '<joe-button class="joe-button joe-ai-button joe-iconed-button" onclick="_joe.Ai.runPromptSelection(this,\''+obj._id+'\',\''+selId+'\',\''+filesSelId+'\')">Run AI Prompt</joe-button>';
+     return html;
+   }
+ },
  proposeThought:{
  display:'Propose Thought',
  type:'content',
  reloadable:true,
+ icon:'ai_thought',
  run:function(obj){
    if (!obj || !obj._id) {
      return '<joe-text>Save this item before proposing Thoughts.</joe-text>';
@@ -584,9 +627,14 @@ var fields = {
    'Avoid meta-thoughts about prompts or schemas.'
  );
  var taId = 'propose_thought_prompt_' + obj._id;
+ var selId = 'propose_thought_files_' + obj._id;
  var html = '';
  html += '<div class="joe-field-comment">Thought prompt</div>';
- html += '<textarea id="'+taId+'" style="width:100%;min-height:80px;">'+defaultPrompt+'</textarea>';
+ html += '<textarea id="'+taId+'" class="joe-prompt-textarea">'+defaultPrompt+'</textarea>';
+ // Attach files selector (optional)
+ html += '<div class="joe-field-comment" style="margin-top:8px;">Attach files (optional)</div>';
+ html += '<select id="'+selId+'" class="joe-prompt-select" multiple></select>';
+ html += '<script>(function(){ try{ _joe && _joe.Ai && _joe.Ai.renderFilesSelector && _joe.Ai.renderFilesSelector("'+selId+'",{ cap:10, disableWithoutOpenAI:true }); }catch(e){} })();</script>';
  // For now, use the generic Thought agent; scope_id is the current object id.
  var args = "'" + obj._id + "','" + taId + "'";
  if (fieldDef && fieldDef.model) {
@@ -594,8 +642,8 @@ var fields = {
    var m = String(fieldDef.model).replace(/'/g, "\\'");
    args += ",'" + m + "'";
  }
- html += '<joe-button class="joe-button joe-ai-button joe-iconed-button" ';
- html += 'onclick="_joe.Ai.runProposeThought('+ args +')">Run Thought Agent</joe-button>';
+ html += '<joe-button class="joe-button joe-ai-button joe-iconed-button" ';
+ html += 'onclick="_joe.Ai.runProposeThought(this,'+ args +')">Run Thought Agent</joe-button>';
  return html;
  }
  },