json-object-editor 0.10.650 → 0.10.654

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "json-object-editor",
-  "version": "0.10.650",
+  "version": "0.10.654",
   "description": "JOE the Json Object Editor | Platform Edition",
   "main": "app.js",
   "scripts": {
package/readme.md CHANGED
@@ -5,6 +5,15 @@ JOE is software that allows you to manage data models via JSON objects. There ar
 
 
 
+## What’s new in 0.10.654 (brief)
+- OpenAI Files mirrored on S3 upload; uploader tiles show the `openai_file_id`. Retry upload is available per file.
+- Responses integration improvements:
+  - Per‑prompt `attachments_mode` on `ai_prompt` (`direct` vs `file_search`). Direct sends `input_file` parts; file search auto‑creates a vector store and attaches it.
+  - Safe retry if a model rejects `temperature/top_p` (we strip and retry once).
+- Select Prompt lists prompts by active status where either `datasets[]` or `content_items[].itemtype` matches the current object.
+- `ai_response` now shows `used_openai_file_ids` and correctly records `referenced_objects` for Select Prompt runs.
+- UX: “Run AI Prompt” and “Run Thought Agent” buttons disable and pulse while running to avoid double‑submits.
+
 ## Architecture & Mental Model (Server)
 
 - Global JOE
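The per‑prompt `attachments_mode` called out above can be pictured with a minimal `ai_prompt` sketch. Only `attachments_mode`, `datasets`, `content_items[].itemtype`, and `status` are taken from this diff; the remaining values are illustrative.

```js
// Illustrative ai_prompt record (not copied from the package):
// 'direct'      -> attached files are sent as input_file parts on the Responses call
// 'file_search' -> a vector store is auto-created and attached instead
const examplePrompt = {
  itemtype: 'ai_prompt',
  name: 'Summarize current object',           // illustrative
  status: 'status_cuid_active',               // must resolve to an active ai_prompt status
  datasets: ['business'],                     // matched against the current object's itemtype
  content_items: [{ itemtype: 'business' }],  // alternative match used by Select Prompt
  attachments_mode: 'direct'                  // or 'file_search'
};
```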
@@ -56,6 +65,16 @@ JOE is software that allows you to manage data models via JSON objects. There ar
   - Params: `{ itemtype?, q, filters?, fields?, threshold?, limit?, offset?, highlight?, minQueryLength? }`
   - Defaults: `fields` resolved from schema `searchables` (plural) if present; otherwise weights `name:0.6, info:0.3, description:0.1`. `threshold:0.5`, `limit:50`, `minQueryLength:2`.
   - Returns: `{ items, count }`. Each item may include `_score` (0..1) and `_matches` when `highlight` is true.
+- `findObjectsByTag { tags, itemtype?, limit?, offset?, source?, slim?, withCount?, countOnly?, tagThreshold? }`
+  - Find objects that have ALL specified tags (AND logic). Tags can be provided as IDs (CUIDs) or names (strings); names are resolved via fuzzy search.
+  - Returns: `{ items, tags, count?, error? }`, where `tags` contains the resolved tag objects used in the search.
+  - Use `countOnly: true` to get just the count and matched tags without fetching items.
+  - If tags cannot be resolved, returns `{ items: [], tags: [...resolved ones...], error: "message" }` instead of throwing.
+- `findObjectsByStatus { status, itemtype?, limit?, offset?, source?, slim?, withCount?, countOnly?, statusThreshold? }`
+  - Find objects by status. The status can be provided as an ID (CUID) or a name (string); the name is resolved via fuzzy search.
+  - Returns: `{ items, status, count?, error? }`, where `status` is the resolved status object used in the search.
+  - Use `countOnly: true` to get just the count and matched status without fetching items.
+  - If the status cannot be resolved, returns `{ items: [], status: null, error: "message" }` instead of throwing.
 - `saveObject({ object })`
 - `saveObjects({ objects, stopOnError?, concurrency? })`
   - Batch save with per-item history/events. Defaults: `stopOnError=false`, `concurrency=5`.
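To make the `findObjectsByTag` / `findObjectsByStatus` shapes above concrete, here is a rough sketch. The `callJoe` helper stands in for whatever transport your client uses to invoke these server methods and is not part of the package; the itemtypes, tag names, and status name are illustrative.

```js
// Hypothetical wrapper around the JOE server API; replace with your real transport.
async function callJoe(method, params) {
  // e.g. POST { method, params } to your JOE server and return the parsed JSON
  return {};
}

async function example() {
  // AND-match on tags; names (strings) are fuzzy-resolved, CUIDs are used as-is.
  const byTag = await callJoe('findObjectsByTag', {
    tags: ['roadmap', 'q3'],
    itemtype: 'project',
    countOnly: true            // just the count and matched tags, no items fetched
  });

  // Status by name or CUID; an unresolvable status yields { items: [], status: null, error }.
  const byStatus = await callJoe('findObjectsByStatus', {
    status: 'active',
    itemtype: 'ai_prompt',
    limit: 25
  });

  return { byTag, byStatus }; // shapes: { items, tags|status, count?, error? }
}
```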
@@ -166,18 +185,50 @@ JOE is software that allows you to manage data models via JSON objects. There ar
 - Each Thought run persists an `ai_response` with `response_type:'thought_generation'`, `referenced_objects:[scope_id]`, and `generated_thoughts[]` containing the ids of created Thought records.
 - In any schema UI you can include core fields `proposeThought` and `ai_responses` to (a) trigger a Thought run for the current object and (b) list all related `ai_response` records for audit and reuse.
 
-## File uploads (S3)
+## File uploads (S3 + OpenAI Files)
 - Uploader field options:
   - `allowmultiple: true|false` — allow selecting multiple files.
   - `url_field: 'image_url'` — on success, sets this property to the remote URL and rerenders that field.
   - `ACL: 'public-read'` — optional per-field ACL. When omitted, server currently defaults to `public-read` (temporary during migration).
 - Flow:
   - Client posts `{ Key, base64, contentType, ACL? }` to `/API/plugin/awsConnect`.
-  - Server uploads with AWS SDK v3 and returns `{ url, Key, bucket, etag }` (HTTP 200).
-  - Client uses `response.url`; if `url_field` is set, it assigns and rerenders that field.
+  - Server uploads to S3 (AWS SDK v3) and, if `OPENAI_API_KEY` is configured, also uploads the same bytes to OpenAI Files (purpose=`assistants`).
+  - Response shape: `{ url, Key, bucket, etag, openai_file_id?, openai_purpose?, openai_error? }`.
+  - Client:
+    - Sets the `url` on the file object; if `url_field` is set on the schema field, it assigns that property and rerenders.
+    - Persists OpenAI metadata on the file object: `openai_file_id`, `openai_purpose`, `openai_status`, `openai_error`.
+    - Renders the OpenAI file id under the filename on each uploader tile. The “OpenAI: OK” banner has been removed.
+    - Shows a per‑file “Upload to OpenAI” / “Retry OpenAI” action when no id is present or when an error occurred. This calls `POST /API/plugin/chatgpt/filesRetryFromUrl` with `{ url, filename, contentType }` and updates the file metadata.
 - Errors:
   - If bucket or region config is missing, server returns 400 with a clear message.
   - If the bucket has ACLs disabled, server returns 400: “Bucket has ACLs disabled… remove ACL or switch to presigned/proxy access.”
+  - If OpenAI upload fails, the uploader shows `OpenAI error: <message>` inline; you can retry from the file row.
+
+- Using OpenAI file ids:
+  - File ids are private; there is no public URL to view them.
+  - Use the OpenAI Files API (with your API key) to retrieve metadata or download content:
+    - Metadata: `GET /v1/files/{file_id}`
+    - Content: `GET /v1/files/{file_id}/content`
+  - Node example:
+    ```js
+    const OpenAI = require('openai');
+    const fs = require('fs');
+    const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
+    (async () => {
+      const meta = await client.files.retrieve('file_abc123');  // file metadata (filename, bytes, purpose)
+      const stream = await client.files.content('file_abc123'); // file content
+      const buf = Buffer.from(await stream.arrayBuffer());
+      fs.writeFileSync('downloaded.bin', buf);
+    })();
+    ```
+
+### Related endpoints (server/plugins)
+
+- `POST /API/plugin/awsConnect` – S3 upload (and OpenAI mirror when configured)
+  - Input: `{ Key, base64, contentType, ACL? }`
+  - Output: `{ url, Key, bucket, etag, openai_file_id?, openai_purpose?, openai_error? }`
+
+- `POST /API/plugin/chatgpt/filesRetryFromUrl` – (Re)upload an existing S3 file to OpenAI
+  - Input: `{ url, filename?, contentType? }`
+  - Output: `{ success, openai_file_id?, openai_purpose?, error? }`
 
 ## SERVER/PLATFORM mode
 check port 2099
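The upload flow and plugin endpoints documented in the readme section above can be exercised roughly as sketched below; the endpoint paths and payload fields come from the readme, while the base URL, helper name, and error handling are illustrative.

```js
// Sketch: upload a local file through /API/plugin/awsConnect, then retry the
// OpenAI mirror via filesRetryFromUrl if no openai_file_id came back.
const fs = require('fs');

async function uploadWithOpenAiMirror(baseUrl, localPath, Key, contentType) {
  const base64 = fs.readFileSync(localPath).toString('base64');

  const res = await fetch(baseUrl + '/API/plugin/awsConnect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ Key, base64, contentType }) // ACL is optional
  });
  const data = await res.json(); // { url, Key, bucket, etag, openai_file_id?, openai_error? }

  if (data.url && !data.openai_file_id) {
    const retry = await fetch(baseUrl + '/API/plugin/chatgpt/filesRetryFromUrl', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url: data.url, filename: Key, contentType })
    });
    const retried = await retry.json(); // { success, openai_file_id?, error? }
    if (retried.openai_file_id) data.openai_file_id = retried.openai_file_id;
  }
  return data;
}
```

This mirrors, from a script, what the uploader UI does with the per-file “Retry OpenAI” action.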
@@ -303,6 +354,8 @@ Properties for all Fields
   - sortable(true)
 - `code` :
   - language
+- `json` :
+  - edit/store JSON subobjects as objects (not strings) using the code editor in JSON mode; pretty-prints on blur/save and treats whitespace-only reformatting as no-op changes.
 
 - `boolean`:
   - label:controls checkbox label
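Following the field-definition style shown elsewhere in this diff, a `json` field might be declared as below; the field name and `display` value are illustrative, while `type:'json'` and its behavior come from the readme.

```js
// Illustrative schema field using the new `json` type: the value is kept as an
// object (not a string), edited in the code editor's JSON mode, pretty-printed
// on blur/save, and whitespace-only reformatting is treated as a no-op change.
var fields = {
  config: { type: 'json', display: 'Config (JSON)' }
};
```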
@@ -6,6 +6,7 @@ var App = function () {
   // Core AI-related schemas in JOE
   this.collections = [
     'thought',
+    'ai_pipeline',
     'ai_assistant',
     'ai_prompt',
     'ai_tool',
@@ -97,6 +97,8 @@ var fields = {
     }
   },
   created:{locked:true,width:'50%'},
+  creator_type:{type:'select',values:['','user','agent'],locked:true,comment:'High-level origin of this record: human user or agent.'},
+  creator_id:{type:'text',locked:true,comment:'_id of the user or logical agent that created this record.'},
   itemtype:{locked:true, hidden:true},
   priority:{type:'select',values:[{name:'',value:1000},{name:1},{name:2},{name:3}]},
   site:{type:'select',values:'site',goto:'site',idprop:'_id',blank:true,icon:'site'},
@@ -141,7 +143,7 @@ var fields = {
     }
     return 'new item';
   }},
-  status:{type:'select',rerender:'status',icon:'status',
+  status:{type:'select',rerender:'status',icon:'status',reloadable:true,
     after:function(item){
       if(item.joeUpdated){
         var cont =`
@@ -521,6 +523,7 @@ var fields = {
     type: "select",
     display: "Ai Model",
     values: [
+      {value:"gpt-5.2", name: "GPT-5.2 (Strong, 128k)" },
       { value:"gpt-5.1", name: "GPT-5.1 (Strong, 128k)" },
       { value:"gpt-5", name: "GPT-5 (Strong, 128K)" },
       { value:"gpt-5-mini", name: "GPT-5-mini (Cheap, 1M)" },
@@ -531,11 +534,13 @@ var fields = {
       { value: "gpt-4.1-nano", name: "4.1-nano (Fastest, light tasks)" }
     ],
     tooltip:`Ai Model Guide -
-    GPT-4o is the default for fast, responsive tasks and supports up to 128k tokens. It’s ideal for short completions, summaries, and dynamic UI tools.
+    GPT-5.2 is the default for strong, 128k token tasks. It’s ideal for complex analysis, large datasets, and detailed reasoning.
+    GPT-5-mini is the default for cheap, 1M token tasks. It’s ideal for quick completions, summaries, and dynamic UI tools.
+    GPT-4o is for fast, responsive tasks and supports up to 128k tokens. It’s ideal for short completions, summaries, and dynamic UI tools.
     GPT-4.1 and 4.1-mini support a massive 1 million token context, making them perfect for large inputs like full business profiles, long strategy texts, and multi-object analysis.
     4.1-mini is significantly cheaper than full 4.1, with great balance for most structured AI workflows.
     4.1-nano is best for lightweight classification or routing logic where speed and cost matter more than depth.`,
-    default: "gpt-4o",
+    default: "gpt-5-mini",
   },
   objectChat:{
     type:'button',
@@ -558,17 +563,58 @@ var fields = {
       return _joe.schemas.ai_response.methods.listResponses(obj);
     }
   },
+  select_prompt:{
+    display:'Run AI Prompt',
+    type:'content',
+    reloadable:true,
+    icon:'ai_prompt',
+    run:function(obj){
+      if(!obj || !obj._id){
+        return '<joe-text>Save this item before running AI prompts.</joe-text>';
+      }
+      var itemtype = obj.itemtype || (_joe.current && _joe.current.schema && _joe.current.schema.name) || null;
+      // Active ai_prompt statuses
+      var activeStatuses = (_joe.getDataset('status')||[]).filter(function(s){
+        return Array.isArray(s.datasets) && s.datasets.includes('ai_prompt') && s.active;
+      }).map(function(s){ return s._id; });
+      // Filter prompts by dataset match (datasets[] OR content_items[].itemtype) and active
+      var prompts = (_joe.getDataset('ai_prompt')||[]).filter(function(p){
+        var okStatus = !p.status || activeStatuses.indexOf(p.status) !== -1;
+        var matchByContentItems = (p.content_items||[]).some(function(ci){ return ci && ci.itemtype === itemtype; });
+        var matchByDatasets = Array.isArray(p.datasets) && p.datasets.indexOf(itemtype) !== -1;
+        var okDataset = matchByContentItems || matchByDatasets;
+        return okStatus && okDataset;
+      });
+      var selId = 'select_prompt_'+obj._id;
+      var filesSelId = 'select_prompt_files_'+obj._id;
+      var html = '';
+      html += '<div class="joe-field-comment">Select prompt</div>';
+      html += '<select id="'+selId+'" style="width:100%;">';
+      prompts.forEach(function(p){
+        var name = (p && p.name) || '';
+        html += '<option value="'+p._id+'">'+name+'</option>';
+      });
+      html += '</select>';
+      html += '<div class="joe-field-comment" style="margin-top:8px;">Attach files (optional)</div>';
+      html += '<select id="'+filesSelId+'" multiple class="joe-prompt-select"></select>';
+      html += '<script>(function(){ try{ _joe && _joe.Ai && _joe.Ai.renderFilesSelector && _joe.Ai.renderFilesSelector("'+filesSelId+'",{ cap:10, disableWithoutOpenAI:true }); }catch(e){} })();</script>';
+      html += '<joe-button class="joe-button joe-ai-button joe-iconed-button" onclick="_joe.Ai.runPromptSelection(this,\''+obj._id+'\',\''+selId+'\',\''+filesSelId+'\')">Run AI Prompt</joe-button>';
+      return html;
+    }
+  },
   proposeThought:{
     display:'Propose Thought',
     type:'content',
     reloadable:true,
+    icon:'ai_thought',
     run:function(obj){
       if (!obj || !obj._id) {
         return '<joe-text>Save this item before proposing Thoughts.</joe-text>';
       }
       var schema = _joe.current && _joe.current.schema || null;
       var itemtype = (obj && obj.itemtype) || (schema && schema.name) || 'item';
-      // Allow schemas to override the default prompt via extend:'proposeThought',specs:{prompt:'...'}
+      // Allow schemas to override the default prompt/model via
+      // extend:'proposeThought', specs:{ prompt:'...', model:'gpt-5-nano' }
       var fieldDef = null;
       if (_joe && typeof _joe.getField === 'function') {
         try { fieldDef = _joe.getField('proposeThought'); } catch(_e) {}
@@ -581,12 +627,23 @@ var fields = {
         'Avoid meta-thoughts about prompts or schemas.'
       );
       var taId = 'propose_thought_prompt_' + obj._id;
+      var selId = 'propose_thought_files_' + obj._id;
       var html = '';
-      html += '<joe-text>Thought prompt</joe-text>';
-      html += '<textarea id="'+taId+'" style="width:100%;min-height:80px;">'+defaultPrompt+'</textarea>';
+      html += '<div class="joe-field-comment">Thought prompt</div>';
+      html += '<textarea id="'+taId+'" class="joe-prompt-textarea">'+defaultPrompt+'</textarea>';
+      // Attach files selector (optional)
+      html += '<div class="joe-field-comment" style="margin-top:8px;">Attach files (optional)</div>';
+      html += '<select id="'+selId+'" class="joe-prompt-select" multiple></select>';
+      html += '<script>(function(){ try{ _joe && _joe.Ai && _joe.Ai.renderFilesSelector && _joe.Ai.renderFilesSelector("'+selId+'",{ cap:10, disableWithoutOpenAI:true }); }catch(e){} })();</script>';
       // For now, use the generic Thought agent; scope_id is the current object id.
-      html += "<joe-button class=\"joe-button joe-blue-button\" ";
-      html += "onclick=\"_joe.Ai.runProposeThought('"+obj._id+"','"+taId+"')\">Run Thought Agent</joe-button>";
+      var args = "'" + obj._id + "','" + taId + "'";
+      if (fieldDef && fieldDef.model) {
+        // escape single quotes in model name for inline JS
+        var m = String(fieldDef.model).replace(/'/g, "\\'");
+        args += ",'" + m + "'";
+      }
+      html += '<joe-button class="joe-button joe-ai-button joe-iconed-button" ';
+      html += 'onclick="_joe.Ai.runProposeThought(this,'+ args +')">Run Thought Agent</joe-button>';
       return html;
     }
   },