json-object-editor 0.10.653 → 0.10.654

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "json-object-editor",
- "version": "0.10.653",
+ "version": "0.10.654",
  "description": "JOE the Json Object Editor | Platform Edition",
  "main": "app.js",
  "scripts": {
package/readme.md CHANGED
@@ -5,6 +5,15 @@ JOE is software that allows you to manage data models via JSON objects. There ar
 
 
 
+ ## What’s new in 0.10.654 (brief)
+ - OpenAI Files mirrored on S3 upload; uploader tiles show the `openai_file_id`. Retry upload is available per file.
+ - Responses integration improvements:
+   - Per‑prompt `attachments_mode` on `ai_prompt` (`direct` vs `file_search`). Direct sends `input_file` parts; file search auto‑creates a vector store and attaches it.
+   - Safe retry if a model rejects `temperature/top_p` (we strip and retry once).
+ - Select Prompt lists prompts by active status where either `datasets[]` or `content_items[].itemtype` matches the current object.
+ - `ai_response` now shows `used_openai_file_ids` and correctly records `referenced_objects` for Select Prompt runs.
+ - UX: “Run AI Prompt” and “Run Thought Agent” buttons disable and pulse while running to avoid double‑submits.
+
  ## Architecture & Mental Model (Server)
 
  - Global JOE
@@ -176,18 +185,50 @@ JOE is software that allows you to manage data models via JSON objects. There ar
  - Each Thought run persists an `ai_response` with `response_type:'thought_generation'`, `referenced_objects:[scope_id]`, and `generated_thoughts[]` containing the ids of created Thought records.
  - In any schema UI you can include core fields `proposeThought` and `ai_responses` to (a) trigger a Thought run for the current object and (b) list all related `ai_response` records for audit and reuse.
 
- ## File uploads (S3)
+ ## File uploads (S3 + OpenAI Files)
  - Uploader field options:
    - `allowmultiple: true|false` — allow selecting multiple files.
    - `url_field: 'image_url'` — on success, sets this property to the remote URL and rerenders that field.
    - `ACL: 'public-read'` — optional per-field ACL. When omitted, server currently defaults to `public-read` (temporary during migration).
  - Flow:
    - Client posts `{ Key, base64, contentType, ACL? }` to `/API/plugin/awsConnect`.
-   - Server uploads with AWS SDK v3 and returns `{ url, Key, bucket, etag }` (HTTP 200).
-   - Client uses `response.url`; if `url_field` is set, it assigns and rerenders that field.
+   - Server uploads to S3 (AWS SDK v3) and, if `OPENAI_API_KEY` is configured, also uploads the same bytes to OpenAI Files (purpose=`assistants`).
+   - Response shape: `{ url, Key, bucket, etag, openai_file_id?, openai_purpose?, openai_error? }`.
+   - Client:
+     - Sets the `url` on the file object; if `url_field` is set on the schema field, it assigns that property and rerenders.
+     - Persists OpenAI metadata on the file object: `openai_file_id`, `openai_purpose`, `openai_status`, `openai_error`.
+     - Renders the OpenAI file id under the filename on each uploader tile. The “OpenAI: OK” banner has been removed.
+     - Shows a per‑file “Upload to OpenAI” / “Retry OpenAI” action when no id is present or when an error occurred. This calls `POST /API/plugin/chatgpt/filesRetryFromUrl` with `{ url, filename, contentType }` and updates the file metadata.
  - Errors:
    - If bucket or region config is missing, server returns 400 with a clear message.
    - If the bucket has ACLs disabled, server returns 400: “Bucket has ACLs disabled… remove ACL or switch to presigned/proxy access.”
+   - If OpenAI upload fails, the uploader shows `OpenAI error: <message>` inline; you can retry from the file row.
+
+ - Using OpenAI file ids:
+   - File ids are private; there is no public URL to view them.
+   - Use the OpenAI Files API (with your API key) to retrieve metadata or download content:
+     - Metadata: `GET /v1/files/{file_id}`
+     - Content: `GET /v1/files/{file_id}/content`
+   - Node example:
+ ```js
+ const OpenAI = require('openai');
+ const fs = require('fs');
+
+ (async () => {
+   const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
+   const meta = await client.files.retrieve('file_abc123');
+   const stream = await client.files.content('file_abc123');
+   const buf = Buffer.from(await stream.arrayBuffer());
+   fs.writeFileSync('downloaded.bin', buf);
+ })();
+ ```
+
+ ### Related endpoints (server/plugins)
+
+ - `POST /API/plugin/awsConnect` – S3 upload (and OpenAI mirror when configured)
+   - Input: `{ Key, base64, contentType, ACL? }`
+   - Output: `{ url, Key, bucket, etag, openai_file_id?, openai_purpose?, openai_error? }`
+
+ - `POST /API/plugin/chatgpt/filesRetryFromUrl` – (Re)upload an existing S3 file to OpenAI
+   - Input: `{ url, filename?, contentType? }`
+   - Output: `{ success, openai_file_id?, openai_purpose?, error? }`
 
  ## SERVER/PLATFORM mode
  check port 2099
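To make the upload flow above concrete, here is a minimal client-side sketch of the `/API/plugin/awsConnect` call. The endpoint and payload/response shapes come from the readme above; the `uploadViaJoe` helper name and error handling are illustrative, not part of the package:

```js
// Sketch: read a File as base64 and post it to the upload endpoint.
async function uploadViaJoe(file, key) {
  const base64 = await new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(String(reader.result).split(',')[1]); // strip the data: prefix
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
  const res = await fetch('/API/plugin/awsConnect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ Key: key, base64, contentType: file.type })
  });
  const data = await res.json(); // { url, Key, bucket, etag, openai_file_id?, openai_purpose?, openai_error? }
  if (data.openai_error) console.warn('OpenAI mirror failed:', data.openai_error); // S3 upload still succeeded
  return data;
}
```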
@@ -143,7 +143,7 @@ var fields = {
  }
  return 'new item';
  }},
- status:{type:'select',rerender:'status',icon:'status',
+ status:{type:'select',rerender:'status',icon:'status',reloadable:true,
  after:function(item){
  if(item.joeUpdated){
  var cont =`
@@ -523,6 +523,7 @@ var fields = {
  type: "select",
  display: "Ai Model",
  values: [
+ {value:"gpt-5.2", name: "GPT-5.2 (Strong, 128k)" },
  { value:"gpt-5.1", name: "GPT-5.1 (Strong, 128k)" },
  { value:"gpt-5", name: "GPT-5 (Strong, 128K)" },
  { value:"gpt-5-mini", name: "GPT-5-mini (Cheap, 1M)" },
@@ -533,11 +534,13 @@ var fields = {
  { value: "gpt-4.1-nano", name: "4.1-nano (Fastest, light tasks)" }
  ],
  tooltip:`Ai Model Guide -
- GPT-4o is the default for fast, responsive tasks and supports up to 128k tokens. It’s ideal for short completions, summaries, and dynamic UI tools.
+ GPT-5.2 is the default for strong, 128k token tasks. It’s ideal for complex analysis, large datasets, and detailed reasoning.
+ GPT-5-mini is the default for cheap, 1M token tasks. It’s ideal for quick completions, summaries, and dynamic UI tools.
+ GPT-4o is for fast, responsive tasks and supports up to 128k tokens. It’s ideal for short completions, summaries, and dynamic UI tools.
  GPT-4.1 and 4.1-mini support a massive 1 million token context, making them perfect for large inputs like full business profiles, long strategy texts, and multi-object analysis.
  4.1-mini is significantly cheaper than full 4.1, with great balance for most structured AI workflows.
  4.1-nano is best for lightweight classification or routing logic where speed and cost matter more than depth.`,
- default: "gpt-4o",
+ default: "gpt-5-mini",
  },
  objectChat:{
  type:'button',
@@ -560,10 +563,50 @@ var fields = {
  return _joe.schemas.ai_response.methods.listResponses(obj);
  }
  },
+ select_prompt:{
+ display:'Run AI Prompt',
+ type:'content',
+ reloadable:true,
+ icon:'ai_prompt',
+ run:function(obj){
+ if(!obj || !obj._id){
+ return '<joe-text>Save this item before running AI prompts.</joe-text>';
+ }
+ var itemtype = obj.itemtype || (_joe.current && _joe.current.schema && _joe.current.schema.name) || null;
+ // Active ai_prompt statuses
+ var activeStatuses = (_joe.getDataset('status')||[]).filter(function(s){
+ return Array.isArray(s.datasets) && s.datasets.includes('ai_prompt') && s.active;
+ }).map(function(s){ return s._id; });
+ // Filter prompts by dataset match (datasets[] OR content_items[].itemtype) and active
+ var prompts = (_joe.getDataset('ai_prompt')||[]).filter(function(p){
+ var okStatus = !p.status || activeStatuses.indexOf(p.status) !== -1;
+ var matchByContentItems = (p.content_items||[]).some(function(ci){ return ci && ci.itemtype === itemtype; });
+ var matchByDatasets = Array.isArray(p.datasets) && p.datasets.indexOf(itemtype) !== -1;
+ var okDataset = matchByContentItems || matchByDatasets;
+ return okStatus && okDataset;
+ });
+ var selId = 'select_prompt_'+obj._id;
+ var filesSelId = 'select_prompt_files_'+obj._id;
+ var html = '';
+ html += '<div class="joe-field-comment">Select prompt</div>';
+ html += '<select id="'+selId+'" style="width:100%;">';
+ prompts.forEach(function(p){
+ var name = (p && p.name) || '';
+ html += '<option value="'+p._id+'">'+name+'</option>';
+ });
+ html += '</select>';
+ html += '<div class="joe-field-comment" style="margin-top:8px;">Attach files (optional)</div>';
+ html += '<select id="'+filesSelId+'" multiple class="joe-prompt-select"></select>';
+ html += '<script>(function(){ try{ _joe && _joe.Ai && _joe.Ai.renderFilesSelector && _joe.Ai.renderFilesSelector("'+filesSelId+'",{ cap:10, disableWithoutOpenAI:true }); }catch(e){} })();</script>';
+ html += '<joe-button class="joe-button joe-ai-button joe-iconed-button" onclick="_joe.Ai.runPromptSelection(this,\''+obj._id+'\',\''+selId+'\',\''+filesSelId+'\')">Run AI Prompt</joe-button>';
+ return html;
+ }
+ },
  proposeThought:{
  display:'Propose Thought',
  type:'content',
  reloadable:true,
+ icon:'ai_thought',
  run:function(obj){
  if (!obj || !obj._id) {
  return '<joe-text>Save this item before proposing Thoughts.</joe-text>';
@@ -584,9 +627,14 @@ var fields = {
  'Avoid meta-thoughts about prompts or schemas.'
  );
  var taId = 'propose_thought_prompt_' + obj._id;
+ var selId = 'propose_thought_files_' + obj._id;
  var html = '';
  html += '<div class="joe-field-comment">Thought prompt</div>';
- html += '<textarea id="'+taId+'" style="width:100%;min-height:80px;">'+defaultPrompt+'</textarea>';
+ html += '<textarea id="'+taId+'" class="joe-prompt-textarea">'+defaultPrompt+'</textarea>';
+ // Attach files selector (optional)
+ html += '<div class="joe-field-comment" style="margin-top:8px;">Attach files (optional)</div>';
+ html += '<select id="'+selId+'" class="joe-prompt-select" multiple></select>';
+ html += '<script>(function(){ try{ _joe && _joe.Ai && _joe.Ai.renderFilesSelector && _joe.Ai.renderFilesSelector("'+selId+'",{ cap:10, disableWithoutOpenAI:true }); }catch(e){} })();</script>';
  // For now, use the generic Thought agent; scope_id is the current object id.
  var args = "'" + obj._id + "','" + taId + "'";
  if (fieldDef && fieldDef.model) {
@@ -594,8 +642,8 @@ var fields = {
  var m = String(fieldDef.model).replace(/'/g, "\\'");
  args += ",'" + m + "'";
  }
- html += '<joe-button class="joe-button joe-ai-button joe-iconed-button" ';
- html += 'onclick="_joe.Ai.runProposeThought('+ args +')">Run Thought Agent</joe-button>';
+ html += '<joe-button class="joe-button joe-ai-button joe-iconed-button" ';
+ html += 'onclick="_joe.Ai.runProposeThought(this,'+ args +')">Run Thought Agent</joe-button>';
  return html;
  }
  },
@@ -696,8 +696,8 @@ MCP.tools = {
  },
 
  // Run a thought agent (pipeline + Responses API) and materialize proposed Thoughts.
- runThoughtAgent: async ({ agent_id, user_input, scope_id, model } = {}, ctx = {}) => {
- const context = Object.assign({}, ctx, { model });
+ runThoughtAgent: async ({ agent_id, user_input, scope_id, model, openai_file_ids } = {}, ctx = {}) => {
+ const context = Object.assign({}, ctx, { model, openai_file_ids });
  const result = await ThoughtPipeline.runAgent(agent_id, user_input, scope_id, context);
  return result;
  },
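For context, a sketch of how the extended tool might be invoked in-process; the ids are placeholders inferred from the signature above, not values from the package:

```js
// Sketch: calling the extended MCP tool with attached OpenAI file ids.
// ctx is merged with { model, openai_file_ids } per the diff above.
async function exampleRunThoughtAgent() {
  return MCP.tools.runThoughtAgent(
    {
      agent_id: 'agent_abc',            // hypothetical agent id
      user_input: 'Propose next steps for this project',
      scope_id: 'obj_123',              // current object id
      model: 'gpt-5-mini',
      openai_file_ids: ['file_abc123']  // forwarded to ThoughtPipeline.runAgent via context
    },
    {} // extra context (e.g., user), if any
  );
}
```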
@@ -460,6 +460,12 @@ ThoughtPipeline.runAgent = async function runAgent(agentId, userInput, scopeId,
  usage: response.usage || {},
  prompt_method: 'ThoughtPipeline.runAgent'
  };
+ // Persist used OpenAI file ids when provided (audit convenience)
+ try{
+ if (ctx && Array.isArray(ctx.openai_file_ids) && ctx.openai_file_ids.length){
+ aiResponseObj.used_openai_file_ids = ctx.openai_file_ids.slice(0,10);
+ }
+ }catch(_e){}
 
  var savedResponse = await new Promise(function (resolve, reject) {
  try {
@@ -3,6 +3,7 @@ function AWSConnect(){
  this.default = function(data,req,res){
  // AWS SDK v3 (modular)
  const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
+ const chatgpt = require('./chatgpt.js');
  var settings_config = tryEval(JOE.Cache.settings.AWS_S3CONFIG)||{};
  var config = $c.merge(settings_config);
 
@@ -67,12 +68,41 @@ var response = {
  }
 
  s3.send(new PutObjectCommand(s3Params))
- .then(function(data){
+ .then(async function(data){
  // Construct canonical URL from region + bucket
  var region = config.region;
  var url = 'https://'+Bucket+'.s3.'+region+'.amazonaws.com/'+Key;
  response.data = data;
  response.url = url;
+ response.etag = data && (data.ETag || data.ETAG || data.eTag);
+
+ // If OpenAI key is configured, also upload to OpenAI Files (purpose: assistants)
+ try{
+ var hasOpenAIKey = !!JOE.Utils.Settings && !!JOE.Utils.Settings('OPENAI_API_KEY');
+ if(hasOpenAIKey){
+ // Prefer original buffer when provided via base64
+ if(data && typeof data === 'object'){ /* noop to keep linter happy */}
+ if(typeof s3Params.Body !== 'string' && s3Params.Body){
+ var filenameOnly = Key.split('/').pop();
+ var result = await chatgpt.filesUploadFromBufferHelper({
+ buffer: s3Params.Body,
+ filename: filenameOnly,
+ contentType: s3Params.ContentType,
+ purpose: 'assistants'
+ });
+ if(result && result.id){
+ response.openai_file_id = result.id;
+ response.openai_purpose = result.purpose || 'assistants';
+ }
+ }else{
+ // Fallback: if we didn't have a buffer (unlikely with current flow),
+ // skip immediate upload; client can use retry endpoint.
+ }
+ }
+ }catch(e){
+ // Non-fatal: S3 upload already succeeded
+ response.openai_error = (e && e.message) || String(e);
+ }
  res.status(200).send(response);
  console.log("Successfully uploaded data to "+Key);
  })
@@ -1,6 +1,8 @@
  const OpenAI = require("openai");
  const { google } = require('googleapis');
  const path = require('path');
+ const os = require('os');
+ const fs = require('fs');
  const MCP = require("../modules/MCP.js");
  // const { name } = require("json-object-editor/server/webconfig");
 
@@ -413,6 +415,109 @@ function shrinkUnderstandObjectMessagesForTokens(messages) {
  function newClient() {
  return new OpenAI({ apiKey: getAPIKey() });
  }
+
+ // Safely call Responses API with optional temperature/top_p.
+ // If the model rejects these parameters, strip and retry once.
+ async function safeResponsesCreate(openai, payload){
+ try{
+ return await openai.responses.create(payload);
+ }catch(e){
+ try{
+ var msg = (e && (e.error && e.error.message) || e.message || '').toLowerCase();
+ var badTemp = msg.includes("unsupported parameter") && msg.includes("temperature");
+ var badTopP = msg.includes("unsupported parameter") && msg.includes("top_p");
+ var unknownTemp = msg.includes("unknown parameter") && msg.includes("temperature");
+ var unknownTopP = msg.includes("unknown parameter") && msg.includes("top_p");
+ if (badTemp || badTopP || unknownTemp || unknownTopP){
+ var p2 = Object.assign({}, payload);
+ if (p2.hasOwnProperty('temperature')) delete p2.temperature;
+ if (p2.hasOwnProperty('top_p')) delete p2.top_p;
+ console.warn('[chatgpt] Retrying without temperature/top_p due to model rejection');
+ return await openai.responses.create(p2);
+ }
+ }catch(_e){ /* fallthrough */ }
+ throw e;
+ }
+ }
+
+ // Ensure a vector store exists with the provided file_ids indexed; returns { vectorStoreId }
+ async function ensureVectorStoreForFiles(fileIds = []){
+ const openai = newClient();
+ // Create ephemeral store per run (could be optimized to reuse/persist later)
+ const vs = await openai.vectorStores.create({ name: 'JOE Prompt Run '+Date.now() });
+ const storeId = vs.id;
+ // Link files by id
+ for (const fid of (fileIds||[]).slice(0,10)) {
+ try{
+ await openai.vectorStores.files.create(storeId, { file_id: fid });
+ }catch(e){
+ console.warn('[chatgpt] vectorStores.files.create failed for', fid, e && e.message || e);
+ }
+ }
+ // Poll (best-effort) until files are processed or timeout
+ const timeoutMs = 8000;
+ const start = Date.now();
+ try{
+ while(Date.now() - start < timeoutMs){
+ const listed = await openai.vectorStores.files.list(storeId, { limit: 100 });
+ const items = (listed && listed.data) || [];
+ const pending = items.some(f => f.status && f.status !== 'completed');
+ if(!pending){ break; }
+ await new Promise(r => setTimeout(r, 500));
+ }
+ }catch(_e){ /* non-fatal */ }
+ return { vectorStoreId: storeId };
+ }
+
+ // ---------------- OpenAI Files helpers ----------------
+ async function uploadFileFromBuffer(buffer, filename, contentType, purpose) {
+ const openai = newClient();
+ const usePurpose = purpose || 'assistants';
+ const tmpDir = os.tmpdir();
+ const safeName = filename || ('upload_' + Date.now());
+ const tmpPath = path.join(tmpDir, safeName);
+ await fs.promises.writeFile(tmpPath, buffer);
+ try {
+ // openai.files.create accepts a readable stream
+ const fileStream = fs.createReadStream(tmpPath);
+ const created = await openai.files.create({
+ purpose: usePurpose,
+ file: fileStream
+ });
+ return { id: created.id, purpose: usePurpose };
+ } finally {
+ // best-effort cleanup
+ fs.promises.unlink(tmpPath).catch(()=>{});
+ }
+ }
+
+ // Expose a helper that other plugins can call in-process
+ this.filesUploadFromBufferHelper = async function ({ buffer, filename, contentType, purpose }) {
+ if (!buffer || !buffer.length) {
+ throw new Error('Missing buffer');
+ }
+ return await uploadFileFromBuffer(buffer, filename, contentType, purpose || 'assistants');
+ };
+
+ // Public endpoint to retry OpenAI upload from a URL (e.g., S3 object URL)
+ this.filesRetryFromUrl = async function (data, req, res) {
+ try {
+ const { default: got } = await import('got');
+ const url = data && (data.url || data.location);
+ const filename = data && data.filename || (url && url.split('/').pop()) || ('upload_' + Date.now());
+ const contentType = data && data.contentType || undefined;
+ const purpose = 'assistants';
+ if (!url) {
+ return { success: false, error: 'Missing url' };
+ }
+ const resp = await got(url, { responseType: 'buffer' });
+ const buffer = resp.body;
+ const created = await uploadFileFromBuffer(buffer, filename, contentType, purpose);
+ return { success: true, openai_file_id: created.id, openai_purpose: created.purpose };
+ } catch (e) {
+ return { success: false, error: e && e.message || 'Retry upload failed' };
+ }
+ };
  this.testPrompt= async function(data, req, res) {
  try {
  var payload = {
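Assuming the route registration shown later in this diff (`filesRetryFromUrl` is exported on the plugin), the retry endpoint could be exercised like this; the S3 URL is a placeholder, and host/auth are deployment-specific:

```js
// Sketch: re-upload an existing S3 object to OpenAI Files via the plugin route.
async function retryOpenAIUpload() {
  const res = await fetch('/API/plugin/chatgpt/filesRetryFromUrl', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      url: 'https://my-bucket.s3.us-east-1.amazonaws.com/uploads/report.pdf', // placeholder S3 URL
      filename: 'report.pdf',
      contentType: 'application/pdf'
    })
  });
  return res.json(); // { success, openai_file_id?, openai_purpose?, error? }
}
```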
@@ -839,7 +944,8 @@ this.executeJOEAiPrompt = async function(data, req, res) {
  const referencedObjectIds = []; // Track all objects touched during helper function
  try {
  const promptId = data.ai_prompt;
- const params = data;
+ // Support both payload shapes: { ai_prompt, params:{...}, ... } and flat
+ const params = (data && (data.params || data)) || {};
 
  if (!promptId) {
  return { error: "Missing prompt_id." };
@@ -908,8 +1014,49 @@ this.executeJOEAiPrompt = async function(data, req, res) {
  //return_token_usage: true
  //max_tokens: prompt.max_tokens ?? 1200
  };
-
- const response = await openai.responses.create(payload);
+ coloredLog(`${payload.model} and ${payload.temperature}`);
+ const mode = (prompt.attachments_mode || 'direct');
+ if (Array.isArray(data.openai_file_ids) && data.openai_file_ids.length){
+ if (mode === 'file_search'){
+ // Use file_search tool and attach vector store
+ try{
+ const ensured = await ensureVectorStoreForFiles(data.openai_file_ids);
+ payload.tools = payload.tools || [];
+ if(!payload.tools.find(t => t && t.type === 'file_search')){
+ payload.tools.push({ type:'file_search' });
+ }
+ payload.tool_resources = Object.assign({}, payload.tool_resources, {
+ file_search: { vector_store_ids: [ ensured.vectorStoreId ] }
+ });
+ // Keep input as text only (if any)
+ if (finalInput && String(finalInput).trim().length){
+ payload.input = finalInput;
+ }
+ }catch(e){
+ console.warn('[chatgpt] file_search setup failed; falling back to direct parts', e && e.message || e);
+ // Fall back to direct parts
+ const parts = [];
+ if (finalInput && String(finalInput).trim().length){
+ parts.push({ type:'input_text', text: String(finalInput) });
+ }
+ data.openai_file_ids.slice(0,10).forEach(function(id){
+ parts.push({ type:'input_file', file_id: id });
+ });
+ payload.input = [ { role:'user', content: parts } ];
+ }
+ } else {
+ // Direct context stuffing: input parts
+ const parts = [];
+ if (finalInput && String(finalInput).trim().length){
+ parts.push({ type:'input_text', text: String(finalInput) });
+ }
+ data.openai_file_ids.slice(0,10).forEach(function(id){
+ parts.push({ type:'input_file', file_id: id });
+ });
+ payload.input = [ { role:'user', content: parts } ];
+ }
+ }
+ const response = await safeResponsesCreate(openai, payload);
 
 
  // const payload = createResponsePayload(prompt, params, instructions, data.user_prompt);
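For reference, these are the two Responses payload shapes the branch above produces; the model name, text, and ids are illustrative:

```js
// attachments_mode: 'direct' — file ids are inlined as input_file parts
const directPayload = {
  model: 'gpt-5-mini',
  input: [{
    role: 'user',
    content: [
      { type: 'input_text', text: 'Analyze the attached report.' },
      { type: 'input_file', file_id: 'file_abc123' }
    ]
  }]
};

// attachments_mode: 'file_search' — input stays text; files are served
// from a per-run vector store attached via tool_resources
const fileSearchPayload = {
  model: 'gpt-5-mini',
  input: 'Analyze the attached report.',
  tools: [{ type: 'file_search' }],
  tool_resources: { file_search: { vector_store_ids: ['vs_abc123'] } }
};
```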
@@ -927,6 +1074,14 @@ this.executeJOEAiPrompt = async function(data, req, res) {
  user: req && req.User,
  ai_assistant_id: data.ai_assistant_id
  });
+ try{
+ if (saved && Array.isArray(data.openai_file_ids) && data.openai_file_ids.length){
+ saved.used_openai_file_ids = data.openai_file_ids.slice(0,10);
+ await new Promise(function(resolve){
+ JOE.Storage.save(saved,'ai_response',function(){ resolve(); },{ user: req && req.User, history:false });
+ });
+ }
+ }catch(_e){}
 
  return { success: true, ai_response_id: saved._id,response:response.output_text || "",usage:response.usage };
  } catch (e) {
@@ -1331,6 +1486,7 @@ this.executeJOEAiPrompt = async function(data, req, res) {
  widgetHistory: this.widgetHistory,
  widgetMessage: this.widgetMessage,
  autofill: this.autofill,
+ filesRetryFromUrl: this.filesRetryFromUrl
  };
  this.protected = [,'testPrompt'];
  return self;
@@ -29,6 +29,7 @@ var schema = {
  { name:'instructions_format', type:'string' },
  { name:'instructions', type:'string' },
  { name:'user_prompt', type:'string' },
+ { name:'attachments_mode', type:'string', display:'Attachments Mode', enumValues:['direct','file_search'], default:'direct' },
  { name:'status', type:'string', isReference:true, targetSchema:'status' },
  { name:'tags', type:'string', isArray:true, isReference:true, targetSchema:'tag' },
  { name:'ai_model', type:'string' },
@@ -162,6 +163,7 @@ var schema = {
  {section_end:'workflow'},
  {section_start:'openAi',collapsed:true},
  'ai_model',
+ {name:'attachments_mode', type:'select', display:'Attachments Mode', values:['direct','file_search'], default:'direct', comment:'direct = include files in prompt as context; file_search = index files and retrieve relevant chunks'},
  {name:'temperature', type:'number',display:'Temperature', default:.7, step:"0.1",comment:'0-1, 0 is deterministic, 1 is random'},
  //{name:'max_tokens', type:'number',display:'Max Tokens',comment:'max tokens to return',default:4096},
  {section_end:'openAi'},
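A sketch of an `ai_prompt` record using the new field; the field names come from the schema above, and the values are placeholders:

```js
// Sketch: a prompt that opts into retrieval-style attachments.
const examplePrompt = {
  _id: 'ai_prompt_001',            // placeholder id
  name: 'Contract review',
  instructions: 'Review the attached contract for risky clauses.',
  attachments_mode: 'file_search', // 'direct' would inline files as input parts
  ai_model: 'gpt-5-mini',
  temperature: 0.7
};
```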
@@ -34,6 +34,7 @@ var schema = {
  { name:'response_keys', type:'string', isArray:true },
  { name:'response_id', type:'string' },
  { name:'usage', type:'object' },
+ { name:'used_openai_file_ids', type:'string', isArray:true },
  // { name:'creator_type', type:'string', enumValues:['user','ai_assistant'] },
  // { name:'creator_id', type:'string' },
  { name:'tags', type:'string', isArray:true, isReference:true, targetSchema:'tag' },
@@ -66,8 +67,9 @@ var schema = {
  listWindowTitle: 'Ai Responses'
  },
  subsets: function(a,b,c){
+ var subsets = [];
  // Base subsets: tag-based and by ai_prompt
- var subsets = _joe.Filter.Options.tags({group:true,collapsed:false}).concat(
+ subsets = subsets.concat(
  _joe.getDataset('ai_prompt').map(function(prompt){
  var color = prompt.status && $J.get(prompt.status,'status').color;
  return {
@@ -103,17 +105,9 @@ var schema = {
  stripecolor: _joe.Colors.ai
  });
  }
- }catch(_e){ /* best-effort only */ }
- // subsets.push(
- // {
- // name: 'True',
- // id: 'isTrue',
- // filter: function(air,index, arra){
- // return air.response_json && air.response_json.proposed_thoughts && air.response_json.proposed_thoughts.length > 0;
- // },
- // stripecolor: 'gold'
- // }
- // );
+ }catch(_e){ }
+ //add status subsets
+ subsets = subsets.concat(_joe.Filter.Options.status({group:'status'}));
  return subsets;
  },
  stripeColor:function(air){
@@ -122,6 +116,15 @@ var schema = {
  }
  return null;
  },
+ bgColor:function(air){//status color
+ if(air.status){
+ var status = _joe.getDataItem(air.status,'status');
+ if(status && status.color){
+ return {color:status.color,title:status.name};
+ }
+ }
+ return null;
+ },
  filters:function(){
  var filters = [];
  // Tag filters
@@ -247,6 +250,12 @@ var schema = {
  },
  {name:'response_keys', type:'text', locked:true,display:'Response Keys'},
  {name:'response_id', type:'text', display:'openAI response ID',locked:true},
+ {name:'used_openai_file_ids', type:'content', display:'OpenAI File IDs (Used)', locked:true, run:function(air){
+ var ids = air && air.used_openai_file_ids;
+ if(!ids || !ids.length){ return '<joe-subtext>None</joe-subtext>'; }
+ return ids.map(function(id){ return '<joe-subtext>'+id+'</joe-subtext>'; }).join('');
+ }},
+
  {section_end:'response'},
  {sidebar_start:'right', collapsed:false},
  // {section_start:'creator'},
@@ -67,7 +67,15 @@ var schema = {
  return false;
 
  },
- subsets:function(){
+ subsets:function(){//active, inactive, terminal, default
+ return [
+ {name:'active',filter:{active:true}},
+ {name:'inactive',filter:{inactive:true}},
+ {name:'terminal',filter:{terminal:true}},
+ {name:'default',filter:{default:true}}
+ ];
+ },
+ filters:function(){
  var schemas = [];
  var subs = [];
  _joe.current.list.map(function(status){
@@ -83,7 +91,9 @@ var schema = {
  'name',
  'info',
  'description',
- 'color:color',
+ {name:'color',type:'color',comment:'hex or html color code',reloadable:true,
+ ai:{prompt:'generate a hex color code for this status. Prioritize the name and info fields for context. Override with a color name if one is set.'}
+ },
  { name: "index", type: "number", display: "Ordering Index", comment: "Optional manual ordering index for lists and workflows. Lower values appear first.", width:'50%' },
 
  { name: "code", type: "text", display: "System Code", comment: "Machine-usable, human-readable identifier for database/API use. Use lowercase with underscores.", width:'50%' },
@@ -178,12 +178,16 @@ var task = function(){return{
  fields:function(){
  var fields = [
  {sidebar_start:'left'},
- {section_start:'JAI',display:'JOE Ai'},
+ {section_start:'ai_chat', anchor:'AiChat'},
  "objectChat",
  "listConversations",
+ {section_end:'ai_chat'},
+ {section_start:'ai_thoughts', anchor:'AiThoughts'},
  'proposeThought',
+ {section_end:'ai_thoughts'},
+ {section_start:'ai_responses', anchor:'AiResponses',collapsed:true},
  'ai_responses',
- {section_end:'JAI'},
+ {section_end:'ai_responses'},
  {sidebar_end:'left'},
  {section_start:'overview'},
  'name',
@@ -250,7 +254,9 @@ var task = function(){return{
  {section_start:'acceptance',collapsed:function(item){
  return !(item.scceptance_criteria && item.scceptance_criteria.length);
  }},
- {name:'scceptance_criteria',display:'acceptance criteria',type:'objectList',label:false,
+ {name:'acceptance_criteria',display:'acceptance criteria',type:'objectList',label:false,value:function(item){
+ return item.scceptance_criteria || [];
+ },
  template:function(obj,subobj){
  var done = (subobj.sub_complete)?'joe-strike':'';
  return '<joe-title class="'+done+'">${criteria}</joe-title>' ;