json-object-editor 0.10.633 → 0.10.638

@@ -0,0 +1,330 @@
1
+ ## JOE AI Overview
2
+
3
+ This document gives Custom GPTs, Assistants, and AI agents a single mental model for how AI works inside **Json Object Editor (JOE)**: which schemas are involved, how the `chatgpt` plugin behaves, how the `joe-ai` UI connects to OpenAI, and how **MCP** exposes JOE as tools.
4
+
5
+ ---
6
+
7
+ ## 1. High‑level AI Architecture
8
+
9
+ - **Core idea**: JOE is a schema‑driven data system. AI features are layered on top to:
10
+ - Generate or transform field values (AI autofill)
11
+ - Run persistent assistants and chats (in‑app + widgets)
12
+ - Expose JOE data and writes via MCP tools for external agents
13
+ - **Main components**:
14
+ - **AI schemas** (`ai_assistant`, `ai_prompt`, `ai_tool`, `ai_response`, `ai_conversation`, `ai_widget_conversation`)
15
+ - **Server plugins**:
16
+ - `chatgpt.js` – **central / preferred** entry point for OpenAI (Responses API, tool calls, MCP bridge). New integrations should target this plugin.
17
+ - `chatgpt-assistants.js` – legacy Assistants‑API plugin, still used by existing in‑app chat flows (`ai_conversation` / `<joe-ai-chatbox>`) but not recommended for new work.
18
+ - `chatgpt-tools.js` / `chatgpt-responses.js` – helpers for tools + response shaping
19
+ - **Client / UI**:
20
+ - `joe-ai.js` – in‑app AI chat UI (web components) wired to `ai_conversation` + assistants
21
+ - Standard JOE editor UI with **AI autofill** buttons on fields, driven by `ai_prompt`
22
+ - **MCP**:
23
+ - `/.well-known/mcp/manifest.json` – describes tools
24
+ - `/mcp` – JSON‑RPC endpoint whose tools route into `JOE.Schemas`, `JOE.Storage`, `JOE.Cache`
25
+
26
+ When in doubt: **schemas define structure**, **plugins call OpenAI + MCP**, **MCP exposes tools**, and **UI (`joe-ai`) is just a thin chat/controls layer on top.
27
+
28
+ ---
29
+
30
+ ## 2. AI Schemas (Mental Model)
31
+
32
+ JOE uses a small set of AI‑specific schemas. A good agent should know what each represents and how they relate.
33
+
34
+ ### 2.1 `ai_assistant`
35
+
36
+ - **What it is**: Configuration record for a single AI assistant linked to OpenAI.
37
+ - **Key fields** (from schema + summaries):
38
+ - Identity & model: `name`, `info`, `ai_model`, `assistant_id`, `openai_assistant_version`
39
+ - Capabilities: `file_search_enabled`, `code_interpreter_enabled`
40
+ - Prompting: `instructions`, `assistant_thinking_text`, `assistant_color`
41
+ - Tools: `tools` (JSON OpenAI tools array – often imported from MCP), `datasets`, `tags`, `status`
42
+ - Sync meta: `last_synced`, timestamps (`created`, `joeUpdated`)
43
+ - **How it’s used**:
44
+ - One `ai_assistant` usually maps 1:1 to an OpenAI Assistant.
45
+ - A default assistant can be set via `setting.DEFAULT_AI_ASSISTANT`; `joe-ai` falls back to it when no assistant is specified.
46
+ - The schema exposes helper methods such as:
47
+ - `syncAssistantToOpenAI` – sync this record to OpenAI via `chatgpt-assistants.js`.
48
+ - `loadMCPTolsIntoAssistant` – pull tools from the MCP manifest into `assistant.tools`.
49
+ - `setAsDefaultAssistant` – update the `DEFAULT_AI_ASSISTANT` setting.
50
+
51
+ **Agent note**: When reasoning about which assistant is active in a chat or widget, look for:
52
+ - `ai_conversation.assistant` or `ai_widget_conversation.assistant`
53
+ - Or the instance‑level default via the `setting` schema.
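To make the field list above concrete, here is a minimal sketch of what an `ai_assistant` record might look like. The field names come from the schema summary above; every value (and the exact shape of `tools`) is illustrative, not taken from a real instance.

```javascript
// Hypothetical ai_assistant record – values are illustrative only.
const assistant = {
  itemtype: 'ai_assistant',
  name: 'Project Planner',
  info: 'Plans and summarizes project objects',
  ai_model: 'gpt-4.1',
  assistant_id: 'asst_abc123',       // the OpenAI Assistant this record maps to (usually 1:1)
  instructions: 'You are the planning assistant for this JOE instance...',
  file_search_enabled: false,
  code_interpreter_enabled: false,
  tools: [],                         // OpenAI tools array, often imported from the MCP manifest
  assistant_thinking_text: 'Thinking...',
  assistant_color: '#3fa7d6',
  status: 'active',
  tags: ['planning'],
  datasets: [],
  last_synced: '2025-01-01T00:00:00Z'
};
```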
54
+
55
+ ### 2.2 `ai_prompt`
56
+
57
+ - **What it is**: Reusable AI **prompt configuration** – how to call the `chatgpt` plugin or Responses API.
58
+ - **Key fields**:
59
+ - Identity: `_id`, `name`, `info`, `itemtype:'ai_prompt'`
60
+ - Integration with plugin:
61
+ - `prompt_method` – name of the server method on `chatgpt.js` to call (e.g. `executeJOEAiPrompt`).
62
+ - `content_items` – `objectList` of `{ itemtype, reference }` describing which JOE objects to send and under what parameter name.
63
+ - Instructions:
64
+ - `functions` – helper code snippet (Node‑style `module.exports = async function(...) { ... }`) used to shape instructions/input.
65
+ - `instructions_format` – format hint; `instructions` – main system‑level text.
66
+ - `user_prompt` – optional user‑facing prompt template or per‑call input description.
67
+ - OpenAI tuning: `ai_model`, `temperature`
68
+ - Meta: `status`, `tags`, `datasets`, timestamps
69
+ - **Methods / behavior**:
70
+ - `methods.buildURL(prompt, items)` – builds URLs like:
71
+ - `/API/plugin/chatgpt/<prompt_method>?ai_prompt=<prompt._id>&<reference>=<item._id>...`
72
+ - `methods.listExamples(prompt)` – picks sample content objects for exploration/testing.
73
+ - **Typical flows**:
74
+ - **AI Autofill (field‑level)**:
75
+ - Fields in other schemas include an `ai` config (documented in `CHANGELOG` + README).
76
+ - The UI calls `/API/plugin/chatgpt/autofill`, which:
77
+ - Uses `ai_prompt` definitions and schema metadata.
78
+ - Sends the right JOE objects + instructions to OpenAI.
79
+ - Returns structured patch JSON to update fields.
80
+ - **Explicit prompts**:
81
+ - UI actions or external tools call `/API/plugin/chatgpt/<prompt_method>?ai_prompt=<id>&...` (see the sketch after this list).
82
+ - The plugin locates the prompt, merges helper `functions`, and runs OpenAI.
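As a rough sketch of the explicit-prompt flow: a caller builds the URL from the prompt's `prompt_method`, its `_id`, and the `content_items` references, then calls the `chatgpt` plugin. Only the `ai_prompt` and `<reference>` query parameters appear in the `buildURL` pattern above; the helper name, error handling, and the assumption that the response body is JSON are illustrative.

```javascript
// Hypothetical client-side helper mirroring methods.buildURL:
//   /API/plugin/chatgpt/<prompt_method>?ai_prompt=<prompt._id>&<reference>=<item._id>
async function runJoePrompt(promptMethod, promptId, references) {
  const params = new URLSearchParams({ ai_prompt: promptId });
  // references is keyed by content_items[].reference, e.g. { project: 'proj_456' }
  for (const [reference, itemId] of Object.entries(references)) {
    params.set(reference, itemId);
  }
  const res = await fetch(`/API/plugin/chatgpt/${promptMethod}?${params}`, {
    credentials: 'include' // same auth as other JOE APIs
  });
  if (!res.ok) throw new Error(`ai_prompt call failed: ${res.status}`);
  return res.json(); // typically mirrored into an ai_response record server-side
}

// Usage (illustrative ids):
// await runJoePrompt('executeJOEAiPrompt', 'prompt_123', { project: 'proj_456' });
```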
83
+
84
+ **Agent note**: Treat `ai_prompt` as the **source of truth** for how to talk to OpenAI for a particular workflow (summaries, planning, refactors, etc.). Never invent `prompt_method` names; reuse existing ones or ask a human to add a new prompt record.
85
+
86
+ ### 2.3 `ai_response`
87
+
88
+ - **What it is**: Persistent record of a single AI response, for **auditing**, **reuse**, and **merge‑back** into JOE.
89
+ - **Key fields**:
90
+ - Links: `ai_prompt` (reference), `referenced_objects` (array of JOE `_id`s used), `tags`
91
+ - Request context: `user_prompt`, `prompt_method`, `model_used`, `response_type`
92
+ - Response data:
93
+ - `response` – raw string body (often JSON or text).
94
+ - `response_keys` – array of keys returned (used for patch/merge).
95
+ - `response_id` – OpenAI response ID.
96
+ - `usage` – token usage object `{ input_tokens, output_tokens, total_tokens }`.
97
+ - **Methods**:
98
+ - `compareResponseToObject(response_id, object_id, do_alert)` – validate response JSON vs object fields, optionally trigger a merge.
99
+ - `updateObjectFromResponse(response_id, object_id, fields)` – apply response values into a JOE object and update tags.
100
+ - `listResponses(obj)` – list AI responses referencing a given object.
101
+
102
+ **Agent note**: When you are updating objects via tools, favor:
103
+ - Using `ai_response` records to compare/merge changes, especially if the response contains multiple fields.
104
+ - Respecting existing merge flows (`compareResponseToObject` / `updateObjectFromResponse`) rather than overwriting arbitrary fields.
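For orientation, a stored `ai_response` might look like the sketch below, with `response_keys` driving what the compare/merge helpers are allowed to patch. The values and the comments about the helpers' behavior are assumptions based on the field and method names above.

```javascript
// Hypothetical ai_response record – values are illustrative only.
const aiResponse = {
  itemtype: 'ai_response',
  ai_prompt: 'prompt_123',                    // reference to the ai_prompt that produced it
  referenced_objects: ['proj_456'],           // JOE _ids sent as context
  prompt_method: 'executeJOEAiPrompt',
  model_used: 'gpt-4.1',
  response_type: 'json',
  response: '{"name":"Q3 Plan","info":"Draft plan for Q3"}', // raw string body
  response_keys: ['name', 'info'],            // keys eligible for patch/merge
  response_id: 'resp_abc123',                 // OpenAI response id
  usage: { input_tokens: 1200, output_tokens: 300, total_tokens: 1500 }
};

// Conceptually (assumed behavior, not the actual implementation):
//   compareResponseToObject(aiResponse._id, 'proj_456')            -> diff per response_key
//   updateObjectFromResponse(aiResponse._id, 'proj_456', ['info']) -> apply only the chosen keys
```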
105
+
106
+ ### 2.4 `ai_tool`
107
+
108
+ - **What it is**: Definition of a reusable **function‑calling tool** that OpenAI can call and JOE can execute server‑side.
109
+ - **Key fields**:
110
+ - Identity: `name`, `info`, `description`, `tool_id`
111
+ - OpenAI schema: `tool_properties` (JSON, typically `{ type:'function', function:{ name, description, parameters } }`; see the example after this list)
112
+ - Execution: `server_code` – Node.js async code to run when the tool is invoked.
113
+ - Meta: `datasets`, `tags`, timestamps.
114
+ - **Methods**:
115
+ - `updateToolProperties(propObj)` – helper to keep `tool_properties.function.name` in sync with `tool_id`.
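For illustration, an `ai_tool` record might pair an OpenAI function schema with a server-side handler roughly like this. The parameter list, the `(args, JOE)` signature assumed for `server_code`, and the `JOE.Cache.get` helper used inside it are assumptions for illustration, not the plugin's documented contract.

```javascript
// Hypothetical ai_tool record – shape follows the fields above; details are illustrative.
const tool = {
  itemtype: 'ai_tool',
  name: 'Count Objects',
  tool_id: 'count_objects',
  description: 'Counts JOE objects of a given itemtype',
  tool_properties: {
    type: 'function',
    function: {
      name: 'count_objects', // kept in sync with tool_id via updateToolProperties
      description: 'Counts JOE objects of a given itemtype',
      parameters: {
        type: 'object',
        properties: { itemtype: { type: 'string' } },
        required: ['itemtype']
      }
    }
  },
  // Node-style async handler executed server-side when OpenAI calls the tool.
  // The (args, JOE) signature and JOE.Cache.get are hypothetical.
  server_code: `module.exports = async function(args, JOE) {
    const items = await JOE.Cache.get(args.itemtype); // hypothetical helper
    return { count: Array.isArray(items) ? items.length : 0 };
  };`
};
```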
116
+
117
+ **Agent note**: For tools exposed via MCP, the canonical shape is actually in the **MCP manifest**; `ai_tool` is more for:
118
+ - High‑level catalog and advanced OpenAI tools, not the core MCP schema/search/save tools.
119
+
120
+ ### 2.5 `ai_conversation`
121
+
122
+ - **What it is**: Metadata record for **in‑app AI conversations** (staff/admin chat in JOE).
123
+ - **Key fields**:
124
+ - Participants: `user` (JOE user), `assistant` (`ai_assistant`), `members` (external participants)
125
+ - OpenAI thread: `thread_id`
126
+ - Summary: `summary` (WYSIWYG), `status`, `tags`
127
+ - Meta: `_id`, `name`, `info`, timestamps
128
+ - **Behavior**:
129
+ - Messages are **not stored** in JOE; they’re fetched from OpenAI at runtime (via `chatgpt-assistants.js`).
130
+ - Schema methods + UI:
131
+ - `methods.chatSpawner(object_id)` – calls `_joe.Ai.spawnContextualChat(object_id)`.
132
+ - List view shows created time, user cubes, and context objects used for that chat.
133
+ - `joe-ai.js` uses `ai_conversation` as the anchor object for chat history + context.
134
+
135
+ **Agent note**: To understand a chat history:
136
+ - Read `ai_conversation` for metadata and context_objects.
137
+ - Use **MCP tools** or existing chat APIs to fetch live thread content; don’t invent local persistence.
138
+
139
+ ### 2.6 `ai_widget_conversation`
140
+
141
+ - **What it is**: Lightweight conversation record for **embeddable AI widgets** (external sites, test pages).
142
+ - **Key fields**:
143
+ - Participant & assistant: `user`, `assistant`, `user_name`, `user_color`, `assistant_color`
144
+ - OpenAI info: `model`, `assistant_id`, `system` (effective instructions)
145
+ - Conversation: `messages` (compact list, typically JSON array of `{ role, content, created_at }`), `last_message_at`
146
+ - Source: `source` (widget origin), `tags`, timestamps
147
+ - **Behavior**:
148
+ - Accessed/updated via `chatgpt-assistants` widget endpoints:
149
+ - `widgetStart`, `widgetMessage`, `widgetHistory`, etc. (sketched after this list)
150
+ - Schema exposes **subsets** and **filters** for grouping by `source`, `user`, and `assistant`.
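A minimal sketch of how an embedded widget might hit these endpoints. The `/API/plugin/chatgpt-assistants/...` route prefix and the payload field names are assumptions extrapolated from the `/API/plugin/<plugin>/<method>` convention and the field list above.

```javascript
// Hypothetical widget client – routes and payload shapes are assumptions.
const WIDGET_BASE = '/API/plugin/chatgpt-assistants';

async function widgetStart(assistantId, userName) {
  const res = await fetch(`${WIDGET_BASE}/widgetStart`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ assistant_id: assistantId, user_name: userName, source: location.origin })
  });
  return res.json(); // assumed to include the ai_widget_conversation _id
}

async function widgetMessage(conversationId, content) {
  const res = await fetch(`${WIDGET_BASE}/widgetMessage`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ conversation: conversationId, message: content })
  });
  return res.json(); // assistant reply; server appends it to messages
}
```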
151
+
152
+ **Agent note**: Treat `ai_widget_conversation` as the external/chat‑widget equivalent of `ai_conversation`. The fields are tuned for widgets and public UI, but the mental model is the same: small record pointing to assistant + messages.
153
+
154
+ ---
155
+
156
+ ## 3. `chatgpt` Plugin and Related Plugins
157
+
158
+ ### 3.1 `server/plugins/chatgpt.js`
159
+
160
+ This is the **core server plugin** that wires JOE to OpenAI and MCP.
161
+
162
+ - **Responsibilities**:
163
+ - Read the OpenAI API key from `setting.OPENAI_API_KEY`.
164
+ - Provide shared helpers for building OpenAI clients (`OpenAI` SDK).
165
+ - Implement orchestration for:
166
+ - **Responses API** calls (models like `gpt-4.1`, `gpt-4.1-mini`, `gpt-5.1`).
167
+ - **Tool calling** and follow‑up calls via MCP tools.
168
+ - Bridge between JOE and MCP with helpers like:
169
+ - `callMCPTool(toolName, params, ctx)` – call MCP tools **in‑process** (without HTTP).
170
+ - Response parsing helpers such as `extractToolCalls(response)`.
171
+ - Payload shrinking helpers like `shrinkUnderstandObjectMessagesForTokens(...)`.
172
+ - Expose specific routes under `/API/plugin/chatgpt/...`, including:
173
+ - Field autofill (`autofill`) driven by schema‑level `ai` configs.
174
+ - Prompt execution handlers for `ai_prompt.prompt_method` values.
175
+ - **Key patterns**:
176
+ - **Slimming payloads**: When embedding object graphs into messages (e.g. `understandObject` results), `chatgpt.js` converts them into a slim representation to avoid token limits.
177
+ - **Error handling**:
178
+ - `isTokenLimitError(err)` identifies token/TPM error shapes for better logs and fallbacks.
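A compressed sketch of the tool-calling loop described above, written around the helper names from this section (`extractToolCalls`, `callMCPTool`). The helper signatures, the Responses API payload shapes, and the loop structure are assumptions, not the plugin's actual code.

```javascript
// Sketch only: assumed signatures for extractToolCalls / callMCPTool and the OpenAI client.
async function runWithMcpTools(openai, { model, input, tools }, ctx) {
  let response = await openai.responses.create({ model, input, tools });

  let toolCalls = extractToolCalls(response); // assumed shape: [{ id, name, arguments }]
  while (toolCalls.length) {
    const outputs = [];
    for (const call of toolCalls) {
      // Runs the MCP tool in-process (no HTTP round-trip), as described above.
      const result = await callMCPTool(call.name, call.arguments, ctx);
      outputs.push({
        type: 'function_call_output',
        call_id: call.id,
        output: JSON.stringify(result)
      });
    }
    // Follow-up call feeding tool results back to the model.
    response = await openai.responses.create({
      model,
      previous_response_id: response.id,
      input: outputs,
      tools
    });
    toolCalls = extractToolCalls(response);
  }
  return response;
}
```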
179
+
180
+ **Agent note**: Any time you see endpoints in docs like `/API/plugin/chatgpt/<method>`, they resolve into functions in `chatgpt.js` or its siblings. Follow those conventions instead of inventing new paths.
181
+
182
+ ### 3.2 `chatgpt-assistants.js`, `chatgpt-responses.js`, `chatgpt-tools.js`
183
+
184
+ - **`chatgpt-assistants.js`**:
185
+ - Manages OpenAI Assistants and threads using the **older Assistants API**.
186
+ - Endpoints such as:
187
+ - `syncAssistantToOpenAI` (used from `ai_assistant.methods`).
188
+ - `getThreadMessages`, `runAssistantChat`, widget endpoints like `widgetStart`, `widgetMessage`, `widgetHistory`.
189
+ - Used by `joe-ai.js` and widget UIs to pull/push messages for a given conversation.
190
+ - **Status**: legacy bridge for existing flows; **new integrations should use the Responses‑based `chatgpt` plugin instead**, matching the README.
191
+
192
+ - **`chatgpt-responses.js`**:
193
+ - Helpers for working with Responses API results:
194
+ - Extracting text and tool calls.
195
+ - Normalizing outputs for storage in `ai_response`.
196
+ - Often called from `chatgpt.js` orchestration functions.
197
+
198
+ - **`chatgpt-tools.js`**:
199
+ - Glue between AI tools and JOE:
200
+ - Uses `ai_tool` definitions.
201
+ - Executes server‑side code blocks for tools when OpenAI calls them.
202
+ - Can also support MCP‑style tools exported via the manifest.
203
+
204
+ **Agent note**: You typically don’t call these plugins directly from a GPT. Instead, you:
205
+ - Use documented **APIs** (`/API/plugin/chatgpt/*`, widget endpoints) and/or
206
+ - Use **MCP tools** for reads/writes.
207
+
208
+ ---
209
+
210
+ ## 4. `joe-ai.js` – In‑App AI UI
211
+
212
+ The `js/joe-ai.js` file implements JOE’s in‑app AI user interface using custom elements like `<joe-ai-chatbox>` and other helpers under an `Ai` namespace.
213
+
214
+ - **Key responsibilities**:
215
+ - Manage open chat boxes and default assistant selection:
216
+ - `Ai._openChats`, `Ai.default_ai`, `Ai.getDefaultAssistant()`.
217
+ - Render **chat UI** bound to an `ai_conversation`:
218
+ - Loads `ai_conversation` via `/API/object/ai_conversation/_id/<id>`.
219
+ - Uses `chatgpt-assistants` endpoints (e.g. `getThreadMessages`) to fetch messages.
220
+ - Shows assistant/user labels, context objects, and a message history.
221
+ - Provide ergonomic chat behaviors:
222
+ - Markdown rendering via `renderMarkdownSafe` (prefers `marked` + `DOMPurify`).
223
+ - Greeting messages based on `user` and `context_objects`.
224
+ - Keyboard handling (Enter to send).
225
+ - **High‑level flow** for in‑app chat:
226
+ 1. User opens an `ai_conversation` or clicks “Continue Conversation” from a context object.
227
+ 2. `joe-ai.js` spawns a `<joe-ai-chatbox>` bound to that `ai_conversation`.
228
+ 3. The chatbox:
229
+ - Fetches conversation + thread from the server.
230
+ - Uses `chatgpt-assistants` to send/receive messages to/from OpenAI.
231
+ - Renders messages and any embedded JOE object markers into the UI.
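A rough client-side sketch of steps 2–3. The `/API/object/...` path is the one quoted above; the `getThreadMessages` route prefix and its query parameter are assumptions based on the plugin and endpoint names in this document.

```javascript
// Hypothetical chatbox bootstrap – endpoint details are assumptions where noted.
async function loadConversation(conversationId) {
  // 1. Load the ai_conversation metadata record (path shown above).
  const convoRes = await fetch(`/API/object/ai_conversation/_id/${conversationId}`);
  const convo = await convoRes.json();

  // 2. Fetch live messages from OpenAI via chatgpt-assistants
  //    (endpoint name from this doc; route prefix and parameter name assumed).
  const msgRes = await fetch(
    `/API/plugin/chatgpt-assistants/getThreadMessages?thread_id=${convo.thread_id}`
  );
  const messages = await msgRes.json();

  return { convo, messages }; // the <joe-ai-chatbox> element renders these
}
```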
232
+
233
+ **Agent note**: When describing UI behavior or helping with debugging, remember that `joe-ai.js` is **UI only**; persistent data lives in:
234
+ - `ai_conversation` (metadata) + OpenAI threads
235
+ - `ai_assistant` (config)
236
+ - `ai_widget_conversation` for external widgets
237
+
238
+ ---
239
+
240
+ ## 5. MCP: How JOE Exposes Tools to Agents
241
+
242
+ MCP (Model Context Protocol) is how external agents (including Custom GPTs) call into JOE as a **tool provider**.
243
+
244
+ ### 5.1 Endpoints and Manifest
245
+
246
+ - **Endpoints**:
247
+ - `GET /.well-known/mcp/manifest.json` – public manifest with:
248
+ - Instance info `{ joe: { name, version, hostname } }`
249
+ - Tool list (name, description, params schema)
250
+ - Privacy/terms URLs (also wired into `/privacy` and `/terms` routes).
251
+ - `POST /mcp` – auth‑protected JSON‑RPC 2.0 endpoint for **tool calls**.
252
+ - Same auth rules as other APIs; if users exist, requires cookie or basic auth.
253
+ - **Implementation note**:
254
+ - The concrete MCP tool implementations live in `server/modules/MCP.js`, which is wired up by `server/init.js` and exported via the manifest and `/mcp` endpoint.
255
+ - **Tools** (from README + CHANGELOG):
256
+ - `listSchemas(name?)`, `getSchema(name)`
257
+ - `getObject(_id, itemtype?)`
258
+ - `search` (unified search across cache/storage; supports `source`, `ids`, `flatten`, `depth`, `countOnly`)
259
+ - `fuzzySearch` – typo‑tolerant search across weighted fields
260
+ - `saveObject({ object })`
261
+ - `saveObjects({ objects, stopOnError?, concurrency? })`
262
+ - `hydrate` – returns core fields, schemas, statuses, tags (for agents to bootstrap context)
263
+
264
+ All tools map directly into JOE’s APIs: `JOE.Schemas`, `JOE.Storage`, `JOE.Cache`, and related helpers. Sensitive fields are sanitized before returning data.
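As an example of the wire format, a `search` call to `/mcp` might look like the sketch below. The `tools/call` envelope follows the general MCP JSON-RPC convention (JOE-specific variations are possible), and the `query` argument is an assumption; only the parameters listed above (`source`, `ids`, `flatten`, `depth`, `countOnly`) are documented.

```javascript
// Hypothetical JSON-RPC 2.0 request to the /mcp endpoint (auth omitted).
const rpcRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',        // standard MCP envelope; assumed for JOE
  params: {
    name: 'search',
    arguments: {
      itemtype: 'project',     // explicit itemtype, per the best practices below
      query: 'roadmap',        // assumed free-text argument
      source: 'cache',
      countOnly: false
    }
  }
};

const result = await fetch('/mcp', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include',      // same auth rules as other JOE APIs
  body: JSON.stringify(rpcRequest)
}).then(r => r.json());
```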
265
+
266
+ ### 5.2 Agent Best Practices with MCP
267
+
268
+ - **Schema‑first**: Always:
269
+ 1. Discover schemas via `listSchemas`/`getSchema`.
270
+ 2. Use `hydrate` on startup to cache schemas/statuses/tags.
271
+ 3. Use `search` / `fuzzySearch` with explicit `itemtype` and filters rather than arbitrary text mining.
272
+ - **Reads vs writes**:
273
+ - Prefer **cache** for read operations (`search` default) unless you explicitly need DB‑level freshness (set `source:"storage"`).
274
+ - For writes, use:
275
+ - `saveObject` for single‑item updates.
276
+ - `saveObjects` for batched updates with clear error handling.
277
+ - **Performance**:
278
+ - Use `countOnly` before large reads to avoid pulling massive datasets.
279
+ - When embedding objects into prompts, prefer **slimmed** representations (id, itemtype, name, info).
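On the write side, a batched update through `saveObjects` might look like the following. `callTool(name, args)` is a hypothetical stand-in for whatever MCP client the agent host provides, and `staleProjects` is assumed to come from a prior `search` call.

```javascript
// Hypothetical batched write via the saveObjects MCP tool.
// callTool(name, args) stands in for the host's MCP client; staleProjects from a prior search.
const updates = staleProjects.map(p => ({
  _id: p._id,
  itemtype: 'project',
  status: 'archived',
  tags: [...(p.tags || []), 'ai-updated'] // tag AI-generated changes clearly
}));

const saveResult = await callTool('saveObjects', {
  objects: updates,
  stopOnError: true, // fail fast instead of half-applying the batch
  concurrency: 2
});
```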
280
+
281
+ **Agent note**: The MCP tools are your primary way to **inspect and modify JOE data** from a Custom GPT or other agent host. Direct HTTP calls to JOE’s internal `/API/*` routes should be treated as implementation detail unless explicitly allowed in your instructions.
282
+
283
+ ---
284
+
285
+ ## 6. Recommended RAG / Context Files for JOE AI Agents
286
+
287
+ When configuring a JOE‑aware Custom GPT or dev agent, include the following docs in its RAG/context:
288
+
289
+ - **Core architecture & instance info**
290
+ - `README.md` – especially:
291
+ - “Architecture & Mental Model (Server)”
292
+ - MCP overview + test instructions
293
+ - `CHANGELOG.md` – AI/MCP‑related entries (0.10.43x+, 0.10.50x+, 0.10.62x+), which already summarize important AI behavior.
294
+ - **Agent / instruction docs (existing)**
295
+ - `docs/JOE_Master_Knowledge_Export.md` – master context dump for schemas and concepts.
296
+ - `docs/joe_agent_spec_v_2.2.md` – agent behavior/specification.
297
+ - `docs/joe_agent_custom_gpt_instructions_v_3.md` – current Custom GPT / MCP prompt instructions.
298
+ - `docs/schema_summary_guidelines.md` – how schema summaries are constructed and how agents should interpret/use them.
299
+ - **This file**
300
+ - `docs/JOE_AI_Overview.md` – **(you are here)** high‑level map of AI schemas, plugins, UI, and MCP.
301
+
302
+ Optional / advanced (for deeper RAG or power users):
303
+ - Selected excerpts or exports of **schema summaries** for AI‑related schemas:
304
+ - `ai_assistant`, `ai_prompt`, `ai_tool`, `ai_response`, `ai_conversation`, `ai_widget_conversation`
305
+ - Any **project‑specific** AI workflow docs you maintain (e.g. playbooks for content generation, planning workflows, merge policies).
306
+
307
+ ---
308
+
309
+ ## 7. What’s Still Missing / Future Docs to Consider
310
+
311
+ For a fully self‑sufficient agent, the following additional docs can be helpful:
312
+
313
+ - **AI Workflows Cookbook**:
314
+ - Short task‑oriented examples:
315
+ - “Summarize a project using schemas + ai_prompt + MCP search.”
316
+ - “Draft a plan then write objects via `saveObjects`.”
317
+ - “Use ai_response to compare vs existing data and merge.”
318
+ - **AI Safety & Guardrails**:
319
+ - Which schemas/fields are sensitive.
320
+ - Allowed vs disallowed writes.
321
+ - Expectations around **review vs autonomous changes**.
322
+ - **Instance‑specific Prompts / Taxonomy**:
323
+ - Any custom `ai_prompt` records or `ai_tool` definitions that embody local conventions (naming standards, planning templates, scoring rubrics, etc.).
324
+
325
+ If these don’t exist yet, an agent should assume **conservative** behavior:
326
+ - Prefer read + propose changes over direct writes.
327
+ - Tag all AI‑generated content clearly.
328
+ - Use `ai_response` + compare/merge flows rather than overwriting objects blindly.
329
+
330
+
Binary file
@@ -4448,8 +4448,11 @@ this.renderHTMLContent = function(specs){
4448
4448
  //populate options list
4449
4449
  var optionTemplate = prop.optionTemplate || template;
4450
4450
  values.map(function(v){
4451
+ // allow optionTemplate to be a string or a function(main,obj)
4452
+ var resolvedTemplate = self.propAsFuncOrValue(optionTemplate, self.current.object, null, v);
4453
+ var optionContent = fillTemplate(resolvedTemplate || '', v);
4451
4454
  lihtml = //self.renderBucketItem(foundItem,template,idprop,false);
4452
- '<li data-value="'+v[idprop]+'" >'+fillTemplate(optionTemplate,v)+'<div class="joe-bucket-delete cui-block joe-icon" onclick="$(this).parent().remove()"></div></li>';
4455
+ '<li data-value="'+v[idprop]+'" >'+optionContent+'<div class="joe-bucket-delete cui-block joe-icon" onclick="$(this).parent().remove()"></div></li>';
4453
4456
  selected = false;
4454
4457
  //loop over buckets
4455
4458
  /*if(!prop.allowMultiple){
@@ -4547,7 +4550,10 @@ this.renderHTMLContent = function(specs){
4547
4550
  'data-isobject=true data-value="'+encodeURI(JSON.stringify(item))+'"':
4548
4551
  'data-value="'+item[idprop]+'"';
4549
4552
 
4550
- var bucketHtml = '<li '+value+'>' +fillTemplate(template,item)
4553
+ // allow template to be a string or a function(main,obj)
4554
+ var resolvedTemplate = self.propAsFuncOrValue(template, self.current.object, null, item);
4555
+ var bucketContent = fillTemplate(resolvedTemplate || '', item);
4556
+ var bucketHtml = '<li '+value+'>' +bucketContent
4551
4557
  +'<div class="'+itemCss+'" onclick="$(this).parent().remove()"></div></li>';
4552
4558
 
4553
4559
  return bucketHtml;
@@ -0,0 +1,50 @@
1
+ /**
2
+ * Environment-aware favicon changer
3
+ * Changes favicon based on localhost vs production environment
4
+ */
5
+ (function() {
6
+ 'use strict';
7
+
8
+ // Detect if running on localhost (development)
9
+ var isLocalhost = location.hostname === 'localhost' ||
10
+ location.hostname === '127.0.0.1' ||
11
+ location.hostname === '';
12
+
13
+ if (isLocalhost) {
14
+ // For development: use dev favicon file
15
+ var devFaviconPath = '/JsonObjectEditor/favicon-dev.png';
16
+
17
+ var updateFavicon = function(id) {
18
+ var link = document.getElementById(id);
19
+ if (link) {
20
+ link.href = devFaviconPath;
21
+ link.type = 'image/png';
22
+ }
23
+ };
24
+
25
+ // Update existing favicon links
26
+ updateFavicon('favicon-shortcut');
27
+ updateFavicon('favicon-icon');
28
+
29
+ // Also update any favicon links without IDs (fallback)
30
+ var allFavicons = document.querySelectorAll('link[rel="icon"], link[rel="shortcut icon"]');
31
+ allFavicons.forEach(function(link) {
32
+ if (!link.id || link.id.indexOf('favicon') === -1) {
33
+ link.href = devFaviconPath;
34
+ link.type = 'image/png';
35
+ }
36
+ });
37
+
38
+ // If no favicon links exist, create one
39
+ if (allFavicons.length === 0) {
40
+ var head = document.head || document.getElementsByTagName('head')[0];
41
+ var link = document.createElement('link');
42
+ link.rel = 'icon';
43
+ link.type = 'image/png';
44
+ link.href = devFaviconPath;
45
+ head.appendChild(link);
46
+ }
47
+ }
48
+ // For production, keep default favicon (no changes needed)
49
+ })();
50
+
package/js/joe-ai.js CHANGED
@@ -1606,13 +1606,18 @@
1606
1606
  if(!fields.length){ return alert('No target field detected. Configure ai.fields or pass a field name.'); }
1607
1607
 
1608
1608
  // If no prompt was passed explicitly, try to pick it up from the field's
1609
- // schema-level ai config.
1609
+ // schema-level ai config. Also inherit allowWeb from field.ai.allowWeb|allow_web.
1610
1610
  let prompt = options.prompt || '';
1611
- if (!prompt && fields.length === 1 && _joe && typeof _joe.getField === 'function') {
1611
+ if (fields.length === 1 && _joe && typeof _joe.getField === 'function') {
1612
1612
  try{
1613
1613
  const fd = _joe.getField(fields[0]);
1614
- if (fd && fd.ai && fd.ai.prompt) {
1615
- prompt = fd.ai.prompt;
1614
+ if (fd && fd.ai) {
1615
+ if (!prompt && fd.ai.prompt) {
1616
+ prompt = fd.ai.prompt;
1617
+ }
1618
+ if (options.allowWeb === undefined && (fd.ai.allowWeb || fd.ai.allow_web)) {
1619
+ options.allowWeb = !!(fd.ai.allowWeb || fd.ai.allow_web);
1620
+ }
1616
1621
  }
1617
1622
  }catch(_e){ /* ignore */ }
1618
1623
  }
package/js/joe.js CHANGED
@@ -1,6 +1,6 @@
1
1
  /* --------------------------------------------------------
2
2
  *
3
- * json-object-editor - v0.10.632
3
+ * json-object-editor - v0.10.636
4
4
  * Created by: Corey Hadden
5
5
  *
6
6
  * -------------------------------------------------------- */
@@ -4454,8 +4454,11 @@ this.renderHTMLContent = function(specs){
4454
4454
  //populate options list
4455
4455
  var optionTemplate = prop.optionTemplate || template;
4456
4456
  values.map(function(v){
4457
+ // allow optionTemplate to be a string or a function(main,obj)
4458
+ var resolvedTemplate = self.propAsFuncOrValue(optionTemplate, self.current.object, null, v);
4459
+ var optionContent = fillTemplate(resolvedTemplate || '', v);
4457
4460
  lihtml = //self.renderBucketItem(foundItem,template,idprop,false);
4458
- '<li data-value="'+v[idprop]+'" >'+fillTemplate(optionTemplate,v)+'<div class="joe-bucket-delete cui-block joe-icon" onclick="$(this).parent().remove()"></div></li>';
4461
+ '<li data-value="'+v[idprop]+'" >'+optionContent+'<div class="joe-bucket-delete cui-block joe-icon" onclick="$(this).parent().remove()"></div></li>';
4459
4462
  selected = false;
4460
4463
  //loop over buckets
4461
4464
  /*if(!prop.allowMultiple){
@@ -4553,7 +4556,10 @@ this.renderHTMLContent = function(specs){
4553
4556
  'data-isobject=true data-value="'+encodeURI(JSON.stringify(item))+'"':
4554
4557
  'data-value="'+item[idprop]+'"';
4555
4558
 
4556
- var bucketHtml = '<li '+value+'>' +fillTemplate(template,item)
4559
+ // allow template to be a string or a function(main,obj)
4560
+ var resolvedTemplate = self.propAsFuncOrValue(template, self.current.object, null, item);
4561
+ var bucketContent = fillTemplate(resolvedTemplate || '', item);
4562
+ var bucketHtml = '<li '+value+'>' +bucketContent
4557
4563
  +'<div class="'+itemCss+'" onclick="$(this).parent().remove()"></div></li>';
4558
4564
 
4559
4565
  return bucketHtml;