json-object-editor 0.10.668 → 0.10.671

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -9,23 +9,25 @@ This document gives Custom GPTs, Assistants, and AI agents a single mental model
  - **Core idea**: JOE is a schema‑driven data system. AI features are layered on top to:
  - Generate or transform field values (AI autofill)
  - Run persistent assistants and chats (in‑app + widgets)
+ - Track AI job progress in real-time at the field level
  - Expose JOE data and writes via MCP tools for external agents
  - **Main components**:
- - **AI schemas** (`ai_assistant`, `ai_prompt`, `ai_tool`, `ai_response`, `ai_conversation`, `ai_widget_conversation`)
+ - **AI schemas** (`ai_assistant`, `ai_prompt`, `ai_tool`, `ai_response`, `ai_widget_conversation`)
  - **Server plugins**:
- - `chatgpt.js` – **central / preferred** entry point for OpenAI (Responses API, tool calls, MCP bridge). New integrations should target this plugin.
- - `chatgpt-assistants.js` – legacy Assistants‑API plugin, still used by existing in‑app chat flows (`ai_conversation` / `<joe-ai-chatbox>`) but not recommended for new work.
- - `chatgpt-tools.js` / `chatgpt-responses.js` helpers for tools + response shaping
+ - `chatgpt.js` – **central / preferred** entry point for OpenAI (Responses API, tool calls, MCP bridge)
+ - `chatgpt-assistants.js` – legacy Assistants‑API plugin (backward compatibility only)
+ - `AiJobs.js` – job tracking and progress management
  - **Client / UI**:
- - `joe-ai.js` – in‑app AI chat UI (web components) wired to `ai_conversation` + assistants
- - Standard JOE editor UI with **AI autofill** buttons on fields, driven by `ai_prompt`
+ - `joe-ai.js` – in‑app AI chat UI (web components) and job polling
+ - `field-jobs-container.js` web component for displaying job progress
+ - Standard JOE editor UI with **AI autofill** buttons on fields
  - **MCP**:
  - `/.well-known/mcp/manifest.json` – describes tools
  - `/mcp` – JSON‑RPC endpoint whose tools route into `JOE.Schemas`, `JOE.Storage`, `JOE.Cache`
 
- When in doubt: **schemas define structure**, **plugins call OpenAI + MCP**, **MCP exposes tools**, and **UI (`joe-ai`) is just a thin chat/controls layer on top.
+ When in doubt: **schemas define structure**, **plugins call OpenAI + MCP**, **MCP exposes tools**, and **UI (`joe-ai`) is just a thin chat/controls layer on top**.
 
- ### 1.5 End‑to‑end mental model (for agents and humans)
+ ### 1.1 End‑to‑end mental model
 
  At a high level, almost every AI flow in JOE is one of these patterns:
 
@@ -43,7 +45,7 @@ At a high level, almost every AI flow in JOE is one of these patterns:
  - Resolves the active `ai_assistant` by `_id` and loads its `instructions` + MCP config.
  - Builds `systemText` (assistant instructions + MCP tool list + optional scope hints / `understandObject` snapshot).
  - Gathers any attached files (from scoped objects or assistant‑level files) and passes their OpenAI ids/roles into `runWithTools`.
- 4. `runWithTools` performs a Responses+tools call (with MCP tools where enabled) and returns the assistants reply; `widgetMessage` appends it to `ai_widget_conversation.messages` and returns the updated history to the widget.
+ 4. `runWithTools` performs a Responses+tools call (with MCP tools where enabled) and returns the assistant's reply; `widgetMessage` appends it to `ai_widget_conversation.messages` and returns the updated history to the widget.
 
  - **MCP‑only agent flow (Custom GPT / external agent)**
  1. The agent discovers tools and named toolsets via `/.well-known/mcp/manifest.json`.
@@ -52,33 +54,19 @@ At a high level, almost every AI flow in JOE is one of these patterns:
 
  ---
 
- ## 2. AI Schemas (Mental Model)
+ ## 2. AI Schemas
 
  JOE uses a small set of AI‑specific schemas. A good agent should know what each represents and how they relate.
 
- ### 2.0 Modern assistant & conversation identity (2026+)
-
- - **Assistant identity**:
- - Treat `ai_assistant._id` (JOE cuid) as the **canonical id** for assistants.
- - `ai_assistant.assistant_id` (OpenAI Assistants id) is **optional/legacy** and only present when synced to the old Assistants API.
- - The default assistant used by widgets/AIHub is stored in `setting.DEFAULT_AI_ASSISTANT.value` as an `ai_assistant._id`.
- - **Conversations**:
- - `ai_widget_conversation` is the **primary** conversation schema for modern chats (`<joe-ai-widget>`, AIHub cards, object chat). It links to assistants via `assistant` (JOE cuid) and may also store a legacy `assistant_id`.
- - `ai_conversation` is **legacy** and only used by older `<joe-ai-chatbox>` flows that still rely on the Assistants API.
- - **MCP configuration**:
- - MCP fields (`mcp_enabled`, `mcp_toolset`, `mcp_selected_tools`, `mcp_instructions_mode`) can exist both on `ai_assistant` (assistant‑level default) and `ai_prompt` / field‑level `ai` configs (surface‑level overrides).
- - The server (`chatgpt.js`) uses a shared helper to resolve these configs into actual toolsets and instructions for all AI surfaces (prompts, autofill, widget/object chat).
-
  ### 2.1 `ai_assistant`
 
  - **What it is**: Configuration record for a single AI assistant linked to OpenAI.
- - **Key fields** (from schema + summaries):
- - Identity & model: `name`, `info`, `ai_model`, `assistant_id`, `openai_assistant_version`
- - Capabilities: `file_search_enabled`, `code_interpreter_enabled`
+ - **Key fields**:
+ - Identity & model: `name`, `info`, `ai_model`, `assistant_id` (optional/legacy)
  - Prompting: `instructions`, `assistant_thinking_text`, `assistant_color`
- - Tools: `tools` (JSON OpenAI tools array – often imported from MCP), `datasets`, `tags`, `status`
- - Sync meta: `last_synced`, timestamps (`created`, `joeUpdated`)
- - **How its used**:
+ - Tools: `tools` (JSON OpenAI tools array – often imported from MCP)
+ - MCP config: `mcp_enabled`, `mcp_toolset`, `mcp_selected_tools`, `mcp_instructions_mode`
+ - **How it's used**:
  - One `ai_assistant` usually maps 1:1 to an OpenAI Assistant.
  - A **DEFAULT** can be set via `setting.DEFAULT_AI_ASSISTANT` and is used by `joe-ai` as the default assistant.
  - The schema exposes helper methods such as:
@@ -87,7 +75,7 @@ JOE uses a small set of AI‑specific schemas. A good agent should know what eac
  - `setAsDefaultAssistant` – update the `DEFAULT_AI_ASSISTANT` setting.
 
  **Agent note**: When reasoning about which assistant is active in a chat or widget, look for:
- - `ai_conversation.assistant` or `ai_widget_conversation.assistant`
+ - `ai_widget_conversation.assistant` (JOE cuid)
  - Or the instance‑level default via the `setting` schema.
 
  ### 2.2 `ai_prompt`
@@ -95,201 +83,208 @@ JOE uses a small set of AI‑specific schemas. A good agent should know what eac
  - **What it is**: Reusable AI **prompt configuration** – how to call the `chatgpt` plugin or Responses API.
  - **Key fields**:
  - Identity: `_id`, `name`, `info`, `itemtype:'ai_prompt'`
- - Integration with plugin:
- - `prompt_method` – name of the server method on `chatgpt.js` to call (e.g. `executeJOEAiPrompt`).
- - `content_items` – `objectList` of `{ itemtype, reference }` describing which JOE objects to send and under what parameter name.
- - Instructions:
- - `functions` – helper code snippet (Node‑style `module.exports = async function(...) { ... }`) used to shape instructions/input.
- - `instructions_format` – format hint; `instructions` – main system‑level text.
- - `user_prompt` – optional user‑facing prompt template or per‑call input description.
+ - Integration: `prompt_method` (e.g. `executeJOEAiPrompt`), `content_items` (objectList of `{ itemtype, reference }`)
+ - Instructions: `functions` (helper code), `instructions_format`, `instructions`, `user_prompt`
  - OpenAI tuning: `ai_model`, `temperature`
- - Meta: `status`, `tags`, `datasets`, timestamps
- - **Methods / behavior**:
- - `methods.buildURL(prompt, items)` – builds URLs like:
- - `/API/plugin/chatgpt/<prompt_method>?ai_prompt=<prompt._id>&<reference>=<item._id>...`
- - `methods.listExamples(prompt)` – picks sample content objects for exploration/testing.
+ - MCP config: `mcp_enabled`, `mcp_toolset`, `mcp_selected_tools`, `mcp_instructions_mode`
  - **Typical flows**:
- - **AI Autofill (field‑level)**:
- - Fields in other schemas include an `ai` config (documented in `CHANGELOG` + README).
- - The UI calls `/API/plugin/chatgpt/autofill`, which:
- - Uses `ai_prompt` definitions and schema metadata.
- - Sends the right JOE objects + instructions to OpenAI.
- - Returns structured patch JSON to update fields.
- - **Explicit prompts**:
- - UI actions or external tools call `/API/plugin/chatgpt/<prompt_method>?ai_prompt=<id>&...`.
- - The plugin locates the prompt, merges helper `functions`, and runs OpenAI.
-
- **Agent note**: Treat `ai_prompt` as the **source of truth** for how to talk to OpenAI for a particular workflow (summaries, planning, refactors, etc). Never invent `prompt_method` names; reuse existing ones or ask a human to add a new prompt record.
+ - **AI Autofill (field‑level)**: Fields include an `ai` config. The UI calls `/API/plugin/chatgpt/autofill`, which uses `ai_prompt` definitions and returns structured patch JSON.
+ - **Explicit prompts**: UI actions call `/API/plugin/chatgpt/<prompt_method>?ai_prompt=<id>&...`. The plugin locates the prompt, merges helper `functions`, and runs OpenAI.
+
+ **Agent note**: Treat `ai_prompt` as the **source of truth** for how to talk to OpenAI for a particular workflow. Never invent `prompt_method` names; reuse existing ones or ask a human to add a new prompt record.
 
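The explicit‑prompt URL shape can be sketched as follows. This is an illustrative stand‑alone helper (the hypothetical `buildPromptURL` and the ids are made up for the example; in JOE the schema's own `methods.buildURL(prompt, items)` is the real implementation):

```javascript
// Builds a URL of the documented shape:
// /API/plugin/chatgpt/<prompt_method>?ai_prompt=<prompt._id>&<reference>=<item._id>
function buildPromptURL(prompt, items) {
  const params = new URLSearchParams({ ai_prompt: prompt._id });
  for (const item of items) {
    // each content_item contributes one query param: <reference>=<item._id>
    params.set(item.reference, item._id);
  }
  return `/API/plugin/chatgpt/${prompt.prompt_method}?${params.toString()}`;
}

// Example (hypothetical ids):
const url = buildPromptURL(
  { _id: 'prm_123', prompt_method: 'executeJOEAiPrompt' },
  [{ reference: 'project', _id: 'obj_456' }]
);
// url → '/API/plugin/chatgpt/executeJOEAiPrompt?ai_prompt=prm_123&project=obj_456'
```

Reusing an existing `prompt_method` (as the agent note says) keeps the URL routable into `chatgpt.js`.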
  ### 2.3 `ai_response`
 
  - **What it is**: Persistent record of a single AI response, for **auditing**, **reuse**, and **merge‑back** into JOE.
  - **Key fields**:
- - Links: `ai_prompt` (reference), `referenced_objects` (array of JOE `_id`s used), `tags`
+ - Links: `ai_prompt` (reference), `referenced_objects` (array of JOE `_id`s), `tags`
  - Request context: `user_prompt`, `prompt_method`, `model_used`, `response_type`
- - Response data:
- - `response` – raw string body (often JSON or text).
- - `response_keys` – array of keys returned (used for patch/merge).
- - `response_id` – OpenAI response ID.
- - `usage` – token usage object `{ input_tokens, output_tokens, total_tokens }`.
+ - Response data: `response` (raw string), `response_keys`, `response_id`, `usage` (token usage)
+ - MCP audit: `mcp_tools_used[]`, `used_openai_file_ids[]`
  - **Methods**:
- - `compareResponseToObject(response_id, object_id, do_alert)` – validate response JSON vs object fields, optionally trigger a merge.
- - `updateObjectFromResponse(response_id, object_id, fields)` – apply response values into a JOE object and update tags.
- - `listResponses(obj)` – list AI responses referencing a given object.
+ - `compareResponseToObject(response_id, object_id, do_alert)` – validate response JSON vs object fields
+ - `updateObjectFromResponse(response_id, object_id, fields)` – apply response values into a JOE object
+ - `listResponses(obj)` – list AI responses referencing a given object
 
- **Agent note**: When you are updating objects via tools, favor:
- - Using `ai_response` records to compare/merge changes, especially if the response contains multiple fields.
- - Respect existing merge flows (`compareResponseToObject` / `updateObjectFromResponse`) instead of overwriting arbitrary fields.
+ **Agent note**: When updating objects via tools, favor using `ai_response` records to compare/merge changes, especially if the response contains multiple fields.
 
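The merge‑back idea can be illustrated with a simplified sketch. This is not the actual implementation of `updateObjectFromResponse`; it only shows the principle of parsing the stored `response` string and applying just the keys listed in `response_keys`:

```javascript
// Copy only the keys named in response_keys from the parsed response body
// onto the target object, leaving all other fields untouched.
function applyResponsePatch(aiResponse, targetObject) {
  const data = JSON.parse(aiResponse.response); // `response` is a raw string, often JSON
  const patch = {};
  for (const key of aiResponse.response_keys || Object.keys(data)) {
    if (key in data) patch[key] = data[key];
  }
  return { ...targetObject, ...patch };
}

const merged = applyResponsePatch(
  { response: '{"name":"New Name","summary":"draft"}', response_keys: ['name'] },
  { _id: 'obj_1', name: 'Old Name' }
);
// merged.name is 'New Name'; `summary` is NOT applied because it is not in response_keys
```

This is why respecting `response_keys` matters: it scopes the merge instead of overwriting arbitrary fields.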
- ### 2.4 `ai_tool`
+ ### 2.4 `ai_widget_conversation`
 
- - **What it is**: Definition of a reusable **function‑calling tool** that OpenAI can call and JOE can execute server‑side.
+ - **What it is**: Lightweight conversation record for **embeddable AI widgets** (external sites, test pages, object chat).
  - **Key fields**:
- - Identity: `name`, `info`, `description`, `tool_id`
- - OpenAI schema: `tool_properties` (JSON, typically `{ type:'function', function:{ name, description, parameters } }`)
- - Execution: `server_code` – Node.js async code to run when the tool is invoked.
- - Meta: `datasets`, `tags`, timestamps.
- - **Methods**:
- - `updateToolProperties(propObj)` – helper to keep `tool_properties.function.name` in sync with `tool_id`.
-
- **Agent note**: For tools exposed via MCP, the canonical shape is actually in the **MCP manifest**; `ai_tool` is more for:
- - High‑level catalog and advanced OpenAI tools, not the core MCP schema/search/save tools.
-
- ### 2.5 `ai_conversation`
-
- - **What it is**: Metadata record for **in‑app AI conversations** (staff/admin chat in JOE).
- - **Key fields**:
- - Participants: `user` (JOE user), `assistant` (`ai_assistant`), `members` (external participants)
- - OpenAI thread: `thread_id`
- - Summary: `summary` (WYSIWYG), `status`, `tags`
- - Meta: `_id`, `name`, `info`, timestamps
- - **Behavior**:
- - Messages are **not stored** in JOE; they’re fetched from OpenAI at runtime (via `chatgpt-assistants.js`).
- - Schema methods + UI:
- - `methods.chatSpawner(object_id)` – calls `_joe.Ai.spawnContextualChat(object_id)`.
- - List view shows created time, user cubes, and context objects used for that chat.
- - `joe-ai.js` uses `ai_conversation` as the anchor object for chat history + context.
-
- **Agent note**: If you need to understand a chat history, you:
- - Read `ai_conversation` for metadata and context_objects.
- - Use **MCP tools** or existing chat APIs to fetch live thread content; don’t invent local persistence.
-
- ### 2.6 `ai_widget_conversation`
-
- - **What it is**: Lightweight conversation record for **embeddable AI widgets** (external sites, test pages).
- - **Key fields**:
- - Participant & assistant: `user`, `assistant`, `user_name`, `user_color`, `assistant_color`
+ - Participant & assistant: `user`, `assistant` (JOE cuid), `user_name`, `user_color`, `assistant_color`
  - OpenAI info: `model`, `assistant_id`, `system` (effective instructions)
- - Conversation: `messages` (compact list, typically JSON array of `{ role, content, created_at }`), `last_message_at`
- - Source: `source` (widget origin), `tags`, timestamps
+ - Conversation: `messages` (JSON array of `{ role, content, created_at }`), `last_message_at`
+ - Source: `source` (widget origin), `scope_itemtype`, `scope_id`, `tags`
  - **Behavior**:
- - Accessed/updated via `chatgpt-assistants` widget endpoints:
- - `widgetStart`, `widgetMessage`, `widgetHistory`, etc.
- - Schema exposes **subsets** and **filters** for grouping by `source`, `user`, and `assistant`.
+ - Accessed/updated via `chatgpt.js` widget endpoints: `widgetStart`, `widgetMessage`, `widgetHistory`
+ - Schema exposes **subsets** and **filters** for grouping by `source`, `user`, and `assistant`
 
- **Agent note**: Treat `ai_widget_conversation` as the external/chat‑widget equivalent of `ai_conversation`. The fields are tuned for widgets and public UI, but the mental model is the same: small record pointing to assistant + messages.
+ **Agent note**: `ai_widget_conversation` is the **primary** conversation schema for modern chats (`<joe-ai-widget>`, AIHub cards, object chat). The legacy `ai_conversation` schema is only used by older `<joe-ai-chatbox>` flows.
 
- ---
+ ### 2.5 `ai_tool` (optional)
 
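For reference, a `tool_properties` value matching the `{ type:'function', function:{ name, description, parameters } }` shape described above might look like this (the tool name and parameters here are hypothetical, not a real JOE tool):

```javascript
// Hypothetical function-calling tool definition; `function.name` should stay
// in sync with the record's `tool_id`.
const tool_properties = {
  type: 'function',
  function: {
    name: 'summarize_object',
    description: 'Summarize a JOE object by id',
    parameters: {
      type: 'object',
      properties: {
        object_id: { type: 'string', description: 'JOE _id of the object' }
      },
      required: ['object_id']
    }
  }
};
```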
- ## 3. `chatgpt` Plugin and Related Plugins
+ - **What it is**: Definition of a reusable **function‑calling tool** that OpenAI can call and JOE can execute server‑side.
+ - **Key fields**: `name`, `info`, `description`, `tool_id`, `tool_properties` (JSON), `server_code` (Node.js async code)
+ - **Agent note**: For tools exposed via MCP, the canonical shape is in the **MCP manifest**; `ai_tool` is more for high‑level catalog and advanced OpenAI tools, not the core MCP schema/search/save tools.
 
- ### 3.0 Modern vs legacy stacks
+ ---
 
- - **Modern stack (preferred)**:
- - Uses `server/plugins/chatgpt.js` with the **Responses API** and MCP tools.
- - Conversations live in `ai_widget_conversation` and are surfaced via `<joe-ai-widget>` and `<joe-ai-assistant-picker>`.
- - MCP configuration is respected from `ai_assistant` and `ai_prompt` (and field‑level `ai` configs).
- - **Legacy stack (Assistants API)**:
- - Uses `server/plugins/chatgpt-assistants.js` and OpenAI Assistants/threads.
- - Conversations live in `ai_conversation` and are surfaced via `<joe-ai-chatbox>`.
- - Kept for backward compatibility only; **new integrations should use the modern stack above**.
+ ## 3. Server Plugins
 
- ### 3.1 `server/plugins/chatgpt.js`
+ ### 3.1 `server/plugins/chatgpt.js` (Core Plugin)
 
  This is the **core server plugin** that wires JOE to OpenAI and MCP.
 
  - **Responsibilities**:
- - Read OpenAI API key from `setting.OPENAI_API_KEY`.
- - Provide shared helpers for building OpenAI clients (`OpenAI` SDK).
- - Implement orchestration for:
- - **Responses API** calls (models like `gpt-4.1`, `gpt-4.1-mini`, `gpt-5.1`).
- - **Tool calling** and follow‑up calls via MCP tools.
+ - Read OpenAI API key from `setting.OPENAI_API_KEY`
+ - Provide shared helpers for building OpenAI clients (`OpenAI` SDK)
+ - Implement orchestration for **Responses API** calls (models like `gpt-4.1`, `gpt-4.1-mini`, `gpt-5.1`)
+ - **Tool calling** and follow‑up calls via MCP tools
  - Bridge between JOE and MCP with helpers like:
- - `callMCPTool(toolName, params, ctx)` – call MCP tools **in‑process** (without HTTP).
- - Response parsing helpers such as `extractToolCalls(response)`.
- - Payload shrinking helpers like `shrinkUnderstandObjectMessagesForTokens(...)`.
- - Expose specific routes under `/API/plugin/chatgpt/...`, including:
- - Field autofill (`autofill`) driven by schema‑level `ai` configs.
- - Prompt execution handlers for `ai_prompt.prompt_method` values.
+ - `callMCPTool(toolName, params, ctx)` – call MCP tools **in‑process** (without HTTP)
+ - Response parsing helpers such as `extractToolCalls(response)`
+ - Payload shrinking helpers like `shrinkUnderstandObjectMessagesForTokens(...)`
+ - **Expose routes** under `/API/plugin/chatgpt/...`:
+ - Field autofill (`autofill`) driven by schema‑level `ai` configs
+ - Prompt execution handlers for `ai_prompt.prompt_method` values
+ - Widget endpoints: `widgetStart`, `widgetMessage`, `widgetHistory`
  - **Key patterns**:
- - **Slimming payloads**: When embedding object graphs into messages (e.g. `understandObject` results), `chatgpt.js` converts them into a slim representation to avoid token limits.
- - **Error handling**:
- - `isTokenLimitError(err)` identifies token/TPM error shapes for better logs and fallbacks.
-
- **Agent note**: Any time you see endpoints in docs like `/API/plugin/chatgpt/<method>`, they resolve into functions in `chatgpt.js` or its siblings. Follow those conventions instead of inventing new paths.
-
- ### 3.2 `chatgpt-assistants.js`, `chatgpt-responses.js`, `chatgpt-tools.js`
-
- - **`chatgpt-assistants.js`**:
- - Manages OpenAI Assistants and threads using the **older Assistants API**.
- - Endpoints such as:
- - `syncAssistantToOpenAI` (used from `ai_assistant.methods`).
- - `getThreadMessages`, `runAssistantChat`, widget endpoints like `widgetStart`, `widgetMessage`, `widgetHistory`.
- - Used by `joe-ai.js` and widget UIs to pull/push messages for a given conversation.
- - **Status**: legacy bridge for existing flows; **new integrations should use the Responses‑based `chatgpt` plugin instead**, matching the README.
-
- - **`chatgpt-responses.js`**:
- - Helpers for working with Responses API results:
- - Extracting text and tool calls.
- - Normalizing outputs for storage in `ai_response`.
- - Often called from `chatgpt.js` orchestration functions.
-
- - **`chatgpt-tools.js`**:
- - Glue between AI tools and JOE:
- - Uses `ai_tool` definitions.
- - Executes server‑side code blocks for tools when OpenAI calls them.
- - Can also support MCP‑style tools exported via the manifest.
-
- **Agent note**: You typically don’t call these plugins directly from a GPT. Instead, you:
- - Use documented **APIs** (`/API/plugin/chatgpt/*`, widget endpoints) and/or
- - Use **MCP tools** for reads/writes.
+ - **Slimming payloads**: When embedding object graphs into messages (e.g. `understandObject` results), `chatgpt.js` converts them into a slim representation to avoid token limits
+ - **Error handling**: `isTokenLimitError(err)` identifies token/TPM error shapes for better logs and fallbacks
+ - **Job tracking**: Registers and updates AI jobs via `AiJobs` module for progress visibility
+
+ **Agent note**: Any time you see endpoints in docs like `/API/plugin/chatgpt/<method>`, they resolve into functions in `chatgpt.js`. Follow those conventions instead of inventing new paths.
+
+ ### 3.2 `server/modules/AiJobs.js` (Job Tracking)
+
+ Manages active AI job storage and provides API endpoints for real-time progress tracking.
+
+ - **Storage**: In-memory `active` object keyed by `{objectId}_{fieldName}`
+ - **Job Structure**:
+ ```javascript
+ {
+ token: string, // Full token: {objectId}|{fieldName}|{timestamp}|{userid}
+ startTime: ISO string, // Job start timestamp
+ status: string, // 'starting', 'running', 'complete', 'error'
+ promptId: string|null, // Optional prompt ID
+ promptName: string|null, // Human-readable job name
+ fieldId: string|null, // Field identifier
+ progress: number, // Current progress (0-100 or 0-1)
+ total: number|null, // Total steps (typically 100 or 1)
+ message: string // Status message
+ }
+ ```
+ - **Key Functions**:
+ - `createJob(token, jobData)` - Register a new job
+ - `updateJob(token, updates)` - Update job status/progress/message
+ - `removeJob(token)` - Remove job immediately
+ - `removeJobWithDelay(token, status, message, delaySeconds)` - Remove after delay (default 10s)
+ - `extractKey(token)` - Parse token to get `{objectId}_{fieldName}` lookup key
+ - **API Endpoints**:
+ - `GET /API/aijobs/:objectId/:fieldName` - Returns `{ jobs: [...], objectId, fieldName, count }` for a specific field (flat array)
+ - `GET /API/aijobs/:objectId` - Returns `{ jobs: [{ lookupKey, jobs: [...] }], objectId, count }` for an object (grouped by field)
+ - `GET /API/aijobs` - Returns `{ jobs: [{ lookupKey, jobs: [...] }], count, lookupKeys }` for all active jobs (grouped, optional/admin)
+
+ **Agent note**: Jobs are automatically registered by `chatgpt.js` and `MCP.js` when AI operations start. The token format uses `|` as delimiter to avoid conflicts with field names containing underscores.
+
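The token → lookup‑key mapping described above can be sketched as a stand‑alone function. This is an illustration of the idea behind `AiJobs.extractKey`, assuming only the documented token format `{objectId}|{fieldName}|{timestamp}|{userid}`:

```javascript
// Because '|' delimits the token, underscores inside objectId or fieldName
// never break the parse; the lookup key is rebuilt with '_' afterwards.
function extractKey(token) {
  const [objectId, fieldName] = token.split('|');
  return `${objectId}_${fieldName}`;
}

const key = extractKey('obj_123|summary_field|1712345678|user_9');
// key → 'obj_123_summary_field'
```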
+ ### 3.3 Legacy Plugins (Backward Compatibility Only)
+
+ - **`chatgpt-assistants.js`**: Manages OpenAI Assistants and threads using the **older Assistants API**. Used by legacy `<joe-ai-chatbox>` flows. **New integrations should use the Responses‑based `chatgpt` plugin instead**.
+ - **`chatgpt-responses.js`**: Helpers for working with Responses API results (extracting text, tool calls, normalizing outputs).
+ - **`chatgpt-tools.js`**: Glue between AI tools and JOE (uses `ai_tool` definitions, executes server‑side code blocks).
+
+ **Agent note**: You typically don't call these plugins directly. Instead, use documented **APIs** (`/API/plugin/chatgpt/*`, widget endpoints) and/or **MCP tools** for reads/writes.
 
  ---
 
- ## 4. `joe-ai.js` – In‑App AI UI
-
- The `js/joe-ai.js` file implements JOEs in‑app AI user interface using custom elements and an `Ai` namespace.
-
- - **Key responsibilities**:
- - Manage open chat boxes and default assistant selection:
- - `Ai._openChats`, `Ai.default_ai`, `Ai.getDefaultAssistant()`.
- - Render **modern widget‑based chat** bound to `ai_widget_conversation`:
- - `<joe-ai-widget>` – embeddable chat component that talks to `chatgpt.js` (`widgetStart`, `widgetMessage`, `widgetHistory`).
- - `<joe-ai-assistant-picker>` – selector that applies an `ai_assistant._id` to a widget (`ai_assistant_id` attribute) and starts fresh conversations as needed.
- - `Ai.openObjectChatLauncher(object_id, itemtype, name)` launches a floating shell containing a `<joe-ai-widget>` for **object‑scoped chat**:
- - Sets `source:"object_chat"`, `scope_itemtype`, `scope_id`, and passes the current user id.
- - Uses `DEFAULT_AI_ASSISTANT` as the initial assistant, but allows per‑conversation overrides via the picker.
- - (Legacy) Render **Assistants‑API chat** bound to an `ai_conversation`:
- - `<joe-ai-chatbox>` – older component that loads `ai_conversation` and uses `chatgpt-assistants` endpoints (`getThreadMessages`, etc.).
- - Still present for existing flows but not used by the new object chat or AIHub widget experiences.
- - Provide ergonomic chat behaviors:
- - Markdown rendering via `renderMarkdownSafe` (prefers `marked` + `DOMPurify`).
- - Greeting messages based on `user` and `context_objects`.
- - Keyboard handling (Enter to send).
- - Conversation list (`<joe-ai-conversation-list>`) for selecting existing `ai_widget_conversation` records and applying them to a widget.
- - **High‑level flow** for modern widget chat:
- 1. UI renders a `<joe-ai-widget>` with optional `ai_assistant_id`, `source`, and scope attributes.
- 2. On the first user message, the widget calls `/API/plugin/chatgpt/widgetStart`, which creates an `ai_widget_conversation`.
- 3. Subsequent messages call `/API/plugin/chatgpt/widgetMessage`, which:
- - Resolves the active `ai_assistant` (by `_id`), builds `systemText` (assistant instructions + optional MCP tool list + scope hints), and attaches any relevant files.
- - Calls `runWithTools` in `chatgpt.js` so the model can use MCP tools and respond.
- - Updates `ai_widget_conversation.messages` and returns the full history to the widget.
-
- **Agent note**:
- - When describing UI behavior or helping with debugging, remember that `joe-ai.js` is **UI only**; persistent data lives in:
- - `ai_widget_conversation` (modern chats: AIHub, object chat, widgets)
- - `ai_assistant` (config, including MCP/tool settings and assistant‑level files)
- - `ai_conversation` (legacy Assistants‑API chats only)
+ ## 4. Client-Side UI (`joe-ai.js`)
+
+ The `js/joe-ai.js` file implements JOE's in‑app AI user interface using custom elements and an `Ai` namespace.
+
+ ### 4.1 Chat Components
+
+ - **`<joe-ai-widget>`** – Modern embeddable chat component that talks to `chatgpt.js` (`widgetStart`, `widgetMessage`, `widgetHistory`). Bound to `ai_widget_conversation`.
+ - **`<joe-ai-assistant-picker>`** – Selector that applies an `ai_assistant._id` to a widget (`ai_assistant_id` attribute).
+ - **`<joe-ai-chatbox>`** – Legacy component that loads `ai_conversation` and uses `chatgpt-assistants` endpoints. Still present for existing flows but not used by new object chat or AIHub widget experiences.
+ - **`Ai.openObjectChatLauncher(object_id, itemtype, name)`** – Launches a floating shell containing a `<joe-ai-widget>` for **object‑scoped chat**:
+ - Sets `source:"object_chat"`, `scope_itemtype`, `scope_id`, and passes the current user id
+ - Uses `DEFAULT_AI_ASSISTANT` as the initial assistant, but allows per‑conversation overrides via the picker
+
+ ### 4.2 Job Tracking & Polling
+
+ - **Global Poller**: `Ai.jobsPoller` object manages polling for all `field-jobs-container` components
+ - **Polling Intervals**:
+ - **Active jobs present**: 3 seconds
+ - **No active jobs**: 10 seconds (idle)
+ - **Immediate Polling**: When an AI button is clicked, `Ai.jobsPoller.poll()` is called immediately (with 200ms delay to let server register the job)
+ - **Poll Logic**:
+ 1. Finds all `field-jobs-container` elements on the page
+ 2. For each container, polls `/API/aijobs/{objectId}/{fieldName}`
+ 3. Updates container via `container.updateJobs(jobs)`
+ 4. Adjusts interval based on whether any active jobs were found
+ - **Token Generation**: `Ai.generateProgressToken(objectId, fieldName, userID)` creates structured tokens: `{objectId}|{fieldName}|{timestamp}|{userid}`
+ - **Debug Functions** (available in browser console):
+ - `Ai.checkPoller()` - Shows poller status, interval, and container count
+ - `Ai.restartPoller()` - Restarts the global poller
+ - `Ai.debugJobs()` - Shows all active jobs in `Ai.activeJobs` Map
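The poll logic and interval switching above can be sketched as follows. The helper names here are hypothetical (`fetchFieldJobs` stands in for a GET to `/API/aijobs/{objectId}/{fieldName}`; the real logic lives in `Ai.jobsPoller`):

```javascript
const ACTIVE_INTERVAL_MS = 3000;  // active jobs present
const IDLE_INTERVAL_MS = 10000;   // no active jobs

// Poll fast while any job is still starting/running, back off when idle.
function nextPollInterval(jobs) {
  const hasActive = jobs.some(j => j.status === 'starting' || j.status === 'running');
  return hasActive ? ACTIVE_INTERVAL_MS : IDLE_INTERVAL_MS;
}

// One poll pass over all containers; the caller reschedules itself with the
// returned interval, mirroring step 4 of the poll logic above.
async function pollOnce(containers, fetchFieldJobs) {
  let interval = IDLE_INTERVAL_MS;
  for (const c of containers) {
    const { jobs } = await fetchFieldJobs(c.objectId, c.fieldName);
    c.updateJobs(jobs);                              // container redraws its rows
    interval = Math.min(interval, nextPollInterval(jobs));
  }
  return interval;
}
```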
231
+
232
+ ### 4.3 UI Component (`field-jobs-container`)
233
+
234
+ - **Web Component**: `<field-jobs-container>` custom element defined in `web-components/field-jobs-container.js`
235
+ - **Attributes**:
236
+ - `data-object-id` - Object CUID
237
+ - `data-field-name` - Field name
238
+ - **Rendering**:
239
+ - Title shows active job count: "N active job(s)"
240
+ - Token link on right side (small, bold) - links to `/API/aijobs/{objectId}/{fieldName}` endpoint page
241
+ - Each job row shows:
242
+ - Job name (from `promptName` or `promptId`)
243
+ - Status (capitalized, inline): " - Running", " - Complete", etc.
244
+ - Elapsed time: "(Xs)" in seconds
245
+ - Progress percentage: "XX%" or "—" if not available
246
+ - Message on separate line (smaller, lighter text)
247
+ - Completed jobs remain visible until page/field refresh
248
+ - **Lifecycle**:
249
+ - `connectedCallback()` triggers immediate poll on mount
250
+ - `updateJobs(jobs)` updates the display
251
+ - Elapsed time updates every second for active jobs
252
+
253
+ ### 4.4 Field Integration
254
+
255
+ To enable job tracking on a field:
256
+
257
+ 1. **Add `queryJobs: true` to field definition**:
258
+ ```javascript
259
+ {
260
+ name: 'select_prompt',
261
+ type: 'select_prompt',
262
+ queryJobs: true // Enables field-jobs-container rendering
263
+ }
264
+ ```
265
+
266
+ 2. **Ensure button has required data attributes**:
267
+ - `data-object-id` - Set automatically from current object
268
+ - `data-field-name` - Set from field name
269
+ - `data-progress-token` - Generated via `Ai.generateProgressToken()`
270
+
271
+ 3. **Server-side job registration**:
272
+ - Autofill: `chatgpt.js` → `registerAiJobIfToken()` → `updateAiJobIfToken()`
273
+ - Prompts: `chatgpt.js` → `executeJOEAiPrompt()` → `updateAiJobIfToken()`
274
+ - Thoughts: `MCP.js` → `runThoughtAgent()` → `MCP.updateJob()`
275
+
+ ### 4.5 Job Lifecycle
+
+ 1. **Start**: Button click → Generate token → Register job on server → Trigger immediate poll
+ 2. **Progress**: Server calls `updateJob(token, { status, message, progress, total })` at key stages
+ 3. **Complete**: Server sets `status: 'complete'`, `progress: 100` → Job remains visible in UI
+ 4. **Cleanup**: Server calls `removeJobWithDelay(token, 'complete', 'Complete', 10)` → Job removed from server after 10 seconds, but remains in UI until refresh
+
+ **Agent note**: When adding new AI operations that should show progress:
+ 1. Generate a token using `Ai.generateProgressToken(objectId, fieldName, userID)`
+ 2. Register the job on the server via `AiJobs.createJob(token, jobData)`
+ 3. Update progress via `AiJobs.updateJob(token, { status, message, progress, total })`
+ 4. Ensure the field has `queryJobs: true` if using `field-jobs-container`
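The lifecycle calls above can be sketched against a minimal in-memory stand-in for the AiJobs store. The call signatures follow this document; the internals are illustrative, not the real `AiJobs.js` implementation:

```javascript
// Minimal in-memory stand-in for the AiJobs store, showing the lifecycle
// calls in order. Shapes follow the docs above; internals are illustrative.
const store = new Map();

const AiJobs = {
  createJob(token, jobData) {
    store.set(token, { status: 'running', progress: 0, ...jobData });
  },
  updateJob(token, patch) { // patch: { status, message, progress, total }
    const job = store.get(token);
    if (job) Object.assign(job, patch);
    return job;
  },
  removeJobWithDelay(token, status, message, seconds) {
    this.updateJob(token, { status, message, progress: 100 });
    // Server-side cleanup only; the UI keeps the completed row until refresh.
    setTimeout(() => store.delete(token), seconds * 1000).unref?.();
  },
};

// Start → Progress → Complete (+ delayed server-side cleanup)
AiJobs.createJob('tok1', { promptName: 'Summarize' });
AiJobs.updateJob('tok1', { status: 'running', message: 'Calling model', progress: 50, total: 100 });
AiJobs.removeJobWithDelay('tok1', 'complete', 'Complete', 10);
```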
 
  ---
 
@@ -300,41 +295,28 @@ MCP (Model Context Protocol) is how external agents (including Custom GPTs) call
  ### 5.1 Endpoints and Manifest
 
  - **Endpoints**:
-   - `GET /.well-known/mcp/manifest.json` – public manifest with:
-     - Instance info `{ joe: { name, version, hostname } }`
-     - Tool list (name, description, params schema)
-     - Privacy/terms URLs (also wired into `/privacy` and `/terms` routes).
-   - `POST /mcp` – auth‑protected JSON‑RPC 2.0 endpoint for **tool calls**.
-     - Same auth rules as other APIs; if users exist, requires cookie or basic auth.
- - **Implementation note**:
-   - The concrete MCP tool implementations live in `server/modules/MCP.js`, which is wired up by `server/init.js` and exported via the manifest and `/mcp` endpoint.
- - **Tools** (from README + CHANGELOG):
+   - `GET /.well-known/mcp/manifest.json` – public manifest with instance info, tool list, privacy/terms URLs
+   - `POST /mcp` – auth‑protected JSON‑RPC 2.0 endpoint for **tool calls** (same auth rules as other APIs)
+ - **Implementation**: The concrete MCP tool implementations live in `server/modules/MCP.js`, which is wired up by `server/init.js` and exported via the manifest and `/mcp` endpoint.
+ - **Tools**:
    - `listSchemas(name?)`, `getSchema(name)`
-   - `getObject(_id, itemtype?)`
+   - `getObject(_id, itemtype?)` (supports optional `flatten` and `depth`)
    - `search` (unified search across cache/storage; supports `source`, `ids`, `flatten`, `depth`, `countOnly`)
    - `fuzzySearch` – typo‑tolerant search across weighted fields
+   - `findObjectsByTag`, `findObjectsByStatus` – tag/status-based queries
    - `saveObject({ object })`
    - `saveObjects({ objects, stopOnError?, concurrency? })`
    - `hydrate` – returns core fields, schemas, statuses, tags (for agents to bootstrap context)
 
- All tools map directly into JOEs APIs: `JOE.Schemas`, `JOE.Storage`, `JOE.Cache`, and related helpers. Sensitive fields are sanitized before returning data.
+ All tools map directly into JOE's APIs: `JOE.Schemas`, `JOE.Storage`, `JOE.Cache`, and related helpers. Sensitive fields are sanitized before returning data.
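As a sketch, a tool call against `POST /mcp` might look like the following. The JSON-RPC 2.0 envelope with `tools/call` and `{ name, arguments }` params follows the general MCP convention; confirm the exact method naming against this instance's manifest before relying on it:

```javascript
// Sketch of a JSON-RPC 2.0 tool call for POST /mcp. "tools/call" with
// { name, arguments } follows the MCP convention; verify against
// /.well-known/mcp/manifest.json for this instance.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    params: { name, arguments: args },
  };
}

// Count matching objects before pulling a large result set:
const req = buildToolCall(1, 'search', { itemtype: 'project', countOnly: true });
```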
 
  ### 5.2 Agent Best Practices with MCP
 
- - **Schema‑first**: Always:
-   1. Discover schemas via `listSchemas`/`getSchema`.
-   2. Use `hydrate` on startup to cache schemas/statuses/tags.
-   3. Use `search` / `fuzzySearch` with explicit `itemtype` and filters rather than arbitrary text mining.
- - **Reads vs writes**:
-   - Prefer **cache** for read operations (`search` default) unless you explicitly need DB‑level freshness (set `source:"storage"`).
-   - For writes, use:
-     - `saveObject` for single‑item updates.
-     - `saveObjects` for batched updates with clear error handling.
- - **Performance**:
-   - Use `countOnly` before large reads to avoid pulling massive datasets.
-   - When embedding objects into prompts, prefer **slimmed** representations (id, itemtype, name, info).
-
- **Agent note**: The MCP tools are your primary way to **inspect and modify JOE data** from a Custom GPT or other agent host. Direct HTTP calls to JOE’s internal `/API/*` routes should be treated as implementation detail unless explicitly allowed in your instructions.
+ - **Schema‑first**: Always discover schemas via `listSchemas`/`getSchema`, use `hydrate` on startup to cache schemas/statuses/tags, and use `search` / `fuzzySearch` with explicit `itemtype` and filters
+ - **Reads vs writes**: Prefer **cache** for read operations (`search` default) unless you explicitly need DB‑level freshness (set `source:"storage"`). For writes, use `saveObject` for single‑item updates and `saveObjects` for batched updates
+ - **Performance**: Use `countOnly` before large reads to avoid pulling massive datasets. When embedding objects into prompts, prefer **slimmed** representations (id, itemtype, name, info)
+
+ **Agent note**: The MCP tools are your primary way to **inspect and modify JOE data** from a Custom GPT or other agent host. Direct HTTP calls to JOE's internal `/API/*` routes should be treated as implementation detail unless explicitly allowed in your instructions.
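The "slimmed" representation recommended above can be sketched as a tiny helper. The four fields (id, itemtype, name, info) come from this document; using `_id` as the id key is an assumption based on the `getObject`/`saveObject` signatures:

```javascript
// Sketch of the "slimmed" object representation for embedding into prompts.
// `_id` as the id key is an assumption; the four fields are from the docs above.
function slim(obj) {
  const { _id, itemtype, name, info } = obj;
  return { _id, itemtype, name, info }; // drops every other field
}
```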
 
  ---
 
@@ -343,44 +325,31 @@ All tools map directly into JOE’s APIs: `JOE.Schemas`, `JOE.Storage`, `JOE.Cac
  When configuring a JOE‑aware Custom GPT or dev agent, include the following docs in its RAG/context:
 
  - **Core architecture & instance info**
-   - `README.md` – especially:
-     - “Architecture & Mental Model (Server)”
-     - MCP overview + test instructions (including references to `_www/mcp-test.html`, `_www/mcp-schemas.html`, and `_www/plugins-test.html` for live inspection)
-   - `CHANGELOG.md` – AI/MCP‑related entries (0.10.43x+, 0.10.50x+, 0.10.62x+), already summarize important AI behavior.
+   - `README.md` – especially "Architecture & Mental Model (Server)" and the MCP overview + test instructions
+   - `CHANGELOG.md` – AI/MCP‑related entries (0.10.43x+, 0.10.50x+, 0.10.62x+)
  - **Agent / instruction docs (existing)**
-   - `docs/JOE_Master_Knowledge_Export.md` – master context dump for schemas and concepts.
-   - `docs/joe_agent_spec_v_2.2.md` – agent behavior/specification.
-   - `docs/joe_agent_custom_gpt_instructions_v_3.md` – current Custom GPT / MCP prompt instructions.
-   - `docs/schema_summary_guidelines.md` – how schema summaries are constructed and how agents should interpret/use them.
+   - `docs/JOE_Master_Knowledge_Export.md` – master context dump for schemas and concepts
+   - `docs/joe_agent_spec_v_2.2.md` – agent behavior/specification
+   - `docs/joe_agent_custom_gpt_instructions_v_3.md` – current Custom GPT / MCP prompt instructions
+   - `docs/schema_summary_guidelines.md` – how schema summaries are constructed
  - **This file**
-   - `docs/JOE_AI_Overview.md` – **(you are here)** high‑level map of AI schemas, plugins, UI, and MCP.
+   - `docs/JOE_AI_Overview.md` – **(you are here)** high‑level map of AI schemas, plugins, UI, and MCP
 
  Optional / advanced (for deeper RAG or power users):
- - Selected excerpts or exports of **schema summaries** for AI‑related schemas:
-   - `ai_assistant`, `ai_prompt`, `ai_tool`, `ai_response`, `ai_conversation`, `ai_widget_conversation`
- - Any **project‑specific** AI workflow docs you maintain (e.g. playbooks for content generation, planning workflows, merge policies).
+ - Selected excerpts or exports of **schema summaries** for AI‑related schemas
+ - Any **project‑specific** AI workflow docs you maintain
 
  ---
 
- ## 7. Whats Still Missing / Future Docs to Consider
+ ## 7. What's Still Missing / Future Docs to Consider
 
  For a fully self‑sufficient agent, the following additional docs can be helpful:
 
- - **AI Workflows Cookbook**:
-   - Short task‑oriented examples:
-     - “Summarize a project using schemas + ai_prompt + MCP search.”
-     - “Draft a plan then write objects via `saveObjects`.”
-     - “Use ai_response to compare vs existing data and merge.”
- - **AI Safety & Guardrails**:
-   - Which schemas/fields are sensitive.
-   - Allowed vs disallowed writes.
-   - Expectations around **review vs autonomous changes**.
- - **Instance‑specific Prompts / Taxonomy**:
-   - Any custom `ai_prompt` records or `ai_tool` definitions that embody local conventions (naming standards, planning templates, scoring rubrics, etc.).
-
- If these don’t exist yet, an agent should assume **conservative** behavior:
- - Prefer read + propose changes over direct writes.
- - Tag all AI‑generated content clearly.
- - Use `ai_response` + compare/merge flows rather than overwriting objects blindly.
-
+ - **AI Workflows Cookbook**: Short task‑oriented examples (summarize a project, draft a plan, merge responses)
+ - **AI Safety & Guardrails**: Which schemas/fields are sensitive, allowed vs disallowed writes, expectations around review vs autonomous changes
+ - **Instance‑specific Prompts / Taxonomy**: Any custom `ai_prompt` records or `ai_tool` definitions that embody local conventions
 
+ If these don't exist yet, an agent should assume **conservative** behavior:
+ - Prefer read + propose changes over direct writes
+ - Tag all AI‑generated content clearly
+ - Use `ai_response` + compare/merge flows rather than overwriting objects blindly