@smythos/sre 1.7.42 → 1.8.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (61)
  1. package/CHANGELOG +448 -66
  2. package/dist/index.js +65 -50
  3. package/dist/index.js.map +1 -1
  4. package/dist/types/Components/Async.class.d.ts +11 -5
  5. package/dist/types/index.d.ts +2 -0
  6. package/dist/types/subsystems/AgentManager/AgentData.service/connectors/SQLiteAgentDataConnector.class.d.ts +45 -0
  7. package/dist/types/subsystems/LLMManager/LLM.helper.d.ts +32 -1
  8. package/dist/types/subsystems/LLMManager/LLM.inference.d.ts +25 -2
  9. package/dist/types/subsystems/LLMManager/LLM.service/connectors/Anthropic.class.d.ts +22 -2
  10. package/dist/types/subsystems/LLMManager/LLM.service/connectors/Bedrock.class.d.ts +2 -2
  11. package/dist/types/subsystems/LLMManager/LLM.service/connectors/GoogleAI.class.d.ts +27 -2
  12. package/dist/types/subsystems/LLMManager/LLM.service/connectors/Groq.class.d.ts +22 -2
  13. package/dist/types/subsystems/LLMManager/LLM.service/connectors/Ollama.class.d.ts +22 -2
  14. package/dist/types/subsystems/LLMManager/LLM.service/connectors/Perplexity.class.d.ts +3 -3
  15. package/dist/types/subsystems/LLMManager/LLM.service/connectors/openai/OpenAIConnector.class.d.ts +23 -3
  16. package/dist/types/subsystems/LLMManager/LLM.service/connectors/openai/apiInterfaces/ChatCompletionsApiInterface.d.ts +2 -2
  17. package/dist/types/subsystems/LLMManager/LLM.service/connectors/openai/apiInterfaces/OpenAIApiInterface.d.ts +2 -2
  18. package/dist/types/subsystems/LLMManager/LLM.service/connectors/openai/apiInterfaces/ResponsesApiInterface.d.ts +2 -2
  19. package/dist/types/subsystems/LLMManager/LLM.service/connectors/xAI.class.d.ts +3 -3
  20. package/dist/types/subsystems/MemoryManager/LLMContext.d.ts +10 -3
  21. package/dist/types/subsystems/ObservabilityManager/Telemetry.service/connectors/OTel/OTel.class.d.ts +24 -0
  22. package/dist/types/subsystems/ObservabilityManager/Telemetry.service/connectors/OTel/OTel.redaction.helper.d.ts +49 -0
  23. package/dist/types/types/LLM.types.d.ts +30 -1
  24. package/package.json +4 -3
  25. package/src/Components/APICall/OAuth.helper.ts +16 -1
  26. package/src/Components/APIEndpoint.class.ts +11 -4
  27. package/src/Components/Async.class.ts +38 -5
  28. package/src/Components/GenAILLM.class.ts +13 -7
  29. package/src/Components/ImageGenerator.class.ts +32 -13
  30. package/src/Components/LLMAssistant.class.ts +3 -1
  31. package/src/Components/LogicAND.class.ts +13 -0
  32. package/src/Components/LogicAtLeast.class.ts +18 -0
  33. package/src/Components/LogicAtMost.class.ts +19 -0
  34. package/src/Components/LogicOR.class.ts +12 -2
  35. package/src/Components/LogicXOR.class.ts +11 -0
  36. package/src/constants.ts +1 -1
  37. package/src/helpers/Conversation.helper.ts +10 -8
  38. package/src/index.ts +2 -0
  39. package/src/index.ts.bak +2 -0
  40. package/src/subsystems/AgentManager/AgentData.service/connectors/SQLiteAgentDataConnector.class.ts +190 -0
  41. package/src/subsystems/AgentManager/AgentData.service/index.ts +2 -0
  42. package/src/subsystems/LLMManager/LLM.helper.ts +117 -1
  43. package/src/subsystems/LLMManager/LLM.inference.ts +136 -67
  44. package/src/subsystems/LLMManager/LLM.service/LLMConnector.ts +22 -6
  45. package/src/subsystems/LLMManager/LLM.service/connectors/Anthropic.class.ts +157 -33
  46. package/src/subsystems/LLMManager/LLM.service/connectors/Bedrock.class.ts +9 -8
  47. package/src/subsystems/LLMManager/LLM.service/connectors/GoogleAI.class.ts +124 -90
  48. package/src/subsystems/LLMManager/LLM.service/connectors/Groq.class.ts +125 -62
  49. package/src/subsystems/LLMManager/LLM.service/connectors/Ollama.class.ts +168 -76
  50. package/src/subsystems/LLMManager/LLM.service/connectors/Perplexity.class.ts +18 -8
  51. package/src/subsystems/LLMManager/LLM.service/connectors/VertexAI.class.ts +8 -4
  52. package/src/subsystems/LLMManager/LLM.service/connectors/openai/OpenAIConnector.class.ts +50 -8
  53. package/src/subsystems/LLMManager/LLM.service/connectors/openai/apiInterfaces/ChatCompletionsApiInterface.ts +30 -16
  54. package/src/subsystems/LLMManager/LLM.service/connectors/openai/apiInterfaces/OpenAIApiInterface.ts +2 -2
  55. package/src/subsystems/LLMManager/LLM.service/connectors/openai/apiInterfaces/ResponsesApiInterface.ts +29 -15
  56. package/src/subsystems/LLMManager/LLM.service/connectors/xAI.class.ts +10 -8
  57. package/src/subsystems/MemoryManager/LLMContext.ts +27 -8
  58. package/src/subsystems/ObservabilityManager/Telemetry.service/connectors/OTel/OTel.class.ts +313 -85
  59. package/src/subsystems/ObservabilityManager/Telemetry.service/connectors/OTel/OTel.redaction.helper.ts +203 -0
  60. package/src/types/LLM.types.ts +31 -1
  61. package/src/types/node-sqlite.d.ts +45 -0
package/CHANGELOG CHANGED
@@ -5,129 +5,511 @@ All notable changes to the SmythOS CORE Runtime Engine will be documented in thi
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [1.8.0] 2026-02-23
+
+ ### Observability
+
+ - **New: Observability Subsystem** with a full OpenTelemetry (OTel) connector — agent spans, skill propagation, session/workflow tracing, and error tracking now available out of the box
+ - Sensitive data redaction for OTel logs — new `enableRedaction` option and `redactHeaders` helper to strip secrets and PII from telemetry
+ - Agent name, team ID, org tier, and org slot added to OTel spans for richer multi-tenant tracing
+ - Agent.Skill spans propagated via HTTP headers across service boundaries
+ - Enhanced context previewing in OTel via `prepareContext` method
+ - Full input/output context logged in OTel spans, including tool arguments and LLM responses
+ - Improved OTel error tracking with consolidated error reporting and error event listeners
+ - OTel span hierarchy for conversation sessions fully implemented
+ - Graceful handling when OTel endpoint is not configured
+
+ ### LLM & Model Support
+
+ - **Abort Controller**: all LLM connectors now support an `abortSignal` parameter and emit a `TLLMEvent.Abort` event — cancellation is first-class
+ - **Finish reason normalization**: standardized `TLLMFinishReason` enum across all LLM connectors
+ - **Event emitter standardization**: LLM connectors never throw — errors are always emitted as events
+ - **Fallback/proxy pattern**: custom model connectors now support a fallback proxy architecture for resilience
+ - **Structured output**: implemented structured output extraction for Anthropic's latest models; `_debug` and `_error` fields excluded from structured outputs
+ - **Anthropic**: handle `model_context_window_exceeded` stop reason gracefully
+ - **Anthropic**: prefill and JSON instructions now only applied to legacy models
+ - **Anthropic**: support negative values for `temperature` and `top_p`
+ - **Opus 4.5 / 4.6**: thinking effort parameter support
+ - **GPT-5.2**: `xhigh` reasoning effort support
+ - **Gemini 3**: `reasoningEffort` config and `thoughtSignature` attachment for function calling
+ - **Google AI**: fixed system instruction propagation, message part extraction, and `functionResponse.response` structure
+ - **Google AI**: fixed multiple tool call logging, tier/cache handling, and image token usage reporting
+ - **Claude 4**: enabled streaming in Classifier and LLM Assistant components
+ - **Perplexity**: provide actual API error messages; allow either `frequency_penalty` or `presence_penalty` (not both)
+ - Flash model family detected via a more generic pattern
+ - `modelEntryName` exposed for runtime model identification
+ - `readyPromise` added to `LLMContext` class for safer initialization sequencing
+
+ ### Connectors & Storage
+
+ - **New: SQLite Agent Data Connector** — lightweight local agent data persistence using SQLite
+ - **RAG v2**: embeddings credentials now resolved from either vault or internal config; metadata handling fixed; namespace parsing corrected
+ - **DataPools v2**: conditional rollout with updated namespace processing and datasource indexer component
+ - **Pinecone**: `delete namespace` and `delete datasource` operations fixed; constructor params made optional
+ - **Milvus**: `delete datasource` operation fixed
+ - **Secret Manager**: fixed secrets fetching flow and managed vault connector; `smythos` set as default prefix
+ - Legacy namespace IDs resolved correctly
+ - Vector embedders: legacy OpenAI embedder entries hidden from selection UI
+ - OAuth2 credentials manager: `scope` field now supported
+
+ ### Components & Runtime
+
+ - **Chat**: fixed attachments being mixed with text file inputs
+ - **APIEndpoint**: debug message cleaned up to prevent bloating the debug context
+ - **Sub-Agent**: JSON response mode now supported
+ - **WebScrape**: `country` proxy option added
+ - **Search components**: template variables now supported for search location fields
+ - **ForEach / LogicAnd / Async**: improved debug logging
+ - Agent variables now resolved before type inference in all component contexts
+ - `APIEndpoint` and `ServerlessCode` variable resolution fixed
+ - ConversationHelper: fixed SSE event handler memory leak; errors from `toolsPromise` now propagated correctly
+ - Maximum tool call limit per session implemented (defaults to `Infinity`)
+ - `TemplateString` parser now correctly handles falsy values (`0`, `false`, `""`)
+ - `SMYTH_PATH` now accepts dot-segments for watching models from the default location
+ - Base64 detection no longer relies on data length heuristic
+ - Empty LLM response errors now include the field name for easier debugging
+ - Agent cache support added to the Smyth SDK
+
+ ### Code Quality
+
+ - OTel class refactored: removed redundant `lastContext` variable, simplified agent data access, cleaned up unused attributes
+ - `LLMConnector`: corrected ordering of `structuredOutputs` inside `prepareParams()`
+ - Fixed HookAsync; added support for hookable classes
+ - Secrets Manager usage example and documentation added
+
+ ---
+
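The abort support described in the notes above builds on the standard `AbortController` API. The sketch below is illustrative only: `FakeLLMConnector` and its event names are stand-ins, not the real `@smythos/sre` connector API. The only details taken from the changelog are that connectors accept an `abortSignal` and surface cancellation as an event rather than a throw.

```typescript
import { EventEmitter } from 'node:events';

// Hypothetical connector shape; the real @smythos/sre signatures may differ.
class FakeLLMConnector extends EventEmitter {
    request(prompt: string, opts: { abortSignal?: AbortSignal } = {}): void {
        const timer = setTimeout(() => this.emit('content', `echo: ${prompt}`), 1000);
        opts.abortSignal?.addEventListener('abort', () => {
            clearTimeout(timer);        // stop the in-flight work
            this.emit('abort');         // stands in for TLLMEvent.Abort
        });
    }
}

const controller = new AbortController();
const connector = new FakeLLMConnector();

let aborted = false;
connector.on('abort', () => { aborted = true; });
connector.request('hello', { abortSignal: controller.signal });
controller.abort(); // cancellation is an event, not an exception
```

Because `AbortSignal` listeners run synchronously during `abort()`, the caller observes the cancellation immediately without any try/catch.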
+ ## [1.7.43] 2026-01-22
+
+ ### LLM
+
+ - **Event emitter standardization**: LLM connectors now never throw — all errors are emitted as events instead
+ - **Fallback proxy pattern**: initial implementation of a fallback architecture for custom LLM connectors
+
+ ### Conversation & Agent
+
+ - ConversationHelper: errors from `toolsPromise` are now correctly propagated (previously swallowed)
+ - OTel: error handler added to OTel class, consolidated error reporting logic
+
+ ---
+
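The "never throw" contract described above can be sketched with a plain `EventEmitter`: the connector catches internal failures and re-emits them, so callers subscribe once instead of wrapping every call in try/catch. The class and event names here are illustrative, not the real API.

```typescript
import { EventEmitter } from 'node:events';

// Minimal sketch of a connector that reports failures as events.
class SafeConnector extends EventEmitter {
    async prompt(input: string): Promise<void> {
        try {
            if (!input) throw new Error('empty prompt');
            this.emit('content', input.toUpperCase());
        } catch (err) {
            this.emit('error', err); // emitted, never re-thrown to the caller
        }
    }
}

const c = new SafeConnector();
const errors: unknown[] = [];
c.on('error', (e) => errors.push(e));
c.prompt(''); // does not throw; the failure arrives as an 'error' event
```

Note that with Node's `EventEmitter`, an `'error'` event without any listener crashes the process, so this pattern implies callers always register an error handler.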
+ ## [1.7.42] 2026-01-20
+
+ ### LLM
+
+ - **Abort Controller**: implemented `abortSignal` support and `TLLMEvent.Abort` event across all LLM connectors
+ - **Finish reason normalization**: introduced `TLLMFinishReason` enum and standardized finish reason values from all connectors
+
+ ### Observability
+
+ - Agent name added to OTel telemetry logs for improved tracking
+ - OTel error tracking enhanced: error events captured at conversation-level spans
+
+ ### SDK
+
+ - Agent cache support added to the Smyth SDK
+
+ ---
+
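Finish-reason normalization of the kind introduced here typically folds each provider's stop strings into one enum so downstream code never branches on provider-specific values. `TLLMFinishReason` is named in the changelog, but the member names and the provider mappings below are assumptions, not the real definitions.

```typescript
// Hypothetical enum members; the real TLLMFinishReason may differ.
enum TLLMFinishReason {
    Stop = 'stop',
    Length = 'length',
    ToolUse = 'tool_use',
    Error = 'error',
}

// Raw provider strings folded into the shared enum.
const FINISH_REASON_MAP: Record<string, TLLMFinishReason> = {
    stop: TLLMFinishReason.Stop,          // OpenAI
    end_turn: TLLMFinishReason.Stop,      // Anthropic
    STOP: TLLMFinishReason.Stop,          // Google AI
    length: TLLMFinishReason.Length,      // OpenAI
    max_tokens: TLLMFinishReason.Length,  // Anthropic
    MAX_TOKENS: TLLMFinishReason.Length,  // Google AI
    tool_calls: TLLMFinishReason.ToolUse, // OpenAI
    tool_use: TLLMFinishReason.ToolUse,   // Anthropic
};

function normalizeFinishReason(raw: string): TLLMFinishReason {
    // In this sketch, unrecognized reasons are surfaced as errors.
    return FINISH_REASON_MAP[raw] ?? TLLMFinishReason.Error;
}
```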
+ ## [1.7.41] 2026-01-08
+
+ ### Connectors
+
+ - **New: SQLite Agent Data Connector** — lightweight persistent storage for ephemeral and SDK agents
+
+ ### LLM — Google AI
+
+ - Fixed `functionResponse.response` structure for Google AI requests
+ - Fixed text part extraction from Google AI responses
+ - Fixed system instruction propagation for Google AI
+
+ ### Observability
+
+ - OTel spans now include session ID and workflow details for richer tracing
+ - Improved debug logging for `ForEach`, `LogicAnd`, and `Async` components
+
+ ---
+
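The diff adds a `node-sqlite.d.ts` declaration file alongside the new connector, suggesting it sits on Node's synchronous `node:sqlite` module, but the connector's real schema and API are not shown. The sketch below only illustrates the kind of get/set contract an agent-data connector typically fulfils, with an in-memory `Map` standing in for the SQLite table; every name here is hypothetical.

```typescript
// Hypothetical contract; the real SQLiteAgentDataConnector API is not shown in the diff.
interface AgentDataStore {
    get(agentId: string): string | undefined;
    set(agentId: string, data: string): void;
}

// In-memory stand-in for the SQLite-backed implementation.
class InMemoryAgentDataStore implements AgentDataStore {
    private rows = new Map<string, string>();

    get(agentId: string): string | undefined {
        return this.rows.get(agentId); // SELECT data FROM agents WHERE id = ?
    }

    set(agentId: string, data: string): void {
        this.rows.set(agentId, data);  // INSERT OR REPLACE INTO agents ...
    }
}

const store = new InMemoryAgentDataStore();
store.set('agent-1', '{"name":"demo"}');
```

Swapping the `Map` for a real table behind the same interface is what lets ephemeral SDK agents gain persistence without changing caller code.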
+ ## [1.7.40] 2025-12-04
+
+ ### LLM & Model Support
+
+ - **GPT-5.2**: `xhigh` reasoning effort level support
+ - **Claude 4**: streaming enabled for Classifier and LLM Assistant components
+ - Flash model family (Gemini) now detected via generic pattern — no need for explicit model listing
+ - **Gemini**: fixed multiple-tool-call logging; fixed infinite tool call loop
+ - Maximum tool call limit per session (`_maxToolCallsPerSession`), defaults to `Infinity`
+
+ ### Observability
+
+ - OTel spans now include `orgTier` and `orgSlot` attributes for multi-tenant tracking
+ - OTel: Agent.Skill spans now propagated via HTTP headers across service boundaries
+ - Team ID added to OTel spans
+ - OTel: graceful handling when no endpoint is configured
+
+ ### Connectors & Storage
+
+ - **Secret Manager**: `smythos` set as default secret prefix
+ - **RAG v2** (work-in-progress): namespace parsing fixes for NKV, improved embeddings credentials resolution
+ - Legacy namespace IDs resolved correctly
+
+ ### Components & Runtime
+
+ - **TemplateString** parser: correctly handles falsy values (`0`, `false`, `""`)
+ - **Sub-Agent component**: JSON response mode now supported
+ - **WebScrape**: `country` proxy option added
+ - **Search components**: template variables supported for search location fields
+ - `modelEntryName` property exposed on LLM connectors for runtime model identification
+ - LLM response event handling improved
+
+ ### Documentation
+
+ - Secrets Manager example and documentation added
+
+ ---
+
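The falsy-value fix called out above addresses a classic template pitfall: a substitution guarded with `value || ''` erases `0`, `false`, and `""`, while a nullish check (`??`) only erases genuinely missing values. The function below is a sketch of that distinction; the real `TemplateString` syntax and implementation are not shown in the diff.

```typescript
// Illustrative template renderer; not the real TemplateString parser.
function renderTemplate(tpl: string, vars: Record<string, unknown>): string {
    // Buggy variant: String(vars[name] || '') would turn 0 into '' as well.
    // The ?? operator only falls back when the value is null or undefined.
    return tpl.replace(/\{\{(\w+)\}\}/g, (_, name) => String(vars[name] ?? ''));
}
```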
+ ## [1.7.20] 2025-11-26
+
+ ### Runtime
+
+ - Agent variables are now resolved before performing type inference (fixes incorrect type coercion)
+ - Empty LLM response errors now include the field name for easier debugging
+ - Base64 detection: removed unreliable data-length heuristic
+
+ ### Configuration
+
+ - `SMYTH_PATH` now accepts dot-segments (`.`) to watch models from the default location
+ - OTel output logging added for LLM responses
+
+ ---
+
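A length heuristic for Base64 detection (treating long opaque strings as encoded data) misclassifies both short binaries and long plain text. A structural check on the alphabet and padding avoids that. The exact rules `@smythos/sre` applies after this change are not shown in the diff; this is only one plausible sketch.

```typescript
// Structural Base64 check: alphabet, padding, and 4-byte block alignment,
// with no assumption about how long the payload "ought" to be.
function looksLikeBase64(s: string): boolean {
    if (s.length === 0 || s.length % 4 !== 0) return false;
    return /^[A-Za-z0-9+/]+={0,2}$/.test(s);
}
```

Short valid strings like `"hell"` still pass (they are legal Base64), so callers that need certainty should also attempt a decode rather than rely on shape alone.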
+ ## [1.7.18] 2025-11-19
+
+ ### LLM — Google AI / Gemini
+
+ - Google AI: tier and cache now handled correctly per-request
+ - **Gemini 3**: `reasoningEffort` config support
+ - **Gemini 3**: `thoughtSignature` attachment for function calling (required by the Gemini 3 API)
+
+ ### Connectors & Storage
+
+ - **RAG v2** (WIP): embeddings credentials resolved from either vault or internal config; metadata fix
+ - **Pinecone**: constructor parameters made optional
+ - Vector embedders: legacy OpenAI embedder entries hidden from the selection UI
+
+ ---
+
+ ## [1.7.15] 2025-11-13
+
+ ### Observability
+
+ - **New: Observability Subsystem** — OpenTelemetry (OTel) connector added to `@smythos/sre`
+ - OTel spans cover agent execution, LLM calls, skill invocations, and error events
+ - OTel connector hotfixes applied shortly after initial rollout
+
+ ### Connectors
+
+ - **Pinecone**: fixed `delete namespace` and `delete datasource` operations
+ - **Milvus**: fixed `delete datasource` operation
+ - **DataPools v2**: datasource indexer component work-in-progress
+
+ ### Runtime
+
+ - `APIEndpoint` and `ServerlessCode` components: agent variable resolution fixed
+ - `HookAsync`: fixed; hookable class support added
+
+ ---
+
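Later versions of this OTel connector add telemetry redaction (the `redactHeaders` helper named in the 1.8.0 notes above). Its real signature is not shown in the diff; the sketch below only illustrates the general technique of masking a deny-list of sensitive header names before they are attached to spans.

```typescript
// Hypothetical deny-list; the real helper's list and behavior may differ.
const SENSITIVE_HEADERS = new Set(['authorization', 'cookie', 'x-api-key']);

// Returns a copy with sensitive values masked, leaving originals untouched
// so redaction cannot corrupt the live request.
function redactHeaders(headers: Record<string, string>): Record<string, string> {
    const out: Record<string, string> = {};
    for (const [key, value] of Object.entries(headers)) {
        out[key] = SENSITIVE_HEADERS.has(key.toLowerCase()) ? '[REDACTED]' : value;
    }
    return out;
}

const redacted = redactHeaders({ Authorization: 'Bearer abc123', Accept: 'application/json' });
```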
  ## [1.7.9] 2025-11-09
 
- - Fixed edge cases issues with SRE core initialization
- - Add support for custom chunkSize and chunkOverlap for VectorDB embeddings
- - normalized the embeddings parameters for VectorDB connectors
- - JSONVaultConnector now detects missing vault and prompts the user to create it
+ - Fixed edge-case issues with SRE core initialization
+ - Added support for custom chunkSize and chunkOverlap for VectorDB embeddings
+ - Normalized the embedding parameters for VectorDB connectors
+ - JSONVaultConnector now detects a missing vault and prompts the user to create it
+
+ ---
+
+ ## [1.7.7] 2025-11-08
+
+ ### Runtime
+
+ - Hotfix: SRE core initialization race condition with `ConnectorService` global instances
+ - VectorDB connector global instance handling stabilized
+
+ ---
+
+ ## [1.7.4] 2025-11-06
+
+ ### LLM
+
+ - Custom models: fixed resolution in the SDK
+ - Fallback model: parameters are now correctly filtered before the fallback call
+ - `TLLMParams` split into more granular types for improved readability and type safety
+
+ ### Runtime
+
+ - Global variable fixes across multiple components
+
+ ---
+
+ ## [1.7.2] 2025-11-04
+
+ ### Agent & Conversation
+
+ - `agentData` added to Conversation prompt hooks for richer hook context
+ - `getOpenAPIJSON()` function tweaks
+
+ ### Components
+
+ - `BinaryInput`: handle missing MIME type when asset is loaded from a URL
+
+ ### Connectors
+
+ - **Pinecone**: fallback to default metadata when retrieving a datasource that lacks metadata
+
+ ### Documentation
+
+ - LocalCache connector documentation added
+
+ ---
 
  ## [1.7.1] 2025-10-30
 
- - Core structures for triggers
- - Added Scheduler Connector
- - Added Advanced SRE hooks (Aspect Oriented Programming pattern to monitor and interact with internal SRE calls from outside)
- - SRE core is now accessible from the sdk package though @smythos/sdk/core
- - Implemented OAuth2 credentials manager for SRE (this will become the standard wrapper to handle all oAuth2 credentials)
- - APIEndpoint now supports custom code process (harmonizing SDK and SRE)
- - Multiple fixes for conversation manager
- - Fix for Gemini LLM infinite loop tool calls
- - Support local LLM credentials
- - add a killReason message when an SRE agent is killed
- - AgentDataConnector handles ephemeral agents data (for SDK agents)
- - Update Milvus data format to match the latest Milvus sdk release
-
- ## [1.6.0]
+ - Core structures for triggers
+ - Added Scheduler Connector
+ - Added advanced SRE hooks (an Aspect-Oriented Programming pattern to monitor and interact with internal SRE calls from outside)
+ - SRE core is now accessible from the SDK package through @smythos/sdk/core
+ - Implemented an OAuth2 credentials manager for SRE (this will become the standard wrapper for handling all OAuth2 credentials)
+ - APIEndpoint now supports a custom code process (harmonizing SDK and SRE)
+ - Multiple fixes for the conversation manager
+ - Fix for Gemini LLM infinite tool call loop
+ - Support for local LLM credentials
+ - Added a killReason message when an SRE agent is killed
+ - AgentDataConnector handles ephemeral agent data (for SDK agents)
+ - Updated the Milvus data format to match the latest Milvus SDK release
+
+ ---
+
+ ## [1.6.13] 2025-10-17
+
+ ### LLM & Models
+
+ - **Google AI**: fixed content structure for requests to prevent infinite function call loops
+ - **GPT-5 family**: PDF attachment support added
+ - Custom LLM credential resolution from vault keys
+ - Token limit validation now applies to legacy models only (lifted for newer models)
+ - `@google/generative-ai` dependency removed; fully migrated to `@google/genai`
+
+ ### Connectors & Runtime
+
+ - **Electron**: enhanced support; fixed incorrect vault search directory display
+ - **OAuth**: vault key resolution for OAuth flows
+ - SDK: ability to programmatically enable and disable planner mode
+
+ ### Triggers (experimental)
+
+ - Gmail and WhatsApp trigger improvements
+ - Trigger processing aligned with normal component execution (no input mapping required)
+ - Scheduler: support for suspending job runs in local mode
+
+ ---
+
+ ## [1.6.11] 2025-10-11
+
+ ### Hooks & Configuration
+
+ - **Advanced SRE Hooks** introduced (Aspect-Oriented Programming pattern): monitor and intercept internal SRE calls from outside the runtime
+ - Hooks added to `Agent` class and `ModelsProviderConnector`
+ - JSON vault connector improvements and documentation
+
+ ### Models
+
+ - JSON models provider: sanity checks for invalid JSON paths and automatic path search
+ - Default models path support (`SMYTH_PATH` env variable)
+ - Models provider hotfix for invalid JSON model resolve conditions
+
+ ### Triggers (experimental)
+
+ - Gmail trigger: experimental email fetch support
+ - WhatsApp trigger updates
+ - Conversation manager: `addTool` function tool parser fixed
+
+ ---
+
+ ## [1.6.6] 2025-10-02
+
+ ### Connectors
+
+ - **AWS Lambda**: retry logic added for IAM role propagation on first run
+ - **AWS Lambda**: retry logic added for Lambda function deployment
+ - User custom models: fetched and resolved from external source
+
+ ---
+
+ ## [1.6.1] 2025-09-30
+
+ ### LLM
+
+ - **Ollama**: native connector added with text completion and tool use support
+ - Fallback model execution implemented for user-configured custom LLMs
+ - Increased fallback token budget for custom LLM connectors
+
+ ### Triggers (experimental)
+
+ - Initial trigger infrastructure; Gmail trigger experiments
+
+ ---
+
+ ## [1.6.0] 2025-09-29
 
  ### Features
 
- - Add Memory Components
- - Add Milvus VectorDB Connector
- - Add support to Google embeddings
- - Add support to OpenAI responses API
- - Agent Runtime optimizations : better memory management + stability fixes
+ - Add Memory Components
+ - Add Milvus VectorDB Connector
+ - Add support for Google embeddings
+ - Add support for the OpenAI Responses API
+ - Agent Runtime optimizations: better memory management + stability fixes
 
  ### Code and tooling
 
- - Added multiple unit tests for a better code coveragge
- - Updated dependencies
- - Updated .cursor/rules/sre-ai-rules.mdc to enhance the qualty of AI based contributions
+ - Added multiple unit tests for better code coverage
+ - Updated dependencies
+ - Updated .cursor/rules/sre-ai-rules.mdc to enhance the quality of AI-based contributions
+
+ ---
+
+ ## [1.5.79] 2025-09-22
+
+ ### Runtime & Configuration
+
+ - `SMYTH_PATH` environment variable: define the default `.smyth` directory location
+ - Default models path support added
+ - Memory component fixes
+
+ ### LLM / Models
+
+ - `JSONModelProvider`: fixed race condition on model loading; fixed resolve condition for invalid JSON
+ - SDK Chat: fixed race condition leading to undefined agent team
+
+ ### VectorDB
+
+ - Fixed `vectorDBInstance` not returning texts properly
+ - Additional embedding models supported for Google Gemini
+ - VectorDB documentation added
+
+ ### MCP
+
+ - `MCPClient`: deprecated settings marked as optional
+ - Sanity check added for duplicate tool definitions in the Conversation manager
+ - MCP logs improved
+
+ ### Fixes
+
+ - `APICall`: OAuth hotfix
+ - `OpenAI` LLM: fixed non-streaming requests via Responses API
+ - Debug data no longer missing in certain edge cases
+
+ ---
 
  ## [v1.5.60]
 
  ### Features
 
- - Fixed memory leak in Agent context manager
- - Optimized performances and resolved a rare case causing CPU usage spikes
+ - Fixed memory leak in Agent context manager
+ - Optimized performance and resolved a rare case causing CPU usage spikes
+
+ ---
 
  ## [v1.5.50]
 
  ### Features
 
- - Added support for OpenAI Responses API
- - Added support for GPT-5 family models with reasoning capabilities .
- - MCP Client component : support for Streamable HTTP transport
+ - Added support for the OpenAI Responses API
+ - Added support for GPT-5 family models with reasoning capabilities
+ - MCP Client component: support for Streamable HTTP transport
+
+ ---
 
  ## [v1.5.31]
 
  ### LLM & Model Support:
 
- - Added support for new models (Claude 4, xAI/Grok 4, and more).
- - Improved model configuration, including support for unlisted/custom models
- - better handling of Anthropic tool calling.
- - Enhanced multimodal and streaming capabilities for LLMs.
+ - Added support for new models (Claude 4, xAI/Grok 4, and more).
+ - Improved model configuration, including support for unlisted/custom models.
+ - Better handling of Anthropic tool calling.
+ - Enhanced multimodal and streaming capabilities for LLMs.
 
  ### Components & Connectors:
 
- - Introduced AWS Lambda code component and connector.
- - Added serverless code component.
- - Enhanced and unified connectors for S3, Redis, LocalStorage, and JSON vault.
- - Added support for local storage cache and improved NKV (key-value) handling.
+ - Introduced AWS Lambda code component and connector.
+ - Added serverless code component.
+ - Enhanced and unified connectors for S3, Redis, LocalStorage, and JSON vault.
+ - Added support for local storage cache and improved NKV (key-value) handling.
 
  ### Fixes
 
- - Numerous bug fixes for LLM connectors, model selection, and streaming.
- - Fixed issues with S3 connector initialization, serverless code component, and vault key fetching.
- - Improved error handling for binary input, file uploads, and API calls.
- - Fixed issues with usage reporting, especially for user-managed keys and custom models.
+ - Numerous bug fixes for LLM connectors, model selection, and streaming.
+ - Fixed issues with S3 connector initialization, the serverless code component, and vault key fetching.
+ - Improved error handling for binary input, file uploads, and API calls.
+ - Fixed issues with usage reporting, especially for user-managed keys and custom models.
 
  ### Improvements
 
- - Optimized build processes.
- - Improved strong typing and code auto-completion.
+ - Optimized build processes.
+ - Improved strong typing and code auto-completion.
+
+ ---
 
  ## [v1.5.0] SmythOS becomes open source!
 
  ### Features
 
- - Moved to a monorepo structure
- - Implemented an SDK that provides an abstracted interface for all SmythOS components
- - Implemented a CLI to help running agents and scaffolding SDK and SRE projects along
+ - Moved to a monorepo structure
+ - Implemented an SDK that provides an abstracted interface for all SmythOS components
+ - Implemented a CLI to help run agents and scaffold SDK and SRE projects
+
+ ---
 
  ## [v1.4.0]
 
  ### Features
 
- - New connectors : JSON Account connector, RAMVec vectordb, localStorage
- - Conversation manager: better handling of agent chats
- - logger becomes a connector
- - Add support for usage reporting
- - LLM : new models provider connector allows loading custom models including local models
+ - New connectors: JSON Account connector, RAMVec VectorDB, localStorage
+ - Conversation manager: better handling of agent chats
+ - Logger becomes a connector
+ - Added support for usage reporting
+ - LLM: new models provider connector allows loading custom models, including local models
+
+ ---
 
  ## [v1.2.0]
 
  ### Features
 
- - New connectors : AWS Secret Manager Vault, Redis, and RAM Cache
- - Conversation manager: better handling of agent chats
- - All connectors inherit from SecureConnector using a common security layer
- - LLM : support for anthropic, Groq and Gemini
+ - New connectors: AWS Secret Manager Vault, Redis, and RAM Cache
+ - Conversation manager: better handling of agent chats
+ - All connectors inherit from SecureConnector using a common security layer
+ - LLM: support for Anthropic, Groq, and Gemini
+
+ ---
 
  ## [v1.1.0]
 
  ### Features
 
- - New connectors : S3, Pinecone, and local vault
- - LLM : implemented common LLM interface to support more providers
+ - New connectors: S3, Pinecone, and local vault
+ - LLM: implemented a common LLM interface to support more providers
+
+ ---
 
  ## [v1.0.0]
 
  ### Features
 
- - Initial release
- - LLM : support for openai API
- - Smyth Runtime Core
- - Connectors Serivece
- - Subsystems architecture
- - Security & ACL helpers
- - Implemented services : AgentData, Storage, Account, VectorDB
+ - Initial release
+ - LLM: support for the OpenAI API
+ - Smyth Runtime Core
+ - Connectors Service
+ - Subsystems architecture
+ - Security & ACL helpers
+ - Implemented services: AgentData, Storage, Account, VectorDB