@smythos/sre 1.8.0 → 1.8.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG CHANGED
@@ -77,6 +77,149 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Fixed HookAsync; added support for hookable classes
  - Secrets Manager usage example and documentation added

+ ---
+
+ ## [1.7.43] 2026-01-22
+
+ ### LLM
+
+ - **Event emitter standardization**: LLM connectors now never throw — all errors are emitted as events instead
+ - **Fallback proxy pattern**: initial implementation of a fallback architecture for custom LLM connectors
+
+ ### Conversation & Agent
+
+ - ConversationHelper: errors from `toolsPromise` are now correctly propagated (previously swallowed)
+ - OTel: error handler added to OTel class; consolidated error reporting logic
+
+ ---
+
+ ## [1.7.42] 2026-01-20
+
+ ### LLM
+
+ - **Abort Controller**: implemented `abortSignal` support and `TLLMEvent.Abort` event across all LLM connectors
+ - **Finish reason normalization**: introduced `TLLMFinishReason` enum and standardized finish reason values from all connectors
+
+ ### Observability
+
+ - Agent name added to OTel telemetry logs for improved tracking
+ - OTel error tracking enhanced: error events captured at conversation-level spans
+
+ ### SDK
+
+ - Agent cache support added to the Smyth SDK
+
+ ---
+
+ ## [1.7.41] 2026-01-08
+
+ ### Connectors
+
+ - **New: SQLite Agent Data Connector** — lightweight persistent storage for ephemeral and SDK agents
+
+ ### LLM — Google AI
+
+ - Fixed `functionResponse.response` structure for Google AI requests
+ - Fixed text part extraction from Google AI responses
+ - Fixed system instruction propagation for Google AI
+
+ ### Observability
+
+ - OTel spans now include session ID and workflow details for richer tracing
+ - Improved debug logging for `ForEach`, `LogicAnd`, and `Async` components
+
+ ---
+
+ ## [1.7.40] 2025-12-04
+
+ ### LLM & Model Support
+
+ - **GPT-5.2**: `xhigh` reasoning effort level support
+ - **Claude 4**: streaming enabled for Classifier and LLM Assistant components
+ - Flash model family (Gemini) now detected via generic pattern — no need for explicit model listing
+ - **Gemini**: fixed multiple-tool-call logging; fixed infinite tool call loop
+ - Maximum tool call limit per session added (`_maxToolCallsPerSession`); defaults to `Infinity`
+
+ ### Observability
+
+ - OTel spans now include `orgTier` and `orgSlot` attributes for multi-tenant tracking
+ - OTel: Agent.Skill spans now propagated via HTTP headers across service boundaries
+ - Team ID added to OTel spans
+ - OTel: graceful handling when no endpoint is configured
+
+ ### Connectors & Storage
+
+ - **Secret Manager**: `smythos` set as default secret prefix
+ - **RAG v2** (work-in-progress): namespace parsing fixes for NKV, improved embeddings credentials resolution
+ - Legacy namespace IDs resolved correctly
+
+ ### Components & Runtime
+
+ - **TemplateString** parser: correctly handles falsy values (`0`, `false`, `""`)
+ - **Sub-Agent component**: JSON response mode now supported
+ - **WebScrape**: `country` proxy option added
+ - **Search components**: template variables supported for search location fields
+ - `modelEntryName` property exposed on LLM connectors for runtime model identification
+ - LLM response event handling improved
+
+ ### Documentation
+
+ - Secrets Manager example and documentation added
+
+ ---
+
+ ## [1.7.20] 2025-11-26
+
+ ### Runtime
+
+ - Agent variables are now resolved before performing type inference (fixes incorrect type coercion)
+ - Empty LLM response errors now include the field name for easier debugging
+ - Base64 detection: removed unreliable data-length heuristic
+
+ ### Configuration
+
+ - `SMYTH_PATH` now accepts dot-segments (`.`) to watch models from the default location
+ - OTel output logging added for LLM responses
+
+ ---
+
+ ## [1.7.18] 2025-11-19
+
+ ### LLM — Google AI / Gemini
+
+ - Google AI: tier and cache now handled correctly per request
+ - **Gemini 3**: `reasoningEffort` config support
+ - **Gemini 3**: `thoughtSignature` attachment for function calling (required by the Gemini 3 API)
+
+ ### Connectors & Storage
+
+ - **RAG v2** (WIP): embeddings credentials resolved from either vault or internal config; metadata fix
+ - **Pinecone**: constructor parameters made optional
+ - Vector embedders: legacy OpenAI embedder entries hidden from the selection UI
+
+ ---
+
+ ## [1.7.15] 2025-11-13
+
+ ### Observability
+
+ - **New: Observability Subsystem** — OpenTelemetry (OTel) connector added to `@smythos/sre`
+ - OTel spans cover agent execution, LLM calls, skill invocations, and error events
+ - OTel connector hotfixes applied shortly after initial rollout
+
+ ### Connectors
+
+ - **Pinecone**: fixed `delete namespace` and `delete datasource` operations
+ - **Milvus**: fixed `delete datasource` operation
+ - **DataPools v2**: datasource indexer component work-in-progress
+
+ ### Runtime
+
+ - `APIEndpoint` and `ServerlessCode` components: agent variable resolution fixed
+ - `HookAsync`: fixed; hookable class support added
+
+ ---
+
  ## [1.7.9] 2025-11-09

  - Fixed edge case issues with SRE core initialization
@@ -84,6 +227,52 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - normalized the embeddings parameters for VectorDB connectors
  - JSONVaultConnector now detects missing vault and prompts the user to create it

+ ---
+
+ ## [1.7.7] 2025-11-08
+
+ ### Runtime
+
+ - Hotfix: SRE core initialization race condition with `ConnectorService` global instances
+ - VectorDB connector global instance handling stabilized
+
+ ---
+
+ ## [1.7.4] 2025-11-06
+
+ ### LLM
+
+ - Custom models: fixed resolution in the SDK
+ - Fallback model: parameters are now correctly filtered before the fallback call
+ - `TLLMParams` split into more granular types for improved readability and type safety
+
+ ### Runtime
+
+ - Global variable fixes across multiple components
+
+ ---
+
+ ## [1.7.2] 2025-11-04
+
+ ### Agent & Conversation
+
+ - `agentData` added to Conversation prompt hooks for richer hook context
+ - `getOpenAPIJSON()` function tweaks
+
+ ### Components
+
+ - `BinaryInput`: handle missing MIME type when an asset is loaded from a URL
+
+ ### Connectors
+
+ - **Pinecone**: fallback to default metadata when retrieving a datasource that lacks metadata
+
+ ### Documentation
+
+ - LocalCache connector documentation added
+
+ ---
+
  ## [1.7.1] 2025-10-30

  - Core structures for triggers
@@ -99,7 +288,79 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - AgentDataConnector handles ephemeral agents data (for SDK agents)
  - Update Milvus data format to match the latest Milvus sdk release

- ## [1.6.0]
+ ---
+
+ ## [1.6.13] 2025-10-17
+
+ ### LLM & Models
+
+ - **Google AI**: fixed content structure for requests to prevent infinite function call loops
+ - **GPT-5 family**: PDF attachment support added
+ - Custom LLM credential resolution from vault keys
+ - Token limit validation now applies to legacy models only (lifted for newer models)
+ - `@google/generative-ai` dependency removed; fully migrated to `@google/genai`
+
+ ### Connectors & Runtime
+
+ - **Electron**: enhanced support; fixed incorrect vault search directory display
+ - **OAuth**: vault key resolution for OAuth flows
+ - SDK: ability to programmatically enable and disable planner mode
+
+ ### Triggers (experimental)
+
+ - Gmail and WhatsApp trigger improvements
+ - Trigger processing aligned with normal component execution (no input mapping required)
+ - Scheduler: support for suspending job runs in local mode
+
+ ---
+
+ ## [1.6.11] 2025-10-11
+
+ ### Hooks & Configuration
+
+ - **Advanced SRE Hooks** introduced (Aspect-Oriented Programming pattern): monitor and intercept internal SRE calls from outside the runtime
+ - Hooks added to `Agent` class and `ModelsProviderConnector`
+ - JSON vault connector improvements and documentation
+
+ ### Models
+
+ - JSON models provider: sanity checks for invalid JSON paths and automatic path search
+ - Default models path support (`SMYTH_PATH` env variable)
+ - Models provider hotfix for invalid JSON model resolve conditions
+
+ ### Triggers (experimental)
+
+ - Gmail trigger: experimental email fetch support
+ - WhatsApp trigger updates
+ - Conversation manager: `addTool` function tool parser fixed
+
+ ---
+
+ ## [1.6.6] 2025-10-02
+
+ ### Connectors
+
+ - **AWS Lambda**: retry logic added for IAM role propagation on first run
+ - **AWS Lambda**: retry logic added for Lambda function deployment
+ - User custom models: fetched and resolved from external source
+
+ ---
+
+ ## [1.6.1] 2025-09-30
+
+ ### LLM
+
+ - **Ollama**: native connector added with text completion and tool use support
+ - Fallback model execution implemented for user-configured custom LLMs
+ - Increased fallback token budget for custom LLM connectors
+
+ ### Triggers (experimental)
+
+ - Initial trigger infrastructure; Gmail trigger experiments
+
+ ---
+
+ ## [1.6.0] 2025-09-29

  ### Features

@@ -115,6 +376,41 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Updated dependencies
  - Updated .cursor/rules/sre-ai-rules.mdc to enhance the quality of AI-based contributions

+ ---
+
+ ## [1.5.79] 2025-09-22
+
+ ### Runtime & Configuration
+
+ - `SMYTH_PATH` environment variable: define the default `.smyth` directory location
+ - Default models path support added
+ - Memory component fixes
+
+ ### LLM / Models
+
+ - `JSONModelProvider`: fixed race condition on model loading; fixed resolve condition for invalid JSON
+ - SDK Chat: fixed race condition leading to undefined agent team
+
+ ### VectorDB
+
+ - Fixed `vectorDBInstance` not returning texts properly
+ - Additional embedding models supported for Google Gemini
+ - VectorDB documentation added
+
+ ### MCP
+
+ - `MCPClient`: deprecated settings marked as optional
+ - Sanity check added for duplicate tool definitions in the Conversation manager
+ - MCP logs improved
+
+ ### Fixes
+
+ - `APICall`: OAuth hotfix
+ - `OpenAI` LLM: fixed non-streaming requests via Responses API
+ - Debug data no longer missing in certain edge cases
+
+ ---
+
  ## [v1.5.60]

  ### Features
@@ -122,6 +418,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Fixed memory leak in Agent context manager
  - Optimized performance and resolved a rare case causing CPU usage spikes

+ ---
+
  ## [v1.5.50]

  ### Features
@@ -130,6 +428,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Added support for GPT-5 family models with reasoning capabilities.
  - MCP Client component: support for Streamable HTTP transport

+ ---
+
  ## [v1.5.31]

  ### LLM & Model Support:
@@ -158,6 +458,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Optimized build processes.
  - Improved strong typing and code auto-completion.

+ ---
+
  ## [v1.5.0] SmythOS becomes open source!

  ### Features
@@ -166,6 +468,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Implemented an SDK that provides an abstracted interface for all SmythOS components
  - Implemented a CLI to help run agents and scaffold SDK and SRE projects

+ ---
+
  ## [v1.4.0]

  ### Features
@@ -176,6 +480,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Add support for usage reporting
  - LLM: new models provider connector allows loading custom models, including local models

+ ---
+
  ## [v1.2.0]

  ### Features
@@ -185,6 +491,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - All connectors inherit from SecureConnector using a common security layer
  - LLM: support for Anthropic, Groq, and Gemini

+ ---
+
  ## [v1.1.0]

  ### Features
@@ -192,6 +500,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - New connectors: S3, Pinecone, and local vault
  - LLM: implemented a common LLM interface to support more providers

+ ---
+
  ## [v1.0.0]

  ### Features