@petrgrishin/ai-sdk-ollama 3.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,719 @@
1
+ # Changelog
2
+
3
+ ## 3.0.2
4
+
5
+ ### Patch Changes
6
+
7
+ - fix(chat-language-model): ensure consistent reasoning event IDs
8
+
9
+ ## 3.0.1
10
+
11
+ ### Patch Changes
12
+
13
+ - fa45199: Fix code formatting issues across 9 source files to ensure consistent code style and pass quality checks.
14
+
15
+ ## 3.0.0
16
+
17
+ ### Major Changes
18
+
19
+ - 22d327f: Migrate to AI SDK v6 with full compatibility and alignment with official SDK.
20
+
21
+ ## Breaking Changes
22
+ - **AI SDK v6 Required**: Now requires `ai@^6.0.0` (previously `ai@^5.0.0`)
23
+ - **Removed Custom Utilities**: Deleted custom implementations of utilities now available in official AI SDK:
24
+ - `createAsyncIterableStream`, `createStitchableStream`, `createResolvablePromise`, `fixJson`, `parsePartialJsonAsync` removed
25
+ - Use official AI SDK exports from `'ai'` instead: `parsePartialJson`, `simulateReadableStream`, `smoothStream`
26
+ - **Removed Custom Middleware**: Deleted custom middleware implementations (`wrapLanguageModel`, `defaultSettingsMiddleware`, `extractReasoningMiddleware`, `simulateStreamingMiddleware`)
27
+ - All middleware now re-exported from official AI SDK
28
+ - **Removed Custom Agent**: Deleted custom `ToolLoopAgent` implementation
29
+ - Use official `ToolLoopAgent` from `'ai'` package
30
+ - `toolCalled` helper removed, use `hasToolCall` from `'ai'` instead
31
+ - **Structured Output API**: `experimental_output` promoted to stable `output` in AI SDK v6
32
+ - **Usage Properties**: Token usage now uses `inputTokens`/`outputTokens` instead of `promptTokens`/`completionTokens`
33
+
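For the removed middleware, the official `'ai'` re-exports drop in directly; a minimal sketch (the model id and tag name are illustrative, not part of this release):

```typescript
import { generateText, wrapLanguageModel, extractReasoningMiddleware } from 'ai';
import { ollama } from 'ai-sdk-ollama';

// Wrap an Ollama model with the official reasoning-extraction middleware
// instead of the removed custom implementation.
const model = wrapLanguageModel({
  model: ollama('deepseek-r1'),
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
});

const { text } = await generateText({ model, prompt: 'Why is the sky blue?' });
```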
34
+ ## New Features
35
+ - **Full AI SDK v6 Compatibility**: All features align with official AI SDK v6 specification
36
+ - **Re-exported Utilities**: Stream utilities, middleware, and agents from official SDK for consistency
37
+ - **Improved Type Safety**: Full TypeScript support with LanguageModelV3 specification
38
+ - **Reranking Support**: Native reranking API support (AI SDK v6 feature)
39
+ - **MCP Support**: Full MCP integration support (OAuth, resources, prompts, elicitation)
40
+
41
+ ## Migration Guide
42
+ 1. **Update Dependencies**:
43
+ ```bash
44
+ npm install ai-sdk-ollama ai@^6.0.0
45
+ ```
46
+ 2. **Update Imports**:
47
+
48
+ ```typescript
49
+ // Before
50
+ import { ToolLoopAgent, toolCalled } from 'ai-sdk-ollama';
51
+ import { parsePartialJson } from 'ai-sdk-ollama';
52
+
53
+ // After
54
+ import { ollama } from 'ai-sdk-ollama';
55
+ import { ToolLoopAgent, hasToolCall, parsePartialJson } from 'ai';
56
+ ```
57
+
58
+ 3. **Update Structured Output**:
59
+
60
+ ```typescript
61
+ // Before
62
+ experimental_output: Output.object({ schema });
63
+
64
+ // After
65
+ output: Output.object({ schema });
66
+ ```
67
+
68
+ 4. **Update Stop Conditions**:
69
+
70
+ ```typescript
71
+ // Before
72
+ stopWhen: [toolCalled('done')];
73
+
74
+ // After
75
+ stopWhen: [hasToolCall('done')];
76
+ ```
77
+
78
+ 5. **Update Usage Properties**:
79
+
80
+ ```typescript
81
+ // Before
82
+ result.usage.promptTokens;
83
+ result.usage.completionTokens;
84
+
85
+ // After
86
+ result.usage.inputTokens;
87
+ result.usage.outputTokens;
88
+ result.totalUsage.inputTokens; // For agents
89
+ ```
90
+
91
+ ## Improvements
92
+ - **Reduced Bundle Size**: Removed duplicate implementations, now re-exporting from official SDK
93
+ - **Better Maintainability**: Aligned with official SDK to prevent drift
94
+ - **Enhanced Type Safety**: Full LanguageModelV3 specification support
95
+ - **Consistent APIs**: All utilities follow official AI SDK patterns
96
+
97
+ ## 2.2.0
98
+
99
+ ### Minor Changes
100
+
101
+ - 72a26e3: ### Fixed
102
+ - **Real-time streaming for flow-based UIs**: Fixed issue where `streamText`'s `fullStream` was waiting for completion before emitting events, causing flow interfaces to only receive control events (start, finish) without text or tool call events. The enhanced `fullStream` now streams all events (text-delta, tool-call, tool-result) in real-time as they occur. Resolves [#344](https://github.com/jagreehal/ai-sdk-ollama/issues/344).
103
+
104
+ ### Added
105
+ - **`stopWhen` support**: Added support for the `stopWhen` property in both `streamText` and `generateText` functions, allowing users to customize multi-turn tool calling behavior. When not provided and tools are enabled, defaults to `stepCountIs(5)` for multi-turn tool calling.
106
+
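A minimal sketch of the new `stopWhen` option with the enhanced `generateText` wrapper (the model name and `weatherTool` are illustrative):

```typescript
import { generateText, ollama } from 'ai-sdk-ollama';
import { stepCountIs } from 'ai';

const result = await generateText({
  model: ollama('llama3.2'),
  prompt: 'What is the weather in London?',
  tools: { weather: weatherTool }, // weatherTool is assumed to be defined elsewhere
  // Optional: without this, tool-enabled calls default to stepCountIs(5).
  stopWhen: stepCountIs(3),
});
```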
107
+ ### Improved
108
+ - **AI SDK compatibility**: Enhanced both `streamText` and `generateText` to automatically support all AI SDK properties using `Parameters<typeof _streamText>[0]` type extraction, ensuring 100% forward compatibility with future AI SDK changes without manual updates.
109
+
110
+ ## 2.1.0
111
+
112
+ ### Minor Changes
113
+
114
+ - 285e4e4: Enhanced `streamText` wrapper to support `fullStream` with synthesis
115
+ - Added synthesis support for `fullStream` in addition to `textStream`
116
+ - When tool-calling models (like `gpt-oss:120b`) invoke tools without generating text first, the enhanced `fullStream` now automatically synthesizes a response based on tool results
117
+ - Emits proper `TextStreamPart` events (`text-start`, `text-delta`, `text-end`) for flow-based UIs
118
+ - Fixes issue where flow interfaces only received control events (`start`, `finish`) without any text content when models called tools first
119
+
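A sketch of consuming the enhanced `fullStream` (the model and tool names are illustrative; the `text` property on delta parts follows AI SDK v5 naming):

```typescript
import { streamText, ollama } from 'ai-sdk-ollama';

const result = streamText({
  model: ollama('gpt-oss:120b'),
  prompt: 'What time is it in Tokyo?',
  tools: { time: timeTool }, // assumed to be defined elsewhere
});

for await (const part of result.fullStream) {
  if (part.type === 'text-delta') {
    process.stdout.write(part.text); // synthesized text arrives as ordinary deltas
  } else if (part.type === 'tool-call') {
    console.log('tool call:', part.toolName);
  }
}
```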
120
+ ## 2.0.1
121
+
122
+ ### Patch Changes
123
+
124
+ - 826eb83: Add API key configuration support for cloud Ollama services
125
+ - Added `apiKey` parameter to `createOllama` options
126
+ - API key is automatically set as `Authorization: Bearer {apiKey}` header
127
+ - Existing Authorization headers take precedence over apiKey
128
+ - Added header normalization to handle Headers instances, arrays, and plain objects
129
+ - Updated README with API key configuration examples for different runtimes (Node.js, Bun, Deno, serverless)
130
+
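A minimal sketch of the new `apiKey` option (the cloud endpoint URL and model name are illustrative):

```typescript
import { createOllama } from 'ai-sdk-ollama';

const ollama = createOllama({
  baseURL: 'https://ollama.com/api', // illustrative cloud endpoint
  apiKey: process.env.OLLAMA_API_KEY, // sent as `Authorization: Bearer <key>` unless you set your own header
});

const model = ollama('gpt-oss:20b-cloud');
```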
131
+ ## 2.0.0
132
+
133
+ ### Major Changes
134
+
135
+ - a23b4a5: ## Breaking Change: Rename `reasoning` to `think`
136
+
137
+ The `reasoning` parameter in `OllamaChatSettings` has been renamed to `think` to align with Ollama's native API parameter name. This change ensures consistency with the official Ollama API and improves type safety by using `Pick<ChatRequest, 'keep_alive' | 'format' | 'tools' | 'think'>`.
138
+
139
+ ### Migration Guide
140
+
141
+ **Before:**
142
+
143
+ ```typescript
144
+ const model = ollama('gpt-oss:20b-cloud', { reasoning: true });
145
+ ```
146
+
147
+ **After:**
148
+
149
+ ```typescript
150
+ const model = ollama('gpt-oss:20b-cloud', { think: true });
151
+ ```
152
+
153
+ ### What Changed
154
+ - Removed `reasoning?: boolean` from `OllamaChatSettings`
155
+ - Added `think` parameter via `Pick<ChatRequest, 'keep_alive' | 'format' | 'tools' | 'think'>`
156
+ - Updated all internal references from `this.settings.reasoning` to `this.settings.think`
157
+ - Updated examples and tests to use the new `think` parameter
158
+
159
+ The functionality remains the same - only the parameter name has changed to match Ollama's API.
160
+
161
+ ## 1.1.0
162
+
163
+ ### Minor Changes
164
+
165
+ - 201b13b: Add `keep_alive` parameter support and improve type safety
166
+
167
+ ### Added
168
+ - **`keep_alive` parameter**: Control how long models stay loaded in memory after requests
169
+ - Accepts duration strings (e.g., `"10m"`, `"24h"`), numbers in seconds, negative numbers for indefinite, or `0` to unload immediately
170
+ - Works across all chat operations (generate, stream, tool calling, object generation)
171
+
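A minimal sketch of the `keep_alive` setting (the model name and duration are illustrative):

```typescript
import { generateText } from 'ai';
import { ollama } from 'ai-sdk-ollama';

const { text } = await generateText({
  // Keep the model loaded for 10 minutes after this request; a number of
  // seconds, a negative value (indefinite), or 0 (unload immediately) also work.
  model: ollama('llama3.2', { keep_alive: '10m' }),
  prompt: 'Hello!',
});
```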
172
+ ### Improved
173
+ - **Type safety**: Now uses `Pick<ChatRequest, 'keep_alive' | 'format' | 'tools'>` from the official ollama-js package
174
+ - **Type consistency**: `OllamaProviderSettings` extends `Pick<Config, 'headers' | 'fetch'>` and `OllamaEmbeddingSettings` extends `Pick<EmbedRequest, 'dimensions'>`
175
+ - **Type exports**: Re-export more types from ollama-js for better developer experience (`ChatRequest`, `EmbedRequest`, `Config`, `ToolCall`, `Tool`, `Message`, `ChatResponse`, `EmbedResponse`)
176
+
177
+ ## 1.0.2
178
+
179
+ ### Patch Changes
180
+
181
+ - 444c16e: Update AI SDK dependencies to version 5.0.97 and @ai-sdk/react to 2.0.97 in examples. Also update rimraf to 6.1.2 across the project.
182
+
183
+ ## 1.0.1
184
+
185
+ ### Patch Changes
186
+
187
+ - 2110218: Update dependencies:
188
+ - Update `@ai-sdk/provider-utils` to `^3.0.15`
189
+ - Update `ai` peer dependency to `^5.0.86`
190
+ - Update `vitest` to `^4.0.6`
191
+
192
+ ## 1.0.0
193
+
194
+ ### Major Changes
195
+
196
+ - 30d2450: ## Fixed Tool Calling Message Conversion
197
+
198
+ Fixed critical issues with tool calling message conversion that were preventing proper multi-turn conversations:
199
+
200
+ ### Changes
201
+ - **Tool result messages**: Now use proper `role: 'tool'` with `tool_name` field instead of `role: 'user'` with `[Tool Result]:` prefix
202
+ - **Assistant messages**: Properly include `tool_calls` array for tool execution
203
+ - **Finish reason handling**: Returns `'tool-calls'` when tools execute to enable conversation continuation
204
+ - **Reliable tool calling**: Disabled by default (`?? false`) for better AI SDK compatibility
205
+ - **Test updates**: Updated all test expectations to match new message format
206
+
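The converted messages now sent to Ollama look roughly like this (values are illustrative):

```typescript
const messages = [
  // Assistant turns now carry the tool_calls array for tool execution.
  {
    role: 'assistant',
    content: '',
    tool_calls: [{ function: { name: 'weather', arguments: { city: 'London' } } }],
  },
  // Tool results use role 'tool' with tool_name instead of a
  // role 'user' message prefixed with '[Tool Result]:'.
  { role: 'tool', tool_name: 'weather', content: '{"temperature":18}' },
];
```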
207
+ ### Impact
208
+ - ✅ Standard AI SDK `generateText` and `streamText` now work perfectly with the ai-sdk-ollama provider
209
+ - ✅ Full compatibility with AI SDK ecosystem and multi-turn tool calling
210
+ - ✅ Enhanced functions still provide synthesis and reliability features when needed
211
+ - ✅ Users can choose between standard (compatible) or enhanced (reliable) approaches
212
+
213
+ This ensures both standard AI SDK patterns and enhanced ai-sdk-ollama functions work seamlessly for tool calling scenarios.
214
+
215
+ ## 0.13.0
216
+
217
+ ### Minor Changes
218
+
219
+ - 7bb9fdd: update deps
220
+
221
+ ## 0.12.0
222
+
223
+ ### Minor Changes
224
+
225
+ - 34c3e3b: # Enhanced generateText with Automatic Response Synthesis
226
+
227
+ ## What's New
228
+ - **Automatic Response Synthesis**: `generateText` now automatically detects when tools execute but return empty responses and synthesizes a comprehensive response using the tool results
229
+ - **Prototype Preservation**: Enhanced responses preserve all original AI SDK methods and getters using proper prototype inheritance
230
+ - **Experimental Output Support**: New opt-in `enableToolsWithStructuredOutput` feature allows combining tool calling with `experimental_output` (structured output)
231
+ - **Type Safety**: Full TypeScript support with proper generic type inference for `experimental_output` schemas
232
+
233
+ ## Breaking Changes
234
+
235
+ None - this is a backward-compatible enhancement.
236
+
237
+ ## Migration Guide
238
+
239
+ No migration required. The enhanced behavior is enabled by default and preserves all existing functionality.
240
+
241
+ ### New Features
242
+
243
+ #### Automatic Synthesis (Default)
244
+
245
+ ```typescript
246
+ import { generateText, ollama } from 'ai-sdk-ollama';
247
+
248
+ // Tools execute but return empty response? No problem!
249
+ const result = await generateText({
250
+ model: ollama('llama3.2'),
251
+ prompt: 'Calculate 25 * 1.08',
252
+ tools: { math: mathTool },
253
+ });
254
+
255
+ // Result now includes synthesized text explaining the calculation
256
+ console.log(result.text); // "The calculation 25 * 1.08 equals 27..."
257
+ ```
258
+
259
+ #### Experimental Output with Tools (Opt-in)
260
+
261
+ ```typescript
262
+ import { z } from 'zod';
263
+
264
+ const result = await generateText({
265
+ model: ollama('llama3.2'),
266
+ prompt: 'Get weather and format as JSON',
267
+ tools: { weather: weatherTool },
268
+ toolChoice: 'required',
269
+ experimental_output: z.object({
270
+ location: z.string(),
271
+ temperature: z.number(),
272
+ condition: z.string(),
273
+ }),
274
+ enhancedOptions: {
275
+ enableToolsWithStructuredOutput: true, // Opt-in feature
276
+ },
277
+ });
278
+
279
+ // Combines tool execution with structured output
280
+ console.log(result.experimental_output); // Properly typed schema
281
+ ```
282
+
283
+ ## Technical Details
284
+ - Uses `Object.create()` and `Object.getOwnPropertyDescriptors()` to preserve prototype methods
285
+ - Synthesis attempts up to 2 times with configurable prompts
286
+ - Maintains full compatibility with AI SDK's type system
287
+ - Enhanced responses include combined token usage from both tool execution and synthesis phases
288
+
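A minimal sketch of the prototype-preservation technique mentioned above (the helper name is illustrative):

```typescript
// Clone an object while keeping its prototype chain, getters, and methods intact.
function cloneWithPrototype<T extends object>(original: T, overrides: Partial<T>): T {
  const clone = Object.create(
    Object.getPrototypeOf(original),
    Object.getOwnPropertyDescriptors(original),
  );
  return Object.assign(clone, overrides);
}
```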
289
+ ## 0.11.0
290
+
291
+ ### Minor Changes
292
+
293
+ - fb666c1: Enhanced JSON repair for reliable object generation
294
+ - **New Feature**: Added `enhancedRepairText` function that automatically fixes 14+ types of common JSON issues from LLM outputs
295
+ - **Improved Reliability**: Enhanced `objectGenerationOptions` with comprehensive JSON repair capabilities including:
296
+ - Markdown code block extraction
297
+ - Comment removal
298
+ - Smart quote fixing
299
+ - Unquoted key handling
300
+ - Trailing comma removal
301
+ - Incomplete object/array completion
302
+ - Python constant conversion (True/False/None)
303
+ - JSONP wrapper removal
304
+ - Single quote to double quote conversion
305
+ - URL and escaped quote handling
306
+ - Ellipsis pattern resolution
307
+ - **New Example**: Added `json-repair-example.ts` demonstrating enhanced repair capabilities
308
+ - **Enhanced Configuration**: `enableTextRepair` now defaults to `true` for better out-of-the-box reliability
309
+ - **Comprehensive Testing**: Added extensive test suite covering all repair scenarios
310
+ - **Backward Compatible**: All existing functionality remains unchanged
311
+
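A minimal sketch of object generation benefiting from the repair pipeline, assuming `generateObjectOllama` mirrors the AI SDK `generateObject` signature:

```typescript
import { generateObjectOllama, ollama } from 'ai-sdk-ollama';
import { z } from 'zod';

// Malformed model output (markdown fences, trailing commas, True/False/None, ...)
// is repaired before parsing because enableTextRepair now defaults to true.
const { object } = await generateObjectOllama({
  model: ollama('llama3.2'),
  schema: z.object({ name: z.string(), active: z.boolean() }),
  prompt: 'Describe a user as JSON.',
});
```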
312
+ ## 0.10.1
313
+
314
+ ### Patch Changes
315
+
316
+ - 7108a12: Fix synthesis conflict between `messages` and `prompt` parameters in `streamText` and `generateText`. The synthesis logic now correctly detects whether the original call used `messages` or `prompt` and constructs the follow-up synthesis call accordingly, preventing "prompt field is not supported when messages is specified" errors when using the `messages` + `system` pattern with tool calling.
317
+
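The `messages` + `system` pattern that previously failed now works as expected; a minimal sketch (`mathTool` is illustrative):

```typescript
import { generateText, ollama } from 'ai-sdk-ollama';

const result = await generateText({
  model: ollama('llama3.2'),
  system: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'What is 25 * 1.08?' }],
  tools: { math: mathTool },
});
// Any follow-up synthesis call now reuses `messages` rather than `prompt`.
```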
318
+ ## 0.10.0
319
+
320
+ ### Minor Changes
321
+
322
+ - c6bb667: ## ✨ Browser Example: React + AI Elements Migration
323
+
324
+ ### 🚀 Major Changes
325
+
326
+ **Browser Example Overhaul:**
327
+ - **Migrated from vanilla JS to React**: Complete rewrite using React 19 and modern hooks
328
+ - **Added AI Elements integration**: Now uses `@ai-sdk/react` with `useChat` hook and AI Elements components
329
+ - **Implemented shadcn/ui components**: Modern, accessible UI components with Tailwind CSS
330
+ - **Enhanced streaming architecture**: Uses `toUIMessageStreamResponse()` for proper UI message handling
331
+ - **Added comprehensive AI Elements**: 20+ AI-specific components (Message, Response, Conversation, PromptInput, etc.)
332
+
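The streaming architecture described above boils down to a handler along these lines (names are illustrative; the actual plugin code lives in `examples/browser/vite-api-plugin.ts`):

```typescript
import { streamText, convertToModelMessages, type UIMessage } from 'ai';
import { ollama } from 'ai-sdk-ollama';

export async function handleChat(messages: UIMessage[]) {
  const result = streamText({
    model: ollama('llama3.2'),
    messages: convertToModelMessages(messages),
  });
  // Returns a Response whose stream the useChat hook consumes directly.
  return result.toUIMessageStreamResponse();
}
```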
333
+ **New Features:**
334
+ - Real-time model loading and selection from Ollama API
335
+ - Dynamic connection status with visual indicators
336
+ - Model size formatting and fallback options
337
+ - Enhanced error handling and loading states
338
+ - Responsive design with modern card-based layout
339
+
340
+ **Technical Improvements:**
341
+ - TypeScript-first implementation with full type safety
342
+ - Vite API plugin for seamless Ollama integration
343
+ - Proper message streaming with UI message format
344
+ - Component-based architecture for better maintainability
345
+
346
+ ### 📦 Dependencies Updated
347
+
348
+ **AI SDK:**
349
+ - `ai`: `^5.0.56` → `^5.0.57`
350
+ - `@ai-sdk/react`: `^2.0.57` (new)
351
+
352
+ **React:**
353
+ - `react`: `^19.1.1` (new)
354
+ - `react-dom`: `^19.1.1` (new)
355
+ - `@types/react`: `^19.1.14` → `^19.1.15`
356
+
357
+ **Development:**
358
+ - `@types/node`: `^24.5.2` → `^24.6.0`
359
+ - `@typescript-eslint/*`: `^8.44.1` → `^8.45.0`
360
+ - `typescript-eslint`: `^8.44.1` → `^8.45.0`
361
+
362
+ ### 🗂️ File Changes
363
+
364
+ **Added:**
365
+ - `examples/browser/main.tsx` - React entry point
366
+ - `examples/browser/src/App.tsx` - Main application component
367
+ - `examples/browser/src/components/ai-elements/` - 20 AI Elements components
368
+ - `examples/browser/vite-api-plugin.ts` - Vite plugin for Ollama API
369
+ - `examples/browser/components/ui/card.tsx` - shadcn/ui card component
370
+
371
+ **Removed:**
372
+ - `examples/browser/main.ts` - Old vanilla JS entry point
373
+
374
+ **Updated:**
375
+ - `examples/browser/package.json` - React dependencies and AI Elements
376
+ - `examples/browser/README.md` - Complete rewrite with new architecture
377
+ - `examples/browser/index.html` - Updated for React
378
+ - `examples/browser/vite.config.js` - Added API plugin integration
379
+
380
+ ## 0.9.0
381
+
382
+ ### Minor Changes
383
+
384
+ - 35f19de: Add web search and web fetch tools for Ollama integration
385
+ - Add `webSearch` tool for performing web searches using Ollama's web search capabilities
386
+ - Add `webFetch` tool for fetching web content and URLs
387
+ - Support for both browser and Node.js environments
388
+ - Comprehensive integration tests and examples
389
+ - Updated documentation with usage examples and prerequisites
390
+
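A sketch of wiring the new tools into a call; whether `webSearch` and `webFetch` are ready-made tool definitions or factories is an assumption here, so check the README for the exact import shape:

```typescript
import { generateText, ollama, webSearch, webFetch } from 'ai-sdk-ollama';

const result = await generateText({
  model: ollama('gpt-oss:120b'), // illustrative model
  prompt: 'Summarise the latest Ollama release notes.',
  tools: {
    webSearch, // assumed to be AI SDK tool definitions
    webFetch,
  },
});
```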
391
+ ## 0.8.1
392
+
393
+ ### Patch Changes
394
+
395
+ - e57ddf2: ## Enhanced Function Renaming & Documentation Improvements
396
+
397
+ ### Function Renaming
398
+ - Renamed `generateTextOllama` to `generateText` (enhanced version from ai-sdk-ollama)
399
+ - Renamed `streamTextOllama` to `streamText` (enhanced version from ai-sdk-ollama)
400
+ - Maintains backward compatibility while providing a clearer API
401
+
402
+ ### Documentation Improvements
403
+ - **README.md**: Complete restructure with better user flow
404
+ - Added Quick Start section with immediate installation and basic example
405
+ - Moved value proposition ("Why Choose") section earlier
406
+ - Added dedicated "Enhanced Tool Calling" section highlighting main differentiator
407
+ - Reorganized examples under "More Examples" for better progression
408
+ - Removed redundant content and improved clarity
409
+ - **packages/ai-sdk-ollama/README.md**: Applied same improvements
410
+ - Consistent structure with main README
411
+ - Better user journey from basic to advanced features
412
+ - Updated table of contents to match new structure
413
+
414
+ ### Key Benefits
415
+ - **Better Developer Experience**: Clearer function names and improved documentation flow
416
+ - **Enhanced Tool Calling**: Highlighted the main selling point with dedicated section
417
+ - **User-Friendly**: Users can now get started in 30 seconds and understand value immediately
418
+ - **Consistent**: Both READMEs now have the same improved structure and flow
419
+
420
+ ## 0.8.0
421
+
422
+ ### Minor Changes
423
+
424
+ - 7ce6ed0: Enhanced tool calling with reliable wrapper functions
425
+
426
+ ## What's New
427
+ - **New Enhanced Wrapper Functions**: Added `generateTextOllama()` and `streamTextOllama()` for guaranteed tool calling reliability
428
+ - **Automatic Response Synthesis**: Enhanced functions automatically complete responses when tools are executed but return empty text
429
+ - **Configurable Reliability Options**: Control synthesis behavior with `enhancedOptions` parameter
430
+ - **Improved Documentation**: Comprehensive examples and comparison tables for standard vs enhanced functions
431
+
432
+ ## Key Features
433
+ - **Reliable Tool Calling**: Standard `generateText()` may return empty responses after tool execution. Enhanced wrappers guarantee complete, useful responses every time
434
+ - **Backward Compatible**: All existing code continues to work unchanged
435
+ - **Production Ready**: Designed for critical applications that can't handle unpredictable empty responses
436
+ - **Cross Provider Compatible**: Enhanced functions work with any AI SDK provider
437
+
438
+ ## Breaking Changes
439
+
440
+ None - this is a purely additive enhancement.
441
+
442
+ ## Migration
443
+
444
+ No migration required. Existing code works unchanged. To get enhanced reliability:
445
+
446
+ ```typescript
447
+ // Before (may return empty text after tool calls)
448
+ const { text } = await generateText({
449
+ model: ollama('llama3.2'),
450
+ tools,
451
+ prompt,
452
+ });
453
+
454
+ // After (guaranteed complete responses)
455
+ const { text } = await generateTextOllama({
456
+ model: ollama('llama3.2'),
457
+ tools,
458
+ prompt,
459
+ });
460
+ ```
461
+
462
+ ## 0.7.0
463
+
464
+ ### Minor Changes
465
+
466
+ - 8f0a292: Comprehensive reliability improvements and new Ollama-specific functions
467
+
468
+ ## New Features
469
+
470
+ ### Ollama-Specific AI Functions
471
+ - **generateTextOllama**: Enhanced text generation with reliability features
472
+ - **generateObjectOllama**: Structured object generation with schema validation
473
+ - **streamTextOllama**: Real-time text streaming with tool calling support
474
+ - **streamObjectOllama**: Progressive object streaming with reliability features
475
+
476
+ ### Reliability Features
477
+ - **Tool Calling Reliability**: Enhanced tool calling with retry logic and parameter normalization
478
+ - **Object Generation Reliability**: Schema validation, type mismatch fixing, and fallback generation
479
+ - **Streaming Reliability**: Better stop conditions and response synthesis
480
+ - **Error Recovery**: Automatic retry mechanisms and graceful error handling
481
+
482
+ ## Examples and Documentation
483
+
484
+ ### New Example Files
485
+ - **Comprehensive Demo**: `generate-all-ollama-demo.ts` - showcases all Ollama-specific functions
486
+ - **Streaming Demos**: `stream-text-ollama-demo.ts` and `stream-object-ollama-demo.ts`
487
+ - **Debug Tools**: `debug-streaming-issue.ts` and `debug-gpt-oss-tools.ts`
488
+ - **Testing Examples**: Various test files for different use cases
489
+
490
+ ### Enhanced Examples
491
+ - **Browser Example**: Fixed to use `createOllama()` for proper provider configuration
492
+ - **Node Examples**: Updated with better error handling and TypeScript compliance
493
+ - **Tool Calling**: Comprehensive examples with weather and search tools
494
+
495
+ ## Technical Improvements
496
+
497
+ ### TypeScript Fixes
498
+ - Fixed variable naming conflicts in all example files
499
+ - Resolved async/await issues with tool calls
500
+ - Fixed Zod schema definitions for record types
501
+ - Improved type safety across all examples
502
+
503
+ ### API Enhancements
504
+ - Better error messages and debugging information
505
+ - Enhanced configuration options for reliability features
506
+ - Improved streaming performance and reliability
507
+ - Better integration with Ollama's native capabilities
508
+
509
+ ## Breaking Changes
510
+
511
+ None - all changes are backward compatible
512
+
513
+ ## Migration Guide
514
+
515
+ Existing code continues to work unchanged. New Ollama-specific functions are available as additional options for enhanced reliability.
516
+
517
+ ## 0.6.2
518
+
519
+ ### Patch Changes
520
+
521
+ - 0e2f392: Fix TypeScript compilation errors in examples
522
+ - Fixed variable naming conflicts in stream-vs-generate-test.ts, debug-streaming-issue.ts, generate-all-ollama-demo.ts, stream-object-ollama-demo.ts, and stream-text-ollama-demo.ts
523
+ - Fixed undefined variable 'ollamaRaw' in existing-client-example.ts
524
+ - Fixed browser example to use createOllama() instead of passing baseURL to ollama() function
525
+ - Fixed async tool calls access in streaming examples
526
+ - Fixed Zod schema definitions for record types
527
+ - All examples now compile and run successfully
528
+
529
+ ## 0.6.1
530
+
531
+ ### Patch Changes
532
+
533
+ - 80fbf5b: Update to ollama-0.5.18
534
+ - Updated Ollama provider to support version 0.5.18
535
+ - Enhanced embedding model functionality
536
+ - Improved provider implementation with better error handling
537
+ - Updated examples to demonstrate new features
538
+
539
+ ## 0.6.0
540
+
541
+ ### Minor Changes
542
+
543
+ - a7e2377: Allow passing an existing Ollama client to `createOllama` and expose raw client/methods for direct model operations. Closes #67.
544
+
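A sketch of reusing an existing client; the option name passed to `createOllama` is an assumption, so check the README for the exact field:

```typescript
import { Ollama } from 'ollama';
import { createOllama } from 'ai-sdk-ollama';

const existingClient = new Ollama({ host: 'http://localhost:11434' });

// `client` is an assumed option name, for illustration only.
const ollama = createOllama({ client: existingClient });

// The raw client remains available for direct model operations
// such as listing or pulling models.
```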
545
+ ## 0.5.5
546
+
547
+ ### Patch Changes
548
+
549
+ - 12d4f4a: ai ^5.0.35 → ^5.0.37
550
+
551
+ ## 0.5.4
552
+
553
+ ### Patch Changes
554
+
555
+ - 6e39959: chore: bump ai to ^5.0.30
556
+
557
+ ## 0.5.3
558
+
559
+ ### Patch Changes
560
+
561
+ - e58649c: chore: bump ai to ^5.0.29
562
+
563
+ ## 0.5.2
564
+
565
+ ### Patch Changes
566
+
567
+ - 96ec6dc: Updated to ai v5.0.23
568
+
569
+ ## 0.5.1
570
+
571
+ ### Patch Changes
572
+
573
+ - edb4d47: Updated ai package version to 5.0.15
574
+
575
+ ## 0.5.0
576
+
577
+ ### Minor Changes
578
+
579
+ - Add browser support with automatic environment detection
580
+ - Add browser-specific provider using ollama/browser package
581
+ - Implement dual package exports for Node.js and browser environments
582
+ - Add comprehensive browser example with Vite and interactive UI
583
+ - Update build configuration to generate separate browser and Node.js bundles
584
+ - Add browser compatibility tests and examples
585
+
586
+ - e02f8af: Feature: Browser support and streaming improvements. Closes https://github.com/jagreehal/ai-sdk-ollama/issues/14
587
+ - feat(browser): Automatic browser support via `ollama/browser` with dedicated `index.browser` export. Works seamlessly with bundlers and `ai` v5 in browser contexts.
588
+ - fix(streaming): Emit trailing `text-delta` on the final `done` chunk to avoid empty streams for models that only flush content at the end. Note: streams may include one additional text chunk now.
589
+ - tests: Add `gpt-oss:20b` integration coverage and make prompts/token limits more robust; update unit tests to reflect final text emission on `done`.
590
+ - docs/examples: Switch Node examples to per-file `tsx` execution and update READMEs; clarify how to run browser and node examples.
591
+ - chore(repo): Monorepo migration (no user-facing API changes), Dependabot config for package folder, and CI refinements.
592
+
593
+ No breaking changes to the public API.
594
+
595
+ ## 0.4.0
596
+
597
+ ### Minor Changes
598
+
599
+ - **Reasoning Support**: Added support for reasoning (chain-of-thought) output
600
+ - 🧠 **Reasoning Content**: Models that support reasoning can now output their thinking process
601
+ - 📝 **Content Types**: Support for `LanguageModelV2Reasoning` content type in both non-streaming and streaming responses
602
+ - 🔄 **Streaming Support**: Full streaming support with `reasoning-start`, `reasoning-delta`, and `reasoning-end` events
603
+ - ⚙️ **Configurable**: Enable reasoning with the `{ reasoning: true }` setting
604
+ - 🧪 **Comprehensive Testing**: Added unit tests for reasoning functionality
605
+ - 📚 **Documentation**: Updated README and examples with reasoning usage
606
+ - 🎯 **Backward Compatible**: Reasoning is disabled by default; existing code continues to work
607
+
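A minimal sketch of enabling reasoning and watching the stream events listed above (the model name is illustrative):

```typescript
import { streamText } from 'ai';
import { ollama } from 'ai-sdk-ollama';

const result = streamText({
  model: ollama('deepseek-r1', { reasoning: true }), // reasoning is off by default
  prompt: 'What is 12 * 17?',
});

for await (const part of result.fullStream) {
  if (part.type === 'reasoning-delta') {
    // the model's chain of thought streams separately from the final answer
  }
}
```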
608
+ ### Technical Improvements
609
+
610
+ - Added `reasoning` setting to `OllamaChatSettings` interface
611
+ - Enhanced `doGenerate` method to handle `thinking` field from Ollama responses
612
+ - Enhanced `doStream` method to emit reasoning stream parts
613
+ - Added reasoning support to content conversion logic
614
+ - Updated type definitions to include reasoning content types
615
+
616
+ ## 0.3.0
617
+
618
+ ### Minor Changes
619
+
620
+ - **Auto-Structured Outputs**: Enhanced structured outputs with intelligent auto-detection
621
+ - 🎯 **Smart Auto-Detection**: Automatically enables structured outputs when a JSON schema is provided
622
+ - 🔧 **Backward Compatibility**: Explicit `structuredOutputs: true/false` settings are still respected
623
+ - ⚠️ **User-Friendly Warnings**: Clear warnings when auto-enabling structured outputs
624
+ - 📚 **Enhanced Documentation**: Updated examples and README with auto-detection guidance
625
+ - 🧪 **Comprehensive Testing**: Added integration tests for auto-detection scenarios
626
+ - 🛠️ **Improved Developer Experience**: No need to manually set `structuredOutputs: true` for object generation
627
+
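A minimal sketch of the auto-detection: providing a schema is enough, with no explicit `structuredOutputs` flag (the model and schema are illustrative):

```typescript
import { generateObject } from 'ai';
import { ollama } from 'ai-sdk-ollama';
import { z } from 'zod';

// structuredOutputs is enabled automatically because a JSON schema is present.
const { object } = await generateObject({
  model: ollama('llama3.2'),
  schema: z.object({ city: z.string(), population: z.number() }),
  prompt: 'Give me a city and its population.',
});
```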
628
+ ### Technical Improvements
629
+
630
+ - Enhanced `shouldEnableStructuredOutputs()` method for intelligent auto-detection
631
+ - Improved schema validation and error handling
632
+ - Updated README with auto-detection examples and best practices
633
+ - Added comprehensive integration tests for edge cases
634
+ - Streamlined configuration for common use cases
635
+
636
+ ## 0.2.0
637
+
638
+ ### Minor Changes
639
+
640
+ - bf0905a: Fix streaming examples and improve TypeScript type checking
641
+ - Fix "Stream error: text part not found" by using textStream instead of fullStream for basic streaming
642
+ - Fix TypeScript errors in all examples (error handling, index access, undefined checks)
643
+ - Remove rootDir restriction in tsconfig.json to enable type checking for examples
644
+ - Fix tool call parameter handling and error messages
645
+ - Remove deprecated model capabilities and suggestions utilities
646
+ - Improve error handling with proper type checking throughout examples
647
+ - Update streaming examples to work with AI SDK v5 API changes
648
+
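Basic streaming now reads `textStream` rather than `fullStream`; a minimal sketch:

```typescript
import { streamText } from 'ai';
import { ollama } from 'ai-sdk-ollama';

const result = streamText({
  model: ollama('llama3.2'), // illustrative model
  prompt: 'Write a haiku about the sea.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk); // plain text deltas, no stream-part bookkeeping
}
```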
649
+ All notable changes to this project will be documented in this file.
650
+
651
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
652
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
653
+
654
+ ## [0.1.0] - 2025-08-06
655
+
656
+ ### Added
657
+
658
+ - 🎉 Initial release of AI SDK Ollama Provider
659
+ - ✅ Full support for Vercel AI SDK v5 (`LanguageModelV2` and `EmbeddingModelV2`)
660
+ - 🤖 **Chat Language Model** with streaming support
661
+ - Text generation with **dual parameter support** (AI SDK standard + native Ollama)
662
+ - **Cross provider compatibility** via AI SDK parameters (temperature, maxOutputTokens, etc.)
663
+ - **Advanced Ollama features** via native options (mirostat, num_ctx, etc.)
664
+ - **Parameter precedence system** - Ollama options override AI SDK parameters
665
+ - Structured output support (JSON objects)
666
+ - Tool calling capabilities
667
+ - Image input support (with compatible models)
668
+ - 🔍 **Embedding Model** for text embeddings
669
+ - Batch embedding support (up to 2048 texts)
670
+ - Support for all Ollama embedding models
671
+ - 🧠 **Model Intelligence System**
672
+ - Comprehensive model capability database
673
+ - Smart model suggestions based on requirements
674
+ - Automatic feature detection and validation
675
+ - Helpful error messages with actionable recommendations
676
+ - 🛠️ **Provider Features**
677
+ - Official `ollama` package integration with **direct option pass-through**
678
+ - **Future proof**: All current and future Ollama parameters supported automatically
679
+ - Custom base URL configuration
680
+ - Custom headers support
681
+ - Custom fetch implementation
682
+ - Comprehensive error handling with custom OllamaError class
683
+ - 📦 **Modern Package**
684
+ - TypeScript with full type safety
685
+ - ES modules with CommonJS compatibility
686
+ - Node.js 22+ support
687
+ - Clean, organized codebase structure
688
+ - 🧪 **Quality Assurance**
689
+ - Tests (unit + integration)
690
+ - Full TypeScript coverage
691
+ - Linting with ESLint + Prettier
692
+ - Automated testing with Vitest
693
+ - 📚 **Examples & Documentation**
694
+ - 8 comprehensive examples covering all features
695
+ - Basic chat, streaming, tool calling, embeddings
696
+ - Dual parameter demonstrations
697
+ - Model capabilities and validation examples
698
+ - Comprehensive README with AI SDK v5+ compatibility
699
+ - 🖼️ **Image Processing Support**: Complete implementation of AI SDK v5 image handling with Ollama
700
+ - Support for image URLs, base64 encoded images, and multiple images
701
+ - Full integration with vision models like LLaVA and bakllava
702
+ - Streaming support with images
703
+ - Mixed content types (text + image + text)
704
+
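A sketch of the dual parameter support described above; the nested `options` key for native Ollama parameters is an assumption, while `temperature` and `maxOutputTokens` are standard AI SDK parameters:

```typescript
import { generateText } from 'ai';
import { ollama } from 'ai-sdk-ollama';

const { text } = await generateText({
  // Native Ollama options (num_ctx, mirostat, ...) take precedence
  // over the AI SDK parameters below.
  model: ollama('llama3.2', { options: { num_ctx: 8192, mirostat: 1 } }),
  temperature: 0.7,
  maxOutputTokens: 200,
  prompt: 'Explain mirostat sampling in one sentence.',
});
```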
705
+ ### Technical Details
706
+
707
+ - Built with AI SDK v5 (`@ai-sdk/provider: ^2.0.0`)
708
+ - Uses official Ollama client (`ollama: ^0.5.16`)
709
+ - Requires Node.js >=22
710
+ - Fully typed with TypeScript 5.9+
711
+ - ES module first with CJS fallback
712
+
713
+ ### Supported Models
714
+
715
+ - **Chat Models**: llama3.2, mistral, phi4-mini, qwen2.5, codellama, and all Ollama chat models
716
+ - **Vision Models**: llava, bakllava, llama3.2-vision, minicpm-v
717
+ - **Embedding Models**: nomic-embed-text, all-minilm, mxbai-embed-large, and all Ollama embedding models
718
+
719
+ [0.1.0]: https://github.com/jagreehal/ai-sdk-ollama/releases/tag/v0.1.0