@jerome-benoit/sap-ai-provider 3.0.0 → 4.0.0-rc.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,6 +3,7 @@
  [![npm](https://img.shields.io/npm/v/@mymediset/sap-ai-provider/latest?label=npm&color=blue)](https://www.npmjs.com/package/@mymediset/sap-ai-provider)
  [![License: Apache-2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
  [![Vercel AI SDK](https://img.shields.io/badge/Vercel%20AI%20SDK-6.0+-black.svg)](https://sdk.vercel.ai/docs)
+ [![Language Model](https://img.shields.io/badge/Language%20Model-V3-green.svg)](https://sdk.vercel.ai/docs/ai-sdk-core/provider-management)

  A community provider for SAP AI Core that integrates seamlessly with the Vercel AI SDK. Built on top of the official **@sap-ai-sdk/orchestration** package, this provider enables you to use SAP's enterprise-grade AI models through the familiar Vercel AI SDK interface.

@@ -12,15 +13,40 @@ A community provider for SAP AI Core that integrates seamlessly with the Vercel
  - [Quick Start](#quick-start)
  - [Quick Reference](#quick-reference)
  - [Installation](#installation)
+ - [Provider Creation](#provider-creation)
+   - [Option 1: Factory Function (Recommended for Custom Configuration)](#option-1-factory-function-recommended-for-custom-configuration)
+   - [Option 2: Default Instance (Quick Start)](#option-2-default-instance-quick-start)
  - [Authentication](#authentication)
  - [Basic Usage](#basic-usage)
+   - [Text Generation](#text-generation)
+   - [Chat Conversations](#chat-conversations)
+   - [Streaming Responses](#streaming-responses)
+   - [Model Configuration](#model-configuration)
  - [Supported Models](#supported-models)
  - [Advanced Features](#advanced-features)
+   - [Tool Calling](#tool-calling)
+   - [Multi-modal Input (Images)](#multi-modal-input-images)
+   - [Data Masking (SAP DPI)](#data-masking-sap-dpi)
+   - [Content Filtering](#content-filtering)
+   - [Document Grounding (RAG)](#document-grounding-rag)
+   - [Translation](#translation)
  - [Configuration Options](#configuration-options)
  - [Error Handling](#error-handling)
+   - [Troubleshooting](#troubleshooting)
+   - [Performance](#performance)
+   - [Security](#security)
+   - [Debug Mode](#debug-mode)
  - [Examples](#examples)
  - [Migration Guides](#migration-guides)
+   - [Upgrading from v3.x to v4.x](#upgrading-from-v3x-to-v4x)
+   - [Upgrading from v2.x to v3.x](#upgrading-from-v2x-to-v3x)
+   - [Upgrading from v1.x to v2.x](#upgrading-from-v1x-to-v2x)
+   - [Important Note](#important-note)
  - [Contributing](#contributing)
+ - [Resources](#resources)
+   - [Documentation](#documentation)
+   - [Community](#community)
+   - [Related Projects](#related-projects)
  - [License](#license)

  ## Features
@@ -29,11 +55,12 @@ A community provider for SAP AI Core that integrates seamlessly with the Vercel
  - 🎯 **Tool Calling Support** - Full tool/function calling capabilities
  - 🧠 **Reasoning-Safe by Default** - Assistant reasoning parts are not forwarded unless enabled
  - 🖼️ **Multi-modal Input** - Support for text and image inputs
- - 📡 **Streaming Support** - Real-time text generation
+ - 📡 **Streaming Support** - Real-time text generation with structured V3 blocks
  - 🔒 **Data Masking** - Built-in SAP DPI integration for privacy
  - 🛡️ **Content Filtering** - Azure Content Safety and Llama Guard support
  - 🔧 **TypeScript Support** - Full type safety and IntelliSense
  - 🎨 **Multiple Models** - Support for GPT-4, Claude, Gemini, Nova, and more
+ - ⚡ **Language Model V3** - Latest Vercel AI SDK specification with enhanced streaming

  ## Quick Start

@@ -149,87 +176,56 @@ Authentication is handled automatically by the SAP AI SDK using the `AICORE_SERV

  ### Text Generation

- ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { generateText } from "ai";
- import { APICallError } from "@ai-sdk/provider";
+ **Complete example:** [examples/example-generate-text.ts](./examples/example-generate-text.ts)

- const provider = createSAPAIProvider();
-
- try {
-   const result = await generateText({
-     model: provider("gpt-4o"),
-     prompt: "Write a short story about a robot learning to paint.",
-   });
-
-   console.log(result.text);
- } catch (error) {
-   if (error instanceof APICallError) {
-     console.error("API error:", error.message, "- Status:", error.statusCode);
-   }
-   throw error;
- }
+ ```typescript
+ const result = await generateText({
+   model: provider("gpt-4o"),
+   prompt: "Write a short story about a robot learning to paint.",
+ });
+ console.log(result.text);
  ```

+ **Run it:** `npx tsx examples/example-generate-text.ts`
+
  ### Chat Conversations

+ **Complete example:** [examples/example-simple-chat-completion.ts](./examples/example-simple-chat-completion.ts)
+
  Note: assistant `reasoning` parts are dropped by default. Set `includeReasoning: true` on the model settings if you explicitly want to forward them.

  ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { generateText } from "ai";
- import { APICallError } from "@ai-sdk/provider";
-
- const provider = createSAPAIProvider();
-
- try {
-   const result = await generateText({
-     model: provider("anthropic--claude-3.5-sonnet"),
-     messages: [
-       { role: "system", content: "You are a helpful coding assistant." },
-       {
-         role: "user",
-         content: "How do I implement binary search in TypeScript?",
-       },
-     ],
-   });
- } catch (error) {
-   if (error instanceof APICallError) {
-     console.error("API error:", error.message, "- Status:", error.statusCode);
-   }
-   throw error;
- }
+ const result = await generateText({
+   model: provider("anthropic--claude-3.5-sonnet"),
+   messages: [
+     { role: "system", content: "You are a helpful coding assistant." },
+     {
+       role: "user",
+       content: "How do I implement binary search in TypeScript?",
+     },
+   ],
+ });
  ```

- ### Streaming Responses
+ **Run it:** `npx tsx examples/example-simple-chat-completion.ts`

- ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { streamText } from "ai";
- import { APICallError } from "@ai-sdk/provider";
+ ### Streaming Responses

- const provider = createSAPAIProvider();
+ **Complete example:** [examples/example-streaming-chat.ts](./examples/example-streaming-chat.ts)

- try {
-   const result = streamText({
-     model: provider("gpt-4o"),
-     prompt: "Explain machine learning concepts.",
-   });
+ ```typescript
+ const result = streamText({
+   model: provider("gpt-4o"),
+   prompt: "Explain machine learning concepts.",
+ });

-   for await (const delta of result.textStream) {
-     process.stdout.write(delta);
-   }
- } catch (error) {
-   if (error instanceof APICallError) {
-     console.error("API error:", error.message, "- Status:", error.statusCode);
-   }
-   throw error;
+ for await (const delta of result.textStream) {
+   process.stdout.write(delta);
  }
  ```

+ **Run it:** `npx tsx examples/example-streaming-chat.ts`
+
  ### Model Configuration

  ```typescript
@@ -263,13 +259,13 @@ This provider supports all models available through SAP AI Core Orchestration se
  **Popular models:**

  - **OpenAI**: gpt-4o, gpt-4o-mini, gpt-4.1, o1, o3 (recommended for multi-tool apps)
- - **Anthropic Claude**: claude-3.5-sonnet, claude-4-opus
+ - **Anthropic Claude**: anthropic--claude-3.5-sonnet, anthropic--claude-4-opus
  - **Google Gemini**: gemini-2.5-pro, gemini-2.0-flash

  ⚠️ **Important:** Google Gemini models have a 1 tool limit per request.

- - **Amazon Nova**: nova-pro, nova-lite
- - **Open Source**: mistralai-mistral-large, llama3.1-70b
+ - **Amazon Nova**: amazon--nova-pro, amazon--nova-lite
+ - **Open Source**: mistralai--mistral-large-instruct, meta--llama3.1-70b-instruct

  > **Note:** Model availability depends on your SAP AI Core tenant configuration, region, and subscription.

@@ -289,27 +285,14 @@ The following helper functions are exported by this package for convenient confi

  > **Note on Terminology:** This documentation uses "tool calling" (Vercel AI SDK convention), equivalent to "function calling" in OpenAI documentation. Both terms refer to the same capability of models invoking external functions.

- 📖 **Complete guide:** [API Reference - Tool Calling](./API_REFERENCE.md#tool-calling-function-calling)
+ 📖 **Complete guide:** [API Reference - Tool Calling](./API_REFERENCE.md#tool-calling-function-calling)
+ **Complete example:** [examples/example-chat-completion-tool.ts](./examples/example-chat-completion-tool.ts)

  ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { generateText, tool } from "ai";
- import { z } from "zod";
-
- const provider = createSAPAIProvider();
-
- const weatherSchema = z.object({
-   location: z.string(),
- });
-
  const weatherTool = tool({
    description: "Get weather for a location",
-   inputSchema: weatherSchema,
-   execute: (args: z.infer<typeof weatherSchema>) => {
-     const { location } = args;
-     return `Weather in ${location}: sunny, 72°F`;
-   },
+   inputSchema: z.object({ location: z.string() }),
+   execute: (args) => `Weather in ${args.location}: sunny, 72°F`,
  });

  const result = await generateText({
@@ -318,21 +301,17 @@ const result = await generateText({
    tools: { getWeather: weatherTool },
    maxSteps: 3,
  });
-
- console.log(result.text);
  ```

+ **Run it:** `npx tsx examples/example-chat-completion-tool.ts`
+
  ⚠️ **Important:** Gemini models support only 1 tool per request. For multi-tool applications, use GPT-4o, Claude, or Amazon Nova models. See [API Reference - Tool Calling](./API_REFERENCE.md#tool-calling-function-calling) for complete model comparison.

  ### Multi-modal Input (Images)

- ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { generateText } from "ai";
-
- const provider = createSAPAIProvider();
+ **Complete example:** [examples/example-image-recognition.ts](./examples/example-image-recognition.ts)

+ ```typescript
  const result = await generateText({
    model: provider("gpt-4o"),
    messages: [
@@ -347,6 +326,8 @@ const result = await generateText({
  });
  ```

+ **Run it:** `npx tsx examples/example-image-recognition.ts`
+
  ### Data Masking (SAP DPI)

  Use SAP's Data Privacy Integration to mask sensitive data:
@@ -387,6 +368,69 @@ const provider = createSAPAIProvider({
  });
  ```

+ **Full documentation:** [API_REFERENCE.md - Content Filtering](./API_REFERENCE.md#buildazurecontentsafetyfiltertype-config)
+
+ ### Document Grounding (RAG)
+
+ Ground LLM responses in your own documents using vector databases.
+
+ **Complete example:** [examples/example-document-grounding.ts](./examples/example-document-grounding.ts)
+ **Complete documentation:** [API Reference - Document Grounding](./API_REFERENCE.md#builddocumentgroundingconfigconfig)
+
+ ```typescript
+ const provider = createSAPAIProvider({
+   defaultSettings: {
+     grounding: buildDocumentGroundingConfig({
+       filters: [
+         {
+           id: "vector-store-1", // Your vector database ID
+           data_repositories: ["*"], // Search all repositories
+         },
+       ],
+       placeholders: {
+         input: ["?question"],
+         output: "groundingOutput",
+       },
+     }),
+   },
+ });
+
+ // Queries are now grounded in your documents
+ const model = provider("gpt-4o");
+ ```
+
+ **Run it:** `npx tsx examples/example-document-grounding.ts`
+
+ ### Translation
+
+ Automatically translate user queries and model responses.
+
+ **Complete example:** [examples/example-translation.ts](./examples/example-translation.ts)
+ **Complete documentation:** [API Reference - Translation](./API_REFERENCE.md#buildtranslationconfigtype-config)
+
+ ```typescript
+ const provider = createSAPAIProvider({
+   defaultSettings: {
+     translation: {
+       // Translate user input from German to English
+       input: buildTranslationConfig("input", {
+         sourceLanguage: "de",
+         targetLanguage: "en",
+       }),
+       // Translate model output from English to German
+       output: buildTranslationConfig("output", {
+         targetLanguage: "de",
+       }),
+     },
+   },
+ });
+
+ // Model handles German input/output automatically
+ const model = provider("gpt-4o");
+ ```
+
+ **Run it:** `npx tsx examples/example-translation.ts`
+
  ## Configuration Options

  The provider and models can be configured with various settings for authentication, model parameters, data masking, content filtering, and more.
@@ -496,6 +540,23 @@ npx tsx examples/example-generate-text.ts

  ## Migration Guides

+ ### Upgrading from v3.x to v4.x
+
+ Version 4.0 migrates from the **LanguageModelV2** specification to **LanguageModelV3** (AI SDK 6.0+). **See the [Migration Guide](./MIGRATION_GUIDE.md#version-3x-to-4x-breaking-changes) for complete upgrade instructions.**
+
+ **Key changes:**
+
+ - **Finish Reason**: Changed from string to object (`result.finishReason.unified`)
+ - **Usage Structure**: Nested format with detailed token breakdown (`result.usage.inputTokens.total`)
+ - **Stream Events**: Structured blocks (`text-start`, `text-delta`, `text-end`) instead of simple deltas
+ - **Warning Types**: Updated format with `feature` field for categorization
+
+ **Impact by user type:**
+
+ - High-level API users (`generateText`/`streamText`): ✅ Minimal impact (likely no changes)
+ - Direct provider users: ⚠️ Update type imports (`LanguageModelV2` → `LanguageModelV3`)
+ - Custom stream parsers: ⚠️ Update parsing logic for V3 structure
+
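The stream-event change above is the one most likely to affect custom parsers. A minimal sketch of assembling text from V3-style structured blocks — the part shapes below are simplified assumptions for illustration, not the provider's exact types:

```typescript
// Sketch: assembling text from V3-style structured stream blocks.
// The part shapes here are assumed for illustration; see the
// Migration Guide for the real types.
type StreamPart =
  | { type: "text-start"; id: string }
  | { type: "text-delta"; id: string; delta: string }
  | { type: "text-end"; id: string };

// Concatenate the deltas that arrive between the start/end markers.
function assembleText(parts: StreamPart[]): string {
  let text = "";
  for (const part of parts) {
    if (part.type === "text-delta") text += part.delta;
  }
  return text;
}

// V3 brackets the deltas with explicit start/end markers,
// whereas V2 streams were a flat sequence of plain deltas.
const demo: StreamPart[] = [
  { type: "text-start", id: "t1" },
  { type: "text-delta", id: "t1", delta: "Hello, " },
  { type: "text-delta", id: "t1", delta: "world!" },
  { type: "text-end", id: "t1" },
];
console.log(assembleText(demo)); // prints "Hello, world!"
```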
  ### Upgrading from v2.x to v3.x

  Version 3.0 standardizes error handling to use Vercel AI SDK native error types. **See the [Migration Guide](./MIGRATION_GUIDE.md#v2x--v30) for complete upgrade instructions.**
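The `instanceof APICallError` pattern shown in the pre-v4 examples still applies under these standardized error types. A self-contained sketch, using a stand-in `APICallError` class so it runs offline (the real one is imported from `@ai-sdk/provider` and carries the same fields used here):

```typescript
// Stand-in for @ai-sdk/provider's APICallError so this sketch runs
// without network access; only message and statusCode are modeled.
class APICallError extends Error {
  constructor(message: string, public statusCode?: number) {
    super(message);
  }
}

// Format provider failures the way the earlier examples logged them.
function describeError(error: unknown): string {
  if (error instanceof APICallError) {
    return `API error: ${error.message} - Status: ${error.statusCode}`;
  }
  return `Unexpected error: ${String(error)}`;
}

console.log(describeError(new APICallError("Unauthorized", 401)));
// prints "API error: Unauthorized - Status: 401"
```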
@@ -526,34 +587,28 @@ Version 2.0 uses the official SAP AI SDK. **See the [Migration Guide](./MIGRATIO

  We welcome contributions! Please see our [Contributing Guide](./CONTRIBUTING.md) for details.

- ## License
-
- Apache License 2.0 - see [LICENSE](./LICENSE.md) for details.
-
- ## Support
-
- - 📖 [Documentation](https://github.com/BITASIA/sap-ai-provider)
- - 🐛 [Issue Tracker](https://github.com/BITASIA/sap-ai-provider/issues)
-
- ## Documentation
-
- ### Guides
-
- - [Environment Setup](./ENVIRONMENT_SETUP.md) - Authentication and configuration setup
- - [Migration Guide](./MIGRATION_GUIDE.md) - Upgrading from v1.x to v2.x with step-by-step instructions
- - [curl API Testing](./CURL_API_TESTING_GUIDE.md) - Direct API testing for debugging
+ ## Resources

- ### Reference
+ ### Documentation

+ - [Migration Guide](./MIGRATION_GUIDE.md) - Version upgrade instructions (v1.x → v2.x → v3.x → v4.x)
  - [API Reference](./API_REFERENCE.md) - Complete API documentation with all types and functions
+ - [Environment Setup](./ENVIRONMENT_SETUP.md) - Authentication and configuration setup
+ - [Troubleshooting](./TROUBLESHOOTING.md) - Common issues and solutions
  - [Architecture](./ARCHITECTURE.md) - Internal architecture, design decisions, and request flows
+ - [curl API Testing](./CURL_API_TESTING_GUIDE.md) - Direct API testing for debugging

- ### Contributing
+ ### Community

- - [Contributing Guide](./CONTRIBUTING.md) - How to contribute to this project
+ - 🐛 [Issue Tracker](https://github.com/BITASIA/sap-ai-provider/issues) - Report bugs and request features
+ - 💬 [Discussions](https://github.com/BITASIA/sap-ai-provider/discussions) - Ask questions and share ideas

- ## Related
+ ### Related Projects

  - [Vercel AI SDK](https://sdk.vercel.ai/) - The AI SDK this provider extends
  - [SAP AI SDK](https://sap.github.io/ai-sdk/) - Official SAP Cloud SDK for AI
  - [SAP AI Core Documentation](https://help.sap.com/docs/ai-core) - Official SAP AI Core docs
+
+ ## License
+
+ Apache License 2.0 - see [LICENSE](./LICENSE.md) for details.