@jerome-benoit/sap-ai-provider-v2 4.1.2-rc.1

package/README.md ADDED
# SAP AI Core Provider for Vercel AI SDK

[![npm](https://img.shields.io/npm/v/@mymediset/sap-ai-provider/latest?label=npm&color=blue)](https://www.npmjs.com/package/@mymediset/sap-ai-provider)
[![License: Apache-2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Vercel AI SDK](https://img.shields.io/badge/Vercel%20AI%20SDK-5.0+-black.svg)](https://sdk.vercel.ai/docs)
[![Language Model](https://img.shields.io/badge/Language%20Model-V2-orange.svg)](https://sdk.vercel.ai/docs/ai-sdk-core/provider-management)
[![Embedding Model](https://img.shields.io/badge/Embedding%20Model-V3-green.svg)](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings)

> **Note:** This is a **V2-compatible fork** for use with **AI SDK 5.x**.

A community provider for SAP AI Core that integrates seamlessly with the Vercel
AI SDK. Built on top of the official **@sap-ai-sdk/orchestration** package, this
provider enables you to use SAP's enterprise-grade AI models through the
familiar Vercel AI SDK interface.

## Table of Contents

- [Features](#features)
- [Quick Start](#quick-start)
- [Quick Reference](#quick-reference)
- [Installation](#installation)
- [Provider Creation](#provider-creation)
  - [Option 1: Factory Function (Recommended for Custom Configuration)](#option-1-factory-function-recommended-for-custom-configuration)
  - [Option 2: Default Instance (Quick Start)](#option-2-default-instance-quick-start)
- [Authentication](#authentication)
- [Basic Usage](#basic-usage)
  - [Text Generation](#text-generation)
  - [Chat Conversations](#chat-conversations)
  - [Streaming Responses](#streaming-responses)
  - [Model Configuration](#model-configuration)
  - [Embeddings](#embeddings)
- [Supported Models](#supported-models)
- [Advanced Features](#advanced-features)
  - [Tool Calling](#tool-calling)
  - [Multi-modal Input (Images)](#multi-modal-input-images)
  - [Data Masking (SAP DPI)](#data-masking-sap-dpi)
  - [Content Filtering](#content-filtering)
  - [Document Grounding (RAG)](#document-grounding-rag)
  - [Translation](#translation)
  - [Provider Options (Per-Call Overrides)](#provider-options-per-call-overrides)
- [Configuration Options](#configuration-options)
- [Error Handling](#error-handling)
- [Troubleshooting](#troubleshooting)
- [Performance](#performance)
- [Security](#security)
- [Debug Mode](#debug-mode)
- [Examples](#examples)
- [Migration Guides](#migration-guides)
  - [Upgrading from v3.x to v4.x](#upgrading-from-v3x-to-v4x)
  - [Upgrading from v2.x to v3.x](#upgrading-from-v2x-to-v3x)
  - [Upgrading from v1.x to v2.x](#upgrading-from-v1x-to-v2x)
- [Important Note](#important-note)
- [Contributing](#contributing)
- [Resources](#resources)
  - [Documentation](#documentation)
  - [Community](#community)
  - [Related Projects](#related-projects)
- [License](#license)

## Features

- 🔐 **Simplified Authentication** - Uses SAP AI SDK's built-in credential
  handling
- 🎯 **Tool Calling Support** - Full tool/function calling capabilities
- 🧠 **Reasoning-Safe by Default** - Assistant reasoning parts are not forwarded
  unless enabled
- 🖼️ **Multi-modal Input** - Support for text and image inputs
- 📡 **Streaming Support** - Real-time text generation with structured V3 blocks
- 🔒 **Data Masking** - Built-in SAP DPI integration for privacy
- 🛡️ **Content Filtering** - Azure Content Safety and Llama Guard support
- 🔧 **TypeScript Support** - Full type safety and IntelliSense
- 🎨 **Multiple Models** - Support for GPT-4, Claude, Gemini, Nova, and more
- ⚡ **Language Model V2** - Compatible with Vercel AI SDK 5.x
- 📊 **Text Embeddings** - Generate vector embeddings for RAG and semantic search

## Quick Start

```bash
npm install @mymediset/sap-ai-provider ai
```

```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
import { generateText } from "ai";
import { APICallError } from "@ai-sdk/provider";

// Create provider (authentication via AICORE_SERVICE_KEY env var)
const provider = createSAPAIProvider();

try {
  // Generate text with gpt-4o
  const result = await generateText({
    model: provider("gpt-4o"),
    prompt: "Explain quantum computing in simple terms.",
  });

  console.log(result.text);
} catch (error) {
  if (error instanceof APICallError) {
    console.error("SAP AI Core API error:", error.message);
    console.error("Status:", error.statusCode);
  } else {
    console.error("Unexpected error:", error);
  }
}
```

> **Note:** Requires the `AICORE_SERVICE_KEY` environment variable. See
> [Environment Setup](./ENVIRONMENT_SETUP.md) for configuration.

## Quick Reference

| Task                | Code Pattern                                                     | Documentation                                                 |
| ------------------- | ---------------------------------------------------------------- | ------------------------------------------------------------- |
| **Install**         | `npm install @mymediset/sap-ai-provider ai`                      | [Installation](#installation)                                 |
| **Auth Setup**      | Add `AICORE_SERVICE_KEY` to `.env`                               | [Environment Setup](./ENVIRONMENT_SETUP.md)                   |
| **Create Provider** | `createSAPAIProvider()` or use `sapai`                           | [Provider Creation](#provider-creation)                       |
| **Text Generation** | `generateText({ model: provider("gpt-4o"), prompt })`            | [Basic Usage](#text-generation)                               |
| **Streaming**       | `streamText({ model: provider("gpt-4o"), prompt })`              | [Streaming](#streaming-responses)                             |
| **Tool Calling**    | `generateText({ tools: { myTool: tool({...}) } })`               | [Tool Calling](#tool-calling)                                 |
| **Error Handling**  | `if (error instanceof APICallError)`                             | [API Reference](./API_REFERENCE.md#error-handling--reference) |
| **Choose Model**    | See 80+ models (GPT, Claude, Gemini, Llama)                      | [Models](./API_REFERENCE.md#models)                           |
| **Embeddings**      | `embed({ model: provider.embedding("text-embedding-ada-002") })` | [Embeddings](#embeddings)                                     |

## Installation

**Requirements:** Node.js 18+ and Vercel AI SDK 5.0+

```bash
npm install @mymediset/sap-ai-provider ai
```

Or with other package managers:

```bash
# Yarn
yarn add @mymediset/sap-ai-provider ai

# pnpm
pnpm add @mymediset/sap-ai-provider ai
```

## Provider Creation

You can create an SAP AI provider in two ways:

### Option 1: Factory Function (Recommended for Custom Configuration)

```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";

const provider = createSAPAIProvider({
  resourceGroup: "production",
  deploymentId: "your-deployment-id", // Optional
});
```

### Option 2: Default Instance (Quick Start)

```typescript
import "dotenv/config"; // Load environment variables
import { sapai } from "@mymediset/sap-ai-provider";
import { generateText } from "ai";

// Use directly with auto-detected configuration
const result = await generateText({
  model: sapai("gpt-4o"),
  prompt: "Hello!",
});
```

The `sapai` export provides a convenient default provider instance with
automatic configuration from environment variables or service bindings.

## Authentication

Authentication is handled automatically by the SAP AI SDK using the
`AICORE_SERVICE_KEY` environment variable.

**Quick Setup:**

1. Create a `.env` file: `cp .env.example .env`
2. Add your SAP AI Core service key JSON to `AICORE_SERVICE_KEY`
3. Import in code: `import "dotenv/config";`

**For complete setup instructions, SAP BTP deployment, troubleshooting, and
advanced scenarios, see the [Environment Setup Guide](./ENVIRONMENT_SETUP.md).**
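As a quick sanity check before your first call, you can verify that the service key is present and is valid JSON. This helper is an illustrative sketch, not part of this package's API; the SAP AI SDK reads `AICORE_SERVICE_KEY` itself at first use.

```typescript
// Fail fast with a clear message if the service key is missing or malformed.
// Illustrative sketch; the SAP AI SDK performs its own credential resolution.
export function assertServiceKey(
  env: Record<string, string | undefined> = process.env,
): void {
  const raw = env.AICORE_SERVICE_KEY;
  if (!raw) {
    throw new Error("AICORE_SERVICE_KEY is not set; see ENVIRONMENT_SETUP.md");
  }
  try {
    JSON.parse(raw);
  } catch {
    throw new Error("AICORE_SERVICE_KEY is not valid JSON");
  }
}
```

Calling this at startup turns a confusing 401 at request time into an immediate, actionable error.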

## Basic Usage

### Text Generation

**Complete example:**
[examples/example-generate-text.ts](./examples/example-generate-text.ts)

```typescript
const result = await generateText({
  model: provider("gpt-4o"),
  prompt: "Write a short story about a robot learning to paint.",
});
console.log(result.text);
```

**Run it:** `npx tsx examples/example-generate-text.ts`

### Chat Conversations

**Complete example:**
[examples/example-simple-chat-completion.ts](./examples/example-simple-chat-completion.ts)

> **Note:** Assistant `reasoning` parts are dropped by default. Set
> `includeReasoning: true` on the model settings if you explicitly want to
> forward them.

```typescript
const result = await generateText({
  model: provider("anthropic--claude-3.5-sonnet"),
  messages: [
    { role: "system", content: "You are a helpful coding assistant." },
    {
      role: "user",
      content: "How do I implement binary search in TypeScript?",
    },
  ],
});
```

**Run it:** `npx tsx examples/example-simple-chat-completion.ts`

### Streaming Responses

**Complete example:**
[examples/example-streaming-chat.ts](./examples/example-streaming-chat.ts)

```typescript
const result = streamText({
  model: provider("gpt-4o"),
  prompt: "Explain machine learning concepts.",
});

for await (const delta of result.textStream) {
  process.stdout.write(delta);
}
```

**Run it:** `npx tsx examples/example-streaming-chat.ts`

### Model Configuration

```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
import { generateText } from "ai";

const provider = createSAPAIProvider();

const model = provider("gpt-4o", {
  // Optional: include assistant reasoning parts (chain-of-thought).
  // Best practice is to keep this disabled.
  includeReasoning: false,
  modelParams: {
    temperature: 0.3,
    maxTokens: 2000,
    topP: 0.9,
  },
});

const result = await generateText({
  model,
  prompt: "Write a technical blog post about TypeScript.",
});
```

### Embeddings

Generate vector embeddings for RAG (Retrieval-Augmented Generation), semantic
search, and similarity matching.

**Complete example:**
[examples/example-embeddings.ts](./examples/example-embeddings.ts)

```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
import { embed, embedMany } from "ai";

const provider = createSAPAIProvider();

// Single embedding
const { embedding } = await embed({
  model: provider.embedding("text-embedding-ada-002"),
  value: "What is machine learning?",
});

// Multiple embeddings
const { embeddings } = await embedMany({
  model: provider.embedding("text-embedding-3-small"),
  values: ["Hello world", "AI is amazing", "Vector search"],
});
```

**Run it:** `npx tsx examples/example-embeddings.ts`

**Common embedding models:**

- `text-embedding-ada-002` - OpenAI Ada v2 (cost-effective)
- `text-embedding-3-small` - OpenAI v3 small (balanced)
- `text-embedding-3-large` - OpenAI v3 large (highest quality)

> **Note:** Model availability depends on your SAP AI Core tenant configuration.

For complete embedding API documentation, see
**[API Reference: Embeddings](./API_REFERENCE.md#embeddings)**.
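Once you have embedding vectors, semantic similarity is typically scored with cosine similarity. The AI SDK exports a `cosineSimilarity` helper you can use directly; the computation itself is simple enough to sketch:

```typescript
// Cosine similarity between two embedding vectors.
// 1 = same direction, 0 = orthogonal (unrelated), -1 = opposite.
export function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vector length mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Ranking documents by this score against a query embedding is the core of a minimal semantic search loop.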

## Supported Models

This provider supports all models available through the SAP AI Core
Orchestration service, including:

**Popular models:**

- **OpenAI**: gpt-4o, gpt-4o-mini, gpt-4.1, o1, o3, o4-mini (recommended for
  multi-tool apps)
- **Anthropic Claude**: anthropic--claude-3.5-sonnet, anthropic--claude-4-opus
- **Google Gemini**: gemini-2.5-pro, gemini-2.0-flash
- **Amazon Nova**: amazon--nova-pro, amazon--nova-lite
- **Open Source**: mistralai--mistral-large-instruct,
  meta--llama3.1-70b-instruct

⚠️ **Important:** Google Gemini models are limited to one tool per request.

> **Note:** Model availability depends on your SAP AI Core tenant configuration,
> region, and subscription.

**To discover available models in your environment:**

```bash
curl "https://<AI_API_URL>/v2/lm/deployments" -H "Authorization: Bearer $TOKEN"
```

For complete model details, capabilities comparison, and limitations, see
**[API Reference: SAPAIModelId](./API_REFERENCE.md#sapaimodelid)**.

## Advanced Features

The following helper functions are exported by this package for convenient
configuration of SAP AI Core features. These builders provide type-safe
configuration for data masking, content filtering, grounding, and translation
modules.

### Tool Calling

> **Note on Terminology:** This documentation uses "tool calling" (Vercel AI SDK
> convention), equivalent to "function calling" in OpenAI documentation. Both
> terms refer to the same capability of models invoking external functions.

📖 **Complete guide:**
[API Reference - Tool Calling](./API_REFERENCE.md#tool-calling-function-calling)\
**Complete example:**
[examples/example-chat-completion-tool.ts](./examples/example-chat-completion-tool.ts)

```typescript
import { generateText, stepCountIs, tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get weather for a location",
  inputSchema: z.object({ location: z.string() }),
  execute: async ({ location }) => `Weather in ${location}: sunny, 72°F`,
});

const result = await generateText({
  model: provider("gpt-4o"),
  prompt: "What's the weather in Tokyo?",
  tools: { getWeather: weatherTool },
  stopWhen: stepCountIs(3), // allow up to 3 steps (AI SDK 5 replaces maxSteps)
});
```

**Run it:** `npx tsx examples/example-chat-completion-tool.ts`

⚠️ **Important:** Gemini models support only one tool per request. For multi-tool
applications, use GPT-4o, Claude, or Amazon Nova models. See
[API Reference - Tool Calling](./API_REFERENCE.md#tool-calling-function-calling)
for the complete model comparison.

### Multi-modal Input (Images)

**Complete example:**
[examples/example-image-recognition.ts](./examples/example-image-recognition.ts)

```typescript
const result = await generateText({
  model: provider("gpt-4o"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What do you see in this image?" },
        { type: "image", image: new URL("https://example.com/image.jpg") },
      ],
    },
  ],
});
```

**Run it:** `npx tsx examples/example-image-recognition.ts`

### Data Masking (SAP DPI)

Use SAP's Data Privacy Integration to mask sensitive data:

**Complete example:**
[examples/example-data-masking.ts](./examples/example-data-masking.ts)\
**Complete documentation:**
[API Reference - Data Masking](./API_REFERENCE.md#builddpimaskingproviderconfig)

```typescript
import { buildDpiMaskingProvider, createSAPAIProvider } from "@mymediset/sap-ai-provider";

const dpiConfig = buildDpiMaskingProvider({
  method: "anonymization",
  entities: ["profile-email", "profile-person", "profile-phone"],
});

// Wire the masking provider into the provider's default settings
const provider = createSAPAIProvider({
  defaultSettings: {
    masking: { masking_providers: [dpiConfig] },
  },
});
```

**Run it:** `npx tsx examples/example-data-masking.ts`

### Content Filtering

```typescript
import "dotenv/config"; // Load environment variables
import { buildAzureContentSafetyFilter, createSAPAIProvider } from "@mymediset/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    filtering: {
      input: {
        filters: [
          buildAzureContentSafetyFilter("input", {
            hate: "ALLOW_SAFE",
            violence: "ALLOW_SAFE_LOW_MEDIUM",
          }),
        ],
      },
    },
  },
});
```

**Complete documentation:**
[API Reference - Content Filtering](./API_REFERENCE.md#buildazurecontentsafetyfiltertype-config)

### Document Grounding (RAG)

Ground LLM responses in your own documents using vector databases.

**Complete example:**
[examples/example-document-grounding.ts](./examples/example-document-grounding.ts)\
**Complete documentation:**
[API Reference - Document Grounding](./API_REFERENCE.md#builddocumentgroundingconfigconfig)

```typescript
import { buildDocumentGroundingConfig, createSAPAIProvider } from "@mymediset/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    grounding: buildDocumentGroundingConfig({
      filters: [
        {
          id: "vector-store-1", // Your vector database ID
          data_repositories: ["*"], // Search all repositories
        },
      ],
      placeholders: {
        input: ["?question"],
        output: "groundingOutput",
      },
    }),
  },
});

// Queries are now grounded in your documents
const model = provider("gpt-4o");
```

**Run it:** `npx tsx examples/example-document-grounding.ts`

### Translation

Automatically translate user queries and model responses.

**Complete example:**
[examples/example-translation.ts](./examples/example-translation.ts)\
**Complete documentation:**
[API Reference - Translation](./API_REFERENCE.md#buildtranslationconfigtype-config)

```typescript
import { buildTranslationConfig, createSAPAIProvider } from "@mymediset/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    translation: {
      // Translate user input from German to English
      input: buildTranslationConfig("input", {
        sourceLanguage: "de",
        targetLanguage: "en",
      }),
      // Translate model output from English to German
      output: buildTranslationConfig("output", {
        targetLanguage: "de",
      }),
    },
  },
});

// The model handles German input/output automatically
const model = provider("gpt-4o");
```

**Run it:** `npx tsx examples/example-translation.ts`

### Provider Options (Per-Call Overrides)

Override constructor settings on a per-call basis using `providerOptions`.
Options are validated at runtime with Zod schemas.

```typescript
import { generateText } from "ai";

const result = await generateText({
  model: provider("gpt-4o"),
  prompt: "Explain quantum computing",
  providerOptions: {
    "sap-ai": {
      includeReasoning: true,
      modelParams: {
        temperature: 0.7,
        maxTokens: 1000,
      },
    },
  },
});
```

**Complete documentation:**
[API Reference - Provider Options](./API_REFERENCE.md#provider-options)

## Configuration Options

The provider and models can be configured with various settings for
authentication, model parameters, data masking, content filtering, and more.

**Common Configuration:**

- `resourceGroup`: SAP AI Core resource group (default: 'default')
- `deploymentId`: Specific deployment ID (auto-resolved if not set)
- `modelParams`: Temperature, maxTokens, topP, and other generation parameters
- `masking`: SAP Data Privacy Integration (DPI) configuration
- `filtering`: Content safety filters (Azure Content Safety, Llama Guard)

For complete configuration reference including all available options, types, and
examples, see
**[API Reference - Configuration](./API_REFERENCE.md#sapaiprovidersettings)**.

## Error Handling

The provider uses standard Vercel AI SDK error types for consistent error
handling.

**Quick Example:**

```typescript
import { APICallError, LoadAPIKeyError, NoSuchModelError } from "@ai-sdk/provider";

try {
  const result = await generateText({
    model: provider("gpt-4o"),
    prompt: "Hello world",
  });
} catch (error) {
  if (error instanceof LoadAPIKeyError) {
    // 401/403: Authentication or permission issue
    console.error("Authentication issue:", error.message);
  } else if (error instanceof NoSuchModelError) {
    // 404: Model or deployment not found
    console.error("Model not found:", error.modelId);
  } else if (error instanceof APICallError) {
    // Other API errors (400, 429, 5xx, etc.)
    console.error("API error:", error.statusCode, error.message);
    // SAP-specific metadata in responseBody
    const sapError = JSON.parse(error.responseBody ?? "{}");
    console.error("Request ID:", sapError.error?.request_id);
  }
}
```

**Complete reference:**

- **[API Reference - Error Handling](./API_REFERENCE.md#error-handling-examples)** -
  Complete examples with all error properties
- **[API Reference - HTTP Status Codes](./API_REFERENCE.md#http-status-code-reference)** -
  Status code reference table
- **[Troubleshooting Guide](./TROUBLESHOOTING.md)** - Detailed solutions for
  each error type

## Troubleshooting

**Quick Reference:**

- **Authentication (401)**: Check `AICORE_SERVICE_KEY` or `VCAP_SERVICES`
- **Model not found (404)**: Confirm tenant/region supports the model ID
- **Rate limit (429)**: Automatic retry with exponential backoff
- **Streaming**: Iterate `textStream` correctly; don't mix `generateText` and
  `streamText`

**For comprehensive troubleshooting, see the
[Troubleshooting Guide](./TROUBLESHOOTING.md)** with detailed solutions for:

- [Authentication Failed (401)](./TROUBLESHOOTING.md#problem-authentication-failed-or-401-errors)
- [Model Not Found (404)](./TROUBLESHOOTING.md#problem-404-modeldeployment-not-found)
- [Rate Limit (429)](./TROUBLESHOOTING.md#problem-429-rate-limit-exceeded)
- [Server Errors (500-504)](./TROUBLESHOOTING.md#problem-500502503504-server-errors)
- [Streaming Issues](./TROUBLESHOOTING.md#streaming-issues)
- [Tool Calling Problems](./TROUBLESHOOTING.md#tool-calling-issues)

Error code reference table:
[API Reference - HTTP Status Codes](./API_REFERENCE.md#http-status-code-reference)
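The provider already retries 429 and 5xx responses for you. If you want an extra application-level guard around batch jobs, a generic exponential-backoff wrapper is easy to sketch (illustrative, not part of this package's API):

```typescript
// Retry an async operation with exponential backoff (illustrative sketch).
// Delays grow as baseDelayMs * 2^attempt: 500ms, 1000ms, 2000ms, ...
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

In production you would typically also inspect the error (retry only 429/5xx) and add jitter to the delay.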

## Performance

- Prefer streaming (`streamText`) for long outputs to reduce latency and memory.
- Tune `modelParams` carefully: lower `temperature` for deterministic results;
  set `maxTokens` to the expected response size.
- Use `defaultSettings` at provider creation for shared knobs across models to
  avoid per-call overhead.
- Avoid unnecessary history: keep `messages` concise to reduce prompt size and
  cost.

## Security

- Do not commit `.env` or credentials; use environment variables and secrets
  managers.
- Treat `AICORE_SERVICE_KEY` as sensitive; avoid logging it or including it in
  crash reports.
- Mask PII with DPI: configure `masking.masking_providers` using
  `buildDpiMaskingProvider()`.
- Validate and sanitize tool outputs before executing any side effects.
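For the last point, a narrow type guard before acting on model-produced data is usually enough. The shape checked here is hypothetical; adapt it to whatever your tool actually returns:

```typescript
// Hypothetical payload a side-effecting tool might act on.
interface DeleteRequest {
  id: string;
  confirmed: boolean;
}

// Validate untrusted model output before executing any side effect.
export function isDeleteRequest(value: unknown): value is DeleteRequest {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === "string" && v.id.length > 0 && v.confirmed === true;
}
```

A schema library such as Zod (already used by this provider for option validation) gives you the same guarantee with less boilerplate.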

## Debug Mode

- Use the curl guide [CURL_API_TESTING_GUIDE.md](./CURL_API_TESTING_GUIDE.md) to
  diagnose raw API behavior independent of the SDK.
- Log request IDs from `error.responseBody` (parse the JSON for `request_id`) to
  correlate with backend traces.
- Temporarily enable verbose logging in your app around provider calls; redact
  secrets.
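Extracting the request ID can be wrapped in a small helper. The `error.request_id` field follows the response shape shown in the Error Handling section; treat the exact field name as an assumption to verify against your tenant's payloads:

```typescript
// Pull the SAP request ID out of an APICallError's responseBody, if present.
// The `error.request_id` field name is assumed; check your tenant's payloads.
export function extractRequestId(
  responseBody: string | undefined,
): string | undefined {
  try {
    const parsed = JSON.parse(responseBody ?? "{}");
    return parsed?.error?.request_id;
  } catch {
    return undefined; // responseBody was not JSON
  }
}
```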

## Examples

The `examples/` directory contains complete, runnable examples demonstrating key
features:

| Example                             | Description                 | Key Features                            |
| ----------------------------------- | --------------------------- | --------------------------------------- |
| `example-generate-text.ts`          | Basic text generation       | Simple prompts, synchronous generation  |
| `example-simple-chat-completion.ts` | Simple chat conversation    | System messages, user prompts           |
| `example-chat-completion-tool.ts`   | Tool calling with functions | Weather API tool, function execution    |
| `example-streaming-chat.ts`         | Streaming responses         | Real-time text generation, SSE          |
| `example-image-recognition.ts`      | Multi-modal with images     | Vision models, image analysis           |
| `example-data-masking.ts`           | Data privacy integration    | DPI masking, anonymization              |
| `example-document-grounding.ts`     | Document grounding (RAG)    | Vector store, retrieval-augmented gen   |
| `example-translation.ts`            | Input/output translation    | Multi-language support, SAP translation |
| `example-embeddings.ts`             | Text embeddings             | Vector generation, semantic similarity  |

**Running Examples:**

```bash
npx tsx examples/example-generate-text.ts
```

> **Note:** Examples require the `AICORE_SERVICE_KEY` environment variable. See
> [Environment Setup](./ENVIRONMENT_SETUP.md) for configuration.

## Migration Guides

### Upgrading from v3.x to v4.x

Version 4.0 migrates from the **LanguageModelV2** to the **LanguageModelV3**
specification (AI SDK 6.0+). **See the
[Migration Guide](./MIGRATION_GUIDE.md#version-3x-to-4x-breaking-changes) for
complete upgrade instructions.**

**Key changes:**

- **Finish Reason**: Changed from string to object
  (`result.finishReason.unified`)
- **Usage Structure**: Nested format with detailed token breakdown
  (`result.usage.inputTokens.total`)
- **Stream Events**: Structured blocks (`text-start`, `text-delta`, `text-end`)
  instead of simple deltas
- **Warning Types**: Updated format with `feature` field for categorization

**Impact by user type:**

- High-level API users (`generateText`/`streamText`): ✅ Minimal impact (likely
  no changes)
- Direct provider users: ⚠️ Update type imports (`LanguageModelV2` →
  `LanguageModelV3`)
- Custom stream parsers: ⚠️ Update parsing logic for the V3 structure
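In practice, the v4 changes mostly surface when you read results directly. A minimal sketch of the documented shapes (field names as described in the Migration Guide; the values here are made up for demonstration):

```typescript
// Illustrative v4 result fragments per the key changes above.
// v3.x read:  result.finishReason === "stop"
// v4.x read:  result.finishReason.unified === "stop"
export const finishReason = { unified: "stop" };

// v4.x nested usage: result.usage.inputTokens.total
export const usage = { inputTokens: { total: 42 } };
```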

### Upgrading from v2.x to v3.x

Version 3.0 standardizes error handling to use Vercel AI SDK native error types.
**See the [Migration Guide](./MIGRATION_GUIDE.md#v2x--v30) for complete upgrade
instructions.**

**Key changes:**

- `SAPAIError` removed → use `APICallError` from `@ai-sdk/provider`
- Error properties: `error.code` → `error.statusCode`
- Automatic retries for rate limits (429) and server errors (5xx)

### Upgrading from v1.x to v2.x

Version 2.0 uses the official SAP AI SDK. **See the
[Migration Guide](./MIGRATION_GUIDE.md#v1x--v20) for complete upgrade
instructions.**

**Key changes:**

- Authentication via the `AICORE_SERVICE_KEY` environment variable
- Synchronous provider creation: `createSAPAIProvider()` (no `await`)
- Helper functions from the SAP AI SDK

**For detailed migration instructions with code examples, see the
[complete Migration Guide](./MIGRATION_GUIDE.md).**

## Important Note

> **Third-Party Provider**: This SAP AI Core provider
> (`@mymediset/sap-ai-provider`) is developed and maintained by mymediset, not
> by SAP SE. While it uses the official SAP AI SDK and integrates with SAP AI
> Core services, it is not an official SAP product.

## Contributing

We welcome contributions! Please see our [Contributing Guide](./CONTRIBUTING.md)
for details.

## Resources

### Documentation

- [Migration Guide](./MIGRATION_GUIDE.md) - Version upgrade instructions (v1.x →
  v2.x → v3.x → v4.x)
- [API Reference](./API_REFERENCE.md) - Complete API documentation with all
  types and functions
- [Environment Setup](./ENVIRONMENT_SETUP.md) - Authentication and configuration
  setup
- [Troubleshooting](./TROUBLESHOOTING.md) - Common issues and solutions
- [Architecture](./ARCHITECTURE.md) - Internal architecture, design decisions,
  and request flows
- [cURL API Testing Guide](./CURL_API_TESTING_GUIDE.md) - Direct API testing for
  debugging

### Community

- 🐛 [Issue Tracker](https://github.com/BITASIA/sap-ai-provider/issues) - Report
  bugs, request features, and ask questions

### Related Projects

- [Vercel AI SDK](https://sdk.vercel.ai/) - The AI SDK this provider extends
- [SAP AI SDK](https://sap.github.io/ai-sdk/) - Official SAP Cloud SDK for AI
- [SAP AI Core Documentation](https://help.sap.com/docs/ai-core) - Official SAP
  AI Core docs

## License

Apache License 2.0 - see [LICENSE](./LICENSE.md) for details.