@jerome-benoit/sap-ai-provider 4.0.0-rc.1 → 4.0.0-rc.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,7 +3,7 @@
  [![npm](https://img.shields.io/npm/v/@mymediset/sap-ai-provider/latest?label=npm&color=blue)](https://www.npmjs.com/package/@mymediset/sap-ai-provider)
  [![License: Apache-2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
  [![Vercel AI SDK](https://img.shields.io/badge/Vercel%20AI%20SDK-6.0+-black.svg)](https://sdk.vercel.ai/docs)
- [![Language Model](https://img.shields.io/badge/Language%20Model-V3-green.svg)](https://sdk.vercel.ai/docs/ai-sdk-core/provider-interfaces)
+ [![Language Model](https://img.shields.io/badge/Language%20Model-V3-green.svg)](https://sdk.vercel.ai/docs/ai-sdk-core/provider-management)
 
  A community provider for SAP AI Core that integrates seamlessly with the Vercel AI SDK. Built on top of the official **@sap-ai-sdk/orchestration** package, this provider enables you to use SAP's enterprise-grade AI models through the familiar Vercel AI SDK interface.
 
@@ -28,6 +28,8 @@ A community provider for SAP AI Core that integrates seamlessly with the Vercel
  - [Multi-modal Input (Images)](#multi-modal-input-images)
  - [Data Masking (SAP DPI)](#data-masking-sap-dpi)
  - [Content Filtering](#content-filtering)
+ - [Document Grounding (RAG)](#document-grounding-rag)
+ - [Translation](#translation)
  - [Configuration Options](#configuration-options)
  - [Error Handling](#error-handling)
  - [Troubleshooting](#troubleshooting)
@@ -174,87 +176,56 @@ Authentication is handled automatically by the SAP AI SDK using the `AICORE_SERV
 
  ### Text Generation
 
- ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { generateText } from "ai";
- import { APICallError } from "@ai-sdk/provider";
-
- const provider = createSAPAIProvider();
+ **Complete example:** [examples/example-generate-text.ts](./examples/example-generate-text.ts)
 
- try {
-   const result = await generateText({
-     model: provider("gpt-4o"),
-     prompt: "Write a short story about a robot learning to paint.",
-   });
-
-   console.log(result.text);
- } catch (error) {
-   if (error instanceof APICallError) {
-     console.error("API error:", error.message, "- Status:", error.statusCode);
-   }
-   throw error;
- }
+ ```typescript
+ const result = await generateText({
+   model: provider("gpt-4o"),
+   prompt: "Write a short story about a robot learning to paint.",
+ });
+ console.log(result.text);
  ```
 
+ **Run it:** `npx tsx examples/example-generate-text.ts`
+
  ### Chat Conversations
 
+ **Complete example:** [examples/example-simple-chat-completion.ts](./examples/example-simple-chat-completion.ts)
+
  Note: assistant `reasoning` parts are dropped by default. Set `includeReasoning: true` on the model settings if you explicitly want to forward them.
 
  ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { generateText } from "ai";
- import { APICallError } from "@ai-sdk/provider";
-
- const provider = createSAPAIProvider();
-
- try {
-   const result = await generateText({
-     model: provider("anthropic--claude-3.5-sonnet"),
-     messages: [
-       { role: "system", content: "You are a helpful coding assistant." },
-       {
-         role: "user",
-         content: "How do I implement binary search in TypeScript?",
-       },
-     ],
-   });
- } catch (error) {
-   if (error instanceof APICallError) {
-     console.error("API error:", error.message, "- Status:", error.statusCode);
-   }
-   throw error;
- }
+ const result = await generateText({
+   model: provider("anthropic--claude-3.5-sonnet"),
+   messages: [
+     { role: "system", content: "You are a helpful coding assistant." },
+     {
+       role: "user",
+       content: "How do I implement binary search in TypeScript?",
+     },
+   ],
+ });
  ```
 
- ### Streaming Responses
+ **Run it:** `npx tsx examples/example-simple-chat-completion.ts`
 
- ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { streamText } from "ai";
- import { APICallError } from "@ai-sdk/provider";
+ ### Streaming Responses
 
- const provider = createSAPAIProvider();
+ **Complete example:** [examples/example-streaming-chat.ts](./examples/example-streaming-chat.ts)
 
- try {
-   const result = streamText({
-     model: provider("gpt-4o"),
-     prompt: "Explain machine learning concepts.",
-   });
+ ```typescript
+ const result = streamText({
+   model: provider("gpt-4o"),
+   prompt: "Explain machine learning concepts.",
+ });
 
-   for await (const delta of result.textStream) {
-     process.stdout.write(delta);
-   }
- } catch (error) {
-   if (error instanceof APICallError) {
-     console.error("API error:", error.message, "- Status:", error.statusCode);
-   }
-   throw error;
+ for await (const delta of result.textStream) {
+   process.stdout.write(delta);
  }
  ```
 
+ **Run it:** `npx tsx examples/example-streaming-chat.ts`
+
  ### Model Configuration
 
  ```typescript
@@ -314,27 +285,14 @@ The following helper functions are exported by this package for convenient confi
 
  > **Note on Terminology:** This documentation uses "tool calling" (Vercel AI SDK convention), equivalent to "function calling" in OpenAI documentation. Both terms refer to the same capability of models invoking external functions.
 
- 📖 **Complete guide:** [API Reference - Tool Calling](./API_REFERENCE.md#tool-calling-function-calling)
+ 📖 **Complete guide:** [API Reference - Tool Calling](./API_REFERENCE.md#tool-calling-function-calling)
+ **Complete example:** [examples/example-chat-completion-tool.ts](./examples/example-chat-completion-tool.ts)
 
  ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { generateText, tool } from "ai";
- import { z } from "zod";
-
- const provider = createSAPAIProvider();
-
- const weatherSchema = z.object({
-   location: z.string(),
- });
-
  const weatherTool = tool({
    description: "Get weather for a location",
-   inputSchema: weatherSchema,
-   execute: (args: z.infer<typeof weatherSchema>) => {
-     const { location } = args;
-     return `Weather in ${location}: sunny, 72°F`;
-   },
+   inputSchema: z.object({ location: z.string() }),
+   execute: (args) => `Weather in ${args.location}: sunny, 72°F`,
  });
 
  const result = await generateText({
@@ -343,21 +301,17 @@ const result = await generateText({
    tools: { getWeather: weatherTool },
    maxSteps: 3,
  });
-
- console.log(result.text);
  ```
 
+ **Run it:** `npx tsx examples/example-chat-completion-tool.ts`
+
  ⚠️ **Important:** Gemini models support only 1 tool per request. For multi-tool applications, use GPT-4o, Claude, or Amazon Nova models. See [API Reference - Tool Calling](./API_REFERENCE.md#tool-calling-function-calling) for complete model comparison.
 
  ### Multi-modal Input (Images)
 
- ```typescript
- import "dotenv/config"; // Load environment variables
- import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
- import { generateText } from "ai";
-
- const provider = createSAPAIProvider();
+ **Complete example:** [examples/example-image-recognition.ts](./examples/example-image-recognition.ts)
 
+ ```typescript
  const result = await generateText({
    model: provider("gpt-4o"),
    messages: [
@@ -372,6 +326,8 @@ const result = await generateText({
  });
  ```
 
+ **Run it:** `npx tsx examples/example-image-recognition.ts`
+
  ### Data Masking (SAP DPI)
 
  Use SAP's Data Privacy Integration to mask sensitive data:
@@ -412,6 +368,69 @@ const provider = createSAPAIProvider({
  });
  ```
 
+ **Full documentation:** [API_REFERENCE.md - Content Filtering](./API_REFERENCE.md#buildazurecontentsafetyfiltertype-config)
+
+ ### Document Grounding (RAG)
+
+ Ground LLM responses in your own documents using vector databases.
+
+ **Complete example:** [examples/example-document-grounding.ts](./examples/example-document-grounding.ts)
+ **Complete documentation:** [API Reference - Document Grounding](./API_REFERENCE.md#builddocumentgroundingconfigconfig)
+
+ ```typescript
+ const provider = createSAPAIProvider({
+   defaultSettings: {
+     grounding: buildDocumentGroundingConfig({
+       filters: [
+         {
+           id: "vector-store-1", // Your vector database ID
+           data_repositories: ["*"], // Search all repositories
+         },
+       ],
+       placeholders: {
+         input: ["?question"],
+         output: "groundingOutput",
+       },
+     }),
+   },
+ });
+
+ // Queries are now grounded in your documents
+ const model = provider("gpt-4o");
+ ```
+
+ **Run it:** `npx tsx examples/example-document-grounding.ts`
+
+ ### Translation
+
+ Automatically translate user queries and model responses.
+
+ **Complete example:** [examples/example-translation.ts](./examples/example-translation.ts)
+ **Complete documentation:** [API Reference - Translation](./API_REFERENCE.md#buildtranslationconfigtype-config)
+
+ ```typescript
+ const provider = createSAPAIProvider({
+   defaultSettings: {
+     translation: {
+       // Translate user input from German to English
+       input: buildTranslationConfig("input", {
+         sourceLanguage: "de",
+         targetLanguage: "en",
+       }),
+       // Translate model output from English to German
+       output: buildTranslationConfig("output", {
+         targetLanguage: "de",
+       }),
+     },
+   },
+ });
+
+ // Model handles German input/output automatically
+ const model = provider("gpt-4o");
+ ```
+
+ **Run it:** `npx tsx examples/example-translation.ts`
+
  ## Configuration Options
 
  The provider and models can be configured with various settings for authentication, model parameters, data masking, content filtering, and more.