@kognitivedev/vercel-ai-provider 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +260 -0
  2. package/package.json +1 -1
package/README.md ADDED
@@ -0,0 +1,260 @@
# @kognitivedev/vercel-ai-provider

A Vercel AI SDK provider wrapper that integrates the Kognitive memory layer into your AI applications. It automatically injects memory context and logs conversations for memory processing.

## Installation

```bash
npm install @kognitivedev/vercel-ai-provider
```

### Peer Dependencies

This package requires the Vercel AI SDK:

```bash
npm install ai
```

## Quick Start

```typescript
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// 1. Create the cognitive layer
const clModel = createCognitiveLayer({
  provider: openai,
  clConfig: {
    appId: "my-app",
    defaultAgentId: "assistant",
    baseUrl: "http://localhost:3001"
  }
});

// 2. Use it with the Vercel AI SDK
const { text } = await generateText({
  model: clModel("gpt-4o", {
    userId: "user-123",
    sessionId: "session-abc"
  }),
  prompt: "What's my favorite color?"
});
```

## Configuration

### `CognitiveLayerConfig`

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| `appId` | `string` | ✓ | - | Unique identifier for your application |
| `defaultAgentId` | `string` | - | `"default"` | Default agent ID when not specified per-request |
| `baseUrl` | `string` | - | `"http://localhost:3001"` | Kognitive backend API URL |
| `apiKey` | `string` | - | - | API key for authentication (if required) |
| `processDelayMs` | `number` | - | `500` | Delay in milliseconds before triggering memory processing (set to `0` to process immediately) |

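For reference, a fully specified configuration might look like the sketch below. All values are illustrative, including the `KOGNITIVE_API_KEY` environment variable name:

```typescript
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";

// Example values only; adjust appId, baseUrl, and apiKey for your deployment.
const clModel = createCognitiveLayer({
  provider: openai,
  clConfig: {
    appId: "my-app",                        // required
    defaultAgentId: "support-agent",        // optional, defaults to "default"
    baseUrl: "https://api.kognitive.dev",   // optional, defaults to http://localhost:3001
    apiKey: process.env.KOGNITIVE_API_KEY,  // optional, only if your backend requires it
    processDelayMs: 500                     // optional, defaults to 500
  }
});
```
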
## API Reference

### `createCognitiveLayer(config)`

Creates a model wrapper function that adds memory capabilities to any Vercel AI SDK provider.

**Parameters:**

```typescript
createCognitiveLayer({
  provider: any, // Vercel AI SDK provider (e.g., openai, anthropic)
  clConfig: CognitiveLayerConfig
}): CLModelWrapper
```

**Returns:** `CLModelWrapper` - A function that wraps models with memory capabilities.

---

### `CLModelWrapper`

The function returned by `createCognitiveLayer`.

```typescript
type CLModelWrapper = (
  modelId: string,
  settings?: {
    userId?: string;
    agentId?: string;
    sessionId?: string;
  }
) => LanguageModelV2;
```

**Parameters:**

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | `string` | ✓ | Model identifier (e.g., `"gpt-4o"`, `"claude-3-opus"`) |
| `settings.userId` | `string` | - | User identifier (required for memory features) |
| `settings.agentId` | `string` | - | Override the default agent ID |
| `settings.sessionId` | `string` | - | Session identifier (required for logging) |

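For example, a per-request call that overrides the default agent ID might look like this (the `"billing-agent"` ID is purely illustrative, and `clModel` is the wrapper created above):

```typescript
const model = clModel("gpt-4o", {
  userId: "user-123",        // enables memory features
  agentId: "billing-agent",  // overrides defaultAgentId for this request
  sessionId: "session-abc"   // enables conversation logging
});
// `model` can then be passed to generateText or streamText as usual.
```
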
## Usage Examples

### With OpenAI

```typescript
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const clModel = createCognitiveLayer({
  provider: openai,
  clConfig: {
    appId: "my-app",
    baseUrl: "https://api.kognitive.dev"
  }
});

const { text } = await generateText({
  model: clModel("gpt-4o", {
    userId: "user-123",
    sessionId: "session-abc"
  }),
  prompt: "Remember that my favorite color is blue"
});
```

### With Anthropic

```typescript
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { anthropic } from "@ai-sdk/anthropic";
import { streamText } from "ai";

const clModel = createCognitiveLayer({
  provider: anthropic,
  clConfig: {
    appId: "my-app",
    defaultAgentId: "claude-assistant"
  }
});

const result = await streamText({
  model: clModel("claude-3-5-sonnet-latest", {
    userId: "user-456",
    sessionId: "chat-xyz"
  }),
  prompt: "What did I tell you about my favorite color?"
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

### With System Prompts

The provider automatically injects memory context into your system prompts:

```typescript
const { text } = await generateText({
  model: clModel("gpt-4o", {
    userId: "user-123",
    sessionId: "session-abc"
  }),
  system: "You are a helpful assistant.",
  prompt: "What do you know about me?"
});

// Memory context is automatically appended to the system prompt
```

### Without Memory (Anonymous Users)

Skip memory features by omitting `userId`:

```typescript
const { text } = await generateText({
  model: clModel("gpt-4o"),
  prompt: "General question without memory"
});
```

## How It Works

### Memory Injection Flow

1. **Request Interception**: When a request is made, the middleware fetches the user's memory snapshot
2. **Context Injection**: Memory context is injected into the system prompt as a `<MemoryContext>` block
3. **Response Processing**: After the response, the conversation is logged
4. **Background Processing**: Memory extraction and management run asynchronously

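In rough pseudocode, the flow behaves like the sketch below. This is a conceptual outline only: the endpoint paths and query parameters come from this README, but the helper names and request/response payload shapes are illustrative, not the package's actual internals.

```typescript
// Conceptual sketch of the memory injection flow (not the real implementation).
const baseUrl = "http://localhost:3001";
const appId = "my-app";

async function generateWithMemory(
  callModel: (system: string, prompt: string) => Promise<string>,
  prompt: string,
  settings: { userId?: string; agentId?: string; sessionId?: string },
  system = ""
): Promise<string> {
  // 1. Request interception: fetch the user's memory snapshot (skipped for anonymous users).
  if (settings.userId) {
    const qs = new URLSearchParams({
      userId: settings.userId,
      agentId: settings.agentId ?? "default",
      appId,
    });
    const snapshot = await fetch(`${baseUrl}/api/cognitive/snapshot?${qs}`).then((r) => r.json());

    // 2. Context injection: render the snapshot into the <MemoryContext> block
    //    (see "Memory Context Format" below) and append it to the system prompt.
    system = `${system}\n${renderMemoryContext(snapshot)}`;
  }

  const text = await callModel(system, prompt);

  // 3. Response processing: log the conversation (requires sessionId).
  if (settings.userId && settings.sessionId) {
    await fetch(`${baseUrl}/api/cognitive/log`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ ...settings, appId, prompt, response: text }), // illustrative payload
    });

    // 4. Background processing: trigger memory extraction after processDelayMs.
    setTimeout(() => {
      void fetch(`${baseUrl}/api/cognitive/process`, { method: "POST" }); // illustrative payload
    }, 500);
  }

  return text;
}

// Placeholder: the real provider formats the snapshot as shown in "Memory Context Format".
function renderMemoryContext(snapshot: unknown): string {
  return `<MemoryContext>\n${JSON.stringify(snapshot, null, 2)}\n</MemoryContext>`;
}
```
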
### Memory Context Format

The injected memory context follows this structure:

```xml
<MemoryContext>
Use the following memory to stay consistent. Prefer UserContext facts for answers; AgentHeuristics guide style, safety, and priorities.
<AgentHeuristics>
- User prefers concise responses
- Always greet user by name
</AgentHeuristics>
<UserContext>
<Facts>
- User's name is John
- Favorite color is blue
</Facts>
<State>
- Currently working on a project
</State>
</UserContext>
</MemoryContext>
```

## Backend API Integration

The provider communicates with your Kognitive backend via these endpoints:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/cognitive/snapshot` | GET | Fetches user's memory snapshot |
| `/api/cognitive/log` | POST | Logs conversation for processing |
| `/api/cognitive/process` | POST | Triggers memory extraction/management |

### Query Parameters for Snapshot

```
GET /api/cognitive/snapshot?userId={userId}&agentId={agentId}&appId={appId}
```

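To check the snapshot endpoint directly (useful when debugging memory injection), a minimal fetch sketch might look like this, assuming the default local `baseUrl` and illustrative IDs; the exact response shape is not documented here:

```typescript
// Query the snapshot endpoint directly (illustrative IDs).
const params = new URLSearchParams({
  userId: "user-123",
  agentId: "assistant",
  appId: "my-app",
});

const res = await fetch(`http://localhost:3001/api/cognitive/snapshot?${params}`);
console.log(res.status, await res.json());
```
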
## Troubleshooting

### Memory not being injected

1. Ensure `userId` and `sessionId` are provided
2. Check that the backend is running at the configured `baseUrl`
3. Verify the snapshot endpoint returns data

### Console warnings

```
CognitiveLayer: sessionId is required to log and process memories; skipping logging until provided.
```

This warning appears when `userId` is provided but `sessionId` is missing. Add `sessionId` to enable logging.

### Processing delay

The default 500ms delay before triggering memory processing allows database writes to settle. Adjust it with `processDelayMs`:

```typescript
clConfig: {
  processDelayMs: 1000 // 1 second delay
  // processDelayMs: 0 // Immediate processing
}
```

## License

MIT
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@kognitivedev/vercel-ai-provider",
-   "version": "0.1.0",
+   "version": "0.1.1",
    "main": "dist/index.js",
    "types": "dist/index.d.ts",
    "publishConfig": {