@node-llm/orm 0.1.0

package/CHANGELOG.md ADDED
@@ -0,0 +1,64 @@
+ # Changelog
+
+ All notable changes to `@node-llm/orm` will be documented in this file.
+
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ## [0.1.0] - 2026-01-18
+
+ ### Added
+
+ - **Chat Persistence**: Automatic tracking of chat sessions, messages, and conversation history
+ - **Streaming Support**: Full support for `chat.askStream()` with automatic message persistence as tokens arrive
+ - **Custom Fields Support**: Pass additional fields (e.g., `userId`, `projectId`) directly to `createChat()`; they're automatically spread into the Prisma create call
+ - **Native JSON Metadata**: Metadata passed as-is to support native `Json`/`JSONB` database types for efficient querying
+ - **Optional Persistence Configuration**: New `persistence` option allows selective disabling of tool call and request tracking
+   - `persistence.toolCalls`: Enable/disable tool call persistence (default: `true`)
+   - `persistence.requests`: Enable/disable API request metrics (default: `true`)
+ - **Tool Call Tracking**: Audit log of every tool execution with arguments and results
+ - **API Request Metrics**: Track latency, tokens, and cost for every API call
+ - **Custom Table Names**: Support for custom Prisma table names
+ - **Comprehensive Tests**: 24 unit tests covering all features
+ - **Example Application**: Complete HR Chatbot RAG example demonstrating all ORM features
+ - **TypeScript Support**: Full type safety with Prisma and TypeScript
+ - **Prisma Adapter**: Production-ready adapter for Prisma ORM
+
+ ### Example Usage
+
+ ```typescript
+ // Custom fields
+ const chat = await createChat(prisma, llm, {
+   model: "gpt-4",
+   userId: "user_123",
+   projectId: "proj_456",
+   metadata: { source: "web-ui", tags: ["support"] }
+ });
+
+ // Optional persistence (separate binding to avoid redeclaring `chat`)
+ const minimalChat = await createChat(prisma, llm, {
+   model: "gpt-4",
+   persistence: {
+     toolCalls: false, // Skip tool call tracking
+     requests: true // Keep request metrics
+   }
+ });
+
+ // Streaming with persistence
+ for await (const token of chat.askStream("Hello")) {
+   process.stdout.write(token);
+ }
+ // Messages automatically persisted after stream completes
+ ```
+
+ ## [Unreleased]
+
+ ### Planned
+
+ - Support for custom message types
+ - Batch operations for message history
+ - Advanced query helpers
+
+ ---
+
+ [0.1.0]: https://github.com/node-llm/node-llm/releases/tag/orm-v0.1.0
package/README.md ADDED
@@ -0,0 +1,353 @@
+ # @node-llm/orm
+
+ [![npm version](https://img.shields.io/npm/v/@node-llm/orm.svg)](https://www.npmjs.com/package/@node-llm/orm)
+ [![npm downloads](https://img.shields.io/npm/dm/@node-llm/orm.svg)](https://www.npmjs.com/package/@node-llm/orm)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-5.0+-blue.svg)](https://www.typescriptlang.org/)
+
+ > Database persistence layer for NodeLLM. Automatically tracks chats, messages, tool calls, and API requests.
+
+ **[Read the Full Documentation](https://node-llm.github.io/orm/prisma.html)** | **[View Example App](../../examples/hr-chatbot-rag/)**
+
+ ## Features
+
+ - ✅ **Automatic Persistence** - Messages, tool calls, and API metrics saved automatically
+ - ✅ **Streaming Support** - Real-time token delivery with `askStream()`
+ - ✅ **Provider Agnostic** - Works with any NodeLLM provider (OpenAI, Anthropic, Gemini, OpenRouter, etc.)
+ - ✅ **Type Safe** - Full TypeScript support with Prisma
+ - ✅ **Audit Trail** - Complete history of tool executions and API calls
+ - ✅ **Flexible** - Prisma adapter included, other ORMs can be added
+
+ ## Installation
+
+ ```bash
+ npm install @node-llm/orm @node-llm/core @prisma/client
+ npm install -D prisma
+ ```
+
+ ## Quick Start
+
+ ### Option 1: Using the CLI (Recommended)
+
+ ```bash
+ # Generate schema.prisma automatically
+ npx node-llm-orm init
+
+ # Create and apply migration
+ npx prisma migrate dev --name init
+ npx prisma generate
+ ```
+
+ ### Option 2: Manual Setup
+
+ Copy the reference schema into your project:
+
+ ```bash
+ cp node_modules/@node-llm/orm/schema.prisma prisma/schema.prisma
+ ```
+
+ Or manually add the models to your existing `prisma/schema.prisma`:
+
+ ```prisma
+ model LlmChat {
+   id           String   @id @default(uuid())
+   model        String?
+   provider     String?
+   instructions String?  @db.Text
+   metadata     String?  @db.Text
+   createdAt    DateTime @default(now())
+   updatedAt    DateTime @updatedAt
+
+   messages LlmMessage[]
+   requests LlmRequest[]
+ }
+
+ model LlmMessage {
+   id      String  @id @default(uuid())
+   chatId  String
+   role    String
+   content String? @db.Text
+   // ... see schema.prisma for full definition
+ }
+
+ model LlmToolCall {
+   id         String @id @default(uuid())
+   messageId  String
+   toolCallId String
+   name       String
+   arguments  String @db.Text
+   // ... see schema.prisma for full definition
+ }
+
+ model LlmRequest {
+   id        String  @id @default(uuid())
+   chatId    String
+   messageId String?
+   provider  String
+   model     String
+   // ... see schema.prisma for full definition
+ }
+ ```
+
+ ### Run the Migration
+
+ If you used the manual setup, create and apply the migration (the CLI flow above already covers this):
+
+ ```bash
+ npx prisma migrate dev --name init
+ npx prisma generate
+ ```
+
+ ### Use the ORM
+
+ ```typescript
+ import { PrismaClient } from "@prisma/client";
+ import { createLLM } from "@node-llm/core";
+ import { createChat } from "@node-llm/orm/prisma";
+
+ const prisma = new PrismaClient();
+ const llm = createLLM({ provider: "openai" });
+
+ // Create a new chat
+ const chat = await createChat(prisma, llm, {
+   model: "gpt-4",
+   instructions: "You are a helpful assistant."
+ });
+
+ // Ask a question (automatically persisted)
+ const response = await chat.ask("What is the capital of France?");
+ console.log(response.content); // "The capital of France is Paris."
+
+ // View conversation history
+ const messages = await chat.messages();
+ console.log(messages); // [{ role: 'user', content: '...' }, { role: 'assistant', content: '...' }]
+ ```
+
+ ## Architecture
+
+ The ORM tracks four core entities:
+
+ | Model           | Purpose              | Example                                    |
+ | --------------- | -------------------- | ------------------------------------------ |
+ | **LlmChat**     | Session container    | Holds model, provider, system instructions |
+ | **LlmMessage**  | Conversation history | User queries and assistant responses       |
+ | **LlmToolCall** | Tool executions      | Function calls made by the assistant       |
+ | **LlmRequest**  | API metrics          | Token usage, latency, cost per API call    |
+
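+ As a quick sanity check of the data model, here is a minimal sketch (assuming the default model names above and Prisma's standard camelCase client properties) that reads a full session back with its related records:
+
+ ```typescript
+ // Sketch: load one chat with its messages, tool calls, and request metrics.
+ // Assumes the default LlmChat/LlmMessage/LlmToolCall/LlmRequest models.
+ const session = await prisma.llmChat.findUnique({
+   where: { id: chat.id },
+   include: {
+     messages: {
+       orderBy: { createdAt: "asc" },
+       include: { toolCalls: true } // LlmToolCall rows for each message
+     },
+     requests: true // LlmRequest rows (latency, tokens, cost)
+   }
+ });
+ ```
+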
+ ### Data Flow
+
+ ```
+ User Input
+      ↓
+ Chat.ask()
+      ↓
+ ┌─────────────────────────────────────┐
+ │ 1. Create User Message (DB)         │
+ │ 2. Create Assistant Message (DB)    │
+ │ 3. Fetch History (DB)               │
+ │ 4. Call LLM API                     │
+ │    ├─ onToolCallEnd → ToolCall (DB) │
+ │    └─ afterResponse → Request (DB)  │
+ │ 5. Update Assistant Message (DB)    │
+ └─────────────────────────────────────┘
+      ↓
+ Return Response
+ ```
+
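+ The diagram corresponds to a handful of Prisma writes wrapped around the provider call. The sketch below is illustrative only — `askPersisted` and its `callModel` parameter are hypothetical stand-ins, not the package's actual internals:
+
+ ```typescript
+ // Illustrative sketch of the flow above; not the real implementation.
+ // `callModel` stands in for the provider request made by @node-llm/core.
+ type ModelCall = (
+   history: { role: string; content: string | null }[]
+ ) => Promise<{ content: string }>;
+
+ async function askPersisted(chatId: string, input: string, callModel: ModelCall) {
+   // 1. Create the user message
+   await prisma.llmMessage.create({ data: { chatId, role: "user", content: input } });
+   // 2. Create a placeholder assistant message
+   const assistant = await prisma.llmMessage.create({
+     data: { chatId, role: "assistant" }
+   });
+   // 3. Fetch history for context
+   const history = await prisma.llmMessage.findMany({
+     where: { chatId },
+     orderBy: { createdAt: "asc" },
+     select: { role: true, content: true }
+   });
+   // 4. Call the LLM API — in the real flow, hooks persist tool calls
+   //    (onToolCallEnd) and request metrics (afterResponse) here
+   const response = await callModel(history);
+   // 5. Update the assistant message with the final content
+   return prisma.llmMessage.update({
+     where: { id: assistant.id },
+     data: { content: response.content }
+   });
+ }
+ ```
+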
+ ## Advanced Usage
+
+ ### Streaming Responses
+
+ For real-time UX, use `askStream()` to yield tokens as they arrive:
+
+ ```typescript
+ import { createChat } from "@node-llm/orm/prisma";
+
+ const chat = await createChat(prisma, llm, {
+   model: "gpt-4"
+ });
+
+ // Stream tokens in real-time
+ for await (const token of chat.askStream("Tell me a story")) {
+   process.stdout.write(token); // Print each token immediately
+ }
+
+ // Message is automatically persisted after streaming completes
+ const messages = await chat.messages();
+ console.log(messages[messages.length - 1].content); // Full story
+ ```
+
+ ### React/Next.js Streaming Example
+
+ ```typescript
+ // app/api/chat/route.ts
+ import { createChat, loadChat } from "@node-llm/orm/prisma";
+ // `prisma` and `llm` are assumed to be initialized elsewhere in your app
+
+ export async function POST(req: Request) {
+   const { message, chatId } = await req.json();
+
+   const chat = chatId
+     ? await loadChat(prisma, llm, chatId)
+     : await createChat(prisma, llm, { model: "gpt-4" });
+
+   const stream = new ReadableStream({
+     async start(controller) {
+       for await (const token of chat.askStream(message)) {
+         controller.enqueue(new TextEncoder().encode(token));
+       }
+       controller.close();
+     }
+   });
+
+   return new Response(stream);
+ }
+ ```
+
+ ### Custom Table Names
+
+ If you have existing tables with different names (e.g., `AssistantChat` instead of `LlmChat`), you can specify custom table names:
+
+ ```typescript
+ import { createChat, loadChat } from "@node-llm/orm/prisma";
+
+ const tableNames = {
+   chat: "assistantChat",
+   message: "assistantMessage",
+   toolCall: "assistantToolCall",
+   request: "assistantRequest"
+ };
+
+ // Create chat with custom table names
+ const chat = await createChat(prisma, llm, { model: "gpt-4" }, tableNames);
+
+ // Load chat (must use same table names)
+ const loaded = await loadChat(prisma, llm, chatId, tableNames);
+ ```
+
+ **Note**: The names you pass must match the Prisma client properties for your models (the camelCased model names). For example:
+
+ ```prisma
+ model AssistantChat {
+   // ... fields
+   @@map("assistantChat") // Optional: map to a different database table name
+ }
+ ```
+
+ ### With Tools
+
+ ```typescript
+ import { createChat } from "@node-llm/orm/prisma";
+ import { searchTool } from "./tools/search";
+
+ const chat = await createChat(prisma, llm, {
+   model: "gpt-4",
+   tools: [searchTool]
+ });
+
+ await chat.ask("Search for NodeLLM documentation");
+ // Tool calls are automatically persisted to the LlmToolCall table
+ ```
+
+ ### Loading Existing Chats
+
+ ```typescript
+ import { loadChat } from "@node-llm/orm/prisma";
+
+ const chat = await loadChat(prisma, llm, "chat-id-123");
+ if (chat) {
+   await chat.ask("Continue our conversation");
+ }
+ ```
+
+ ### Querying Metrics
+
+ ```typescript
+ // Get all API requests for a chat
+ const requests = await prisma.llmRequest.findMany({
+   where: { chatId: chat.id },
+   orderBy: { createdAt: "desc" }
+ });
+
+ console.log(
+   `Total tokens: ${requests.reduce((sum, r) => sum + r.inputTokens + r.outputTokens, 0)}`
+ );
+ console.log(`Total cost: $${requests.reduce((sum, r) => sum + (r.cost || 0), 0)}`);
+ ```
+
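+ Prisma's `aggregate` can push the same summation into the database instead of reducing in JavaScript — a small sketch, again assuming the default `LlmRequest` model:
+
+ ```typescript
+ // Sketch: compute token/cost totals and average latency in the database.
+ const totals = await prisma.llmRequest.aggregate({
+   where: { chatId: chat.id },
+   _sum: { inputTokens: true, outputTokens: true, cost: true },
+   _avg: { duration: true } // average request duration in ms
+ });
+ console.log(totals._sum, totals._avg);
+ ```
+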
+ ### Custom Fields & Metadata
+
+ You can add custom fields (like `userId`, `projectId`, `tenantId`) to your Prisma schema and pass them directly to `createChat`. The library will pass these fields through to the generic Prisma create call.
+
+ **1. Update your Prisma Schema:**
+
+ ```prisma
+ model LlmChat {
+   // ... standard fields
+   metadata  Json?   // Use Json type for flexible storage
+   userId    String? // Custom field
+   projectId String? // Custom field
+ }
+ ```
+
+ **2. Pass fields to createChat:**
+
+ ```typescript
+ const chat = await createChat(prisma, llm, {
+   model: "gpt-4",
+   instructions: "You are consistent.",
+   // Custom fields passed directly
+   userId: "user_123",
+   projectId: "proj_abc",
+   // Metadata is passed as-is (native JSON support)
+   metadata: {
+     source: "web-client",
+     tags: ["experiment-a"]
+   }
+ });
+ ```
+
+ ### Persistence Configuration
+
+ By default, the ORM persists everything: messages, tool calls, and API requests. If you don't need certain tables (e.g., you're building a minimal app without tool tracking), you can disable specific persistence features:
+
+ ```typescript
+ const chat = await createChat(prisma, llm, {
+   model: "gpt-4",
+   persistence: {
+     toolCalls: false, // Skip LlmToolCall table
+     requests: false // Skip LlmRequest table
+   }
+ });
+ ```
+
+ **Use Cases:**
+
+ - **Minimal Schema**: Only create `LlmChat` and `LlmMessage` tables
+ - **Privacy**: Disable request logging for sensitive applications
+ - **Performance**: Reduce database writes for high-throughput scenarios
+
+ **Note**: Message persistence is always enabled (required for conversation history).
+
+ ## Environment Variables
+
+ The ORM respects NodeLLM's provider configuration:
+
+ ```bash
+ # Chat Provider
+ NODELLM_PROVIDER="openrouter"
+ NODELLM_MODEL="google/gemini-2.0-flash-001"
+
+ # Embedding Provider (optional, for RAG apps)
+ NODELLM_EMBEDDING_PROVIDER="openai"
+ NODELLM_EMBEDDING_MODEL="text-embedding-3-small"
+ ```
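+
+ If you prefer not to rely on environment defaults, the same settings can be passed explicitly — a sketch reusing the `createLLM`/`createChat` options shown in Quick Start:
+
+ ```typescript
+ // Sketch: wire the env vars through explicitly rather than relying on defaults.
+ const llm = createLLM({ provider: process.env.NODELLM_PROVIDER ?? "openrouter" });
+
+ const chat = await createChat(prisma, llm, {
+   model: process.env.NODELLM_MODEL ?? "google/gemini-2.0-flash-001"
+ });
+ ```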
+
+ ## Roadmap
+
+ - [ ] TypeORM adapter
+ - [ ] Drizzle adapter
+ - [ ] Migration utilities
+ - [ ] Analytics dashboard
+ - [ ] Export/import conversations
+
+ ## License
+
+ MIT
package/bin/cli.js ADDED
@@ -0,0 +1,157 @@
+ #!/usr/bin/env node
+ /**
+  * @node-llm/orm migration generator
+  *
+  * Usage:
+  *   npx node-llm-orm init     # Generate schema.prisma
+  *   npx node-llm-orm migrate  # Create and apply migration
+  */
+
+ import { writeFileSync, existsSync, mkdirSync } from "fs";
+ import { join } from "path";
+
+ const SCHEMA_TEMPLATE = `// This is your Prisma schema file
+ // Generated by @node-llm/orm
+
+ generator client {
+   provider = "prisma-client-js"
+ }
+
+ datasource db {
+   provider = "postgresql"
+   url      = env("DATABASE_URL")
+ }
+
+ // NodeLLM ORM Models
+ model LlmChat {
+   id           String   @id @default(uuid())
+   model        String?
+   provider     String?
+   instructions String?  @db.Text
+   metadata     String?  @db.Text
+   createdAt    DateTime @default(now())
+   updatedAt    DateTime @updatedAt
+
+   messages LlmMessage[]
+   requests LlmRequest[]
+
+   @@index([createdAt])
+ }
+
+ model LlmMessage {
+   id           String   @id @default(uuid())
+   chatId       String
+   role         String
+   content      String?  @db.Text
+   contentRaw   String?  @db.Text
+   reasoning    String?  @db.Text
+   inputTokens  Int?
+   outputTokens Int?
+   modelId      String?
+   provider     String?
+   createdAt    DateTime @default(now())
+
+   chat      LlmChat       @relation(fields: [chatId], references: [id], onDelete: Cascade)
+   toolCalls LlmToolCall[]
+   requests  LlmRequest[]
+
+   @@index([chatId])
+   @@index([createdAt])
+ }
+
+ model LlmToolCall {
+   id         String   @id @default(uuid())
+   messageId  String
+   toolCallId String
+   name       String
+   arguments  String   @db.Text
+   result     String?  @db.Text
+   createdAt  DateTime @default(now())
+
+   message LlmMessage @relation(fields: [messageId], references: [id], onDelete: Cascade)
+
+   @@index([messageId])
+   @@index([createdAt])
+ }
+
+ model LlmRequest {
+   id        String  @id @default(uuid())
+   chatId    String
+   messageId String?
+
+   provider     String
+   model        String
+   statusCode   Int
+   duration     Int
+   inputTokens  Int
+   outputTokens Int
+   cost         Float?
+
+   createdAt DateTime @default(now())
+
+   chat    LlmChat     @relation(fields: [chatId], references: [id], onDelete: Cascade)
+   message LlmMessage? @relation(fields: [messageId], references: [id], onDelete: Cascade)
+
+   @@index([chatId])
+   @@index([createdAt])
+ }
+ `;
+
+ function init() {
+   const cwd = process.cwd();
+   const prismaDir = join(cwd, "prisma");
+   const schemaPath = join(prismaDir, "schema.prisma");
+
+   // Create prisma directory if it doesn't exist
+   if (!existsSync(prismaDir)) {
+     mkdirSync(prismaDir, { recursive: true });
+     console.log("✓ Created prisma/ directory");
+   }
+
+   // Check if schema already exists
+   if (existsSync(schemaPath)) {
+     console.log("⚠️  schema.prisma already exists");
+     console.log("   To add NodeLLM models, manually copy from:");
+     console.log("   node_modules/@node-llm/orm/schema.prisma");
+     return;
+   }
+
+   // Write schema
+   writeFileSync(schemaPath, SCHEMA_TEMPLATE);
+   console.log("✓ Generated prisma/schema.prisma");
+   console.log("\nNext steps:");
+   console.log("  1. Update DATABASE_URL in .env");
+   console.log("  2. Run: npx prisma migrate dev --name init");
+   console.log("  3. Run: npx prisma generate");
+ }
+
+ function migrate() {
+   // This command does not run Prisma itself; it prints the command to run.
+   console.log("To create and apply the migration, run:");
+   console.log("  npx prisma migrate dev --name add_nodellm_orm");
+ }
+
+ function main() {
+   const args = process.argv.slice(2);
+   const command = args[0];
+
+   switch (command) {
+     case "init":
+       init();
+       break;
+     case "migrate":
+       migrate();
+       break;
+     default:
+       console.log("@node-llm/orm - Database migration tool\n");
+       console.log("Usage:");
+       console.log("  npx node-llm-orm init     # Generate schema.prisma");
+       console.log("  npx node-llm-orm migrate  # Create migration");
+       console.log("\nFor custom table names, see:");
+       console.log(
+         "  https://github.com/node-llm/node-llm/tree/main/packages/orm#custom-table-names"
+       );
+   }
+ }
+
+ main();
package/package.json ADDED
@@ -0,0 +1,72 @@
+ {
+   "name": "@node-llm/orm",
+   "version": "0.1.0",
+   "description": "Database persistence layer for NodeLLM - Chat, Message, and ToolCall tracking with streaming support",
+   "type": "module",
+   "main": "./dist/index.js",
+   "types": "./dist/index.d.ts",
+   "bin": {
+     "node-llm-orm": "./bin/cli.js"
+   },
+   "exports": {
+     ".": {
+       "types": "./dist/index.d.ts",
+       "import": "./dist/index.js"
+     },
+     "./prisma": {
+       "types": "./dist/adapters/prisma/index.d.ts",
+       "import": "./dist/adapters/prisma/index.js"
+     }
+   },
+   "scripts": {
+     "build": "tsc",
+     "test": "vitest run",
+     "test:watch": "vitest"
+   },
+   "keywords": [
+     "llm",
+     "orm",
+     "prisma",
+     "chat",
+     "chatbot",
+     "persistence",
+     "database",
+     "nodejs",
+     "typescript",
+     "openai",
+     "anthropic",
+     "gemini",
+     "streaming",
+     "rag",
+     "retrieval-augmented-generation",
+     "conversation",
+     "message-history",
+     "tool-calling",
+     "ai",
+     "artificial-intelligence",
+     "nextjs"
+   ],
+   "repository": {
+     "type": "git",
+     "url": "https://github.com/node-llm/node-llm.git",
+     "directory": "packages/orm"
+   },
+   "homepage": "https://github.com/node-llm/node-llm/tree/main/packages/orm",
+   "bugs": {
+     "url": "https://github.com/node-llm/node-llm/issues"
+   },
+   "author": "NodeLLM Contributors",
+   "license": "MIT",
+   "peerDependencies": {
+     "@node-llm/core": "^1.0.0",
+     "@prisma/client": "^5.0.0"
+   },
+   "devDependencies": {
+     "@node-llm/core": "file:../core",
+     "@prisma/client": "^5.0.0",
+     "@types/node": "^20.0.0",
+     "typescript": "^5.0.0",
+     "vitest": "^1.0.0"
+   },
+   "dependencies": {}
+ }