@sprucelabs/sprucebot-llm 13.0.2 → 13.0.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +112 -30
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,4 +1,7 @@
  # Sprucebot LLM
+
+ [![AI TDD Contributor](https://regressionproof.ai/badge.svg)](https://regressionproof.ai)
+
  A TypeScript library for leveraging large language models to do... anything!
 
  * [Has memory](#message-history-and-context-limits)
@@ -13,8 +16,10 @@ A TypeScript library for leveraging large language models to do... anything!
  * Unlimited use cases
  * Skill architecture for extensibility
  * Leverage Skills to get your bot to complete any task!
- * Adapter interface to create your own adapters
- * Only support OpenAI models for now (more adapters based on demand)
+ * Multiple adapter support
+   * [OpenAI](#openai-adapter-configuration) - GPT-4o, o1, and other OpenAI models
+   * [Ollama](#ollama-adapter) - Run local models like Llama, Mistral, etc.
+   * [Custom adapters](#custom-adapters) - Implement your own
  * Fully typed
  * Built in modern TypeScript
  * Fully typed schema-based state management (powered by `@sprucelabs/schema`)
@@ -136,7 +141,7 @@ Additional OpenAI context controls:
  - `OPENAI_PAST_MESSAGE_MAX_CHARS` omits *past* (non-latest) messages longer than the limit, replacing them with `[omitted due to length]`.
  - `OPENAI_SHOULD_REMEMBER_IMAGES=false` omits images from older messages to save context, keeping only the most recent image and replacing older ones with `[Image omitted to save context]`.
 
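  As a quick sketch, these can be set in-process before the adapter is constructed (values are illustrative, and the adapter is assumed here to read them from `process.env`):
 
  ```ts
  import { OpenAiAdapter } from '@sprucelabs/sprucebot-llm'
 
  // Illustrative values: omit long past messages and drop images from older messages
  process.env.OPENAI_PAST_MESSAGE_MAX_CHARS = '2000'
  process.env.OPENAI_SHOULD_REMEMBER_IMAGES = 'false'
 
  const adapter = OpenAiAdapter.Adapter(process.env.OPEN_AI_API_KEY!)
  ```
 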
- > *Note*: OpenAI is currently the only adapter supported. If you would like to see support for other adapters (or programmatic ways to configure memory), please open an issue and we'll get on it!
+ ## Adapters
 
  ### OpenAI adapter configuration
 
@@ -160,8 +165,13 @@ Runtime configuration options:
  ```ts
  const adapter = OpenAiAdapter.Adapter(process.env.OPEN_AI_API_KEY!, {
      log: console,
+     model: 'gpt-4o',
+     memoryLimit: 10,
+     reasoningEffort: 'low',
+     baseUrl: 'https://custom-endpoint/v1', // Optional custom base URL
  })
 
+ // Or set after creation
  adapter.setModel('gpt-4o')
  adapter.setMessageMemoryLimit(10)
  adapter.setReasoningEffort('low')
@@ -171,7 +181,12 @@ adapter.setReasoningEffort('low')
 
  `OpenAiAdapter` exposes the following API:
 
- - `OpenAiAdapter.Adapter(apiKey, options?)`: create an adapter instance. `options.log` can be any logger that supports `.info(...)`.
+ - `OpenAiAdapter.Adapter(apiKey, options?)`: create an adapter instance. Options include:
+   - `log` - any logger that supports `.info(...)`
+   - `model` - default model (e.g., `'gpt-4o'`)
+   - `memoryLimit` - message memory limit
+   - `reasoningEffort` - for reasoning models (`'low'`, `'medium'`, `'high'`)
+   - `baseUrl` - custom API endpoint
  - `adapter.setModel(model)`: set a default model for all requests unless a Skill overrides it.
  - `adapter.setMessageMemoryLimit(limit)`: limit how many tracked messages are sent to OpenAI.
  - `adapter.setReasoningEffort(effort)`: set `reasoning_effort` for models that support it.
@@ -179,6 +194,40 @@ adapter.setReasoningEffort('low')
 
  Requests are sent via `openai.chat.completions.create(...)` with messages built by the adapter from the Bot state and history.
 
+ ### Ollama adapter
+
+ Run local models using [Ollama](https://ollama.ai). No API key required - just have Ollama running locally.
+
+ ```ts
+ import { OllamaAdapter, SprucebotLlmFactory } from '@sprucelabs/sprucebot-llm'
+
+ // Create adapter for local Ollama instance
+ const adapter = OllamaAdapter.Adapter({
+     model: 'llama2', // or 'mistral', 'codellama', etc.
+     log: console, // optional logger
+ })
+
+ const bots = SprucebotLlmFactory.Factory(adapter)
+
+ const bot = bots.Bot({
+     youAre: 'a helpful assistant',
+     skill: bots.Skill({
+         yourJobIfYouChooseToAcceptItIs: 'to answer questions'
+     })
+ })
+
+ await bot.sendMessage('Hello!')
+ ```
+
+ **Ollama adapter options:**
+
+ | Option | Type | Default | Description |
+ |--------|------|---------|-------------|
+ | `model` | `string` | - | Which Ollama model to use |
+ | `log` | `Log` | - | Optional logger instance |
+
+ The Ollama adapter connects to `http://localhost:11434/v1` by default (Ollama's OpenAI-compatible endpoint).
+
  ### Custom adapters
 
  You can bring your own adapter by implementing the `LlmAdapter` interface and passing it to `SprucebotLlmFactory.Factory(...)`:
@@ -202,12 +251,15 @@ class MyAdapter implements LlmAdapter {
  const bots = SprucebotLlmFactory.Factory(new MyAdapter())
  ```
 
+ ## Skills & State
+
  ### Adding state to your conversation
  This library depends on `@sprucelabs/schema` to handle the structure and validation rules around your state.
  ```ts
  const skill = bots.Skill({
      yourJobIfYouChooseToAcceptItIs:
          'to collect some information from me! You are a receptionist with 20 years experience and are very focused on getting answers needed to complete my profile',
+     model: 'gpt-4o', // Optional: override adapter's default model
      stateSchema: buildSchema({
          id: 'profile',
          fields: {
@@ -369,22 +421,28 @@ const bookingBot = bots.Bot({
 
  If you are using reasoning models that accept `reasoning_effort`, you can set it via `OPENAI_REASONING_EFFORT` or `adapter.setReasoningEffort(...)`.
 
- ### Bot and Skill API highlights
+ ## API Reference
 
- Common Bot methods:
+ ### Bot methods
 
- - `sendMessage(message, cb?)`: Send a user message (string or `{ imageBase64, imageDescription }`). The optional callback is invoked for each model response, including follow-up responses after a callback/tool result is injected.
- - `getIsDone()` / `markAsDone()`: Check or force completion.
- - `clearMessageHistory()`: Drop all tracked messages.
- - `updateState(partialState)`: Update state and emit `did-update-state`.
- - `setSkill(skill)`: Swap the active skill.
- - `serialize()`: Snapshot of the bot's current state, skill, and history.
+ | Method | Description |
+ |--------|-------------|
+ | `sendMessage(message, cb?)` | Send a user message (string or `{ imageBase64, imageDescription }`). Optional callback for each response. |
+ | `getIsDone()` | Check if conversation is complete |
+ | `markAsDone()` | Force conversation completion |
+ | `clearMessageHistory()` | Drop all tracked messages |
+ | `updateState(partialState)` | Update state and emit `did-update-state` |
+ | `setSkill(skill)` | Swap the active skill |
+ | `serialize()` | Snapshot of bot's current state, skill, and history |
 
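+ A minimal sketch tying a few of these together (assumes `bot` was created as in the examples above; the state field name is illustrative):
+
+ ```ts
+ // Send a message and log each model response via the optional callback
+ await bot.sendMessage('What time works for you?', (response) => {
+     console.log('bot:', response)
+ })
+
+ if (!bot.getIsDone()) {
+     // Field name is illustrative; use whatever your state schema defines
+     bot.updateState({ firstName: 'Tay' })
+ }
+
+ const snapshot = bot.serialize() // current state, skill, and message history
+ console.log(snapshot)
+ ```
+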
- Common Skill methods:
+ ### Skill methods
 
- - `updateState(partialState)`, `getState()`
- - `setModel(model)`
- - `serialize()`
+ | Method | Description |
+ |--------|-------------|
+ | `updateState(partialState)` | Update skill state |
+ | `getState()` | Get current state |
+ | `setModel(model)` | Change the model this skill uses |
+ | `serialize()` | Snapshot of skill configuration |
 
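+ And a Skill-side sketch (assumes `skill` is the Skill instance passed to your Bot; the state field name is illustrative):
+
+ ```ts
+ skill.setModel('gpt-4o') // override the adapter's default model for this skill
+ skill.updateState({ favoriteColor: 'blue' }) // illustrative field
+ console.log(skill.getState())
+ console.log(skill.serialize())
+ ```
+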
  ### Factory helpers
 
@@ -396,8 +454,22 @@ Common Skill methods:
 
  ### Errors
 
- `SprucebotLlmError` is exported for structured error handling. Common error codes include:
+ `SprucebotLlmError` is exported for structured error handling:
+
+ ```ts
+ import { SprucebotLlmError } from '@sprucelabs/sprucebot-llm'
+
+ try {
+     await bot.sendMessage('hello')
+ } catch (err) {
+     if (err instanceof SprucebotLlmError) {
+         console.log(err.options?.code) // Error code
+         console.log(err.friendlyMessage()) // Human-readable message
+     }
+ }
+ ```
 
+ Common error codes:
  - `NO_BOT_INSTANCE_SET`
  - `INVALID_CALLBACK`
  - `CALLBACK_ERROR`
@@ -406,20 +478,30 @@ Common Skill methods:
 
  These are exported from the package for unit tests:
 
- - `SpyLlmAdapter`: captures the last bot and options passed to the adapter.
- - `SpyLllmBot`: records constructor options and exposes message history helpers. (Note: the export name currently has three "l"s.)
- - `MockLlmSkill`: adds assertion helpers for skill configuration and callbacks.
- - `SpyOpenAiApi`: a drop-in `OpenAI` client stub for adapter tests. Set `OpenAiAdapter.OpenAI = SpyOpenAiApi` before constructing the adapter.
+ | Utility | Description |
+ |---------|-------------|
+ | `SpyLlmAdapter` | Captures the last bot and options passed to the adapter |
+ | `SpyLllmBot` | Records constructor options and exposes message history helpers |
+ | `MockLlmSkill` | Assertion helpers for skill configuration and callbacks |
+ | `SpyOpenAiApi` | Drop-in OpenAI client stub for adapter tests |
 
- ### Development scripts
+ ```ts
+ // Example: Using SpyOpenAiApi for tests
+ import { OpenAiAdapter, SpyOpenAiApi } from '@sprucelabs/sprucebot-llm'
 
- Useful commands from `package.json`:
+ OpenAiAdapter.OpenAI = SpyOpenAiApi
+ const adapter = OpenAiAdapter.Adapter('fake-key')
+ ```
 
- - `yarn test`
- - `yarn build.dev`
- - `yarn build.dist`
- - `yarn chat`
- - `yarn chat.images`
- - `yarn generate.samples`
+ ## Development
 
- [![AI TDD Contributor](https://regressionproof.ai/badge.svg)](https://regressionproof.ai)
+ Useful commands from `package.json`:
+
+ ```bash
+ yarn test              # Run tests
+ yarn build.dev         # Development build
+ yarn build.dist        # Production build
+ yarn chat              # Interactive chat demo
+ yarn chat.images       # Chat with image support
+ yarn generate.samples
+ ```
package/package.json CHANGED
@@ -8,7 +8,7 @@
  "eta"
  ]
  },
- "version": "13.0.2",
+ "version": "13.0.3",
  "files": [
  "build"
  ],