@sprucelabs/sprucebot-llm 12.3.0 → 12.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,23 +1,23 @@
  # Sprucebot LLM
- A Typescript library for leveraging Large Langage Models to do... anything!
+ A TypeScript library for leveraging large language models to do... anything!

- * [Has memory](#memory)
+ * [Has memory](#message-history-and-context-limits)
  * Remembers past messages to build context
  * Configure how much of the conversation your bot should remember
  * [Manages state](#adding-state-to-your-conversation)
  * The state builds as the conversation continues
  * Invoke callbacks whenever state changes
- * [Connect to 3rd party API's](#pulling-from-3rd-party-apis)
+ * [Connect to 3rd party APIs](#pulling-from-3rd-party-apis)
  * Pull in data in real time
  * Have your bot respond generated responses
  * Unlimited use cases
  * Skill architecture for extensibility
  * Leverage Skills to get your bot to complete any task!
- * Adapter Interface to create your own adapters
+ * Adapter interface to create your own adapters
  * Only support OpenAI models for now (more adapters based on demand)
  * Fully typed
- * Built in modern Typescript
- * Fully typed [schema based](https://github.com/sprucelabsai-community/spruce-schema) state management
+ * Built in modern TypeScript
+ * Fully typed schema-based state management (powered by `@sprucelabs/schema`)


  ## Lexicon
@@ -49,9 +49,15 @@ code .
  ```

  ### Testing it out for yourself
- You can use `sprucebot-llm` inside any Javascript runtime (nodejs, bun, browser).
+ You can use `sprucebot-llm` inside any JavaScript runtime (Node.js, Bun, browser).

- If you want to try this locally, you can checkout `chat.ts`. Here are the contents of that file for you to review now, rather than needing to explore the codebase.
+ If you want to try this locally, you can check out `chat.ts`. Create a `.env` file with your OpenAI API key first:
+
+ ```env
+ OPEN_AI_API_KEY=your_api_key_here
+ ```
+
+ Here are the contents of that file for you to review now, rather than needing to explore the codebase.

  ```ts
  import { stdin as input, stdout as output } from 'node:process'
@@ -74,7 +80,7 @@ void (async () => {
  // Create the adapter that handles actually sending the prompt to an LLM
  const adapter = OpenAiAdapter.Adapter(process.env.OPEN_AI_API_KEY!)

- // The LLmFactory is a layer of abstraction that simplifies bot creation
+ // The LlmFactory is a layer of abstraction that simplifies bot creation
  // and enables test doubling (mocks, spies, etc)
  const bots = SprucebotLlmFactory.Factory(adapter)

@@ -87,7 +93,7 @@ void (async () => {
    receptionist: buildReceptionistSkill(bots),
  }

- // Construct a Bot installs and pass the skill of your choice
+ // Construct a Bot and pass the skill of your choice
  const bot = bots.Bot({
    skill: skills.callbacks, //<-- try jokes, profile, etc.
    youAre: "a bot named Sprucebot that is in test mode. At the start of every conversation, you introduce yourself and announce that you are in test mode so I don't get confused! You are both hip and adorable. You say things like, 'Jeepers' and 'Golly' or even 'Jeezey peezy'!",
@@ -110,18 +116,94 @@ void (async () => {

  ```

- ### Conversation Memory
+ ### Message history and context limits
+
+ There are two different limits to be aware of:
+
+ - `SprucebotLlmBotImpl.messageMemoryLimit` (default: `10`) controls how many messages are kept in the in-memory history on the Bot. Once the limit is hit, old messages are dropped and can no longer be sent to any adapter.
+ - `OPENAI_MESSAGE_MEMORY_LIMIT` or `OpenAiAdapter.setMessageMemoryLimit(limit)` controls how many of those tracked messages are included when sending a request to OpenAI. `0` (the default) means "no additional limit" beyond the Bot history.
+
+ To change the Bot history limit:
+
+ ```ts
+ import { SprucebotLlmBotImpl } from '@sprucelabs/sprucebot-llm'
+
+ SprucebotLlmBotImpl.messageMemoryLimit = 20
+ ```
+
+ Additional OpenAI context controls:
+
+ - `OPENAI_PAST_MESSAGE_MAX_CHARS` omits *past* (non-latest) messages longer than the limit, replacing them with `[omitted due to length]`.
+ - `OPENAI_SHOULD_REMEMBER_IMAGES=false` omits images from older messages to save context, keeping only the most recent image and replacing older ones with `[Image omitted to save context]`.
+
+ > *Note*: OpenAI is currently the only adapter supported. If you would like to see support for other adapters (or programmatic ways to configure memory), please open an issue and we'll get on it!
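The interaction between the two limits can be sketched in a few lines of TypeScript. This `trimHistory` helper is hypothetical (it is not the library's internal implementation), but it mirrors the documented behavior: the Bot limit caps what is remembered at all, and the adapter limit optionally narrows what is actually sent.

```typescript
// Hypothetical sketch of the two documented limits; not the library's internals.
function trimHistory<T>(messages: T[], botLimit: number, adapterLimit: number): T[] {
    // The Bot keeps only the most recent `botLimit` messages in memory.
    const remembered = messages.slice(-botLimit)
    // The adapter may narrow that further; 0 means "no additional limit".
    return adapterLimit > 0 ? remembered.slice(-adapterLimit) : remembered
}

const history = Array.from({ length: 15 }, (_, i) => `msg ${i + 1}`)
console.log(trimHistory(history, 10, 0).length) // 10: only the Bot limit applies
console.log(trimHistory(history, 10, 4)) // the last 4 of the 10 remembered messages
```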
+
+ ### OpenAI adapter configuration
+
+ Required environment variable:
+
+ ```env
+ OPEN_AI_API_KEY=your_api_key_here
+ ```

- Conversation Memory is the total number of messages that will be tracked during a conversation. Once the limit is hit, old messages will be popped off the stack and forgotten. Currently, you can only configure memory through you project's .env:
+ Optional environment variables:

  ```env
- OPENAI_MESSAGE_MEMORY_LIMIT=10
+ OPENAI_MESSAGE_MEMORY_LIMIT=0
+ OPENAI_PAST_MESSAGE_MAX_CHARS=0
+ OPENAI_SHOULD_REMEMBER_IMAGES=true
+ OPENAI_REASONING_EFFORT=low
  ```

- > *Note*: OpenAI is currently the only adapter supported. If you would like to see support for other adapters (or programattic ways to configure memory), please open an issue and we'll get on it! 🤘
+ Runtime configuration options:
+
+ ```ts
+ const adapter = OpenAiAdapter.Adapter(process.env.OPEN_AI_API_KEY!, {
+   log: console,
+ })
+
+ adapter.setModel('gpt-4o')
+ adapter.setMessageMemoryLimit(10)
+ adapter.setReasoningEffort('low')
+ ```
+
+ ### OpenAI adapter API
+
+ `OpenAiAdapter` exposes the following API:
+
+ - `OpenAiAdapter.Adapter(apiKey, options?)`: create an adapter instance. `options.log` can be any logger that supports `.info(...)`.
+ - `adapter.setModel(model)`: set a default model for all requests unless a Skill overrides it.
+ - `adapter.setMessageMemoryLimit(limit)`: limit how many tracked messages are sent to OpenAI.
+ - `adapter.setReasoningEffort(effort)`: set `reasoning_effort` for models that support it.
+ - `OpenAiAdapter.OpenAI`: assign a custom OpenAI client class (useful for tests).
+
+ Requests are sent via `openai.chat.completions.create(...)` with messages built by the adapter from the Bot state and history.
+
+ ### Custom adapters
+
+ You can bring your own adapter by implementing the `LlmAdapter` interface and passing it to `SprucebotLlmFactory.Factory(...)`:
+
+ ```ts
+ import {
+   LlmAdapter,
+   SprucebotLlmBot,
+   SprucebotLlmFactory,
+ } from '@sprucelabs/sprucebot-llm'
+
+ class MyAdapter implements LlmAdapter {
+   async sendMessage(bot: SprucebotLlmBot) {
+     // Build your prompt from the bot's serialized state or messages
+     const { messages } = bot.serialize()
+     // Send to your model and return the model response as a string
+     return `echo: ${messages[messages.length - 1]?.message ?? ''}`
+   }
+ }
+
+ const bots = SprucebotLlmFactory.Factory(new MyAdapter())
+ ```

  ### Adding state to your conversation
- This library depends on [`@sprucelabs/spruce-schema`](https://github.com/sprucelabsai/spruce-schema) to handle the structure and validation rules around your state.
+ This library depends on `@sprucelabs/schema` to handle the structure and validation rules around your state.
  ```ts
  const skill = bots.Skill({
  yourJobIfYouChooseToAcceptItIs:
@@ -154,7 +236,10 @@ const skill = bots.Skill({

  ### Listening for state changes

- If you supply a `stateSchema`, then your bot will work with it based on the job you decide to give it. While the conversation is taking place, if the state changes, the skill will emit the `did-update-state` event.
+ If you supply a `stateSchema`, then your bot will work with it based on the job you decide to give it. While the conversation is taking place, if the state changes:
+
+ - If the state is on the Bot (you passed `stateSchema` to `bots.Bot(...)`), the Bot emits `did-update-state`.
+ - If the state is on the Skill (you passed `stateSchema` to `bots.Skill(...)`), the Skill emits `did-update-state`.

  ```ts
  await skill.on('did-update-state', () => {
@@ -163,9 +248,9 @@ await skill.on('did-update-state', () => {
  })

  ```
- ### Pulling from 3rd party api's
+ ### Pulling from 3rd party APIs

- The approach to integrating 3rd party api's (as well as dropping in other dynamic data into responses) is straight forward.
+ The approach to integrating 3rd party APIs (as well as dropping in other dynamic data into responses) is straightforward.

  In this contrived example, you can see where you'd implement the callbacks for `availableTimes`, `favoriteColor`, and `book` to actually call the APIs and return the results.

@@ -224,16 +309,47 @@ const skill = bots.Skill({

  ```

- > *Note*: This is not MCP (Model Context Protocol). MCP is focused on making API's available to LLM's. `sprucebot-llm` comes at this from the opposite direction. It does not require you to do anything server side, so you can connect to all your existing endpoints/tools/systems without needing to change them.
+ #### Callback invocation format
+ When using the `OpenAiAdapter`, the model is instructed to call callbacks using one of these formats:
+
+ ```text
+ <<functionName/>>
+ <<functionName>>{"param":"value"}<</functionName>>
+ ```
+
+ Only one callback invocation per model response is supported. Callbacks can return either a `string` or an image message shaped like `{ imageBase64, imageDescription }` (see the "Sending images" section below).
+
+ Legacy placeholder format (`xxxxx callbackName xxxxx`) is still supported by the response parser for older prompt templates.
+
+ Callback parameters can include basic types (e.g. `text`, `number`, `boolean`, `dateMs`, `dateTimeMs`) and `select` fields with choices from `@sprucelabs/schema`.
+
+ > *Note*: This is not MCP (Model Context Protocol). MCP is focused on making APIs available to LLMs. `sprucebot-llm` comes at this from the opposite direction. It does not require you to do anything server-side, so you can connect to all your existing endpoints/tools/systems without needing to change them.
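To make the invocation format concrete, here is a hypothetical parser for the two documented shapes. This is not the library's actual response parser — just an illustrative sketch, under the assumption that callback names are word characters and parameters arrive as a JSON object:

```typescript
// Hypothetical parser for the documented invocation formats (illustrative only).
interface ParsedCallback {
    name: string
    params?: Record<string, unknown>
}

function parseCallbackInvocation(response: string): ParsedCallback | null {
    // Self-closing form: <<functionName/>>
    const selfClosing = response.match(/<<(\w+)\/>>/)
    if (selfClosing) {
        return { name: selfClosing[1] }
    }
    // Paired form with JSON parameters: <<functionName>>{...}<</functionName>>
    const paired = response.match(/<<(\w+)>>([\s\S]*?)<<\/\1>>/)
    if (paired) {
        return { name: paired[1], params: JSON.parse(paired[2]) }
    }
    return null
}

console.log(parseCallbackInvocation('One sec! <<availableTimes/>>'))
console.log(parseCallbackInvocation('<<book>>{"time":"10am"}<</book>>'))
```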
+
+ ### Sending images
+
+ You can send images to the bot by passing a base64-encoded image and a short description:
+
+ ```ts
+ await bot.sendMessage({
+   imageBase64: base64Png,
+   imageDescription: 'A photo of a sunset over the mountains.',
+ })
+ ```
+
+ There is a working example at `src/chatWithImages.ts`, and you can run it after building with:
+
+ ```bash
+ yarn chat.images
+ ```

  ### Choosing a model

- When you configure a `Skill` with for bot, you can specify the model that the skill will use. In other words, you can have different skills use different models depending on their requirements.
+ When you configure a `Skill` for a bot, you can specify the model that the skill will use. In other words, you can have different skills use different models depending on their requirements. The OpenAI adapter defaults to `gpt-4o`, and a `Skill` model (if set) overrides the adapter default.

  ```ts

  const bookingSkill = bots.Skill({
-   model: 'gpt-5',
+   model: 'gpt-4o',
    yourJobIfYouChooseToAcceptItIs: 'to tell knock knock jokes!',
    pleaseKeepInMindThat: [
      'our audience is younger, so keep it PG!',
@@ -250,3 +366,58 @@ const bookingBot = bots.Bot({
  })

  ```
+
+ If you are using reasoning models that accept `reasoning_effort`, you can set it via `OPENAI_REASONING_EFFORT` or `adapter.setReasoningEffort(...)`.
+
+ ### Bot and Skill API highlights
+
+ Common Bot methods:
+
+ - `sendMessage(message, cb?)`: Send a user message (string or `{ imageBase64, imageDescription }`). The optional callback is invoked for each model response, including follow-up responses after a callback/tool result is injected.
+ - `getIsDone()` / `markAsDone()`: Check or force completion.
+ - `clearMessageHistory()`: Drop all tracked messages.
+ - `updateState(partialState)`: Update state and emit `did-update-state`.
+ - `setSkill(skill)`: Swap the active skill.
+ - `serialize()`: Snapshot of the bot's current state, skill, and history.
+
+ Common Skill methods:
+
+ - `updateState(partialState)`, `getState()`
+ - `setModel(model)`
+ - `serialize()`
+
+ ### Factory helpers
+
+ `SprucebotLlmFactory` also exposes:
+
+ - `setBotInstance(bot)` and `getBotInstance()` for storing a single bot instance.
+ - `SprucebotLlmFactory.BotClass`, `.SkillClass`, `.FactoryClass` overrides for dependency injection in tests.
+ - `SprucebotLlmFactory.reset()` to restore defaults.
+
+ ### Errors
+
+ `SprucebotLlmError` is exported for structured error handling. Common error codes include:
+
+ - `NO_BOT_INSTANCE_SET`
+ - `INVALID_CALLBACK`
+ - `CALLBACK_ERROR`
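Stable codes like these let callers branch on `code` instead of parsing error messages. The sketch below is illustrative only: `CodedError` is a stand-in (the real `SprucebotLlmError` follows the `@sprucelabs/error` conventions, whose exact shape is not shown in this diff), and treating `CALLBACK_ERROR` as retryable is an assumed policy, not library behavior.

```typescript
// Illustrative stand-in with a `code` discriminator; not the real SprucebotLlmError.
class CodedError extends Error {
    constructor(public readonly code: string) {
        super(code)
    }
}

function isRetryable(err: unknown): boolean {
    // Assumed policy: a callback that threw may succeed on retry;
    // missing wiring (no bot instance, unknown callback) will not.
    return err instanceof CodedError && err.code === 'CALLBACK_ERROR'
}

console.log(isRetryable(new CodedError('CALLBACK_ERROR'))) // true
console.log(isRetryable(new CodedError('NO_BOT_INSTANCE_SET'))) // false
```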
+
+ ### Testing utilities
+
+ These are exported from the package for unit tests:
+
+ - `SpyLlmAdapter`: captures the last bot and options passed to the adapter.
+ - `SpyLllmBot`: records constructor options and exposes message history helpers. (Note: the export name currently has three "l"s.)
+ - `MockLlmSkill`: adds assertion helpers for skill configuration and callbacks.
+ - `SpyOpenAiApi`: a drop-in `OpenAI` client stub for adapter tests. Set `OpenAiAdapter.OpenAI = SpyOpenAiApi` before constructing the adapter.
+
+ ### Development scripts
+
+ Useful commands from `package.json`:
+
+ - `yarn test`
+ - `yarn build.dev`
+ - `yarn build.dist`
+ - `yarn chat`
+ - `yarn chat.images`
+ - `yarn generate.samples`
@@ -18,8 +18,8 @@ class SprucebotLlmSkillImpl extends mercury_event_emitter_1.AbstractEventEmitter
    : undefined;
  }
  async updateState(updates) {
- await this.emit('did-update-state');
  this.state = { ...this.state, ...updates };
+ await this.emit('did-update-state');
  }
  getState() {
  return this.state;
@@ -34,8 +34,8 @@ export default class SprucebotLlmSkillImpl extends AbstractEventEmitter {
  }
  updateState(updates) {
  return __awaiter(this, void 0, void 0, function* () {
- yield this.emit('did-update-state');
  this.state = Object.assign(Object.assign({}, this.state), updates);
+ yield this.emit('did-update-state');
  });
  }
  getState() {
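The build-output hunks above are the behavioral fix in this release: `updateState` now merges the new state *before* emitting `did-update-state`, so listeners that call `getState()` see the fresh values. Here is a minimal stand-alone sketch of why the ordering matters, using a simplified stand-in class (not the real mercury event emitter):

```typescript
// Simplified stand-in (not the real library classes) to show the ordering fix.
type Listener = () => void

class TinySkill {
    private state: Record<string, unknown> = {}
    private listeners: Listener[] = []

    constructor(private emitBeforeMerge: boolean) {}

    on(listener: Listener) {
        this.listeners.push(listener)
    }

    getState() {
        return this.state
    }

    updateState(updates: Record<string, unknown>) {
        if (this.emitBeforeMerge) {
            this.listeners.forEach((fn) => fn()) // 12.3.0 order: listeners read stale state
        }
        this.state = { ...this.state, ...updates }
        if (!this.emitBeforeMerge) {
            this.listeners.forEach((fn) => fn()) // 12.3.2 order: listeners read merged state
        }
    }
}

const seen: unknown[] = []
for (const emitBeforeMerge of [true, false]) {
    const skill = new TinySkill(emitBeforeMerge)
    skill.on(() => seen.push(skill.getState().name))
    skill.updateState({ name: 'Sprucebot' })
}
console.log(seen) // [undefined, 'Sprucebot']
```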
package/package.json CHANGED
@@ -8,7 +8,7 @@
  "eta"
  ]
  },
- "version": "12.3.0",
+ "version": "12.3.2",
  "files": [
  "build"
  ],