@sprucelabs/sprucebot-llm 15.1.5 → 15.1.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +83 -6
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -18,6 +18,7 @@ A TypeScript library for leveraging large language models to do... anything!
  * Leverage Skills to get your bot to complete any task!
  * Multiple adapter support
  * [OpenAI](#openai-adapter-configuration) - GPT-4o, o1, and other OpenAI models
+ * [Anthropic](#anthropic-adapter) - Claude models with prompt caching support
  * [Ollama](#ollama-adapter) - Run local models like Llama, Mistral, etc.
  * [Custom adapters](#custom-adapters) - Implement your own
  * Fully typed
@@ -194,6 +195,46 @@ adapter.setReasoningEffort('low')
 
  Requests are sent via `openai.chat.completions.create(...)` with messages built by the adapter from the Bot state and history.
 
+ ### Anthropic adapter
+
+ Use Claude models from Anthropic. Requires `@anthropic-ai/sdk` and an Anthropic API key.
+
+ ```ts
+ import { AnthropicAdapter, SprucebotLlmFactory } from '@sprucelabs/sprucebot-llm'
+
+ const adapter = AnthropicAdapter.Adapter(process.env.ANTHROPIC_API_KEY!, {
+     maxTokens: 4096, // required
+     model: 'claude-sonnet-4-5', // default
+     log: yourLogger, // optional
+     memoryLimit: 10, // optional
+     thinking: false, // optional: enable extended thinking mode
+ })
+
+ const bots = SprucebotLlmFactory.Factory(adapter)
+ ```
+
+ **Anthropic adapter options:**
+
+ | Option | Type | Required | Description |
+ |--------|------|----------|-------------|
+ | `maxTokens` | `number` | yes | Maximum tokens for the model response |
+ | `model` | `string` | no | Model to use (default: `'claude-sonnet-4-5'`) |
+ | `log` | `Log` | no | Optional logger instance |
+ | `memoryLimit` | `number` | no | Limit how many tracked messages are sent |
+ | `thinking` | `boolean` | no | Enable extended thinking (`thinking: adaptive`) |
+
+ #### Anthropic prompt caching
+
+ The Anthropic adapter automatically enables [prompt caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) by inserting an `ephemeral` cache breakpoint after the system prompt. This allows Anthropic to cache the static portion of the prompt (your `youAre` + skill instructions) and only re-process the changing chat history on each turn — reducing latency and cost on long conversations.
+
+ Token usage (including cache creation and cache read tokens) is logged at the `info` level on each request:
+
+ ```
+ [TOKEN USAGE] input=1234 cache_create=800 cache_read=400 output=256
+ ```
+
+ No configuration is required — caching is applied automatically.
+
  ### Ollama adapter
 
  Run local models using [Ollama](https://ollama.ai). No API key required - just have Ollama running locally.
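
The `[TOKEN USAGE]` line added above is a plain `key=value` log format. As a minimal sketch (an illustrative helper, not part of the library's API), such a line can be turned into numeric counters for metrics:

```typescript
// Illustrative helper (not exported by @sprucelabs/sprucebot-llm): parse the
// "[TOKEN USAGE] input=... cache_create=... cache_read=... output=..." log
// line the Anthropic adapter emits into a map of numeric counters.
function parseTokenUsage(line: string): Record<string, number> {
    const usage: Record<string, number> = {}
    const pattern = /(\w+)=(\d+)/g
    let match: RegExpExecArray | null
    // Collect every key=value pair, e.g. "cache_read=400"
    while ((match = pattern.exec(line)) !== null) {
        usage[match[1]] = Number(match[2])
    }
    return usage
}
```

A `cache_read` count close to the prompt size on later turns is the signal that caching is working.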
@@ -362,16 +403,21 @@ const skill = bots.Skill({
  ```
 
  #### Callback invocation format
- When using the `OpenAiAdapter`, the model is instructed to call callbacks using one of these formats:
+ The model is instructed to invoke callbacks using the following syntax (V2 format):
 
  ```text
- <<functionName/>>
- <<functionName>>{"param":"value"}<</functionName>>
+ @callback { "name": "callbackName", "options": {} }
  ```
 
- Only one callback invocation per model response is supported. Callbacks can return either a `string` or an image message shaped like `{ imageBase64, imageDescription }` (see the "Sending images" section below).
+ Multiple callbacks can be included in a single response, one per line. JSON must be on a single line; do not use multi-line or formatted JSON.
 
- Legacy placeholder format (`xxxxx callbackName xxxxx`) is still supported by the response parser for older prompt templates.
+ As a shorthand, you can also invoke a named callback directly:
+
+ ```text
+ @myCallback { "param": "value" }
+ ```
+
+ Callbacks can return either a `string` or an image message shaped like `{ imageBase64, imageDescription }` (see the "Sending images" section below).
 
  Callback parameters can include basic types (e.g. `text`, `number`, `boolean`, `dateMs`, `dateTimeMs`) and `select` fields with choices from `@sprucelabs/schema`.
 
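
For intuition, the V2 syntax introduced above (one JSON invocation per line, plus the named shorthand) can be parsed in a few lines. This is an illustrative sketch of the format only, not the library's actual response parser:

```typescript
// Sketch of parsing the V2 callback syntax; the real parser is internal to the library.
interface CallbackInvocation {
    name: string
    options: Record<string, unknown>
}

function parseCallbackInvocations(response: string): CallbackInvocation[] {
    const invocations: CallbackInvocation[] = []
    for (const raw of response.split('\n')) {
        const line = raw.trim()
        // Generic form: @callback { "name": "...", "options": { ... } }
        const generic = line.match(/^@callback\s+(\{.*\})$/)
        if (generic) {
            const parsed = JSON.parse(generic[1])
            invocations.push({ name: parsed.name, options: parsed.options ?? {} })
            continue
        }
        // Shorthand form: @myCallback { "param": "value" }
        const shorthand = line.match(/^@(\w+)\s+(\{.*\})$/)
        if (shorthand) {
            invocations.push({ name: shorthand[1], options: JSON.parse(shorthand[2]) })
        }
    }
    return invocations
}
```

Note the generic `@callback` form is matched before the shorthand, since `@callback { ... }` would otherwise also satisfy the shorthand pattern with the name `callback`.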
@@ -421,6 +467,35 @@ const bookingBot = bots.Bot({
 
  If you are using reasoning models that accept `reasoning_effort`, you can set it via `OPENAI_REASONING_EFFORT` or `adapter.setReasoningEffort(...)`.
 
+ ## Serialization and Persistence
+
+ You can snapshot a bot's full state — including message history, skill configuration, and any accumulated state — and later restore it. This is useful for persisting conversations across process restarts, saving/loading sessions, or transferring bot state.
+
+ ```ts
+ // Save bot state (e.g. to a database or file)
+ const snapshot = bot.serialize()
+ // snapshot: { youAre, stateSchema, state, messages, skill }
+
+ // Later, recreate the bot and restore state
+ const bot2 = bots.Bot({
+     skill: mySkill,
+     youAre: 'a helpful assistant',
+ })
+ bot2.unserialize(snapshot)
+ // bot2 now has the same message history and state as bot had when serialized
+ ```
+
+ The skill's state is also preserved through serialization:
+
+ ```ts
+ const skillSnapshot = skill.serialize()
+ // skillSnapshot: { yourJobIfYouChooseToAcceptItIs, state, stateSchema, ... }
+
+ skill.unserialize(skillSnapshot)
+ ```
+
+ `unserialize` restores `state` and skill options but does not reconnect callback handlers — those are defined in code. Re-attach any callbacks when recreating the skill.
+
  ## API Reference
 
  ### Bot methods
@@ -434,6 +509,7 @@ If you are using reasoning models that accept `reasoning_effort`, you can set it
  | `updateState(partialState)` | Update state and emit `did-update-state` |
  | `setSkill(skill)` | Swap the active skill |
  | `serialize()` | Snapshot of bot's current state, skill, and history |
+ | `unserialize(serialized)` | Restore state from a previous `serialize()` snapshot |
 
  ### Skill methods
 
@@ -442,7 +518,8 @@ If you are using reasoning models that accept `reasoning_effort`, you can set it
  | `updateState(partialState)` | Update skill state |
  | `getState()` | Get current state |
  | `setModel(model)` | Change the model this skill uses |
- | `serialize()` | Snapshot of skill configuration |
+ | `serialize()` | Snapshot of skill configuration and state |
+ | `unserialize(serialized)` | Restore skill state from a previous `serialize()` snapshot |
 
  ### Factory helpers
 
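
To make the new serialization section concrete: a snapshot is plain data, so it can be persisted anywhere JSON goes. A minimal sketch using the filesystem follows; the `BotSnapshot` shape here is an assumption based on the fields the README lists, not the library's exported type:

```typescript
import * as fs from 'fs'
import * as os from 'os'
import * as path from 'path'

// Assumed shape, mirroring the fields the README lists for bot.serialize();
// the library's actual serialized type may differ.
interface BotSnapshot {
    youAre: string
    stateSchema?: unknown
    state?: Record<string, unknown>
    messages: unknown[]
    skill?: unknown
}

// Write a snapshot to disk as JSON (e.g. on shutdown)
function saveSnapshot(file: string, snapshot: BotSnapshot): void {
    fs.writeFileSync(file, JSON.stringify(snapshot), 'utf8')
}

// Read it back (e.g. on boot) before handing it to bot.unserialize(...)
function loadSnapshot(file: string): BotSnapshot {
    return JSON.parse(fs.readFileSync(file, 'utf8')) as BotSnapshot
}
```

Since callback handlers are not serialized, the restore path is: recreate the skill with its callbacks in code, build the bot, then call `bot.unserialize(loadSnapshot(file))`.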
package/package.json CHANGED
@@ -8,7 +8,7 @@
       "eta"
     ]
   },
-  "version": "15.1.5",
+  "version": "15.1.7",
   "files": [
     "build"
   ],