booths 1.3.1 → 1.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (4)
  1. package/README.md +198 -349
  2. package/dist/index.d.ts +38 -216
  3. package/dist/index.js +147 -373
  4. package/package.json +2 -5
package/README.md CHANGED
@@ -1,93 +1,72 @@
  # Booths

- Booths is a modular and extensible framework for building and managing conversational AI agents. It provides a structured way to define the capabilities, context, and tools for different AI-powered conversational flows.
+ [![Check Build on Pull Request](https://github.com/phoneburner/booths/actions/workflows/build-check.yml/badge.svg)](https://github.com/phoneburner/booths/actions/workflows/build-check.yml)
+ [![Format Check](https://github.com/phoneburner/booths/actions/workflows/format-check.yml/badge.svg)](https://github.com/phoneburner/booths/actions/workflows/format-check.yml)
+ [![Publish to NPM on Release](https://github.com/phoneburner/booths/actions/workflows/release.yml/badge.svg)](https://github.com/phoneburner/booths/actions/workflows/release.yml)
+ [![Test Suite](https://github.com/phoneburner/booths/actions/workflows/test.yml/badge.svg)](https://github.com/phoneburner/booths/actions/workflows/test.yml)

- The system is designed around a central `CoreBooth` class that orchestrates interactions between users and a Large Language Model (LLM), leveraging a system of registries and plugins to manage the conversational state and capabilities.
+ *A modular, extensible framework for building and managing conversational AI agents in TypeScript.*

- ## Architecture Overview
+ > Booths provides a structured way to define agent capabilities, context, and tools, orchestrated by a `CoreBooth` that manages the interaction loop with your LLM and a rich plugin lifecycle.

- The Booths framework is built around a few key concepts that work together to create a powerful and flexible conversational AI system.
+ ---

- ```mermaid
- graph TD
-     subgraph Application Layer
-         A[Your Application]
-     end
+ ## Table of Contents

-     subgraph Booth Service Layer
-         C(CoreBooth)
-     end
+ * [Installation](#installation)
+ * [Quick Start Guide](#quick-start-guide)
+ * [Architecture Overview](#architecture-overview)
+ * [API Reference](#api-reference)

-     subgraph Core Components
-         D[Interaction Processor]
-         E[LLM Adapter]
-         F[Registries]
-         G[Plugins]
-     end
+   * [createCoreBooth](#createcorebooth)

-     A -- "initializes and calls" --> C
-     C -- "delegates to" --> D
-     D -- "uses" --> E
-     D -- "uses" --> F
-     D -- "executes" --> G
-     E -- "communicates with" --> H((LLM))
-     F -- "manages" --> I{Booths}
-     F -- "manages" --> J{Tools}
-     F -- "manages" --> K{Plugins}
-     G -- "hook into" --> D
+     * [Options](#options)
+     * [Examples](#examples)
+   * [InteractionProcessor](#interactionprocessor)

-     style C fill:#f9f,stroke:#333,stroke-width:2px
- ```
+     * [Flow](#flow)
+     * [Important Constraints](#important-constraints)
+   * [Registries](#registries)

- 1. **Application Layer**: Your application integrates the Booths framework to handle conversational AI interactions.
- 2. **`CoreBooth`**: The framework foundation that provides global functionality, instructions, and infrastructure that applies to all booths. It manages the overall system configuration and coordinates the interaction flow.
- 3. **`InteractionProcessor`**: The engine that drives the conversation. It takes user input, runs it through the plugin lifecycle, sends it to the LLM (via the adapter), and processes the response.
- 4. **`LLMAdapter`**: A component that handles communication with the specific LLM provider (e.g., OpenAI). It translates requests and responses between the Booths system and the LLM's API. Supports both traditional and streaming response modes.
- 5. **Registries**: These are responsible for managing the different components of the system:
-     * `BoothRegistry`: Manages `BoothConfig` objects that define the behavior of different AI agents.
-     * `ToolRegistry`: Manages the tools (functions) that booths can use.
-     * `BoothPluginRegistry`: Manages the plugins that hook into the conversational lifecycle.
- 6. **Plugins**: These are modules that add functionality to the system by hooking into the `InteractionProcessor`'s lifecycle (e.g., managing conversation history, providing context to the LLM, executing tools).
+     * [BoothRegistry](#boothregistry)
+     * [ToolRegistry](#toolregistry)
+     * [BoothPluginRegistry](#boothpluginregistry)
+   * [Plugins](#plugins)

- ## Getting Started
+     * [Lifecycle Hooks](#lifecycle-hooks)
+ * [Best Practices](#best-practices)
+ * [Advanced Usage](#advanced-usage)

- The Booths framework is designed as a TypeScript library for building conversational AI systems. This repository contains the core framework implementation.
+   * [Customizing the end-of-turn marker](#customizing-the-end-of-turn-marker)
+   * [Per-tool interception & error recovery](#per-tool-interception--error-recovery)
+ * [Types](#types)
+ * [Session & State Management](#session--state-management)
+ * [Error Handling](#error-handling)
+ * [License](#license)

- ### Installation
+ ---
+
+ ## Installation

  ```bash
  npm install booths
  ```

- ### Prerequisites
-
- - Node.js and npm installed
- - An LLM provider API key (e.g., OpenAI)
+ **Prerequisites**

- ### Development
+ * Node.js and npm
+ * An API key for your LLM provider (e.g., OpenAI)

- To build the library:
-
- ```bash
- npm run build
- ```
-
- To check types:
-
- ```bash
- npm run typecheck
- ```
+ ---

  ## Quick Start Guide

- Here is a lightweight example of how to set up and use the Core Booth system manually.
-
- ### 1. Define a Booth
+ A minimal setup that defines a booth, a tool, an adapter, and starts a conversation.

- First, define a booth configuration. This object specifies the booth's identity, role, and the tools it can use.
+ ### 1) Define a Booth

- ```typescript
- // in my-booths.ts
+ ```ts
+ // my-booths.ts
  import type { BoothConfig } from 'booths';

  export const pirateBooth: BoothConfig = {
@@ -99,12 +78,10 @@ export const pirateBooth: BoothConfig = {
  };
  ```

- ### 2. Define a Tool
+ ### 2) Define a Tool

- Next, create a tool that the booth can use. A tool is a function that the LLM can decide to call.
-
- ```typescript
- // in my-tools.ts
+ ```ts
+ // my-tools.ts
  import type { ToolModule } from 'booths';

  export const tellPirateJokeTool: ToolModule = {
@@ -113,384 +90,256 @@ export const tellPirateJokeTool: ToolModule = {
    description: 'Tells a classic pirate joke.',
    parameters: { type: 'object', properties: {} },
    execute: async () => {
-     return { joke: "Why are pirates called pirates? Because they arrrr!" };
+     return { joke: 'Why are pirates called pirates? Because they arrrr!' };
    },
  };
  ```

- ### 3. Implement the LLM Adapter
-
- The `CoreBooth` requires an `LLMAdapter` to communicate with your chosen language model. Here is a minimal example for OpenAI.
+ ### 3) Implement a simple LLM Adapter

- ```typescript
- // in OpenAIAdapter.ts
- import type {
-   LLMAdapter,
-   ResponseCreateParamsNonStreaming,
-   ResponseCreateParamsStreaming,
+ ```ts
+ // OpenAIAdapter.ts
+ import type {
+   LLMAdapter,
+   ResponseCreateParamsNonStreaming,
    Response,
-   StreamEvent
  } from 'booths';
  import OpenAI from 'openai';

  export class OpenAIAdapter implements LLMAdapter<Response> {
    private openai: OpenAI;
-
    constructor(apiKey: string) {
      this.openai = new OpenAI({ apiKey });
    }
-
    async invoke(params: ResponseCreateParamsNonStreaming): Promise<Response> {
      return this.openai.responses.create({ ...params, model: 'gpt-4o' });
    }
-
    async interpret(response: Response): Promise<Response> {
      return response;
    }
-
-   // Optional: Add streaming support
-   async *invokeStream(params: ResponseCreateParamsStreaming): AsyncIterable<Response> {
-     const stream = this.openai.responses.create({ ...params, model: 'gpt-4o', stream: true });
-     for await (const chunk of stream) {
-       yield chunk;
-     }
-   }
-
-   async interpretStream(chunk: Response): Promise<StreamEvent> {
-     // Convert OpenAI stream chunks to StreamEvents
-     // Implementation depends on your streaming format
-     return {
-       type: 'text_delta',
-       delta: chunk.choices?.[0]?.delta?.content || '',
-       content: chunk.choices?.[0]?.delta?.content || ''
-     };
-   }
  }
  ```

- ### 4. Initialize the CoreBooth
-
- Finally, use the `createCoreBooth` factory to instantiate the system.
+ ### 4) Initialize and talk to the booth

- ```typescript
- // in main.ts
+ ```ts
+ // main.ts
  import { createCoreBooth } from 'booths';
  import { pirateBooth } from './my-booths';
  import { tellPirateJokeTool } from './my-tools';
- import { OpenAIAdapter } from './openAIAdapter';
+ import { OpenAIAdapter } from './OpenAIAdapter';

- // 1. Create the LLM adapter
- const llmAdapter = new OpenAIAdapter('your-openai-api-key');
+ const llmAdapter = new OpenAIAdapter(process.env.OPENAI_API_KEY!);
+ const coreBooth = createCoreBooth(llmAdapter, pirateBooth /*, { endInteractionLoopMarker: '__custom_marker__' }*/);

- // 2. Create the CoreBooth instance
- const coreBooth = createCoreBooth(llmAdapter, pirateBooth);
-
- // 3. Register the tool (this step will be improved in future versions)
+ // Register tools (available to the current booth)
  coreBooth.toolRegistry.registerTools([tellPirateJokeTool]);

- // 4. Send a message and get a response
- async function haveConversation() {
-   const userInput = 'Tell me a pirate joke.';
-   const response = await coreBooth.callProcessor.send(userInput);
-
+ // Send a message and read the result
+ async function run() {
+   const response = await coreBooth.callProcessor.send('Tell me a pirate joke.');
    console.log(response.output_text);
-   // Expected output: "Why are pirates called pirates? Because they arrrr!"
  }
-
- haveConversation();
+ run();
  ```
- ## How It Works

- The Core Booth system is comprised of several key components that work together to process user input and generate contextual responses.
+ ---

- ### 1. Registries
+ ## Architecture Overview

- - **`BoothRegistry`**: Manages the collection of `BoothConfig` objects. Each booth represents a specialized agent with a specific role, description, and set of tools. It also keeps track of the "current context booth" to ensure the conversation stays on topic.
- - **`ToolRegistry`**: Manages the tools that can be made available to the LLM. Tools are functions that the AI can decide to call to perform actions or retrieve information.
- - **`BoothPluginRegistry`**: Manages plugins that hook into the interaction lifecycle. This allows for modular and reusable functionality to be added to the system.
+ ```mermaid
+ graph TD
+     subgraph Application Layer
+         A[Your Application]
+     end

- ### 2. Plugins
+     subgraph Booth Service Layer
+         C(CoreBooth)
+     end

- Plugins are classes that implement the `BoothPlugin` interface. They can execute logic at different stages of the conversation:
+     subgraph Core Components
+         D[Interaction Processor]
+         E[LLM Adapter]
+         F[Registries]
+         G[Plugins]
+     end

- - `onBeforeInteractionLoopStart`: Before the main loop begins.
- - `onBeforeMessageSend`: Before a message is sent to the LLM.
- - `onResponseReceived`: After a response is received from the LLM.
- - `onBeforeToolCall`: Before each individual tool call is executed _(allows modification of tool parameters, validation, and logging)_.
- - `onAfterToolCall`: After each individual tool call is successfully executed _(allows result processing, caching, and transformation)_.
- - `onToolCallError`: When a tool call encounters an error _(allows custom error handling and recovery)_.
- - `onStreamEvent`: _(Optional)_ During streaming response generation, called for each stream event _(enables real-time processing and UI updates)_.
- - `shouldEndInteractionLoop`: To determine if the conversation turn is over.
- - `onAfterInteractionLoopEnd`: After the main loop has finished.
+     A -- "initializes and calls" --> C
+     C -- "delegates to" --> D
+     D -- "uses" --> E
+     D -- "uses" --> F
+     D -- "executes" --> G
+     E -- "communicates with" --> H((LLM))
+     F -- "manages" --> I{Booths}
+     F -- "manages" --> J{Tools}
+     F -- "manages" --> K{Plugins}
+     G -- "hook into" --> D
+ ```

- The system includes several core plugins by default:
+ ---

- - `ConversationHistoryPlugin`: Maintains the history of the conversation.
- - `ContextProviderPlugin`: Provides the LLM with the context of the current booth.
- - `ToolProviderPlugin`: Provides the LLM with the available tools for the current booth.
- - `ToolExecutorPlugin`: Executes tool calls requested by the LLM with granular hook support for individual tool call interception.
- - `FinishTurnPlugin`: Determines when the LLM's turn is finished and it's waiting for user input.
+ ## API Reference

- #### Enhanced Tool Call Management
+ ### createCoreBooth

- The plugin system now provides granular control over individual tool executions through three new hooks:
+ Factory that wires an `LLMAdapter`, a `BoothConfig`, and internal registries/plugins into a working `CoreBooth` instance.

- - **`onBeforeToolCall`**: Intercept and modify tool calls before execution (parameter validation, authorization, logging)
- - **`onAfterToolCall`**: Process and transform tool results after successful execution (caching, metadata addition, data transformation)
- - **`onToolCallError`**: Handle tool execution errors with custom recovery logic (fallback responses, error logging, graceful degradation)
+ ```ts
+ function createCoreBooth(
+   adapter: LLMAdapter<any>,
+   booth: BoothConfig,
+   options?: { endInteractionLoopMarker?: string }
+ ): CoreBooth
+ ```

- This enables sophisticated tool management patterns like authentication, caching, audit logging, and error recovery at the individual tool level.
+ #### Options

- ### 3. Interaction Processor
+ | Name                       | Type     |                        Default | Description                                           |
+ | -------------------------- | -------- | -----------------------------: | ----------------------------------------------------- |
+ | `endInteractionLoopMarker` | `string` | `"__awaiting_user_response__"` | Marker used by plugins to determine when a turn ends. |

- The `InteractionProcessor` is the engine of the system. It manages the interaction loop with the LLM:
+ #### Examples

- 1. It takes user input.
- 2. Runs the `onBefore...` plugin hooks.
- 3. Sends the payload to the LLM.
- 4. Receives the response.
- 5. Runs the `onResponseReceived` plugin hooks to process the response (e.g., execute tools).
- 6. Repeats this loop until a plugin's `shouldEndInteractionLoop` returns `true`.
- 7. Runs the `onAfter...` plugin hooks for cleanup.
+ **Basic**

- ## Streaming Support
+ ```ts
+ const coreBooth = createCoreBooth(adapter, pirateBooth);
+ ```

- The Booths framework includes comprehensive streaming support that enables real-time response generation while preserving the full plugin ecosystem and backward compatibility.
+ **Custom end-of-turn marker**

- ### Overview
+ ```ts
+ const coreBooth = createCoreBooth(adapter, pirateBooth, {
+   endInteractionLoopMarker: '__custom_marker__',
+ });
+ ```

- Streaming allows the LLM's response to be processed and displayed in real-time as it's being generated, providing a more responsive user experience. The framework handles streaming at multiple levels:
+ ---

- - **Real-time Events**: Stream events are emitted as content arrives
- - **Plugin Integration**: Plugins can hook into streaming events for real-time processing
- - **Complete Responses**: Existing plugins continue to receive complete responses
- - **Automatic Fallback**: Graceful fallback to non-streaming if streaming fails
+ ### InteractionProcessor

- ### Enabling Streaming
+ Engine that manages the interaction loop with the LLM and the plugin lifecycle.

- Streaming can be enabled simply by setting a boolean flag when creating the `InteractionProcessor`:
+ #### Flow

- ```typescript
- import { InteractionProcessor, type InteractionProcessorOptions } from 'booths';
+ 1. Take user input
+ 2. Run `onBefore...` plugin hooks
+ 3. Send payload via `LLMAdapter`
+ 4. Receive response
+ 5. Run `onResponseReceived` hooks (e.g., execute tools)
+ 6. Repeat until `shouldEndInteractionLoop` returns `true`
+ 7. Run `onAfter...` hooks for cleanup

- const options: InteractionProcessorOptions = {
-   streaming: true, // Enable streaming
-   fallbackToNonStreaming: true // Optional: fallback if streaming fails
- };
+ #### Important Constraints

- const processor = new InteractionProcessor(
-   boothRegistry,
-   pluginRegistry,
-   toolRegistry,
-   llmAdapter, // Must implement streaming methods
-   options
- );
- ```
+ * **Ordering matters**: tool execution and hook order follow the loop above.
+ * **End-of-turn detection** relies on the configured `endInteractionLoopMarker` (see the sketch below).
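+
+ A minimal sketch of marker-based turn detection (illustrative only — the built-in `FinishTurnPlugin` already covers this, and the `onResponseReceived` signature here is an assumption):
+
+ ```ts
+ import type { BoothPlugin } from 'booths';
+
+ export class MarkerFinishPlugin implements BoothPlugin {
+   id = 'marker-finish';
+   name = 'Marker Finish';
+   description = 'Ends the turn when the response text contains the marker';
+   private lastText = '';
+
+   // Assumed signature: the hook receives the LLM response.
+   async onResponseReceived(response: { output_text?: string }) {
+     this.lastText = response.output_text ?? '';
+     return response;
+   }
+
+   async shouldEndInteractionLoop(): Promise<boolean> {
+     return this.lastText.includes('__awaiting_user_response__');
+   }
+ }
+ ```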

- ### Stream Events
+ ---

- The streaming system emits different types of events as the response is generated:
+ ### Registries

- ```typescript
- export interface StreamEvent {
-   type: 'text_delta' | 'tool_call_start' | 'tool_call_end' | 'response_start' | 'response_end';
-   content?: string; // Full content for text events
-   delta?: string; // Incremental text for text_delta events
-   toolCall?: object; // Tool call information
-   metadata?: any; // Additional event metadata
- }
- ```
+ Booths uses three registries to manage the system’s moving parts; a wiring sketch follows the descriptions below.

- **Event Types:**
- - `response_start`: Streaming begins
- - `text_delta`: Incremental text content arrives
- - `tool_call_start`: LLM begins a tool call
- - `tool_call_end`: Tool call completes
- - `response_end`: Streaming completes
-
- ### Streaming Plugin Hooks
-
- Plugins can implement the optional `onStreamEvent` hook to process stream events in real-time:
-
- ```typescript
- import type { BoothPlugin, StreamEvent, StreamContext, RepositoryUtilities } from 'booths';
-
- export class MyStreamingPlugin implements BoothPlugin {
-   id = 'my-streaming-plugin';
-   name = 'My Streaming Plugin';
-   description = 'Handles streaming events';
-
-   async onStreamEvent(
-     utilities: RepositoryUtilities,
-     streamEvent: StreamEvent,
-     context: StreamContext
-   ): Promise<StreamEvent> {
-     // Process the stream event
-     if (streamEvent.type === 'text_delta') {
-       console.log(`Received text: ${streamEvent.delta}`);
-
-       // Optionally transform the event
-       return {
-         ...streamEvent,
-         delta: streamEvent.delta?.toUpperCase() // Example transformation
-       };
-     }
-
-     return streamEvent; // Pass through unchanged
-   }
+ #### BoothRegistry

-   async shouldEndInteractionLoop(): Promise<boolean> {
-     return false;
-   }
- }
- ```
+ * Manages `BoothConfig` objects (each is a specialized agent: role, description, tools, examples).
+ * Tracks the current context booth.

- ### Built-in Streaming Plugins
+ #### ToolRegistry

- The framework includes example streaming plugins:
+ * Stores tools available to the LLM (as `ToolModule`).
+ * Provides registration helpers like `registerTools([...])`.

- #### StreamingLoggerPlugin
+ #### BoothPluginRegistry

- Logs streaming events in real-time for debugging and monitoring:
+ * Manages plugins that hook into the interaction lifecycle.
+ * Enables modular capabilities like conversation history, context provisioning, tool provisioning/execution, and turn finalization.
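+
+ A wiring sketch (the `toolRegistry` accessor and `registerTools` are shown in the Quick Start; the other accessor names are assumptions — check your version's `CoreBooth` surface):
+
+ ```ts
+ import { createCoreBooth } from 'booths';
+ import { pirateBooth } from './my-booths';
+ import { tellPirateJokeTool } from './my-tools';
+ import { OpenAIAdapter } from './OpenAIAdapter';
+
+ const coreBooth = createCoreBooth(new OpenAIAdapter(process.env.OPENAI_API_KEY!), pirateBooth);
+
+ // Shown in the Quick Start: register tools for the current booth.
+ coreBooth.toolRegistry.registerTools([tellPirateJokeTool]);
+
+ // Assumed accessors (names may differ in your version):
+ // coreBooth.boothRegistry.registerBooths([anotherBooth]);
+ // coreBooth.pluginRegistry.registerPlugins([new MyPlugin()]);
+ ```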

- ```typescript
- import { StreamingLoggerPlugin } from 'booths';
+ ---

- const logger = new StreamingLoggerPlugin('[MyApp]');
- pluginRegistry.registerPlugins([logger]);
- ```
+ ### Plugins

- #### StreamingUIPlugin
+ Plugins implement `BoothPlugin` and can influence request/response flow and tool execution; a skeleton follows the hook list below.

- Provides real-time UI updates with customizable callbacks:
+ #### Lifecycle Hooks

- ```typescript
- import { StreamingUIPlugin } from 'booths';
+ * `onBeforeInteractionLoopStart`
+ * `onBeforeMessageSend`
+ * `onResponseReceived`
+ * `onBeforeToolCall`
+ * `onAfterToolCall`
+ * `onToolCallError`
+ * `shouldEndInteractionLoop`
+ * `onAfterInteractionLoopEnd`
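+
+ A bare-bones skeleton (a sketch: `id`/`name`/`description` and the no-argument `shouldEndInteractionLoop` follow the shape used elsewhere in this README; the other hook parameter types are assumptions):
+
+ ```ts
+ import type { BoothPlugin } from 'booths';
+
+ export class TimingPlugin implements BoothPlugin {
+   id = 'timing-plugin';
+   name = 'Timing Plugin';
+   description = 'Logs how long each LLM round-trip takes';
+   private startedAt = 0;
+
+   async onBeforeMessageSend(payload: unknown) {
+     this.startedAt = Date.now(); // mark the outbound request
+     return payload;
+   }
+
+   async onResponseReceived(response: unknown) {
+     console.log(`LLM round-trip took ${Date.now() - this.startedAt}ms`);
+     return response;
+   }
+
+   async shouldEndInteractionLoop(): Promise<boolean> {
+     return false; // defer end-of-turn detection to FinishTurnPlugin
+   }
+ }
+ ```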

- const uiPlugin = new StreamingUIPlugin((event, context) => {
-   if (event.type === 'text_delta') {
-     // Update your UI with the new text
-     document.getElementById('response').textContent += event.delta;
-   }
- });
+ **Built-in plugin capabilities (typical set)**

- pluginRegistry.registerPlugins([uiPlugin]);
- ```
+ * `ConversationHistoryPlugin` – maintains conversation history
+ * `ContextProviderPlugin` – supplies booth context to the LLM
+ * `ToolProviderPlugin` – exposes available tools
+ * `ToolExecutorPlugin` – executes tool calls with per-call hooks
+ * `FinishTurnPlugin` – decides when a turn is complete

- ### LLM Adapter Streaming Implementation
+ ---

- To support streaming, your LLM adapter should implement the optional streaming methods:
+ ## Best Practices

- ```typescript
- export class MyStreamingAdapter implements LLMAdapter<MyResponse> {
-   // Required methods
-   async invoke(params: ResponseCreateParamsNonStreaming): Promise<MyResponse> {
-     // Non-streaming implementation
-   }
+ * Keep tools **pure** and **idempotent** when possible; return structured results (see the example below).
+ * Favor **small, composable plugins**; use `onBeforeToolCall`/`onAfterToolCall`/`onToolCallError` to isolate auth, caching, and audit.
+ * Clearly separate **global tools** vs **booth-specific tools** and document access rules.
+ * Centralize **end-of-turn logic** to avoid inconsistent session behavior.
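+
+ For instance, a read-only tool that is safe to retry and returns a structured result (the `ToolModule` shape follows the Quick Start; passing parsed arguments to `execute` is an assumption):
+
+ ```ts
+ import type { ToolModule } from 'booths';
+
+ export const getOrderStatusTool: ToolModule = {
+   type: 'function',
+   name: 'get_order_status',
+   description: 'Looks up the status of an order by id (read-only, idempotent).',
+   parameters: {
+     type: 'object',
+     properties: { orderId: { type: 'string' } },
+     required: ['orderId'],
+   },
+   execute: async ({ orderId }: { orderId: string }) => {
+     // Hypothetical lookup; replace with your data source.
+     return { orderId, status: 'shipped', updatedAt: '2024-01-01T00:00:00Z' };
+   },
+ };
+ ```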

-   async interpret(response: MyResponse): Promise<Response> {
-     // Convert to standard format
-   }
+ ---

-   // Optional streaming methods
-   async *invokeStream(params: ResponseCreateParamsStreaming): AsyncIterable<MyResponse> {
-     // Yield streaming chunks
-     const stream = await this.llm.createStreamingResponse(params);
-     for await (const chunk of stream) {
-       yield chunk;
-     }
-   }
+ ## Advanced Usage

-   async interpretStream(chunk: MyResponse): Promise<StreamEvent> {
-     // Convert chunk to StreamEvent
-     return {
-       type: 'text_delta',
-       delta: chunk.delta,
-       content: chunk.content
-     };
-   }
- }
+ ### Customizing the end-of-turn marker
+
+ ```ts
+ createCoreBooth(adapter, booth, { endInteractionLoopMarker: '__custom__' });
  ```

- ### Stream Context
+ Use this to coordinate UI or multi-agent systems that rely on explicit “awaiting user” signals.

- Plugins receive context information about the streaming session:
+ ### Per-tool interception & error recovery

- ```typescript
- export interface StreamContext {
-   responseParams: ResponseCreateParamsNonStreaming; // Original request
-   streamIndex: number; // Event index in stream
-   totalExpectedEvents?: number; // Expected total (if known)
-   accumulatedResponse: Partial<Response>; // Response built so far
+ ```ts
+ class AuditPlugin implements BoothPlugin {
+   async onBeforeToolCall(ctx) { /* validate or redact */ }
+   async onAfterToolCall(ctx) { /* persist results */ }
+   async onToolCallError(ctx, err) { /* fallback or retry */ }
  }
  ```

- ### Error Handling
+ These hooks enable authentication, caching, and graceful degradation at the **individual tool** level.
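+
+ Registering the plugin might look like this (the `pluginRegistry` accessor name is an assumption; `registerPlugins` mirrors `registerTools` from the Quick Start):
+
+ ```ts
+ // Assumed accessor; adjust to your version's CoreBooth surface.
+ coreBooth.pluginRegistry.registerPlugins([new AuditPlugin()]);
+ ```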

- The streaming system includes robust error handling:
+ ---

- - **Plugin Error Isolation**: Errors in streaming plugins don't break the stream
- - **Automatic Fallback**: Can fallback to non-streaming mode on errors
- - **Graceful Degradation**: System continues operating if streaming fails
+ ## Types

- ### Backward Compatibility
+ Core concepts you’ll interact with most:

- Streaming support is fully backward compatible:
+ * `BoothConfig`: identity, role, description, `tools: string[]`, sample `examples` (illustrated below)
+ * `ToolModule`: `{ type: 'function' | ..., name, description, parameters, execute }`
+ * `LLMAdapter<TResponse>`: `{ invoke(params): Promise<TResponse>; interpret(response): Promise<TResponse> }`
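+
+ For example, a `BoothConfig` literal matching the fields above (a sketch; the field set is inferred from this README, and the shape of `examples` entries is an assumption):
+
+ ```ts
+ import type { BoothConfig } from 'booths';
+
+ export const supportBooth: BoothConfig = {
+   id: 'support-booth',
+   role: 'Customer Support Agent',
+   description: 'Answers questions about orders and shipping.',
+   tools: ['get_order_status'],           // names of registered tools
+   examples: ['Where is my order #123?'], // sample prompts for the booth
+ };
+ ```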

- - **Existing Plugins**: Continue to work unchanged
- - **Complete Responses**: Plugins still receive full `Response` objects
- - **Optional Implementation**: Adapters don't require streaming support
- - **Default Behavior**: Non-streaming mode by default
+ ---

- ### Example: Complete Streaming Setup
+ ## Session & State Management

- Here's a complete example showing streaming integration:
+ * Conversation continuity (history, context booth) is typically maintained by plugins.
+ * Tool calls may emit intermediate results/messages; design your UI to handle “in-progress” states.

- ```typescript
- import {
-   InteractionProcessor,
-   BoothRegistry,
-   BoothPluginRegistry,
-   ToolRegistry,
-   StreamingLoggerPlugin,
-   StreamingUIPlugin,
-   type InteractionProcessorOptions
- } from 'booths';
-
- // 1. Create streaming-enabled adapter (implement streaming methods)
- const streamingAdapter = new MyStreamingLLMAdapter(apiKey);
+ ---

- // 2. Set up registries and booth
- const testBooth = { id: 'chat-booth', role: 'Assistant', description: 'Helpful assistant' };
- const boothRegistry = new BoothRegistry(testBooth);
- const pluginRegistry = new BoothPluginRegistry();
- const toolRegistry = new ToolRegistry();
+ ## Error Handling

- // 3. Set up streaming plugins
- const logger = new StreamingLoggerPlugin('[Chat]');
- const uiUpdater = new StreamingUIPlugin((event) => {
-   if (event.type === 'text_delta') {
-     document.getElementById('chat').textContent += event.delta;
-   }
- });
+ * Surface adapter/transport errors from `invoke` (see the sketch below).
+ * Prefer plugin-level `onToolCallError` for tool failures; use retries or fallback responses.
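+
+ A sketch of surfacing transport errors at the call site (uses the Quick Start's `coreBooth`):
+
+ ```ts
+ try {
+   const response = await coreBooth.callProcessor.send('Hello!');
+   console.log(response.output_text);
+ } catch (err) {
+   // Adapter/transport failures (network, auth, rate limits) propagate here.
+   console.error('LLM call failed:', err);
+ }
+ ```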

- pluginRegistry.registerPlugins([logger, uiUpdater]);
+ ---

- // 4. Enable streaming
- const streamingOptions: InteractionProcessorOptions = {
-   streaming: true,
-   fallbackToNonStreaming: true
- };
+ ## License

- const processor = new InteractionProcessor(
-   boothRegistry,
-   pluginRegistry,
-   toolRegistry,
-   streamingAdapter,
-   streamingOptions
- );
-
- // 5. Send message with real-time streaming
- const response = await processor.send('Hello, stream this response!');
- // User sees content appear in real-time, plugins receive complete response
- ```
+ Distributed under the terms in the project’s LICENSE file.