booths 1.4.0 → 1.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,93 +1,72 @@
 # Booths
 
-Booths is a modular and extensible framework for building and managing conversational AI agents. It provides a structured way to define the capabilities, context, and tools for different AI-powered conversational flows.
+[![Check Build on Pull Request](https://github.com/phoneburner/booths/actions/workflows/build-check.yml/badge.svg)](https://github.com/phoneburner/booths/actions/workflows/build-check.yml)
+[![Format Check](https://github.com/phoneburner/booths/actions/workflows/format-check.yml/badge.svg)](https://github.com/phoneburner/booths/actions/workflows/format-check.yml)
+[![Publish to NPM on Release](https://github.com/phoneburner/booths/actions/workflows/release.yml/badge.svg)](https://github.com/phoneburner/booths/actions/workflows/release.yml)
+[![Test Suite](https://github.com/phoneburner/booths/actions/workflows/test.yml/badge.svg)](https://github.com/phoneburner/booths/actions/workflows/test.yml)
 
-The system is designed around a central `CoreBooth` class that orchestrates interactions between users and a Large Language Model (LLM), leveraging a system of registries and plugins to manage the conversational state and capabilities.
+*A modular, extensible framework for building and managing conversational AI agents in TypeScript.*
 
-## Architecture Overview
+> Booths provides a structured way to define agent capabilities, context, and tools, orchestrated by a `CoreBooth` that manages the interaction loop with your LLM and a rich plugin lifecycle.
 
-The Booths framework is built around a few key concepts that work together to create a powerful and flexible conversational AI system.
+---
 
-```mermaid
-graph TD
-    subgraph Application Layer
-        A[Your Application]
-    end
+## Table of Contents
 
-    subgraph Booth Service Layer
-        C(CoreBooth)
-    end
+* [Installation](#installation)
+* [Quick Start Guide](#quick-start-guide)
+* [Architecture Overview](#architecture-overview)
+* [API Reference](#api-reference)
 
-    subgraph Core Components
-        D[Interaction Processor]
-        E[LLM Adapter]
-        F[Registries]
-        G[Plugins]
-    end
+  * [createCoreBooth](#createcorebooth)
 
-    A -- "initializes and calls" --> C
-    C -- "delegates to" --> D
-    D -- "uses" --> E
-    D -- "uses" --> F
-    D -- "executes" --> G
-    E -- "communicates with" --> H((LLM))
-    F -- "manages" --> I{Booths}
-    F -- "manages" --> J{Tools}
-    F -- "manages" --> K{Plugins}
-    G -- "hook into" --> D
+    * [Options](#options)
+    * [Examples](#examples)
+  * [InteractionProcessor](#interactionprocessor)
 
-    style C fill:#f9f,stroke:#333,stroke-width:2px
-```
+    * [Flow](#flow)
+    * [Important Constraints](#important-constraints)
+  * [Registries](#registries)
 
-1. **Application Layer**: Your application integrates the Booths framework to handle conversational AI interactions.
-2. **`CoreBooth`**: The framework foundation that provides global functionality, instructions, and infrastructure that applies to all booths. It manages the overall system configuration and coordinates the interaction flow.
-3. **`InteractionProcessor`**: The engine that drives the conversation. It takes user input, runs it through the plugin lifecycle, sends it to the LLM (via the adapter), and processes the response.
-4. **`LLMAdapter`**: A component that handles communication with the specific LLM provider (e.g., OpenAI). It translates requests and responses between the Booths system and the LLM's API.
-5. **Registries**: These are responsible for managing the different components of the system:
-   * `BoothRegistry`: Manages `BoothConfig` objects that define the behavior of different AI agents.
-   * `ToolRegistry`: Manages the tools (functions) that booths can use.
-   * `BoothPluginRegistry`: Manages the plugins that hook into the conversational lifecycle.
-6. **Plugins**: These are modules that add functionality to the system by hooking into the `InteractionProcessor`'s lifecycle (e.g., managing conversation history, providing context to the LLM, executing tools).
+  * [BoothRegistry](#boothregistry)
+  * [ToolRegistry](#toolregistry)
+  * [BoothPluginRegistry](#boothpluginregistry)
+  * [Plugins](#plugins)
 
-## Getting Started
+    * [Lifecycle Hooks](#lifecycle-hooks)
+* [Best Practices](#best-practices)
+* [Advanced Usage](#advanced-usage)
 
-The Booths framework is designed as a TypeScript library for building conversational AI systems. This repository contains the core framework implementation.
+  * [Customizing the end-of-turn marker](#customizing-the-end-of-turn-marker)
+  * [Per-tool interception & error recovery](#per-tool-interception--error-recovery)
+* [Types](#types)
+* [Session & State Management](#session--state-management)
+* [Error Handling](#error-handling)
+* [License](#license)
 
-### Installation
+---
+
+## Installation
 
 ```bash
 npm install booths
 ```
 
-### Prerequisites
-
-- Node.js and npm installed
-- An LLM provider API key (e.g., OpenAI)
+**Prerequisites**
 
-### Development
-
-To build the library:
-
-```bash
-npm run build
-```
+* Node.js and npm
+* An API key for your LLM provider (e.g., OpenAI)
 
-To check types:
-
-```bash
-npm run typecheck
-```
+---
 
 ## Quick Start Guide
 
-Here is a lightweight example of how to set up and use the Core Booth system manually.
-
-### 1. Define a Booth
+A minimal setup that defines a booth, a tool, an adapter, and starts a conversation.
 
-First, define a booth configuration. This object specifies the booth's identity, role, and the tools it can use.
+### 1) Define a Booth
 
-```typescript
-// in my-booths.ts
+```ts
+// my-booths.ts
 import type { BoothConfig } from 'booths';
 
 export const pirateBooth: BoothConfig = {
@@ -99,12 +78,10 @@ export const pirateBooth: BoothConfig = {
 };
 ```
 
-### 2. Define a Tool
+### 2) Define a Tool
 
-Next, create a tool that the booth can use. A tool is a function that the LLM can decide to call.
-
-```typescript
-// in my-tools.ts
+```ts
+// my-tools.ts
 import type { ToolModule } from 'booths';
 
 export const tellPirateJokeTool: ToolModule = {
@@ -113,122 +90,256 @@ export const tellPirateJokeTool: ToolModule = {
   description: 'Tells a classic pirate joke.',
   parameters: { type: 'object', properties: {} },
   execute: async () => {
-    return { joke: "Why are pirates called pirates? Because they arrrr!" };
+    return { joke: 'Why are pirates called pirates? Because they arrrr!' };
   },
 };
 ```
 
-### 3. Implement the LLM Adapter
-
-The `CoreBooth` requires an `LLMAdapter` to communicate with your chosen language model. Here is a minimal example for OpenAI.
+### 3) Implement a simple LLM Adapter
 
-```typescript
-// in OpenAIAdapter.ts
-import type { LLMAdapter, ResponseCreateParamsNonStreaming, Response } from 'booths';
+```ts
+// OpenAIAdapter.ts
+import type {
+  LLMAdapter,
+  ResponseCreateParamsNonStreaming,
+  Response,
+} from 'booths';
 import OpenAI from 'openai';
 
 export class OpenAIAdapter implements LLMAdapter<Response> {
   private openai: OpenAI;
-
   constructor(apiKey: string) {
     this.openai = new OpenAI({ apiKey });
   }
-
   async invoke(params: ResponseCreateParamsNonStreaming): Promise<Response> {
     return this.openai.responses.create({ ...params, model: 'gpt-4o' });
   }
-
   async interpret(response: Response): Promise<Response> {
     return response;
   }
 }
 ```
 
-### 4. Initialize the CoreBooth
-
-Finally, use the `createCoreBooth` factory to instantiate the system.
+### 4) Initialize and talk to the booth
 
-```typescript
-// in main.ts
+```ts
+// main.ts
 import { createCoreBooth } from 'booths';
 import { pirateBooth } from './my-booths';
 import { tellPirateJokeTool } from './my-tools';
-import { OpenAIAdapter } from './openAIAdapter';
-
-// 1. Create the LLM adapter
-const llmAdapter = new OpenAIAdapter('your-openai-api-key');
-
-// 2. Create the CoreBooth instance
-const coreBooth = createCoreBooth(llmAdapter, pirateBooth);
+import { OpenAIAdapter } from './OpenAIAdapter';
 
-// Optional: Customize the end interaction loop marker
-// const coreBooth = createCoreBooth(llmAdapter, pirateBooth, {
-//   endInteractionLoopMarker: '__custom_marker__'
-// });
+const llmAdapter = new OpenAIAdapter(process.env.OPENAI_API_KEY!);
+const coreBooth = createCoreBooth(llmAdapter, pirateBooth /*, { endInteractionLoopMarker: '__custom_marker__' }*/);
 
-// 3. Register the tool (this step will be improved in future versions)
+// Register tools (available to the current booth)
 coreBooth.toolRegistry.registerTools([tellPirateJokeTool]);
 
-// 4. Send a message and get a response
-async function haveConversation() {
-  const userInput = 'Tell me a pirate joke.';
-  const response = await coreBooth.callProcessor.send(userInput);
-
+// Send a message and read the result
+async function run() {
+  const response = await coreBooth.callProcessor.send('Tell me a pirate joke.');
   console.log(response.output_text);
-  // Expected output: "Why are pirates called pirates? Because they arrrr!"
 }
+run();
+```
+
+---
+
+## Architecture Overview
+
+```mermaid
+graph TD
+    subgraph Application Layer
+        A[Your Application]
+    end
+
+    subgraph Booth Service Layer
+        C(CoreBooth)
+    end
+
+    subgraph Core Components
+        D[Interaction Processor]
+        E[LLM Adapter]
+        F[Registries]
+        G[Plugins]
+    end
+
+    A -- "initializes and calls" --> C
+    C -- "delegates to" --> D
+    D -- "uses" --> E
+    D -- "uses" --> F
+    D -- "executes" --> G
+    E -- "communicates with" --> H((LLM))
+    F -- "manages" --> I{Booths}
+    F -- "manages" --> J{Tools}
+    F -- "manages" --> K{Plugins}
+    G -- "hook into" --> D
+```
+
+---
 
-haveConversation();
+## API Reference
+
+### createCoreBooth
+
+Factory that wires an `LLMAdapter`, a `BoothConfig`, and internal registries/plugins into a working `CoreBooth` instance.
+
+```ts
+function createCoreBooth(
+  adapter: LLMAdapter<any>,
+  booth: BoothConfig,
+  options?: { endInteractionLoopMarker?: string }
+): CoreBooth
+```
+
+#### Options
+
+| Name                       | Type     | Default                        | Description                                           |
+| -------------------------- | -------- | -----------------------------: | ----------------------------------------------------- |
+| `endInteractionLoopMarker` | `string` | `"__awaiting_user_response__"` | Marker used by plugins to determine when a turn ends. |
+
+#### Examples
+
+**Basic**
+
+```ts
+const coreBooth = createCoreBooth(adapter, pirateBooth);
+```
+
+**Custom end-of-turn marker**
+
+```ts
+const coreBooth = createCoreBooth(adapter, pirateBooth, {
+  endInteractionLoopMarker: '__custom_marker__',
+});
 ```
-## How It Works
 
-The Core Booth system is comprised of several key components that work together to process user input and generate contextual responses.
+---
+
+### InteractionProcessor
+
+Engine that manages the interaction loop with the LLM and the plugin lifecycle.
+
+#### Flow
+
+1. Take user input
+2. Run `onBefore...` plugin hooks
+3. Send payload via `LLMAdapter`
+4. Receive response
+5. Run `onResponseReceived` hooks (e.g., execute tools)
+6. Repeat until `shouldEndInteractionLoop` returns `true`
+7. Run `onAfter...` hooks for cleanup
+
+#### Important Constraints
+
+* **Ordering matters**: tool execution and hook order follow the loop above.
+* **End-of-turn detection** relies on the configured `endInteractionLoopMarker`.
+
+---
+
+### Registries
+
+Booths uses three registries to manage the system’s moving parts.
+
+#### BoothRegistry
+
+* Manages `BoothConfig` objects (each is a specialized agent: role, description, tools, examples).
+* Tracks the current context booth.
+
+#### ToolRegistry
+
+* Stores tools available to the LLM (as `ToolModule`).
+* Provides registration helpers like `registerTools([...])`.
+
+#### BoothPluginRegistry
+
+* Manages plugins that hook into the interaction lifecycle.
+* Enables modular capabilities like conversation history, context provisioning, tool provisioning/execution, and turn-finalization.
+
+---
+
+### Plugins
+
+Plugins implement `BoothPlugin` and can influence request/response flow and tool execution.
+
+#### Lifecycle Hooks
+
+* `onBeforeInteractionLoopStart`
+* `onBeforeMessageSend`
+* `onResponseReceived`
+* `onBeforeToolCall`
+* `onAfterToolCall`
+* `onToolCallError`
+* `shouldEndInteractionLoop`
+* `onAfterInteractionLoopEnd`
+
+**Built-in plugin capabilities (typical set)**
+
+* `ConversationHistoryPlugin` – maintains conversation history
+* `ContextProviderPlugin` – supplies booth context to the LLM
+* `ToolProviderPlugin` – exposes available tools
+* `ToolExecutorPlugin` – executes tool calls with per-call hooks
+* `FinishTurnPlugin` – decides when a turn is complete
+
+---
+
+## Best Practices
+
+* Keep tools **pure** and **idempotent** when possible; return structured results.
+* Favor **small, composable plugins**; use `onBeforeToolCall`/`onAfterToolCall`/`onToolCallError` to isolate auth, caching, and audit.
+* Clearly separate **global tools** vs **booth-specific tools** and document access rules.
+* Centralize **end-of-turn logic** to avoid inconsistent session behavior.
+
+---
+
+## Advanced Usage
+
+### Customizing the end-of-turn marker
+
+```ts
+createCoreBooth(adapter, booth, { endInteractionLoopMarker: '__custom__' });
+```
+
+Use this to coordinate UI or multi-agent systems that rely on explicit “awaiting user” signals.
+
+### Per-tool interception & error recovery
+
+```ts
+class AuditPlugin implements BoothPlugin {
+  async onBeforeToolCall(ctx) { /* validate or redact */ }
+  async onAfterToolCall(ctx) { /* persist results */ }
+  async onToolCallError(ctx, err) { /* fallback or retry */ }
+}
+```
 
-### 1. Registries
+These hooks enable authentication, caching, and graceful degradation at the **individual tool** level.
 
-- **`BoothRegistry`**: Manages the collection of `BoothConfig` objects. Each booth represents a specialized agent with a specific role, description, and set of tools. It also keeps track of the "current context booth" to ensure the conversation stays on topic.
-- **`ToolRegistry`**: Manages the tools that can be made available to the LLM. Tools are functions that the AI can decide to call to perform actions or retrieve information.
-- **`BoothPluginRegistry`**: Manages plugins that hook into the interaction lifecycle. This allows for modular and reusable functionality to be added to the system.
+---
 
-### 2. Plugins
+## Types
 
-Plugins are classes that implement the `BoothPlugin` interface. They can execute logic at different stages of the conversation:
+Core concepts you’ll interact with most:
 
-- `onBeforeInteractionLoopStart`: Before the main loop begins.
-- `onBeforeMessageSend`: Before a message is sent to the LLM.
-- `onResponseReceived`: After a response is received from the LLM.
-- `onBeforeToolCall`: Before each individual tool call is executed _(allows modification of tool parameters, validation, and logging)_.
-- `onAfterToolCall`: After each individual tool call is successfully executed _(allows result processing, caching, and transformation)_.
-- `onToolCallError`: When a tool call encounters an error _(allows custom error handling and recovery)_.
-- `shouldEndInteractionLoop`: To determine if the conversation turn is over.
-- `onAfterInteractionLoopEnd`: After the main loop has finished.
+* `BoothConfig`: identity, role, description, `tools: string[]`, sample `examples`
+* `ToolModule`: `{ type: 'function' | ..., name, description, parameters, execute }`
+* `LLMAdapter<TResponse>`: `{ invoke(params): Promise<TResponse>; interpret(response): Promise<TResponse> }`
 
-The system includes several core plugins by default:
+---
 
-- `ConversationHistoryPlugin`: Maintains the history of the conversation.
-- `ContextProviderPlugin`: Provides the LLM with the context of the current booth.
-- `ToolProviderPlugin`: Provides the LLM with the available tools for the current booth.
-- `ToolExecutorPlugin`: Executes tool calls requested by the LLM with granular hook support for individual tool call interception.
-- `FinishTurnPlugin`: Determines when the LLM's turn is finished and it's waiting for user input. The marker used to detect conversation end can be customized via the `endInteractionLoopMarker` option (defaults to `__awaiting_user_response__`).
+## Session & State Management
 
-#### Enhanced Tool Call Management
+* Conversation continuity (history, context booth) is typically maintained by plugins.
+* Tool calls may emit intermediate results/messages; design UI to handle “in-progress” states.
 
-The plugin system now provides granular control over individual tool executions through three new hooks:
+---
 
-- **`onBeforeToolCall`**: Intercept and modify tool calls before execution (parameter validation, authorization, logging)
-- **`onAfterToolCall`**: Process and transform tool results after successful execution (caching, metadata addition, data transformation)
-- **`onToolCallError`**: Handle tool execution errors with custom recovery logic (fallback responses, error logging, graceful degradation)
+## Error Handling
 
-This enables sophisticated tool management patterns like authentication, caching, audit logging, and error recovery at the individual tool level.
+* Surface adapter/transport errors from `invoke`.
+* Prefer plugin-level `onToolCallError` for tool failures; use retries or fallback responses.
 
-### 3. Interaction Processor
+---
 
-The `InteractionProcessor` is the engine of the system. It manages the interaction loop with the LLM:
+## License
 
-1. It takes user input.
-2. Runs the `onBefore...` plugin hooks.
-3. Sends the payload to the LLM.
-4. Receives the response.
-5. Runs the `onResponseReceived` plugin hooks to process the response (e.g., execute tools).
-6. Repeats this loop until a plugin's `shouldEndInteractionLoop` returns `true`.
-7. Runs the `onAfter...` plugin hooks for cleanup.
+Distributed under the project’s LICENSE file.
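The rewritten README's Flow section describes the interaction loop in seven steps. As a rough, hypothetical sketch of that control flow (the hook names come from the README; the synchronous processor, plugin shapes, and fake adapter below are simplified stand-ins, not the real `booths` types):

```typescript
// Simplified stand-ins for the plugin hooks the README lists.
type Hooks = {
  onBeforeMessageSend?: (input: string) => string;
  onResponseReceived?: (response: string) => void;
  shouldEndInteractionLoop?: (response: string) => boolean;
};

function runInteractionLoop(
  input: string,
  invoke: (payload: string) => string, // stand-in for LLMAdapter.invoke
  plugins: Hooks[],
): string {
  let payload = input;
  let response = '';
  for (let turn = 0; turn < 10; turn++) { // guard against runaway loops
    // Steps 2-3: run onBefore hooks, then send the payload via the adapter
    for (const p of plugins) payload = p.onBeforeMessageSend?.(payload) ?? payload;
    response = invoke(payload); // step 4: receive the response
    // Step 5: let plugins process the response (e.g. execute tools)
    for (const p of plugins) p.onResponseReceived?.(response);
    // Step 6: stop once any plugin reports the turn is over
    if (plugins.some((p) => p.shouldEndInteractionLoop?.(response))) break;
    payload = response; // feed results back for another round
  }
  return response;
}

// A fake adapter that emits the end-of-turn marker on its second call.
let calls = 0;
const fakeInvoke = (payload: string) =>
  ++calls < 2 ? `tool_call for: ${payload}` : '__awaiting_user_response__';

// Minimal analogue of the FinishTurnPlugin's marker check.
const finishTurn: Hooks = {
  shouldEndInteractionLoop: (r) => r.includes('__awaiting_user_response__'),
};

console.log(runInteractionLoop('Tell me a pirate joke.', fakeInvoke, [finishTurn]));
// → __awaiting_user_response__
```

The real `InteractionProcessor` runs async hooks and actual tool execution, but the loop the README documents follows this shape.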
package/dist/index.d.ts CHANGED
@@ -408,6 +408,13 @@ export declare class BoothRegistry {
      * @returns Record of all booth configurations indexed by their IDs
      */
     getAllBooths(): Record<string, BoothConfig>;
+    /**
+     * Returns only booths that should be available for routing (excludes core and orchestrator booths).
+     * This prevents double context issues by ensuring system booths are not selectable.
+     *
+     * @returns Record of selectable booth configurations indexed by their IDs
+     */
+    getSelectableBooths(): Record<string, BoothConfig>;
     toArray(): BoothConfig[];
     /**
      * Enables multi-booth mode by registering the orchestrator and setting it as current context.
@@ -899,6 +906,12 @@ export declare class ToolExecutorPlugin implements BoothPlugin {
      * @private
      */
     private executeToolCall;
+    /**
+     * Extracts function call objects from an array of response output items.
+     *
+     * @param {ResponseOutputItem[]} output - The array of response output items to filter.
+     * @return {ResponseFunctionToolCall[]} An array containing only the function call objects from the input.
+     */
     static extractFunctionCalls(output: ResponseOutputItem[]): ResponseFunctionToolCall[];
     /**
      * After a response is received from the LLM, this hook checks for tool calls. If any are found,
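The functional change in this release is the new `BoothRegistry.getSelectableBooths()` declared above. A standalone sketch of the filtering its docblock describes (the `'core'` base-booth id and the sample booths are hypothetical, introduced here for illustration; `'orchestrator'` matches the id visible in `dist/index.js`):

```typescript
interface BoothConfig { id: string; role: string; }

// IDs excluded from routing: the registry's base booth and the orchestrator.
// 'core' is an assumed base-booth id for this example only.
const BASE_BOOTH_ID = 'core';
const ORCHESTRATOR_ID = 'orchestrator';

function getSelectableBooths(
  booths: Record<string, BoothConfig>,
): Record<string, BoothConfig> {
  const selectable: Record<string, BoothConfig> = {};
  for (const [id, config] of Object.entries(booths)) {
    // Skip system booths so routing never re-selects them (the "double
    // context" issue the docblock mentions).
    if (id !== BASE_BOOTH_ID && id !== ORCHESTRATOR_ID) selectable[id] = config;
  }
  return selectable;
}

const booths: Record<string, BoothConfig> = {
  core: { id: 'core', role: 'base booth' },
  orchestrator: { id: 'orchestrator', role: 'routes user intent' },
  pirate: { id: 'pirate', role: 'answers in pirate speak' },
};

console.log(Object.keys(getSelectableBooths(booths))); // → [ 'pirate' ]
```

In `dist/index.js` below, the routing-tool builder switches from `getAllBooths()` to this filtered view, so the orchestrator no longer offers itself or the core booth as routing targets.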
package/dist/index.js CHANGED
@@ -202,7 +202,7 @@ class y {
     return s;
   }
 }
-const h = {
+const u = {
   id: "orchestrator",
   role: `
 This booth serves as the orchestration layer that analyzes user intent and routes
@@ -315,7 +315,7 @@ class T {
     if (this.booths[t.id])
       return;
     this.booths[t.id] = t, Object.keys(this.booths).filter(
-      (e) => e !== h.id
+      (e) => e !== u.id
     ).length > 1 && !this.hasOrchestrator && this.enableMultiBoothMode();
   }
   /**
@@ -384,6 +384,18 @@ class T {
   getAllBooths() {
     return this.booths;
   }
+  /**
+   * Returns only booths that should be available for routing (excludes core and orchestrator booths).
+   * This prevents double context issues by ensuring system booths are not selectable.
+   *
+   * @returns Record of selectable booth configurations indexed by their IDs
+   */
+  getSelectableBooths() {
+    const t = {};
+    for (const [o, e] of Object.entries(this.booths))
+      o !== this.baseBooth.id && o !== u.id && (t[o] = e);
+    return t;
+  }
   toArray() {
     return Object.values(this.booths);
   }
@@ -392,14 +404,14 @@ class T {
    * @private
    */
   enableMultiBoothMode() {
-    this.hasOrchestrator || (this.booths[h.id] = h, this.hasOrchestrator = !0, this.currentContextId = h.id, this.onMultiBoothModeEnabled?.());
+    this.hasOrchestrator || (this.booths[u.id] = u, this.hasOrchestrator = !0, this.currentContextId = u.id, this.onMultiBoothModeEnabled?.());
   }
   /**
    * Disables multi-booth mode by unregistering the orchestrator and resetting context to base booth.
    * @private
    */
   disableMultiBoothMode() {
-    this.hasOrchestrator && (delete this.booths[h.id], this.hasOrchestrator = !1, this.currentContextId = this.baseBooth.id, this.onMultiBoothModeDisabled?.());
+    this.hasOrchestrator && (delete this.booths[u.id], this.hasOrchestrator = !1, this.currentContextId = this.baseBooth.id, this.onMultiBoothModeDisabled?.());
   }
   /**
    * Sets callback functions for when multi-booth mode is enabled/disabled.
@@ -413,7 +425,7 @@ class T {
   }
   get isMultiBoothMode() {
     return Object.keys(this.booths).filter(
-      (o) => o !== h.id
+      (o) => o !== u.id
     ).length > 1;
   }
   /**
@@ -425,12 +437,12 @@ class T {
   unregisterBooth(t) {
     if (!this.booths[t])
       throw new Error(`Booth with ID ${t} does not exist.`);
-    if (t === h.id)
+    if (t === u.id)
       throw new Error(
         "Cannot unregister orchestrator booth directly. It will be automatically managed based on booth count."
       );
     delete this.booths[t], Object.keys(this.booths).filter(
-      (e) => e !== h.id
+      (e) => e !== u.id
     ).length <= 1 && this.hasOrchestrator && this.disableMultiBoothMode();
   }
 }
@@ -607,7 +619,7 @@ const v = {
   description: "A specialized booth for summarizing conversation histories."
 }, m = "route_to_booth";
 function b(n) {
-  const t = n.getAllBooths(), o = Object.values(t).map(
+  const t = n.getSelectableBooths(), o = Object.values(t).map(
     (r) => `- ${r.id}: ${r.role}
 Examples:
 ${(r.examples || []).map((s) => `  - "${s}"`).join(`
@@ -989,12 +1001,12 @@ class A {
    * @returns The updated response parameters with the aggregated list of tools.
    */
   async onBeforeMessageSend(t, o) {
-    const e = t.boothRegistry.baseBoothConfig, r = t.boothRegistry.currentContextBoothConfig, l = [...e.tools || [], ...r?.tools || []].filter((u, c, g) => g.indexOf(u) === c).map(
-      (u) => t.toolRegistry.getTool(u)
+    const e = t.boothRegistry.baseBoothConfig, r = t.boothRegistry.currentContextBoothConfig, l = [...e.tools || [], ...r?.tools || []].filter((h, c, g) => g.indexOf(h) === c).map(
+      (h) => t.toolRegistry.getTool(h)
     );
     if (e.mcp && l.push(...e.mcp), r?.mcp && l.push(...r.mcp), t.boothRegistry.isMultiBoothMode) {
-      const u = b(t.boothRegistry);
-      l.push(u);
+      const h = b(t.boothRegistry);
+      l.push(h);
     }
     l.push(...t.toolRegistry.getGlobalTools());
     const d = M(l);
@@ -1071,6 +1083,12 @@ class w {
     };
   }
 }
+  /**
+   * Extracts function call objects from an array of response output items.
+   *
+   * @param {ResponseOutputItem[]} output - The array of response output items to filter.
+   * @return {ResponseFunctionToolCall[]} An array containing only the function call objects from the input.
+   */
   static extractFunctionCalls(t) {
     return t.filter(
       (o) => o.type === "function_call"
@@ -1094,12 +1112,12 @@ class w {
     const d = s[l];
     if (t.toolRegistry.isLocalTool(d.name))
       continue;
-    const u = {
+    const h = {
       responseParams: o,
       response: e,
       toolCallIndex: l,
       totalToolCalls: s.length
-    }, c = await this.executeToolCall(t, d, u);
+    }, c = await this.executeToolCall(t, d, h);
     i.push(c);
   }
   return {
package/package.json CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "booths",
   "private": false,
-  "version": "1.4.0",
+  "version": "1.4.1",
   "type": "module",
   "main": "./dist/index.js",
   "module": "./dist/index.js",