booths 1.3.0 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -42,7 +42,7 @@ graph TD
 1. **Application Layer**: Your application integrates the Booths framework to handle conversational AI interactions.
 2. **`CoreBooth`**: The framework foundation that provides global functionality, instructions, and infrastructure that applies to all booths. It manages the overall system configuration and coordinates the interaction flow.
 3. **`InteractionProcessor`**: The engine that drives the conversation. It takes user input, runs it through the plugin lifecycle, sends it to the LLM (via the adapter), and processes the response.
- 4. **`LLMAdapter`**: A component that handles communication with the specific LLM provider (e.g., OpenAI). It translates requests and responses between the Booths system and the LLM's API. Supports both traditional and streaming response modes.
+ 4. **`LLMAdapter`**: A component that handles communication with the specific LLM provider (e.g., OpenAI). It translates requests and responses between the Booths system and the LLM's API.
 5. **Registries**: These are responsible for managing the different components of the system:
    * `BoothRegistry`: Manages `BoothConfig` objects that define the behavior of different AI agents.
    * `ToolRegistry`: Manages the tools (functions) that booths can use.
@@ -124,13 +124,7 @@ The `CoreBooth` requires an `LLMAdapter` to communicate with your chosen languag

 ```typescript
 // in OpenAIAdapter.ts
- import type {
-   LLMAdapter,
-   ResponseCreateParamsNonStreaming,
-   ResponseCreateParamsStreaming,
-   Response,
-   StreamEvent
- } from 'booths';
+ import type { LLMAdapter, ResponseCreateParamsNonStreaming, Response } from 'booths';
 import OpenAI from 'openai';

 export class OpenAIAdapter implements LLMAdapter<Response> {
@@ -147,24 +141,6 @@ export class OpenAIAdapter implements LLMAdapter<Response> {
   async interpret(response: Response): Promise<Response> {
     return response;
   }
-
-   // Optional: Add streaming support
-   async *invokeStream(params: ResponseCreateParamsStreaming): AsyncIterable<Response> {
-     const stream = this.openai.responses.create({ ...params, model: 'gpt-4o', stream: true });
-     for await (const chunk of stream) {
-       yield chunk;
-     }
-   }
-
-   async interpretStream(chunk: Response): Promise<StreamEvent> {
-     // Convert OpenAI stream chunks to StreamEvents
-     // Implementation depends on your streaming format
-     return {
-       type: 'text_delta',
-       delta: chunk.choices?.[0]?.delta?.content || '',
-       content: chunk.choices?.[0]?.delta?.content || ''
-     };
-   }
 }
 ```
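Putting the surviving lines together, a complete 1.4.0-shaped adapter might look like the sketch below. The `invoke` body is not visible in this hunk, so everything inside it — the OpenAI Responses API call, the hard-coded `gpt-4o` model, and the second `RepositoryUtilities` argument that the 1.4.0 typings add to `invoke` — is an assumption, not the package's own code:

```typescript
import type {
  LLMAdapter,
  RepositoryUtilities,
  Response,
  ResponseCreateParamsNonStreaming
} from 'booths';
import OpenAI from 'openai';

export class OpenAIAdapter implements LLMAdapter<Response> {
  private openai: OpenAI;

  constructor(apiKey: string) {
    this.openai = new OpenAI({ apiKey });
  }

  // 1.4.0 passes RepositoryUtilities as a second argument; this sketch
  // simply ignores it. The model name is illustrative.
  async invoke(
    params: ResponseCreateParamsNonStreaming,
    _utilities: RepositoryUtilities
  ): Promise<Response> {
    return this.openai.responses.create({ ...params, model: 'gpt-4o' });
  }

  async interpret(response: Response): Promise<Response> {
    return response;
  }
}
```

Note that the typings later in this diff also drop the `Response` re-export, so the `Response` import may need to come from the openai SDK instead.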
@@ -185,6 +161,11 @@ const llmAdapter = new OpenAIAdapter('your-openai-api-key');
 // 2. Create the CoreBooth instance
 const coreBooth = createCoreBooth(llmAdapter, pirateBooth);

+ // Optional: Customize the end interaction loop marker
+ // const coreBooth = createCoreBooth(llmAdapter, pirateBooth, {
+ //   endInteractionLoopMarker: '__custom_marker__'
+ // });
+
 // 3. Register the tool (this step will be improved in future versions)
 coreBooth.toolRegistry.registerTools([tellPirateJokeTool]);

@@ -219,7 +200,6 @@ Plugins are classes that implement the `BoothPlugin` interface. They can execute
 - `onBeforeToolCall`: Before each individual tool call is executed _(allows modification of tool parameters, validation, and logging)_.
 - `onAfterToolCall`: After each individual tool call is successfully executed _(allows result processing, caching, and transformation)_.
 - `onToolCallError`: When a tool call encounters an error _(allows custom error handling and recovery)_.
- - `onStreamEvent`: _(Optional)_ During streaming response generation, called for each stream event _(enables real-time processing and UI updates)_.
 - `shouldEndInteractionLoop`: To determine if the conversation turn is over.
 - `onAfterInteractionLoopEnd`: After the main loop has finished.

@@ -229,7 +209,7 @@ The system includes several core plugins by default:
 - `ContextProviderPlugin`: Provides the LLM with the context of the current booth.
 - `ToolProviderPlugin`: Provides the LLM with the available tools for the current booth.
 - `ToolExecutorPlugin`: Executes tool calls requested by the LLM with granular hook support for individual tool call interception.
- - `FinishTurnPlugin`: Determines when the LLM's turn is finished and it's waiting for user input.
+ - `FinishTurnPlugin`: Determines when the LLM's turn is finished and it's waiting for user input. The marker used to detect the end of a turn can be customized via the `endInteractionLoopMarker` option (defaults to `__awaiting_user_response__`); see the sketch below.
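
A minimal sketch of that option in use. `createCoreBooth`, the option name, and the `__custom_marker__` value all appear in the hunks above; `llmAdapter` and `pirateBooth` are assumed from the quick-start example:

```typescript
import { createCoreBooth } from 'booths';

// `llmAdapter` and `pirateBooth` as set up in the quick-start example above.
// With this option, FinishTurnPlugin watches for the custom marker instead
// of the default `__awaiting_user_response__`.
const coreBooth = createCoreBooth(llmAdapter, pirateBooth, {
  endInteractionLoopMarker: '__custom_marker__'
});
```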

 #### Enhanced Tool Call Management

@@ -252,245 +232,3 @@ The `InteractionProcessor` is the engine of the system. It manages the interacti
 5. Runs the `onResponseReceived` plugin hooks to process the response (e.g., execute tools).
 6. Repeats this loop until a plugin's `shouldEndInteractionLoop` returns `true`.
 7. Runs the `onAfter...` plugin hooks for cleanup.
-
- ## Streaming Support
-
- The Booths framework includes comprehensive streaming support that enables real-time response generation while preserving the full plugin ecosystem and backward compatibility.
-
- ### Overview
-
- Streaming allows the LLM's response to be processed and displayed in real-time as it's being generated, providing a more responsive user experience. The framework handles streaming at multiple levels:
-
- - **Real-time Events**: Stream events are emitted as content arrives
- - **Plugin Integration**: Plugins can hook into streaming events for real-time processing
- - **Complete Responses**: Existing plugins continue to receive complete responses
- - **Automatic Fallback**: Graceful fallback to non-streaming if streaming fails
-
- ### Enabling Streaming
-
- Streaming can be enabled simply by setting a boolean flag when creating the `InteractionProcessor`:
-
- ```typescript
- import { InteractionProcessor, type InteractionProcessorOptions } from 'booths';
-
- const options: InteractionProcessorOptions = {
-   streaming: true, // Enable streaming
-   fallbackToNonStreaming: true // Optional: fallback if streaming fails
- };
-
- const processor = new InteractionProcessor(
-   boothRegistry,
-   pluginRegistry,
-   toolRegistry,
-   llmAdapter, // Must implement streaming methods
-   options
- );
- ```
-
- ### Stream Events
-
- The streaming system emits different types of events as the response is generated:
-
- ```typescript
- export interface StreamEvent {
-   type: 'text_delta' | 'tool_call_start' | 'tool_call_end' | 'response_start' | 'response_end';
-   content?: string; // Full content for text events
-   delta?: string; // Incremental text for text_delta events
-   toolCall?: object; // Tool call information
-   metadata?: any; // Additional event metadata
- }
- ```
-
- **Event Types:**
- - `response_start`: Streaming begins
- - `text_delta`: Incremental text content arrives
- - `tool_call_start`: LLM begins a tool call
- - `tool_call_end`: Tool call completes
- - `response_end`: Streaming completes
-
- ### Streaming Plugin Hooks
-
- Plugins can implement the optional `onStreamEvent` hook to process stream events in real-time:
-
- ```typescript
- import type { BoothPlugin, StreamEvent, StreamContext, RepositoryUtilities } from 'booths';
-
- export class MyStreamingPlugin implements BoothPlugin {
-   id = 'my-streaming-plugin';
-   name = 'My Streaming Plugin';
-   description = 'Handles streaming events';
-
-   async onStreamEvent(
-     utilities: RepositoryUtilities,
-     streamEvent: StreamEvent,
-     context: StreamContext
-   ): Promise<StreamEvent> {
-     // Process the stream event
-     if (streamEvent.type === 'text_delta') {
-       console.log(`Received text: ${streamEvent.delta}`);
-
-       // Optionally transform the event
-       return {
-         ...streamEvent,
-         delta: streamEvent.delta?.toUpperCase() // Example transformation
-       };
-     }
-
-     return streamEvent; // Pass through unchanged
-   }
-
-   async shouldEndInteractionLoop(): Promise<boolean> {
-     return false;
-   }
- }
- ```
-
- ### Built-in Streaming Plugins
-
- The framework includes example streaming plugins:
-
- #### StreamingLoggerPlugin
-
- Logs streaming events in real-time for debugging and monitoring:
-
- ```typescript
- import { StreamingLoggerPlugin } from 'booths';
-
- const logger = new StreamingLoggerPlugin('[MyApp]');
- pluginRegistry.registerPlugins([logger]);
- ```
-
- #### StreamingUIPlugin
-
- Provides real-time UI updates with customizable callbacks:
-
- ```typescript
- import { StreamingUIPlugin } from 'booths';
-
- const uiPlugin = new StreamingUIPlugin((event, context) => {
-   if (event.type === 'text_delta') {
-     // Update your UI with the new text
-     document.getElementById('response').textContent += event.delta;
-   }
- });
-
- pluginRegistry.registerPlugins([uiPlugin]);
- ```
-
- ### LLM Adapter Streaming Implementation
-
- To support streaming, your LLM adapter should implement the optional streaming methods:
-
- ```typescript
- export class MyStreamingAdapter implements LLMAdapter<MyResponse> {
-   // Required methods
-   async invoke(params: ResponseCreateParamsNonStreaming): Promise<MyResponse> {
-     // Non-streaming implementation
-   }
-
-   async interpret(response: MyResponse): Promise<Response> {
-     // Convert to standard format
-   }
-
-   // Optional streaming methods
-   async *invokeStream(params: ResponseCreateParamsStreaming): AsyncIterable<MyResponse> {
-     // Yield streaming chunks
-     const stream = await this.llm.createStreamingResponse(params);
-     for await (const chunk of stream) {
-       yield chunk;
-     }
-   }
-
-   async interpretStream(chunk: MyResponse): Promise<StreamEvent> {
-     // Convert chunk to StreamEvent
-     return {
-       type: 'text_delta',
-       delta: chunk.delta,
-       content: chunk.content
-     };
-   }
- }
- ```
-
- ### Stream Context
-
- Plugins receive context information about the streaming session:
-
- ```typescript
- export interface StreamContext {
-   responseParams: ResponseCreateParamsNonStreaming; // Original request
-   streamIndex: number; // Event index in stream
-   totalExpectedEvents?: number; // Expected total (if known)
-   accumulatedResponse: Partial<Response>; // Response built so far
- }
- ```
-
- ### Error Handling
-
- The streaming system includes robust error handling:
-
- - **Plugin Error Isolation**: Errors in streaming plugins don't break the stream
- - **Automatic Fallback**: Can fallback to non-streaming mode on errors
- - **Graceful Degradation**: System continues operating if streaming fails
-
- ### Backward Compatibility
-
- Streaming support is fully backward compatible:
-
- - **Existing Plugins**: Continue to work unchanged
- - **Complete Responses**: Plugins still receive full `Response` objects
- - **Optional Implementation**: Adapters don't require streaming support
- - **Default Behavior**: Non-streaming mode by default
-
- ### Example: Complete Streaming Setup
-
- Here's a complete example showing streaming integration:
-
- ```typescript
- import {
-   InteractionProcessor,
-   BoothRegistry,
-   BoothPluginRegistry,
-   ToolRegistry,
-   StreamingLoggerPlugin,
-   StreamingUIPlugin,
-   type InteractionProcessorOptions
- } from 'booths';
-
- // 1. Create streaming-enabled adapter (implement streaming methods)
- const streamingAdapter = new MyStreamingLLMAdapter(apiKey);
-
- // 2. Set up registries and booth
- const testBooth = { id: 'chat-booth', role: 'Assistant', description: 'Helpful assistant' };
- const boothRegistry = new BoothRegistry(testBooth);
- const pluginRegistry = new BoothPluginRegistry();
- const toolRegistry = new ToolRegistry();
-
- // 3. Set up streaming plugins
- const logger = new StreamingLoggerPlugin('[Chat]');
- const uiUpdater = new StreamingUIPlugin((event) => {
-   if (event.type === 'text_delta') {
-     document.getElementById('chat').textContent += event.delta;
-   }
- });
-
- pluginRegistry.registerPlugins([logger, uiUpdater]);
-
- // 4. Enable streaming
- const streamingOptions: InteractionProcessorOptions = {
-   streaming: true,
-   fallbackToNonStreaming: true
- };
-
- const processor = new InteractionProcessor(
-   boothRegistry,
-   pluginRegistry,
-   toolRegistry,
-   streamingAdapter,
-   streamingOptions
- );
-
- // 5. Send message with real-time streaming
- const response = await processor.send('Hello, stream this response!');
- // User sees content appear in real-time, plugins receive complete response
- ```
package/dist/index.d.ts CHANGED
@@ -120,9 +120,9 @@ export declare interface BoothPlugin {
      * @param toolCall - The tool call that was executed.
      * @param result - The result returned by the tool execution.
      * @param context - Context information about the tool call execution.
-     * @returns The potentially modified tool call result.
+     * @returns The potentially modified tool call result, or the original result if unchanged.
      */
-    onAfterToolCall?: (utilities: RepositoryUtilities, toolCall: ResponseFunctionToolCall, result: unknown, context: ToolCallContext) => Promise<unknown>;
+    onAfterToolCall?: (utilities: RepositoryUtilities, toolCall: ResponseFunctionToolCall, result: unknown, context: ToolCallContext) => Promise<typeof result>;
     /**
      * Called when an individual tool call encounters an error during execution.
      * This allows for custom error handling or recovery.
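The signature change above tightens the hook's contract: whatever the hook returns is used as the (possibly modified) tool result. A hedged sketch of a plugin using it — the plugin shape follows the `BoothPlugin` examples removed elsewhere in this diff, and importing `ResponseFunctionToolCall` and `ToolCallContext` from 'booths' is an assumption about the package's exports:

```typescript
import type {
  BoothPlugin,
  RepositoryUtilities,
  ResponseFunctionToolCall,
  ToolCallContext
} from 'booths';

// Hypothetical plugin: normalizes string tool results and returns everything
// else untouched (the hook must always return a result).
export class ResultTrimmerPlugin implements BoothPlugin {
  id = 'result-trimmer';
  name = 'Result Trimmer Plugin';
  description = 'Trims whitespace from string tool results';

  async onAfterToolCall(
    _utilities: RepositoryUtilities,
    _toolCall: ResponseFunctionToolCall,
    result: unknown,
    _context: ToolCallContext
  ): Promise<unknown> {
    return typeof result === 'string' ? result.trim() : result;
  }

  // Required by BoothPlugin, per the examples removed in this diff; this
  // plugin never ends the loop itself.
  async shouldEndInteractionLoop(): Promise<boolean> {
    return false;
  }
}
```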
@@ -150,15 +150,6 @@ export declare interface BoothPlugin {
      * @returns The potentially modified final response.
      */
     onAfterInteractionLoopEnd?: (interactionLoopEndArgs: RepositoryUtilities, response: Response_2) => Promise<Response_2>;
-    /**
-     * Called for each streaming event as it arrives during response generation.
-     * This is optional and only called when streaming is enabled.
-     * @param utilities - Utilities for accessing repositories.
-     * @param streamEvent - The streaming event that was received.
-     * @param context - Context information about the streaming session.
-     * @returns The potentially modified stream event, or void to pass through unchanged.
-     */
-    onStreamEvent?: (utilities: RepositoryUtilities, streamEvent: StreamEvent, context: StreamContext) => Promise<StreamEvent | void>;
 }

 /**
@@ -302,17 +293,6 @@ export declare class BoothPluginRegistry {
      * @returns Error result or recovery value after all plugins have processed it
      */
     runToolCallError(utilities: RepositoryUtilities, toolCall: ResponseFunctionToolCall, error: Error, context: ToolCallContext): Promise<any>;
-    /**
-     * Sequentially invokes every plugin's onStreamEvent hook.
-     * This is called for each streaming event during response generation,
-     * allowing plugins to process or modify stream events in real-time.
-     *
-     * @param utilities - Context information including booth and tool registries
-     * @param streamEvent - The streaming event that was received
-     * @param context - Context information about the streaming session
-     * @returns Modified stream event after all plugins have processed it
-     */
-    runStreamEvent(utilities: RepositoryUtilities, streamEvent: StreamEvent, context: StreamContext): Promise<StreamEvent>;
 }

 /**
@@ -695,6 +675,7 @@ export declare class CoreBooth<T> {
         tools?: ToolRegistry;
         llmAdapter: LLMAdapter<T>;
         sessionHistory?: ResponseInput;
+        endInteractionLoopMarker?: string;
     });
 }

@@ -724,12 +705,14 @@ export declare function createRouteToBoothTool(boothRegistry: BoothRegistry): To
  * in the LLM's response. It also cleans up this marker before the final output is returned.
  */
 export declare class FinishTurnPlugin implements BoothPlugin {
+    private marker;
     description: string;
     id: string;
     name: string;
+    constructor(marker?: string);
     /**
      * Before sending a message, this hook adds an instruction to the LLM to include a
-     * specific marker (`__awaiting_user_response__`) when it expects a user response.
+     * specific marker when it expects a user response.
      * @param _ - Unused repository utilities.
      * @param responseParams - The parameters for the response creation.
      * @returns The updated response parameters with the added instruction.
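Given the new constructor, the plugin can also be wired up by hand; a sketch, with the registration call borrowed from the README examples elsewhere in this diff:

```typescript
import { BoothPluginRegistry, FinishTurnPlugin } from 'booths';

// Omitting the argument keeps the default `__awaiting_user_response__` marker.
const finishTurn = new FinishTurnPlugin('__custom_marker__');

const pluginRegistry = new BoothPluginRegistry();
pluginRegistry.registerPlugins([finishTurn]);
```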
@@ -759,7 +742,7 @@ export declare class FinishTurnPlugin implements BoothPlugin {
     }>;
     /**
      * Determines whether the interaction loop should end by checking for the presence of the
-     * `__awaiting_user_response__` marker in the response text or if there's an error response.
+     * configured marker in the response text or if there's an error response.
      * @param _ - Unused repository utilities.
      * @param __ - Unused response parameters.
      * @param response - The response from the LLM.
@@ -767,7 +750,7 @@ export declare class FinishTurnPlugin implements BoothPlugin {
      */
     shouldEndInteractionLoop(_: RepositoryUtilities, __: ResponseCreateParamsNonStreaming, response: Response_2): Promise<boolean>;
     /**
-     * After the interaction loop ends, this hook removes the `__awaiting_user_response__` marker
+     * After the interaction loop ends, this hook removes the configured marker
      * from the final response before it is returned.
      * @param _ - Unused repository utilities.
      * @param response - The final response from the LLM.
@@ -777,30 +760,19 @@ export declare class FinishTurnPlugin implements BoothPlugin {
 }

 /**
- * The InteractionProcessor class orchestrates the conversation with the LLM,
- * managing the interaction loop, plugin execution, and message passing.
+ * A class responsible for processing interactions with a large language model (LLM)
+ * by delegating tasks to a plugin-enabled architecture.
+ * This class manages the process of sending messages to an LLM,
+ * invoking plugins, handling errors, and iterating through responses.
+ *
+ * @template T The type used in the LLMAdapter for communicating with the LLM.
  */
 export declare class InteractionProcessor<T> {
     private boothRegistry;
     private boothPlugins;
     private toolRegistry;
     private llmAdapter;
-    /**
-     * Generates a consistent ID for responses and messages.
-     * @param prefix - The prefix for the ID (e.g., 'stream', 'error', 'msg')
-     * @returns A unique ID string
-     * @private
-     */
-    private generateId;
-    /**
-     * Creates a standardized message object for responses.
-     * @param text - The text content for the message
-     * @returns A formatted message object
-     * @private
-     */
-    private createMessage;
     private loopLimit;
-    private options;
     /**
      * Creates a synthetic error response with proper structure and error details.
      * @param error - The error that occurred
@@ -810,42 +782,13 @@ export declare class InteractionProcessor<T> {
      */
     private createErrorResponse;
     /**
-     * Calls the LLM with the given parameters.
-     * @param responseCreateParams - The parameters for creating the response.
-     * @returns A promise that resolves with the LLM's response.
-     * @private
+     * Calls the LLM adapter to invoke and interpret the response for the given parameters.
+     *
+     * @param {ResponseCreateParamsNonStreaming} responseCreateParams - The parameters for creating the response.
+     * @param {RepositoryUtilities} prepareInitialMessagesArgs - The arguments required to prepare initial messages.
+     * @return {Promise<Response>} A promise that resolves to the interpreted response or an error response in case of failure.
      */
     private callLLM;
-    /**
-     * Calls the LLM in non-streaming mode.
-     * @param responseCreateParams - The parameters for creating the response.
-     * @returns A promise that resolves with the LLM's response.
-     * @private
-     */
-    private callLLMNonStreaming;
-    /**
-     * Calls the LLM in streaming mode, accumulating stream events into a complete response.
-     * @param responseCreateParams - The parameters for creating the response.
-     * @returns A promise that resolves with the accumulated response.
-     * @private
-     */
-    private callLLMStreaming;
-    /**
-     * Merges a stream event into the accumulated response.
-     * @param accumulated - The current accumulated response.
-     * @param streamEvent - The stream event to merge.
-     * @returns The updated accumulated response.
-     * @private
-     */
-    private mergeStreamEvent;
-    /**
-     * Creates a complete Response object from accumulated stream data.
-     * @param accumulated - The accumulated response data.
-     * @param originalParams - The original request parameters.
-     * @returns A complete Response object.
-     * @private
-     */
-    private finalizeAccumulatedResponse;
     /**
      * Runs the main interaction loop, sending messages to the LLM and processing
      * the responses through the registered plugins.
@@ -860,9 +803,8 @@ export declare class InteractionProcessor<T> {
      * @param boothPlugins - The registry for booth plugins.
      * @param toolRegistry - The registry for available tools.
      * @param llmAdapter - The adapter for interacting with the LLM.
-     * @param options - Configuration options for streaming and other behaviors.
      */
-    constructor(boothRegistry: BoothRegistry, boothPlugins: BoothPluginRegistry, toolRegistry: ToolRegistry, llmAdapter: LLMAdapter<T>, options?: InteractionProcessorOptions);
+    constructor(boothRegistry: BoothRegistry, boothPlugins: BoothPluginRegistry, toolRegistry: ToolRegistry, llmAdapter: LLMAdapter<T>);
     /**
      * Sends a message to the LLM and processes the response through the interaction loop.
      * This involves running pre-loop, pre-send, response-received, and post-loop plugin hooks.
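A sketch of wiring the processor against the 1.4.0 constructor. The booth shape, registry setup, and the `send` call are carried over from the README example removed above, and `llmAdapter` is assumed to be any `LLMAdapter` implementation (such as the adapter sketched earlier):

```typescript
import {
  BoothPluginRegistry,
  BoothRegistry,
  InteractionProcessor,
  ToolRegistry
} from 'booths';

// Booth and registry setup as in the removed README example.
const chatBooth = { id: 'chat-booth', role: 'Assistant', description: 'Helpful assistant' };
const boothRegistry = new BoothRegistry(chatBooth);
const pluginRegistry = new BoothPluginRegistry();
const toolRegistry = new ToolRegistry();

// 1.4.0 drops the trailing options argument; four collaborators remain.
const processor = new InteractionProcessor(
  boothRegistry,
  pluginRegistry,
  toolRegistry,
  llmAdapter
);

const response = await processor.send('Hello!');
```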
@@ -873,22 +815,14 @@ export declare class InteractionProcessor<T> {
 }

 /**
- * Configuration options for the InteractionProcessor.
+ * Interface representing a Large Language Model (LLM) Adapter that provides methods
+ * to interact with and interpret responses from an LLM.
+ *
+ * @template LLMResponse The type of the raw response returned by the LLM. Defaults to `any`.
  */
- export declare interface InteractionProcessorOptions {
-     /** Enable streaming mode for LLM responses */
-     streaming?: boolean;
-     /** Fallback to non-streaming if streaming fails */
-     fallbackToNonStreaming?: boolean;
- }
-
 export declare interface LLMAdapter<LLMResponse = any> {
-    invoke: (responseParams: ResponseCreateParamsNonStreaming) => Promise<LLMResponse>;
+    invoke: (responseParams: ResponseCreateParamsNonStreaming, prepareInitialMessagesArgs: RepositoryUtilities) => Promise<LLMResponse>;
     interpret: (response: LLMResponse) => Promise<Response_2>;
-    /** Optional method for streaming LLM responses */
-    invokeStream?: (responseParams: ResponseCreateParamsStreaming) => AsyncIterable<LLMResponse>;
-    /** Optional method for interpreting individual stream chunks into StreamEvents */
-    interpretStream?: (streamChunk: LLMResponse) => Promise<StreamEvent>;
 }

 /**
@@ -912,24 +846,8 @@ export declare type RepositoryUtilities = {
     llmAdapter: LLMAdapter<unknown>;
 };

- export { Response_2 as Response }
-
 export { ResponseCreateParamsNonStreaming }

- /**
-  * Response parameters for streaming requests.
-  * This creates a new type that has all the properties of ResponseCreateParamsNonStreaming
-  * but with stream: true instead of stream: false.
-  */
- export declare type ResponseCreateParamsStreaming = Omit<ResponseCreateParamsNonStreaming, 'stream'> & {
-     /** Must be true for streaming requests */
-     stream: true;
- };
-
- export { ResponseInput }
-
- export { ResponseInputItem }
-
 /**
  * Represents the result of processing a single tool call.
  */
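This hunk also drops the `Response`, `ResponseInput`, and `ResponseInputItem` re-exports, so adapter code that imported them from 'booths' needs a new source. A sketch of the likely replacement, assuming — as the `import OpenAI from 'openai'` usage in the README suggests — that these types originate in the openai SDK's Responses API:

```typescript
// Assumed import path; verify against the openai SDK version in use.
import type {
  Response,
  ResponseInput,
  ResponseInputItem
} from 'openai/resources/responses/responses';
```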
@@ -950,93 +868,6 @@ export declare type SingleToolProcessingResult = {
     toolExecuted: boolean;
 };

- /**
-  * Context information provided during streaming event processing.
-  */
- export declare interface StreamContext {
-     /** The current response parameters being processed */
-     responseParams: ResponseCreateParamsNonStreaming;
-     /** Index of this stream event in the sequence */
-     streamIndex: number;
-     /** Total expected number of events (if known) */
-     totalExpectedEvents?: number;
-     /** Accumulated response content so far */
-     accumulatedResponse: Partial<Response_2>;
- }
-
- /**
-  * Represents a streaming event emitted during LLM response generation.
-  */
- export declare interface StreamEvent {
-     /** Type of stream event */
-     type: 'text_delta' | 'tool_call_start' | 'tool_call_end' | 'response_start' | 'response_end';
-     /** Text content for text_delta events */
-     content?: string;
-     /** Incremental text delta for text_delta events */
-     delta?: string;
-     /** Tool call information for tool-related events */
-     toolCall?: ResponseFunctionToolCall;
-     /** Additional metadata for the event */
-     metadata?: Record<string, unknown>;
- }
-
- /**
-  * Callback function type for handling stream events in the UI.
-  */
- export declare type StreamEventCallback = (event: StreamEvent, context: StreamContext) => void;
-
- /**
-  * Example streaming plugin that logs stream events in real-time.
-  * This demonstrates how to implement streaming hooks in plugins.
-  */
- export declare class StreamingLoggerPlugin implements BoothPlugin {
-     readonly id = "streaming-logger";
-     readonly name = "Streaming Logger Plugin";
-     readonly description = "Logs streaming events in real-time for debugging and monitoring";
-     private logPrefix;
-     constructor(logPrefix?: string);
-     /**
-      * Handle individual stream events as they arrive.
-      * This allows for real-time processing and logging of streaming content.
-      */
-     onStreamEvent(_utilities: RepositoryUtilities, streamEvent: StreamEvent, context: StreamContext): Promise<StreamEvent>;
-     /**
-      * Required method - determines whether to end the interaction loop.
-      * For a logging plugin, we never want to end the loop ourselves.
-      */
-     shouldEndInteractionLoop(): Promise<boolean>;
- }
-
- /**
-  * Example streaming plugin that provides real-time UI updates.
-  * This plugin demonstrates how to emit stream events to the UI layer.
-  */
- export declare class StreamingUIPlugin implements BoothPlugin {
-     readonly id = "streaming-ui";
-     readonly name = "Streaming UI Plugin";
-     readonly description = "Provides real-time UI updates during streaming responses";
-     private onStreamCallback?;
-     constructor(onStreamCallback?: StreamEventCallback);
-     /**
-      * Handle individual stream events and emit them to the UI layer.
-      * This enables real-time updates to the user interface.
-      */
-     onStreamEvent(_utilities: RepositoryUtilities, streamEvent: StreamEvent, context: StreamContext): Promise<StreamEvent>;
-     /**
-      * Set or update the stream callback for UI updates.
-      */
-     setStreamCallback(callback: StreamEventCallback): void;
-     /**
-      * Remove the stream callback.
-      */
-     removeStreamCallback(): void;
-     /**
-      * Required method - determines whether to end the interaction loop.
-      * For a UI plugin, we never want to end the loop ourselves.
-      */
-     shouldEndInteractionLoop(): Promise<boolean>;
- }
-
 /**
  * Context information provided during tool call execution.
  */