booths 0.0.3 → 0.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +133 -73
  2. package/dist/index.js +6 -1
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -4,60 +4,11 @@ The Core Booth system is a modular and extensible framework for building and man
 
  The system is designed around a central `CoreBooth` class that orchestrates interactions between users and the Large Language Model (LLM), leveraging a system of registries and plugins to manage the conversational state and capabilities.
 
- ## How It Works
-
- The Core Booth system is comprised of several key components that work together to process user input and generate contextual responses.
-
- ### 1. Registries
-
- - **`BoothRegistry`**: Manages the collection of `BoothConfig` objects. Each booth represents a specialized agent with a specific role, description, and set of tools. It also keeps track of the "current context booth" to ensure the conversation stays on topic.
- - **`ToolRegistry`**: Manages the tools that can be made available to the LLM. Tools are functions that the AI can decide to call to perform actions or retrieve information.
- - **`BoothPluginRegistry`**: Manages plugins that hook into the interaction lifecycle. This allows for modular and reusable functionality to be added to the system.
-
- ### 2. Plugins
-
- Plugins are classes that implement the `BoothPlugin` interface. They can execute logic at different stages of the conversation:
-
- - `onBeforeInteractionLoopStart`: Before the main loop begins.
- - `onBeforeMessageSend`: Before a message is sent to the LLM.
- - `onResponseReceived`: After a response is received from the LLM.
- - `onBeforeToolCall`: Before each individual tool call is executed _(allows modification of tool parameters, validation, and logging)_.
- - `onAfterToolCall`: After each individual tool call is successfully executed _(allows result processing, caching, and transformation)_.
- - `onToolCallError`: When a tool call encounters an error _(allows custom error handling and recovery)_.
- - `shouldEndInteractionLoop`: To determine if the conversation turn is over.
- - `onAfterInteractionLoopEnd`: After the main loop has finished.
-
- The system includes several core plugins by default:
-
- - `ConversationHistoryPlugin`: Maintains the history of the conversation.
- - `ContextProviderPlugin`: Provides the LLM with the context of the current booth.
- - `ToolProviderPlugin`: Provides the LLM with the available tools for the current booth.
- - `ToolExecutorPlugin`: Executes tool calls requested by the LLM with granular hook support for individual tool call interception.
- - `FinishTurnPlugin`: Determines when the LLM's turn is finished and it's waiting for user input.
-
- #### Enhanced Tool Call Management
-
- The plugin system now provides granular control over individual tool executions through three new hooks:
-
- - **`onBeforeToolCall`**: Intercept and modify tool calls before execution (parameter validation, authorization, logging)
- - **`onAfterToolCall`**: Process and transform tool results after successful execution (caching, metadata addition, data transformation)
- - **`onToolCallError`**: Handle tool execution errors with custom recovery logic (fallback responses, error logging, graceful degradation)
-
- This enables sophisticated tool management patterns like authentication, caching, audit logging, and error recovery at the individual tool level.
-
- ### 3. Interaction Processor
-
- The `InteractionProcessor` is the engine of the system. It manages the interaction loop with the LLM:
+ ## Installation
 
- 1. It takes user input.
- 2. Runs the `onBefore...` plugin hooks.
- 3. Sends the payload to the LLM.
- 4. Receives the response.
- 5. Runs the `onResponseReceived` plugin hooks to process the response (e.g., execute tools).
- 6. Repeats this loop until a plugin's `shouldEndInteractionLoop` returns `true`.
- 7. Runs the `onAfter...` plugin hooks for cleanup.
-
- ---
+ ```bash
+ npm install booths
+ ```
 
  ## Quick Start Example
 
@@ -69,7 +20,7 @@ First, define your booth configurations and any custom tools you need.
 
  ```typescript
  // in my-booths.ts
- import type { BoothConfig } from './types/booths.types'
+ import type { BoothConfig } from 'booths';
 
  export const customerSupportBooth: BoothConfig = {
    id: 'customer-support',
@@ -77,10 +28,12 @@ export const customerSupportBooth: BoothConfig = {
    description: 'You are a helpful customer support assistant for a SaaS company.',
    tools: ['get_user_account_details'],
    examples: ["I'm having trouble with my account.", 'I want to upgrade my subscription.'],
- }
+ };
+ ```
 
+ ```typescript
  // in my-tools.ts
- import type { ToolModule } from './types/tools.types'
+ import type { ToolModule } from 'booths';
 
  export const getUserAccountDetailsTool: ToolModule = {
    type: 'function',
@@ -89,47 +42,154 @@ export const getUserAccountDetailsTool: ToolModule = {
    parameters: { type: 'object', properties: {} },
    execute: async () => {
      // In a real app, you would fetch this from an API
-     return { accountId: '123', plan: 'Pro' }
+     return { accountId: '123', plan: 'Pro' };
    },
+ };
+ ```
+
+ ### 2. Implement the LLM Adapter
+
+ The `CoreBooth` requires an `LLMAdapter` to communicate with your chosen language model. Here is a minimal example of what that adapter could look like.
+
+ ```typescript
+ // in MyLLMAdapter.ts
+ import type { LLMAdapter, ResponseCreateParamsNonStreaming } from 'booths';
+
+ class MyLLMAdapter implements LLMAdapter<any> {
+   async createResponse(params: ResponseCreateParamsNonStreaming) {
+     // This is a mock implementation.
+     // In a real application, you would make a call to an LLM API (e.g., OpenAI, Anthropic).
+     console.log('Sending to LLM:', params.messages);
+
+     const lastMessage = params.messages[params.messages.length - 1];
+
+     if (lastMessage.content?.includes("check my account plan")) {
+       return {
+         id: 'fake-response-id',
+         choices: [{
+           message: {
+             role: 'assistant',
+             content: null,
+             tool_calls: [{
+               id: 'tool-call-1',
+               type: 'function',
+               function: {
+                 name: 'get_user_account_details',
+                 arguments: '{}'
+               }
+             }]
+           }
+         }]
+       };
+     }
+
+     return {
+       id: 'fake-response-id',
+       choices: [{
+         message: {
+           role: 'assistant',
+           content: "I am on the Pro plan.",
+         }
+       }]
+     };
+   }
  }
+
+ export default MyLLMAdapter;
  ```
 
- ### 2. Initialize the CoreBooth
+ ### 3. Initialize the CoreBooth
 
  Next, instantiate the registries and the `CoreBooth` itself.
 
  ```typescript
- import { CoreBooth, BoothRegistry, ToolRegistry, BoothPluginRegistry } from './index'
- import { customerSupportBooth } from './my-booths'
- import { getUserAccountDetailsTool } from './my-tools'
+ // in main.ts
+ import { CoreBooth, BoothRegistry, ToolRegistry, BoothPluginRegistry } from 'booths';
+ import { customerSupportBooth } from './my-booths';
+ import { getUserAccountDetailsTool } from './my-tools';
+ import MyLLMAdapter from './MyLLMAdapter';
 
  // 1. Create instances of the registries
- const boothRegistry = new BoothRegistry()
- const toolRegistry = new ToolRegistry()
- const boothPluginRegistry = new BoothPluginRegistry() // For custom user plugins
+ const boothRegistry = new BoothRegistry();
+ const toolRegistry = new ToolRegistry();
+ const boothPluginRegistry = new BoothPluginRegistry(); // For custom user plugins
 
  // 2. Register your booths and tools
- boothRegistry.registerBooth(customerSupportBooth)
- toolRegistry.registerTool(getUserAccountDetailsTool)
+ boothRegistry.registerBooths([customerSupportBooth]);
+ toolRegistry.registerTools([getUserAccountDetailsTool]);
 
  // 3. Set a starting context for the conversation
- boothRegistry.setCurrentContextId('customer-support')
+ boothRegistry.setCurrentContextId('customer-support');
 
- // 4. Create the CoreBooth instance
+ // 4. Create the CoreBooth instance with the adapter
  const coreBooth = new CoreBooth({
    booths: boothRegistry,
    tools: toolRegistry,
    boothPlugins: boothPluginRegistry,
- })
+   llmAdapter: new MyLLMAdapter(),
+ });
 
  // 5. Send a message and get a response
  async function haveConversation() {
-   const userInput = 'Can you check my account plan?'
-   const response = await coreBooth.callProcessor.send(userInput)
+   const userInput = 'Can you check my account plan?';
+   const response = await coreBooth.callProcessor.send(userInput);
 
-   console.log(response.output_text)
+   console.log(response.output_text);
    // Expected output might be something like: "You are currently on the Pro plan."
  }
 
- haveConversation()
+ haveConversation();
  ```
+
+ ## How It Works
+
+ The Core Booth system is comprised of several key components that work together to process user input and generate contextual responses.
+
+ ### 1. Registries
+
+ - **`BoothRegistry`**: Manages the collection of `BoothConfig` objects. Each booth represents a specialized agent with a specific role, description, and set of tools. It also keeps track of the "current context booth" to ensure the conversation stays on topic.
+ - **`ToolRegistry`**: Manages the tools that can be made available to the LLM. Tools are functions that the AI can decide to call to perform actions or retrieve information.
+ - **`BoothPluginRegistry`**: Manages plugins that hook into the interaction lifecycle. This allows for modular and reusable functionality to be added to the system.
+
+ ### 2. Plugins
+
+ Plugins are classes that implement the `BoothPlugin` interface. They can execute logic at different stages of the conversation:
+
+ - `onBeforeInteractionLoopStart`: Before the main loop begins.
+ - `onBeforeMessageSend`: Before a message is sent to the LLM.
+ - `onResponseReceived`: After a response is received from the LLM.
+ - `onBeforeToolCall`: Before each individual tool call is executed _(allows modification of tool parameters, validation, and logging)_.
+ - `onAfterToolCall`: After each individual tool call is successfully executed _(allows result processing, caching, and transformation)_.
+ - `onToolCallError`: When a tool call encounters an error _(allows custom error handling and recovery)_.
+ - `shouldEndInteractionLoop`: To determine if the conversation turn is over.
+ - `onAfterInteractionLoopEnd`: After the main loop has finished.
+
+ The system includes several core plugins by default:
+
+ - `ConversationHistoryPlugin`: Maintains the history of the conversation.
+ - `ContextProviderPlugin`: Provides the LLM with the context of the current booth.
+ - `ToolProviderPlugin`: Provides the LLM with the available tools for the current booth.
+ - `ToolExecutorPlugin`: Executes tool calls requested by the LLM with granular hook support for individual tool call interception.
+ - `FinishTurnPlugin`: Determines when the LLM's turn is finished and it's waiting for user input.
+
+ #### Enhanced Tool Call Management
+
+ The plugin system now provides granular control over individual tool executions through three new hooks:
+
+ - **`onBeforeToolCall`**: Intercept and modify tool calls before execution (parameter validation, authorization, logging)
+ - **`onAfterToolCall`**: Process and transform tool results after successful execution (caching, metadata addition, data transformation)
+ - **`onToolCallError`**: Handle tool execution errors with custom recovery logic (fallback responses, error logging, graceful degradation)
+
+ This enables sophisticated tool management patterns like authentication, caching, audit logging, and error recovery at the individual tool level.
+
+ ### 3. Interaction Processor
+
+ The `InteractionProcessor` is the engine of the system. It manages the interaction loop with the LLM:
+
+ 1. It takes user input.
+ 2. Runs the `onBefore...` plugin hooks.
+ 3. Sends the payload to the LLM.
+ 4. Receives the response.
+ 5. Runs the `onResponseReceived` plugin hooks to process the response (e.g., execute tools).
+ 6. Repeats this loop until a plugin's `shouldEndInteractionLoop` returns `true`.
+ 7. Runs the `onAfter...` plugin hooks for cleanup.
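The seven-step interaction loop the README describes can be sketched roughly as follows. The hook names (`onBeforeMessageSend`, `onResponseReceived`, `shouldEndInteractionLoop`) come from the README itself; the types and control flow are illustrative assumptions, not the package's actual `InteractionProcessor` implementation.

```typescript
// Illustrative sketch of the interaction loop; hook names follow the README,
// everything else is assumed for demonstration.
type Message = { role: string; content: string };

interface BoothPlugin {
  onBeforeMessageSend?(messages: Message[]): Message[];
  onResponseReceived?(response: Message): void;
  shouldEndInteractionLoop?(response: Message): boolean;
}

async function runInteractionLoop(
  userInput: string,
  plugins: BoothPlugin[],
  callLLM: (messages: Message[]) => Promise<Message>,
): Promise<Message> {
  // 1. Take user input.
  const messages: Message[] = [{ role: 'user', content: userInput }];
  let response: Message;
  do {
    // 2. Run the onBefore... hooks (each may rewrite the payload).
    let payload = messages;
    for (const p of plugins) {
      if (p.onBeforeMessageSend) payload = p.onBeforeMessageSend(payload);
    }
    // 3-4. Send the payload to the LLM and receive the response.
    response = await callLLM(payload);
    // 5. Let plugins process the response (e.g. execute tools).
    for (const p of plugins) p.onResponseReceived?.(response);
    messages.push(response);
    // 6. Repeat until some plugin says the turn is over.
  } while (!plugins.some((p) => p.shouldEndInteractionLoop?.(response) === true));
  // 7. (Cleanup hooks such as onAfterInteractionLoopEnd would run here.)
  return response;
}
```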
package/dist/index.js CHANGED
@@ -827,7 +827,12 @@ class R {
  return e += `
 
 
- Once a user response is expected, at the end of your message, add "__awaiting_user_response__" to your message.`, {
+
+ [MUST]
+ - Add the marker "__awaiting_user_response__" to the end of your response when you expect a user response.
+ - This marker indicates that the interaction loop should end and the system should wait for user input
+ - If the marker is not present, the system will continue to process the response as usual.
+ `, {
  ...o,
  instructions: e
  };
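The rewritten instruction above turns the `__awaiting_user_response__` marker into an explicit loop-termination signal. A minimal check for it might look like the sketch below; the function names are hypothetical, since the package's actual `FinishTurnPlugin` logic ships minified in `dist/index.js`.

```typescript
// Hypothetical helpers mirroring the marker contract described in the new
// instructions; not the package's actual (minified) implementation.
const AWAITING_MARKER = '__awaiting_user_response__';

// End the interaction loop when the model signals it is waiting for the user.
function shouldEndTurn(responseText: string): boolean {
  return responseText.includes(AWAITING_MARKER);
}

// Strip the marker before displaying the assistant's text to the user.
function stripMarker(responseText: string): string {
  return responseText.replace(AWAITING_MARKER, '').trimEnd();
}

console.log(shouldEndTurn('Anything else I can help with? ' + AWAITING_MARKER)); // true
console.log(stripMarker('Anything else I can help with? ' + AWAITING_MARKER)); // "Anything else I can help with?"
```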
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "booths",
    "private": false,
-   "version": "0.0.3",
+   "version": "0.0.6",
    "type": "module",
    "main": "./dist/index.js",
    "module": "./dist/index.js",