@minded-ai/mindedjs 2.0.36 → 2.0.38

@@ -1,59 +1,65 @@
  # Data Extraction
 
- MindedJS provides a powerful AI-based extraction tool for extracting structured data from unstructured text. The extraction system uses LLM capabilities to parse content and return data in a predefined format.
+ Extract structured data from unstructured text using AI. The `minded-extraction` tool uses LLM capabilities to parse content and return data in a predefined format.
 
  ## Overview
 
- The extraction tool (`minded-extraction`) enables you to extract specific information from text using:
-
- - **Structured Extraction with Zod Schema**: Define exact data structure using Zod schemas
+ - **Structured Extraction with Zod Schema**: Define exact data structure
  - **Prompt-based Extraction**: Extract information using custom prompts
- - **Validation and Retries**: Automatic validation against schema with retry logic
-
- ## Key Features
-
- - **LLM Structured Output Support**: When available, uses the LLM's native structured output capabilities for guaranteed schema compliance
- - **Fallback JSON Parsing**: Automatically falls back to JSON parsing with validation when structured output is unavailable
- - **Schema Validation**: Built-in Zod validation ensures extracted data matches expected structure
- - **Retry Logic**: Configurable retry attempts with error feedback for improved accuracy
- - **Strict and Non-strict Modes**: Choose between validated extraction (strict) or flexible extraction
-
- ## Library Tool Integration
+ - **Validation and Retries**: Automatic validation with configurable retry logic
+ - **Structured Output Support**: Uses LLM native structured output when available
 
- The extraction functionality is available as a library tool called `minded-extraction` that can be added to your flows through the Minded platform.
-
- ### Configuration Options
-
- - `content`: The text to extract information from
- - `schema`: Optional Zod-compatible schema defining the structure
- - `systemPrompt`: Custom instructions for extraction
- - `examples`: Input/output examples to guide extraction
- - `strictMode`: Enable/disable schema validation (default: true)
- - `maxRetries`: Number of retry attempts on validation failure (default: 3)
- - `defaultValue`: Fallback value if extraction fails
-
- ## How It Works
+ ## Using in Flows
 
- 1. **With Structured Output Support** (when available):
-
-    - The tool uses the LLM's `withStructuredOutput` method for direct schema-compliant extraction
-    - No manual JSON parsing or validation needed
-    - Guaranteed to match the provided Zod schema
-
- 2. **Fallback Mode** (JSON parsing):
+ ```yaml
+ - id: extractCustomerInfo
+   type: tool
+   toolName: minded-extraction
+   prompt: Extract customer name, email, and phone number from the message
+ ```
 
-    - Generates a prompt with schema description
-    - Uses JSON output parser to extract structured data
-    - Validates against Zod schema
-    - Retries with error feedback if validation fails
+ ### Tool Parameters
+
+ | Parameter    | Type    | Description                            | Required |
+ | ------------ | ------- | -------------------------------------- | -------- |
+ | content      | string  | Text to extract from                   | Yes      |
+ | schema       | object  | Zod-compatible schema                  | No       |
+ | systemPrompt | string  | Custom instructions                    | No       |
+ | examples     | array   | Input/output examples                  | No       |
+ | strictMode   | boolean | Enable validation (default: true)      | No       |
+ | maxRetries   | number  | Retry attempts on failure (default: 3) | No       |
+ | defaultValue | any     | Fallback value                         | No       |
+
+ ### Overriding Parameters in Flows
+
+ ```yaml
+ - name: 'Extract Customer Info'
+   type: 'tool'
+   toolName: 'minded-extraction'
+   parameters:
+     content: '{state.memory.rawText}'
+     schema:
+       name:
+         type: 'string'
+         description: 'Customer full name'
+       email:
+         type: 'string'
+         description: 'Email address'
+         required: false
+       phone:
+         type: 'string'
+     systemPrompt: 'Extract contact information from the text'
+     strictMode: true
+     maxRetries: 3
+ ```
 
- 3. **Non-strict Mode**:
-    - Skips validation for more flexible extraction
-    - Useful when schema compliance is not critical
+ **Available schema field properties:**
 
- ## Standalone Usage
+ - `type`: `'string'`, `'number'`, `'boolean'`, `'array'`, or `'object'`
+ - `description`: Optional field description
+ - `required`: Optional boolean (defaults to true)
 
- The extraction utility can also be used programmatically:
+ ## Programmatic Usage
 
  ```typescript
  import { extract, createExtractor } from '@minded-ai/mindedjs';
@@ -76,3 +82,9 @@ const result = await extract(
  const extractor = createExtractor(schema, { systemPrompt: 'Extract data' });
  const result = await extractor(content, agent.llm);
  ```
+
+ ## How It Works
+
+ 1. **With Structured Output Support**: Uses LLM's `withStructuredOutput` for direct schema-compliant extraction
+ 2. **Fallback Mode**: Generates prompt with schema description, parses JSON, validates against Zod schema, retries with error feedback
+ 3. **Non-strict Mode**: Skips validation for flexible extraction
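The simplified schema field format documented above (`type`, `description`, `required`) maps naturally onto a runtime check. A minimal sketch of validating an object against that format; `validateAgainstSpec` is a hypothetical helper, not the library's implementation (the real tool converts this format into a Zod schema):

```typescript
// Hypothetical mirror of the documented schema field properties.
type FieldSpec = {
  type: 'string' | 'number' | 'boolean' | 'array' | 'object';
  description?: string;
  required?: boolean; // defaults to true, as documented
};

// Returns a list of validation errors; empty means the data matches the spec.
function validateAgainstSpec(
  data: Record<string, unknown>,
  schema: Record<string, FieldSpec>,
): string[] {
  const errors: string[] = [];
  for (const [field, spec] of Object.entries(schema)) {
    const value = data[field];
    const required = spec.required !== false; // `required` defaults to true
    if (value === undefined) {
      if (required) errors.push(`missing required field: ${field}`);
      continue;
    }
    // `typeof` reports arrays as 'object', so special-case them
    const actual = Array.isArray(value) ? 'array' : typeof value;
    if (actual !== spec.type) {
      errors.push(`${field}: expected ${spec.type}, got ${actual}`);
    }
  }
  return errors;
}
```

For example, `validateAgainstSpec({ name: 'Ada' }, { name: { type: 'string' }, email: { type: 'string', required: false } })` returns no errors because `email` is optional.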
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@minded-ai/mindedjs",
-   "version": "2.0.36",
+   "version": "2.0.38",
    "description": "MindedJS is a TypeScript library for building agents.",
    "main": "dist/index.js",
    "types": "dist/index.d.ts",
@@ -72,4 +72,4 @@
    "peerDependencies": {
      "playwright": "^1.55.0"
    }
- }
+ }
package/src/agent.ts CHANGED
@@ -488,7 +488,8 @@ export class Agent {
    try {
      await this.waitForInitialization();
    } catch (err) {
-     logger.error({ msg: '[Trigger] Agent initialization failed', err });
+     const { baseUrl } = getConfig();
+     logger.error({ msg: '[Trigger] Agent initialization failed', err, sessionId, baseUrl });
      throw err;
    }
 
package/src/index.ts CHANGED
@@ -85,7 +85,7 @@ export {
    TriggerEvent,
  } from './types/Agent.types';
  export type { AgentInvokeParams, MindedSDKConfig } from './types/Agent.types';
- export type { Environment } from './types/Platform.types';
+ export { Environment } from './types/Platform.types';
  export type { State } from './types/LangGraph.types';
  export { zendesk } from './interfaces/zendesk';
 
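The switch from `export type` to a value export matters when `Environment` carries a runtime value such as an enum: a type-only export is erased at compile time, so consumers could not reference its members as values. A sketch with a hypothetical stand-in enum (the library's actual `Environment` definition is not shown in this diff):

```typescript
// Hypothetical stand-in for the library's Environment. If it is an enum,
// `export type { Environment }` would erase it at compile time, making
// `Environment.Production` unusable as a runtime value in consumer code.
enum Environment {
  Production = 'production',
  Staging = 'staging',
}

// With a plain value export, this runtime lookup compiles and runs:
const env: Environment = Environment.Production;
```

A plain `export { Environment }` keeps both the type and the runtime object available to importers.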
@@ -13,6 +13,7 @@ import { combinePlaybooks } from '../playbooks/playbooks';
  import { compilePrompt } from './compilePrompt';
  import { AnalyticsEventName } from '../types/Analytics.types';
  import { trackAnalyticsEvent } from '../internalTools/analytics';
+ import { z } from 'zod';
 
  export const addToolNode = async ({
    graph,
@@ -49,38 +50,144 @@ export const addToolNode = async ({
  await agent.interruptSessionManager.checkQueueAndInterrupt(state.sessionId);
  logger.debug({ msg: `[Node] Executing tool node`, node: toolNode.displayName });
 
- const tool = langchainTool(() => { }, {
-   name: matchedTool.name,
-   description: matchedTool.description,
-   schema: matchedTool.input,
- });
+ // Compile parameters with variable injection support
+ const compiledParameters: Record<string, any> = {};
+ const compileContext = { state, memory: state.memory, env: process.env };
+
+ for (const [key, value] of Object.entries(node.parameters || {})) {
+   if (value !== '') {
+     // If the value is a string, compile it to allow variable injection
+     if (typeof value === 'string') {
+       compiledParameters[key] = compilePrompt(value, compileContext);
+     } else {
+       compiledParameters[key] = value;
+     }
+   }
+ }
+
+ // Create a filtered schema for the LLM (exclude overridden parameters)
+ let schemaForLLM = matchedTool.input;
+ const overriddenKeys = Object.keys(compiledParameters);
+ let skipLLMCall = false;
 
- const combinedPlaybooks = combinePlaybooks(agent.playbooks);
- const toolPrompt = node.prompt;
+ if (overriddenKeys.length > 0) {
+   try {
+     // Only filter if it's a ZodObject
+     if (matchedTool.input instanceof z.ZodObject) {
+       // Create omit map: { key1: true, key2: true, ... }
+       const omitMap = overriddenKeys.reduce((acc, key) => {
+         acc[key] = true;
+         return acc;
+       }, {} as Record<string, true>);
 
- let finalMessage = toolPrompt;
- if (combinedPlaybooks) {
-   if (toolPrompt) {
-     finalMessage = combinedPlaybooks + '\n\n' + toolPrompt;
-   } else {
-     finalMessage = combinedPlaybooks;
+       schemaForLLM = matchedTool.input.omit(omitMap);
+
+       // Check if all parameters are overridden
+       const remainingKeys = Object.keys(schemaForLLM.shape);
+       skipLLMCall = remainingKeys.length === 0;
+     }
+   } catch (err) {
+     logger.warn({
+       msg: '[Tool] Failed to filter schema, using original',
+       tool: matchedTool.name,
+       err,
+     });
+     // Fall back to original schema on error
    }
  }
- if (finalMessage) {
-   const compiledPrompt = compilePrompt(finalMessage, { state: state, memory: state.memory, env: process.env });
-   const systemMessage = new SystemMessage(compiledPrompt);
-   if (state.messages.length === 0 || state.messages[0].getType() === 'system') {
-     state.messages[0] = systemMessage;
-   } else {
-     state.messages.unshift(systemMessage);
+
+ let AIToolCallMessage: AIMessage;
+
+ if (skipLLMCall) {
+   // All parameters are overridden - create synthetic tool call without LLM
+   AIToolCallMessage = new AIMessage({
+     content: '',
+     tool_calls: [
+       {
+         name: matchedTool.name,
+         args: compiledParameters,
+         id: `call_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`,
+       },
+     ],
+   });
+ } else {
+   // Need LLM to infer remaining parameters
+   const tool = langchainTool(() => {}, {
+     name: matchedTool.name,
+     description: matchedTool.description,
+     schema: schemaForLLM, // Use filtered schema
+   });
+
+   const combinedPlaybooks = combinePlaybooks(agent.playbooks) || '';
+   const compiledNodePrompt = node.prompt ? compilePrompt(node.prompt, compileContext) : null;
+   const hasOverriddenParameters = overriddenKeys.length > 0;
+
+   // Only build system message if there's actual content (playbooks, prompt, or overridden parameters)
+   if (combinedPlaybooks || compiledNodePrompt || hasOverriddenParameters) {
+     // Check if any compiled parameter is too long (>1000 characters)
+     const hasLongParameters = Object.values(compiledParameters).some((value) => typeof value === 'string' && value.length > 1000);
+
+     // Prepare parameters string for the system message
+     let parametersString: string;
+     if (hasOverriddenParameters) {
+       if (hasLongParameters) {
+         logger.debug({
+           msg: '[Tool] Omitting parameters from system prompt due to length',
+           tool: matchedTool.name,
+           parameterLengths: Object.entries(compiledParameters).reduce((acc, [key, value]) => {
+             if (typeof value === 'string') {
+               acc[key] = value.length;
+             }
+             return acc;
+           }, {} as Record<string, number>),
+         });
+         parametersString = '[Parameters omitted - one or more values exceed 1000 characters]';
+       } else {
+         parametersString = JSON.stringify(compiledParameters);
+       }
+     } else {
+       parametersString = '{}';
+     }
+
+     // Build system message with overridden parameters info
+     const message = `${combinedPlaybooks ? combinedPlaybooks + '\n\n' : ''}
+ Additional context:
+ previous messages are available for context.
+ Your goal is execute the tool with the correct parameters, some of them already chosen by the user and the rest should be generated.
+ You have 4 things to base your parameters on:
+ - Previous messages
+ - Parameters manually configured by the user (if any)
+ - User instructions for choosing tool parameters that are not set by the user (if any)
+ - Workflow memory
+ Parameters manually configured by the user are:
+ ${parametersString}
+ User instructions for choosing tool parameters are:
+ ${compiledNodePrompt ? compiledNodePrompt : 'no instructions set by the user'}`;
+
+     const compiledPrompt = compilePrompt(message, { state: state, memory: state.memory, env: process.env });
+     const systemMessage = new SystemMessage(compiledPrompt);
+     if (state.messages.length === 0 || state.messages[0].getType() === 'system') {
+       state.messages[0] = systemMessage;
+     } else {
+       state.messages.unshift(systemMessage);
+     }
+   }
+
+   const startTime = Date.now();
+   AIToolCallMessage = await llm.bindTools([tool], { tool_choice: tool.name }).invoke(state.messages);
+   const endTime = Date.now();
+   logger.debug({ msg: '[Tool] Model execution time', tool: matchedTool.name, executionTimeMs: endTime - startTime });
+   await agent.interruptSessionManager.checkQueueAndInterrupt(state.sessionId);
+
+   // Merge AI-generated parameters with user-set parameters
+   if (AIToolCallMessage.tool_calls && AIToolCallMessage.tool_calls.length > 0) {
+     AIToolCallMessage.tool_calls[0].args = {
+       ...AIToolCallMessage.tool_calls[0].args,
+       ...compiledParameters, // User-set parameters have priority over AI-generated parameters
+     };
    }
  }
 
- const startTime = Date.now();
- const AIToolCallMessage: AIMessage = await llm.bindTools([tool], { tool_choice: tool.name }).invoke(state.messages);
- const endTime = Date.now();
- logger.debug({ msg: '[Tool] Model execution time', tool: matchedTool.name, executionTimeMs: endTime - startTime });
- await agent.interruptSessionManager.checkQueueAndInterrupt(state.sessionId);
  state.goto = null;
 
  if (!AIToolCallMessage.additional_kwargs) {
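Two pieces of the new logic in this hunk can be sketched in isolation: the omit-map reduce that hides user-overridden keys from the LLM-facing schema, and the final spread merge that gives user-set parameters priority. `buildOmitMap` and `mergeToolArgs` are hypothetical standalone helpers mirroring the inline code, not exports of the library:

```typescript
// Builds the { key1: true, key2: true, ... } shape that ZodObject.omit()
// expects, from the list of user-overridden parameter keys.
function buildOmitMap(keys: string[]): Record<string, true> {
  return keys.reduce((acc, key) => {
    acc[key] = true;
    return acc;
  }, {} as Record<string, true>);
}

// Merges LLM-generated args with user-configured parameters. Spread order
// matters: user params are spread last, so they win on conflicting keys,
// while LLM-generated values for other keys are preserved.
function mergeToolArgs(
  aiArgs: Record<string, unknown>,
  userParams: Record<string, unknown>,
): Record<string, unknown> {
  return { ...aiArgs, ...userParams };
}
```

This spread-last convention is the whole override mechanism: the LLM only ever sees (and fills) the schema fields the user left unset, and even if it hallucinates a value for an overridden key, the merge discards it.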
@@ -3,7 +3,6 @@ import { RunnableLike } from '@langchain/core/runnables';
  import { Tool } from '../types/Tools.types';
  import { LLMProviders } from '../types/LLM.types';
  import { internalNodesSuffix, NodeType, ToolNode } from '../types/Flows.types';
- import { tool as langchainTool } from '@langchain/core/tools';
  import { z } from 'zod';
  import { Agent } from '../agent';
  import { logger } from '../utils/logger';
@@ -57,7 +56,7 @@ export const addToolRunNode = async ({ graph, tools, toolNode, attachedToNodeNam
    });
 
    return toolResult;
- }
+ },
  );
 
  const endTime = Date.now();
@@ -76,17 +75,25 @@ export const addToolRunNode = async ({ graph, tools, toolNode, attachedToNodeNam
    throw err;
  }
  };
- const tool = langchainTool(executeWrapper, {
-   name: matchedTool.name,
-   description: matchedTool.description,
-   schema: matchedTool.input,
- });
 
  const toolCallObj = state.messages[state.messages.length - 1] as any;
  if (!toolCallObj.tool_calls) {
    throw new Error('Tool call not found');
  }
- const toolCallMessage = await tool.invoke(toolCallObj.tool_calls[0]);
+
+ // Parse and validate args using the tool's schema
+ const toolCall = toolCallObj.tool_calls[0];
+ const parsedArgs = matchedTool.input.parse(toolCall.args);
+
+ // Execute the tool with validated args
+ const result = await executeWrapper(parsedArgs);
+
+ // Create a ToolMessage with the result
+ const toolCallMessage = new ToolMessage({
+   content: typeof result === 'string' ? result : JSON.stringify(result),
+   tool_call_id: toolCall.id,
+   name: matchedTool.name,
+ });
 
  // Add the tool message to the state
  state.messages.push(toolCallMessage);
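The `content:` line in the new `ToolMessage` normalizes the tool result: strings pass through unchanged, everything else is JSON-serialized. The same logic as a standalone sketch (`toMessageContent` is a hypothetical helper, not a library export):

```typescript
// Normalizes a tool result into message content, mirroring the ternary in
// the diff: strings are passed through as-is, any other value (object,
// array, number, boolean) is JSON-serialized.
function toMessageContent(result: unknown): string {
  return typeof result === 'string' ? result : JSON.stringify(result);
}
```

Passing strings through unchanged avoids double-quoting plain-text results, while structured results still reach the model as valid JSON.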
@@ -195,6 +195,7 @@ export interface ToolNode extends BaseNode {
    toolName: string;
    prompt: string;
    appImgSrc?: string;
+   parameters?: Record<string, any>;
  }
 
  export interface AppToolNode extends BaseNode, BaseAppNode {
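The new optional `parameters` field is what carries flow-level overrides into `addToolNode`. A sketch with a hypothetical trimmed-down interface (the real `ToolNode` extends `BaseNode` with additional fields not shown here):

```typescript
// Hypothetical minimal mirror of the ToolNode fields relevant to this change.
interface ToolNodeLike {
  toolName: string;
  prompt: string;
  parameters?: Record<string, any>; // flow-level parameter overrides
}

// A node with one override: `content` is fixed by the flow author, so the
// LLM would only be asked to fill the remaining schema fields.
const node: ToolNodeLike = {
  toolName: 'minded-extraction',
  prompt: 'Extract contact information',
  parameters: { content: '{state.memory.rawText}' },
};
```

Because the field is optional, existing flows without `parameters` continue to type-check unchanged.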