@minded-ai/mindedjs 2.0.36-beta.5 → 2.0.36-beta.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -67,20 +67,23 @@ agent.on(events.TRIGGER_EVENT, async ({ triggerName, triggerBody }) => {
 
 ## Prompt Nodes
 
- Prompt nodes process input through LLM to generate intelligent responses or invoke tools.
+ Prompt nodes process input through an LLM to generate intelligent responses or invoke tools.
 
- **Optional Properties:**
+ ### Context
+ - Prompt nodes automatically have access to previous nodes' output as context.
+ - You can also inject context from the memory object using the `{memory.propertyName}` placeholder.
 
- - `humanInTheLoop?: boolean` - When `true`, pauses execution after this node for human input before continuing
+ ### Properties
+ - `humanInTheLoop?: boolean` - When `true`, pauses execution after this node for human input before continuing. Use this property only if you need to gather input from the user.
 
  - `canStayOnNode?: boolean` - When `true`, allows the node to route back to itself for iterative processing. Usually combined with `humanInTheLoop: true` to allow for iteration over human input.
 
- **Limitations**
-
+ ### Limitations
  - Prompt nodes cannot save information to memory directly; instead, the user and agent messages are stored in the messages array of the state object, making the user's input available to the next prompt nodes or tools in the form of an input schema.
+ - Image recognition is only supported when a user attaches an image to the message. If you need to extract information from an image, use a tool with agent.llm to process the image and return the result in the tool response so following prompt nodes can use it.
 
- ## Common Prompt Patterns
+ ### Common Prompt Patterns
 
- ### Extract Information from User
+ #### Extract Information from User
 
 Use `humanInTheLoop: true` and `canStayOnNode: true` with `promptCondition` edges for iterative information gathering:
 
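The information-gathering example referenced above is truncated by the diff view. As a hedged sketch only: the property names `humanInTheLoop`, `canStayOnNode`, `promptCondition`, and `prompt` come from this diff, while the surrounding node and edge field names (`nodes`, `name`, `type`, `from`, `to`) are assumptions about the flow format, not confirmed here:

```yaml
# Hypothetical flow sketch; only humanInTheLoop, canStayOnNode,
# promptCondition, and prompt are documented in this diff.
nodes:
  - name: collectDetails        # assumed field name
    type: prompt                # assumed field name
    prompt: 'Ask the user for their name and email address.'
    humanInTheLoop: true        # pause and wait for the user's reply
    canStayOnNode: true         # allow routing back here to iterate
edges:
  - from: collectDetails        # assumed field name
    to: collectDetails          # loop while information is still missing
    type: promptCondition
    prompt: 'Has all required information been collected?'
```

The loop exits when the `promptCondition` edge evaluates the collected conversation and the condition is satisfied.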
@@ -99,7 +102,7 @@ edges:
  prompt: 'Has all required information been collected?'
 ```
 
- ### Notify User
+ #### Notify User
 
 Use standard prompt nodes with `stepForward` edges for one-way notifications:
 
@@ -115,11 +118,8 @@ edges:
  type: stepForward
 ```
 
- ### Configuration
-
- **Placeholders:** Use `{memory.propertyName}`, `{memory.customer.name}`, or `{system.currentTime}` in prompts.
-
- **LLM Configuration:**
+ ### Custom LLM Configuration
+ Provide a custom LLM configuration to the prompt node using the `llmConfig` property.
 
 ```yaml
 llmConfig:
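The `llmConfig` block itself is cut off by the diff view, so its fields are left as-is. For the placeholder syntax carried over from the removed Configuration section (`{memory.propertyName}`, `{memory.customer.name}`, `{system.currentTime}`), a minimal prompt sketch might look like the following; the node field names and `openRequest` memory property are hypothetical:

```yaml
# Placeholder names {memory.customer.name} and {system.currentTime}
# come from the docs; the node shape and memory.openRequest are assumed.
- name: greetCustomer           # assumed field name
  type: prompt                  # assumed field name
  prompt: >
    It is {system.currentTime}. Greet {memory.customer.name} and
    summarize their open request: {memory.openRequest}.
```

Placeholders are resolved against the memory object before the prompt is sent to the LLM, so they complement the automatic context a prompt node receives from previous nodes.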
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@minded-ai/mindedjs",
-  "version": "2.0.36-beta.5",
+  "version": "2.0.36-beta.6",
   "description": "MindedJS is a TypeScript library for building agents.",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",