quantalogic-0.53.0-py3-none-any.whl → quantalogic-0.56.0-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- quantalogic/__init__.py +7 -0
- quantalogic/flow/flow.py +267 -80
- quantalogic/flow/flow_extractor.py +216 -87
- quantalogic/flow/flow_generator.py +157 -88
- quantalogic/flow/flow_manager.py +252 -125
- quantalogic/flow/flow_manager_schema.py +62 -43
- quantalogic/flow/flow_mermaid.py +151 -68
- quantalogic/flow/flow_validator.py +204 -77
- quantalogic/flow/flow_yaml.md +341 -156
- quantalogic/tools/safe_python_interpreter_tool.py +6 -1
- quantalogic/xml_parser.py +5 -1
- quantalogic/xml_tool_parser.py +4 -1
- {quantalogic-0.53.0.dist-info → quantalogic-0.56.0.dist-info}/METADATA +16 -6
- {quantalogic-0.53.0.dist-info → quantalogic-0.56.0.dist-info}/RECORD +17 -17
- {quantalogic-0.53.0.dist-info → quantalogic-0.56.0.dist-info}/LICENSE +0 -0
- {quantalogic-0.53.0.dist-info → quantalogic-0.56.0.dist-info}/WHEEL +0 -0
- {quantalogic-0.53.0.dist-info → quantalogic-0.56.0.dist-info}/entry_points.txt +0 -0
quantalogic/flow/flow_yaml.md
CHANGED
@@ -1,21 +1,21 @@
 # Quantalogic Flow YAML DSL Specification 🚀
 
-
-
 ## 1. Introduction 🌟
 
-The **Quantalogic Flow YAML DSL** is a human-readable, declarative language for defining workflows within the `quantalogic.flow` Python package. As of **March …
+The **Quantalogic Flow YAML DSL** is a human-readable, declarative language for defining workflows within the `quantalogic.flow` Python package. As of **March 5, 2025**, it’s packed with features for task automation:
 
 - **Function Execution** ⚙️: Run async Python functions from embedded code, PyPI, local files, or URLs.
-- **Execution Flow** ➡️: Support sequential, conditional, and …
+- **Execution Flow** ➡️: Support sequential, conditional, parallel, branching, and converging transitions.
 - **Sub-Workflows** 🌳: Enable hierarchical, modular designs.
 - **LLM Integration** 🤖: Harness Large Language Models for text or structured outputs.
+- **Template Nodes** 📝: Render dynamic content with Jinja2 templates.
+- **Input Mapping** 🔗: Flexibly map node parameters to context or custom logic.
 - **Context Management** 📦: Share state dynamically across nodes.
 - **Robustness** 🛡️: Include retries, delays, and timeouts.
 - **Observers** 👀: Monitor execution with custom handlers.
 - **Programmatic Control** 🧑‍💻: Manage workflows via `WorkflowManager`.
 
-This DSL integrates with `Workflow`, `WorkflowEngine`, and `Nodes` classes, making it …
+This DSL integrates with `Workflow`, `WorkflowEngine`, and `Nodes` classes, making it versatile for everything from simple scripts to complex AI-driven workflows. We’ll use an updated **Story Generator Workflow** as a running example, derived from `examples/flow/simple_story_generator/story_generator_agent.py`, now enhanced with branching, convergence, input mapping, and template nodes. Let’s dive in! 🎉
 
 ```mermaid
 graph TD
@@ -32,12 +32,13 @@ graph TD
 
 ## 2. Workflow Structure 🗺️
 
-A workflow YAML file …
+A workflow YAML file comprises five core sections:
 
 - **`functions`**: Python code definitions.
-- **`nodes`**: Task specifications.
-- **`workflow`**: Flow orchestration.
+- **`nodes`**: Task specifications with input mappings and template support.
+- **`workflow`**: Flow orchestration with branching and convergence.
 - **`dependencies`**: Python module dependencies.
+- **`observers`**: Event monitoring.
 
 Here’s the skeleton:
 
@@ -45,9 +46,9 @@ Here’s the skeleton:
 functions:
   # Python magic ✨
 nodes:
-  # Tasks 🎯
+  # Tasks with input mappings & templates 🎯
 workflow:
-  # Flow control 🚦
+  # Flow control with branches & convergence 🚦
 dependencies:
   # Python module dependencies (optional)
 observers:
@@ -55,7 +56,7 @@ observers:
 ```
 
 ### Story Generator Example
-…
+We’ll evolve the Story Generator to include branching (e.g., based on story tone), convergence (e.g., finalizing the story), **input mapping** for flexible parameter passing, and a **template node** to format chapter summaries—showcasing these shiny new features step-by-step.
 
 ---
 
@@ -63,7 +64,7 @@ Imagine a workflow that generates a multi-chapter story. We’ll build it step-b…
 
 ### Python Version (`story_generator_agent.py`)
 
-This script generates a story …
+This updated script generates a story with tone-based branching, convergence, input mapping, and a template node:
 
 ```python
 #!/usr/bin/env python
@@ -76,90 +77,114 @@ DEFAULT_LLM_PARAMS = {"model": MODEL, "temperature": 0.7, "max_tokens": 1000}
 @Nodes.llm_node(system_prompt="You are a creative writer skilled at generating stories.",
                 prompt_template="Create a story outline for a {genre} story with {num_chapters} chapters.",
                 output="outline", **DEFAULT_LLM_PARAMS)
-def generate_outline(genre, num_chapters):
-    return …
+async def generate_outline(genre: str, num_chapters: int):
+    return ""
+
+@Nodes.llm_node(system_prompt="You are a creative writer.",
+                prompt_template="Analyze the tone of this outline: {outline}.",
+                output="tone", **DEFAULT_LLM_PARAMS)
+async def analyze_tone(outline: str):
+    return ""
 
 @Nodes.llm_node(system_prompt="You are a creative writer.",
                 prompt_template="Write chapter {chapter_num} for this story outline: {outline}. Style: {style}.",
                 output="chapter", **DEFAULT_LLM_PARAMS)
-def generate_chapter(outline, chapter_num, style):
-    return …
+async def generate_chapter(outline: str, chapter_num: int, style: str):
+    return ""
+
+@Nodes.llm_node(system_prompt="You are a dramatic writer.",
+                prompt_template="Write a dramatic chapter {chapter_num} for this outline: {outline}.",
+                output="chapter", **DEFAULT_LLM_PARAMS)
+async def generate_dramatic_chapter(outline: str, chapter_num: int):
+    return ""
+
+@Nodes.template_node(output="chapter_summary", template="Chapter {chapter_num}: {chapter}")
+async def summarize_chapter(rendered_content: str, chapter: str, chapter_num: int):
+    return rendered_content
 
 @Nodes.define(output="updated_context")
 async def update_progress(**context):
     chapters = context.get('chapters', [])
     completed_chapters = context.get('completed_chapters', 0)
-
-    updated_chapters = chapters + […
+    chapter_summary = context.get('chapter_summary', '')
+    updated_chapters = chapters + [chapter_summary]
     return {**context, "chapters": updated_chapters, "completed_chapters": completed_chapters + 1}
 
 @Nodes.define(output="continue_generating")
-async def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
+async def check_if_complete(completed_chapters: int = 0, num_chapters: int = 0, **kwargs):
     return completed_chapters < num_chapters
 
+@Nodes.define(output="final_story")
+async def finalize_story(chapters: list):
+    return "\n".join(chapters)
+
 workflow = (
     Workflow("generate_outline")
-    .…
+    .node("generate_outline", inputs_mapping={"genre": "story_genre", "num_chapters": "chapter_count"})
+    .then("analyze_tone")
+    .branch([
+        ("generate_chapter", lambda ctx: ctx.get("tone") == "light"),
+        ("generate_dramatic_chapter", lambda ctx: ctx.get("tone") == "dark")
+    ])
+    .then("summarize_chapter")
     .then("update_progress")
     .then("check_if_complete")
     .then("generate_chapter", condition=lambda ctx: ctx.get("continue_generating", False))
+    .then("summarize_chapter")
     .then("update_progress")
     .then("check_if_complete")
+    .converge("finalize_story")
 )
 
-def story_observer(…
-    print(f"Event: {event_type} - …
+def story_observer(event):
+    print(f"Event: {event.event_type.value} - Node: {event.node_name}")
 workflow.add_observer(story_observer)
 
 if __name__ == "__main__":
     async def main():
         initial_context = {
-            "…
-            "…
+            "story_genre": "science fiction",
+            "chapter_count": 3,
             "chapters": [],
             "completed_chapters": 0,
             "style": "descriptive"
         }
         engine = workflow.build()
         result = await engine.run(initial_context)
-        print(f"…
+        print(f"Final Story:\n{result.get('final_story', '')}")
     anyio.run(main)
 ```
 
 ### YAML Version (`story_generator_workflow.yaml`)
 
-Here’s the …
+Here’s the updated YAML with branching, convergence, input mapping, and a template node:
 
 ```yaml
 functions:
-  generate_outline:
-    type: embedded
-    code: |
-      async def generate_outline(genre: str, num_chapters: int) -> str:
-          return ""
-  generate_chapter:
-    type: embedded
-    code: |
-      async def generate_chapter(outline: str, chapter_num: int, style: str) -> str:
-          return ""
   update_progress:
     type: embedded
     code: |
       async def update_progress(**context):
           chapters = context.get('chapters', [])
           completed_chapters = context.get('completed_chapters', 0)
-…
-…
+          chapter_summary = context.get('chapter_summary', '')
+          updated_chapters = chapters + [chapter_summary]
+          return {**context, "chapters": updated_chapters, "completed_chapters": completed_chapters + 1}
   check_if_complete:
     type: embedded
     code: |
       async def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
           return completed_chapters < num_chapters
+  finalize_story:
+    type: embedded
+    code: |
+      async def finalize_story(chapters):
+          return "\n".join(chapters)
   story_observer:
     type: embedded
     code: |
-      def story_observer(…
-          print(f"Event: {event_type} - …
+      def story_observer(event):
+          print(f"Event: {event.event_type.value} - Node: {event.node_name}")
 
 nodes:
   generate_outline:
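The new `.branch([...])` call above pairs each target node with a predicate over the shared context. As a minimal stand-in (not the quantalogic API itself, and assuming first-match selection for illustration; the real engine may evaluate branches differently), the selection logic can be sketched as:

```python
def pick_branch(branches, ctx):
    """Return the first node whose condition holds for the context, else None."""
    for node_name, condition in branches:
        if condition(ctx):
            return node_name
    return None

# Mirrors the branch list from the story generator above.
branches = [
    ("generate_chapter", lambda ctx: ctx.get("tone") == "light"),
    ("generate_dramatic_chapter", lambda ctx: ctx.get("tone") == "dark"),
]

print(pick_branch(branches, {"tone": "dark"}))     # generate_dramatic_chapter
print(pick_branch(branches, {"tone": "neutral"}))  # None
```

A branch whose predicate never fires simply falls through, which is why the workflow converges at `finalize_story` rather than relying on every branch to run.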
@@ -169,7 +194,18 @@ nodes:
       prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
       temperature: 0.7
       max_tokens: 1000
+    inputs_mapping:
+      genre: "story_genre"
+      num_chapters: "chapter_count"
     output: outline
+  analyze_tone:
+    llm_config:
+      model: "gemini/gemini-2.0-flash"
+      system_prompt: "You are a creative writer."
+      prompt_template: "Analyze the tone of this outline: {outline}."
+      temperature: 0.7
+      max_tokens: 1000
+    output: tone
   generate_chapter:
     llm_config:
       model: "gemini/gemini-2.0-flash"
@@ -177,59 +213,104 @@ nodes:
       prompt_template: "Write chapter {chapter_num} for this story outline: {outline}. Style: {style}."
       temperature: 0.7
       max_tokens: 1000
+    inputs_mapping:
+      chapter_num: "completed_chapters"
+      style: "style"
     output: chapter
+  generate_dramatic_chapter:
+    llm_config:
+      model: "gemini/gemini-2.0-flash"
+      system_prompt: "You are a dramatic writer."
+      prompt_template: "Write a dramatic chapter {chapter_num} for this outline: {outline}."
+      temperature: 0.7
+      max_tokens: 1000
+    inputs_mapping:
+      chapter_num: "completed_chapters"
+    output: chapter
+  summarize_chapter:
+    template_config:
+      template: "Chapter {chapter_num}: {chapter}"
+    inputs_mapping:
+      chapter_num: "completed_chapters"
+    output: chapter_summary
   update_progress:
     function: update_progress
     output: updated_context
   check_if_complete:
     function: check_if_complete
     output: continue_generating
+  finalize_story:
+    function: finalize_story
+    output: final_story
 
 workflow:
   start: generate_outline
   transitions:
     - from_node: generate_outline
-      to_node: …
+      to_node: analyze_tone
+    - from_node: analyze_tone
+      to_node:
+        - to_node: generate_chapter
+          condition: "ctx['tone'] == 'light'"
+        - to_node: generate_dramatic_chapter
+          condition: "ctx['tone'] == 'dark'"
     - from_node: generate_chapter
+      to_node: summarize_chapter
+    - from_node: generate_dramatic_chapter
+      to_node: summarize_chapter
+    - from_node: summarize_chapter
       to_node: update_progress
     - from_node: update_progress
       to_node: check_if_complete
     - from_node: check_if_complete
       to_node: generate_chapter
       condition: "ctx['continue_generating']"
+  convergence_nodes:
+    - finalize_story
 
 observers:
   - story_observer
 ```
 
-### Mermaid Diagram: Story Generator Flow
+### Mermaid Diagram: Updated Story Generator Flow
 
 ```mermaid
 graph TD
-    A[generate_outline] --> B[…
-    B…
-…
-…
-    D…
+    A[generate_outline] --> B[analyze_tone]
+    B -->|"'light'"| C[generate_chapter]
+    B -->|"'dark'"| D[generate_dramatic_chapter]
+    C --> E[summarize_chapter]
+    D --> E
+    E --> F[update_progress]
+    F --> G[check_if_complete]
+    G -->|"ctx['continue_generating']"| C
+    G --> H[finalize_story]
+    F --> H
     style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
     style B fill:#e6ffe6,stroke:#009933,stroke-width:2px
     style C fill:#e6ffe6,stroke:#009933,stroke-width:2px
     style D fill:#e6ffe6,stroke:#009933,stroke-width:2px
-    style E fill:#…
+    style E fill:#e6ffe6,stroke:#009933,stroke-width:2px
+    style F fill:#e6ffe6,stroke:#009933,stroke-width:2px
+    style G fill:#e6ffe6,stroke:#009933,stroke-width:2px
+    style H fill:#fff0e6,stroke:#cc3300,stroke-width:2px,stroke-dasharray:5
 ```
 
 #### Execution
-With `initial_context = {"…
-1. `generate_outline` …
-2. `…
-3. `…
-4. `…
+With `initial_context = {"story_genre": "science fiction", "chapter_count": 3, "chapters": [], "completed_chapters": 0, "style": "descriptive"}`:
+1. `generate_outline` uses input mapping (`story_genre`, `chapter_count`) to create an outline.
+2. `analyze_tone` determines the story’s tone.
+3. Branches to `generate_chapter` (light tone) or `generate_dramatic_chapter` (dark tone), mapping `chapter_num` to `completed_chapters`.
+4. `summarize_chapter` formats the chapter using a template, mapping `chapter_num`.
+5. `update_progress` updates chapters and count with the summary.
+6. `check_if_complete` loops back if more chapters are needed.
+7. Converges at `finalize_story` to compile the final story.
 
 ---
 
 ## 4. Functions ⚙️
 
-The `functions` section defines Python code for reuse.
+The `functions` section defines reusable Python code.
 
 ### Fields 📋
 - `type` (string, required): `"embedded"` or `"external"`.
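The `inputs_mapping` entries in the YAML above are either plain context keys or serialized lambdas. One plausible resolution scheme (a sketch of the idea, not the library's actual implementation; `resolve_inputs` is a hypothetical helper) looks like this:

```python
def resolve_inputs(inputs_mapping, ctx):
    """Resolve each mapped parameter against the workflow context:
    strings starting with 'lambda' are evaluated into callables on ctx,
    other strings are context keys, and anything else is a static value."""
    resolved = {}
    for param, spec in inputs_mapping.items():
        if isinstance(spec, str) and spec.lstrip().startswith("lambda"):
            resolved[param] = eval(spec)(ctx)  # illustration only; eval is unsafe on untrusted YAML
        elif isinstance(spec, str):
            resolved[param] = ctx[spec]
        else:
            resolved[param] = spec  # static value passed through unchanged
    return resolved

ctx = {"story_genre": "science fiction", "chapter_count": 3}
mapping = {"genre": "story_genre", "num_chapters": "lambda ctx: ctx['chapter_count'] + 1"}
print(resolve_inputs(mapping, ctx))  # {'genre': 'science fiction', 'num_chapters': 4}
```

This is why `generate_outline` can take `genre` and `num_chapters` as parameters while the initial context only defines `story_genre` and `chapter_count`.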
@@ -238,21 +319,18 @@ The `functions` section defines Python code for reuse.
 - `function` (string, optional): Function name in `module`.
 
 ### Rules ✅
-- Embedded: Use `async def …
+- Embedded: Use `async def` (if async), name matches key.
 - External: Requires `module` and `function`, no `code`.
 
 ### Examples 🌈
 From the story generator:
 ```yaml
 functions:
-  …
+  finalize_story:
     type: embedded
     code: |
-      async def …
-          …
-          completed_chapters = context.get('completed_chapters', 0)
-          chapter = context.get('chapter', '')
-          return {**context, "chapters": chapters + [chapter], "completed_chapters": completed_chapters + 1}
+      async def finalize_story(chapters):
+          return "\n".join(chapters)
 ```
 External example:
 ```yaml
@@ -278,23 +356,14 @@ graph TD
 
 ---
 
----
-
 ## 5. Dependencies 🐍
 
-The `dependencies` section lists Python modules …
+The `dependencies` section lists required Python modules.
 
 ### Fields 📋
-
-- `dependencies` (list, optional): A list of Python module dependencies. Each dependency can be a:
-  - PyPI package name (e.g., `requests>=2.28.0`).
-  - Local file path (e.g., `/path/to/module.py`).
-  - Remote URL (e.g., `https://example.com/module.py`).
-
-These dependencies are processed during workflow instantiation, ensuring that all required modules are available before the workflow starts.
+- `dependencies` (list, optional): PyPI packages (e.g., `requests>=2.28.0`), local paths (e.g., `/path/to/module.py`), or URLs (e.g., `https://example.com/module.py`).
 
 ### Example 🌈
-
 ```yaml
 dependencies:
   - requests>=2.28.0
@@ -306,80 +375,55 @@ dependencies:
 
 ## 6. Nodes 🧩
 
-Nodes …
+Nodes define tasks, now enhanced with **input mappings** and **template nodes**, alongside functions, sub-workflows, and LLMs.
 
 ### Fields 📋
 - `function` (string, optional): Links to `functions`.
 - `sub_workflow` (object, optional):
   - `start` (string)
   - `transitions` (list)
+  - `convergence_nodes` (list, optional)
 - `llm_config` (object, optional):
   - `model` (string, default: `"gpt-3.5-turbo"`)
   - `system_prompt` (string, optional)
   - `prompt_template` (string, default: `"{{ input }}"`)
-  - `prompt_file` (string, optional): Path to …
-
-To leverage the power of Jinja2 templating directly within your Quantalogic Flow YAML DSL, you can embed Jinja2 syntax within the `prompt_template` field of your `llm_config`. This allows you to dynamically generate prompts based on variables passed from previous nodes or defined within the flow itself. Simply enclose your Jinja2 expressions within `{{ ... }}`. Ensure that the variables you reference are accessible within the scope of the node execution.
-
-Here's an example:
-
-```yaml
-nodes:
-  - id: generate_email
-    type: llm
-    config:
-      llm_config:
-        model: "gpt-4"
-        prompt_template: "Write an email to {{ recipient }} about the upcoming {{ event }}."
-        temperature: 0.7
-      inputs:
-        recipient: ${get_user_details.outputs.email}
-        event: "Company Picnic"
-```
-
-In this example, the `prompt_template` will dynamically generate an email prompt using the `recipient` variable (fetched from the output of the `get_user_details` node) and the `event` variable, which is a hardcoded string in this case. The LLM will then use the generated prompt to compose the email.
-
+  - `prompt_file` (string, optional): Path to a Jinja2 template file.
   - `temperature` (float, default: `0.7`)
   - `max_tokens` (int, optional)
   - `top_p` (float, default: `1.0`)
   - `presence_penalty` (float, default: `0.0`)
   - `frequency_penalty` (float, default: `0.0`)
   - `response_model` (string, optional)
-- `…
+- `template_config` (object, optional):
+  - `template` (string, default: `""`): Jinja2 template string.
+  - `template_file` (string, optional): Path to a Jinja2 template file (overrides `template`).
+- `inputs_mapping` (dict, optional): Maps node parameters to context keys or lambda expressions (e.g., `"lambda ctx: ctx['x'] + 1"`).
+- `output` (string, optional): Context key for the result.
 - `retries` (int, default: `3`)
 - `delay` (float, default: `1.0`)
 - `timeout` (float/null, default: `null`)
 - `parallel` (bool, default: `false`)
 
 ### Rules ✅
-- …
-- LLM inputs …
+- Exactly one of `function`, `sub_workflow`, `llm_config`, or `template_config`.
+- LLM and template inputs derived from `prompt_template`/`template` or `prompt_file`/`template_file`, overridden by `inputs_mapping`.
+- `inputs_mapping` values can be strings (context keys) or serialized lambdas.
 
 ### Examples 🌈
-
-Here's an example `llm_config` in your YAML:
-
+Using a template node with an external Jinja2 file:
 ```yaml
-…
-```jinja2
-You are a helpful assistant. The user has asked the following:
-
-{{ user_query }}
-
-Please provide a concise and accurate answer.
+nodes:
+  format_report:
+    template_config:
+      template_file: "templates/report.j2"
+    inputs_mapping:
+      title: "report_title"
+      data: "report_data"
+    output: formatted_report
 ```
+(`templates/report.j2`: `Report: {{ title }}\nData: {{ data }}`)
 
-
-From the story generator:
+With input mapping and an LLM:
 ```yaml
 nodes:
   generate_outline:
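The `template_config` examples in this section use `{placeholder}`-style fields. As a dependency-free sketch of the rendering step (the package itself names Jinja2; Python's `str.format` handles the same single-brace placeholders and stands in here for illustration), mapped inputs fill a template like so:

```python
def render(template: str, inputs: dict) -> str:
    """Fill {name} placeholders with mapped input values (str.format stand-in for Jinja2)."""
    return template.format(**inputs)

# Same shape as the summarize_chapter template node above.
summary = render("Chapter {chapter_num}: {chapter}",
                 {"chapter_num": 1, "chapter": "A strange signal reaches Earth."})
print(summary)  # Chapter 1: A strange signal reaches Earth.
```

With a real Jinja2 `template_file`, the `{{ title }}` / `{{ data }}` syntax shown for `templates/report.j2` applies instead.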
@@ -389,40 +433,149 @@ nodes:
       prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
       temperature: 0.7
       max_tokens: 1000
+    inputs_mapping:
+      genre: "story_genre"
+      num_chapters: "lambda ctx: ctx['chapter_count'] + 1"
     output: outline
 ```
 
+From the story generator (template node):
+```yaml
+nodes:
+  summarize_chapter:
+    template_config:
+      template: "Chapter {chapter_num}: {chapter}"
+    inputs_mapping:
+      chapter_num: "completed_chapters"
+    output: chapter_summary
+```
+
 ```mermaid
 graph TD
     A[Node] --> B{Type?}
     B -->|function| C[Function Ref]
-    B -->|sub_workflow| D[Start + Transitions]
+    B -->|sub_workflow| D[Start + Transitions + Convergence]
     B -->|llm_config| E[LLM Setup]
-…
-…
-…
+    B -->|template_config| F[Template Setup]
+    A --> G[Inputs Mapping?]
+    G -->|Yes| H[Context Keys or Lambdas]
+    E --> I{Structured?}
+    I -->|Yes| J[response_model]
+    I -->|No| K[Plain Text]
+    F --> L[Jinja2 Template]
     style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
     style B fill:#fff,stroke:#333
     style C fill:#ccffcc,stroke:#009933
     style D fill:#ccffcc,stroke:#009933
     style E fill:#ccffcc,stroke:#009933
-    style F fill:#…
-    style G fill:#…
+    style F fill:#ccffcc,stroke:#009933
+    style G fill:#fff,stroke:#333
     style H fill:#b3ffb3,stroke:#009933
+    style I fill:#fff,stroke:#333
+    style J fill:#b3ffb3,stroke:#009933
+    style K fill:#b3ffb3,stroke:#009933
+    style L fill:#b3ffb3,stroke:#009933
 ```
 
 ---
 
-## 6. …
+## 6. Input Mapping with LLM Nodes and Template Nodes 🔗
+
+Input mapping allows flexible parameter passing to nodes, enabling dynamic behavior based on workflow context. This is particularly powerful when combined with LLM nodes and template nodes.
+
+### Implementation Details
+
+- **Input Mapping Types**:
+  - Direct context references (e.g., "story_genre")
+  - Lambda expressions (e.g., "lambda ctx: ctx['chapter_count'] + 1")
+  - Static values
+
+- **Supported Node Types**:
+  - LLM nodes
+  - Template nodes
+  - Function nodes
+  - Sub-workflow nodes
+
+### LLM Node Input Mapping
+
+LLM nodes support input mapping for both system prompts and user prompts:
+
+```yaml
+nodes:
+  generate_outline:
+    llm_config:
+      model: "gemini/gemini-2.0-flash"
+      system_prompt: "You are a creative writer skilled in {genre} stories."
+      prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
+    inputs_mapping:
+      genre: "story_genre"  # Map from context
+      num_chapters: "lambda ctx: ctx['chapter_count'] + 1"  # Dynamic value
+    output: outline
+```
 
-
+### Template Node Input Mapping
+
+Template nodes use mapped inputs in Jinja2 templates:
+
+```yaml
+nodes:
+  summarize_chapter:
+    template_config:
+      template: "Chapter {chapter_num}: {chapter}\n\nSummary: {summary}"
+    inputs_mapping:
+      chapter_num: "current_chapter"
+      chapter: "lambda ctx: ctx['chapters'][ctx['current_chapter']]"
+      summary: "lambda ctx: ctx['summaries'][ctx['current_chapter']]"
+    output: chapter_summary
+```
+
+### Combined Example
+
+Here's an example combining both LLM and template nodes with input mapping:
+
+```yaml
+nodes:
+  generate_character:
+    llm_config:
+      model: "gemini/gemini-2.0-flash"
+      system_prompt: "You are a character designer."
+      prompt_template: "Create a character for a {genre} story."
+    inputs_mapping:
+      genre: "story_genre"
+    output: character_description
+
+  format_character:
+    template_config:
+      template: "Character Profile:\n\n{description}\n\nTraits: {traits}"
+    inputs_mapping:
+      description: "character_description"
+      traits: "lambda ctx: ', '.join(ctx['character_traits'])"
+    output: formatted_character
+```
+
+### Key Points
+
+- Use `inputs_mapping` to map context values to node parameters
+- Support both direct context references and lambda expressions
+- Works seamlessly with LLM, template, and other node types
+- Enables dynamic, context-aware workflows
+- Input mapping is validated against node parameters
+
+---
+
+## 7. Workflow 🌐
+
+The `workflow` section orchestrates execution, leveraging branching and convergence.
 
 ### Fields 📋
 - `start` (string, optional): First node.
 - `transitions` (list):
   - `from_node` (string)
-  - `to_node` (string…
-
+  - `to_node` (string or list):
+    - String: Sequential or parallel transition.
+    - List of objects: Branching with `to_node` and `condition`.
+  - `condition` (string, optional): For sequential transitions.
+- `convergence_nodes` (list, optional): Nodes where branches merge.
 
 ### Example 🌈
 From the story generator:
@@ -431,14 +584,26 @@ workflow:
   start: generate_outline
   transitions:
     - from_node: generate_outline
-      to_node: …
+      to_node: analyze_tone
+    - from_node: analyze_tone
+      to_node:
+        - to_node: generate_chapter
+          condition: "ctx['tone'] == 'light'"
+        - to_node: generate_dramatic_chapter
+          condition: "ctx['tone'] == 'dark'"
     - from_node: generate_chapter
+      to_node: summarize_chapter
+    - from_node: generate_dramatic_chapter
+      to_node: summarize_chapter
+    - from_node: summarize_chapter
       to_node: update_progress
     - from_node: update_progress
       to_node: check_if_complete
     - from_node: check_if_complete
       to_node: generate_chapter
       condition: "ctx['continue_generating']"
+  convergence_nodes:
+    - finalize_story
 ```
 
 ```mermaid
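Transition conditions in the YAML above are Python expressions over `ctx`, serialized as strings. A minimal evaluator (illustrative only; calling `eval` on untrusted YAML is unsafe, and the real engine's handling may differ) might be:

```python
def should_transition(condition, ctx):
    """Evaluate a YAML condition string like "ctx['tone'] == 'light'" against the context."""
    if not condition:
        return True  # no condition means an unconditional transition
    # Empty __builtins__ keeps the expression limited to ctx lookups and operators.
    return bool(eval(condition, {"__builtins__": {}}, {"ctx": ctx}))

print(should_transition("ctx['tone'] == 'light'", {"tone": "light"}))          # True
print(should_transition("ctx['continue_generating']", {"continue_generating": False}))  # False
```

This is the same mechanism behind the `check_if_complete` loop: the transition back to `generate_chapter` fires only while `ctx['continue_generating']` is truthy.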
@@ -449,8 +614,11 @@ graph TD
     D --> E{To Node}
     E -->|Sequential| F[Single Node]
     E -->|Parallel| G[List of Nodes]
-…
-…
+    E -->|Branching| H[List with Conditions]
+    C --> I[Condition?]
+    I -->|Yes| J[ctx-based Logic]
+    A --> K[Convergence Nodes]
+    K --> L[Merge Points]
     style A fill:#fff0e6,stroke:#cc3300,stroke-width:2px
     style B fill:#ffe6cc,stroke:#cc3300
     style C fill:#ffe6cc,stroke:#cc3300
@@ -458,17 +626,21 @@ graph TD
     style E fill:#fff,stroke:#333
     style F fill:#ffd9b3,stroke:#cc3300
     style G fill:#ffd9b3,stroke:#cc3300
-    style H fill:#…
-    style I fill:#…
+    style H fill:#ffd9b3,stroke:#cc3300
+    style I fill:#fff,stroke:#333
+    style J fill:#ffd9b3,stroke:#cc3300
+    style K fill:#ffe6cc,stroke:#cc3300
+    style L fill:#ffd9b3,stroke:#cc3300
 ```
 
 ---
 
-## …
+## 8. Workflow Validation 🕵️‍♀️
 
 `validate_workflow_definition()` ensures integrity:
-- Checks node connectivity, circular references, undefined nodes, …
-- …
+- Checks node connectivity, circular references, undefined nodes, missing start.
+- Validates branch conditions, convergence points (at least two incoming transitions), and input mappings.
+- Returns `NodeError` objects (`node_name`, `description`).
 
 ### Example
 ```python
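The convergence rule above (a convergence node should have at least two incoming transitions) amounts to a counting pass over the transition list. The field names below follow the YAML schema shown earlier, but the helper itself is an illustrative sketch, not `flow_validator`'s actual code:

```python
from collections import Counter

def convergence_issues(transitions, convergence_nodes):
    """Return convergence nodes with fewer than two incoming transitions."""
    incoming = Counter()
    for t in transitions:
        to = t["to_node"]
        targets = to if isinstance(to, list) else [to]  # branching gives a list
        for target in targets:
            name = target["to_node"] if isinstance(target, dict) else target
            incoming[name] += 1
    return [n for n in convergence_nodes if incoming[n] < 2]

transitions = [
    {"from_node": "generate_chapter", "to_node": "summarize_chapter"},
    {"from_node": "generate_dramatic_chapter", "to_node": "summarize_chapter"},
    {"from_node": "summarize_chapter", "to_node": "finalize_story"},
]
print(convergence_issues(transitions, ["summarize_chapter"]))  # []
print(convergence_issues(transitions, ["finalize_story"]))     # ['finalize_story']
```

In the real validator such findings would surface as `NodeError` objects rather than bare names.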
@@ -480,9 +652,9 @@ if issues:
 
 ---
 
-## …
+## 9. Observers 👀
 
-Monitor events like node starts or failures.
+Monitor events like node starts, completions, or failures.
 
 ### Example
 From the story generator:
@@ -493,26 +665,28 @@ observers:
 
 ---
 
-## …
+## 10. Context 📦
 
-The `ctx` dictionary shares data:
-- `generate_outline` → `ctx["outline"]`
-- `…
+The `ctx` dictionary shares data, enhanced by input mappings:
+- `generate_outline` → `ctx["outline"]` (mapped from `story_genre`, `chapter_count`)
+- `summarize_chapter` → `ctx["chapter_summary"]` (mapped from `completed_chapters`)
+- `finalize_story` → `ctx["final_story"]`
 
 ---
 
-## …
+## 11. Execution Flow 🏃‍♂️
 
 The `WorkflowEngine`:
 1. Starts at `workflow.start`.
-2. Executes nodes, …
-3. Follows transitions based on conditions.
-4. …
-5. …
+2. Executes nodes, applying input mappings and updating `ctx`.
+3. Follows transitions (sequential, parallel, or branching) based on conditions.
+4. Converges at specified nodes.
+5. Notifies observers.
+6. Ends when transitions are exhausted.
 
 ---
 
-## …
+## 12. Converting Between Python and YAML 🔄
 
 ### Python to YAML (`flow_extractor.py`)
 ```python
@@ -527,7 +701,8 @@ WorkflowManager(wf_def).save_to_yaml("story_generator_workflow.yaml")
 ```python
 from quantalogic.flow.flow_generator import generate_executable_script
 
-manager = WorkflowManager()…
+manager = WorkflowManager()
+manager.load_from_yaml("story_generator_workflow.yaml")
 generate_executable_script(manager.workflow, {}, "standalone_story.py")
 ```
 
@@ -546,19 +721,29 @@ graph TD
 
 ---
 
-## …
+## 13. WorkflowManager 🧑‍💻
 
-Programmatic workflow creation:
+Programmatic workflow creation with new features:
 ```python
 manager = WorkflowManager()
-manager.add_node(…
+manager.add_node(
+    "start",
+    llm_config={"model": "grok/xai", "prompt_template": "Say hi to {name}"},
+    inputs_mapping={"name": "user_name"}
+)
+manager.add_node(
+    "format",
+    template_config={"template": "Message: {text}"},
+    inputs_mapping={"text": "start_result"}
+)
 manager.set_start_node("start")
+manager.add_transition("start", "format")
+manager.add_convergence_node("format")
 manager.save_to_yaml("hi.yaml")
 ```
 
 ---
 
-## …
-
-The Quantalogic Flow YAML DSL (March 2, 2025) is a powerful tool for workflow automation, exemplified by the Story Generator case study. With support for LLMs, flexible flows, and conversion tools, it bridges Python and YAML seamlessly. Whether you’re crafting stories or processing orders, this DSL, paired with `WorkflowManager`, is your key to efficient, scalable workflows. 🚀
+## 14. Conclusion 🎉
 
+The Quantalogic Flow YAML DSL (March 5, 2025) is a powerful, flexible tool for workflow automation, exemplified by the updated Story Generator case study. With new **input mapping** and **template nodes**, alongside LLMs, sub-workflows, branching, convergence, and conversion tools, it seamlessly bridges Python and YAML. Whether crafting dynamic stories with formatted chapters or managing complex processes, this DSL, paired with `WorkflowManager`, unlocks efficient, scalable workflows. 🚀