quantalogic 0.50.29__py3-none-any.whl → 0.52.0__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- quantalogic/flow/__init__.py +17 -0
- quantalogic/flow/flow.py +9 -7
- quantalogic/flow/flow_extractor.py +32 -100
- quantalogic/flow/flow_generator.py +10 -3
- quantalogic/flow/flow_manager.py +88 -33
- quantalogic/flow/flow_manager_schema.py +3 -4
- quantalogic/flow/flow_mermaid.py +240 -0
- quantalogic/flow/flow_validator.py +335 -0
- quantalogic/flow/flow_yaml.md +393 -322
- quantalogic/tools/__init__.py +3 -2
- quantalogic/tools/tool.py +129 -3
- quantalogic-0.52.0.dist-info/METADATA +787 -0
- {quantalogic-0.50.29.dist-info → quantalogic-0.52.0.dist-info}/RECORD +16 -14
- quantalogic-0.50.29.dist-info/METADATA +0 -554
- {quantalogic-0.50.29.dist-info → quantalogic-0.52.0.dist-info}/LICENSE +0 -0
- {quantalogic-0.50.29.dist-info → quantalogic-0.52.0.dist-info}/WHEEL +0 -0
- {quantalogic-0.50.29.dist-info → quantalogic-0.52.0.dist-info}/entry_points.txt +0 -0
quantalogic/flow/flow_yaml.md
CHANGED
@@ -1,419 +1,490 @@
# Quantalogic Flow YAML DSL Specification 🚀

## 1. Introduction 🌟

The **Quantalogic Flow YAML DSL** is a human-readable, declarative language for defining workflows within the `quantalogic.flow` Python package. As of **March 2, 2025**, it empowers developers to automate tasks with a rich feature set:

- **Function Execution** ⚙️: Run async Python functions from embedded code, PyPI, local files, or URLs.
- **Execution Flow** ➡️: Support sequential, conditional, and parallel transitions.
- **Sub-Workflows** 🌳: Enable hierarchical, modular designs.
- **LLM Integration** 🤖: Harness Large Language Models for text or structured outputs.
- **Context Management** 📦: Share state dynamically across nodes.
- **Robustness** 🛡️: Include retries, delays, and timeouts.
- **Observers** 👀: Monitor execution with custom handlers.
- **Programmatic Control** 🧑‍💻: Manage workflows via `WorkflowManager`.

This DSL integrates with the `Workflow`, `WorkflowEngine`, and `Nodes` classes, making it ideal for everything from simple scripts to AI-driven workflows. To illustrate, we’ll use a **Story Generator Workflow** as a running example, derived from `examples/qflow/story_generator_agent.py`. Let’s dive in! 🎉

```mermaid
graph TD
    A[YAML Workflow File] -->|Defines| B[functions ⚙️]
    A -->|Configures| C[nodes 🧩]
    A -->|Orchestrates| D[workflow 🌐]
    style A fill:#f9f9ff,stroke:#333,stroke-width:2px,stroke-dasharray:5
    style B fill:#e6f3ff,stroke:#0066cc
    style C fill:#e6ffe6,stroke:#009933
    style D fill:#fff0e6,stroke:#cc3300
```

---
## 2. Workflow Structure 🗺️

A workflow YAML file is divided into three core sections:

- **`functions`**: Python code definitions.
- **`nodes`**: Task specifications.
- **`workflow`**: Flow orchestration.

Here’s the skeleton:

```yaml
functions:
  # Python magic ✨
nodes:
  # Tasks 🎯
workflow:
  # Flow control 🚦
observers:
  # Event watchers 👀 (optional)
```

### Story Generator Example

Imagine a workflow that generates a multi-chapter story. We’ll build it step-by-step, starting with its Python form (`story_generator_agent.py`), then its YAML equivalent.

---
## 3. Case Study: Story Generator Workflow 📖

### Python Version (`story_generator_agent.py`)

This script generates a story outline and chapters iteratively:

```python
#!/usr/bin/env python
from quantalogic.flow import Nodes, Workflow
import anyio

MODEL = "gemini/gemini-2.0-flash"
DEFAULT_LLM_PARAMS = {"model": MODEL, "temperature": 0.7, "max_tokens": 1000}

@Nodes.llm_node(system_prompt="You are a creative writer skilled at generating stories.",
                prompt_template="Create a story outline for a {genre} story with {num_chapters} chapters.",
                output="outline", **DEFAULT_LLM_PARAMS)
def generate_outline(genre, num_chapters):
    return {}

@Nodes.llm_node(system_prompt="You are a creative writer.",
                prompt_template="Write chapter {chapter_num} for this story outline: {outline}. Style: {style}.",
                output="chapter", **DEFAULT_LLM_PARAMS)
def generate_chapter(outline, chapter_num, style):
    return {}

@Nodes.define(output="updated_context")
async def update_progress(**context):
    chapters = context.get('chapters', [])
    completed_chapters = context.get('completed_chapters', 0)
    chapter = context.get('chapter', '')
    updated_chapters = chapters + [chapter]
    return {**context, "chapters": updated_chapters, "completed_chapters": completed_chapters + 1}

@Nodes.define(output="continue_generating")
async def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
    return completed_chapters < num_chapters

workflow = (
    Workflow("generate_outline")
    .then("generate_chapter")
    .then("update_progress")
    .then("check_if_complete")
    .then("generate_chapter", condition=lambda ctx: ctx.get("continue_generating", False))
    .then("update_progress")
    .then("check_if_complete")
)

def story_observer(event_type, data=None):
    print(f"Event: {event_type} - Data: {data}")

workflow.add_observer(story_observer)

if __name__ == "__main__":
    async def main():
        initial_context = {
            "genre": "science fiction",
            "num_chapters": 3,
            "chapters": [],
            "completed_chapters": 0,
            "style": "descriptive"
        }
        engine = workflow.build()
        result = await engine.run(initial_context)
        print(f"Completed chapters: {result.get('completed_chapters', 0)}")

    anyio.run(main)
```
### YAML Version (`story_generator_workflow.yaml`)

Here’s the equivalent YAML:

```yaml
functions:
  generate_outline:
    type: embedded
    code: |
      async def generate_outline(genre: str, num_chapters: int) -> str:
          return ""
  generate_chapter:
    type: embedded
    code: |
      async def generate_chapter(outline: str, chapter_num: int, style: str) -> str:
          return ""
  update_progress:
    type: embedded
    code: |
      async def update_progress(**context):
          chapters = context.get('chapters', [])
          completed_chapters = context.get('completed_chapters', 0)
          chapter = context.get('chapter', '')
          return {**context, "chapters": chapters + [chapter], "completed_chapters": completed_chapters + 1}
  check_if_complete:
    type: embedded
    code: |
      async def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
          return completed_chapters < num_chapters
  story_observer:
    type: embedded
    code: |
      def story_observer(event_type, data=None):
          print(f"Event: {event_type} - Data: {data}")

nodes:
  generate_outline:
    llm_config:
      model: "gemini/gemini-2.0-flash"
      system_prompt: "You are a creative writer skilled at generating stories."
      prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
      temperature: 0.7
      max_tokens: 1000
    output: outline
  generate_chapter:
    llm_config:
      model: "gemini/gemini-2.0-flash"
      system_prompt: "You are a creative writer."
      prompt_template: "Write chapter {chapter_num} for this story outline: {outline}. Style: {style}."
      temperature: 0.7
      max_tokens: 1000
    output: chapter
  update_progress:
    function: update_progress
    output: updated_context
  check_if_complete:
    function: check_if_complete
    output: continue_generating

workflow:
  start: generate_outline
  transitions:
    - from_node: generate_outline
      to_node: generate_chapter
    - from_node: generate_chapter
      to_node: update_progress
    - from_node: update_progress
      to_node: check_if_complete
    - from_node: check_if_complete
      to_node: generate_chapter
      condition: "ctx['continue_generating']"

observers:
  - story_observer
```
### Mermaid Diagram: Story Generator Flow

```mermaid
graph TD
    A[generate_outline] --> B[generate_chapter]
    B --> C[update_progress]
    C --> D[check_if_complete]
    D -->|"ctx['continue_generating']"| B
    D -->|else| E[End]
    style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style B fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style C fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style D fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style E fill:#fff0e6,stroke:#cc3300,stroke-width:2px
```

#### Execution
With `initial_context = {"genre": "science fiction", "num_chapters": 3, "chapters": [], "completed_chapters": 0, "style": "descriptive"}`:
1. `generate_outline` creates an outline.
2. `generate_chapter` writes a chapter.
3. `update_progress` updates the chapter list and count.
4. `check_if_complete` loops back if more chapters are needed.
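The loop in steps 3 and 4 is plain context bookkeeping, so it can be simulated without the engine. A minimal sketch (pure Python, no `quantalogic` imports; the LLM output is replaced by a stub string):

```python
import asyncio

# Copies of the two embedded functions from the YAML above; the LLM node
# output is stubbed so the loop logic can run stand-alone.
async def update_progress(**context):
    chapters = context.get('chapters', [])
    completed_chapters = context.get('completed_chapters', 0)
    chapter = context.get('chapter', '')
    return {**context, "chapters": chapters + [chapter], "completed_chapters": completed_chapters + 1}

async def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
    return completed_chapters < num_chapters

async def simulate():
    ctx = {"genre": "science fiction", "num_chapters": 3,
           "chapters": [], "completed_chapters": 0, "style": "descriptive"}
    while True:
        ctx["chapter"] = "stub chapter"  # stands in for generate_chapter's output
        ctx = await update_progress(**ctx)
        if not await check_if_complete(**ctx):
            break
    return ctx

ctx = asyncio.run(simulate())
print(ctx["completed_chapters"])  # 3
```

After three iterations `check_if_complete` returns `False` and the loop ends, mirroring the `else` branch in the diagram above.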
---

## 4. Functions ⚙️

The `functions` section defines Python code for reuse.

### Fields 📋
- `type` (string, required): `"embedded"` or `"external"`.
- `code` (string, optional): Inline code for `embedded`.
- `module` (string, optional): Source for `external` (PyPI package, local path, or URL).
- `function` (string, optional): Function name in `module`.

### Rules ✅
- Embedded: use `async def`; the function name must match its key.
- External: requires `module` and `function`, no `code`.

### Examples 🌈
From the story generator:
```yaml
functions:
  update_progress:
    type: embedded
    code: |
      async def update_progress(**context):
          chapters = context.get('chapters', [])
          completed_chapters = context.get('completed_chapters', 0)
          chapter = context.get('chapter', '')
          return {**context, "chapters": chapters + [chapter], "completed_chapters": completed_chapters + 1}
```
External example:
```yaml
functions:
  fetch:
    type: external
    module: requests
    function: get
```
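An `external` entry like `fetch` above is just a `(module, function)` pair. A sketch of how such a reference can be resolved with `importlib`; this is illustrative only, not the package's actual loader (which also handles local file paths and URLs):

```python
import importlib

def resolve_external(module: str, function: str):
    """Resolve an external function reference (illustrative sketch)."""
    mod = importlib.import_module(module)  # the module must already be installed
    return getattr(mod, function)

# Using a stdlib module so the example runs anywhere:
dumps = resolve_external("json", "dumps")
print(dumps({"type": "external"}))  # {"type": "external"}
```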
```mermaid
graph TD
    A[Function Definition] --> B{Type?}
    B -->|embedded| C[Code: async def ...]
    B -->|external| D[Module: PyPI, Path, URL]
    D --> E[Function Name]
    style A fill:#e6f3ff,stroke:#0066cc,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#cce6ff,stroke:#0066cc
    style D fill:#cce6ff,stroke:#0066cc
    style E fill:#cce6ff,stroke:#0066cc
```

---
## 5. Nodes 🧩

Nodes are the tasks, powered by functions, sub-workflows, or LLMs.

### Fields 📋
- `function` (string, optional): Links to `functions`.
- `sub_workflow` (object, optional):
  - `start` (string)
  - `transitions` (list)
- `llm_config` (object, optional):
  - `model` (string, default: `"gpt-3.5-turbo"`)
  - `system_prompt` (string, optional)
  - `prompt_template` (string, default: `"{{ input }}"`)
  - `temperature` (float, default: `0.7`)
  - `max_tokens` (int, optional)
  - `top_p` (float, default: `1.0`)
  - `presence_penalty` (float, default: `0.0`)
  - `frequency_penalty` (float, default: `0.0`)
  - `response_model` (string, optional)
- `output` (string, optional): Context key.
- `retries` (int, default: `3`)
- `delay` (float, default: `1.0`)
- `timeout` (float/null, default: `null`)
- `parallel` (bool, default: `false`)

### Rules ✅
- Exactly one of `function`, `sub_workflow`, or `llm_config` per node.
- LLM node inputs come from the placeholders in `prompt_template`.

### Examples 🌈
From the story generator:
```yaml
nodes:
  generate_outline:
    llm_config:
      model: "gemini/gemini-2.0-flash"
      system_prompt: "You are a creative writer skilled at generating stories."
      prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
      temperature: 0.7
      max_tokens: 1000
    output: outline
```
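Per the rule above, an LLM node's inputs are inferred from its `prompt_template` placeholders. A sketch of that inference using `string.Formatter` (the mechanism shown here is an assumption, included only to make the rule concrete):

```python
from string import Formatter

def template_inputs(prompt_template):
    """List the placeholder names a prompt_template requires."""
    return [field for _, field, _, _ in Formatter().parse(prompt_template) if field]

template = "Create a story outline for a {genre} story with {num_chapters} chapters."
print(template_inputs(template))  # ['genre', 'num_chapters']
```

So the `generate_outline` node above needs `genre` and `num_chapters` in the context before it runs.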
```mermaid
graph TD
    A[Node] --> B{Type?}
    B -->|function| C[Function Ref]
    B -->|sub_workflow| D[Start + Transitions]
    B -->|llm_config| E[LLM Setup]
    E --> F{Structured?}
    F -->|Yes| G[response_model]
    F -->|No| H[Plain Text]
    style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#ccffcc,stroke:#009933
    style D fill:#ccffcc,stroke:#009933
    style E fill:#ccffcc,stroke:#009933
    style F fill:#fff,stroke:#333
    style G fill:#b3ffb3,stroke:#009933
    style H fill:#b3ffb3,stroke:#009933
```

---
## 6. Workflow 🌐

The `workflow` section defines the execution order.

### Fields 📋
- `start` (string, optional): First node.
- `transitions` (list):
  - `from_node` (string)
  - `to_node` (string or list; a list runs the targets in parallel)
  - `condition` (string, optional)

### Example 🌈
From the story generator:
```yaml
workflow:
  start: generate_outline
  transitions:
    - from_node: generate_outline
      to_node: generate_chapter
    - from_node: generate_chapter
      to_node: update_progress
    - from_node: update_progress
      to_node: check_if_complete
    - from_node: check_if_complete
      to_node: generate_chapter
      condition: "ctx['continue_generating']"
```
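A `condition` is a Python expression evaluated with the shared context bound to `ctx`, so `"ctx['continue_generating']"` above loops back only while that key is truthy. A minimal sketch of such an evaluation (illustrative; the engine's actual evaluation strategy may differ):

```python
def should_transition(condition, ctx):
    """Return True when a transition should fire (no condition means always)."""
    if condition is None:
        return True
    # Evaluate the expression with only `ctx` in scope.
    return bool(eval(condition, {"__builtins__": {}}, {"ctx": ctx}))

print(should_transition("ctx['continue_generating']", {"continue_generating": True}))   # True
print(should_transition("ctx['continue_generating']", {"continue_generating": False}))  # False
print(should_transition(None, {}))                                                      # True
```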
```mermaid
graph TD
    A[Workflow] --> B[Start Node]
    A --> C[Transitions]
    C --> D[From Node]
    D --> E{To Node}
    E -->|Sequential| F[Single Node]
    E -->|Parallel| G[List of Nodes]
    C --> H[Condition?]
    H -->|Yes| I[ctx-based Logic]
    style A fill:#fff0e6,stroke:#cc3300,stroke-width:2px
    style B fill:#ffe6cc,stroke:#cc3300
    style C fill:#ffe6cc,stroke:#cc3300
    style D fill:#ffd9b3,stroke:#cc3300
    style E fill:#fff,stroke:#333
    style F fill:#ffd9b3,stroke:#cc3300
    style G fill:#ffd9b3,stroke:#cc3300
    style H fill:#fff,stroke:#333
    style I fill:#ffd9b3,stroke:#cc3300
```

---
## 7. Workflow Validation 🕵️‍♀️

`validate_workflow_definition()` checks the workflow's integrity:
- Detects connectivity problems, circular references, undefined nodes, and a missing start node.
- Returns `WorkflowIssue` objects (`node_name`, `description`).

### Example
```python
issues = validate_workflow_definition(workflow)
if issues:
    for issue in issues:
        print(f"Node '{issue.node_name}': {issue.description}")
```

---
## 8. Observers 👀

Observers monitor events such as node starts, completions, and failures.

### Example
From the story generator:
```yaml
observers:
  - story_observer
```
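An observer is just a callable that receives each event; `story_observer` in the `functions` section prints them. The sketch below collects events instead, simulating the calls the engine would make around one node (the event names here are illustrative assumptions, not the engine's actual event vocabulary):

```python
events = []

def collecting_observer(event_type, data=None):
    """Record every event instead of printing it."""
    events.append((event_type, data))

# Simulate what an engine might emit around one node execution:
collecting_observer("node_started", {"node": "generate_outline"})
collecting_observer("node_completed", {"node": "generate_outline"})
print([e[0] for e in events])  # ['node_started', 'node_completed']
```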
---
## 9. Context 📦

The `ctx` dictionary shares data across nodes:
- `generate_outline` → `ctx["outline"]`
- `update_progress` → `ctx["chapters"]`, `ctx["completed_chapters"]`

---
## 10. Execution Flow 🏃‍♂️

The `WorkflowEngine`:
1. Starts at `workflow.start`.
2. Executes nodes and updates `ctx`.
3. Follows transitions based on conditions.
4. Notifies observers.
5. Ends when transitions are exhausted.
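Those five steps can be mirrored by a toy, synchronous engine (illustrative only; the real `WorkflowEngine` is async and also supports parallel branches, retries, and observers):

```python
def run(nodes, start, transitions, ctx):
    """nodes: name -> (output_key, fn); transitions: (from, to, condition-or-None)."""
    current = start                                   # step 1: start node
    while current is not None:
        output_key, fn = nodes[current]
        ctx[output_key] = fn(ctx)                     # step 2: execute, update ctx
        nxt = None
        for from_node, to_node, condition in transitions:
            if from_node == current and (condition is None or condition(ctx)):
                nxt = to_node                         # step 3: conditional transition
                break
        current = nxt                                 # step 5: stop when none match
    return ctx

nodes = {
    "double": ("doubled", lambda ctx: ctx["value"] * 2),
    "report": ("report", lambda ctx: f"result={ctx['doubled']}"),
}
transitions = [("double", "report", None)]
ctx = run(nodes, "double", transitions, {"value": 21})
print(ctx["report"])  # result=42
```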
---
## 11. Converting Between Python and YAML 🔄

### Python to YAML (`flow_extractor.py`)
```python
from quantalogic.flow.flow_extractor import extract_workflow_from_file
from quantalogic.flow.flow_manager import WorkflowManager

wf_def, globals = extract_workflow_from_file("story_generator_agent.py")
WorkflowManager(wf_def).save_to_yaml("story_generator_workflow.yaml")
```

### YAML to Python (`flow_generator.py`)
```python
from quantalogic.flow.flow_generator import generate_executable_script
from quantalogic.flow.flow_manager import WorkflowManager

manager = WorkflowManager().load_from_yaml("story_generator_workflow.yaml")
generate_executable_script(manager.workflow, {}, "standalone_story.py")
```

```mermaid
graph TD
    A[Python Workflow] -->|flow_extractor.py| B[WorkflowDefinition]
    B -->|WorkflowManager| C[YAML File]
    C -->|WorkflowManager| D[WorkflowDefinition]
    D -->|flow_generator.py| E[Standalone Python Script]
    style A fill:#e6f3ff,stroke:#0066cc,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style D fill:#fff,stroke:#333
    style E fill:#fff0e6,stroke:#cc3300,stroke-width:2px
```

---
## 12. WorkflowManager 🧑‍💻

The `WorkflowManager` supports programmatic workflow creation:
```python
from quantalogic.flow.flow_manager import WorkflowManager

manager = WorkflowManager()
manager.add_node("start", llm_config={"model": "grok/xai", "prompt_template": "Say hi"})
manager.set_start_node("start")
manager.save_to_yaml("hi.yaml")
```

---
## 13. Conclusion 🎉

The Quantalogic Flow YAML DSL (March 2, 2025) is a powerful tool for workflow automation, exemplified by the Story Generator case study. With support for LLMs, flexible flows, and conversion tools, it bridges Python and YAML seamlessly. Whether you’re crafting stories or processing orders, this DSL, paired with `WorkflowManager`, is your key to efficient, scalable workflows. 🚀