quantalogic 0.51.0__py3-none-any.whl → 0.52.1__py3-none-any.whl
This diff shows the changes between publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the packages as they appear in their respective public registries.
- quantalogic/agent.py +1 -1
- quantalogic/flow/__init__.py +17 -0
- quantalogic/flow/flow_extractor.py +32 -103
- quantalogic/flow/flow_generator.py +6 -2
- quantalogic/flow/flow_manager.py +33 -24
- quantalogic/flow/flow_manager_schema.py +2 -3
- quantalogic/flow/flow_mermaid.py +240 -0
- quantalogic/flow/flow_validator.py +335 -0
- quantalogic/flow/flow_yaml.md +313 -329
- quantalogic/tools/__init__.py +3 -2
- quantalogic/tools/tool.py +129 -3
- {quantalogic-0.51.0.dist-info → quantalogic-0.52.1.dist-info}/METADATA +89 -2
- {quantalogic-0.51.0.dist-info → quantalogic-0.52.1.dist-info}/RECORD +16 -14
- {quantalogic-0.51.0.dist-info → quantalogic-0.52.1.dist-info}/LICENSE +0 -0
- {quantalogic-0.51.0.dist-info → quantalogic-0.52.1.dist-info}/WHEEL +0 -0
- {quantalogic-0.51.0.dist-info → quantalogic-0.52.1.dist-info}/entry_points.txt +0 -0
quantalogic/flow/flow_yaml.md
CHANGED
# Quantalogic Flow YAML DSL Specification 🚀

## 1. Introduction 🌟

The **Quantalogic Flow YAML DSL** is a human-readable, declarative language for defining workflows within the `quantalogic.flow` Python package. As of **March 2, 2025**, it empowers developers to automate tasks with a rich feature set:

- **Function Execution** ⚙️: Run async Python functions from embedded code, PyPI, local files, or URLs.
- **Execution Flow** ➡️: Support sequential, conditional, and parallel transitions.
- **Sub-Workflows** 🌳: Enable hierarchical, modular designs.
- **LLM Integration** 🤖: Harness Large Language Models for text or structured outputs.
- **Context Management** 📦: Share state dynamically across nodes.
- **Robustness** 🛡️: Include retries, delays, and timeouts.
- **Observers** 👀: Monitor execution with custom handlers.
- **Programmatic Control** 🧑‍💻: Manage workflows via `WorkflowManager`.

This DSL integrates with `Workflow`, `WorkflowEngine`, and `Nodes` classes, making it ideal for everything from simple scripts to AI-driven workflows. To illustrate, we’ll use a **Story Generator Workflow** as a running example, derived from `examples/qflow/story_generator_agent.py`. Let’s dive in! 🎉

```mermaid
graph TD
    style D fill:#fff0e6,stroke:#cc3300
```

---

## 2. Workflow Structure 🗺️

A workflow YAML file is divided into three core sections:

- **`functions`**: Python code definitions.
- **`nodes`**: Task specifications.
- **`workflow`**: Flow orchestration.

Here’s the skeleton:

```yaml
functions:
  # Python magic ✨
nodes:
  # Tasks 🎯
workflow:
  # Flow control 🚦
observers:
  # Event watchers 👀 (optional)
```
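Once such a file is parsed, the three core sections can be checked with plain Python. A minimal sanity-check sketch on an already-parsed dictionary (illustrative only; the real loader is `WorkflowManager`):

```python
# Plain-dict sketch of the skeleton above: a loaded workflow file should
# expose the three core sections; `observers` is optional.
parsed = {
    "functions": {},                                 # Python magic ✨
    "nodes": {},                                     # Tasks 🎯
    "workflow": {"start": None, "transitions": []},  # Flow control 🚦
}

required = {"functions", "nodes", "workflow"}
missing = required - parsed.keys()
print(sorted(missing))  # → []
```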

### Story Generator Example
Imagine a workflow that generates a multi-chapter story. We’ll build it step-by-step, starting with its Python form (`story_generator_agent.py`), then its YAML equivalent.

---

## 3. Case Study: Story Generator Workflow 📖

### Python Version (`story_generator_agent.py`)

This script generates a story outline and chapters iteratively:

```python
#!/usr/bin/env python
from quantalogic.flow import Nodes, Workflow
import anyio

MODEL = "gemini/gemini-2.0-flash"
DEFAULT_LLM_PARAMS = {"model": MODEL, "temperature": 0.7, "max_tokens": 1000}

@Nodes.llm_node(system_prompt="You are a creative writer skilled at generating stories.",
                prompt_template="Create a story outline for a {genre} story with {num_chapters} chapters.",
                output="outline", **DEFAULT_LLM_PARAMS)
def generate_outline(genre, num_chapters):
    return {}

@Nodes.llm_node(system_prompt="You are a creative writer.",
                prompt_template="Write chapter {chapter_num} for this story outline: {outline}. Style: {style}.",
                output="chapter", **DEFAULT_LLM_PARAMS)
def generate_chapter(outline, chapter_num, style):
    return {}

@Nodes.define(output="updated_context")
async def update_progress(**context):
    chapters = context.get('chapters', [])
    completed_chapters = context.get('completed_chapters', 0)
    chapter = context.get('chapter', '')
    updated_chapters = chapters + [chapter]
    return {**context, "chapters": updated_chapters, "completed_chapters": completed_chapters + 1}

@Nodes.define(output="continue_generating")
async def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
    return completed_chapters < num_chapters

workflow = (
    Workflow("generate_outline")
    .then("generate_chapter")
    .then("update_progress")
    .then("check_if_complete")
    .then("generate_chapter", condition=lambda ctx: ctx.get("continue_generating", False))
    .then("update_progress")
    .then("check_if_complete")
)

def story_observer(event_type, data=None):
    print(f"Event: {event_type} - Data: {data}")
workflow.add_observer(story_observer)

if __name__ == "__main__":
    async def main():
        initial_context = {
            "genre": "science fiction",
            "num_chapters": 3,
            "chapters": [],
            "completed_chapters": 0,
            "style": "descriptive"
        }
        engine = workflow.build()
        result = await engine.run(initial_context)
        print(f"Completed chapters: {result.get('completed_chapters', 0)}")
    anyio.run(main)
```

### YAML Version (`story_generator_workflow.yaml`)

Here’s the equivalent YAML:

```yaml
functions:
  generate_outline:
    type: embedded
    code: |
      async def generate_outline(genre: str, num_chapters: int) -> str:
          return ""
  generate_chapter:
    type: embedded
    code: |
      async def generate_chapter(outline: str, chapter_num: int, style: str) -> str:
          return ""
  update_progress:
    type: embedded
    code: |
      async def update_progress(**context):
          chapters = context.get('chapters', [])
          completed_chapters = context.get('completed_chapters', 0)
          chapter = context.get('chapter', '')
          return {**context, "chapters": chapters + [chapter], "completed_chapters": completed_chapters + 1}
  check_if_complete:
    type: embedded
    code: |
      async def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
          return completed_chapters < num_chapters
  story_observer:
    type: embedded
    code: |
      def story_observer(event_type, data=None):
          print(f"Event: {event_type} - Data: {data}")

nodes:
  generate_outline:
    llm_config:
      model: "gemini/gemini-2.0-flash"
      system_prompt: "You are a creative writer skilled at generating stories."
      prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
      temperature: 0.7
      max_tokens: 1000
    output: outline
  generate_chapter:
    llm_config:
      model: "gemini/gemini-2.0-flash"
      system_prompt: "You are a creative writer."
      prompt_template: "Write chapter {chapter_num} for this story outline: {outline}. Style: {style}."
      temperature: 0.7
      max_tokens: 1000
    output: chapter
  update_progress:
    function: update_progress
    output: updated_context
  check_if_complete:
    function: check_if_complete
    output: continue_generating

workflow:
  start: generate_outline
  transitions:
    - from_node: generate_outline
      to_node: generate_chapter
    - from_node: generate_chapter
      to_node: update_progress
    - from_node: update_progress
      to_node: check_if_complete
    - from_node: check_if_complete
      to_node: generate_chapter
      condition: "ctx['continue_generating']"

observers:
  - story_observer
```

### Mermaid Diagram: Story Generator Flow

```mermaid
graph TD
    A[generate_outline] --> B[generate_chapter]
    B --> C[update_progress]
    C --> D[check_if_complete]
    D -->|"ctx['continue_generating']"| B
    D -->|else| E[End]
    style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style B fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style C fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style D fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style E fill:#fff0e6,stroke:#cc3300,stroke-width:2px
```

#### Execution
With `initial_context = {"genre": "science fiction", "num_chapters": 3, "chapters": [], "completed_chapters": 0, "style": "descriptive"}`:
1. `generate_outline` creates an outline.
2. `generate_chapter` writes a chapter.
3. `update_progress` updates the chapter list and count.
4. `check_if_complete` loops back if more chapters are needed.
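The four steps above can be traced without the engine. A minimal sketch with the two LLM nodes stubbed out; `update_progress` and `check_if_complete` follow the embedded functions from the YAML, made synchronous here for brevity:

```python
# Trace of the story generator loop with the LLM nodes stubbed out.
def update_progress(**context):
    chapters = context.get('chapters', [])
    completed_chapters = context.get('completed_chapters', 0)
    chapter = context.get('chapter', '')
    return {**context, "chapters": chapters + [chapter],
            "completed_chapters": completed_chapters + 1}

def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
    return completed_chapters < num_chapters

ctx = {"genre": "science fiction", "num_chapters": 3,
       "chapters": [], "completed_chapters": 0, "style": "descriptive"}

ctx["outline"] = "stub outline"                                  # 1. generate_outline
while True:
    ctx["chapter"] = f"Chapter {ctx['completed_chapters'] + 1}"  # 2. generate_chapter (stub)
    ctx = update_progress(**ctx)                                 # 3. update_progress
    if not check_if_complete(**ctx):                             # 4. loop or stop
        break

print(ctx["completed_chapters"])  # → 3
```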

---

## 4. Functions ⚙️

The `functions` section defines Python code for reuse.

### Fields 📋
- `type` (string, required): `"embedded"` or `"external"`.
- `code` (string, optional): Inline code for `embedded`.
- `module` (string, optional): Source for `external` (PyPI, path, URL).
- `function` (string, optional): Function name in `module`.

### Rules ✅
- Embedded: Use `async def`, name matches key.
- External: Requires `module` and `function`, no `code`.

### Examples 🌈
From the story generator:
```yaml
functions:
  update_progress:
    type: embedded
    code: |
      async def update_progress(**context):
          chapters = context.get('chapters', [])
          completed_chapters = context.get('completed_chapters', 0)
          chapter = context.get('chapter', '')
          return {**context, "chapters": chapters + [chapter], "completed_chapters": completed_chapters + 1}
```
External example:
```yaml
functions:
  fetch:
    type: external
    module: requests
    function: get
```
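To see why the embedded rules matter, here is an illustrative loader sketch (not quantalogic's actual implementation): the `code` string is executed and the function is looked up under its YAML key, which is why the name must match the key:

```python
import asyncio

# Illustration only: an "embedded" function is source code whose top-level
# function name matches its YAML key.
source = '''
async def update_progress(**context):
    chapters = context.get('chapters', [])
    completed_chapters = context.get('completed_chapters', 0)
    chapter = context.get('chapter', '')
    return {**context, "chapters": chapters + [chapter], "completed_chapters": completed_chapters + 1}
'''

namespace = {}
exec(source, namespace)              # compile the embedded code
func = namespace["update_progress"]  # name must match the YAML key

ctx = {"chapters": [], "completed_chapters": 0, "chapter": "Chapter 1"}
result = asyncio.run(func(**ctx))    # embedded functions are async def
print(result["completed_chapters"])  # → 1
```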

```mermaid
graph TD
    style E fill:#cce6ff,stroke:#0066cc
```

---

## 5. Nodes 🧩

Nodes are the tasks, powered by functions, sub-workflows, or LLMs.

### Fields 📋
- `function` (string, optional): Links to `functions`.
- `sub_workflow` (object, optional):
  - `start` (string)
  - `transitions` (list)
- `llm_config` (object, optional):
  - `model` (string, default: `"gpt-3.5-turbo"`)
  - `system_prompt` (string, optional)
  - `prompt_template` (string, default: `"{{ input }}"`)
  - `temperature` (float, default: `0.7`)
  - `max_tokens` (int, optional)
  - `top_p` (float, default: `1.0`)
  - `presence_penalty` (float, default: `0.0`)
  - `frequency_penalty` (float, default: `0.0`)
  - `response_model` (string, optional)
- `output` (string, optional): Context key.
- `retries` (int, default: `3`)
- `delay` (float, default: `1.0`)
- `timeout` (float/null, default: `null`)
- `parallel` (bool, default: `false`)

### Rules ✅
- One of `function`, `sub_workflow`, or `llm_config` per node.
- LLM inputs come from `prompt_template`.

### Examples 🌈
From the story generator:
```yaml
nodes:
  generate_outline:
    llm_config:
      model: "gemini/gemini-2.0-flash"
      system_prompt: "You are a creative writer skilled at generating stories."
      prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
      temperature: 0.7
      max_tokens: 1000
    output: outline
```

```mermaid
graph TD
    style H fill:#b3ffb3,stroke:#009933
```
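Since LLM inputs come from `prompt_template` placeholders, the input names can be recovered with the standard library's `string.Formatter` (an illustrative sketch, not quantalogic's parser):

```python
import string

# Derive a node's input names from its prompt_template placeholders.
template = "Create a story outline for a {genre} story with {num_chapters} chapters."

inputs = [field for _, field, _, _ in string.Formatter().parse(template) if field]
print(inputs)  # → ['genre', 'num_chapters']
```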

---

## 6. Workflow 🌐

The `workflow` section defines execution order.

### Fields 📋
- `start` (string, optional): First node.
- `transitions` (list):
  - `from_node` (string)
  - `to_node` (string/list)
  - `condition` (string, optional)

### Example 🌈
From the story generator:
```yaml
workflow:
  start: generate_outline
  transitions:
    - from_node: generate_outline
      to_node: generate_chapter
    - from_node: generate_chapter
      to_node: update_progress
    - from_node: update_progress
      to_node: check_if_complete
    - from_node: check_if_complete
      to_node: generate_chapter
      condition: "ctx['continue_generating']"
```
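A `condition` is a Python expression evaluated against the context. Conceptually (a hedged sketch; the engine's actual evaluation mechanism may differ):

```python
# The condition string from the transition above, evaluated against ctx.
condition = "ctx['continue_generating']"

ctx = {"continue_generating": True}
take_branch = eval(condition, {"ctx": ctx})  # True → loop back to generate_chapter
print(take_branch)  # → True
```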

```mermaid
graph TD
    A[Workflow] --> B[Start Node]
    A --> C[Transitions]
    C --> D[From Node]
    D --> E{To Node}
    E -->|Sequential| F[Single Node]
    E -->|Parallel| G[List of Nodes]
    C --> H[Condition?]
    style I fill:#ffd9b3,stroke:#cc3300
```

---

## 7. Workflow Validation 🕵️‍♀️

`validate_workflow_definition()` ensures integrity:
- Checks node connectivity, circular references, undefined nodes, and missing start.
- Returns `WorkflowIssue` objects (`node_name`, `description`).

### Example
```python
issues = validate_workflow_definition(workflow)
if issues:
    for issue in issues:
        print(f"Node '{issue.node_name}': {issue.description}")
```
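In the same spirit, a toy connectivity check — flagging transitions that reference undefined nodes (illustrative only; `validate_workflow_definition()` covers more cases):

```python
# Toy validation sketch: every transition endpoint must be a defined node.
nodes = {"generate_outline", "generate_chapter", "update_progress", "check_if_complete"}
transitions = [("generate_outline", "generate_chapter"),
               ("generate_chapter", "update_progress"),
               ("update_progress", "check_if_compleet")]  # deliberate typo

issues = [(src, dst) for src, dst in transitions
          if src not in nodes or dst not in nodes]
print(issues)  # → [('update_progress', 'check_if_compleet')]
```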

---

## 8. Observers 👀

Monitor events like node starts or failures.

### Example
From the story generator:
```yaml
observers:
  - story_observer
```
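Conceptually, observers are just callables the engine invokes per event. A minimal sketch reusing the `story_observer` signature from the case study (the event name `node_started` is an assumption for illustration):

```python
# Observer-pattern sketch: the engine calls each registered observer per event.
events = []

def story_observer(event_type, data=None):
    events.append(f"Event: {event_type} - Data: {data}")

observers = [story_observer]
for obs in observers:  # what the engine does when an event fires
    obs("node_started", {"node": "generate_outline"})

print(events[0])
```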

---

## 9. Context 📦

The `ctx` dictionary shares data:
- `generate_outline` → `ctx["outline"]`
- `update_progress` → `ctx["chapters"]`, `ctx["completed_chapters"]`
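Each node's `output` field names the context key its return value is stored under — a minimal illustration:

```python
# Nodes communicate only through ctx; `output` names the key the engine writes.
ctx = {"genre": "science fiction", "num_chapters": 3}

output_key = "outline"         # from the generate_outline node spec
node_result = "stub outline"   # value the node returned
ctx[output_key] = node_result

print("outline" in ctx)  # → True
```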

---

## 10. Execution Flow 🏃‍♂️

The `WorkflowEngine`:
1. Starts at `workflow.start`.
2. Executes nodes, updates `ctx`.
3. Follows transitions based on conditions.
4. Notifies observers.
5. Ends when transitions are exhausted.
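The five steps condense into a small loop. A sketch under simplified assumptions (single-target transitions, `eval`-based condition strings) — not the real `WorkflowEngine`:

```python
# Toy engine loop: walk transitions from the start node, honoring conditions.
transitions = {
    "a": [("b", None)],               # unconditional
    "b": [("c", "ctx['go_on']")],     # conditional
}
ctx = {"go_on": True, "visited": []}

node = "a"                                          # 1. start node
while node is not None:
    ctx["visited"].append(node)                     # 2. execute node, update ctx
    nxt = None
    for target, cond in transitions.get(node, []):  # 3. follow transitions
        if cond is None or eval(cond, {"ctx": ctx}):
            nxt = target
            break
    node = nxt                                      # 5. stop when none match

print(ctx["visited"])  # → ['a', 'b', 'c']
```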

---

## 11. Converting Between Python and YAML 🔄

### Python to YAML (`flow_extractor.py`)
```python
from quantalogic.flow.flow_extractor import extract_workflow_from_file
from quantalogic.flow.flow_manager import WorkflowManager

wf_def, globals = extract_workflow_from_file("story_generator_agent.py")
WorkflowManager(wf_def).save_to_yaml("story_generator_workflow.yaml")
```

### YAML to Python (`flow_generator.py`)
```python
from quantalogic.flow.flow_generator import generate_executable_script

manager = WorkflowManager().load_from_yaml("story_generator_workflow.yaml")
generate_executable_script(manager.workflow, {}, "standalone_story.py")
```

```mermaid
graph TD
    style E fill:#fff0e6,stroke:#cc3300,stroke-width:2px
```

---

## 12. WorkflowManager 🧑‍💻

Programmatic workflow creation:
```python
manager = WorkflowManager()
manager.add_node("start", llm_config={"model": "grok/xai", "prompt_template": "Say hi"})
manager.set_start_node("start")
manager.save_to_yaml("hi.yaml")
```
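Under the hood, the manager accumulates a workflow definition. A toy stand-in makes the accumulated shape visible (`ToyManager` is hypothetical, not the real class):

```python
# Toy stand-in for WorkflowManager, showing the definition it builds up.
class ToyManager:
    def __init__(self):
        self.definition = {"functions": {}, "nodes": {},
                           "workflow": {"start": None, "transitions": []}}

    def add_node(self, name, **fields):
        self.definition["nodes"][name] = fields

    def set_start_node(self, name):
        self.definition["workflow"]["start"] = name

m = ToyManager()
m.add_node("start", llm_config={"model": "grok/xai", "prompt_template": "Say hi"})
m.set_start_node("start")
print(m.definition["workflow"]["start"])  # → start
```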

---

## 13. Conclusion 🎉

The Quantalogic Flow YAML DSL (March 2, 2025) is a powerful tool for workflow automation, exemplified by the Story Generator case study. With support for LLMs, flexible flows, and conversion tools, it bridges Python and YAML seamlessly. Whether you’re crafting stories or processing orders, this DSL, paired with `WorkflowManager`, is your key to efficient, scalable workflows. 🚀