quantalogic 0.50.28__py3-none-any.whl → 0.51.0__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- quantalogic/agent.py +9 -11
- quantalogic/flow/flow.py +9 -7
- quantalogic/flow/flow_extractor.py +5 -2
- quantalogic/flow/flow_generator.py +5 -2
- quantalogic/flow/flow_manager.py +61 -15
- quantalogic/flow/flow_manager_schema.py +1 -1
- quantalogic/flow/flow_yaml.md +349 -262
- quantalogic/prompts/memory_compaction_prompt.j2 +16 -0
- quantalogic-0.51.0.dist-info/METADATA +700 -0
- {quantalogic-0.50.28.dist-info → quantalogic-0.51.0.dist-info}/RECORD +13 -12
- quantalogic-0.50.28.dist-info/METADATA +0 -554
- {quantalogic-0.50.28.dist-info → quantalogic-0.51.0.dist-info}/LICENSE +0 -0
- {quantalogic-0.50.28.dist-info → quantalogic-0.51.0.dist-info}/WHEEL +0 -0
- {quantalogic-0.50.28.dist-info → quantalogic-0.51.0.dist-info}/entry_points.txt +0 -0
quantalogic/flow/flow_yaml.md
CHANGED
@@ -1,419 +1,506 @@
# Quantalogic Flow YAML DSL Specification 🚀

## 1. Introduction 🌟

Welcome to the **Quantalogic Flow YAML DSL**—a powerful, human-readable way to craft workflows with the `quantalogic.flow` package! As of **March 1, 2025**, this DSL brings a suite of exciting features to automate complex tasks with ease:

- **Function Execution** ⚙️: Run async Python functions—embedded or sourced from PyPI, local files, or URLs.
- **Execution Flow** ➡️: Define sequential, conditional, and parallel transitions.
- **Sub-Workflows** 🌳: Build hierarchical workflows for modularity.
- **LLM Integration** 🤖: Leverage Large Language Models with plain text or structured outputs.
- **Context Management** 📦: Share state across nodes via a dynamic context.
- **Robustness** 🛡️: Add retries, delays, and timeouts for reliability.
- **Observers** 👀: Monitor execution with custom event handlers.
- **Programmatic Power** 🧑‍💻: Control everything via the `WorkflowManager`.

This DSL integrates seamlessly with `Workflow`, `WorkflowEngine`, and `Nodes` classes, powering everything from simple scripts to AI-driven workflows. Let’s dive in! 🎉

```mermaid
graph TD
    A[YAML Workflow File] -->|Defines| B[functions ⚙️]
    A -->|Configures| C[nodes 🧩]
    A -->|Orchestrates| D[workflow 🌐]
    style A fill:#f9f9ff,stroke:#333,stroke-width:2px,stroke-dasharray:5
    style B fill:#e6f3ff,stroke:#0066cc
    style C fill:#e6ffe6,stroke:#009933
    style D fill:#fff0e6,stroke:#cc3300
```

## 2. Workflow Structure 🗺️

A workflow YAML file is split into three core sections:

- **`functions`**: Your toolbox of Python functions.
- **`nodes`**: The building blocks (tasks) of your workflow.
- **`workflow`**: The roadmap tying it all together.

Here’s the skeleton:

```yaml
functions:
  # Your Python magic ✨
nodes:
  # Tasks to execute 🎯
workflow:
  # Flow control 🚦
```
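
The skeleton above is plain YAML, so any standard parser can sanity-check it before handing it to the engine. A minimal sketch (assuming PyYAML, which is not part of the DSL itself):

```python
import yaml  # assumption: PyYAML is installed (pip install pyyaml)

SKELETON = """\
functions: {}   # Your Python magic ✨
nodes: {}       # Tasks to execute 🎯
workflow:       # Flow control 🚦
  start: null
  transitions: []
"""

doc = yaml.safe_load(SKELETON)
# A well-formed workflow file exposes exactly these three top-level sections.
assert set(doc) == {"functions", "nodes", "workflow"}
```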

## 3. Functions ⚙️

The `functions` section defines reusable Python code—either embedded in the YAML or pulled from external sources.

### Fields 📋

- `type` (string, required): `"embedded"` (inline code) or `"external"` (module-based).
- `code` (string, optional): Multi-line Python code for `embedded`. Use `|` for readability!
- `module` (string, optional): Source for `external`. Options:
  - PyPI package (e.g., `"requests"`).
  - Local path (e.g., `"/path/to/module.py"`).
  - URL (e.g., `"https://example.com/script.py"`).
- `function` (string, optional): Function name in the module (for `external`).

### Rules ✅

- Embedded functions must be `async def` and match their dictionary key.
- External functions need `module` and `function`; no `code` allowed.
- PyPI modules must be installed (e.g., `pip install requests`).
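
To see what the embedded-function rules mean in practice, here is a hypothetical loader sketch (not quantalogic’s actual implementation) that compiles an embedded `code` string and enforces both checks:

```python
import asyncio
import inspect

def load_embedded(key: str, code: str):
    """Compile an embedded function and enforce the DSL rules:
    it must be named after its dictionary key and declared async."""
    ns: dict = {}
    exec(code, ns)  # hypothetical loader; the real one may sandbox this
    fn = ns.get(key)
    if fn is None:
        raise ValueError(f"code must define a function named {key!r}")
    if not inspect.iscoroutinefunction(fn):
        raise ValueError(f"{key!r} must be declared with 'async def'")
    return fn

greet = load_embedded("greet", "async def greet(name): return f'Hello, {name}!'")
assert asyncio.run(greet("Alice")) == "Hello, Alice!"
```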

### Examples 🌈

#### Embedded Function
```yaml
functions:
  greet:
    type: embedded
    code: |
      async def greet(name: str) -> str:
          return f"Hello, {name}!"
```

#### External from PyPI
```yaml
functions:
  fetch:
    type: external
    module: requests
    function: get
```
*Note*: Run `pip install requests` first!

#### Local File
```yaml
functions:
  analyze:
    type: external
    module: ./utils/analyze.py
    function: process_data
```

#### Remote URL
```yaml
functions:
  compute:
    type: external
    module: https://example.com/compute.py
    function: calculate
```

```mermaid
graph TD
    A[Function Definition] --> B{Type?}
    B -->|embedded| C[Code: async def ...]
    B -->|external| D[Module: PyPI, Path, URL]
    D --> E[Function Name]
    style A fill:#e6f3ff,stroke:#0066cc,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#cce6ff,stroke:#0066cc
    style D fill:#cce6ff,stroke:#0066cc
    style E fill:#cce6ff,stroke:#0066cc
```

## 4. Nodes 🧩

Nodes are the heartbeat of your workflow—each one’s a task, powered by functions, sub-workflows, or LLMs.

### Fields 📋

- `function` (string, optional): Links to a `functions` entry.
- `sub_workflow` (object, optional): Nested workflow definition.
  - `start` (string): Starting node.
  - `transitions` (list): Flow rules (see Workflow section).
- `llm_config` (object, optional): LLM setup.
  - `model` (string, default: `"gpt-3.5-turbo"`): e.g., `"gemini/gemini-2.0-flash"`.
  - `system_prompt` (string, optional): LLM’s role.
  - `prompt_template` (string, default: `"{{ input }}"`): Jinja2 template (e.g., `"Summarize {{ text }}"`).
  - `temperature` (float, default: `0.7`): Randomness (0.0–1.0).
  - `max_tokens` (int, optional): Token limit (e.g., `2000`).
  - `top_p` (float, default: `1.0`): Nucleus sampling (0.0–1.0).
  - `presence_penalty` (float, default: `0.0`): Topic repetition (-2.0–2.0).
  - `frequency_penalty` (float, default: `0.0`): Word repetition (-2.0–2.0).
  - `response_model` (string, optional): Structured output model (e.g., `"my_module:OrderDetails"`).
- `output` (string, optional): Context key for results (defaults to `<node_name>_result` for function/LLM nodes).
- `retries` (int, default: `3`): Retry attempts (≥ 0).
- `delay` (float, default: `1.0`): Seconds between retries (≥ 0).
- `timeout` (float/null, default: `null`): Max runtime in seconds.
- `parallel` (bool, default: `false`): Run concurrently?

### Rules ✅

- Exactly one of `function`, `sub_workflow`, or `llm_config` per node.
- LLM inputs come from `prompt_template` placeholders (e.g., `{{ text }}` → `text`).
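
Because LLM inputs are inferred from the template, you can preview them with Jinja2’s own introspection API (Jinja2 is already among the package’s dependencies). This sketches the idea; it is not necessarily the extractor quantalogic uses internally:

```python
from jinja2 import Environment, meta

env = Environment()
template = "Summarize: {{ text }} in {{ language }}"
# Every undeclared template variable becomes an input of the LLM node.
inputs = sorted(meta.find_undeclared_variables(env.parse(template)))
assert inputs == ["language", "text"]
```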

### Examples 🌈

#### Function Node
```yaml
nodes:
  validate:
    function: validate_order
    output: is_valid
    retries: 2
    timeout: 5.0
```

#### Sub-Workflow Node
```yaml
nodes:
  payment_flow:
    sub_workflow:
      start: pay
      transitions:
        - from: pay
          to: ship
    output: shipping_status
```

#### Plain LLM Node
```yaml
nodes:
  summarize:
    llm_config:
      model: "grok/xai"
      system_prompt: "You’re a concise summarizer."
      prompt_template: "Summarize: {{ text }}"
      temperature: 0.5
    output: summary
```

#### Structured LLM Node
```yaml
nodes:
  inventory_check:
    llm_config:
      model: "gemini/gemini-2.0-flash"
      system_prompt: "Check stock."
      prompt_template: "Items: {{ items }}"
      response_model: "inventory:StockStatus"
    output: stock
```

```mermaid
graph TD
    A[Node] --> B{Type?}
    B -->|function| C[Function Ref]
    B -->|sub_workflow| D[Start + Transitions]
    B -->|llm_config| E[LLM Setup]
    E --> F{Structured?}
    F -->|Yes| G[response_model]
    F -->|No| H[Plain Text]
    style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#ccffcc,stroke:#009933
    style D fill:#ccffcc,stroke:#009933
    style E fill:#ccffcc,stroke:#009933
    style F fill:#fff,stroke:#333
    style G fill:#b3ffb3,stroke:#009933
    style H fill:#b3ffb3,stroke:#009933
```

## 5. Workflow 🌐

The `workflow` section maps out how nodes connect and flow.

### Fields 📋

- `start` (string, optional): First node to run.
- `transitions` (list): Flow rules.
  - `from` (string): Source node.
  - `to` (string/list): Target(s)—string for sequential, list for parallel.
  - `condition` (string, optional): Python expression (e.g., `"ctx['stock'].available"`).
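
A `condition` is an ordinary Python expression evaluated with the shared context bound to the name `ctx`. A minimal sketch of the check (the engine’s real evaluation environment may be more restricted):

```python
def passes(condition: str, ctx: dict) -> bool:
    # Evaluate the transition condition with `ctx` as the only name in scope.
    return bool(eval(condition, {"__builtins__": {}}, {"ctx": ctx}))

assert passes("ctx['validity'] == 'valid'", {"validity": "valid"})
assert not passes("ctx['validity'] == 'valid'", {"validity": "invalid"})
```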

### Examples 🌈

#### Sequential Flow
```yaml
workflow:
  start: validate
  transitions:
    - from: validate
      to: process
```

#### Conditional Flow
```yaml
workflow:
  start: inventory_check
  transitions:
    - from: inventory_check
      to: payment_flow
      condition: "ctx['stock'].available"
```

#### Parallel Flow
```yaml
workflow:
  start: payment_flow
  transitions:
    - from: payment_flow
      to: [update_db, send_email]
```

```mermaid
graph TD
    A[Workflow] --> B[Start Node]
    A --> C[Transitions]
    C --> D[From]
    D --> E{To}
    E -->|Sequential| F[Single Node]
    E -->|Parallel| G[List of Nodes]
    C --> H[Condition?]
    H -->|Yes| I[ctx-based Logic]
    style A fill:#fff0e6,stroke:#cc3300,stroke-width:2px
    style B fill:#ffe6cc,stroke:#cc3300
    style C fill:#ffe6cc,stroke:#cc3300
    style D fill:#ffd9b3,stroke:#cc3300
    style E fill:#fff,stroke:#333
    style F fill:#ffd9b3,stroke:#cc3300
    style G fill:#ffd9b3,stroke:#cc3300
    style H fill:#fff,stroke:#333
    style I fill:#ffd9b3,stroke:#cc3300
```

## 6. Observers 👀

Add observers to watch workflow events (e.g., node start, completion, failures). Define them in `functions` and list them under `observers`.

### Example
```yaml
functions:
  greet:
    type: embedded
    code: |
      async def greet(name: str) -> str:
          return f"Hello, {name}!"
  log_event:
    type: embedded
    code: |
      async def log_event(event):
          print(f"{event.event_type}: {event.node_name}")
nodes:
  task:
    function: greet
workflow:
  start: task
  transitions: []
observers:
  - log_event
```
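
Conceptually, an observer is just a callable the engine invokes with an event object. The sketch below assumes a simple event shape with `event_type` and `node_name` fields (the fields used in the example above); the real event class may carry more data:

```python
from dataclasses import dataclass

@dataclass
class WorkflowEvent:
    # Hypothetical event shape, mirroring the fields used in the YAML example.
    event_type: str
    node_name: str

seen = []

def log_event(event):
    seen.append(f"{event.event_type}: {event.node_name}")

# The engine would call each registered observer as events occur:
for observer in [log_event]:
    observer(WorkflowEvent("NODE_COMPLETED", "task"))

assert seen == ["NODE_COMPLETED: task"]
```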

## 7. Context 📦

The `ctx` dictionary carries data across nodes:
- `greet` → `ctx["greeting"] = "Hello, Alice!"`
- `inventory_check` → `ctx["stock"] = StockStatus(...)`

## 8. Execution Flow 🏃‍♂️

The `WorkflowEngine` runs it all:
1. Starts at `workflow.start`.
2. Executes nodes, updating `ctx`.
3. Follows transitions based on conditions or parallel rules.
4. Notifies observers of events.
5. Stops when no transitions remain.
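
The loop above can be sketched in a few lines. This toy engine is sequential and synchronous only—no retries, parallel branches, or observers—so it illustrates the control flow, not the real async `WorkflowEngine`:

```python
def run(nodes, start, transitions, ctx):
    current = start
    while current:
        node = nodes[current]
        ctx[node["output"]] = node["fn"](ctx)          # execute node, update ctx
        current = next(                                 # follow first matching transition
            (t["to"] for t in transitions
             if t["from"] == current
             and eval(t.get("condition", "True"), {}, {"ctx": ctx})),
            None,
        )
    return ctx

nodes = {
    "validate": {"fn": lambda ctx: "valid", "output": "validity"},
    "notify": {"fn": lambda ctx: f"status: {ctx['validity']}", "output": "message"},
}
transitions = [{"from": "validate", "to": "notify",
                "condition": "ctx['validity'] == 'valid'"}]

assert run(nodes, "validate", transitions, {})["message"] == "status: valid"
```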

## 9. Converting Between Python and YAML 🔄

The `quantalogic.flow` package provides tools to bridge Python-defined workflows and YAML definitions, making your workflows portable and standalone.

### From Python to YAML with `flow_extractor.py` 📜
Want to turn a Python workflow (using `Nodes` and `Workflow`) into a YAML file? Use `quantalogic/flow/flow_extractor.py`! The `extract_workflow_from_file` function parses a Python file, extracting nodes, transitions, functions, and globals into a `WorkflowDefinition`. Then, `WorkflowManager` saves it as YAML. This is perfect for sharing or archiving workflows defined programmatically.

#### How It Works
1. **Parse**: `WorkflowExtractor` uses Python’s `ast` module to analyze the file, identifying `@Nodes` decorators (e.g., `define`, `llm_node`) and `Workflow` chaining.
2. **Extract**: It builds a `WorkflowDefinition` with nodes, transitions, embedded functions, and observers.
3. **Save**: `WorkflowManager.save_to_yaml` writes it to a YAML file.

#### Example
```python
# story_generator.py
from quantalogic.flow import Nodes, Workflow

@Nodes.define(output="greeting")
async def say_hello(name: str) -> str:
    return f"Hello, {name}!"

workflow = Workflow("say_hello")

# Convert to YAML
from quantalogic.flow.flow_extractor import extract_workflow_from_file
from quantalogic.flow.flow_manager import WorkflowManager

wf_def, global_vars = extract_workflow_from_file("story_generator.py")
manager = WorkflowManager(wf_def)
manager.save_to_yaml("story_workflow.yaml")
```
**Output (`story_workflow.yaml`)**:
```yaml
functions:
  say_hello:
    type: embedded
    code: |
      @Nodes.define(output="greeting")
      async def say_hello(name: str) -> str:
          return f"Hello, {name}!"
nodes:
  say_hello:
    function: say_hello
    output: greeting
    retries: 3
    delay: 1.0
workflow:
  start: say_hello
  transitions: []
```

### From YAML to Standalone Python with `flow_generator.py` 🐍
Need a self-contained Python script from a `WorkflowDefinition`? `quantalogic/flow/flow_generator.py` has you covered with `generate_executable_script`. It creates an executable file with embedded functions, dependencies, and a `main` function—ready to run anywhere with `uv run`.

#### How It Works
1. **Generate**: Takes a `WorkflowDefinition` and global variables.
2. **Structure**: Adds a shebang (`#!/usr/bin/env -S uv run`), dependencies, globals, functions, and workflow chaining.
3. **Execute**: Sets permissions to make it runnable.

#### Example
```python
from quantalogic.flow.flow_manager import WorkflowManager
from quantalogic.flow.flow_generator import generate_executable_script

manager = WorkflowManager()
manager.load_from_yaml("story_workflow.yaml")
generate_executable_script(manager.workflow, {}, "standalone_story.py")
```
**Output (`standalone_story.py`)**:
```python
#!/usr/bin/env -S uv run
# /// script
# requires-python = ">=3.12"
# dependencies = ["loguru", "litellm", "pydantic>=2.0", "anyio", "quantalogic>=0.35", "jinja2", "instructor[litellm]"]
# ///
import anyio
from loguru import logger
from quantalogic.flow import Nodes, Workflow

@Nodes.define(output="greeting")
async def say_hello(name: str) -> str:
    return f"Hello, {name}!"

workflow = Workflow("say_hello")

async def main():
    initial_context = {"name": "World"}
    engine = workflow.build()
    result = await engine.run(initial_context)
    logger.info(f"Workflow result: {result}")

if __name__ == "__main__":
    anyio.run(main)
```
Run it with `./standalone_story.py`—no extra setup needed (assuming `uv` is installed)!

```mermaid
graph TD
    A[Python Workflow] -->|flow_extractor.py| B[WorkflowDefinition]
    B -->|WorkflowManager| C[YAML File]
    C -->|WorkflowManager| D[WorkflowDefinition]
    D -->|flow_generator.py| E[Standalone Python Script]
    style A fill:#e6f3ff,stroke:#0066cc,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style D fill:#fff,stroke:#333
    style E fill:#fff0e6,stroke:#cc3300,stroke-width:2px
```

## 10. WorkflowManager 🧑‍💻

The `WorkflowManager` lets you build workflows programmatically:
- Add nodes, transitions, functions, and observers.
- Load/save YAML.
- Instantiate a `Workflow` object.

### Example
```python
from quantalogic.flow.flow_manager import WorkflowManager

manager = WorkflowManager()
manager.add_function("say_hi", "embedded", code="async def say_hi(name): return f'Hi, {name}!'")
manager.add_node("start", function="say_hi")
manager.set_start_node("start")
manager.save_to_yaml("hi.yaml")
```

## 11. Full Example: Order Processing 📦🤖

```yaml
functions:
  validate:
    type: embedded
    code: |
      async def validate(order: dict) -> str:
          return "valid" if order["items"] else "invalid"
  track_usage:
    type: embedded
    code: |
      def track_usage(event):
          if event.usage:
              print(f"{event.node_name}: {event.usage['total_tokens']} tokens")
nodes:
  validate_order:
    function: validate
    output: validity
  check_stock:
    llm_config:
      model: "gemini/gemini-2.0-flash"
      system_prompt: "Check inventory."
      prompt_template: "Items: {{ items }}"
      response_model: "shop:Stock"
    output: stock
  notify:
    llm_config:
      prompt_template: "Order {{ order_id }} status: {{ validity }}"
    output: message
workflow:
  start: validate_order
  transitions:
    - from: validate_order
      to: check_stock
      condition: "ctx['validity'] == 'valid'"
    - from: check_stock
      to: notify
observers:
  - track_usage
```

### Execution
With `ctx = {"order": {"items": ["book"], "order_id": "123"}}`:
1. `validate_order` → `ctx["validity"] = "valid"`
2. `check_stock` → `ctx["stock"] = Stock(...)`
3. `notify` → `ctx["message"] = "Order 123 status: valid"`
4. `track_usage` prints token usage for LLM nodes.
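
The prompt the `notify` node sends is just its Jinja2 `prompt_template` filled in from the context; rendering it by hand (with the walkthrough’s values) shows exactly what the LLM receives:

```python
from jinja2 import Template

# Values taken from the walkthrough above.
prompt = Template("Order {{ order_id }} status: {{ validity }}").render(
    order_id="123", validity="valid"
)
assert prompt == "Order 123 status: valid"
```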

```mermaid
graph TD
    A["validate_order"] -->|"ctx['validity'] == 'valid'"| B["check_stock"]
    B --> C["notify"]
    style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style B fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style C fill:#e6ffe6,stroke:#009933,stroke-width:2px
```

## 12. Conclusion 🎉

The Quantalogic Flow YAML DSL (March 1, 2025) is your go-to for crafting workflows—simple or sophisticated. With tools like `flow_extractor.py` and `flow_generator.py`, you can switch between Python and YAML effortlessly, making workflows portable and standalone. Add PyPI support, sub-workflows, LLM nodes, and observers, and you’ve got a versatile framework for automation and AI tasks. Pair it with `WorkflowManager` for maximum flexibility! 🚀