quantalogic 0.80-py3-none-any.whl → 0.93-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (55)
  1. quantalogic/flow/__init__.py +16 -34
  2. quantalogic/main.py +11 -6
  3. quantalogic/tools/tool.py +8 -922
  4. quantalogic-0.93.dist-info/METADATA +475 -0
  5. {quantalogic-0.80.dist-info → quantalogic-0.93.dist-info}/RECORD +8 -54
  6. quantalogic/codeact/TODO.md +0 -14
  7. quantalogic/codeact/__init__.py +0 -0
  8. quantalogic/codeact/agent.py +0 -478
  9. quantalogic/codeact/cli.py +0 -50
  10. quantalogic/codeact/cli_commands/__init__.py +0 -0
  11. quantalogic/codeact/cli_commands/create_toolbox.py +0 -45
  12. quantalogic/codeact/cli_commands/install_toolbox.py +0 -20
  13. quantalogic/codeact/cli_commands/list_executor.py +0 -15
  14. quantalogic/codeact/cli_commands/list_reasoners.py +0 -15
  15. quantalogic/codeact/cli_commands/list_toolboxes.py +0 -47
  16. quantalogic/codeact/cli_commands/task.py +0 -215
  17. quantalogic/codeact/cli_commands/tool_info.py +0 -24
  18. quantalogic/codeact/cli_commands/uninstall_toolbox.py +0 -43
  19. quantalogic/codeact/config.yaml +0 -21
  20. quantalogic/codeact/constants.py +0 -9
  21. quantalogic/codeact/events.py +0 -85
  22. quantalogic/codeact/examples/README.md +0 -342
  23. quantalogic/codeact/examples/agent_sample.yaml +0 -29
  24. quantalogic/codeact/executor.py +0 -186
  25. quantalogic/codeact/history_manager.py +0 -94
  26. quantalogic/codeact/llm_util.py +0 -57
  27. quantalogic/codeact/plugin_manager.py +0 -92
  28. quantalogic/codeact/prompts/error_format.j2 +0 -11
  29. quantalogic/codeact/prompts/generate_action.j2 +0 -77
  30. quantalogic/codeact/prompts/generate_program.j2 +0 -52
  31. quantalogic/codeact/prompts/response_format.j2 +0 -11
  32. quantalogic/codeact/react_agent.py +0 -318
  33. quantalogic/codeact/reasoner.py +0 -185
  34. quantalogic/codeact/templates/toolbox/README.md.j2 +0 -10
  35. quantalogic/codeact/templates/toolbox/pyproject.toml.j2 +0 -16
  36. quantalogic/codeact/templates/toolbox/tools.py.j2 +0 -6
  37. quantalogic/codeact/templates.py +0 -7
  38. quantalogic/codeact/tools_manager.py +0 -258
  39. quantalogic/codeact/utils.py +0 -62
  40. quantalogic/codeact/xml_utils.py +0 -126
  41. quantalogic/flow/flow.py +0 -1070
  42. quantalogic/flow/flow_extractor.py +0 -783
  43. quantalogic/flow/flow_generator.py +0 -322
  44. quantalogic/flow/flow_manager.py +0 -676
  45. quantalogic/flow/flow_manager_schema.py +0 -287
  46. quantalogic/flow/flow_mermaid.py +0 -365
  47. quantalogic/flow/flow_validator.py +0 -479
  48. quantalogic/flow/flow_yaml.linkedin.md +0 -31
  49. quantalogic/flow/flow_yaml.md +0 -767
  50. quantalogic/flow/templates/prompt_check_inventory.j2 +0 -1
  51. quantalogic/flow/templates/system_check_inventory.j2 +0 -1
  52. quantalogic-0.80.dist-info/METADATA +0 -900
  53. {quantalogic-0.80.dist-info → quantalogic-0.93.dist-info}/LICENSE +0 -0
  54. {quantalogic-0.80.dist-info → quantalogic-0.93.dist-info}/WHEEL +0 -0
  55. {quantalogic-0.80.dist-info → quantalogic-0.93.dist-info}/entry_points.txt +0 -0
Deleted file: quantalogic/flow/flow_yaml.md (767 lines). Content:
# Quantalogic Flow YAML DSL Specification 🚀

## 1. Introduction 🌟

The **Quantalogic Flow YAML DSL** is a human-readable, declarative language for defining workflows within the `quantalogic.flow` Python package. As of **March 8, 2025**, it offers a rich feature set for task automation:

- **Function Execution** ⚙️: Run async Python functions from embedded code, PyPI, local files, or URLs.
- **Execution Flow** ➡️: Support sequential, conditional, parallel, branching, and converging transitions.
- **Sub-Workflows** 🌳: Enable hierarchical, modular designs.
- **LLM Integration** 🤖: Harness Large Language Models for text or structured outputs, with dynamic model selection.
- **Template Nodes** 📝: Render dynamic content with Jinja2 templates.
- **Input Mapping** 🔗: Flexibly map node parameters to context keys or custom logic (including lambdas).
- **Context Management** 📦: Share state dynamically across nodes.
- **Robustness** 🛡️: Include retries, delays, and timeouts.
- **Observers** 👀: Monitor execution with custom handlers.
- **Programmatic Control** 🧑‍💻: Manage workflows via `WorkflowManager`.

This DSL integrates with the `Workflow`, `WorkflowEngine`, and `Nodes` classes, making it versatile for everything from simple scripts to complex AI-driven workflows. We’ll use an updated **Story Generator Workflow** as a running example, derived from `examples/flow/simple_story_generator/story_generator_agent.py` and enhanced with branching, convergence, input mapping, template nodes, and dynamic model selection. Let’s dive in! 🎉

---

## 2. Workflow Structure 🗺️

A workflow YAML file comprises five core sections:

- **`functions`**: Python code definitions.
- **`nodes`**: Task specifications with input mappings and template support.
- **`workflow`**: Flow orchestration with branching and convergence.
- **`dependencies`**: Python module dependencies.
- **`observers`**: Event monitoring.

Here’s the skeleton:

```yaml
functions:
  # Python magic ✨
nodes:
  # Tasks with input mappings & templates 🎯
workflow:
  # Flow control with branches & convergence 🚦
dependencies:
  # Python module dependencies (optional)
observers:
  # Event watchers 👀 (optional)
```

### LLM Configuration

In the `llm_config` section of a node definition, you can specify a file-based system prompt with the `system_prompt_file` key (which takes precedence over `system_prompt`) and a dynamic `model` via a lambda expression (e.g., `"lambda ctx: ctx['model_name']"`). This adds flexibility for LLM-driven tasks.

Example:

```yaml
llm_config:
  model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
  system_prompt: "You are a creative writer."
  system_prompt_file: "path/to/system_prompt_template.jinja"
  prompt_template: "Write a story about {topic}."
```

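Conceptually, a `model` given as a serialized lambda is evaluated against the workflow context when the node runs. A minimal sketch of that resolution (illustrative only; `resolve_model` is a hypothetical helper, not the library's API):

```python
def resolve_model(model_field: str, ctx: dict) -> str:
    """Resolve a static model name or a serialized lambda against ctx.

    Hypothetical helper for illustration; the real resolution logic
    lives inside quantalogic.flow and may differ in detail.
    """
    if model_field.lstrip().startswith("lambda"):
        # e.g. "lambda ctx: ctx['model_name']" from the YAML above
        return eval(model_field)(ctx)
    return model_field  # already a concrete model name

ctx = {"model_name": "gemini/gemini-2.0-flash"}
print(resolve_model("lambda ctx: ctx['model_name']", ctx))  # gemini/gemini-2.0-flash
print(resolve_model("gpt-4o", ctx))  # gpt-4o
```

The same convention applies anywhere the DSL accepts a serialized lambda: the string is evaluated once and then called with the current `ctx`.
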
### Story Generator Example
We’ll evolve the Story Generator to include branching (based on story tone), convergence (finalizing the story), **input mapping** with lambdas, a **template node** for chapter summaries, and a dynamic `model`, showcasing these new features step by step.

---

## 3. Case Study: Story Generator Workflow 📖

### Python Version (`story_generator_agent.py`)

This updated script generates a story with tone-based branching, convergence, input mapping, a template node, and dynamic model selection:

```python
#!/usr/bin/env python
from quantalogic.flow import Nodes, Workflow
import anyio

MODEL = "gemini/gemini-2.0-flash"
DEFAULT_LLM_PARAMS = {"temperature": 0.7, "max_tokens": 1000}

@Nodes.llm_node(system_prompt="You are a creative writer skilled at generating stories.",
                prompt_template="Create a story outline for a {genre} story with {num_chapters} chapters.",
                model=lambda ctx: ctx.get("model_name", MODEL),
                output="outline", **DEFAULT_LLM_PARAMS)
async def generate_outline(genre: str, num_chapters: int):
    return ""

@Nodes.llm_node(system_prompt="You are a creative writer.",
                prompt_template="Analyze the tone of this outline: {outline}.",
                model=lambda ctx: ctx.get("model_name", MODEL),
                output="tone", **DEFAULT_LLM_PARAMS)
async def analyze_tone(outline: str):
    return ""

@Nodes.llm_node(system_prompt="You are a creative writer.",
                prompt_template="Write chapter {chapter_num} for this story outline: {outline}. Style: {style}.",
                model=lambda ctx: ctx.get("model_name", MODEL),
                output="chapter", **DEFAULT_LLM_PARAMS)
async def generate_chapter(outline: str, chapter_num: int, style: str):
    return ""

@Nodes.llm_node(system_prompt="You are a dramatic writer.",
                prompt_template="Write a dramatic chapter {chapter_num} for this outline: {outline}.",
                model=lambda ctx: ctx.get("model_name", MODEL),
                output="chapter", **DEFAULT_LLM_PARAMS)
async def generate_dramatic_chapter(outline: str, chapter_num: int):
    return ""

@Nodes.template_node(output="chapter_summary", template="Chapter {chapter_num}: {chapter}")
async def summarize_chapter(rendered_content: str, chapter: str, chapter_num: int):
    return rendered_content

@Nodes.define(output="updated_context")
async def update_progress(**context):
    chapters = context.get('chapters', [])
    completed_chapters = context.get('completed_chapters', 0)
    chapter_summary = context.get('chapter_summary', '')
    updated_chapters = chapters + [chapter_summary]
    return {**context, "chapters": updated_chapters, "completed_chapters": completed_chapters + 1}

@Nodes.define(output="continue_generating")
async def check_if_complete(completed_chapters: int = 0, num_chapters: int = 0, **kwargs):
    return completed_chapters < num_chapters

@Nodes.define(output="final_story")
async def finalize_story(chapters: list):
    return "\n".join(chapters)

workflow = (
    Workflow("generate_outline")
    .node("generate_outline", inputs_mapping={"genre": "story_genre", "num_chapters": "chapter_count"})
    .then("analyze_tone")
    .branch([
        ("generate_chapter", lambda ctx: ctx.get("tone") == "light"),
        ("generate_dramatic_chapter", lambda ctx: ctx.get("tone") == "dark")
    ])
    .then("summarize_chapter")
    .then("update_progress")
    .then("check_if_complete")
    .then("generate_chapter", condition=lambda ctx: ctx.get("continue_generating", False))
    .then("summarize_chapter")
    .then("update_progress")
    .then("check_if_complete")
    .converge("finalize_story")
)

def story_observer(event):
    print(f"Event: {event.event_type.value} - Node: {event.node_name}")
workflow.add_observer(story_observer)

if __name__ == "__main__":
    async def main():
        initial_context = {
            "story_genre": "science fiction",
            "chapter_count": 3,
            "chapters": [],
            "completed_chapters": 0,
            "style": "descriptive",
            "model_name": "gemini/gemini-2.0-flash"  # Dynamic model selection
        }
        engine = workflow.build()
        result = await engine.run(initial_context)
        print(f"Final Story:\n{result.get('final_story', '')}")
    anyio.run(main)
```

### YAML Version (`story_generator_workflow.yaml`)

Here’s the updated YAML with branching, convergence, input mapping with lambdas, a template node, and dynamic model selection:

```yaml
functions:
  update_progress:
    type: embedded
    code: |
      async def update_progress(**context):
          chapters = context.get('chapters', [])
          completed_chapters = context.get('completed_chapters', 0)
          chapter_summary = context.get('chapter_summary', '')
          updated_chapters = chapters + [chapter_summary]
          return {**context, "chapters": updated_chapters, "completed_chapters": completed_chapters + 1}
  check_if_complete:
    type: embedded
    code: |
      async def check_if_complete(completed_chapters=0, num_chapters=0, **kwargs):
          return completed_chapters < num_chapters
  finalize_story:
    type: embedded
    code: |
      async def finalize_story(chapters):
          return "\n".join(chapters)
  story_observer:
    type: embedded
    code: |
      def story_observer(event):
          print(f"Event: {event.event_type.value} - Node: {event.node_name}")

nodes:
  generate_outline:
    llm_config:
      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
      system_prompt: "You are a creative writer skilled at generating stories."
      system_prompt_file: "path/to/system_prompt_template.jinja"
      prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
      temperature: 0.7
      max_tokens: 1000
    inputs_mapping:
      genre: "story_genre"
      num_chapters: "chapter_count"
    output: outline
  analyze_tone:
    llm_config:
      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
      system_prompt: "You are a creative writer."
      prompt_template: "Analyze the tone of this outline: {outline}."
      temperature: 0.7
      max_tokens: 1000
    output: tone
  generate_chapter:
    llm_config:
      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
      system_prompt: "You are a creative writer."
      prompt_template: "Write chapter {chapter_num} for this story outline: {outline}. Style: {style}."
      temperature: 0.7
      max_tokens: 1000
    inputs_mapping:
      chapter_num: "completed_chapters"
      style: "style"
    output: chapter
  generate_dramatic_chapter:
    llm_config:
      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
      system_prompt: "You are a dramatic writer."
      prompt_template: "Write a dramatic chapter {chapter_num} for this outline: {outline}."
      temperature: 0.7
      max_tokens: 1000
    inputs_mapping:
      chapter_num: "completed_chapters"
    output: chapter
  summarize_chapter:
    template_config:
      template: "Chapter {chapter_num}: {chapter}"
    inputs_mapping:
      chapter_num: "completed_chapters"
    output: chapter_summary
  update_progress:
    function: update_progress
    output: updated_context
  check_if_complete:
    function: check_if_complete
    output: continue_generating
  finalize_story:
    function: finalize_story
    output: final_story

workflow:
  start: generate_outline
  transitions:
    - from_node: generate_outline
      to_node: analyze_tone
    - from_node: analyze_tone
      to_node:
        - to_node: generate_chapter
          condition: "ctx['tone'] == 'light'"
        - to_node: generate_dramatic_chapter
          condition: "ctx['tone'] == 'dark'"
    - from_node: generate_chapter
      to_node: summarize_chapter
    - from_node: generate_dramatic_chapter
      to_node: summarize_chapter
    - from_node: summarize_chapter
      to_node: update_progress
    - from_node: update_progress
      to_node: check_if_complete
    - from_node: check_if_complete
      to_node: generate_chapter
      condition: "ctx['continue_generating']"
  convergence_nodes:
    - finalize_story

observers:
  - story_observer
```

### Mermaid Diagram: Updated Story Generator Flow

```mermaid
graph TD
    %% Node colors: purple = LLM, pink = template, blue = function; dashed border = convergence
    A[generate_outline] --> B[analyze_tone]
    B -->|"'light'"| C[generate_chapter]
    B -->|"'dark'"| D[generate_dramatic_chapter]
    C --> E[summarize_chapter]
    D --> E
    E --> F[update_progress]
    F --> G[check_if_complete]
    G -->|"ctx['continue_generating']"| C
    G --> H[finalize_story]
    F --> H
    style A fill:#CE93D8,stroke:#AB47BC,stroke-width:2px
    style B fill:#CE93D8,stroke:#AB47BC,stroke-width:2px
    style C fill:#CE93D8,stroke:#AB47BC,stroke-width:2px
    style D fill:#CE93D8,stroke:#AB47BC,stroke-width:2px
    style E fill:#FCE4EC,stroke:#F06292,stroke-width:2px
    style F fill:#90CAF9,stroke:#42A5F5,stroke-width:2px
    style G fill:#90CAF9,stroke:#42A5F5,stroke-width:2px
    style H fill:#90CAF9,stroke:#42A5F5,stroke-width:2px,stroke-dasharray:5 5
```

#### Execution
With `initial_context = {"story_genre": "science fiction", "chapter_count": 3, "chapters": [], "completed_chapters": 0, "style": "descriptive", "model_name": "gemini/gemini-2.0-flash"}`:
1. `generate_outline` uses input mapping (`story_genre`, `chapter_count`) and a dynamic `model` to create an outline.
2. `analyze_tone` determines the story’s tone with a dynamic `model`.
3. Branches to `generate_chapter` (light tone) or `generate_dramatic_chapter` (dark tone), mapping `chapter_num` to `completed_chapters`.
4. `summarize_chapter` formats the chapter using a template, mapping `chapter_num`.
5. `update_progress` appends the summary to the chapter list and increments the chapter count.
6. `check_if_complete` loops back if more chapters are needed.
7. Converges at `finalize_story` to compile the final story.

---


## 4. Functions ⚙️

The `functions` section defines reusable Python code.

### Fields 📋
- `type` (string, required): `"embedded"` or `"external"`.
- `code` (string, optional): Inline code for `embedded`.
- `module` (string, optional): Source for `external` (PyPI package, local path, or URL).
- `function` (string, optional): Function name within `module`.

### Rules ✅
- Embedded: define the function with `async def` if it is asynchronous; its name must match the key.
- External: requires `module` and `function`; `code` is not allowed.

### Examples 🌈
From the story generator:
```yaml
functions:
  finalize_story:
    type: embedded
    code: |
      async def finalize_story(chapters):
          return "\n".join(chapters)
```
External example:
```yaml
functions:
  fetch:
    type: external
    module: requests
    function: get
```

```mermaid
graph TD
    A[Function Definition] --> B{Type?}
    B -->|embedded| C[Code: async def ...]
    B -->|external| D[Module: PyPI, Path, URL]
    D --> E[Function Name]
    style A fill:#e6f3ff,stroke:#0066cc,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#cce6ff,stroke:#0066cc
    style D fill:#cce6ff,stroke:#0066cc
    style E fill:#cce6ff,stroke:#0066cc
```
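
An `external` function is resolved by importing `module` and looking up `function` on it. A stdlib-only sketch of that lookup (the real loader also installs PyPI packages and fetches local paths or URLs; `load_external_function` is a hypothetical name):

```python
import importlib

def load_external_function(module_name: str, function_name: str):
    """Resolve an 'external' function definition (module + function name).

    Sketch only: quantalogic's actual loader has more machinery for
    non-importable sources such as file paths and URLs.
    """
    module = importlib.import_module(module_name)
    return getattr(module, function_name)

# Demonstration with a stdlib module standing in for a PyPI dependency:
loads = load_external_function("json", "loads")
print(loads('{"ok": true}'))  # {'ok': True}
```
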

---

## 5. Dependencies 🐍

The `dependencies` section lists required Python modules.

### Fields 📋
- `dependencies` (list, optional): PyPI packages (e.g., `requests>=2.28.0`), local paths (e.g., `/path/to/module.py`), or URLs (e.g., `https://example.com/module.py`).

### Example 🌈
```yaml
dependencies:
  - requests>=2.28.0
  - /path/to/my_custom_module.py
  - https://example.com/another_module.py
```

---

## 6. Nodes 🧩

Nodes define tasks, now enhanced with **input mappings** (including lambdas), **template nodes**, and dynamic `model` selection in LLM nodes, alongside functions and sub-workflows.

### Fields 📋
- `function` (string, optional): Links to `functions`.
- `sub_workflow` (object, optional):
  - `start` (string)
  - `transitions` (list)
  - `convergence_nodes` (list, optional)
- `llm_config` (object, optional):
  - `model` (string, default: `"gpt-3.5-turbo"`): Can be a static name or a lambda (e.g., `"lambda ctx: ctx['model_name']"`).
  - `system_prompt` (string, optional)
  - `system_prompt_file` (string, optional): Path to a Jinja2 template file (overrides `system_prompt`).
  - `prompt_template` (string, default: `"{{ input }}"`)
  - `prompt_file` (string, optional): Path to a Jinja2 template file (overrides `prompt_template`).
  - `temperature` (float, default: `0.7`)
  - `max_tokens` (int, optional)
  - `top_p` (float, default: `1.0`)
  - `presence_penalty` (float, default: `0.0`)
  - `frequency_penalty` (float, default: `0.0`)
  - `response_model` (string, optional)
- `template_config` (object, optional):
  - `template` (string, default: `""`): Jinja2 template string.
  - `template_file` (string, optional): Path to a Jinja2 template file (overrides `template`).
- `inputs_mapping` (dict, optional): Maps node parameters to context keys or lambda expressions (e.g., `"lambda ctx: ctx['x'] + 1"`).
- `output` (string, optional): Context key for the result.
- `retries` (int, default: `3`)
- `delay` (float, default: `1.0`)
- `timeout` (float/null, default: `null`)
- `parallel` (bool, default: `false`)

### Rules ✅
- Exactly one of `function`, `sub_workflow`, `llm_config`, or `template_config`.
- LLM and template inputs are derived from `prompt_template`/`template` or `prompt_file`/`template_file`, overridden by `inputs_mapping`.
- `inputs_mapping` values can be strings (context keys) or serialized lambdas.

### Examples 🌈
Using a template node with an external Jinja2 file:
```yaml
nodes:
  format_report:
    template_config:
      template_file: "templates/report.j2"
    inputs_mapping:
      title: "report_title"
      data: "report_data"
    output: formatted_report
```
(`templates/report.j2`: `Report: {{ title }}\nData: {{ data }}`)

With input mapping, a lambda, and a dynamic model in an LLM node:
```yaml
nodes:
  generate_outline:
    llm_config:
      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
      system_prompt: "You are a creative writer skilled at generating stories."
      system_prompt_file: "path/to/system_prompt_template.jinja"
      prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
      temperature: 0.7
      max_tokens: 1000
    inputs_mapping:
      genre: "story_genre"
      num_chapters: "lambda ctx: ctx['chapter_count'] + 1"
    output: outline
```

From the story generator (template node):
```yaml
nodes:
  summarize_chapter:
    template_config:
      template: "Chapter {chapter_num}: {chapter}"
    inputs_mapping:
      chapter_num: "completed_chapters"
    output: chapter_summary
```

```mermaid
graph TD
    A[Node] --> B{Type?}
    B -->|function| C[Function Ref]
    B -->|sub_workflow| D[Start + Transitions + Convergence]
    B -->|llm_config| E[LLM Setup]
    B -->|template_config| F[Template Setup]
    A --> G[Inputs Mapping?]
    G -->|Yes| H[Context Keys or Lambdas]
    E --> I{Structured?}
    I -->|Yes| J[response_model]
    I -->|No| K[Plain Text]
    F --> L[Jinja2 Template]
    E --> M[Dynamic Model?]
    M -->|Yes| N[Lambda Expression]
    style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#ccffcc,stroke:#009933
    style D fill:#ccffcc,stroke:#009933
    style E fill:#ccffcc,stroke:#009933
    style F fill:#ccffcc,stroke:#009933
    style G fill:#fff,stroke:#333
    style H fill:#b3ffb3,stroke:#009933
    style I fill:#fff,stroke:#333
    style J fill:#b3ffb3,stroke:#009933
    style K fill:#b3ffb3,stroke:#009933
    style L fill:#b3ffb3,stroke:#009933
    style M fill:#fff,stroke:#333
    style N fill:#b3ffb3,stroke:#009933
```
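
To see how `template_config` and `inputs_mapping` combine, here is a toy renderer. It substitutes `{var}` placeholders with `str.format()` as a stand-in; the actual engine renders through Jinja2 and also supports `template_file`. All names here are illustrative, not the library's API:

```python
def render_template_node(template: str, inputs_mapping: dict, ctx: dict) -> str:
    """Render a template node's output from the workflow context.

    Sketch: str.format() stands in for the real Jinja2 rendering.
    """
    inputs = {}
    for param, source in inputs_mapping.items():
        if isinstance(source, str) and source.lstrip().startswith("lambda"):
            inputs[param] = eval(source)(ctx)        # serialized lambda
        else:
            inputs[param] = ctx.get(source, source)  # context key (or literal)
    return template.format(**inputs)

ctx = {"completed_chapters": 0, "chapter": "A storm gathers over the colony."}
summary = render_template_node(
    "Chapter {chapter_num}: {chapter}",
    {"chapter_num": "completed_chapters", "chapter": "chapter"},
    ctx,
)
print(summary)  # Chapter 0: A storm gathers over the colony.
```
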

---

### 6.1 Input Mapping with LLM Nodes and Template Nodes 🔗

Input mapping allows flexible parameter passing to nodes, enabling dynamic behavior based on workflow context. This is particularly powerful with LLM nodes (including dynamic models) and template nodes.

#### Implementation Details

- **Input mapping types**:
  - Direct context references (e.g., `"story_genre"`)
  - Lambda expressions (e.g., `"lambda ctx: ctx['chapter_count'] + 1"`)
  - Static values

- **Supported node types**:
  - LLM nodes
  - Template nodes
  - Function nodes
  - Sub-workflow nodes

#### LLM Node Input Mapping

LLM nodes support input mapping for prompts and dynamic model selection:

```yaml
nodes:
  generate_outline:
    llm_config:
      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
      system_prompt: "You are a creative writer skilled in {genre} stories."
      system_prompt_file: "path/to/system_prompt_template.jinja"
      prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
      temperature: 0.7
      max_tokens: 1000
    inputs_mapping:
      genre: "story_genre"  # Map from context
      num_chapters: "lambda ctx: ctx['chapter_count'] + 1"  # Dynamic value
    output: outline
```

#### Template Node Input Mapping

Template nodes use mapped inputs in Jinja2 templates:

```yaml
nodes:
  summarize_chapter:
    template_config:
      template: "Chapter {chapter_num}: {chapter}\n\nSummary: {summary}"
    inputs_mapping:
      chapter_num: "current_chapter"
      chapter: "lambda ctx: ctx['chapters'][ctx['current_chapter']]"
      summary: "lambda ctx: ctx['summaries'][ctx['current_chapter']]"
    output: chapter_summary
```

#### Combined Example

Here’s an example combining both LLM and template nodes with input mapping:

```yaml
nodes:
  generate_character:
    llm_config:
      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
      system_prompt: "You are a character designer."
      prompt_template: "Create a character for a {genre} story."
    inputs_mapping:
      genre: "story_genre"
    output: character_description

  format_character:
    template_config:
      template: "Character Profile:\n\n{description}\n\nTraits: {traits}"
    inputs_mapping:
      description: "character_description"
      traits: "lambda ctx: ', '.join(ctx['character_traits'])"
    output: formatted_character
```

#### Key Points

- Use `inputs_mapping` to map context values to node parameters.
- Both direct context references and lambda expressions are supported.
- Works seamlessly with LLM (including dynamic `model`), template, and other node types.
- Enables dynamic, context-aware workflows.
- Input mapping is validated against node parameters.

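The three mapping styles above can be modeled in a few lines. This sketch (hypothetical `resolve_inputs` helper, not the library's code) shows how a direct reference, a lambda, and a static value each produce a concrete parameter:

```python
def resolve_inputs(inputs_mapping: dict, ctx: dict) -> dict:
    """Resolve an inputs_mapping into concrete parameter values (sketch)."""
    resolved = {}
    for param, source in inputs_mapping.items():
        if isinstance(source, str) and source.lstrip().startswith("lambda"):
            resolved[param] = eval(source)(ctx)      # lambda expression
        elif isinstance(source, str) and source in ctx:
            resolved[param] = ctx[source]            # direct context reference
        else:
            resolved[param] = source                 # static value
    return resolved

ctx = {"story_genre": "science fiction", "chapter_count": 3}
print(resolve_inputs(
    {"genre": "story_genre",
     "num_chapters": "lambda ctx: ctx['chapter_count'] + 1",
     "style": "descriptive"},
    ctx,
))  # {'genre': 'science fiction', 'num_chapters': 4, 'style': 'descriptive'}
```
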
---

## 7. Workflow 🌐

The `workflow` section orchestrates execution, leveraging branching and convergence.

### Fields 📋
- `start` (string, optional): First node.
- `transitions` (list):
  - `from_node` (string)
  - `to_node` (string or list):
    - String: Sequential or parallel transition.
    - List of objects: Branching with `to_node` and `condition`.
  - `condition` (string, optional): For sequential transitions.
- `convergence_nodes` (list, optional): Nodes where branches merge.

### Example 🌈
From the story generator:
```yaml
workflow:
  start: generate_outline
  transitions:
    - from_node: generate_outline
      to_node: analyze_tone
    - from_node: analyze_tone
      to_node:
        - to_node: generate_chapter
          condition: "ctx['tone'] == 'light'"
        - to_node: generate_dramatic_chapter
          condition: "ctx['tone'] == 'dark'"
    - from_node: generate_chapter
      to_node: summarize_chapter
    - from_node: generate_dramatic_chapter
      to_node: summarize_chapter
    - from_node: summarize_chapter
      to_node: update_progress
    - from_node: update_progress
      to_node: check_if_complete
    - from_node: check_if_complete
      to_node: generate_chapter
      condition: "ctx['continue_generating']"
  convergence_nodes:
    - finalize_story
```

```mermaid
graph TD
    A[Workflow] --> B[Start Node]
    A --> C[Transitions]
    C --> D[From Node]
    D --> E{To Node}
    E -->|Sequential| F[Single Node]
    E -->|Parallel| G[List of Nodes]
    E -->|Branching| H[List with Conditions]
    C --> I[Condition?]
    I -->|Yes| J[ctx-based Logic]
    A --> K[Convergence Nodes]
    K --> L[Merge Points]
    style A fill:#fff0e6,stroke:#cc3300,stroke-width:2px
    style B fill:#ffe6cc,stroke:#cc3300
    style C fill:#ffe6cc,stroke:#cc3300
    style D fill:#ffd9b3,stroke:#cc3300
    style E fill:#fff,stroke:#333
    style F fill:#ffd9b3,stroke:#cc3300
    style G fill:#ffd9b3,stroke:#cc3300
    style H fill:#ffd9b3,stroke:#cc3300
    style I fill:#fff,stroke:#333
    style J fill:#ffd9b3,stroke:#cc3300
    style K fill:#ffe6cc,stroke:#cc3300
    style L fill:#ffd9b3,stroke:#cc3300
```

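The `to_node` semantics can be sketched as follows (illustrative `next_nodes` helper; condition strings are assumed here to be evaluated as Python expressions over `ctx`, matching the `ctx[...]` expressions used above):

```python
def next_nodes(transition_to, ctx):
    """Pick the next node(s) for a transition's `to_node` entry (sketch).

    A plain string is sequential, a list of strings is parallel, and a
    list of {to_node, condition} dicts branches on the context.
    """
    if isinstance(transition_to, str):
        return [transition_to]                      # sequential
    if all(isinstance(t, str) for t in transition_to):
        return list(transition_to)                  # parallel
    return [t["to_node"] for t in transition_to     # branching
            if eval(t["condition"], {"ctx": ctx})]

branch = [
    {"to_node": "generate_chapter", "condition": "ctx['tone'] == 'light'"},
    {"to_node": "generate_dramatic_chapter", "condition": "ctx['tone'] == 'dark'"},
]
print(next_nodes(branch, {"tone": "dark"}))  # ['generate_dramatic_chapter']
```
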
---

## 8. Workflow Validation 🕵️‍♀️

`validate_workflow_definition()` checks workflow integrity:
- Detects unreachable nodes, circular references, undefined nodes, and a missing start node.
- Validates branch conditions, convergence points (each needs at least two incoming transitions), and input mappings (including lambda syntax).
- Returns a list of `NodeError` objects (`node_name`, `description`).

### Example
```python
issues = validate_workflow_definition(workflow)
if issues:
    for issue in issues:
        print(f"Node '{issue.node_name}': {issue.description}")
```

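A reduced version of two of these checks makes the `NodeError` shape concrete. This is a sketch only; `check_transitions` is a hypothetical stand-in for a small part of `validate_workflow_definition()`:

```python
from collections import namedtuple

NodeError = namedtuple("NodeError", ["node_name", "description"])

def check_transitions(nodes, start, transitions):
    """Flag a missing start node and transitions to undefined nodes (sketch)."""
    issues = []
    if start is None:
        issues.append(NodeError(None, "workflow has no start node"))
    elif start not in nodes:
        issues.append(NodeError(start, "start node is not defined"))
    for t in transitions:
        for name in (t["from_node"], t["to_node"]):
            if name not in nodes:
                issues.append(NodeError(name, "referenced node is not defined"))
    return issues

issues = check_transitions({"a", "b"}, "a", [{"from_node": "a", "to_node": "missing"}])
for issue in issues:
    print(f"Node '{issue.node_name}': {issue.description}")
```
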
---

## 9. Observers 👀

Observers monitor events such as node starts, completions, and failures.

### Example
From the story generator:
```yaml
observers:
  - story_observer
```

---

## 10. Context 📦

The `ctx` dictionary shares data across nodes, enhanced by input mappings:
- `generate_outline` → `ctx["outline"]` (inputs mapped from `story_genre` and `chapter_count`, with a dynamic `model`)
- `summarize_chapter` → `ctx["chapter_summary"]` (with `chapter_num` mapped from `completed_chapters`)
- `finalize_story` → `ctx["final_story"]`

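Conceptually, each node's result is written back into `ctx` under its `output` key. A minimal model of that update (sketch, not the engine's actual code):

```python
def apply_node_output(ctx: dict, output_key: str, result) -> dict:
    """Store a node's result under its `output` key, as the engine does
    conceptually after each node run."""
    return {**ctx, output_key: result}

ctx = {"chapters": ["Chapter 1: ...", "Chapter 2: ...", "Chapter 3: ..."]}
ctx = apply_node_output(ctx, "final_story", "\n".join(ctx["chapters"]))
print(ctx["final_story"].splitlines()[0])  # Chapter 1: ...
```
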
---

## 11. Execution Flow 🏃‍♂️

The `WorkflowEngine`:
1. Starts at `workflow.start`.
2. Executes nodes, applying input mappings and updating `ctx`.
3. Follows transitions (sequential, parallel, or branching) based on conditions.
4. Converges at specified nodes.
5. Notifies observers.
6. Ends when transitions are exhausted.

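The steps above can be mimicked with a toy synchronous engine. Everything here is illustrative: nodes are plain functions returning context updates, and transitions are `(from, to, condition)` tuples; the real `WorkflowEngine` is async and additionally handles parallel branches, convergence, retries, and observers:

```python
def run_workflow(start, nodes, transitions, ctx):
    """Toy engine mirroring steps 1-6 above (sketch only)."""
    current = start
    while current is not None:
        ctx.update(nodes[current](ctx))       # step 2: run node, update ctx
        nxt = None
        for frm, to, cond in transitions:     # step 3: follow a matching transition
            if frm == current and (cond is None or cond(ctx)):
                nxt = to
                break
        current = nxt                         # step 6: stop when nothing matches
    return ctx

nodes = {
    "count": lambda ctx: {"n": ctx["n"] + 1},
    "done": lambda ctx: {"result": f"counted to {ctx['n']}"},
}
transitions = [
    ("count", "count", lambda ctx: ctx["n"] < 3),  # loop back while incomplete
    ("count", "done", None),                       # then finish
]
print(run_workflow("count", nodes, transitions, {"n": 0})["result"])  # counted to 3
```
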
---

## 12. Converting Between Python and YAML 🔄

### Python to YAML (`flow_extractor.py`)
```python
from quantalogic.flow.flow_extractor import extract_workflow_from_file
from quantalogic.flow.flow_manager import WorkflowManager

wf_def, global_vars = extract_workflow_from_file("story_generator_agent.py")
WorkflowManager(wf_def).save_to_yaml("story_generator_workflow.yaml")
```

### YAML to Python (`flow_generator.py`)
```python
from quantalogic.flow.flow_generator import generate_executable_script
from quantalogic.flow.flow_manager import WorkflowManager

manager = WorkflowManager()
manager.load_from_yaml("story_generator_workflow.yaml")
generate_executable_script(manager.workflow, {}, "standalone_story.py")
```

```mermaid
graph TD
    A[Python Workflow] -->|flow_extractor.py| B[WorkflowDefinition]
    B -->|WorkflowManager| C[YAML File]
    C -->|WorkflowManager| D[WorkflowDefinition]
    D -->|flow_generator.py| E[Standalone Python Script]
    style A fill:#e6f3ff,stroke:#0066cc,stroke-width:2px
    style B fill:#fff,stroke:#333
    style C fill:#e6ffe6,stroke:#009933,stroke-width:2px
    style D fill:#fff,stroke:#333
    style E fill:#fff0e6,stroke:#cc3300,stroke-width:2px
```

---

## 13. WorkflowManager 🧑‍💻

Programmatic workflow creation with the new features:
```python
manager = WorkflowManager()
manager.add_node(
    "start",
    llm_config={"model": "lambda ctx: ctx['model_name']", "prompt_template": "Say hi to {name}"},
    inputs_mapping={"name": "user_name"}
)
manager.add_node(
    "format",
    template_config={"template": "Message: {text}"},
    inputs_mapping={"text": "start_result"}
)
manager.set_start_node("start")
manager.add_transition("start", "format")
manager.add_convergence_node("format")
manager.save_to_yaml("hi.yaml")
```

---

## 14. Conclusion 🎉

The Quantalogic Flow YAML DSL (March 8, 2025) is a powerful, flexible tool for workflow automation, exemplified by the updated Story Generator case study. With new **input mapping** (including lambdas), **template nodes**, and **dynamic model selection** in LLM nodes, alongside sub-workflows, branching, convergence, and conversion tools, it seamlessly bridges Python and YAML. Whether crafting dynamic stories with formatted chapters or managing complex processes, this DSL, paired with `WorkflowManager`, unlocks efficient, scalable workflows. 🚀