quantalogic 0.56.0__py3-none-any.whl → 0.58.0__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- quantalogic/agent.py +28 -4
- quantalogic/create_custom_agent.py +146 -71
- quantalogic/flow/flow.py +257 -103
- quantalogic/flow/flow_extractor.py +6 -15
- quantalogic/flow/flow_generator.py +28 -32
- quantalogic/flow/flow_manager.py +17 -3
- quantalogic/flow/flow_manager_schema.py +53 -5
- quantalogic/flow/flow_mermaid.py +2 -2
- quantalogic/flow/flow_yaml.linkedin.md +31 -0
- quantalogic/flow/flow_yaml.md +74 -56
- quantalogic/flow/templates/prompt_check_inventory.j2 +1 -0
- quantalogic/flow/templates/system_check_inventory.j2 +1 -0
- quantalogic/server/agent_server.py +19 -4
- quantalogic/tools/google_packages/google_news_tool.py +26 -187
- quantalogic/tools/utilities/__init__.py +2 -0
- quantalogic/tools/utilities/download_file_tool.py +4 -2
- quantalogic/tools/utilities/vscode_tool.py +123 -0
- quantalogic/utils/ask_user_validation.py +26 -6
- {quantalogic-0.56.0.dist-info → quantalogic-0.58.0.dist-info}/METADATA +1 -1
- {quantalogic-0.56.0.dist-info → quantalogic-0.58.0.dist-info}/RECORD +23 -19
- {quantalogic-0.56.0.dist-info → quantalogic-0.58.0.dist-info}/LICENSE +0 -0
- {quantalogic-0.56.0.dist-info → quantalogic-0.58.0.dist-info}/WHEEL +0 -0
- {quantalogic-0.56.0.dist-info → quantalogic-0.58.0.dist-info}/entry_points.txt +0 -0
quantalogic/flow/flow_yaml.md
CHANGED
````diff
@@ -2,31 +2,20 @@
 
 ## 1. Introduction 🌟
 
-The **Quantalogic Flow YAML DSL** is a human-readable, declarative language for defining workflows within the `quantalogic.flow` Python package. As of **March …
+The **Quantalogic Flow YAML DSL** is a human-readable, declarative language for defining workflows within the `quantalogic.flow` Python package. As of **March 8, 2025**, it’s packed with features for task automation:
 
 - **Function Execution** ⚙️: Run async Python functions from embedded code, PyPI, local files, or URLs.
 - **Execution Flow** ➡️: Support sequential, conditional, parallel, branching, and converging transitions.
 - **Sub-Workflows** 🌳: Enable hierarchical, modular designs.
-- **LLM Integration** 🤖: Harness Large Language Models for text or structured outputs.
+- **LLM Integration** 🤖: Harness Large Language Models for text or structured outputs, with dynamic model selection.
 - **Template Nodes** 📝: Render dynamic content with Jinja2 templates.
-- **Input Mapping** 🔗: Flexibly map node parameters to context or custom logic.
+- **Input Mapping** 🔗: Flexibly map node parameters to context or custom logic (including lambdas).
 - **Context Management** 📦: Share state dynamically across nodes.
 - **Robustness** 🛡️: Include retries, delays, and timeouts.
 - **Observers** 👀: Monitor execution with custom handlers.
 - **Programmatic Control** 🧑‍💻: Manage workflows via `WorkflowManager`.
 
-This DSL integrates with `Workflow`, `WorkflowEngine`, and `Nodes` classes, making it versatile for everything from simple scripts to complex AI-driven workflows. We’ll use an updated **Story Generator Workflow** as a running example, derived from `examples/flow/simple_story_generator/story_generator_agent.py`, now enhanced with branching, convergence, input mapping, and …
-
-```mermaid
-graph TD
-    A[YAML Workflow File] -->|Defines| B[functions ⚙️]
-    A -->|Configures| C[nodes 🧩]
-    A -->|Orchestrates| D[workflow 🌐]
-    style A fill:#f9f9ff,stroke:#333,stroke-width:2px,stroke-dasharray:5
-    style B fill:#e6f3ff,stroke:#0066cc
-    style C fill:#e6ffe6,stroke:#009933
-    style D fill:#fff0e6,stroke:#cc3300
-```
+This DSL integrates with `Workflow`, `WorkflowEngine`, and `Nodes` classes, making it versatile for everything from simple scripts to complex AI-driven workflows. We’ll use an updated **Story Generator Workflow** as a running example, derived from `examples/flow/simple_story_generator/story_generator_agent.py`, now enhanced with branching, convergence, input mapping, template nodes, and dynamic model selection. Let’s dive in! 🎉
 
 ---
 
````
````diff
@@ -55,8 +44,22 @@ observers:
   # Event watchers 👀 (optional)
 ```
 
+### 3. LLM Configuration
+
+In the `llm_config` section of a node definition, you can specify a file-based system prompt using the `system_prompt_file` key (takes precedence over `system_prompt`) and a dynamic `model` using a lambda expression (e.g., `"lambda ctx: ctx['model_name']"`). This enhances flexibility for LLM-driven tasks.
+
+Example:
+
+```yaml
+llm_config:
+  model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
+  system_prompt: "You are a creative writer."
+  system_prompt_file: "path/to/system_prompt_template.jinja"
+  prompt_template: "Write a story about {topic}."
+```
+
 ### Story Generator Example
-We’ll evolve the Story Generator to include branching (…
+We’ll evolve the Story Generator to include branching (based on story tone), convergence (finalizing the story), **input mapping** with lambdas, a **template node** for chapter summaries, and a dynamic `model`—showcasing these shiny new features step-by-step.
 
 ---
 
````
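The dynamic `model` string above holds a lambda expression rather than a model name. This diff does not show the resolution code, so the following is only a sketch of the documented behavior, using a hypothetical `resolve_model` helper (the real logic lives in `quantalogic/flow/flow.py`):

```python
def resolve_model(model_spec, ctx):
    """Resolve a model spec: a 'lambda ctx: ...' string is evaluated against the
    workflow context; any other value is treated as a static model name."""
    if isinstance(model_spec, str) and model_spec.lstrip().startswith("lambda"):
        # eval is used purely for illustration; it assumes trusted YAML input.
        return eval(model_spec)(ctx)
    return model_spec

ctx = {"model_name": "gemini/gemini-2.0-flash"}
print(resolve_model("lambda ctx: ctx['model_name']", ctx))  # gemini/gemini-2.0-flash
print(resolve_model("gpt-3.5-turbo", ctx))                  # gpt-3.5-turbo
```

Either way, the node ends up with a concrete model name at execution time, chosen per run from the context.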
````diff
@@ -64,7 +67,7 @@ We’ll evolve the Story Generator to include branching (e.g., based on story to
 
 ### Python Version (`story_generator_agent.py`)
 
-This updated script generates a story with tone-based branching, convergence, input mapping, …
+This updated script generates a story with tone-based branching, convergence, input mapping, a template node, and dynamic model selection:
 
 ```python
 #!/usr/bin/env python
````
````diff
@@ -72,28 +75,32 @@ from quantalogic.flow import Nodes, Workflow
 import anyio
 
 MODEL = "gemini/gemini-2.0-flash"
-DEFAULT_LLM_PARAMS = {"…
+DEFAULT_LLM_PARAMS = {"temperature": 0.7, "max_tokens": 1000}
 
 @Nodes.llm_node(system_prompt="You are a creative writer skilled at generating stories.",
                 prompt_template="Create a story outline for a {genre} story with {num_chapters} chapters.",
+                model=lambda ctx: ctx.get("model_name", MODEL),
                 output="outline", **DEFAULT_LLM_PARAMS)
 async def generate_outline(genre: str, num_chapters: int):
     return ""
 
 @Nodes.llm_node(system_prompt="You are a creative writer.",
                 prompt_template="Analyze the tone of this outline: {outline}.",
+                model=lambda ctx: ctx.get("model_name", MODEL),
                 output="tone", **DEFAULT_LLM_PARAMS)
 async def analyze_tone(outline: str):
     return ""
 
 @Nodes.llm_node(system_prompt="You are a creative writer.",
                 prompt_template="Write chapter {chapter_num} for this story outline: {outline}. Style: {style}.",
+                model=lambda ctx: ctx.get("model_name", MODEL),
                 output="chapter", **DEFAULT_LLM_PARAMS)
 async def generate_chapter(outline: str, chapter_num: int, style: str):
     return ""
 
 @Nodes.llm_node(system_prompt="You are a dramatic writer.",
                 prompt_template="Write a dramatic chapter {chapter_num} for this outline: {outline}.",
+                model=lambda ctx: ctx.get("model_name", MODEL),
                 output="chapter", **DEFAULT_LLM_PARAMS)
 async def generate_dramatic_chapter(outline: str, chapter_num: int):
     return ""
````
````diff
@@ -147,7 +154,8 @@ if __name__ == "__main__":
         "chapter_count": 3,
         "chapters": [],
         "completed_chapters": 0,
-        "style": "descriptive"
+        "style": "descriptive",
+        "model_name": "gemini/gemini-2.0-flash"  # Dynamic model selection
     }
     engine = workflow.build()
     result = await engine.run(initial_context)
````
````diff
@@ -157,7 +165,7 @@ if __name__ == "__main__":
 
 ### YAML Version (`story_generator_workflow.yaml`)
 
-Here’s the updated YAML with branching, convergence, input mapping, …
+Here’s the updated YAML with branching, convergence, input mapping with lambdas, a template node, and dynamic model selection:
 
 ```yaml
 functions:
````
````diff
@@ -189,8 +197,9 @@ functions:
 nodes:
   generate_outline:
     llm_config:
-      model: "…
+      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
       system_prompt: "You are a creative writer skilled at generating stories."
+      system_prompt_file: "path/to/system_prompt_template.jinja"
       prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
       temperature: 0.7
       max_tokens: 1000
````
````diff
@@ -200,7 +209,7 @@ nodes:
     output: outline
   analyze_tone:
     llm_config:
-      model: "…
+      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
       system_prompt: "You are a creative writer."
       prompt_template: "Analyze the tone of this outline: {outline}."
       temperature: 0.7
````
````diff
@@ -208,7 +217,7 @@ nodes:
     output: tone
   generate_chapter:
     llm_config:
-      model: "…
+      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
       system_prompt: "You are a creative writer."
       prompt_template: "Write chapter {chapter_num} for this story outline: {outline}. Style: {style}."
       temperature: 0.7
````
````diff
@@ -219,7 +228,7 @@ nodes:
     output: chapter
   generate_dramatic_chapter:
     llm_config:
-      model: "…
+      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
       system_prompt: "You are a dramatic writer."
       prompt_template: "Write a dramatic chapter {chapter_num} for this outline: {outline}."
       temperature: 0.7
````
````diff
@@ -286,20 +295,20 @@ graph TD
     G -->|"ctx['continue_generating']"| C
     G --> H[finalize_story]
     F --> H
-    style A fill:#…
-    style B fill:#…
-    style C fill:#…
-    style D fill:#…
-    style E fill:#…
-    style F fill:#…
-    style G fill:#…
-    style H fill:#…
+    style A fill:#CE93D8,stroke:#AB47BC,stroke-width:2px  # Purple for LLM
+    style B fill:#CE93D8,stroke:#AB47BC,stroke-width:2px  # Purple for LLM
+    style C fill:#CE93D8,stroke:#AB47BC,stroke-width:2px  # Purple for LLM
+    style D fill:#CE93D8,stroke:#AB47BC,stroke-width:2px  # Purple for LLM
+    style E fill:#FCE4EC,stroke:#F06292,stroke-width:2px  # Pink for template
+    style F fill:#90CAF9,stroke:#42A5F5,stroke-width:2px  # Blue for function
+    style G fill:#90CAF9,stroke:#42A5F5,stroke-width:2px  # Blue for function
+    style H fill:#90CAF9,stroke:#42A5F5,stroke-width:2px,stroke-dasharray:5 5  # Blue with dashed for convergence
 ```
 
 #### Execution
-With `initial_context = {"story_genre": "science fiction", "chapter_count": 3, "chapters": [], "completed_chapters": 0, "style": "descriptive"}`:
-1. `generate_outline` uses input mapping (`story_genre`, `chapter_count`) to create an outline.
-2. `analyze_tone` determines the story’s tone…
+With `initial_context = {"story_genre": "science fiction", "chapter_count": 3, "chapters": [], "completed_chapters": 0, "style": "descriptive", "model_name": "gemini/gemini-2.0-flash"}`:
+1. `generate_outline` uses input mapping (`story_genre`, `chapter_count`) and a dynamic `model` to create an outline.
+2. `analyze_tone` determines the story’s tone with a dynamic `model`.
 3. Branches to `generate_chapter` (light tone) or `generate_dramatic_chapter` (dark tone), mapping `chapter_num` to `completed_chapters`.
 4. `summarize_chapter` formats the chapter using a template, mapping `chapter_num`.
 5. `update_progress` updates chapters and count with the summary.
````
````diff
@@ -375,7 +384,7 @@ dependencies:
 
 ## 6. Nodes 🧩
 
-Nodes define tasks, now enhanced with **input mappings** …
+Nodes define tasks, now enhanced with **input mappings** (including lambdas), **template nodes**, and dynamic `model` selection in LLM nodes, alongside functions and sub-workflows.
 
 ### Fields 📋
 - `function` (string, optional): Links to `functions`.
````
````diff
@@ -384,10 +393,11 @@ Nodes define tasks, now enhanced with **input mappings** and **template nodes**,
 - `transitions` (list)
 - `convergence_nodes` (list, optional)
 - `llm_config` (object, optional):
-  - `model` (string, default: `"gpt-3.5-turbo"`)
+  - `model` (string, default: `"gpt-3.5-turbo"`): Can be a static name or lambda (e.g., `"lambda ctx: ctx['model_name']"`).
   - `system_prompt` (string, optional)
+  - `system_prompt_file` (string, optional): Path to a Jinja2 template file (overrides `system_prompt`).
   - `prompt_template` (string, default: `"{{ input }}"`)
-  - `prompt_file` (string, optional): Path to a Jinja2 template file.
+  - `prompt_file` (string, optional): Path to a Jinja2 template file (overrides `prompt_template`).
   - `temperature` (float, default: `0.7`)
   - `max_tokens` (int, optional)
   - `top_p` (float, default: `1.0`)
````
````diff
@@ -423,13 +433,14 @@ nodes:
 ```
 (`templates/report.j2`: `Report: {{ title }}\nData: {{ data }}`)
 
-With input mapping and an LLM:
+With input mapping, lambda, and dynamic model in an LLM node:
 ```yaml
 nodes:
   generate_outline:
     llm_config:
-      model: "…
+      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
       system_prompt: "You are a creative writer skilled at generating stories."
+      system_prompt_file: "path/to/system_prompt_template.jinja"
       prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
       temperature: 0.7
       max_tokens: 1000
````
````diff
@@ -463,6 +474,8 @@ graph TD
     I -->|Yes| J[response_model]
     I -->|No| K[Plain Text]
     F --> L[Jinja2 Template]
+    E --> M[Dynamic Model?]
+    M -->|Yes| N[Lambda Expression]
     style A fill:#e6ffe6,stroke:#009933,stroke-width:2px
     style B fill:#fff,stroke:#333
     style C fill:#ccffcc,stroke:#009933
````
````diff
@@ -475,19 +488,21 @@ graph TD
     style J fill:#b3ffb3,stroke:#009933
     style K fill:#b3ffb3,stroke:#009933
     style L fill:#b3ffb3,stroke:#009933
+    style M fill:#fff,stroke:#333
+    style N fill:#b3ffb3,stroke:#009933
 ```
 
 ---
 
 ## 6. Input Mapping with LLM Nodes and Template Nodes 🔗
 
-Input mapping allows flexible parameter passing to nodes, enabling dynamic behavior based on workflow context. This is particularly powerful …
+Input mapping allows flexible parameter passing to nodes, enabling dynamic behavior based on workflow context. This is particularly powerful with LLM nodes (including dynamic models) and template nodes.
 
 ### Implementation Details
 
 - **Input Mapping Types**:
-  - Direct context references (e.g., "story_genre")
-  - Lambda expressions (e.g., "lambda ctx: ctx['chapter_count'] + 1")
+  - Direct context references (e.g., `"story_genre"`)
+  - Lambda expressions (e.g., `"lambda ctx: ctx['chapter_count'] + 1"`)
   - Static values
 
 - **Supported Node Types**:
````
|
|
498
513
|
|
499
514
|
### LLM Node Input Mapping
|
500
515
|
|
501
|
-
LLM nodes support input mapping for
|
516
|
+
LLM nodes support input mapping for prompts and dynamic model selection:
|
502
517
|
|
503
518
|
```yaml
|
504
519
|
nodes:
|
505
520
|
generate_outline:
|
506
521
|
llm_config:
|
507
|
-
model: "
|
522
|
+
model: "lambda ctx: ctx['model_name']" # Dynamic model selection
|
508
523
|
system_prompt: "You are a creative writer skilled in {genre} stories."
|
524
|
+
system_prompt_file: "path/to/system_prompt_template.jinja"
|
509
525
|
prompt_template: "Create a story outline for a {genre} story with {num_chapters} chapters."
|
526
|
+
temperature: 0.7
|
527
|
+
max_tokens: 1000
|
510
528
|
inputs_mapping:
|
511
529
|
genre: "story_genre" # Map from context
|
512
530
|
num_chapters: "lambda ctx: ctx['chapter_count'] + 1" # Dynamic value
|
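The three mapping types can be illustrated with a small, hypothetical `resolve_inputs` helper (not the package's actual implementation; plain strings are looked up as context keys here, and the real precedence rules may differ):

```python
def resolve_inputs(inputs_mapping, ctx):
    """Resolve an inputs_mapping dict against the workflow context:
    lambda strings are evaluated, plain strings are read as context keys,
    and anything else passes through as a static value."""
    resolved = {}
    for param, spec in inputs_mapping.items():
        if isinstance(spec, str) and spec.lstrip().startswith("lambda"):
            resolved[param] = eval(spec)(ctx)  # assumes trusted YAML input
        elif isinstance(spec, str) and spec in ctx:
            resolved[param] = ctx[spec]
        else:
            resolved[param] = spec
    return resolved

ctx = {"story_genre": "science fiction", "chapter_count": 3}
mapping = {"genre": "story_genre", "num_chapters": "lambda ctx: ctx['chapter_count'] + 1"}
print(resolve_inputs(mapping, ctx))  # {'genre': 'science fiction', 'num_chapters': 4}
```

The node function then receives `genre` and `num_chapters` as ordinary keyword arguments, regardless of how each value was produced.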
````diff
@@ -531,13 +549,13 @@ nodes:
 
 ### Combined Example
 
-Here…
+Here’s an example combining both LLM and template nodes with input mapping:
 
 ```yaml
 nodes:
   generate_character:
     llm_config:
-      model: "…
+      model: "lambda ctx: ctx['model_name']"  # Dynamic model selection
       system_prompt: "You are a character designer."
       prompt_template: "Create a character for a {genre} story."
     inputs_mapping:
````
````diff
@@ -555,11 +573,11 @@ nodes:
 
 ### Key Points
 
-- Use `inputs_mapping` to map context values to node parameters
-- Support both direct context references and lambda expressions
-- Works seamlessly with LLM, template, and other node types
-- Enables dynamic, context-aware workflows
-- Input mapping is validated against node parameters
+- Use `inputs_mapping` to map context values to node parameters.
+- Support both direct context references and lambda expressions.
+- Works seamlessly with LLM (including dynamic `model`), template, and other node types.
+- Enables dynamic, context-aware workflows.
+- Input mapping is validated against node parameters.
 
 ---
 
````
````diff
@@ -639,7 +657,7 @@ graph TD
 
 `validate_workflow_definition()` ensures integrity:
 - Checks node connectivity, circular references, undefined nodes, missing start.
-- Validates branch conditions, convergence points (at least two incoming transitions), and input mappings.
+- Validates branch conditions, convergence points (at least two incoming transitions), and input mappings (including lambda syntax).
 - Returns `NodeError` objects (`node_name`, `description`).
 
 ### Example
````
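Checking "lambda syntax" in an input mapping can be done without executing anything, for example with a compile-time check along these lines (a sketch using a hypothetical `check_lambda_syntax` helper, not the package's actual validator):

```python
import ast

def check_lambda_syntax(expr):
    """Return an error description if expr is not a single lambda expression, else None."""
    try:
        tree = ast.parse(expr, mode="eval")
    except SyntaxError as exc:
        return f"invalid expression: {exc.msg}"
    if not isinstance(tree.body, ast.Lambda):
        return "expected a lambda expression"
    return None

print(check_lambda_syntax("lambda ctx: ctx['model_name']"))  # None
print(check_lambda_syntax("ctx['model_name']"))              # expected a lambda expression
```

A validator built this way can surface errors as `NodeError`-style objects at load time instead of failing mid-run.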
````diff
@@ -668,7 +686,7 @@ observers:
 ## 10. Context 📦
 
 The `ctx` dictionary shares data, enhanced by input mappings:
-- `generate_outline` → `ctx["outline"]` (mapped from `story_genre`, `chapter_count`)
+- `generate_outline` → `ctx["outline"]` (mapped from `story_genre`, `chapter_count`, dynamic `model`)
 - `summarize_chapter` → `ctx["chapter_summary"]` (mapped from `completed_chapters`)
 - `finalize_story` → `ctx["final_story"]`
 
````
````diff
@@ -728,7 +746,7 @@ Programmatic workflow creation with new features:
 manager = WorkflowManager()
 manager.add_node(
     "start",
-    llm_config={"model": "…
+    llm_config={"model": "lambda ctx: ctx['model_name']", "prompt_template": "Say hi to {name}"},
     inputs_mapping={"name": "user_name"}
 )
 manager.add_node(
````
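To make the calling shape concrete, here is a toy stand-in for `WorkflowManager` (purely illustrative; the real class in `quantalogic/flow/flow_manager.py` performs validation, YAML serialization, and much more):

```python
import json

class MiniWorkflowManager:
    """Toy stand-in that only records node definitions, to show the calling shape."""
    def __init__(self):
        self.nodes = {}

    def add_node(self, name, llm_config=None, inputs_mapping=None):
        self.nodes[name] = {"llm_config": llm_config, "inputs_mapping": inputs_mapping}

    def to_dict(self):
        return {"nodes": self.nodes}

manager = MiniWorkflowManager()
manager.add_node(
    "start",
    llm_config={"model": "lambda ctx: ctx['model_name']", "prompt_template": "Say hi to {name}"},
    inputs_mapping={"name": "user_name"},
)
print(json.dumps(manager.to_dict(), indent=2))
```

The real manager would then serialize this structure to YAML with `save_to_yaml`, as shown above.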
````diff
@@ -746,4 +764,4 @@ manager.save_to_yaml("hi.yaml")
 
 ## 14. Conclusion 🎉
 
-The Quantalogic Flow YAML DSL (March …
+The Quantalogic Flow YAML DSL (March 8, 2025) is a powerful, flexible tool for workflow automation, exemplified by the updated Story Generator case study. With new **input mapping** (including lambdas), **template nodes**, and **dynamic model selection** in LLM nodes, alongside sub-workflows, branching, convergence, and conversion tools, it seamlessly bridges Python and YAML. Whether crafting dynamic stories with formatted chapters or managing complex processes, this DSL, paired with `WorkflowManager`, unlocks efficient, scalable workflows. 🚀
````
quantalogic/flow/templates/prompt_check_inventory.j2
ADDED
````diff
@@ -0,0 +1 @@
+Check if the following items are in stock: {{ items }}. Return the result in JSON format with 'order_id' set to '123'.
````
quantalogic/flow/templates/system_check_inventory.j2
ADDED
````diff
@@ -0,0 +1 @@
+You are an inventory checker. Respond with a JSON object containing 'order_id', 'items_in_stock', and 'items_out_of_stock'.
````
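Both new templates are one-line Jinja2 files with plain `{{ var }}` substitutions. A minimal stdlib sketch of what rendering them does (the package uses Jinja2 itself; `render_minimal` is a hypothetical helper that handles only simple variable substitution):

```python
import re

def render_minimal(template, **variables):
    """Tiny stand-in for Jinja2 variable substitution; handles only {{ name }}."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(variables[m.group(1)]), template)

prompt = "Check if the following items are in stock: {{ items }}."
print(render_minimal(prompt, items=["widget", "gadget"]))
```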
quantalogic/server/agent_server.py
CHANGED
````diff
@@ -1,6 +1,7 @@
 #!/usr/bin/env python
 """FastAPI server for the QuantaLogic agent."""
 
+# Standard library imports
 import asyncio
 import functools
 import json
````
````diff
@@ -14,6 +15,7 @@ from queue import Empty, Queue
 from threading import Lock
 from typing import Any, AsyncGenerator, Dict, List, Optional
 
+# Third-party imports
 import uvicorn
 from fastapi import FastAPI, HTTPException, Request
 from fastapi.middleware.cors import CORSMiddleware
````
````diff
@@ -24,11 +26,12 @@ from loguru import logger
 from pydantic import BaseModel
 from rich.console import Console
 
+# Local imports
 from quantalogic.agent_config import (
     MODEL_NAME,
     create_agent,
-    create_coding_agent,  # noqa: F401
     create_basic_agent,  # noqa: F401
+    create_coding_agent,  # noqa: F401
 )
 from quantalogic.console_print_events import console_print_events
 
````
````diff
@@ -282,9 +285,21 @@ class AgentState:
             logger.error(f"Failed to initialize agent: {e}", exc_info=True)
             raise
 
-    async def sse_ask_for_user_validation(self, question…
-        """…
-
+    async def sse_ask_for_user_validation(self, question="Do you want to continue?", validation_id=None) -> bool:
+        """
+        SSE-based user validation method.
+
+        Args:
+            question: The validation question to ask
+            validation_id: Optional ID for tracking validation requests
+
+        Returns:
+            bool: True if the user validates, False otherwise.
+        """
+        # Ensure we have a validation_id
+        if validation_id is None:
+            validation_id = str(uuid.uuid4())
+
         response_queue = asyncio.Queue()
 
         # Store validation request and response queue
````
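The `sse_ask_for_user_validation` rewrite above follows a queue-per-request pattern: each validation gets an ID and an `asyncio.Queue`, and the SSE side pushes the user's answer into that queue. A self-contained sketch of the pattern, inferred from the visible hunk (the `ValidationBroker` class here is hypothetical, not the server's actual code):

```python
import asyncio
import uuid

class ValidationBroker:
    """Minimal sketch of the queue-per-request validation pattern."""
    def __init__(self):
        self._pending = {}

    async def ask(self, question, validation_id=None, timeout=5.0):
        # Register a fresh queue under the validation_id, then wait for the answer.
        if validation_id is None:
            validation_id = str(uuid.uuid4())
        queue = asyncio.Queue()
        self._pending[validation_id] = (question, queue)
        try:
            return await asyncio.wait_for(queue.get(), timeout)
        finally:
            del self._pending[validation_id]

    def answer(self, validation_id, approved):
        # Called from the HTTP/SSE side when the user responds.
        self._pending[validation_id][1].put_nowait(approved)

async def main():
    broker = ValidationBroker()
    task = asyncio.create_task(broker.ask("Continue?", validation_id="v1"))
    await asyncio.sleep(0)  # let ask() register its queue
    broker.answer("v1", True)
    print(await task)  # True

asyncio.run(main())
```

The timeout keeps an unanswered validation from blocking the agent forever, and deleting the pending entry in `finally` avoids leaking queues.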