qtype 0.1.11__py3-none-any.whl → 0.1.12__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (215)
  1. docs/Concepts/mental-model-and-philosophy.md +363 -0
  2. docs/Contributing/index.md +276 -0
  3. docs/Contributing/roadmap.md +81 -0
  4. docs/Decisions/ADR-001-Chat-vs-Completion-Endpoint-Features.md +56 -0
  5. docs/Gallery/dataflow_pipelines.md +80 -0
  6. docs/Gallery/dataflow_pipelines.mermaid +45 -0
  7. docs/Gallery/research_assistant.md +98 -0
  8. docs/Gallery/research_assistant.mermaid +42 -0
  9. docs/Gallery/simple_chatbot.md +36 -0
  10. docs/Gallery/simple_chatbot.mermaid +35 -0
  11. docs/How To/Authentication/configure_aws_authentication.md +60 -0
  12. docs/How To/Authentication/use_api_key_authentication.md +40 -0
  13. docs/How To/Command Line Usage/load_multiple_inputs_from_files.md +62 -0
  14. docs/How To/Command Line Usage/pass_inputs_on_the_cli.md +52 -0
  15. docs/How To/Command Line Usage/serve_with_auto_reload.md +26 -0
  16. docs/How To/Data Processing/adjust_concurrency.md +41 -0
  17. docs/How To/Data Processing/cache_step_results.md +71 -0
  18. docs/How To/Data Processing/decode_json_xml.md +24 -0
  19. docs/How To/Data Processing/explode_collections.md +40 -0
  20. docs/How To/Data Processing/gather_results.md +68 -0
  21. docs/How To/Data Processing/read_data_from_files.md +35 -0
  22. docs/How To/Data Processing/read_sql_databases.md +47 -0
  23. docs/How To/Data Processing/write_data_to_file.md +40 -0
  24. docs/How To/Invoke Models/call_large_language_models.md +51 -0
  25. docs/How To/Invoke Models/create_embeddings.md +49 -0
  26. docs/How To/Invoke Models/reuse_prompts_with_templates.md +39 -0
  27. docs/How To/Language Features/include_qtype_yaml.md +45 -0
  28. docs/How To/Language Features/include_raw_text_from_other_files.md +47 -0
  29. docs/How To/Language Features/reference_entities_by_id.md +51 -0
  30. docs/How To/Language Features/use_environment_variables.md +47 -0
  31. docs/How To/Language Features/use_qtype_mcp.md +59 -0
  32. docs/How To/Observability & Debugging/trace_calls_with_open_telemetry.md +49 -0
  33. docs/How To/Observability & Debugging/validate_qtype_yaml.md +35 -0
  34. docs/How To/Observability & Debugging/visualize_application_architecture.md +61 -0
  35. docs/How To/Observability & Debugging/visualize_example.mermaid +35 -0
  36. docs/How To/Qtype Server/flow_as_ui.png +0 -0
  37. docs/How To/Qtype Server/serve_flows_as_apis.md +40 -0
  38. docs/How To/Qtype Server/serve_flows_as_ui.md +42 -0
  39. docs/How To/Qtype Server/use_conversational_interfaces.md +59 -0
  40. docs/How To/Qtype Server/use_variables_with_ui_hints.md +47 -0
  41. docs/How To/Tools & Integration/bind_tool_inputs_and_outputs.md +48 -0
  42. docs/How To/Tools & Integration/create_tools_from_openapi_specifications.md +89 -0
  43. docs/How To/Tools & Integration/create_tools_from_python_modules.md +90 -0
  44. docs/Reference/cli.md +338 -0
  45. docs/Reference/plugins.md +95 -0
  46. docs/Reference/semantic-validation-rules.md +179 -0
  47. docs/Tutorials/01-first-qtype-application.md +248 -0
  48. docs/Tutorials/02-conversational-chatbot.md +327 -0
  49. docs/Tutorials/03-structured-data.md +481 -0
  50. docs/Tutorials/04-tools-and-function-calling.md +483 -0
  51. docs/Tutorials/example_chat.png +0 -0
  52. docs/Tutorials/index.md +92 -0
  53. docs/components/APIKeyAuthProvider.md +7 -0
  54. docs/components/APITool.md +10 -0
  55. docs/components/AWSAuthProvider.md +13 -0
  56. docs/components/AWSSecretManager.md +5 -0
  57. docs/components/Agent.md +6 -0
  58. docs/components/Aggregate.md +8 -0
  59. docs/components/AggregateStats.md +7 -0
  60. docs/components/Application.md +22 -0
  61. docs/components/AuthorizationProvider.md +6 -0
  62. docs/components/AuthorizationProviderList.md +5 -0
  63. docs/components/BearerTokenAuthProvider.md +6 -0
  64. docs/components/BedrockReranker.md +8 -0
  65. docs/components/ChatContent.md +7 -0
  66. docs/components/ChatMessage.md +6 -0
  67. docs/components/ConstantPath.md +5 -0
  68. docs/components/CustomType.md +7 -0
  69. docs/components/Decoder.md +8 -0
  70. docs/components/DecoderFormat.md +8 -0
  71. docs/components/DocToTextConverter.md +7 -0
  72. docs/components/Document.md +7 -0
  73. docs/components/DocumentEmbedder.md +7 -0
  74. docs/components/DocumentIndex.md +7 -0
  75. docs/components/DocumentSearch.md +7 -0
  76. docs/components/DocumentSource.md +12 -0
  77. docs/components/DocumentSplitter.md +10 -0
  78. docs/components/Echo.md +8 -0
  79. docs/components/Embedding.md +7 -0
  80. docs/components/EmbeddingModel.md +6 -0
  81. docs/components/FieldExtractor.md +20 -0
  82. docs/components/FileSource.md +6 -0
  83. docs/components/FileWriter.md +7 -0
  84. docs/components/Flow.md +14 -0
  85. docs/components/FlowInterface.md +7 -0
  86. docs/components/Index.md +8 -0
  87. docs/components/IndexUpsert.md +6 -0
  88. docs/components/InvokeEmbedding.md +7 -0
  89. docs/components/InvokeFlow.md +8 -0
  90. docs/components/InvokeTool.md +8 -0
  91. docs/components/LLMInference.md +9 -0
  92. docs/components/ListType.md +5 -0
  93. docs/components/Memory.md +8 -0
  94. docs/components/MessageRole.md +14 -0
  95. docs/components/Model.md +10 -0
  96. docs/components/ModelList.md +5 -0
  97. docs/components/OAuth2AuthProvider.md +9 -0
  98. docs/components/PrimitiveTypeEnum.md +21 -0
  99. docs/components/PromptTemplate.md +7 -0
  100. docs/components/PythonFunctionTool.md +7 -0
  101. docs/components/RAGChunk.md +7 -0
  102. docs/components/RAGDocument.md +10 -0
  103. docs/components/RAGSearchResult.md +8 -0
  104. docs/components/Reranker.md +5 -0
  105. docs/components/SQLSource.md +8 -0
  106. docs/components/Search.md +7 -0
  107. docs/components/SearchResult.md +7 -0
  108. docs/components/SecretManager.md +7 -0
  109. docs/components/SecretReference.md +7 -0
  110. docs/components/Source.md +6 -0
  111. docs/components/Step.md +9 -0
  112. docs/components/TelemetrySink.md +9 -0
  113. docs/components/Tool.md +9 -0
  114. docs/components/ToolList.md +5 -0
  115. docs/components/ToolParameter.md +6 -0
  116. docs/components/TypeList.md +5 -0
  117. docs/components/Variable.md +6 -0
  118. docs/components/VariableList.md +5 -0
  119. docs/components/VectorIndex.md +7 -0
  120. docs/components/VectorSearch.md +6 -0
  121. docs/components/VertexAuthProvider.md +9 -0
  122. docs/components/Writer.md +5 -0
  123. docs/example_ui.png +0 -0
  124. docs/index.md +81 -0
  125. docs/legacy_how_tos/Configuration/modular-yaml.md +366 -0
  126. docs/legacy_how_tos/Configuration/phoenix_projects.png +0 -0
  127. docs/legacy_how_tos/Configuration/phoenix_traces.png +0 -0
  128. docs/legacy_how_tos/Configuration/reference-by-id.md +251 -0
  129. docs/legacy_how_tos/Configuration/telemetry-setup.md +259 -0
  130. docs/legacy_how_tos/Data Types/custom-types.md +52 -0
  131. docs/legacy_how_tos/Data Types/domain-types.md +113 -0
  132. docs/legacy_how_tos/Debugging/visualize-apps.md +147 -0
  133. docs/legacy_how_tos/Tools/api-tools.md +29 -0
  134. docs/legacy_how_tos/Tools/python-tools.md +299 -0
  135. examples/authentication/aws_authentication.qtype.yaml +63 -0
  136. examples/conversational_ai/hello_world_chat.qtype.yaml +43 -0
  137. examples/conversational_ai/simple_chatbot.qtype.yaml +40 -0
  138. examples/data_processing/batch_processing.qtype.yaml +54 -0
  139. examples/data_processing/cache_step_results.qtype.yaml +78 -0
  140. examples/data_processing/collect_results.qtype.yaml +55 -0
  141. examples/data_processing/dataflow_pipelines.qtype.yaml +108 -0
  142. examples/data_processing/decode_json.qtype.yaml +23 -0
  143. examples/data_processing/explode_items.qtype.yaml +25 -0
  144. examples/data_processing/read_file.qtype.yaml +60 -0
  145. examples/invoke_models/create_embeddings.qtype.yaml +28 -0
  146. examples/invoke_models/simple_llm_call.qtype.yaml +32 -0
  147. examples/language_features/include_raw.qtype.yaml +27 -0
  148. examples/language_features/ui_hints.qtype.yaml +52 -0
  149. examples/legacy/bedrock/data_analysis_with_telemetry.qtype.yaml +169 -0
  150. examples/legacy/bedrock/hello_world.qtype.yaml +39 -0
  151. examples/legacy/bedrock/hello_world_chat.qtype.yaml +37 -0
  152. examples/legacy/bedrock/hello_world_chat_with_telemetry.qtype.yaml +40 -0
  153. examples/legacy/bedrock/hello_world_chat_with_thinking.qtype.yaml +40 -0
  154. examples/legacy/bedrock/hello_world_completion.qtype.yaml +41 -0
  155. examples/legacy/bedrock/hello_world_completion_with_auth.qtype.yaml +44 -0
  156. examples/legacy/bedrock/simple_agent_chat.qtype.yaml +46 -0
  157. examples/legacy/chat_with_langfuse.qtype.yaml +50 -0
  158. examples/legacy/data_processor.qtype.yaml +48 -0
  159. examples/legacy/echo/debug_example.qtype.yaml +59 -0
  160. examples/legacy/echo/prompt.qtype.yaml +22 -0
  161. examples/legacy/echo/test.qtype.yaml +26 -0
  162. examples/legacy/echo/video.qtype.yaml +20 -0
  163. examples/legacy/field_extractor_example.qtype.yaml +137 -0
  164. examples/legacy/multi_flow_example.qtype.yaml +125 -0
  165. examples/legacy/openai/hello_world_chat.qtype.yaml +43 -0
  166. examples/legacy/openai/hello_world_chat_with_telemetry.qtype.yaml +46 -0
  167. examples/legacy/rag.qtype.yaml +207 -0
  168. examples/legacy/time_utilities.qtype.yaml +64 -0
  169. examples/legacy/vertex/hello_world_chat.qtype.yaml +36 -0
  170. examples/legacy/vertex/hello_world_completion.qtype.yaml +40 -0
  171. examples/legacy/vertex/hello_world_completion_with_auth.qtype.yaml +45 -0
  172. examples/observability_debugging/trace_with_opentelemetry.qtype.yaml +40 -0
  173. examples/research_assistant/research_assistant.qtype.yaml +94 -0
  174. examples/research_assistant/tavily.oas.yaml +722 -0
  175. examples/research_assistant/tavily.qtype.yaml +289 -0
  176. examples/tutorials/01_hello_world.qtype.yaml +48 -0
  177. examples/tutorials/02_conversational_chat.qtype.yaml +37 -0
  178. examples/tutorials/03_structured_data.qtype.yaml +130 -0
  179. examples/tutorials/04_tools_and_function_calling.qtype.yaml +89 -0
  180. qtype/application/converters/tools_from_api.py +39 -35
  181. qtype/base/types.py +6 -1
  182. qtype/commands/convert.py +3 -6
  183. qtype/commands/generate.py +7 -3
  184. qtype/commands/mcp.py +68 -0
  185. qtype/commands/validate.py +4 -4
  186. qtype/dsl/custom_types.py +2 -1
  187. qtype/dsl/linker.py +15 -7
  188. qtype/dsl/loader.py +3 -3
  189. qtype/dsl/model.py +24 -3
  190. qtype/interpreter/api.py +4 -1
  191. qtype/interpreter/base/base_step_executor.py +3 -1
  192. qtype/interpreter/conversions.py +7 -3
  193. qtype/interpreter/executors/construct_executor.py +1 -1
  194. qtype/interpreter/executors/file_source_executor.py +3 -3
  195. qtype/interpreter/executors/file_writer_executor.py +4 -4
  196. qtype/interpreter/executors/index_upsert_executor.py +1 -1
  197. qtype/interpreter/executors/sql_source_executor.py +1 -1
  198. qtype/interpreter/resource_cache.py +3 -1
  199. qtype/interpreter/rich_progress.py +6 -3
  200. qtype/interpreter/stream/chat/converter.py +25 -17
  201. qtype/interpreter/stream/chat/ui_request_to_domain_type.py +2 -2
  202. qtype/interpreter/typing.py +5 -7
  203. qtype/mcp/__init__.py +0 -0
  204. qtype/mcp/server.py +467 -0
  205. qtype/semantic/checker.py +1 -1
  206. qtype/semantic/generate.py +3 -3
  207. qtype/semantic/visualize.py +38 -51
  208. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/METADATA +21 -1
  209. qtype-0.1.12.dist-info/RECORD +325 -0
  210. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/WHEEL +1 -1
  211. schema/qtype.schema.json +4018 -0
  212. qtype-0.1.11.dist-info/RECORD +0 -142
  213. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/entry_points.txt +0 -0
  214. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/licenses/LICENSE +0 -0
  215. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/top_level.txt +0 -0
docs/Contributing/roadmap.md
@@ -0,0 +1,81 @@
+ # Roadmap
+
+ This document outlines the planned features, improvements, and milestones for the QType project.
+
+ ## Current Status
+
+ - ✅ Core DSL implementation
+ - ✅ Basic validation and semantic resolution
+ - ✅ CLI interface with convert, generate, run, and validate commands
+ - ✅ AWS Bedrock model integration
+
+ ## Upcoming Milestones
+
+ ### v0.1.0
+ #### Documentation
+ - [x] Documentation setup with mkdocs
+ - [ ] Examples showroom illustrating use cases
+ - [x] Page for each concept and examples thereof
+ - [x] Document how to add to the DSL
+ - [ ] Document how to use the DSL in Visual Studio Code
+ - [ ] Document how to use includes, anchors, and references.
+
+ ## Future Work
+
+ ### DSL
+ - [ ] Add a new flow type for state machines. It will have a list of states, each itself a flow, and transitions consisting of conditions and the steps that determine whether each condition has been met.
+ - [ ] Add support for vector stores and SQL chat stores
+ - [ ] Add support for more complex conditions
+ - [ ] Expand Authorization types into abstract classes for different ways to authorize
+ - [ ] Add support for DocumentIndexes.
+ - [ ] Add feedback types and steps
+ - [ ] Add conversation storage
+
+ ### Tools
+ - [ ] Add support for importing tools from APIs
+ - [ ] Refine the type abstractions for importing tools from modules
+
+ ### Extended Capabilities
+ - [ ] (Interpreter) - User Interface
+ - [ ] (Interpreter) - Support other model providers
+ - [ ] (Interpreter) - Store memory and session info in a cache to enable stateful communication
+ - [ ] (Interpreter) - Refine the Agent interpreter for greater tool support and chat history
+ - [ ] (Interpreter) - Run as an MCP server
+ - [ ] (Interpreter) - Limit the number of UI messages when the chat flow's LLM has memory
+
+ ### Advanced AI Capabilities
+ - [ ] Multi-modal support (text, image, audio)
+ - [ ] Agent-based architectures
+ - [ ] RAG out of the box
+ - [ ] Workflows for measuring workflows
+
+ ## Feature Requests & Community Input
+
+ We welcome community feedback and feature requests! Please:
+
+ 1. Check existing [GitHub Issues](https://github.com/bazaarvoice/qtype/issues) before submitting
+ 2. Use the appropriate issue templates
+ 3. Participate in [Discussions](https://github.com/bazaarvoice/qtype/discussions) for broader topics
+ 4. Consider contributing via pull requests
+
+ ## Contributing to the Roadmap
+
+ This roadmap is a living document that evolves based on:
+ - Community feedback and usage patterns
+ - Technical feasibility assessments
+ - Business priorities and partnerships
+ - Emerging AI/ML trends and capabilities
+
+ For significant roadmap suggestions, please:
+ 1. Open a GitHub Discussion with the "roadmap" label
+ 2. Provide clear use cases and benefits
+ 3. Consider implementation complexity
+ 4. Engage with the community for feedback
+
+ ---
+
+ *Last updated: July 28, 2025*
+ *For the most current information, see [GitHub Issues](https://github.com/bazaarvoice/qtype/issues).*
docs/Decisions/ADR-001-Chat-vs-Completion-Endpoint-Features.md
@@ -0,0 +1,56 @@
+ # ADR 001: Separation of Features Between Chat and Completion Endpoints
+
+ * **Status:** Accepted
+ * **Date:** 2025-11-04
+ * **Approved by:** Lou Kratz
+
+ ---
+
+ ## Context
+
+ We offer two primary LLM endpoints: `/chat` and `/completion`, both with streaming and non-streaming (REST) variants. We needed to decide where to return advanced features (like `reasoning`, `tool_calls`, etc.) versus simple text-deltas.
+
+ The `/completion` endpoint was beginning to accumulate complex features, which conflicts with its intended simple (prompt-in, text-out) purpose. This created friction with frontend libraries like Vercel's AI SDK, whose `useCompletion` hook is designed to handle only a simple string/text-delta stream.
+
+ ## Decision
+
+ We will strictly separate the concerns of the two endpoints to align with industry best practices and simplify their use:
+
+ 1. **`/chat` (Stream):** This will be the **only** endpoint that returns advanced features (e.g., `reasoning`, `tool_calls`, and other non-text metadata).
+ 2. **`/completion` (Stream & REST):** These endpoints will **only** return simple text. The stream will only contain text-deltas, and the REST endpoint will return the final completed text string.
+ 3. **`/chat` (REST):** This endpoint will also return simple text responses, consistent with the other REST endpoints.
+
+ All advanced functionality will be removed from the `/completion` endpoints and the non-streaming `/chat` endpoint.
+
+ ---
+
+ ## Consequences
+
+ ### Positive
+
+ * **Simplicity & Alignment:** This aligns the `/completion` endpoint with its intended purpose as a simple Q&A/generation tool. It matches the pattern set by major libraries (like Vercel's SDK), which treat "completion" as a simple string.
+ * **Improved FE Integration:** Our UIs can now use the standard `useCompletion` hook directly without any workarounds.
+ * **Clear API Contract:** 3rd-party developers have a clear choice:
+   * Use `/completion` for simple text generation.
+   * Use `/chat` (stream) for complex, stateful, or tool-enabled interactions.
+ * **Long-Term Maintainability:** We only need to build, test, and maintain advanced features in a single endpoint (the chat stream), reducing complexity.
+
+ ### Negative
+
+ * (None identified; this decision is considered a simplification and correction of a previous design.)
+
+ ---
+
+ ## Options Considered
+
+ ### Option 1: Keep `useCompletion` and Manually Parse Data
+
+ * **Details:** Keep using the Vercel `useCompletion` hook in the frontend but change our stream protocol to `text` (from the default). We would then manually embed JSON for `reasoning`/`tools` within the text stream and parse it on the client.
+ * **Rejected Because:** This requires complex, brittle, and manual handling of stream states and data extraction in the frontend, defeating the purpose of using the simple hook.
+
+ ### Option 2: Use `useChat` for the Completion UI
+
+ * **Details:** Have our simple "completion" UI use the `/chat` endpoint and Vercel's `useChat` hook, but always send an empty message array.
+ * **Rejected Because:**
+   1. **Poor FE Experience:** It complicates the frontend logic, requiring us to manually clear Vercel's `messages` state after every single request.
+   2. **Poor API Experience:** It forces 3rd-party users (who just want to send a simple prompt) to use the more complex `/chat` object format instead of the simple `/completion` string format.
docs/Gallery/dataflow_pipelines.md
@@ -0,0 +1,80 @@
+ # LLM Processing Pipelines
+
+ ## Overview
+
+ An automated data processing pipeline that reads product reviews from a SQLite database and analyzes each review's sentiment using an LLM. This example demonstrates QType's dataflow capabilities with database sources, parallel LLM processing, and streaming results without requiring batch operations.
+
+ ## Architecture
+
+ ```mermaid
+ --8<-- "Gallery/dataflow_pipelines.mermaid"
+ ```
+
+ ## Complete Code
+
+ ```yaml
+ --8<-- "../examples/data_processing/dataflow_pipelines.qtype.yaml"
+ ```
+
+ ## Key Features
+
+ - **SQLSource Step**: Database source that executes SQL queries using SQLAlchemy connection strings and emits one message per result row, enabling parallel processing of database records through downstream steps
+ - **PromptTemplate Step**: Template engine with curly-brace variable substitution (`{product_name}`, `{rating}`) that dynamically generates prompts from message variables for each review
+ - **LLMInference Step**: Processes each message independently through the language model with automatic parallelization, invoking AWS Bedrock inference for all reviews concurrently
+ - **Multi-record Flow**: Each database row becomes an independent FlowMessage flowing through the pipeline in parallel, carrying variables (review_id, product_name, rating, review_text) and accumulating new fields (llm_analysis) at each step
+ - **Message Sink**: The final step accumulates all records and writes them to an output file.
+
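The four steps above chain together as `load_reviews → create_prompt → analyze_sentiment → write_results`. The fragment below is an illustrative sketch of that chain only: the step types follow the components named on this page, but the exact keys (`connection`, `query`, `template`, `outputs`, `model`) are assumptions — the authoritative spec is the bundled `examples/data_processing/dataflow_pipelines.qtype.yaml` included above.

```yaml
# Illustrative sketch, not the packaged spec; field names are assumptions.
flows:
  - id: analyze_reviews
    steps:
      - type: SQLSource
        id: load_reviews
        connection: sqlite:///reviews.db   # SQLAlchemy connection string (assumed key name)
        query: SELECT review_id, product_name, rating, review_text FROM reviews
      - type: PromptTemplate
        id: create_prompt
        template: "Classify the sentiment of this review of {product_name} (rated {rating}/5): {review_text}"
        outputs: [analysis_prompt]
      - type: LLMInference
        id: analyze_sentiment
        model: nova_lite                   # defined under the shared models section
        outputs: [llm_analysis]
      - type: FileWriter
        id: write_results                  # sink: collects all messages and writes the output file
```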
+ ## Running the Example
+
+ ### Setup
+
+ First, create the sample database with product reviews:
+
+ ```bash
+ python examples/data_processing/create_sample_db.py
+ ```
+
+ This generates a SQLite database with 10 sample product reviews covering various products and sentiments.
+
+ ### Run the Pipeline
+
+ Process all reviews and generate the analysis with real-time progress monitoring:
+
+ ```bash
+ qtype run -i '{"output_path":"results.parquet"}' --progress examples/data_processing/dataflow_pipelines.qtype.yaml
+ ```
+
+ The `--progress` flag displays a live dashboard showing:
+ - Message throughput for each step (msg/s)
+ - Success/error counts
+ - Processing duration with visual progress bars
+
+ Example output:
+ ```
+ ╭─────────────────────────────── Flow Progress ───────────────────────────────╮
+ │                                                                              │
+ │ Step load_reviews       1.6 msg/s ▁▁▁▁▃▃▃▃▅▅▅▅████████ ✔ 10 succeeded ✖ 0 errors ⟳ - hits ✗ - misses 0:00:06 │
+ │ Step create_prompt      1.6 msg/s ▁▁▁▁▃▃▃▃▅▅▅▅████████ ✔ 10 succeeded ✖ 0 errors ⟳ - hits ✗ - misses 0:00:06 │
+ │ Step analyze_sentiment  2.0 msg/s ▄▄▄▄▆▆▆▆▅▅▅▅███████▁ ✔ 10 succeeded ✖ 0 errors ⟳ - hits ✗ - misses 0:00:04 │
+ │ Step write_results        - msg/s                      ✔  1 succeeded ✖ 0 errors ⟳ - hits ✗ - misses 0:00:00 │
+ │                                                                              │
+ ╰──────────────────────────────────────────────────────────────────────────────╯
+
+ ```
+
+ You'll notice that the output shows 1 message for `write_results` and 10 for the others. That is because the dashboard reports the number of messages _emitted_ from each step, and `write_results` is a sink that collects all messages.
+
+ The final lines of the output report the result file where the data were written:
+
+ ```
+ 2026-01-16 11:23:35,151 - INFO: ✅ Flow execution completed successfully
+ 2026-01-16 11:23:35,151 - INFO: Processed 1 em
+ 2026-01-16 11:23:35,152 - INFO:
+ Results:
+   result_file: results.parquet
+ ```
+
+ ## Learn More
+
+ - Tutorial: [Your First QType Application](../Tutorials/01-first-qtype-application.md)
+ - Example: [Simple Chatbot](./simple_chatbot.md)
docs/Gallery/dataflow_pipelines.mermaid
@@ -0,0 +1,45 @@
+ flowchart TD
+     subgraph APP ["📱 review_analysis_pipeline"]
+         direction TB
+
+         subgraph FLOW_0 ["🔄 analyze_reviews"]
+             direction TB
+             FLOW_0_START@{shape: circle, label: "▶️ Start"}
+             FLOW_0_S0@{shape: rect, label: "⚙️ load_reviews"}
+             FLOW_0_S1@{shape: doc, label: "📄 create_prompt"}
+             FLOW_0_S2@{shape: rounded, label: "✨ analyze_sentiment"}
+             FLOW_0_S3@{shape: rect, label: "⚙️ write_results"}
+             FLOW_0_S0 -->|product_name| FLOW_0_S1
+             FLOW_0_S0 -->|rating| FLOW_0_S1
+             FLOW_0_S0 -->|review_text| FLOW_0_S1
+             FLOW_0_S1 -->|analysis_prompt| FLOW_0_S2
+             FLOW_0_S0 -->|review_id| FLOW_0_S3
+             FLOW_0_S0 -->|product_name| FLOW_0_S3
+             FLOW_0_S0 -->|rating| FLOW_0_S3
+             FLOW_0_S0 -->|review_text| FLOW_0_S3
+             FLOW_0_S2 -->|llm_analysis| FLOW_0_S3
+             FLOW_0_START -->|output_path| FLOW_0_S3
+         end
+
+         subgraph RESOURCES ["🔧 Shared Resources"]
+             direction LR
+             MODEL_NOVA_LITE@{shape: rounded, label: "✨ nova_lite (aws-bedrock)" }
+         end
+
+     end
+
+     FLOW_0_S2 -.->|uses| MODEL_NOVA_LITE
+
+     %% Styling
+     classDef appBox fill:none,stroke:#495057,stroke-width:3px
+     classDef flowBox fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
+     classDef llmNode fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
+     classDef modelNode fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
+     classDef authNode fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
+     classDef telemetryNode fill:#fce4ec,stroke:#c2185b,stroke-width:2px
+     classDef resourceBox fill:#f5f5f5,stroke:#616161,stroke-width:1px
+
+     class APP appBox
+     class FLOW_0 flowBox
+     class RESOURCES resourceBox
+     class TELEMETRY telemetryNode
docs/Gallery/research_assistant.md
@@ -0,0 +1,98 @@
+ # Research Assistant
+
+ ## Overview
+
+ A minimal “web research assistant” that takes a single input topic, searches the
+ web using Tavily, and then synthesizes an answer with an LLM call. This example
+ demonstrates how to reuse OpenAPI-generated tools via `references` and bind tool
+ outputs into an LLM prompt.
+
+ ## Architecture
+
+ ```mermaid
+ --8<-- "Gallery/research_assistant.mermaid"
+ ```
+
+ ## Complete Code
+
+ ```yaml
+ --8<-- "../examples/research_assistant/research_assistant.qtype.yaml"
+ ```
+
+ ## Key Features
+
+ - **`references` (Application)**: Imports external QType documents (here, the Tavily
+   tool library) so you can reference tools like `search` by ID
+ - **`BearerTokenAuthProvider.token`**: Stores the Tavily API key as a bearer token,
+   loaded via `${TAVILY-API_BEARER}` environment-variable substitution
+ - **APITool**: Represents the Tavily HTTP endpoints (like `/search`) as typed tools
+   with declared `inputs` and `outputs`
+ - **InvokeTool Step**: Calls the `search` tool and maps flow variables to tool
+   parameters via `input_bindings`/`output_bindings`
+ - **PromptTemplate Step**: Builds the synthesis prompt by combining the user topic
+   and the Tavily search results
+ - **LLMInference Step**: Produces the final `answer` by running model inference
+   using the prompt produced by `PromptTemplate`
+
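Putting those pieces together, the flow wires `search_web → build_prompt → synthesize`. The fragment below is only a sketch of that wiring: the step and variable names come from this page and the diagram, but the exact keys for the references and bindings are assumptions — the complete, working spec is the bundled `examples/research_assistant/research_assistant.qtype.yaml` included above.

```yaml
# Illustrative sketch, not the packaged spec; reference/binding key names are assumptions.
references:
  - tavily.qtype.yaml              # imports the OpenAPI-generated Tavily tools, including `search`
flows:
  - id: research
    steps:
      - type: InvokeTool
        id: search_web
        tool: search
        input_bindings:
          query: topic             # flow variable `topic` -> tool parameter
        output_bindings:
          results: tavily_results  # tool output -> flow variable
      - type: PromptTemplate
        id: build_prompt
        template: "Summarize what is known about {topic} using these search results: {tavily_results}"
        outputs: [synthesis_prompt]
      - type: LLMInference
        id: synthesize
        model: nova_lite
        outputs: [answer]
```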
+ ## Running the Example
+
+ ### 1) Create a Tavily account + API key
+
+ Create an account at Tavily and generate an API key.
+
+ ### 2) Set the Tavily token in a `.env`
+
+ Create a `.env` file next to `examples/research_assistant/research_assistant.qtype.yaml`:
+
+ ```bash
+ TAVILY-API_BEARER=tvly-...
+ ```
+
+ QType automatically loads a `.env` file from the spec’s directory when you run
+ `qtype run`.
+
+ ### 3) Run
+
+ ```bash
+ # Validate the YAML
+ qtype validate examples/research_assistant/research_assistant.qtype.yaml
+
+ # Run directly
+ qtype run -i '{"topic":"Latest developments in retrieval augmented generation"}' \
+   examples/research_assistant/research_assistant.qtype.yaml
+ ```
+
+ ### Example Output
+
+ When running with the topic "Latest developments in retrieval augmented generation", the research assistant produces:
+
+ > #### Latest Developments in Retrieval-Augmented Generation
+ >
+ > Retrieval-Augmented Generation (RAG) has seen significant advancements, particularly in enhancing the accuracy and
+ > factual grounding of AI-generated content. Recent developments focus on improving the retrieval–generation
+ > pipeline, reducing hallucinations, and increasing performance metrics. For instance, performance improvements from
+ > 68% to 73% have been reported, with notable reductions in hallucinations and stronger factual grounding.
+ >
+ > Key areas of progress include:
+ >
+ > - **Enhanced Retrieval Mechanisms:** Improved algorithms for retrieving relevant information from large datasets.
+ > - **Stronger Factual Grounding:** Techniques that ensure generated content is more accurate and grounded in factual
+ >   data.
+ > - **Reduction of Hallucinations:** Methods to minimize the generation of incorrect or misleading information.
+ > - **Targeted Enhancements:** Specific improvements across different stages of the retrieval–generation process.
+ >
+ > These advancements are expected to have a substantial impact on various applications, including natural language
+ > processing, content creation, and information retrieval systems.
+ >
+ > **Sources:**
+ >
+ > - [Latest Developments in Retrieval-Augmented Generation - CelerData](https://celerdata.com/glossary/latest-developments-in-retrieval-augmented-generation)
+ > - [Advancements in RAG [Retrieval-Augmented Generation] Systems by Mid-2025](https://medium.com/@martinagrafsvw25/advancements-in-rag-retrieval-augmented-generation-systems-by-mid-2025-935a39c15ae9)
+ > - [Retrieval-Augmented Generation: A Comprehensive Survey - arXiv](https://arxiv.org/html/2506.00054v1)
+
+ ## Learn More
+
+ - How-To: [Create Tools from OpenAPI Specifications](../How%20To/Tools%20%26%20Integration/create_tools_from_openapi_specifications.md)
+ - How-To: [Bind Tool Inputs and Outputs](../How%20To/Tools%20%26%20Integration/bind_tool_inputs_and_outputs.md)
+ - How-To: [Include QType YAML](../How%20To/Language%20Features/include_qtype_yaml.md)
+ - How-To: [Call Large Language Models](../How%20To/Invoke%20Models/call_large_language_models.md)
docs/Gallery/research_assistant.mermaid
@@ -0,0 +1,42 @@
+ flowchart TD
+     subgraph APP ["📱 research_assistant"]
+         direction TB
+
+         subgraph FLOW_0 ["🔄 research"]
+             direction TB
+             FLOW_0_START@{shape: circle, label: "▶️ Start"}
+             FLOW_0_S0@{shape: rect, label: "⚙️ search_web"}
+             FLOW_0_S1@{shape: doc, label: "📄 build_prompt"}
+             FLOW_0_S2@{shape: rounded, label: "✨ synthesize"}
+             FLOW_0_START -->|topic| FLOW_0_S0
+             FLOW_0_START -->|topic| FLOW_0_S1
+             FLOW_0_S0 -->|tavily_results| FLOW_0_S1
+             FLOW_0_S1 -->|synthesis_prompt| FLOW_0_S2
+         end
+
+         subgraph RESOURCES ["🔧 Shared Resources"]
+             direction LR
+             AUTH_TAVILY_API_BEARERAUTH_TOKEN@{shape: hex, label: "🔐 tavily-api_bearerauth_token (BEARER_TOKEN)"}
+             MODEL_NOVA_LITE@{shape: rounded, label: "✨ nova_lite (aws-bedrock)" }
+             TOOL_SEARCH["⚡ search (POST)"]
+             TOOL_SEARCH -.->|uses| AUTH_TAVILY_API_BEARERAUTH_TOKEN
+         end
+
+     end
+
+     FLOW_0_S0 -.->|uses| TOOL_SEARCH
+     FLOW_0_S2 -.->|uses| MODEL_NOVA_LITE
+
+     %% Styling
+     classDef appBox fill:none,stroke:#495057,stroke-width:3px
+     classDef flowBox fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
+     classDef llmNode fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
+     classDef modelNode fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
+     classDef authNode fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
+     classDef telemetryNode fill:#fce4ec,stroke:#c2185b,stroke-width:2px
+     classDef resourceBox fill:#f5f5f5,stroke:#616161,stroke-width:1px
+
+     class APP appBox
+     class FLOW_0 flowBox
+     class RESOURCES resourceBox
+     class TELEMETRY telemetryNode
docs/Gallery/simple_chatbot.md
@@ -0,0 +1,36 @@
+ # Simple Chatbot
+
+ ## Overview
+
+ A friendly conversational chatbot with memory that maintains context across multiple conversation turns. This example demonstrates the minimal setup needed to create a stateful chatbot using AWS Bedrock, perfect for getting started with conversational AI applications.
+
+ ## Architecture
+
+ ```mermaid
+ --8<-- "Gallery/simple_chatbot.mermaid"
+ ```
+
+ ## Complete Code
+
+ ```yaml
+ --8<-- "../examples/conversational_ai/simple_chatbot.qtype.yaml"
+ ```
+
+ ## Key Features
+
+ - **Conversational Interface**: Instructs the front-end to present a chat-style conversational user experience.
+ - **Memory**: Conversation history buffer with a `token_limit` (10,000) that stores messages and automatically flushes the oldest content when the limit is exceeded
+ - **ChatMessage Type**: Built-in domain type with a `role` field (user/assistant/system) and a `blocks` list for structured multi-modal content
+ - **LLMInference Step**: Executes model inference with an optional `system_message` prepended to the conversation and a `memory` reference for persistent context across turns
+ - **Model Configuration**: Model resource with provider-specific `inference_params` including `temperature` (randomness) and `max_tokens` (response length limit)
+
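For orientation, these pieces fit together roughly as sketched below. This is assembled from the bullets above and is not the packaged spec: the top-level key for memories, the Bedrock model id, and the parameter values are assumptions — the authoritative version is the bundled `examples/conversational_ai/simple_chatbot.qtype.yaml` included above.

```yaml
# Illustrative sketch, not the packaged spec; key names and values are assumptions.
memories:
  - id: conversation_memory
    token_limit: 10000                   # oldest messages are flushed past this limit
models:
  - type: Model
    id: nova_lite
    provider: aws-bedrock
    model_id: us.amazon.nova-lite-v1:0   # assumed Bedrock model id
    inference_params:
      temperature: 0.7                   # randomness
      max_tokens: 1024                   # response length limit
flows:
  - id: chat_flow
    steps:
      - type: LLMInference
        id: generate_response
        model: nova_lite
        memory: conversation_memory
        system_message: You are a friendly, helpful assistant.
```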
+ ## Running the Example
+
+ ```bash
+ # Start the chatbot server
+ qtype serve examples/conversational_ai/simple_chatbot.qtype.yaml
+ ```
+
+ ## Learn More
+
+ - Tutorial: [Building a Stateful Chatbot](../Tutorials/02-conversational-chatbot.md)
docs/Gallery/simple_chatbot.mermaid
@@ -0,0 +1,35 @@
+ flowchart TD
+     subgraph APP ["📱 simple_chatbot"]
+         direction TB
+
+         subgraph FLOW_0 ["🔄 chat_flow"]
+             direction LR
+             FLOW_0_START@{shape: circle, label: "▶️ Start"}
+             FLOW_0_S0@{shape: rounded, label: "✨ generate_response"}
+             FLOW_0_START -->|user_message| FLOW_0_S0
+         end
+
+         subgraph RESOURCES ["🔧 Shared Resources"]
+             direction LR
+             MODEL_NOVA_LITE@{shape: rounded, label: "✨ nova_lite (aws-bedrock)" }
+             MEM_CONVERSATION_MEMORY@{shape: win-pane, label: "🧠 conversation_memory (10KT)"}
+         end
+
+     end
+
+     FLOW_0_S0 -.->|uses| MODEL_NOVA_LITE
+     FLOW_0_S0 -.->|stores| MEM_CONVERSATION_MEMORY
+
+     %% Styling
+     classDef appBox fill:none,stroke:#495057,stroke-width:3px
+     classDef flowBox fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
+     classDef llmNode fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
+     classDef modelNode fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
+     classDef authNode fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
+     classDef telemetryNode fill:#fce4ec,stroke:#c2185b,stroke-width:2px
+     classDef resourceBox fill:#f5f5f5,stroke:#616161,stroke-width:1px
+
+     class APP appBox
+     class FLOW_0 flowBox
+     class RESOURCES resourceBox
+     class TELEMETRY telemetryNode
docs/How To/Authentication/configure_aws_authentication.md
@@ -0,0 +1,60 @@
+ # Configure AWS Authentication
+
+ AWS Bedrock and other AWS services require authentication, which can be configured using access keys, AWS profiles, or role assumption.
+
+ ### QType YAML
+
+ ```yaml
+ auths:
+   # Method 1: AWS Profile (recommended)
+   - type: aws
+     id: aws_profile
+     profile_name: default
+     region: us-east-1
+
+   # Method 2: Access Keys (for CI/CD)
+   - type: aws
+     id: aws_keys
+     access_key_id: AKIAIOSFODNN7EXAMPLE
+     secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+     region: us-east-1
+
+   # Method 3: Role Assumption
+   - type: aws
+     id: aws_role
+     profile_name: base_profile
+     role_arn: arn:aws:iam::123456789012:role/MyRole
+     role_session_name: qtype-session
+     region: us-east-1
+
+ models:
+   - type: Model
+     id: nova
+     provider: aws-bedrock
+     model_id: us.amazon.nova-micro-v1:0
+     auth: aws_profile
+ ```
+
+ ### Explanation
+
+ - **type: aws**: Declares an AWS authentication provider
+ - **profile_name**: Uses credentials from `~/.aws/credentials` (recommended for local development)
+ - **access_key_id / secret_access_key**: Explicit credentials (use environment variables or a secret manager rather than hard-coding them)
+ - **session_token**: Temporary credentials for AWS STS sessions
+ - **role_arn**: ARN of the IAM role to assume (requires base credentials via profile or keys)
+ - **role_session_name**: Session identifier when assuming a role
+ - **external_id**: External ID for cross-account role assumption
+ - **region**: AWS region for API calls (e.g., `us-east-1`, `us-west-2`)
+
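Following the advice above not to hard-code keys, here is a minimal sketch of Method 2 with the credentials pulled from environment variables via QType's `${VAR}` substitution (the variable names are conventional AWS names, not a requirement):

```yaml
auths:
  - type: aws
    id: aws_keys
    access_key_id: ${AWS_ACCESS_KEY_ID}          # resolved from the environment at load time
    secret_access_key: ${AWS_SECRET_ACCESS_KEY}
    region: us-east-1
```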
+ ## Complete Example
+
+ ```yaml
+ --8<-- "../examples/authentication/aws_authentication.qtype.yaml"
+ ```
+
+ ## See Also
+
+ - [AWSAuthProvider Reference](../../components/AWSAuthProvider.md)
+ - [Model Reference](../../components/Model.md)
+ - [How-To: Use API Key Authentication](use_api_key_authentication.md)
+ - [How-To: Manage Secrets with Secret Manager](../Authentication/manage_secrets.md)
docs/How To/Authentication/use_api_key_authentication.md
@@ -0,0 +1,40 @@
+ # Use API Key Authentication
+
+ Authenticate with model providers like OpenAI using API keys, either loaded from environment variables or stored in secret managers.
+
+ ### QType YAML
+
+ ```yaml
+ auths:
+   - type: api_key
+     id: openai_auth
+     api_key: ${OPENAI_KEY}
+     host: https://api.openai.com
+
+ models:
+   - type: Model
+     id: gpt-4
+     provider: openai
+     model_id: gpt-4-turbo
+     auth: openai_auth
+ ```
+
+ ### Explanation
+
+ - **type: api_key**: Specifies this is an API key-based authentication provider
+ - **api_key**: The API key value, typically loaded from an environment variable using `${VAR_NAME}` syntax
+ - **host**: Base URL or domain of the provider (optional; some providers infer this)
+ - **auth**: Reference to the auth provider by its ID when configuring models
+
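A minimal sketch of how this is typically exercised from the shell, assuming the complete example below is used; the key value is a placeholder, and any `-i` inputs depend on the flow's declared variables:

```bash
# Provide the key referenced by ${OPENAI_KEY} (it can also live in a .env file
# next to the spec, which QType loads automatically on `qtype run`).
export OPENAI_KEY=sk-...

# Validate and run the flow; pass any required inputs with -i '{"...": "..."}'.
qtype validate examples/tutorials/01_hello_world.qtype.yaml
qtype run examples/tutorials/01_hello_world.qtype.yaml
```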
+ ## Complete Example
+
+ ```yaml
+ --8<-- "../examples/tutorials/01_hello_world.qtype.yaml"
+ ```
+
+ ## See Also
+
+ - [APIKeyAuthProvider Reference](../../components/APIKeyAuthProvider.md)
+ - [Use Environment Variables](../Language%20Features/use_environment_variables.md)
+ - [Model Reference](../../components/Model.md)
+ - [Tutorial: Your First QType Application](../../Tutorials/01-first-qtype-application.md)
docs/How To/Command Line Usage/load_multiple_inputs_from_files.md
@@ -0,0 +1,62 @@
+ # Load Multiple Inputs from Files
+
+ Process multiple inputs in batch by loading data from CSV, JSON, Parquet, or Excel files using the `--input-file` CLI flag, enabling bulk processing without manual JSON construction.
+
+ ### CLI Command
+
+ ```bash
+ qtype run app.qtype.yaml --input-file inputs.csv
+ ```
+
+ ### Supported File Formats
+
+ - **CSV**: Columns map to input variable names
+ - **JSON**: Array of objects or records format
+ - **Parquet**: Efficient columnar format for large datasets
+ - **Excel**: `.xlsx` or `.xls` files
+
+ ### How It Works
+
+ When you provide `--input-file`, QType:
+ 1. Reads the file into a pandas DataFrame
+ 2. Treats each row as one execution of the flow
+ 3. Matches column names to the flow's input variable IDs
+ 4. Processes rows with the configured concurrency
+ 5. Returns results as a DataFrame (which can be saved with `--output`)
+
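As a concrete (hypothetical) illustration: a flow whose input variables are `topic` and `audience` could be driven by a CSV like the one below, where the header row matches the variable IDs exactly and each data row becomes one flow execution.

```csv
topic,audience
Retrieval augmented generation,engineers
Vector databases,product managers
```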
+ ## Complete Example
+
+ **batch_inputs.csv:**
+ ```csv
+ --8<-- "../examples/data_processing/batch_inputs.csv"
+ ```
+
+ **Application:**
+ ```yaml
+ --8<-- "../examples/data_processing/batch_processing.qtype.yaml"
+ ```
+
+ **Run the batch:**
+ ```bash
+ # Process all rows from CSV
+ qtype run batch_processing.qtype.yaml --input-file batch_inputs.csv
+
+ # Save results to Parquet
+ qtype run batch_processing.qtype.yaml \
+   --input-file batch_inputs.csv \
+   --output results.parquet
+ ```
+
+ ### Explanation
+
+ - **--input-file (-I)**: Path to the file containing input data (CSV, JSON, Parquet, or Excel)
+ - **Column mapping**: CSV column names must match flow input variable IDs exactly
+ - **Batch processing**: Each row is processed as a separate flow execution
+ - **--output (-o)**: Optional path to save results as a Parquet file
+ - **Parallel processing**: Steps that support concurrency will process multiple rows in parallel
+
+ ## See Also
+
+ <!-- - [Adjust Concurrency](adjust_concurrency.md) -->
+ <!-- - [FileSource Reference](../../components/FileSource.md) -->
+ - [Example: Dataflow Pipeline](../../Gallery/dataflow_pipelines.md)