qtype-0.1.11-py3-none-any.whl → qtype-0.1.12-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (215)
  1. docs/Concepts/mental-model-and-philosophy.md +363 -0
  2. docs/Contributing/index.md +276 -0
  3. docs/Contributing/roadmap.md +81 -0
  4. docs/Decisions/ADR-001-Chat-vs-Completion-Endpoint-Features.md +56 -0
  5. docs/Gallery/dataflow_pipelines.md +80 -0
  6. docs/Gallery/dataflow_pipelines.mermaid +45 -0
  7. docs/Gallery/research_assistant.md +98 -0
  8. docs/Gallery/research_assistant.mermaid +42 -0
  9. docs/Gallery/simple_chatbot.md +36 -0
  10. docs/Gallery/simple_chatbot.mermaid +35 -0
  11. docs/How To/Authentication/configure_aws_authentication.md +60 -0
  12. docs/How To/Authentication/use_api_key_authentication.md +40 -0
  13. docs/How To/Command Line Usage/load_multiple_inputs_from_files.md +62 -0
  14. docs/How To/Command Line Usage/pass_inputs_on_the_cli.md +52 -0
  15. docs/How To/Command Line Usage/serve_with_auto_reload.md +26 -0
  16. docs/How To/Data Processing/adjust_concurrency.md +41 -0
  17. docs/How To/Data Processing/cache_step_results.md +71 -0
  18. docs/How To/Data Processing/decode_json_xml.md +24 -0
  19. docs/How To/Data Processing/explode_collections.md +40 -0
  20. docs/How To/Data Processing/gather_results.md +68 -0
  21. docs/How To/Data Processing/read_data_from_files.md +35 -0
  22. docs/How To/Data Processing/read_sql_databases.md +47 -0
  23. docs/How To/Data Processing/write_data_to_file.md +40 -0
  24. docs/How To/Invoke Models/call_large_language_models.md +51 -0
  25. docs/How To/Invoke Models/create_embeddings.md +49 -0
  26. docs/How To/Invoke Models/reuse_prompts_with_templates.md +39 -0
  27. docs/How To/Language Features/include_qtype_yaml.md +45 -0
  28. docs/How To/Language Features/include_raw_text_from_other_files.md +47 -0
  29. docs/How To/Language Features/reference_entities_by_id.md +51 -0
  30. docs/How To/Language Features/use_environment_variables.md +47 -0
  31. docs/How To/Language Features/use_qtype_mcp.md +59 -0
  32. docs/How To/Observability & Debugging/trace_calls_with_open_telemetry.md +49 -0
  33. docs/How To/Observability & Debugging/validate_qtype_yaml.md +35 -0
  34. docs/How To/Observability & Debugging/visualize_application_architecture.md +61 -0
  35. docs/How To/Observability & Debugging/visualize_example.mermaid +35 -0
  36. docs/How To/Qtype Server/flow_as_ui.png +0 -0
  37. docs/How To/Qtype Server/serve_flows_as_apis.md +40 -0
  38. docs/How To/Qtype Server/serve_flows_as_ui.md +42 -0
  39. docs/How To/Qtype Server/use_conversational_interfaces.md +59 -0
  40. docs/How To/Qtype Server/use_variables_with_ui_hints.md +47 -0
  41. docs/How To/Tools & Integration/bind_tool_inputs_and_outputs.md +48 -0
  42. docs/How To/Tools & Integration/create_tools_from_openapi_specifications.md +89 -0
  43. docs/How To/Tools & Integration/create_tools_from_python_modules.md +90 -0
  44. docs/Reference/cli.md +338 -0
  45. docs/Reference/plugins.md +95 -0
  46. docs/Reference/semantic-validation-rules.md +179 -0
  47. docs/Tutorials/01-first-qtype-application.md +248 -0
  48. docs/Tutorials/02-conversational-chatbot.md +327 -0
  49. docs/Tutorials/03-structured-data.md +481 -0
  50. docs/Tutorials/04-tools-and-function-calling.md +483 -0
  51. docs/Tutorials/example_chat.png +0 -0
  52. docs/Tutorials/index.md +92 -0
  53. docs/components/APIKeyAuthProvider.md +7 -0
  54. docs/components/APITool.md +10 -0
  55. docs/components/AWSAuthProvider.md +13 -0
  56. docs/components/AWSSecretManager.md +5 -0
  57. docs/components/Agent.md +6 -0
  58. docs/components/Aggregate.md +8 -0
  59. docs/components/AggregateStats.md +7 -0
  60. docs/components/Application.md +22 -0
  61. docs/components/AuthorizationProvider.md +6 -0
  62. docs/components/AuthorizationProviderList.md +5 -0
  63. docs/components/BearerTokenAuthProvider.md +6 -0
  64. docs/components/BedrockReranker.md +8 -0
  65. docs/components/ChatContent.md +7 -0
  66. docs/components/ChatMessage.md +6 -0
  67. docs/components/ConstantPath.md +5 -0
  68. docs/components/CustomType.md +7 -0
  69. docs/components/Decoder.md +8 -0
  70. docs/components/DecoderFormat.md +8 -0
  71. docs/components/DocToTextConverter.md +7 -0
  72. docs/components/Document.md +7 -0
  73. docs/components/DocumentEmbedder.md +7 -0
  74. docs/components/DocumentIndex.md +7 -0
  75. docs/components/DocumentSearch.md +7 -0
  76. docs/components/DocumentSource.md +12 -0
  77. docs/components/DocumentSplitter.md +10 -0
  78. docs/components/Echo.md +8 -0
  79. docs/components/Embedding.md +7 -0
  80. docs/components/EmbeddingModel.md +6 -0
  81. docs/components/FieldExtractor.md +20 -0
  82. docs/components/FileSource.md +6 -0
  83. docs/components/FileWriter.md +7 -0
  84. docs/components/Flow.md +14 -0
  85. docs/components/FlowInterface.md +7 -0
  86. docs/components/Index.md +8 -0
  87. docs/components/IndexUpsert.md +6 -0
  88. docs/components/InvokeEmbedding.md +7 -0
  89. docs/components/InvokeFlow.md +8 -0
  90. docs/components/InvokeTool.md +8 -0
  91. docs/components/LLMInference.md +9 -0
  92. docs/components/ListType.md +5 -0
  93. docs/components/Memory.md +8 -0
  94. docs/components/MessageRole.md +14 -0
  95. docs/components/Model.md +10 -0
  96. docs/components/ModelList.md +5 -0
  97. docs/components/OAuth2AuthProvider.md +9 -0
  98. docs/components/PrimitiveTypeEnum.md +21 -0
  99. docs/components/PromptTemplate.md +7 -0
  100. docs/components/PythonFunctionTool.md +7 -0
  101. docs/components/RAGChunk.md +7 -0
  102. docs/components/RAGDocument.md +10 -0
  103. docs/components/RAGSearchResult.md +8 -0
  104. docs/components/Reranker.md +5 -0
  105. docs/components/SQLSource.md +8 -0
  106. docs/components/Search.md +7 -0
  107. docs/components/SearchResult.md +7 -0
  108. docs/components/SecretManager.md +7 -0
  109. docs/components/SecretReference.md +7 -0
  110. docs/components/Source.md +6 -0
  111. docs/components/Step.md +9 -0
  112. docs/components/TelemetrySink.md +9 -0
  113. docs/components/Tool.md +9 -0
  114. docs/components/ToolList.md +5 -0
  115. docs/components/ToolParameter.md +6 -0
  116. docs/components/TypeList.md +5 -0
  117. docs/components/Variable.md +6 -0
  118. docs/components/VariableList.md +5 -0
  119. docs/components/VectorIndex.md +7 -0
  120. docs/components/VectorSearch.md +6 -0
  121. docs/components/VertexAuthProvider.md +9 -0
  122. docs/components/Writer.md +5 -0
  123. docs/example_ui.png +0 -0
  124. docs/index.md +81 -0
  125. docs/legacy_how_tos/Configuration/modular-yaml.md +366 -0
  126. docs/legacy_how_tos/Configuration/phoenix_projects.png +0 -0
  127. docs/legacy_how_tos/Configuration/phoenix_traces.png +0 -0
  128. docs/legacy_how_tos/Configuration/reference-by-id.md +251 -0
  129. docs/legacy_how_tos/Configuration/telemetry-setup.md +259 -0
  130. docs/legacy_how_tos/Data Types/custom-types.md +52 -0
  131. docs/legacy_how_tos/Data Types/domain-types.md +113 -0
  132. docs/legacy_how_tos/Debugging/visualize-apps.md +147 -0
  133. docs/legacy_how_tos/Tools/api-tools.md +29 -0
  134. docs/legacy_how_tos/Tools/python-tools.md +299 -0
  135. examples/authentication/aws_authentication.qtype.yaml +63 -0
  136. examples/conversational_ai/hello_world_chat.qtype.yaml +43 -0
  137. examples/conversational_ai/simple_chatbot.qtype.yaml +40 -0
  138. examples/data_processing/batch_processing.qtype.yaml +54 -0
  139. examples/data_processing/cache_step_results.qtype.yaml +78 -0
  140. examples/data_processing/collect_results.qtype.yaml +55 -0
  141. examples/data_processing/dataflow_pipelines.qtype.yaml +108 -0
  142. examples/data_processing/decode_json.qtype.yaml +23 -0
  143. examples/data_processing/explode_items.qtype.yaml +25 -0
  144. examples/data_processing/read_file.qtype.yaml +60 -0
  145. examples/invoke_models/create_embeddings.qtype.yaml +28 -0
  146. examples/invoke_models/simple_llm_call.qtype.yaml +32 -0
  147. examples/language_features/include_raw.qtype.yaml +27 -0
  148. examples/language_features/ui_hints.qtype.yaml +52 -0
  149. examples/legacy/bedrock/data_analysis_with_telemetry.qtype.yaml +169 -0
  150. examples/legacy/bedrock/hello_world.qtype.yaml +39 -0
  151. examples/legacy/bedrock/hello_world_chat.qtype.yaml +37 -0
  152. examples/legacy/bedrock/hello_world_chat_with_telemetry.qtype.yaml +40 -0
  153. examples/legacy/bedrock/hello_world_chat_with_thinking.qtype.yaml +40 -0
  154. examples/legacy/bedrock/hello_world_completion.qtype.yaml +41 -0
  155. examples/legacy/bedrock/hello_world_completion_with_auth.qtype.yaml +44 -0
  156. examples/legacy/bedrock/simple_agent_chat.qtype.yaml +46 -0
  157. examples/legacy/chat_with_langfuse.qtype.yaml +50 -0
  158. examples/legacy/data_processor.qtype.yaml +48 -0
  159. examples/legacy/echo/debug_example.qtype.yaml +59 -0
  160. examples/legacy/echo/prompt.qtype.yaml +22 -0
  161. examples/legacy/echo/test.qtype.yaml +26 -0
  162. examples/legacy/echo/video.qtype.yaml +20 -0
  163. examples/legacy/field_extractor_example.qtype.yaml +137 -0
  164. examples/legacy/multi_flow_example.qtype.yaml +125 -0
  165. examples/legacy/openai/hello_world_chat.qtype.yaml +43 -0
  166. examples/legacy/openai/hello_world_chat_with_telemetry.qtype.yaml +46 -0
  167. examples/legacy/rag.qtype.yaml +207 -0
  168. examples/legacy/time_utilities.qtype.yaml +64 -0
  169. examples/legacy/vertex/hello_world_chat.qtype.yaml +36 -0
  170. examples/legacy/vertex/hello_world_completion.qtype.yaml +40 -0
  171. examples/legacy/vertex/hello_world_completion_with_auth.qtype.yaml +45 -0
  172. examples/observability_debugging/trace_with_opentelemetry.qtype.yaml +40 -0
  173. examples/research_assistant/research_assistant.qtype.yaml +94 -0
  174. examples/research_assistant/tavily.oas.yaml +722 -0
  175. examples/research_assistant/tavily.qtype.yaml +289 -0
  176. examples/tutorials/01_hello_world.qtype.yaml +48 -0
  177. examples/tutorials/02_conversational_chat.qtype.yaml +37 -0
  178. examples/tutorials/03_structured_data.qtype.yaml +130 -0
  179. examples/tutorials/04_tools_and_function_calling.qtype.yaml +89 -0
  180. qtype/application/converters/tools_from_api.py +39 -35
  181. qtype/base/types.py +6 -1
  182. qtype/commands/convert.py +3 -6
  183. qtype/commands/generate.py +7 -3
  184. qtype/commands/mcp.py +68 -0
  185. qtype/commands/validate.py +4 -4
  186. qtype/dsl/custom_types.py +2 -1
  187. qtype/dsl/linker.py +15 -7
  188. qtype/dsl/loader.py +3 -3
  189. qtype/dsl/model.py +24 -3
  190. qtype/interpreter/api.py +4 -1
  191. qtype/interpreter/base/base_step_executor.py +3 -1
  192. qtype/interpreter/conversions.py +7 -3
  193. qtype/interpreter/executors/construct_executor.py +1 -1
  194. qtype/interpreter/executors/file_source_executor.py +3 -3
  195. qtype/interpreter/executors/file_writer_executor.py +4 -4
  196. qtype/interpreter/executors/index_upsert_executor.py +1 -1
  197. qtype/interpreter/executors/sql_source_executor.py +1 -1
  198. qtype/interpreter/resource_cache.py +3 -1
  199. qtype/interpreter/rich_progress.py +6 -3
  200. qtype/interpreter/stream/chat/converter.py +25 -17
  201. qtype/interpreter/stream/chat/ui_request_to_domain_type.py +2 -2
  202. qtype/interpreter/typing.py +5 -7
  203. qtype/mcp/__init__.py +0 -0
  204. qtype/mcp/server.py +467 -0
  205. qtype/semantic/checker.py +1 -1
  206. qtype/semantic/generate.py +3 -3
  207. qtype/semantic/visualize.py +38 -51
  208. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/METADATA +21 -1
  209. qtype-0.1.12.dist-info/RECORD +325 -0
  210. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/WHEEL +1 -1
  211. schema/qtype.schema.json +4018 -0
  212. qtype-0.1.11.dist-info/RECORD +0 -142
  213. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/entry_points.txt +0 -0
  214. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/licenses/LICENSE +0 -0
  215. {qtype-0.1.11.dist-info → qtype-0.1.12.dist-info}/top_level.txt +0 -0
@@ -0,0 +1,248 @@
1
+ # Your First QType Application
2
+
3
+ **Time:** 15 minutes
4
+ **Prerequisites:** None
5
+ **Example:** [`01_hello_world.qtype.yaml`](https://github.com/bazaarvoice/qtype/blob/main/examples/tutorials/01_hello_world.qtype.yaml)
6
+
7
+ **What you'll learn:** Build a working AI-powered question-answering application and understand the core concepts of QType.
8
+
9
+ **What you'll build:** A simple app that takes a question, sends it to an AI model, and returns an answer.
10
+
11
+ ---
12
+
13
+ ## Part 1: Your First QType File (5 minutes)
14
+
15
+ ### Create the File
16
+
17
+ Create a new file called `01_hello_world.qtype.yaml` and add:
18
+
19
+ ```yaml
20
+ id: 01_hello_world
21
+ description: My first QType application
22
+ ```
23
+
24
+ **What this means:**
25
+
26
+ - Every QType application starts with an `id` - a unique name for your app
27
+ - The `description` helps you remember what the app does (optional but helpful)
28
+
29
+ ---
30
+
31
+ ### Add Your AI Model
32
+
33
+ Add these lines to your file:
34
+
35
+ ```yaml
36
+ auths:
37
+ - type: api_key
38
+ id: openai_auth
39
+ api_key: ${OPENAI_KEY}
40
+ host: https://api.openai.com
41
+
42
+ models:
43
+ - type: Model
44
+ id: gpt-4
45
+ provider: openai
46
+ model_id: gpt-4-turbo
47
+ auth: openai_auth
48
+ inference_params:
49
+ temperature: 0.7
50
+
51
+ ```
52
+
53
+ **What this means:**
54
+
55
+ - `auths:` - the authorization credentials used for model invocation (if any)
56
+ - `api_key: ${OPENAI_KEY}` - the API key is read from the environment variable `OPENAI_KEY`
57
+ - `models:` - Where you configure which AI to use
58
+ - `id: gpt-4` - A nickname you'll use to refer to this model
59
+ - `model_id` - The provider's model ID.
60
+ - `provider: openai` - Which AI service to use
61
+ - `temperature: 0.7` - Controls creativity (0 = focused, 1 = creative)
62
+
63
+ **Check your work:**
64
+
65
+ 1. Save the file
66
+ 2. Run: `qtype validate 01_hello_world.qtype.yaml`
67
+ 4. You should see: `✅ Validation successful`
68
+
69
+
70
+ **Using AWS Bedrock instead?** Replace the models section with:
71
+ ```yaml
72
+ models:
73
+ - type: Model
74
+ id: nova
75
+ provider: aws-bedrock
76
+ model_id: amazon.nova-lite-v1:0
77
+ ```
78
+
79
+ And ensure your AWS credentials are configured (`aws configure`).
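+ If you haven't configured AWS credentials before, a minimal setup looks like the sketch below (assuming you use the AWS CLI and, optionally, a named profile; adapt it to however your organization manages credentials):
+
+ ```bash
+ # Configure default credentials interactively (prompts for access key, secret key, and region)
+ aws configure
+
+ # Or, if you already have a named profile, point QType at it via your .env file
+ echo "AWS_PROFILE=your-aws-profile" >> .env
+ ```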
80
+
81
+ ---
82
+
83
+ ## Part 2: Add Processing Logic (5 minutes)
84
+
85
+ ### Create a Flow
86
+
87
+ A "flow" is where you define what your app actually does. Add this to your file:
88
+
89
+ ```yaml
90
+ flows:
91
+ - type: Flow
92
+ id: simple_example
93
+ variables:
94
+ - id: question
95
+ type: text
96
+ - id: formatted_prompt
97
+ type: text
98
+ - id: answer
99
+ type: text
100
+ inputs:
101
+ - question
102
+ outputs:
103
+ - answer
104
+ ```
105
+
106
+ **What this means:**
107
+
108
+ - `flows:` - The processing logic section
109
+ - `variables:` - Declares the data your app uses:
110
+ - `question` - What the user asks
111
+ - `formatted_prompt` - The formatted prompt for the AI
112
+ - `answer` - What the AI responds
113
+ - `inputs:` and `outputs:` - Which variables go in and out
114
+
115
+ **Check your work:**
116
+
117
+ 1. Validate again: `qtype validate 01_hello_world.qtype.yaml`
118
+ 2. Still should see: `✅ Validation successful`
119
+
120
+ ---
121
+
122
+ ### Add Processing Steps
123
+
124
+ Now tell QType what to do with the question. Add this inside your flow (after `outputs:`):
125
+
126
+ ```yaml
127
+ steps:
128
+ - id: format_prompt
129
+ type: PromptTemplate
130
+ template: "You are a helpful assistant. Answer the following question:\n{question}\n"
131
+ inputs:
132
+ - question
133
+ outputs:
134
+ - formatted_prompt
135
+
136
+ - id: llm_step
137
+ type: LLMInference
138
+ model: gpt-4
139
+ inputs:
140
+ - formatted_prompt
141
+ outputs:
142
+ - answer
143
+ ```
144
+
145
+ **What this means:**
146
+
147
+ - `steps:` - The actual processing instructions
148
+ - **Step 1: PromptTemplate** - Formats your question into a proper prompt
149
+ - `template:` - Text with placeholders like `{question}`
150
+ - Takes the user's `question` and creates `formatted_prompt`
151
+ - **Step 2: LLMInference** - Sends the prompt to the AI
152
+ - `model: gpt-4` - Use the model we defined earlier
153
+ - Takes `formatted_prompt` and returns `answer`
154
+
155
+ **Why two steps?** Separating prompt formatting from AI inference makes your app more maintainable and testable.
156
+
157
+ **Check your work:**
158
+
159
+ 1. Validate: `qtype validate 01_hello_world.qtype.yaml`
160
+ 2. Should still pass ✅
161
+
162
+ ---
163
+
164
+ ## Part 3: Run Your Application (5 minutes)
165
+
166
+ ### Set Up Authentication
167
+
168
+ Create a file called `.env` in the same folder:
169
+
170
+ ```
171
+ OPENAI_KEY=sk-your-key-here
172
+ ```
173
+
174
+ Replace `sk-your-key-here` with your actual OpenAI API key.
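+ If you'd rather not keep the key in a file, exporting it in your shell should also work, since QType resolves `${OPENAI_KEY}` from the environment (a minimal sketch; adjust for your shell):
+
+ ```bash
+ # Export the key for the current shell session instead of using .env
+ export OPENAI_KEY=sk-your-key-here
+ ```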
175
+
176
+ ---
177
+
178
+ ### Test It!
179
+
180
+ Run your application:
181
+
182
+ ```bash
183
+ qtype run -i '{"question":"What is 2+2?"}' 01_hello_world.qtype.yaml
184
+ ```
185
+
186
+ **What you should see:**
187
+ ```json
188
+ {
189
+ "answer": "2+2 equals 4."
190
+ }
191
+ ```
192
+
193
+ **Troubleshooting:**
194
+
195
+ - **"Authentication error"** → Check your API key in `.env`
196
+ - **"Model not found"** → Verify you have access to the model
197
+ - **"Variable not found"** → Check your indentation in the YAML file
198
+
199
+ ---
200
+
201
+ ### Try It With Different Questions
202
+
203
+ ```bash
204
+ # Simple fact
205
+ qtype run -i '{"question":"What is the capital of France?"}' 01_hello_world.qtype.yaml
206
+
207
+ # More complex
208
+ qtype run -i '{"question":"Explain photosynthesis in one sentence"}' 01_hello_world.qtype.yaml
209
+ ```
210
+
211
+ ---
212
+
213
+ ## What You've Learned
214
+
215
+ Congratulations! You've learned:
216
+
217
+ ✅ **Application structure** - Every QType app has an `id`
218
+ ✅ **Models** - How to configure AI providers
219
+ ✅ **Flows** - Where processing logic lives
220
+ ✅ **Variables** - How data moves through your app
221
+ ✅ **Steps** - Individual processing units (PromptTemplate, LLMInference)
222
+ ✅ **Validation** - How to check your work before running
223
+
224
+ ---
225
+
226
+ ## Next Steps
227
+
228
+ **Reference the complete example:**
229
+
230
+ - [`01_hello_world.qtype.yaml`](https://github.com/bazaarvoice/qtype/blob/main/examples/tutorials/01_hello_world.qtype.yaml) - Full working example
231
+
232
+ **Learn more:**
233
+
234
+ - [Application Concept](../Concepts/Core/application.md) - Full specification
235
+ - [All Step Types](../Concepts/Steps/index.md) - What else can you build?
236
+
237
+ ---
238
+
239
+ ## Common Questions
240
+
241
+ **Q: Why do I need to declare variables?**
242
+ A: It makes data flow explicit and helps QType validate your app before running it.
243
+
244
+ **Q: Can I use multiple models in one app?**
245
+ A: Yes! Define multiple models in the `models:` section and reference them by their `id` in steps.
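+ For example, here is a sketch reusing the two providers from this tutorial (the IDs are illustrative):
+
+ ```yaml
+ models:
+   - type: Model
+     id: gpt-4
+     provider: openai
+     model_id: gpt-4-turbo
+     auth: openai_auth
+   - type: Model
+     id: nova
+     provider: aws-bedrock
+     model_id: amazon.nova-lite-v1:0
+
+ # ...then pick one per step:
+ # - id: llm_step
+ #   type: LLMInference
+ #   model: nova
+ ```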
246
+
247
+ **Q: My validation passed but I get errors when running. Why?**
248
+ A: Validation checks structure, but runtime errors often involve authentication or model access. Check your API keys and model permissions.
@@ -0,0 +1,327 @@
1
+ # Build a Conversational Chatbot
2
+
3
+ **Time:** 20 minutes
4
+ **Prerequisites:** [Tutorial 1: Your First QType Application](01-first-qtype-application.md)
5
+ **Example:** [`02_conversational_chat.qtype.yaml`](https://github.com/bazaarvoice/qtype/blob/main/examples/tutorials/02_conversational_chat.qtype.yaml)
6
+
7
+ **What you'll learn:**
8
+
9
+ * Stateful flows with memory
10
+ * Using the web UI
11
+ * Domain types
12
+
13
+ **What you'll build:** A stateful chatbot that maintains conversation history and provides contextual responses.
14
+
15
+ ---
16
+
17
+ ## Background: A Quick Note on Flows
18
+
19
+ Flows are effectively data pipelines -- they accept input values and produce output values.
20
+ The flow will execute for each input it receives.
21
+
22
+ Thus, for a conversational AI, each message from the user is one execution of the flow.
23
+
24
+ Flows are inherently _stateless_: no data is stored between executions, though they can use tools, APIs, or memory to share data.
25
+
26
+ In this example, we'll use memory to let the flow remember previous chat messages from both the user and the LLM.
27
+
28
+
29
+ ## Part 1: Add Memory to Your Application (5 minutes)
30
+
31
+ ### Create Your Chatbot File
32
+
33
+ Create a new file called `02_conversational_chat.qtype.yaml`. Let's use Bedrock for this example, but you could also use OpenAI as in the previous tutorial:
34
+
35
+ ```yaml
36
+ id: 02_conversational_chat
37
+ description: A conversational chatbot with memory
38
+
39
+ models:
40
+
41
+ models:
42
+ - type: Model
43
+ id: nova_lite
44
+ provider: aws-bedrock
45
+ model_id: amazon.nova-lite-v1:0
46
+ inference_params:
47
+ temperature: 0.7
48
+ max_tokens: 512
49
+
50
+ ```
51
+
52
+ ---
53
+
54
+ ### Add Memory Configuration
55
+
56
+ Now add a memory configuration *before* the `flows:` section:
57
+
58
+ ```yaml
59
+ memories:
60
+ - id: chat_memory
61
+ token_limit: 10000
62
+ ```
63
+
64
+ **What this means:**
65
+
66
+ - `memories:` - Section for memory configurations (new concept!)
67
+ - `id: chat_memory` - A nickname you'll use to reference this memory
68
+ - `token_limit: 10000` - The maximum number of tokens of conversation history to keep in memory
69
+
70
+ **Check your work:**
71
+
72
+ 1. Save the file
73
+ 2. Validate: `qtype validate 02_conversational_chat.qtype.yaml`
74
+ 3. Should pass ✅ (even though we haven't added flows yet)
75
+
76
+ ---
77
+
78
+ ## Part 2: Create a Conversational Flow (7 minutes)
79
+
80
+ ### Set Up the Conversational Flow
81
+
82
+ Add this flow definition:
83
+
84
+ ```yaml
85
+ flows:
86
+ - type: Flow
87
+ id: simple_chat_example
88
+ interface:
89
+ type: Conversational
90
+ variables:
91
+ - id: user_message
92
+ type: ChatMessage
93
+ - id: response_message
94
+ type: ChatMessage
95
+ inputs:
96
+ - user_message
97
+ outputs:
98
+ - response_message
99
+ ```
100
+
101
+ **New concepts explained:**
102
+
103
+ **`ChatMessage` type** - A special domain type for chat applications
104
+
105
+ - Represents a single message in a conversation
106
+ - Contains structured blocks (text, images, files, etc.) and metadata
107
+ - Different from the simple `text` type used in stateless applications
108
+
109
+ **ChatMessage Structure:**
110
+
111
+ ```yaml
112
+ ChatMessage:
113
+ blocks:
114
+ - type: text
115
+ content: "Hello, how can I help?"
116
+ - type: image
117
+ url: "https://example.com/image.jpg"
118
+ role: assistant # or 'user', 'system'
119
+ metadata:
120
+ timestamp: "2025-11-08T10:30:00Z"
121
+ ```
122
+
123
+ The `blocks` list allows multimodal messages (text + images + files), while `role` indicates who sent the message. QType automatically handles this structure when managing conversation history.
124
+
125
+
126
+ **Why two variables?**
127
+
128
+ - `user_message` - What the user types
129
+ - `response_message` - What the AI responds
130
+ - QType tracks both in memory for context
131
+
132
+ **`interface.type: Conversational`**
133
+
134
+ This tells QType that the flow should be served as a conversation. When you run `qtype serve` (covered below), the UI shows a chat interface instead of just listing inputs and outputs.
135
+
136
+
137
+ **Check your work:**
138
+
139
+ 1. Validate: `qtype validate 02_conversational_chat.qtype.yaml`
140
+ 2. Should still pass ✅
141
+
142
+ ---
143
+
144
+ ### Add the Chat Step
145
+
146
+ Add the LLM inference step that connects to your memory:
147
+
148
+ ```yaml
149
+ steps:
150
+ - id: llm_inference_step
151
+ type: LLMInference
152
+ model: nova_lite
153
+ system_message: "You are a helpful assistant."
154
+ memory: chat_memory
155
+ inputs:
156
+ - user_message
157
+ outputs:
158
+ - response_message
159
+ ```
160
+
161
+ **What's new:**
162
+
163
+ **`memory: chat_memory`** - Links this step to the memory configuration
164
+ - Automatically sends conversation history with each request
165
+ - Updates memory after each exchange
166
+ - This line is what enables "remembering" previous messages
167
+
168
+ **`system_message` with personality** - Unlike the previous generic message, this shapes the AI's behavior for conversation
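+ For instance, you could swap in a more specific persona by changing only the `system_message` (an illustrative sketch; the rest of the step stays as above):
+
+ ```yaml
+ - id: llm_inference_step
+   type: LLMInference
+   model: nova_lite
+   system_message: "You are a cheerful travel guide who answers in short, friendly sentences."
+   memory: chat_memory
+   inputs:
+     - user_message
+   outputs:
+     - response_message
+ ```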
169
+
170
+ **Check your work:**
171
+
172
+ 1. Validate: `qtype validate 02_conversational_chat.qtype.yaml`
173
+ 2. Should pass ✅
174
+
175
+ ---
176
+
177
+ ## Part 3: Set Up and Test (8 minutes)
178
+
179
+ ### Configure Authentication
180
+
181
+ Create `.env` in the same folder (or update your existing one):
182
+
183
+ ```
184
+ AWS_PROFILE=your-aws-profile
185
+ ```
186
+
187
+ **Using OpenAI?** Replace the model configuration with:
188
+ ```yaml
189
+ auths:
190
+ - type: api_key
191
+ id: openai_auth
192
+ api_key: ${OPENAI_KEY}
193
+ host: https://api.openai.com
194
+ models:
195
+ - type: Model
196
+ id: gpt-4
197
+ provider: openai
198
+ model_id: gpt-4-turbo
199
+ auth: openai_auth
200
+ inference_params:
201
+ temperature: 0.7
202
+ ```
203
+
204
+ And:
205
+
206
+ - update the step to use `model: gpt-4`.
207
+ - update your `.env` file to set `OPENAI_KEY` (shown below).
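+ For reference, the `.env` entry is the same one used in the first tutorial:
+
+ ```
+ OPENAI_KEY=sk-your-key-here
+ ```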
208
+
209
+ ---
210
+
211
+ ### Start the Chat Interface
212
+
213
+ Unlike the previous tutorial where you used `qtype run` for one-off questions, conversational applications work better with the web interface:
214
+
215
+ ```bash
216
+ qtype serve 02_conversational_chat.qtype.yaml
217
+ ```
218
+
219
+ **What you'll see:**
220
+ ```
221
+ INFO: Started server process
222
+ INFO: Uvicorn running on http://127.0.0.1:8000
223
+ ```
224
+
225
+ **Visit:** [http://localhost:8000/ui](http://localhost:8000/ui)
226
+
227
+ You should see a chat interface with your application name at the top. Try chatting with it!
228
+
229
+ ![the ui showing a chat interface](example_chat.png)
230
+
231
+
232
+
233
+ ---
234
+
235
+ ### Test Conversation Memory
236
+
237
+ Try this conversation to see memory in action:
238
+
239
+ ```
240
+ You: My name is Alex and I love pizza.
241
+ AI: Nice to meet you, Alex! Pizza is delicious...
242
+
243
+ You: What's my name?
244
+ AI: Your name is Alex! ✅
245
+
246
+ You: What food do I like?
247
+ AI: You mentioned you love pizza! ✅
248
+ ```
249
+
250
+ Refreshing the page creates a new session and clears the memory.
251
+
252
+ ---
253
+
254
+ ## Part 4: Understanding What's Happening (Bonus)
255
+
256
+ ### The Memory Lifecycle
257
+
258
+ Here's what happens when you send a message:
259
+
260
+ ```
261
+ User: "What's my name?"
262
+
263
+ QType: Get conversation history from memory
264
+
265
+ Memory: Returns previous messages (including "My name is Alex")
266
+
267
+ QType: Combines system message + history + new question
268
+
269
+ LLM: Processes full context → "Your name is Alex!"
270
+
271
+ QType: Saves new exchange to memory
272
+
273
+ User: Sees response
274
+ ```
275
+
276
+ **Key insight:** The LLM itself has no memory - QType handles this by:
277
+
278
+ 1. Storing all previous messages
279
+ 2. Sending relevant history with each new question
280
+ 3. Managing token limits automatically
281
+
282
+
283
+ **The memory is keyed on the user session** -- it's not accessible by other visitors to the page.
284
+
285
+ ---
286
+
287
+ ## What You've Learned
288
+
289
+ Congratulations! You've mastered:
290
+
291
+ ✅ **Memory configuration** - Storing conversation state
292
+ ✅ **Conversational flows** - Multi-turn interactions
293
+ ✅ **ChatMessage type** - Domain-specific data types
294
+ ✅ **Web interface** - Using `qtype serve` for chat applications
295
+
296
+ ---
297
+
298
+ ## Next Steps
299
+
300
+ **Reference the complete example:**
301
+
302
+ - [`02_conversational_chat.qtype.yaml`](https://github.com/bazaarvoice/qtype/blob/main/examples/tutorials/02_conversational_chat.qtype.yaml) - Full working example
303
+
304
+ **Learn more:**
305
+
306
+ - [Memory Concept](../Concepts/Core/memory.md) - Advanced memory strategies
307
+ - [ChatMessage Reference](../How-To%20Guides/Data%20Types/domain-types.md) - Full type specification
308
+ - [Flow Interfaces](../Concepts/Core/flow.md) - Complete vs Conversational
309
+
310
+ ---
311
+
312
+ ## Common Questions
313
+
314
+ **Q: Why do I need `ChatMessage` instead of `text`?**
315
+ A: `ChatMessage` includes metadata (role, attachments) that QType uses to properly format conversation history for the LLM. The `text` type is for simple strings without this context.
316
+
317
+ **Q: Can I have multiple memory configurations?**
318
+ A: Yes! You can define multiple memories in the `memories:` section and reference different ones in different flows or steps.
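+ A minimal sketch of that setup (the second memory's `id` and limit are illustrative):
+
+ ```yaml
+ memories:
+   - id: chat_memory
+     token_limit: 10000
+   - id: support_memory
+     token_limit: 5000
+
+ # ...then in a step:
+ #   memory: support_memory
+ ```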
319
+
320
+ **Q: Can I use memory with the `Complete` interface?**
321
+ A: No - memory only works with the `Conversational` interface. `Complete` flows are stateless by design, so if you need to remember information between requests, use the `Conversational` interface.
322
+
323
+ **Q: When should I use Complete vs Conversational?**
324
+ A: Use Complete for streaming single responses from an LLM. Use Conversational when you need context from previous interactions (chatbots, assistants, multi-step conversations).
325
+
326
+ **Q: How do I clear memory during a conversation?**
327
+ A: Currently, you need to start a new session (refresh the page in the UI).