@promptbook/cli 0.72.0 → 0.73.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -18,6 +18,8 @@ Build responsible, controlled and transparent applications on top of LLM models!
18
18
 
19
19
  ## ✨ New Features
20
20
 
21
+ - 💙 Working on [the **Book** language v1](https://github.com/webgptorg/book)
22
+ - 📚 Support of `.docx`, `.doc` and `.pdf` documents
21
23
  - ✨ **Support of [OpenAI o1 model](https://openai.com/o1/)**
22
24
 
23
25
 
@@ -103,11 +105,9 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
103
105
 
104
106
  ## 🤍 The Promptbook Whitepaper
105
107
 
106
-
107
-
108
108
  If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 3, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
109
109
 
110
- But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses**. When this happens, you generally have three options:
110
+ But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd resp0nses**. When this happens, you generally have three options:
111
111
 
112
112
  1. **Fine-tune** the model to your specifications or even train your own.
113
113
  2. **Prompt-engineer** the prompt to the best shape you can achieve.
@@ -115,248 +115,38 @@ But often you will struggle with the **limitations of LLMs**, such as **hallucin
115
115
 
116
116
  In all of these situations, but especially in 3., the **✨ Promptbook can make your life waaaaaaaaaay easier**.
117
117
 
118
- - [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.
119
- - Establishes a [**common format `.ptbk.md`**](https://github.com/webgptorg/promptbook/discussions/85) that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
120
- - **Forget** about **low-level details** like choosing the right model, tokens, context size, temperature, top-k, top-p, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
121
- - Has built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
118
+ - [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic. For this purpose, it introduces a new language called [the **💙 Book**](https://github.com/webgptorg/book).
119
+ - Book allows you to **focus on the business logic** without having to write code or deal with the technicalities of LLMs.
120
+ - **Forget** about **low-level details** like choosing the right model, tokens, context size, `temperature`, `top-k`, `top-p`, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
121
+ - We have built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
122
122
  - Sometimes even the best prompts with the best framework like Promptbook `:)` can't avoid the problems. In this case, the library has built-in **[anomaly detection](https://github.com/webgptorg/promptbook/discussions/40) and logging** to help you find and fix the problems.
123
- - Promptbook has built in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
124
- - Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
125
-
126
-
127
-
128
- ## 🧔 Pipeline _(for prompt-engeneers)_
129
-
130
- **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is document that describes a **pipeline** - a series of prompts that are chained together to form somewhat reciepe for transforming natural language input.
131
-
132
- - Multiple pipelines forms a **collection** which will handle core **know-how of your LLM application**.
133
- - Theese pipelines are designed such as they **can be written by non-programmers**.
134
-
135
-
136
-
137
- ### Sample:
138
-
139
- File `write-website-content.ptbk.md`:
140
-
141
-
142
-
143
-
144
-
145
- > # 🌍 Create website content
146
- >
147
- > Instructions for creating web page content.
148
- >
149
- > - PIPELINE URL https://promptbook.studio/webgpt/write-website-content.ptbk.md
150
- > - INPUT  PARAM `{rawTitle}` Automatically suggested a site name or empty text
151
- > - INPUT  PARAM `{rawAssigment}` Automatically generated site entry from image recognition
152
- > - OUTPUT PARAM `{websiteContent}` Web content
153
- > - OUTPUT PARAM `{keywords}` Keywords
154
- >
155
- > ## 👤 Specifying the assigment
156
- >
157
- > What is your web about?
158
- >
159
- > - DIALOG TEMPLATE
160
- >
161
- > ```
162
- > {rawAssigment}
163
- > ```
164
- >
165
- > `-> {assigment}` Website assignment and specification
166
- >
167
- > ## ✨ Improving the title
168
- >
169
- > - PERSONA Jane, Copywriter and Marketing Specialist.
170
- >
171
- > ```
172
- > As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.
173
- >
174
- > A suggested name from a client:
175
- > "{rawTitle}"
176
- >
177
- > Assignment from customer:
178
- >
179
- > > {assigment}
180
- >
181
- > ## Instructions:
182
- >
183
- > - Write only one name suggestion
184
- > - The name will be used on the website, business cards, visuals, etc.
185
- > ```
186
- >
187
- > `-> {enhancedTitle}` Enhanced title
188
- >
189
- > ## 👤 Website title approval
190
- >
191
- > Is the title for your website okay?
192
- >
193
- > - DIALOG TEMPLATE
194
- >
195
- > ```
196
- > {enhancedTitle}
197
- > ```
198
- >
199
- > `-> {title}` Title for the website
200
- >
201
- > ## 🐰 Cunning subtitle
202
- >
203
- > - PERSONA Josh, a copywriter, tasked with creating a claim for the website.
204
- >
205
- > ```
206
- > As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.
207
- >
208
- > A website assignment from a customer:
209
- >
210
- > > {assigment}
211
- >
212
- > ## Instructions:
213
- >
214
- > - Write only one name suggestion
215
- > - Claim will be used on website, business cards, visuals, etc.
216
- > - Claim should be punchy, funny, original
217
- > ```
218
- >
219
- > `-> {claim}` Claim for the web
220
- >
221
- > ## 🚦 Keyword analysis
222
- >
223
- > - PERSONA Paul, extremely creative SEO specialist.
224
- >
225
- > ```
226
- > As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".
227
- >
228
- > Website assignment from the customer:
229
- >
230
- > > {assigment}
231
- >
232
- > ## Instructions:
233
- >
234
- > - Write a list of keywords
235
- > - Keywords are in basic form
236
- >
237
- > ## Example:
238
- >
239
- > - Ice cream
240
- > - Olomouc
241
- > - Quality
242
- > - Family
243
- > - Tradition
244
- > - Italy
245
- > - Craft
246
- >
247
- > ```
248
- >
249
- > `-> {keywords}` Keywords
250
- >
251
- > ## 🔗 Combine the beginning
252
- >
253
- > - SIMPLE TEMPLATE
254
- >
255
- > ```
256
- >
257
- > # {title}
258
- >
259
- > > {claim}
260
- >
261
- > ```
262
- >
263
- > `-> {contentBeginning}` Beginning of web content
264
- >
265
- > ## 🖋 Write the content
266
- >
267
- > - PERSONA Jane
268
- >
269
- > ```
270
- > As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.
271
- >
272
- > A website assignment from a customer:
273
- >
274
- > > {assigment}
275
- >
276
- > ## Instructions:
277
- >
278
- > - Text formatting is in Markdown
279
- > - Be concise and to the point
280
- > - Use keywords, but they should be naturally in the text
281
- > - This is the complete content of the page, so don't forget all the important information and elements the page should contain
282
- > - Use headings, bullets, text formatting
283
- >
284
- > ## Keywords:
285
- >
286
- > {keywords}
287
- >
288
- > ## Web Content:
289
- >
290
- > {contentBeginning}
291
- > ```
292
- >
293
- > `-> {contentBody}` Middle of the web content
294
- >
295
- > ## 🔗 Combine the content
296
- >
297
- > - SIMPLE TEMPLATE
298
- >
299
- > ```markdown
300
- > {contentBeginning}
301
- >
302
- > {contentBody}
303
- > ```
304
- >
305
- > `-> {websiteContent}`
306
-
307
-
308
-
309
- Following is the scheme how the promptbook above is executed:
310
-
311
- ```mermaid
312
- %% 🔮 Tip: Open this on GitHub or in the VSCode website to see the Mermaid graph visually
313
-
314
- flowchart LR
315
- subgraph "🌍 Create website content"
316
-
317
- direction TB
318
-
319
- input((Input)):::input
320
- templateSpecifyingTheAssigment(👤 Specifying the assigment)
321
- input--"{rawAssigment}"-->templateSpecifyingTheAssigment
322
- templateImprovingTheTitle(✨ Improving the title)
323
- input--"{rawTitle}"-->templateImprovingTheTitle
324
- templateSpecifyingTheAssigment--"{assigment}"-->templateImprovingTheTitle
325
- templateWebsiteTitleApproval(👤 Website title approval)
326
- templateImprovingTheTitle--"{enhancedTitle}"-->templateWebsiteTitleApproval
327
- templateCunningSubtitle(🐰 Cunning subtitle)
328
- templateWebsiteTitleApproval--"{title}"-->templateCunningSubtitle
329
- templateSpecifyingTheAssigment--"{assigment}"-->templateCunningSubtitle
330
- templateKeywordAnalysis(🚦 Keyword analysis)
331
- templateWebsiteTitleApproval--"{title}"-->templateKeywordAnalysis
332
- templateSpecifyingTheAssigment--"{assigment}"-->templateKeywordAnalysis
333
- templateCombineTheBeginning(🔗 Combine the beginning)
334
- templateWebsiteTitleApproval--"{title}"-->templateCombineTheBeginning
335
- templateCunningSubtitle--"{claim}"-->templateCombineTheBeginning
336
- templateWriteTheContent(🖋 Write the content)
337
- templateWebsiteTitleApproval--"{title}"-->templateWriteTheContent
338
- templateSpecifyingTheAssigment--"{assigment}"-->templateWriteTheContent
339
- templateKeywordAnalysis--"{keywords}"-->templateWriteTheContent
340
- templateCombineTheBeginning--"{contentBeginning}"-->templateWriteTheContent
341
- templateCombineTheContent(🔗 Combine the content)
342
- templateCombineTheBeginning--"{contentBeginning}"-->templateCombineTheContent
343
- templateWriteTheContent--"{contentBody}"-->templateCombineTheContent
344
-
345
- templateCombineTheContent--"{websiteContent}"-->output
346
- output((Output)):::output
347
-
348
- classDef input color: grey;
349
- classDef output color: grey;
350
-
351
- end;
352
- ```
123
+ - Versioning is built in. You can test multiple **A/B versions** of pipelines and see which one works best.
124
+ - Promptbook is designed to use [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques to bring the context of your business to a generic LLM. You can use **knowledge** to improve the quality of the output.
125
+
126
+
127
+
128
+ ## 💙 Book language _(for prompt-engineers)_
129
+
130
+ Promptbook [pipelines](https://github.com/webgptorg/promptbook/discussions/64) are written in a markdown-like language called [Book](https://github.com/webgptorg/book). It is designed to be understandable by non-programmers and non-technical people.
353
131
 
354
- - [More template samples](./samples/pipelines/)
355
- - [Read more about `.ptbk.md` file format here](https://github.com/webgptorg/promptbook/discussions/categories/concepts?discussions_q=is%3Aopen+label%3A.ptbk.md+category%3AConcepts)
356
132
 
357
- _Note: We are using [postprocessing functions](#postprocessing-functions) like `unwrapResult` that can be used to postprocess the result._
358
133
 
359
- ## 📦 Packages
134
+ ```markdown
135
+ # 🌟 My first Book
136
+
137
+ - PERSONA Jane, marketing specialist with prior experience in writing articles about technology and artificial intelligence
138
+ - KNOWLEDGE https://ptbk.io
139
+ - KNOWLEDGE ./promptbook.pdf
140
+ - EXPECT MIN 1 Sentence
141
+ - EXPECT MAX 1 Paragraph
142
+
143
+ > Write an article about the future of artificial intelligence in the next 10 years and how metalanguages will change the way AI is used in the world.
144
+ > Look specifically at the impact of Promptbook on the AI industry.
145
+
146
+ -> {article}
147
+ ```
148
+
149
+ ## 📦 Packages _(for developers)_
360
150
 
361
151
  This library is divided into several packages, all are published from [single monorepo](https://github.com/webgptorg/promptbook).
362
152
  You can install all of them at once:
@@ -398,8 +188,6 @@ Or you can install them separately:
398
188
 
399
189
  The following glossary is used to clarify certain concepts:
400
190
 
401
-
402
-
403
191
  ### Core concepts
404
192
 
405
193
  - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
@@ -430,8 +218,8 @@ The following glossary is used to clarify certain concepts:
430
218
 
431
219
  ## 🔌 Usage in Typescript / Javascript
432
220
 
433
- - [Simple usage](./samples/usage/simple-script)
434
- - [Usage with client and remote server](./samples/usage/remote)
221
+ - [Simple usage](./examples/usage/simple-script)
222
+ - [Usage with client and remote server](./examples/usage/remote)
435
223
 
436
224
  ## ➕➖ When to use Promptbook?
437
225
 
package/esm/index.es.js CHANGED
@@ -27,7 +27,7 @@ import { Converter } from 'showdown';
27
27
  /**
28
28
  * The version of the Promptbook library
29
29
  */
30
- var PROMPTBOOK_VERSION = '0.72.0-34';
30
+ var PROMPTBOOK_VERSION = '0.72.0';
31
31
  // TODO: [main] !!!! List here all the versions and annotate + put into script
32
32
 
33
33
  /*! *****************************************************************************
@@ -496,10 +496,10 @@ var RESERVED_PARAMETER_NAMES = $asDeeplyFrozenSerializableJson('RESERVED_PARAMET
496
496
  'content',
497
497
  'context',
498
498
  'knowledge',
499
- 'samples',
499
+ 'examples',
500
500
  'modelName',
501
501
  'currentDate',
502
- // <- TODO: !!!!! list here all command names
502
+ // <- TODO: list here all command names
503
503
  // <- TODO: Add more like 'date', 'modelName',...
504
504
  // <- TODO: Add [emoji] + instructions ACRY when adding new reserved parameter
505
505
  ]);
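The hunk above renames the reserved parameter `samples` to `examples`. A sketch of how a reserved-name guard over this list behaves — a hypothetical helper mirroring the names visible in the diff, not the library's actual implementation:

```javascript
// Reserved names as shown in the RESERVED_PARAMETER_NAMES hunk above.
const RESERVED_PARAMETER_NAMES = Object.freeze([
    'content',
    'context',
    'knowledge',
    'examples', // <- renamed from 'samples' in this release
    'modelName',
    'currentDate',
]);

// Hypothetical guard: user-defined pipeline parameters must not shadow
// a reserved name, otherwise the executor could not inject its own values.
function assertParameterNameIsFree(name) {
    if (RESERVED_PARAMETER_NAMES.includes(name)) {
        throw new Error(`Parameter name "${name}" is reserved`);
    }
    return name;
}

assertParameterNameIsFree('title'); // ok
```

Under this rename, pipelines that previously declared an `{examples}` parameter of their own would now collide with the reserved name, which is why the rename is a breaking change for template authors.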
@@ -1531,7 +1531,7 @@ function countTotalUsage(llmTools) {
1531
1531
  * TODO: [👷‍♂️] @@@ Manual about construction of llmTools
1532
1532
  */
1533
1533
 
1534
- var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> 
{knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n- Your output format is JSON object\n- Write just the 
JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
1534
+ var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> 
{knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Example\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n- Your output format is JSON object\n- Write just the 
JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
1535
1535
 
1536
1536
  /**
1537
1537
  * This error indicates that the promptbook in a markdown format cannot be parsed into a valid promptbook object
@@ -1958,7 +1958,7 @@ function validatePipelineCore(pipeline) {
1958
1958
  }
1959
1959
  }
1960
1960
  /**
1961
- * TODO: !!!!! [🧞‍♀️] Do not allow joker + foreach
1961
+ * TODO: !! [🧞‍♀️] Do not allow joker + foreach
1962
1962
  * TODO: [🧠] Work with promptbookVersion
1963
1963
  * TODO: Use here some json-schema, Zod or something similar and change it to:
1964
1964
  * > /**
@@ -1970,7 +1970,7 @@ function validatePipelineCore(pipeline) {
1970
1970
  * > ex port function validatePipeline(promptbook: really_unknown): asserts promptbook is PipelineJson {
1971
1971
  */
1972
1972
  /**
1973
- * TODO: [🧳][main] !!!! Validate that all samples match expectations
1973
+ * TODO: [🧳][main] !!!! Validate that all examples match expectations
1974
1974
  * TODO: [🧳][🐝][main] !!!! Validate that knowledge is valid (non-void)
1975
1975
  * TODO: [🧳][main] !!!! Validate that persona can be used only with CHAT variant
1976
1976
  * TODO: [🧳][main] !!!! Validate that parameter with reserved name not used RESERVED_PARAMETER_NAMES
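The TODO above ("Validate that all examples match expectations") refers to expectation objects like `expectations:{words:{min:1,max:8}}`, visible in the `PipelineCollection` earlier in this diff. A sketch of what such a word-count check could look like — a hypothetical helper, not the library's real validator:

```javascript
// Hypothetical word-count expectation check, shaped after
// expectations: { words: { min: 1, max: 8 } } from the PipelineCollection above.
function checkWordExpectation(text, { min, max }) {
    // Split on whitespace and drop empty tokens so '' counts as zero words.
    const words = text.trim().split(/\s+/).filter((word) => word !== '');
    return words.length >= min && words.length <= max;
}

checkWordExpectation('Promptbook Whitepaper Summary', { min: 1, max: 8 }); // true
checkWordExpectation('', { min: 1, max: 8 }); // false (zero words)
```

This is the mechanism behind "expectation-aware generation" mentioned in the README: a result that fails its expectations can be retried instead of being passed downstream.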
@@ -2324,12 +2324,12 @@ function isPipelinePrepared(pipeline) {
2324
2324
  return true;
2325
2325
  }
2326
2326
  /**
2327
- * TODO: [🔃][main] !!!!! If the pipeline was prepared with different version or different set of models, prepare it once again
2327
+ * TODO: [🔃][main] !! If the pipeline was prepared with different version or different set of models, prepare it once again
2328
2328
  * TODO: [🐠] Maybe base this on `makeValidator`
2329
2329
  * TODO: [🧊] Pipeline can be partially prepared, this should return true ONLY if fully prepared
2330
2330
  * TODO: [🧿] Maybe do same process with same granularity and subfinctions as `preparePipeline`
2331
2331
  * - [🏍] ? Is context in each template
2332
- * - [♨] Are samples prepared
2332
+ * - [♨] Are examples prepared
2333
2333
  * - [♨] Are templates prepared
2334
2334
  */
2335
2335
 
@@ -4073,7 +4073,7 @@ function getKnowledgeForTemplate(options) {
4073
4073
  var preparedPipeline, template;
4074
4074
  return __generator(this, function (_a) {
4075
4075
  preparedPipeline = options.preparedPipeline, template = options.template;
4076
- // TODO: [♨] Implement Better - use real index and keyword search from `template` and {samples}
4076
+ // TODO: [♨] Implement Better - use real index and keyword search from `template` and {examples}
4077
4077
  TODO_USE(template);
4078
4078
  return [2 /*return*/, preparedPipeline.knowledgePieces.map(function (_a) {
4079
4079
  var content = _a.content;
@@ -4088,7 +4088,7 @@ function getKnowledgeForTemplate(options) {
4088
4088
  *
4089
4089
  * @private internal utility of `createPipelineExecutor`
4090
4090
  */
4091
- function getSamplesForTemplate(template) {
4091
+ function getExamplesForTemplate(template) {
4092
4092
  return __awaiter(this, void 0, void 0, function () {
4093
4093
  return __generator(this, function (_a) {
4094
4094
  // TODO: [♨] Implement Better - use real index and keyword search
@@ -4105,7 +4105,7 @@ function getSamplesForTemplate(template) {
4105
4105
  */
4106
4106
  function getReservedParametersForTemplate(options) {
4107
4107
  return __awaiter(this, void 0, void 0, function () {
4108
- var preparedPipeline, template, pipelineIdentification, context, knowledge, samples, currentDate, modelName, reservedParameters, _loop_1, RESERVED_PARAMETER_NAMES_1, RESERVED_PARAMETER_NAMES_1_1, parameterName;
4108
+ var preparedPipeline, template, pipelineIdentification, context, knowledge, examples, currentDate, modelName, reservedParameters, _loop_1, RESERVED_PARAMETER_NAMES_1, RESERVED_PARAMETER_NAMES_1_1, parameterName;
4109
4109
  var e_1, _a;
4110
4110
  return __generator(this, function (_b) {
4111
4111
  switch (_b.label) {
@@ -4117,16 +4117,16 @@ function getReservedParametersForTemplate(options) {
  return [4 /*yield*/, getKnowledgeForTemplate({ preparedPipeline: preparedPipeline, template: template })];
  case 2:
  knowledge = _b.sent();
- return [4 /*yield*/, getSamplesForTemplate(template)];
+ return [4 /*yield*/, getExamplesForTemplate(template)];
  case 3:
- samples = _b.sent();
+ examples = _b.sent();
  currentDate = new Date().toISOString();
  modelName = RESERVED_PARAMETER_MISSING_VALUE;
  reservedParameters = {
  content: RESERVED_PARAMETER_RESTRICTED,
  context: context,
  knowledge: knowledge,
- samples: samples,
+ examples: examples,
  currentDate: currentDate,
  modelName: modelName,
  };
@@ -4744,7 +4744,7 @@ function preparePersona(personaDescription, tools, options) {
  });
  }
  /**
- * TODO: [🔃][main] !!!!! If the persona was prepared with different version or different set of models, prepare it once again
+ * TODO: [🔃][main] !! If the persona was prepared with different version or different set of models, prepare it once again
  * TODO: [🏢] !! Check validity of `modelName` in pipeline
  * TODO: [🏢] !! Check validity of `systemMessage` in pipeline
  * TODO: [🏢] !! Check validity of `temperature` in pipeline
@@ -5495,7 +5495,7 @@ function prepareTemplates(pipeline, tools, options) {
  case 0:
  _a = options.maxParallelCount, maxParallelCount = _a === void 0 ? DEFAULT_MAX_PARALLEL_COUNT : _a;
  templates = pipeline.templates, parameters = pipeline.parameters, knowledgePiecesCount = pipeline.knowledgePiecesCount;
- // TODO: [main] !!!!! Apply samples to each template (if missing and is for the template defined)
+ // TODO: [main] !! Apply examples to each template (if missing and is for the template defined)
  TODO_USE(parameters);
  templatesPrepared = new Array(templates.length);
  return [4 /*yield*/, forEachAsync(templates, { maxParallelCount: maxParallelCount /* <- TODO: [🪂] When there are subtasks, this maximul limit can be broken */ }, function (template, index) { return __awaiter(_this, void 0, void 0, function () {
@@ -5525,7 +5525,7 @@ function prepareTemplates(pipeline, tools, options) {
  /**
  * TODO: [🧠] Add context to each template (if missing)
  * TODO: [🧠] What is better name `prepareTemplate` or `prepareTemplateAndParameters`
- * TODO: [♨][main] !!! Prepare index the samples and maybe templates
+ * TODO: [♨][main] !!! Prepare index the examples and maybe templates
  * TODO: Write tests for `preparePipeline`
  * TODO: [🏏] Leverage the batch API and build queues @see https://platform.openai.com/docs/guides/batch
  * TODO: [🧊] In future one preparation can take data from previous preparation and save tokens and time
@@ -5734,7 +5734,7 @@ var TemplateTypes = [
  'SIMPLE_TEMPLATE',
  'SCRIPT_TEMPLATE',
  'DIALOG_TEMPLATE',
- 'SAMPLE',
+ 'EXAMPLE',
  'KNOWLEDGE',
  'INSTRUMENT',
  'ACTION',
@@ -5793,7 +5793,7 @@ var templateCommandParser = {
  'SCRIPT',
  'DIALOG',
  // <- [🅱]
- 'SAMPLE',
+ 'EXAMPLE',
  'KNOWLEDGE',
  'INSTRUMENT',
  'ACTION',
@@ -5826,7 +5826,7 @@ var templateCommandParser = {
  */
  parse: function (input) {
  var normalized = input.normalized;
- normalized = normalized.split('EXAMPLE').join('SAMPLE');
+ normalized = normalized.split('SAMPLE').join('EXAMPLE');
  var templateTypes = TemplateTypes.filter(function (templateType) {
  return normalized.includes(templateType.split('_TEMPLATE').join(''));
  });
@@ -5859,14 +5859,14 @@ var templateCommandParser = {
  if ($templateJson.content === undefined) {
  throw new UnexpectedError("Content is missing in the templateJson - probbably commands are applied in wrong order");
  }
- if (command.templateType === 'SAMPLE') {
+ if (command.templateType === 'EXAMPLE') {
  expectResultingParameterName();
  var parameter = $pipelineJson.parameters.find(function (param) { return param.name === $templateJson.resultingParameterName; });
  if (parameter === undefined) {
- throw new ParseError("Can not find parameter {".concat($templateJson.resultingParameterName, "} to assign sample value on it"));
+ throw new ParseError("Can not find parameter {".concat($templateJson.resultingParameterName, "} to assign example value on it"));
  }
- parameter.sampleValues = parameter.sampleValues || [];
- parameter.sampleValues.push($templateJson.content);
+ parameter.exampleValues = parameter.exampleValues || [];
+ parameter.exampleValues.push($templateJson.content);
  $templateJson.isTemplate = false;
  return;
  }
@@ -9920,7 +9920,7 @@ function stringifyPipelineJson(pipeline) {
  return pipelineJsonStringified;
  }
  /**
- * TODO: [🐝] Not Working propperly @see https://promptbook.studio/samples/mixed-knowledge.ptbk.md
+ * TODO: [🐝] Not Working propperly @see https://promptbook.studio/examples/mixed-knowledge.ptbk.md
  * TODO: [🧠][0] Maybe rename to `stringifyPipelineJson`, `stringifyIndexedJson`,...
  * TODO: [🧠] Maybe more elegant solution than replacing via regex
  * TODO: [🍙] Make some standard order of json properties
@@ -10590,9 +10590,9 @@ function renderPromptbookMermaid(pipelineJson, options) {
  return promptbookMermaid;
  }
  /**
- * TODO: !!!!! FOREACH in mermaid graph
- * TODO: !!!!! Knowledge in mermaid graph
- * TODO: !!!!! Personas in mermaid graph
+ * TODO: [🧠] !! FOREACH in mermaid graph
+ * TODO: [🧠] !! Knowledge in mermaid graph
+ * TODO: [🧠] !! Personas in mermaid graph
  * TODO: Maybe use some Mermaid package instead of string templating
  * TODO: [🕌] When more than 2 functionalities, split into separate functions
  */
@@ -13496,7 +13496,7 @@ var MarkdownScraper = /** @class */ (function () {
  outputParameters = result.outputParameters;
  knowledgePiecesRaw = outputParameters.knowledgePieces;
  knowledgeTextPieces = (knowledgePiecesRaw || '').split('\n---\n');
- // <- TODO: [main] !!!!! Smarter split and filter out empty pieces
+ // <- TODO: [main] !! Smarter split and filter out empty pieces
  if (isVerbose) {
  console.info('knowledgeTextPieces:', knowledgeTextPieces);
  }