@promptbook/core 0.72.0 → 0.73.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -18,6 +18,8 @@ Build responsible, controlled and transparent applications on top of LLM models!
 
 ## ✨ New Features
 
+ - 💙 Working on [the **Book** language v1](https://github.com/webgptorg/book)
+ - 📚 Support of `.docx`, `.doc` and `.pdf` documents
 - ✨ **Support of [OpenAI o1 model](https://openai.com/o1/)**
 
 
@@ -50,11 +52,9 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 ## 🤍 The Promptbook Whitepaper
 
-
-
 If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 3, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
 
- But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses**. When this happens, you generally have three options:
+ But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd resp0nses**. When this happens, you generally have three options:
 
 1. **Fine-tune** the model to your specifications or even train your own.
 2. **Prompt-engineer** the prompt to the best shape you can achieve.
@@ -62,248 +62,38 @@ But often you will struggle with the **limitations of LLMs**, such as **hallucin
 
 In all of these situations, but especially in 3., the **✨ Promptbook can make your life waaaaaaaaaay easier**.
 
- - [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.
- - Establishes a [**common format `.ptbk.md`**](https://github.com/webgptorg/promptbook/discussions/85) that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
- - **Forget** about **low-level details** like choosing the right model, tokens, context size, temperature, top-k, top-p, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
- - Has built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
+ - [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic. For this purpose, it introduces a new language called [the **💙 Book**](https://github.com/webgptorg/book).
+ - Book allows you to **focus on the business** logic without having to write code or deal with the technicalities of LLMs.
+ - **Forget** about **low-level details** like choosing the right model, tokens, context size, `temperature`, `top-k`, `top-p`, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
+ - We have built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
 - Sometimes even the best prompts with the best framework like Promptbook `:)` can't avoid the problems. In this case, the library has built-in **[anomaly detection](https://github.com/webgptorg/promptbook/discussions/40) and logging** to help you find and fix the problems.
- - Promptbook has built in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
- - Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
-
-
-
- ## 🧔 Pipeline _(for prompt-engeneers)_
-
- **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is document that describes a **pipeline** - a series of prompts that are chained together to form somewhat reciepe for transforming natural language input.
-
- - Multiple pipelines forms a **collection** which will handle core **know-how of your LLM application**.
- - Theese pipelines are designed such as they **can be written by non-programmers**.
-
-
-
- ### Sample:
-
- File `write-website-content.ptbk.md`:
-
-
-
-
-
- > # 🌍 Create website content
- >
- > Instructions for creating web page content.
- >
- > - PIPELINE URL https://promptbook.studio/webgpt/write-website-content.ptbk.md
- > - INPUT  PARAM `{rawTitle}` Automatically suggested a site name or empty text
- > - INPUT  PARAM `{rawAssigment}` Automatically generated site entry from image recognition
- > - OUTPUT PARAM `{websiteContent}` Web content
- > - OUTPUT PARAM `{keywords}` Keywords
- >
- > ## 👤 Specifying the assigment
- >
- > What is your web about?
- >
- > - DIALOG TEMPLATE
- >
- > ```
- > {rawAssigment}
- > ```
- >
- > `-> {assigment}` Website assignment and specification
- >
- > ## ✨ Improving the title
- >
- > - PERSONA Jane, Copywriter and Marketing Specialist.
- >
- > ```
- > As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.
- >
- > A suggested name from a client:
- > "{rawTitle}"
- >
- > Assignment from customer:
- >
- > > {assigment}
- >
- > ## Instructions:
- >
- > - Write only one name suggestion
- > - The name will be used on the website, business cards, visuals, etc.
- > ```
- >
- > `-> {enhancedTitle}` Enhanced title
- >
- > ## 👤 Website title approval
- >
- > Is the title for your website okay?
- >
- > - DIALOG TEMPLATE
- >
- > ```
- > {enhancedTitle}
- > ```
- >
- > `-> {title}` Title for the website
- >
- > ## 🐰 Cunning subtitle
- >
- > - PERSONA Josh, a copywriter, tasked with creating a claim for the website.
- >
- > ```
- > As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.
- >
- > A website assignment from a customer:
- >
- > > {assigment}
- >
- > ## Instructions:
- >
- > - Write only one name suggestion
- > - Claim will be used on website, business cards, visuals, etc.
- > - Claim should be punchy, funny, original
- > ```
- >
- > `-> {claim}` Claim for the web
- >
- > ## 🚦 Keyword analysis
- >
- > - PERSONA Paul, extremely creative SEO specialist.
- >
- > ```
- > As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".
- >
- > Website assignment from the customer:
- >
- > > {assigment}
- >
- > ## Instructions:
- >
- > - Write a list of keywords
- > - Keywords are in basic form
- >
- > ## Example:
- >
- > - Ice cream
- > - Olomouc
- > - Quality
- > - Family
- > - Tradition
- > - Italy
- > - Craft
- >
- > ```
- >
- > `-> {keywords}` Keywords
- >
- > ## 🔗 Combine the beginning
- >
- > - SIMPLE TEMPLATE
- >
- > ```
- >
- > # {title}
- >
- > > {claim}
- >
- > ```
- >
- > `-> {contentBeginning}` Beginning of web content
- >
- > ## 🖋 Write the content
- >
- > - PERSONA Jane
- >
- > ```
- > As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.
- >
- > A website assignment from a customer:
- >
- > > {assigment}
- >
- > ## Instructions:
- >
- > - Text formatting is in Markdown
- > - Be concise and to the point
- > - Use keywords, but they should be naturally in the text
- > - This is the complete content of the page, so don't forget all the important information and elements the page should contain
- > - Use headings, bullets, text formatting
- >
- > ## Keywords:
- >
- > {keywords}
- >
- > ## Web Content:
- >
- > {contentBeginning}
- > ```
- >
- > `-> {contentBody}` Middle of the web content
- >
- > ## 🔗 Combine the content
- >
- > - SIMPLE TEMPLATE
- >
- > ```markdown
- > {contentBeginning}
- >
- > {contentBody}
- > ```
- >
- > `-> {websiteContent}`
-
-
-
- Following is the scheme how the promptbook above is executed:
-
- ```mermaid
- %% 🔮 Tip: Open this on GitHub or in the VSCode website to see the Mermaid graph visually
-
- flowchart LR
- subgraph "🌍 Create website content"
-
- direction TB
-
- input((Input)):::input
- templateSpecifyingTheAssigment(👤 Specifying the assigment)
- input--"{rawAssigment}"-->templateSpecifyingTheAssigment
- templateImprovingTheTitle(✨ Improving the title)
- input--"{rawTitle}"-->templateImprovingTheTitle
- templateSpecifyingTheAssigment--"{assigment}"-->templateImprovingTheTitle
- templateWebsiteTitleApproval(👤 Website title approval)
- templateImprovingTheTitle--"{enhancedTitle}"-->templateWebsiteTitleApproval
- templateCunningSubtitle(🐰 Cunning subtitle)
- templateWebsiteTitleApproval--"{title}"-->templateCunningSubtitle
- templateSpecifyingTheAssigment--"{assigment}"-->templateCunningSubtitle
- templateKeywordAnalysis(🚦 Keyword analysis)
- templateWebsiteTitleApproval--"{title}"-->templateKeywordAnalysis
- templateSpecifyingTheAssigment--"{assigment}"-->templateKeywordAnalysis
- templateCombineTheBeginning(🔗 Combine the beginning)
- templateWebsiteTitleApproval--"{title}"-->templateCombineTheBeginning
- templateCunningSubtitle--"{claim}"-->templateCombineTheBeginning
- templateWriteTheContent(🖋 Write the content)
- templateWebsiteTitleApproval--"{title}"-->templateWriteTheContent
- templateSpecifyingTheAssigment--"{assigment}"-->templateWriteTheContent
- templateKeywordAnalysis--"{keywords}"-->templateWriteTheContent
- templateCombineTheBeginning--"{contentBeginning}"-->templateWriteTheContent
- templateCombineTheContent(🔗 Combine the content)
- templateCombineTheBeginning--"{contentBeginning}"-->templateCombineTheContent
- templateWriteTheContent--"{contentBody}"-->templateCombineTheContent
-
- templateCombineTheContent--"{websiteContent}"-->output
- output((Output)):::output
-
- classDef input color: grey;
- classDef output color: grey;
-
- end;
- ```
+ - Versioning is built in. You can test multiple **A/B versions** of pipelines and see which one works best.
+ - Promptbook is designed to use [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques to bring the context of your business to a generic LLM. You can use **knowledge** to improve the quality of the output.
+
+
+
+ ## 💙 Book language _(for prompt-engineers)_
+
+ Promptbook [pipelines](https://github.com/webgptorg/promptbook/discussions/64) are written in a markdown-like language called [Book](https://github.com/webgptorg/book). It is designed to be understandable by non-programmers and non-technical people.
 
- - [More template samples](./samples/pipelines/)
- - [Read more about `.ptbk.md` file format here](https://github.com/webgptorg/promptbook/discussions/categories/concepts?discussions_q=is%3Aopen+label%3A.ptbk.md+category%3AConcepts)
 
- _Note: We are using [postprocessing functions](#postprocessing-functions) like `unwrapResult` that can be used to postprocess the result._
 
- ## 📦 Packages
+ ```markdown
+ # 🌟 My first Book
+
+ - PERSONA Jane, marketing specialist with prior experience in writing articles about technology and artificial intelligence
+ - KNOWLEDGE https://ptbk.io
+ - KNOWLEDGE ./promptbook.pdf
+ - EXPECT MIN 1 Sentence
+ - EXPECT MAX 1 Paragraph
+
+ > Write an article about the future of artificial intelligence in the next 10 years and how metalanguages will change the way AI is used in the world.
+ > Look specifically at the impact of Promptbook on the AI industry.
+
+ -> {article}
+ ```
+
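The `EXPECT MIN 1 Sentence` and `EXPECT MAX 1 Paragraph` commands in the added example declare bounds the generated `{article}` must satisfy. As a rough illustrative sketch only, with assumed helper names that are not part of the library, such a check might look like:

```javascript
// Hypothetical sketch of expectation checking (NOT Promptbook's real code):
// count sentences and paragraphs, then verify them against declared bounds.
function countUnits(text) {
    const sentences = text.split(/[.!?]+\s/).filter((s) => s.trim() !== '').length;
    const paragraphs = text.split(/\n\s*\n/).filter((p) => p.trim() !== '').length;
    return { sentences, paragraphs };
}

function checkExpectations(text, { minSentences = 0, maxParagraphs = Infinity }) {
    const { sentences, paragraphs } = countUnits(text);
    return sentences >= minSentences && paragraphs <= maxParagraphs;
}
```

When a result falls outside the declared bounds, the library's [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37) can react instead of silently accepting the output.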
+ ## 📦 Packages _(for developers)_
 
 This library is divided into several packages, all are published from [single monorepo](https://github.com/webgptorg/promptbook).
 You can install all of them at once:
@@ -345,8 +135,6 @@ Or you can install them separately:
 
 The following glossary is used to clarify certain concepts:
 
-
-
 ### Core concepts
 
 - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
@@ -377,8 +165,8 @@ The following glossary is used to clarify certain concepts:
 
 ## 🔌 Usage in Typescript / Javascript
 
- - [Simple usage](./samples/usage/simple-script)
- - [Usage with client and remote server](./samples/usage/remote)
+ - [Simple usage](./examples/usage/simple-script)
+ - [Usage with client and remote server](./examples/usage/remote)
 
 ## ➕➖ When to use Promptbook?
 
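The linked usage examples themselves are not part of this diff. As a language-level illustration of how `{parameter}` placeholders in pipeline templates get filled in, here is a hypothetical `interpolate` helper; it is NOT part of the package API, just a sketch of the substitution idea:

```javascript
// Illustrative sketch of {parameter} interpolation as used in pipeline
// templates; `interpolate` is a hypothetical helper, not package API.
function interpolate(template, parameters) {
    return template.replace(/\{(\w+)\}/g, (match, name) => {
        if (!(name in parameters)) {
            throw new Error(`Parameter {${name}} is not defined`);
        }
        return parameters[name];
    });
}
```

The real library additionally tracks parameter dependencies between templates, so outputs like `{title}` can feed later prompts.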
package/esm/index.es.js CHANGED
@@ -14,7 +14,7 @@ import moment from 'moment';
 /**
 * The version of the Promptbook library
 */
- var PROMPTBOOK_VERSION = '0.72.0-34';
+ var PROMPTBOOK_VERSION = '0.72.0';
 // TODO: [main] !!!! List here all the versions and annotate + put into script
 
 /*! *****************************************************************************
@@ -725,10 +725,10 @@ var RESERVED_PARAMETER_NAMES = $asDeeplyFrozenSerializableJson('RESERVED_PARAMET
 'content',
 'context',
 'knowledge',
- 'samples',
+ 'examples',
 'modelName',
 'currentDate',
- // <- TODO: !!!!! list here all command names
+ // <- TODO: list here all command names
 // <- TODO: Add more like 'date', 'modelName',...
 // <- TODO: Add [emoji] + instructions ACRY when adding new reserved parameter
 ]);
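The TODOs further down in this file ask for validation that pipeline parameters never reuse a name from `RESERVED_PARAMETER_NAMES` (note the `samples` → `examples` rename above). A hypothetical sketch of such a check, not the actual `validatePipelineCore` code:

```javascript
// Hypothetical sketch of the reserved-name check hinted at by the TODOs;
// NOT the library's actual validator. Names mirror the frozen list above.
const RESERVED_PARAMETER_NAMES = ['content', 'context', 'knowledge', 'examples', 'modelName', 'currentDate'];

function assertNoReservedParameters(parameterNames) {
    for (const name of parameterNames) {
        if (RESERVED_PARAMETER_NAMES.includes(name)) {
            throw new Error(`Parameter {${name}} is reserved and cannot be used as a pipeline parameter`);
        }
    }
}
```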
@@ -1227,7 +1227,7 @@ function validatePipelineCore(pipeline) {
 }
 }
 /**
- * TODO: !!!!! [🧞‍♀️] Do not allow joker + foreach
+ * TODO: !! [🧞‍♀️] Do not allow joker + foreach
 * TODO: [🧠] Work with promptbookVersion
 * TODO: Use here some json-schema, Zod or something similar and change it to:
 * > /**
@@ -1239,7 +1239,7 @@ function validatePipelineCore(pipeline) {
 * > ex port function validatePipeline(promptbook: really_unknown): asserts promptbook is PipelineJson {
 */
 /**
- * TODO: [🧳][main] !!!! Validate that all samples match expectations
+ * TODO: [🧳][main] !!!! Validate that all examples match expectations
 * TODO: [🧳][🐝][main] !!!! Validate that knowledge is valid (non-void)
 * TODO: [🧳][main] !!!! Validate that persona can be used only with CHAT variant
 * TODO: [🧳][main] !!!! Validate that parameter with reserved name not used RESERVED_PARAMETER_NAMES
@@ -1651,7 +1651,7 @@ var TemplateTypes = [
 'SIMPLE_TEMPLATE',
 'SCRIPT_TEMPLATE',
 'DIALOG_TEMPLATE',
- 'SAMPLE',
+ 'EXAMPLE',
 'KNOWLEDGE',
 'INSTRUMENT',
 'ACTION',
@@ -2293,7 +2293,7 @@ function countTotalUsage(llmTools) {
 * TODO: [👷‍♂️] @@@ Manual about construction of llmTools
 */
 
- var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> 
{knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n- Your output format is JSON object\n- Write just the 
JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
+ var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> 
{knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Example\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n- Your output format is JSON object\n- Write just the 
JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
 
 /**
 * This error indicates that the pipeline collection cannot be properly loaded
@@ -2466,12 +2466,12 @@ function isPipelinePrepared(pipeline) {
 return true;
 }
 /**
- * TODO: [🔃][main] !!!!! If the pipeline was prepared with different version or different set of models, prepare it once again
+ * TODO: [🔃][main] !! If the pipeline was prepared with different version or different set of models, prepare it once again
 * TODO: [🐠] Maybe base this on `makeValidator`
 * TODO: [🧊] Pipeline can be partially prepared, this should return true ONLY if fully prepared
 * TODO: [🧿] Maybe do same process with same granularity and subfunctions as `preparePipeline`
 * - [🏍] ? Is context in each template
- * - [♨] Are samples prepared
+ * - [♨] Are examples prepared
 * - [♨] Are templates prepared
 */
 
@@ -4237,7 +4237,7 @@ function getKnowledgeForTemplate(options) {
 var preparedPipeline, template;
 return __generator(this, function (_a) {
 preparedPipeline = options.preparedPipeline, template = options.template;
- // TODO: [♨] Implement Better - use real index and keyword search from `template` and {samples}
+ // TODO: [♨] Implement Better - use real index and keyword search from `template` and {examples}
 TODO_USE(template);
 return [2 /*return*/, preparedPipeline.knowledgePieces.map(function (_a) {
 var content = _a.content;
@@ -4252,7 +4252,7 @@ function getKnowledgeForTemplate(options) {
 *
 * @private internal utility of `createPipelineExecutor`
 */
- function getSamplesForTemplate(template) {
+ function getExamplesForTemplate(template) {
 return __awaiter(this, void 0, void 0, function () {
 return __generator(this, function (_a) {
 // TODO: [♨] Implement Better - use real index and keyword search
@@ -4269,7 +4269,7 @@ function getSamplesForTemplate(template) {
  */
  function getReservedParametersForTemplate(options) {
  return __awaiter(this, void 0, void 0, function () {
- var preparedPipeline, template, pipelineIdentification, context, knowledge, samples, currentDate, modelName, reservedParameters, _loop_1, RESERVED_PARAMETER_NAMES_1, RESERVED_PARAMETER_NAMES_1_1, parameterName;
+ var preparedPipeline, template, pipelineIdentification, context, knowledge, examples, currentDate, modelName, reservedParameters, _loop_1, RESERVED_PARAMETER_NAMES_1, RESERVED_PARAMETER_NAMES_1_1, parameterName;
  var e_1, _a;
  return __generator(this, function (_b) {
  switch (_b.label) {
@@ -4281,16 +4281,16 @@ function getReservedParametersForTemplate(options) {
  return [4 /*yield*/, getKnowledgeForTemplate({ preparedPipeline: preparedPipeline, template: template })];
  case 2:
  knowledge = _b.sent();
- return [4 /*yield*/, getSamplesForTemplate(template)];
+ return [4 /*yield*/, getExamplesForTemplate(template)];
  case 3:
- samples = _b.sent();
+ examples = _b.sent();
  currentDate = new Date().toISOString();
  modelName = RESERVED_PARAMETER_MISSING_VALUE;
  reservedParameters = {
  content: RESERVED_PARAMETER_RESTRICTED,
  context: context,
  knowledge: knowledge,
- samples: samples,
+ examples: examples,
  currentDate: currentDate,
  modelName: modelName,
  };
@@ -4908,7 +4908,7 @@ function preparePersona(personaDescription, tools, options) {
  });
  }
  /**
- * TODO: [🔃][main] !!!!! If the persona was prepared with different version or different set of models, prepare it once again
+ * TODO: [🔃][main] !! If the persona was prepared with different version or different set of models, prepare it once again
  * TODO: [🏢] !! Check validity of `modelName` in pipeline
  * TODO: [🏢] !! Check validity of `systemMessage` in pipeline
  * TODO: [🏢] !! Check validity of `temperature` in pipeline
@@ -5659,7 +5659,7 @@ function prepareTemplates(pipeline, tools, options) {
  case 0:
  _a = options.maxParallelCount, maxParallelCount = _a === void 0 ? DEFAULT_MAX_PARALLEL_COUNT : _a;
  templates = pipeline.templates, parameters = pipeline.parameters, knowledgePiecesCount = pipeline.knowledgePiecesCount;
- // TODO: [main] !!!!! Apply samples to each template (if missing and is for the template defined)
+ // TODO: [main] !! Apply examples to each template (if missing and is for the template defined)
  TODO_USE(parameters);
  templatesPrepared = new Array(templates.length);
  return [4 /*yield*/, forEachAsync(templates, { maxParallelCount: maxParallelCount /* <- TODO: [🪂] When there are subtasks, this maximul limit can be broken */ }, function (template, index) { return __awaiter(_this, void 0, void 0, function () {
@@ -5689,7 +5689,7 @@ function prepareTemplates(pipeline, tools, options) {
  /**
  * TODO: [🧠] Add context to each template (if missing)
  * TODO: [🧠] What is better name `prepareTemplate` or `prepareTemplateAndParameters`
- * TODO: [♨][main] !!! Prepare index the samples and maybe templates
+ * TODO: [♨][main] !!! Prepare index the examples and maybe templates
  * TODO: Write tests for `preparePipeline`
  * TODO: [🏏] Leverage the batch API and build queues @see https://platform.openai.com/docs/guides/batch
  * TODO: [🧊] In future one preparation can take data from previous preparation and save tokens and time
@@ -5939,7 +5939,7 @@ var templateCommandParser = {
  'SCRIPT',
  'DIALOG',
  // <- [🅱]
- 'SAMPLE',
+ 'EXAMPLE',
  'KNOWLEDGE',
  'INSTRUMENT',
  'ACTION',
@@ -5972,7 +5972,7 @@ var templateCommandParser = {
  */
  parse: function (input) {
  var normalized = input.normalized;
- normalized = normalized.split('EXAMPLE').join('SAMPLE');
+ normalized = normalized.split('SAMPLE').join('EXAMPLE');
  var templateTypes = TemplateTypes.filter(function (templateType) {
  return normalized.includes(templateType.split('_TEMPLATE').join(''));
  });
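The hunk above flips the parser's normalization: it now maps the legacy `SAMPLE` keyword onto the new `EXAMPLE` keyword, so old pipeline files keep parsing. A minimal standalone sketch of that backward-compatibility mapping (the function name here is illustrative, not the package's actual API):

```javascript
// Sketch: normalize a legacy SAMPLE command keyword to the new EXAMPLE
// keyword, mirroring the split/join replacement used in the parser above.
function normalizeTemplateCommand(raw) {
    var normalized = raw.trim().toUpperCase();
    // Replace every occurrence of the legacy keyword with the new one
    return normalized.split('SAMPLE').join('EXAMPLE');
}

console.log(normalizeTemplateCommand('sample template')); // 'EXAMPLE TEMPLATE'
```

The `split(...).join(...)` idiom replaces all occurrences without the regex-escaping concerns of `String.prototype.replace`.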
@@ -6005,14 +6005,14 @@ var templateCommandParser = {
  if ($templateJson.content === undefined) {
  throw new UnexpectedError("Content is missing in the templateJson - probbably commands are applied in wrong order");
  }
- if (command.templateType === 'SAMPLE') {
+ if (command.templateType === 'EXAMPLE') {
  expectResultingParameterName();
  var parameter = $pipelineJson.parameters.find(function (param) { return param.name === $templateJson.resultingParameterName; });
  if (parameter === undefined) {
- throw new ParseError("Can not find parameter {".concat($templateJson.resultingParameterName, "} to assign sample value on it"));
+ throw new ParseError("Can not find parameter {".concat($templateJson.resultingParameterName, "} to assign example value on it"));
  }
- parameter.sampleValues = parameter.sampleValues || [];
- parameter.sampleValues.push($templateJson.content);
+ parameter.exampleValues = parameter.exampleValues || [];
+ parameter.exampleValues.push($templateJson.content);
  $templateJson.isTemplate = false;
  return;
  }
@@ -8600,9 +8600,9 @@ function renderPromptbookMermaid(pipelineJson, options) {
  return promptbookMermaid;
  }
  /**
- * TODO: !!!!! FOREACH in mermaid graph
- * TODO: !!!!! Knowledge in mermaid graph
- * TODO: !!!!! Personas in mermaid graph
+ * TODO: [🧠] !! FOREACH in mermaid graph
+ * TODO: [🧠] !! Knowledge in mermaid graph
+ * TODO: [🧠] !! Personas in mermaid graph
  * TODO: Maybe use some Mermaid package instead of string templating
  * TODO: [🕌] When more than 2 functionalities, split into separate functions
  */
@@ -8702,7 +8702,7 @@ function stringifyPipelineJson(pipeline) {
  return pipelineJsonStringified;
  }
  /**
- * TODO: [🐝] Not Working propperly @see https://promptbook.studio/samples/mixed-knowledge.ptbk.md
+ * TODO: [🐝] Not Working propperly @see https://promptbook.studio/examples/mixed-knowledge.ptbk.md
  * TODO: [🧠][0] Maybe rename to `stringifyPipelineJson`, `stringifyIndexedJson`,...
  * TODO: [🧠] Maybe more elegant solution than replacing via regex
  * TODO: [🍙] Make some standard order of json properties