@promptbook/pdf 0.72.0 → 0.73.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. It is provided for informational purposes only and reflects the changes between these versions as they appear in their public registries.
package/README.md CHANGED
@@ -18,6 +18,8 @@ Build responsible, controlled and transparent applications on top of LLM models!
18
18
 
19
19
  ## ✨ New Features
20
20
 
21
+ - 💙 Working on [the **Book** language v1](https://github.com/webgptorg/book)
22
+ - 📚 Support of `.docx`, `.doc` and `.pdf` documents
21
23
  - ✨ **Support of [OpenAI o1 model](https://openai.com/o1/)**
22
24
 
23
25
 
@@ -48,11 +50,9 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
48
50
 
49
51
  ## 🤍 The Promptbook Whitepaper
50
52
 
51
-
52
-
53
53
  If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 3, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
54
54
 
55
- But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses**. When this happens, you generally have three options:
55
+ But often you will struggle with the **limitations of LLMs**, such as **hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd resp0nses**. When this happens, you generally have three options:
56
56
 
57
57
  1. **Fine-tune** the model to your specifications or even train your own.
58
58
  2. **Prompt-engineer** the prompt to the best shape you can achieve.
@@ -60,248 +60,38 @@ But often you will struggle with the **limitations of LLMs**, such as **hallucin
60
60
 
61
61
  In all of these situations, but especially in 3., the **✨ Promptbook can make your life waaaaaaaaaay easier**.
62
62
 
63
- - [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.
64
- - Establishes a [**common format `.ptbk.md`**](https://github.com/webgptorg/promptbook/discussions/85) that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
65
- - **Forget** about **low-level details** like choosing the right model, tokens, context size, temperature, top-k, top-p, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
66
- - Has built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
63
+ - [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic. For this purpose, it introduces a new language called [the **💙 Book**](https://github.com/webgptorg/book).
64
+ - Book allows you to **focus on the business logic** without having to write code or deal with the technicalities of LLMs.
65
+ - **Forget** about **low-level details** like choosing the right model, tokens, context size, `temperature`, `top-k`, `top-p`, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
66
+ - We have built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
67
67
  - Sometimes even the best prompts with the best framework like Promptbook `:)` can't avoid the problems. In this case, the library has built-in **[anomaly detection](https://github.com/webgptorg/promptbook/discussions/40) and logging** to help you find and fix the problems.
68
- - Promptbook has built in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
69
- - Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
70
-
71
-
72
-
73
- ## 🧔 Pipeline _(for prompt-engeneers)_
74
-
75
- **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is document that describes a **pipeline** - a series of prompts that are chained together to form somewhat reciepe for transforming natural language input.
76
-
77
- - Multiple pipelines forms a **collection** which will handle core **know-how of your LLM application**.
78
- - Theese pipelines are designed such as they **can be written by non-programmers**.
79
-
80
-
81
-
82
- ### Sample:
83
-
84
- File `write-website-content.ptbk.md`:
85
-
86
-
87
-
88
-
89
-
90
- > # 🌍 Create website content
91
- >
92
- > Instructions for creating web page content.
93
- >
94
- > - PIPELINE URL https://promptbook.studio/webgpt/write-website-content.ptbk.md
95
- > - INPUT  PARAM `{rawTitle}` Automatically suggested a site name or empty text
96
- > - INPUT  PARAM `{rawAssigment}` Automatically generated site entry from image recognition
97
- > - OUTPUT PARAM `{websiteContent}` Web content
98
- > - OUTPUT PARAM `{keywords}` Keywords
99
- >
100
- > ## 👤 Specifying the assigment
101
- >
102
- > What is your web about?
103
- >
104
- > - DIALOG TEMPLATE
105
- >
106
- > ```
107
- > {rawAssigment}
108
- > ```
109
- >
110
- > `-> {assigment}` Website assignment and specification
111
- >
112
- > ## ✨ Improving the title
113
- >
114
- > - PERSONA Jane, Copywriter and Marketing Specialist.
115
- >
116
- > ```
117
- > As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.
118
- >
119
- > A suggested name from a client:
120
- > "{rawTitle}"
121
- >
122
- > Assignment from customer:
123
- >
124
- > > {assigment}
125
- >
126
- > ## Instructions:
127
- >
128
- > - Write only one name suggestion
129
- > - The name will be used on the website, business cards, visuals, etc.
130
- > ```
131
- >
132
- > `-> {enhancedTitle}` Enhanced title
133
- >
134
- > ## 👤 Website title approval
135
- >
136
- > Is the title for your website okay?
137
- >
138
- > - DIALOG TEMPLATE
139
- >
140
- > ```
141
- > {enhancedTitle}
142
- > ```
143
- >
144
- > `-> {title}` Title for the website
145
- >
146
- > ## 🐰 Cunning subtitle
147
- >
148
- > - PERSONA Josh, a copywriter, tasked with creating a claim for the website.
149
- >
150
- > ```
151
- > As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.
152
- >
153
- > A website assignment from a customer:
154
- >
155
- > > {assigment}
156
- >
157
- > ## Instructions:
158
- >
159
- > - Write only one name suggestion
160
- > - Claim will be used on website, business cards, visuals, etc.
161
- > - Claim should be punchy, funny, original
162
- > ```
163
- >
164
- > `-> {claim}` Claim for the web
165
- >
166
- > ## 🚦 Keyword analysis
167
- >
168
- > - PERSONA Paul, extremely creative SEO specialist.
169
- >
170
- > ```
171
- > As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".
172
- >
173
- > Website assignment from the customer:
174
- >
175
- > > {assigment}
176
- >
177
- > ## Instructions:
178
- >
179
- > - Write a list of keywords
180
- > - Keywords are in basic form
181
- >
182
- > ## Example:
183
- >
184
- > - Ice cream
185
- > - Olomouc
186
- > - Quality
187
- > - Family
188
- > - Tradition
189
- > - Italy
190
- > - Craft
191
- >
192
- > ```
193
- >
194
- > `-> {keywords}` Keywords
195
- >
196
- > ## 🔗 Combine the beginning
197
- >
198
- > - SIMPLE TEMPLATE
199
- >
200
- > ```
201
- >
202
- > # {title}
203
- >
204
- > > {claim}
205
- >
206
- > ```
207
- >
208
- > `-> {contentBeginning}` Beginning of web content
209
- >
210
- > ## 🖋 Write the content
211
- >
212
- > - PERSONA Jane
213
- >
214
- > ```
215
- > As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.
216
- >
217
- > A website assignment from a customer:
218
- >
219
- > > {assigment}
220
- >
221
- > ## Instructions:
222
- >
223
- > - Text formatting is in Markdown
224
- > - Be concise and to the point
225
- > - Use keywords, but they should be naturally in the text
226
- > - This is the complete content of the page, so don't forget all the important information and elements the page should contain
227
- > - Use headings, bullets, text formatting
228
- >
229
- > ## Keywords:
230
- >
231
- > {keywords}
232
- >
233
- > ## Web Content:
234
- >
235
- > {contentBeginning}
236
- > ```
237
- >
238
- > `-> {contentBody}` Middle of the web content
239
- >
240
- > ## 🔗 Combine the content
241
- >
242
- > - SIMPLE TEMPLATE
243
- >
244
- > ```markdown
245
- > {contentBeginning}
246
- >
247
- > {contentBody}
248
- > ```
249
- >
250
- > `-> {websiteContent}`
251
-
252
-
253
-
254
- Following is the scheme how the promptbook above is executed:
255
-
256
- ```mermaid
257
- %% 🔮 Tip: Open this on GitHub or in the VSCode website to see the Mermaid graph visually
258
-
259
- flowchart LR
260
- subgraph "🌍 Create website content"
261
-
262
- direction TB
263
-
264
- input((Input)):::input
265
- templateSpecifyingTheAssigment(👤 Specifying the assigment)
266
- input--"{rawAssigment}"-->templateSpecifyingTheAssigment
267
- templateImprovingTheTitle(✨ Improving the title)
268
- input--"{rawTitle}"-->templateImprovingTheTitle
269
- templateSpecifyingTheAssigment--"{assigment}"-->templateImprovingTheTitle
270
- templateWebsiteTitleApproval(👤 Website title approval)
271
- templateImprovingTheTitle--"{enhancedTitle}"-->templateWebsiteTitleApproval
272
- templateCunningSubtitle(🐰 Cunning subtitle)
273
- templateWebsiteTitleApproval--"{title}"-->templateCunningSubtitle
274
- templateSpecifyingTheAssigment--"{assigment}"-->templateCunningSubtitle
275
- templateKeywordAnalysis(🚦 Keyword analysis)
276
- templateWebsiteTitleApproval--"{title}"-->templateKeywordAnalysis
277
- templateSpecifyingTheAssigment--"{assigment}"-->templateKeywordAnalysis
278
- templateCombineTheBeginning(🔗 Combine the beginning)
279
- templateWebsiteTitleApproval--"{title}"-->templateCombineTheBeginning
280
- templateCunningSubtitle--"{claim}"-->templateCombineTheBeginning
281
- templateWriteTheContent(🖋 Write the content)
282
- templateWebsiteTitleApproval--"{title}"-->templateWriteTheContent
283
- templateSpecifyingTheAssigment--"{assigment}"-->templateWriteTheContent
284
- templateKeywordAnalysis--"{keywords}"-->templateWriteTheContent
285
- templateCombineTheBeginning--"{contentBeginning}"-->templateWriteTheContent
286
- templateCombineTheContent(🔗 Combine the content)
287
- templateCombineTheBeginning--"{contentBeginning}"-->templateCombineTheContent
288
- templateWriteTheContent--"{contentBody}"-->templateCombineTheContent
289
-
290
- templateCombineTheContent--"{websiteContent}"-->output
291
- output((Output)):::output
292
-
293
- classDef input color: grey;
294
- classDef output color: grey;
295
-
296
- end;
297
- ```
68
+ - Versioning is built in. You can test multiple **A/B versions** of pipelines and see which one works best.
69
+ - Promptbook is designed to use [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques to bring the context of your business to a generic LLM. You can use **knowledge** to improve the quality of the output.
70
+
71
+
72
+
73
+ ## 💙 Book language _(for prompt-engineers)_
74
+
75
+ Promptbook [pipelines](https://github.com/webgptorg/promptbook/discussions/64) are written in a markdown-like language called [Book](https://github.com/webgptorg/book). It is designed to be understandable by non-programmers and non-technical people.
298
76
 
299
- - [More template samples](./samples/pipelines/)
300
- - [Read more about `.ptbk.md` file format here](https://github.com/webgptorg/promptbook/discussions/categories/concepts?discussions_q=is%3Aopen+label%3A.ptbk.md+category%3AConcepts)
301
77
 
302
- _Note: We are using [postprocessing functions](#postprocessing-functions) like `unwrapResult` that can be used to postprocess the result._
303
78
 
304
- ## 📦 Packages
79
+ ```markdown
80
+ # 🌟 My first Book
81
+
82
+ - PERSONA Jane, marketing specialist with prior experience in writing articles about technology and artificial intelligence
83
+ - KNOWLEDGE https://ptbk.io
84
+ - KNOWLEDGE ./promptbook.pdf
85
+ - EXPECT MIN 1 Sentence
86
+ - EXPECT MAX 1 Paragraph
87
+
88
+ > Write an article about the future of artificial intelligence in the next 10 years and how metalanguages will change the way AI is used in the world.
89
+ > Look specifically at the impact of Promptbook on the AI industry.
90
+
91
+ -> {article}
92
+ ```
93
+
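For orientation, here is a minimal TypeScript sketch of how a book like the one above might be executed. The package names, imports, and exact function signatures are assumptions based on the wider Promptbook ecosystem (only `createPipelineExecutor` and `result.outputParameters` appear in this diff), so treat it as an illustration rather than the documented API of 0.73.0.

```typescript
// Illustrative sketch only - package entry points and signatures are assumptions.
import { createPipelineExecutor } from '@promptbook/core'; // assumed entry point
import { createCollectionFromDirectory } from '@promptbook/node'; // assumed helper
import { OpenAiExecutionTools } from '@promptbook/openai'; // assumed LLM tools

async function main() {
    // Load books from a local directory (path and pipeline URL are hypothetical)
    const collection = await createCollectionFromDirectory('./books');
    const pipeline = await collection.getPipelineByUrl('https://promptbook.studio/my-first-book.ptbk.md');

    const executor = createPipelineExecutor({
        pipeline,
        tools: { llm: new OpenAiExecutionTools({ apiKey: process.env.OPENAI_API_KEY! }) },
    });

    // The book above declares no INPUT PARAM, so it runs with empty inputs;
    // the compiled code in this diff reads `result.outputParameters`, so the
    // `-> {article}` output would surface there.
    const result = await executor({});
    console.info(result.outputParameters.article);
}

main().catch(console.error);
```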
94
+ ## 📦 Packages _(for developers)_
305
95
 
306
96
  This library is divided into several packages, all are published from [single monorepo](https://github.com/webgptorg/promptbook).
307
97
  You can install all of them at once:
@@ -343,8 +133,6 @@ Or you can install them separately:
343
133
 
344
134
  The following glossary is used to clarify certain concepts:
345
135
 
346
-
347
-
348
136
  ### Core concepts
349
137
 
350
138
  - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
@@ -375,8 +163,8 @@ The following glossary is used to clarify certain concepts:
375
163
 
376
164
  ## 🔌 Usage in Typescript / Javascript
377
165
 
378
- - [Simple usage](./samples/usage/simple-script)
379
- - [Usage with client and remote server](./samples/usage/remote)
166
+ - [Simple usage](./examples/usage/simple-script)
167
+ - [Usage with client and remote server](./examples/usage/remote)
380
168
 
381
169
  ## ➕➖ When to use Promptbook?
382
170
 
package/esm/index.es.js CHANGED
@@ -12,7 +12,7 @@ import { unparse, parse } from 'papaparse';
12
12
  /**
13
13
  * The version of the Promptbook library
14
14
  */
15
- var PROMPTBOOK_VERSION = '0.72.0-34';
15
+ var PROMPTBOOK_VERSION = '0.72.0';
16
16
  // TODO: [main] !!!! List here all the versions and annotate + put into script
17
17
 
18
18
  /*! *****************************************************************************
@@ -167,7 +167,7 @@ function TODO_USE() {
167
167
  }
168
168
  }
169
169
 
170
- var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": 
\"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n- Your output format is JSON object\n- Write just the JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
170
+ var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Example\n\n```json\n{\n\"modelName\": 
\"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n- Your output format is JSON object\n- Write just the JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
171
171
 
172
172
  /**
173
173
  * Prettify the html code
@@ -669,10 +669,10 @@ var RESERVED_PARAMETER_NAMES = $asDeeplyFrozenSerializableJson('RESERVED_PARAMET
669
669
  'content',
670
670
  'context',
671
671
  'knowledge',
672
- 'samples',
672
+ 'examples',
673
673
  'modelName',
674
674
  'currentDate',
675
- // <- TODO: !!!!! list here all command names
675
+ // <- TODO: list here all command names
676
676
  // <- TODO: Add more like 'date', 'modelName',...
677
677
  // <- TODO: Add [emoji] + instructions ACRY when adding new reserved parameter
678
678
  ]);
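The rename above is user-facing: `{samples}` is gone and `{examples}` is now the reserved parameter, alongside `{content}`, `{context}`, `{knowledge}`, `{modelName}` and `{currentDate}`. A small self-contained sketch of the collision check the validation TODOs elsewhere in this diff describe; the names mirror the array in the hunk above, but the helper function itself is hypothetical.

```typescript
// Mirrors the reserved names in the hunk above; '{samples}' is replaced by '{examples}'.
const RESERVED_PARAMETER_NAMES = ['content', 'context', 'knowledge', 'examples', 'modelName', 'currentDate'] as const;

// Hypothetical helper illustrating the "parameter with reserved name" validation.
function assertParameterNameIsFree(parameterName: string): void {
    if ((RESERVED_PARAMETER_NAMES as readonly string[]).includes(parameterName)) {
        throw new Error(`Parameter {${parameterName}} is reserved by Promptbook`);
    }
}

assertParameterNameIsFree('article'); // fine
assertParameterNameIsFree('examples'); // throws after this release (previously 'samples' was the reserved name)
```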
@@ -1144,7 +1144,7 @@ function validatePipelineCore(pipeline) {
1144
1144
  }
1145
1145
  }
1146
1146
  /**
1147
- * TODO: !!!!! [🧞‍♀️] Do not allow joker + foreach
1147
+ * TODO: !! [🧞‍♀️] Do not allow joker + foreach
1148
1148
  * TODO: [🧠] Work with promptbookVersion
1149
1149
  * TODO: Use here some json-schema, Zod or something similar and change it to:
1150
1150
  * > /**
@@ -1156,7 +1156,7 @@ function validatePipelineCore(pipeline) {
1156
1156
  * > ex port function validatePipeline(promptbook: really_unknown): asserts promptbook is PipelineJson {
1157
1157
  */
1158
1158
  /**
1159
- * TODO: [🧳][main] !!!! Validate that all samples match expectations
1159
+ * TODO: [🧳][main] !!!! Validate that all examples match expectations
1160
1160
  * TODO: [🧳][🐝][main] !!!! Validate that knowledge is valid (non-void)
1161
1161
  * TODO: [🧳][main] !!!! Validate that persona can be used only with CHAT variant
1162
1162
  * TODO: [🧳][main] !!!! Validate that parameter with reserved name not used RESERVED_PARAMETER_NAMES
@@ -1928,12 +1928,12 @@ function isPipelinePrepared(pipeline) {
1928
1928
  return true;
1929
1929
  }
1930
1930
  /**
1931
- * TODO: [🔃][main] !!!!! If the pipeline was prepared with different version or different set of models, prepare it once again
1931
+ * TODO: [🔃][main] !! If the pipeline was prepared with different version or different set of models, prepare it once again
1932
1932
  * TODO: [🐠] Maybe base this on `makeValidator`
1933
1933
  * TODO: [🧊] Pipeline can be partially prepared, this should return true ONLY if fully prepared
1934
1934
  * TODO: [🧿] Maybe do same process with same granularity and subfinctions as `preparePipeline`
1935
1935
  * - [🏍] ? Is context in each template
1936
- * - [♨] Are samples prepared
1936
+ * - [♨] Are examples prepared
1937
1937
  * - [♨] Are templates prepared
1938
1938
  */
1939
1939
 
@@ -2638,7 +2638,7 @@ function preparePersona(personaDescription, tools, options) {
2638
2638
  });
2639
2639
  }
2640
2640
  /**
2641
- * TODO: [🔃][main] !!!!! If the persona was prepared with different version or different set of models, prepare it once again
2641
+ * TODO: [🔃][main] !! If the persona was prepared with different version or different set of models, prepare it once again
2642
2642
  * TODO: [🏢] !! Check validity of `modelName` in pipeline
2643
2643
  * TODO: [🏢] !! Check validity of `systemMessage` in pipeline
2644
2644
  * TODO: [🏢] !! Check validity of `temperature` in pipeline
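The three keys these TODOs refer to are the ones the embedded prepare-persona prompt (in the `PipelineCollection` hunk above) asks the model to emit. A rough TypeScript shape inferred from that prompt text only; the type name is illustrative and is not an exported type as far as this diff shows.

```typescript
// Inferred from the prepare-persona prompt above; illustrative, not a public API type.
type ModelRequirements = {
    modelName: string; // picked from {availableModelNames}, e.g. 'gpt-4o'
    systemMessage: string; // sets the assistant's role and tone
    temperature: number; // 0 = deterministic … 2 = maximally random, per the prompt's scale
};

// Example values matching the sample JSON shown in the prompt:
const example: ModelRequirements = {
    modelName: 'gpt-4o',
    systemMessage: 'You are experienced AI engineer and helpful assistant.',
    temperature: 0.7,
};
```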
@@ -3304,7 +3304,7 @@ function prepareTemplates(pipeline, tools, options) {
3304
3304
  case 0:
3305
3305
  _a = options.maxParallelCount, maxParallelCount = _a === void 0 ? DEFAULT_MAX_PARALLEL_COUNT : _a;
3306
3306
  templates = pipeline.templates, parameters = pipeline.parameters, knowledgePiecesCount = pipeline.knowledgePiecesCount;
3307
- // TODO: [main] !!!!! Apply samples to each template (if missing and is for the template defined)
3307
+ // TODO: [main] !! Apply examples to each template (if missing and is for the template defined)
3308
3308
  TODO_USE(parameters);
3309
3309
  templatesPrepared = new Array(templates.length);
3310
3310
  return [4 /*yield*/, forEachAsync(templates, { maxParallelCount: maxParallelCount /* <- TODO: [🪂] When there are subtasks, this maximul limit can be broken */ }, function (template, index) { return __awaiter(_this, void 0, void 0, function () {
@@ -3334,7 +3334,7 @@ function prepareTemplates(pipeline, tools, options) {
3334
3334
  /**
3335
3335
  * TODO: [🧠] Add context to each template (if missing)
3336
3336
  * TODO: [🧠] What is better name `prepareTemplate` or `prepareTemplateAndParameters`
3337
- * TODO: [♨][main] !!! Prepare index the samples and maybe templates
3337
+ * TODO: [♨][main] !!! Prepare index the examples and maybe templates
3338
3338
  * TODO: Write tests for `preparePipeline`
3339
3339
  * TODO: [🏏] Leverage the batch API and build queues @see https://platform.openai.com/docs/guides/batch
3340
3340
  * TODO: [🧊] In future one preparation can take data from previous preparation and save tokens and time
@@ -4864,7 +4864,7 @@ function getKnowledgeForTemplate(options) {
4864
4864
  var preparedPipeline, template;
4865
4865
  return __generator(this, function (_a) {
4866
4866
  preparedPipeline = options.preparedPipeline, template = options.template;
4867
- // TODO: [♨] Implement Better - use real index and keyword search from `template` and {samples}
4867
+ // TODO: [♨] Implement Better - use real index and keyword search from `template` and {examples}
4868
4868
  TODO_USE(template);
4869
4869
  return [2 /*return*/, preparedPipeline.knowledgePieces.map(function (_a) {
4870
4870
  var content = _a.content;
@@ -4879,7 +4879,7 @@ function getKnowledgeForTemplate(options) {
4879
4879
  *
4880
4880
  * @private internal utility of `createPipelineExecutor`
4881
4881
  */
4882
- function getSamplesForTemplate(template) {
4882
+ function getExamplesForTemplate(template) {
4883
4883
  return __awaiter(this, void 0, void 0, function () {
4884
4884
  return __generator(this, function (_a) {
4885
4885
  // TODO: [♨] Implement Better - use real index and keyword search
@@ -4896,7 +4896,7 @@ function getSamplesForTemplate(template) {
4896
4896
  */
4897
4897
  function getReservedParametersForTemplate(options) {
4898
4898
  return __awaiter(this, void 0, void 0, function () {
4899
- var preparedPipeline, template, pipelineIdentification, context, knowledge, samples, currentDate, modelName, reservedParameters, _loop_1, RESERVED_PARAMETER_NAMES_1, RESERVED_PARAMETER_NAMES_1_1, parameterName;
4899
+ var preparedPipeline, template, pipelineIdentification, context, knowledge, examples, currentDate, modelName, reservedParameters, _loop_1, RESERVED_PARAMETER_NAMES_1, RESERVED_PARAMETER_NAMES_1_1, parameterName;
4900
4900
  var e_1, _a;
4901
4901
  return __generator(this, function (_b) {
4902
4902
  switch (_b.label) {
@@ -4908,16 +4908,16 @@ function getReservedParametersForTemplate(options) {
4908
4908
  return [4 /*yield*/, getKnowledgeForTemplate({ preparedPipeline: preparedPipeline, template: template })];
4909
4909
  case 2:
4910
4910
  knowledge = _b.sent();
4911
- return [4 /*yield*/, getSamplesForTemplate(template)];
4911
+ return [4 /*yield*/, getExamplesForTemplate(template)];
4912
4912
  case 3:
4913
- samples = _b.sent();
4913
+ examples = _b.sent();
4914
4914
  currentDate = new Date().toISOString();
4915
4915
  modelName = RESERVED_PARAMETER_MISSING_VALUE;
4916
4916
  reservedParameters = {
4917
4917
  content: RESERVED_PARAMETER_RESTRICTED,
4918
4918
  context: context,
4919
4919
  knowledge: knowledge,
4920
- samples: samples,
4920
+ examples: examples,
4921
4921
  currentDate: currentDate,
4922
4922
  modelName: modelName,
4923
4923
  };
@@ -5576,7 +5576,7 @@ var MarkdownScraper = /** @class */ (function () {
5576
5576
  outputParameters = result.outputParameters;
5577
5577
  knowledgePiecesRaw = outputParameters.knowledgePieces;
5578
5578
  knowledgeTextPieces = (knowledgePiecesRaw || '').split('\n---\n');
5579
- // <- TODO: [main] !!!!! Smarter split and filter out empty pieces
5579
+ // <- TODO: [main] !! Smarter split and filter out empty pieces
5580
5580
  if (isVerbose) {
5581
5581
  console.info('knowledgeTextPieces:', knowledgeTextPieces);
5582
5582
  }
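The TODO above asks for a smarter split than `split('\n---\n')` plus filtering of empty pieces. A hedged sketch of one way that could look; the function name and the exact tolerance for longer horizontal rules are assumptions, not taken from the package.

```typescript
// Hypothetical refinement of the split in the hunk above: tolerate longer markdown
// horizontal rules with surrounding whitespace, and drop empty pieces, as the TODO suggests.
function splitKnowledgeTextPieces(knowledgePiecesRaw: string): string[] {
    return knowledgePiecesRaw
        .split(/\n\s*-{3,}\s*\n/) // '---', '----', … with optional surrounding spaces
        .map((piece) => piece.trim())
        .filter((piece) => piece !== '');
}
```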