@promptbook/cli 0.67.0-1 → 0.67.0-3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -105,37 +105,23 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 ## 🤍 The Promptbook Whitepaper
 
-When you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how it is integrated. Whether it's the direct calling of a REST API, using the SDK, hardcoding the prompt in the source code, or importing a text file, the process remains the same.
+If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
 
-If you need something more advanced or want to extend the capabilities of LLMs, you generally have three ways to proceed:
+But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses. When this happens, you generally have three options:
 
 1. **Fine-tune** the model to your specifications or even train your own.
 2. **Prompt-engineer** the prompt to the best shape you can achieve.
-3. Use **multiple prompts** in a pipeline to get the best result.
-
-In any of these situations, but especially in (3), the Promptbook library can make your life easier and make **orchestraror for your prompts**.
-
-- **Separation of concerns** between prompt engineer and programmer; between code files and prompt files; and between prompts and their execution logic.
-- Set up a **common format** for prompts that is interchangeable between projects and language/technology stacks.
-- **Preprocessing** and cleaning the input data from the user.
-- Use default values - **Jokers** to bypass some parts of the pipeline.
-- **Expect** some specific output from the model.
-- **Retry** mismatched outputs.
-- **Combine** multiple models together.
-- Interactive **User interaction** with the model and the user.
-- Leverage **external** sources (like ChatGPT plugins or OpenAI's GPTs).
-- Simplify your code to be **DRY** and not repeat all the boilerplate code for each prompt.
-- **Versioning** of promptbooks
-- **Reuse** parts of promptbooks in/between projects.
-- Run the LLM **optimally** in parallel, with the best _cost/quality_ ratio or _speed/quality_ ratio.
-- **Execution report** to see what happened during the execution.
-- **Logging** the results of the promptbooks.
-- _(Not ready yet)_ **Caching** calls to LLMs to save money and time.
-- _(Not ready yet)_ Extend one prompt book from another one.
-- _(Not ready yet)_ Leverage the **streaming** to make super cool UI/UX.
-- _(Not ready yet)_ **A/B testing** to determine which prompt works best for the job.
+3. Use **multiple prompts** in a [pipeline](https://github.com/webgptorg/promptbook/discussions/64) to get the best result.
 
+In all of these situations, but especially in 3., the Promptbook library can make your life easier.
 
+- [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.
+- Establishes a [**common format `.ptbk.md`**](https://github.com/webgptorg/promptbook/discussions/85) that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
+- **Forget** about **low-level details** like choosing the right model, tokens, context size, temperature, top-k, top-p, or kernel sampling. **Just write your intent** and [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
+- Has built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
+- Sometimes even the best prompts with the best framework like Promptbook `:)` can't avoid the problems. In this case, the library has built-in **[anomaly detection](https://github.com/webgptorg/promptbook/discussions/40) and logging** to help you find and fix the problems.
+- Promptbook has built-in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
+- Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
 
 ## 🧔 Promptbook _(for prompt-engineers)_
 
@@ -179,9 +165,7 @@ File `write-website-content.ptbk.md`:
 >
 > ## ✨ Improving the title
 >
-> - MODEL VARIANT Chat
-> - MODEL NAME `gpt-4`
-> - POSTPROCESSING `unwrapResult`
+> - PERSONA Jane, Copywriter and Marketing Specialist.
 >
 > ```
 > As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.
@@ -215,9 +199,7 @@ File `write-website-content.ptbk.md`:
 >
 > ## 🐰 Cunning subtitle
 >
-> - MODEL VARIANT Chat
-> - MODEL NAME `gpt-4`
-> - POSTPROCESSING `unwrapResult`
+> - PERSONA Josh, a copywriter, tasked with creating a claim for the website.
 >
 > ```
 > As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.
@@ -237,8 +219,7 @@ File `write-website-content.ptbk.md`:
 >
 > ## 🚦 Keyword analysis
 >
-> - MODEL VARIANT Chat
-> - MODEL NAME `gpt-4`
+> - PERSONA Paul, extremely creative SEO specialist.
 >
 > ```
 > As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".
@@ -282,8 +263,7 @@ File `write-website-content.ptbk.md`:
 >
 > ## 🖋 Write the content
 >
-> - MODEL VARIANT Completion
-> - MODEL NAME `gpt-3.5-turbo-instruct`
+> - PERSONA Jane
 >
 > ```
 > As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.
@@ -462,7 +442,12 @@ The following glossary is used to clarify certain concepts:
 
 ### ➖ When not to use
 
-- When you are writing just a simple chatbot without any extra logic, just system messages
+- When you have already implemented a single simple prompt and it works fine for your job
+- When [OpenAI Assistant (GPTs)](https://help.openai.com/en/articles/8673914-gpts-vs-assistants) is enough for you
+- When you need streaming _(this may be implemented in the future, [see discussion](https://github.com/webgptorg/promptbook/discussions/102))_.
+- When you need to use something other than JavaScript or TypeScript _(other languages are on the way, [see the discussion](https://github.com/webgptorg/promptbook/discussions/101))_
+- When your main focus is on something other than text - like images, audio, video, spreadsheets _(other media types may be added in the future, [see discussion](https://github.com/webgptorg/promptbook/discussions/103))_
+- When you need to use recursion _([see the discussion](https://github.com/webgptorg/promptbook/discussions/38))_
 
 ## 🐜 Known issues
 
@@ -471,7 +456,6 @@ The following glossary is used to clarify certain concepts:
 
 ## 🧼 Intentionally not implemented features
 
-
 - [➿ No recursion](https://github.com/webgptorg/promptbook/discussions/38)
 - [🐳 There are no types, just strings](https://github.com/webgptorg/promptbook/discussions/52)
 
package/esm/index.es.js CHANGED
@@ -20,7 +20,7 @@ import OpenAI from 'openai';
 /**
  * The version of the Promptbook library
  */
-var PROMPTBOOK_VERSION = '0.67.0-0';
+var PROMPTBOOK_VERSION = '0.67.0-2';
 // TODO: !!!! List here all the versions and annotate + put into script
 
 /*! *****************************************************************************
@@ -407,6 +407,18 @@ var GENERATOR_WARNING_BY_PROMPTBOOK_CLI = "\u26A0\uFE0F WARNING: This code has b
  * @private within the repository - too low-level in comparison with other `MAX_...`
  */
 var LOOP_LIMIT = 1000;
+/**
+ * Timeout for the connections in milliseconds
+ *
+ * @private within the repository - too low-level in comparison with other `MAX_...`
+ */
+var CONNECTION_TIMEOUT_MS = 7 * 1000;
+/**
+ * How many times to retry the connections
+ *
+ * @private within the repository - too low-level in comparison with other `MAX_...`
+ */
+var CONNECTION_RETRIES_LIMIT = 5;
 /**
  * The maximum number of (LLM) tasks running in parallel
  *
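The hunk above replaces a hard-coded timeout with named constants. As a loose illustration only (nothing below comes from the package except the two constant names and values; `connectWithRetries` and its `connect` argument are invented), constants like these typically feed a retry-with-timeout helper:

```javascript
// Illustrative only: CONNECTION_TIMEOUT_MS and CONNECTION_RETRIES_LIMIT mirror
// the constants added in the diff above; connectWithRetries is a hypothetical consumer.
var CONNECTION_TIMEOUT_MS = 7 * 1000;
var CONNECTION_RETRIES_LIMIT = 5;

/**
 * Tries `connect()` up to CONNECTION_RETRIES_LIMIT times; each attempt is
 * abandoned if it does not settle within CONNECTION_TIMEOUT_MS.
 */
async function connectWithRetries(connect) {
    let lastError;
    for (let attempt = 1; attempt <= CONNECTION_RETRIES_LIMIT; attempt++) {
        try {
            return await Promise.race([
                connect(),
                new Promise((_, reject) => {
                    const timer = setTimeout(
                        () => reject(new Error(`Timeout after ${CONNECTION_TIMEOUT_MS} ms`)),
                        CONNECTION_TIMEOUT_MS,
                    );
                    if (timer.unref) timer.unref(); // let the process exit if connect wins
                }),
            ]);
        } catch (error) {
            lastError = error; // remember the failure and retry
        }
    }
    throw lastError;
}
```

In the package itself, a later hunk instead passes these values straight to socket.io-client's `retries` and `timeout` options.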
@@ -642,6 +654,7 @@ function pipelineJsonToString(pipelineJson) {
         commands.push("PIPELINE URL ".concat(pipelineUrl));
     }
     commands.push("PROMPTBOOK VERSION ".concat(promptbookVersion));
+    // TODO: !!!!!! This increases the size of the bundle and is probably not necessary
     pipelineString = prettifyMarkdown(pipelineString);
     try {
         for (var _g = __values(parameters.filter(function (_a) {
@@ -1021,7 +1034,7 @@ function forEachAsync(array, options, callbackfunction) {
     });
 }
 
- var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.67.0-0",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",dependentParameterNames:["knowledgeContent"],resultingParameterName:"knowledgePieces"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.67.0-0",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> 
{knowledgePieceContent}",dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"keywords"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.67.0-0",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"title"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.67.0-0",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",modelRequirements:{modelVariant:"CHAT",modelName:"gpt-4-turbo"},content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## 
Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n### Option `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Option `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Option `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",expectFormat:"JSON",dependentParameterNames:["availableModelNames","personaDescription"],resultingParameterName:"modelRequirements"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
+ var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.67.0-2",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",dependentParameterNames:["knowledgeContent"],resultingParameterName:"knowledgePieces"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.67.0-2",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> 
{knowledgePieceContent}",dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"keywords"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.67.0-2",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"title"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.67.0-2",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",modelRequirements:{modelVariant:"CHAT",modelName:"gpt-4-turbo"},content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## 
Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n### Option `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Option `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Option `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",expectFormat:"JSON",dependentParameterNames:["availableModelNames","personaDescription"],resultingParameterName:"modelRequirements"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
 
 /**
  * This error indicates that the promptbook in a markdown format cannot be parsed into a valid promptbook object
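The huge single-line change above only bumps `promptbookVersion` from `0.67.0-0` to `0.67.0-2` inside the prebuilt `PipelineCollection` data. For orientation, each entry follows the shape visible in that data (`title`, `pipelineUrl`, `promptbookVersion`, `parameters`, `promptTemplates`, ...); the miniature collection and lookup helper below are invented stand-ins, not package code:

```javascript
// Invented miniature stand-in mirroring the entry shape of PipelineCollection above;
// real entries also carry promptTemplates, knowledgeSources, personas, etc.
const miniaturePipelineCollection = [
    {
        title: 'Prepare Title',
        pipelineUrl: 'https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md',
        promptbookVersion: '0.67.0-2',
        parameters: [
            { name: 'knowledgePieceContent', description: 'The content', isInput: true, isOutput: false },
            { name: 'title', description: 'The title of the document', isInput: false, isOutput: true },
        ],
    },
];

// A hypothetical consumer: list the output parameter names of a pipeline found by URL
function getOutputParameterNames(collection, pipelineUrl) {
    const pipeline = collection.find((entry) => entry.pipelineUrl === pipelineUrl);
    if (!pipeline) {
        return [];
    }
    return pipeline.parameters
        .filter((parameter) => parameter.isOutput)
        .map((parameter) => parameter.name);
}
```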
@@ -7943,6 +7956,8 @@ var RemoteLlmExecutionTools = /** @class */ (function () {
     // <- TODO: [🧱] Implement in a functional (not new Class) way
     function (resolve, reject) {
         var socket = io(_this.options.remoteUrl, {
+            retries: CONNECTION_RETRIES_LIMIT,
+            timeout: CONNECTION_TIMEOUT_MS,
             path: _this.options.path,
             // path: `${this.remoteUrl.pathname}/socket.io`,
             transports: [/*'websocket', <- TODO: [🌬] Make websocket transport work */ 'polling'],
@@ -7954,7 +7969,7 @@ var RemoteLlmExecutionTools = /** @class */ (function () {
         // TODO: !!!! Better timeout handling
         setTimeout(function () {
             reject(new Error("Timeout while connecting to ".concat(_this.options.remoteUrl)));
-        }, 1000 /* <- TODO: Timeout to config */);
+        }, CONNECTION_TIMEOUT_MS);
     });
 };
 /**
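The final hunk replaces the hard-coded `1000` ms with `CONNECTION_TIMEOUT_MS`. The surrounding pattern, a promise rejected by a `setTimeout` if the connection never settles, can be sketched in isolation; `attemptConnection` and its `connect` argument are invented for this illustration and are not part of the package:

```javascript
// Standalone sketch of the reject-on-timeout pattern seen in the hunk above;
// attemptConnection and its connect argument are invented for illustration.
var CONNECTION_TIMEOUT_MS = 7 * 1000;

function attemptConnection(connect, timeoutMs = CONNECTION_TIMEOUT_MS) {
    return new Promise((resolve, reject) => {
        // Reject if the connection does not settle within the timeout
        const timer = setTimeout(() => {
            reject(new Error(`Timeout while connecting (${timeoutMs} ms)`));
        }, timeoutMs);

        connect().then(
            (connection) => {
                clearTimeout(timer); // connection won the race; cancel the timeout
                resolve(connection);
            },
            (error) => {
                clearTimeout(timer);
                reject(error);
            },
        );
    });
}
```

Clearing the timer on both success and failure avoids a stray rejection firing after the promise has already settled.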