@promptbook/node 0.67.0-1 → 0.67.0-3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -44,37 +44,23 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
  ## 🤍 The Promptbook Whitepaper
 
- When you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how it is integrated. Whether it's the direct calling of a REST API, using the SDK, hardcoding the prompt in the source code, or importing a text file, the process remains the same.
+ If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
 
- If you need something more advanced or want to extend the capabilities of LLMs, you generally have three ways to proceed:
+ But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses. When this happens, you generally have three options:
 
  1. **Fine-tune** the model to your specifications or even train your own.
  2. **Prompt-engineer** the prompt to the best shape you can achieve.
- 3. Use **multiple prompts** in a pipeline to get the best result.
-
- In any of these situations, but especially in (3), the Promptbook library can make your life easier and serve as an **orchestrator for your prompts**.
-
- - **Separation of concerns** between prompt engineer and programmer; between code files and prompt files; and between prompts and their execution logic.
- - Set up a **common format** for prompts that is interchangeable between projects and language/technology stacks.
- - **Preprocessing** and cleaning the input data from the user.
- - Use default values - **Jokers** to bypass some parts of the pipeline.
- - **Expect** some specific output from the model.
- - **Retry** mismatched outputs.
- - **Combine** multiple models together.
- - Interactive **User interaction** with the model and the user.
- - Leverage **external** sources (like ChatGPT plugins or OpenAI's GPTs).
- - Simplify your code to be **DRY** and not repeat all the boilerplate code for each prompt.
- - **Versioning** of promptbooks
- - **Reuse** parts of promptbooks in/between projects.
- - Run the LLM **optimally** in parallel, with the best _cost/quality_ ratio or _speed/quality_ ratio.
- - **Execution report** to see what happened during the execution.
- - **Logging** the results of the promptbooks.
- - _(Not ready yet)_ **Caching** calls to LLMs to save money and time.
- - _(Not ready yet)_ Extend one prompt book from another one.
- - _(Not ready yet)_ Leverage the **streaming** to make super cool UI/UX.
- - _(Not ready yet)_ **A/B testing** to determine which prompt works best for the job.
+ 3. Use **multiple prompts** in a [pipeline](https://github.com/webgptorg/promptbook/discussions/64) to get the best result.
 
+ In all of these situations, but especially in (3), the Promptbook library can make your life easier.
 
+ - [**Separates concerns**](https://github.com/webgptorg/promptbook/discussions/32) between prompt-engineer and programmer, between code files and prompt files, and between prompts and their execution logic.
+ - Establishes a [**common format `.ptbk.md`**](https://github.com/webgptorg/promptbook/discussions/85) that can be used to describe your prompt business logic without having to write code or deal with the technicalities of LLMs.
+ - **Forget** about **low-level details** like choosing the right model, tokens, context size, temperature, top-k, top-p, or kernel sampling. **Just write your intent** and the [**persona**](https://github.com/webgptorg/promptbook/discussions/22) who should be responsible for the task and let the library do the rest.
+ - Has built-in **orchestration** of [pipeline](https://github.com/webgptorg/promptbook/discussions/64) execution and many tools to make the process easier, more reliable, and more efficient, such as caching, [compilation+preparation](https://github.com/webgptorg/promptbook/discussions/78), [just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33), [expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37), [agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39), and more.
+ - Sometimes even the best prompts with the best framework like Promptbook `:)` can't avoid the problems. In this case, the library has built-in **[anomaly detection](https://github.com/webgptorg/promptbook/discussions/40) and logging** to help you find and fix the problems.
+ - Promptbook has built-in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
+ - Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
 
  ## 🧔 Promptbook _(for prompt-engineers)_
 
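The new wording above leans on _expectation-aware generation_: a template declares expectations (for example `expectations: { words: { min: 1, max: 8 } }` in the bundled `PipelineCollection` further down this diff) and execution retries until the output satisfies them. The following is a minimal conceptual sketch of that idea in TypeScript; it is not the library's actual implementation, and `callModel` is a hypothetical stand-in for a real LLM call.

```typescript
// Conceptual sketch only, not the @promptbook implementation.
// `Expectations` mirrors the shape seen in the bundled PipelineCollection,
// e.g. expectations: { words: { min: 1, max: 8 } } for the "Prepare Title" template.
type Expectations = { words?: { min?: number; max?: number } };

// Hypothetical stand-in for a real LLM call (OpenAI, Anthropic Claude, ...).
async function callModel(prompt: string): Promise<string> {
    throw new Error('Replace with a real model call');
}

function meetsExpectations(result: string, expectations: Expectations): boolean {
    const wordCount = result.trim().split(/\s+/).filter(Boolean).length;
    const { min = 0, max = Infinity } = expectations.words ?? {};
    return wordCount >= min && wordCount <= max;
}

// Retry the same prompt until the output satisfies the declared expectations.
async function generateWithExpectations(
    prompt: string,
    expectations: Expectations,
    maxAttempts = 3,
): Promise<string> {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        const result = await callModel(prompt);
        if (meetsExpectations(result, expectations)) {
            return result;
        }
    }
    throw new Error(`Expectations not met after ${maxAttempts} attempts`);
}
```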
@@ -118,9 +104,7 @@ File `write-website-content.ptbk.md`:
  >
  > ## ✨ Improving the title
  >
- > - MODEL VARIANT Chat
- > - MODEL NAME `gpt-4`
- > - POSTPROCESSING `unwrapResult`
+ > - PERSONA Jane, Copywriter and Marketing Specialist.
  >
  > ```
  > As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.
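Across these template hunks, the explicit `MODEL VARIANT` / `MODEL NAME` / `POSTPROCESSING` commands are replaced by a single `PERSONA` command; the persona description is later resolved into concrete model requirements by the `prepare-persona.ptbk.md` pipeline visible in the bundled collection below. A rough TypeScript sketch of the resolved shape, with field names taken from the sample JSON in that pipeline (the type name and the concrete values here are illustrative only):

```typescript
// Field names follow the sample JSON in the bundled prepare-persona.ptbk.md
// pipeline; the type name and the example values below are illustrative only.
type ResolvedModelRequirements = {
    modelName: string;     // picked from the available models, e.g. 'gpt-4o'
    systemMessage: string; // instructions/context the model receives up front
    temperature: number;   // 0 (deterministic) up to 2 (maximally random)
};

// One plausible resolution of "PERSONA Jane, Copywriter and Marketing Specialist."
const janeRequirements: ResolvedModelRequirements = {
    modelName: 'gpt-4o',
    systemMessage: 'You are Jane, an experienced copywriter and marketing specialist.',
    temperature: 0.7,
};
```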
@@ -154,9 +138,7 @@ File `write-website-content.ptbk.md`:
  >
  > ## 🐰 Cunning subtitle
  >
- > - MODEL VARIANT Chat
- > - MODEL NAME `gpt-4`
- > - POSTPROCESSING `unwrapResult`
+ > - PERSONA Josh, a copywriter, tasked with creating a claim for the website.
  >
  > ```
  > As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.
@@ -176,8 +158,7 @@ File `write-website-content.ptbk.md`:
  >
  > ## 🚦 Keyword analysis
  >
- > - MODEL VARIANT Chat
- > - MODEL NAME `gpt-4`
+ > - PERSONA Paul, extremely creative SEO specialist.
  >
  > ```
  > As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".
@@ -221,8 +202,7 @@ File `write-website-content.ptbk.md`:
  >
  > ## 🖋 Write the content
  >
- > - MODEL VARIANT Completion
- > - MODEL NAME `gpt-3.5-turbo-instruct`
+ > - PERSONA Jane
  >
  > ```
  > As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.
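Taken together, the templates in `write-website-content.ptbk.md` form a pipeline: `{title}` feeds the subtitle and keyword steps, whose results feed the final content step. The hand-rolled TypeScript sketch below illustrates that parameter flow only; it is not the Promptbook executor, and `callModel` is again a hypothetical stand-in for a real LLM call.

```typescript
// Hand-rolled illustration of the parameter flow, not the Promptbook executor.
type Step = {
    resultingParameter: string; // e.g. 'claim', 'keywords', 'content'
    promptTemplate: string;     // prompt text with {parameterName} placeholders
};

// Hypothetical stand-in for a real LLM call.
async function callModel(prompt: string): Promise<string> {
    throw new Error('Replace with a real model call');
}

// Substitute {parameterName} placeholders with already-known values.
function fillTemplate(template: string, parameters: Record<string, string>): string {
    return template.replace(/\{(\w+)\}/g, (_, name: string) => parameters[name] ?? `{${name}}`);
}

// Run the steps in order; each result becomes a parameter for later steps.
async function runPipeline(steps: Step[], inputs: Record<string, string>): Promise<Record<string, string>> {
    const parameters: Record<string, string> = { ...inputs };
    for (const step of steps) {
        const prompt = fillTemplate(step.promptTemplate, parameters);
        parameters[step.resultingParameter] = await callModel(prompt);
    }
    return parameters;
}
```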
@@ -401,7 +381,12 @@ The following glossary is used to clarify certain concepts:
 
  ### ➖ When not to use
 
- - When you are writing just a simple chatbot without any extra logic, just system messages
+ - When you have already implemented a single simple prompt and it works fine for your job
+ - When [OpenAI Assistant (GPTs)](https://help.openai.com/en/articles/8673914-gpts-vs-assistants) is enough for you
+ - When you need streaming _(this may be implemented in the future, [see discussion](https://github.com/webgptorg/promptbook/discussions/102))_.
+ - When you need to use something other than JavaScript or TypeScript _(other languages are on the way, [see the discussion](https://github.com/webgptorg/promptbook/discussions/101))_
+ - When your main focus is on something other than text - like images, audio, video, spreadsheets _(other media types may be added in the future, [see discussion](https://github.com/webgptorg/promptbook/discussions/103))_
+ - When you need to use recursion _([see the discussion](https://github.com/webgptorg/promptbook/discussions/38))_
 
  ## 🐜 Known issues
 
@@ -410,7 +395,6 @@ The following glossary is used to clarify certain concepts:
 
  ## 🧼 Intentionally not implemented features
 
-
  - [➿ No recursion](https://github.com/webgptorg/promptbook/discussions/38)
  - [🐳 There are no types, just strings](https://github.com/webgptorg/promptbook/discussions/52)
 
package/esm/index.es.js CHANGED
@@ -13,7 +13,7 @@ import * as dotenv from 'dotenv';
  /**
   * The version of the Promptbook library
   */
- var PROMPTBOOK_VERSION = '0.67.0-0';
+ var PROMPTBOOK_VERSION = '0.67.0-2';
  // TODO: !!!! List here all the versions and annotate + put into script
 
  /*! *****************************************************************************
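The only functional change in this hunk is the version constant baked into the bundle. Assuming `PROMPTBOOK_VERSION` is part of the package's public exports (the bundle suggests so, but verify against the package documentation), it can be read at runtime:

```typescript
import { PROMPTBOOK_VERSION } from '@promptbook/node';

// Prints the version baked into the installed build, e.g. '0.67.0-2' here.
console.log(PROMPTBOOK_VERSION);
```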
@@ -487,6 +487,7 @@ function pipelineJsonToString(pipelineJson) {
  commands.push("PIPELINE URL ".concat(pipelineUrl));
  }
  commands.push("PROMPTBOOK VERSION ".concat(promptbookVersion));
+ // TODO: !!!!!! This increase size of the bundle and is probbably not necessary
  pipelineString = prettifyMarkdown(pipelineString);
  try {
  for (var _g = __values(parameters.filter(function (_a) {
@@ -866,7 +867,7 @@ function forEachAsync(array, options, callbackfunction) {
  });
  }
 
- var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.67.0-0",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",dependentParameterNames:["knowledgeContent"],resultingParameterName:"knowledgePieces"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.67.0-0",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"keywords"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.67.0-0",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"title"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.67.0-0",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the 
persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",modelRequirements:{modelVariant:"CHAT",modelName:"gpt-4-turbo"},content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n### Option `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Option `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Option `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",expectFormat:"JSON",dependentParameterNames:["availableModelNames","personaDescription"],resultingParameterName:"modelRequirements"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
+ var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.67.0-2",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",dependentParameterNames:["knowledgeContent"],resultingParameterName:"knowledgePieces"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.67.0-2",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"keywords"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.67.0-2",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"title"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.67.0-2",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the 
persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",modelRequirements:{modelVariant:"CHAT",modelName:"gpt-4-turbo"},content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n### Option `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Option `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Option `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",expectFormat:"JSON",dependentParameterNames:["availableModelNames","personaDescription"],resultingParameterName:"modelRequirements"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
 
  /**
   * This error indicates that the promptbook in a markdown format cannot be parsed into a valid promptbook object
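For readability, the shape of the `PipelineCollection` entries embedded above can be summarised as the following TypeScript sketch. It is inferred purely from the data in this bundle; the library ships its own, richer type definitions, and the names used here are illustrative.

```typescript
// Inferred from the bundled PipelineCollection data above; illustrative names only.
type PipelineParameter = {
    name: string;
    description: string;
    isInput: boolean;
    isOutput: boolean;
};

type PromptTemplate = {
    blockType: 'PROMPT_TEMPLATE';
    name: string;
    title: string;
    modelRequirements: { modelVariant: 'CHAT' | 'COMPLETION'; modelName: string }; // only 'CHAT' appears in this bundle
    content: string;                   // prompt text with {parameter} placeholders
    expectations?: { words?: { min?: number; max?: number } };
    expectFormat?: 'JSON';
    dependentParameterNames: string[]; // parameters the template consumes
    resultingParameterName: string;    // parameter the template produces
};

type PipelineJson = {
    title: string;
    pipelineUrl: string;
    promptbookVersion: string;         // '0.67.0-2' in this build
    parameters: PipelineParameter[];
    promptTemplates: PromptTemplate[];
    knowledgeSources: unknown[];
    knowledgePieces: unknown[];
    personas: unknown[];
    preparations: unknown[];
    sourceFile: string;
};
```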