@promptbook/cli 0.68.2 → 0.68.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +16 -56
- package/esm/index.es.js +51 -4
- package/esm/index.es.js.map +1 -1
- package/esm/typings/promptbook-collection/index.d.ts +0 -3
- package/esm/typings/src/llm-providers/_common/utils/cache/CacheItem.d.ts +1 -1
- package/esm/typings/src/types/PipelineJson/PipelineJson.d.ts +1 -1
- package/package.json +1 -1
- package/umd/index.umd.js +51 -4
- package/umd/index.umd.js.map +1 -1
- package/esm/typings/src/personas/preparePersona.test.d.ts +0 -1
package/README.md
CHANGED
@@ -103,8 +103,14 @@ This will prettify all promptbooks in `promptbook` directory and adds Mermaid gr
 
 Rest of the documentation is common for **entire promptbook ecosystem**:
 
+# ✨ New Features
+
+- ✨ **Support [OpenAI o1 model](https://openai.com/o1/)**
+
 ## 🤍 The Promptbook Whitepaper
 
+
+
 If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
 
 But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd responses. When this happens, you generally have three options:
@@ -123,7 +129,9 @@ In all of these situations, but especially in 3., the Promptbook library can mak
 - Promptbook has built in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
 - Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
 
-
+
+
+
 ## 🧔 Pipeline _(for prompt-engeneers)_
 
 **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is document that describes a **pipeline** - a series of prompts that are chained together to form somewhat reciepe for transforming natural language input.
@@ -465,63 +473,15 @@ The following glossary is used to clarify certain concepts:
 
 ## ❔ FAQ
 
-
-
 If you have a question [start a discussion](https://github.com/webgptorg/promptbook/discussions/), [open an issue](https://github.com/webgptorg/promptbook/issues) or [write me an email](https://www.pavolhejny.com/contact).
 
-
-
-
-
-
-
-
-
-We are considering creating a bridge/converter between these two libraries.
-
-
-
-### Promptbooks vs. OpenAI`s GPTs
-
-GPTs are chat assistants that can be assigned to specific tasks and materials. But they are still chat assistants. Promptbooks are a way to orchestrate many more predefined tasks to have much tighter control over the process. Promptbooks are not a good technology for creating human-like chatbots, GPTs are not a good technology for creating outputs with specific requirements.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-### Where should I store my promptbooks?
-
-If you use raw SDKs, you just put prompts in the sourcecode, mixed in with typescript, javascript, python or whatever programming language you use.
-
-If you use promptbooks, you can store them in several places, each with its own advantages and disadvantages:
-
-1. As **source code**, typically git-committed. In this case you can use the versioning system and the promptbooks will be tightly coupled with the version of the application. You still get the power of promptbooks, as you separate the concerns of the prompt-engineer and the programmer.
-
-2. As data in a **database** In this case, promptbooks are like posts / articles on the blog. They can be modified independently of the application. You don't need to redeploy the application to change the promptbooks. You can have multiple versions of promptbooks for each user. You can have a web interface for non-programmers to create and modify promptbooks. But you lose the versioning system and you still have to consider the interface between the promptbooks and the application _(= input and output parameters)_.
-
-3. In a **configuration** in environment variables. This is a good way to store promptbooks if you have an application with multiple deployments and you want to have different but simple promptbooks for each deployment and you don't need to change them often.
-
-### What should I do when I need same promptbook in multiple human languages?
-
-A single promptbook can be written for several _(human)_ languages at once. However, we recommend that you have separate promptbooks for each language.
-
-In large language models, you will get better results if you have prompts in the same language as the user input.
-
-The best way to manage this is to have suffixed promptbooks like `write-website-content.en.ptbk.md` and `write-website-content.cs.ptbk.md` for each supported language.
-
-
-
-
+- [❔ Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?](https://github.com/webgptorg/promptbook/discussions/114)
+- [❔ How is it different from the OpenAI`s GPTs?](https://github.com/webgptorg/promptbook/discussions/118)
+- [❔ How is it different from the Langchain?](https://github.com/webgptorg/promptbook/discussions/115)
+- [❔ How is it different from the DSPy?](https://github.com/webgptorg/promptbook/discussions/117)
+- [❔ How is it different from _anything_?](https://github.com/webgptorg/promptbook/discussions?discussions_q=is%3Aopen+label%3A%22Promptbook+vs%22)
+- [❔ Is Promptbook using RAG _(Retrieval-Augmented Generation)_?](https://github.com/webgptorg/promptbook/discussions/123)
+- [❔ Is Promptbook using function calling?](https://github.com/webgptorg/promptbook/discussions/124)
 
 
 ## ⌚ Changelog
package/esm/index.es.js
CHANGED
@@ -20,7 +20,7 @@ import OpenAI from 'openai';
 /**
  * The version of the Promptbook library
  */
-var PROMPTBOOK_VERSION = '0.68.
+var PROMPTBOOK_VERSION = '0.68.3';
 // TODO: !!!! List here all the versions and annotate + put into script
 
 /*! *****************************************************************************
@@ -1040,7 +1040,7 @@ function forEachAsync(array, options, callbackfunction) {
     });
 }
 
-var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",
+var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n- Your output format is JSON object\n- Write just the JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
 
 /**
  * This error indicates that the promptbook in a markdown format cannot be parsed into a valid promptbook object
@@ -1264,7 +1264,7 @@ function validatePipeline(pipeline) {
         // <- Note: [🚲]
         throw new PipelineLogicError(spaceTrim(function (block) { return "\n Invalid promptbook URL \"".concat(pipeline.pipelineUrl, "\"\n\n ").concat(block(pipelineIdentification), "\n "); }));
     }
-    if (!isValidPromptbookVersion(pipeline.promptbookVersion)) {
+    if (pipeline.promptbookVersion !== undefined && !isValidPromptbookVersion(pipeline.promptbookVersion)) {
         // <- Note: [🚲]
         throw new PipelineLogicError(spaceTrim(function (block) { return "\n Invalid Promptbook Version \"".concat(pipeline.promptbookVersion, "\"\n\n ").concat(block(pipelineIdentification), "\n "); }));
     }
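The hunk above relaxes validation: a pipeline with no explicit `promptbookVersion` is now accepted, and only a present-but-malformed version is rejected. A minimal sketch of the new guard (hypothetical stand-in code; the real `isValidPromptbookVersion` and error class are not shown in this diff):

```javascript
// Hypothetical stand-in for the library's validator (assumed semver-like).
const SEMVER_LIKE = /^\d+\.\d+\.\d+$/;

function isValidPromptbookVersion(version) {
    return typeof version === 'string' && SEMVER_LIKE.test(version);
}

// Mirrors the changed guard: a missing version now passes,
// only a present-but-invalid one throws.
function checkPromptbookVersion(promptbookVersion) {
    if (promptbookVersion !== undefined && !isValidPromptbookVersion(promptbookVersion)) {
        throw new Error(`Invalid Promptbook Version "${promptbookVersion}"`);
    }
}

checkPromptbookVersion('0.68.3'); // ok
checkPromptbookVersion(undefined); // ok as of this change; previously any missing version failed
```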
@@ -5803,6 +5803,7 @@ var promptbookVersionCommandParser = {
      * Note: `$` is used to indicate that this function mutates given `pipelineJson`
      */
     $applyToPipelineJson: function (command, $pipelineJson) {
+        // TODO: Warn if the version is overridden
        $pipelineJson.promptbookVersion = command.promptbookVersion;
    },
    /**
@@ -6692,7 +6693,7 @@ function pipelineStringToJsonSync(pipelineString) {
    var $pipelineJson = {
        title: undefined /* <- Note: [🍙] Putting here placeholder to keep `title` on top at final JSON */,
        pipelineUrl: undefined /* <- Note: Putting here placeholder to keep `pipelineUrl` on top at final JSON */,
-        promptbookVersion:
+        promptbookVersion: undefined /* <- Note: By default no explicit version */,
        description: undefined /* <- Note: [🍙] Putting here placeholder to keep `description` on top at final JSON */,
        parameters: [],
        templates: [],
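Taken together, the two hunks above mean a freshly parsed pipeline defaults to `promptbookVersion: undefined`, and only an explicit version command in the source fills it in via `$applyToPipelineJson`. A rough sketch of that flow (the command object shape here is invented for illustration; this is not the real parser):

```javascript
// Hypothetical sketch of the new default-version flow, not library code.
function parsePipelineSketch(commands) {
    const $pipelineJson = {
        title: undefined,
        pipelineUrl: undefined,
        promptbookVersion: undefined, // <- by default no explicit version
        parameters: [],
        templates: [],
    };
    for (const command of commands) {
        if (command.type === 'PROMPTBOOK_VERSION') {
            // Mutates $pipelineJson in place, like $applyToPipelineJson in the diff
            $pipelineJson.promptbookVersion = command.promptbookVersion;
        }
    }
    return $pipelineJson;
}

console.log(parsePipelineSketch([]).promptbookVersion); // undefined
console.log(
    parsePipelineSketch([{ type: 'PROMPTBOOK_VERSION', promptbookVersion: '0.68.3' }])
        .promptbookVersion,
); // 0.68.3
```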
@@ -9727,6 +9728,7 @@ var OPENAI_MODELS = $asDeeplyFrozenSerializableJson('OPENAI_MODELS', [
            prompt: computeUsage("$5.00 / 1M tokens"),
            output: computeUsage("$15.00 / 1M tokens"),
        },
+        //TODO: !!!!!! Add gpt-4o-mini-2024-07-18 and all others to be up to date
    },
    /**/
    /**/
@@ -9741,6 +9743,51 @@ var OPENAI_MODELS = $asDeeplyFrozenSerializableJson('OPENAI_MODELS', [
    },
    /**/
    /**/
+    {
+        modelVariant: 'CHAT',
+        modelTitle: 'o1-preview',
+        modelName: 'o1-preview',
+        pricing: {
+            prompt: computeUsage("$15.00 / 1M tokens"),
+            output: computeUsage("$60.00 / 1M tokens"),
+        },
+    },
+    /**/
+    /**/
+    {
+        modelVariant: 'CHAT',
+        modelTitle: 'o1-preview-2024-09-12',
+        modelName: 'o1-preview-2024-09-12',
+        // <- TODO: !!!!!! Some better system to organize theese date suffixes and versions
+        pricing: {
+            prompt: computeUsage("$15.00 / 1M tokens"),
+            output: computeUsage("$60.00 / 1M tokens"),
+        },
+    },
+    /**/
+    /**/
+    {
+        modelVariant: 'CHAT',
+        modelTitle: 'o1-mini',
+        modelName: 'o1-mini',
+        pricing: {
+            prompt: computeUsage("$3.00 / 1M tokens"),
+            output: computeUsage("$12.00 / 1M tokens"),
+        },
+    },
+    /**/
+    /**/
+    {
+        modelVariant: 'CHAT',
+        modelTitle: 'o1-mini-2024-09-12',
+        modelName: 'o1-mini-2024-09-12',
+        pricing: {
+            prompt: computeUsage("$3.00 / 1M tokens"),
+            output: computeUsage("$12.00 / 1M tokens"),
+        },
+    },
+    /**/
+    /**/
    {
        modelVariant: 'CHAT',
        modelTitle: 'gpt-3.5-turbo-16k-0613',