@promptbook/node 0.63.0-9 → 0.63.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -76,12 +76,23 @@ In any of these situations, but especially in (3), the Promptbook library can ma
76
76
 
77
77
 
78
78
 
79
+ ## 🧔 Promptbook _(for prompt engineers)_
80
+
81
+ A **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is a document that describes a **pipeline** - a series of prompts that are chained together to form something like a recipe for transforming natural language input.
82
+
83
+ - Multiple pipelines form a **collection**, which holds the core **know-how of your LLM application**.
84
+ - These pipelines are designed so that they **can be written by non-programmers**.
85
+
86
+
87
+
79
88
  ### Sample:
80
89
 
81
90
  File `write-website-content.ptbk.md`:
82
91
 
83
92
 
84
93
 
94
+
95
+
85
96
  > # 🌍 Create website content
86
97
  >
87
98
  > Instructions for creating web page content.
@@ -298,7 +309,8 @@ flowchart LR
298
309
  end;
299
310
  ```
300
311
 
301
- [More template samples](./samples/templates/)
312
+ - [More template samples](./samples/templates/)
313
+ - [Read more about `.ptbk.md` file format here](https://github.com/webgptorg/promptbook/discussions/categories/concepts?discussions_q=is%3Aopen+label%3A.ptbk.md+category%3AConcepts)
302
314
 
303
315
  _Note: We are using [postprocessing functions](#postprocessing-functions) like `unwrapResult` that can be used to postprocess the result._
304
316
 
@@ -315,7 +327,6 @@ Or you can install them separately:
315
327
 
316
328
  > ⭐ Marked packages are worth to try first
317
329
 
318
-
319
330
  - ⭐ **[ptbk](https://www.npmjs.com/package/ptbk)** - Bundle of all packages, when you want to install everything and you don't care about the size
320
331
  - **[promptbook](https://www.npmjs.com/package/promptbook)** - Same as `ptbk`
321
332
  - **[@promptbook/core](https://www.npmjs.com/package/@promptbook/core)** - Core of the library, it contains the main logic for promptbooks
@@ -339,263 +350,39 @@ Or you can install them separately:
339
350
 
340
351
  ## 📚 Dictionary
341
352
 
342
- The following glossary is used to clarify certain basic concepts:
343
-
344
- ### Prompt
345
-
346
- A prompt is a text along with model requirements, but without any execution or templating logic.
347
-
348
- For example:
349
-
350
- ```json
351
- {
352
- "request": "Which sound does a cat make?",
353
- "modelRequirements": {
354
- "variant": "CHAT"
355
- }
356
- }
357
- ```
358
-
359
- ```json
360
- {
361
- "request": "I am a cat.\nI like to eat fish.\nI like to sleep.\nI like to play with a ball.\nI l",
362
- "modelRequirements": {
363
- "variant": "COMPLETION"
364
- }
365
- }
366
- ```
367
-
368
- ### Prompt Template
369
-
370
- Similar concept to Prompt, but with templating logic.
371
-
372
- For example:
373
-
374
- ```json
375
- {
376
- "request": "Which sound does a {animalName} make?",
377
- "modelRequirements": {
378
- "variant": "CHAT"
379
- }
380
- }
381
- ```
382
-
383
- ### Model Requirements
384
-
385
- Abstract way to specify the LLM.
386
- It does not specify a concrete LLM version itself, only the requirements for the LLM.
387
- _NOT chatgpt-3.5-turbo BUT CHAT variant of GPT-3.5._
388
-
389
- For example:
390
-
391
- ```json
392
- {
393
- "variant": "CHAT",
394
- "version": "GPT-3.5",
395
- "temperature": 0.7
396
- }
397
- ```
398
-
399
- ### Block type
400
-
401
- Each block of promptbook can have a different execution type.
402
- It is specified in the list of requirements for the block.
403
- By default, it is `Prompt template`
404
-
405
- - _(default)_ `Prompt template` The block is a prompt template and is executed by LLM (OpenAI, Azure,...)
406
- - `SIMPLE TEMPLATE` The block is a simple text template which is just filled with parameters
407
- - `Script` The block is a script that is executed by some script runtime, the runtime is determined by block type, currently only `javascript` is supported but we plan to add `python` and `typescript` in the future.
408
- - `PROMPT DIALOG` Ask user for input
409
-
410
- ### Parameters
411
-
412
- Parameters that are placed in the prompt template and replaced to create the prompt.
413
- It is a simple key-value object.
414
-
415
- ```json
416
- {
417
- "animalName": "cat",
418
- "animalSound": "Meow!"
419
- }
420
- ```
421
-
422
- There are three types of template parameters, depending on how they are used in the promptbook:
423
-
424
- - **INPUT PARAMETER**s are required to execute the promptbook.
425
- - **Intermediate parameters** are used internally in the promptbook.
426
- - **OUTPUT PARAMETER**s are explicitly marked and they are returned as the result of the promptbook execution.
427
-
428
- _Note: A parameter can be both intermediate and output at the same time._
429
-
430
- ### Promptbook
431
-
432
- Promptbook is the **core concept of this library**.
433
- It represents a series of prompt templates chained together to form a **pipeline** / one big prompt template with input and result parameters.
434
-
435
- Internally it can have multiple formats:
436
-
437
- - **.ptbk.md file** in custom markdown format described above
438
- - _(concept)_ **.ptbk** format, a custom file extension based on markdown
439
- - _(internal)_ **JSON** format, parsed from the .ptbk.md file
440
-
441
- ### Promptbook **Library**
442
-
443
- Library of all promptbooks used in your application.
444
- Each promptbook is a separate `.ptbk.md` file with a unique `PIPELINE URL`. These URLs are used to reference promptbooks in other promptbooks or in the application code.
445
-
446
- ### Prompt Result
447
-
448
- Prompt result is the simplest concept of execution.
449
- It is the result of executing one prompt _(NOT a template)_.
450
-
451
- For example:
452
-
453
- ```json
454
- {
455
- "response": "Meow!",
456
- "model": "chatgpt-3.5-turbo"
457
- }
458
- ```
459
-
460
- ### Execution Tools
461
-
462
-
463
-
464
- `ExecutionTools` is an interface which contains all the tools needed to execute prompts.
465
- It contains 3 subtools:
466
-
467
- - `LlmExecutionTools`
468
- - `ScriptExecutionTools`
469
- - `UserInterfaceTools`
470
-
471
- Which are described below:
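For orientation, here is a minimal TypeScript sketch of that container; the property names and shapes are illustrative assumptions, not copied from the library's type declarations:

```typescript
// Illustrative sketch only - property names and shapes are assumptions.
interface LlmExecutionTools {
    // exposes common methods for executing prompts against an LLM
}
interface ScriptExecutionTools {
    // executes scripts (e.g. javascript) used inside a pipeline
}
interface UserInterfaceTools {
    // asks the user for input when a PROMPT DIALOG block runs
}

interface ExecutionTools {
    llm: LlmExecutionTools;
    script: ScriptExecutionTools;
    userInterface: UserInterfaceTools;
}
```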
353
+ The following glossary is used to clarify certain concepts:
472
354
 
473
- #### LLM Execution Tools
474
355
 
475
- `LlmExecutionTools` is a container for all the tools needed to execute prompts against large language models like GPT-4.
476
- On its interface it exposes common methods for prompt execution.
477
- Internally it calls OpenAI, Azure, GPU, proxy, cache, logging,...
478
356
 
479
- `LlmExecutionTools` is an abstract interface that is implemented by concrete execution tools:
357
+ ### Core concepts
480
358
 
481
- - `OpenAiExecutionTools`
482
- - `AnthropicClaudeExecutionTools`
483
- - `AzureOpenAiExecutionTools`
484
- - `LangtailExecutionTools`
485
- - _(Not implemented yet)_ `BardExecutionTools`
486
- - _(Not implemented yet)_ `LamaExecutionTools`
487
- - _(Not implemented yet)_ `GpuExecutionTools`
488
- - A special case is `RemoteLlmExecutionTools`, which connects to a remote server and runs one of the above execution tools on that server.
489
- - Another special case is `MockedEchoLlmExecutionTools` that is used for testing and mocking.
490
- - Yet another special case is `LogLlmExecutionToolsWrapper`, which is technically also an execution tool but is rather a proxy wrapper that logs all calls to the execution tools it wraps.
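As an illustration, a hedged sketch of instantiating one of these implementations; the import path and constructor options shown here are assumptions and may differ between versions:

```typescript
// Sketch only - the import path and option names are assumptions.
import { OpenAiExecutionTools } from '@promptbook/openai';

// Keep the API key in the environment, never hard-coded.
const llmTools = new OpenAiExecutionTools({
    apiKey: process.env.OPENAI_API_KEY,
});
```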
359
+ - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
360
+ - [📯 Pipeline](https://github.com/webgptorg/promptbook/discussions/64)
361
+ - [🎺 Pipeline templates](https://github.com/webgptorg/promptbook/discussions/88)
362
+ - [🤼 Personas](https://github.com/webgptorg/promptbook/discussions/22)
363
+ - [⭕ Parameters](https://github.com/webgptorg/promptbook/discussions/83)
364
+ - [🚀 Pipeline execution](https://github.com/webgptorg/promptbook/discussions/84)
365
+ - [🧪 Expectations](https://github.com/webgptorg/promptbook/discussions/30)
366
+ - [✂️ Postprocessing](https://github.com/webgptorg/promptbook/discussions/31)
367
+ - [🔣 Words not tokens](https://github.com/webgptorg/promptbook/discussions/29)
368
+ - [☯ Separation of concerns](https://github.com/webgptorg/promptbook/discussions/32)
491
369
 
492
- #### Script Execution Tools
370
+ ### Advanced concepts
493
371
 
494
- `ScriptExecutionTools` is an abstract container that represents all the tools needed to EXECUTE SCRIPTs. It is implemented by concrete execution tools:
372
+ - [📚 Knowledge (Retrieval-augmented generation)](https://github.com/webgptorg/promptbook/discussions/41)
373
+ - [🌏 Remote server](https://github.com/webgptorg/promptbook/discussions/89)
374
+ - [🃏 Jokers (conditions)](https://github.com/webgptorg/promptbook/discussions/66)
375
+ - [🔳 Metaprompting](https://github.com/webgptorg/promptbook/discussions/35)
376
+ - [🌏 Linguistically typed languages](https://github.com/webgptorg/promptbook/discussions/53)
377
+ - [🌍 Auto-Translations](https://github.com/webgptorg/promptbook/discussions/42)
378
+ - [📽 Images, audio, video, spreadsheets](https://github.com/webgptorg/promptbook/discussions/54)
379
+ - [🔙 Expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37)
380
+ - [⏳ Just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33)
381
+ - [🔴 Anomaly detection](https://github.com/webgptorg/promptbook/discussions/40)
382
+ - [👮 Agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39)
383
+ - [view more](https://github.com/webgptorg/promptbook/discussions/categories/concepts)
495
384
 
496
- - `JavascriptExecutionTools` is a wrapper around `vm2` module that executes javascript code in a sandbox.
497
- - `JavascriptEvalExecutionTools` is a wrapper around the `eval` function that executes JavaScript. It is used for testing and mocking, **NOT intended for use in production** due to its unsafe nature; use `JavascriptExecutionTools` instead.
498
- - _(Not implemented yet)_ `TypescriptExecutionTools` executes typescript code in a sandbox.
499
- - _(Not implemented yet)_ `PythonExecutionTools` executes python code in a sandbox.
500
-
501
- There are [postprocessing functions](#postprocessing-functions) that can be used to postprocess the result.
502
-
503
- #### User Interface Tools
504
-
505
- `UserInterfaceTools` is an abstract container that represents all the tools needed to interact with the user. It is implemented by concrete execution tools:
506
-
507
- - _(Not implemented yet)_ `ConsoleInterfaceTools` is a wrapper around `readline` module that interacts with the user via console.
508
- - `SimplePromptInterfaceTools` is a wrapper around `window.prompt` synchronous function that interacts with the user via browser prompt. It is used for testing and mocking **NOT intended to use in the production** due to its synchronous nature.
509
- - `CallbackInterfaceTools` delagates the user interaction to a async callback function. You need to provide your own implementation of this callback function and its bind to UI.
510
-
511
- ### Executor
512
-
513
- Executor is a simple async function that takes **input parameters** and returns **output parameters**.
514
- It is constructed by combining execution tools with a promptbook so they can be executed together.
515
-
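A hedged sketch of that pattern; the factory name and option names below are assumptions used only to illustrate the flow:

```typescript
// Sketch only - the factory name and option names are assumptions.
import { createPromptbookExecutor } from '@promptbook/core';

// `promptbook` is a parsed .ptbk.md pipeline, `tools` is the ExecutionTools container described above
const executor = createPromptbookExecutor({ promptbook, tools });

// The executor itself is just an async function: input parameters in, output parameters out
const { outputParameters } = await executor({ word: 'cat' });
```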
516
- ### 🃏 Jokers (conditions)
517
-
518
- A joker is a previously defined parameter that is used to bypass some parts of the pipeline.
519
- If the joker is present in the template, it is checked to see if it meets the requirements (without postprocessing), and if so, it is used instead of executing that prompt template. There can be multiple jokers in a prompt template; if so, they are checked in order and the first one that meets the requirements is used.
520
-
521
- If none of the jokers meet the requirements, the prompt template is executed as usual.
522
-
523
- This can be useful, for example, if you want to use some predefined data, or if you want to use some data from the user but you are not sure whether it is in a suitable form.
524
-
525
- When using jokers, you must have at least one minimal expectation. If you do not have a minimal expectation, the joker would always be accepted because there is nothing to check, so it makes no logical sense.
526
-
527
- Look at [jokers.ptbk.md](samples/templates/41-jokers.ptbk.md) sample.
528
-
529
- ### Postprocessing functions
530
-
531
- You can define postprocessing functions when creating `JavascriptEvalExecutionTools`:
532
-
533
- ```
534
-
535
- ```
536
-
537
- Additionally there are some usefull string-manipulation build-in functions, which are [listed here](src/scripting/javascript/JavascriptEvalExecutionTools.ts).
538
-
539
- ### Expectations
540
-
541
- `Expect` command describes the desired output of the prompt template (after post-processing)
542
- It can set limits for the maximum/minimum length of the output, measured in characters, words, sentences, paragraphs,...
543
-
544
- _Note: LLMs work with tokens, not characters, but in Promptbooks we want to use some human-recognisable and cross-model interoperable units._
545
-
546
- ```markdown
547
- # ✨ Sample: Expectations
548
-
549
- - INPUT  PARAMETER {yourName} Name of the hero
550
-
551
- ## 💬 Question
552
-
553
- - EXPECT MAX 30 CHARACTERS
554
- - EXPECT MIN 2 CHARACTERS
555
- - EXPECT MAX 3 WORDS
556
- - EXPECT EXACTLY 1 SENTENCE
557
- - EXPECT EXACTLY 1 LINE
558
-
559
- ...
560
- ```
561
-
562
- There are two types of expectations which are not strictly symmetrical:
563
-
564
- #### Minimal expectations
565
-
566
- - `EXPECT MIN 0 ...` is not a valid minimal expectation. It makes no sense.
567
- - `EXPECT JSON` is both minimal and maximal expectation
568
- - When you are using `JOKER` in the same prompt template, you need to have at least one minimal expectation.
569
-
570
- #### Maximal expectations
571
-
572
- - `EXPECT MAX 0 ...` is a valid maximal expectation. For example, you can expect 0 pages and 2 sentences.
573
- - `EXPECT JSON` is both minimal and maximal expectation
574
-
575
- Look at [expectations.ptbk.md](samples/templates/45-expectations.ptbk.md) and [expect-json.ptbk.md](samples/templates/45-expect-json.ptbk.md) samples for more.
576
-
577
- ### Execution report
578
-
579
- Execution report is a simple object or markdown that contains information about the execution of the pipeline.
580
-
581
- [See the example of such a report](/samples/templates/50-advanced.report.md)
582
-
583
-
584
-
585
-
586
-
587
- ### Remote server
588
-
589
- Remote server is a proxy server that uses its execution tools internally and exposes the executor interface externally.
590
-
591
- You can simply use `RemoteExecutionTools` on client-side javascript and connect to your remote server.
592
- This is useful when you want to keep all logic on the browser side without exposing your API keys or having to use the customer's GPU.
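A hedged sketch of the client side; the class name comes from the text above, but the import path and the `remoteUrl` option are assumptions:

```typescript
// Sketch only - the import path and option names are assumptions.
import { RemoteLlmExecutionTools } from '@promptbook/remote-client';

const llmTools = new RemoteLlmExecutionTools({
    // The proxy server keeps the real API keys; the browser only talks to it.
    remoteUrl: 'https://your-promptbook-server.example.com',
});
```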
593
-
594
- ## 👨‍💻 Usage and integration _(for developers)_
595
-
596
-
597
-
598
- ### 🔌 Usage in Typescript / Javascript
385
+ ## 🔌 Usage in Typescript / Javascript
599
386
 
600
387
  - [Simple usage](./samples/usage/simple-script)
601
388
  - [Usage with client and remote server](./samples/usage/remote)
@@ -618,14 +405,19 @@ This is useful to make all logic on browser side but not expose your API keys or
618
405
 
619
406
  ## 🐜 Known issues
620
407
 
621
-
408
+ - [🤸‍♂️ Iterations not working yet](https://github.com/webgptorg/promptbook/discussions/55)
409
+ - [⤵️ Imports not working yet](https://github.com/webgptorg/promptbook/discussions/34)
622
410
 
623
411
  ## 🧼 Intentionally not implemented features
624
412
 
625
413
 
414
+ - [➿ No recursion](https://github.com/webgptorg/promptbook/discussions/38)
415
+ - [🏳 There are no types, just strings](https://github.com/webgptorg/promptbook/discussions/52)
626
416
 
627
417
  ## ❔ FAQ
628
418
 
419
+
420
+
629
421
  If you have a question [start a discussion](https://github.com/webgptorg/promptbook/discussions/), [open an issue](https://github.com/webgptorg/promptbook/issues) or [write me an email](https://www.pavolhejny.com/contact).
630
422
 
631
423
  ### Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?
package/esm/index.es.js CHANGED
@@ -15,7 +15,7 @@ import sha256 from 'crypto-js/sha256';
15
15
  /**
16
16
  * The version of the Promptbook library
17
17
  */
18
- var PROMPTBOOK_VERSION = '0.63.0-8';
18
+ var PROMPTBOOK_VERSION = '0.63.0-10';
19
19
  // TODO: !!!! List here all the versions and annotate + put into script
20
20
 
21
21
  /*! *****************************************************************************
@@ -690,7 +690,7 @@ function forEachAsync(array, options, callbackfunction) {
690
690
  });
691
691
  }
692
692
 
693
- var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.63.0-8",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",dependentParameterNames:["knowledgeContent"],resultingParameterName:"knowledgePieces"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-8",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.63.0-8",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"keywords"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-8",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.63.0-8",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced content creator, write best title 
for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"title"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-8",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.63.0-8",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",modelRequirements:{modelVariant:"CHAT",modelName:"gpt-4-turbo"},content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n### Option `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Option `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Option `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",expectFormat:"JSON",dependentParameterNames:["availableModelNames","personaDescription"],resultingParameterName:"modelRequirements"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-8",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
693
+ var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.63.0-10",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",dependentParameterNames:["knowledgeContent"],resultingParameterName:"knowledgePieces"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-10",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.63.0-10",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"keywords"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-10",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.63.0-10",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced content creator, write best 
title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"title"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-10",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.63.0-10",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",modelRequirements:{modelVariant:"CHAT",modelName:"gpt-4-turbo"},content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n### Option `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Option `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Option `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",expectFormat:"JSON",dependentParameterNames:["availableModelNames","personaDescription"],resultingParameterName:"modelRequirements"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-10",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
694
694
 
695
695
  /**
696
696
  * This error indicates that the promptbook in a markdown format cannot be parsed into a valid promptbook object
@@ -4147,7 +4147,7 @@ var blockCommandParser = {
4147
4147
  * Units of text measurement
4148
4148
  *
4149
4149
  * @see https://github.com/webgptorg/promptbook/discussions/30
4150
- * @private internal base for `ExpectationUnit`
4150
+ * @public exported from `@promptbook/core`
4151
4151
  */
4152
4152
  var EXPECTATION_UNITS = ['CHARACTERS', 'WORDS', 'SENTENCES', 'LINES', 'PARAGRAPHS', 'PAGES'];
4153
4153
  /**
@@ -4387,7 +4387,7 @@ var jokerCommandParser = {
4387
4387
  /**
4388
4388
  * @@@
4389
4389
  *
4390
- * @private internal base for `ModelVariant` and `modelCommandParser`
4390
+ * @public exported from `@promptbook/core`
4391
4391
  */
4392
4392
  var MODEL_VARIANTS = ['COMPLETION', 'CHAT', 'EMBEDDING' /* <- TODO [🏳] */ /* <- [🤖] */];
4393
4393
 
@@ -7327,6 +7327,7 @@ function createLlmToolsFromEnv(options) {
7327
7327
  * TODO: [🧠] Maybe pass env as argument
7328
7328
  * Note: [🟢] This code should never be published outside of `@promptbook/node` and `@promptbook/cli` and `@promptbook/cli`
7329
7329
  * TODO: [👷‍♂️] @@@ Manual about construction of llmTools
7330
+ * TODO: [🥃] Allow `ptbk make` without llm tools
7330
7331
  */
7331
7332
 
7332
7333
  /**