@promptbook/cli 0.63.0-8 → 0.63.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -134,12 +134,23 @@ In any of these situations, but especially in (3), the Promptbook library can ma
134
134
 
135
135
 
136
136
 
137
+ ## 🧔 Promptbook _(for prompt engineers)_
138
+
139
+ A **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is a document that describes a **pipeline** - a series of prompts chained together to form a recipe for transforming natural language input.
140
+
141
+ - Multiple pipelines form a **collection**, which holds the core **know-how of your LLM application**.
142
+ - These pipelines are designed so that they **can be written by non-programmers**.
143
+
144
+
145
+
137
146
  ### Sample:
138
147
 
139
148
  File `write-website-content.ptbk.md`:
140
149
 
141
150
 
142
151
 
152
+
153
+
143
154
  > # 🌍 Create website content
144
155
  >
145
156
  > Instructions for creating web page content.
@@ -356,7 +367,8 @@ flowchart LR
356
367
  end;
357
368
  ```
358
369
 
359
- [More template samples](./samples/templates/)
370
+ - [More template samples](./samples/templates/)
371
+ - [Read more about `.ptbk.md` file format here](https://github.com/webgptorg/promptbook/discussions/categories/concepts?discussions_q=is%3Aopen+label%3A.ptbk.md+category%3AConcepts)
360
372
 
361
373
  _Note: We are using [postprocessing functions](#postprocessing-functions) like `unwrapResult` that can be used to postprocess the result._
362
374
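The note above mentions postprocessing functions like `unwrapResult`. The diff does not show that function's actual implementation, so here is only a hypothetical sketch of the idea — stripping wrapping quotes from a model answer (the name `unwrapQuotes` and its exact behavior are assumptions for illustration):

```javascript
// Hypothetical postprocessing function in the spirit of `unwrapResult`:
// strip a single pair of wrapping quotes from a model answer.
// (The library's real unwrapResult is not shown in this diff.)
function unwrapQuotes(text) {
    const trimmed = text.trim();
    // Match a leading and trailing straight or curly quote around the content.
    const match = trimmed.match(/^["'“”](.*)["'“”]$/s);
    return match ? match[1] : trimmed;
}

unwrapQuotes('"Catnip Corner"');
// → 'Catnip Corner'
```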
 
@@ -373,7 +385,6 @@ Or you can install them separately:
373
385
 
374
386
  > ⭐ Marked packages are worth to try first
375
387
 
376
-
377
388
  - ⭐ **[ptbk](https://www.npmjs.com/package/ptbk)** - Bundle of all packages, when you want to install everything and you don't care about the size
378
389
  - **[promptbook](https://www.npmjs.com/package/promptbook)** - Same as `ptbk`
379
390
  - **[@promptbook/core](https://www.npmjs.com/package/@promptbook/core)** - Core of the library, it contains the main logic for promptbooks
@@ -397,263 +408,39 @@ Or you can install them separately:
397
408
 
398
409
  ## 📚 Dictionary
399
410
 
400
- The following glossary is used to clarify certain basic concepts:
401
-
402
- ### Prompt
403
-
404
- Prompt in a text along with model requirements, but without any execution or templating logic.
405
-
406
- For example:
407
-
408
- ```json
409
- {
410
- "request": "Which sound does a cat make?",
411
- "modelRequirements": {
412
- "variant": "CHAT"
413
- }
414
- }
415
- ```
416
-
417
- ```json
418
- {
419
- "request": "I am a cat.\nI like to eat fish.\nI like to sleep.\nI like to play with a ball.\nI l",
420
- "modelRequirements": {
421
- "variant": "COMPLETION"
422
- }
423
- }
424
- ```
425
-
426
- ### Prompt Template
427
-
428
- Similar concept to Prompt, but with templating logic.
429
-
430
- For example:
431
-
432
- ```json
433
- {
434
- "request": "Which sound does a {animalName} make?",
435
- "modelRequirements": {
436
- "variant": "CHAT"
437
- }
438
- }
439
- ```
440
-
441
- ### Model Requirements
442
-
443
- Abstract way to specify the LLM.
444
- It does not specify the LLM with concrete version itself, only the requirements for the LLM.
445
- _NOT chatgpt-3.5-turbo BUT CHAT variant of GPT-3.5._
446
-
447
- For example:
448
-
449
- ```json
450
- {
451
- "variant": "CHAT",
452
- "version": "GPT-3.5",
453
- "temperature": 0.7
454
- }
455
- ```
456
-
457
- ### Block type
458
-
459
- Each block of promptbook can have a different execution type.
460
- It is specified in list of requirements for the block.
461
- By default, it is `Prompt template`
462
-
463
- - _(default)_ `Prompt template` The block is a prompt template and is executed by LLM (OpenAI, Azure,...)
464
- - `SIMPLE TEMPLATE` The block is a simple text template which is just filled with parameters
465
- - `Script` The block is a script that is executed by some script runtime, the runtime is determined by block type, currently only `javascript` is supported but we plan to add `python` and `typescript` in the future.
466
- - `PROMPT DIALOG` Ask user for input
467
-
468
- ### Parameters
469
-
470
- Parameters that are placed in the prompt template and replaced to create the prompt.
471
- It is a simple key-value object.
472
-
473
- ```json
474
- {
475
- "animalName": "cat",
476
- "animalSound": "Meow!"
477
- }
478
- ```
479
-
480
- There are three types of template parameters, depending on how they are used in the promptbook:
481
-
482
- - **INPUT PARAMETER**s are required to execute the promptbook.
483
- - **Intermediate parameters** are used internally in the promptbook.
484
- - **OUTPUT PARAMETER**s are explicitelly marked and they are returned as the result of the promptbook execution.
485
-
486
- _Note: Parameter can be both intermedite and output at the same time._
487
-
488
- ### Promptbook
489
-
490
- Promptbook is **core concept of this library**.
491
- It represents a series of prompt templates chained together to form a **pipeline** / one big prompt template with input and result parameters.
492
-
493
- Internally it can have multiple formats:
494
-
495
- - **.ptbk.md file** in custom markdown format described above
496
- - _(concept)_ **.ptbk** format, custom fileextension based on markdown
497
- - _(internal)_ **JSON** format, parsed from the .ptbk.md file
498
-
499
- ### Promptbook **Library**
500
-
501
- Library of all promptbooks used in your application.
502
- Each promptbook is a separate `.ptbk.md` file with unique `PIPELINE URL`. Theese urls are used to reference promptbooks in other promptbooks or in the application code.
503
-
504
- ### Prompt Result
505
-
506
- Prompt result is the simplest concept of execution.
507
- It is the result of executing one prompt _(NOT a template)_.
508
-
509
- For example:
510
-
511
- ```json
512
- {
513
- "response": "Meow!",
514
- "model": "chatgpt-3.5-turbo"
515
- }
516
- ```
517
-
518
- ### Execution Tools
519
-
520
-
521
-
522
- `ExecutionTools` is an interface which contains all the tools needed to execute prompts.
523
- It contais 3 subtools:
524
-
525
- - `LlmExecutionTools`
526
- - `ScriptExecutionTools`
527
- - `UserInterfaceTools`
528
-
529
- Which are described below:
411
+ The following glossary is used to clarify certain concepts:
530
412
 
531
- #### LLM Execution Tools
532
413
 
533
- `LlmExecutionTools` is a container for all the tools needed to execute prompts to large language models like GPT-4.
534
- On its interface it exposes common methods for prompt execution.
535
- Internally it calls OpenAI, Azure, GPU, proxy, cache, logging,...
536
414
 
537
- `LlmExecutionTools` an abstract interface that is implemented by concrete execution tools:
415
+ ### Core concepts
538
416
 
539
- - `OpenAiExecutionTools`
540
- - `AnthropicClaudeExecutionTools`
541
- - `AzureOpenAiExecutionTools`
542
- - `LangtailExecutionTools`
543
- - _(Not implemented yet)_ `BardExecutionTools`
544
- - _(Not implemented yet)_ `LamaExecutionTools`
545
- - _(Not implemented yet)_ `GpuExecutionTools`
546
- - Special case are `RemoteLlmExecutionTools` that connect to a remote server and run one of the above execution tools on that server.
547
- - Another special case is `MockedEchoLlmExecutionTools` that is used for testing and mocking.
548
- - The another special case is `LogLlmExecutionToolsWrapper` that is technically also an execution tools but it is more proxy wrapper around other execution tools that logs all calls to execution tools.
417
+ - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
418
+ - [📯 Pipeline](https://github.com/webgptorg/promptbook/discussions/64)
419
+ - [🎺 Pipeline templates](https://github.com/webgptorg/promptbook/discussions/88)
420
+ - [🤼 Personas](https://github.com/webgptorg/promptbook/discussions/22)
421
+ - [⭕ Parameters](https://github.com/webgptorg/promptbook/discussions/83)
422
+ - [🚀 Pipeline execution](https://github.com/webgptorg/promptbook/discussions/84)
423
+ - [🧪 Expectations](https://github.com/webgptorg/promptbook/discussions/30)
424
+ - [✂️ Postprocessing](https://github.com/webgptorg/promptbook/discussions/31)
425
+ - [🔣 Words not tokens](https://github.com/webgptorg/promptbook/discussions/29)
426
+ - [☯ Separation of concerns](https://github.com/webgptorg/promptbook/discussions/32)
549
427
 
550
- #### Script Execution Tools
428
+ ### Advanced concepts
551
429
 
552
- `ScriptExecutionTools` is an abstract container that represents all the tools needed to EXECUTE SCRIPTs. It is implemented by concrete execution tools:
430
+ - [📚 Knowledge (Retrieval-augmented generation)](https://github.com/webgptorg/promptbook/discussions/41)
431
+ - [🌏 Remote server](https://github.com/webgptorg/promptbook/discussions/89)
432
+ - [🃏 Jokers (conditions)](https://github.com/webgptorg/promptbook/discussions/66)
433
+ - [🔳 Metaprompting](https://github.com/webgptorg/promptbook/discussions/35)
434
+ - [🌏 Linguistically typed languages](https://github.com/webgptorg/promptbook/discussions/53)
435
+ - [🌍 Auto-Translations](https://github.com/webgptorg/promptbook/discussions/42)
436
+ - [📽 Images, audio, video, spreadsheets](https://github.com/webgptorg/promptbook/discussions/54)
437
+ - [🔙 Expectation-aware generation](https://github.com/webgptorg/promptbook/discussions/37)
438
+ - [⏳ Just-in-time fine-tuning](https://github.com/webgptorg/promptbook/discussions/33)
439
+ - [🔴 Anomaly detection](https://github.com/webgptorg/promptbook/discussions/40)
440
+ - [👮 Agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39)
441
+ - [view more](https://github.com/webgptorg/promptbook/discussions/categories/concepts)
553
442
 
554
- - `JavascriptExecutionTools` is a wrapper around `vm2` module that executes javascript code in a sandbox.
555
- - `JavascriptEvalExecutionTools` is wrapper around `eval` function that executes javascript. It is used for testing and mocking **NOT intended to use in the production** due to its unsafe nature, use `JavascriptExecutionTools` instead.
556
- - _(Not implemented yet)_ `TypescriptExecutionTools` executes typescript code in a sandbox.
557
- - _(Not implemented yet)_ `PythonExecutionTools` executes python code in a sandbox.
558
-
559
- There are [postprocessing functions](#postprocessing-functions) that can be used to postprocess the result.
560
-
561
- #### User Interface Tools
562
-
563
- `UserInterfaceTools` is an abstract container that represents all the tools needed to interact with the user. It is implemented by concrete execution tools:
564
-
565
- - _(Not implemented yet)_ `ConsoleInterfaceTools` is a wrapper around `readline` module that interacts with the user via console.
566
- - `SimplePromptInterfaceTools` is a wrapper around `window.prompt` synchronous function that interacts with the user via browser prompt. It is used for testing and mocking **NOT intended to use in the production** due to its synchronous nature.
567
- - `CallbackInterfaceTools` delagates the user interaction to a async callback function. You need to provide your own implementation of this callback function and its bind to UI.
568
-
569
- ### Executor
570
-
571
- Executor is a simple async function that takes **input parameters** and returns **output parameters**.
572
- It is constructed by combining execution tools and promptbook to execute together.
573
-
574
- ### 🃏 Jokers (conditions)
575
-
576
- Joker is a previously defined parameter that is used to bypass some parts of the pipeline.
577
- If the joker is present in the template, it is checked to see if it meets the requirements (without postprocessing), and if so, it is used instead of executing that prompt template. There can be multiple wildcards in a prompt template, if so they are checked in order and the first one that meets the requirements is used.
578
-
579
- If none of the jokers meet the requirements, the prompt template is executed as usual.
580
-
581
- This can be useful, for example, if you want to use some predefined data, or if you want to use some data from the user, but you are not sure if it is suitable form.
582
-
583
- When using wildcards, you must have at least one minimum expectation. If you do not have a minimum expectation, the joker will always fulfil the expectation because it has none, so it makes no logical sense.
584
-
585
- Look at [jokers.ptbk.md](samples/templates/41-jokers.ptbk.md) sample.
586
-
587
- ### Postprocessing functions
588
-
589
- You can define postprocessing functions when creating `JavascriptEvalExecutionTools`:
590
-
591
- ```
592
-
593
- ```
594
-
595
- Additionally there are some usefull string-manipulation build-in functions, which are [listed here](src/scripting/javascript/JavascriptEvalExecutionTools.ts).
596
-
597
- ### Expectations
598
-
599
- `Expect` command describes the desired output of the prompt template (after post-processing)
600
- It can set limits for the maximum/minimum length of the output, measured in characters, words, sentences, paragraphs,...
601
-
602
- _Note: LLMs work with tokens, not characters, but in Promptbooks we want to use some human-recognisable and cross-model interoperable units._
603
-
604
- ```markdown
605
- # ✨ Sample: Expectations
606
-
607
- - INPUT  PARAMETER {yourName} Name of the hero
608
-
609
- ## 💬 Question
610
-
611
- - EXPECT MAX 30 CHARACTERS
612
- - EXPECT MIN 2 CHARACTERS
613
- - EXPECT MAX 3 WORDS
614
- - EXPECT EXACTLY 1 SENTENCE
615
- - EXPECT EXACTLY 1 LINE
616
-
617
- ...
618
- ```
619
-
620
- There are two types of expectations which are not strictly symmetrical:
621
-
622
- #### Minimal expectations
623
-
624
- - `EXPECT MIN 0 ...` is not valid minimal expectation. It makes no sense.
625
- - `EXPECT JSON` is both minimal and maximal expectation
626
- - When you are using `JOKER` in same prompt template, you need to have at least one minimal expectation
627
-
628
- #### Maximal expectations
629
-
630
- - `EXPECT MAX 0 ...` is valid maximal expectation. For example, you can expect 0 pages and 2 sentences.
631
- - `EXPECT JSON` is both minimal and maximal expectation
632
-
633
- Look at [expectations.ptbk.md](samples/templates/45-expectations.ptbk.md) and [expect-json.ptbk.md](samples/templates/45-expect-json.ptbk.md) samples for more.
634
-
635
- ### Execution report
636
-
637
- Execution report is a simple object or markdown that contains information about the execution of the pipeline.
638
-
639
- [See the example of such a report](/samples/templates/50-advanced.report.md)
640
-
641
-
642
-
643
-
644
-
645
- ### Remote server
646
-
647
- Remote server is a proxy server that uses its execution tools internally and exposes the executor interface externally.
648
-
649
- You can simply use `RemoteExecutionTools` on client-side javascript and connect to your remote server.
650
- This is useful to make all logic on browser side but not expose your API keys or no need to use customer's GPU.
651
-
652
- ## 👨‍💻 Usage and integration _(for developers)_
653
-
654
-
655
-
656
- ### 🔌 Usage in Typescript / Javascript
443
+ ## 🔌 Usage in TypeScript / JavaScript
657
444
 
658
445
  - [Simple usage](./samples/usage/simple-script)
659
446
  - [Usage with client and remote server](./samples/usage/remote)
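The glossary removed above described expectations measured in human-recognisable units (characters, words, lines) rather than tokens, with `EXPECT MIN` / `EXPECT MAX` limits. A minimal sketch of how such a check can be performed — an illustration of the concept, not the library's implementation:

```javascript
// Count output in the units the README glossary uses.
function countUnits(text) {
    return {
        characters: text.length,
        words: text.split(/\s+/).filter(Boolean).length,
        lines: text.split('\n').length,
    };
}

// Check counts against EXPECT MIN / EXPECT MAX style limits.
function checkExpectations(text, expectations) {
    const counts = countUnits(text);
    for (const [unit, { min, max }] of Object.entries(expectations)) {
        if (min !== undefined && counts[unit] < min) return false;
        if (max !== undefined && counts[unit] > max) return false;
    }
    return true;
}

checkExpectations('Meow!', { characters: { min: 2, max: 30 }, words: { max: 3 } });
// → true
```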
@@ -676,14 +463,19 @@ This is useful to make all logic on browser side but not expose your API keys or
676
463
 
677
464
  ## 🐜 Known issues
678
465
 
679
-
466
+ - [🤸‍♂️ Iterations not working yet](https://github.com/webgptorg/promptbook/discussions/55)
467
+ - [⤵️ Imports not working yet](https://github.com/webgptorg/promptbook/discussions/34)
680
468
 
681
469
  ## 🧼 Intentionally not implemented features
682
470
 
683
471
 
472
+ - [➿ No recursion](https://github.com/webgptorg/promptbook/discussions/38)
473
+ - [🏳 There are no types, just strings](https://github.com/webgptorg/promptbook/discussions/52)
684
474
 
685
475
  ## ❔ FAQ
686
476
 
477
+
478
+
687
479
  If you have a question [start a discussion](https://github.com/webgptorg/promptbook/discussions/), [open an issue](https://github.com/webgptorg/promptbook/issues) or [write me an email](https://www.pavolhejny.com/contact).
688
480
 
689
481
  ### Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?
package/esm/index.es.js CHANGED
@@ -18,7 +18,7 @@ import glob from 'glob-promise';
18
18
  /**
19
19
  * The version of the Promptbook library
20
20
  */
21
- var PROMPTBOOK_VERSION = '0.63.0-7';
21
+ var PROMPTBOOK_VERSION = '0.63.0-10';
22
22
  // TODO: !!!! List here all the versions and annotate + put into script
23
23
 
24
24
  /*! *****************************************************************************
@@ -514,7 +514,7 @@ function pipelineJsonToString(pipelineJson) {
514
514
  var contentLanguage = 'text';
515
515
  if (blockType === 'PROMPT_TEMPLATE') {
516
516
  var modelRequirements = promptTemplate.modelRequirements;
517
- var modelName = modelRequirements.modelName, modelVariant = modelRequirements.modelVariant;
517
+ var _l = modelRequirements || {}, modelName = _l.modelName, modelVariant = _l.modelVariant;
518
518
  commands_1.push("EXECUTE PROMPT TEMPLATE");
519
519
  if (modelVariant) {
520
520
  commands_1.push("MODEL VARIANT ".concat(capitalize(modelVariant)));
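The hunk above makes the read of `modelRequirements` tolerant of `undefined` by falling back to an empty object. A standalone sketch of the before/after behavior (not the library code itself):

```javascript
// Guarded read of optional model requirements, mirroring the change above.
function readModelRequirements(promptTemplate) {
    // Before the fix, `promptTemplate.modelRequirements.modelName` threw a
    // TypeError when `modelRequirements` was undefined; falling back to an
    // empty object makes both fields simply come out undefined instead.
    const { modelName, modelVariant } = promptTemplate.modelRequirements || {};
    return { modelName, modelVariant };
}

readModelRequirements({ modelRequirements: { modelVariant: 'CHAT', modelName: 'gpt-4' } });
// → { modelName: 'gpt-4', modelVariant: 'CHAT' }
readModelRequirements({});
// → { modelName: undefined, modelVariant: undefined }
```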
@@ -572,8 +572,8 @@ function pipelineJsonToString(pipelineJson) {
572
572
  } /* not else */
573
573
  if (expectations) {
574
574
  try {
575
- for (var _l = (e_6 = void 0, __values(Object.entries(expectations))), _m = _l.next(); !_m.done; _m = _l.next()) {
576
- var _o = __read(_m.value, 2), unit = _o[0], _p = _o[1], min = _p.min, max = _p.max;
575
+ for (var _m = (e_6 = void 0, __values(Object.entries(expectations))), _o = _m.next(); !_o.done; _o = _m.next()) {
576
+ var _p = __read(_o.value, 2), unit = _p[0], _q = _p[1], min = _q.min, max = _q.max;
577
577
  if (min === max) {
578
578
  commands_1.push("EXPECT EXACTLY ".concat(min, " ").concat(capitalize(unit + (min > 1 ? 's' : ''))));
579
579
  }
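The renamed loop above serializes `expectations` back into `EXPECT …` commands, pluralizing the unit when the count exceeds 1. A standalone sketch of that serialization rule — the `EXPECT EXACTLY` branch is visible in the hunk, while the `EXPECT MIN` / `EXPECT MAX` branches and the singular unit names are assumptions based on the README glossary:

```javascript
// Capitalize as the bundle does: first letter upper, rest lower.
function capitalize(word) {
    return word.charAt(0).toUpperCase() + word.slice(1).toLowerCase();
}

// Turn one expectation entry into EXPECT command(s), pluralizing the unit.
function expectationToCommands(unit, { min, max }) {
    if (min === max) {
        return [`EXPECT EXACTLY ${min} ${capitalize(unit + (min > 1 ? 's' : ''))}`];
    }
    const commands = [];
    if (min !== undefined) {
        commands.push(`EXPECT MIN ${min} ${capitalize(unit + (min > 1 ? 's' : ''))}`);
    }
    if (max !== undefined) {
        commands.push(`EXPECT MAX ${max} ${capitalize(unit + (max > 1 ? 's' : ''))}`);
    }
    return commands;
}

expectationToCommands('word', { min: 1, max: 8 });
// → ['EXPECT MIN 1 Word', 'EXPECT MAX 8 Words']
```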
@@ -590,7 +590,7 @@ function pipelineJsonToString(pipelineJson) {
590
590
  catch (e_6_1) { e_6 = { error: e_6_1 }; }
591
591
  finally {
592
592
  try {
593
- if (_m && !_m.done && (_f = _l.return)) _f.call(_l);
593
+ if (_o && !_o.done && (_f = _m.return)) _f.call(_m);
594
594
  }
595
595
  finally { if (e_6) throw e_6.error; }
596
596
  }
@@ -839,7 +839,7 @@ function forEachAsync(array, options, callbackfunction) {
839
839
  });
840
840
  }
841
841
 
842
- var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.63.0-7",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",dependentParameterNames:["knowledgeContent"],resultingParameterName:"knowledgePieces"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-7",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.63.0-7",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by 
comma",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"keywords"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-7",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.63.0-7",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> 
{knowledgePieceContent}",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"title"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-7",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.63.0-7",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",modelRequirements:{modelVariant:"CHAT",modelName:"gpt-4-turbo"},content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n### Option `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Option `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. 
The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Option `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",expectFormat:"JSON",dependentParameterNames:["availableModelNames","personaDescription"],resultingParameterName:"modelRequirements"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-7",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
842
+ var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.63.0-10",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",dependentParameterNames:["knowledgeContent"],resultingParameterName:"knowledgePieces"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-10",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.63.0-10",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by 
comma",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"keywords"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-10",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.63.0-10",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",modelRequirements:{modelVariant:"CHAT",modelName:"claude-3-opus-20240229"},content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> 
{knowledgePieceContent}",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"],resultingParameterName:"title"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-10",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.63.0-10",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],promptTemplates:[{blockType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",modelRequirements:{modelVariant:"CHAT",modelName:"gpt-4-turbo"},content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n### Option `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Option `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. 
The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Option `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",expectFormat:"JSON",dependentParameterNames:["availableModelNames","personaDescription"],resultingParameterName:"modelRequirements"}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[{id:1,promptbookVersion:"0.63.0-10",usage:{price:{value:0},input:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}},output:{tokensCount:{value:0},charactersCount:{value:0},wordsCount:{value:0},sentencesCount:{value:0},linesCount:{value:0},paragraphsCount:{value:0},pagesCount:{value:0}}}}],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
 
  /**
   * This error indicates that the promptbook in a markdown format cannot be parsed into a valid promptbook object
@@ -1137,9 +1137,6 @@ function validatePipeline(pipeline) {
      throw new PipelineLogicError(spaceTrim$1(function (block) { return "\n Parameter name {".concat(template.resultingParameterName, "} is reserved, please use different name\n\n ").concat(block(pipelineIdentification), "\n "); }));
  }
  definedParameters.add(template.resultingParameterName);
- if (template.blockType === 'PROMPT_TEMPLATE' && template.modelRequirements.modelVariant === undefined) {
-     throw new PipelineLogicError(spaceTrim$1(function (block) { return "\n\n You must specify MODEL VARIANT in the prompt template \"".concat(template.title, "\"\n\n For example:\n - MODEL VARIANT Chat\n - MODEL NAME `gpt-4-1106-preview`").concat(/* <- TODO: Dynamic listing of command examples */ '', "\n\n ").concat(block(pipelineIdentification), "\n "); }));
- }
  if (template.jokerParameterNames && template.jokerParameterNames.length > 0) {
      if (!template.expectFormat &&
          !template.expectations /* <- TODO: Require at least 1 -> min <- expectation to use jokers */) {
@@ -4299,7 +4296,7 @@ var blockCommandParser = {
   * Units of text measurement
   *
   * @see https://github.com/webgptorg/promptbook/discussions/30
-  * @private internal base for `ExpectationUnit`
+  * @public exported from `@promptbook/core`
   */
  var EXPECTATION_UNITS = ['CHARACTERS', 'WORDS', 'SENTENCES', 'LINES', 'PARAGRAPHS', 'PAGES'];
  /**
@@ -4539,7 +4536,7 @@ var jokerCommandParser = {
  /**
   * @@@
   *
-  * @private internal base for `ModelVariant` and `modelCommandParser`
+  * @public exported from `@promptbook/core`
   */
  var MODEL_VARIANTS = ['COMPLETION', 'CHAT', 'EMBEDDING' /* <- TODO [🏳] */ /* <- [🤖] */];
 
@@ -7616,6 +7613,7 @@ function createLlmToolsFromEnv(options) {
   * TODO: [🧠] Maybe pass env as argument
   * Note: [🟢] This code should never be published outside of `@promptbook/node` and `@promptbook/cli` and `@promptbook/cli`
   * TODO: [👷‍♂️] @@@ Manual about construction of llmTools
+  * TODO: [🥃] Allow `ptbk make` without llm tools
   */
 
  /**
@@ -7811,6 +7809,7 @@ function getLlmToolsForCli(options) {
  /**
   * Note: [🟡] This code should never be published outside of `@promptbook/cli`
   * TODO: [👷‍♂️] @@@ Manual about construction of llmTools
+  * TODO: [🥃] Allow `ptbk make` without llm tools
   */
 
  /**
@@ -7997,6 +7996,7 @@ function initializeMakeCommand(program) {
      });
  }
  /**
+  * TODO: [🥃] !!! Allow `ptbk make` without llm tools
   * TODO: Maybe remove this command - "about" command should be enough?
   * TODO: [0] DRY Javascript and typescript - Maybe make ONLY typescript and for javascript just remove types
   * Note: [🟡] This code should never be published outside of `@promptbook/cli`