@promptbook/core 0.69.0-5 → 0.69.0-7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +17 -57
- package/esm/index.es.js +115 -102
- package/esm/index.es.js.map +1 -1
- package/package.json +1 -1
- package/umd/index.umd.js +115 -102
- package/umd/index.umd.js.map +1 -1
package/README.md
CHANGED
@@ -44,13 +44,15 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 ## 🤍 The Promptbook Whitepaper
 
+
+
 If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
 
 But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝐞𝐢rd responses. When this happens, you generally have three options:
 
 1. **Fine-tune** the model to your specifications or even train your own.
 2. **Prompt-engineer** the prompt to the best shape you can achieve.
-3.
+3. Orchestrate **multiple prompts** in a [pipeline](https://github.com/webgptorg/promptbook/discussions/64) to get the best result.
 
 In all of these situations, but especially in 3., the Promptbook library can make your life easier.
 
@@ -62,7 +64,9 @@ In all of these situations, but especially in 3., the Promptbook library can mak
 - Promptbook has built-in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
 - Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
 
-
+
+
+## 🧔 Pipeline _(for prompt-engineers)_
 
 **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is a document that describes a **pipeline** - a series of prompts that are chained together to form something of a recipe for transforming natural language input.
 
@@ -379,6 +383,8 @@ The following glossary is used to clarify certain concepts:
 - When you want to **version** your prompts and **test multiple versions**
 - When you want to **log** the execution of prompts and backtrace the issues
 
+[See more](https://github.com/webgptorg/promptbook/discussions/111)
+
 ### ❌ When not to use
 
 - When you have already implemented a single simple prompt and it works fine for your job
@@ -388,6 +394,8 @@ The following glossary is used to clarify certain concepts:
 - When your main focus is on something other than text - like images, audio, video, spreadsheets _(other media types may be added in the future, [see discussion](https://github.com/webgptorg/promptbook/discussions/103))_
 - When you need to use recursion _([see the discussion](https://github.com/webgptorg/promptbook/discussions/38))_
 
+[See more](https://github.com/webgptorg/promptbook/discussions/112)
+
 ## 🐜 Known issues
 
 - [🤸‍♂️ Iterations not working yet](https://github.com/webgptorg/promptbook/discussions/55)
@@ -400,63 +408,15 @@ The following glossary is used to clarify certain concepts:
 
 ## ❔ FAQ
 
-
-
 If you have a question [start a discussion](https://github.com/webgptorg/promptbook/discussions/), [open an issue](https://github.com/webgptorg/promptbook/issues) or [write me an email](https://www.pavolhejny.com/contact).
 
-
-
-
-
-
-
-
-
-We are considering creating a bridge/converter between these two libraries.
-
-
-
-### Promptbooks vs. OpenAI's GPTs
-
-GPTs are chat assistants that can be assigned to specific tasks and materials. But they are still chat assistants. Promptbooks are a way to orchestrate many more predefined tasks to have much tighter control over the process. Promptbooks are not a good technology for creating human-like chatbots, GPTs are not a good technology for creating outputs with specific requirements.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-### Where should I store my promptbooks?
-
-If you use raw SDKs, you just put prompts in the source code, mixed in with TypeScript, JavaScript, Python or whatever programming language you use.
-
-If you use promptbooks, you can store them in several places, each with its own advantages and disadvantages:
-
-1. As **source code**, typically git-committed. In this case you can use the versioning system and the promptbooks will be tightly coupled with the version of the application. You still get the power of promptbooks, as you separate the concerns of the prompt-engineer and the programmer.
-
-2. As data in a **database**. In this case, promptbooks are like posts / articles on the blog. They can be modified independently of the application. You don't need to redeploy the application to change the promptbooks. You can have multiple versions of promptbooks for each user. You can have a web interface for non-programmers to create and modify promptbooks. But you lose the versioning system and you still have to consider the interface between the promptbooks and the application _(= input and output parameters)_.
-
-3. In a **configuration** in environment variables. This is a good way to store promptbooks if you have an application with multiple deployments and you want to have different but simple promptbooks for each deployment and you don't need to change them often.
-
-### What should I do when I need the same promptbook in multiple human languages?
-
-A single promptbook can be written for several _(human)_ languages at once. However, we recommend that you have separate promptbooks for each language.
-
-In large language models, you will get better results if you have prompts in the same language as the user input.
-
-The best way to manage this is to have suffixed promptbooks like `write-website-content.en.ptbk.md` and `write-website-content.cs.ptbk.md` for each supported language.
-
-
-
-
+- [❔ Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?](https://github.com/webgptorg/promptbook/discussions/114)
+- [❔ How is it different from the OpenAI's GPTs?](https://github.com/webgptorg/promptbook/discussions/118)
+- [❔ How is it different from the Langchain?](https://github.com/webgptorg/promptbook/discussions/115)
+- [❔ How is it different from the DSPy?](https://github.com/webgptorg/promptbook/discussions/117)
+- [❔ How is it different from _anything_?](https://github.com/webgptorg/promptbook/discussions?discussions_q=is%3Aopen+label%3A%22Promptbook+vs%22)
+- [❔ Is Promptbook using RAG _(Retrieval-Augmented Generation)_?](https://github.com/webgptorg/promptbook/discussions/123)
+- [❔ Is Promptbook using function calling?](https://github.com/webgptorg/promptbook/discussions/124)
 
 ## ⌚ Changelog
 
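The README above recommends orchestrating **multiple prompts** in a pipeline, where each step's result is stored under a parameter name and fed into later prompt templates. A minimal sketch of that idea in plain JavaScript (this is not the Promptbook API; `runPipeline`, `replaceParams`, and `callModel` are hypothetical names):

```javascript
// Substitute {parameterName} placeholders in a prompt template.
function replaceParams(template, parameters) {
    return template.replace(/\{(\w+)\}/g, (_, name) => {
        if (parameters[name] === undefined) {
            throw new Error(`Parameter {${name}} is not defined`);
        }
        return parameters[name];
    });
}

// Run templates in order; each step's output becomes a parameter
// that later templates can reference.
async function runPipeline(templates, inputParameters, callModel) {
    const parameters = { ...inputParameters };
    for (const { name, content } of templates) {
        const prompt = replaceParams(content, parameters);
        parameters[name] = await callModel(prompt);
    }
    return parameters;
}
```

The chaining, not any particular model call, is the point: swapping `callModel` for a real LLM client leaves the pipeline logic unchanged.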
package/esm/index.es.js
CHANGED
@@ -10,7 +10,7 @@ import moment from 'moment';
 /**
  * The version of the Promptbook library
  */
-var PROMPTBOOK_VERSION = '0.69.0-
+var PROMPTBOOK_VERSION = '0.69.0-6';
 // TODO: !!!! List here all the versions and annotate + put into script
 
 /*! *****************************************************************************
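The diff header covers 0.69.0-5 → 0.69.0-7 while the bundled constant in this hunk is `'0.69.0-6'`; all three are numeric prereleases of 0.69.0, which semver orders numerically, each sorting before the plain release. A small sketch of that ordering for same-release, numeric-only prerelease tags (`comparePrerelease` is a hypothetical helper, not part of the package; real semver handles dotted and alphanumeric identifiers too):

```javascript
// Compare two versions that share the same release part, where the
// prerelease tag (if any) is a single number, e.g. '0.69.0-5'.
// Returns -1, 0, or 1; a version without a prerelease tag is newest.
function comparePrerelease(a, b) {
    const parse = (v) => {
        const [release, pre] = v.split('-');
        return { release, pre: pre === undefined ? null : Number(pre) };
    };
    const pa = parse(a);
    const pb = parse(b);
    if (pa.release !== pb.release) {
        throw new Error('Sketch only compares prereleases of the same release');
    }
    if (pa.pre === null && pb.pre === null) return 0;
    if (pa.pre === null) return 1;  // plain release outranks its prereleases
    if (pb.pre === null) return -1;
    return Math.sign(pa.pre - pb.pre);
}
```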
@@ -1817,7 +1817,7 @@ function forEachAsync(array, options, callbackfunction) {
 });
 }
 
-var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.69.0-
+
var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.69.0-6",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.69.0-6",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> 
{knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.69.0-6",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.69.0-6",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## 
Instructions\n\n- Your output format is JSON object\n- Write just the JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. 
For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
 
 var defaultDiacriticsRemovalMap = [
 {
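The "Prepare Title" pipeline in the collection above declares `expectations:{words:{min:1,max:8}}` on its result. A minimal sketch of how such a word-count expectation could be enforced on a result string (the library's real `checkExpectations` may behave differently; `checkWordExpectations` is a hypothetical name):

```javascript
// Throw when the result string violates a { words: { min, max } }
// expectation; otherwise return the string unchanged.
function checkWordExpectations(expectations, resultString) {
    const words = resultString.trim().split(/\s+/).filter(Boolean);
    const { min = 0, max = Infinity } = expectations.words || {};
    if (words.length < min || words.length > max) {
        throw new Error(
            `Expected between ${min} and ${max} words, got ${words.length}`,
        );
    }
    return resultString;
}
```

Failing an expectation is what triggers another attempt in the execution loop further down in this diff.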
@@ -3415,7 +3415,7 @@ function getReservedParametersForTemplate(options) {
 */
 function executeTemplate(options) {
 return __awaiter(this, void 0, void 0, function () {
-var currentTemplate, preparedPipeline, parametersToPass, tools, llmTools, onProgress, settings, $executionReport, pipelineIdentification, maxExecutionAttempts, name, title, priority, usedParameterNames, dependentParameterNames, definedParameters, _a, _b, _c, definedParameterNames, parameters, _loop_1, _d, _e, parameterName,
+var currentTemplate, preparedPipeline, parametersToPass, tools, llmTools, onProgress, settings, $executionReport, pipelineIdentification, maxExecutionAttempts, name, title, priority, usedParameterNames, dependentParameterNames, definedParameters, _a, _b, _c, definedParameterNames, parameters, _loop_1, _d, _e, parameterName, $ongoingResult, maxAttempts, jokerParameterNames, preparedContent, _loop_2, attempt, state_1;
 var e_1, _f, _g;
 return __generator(this, function (_h) {
 switch (_h.label) {
@@ -3485,21 +3485,24 @@ function executeTemplate(options) {
 }
 finally { if (e_1) throw e_1.error; }
 }
-// Note: Now we can freeze `parameters` because we are sure that all and only used parameters are defined
+// Note: Now we can freeze `parameters` because we are sure that all and only used parameters are defined and are not going to be changed
 Object.freeze(parameters);
-
-
-
+$ongoingResult = {
+$result: null,
+$resultString: null,
+$expectError: null,
+$scriptPipelineExecutionErrors: [],
+};
 maxAttempts = currentTemplate.templateType === 'DIALOG_TEMPLATE' ? Infinity : maxExecutionAttempts;
 jokerParameterNames = currentTemplate.jokerParameterNames || [];
 preparedContent = (currentTemplate.preparedContent || '{content}')
 .split('{content}')
 .join(currentTemplate.content);
 _loop_2 = function (attempt) {
-var isJokerAttempt, jokerParameterName, _j, modelRequirements, _k, _l, _m, scriptTools, error_1, e_2_1,
-var e_2,
-return __generator(this, function (
-switch (
+var isJokerAttempt, jokerParameterName, _j, modelRequirements, _k, _l, _m, _o, _p, _q, scriptTools, _r, error_1, e_2_1, _s, _t, _u, functionName, postprocessingError, _v, _w, scriptTools, _x, error_2, e_3_1, e_4_1, error_3;
+var e_2, _y, e_4, _z, e_3, _0;
+return __generator(this, function (_1) {
+switch (_1.label) {
 case 0:
 isJokerAttempt = attempt < 0;
 jokerParameterName = jokerParameterNames[jokerParameterNames.length + attempt];
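The hunk above consolidates the loose per-attempt locals (`result`, `resultString`, `expectError`, `scriptPipelineExecutionErrors`) into a single mutable `$ongoingResult` object that survives across retry attempts up to `maxAttempts`. A simplified sketch of the retry pattern the compiled generator code implements (this is not the library's code; everything except the `$ongoingResult` naming convention and `ExpectError` is assumed):

```javascript
class ExpectError extends Error {}

// Produce a result, check it, and retry while the check throws
// ExpectError; the last expectation failure is kept on the shared
// $ongoingResult object (e.g. for an execution report).
async function executeWithAttempts(produce, check, maxAttempts) {
    const $ongoingResult = {
        $resultString: null,
        $expectError: null,
    };
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
        $ongoingResult.$resultString = null;
        $ongoingResult.$expectError = null;
        try {
            $ongoingResult.$resultString = await produce(attempt);
            check($ongoingResult.$resultString); // throws ExpectError on failure
            return $ongoingResult.$resultString;
        } catch (error) {
            if (!(error instanceof ExpectError)) {
                throw error; // unexpected errors are not retried
            }
            $ongoingResult.$expectError = error;
        }
    }
    throw $ongoingResult.$expectError;
}
```

Keeping one shared object instead of many locals is what lets the TypeScript-to-ES5 generator transform reference partial results from any `case` of the state machine.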
@@ -3507,21 +3510,21 @@ function executeTemplate(options) {
 if (isJokerAttempt && !jokerParameterName) {
 throw new UnexpectedError(spaceTrim$1(function (block) { return "\n Joker not found in attempt ".concat(attempt, "\n\n ").concat(block(pipelineIdentification), "\n "); }));
 }
-result = null;
-resultString = null;
-expectError = null;
+$ongoingResult.$result = null;
+$ongoingResult.$resultString = null;
+$ongoingResult.$expectError = null;
 if (isJokerAttempt) {
 if (parameters[jokerParameterName] === undefined) {
 throw new PipelineExecutionError(spaceTrim$1(function (block) { return "\n Joker parameter {".concat(jokerParameterName, "} not defined\n\n ").concat(block(pipelineIdentification), "\n "); }));
 // <- TODO: This is maybe `PipelineLogicError` which should be detected in `validatePipeline` and here just thrown as `UnexpectedError`
 }
 else {
-resultString = parameters[jokerParameterName];
+$ongoingResult.$resultString = parameters[jokerParameterName];
 }
 }
-
+_1.label = 1;
 case 1:
-
+_1.trys.push([1, 44, 45, 46]);
 if (!!isJokerAttempt) return [3 /*break*/, 26];
 _j = currentTemplate.templateType;
 switch (_j) {
@@ -3532,11 +3535,11 @@ function executeTemplate(options) {
 }
 return [3 /*break*/, 25];
 case 2:
-resultString = replaceParameters(preparedContent, parameters);
+$ongoingResult.$resultString = replaceParameters(preparedContent, parameters);
 return [3 /*break*/, 26];
 case 3:
 modelRequirements = __assign(__assign({ modelVariant: 'CHAT' }, (preparedPipeline.defaultModelRequirements || {})), (currentTemplate.modelRequirements || {}));
-prompt = {
+$ongoingResult.$prompt = {
 title: currentTemplate.title,
 pipelineUrl: "".concat(preparedPipeline.pipelineUrl
 ? preparedPipeline.pipelineUrl
@@ -3558,25 +3561,32 @@ function executeTemplate(options) {
 case 'EMBEDDING': return [3 /*break*/, 8];
 }
 return [3 /*break*/, 10];
-case 4:
+case 4:
+_l = $ongoingResult;
+return [4 /*yield*/, llmTools.callChatModel($deepFreeze($ongoingResult.$prompt))];
 case 5:
-chatResult =
+_l.$chatResult = _1.sent();
 // TODO: [🐬] Destroy chatThread
-result = chatResult;
-resultString = chatResult.content;
+$ongoingResult.$result = $ongoingResult.$chatResult;
+$ongoingResult.$resultString = $ongoingResult.$chatResult.content;
 return [3 /*break*/, 11];
-case 6:
+case 6:
+_m = $ongoingResult;
+return [4 /*yield*/, llmTools.callCompletionModel($deepFreeze($ongoingResult.$prompt))];
 case 7:
-completionResult =
-result = completionResult;
-resultString = completionResult.content;
+_m.$completionResult = _1.sent();
+$ongoingResult.$result = $ongoingResult.$completionResult;
+$ongoingResult.$resultString = $ongoingResult.$completionResult.content;
 return [3 /*break*/, 11];
-case 8:
+case 8:
+// TODO: [🧠] This is weird, embedding model can not be used such a way in the pipeline
+_o = $ongoingResult;
+return [4 /*yield*/, llmTools.callEmbeddingModel($deepFreeze($ongoingResult.$prompt))];
 case 9:
 // TODO: [🧠] This is weird, embedding model can not be used such a way in the pipeline
-embeddingResult =
-result = embeddingResult;
-resultString = embeddingResult.content.join(',');
+_o.$embeddingResult = _1.sent();
+$ongoingResult.$result = $ongoingResult.$embeddingResult;
+$ongoingResult.$resultString = $ongoingResult.$embeddingResult.content.join(',');
 return [3 /*break*/, 11];
 case 10: throw new PipelineExecutionError(spaceTrim$1(function (block) { return "\n Unknown model variant \"".concat(currentTemplate.modelRequirements.modelVariant, "\"\n\n ").concat(block(pipelineIdentification), "\n\n "); }));
 case 11: return [3 /*break*/, 26];
@@ -3587,60 +3597,59 @@ function executeTemplate(options) {
 if (!currentTemplate.contentLanguage) {
 throw new PipelineExecutionError(spaceTrim$1(function (block) { return "\n Script language is not defined for SCRIPT TEMPLATE \"".concat(currentTemplate.name, "\"\n\n ").concat(block(pipelineIdentification), "\n "); }));
 }
-
-scriptPipelineExecutionErrors = [];
-_v.label = 13;
+_1.label = 13;
 case 13:
-
-
-
+_1.trys.push([13, 20, 21, 22]);
+_p = (e_2 = void 0, __values(arrayableToArray(tools.script))), _q = _p.next();
+_1.label = 14;
 case 14:
-if (!!
-scriptTools =
-
+if (!!_q.done) return [3 /*break*/, 19];
+scriptTools = _q.value;
+_1.label = 15;
 case 15:
-
+_1.trys.push([15, 17, , 18]);
+_r = $ongoingResult;
 return [4 /*yield*/, scriptTools.execute($deepFreeze({
 scriptLanguage: currentTemplate.contentLanguage,
 script: preparedContent,
 parameters: parameters,
 }))];
 case 16:
-resultString =
+_r.$resultString = _1.sent();
 return [3 /*break*/, 19];
 case 17:
-error_1 =
+error_1 = _1.sent();
 if (!(error_1 instanceof Error)) {
 throw error_1;
 }
 if (error_1 instanceof UnexpectedError) {
 throw error_1;
 }
-scriptPipelineExecutionErrors.push(error_1);
+$ongoingResult.$scriptPipelineExecutionErrors.push(error_1);
 return [3 /*break*/, 18];
 case 18:
-
+_q = _p.next();
 return [3 /*break*/, 14];
 case 19: return [3 /*break*/, 22];
 case 20:
-e_2_1 =
+e_2_1 = _1.sent();
 e_2 = { error: e_2_1 };
 return [3 /*break*/, 22];
 case 21:
 try {
-if (
+if (_q && !_q.done && (_y = _p.return)) _y.call(_p);
 }
 finally { if (e_2) throw e_2.error; }
 return [7 /*endfinally*/];
 case 22:
-if (resultString !== null) {
+if ($ongoingResult.$resultString !== null) {
 return [3 /*break*/, 26];
 }
-if (scriptPipelineExecutionErrors.length === 1) {
-throw scriptPipelineExecutionErrors[0];
+if ($ongoingResult.$scriptPipelineExecutionErrors.length === 1) {
+throw $ongoingResult.$scriptPipelineExecutionErrors[0];
 }
 else {
-throw new PipelineExecutionError(spaceTrim$1(function (block) { return "\n Script execution failed ".concat(scriptPipelineExecutionErrors.length, "
+throw new PipelineExecutionError(spaceTrim$1(function (block) { return "\n Script execution failed ".concat($ongoingResult.$scriptPipelineExecutionErrors.length, "x\n\n ").concat(block(pipelineIdentification), "\n\n ").concat(block($ongoingResult.$scriptPipelineExecutionErrors
 .map(function (error) { return '- ' + error.message; })
 .join('\n\n')), "\n "); }));
 }
@@ -3648,6 +3657,8 @@ function executeTemplate(options) {
 if (tools.userInterface === undefined) {
 throw new PipelineExecutionError(spaceTrim$1(function (block) { return "\n User interface tools are not available\n\n ".concat(block(pipelineIdentification), "\n "); }));
 }
+// TODO: [πΉ] When making next attempt for `DIALOG TEMPLATE`, preserve the previous user input
+_s = $ongoingResult;
 return [4 /*yield*/, tools.userInterface.promptDialog($deepFreeze({
 promptTitle: currentTemplate.title,
 promptMessage: replaceParameters(currentTemplate.description || '', parameters),
@@ -3658,47 +3669,46 @@ function executeTemplate(options) {
 }))];
 case 24:
 // TODO: [πΉ] When making next attempt for `DIALOG TEMPLATE`, preserve the previous user input
-resultString =
+_s.$resultString = _1.sent();
 return [3 /*break*/, 26];
 case 25: throw new PipelineExecutionError(spaceTrim$1(function (block) { return "\n Unknown execution type \"".concat(currentTemplate.templateType, "\"\n\n ").concat(block(pipelineIdentification), "\n "); }));
 case 26:
 if (!(!isJokerAttempt && currentTemplate.postprocessingFunctionNames)) return [3 /*break*/, 43];
-
+_1.label = 27;
 case 27:
-
-
-
+_1.trys.push([27, 41, 42, 43]);
+_t = (e_4 = void 0, __values(currentTemplate.postprocessingFunctionNames)), _u = _t.next();
+_1.label = 28;
 case 28:
-if (!!
-functionName =
-// TODO: DRY [1]
-scriptPipelineExecutionErrors = [];
+if (!!_u.done) return [3 /*break*/, 40];
+functionName = _u.value;
 postprocessingError = null;
-
+_1.label = 29;
 case 29:
-
-
-
+_1.trys.push([29, 36, 37, 38]);
+_v = (e_3 = void 0, __values(arrayableToArray(tools.script))), _w = _v.next();
+_1.label = 30;
 case 30:
-if (!!
-scriptTools =
-
+if (!!_w.done) return [3 /*break*/, 35];
+scriptTools = _w.value;
+_1.label = 31;
 case 31:
-
+_1.trys.push([31, 33, , 34]);
+_x = $ongoingResult;
 return [4 /*yield*/, scriptTools.execute({
 scriptLanguage: "javascript" /* <- TODO: Try it in each languages; In future allow postprocessing with arbitrary combination of languages to combine */,
 script: "".concat(functionName, "(resultString)"),
 parameters: {
-resultString: resultString || '',
+resultString: $ongoingResult.$resultString || '',
 // Note: No ...parametersForTemplate, because working with result only
 },
 })];
 case 32:
-resultString =
+_x.$resultString = _1.sent();
 postprocessingError = null;
 return [3 /*break*/, 35];
 case 33:
-error_2 =
+error_2 = _1.sent();
 if (!(error_2 instanceof Error)) {
 throw error_2;
 }
@@ -3706,19 +3716,19 @@ function executeTemplate(options) {
 throw error_2;
 }
 postprocessingError = error_2;
-scriptPipelineExecutionErrors.push(error_2);
+$ongoingResult.$scriptPipelineExecutionErrors.push(error_2);
 return [3 /*break*/, 34];
 case 34:
-
+_w = _v.next();
 return [3 /*break*/, 30];
 case 35: return [3 /*break*/, 38];
 case 36:
-e_3_1 =
+e_3_1 = _1.sent();
 e_3 = { error: e_3_1 };
 return [3 /*break*/, 38];
 case 37:
 try {
-if (
+if (_w && !_w.done && (_0 = _v.return)) _0.call(_v);
 }
 finally { if (e_3) throw e_3.error; }
 return [7 /*endfinally*/];
@@ -3726,18 +3736,18 @@ function executeTemplate(options) {
 if (postprocessingError) {
 throw postprocessingError;
 }
-
+_1.label = 39;
 case 39:
-
+_u = _t.next();
 return [3 /*break*/, 28];
 case 40: return [3 /*break*/, 43];
 case 41:
-e_4_1 =
+e_4_1 = _1.sent();
 e_4 = { error: e_4_1 };
 return [3 /*break*/, 43];
 case 42:
 try {
-if (
+if (_u && !_u.done && (_z = _t.return)) _z.call(_t);
 }
 finally { if (e_4) throw e_4.error; }
 return [7 /*endfinally*/];
@@ -3745,10 +3755,10 @@ function executeTemplate(options) {
 // TODO: [π] Unite object for expecting amount and format
 if (currentTemplate.format) {
 if (currentTemplate.format === 'JSON') {
-if (!isValidJsonString(resultString || '')) {
+if (!isValidJsonString($ongoingResult.$resultString || '')) {
 // TODO: [π’] Do more universally via `FormatDefinition`
 try {
-resultString = extractJsonBlock(resultString || '');
+$ongoingResult.$resultString = extractJsonBlock($ongoingResult.$resultString || '');
 }
 catch (error) {
 keepUnused(error);
@@ -3763,45 +3773,48 @@ function executeTemplate(options) {
|
|
|
3763
3773
|
}
|
|
3764
3774
|
// TODO: [π] Unite object for expecting amount and format
|
|
3765
3775
|
if (currentTemplate.expectations) {
|
|
3766
|
-
checkExpectations(currentTemplate.expectations, resultString || '');
|
|
3776
|
+
checkExpectations(currentTemplate.expectations, $ongoingResult.$resultString || '');
|
|
3767
3777
|
}
|
|
3768
3778
|
return [2 /*return*/, "break-attempts"];
|
|
3769
3779
|
case 44:
|
|
3770
|
-
error_3 =
|
|
3780
|
+
error_3 = _1.sent();
|
|
3771
3781
|
if (!(error_3 instanceof ExpectError)) {
|
|
3772
3782
|
throw error_3;
|
|
3773
3783
|
}
|
|
3774
|
-
expectError = error_3;
|
|
3784
|
+
$ongoingResult.$expectError = error_3;
|
|
3775
3785
|
return [3 /*break*/, 46];
|
|
3776
3786
|
case 45:
|
|
3777
3787
|
if (!isJokerAttempt &&
|
|
3778
3788
|
currentTemplate.templateType === 'PROMPT_TEMPLATE' &&
|
|
3779
|
-
prompt
|
|
3789
|
+
$ongoingResult.$prompt
|
|
3780
3790
|
// <- Note: [2] When some expected parameter is not defined, error will occur in replaceParameters
|
|
3781
3791
|
// In that case we donβt want to make a report about it because itβs not a llm execution error
|
|
3782
3792
|
) {
|
|
3783
3793
|
// TODO: [π§ ] Maybe put other templateTypes into report
|
|
3784
3794
|
$executionReport.promptExecutions.push({
|
|
3785
|
-
prompt: __assign({}, prompt),
|
|
3786
|
-
result: result || undefined,
|
|
3787
|
-
error: expectError === null ? undefined : serializeError(expectError),
|
|
3795
|
+
prompt: __assign({}, $ongoingResult.$prompt),
|
|
3796
|
+
result: $ongoingResult.$result || undefined,
|
|
3797
|
+
error: $ongoingResult.$expectError === null ? undefined : serializeError($ongoingResult.$expectError),
|
|
3788
3798
|
});
|
|
3789
3799
|
}
|
|
3790
3800
|
return [7 /*endfinally*/];
|
|
3791
3801
|
case 46:
|
|
3792
|
-
if (expectError !== null && attempt === maxAttempts - 1) {
|
|
3793
|
-
throw new PipelineExecutionError(spaceTrim$1(function (block) {
|
|
3794
|
-
|
|
3795
|
-
.
|
|
3796
|
-
.join('\n')), "\n\n Last error ").concat((expectError === null || expectError === void 0 ? void 0 : expectError.name) || '', ":\n ").concat(block(((expectError === null || expectError === void 0 ? void 0 : expectError.message) || '')
|
|
3797
|
-
.split('\n')
|
|
3798
|
-
.map(function (line) { return "> ".concat(line); })
|
|
3799
|
-
.join('\n')), "\n\n Last result:\n ").concat(block(resultString === null
|
|
3800
|
-
? 'null'
|
|
3801
|
-
: resultString
|
|
3802
|
+
if ($ongoingResult.$expectError !== null && attempt === maxAttempts - 1) {
|
|
3803
|
+
throw new PipelineExecutionError(spaceTrim$1(function (block) {
|
|
3804
|
+
var _a, _b, _c;
|
|
3805
|
+
return "\n LLM execution failed ".concat(maxExecutionAttempts, "x\n\n ").concat(block(pipelineIdentification), "\n\n ---\n The Prompt:\n ").concat(block((((_a = $ongoingResult.$prompt) === null || _a === void 0 ? void 0 : _a.content) || '')
|
|
3802
3806
|
.split('\n')
|
|
3803
3807
|
.map(function (line) { return "> ".concat(line); })
|
|
3804
|
-
.join('\n')), "\n
|
|
3808
|
+
.join('\n')), "\n\n Last error ").concat(((_b = $ongoingResult.$expectError) === null || _b === void 0 ? void 0 : _b.name) || '', ":\n ").concat(block((((_c = $ongoingResult.$expectError) === null || _c === void 0 ? void 0 : _c.message) || '')
|
|
3809
|
+
.split('\n')
|
|
3810
|
+
.map(function (line) { return "> ".concat(line); })
|
|
3811
|
+
.join('\n')), "\n\n Last result:\n ").concat(block($ongoingResult.$resultString === null
|
|
3812
|
+
? 'null'
|
|
3813
|
+
: $ongoingResult.$resultString
|
|
3814
|
+
.split('\n')
|
|
3815
|
+
.map(function (line) { return "> ".concat(line); })
|
|
3816
|
+
.join('\n')), "\n ---\n ");
|
|
3817
|
+
}));
|
|
3805
3818
|
}
|
|
3806
3819
|
return [2 /*return*/];
|
|
3807
3820
|
}
|
|
@@ -3835,7 +3848,7 @@ function executeTemplate(options) {
|
|
|
3835
3848
|
|
|
3836
3849
|
*/
|
|
3837
3850
|
//------------------------------------
|
|
3838
|
-
if (resultString === null) {
|
|
3851
|
+
if ($ongoingResult.$resultString === null) {
|
|
3839
3852
|
throw new UnexpectedError(spaceTrim$1(function (block) { return "\n Something went wrong and prompt result is null\n\n ".concat(block(pipelineIdentification), "\n "); }));
|
|
3840
3853
|
}
|
|
3841
3854
|
return [4 /*yield*/, onProgress({
|
|
@@ -3845,13 +3858,13 @@ function executeTemplate(options) {
|
|
|
3845
3858
|
isDone: true,
|
|
3846
3859
|
templateType: currentTemplate.templateType,
|
|
3847
3860
|
parameterName: currentTemplate.resultingParameterName,
|
|
3848
|
-
parameterValue: resultString,
|
|
3861
|
+
parameterValue: $ongoingResult.$resultString,
|
|
3849
3862
|
// <- [πΈ]
|
|
3850
3863
|
})];
|
|
3851
3864
|
case 7:
|
|
3852
3865
|
_h.sent();
|
|
3853
3866
|
return [2 /*return*/, Object.freeze((_g = {},
|
|
3854
|
-
_g[currentTemplate.resultingParameterName] = resultString /* <- Note: Not need to detect parameter collision here because pipeline checks logic consistency during construction */,
|
|
3867
|
+
_g[currentTemplate.resultingParameterName] = $ongoingResult.$resultString /* <- Note: Not need to detect parameter collision here because pipeline checks logic consistency during construction */,
|
|
3855
3868
|
_g))];
|
|
3856
3869
|
}
|
|
3857
3870
|
});
|
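Read as a whole, these hunks apply one mechanical refactor: the loose local variables of the attempt loop (`resultString`, `result`, `prompt`, `expectError`, `scriptPipelineExecutionErrors`) are gathered into a single mutable `$ongoingResult` object whose `$`-prefixed fields flag state that is mutated during execution. A minimal sketch of that pattern follows; the helper names (`createOngoingResult`, `recordScriptError`) are hypothetical and do not appear in the package, only the field names mirror the diff.

```javascript
// Sketch of the refactor this diff applies: instead of tracking attempt
// state in loose local variables, the code groups it into one mutable
// "$ongoingResult" bag. The $-prefix is a naming convention marking
// fields that are mutated in place as execution progresses.

function createOngoingResult() {
    return {
        $prompt: null,
        $result: null,
        $resultString: null,
        $expectError: null,
        $scriptPipelineExecutionErrors: [],
    };
}

// Hypothetical helper: every step receives the same bag and mutates it,
// so intermediate state survives across retry attempts and error paths.
function recordScriptError($ongoingResult, error) {
    $ongoingResult.$scriptPipelineExecutionErrors.push(error);
}

const $ongoingResult = createOngoingResult();
recordScriptError($ongoingResult, new Error('Script failed'));
console.log($ongoingResult.$scriptPipelineExecutionErrors.length); // prints 1
```

One practical payoff visible in the diff: because the error-report and final-throw branches read `$ongoingResult.$expectError` and `$ongoingResult.$resultString` from the shared object rather than from locals, the same state is visible wherever the bag is in scope, without threading each variable through every closure the transpiled generator produces.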