@promptbook/node 0.69.0-6 → 0.69.0-7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +17 -57
- package/esm/index.es.js +115 -102
- package/esm/index.es.js.map +1 -1
- package/package.json +2 -2
- package/umd/index.umd.js +115 -102
- package/umd/index.umd.js.map +1 -1
package/README.md
CHANGED
@@ -44,13 +44,15 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 ## π€ The Promptbook Whitepaper
 
+
+
 If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
 
 But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain weird responses. When this happens, you generally have three options:
 
 1. **Fine-tune** the model to your specifications or even train your own.
 2. **Prompt-engineer** the prompt to the best shape you can achieve.
-3.
+3. Orchestrate **multiple prompts** in a [pipeline](https://github.com/webgptorg/promptbook/discussions/64) to get the best result.
 
 In all of these situations, but especially in 3., the Promptbook library can make your life easier.
 
@@ -62,7 +64,9 @@ In all of these situations, but especially in 3., the Promptbook library can mak
 - Promptbook has built in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
 - Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
 
-
+
+
+## π§ Pipeline _(for prompt-engeneers)_
 
 **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is document that describes a **pipeline** - a series of prompts that are chained together to form somewhat reciepe for transforming natural language input.
 
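As rough context for the hunk above: a `.ptbk.md` file is ordinary markdown that declares the pipeline's parameters and chains prompt sections that produce them. The sketch below is only illustrative — the file name, parameter names, and exact command syntax here are assumptions, not taken from this diff; the authoritative format lives in the Promptbook documentation.

```markdown
# Write greeting

-   INPUT  PARAMETER {name}
-   OUTPUT PARAMETER {greeting}

## Greeting

> Write a short, friendly greeting for {name}.

-> {greeting}
```

Each section's prompt may reference earlier parameters in `{curly}` placeholders, which is how the individual prompts chain into a pipeline.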
@@ -379,6 +383,8 @@ The following glossary is used to clarify certain concepts:
 - When you want to **version** your prompts and **test multiple versions**
 - When you want to **log** the execution of prompts and backtrace the issues
 
+[See more](https://github.com/webgptorg/promptbook/discussions/111)
+
 ### β When not to use
 
 - When you have already implemented single simple prompt and it works fine for your job
@@ -388,6 +394,8 @@ The following glossary is used to clarify certain concepts:
 - When your main focus is on something other than text - like images, audio, video, spreadsheets _(other media types may be added in the future, [see discussion](https://github.com/webgptorg/promptbook/discussions/103))_
 - When you need to use recursion _([see the discussion](https://github.com/webgptorg/promptbook/discussions/38))_
 
+[See more](https://github.com/webgptorg/promptbook/discussions/112)
+
 ## π Known issues
 
 - [π€ΈββοΈ Iterations not working yet](https://github.com/webgptorg/promptbook/discussions/55)
@@ -400,63 +408,15 @@ The following glossary is used to clarify certain concepts:
 
 ## β FAQ
 
-
-
 If you have a question [start a discussion](https://github.com/webgptorg/promptbook/discussions/), [open an issue](https://github.com/webgptorg/promptbook/issues) or [write me an email](https://www.pavolhejny.com/contact).
 
-
-
-
-
-
-
-
-
-We are considering creating a bridge/converter between these two libraries.
-
-
-
-### Promptbooks vs. OpenAI`s GPTs
-
-GPTs are chat assistants that can be assigned to specific tasks and materials. But they are still chat assistants. Promptbooks are a way to orchestrate many more predefined tasks to have much tighter control over the process. Promptbooks are not a good technology for creating human-like chatbots, GPTs are not a good technology for creating outputs with specific requirements.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-### Where should I store my promptbooks?
-
-If you use raw SDKs, you just put prompts in the sourcecode, mixed in with typescript, javascript, python or whatever programming language you use.
-
-If you use promptbooks, you can store them in several places, each with its own advantages and disadvantages:
-
-1. As **source code**, typically git-committed. In this case you can use the versioning system and the promptbooks will be tightly coupled with the version of the application. You still get the power of promptbooks, as you separate the concerns of the prompt-engineer and the programmer.
-
-2. As data in a **database** In this case, promptbooks are like posts / articles on the blog. They can be modified independently of the application. You don't need to redeploy the application to change the promptbooks. You can have multiple versions of promptbooks for each user. You can have a web interface for non-programmers to create and modify promptbooks. But you lose the versioning system and you still have to consider the interface between the promptbooks and the application _(= input and output parameters)_.
-
-3. In a **configuration** in environment variables. This is a good way to store promptbooks if you have an application with multiple deployments and you want to have different but simple promptbooks for each deployment and you don't need to change them often.
-
-### What should I do when I need same promptbook in multiple human languages?
-
-A single promptbook can be written for several _(human)_ languages at once. However, we recommend that you have separate promptbooks for each language.
-
-In large language models, you will get better results if you have prompts in the same language as the user input.
-
-The best way to manage this is to have suffixed promptbooks like `write-website-content.en.ptbk.md` and `write-website-content.cs.ptbk.md` for each supported language.
-
-
-
-
+- [β Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?](https://github.com/webgptorg/promptbook/discussions/114)
+- [β How is it different from the OpenAI`s GPTs?](https://github.com/webgptorg/promptbook/discussions/118)
+- [β How is it different from the Langchain?](https://github.com/webgptorg/promptbook/discussions/115)
+- [β How is it different from the DSPy?](https://github.com/webgptorg/promptbook/discussions/117)
+- [β How is it different from _anything_?](https://github.com/webgptorg/promptbook/discussions?discussions_q=is%3Aopen+label%3A%22Promptbook+vs%22)
+- [β Is Promptbook using RAG _(Retrieval-Augmented Generation)_?](https://github.com/webgptorg/promptbook/discussions/123)
+- [β Is Promptbook using function calling?](https://github.com/webgptorg/promptbook/discussions/124)
 
 ## β Changelog
 
package/esm/index.es.js
CHANGED
@@ -14,7 +14,7 @@ import * as dotenv from 'dotenv';
 /**
 * The version of the Promptbook library
 */
-var PROMPTBOOK_VERSION = '0.69.0-
+var PROMPTBOOK_VERSION = '0.69.0-6';
 // TODO: !!!! List here all the versions and annotate + put into script
 
 /*! *****************************************************************************
@@ -884,7 +884,7 @@ function forEachAsync(array, options, callbackfunction) {
 });
 }
 
-var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.69.0-
+var PipelineCollection = [{title:"Prepare Knowledge from Markdown",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md",promptbookVersion:"0.69.0-6",parameters:[{name:"knowledgeContent",description:"Markdown document content",isInput:true,isOutput:false},{name:"knowledgePieces",description:"The knowledge JSON object",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, extract the important knowledge from the document.\n\n# Rules\n\n- Make pieces of information concise, clear, and easy to understand\n- One piece of information should be approximately 1 paragraph\n- Divide the paragraphs by markdown horizontal lines ---\n- Omit irrelevant information\n- Group redundant information\n- Write just extracted information, nothing else\n\n# The document\n\nTake information from this document:\n\n> {knowledgeContent}",resultingParameterName:"knowledgePieces",dependentParameterNames:["knowledgeContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-from-markdown.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-keywords.ptbk.md",promptbookVersion:"0.69.0-6",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"keywords",description:"Keywords separated by comma",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced data researcher, detect the important keywords in the document.\n\n# Rules\n\n- Write just keywords separated by comma\n\n# The document\n\nTake information from this document:\n\n> {knowledgePieceContent}",resultingParameterName:"keywords",dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-keywords.ptbk.md"},{title:"Prepare Title",pipelineUrl:"https://promptbook.studio/promptbook/prepare-knowledge-title.ptbk.md",promptbookVersion:"0.69.0-6",parameters:[{name:"knowledgePieceContent",description:"The content",isInput:true,isOutput:false},{name:"title",description:"The title of the document",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"knowledge",title:"Knowledge",content:"You are experienced content creator, write best title for the document.\n\n# Rules\n\n- Write just title, nothing else\n- Title should be concise and clear\n- Write maximum 5 words for the title\n\n# The document\n\n> {knowledgePieceContent}",resultingParameterName:"title",expectations:{words:{min:1,max:8}},dependentParameterNames:["knowledgePieceContent"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-knowledge-title.ptbk.md"},{title:"Prepare Keywords",pipelineUrl:"https://promptbook.studio/promptbook/prepare-persona.ptbk.md",promptbookVersion:"0.69.0-6",parameters:[{name:"availableModelNames",description:"List of available model names separated by comma (,)",isInput:true,isOutput:false},{name:"personaDescription",description:"Description of the persona",isInput:true,isOutput:false},{name:"modelRequirements",description:"Specific requirements for the model",isInput:false,isOutput:true}],templates:[{templateType:"PROMPT_TEMPLATE",name:"make-model-requirements",title:"Make modelRequirements",content:"You are experienced AI engineer, you need to create virtual assistant.\nWrite\n\n## Sample\n\n```json\n{\n\"modelName\": \"gpt-4o\",\n\"systemMessage\": \"You are experienced AI engineer and helpfull assistant.\",\n\"temperature\": 0.7\n}\n```\n\n## Instructions\n\n- Your output format is JSON object\n- Write just the JSON object, no other text should be present\n- It contains the following keys:\n - `modelName`: The name of the model to use\n - `systemMessage`: The system message to provide context to the model\n - `temperature`: The sampling temperature to use\n\n### Key `modelName`\n\nPick from the following models:\n\n- {availableModelNames}\n\n### Key `systemMessage`\n\nThe system message is used to communicate instructions or provide context to the model at the beginning of a conversation. It is displayed in a different format compared to user messages, helping the model understand its role in the conversation. The system message typically guides the model's behavior, sets the tone, or specifies desired output from the model. By utilizing the system message effectively, users can steer the model towards generating more accurate and relevant responses.\n\nFor example:\n\n> You are an experienced AI engineer and helpful assistant.\n\n> You are a friendly and knowledgeable chatbot.\n\n### Key `temperature`\n\nThe sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.\n\nYou can pick a value between 0 and 2. For example:\n\n- `0.1`: Low temperature, extremely conservative and deterministic\n- `0.5`: Medium temperature, balanced between conservative and creative\n- `1.0`: High temperature, creative and bit random\n- `1.5`: Very high temperature, extremely creative and often chaotic and unpredictable\n- `2.0`: Maximum temperature, completely random and unpredictable, for some extreme creative use cases\n\n# The assistant\n\nTake this description of the persona:\n\n> {personaDescription}",resultingParameterName:"modelRequirements",format:"JSON",dependentParameterNames:["availableModelNames","personaDescription"]}],knowledgeSources:[],knowledgePieces:[],personas:[],preparations:[],sourceFile:"./promptbook-collection/prepare-persona.ptbk.md"}];
 
 /**
 * This error indicates that the promptbook in a markdown format cannot be parsed into a valid promptbook object
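The bundled `PipelineCollection` above is an array of plain objects whose `parameters` entries carry `isInput`/`isOutput` flags. A small standalone sketch of how a consumer might read one entry (the interface below is a simplified stand-in for the library's real types):

```typescript
// Simplified shape of one PipelineCollection entry (field names taken
// from the bundled collection in the diff above; not the full type).
interface PipelineParameter {
    name: string;
    description: string;
    isInput: boolean;
    isOutput: boolean;
}

interface PipelineEntry {
    title: string;
    pipelineUrl: string;
    promptbookVersion: string;
    parameters: PipelineParameter[];
}

const pipeline: PipelineEntry = {
    title: 'Prepare Knowledge from Markdown',
    pipelineUrl: 'https://promptbook.studio/promptbook/prepare-knowledge-from-markdown.ptbk.md',
    promptbookVersion: '0.69.0-6',
    parameters: [
        { name: 'knowledgeContent', description: 'Markdown document content', isInput: true, isOutput: false },
        { name: 'knowledgePieces', description: 'The knowledge JSON object', isInput: false, isOutput: true },
    ],
};

// Split the declared parameters into pipeline inputs and outputs
const inputs = pipeline.parameters.filter((p) => p.isInput).map((p) => p.name);
const outputs = pipeline.parameters.filter((p) => p.isOutput).map((p) => p.name);

console.log(inputs);  // [ 'knowledgeContent' ]
console.log(outputs); // [ 'knowledgePieces' ]
```

This is why the collection can be shipped as inert data inside the bundle: everything the executor needs (parameter wiring, prompt text, expectations) is serializable.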
@@ -3095,7 +3095,7 @@ function getReservedParametersForTemplate(options) {
 */
 function executeTemplate(options) {
 return __awaiter(this, void 0, void 0, function () {
-var currentTemplate, preparedPipeline, parametersToPass, tools, llmTools, onProgress, settings, $executionReport, pipelineIdentification, maxExecutionAttempts, name, title, priority, usedParameterNames, dependentParameterNames, definedParameters, _a, _b, _c, definedParameterNames, parameters, _loop_1, _d, _e, parameterName,
+var currentTemplate, preparedPipeline, parametersToPass, tools, llmTools, onProgress, settings, $executionReport, pipelineIdentification, maxExecutionAttempts, name, title, priority, usedParameterNames, dependentParameterNames, definedParameters, _a, _b, _c, definedParameterNames, parameters, _loop_1, _d, _e, parameterName, $ongoingResult, maxAttempts, jokerParameterNames, preparedContent, _loop_2, attempt, state_1;
 var e_1, _f, _g;
 return __generator(this, function (_h) {
 switch (_h.label) {
@@ -3165,21 +3165,24 @@ function executeTemplate(options) {
 }
 finally { if (e_1) throw e_1.error; }
 }
-// Note: Now we can freeze `parameters` because we are sure that all and only used parameters are defined
+// Note: Now we can freeze `parameters` because we are sure that all and only used parameters are defined and are not going to be changed
 Object.freeze(parameters);
-
-
-
+$ongoingResult = {
+$result: null,
+$resultString: null,
+$expectError: null,
+$scriptPipelineExecutionErrors: [],
+};
 maxAttempts = currentTemplate.templateType === 'DIALOG_TEMPLATE' ? Infinity : maxExecutionAttempts;
 jokerParameterNames = currentTemplate.jokerParameterNames || [];
 preparedContent = (currentTemplate.preparedContent || '{content}')
 .split('{content}')
 .join(currentTemplate.content);
 _loop_2 = function (attempt) {
-var isJokerAttempt, jokerParameterName, _j, modelRequirements, _k, _l, _m, scriptTools, error_1, e_2_1,
-var e_2,
-return __generator(this, function (
-switch (
+var isJokerAttempt, jokerParameterName, _j, modelRequirements, _k, _l, _m, _o, _p, _q, scriptTools, _r, error_1, e_2_1, _s, _t, _u, functionName, postprocessingError, _v, _w, scriptTools, _x, error_2, e_3_1, e_4_1, error_3;
+var e_2, _y, e_4, _z, e_3, _0;
+return __generator(this, function (_1) {
+switch (_1.label) {
 case 0:
 isJokerAttempt = attempt < 0;
 jokerParameterName = jokerParameterNames[jokerParameterNames.length + attempt];
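The refactor above replaces loose per-attempt locals (`result`, `resultString`, `expectError`, …) with a single mutable `$ongoingResult` record that survives across retry attempts. A minimal standalone sketch of that pattern, with illustrative names rather than the library's API:

```typescript
// One mutable record per template execution; per-attempt fields are
// reset at the start of each attempt, errors accumulate across attempts.
type OngoingResult = {
    $resultString: string | null;
    $expectError: Error | null;
    $errors: Error[];
};

function executeWithRetries(
    attempt: (input: string) => string, // may throw on a failed attempt
    input: string,
    maxAttempts: number,
): OngoingResult {
    const $ongoingResult: OngoingResult = { $resultString: null, $expectError: null, $errors: [] };

    for (let i = 0; i < maxAttempts; i++) {
        // Reset per-attempt fields, keep the accumulated error list
        $ongoingResult.$resultString = null;
        $ongoingResult.$expectError = null;
        try {
            $ongoingResult.$resultString = attempt(input);
            break; // success -> stop retrying
        } catch (error) {
            $ongoingResult.$expectError = error as Error;
            $ongoingResult.$errors.push(error as Error);
        }
    }
    return $ongoingResult;
}

// Fails twice, then succeeds on the third attempt
let calls = 0;
const result = executeWithRetries((s) => {
    calls += 1;
    if (calls < 3) throw new Error(`attempt ${calls} failed`);
    return s.toUpperCase();
}, 'ok', 5);

console.log(result.$resultString);  // 'OK'
console.log(result.$errors.length); // 2
```

Keeping all attempt state on one object makes it easy to hand the whole in-progress result to reporting code, which is what the `$executionReport` lines later in this diff do.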
@@ -3187,21 +3190,21 @@ function executeTemplate(options) {
 if (isJokerAttempt && !jokerParameterName) {
 throw new UnexpectedError(spaceTrim(function (block) { return "\n Joker not found in attempt ".concat(attempt, "\n\n ").concat(block(pipelineIdentification), "\n "); }));
 }
-result = null;
-resultString = null;
-expectError = null;
+$ongoingResult.$result = null;
+$ongoingResult.$resultString = null;
+$ongoingResult.$expectError = null;
 if (isJokerAttempt) {
 if (parameters[jokerParameterName] === undefined) {
 throw new PipelineExecutionError(spaceTrim(function (block) { return "\n Joker parameter {".concat(jokerParameterName, "} not defined\n\n ").concat(block(pipelineIdentification), "\n "); }));
 // <- TODO: This is maybe `PipelineLogicError` which should be detected in `validatePipeline` and here just thrown as `UnexpectedError`
 }
 else {
-resultString = parameters[jokerParameterName];
+$ongoingResult.$resultString = parameters[jokerParameterName];
 }
 }
-
+_1.label = 1;
 case 1:
-
+_1.trys.push([1, 44, 45, 46]);
 if (!!isJokerAttempt) return [3 /*break*/, 26];
 _j = currentTemplate.templateType;
 switch (_j) {
@@ -3212,11 +3215,11 @@ function executeTemplate(options) {
 }
 return [3 /*break*/, 25];
 case 2:
-resultString = replaceParameters(preparedContent, parameters);
+$ongoingResult.$resultString = replaceParameters(preparedContent, parameters);
 return [3 /*break*/, 26];
 case 3:
 modelRequirements = __assign(__assign({ modelVariant: 'CHAT' }, (preparedPipeline.defaultModelRequirements || {})), (currentTemplate.modelRequirements || {}));
-prompt = {
+$ongoingResult.$prompt = {
 title: currentTemplate.title,
 pipelineUrl: "".concat(preparedPipeline.pipelineUrl
 ? preparedPipeline.pipelineUrl
@@ -3238,25 +3241,32 @@ function executeTemplate(options) {
 case 'EMBEDDING': return [3 /*break*/, 8];
 }
 return [3 /*break*/, 10];
-case 4:
+case 4:
+_l = $ongoingResult;
+return [4 /*yield*/, llmTools.callChatModel($deepFreeze($ongoingResult.$prompt))];
 case 5:
-chatResult =
+_l.$chatResult = _1.sent();
 // TODO: [π¬] Destroy chatThread
-result = chatResult;
-resultString = chatResult.content;
+$ongoingResult.$result = $ongoingResult.$chatResult;
+$ongoingResult.$resultString = $ongoingResult.$chatResult.content;
 return [3 /*break*/, 11];
-case 6:
+case 6:
+_m = $ongoingResult;
+return [4 /*yield*/, llmTools.callCompletionModel($deepFreeze($ongoingResult.$prompt))];
 case 7:
-completionResult =
-result = completionResult;
-resultString = completionResult.content;
+_m.$completionResult = _1.sent();
+$ongoingResult.$result = $ongoingResult.$completionResult;
+$ongoingResult.$resultString = $ongoingResult.$completionResult.content;
 return [3 /*break*/, 11];
-case 8:
+case 8:
+// TODO: [π§ ] This is weird, embedding model can not be used such a way in the pipeline
+_o = $ongoingResult;
+return [4 /*yield*/, llmTools.callEmbeddingModel($deepFreeze($ongoingResult.$prompt))];
 case 9:
 // TODO: [π§ ] This is weird, embedding model can not be used such a way in the pipeline
-embeddingResult =
-result = embeddingResult;
-resultString = embeddingResult.content.join(',');
+_o.$embeddingResult = _1.sent();
+$ongoingResult.$result = $ongoingResult.$embeddingResult;
+$ongoingResult.$resultString = $ongoingResult.$embeddingResult.content.join(',');
 return [3 /*break*/, 11];
 case 10: throw new PipelineExecutionError(spaceTrim(function (block) { return "\n Unknown model variant \"".concat(currentTemplate.modelRequirements.modelVariant, "\"\n\n ").concat(block(pipelineIdentification), "\n\n "); }));
 case 11: return [3 /*break*/, 26];
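Cases 4–11 above are a state-machine-lowered dispatch on `modelVariant`: chat and completion results expose a string `content`, while the embedding result's `content` is a vector that the bundle joins with `,`. Stripped of the `__generator` plumbing, the dispatch amounts to something like the following (the interface is a simplified stand-in, not the library's real `LlmExecutionTools` type):

```typescript
type ModelVariant = 'CHAT' | 'COMPLETION' | 'EMBEDDING';

// Simplified stand-in for the library's LLM tools interface
interface LlmTools {
    callChatModel(prompt: string): { content: string };
    callCompletionModel(prompt: string): { content: string };
    callEmbeddingModel(prompt: string): { content: number[] };
}

function callByVariant(llmTools: LlmTools, variant: ModelVariant, prompt: string): string {
    switch (variant) {
        case 'CHAT':
            return llmTools.callChatModel(prompt).content;
        case 'COMPLETION':
            return llmTools.callCompletionModel(prompt).content;
        case 'EMBEDDING':
            // Embedding content is a vector; the bundle joins it with ','
            return llmTools.callEmbeddingModel(prompt).content.join(',');
        default:
            throw new Error(`Unknown model variant "${variant}"`);
    }
}

// Fake tools to exercise the dispatch
const fakeTools: LlmTools = {
    callChatModel: (p) => ({ content: `chat:${p}` }),
    callCompletionModel: (p) => ({ content: `completion:${p}` }),
    callEmbeddingModel: () => ({ content: [0.1, 0.2] }),
};

console.log(callByVariant(fakeTools, 'CHAT', 'hi'));      // 'chat:hi'
console.log(callByVariant(fakeTools, 'EMBEDDING', 'hi')); // '0.1,0.2'
```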
@@ -3267,60 +3277,59 @@ function executeTemplate(options) {
 if (!currentTemplate.contentLanguage) {
 throw new PipelineExecutionError(spaceTrim(function (block) { return "\n Script language is not defined for SCRIPT TEMPLATE \"".concat(currentTemplate.name, "\"\n\n ").concat(block(pipelineIdentification), "\n "); }));
 }
-
-scriptPipelineExecutionErrors = [];
-_v.label = 13;
+_1.label = 13;
 case 13:
-
-
-
+_1.trys.push([13, 20, 21, 22]);
+_p = (e_2 = void 0, __values(arrayableToArray(tools.script))), _q = _p.next();
+_1.label = 14;
 case 14:
-if (!!
-scriptTools =
-
+if (!!_q.done) return [3 /*break*/, 19];
+scriptTools = _q.value;
+_1.label = 15;
 case 15:
-
+_1.trys.push([15, 17, , 18]);
+_r = $ongoingResult;
 return [4 /*yield*/, scriptTools.execute($deepFreeze({
 scriptLanguage: currentTemplate.contentLanguage,
 script: preparedContent,
 parameters: parameters,
 }))];
 case 16:
-resultString =
+_r.$resultString = _1.sent();
 return [3 /*break*/, 19];
 case 17:
-error_1 =
+error_1 = _1.sent();
 if (!(error_1 instanceof Error)) {
 throw error_1;
 }
 if (error_1 instanceof UnexpectedError) {
 throw error_1;
 }
-scriptPipelineExecutionErrors.push(error_1);
+$ongoingResult.$scriptPipelineExecutionErrors.push(error_1);
 return [3 /*break*/, 18];
 case 18:
-
+_q = _p.next();
 return [3 /*break*/, 14];
 case 19: return [3 /*break*/, 22];
 case 20:
-e_2_1 =
+e_2_1 = _1.sent();
 e_2 = { error: e_2_1 };
 return [3 /*break*/, 22];
 case 21:
 try {
-if (
+if (_q && !_q.done && (_y = _p.return)) _y.call(_p);
 }
 finally { if (e_2) throw e_2.error; }
 return [7 /*endfinally*/];
 case 22:
-if (resultString !== null) {
+if ($ongoingResult.$resultString !== null) {
 return [3 /*break*/, 26];
 }
-if (scriptPipelineExecutionErrors.length === 1) {
-throw scriptPipelineExecutionErrors[0];
+if ($ongoingResult.$scriptPipelineExecutionErrors.length === 1) {
+throw $ongoingResult.$scriptPipelineExecutionErrors[0];
 }
 else {
-throw new PipelineExecutionError(spaceTrim(function (block) { return "\n Script execution failed ".concat(scriptPipelineExecutionErrors.length, "
+throw new PipelineExecutionError(spaceTrim(function (block) { return "\n Script execution failed ".concat($ongoingResult.$scriptPipelineExecutionErrors.length, "x\n\n ").concat(block(pipelineIdentification), "\n\n ").concat(block($ongoingResult.$scriptPipelineExecutionErrors
 .map(function (error) { return '- ' + error.message; })
 .join('\n\n')), "\n "); }));
 }
@@ -3328,6 +3337,8 @@ function executeTemplate(options) {
 if (tools.userInterface === undefined) {
 throw new PipelineExecutionError(spaceTrim(function (block) { return "\n User interface tools are not available\n\n ".concat(block(pipelineIdentification), "\n "); }));
 }
+// TODO: [πΉ] When making next attempt for `DIALOG TEMPLATE`, preserve the previous user input
+_s = $ongoingResult;
 return [4 /*yield*/, tools.userInterface.promptDialog($deepFreeze({
 promptTitle: currentTemplate.title,
 promptMessage: replaceParameters(currentTemplate.description || '', parameters),
@@ -3338,47 +3349,46 @@ function executeTemplate(options) {
 }))];
 case 24:
 // TODO: [πΉ] When making next attempt for `DIALOG TEMPLATE`, preserve the previous user input
-resultString =
+_s.$resultString = _1.sent();
 return [3 /*break*/, 26];
 case 25: throw new PipelineExecutionError(spaceTrim(function (block) { return "\n Unknown execution type \"".concat(currentTemplate.templateType, "\"\n\n ").concat(block(pipelineIdentification), "\n "); }));
 case 26:
 if (!(!isJokerAttempt && currentTemplate.postprocessingFunctionNames)) return [3 /*break*/, 43];
-
+_1.label = 27;
 case 27:
-
-
-
+_1.trys.push([27, 41, 42, 43]);
+_t = (e_4 = void 0, __values(currentTemplate.postprocessingFunctionNames)), _u = _t.next();
+_1.label = 28;
 case 28:
-if (!!
-functionName =
-// TODO: DRY [1]
-scriptPipelineExecutionErrors = [];
+if (!!_u.done) return [3 /*break*/, 40];
+functionName = _u.value;
 postprocessingError = null;
-
+_1.label = 29;
 case 29:
-
-
-
+_1.trys.push([29, 36, 37, 38]);
+_v = (e_3 = void 0, __values(arrayableToArray(tools.script))), _w = _v.next();
+_1.label = 30;
 case 30:
-if (!!
-scriptTools =
-
+if (!!_w.done) return [3 /*break*/, 35];
+scriptTools = _w.value;
+_1.label = 31;
 case 31:
-
+_1.trys.push([31, 33, , 34]);
+_x = $ongoingResult;
 return [4 /*yield*/, scriptTools.execute({
 scriptLanguage: "javascript" /* <- TODO: Try it in each languages; In future allow postprocessing with arbitrary combination of languages to combine */,
 script: "".concat(functionName, "(resultString)"),
 parameters: {
-resultString: resultString || '',
+resultString: $ongoingResult.$resultString || '',
 // Note: No ...parametersForTemplate, because working with result only
 },
 })];
 case 32:
-resultString =
+_x.$resultString = _1.sent();
 postprocessingError = null;
 return [3 /*break*/, 35];
 case 33:
-error_2 =
+error_2 = _1.sent();
 if (!(error_2 instanceof Error)) {
 throw error_2;
 }
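Cases 27–40 above fold each name in `postprocessingFunctionNames` over the current result string, collecting script errors as they go. Untangling the generator plumbing, the logic is roughly the following (the function registry here is hypothetical — the real library resolves names through its script execution tools, not a lookup table):

```typescript
// Hypothetical registry of postprocessing functions keyed by name
// (illustrative only; not the library's mechanism for resolving names)
const registry: Record<string, (resultString: string) => string> = {
    trim: (s) => s.trim(),
    unwrapQuotes: (s) => s.replace(/^"|"$/g, ''),
};

function applyPostprocessing(resultString: string, functionNames: string[]): string {
    const errors: Error[] = [];
    for (const functionName of functionNames) {
        const fn = registry[functionName];
        if (fn === undefined) {
            errors.push(new Error(`Unknown postprocessing function "${functionName}"`));
            continue;
        }
        try {
            // Each function receives only the result string, no other parameters
            resultString = fn(resultString);
        } catch (error) {
            errors.push(error as Error); // collected like $scriptPipelineExecutionErrors
        }
    }
    if (errors.length > 0) {
        throw errors[0];
    }
    return resultString;
}

console.log(applyPostprocessing('  "hello"  ', ['trim', 'unwrapQuotes'])); // 'hello'
```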
@@ -3386,19 +3396,19 @@ function executeTemplate(options) {
 throw error_2;
 }
 postprocessingError = error_2;
-scriptPipelineExecutionErrors.push(error_2);
+$ongoingResult.$scriptPipelineExecutionErrors.push(error_2);
 return [3 /*break*/, 34];
 case 34:
-
+_w = _v.next();
 return [3 /*break*/, 30];
 case 35: return [3 /*break*/, 38];
 case 36:
-e_3_1 =
+e_3_1 = _1.sent();
 e_3 = { error: e_3_1 };
 return [3 /*break*/, 38];
 case 37:
 try {
-if (
+if (_w && !_w.done && (_0 = _v.return)) _0.call(_v);
 }
 finally { if (e_3) throw e_3.error; }
 return [7 /*endfinally*/];
@@ -3406,18 +3416,18 @@ function executeTemplate(options) {
 if (postprocessingError) {
 throw postprocessingError;
 }
-
+_1.label = 39;
 case 39:
-
+_u = _t.next();
 return [3 /*break*/, 28];
 case 40: return [3 /*break*/, 43];
 case 41:
-e_4_1 =
+e_4_1 = _1.sent();
 e_4 = { error: e_4_1 };
 return [3 /*break*/, 43];
 case 42:
 try {
-if (
+if (_u && !_u.done && (_z = _t.return)) _z.call(_t);
 }
 finally { if (e_4) throw e_4.error; }
 return [7 /*endfinally*/];
@@ -3425,10 +3435,10 @@ function executeTemplate(options) {
 // TODO: [π] Unite object for expecting amount and format
 if (currentTemplate.format) {
 if (currentTemplate.format === 'JSON') {
-if (!isValidJsonString(resultString || '')) {
+if (!isValidJsonString($ongoingResult.$resultString || '')) {
 // TODO: [π’] Do more universally via `FormatDefinition`
 try {
-resultString = extractJsonBlock(resultString || '');
+$ongoingResult.$resultString = extractJsonBlock($ongoingResult.$resultString || '');
 }
 catch (error) {
 keepUnused(error);
@@ -3443,45 +3453,48 @@ function executeTemplate(options) {
                 }
                 // TODO: [π] Unite object for expecting amount and format
                 if (currentTemplate.expectations) {
-                    checkExpectations(currentTemplate.expectations, resultString || '');
+                    checkExpectations(currentTemplate.expectations, $ongoingResult.$resultString || '');
                 }
                 return [2 /*return*/, "break-attempts"];
             case 44:
-                error_3 =
+                error_3 = _1.sent();
                 if (!(error_3 instanceof ExpectError)) {
                     throw error_3;
                 }
-                expectError = error_3;
+                $ongoingResult.$expectError = error_3;
                 return [3 /*break*/, 46];
             case 45:
                 if (!isJokerAttempt &&
                     currentTemplate.templateType === 'PROMPT_TEMPLATE' &&
-                    prompt
+                    $ongoingResult.$prompt
                 // <- Note: [2] When some expected parameter is not defined, error will occur in replaceParameters
                 // In that case we don't want to make a report about it because it's not a llm execution error
                 ) {
                     // TODO: [π§ ] Maybe put other templateTypes into report
                     $executionReport.promptExecutions.push({
-                        prompt: __assign({}, prompt),
-                        result: result || undefined,
-                        error: expectError === null ? undefined : serializeError(expectError),
+                        prompt: __assign({}, $ongoingResult.$prompt),
+                        result: $ongoingResult.$result || undefined,
+                        error: $ongoingResult.$expectError === null ? undefined : serializeError($ongoingResult.$expectError),
                    });
                 }
                 return [7 /*endfinally*/];
             case 46:
-                if (expectError !== null && attempt === maxAttempts - 1) {
-                    throw new PipelineExecutionError(spaceTrim(function (block) {
-
-                        .
-                        .join('\n')), "\n\n Last error ").concat((expectError === null || expectError === void 0 ? void 0 : expectError.name) || '', ":\n ").concat(block(((expectError === null || expectError === void 0 ? void 0 : expectError.message) || '')
-                        .split('\n')
-                        .map(function (line) { return "> ".concat(line); })
-                        .join('\n')), "\n\n Last result:\n ").concat(block(resultString === null
-                        ? 'null'
-                        : resultString
+                if ($ongoingResult.$expectError !== null && attempt === maxAttempts - 1) {
+                    throw new PipelineExecutionError(spaceTrim(function (block) {
+                        var _a, _b, _c;
+                        return "\n LLM execution failed ".concat(maxExecutionAttempts, "x\n\n ").concat(block(pipelineIdentification), "\n\n ---\n The Prompt:\n ").concat(block((((_a = $ongoingResult.$prompt) === null || _a === void 0 ? void 0 : _a.content) || '')
                         .split('\n')
                         .map(function (line) { return "> ".concat(line); })
-                        .join('\n')), "\n
+                        .join('\n')), "\n\n Last error ").concat(((_b = $ongoingResult.$expectError) === null || _b === void 0 ? void 0 : _b.name) || '', ":\n ").concat(block((((_c = $ongoingResult.$expectError) === null || _c === void 0 ? void 0 : _c.message) || '')
+                        .split('\n')
+                        .map(function (line) { return "> ".concat(line); })
+                        .join('\n')), "\n\n Last result:\n ").concat(block($ongoingResult.$resultString === null
+                        ? 'null'
+                        : $ongoingResult.$resultString
+                        .split('\n')
+                        .map(function (line) { return "> ".concat(line); })
+                        .join('\n')), "\n ---\n ");
+                    }));
                 }
                 return [2 /*return*/];
             }
@@ -3515,7 +3528,7 @@ function executeTemplate(options) {
 
                 */
                 //------------------------------------
-                if (resultString === null) {
+                if ($ongoingResult.$resultString === null) {
                     throw new UnexpectedError(spaceTrim(function (block) { return "\n Something went wrong and prompt result is null\n\n ".concat(block(pipelineIdentification), "\n "); }));
                 }
                 return [4 /*yield*/, onProgress({
@@ -3525,13 +3538,13 @@ function executeTemplate(options) {
                     isDone: true,
                     templateType: currentTemplate.templateType,
                     parameterName: currentTemplate.resultingParameterName,
-                    parameterValue: resultString,
+                    parameterValue: $ongoingResult.$resultString,
                     // <- [πΈ]
                 })];
             case 7:
                 _h.sent();
                 return [2 /*return*/, Object.freeze((_g = {},
-                    _g[currentTemplate.resultingParameterName] = resultString /* <- Note: Not need to detect parameter collision here because pipeline checks logic consistency during construction */,
+                    _g[currentTemplate.resultingParameterName] = $ongoingResult.$resultString /* <- Note: Not need to detect parameter collision here because pipeline checks logic consistency during construction */,
                     _g))];
             }
         });