@promptbook/wizard 0.95.0 → 0.98.0-10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (25)
  1. package/README.md +12 -0
  2. package/esm/index.es.js +345 -70
  3. package/esm/index.es.js.map +1 -1
  4. package/esm/typings/src/_packages/anthropic-claude.index.d.ts +2 -2
  5. package/esm/typings/src/_packages/cli.index.d.ts +4 -0
  6. package/esm/typings/src/_packages/core.index.d.ts +2 -0
  7. package/esm/typings/src/_packages/openai.index.d.ts +10 -0
  8. package/esm/typings/src/_packages/types.index.d.ts +12 -2
  9. package/esm/typings/src/_packages/wizard.index.d.ts +4 -0
  10. package/esm/typings/src/config.d.ts +1 -1
  11. package/esm/typings/src/execution/createPipelineExecutor/$OngoingTaskResult.d.ts +8 -0
  12. package/esm/typings/src/execution/utils/validatePromptResult.d.ts +53 -0
  13. package/esm/typings/src/llm-providers/anthropic-claude/AnthropicClaudeExecutionTools.d.ts +3 -3
  14. package/esm/typings/src/llm-providers/anthropic-claude/AnthropicClaudeExecutionToolsOptions.d.ts +2 -2
  15. package/esm/typings/src/llm-providers/openai/OpenAiAssistantExecutionToolsOptions.d.ts +2 -2
  16. package/esm/typings/src/llm-providers/openai/OpenAiCompatibleExecutionTools.d.ts +4 -4
  17. package/esm/typings/src/llm-providers/openai/OpenAiCompatibleExecutionToolsOptions.d.ts +52 -0
  18. package/esm/typings/src/llm-providers/openai/OpenAiExecutionToolsOptions.d.ts +3 -5
  19. package/esm/typings/src/llm-providers/openai/createOpenAiCompatibleExecutionTools.d.ts +74 -0
  20. package/esm/typings/src/llm-providers/openai/register-configuration.d.ts +11 -0
  21. package/esm/typings/src/llm-providers/openai/register-constructor.d.ts +14 -0
  22. package/esm/typings/src/version.d.ts +1 -1
  23. package/package.json +2 -2
  24. package/umd/index.umd.js +346 -69
  25. package/umd/index.umd.js.map +1 -1
package/README.md CHANGED
@@ -25,6 +25,10 @@ Write AI applications using plain human language across multiple models and plat
 
 
 
+<blockquote style="color: #ff8811">
+<b>⚠ Warning:</b> This is a pre-release version of the library. It is not yet ready for production use. Please look at <a href="https://www.npmjs.com/package/@promptbook/core?activeTab=versions">latest stable release</a>.
+</blockquote>
+
 ## 📦 Package `@promptbook/wizard`
 
 - Promptbooks are [divided into several](#-packages) packages, all are published from [single monorepo](https://github.com/webgptorg/promptbook).
@@ -70,6 +74,8 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 During the computer revolution, we have seen [multiple generations of computer languages](https://github.com/webgptorg/promptbook/discussions/180), from the physical rewiring of the vacuum tubes through low-level machine code to the high-level languages like Python or JavaScript. And now, we're on the edge of the **next revolution**!
 
+
+
 It's a revolution of writing software in **plain human language** that is understandable and executable by both humans and machines – and it's going to change everything!
 
 The incredible growth in power of microprocessors and the Moore's Law have been the driving force behind the ever-more powerful languages, and it's been an amazing journey! Similarly, the large language models (like GPT or Claude) are the next big thing in language technology, and they're set to transform the way we interact with computers.
@@ -195,6 +201,8 @@ Join our growing community of developers and users:
 
 _A concise, Markdown-based DSL for crafting AI workflows and automations._
 
+
+
 ### Introduction
 
 Book is a Markdown-based language that simplifies the creation of AI applications, workflows, and automations. With human-readable commands, you can define inputs, outputs, personas, knowledge sources, and actions—without needing model-specific details.
@@ -244,6 +252,8 @@ Personas can have access to different knowledge, tools and actions. They can als
 
 - [PERSONA](https://github.com/webgptorg/promptbook/blob/main/documents/commands/PERSONA.md)
 
+
+
 ### **3. How:** Knowledge, Instruments and Actions
 
 The resources used by the personas are used to do the work.
@@ -343,6 +353,8 @@ The following glossary is used to clarify certain concepts:
 
 _Note: This section is not a complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
 
+
+
 ### 💯 Core concepts
 
 - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
package/esm/index.es.js CHANGED
@@ -38,7 +38,7 @@ const BOOK_LANGUAGE_VERSION = '1.0.0';
  * @generated
  * @see https://github.com/webgptorg/promptbook
  */
-const PROMPTBOOK_ENGINE_VERSION = '0.95.0';
+const PROMPTBOOK_ENGINE_VERSION = '0.98.0-10';
 /**
  * TODO: string_promptbook_version should be constrained to the all versions of Promptbook engine
  * Note: [💞] Ignore a discrepancy between file name and entity name
@@ -232,7 +232,7 @@ const DEFAULT_MAX_PARALLEL_COUNT = 5; // <- TODO: [🤹‍♂️]
  *
  * @public exported from `@promptbook/core`
  */
-const DEFAULT_MAX_EXECUTION_ATTEMPTS = 10; // <- TODO: [🤹‍♂️]
+const DEFAULT_MAX_EXECUTION_ATTEMPTS = 7; // <- TODO: [🤹‍♂️]
 // <- TODO: [๐Ÿ]
 /**
  * Where to store your books
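
The default retry budget drops from 10 to 7 attempts per task. A minimal sketch of pinning the old behavior from calling code, assuming the executor accepts a `maxExecutionAttempts` option as the retry loop further below suggests (option name and types are not verified against this release):

```ts
import { createPipelineExecutor } from '@promptbook/core';
import type { ExecutionTools, PipelineJson } from '@promptbook/types';

declare const pipeline: PipelineJson; // <- your compiled book
declare const tools: ExecutionTools; // <- your LLM/scraper wiring

const executor = createPipelineExecutor({
    pipeline,
    tools,
    maxExecutionAttempts: 10, // <- assumption: restores the pre-0.98 default of 10
});
```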
@@ -4345,7 +4345,7 @@ resultContent, rawResponse) {
  */
 
 /**
- * Execution Tools for calling OpenAI API or other OpeenAI compatible provider
+ * Execution Tools for calling OpenAI API or other OpenAI compatible provider
  *
  * @public exported from `@promptbook/openai`
  */
@@ -4915,6 +4915,7 @@ class OllamaExecutionTools extends OpenAiCompatibleExecutionTools {
             baseURL: DEFAULT_OLLAMA_BASE_URL,
             ...ollamaOptions,
             apiKey: 'ollama',
+            isProxied: false, // <- Note: Ollama is always local
         };
         super(openAiCompatibleOptions);
     }
@@ -5087,6 +5088,42 @@ const _OpenAiAssistantMetadataRegistration = $llmToolsMetadataRegister.register(
          */
     },
 });
+/**
+ * Registration of the OpenAI Compatible metadata
+ *
+ * Note: OpenAiCompatibleExecutionTools is an abstract class and cannot be instantiated directly.
+ * It serves as a base class for OpenAiExecutionTools and other compatible implementations.
+ *
+ * @public exported from `@promptbook/core`
+ * @public exported from `@promptbook/wizard`
+ * @public exported from `@promptbook/cli`
+ */
+const _OpenAiCompatibleMetadataRegistration = $llmToolsMetadataRegister.register({
+    title: 'Open AI Compatible',
+    packageName: '@promptbook/openai',
+    className: 'OpenAiCompatibleExecutionTools',
+    envVariables: ['OPENAI_API_KEY', 'OPENAI_BASE_URL'],
+    trustLevel: 'CLOSED',
+    order: MODEL_ORDERS.TOP_TIER,
+    getBoilerplateConfiguration() {
+        return {
+            title: 'Open AI Compatible',
+            packageName: '@promptbook/openai',
+            className: 'OpenAiCompatibleExecutionTools',
+            options: {
+                apiKey: 'sk-',
+                baseURL: 'https://api.openai.com/v1',
+                defaultModelName: 'gpt-4-turbo',
+                isProxied: false,
+                remoteServerUrl: DEFAULT_REMOTE_SERVER_URL,
+                maxRequestsPerMinute: DEFAULT_MAX_REQUESTS_PER_MINUTE,
+            },
+        };
+    },
+    createConfigurationFromEnv(env) {
+        return null;
+    },
+});
 /**
  * Note: [💞] Ignore a discrepancy between file name and entity name
  */
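
Taken together, the boilerplate above implies the following option surface for the OpenAI-compatible tools. This is a hedged sketch inferred from the configuration literal; the authoritative shape is the new `OpenAiCompatibleExecutionToolsOptions.d.ts` listed among the changed files:

```ts
// Sketch only — field names inferred from the boilerplate configuration above.
type OpenAiCompatibleOptionsSketch = {
    apiKey: string; // e.g. 'sk-...'
    baseURL: string; // any OpenAI-compatible endpoint, e.g. 'https://api.openai.com/v1'
    defaultModelName: string; // e.g. 'gpt-4-turbo'; becomes the single hardcoded CHAT model
    isProxied: boolean; // when true, route calls through a remote Promptbook server
    remoteServerUrl?: string; // relevant only when isProxied is true
    maxRequestsPerMinute?: number;
    dangerouslyAllowBrowser?: boolean; // auto-enabled in browser/web-worker contexts (see factory below)
};
```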
@@ -5136,7 +5173,7 @@ class OpenAiExecutionTools extends OpenAiCompatibleExecutionTools {
      * Default model for chat variant.
      */
     getDefaultChatModel() {
-        return this.getDefaultModel('gpt-4o');
+        return this.getDefaultModel('gpt-4-turbo');
     }
     /**
      * Default model for completion variant.
@@ -5166,6 +5203,9 @@ class OpenAiAssistantExecutionTools extends OpenAiExecutionTools {
      * @param options which are relevant are directly passed to the OpenAI client
      */
     constructor(options) {
+        if (options.isProxied) {
+            throw new NotYetImplementedError(`Proxy mode is not yet implemented for OpenAI assistants`);
+        }
         super(options);
         this.assistantId = options.assistantId;
         // TODO: [👱] Make limiter same as in `OpenAiExecutionTools`
@@ -5339,6 +5379,110 @@ const createOpenAiAssistantExecutionTools = Object.assign((options) => {
  * TODO: [🎶] Naming "constructor" vs "creator" vs "factory"
  */
 
+/**
+ * Execution Tools for calling OpenAI compatible API
+ *
+ * Note: This can be used for any OpenAI compatible APIs
+ *
+ * @public exported from `@promptbook/openai`
+ */
+const createOpenAiCompatibleExecutionTools = Object.assign((options) => {
+    if (options.isProxied) {
+        return new RemoteLlmExecutionTools({
+            ...options,
+            identification: {
+                isAnonymous: true,
+                llmToolsConfiguration: [
+                    {
+                        title: 'OpenAI Compatible (proxied)',
+                        packageName: '@promptbook/openai',
+                        className: 'OpenAiCompatibleExecutionTools',
+                        options: {
+                            ...options,
+                            isProxied: false,
+                        },
+                    },
+                ],
+            },
+        });
+    }
+    if (($isRunningInBrowser() || $isRunningInWebWorker()) && !options.dangerouslyAllowBrowser) {
+        options = { ...options, dangerouslyAllowBrowser: true };
+    }
+    return new HardcodedOpenAiCompatibleExecutionTools(options.defaultModelName, options);
+}, {
+    packageName: '@promptbook/openai',
+    className: 'OpenAiCompatibleExecutionTools',
+});
+/**
+ * Execution Tools for calling ONE SPECIFIC PRECONFIGURED OpenAI compatible provider
+ *
+ * @private for `createOpenAiCompatibleExecutionTools`
+ */
+class HardcodedOpenAiCompatibleExecutionTools extends OpenAiCompatibleExecutionTools {
+    /**
+     * Creates OpenAI compatible Execution Tools.
+     *
+     * @param options which are relevant are directly passed to the OpenAI compatible client
+     */
+    constructor(defaultModelName, options) {
+        super(options);
+        this.defaultModelName = defaultModelName;
+        this.options = options;
+    }
+    get title() {
+        return `${this.defaultModelName} on ${this.options.baseURL}`;
+    }
+    get description() {
+        return `OpenAI compatible connected to "${this.options.baseURL}" model "${this.defaultModelName}"`;
+    }
+    /**
+     * List all available models (non dynamically)
+     *
+     * Note: Purpose of this is to provide more information about models than standard listing from API
+     */
+    get HARDCODED_MODELS() {
+        return [
+            {
+                modelName: this.defaultModelName,
+                modelVariant: 'CHAT',
+                modelDescription: '', // <- TODO: What is the best value here, maybe `this.description`?
+            },
+        ];
+    }
+    /**
+     * Computes the usage
+     */
+    computeUsage(...args) {
+        return {
+            ...computeOpenAiUsage(...args),
+            price: UNCERTAIN_ZERO_VALUE, // <- TODO: Maybe in future pass this counting mechanism, but for now, we dont know
+        };
+    }
+    /**
+     * Default model for chat variant.
+     */
+    getDefaultChatModel() {
+        return this.getDefaultModel(this.defaultModelName);
+    }
+    /**
+     * Default model for completion variant.
+     */
+    getDefaultCompletionModel() {
+        throw new PipelineExecutionError(`${this.title} does not support COMPLETION model variant`);
+    }
+    /**
+     * Default model for completion variant.
+     */
+    getDefaultEmbeddingModel() {
+        throw new PipelineExecutionError(`${this.title} does not support EMBEDDING model variant`);
+    }
+}
+/**
+ * TODO: [🦺] Is there some way how to put `packageName` and `className` on top and function definition on bottom?
+ * TODO: [🎶] Naming "constructor" vs "creator" vs "factory"
+ */
+
 /**
  * Execution Tools for calling OpenAI API
  *
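
A minimal usage sketch of the new factory. The endpoint, model name, and server URL are placeholders; only `createOpenAiCompatibleExecutionTools` itself is confirmed by this diff as a public export of `@promptbook/openai`:

```ts
import { createOpenAiCompatibleExecutionTools } from '@promptbook/openai';

// Direct mode: call any OpenAI-compatible endpoint yourself.
const direct = createOpenAiCompatibleExecutionTools({
    apiKey: process.env.OPENAI_API_KEY ?? '',
    baseURL: 'https://llm.example.com/v1', // <- hypothetical endpoint
    defaultModelName: 'my-chat-model', // <- hypothetical model; becomes the hardcoded CHAT model
    isProxied: false,
});

// Proxied mode: the factory returns RemoteLlmExecutionTools, which forwards
// the same configuration (with isProxied flipped to false) to a remote server.
const proxied = createOpenAiCompatibleExecutionTools({
    apiKey: process.env.OPENAI_API_KEY ?? '',
    baseURL: 'https://llm.example.com/v1',
    defaultModelName: 'my-chat-model',
    isProxied: true,
    remoteServerUrl: 'https://promptbook-server.example.com', // <- hypothetical server
});
```

Note that `HardcodedOpenAiCompatibleExecutionTools` throws `PipelineExecutionError` for the COMPLETION and EMBEDDING variants, so this factory effectively supports CHAT prompts only.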
@@ -5350,6 +5494,9 @@ const createOpenAiExecutionTools = Object.assign((options) => {
     if (($isRunningInBrowser() || $isRunningInWebWorker()) && !options.dangerouslyAllowBrowser) {
         options = { ...options, dangerouslyAllowBrowser: true };
     }
+    if (options.isProxied) {
+        throw new NotYetImplementedError(`Proxy mode is not yet implemented in createOpenAiExecutionTools`);
+    }
     return new OpenAiExecutionTools(options);
 }, {
     packageName: '@promptbook/openai',
@@ -5360,6 +5507,7 @@ const createOpenAiExecutionTools = Object.assign((options) => {
  * TODO: [🎶] Naming "constructor" vs "creator" vs "factory"
  */
 
+// Note: OpenAiCompatibleExecutionTools is an abstract class and cannot be instantiated directly
 /**
  * Registration of LLM provider
  *
@@ -5380,6 +5528,20 @@ const _OpenAiRegistration = $llmToolsRegister.register(createOpenAiExecutionTool
  * @public exported from `@promptbook/cli`
  */
 const _OpenAiAssistantRegistration = $llmToolsRegister.register(createOpenAiAssistantExecutionTools);
+/**
+ * Registration of the OpenAI Compatible provider
+ *
+ * Note: [๐Ÿ] Configurations registrations are done in register-constructor.ts BUT constructor register-constructor.ts
+ *
+ * @public exported from `@promptbook/openai`
+ * @public exported from `@promptbook/wizard`
+ * @public exported from `@promptbook/cli`
+ */
+const _OpenAiCompatibleRegistration = $llmToolsRegister.register(createOpenAiCompatibleExecutionTools);
+/**
+ * Note: OpenAiCompatibleExecutionTools is an abstract class and cannot be registered directly.
+ * It serves as a base class for OpenAiExecutionTools and other compatible implementations.
+ */
 /**
  * TODO: [🎶] Naming "constructor" vs "creator" vs "factory"
  * Note: [💞] Ignore a discrepancy between file name and entity name
@@ -6714,7 +6876,7 @@ function jsonParse(value) {
         throw new Error(spaceTrim((block) => `
             ${block(error.message)}
 
-            The JSON text:
+            The expected JSON text:
             ${block(value)}
         `));
     }
@@ -8623,6 +8785,68 @@ function checkExpectations(expectations, value) {
  * Note: [๐Ÿ’] and [🤠] are interconnected together
  */
 
+/**
+ * Validates a prompt result against expectations and format requirements.
+ * This function provides a common abstraction for result validation that can be used
+ * by both execution logic and caching logic to ensure consistency.
+ *
+ * @param options - The validation options including result string, expectations, and format
+ * @returns Validation result with processed string and validity status
+ * @private internal function of `createPipelineExecutor` and `cacheLlmTools`
+ */
+function validatePromptResult(options) {
+    const { resultString, expectations, format } = options;
+    let processedResultString = resultString;
+    let validationError;
+    try {
+        // TODO: [๐Ÿ’] Unite object for expecting amount and format
+        if (format) {
+            if (format === 'JSON') {
+                if (!isValidJsonString(processedResultString)) {
+                    // TODO: [๐Ÿข] Do more universally via `FormatParser`
+                    try {
+                        processedResultString = extractJsonBlock(processedResultString);
+                    }
+                    catch (error) {
+                        keepUnused(error);
+                        throw new ExpectError(spaceTrim$1((block) => `
+                            Expected valid JSON string
+
+                            The expected JSON text:
+                            ${block(processedResultString)}
+                        `));
+                    }
+                }
+            }
+            else {
+                throw new UnexpectedError(`Unknown format "${format}"`);
+            }
+        }
+        // TODO: [๐Ÿ’] Unite object for expecting amount and format
+        if (expectations) {
+            checkExpectations(expectations, processedResultString);
+        }
+        return {
+            isValid: true,
+            processedResultString,
+        };
+    }
+    catch (error) {
+        if (error instanceof ExpectError) {
+            validationError = error;
+        }
+        else {
+            // Re-throw non-ExpectError errors (like UnexpectedError)
+            throw error;
+        }
+        return {
+            isValid: false,
+            processedResultString,
+            error: validationError,
+        };
+    }
+}
+
 /**
  * Executes a pipeline task with multiple attempts, including joker and retry logic. Handles different task types
  * (prompt, script, dialog, etc.), applies postprocessing, checks expectations, and updates the execution report.
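
The helper's contract, restated as a type sketch (derived from the code above; the real signatures live in the new `validatePromptResult.d.ts` from the file list):

```ts
// Sketch of the return contract implemented by validatePromptResult above.
type ValidatePromptResultSketch = (options: {
    resultString: string;
    expectations?: unknown; // `Expectations` in the published typings
    format?: string; // only 'JSON' is handled; other values throw UnexpectedError
}) => {
    isValid: boolean;
    processedResultString: string; // may differ from the input, e.g. after JSON block extraction
    error?: Error; // the ExpectError when isValid is false
};
```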
@@ -8640,17 +8864,18 @@ async function executeAttempts(options) {
         $resultString: null,
         $expectError: null,
         $scriptPipelineExecutionErrors: [],
+        $failedResults: [], // Track all failed attempts
     };
     // TODO: [๐Ÿš] Make arrayable LLMs -> single LLM DRY
     const _llms = arrayableToArray(tools.llm);
     const llmTools = _llms.length === 1 ? _llms[0] : joinLlmExecutionTools(..._llms);
-    attempts: for (let attempt = -jokerParameterNames.length; attempt < maxAttempts; attempt++) {
-        const isJokerAttempt = attempt < 0;
-        const jokerParameterName = jokerParameterNames[jokerParameterNames.length + attempt];
+    attempts: for (let attemptIndex = -jokerParameterNames.length; attemptIndex < maxAttempts; attemptIndex++) {
+        const isJokerAttempt = attemptIndex < 0;
+        const jokerParameterName = jokerParameterNames[jokerParameterNames.length + attemptIndex];
         // TODO: [🧠][๐Ÿญ] JOKERS, EXPECTATIONS, POSTPROCESSING and FOREACH
         if (isJokerAttempt && !jokerParameterName) {
             throw new UnexpectedError(spaceTrim$1((block) => `
-                Joker not found in attempt ${attempt}
+                Joker not found in attempt ${attemptIndex}
 
                 ${block(pipelineIdentification)}
             `));
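
The `attempt` → `attemptIndex` rename makes the indexing scheme explicit: joker attempts occupy the negative indices, regular LLM attempts the non-negative ones. A standalone illustration of the loop above (joker names invented for the example):

```ts
const jokerParameterNames = ['draft', 'previousAnswer']; // hypothetical jokers
const maxAttempts = 7; // the new DEFAULT_MAX_EXECUTION_ATTEMPTS

for (let attemptIndex = -jokerParameterNames.length; attemptIndex < maxAttempts; attemptIndex++) {
    const isJokerAttempt = attemptIndex < 0;
    const jokerParameterName = jokerParameterNames[jokerParameterNames.length + attemptIndex];
    // -2 and -1 try the 'draft' and 'previousAnswer' jokers; 0..6 are real LLM calls
    console.log({ attemptIndex, isJokerAttempt, jokerParameterName: jokerParameterName ?? null });
}
```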
@@ -8848,35 +9073,18 @@ async function executeAttempts(options) {
             }
         }
         // TODO: [๐Ÿ’] Unite object for expecting amount and format
-        if (task.format) {
-            if (task.format === 'JSON') {
-                if (!isValidJsonString($ongoingTaskResult.$resultString || '')) {
-                    // TODO: [๐Ÿข] Do more universally via `FormatParser`
-                    try {
-                        $ongoingTaskResult.$resultString = extractJsonBlock($ongoingTaskResult.$resultString || '');
-                    }
-                    catch (error) {
-                        keepUnused(error);
-                        throw new ExpectError(spaceTrim$1((block) => `
-                            Expected valid JSON string
-
-                            ${block(
-                            /*<- Note: No need for `pipelineIdentification`, it will be catched and added later */ '')}
-                        `));
-                    }
-                }
-            }
-            else {
-                throw new UnexpectedError(spaceTrim$1((block) => `
-                    Unknown format "${task.format}"
-
-                    ${block(pipelineIdentification)}
-                `));
+        // Use the common validation function for both format and expectations
+        if (task.format || task.expectations) {
+            const validationResult = validatePromptResult({
+                resultString: $ongoingTaskResult.$resultString || '',
+                expectations: task.expectations,
+                format: task.format,
+            });
+            if (!validationResult.isValid) {
+                throw validationResult.error;
             }
-        }
-        // TODO: [๐Ÿ’] Unite object for expecting amount and format
-        if (task.expectations) {
-            checkExpectations(task.expectations, $ongoingTaskResult.$resultString || '');
+            // Update the result string in case format processing modified it (e.g., JSON extraction)
+            $ongoingTaskResult.$resultString = validationResult.processedResultString;
         }
         break attempts;
     }
@@ -8885,6 +9093,15 @@ async function executeAttempts(options) {
             throw error;
         }
         $ongoingTaskResult.$expectError = error;
+        // Store each failed attempt
+        if (!Array.isArray($ongoingTaskResult.$failedResults)) {
+            $ongoingTaskResult.$failedResults = [];
+        }
+        $ongoingTaskResult.$failedResults.push({
+            attemptIndex,
+            result: $ongoingTaskResult.$resultString,
+            error: error,
+        });
     }
     finally {
         if (!isJokerAttempt &&
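
Each rejected attempt is now recorded instead of being overwritten by the next one. The entry shape, inferred from the `push` call above:

```ts
// Sketch of one $failedResults entry.
type FailedAttemptSketch = {
    attemptIndex: number; // negative for joker attempts, 0-based for regular attempts
    result: string | null; // the raw result string at the moment of failure
    error: Error; // the ExpectError that rejected this attempt
};
```

The final `PipelineExecutionError` (next hunk) renders these entries as an "All Failed Attempts" summary instead of only the last error and result.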
@@ -8906,35 +9123,41 @@ async function executeAttempts(options) {
                 });
             }
         }
-        if ($ongoingTaskResult.$expectError !== null && attempt === maxAttempts - 1) {
+        if ($ongoingTaskResult.$expectError !== null && attemptIndex === maxAttempts - 1) {
+            // Note: Create a summary of all failures
+            const failuresSummary = $ongoingTaskResult.$failedResults
+                .map((failure) => spaceTrim$1((block) => {
+                    var _a, _b;
+                    return `
+                        Attempt ${failure.attemptIndex + 1}:
+                        Error ${((_a = failure.error) === null || _a === void 0 ? void 0 : _a.name) || ''}:
+                        ${block((_b = failure.error) === null || _b === void 0 ? void 0 : _b.message.split('\n').map((line) => `> ${line}`).join('\n'))}
+
+                        Result:
+                        ${block(failure.result === null
+                            ? 'null'
+                            : spaceTrim$1(failure.result)
+                                .split('\n')
+                                .map((line) => `> ${line}`)
+                                .join('\n'))}
+                    `;
+                }))
+                .join('\n\n---\n\n');
             throw new PipelineExecutionError(spaceTrim$1((block) => {
-                var _a, _b, _c;
+                var _a;
                 return `
                     LLM execution failed ${maxExecutionAttempts}x
 
                     ${block(pipelineIdentification)}
 
-                    ---
                     The Prompt:
                     ${block((((_a = $ongoingTaskResult.$prompt) === null || _a === void 0 ? void 0 : _a.content) || '')
                         .split('\n')
                         .map((line) => `> ${line}`)
                         .join('\n'))}
 
-                    Last error ${((_b = $ongoingTaskResult.$expectError) === null || _b === void 0 ? void 0 : _b.name) || ''}:
-                    ${block((((_c = $ongoingTaskResult.$expectError) === null || _c === void 0 ? void 0 : _c.message) || '')
-                        .split('\n')
-                        .map((line) => `> ${line}`)
-                        .join('\n'))}
-
-                    Last result:
-                    ${block($ongoingTaskResult.$resultString === null
-                        ? 'null'
-                        : spaceTrim$1($ongoingTaskResult.$resultString)
-                            .split('\n')
-                            .map((line) => `> ${line}`)
-                            .join('\n'))}
-                    ---
+                    All Failed Attempts:
+                    ${block(failuresSummary)}
                 `;
             }));
         }
@@ -11713,6 +11936,7 @@ function cacheLlmTools(llmTools, options = {}) {
         },
     };
     const callCommonModel = async (prompt) => {
+        var _a;
         const { parameters, content, modelRequirements } = prompt;
         // <- Note: These are relevant things from the prompt that the cache key should depend on.
         // TODO: Maybe some standalone function for normalization of content for cache
@@ -11763,21 +11987,70 @@ function cacheLlmTools(llmTools, options = {}) {
         }
        // TODO: [🧠] !!5 How to do timing in mixed cache / non-cache situation
        // promptResult.timing: FromtoItems
-        await storage.setItem(key, {
-            date: $getCurrentDate(),
-            promptbookVersion: PROMPTBOOK_ENGINE_VERSION,
-            bookVersion: BOOK_LANGUAGE_VERSION,
-            prompt: {
-                ...prompt,
-                parameters: Object.entries(parameters).length === Object.entries(relevantParameters).length
-                    ? parameters
-                    : {
-                        ...relevantParameters,
-                        note: `<- Note: Only relevant parameters are stored in the cache`,
-                    },
-            },
-            promptResult,
-        });
+        // Check if the result is valid and should be cached
+        // A result is considered failed if:
+        // 1. It has a content property that is null or undefined
+        // 2. It has an error property that is truthy
+        // 3. It has a success property that is explicitly false
+        // 4. It doesn't meet the prompt's expectations or format requirements
+        const isBasicFailedResult = promptResult.content === null ||
+            promptResult.content === undefined ||
+            promptResult.error ||
+            promptResult.success === false;
+        let shouldCache = !isBasicFailedResult;
+        // If the basic result is valid, check against expectations and format
+        if (shouldCache && promptResult.content) {
+            try {
+                const validationResult = validatePromptResult({
+                    resultString: promptResult.content,
+                    expectations: prompt.expectations,
+                    format: prompt.format,
+                });
+                shouldCache = validationResult.isValid;
+                if (!shouldCache && isVerbose) {
+                    console.info('Not caching result that fails expectations/format validation for key:', key, {
+                        content: promptResult.content,
+                        expectations: prompt.expectations,
+                        format: prompt.format,
+                        validationError: (_a = validationResult.error) === null || _a === void 0 ? void 0 : _a.message,
+                    });
+                }
+            }
+            catch (error) {
+                // If validation throws an unexpected error, don't cache
+                shouldCache = false;
+                if (isVerbose) {
+                    console.info('Not caching result due to validation error for key:', key, {
+                        content: promptResult.content,
+                        validationError: error instanceof Error ? error.message : String(error),
+                    });
+                }
+            }
+        }
+        if (shouldCache) {
+            await storage.setItem(key, {
+                date: $getCurrentDate(),
+                promptbookVersion: PROMPTBOOK_ENGINE_VERSION,
+                bookVersion: BOOK_LANGUAGE_VERSION,
+                prompt: {
+                    ...prompt,
+                    parameters: Object.entries(parameters).length === Object.entries(relevantParameters).length
+                        ? parameters
+                        : {
+                            ...relevantParameters,
+                            note: `<- Note: Only relevant parameters are stored in the cache`,
+                        },
+                },
+                promptResult,
+            });
+        }
+        else if (isVerbose && isBasicFailedResult) {
+            console.info('Not caching failed result for key:', key, {
+                content: promptResult.content,
+                error: promptResult.error,
+                success: promptResult.success,
+            });
+        }
         return promptResult;
     };
     if (llmTools.callChatModel !== undefined) {
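
The practical effect: results that fail basic checks or the prompt's expectations/format are no longer persisted, so a later run retries instead of replaying a cached failure. A usage sketch (the `MemoryStorage` import is an assumption, not confirmed by this diff):

```ts
import { cacheLlmTools, MemoryStorage } from '@promptbook/core'; // <- MemoryStorage assumed
import { createOpenAiCompatibleExecutionTools } from '@promptbook/openai';

const llmTools = createOpenAiCompatibleExecutionTools({
    apiKey: process.env.OPENAI_API_KEY ?? '',
    baseURL: 'https://llm.example.com/v1', // <- hypothetical endpoint
    defaultModelName: 'my-chat-model',
    isProxied: false,
});

// As of this release, invalid results are skipped rather than written to storage.
const cachedTools = cacheLlmTools(llmTools, {
    storage: new MemoryStorage(),
    isVerbose: true, // surfaces the new "Not caching ..." diagnostics
});
```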
@@ -11858,8 +12131,10 @@ function createLlmToolsFromConfiguration(configuration, options = {}) {
         .list()
         .find(({ packageName, className }) => llmConfiguration.packageName === packageName && llmConfiguration.className === className);
     if (registeredItem === undefined) {
+        console.log('!!! $llmToolsRegister.list()', $llmToolsRegister.list());
         throw new Error(spaceTrim((block) => `
             There is no constructor for LLM provider \`${llmConfiguration.className}\` from \`${llmConfiguration.packageName}\`
+            Running in ${!$isRunningInBrowser() ? '' : 'browser environment'}${!$isRunningInNode() ? '' : 'node environment'}${!$isRunningInWebWorker() ? '' : 'worker environment'}
 
             You have probably forgotten install and import the provider package.
             To fix this issue, you can:
@@ -16645,5 +16920,5 @@ const wizard = new Wizard();
  * Note: [🟢] Code in this file should never be never released in packages that could be imported into browser environment
  */
 
-export { BOOK_LANGUAGE_VERSION, PROMPTBOOK_ENGINE_VERSION, _AnthropicClaudeMetadataRegistration, _AnthropicClaudeRegistration, _AzureOpenAiMetadataRegistration, _AzureOpenAiRegistration, _BoilerplateScraperMetadataRegistration, _BoilerplateScraperRegistration, _DeepseekMetadataRegistration, _DeepseekRegistration, _DocumentScraperMetadataRegistration, _DocumentScraperRegistration, _GoogleMetadataRegistration, _GoogleRegistration, _LegacyDocumentScraperMetadataRegistration, _LegacyDocumentScraperRegistration, _MarkdownScraperMetadataRegistration, _MarkdownScraperRegistration, _MarkitdownScraperMetadataRegistration, _MarkitdownScraperRegistration, _OllamaMetadataRegistration, _OllamaRegistration, _OpenAiAssistantMetadataRegistration, _OpenAiAssistantRegistration, _OpenAiMetadataRegistration, _OpenAiRegistration, _PdfScraperMetadataRegistration, _PdfScraperRegistration, _WebsiteScraperMetadataRegistration, _WebsiteScraperRegistration, wizard };
+export { BOOK_LANGUAGE_VERSION, PROMPTBOOK_ENGINE_VERSION, _AnthropicClaudeMetadataRegistration, _AnthropicClaudeRegistration, _AzureOpenAiMetadataRegistration, _AzureOpenAiRegistration, _BoilerplateScraperMetadataRegistration, _BoilerplateScraperRegistration, _DeepseekMetadataRegistration, _DeepseekRegistration, _DocumentScraperMetadataRegistration, _DocumentScraperRegistration, _GoogleMetadataRegistration, _GoogleRegistration, _LegacyDocumentScraperMetadataRegistration, _LegacyDocumentScraperRegistration, _MarkdownScraperMetadataRegistration, _MarkdownScraperRegistration, _MarkitdownScraperMetadataRegistration, _MarkitdownScraperRegistration, _OllamaMetadataRegistration, _OllamaRegistration, _OpenAiAssistantMetadataRegistration, _OpenAiAssistantRegistration, _OpenAiCompatibleMetadataRegistration, _OpenAiCompatibleRegistration, _OpenAiMetadataRegistration, _OpenAiRegistration, _PdfScraperMetadataRegistration, _PdfScraperRegistration, _WebsiteScraperMetadataRegistration, _WebsiteScraperRegistration, wizard };
 //# sourceMappingURL=index.es.js.map