@promptbook/wizard 0.95.0 → 0.98.0-2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -25,6 +25,10 @@ Write AI applications using plain human language across multiple models and plat
 
 
 
+ <blockquote style="color: #ff8811">
+ <b>⚠ Warning:</b> This is a pre-release version of the library. It is not yet ready for production use. Please look at <a href="https://www.npmjs.com/package/@promptbook/core?activeTab=versions">latest stable release</a>.
+ </blockquote>
+
  ## 📦 Package `@promptbook/wizard`
 
  - Promptbooks are [divided into several](#-packages) packages, all are published from [single monorepo](https://github.com/webgptorg/promptbook).
@@ -70,6 +74,8 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
  During the computer revolution, we have seen [multiple generations of computer languages](https://github.com/webgptorg/promptbook/discussions/180), from the physical rewiring of the vacuum tubes through low-level machine code to the high-level languages like Python or JavaScript. And now, we're on the edge of the **next revolution**!
 
+
+
  It's a revolution of writing software in **plain human language** that is understandable and executable by both humans and machines – and it's going to change everything!
 
  The incredible growth in power of microprocessors and the Moore's Law have been the driving force behind the ever-more powerful languages, and it's been an amazing journey! Similarly, the large language models (like GPT or Claude) are the next big thing in language technology, and they're set to transform the way we interact with computers.
@@ -195,6 +201,8 @@ Join our growing community of developers and users:
 
  _A concise, Markdown-based DSL for crafting AI workflows and automations._
 
+
+
  ### Introduction
 
  Book is a Markdown-based language that simplifies the creation of AI applications, workflows, and automations. With human-readable commands, you can define inputs, outputs, personas, knowledge sources, and actions—without needing model-specific details.
@@ -244,6 +252,8 @@ Personas can have access to different knowledge, tools and actions. They can als
 
  - [PERSONA](https://github.com/webgptorg/promptbook/blob/main/documents/commands/PERSONA.md)
 
+
+
  ### **3. How:** Knowledge, Instruments and Actions
 
  The resources used by the personas are used to do the work.
@@ -343,6 +353,8 @@ The following glossary is used to clarify certain concepts:
 
  _Note: This section is not a complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
 
+
+
  ### 💯 Core concepts
 
  - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
package/esm/index.es.js CHANGED
@@ -38,7 +38,7 @@ const BOOK_LANGUAGE_VERSION = '1.0.0';
  * @generated
  * @see https://github.com/webgptorg/promptbook
  */
- const PROMPTBOOK_ENGINE_VERSION = '0.95.0';
+ const PROMPTBOOK_ENGINE_VERSION = '0.98.0-2';
  /**
  * TODO: string_promptbook_version should be constrained to the all versions of Promptbook engine
  * Note: [💞] Ignore a discrepancy between file name and entity name
@@ -11763,21 +11763,39 @@ function cacheLlmTools(llmTools, options = {}) {
         }
         // TODO: [🧠] !!5 How to do timing in mixed cache / non-cache situation
         // promptResult.timing: FromtoItems
-        await storage.setItem(key, {
-            date: $getCurrentDate(),
-            promptbookVersion: PROMPTBOOK_ENGINE_VERSION,
-            bookVersion: BOOK_LANGUAGE_VERSION,
-            prompt: {
-                ...prompt,
-                parameters: Object.entries(parameters).length === Object.entries(relevantParameters).length
-                    ? parameters
-                    : {
-                        ...relevantParameters,
-                        note: `<- Note: Only relevant parameters are stored in the cache`,
-                    },
-            },
-            promptResult,
-        });
+        // Check if the result is valid and should be cached
+        // A result is considered failed if:
+        // 1. It has a content property that is null or undefined
+        // 2. It has an error property that is truthy
+        // 3. It has a success property that is explicitly false
+        const isFailedResult = promptResult.content === null ||
+            promptResult.content === undefined ||
+            promptResult.error ||
+            promptResult.success === false;
+        if (!isFailedResult) {
+            await storage.setItem(key, {
+                date: $getCurrentDate(),
+                promptbookVersion: PROMPTBOOK_ENGINE_VERSION,
+                bookVersion: BOOK_LANGUAGE_VERSION,
+                prompt: {
+                    ...prompt,
+                    parameters: Object.entries(parameters).length === Object.entries(relevantParameters).length
+                        ? parameters
+                        : {
+                            ...relevantParameters,
+                            note: `<- Note: Only relevant parameters are stored in the cache`,
+                        },
+                },
+                promptResult,
+            });
+        }
+        else if (isVerbose) {
+            console.info('Not caching failed result for key:', key, {
+                content: promptResult.content,
+                error: promptResult.error,
+                success: promptResult.success,
+            });
+        }
         return promptResult;
     };
     if (llmTools.callChatModel !== undefined) {
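
Note on the cacheLlmTools change above: as of 0.98.0-2, a failed prompt result is no longer written to the cache, and when the isVerbose option is set the skipped write is logged instead. Restated as a standalone predicate, a result counts as failed when its content is null or undefined, its error is truthy, or its success flag is explicitly false. The sketch below only illustrates that guard; the helper name isFailedPromptResult and the sample result objects are ours for illustration and are not part of the package API.

// Illustrative restatement of the guard added inside cacheLlmTools
// (not an export of @promptbook/wizard)
function isFailedPromptResult(promptResult) {
    return (
        promptResult.content === null ||
        promptResult.content === undefined ||
        Boolean(promptResult.error) ||
        promptResult.success === false
    );
}

// Results like these are now skipped by the cache:
console.log(isFailedPromptResult({ content: null })); // true
console.log(isFailedPromptResult({ content: 'Hello', error: new Error('Model call failed') })); // true
console.log(isFailedPromptResult({ content: 'Hello', success: false })); // true

// A successful result is still cached exactly as before:
console.log(isFailedPromptResult({ content: 'Hello', success: true })); // false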