@promptbook/node 0.88.0-9 → 0.89.0-1

This diff covers publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between the two versions as they appear in their public registries.
Files changed (25)
  1. package/README.md +35 -14
  2. package/esm/index.es.js +46 -10
  3. package/esm/index.es.js.map +1 -1
  4. package/esm/typings/src/_packages/core.index.d.ts +2 -2
  5. package/esm/typings/src/_packages/types.index.d.ts +10 -0
  6. package/esm/typings/src/config.d.ts +1 -1
  7. package/esm/typings/src/errors/PipelineExecutionError.d.ts +5 -0
  8. package/esm/typings/src/errors/utils/ErrorJson.d.ts +5 -0
  9. package/esm/typings/src/llm-providers/_common/utils/count-total-usage/LlmExecutionToolsWithTotalUsage.d.ts +7 -0
  10. package/esm/typings/src/llm-providers/_common/utils/count-total-usage/{countTotalUsage.d.ts → countUsage.d.ts} +1 -1
  11. package/esm/typings/src/playground/BrjappConnector.d.ts +64 -0
  12. package/esm/typings/src/playground/brjapp-api-schema.d.ts +12879 -0
  13. package/esm/typings/src/playground/playground.d.ts +5 -0
  14. package/esm/typings/src/remote-server/socket-types/_subtypes/PromptbookServer_Identification.d.ts +2 -1
  15. package/esm/typings/src/remote-server/types/RemoteServerOptions.d.ts +15 -3
  16. package/esm/typings/src/types/typeAliases.d.ts +2 -2
  17. package/esm/typings/src/utils/expectation-counters/countCharacters.d.ts +3 -0
  18. package/esm/typings/src/utils/expectation-counters/countLines.d.ts +3 -0
  19. package/esm/typings/src/utils/expectation-counters/countPages.d.ts +3 -0
  20. package/esm/typings/src/utils/expectation-counters/countParagraphs.d.ts +3 -0
  21. package/esm/typings/src/utils/expectation-counters/countSentences.d.ts +3 -0
  22. package/esm/typings/src/utils/expectation-counters/countWords.d.ts +3 -0
  23. package/package.json +2 -2
  24. package/umd/index.umd.js +46 -10
  25. package/umd/index.umd.js.map +1 -1
package/README.md CHANGED
@@ -62,6 +62,8 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 During the computer revolution, we have seen [multiple generations of computer languages](https://github.com/webgptorg/promptbook/discussions/180), from the physical rewiring of the vacuum tubes through low-level machine code to the high-level languages like Python or JavaScript. And now, we're on the edge of the **next revolution**!
 
+
+
 It's a revolution of writing software in **plain human language** that is understandable and executable by both humans and machines – and it's going to change everything!
 
 The incredible growth in power of microprocessors and the Moore's Law have been the driving force behind the ever-more powerful languages, and it's been an amazing journey! Similarly, the large language models (like GPT or Claude) are the next big thing in language technology, and they're set to transform the way we interact with computers.
@@ -72,6 +74,9 @@ This shift is going to happen, whether we are ready for it or not. Our mission i
 
 
 
+
+
+
 ## 🚀 Get started
 
 Take a look at the simple starter kit with books integrated into the **Hello World** sample applications:
@@ -83,6 +88,8 @@ Take a look at the simple starter kit with books integrated into the **Hello Wor
 
 
 
+
+
 ## 💜 The Promptbook Project
 
 Promptbook project is ecosystem of multiple projects and tools, following is a list of most important pieces of the project:
@@ -118,22 +125,35 @@ Promptbook project is ecosystem of multiple projects and tools, following is a l
 </tbody>
 </table>
 
+Hello world examples:
+
+- [Hello world](https://github.com/webgptorg/hello-world)
+- [Hello world in Node.js](https://github.com/webgptorg/hello-world-node-js)
+- [Hello world in Next.js](https://github.com/webgptorg/hello-world-next-js)
+
+
+
 We also have a community of developers and users of **Promptbook**:
 
 - [Discord community](https://discord.gg/x3QWNaa89N)
 - [Landing page `ptbk.io`](https://ptbk.io)
 - [Github discussions](https://github.com/webgptorg/promptbook/discussions)
 - [LinkedIn `Promptbook`](https://linkedin.com/company/promptbook)
-- [Facebook `Promptbook`](https://www.facebook.com/61560776453536)
+- [Facebook `Promptbook`](https://www.facebook.com/61560776453536)
 
 And **Promptbook.studio** branded socials:
 
+
+
 - [Instagram `@promptbook.studio`](https://www.instagram.com/promptbook.studio/)
 
 And **Promptujeme** sub-brand:
 
 _/Subbrand for Czech clients/_
 
+
+
+
 - [Promptujeme.cz](https://www.promptujeme.cz/)
 - [Facebook `Promptujeme`](https://www.facebook.com/promptujeme/)
 
@@ -151,6 +171,8 @@ _/Sub-brand for images and graphics generated via Promptbook prompting/_
 
 ## 💙 The Book language
 
+
+
 Following is the documentation and blueprint of the [Book language](https://github.com/webgptorg/book).
 
 Book is a language that can be used to write AI applications, agents, workflows, automations, knowledgebases, translators, sheet processors, email automations and more. It allows you to harness the power of AI models in human-like terms, without the need to know the specifics and technicalities of the models.
@@ -200,6 +222,8 @@ Personas can have access to different knowledge, tools and actions. They can als
 
 - [PERSONA](https://github.com/webgptorg/promptbook/blob/main/documents/commands/PERSONA.md)
 
+
+
 ### **How:** Knowledge, Instruments and Actions
 
 The resources used by the personas are used to do the work.
@@ -275,11 +299,9 @@ Or you can install them separately:
 
 ## 📚 Dictionary
 
-### 📚 Dictionary
-
 The following glossary is used to clarify certain concepts:
 
-#### General LLM / AI terms
+### General LLM / AI terms
 
 - **Prompt drift** is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
 - **Pipeline, workflow or chain** is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
@@ -290,9 +312,13 @@ The following glossary is used to clarify certain concepts:
 - **Retrieval-augmented generation** is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
 - **Longtail** refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.
 
-_Note: Thos section is not complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
 
-#### 💯 Core concepts
+
+_Note: This section is not complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
+
+
+
+### 💯 Core concepts
 
 - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
 - [📯 Pipeline](https://github.com/webgptorg/promptbook/discussions/64)
 
@@ -305,7 +331,7 @@ _Note: Thos section is not complete dictionary, more list of general AI / LLM te
 - [🔣 Words not tokens](https://github.com/webgptorg/promptbook/discussions/29)
 - [☯ Separation of concerns](https://github.com/webgptorg/promptbook/discussions/32)
 
-##### Advanced concepts
+#### Advanced concepts
 
 - [📚 Knowledge (Retrieval-augmented generation)](https://github.com/webgptorg/promptbook/discussions/41)
 - [🌏 Remote server](https://github.com/webgptorg/promptbook/discussions/89)
@@ -320,11 +346,6 @@ _Note: Thos section is not complete dictionary, more list of general AI / LLM te
 - [👮 Agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39)
 - [view more](https://github.com/webgptorg/promptbook/discussions/categories/concepts)
 
-### Terms specific to Promptbook TypeScript implementation
-
-- Anonymous mode
-- Application mode
-
 
 
 ## 🚂 Promptbook Engine
@@ -395,11 +416,11 @@ See [TODO.md](./TODO.md)
 <div style="display: flex; align-items: center; gap: 20px;">
 
 <a href="https://promptbook.studio/">
-<img src="./design/promptbook-studio-logo.png" alt="Partner 3" height="100">
+<img src="./design/promptbook-studio-logo.png" alt="Partner 3" height="70">
 </a>
 
 <a href="https://technologickainkubace.org/en/about-technology-incubation/about-the-project/">
-<img src="./other/partners/CI-Technology-Incubation.png" alt="Technology Incubation" height="100">
+<img src="./other/partners/CI-Technology-Incubation.png" alt="Technology Incubation" height="70">
 </a>
 
 </div>
package/esm/index.es.js CHANGED
@@ -30,7 +30,7 @@ const BOOK_LANGUAGE_VERSION = '1.0.0';
  * @generated
  * @see https://github.com/webgptorg/promptbook
  */
-const PROMPTBOOK_ENGINE_VERSION = '0.88.0-9';
+const PROMPTBOOK_ENGINE_VERSION = '0.89.0-1';
 /**
  * TODO: string_promptbook_version should be constrained to the all versions of Promptbook engine
  * Note: [💞] Ignore a discrepancy between file name and entity name
@@ -157,7 +157,7 @@ const DEFAULT_MAX_PARALLEL_COUNT = 5; // <- TODO: [🤹‍♂️]
  *
  * @public exported from `@promptbook/core`
  */
-const DEFAULT_MAX_EXECUTION_ATTEMPTS = 3; // <- TODO: [🤹‍♂️]
+const DEFAULT_MAX_EXECUTION_ATTEMPTS = 10; // <- TODO: [🤹‍♂️]
 // <- TODO: [🕝] Make also `BOOKS_DIRNAME_ALTERNATIVES`
 /**
  * Where to store the temporary downloads
@@ -1642,11 +1642,17 @@ function jsonStringsToJsons(object) {
  */
 class PipelineExecutionError extends Error {
     constructor(message) {
+        // Added id parameter
         super(message);
         this.name = 'PipelineExecutionError';
+        // TODO: [🐙] DRY - Maybe $randomId
+        this.id = `error-${$randomToken(8 /* <- TODO: To global config + Use Base58 to avoid simmilar char conflicts */)}`;
         Object.setPrototypeOf(this, PipelineExecutionError.prototype);
     }
 }
+/**
+ * TODO: !!!!!! Add id to all errors
+ */
 
 /**
  * This error indicates problems parsing the format value
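
As the hunk above shows, `PipelineExecutionError` now attaches a random `id` (for example `error-a1b2c3d4`) to every instance. A minimal consumer-side sketch, assuming the class remains exported from `@promptbook/core` and that the new `id` property is declared in the updated `PipelineExecutionError.d.ts`:

```ts
import { PipelineExecutionError } from '@promptbook/core';

try {
    throw new PipelineExecutionError('Prompt expectations were not met');
} catch (error) {
    if (error instanceof PipelineExecutionError) {
        // `id` is generated in the constructor via `$randomToken(8)`
        console.error(`[${error.id}]`, error.message);
    }
}
```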
@@ -1834,7 +1840,7 @@ const ALL_ERRORS = {
  * @public exported from `@promptbook/utils`
  */
 function deserializeError(error) {
-    const { name, stack } = error;
+    const { name, stack, id } = error; // Added id
     let { message } = error;
     let ErrorClass = ALL_ERRORS[error.name];
     if (ErrorClass === undefined) {
@@ -1849,7 +1855,9 @@
             ${block(stack || '')}
         `);
     }
-    return new ErrorClass(message);
+    const deserializedError = new ErrorClass(message);
+    deserializedError.id = id; // Assign id to the error object
+    return deserializedError;
 }
 
 /**
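
Combined with the `serializeError` change further down (which now includes `id` in the serialized JSON), the hunks above mean the error `id` survives a full serialize/deserialize round trip, so both sides of a remote-server call can correlate the same error. A rough sketch, assuming both helpers stay exported from `@promptbook/utils` as the surrounding JSDoc indicates:

```ts
import { deserializeError, serializeError } from '@promptbook/utils';
import { PipelineExecutionError } from '@promptbook/core';

const original = new PipelineExecutionError('Something failed on the server');

// The serialized JSON now carries `id` next to `name`, `message` and `stack`
const errorJson = serializeError(original);

// After deserialization the same `id` is re-assigned to the revived error
const restored = deserializeError(errorJson);
console.log((restored as PipelineExecutionError).id === original.id); // -> true
```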
@@ -1899,6 +1907,7 @@ function assertsTaskSuccessful(executionResult) {
  */
 function createTask(options) {
     const { taskType, taskProcessCallback } = options;
+    // TODO: [🐙] DRY
     const taskId = `${taskType.toLowerCase().substring(0, 4)}-${$randomToken(8 /* <- TODO: To global config + Use Base58 to avoid simmilar char conflicts */)}`;
     let status = 'RUNNING';
     const createdAt = new Date();
@@ -1931,7 +1940,7 @@
             assertsTaskSuccessful(executionResult);
             status = 'FINISHED';
             currentValue = jsonStringsToJsons(executionResult);
-            // <- TODO: Convert JSON values in string to JSON objects
+            // <- TODO: [🧠] Is this a good idea to convert JSON strins to JSONs?
             partialResultSubject.next(executionResult);
         }
         catch (error) {
@@ -1995,19 +2004,21 @@
  */
 function serializeError(error) {
     const { name, message, stack } = error;
+    const { id } = error;
     if (!Object.keys(ALL_ERRORS).includes(name)) {
         console.error(spaceTrim((block) => `
-
+
             Cannot serialize error with name "${name}"
 
             ${block(stack || message)}
-
+
         `));
     }
     return {
         name: name,
         message,
         stack,
+        id, // Include id in the serialized object
     };
 }
 
@@ -3050,6 +3061,9 @@ function countCharacters(text) {
     text = text.replace(/\p{Extended_Pictographic}(\u{200D}\p{Extended_Pictographic})*/gu, '-');
     return text.length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Number of characters per standard line with 11pt Arial font size.
@@ -3081,6 +3095,9 @@ function countLines(text) {
     const lines = text.split('\n');
     return lines.reduce((count, line) => count + Math.ceil(line.length / CHARACTERS_PER_STANDARD_LINE), 0);
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of pages in the text
@@ -3092,6 +3109,9 @@ function countLines(text) {
 function countPages(text) {
     return Math.ceil(countLines(text) / LINES_PER_STANDARD_PAGE);
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of paragraphs in the text
@@ -3101,6 +3121,9 @@ function countPages(text) {
 function countParagraphs(text) {
     return text.split(/\n\s*\n/).filter((paragraph) => paragraph.trim() !== '').length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Split text into sentences
@@ -3118,6 +3141,9 @@ function splitIntoSentences(text) {
 function countSentences(text) {
     return splitIntoSentences(text).length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 const defaultDiacriticsRemovalMap = [
     {
@@ -3392,6 +3418,9 @@ function countWords(text) {
     text = text.replace(/([a-z])([A-Z])/g, '$1 $2');
     return text.split(/[^a-zа-я0-9]+/i).filter((word) => word.length > 0).length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Index of all counter functions
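
The counter changes above are comment-only, but the context lines show how the counters relate: `countLines` rounds each physical line up by `CHARACTERS_PER_STANDARD_LINE`, and `countPages` divides the resulting line count by `LINES_PER_STANDARD_PAGE`. A small usage sketch, assuming the counters keep their `@promptbook/utils` exports:

```ts
import { countCharacters, countLines, countPages, countWords } from '@promptbook/utils';

const text = 'Hello Promptbook! This is one short paragraph.\n'.repeat(200);

console.log(countCharacters(text)); // Unicode-aware character count (an emoji sequence counts as one)
console.log(countWords(text));      // Words split on non-alphanumeric boundaries
console.log(countLines(text));      // "Standard" lines: each physical line rounded up by its length
console.log(countPages(text));      // Standard pages derived from the standard-line count
```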
@@ -4447,8 +4476,9 @@ async function forEachAsync(array, options, callbackfunction) {
  * @returns LLM tools with same functionality with added total cost counting
  * @public exported from `@promptbook/core`
  */
-function countTotalUsage(llmTools) {
+function countUsage(llmTools) {
     let totalUsage = ZERO_USAGE;
+    const spending = new Subject();
     const proxyTools = {
         get title() {
             // TODO: [🧠] Maybe put here some suffix
@@ -4458,12 +4488,15 @@ function countTotalUsage(llmTools) {
             // TODO: [🧠] Maybe put here some suffix
             return llmTools.description;
         },
-        async checkConfiguration() {
+        checkConfiguration() {
            return /* not await */ llmTools.checkConfiguration();
         },
         listModels() {
             return /* not await */ llmTools.listModels();
         },
+        spending() {
+            return spending.asObservable();
+        },
         getTotalUsage() {
             // <- Note: [🥫] Not using getter `get totalUsage` but `getTotalUsage` to allow this object to be proxied
             return totalUsage;
@@ -4474,6 +4507,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callChatModel through countTotalUsage');
             const promptResult = await llmTools.callChatModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -4482,6 +4516,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callCompletionModel through countTotalUsage');
             const promptResult = await llmTools.callCompletionModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -4490,6 +4525,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callEmbeddingModel through countTotalUsage');
             const promptResult = await llmTools.callEmbeddingModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -5302,7 +5338,7 @@ async function preparePipeline(pipeline, tools, options) {
     // TODO: [🚐] Make arrayable LLMs -> single LLM DRY
     const _llms = arrayableToArray(tools.llm);
     const llmTools = _llms.length === 1 ? _llms[0] : joinLlmExecutionTools(..._llms);
-    const llmToolsWithUsage = countTotalUsage(llmTools);
+    const llmToolsWithUsage = countUsage(llmTools);
     // <- TODO: [🌯]
     /*
     TODO: [🧠][🪑][🔃] Should this be done or not
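
Taken together, the renamed `countUsage` wrapper (formerly `countTotalUsage`) now exposes a `spending()` observable that emits one usage record per chat, completion or embedding call, alongside the accumulated `getTotalUsage()`. A minimal sketch of how a caller might tap into it; `existingLlmTools` stands for whichever provider you already construct, and the `@promptbook/core` export path is assumed from the `@public` annotation in the hunk above (the `LlmExecutionTools` type import from `@promptbook/types` is likewise an assumption):

```ts
import { countUsage } from '@promptbook/core';
import type { LlmExecutionTools } from '@promptbook/types';

// Whichever provider you already use (OpenAI, Anthropic, a remote server, ...)
declare const existingLlmTools: LlmExecutionTools;

const llmTools = countUsage(existingLlmTools);

// New in this version: a stream of per-call usage records
const subscription = llmTools.spending().subscribe((usage) => {
    console.log('Usage of last model call:', usage);
});

// ... execute pipelines or prompts with `llmTools` ...

console.log('Accumulated usage so far:', llmTools.getTotalUsage());
subscription.unsubscribe();
```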