@promptbook/pdf 0.88.0-9 → 0.89.0-1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (25)
  1. package/README.md +35 -14
  2. package/esm/index.es.js +62 -26
  3. package/esm/index.es.js.map +1 -1
  4. package/esm/typings/src/_packages/core.index.d.ts +2 -2
  5. package/esm/typings/src/_packages/types.index.d.ts +10 -0
  6. package/esm/typings/src/config.d.ts +1 -1
  7. package/esm/typings/src/errors/PipelineExecutionError.d.ts +5 -0
  8. package/esm/typings/src/errors/utils/ErrorJson.d.ts +5 -0
  9. package/esm/typings/src/llm-providers/_common/utils/count-total-usage/LlmExecutionToolsWithTotalUsage.d.ts +7 -0
  10. package/esm/typings/src/llm-providers/_common/utils/count-total-usage/{countTotalUsage.d.ts → countUsage.d.ts} +1 -1
  11. package/esm/typings/src/playground/BrjappConnector.d.ts +64 -0
  12. package/esm/typings/src/playground/brjapp-api-schema.d.ts +12879 -0
  13. package/esm/typings/src/playground/playground.d.ts +5 -0
  14. package/esm/typings/src/remote-server/socket-types/_subtypes/PromptbookServer_Identification.d.ts +2 -1
  15. package/esm/typings/src/remote-server/types/RemoteServerOptions.d.ts +15 -3
  16. package/esm/typings/src/types/typeAliases.d.ts +2 -2
  17. package/esm/typings/src/utils/expectation-counters/countCharacters.d.ts +3 -0
  18. package/esm/typings/src/utils/expectation-counters/countLines.d.ts +3 -0
  19. package/esm/typings/src/utils/expectation-counters/countPages.d.ts +3 -0
  20. package/esm/typings/src/utils/expectation-counters/countParagraphs.d.ts +3 -0
  21. package/esm/typings/src/utils/expectation-counters/countSentences.d.ts +3 -0
  22. package/esm/typings/src/utils/expectation-counters/countWords.d.ts +3 -0
  23. package/package.json +2 -2
  24. package/umd/index.umd.js +65 -29
  25. package/umd/index.umd.js.map +1 -1
package/README.md CHANGED
@@ -60,6 +60,8 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 During the computer revolution, we have seen [multiple generations of computer languages](https://github.com/webgptorg/promptbook/discussions/180), from the physical rewiring of the vacuum tubes through low-level machine code to the high-level languages like Python or JavaScript. And now, we're on the edge of the **next revolution**!
 
+
+
 It's a revolution of writing software in **plain human language** that is understandable and executable by both humans and machines – and it's going to change everything!
 
 The incredible growth in power of microprocessors and the Moore's Law have been the driving force behind the ever-more powerful languages, and it's been an amazing journey! Similarly, the large language models (like GPT or Claude) are the next big thing in language technology, and they're set to transform the way we interact with computers.
@@ -70,6 +72,9 @@ This shift is going to happen, whether we are ready for it or not. Our mission i
 
 
 
+
+
+
 ## 🚀 Get started
 
 Take a look at the simple starter kit with books integrated into the **Hello World** sample applications:
@@ -81,6 +86,8 @@ Take a look at the simple starter kit with books integrated into the **Hello Wor
 
 
 
+
+
 ## 💜 The Promptbook Project
 
 Promptbook project is ecosystem of multiple projects and tools, following is a list of most important pieces of the project:
@@ -116,22 +123,35 @@ Promptbook project is ecosystem of multiple projects and tools, following is a l
 </tbody>
 </table>
 
+Hello world examples:
+
+- [Hello world](https://github.com/webgptorg/hello-world)
+- [Hello world in Node.js](https://github.com/webgptorg/hello-world-node-js)
+- [Hello world in Next.js](https://github.com/webgptorg/hello-world-next-js)
+
+
+
 We also have a community of developers and users of **Promptbook**:
 
 - [Discord community](https://discord.gg/x3QWNaa89N)
 - [Landing page `ptbk.io`](https://ptbk.io)
 - [Github discussions](https://github.com/webgptorg/promptbook/discussions)
 - [LinkedIn `Promptbook`](https://linkedin.com/company/promptbook)
-- [Facebook `Promptbook`](https://www.facebook.com/61560776453536)
+- [Facebook `Promptbook`](https://www.facebook.com/61560776453536)
 
 And **Promptbook.studio** branded socials:
 
+
+
 - [Instagram `@promptbook.studio`](https://www.instagram.com/promptbook.studio/)
 
 And **Promptujeme** sub-brand:
 
 _/Subbrand for Czech clients/_
 
+
+
+
 - [Promptujeme.cz](https://www.promptujeme.cz/)
 - [Facebook `Promptujeme`](https://www.facebook.com/promptujeme/)
 
@@ -149,6 +169,8 @@ _/Sub-brand for images and graphics generated via Promptbook prompting/_
 
 ## 💙 The Book language
 
+
+
 Following is the documentation and blueprint of the [Book language](https://github.com/webgptorg/book).
 
 Book is a language that can be used to write AI applications, agents, workflows, automations, knowledgebases, translators, sheet processors, email automations and more. It allows you to harness the power of AI models in human-like terms, without the need to know the specifics and technicalities of the models.
@@ -198,6 +220,8 @@ Personas can have access to different knowledge, tools and actions. They can als
 
 - [PERSONA](https://github.com/webgptorg/promptbook/blob/main/documents/commands/PERSONA.md)
 
+
+
 ### **How:** Knowledge, Instruments and Actions
 
 The resources used by the personas are used to do the work.
@@ -273,11 +297,9 @@ Or you can install them separately:
 
 ## 📚 Dictionary
 
-### 📚 Dictionary
-
 The following glossary is used to clarify certain concepts:
 
-#### General LLM / AI terms
+### General LLM / AI terms
 
 - **Prompt drift** is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
 - **Pipeline, workflow or chain** is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
@@ -288,9 +310,13 @@ The following glossary is used to clarify certain concepts:
 - **Retrieval-augmented generation** is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
 - **Longtail** refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.
 
-_Note: Thos section is not complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
 
-#### 💯 Core concepts
+
+_Note: This section is not complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
+
+
+
+### 💯 Core concepts
 
 - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
 - [📯 Pipeline](https://github.com/webgptorg/promptbook/discussions/64)
@@ -303,7 +329,7 @@ _Note: Thos section is not complete dictionary, more list of general AI / LLM te
 - [🔣 Words not tokens](https://github.com/webgptorg/promptbook/discussions/29)
 - [☯ Separation of concerns](https://github.com/webgptorg/promptbook/discussions/32)
 
-##### Advanced concepts
+#### Advanced concepts
 
 - [📚 Knowledge (Retrieval-augmented generation)](https://github.com/webgptorg/promptbook/discussions/41)
 - [🌏 Remote server](https://github.com/webgptorg/promptbook/discussions/89)
@@ -318,11 +344,6 @@ _Note: Thos section is not complete dictionary, more list of general AI / LLM te
 - [👮 Agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39)
 - [view more](https://github.com/webgptorg/promptbook/discussions/categories/concepts)
 
-### Terms specific to Promptbook TypeScript implementation
-
-- Anonymous mode
-- Application mode
-
 
 
 ## 🚂 Promptbook Engine
@@ -393,11 +414,11 @@ See [TODO.md](./TODO.md)
 <div style="display: flex; align-items: center; gap: 20px;">
 
 <a href="https://promptbook.studio/">
-    <img src="./design/promptbook-studio-logo.png" alt="Partner 3" height="100">
+    <img src="./design/promptbook-studio-logo.png" alt="Partner 3" height="70">
 </a>
 
 <a href="https://technologickainkubace.org/en/about-technology-incubation/about-the-project/">
-    <img src="./other/partners/CI-Technology-Incubation.png" alt="Technology Incubation" height="100">
+    <img src="./other/partners/CI-Technology-Incubation.png" alt="Technology Incubation" height="70">
 </a>
 
 </div>
package/esm/index.es.js CHANGED
@@ -5,8 +5,8 @@ import hexEncoder from 'crypto-js/enc-hex';
 import { basename, join, dirname } from 'path';
 import { format } from 'prettier';
 import parserHtml from 'prettier/parser-html';
-import { Subject } from 'rxjs';
 import { randomBytes } from 'crypto';
+import { Subject } from 'rxjs';
 import { forTime } from 'waitasecond';
 import sha256 from 'crypto-js/sha256';
 import { lookup, extension } from 'mime-types';
@@ -26,7 +26,7 @@ const BOOK_LANGUAGE_VERSION = '1.0.0';
  * @generated
  * @see https://github.com/webgptorg/promptbook
  */
-const PROMPTBOOK_ENGINE_VERSION = '0.88.0-9';
+const PROMPTBOOK_ENGINE_VERSION = '0.89.0-1';
 /**
  * TODO: string_promptbook_version should be constrained to the all versions of Promptbook engine
  * Note: [💞] Ignore a discrepancy between file name and entity name
@@ -158,7 +158,7 @@ const DEFAULT_MAX_PARALLEL_COUNT = 5; // <- TODO: [🤹‍♂️]
  *
  * @public exported from `@promptbook/core`
  */
-const DEFAULT_MAX_EXECUTION_ATTEMPTS = 3; // <- TODO: [🤹‍♂️]
+const DEFAULT_MAX_EXECUTION_ATTEMPTS = 10; // <- TODO: [🤹‍♂️]
 // <- TODO: [🕝] Make also `BOOKS_DIRNAME_ALTERNATIVES`
 /**
  * Where to store the temporary downloads
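The bump of `DEFAULT_MAX_EXECUTION_ATTEMPTS` from 3 to 10 means a failing task now gets up to ten attempts before the executor gives up. A minimal sketch of how such a cap is typically applied — the loop below is illustrative only, not the package's actual executor code:

```ts
// Illustrative only; the real executor lives inside @promptbook/core.
const DEFAULT_MAX_EXECUTION_ATTEMPTS = 10; // <- value introduced in 0.89.0-1

async function executeWithRetries<T>(attemptTask: () => Promise<T>): Promise<T> {
    let lastError: unknown;
    for (let attempt = 1; attempt <= DEFAULT_MAX_EXECUTION_ATTEMPTS; attempt++) {
        try {
            return await attemptTask(); // succeed on any of the allowed attempts
        } catch (error) {
            lastError = error; // remember the failure and try again
        }
    }
    throw lastError; // all attempts exhausted
}
```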
@@ -2016,6 +2016,21 @@ class MissingToolsError extends Error {
     }
 }
 
+/**
+ * Generates random token
+ *
+ * Note: This function is cryptographically secure (it uses crypto.randomBytes internally)
+ *
+ * @private internal helper function
+ * @returns secure random token
+ */
+function $randomToken(randomness) {
+    return randomBytes(randomness).toString('hex');
+}
+/**
+ * TODO: Maybe use nanoid instead https://github.com/ai/nanoid
+ */
+
 /**
  * This error indicates errors during the execution of the pipeline
  *
@@ -2023,11 +2038,17 @@ class MissingToolsError extends Error {
  */
 class PipelineExecutionError extends Error {
     constructor(message) {
+        // Added id parameter
         super(message);
         this.name = 'PipelineExecutionError';
+        // TODO: [🐙] DRY - Maybe $randomId
+        this.id = `error-${$randomToken(8 /* <- TODO: To global config + Use Base58 to avoid simmilar char conflicts */)}`;
         Object.setPrototypeOf(this, PipelineExecutionError.prototype);
     }
 }
+/**
+ * TODO: !!!!!! Add id to all errors
+ */
 
 /**
  * Determine if the pipeline is fully prepared
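Every `PipelineExecutionError` now carries a unique `id`: the prefix `error-` followed by 16 hex characters, since `$randomToken(8)` encodes 8 random bytes as hex. A hedged sketch of how calling code might surface that id when an execution fails — `runPipeline` is a placeholder for whatever executor call your application makes, and the `@promptbook/core` import assumes the usual error export:

```ts
import { PipelineExecutionError } from '@promptbook/core'; // <- assumed export location

// `runPipeline` stands in for your actual executor call.
async function runAndReport(runPipeline: () => Promise<void>): Promise<void> {
    try {
        await runPipeline();
    } catch (error) {
        if (error instanceof PipelineExecutionError) {
            // The id ("error-" + 16 hex chars) lets one failure be traced across client and server logs.
            console.error(`Pipeline failed [${error.id}]: ${error.message}`);
        }
        throw error;
    }
}
```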
@@ -2066,21 +2087,6 @@ function isPipelinePrepared(pipeline) {
  * - [♨] Are tasks prepared
  */
 
-/**
- * Generates random token
- *
- * Note: This function is cryptographically secure (it uses crypto.randomBytes internally)
- *
- * @private internal helper function
- * @returns secure random token
- */
-function $randomToken(randomness) {
-    return randomBytes(randomness).toString('hex');
-}
-/**
- * TODO: Maybe use nanoid instead https://github.com/ai/nanoid
- */
-
 /**
  * Recursively converts JSON strings to JSON objects
 
@@ -2271,7 +2277,7 @@ const ALL_ERRORS = {
  * @public exported from `@promptbook/utils`
  */
 function deserializeError(error) {
-    const { name, stack } = error;
+    const { name, stack, id } = error; // Added id
     let { message } = error;
     let ErrorClass = ALL_ERRORS[error.name];
     if (ErrorClass === undefined) {
@@ -2286,7 +2292,9 @@ function deserializeError(error) {
                 ${block(stack || '')}
             `);
     }
-    return new ErrorClass(message);
+    const deserializedError = new ErrorClass(message);
+    deserializedError.id = id; // Assign id to the error object
+    return deserializedError;
 }
 
 /**
@@ -2336,6 +2344,7 @@ function assertsTaskSuccessful(executionResult) {
  */
 function createTask(options) {
     const { taskType, taskProcessCallback } = options;
+    // TODO: [🐙] DRY
     const taskId = `${taskType.toLowerCase().substring(0, 4)}-${$randomToken(8 /* <- TODO: To global config + Use Base58 to avoid simmilar char conflicts */)}`;
     let status = 'RUNNING';
     const createdAt = new Date();
@@ -2368,7 +2377,7 @@ function createTask(options) {
             assertsTaskSuccessful(executionResult);
             status = 'FINISHED';
             currentValue = jsonStringsToJsons(executionResult);
-            // <- TODO: Convert JSON values in string to JSON objects
+            // <- TODO: [🧠] Is this a good idea to convert JSON strins to JSONs?
             partialResultSubject.next(executionResult);
         }
         catch (error) {
@@ -2432,19 +2441,21 @@ function createTask(options) {
  */
 function serializeError(error) {
     const { name, message, stack } = error;
+    const { id } = error;
     if (!Object.keys(ALL_ERRORS).includes(name)) {
         console.error(spaceTrim((block) => `
-
+
                 Cannot serialize error with name "${name}"
 
                 ${block(stack || message)}
-
+
             `));
     }
     return {
         name: name,
         message,
         stack,
+        id, // Include id in the serialized object
     };
 }
 
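`serializeError` and `deserializeError` now carry the `id` across the wire, so a failure raised on the remote server keeps the same identifier once it is rebuilt on the client. A short sketch of the round trip, assuming both helpers are exported from `@promptbook/utils` (the `deserializeError` JSDoc above says so; the `serializeError` export location is an assumption) and `PipelineExecutionError` from `@promptbook/core`:

```ts
import { deserializeError, serializeError } from '@promptbook/utils'; // <- assumed export locations
import { PipelineExecutionError } from '@promptbook/core';

const original = new PipelineExecutionError('Model call failed');

// Server side: flatten to a JSON-safe object — now { name, message, stack, id }
const errorJson = serializeError(original);

// Client side: rebuild a real error instance of the matching class
const restored = deserializeError(errorJson);

if (restored instanceof PipelineExecutionError) {
    console.log(restored.id === original.id); // true — the id survives the round trip
}
```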
@@ -2587,8 +2598,9 @@ function addUsage(...usageItems) {
  * @returns LLM tools with same functionality with added total cost counting
  * @public exported from `@promptbook/core`
  */
-function countTotalUsage(llmTools) {
+function countUsage(llmTools) {
     let totalUsage = ZERO_USAGE;
+    const spending = new Subject();
     const proxyTools = {
         get title() {
             // TODO: [🧠] Maybe put here some suffix
@@ -2598,12 +2610,15 @@ function countTotalUsage(llmTools) {
             // TODO: [🧠] Maybe put here some suffix
             return llmTools.description;
         },
-        async checkConfiguration() {
+        checkConfiguration() {
            return /* not await */ llmTools.checkConfiguration();
         },
         listModels() {
             return /* not await */ llmTools.listModels();
         },
+        spending() {
+            return spending.asObservable();
+        },
         getTotalUsage() {
             // <- Note: [🥫] Not using getter `get totalUsage` but `getTotalUsage` to allow this object to be proxied
             return totalUsage;
@@ -2614,6 +2629,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callChatModel through countTotalUsage');
             const promptResult = await llmTools.callChatModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -2622,6 +2638,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callCompletionModel through countTotalUsage');
             const promptResult = await llmTools.callCompletionModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -2630,6 +2647,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callEmbeddingModel through countTotalUsage');
             const promptResult = await llmTools.callEmbeddingModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
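The renamed `countUsage` wrapper (formerly `countTotalUsage`) now also exposes a `spending()` RxJS observable that emits the usage of every individual chat, completion, or embedding call, alongside the cumulative `getTotalUsage()`. A hedged sketch of how an application might consume it — the `LlmExecutionTools` type import from `@promptbook/types` and the pre-existing `llmTools` instance are assumptions, not part of this diff:

```ts
import { countUsage } from '@promptbook/core';
import type { LlmExecutionTools } from '@promptbook/types'; // <- assumed type export

declare const llmTools: LlmExecutionTools; // <- whatever provider tools you already construct

const llmToolsWithUsage = countUsage(llmTools);

// React to each individual model call as it happens…
const subscription = llmToolsWithUsage.spending().subscribe((usage) => {
    console.info('One more model call, usage:', usage); // <- shape is the package's Usage type
});

// …and read the running total at any point.
console.info('Total so far:', llmToolsWithUsage.getTotalUsage());

subscription.unsubscribe();
```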
@@ -3526,7 +3544,7 @@ async function preparePipeline(pipeline, tools, options) {
     // TODO: [🚐] Make arrayable LLMs -> single LLM DRY
     const _llms = arrayableToArray(tools.llm);
     const llmTools = _llms.length === 1 ? _llms[0] : joinLlmExecutionTools(..._llms);
-    const llmToolsWithUsage = countTotalUsage(llmTools);
+    const llmToolsWithUsage = countUsage(llmTools);
     // <- TODO: [🌯]
     /*
         TODO: [🧠][🪑][🔃] Should this be done or not
@@ -4356,6 +4374,9 @@ function countCharacters(text) {
     text = text.replace(/\p{Extended_Pictographic}(\u{200D}\p{Extended_Pictographic})*/gu, '-');
     return text.length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Number of characters per standard line with 11pt Arial font size.
@@ -4387,6 +4408,9 @@ function countLines(text) {
     const lines = text.split('\n');
     return lines.reduce((count, line) => count + Math.ceil(line.length / CHARACTERS_PER_STANDARD_LINE), 0);
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of pages in the text
@@ -4398,6 +4422,9 @@ function countLines(text) {
 function countPages(text) {
     return Math.ceil(countLines(text) / LINES_PER_STANDARD_PAGE);
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of paragraphs in the text
@@ -4407,6 +4434,9 @@ function countPages(text) {
 function countParagraphs(text) {
     return text.split(/\n\s*\n/).filter((paragraph) => paragraph.trim() !== '').length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Split text into sentences
@@ -4424,6 +4454,9 @@ function splitIntoSentences(text) {
 function countSentences(text) {
     return splitIntoSentences(text).length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of words in the text
@@ -4437,6 +4470,9 @@ function countWords(text) {
     text = text.replace(/([a-z])([A-Z])/g, '$1 $2');
     return text.split(/[^a-zа-я0-9]+/i).filter((word) => word.length > 0).length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Index of all counter functions
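The expectation counters (`countCharacters`, `countWords`, `countSentences`, `countParagraphs`, `countLines`, `countPages`) each gained the same TODO about format-aware counting; for now they remain plain-text heuristics. A small sketch of how they are typically used, assuming the usual `@promptbook/utils` exports:

```ts
import { countCharacters, countLines, countPages, countWords } from '@promptbook/utils'; // <- assumed exports

const text = 'Promptbook turns plain-language books into executable pipelines.';

console.log(countCharacters(text)); // characters, with emoji sequences collapsed to a single character
console.log(countWords(text));      // words split on non-letter characters; camelCase is split too
console.log(countLines(text));      // lines estimated from CHARACTERS_PER_STANDARD_LINE
console.log(countPages(text));      // pages estimated from LINES_PER_STANDARD_PAGE
```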