@promptbook/markdown-utils 0.88.0-9 → 0.89.0-1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (25)
  1. package/README.md +35 -14
  2. package/esm/index.es.js +62 -26
  3. package/esm/index.es.js.map +1 -1
  4. package/esm/typings/src/_packages/core.index.d.ts +2 -2
  5. package/esm/typings/src/_packages/types.index.d.ts +10 -0
  6. package/esm/typings/src/config.d.ts +1 -1
  7. package/esm/typings/src/errors/PipelineExecutionError.d.ts +5 -0
  8. package/esm/typings/src/errors/utils/ErrorJson.d.ts +5 -0
  9. package/esm/typings/src/llm-providers/_common/utils/count-total-usage/LlmExecutionToolsWithTotalUsage.d.ts +7 -0
  10. package/esm/typings/src/llm-providers/_common/utils/count-total-usage/{countTotalUsage.d.ts → countUsage.d.ts} +1 -1
  11. package/esm/typings/src/playground/BrjappConnector.d.ts +64 -0
  12. package/esm/typings/src/playground/brjapp-api-schema.d.ts +12879 -0
  13. package/esm/typings/src/playground/playground.d.ts +5 -0
  14. package/esm/typings/src/remote-server/socket-types/_subtypes/PromptbookServer_Identification.d.ts +2 -1
  15. package/esm/typings/src/remote-server/types/RemoteServerOptions.d.ts +15 -3
  16. package/esm/typings/src/types/typeAliases.d.ts +2 -2
  17. package/esm/typings/src/utils/expectation-counters/countCharacters.d.ts +3 -0
  18. package/esm/typings/src/utils/expectation-counters/countLines.d.ts +3 -0
  19. package/esm/typings/src/utils/expectation-counters/countPages.d.ts +3 -0
  20. package/esm/typings/src/utils/expectation-counters/countParagraphs.d.ts +3 -0
  21. package/esm/typings/src/utils/expectation-counters/countSentences.d.ts +3 -0
  22. package/esm/typings/src/utils/expectation-counters/countWords.d.ts +3 -0
  23. package/package.json +1 -1
  24. package/umd/index.umd.js +65 -29
  25. package/umd/index.umd.js.map +1 -1
package/README.md CHANGED
@@ -58,6 +58,8 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 During the computer revolution, we have seen [multiple generations of computer languages](https://github.com/webgptorg/promptbook/discussions/180), from the physical rewiring of the vacuum tubes through low-level machine code to the high-level languages like Python or JavaScript. And now, we're on the edge of the **next revolution**!
 
+
+
 It's a revolution of writing software in **plain human language** that is understandable and executable by both humans and machines – and it's going to change everything!
 
 The incredible growth in power of microprocessors and the Moore's Law have been the driving force behind the ever-more powerful languages, and it's been an amazing journey! Similarly, the large language models (like GPT or Claude) are the next big thing in language technology, and they're set to transform the way we interact with computers.
@@ -68,6 +70,9 @@ This shift is going to happen, whether we are ready for it or not. Our mission i
 
 
 
+
+
+
 ## 🚀 Get started
 
 Take a look at the simple starter kit with books integrated into the **Hello World** sample applications:
@@ -79,6 +84,8 @@ Take a look at the simple starter kit with books integrated into the **Hello Wor
 
 
 
+
+
 ## 💜 The Promptbook Project
 
 Promptbook project is ecosystem of multiple projects and tools, following is a list of most important pieces of the project:
@@ -114,22 +121,35 @@ Promptbook project is ecosystem of multiple projects and tools, following is a l
 </tbody>
 </table>
 
+Hello world examples:
+
+- [Hello world](https://github.com/webgptorg/hello-world)
+- [Hello world in Node.js](https://github.com/webgptorg/hello-world-node-js)
+- [Hello world in Next.js](https://github.com/webgptorg/hello-world-next-js)
+
+
+
 We also have a community of developers and users of **Promptbook**:
 
 - [Discord community](https://discord.gg/x3QWNaa89N)
 - [Landing page `ptbk.io`](https://ptbk.io)
 - [Github discussions](https://github.com/webgptorg/promptbook/discussions)
 - [LinkedIn `Promptbook`](https://linkedin.com/company/promptbook)
-- [Facebook `Promptbook`](https://www.facebook.com/61560776453536)
+- [Facebook `Promptbook`](https://www.facebook.com/61560776453536)
 
 And **Promptbook.studio** branded socials:
 
+
+
 - [Instagram `@promptbook.studio`](https://www.instagram.com/promptbook.studio/)
 
 And **Promptujeme** sub-brand:
 
 _/Subbrand for Czech clients/_
 
+
+
+
 - [Promptujeme.cz](https://www.promptujeme.cz/)
 - [Facebook `Promptujeme`](https://www.facebook.com/promptujeme/)
 
@@ -147,6 +167,8 @@ _/Sub-brand for images and graphics generated via Promptbook prompting/_
 
 ## 💙 The Book language
 
+
+
 Following is the documentation and blueprint of the [Book language](https://github.com/webgptorg/book).
 
 Book is a language that can be used to write AI applications, agents, workflows, automations, knowledgebases, translators, sheet processors, email automations and more. It allows you to harness the power of AI models in human-like terms, without the need to know the specifics and technicalities of the models.
@@ -196,6 +218,8 @@ Personas can have access to different knowledge, tools and actions. They can als
 
 - [PERSONA](https://github.com/webgptorg/promptbook/blob/main/documents/commands/PERSONA.md)
 
+
+
 ### **How:** Knowledge, Instruments and Actions
 
 The resources used by the personas are used to do the work.
@@ -271,11 +295,9 @@ Or you can install them separately:
 
 ## 📚 Dictionary
 
-### 📚 Dictionary
-
 The following glossary is used to clarify certain concepts:
 
-#### General LLM / AI terms
+### General LLM / AI terms
 
 - **Prompt drift** is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
 - **Pipeline, workflow or chain** is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
@@ -286,9 +308,13 @@ The following glossary is used to clarify certain concepts:
 - **Retrieval-augmented generation** is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
 - **Longtail** refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.
 
-_Note: Thos section is not complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
 
-#### 💯 Core concepts
+
+_Note: This section is not complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
+
+
+
+### 💯 Core concepts
 
 - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
 - [📯 Pipeline](https://github.com/webgptorg/promptbook/discussions/64)
@@ -301,7 +327,7 @@ _Note: Thos section is not complete dictionary, more list of general AI / LLM te
 - [🔣 Words not tokens](https://github.com/webgptorg/promptbook/discussions/29)
 - [☯ Separation of concerns](https://github.com/webgptorg/promptbook/discussions/32)
 
-##### Advanced concepts
+#### Advanced concepts
 
 - [📚 Knowledge (Retrieval-augmented generation)](https://github.com/webgptorg/promptbook/discussions/41)
 - [🌏 Remote server](https://github.com/webgptorg/promptbook/discussions/89)
@@ -316,11 +342,6 @@ _Note: Thos section is not complete dictionary, more list of general AI / LLM te
 - [👮 Agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39)
 - [view more](https://github.com/webgptorg/promptbook/discussions/categories/concepts)
 
-### Terms specific to Promptbook TypeScript implementation
-
-- Anonymous mode
-- Application mode
-
 
 
 ## 🚂 Promptbook Engine
@@ -391,11 +412,11 @@ See [TODO.md](./TODO.md)
 <div style="display: flex; align-items: center; gap: 20px;">
 
 <a href="https://promptbook.studio/">
-<img src="./design/promptbook-studio-logo.png" alt="Partner 3" height="100">
+<img src="./design/promptbook-studio-logo.png" alt="Partner 3" height="70">
 </a>
 
 <a href="https://technologickainkubace.org/en/about-technology-incubation/about-the-project/">
-<img src="./other/partners/CI-Technology-Incubation.png" alt="Technology Incubation" height="100">
+<img src="./other/partners/CI-Technology-Incubation.png" alt="Technology Incubation" height="70">
 </a>
 
 </div>
package/esm/index.es.js CHANGED
@@ -1,8 +1,8 @@
 import spaceTrim, { spaceTrim as spaceTrim$1 } from 'spacetrim';
 import { format } from 'prettier';
 import parserHtml from 'prettier/parser-html';
-import { Subject } from 'rxjs';
 import { randomBytes } from 'crypto';
+import { Subject } from 'rxjs';
 import { forTime } from 'waitasecond';
 import hexEncoder from 'crypto-js/enc-hex';
 import sha256 from 'crypto-js/sha256';
@@ -25,7 +25,7 @@ const BOOK_LANGUAGE_VERSION = '1.0.0';
  * @generated
  * @see https://github.com/webgptorg/promptbook
  */
-const PROMPTBOOK_ENGINE_VERSION = '0.88.0-9';
+const PROMPTBOOK_ENGINE_VERSION = '0.89.0-1';
 /**
  * TODO: string_promptbook_version should be constrained to the all versions of Promptbook engine
  * Note: [💞] Ignore a discrepancy between file name and entity name
@@ -659,7 +659,7 @@ const DEFAULT_MAX_PARALLEL_COUNT = 5; // <- TODO: [🤹‍♂️]
  *
  * @public exported from `@promptbook/core`
  */
-const DEFAULT_MAX_EXECUTION_ATTEMPTS = 3; // <- TODO: [🤹‍♂️]
+const DEFAULT_MAX_EXECUTION_ATTEMPTS = 10; // <- TODO: [🤹‍♂️]
 // <- TODO: [🕝] Make also `BOOKS_DIRNAME_ALTERNATIVES`
 /**
  * Where to store the temporary downloads
@@ -1676,6 +1676,21 @@ class MissingToolsError extends Error {
     }
 }
 
+/**
+ * Generates random token
+ *
+ * Note: This function is cryptographically secure (it uses crypto.randomBytes internally)
+ *
+ * @private internal helper function
+ * @returns secure random token
+ */
+function $randomToken(randomness) {
+    return randomBytes(randomness).toString('hex');
+}
+/**
+ * TODO: Maybe use nanoid instead https://github.com/ai/nanoid
+ */
+
 /**
  * This error indicates errors during the execution of the pipeline
  *
@@ -1683,11 +1698,17 @@ class MissingToolsError extends Error {
  */
 class PipelineExecutionError extends Error {
     constructor(message) {
+        // Added id parameter
         super(message);
         this.name = 'PipelineExecutionError';
+        // TODO: [🐙] DRY - Maybe $randomId
+        this.id = `error-${$randomToken(8 /* <- TODO: To global config + Use Base58 to avoid simmilar char conflicts */)}`;
         Object.setPrototypeOf(this, PipelineExecutionError.prototype);
     }
 }
+/**
+ * TODO: !!!!!! Add id to all errors
+ */
 
 /**
  * Determine if the pipeline is fully prepared
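
The practical effect of this hunk is that every `PipelineExecutionError` now carries an `id`: the literal prefix `error-` followed by 8 random bytes hex-encoded (16 hex characters). A minimal sketch of how a caller might surface it; the try/catch call site is hypothetical, and the import path is an assumption, only the class and its `id`/`name`/`message` fields come from the diff above:

```js
import { PipelineExecutionError } from '@promptbook/core';
// ^ assumption: the error class is re-exported from `@promptbook/core` as in earlier versions

try {
    // ... some pipeline execution that may fail (hypothetical call site)
    throw new PipelineExecutionError('Task "summarize" failed after retries');
} catch (error) {
    if (error instanceof PipelineExecutionError) {
        // New in 0.89.0-1: a random id such as "error-3f9a1c2b4d5e6f70"
        console.error(`[${error.id}] ${error.name}: ${error.message}`);
    } else {
        throw error;
    }
}
```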
@@ -1726,21 +1747,6 @@ function isPipelinePrepared(pipeline) {
  * - [♨] Are tasks prepared
  */
 
-/**
- * Generates random token
- *
- * Note: This function is cryptographically secure (it uses crypto.randomBytes internally)
- *
- * @private internal helper function
- * @returns secure random token
- */
-function $randomToken(randomness) {
-    return randomBytes(randomness).toString('hex');
-}
-/**
- * TODO: Maybe use nanoid instead https://github.com/ai/nanoid
- */
-
 /**
  * Recursively converts JSON strings to JSON objects
 
@@ -1957,7 +1963,7 @@ const ALL_ERRORS = {
  * @public exported from `@promptbook/utils`
  */
 function deserializeError(error) {
-    const { name, stack } = error;
+    const { name, stack, id } = error; // Added id
     let { message } = error;
     let ErrorClass = ALL_ERRORS[error.name];
     if (ErrorClass === undefined) {
@@ -1972,7 +1978,9 @@ function deserializeError(error) {
                 ${block(stack || '')}
             `);
     }
-    return new ErrorClass(message);
+    const deserializedError = new ErrorClass(message);
+    deserializedError.id = id; // Assign id to the error object
+    return deserializedError;
 }
 
 /**
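
Together with the matching `serializeError` hunk a few hunks further down, this makes the error `id` survive a serialize/deserialize round trip (for example across the remote-server boundary). A minimal sketch under that assumption; the diff confirms the `@promptbook/utils` export only for `deserializeError`, so the `serializeError` import is an assumption:

```js
import { PipelineExecutionError } from '@promptbook/core';
import { deserializeError, serializeError } from '@promptbook/utils';
// ^ assumption: `serializeError` is public; `deserializeError` is documented above
//   as `@public exported from @promptbook/utils`

const original = new PipelineExecutionError('Model call failed');

// Turn the error into plain JSON (e.g. to send it from the remote server to a client)
const errorJson = serializeError(original);
// -> { name: 'PipelineExecutionError', message: 'Model call failed', stack: '...', id: 'error-xxxxxxxxxxxxxxxx' }

// Reconstruct it on the other side; since 0.89.0-1 the `id` is preserved
const restored = deserializeError(errorJson);
console.log(restored instanceof PipelineExecutionError); // true
console.log(restored.id === original.id); // true
```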
@@ -2022,6 +2030,7 @@ function assertsTaskSuccessful(executionResult) {
  */
 function createTask(options) {
     const { taskType, taskProcessCallback } = options;
+    // TODO: [🐙] DRY
     const taskId = `${taskType.toLowerCase().substring(0, 4)}-${$randomToken(8 /* <- TODO: To global config + Use Base58 to avoid simmilar char conflicts */)}`;
     let status = 'RUNNING';
     const createdAt = new Date();
@@ -2054,7 +2063,7 @@ function createTask(options) {
             assertsTaskSuccessful(executionResult);
             status = 'FINISHED';
             currentValue = jsonStringsToJsons(executionResult);
-            // <- TODO: Convert JSON values in string to JSON objects
+            // <- TODO: [🧠] Is this a good idea to convert JSON strins to JSONs?
             partialResultSubject.next(executionResult);
         }
         catch (error) {
@@ -2118,19 +2127,21 @@ function createTask(options) {
  */
 function serializeError(error) {
     const { name, message, stack } = error;
+    const { id } = error;
     if (!Object.keys(ALL_ERRORS).includes(name)) {
         console.error(spaceTrim((block) => `
-
+
                 Cannot serialize error with name "${name}"
 
                 ${block(stack || message)}
-
+
             `));
     }
     return {
         name: name,
         message,
         stack,
+        id, // Include id in the serialized object
     };
 }
 
@@ -2273,8 +2284,9 @@ function addUsage(...usageItems) {
  * @returns LLM tools with same functionality with added total cost counting
  * @public exported from `@promptbook/core`
  */
-function countTotalUsage(llmTools) {
+function countUsage(llmTools) {
     let totalUsage = ZERO_USAGE;
+    const spending = new Subject();
     const proxyTools = {
         get title() {
             // TODO: [🧠] Maybe put here some suffix
@@ -2284,12 +2296,15 @@ function countTotalUsage(llmTools) {
             // TODO: [🧠] Maybe put here some suffix
             return llmTools.description;
         },
-        async checkConfiguration() {
+        checkConfiguration() {
             return /* not await */ llmTools.checkConfiguration();
         },
         listModels() {
             return /* not await */ llmTools.listModels();
         },
+        spending() {
+            return spending.asObservable();
+        },
         getTotalUsage() {
             // <- Note: [🥫] Not using getter `get totalUsage` but `getTotalUsage` to allow this object to be proxied
             return totalUsage;
@@ -2300,6 +2315,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callChatModel through countTotalUsage');
             const promptResult = await llmTools.callChatModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -2308,6 +2324,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callCompletionModel through countTotalUsage');
             const promptResult = await llmTools.callCompletionModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -2316,6 +2333,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callEmbeddingModel through countTotalUsage');
             const promptResult = await llmTools.callEmbeddingModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
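
Taken together, the hunks above rename `countTotalUsage` to `countUsage` and add a `spending()` method that emits each individual call's usage on an RxJS `Subject`, alongside the existing `getTotalUsage()` running total. A minimal consumer-side sketch; `llmTools` and `prompt` are assumed to exist already, and the `@promptbook/core` import follows the `@public exported from @promptbook/core` jsdoc shown above:

```js
import { countUsage } from '@promptbook/core';
// ^ assumption: re-exported under the new name, like the former `countTotalUsage`

// `llmTools` is any existing LlmExecutionTools implementation (OpenAI, Anthropic, remote, ...)
const llmToolsWithUsage = countUsage(llmTools);

// New in 0.89.0-1: observe the cost of every individual model call
const subscription = llmToolsWithUsage.spending().subscribe((usage) => {
    console.log('This call used:', usage);
});

// Use the wrapped tools exactly as before; each call also pushes its usage to `spending()`
const chatResult = await llmToolsWithUsage.callChatModel(prompt);

console.log('Total usage so far:', llmToolsWithUsage.getTotalUsage());
subscription.unsubscribe();
```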
@@ -3598,7 +3616,7 @@ async function preparePipeline(pipeline, tools, options) {
     // TODO: [🚐] Make arrayable LLMs -> single LLM DRY
     const _llms = arrayableToArray(tools.llm);
     const llmTools = _llms.length === 1 ? _llms[0] : joinLlmExecutionTools(..._llms);
-    const llmToolsWithUsage = countTotalUsage(llmTools);
+    const llmToolsWithUsage = countUsage(llmTools);
     // <- TODO: [🌯]
     /*
     TODO: [🧠][🪑][🔃] Should this be done or not
@@ -4326,6 +4344,9 @@ function countCharacters(text) {
     text = text.replace(/\p{Extended_Pictographic}(\u{200D}\p{Extended_Pictographic})*/gu, '-');
     return text.length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Number of characters per standard line with 11pt Arial font size.
@@ -4357,6 +4378,9 @@ function countLines(text) {
     const lines = text.split('\n');
     return lines.reduce((count, line) => count + Math.ceil(line.length / CHARACTERS_PER_STANDARD_LINE), 0);
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of pages in the text
@@ -4368,6 +4392,9 @@ function countLines(text) {
 function countPages(text) {
     return Math.ceil(countLines(text) / LINES_PER_STANDARD_PAGE);
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of paragraphs in the text
@@ -4377,6 +4404,9 @@ function countPages(text) {
 function countParagraphs(text) {
     return text.split(/\n\s*\n/).filter((paragraph) => paragraph.trim() !== '').length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Split text into sentences
@@ -4394,6 +4424,9 @@ function splitIntoSentences(text) {
 function countSentences(text) {
     return splitIntoSentences(text).length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of words in the text
@@ -4407,6 +4440,9 @@ function countWords(text) {
     text = text.replace(/([a-z])([A-Z])/g, '$1 $2');
     return text.split(/[^a-zа-я0-9]+/i).filter((word) => word.length > 0).length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Index of all counter functions
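
All six expectation counters receive the same `[🥴]` TODO marker in this release, and each gains its own typings file (see the file list at the top). For orientation, a small usage sketch; the `@promptbook/utils` import path is an assumption, while the behaviors noted in the comments follow from the implementations shown above:

```js
import {
    countCharacters,
    countLines,
    countPages,
    countParagraphs,
    countSentences,
    countWords,
} from '@promptbook/utils'; // <- assumption: the counters are public exports of `@promptbook/utils`

const text = 'Hello world!\n\nThis is a second paragraph with a few more words.';

console.log(countWords(text));      // word count
console.log(countSentences(text));  // sentence count
console.log(countParagraphs(text)); // 2 (paragraphs are split on blank lines)
console.log(countCharacters(text)); // character count (emoji sequences count as one character)
console.log(countLines(text));      // standard lines, based on CHARACTERS_PER_STANDARD_LINE
console.log(countPages(text));      // standard pages, based on LINES_PER_STANDARD_PAGE
```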