@promptbook/cli 0.88.0-9 → 0.89.0-1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (25)
  1. package/README.md +35 -14
  2. package/esm/index.es.js +65 -29
  3. package/esm/index.es.js.map +1 -1
  4. package/esm/typings/src/_packages/core.index.d.ts +2 -2
  5. package/esm/typings/src/_packages/types.index.d.ts +10 -0
  6. package/esm/typings/src/config.d.ts +1 -1
  7. package/esm/typings/src/errors/PipelineExecutionError.d.ts +5 -0
  8. package/esm/typings/src/errors/utils/ErrorJson.d.ts +5 -0
  9. package/esm/typings/src/llm-providers/_common/utils/count-total-usage/LlmExecutionToolsWithTotalUsage.d.ts +7 -0
  10. package/esm/typings/src/llm-providers/_common/utils/count-total-usage/{countTotalUsage.d.ts → countUsage.d.ts} +1 -1
  11. package/esm/typings/src/playground/BrjappConnector.d.ts +64 -0
  12. package/esm/typings/src/playground/brjapp-api-schema.d.ts +12879 -0
  13. package/esm/typings/src/playground/playground.d.ts +5 -0
  14. package/esm/typings/src/remote-server/socket-types/_subtypes/PromptbookServer_Identification.d.ts +2 -1
  15. package/esm/typings/src/remote-server/types/RemoteServerOptions.d.ts +15 -3
  16. package/esm/typings/src/types/typeAliases.d.ts +2 -2
  17. package/esm/typings/src/utils/expectation-counters/countCharacters.d.ts +3 -0
  18. package/esm/typings/src/utils/expectation-counters/countLines.d.ts +3 -0
  19. package/esm/typings/src/utils/expectation-counters/countPages.d.ts +3 -0
  20. package/esm/typings/src/utils/expectation-counters/countParagraphs.d.ts +3 -0
  21. package/esm/typings/src/utils/expectation-counters/countSentences.d.ts +3 -0
  22. package/esm/typings/src/utils/expectation-counters/countWords.d.ts +3 -0
  23. package/package.json +1 -1
  24. package/umd/index.umd.js +67 -31
  25. package/umd/index.umd.js.map +1 -1
package/README.md CHANGED
@@ -112,6 +112,8 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 During the computer revolution, we have seen [multiple generations of computer languages](https://github.com/webgptorg/promptbook/discussions/180), from the physical rewiring of the vacuum tubes through low-level machine code to the high-level languages like Python or JavaScript. And now, we're on the edge of the **next revolution**!
 
+
+
 It's a revolution of writing software in **plain human language** that is understandable and executable by both humans and machines – and it's going to change everything!
 
 The incredible growth in power of microprocessors and the Moore's Law have been the driving force behind the ever-more powerful languages, and it's been an amazing journey! Similarly, the large language models (like GPT or Claude) are the next big thing in language technology, and they're set to transform the way we interact with computers.
@@ -122,6 +124,9 @@ This shift is going to happen, whether we are ready for it or not. Our mission i
 
 
 
+
+
+
 ## 🚀 Get started
 
 Take a look at the simple starter kit with books integrated into the **Hello World** sample applications:
@@ -133,6 +138,8 @@ Take a look at the simple starter kit with books integrated into the **Hello Wor
 
 
 
+
+
 ## 💜 The Promptbook Project
 
 Promptbook project is ecosystem of multiple projects and tools, following is a list of most important pieces of the project:
@@ -168,22 +175,35 @@ Promptbook project is ecosystem of multiple projects and tools, following is a l
 </tbody>
 </table>
 
+Hello world examples:
+
+- [Hello world](https://github.com/webgptorg/hello-world)
+- [Hello world in Node.js](https://github.com/webgptorg/hello-world-node-js)
+- [Hello world in Next.js](https://github.com/webgptorg/hello-world-next-js)
+
+
+
 We also have a community of developers and users of **Promptbook**:
 
 - [Discord community](https://discord.gg/x3QWNaa89N)
 - [Landing page `ptbk.io`](https://ptbk.io)
 - [Github discussions](https://github.com/webgptorg/promptbook/discussions)
 - [LinkedIn `Promptbook`](https://linkedin.com/company/promptbook)
-- [Facebook `Promptbook`](https://www.facebook.com/61560776453536)
+- [Facebook `Promptbook`](https://www.facebook.com/61560776453536)
 
 And **Promptbook.studio** branded socials:
 
+
+
 - [Instagram `@promptbook.studio`](https://www.instagram.com/promptbook.studio/)
 
 And **Promptujeme** sub-brand:
 
 _/Subbrand for Czech clients/_
 
+
+
+
 - [Promptujeme.cz](https://www.promptujeme.cz/)
 - [Facebook `Promptujeme`](https://www.facebook.com/promptujeme/)
 
@@ -201,6 +221,8 @@ _/Sub-brand for images and graphics generated via Promptbook prompting/_
 
 ## 💙 The Book language
 
+
+
 Following is the documentation and blueprint of the [Book language](https://github.com/webgptorg/book).
 
 Book is a language that can be used to write AI applications, agents, workflows, automations, knowledgebases, translators, sheet processors, email automations and more. It allows you to harness the power of AI models in human-like terms, without the need to know the specifics and technicalities of the models.
@@ -250,6 +272,8 @@ Personas can have access to different knowledge, tools and actions. They can als
 
 - [PERSONA](https://github.com/webgptorg/promptbook/blob/main/documents/commands/PERSONA.md)
 
+
+
 ### **How:** Knowledge, Instruments and Actions
 
 The resources used by the personas are used to do the work.
@@ -325,11 +349,9 @@ Or you can install them separately:
 
 ## 📚 Dictionary
 
-### 📚 Dictionary
-
 The following glossary is used to clarify certain concepts:
 
-#### General LLM / AI terms
+### General LLM / AI terms
 
 - **Prompt drift** is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
 - **Pipeline, workflow or chain** is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
@@ -340,9 +362,13 @@ The following glossary is used to clarify certain concepts:
 - **Retrieval-augmented generation** is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
 - **Longtail** refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.
 
-_Note: Thos section is not complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
 
-#### 💯 Core concepts
+
+_Note: This section is not complete dictionary, more list of general AI / LLM terms that has connection with Promptbook_
+
+
+
+### 💯 Core concepts
 
 - [📚 Collection of pipelines](https://github.com/webgptorg/promptbook/discussions/65)
 - [📯 Pipeline](https://github.com/webgptorg/promptbook/discussions/64)
@@ -355,7 +381,7 @@ _Note: Thos section is not complete dictionary, more list of general AI / LLM te
 - [🔣 Words not tokens](https://github.com/webgptorg/promptbook/discussions/29)
 - [☯ Separation of concerns](https://github.com/webgptorg/promptbook/discussions/32)
 
-##### Advanced concepts
+#### Advanced concepts
 
 - [📚 Knowledge (Retrieval-augmented generation)](https://github.com/webgptorg/promptbook/discussions/41)
 - [🌏 Remote server](https://github.com/webgptorg/promptbook/discussions/89)
@@ -370,11 +396,6 @@ _Note: Thos section is not complete dictionary, more list of general AI / LLM te
 - [👮 Agent adversary expectations](https://github.com/webgptorg/promptbook/discussions/39)
 - [view more](https://github.com/webgptorg/promptbook/discussions/categories/concepts)
 
-### Terms specific to Promptbook TypeScript implementation
-
-- Anonymous mode
-- Application mode
-
 
 
 ## 🚂 Promptbook Engine
@@ -445,11 +466,11 @@ See [TODO.md](./TODO.md)
 <div style="display: flex; align-items: center; gap: 20px;">
 
 <a href="https://promptbook.studio/">
-    <img src="./design/promptbook-studio-logo.png" alt="Partner 3" height="100">
+    <img src="./design/promptbook-studio-logo.png" alt="Partner 3" height="70">
 </a>
 
 <a href="https://technologickainkubace.org/en/about-technology-incubation/about-the-project/">
-    <img src="./other/partners/CI-Technology-Incubation.png" alt="Technology Incubation" height="100">
+    <img src="./other/partners/CI-Technology-Incubation.png" alt="Technology Incubation" height="70">
 </a>
 
 </div>
package/esm/index.es.js CHANGED
@@ -6,13 +6,13 @@ import { basename, join, dirname, relative } from 'path';
 import { stat, access, constants, readFile, writeFile, readdir, mkdir, unlink, rm, rename, rmdir } from 'fs/promises';
 import hexEncoder from 'crypto-js/enc-hex';
 import sha256 from 'crypto-js/sha256';
+import { randomBytes } from 'crypto';
+import { Subject } from 'rxjs';
 import * as dotenv from 'dotenv';
 import { spawn } from 'child_process';
 import JSZip from 'jszip';
 import { format } from 'prettier';
 import parserHtml from 'prettier/parser-html';
-import { Subject } from 'rxjs';
-import { randomBytes } from 'crypto';
 import { parse, unparse } from 'papaparse';
 import { SHA256 } from 'crypto-js';
 import { lookup, extension } from 'mime-types';
@@ -44,7 +44,7 @@ const BOOK_LANGUAGE_VERSION = '1.0.0';
  * @generated
  * @see https://github.com/webgptorg/promptbook
  */
-const PROMPTBOOK_ENGINE_VERSION = '0.88.0-9';
+const PROMPTBOOK_ENGINE_VERSION = '0.89.0-1';
 /**
  * TODO: string_promptbook_version should be constrained to the all versions of Promptbook engine
  * Note: [💞] Ignore a discrepancy between file name and entity name
@@ -204,7 +204,7 @@ const DEFAULT_MAX_PARALLEL_COUNT = 5; // <- TODO: [🤹‍♂️]
  *
  * @public exported from `@promptbook/core`
  */
-const DEFAULT_MAX_EXECUTION_ATTEMPTS = 3; // <- TODO: [🤹‍♂️]
+const DEFAULT_MAX_EXECUTION_ATTEMPTS = 10; // <- TODO: [🤹‍♂️]
 /**
  * Where to store your books
  * This is kind of a "src" for your books
@@ -1498,6 +1498,21 @@ class FileCacheStorage {
  * Note: [🟢] Code in this file should never be never released in packages that could be imported into browser environment
  */
 
+/**
+ * Generates random token
+ *
+ * Note: This function is cryptographically secure (it uses crypto.randomBytes internally)
+ *
+ * @private internal helper function
+ * @returns secure random token
+ */
+function $randomToken(randomness) {
+    return randomBytes(randomness).toString('hex');
+}
+/**
+ * TODO: Maybe use nanoid instead https://github.com/ai/nanoid
+ */
+
 /**
  * This error indicates errors during the execution of the pipeline
  *
@@ -1505,11 +1520,17 @@ class FileCacheStorage {
  */
 class PipelineExecutionError extends Error {
     constructor(message) {
+        // Added id parameter
         super(message);
         this.name = 'PipelineExecutionError';
+        // TODO: [🐙] DRY - Maybe $randomId
+        this.id = `error-${$randomToken(8 /* <- TODO: To global config + Use Base58 to avoid simmilar char conflicts */)}`;
         Object.setPrototypeOf(this, PipelineExecutionError.prototype);
     }
 }
+/**
+ * TODO: !!!!!! Add id to all errors
+ */
 
 /**
  * Stores data in memory (HEAP)
@@ -1768,8 +1789,9 @@ function addUsage(...usageItems) {
  * @returns LLM tools with same functionality with added total cost counting
  * @public exported from `@promptbook/core`
  */
-function countTotalUsage(llmTools) {
+function countUsage(llmTools) {
     let totalUsage = ZERO_USAGE;
+    const spending = new Subject();
     const proxyTools = {
         get title() {
             // TODO: [🧠] Maybe put here some suffix
@@ -1779,12 +1801,15 @@ function countTotalUsage(llmTools) {
             // TODO: [🧠] Maybe put here some suffix
             return llmTools.description;
         },
-        async checkConfiguration() {
+        checkConfiguration() {
             return /* not await */ llmTools.checkConfiguration();
         },
         listModels() {
             return /* not await */ llmTools.listModels();
         },
+        spending() {
+            return spending.asObservable();
+        },
         getTotalUsage() {
             // <- Note: [🥫] Not using getter `get totalUsage` but `getTotalUsage` to allow this object to be proxied
             return totalUsage;
@@ -1795,6 +1820,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callChatModel through countTotalUsage');
             const promptResult = await llmTools.callChatModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -1803,6 +1829,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callCompletionModel through countTotalUsage');
             const promptResult = await llmTools.callCompletionModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
@@ -1811,6 +1838,7 @@ function countTotalUsage(llmTools) {
             // console.info('[🚕] callEmbeddingModel through countTotalUsage');
             const promptResult = await llmTools.callEmbeddingModel(prompt);
             totalUsage = addUsage(totalUsage, promptResult.usage);
+            spending.next(promptResult.usage);
             return promptResult;
         };
     }
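The renamed `countUsage` wrapper now exposes per-call usage as an observable `spending()` stream alongside the running total. A self-contained sketch of the wiring under stated assumptions: a tiny hand-rolled subject stands in for rxjs's `Subject`, `fakeLlm` is a hypothetical provider with made-up usage numbers, and a plain number replaces the structured usage record accumulated by `addUsage` in the real code:

```javascript
// Minimal stand-in for rxjs's Subject (just enough for this demo)
function createSubject() {
    const listeners = [];
    return {
        next: (value) => listeners.forEach((listener) => listener(value)),
        subscribe: (listener) => listeners.push(listener),
    };
}

// Sketch of the countUsage pattern: wrap the tools, accumulate a total,
// and emit each call's usage on the spending stream
function countUsageSketch(llmTools) {
    let totalUsage = 0; // the real code accumulates a structured usage record
    const spending = createSubject();
    return {
        spending: () => spending,
        getTotalUsage: () => totalUsage,
        async callChatModel(prompt) {
            const promptResult = await llmTools.callChatModel(prompt);
            totalUsage += promptResult.usage;
            spending.next(promptResult.usage); // <- notify subscribers per call
            return promptResult;
        },
    };
}

const fakeLlm = { callChatModel: async () => ({ content: 'ok', usage: 2 }) };

async function demo() {
    const tools = countUsageSketch(fakeLlm);
    const perCall = [];
    tools.spending().subscribe((usage) => perCall.push(usage));
    await tools.callChatModel('Hello');
    await tools.callChatModel('Hello again');
    return { perCall, total: tools.getTotalUsage() };
}

demo().then(({ perCall, total }) => console.log(perCall, total)); // [ 2, 2 ] 4
```

The design choice mirrors `getTotalUsage`: polling gives the aggregate, while the stream lets callers react to each individual model call (e.g. live cost dashboards) without wrapping every `call*Model` method themselves.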
@@ -2518,7 +2546,7 @@ async function $provideLlmToolsForWizzardOrCli(options) {
         throw new EnvironmentMismatchError('Function `$provideLlmToolsForWizzardOrCli` works only in Node.js environment');
     }
     const { isCacheReloaded } = options !== null && options !== void 0 ? options : {};
-    return cacheLlmTools(countTotalUsage(
+    return cacheLlmTools(countUsage(
     // <- Note: for example here we don`t want the [🌯]
     await $provideLlmToolsFromEnv()), {
         storage: new FileCacheStorage({ fs: $provideFilesystemForNode() }, {
@@ -4073,21 +4101,6 @@ function isPipelinePrepared(pipeline) {
  * - [♨] Are tasks prepared
  */
 
-/**
- * Generates random token
- *
- * Note: This function is cryptographically secure (it uses crypto.randomBytes internally)
- *
- * @private internal helper function
- * @returns secure random token
- */
-function $randomToken(randomness) {
-    return randomBytes(randomness).toString('hex');
-}
-/**
- * TODO: Maybe use nanoid instead https://github.com/ai/nanoid
- */
-
 /**
  * Recursively converts JSON strings to JSON objects
 
@@ -4268,7 +4281,7 @@ const ALL_ERRORS = {
  * @public exported from `@promptbook/utils`
  */
 function deserializeError(error) {
-    const { name, stack } = error;
+    const { name, stack, id } = error; // Added id
     let { message } = error;
     let ErrorClass = ALL_ERRORS[error.name];
     if (ErrorClass === undefined) {
@@ -4283,7 +4296,9 @@ function deserializeError(error) {
             ${block(stack || '')}
         `);
     }
-    return new ErrorClass(message);
+    const deserializedError = new ErrorClass(message);
+    deserializedError.id = id; // Assign id to the error object
+    return deserializedError;
 }
 
 /**
@@ -4333,6 +4348,7 @@ function assertsTaskSuccessful(executionResult) {
  */
 function createTask(options) {
     const { taskType, taskProcessCallback } = options;
+    // TODO: [🐙] DRY
     const taskId = `${taskType.toLowerCase().substring(0, 4)}-${$randomToken(8 /* <- TODO: To global config + Use Base58 to avoid simmilar char conflicts */)}`;
     let status = 'RUNNING';
     const createdAt = new Date();
@@ -4365,7 +4381,7 @@ function createTask(options) {
         assertsTaskSuccessful(executionResult);
         status = 'FINISHED';
         currentValue = jsonStringsToJsons(executionResult);
-        // <- TODO: Convert JSON values in string to JSON objects
+        // <- TODO: [🧠] Is this a good idea to convert JSON strins to JSONs?
         partialResultSubject.next(executionResult);
     }
     catch (error) {
@@ -4429,19 +4445,21 @@ function createTask(options) {
  */
 function serializeError(error) {
     const { name, message, stack } = error;
+    const { id } = error;
     if (!Object.keys(ALL_ERRORS).includes(name)) {
         console.error(spaceTrim((block) => `
-
+
                 Cannot serialize error with name "${name}"
 
                 ${block(stack || message)}
-
+
         `));
     }
     return {
         name: name,
         message,
         stack,
+        id, // Include id in the serialized object
     };
 }
@@ -5174,6 +5192,9 @@ function countCharacters(text) {
     text = text.replace(/\p{Extended_Pictographic}(\u{200D}\p{Extended_Pictographic})*/gu, '-');
     return text.length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Number of characters per standard line with 11pt Arial font size.
@@ -5205,6 +5226,9 @@ function countLines(text) {
     const lines = text.split('\n');
     return lines.reduce((count, line) => count + Math.ceil(line.length / CHARACTERS_PER_STANDARD_LINE), 0);
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of pages in the text
@@ -5216,6 +5240,9 @@ function countLines(text) {
 function countPages(text) {
     return Math.ceil(countLines(text) / LINES_PER_STANDARD_PAGE);
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of paragraphs in the text
@@ -5225,6 +5252,9 @@ function countPages(text) {
 function countParagraphs(text) {
     return text.split(/\n\s*\n/).filter((paragraph) => paragraph.trim() !== '').length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Split text into sentences
@@ -5242,6 +5272,9 @@ function splitIntoSentences(text) {
 function countSentences(text) {
     return splitIntoSentences(text).length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Counts number of words in the text
@@ -5255,6 +5288,9 @@ function countWords(text) {
     text = text.replace(/([a-z])([A-Z])/g, '$1 $2');
     return text.split(/[^a-zа-я0-9]+/i).filter((word) => word.length > 0).length;
 }
+/**
+ * TODO: [🥴] Implement counting in formats - like JSON, CSV, XML,...
+ */
 
 /**
  * Index of all counter functions
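The counter functions touched in these hunks are small enough to try directly; for example `countWords`, reproduced verbatim from the context lines above, with comments added:

```javascript
function countWords(text) {
    // Insert a space at camelCase boundaries so "helloWorld" counts as two words
    text = text.replace(/([a-z])([A-Z])/g, '$1 $2');
    // Split on any run of characters that are neither Latin/Cyrillic letters
    // nor digits, then count the non-empty pieces
    return text.split(/[^a-zа-я0-9]+/i).filter((word) => word.length > 0).length;
}

console.log(countWords('Hello, world!')); // 2
console.log(countWords('helloWorld')); // 2
```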
@@ -6738,7 +6774,7 @@ async function preparePipeline(pipeline, tools, options) {
     // TODO: [🚐] Make arrayable LLMs -> single LLM DRY
     const _llms = arrayableToArray(tools.llm);
     const llmTools = _llms.length === 1 ? _llms[0] : joinLlmExecutionTools(..._llms);
-    const llmToolsWithUsage = countTotalUsage(llmTools);
+    const llmToolsWithUsage = countUsage(llmTools);
     // <- TODO: [🌯]
     /*
     TODO: [🧠][🪑][🔃] Should this be done or not
@@ -12341,7 +12377,7 @@ function $initializeRunCommand(program) {
         pipeline,
         tools,
         isNotPreparedWarningSupressed: true,
-        maxExecutionAttempts: 3,
+        maxExecutionAttempts: DEFAULT_MAX_EXECUTION_ATTEMPTS,
         // <- TODO: Why "LLM execution failed undefinedx"
         maxParallelCount: 1, // <- TODO: Pass CLI argument
     });