@promptbook/remote-server 0.69.0-5 → 0.69.0-7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +17 -57
- package/esm/index.es.js +1 -1
- package/package.json +2 -2
- package/umd/index.umd.js +1 -1
package/README.md
CHANGED
@@ -39,13 +39,15 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 ## 🤍 The Promptbook Whitepaper
 
+
+
 If you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
 
 But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝑒𝑖rd responses. When this happens, you generally have three options:
 
 1. **Fine-tune** the model to your specifications or even train your own.
 2. **Prompt-engineer** the prompt to the best shape you can achieve.
-3.
+3. Orchestrate **multiple prompts** in a [pipeline](https://github.com/webgptorg/promptbook/discussions/64) to get the best result.
 
 In all of these situations, but especially in 3., the Promptbook library can make your life easier.
 
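To ground the "simple, single prompt" case described in the paragraph above: one hardcoded prompt sent through an SDK is only a few lines, and the integration style barely matters. A minimal sketch using the OpenAI Node SDK; the model name and prompt text are placeholders, not anything this package prescribes:

```ts
import OpenAI from 'openai';

// Reads OPENAI_API_KEY from the environment by default.
const openai = new OpenAI();

// One hardcoded prompt, one call.
const completion = await openai.chat.completions.create({
    model: 'gpt-4', // placeholder model name
    messages: [{ role: 'user', content: 'Write a one-sentence tagline for a bakery.' }],
});

console.log(completion.choices[0].message.content);
```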
@@ -57,7 +59,9 @@ In all of these situations, but especially in 3., the Promptbook library can mak
 - Promptbook has built-in versioning. You can test multiple **A/B versions** of pipelines and see which one works best.
 - Promptbook is designed to do [**RAG** (Retrieval-Augmented Generation)](https://github.com/webgptorg/promptbook/discussions/41) and other advanced techniques. You can use **knowledge** to improve the quality of the output.
 
-
+
+
+## 🧔 Pipeline _(for prompt-engineers)_
 
 **P**romp**t** **b**oo**k** markdown file (or `.ptbk.md` file) is a document that describes a **pipeline** - a series of prompts that are chained together to form something like a recipe for transforming natural language input.
 
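For readers new to the pipeline idea: without a pipeline description, chaining prompts means feeding one completion into the next prompt by hand. A rough sketch of that manual orchestration (plain OpenAI SDK calls, not Promptbook's API; the `.ptbk.md` format is meant to replace exactly this kind of glue code):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

// Helper: send one prompt, return the text of the first choice.
async function ask(prompt: string): Promise<string> {
    const completion = await openai.chat.completions.create({
        model: 'gpt-4', // placeholder model name
        messages: [{ role: 'user', content: prompt }],
    });
    return completion.choices[0].message.content ?? '';
}

// Hand-rolled two-step "pipeline": the outline produced by the first prompt
// becomes an input parameter of the second prompt.
const topic = 'sourdough bread';
const outline = await ask(`Write a bullet-point outline of an article about ${topic}.`);
const article = await ask(`Expand this outline into a short article:\n\n${outline}`);

console.log(article);
```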
@@ -374,6 +378,8 @@ The following glossary is used to clarify certain concepts:
 - When you want to **version** your prompts and **test multiple versions**
 - When you want to **log** the execution of prompts and backtrace the issues
 
+[See more](https://github.com/webgptorg/promptbook/discussions/111)
+
 ### ❌ When not to use
 
 - When you have already implemented a single simple prompt and it works fine for your job
@@ -383,6 +389,8 @@ The following glossary is used to clarify certain concepts:
 - When your main focus is on something other than text - like images, audio, video, spreadsheets _(other media types may be added in the future, [see discussion](https://github.com/webgptorg/promptbook/discussions/103))_
 - When you need to use recursion _([see the discussion](https://github.com/webgptorg/promptbook/discussions/38))_
 
+[See more](https://github.com/webgptorg/promptbook/discussions/112)
+
 ## 🐜 Known issues
 
 - [🤸‍♂️ Iterations not working yet](https://github.com/webgptorg/promptbook/discussions/55)
@@ -395,63 +403,15 @@ The following glossary is used to clarify certain concepts:
 
 ## ❔ FAQ
 
-
 If you have a question [start a discussion](https://github.com/webgptorg/promptbook/discussions/), [open an issue](https://github.com/webgptorg/promptbook/issues) or [write me an email](https://www.pavolhejny.com/contact).
 
-
-We are considering creating a bridge/converter between these two libraries.
-
-### Promptbooks vs. OpenAI's GPTs
-
-GPTs are chat assistants that can be assigned to specific tasks and materials. But they are still chat assistants. Promptbooks are a way to orchestrate many more predefined tasks to have much tighter control over the process. Promptbooks are not a good technology for creating human-like chatbots, GPTs are not a good technology for creating outputs with specific requirements.
-
-### Where should I store my promptbooks?
-
-If you use raw SDKs, you just put prompts in the source code, mixed in with typescript, javascript, python or whatever programming language you use.
-
-If you use promptbooks, you can store them in several places, each with its own advantages and disadvantages:
-
-1. As **source code**, typically git-committed. In this case you can use the versioning system and the promptbooks will be tightly coupled with the version of the application. You still get the power of promptbooks, as you separate the concerns of the prompt-engineer and the programmer.
-
-2. As data in a **database**. In this case, promptbooks are like posts / articles on the blog. They can be modified independently of the application. You don't need to redeploy the application to change the promptbooks. You can have multiple versions of promptbooks for each user. You can have a web interface for non-programmers to create and modify promptbooks. But you lose the versioning system and you still have to consider the interface between the promptbooks and the application _(= input and output parameters)_.
-
-3. In a **configuration** in environment variables. This is a good way to store promptbooks if you have an application with multiple deployments and you want to have different but simple promptbooks for each deployment and you don't need to change them often.
-
-### What should I do when I need the same promptbook in multiple human languages?
-
-A single promptbook can be written for several _(human)_ languages at once. However, we recommend that you have separate promptbooks for each language.
-
-In large language models, you will get better results if you have prompts in the same language as the user input.
-
-The best way to manage this is to have suffixed promptbooks like `write-website-content.en.ptbk.md` and `write-website-content.cs.ptbk.md` for each supported language.
-
+- [❔ Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?](https://github.com/webgptorg/promptbook/discussions/114)
+- [❔ How is it different from the OpenAI's GPTs?](https://github.com/webgptorg/promptbook/discussions/118)
+- [❔ How is it different from the Langchain?](https://github.com/webgptorg/promptbook/discussions/115)
+- [❔ How is it different from the DSPy?](https://github.com/webgptorg/promptbook/discussions/117)
+- [❔ How is it different from _anything_?](https://github.com/webgptorg/promptbook/discussions?discussions_q=is%3Aopen+label%3A%22Promptbook+vs%22)
+- [❔ Is Promptbook using RAG _(Retrieval-Augmented Generation)_?](https://github.com/webgptorg/promptbook/discussions/123)
+- [❔ Is Promptbook using function calling?](https://github.com/webgptorg/promptbook/discussions/124)
 
 ## ⌚ Changelog
 
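The removed FAQ answer above lists three places to keep promptbooks (source code, database, configuration). A minimal sketch of options 1 and 3 combined, loading the pipeline text from a git-committed file with an environment-variable override; the file path and variable name are illustrative only:

```ts
import { readFile } from 'node:fs/promises';

// Option 3 (configuration) wins when the variable is set; otherwise fall back
// to option 1 (a committed .ptbk.md file shipped next to the application code).
const pipelineSource =
    process.env.WRITE_WEBSITE_CONTENT_PIPELINE ??
    (await readFile('./promptbooks/write-website-content.en.ptbk.md', 'utf-8'));
```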
package/esm/index.es.js
CHANGED
@@ -7,7 +7,7 @@ import spaceTrim$1, { spaceTrim } from 'spacetrim';
 /**
  * The version of the Promptbook library
  */
-var PROMPTBOOK_VERSION = '0.69.0-
+var PROMPTBOOK_VERSION = '0.69.0-6';
 // TODO: !!!! List here all the versions and annotate + put into script
 
 /*! *****************************************************************************
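The only code change in this bundle is the bump of the `PROMPTBOOK_VERSION` constant. Assuming the constant is re-exported from the package entry point, as the bundled index above suggests, a consumer can read it at runtime, for example to log which Promptbook build is serving requests:

```ts
// Assumption: PROMPTBOOK_VERSION is exported from the package root;
// if it is not, read the version from the package manifest instead.
import { PROMPTBOOK_VERSION } from '@promptbook/remote-server';

console.info(`Running Promptbook remote server library ${PROMPTBOOK_VERSION}`);
```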
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
 "name": "@promptbook/remote-server",
-"version": "0.69.0-
+"version": "0.69.0-7",
 "description": "Supercharge your use of large language models",
 "private": false,
 "sideEffects": false,
@@ -47,7 +47,7 @@
 "module": "./esm/index.es.js",
 "typings": "./esm/typings/src/_packages/remote-server.index.d.ts",
 "peerDependencies": {
-"@promptbook/core": "0.69.0-
+"@promptbook/core": "0.69.0-7"
 },
 "dependencies": {
 "colors": "1.4.0",
package/umd/index.umd.js
CHANGED
@@ -14,7 +14,7 @@
 /**
  * The version of the Promptbook library
  */
-var PROMPTBOOK_VERSION = '0.69.0-
+var PROMPTBOOK_VERSION = '0.69.0-6';
 // TODO: !!!! List here all the versions and annotate + put into script
 
 /*! *****************************************************************************