@promptbook/openai 0.46.0-4 → 0.46.0
This diff shows the changes between two publicly released versions of the package, as they appear in their respective public registries. It is provided for informational purposes only.
- package/README.md +1 -1
- package/package.json +2 -2
package/README.md
CHANGED

@@ -38,7 +38,7 @@ Rest of the documentation is common for **entire promptbook ecosystem**:
 
 
 
-## 🤍 Whitepaper
+## 🤍 The Promptbook Whitepaper
 
 When you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 2, or whatever, it doesn't matter how it is integrated. Whether it's the direct calling of a REST API, using the SDK, hardcoding the prompt in the source code, or importing a text file, the process remains the same.
 
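The unchanged README context above claims that a single prompt behaves the same whether it is sent through a raw REST call, an SDK, or a hardcoded string. As an illustration only (not part of this diff), here is a minimal sketch of the SDK route in TypeScript, using the `openai@4` dependency pinned in the package.json below; the model name and the hardcoded prompt are placeholder assumptions:

```ts
import OpenAI from "openai";

// Assumes OPENAI_API_KEY is set in the environment.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  // A single, hardcoded prompt; the integration style does not change the result.
  const completion = await client.chat.completions.create({
    model: "gpt-4", // placeholder model name
    messages: [
      { role: "user", content: "Summarize the Promptbook whitepaper in one sentence." },
    ],
  });
  console.log(completion.choices[0].message.content);
}

main();
```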
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@promptbook/openai",
-  "version": "0.46.0-4",
+  "version": "0.46.0",
   "description": "Library to supercharge your use of large language models",
   "private": false,
   "sideEffects": false,

@@ -38,7 +38,7 @@
     "openai": "4.2.0"
   },
   "peerDependencies": {
-    "@promptbook/core": "0.46.0-4"
+    "@promptbook/core": "0.46.0"
   },
   "main": "./umd/index.umd.js",
   "module": "./esm/index.es.js",
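Because `@promptbook/core` is declared as an exact-version peer dependency (`0.46.0`, with no `^` or `~` range), consumers should install a matching version alongside this package, for example `npm install @promptbook/openai@0.46.0 @promptbook/core@0.46.0`; on npm 7+, an out-of-sync version such as the `0.46.0-4` pre-release would typically fail peer-dependency resolution.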