@prompty/openai 2.0.0-alpha.1
- package/README.md +87 -0
- package/dist/index.cjs +671 -0
- package/dist/index.cjs.map +1 -0
- package/dist/index.d.cts +72 -0
- package/dist/index.d.ts +72 -0
- package/dist/index.js +627 -0
- package/dist/index.js.map +1 -0
- package/package.json +59 -0
package/README.md
ADDED
@@ -0,0 +1,87 @@

# @prompty/openai

OpenAI provider for Prompty — executor and processor for the OpenAI API.

## Installation

```bash
npm install @prompty/core @prompty/openai openai
```

## Usage

Import `@prompty/openai` to auto-register the `openai` provider, then use `@prompty/core` as normal:

```typescript
import "@prompty/openai";
import { run } from "@prompty/core";

const result = await run("./chat.prompty", {
  question: "What is quantum computing?",
});
```

## `.prompty` Configuration

Set `provider: openai` in your `.prompty` file:

```prompty
---
name: my-prompt
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:OPENAI_BASE_URL}
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1000
---
system:
You are a helpful assistant.

user:
{{question}}
```
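
The `${env:...}` references resolve from environment variables, so the connection above expects something like the following (the key is a placeholder; substitute your own, and point `OPENAI_BASE_URL` at the default OpenAI endpoint or any compatible gateway):

```shell
# Placeholder values for illustration only.
export OPENAI_API_KEY="sk-your-key-here"
export OPENAI_BASE_URL="https://api.openai.com/v1"
```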

## Supported API Types

| `apiType` | Description |
|-----------|-------------|
| `chat` (default) | Chat completions via `client.chat.completions.create()` |
| `embedding` | Embeddings via `client.embeddings.create()` |
| `image` | Image generation via `client.images.generate()` |
| `responses` | Responses API via `client.responses.create()` |
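
For example, switching a prompt to the embeddings endpoint is a matter of changing `apiType`; a minimal sketch, assuming `text-embedding-3-small` as the model id:

```prompty
model:
  id: text-embedding-3-small
  provider: openai
  apiType: embedding
```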

## Streaming

Enable streaming via `additionalProperties`:

```prompty
model:
  options:
    additionalProperties:
      stream: true
```

Returns a `PromptyStream` that can be iterated asynchronously.

## Exports

| Export | Description |
|--------|-------------|
| `OpenAIExecutor` | Executor implementation for OpenAI |
| `OpenAIProcessor` | Processor for OpenAI responses |
| `processResponse` | Shared response processing helper |
| `messageToWire` | Convert `Message` → OpenAI wire format |
| `buildChatArgs` | Build chat completion arguments |
| `buildEmbeddingArgs` | Build embedding arguments |
| `buildImageArgs` | Build image generation arguments |
| `buildResponsesArgs` | Build Responses API arguments |

## License

MIT