@prompty/core 0.1.4 → 2.0.0-alpha.1

package/README.md CHANGED
@@ -1,2 +1,105 @@
- # promptyjs
- A prompty js runtime.
+ # @prompty/core
+
+ Prompty core runtime: load, render, parse, and trace `.prompty` files in TypeScript/JavaScript.
+
+ ## Installation
+
+ ```bash
+ npm install @prompty/core
+ ```
+
+ You'll also need a **provider package** for your LLM:
+
+ ```bash
+ # OpenAI
+ npm install @prompty/openai
+
+ # Azure AI Foundry
+ npm install @prompty/foundry
+ ```
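Providers typically read credentials from environment variables; the sample `.prompty` in this README resolves `OPENAI_API_KEY` and `OPENAI_BASE_URL` via `${env:...}` references. A minimal setup sketch (placeholder values, substitute your own key and, optionally, endpoint):

```shell
# The example .prompty in this README resolves these via ${env:...}.
export OPENAI_API_KEY="sk-your-key-here"
export OPENAI_BASE_URL="https://api.openai.com/v1"
```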
+
+ ## Quick Start
+
+ ```typescript
+ import "@prompty/openai"; // registers the OpenAI provider
+ import { run } from "@prompty/core";
+
+ // Load and run a .prompty file in one call
+ const result = await run("./my-prompt.prompty", {
+   question: "What is the capital of France?",
+ });
+ console.log(result);
+ ```
+
+ ## Step-by-Step Usage
+
+ ```typescript
+ import "@prompty/openai";
+ import { load, prepare, execute, process } from "@prompty/core";
+
+ // 1. Load — parse .prompty file into a typed Prompty object
+ const agent = await load("./chat.prompty");
+
+ // 2. Prepare — render template + parse into messages
+ const messages = await prepare(agent, { name: "Alice" });
+
+ // 3. Execute — call the LLM
+ const response = await execute(agent, messages);
+
+ // 4. Process — extract the result
+ const result = await process(agent, response);
+ ```
+
+ ## What's a `.prompty` file?
+
+ A Markdown file with YAML frontmatter that defines an LLM prompt:
+
+ ```prompty
+ ---
+ name: greeting
+ model:
+   id: gpt-4o-mini
+   provider: openai
+   connection:
+     kind: key
+     endpoint: ${env:OPENAI_BASE_URL}
+     apiKey: ${env:OPENAI_API_KEY}
+ ---
+ system:
+ You are a helpful assistant.
+
+ user:
+ Hello, my name is {{name}}. {{question}}
+ ```
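Conceptually, `load` splits the YAML frontmatter from the template body and `prepare` fills in the `{{...}}` placeholders with your inputs. A simplified, self-contained sketch of that rendering step (illustration only — the helper name is hypothetical, and the real runtime does full YAML parsing, `${env:...}` expansion, and role-aware message splitting):

```typescript
// Simplified sketch: strip "---\n...\n---\n" frontmatter, then substitute
// {{variable}} placeholders. Not the package's actual implementation.
function renderPromptyBody(source: string, inputs: Record<string, string>): string {
  // Separate the YAML frontmatter block from the Markdown template body.
  const match = source.match(/^---\n[\s\S]*?\n---\n([\s\S]*)$/);
  const body = match ? match[1] : source;
  // Replace {{name}}-style placeholders with the supplied inputs.
  return body.replace(/\{\{(\w+)\}\}/g, (_, key) => inputs[key] ?? "");
}

const template = "user:\nHello, my name is {{name}}. {{question}}";
renderPromptyBody(template, { name: "Alice", question: "What is 2+2?" });
// → "user:\nHello, my name is Alice. What is 2+2?"
```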
+
+ ## Tracing
+
+ ```typescript
+ import { Tracer, PromptyTracer, run } from "@prompty/core";
+
+ // Write .tracy JSON files to .runs/
+ const tracer = new PromptyTracer({ outputDir: ".runs" });
+ Tracer.add("prompty", tracer.factory);
+
+ const result = await run("./chat.prompty", { question: "Hi" });
+ console.log("Trace:", tracer.lastTracePath);
+ ```
+
+ ## Key Exports
+
+ | Export | Description |
+ |--------|-------------|
+ | `load` | Parse a `.prompty` file → `Prompty` object |
+ | `prepare` | Render template + parse → `Message[]` |
+ | `execute` | Call the LLM provider |
+ | `process` | Extract result from LLM response |
+ | `run` | All-in-one: load → prepare → execute → process |
+ | `executeAgent` | Agent loop with tool calling |
+ | `Tracer` / `PromptyTracer` | Observability and tracing |
+ | `registerExecutor` / `registerProcessor` | Register custom providers |
+ | `registerConnection` | Register named connections |
+ | `Message`, `ContentPart`, `PromptyStream` | Core types |
+
+ ## License
+
+ MIT