@output.ai/llm 0.2.9 → 0.2.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +3 -117
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -3,123 +3,9 @@
  Unified LLM generation API with built-in prompt templating for Output Framework workflows.

  [![npm version](https://img.shields.io/npm/v/@output.ai/llm)](https://www.npmjs.com/package/@output.ai/llm)
- [![Documentation](https://img.shields.io/badge/docs-docs.output.ai-blue)](https://docs.output.ai/packages/llm)
-
- ## Installation
-
- ```bash
- npm install @output.ai/llm
- ```
-
- ## Quick Start
-
- ```typescript
- import { generateText } from '@output.ai/llm';
-
- const result = await generateText({
-   prompt: 'summarize@v1',
-   variables: { text: 'Your content here' }
- });
- ```
-
- ## Generation Functions
-
- | Function | Description |
- |----------|-------------|
- | `generateText` | Generate unstructured text |
- | `generateObject` | Generate a structured object matching a Zod schema |
- | `generateArray` | Generate an array of structured items |
- | `generateEnum` | Generate a value from allowed options |
-
- ### Example: Structured Output
-
- ```typescript
- import { generateObject } from '@output.ai/llm';
- import { z } from '@output.ai/core';
-
- const recipe = await generateObject({
-   prompt: 'recipe@v1',
-   variables: { dish: 'lasagna' },
-   schema: z.object({
-     title: z.string(),
-     ingredients: z.array(z.string()),
-     steps: z.array(z.string())
-   })
- });
- ```
-
- ## Prompt Files
-
- Prompt files use YAML frontmatter for configuration and LiquidJS for templating:
-
- ```yaml
- ---
- provider: anthropic
- model: claude-sonnet-4-20250514
- temperature: 0.7
- ---
-
- <system>
- You are a helpful assistant.
- </system>
-
- <user>
- {{ user_message }}
- </user>
- ```
-
- ### Supported Providers
-
- - **Anthropic** - Requires `ANTHROPIC_API_KEY`
- - **OpenAI** - Requires `OPENAI_API_KEY`
- - **Azure OpenAI** - Requires Azure-specific environment variables
-
- ## Advanced Features
-
- ### AI SDK Pass-Through
-
- All generate functions accept additional [AI SDK options](https://sdk.vercel.ai/docs) that are passed through to the underlying provider. This enables tool calling, retry configuration, and other advanced features.
-
- #### Tool Calling
-
- ```typescript
- import { generateText, tool } from '@output.ai/llm';
- import { z } from '@output.ai/core';
-
- const result = await generateText({
-   prompt: 'agent@v1',
-   variables: { task: 'Research competitor pricing' },
-   tools: {
-     searchWeb: tool({
-       description: 'Search the web for information',
-       parameters: z.object({ query: z.string() }),
-       execute: async ({ query }) => fetchSearchResults(query)
-     })
-   },
-   toolChoice: 'auto'
- });
-
- // Access tool calls made by the model
- console.log(result.toolCalls);
- ```
-
- #### Common Pass-Through Options
-
- | Option | Type | Description |
- |--------|------|-------------|
- | `tools` | `ToolSet` | Tools the model can call (generateText only) |
- | `toolChoice` | `'auto' \| 'none' \| 'required'` | Tool selection strategy |
- | `maxRetries` | `number` | Max retry attempts (default: 2) |
- | `seed` | `number` | Seed for deterministic output |
- | `abortSignal` | `AbortSignal` | Cancel the request |
- | `topP` | `number` | Nucleus sampling (0-1) |
- | `topK` | `number` | Top-K sampling |
-
- Options set in the prompt file (temperature, maxTokens) can be overridden at call time.

  ## Documentation
+ - [README](https://docs.output.ai/packages/llm)
+ - [API Reference](https://output-ai-reference-code-docs.onrender.com/modules/llm_src.html)

- For comprehensive documentation, visit:
-
- - [Package Reference](https://docs.output.ai/packages/llm)
- - [Getting Started](https://docs.output.ai/quickstart)
+ <!-- Internal Dev Note: The documentation for this package is found at docs/guides/packages/llm.mdx -->
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@output.ai/llm",
-   "version": "0.2.9",
+   "version": "0.2.10",
    "description": "Framework abstraction to interact with LLM models",
    "type": "module",
    "main": "src/index.js",
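The removed README stated that options set in a prompt file (temperature, maxTokens) can be overridden at call time. A minimal sketch of that precedence rule, using hypothetical option shapes rather than the package's actual internals:

```typescript
// Hypothetical option shape -- not the actual @output.ai/llm internals.
type GenOptions = { temperature?: number; maxTokens?: number; seed?: number };

// Options parsed from the prompt file's YAML frontmatter.
const promptFileOptions: GenOptions = { temperature: 0.7, maxTokens: 1024 };

// Options passed at the generate call site.
const callTimeOptions: GenOptions = { temperature: 0.2 };

// Call-time values take precedence; prompt-file values fill the gaps.
const effective: GenOptions = { ...promptFileOptions, ...callTimeOptions };

console.log(effective); // { temperature: 0.2, maxTokens: 1024 }
```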