llm-fns 1.0.17 → 1.0.18

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/package.json +1 -1
  2. package/readme.md +129 -0
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "llm-fns",
-  "version": "1.0.17",
+  "version": "1.0.18",
   "description": "",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
package/readme.md CHANGED
@@ -32,6 +32,135 @@ const llm = createLlm({
 
 ---
 
+## Model Presets (Temperature & Thinking Level)
+
+The `defaultModel` parameter accepts either a simple string or a **configuration object** that bundles the model name with default parameters like `temperature` or `reasoning_effort`.
+
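+The string form is the simplest: pass just the model name (a minimal sketch; `simpleLlm` is an illustrative name, and `openai` is the client instance from the setup above):
+
+```typescript
+// Minimal sketch: the plain-string form of defaultModel
+const simpleLlm = createLlm({
+  openai,
+  defaultModel: 'gpt-4o'
+});
+```
+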
+### Setting a Default Temperature
+
+```typescript
+// Create a "creative" client with high temperature
+const creativeWriter = createLlm({
+  openai,
+  defaultModel: {
+    model: 'gpt-4o',
+    temperature: 1.2,
+    frequency_penalty: 0.5
+  }
+});
+
+// All calls will use these defaults
+await creativeWriter.promptText("Write a poem about the ocean");
+
+// Override for a specific call
+await creativeWriter.promptText("Summarize this document", {
+  model: { temperature: 0.2 } // Override just temperature, keeps model
+});
+```
+
+### Configuring Thinking/Reasoning Models
+
+For models that support extended thinking (like `o1`, `o3`, or Claude with thinking), use `reasoning_effort` or model-specific parameters:
+
+```typescript
+// Create a "deep thinker" client for complex reasoning tasks
+const reasoner = createLlm({
+  openai,
+  defaultModel: {
+    model: 'o3',
+    reasoning_effort: 'high' // 'low' | 'medium' | 'high'
+  }
+});
+
+// All calls will use extended thinking
+const analysis = await reasoner.promptText("Analyze this complex problem...");
+
+// Create a fast reasoning client for simpler tasks
+const quickReasoner = createLlm({
+  openai,
+  defaultModel: {
+    model: 'o3-mini',
+    reasoning_effort: 'low'
+  }
+});
+```
+
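+As with temperature, the effort level can be dialed down for a single call (a sketch, assuming a partial `model` override merges with the preset the same way the temperature override above does):
+
+```typescript
+// Sketch: keep the o3 preset, but use low effort for a quick follow-up
+await reasoner.promptText("Give a one-line summary of the analysis", {
+  model: { reasoning_effort: 'low' }
+});
+```
+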
+### Multiple Preset Clients
+
+A common pattern is to create multiple clients with different presets:
+
+```typescript
+// Deterministic client for structured data extraction
+const extractorLlm = createLlm({
+  openai,
+  defaultModel: {
+    model: 'gpt-4o-mini',
+    temperature: 0
+  }
+});
+
+// Creative client for content generation
+const writerLlm = createLlm({
+  openai,
+  defaultModel: {
+    model: 'gpt-4o',
+    temperature: 1.0,
+    top_p: 0.95
+  }
+});
+
+// Reasoning client for complex analysis
+const analyzerLlm = createLlm({
+  openai,
+  defaultModel: {
+    model: 'o3',
+    reasoning_effort: 'medium'
+  }
+});
+
+// Use the appropriate client for each task
+const data = await extractorLlm.promptZod(DataSchema);
+const story = await writerLlm.promptText("Write a short story");
+const solution = await analyzerLlm.promptText("Solve this logic puzzle...");
+```
+
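+Here `DataSchema` stands in for any Zod schema; a minimal sketch of one, assuming `zod` is installed:
+
+```typescript
+import { z } from 'zod';
+
+// Illustrative schema for the extractor example above (not part of llm-fns)
+const DataSchema = z.object({
+  name: z.string(),
+  age: z.number()
+});
+```
+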
+### Per-Call Overrides
+
+Any preset can be overridden on individual calls:
+
+```typescript
+const llm = createLlm({
+  openai,
+  defaultModel: {
+    model: 'gpt-4o',
+    temperature: 0.7
+  }
+});
+
+// Use defaults
+await llm.promptText("Hello");
+
+// Override model entirely
+await llm.promptText("Complex task", {
+  model: {
+    model: 'o3',
+    reasoning_effort: 'high'
+  }
+});
+
+// Override just temperature (keeps default model)
+await llm.promptText("Be more creative", {
+  temperature: 1.5
+});
+
+// Or use short form to switch models
+await llm.promptText("Quick task", {
+  model: 'gpt-4o-mini'
+});
+```
+
+---
+
 # Use Case 1: Text & Chat (`llm.prompt` / `llm.promptText`)
 
 ### Level 1: The Easy Way (String Output)