llmist 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Zbigniew Sobiecki

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,394 @@
# llmist

[![CI](https://github.com/zbigniewsobiecki/llmist/actions/workflows/ci.yml/badge.svg)](https://github.com/zbigniewsobiecki/llmist/actions/workflows/ci.yml)
[![codecov](https://codecov.io/gh/zbigniewsobiecki/llmist/branch/main/graph/badge.svg)](https://codecov.io/gh/zbigniewsobiecki/llmist)
[![npm version](https://badge.fury.io/js/llmist.svg)](https://www.npmjs.com/package/llmist)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)

> **Universal TypeScript LLM client with streaming-first tool execution and a simple, extensible agent framework**

llmist is an asynchronous, streaming-first, provider-agnostic LLM client that makes it easy to build AI agents with **any model**: no structured outputs or native tool calling required. Switch between OpenAI, Anthropic, and Gemini without changing your code, plug into any part of the agent workflow, and have tools (gadgets) trigger while the response is still streaming.

---

## 🎯 Why llmist?

- **🌐 Universal** - Works with any LLM provider (OpenAI, Anthropic, Gemini, custom)
- **📝 No Structured Outputs** - Flexible YAML/JSON grammar works with any text model
- **⚡ Streaming-First** - Built for real-time responses and efficient error handling
- **🪝 Powerful Hooks** - Monitor, customize, and control every step of execution
- **🎨 Beautiful API** - Fluent builder pattern with model shortcuts and presets
- **🧪 Testing-Friendly** - Built-in mocking system for zero-cost testing
- **🤖 Soon-to-be Production-Ready** - Timeouts, error recovery, human-in-the-loop, structured logging

---

## 🚀 Quick Start

### Installation

```bash
npm install llmist
# or
bun add llmist
```

---

## 🖥️ Command Line Interface

```bash
# Quick chat
bunx llmist chat "Explain TypeScript generics" --model haiku

# Agent with tools
bunx llmist agent "Calculate 15 * 23" --gadget ./calculator.ts --model sonnet

# Pipe input
cat document.txt | llmist chat "Summarize" --model gpt-5-nano
```

📖 **[CLI Reference](./docs/CLI.md)** | **[CLI Gadgets Guide](./docs/CLI_GADGETS.md)**

### Your First Agent

```typescript
import { LLMist, Gadget, z } from 'llmist';

// Define a tool (called "gadget" in llmist)
class Calculator extends Gadget({
  description: 'Performs arithmetic operations',
  schema: z.object({
    operation: z.enum(['add', 'multiply', 'subtract', 'divide']),
    a: z.number(),
    b: z.number(),
  }),
}) {
  execute(params: this['params']): string {
    const { operation, a, b } = params; // Automatically typed!
    switch (operation) {
      case 'add': return `${a + b}`;
      case 'multiply': return `${a * b}`;
      case 'subtract': return `${a - b}`;
      case 'divide': return `${a / b}`;
      default: throw new Error('Unknown operation');
    }
  }
}

// Create and run agent with fluent API
const answer = await LLMist.createAgent()
  .withModel('gpt-5-nano') // Model shortcuts: sonnet, haiku, etc.
  .withSystem('You are a helpful math assistant')
  .withGadgets(Calculator)
  .askAndCollect('What is 15 times 23?');

console.log(answer); // "15 times 23 equals 345"
```

**That's it!**

📖 **[Getting Started Guide](./docs/GETTING_STARTED.md)** - Learn more in 5 minutes

---

## ✨ Key Features

### 🌐 Multi-Provider Support

```typescript
// Use model shortcuts
.withModel('gpt-5-nano') // OpenAI gpt-5-nano
.withModel('sonnet')     // Claude 3.5 Sonnet
.withModel('haiku')      // Claude 3.5 Haiku
.withModel('flash')      // Gemini 2.0 Flash

// Or full names
.withModel('openai:gpt-5-nano')
.withModel('anthropic:claude-3-5-sonnet-latest')
.withModel('gemini:gemini-2.0-flash')
```

**Automatic provider discovery** - Just set API keys as env vars (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`)

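A minimal sketch of what that looks like in practice, assuming only the builder API shown above: the model name or shortcut selects the provider, and the matching key is read from the environment.

```typescript
import { LLMist } from 'llmist';

// With OPENAI_API_KEY / ANTHROPIC_API_KEY / GEMINI_API_KEY exported in the shell,
// no explicit provider configuration is needed -- the model string picks the provider.
const reply = await LLMist.createAgent()
  .withModel('anthropic:claude-3-5-sonnet-latest') // or a shortcut like 'sonnet'
  .askAndCollect('Say hello from whichever provider handled this.');

console.log(reply);
```
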
📖 **[Providers Guide](./docs/PROVIDERS.md)** | **[Model Catalog](./docs/MODEL_CATALOG.md)**

### 🛠️ Flexible Gadgets (Tools)

Two ways to create tools:

```typescript
// 1. Class-based with type safety
class Weather extends Gadget({
  description: 'Get weather for a city',
  schema: z.object({ city: z.string() }),
}) {
  async execute(params: this['params']) {
    // params is auto-typed!
    const response = await fetch(`https://api.weather.com/${params.city}`);
    const data = await response.json();
    return `Weather: ${data.temp}°C`;
  }
}

// 2. Functional for simplicity
const calculator = createGadget({
  description: 'Arithmetic operations',
  schema: z.object({ operation: z.enum(['add']), a: z.number(), b: z.number() }),
  execute: ({ operation, a, b }) => `${a + b}`,
});
```

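Either style plugs into the same fluent builder. A short usage sketch, assuming `.withGadgets(...)` accepts `createGadget` instances as well as gadget classes (the snippets above only show classes):

```typescript
// Hedged sketch -- reuses Weather and calculator from the block above.
const answer = await LLMist.createAgent()
  .withModel('haiku')
  .withGadgets(Weather, calculator) // assumption: classes and instances both register
  .askAndCollect('What is the weather in Paris, and what is 2 + 3?');
```
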
📖 **[Gadgets Guide](./docs/GADGETS.md)** | **[Examples](./examples/02-custom-gadgets.ts)**

### 🪝 Lifecycle Hooks

```typescript
import { HookPresets } from 'llmist';

// Use presets for instant monitoring
await LLMist.createAgent()
  .withHooks(HookPresets.monitoring()) // Logs + timing + tokens + errors
  .ask('Your prompt');

// Or combine specific presets
.withHooks(HookPresets.merge(
  HookPresets.timing(),
  HookPresets.tokenTracking()
))

// Or write custom hooks
.withHooks({
  observers: {
    onLLMCallComplete: async (ctx) => {
      console.log(`Used ${ctx.usage?.totalTokens} tokens`);
    },
  },
  interceptors: {
    interceptTextChunk: (chunk) => chunk.toUpperCase(),
  },
})
```

📖 **[Hooks Guide](./docs/HOOKS.md)** | **[Examples](./examples/03-hooks.ts)**

### 💬 Human-in-the-Loop

```typescript
import { Gadget, HumanInputException, z } from 'llmist';

class AskUser extends Gadget({
  description: 'Ask the user a question',
  schema: z.object({ question: z.string() }),
}) {
  execute(params: this['params']) {
    throw new HumanInputException(params.question);
  }
}

await LLMist.createAgent()
  .withGadgets(AskUser)
  .onHumanInput(async (question) => {
    // promptUser is your own UI function (readline prompt, web form, etc.)
    return await promptUser(question);
  })
  .ask('Help me plan my vacation');
```

📖 **[Human-in-the-Loop Guide](./docs/HUMAN_IN_LOOP.md)** | **[Examples](./examples/04-human-in-loop.ts)**

### ⚡ Streaming & Event Handling

```typescript
// Collect all text
const answer = await LLMist.createAgent()
  .withModel('haiku')
  .askAndCollect('Tell me a joke');

// Handle specific events
await LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(Calculator)
  .askWith('Calculate 2 + 2', {
    onText: (text) => console.log('LLM:', text),
    onGadgetCall: (call) => console.log('Calling:', call.gadgetName),
    onGadgetResult: (result) => console.log('Result:', result.result),
  });

// Manual control
const agent = LLMist.createAgent().withModel('gpt-5-nano').ask('Question');
for await (const event of agent.run()) {
  if (event.type === 'text') console.log(event.content);
}
```

📖 **[Streaming Guide](./docs/STREAMING.md)** | **[Examples](./examples/05-streaming.ts)**

### 🧪 Mock Testing

```typescript
import { LLMist, mockLLM, createMockClient } from 'llmist';

mockLLM()
  .forModel('gpt-5')
  .whenMessageContains('calculate')
  .returns('The answer is 42')
  .register();

const answer = await LLMist.createAgent()
  .withClient(createMockClient())
  .withModel('gpt-5')
  .askAndCollect('Calculate 2 + 2');

console.log(answer); // "The answer is 42" - no API call made!
```

📖 **[Testing Guide](./docs/TESTING.md)** | **[Examples](./examples/mock-testing-example.ts)**

### 📊 Model Catalog & Cost Estimation

```typescript
const client = new LLMist();

// Get model specs
const gpt4 = client.modelRegistry.getModelSpec('openai:gpt-4');
console.log(gpt4.contextWindow); // 128000
console.log(gpt4.pricing.input); // 10.0 per 1M tokens

// Estimate costs
const cost = client.modelRegistry.estimateCost('openai:gpt-4', 10_000, 2_000);
console.log(`$${cost.totalCost.toFixed(4)}`);

// Find cheapest model
const cheapest = client.modelRegistry.getCheapestModel(10_000, 2_000);
```

📖 **[Model Catalog Guide](./docs/MODEL_CATALOG.md)** | **[Custom Models](./docs/CUSTOM_MODELS.md)**

### 🔢 Native Token Counting

```typescript
const messages = [
  { role: 'system', content: 'You are helpful' },
  { role: 'user', content: 'Explain quantum computing' }
];

const tokens = await client.countTokens('openai:gpt-5', messages);
const cost = client.modelRegistry.estimateCost('openai:gpt-5', tokens, 1000);
```

Uses provider-specific methods (tiktoken for OpenAI, native APIs for Anthropic/Gemini).

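Because counting happens before the request is sent, you can gate expensive calls up front. A small, hedged sketch using only the `countTokens` / `estimateCost` calls shown above (the $0.05 budget is an arbitrary illustration):

```typescript
const inputTokens = await client.countTokens('openai:gpt-5', messages);
const estimate = client.modelRegistry.estimateCost('openai:gpt-5', inputTokens, 1_000);

// Refuse to send if the estimated spend exceeds the budget for this request.
if (estimate.totalCost > 0.05) {
  throw new Error(`Estimated cost $${estimate.totalCost.toFixed(4)} exceeds budget`);
}
```
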
---

## 📚 Documentation

**Getting Started**
- **[Getting Started](./docs/GETTING_STARTED.md)** - Your first agent in 5 minutes
- **[Configuration](./docs/CONFIGURATION.md)** - All available options
- **[Quick Methods](./docs/QUICK_METHODS.md)** - Simple APIs for basic tasks

**Core Concepts**
- **[Gadgets (Tools)](./docs/GADGETS.md)** - Creating custom functions
- **[Hooks](./docs/HOOKS.md)** - Lifecycle monitoring and control
- **[Streaming](./docs/STREAMING.md)** - Real-time response handling
- **[Human-in-the-Loop](./docs/HUMAN_IN_LOOP.md)** - Interactive workflows

**Advanced**
- **[Providers](./docs/PROVIDERS.md)** - Multi-provider configuration
- **[Model Catalog](./docs/MODEL_CATALOG.md)** - Querying models and costs
- **[Custom Models](./docs/CUSTOM_MODELS.md)** - Register fine-tuned models
- **[Error Handling](./docs/ERROR_HANDLING.md)** - Recovery strategies
- **[Testing](./docs/TESTING.md)** - Mocking and test strategies

**Reference**
- **[CLI Reference](./docs/CLI.md)** - Command-line interface
- **[Architecture](./docs/ARCHITECTURE.md)** - Technical deep-dive
- **[Debugging](./docs/DEBUGGING.md)** - Capture raw prompts/responses
- **[Troubleshooting](./docs/TROUBLESHOOTING.md)** - Common issues

---

## 🎓 Examples

Comprehensive examples are available in the **[examples/](./examples/)** directory:

| Example | Description |
|---------|-------------|
| **[01-basic-usage.ts](./examples/01-basic-usage.ts)** | Simple agent with calculator gadget |
| **[02-custom-gadgets.ts](./examples/02-custom-gadgets.ts)** | Async gadgets, validation, loop termination |
| **[03-hooks.ts](./examples/03-hooks.ts)** | Lifecycle hooks for monitoring |
| **[04-human-in-loop.ts](./examples/04-human-in-loop.ts)** | Interactive conversations |
| **[05-streaming.ts](./examples/05-streaming.ts)** | Real-time streaming |
| **[06-model-catalog.ts](./examples/06-model-catalog.ts)** | Model queries and cost estimation |
| **[07-logging.ts](./examples/07-logging.ts)** | Logging and debugging |
| **[13-syntactic-sugar.ts](./examples/13-syntactic-sugar.ts)** | Fluent API showcase |

**Run any example:**
```bash
bun install && bun run build
bunx tsx examples/01-basic-usage.ts
```

See **[examples/README.md](./examples/README.md)** for the full list and details.

---

## 🏗️ Architecture

llmist follows **SOLID principles** with a composable architecture.

**Key components:**
- **LLMist** - Provider-agnostic streaming client
- **Agent** - Full agent loop with automatic orchestration
- **StreamProcessor** - Process LLM streams with custom event loops
- **GadgetExecutor** - Execute tools with timeout and error handling
- **GadgetRegistry** - Registry for available tools

📖 **[Architecture Guide](./docs/ARCHITECTURE.md)** for detailed design documentation

---

## 🧪 Development

```bash
bun install

# Run tests
bun test              # All tests
bun run test:unit     # Unit tests only
bun run test:e2e      # E2E tests only

# Build and lint
bun run build
bun run lint
bun run format
```

---

## 🤝 Contributing

Contributions welcome! Please ensure:

1. ✅ All tests pass: `bun test`
2. ✅ Code is formatted: `bun run format`
3. ✅ Linting passes: `bun run lint`
4. ✅ Types are properly defined
5. ✅ Examples/docs updated for API changes

---

## 📄 License

MIT - see [LICENSE](./LICENSE) for details.

---

## 🔗 Links

- 📦 [npm Package](https://www.npmjs.com/package/llmist)
- 🐙 [GitHub Repository](https://github.com/zbigniewsobiecki/llmist)
- 📚 [Full Documentation](./docs/)
- 🐛 [Issue Tracker](https://github.com/zbigniewsobiecki/llmist/issues)

---

Made with 🤪 by the llmist team