llmist 2.1.0 → 2.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -5,23 +5,68 @@
 [![npm version](https://img.shields.io/npm/v/llmist.svg)](https://www.npmjs.com/package/llmist)
 [![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)

- > **Universal TypeScript LLM client with own function calling grammar, streaming-first tool execution and simple, extensible agent framework**
+ > **Tools execute while the LLM streams. Any model. Clean API.**

 > **⚠️ EARLY WORK IN PROGRESS** - This library is under active development. APIs may change without notice. Use in production at your own risk.

- llmist is an asynchronous, streaming-first, provider-agnostic LLM client that makes it easy to build AI agents with **any model**—no structured outputs or native tool calling required. Switch between OpenAI, Anthropic, and Gemini without changing your code, plug into any part of the Agent workflow, have tools (Gadgets) triggered while still streaming.
+ Most LLM libraries buffer the entire response before parsing tool calls. **llmist parses incrementally.**
+
+ Your gadgets (tools) fire the instant they're complete in the stream—giving your users immediate feedback. llmist implements its own function calling mechanism via a simple text-based block format. No JSON mode required. No native tool support needed. Works with OpenAI, Anthropic, and Gemini out of the box—extensible to any provider.
+
+ A fluent, async-first API lets you plug into any part of the agent loop. Fully typed. Composable. Your code stays clean.
 
 ---

 ## 🎯 Why llmist?

- - **🌍 Universal** - Works with [any LLM provider](./docs/PROVIDERS.md) (OpenAI, Anthropic, Gemini, [custom](./docs/CUSTOM_MODELS.md)) and easy to integrate more
- - **📝 No Structured Outputs** - Simple [block format](./docs/BLOCK_FORMAT.md) with streaming oriented tool calling
- - **🪝 Powerful Hooks** - Monitor, customize, and control [every step of execution](./docs/HOOKS.md)
- - **🎨 Fluent API** - [Builder pattern](./docs/CONFIGURATION.md) with model shortcuts and presets
- - **🧪 Testing-Friendly** - Built-in [mocking system](./docs/TESTING.md) for zero-cost testing
- - **⌨️ Convenient CLI** - [Capable CLI](./docs/CLI.md) showcasing how to build on top of llmist
- - **🏦 Cost-aware** - [Cost APIs per model](./docs/MODEL_CATALOG.md) + prompt-caching aware accounting
+ <table>
+ <tr>
+ <td width="33%" valign="top">
+
+ ### Streaming Tool Execution
+ Gadgets execute the moment their block is parsed—not after the response completes. Real-time UX without buffering.
+
+ ```typescript
+ // Tool fires mid-stream
+ for await (const event of agent.run()) {
+   if (event.type === 'gadget_result')
+     updateUI(event.result); // Immediate
+ }
+ ```
+
+ </td>
+ <td width="33%" valign="top">
+
+ ### 🧩 Built-in Function Calling
+ llmist implements its own tool calling via a simple block format. No `response_format: json`. No native tool support needed. Works with any model from supported providers.
+
+ ```
+ !!!GADGET_START[Calculator]
+ !!!ARG[operation] add
+ !!!ARG[a] 15
+ !!!ARG[b] 23
+ !!!GADGET_END
+ ```
+
+ *Markers are fully [configurable](./docs/BLOCK_FORMAT.md).*
+
+ </td>
+ <td width="33%" valign="top">
+
+ ### 🔌 Composable Agent API
+ Fluent builder, async iterators, full TypeScript inference. Hook into any lifecycle point. Your code stays readable.
+
+ ```typescript
+ const answer = await LLMist.createAgent()
+   .withModel('sonnet')
+   .withGadgets(Calculator, Weather)
+   .withHooks(HookPresets.monitoring())
+   .askAndCollect('What is 15 + 23?');
+ ```
+
+ </td>
+ </tr>
+ </table>

 ---

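The README's central claim — a gadget fires as soon as its closing marker streams in, not when the whole response finishes — can be sketched with a minimal line-oriented parser. The marker names come from the README example above; the parser itself is an illustrative assumption, not llmist's actual implementation:

```typescript
// Illustrative sketch of incremental gadget-block parsing (not llmist's code).
// A call is emitted the moment its GADGET_END marker arrives, even though
// the surrounding response is still streaming.

interface GadgetCall {
  name: string;
  args: Record<string, string>;
}

function parseGadgetBlocks(lines: Iterable<string>): GadgetCall[] {
  const calls: GadgetCall[] = [];
  let current: GadgetCall | null = null;
  for (const line of lines) {
    const start = line.match(/^!!!GADGET_START\[(.+)\]$/);
    const arg = line.match(/^!!!ARG\[(.+?)\] (.*)$/);
    if (start) {
      current = { name: start[1], args: {} };
    } else if (arg && current) {
      current.args[arg[1]] = arg[2];
    } else if (line === "!!!GADGET_END" && current) {
      calls.push(current); // complete block: fire immediately
      current = null;
    }
  }
  return calls;
}

// Simulated token stream, one line at a time, prose mixed with a block.
const stream = [
  "Let me add those numbers.",
  "!!!GADGET_START[Calculator]",
  "!!!ARG[operation] add",
  "!!!ARG[a] 15",
  "!!!ARG[b] 23",
  "!!!GADGET_END",
  "Still generating prose afterwards...",
];
console.log(JSON.stringify(parseGadgetBlocks(stream)));
```

The key design point the diff is advertising: because the format is plain text with unambiguous start/end markers, detection needs no JSON mode and no provider-native tool support — any model that can emit text can emit a tool call.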
@@ -3879,6 +3879,23 @@ var init_agent = __esm({
        maxIterations: this.maxIterations
      });
      while (currentIteration < this.maxIterations) {
+       if (this.signal?.aborted) {
+         this.logger.info("Agent loop terminated by abort signal", {
+           iteration: currentIteration,
+           reason: this.signal.reason
+         });
+         await this.safeObserve(async () => {
+           if (this.hooks.observers?.onAbort) {
+             const context = {
+               iteration: currentIteration,
+               reason: this.signal?.reason,
+               logger: this.logger
+             };
+             await this.hooks.observers.onAbort(context);
+           }
+         });
+         return;
+       }
        this.logger.debug("Starting iteration", { iteration: currentIteration });
        try {
          if (this.compactionManager) {
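The hunk above adds a check at the top of the agent loop: if an `AbortSignal` has fired, the loop logs, notifies an `onAbort` observer, and returns early. A minimal sketch of that control flow in isolation, using standard `AbortController` semantics (the loop body here is a stand-in, not llmist's agent):

```typescript
// Sketch of the abort pattern the diff introduces (illustrative, not llmist's API):
// each iteration first checks the signal and bails out before doing any work.

async function runLoop(signal: AbortSignal, maxIterations: number): Promise<number> {
  let iteration = 0;
  while (iteration < maxIterations) {
    if (signal.aborted) {
      // Mirrors the diff: this is where observers would be told
      // { iteration, reason: signal.reason } before returning.
      return iteration;
    }
    await Promise.resolve(); // stand-in for one streaming iteration
    iteration++;
  }
  return iteration;
}

const controller = new AbortController();
controller.abort("user cancelled");
runLoop(controller.signal, 10).then((n) => console.log(n)); // logs 0: aborted before the first iteration
```

Checking once per iteration (rather than mid-stream) means an abort takes effect at the next loop boundary — a simple, predictable cancellation point.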
@@ -8484,4 +8501,4 @@ export {
   MockPromptRecorder,
   waitFor
 };
-//# sourceMappingURL=chunk-PDYVT3FI.js.map
+//# sourceMappingURL=chunk-GANXNBIZ.js.map