tooled-prompt 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Authors of Tooled Prompt
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,535 @@
+ # tooled-prompt
+
+ [![npm version](https://img.shields.io/npm/v/tooled-prompt.svg)](https://www.npmjs.com/package/tooled-prompt)
+ [![CI](https://github.com/beshanoe/tooled-prompt/actions/workflows/ci.yml/badge.svg)](https://github.com/beshanoe/tooled-prompt/actions/workflows/ci.yml)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![Node.js](https://img.shields.io/badge/node-%3E%3D20-brightgreen.svg)](https://nodejs.org)
+
+ Runtime LLM prompt library with smart tool recognition for TypeScript.
+
+ ## Why tooled-prompt?
+
+ Most LLM libraries make you choose between simplicity and power. tooled-prompt gives you both through a single insight: **tagged template literals are the perfect API for LLM prompts with tools**.
+
+ ```typescript
+ const result = await prompt`
+   Summarize ${readFile} the contents of config.json.
+ `();
+ ```
+
+ Functions embedded in templates are auto-detected as tools the LLM can call. No boilerplate, no tool registration, no JSON schema authoring — just write functions and use them.
+
+ **vs. raw OpenAI SDK** — tooled-prompt handles the tool loop, schema generation, streaming, and structured output automatically.
+
+ **vs. LangChain / Vercel AI SDK** — tooled-prompt is a zero-dependency runtime library (~15KB). No framework lock-in, no abstractions to learn. It works with any OpenAI-compatible endpoint.
+
+ ## Installation
+
+ ```bash
+ npm install tooled-prompt
+ ```
+
+ ## Quick Start
+
+ ```typescript
+ import "dotenv/config";
+ import { prompt } from "tooled-prompt";
+
+ // Define a tool function
+ function getWeather(cityName: string) {
+   return `Weather in ${cityName}: Sunny, 72°F`;
+ }
+
+ // Use it in a prompt - tools are auto-detected!
+ const result = await prompt`
+   What's the weather like in San Francisco?
+   Use ${getWeather} to find out.
+ `();
+
+ console.log(`\nResult: ${result}`);
+ ```
+
+ ## Features
+
+ - **Smart Tool Recognition** — Functions in template literals are auto-detected as tools
+ - **Multiple Schema Formats** — Define tool args with strings, arrays, objects, or Zod schemas
+ - **Structured Output** — Get typed responses with Zod schema validation
+ - **Store Pattern** — Capture structured output via tool calls with `store()` and `prompt.return`
+ - **Image Support** — Pass images (Buffer/Uint8Array) directly in templates
+ - **Streaming Events** — Subscribe to content, thinking, and tool events
+ - **Multiple Instances** — Create isolated instances for different LLM providers
+ - **TypeScript First** — Full type safety with generics
+
+ ## Configuration
+
+ ### Environment Variables
+
+ Configure the default `prompt` instance via environment variables:
+
+ | Variable | Description | Default |
+ |----------|-------------|---------|
+ | `TOOLED_PROMPT_URL` | LLM API endpoint URL | `http://localhost:8080/v1` |
+ | `TOOLED_PROMPT_MODEL` | Model name | `default` |
+ | `TOOLED_PROMPT_API_KEY` | API key for authentication | — |
+ | `TOOLED_PROMPT_TEMPERATURE` | Generation temperature (0–2) | — |
+ | `TOOLED_PROMPT_MAX_ITERATIONS` | Max tool loop iterations | — |
+ | `TOOLED_PROMPT_STREAM` | Enable streaming (`true`/`false`) | `true` |
+ | `TOOLED_PROMPT_TIMEOUT` | Request timeout in ms | `60000` |
+ | `TOOLED_PROMPT_SILENT` | Suppress console output (`true`/`false`) | `false` |
+ | `TOOLED_PROMPT_SHOW_THINKING` | Show full thinking content (`true`/`false`) | `false` |
+
+ `TOOLED_PROMPT_LLM_URL` and `TOOLED_PROMPT_LLM_MODEL` are accepted as aliases for `TOOLED_PROMPT_URL` and `TOOLED_PROMPT_MODEL`.
+
+ Copy `.env.example` to `.env` to get started:
+
+ ```bash
+ cp .env.example .env
+ ```
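+
+ For example, a `.env` that points the default instance at a local OpenAI-compatible server might look like this (the values below are illustrative placeholders based on the table above; substitute your own endpoint, model, and key):
+
+ ```bash
+ TOOLED_PROMPT_URL=http://localhost:8080/v1
+ TOOLED_PROMPT_MODEL=default
+ # TOOLED_PROMPT_API_KEY=your-key-here
+ TOOLED_PROMPT_STREAM=true
+ TOOLED_PROMPT_TIMEOUT=60000
+ ```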
+
+ ### Programmatic Configuration
+
+ ```typescript
+ import { setConfig } from 'tooled-prompt';
+
+ // Configure the default instance
+ setConfig({
+   llmUrl: 'https://api.openai.com/v1',
+   apiKey: process.env.OPENAI_API_KEY,
+   llmModel: 'gpt-4',
+   temperature: 0.7,
+   stream: true,
+   timeout: 30000,
+   silent: false,
+ });
+ ```
+
+ ### Per-Call Configuration
+
+ ```typescript
+ const result = await prompt`Your prompt here`({
+   temperature: 0.9,
+   maxIterations: 5,
+ });
+ ```
+
+ ## Provider Setup
+
+ tooled-prompt works with any OpenAI-compatible `/chat/completions` endpoint.
+
+ ### OpenAI
+
+ ```typescript
+ import { setConfig } from 'tooled-prompt';
+
+ setConfig({
+   llmUrl: 'https://api.openai.com/v1',
+   apiKey: process.env.OPENAI_API_KEY,
+   llmModel: 'gpt-4o',
+ });
+ ```
+
+ ### Local Models (Ollama, vLLM, llama.cpp)
+
+ ```typescript
+ import { setConfig } from 'tooled-prompt';
+
+ // Ollama (default OpenAI-compatible endpoint)
+ setConfig({
+   llmUrl: 'http://localhost:11434/v1',
+   llmModel: 'llama3.1',
+ });
+
+ // vLLM
+ setConfig({
+   llmUrl: 'http://localhost:8000/v1',
+   llmModel: 'meta-llama/Llama-3.1-8B-Instruct',
+ });
+ ```
+
+ ### OpenRouter
+
+ ```typescript
+ import { setConfig } from 'tooled-prompt';
+
+ setConfig({
+   llmUrl: 'https://openrouter.ai/api/v1',
+   apiKey: process.env.OPENROUTER_API_KEY,
+   llmModel: 'anthropic/claude-3.5-sonnet',
+ });
+ ```
+
+ > **Note:** Anthropic's native API (`https://api.anthropic.com`) is not OpenAI-compatible and is not currently supported. Use OpenRouter or a compatible proxy to access Anthropic models.
+
+ ## Tool Definition
+
+ ### 1. Auto-Inferred (Simplest)
+
+ ```typescript
+ function readFile(filePath: string) {
+   return fs.readFileSync(filePath, 'utf-8');
+ }
+
+ // Just use it - name and params are inferred
+ await prompt`Read ${readFile} and summarize config.json`();
+ ```
+
+ ### 2. String Description (Single Param)
+
+ ```typescript
+ tool(getWeather, {
+   description: 'Get weather for a city',
+   args: 'The city name to look up'
+ });
+ ```
+
+ ### 3. Array Description (Multiple Params)
+
+ ```typescript
+ function copyFile(src: string, dest: string) {
+   fs.copyFileSync(src, dest);
+ }
+
+ tool(copyFile, {
+   description: 'Copy a file',
+   args: ['Source file path', 'Destination file path']
+ });
+ ```
+
+ ### 4. Object Schema (Named Params with Optional)
+
+ ```typescript
+ function search(query: string, limit?: number) {
+   // ...
+ }
+
+ tool(search, {
+   description: 'Search for items',
+   args: {
+     query: 'Search query string',
+     'limit?': 'Maximum number of results'
+   }
+ });
+ ```
+
+ ### 5. Zod Schema Array (Full Type Safety)
+
+ ```typescript
+ import { z } from 'zod/v4';
+
+ tool(createUser, {
+   description: 'Create a new user',
+   args: [
+     z.string().describe('User full name'),
+     z.string().email().describe('User email address'),
+     z.number().optional().describe('User age'),
+   ]
+ });
+ ```
+
+ ### 6. Object Syntax (For Arrow Functions)
+
+ ```typescript
+ const myFunc = async (input: string) => input.toUpperCase();
+
+ // Extract name from object key
+ tool({ myFunc }, {
+   description: 'Transform text to uppercase',
+   args: 'Text to transform'
+ });
+ ```
+
+ ## Structured Output
+
+ Get typed, validated responses using Zod schemas:
+
+ ```typescript
+ import { z } from 'zod/v4';
+
+ const MovieSchema = z.object({
+   title: z.string(),
+   year: z.number(),
+   rating: z.number().min(0).max(10),
+ });
+
+ const movie = await prompt`
+   Tell me about the movie Inception
+ `(MovieSchema);
+
+ // movie is typed as { title: string; year: number; rating: number }
+ console.log(movie.title, movie.year);
+ ```
+
+ Or use simple schemas for string fields:
+
+ ```typescript
+ const result = await prompt`Analyze this text: ${text}`({
+   sentiment: 'Overall sentiment (positive/negative/neutral)',
+   'confidence?': 'Confidence score if available',
+ });
+ ```
+
+ ## Store & Return Pattern
+
+ For complex workflows where you need the LLM to save structured data via a tool call (rather than as the final response), use `store()` or `prompt.return`.
+
+ ### `prompt.return` — Early-Exit Structured Output
+
+ When `prompt.return` appears in a template and a schema is passed to the executor, the LLM gets a special tool to store the result. The tool loop exits as soon as the value is stored:
+
+ ```typescript
+ import { z } from 'zod/v4';
+
+ const schema = z.object({
+   summary: z.string(),
+   files: z.array(z.object({
+     path: z.string(),
+     description: z.string(),
+   })),
+ });
+
+ const result = await prompt`
+   Use ${readDir} and ${readFile} to analyze the project.
+   Save your analysis in ${prompt.return}.
+ `(schema);
+
+ // result is typed according to the schema
+ console.log(result.summary);
+ ```
+
+ ### `store()` — Explicit Store
+
+ For manual control, create a store and retrieve the value after execution:
+
+ ```typescript
+ import { store } from 'tooled-prompt';
+ import { z } from 'zod/v4';
+
+ const changeLog = store(z.object({
+   summary: z.string(),
+   entries: z.array(z.object({
+     commit: z.string(),
+     description: z.string(),
+   })),
+ }));
+
+ await prompt`
+   Use ${gitLog} to read commits, then save a structured
+   changelog in ${changeLog}.
+ `();
+
+ const result = changeLog.get(); // typed, or undefined if not stored
+ ```
+
+ ## Image Support
+
+ Pass images directly in templates. `Buffer` and `Uint8Array` values are auto-detected and converted to base64 for vision-capable models:
+
+ ```typescript
+ import { readFileSync } from 'fs';
+
+ const image = readFileSync('photo.png');
+
+ const result = await prompt`Describe this image: ${image}`();
+ ```
+
+ Multiple images work too:
+
+ ```typescript
+ const before = readFileSync('before.png');
+ const after = readFileSync('after.png');
+
+ const result = await prompt`
+   Compare these two images:
+   Before: ${before}
+   After: ${after}
+ `();
+ ```
+
+ ## Event Handling
+
+ ### Default Instance
+
+ The default instance has logging enabled — it streams content to stdout and logs tool calls:
+
+ ```typescript
+ import { prompt, setConfig } from 'tooled-prompt';
+
+ // Logging is ON by default
+ await prompt`Hello, world!`(); // Streams to stdout
+
+ // Disable logging
+ setConfig({ silent: true });
+ await prompt`Hello, world!`(); // Silent
+ ```
+
+ ### Custom Event Handlers
+
+ ```typescript
+ import { prompt, on, off } from 'tooled-prompt';
+
+ // Subscribe to events
+ on('content', (content) => {
+   process.stdout.write(content);
+ });
+
+ on('thinking', (content) => {
+   console.log('[Thinking]', content);
+ });
+
+ on('tool_call', (name, args) => {
+   console.log(`Calling ${name} with`, args);
+ });
+
+ on('tool_result', (name, result, duration) => {
+   console.log(`${name} returned in ${duration}ms:`, result);
+ });
+
+ on('tool_error', (name, error) => {
+   console.error(`${name} failed:`, error);
+ });
+ ```
+
+ ### Created Instances (Silent by Default)
+
+ ```typescript
+ import { createTooledPrompt } from 'tooled-prompt';
+
+ // Silent by default
+ const tp = createTooledPrompt();
+ await tp.prompt`Hello`(); // No output
+
+ // Opt-in to logging
+ tp.on('content', (c) => process.stdout.write(c));
+ await tp.prompt`Hello`(); // Now streams
+ ```
+
+ ## Multiple Instances
+
+ Create isolated instances for different LLM providers or models:
+
+ ```typescript
+ import { createTooledPrompt } from 'tooled-prompt';
+
+ const openai = createTooledPrompt({
+   llmUrl: 'https://api.openai.com/v1',
+   apiKey: process.env.OPENAI_API_KEY,
+   llmModel: 'gpt-4o',
+ });
+
+ const local = createTooledPrompt({
+   llmUrl: 'http://localhost:11434/v1',
+   llmModel: 'llama3.1',
+ });
+
+ // Each instance is isolated
+ const openaiResult = await openai.prompt`Summarize this document`();
+ const localResult = await local.prompt`Translate to French: ${text}`();
+ ```
+
+ You can also use different models for different tasks within one workflow:
+
+ ```typescript
+ const imageLlm = createTooledPrompt({ llmModel: 'gemma-3-27b-it' });
+ const toolLlm = createTooledPrompt({ llmModel: 'gpt-4o' });
+
+ async function describeImage(path: string) {
+   const image = readFileSync(path);
+   return imageLlm.prompt`Describe this image: ${image}`();
+ }
+
+ // Tool LLM orchestrates, delegates image work to image LLM
+ await toolLlm.prompt`
+   Find images using ${listFiles} and describe each with ${describeImage}.
+ `();
+ ```
+
+ ## Error Handling
+
+ ### Common Errors
+
+ | Error | Cause | Fix |
+ |-------|-------|-----|
+ | `Request timeout after Xms` | LLM endpoint didn't respond in time | Increase `timeout` in config |
+ | `LLM request failed (4xx/5xx)` | Authentication or server error | Check `apiKey` and `llmUrl` |
+ | `Max iterations (N) reached` | Tool loop didn't converge | Increase `maxIterations`, simplify the prompt, or check tool implementations |
+ | `No response from LLM` | Empty response from provider | Check model availability and endpoint |
+ | `Failed to parse JSON response` | Structured output returned invalid JSON | Try a more capable model or simplify the schema |
+ | `Schema validation failed` | Response didn't match Zod schema | Adjust schema or prompt to match expected output |
+
+ ### Handling Errors
+
+ ```typescript
+ try {
+   const result = await prompt`...`();
+ } catch (err) {
+   const message = err instanceof Error ? err.message : String(err);
+   if (message.includes('timeout')) {
+     // Retry with longer timeout
+   } else if (message.includes('Max iterations')) {
+     // Simplify the task or increase maxIterations
+   }
+ }
+ ```
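+
+ As a sketch (not part of the library's API; it only uses the documented `prompt` and `setConfig` exports), a small wrapper can retry once with a longer timeout when the timeout error is hit:
+
+ ```typescript
+ import { prompt, setConfig } from 'tooled-prompt';
+
+ // Illustrative helper: retry a prompt once after raising the global timeout.
+ async function withTimeoutRetry<T>(run: () => Promise<T>): Promise<T> {
+   try {
+     return await run();
+   } catch (err) {
+     if (err instanceof Error && err.message.includes('timeout')) {
+       setConfig({ timeout: 120000 }); // bump the timeout before the second attempt
+       return run();
+     }
+     throw err;
+   }
+ }
+
+ const summary = await withTimeoutRetry(() => prompt`Summarize config.json`());
+ ```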
+
+ ## Limitations & Known Issues
+
+ - **OpenAI-compatible only** — The library sends requests to `/chat/completions` with OpenAI's format. Anthropic's native API is not supported (use OpenRouter for Anthropic models).
+ - **TypeScript `?` not detectable at runtime** — The parser uses `fn.toString()` to infer parameters. TypeScript's `?` optional marker is stripped at compile time. Use `= defaultValue` for runtime-detectable optional params, or provide explicit `args` descriptors (see the sketch after this list).
+ - **Tool results truncated at 2000 chars** — Long tool results are truncated to avoid context overflow. If your tools return large data, consider summarizing within the tool.
+ - **No system prompt support** — There's currently no built-in way to prepend a system message. The user message is the only message sent.
+ - **`zod` peer dependency** — Structured output and Zod-based arg descriptors require `zod >= 4.0.0` (optional peer dependency).
+
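+ A minimal sketch of the two optional-parameter workarounds (the `search` function here is illustrative, and the `args` object form is the one documented under Tool Definition):
+
+ ```typescript
+ import { tool } from 'tooled-prompt';
+
+ // Option 1: a default value survives compilation, so the parser can detect it
+ function search(query: string, limit = 10) {
+   return `top ${limit} results for ${query}`;
+ }
+
+ // Option 2: describe the parameters explicitly so nothing has to be inferred
+ tool(search, {
+   description: 'Search for items',
+   args: {
+     query: 'Search query string',
+     'limit?': 'Maximum number of results',
+   },
+ });
+ ```
+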
+ ## API Reference
+
+ ### Exports
+
+ ```typescript
+ // Default instance
+ export const prompt: PromptTemplate;
+ export const setConfig: (config: TooledPromptConfig) => void;
+ export const on: <K extends keyof TooledPromptEvents>(event: K, handler: TooledPromptEvents[K]) => void;
+ export const off: <K extends keyof TooledPromptEvents>(event: K, handler: TooledPromptEvents[K]) => void;
+
+ // Factory
+ export function createTooledPrompt(config?: TooledPromptConfig): TooledPromptInstance;
+
+ // Tool utilities
+ export function tool<T extends Function>(fn: T, options?: ToolOptions): ToolFunction<T>;
+ export function isTool(value: unknown): value is ToolFunction;
+ export function getToolMetadata(fn: ToolFunction): ToolMetadata;
+ export const TOOL_SYMBOL: unique symbol;
+
+ // Store
+ export function store<T>(schema: ZodType<T>): Store<T>;
+ ```
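+
+ A rough sketch of how the tool utilities fit together (the exact shape of `ToolMetadata` is not spelled out here, so treat the comments as indicative):
+
+ ```typescript
+ import { tool, isTool, getToolMetadata } from 'tooled-prompt';
+
+ function getWeather(cityName: string) {
+   return `Weather in ${cityName}: Sunny, 72°F`;
+ }
+
+ const weatherTool = tool(getWeather, {
+   description: 'Get weather for a city',
+   args: 'The city name to look up',
+ });
+
+ if (isTool(weatherTool)) {
+   // Type guard narrows to ToolFunction, so the metadata can be read back
+   const meta = getToolMetadata(weatherTool);
+   console.log(meta);
+ }
+ ```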
+
+ ### Event Types
+
+ ```typescript
+ interface TooledPromptEvents {
+   thinking: (content: string) => void;
+   content: (content: string) => void;
+   tool_call: (name: string, args: Record<string, unknown>) => void;
+   tool_result: (name: string, result: string, duration: number) => void;
+   tool_error: (name: string, error: string) => void;
+ }
+ ```
+
+ ### Configuration Options
+
+ ```typescript
+ interface TooledPromptConfig {
+   llmUrl?: string;         // LLM API endpoint
+   llmModel?: string;       // Model name
+   apiKey?: string;         // API key
+   maxIterations?: number;  // Max tool loop iterations
+   temperature?: number;    // Generation temperature (0–2)
+   stream?: boolean;        // Enable streaming (default: true)
+   timeout?: number;        // Request timeout in ms (default: 60000)
+   silent?: boolean;        // Suppress console output (default: false for default instance)
+   showThinking?: boolean;  // Show full thinking content (default: false)
+ }
+ ```
+
+ ## License
+
+ MIT — see [LICENSE](LICENSE) for details.
@@ -0,0 +1,90 @@
+ /**
+  * Event system for tooled-prompt
+  *
+  * Provides a simple event emitter for streaming LLM events
+  */
+ /**
+  * Event types emitted during prompt execution
+  */
+ export interface TooledPromptEvents {
+     /** Thinking/reasoning content from the LLM */
+     thinking: (content: string) => void;
+     /** Response content from the LLM */
+     content: (content: string) => void;
+     /** Tool call started */
+     tool_call: (name: string, args: Record<string, unknown>) => void;
+     /** Tool call completed successfully */
+     tool_result: (name: string, result: string, duration: number) => void;
+     /** Tool call failed */
+     tool_error: (name: string, error: string) => void;
+ }
+ /**
+  * Event handler function type
+  */
+ type EventHandler<K extends keyof TooledPromptEvents> = TooledPromptEvents[K];
+ /**
+  * Simple typed event emitter for tooled-prompt events
+  */
+ export declare class TooledPromptEmitter {
+     private handlers;
+     /**
+      * Subscribe to an event
+      */
+     on<K extends keyof TooledPromptEvents>(event: K, handler: EventHandler<K>): void;
+     /**
+      * Unsubscribe from an event
+      */
+     off<K extends keyof TooledPromptEvents>(event: K, handler: EventHandler<K>): void;
+     /**
+      * Emit an event to all subscribers
+      */
+     emit<K extends keyof TooledPromptEvents>(event: K, ...args: Parameters<TooledPromptEvents[K]>): void;
+     /**
+      * Check if there are any handlers for an event
+      */
+     hasHandlers<K extends keyof TooledPromptEvents>(event: K): boolean;
+     /**
+      * Remove all handlers for all events
+      */
+     clear(): void;
+ }
+ /**
+  * Default content handler - writes to stdout
+  */
+ export declare function defaultContentHandler(content: string): void;
+ /**
+  * Default thinking handler - writes to stdout with formatting
+  */
+ export declare function defaultThinkingHandler(content: string): void;
+ /**
+  * Default tool_call handler - logs tool invocation
+  */
+ export declare function defaultToolCallHandler(name: string, args: Record<string, unknown>): void;
+ /**
+  * Default tool_result handler - logs result
+  */
+ export declare function defaultToolResultHandler(_name: string, result: string, _duration: number): void;
+ /**
+  * Default tool_error handler - logs error
+  */
+ export declare function defaultToolErrorHandler(name: string, error: string): void;
+ /**
+  * Options for installDefaultHandlers
+  */
+ export interface DefaultHandlerOptions {
+     /** When returns true, suppresses all console output */
+     isSilent?: () => boolean;
+     /** When returns true, streams full thinking content instead of just showing a label */
+     showThinking?: () => boolean;
+ }
+ /**
+  * Install default logging handlers on an emitter
+  *
+  * @param target - Object with an `on` method for subscribing to events
+  * @param options - Optional getters for silent and showThinking behavior
+  */
+ export declare function installDefaultHandlers(target: {
+     on: TooledPromptEmitter["on"];
+ }, options?: DefaultHandlerOptions): void;
+ export {};
+ //# sourceMappingURL=events.d.ts.map
@@ -0,0 +1 @@
+ {"version":3,"file":"events.d.ts","sourceRoot":"","sources":["../src/events.ts"],"names":[],"mappings":"AAAA;;;;GAIG;AAWH;;GAEG;AACH,MAAM,WAAW,kBAAkB;IACjC,8CAA8C;IAC9C,QAAQ,EAAE,CAAC,OAAO,EAAE,MAAM,KAAK,IAAI,CAAC;IACpC,oCAAoC;IACpC,OAAO,EAAE,CAAC,OAAO,EAAE,MAAM,KAAK,IAAI,CAAC;IACnC,wBAAwB;IACxB,SAAS,EAAE,CAAC,IAAI,EAAE,MAAM,EAAE,IAAI,EAAE,MAAM,CAAC,MAAM,EAAE,OAAO,CAAC,KAAK,IAAI,CAAC;IACjE,uCAAuC;IACvC,WAAW,EAAE,CAAC,IAAI,EAAE,MAAM,EAAE,MAAM,EAAE,MAAM,EAAE,QAAQ,EAAE,MAAM,KAAK,IAAI,CAAC;IACtE,uBAAuB;IACvB,UAAU,EAAE,CAAC,IAAI,EAAE,MAAM,EAAE,KAAK,EAAE,MAAM,KAAK,IAAI,CAAC;CACnD;AAED;;GAEG;AACH,KAAK,YAAY,CAAC,CAAC,SAAS,MAAM,kBAAkB,IAAI,kBAAkB,CAAC,CAAC,CAAC,CAAC;AAE9E;;GAEG;AACH,qBAAa,mBAAmB;IAC9B,OAAO,CAAC,QAAQ,CAGZ;IAEJ;;OAEG;IACH,EAAE,CAAC,CAAC,SAAS,MAAM,kBAAkB,EACnC,KAAK,EAAE,CAAC,EACR,OAAO,EAAE,YAAY,CAAC,CAAC,CAAC,GACvB,IAAI;IAOP;;OAEG;IACH,GAAG,CAAC,CAAC,SAAS,MAAM,kBAAkB,EACpC,KAAK,EAAE,CAAC,EACR,OAAO,EAAE,YAAY,CAAC,CAAC,CAAC,GACvB,IAAI;IAIP;;OAEG;IACH,IAAI,CAAC,CAAC,SAAS,MAAM,kBAAkB,EACrC,KAAK,EAAE,CAAC,EACR,GAAG,IAAI,EAAE,UAAU,CAAC,kBAAkB,CAAC,CAAC,CAAC,CAAC,GACzC,IAAI;IASP;;OAEG;IACH,WAAW,CAAC,CAAC,SAAS,MAAM,kBAAkB,EAAE,KAAK,EAAE,CAAC,GAAG,OAAO;IAKlE;;OAEG;IACH,KAAK,IAAI,IAAI;CAGd;AAED;;GAEG;AACH,wBAAgB,qBAAqB,CAAC,OAAO,EAAE,MAAM,GAAG,IAAI,CAE3D;AAED;;GAEG;AACH,wBAAgB,sBAAsB,CAAC,OAAO,EAAE,MAAM,GAAG,IAAI,CAE5D;AAED;;GAEG;AACH,wBAAgB,sBAAsB,CACpC,IAAI,EAAE,MAAM,EACZ,IAAI,EAAE,MAAM,CAAC,MAAM,EAAE,OAAO,CAAC,GAC5B,IAAI,CAmBN;AAED;;GAEG;AACH,wBAAgB,wBAAwB,CACtC,KAAK,EAAE,MAAM,EACb,MAAM,EAAE,MAAM,EACd,SAAS,EAAE,MAAM,GAChB,IAAI,CAMN;AAED;;GAEG;AACH,wBAAgB,uBAAuB,CAAC,IAAI,EAAE,MAAM,EAAE,KAAK,EAAE,MAAM,GAAG,IAAI,CAEzE;AAED;;GAEG;AACH,MAAM,WAAW,qBAAqB;IACpC,uDAAuD;IACvD,QAAQ,CAAC,EAAE,MAAM,OAAO,CAAC;IACzB,uFAAuF;IACvF,YAAY,CAAC,EAAE,MAAM,OAAO,CAAC;CAC9B;AAED;;;;;GAKG;AACH,wBAAgB,sBAAsB,CACpC,MAAM,EAAE;IAAE,EAAE,EAAE,mBAAmB,CAAC,IAAI,CAAC,CAAA;CAAE,EACzC,OAAO,CAAC,EAAE,qBAAqB,GAC9B,IAAI,CA6CN"}