llmflow 0.3.1

package/README.md ADDED
@@ -0,0 +1,142 @@
+ # LLMFlow
+
+ **See what your LLM calls cost. One command. No signup.**
+
+ LLMFlow is a local observability tool for LLM applications. Point your SDK at it and see your costs, tokens, and latency in real time.
+
+ ```bash
+ npx llmflow
+ ```
+
+ Dashboard: [localhost:3000](http://localhost:3000) · Proxy: [localhost:8080](http://localhost:8080)
+
+ ---
+
+ ## Quick Start
+
+ ### 1. Start LLMFlow
+
+ ```bash
+ # Option A: npx (recommended)
+ npx llmflow
+
+ # Option B: Clone and run
+ git clone https://github.com/HelgeSverre/llmflow.git
+ cd llmflow && npm install && npm start
+
+ # Option C: Docker
+ docker run -p 3000:3000 -p 8080:8080 helgesverre/llmflow
+ ```
+
+ ### 2. Point Your SDK
+
+ ```python
+ # Python
+ from openai import OpenAI
+ client = OpenAI(base_url="http://localhost:8080/v1")
+ ```
+
+ ```javascript
+ // JavaScript
+ import OpenAI from 'openai';
+ const client = new OpenAI({ baseURL: 'http://localhost:8080/v1' });
+ ```
+
+ ```php
+ // PHP
+ $client = OpenAI::factory()->withBaseUri('http://localhost:8080/v1')->make();
+ ```
+
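+ To generate a first trace end to end, a single chat completion through the proxy is enough. The following is a sketch rather than LLMFlow's own example: it assumes the `openai` npm package, and the model name and prompt are placeholders.
+
+ ```javascript
+ import OpenAI from 'openai';
+
+ // Only the base URL changes; requests are logged by LLMFlow and forwarded upstream.
+ const client = new OpenAI({
+   baseURL: 'http://localhost:8080/v1',
+   apiKey: process.env.OPENAI_API_KEY, // whether a real key is needed client-side depends on your proxy setup
+ });
+
+ async function main() {
+   const completion = await client.chat.completions.create({
+     model: 'gpt-4o-mini', // placeholder model name
+     messages: [{ role: 'user', content: 'Hello from LLMFlow!' }],
+   });
+   console.log(completion.choices[0].message.content);
+ }
+
+ main();
+ ```
+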
+ ### 3. View Dashboard
+
+ Open [localhost:3000](http://localhost:3000) to see your traces, costs, and token usage.
+
+ ---
+
+ ## Who Is This For?
+
+ - **Solo developers** building with OpenAI, Anthropic, etc.
+ - **Hobbyists** who want to see what their AI projects cost
+ - **Anyone** who doesn't want to pay for or set up a SaaS observability tool
+
+ ---
+
+ ## Features
+
+ | Feature | Description |
+ |---------|-------------|
+ | **Cost Tracking** | Real-time pricing for 2000+ models |
+ | **Request Logging** | See every request/response with latency |
+ | **Multi-Provider** | OpenAI, Anthropic, Gemini, Ollama, Groq, Mistral, and more |
+ | **OpenTelemetry** | Accept traces from LangChain, LlamaIndex, etc. |
+ | **Zero Config** | Just run it, point your SDK, done |
+ | **Local Storage** | SQLite database, no external services |
+
+ ---
+
+ ## Supported Providers
+
+ Select a provider with a path prefix or the `X-LLMFlow-Provider` header (example below the table):
+
+ | Provider | URL |
+ |----------|-----|
+ | OpenAI | `http://localhost:8080/v1` (default) |
+ | Anthropic | `http://localhost:8080/anthropic/v1` |
+ | Gemini | `http://localhost:8080/gemini/v1` |
+ | Ollama | `http://localhost:8080/ollama/v1` |
+ | Groq | `http://localhost:8080/groq/v1` |
+ | Mistral | `http://localhost:8080/mistral/v1` |
+ | Azure OpenAI | `http://localhost:8080/azure/v1` |
+ | Cohere | `http://localhost:8080/cohere/v1` |
+ | Together | `http://localhost:8080/together/v1` |
+ | OpenRouter | `http://localhost:8080/openrouter/v1` |
+ | Perplexity | `http://localhost:8080/perplexity/v1` |
+
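+ For example, Groq exposes an OpenAI-compatible API, so the OpenAI SDK can simply be pointed at the Groq prefix; the header is an alternative way to pick a provider on the default endpoint. This is a sketch: the exact header value format (lowercase provider name) is an assumption.
+
+ ```javascript
+ import OpenAI from 'openai';
+
+ // Route by path prefix: traces are recorded, then forwarded to Groq.
+ const groq = new OpenAI({
+   apiKey: process.env.GROQ_API_KEY,
+   baseURL: 'http://localhost:8080/groq/v1',
+ });
+
+ // Route by header on the default endpoint (header value format assumed).
+ const viaHeader = new OpenAI({
+   apiKey: process.env.GROQ_API_KEY,
+   baseURL: 'http://localhost:8080/v1',
+   defaultHeaders: { 'X-LLMFlow-Provider': 'groq' },
+ });
+ ```
+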
+ ---
+
+ ## OpenTelemetry Support
+
+ If you're using LangChain, LlamaIndex, or another instrumented framework, point its OTLP exporter at LLMFlow:
+
+ ```python
+ # Python - point the OTLP exporter to LLMFlow
+ from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
+
+ exporter = OTLPSpanExporter(endpoint="http://localhost:3000/v1/traces")
+ ```
+
+ ```javascript
+ // JavaScript
+ import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
+
+ new OTLPTraceExporter({ url: 'http://localhost:3000/v1/traces' });
+ ```
+
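+ The exporter on its own doesn't ship spans; it has to be attached to a tracer provider. A minimal JavaScript sketch, assuming the 1.x `@opentelemetry/sdk-trace-node` API (the OpenTelemetry package names here are not part of LLMFlow):
+
+ ```javascript
+ import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
+ import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
+ import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
+
+ // Batch spans and export them to LLMFlow's OTLP endpoint.
+ const exporter = new OTLPTraceExporter({ url: 'http://localhost:3000/v1/traces' });
+ const provider = new NodeTracerProvider();
+ provider.addSpanProcessor(new BatchSpanProcessor(exporter));
+ provider.register();
+ ```
+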
+ ---
+
+ ## Configuration
+
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `PROXY_PORT` | `8080` | Proxy port |
+ | `DASHBOARD_PORT` | `3000` | Dashboard port |
+ | `DATA_DIR` | `~/.llmflow` | Data directory |
+ | `MAX_TRACES` | `10000` | Max traces to retain |
+ | `VERBOSE` | `0` | Set to `1` to enable verbose logging |
+
+ Set provider API keys as environment variables (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.) if you want the proxy to forward requests upstream.
+
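+ These variables compose with any of the start options above. As an illustrative sketch (not an LLMFlow API), the same overrides can be applied when launching it from a Node script:
+
+ ```javascript
+ import { spawn } from 'node:child_process';
+
+ // Start LLMFlow with custom ports and a project-local data directory.
+ spawn('npx', ['llmflow'], {
+   stdio: 'inherit',
+   shell: true, // lets npx resolve on Windows as well
+   env: { ...process.env, PROXY_PORT: '9000', DASHBOARD_PORT: '4000', DATA_DIR: './llmflow-data' },
+ });
+ ```
+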
+ ---
+
+ ## Advanced Features
+
+ For advanced usage, see the [docs/](docs/) folder:
+
+ - [AI CLI Tools](docs/guides/ai-cli-tools.md) - Claude Code, Codex CLI, Gemini CLI
+ - [Observability Backends](docs/guides/observability-backends.md) - Export to Jaeger, Langfuse, Phoenix
+ - [Passthrough Mode](docs/guides/ai-cli-tools.md#passthrough-mode) - Forward native API formats
+
+ ---
+
+ ## License
+
+ MIT © [Helge Sverre](https://github.com/HelgeSverre)
package/bin/llmflow.js ADDED
@@ -0,0 +1,91 @@
+ #!/usr/bin/env node
+
+ /**
+  * LLMFlow CLI
+  *
+  * Usage:
+  *   npx llmflow          # Start the server
+  *   npx llmflow --help   # Show help
+  */
+
+ const { spawn } = require('child_process');
+ const path = require('path');
+ const fs = require('fs');
+
+ const args = process.argv.slice(2);
+
+ // Help text
+ if (args.includes('--help') || args.includes('-h')) {
+   console.log(`
+ LLMFlow - Local LLM Observability
+
+ Usage:
+   llmflow [options]
+
+ Options:
+   --help, -h       Show this help message
+   --version, -v    Show version number
+
+ Environment Variables:
+   PROXY_PORT       Proxy port (default: 8080)
+   DASHBOARD_PORT   Dashboard port (default: 3000)
+   DATA_DIR         Data directory (default: ~/.llmflow)
+   MAX_TRACES       Max traces to retain (default: 10000)
+   VERBOSE          Enable verbose logging (0 or 1)
+
+ Examples:
+   npx llmflow                    # Start with defaults
+   PROXY_PORT=9000 npx llmflow    # Custom proxy port
+   VERBOSE=1 npx llmflow          # Verbose logging
+
+ Dashboard: http://localhost:3000
+ Proxy:     http://localhost:8080
+
+ Point your OpenAI SDK at the proxy:
+   client = OpenAI(base_url="http://localhost:8080/v1")
+ `);
+   process.exit(0);
+ }
+
+ // Version
+ if (args.includes('--version') || args.includes('-v')) {
+   const pkg = require('../package.json');
+   console.log(`llmflow v${pkg.version}`);
+   process.exit(0);
+ }
+
+ // Find server.js relative to this script
+ const serverPath = path.join(__dirname, '..', 'server.js');
+
+ if (!fs.existsSync(serverPath)) {
+   console.error('Error: server.js not found at', serverPath);
+   process.exit(1);
+ }
+
+ // Print startup banner
+ const pkg = require('../package.json');
+ console.log(`
+ ╔═══════════════════════════════════════════╗
+ ║                  LLMFlow                  ║
+ ║     Local LLM Observability v${pkg.version.padEnd(13)}║
+ ╚═══════════════════════════════════════════╝
+ `);
+
+ // Start the server
+ const server = spawn(process.execPath, [serverPath], {
+   stdio: 'inherit',
+   env: process.env
+ });
+
+ server.on('error', (err) => {
+   console.error('Failed to start server:', err.message);
+   process.exit(1);
+ });
+
+ server.on('close', (code) => {
+   process.exit(code || 0);
+ });
+
+ // Forward signals
+ process.on('SIGINT', () => server.kill('SIGINT'));
+ process.on('SIGTERM', () => server.kill('SIGTERM'));