traicebox 0.1.0 → 0.1.1

package/README.md ADDED
@@ -0,0 +1,256 @@
1
+ # Traicebox
2
+
3
+ **Traicebox** is a zero-config local developer stack for tracing and session tracking around LLM and AI model workflows. It gives you a working local tracing and inspection setup within minutes.
4
+
5
+ ### Key Use Cases
6
+
7
+ - **Tool & Prompt Development**: Debug plugins, skills, or prompts for harness tools like [OpenCode](https://opencode.ai/) with full visibility.
8
+ - **Model Evaluation**: Evaluate and compare local models with detailed trace logging; the built-in config generator reduces tool setup to a single command.
9
+ - **Application Development**: Build AI applications with minimal code changes. Requests routed through the built-in LiteLLM proxy automatically generate traces in Langfuse. You can group these traces into sessions by adding an `x-litellm-session-id` header to your requests.
10
+
11
+ ### Included in the Stack
12
+
13
+ - **LiteLLM**: Unified proxy for LLM backends with custom session-tracking callbacks.
14
+ - **Langfuse**: Open-source tracing and observability for LLM applications.
15
+ - **Auto-login Proxies**: Dynamic proxies providing immediate, authenticated access to LiteLLM UI and Langfuse.
16
+ - **Harness Integration**: Automatic configuration generation for external tools like OpenCode.
17
+
18
+ ## Prerequisites
19
+
20
+ Traicebox requires [Docker](https://www.docker.com/) and [Docker Compose](https://docs.docker.com/compose/) to be installed and running on your system.
21
+
22
+ ## Installation
23
+
24
+ ### Published Package (Recommended)
25
+
26
+ You can install Traicebox via NPM or Bun:
27
+
28
+ ```bash
29
+ # Using Bun
30
+ bun install -g traicebox
31
+
32
+ # Using NPM
33
+ npm install -g traicebox
34
+ ```
35
+
36
+ ### Binary Release
37
+
38
+ Download the latest prebuilt binary for your platform from the [GitHub Releases](https://github.com/fardjad/traicebox/releases) page.
39
+
40
+ Once downloaded, make it executable and move it to a directory on your `PATH`:
41
+
42
+ ```bash
43
+ chmod +x traicebox-macos-arm64
44
+ mv traicebox-macos-arm64 /usr/local/bin/traicebox
45
+ ```
46
+
47
+ ## Shell Completion
48
+
49
+ Traicebox supports shell completion for Bash and Zsh.
50
+
51
+ ### Zsh
52
+
53
+ Add the following to your `~/.zshrc`:
54
+
55
+ ```bash
56
+ # Enable completion if not already enabled
57
+ autoload -Uz compinit && compinit
58
+
59
+ source <(traicebox completion)
60
+ ```
61
+
62
+ ### Bash
63
+
64
+ Add the following to your `~/.bashrc`:
65
+
66
+ ```bash
67
+ source <(traicebox completion)
68
+ ```
69
+
70
+ ## Usage
71
+
72
+ ### 1. Initial Setup
73
+
74
+ Initialize the Traicebox home directory:
75
+
76
+ ```bash
77
+ traicebox setup
78
+ ```
79
+
80
+ ### 2. Import Models
81
+
82
+ Before starting the stack, you need to import your available models into LiteLLM. Traicebox requires an endpoint URL or alias to fetch models from.
83
+
84
+ Supported aliases for common local tools:
85
+
86
+ | Alias | Default Endpoint URL |
87
+ | :---------- | :--------------------------------- |
88
+ | `lm-studio` | `http://127.0.0.1:1234/v1/models` |
89
+ | `ollama` | `http://localhost:11434/v1/models` |
90
+ | `llama-cpp` | `http://localhost:8080/v1/models` |
91
+
92
+ ```bash
93
+ traicebox models import-from-openai-api --endpoint lm-studio
94
+ ```
95
+
96
+ To use a custom OpenAI-compatible endpoint URL:
97
+
98
+ ```bash
99
+ traicebox models import-from-openai-api --endpoint http://your-api:port/v1/models
100
+ ```
101
+
102
+ If the endpoint requires authentication, provide the API key via the `OPENAI_COMPATIBLE_API_KEY` environment variable:
103
+
104
+ ```bash
105
+ OPENAI_COMPATIBLE_API_KEY="your-api-key" traicebox models import-from-openai-api --endpoint http://your-api:port/v1/models
106
+ ```
107
+
108
+ ### 3. Configure Harness Integration (Optional)
109
+
110
+ If you use [OpenCode](https://opencode.ai/), you can synchronize its model configuration with Traicebox.
111
+
112
+ > [!WARNING]
113
+ > The following command will overwrite your existing OpenCode configuration file.
114
+
115
+ ```bash
116
+ traicebox generate-harness-config opencode > ~/.config/opencode/opencode.jsonc
117
+ ```
118
+
119
+ ### 4. Start the Stack
120
+
121
+ ```bash
122
+ traicebox start
123
+ ```
124
+
125
+ Once running, the following endpoints are available:
126
+
127
+ - **LiteLLM**: `http://litellm.localhost:5483`
128
+ - **Langfuse**: `http://langfuse.localhost:5483`
129
+
130
+ ### 5. Verify and Test
131
+
132
+ If you configured OpenCode in step 3, you can test the integration:
133
+
134
+ ```bash
135
+ # Refresh the models cache
136
+ opencode models --refresh
137
+
138
+ # Run a test prompt
139
+ opencode run -m 'litellm/your/model' 'Just write a greeting message!'
140
+ ```
141
+
142
+ > [!NOTE]
143
+ > Replace `'litellm/your/model'` with one of the models imported in step 2.
144
+
145
+ After running a prompt, you can inspect the captured traces in Langfuse by visiting:
146
+ [http://langfuse.localhost:5483/project/local-project/sessions](http://langfuse.localhost:5483/project/local-project/sessions)
147
+
148
+ There, you can select the latest session to inspect system prompts, user messages, and tool interactions.
149
+
150
+ #### Application Integration (with Session Tracking)
151
+
152
+ You can integrate Traicebox into your own applications by pointing your LLM client to the LiteLLM proxy. To group related traces together, simply include an `x-litellm-session-id` header in your requests.
153
+
154
+ Here is an example using `curl` to demonstrate session grouping:
155
+
156
+ ```bash
157
+ MODEL_NAME="your-model-name"
158
+
159
+ # First request in a session
160
+ curl http://litellm.localhost:5483/v1/chat/completions \
161
+ -H "Content-Type: application/json" \
162
+ -H "Authorization: Bearer sk-litellm-local-client" \
163
+ -H "x-litellm-session-id: my-test-session" \
164
+ -d "{
165
+ \"model\": \"$MODEL_NAME\",
166
+ \"messages\": [{\"role\": \"user\", \"content\": \"Hello! This is a test message to start my session.\"}]
167
+ }"
168
+
169
+ # Second request in the same session
170
+ curl http://litellm.localhost:5483/v1/chat/completions \
171
+ -H "Content-Type: application/json" \
172
+ -H "Authorization: Bearer sk-litellm-local-client" \
173
+ -H "x-litellm-session-id: my-test-session" \
174
+ -d "{
175
+ \"model\": \"$MODEL_NAME\",
176
+ \"messages\": [{\"role\": \"user\", \"content\": \"Goodbye! I am done with this test session.\"}]
177
+ }"
178
+ ```
179
+
180
+ > [!NOTE]
181
+ > Set `MODEL_NAME` to one of the exact model names imported in step 2. The default API key is `sk-litellm-local-client`.
182
+
183
+ Refresh the [Langfuse sessions page](http://langfuse.localhost:5483/project/local-project/sessions) to see both traces grouped under the `my-test-session` ID.
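
The same session grouping works from application code. Here is a minimal sketch in Python (standard library only; the model name is a placeholder, and the endpoint, API key, and header name follow the defaults above):

```python
import json
import urllib.request

BASE_URL = "http://litellm.localhost:5483/v1/chat/completions"
API_KEY = "sk-litellm-local-client"  # default client key

def chat_request(model: str, content: str, session_id: str) -> urllib.request.Request:
    """Build a chat-completion request tagged with a Langfuse session id."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode()
    return urllib.request.Request(
        BASE_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
            # Requests sharing this value are grouped into one session
            "x-litellm-session-id": session_id,
        },
    )

req = chat_request("your-model-name", "Hello! Starting my session.", "my-test-session")
# urllib.request.urlopen(req)  # uncomment with the stack running
```

Sending the request with `urllib.request.urlopen` (or the equivalent in any HTTP client or OpenAI-compatible SDK) produces traces grouped under `my-test-session`.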
184
+
185
+ ### Other Useful Commands
186
+
187
+ | Command | Description |
188
+ | :----------------------- | :---------------------------------------------- |
189
+ | `traicebox stop` | Stop the stack. |
190
+ | `traicebox restart` | Recreate and restart the stack. |
191
+ | `traicebox destroy` | Remove the stack and delete local data volumes. |
192
+ | `traicebox models clear` | Clear the custom model list from LiteLLM. |
193
+
194
+ For more information on available commands and options, run `traicebox --help`.
195
+
196
+ ## Configuration
197
+
198
+ Traicebox stores its configuration and data in `${TRAICEBOX_HOME}`. You can override this location by setting the `TRAICEBOX_HOME` environment variable.
199
+
200
+ By default, it is located at:
201
+
202
+ | OS | Default `${TRAICEBOX_HOME}` |
203
+ | :----------------- | :----------------------------------------------------- |
204
+ | **macOS** | `~/Library/Application Support/Traicebox` |
205
+ | **Windows** | `%APPDATA%\Traicebox` |
206
+ | **Linux / Others** | `~/.config/traicebox` (or respects `$XDG_CONFIG_HOME`) |
207
+
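To keep an experiment separate from your main setup, point `TRAICEBOX_HOME` at any directory before running commands (the path below is just an example):

```shell
# Example only: use a throwaway home directory
export TRAICEBOX_HOME="$HOME/.traicebox-dev"
mkdir -p "$TRAICEBOX_HOME"

# traicebox commands in this shell now use that directory, e.g.:
# traicebox setup
```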
208
+ ### `traicebox.yaml`
209
+
210
+ You can customize the stack behavior by creating or editing `${TRAICEBOX_HOME}/traicebox.yaml`.
211
+
212
+ | Option | Type | Default | Description |
213
+ | :----- | :------- | :---------- | :----------------------------------------------------------- |
214
+ | `host` | `string` | `127.0.0.1` | The host address that Traicebox services will bind to. |
215
+ | `port` | `number` | `5483` | The port that Traicebox services will be accessible through. |
216
+
217
+ Example:
218
+
219
+ ```yaml
220
+ host: 127.0.0.1
221
+ port: 5483
222
+ ```
223
+
224
+ ## LiteLLM
225
+
226
+ LiteLLM loads its proxy config from `${TRAICEBOX_HOME}/litellm/config.yaml`. The LiteLLM UI is available at `http://litellm.localhost:5483`, where Traicebox automatically manages your admin session for immediate access.
227
+
228
+ `${TRAICEBOX_HOME}/litellm/config.yaml` is the source of truth for model routing and upstream API wiring. Imported models are written directly into `model_list`, with `api_base` inferred from the import endpoint and `api_key` read from `os.environ/OPENAI_COMPATIBLE_API_KEY`.
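
For orientation, an imported entry might look like this (illustrative values only; the exact shape follows LiteLLM's proxy config format, and `api_base` depends on the endpoint you imported from):

```yaml
model_list:
  - model_name: my-local-model          # name clients request through the proxy
    litellm_params:
      model: openai/my-local-model      # routed as an OpenAI-compatible backend
      api_base: http://host.docker.internal:1234/v1
      api_key: os.environ/OPENAI_COMPATIBLE_API_KEY
```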
229
+
230
+ `OPENAI_COMPATIBLE_API_KEY` is intentionally not persisted by Traicebox. For local backends such as LM Studio, Ollama, or llama.cpp, no secret is needed. For authenticated upstreams, run `traicebox` through your password manager CLI so it injects `OPENAI_COMPATIBLE_API_KEY` into the process; Traicebox will materialize it into an ephemeral Docker secret for LiteLLM only.
231
+
232
+ To replace the LiteLLM `model_list` from an OpenAI-compatible `/v1/models` endpoint (or alias like `lm-studio`, `ollama`, `llama-cpp`):
233
+
234
+ ```bash
235
+ traicebox models import-from-openai-api --endpoint lm-studio
236
+ ```
237
+
238
+ If that endpoint requires auth, inject `OPENAI_COMPATIBLE_API_KEY` into the command environment first; otherwise no key is needed.
239
+
240
+ To clear the LiteLLM `model_list`:
241
+
242
+ ```bash
243
+ traicebox models clear
244
+ ```
245
+
246
+ To print an OpenCode config snippet for the current LiteLLM models:
247
+
248
+ ```bash
249
+ traicebox generate-harness-config opencode
250
+ ```
251
+
252
+ ## Langfuse
253
+
254
+ LiteLLM sends OTEL traces to Langfuse using the built-in `langfuse_otel` callback and `LANGFUSE_OTEL_HOST=http://langfuse-web:3000`.
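
In LiteLLM config terms, that callback is enabled roughly like this (illustrative; Traicebox generates the actual configuration):

```yaml
litellm_settings:
  callbacks: ["langfuse_otel"]
```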
255
+
256
+ Langfuse is accessible at `http://langfuse.localhost:5483`. By default, Traicebox disables public signups and automatically creates a browser session for the seeded admin user so you can start tracing immediately.
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "traicebox",
3
- "version": "0.1.0",
3
+ "version": "0.1.1",
4
4
  "description": "A zero-config local developer stack for tracing and session tracking around LLM and AI model workflows",
5
5
  "license": "MIT",
6
6
  "author": "Fardjad Davari <public@fardjad.com>",
@@ -47,4 +47,4 @@
47
47
  "yaml": "^2.8.3",
48
48
  "yargs": "^18.0.0"
49
49
  }
50
- }
50
+ }
@@ -1,28 +0,0 @@
1
- #!/bin/sh
2
- set -eu
3
-
4
- psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<SQL
5
- DO \$\$
6
- BEGIN
7
- IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = '${LITELLM_DB_USER}') THEN
8
- CREATE ROLE ${LITELLM_DB_USER} LOGIN PASSWORD '${LITELLM_DB_PASSWORD}';
9
- END IF;
10
- END
11
- \$\$;
12
-
13
- DO \$\$
14
- BEGIN
15
- IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = '${LANGFUSE_DB_USER}') THEN
16
- CREATE ROLE ${LANGFUSE_DB_USER} LOGIN PASSWORD '${LANGFUSE_DB_PASSWORD}';
17
- END IF;
18
- END
19
- \$\$;
20
-
21
- SELECT 'CREATE DATABASE ${LITELLM_DB_NAME} OWNER ${LITELLM_DB_USER}'
22
- WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = '${LITELLM_DB_NAME}')
23
- \gexec
24
-
25
- SELECT 'CREATE DATABASE ${LANGFUSE_DB_NAME} OWNER ${LANGFUSE_DB_USER}'
26
- WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = '${LANGFUSE_DB_NAME}')
27
- \gexec
28
- SQL
@@ -1,18 +0,0 @@
1
- {
2
- auto_https off
3
- }
4
-
5
- http://langfuse.localhost:{$TRAICEBOX_PORT} {
6
- reverse_proxy langfuse-proxy:3000
7
- }
8
-
9
- http://litellm.localhost:{$TRAICEBOX_PORT} {
10
- @litellm_ui path /ui /ui/*
11
- handle @litellm_ui {
12
- reverse_proxy litellm-ui-proxy:4000
13
- }
14
-
15
- handle {
16
- reverse_proxy litellm:4000
17
- }
18
- }
@@ -1,7 +0,0 @@
1
- FROM oven/bun:1
2
-
3
- WORKDIR /app
4
-
5
- COPY langfuse-proxy/server.ts /app/server.ts
6
-
7
- CMD ["bun", "run", "/app/server.ts"]
@@ -1,7 +0,0 @@
1
- FROM oven/bun:1
2
-
3
- WORKDIR /app
4
-
5
- COPY litellm-ui-proxy/server.ts /app/server.ts
6
-
7
- CMD ["bun", "run", "/app/server.ts"]
@@ -1,39 +0,0 @@
1
- #!/bin/sh
2
- set -eu
3
-
4
- LITELLM_URL="http://litellm:4000"
5
- READY_URL="$LITELLM_URL/health/readiness"
6
- KEY_INFO_URL="$LITELLM_URL/key/info?key=${LITELLM_CLIENT_KEY}"
7
- KEY_GENERATE_URL="$LITELLM_URL/key/generate"
8
-
9
- for _ in $(seq 1 60); do
10
- if curl -fsS "$READY_URL" >/dev/null 2>&1; then
11
- break
12
- fi
13
- sleep 2
14
- done
15
-
16
- if ! curl -fsS "$READY_URL" >/dev/null 2>&1; then
17
- echo "LiteLLM did not become ready in time" >&2
18
- exit 1
19
- fi
20
-
21
- if curl -fsS \
22
- -H "Authorization: Bearer ${LITELLM_MASTER_KEY}" \
23
- "$KEY_INFO_URL" >/dev/null 2>&1; then
24
- echo "LiteLLM client key already exists"
25
- exit 0
26
- fi
27
-
28
- curl -fsS \
29
- -X POST \
30
- -H "Authorization: Bearer ${LITELLM_MASTER_KEY}" \
31
- -H "Content-Type: application/json" \
32
- "$KEY_GENERATE_URL" \
33
- -d "{
34
- \"key_alias\": \"local-client\",
35
- \"key\": \"${LITELLM_CLIENT_KEY}\",
36
- \"models\": []
37
- }" >/dev/null
38
-
39
- echo "LiteLLM client key created"
@@ -1,266 +0,0 @@
1
- services:
2
- db:
3
- image: pgautoupgrade/pgautoupgrade:18-alpine
4
- environment:
5
- - POSTGRES_DB=postgres
6
- - POSTGRES_USER=${POSTGRES_SUPERUSER}
7
- - POSTGRES_PASSWORD=${POSTGRES_SUPERPASS}
8
- - PGAUTO_DEVEL=
9
- - PGAUTO_ONESHOT=no
10
- - LITELLM_DB_NAME=${LITELLM_DB_NAME}
11
- - LITELLM_DB_USER=${LITELLM_DB_USER}
12
- - LITELLM_DB_PASSWORD=${LITELLM_DB_PASSWORD}
13
- - LANGFUSE_DB_NAME=${LANGFUSE_DB_NAME}
14
- - LANGFUSE_DB_USER=${LANGFUSE_DB_USER}
15
- - LANGFUSE_DB_PASSWORD=${LANGFUSE_DB_PASSWORD}
16
- healthcheck:
17
- test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_SUPERUSER} -d postgres && psql -U ${POSTGRES_SUPERUSER} -d postgres -tAc \"SELECT 1 FROM pg_database WHERE datname IN ('${LITELLM_DB_NAME}','${LANGFUSE_DB_NAME}')\" | grep -q 1"]
18
- interval: 3s
19
- timeout: 3s
20
- retries: 20
21
- start_period: 5s
22
- volumes:
23
- - postgres_data:/var/lib/postgresql
24
- - ./postgres/init/00-app-databases.sh:/docker-entrypoint-initdb.d/00-app-databases.sh:ro
25
- langfuse-clickhouse:
26
- image: clickhouse/clickhouse-server
27
- user: "101:101"
28
- environment:
29
- - CLICKHOUSE_DB=${CLICKHOUSE_DB}
30
- - CLICKHOUSE_USER=${CLICKHOUSE_USER}
31
- - CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD}
32
- - TZ=UTC
33
- healthcheck:
34
- test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8123/ping"]
35
- interval: 5s
36
- timeout: 5s
37
- retries: 10
38
- start_period: 1s
39
- volumes:
40
- - langfuse_clickhouse_data:/var/lib/clickhouse
41
- - langfuse_clickhouse_logs:/var/log/clickhouse-server
42
- langfuse-redis:
43
- image: redis:latest
44
- command:
45
- - --requirepass
46
- - ${LANGFUSE_REDIS_PASSWORD}
47
- - --maxmemory-policy
48
- - noeviction
49
- healthcheck:
50
- test: ["CMD", "redis-cli", "-a", "${LANGFUSE_REDIS_PASSWORD}", "ping"]
51
- interval: 3s
52
- timeout: 10s
53
- retries: 10
54
- volumes:
55
- - langfuse_redis_data:/data
56
- langfuse-minio:
57
- image: cgr.dev/chainguard/minio
58
- entrypoint: sh
59
- command:
60
- - -c
61
- - mkdir -p /data/langfuse && minio server --address ":9000" --console-address ":9001" /data
62
- environment:
63
- - MINIO_ROOT_USER=${MINIO_ROOT_USER}
64
- - MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
65
- healthcheck:
66
- test: ["CMD", "mc", "ready", "local"]
67
- interval: 1s
68
- timeout: 5s
69
- retries: 5
70
- start_period: 1s
71
- volumes:
72
- - langfuse_minio_data:/data
73
- langfuse-worker:
74
- image: langfuse/langfuse-worker:3
75
- depends_on:
76
- db:
77
- condition: service_healthy
78
- langfuse-web:
79
- condition: service_healthy
80
- langfuse-minio:
81
- condition: service_healthy
82
- langfuse-redis:
83
- condition: service_healthy
84
- langfuse-clickhouse:
85
- condition: service_healthy
86
- environment: &langfuse-env
87
- NEXTAUTH_URL: http://langfuse.localhost:${TRAICEBOX_PORT}
88
- DATABASE_URL: postgresql://${LANGFUSE_DB_USER}:${LANGFUSE_DB_PASSWORD}@db:5432/${LANGFUSE_DB_NAME}
89
- SALT: ${LANGFUSE_SALT}
90
- ENCRYPTION_KEY: "${LANGFUSE_ENCRYPTION_KEY}"
91
- TELEMETRY_ENABLED: "${LANGFUSE_TELEMETRY_ENABLED}"
92
- LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES: "${LANGFUSE_EXPERIMENTAL_FEATURES}"
93
- CLICKHOUSE_MIGRATION_URL: clickhouse://langfuse-clickhouse:9000
94
- CLICKHOUSE_URL: http://langfuse-clickhouse:8123
95
- CLICKHOUSE_USER: ${CLICKHOUSE_USER}
96
- CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
97
- CLICKHOUSE_CLUSTER_ENABLED: "false"
98
- LANGFUSE_S3_EVENT_UPLOAD_BUCKET: langfuse
99
- LANGFUSE_S3_EVENT_UPLOAD_REGION: auto
100
- LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID: ${MINIO_ROOT_USER}
101
- LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY: ${MINIO_ROOT_PASSWORD}
102
- LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT: http://langfuse-minio:9000
103
- LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE: "true"
104
- LANGFUSE_S3_EVENT_UPLOAD_PREFIX: events/
105
- LANGFUSE_S3_MEDIA_UPLOAD_BUCKET: langfuse
106
- LANGFUSE_S3_MEDIA_UPLOAD_REGION: auto
107
- LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID: ${MINIO_ROOT_USER}
108
- LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY: ${MINIO_ROOT_PASSWORD}
109
- LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT: http://langfuse-minio:9000
110
- LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE: "true"
111
- LANGFUSE_S3_MEDIA_UPLOAD_PREFIX: media/
112
- REDIS_HOST: langfuse-redis
113
- REDIS_PORT: "6379"
114
- REDIS_AUTH: ${LANGFUSE_REDIS_PASSWORD}
115
- langfuse-web:
116
- image: langfuse/langfuse:3
117
- depends_on:
118
- db:
119
- condition: service_healthy
120
- langfuse-minio:
121
- condition: service_healthy
122
- langfuse-redis:
123
- condition: service_healthy
124
- langfuse-clickhouse:
125
- condition: service_healthy
126
- environment:
127
- <<: *langfuse-env
128
- NEXTAUTH_SECRET: ${LANGFUSE_NEXTAUTH_SECRET}
129
- AUTH_DISABLE_SIGNUP: ${LANGFUSE_AUTH_DISABLE_SIGNUP}
130
- LANGFUSE_INIT_ORG_ID: ${LANGFUSE_INIT_ORG_ID}
131
- LANGFUSE_INIT_ORG_NAME: ${LANGFUSE_INIT_ORG_NAME}
132
- LANGFUSE_INIT_PROJECT_ID: ${LANGFUSE_INIT_PROJECT_ID}
133
- LANGFUSE_INIT_PROJECT_NAME: ${LANGFUSE_INIT_PROJECT_NAME}
134
- LANGFUSE_INIT_PROJECT_PUBLIC_KEY: ${LANGFUSE_INIT_PROJECT_PUBLIC_KEY}
135
- LANGFUSE_INIT_PROJECT_SECRET_KEY: ${LANGFUSE_INIT_PROJECT_SECRET_KEY}
136
- LANGFUSE_INIT_USER_EMAIL: ${LANGFUSE_INIT_USER_EMAIL}
137
- LANGFUSE_INIT_USER_NAME: ${LANGFUSE_INIT_USER_NAME}
138
- LANGFUSE_INIT_USER_PASSWORD: ${LANGFUSE_INIT_USER_PASSWORD}
139
- healthcheck:
140
- test: ["CMD", "node", "-e", "const host = process.env.HOSTNAME || 'localhost'; fetch('http://' + host + ':3000/api/auth/csrf').then(async (r) => { const ok = r.ok && (await r.text()).includes('csrfToken'); process.exit(ok ? 0 : 1); }).catch(() => process.exit(1))"]
141
- interval: 5s
142
- timeout: 5s
143
- retries: 24
144
- start_period: 20s
145
- langfuse-proxy:
146
- build:
147
- context: .
148
- dockerfile: langfuse-proxy/Dockerfile
149
- depends_on:
150
- langfuse-web:
151
- condition: service_healthy
152
- environment:
153
- - PORT=3000
154
- - LANGFUSE_PUBLIC_ORIGIN=http://langfuse.localhost:${TRAICEBOX_PORT}
155
- - LANGFUSE_UPSTREAM_ORIGIN=http://langfuse-web:3000
156
- - LANGFUSE_PROXY_AUTOLOGIN=${LANGFUSE_PROXY_AUTOLOGIN}
157
- - LANGFUSE_INIT_USER_EMAIL=${LANGFUSE_INIT_USER_EMAIL}
158
- - LANGFUSE_INIT_USER_PASSWORD=${LANGFUSE_INIT_USER_PASSWORD}
159
- healthcheck:
160
- test: ["CMD", "bun", "-e", "fetch('http://127.0.0.1:3000/readyz').then((r) => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]
161
- interval: 5s
162
- timeout: 5s
163
- retries: 24
164
- start_period: 10s
165
- litellm:
166
- image: docker.litellm.ai/berriai/litellm:main-latest
167
- depends_on:
168
- db:
169
- condition: service_healthy
170
- langfuse-web:
171
- condition: service_healthy
172
- environment:
173
- - DATABASE_URL=postgresql://${LITELLM_DB_USER}:${LITELLM_DB_PASSWORD}@db:5432/${LITELLM_DB_NAME}
174
- - LANGFUSE_PUBLIC_KEY=${LANGFUSE_INIT_PROJECT_PUBLIC_KEY}
175
- - LANGFUSE_SECRET_KEY=${LANGFUSE_INIT_PROJECT_SECRET_KEY}
176
- - LANGFUSE_OTEL_HOST=http://langfuse-web:3000
177
- - LANGFUSE_TRACING_ENVIRONMENT=${LANGFUSE_TRACING_ENVIRONMENT}
178
- - LITELLM_MASTER_KEY=${LITELLM_MASTER_KEY}
179
- extra_hosts:
180
- - host.docker.internal:host-gateway
181
- secrets:
182
- - source: openai_compatible_api_key
183
- target: openai_compatible_api_key
184
- volumes:
185
- - ./litellm:/app/litellm-config:ro
186
- entrypoint:
187
- - /bin/sh
188
- - /app/litellm-config/start-litellm.sh
189
- command:
190
- - --config
191
- - /app/litellm-config/config.yaml
192
- - --log_config
193
- - /app/litellm-config/logging.json
194
- - --port
195
- - "4000"
196
- healthcheck:
197
- test: ["CMD", "python3", "-c", "import sys, urllib.request; sys.exit(0 if urllib.request.urlopen('http://127.0.0.1:4000/health/readiness', timeout=5).getcode() == 200 else 1)"]
198
- interval: 5s
199
- timeout: 5s
200
- retries: 24
201
- start_period: 20s
202
- litellm-ui-proxy:
203
- build:
204
- context: .
205
- dockerfile: litellm-ui-proxy/Dockerfile
206
- depends_on:
207
- litellm:
208
- condition: service_healthy
209
- environment:
210
- - PORT=4000
211
- - LITELLM_UPSTREAM_ORIGIN=http://litellm:4000
212
- - LITELLM_MASTER_KEY=${LITELLM_MASTER_KEY}
213
- healthcheck:
214
- test: ["CMD", "bun", "-e", "fetch('http://127.0.0.1:4000/readyz').then((r) => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]
215
- interval: 5s
216
- timeout: 5s
217
- retries: 24
218
- start_period: 10s
219
- litellm-bootstrap:
220
- image: curlimages/curl:8.16.0
221
- depends_on:
222
- litellm:
223
- condition: service_healthy
224
- environment:
225
- - LITELLM_MASTER_KEY=${LITELLM_MASTER_KEY}
226
- - LITELLM_CLIENT_KEY=${LITELLM_CLIENT_KEY}
227
- volumes:
228
- - ./litellm/bootstrap-client-key.sh:/bootstrap-client-key.sh:ro
229
- entrypoint:
230
- - sh
231
- - /bootstrap-client-key.sh
232
- caddy:
233
- image: caddy:2-alpine
234
- depends_on:
235
- langfuse-proxy:
236
- condition: service_healthy
237
- litellm:
238
- condition: service_healthy
239
- litellm-ui-proxy:
240
- condition: service_healthy
241
- ports:
242
- - 127.0.0.1:${TRAICEBOX_PORT}:${TRAICEBOX_PORT}
243
- environment:
244
- - TRAICEBOX_PORT=${TRAICEBOX_PORT}
245
- volumes:
246
- - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
247
- healthcheck:
248
- test: ["CMD-SHELL", "wget -qO- --header='Host: litellm.localhost:${TRAICEBOX_PORT}' http://127.0.0.1:${TRAICEBOX_PORT}/health/readiness >/dev/null && wget -qO- --header='Host: litellm.localhost:${TRAICEBOX_PORT}' http://127.0.0.1:${TRAICEBOX_PORT}/ui/ >/dev/null && wget -qO- --header='Host: langfuse.localhost:${TRAICEBOX_PORT}' http://127.0.0.1:${TRAICEBOX_PORT}/readyz >/dev/null"]
249
- interval: 5s
250
- timeout: 5s
251
- retries: 24
252
- start_period: 10s
253
- volumes:
254
- postgres_data:
255
- driver: local
256
- langfuse_clickhouse_data:
257
- driver: local
258
- langfuse_clickhouse_logs:
259
- driver: local
260
- langfuse_minio_data:
261
- driver: local
262
- langfuse_redis_data:
263
- driver: local
264
- secrets:
265
- openai_compatible_api_key:
266
- file: ${OPENAI_COMPATIBLE_API_KEY_SECRET_FILE:-/dev/null}