local-openai2anthropic 0.1.0__py3-none-any.whl → 0.3.6__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,374 @@
+ Metadata-Version: 2.4
+ Name: local-openai2anthropic
+ Version: 0.3.6
+ Summary: A lightweight proxy server that converts Anthropic Messages API to OpenAI API
+ Project-URL: Homepage, https://github.com/dongfangzan/local-openai2anthropic
+ Project-URL: Repository, https://github.com/dongfangzan/local-openai2anthropic
+ Project-URL: Issues, https://github.com/dongfangzan/local-openai2anthropic/issues
+ Author-email: dongfangzan <zsybook0124@163.com>
+ Maintainer-email: dongfangzan <zsybook0124@163.com>
+ License: Apache-2.0
+ License-File: LICENSE
+ Keywords: anthropic,api,claude,messages,openai,proxy
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: Apache Software License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Requires-Python: >=3.12
+ Requires-Dist: anthropic>=0.30.0
+ Requires-Dist: fastapi>=0.100.0
+ Requires-Dist: httpx>=0.25.0
+ Requires-Dist: openai>=1.30.0
+ Requires-Dist: pydantic-settings>=2.0.0
+ Requires-Dist: pydantic>=2.0.0
+ Requires-Dist: tomli>=2.0.0; python_version < '3.11'
+ Requires-Dist: uvicorn[standard]>=0.23.0
+ Provides-Extra: dev
+ Requires-Dist: black>=23.0.0; extra == 'dev'
+ Requires-Dist: mypy>=1.0.0; extra == 'dev'
+ Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
+ Requires-Dist: pytest>=7.0.0; extra == 'dev'
+ Requires-Dist: ruff>=0.1.0; extra == 'dev'
+ Description-Content-Type: text/markdown
+
+ # local-openai2anthropic
+
+ [![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
+ [![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+ [![PyPI](https://img.shields.io/pypi/v/local-openai2anthropic.svg)](https://pypi.org/project/local-openai2anthropic/)
+
+ **English | [中文](README_zh.md)**
+
+ A lightweight proxy that lets applications built with the [Claude SDK](https://github.com/anthropics/anthropic-sdk-python) talk to locally-hosted OpenAI-compatible LLMs.
+
+ ---
+
+ ## What Problem This Solves
+
+ Many local LLM tools (vLLM, SGLang, etc.) provide an OpenAI-compatible API, but if you've built your app on Anthropic's Claude SDK, you can't use them directly.
+
+ This proxy translates Claude SDK calls to the OpenAI API format in real time, enabling:
+
+ - **Local LLM inference** with Claude-based apps
+ - **Offline development** without cloud API costs
+ - **Privacy-first AI** - data never leaves your machine
+ - **Seamless model switching** between cloud and local
+ - **Web Search tool** - built-in Tavily web search for local models
+
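+ To see what the translation looks like, here is a rough sketch of the kind of mapping the proxy performs on a simple request. This is illustrative only; the exact field handling lives in the proxy's converter:
+
+ ```python
+ # Illustrative only: an Anthropic Messages request and the kind of
+ # OpenAI Chat Completions request a proxy like this forwards upstream.
+ anthropic_request = {
+     "model": "meta-llama/Llama-2-7b-chat-hf",
+     "max_tokens": 1024,
+     "system": "You are a helpful assistant.",
+     "messages": [{"role": "user", "content": "Hello!"}],
+ }
+
+ openai_request = {
+     "model": "meta-llama/Llama-2-7b-chat-hf",
+     "max_tokens": 1024,
+     # The Anthropic `system` field becomes a leading system message.
+     "messages": [
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": "Hello!"},
+     ],
+ }
+ ```
+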
+ ---
+
+ ## Supported Local Backends
+
+ Currently tested and supported:
+
+ | Backend | Description | Status |
+ |---------|-------------|--------|
+ | [vLLM](https://github.com/vllm-project/vllm) | High-throughput LLM inference | ✅ Fully supported |
+ | [SGLang](https://github.com/sgl-project/sglang) | Fast structured language model serving | ✅ Fully supported |
+
+ Other OpenAI-compatible backends may work but are not fully tested.
+
+ ---
+
+ ## Quick Start
+
+ ### 1. Install
+
+ ```bash
+ pip install local-openai2anthropic
+ ```
+
+ ### 2. Configure Your LLM Backend (Optional)
+
+ **Option A: Start a local LLM server**
+
+ If you don't have an LLM server running, you can start one locally:
+
+ Example with vLLM:
+ ```bash
+ vllm serve meta-llama/Llama-2-7b-chat-hf
+ # vLLM serves an OpenAI-compatible API at http://localhost:8000/v1
+ ```
+
+ Or with SGLang:
+ ```bash
+ python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 8000
+ # SGLang serves an OpenAI-compatible API at http://localhost:8000/v1
+ ```
+
+ **Option B: Use an existing OpenAI-compatible API**
+
+ If you already have a deployed OpenAI-compatible API (local or remote), you can use it directly. Just note the base URL for the next step.
+
+ Examples:
+ - Local vLLM/SGLang: `http://localhost:8000/v1`
+ - Remote API: `https://api.example.com/v1`
+
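+ Before starting the proxy, you can sanity-check that your backend's OpenAI-compatible endpoint is reachable. A minimal sketch using the `openai` package (already a dependency of this project), assuming a backend at `http://localhost:8000/v1`:
+
+ ```python
+ from openai import OpenAI
+
+ # Point the OpenAI client straight at your backend (not at this proxy).
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
+
+ # List the models the backend advertises; the id you see here is what you
+ # will later pass as `model` through the proxy.
+ for model in client.models.list():
+     print(model.id)
+ ```
+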
+ > **Note:** If you're using [Ollama](https://ollama.com), it natively supports the Anthropic API format, so you don't need this proxy. Just point your Claude SDK directly to `http://localhost:11434/v1`.
+
+ ### 3. Start the Proxy
+
+ **Option A: Run in background (recommended)**
+
+ ```bash
+ export OA2A_OPENAI_BASE_URL=http://localhost:8000/v1 # Your local LLM endpoint
+ export OA2A_OPENAI_API_KEY=dummy # Any value, not used by local backends
+
+ oa2a start # Start server in background
+ # Server starts at http://localhost:8080
+
+ # View logs
+ oa2a logs # Show last 50 lines of logs
+ oa2a logs -f # Follow logs in real time (Ctrl+C to exit)
+
+ # Check status
+ oa2a status # Check if server is running
+
+ # Stop server
+ oa2a stop # Stop background server
+
+ # Restart server
+ oa2a restart # Restart with same settings
+ ```
+
+ **Option B: Run in foreground**
+
+ ```bash
+ export OA2A_OPENAI_BASE_URL=http://localhost:8000/v1
+ export OA2A_OPENAI_API_KEY=dummy
+
+ oa2a # Run server in foreground (blocking)
+ # Press Ctrl+C to stop
+ ```
+
+ ### 4. Use in Your App
+
+ ```python
+ import anthropic
+
+ client = anthropic.Anthropic(
+     base_url="http://localhost:8080",  # Point to proxy
+     api_key="dummy-key",  # Not used
+ )
+
+ message = client.messages.create(
+     model="meta-llama/Llama-2-7b-chat-hf",  # Your local model name
+     max_tokens=1024,
+     messages=[{"role": "user", "content": "Hello!"}],
+ )
+
+ print(message.content[0].text)
+ ```
+
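+ Streaming works through the same client. A minimal sketch, reusing the proxy address and model from the example above:
+
+ ```python
+ import anthropic
+
+ client = anthropic.Anthropic(base_url="http://localhost:8080", api_key="dummy-key")
+
+ # Stream tokens as they arrive instead of waiting for the full response.
+ with client.messages.stream(
+     model="meta-llama/Llama-2-7b-chat-hf",
+     max_tokens=1024,
+     messages=[{"role": "user", "content": "Write a haiku about local LLMs."}],
+ ) as stream:
+     for text in stream.text_stream:
+         print(text, end="", flush=True)
+ print()
+ ```
+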
+ ---
+
+ ## Using with Claude Code
+
+ You can configure [Claude Code](https://github.com/anthropics/claude-code) to use your local LLM through this proxy.
+
+ ### Configuration Steps
+
+ 1. **Edit Claude Code config file** at `~/.claude/settings.json`:
+
+ ```json
+ {
+   "env": {
+     "ANTHROPIC_BASE_URL": "http://localhost:8080",
+     "ANTHROPIC_API_KEY": "dummy-key",
+     "ANTHROPIC_MODEL": "meta-llama/Llama-2-7b-chat-hf",
+     "ANTHROPIC_DEFAULT_SONNET_MODEL": "meta-llama/Llama-2-7b-chat-hf",
+     "ANTHROPIC_DEFAULT_OPUS_MODEL": "meta-llama/Llama-2-7b-chat-hf",
+     "ANTHROPIC_DEFAULT_HAIKU_MODEL": "meta-llama/Llama-2-7b-chat-hf",
+     "ANTHROPIC_REASONING_MODEL": "meta-llama/Llama-2-7b-chat-hf"
+   }
+ }
+ ```
+
+ | Variable | Description |
+ |----------|-------------|
+ | `ANTHROPIC_MODEL` | General model setting |
+ | `ANTHROPIC_DEFAULT_SONNET_MODEL` | Default model for Sonnet mode (Claude Code default) |
+ | `ANTHROPIC_DEFAULT_OPUS_MODEL` | Default model for Opus mode |
+ | `ANTHROPIC_DEFAULT_HAIKU_MODEL` | Default model for Haiku mode |
+ | `ANTHROPIC_REASONING_MODEL` | Default model for reasoning tasks |
+
+ 2. **Or set environment variables** before running Claude Code:
+
+ ```bash
+ export ANTHROPIC_BASE_URL=http://localhost:8080
+ export ANTHROPIC_API_KEY=dummy-key
+
+ claude
+ ```
+
+ ### Complete Workflow Example
+
+ Make sure `~/.claude/settings.json` is configured as described above.
+
+ Terminal 1 - Start your local LLM:
+ ```bash
+ vllm serve meta-llama/Llama-2-7b-chat-hf
+ ```
+
+ Terminal 2 - Start the proxy (background mode):
+ ```bash
+ export OA2A_OPENAI_BASE_URL=http://localhost:8000/v1
+ export OA2A_OPENAI_API_KEY=dummy
+ export OA2A_TAVILY_API_KEY="tvly-your-tavily-api-key" # Optional: enable web search
+
+ oa2a start
+ ```
+
+ Terminal 3 - Launch Claude Code:
+ ```bash
+ claude
+ ```
+
+ Now Claude Code will use your local LLM instead of the cloud API.
+
+ To stop the proxy:
+ ```bash
+ oa2a stop
+ ```
+
+ ---
+
+ ## Features
+
+ - ✅ **Streaming responses** - Real-time token streaming via SSE
+ - ✅ **Tool calling** - Local LLM function calling support (see the example after this list)
+ - ✅ **Vision models** - Multi-modal input for vision-capable models
+ - ✅ **Web Search** - Give your local LLM internet access (see below)
+ - ✅ **Thinking mode** - Supports reasoning/thinking model outputs
+
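+ As referenced in the tool-calling item above, here is a sketch of a full tool-calling round trip through the proxy. The `get_weather` tool, its schema, and the stubbed result are hypothetical and exist only for this example; any OpenAI-compatible backend with function-calling support should behave similarly, though the output ultimately depends on your local model:
+
+ ```python
+ import json
+ import anthropic
+
+ client = anthropic.Anthropic(base_url="http://localhost:8080", api_key="dummy-key")
+
+ weather_tool = {
+     "name": "get_weather",  # hypothetical tool, defined only for this example
+     "description": "Get the current weather for a city",
+     "input_schema": {
+         "type": "object",
+         "properties": {"city": {"type": "string"}},
+         "required": ["city"],
+     },
+ }
+
+ messages = [{"role": "user", "content": "What's the weather in Paris?"}]
+ response = client.messages.create(
+     model="meta-llama/Llama-2-7b-chat-hf",
+     max_tokens=1024,
+     tools=[weather_tool],
+     messages=messages,
+ )
+
+ if response.stop_reason == "tool_use":
+     tool_use = next(b for b in response.content if b.type == "tool_use")
+     # Run the tool yourself (stubbed here), then send the result back.
+     result = {"city": tool_use.input["city"], "forecast": "sunny, 22°C"}
+     messages.append({"role": "assistant", "content": response.content})
+     messages.append({
+         "role": "user",
+         "content": [{
+             "type": "tool_result",
+             "tool_use_id": tool_use.id,
+             "content": json.dumps(result),
+         }],
+     })
+     final = client.messages.create(
+         model="meta-llama/Llama-2-7b-chat-hf",
+         max_tokens=1024,
+         tools=[weather_tool],
+         messages=messages,
+     )
+     print(final.content[0].text)
+ ```
+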
+ ---
+
+ ## Web Search Capability 🔍
+
+ **Bridge the gap: Give your local LLM the web search power that Claude Code users enjoy!**
+
+ When using locally-hosted models with Claude Code, you lose access to the built-in web search tool. This proxy fills that gap by providing a server-side web search implementation powered by [Tavily](https://tavily.com).
+
+ ### The Problem
+
+ | Scenario | Web Search Available? |
+ |----------|----------------------|
+ | Using Claude (cloud) in Claude Code | ✅ Built-in |
+ | Using local vLLM/SGLang in Claude Code | ❌ Not available |
+ | **Using this proxy + local LLM** | ✅ **Enabled via Tavily** |
+
+ ### How It Works
+
+ ```
+ Claude Code → Anthropic SDK → This Proxy → Local LLM
+                                   │
+                                   ▼
+                        Tavily API (Web Search)
+ ```
+
+ The proxy intercepts `web_search_20250305` tool calls and handles them directly, regardless of whether your local model supports web search natively.
+
+ ### Setup Tavily Search
+
+ 1. **Get a free API key** at [tavily.com](https://tavily.com) - generous free tier available
+
+ 2. **Configure the proxy:**
+ ```bash
+ export OA2A_OPENAI_BASE_URL=http://localhost:8000/v1
+ export OA2A_OPENAI_API_KEY=dummy
+ export OA2A_TAVILY_API_KEY="tvly-your-tavily-api-key" # Enable web search
+
+ oa2a
+ ```
+
+ 3. **Use in your app:**
+ ```python
+ import anthropic
+
+ client = anthropic.Anthropic(
+     base_url="http://localhost:8080",
+     api_key="dummy-key",
+ )
+
+ message = client.messages.create(
+     model="meta-llama/Llama-2-7b-chat-hf",
+     max_tokens=1024,
+     tools=[
+         {
+             "name": "web_search_20250305",
+             "description": "Search the web for current information",
+             "input_schema": {
+                 "type": "object",
+                 "properties": {
+                     "query": {"type": "string", "description": "Search query"},
+                 },
+                 "required": ["query"],
+             },
+         }
+     ],
+     messages=[{"role": "user", "content": "What happened in AI today?"}],
+ )
+
+ if message.stop_reason == "tool_use":
+     tool_use = message.content[-1]
+     print(f"Searching: {tool_use.input}")
+     # The proxy automatically calls Tavily and returns results
+ ```
+
+ ### Tavily Configuration Options
+
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `OA2A_TAVILY_API_KEY` | - | Your Tavily API key ([get free at tavily.com](https://tavily.com)) |
+ | `OA2A_TAVILY_MAX_RESULTS` | 5 | Number of search results to return |
+ | `OA2A_TAVILY_TIMEOUT` | 30 | Search timeout in seconds |
+ | `OA2A_WEBSEARCH_MAX_USES` | 5 | Max search calls per request |
+
+ ---
+
+ ## Configuration
+
+ | Variable | Required | Default | Description |
+ |----------|----------|---------|-------------|
+ | `OA2A_OPENAI_BASE_URL` | ✅ | - | Your local LLM's OpenAI-compatible endpoint |
+ | `OA2A_OPENAI_API_KEY` | ✅ | - | Any value (local backends usually ignore this) |
+ | `OA2A_PORT` | ❌ | 8080 | Proxy server port |
+ | `OA2A_HOST` | ❌ | 0.0.0.0 | Proxy server host |
+ | `OA2A_TAVILY_API_KEY` | ❌ | - | Enable web search ([tavily.com](https://tavily.com)) |
+
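+ For example, if you run the proxy on a non-default port, only the client's `base_url` changes; a minimal sketch assuming the proxy was started with `OA2A_PORT=9090` (a hypothetical value):
+
+ ```python
+ import anthropic
+
+ # Assumes the proxy was started with OA2A_PORT=9090 (hypothetical);
+ # everything else stays the same as in the Quick Start example.
+ client = anthropic.Anthropic(base_url="http://localhost:9090", api_key="dummy-key")
+ ```
+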
+ ---
+
+ ## Architecture
+
+ ```
+ Your App (Claude SDK)
+            │
+            ▼
+ ┌────────────────────────┐
+ │ local-openai2anthropic │  ← This proxy
+ │      (Port 8080)       │
+ └────────────────────────┘
+            │
+            ▼
+ Your Local LLM Server
+ (vLLM / SGLang)
+ (OpenAI-compatible API)
+ ```
+
+ ---
+
+ ## Development
+
+ ```bash
+ git clone https://github.com/dongfangzan/local-openai2anthropic.git
+ cd local-openai2anthropic
+ pip install -e ".[dev]"
+
+ pytest
+ ```
+
+ ## License
+
+ Apache License 2.0
@@ -0,0 +1,25 @@
+ local_openai2anthropic/__init__.py,sha256=YEz1wpAzYlPY-zbmlQuLf8gpwLEVJBqff4LdfWcz6NM,1059
+ local_openai2anthropic/__main__.py,sha256=K21u5u7FN8-DbO67TT_XDF0neGqJeFrVNkteRauCRQk,179
+ local_openai2anthropic/config.py,sha256=y40uEMBE57dOGCV3w3v5j82ZPZZJWUnJ4yaFZXJ8pRk,4706
+ local_openai2anthropic/converter.py,sha256=og94I514M9km_Wbk9c1ddU6fyaQNEbpd2zfpfnBQaTQ,16029
+ local_openai2anthropic/daemon.py,sha256=pZnRojGFcuIpR8yLDNjV-b0LJRBVhgRAa-dKeRRse44,10017
+ local_openai2anthropic/daemon_runner.py,sha256=rguOH0PgpbjqNsKYei0uCQX8JQOQ1wmtQH1CtW95Dbw,3274
+ local_openai2anthropic/main.py,sha256=3xrjsKFBYK6B8niAtQz0U_yz-eTpf91HnHeAiR9CLQE,12174
+ local_openai2anthropic/openai_types.py,sha256=jFdCvLwtXYoo5gGRqOhbHQcVaxcsxNnCP_yFPIv7rG4,3823
+ local_openai2anthropic/protocol.py,sha256=VW3B1YrbYg5UAo7PveQv0Ny5vfuNa6yG6IlHtkuyXiI,5178
+ local_openai2anthropic/router.py,sha256=gwSGCYQGd0tAj4B4cl30UDkIJDIfBP4D8T9KEMKnxyk,16196
+ local_openai2anthropic/tavily_client.py,sha256=QsBhnyF8BFWPAxB4XtWCCpHCquNL5SW93-zjTTi4Meg,3774
+ local_openai2anthropic/server_tools/__init__.py,sha256=QlJfjEta-HOCtLe7NaY_fpbEKv-ZpInjAnfmSqE9tbk,615
+ local_openai2anthropic/server_tools/base.py,sha256=pNFsv-jSgxVrkY004AHAcYMNZgVSO8ZOeCzQBUtQ3vU,5633
+ local_openai2anthropic/server_tools/web_search.py,sha256=1C7lX_cm-tMaN3MsCjinEZYPJc_Hj4yAxYay9h8Zbvs,6543
+ local_openai2anthropic/streaming/__init__.py,sha256=RFKYQnc0zlhWK-Dm7GZpmabmszbZhY5NcXaaSsQ7Sys,227
+ local_openai2anthropic/streaming/handler.py,sha256=X8viml6b40p-vr-A4HlEi5iCqmTsIMyQgj3S2RfweVE,22033
+ local_openai2anthropic/tools/__init__.py,sha256=OM_6YAwy3G1kbrF7n5NvmBwWPGO0hwq4xLrYZFMHANA,318
+ local_openai2anthropic/tools/handler.py,sha256=SO8AmEUfNIg16s6jOKBaYdajYc0fiI8ciOoiKXIJe_c,14106
+ local_openai2anthropic/utils/__init__.py,sha256=0Apd3lQCmWpQHol4AfjtQe6A3Cpex9Zn-8dyK_FU8Z0,372
+ local_openai2anthropic/utils/tokens.py,sha256=TV3vGAjoGZeyo1xPvwb5jto43p1U1f4HteCApB86X0g,3187
+ local_openai2anthropic-0.3.6.dist-info/METADATA,sha256=B5M75TvhwturteqT_zxAJ7lXSTAy2QpjTCeDvPRixhQ,11293
+ local_openai2anthropic-0.3.6.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
+ local_openai2anthropic-0.3.6.dist-info/entry_points.txt,sha256=hdc9tSJUNxyNLXcTYye5SuD2K0bEQhxBhGnWTFup6ZM,116
+ local_openai2anthropic-0.3.6.dist-info/licenses/LICENSE,sha256=X3_kZy3lJvd_xp8IeyUcIAO2Y367MXZc6aaRx8BYR_s,11369
+ local_openai2anthropic-0.3.6.dist-info/RECORD,,