ccs-llmconnector 1.0.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,22 @@
+ MIT License
+
+ Copyright (c) 2025 CCS
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
@@ -0,0 +1,2 @@
+ include README.md
+ include LICENSE
@@ -0,0 +1,349 @@
+ Metadata-Version: 2.4
+ Name: ccs_llmconnector
+ Version: 1.0.0
+ Summary: Lightweight wrapper around the Responses APIs of multiple LLM provider Python SDKs.
+ Author: CCS
+ License: MIT
+ Project-URL: Homepage, https://cleancodesolutions.de
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: openai>=1.0.0
+ Requires-Dist: google-genai
+ Requires-Dist: anthropic
+ Requires-Dist: xai-sdk
+ Dynamic: license-file
+
+ # llmconnector
+
+ `llmconnector` is a thin Python wrapper around leading large-language-model SDKs,
+ including the OpenAI Responses API, Google's Gemini SDK, Anthropic's Claude
+ Messages API, and xAI's Grok chat API. It exposes a minimal interface that
+ forwards the most common options, such as API key, prompt, optional reasoning
+ effort hints, token limits, and image inputs, and it includes helpers to
+ enumerate the models available to your account with each provider.
+
+ ## Installation
+
+ ```bash
+ pip install .
+ ```
+
+ ### Requirements
+
+ - `openai` (installed automatically with the package)
+ - `google-genai` (installed automatically with the package)
+ - `anthropic` (installed automatically with the package)
+ - `xai-sdk` (installed automatically with the package)
+
+ ## Components
+
+ - `OpenAIResponsesClient` - direct wrapper around the OpenAI Responses API, ideal when your project only targets OpenAI models. Includes a model discovery helper.
+ - `GeminiClient` - thin wrapper around the Google Gemini SDK, usable when `google-genai` is installed. Includes a model discovery helper.
+ - `AnthropicClient` - lightweight wrapper around the Anthropic Claude Messages API, usable when `anthropic` is installed. Includes a model discovery helper.
+ - `GrokClient` - wrapper around the xAI Grok chat API, usable when `xai-sdk` is installed. Includes a model discovery helper.
+ - `LLMClient` - provider router that delegates to registered clients (`openai`, `gemini`, and `anthropic` are registered by default), so additional vendors can be added without changing call sites.
+
+ ## GeminiClient
+
+ ### Usage
+
+ Use the `GeminiClient` when you want direct access to the Google Gemini SDK without
+ going through the provider router.
+
+ > Requires the `google-genai` Python package (installed automatically with `llmconnector`).
+
+ ```python
+ from llmconnector import GeminiClient
+
+ client = GeminiClient()
+
+ text_response = client.generate_response(
+     api_key="your-gemini-api-key",
+     prompt="Summarize the key benefits of unit testing.",
+     model="gemini-2.5-flash",
+     max_tokens=2000,
+ )
+
+ vision_response = client.generate_response(
+     api_key="your-gemini-api-key",
+     prompt="Describe the main action in this image.",
+     model="gemini-2.5-flash",
+     images=[
+         "/absolute/path/to/local-image.png",
+         "https://example.com/sample.jpg",
+     ],
+ )
+ ```
+
+ ### Parameters
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `api_key` | `str` | Yes | Gemini (Google AI) API key used for authentication. |
+ | `prompt` | `Optional[str]` | Conditional | Plain-text prompt. Required unless `images` is supplied. |
+ | `model` | `str` | Yes | Target model identifier, e.g. `gemini-2.5-flash`. |
+ | `max_tokens` | `int` | No | Defaults to `32000`. Passed to the SDK as `max_output_tokens`. |
+ | `reasoning_effort` | `Optional[str]` | No | Present for parity with the OpenAI client; currently ignored by the Gemini SDK. |
+ | `images` | `Optional[Sequence[str \| Path]]` | No | Image references (local paths, URLs, or data URLs) read and forwarded to the Gemini SDK. |
+
+ The method returns the generated model output as a plain string. Optional image
+ references are automatically converted into the appropriate `types.Part` instances,
+ allowing you to mix text and visuals in a single request.
+
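+ As a rough sketch of that conversion (the helper name and MIME-type fallback here are illustrative, not the client's exact internals), a local file can be wrapped with `types.Part.from_bytes`:
+
+ ```python
+ import mimetypes
+ from pathlib import Path
+
+ from google.genai import types
+
+ def image_part_from_path(path: str) -> types.Part:
+     """Illustrative helper: wrap a local image file as an inline Part."""
+     mime_type = mimetypes.guess_type(path)[0] or "image/png"
+     return types.Part.from_bytes(data=Path(path).read_bytes(), mime_type=mime_type)
+ ```
+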
+ ### Listing models
+
+ Use `list_models` to enumerate the Gemini models available to your account:
+
+ ```python
+ from llmconnector import GeminiClient
+
+ client = GeminiClient()
+ for model in client.list_models(api_key="your-gemini-api-key"):
+     print(model["id"], model["display_name"])
+ ```
+
+ ## AnthropicClient
+
+ ### Usage
+
+ Use the `AnthropicClient` when you want direct access to Anthropic's Claude Messages API.
+
+ > Requires the `anthropic` Python package (installed automatically with `llmconnector`).
+
+ ```python
+ from llmconnector import AnthropicClient
+
+ client = AnthropicClient()
+
+ text_response = client.generate_response(
+     api_key="sk-ant-api-key",
+     prompt="Summarize the key benefits of unit testing.",
+     model="claude-sonnet-4-5-20250929",
+     max_tokens=2000,
+ )
+
+ vision_response = client.generate_response(
+     api_key="sk-ant-api-key",
+     prompt="Describe the main action in this image.",
+     model="claude-3-5-sonnet-20241022",
+     images=[
+         "/absolute/path/to/local-image.png",
+         "https://example.com/sample.jpg",
+     ],
+ )
+ ```
+
+ ### Parameters
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `api_key` | `str` | Yes | Anthropic API key used for authentication. |
+ | `prompt` | `Optional[str]` | Conditional | Plain-text prompt. Required unless `images` is supplied. |
+ | `model` | `str` | Yes | Target model identifier, e.g. `claude-3-5-sonnet-20241022`. |
+ | `max_tokens` | `int` | No | Defaults to `32000`. Passed to the SDK as `max_tokens`. |
+ | `reasoning_effort` | `Optional[str]` | No | Present for parity with other clients; currently ignored by the Anthropic SDK. |
+ | `images` | `Optional[Sequence[str \| Path]]` | No | Image references (local paths, URLs, or data URLs) read and converted to base64 blocks. |
+
+ The method returns the generated model output as a plain string. Optional image
+ references are automatically transformed into Anthropic `image` blocks so you can
+ mix text and visual inputs in a single request.
+
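+ For reference, those blocks follow the documented Messages API shape for base64 images; a minimal sketch (the helper name is illustrative):
+
+ ```python
+ import base64
+ import mimetypes
+ from pathlib import Path
+
+ def image_block_from_path(path: str) -> dict:
+     """Illustrative helper: encode a local file as an Anthropic image block."""
+     media_type = mimetypes.guess_type(path)[0] or "image/png"
+     data = base64.b64encode(Path(path).read_bytes()).decode("ascii")
+     return {
+         "type": "image",
+         "source": {"type": "base64", "media_type": media_type, "data": data},
+     }
+ ```
+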
+ ### Listing models
+
+ Use `list_models` to enumerate the Anthropic models available to your account:
+
+ ```python
+ from llmconnector import AnthropicClient
+
+ client = AnthropicClient()
+ for model in client.list_models(api_key="sk-ant-api-key"):
+     print(model["id"], model["display_name"])
+ ```
+
+ ## GrokClient
+
+ ### Usage
+
+ Use the `GrokClient` when you want direct access to xAI's Grok chat API.
+
+ > Requires the `xai-sdk` Python package (installed automatically with `llmconnector`). Note that the SDK targets Python 3.10 and newer.
+
+ ```python
+ from llmconnector import GrokClient
+
+ client = GrokClient()
+
+ text_response = client.generate_response(
+     api_key="xai-api-key",
+     prompt="Summarize the key benefits of unit testing.",
+     model="grok-3",
+     max_tokens=2000,
+ )
+
+ vision_response = client.generate_response(
+     api_key="xai-api-key",
+     prompt="Describe the main action in this image.",
+     model="grok-2-vision",
+     images=[
+         "/absolute/path/to/local-image.png",
+         "https://example.com/sample.jpg",
+     ],
+ )
+ ```
+
+ ### Parameters
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `api_key` | `str` | Yes | xAI API key used for authentication. |
+ | `prompt` | `Optional[str]` | Conditional | Plain-text prompt. Required unless `images` is supplied. |
+ | `model` | `str` | Yes | Target model identifier, e.g. `grok-3`. |
+ | `max_tokens` | `int` | No | Defaults to `32000`. Passed to the Grok API as `max_tokens`. |
+ | `reasoning_effort` | `Optional[str]` | No | Hint for reasoning-focused models (`"low"` or `"high"`). |
+ | `images` | `Optional[Sequence[str \| Path]]` | No | Image references (local paths converted to data URLs, or remote URLs passed through); see the sketch below. |
+
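+ A rough sketch of that conversion, assuming a simple helper (the function name is illustrative; remote and `data:` references pass through unchanged):
+
+ ```python
+ import base64
+ import mimetypes
+ from pathlib import Path
+
+ def image_reference_to_url(ref: str) -> str:
+     """Illustrative helper: pass URLs through, encode local files as data URLs."""
+     if ref.startswith(("http://", "https://", "data:")):
+         return ref
+     media_type = mimetypes.guess_type(ref)[0] or "image/png"
+     encoded = base64.b64encode(Path(ref).read_bytes()).decode("ascii")
+     return f"data:{media_type};base64,{encoded}"
+ ```
+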
+ ### Listing models
+
+ Use `list_models` to enumerate the Grok language models available to your account:
+
+ ```python
+ from llmconnector import GrokClient
+
+ client = GrokClient()
+ for model in client.list_models(api_key="xai-api-key"):
+     print(model["id"], model["display_name"])
+ ```
+
+ ## OpenAIResponsesClient
+
+ ### Usage
+
+ Use the `OpenAIResponsesClient` when you want direct access to the OpenAI Responses API.
+
+ > Requires the `openai` Python package. It is declared as a dependency of `llmconnector`, but you can also install it manually with `pip install openai`.
+
+ ```python
+ from llmconnector import OpenAIResponsesClient
+
+ client = OpenAIResponsesClient()
+
+ text_response = client.generate_response(
+     api_key="sk-your-api-key",                             # required OpenAI API key
+     prompt="Summarize the key benefits of unit testing.",  # plain-text prompt
+     model="gpt-4o",                                        # any Responses API compatible model
+     reasoning_effort="medium",                             # optional: "low" | "medium" | "high"
+     max_tokens=2000,                                       # optional: caps the full response length
+ )
+
+ vision_response = client.generate_response(
+     api_key="sk-your-api-key",
+     prompt="Describe the main action in this image.",
+     model="gpt-4o-mini",
+     images=[
+         "/absolute/path/to/local-image.png",  # local file paths are converted to data URLs
+         "https://example.com/sample.jpg",     # remote URLs are passed through directly
+     ],
+ )
+ ```
+
+ ### Parameters
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `api_key` | `str` | Yes | OpenAI API key used for authentication. |
+ | `prompt` | `Optional[str]` | Conditional | Plain-text prompt. Required unless `images` is supplied. |
+ | `model` | `str` | Yes | Target model identifier, e.g. `gpt-4o`. |
+ | `max_tokens` | `int` | No | Defaults to `32000`. Passed to the Responses API as `max_output_tokens`. |
+ | `reasoning_effort` | `Optional[str]` | No | For models that support reasoning hints (`"low"`, `"medium"`, `"high"`). |
+ | `images` | `Optional[Sequence[str \| Path]]` | No | List of image URLs or local paths converted to data URLs. |
+
+ The method returns the generated model output as a plain string.
+
+ > The wrapper accepts `prompt` as plain text and translates it into the structured
+ > input format expected by the Responses API.
+
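+ For reference, a sketch of the structured `input` shape the Responses API expects (the values here are illustrative):
+
+ ```python
+ input_payload = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "input_text", "text": "Describe the main action in this image."},
+             {"type": "input_image", "image_url": "data:image/png;base64,..."},
+         ],
+     }
+ ]
+ ```
+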
+ ### Listing models
+
+ Use `list_models` to enumerate the OpenAI models available to your account:
+
+ ```python
+ from llmconnector import OpenAIResponsesClient
+
+ client = OpenAIResponsesClient()
+ for model in client.list_models(api_key="sk-your-api-key"):
+     print(model["id"], model["display_name"])
+ ```
+
+ ## LLMClient
+
+ ### Usage
+
+ `LLMClient` routes each request to the registered provider you select; OpenAI, Gemini, and Anthropic are configured by default when their dependencies are available. The client also exposes `list_models` to surface the model identifiers available for the selected provider.
+
+ ```python
+ from llmconnector import LLMClient
+
+ llm_client = LLMClient()
+
+ response_via_router = llm_client.generate_response(
+     provider="openai",  # selects the OpenAI wrapper
+     api_key="sk-your-api-key",
+     prompt="List three advantages of integration testing.",
+     model="gpt-4o",
+     max_tokens=1500,
+ )
+
+ gemini_response = llm_client.generate_response(
+     provider="gemini",  # google-genai is installed with llmconnector
+     api_key="your-gemini-api-key",
+     prompt="Outline best practices for prompt engineering.",
+     model="gemini-2.5-flash",
+     max_tokens=1500,
+ )
+
+ anthropic_response = llm_client.generate_response(
+     provider="anthropic",
+     api_key="sk-ant-api-key",
+     prompt="Summarize when to rely on retrieval-augmented generation.",
+     model="claude-sonnet-4-5-20250929",
+     max_tokens=1500,
+ )
+
+ # Additional providers can be registered at runtime:
+ # llm_client.register_provider("custom", CustomProviderClient())
+ # llm_client.generate_response(provider="custom", ...)
+ ```
+
+ ### Listing models
+
+ ```python
+ from llmconnector import LLMClient
+
+ llm_client = LLMClient()
+ for model in llm_client.list_models(provider="openai", api_key="sk-your-api-key"):
+     print(model["id"], model["display_name"])
+ ```
+
+ ### Parameters
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `provider` | `str` | Yes | Registered provider key (the default registry includes `'openai'`, `'gemini'`, and `'anthropic'`). |
+ | `api_key` | `str` | Yes | Provider-specific API key. |
+ | `prompt` | `Optional[str]` | Conditional | Plain-text prompt. Required unless `images` is supplied. |
+ | `model` | `str` | Yes | Provider-specific model identifier. |
+ | `max_tokens` | `int` | No | Defaults to `32000`. |
+ | `reasoning_effort` | `Optional[str]` | No | Reasoning hint forwarded when supported. |
+ | `images` | `Optional[Sequence[str \| Path]]` | No | Image references forwarded to the provider implementation. |
+
+ Use `LLMClient.register_provider(name, client)` to add additional providers that
+ implement `generate_response` with the same signature, as in the sketch below.
+
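+ A minimal sketch, assuming a hypothetical `EchoClient` that satisfies the expected interface:
+
+ ```python
+ from pathlib import Path
+ from typing import Optional, Sequence, Union
+
+ from llmconnector import LLMClient
+
+ class EchoClient:
+     """Hypothetical provider used to illustrate the expected interface."""
+
+     def generate_response(
+         self,
+         api_key: str,
+         prompt: Optional[str] = None,
+         model: str = "echo-1",
+         max_tokens: int = 32000,
+         reasoning_effort: Optional[str] = None,
+         images: Optional[Sequence[Union[str, Path]]] = None,
+     ) -> str:
+         # Echo the prompt back instead of calling a real API.
+         return f"[{model}] {prompt}"
+
+ llm_client = LLMClient()
+ llm_client.register_provider("echo", EchoClient())
+ print(llm_client.generate_response(provider="echo", api_key="unused", prompt="ping", model="echo-1"))
+ ```
+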
+ ## Development
+
+ The project uses a standard `pyproject.toml` based packaging layout with sources
+ stored under `src/`. Install the project in editable mode and run your preferred tooling:
+
+ ```bash
+ pip install -e .
+ ```