chub-dev 0.1.0 → 0.2.0-beta.2

---
name: chat
description: "GPT-4 and ChatGPT API for text generation, chat completions, streaming, function calling, vision, embeddings, and assistants"
metadata:
  languages: "python"
  versions: "2.6.0"
  updated-on: "2025-09-16"
  source: maintainer
tags: "openai,chat,llm,ai"
---

# OpenAI Python SDK Coding Guidelines

You are an OpenAI API coding expert. Help me write code that uses the OpenAI API via the official Python SDK.

You can find the official SDK documentation and code samples here:
https://platform.openai.com/docs/api-reference

## Golden Rule: Use the Correct and Current SDK

Always use the official OpenAI Python SDK, the standard library for all OpenAI API interactions, when calling OpenAI models.

**Library Name:** OpenAI Python SDK
**PyPI Package:** `openai`

**Installation:**
- **Correct:** `pip install openai`

**APIs and Usage:**
- **Correct:** `from openai import OpenAI`
- **Correct:** `client = OpenAI()`
- **Primary API (Recommended):** `client.responses.create(...)`
- **Legacy API (Still Supported):** `client.chat.completions.create(...)`

## Initialization and API Key

Set the `OPENAI_API_KEY` environment variable; the SDK will pick it up automatically.

```python
from openai import OpenAI

# Uses the OPENAI_API_KEY environment variable
client = OpenAI()

# Or pass the API key directly (not recommended for production)
# client = OpenAI(api_key="your-api-key-here")
```

Use `python-dotenv` or your secret manager of choice to keep keys out of source control.

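If pulling in `python-dotenv` is not an option, the core of what its `load_dotenv` does can be sketched with the standard library alone. This is an illustrative sketch, not the library's actual implementation; `load_dotenv_minimal` is a hypothetical name:

```python
import os
from pathlib import Path

def load_dotenv_minimal(path: str = ".env") -> None:
    """Load simple KEY=VALUE lines into os.environ (sketch of python-dotenv's load_dotenv)."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        # Do not overwrite variables already set in the real environment
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_dotenv_minimal()  # after this, OpenAI() can read OPENAI_API_KEY as usual
```

With `python-dotenv` itself, the equivalent is `from dotenv import load_dotenv; load_dotenv()` before constructing the client.
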
## Models

Default choices:
- **General Text Tasks:** `gpt-5`
- **Complex Reasoning Tasks:** `gpt-5`
- **Audio Processing:** `gpt-4o-audio-preview` or `gpt-4o-mini-audio-preview`
- **Vision Tasks:** `gpt-5`
- **Code-focused / Search-preview:** `codex-mini-latest`, `gpt-4o-search-preview`, or `gpt-4o-mini-search-preview`

All models:
- `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-5-2025-08-07`, `gpt-5-mini-2025-08-07`, `gpt-5-nano-2025-08-07`, `gpt-5-chat-latest`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-4.1-2025-04-14`, `gpt-4.1-mini-2025-04-14`, `gpt-4.1-nano-2025-04-14`, `o4-mini`, `o4-mini-2025-04-16`, `o3`, `o3-2025-04-16`, `o3-mini`, `o3-mini-2025-01-31`, `o1`, `o1-2024-12-17`, `o1-preview`, `o1-preview-2024-09-12`, `o1-mini`, `o1-mini-2024-09-12`, `gpt-4o`, `gpt-4o-2024-11-20`, `gpt-4o-2024-08-06`, `gpt-4o-2024-05-13`, `gpt-4o-mini`, `gpt-4o-mini-2024-07-18`, `chatgpt-4o-latest`, `codex-mini-latest`, `gpt-4o-audio-preview`, `gpt-4o-audio-preview-2024-10-01`, `gpt-4o-audio-preview-2024-12-17`, `gpt-4o-audio-preview-2025-06-03`, `gpt-4o-mini-audio-preview`, `gpt-4o-mini-audio-preview-2024-12-17`, `gpt-4o-search-preview`, `gpt-4o-mini-search-preview`, `gpt-4o-search-preview-2025-03-11`, `gpt-4o-mini-search-preview-2025-03-11`, `gpt-4-turbo`, `gpt-4-turbo-2024-04-09`, `gpt-4-0125-preview`, `gpt-4-turbo-preview`, `gpt-4-1106-preview`, `gpt-4`, `gpt-4-0314`, `gpt-4-0613`, `gpt-4-32k`, `gpt-4-32k-0314`, `gpt-4-32k-0613`, `gpt-4-vision-preview`, `gpt-3.5-turbo`, `gpt-3.5-turbo-16k`, `gpt-3.5-turbo-0301`, `gpt-3.5-turbo-0613`, `gpt-3.5-turbo-1106`, `gpt-3.5-turbo-0125`, `gpt-3.5-turbo-16k-0613`

## Basic Inference (Text Generation)

### Primary Method: Responses API (Recommended)

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    instructions="You are a helpful coding assistant.",
    input="How do I reverse a string in Python?",
)

print(response.output_text)
```

### Legacy Method: Chat Completions API

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How do I reverse a string in Python?"},
    ],
)

print(completion.choices[0].message.content)
```

## Vision (Multimodal Inputs)

### With Image URL (Responses API)

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What is in this image?"},
                {"type": "input_image", "image_url": "https://example.com/image.jpg"},
            ],
        }
    ],
)

print(response.output_text)
```

### With Base64 Encoded Image

```python
import base64
from openai import OpenAI

client = OpenAI()

with open("path/to/image.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read()).decode("utf-8")

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What is in this image?"},
                {"type": "input_image", "image_url": f"data:image/png;base64,{b64_image}"},
            ],
        }
    ],
)

print(response.output_text)
```

## Async Usage

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main():
    response = await client.responses.create(
        model="gpt-4o",
        input="Explain quantum computing to a beginner.",
    )
    print(response.output_text)

asyncio.run(main())
```

Optionally, use the `aiohttp` backend: install with `pip install openai[aiohttp]` and pass `http_client=DefaultAioHttpClient()` when constructing the client.

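A sketch of the aiohttp-backed client described above. It assumes the `aiohttp` extra is installed and a valid `OPENAI_API_KEY` is set; the prompt text is arbitrary:

```python
import asyncio
from openai import AsyncOpenAI, DefaultAioHttpClient

async def main() -> None:
    # DefaultAioHttpClient swaps the default httpx transport for aiohttp
    async with AsyncOpenAI(http_client=DefaultAioHttpClient()) as client:
        response = await client.responses.create(
            model="gpt-4o",
            input="Say hello from aiohttp.",
        )
        print(response.output_text)

asyncio.run(main())
```

Using the client as an async context manager ensures the aiohttp session is closed cleanly.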
## Streaming Responses

### Responses API Streaming

```python
from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="gpt-4o",
    input="Write a short story about a robot.",
    stream=True,
)

for event in stream:
    # Print only incremental text; other event types carry lifecycle metadata
    if event.type == "response.output_text.delta":
        print(event.delta, end="")
```

### Chat Completions Streaming

```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a joke"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```

## Function Calling (Tools)

Type-safe function calling with Pydantic helpers.

```python
import openai
from openai import OpenAI
from pydantic import BaseModel

class WeatherQuery(BaseModel):
    """Get the current weather for a location"""
    location: str
    unit: str = "celsius"

client = OpenAI()

completion = client.chat.completions.parse(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}],
    tools=[openai.pydantic_function_tool(WeatherQuery)],
)

message = completion.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        # Parsed arguments are attached to the function payload as a WeatherQuery instance
        if tool_call.function.parsed_arguments is not None:
            print(tool_call.function.parsed_arguments.location)
```

## Structured Outputs

Auto-parse JSON into Pydantic models.

```python
from typing import List

from openai import OpenAI
from pydantic import BaseModel

class Step(BaseModel):
    explanation: str
    output: str

class MathResponse(BaseModel):
    steps: List[Step]
    final_answer: str

client = OpenAI()
completion = client.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "You are a helpful math tutor."},
        {"role": "user", "content": "solve 8x + 31 = 2"},
    ],
    response_format=MathResponse,
)

message = completion.choices[0].message
if message.parsed:
    print(message.parsed.final_answer)
```

## Audio Capabilities

### Speech Synthesis (Text-to-Speech)

```python
from openai import OpenAI

client = OpenAI()

response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello, this is a test of the text to speech API.",
)

with open("output.mp3", "wb") as f:
    f.write(response.content)
```

### Audio Transcription

```python
from openai import OpenAI

client = OpenAI()

with open("audio.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcription.text)
```

### Audio Translation (to English)

```python
from openai import OpenAI

client = OpenAI()

with open("audio.mp3", "rb") as audio_file:
    translation = client.audio.translations.create(
        model="whisper-1",
        file=audio_file,
    )
print(translation.text)
```

## File Operations

### Upload Files

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

file_response = client.files.create(
    file=Path("training_data.jsonl"),
    purpose="fine-tune",
)

print(f"File ID: {file_response.id}")
```

### Retrieve, Download, Delete Files

```python
from openai import OpenAI

client = OpenAI()

# List files
files = client.files.list()

# Retrieve a specific file's metadata
file_info = client.files.retrieve("file-abc123")

# Download file content (`retrieve_content` is deprecated in favor of `content`)
file_content = client.files.content("file-abc123").text

# Delete a file
client.files.delete("file-abc123")
```

## Embeddings

```python
from openai import OpenAI

client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog.",
)

embedding = response.data[0].embedding
print(f"Embedding dimensions: {len(embedding)}")
```

## Image Generation

```python
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city skyline at sunset",
    size="1024x1024",
    quality="standard",
    n=1,
)

image_url = response.data[0].url
print(f"Generated image: {image_url}")
```

## Error Handling

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    response = client.responses.create(model="gpt-4o", input="Hello, world!")
except openai.RateLimitError:
    print("Rate limit exceeded. Please wait before retrying.")
except openai.APIConnectionError:
    print("Failed to connect to OpenAI API.")
except openai.AuthenticationError:
    print("Invalid API key provided.")
except openai.APIStatusError as e:
    print(f"API error occurred: {e.status_code}")
    print(f"Error response: {e.response}")
```

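When a rate-limit error does surface (the SDK already retries transient errors automatically), caller-side exponential backoff is a common pattern. A minimal generic sketch, where `with_backoff` is a hypothetical helper; with the SDK you would pass `openai.RateLimitError` as the retryable type:

```python
import random
import time

def with_backoff(fn, retryable, max_attempts=5, base_delay=1.0):
    """Call fn(), retrying on `retryable` exceptions with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the caller see the error
            # Sleep base_delay * 2^attempt, scaled by random jitter in [1, 2)
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Illustrative SDK usage (assumes `client` and `openai` from the example above):
# result = with_backoff(
#     lambda: client.responses.create(model="gpt-4o", input="Hello"),
#     retryable=openai.RateLimitError,
# )
```
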
## Request IDs and Debugging

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(model="gpt-4o", input="Test message")
# Unlike other underscore-prefixed attributes, _request_id is a public, documented property
print(f"Request ID: {response._request_id}")
```

## Retries and Timeouts

```python
from openai import OpenAI

# Configure retries (the default is 2)
client = OpenAI(max_retries=5)

# Configure timeouts in seconds
client = OpenAI(timeout=30.0)

# Per-request configuration
response = client.with_options(
    max_retries=3,
    timeout=60.0,
).responses.create(
    model="gpt-4o",
    input="Hello",
)
```

## Realtime API

```python
import asyncio
from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI()

    async with client.realtime.connect(model="gpt-realtime") as connection:
        await connection.session.update(session={"modalities": ["text"]})

        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello!"}],
            }
        )
        await connection.response.create()

        async for event in connection:
            if event.type == "response.text.delta":
                print(event.delta, end="")
            elif event.type == "response.done":
                break

asyncio.run(main())
```

## Microsoft Azure OpenAI

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://your-endpoint.openai.azure.com",
)

completion = client.chat.completions.create(
    model="deployment-name",  # Your deployment name
    messages=[{"role": "user", "content": "Hello, Azure OpenAI!"}],
)
print(completion.choices[0].message.content)
```

## Webhook Verification

```python
from flask import Flask, request
from openai import OpenAI, InvalidWebhookSignatureError

app = Flask(__name__)
client = OpenAI()  # Signature verification uses the OPENAI_WEBHOOK_SECRET environment variable

@app.route("/webhook", methods=["POST"])
def webhook():
    request_body = request.get_data(as_text=True)

    try:
        # unwrap() verifies the signature headers before parsing the event
        event = client.webhooks.unwrap(request_body, request.headers)

        if event.type == "response.completed":
            print("Response completed:", event.data)

        return "ok"
    except InvalidWebhookSignatureError as e:
        print("Invalid signature:", e)
        return "Invalid signature", 400
```

## Pagination

```python
from openai import OpenAI

client = OpenAI()

# Automatic pagination: iterating fetches additional pages as needed
all_files = []
for file in client.files.list(limit=20):
    all_files.append(file)

# Manual pagination
first_page = client.files.list(limit=20)
if first_page.has_next_page():
    next_page = first_page.get_next_page()
```

520
+ ## Notes
521
+
522
+ - Prefer the Responses API for new work; Chat Completions remains supported.
523
+ - Keep API keys in env vars or a secret manager.
524
+ - Both sync and async clients are available; interfaces mirror each other.
525
+ - Use streaming for lower latency UX.
526
+ - Pydantic-based structured outputs and function calling provide type safety.