chub-dev 0.2.0-beta.3 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,382 +0,0 @@
- ---
- name: sdk
- description: "Claude AI assistant API for text generation, analysis, conversation, streaming, tool use, vision, and batch processing"
- metadata:
-   languages: "python"
-   versions: "0.71.0"
-   updated-on: "2025-10-24"
-   source: maintainer
-   tags: "anthropic,sdk,llm,ai,claude"
- ---
-
-
- # Anthropic Python SDK Guidelines
-
- You are an Anthropic API coding expert. Help me write code that calls the Anthropic API using the official libraries and SDKs.
-
- You can find the official SDK documentation and code samples here:
- https://docs.anthropic.com/claude/reference/
-
- ## Golden Rule: Use the Correct and Current SDK
-
- Always use the Anthropic Python SDK to call Claude models; it is the standard library for all Anthropic API interactions.
23
-
24
- - **Library Name:** Anthropic Python SDK
25
- - **Python Package:** `anthropic`
26
- - **Installation:** `pip install anthropic`
27
-
28
- **APIs and Usage:**
29
-
30
- - **Correct:** `from anthropic import Anthropic`
31
- - **Correct:** `from anthropic import AsyncAnthropic` (for async usage)
32
- - **Correct:** `client = Anthropic(api_key="...")`
33
- - **Correct:** `client.messages.create(...)`
34
-
- ## Initialization and API Key
-
- The `anthropic` library requires creating a client object for all API calls.
-
- - Always use `client = Anthropic()` to create a client object.
- - Set the `ANTHROPIC_API_KEY` environment variable; the client picks it up automatically.
- - Alternatively, pass the API key directly: `client = Anthropic(api_key="your-key-here")`
-
- ## Models
-
- By default, use the following models when using the Anthropic SDK:
-
- - **Latest High-Performance Model:** `claude-sonnet-4-20250514`
- - **Latest Balanced Model:** `claude-3-7-sonnet-latest` or `claude-3-7-sonnet-20250219`
- - **Fast and Efficient Model:** `claude-3-5-haiku-latest` or `claude-3-5-haiku-20241022`
- - **Legacy High-Quality Models:** `claude-3-5-sonnet-latest` or `claude-3-opus-latest`
-
- ```python
- # List all available models
- models = client.models.list()
- ```
- ```python
- # List all deprecated models
- # (internal SDK constant; its location may change between SDK versions)
- from anthropic.resources.messages.messages import DEPRECATED_MODELS
- for model, deprecation_date in DEPRECATED_MODELS.items():
-     print(f"{model}: deprecated {deprecation_date}")
- ```
-
- - It is acceptable to pin specific dated versions if consistency is required.
- - Avoid using deprecated models - check the SDK documentation for deprecation notices.
-
- ## Basic Inference (Text Generation)
-
- ```python
- from anthropic import Anthropic
-
- client = Anthropic()
-
- message = client.messages.create(
-     model="claude-sonnet-4-20250514",
-     max_tokens=1024,
-     messages=[
-         {
-             "role": "user",
-             "content": "Hello, Claude"
-         }
-     ]
- )
- print(message.content)
- ```
-
- ## Multimodal Inputs
-
- ### Image Inputs
-
- ```python
- message = client.messages.create(
-     model="claude-sonnet-4-20250514",
-     max_tokens=1024,
-     messages=[
-         {
-             "role": "user",
-             "content": [
-                 {
-                     "type": "image",
-                     "source": {
-                         "type": "base64",
-                         "media_type": "image/jpeg",
-                         "data": "/9j/4AAQSkZJRgABAQ..."
-                     }
-                 },
-                 {
-                     "type": "text",
-                     "text": "What's in this image?"
-                 }
-             ]
-         }
-     ]
- )
- ```
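The `data` field holds the base64 encoding of the raw image bytes. A small stdlib-only helper can build the content block from a local file; `image_block` is an illustrative name, not part of the SDK:

```python
import base64
from pathlib import Path


def image_block(path: str, media_type: str = "image/jpeg") -> dict:
    """Build a base64 image content block for the Messages API from a local file."""
    data = base64.standard_b64encode(Path(path).read_bytes()).decode("ascii")
    return {
        "type": "image",
        "source": {"type": "base64", "media_type": media_type, "data": data},
    }
```

The returned dict drops straight into the `content` list of a user message, as in the example above.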
-
- ### File Uploads
-
- ```python
- from pathlib import Path
-
- client.beta.files.upload(
-     file=Path("/path/to/file"),
- )
- ```
-
- ## Async Usage
-
- ```python
- import asyncio
- from anthropic import AsyncAnthropic
-
- client = AsyncAnthropic()
-
- async def main():
-     message = await client.messages.create(
-         model="claude-sonnet-4-20250514",
-         max_tokens=1024,
-         messages=[
-             {
-                 "role": "user",
-                 "content": "Hello, Claude"
-             }
-         ]
-     )
-     print(message.content)
-
- asyncio.run(main())
- ```
-
- ## Advanced Capabilities and Configurations
-
- ### System Instructions
-
- ```python
- message = client.messages.create(
-     model="claude-sonnet-4-20250514",
-     max_tokens=1024,
-     system="You are a helpful assistant that speaks like a pirate.",
-     messages=[
-         {
-             "role": "user",
-             "content": "Hello!"
-         }
-     ]
- )
- ```
-
- ### Thinking Configuration
-
- ```python
- message = client.messages.create(
-     model="claude-sonnet-4-20250514",
-     max_tokens=2048,  # must be greater than thinking.budget_tokens
-     messages=[
-         {
-             "role": "user",
-             "content": "Solve this complex problem step by step."
-         }
-     ],
-     thinking={
-         "type": "enabled",
-         "budget_tokens": 1024
-     }
- )
- ```
-
- ### Tool Use (Function Calling)
-
- ```python
- def get_weather(location: str) -> str:
-     return f"Weather in {location}: 72°F and sunny"
-
- message = client.messages.create(
-     model="claude-sonnet-4-20250514",
-     max_tokens=1024,
-     messages=[
-         {
-             "role": "user",
-             "content": "What's the weather in San Francisco?"
-         }
-     ],
-     tools=[
-         {
-             "type": "custom",
-             "name": "get_weather",
-             "description": "Get current weather for a location",
-             "input_schema": {
-                 "type": "object",
-                 "properties": {
-                     "location": {
-                         "type": "string",
-                         "description": "The city and state, e.g. San Francisco, CA"
-                     }
-                 },
-                 "required": ["location"]
-             }
-         }
-     ]
- )
- ```
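When the model decides to call the tool, the response has `stop_reason == "tool_use"` and its content includes a `tool_use` block carrying `id`, `name`, and `input`; you run the function yourself and send the result back as a `tool_result` block in a follow-up user message. A minimal local dispatcher sketch (the `TOOLS` registry and `handle_tool_use` are illustrative names, not SDK APIs):

```python
def get_weather(location: str) -> str:
    return f"Weather in {location}: 72°F and sunny"


# Map tool names (as declared in the `tools` parameter) to local functions
TOOLS = {"get_weather": get_weather}


def handle_tool_use(block: dict) -> dict:
    """Run the requested tool and wrap its output as a tool_result content block."""
    result = TOOLS[block["name"]](**block["input"])
    return {
        "type": "tool_result",
        "tool_use_id": block["id"],
        "content": result,
    }
```

The returned block goes back to the model in the next `messages.create` call as `{"role": "user", "content": [handle_tool_use(block)]}`, appended after the assistant turn that requested the tool.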
-
- ### Streaming Responses
-
- ```python
- stream = client.messages.create(
-     model="claude-sonnet-4-20250514",
-     max_tokens=1024,
-     messages=[
-         {
-             "role": "user",
-             "content": "Tell me a story"
-         }
-     ],
-     stream=True
- )
-
- for event in stream:
-     if event.type == "content_block_delta":
-         print(event.delta.text, end="", flush=True)
- ```
-
- ### Streaming Helpers
-
- ```python
- # Requires AsyncAnthropic and must run inside an async function
- async with client.messages.stream(
-     model="claude-sonnet-4-20250514",
-     max_tokens=1024,
-     messages=[
-         {
-             "role": "user",
-             "content": "Say hello there!"
-         }
-     ]
- ) as stream:
-     async for text in stream.text_stream:
-         print(text, end="", flush=True)
-     print()
-
-     message = await stream.get_final_message()
- ```
-
- ### Token Counting
-
- ```python
- count = client.messages.count_tokens(
-     model="claude-sonnet-4-20250514",
-     messages=[
-         {"role": "user", "content": "Hello, world"}
-     ]
- )
- print(f"Input tokens: {count.input_tokens}")
- ```
-
- ### Message Batches
-
- ```python
- batch = client.messages.batches.create(
-     requests=[
-         {
-             "custom_id": "request-1",
-             "params": {
-                 "model": "claude-sonnet-4-20250514",
-                 "max_tokens": 1024,
-                 "messages": [{"role": "user", "content": "Hello"}]
-             }
-         }
-     ]
- )
- ```
-
- ## Specialized Deployments
-
- ### AWS Bedrock
-
- ```python
- from anthropic import AnthropicBedrock
-
- client = AnthropicBedrock(
-     aws_region="us-east-1",
-     aws_profile="default"
- )
-
- message = client.messages.create(
-     model="anthropic.claude-sonnet-4-20250514-v1:0",
-     max_tokens=1024,
-     messages=[
-         {
-             "role": "user",
-             "content": "Hello!"
-         }
-     ]
- )
- ```
-
- ### Google Vertex AI
-
- ```python
- from anthropic import AnthropicVertex
-
- client = AnthropicVertex()
-
- message = client.messages.create(
-     model="claude-sonnet-4@20250514",
-     max_tokens=1024,
-     messages=[
-         {
-             "role": "user",
-             "content": "Hello!"
-         }
-     ]
- )
- ```
-
- ## Error Handling
-
- ```python
- import anthropic
-
- try:
-     message = client.messages.create(
-         model="claude-sonnet-4-20250514",
-         max_tokens=1024,
-         messages=[
-             {
-                 "role": "user",
-                 "content": "Hello, Claude"
-             }
-         ]
-     )
- except anthropic.APIConnectionError as e:
-     print("Connection error occurred")
- except anthropic.RateLimitError as e:
-     print("Rate limit exceeded")
- except anthropic.APIStatusError as e:
-     print(f"API error: {e.status_code}")
- ```
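The client already retries some failures automatically via `max_retries`, but if you catch `anthropic.RateLimitError` yourself, space the retries out with exponential backoff plus jitter rather than a fixed sleep. A small sketch of the delay schedule (`backoff_delay` is a hypothetical helper, not part of the SDK):

```python
import random


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Full-jitter exponential backoff: a random delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Jitter spreads concurrent clients apart so they do not all retry at the same instant; the cap keeps the worst-case wait bounded.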
-
- ## Configuration Options
-
- ### Retries
-
- ```python
- client = Anthropic(max_retries=3)
-
- client.with_options(max_retries=5).messages.create(...)
- ```
-
- ### Timeouts
-
- ```python
- client = Anthropic(timeout=30.0)
-
- client.with_options(timeout=60.0).messages.create(...)
- ```
-
- ## Useful Links
-
- - **Documentation:** https://docs.anthropic.com/claude/reference/
- - **API Keys:** https://console.anthropic.com/
- - **SDK Repository:** https://github.com/anthropics/anthropic-sdk-python
- - **Rate Limits:** https://docs.anthropic.com/claude/reference/rate-limits
- - **Error Codes:** https://docs.anthropic.com/claude/reference/errors
@@ -1,350 +0,0 @@
- ---
- name: chat
- description: "GPT-4 and ChatGPT API for text generation, chat completions, streaming, function calling, vision, embeddings, and assistants"
- metadata:
-   languages: "javascript"
-   versions: "6.7.0"
-   updated-on: "2025-10-24"
-   source: maintainer
-   tags: "openai,chat,llm,ai"
- ---
-
- # OpenAI API Coding Guidelines (JavaScript/TypeScript)
-
- You are an OpenAI API coding expert. Help me write code that calls the OpenAI API using the official libraries and SDKs.
-
- ## Golden Rule: Use the Correct and Current SDK
-
- Always use the official OpenAI Node.js SDK for all OpenAI API interactions.
-
- - **Library Name:** OpenAI Node.js SDK
- - **NPM Package:** `openai`
- - **JSR Package:** `@openai/openai`
-
- **Installation:**
-
- ```bash
- # NPM
- npm install openai
-
- # JSR (Deno/Node.js)
- deno add jsr:@openai/openai
- npx jsr add @openai/openai
- ```
-
- **Import Patterns:**
-
- ```typescript
- // Correct - ES6 import
- import OpenAI from 'openai';
-
- // Correct - with additional utilities
- import OpenAI, { toFile } from 'openai';
-
- // JSR import for Deno
- import OpenAI from 'jsr:@openai/openai';
- ```
-
- ## Initialization and API Key
-
- The OpenAI library requires creating an `OpenAI` client instance for all API calls.
-
- ```typescript
- import OpenAI from 'openai';
-
- // Uses the OPENAI_API_KEY environment variable automatically
- const client = new OpenAI({
-   apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
- });
-
- // Alternatively, pass the API key directly (shown commented out, since
- // declaring `client` twice in one module is an error):
- // const client = new OpenAI({ apiKey: 'your-api-key-here' });
- ```
-
- ## Primary APIs
-
- ### Responses API (Recommended)
-
- The Responses API is the primary interface for text generation.
-
- ```typescript
- import OpenAI from 'openai';
-
- const client = new OpenAI({
-   apiKey: process.env['OPENAI_API_KEY'],
- });
-
- const response = await client.responses.create({
-   model: 'gpt-4o',
-   instructions: 'You are a coding assistant that talks like a pirate',
-   input: 'Are semicolons optional in JavaScript?',
- });
-
- console.log(response.output_text);
- ```
-
- ### Chat Completions API (Legacy but Supported)
-
- The Chat Completions API remains fully supported for existing applications.
-
- ```typescript
- import OpenAI from 'openai';
-
- const client = new OpenAI({
-   apiKey: process.env['OPENAI_API_KEY'],
- });
-
- const completion = await client.chat.completions.create({
-   model: 'gpt-4o',
-   messages: [
-     { role: 'developer', content: 'Talk like a pirate.' },
-     { role: 'user', content: 'Are semicolons optional in JavaScript?' },
-   ],
- });
-
- console.log(completion.choices[0].message.content);
- ```
-
- ## API Resources Structure
-
- The OpenAI client organizes endpoints into logical resource groupings:
-
- ```typescript
- // Core API resources available on client
- client.completions // Text completions
- client.chat        // Chat completions
- client.embeddings  // Text embeddings
- client.files       // File management
- client.images      // Image generation
- client.audio       // Audio processing
- client.moderations // Content moderation
- client.models      // Model information
- client.fineTuning  // Fine-tuning jobs
- client.graders     // Model evaluation
- ```
-
- ## Streaming Responses
-
- Both the Responses and Chat Completions APIs support streaming for real-time output.
-
- ### Responses API Streaming
-
- ```typescript
- import OpenAI from 'openai';
-
- const client = new OpenAI();
-
- const stream = await client.responses.create({
-   model: 'gpt-4o',
-   input: 'Say "Sheep sleep deep" ten times fast!',
-   stream: true,
- });
-
- for await (const event of stream) {
-   console.log(event);
- }
- ```
-
- ### Chat Completions Streaming
-
- ```typescript
- const stream = await client.chat.completions.create({
-   model: 'gpt-4o',
-   messages: [{ role: 'user', content: 'Count to 10' }],
-   stream: true,
- });
-
- for await (const chunk of stream) {
-   process.stdout.write(chunk.choices[0]?.delta?.content || '');
- }
- ```
-
- ## File Uploads
-
- The library supports multiple file upload formats for various use cases.
-
- ```typescript
- import fs from 'fs';
- import OpenAI, { toFile } from 'openai';
-
- const client = new OpenAI();
-
- // Method 1: Node.js fs.ReadStream (recommended for Node.js)
- await client.files.create({
-   file: fs.createReadStream('input.jsonl'),
-   purpose: 'fine-tune'
- });
-
- // Method 2: Web File API
- await client.files.create({
-   file: new File(['my bytes'], 'input.jsonl'),
-   purpose: 'fine-tune'
- });
-
- // Method 3: Fetch Response
- await client.files.create({
-   file: await fetch('https://somesite/input.jsonl'),
-   purpose: 'fine-tune'
- });
-
- // Method 4: toFile helper utility
- await client.files.create({
-   file: await toFile(Buffer.from('my bytes'), 'input.jsonl'),
-   purpose: 'fine-tune',
- });
- ```
-
-
- ## Advanced Configuration
-
- ### Function Calling (Tools)
-
- ```typescript
- const completion = await client.chat.completions.create({
-   model: 'gpt-4o',
-   messages: [{ role: 'user', content: 'What is the weather like today?' }],
-   tools: [
-     {
-       type: 'function',
-       function: {
-         name: 'get_current_weather',
-         description: 'Get the current weather in a given location',
-         parameters: {
-           type: 'object',
-           properties: {
-             location: {
-               type: 'string',
-               description: 'The city and state, e.g. San Francisco, CA',
-             },
-             unit: { type: 'string', enum: ['celsius', 'fahrenheit'] },
-           },
-           required: ['location'],
-         },
-       },
-     },
-   ],
-   tool_choice: 'auto',
- });
- ```
-
- ### Temperature and Sampling Parameters
-
- Configure model behavior using parameters in the chat completions API:
-
- ```typescript
- const completion = await client.chat.completions.create({
-   model: 'gpt-4o',
-   messages: [{ role: 'user', content: 'Write a creative story' }],
-   temperature: 0.8,       // Higher = more creative (0-2)
-   max_tokens: 1000,       // Maximum response length
-   top_p: 0.9,             // Nucleus sampling
-   frequency_penalty: 0.1, // Reduce repetition
-   presence_penalty: 0.1,  // Encourage new topics
- });
- ```
-
- ### Structured Outputs (JSON Mode)
-
- ```typescript
- const completion = await client.chat.completions.create({
-   model: 'gpt-4o',
-   messages: [
-     // JSON mode requires the word "JSON" to appear somewhere in the prompt
-     { role: 'user', content: 'Extract the name and age as JSON from: "John is 30 years old"' }
-   ],
-   response_format: {
-     type: 'json_object'
-   },
- });
-
- const result = JSON.parse(completion.choices[0].message.content ?? '{}');
- ```
-
- ## Error Handling
-
- The library provides specific error types for different failure scenarios:
-
- ```typescript
- import OpenAI from 'openai';
-
- const client = new OpenAI();
-
- try {
-   const completion = await client.chat.completions.create({
-     model: 'gpt-4o',
-     messages: [{ role: 'user', content: 'Hello!' }],
-   });
- } catch (error) {
-   // Check subclasses before the APIError base class; otherwise the
-   // RateLimitError and AuthenticationError branches are unreachable
-   if (error instanceof OpenAI.RateLimitError) {
-     console.log('Rate limit exceeded');
-   } else if (error instanceof OpenAI.AuthenticationError) {
-     console.log('Invalid API key');
-   } else if (error instanceof OpenAI.APIError) {
-     console.log(error.status);  // HTTP status code
-     console.log(error.name);    // Error name
-     console.log(error.headers); // Response headers
-   } else {
-     console.log('Unexpected error:', error);
-   }
- }
- ```
-
- ## Common Patterns
-
- ### Retry Logic with Exponential Backoff
-
- ```typescript
- async function createCompletionWithRetry(
-   messages: OpenAI.Chat.ChatCompletionMessageParam[],
-   maxRetries = 3,
- ) {
-   for (let attempt = 1; attempt <= maxRetries; attempt++) {
-     try {
-       return await client.chat.completions.create({
-         model: 'gpt-4o',
-         messages,
-       });
-     } catch (error) {
-       if (error instanceof OpenAI.RateLimitError && attempt < maxRetries) {
-         const delay = Math.pow(2, attempt) * 1000; // Exponential backoff
-         await new Promise(resolve => setTimeout(resolve, delay));
-         continue;
-       }
-       throw error;
-     }
-   }
- }
- ```
-
- ### Conversation Management
-
- ```typescript
- class ChatSession {
-   private messages: OpenAI.Chat.ChatCompletionMessageParam[] = [];
-
-   constructor(private client: OpenAI, systemPrompt?: string) {
-     if (systemPrompt) {
-       this.messages.push({ role: 'system', content: systemPrompt });
-     }
-   }
-
-   async sendMessage(content: string) {
-     this.messages.push({ role: 'user', content });
-
-     const completion = await this.client.chat.completions.create({
-       model: 'gpt-4o',
-       messages: this.messages,
-     });
-
-     const response = completion.choices[0].message;
-     this.messages.push(response);
-
-     return response.content;
-   }
- }
- ```
-
- ## Useful Links
-
- - **Documentation:** https://platform.openai.com/docs/api-reference
- - **API Keys:** https://platform.openai.com/api-keys
- - **Models:** https://platform.openai.com/docs/models
- - **Pricing:** https://openai.com/pricing
- - **Rate Limits:** https://platform.openai.com/docs/guides/rate-limits