chub-dev 0.1.0 → 0.1.2-beta.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +55 -0
- package/bin/chub-mcp +2 -0
- package/dist/airtable/docs/database/javascript/DOC.md +1437 -0
- package/dist/airtable/docs/database/python/DOC.md +1735 -0
- package/dist/amplitude/docs/analytics/javascript/DOC.md +1282 -0
- package/dist/amplitude/docs/analytics/python/DOC.md +1199 -0
- package/dist/anthropic/docs/claude-api/javascript/DOC.md +503 -0
- package/dist/anthropic/docs/claude-api/python/DOC.md +389 -0
- package/dist/asana/docs/tasks/DOC.md +1396 -0
- package/dist/assemblyai/docs/transcription/DOC.md +1043 -0
- package/dist/atlassian/docs/confluence/javascript/DOC.md +1347 -0
- package/dist/atlassian/docs/confluence/python/DOC.md +1604 -0
- package/dist/auth0/docs/identity/javascript/DOC.md +968 -0
- package/dist/auth0/docs/identity/python/DOC.md +1199 -0
- package/dist/aws/docs/s3/javascript/DOC.md +1773 -0
- package/dist/aws/docs/s3/python/DOC.md +1807 -0
- package/dist/binance/docs/trading/javascript/DOC.md +1315 -0
- package/dist/binance/docs/trading/python/DOC.md +1454 -0
- package/dist/braintree/docs/gateway/javascript/DOC.md +1278 -0
- package/dist/braintree/docs/gateway/python/DOC.md +1179 -0
- package/dist/chromadb/docs/embeddings-db/javascript/DOC.md +1263 -0
- package/dist/chromadb/docs/embeddings-db/python/DOC.md +1707 -0
- package/dist/clerk/docs/auth/javascript/DOC.md +1220 -0
- package/dist/clerk/docs/auth/python/DOC.md +274 -0
- package/dist/cloudflare/docs/workers/javascript/DOC.md +918 -0
- package/dist/cloudflare/docs/workers/python/DOC.md +994 -0
- package/dist/cockroachdb/docs/distributed-db/DOC.md +1500 -0
- package/dist/cohere/docs/llm/DOC.md +1335 -0
- package/dist/datadog/docs/monitoring/javascript/DOC.md +1740 -0
- package/dist/datadog/docs/monitoring/python/DOC.md +1815 -0
- package/dist/deepgram/docs/speech/javascript/DOC.md +885 -0
- package/dist/deepgram/docs/speech/python/DOC.md +685 -0
- package/dist/deepl/docs/translation/javascript/DOC.md +887 -0
- package/dist/deepl/docs/translation/python/DOC.md +944 -0
- package/dist/deepseek/docs/llm/DOC.md +1220 -0
- package/dist/directus/docs/headless-cms/javascript/DOC.md +1128 -0
- package/dist/directus/docs/headless-cms/python/DOC.md +1276 -0
- package/dist/discord/docs/bot/javascript/DOC.md +1090 -0
- package/dist/discord/docs/bot/python/DOC.md +1130 -0
- package/dist/elasticsearch/docs/search/DOC.md +1634 -0
- package/dist/elevenlabs/docs/text-to-speech/javascript/DOC.md +336 -0
- package/dist/elevenlabs/docs/text-to-speech/python/DOC.md +552 -0
- package/dist/firebase/docs/auth/DOC.md +1015 -0
- package/dist/gemini/docs/genai/javascript/DOC.md +691 -0
- package/dist/gemini/docs/genai/python/DOC.md +555 -0
- package/dist/github/docs/octokit/DOC.md +1560 -0
- package/dist/google/docs/bigquery/javascript/DOC.md +1688 -0
- package/dist/google/docs/bigquery/python/DOC.md +1503 -0
- package/dist/hubspot/docs/crm/javascript/DOC.md +1805 -0
- package/dist/hubspot/docs/crm/python/DOC.md +2033 -0
- package/dist/huggingface/docs/transformers/DOC.md +948 -0
- package/dist/intercom/docs/messaging/javascript/DOC.md +1844 -0
- package/dist/intercom/docs/messaging/python/DOC.md +1797 -0
- package/dist/jira/docs/issues/javascript/DOC.md +1420 -0
- package/dist/jira/docs/issues/python/DOC.md +1492 -0
- package/dist/kafka/docs/streaming/javascript/DOC.md +1671 -0
- package/dist/kafka/docs/streaming/python/DOC.md +1464 -0
- package/dist/landingai-ade/docs/api/DOC.md +620 -0
- package/dist/landingai-ade/docs/sdk/python/DOC.md +489 -0
- package/dist/landingai-ade/docs/sdk/typescript/DOC.md +542 -0
- package/dist/landingai-ade/skills/SKILL.md +489 -0
- package/dist/launchdarkly/docs/feature-flags/javascript/DOC.md +1191 -0
- package/dist/launchdarkly/docs/feature-flags/python/DOC.md +1671 -0
- package/dist/linear/docs/tracker/DOC.md +1554 -0
- package/dist/livekit/docs/realtime/javascript/DOC.md +303 -0
- package/dist/livekit/docs/realtime/python/DOC.md +163 -0
- package/dist/mailchimp/docs/marketing/DOC.md +1420 -0
- package/dist/meilisearch/docs/search/DOC.md +1241 -0
- package/dist/microsoft/docs/onedrive/javascript/DOC.md +1421 -0
- package/dist/microsoft/docs/onedrive/python/DOC.md +1549 -0
- package/dist/mongodb/docs/atlas/DOC.md +2041 -0
- package/dist/notion/docs/workspace-api/javascript/DOC.md +1435 -0
- package/dist/notion/docs/workspace-api/python/DOC.md +1400 -0
- package/dist/okta/docs/identity/javascript/DOC.md +1171 -0
- package/dist/okta/docs/identity/python/DOC.md +1401 -0
- package/dist/openai/docs/chat/javascript/DOC.md +407 -0
- package/dist/openai/docs/chat/python/DOC.md +568 -0
- package/dist/paypal/docs/checkout/DOC.md +278 -0
- package/dist/pinecone/docs/sdk/javascript/DOC.md +984 -0
- package/dist/pinecone/docs/sdk/python/DOC.md +1395 -0
- package/dist/plaid/docs/banking/javascript/DOC.md +1163 -0
- package/dist/plaid/docs/banking/python/DOC.md +1203 -0
- package/dist/playwright-community/skills/login-flows/SKILL.md +108 -0
- package/dist/postmark/docs/transactional-email/DOC.md +1168 -0
- package/dist/prisma/docs/orm/javascript/DOC.md +1419 -0
- package/dist/prisma/docs/orm/python/DOC.md +1317 -0
- package/dist/qdrant/docs/vector-search/javascript/DOC.md +1221 -0
- package/dist/qdrant/docs/vector-search/python/DOC.md +1653 -0
- package/dist/rabbitmq/docs/message-queue/javascript/DOC.md +1193 -0
- package/dist/rabbitmq/docs/message-queue/python/DOC.md +1243 -0
- package/dist/razorpay/docs/payments/javascript/DOC.md +1219 -0
- package/dist/razorpay/docs/payments/python/DOC.md +1330 -0
- package/dist/redis/docs/key-value/javascript/DOC.md +1851 -0
- package/dist/redis/docs/key-value/python/DOC.md +2054 -0
- package/dist/registry.json +2817 -0
- package/dist/replicate/docs/model-hosting/DOC.md +1318 -0
- package/dist/resend/docs/email/DOC.md +1271 -0
- package/dist/salesforce/docs/crm/javascript/DOC.md +1241 -0
- package/dist/salesforce/docs/crm/python/DOC.md +1183 -0
- package/dist/search-index.json +1 -0
- package/dist/sendgrid/docs/email-api/javascript/DOC.md +371 -0
- package/dist/sendgrid/docs/email-api/python/DOC.md +656 -0
- package/dist/sentry/docs/error-tracking/javascript/DOC.md +1073 -0
- package/dist/sentry/docs/error-tracking/python/DOC.md +1309 -0
- package/dist/shopify/docs/storefront/DOC.md +457 -0
- package/dist/slack/docs/workspace/javascript/DOC.md +933 -0
- package/dist/slack/docs/workspace/python/DOC.md +271 -0
- package/dist/square/docs/payments/javascript/DOC.md +1855 -0
- package/dist/square/docs/payments/python/DOC.md +1728 -0
- package/dist/stripe/docs/api/DOC.md +1727 -0
- package/dist/stripe/docs/payments/DOC.md +1726 -0
- package/dist/stytch/docs/auth/javascript/DOC.md +1813 -0
- package/dist/stytch/docs/auth/python/DOC.md +1962 -0
- package/dist/supabase/docs/client/DOC.md +1606 -0
- package/dist/twilio/docs/messaging/python/DOC.md +469 -0
- package/dist/twilio/docs/messaging/typescript/DOC.md +946 -0
- package/dist/vercel/docs/platform/DOC.md +1940 -0
- package/dist/weaviate/docs/vector-db/javascript/DOC.md +1268 -0
- package/dist/weaviate/docs/vector-db/python/DOC.md +1388 -0
- package/dist/zendesk/docs/support/javascript/DOC.md +2150 -0
- package/dist/zendesk/docs/support/python/DOC.md +2297 -0
- package/package.json +22 -6
- package/skills/get-api-docs/SKILL.md +84 -0
- package/src/commands/annotate.js +83 -0
- package/src/commands/build.js +12 -1
- package/src/commands/feedback.js +150 -0
- package/src/commands/get.js +83 -42
- package/src/commands/search.js +7 -0
- package/src/index.js +43 -17
- package/src/lib/analytics.js +90 -0
- package/src/lib/annotations.js +57 -0
- package/src/lib/bm25.js +170 -0
- package/src/lib/cache.js +69 -6
- package/src/lib/config.js +8 -3
- package/src/lib/identity.js +99 -0
- package/src/lib/registry.js +103 -20
- package/src/lib/telemetry.js +86 -0
- package/src/mcp/server.js +177 -0
- package/src/mcp/tools.js +251 -0
@@ -0,0 +1,555 @@
---
name: genai
description: "Google Gemini GenAI SDK for multimodal LLM interactions in Python"
metadata:
  languages: "python"
  versions: "1.56.0"
  updated-on: "2026-03-01"
  source: maintainer
  tags: "gemini,google,genai,llm,multimodal"
---

# Gemini API Coding Guidelines (Python)

You are a Gemini API coding expert. Help me write code that calls the Gemini
API through the official libraries and SDKs.

Please follow these guidelines when generating code.

You can find the official SDK documentation and code samples here:
https://ai.google.dev/gemini-api/docs

## Golden Rule: Use the Correct and Current SDK

Always use the Google GenAI SDK to call the Gemini models, which became the
standard library for all Gemini API interactions as of 2025. Do not use legacy
libraries and SDKs.

- **Library Name:** Google GenAI SDK
- **Python Package:** `google-genai`
- **Legacy Library:** `google-generativeai` is deprecated.

**Installation:**

- **Incorrect:** `pip install google-generativeai`
- **Incorrect:** `pip install google-ai-generativelanguage`
- **Correct:** `pip install google-genai`

**APIs and Usage** (a combined sketch follows this list):

- **Incorrect:** `import google.generativeai as genai` -> **Correct:** `from google import genai`
- **Incorrect:** `from google.ai import generativelanguage_v1` -> **Correct:** `from google import genai`
- **Incorrect:** `from google.generativeai` -> **Correct:** `from google import genai`
- **Incorrect:** `from google.generativeai import types` -> **Correct:** `from google.genai import types`
- **Incorrect:** `genai.configure(api_key=...)` -> **Correct:** `client = genai.Client(api_key="...")`
- **Incorrect:** `model = genai.GenerativeModel(...)`
- **Incorrect:** `model.generate_content(...)` -> **Correct:** `client.models.generate_content(...)`
- **Incorrect:** `response = model.generate_content(..., stream=True)` -> **Correct:** `client.models.generate_content_stream(...)`
- **Incorrect:** `genai.GenerationConfig(...)` -> **Correct:** `types.GenerateContentConfig(...)`
- **Incorrect:** `safety_settings={...}` -> **Correct:** Use `safety_settings` inside a `GenerateContentConfig` object.
- **Incorrect:** `from google.api_core.exceptions import GoogleAPIError` -> **Correct:** `from google.genai.errors import APIError`
- **Incorrect:** `types.ResponseModality.TEXT`
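
Putting those mappings together, here is a minimal sketch of the current-SDK
call shape, including the new error type from the list above (the prompt string
is only illustrative, and the `try`/`except` is optional):

```python
from google import genai
from google.genai import types
from google.genai.errors import APIError

# Replaces genai.configure(api_key=...) plus genai.GenerativeModel(...).
client = genai.Client()

try:
    # Replaces model.generate_content(...) on a GenerativeModel instance.
    response = client.models.generate_content(
        model='gemini-2.5-flash',
        contents='Say hello in one short sentence.',
        config=types.GenerateContentConfig(),  # optional; replaces genai.GenerationConfig(...)
    )
    print(response.text)
except APIError as e:
    # Replaces handling of google.api_core.exceptions.GoogleAPIError.
    print(f"Gemini API error: {e}")
```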

## Initialization and API key

The `google-genai` library requires creating a client object for all API calls.

- Always use `client = genai.Client()` to create a client object (see the short
  sketch below).
- Set the `GEMINI_API_KEY` environment variable; it will be picked up
  automatically.
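
A minimal sketch of both initialization styles (the explicit key string is a
placeholder, not a real credential):

```python
from google import genai

# Reads GEMINI_API_KEY from the environment automatically.
client = genai.Client()

# Or pass the key explicitly, e.g. when it comes from a secret manager.
client = genai.Client(api_key="YOUR_GEMINI_API_KEY")
```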

## Models

- By default, use the following models as of March 2026:
  - **General Text & Multimodal Tasks:** `gemini-2.5-flash`
  - **Coding and Complex Reasoning Tasks:** `gemini-2.5-pro`
  - **Image Generation Tasks:** `imagen-4.0-fast-generate-001`,
    `imagen-4.0-generate-001` or `imagen-4.0-ultra-generate-001`
  - **Image Editing Tasks:** `gemini-2.5-flash-image-preview`
  - **Video Generation Tasks:** `veo-3.0-fast-generate-preview` or
    `veo-3.0-generate-preview`

- It is also acceptable to use the following models if explicitly requested by
  the user:
  - **Gemini 2.0 Series:** `gemini-2.0-flash`, `gemini-2.0-pro`

- Do not use the following deprecated models (or their variants like
  `gemini-1.5-flash-latest`):
  - **Prohibited:** `gemini-1.5-flash`
  - **Prohibited:** `gemini-1.5-pro`
  - **Prohibited:** `gemini-pro`

## Basic Inference (Text Generation)

Here's how to generate a response from a text prompt.

```python
from google import genai

client = genai.Client()

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='why is the sky blue?',
)

print(response.text)  # output is often markdown
```

Multimodal inputs are supported by passing a PIL Image in the `contents` list:

```python
from google import genai
from PIL import Image

client = genai.Client()

image = Image.open(img_path)  # img_path: path to a local image file

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents=[image, "explain that image"],
)

print(response.text)  # the output is often markdown
```

You can also use `types.Part.from_bytes` to pass a variety of data types
(images, audio, video, PDF).

```python
from google import genai
from google.genai import types

client = genai.Client()

with open('path/to/small-sample.jpg', 'rb') as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents=[
        types.Part.from_bytes(
            data=image_bytes,
            mime_type='image/jpeg',
        ),
        'Caption this image.'
    ]
)

print(response.text)
```

For larger files, use `client.files.upload`:

```python
f = client.files.upload(file=img_path)

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents=[f, "can you describe this image?"]
)
```

You can delete files after use like this:

```python
myfile = client.files.upload(file='path/to/sample.mp3')
client.files.delete(name=myfile.name)
```

## Additional Capabilities and Configurations

Below are examples of advanced configurations.

### Thinking

Gemini 2.5 series models support thinking, which is on by default for
`gemini-2.5-flash`. It can be adjusted with the `thinking_budget` setting.
Setting it to zero turns thinking off and reduces latency.

```python
from google import genai
from google.genai import types

client = genai.Client()

client.models.generate_content(
    model='gemini-2.5-flash',
    contents="What is AI?",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=0
        )
    )
)
```

IMPORTANT NOTES:

- The minimum thinking budget for `gemini-2.5-pro` is `128`, and thinking cannot
  be turned off for that model (see the sketch below).
- Only Gemini 2.5 series models support thinking and thinking budgets. Do not
  try to adjust thinking budgets for other models (such as `gemini-2.0-flash`
  or `gemini-2.0-pro`); doing so will cause errors.
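
As a minimal sketch of the first note above, the smallest allowed budget on the
pro model looks like this (the prompt string is only illustrative):

```python
from google import genai
from google.genai import types

client = genai.Client()

# gemini-2.5-pro cannot turn thinking off; 128 is its minimum budget.
response = client.models.generate_content(
    model='gemini-2.5-pro',
    contents="Summarize the rules of chess in one paragraph.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=128)
    ),
)
print(response.text)
```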

### System instructions

Use system instructions to guide the model's behavior.

```python
from google import genai
from google.genai import types

client = genai.Client()

config = types.GenerateContentConfig(
    system_instruction="You are a pirate",
)

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='Tell me a pirate joke',
    config=config,
)

print(response.text)
```

### Hyperparameters

You can also set `temperature` or `max_output_tokens` within
`types.GenerateContentConfig` (a short sketch follows).

**Avoid** setting `max_output_tokens`, `topP`, `topK` unless explicitly
requested by the user.
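
For example, a minimal sketch of setting `temperature` this way (the value is
only illustrative):

```python
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='Write a tagline for a coffee shop.',
    config=types.GenerateContentConfig(temperature=0.2),
)
print(response.text)
```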

### Safety configurations

Avoid setting safety configurations unless explicitly requested by the user. If
the user explicitly asks for them, here is a sample:

```python
from google import genai
from google.genai import types
from PIL import Image

client = genai.Client()

img = Image.open("/path/to/img")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=['Do these look store-bought or homemade?', img],
    config=types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
                category=types.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
                threshold=types.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
            ),
        ]
    )
)

print(response.text)
```

### Streaming

It is possible to stream responses to reduce user-perceived latency:

```python
from google import genai

client = genai.Client()

response = client.models.generate_content_stream(
    model="gemini-2.5-flash",
    contents=["Explain how AI works"]
)
for chunk in response:
    print(chunk.text, end="")
```

### Chat

For multi-turn conversations, use the `chats` service to maintain conversation
history.

```python
from google import genai

client = genai.Client()
chat = client.chats.create(model="gemini-2.5-flash")

response = chat.send_message("I have 2 dogs in my house.")
print(response.text)

response = chat.send_message("How many paws are in my house?")
print(response.text)

for message in chat.get_history():
    print(f'role - {message.role}', end=": ")
    print(message.parts[0].text)
```

### Structured outputs

Use structured outputs to force the model to return a response that conforms to
a specific Pydantic schema.

```python
from google import genai
from google.genai import types
from pydantic import BaseModel

client = genai.Client()

# Define the desired output structure using Pydantic
class Recipe(BaseModel):
    recipe_name: str
    description: str
    ingredients: list[str]
    steps: list[str]

# Request the model to populate the schema
response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents="Provide a classic recipe for chocolate chip cookies.",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=Recipe,
    ),
)

# The response.text will be a valid JSON string matching the Recipe schema
print(response.text)
```

### Function Calling (Tools)

You can provide the model with tools (functions) it can use to bring in external
information to answer a question or act on a request outside the model.

```python
from google import genai
from google.genai import types

client = genai.Client()

# Define a function that the model can call (to access external information)
def get_current_weather(city: str) -> str:
    """Returns the current weather in a given city. For this example, it's hardcoded."""
    if "boston" in city.lower():
        return "The weather in Boston is 15°C and sunny."
    else:
        return f"Weather data for {city} is not available."

# Make the function available to the model as a tool
response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents="What is the weather like in Boston?",
    config=types.GenerateContentConfig(
        tools=[get_current_weather]
    ),
)

# The model may respond with a request to call the function
if response.function_calls:
    print("Function calls requested by the model:")
    for function_call in response.function_calls:
        print(f"- Function: {function_call.name}")
        print(f"- Args: {dict(function_call.args)}")
else:
    print("The model responded directly:")
    print(response.text)
```

### Generate Images

Here's how to generate images using the Imagen models. Start with the fast model
as it should cover most use-cases, and move to the standard or ultra models for
advanced use-cases.

```python
from google import genai
from PIL import Image
from io import BytesIO

client = genai.Client()

result = client.models.generate_images(
    model='imagen-4.0-fast-generate-001',
    prompt="Image of a cat",
    config=dict(
        number_of_images=1,  # 1 to 4 (always 1 for the ultra model)
        output_mime_type="image/jpeg",
        person_generation="ALLOW_ADULT",  # 'ALLOW_ALL' (but not in Europe/MENA), 'DONT_ALLOW' or 'ALLOW_ADULT'
        aspect_ratio="1:1",  # "1:1", "3:4", "4:3", "9:16", or "16:9"
    )
)

for generated_image in result.generated_images:
    image = Image.open(BytesIO(generated_image.image.image_bytes))
```

### Edit images

Editing images is better done using the Gemini native image generation model,
and it is recommended to use chat mode. Configs are not supported for this model
(except modality).

```python
from google import genai
from PIL import Image
from io import BytesIO

client = genai.Client()

prompt = """
Create a picture of my cat eating a nano-banana in a fancy restaurant under the gemini constellation
"""
image = Image.open('/path/to/image.png')

# Create the chat
chat = client.chats.create(model="gemini-2.5-flash-image-preview")
# Send the image and ask for it to be edited
response = chat.send_message([prompt, image])

# Get the text and the image generated
for i, part in enumerate(response.candidates[0].content.parts):
    if part.text is not None:
        print(part.text)
    elif part.inline_data is not None:
        image = Image.open(BytesIO(part.inline_data.data))
        image.save(f"generated_image_{i}.png")  # Multiple images can be generated

# Continue iterating
chat.send_message("Can you make it a bananas foster?")
```

### Generate Videos

Here's how to generate videos using the Veo models. Usage of Veo can be costly,
so after generating code for it, give the user a heads-up to check Veo pricing.
Start with the fast model since the result quality is usually sufficient, and
swap to the larger model if needed.

```python
import time
from google import genai
from google.genai import types
from PIL import Image

client = genai.Client()

PIL_image = Image.open("path/to/image.png")  # Optional

operation = client.models.generate_videos(
    model="veo-3.0-fast-generate-preview",
    prompt="Panning wide shot of a calico kitten sleeping in the sunshine",
    image=PIL_image,
    config=types.GenerateVideosConfig(
        person_generation="dont_allow",  # "dont_allow" or "allow_adult"
        aspect_ratio="16:9",  # "16:9" or "9:16"
        number_of_videos=1,  # supported value is 1-4, use 1 by default
        duration_seconds=8,  # supported value is 5-8
    ),
)

while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

for n, generated_video in enumerate(operation.response.generated_videos):
    client.files.download(file=generated_video.video)  # just file=, no need for path= as it doesn't save yet
    generated_video.video.save(f"video{n}.mp4")  # saves the video
```

### Search Grounding

Google Search can be used as a tool for grounding queries with up-to-date
information from the web.

**Correct**

```python
from google import genai

client = genai.Client()

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents="What was the score of the latest Olympique Lyonnais game?",
    config={"tools": [{"google_search": {}}]},
)

# Response
print(f"Response:\n {response.text}")
# Search details
print(f"Search Query: {response.candidates[0].grounding_metadata.web_search_queries}")
# Urls used for grounding
print(f"Search Pages: {', '.join([site.web.title for site in response.candidates[0].grounding_metadata.grounding_chunks])}")
```

The output `response.text` will likely not be in JSON format; do not attempt to
parse it as JSON.

### Content and Part Hierarchy

While the simpler API call is often sufficient, you may run into scenarios where
you need to work directly with the underlying `Content` and `Part` objects for
more explicit control. These are the fundamental building blocks of the
`generate_content` API.

For instance, the following simple API call:

```python
from google import genai

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="How does AI work?"
)
print(response.text)
```

is effectively a shorthand for this more explicit structure:

```python
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Content(role="user", parts=[types.Part.from_text(text="How does AI work?")]),
    ]
)
print(response.text)
```

## Other APIs

The list of APIs and capabilities above is not comprehensive. If users ask you
to generate code for a capability not covered above, refer them to
ai.google.dev/gemini-api/docs.

## Useful Links

- Documentation: ai.google.dev/gemini-api/docs
- API Keys and Authentication: ai.google.dev/gemini-api/docs/api-key
- Models: ai.google.dev/models
- API Pricing: ai.google.dev/pricing
- Rate Limits: ai.google.dev/rate-limits