google-genai 1.0.0rc0__py3-none-any.whl → 1.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
Metadata-Version: 2.2
Name: google-genai
- Version: 1.0.0rc0
+ Version: 1.1.0
Summary: GenAI Python SDK
Author-email: Google LLC <googleapis-packages@google.com>
License: Apache-2.0
@@ -34,7 +34,7 @@ Requires-Dist: websockets<15.0dev,>=13.0

-----

- Google Gen AI Python SDK provides an interface for developers to integrate Google's generative models into their Python applications. It supports the [Gemini Developer API](https://ai.google.dev/gemini-api/docs) and [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview) APIs. This is an early release. API is subject to change. Please do not use this SDK in production environments at this stage.
+ Google Gen AI Python SDK provides an interface for developers to integrate Google's generative models into their Python applications. It supports the [Gemini Developer API](https://ai.google.dev/gemini-api/docs) and [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview) APIs.

## Installation

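The import lines are unchanged between the two versions, so the diff elides them; for context, the client examples that follow assume the SDK's standard imports:

```python
from google import genai
from google.genai import types
```
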
@@ -56,16 +56,33 @@ different services ([Gemini Developer API](https://ai.google.dev/gemini-api/docs

```python
# Only run this block for Gemini Developer API
- client = genai.Client(api_key="GEMINI_API_KEY")
+ client = genai.Client(api_key='GEMINI_API_KEY')
```

```python
# Only run this block for Vertex AI API
client = genai.Client(
-     vertexai=True, project="your-project-id", location="us-central1"
+     vertexai=True, project='your-project-id', location='us-central1'
)
```

+ To set the API version, use `http_options`. For example, to set the API version
+ to `v1` for Vertex AI:
+
+ ```python
+ client = genai.Client(
+     vertexai=True, project='your-project-id', location='us-central1',
+     http_options={'api_version': 'v1'}
+ )
+ ```
+
+ To set the API version to `v1alpha` for the Gemini API:
+
+ ```python
+ client = genai.Client(api_key='GEMINI_API_KEY',
+                       http_options={'api_version': 'v1alpha'})
+ ```
+
## Types

Parameter types can be specified as either dictionaries (`TypedDict`) or
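The examples in this README use the two spellings interchangeably; a minimal sketch of the equivalence (the model name and temperature are chosen only for illustration):

```python
# Plain dictionary form of a config parameter.
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
    config={'temperature': 0.3},
)

# Equivalent typed form using the types module.
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
    config=types.GenerateContentConfig(temperature=0.3),
)
```
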
@@ -82,7 +99,7 @@ The `client.models` module exposes model inferencing and model getters.

```python
response = client.models.generate_content(
-     model="gemini-2.0-flash-exp", contents="why is the sky blue?"
+     model='gemini-2.0-flash-001', contents='why is the sky blue?'
)
print(response.text)
```
@@ -97,22 +114,53 @@ download the file in console.
python code.

```python
- file = client.files.upload(path="a11.txt")
+ file = client.files.upload(path='a11.txt')
response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents=["Could you summarize this file?", file]
+     model='gemini-2.0-flash-001',
+     contents=['Could you summarize this file?', file]
)
print(response.text)
```

+ #### How to structure `contents`
+ There are several ways to structure the `contents` in your request.
+
+ Provide a single string as shown in the text example above:
+
+ ```python
+ contents='Can you recommend some things to do in Boston and New York in the winter?'
+ ```
+
+ Provide a single `Content` instance with multiple `Part` instances:
+
+ ```python
+ contents=types.Content(parts=[
+     types.Part.from_text(text='Can you recommend some things to do in Boston in the winter?'),
+     types.Part.from_text(text='Can you recommend some things to do in New York in the winter?')
+ ], role='user')
+ ```
+
+ When sending more than one input type, provide a list with multiple `Content`
+ instances:
+
+ ```python
+ contents=[
+     'What is this a picture of?',
+     types.Part.from_uri(
+         file_uri='gs://generativeai-downloads/images/scones.jpg',
+         mime_type='image/jpeg',
+     ),
+ ],
+ ```
+
### System Instructions and Other Configs

```python
response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="high",
+     model='gemini-2.0-flash-001',
+     contents='high',
    config=types.GenerateContentConfig(
-         system_instruction="I say high, you say low",
+         system_instruction='I say high, you say low',
        temperature=0.3,
    ),
)
@@ -126,8 +174,8 @@ dictionaries. You can get the type from `google.genai.types`.

```python
response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents=types.Part.from_text(text="Why is the sky blue?"),
+     model='gemini-2.0-flash-001',
+     contents=types.Part.from_text(text='Why is the sky blue?'),
    config=types.GenerateContentConfig(
        temperature=0,
        top_p=0.95,
@@ -135,7 +183,7 @@ response = client.models.generate_content(
        candidate_count=1,
        seed=5,
        max_output_tokens=100,
-         stop_sequences=["STOP!"],
+         stop_sequences=['STOP!'],
        presence_penalty=0.0,
        frequency_penalty=0.0,
    ),
@@ -154,7 +202,7 @@ for model in client.models.list():
```

```python
- pager = client.models.list(config={"page_size": 10})
+ pager = client.models.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
@@ -169,7 +217,7 @@ async for job in await client.aio.models.list():
```

```python
- async_pager = await client.aio.models.list(config={"page_size": 10})
+ async_pager = await client.aio.models.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
@@ -180,13 +228,13 @@ print(async_pager[0])

```python
response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="Say something bad.",
+     model='gemini-2.0-flash-001',
+     contents='Say something bad.',
    config=types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
-                 category="HARM_CATEGORY_HATE_SPEECH",
-                 threshold="BLOCK_ONLY_HIGH",
+                 category='HARM_CATEGORY_HATE_SPEECH',
+                 threshold='BLOCK_ONLY_HIGH',
            )
        ]
    ),
@@ -208,12 +256,12 @@ def get_current_weather(location: str) -> str:
    Args:
        location: The city and state, e.g. San Francisco, CA
    """
-     return "sunny"
+     return 'sunny'


response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="What is the weather like in Boston?",
+     model='gemini-2.0-flash-001',
+     contents='What is the weather like in Boston?',
    config=types.GenerateContentConfig(tools=[get_current_weather]),
)

@@ -230,25 +278,25 @@ Then you will receive a function call part in the response.

```python
function = types.FunctionDeclaration(
-     name="get_current_weather",
-     description="Get the current weather in a given location",
+     name='get_current_weather',
+     description='Get the current weather in a given location',
    parameters=types.Schema(
-         type="OBJECT",
+         type='OBJECT',
        properties={
-             "location": types.Schema(
-                 type="STRING",
-                 description="The city and state, e.g. San Francisco, CA",
+             'location': types.Schema(
+                 type='STRING',
+                 description='The city and state, e.g. San Francisco, CA',
            ),
        },
-         required=["location"],
+         required=['location'],
    ),
)

tool = types.Tool(function_declarations=[function])

response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="What is the weather like in Boston?",
+     model='gemini-2.0-flash-001',
+     contents='What is the weather like in Boston?',
    config=types.GenerateContentConfig(tools=[tool]),
)

@@ -262,33 +310,34 @@ The following example shows how to do it for a simple function invocation.

```python
user_prompt_content = types.Content(
-     role="user",
-     parts=[types.Part.from_text(text="What is the weather like in Boston?")],
+     role='user',
+     parts=[types.Part.from_text(text='What is the weather like in Boston?')],
)
function_call_part = response.function_calls[0]
+ function_call_content = response.candidates[0].content


try:
    function_result = get_current_weather(
        **function_call_part.function_call.args
    )
-     function_response = {"result": function_result}
+     function_response = {'result': function_result}
except (
    Exception
) as e:  # instead of raising the exception, you can let the model handle it
-     function_response = {"error": str(e)}
+     function_response = {'error': str(e)}


function_response_part = types.Part.from_function_response(
-     name=function_call_part.function_call.name,
+     name=function_call_part.name,
    response=function_response,
)
function_response_content = types.Content(
-     role="tool", parts=[function_response_part]
+     role='tool', parts=[function_response_part]
)

response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
+     model='gemini-2.0-flash-001',
    contents=[
        user_prompt_content,
        function_call_content,
@@ -302,6 +351,66 @@ response = client.models.generate_content(
print(response.text)
```

+ #### Function calling with `ANY` tools config mode
+
+ If you configure function calling mode to be `ANY`, then the model will always
+ return function call parts. If you also pass a Python function as a tool, by
+ default the SDK will perform automatic function calling until the number of
+ remote calls exceeds the maximum allowed for automatic function calling
+ (10 by default).
+
+ If you'd like to disable automatic function calling in `ANY` mode:
+
+ ```python
+ def get_current_weather(location: str) -> str:
+     """Returns the current weather.
+
+     Args:
+         location: The city and state, e.g. San Francisco, CA
+     """
+     return "sunny"
+
+ response = client.models.generate_content(
+     model="gemini-2.0-flash-001",
+     contents="What is the weather like in Boston?",
+     config=types.GenerateContentConfig(
+         tools=[get_current_weather],
+         automatic_function_calling=types.AutomaticFunctionCallingConfig(
+             disable=True
+         ),
+         tool_config=types.ToolConfig(
+             function_calling_config=types.FunctionCallingConfig(mode='ANY')
+         ),
+     ),
+ )
+ ```
+
+ If you'd like to allow `x` turns of automatic function calling, configure the
+ maximum remote calls to be `x + 1`. For example, to allow exactly one turn:
+
+ ```python
+ def get_current_weather(location: str) -> str:
+     """Returns the current weather.
+
+     Args:
+         location: The city and state, e.g. San Francisco, CA
+     """
+     return "sunny"
+
+ response = client.models.generate_content(
+     model="gemini-2.0-flash-001",
+     contents="What is the weather like in Boston?",
+     config=types.GenerateContentConfig(
+         tools=[get_current_weather],
+         automatic_function_calling=types.AutomaticFunctionCallingConfig(
+             maximum_remote_calls=2
+         ),
+         tool_config=types.ToolConfig(
+             function_calling_config=types.FunctionCallingConfig(mode='ANY')
+         ),
+     ),
+ )
+ ```
### JSON Response Schema

#### Pydantic Model Schema support
@@ -323,10 +432,10 @@ class CountryInfo(BaseModel):


response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="Give me information for the United States.",
+     model='gemini-2.0-flash-001',
+     contents='Give me information for the United States.',
    config=types.GenerateContentConfig(
-         response_mime_type="application/json",
+         response_mime_type='application/json',
        response_schema=CountryInfo,
    ),
)
@@ -335,30 +444,30 @@ print(response.text)

```python
response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="Give me information for the United States.",
+     model='gemini-2.0-flash-001',
+     contents='Give me information for the United States.',
    config=types.GenerateContentConfig(
-         response_mime_type="application/json",
+         response_mime_type='application/json',
        response_schema={
-             "required": [
-                 "name",
-                 "population",
-                 "capital",
-                 "continent",
-                 "gdp",
-                 "official_language",
-                 "total_area_sq_mi",
+             'required': [
+                 'name',
+                 'population',
+                 'capital',
+                 'continent',
+                 'gdp',
+                 'official_language',
+                 'total_area_sq_mi',
            ],
-             "properties": {
-                 "name": {"type": "STRING"},
-                 "population": {"type": "INTEGER"},
-                 "capital": {"type": "STRING"},
-                 "continent": {"type": "STRING"},
-                 "gdp": {"type": "INTEGER"},
-                 "official_language": {"type": "STRING"},
-                 "total_area_sq_mi": {"type": "INTEGER"},
+             'properties': {
+                 'name': {'type': 'STRING'},
+                 'population': {'type': 'INTEGER'},
+                 'capital': {'type': 'STRING'},
+                 'continent': {'type': 'STRING'},
+                 'gdp': {'type': 'INTEGER'},
+                 'official_language': {'type': 'STRING'},
+                 'total_area_sq_mi': {'type': 'INTEGER'},
            },
-             "type": "OBJECT",
+             'type': 'OBJECT',
        },
    ),
)
@@ -381,7 +490,7 @@ class InstrumentEnum(Enum):
    KEYBOARD = 'Keyboard'

response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
+     model='gemini-2.0-flash-001',
    contents='What instrument plays multiple notes at once?',
    config={
        'response_mime_type': 'text/x.enum',
@@ -406,7 +515,7 @@ class InstrumentEnum(Enum):
    KEYBOARD = 'Keyboard'

response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
+     model='gemini-2.0-flash-001',
    contents='What instrument plays multiple notes at once?',
    config={
        'response_mime_type': 'application/json',
@@ -422,9 +531,9 @@ print(response.text)
```python
for chunk in client.models.generate_content_stream(
-     model="gemini-2.0-flash-exp", contents="Tell me a story in 300 words."
+     model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
):
-     print(chunk.text, end="")
+     print(chunk.text, end='')
```

#### Streaming for image content

@@ -434,35 +543,35 @@ you can use the `from_uri` class method to create a `Part` object.

```python
for chunk in client.models.generate_content_stream(
-     model="gemini-2.0-flash-exp",
+     model='gemini-2.0-flash-001',
    contents=[
-         "What is this image about?",
+         'What is this image about?',
        types.Part.from_uri(
-             file_uri="gs://generativeai-downloads/images/scones.jpg",
-             mime_type="image/jpeg",
+             file_uri='gs://generativeai-downloads/images/scones.jpg',
+             mime_type='image/jpeg',
        ),
    ],
):
-     print(chunk.text, end="")
+     print(chunk.text, end='')
```

If your image is stored in your local file system, you can read it in as bytes
data and use the `from_bytes` class method to create a `Part` object.

```python
- YOUR_IMAGE_PATH = "your_image_path"
- YOUR_IMAGE_MIME_TYPE = "your_image_mime_type"
- with open(YOUR_IMAGE_PATH, "rb") as f:
+ YOUR_IMAGE_PATH = 'your_image_path'
+ YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
+ with open(YOUR_IMAGE_PATH, 'rb') as f:
    image_bytes = f.read()

for chunk in client.models.generate_content_stream(
-     model="gemini-2.0-flash-exp",
+     model='gemini-2.0-flash-001',
    contents=[
-         "What is this image about?",
+         'What is this image about?',
        types.Part.from_bytes(data=image_bytes, mime_type=YOUR_IMAGE_MIME_TYPE),
    ],
):
-     print(chunk.text, end="")
+     print(chunk.text, end='')
```

### Async
@@ -475,7 +584,7 @@ of `client.models.generate_content`

```python
response = await client.aio.models.generate_content(
-     model="gemini-2.0-flash-exp", contents="Tell me a story in 300 words."
+     model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
)

print(response.text)
@@ -485,17 +594,17 @@ print(response.text)

```python
async for chunk in await client.aio.models.generate_content_stream(
-     model="gemini-2.0-flash-exp", contents="Tell me a story in 300 words."
+     model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
):
-     print(chunk.text, end="")
+     print(chunk.text, end='')
```

### Count Tokens and Compute Tokens

```python
response = client.models.count_tokens(
-     model="gemini-2.0-flash-exp",
-     contents="why is the sky blue?",
+     model='gemini-2.0-flash-001',
+     contents='why is the sky blue?',
)
print(response)
```
@@ -506,8 +615,8 @@ Compute tokens is only supported in Vertex AI.

```python
response = client.models.compute_tokens(
-     model="gemini-2.0-flash-exp",
-     contents="why is the sky blue?",
+     model='gemini-2.0-flash-001',
+     contents='why is the sky blue?',
)
print(response)
```
@@ -516,8 +625,8 @@ print(response)

```python
response = await client.aio.models.count_tokens(
-     model="gemini-2.0-flash-exp",
-     contents="why is the sky blue?",
+     model='gemini-2.0-flash-001',
+     contents='why is the sky blue?',
)
print(response)
```
@@ -526,8 +635,8 @@ print(response)

```python
response = client.models.embed_content(
-     model="text-embedding-004",
-     contents="why is the sky blue?",
+     model='text-embedding-004',
+     contents='why is the sky blue?',
)
print(response)
```
@@ -535,8 +644,8 @@ print(response)
```python
# multiple contents with config
response = client.models.embed_content(
-     model="text-embedding-004",
-     contents=["why is the sky blue?", "What is your age?"],
+     model='text-embedding-004',
+     contents=['why is the sky blue?', 'What is your age?'],
    config=types.EmbedContentConfig(output_dimensionality=10),
)

@@ -552,13 +661,13 @@ Support for generating images in the Gemini Developer API is behind an allowlist
```python
# Generate Image
response1 = client.models.generate_images(
-     model="imagen-3.0-generate-002",
-     prompt="An umbrella in the foreground, and a rainy night sky in the background",
+     model='imagen-3.0-generate-002',
+     prompt='An umbrella in the foreground, and a rainy night sky in the background',
    config=types.GenerateImagesConfig(
-         negative_prompt="human",
+         negative_prompt='human',
        number_of_images=1,
        include_rai_reason=True,
-         output_mime_type="image/jpeg",
+         output_mime_type='image/jpeg',
    ),
)
response1.generated_images[0].image.show()
@@ -571,12 +680,12 @@ Upscale image is only supported in Vertex AI.
```python
# Upscale the generated image from above
response2 = client.models.upscale_image(
-     model="imagen-3.0-generate-001",
+     model='imagen-3.0-generate-001',
    image=response1.generated_images[0].image,
-     upscale_factor="x2",
+     upscale_factor='x2',
    config=types.UpscaleImageConfig(
        include_rai_reason=True,
-         output_mime_type="image/jpeg",
+         output_mime_type='image/jpeg',
    ),
)
response2.generated_images[0].image.show()
@@ -601,21 +710,21 @@ raw_ref_image = RawReferenceImage(
mask_ref_image = MaskReferenceImage(
    reference_id=2,
    config=types.MaskReferenceConfig(
-         mask_mode="MASK_MODE_BACKGROUND",
+         mask_mode='MASK_MODE_BACKGROUND',
        mask_dilation=0,
    ),
)

response3 = client.models.edit_image(
-     model="imagen-3.0-capability-001",
-     prompt="Sunlight and clear sky",
+     model='imagen-3.0-capability-001',
+     prompt='Sunlight and clear sky',
    reference_images=[raw_ref_image, mask_ref_image],
    config=types.EditImageConfig(
-         edit_mode="EDIT_MODE_INPAINT_INSERTION",
+         edit_mode='EDIT_MODE_INPAINT_INSERTION',
        number_of_images=1,
-         negative_prompt="human",
+         negative_prompt='human',
        include_rai_reason=True,
-         output_mime_type="image/jpeg",
+         output_mime_type='image/jpeg',
    ),
)
response3.generated_images[0].image.show()
@@ -628,32 +737,32 @@ Create a chat session to start multi-turn conversations with the model.
### Send Message

```python
- chat = client.chats.create(model="gemini-2.0-flash-exp")
- response = chat.send_message("tell me a story")
+ chat = client.chats.create(model='gemini-2.0-flash-001')
+ response = chat.send_message('tell me a story')
print(response.text)
```

### Streaming

```python
- chat = client.chats.create(model="gemini-2.0-flash-exp")
- for chunk in chat.send_message_stream("tell me a story"):
+ chat = client.chats.create(model='gemini-2.0-flash-001')
+ for chunk in chat.send_message_stream('tell me a story'):
    print(chunk.text)
```

### Async

```python
- chat = client.aio.chats.create(model="gemini-2.0-flash-exp")
- response = await chat.send_message("tell me a story")
+ chat = client.aio.chats.create(model='gemini-2.0-flash-001')
+ response = await chat.send_message('tell me a story')
print(response.text)
```

### Async Streaming

```python
- chat = client.aio.chats.create(model="gemini-2.0-flash-exp")
- async for chunk in await chat.send_message_stream("tell me a story"):
+ chat = client.aio.chats.create(model='gemini-2.0-flash-001')
+ async for chunk in await chat.send_message_stream('tell me a story'):
    print(chunk.text)
```

@@ -669,8 +778,8 @@ Files are only supported in Gemini Developer API.
### Upload

```python
- file1 = client.files.upload(path="2312.11805v3.pdf")
- file2 = client.files.upload(path="2403.05530.pdf")
+ file1 = client.files.upload(path='2312.11805v3.pdf')
+ file2 = client.files.upload(path='2403.05530.pdf')

print(file1)
print(file2)
@@ -679,7 +788,7 @@ print(file2)
### Delete

```python
- file3 = client.files.upload(path="2312.11805v3.pdf")
+ file3 = client.files.upload(path='2312.11805v3.pdf')

client.files.delete(name=file3.name)
```
@@ -693,32 +802,32 @@ client.files.delete(name=file3.name)
```python
if client.vertexai:
    file_uris = [
-         "gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf",
-         "gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf",
+         'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
+         'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',
    ]
else:
    file_uris = [file1.uri, file2.uri]

cached_content = client.caches.create(
-     model="gemini-1.5-pro-002",
+     model='gemini-1.5-pro-002',
    config=types.CreateCachedContentConfig(
        contents=[
            types.Content(
-                 role="user",
+                 role='user',
                parts=[
                    types.Part.from_uri(
-                         file_uri=file_uris[0], mime_type="application/pdf"
+                         file_uri=file_uris[0], mime_type='application/pdf'
                    ),
                    types.Part.from_uri(
                        file_uri=file_uris[1],
-                         mime_type="application/pdf",
+                         mime_type='application/pdf',
                    ),
                ],
            )
        ],
-         system_instruction="What is the sum of the two pdfs?",
-         display_name="test cache",
-         ttl="3600s",
+         system_instruction='What is the sum of the two pdfs?',
+         display_name='test cache',
+         ttl='3600s',
    ),
)
```
@@ -733,8 +842,8 @@ cached_content = client.caches.get(name=cached_content.name)

```python
response = client.models.generate_content(
-     model="gemini-1.5-pro-002",
-     contents="Summarize the pdfs",
+     model='gemini-1.5-pro-002',
+     contents='Summarize the pdfs',
    config=types.GenerateContentConfig(
        cached_content=cached_content.name,
    ),
@@ -754,17 +863,17 @@ tuning through `tune`.

```python
if client.vertexai:
-     model = "gemini-1.5-pro-002"
+     model = 'gemini-1.5-pro-002'
    training_dataset = types.TuningDataset(
-         gcs_uri="gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl",
+         gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
    )
else:
-     model = "models/gemini-1.0-pro-001"
+     model = 'models/gemini-1.0-pro-001'
    training_dataset = types.TuningDataset(
        examples=[
            types.TuningExample(
-                 text_input=f"Input text {i}",
-                 output=f"Output text {i}",
+                 text_input=f'Input text {i}',
+                 output=f'Output text {i}',
            )
            for i in range(5)
        ],
@@ -776,7 +885,7 @@ tuning_job = client.tunings.tune(
    base_model=model,
    training_dataset=training_dataset,
    config=types.CreateTuningJobConfig(
-         epoch_count=1, tuned_model_display_name="test_dataset_examples model"
+         epoch_count=1, tuned_model_display_name='test_dataset_examples model'
    ),
)
print(tuning_job)
@@ -794,8 +903,8 @@ import time

running_states = set(
    [
-         "JOB_STATE_PENDING",
-         "JOB_STATE_RUNNING",
+         'JOB_STATE_PENDING',
+         'JOB_STATE_RUNNING',
    ]
)

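The loop that consumes these states sits in unchanged lines, so the diff shows only its opening line in the next hunk header; for context, a minimal sketch of the polling pattern, assuming `client.tunings.get` and a fixed 10-second poll interval:

```python
while tuning_job.state in running_states:
    print(tuning_job.state)
    # Refresh the job until it leaves the running states.
    tuning_job = client.tunings.get(name=tuning_job.name)
    time.sleep(10)
```
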
@@ -810,7 +919,7 @@ while tuning_job.state in running_states:
```python
response = client.models.generate_content(
    model=tuning_job.tuned_model.endpoint,
-     contents="why is the sky blue?",
+     contents='why is the sky blue?',
)

print(response.text)
@@ -828,12 +937,12 @@ print(tuned_model)
To retrieve base models, see [list base models](#list-base-models).

```python
- for model in client.models.list(config={"page_size": 10, "query_base": False}):
+ for model in client.models.list(config={'page_size': 10, 'query_base': False}):
    print(model)
```

```python
- pager = client.models.list(config={"page_size": 10, "query_base": False})
+ pager = client.models.list(config={'page_size': 10, 'query_base': False})
print(pager.page_size)
print(pager[0])
pager.next_page()
@@ -843,12 +952,12 @@ print(pager[0])
#### Async

```python
- async for job in await client.aio.models.list(config={"page_size": 10, "query_base": False}):
+ async for job in await client.aio.models.list(config={'page_size': 10, 'query_base': False}):
    print(job)
```

```python
- async_pager = await client.aio.models.list(config={"page_size": 10, "query_base": False})
+ async_pager = await client.aio.models.list(config={'page_size': 10, 'query_base': False})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
@@ -863,7 +972,7 @@ model = pager[0]
model = client.models.update(
    model=model.name,
    config=types.UpdateModelConfig(
-         display_name="my tuned model", description="my tuned model description"
+         display_name='my tuned model', description='my tuned model description'
    ),
)

@@ -874,12 +983,12 @@ print(model)
### List Tuning Jobs

```python
- for job in client.tunings.list(config={"page_size": 10}):
+ for job in client.tunings.list(config={'page_size': 10}):
    print(job)
```

```python
- pager = client.tunings.list(config={"page_size": 10})
+ pager = client.tunings.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
@@ -889,12 +998,12 @@ print(pager[0])
#### Async

```python
- async for job in await client.aio.tunings.list(config={"page_size": 10}):
+ async for job in await client.aio.tunings.list(config={'page_size': 10}):
    print(job)
```

```python
- async_pager = await client.aio.tunings.list(config={"page_size": 10})
+ async_pager = await client.aio.tunings.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
@@ -910,8 +1019,8 @@ Only supported in Vertex AI.
```python
# Specify model and source file only, destination and job display name will be auto-populated
job = client.batches.create(
-     model="gemini-1.5-flash-002",
-     src="bq://my-project.my-dataset.my-table",
+     model='gemini-1.5-flash-002',
+     src='bq://my-project.my-dataset.my-table',
)

job
@@ -927,10 +1036,10 @@ job.state

```python
completed_states = set(
    [
-         "JOB_STATE_SUCCEEDED",
-         "JOB_STATE_FAILED",
-         "JOB_STATE_CANCELLED",
-         "JOB_STATE_PAUSED",
+         'JOB_STATE_SUCCEEDED',
+         'JOB_STATE_FAILED',
+         'JOB_STATE_CANCELLED',
+         'JOB_STATE_PAUSED',
    ]
)
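
# The diff is truncated at this point. For context, this set typically feeds a
# polling loop along the lines of the sketch below; this is an assumption, not
# the package's verbatim code (`client.batches.get` is the SDK's job getter,
# and the 30-second interval is illustrative).
import time

while job.state not in completed_states:
    print(job.state)
    job = client.batches.get(name=job.name)
    time.sleep(30)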