google-genai 1.0.0rc0__py3-none-any.whl → 1.2.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: google-genai
- Version: 1.0.0rc0
+ Version: 1.2.0
  Summary: GenAI Python SDK
  Author-email: Google LLC <googleapis-packages@google.com>
  License: Apache-2.0
@@ -24,6 +24,7 @@ Requires-Dist: google-auth<3.0.0dev,>=2.14.1
  Requires-Dist: pydantic<3.0.0dev,>=2.0.0
  Requires-Dist: requests<3.0.0dev,>=2.28.1
  Requires-Dist: websockets<15.0dev,>=13.0
+ Requires-Dist: typing-extensions<5.0.0dev,>=4.11.0

  # Google Gen AI SDK

@@ -34,7 +35,7 @@ Requires-Dist: websockets<15.0dev,>=13.0

  -----

- Google Gen AI Python SDK provides an interface for developers to integrate Google's generative models into their Python applications. It supports the [Gemini Developer API](https://ai.google.dev/gemini-api/docs) and [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview) APIs. This is an early release. API is subject to change. Please do not use this SDK in production environments at this stage.
+ Google Gen AI Python SDK provides an interface for developers to integrate Google's generative models into their Python applications. It supports the [Gemini Developer API](https://ai.google.dev/gemini-api/docs) and [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview) APIs.

  ## Installation

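The install command itself falls between hunks and is not shown in the diff; for reference, it is:

```bash
pip install google-genai
```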
@@ -56,16 +57,60 @@ different services ([Gemini Developer API](https://ai.google.dev/gemini-api/docs

  ```python
  # Only run this block for Gemini Developer API
- client = genai.Client(api_key="GEMINI_API_KEY")
+ client = genai.Client(api_key='GEMINI_API_KEY')
  ```

  ```python
  # Only run this block for Vertex AI API
  client = genai.Client(
-     vertexai=True, project="your-project-id", location="us-central1"
+     vertexai=True, project='your-project-id', location='us-central1'
  )
  ```

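(An editorial note: the client snippets above and below assume the imports from the unchanged quickstart lines, typically:)

```python
from google import genai
from google.genai import types
```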
+ **(Optional) Using environment variables:**
+
+ You can create a client by configuring the necessary environment variables.
+ Configuration setup instructions depend on whether you're using the Gemini API
+ on Vertex AI or the ML Dev Gemini API.
+
+ **ML Dev Gemini API:** Set `GOOGLE_API_KEY` as shown below:
+
+ ```bash
+ export GOOGLE_API_KEY='your-api-key'
+ ```
+
+ **Vertex AI API:** Set `GOOGLE_GENAI_USE_VERTEXAI`, `GOOGLE_CLOUD_PROJECT`
+ and `GOOGLE_CLOUD_LOCATION`, as shown below:
+
+ ```bash
+ export GOOGLE_GENAI_USE_VERTEXAI=true
+ export GOOGLE_CLOUD_PROJECT='your-project-id'
+ export GOOGLE_CLOUD_LOCATION='us-central1'
+ ```
+
+ ```python
+ client = genai.Client()
+ ```
+
+ ### API Selection
+
+ To set the API version, use `http_options`. For example, to set the API version
+ to `v1` for Vertex AI:
+
+ ```python
+ client = genai.Client(
+     vertexai=True, project='your-project-id', location='us-central1',
+     http_options={'api_version': 'v1'}
+ )
+ ```
+
+ To set the API version to `v1alpha` for the Gemini API:
+
+ ```python
+ client = genai.Client(api_key='GEMINI_API_KEY',
+                       http_options={'api_version': 'v1alpha'})
+ ```
+
  ## Types

  Parameter types can be specified as either dictionaries (`TypedDict`) or
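(The hunk ends mid-sentence here; the alternative it goes on to name is, per the typed-config examples below, Pydantic models. To make the two forms concrete, a small editorial sketch, not part of the packaged README, assuming the `client` from the quickstart above:)

```python
from google.genai import types

# Dict (TypedDict-style) config
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
    config={'temperature': 0.3, 'max_output_tokens': 100},
)

# Equivalent Pydantic-typed config
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='why is the sky blue?',
    config=types.GenerateContentConfig(temperature=0.3, max_output_tokens=100),
)
```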
@@ -82,7 +127,7 @@ The `client.models` module exposes model inferencing and model getters.

  ```python
  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp", contents="why is the sky blue?"
+     model='gemini-2.0-flash-001', contents='why is the sky blue?'
  )
  print(response.text)
  ```
@@ -97,22 +142,53 @@ download the file in console.
  python code.

  ```python
- file = client.files.upload(path="a11.txt")
+ file = client.files.upload(file='a11.txt')
  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents=["Could you summarize this file?", file]
+     model='gemini-2.0-flash-001',
+     contents=['Could you summarize this file?', file]
  )
  print(response.text)
  ```

+ #### How to structure `contents`
+ There are several ways to structure the `contents` in your request.
+
+ Provide a single string as shown in the text example above:
+
+ ```python
+ contents='Can you recommend some things to do in Boston and New York in the winter?'
+ ```
+
+ Provide a single `Content` instance with multiple `Part` instances:
+
+ ```python
+ contents=types.Content(parts=[
+     types.Part.from_text(text='Can you recommend some things to do in Boston in the winter?'),
+     types.Part.from_text(text='Can you recommend some things to do in New York in the winter?')
+ ], role='user')
+ ```
+
+ When sending more than one input type, provide a list; mixed entries such as
+ strings and `Part` instances are assembled into `Content` for you:
+
+ ```python
+ contents=[
+     'What is this a picture of?',
+     types.Part.from_uri(
+         file_uri='gs://generativeai-downloads/images/scones.jpg',
+         mime_type='image/jpeg',
+     ),
+ ],
+ ```
+
  ### System Instructions and Other Configs

  ```python
  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="high",
+     model='gemini-2.0-flash-001',
+     contents='high',
      config=types.GenerateContentConfig(
-         system_instruction="I say high, you say low",
+         system_instruction='I say high, you say low',
          temperature=0.3,
      ),
  )
@@ -126,8 +202,8 @@ dictionaries. You can get the type from `google.genai.types`.

  ```python
  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents=types.Part.from_text(text="Why is the sky blue?"),
+     model='gemini-2.0-flash-001',
+     contents=types.Part.from_text(text='Why is the sky blue?'),
      config=types.GenerateContentConfig(
          temperature=0,
          top_p=0.95,
@@ -135,7 +211,7 @@ response = client.models.generate_content(
          candidate_count=1,
          seed=5,
          max_output_tokens=100,
-         stop_sequences=["STOP!"],
+         stop_sequences=['STOP!'],
          presence_penalty=0.0,
          frequency_penalty=0.0,
      ),
@@ -154,7 +230,7 @@ for model in client.models.list():
  ```

  ```python
- pager = client.models.list(config={"page_size": 10})
+ pager = client.models.list(config={'page_size': 10})
  print(pager.page_size)
  print(pager[0])
  pager.next_page()
@@ -169,7 +245,7 @@ async for job in await client.aio.models.list():
  ```

  ```python
- async_pager = await client.aio.models.list(config={"page_size": 10})
+ async_pager = await client.aio.models.list(config={'page_size': 10})
  print(async_pager.page_size)
  print(async_pager[0])
  await async_pager.next_page()
@@ -180,13 +256,13 @@ print(async_pager[0])

  ```python
  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="Say something bad.",
+     model='gemini-2.0-flash-001',
+     contents='Say something bad.',
      config=types.GenerateContentConfig(
          safety_settings=[
              types.SafetySetting(
-                 category="HARM_CATEGORY_HATE_SPEECH",
-                 threshold="BLOCK_ONLY_HIGH",
+                 category='HARM_CATEGORY_HATE_SPEECH',
+                 threshold='BLOCK_ONLY_HIGH',
              )
          ]
      ),
@@ -208,12 +284,12 @@ def get_current_weather(location: str) -> str:
      Args:
          location: The city and state, e.g. San Francisco, CA
      """
-     return "sunny"
+     return 'sunny'


  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="What is the weather like in Boston?",
+     model='gemini-2.0-flash-001',
+     contents='What is the weather like in Boston?',
      config=types.GenerateContentConfig(tools=[get_current_weather]),
  )

@@ -230,25 +306,25 @@ Then you will receive a function call part in the response.

  ```python
  function = types.FunctionDeclaration(
-     name="get_current_weather",
-     description="Get the current weather in a given location",
+     name='get_current_weather',
+     description='Get the current weather in a given location',
      parameters=types.Schema(
-         type="OBJECT",
+         type='OBJECT',
          properties={
-             "location": types.Schema(
-                 type="STRING",
-                 description="The city and state, e.g. San Francisco, CA",
+             'location': types.Schema(
+                 type='STRING',
+                 description='The city and state, e.g. San Francisco, CA',
              ),
          },
-         required=["location"],
+         required=['location'],
      ),
  )

  tool = types.Tool(function_declarations=[function])

  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="What is the weather like in Boston?",
+     model='gemini-2.0-flash-001',
+     contents='What is the weather like in Boston?',
      config=types.GenerateContentConfig(tools=[tool]),
  )

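(An editorial aside, not in the packaged README: if the model returned a function call, it can be inspected via `response.function_calls`, consistent with its use in the next hunk. A minimal sketch:)

```python
# Inspect the function call the model requested (uses `response` from above).
function_call_part = response.function_calls[0]
print(function_call_part.name)  # e.g. 'get_current_weather'
print(function_call_part.args)  # e.g. {'location': 'Boston'}
```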
@@ -262,33 +338,34 @@ The following example shows how to do it for a simple function invocation.

  ```python
  user_prompt_content = types.Content(
-     role="user",
-     parts=[types.Part.from_text(text="What is the weather like in Boston?")],
+     role='user',
+     parts=[types.Part.from_text(text='What is the weather like in Boston?')],
  )
  function_call_part = response.function_calls[0]
+ function_call_content = response.candidates[0].content


  try:
      function_result = get_current_weather(
          **function_call_part.args
      )
-     function_response = {"result": function_result}
+     function_response = {'result': function_result}
  except (
      Exception
  ) as e:  # instead of raising the exception, you can let the model handle it
-     function_response = {"error": str(e)}
+     function_response = {'error': str(e)}


  function_response_part = types.Part.from_function_response(
-     name=function_call_part.function_call.name,
+     name=function_call_part.name,
      response=function_response,
  )
  function_response_content = types.Content(
-     role="tool", parts=[function_response_part]
+     role='tool', parts=[function_response_part]
  )

  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
+     model='gemini-2.0-flash-001',
      contents=[
          user_prompt_content,
          function_call_content,
@@ -302,6 +379,66 @@ response = client.models.generate_content(
  print(response.text)
  ```

+ #### Function calling with `ANY` tools config mode
+
+ If you configure function calling mode to be `ANY`, the model will always
+ return function call parts. If you also pass a Python function as a tool, by
+ default the SDK performs automatic function calling until the number of remote
+ calls exceeds the maximum allowed for automatic function calling (10 by default).
+
+ If you'd like to disable automatic function calling in `ANY` mode:
+
+ ```python
+ def get_current_weather(location: str) -> str:
+     """Returns the current weather.
+
+     Args:
+         location: The city and state, e.g. San Francisco, CA
+     """
+     return "sunny"
+
+ response = client.models.generate_content(
+     model="gemini-2.0-flash-001",
+     contents="What is the weather like in Boston?",
+     config=types.GenerateContentConfig(
+         tools=[get_current_weather],
+         automatic_function_calling=types.AutomaticFunctionCallingConfig(
+             disable=True
+         ),
+         tool_config=types.ToolConfig(
+             function_calling_config=types.FunctionCallingConfig(mode='ANY')
+         ),
+     ),
+ )
+ ```
+
+ If you'd like `x` turns of automatic function calling, configure the maximum
+ remote calls to be `x + 1`. For example, the following allows one turn of
+ automatic function calling:
+
+ ```python
+ def get_current_weather(location: str) -> str:
+     """Returns the current weather.
+
+     Args:
+         location: The city and state, e.g. San Francisco, CA
+     """
+     return "sunny"
+
+ response = client.models.generate_content(
+     model="gemini-2.0-flash-001",
+     contents="What is the weather like in Boston?",
+     config=types.GenerateContentConfig(
+         tools=[get_current_weather],
+         automatic_function_calling=types.AutomaticFunctionCallingConfig(
+             maximum_remote_calls=2
+         ),
+         tool_config=types.ToolConfig(
+             function_calling_config=types.FunctionCallingConfig(mode='ANY')
+         ),
+     ),
+ )
+ ```
  ### JSON Response Schema

  #### Pydantic Model Schema support
@@ -323,10 +460,10 @@ class CountryInfo(BaseModel):


  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="Give me information for the United States.",
+     model='gemini-2.0-flash-001',
+     contents='Give me information for the United States.',
      config=types.GenerateContentConfig(
-         response_mime_type="application/json",
+         response_mime_type='application/json',
          response_schema=CountryInfo,
      ),
@@ -335,30 +472,30 @@ print(response.text)

  ```python
  response = client.models.generate_content(
-     model="gemini-2.0-flash-exp",
-     contents="Give me information for the United States.",
+     model='gemini-2.0-flash-001',
+     contents='Give me information for the United States.',
      config=types.GenerateContentConfig(
-         response_mime_type="application/json",
+         response_mime_type='application/json',
          response_schema={
-             "required": [
-                 "name",
-                 "population",
-                 "capital",
-                 "continent",
-                 "gdp",
-                 "official_language",
-                 "total_area_sq_mi",
+             'required': [
+                 'name',
+                 'population',
+                 'capital',
+                 'continent',
+                 'gdp',
+                 'official_language',
+                 'total_area_sq_mi',
              ],
-             "properties": {
-                 "name": {"type": "STRING"},
-                 "population": {"type": "INTEGER"},
-                 "capital": {"type": "STRING"},
-                 "continent": {"type": "STRING"},
-                 "gdp": {"type": "INTEGER"},
-                 "official_language": {"type": "STRING"},
-                 "total_area_sq_mi": {"type": "INTEGER"},
+             'properties': {
+                 'name': {'type': 'STRING'},
+                 'population': {'type': 'INTEGER'},
+                 'capital': {'type': 'STRING'},
+                 'continent': {'type': 'STRING'},
+                 'gdp': {'type': 'INTEGER'},
+                 'official_language': {'type': 'STRING'},
+                 'total_area_sq_mi': {'type': 'INTEGER'},
              },
-             "type": "OBJECT",
+             'type': 'OBJECT',
          },
      ),
  )
@@ -381,7 +518,7 @@ class InstrumentEnum(Enum):
      KEYBOARD = 'Keyboard'

  response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
+     model='gemini-2.0-flash-001',
      contents='What instrument plays multiple notes at once?',
      config={
          'response_mime_type': 'text/x.enum',
@@ -406,7 +543,7 @@ class InstrumentEnum(Enum):
      KEYBOARD = 'Keyboard'

  response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
+     model='gemini-2.0-flash-001',
      contents='What instrument plays multiple notes at once?',
      config={
          'response_mime_type': 'application/json',
@@ -422,9 +559,9 @@ print(response.text)

  ```python
  for chunk in client.models.generate_content_stream(
-     model="gemini-2.0-flash-exp", contents="Tell me a story in 300 words."
+     model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
  ):
-     print(chunk.text, end="")
+     print(chunk.text, end='')
  ```

  #### Streaming for image content
@@ -434,35 +571,35 @@ you can use the `from_uri` class method to create a `Part` object.

  ```python
  for chunk in client.models.generate_content_stream(
-     model="gemini-2.0-flash-exp",
+     model='gemini-2.0-flash-001',
      contents=[
-         "What is this image about?",
+         'What is this image about?',
          types.Part.from_uri(
-             file_uri="gs://generativeai-downloads/images/scones.jpg",
-             mime_type="image/jpeg",
+             file_uri='gs://generativeai-downloads/images/scones.jpg',
+             mime_type='image/jpeg',
          ),
      ],
  ):
-     print(chunk.text, end="")
+     print(chunk.text, end='')
  ```

  If your image is stored in your local file system, you can read it in as bytes
  data and use the `from_bytes` class method to create a `Part` object.

  ```python
- YOUR_IMAGE_PATH = "your_image_path"
- YOUR_IMAGE_MIME_TYPE = "your_image_mime_type"
- with open(YOUR_IMAGE_PATH, "rb") as f:
+ YOUR_IMAGE_PATH = 'your_image_path'
+ YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
+ with open(YOUR_IMAGE_PATH, 'rb') as f:
      image_bytes = f.read()

  for chunk in client.models.generate_content_stream(
-     model="gemini-2.0-flash-exp",
+     model='gemini-2.0-flash-001',
      contents=[
-         "What is this image about?",
+         'What is this image about?',
          types.Part.from_bytes(data=image_bytes, mime_type=YOUR_IMAGE_MIME_TYPE),
      ],
  ):
-     print(chunk.text, end="")
+     print(chunk.text, end='')
  ```

  ### Async
@@ -475,7 +612,7 @@ of `client.models.generate_content`

  ```python
  response = await client.aio.models.generate_content(
-     model="gemini-2.0-flash-exp", contents="Tell me a story in 300 words."
+     model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
  )

  print(response.text)
@@ -485,17 +622,17 @@ print(response.text)

  ```python
  async for chunk in await client.aio.models.generate_content_stream(
-     model="gemini-2.0-flash-exp", contents="Tell me a story in 300 words."
+     model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
  ):
-     print(chunk.text, end="")
+     print(chunk.text, end='')
  ```

  ### Count Tokens and Compute Tokens

  ```python
  response = client.models.count_tokens(
-     model="gemini-2.0-flash-exp",
-     contents="why is the sky blue?",
+     model='gemini-2.0-flash-001',
+     contents='why is the sky blue?',
  )
  print(response)
  ```
@@ -506,8 +643,8 @@ Compute tokens is only supported in Vertex AI.

  ```python
  response = client.models.compute_tokens(
-     model="gemini-2.0-flash-exp",
-     contents="why is the sky blue?",
+     model='gemini-2.0-flash-001',
+     contents='why is the sky blue?',
  )
  print(response)
  ```
@@ -516,8 +653,8 @@ print(response)

  ```python
  response = await client.aio.models.count_tokens(
-     model="gemini-2.0-flash-exp",
-     contents="why is the sky blue?",
+     model='gemini-2.0-flash-001',
+     contents='why is the sky blue?',
  )
  print(response)
@@ -526,8 +663,8 @@ print(response)

  ```python
  response = client.models.embed_content(
-     model="text-embedding-004",
-     contents="why is the sky blue?",
+     model='text-embedding-004',
+     contents='why is the sky blue?',
  )
  print(response)
  ```
@@ -535,8 +672,8 @@ print(response)
  ```python
  # multiple contents with config
  response = client.models.embed_content(
-     model="text-embedding-004",
-     contents=["why is the sky blue?", "What is your age?"],
+     model='text-embedding-004',
+     contents=['why is the sky blue?', 'What is your age?'],
      config=types.EmbedContentConfig(output_dimensionality=10),
  )

@@ -552,13 +689,13 @@ Support for generating images in the Gemini Developer API is behind an allowlist
  ```python
  # Generate Image
  response1 = client.models.generate_images(
-     model="imagen-3.0-generate-002",
-     prompt="An umbrella in the foreground, and a rainy night sky in the background",
+     model='imagen-3.0-generate-002',
+     prompt='An umbrella in the foreground, and a rainy night sky in the background',
      config=types.GenerateImagesConfig(
-         negative_prompt="human",
+         negative_prompt='human',
          number_of_images=1,
          include_rai_reason=True,
-         output_mime_type="image/jpeg",
+         output_mime_type='image/jpeg',
      ),
  )
  response1.generated_images[0].image.show()
@@ -571,12 +708,12 @@ Upscale image is only supported in Vertex AI.
  ```python
  # Upscale the generated image from above
  response2 = client.models.upscale_image(
-     model="imagen-3.0-generate-001",
+     model='imagen-3.0-generate-001',
      image=response1.generated_images[0].image,
-     upscale_factor="x2",
+     upscale_factor='x2',
      config=types.UpscaleImageConfig(
          include_rai_reason=True,
-         output_mime_type="image/jpeg",
+         output_mime_type='image/jpeg',
      ),
  )
  response2.generated_images[0].image.show()
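(The next hunk begins inside the edit-image snippet; its elided opening lines look approximately like the following reconstruction, which is not shown in the diff and should be treated as a sketch:)

```python
# Reconstruction of the elided opening of the edit-image example
from google.genai.types import RawReferenceImage, MaskReferenceImage

raw_ref_image = RawReferenceImage(
    reference_id=1,
    reference_image=response1.generated_images[0].image,
)
```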
@@ -601,21 +738,21 @@ raw_ref_image = RawReferenceImage(
  mask_ref_image = MaskReferenceImage(
      reference_id=2,
      config=types.MaskReferenceConfig(
-         mask_mode="MASK_MODE_BACKGROUND",
+         mask_mode='MASK_MODE_BACKGROUND',
          mask_dilation=0,
      ),
  )

  response3 = client.models.edit_image(
-     model="imagen-3.0-capability-001",
-     prompt="Sunlight and clear sky",
+     model='imagen-3.0-capability-001',
+     prompt='Sunlight and clear sky',
      reference_images=[raw_ref_image, mask_ref_image],
      config=types.EditImageConfig(
-         edit_mode="EDIT_MODE_INPAINT_INSERTION",
+         edit_mode='EDIT_MODE_INPAINT_INSERTION',
          number_of_images=1,
-         negative_prompt="human",
+         negative_prompt='human',
          include_rai_reason=True,
-         output_mime_type="image/jpeg",
+         output_mime_type='image/jpeg',
      ),
  )
  response3.generated_images[0].image.show()
@@ -628,32 +765,32 @@ Create a chat session to start a multi-turn conversation with the model.
  ### Send Message

  ```python
- chat = client.chats.create(model="gemini-2.0-flash-exp")
- response = chat.send_message("tell me a story")
+ chat = client.chats.create(model='gemini-2.0-flash-001')
+ response = chat.send_message('tell me a story')
  print(response.text)
  ```

  ### Streaming

  ```python
- chat = client.chats.create(model="gemini-2.0-flash-exp")
- for chunk in chat.send_message_stream("tell me a story"):
+ chat = client.chats.create(model='gemini-2.0-flash-001')
+ for chunk in chat.send_message_stream('tell me a story'):
      print(chunk.text)
  ```

  ### Async

  ```python
- chat = client.aio.chats.create(model="gemini-2.0-flash-exp")
- response = await chat.send_message("tell me a story")
+ chat = client.aio.chats.create(model='gemini-2.0-flash-001')
+ response = await chat.send_message('tell me a story')
  print(response.text)
  ```

  ### Async Streaming

  ```python
- chat = client.aio.chats.create(model="gemini-2.0-flash-exp")
- async for chunk in await chat.send_message_stream("tell me a story"):
+ chat = client.aio.chats.create(model='gemini-2.0-flash-001')
+ async for chunk in await chat.send_message_stream('tell me a story'):
      print(chunk.text)
  ```

@@ -669,17 +806,24 @@ Files are only supported in Gemini Developer API.
  ### Upload

  ```python
- file1 = client.files.upload(path="2312.11805v3.pdf")
- file2 = client.files.upload(path="2403.05530.pdf")
+ file1 = client.files.upload(file='2312.11805v3.pdf')
+ file2 = client.files.upload(file='2403.05530.pdf')

  print(file1)
  print(file2)
  ```

+ ### Get
+
+ ```python
+ file1 = client.files.upload(file='2312.11805v3.pdf')
+ file_info = client.files.get(name=file1.name)
+ ```
+
  ### Delete

  ```python
- file3 = client.files.upload(path="2312.11805v3.pdf")
+ file3 = client.files.upload(file='2312.11805v3.pdf')

  client.files.delete(name=file3.name)
  ```
@@ -693,32 +837,32 @@ client.files.delete(name=file3.name)
  ```python
  if client.vertexai:
      file_uris = [
-         "gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf",
-         "gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf",
+         'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
+         'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',
      ]
  else:
      file_uris = [file1.uri, file2.uri]

  cached_content = client.caches.create(
-     model="gemini-1.5-pro-002",
+     model='gemini-1.5-pro-002',
      config=types.CreateCachedContentConfig(
          contents=[
              types.Content(
-                 role="user",
+                 role='user',
                  parts=[
                      types.Part.from_uri(
-                         file_uri=file_uris[0], mime_type="application/pdf"
+                         file_uri=file_uris[0], mime_type='application/pdf'
                      ),
                      types.Part.from_uri(
                          file_uri=file_uris[1],
-                         mime_type="application/pdf",
+                         mime_type='application/pdf',
                      ),
                  ],
              )
          ],
-         system_instruction="What is the sum of the two pdfs?",
-         display_name="test cache",
-         ttl="3600s",
+         system_instruction='What is the sum of the two pdfs?',
+         display_name='test cache',
+         ttl='3600s',
      ),
  )
  ```
@@ -733,8 +877,8 @@ cached_content = client.caches.get(name=cached_content.name)

  ```python
  response = client.models.generate_content(
-     model="gemini-1.5-pro-002",
-     contents="Summarize the pdfs",
+     model='gemini-1.5-pro-002',
+     contents='Summarize the pdfs',
      config=types.GenerateContentConfig(
          cached_content=cached_content.name,
      ),
@@ -754,17 +898,17 @@ tuning through `tune`.

  ```python
  if client.vertexai:
-     model = "gemini-1.5-pro-002"
+     model = 'gemini-1.5-pro-002'
      training_dataset = types.TuningDataset(
-         gcs_uri="gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl",
+         gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
      )
  else:
-     model = "models/gemini-1.0-pro-001"
+     model = 'models/gemini-1.0-pro-001'
      training_dataset = types.TuningDataset(
          examples=[
              types.TuningExample(
-                 text_input=f"Input text {i}",
-                 output=f"Output text {i}",
+                 text_input=f'Input text {i}',
+                 output=f'Output text {i}',
              )
              for i in range(5)
          ],
@@ -776,7 +920,7 @@ tuning_job = client.tunings.tune(
      base_model=model,
      training_dataset=training_dataset,
      config=types.CreateTuningJobConfig(
-         epoch_count=1, tuned_model_display_name="test_dataset_examples model"
+         epoch_count=1, tuned_model_display_name='test_dataset_examples model'
      ),
  )
  print(tuning_job)
@@ -794,8 +938,8 @@ import time

  running_states = set(
      [
-         "JOB_STATE_PENDING",
-         "JOB_STATE_RUNNING",
+         'JOB_STATE_PENDING',
+         'JOB_STATE_RUNNING',
      ]
  )

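(The polling loop itself sits between hunks; consistent with the `while` header below, it typically continues along these lines, using `client.tunings.get` and `time.sleep`. A sketch, not shown in the diff:)

```python
# Poll until the tuning job leaves the running states.
while tuning_job.state in running_states:
    print(tuning_job.state)
    tuning_job = client.tunings.get(name=tuning_job.name)
    time.sleep(10)
```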
@@ -810,7 +954,7 @@ while tuning_job.state in running_states:
  ```python
  response = client.models.generate_content(
      model=tuning_job.tuned_model.endpoint,
-     contents="why is the sky blue?",
+     contents='why is the sky blue?',
  )

  print(response.text)
@@ -828,12 +972,12 @@ print(tuned_model)
  To retrieve base models, see [list base models](#list-base-models).

  ```python
- for model in client.models.list(config={"page_size": 10, "query_base": False}):
+ for model in client.models.list(config={'page_size': 10, 'query_base': False}):
      print(model)
  ```

  ```python
- pager = client.models.list(config={"page_size": 10, "query_base": False})
+ pager = client.models.list(config={'page_size': 10, 'query_base': False})
  print(pager.page_size)
  print(pager[0])
  pager.next_page()
@@ -843,12 +987,12 @@ print(pager[0])
  #### Async

  ```python
- async for job in await client.aio.models.list(config={"page_size": 10, "query_base": False}):
+ async for job in await client.aio.models.list(config={'page_size': 10, 'query_base': False}):
      print(job)
  ```

  ```python
- async_pager = await client.aio.models.list(config={"page_size": 10, "query_base": False})
+ async_pager = await client.aio.models.list(config={'page_size': 10, 'query_base': False})
  print(async_pager.page_size)
  print(async_pager[0])
  await async_pager.next_page()
@@ -863,7 +1007,7 @@ model = pager[0]
  model = client.models.update(
      model=model.name,
      config=types.UpdateModelConfig(
-         display_name="my tuned model", description="my tuned model description"
+         display_name='my tuned model', description='my tuned model description'
      ),
  )

@@ -874,12 +1018,12 @@ print(model)

  ### List Tuning Jobs
  ```python
- for job in client.tunings.list(config={"page_size": 10}):
+ for job in client.tunings.list(config={'page_size': 10}):
      print(job)
  ```

  ```python
- pager = client.tunings.list(config={"page_size": 10})
+ pager = client.tunings.list(config={'page_size': 10})
  print(pager.page_size)
  print(pager[0])
  pager.next_page()
@@ -889,12 +1033,12 @@ print(pager[0])
  #### Async

  ```python
- async for job in await client.aio.tunings.list(config={"page_size": 10}):
+ async for job in await client.aio.tunings.list(config={'page_size': 10}):
      print(job)
  ```

  ```python
- async_pager = await client.aio.tunings.list(config={"page_size": 10})
+ async_pager = await client.aio.tunings.list(config={'page_size': 10})
  print(async_pager.page_size)
  print(async_pager[0])
  await async_pager.next_page()
@@ -910,8 +1054,8 @@ Only supported in Vertex AI.
  ```python
  # Specify model and source file only, destination and job display name will be auto-populated
  job = client.batches.create(
-     model="gemini-1.5-flash-002",
-     src="bq://my-project.my-dataset.my-table",
+     model='gemini-1.5-flash-002',
+     src='bq://my-project.my-dataset.my-table',
  )

  job
@@ -927,10 +1071,10 @@ job.state
  ```python
  completed_states = set(
      [
-         "JOB_STATE_SUCCEEDED",
-         "JOB_STATE_FAILED",
-         "JOB_STATE_CANCELLED",
-         "JOB_STATE_PAUSED",
+         'JOB_STATE_SUCCEEDED',
+         'JOB_STATE_FAILED',
+         'JOB_STATE_CANCELLED',
+         'JOB_STATE_PAUSED',
      ]
  )
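(The loop that consumes `completed_states` is cut off by the end of this hunk; a sketch of how it typically continues, an editorial illustration assuming `client.batches.get` and the `job` from above:)

```python
import time

# Poll until the batch job reaches a terminal state.
while job.state not in completed_states:
    time.sleep(30)
    job = client.batches.get(name=job.name)

print(job.state)
```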