google-genai 1.13.0__py3-none-any.whl → 1.15.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
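Since the changes below only apply from 1.15.0 onward, a consumer may want to gate on the installed release before relying on them. The sketch below is illustrative: `version_tuple` and `genai_is_at_least` are hypothetical helpers (not part of the SDK), and the naive dotted-numeric comparison ignores pre-release and build tags.

```python
# Illustrative check: is google-genai installed at >= 1.15.0?
# These helpers are hypothetical, not SDK APIs.
from importlib import metadata


def version_tuple(version: str) -> tuple[int, ...]:
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))


def genai_is_at_least(minimum: str = "1.15.0") -> bool:
    """Return True if google-genai is installed at `minimum` or newer."""
    try:
        installed = metadata.version("google-genai")
    except metadata.PackageNotFoundError:
        return False  # package not installed in this environment
    return version_tuple(installed) >= version_tuple(minimum)
```

Tuple comparison handles multi-digit components correctly (e.g. `1.10.0 > 1.9.0`), which naive string comparison does not.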
google/genai/version.py CHANGED
@@ -13,4 +13,4 @@
  # limitations under the License.
  #

- __version__ = '1.13.0' # x-release-please-version
+ __version__ = '1.15.0' # x-release-please-version
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: google-genai
- Version: 1.13.0
+ Version: 1.15.0
  Summary: GenAI Python SDK
  Author-email: Google LLC <googleapis-packages@google.com>
  License: Apache-2.0
@@ -32,6 +32,8 @@ Dynamic: license-file
  # Google Gen AI SDK

  [![PyPI version](https://img.shields.io/pypi/v/google-genai.svg)](https://pypi.org/project/google-genai/)
+ ![Python support](https://img.shields.io/pypi/pyversions/google-genai)
+ [![PyPI - Downloads](https://img.shields.io/pypi/dw/google-genai)](https://pypistats.org/packages/google-genai)

  --------
  **Documentation:** https://googleapis.github.io/python-genai/
@@ -59,11 +61,15 @@ Please run one of the following code blocks to create a client for
  different services ([Gemini Developer API](https://ai.google.dev/gemini-api/docs) or [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview)).

  ```python
+ from google import genai
+
  # Only run this block for Gemini Developer API
  client = genai.Client(api_key='GEMINI_API_KEY')
  ```

  ```python
+ from google import genai
+
  # Only run this block for Vertex AI API
  client = genai.Client(
      vertexai=True, project='your-project-id', location='us-central1'
@@ -92,6 +98,8 @@ export GOOGLE_CLOUD_LOCATION='us-central1'
  ```

  ```python
+ from google import genai
+
  client = genai.Client()
  ```

@@ -105,6 +113,9 @@ To set the API version use `http_options`. For example, to set the API version
  to `v1` for Vertex AI:

  ```python
+ from google import genai
+ from google.genai import types
+
  client = genai.Client(
      vertexai=True,
      project='your-project-id',
@@ -116,6 +127,9 @@ client = genai.Client(
  To set the API version to `v1alpha` for the Gemini Developer API:

  ```python
+ from google import genai
+ from google.genai import types
+
  client = genai.Client(
      api_key='GEMINI_API_KEY',
      http_options=types.HttpOptions(api_version='v1alpha')
@@ -131,6 +145,7 @@ Pydantic model types are available in the `types` module.
  ## Models

  The `client.models` modules exposes model inferencing and model getters.
+ See the 'Create a client' section above to initialize a client.

  ### Generate Content

@@ -172,6 +187,8 @@ This is the canonical way to provide contents, SDK will not do any conversion.
  ##### Provide a `types.Content` instance

  ```python
+ from google.genai import types
+
  contents = types.Content(
      role='user',
      parts=[types.Part.from_text(text='Why is the sky blue?')]
@@ -236,6 +253,8 @@ Where a `types.UserContent` is a subclass of `types.Content`, the
  ##### Provide a function call part

  ```python
+ from google.genai import types
+
  contents = types.Part.from_function_call(
      name='get_weather_by_location',
      args={'location': 'Boston'}
@@ -263,6 +282,8 @@ Where a `types.ModelContent` is a subclass of `types.Content`, the
  ##### Provide a list of function call parts

  ```python
+ from google.genai import types
+
  contents = [
      types.Part.from_function_call(
          name='get_weather_by_location',
@@ -300,6 +321,8 @@ Where a `types.ModelContent` is a subclass of `types.Content`, the
  ##### Provide a non function call part

  ```python
+ from google.genai import types
+
  contents = types.Part.from_uri(
      file_uri: 'gs://generativeai-downloads/images/scones.jpg',
      mime_type: 'image/jpeg',
@@ -322,6 +345,8 @@ The SDK converts all non function call parts into a content with a `user` role.
  ##### Provide a list of non function call parts

  ```python
+ from google.genai import types
+
  contents = [
      types.Part.from_text('What is this image about?'),
      types.Part.from_uri(
@@ -361,11 +386,17 @@ If you put a list within a list, the inner list can only contain
  ### System Instructions and Other Configs

  The output of the model can be influenced by several optional settings
- available in generate_content's config parameter. For example, the
- variability and length of the output can be influenced by the temperature
- and max_output_tokens respectively.
+ available in generate_content's config parameter. For example, increasing
+ `max_output_tokens` is essential for longer model responses. To make a model more
+ deterministic, lowering the `temperature` parameter reduces randomness, with
+ values near 0 minimizing variability. Capabilities and parameter defaults for
+ each model is shown in the
+ [Vertex AI docs](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash)
+ and [Gemini API docs](https://ai.google.dev/gemini-api/docs/models) respectively.

  ```python
+ from google.genai import types
+
  response = client.models.generate_content(
      model='gemini-2.0-flash-001',
      contents='high',
@@ -384,6 +415,8 @@ All API methods support Pydantic types for parameters as well as
  dictionaries. You can get the type from `google.genai.types`.

  ```python
+ from google.genai import types
+
  response = client.models.generate_content(
      model='gemini-2.0-flash-001',
      contents=types.Part.from_text(text='Why is the sky blue?'),
@@ -420,7 +453,7 @@ pager.next_page()
  print(pager[0])
  ```

- #### Async
+ #### List Base Models (Asynchronous)

  ```python
  async for job in await client.aio.models.list():
@@ -438,6 +471,8 @@ print(async_pager[0])
  ### Safety Settings

  ```python
+ from google.genai import types
+
  response = client.models.generate_content(
      model='gemini-2.0-flash-001',
      contents='Say something bad.',
@@ -461,6 +496,8 @@ You can pass a Python function directly and it will be automatically
  called and responded by default.

  ```python
+ from google.genai import types
+
  def get_current_weather(location: str) -> str:
      """Returns the current weather.

@@ -484,6 +521,8 @@ automatic function calling, you can disable automatic function calling
  as follows:

  ```python
+ from google.genai import types
+
  response = client.models.generate_content(
      model='gemini-2.0-flash-001',
      contents='What is the weather like in Boston?',
@@ -512,6 +551,8 @@ The following example shows how to declare a function and pass it as a tool.
  Then you will receive a function call part in the response.

  ```python
+ from google.genai import types
+
  function = types.FunctionDeclaration(
      name='get_current_weather',
      description='Get the current weather in a given location',
@@ -544,6 +585,8 @@ the model.
  The following example shows how to do it for a simple function invocation.

  ```python
+ from google.genai import types
+
  user_prompt_content = types.Content(
      role='user',
      parts=[types.Part.from_text(text='What is the weather like in Boston?')],
@@ -596,6 +639,8 @@ maximum remote call for automatic function calling (default to 10 times).
  If you'd like to disable automatic function calling in `ANY` mode:

  ```python
+ from google.genai import types
+
  def get_current_weather(location: str) -> str:
      """Returns the current weather.

@@ -624,6 +669,8 @@ configure the maximum remote calls to be `x + 1`.
  Assuming you prefer `1` turn for automatic function calling.

  ```python
+ from google.genai import types
+
  def get_current_weather(location: str) -> str:
      """Returns the current weather.

@@ -658,6 +705,7 @@ Schemas can be provided as Pydantic Models.

  ```python
  from pydantic import BaseModel
+ from google.genai import types


  class CountryInfo(BaseModel):
@@ -682,6 +730,8 @@ print(response.text)
  ```

  ```python
+ from google.genai import types
+
  response = client.models.generate_content(
      model='gemini-2.0-flash-001',
      contents='Give me information for the United States.',
@@ -741,7 +791,8 @@ print(response.text)

  #### JSON Response

- You can also set response_mime_type to 'application/json', the response will be identical but in quotes.
+ You can also set response_mime_type to 'application/json', the response will be
+ identical but in quotes.

  ```python
  from enum import Enum
@@ -764,7 +815,10 @@ response = client.models.generate_content(
  print(response.text)
  ```

- ### Streaming
+ ### Generate Content (Synchronous Streaming)
+
+ Generate content in a streaming format so that the model outputs streams back
+ to you, rather than being returned as one chunk.

  #### Streaming for text content

@@ -781,6 +835,8 @@ If your image is stored in [Google Cloud Storage](https://cloud.google.com/stora
  you can use the `from_uri` class method to create a `Part` object.

  ```python
+ from google.genai import types
+
  for chunk in client.models.generate_content_stream(
      model='gemini-2.0-flash-001',
      contents=[
@@ -798,6 +854,8 @@ If your image is stored in your local file system, you can read it in as bytes
  data and use the `from_bytes` class method to create a `Part` object.

  ```python
+ from google.genai import types
+
  YOUR_IMAGE_PATH = 'your_image_path'
  YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
  with open(YOUR_IMAGE_PATH, 'rb') as f:
@@ -813,10 +871,10 @@ for chunk in client.models.generate_content_stream(
      print(chunk.text, end='')
  ```

- ### Async
+ ### Generate Content (Asynchronous Non Streaming)

  `client.aio` exposes all the analogous [`async` methods](https://docs.python.org/3/library/asyncio.html)
- that are available on `client`
+ that are available on `client`. Note that it applies to all the modules.

  For example, `client.aio.models.generate_content` is the `async` version
  of `client.models.generate_content`
@@ -829,7 +887,8 @@ response = await client.aio.models.generate_content(
  print(response.text)
  ```

- ### Streaming
+ ### Generate Content (Asynchronous Streaming)
+

  ```python
  async for chunk in await client.aio.models.generate_content_stream(
@@ -881,6 +940,8 @@ print(response)
  ```

  ```python
+ from google.genai import types
+
  # multiple contents with config
  response = client.models.embed_content(
      model='text-embedding-004',
@@ -898,6 +959,8 @@ print(response)
  Support for generate images in Gemini Developer API is behind an allowlist

  ```python
+ from google.genai import types
+
  # Generate Image
  response1 = client.models.generate_images(
      model='imagen-3.0-generate-002',
@@ -916,6 +979,8 @@ response1.generated_images[0].image.show()
  Upscale image is only supported in Vertex AI.

  ```python
+ from google.genai import types
+
  # Upscale the generated image from above
  response2 = client.models.upscale_image(
      model='imagen-3.0-generate-001',
@@ -937,6 +1002,7 @@ Edit image is only supported in Vertex AI.

  ```python
  # Edit the generated image from above
+ from google.genai import types
  from google.genai.types import RawReferenceImage, MaskReferenceImage

  raw_ref_image = RawReferenceImage(
@@ -974,6 +1040,8 @@ response3.generated_images[0].image.show()
  Support for generate videos in Vertex and Gemini Developer API is behind an allowlist

  ```python
+ from google.genai import types
+
  # Create operation
  operation = client.models.generate_videos(
      model='veo-2.0-generate-001',
@@ -997,17 +1065,22 @@ video.show()

  ## Chats

- Create a chat session to start a multi-turn conversations with the model.
+ Create a chat session to start a multi-turn conversations with the model. Then,
+ use `chat.send_message` function multiple times within the same chat session so
+ that it can reflect on its previous responses (i.e., engage in an ongoing
+ conversation). See the 'Create a client' section above to initialize a client.

- ### Send Message
+ ### Send Message (Synchronous Non-Streaming)

  ```python
  chat = client.chats.create(model='gemini-2.0-flash-001')
  response = chat.send_message('tell me a story')
  print(response.text)
+ response = chat.send_message('summarize the story you told me in 1 sentence')
+ print(response.text)
  ```

- ### Streaming
+ ### Send Message (Synchronous Streaming)

  ```python
  chat = client.chats.create(model='gemini-2.0-flash-001')
@@ -1015,7 +1088,7 @@ for chunk in chat.send_message_stream('tell me a story'):
      print(chunk.text)
  ```

- ### Async
+ ### Send Message (Asynchronous Non-Streaming)

  ```python
  chat = client.aio.chats.create(model='gemini-2.0-flash-001')
@@ -1023,7 +1096,7 @@ response = await chat.send_message('tell me a story')
  print(response.text)
  ```

- ### Async Streaming
+ ### Send Message (Asynchronous Streaming)

  ```python
  chat = client.aio.chats.create(model='gemini-2.0-flash-001')
@@ -1033,7 +1106,8 @@ async for chunk in await chat.send_message_stream('tell me a story'):

  ## Files

- Files are only supported in Gemini Developer API.
+ Files are only supported in Gemini Developer API. See the 'Create a client'
+ section above to initialize a client.

  ```cmd
  !gsutil cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .
@@ -1067,11 +1141,14 @@ client.files.delete(name=file3.name)

  ## Caches

- `client.caches` contains the control plane APIs for cached content
+ `client.caches` contains the control plane APIs for cached content. See the
+ 'Create a client' section above to initialize a client.

  ### Create

  ```python
+ from google.genai import types
+
  if client.vertexai:
      file_uris = [
          'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
@@ -1113,6 +1190,8 @@ cached_content = client.caches.get(name=cached_content.name)
  ### Generate Content with Caches

  ```python
+ from google.genai import types
+
  response = client.models.generate_content(
      model='gemini-2.0-flash-001',
      contents='Summarize the pdfs',
@@ -1126,7 +1205,8 @@ print(response.text)
  ## Tunings

  `client.tunings` contains tuning job APIs and supports supervised fine
- tuning through `tune`.
+ tuning through `tune`. See the 'Create a client' section above to initialize a
+ client.

  ### Tune

@@ -1134,6 +1214,8 @@ tuning through `tune`.
  - Gemini Developer API supports tuning from inline examples

  ```python
+ from google.genai import types
+
  if client.vertexai:
      model = 'gemini-2.0-flash-001'
      training_dataset = types.TuningDataset(
@@ -1153,6 +1235,8 @@ else:
  ```

  ```python
+ from google.genai import types
+
  tuning_job = client.tunings.tune(
      base_model=model,
      training_dataset=training_dataset,
@@ -1239,6 +1323,8 @@ print(async_pager[0])
  ### Update Tuned Model

  ```python
+ from google.genai import types
+
  model = pager[0]

  model = client.models.update(
@@ -1284,7 +1370,8 @@ print(async_pager[0])

  ## Batch Prediction

- Only supported in Vertex AI.
+ Only supported in Vertex AI. See the 'Create a client' section above to
+ initialize a client.

  ### Create

@@ -1,30 +1,30 @@
  google/genai/__init__.py,sha256=IYw-PcsdgjSpS1mU_ZcYkTfPocsJ4aVmrDxP7vX7c6Y,709
- google/genai/_api_client.py,sha256=yUHm1KEij9_R7Ha_quQQgHFgh6DIfrqw5QrR0j_8QYA,36594
+ google/genai/_api_client.py,sha256=fFYeZ8TZnl4zm6XPCvjrnUbHPHgr_qZ3HxWhVnOHJ1U,37869
  google/genai/_api_module.py,sha256=66FsFq9N8PdTegDyx3am3NHpI0Bw7HBmifUMCrZsx_Q,902
- google/genai/_automatic_function_calling_util.py,sha256=MB4lzbiLuosYXYafJq9d8QhTfK21cSkIwyzZr6W5Hjw,10294
+ google/genai/_automatic_function_calling_util.py,sha256=wFFKTvlLpeqiP6zgH1LwWW3DYYWeLoJ_DkytVHJ89rI,10297
  google/genai/_base_url.py,sha256=E5H4dew14Y16qfnB3XRnjSCi19cJVlkaMNoM_8ip-PM,1597
  google/genai/_common.py,sha256=mBI8hEiztd3M_kBRTja0cprdGEoH5xNFCUykWDTtl-A,10573
  google/genai/_extra_utils.py,sha256=3wyzcRi29qgcxB2PvqzvY1sqPwYQHt5KCemI9RaKE9s,14824
- google/genai/_live_converters.py,sha256=9OnzzLAt03ottIXkCfAih-yTgSLm03GnW2KkXRR-HY8,65649
+ google/genai/_live_converters.py,sha256=VANY82SZueOJX2ek03IUyvgyP-acbj1NKm-IjhpMMQw,74810
  google/genai/_replay_api_client.py,sha256=pppcP0yAmXk8LS9uKXNBN1bJyoTDUDdt-WqfhdmgNZk,20006
  google/genai/_test_api_client.py,sha256=XNOWq8AkYbqInv1aljNGlFXsv8slQIWTYy_hdcCetD0,4797
  google/genai/_transformers.py,sha256=9Njw7X2nP3ySUy2282olQIxFYoE6dOOJ363HKr4pd5A,34514
  google/genai/batches.py,sha256=gyNb--psM7cqXEP5RsJ908ismvsFFlDNRxt7mCwhYgM,34943
- google/genai/caches.py,sha256=IPiq64z957dTsZyHedlQ8DM6YPMeC5IAs2JVHkYt_v4,51033
+ google/genai/caches.py,sha256=5Z-c84DuccMQCVITYLLTj0NRH2KKNXl4gzAkpEBiUnw,59761
  google/genai/chats.py,sha256=6K9wuH3yDngdbZCYNA13G7GeMsya100QQdS0_jt6m1A,16978
  google/genai/client.py,sha256=OkJxW-9_Mi7O1B_HxdBgHQgeM7mEIn1eQQIMkiaunTE,10477
  google/genai/errors.py,sha256=d22FmoLj5kte9UZSCLwkGhJhqp2JXlJHyNB6wC_37w4,4795
  google/genai/files.py,sha256=PB5Rh5WiLOPHcqPR1inGyhqL9PiE0BFHSL4Dz9HKtl8,39516
  google/genai/live.py,sha256=PtosiGwQ4I72HMlAhiBDrMteWCH-_RBIIww6mVHDIuc,35105
- google/genai/models.py,sha256=zl911T7hpSrboQmoNwQd_sQIj3xyuMjqTT-y5XwLuiE,202875
+ google/genai/models.py,sha256=ldNpy0DwnK72_oVfvlK9x6-YK9u_ZQ65Dra0ddnvPek,214467
  google/genai/operations.py,sha256=wWGngU4N3U2y_JteBkwqTmoNncehk2NcZgPyHaVulfI,20304
  google/genai/pagers.py,sha256=dhdKUr4KiUGHOt8OPLo51I0qXJpBU2iH4emTKC4UINI,6797
  google/genai/py.typed,sha256=RsMFoLwBkAvY05t6izop4UHZtqOPLiKp3GkIEizzmQY,40
- google/genai/tunings.py,sha256=X2dIs9v9e91bd-PwZja-BaA3kNQxwyAjt1x5fhP4Phk,47594
- google/genai/types.py,sha256=Faf8ZmPPy5gKIQJBR8eeQqJD0sqqEVn7p-4hihJTNu0,366392
- google/genai/version.py,sha256=7YL9CQp8iYV-Wxt7YGhhhJbsu-HPhIiQ9Pk59X4aBac,627
- google_genai-1.13.0.dist-info/licenses/LICENSE,sha256=z8d0m5b2O9McPEK1xHG_dWgUBT6EfBDz6wA0F7xSPTA,11358
- google_genai-1.13.0.dist-info/METADATA,sha256=XiCtZ3DqaASBddDnFT9x8nhoy35jIlao4vSrsSTQasM,32899
- google_genai-1.13.0.dist-info/WHEEL,sha256=wXxTzcEDnjrTwFYjLPcsW_7_XihufBwmpiBeiXNBGEA,91
- google_genai-1.13.0.dist-info/top_level.txt,sha256=_1QvSJIhFAGfxb79D6DhB7SUw2X6T4rwnz_LLrbcD3c,7
- google_genai-1.13.0.dist-info/RECORD,,
+ google/genai/tunings.py,sha256=bmLGBN1EhsDvm-OpHwX3UgNPauG2FUJsq0Z327C6Tmk,49226
+ google/genai/types.py,sha256=jFDra81W6-DOCXthJzjhQpbUjH-01_ZaziEWwPJBGyg,386019
+ google/genai/version.py,sha256=YR-NrHLpLS5deIRlUcFzZEz8W284JQPHrAUOeUt0Q7A,627
+ google_genai-1.15.0.dist-info/licenses/LICENSE,sha256=z8d0m5b2O9McPEK1xHG_dWgUBT6EfBDz6wA0F7xSPTA,11358
+ google_genai-1.15.0.dist-info/METADATA,sha256=xnydtMjKTfpQ50V0kny2Q7oRCuUULKvZoO30Gb8kEHw,35580
+ google_genai-1.15.0.dist-info/WHEEL,sha256=DnLRTWE75wApRYVsjgc6wsVswC54sMSJhAEd4xhDpBk,91
+ google_genai-1.15.0.dist-info/top_level.txt,sha256=_1QvSJIhFAGfxb79D6DhB7SUw2X6T4rwnz_LLrbcD3c,7
+ google_genai-1.15.0.dist-info/RECORD,,
@@ -1,5 +1,5 @@
  Wheel-Version: 1.0
- Generator: setuptools (80.1.0)
+ Generator: setuptools (80.4.0)
  Root-Is-Purelib: true
  Tag: py3-none-any
