google-genai 0.4.0__py3-none-any.whl → 0.6.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,888 +0,0 @@
- Metadata-Version: 2.2
- Name: google-genai
- Version: 0.4.0
- Summary: GenAI Python SDK
- Author-email: Google LLC <googleapis-packages@google.com>
- License: Apache-2.0
- Project-URL: Homepage, https://github.com/googleapis/python-genai
- Classifier: Intended Audience :: Developers
- Classifier: License :: OSI Approved :: Apache Software License
- Classifier: Operating System :: OS Independent
- Classifier: Programming Language :: Python
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.9
- Classifier: Programming Language :: Python :: 3.10
- Classifier: Programming Language :: Python :: 3.11
- Classifier: Programming Language :: Python :: 3.12
- Classifier: Programming Language :: Python :: 3.13
- Classifier: Topic :: Internet
- Classifier: Topic :: Software Development :: Libraries :: Python Modules
- Requires-Python: >=3.9
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: google-auth<3.0.0dev,>=2.14.1
- Requires-Dist: pillow<12.0.0,>=10.0.0
- Requires-Dist: pydantic<3.0.0dev,>=2.0.0
- Requires-Dist: requests<3.0.0dev,>=2.28.1
- Requires-Dist: websockets<15.0dev,>=13.0
-
- # Google Gen AI SDK
-
- [![PyPI version](https://img.shields.io/pypi/v/google-genai.svg)](https://pypi.org/project/google-genai/)
-
- --------
- **Documentation:** https://googleapis.github.io/python-genai/
-
- -----
-
- ## Installation
-
- ``` cmd
- pip install google-genai
- ```
-
- ## Imports
-
- ``` python
- from google import genai
- from google.genai import types
- ```
-
- ## Create a client
-
- Please run one of the following code blocks to create a client for the
- service you want to use (Google AI or Vertex AI). Feel free to switch
- the client and run all the examples to see how it behaves under
- different APIs.
-
- ``` python
- # Only run this block for Google AI API
- client = genai.Client(api_key='YOUR_API_KEY')
- ```
-
- ``` python
- # Only run this block for Vertex AI API
- client = genai.Client(
-     vertexai=True, project='your-project-id', location='us-central1'
- )
- ```
-
- ## Types
-
- Parameter types can be specified as either dictionaries (TypedDicts) or
- Pydantic models. The Pydantic model types are available in the `types`
- module.
-
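- For example, the same generation config can be passed either way (a
- minimal sketch; the two calls below are equivalent):
-
- ``` python
- # As a dictionary
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents='Why is the sky blue?',
-     config={'temperature': 0.3, 'max_output_tokens': 100},
- )
-
- # As a Pydantic model from the types module
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents='Why is the sky blue?',
-     config=types.GenerateContentConfig(
-         temperature=0.3,
-         max_output_tokens=100,
-     ),
- )
- ```
-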
- ## Models
-
- The `client.models` module exposes model inference and model getters.
-
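- For example, a model getter (a minimal sketch; the same `get` call
- appears later in this README with tuned models):
-
- ``` python
- model_info = client.models.get(model='gemini-2.0-flash-exp')
- print(model_info)
- ```
-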
- ### Generate Content
-
- ``` python
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp', contents='What is your name?'
- )
- print(response.text)
- ```
-
- ### System Instructions and Other Configs
-
- ``` python
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents='high',
-     config=types.GenerateContentConfig(
-         system_instruction='I say high, you say low',
-         temperature=0.3,
-     ),
- )
- print(response.text)
- ```
-
- ### Typed Config
-
- All API methods support Pydantic types for parameters as well as
- dictionaries. You can get the type from `google.genai.types`.
-
- ``` python
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents=types.Part.from_text('Why is the sky blue?'),
-     config=types.GenerateContentConfig(
-         temperature=0,
-         top_p=0.95,
-         top_k=20,
-         candidate_count=1,
-         seed=5,
-         max_output_tokens=100,
-         stop_sequences=['STOP!'],
-         presence_penalty=0.0,
-         frequency_penalty=0.0,
-     )
- )
-
- response
- ```
-
- ### Safety Settings
-
- ``` python
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents='Say something bad.',
-     config=types.GenerateContentConfig(
-         safety_settings=[
-             types.SafetySetting(
-                 category='HARM_CATEGORY_HATE_SPEECH',
-                 threshold='BLOCK_ONLY_HIGH',
-             )
-         ]
-     ),
- )
- print(response.text)
- ```
-
- ### Function Calling
-
- #### Automatic Python Function Support
-
- You can pass a Python function directly, and it will be automatically
- called and its return value sent back to the model.
-
- ``` python
- def get_current_weather(location: str) -> str:
-     """Returns the current weather.
-
-     Args:
-         location: The city and state, e.g. San Francisco, CA
-     """
-     return 'sunny'
-
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents='What is the weather like in Boston?',
-     config=types.GenerateContentConfig(tools=[get_current_weather]),
- )
-
- response.text
- ```
-
- #### Manually declare and invoke a function for function calling
-
- If you don't want to use the automatic function support, you can manually
- declare the function and invoke it.
-
- The following example shows how to declare a function and pass it as a tool.
- You will then receive a function call part in the response.
-
- ``` python
- function = dict(
-     name='get_current_weather',
-     description='Get the current weather in a given location',
-     parameters={
-         'type': 'OBJECT',
-         'properties': {
-             'location': {
-                 'type': 'STRING',
-                 'description': 'The city and state, e.g. San Francisco, CA',
-             },
-         },
-         'required': ['location'],
-     },
- )
-
- tool = types.Tool(function_declarations=[function])
-
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents='What is the weather like in Boston?',
-     config=types.GenerateContentConfig(tools=[tool]),
- )
-
- response.candidates[0].content.parts[0].function_call
- ```
-
- After you receive the function call part from the model, you can invoke
- the function to get the function response, and then pass the function
- response back to the model. The following example shows how to do it for
- a simple function invocation.
-
- ``` python
- function_call_part = response.candidates[0].content.parts[0]
-
- try:
-     function_result = get_current_weather(**function_call_part.function_call.args)
-     function_response = {'result': function_result}
- except Exception as e:  # instead of raising the exception, you can let the model handle it
-     function_response = {'error': str(e)}
-
- function_response_part = types.Part.from_function_response(
-     name=function_call_part.function_call.name,
-     response=function_response,
- )
-
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents=[
-         types.Part.from_text('What is the weather like in Boston?'),
-         function_call_part,
-         function_response_part,
-     ],
- )
-
- response
- ```
-
- ### JSON Response Schema
-
- #### Pydantic Model Schema Support
-
- Schemas can be provided as Pydantic models.
-
- ``` python
- from pydantic import BaseModel
-
-
- class CountryInfo(BaseModel):
-     name: str
-     population: int
-     capital: str
-     continent: str
-     gdp: int
-     official_language: str
-     total_area_sq_mi: int
-
-
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents='Give me information about the United States.',
-     config=types.GenerateContentConfig(
-         response_mime_type='application/json',
-         response_schema=CountryInfo,
-     ),
- )
- print(response.text)
- ```
-
- Schemas can also be provided as plain dictionaries:
-
- ``` python
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents='Give me information about the United States.',
-     config={
-         'response_mime_type': 'application/json',
-         'response_schema': {
-             'required': [
-                 'name',
-                 'population',
-                 'capital',
-                 'continent',
-                 'gdp',
-                 'official_language',
-                 'total_area_sq_mi',
-             ],
-             'properties': {
-                 'name': {'type': 'STRING'},
-                 'population': {'type': 'INTEGER'},
-                 'capital': {'type': 'STRING'},
-                 'continent': {'type': 'STRING'},
-                 'gdp': {'type': 'INTEGER'},
-                 'official_language': {'type': 'STRING'},
-                 'total_area_sq_mi': {'type': 'INTEGER'},
-             },
-             'type': 'OBJECT',
-         },
-     },
- )
- print(response.text)
- ```
-
- ### Streaming
-
- #### Streaming for text content
-
- ``` python
- for chunk in client.models.generate_content_stream(
-     model='gemini-2.0-flash-exp', contents='Tell me a story in 300 words.'
- ):
-     print(chunk.text)
- ```
-
- #### Streaming for image content
-
- If your image is stored in Google Cloud Storage, you can use the `from_uri`
- class method to create a `Part` object.
-
- ``` python
- for chunk in client.models.generate_content_stream(
-     model='gemini-1.5-flash',
-     contents=[
-         'What is this image about?',
-         types.Part.from_uri(
-             file_uri='gs://generativeai-downloads/images/scones.jpg',
-             mime_type='image/jpeg',
-         ),
-     ],
- ):
-     print(chunk.text)
- ```
-
- If your image is stored in your local file system, you can read it in as
- bytes and use the `from_bytes` class method to create a `Part` object.
-
- ``` python
- YOUR_IMAGE_PATH = 'your_image_path'
- YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
- with open(YOUR_IMAGE_PATH, 'rb') as f:
-     image_bytes = f.read()
-
- for chunk in client.models.generate_content_stream(
-     model='gemini-1.5-flash',
-     contents=[
-         'What is this image about?',
-         types.Part.from_bytes(
-             data=image_bytes,
-             mime_type=YOUR_IMAGE_MIME_TYPE,
-         ),
-     ],
- ):
-     print(chunk.text)
- ```
-
- ### Async
-
- `client.aio` exposes all the analogous `async` methods that are
- available on `client`.
-
- For example, `client.aio.models.generate_content` is the async version
- of `client.models.generate_content`.
-
- ``` python
- response = await client.aio.models.generate_content(
-     model='gemini-2.0-flash-exp', contents='Tell me a story in 300 words.'
- )
-
- print(response.text)
- ```
-
- #### Streaming
-
- ``` python
- async for chunk in client.aio.models.generate_content_stream(
-     model='gemini-2.0-flash-exp', contents='Tell me a story in 300 words.'
- ):
-     print(chunk.text)
- ```
-
- ### Count Tokens and Compute Tokens
-
- #### Count Tokens
-
- ``` python
- response = client.models.count_tokens(
-     model='gemini-2.0-flash-exp',
-     contents='What is your name?',
- )
- print(response)
- ```
-
- #### Compute Tokens
-
- Compute tokens is not supported by Google AI.
-
- ``` python
- response = client.models.compute_tokens(
-     model='gemini-2.0-flash-exp',
-     contents='What is your name?',
- )
- print(response)
- ```
-
- #### Async
-
- ``` python
- response = await client.aio.models.count_tokens(
-     model='gemini-2.0-flash-exp',
-     contents='What is your name?',
- )
- print(response)
- ```
-
- ### Embed Content
-
- ``` python
- response = client.models.embed_content(
-     model='text-embedding-004',
-     contents='What is your name?',
- )
- response
- ```
-
- ``` python
- # Multiple contents with config
- response = client.models.embed_content(
-     model='text-embedding-004',
-     contents=['What is your name?', 'What is your age?'],
-     config=types.EmbedContentConfig(output_dimensionality=10),
- )
-
- response
- ```
-
- ### Imagen
-
- #### Generate Image
-
- Image generation in Google AI is behind an allowlist.
-
- ``` python
- # Generate Image
- response1 = client.models.generate_image(
-     model='imagen-3.0-generate-001',
-     prompt='An umbrella in the foreground, and a rainy night sky in the background',
-     config=types.GenerateImageConfig(
-         negative_prompt='human',
-         number_of_images=1,
-         include_rai_reason=True,
-         output_mime_type='image/jpeg',
-     ),
- )
- response1.generated_images[0].image.show()
- ```
-
- #### Upscale Image
-
- Upscale image is not supported in Google AI.
-
- ``` python
- # Upscale the generated image from above
- response2 = client.models.upscale_image(
-     model='imagen-3.0-generate-001',
-     image=response1.generated_images[0].image,
-     upscale_factor='x2',
-     config=types.UpscaleImageConfig(
-         include_rai_reason=True,
-         output_mime_type='image/jpeg',
-     ),
- )
- response2.generated_images[0].image.show()
- ```
-
- #### Edit Image
-
- Edit image uses a separate model from generate and upscale. It is not
- supported in Google AI.
-
- ``` python
- # Edit the generated image from above
- from google.genai.types import RawReferenceImage, MaskReferenceImage
-
- raw_ref_image = RawReferenceImage(
-     reference_id=1,
-     reference_image=response1.generated_images[0].image,
- )
-
- # Model computes a mask of the background
- mask_ref_image = MaskReferenceImage(
-     reference_id=2,
-     config=types.MaskReferenceConfig(
-         mask_mode='MASK_MODE_BACKGROUND',
-         mask_dilation=0,
-     ),
- )
-
- response3 = client.models.edit_image(
-     model='imagen-3.0-capability-001',
-     prompt='Sunlight and clear sky',
-     reference_images=[raw_ref_image, mask_ref_image],
-     config=types.EditImageConfig(
-         edit_mode='EDIT_MODE_INPAINT_INSERTION',
-         number_of_images=1,
-         negative_prompt='human',
-         include_rai_reason=True,
-         output_mime_type='image/jpeg',
-     ),
- )
- response3.generated_images[0].image.show()
- ```
-
- ## Chats
-
- Create a chat session to start a multi-turn conversation with the model.
-
- ### Send Message
-
- ``` python
- chat = client.chats.create(model='gemini-2.0-flash-exp')
- response = chat.send_message('tell me a story')
- print(response.text)
- ```
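-
- The chat session keeps the conversation history, so a follow-up message
- can refer to earlier turns. A minimal sketch, reusing the session above:
-
- ``` python
- response = chat.send_message('Summarize the story in one sentence.')
- print(response.text)
- ```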
-
- ### Streaming
-
- ``` python
- chat = client.chats.create(model='gemini-2.0-flash-exp')
- for chunk in chat.send_message_stream('tell me a story'):
-     print(chunk.text)
- ```
-
- ### Async
-
- ``` python
- chat = client.aio.chats.create(model='gemini-2.0-flash-exp')
- response = await chat.send_message('tell me a story')
- print(response.text)
- ```
-
- ### Async Streaming
-
- ``` python
- chat = client.aio.chats.create(model='gemini-2.0-flash-exp')
- async for chunk in chat.send_message_stream('tell me a story'):
-     print(chunk.text)
- ```
-
- ## Files (Only Google AI)
-
- ``` python
- !gsutil cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .
- !gsutil cp gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf .
- ```
-
- ### Upload
-
- ``` python
- file1 = client.files.upload(path='2312.11805v3.pdf')
- file2 = client.files.upload(path='2403.05530.pdf')
-
- print(file1)
- print(file2)
- ```
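-
- An uploaded file can then be referenced in a request. A minimal sketch,
- assuming the model can read PDFs, using `Part.from_uri` with the uploaded
- file's URI (the same pattern as the Caches example below):
-
- ``` python
- response = client.models.generate_content(
-     model='gemini-2.0-flash-exp',
-     contents=[
-         'Summarize this document.',
-         types.Part.from_uri(file_uri=file1.uri, mime_type='application/pdf'),
-     ],
- )
- print(response.text)
- ```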
-
- ### Delete
-
- ``` python
- file3 = client.files.upload(path='2312.11805v3.pdf')
-
- client.files.delete(name=file3.name)
- ```
-
- ## Caches
-
- `client.caches` contains the control-plane APIs for cached content.
-
- ### Create
-
- ``` python
- if client.vertexai:
-     file_uris = [
-         'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
-         'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',
-     ]
- else:
-     file_uris = [file1.uri, file2.uri]
-
- cached_content = client.caches.create(
-     model='gemini-1.5-pro-002',
-     config=types.CreateCachedContentConfig(
-         contents=[
-             types.Content(
-                 role='user',
-                 parts=[
-                     types.Part.from_uri(
-                         file_uri=file_uris[0],
-                         mime_type='application/pdf',
-                     ),
-                     types.Part.from_uri(
-                         file_uri=file_uris[1],
-                         mime_type='application/pdf',
-                     ),
-                 ],
-             )
-         ],
-         system_instruction='What is the sum of the two pdfs?',
-         display_name='test cache',
-         ttl='3600s',
-     ),
- )
- ```
-
- ### Get
-
- ``` python
- client.caches.get(name=cached_content.name)
- ```
-
- ### Generate Content
-
- ``` python
- client.models.generate_content(
-     model='gemini-1.5-pro-002',
-     contents='Summarize the pdfs',
-     config=types.GenerateContentConfig(
-         cached_content=cached_content.name,
-     ),
- )
- ```
-
- ## Tunings
-
- `client.tunings` contains tuning job APIs and supports supervised
- fine-tuning through `tune` and distillation through `distill`.
-
- ### Tune
-
- - Vertex AI supports tuning from a GCS source
- - Google AI supports tuning from inline examples
-
- ``` python
- if client.vertexai:
-     model = 'gemini-1.5-pro-002'
-     training_dataset = types.TuningDataset(
-         gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
-     )
- else:
-     model = 'models/gemini-1.0-pro-001'
-     training_dataset = types.TuningDataset(
-         examples=[
-             types.TuningExample(
-                 text_input=f'Input text {i}',
-                 output=f'Output text {i}',
-             )
-             for i in range(5)
-         ],
-     )
- ```
-
- ``` python
- tuning_job = client.tunings.tune(
-     base_model=model,
-     training_dataset=training_dataset,
-     config=types.CreateTuningJobConfig(
-         epoch_count=1,
-         tuned_model_display_name='test_dataset_examples model',
-     )
- )
- tuning_job
- ```
-
- ### Get Tuning Job
-
- ``` python
- tuning_job = client.tunings.get(name=tuning_job.name)
- tuning_job
- ```
-
- ``` python
- import time
-
- running_states = set([
-     'JOB_STATE_PENDING',
-     'JOB_STATE_RUNNING',
- ])
-
- while tuning_job.state in running_states:
-     print(tuning_job.state)
-     tuning_job = client.tunings.get(name=tuning_job.name)
-     time.sleep(10)
- ```
-
- #### Use Tuned Model
-
- ``` python
- response = client.models.generate_content(
-     model=tuning_job.tuned_model.endpoint,
-     contents='What is your name?',
- )
-
- response.text
- ```
-
- ### Get Tuned Model
-
- ``` python
- tuned_model = client.models.get(model=tuning_job.tuned_model.model)
- tuned_model
- ```
-
- ### List Tuned Models
-
- ``` python
- for model in client.models.list(config={'page_size': 10}):
-     print(model)
- ```
-
- ``` python
- pager = client.models.list(config={'page_size': 10})
- print(pager.page_size)
- print(pager[0])
- pager.next_page()
- print(pager[0])
- ```
-
- #### Async
-
- ``` python
- async for model in await client.aio.models.list(config={'page_size': 10}):
-     print(model)
- ```
-
- ``` python
- async_pager = await client.aio.models.list(config={'page_size': 10})
- print(async_pager.page_size)
- print(async_pager[0])
- await async_pager.next_page()
- print(async_pager[0])
- ```
-
- ### Update Tuned Model
-
- ``` python
- model = pager[0]
-
- model = client.models.update(
-     model=model.name,
-     config=types.UpdateModelConfig(
-         display_name='my tuned model',
-         description='my tuned model description',
-     ),
- )
-
- model
- ```
-
- ### Distillation
-
- Only supported on Vertex AI. Requires an allowlist.
-
- ``` python
- distillation_job = client.tunings.distill(
-     student_model='gemma-2b-1.1-it',
-     teacher_model='gemini-1.5-pro-002',
-     training_dataset=genai.types.DistillationDataset(
-         gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
-     ),
-     config=genai.types.CreateDistillationJobConfig(
-         epoch_count=1,
-         pipeline_root_directory='gs://my-bucket',
-     ),
- )
- distillation_job
- ```
-
- ``` python
- completed_states = set([
-     'JOB_STATE_SUCCEEDED',
-     'JOB_STATE_FAILED',
-     'JOB_STATE_CANCELLED',
-     'JOB_STATE_PAUSED',
- ])
-
- while distillation_job.state not in completed_states:
-     print(distillation_job.state)
-     distillation_job = client.tunings.get(name=distillation_job.name)
-     time.sleep(10)
- ```
-
- ``` python
- distillation_job
- ```
-
- ### List Tuning Jobs
-
- ``` python
- for job in client.tunings.list(config={'page_size': 10}):
-     print(job)
- ```
-
- ``` python
- pager = client.tunings.list(config={'page_size': 10})
- print(pager.page_size)
- print(pager[0])
- pager.next_page()
- print(pager[0])
- ```
-
- #### Async
-
- ``` python
- async for job in await client.aio.tunings.list(config={'page_size': 10}):
-     print(job)
- ```
-
- ``` python
- async_pager = await client.aio.tunings.list(config={'page_size': 10})
- print(async_pager.page_size)
- print(async_pager[0])
- await async_pager.next_page()
- print(async_pager[0])
- ```
-
- ## Batch Prediction
-
- Only supported in Vertex AI.
-
- ### Create
-
- ``` python
- # Specify model and source file only; destination and job display name
- # will be auto-populated
- job = client.batches.create(
-     model='gemini-1.5-flash-002',
-     src='bq://my-project.my-dataset.my-table',
- )
-
- job
- ```
-
- ``` python
- # Get a job by name
- job = client.batches.get(name=job.name)
-
- job.state
- ```
-
- ``` python
- completed_states = set([
-     'JOB_STATE_SUCCEEDED',
-     'JOB_STATE_FAILED',
-     'JOB_STATE_CANCELLED',
-     'JOB_STATE_PAUSED',
- ])
-
- while job.state not in completed_states:
-     print(job.state)
-     job = client.batches.get(name=job.name)
-     time.sleep(30)
-
- job
- ```
-
- ### List
-
- ``` python
- for job in client.batches.list(config={'page_size': 10}):
-     print(job)
- ```
-
- ``` python
- pager = client.batches.list(config={'page_size': 10})
- print(pager.page_size)
- print(pager[0])
- pager.next_page()
- print(pager[0])
- ```
-
- #### Async
-
- ``` python
- async for job in await client.aio.batches.list(config={'page_size': 10}):
-     print(job)
- ```
-
- ``` python
- async_pager = await client.aio.batches.list(config={'page_size': 10})
- print(async_pager.page_size)
- print(async_pager[0])
- await async_pager.next_page()
- print(async_pager[0])
- ```
-
- ### Delete
-
- ``` python
- # Delete the job resource
- delete_job = client.batches.delete(name=job.name)
-
- delete_job
- ```