llm-gemini 0.16__py3-none-any.whl → 0.17__py3-none-any.whl

This diff shows the changes between two publicly released versions of the package, as they appear in their public registry. It is provided for informational purposes only.
llm_gemini-0.17.dist-info/METADATA CHANGED
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: llm-gemini
- Version: 0.16
+ Version: 0.17
  Summary: LLM plugin to access Google's Gemini family of models
  Author: Simon Willison
  License: Apache-2.0
@@ -57,9 +57,17 @@ llm -m gemini-2.0-flash "A short joke about a pelican and a walrus"
  >
  > The walrus sighs and says, "It's a long story. Let's just say we met through a mutual friend... of the fin."
 
+ You can set the [default model](https://llm.datasette.io/en/stable/setup.html#setting-a-custom-default-model) to avoid the extra `-m` option:
+
+ ```bash
+ llm models default gemini-2.0-flash
+ llm "A joke about a pelican and a walrus"
+ ```
+
  Other models are:
 
- - `gemini-2.5-pro-exp-03-25` - experimental release of Gemini 2.5 Pro
+ - `gemini-2.5-pro-exp-03-25` - free experimental release of Gemini 2.5 Pro
+ - `gemini-2.5-pro-preview-03-25` - paid preview of Gemini 2.5 Pro
  - `gemma-3-27b-it` - [Gemma 3](https://blog.google/technology/developers/gemma-3/) 27B
  - `gemini-2.0-pro-exp-02-05` - experimental release of Gemini 2.0 Pro
  - `gemini-2.0-flash-lite` - Gemini 2.0 Flash-Lite
@@ -79,23 +87,23 @@ Other models are:
  Gemini models are multi-modal. You can provide images, audio or video files as input like this:
 
  ```bash
- llm -m gemini-1.5-flash-latest 'extract text' -a image.jpg
+ llm -m gemini-2.0-flash 'extract text' -a image.jpg
  ```
  Or with a URL:
  ```bash
- llm -m gemini-1.5-flash-8b-latest 'describe image' \
+ llm -m gemini-2.0-flash-lite 'describe image' \
    -a https://static.simonwillison.net/static/2024/pelicans.jpg
  ```
  Audio works too:
 
  ```bash
- llm -m gemini-1.5-pro-latest 'transcribe audio' -a audio.mp3
+ llm -m gemini-2.0-flash 'transcribe audio' -a audio.mp3
  ```
 
  And video:
 
  ```bash
- llm -m gemini-1.5-pro-latest 'describe what happens' -a video.mp4
+ llm -m gemini-2.0-flash 'describe what happens' -a video.mp4
  ```
  The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gemini-api/docs/file-prompting-strategies) on multi-modal prompting.
 
 
@@ -104,7 +112,7 @@ The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gem
  Use `-o json_object 1` to force the output to be JSON:
 
  ```bash
- llm -m gemini-1.5-flash-latest -o json_object 1 \
+ llm -m gemini-2.0-flash -o json_object 1 \
    '3 largest cities in California, list of {"name": "..."}'
  ```
  Outputs:
@@ -119,7 +127,7 @@ Gemini models can [write and execute code](https://ai.google.dev/gemini-api/docs
  To enable this feature, use `-o code_execution 1`:
 
  ```bash
- llm -m gemini-1.5-pro-latest -o code_execution 1 \
+ llm -m gemini-2.0-flash -o code_execution 1 \
    'use python to calculate (factorial of 13) * 3'
  ```
  ### Google search
@@ -131,7 +139,7 @@ Using this feature may incur additional requirements in terms of how you use the
  To run a prompt with Google search enabled, use `-o google_search 1`:
 
  ```bash
- llm -m gemini-1.5-pro-latest -o google_search 1 \
+ llm -m gemini-2.0-flash -o google_search 1 \
    'What happened in Ireland today?'
  ```
 
@@ -142,7 +150,7 @@ Use `llm logs -c --json` after running a prompt to see the full JSON response, w
  To chat interactively with the model, run `llm chat`:
 
  ```bash
- llm chat -m gemini-1.5-pro-latest
+ llm chat -m gemini-2.0-flash
  ```
 
  ## Embeddings
@@ -205,4 +213,3 @@ You will need to have stored a valid Gemini API key using this command first:
  llm keys set gemini
  # Paste key here
  ```
-
llm_gemini-0.17.dist-info/RECORD ADDED
@@ -0,0 +1,7 @@
+ llm_gemini.py,sha256=pZGkm61gSK9DSIubZxmW4YVUnjTS4JuKUtK0yqo9ms4,16511
+ llm_gemini-0.17.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+ llm_gemini-0.17.dist-info/METADATA,sha256=elmIQKpgDA2SaG8SINSBx5s6mztvvVNZtL46BFvTUJo,7985
+ llm_gemini-0.17.dist-info/WHEEL,sha256=CmyFI0kx5cdEMTLiONQRbGQwjIoR1aIYB7eCAQ4KPJ0,91
+ llm_gemini-0.17.dist-info/entry_points.txt,sha256=n544bpgUPIBc5l_cnwsTxPc3gMGJHPtAyqBNp-CkMWk,26
+ llm_gemini-0.17.dist-info/top_level.txt,sha256=WUQmG6_2QKbT_8W4HH93qyKl_0SUteL4Ra6_PhyNGKU,11
+ llm_gemini-0.17.dist-info/RECORD,,
@@ -1,5 +1,5 @@
  Wheel-Version: 1.0
- Generator: setuptools (78.0.2)
+ Generator: setuptools (78.1.0)
  Root-Is-Purelib: true
  Tag: py3-none-any
 
llm_gemini.py CHANGED
@@ -68,6 +68,8 @@ def register_models(register):
  "gemma-3-27b-it",
  # 25th March 2025:
  "gemini-2.5-pro-exp-03-25",
+ # 4th April 2025 (paid):
+ "gemini-2.5-pro-preview-03-25",
  ]:
  can_google_search = model_id in GOOGLE_SEARCH_MODELS
  register(
llm_gemini-0.16.dist-info/RECORD REMOVED
@@ -1,7 +0,0 @@
- llm_gemini.py,sha256=GqrUhLIM3PxxrC3K6XNHy1cVcOCQfw44oL_36Cv7wug,16438
- llm_gemini-0.16.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
- llm_gemini-0.16.dist-info/METADATA,sha256=MVM2mmBLB1QhBVYClxvhnLKP1URrVFmk-9EyzoaEo1c,7725
- llm_gemini-0.16.dist-info/WHEEL,sha256=DK49LOLCYiurdXXOXwGJm6U4DkHkg4lcxjhqwRa0CP4,91
- llm_gemini-0.16.dist-info/entry_points.txt,sha256=n544bpgUPIBc5l_cnwsTxPc3gMGJHPtAyqBNp-CkMWk,26
- llm_gemini-0.16.dist-info/top_level.txt,sha256=WUQmG6_2QKbT_8W4HH93qyKl_0SUteL4Ra6_PhyNGKU,11
- llm_gemini-0.16.dist-info/RECORD,,
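The only code change in `llm_gemini.py` is appending the new paid preview model id to the registration loop. A minimal, self-contained sketch of that pattern (the `register()` stand-in and the contents of `GOOGLE_SEARCH_MODELS` here are illustrative assumptions, not the plugin's real implementation):

```python
# Illustrative GOOGLE_SEARCH_MODELS set -- the plugin's actual set may differ.
GOOGLE_SEARCH_MODELS = {"gemini-2.0-flash", "gemini-2.5-pro-preview-03-25"}

registered = []

def register(model_id, can_google_search):
    # Stand-in for llm's real register() hook, which receives model instances.
    registered.append((model_id, can_google_search))

for model_id in [
    "gemma-3-27b-it",
    # 25th March 2025:
    "gemini-2.5-pro-exp-03-25",
    # 4th April 2025 (paid) -- the id added in 0.17:
    "gemini-2.5-pro-preview-03-25",
]:
    # Each model is registered; Google-search support is gated on set membership.
    register(model_id, model_id in GOOGLE_SEARCH_MODELS)
```

Because capability is derived from set membership rather than per-model flags, adding a model id to the list is a one-line change, which is exactly what this release does.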