llm-gemini 0.15.tar.gz → 0.17.tar.gz
This diff shows the content of publicly available package versions as released to one of the supported registries. It is provided for informational purposes only and reflects the changes between the versions as they appear in their public registries.
- {llm_gemini-0.15 → llm_gemini-0.17}/PKG-INFO +36 -29
- {llm_gemini-0.15 → llm_gemini-0.17}/README.md +33 -27
- {llm_gemini-0.15 → llm_gemini-0.17}/llm_gemini.egg-info/PKG-INFO +36 -29
- {llm_gemini-0.15 → llm_gemini-0.17}/llm_gemini.py +16 -1
- {llm_gemini-0.15 → llm_gemini-0.17}/pyproject.toml +1 -1
- {llm_gemini-0.15 → llm_gemini-0.17}/LICENSE +0 -0
- {llm_gemini-0.15 → llm_gemini-0.17}/llm_gemini.egg-info/SOURCES.txt +0 -0
- {llm_gemini-0.15 → llm_gemini-0.17}/llm_gemini.egg-info/dependency_links.txt +0 -0
- {llm_gemini-0.15 → llm_gemini-0.17}/llm_gemini.egg-info/entry_points.txt +0 -0
- {llm_gemini-0.15 → llm_gemini-0.17}/llm_gemini.egg-info/requires.txt +0 -0
- {llm_gemini-0.15 → llm_gemini-0.17}/llm_gemini.egg-info/top_level.txt +0 -0
- {llm_gemini-0.15 → llm_gemini-0.17}/setup.cfg +0 -0
- {llm_gemini-0.15 → llm_gemini-0.17}/tests/test_gemini.py +0 -0
--- llm_gemini-0.15/PKG-INFO
+++ llm_gemini-0.17/PKG-INFO
@@ -1,6 +1,6 @@
-Metadata-Version: 2.
+Metadata-Version: 2.4
 Name: llm-gemini
-Version: 0.15
+Version: 0.17
 Summary: LLM plugin to access Google's Gemini family of models
 Author: Simon Willison
 License: Apache-2.0
@@ -19,6 +19,7 @@ Requires-Dist: pytest; extra == "test"
 Requires-Dist: pytest-recording; extra == "test"
 Requires-Dist: pytest-asyncio; extra == "test"
 Requires-Dist: nest-asyncio; extra == "test"
+Dynamic: license-file
 
 # llm-gemini
 
@@ -46,56 +47,63 @@ llm keys set gemini
 ```
 You can also set the API key by assigning it to the environment variable `LLM_GEMINI_KEY`.
 
-Now run the model using `-m gemini-
+Now run the model using `-m gemini-2.0-flash`, for example:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash "A short joke about a pelican and a walrus"
 ```
 
-> A pelican
+> A pelican and a walrus are sitting at a bar. The pelican orders a fishbowl cocktail, and the walrus orders a plate of clams. The bartender asks, "So, what brings you two together?"
 >
->
-
-
+> The walrus sighs and says, "It's a long story. Let's just say we met through a mutual friend... of the fin."
+
+You can set the [default model](https://llm.datasette.io/en/stable/setup.html#setting-a-custom-default-model) to avoid the extra `-m` option:
+
+```bash
+llm models default gemini-2.0-flash
+llm "A joke about a pelican and a walrus"
+```
 
 Other models are:
 
-- `gemini-
-- `gemini-
-- `gemini-exp-1114` - recent experimental #1
-- `gemini-exp-1121` - recent experimental #2
-- `gemini-exp-1206` - recent experimental #3
-- `gemini-2.0-flash-exp` - [Gemini 2.0 Flash](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash)
-- `learnlm-1.5-pro-experimental` - "an experimental task-specific model that has been trained to align with learning science principles" - [more details here](https://ai.google.dev/gemini-api/docs/learnlm).
-- `gemini-2.0-flash-thinking-exp-1219` - experimental "thinking" model from December 2024
-- `gemini-2.0-flash-thinking-exp-01-21` - experimental "thinking" model from January 2025
-- `gemini-2.0-flash` - Gemini 2.0 Flash
-- `gemini-2.0-flash-lite` - Gemini 2.0 Flash-Lite
-- `gemini-2.0-pro-exp-02-05` - experimental release of Gemini 2.0 Pro
+- `gemini-2.5-pro-exp-03-25` - free experimental release of Gemini 2.5 Pro
+- `gemini-2.5-pro-preview-03-25` - paid preview of Gemini 2.5 Pro
 - `gemma-3-27b-it` - [Gemma 3](https://blog.google/technology/developers/gemma-3/) 27B
+- `gemini-2.0-pro-exp-02-05` - experimental release of Gemini 2.0 Pro
+- `gemini-2.0-flash-lite` - Gemini 2.0 Flash-Lite
+- `gemini-2.0-flash` - Gemini 2.0 Flash
+- `gemini-2.0-flash-thinking-exp-01-21` - experimental "thinking" model from January 2025
+- `gemini-2.0-flash-thinking-exp-1219` - experimental "thinking" model from December 2024
+- `learnlm-1.5-pro-experimental` - "an experimental task-specific model that has been trained to align with learning science principles" - [more details here](https://ai.google.dev/gemini-api/docs/learnlm).
+- `gemini-2.0-flash-exp` - [Gemini 2.0 Flash](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash)
+- `gemini-exp-1206` - recent experimental #3
+- `gemini-exp-1121` - recent experimental #2
+- `gemini-exp-1114` - recent experimental #1
+- `gemini-1.5-flash-8b-latest` - the least expensive
+- `gemini-1.5-flash-latest`
 
 ### Images, audio and video
 
 Gemini models are multi-modal. You can provide images, audio or video files as input like this:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'extract text' -a image.jpg
 ```
 Or with a URL:
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash-lite 'describe image' \
   -a https://static.simonwillison.net/static/2024/pelicans.jpg
 ```
 Audio works too:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'transcribe audio' -a audio.mp3
 ```
 
 And video:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'describe what happens' -a video.mp4
 ```
 The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gemini-api/docs/file-prompting-strategies) on multi-modal prompting.
 
@@ -104,7 +112,7 @@ The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gem
 Use `-o json_object 1` to force the output to be JSON:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o json_object 1 \
   '3 largest cities in California, list of {"name": "..."}'
 ```
 Outputs:
@@ -119,7 +127,7 @@ Gemini models can [write and execute code](https://ai.google.dev/gemini-api/docs
 To enable this feature, use `-o code_execution 1`:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o code_execution 1 \
   'use python to calculate (factorial of 13) * 3'
 ```
 ### Google search
@@ -131,7 +139,7 @@ Using this feature may incur additional requirements in terms of how you use the
 To run a prompt with Google search enabled, use `-o google_search 1`:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o google_search 1 \
   'What happened in Ireland today?'
 ```
 
@@ -142,7 +150,7 @@ Use `llm logs -c --json` after running a prompt to see the full JSON response, w
 To chat interactively with the model, run `llm chat`:
 
 ```bash
-llm chat -m gemini-
+llm chat -m gemini-2.0-flash
 ```
 
 ## Embeddings
@@ -205,4 +213,3 @@ You will need to have stored a valid Gemini API key using this command first:
 llm keys set gemini
 # Paste key here
 ```
-
--- llm_gemini-0.15/README.md
+++ llm_gemini-0.17/README.md
@@ -24,56 +24,63 @@ llm keys set gemini
 ```
 You can also set the API key by assigning it to the environment variable `LLM_GEMINI_KEY`.
 
-Now run the model using `-m gemini-
+Now run the model using `-m gemini-2.0-flash`, for example:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash "A short joke about a pelican and a walrus"
 ```
 
-> A pelican
+> A pelican and a walrus are sitting at a bar. The pelican orders a fishbowl cocktail, and the walrus orders a plate of clams. The bartender asks, "So, what brings you two together?"
 >
->
-
-
+> The walrus sighs and says, "It's a long story. Let's just say we met through a mutual friend... of the fin."
+
+You can set the [default model](https://llm.datasette.io/en/stable/setup.html#setting-a-custom-default-model) to avoid the extra `-m` option:
+
+```bash
+llm models default gemini-2.0-flash
+llm "A joke about a pelican and a walrus"
+```
 
 Other models are:
 
-- `gemini-
-- `gemini-
-- `gemini-exp-1114` - recent experimental #1
-- `gemini-exp-1121` - recent experimental #2
-- `gemini-exp-1206` - recent experimental #3
-- `gemini-2.0-flash-exp` - [Gemini 2.0 Flash](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash)
-- `learnlm-1.5-pro-experimental` - "an experimental task-specific model that has been trained to align with learning science principles" - [more details here](https://ai.google.dev/gemini-api/docs/learnlm).
-- `gemini-2.0-flash-thinking-exp-1219` - experimental "thinking" model from December 2024
-- `gemini-2.0-flash-thinking-exp-01-21` - experimental "thinking" model from January 2025
-- `gemini-2.0-flash` - Gemini 2.0 Flash
-- `gemini-2.0-flash-lite` - Gemini 2.0 Flash-Lite
-- `gemini-2.0-pro-exp-02-05` - experimental release of Gemini 2.0 Pro
+- `gemini-2.5-pro-exp-03-25` - free experimental release of Gemini 2.5 Pro
+- `gemini-2.5-pro-preview-03-25` - paid preview of Gemini 2.5 Pro
 - `gemma-3-27b-it` - [Gemma 3](https://blog.google/technology/developers/gemma-3/) 27B
+- `gemini-2.0-pro-exp-02-05` - experimental release of Gemini 2.0 Pro
+- `gemini-2.0-flash-lite` - Gemini 2.0 Flash-Lite
+- `gemini-2.0-flash` - Gemini 2.0 Flash
+- `gemini-2.0-flash-thinking-exp-01-21` - experimental "thinking" model from January 2025
+- `gemini-2.0-flash-thinking-exp-1219` - experimental "thinking" model from December 2024
+- `learnlm-1.5-pro-experimental` - "an experimental task-specific model that has been trained to align with learning science principles" - [more details here](https://ai.google.dev/gemini-api/docs/learnlm).
+- `gemini-2.0-flash-exp` - [Gemini 2.0 Flash](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash)
+- `gemini-exp-1206` - recent experimental #3
+- `gemini-exp-1121` - recent experimental #2
+- `gemini-exp-1114` - recent experimental #1
+- `gemini-1.5-flash-8b-latest` - the least expensive
+- `gemini-1.5-flash-latest`
 
 ### Images, audio and video
 
 Gemini models are multi-modal. You can provide images, audio or video files as input like this:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'extract text' -a image.jpg
 ```
 Or with a URL:
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash-lite 'describe image' \
   -a https://static.simonwillison.net/static/2024/pelicans.jpg
 ```
 Audio works too:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'transcribe audio' -a audio.mp3
 ```
 
 And video:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'describe what happens' -a video.mp4
 ```
 The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gemini-api/docs/file-prompting-strategies) on multi-modal prompting.
 
@@ -82,7 +89,7 @@ The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gem
 Use `-o json_object 1` to force the output to be JSON:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o json_object 1 \
   '3 largest cities in California, list of {"name": "..."}'
 ```
 Outputs:
@@ -97,7 +104,7 @@ Gemini models can [write and execute code](https://ai.google.dev/gemini-api/docs
 To enable this feature, use `-o code_execution 1`:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o code_execution 1 \
   'use python to calculate (factorial of 13) * 3'
 ```
 ### Google search
@@ -109,7 +116,7 @@ Using this feature may incur additional requirements in terms of how you use the
 To run a prompt with Google search enabled, use `-o google_search 1`:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o google_search 1 \
   'What happened in Ireland today?'
 ```
 
@@ -120,7 +127,7 @@ Use `llm logs -c --json` after running a prompt to see the full JSON response, w
 To chat interactively with the model, run `llm chat`:
 
 ```bash
-llm chat -m gemini-
+llm chat -m gemini-2.0-flash
 ```
 
 ## Embeddings
@@ -183,4 +190,3 @@ You will need to have stored a valid Gemini API key using this command first:
 llm keys set gemini
 # Paste key here
 ```
-
--- llm_gemini-0.15/llm_gemini.egg-info/PKG-INFO
+++ llm_gemini-0.17/llm_gemini.egg-info/PKG-INFO
(identical changes to PKG-INFO above)
--- llm_gemini-0.15/llm_gemini.py
+++ llm_gemini-0.17/llm_gemini.py
@@ -66,6 +66,10 @@ def register_models(register):
         "gemini-2.0-flash-lite",
         # Released 12th March 2025:
         "gemma-3-27b-it",
+        # 25th March 2025:
+        "gemini-2.5-pro-exp-03-25",
+        # 4th April 2025 (paid):
+        "gemini-2.5-pro-preview-03-25",
     ]:
         can_google_search = model_id in GOOGLE_SEARCH_MODELS
         register(
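In the hunk above, `register_models` decides each model's Google-search capability by set membership before registering it, so newly added IDs pick up the right flag automatically. A minimal self-contained sketch of that gating pattern — the contents of `GOOGLE_SEARCH_MODELS` and the `registered` list here are illustrative stand-ins, not the plugin's actual values:

```python
# Illustrative stand-in for the plugin's capability set; the real set in
# llm_gemini.py may contain different model IDs.
GOOGLE_SEARCH_MODELS = {"gemini-2.0-flash", "gemini-2.5-pro-exp-03-25"}

registered = []
for model_id in [
    "gemini-2.0-flash-lite",
    "gemma-3-27b-it",
    # Added in 0.17:
    "gemini-2.5-pro-exp-03-25",
    "gemini-2.5-pro-preview-03-25",
]:
    # Capability is derived from set membership, not hard-coded per model.
    can_google_search = model_id in GOOGLE_SEARCH_MODELS
    registered.append((model_id, can_google_search))
```
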
@@ -458,4 +462,15 @@ def register_commands(cli):
             f"https://generativelanguage.googleapis.com/v1beta/models?key={key}",
         )
         response.raise_for_status()
-        click.echo(json.dumps(response.json(), indent=2))
+        click.echo(json.dumps(response.json()["models"], indent=2))
+
+    @gemini.command()
+    @click.option("--key", help="API key to use")
+    def files(key):
+        "List of files uploaded to the Gemini API"
+        key = llm.get_key(key, "gemini", "LLM_GEMINI_KEY")
+        response = httpx.get(
+            f"https://generativelanguage.googleapis.com/v1beta/files?key={key}",
+        )
+        response.raise_for_status()
+        click.echo(json.dumps(response.json()["files"], indent=2))