llm-gemini 0.15-py3-none-any.whl → 0.17-py3-none-any.whl
- {llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info}/METADATA +36 -29
- llm_gemini-0.17.dist-info/RECORD +7 -0
- {llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info}/WHEEL +1 -1
- llm_gemini.py +16 -1
- llm_gemini-0.15.dist-info/RECORD +0 -7
- {llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info}/entry_points.txt +0 -0
- {llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info/licenses}/LICENSE +0 -0
- {llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info}/top_level.txt +0 -0
{llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info}/METADATA
CHANGED
@@ -1,6 +1,6 @@
-Metadata-Version: 2.
+Metadata-Version: 2.4
 Name: llm-gemini
-Version: 0.15
+Version: 0.17
 Summary: LLM plugin to access Google's Gemini family of models
 Author: Simon Willison
 License: Apache-2.0
@@ -19,6 +19,7 @@ Requires-Dist: pytest; extra == "test"
 Requires-Dist: pytest-recording; extra == "test"
 Requires-Dist: pytest-asyncio; extra == "test"
 Requires-Dist: nest-asyncio; extra == "test"
+Dynamic: license-file
 
 # llm-gemini
 
@@ -46,56 +47,63 @@ llm keys set gemini
 ```
 You can also set the API key by assigning it to the environment variable `LLM_GEMINI_KEY`.
 
-Now run the model using `-m gemini-
+Now run the model using `-m gemini-2.0-flash`, for example:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash "A short joke about a pelican and a walrus"
 ```
 
-> A pelican
+> A pelican and a walrus are sitting at a bar. The pelican orders a fishbowl cocktail, and the walrus orders a plate of clams. The bartender asks, "So, what brings you two together?"
 >
->
-
-
+> The walrus sighs and says, "It's a long story. Let's just say we met through a mutual friend... of the fin."
+
+You can set the [default model](https://llm.datasette.io/en/stable/setup.html#setting-a-custom-default-model) to avoid the extra `-m` option:
+
+```bash
+llm models default gemini-2.0-flash
+llm "A joke about a pelican and a walrus"
+```
 
 Other models are:
 
-- `gemini-
-- `gemini-
-- `gemini-exp-1114` - recent experimental #1
-- `gemini-exp-1121` - recent experimental #2
-- `gemini-exp-1206` - recent experimental #3
-- `gemini-2.0-flash-exp` - [Gemini 2.0 Flash](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash)
-- `learnlm-1.5-pro-experimental` - "an experimental task-specific model that has been trained to align with learning science principles" - [more details here](https://ai.google.dev/gemini-api/docs/learnlm).
-- `gemini-2.0-flash-thinking-exp-1219` - experimental "thinking" model from December 2024
-- `gemini-2.0-flash-thinking-exp-01-21` - experimental "thinking" model from January 2025
-- `gemini-2.0-flash` - Gemini 2.0 Flash
-- `gemini-2.0-flash-lite` - Gemini 2.0 Flash-Lite
-- `gemini-2.0-pro-exp-02-05` - experimental release of Gemini 2.0 Pro
+- `gemini-2.5-pro-exp-03-25` - free experimental release of Gemini 2.5 Pro
+- `gemini-2.5-pro-preview-03-25` - paid preview of Gemini 2.5 Pro
 - `gemma-3-27b-it` - [Gemma 3](https://blog.google/technology/developers/gemma-3/) 27B
+- `gemini-2.0-pro-exp-02-05` - experimental release of Gemini 2.0 Pro
+- `gemini-2.0-flash-lite` - Gemini 2.0 Flash-Lite
+- `gemini-2.0-flash` - Gemini 2.0 Flash
+- `gemini-2.0-flash-thinking-exp-01-21` - experimental "thinking" model from January 2025
+- `gemini-2.0-flash-thinking-exp-1219` - experimental "thinking" model from December 2024
+- `learnlm-1.5-pro-experimental` - "an experimental task-specific model that has been trained to align with learning science principles" - [more details here](https://ai.google.dev/gemini-api/docs/learnlm).
+- `gemini-2.0-flash-exp` - [Gemini 2.0 Flash](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash)
+- `gemini-exp-1206` - recent experimental #3
+- `gemini-exp-1121` - recent experimental #2
+- `gemini-exp-1114` - recent experimental #1
+- `gemini-1.5-flash-8b-latest` - the least expensive
+- `gemini-1.5-flash-latest`
 
 ### Images, audio and video
 
 Gemini models are multi-modal. You can provide images, audio or video files as input like this:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'extract text' -a image.jpg
 ```
 Or with a URL:
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash-lite 'describe image' \
   -a https://static.simonwillison.net/static/2024/pelicans.jpg
 ```
 Audio works too:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'transcribe audio' -a audio.mp3
 ```
 
 And video:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash 'describe what happens' -a video.mp4
 ```
 The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gemini-api/docs/file-prompting-strategies) on multi-modal prompting.
 
@@ -104,7 +112,7 @@ The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gem
 Use `-o json_object 1` to force the output to be JSON:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o json_object 1 \
   '3 largest cities in California, list of {"name": "..."}'
 ```
 Outputs:
@@ -119,7 +127,7 @@ Gemini models can [write and execute code](https://ai.google.dev/gemini-api/docs
 To enable this feature, use `-o code_execution 1`:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o code_execution 1 \
   'use python to calculate (factorial of 13) * 3'
 ```
 ### Google search
@@ -131,7 +139,7 @@ Using this feature may incur additional requirements in terms of how you use the
 To run a prompt with Google search enabled, use `-o google_search 1`:
 
 ```bash
-llm -m gemini-
+llm -m gemini-2.0-flash -o google_search 1 \
   'What happened in Ireland today?'
 ```
 
@@ -142,7 +150,7 @@ Use `llm logs -c --json` after running a prompt to see the full JSON response, w
 To chat interactively with the model, run `llm chat`:
 
 ```bash
-llm chat -m gemini-
+llm chat -m gemini-2.0-flash
 ```
 
 ## Embeddings
@@ -205,4 +213,3 @@ You will need to have stored a valid Gemini API key using this command first:
 llm keys set gemini
 # Paste key here
 ```
-
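The METADATA changes above (Metadata-Version 2.4 and the new `Dynamic: license-file` field) are plain RFC 822-style headers, which Python's stdlib can parse directly. A minimal sketch; the header values are copied from the diff, and the snippet itself is illustrative rather than part of the package:

```python
from email.parser import HeaderParser

# Header values copied from the llm-gemini 0.17 METADATA diff above.
METADATA_TEXT = """\
Metadata-Version: 2.4
Name: llm-gemini
Version: 0.17
Summary: LLM plugin to access Google's Gemini family of models
Author: Simon Willison
License: Apache-2.0
Dynamic: license-file
"""

# HeaderParser reads RFC 822-style "Key: value" headers into a
# Message object that supports dict-style access.
fields = HeaderParser().parsestr(METADATA_TEXT)
print(fields["Name"], fields["Version"], fields["Metadata-Version"])
```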
llm_gemini-0.17.dist-info/RECORD
ADDED
@@ -0,0 +1,7 @@
+llm_gemini.py,sha256=pZGkm61gSK9DSIubZxmW4YVUnjTS4JuKUtK0yqo9ms4,16511
+llm_gemini-0.17.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+llm_gemini-0.17.dist-info/METADATA,sha256=elmIQKpgDA2SaG8SINSBx5s6mztvvVNZtL46BFvTUJo,7985
+llm_gemini-0.17.dist-info/WHEEL,sha256=CmyFI0kx5cdEMTLiONQRbGQwjIoR1aIYB7eCAQ4KPJ0,91
+llm_gemini-0.17.dist-info/entry_points.txt,sha256=n544bpgUPIBc5l_cnwsTxPc3gMGJHPtAyqBNp-CkMWk,26
+llm_gemini-0.17.dist-info/top_level.txt,sha256=WUQmG6_2QKbT_8W4HH93qyKl_0SUteL4Ra6_PhyNGKU,11
+llm_gemini-0.17.dist-info/RECORD,,
llm_gemini.py
CHANGED
@@ -66,6 +66,10 @@ def register_models(register):
         "gemini-2.0-flash-lite",
         # Released 12th March 2025:
         "gemma-3-27b-it",
+        # 25th March 2025:
+        "gemini-2.5-pro-exp-03-25",
+        # 4th April 2025 (paid):
+        "gemini-2.5-pro-preview-03-25",
     ]:
         can_google_search = model_id in GOOGLE_SEARCH_MODELS
         register(
@@ -458,4 +462,15 @@ def register_commands(cli):
             f"https://generativelanguage.googleapis.com/v1beta/models?key={key}",
         )
         response.raise_for_status()
-        click.echo(json.dumps(response.json(), indent=2))
+        click.echo(json.dumps(response.json()["models"], indent=2))
+
+    @gemini.command()
+    @click.option("--key", help="API key to use")
+    def files(key):
+        "List of files uploaded to the Gemini API"
+        key = llm.get_key(key, "gemini", "LLM_GEMINI_KEY")
+        response = httpx.get(
+            f"https://generativelanguage.googleapis.com/v1beta/files?key={key}",
+        )
+        response.raise_for_status()
+        click.echo(json.dumps(response.json()["files"], indent=2))
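The new `files` subcommand follows the same pattern as the existing `models` command: GET the endpoint, call `raise_for_status()`, then pretty-print a single key of the JSON body. A minimal sketch of that response-handling step; the `extract_files` helper and the sample payload are hypothetical, shaped like a Gemini Files API response, and are not part of the plugin:

```python
import json

def extract_files(payload: dict) -> str:
    # Pull the "files" key from a Files API response dict and
    # pretty-print it, as the new command does via click.echo().
    return json.dumps(payload.get("files", []), indent=2)

# Hypothetical payload shaped like a Gemini Files API response.
sample = {"files": [{"name": "files/abc-123", "mimeType": "audio/mpeg"}]}
print(extract_files(sample))
```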
llm_gemini-0.15.dist-info/RECORD
DELETED
@@ -1,7 +0,0 @@
-llm_gemini.py,sha256=DdgyLfTuDS5RNt2iaFX9UZnDGUHezObGj-TkyWLJirw,15938
-llm_gemini-0.15.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
-llm_gemini-0.15.dist-info/METADATA,sha256=8qhtAEFniW5fn-6Qz1zFFrjaZwqqD_kNXJNheU8SG_M,7643
-llm_gemini-0.15.dist-info/WHEEL,sha256=52BFRY2Up02UkjOa29eZOS2VxUrpPORXg1pkohGGUS8,91
-llm_gemini-0.15.dist-info/entry_points.txt,sha256=n544bpgUPIBc5l_cnwsTxPc3gMGJHPtAyqBNp-CkMWk,26
-llm_gemini-0.15.dist-info/top_level.txt,sha256=WUQmG6_2QKbT_8W4HH93qyKl_0SUteL4Ra6_PhyNGKU,11
-llm_gemini-0.15.dist-info/RECORD,,
{llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info}/entry_points.txt
File without changes
{llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info/licenses}/LICENSE
File without changes
{llm_gemini-0.15.dist-info → llm_gemini-0.17.dist-info}/top_level.txt
File without changes
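The `sha256=` values in the RECORD entries above are SHA-256 digests, urlsafe-base64 encoded with the trailing `=` padding stripped, as the wheel RECORD format specifies. A minimal sketch of reproducing such a hash:

```python
import base64
import hashlib

def record_hash(data: bytes) -> str:
    # RECORD stores each file's SHA-256 digest urlsafe-base64
    # encoded, with the trailing "=" padding removed.
    digest = hashlib.sha256(data).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

print("sha256=" + record_hash(b"example file contents"))
```

Running this over the actual files in the wheel would let you verify the entries above end to end.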