geminimock 0.1.3 → 0.1.4
- package/README.md +187 -22
- package/package.json +9 -1
package/README.md
CHANGED
@@ -2,6 +2,24 @@
 
 OpenAI-compatible chat API server backed by Gemini Code Assist OAuth.
 
+## Terms of Service Warning
+
+> [!CAUTION]
+> Using this project may violate Google's Terms of Service. Some users have reported account suspension or shadow restrictions.
+>
+> High-risk scenarios:
+> - Fresh Google accounts are more likely to be flagged
+> - Newly created accounts with Pro/Ultra subscriptions may be reviewed or restricted quickly
+>
+> By using this project, you acknowledge:
+> - This is an unofficial tool and is not endorsed by Google
+> - Your account access may be limited, suspended, or permanently banned
+> - You accept full responsibility for any risk or loss resulting from use
+>
+> Recommendation:
+> - Prefer an established account that is not critical to your primary services
+> - Avoid creating new accounts specifically for this workflow
+
 ## Install
 
 - global: `npm i -g geminimock`
@@ -85,14 +103,84 @@ Installed CLI command:
 - use `geminimock auth accounts list` to inspect active account and IDs
 - use `geminimock auth accounts use <id|email>` to pin a specific account manually
 
-##
+## Service Usage Guide
+
+### 1) Start service
+
+Run OAuth login first:
+
+```bash
+geminimock auth login
+```
+
+Start in background:
+
+```bash
+geminimock server start
+geminimock server status
+```
+
+- default URL is `http://127.0.0.1:43173`
+- if `43173` is in use, an available port is selected automatically
+- always check the actual URL with `geminimock server status`
+- log file: `~/.geminimock/server.log`
+
+Run in foreground:
+
+```bash
+geminimock serve
+```
+
+Quick health check:
+
+```bash
+curl -sS http://127.0.0.1:43173/health
+```
+
+### 2) API endpoints
 
 - `GET /health`
 - `GET /v1/auth/status`
 - `GET /v1/models`
 - `POST /v1/chat/completions`
 
-
+Check auth status:
+
+```bash
+curl -sS http://127.0.0.1:43173/v1/auth/status
+```
+
+Response:
+
+```json
+{"authenticated":true}
+```
+
+List available models from the current account/project:
+
+```bash
+curl -sS http://127.0.0.1:43173/v1/models
+```
+
+Response format (OpenAI-style model list):
+
+```json
+{
+  "object": "list",
+  "data": [
+    {
+      "id": "gemini-2.5-flash",
+      "object": "model",
+      "created": 0,
+      "owned_by": "google-code-assist"
+    }
+  ]
+}
+```
+
+### 3) Chat completion call pattern
+
+Basic request:
 
 ```bash
 curl -sS -X POST http://127.0.0.1:43173/v1/chat/completions \
@@ -100,41 +188,118 @@ curl -sS -X POST http://127.0.0.1:43173/v1/chat/completions \
   -d '{"model":"gemini-2.5-flash","messages":[{"role":"user","content":"Hello"}]}'
 ```
 
+Basic response format (OpenAI-style):
+
+```json
+{
+  "id": "chatcmpl-...",
+  "object": "chat.completion",
+  "created": 1772175296,
+  "model": "gemini-2.5-flash",
+  "choices": [
+    {
+      "index": 0,
+      "finish_reason": "STOP",
+      "message": {
+        "role": "assistant",
+        "content": "Hello! How can I help you today?"
+      }
+    }
+  ],
+  "usage": {
+    "prompt_tokens": 1,
+    "completion_tokens": 9,
+    "total_tokens": 31
+  }
+}
+```
+
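For scripting against this endpoint, the fields above can be picked out of the parsed JSON. A minimal Python sketch (the `extract_reply` helper and the sample payload are illustrative, not part of geminimock):

```python
# Sketch: pulling the reply text and token count out of an
# OpenAI-style /v1/chat/completions response body.
import json

sample = json.loads("""
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1772175296,
  "model": "gemini-2.5-flash",
  "choices": [
    {
      "index": 0,
      "finish_reason": "STOP",
      "message": {"role": "assistant", "content": "Hello! How can I help you today?"}
    }
  ],
  "usage": {"prompt_tokens": 1, "completion_tokens": 9, "total_tokens": 31}
}
""")

def extract_reply(resp: dict) -> tuple[str, int]:
    # Reply text lives at choices[0].message.content; token counts under usage.
    text = resp["choices"][0]["message"]["content"]
    total = resp.get("usage", {}).get("total_tokens", 0)
    return text, total

text, total = extract_reply(sample)
print(text)   # Hello! How can I help you today?
print(total)  # 31
```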
+Streaming request (`stream: true`, SSE):
+
 ```bash
-curl -sS http://127.0.0.1:43173/v1/
+curl -N -sS -X POST http://127.0.0.1:43173/v1/chat/completions \
+  -H 'content-type: application/json' \
+  -d '{"model":"gemini-2.5-flash","stream":true,"messages":[{"role":"user","content":"Hello"}]}'
+```
+
+Streaming response format:
+
+```text
+data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"role":"assistant","content":"Hel"}}]}
+data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":"lo"}}]}
+data: {"id":"...","object":"chat.completion.chunk","choices":[{"finish_reason":"stop","delta":{}}]}
+data: [DONE]
 ```
 
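A client consuming the stream splits on `data:` lines, stops at `[DONE]`, and concatenates the `delta.content` pieces. A Python sketch of that loop (the `collect_stream` helper and the sample chunks are illustrative, not part of geminimock):

```python
# Sketch: reassembling the streamed reply from SSE "data:" lines,
# assuming the chunk shape shown above.
import json

def collect_stream(lines):
    """Concatenate delta.content from 'data:' lines until [DONE]."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

sample = [
    'data: {"id":"x","object":"chat.completion.chunk","choices":[{"delta":{"role":"assistant","content":"Hel"}}]}',
    'data: {"id":"x","object":"chat.completion.chunk","choices":[{"delta":{"content":"lo"}}]}',
    'data: {"id":"x","object":"chat.completion.chunk","choices":[{"finish_reason":"stop","delta":{}}]}',
    'data: [DONE]',
]
print(collect_stream(sample))  # Hello
```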
-
+### 4) How answers are generated
 
--
-
--
+- API is stateless per request
+- server does not keep conversation memory between calls
+- to continue a conversation, send full history in `messages` each call
+- response text is mapped to `choices[0].message.content`
+- token usage is mapped to `usage.prompt_tokens`, `usage.completion_tokens`, `usage.total_tokens`
+
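The statelessness above can be sketched as a client-side loop that resends the whole history on every call (Python; `fake_complete` is a hypothetical stand-in for the HTTP round trip, not a geminimock API):

```python
# Sketch: the server keeps no conversation memory, so the client owns the
# history and sends all of it each time.
def fake_complete(messages):
    # real code would POST {"model": ..., "messages": messages}
    # to /v1/chat/completions; here we just echo how many were sent
    return f"received {len(messages)} message(s)"

history = []

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_complete(history)  # the FULL history goes out each call
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Hello")                 # first call sends 1 message
ask("What did I just say?")  # second call sends 3: user, assistant, user
```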
+Model resolution behavior:
+
+- if requested model is unavailable, alias mapping may be applied
+- example: `gemini-3-flash` -> `gemini-3-flash-preview` (when available)
+- model list normalizes `_vertex` suffix
+
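A rough sketch of this resolution logic (Python; the alias table and lookup order are assumptions inferred from the single documented example, not the server's actual implementation):

```python
# Sketch: fall back to an alias when the requested model is unavailable,
# and strip the `_vertex` suffix when listing model IDs.
ALIASES = {"gemini-3-flash": "gemini-3-flash-preview"}  # hypothetical table

def resolve_model(requested, available):
    if requested in available:
        return requested
    alias = ALIASES.get(requested)
    if alias in available:
        return alias
    return None  # unresolved; upstream would answer 404 NOT_FOUND

def normalize_id(model_id):
    # the model list normalizes a trailing `_vertex` suffix
    return model_id.removesuffix("_vertex")

available = {"gemini-2.5-flash", "gemini-3-flash-preview"}
print(resolve_model("gemini-3-flash", available))   # gemini-3-flash-preview
print(normalize_id("gemini-2.5-flash_vertex"))      # gemini-2.5-flash
```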
+### 5) System prompt and role mapping
+
+System prompt usage example:
 
 ```bash
 curl -sS -X POST http://127.0.0.1:43173/v1/chat/completions \
   -H 'content-type: application/json' \
-  -d '{"model":"gemini-2.5-flash","messages":[{"role":"system","content":"You are concise."},{"role":"user","content":"
+  -d '{"model":"gemini-2.5-flash","messages":[{"role":"system","content":"You are concise."},{"role":"user","content":"Summarize OAuth in one sentence."}]}'
 ```
 
-
+Role mapping rules:
 
-- `
--
-
-
-- Check currently available models for that account/project:
-- `geminimock models list`
+- `system` messages are merged and sent as Gemini `systemInstruction`
+- `assistant` maps to Gemini `model`
+- `user` maps to Gemini `user`
+- `developer` and `tool` are accepted but mapped as `user`
 
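The mapping rules can be sketched as a pure function (Python; the field names follow the public Gemini `generateContent` request shape, and the `to_gemini` helper is illustrative rather than geminimock's actual code):

```python
# Sketch of the documented role mapping: system messages merge into
# systemInstruction, other roles become Gemini user/model turns.
ROLE_MAP = {"assistant": "model", "user": "user", "developer": "user", "tool": "user"}

def to_gemini(messages):
    system_parts, contents = [], []
    for m in messages:
        if m["role"] == "system":
            system_parts.append(m["content"])  # merged, not sent as a turn
        else:
            contents.append({"role": ROLE_MAP[m["role"]],
                             "parts": [{"text": m["content"]}]})
    req = {"contents": contents}
    if system_parts:
        req["systemInstruction"] = {"parts": [{"text": "\n".join(system_parts)}]}
    return req

req = to_gemini([
    {"role": "system", "content": "You are concise."},
    {"role": "user", "content": "Summarize OAuth in one sentence."},
])
# req["contents"][0]["role"] == "user"; the system text sits in systemInstruction
```

Note that `contents` must end up with at least one `user`/`model` turn: as stated above, a request carrying only `system` messages may fail upstream with `400 INVALID_ARGUMENT`.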
-
+Important:
 
--
--
-
-
-
+- include at least one non-`system` message (`user` or `assistant`)
+- sending only `system` may fail with `400 INVALID_ARGUMENT` from upstream
+
+### 6) Error response style
+
+Validation/route errors:
+
+```json
+{"error":{"message":"..."}}
+```
+
+Common upstream errors:
+
+- `403 PERMISSION_DENIED`: active account lacks permission for the resolved project/model
+- `404 NOT_FOUND`: requested model or entity does not exist in the current project/account
+- `429 RESOURCE_EXHAUSTED`: quota/capacity/rate limit
+
+Troubleshooting steps:
+
+1. Check current auth: `curl -sS http://127.0.0.1:43173/v1/auth/status`
+2. Check available models: `geminimock models list`
+3. Check the active account and switch if needed:
+   - `geminimock auth accounts list`
+   - `geminimock auth accounts use <id|email>`
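For `429 RESOURCE_EXHAUSTED` specifically, client-side retry with exponential backoff is a common remedy. A Python sketch (the `with_backoff` helper is hypothetical; `send` stands in for the real HTTP call):

```python
# Sketch: retry a request on HTTP 429 with exponential backoff.
import time

def with_backoff(send, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call send() until it stops returning 429 or retries run out."""
    for attempt in range(retries + 1):
        status, body = send()
        if status != 429:
            return status, body
        if attempt < retries:
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body

# Simulated transport: two 429s, then success (no real network here).
responses = iter([(429, ""), (429, ""), (200, "ok")])
status, body = with_backoff(lambda: next(responses), sleep=lambda s: None)
print(status)  # 200
```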
 
 ## GitHub Release Automation
 
 - On push to `main`, GitHub Actions reads `package.json` version and creates a release tag `v<version>` if it does not exist.
 - Release notes are generated automatically from the merged changes.
--
+- On each push to `main`, `npm-publish.yml` publishes to npm using Trusted Publishing (OIDC) if that version is not already published.
+- To publish a new npm release: bump `package.json` version, commit, and push to `main`.
+
+Trusted Publisher setup values for npm:
+
+- Publisher: `GitHub Actions`
+- Organization or user: `yldst-dev`
+- Repository: `GeminiMock`
+- Workflow filename: `npm-publish.yml`
+- Environment name: leave empty
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "geminimock",
-  "version": "0.1.3",
+  "version": "0.1.4",
   "type": "module",
   "bin": {
     "geminimock": "dist/cli.js"

@@ -10,6 +10,14 @@
     "README.md"
   ],
   "main": "dist/cli.js",
+  "repository": {
+    "type": "git",
+    "url": "git+https://github.com/yldst-dev/GeminiMock.git"
+  },
+  "homepage": "https://github.com/yldst-dev/GeminiMock#readme",
+  "bugs": {
+    "url": "https://github.com/yldst-dev/GeminiMock/issues"
+  },
   "engines": {
     "node": ">=18.17"
   },