mimo2codex 0.1.15 → 0.1.17
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/AGENTS.md +24 -5
- package/README.md +70 -6
- package/README.zh.md +69 -6
- package/dist/admin/router.js +117 -2
- package/dist/admin/router.js.map +1 -1
- package/dist/cli.js +67 -147
- package/dist/cli.js.map +1 -1
- package/dist/config.js +16 -10
- package/dist/config.js.map +1 -1
- package/dist/db/logs.js +80 -0
- package/dist/db/logs.js.map +1 -1
- package/dist/providers/generic.js +96 -0
- package/dist/providers/generic.js.map +1 -0
- package/dist/providers/genericLoader.js +229 -0
- package/dist/providers/genericLoader.js.map +1 -0
- package/dist/providers/registry.js +48 -10
- package/dist/providers/registry.js.map +1 -1
- package/dist/server.js +201 -1
- package/dist/server.js.map +1 -1
- package/dist/setup/snippets.js +187 -0
- package/dist/setup/snippets.js.map +1 -0
- package/dist/translate/reqToChat.js +42 -2
- package/dist/translate/reqToChat.js.map +1 -1
- package/dist/upstream/openaiCompatClient.js +32 -11
- package/dist/upstream/openaiCompatClient.js.map +1 -1
- package/dist/web/assets/index-D19ffnSJ.css +1 -0
- package/dist/web/assets/index-DPLJprJ4.js +67 -0
- package/dist/web/index.html +2 -2
- package/doc/generic-providers.md +399 -0
- package/doc/generic-providers.zh.md +399 -0
- package/doc/mimoskill.md +295 -0
- package/doc/mimoskill.zh.md +295 -0
- package/mimoskill/SKILL.md +80 -13
- package/mimoskill/references/ocr_workflow.md +240 -0
- package/mimoskill/scripts/generate_image.py +163 -0
- package/mimoskill/scripts/mimo_chat.py +111 -42
- package/mimoskill/scripts/ocr.py +445 -0
- package/package.json +5 -4
- package/dist/web/assets/index-BoykBCnY.js +0 -67
- package/dist/web/assets/index-DAJbSznk.css +0 -1
package/doc/mimoskill.md
ADDED
@@ -0,0 +1,295 @@

# mimoskill · Detailed Guide

> English · [中文](./mimoskill.zh.md)
>
> Back to: [README English](../README.md) · [README 中文](../README.zh.md)

`mimoskill/` is a bundle of helper scripts and reference docs at the project root. It exists because some things MiMo / DeepSeek / most chat-only LLMs can't do natively (image generation, or image understanding when the chat model is text-only), and because Codex hardcodes a few capability assumptions on the client side that the proxy layer can't override.

The proxy and mimoskill are **completely independent**: mimoskill works without `mimo2codex` running, and mimo2codex works without mimoskill. They compose by **convention**: when the proxy detects a capability gap, it leaves placeholder text in the message that points the LLM at the right `mimoskill/scripts/*.py` script.

## When does it trigger?

> Short answer: **"the chat model does what it can; mimoskill fills the gap."**

| Capability | Chat model can do it | Chat model CAN'T do it |
|---|---|---|
| Read / OCR / recognize an image | proxy forwards the image to the model directly; **mimoskill not triggered** | proxy strips the image and inserts `[N image attachment(s) omitted: … python3 mimoskill/scripts/ocr.py <path> …]`; the LLM reads that placeholder plus AGENTS.md and **runs `ocr.py`** |
| Generate an image | no mainstream chat model has native image generation | **mimoskill always triggers** — `scripts/generate_image.py` or `scripts/generate_pet.py` |
| Web search | proxy forwards Codex's `web_search` to MiMo's built-in search on `sk-*` (pay-as-you-go) keys; auto-skipped on `tp-*` (token-plan) keys and DeepSeek | `scripts/mimo_chat.py` follows the same rule — auto-enables web search on MiMo `sk-*` keys, skips it on `tp-*` / pollinations. No flag needed. |
| TTS / ASR | not exposed in Codex | `scripts/mimo_chat.py` calls MiMo's separate endpoints directly |

The triggering **happens in the LLM**, not in the proxy. The proxy only does protocol translation plus minimal compatibility fixups (image stripping, placeholder injection). Codex reads [AGENTS.md](../AGENTS.md) and [mimoskill/SKILL.md](../mimoskill/SKILL.md), notices the placeholder or the user's intent, and decides which script to invoke. The script is an independent subprocess that **bypasses the proxy entirely** — OCR talks to MiMo or pollinations directly, image generation talks to pollinations or OpenAI directly, and so on.

## Layout

```
mimoskill/
├── SKILL.md                 # skill manifest the LLM reads — trigger rules, decision tree
├── scripts/
│   ├── mimo_chat.py         # direct chat / vision / web-search call to MiMo (stdlib-only)
│   ├── ocr.py               # OCR / image recognition via MiMo or free pollinations
│   ├── generate_image.py    # general image generation (any style / subject)
│   ├── generate_pet.py      # Codex pet generation (chibi-sticker style)
│   └── install_pet.sh       # install a generated PNG into Codex's pet directory
├── references/
│   ├── models.md            # MiMo capability matrix + field quirks
│   ├── ocr_workflow.md      # full OCR mode reference, exit codes, JSON shape
│   └── pet_workflow.md      # single-image vs animated bundle generation
└── assets/
    └── pet_prompt_template.md   # tuned chibi-sticker prompt templates
```

## Scripts in depth

### `scripts/mimo_chat.py` — chat / vision (no key required)

Stdlib-only Python script for one-shot or streaming chat. Two engines, following the same `--engine auto|mimo|pollinations` pattern as `ocr.py`:

| Engine | Needs key | Notes |
|---|---|---|
| `mimo` | `MIMO_API_KEY` | Best quality. Web search is auto-enabled on `sk-*` keys (no flag needed). TTS / ASR are also MiMo-only. |
| `pollinations` | **NO** | Free public endpoint at `text.pollinations.ai`. Text and vision work; no web search / TTS / ASR. |

Auto resolution: `mimo` if `MIMO_API_KEY` is set, otherwise `pollinations`. So this script works **without any key** for text and vision chat.

```bash
# Zero-setup — uses the pollinations fallback
python3 mimoskill/scripts/mimo_chat.py "tell me a joke"
python3 mimoskill/scripts/mimo_chat.py --image https://x/y.png "describe this"

# Best quality + MiMo native features (web search auto-on with sk-*, TTS, ASR)
export MIMO_API_KEY=sk-xxxxxxxxxxxxxxxx
python3 mimoskill/scripts/mimo_chat.py "今天上海天气"   # web_search auto-included
python3 mimoskill/scripts/mimo_chat.py --model mimo-v2.5-pro --max-tokens 8000 --stream "long answer please"
```

For the mimo engine, the script handles MiMo's quirks transparently: `max_completion_tokens` (not `max_tokens`), the required `text` part next to `image_url`, `reasoning_content` round-tripping for multi-turn conversations, and web search plugin invocation.

| Flag | Notes |
|---|---|
| `--engine` | `auto` / `mimo` / `pollinations` (default `auto`) |
| `--model` | default `mimo-v2.5-pro` (mimo engine). For vision use `mimo-v2.5` / `mimo-v2-omni` |
| `--pollinations-model` | default `openai` (vision-capable). Alternatives: `openai-large`, `openai-fast` |
| `--image URL` | attach an image; auto-bumps to a vision-capable model |
| `--stream` | SSE streaming |
| `--max-tokens N` | maps to `max_completion_tokens` on mimo, `max_tokens` on pollinations |
| `--temperature F` | default 0.7 |

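Those quirks can be sketched in a few lines. This is a hypothetical illustration, not `mimo_chat.py`'s actual code; `build_mimo_payload` is an invented name:

```python
# Hypothetical sketch of the MiMo request quirks described above.
# build_mimo_payload is an invented helper, not mimo_chat.py's actual code.

def build_mimo_payload(prompt, image_url=None, max_tokens=None):
    """Build a Chat Completions body with MiMo's field quirks applied."""
    if image_url:
        # Doc: MiMo requires a `text` part alongside every `image_url` part.
        content = [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ]
    else:
        content = prompt
    body = {
        "model": "mimo-v2.5-pro",
        "messages": [{"role": "user", "content": content}],
    }
    if max_tokens is not None:
        # Doc: MiMo expects max_completion_tokens, not max_tokens.
        body["max_completion_tokens"] = max_tokens
    return body
```

The same shape, minus the renamed token field and the forced `text` part, works for the pollinations engine.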
### `scripts/ocr.py` — OCR / image recognition

OCR fallback for when the chat model can't see images. **Two engines** (`--engine auto` picks one):

| Engine | Needs key | Quality | Notes |
|---|---|---|---|
| `mimo` | `MIMO_API_KEY` | best | Calls `mimo-v2.5` (the vision model) regardless of the chat model in use |
| `pollinations` | **NO** | decent | Free public endpoint at `text.pollinations.ai`. Rate-limited but no signup |

Auto resolution: `mimo` if `MIMO_API_KEY` is set, otherwise `pollinations`. So users with **only a DeepSeek key** (or no key at all) still get OCR with zero setup.

```bash
# Zero-setup — uses the pollinations fallback when MIMO_API_KEY is unset
python3 mimoskill/scripts/ocr.py path/to/image.png

# Best quality — set a MiMo key
export MIMO_API_KEY=sk-xxxx
python3 mimoskill/scripts/ocr.py path/to/image.png    # auto -> mimo

# Force the free engine even when you have a MiMo key (saves quota)
python3 mimoskill/scripts/ocr.py --engine pollinations form.png

# Force MiMo — errors out if MIMO_API_KEY is not set (no silent fallback)
python3 mimoskill/scripts/ocr.py --engine mimo form.png
```

Four output modes:

| `--mode` | Output |
|---|---|
| `text` (default) | verbatim OCR — line breaks and reading order preserved |
| `describe` | 2-4 sentence description |
| `structured` | single JSON object: `text` / `language` / `regions[]` / `summary` |
| `markdown` | re-render the image as GitHub-flavored Markdown |

Input forms (positional, 0+ args):

- Local path: `./scan.png`, `C:\foo.jpg`
- HTTP(S) URL: forwarded as-is
- `data:image/...;base64,…`: forwarded as-is
- `-` or piped stdin: read one image's bytes from stdin

MIME type is sniffed from magic bytes (not the file extension): PNG / JPEG / GIF / WebP / BMP. Multiple positional images batch into one upstream call.

> Full reference: [mimoskill/references/ocr_workflow.md](../mimoskill/references/ocr_workflow.md) (modes, exit codes, JSON shape, lang/prompt knobs, pollinations specifics).

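The magic-byte sniffing can be sketched as a minimal helper. This is hypothetical illustration code, not `ocr.py`'s actual implementation:

```python
# Minimal magic-byte MIME sniffer for the five formats listed above.
# Hypothetical illustration — not ocr.py's actual code.

def sniff_mime(data):
    """Return a MIME type from an image's leading bytes."""
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image/png"
    if data.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "image/gif"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        # WebP is a RIFF container; "WEBP" follows the 4-byte chunk size.
        return "image/webp"
    if data.startswith(b"BM"):
        return "image/bmp"
    raise ValueError("unsupported or unrecognized image format")
```

Sniffing bytes rather than trusting extensions is what lets the `-` / stdin input form work at all, since piped bytes have no filename.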
### `scripts/generate_image.py` — general image generation

A thin wrapper over `generate_pet.py` minus the chibi-pet prompt boilerplate, with an optional `--style` flag for common looks. Same providers, same env vars, same auto-fallback strategy.

```bash
# Free — auto picks pollinations when there is no OpenAI key
python3 mimoskill/scripts/generate_image.py --prompt "japanese garden, watercolor, dawn" --out garden.png

# Best quality — set an OpenAI key
export PET_OPENAI_API_KEY=sk-real-openai-key
python3 mimoskill/scripts/generate_image.py --prompt "..." --out art.png    # auto -> gpt-image-1

# Common style presets
python3 mimoskill/scripts/generate_image.py --style anime --prompt "shrine at dusk" --out shrine.png
```

| `--provider` | Backend |
|---|---|
| `auto` (default) | `gpt-image-1` if `PET_OPENAI_API_KEY` is set, else `pollinations` |
| `pollinations` | Free, no key |
| `gpt-image-1` | OpenAI's official image generation — best quality |
| `replicate` | Replicate API (any model) |
| `local-sd` | Local Stable Diffusion |

> `PET_OPENAI_API_KEY` is intentionally **separate from `MIMO_API_KEY` and `OPENAI_API_KEY`** — it is used only for image generation, so leaking it (or simply not having one) doesn't affect anything else.

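The `auto` resolution above boils down to one environment check. A hypothetical sketch (`resolve_provider` is an invented name), not the script's real code:

```python
import os

# Sketch of the `--provider auto` rule from the table above.
# Hypothetical helper; the real script's internals may differ.

def resolve_provider(requested="auto", env=None):
    env = os.environ if env is None else env
    if requested != "auto":
        return requested  # an explicit provider choice always wins
    return "gpt-image-1" if env.get("PET_OPENAI_API_KEY") else "pollinations"
```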
### `scripts/generate_pet.py` — Codex pet generation

Same backends as `generate_image.py`, but with a tuned chibi-sticker prompt built around `--description`. Outputs a PNG sized and framed for Codex's pet picker.

```bash
# Single static pet (free)
python3 mimoskill/scripts/generate_pet.py --description "chibi shiba coder" --out pet.png

# Animated multi-state bundle (idle / thinking / typing / sleeping)
python3 mimoskill/scripts/generate_pet.py --description "chibi cat" --bundle ./shiba/
```

Prompt templates live in [mimoskill/assets/pet_prompt_template.md](../mimoskill/assets/pet_prompt_template.md). The full workflow is in [mimoskill/references/pet_workflow.md](../mimoskill/references/pet_workflow.md).

### `scripts/install_pet.sh` — install a pet into Codex

Probes macOS / Linux / Windows for the right pet directory and copies the PNG (or bundle) there. Works around Codex's hardcoded pet paths.

```bash
bash mimoskill/scripts/install_pet.sh pet.png shiba
# Then fully quit and relaunch Codex (system tray → Quit, not just closing the window)
```

## Three ways to use it

### 1. Direct invocation (any user, no setup)

```bash
python3 mimoskill/scripts/mimo_chat.py "..."
python3 mimoskill/scripts/ocr.py invoice.png    # works with no key via free pollinations
python3 mimoskill/scripts/generate_image.py --prompt "..."
```

No skill registration required — these are standard Python scripts (stdlib-only, no `pip install` step).

### 2. As a Claude Code skill

Symlink the directory into `~/.claude/skills/`:

```bash
ln -s "$(pwd)/mimoskill" ~/.claude/skills/mimoskill
```

Claude reads [SKILL.md](../mimoskill/SKILL.md) and automatically routes relevant requests ("generate a pet from this image", "read the text from this screenshot", "MiMo TTS this paragraph") to the right scripts.

### 3. As a Codex agent guide

Already wired up via [AGENTS.md](../AGENTS.md) at the repo root. Codex reads it on each session and routes image-generation / pet / OCR tasks to mimoskill scripts — it **won't** try to `pip install openai` or call OpenAI's image_gen tool when the active backend is MiMo / DeepSeek / Qwen / any non-OpenAI provider.

## Environment variables

| Var | Used by | Notes |
|---|---|---|
| `MIMO_API_KEY` | `mimo_chat.py`, `ocr.py` (engine `mimo`, or `auto` when set) | MiMo chat / vision key. **Optional** for both scripts — they fall back to free pollinations when unset |
| `MIMO_CHAT_ENGINE` | `mimo_chat.py` | `auto` / `mimo` / `pollinations` — same as `--engine` |
| `MIMO_BASE_URL` | `mimo_chat.py`, `ocr.py` | default `https://api.xiaomimimo.com/v1` |
| `MIMO_MODEL` / `MIMO_OCR_MODEL` | `ocr.py` model auto-pick | used when `--model` is not passed; must name a vision-capable model |
| `MIMO_OCR_ENGINE` | `ocr.py` | `auto` / `mimo` / `pollinations` — same as the `--engine` flag |
| `POLLINATIONS_MODEL` | `ocr.py` | default `openai` (vision-capable). Alternatives: `openai-large`, `openai-fast` |
| `PET_OPENAI_API_KEY` | `generate_pet.py`, `generate_image.py` | separate from `MIMO_API_KEY` / `OPENAI_API_KEY`; used only for image generation |
| `REPLICATE_API_TOKEN` | `generate_*.py --provider replicate` | required only when using the Replicate backend |

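Combining the `*_ENGINE` variables with `MIMO_API_KEY`, the documented engine selection for `ocr.py` can be sketched as below. This is hypothetical code, assuming the `--engine` flag takes precedence over the env var; the exit-code-3 behavior is the documented one for a forced `mimo` without a key:

```python
import os

# Sketch of ocr.py's documented engine selection: the --engine flag beats
# MIMO_OCR_ENGINE (an assumption), `auto` picks mimo only when a key exists,
# and a forced `mimo` without a key exits with code 3 (no silent fallback).
# Hypothetical code, not the real script.

def resolve_engine(flag=None, env=None):
    env = os.environ if env is None else env
    engine = flag or env.get("MIMO_OCR_ENGINE", "auto")
    if engine == "auto":
        return "mimo" if env.get("MIMO_API_KEY") else "pollinations"
    if engine == "mimo" and not env.get("MIMO_API_KEY"):
        raise SystemExit(3)  # documented exit code for a missing key
    return engine
```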
## Common recipes

### Read text from an image, then summarize via the active chat model

```bash
TEXT=$(python3 mimoskill/scripts/ocr.py invoice.png)
python3 mimoskill/scripts/mimo_chat.py "Summarize this invoice:
$TEXT"
```

Or inside Codex: just paste the image. The proxy strips it and leaves a placeholder pointing at `ocr.py`; Codex runs the script and feeds the text back into the conversation — no manual step.

### Generate a `/hatch` replacement pet (works without an OpenAI key)

```bash
python3 mimoskill/scripts/generate_pet.py --description "chibi shiba coder" --out pet.png
bash mimoskill/scripts/install_pet.sh pet.png shiba
# Fully quit and relaunch Codex, then pick the new pet from the picker
```

For higher quality, set `PET_OPENAI_API_KEY=sk-real-openai-key` and `auto` switches to `gpt-image-1`.

### Structured OCR + JSON parse

```bash
JSON=$(python3 mimoskill/scripts/ocr.py --mode structured invoice.png)
echo "$JSON" | python3 -c "import sys, json; d = json.load(sys.stdin); print(d['summary'])"
```

### Multi-image batch OCR (one billable call)

```bash
python3 mimoskill/scripts/ocr.py page1.png page2.png page3.png
```

All images go in a **single** upstream call, so the model can cross-reference them (e.g. the front and back of an ID card). Output is a single text body in reading order across the images.

## Troubleshooting

<details>
<summary><b><code>MIMO_API_KEY</code> is not set</b> — ocr.py exits with code 3</summary>

You explicitly passed `--engine mimo`. Either drop the flag (`auto` will fall back to pollinations) or set the key:

```bash
export MIMO_API_KEY=sk-xxxx
python3 mimoskill/scripts/ocr.py form.png
```

</details>

<details>
<summary><b>Pollinations returns 429 / rate limit</b></summary>

You hit the per-IP rate limit. Either wait and retry, or switch to `--engine mimo` if you have a MiMo key.

</details>

<details>
<summary><b>Codex shows <code>image_gen tool not available</code> when running /hatch</b></summary>

Codex's `/hatch` is hardcoded to call OpenAI's `image_gen` tool client-side, and the proxy can't intercept that. Use `generate_pet.py` instead — see "Generate a /hatch replacement pet" above.

</details>

<details>
<summary><b><code>pip install openai</code> errors / Codex tries to install openai</b></summary>

That's Codex trying to fall back to the openai Python SDK for image generation. [AGENTS.md](../AGENTS.md) is wired to prevent this — make sure it's at the repo root and that the Codex session has read it (start a fresh session if you edited AGENTS.md mid-conversation).

</details>

<details>
<summary><b>A tool returned an image but my model can't see images in tool output</b></summary>

This is by design. The Chat Completions `tool` role only accepts string content, so image content parts in `function_call_output` are flattened into `[N image attachment(s) omitted from tool output: ...]` placeholders (see `toolOutputToString` in [src/translate/reqToChat.ts](../src/translate/reqToChat.ts)). To feed an image back to the LLM, have the tool save it to disk and return the file path, then re-attach it as a user message — at which point the OCR fallback kicks in if the chat model is non-vision.

</details>

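The flattening described in the last troubleshooting item can be sketched as a Python analog of the TypeScript `toolOutputToString`. The part shapes here are assumptions, not the proxy's actual types:

```python
# Hypothetical Python analog of toolOutputToString (the real code lives in
# src/translate/reqToChat.ts). Part shapes are assumed for illustration.

def tool_output_to_string(parts):
    """Flatten mixed tool-output parts into the string the `tool` role requires."""
    texts = [p.get("text", "") for p in parts if p.get("type") == "text"]
    n_images = sum(1 for p in parts if p.get("type") == "image_url")
    if n_images:
        # Image parts can't survive as strings, so only a placeholder remains.
        texts.append("[%d image attachment(s) omitted from tool output: ...]" % n_images)
    return "\n".join(texts)
```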
## Design notes

- **No `pip install` step.** Every script is stdlib-only. This avoids dependency drift and lets the scripts run on bare Python ≥ 3.8 anywhere.
- **Network operations are explicit.** No silent retries against alternate endpoints. If you ask for MiMo and there's no key, you get a clear error — not a silent fallback that masks the misconfiguration.
- **The proxy and mimoskill never call each other.** They're separate processes connected only by the `AGENTS.md` / `SKILL.md` conventions. This keeps both halves independently testable and replaceable.
- **Pollinations is the no-key escape hatch.** It's the free fallback in `ocr.py` (vision), `generate_pet.py` (image generation), and `generate_image.py` (image generation). Rate-limited but always available. The project treats it as a first-class option, not a degraded mode.

package/doc/mimoskill.zh.md
ADDED
@@ -0,0 +1,295 @@

# mimoskill · Detailed Guide

> [English](./mimoskill.md) · 中文
>
> Back to: [README English](../README.md) · [README 中文](../README.zh.md)

`mimoskill/` is a bundle of **helper scripts + reference docs** under the repo root. It exists because some things MiMo / DeepSeek / most text-only LLMs can't do natively (image generation, letting a text-only model "see" images, and so on), while Codex hardcodes some capability assumptions on the client side that the proxy layer simply cannot change.

The proxy (mimo2codex) and mimoskill are **completely independent**: you can use mimoskill without running mimo2codex, and vice versa. They cooperate by **convention**: when the proxy detects a capability gap, it inserts placeholder text into the message pointing at the corresponding `mimoskill/scripts/*.py`.

## When does it trigger?

> In one sentence: **"what the model can do, the proxy passes through; what the model can't do, mimoskill backstops."**

| Capability | Current chat model can do it | Current chat model can't do it |
|---|---|---|
| View / OCR / recognize an image | proxy passes the image through to the model; **mimoskill not triggered** | proxy strips the image and inserts the `[N image attachment(s) omitted: … python3 mimoskill/scripts/ocr.py <path> …]` placeholder; after reading it plus AGENTS.md, the LLM **goes and runs `ocr.py`** |
| Image generation | no mainstream chat model ships native image generation | **mimoskill always triggers** — `scripts/generate_image.py` or `scripts/generate_pet.py` |
| Web search | proxy translates Codex's `web_search` into MiMo's built-in search on MiMo `sk-*` (pay-as-you-go) keys; `tp-*` (plan) keys and DeepSeek are auto-skipped | `scripts/mimo_chat.py` follows the same rule — auto-enabled on MiMo `sk-*`, skipped on `tp-*` / pollinations. **No flag needed** |
| TTS / ASR | not wired into Codex | `scripts/mimo_chat.py` calls MiMo's separate endpoints directly |

The triggering **happens at the LLM layer**, not in the proxy. The proxy only does protocol translation plus minimal compatibility fixups (stripping images, inserting placeholder text). Codex reads [AGENTS.md](../AGENTS.md) and [mimoskill/SKILL.md](../mimoskill/SKILL.md), sees the placeholder text or the user's intent, and decides itself which script to call. The script is an independent subprocess that **bypasses the proxy entirely** — OCR hits MiMo or pollinations directly, image generation hits pollinations or OpenAI directly, and so on.

## Layout

```
mimoskill/
├── SKILL.md                 # skill manifest for the LLM — trigger rules + decision tree
├── scripts/
│   ├── mimo_chat.py         # direct MiMo chat / vision / web search (stdlib-only)
│   ├── ocr.py               # OCR / image recognition via MiMo or free pollinations
│   ├── generate_image.py    # general image generation (any style / subject)
│   ├── generate_pet.py      # Codex pet generation (chibi-sticker style)
│   └── install_pet.sh       # install a generated PNG into Codex's pet directory
├── references/
│   ├── models.md            # MiMo capability matrix + field pitfalls
│   ├── ocr_workflow.md      # full OCR mode reference, exit codes, JSON shape
│   └── pet_workflow.md      # single image vs multi-state animated bundle
└── assets/
    └── pet_prompt_template.md   # tuned chibi-sticker prompt templates
```

## Scripts in depth

### `scripts/mimo_chat.py` — chat / vision (usable without a key)

Stdlib-only Python script for one-shot or streaming chat. Two engines, the same `--engine auto|mimo|pollinations` scheme as `ocr.py`:

| Engine | Needs key | Notes |
|---|---|---|
| `mimo` | needs `MIMO_API_KEY` | Best quality. `sk-*` keys auto-enable web_search (no flag needed); TTS / ASR are also MiMo-only |
| `pollinations` | **no** | Free public endpoint `text.pollinations.ai`. Text + vision available; web search / TTS / ASR unavailable |

Auto selection: mimo when `MIMO_API_KEY` is set, otherwise pollinations. **The script does not depend on any key** for plain text and vision chat.

```bash
# Zero-config — falls back to pollinations automatically
python3 mimoskill/scripts/mimo_chat.py "tell me a joke"
python3 mimoskill/scripts/mimo_chat.py --image https://x/y.png "describe this image"

# Best quality + MiMo native features (sk-* keys auto-enable web_search, TTS, ASR)
export MIMO_API_KEY=sk-xxxxxxxxxxxxxxxx
python3 mimoskill/scripts/mimo_chat.py "今天上海天气"   # web_search auto-included
python3 mimoskill/scripts/mimo_chat.py --model mimo-v2.5-pro --max-tokens 8000 --stream "write something longer"
```

The mimo engine steps around MiMo's pitfalls automatically: `max_completion_tokens` (not `max_tokens`), the mandatory `text` part next to images, `reasoning_content` round-tripping across turns, and web search plugin invocation.

| Flag | Description |
|---|---|
| `--engine` | `auto` / `mimo` / `pollinations` (default `auto`) |
| `--model` | default `mimo-v2.5-pro` (mimo engine). For vision use `mimo-v2.5` / `mimo-v2-omni` |
| `--pollinations-model` | default `openai` (vision-capable). Options: `openai-large` / `openai-fast` |
| `--image URL` | attach an image; auto-bumps to a vision-capable model |
| `--stream` | SSE streaming |
| `--max-tokens N` | maps to `max_completion_tokens` on the mimo engine, `max_tokens` on pollinations |
| `--temperature F` | default 0.7 |

### `scripts/ocr.py` — OCR / image recognition

The backstop when the chat model has no vision. **Two engines** (`--engine auto` picks one):

| Engine | Needs key | Quality | Notes |
|---|---|---|---|
| `mimo` | needs `MIMO_API_KEY` | best | Internally calls `mimo-v2.5` (the vision model), independent of the outer chat model |
| `pollinations` | **no** | decent | Free public endpoint `text.pollinations.ai`. Per-IP rate limits, but no signup |

Auto selection: mimo when `MIMO_API_KEY` is set, otherwise pollinations. So users with **only a DeepSeek key** (or nothing configured at all) still get zero-config OCR.

```bash
# Zero-config — falls back to free pollinations when MIMO_API_KEY is unset
python3 mimoskill/scripts/ocr.py path/to/image.png

# Best quality — set a MiMo key
export MIMO_API_KEY=sk-xxxx
python3 mimoskill/scripts/ocr.py path/to/image.png    # auto -> mimo

# Force the free engine (even if you have a MiMo key, e.g. to save quota)
python3 mimoskill/scripts/ocr.py --engine pollinations form.png

# Force MiMo — errors out if the key is unset (no silent downgrade)
python3 mimoskill/scripts/ocr.py --engine mimo form.png
```

Four output modes:

| `--mode` | Output |
|---|---|
| `text` (default) | verbatim OCR — preserves line breaks + reading order |
| `describe` | 2-4 sentence description |
| `structured` | one JSON object: `text` / `language` / `regions[]` / `summary` |
| `markdown` | re-render the whole image as GitHub-flavored Markdown |

Input forms (positional, 0+ args):

- Local path: `./scan.png`, `C:\foo.jpg`
- HTTP(S) URL: forwarded as-is
- `data:image/...;base64,…`: forwarded as-is
- `-` or piped stdin: read one image's bytes from stdin

MIME is sniffed from magic bytes (extensions are not trusted): PNG / JPEG / GIF / WebP / BMP. Multiple positional args are batched into **one upstream call**.

> Full reference: [mimoskill/references/ocr_workflow.md](../mimoskill/references/ocr_workflow.md) (modes, exit codes, JSON shape, lang/prompt knobs, pollinations details).

### `scripts/generate_image.py` — general image generation

A thin wrapper over `generate_pet.py` that drops the chibi-pet prompt template and adds an optional `--style` for common looks. Same providers, same environment variables, same auto-fallback strategy.

```bash
# Free — auto takes pollinations when no OpenAI key is set
python3 mimoskill/scripts/generate_image.py --prompt "japanese garden, watercolor, dawn" --out garden.png

# Higher quality — set an OpenAI key
export PET_OPENAI_API_KEY=sk-real-openai-key
python3 mimoskill/scripts/generate_image.py --prompt "..." --out art.png    # auto -> gpt-image-1

# Style presets
python3 mimoskill/scripts/generate_image.py --style anime --prompt "shrine at dusk" --out shrine.png
```

| `--provider` | Backend |
|---|---|
| `auto` (default) | `gpt-image-1` when `PET_OPENAI_API_KEY` is set, otherwise `pollinations` |
| `pollinations` | free, no key |
| `gpt-image-1` | OpenAI's official image generation — best quality |
| `replicate` | Replicate API (any model) |
| `local-sd` | local Stable Diffusion |

> `PET_OPENAI_API_KEY` is deliberately **separate from `MIMO_API_KEY` and `OPENAI_API_KEY`** — it is only used for image generation, so leaking it or not having one affects nothing else.

### `scripts/generate_pet.py` — Codex pet generation

Same backends, but with a built-in tuned chibi-sticker prompt assembled around `--description`. Output size and framing are matched to Codex's pet picker.

```bash
# Single static pet (free)
python3 mimoskill/scripts/generate_pet.py --description "chibi shiba coder" --out pet.png

# Multi-state animated bundle (idle / thinking / typing / sleeping)
python3 mimoskill/scripts/generate_pet.py --description "chibi cat" --bundle ./shiba/
```

Prompt templates are in [mimoskill/assets/pet_prompt_template.md](../mimoskill/assets/pet_prompt_template.md). Full workflow in [mimoskill/references/pet_workflow.md](../mimoskill/references/pet_workflow.md).

### `scripts/install_pet.sh` — install a pet into Codex

Auto-detects the pet directory on macOS / Linux / Windows and copies the PNG (or bundle) over. Works around Codex's hardcoded pet paths.

```bash
bash mimoskill/scripts/install_pet.sh pet.png shiba
# Then fully quit and restart Codex (on desktop, quit from the system tray, not just closing the window)
```

## Three ways to use it

### 1. Direct invocation (any user, zero config)

```bash
python3 mimoskill/scripts/mimo_chat.py "..."
python3 mimoskill/scripts/ocr.py invoice.png    # runs without a key via free pollinations
python3 mimoskill/scripts/generate_image.py --prompt "..."
```

No skill registration needed — these are plain Python scripts (stdlib-only, no `pip install`).

### 2. As a Claude Code skill

Symlink into `~/.claude/skills/`:

```bash
ln -s "$(pwd)/mimoskill" ~/.claude/skills/mimoskill
```

Claude then reads [SKILL.md](../mimoskill/SKILL.md) and automatically routes relevant requests ("generate a pet from this image", "read the text in this screenshot", "have MiMo read this paragraph aloud") to the right scripts.

### 3. As a Codex agent guide

Already wired up via [AGENTS.md](../AGENTS.md) at the repo root. Codex reads it at the start of every session and routes image-generation / pet / OCR tasks to the mimoskill scripts — it will **not** go `pip install openai`, nor try OpenAI's `image_gen` tool while running against MiMo / DeepSeek / Qwen / any non-OpenAI upstream.

## Environment variables

| Var | Used by | Notes |
|---|---|---|
| `MIMO_API_KEY` | `mimo_chat.py`, `ocr.py` (engine `mimo`, or `auto`) | MiMo chat / vision key. **Optional** for both scripts — they fall back to pollinations when unset |
| `MIMO_CHAT_ENGINE` | `mimo_chat.py` | `auto` / `mimo` / `pollinations` — equivalent to `--engine` |
| `MIMO_BASE_URL` | `mimo_chat.py`, `ocr.py` | default `https://api.xiaomimimo.com/v1` |
| `MIMO_MODEL` / `MIMO_OCR_MODEL` | `ocr.py` model auto-pick | used when `--model` is not passed (must be vision-capable) |
| `MIMO_OCR_ENGINE` | `ocr.py` | `auto` / `mimo` / `pollinations` — equivalent to the `--engine` flag |
| `POLLINATIONS_MODEL` | `ocr.py` | default `openai` (vision-capable). Options: `openai-large`, `openai-fast` |
| `PET_OPENAI_API_KEY` | `generate_pet.py`, `generate_image.py` | independent of `MIMO_API_KEY` / `OPENAI_API_KEY`; image generation only |
| `REPLICATE_API_TOKEN` | `generate_*.py --provider replicate` | needed only for the Replicate backend |

## Common recipes

### OCR an image first, then summarize with the current chat model

```bash
TEXT=$(python3 mimoskill/scripts/ocr.py invoice.png)
python3 mimoskill/scripts/mimo_chat.py "Summarize this invoice:
$TEXT"
```

Or directly inside Codex: just paste the image. The proxy strips it and leaves the placeholder pointing at `ocr.py`; Codex runs the script itself and feeds the text back into the conversation — **fully automatic**.

### Generate a `/hatch` replacement pet (works without an OpenAI key)

```bash
python3 mimoskill/scripts/generate_pet.py --description "chibi shiba coder" --out pet.png
bash mimoskill/scripts/install_pet.sh pet.png shiba
# Fully quit and restart Codex, then pick the new pet from the pet menu
```

For better quality, set `PET_OPENAI_API_KEY` to a real OpenAI key and `auto` switches to `gpt-image-1`.

### Structured OCR + JSON parsing

```bash
JSON=$(python3 mimoskill/scripts/ocr.py --mode structured invoice.png)
echo "$JSON" | python3 -c "import sys, json; d = json.load(sys.stdin); print(d['summary'])"
```

### Multi-image batch OCR (billed once)

```bash
python3 mimoskill/scripts/ocr.py page1.png page2.png page3.png
```

All images go in a **single** upstream call, and the model can cross-reference them (e.g. the front and back of an ID card). Output is one text body concatenated in reading order.

## Troubleshooting

<details>
<summary><b><code>MIMO_API_KEY</code> is not set</b> — ocr.py exits with code 3</summary>

You explicitly passed `--engine mimo`. Either drop the flag (`auto` downgrades to pollinations automatically) or set the key:

```bash
export MIMO_API_KEY=sk-xxxx
python3 mimoskill/scripts/ocr.py form.png
```

</details>

<details>
<summary><b>Pollinations returns 429 / rate limited</b></summary>

You hit the per-IP rate limit. Wait and retry, or switch to `--engine mimo` (if you have a MiMo key).

</details>

<details>
<summary><b>Codex reports <code>image_gen tool not available</code> when running /hatch</b></summary>

Codex's `/hatch` is hardcoded client-side to call OpenAI's `image_gen` tool, and the proxy can't intercept it. Use `generate_pet.py` instead — see "Generate a `/hatch` replacement pet" above.

</details>

<details>
<summary><b><code>pip install openai</code> errors / Codex wants to install openai</b></summary>

That's Codex trying to fall back to the openai Python SDK for image generation. [AGENTS.md](../AGENTS.md) is already set up to head this off — make sure it sits at the repo root and the current Codex session has read it (start a fresh session after editing AGENTS.md).

</details>

<details>
<summary><b>A tool returned an image, but the model can't see it in the tool result</b></summary>

By design. The Chat Completions `tool` role has historically accepted only string content — image content parts inside `function_call_output` get flattened into `[N image attachment(s) omitted from tool output: ...]` placeholder text (see `toolOutputToString` in [src/translate/reqToChat.ts](../src/translate/reqToChat.ts)). To feed the image to the LLM, have the tool save it locally and return the path, then reference it with `@path/to/screenshot.png` in the next user message so an ocr.py-style tool reads it out — at which point the OCR backstop takes over if the chat model has no vision.

</details>

## Design trade-offs

- **No `pip install` needed.** All scripts are stdlib-only. This avoids dependency drift; any bare Python ≥ 3.8 can run them.
- **Network operations are explicit.** No sneaky retries against backup endpoints. Asking for MiMo without a key is an immediate error — not a silent downgrade that masks the misconfiguration.
- **The proxy and mimoskill never call each other.** Two independent processes, connected only by the `AGENTS.md` / `SKILL.md` conventions, so both sides can be tested and replaced independently.
- **Pollinations is the no-key escape hatch.** It's the free fallback in `ocr.py` (vision), `generate_pet.py` (image generation), and `generate_image.py` (image generation). Rate-limited per IP but always up. The project treats it as a first-class option, not a "degraded mode."