gemini-cli-proxy 1.0.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,26 @@
+ ---
+ description:
+ globs:
+ alwaysApply: false
+ ---
+ # Coding Conventions
+
+ ## General conventions
+ - **All code comments, docstrings, and error messages must be in English**
+
+ ## Language
+ - Use Python 3.12
+ - Use **uv** for package management
+ - Follow PEP 8 style guidelines
+
+ ## Code Style
+ - Use **type hints** for all functions
+ - Use **async/await** pattern for I/O operations
+ - Use structured error responses following **OpenAI format**
+ - Log errors with appropriate levels
+
+ ## Architecture Patterns
+ - **OpenAI Compatibility**: Strict adherence to OpenAI API format
+ - **Async Execution**: Non-blocking CLI execution with concurrency control
+ - **Fake Streaming**: Split complete responses line-by-line for streaming effect
+ - **Centralized Configuration**: All config in [src/gemini_cli_proxy/config.py](mdc:src/gemini_cli_proxy/config.py)
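A minimal sketch of these conventions taken together — type hints, async/await for I/O, OpenAI-format structured errors, and leveled logging. The function names and error fields here are illustrative, not taken from the project's source; the error shape follows the general OpenAI error format:

```python
import asyncio
import logging
from typing import Any

logger = logging.getLogger(__name__)


def openai_error(message: str, err_type: str, code: str) -> dict[str, Any]:
    """Build a structured error body following the OpenAI format (illustrative)."""
    return {"error": {"message": message, "type": err_type, "code": code}}


async def run_command(cmd: list[str], timeout: float = 30.0) -> str:
    """Run a CLI command without blocking the event loop (async/await for I/O)."""
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout)
    except asyncio.TimeoutError:
        proc.kill()
        logger.error("Command timed out: %s", cmd)  # error-level log for failures
        raise
    if proc.returncode != 0:
        logger.error("Command failed: %s", stderr.decode())
        raise RuntimeError(stderr.decode())
    return stdout.decode()
```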
@@ -0,0 +1,35 @@
+ ---
+ description:
+ globs:
+ alwaysApply: false
+ ---
+ # Gemini CLI Proxy Project Overview
+
+ This is an OpenAI-compatible HTTP API wrapper for the Google Gemini CLI tool, enabling seamless integration with OpenAI clients such as Cherry Studio.
+
+ ## Project Structure
+
+ The main entry point is [src/gemini_cli_proxy/cli.py](mdc:src/gemini_cli_proxy/cli.py), which handles command-line argument parsing and starts the FastAPI server defined in [src/gemini_cli_proxy/server.py](mdc:src/gemini_cli_proxy/server.py).
+
+ ### Core Modules
+
+ - **CLI Entry**: [src/gemini_cli_proxy/cli.py](mdc:src/gemini_cli_proxy/cli.py) - Command-line interface using Click
+ - **FastAPI Server**: [src/gemini_cli_proxy/server.py](mdc:src/gemini_cli_proxy/server.py) - HTTP service with OpenAI-compatible endpoints
+ - **Gemini Client**: [src/gemini_cli_proxy/gemini_client.py](mdc:src/gemini_cli_proxy/gemini_client.py) - Async wrapper for the Gemini CLI tool
+ - **OpenAI Adapter**: [src/gemini_cli_proxy/openai_adapter.py](mdc:src/gemini_cli_proxy/openai_adapter.py) - Format conversion between OpenAI and Gemini
+ - **Data Models**: [src/gemini_cli_proxy/models.py](mdc:src/gemini_cli_proxy/models.py) - Pydantic models for request/response validation
+ - **Configuration**: [src/gemini_cli_proxy/config.py](mdc:src/gemini_cli_proxy/config.py) - Application configuration management
+
+ ### Key Files
+
+ - **Package Info**: [src/gemini_cli_proxy/__init__.py](mdc:src/gemini_cli_proxy/__init__.py) - Version and package metadata
+ - **Project Config**: [pyproject.toml](mdc:pyproject.toml) - Dependencies, build config, and CLI entry point
+ - **Documentation**: [README.md](mdc:README.md) - English documentation; [README_zh.md](mdc:README_zh.md) - Chinese documentation
+
+ ## Key Features
+
+ - **OpenAI API Compatibility**: Implements the `/v1/chat/completions`, `/v1/models`, and `/health` endpoints
+ - **Fake Streaming**: Splits complete Gemini CLI output line-by-line for OpenAI streaming compatibility
+ - **Rate Limiting**: 60 requests/minute using SlowAPI
+ - **Async Execution**: Non-blocking CLI execution with concurrency control
+ - **Modern Distribution**: Installable via `uvx` for zero-config execution
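The "fake streaming" feature described above can be sketched as follows. This is a simplified illustration, not the project's actual implementation: the complete Gemini CLI output is split line-by-line and each line is wrapped as an OpenAI-style server-sent-event chunk:

```python
import json
from typing import Iterator


def fake_stream(text: str, model: str = "gemini-2.5-pro") -> Iterator[str]:
    """Split a complete response line-by-line into OpenAI-style SSE chunks."""
    for line in text.splitlines(keepends=True):
        chunk = {
            "object": "chat.completion.chunk",
            "model": model,
            "choices": [
                {"index": 0, "delta": {"content": line}, "finish_reason": None}
            ],
        }
        yield f"data: {json.dumps(chunk)}\n\n"
    # OpenAI streams terminate with a literal [DONE] sentinel.
    yield "data: [DONE]\n\n"
```

A client consuming this stream sees one delta per output line, which is enough for OpenAI clients that expect incremental chunks.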
@@ -0,0 +1,197 @@
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # UV
+ # Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ #uv.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
+ .pdm.toml
+ .pdm-python
+ .pdm-build/
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
+ #.idea/
+
+ # Abstra
+ # Abstra is an AI-powered process automation framework.
+ # Ignore directories containing user credentials, local state, and settings.
+ # Learn more at https://abstra.io/docs
+ .abstra/
+
+ # Visual Studio Code
+ # Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
+ # that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
+ # and can be added to the global gitignore or merged into this file. However, if you prefer,
+ # you could uncomment the following to ignore the entire vscode folder
+ # .vscode/
+
+ # Ruff stuff:
+ .ruff_cache/
+
+ # PyPI configuration file
+ .pypirc
+
+ # Cursor
+ # Cursor is an AI-powered code editor. `.cursorignore` specifies files/directories to
+ # exclude from AI features like autocomplete and code analysis. Recommended for sensitive data
+ # refer to https://docs.cursor.com/context/ignore-files
+ .cursorignore
+ .cursorindexingignore
+
+ /local-docs/
+ /local-scripts/
@@ -0,0 +1 @@
+ 3.12
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 William Liu
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,153 @@
+ Metadata-Version: 2.4
+ Name: gemini-cli-proxy
+ Version: 1.0.3
+ Summary: OpenAI-compatible API wrapper for Gemini CLI
+ Author: nettee
+ License: MIT
+ License-File: LICENSE
+ Keywords: api,cli,gemini,openai,proxy
+ Requires-Python: >=3.8
+ Requires-Dist: click<9.0,>=8.0.0
+ Requires-Dist: fastapi<1.0,>=0.104.0
+ Requires-Dist: pydantic<3.0,>=2.0.0
+ Requires-Dist: slowapi<1.0,>=0.1.9
+ Requires-Dist: uvicorn[standard]<1.0,>=0.24.0
+ Description-Content-Type: text/markdown
+
+ # Gemini CLI Proxy
+
+ [![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+ Wraps Gemini CLI as an OpenAI-compatible API service, letting you use the free Gemini 2.5 Pro model through the API!
+
+ [English](./README.md) | [įŽ€äŊ“中文](./README_zh.md)
+
+ ## ✨ Features
+
+ - 🔌 **OpenAI API Compatible**: Implements the `/v1/chat/completions` endpoint
+ - 🚀 **Quick Setup**: Zero-config run with `uvx`
+ - ⚡ **High Performance**: Built on FastAPI + asyncio with concurrent request support
+
+ ## 🚀 Quick Start
+
+ ### Network Configuration
+
+ Since Gemini needs to access Google services, you may need to configure a terminal proxy in certain network environments:
+
+ ```bash
+ # Configure proxy (adjust according to your proxy server)
+ export https_proxy=http://127.0.0.1:7890
+ export http_proxy=http://127.0.0.1:7890
+ export all_proxy=socks5://127.0.0.1:7890
+ ```
+
+ ### Install Gemini CLI
+
+ Install Gemini CLI:
+ ```bash
+ npm install -g @google/gemini-cli
+ ```
+
+ After installation, use the `gemini` command to run Gemini CLI. Run it once first to log in and complete the initial configuration.
+
+ Once configuration is complete, confirm that the following command runs successfully:
+
+ ```bash
+ gemini -p "Hello, Gemini"
+ ```
+
+ ### Start Gemini CLI Proxy
+
+ ```bash
+ uv run gemini-cli-proxy
+ ```
+
+ Gemini CLI Proxy listens on port `8765` by default. You can change the port with the `--port` option.
+
+ After startup, test the service with curl:
+
+ ```bash
+ curl http://localhost:8765/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer dummy-key" \
+   -d '{
+     "model": "gemini-2.5-pro",
+     "messages": [{"role": "user", "content": "Hello!"}]
+   }'
+ ```
+
+ ### Usage Examples
+
+ #### OpenAI Client
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(
+     base_url='http://localhost:8765/v1',
+     api_key='dummy-key'  # Any string works
+ )
+
+ response = client.chat.completions.create(
+     model='gemini-2.5-pro',
+     messages=[
+         {'role': 'user', 'content': 'Hello!'}
+     ],
+ )
+
+ print(response.choices[0].message.content)
+ ```
+
+ #### Cherry Studio
+
+ Add a Model Provider in Cherry Studio settings:
+ - Provider Type: OpenAI
+ - API Host: `http://localhost:8765`
+ - API Key: Any string works
+ - Model Name: `gemini-2.5-pro` or `gemini-2.5-flash`
+
+ ![Cherry Studio Config 1](./img/cherry-studio-1.jpg)
+
+ ![Cherry Studio Config 2](./img/cherry-studio-2.jpg)
+
+ ## âš™ī¸ Configuration Options
+
+ View the command-line parameters:
+
+ ```bash
+ gemini-cli-proxy --help
+ ```
+
+ Available options:
+ - `--host`: Server host address (default: 127.0.0.1)
+ - `--port`: Server port (default: 8765)
+ - `--log-level`: Log level (debug/info/warning/error/critical)
+ - `--rate-limit`: Max requests per minute (default: 60)
+ - `--max-concurrency`: Max concurrent subprocesses (default: 4)
+ - `--timeout`: Gemini CLI command timeout in seconds (default: 30.0)
+ - `--debug`: Enable debug mode
+
+ ## ❓ FAQ
+
+ ### Q: Why do requests keep timing out?
+
+ A: This is usually a network connectivity issue. Gemini needs to access Google services, which may require proxy configuration in certain regions:
+
+ ```bash
+ # Configure proxy (adjust according to your proxy server)
+ export https_proxy=http://127.0.0.1:7890
+ export http_proxy=http://127.0.0.1:7890
+ export all_proxy=socks5://127.0.0.1:7890
+
+ # Then start the service
+ uvx gemini-cli-proxy
+ ```
+
+ ## 📄 License
+
+ MIT License
+
+ ## 🤝 Contributing
+
+ Issues and Pull Requests are welcome!
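The `--max-concurrency` option above caps how many Gemini CLI subprocesses run at once. Conceptually this is a semaphore gate around the subprocess call; the sketch below illustrates the pattern with `asyncio.Semaphore` and a sleep standing in for the real CLI invocation (it is not the project's actual code):

```python
import asyncio

MAX_CONCURRENCY = 4  # mirrors the --max-concurrency default


async def limited_call(semaphore: asyncio.Semaphore, i: int) -> int:
    # At most MAX_CONCURRENCY of these bodies execute concurrently.
    async with semaphore:
        await asyncio.sleep(0.01)  # stand-in for the Gemini CLI subprocess
        return i


async def main() -> list[int]:
    semaphore = asyncio.Semaphore(MAX_CONCURRENCY)
    # gather preserves the order of its arguments regardless of completion order.
    return await asyncio.gather(*(limited_call(semaphore, i) for i in range(10)))


results = asyncio.run(main())
```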
@@ -0,0 +1,137 @@
1
+ # Gemini CLI Proxy
2
+
3
+ [![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
4
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
5
+
6
+ Wrap Gemini CLI as an OpenAI-compatible API service, allowing you to enjoy the free Gemini 2.5 Pro model through API!
7
+
8
+ [English](./README.md) | [įŽ€äŊ“中文](./README_zh.md)
9
+
10
+ ## ✨ Features
11
+
12
+ - 🔌 **OpenAI API Compatible**: Implements `/v1/chat/completions` endpoint
13
+ - 🚀 **Quick Setup**: Zero-config run with `uvx`
14
+ - ⚡ **High Performance**: Built on FastAPI + asyncio with concurrent request support
15
+
16
+ ## 🚀 Quick Start
17
+
18
+ ### Network Configuration
19
+
20
+ Since Gemini needs to access Google services, you may need to configure terminal proxy in certain network environments:
21
+
22
+ ```bash
23
+ # Configure proxy (adjust according to your proxy server)
24
+ export https_proxy=http://127.0.0.1:7890
25
+ export http_proxy=http://127.0.0.1:7890
26
+ export all_proxy=socks5://127.0.0.1:7890
27
+ ```
28
+
29
+ ### Install Gemini CLI
30
+
31
+ Install Gemini CLI:
32
+ ```bash
33
+ npm install -g @google/gemini-cli
34
+ ```
35
+
36
+ After installation, use the `gemini` command to run Gemini CLI. You need to start it once first for login and initial configuration.
37
+
38
+ After configuration is complete, please confirm you can successfully run the following command:
39
+
40
+ ```bash
41
+ gemini -p "Hello, Gemini"
42
+ ```
43
+
44
+ ### Start Gemini CLI Proxy
45
+
46
+ ```bash
47
+ uv run gemini-cli-proxy
48
+ ```
49
+
50
+ Gemini CLI Proxy listens on port `8765` by default. You can customize the startup port with the `--port` parameter.
51
+
52
+ After startup, test the service with curl:
53
+
54
+ ```bash
55
+ curl http://localhost:8765/v1/chat/completions \
56
+ -H "Content-Type: application/json" \
57
+ -H "Authorization: Bearer dummy-key" \
58
+ -d '{
59
+ "model": "gemini-2.5-pro",
60
+ "messages": [{"role": "user", "content": "Hello!"}]
61
+ }'
62
+ ```
63
+
64
+ ### Usage Examples
65
+
66
+ #### OpenAI Client
67
+
68
+ ```python
69
+ from openai import OpenAI
70
+
71
+ client = OpenAI(
72
+ base_url='http://localhost:8765/v1',
73
+ api_key='dummy-key' # Any string works
74
+ )
75
+
76
+ response = client.chat.completions.create(
77
+ model='gemini-2.5-pro',
78
+ messages=[
79
+ {'role': 'user', 'content': 'Hello!'}
80
+ ],
81
+ )
82
+
83
+ print(response.choices[0].message.content)
84
+ ```
85
+
86
+ #### Cherry Studio
87
+
88
+ Add Model Provider in Cherry Studio settings:
89
+ - Provider Type: OpenAI
90
+ - API Host: `http://localhost:8765`
91
+ - API Key: Any string works
92
+ - Model Name: `gemini-2.5-pro` or `gemini-2.5-flash`
93
+
94
+ ![Cherry Studio Config 1](./img/cherry-studio-1.jpg)
95
+
96
+ ![Cherry Studio Config 2](./img/cherry-studio-2.jpg)
97
+
98
+ ## âš™ī¸ Configuration Options
99
+
100
+ View command line parameters:
101
+
102
+ ```bash
103
+ gemini-cli-proxy --help
104
+ ```
105
+
106
+ Available options:
107
+ - `--host`: Server host address (default: 127.0.0.1)
108
+ - `--port`: Server port (default: 8765)
109
+ - `--log-level`: Log level (debug/info/warning/error/critical)
110
+ - `--rate-limit`: Max requests per minute (default: 60)
111
+ - `--max-concurrency`: Max concurrent subprocesses (default: 4)
112
+ - `--timeout`: Gemini CLI command timeout in seconds (default: 30.0)
113
+ - `--debug`: Enable debug mode
114
+
115
+ ## ❓ FAQ
116
+
117
+ ### Q: Why do requests keep timing out?
118
+
119
+ A: This is usually a network connectivity issue. Gemini needs to access Google services, which may require proxy configuration in certain regions:
120
+
121
+ ```bash
122
+ # Configure proxy (adjust according to your proxy server)
123
+ export https_proxy=http://127.0.0.1:7890
124
+ export http_proxy=http://127.0.0.1:7890
125
+ export all_proxy=socks5://127.0.0.1:7890
126
+
127
+ # Then start the service
128
+ uvx gemini-cli-proxy
129
+ ```
130
+
131
+ ## 📄 License
132
+
133
+ MIT License
134
+
135
+ ## 🤝 Contributing
136
+
137
+ Issues and Pull Requests are welcome!