deepanything-0.1.0.tar.gz

Copyright 2025 Junity

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Metadata-Version: 2.1
Name: deepanything
Version: 0.1.0
Summary: DeepAnything is a project that provides DeepSeek R1's deep thinking capabilities for various large language models (LLMs).
Author: Junity
Author-email: 1727636624@qq.com
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Description-Content-Type: text/markdown
License-File: LICENSE

# DeepAnything

[![Python Version](https://img.shields.io/badge/python-3.8%2B-blue)](https://www.python.org/)
[![License](https://img.shields.io/badge/license-MIT-green)](LICENSE)
[中文版 (Chinese version)](README_zh.md)

DeepAnything is a project that provides DeepSeek R1's deep thinking capabilities for various large language models (LLMs).

## Key Features

- Provides an OpenAI-style client interface
- Supports any OpenAI-API-compatible model as either the response model or the thinking model
- Offers a server that, with a simple configuration file, exposes an OpenAI-compatible API that can be called with the official OpenAI SDK
- Supports using QwQ-32B as a thinking model

## Installation Guide

Install via pip:
```bash
pip install deepanything
```

## Quick Start

### 1. Integrate into Code
#### Chat Completion

```python
from deepanything.ReasonClient import DeepseekReasonClient
from deepanything.ResponseClient import OpenaiResponseClient
from deepanything.DeepAnythingClient import DeepAnythingClient

# Client that produces the "thinking" (reasoning) content
think_client = DeepseekReasonClient(
    base_url="https://api.siliconflow.cn/v1",
    api_key="sk-xxxxxxxxx"
)

# Client that produces the final answer
response_client = OpenaiResponseClient(
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    api_key="sk-xxxxxxxxxx",
)

da_client = DeepAnythingClient(
    reason_client=think_client,
    response_client=response_client,
    reason_prompt="<Think>{}</Think>"
)

completions = da_client.chat_completion(
    messages=[
        {
            "role": "user",
            "content": "你好"  # "Hello"
        }
    ],
    reason_model="Pro/deepseek-ai/DeepSeek-R1",
    response_model="qwen-max-latest",
    show_model="R1-qwen-max"
)
```
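The `reason_prompt` template appears to control how the thinking model's output is wrapped before it is handed to the response model, with `{}` marking where the reasoning text lands. A minimal sketch of that substitution (an assumption based on the placeholder syntax, not a description of DeepAnything's actual internals):

```python
# Assumed behavior: the reasoning text is formatted into the template
# via ordinary str.format substitution before the response model sees it.
reason_prompt = "<Think>{}</Think>"
reasoning = "The user greeted me; a short friendly reply is appropriate."
wrapped = reason_prompt.format(reasoning)
print(wrapped)
```

This is why the template must contain exactly one `{}` placeholder: it is consumed by the format call.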

#### Streaming Call
```python
stream = da_client.chat_completion(
    messages=[
        {
            "role": "user",
            "content": "你好"
        }
    ],
    reason_model="Pro/deepseek-ai/DeepSeek-R1",
    response_model="qwen-max-latest",
    show_model="R1-qwen-max",
    stream=True
)

for chunk in stream:
    print(chunk)
```
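To reassemble the streamed text rather than print raw chunks, the usual OpenAI-style `choices[0].delta` shape can be accumulated as below. The chunk layout is an assumption (print one real chunk from the stream above to confirm the fields); a stand-in chunk type is used here so the pattern is self-contained:

```python
from types import SimpleNamespace

# Hypothetical stand-in for an OpenAI-style streaming chunk; real chunks
# from DeepAnything may carry additional fields (e.g. reasoning content).
def make_chunk(text):
    delta = SimpleNamespace(content=text)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

stream = [make_chunk("你"), make_chunk("好"), make_chunk("!")]

pieces = []
for chunk in stream:
    delta = chunk.choices[0].delta
    # Some chunks may carry no text (e.g. role or finish markers), so guard.
    if getattr(delta, "content", None):
        pieces.append(delta.content)

answer = "".join(pieces)
print(answer)
```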

#### Asynchronous Usage
```python
import asyncio

from deepanything.ReasonClient import AsyncDeepseekReasonClient
from deepanything.ResponseClient import AsyncOpenaiResponseClient
from deepanything.DeepAnythingClient import AsyncDeepAnythingClient

think_client = AsyncDeepseekReasonClient(
    base_url="https://api.siliconflow.cn/v1",
    api_key="sk-xxxxxxxxx"
)

response_client = AsyncOpenaiResponseClient(
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    api_key="sk-xxxxxxxxxx",
)

da_client = AsyncDeepAnythingClient(
    reason_client=think_client,
    response_client=response_client,
    reason_prompt="<Think>{}</Think>"
)

async def main():
    completions = await da_client.chat_completion(
        messages=[
            {
                "role": "user",
                "content": "你好"
            }
        ],
        reason_model="Pro/deepseek-ai/DeepSeek-R1",
        response_model="qwen-max-latest",
        show_model="R1-qwen-max"
    )
    print(completions)

asyncio.run(main())
```

### 2. Use as a Server
```bash
python -m deepanything --host <host> --port <port> --config config.json
```

| Parameter | Description |
| --- | --- |
| `--host` | Server listening address; overrides the setting in `config.json` |
| `--port` | Server listening port; overrides the setting in `config.json` |
| `--config` | Path to the configuration file |

#### Configuration File Format
Below is an example of a configuration file:

```json
{
    "host" : "0.0.0.0",
    "port" : 8080,
    "reason_clients": [
        {
            "name" : "siliconflow",
            "type" : "deepseek",
            "base_url" : "https://api.siliconflow.cn/v1",
            "api_key" : "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        }
    ],
    "response_clients": [
        {
            "name" : "qwen",
            "type" : "openai",
            "base_url" : "https://dashscope.aliyuncs.com/compatible-mode/v1",
            "api_key" : "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        }
    ],
    "models": [
        {
            "name": "R1-Qwen-max",
            "reason_client" : "siliconflow",
            "response_client" : "qwen",
            "reason_model": "Pro/deepseek-ai/DeepSeek-R1",
            "response_model" : "qwen-max-latest"
        }
    ],
    "api_keys": [
        "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    ]
}
```
**Detailed Explanation**

- `reason_clients`: Configuration for thinking models. Currently the `deepseek` and `openai` types are supported. When the type is `openai`, DeepAnything uses the model's output directly as the thinking content; `qwq-32b` is the recommended model in that case.
- `response_clients`: Configuration for response models. Currently only the `openai` type is supported.
- `api_keys`: API keys used to authenticate callers. If this field is omitted or an empty list, the server performs no API-key authentication.
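With the server running under the configuration above, any OpenAI-compatible client can call it by targeting a model name from the `models` list (here `R1-Qwen-max`). As an illustration using only the standard library, a request can be built by hand; the host, port, and key come from the example config, and the `/v1/chat/completions` route is an assumption following OpenAI-API convention:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080/v1"  # host/port from the example config
API_KEY = "sk-xxxxxxxx"                # one of the keys listed under "api_keys"

# OpenAI-style chat completion payload targeting the configured model name.
payload = {
    "model": "R1-Qwen-max",            # the "name" field from the "models" list
    "messages": [{"role": "user", "content": "你好"}],
}
request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# Uncomment once the server is running:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The official OpenAI SDK works the same way: point its `base_url` at the server and pass the configured model name as `model`.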

## License
This project is licensed under the [MIT License](LICENSE).

## Contact Us
Email: 1737636624@qq.com

GitHub Issues: https://github.com/yourusername/deepseek-r1-integration/issues