wcgw 0.1.0__tar.gz → 0.1.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of wcgw might be problematic.

@@ -0,0 +1,30 @@
+ name: Python Test
+
+ on:
+   push:
+     branches:
+       - main
+   pull_request:
+     branches:
+       - main
+
+ jobs:
+   test:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         python-version: ["3.10", "3.11", "3.12"]
+     steps:
+       - uses: actions/checkout@v4
+       - name: Set up Python
+         uses: actions/setup-python@v3
+         with:
+           python-version: "${{ matrix.python-version }}"
+       - name: Install dependencies
+         run: |
+           python -m pip install --upgrade pip
+           pip install build
+           pip install .[dev]  # Installs dependencies based on pyproject.toml
+       - name: Run tests
+         run: |
+           python -m unittest discover -s tests
@@ -0,0 +1 @@
+ 3.12
wcgw-0.1.2/PKG-INFO ADDED
@@ -0,0 +1,120 @@
+ Metadata-Version: 2.3
+ Name: wcgw
+ Version: 0.1.2
+ Summary: What could go wrong giving full shell access to chatgpt?
+ Project-URL: Homepage, https://github.com/rusiaaman/wcgw
+ Author-email: Aman Rusia <gapypi@arcfu.com>
+ Requires-Python: <3.13,>=3.10
+ Requires-Dist: fastapi>=0.115.0
+ Requires-Dist: mypy>=1.11.2
+ Requires-Dist: openai>=1.46.0
+ Requires-Dist: petname>=2.6
+ Requires-Dist: pexpect>=4.9.0
+ Requires-Dist: pydantic>=2.9.2
+ Requires-Dist: pyte>=0.8.2
+ Requires-Dist: python-dotenv>=1.0.1
+ Requires-Dist: rich>=13.8.1
+ Requires-Dist: shell>=1.0.1
+ Requires-Dist: tiktoken==0.7.0
+ Requires-Dist: toml>=0.10.2
+ Requires-Dist: typer>=0.12.5
+ Requires-Dist: types-pexpect>=4.9.0.20240806
+ Requires-Dist: uvicorn>=0.31.0
+ Requires-Dist: websockets>=13.1
+ Description-Content-Type: text/markdown
+
+ # Enable shell access on chatgpt.com
+ A custom gpt on the chatgpt web app to interact with your local shell.
+
+ [![Tests](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml/badge.svg?branch=main)](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml)
+ [![Build](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml/badge.svg)](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml)
+
+ ### 🚀 Highlights
+ - ⚡ **Full Shell Access**: No restrictions, complete control.
+ - ⚡ **Create, Execute, Iterate**: Ask the gpt to keep running compiler checks till all errors are fixed, or ask it to keep checking the status of a long-running command till it's done.
+ - ⚡ **Interactive Command Handling**: [beta] Supports interactive commands using arrow keys, interrupt, and ansi escape sequences.
+
+ ### 🪜 Steps:
+ 1. Run the [cli client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of choice.
+ 2. Share the generated id with this GPT: `https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access`
+ 3. The custom GPT can now run any command on your cli.
+
+
+ ## Client
+ You need to keep running this client for the GPT to access your shell. Run it in a version-controlled project's root.
+
+ ### Option 1: using uv [Recommended]
+ ```sh
+ $ curl -LsSf https://astral.sh/uv/install.sh | sh
+ $ uv tool run --python 3.12 wcgw@latest
+ ```
+
+ ### Option 2: using pip
+ Supports python >=3.10 and <3.13
+ ```sh
+ $ pip3 install wcgw
+ $ wcgw
+ ```
+
+
+ This will print a UUID that you need to share with the gpt.
+
+
+ ## Chat
+ Open the following link or search for the "wcgw" custom gpt using "Explore GPTs" on chatgpt.com
+
+ https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access
+
+ Finally, let the chatgpt know your user id in any format. E.g., "user_id=<your uuid>" followed by the rest of your instructions.
+
+ NOTE: you can resume a broken connection with
+ `wcgw --client-uuid $previous_uuid`
+
+ # How it works
+ Your commands are relayed through a server to the terminal client. [You could host the server on your own](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#creating-your-own-custom-gpt-and-the-relay-server). For public convenience, I've hosted one at https://wcgw.arcfu.com thanks to the gcloud free tier plan.
+
+ Chatgpt sends a request to the relay server using the user id that you share with it. The relay server holds a websocket with the terminal client against the user id and acts as a proxy to pass the request.
+
+ It's secure in both directions. Either a malicious actor or a malicious Chatgpt would have to correctly guess your UUID for any security breach.
+
+ # Showcase
+
+ ## Unit tests and github actions
+ [The first version of unit tests and the github workflow to test on multiple python versions were written by the custom chatgpt](https://chatgpt.com/share/6717f922-8998-8005-b825-45d4b348b4dd)
+
+ ## Create a todo app using react + typescript + vite
+ ![Screenshot](https://github.com/rusiaaman/wcgw/blob/main/static/ss1.png?raw=true)
+
+
+ # Privacy
+ The relay server doesn't store any data. I can't access any information passing through it, and only secure channels are used to communicate.
+
+ You may host the server on your own and create a custom gpt using the following section.
+
+ # Creating your own custom gpt and the relay server
+ I've used the following instructions and action json schema to create the custom GPT. (Replace wcgw.arcfu.com with the address of your server.)
+
+ https://github.com/rusiaaman/wcgw/blob/main/gpt_instructions.txt
+ https://github.com/rusiaaman/wcgw/blob/main/gpt_action_json_schema.json
+
+ Run the server:
+ `gunicorn --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:443 src.relay.serve:app --certfile fullchain.pem --keyfile privkey.pem`
+
+ If you don't have a public ip and domain name, you can use `ngrok` or similar services to get an https address to the api.
+
+ Then specify the server url in the `wcgw` command like so
+ `wcgw --server-url https://your-url/register`
+
+ # [Optional] Local shell access with openai API key
+
+ Add `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
+
+ Clone the repo and run the following to install the `wcgw_local` command
+
+ `pip install .`
+
+ Then run
+
+ `wcgw_local --limit 0.1` # Cost limit $0.1
+
+ You can now directly write messages or press the enter key to open vim for multiline messages and text pasting.
wcgw-0.1.2/README.md ADDED
@@ -0,0 +1,95 @@
+ # Enable shell access on chatgpt.com
+ A custom gpt on the chatgpt web app to interact with your local shell.
+
+ [![Tests](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml/badge.svg?branch=main)](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml)
+ [![Build](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml/badge.svg)](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml)
+
+ ### 🚀 Highlights
+ - ⚡ **Full Shell Access**: No restrictions, complete control.
+ - ⚡ **Create, Execute, Iterate**: Ask the gpt to keep running compiler checks till all errors are fixed, or ask it to keep checking the status of a long-running command till it's done.
+ - ⚡ **Interactive Command Handling**: [beta] Supports interactive commands using arrow keys, interrupt, and ansi escape sequences.
+
+ ### 🪜 Steps:
+ 1. Run the [cli client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of choice.
+ 2. Share the generated id with this GPT: `https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access`
+ 3. The custom GPT can now run any command on your cli.
+
+
+ ## Client
+ You need to keep running this client for the GPT to access your shell. Run it in a version-controlled project's root.
+
+ ### Option 1: using uv [Recommended]
+ ```sh
+ $ curl -LsSf https://astral.sh/uv/install.sh | sh
+ $ uv tool run --python 3.12 wcgw@latest
+ ```
+
+ ### Option 2: using pip
+ Supports python >=3.10 and <3.13
+ ```sh
+ $ pip3 install wcgw
+ $ wcgw
+ ```
+
+
+ This will print a UUID that you need to share with the gpt.
+
+
+ ## Chat
+ Open the following link or search for the "wcgw" custom gpt using "Explore GPTs" on chatgpt.com
+
+ https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access
+
+ Finally, let the chatgpt know your user id in any format. E.g., "user_id=<your uuid>" followed by the rest of your instructions.
+
+ NOTE: you can resume a broken connection with
+ `wcgw --client-uuid $previous_uuid`
+
+ # How it works
+ Your commands are relayed through a server to the terminal client. [You could host the server on your own](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#creating-your-own-custom-gpt-and-the-relay-server). For public convenience, I've hosted one at https://wcgw.arcfu.com thanks to the gcloud free tier plan.
+
+ Chatgpt sends a request to the relay server using the user id that you share with it. The relay server holds a websocket with the terminal client against the user id and acts as a proxy to pass the request.
+
+ It's secure in both directions. Either a malicious actor or a malicious Chatgpt would have to correctly guess your UUID for any security breach.
+
+ # Showcase
+
+ ## Unit tests and github actions
+ [The first version of unit tests and the github workflow to test on multiple python versions were written by the custom chatgpt](https://chatgpt.com/share/6717f922-8998-8005-b825-45d4b348b4dd)
+
+ ## Create a todo app using react + typescript + vite
+ ![Screenshot](https://github.com/rusiaaman/wcgw/blob/main/static/ss1.png?raw=true)
+
+
+ # Privacy
+ The relay server doesn't store any data. I can't access any information passing through it, and only secure channels are used to communicate.
+
+ You may host the server on your own and create a custom gpt using the following section.
+
+ # Creating your own custom gpt and the relay server
+ I've used the following instructions and action json schema to create the custom GPT. (Replace wcgw.arcfu.com with the address of your server.)
+
+ https://github.com/rusiaaman/wcgw/blob/main/gpt_instructions.txt
+ https://github.com/rusiaaman/wcgw/blob/main/gpt_action_json_schema.json
+
+ Run the server:
+ `gunicorn --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:443 src.relay.serve:app --certfile fullchain.pem --keyfile privkey.pem`
+
+ If you don't have a public ip and domain name, you can use `ngrok` or similar services to get an https address to the api.
+
+ Then specify the server url in the `wcgw` command like so
+ `wcgw --server-url https://your-url/register`
+
+ # [Optional] Local shell access with openai API key
+
+ Add `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
+
+ Clone the repo and run the following to install the `wcgw_local` command
+
+ `pip install .`
+
+ Then run
+
+ `wcgw_local --limit 0.1` # Cost limit $0.1
+
+ You can now directly write messages or press the enter key to open vim for multiline messages and text pasting.
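The "How it works" section above only describes the client side in prose: the terminal client keeps a websocket open against the relay's `/register/{uuid}` endpoint, and the relay forwards each request to it. The following is a minimal sketch of that flow, not the packaged client; the message format and the reply shape are assumptions for illustration only (wcgw already depends on the `websockets` library).

```python
# Hedged sketch of a terminal client holding the relay websocket.
# The JSON message/reply format shown here is hypothetical.
import asyncio
import json
import uuid

import websockets  # declared wcgw dependency (websockets>=13.1)


async def run_client(server: str = "wss://wcgw.arcfu.com") -> None:
    client_uuid = uuid.uuid4()
    print(f"Share this id with the GPT: {client_uuid}")
    async with websockets.connect(f"{server}/register/{client_uuid}") as ws:
        while True:
            # The relay forwards each /execute_bash or /write_file request here;
            # the real client would run it and reply with the command output.
            request = json.loads(await ws.recv())
            print("received:", request)
            await ws.send(json.dumps({"output": "(exit 0)"}))  # placeholder reply


if __name__ == "__main__":
    asyncio.run(run_client())
```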
@@ -0,0 +1,231 @@
+
+ {
+   "openapi": "3.1.0",
+   "info": {
+     "title": "FastAPI",
+     "version": "0.1.0"
+   },
+   "servers": [
+     {"url": "https://wcgw.arcfu.com"}
+   ],
+   "paths": {
+     "/write_file": {
+       "post": {
+         "x-openai-isConsequential": false,
+         "summary": "Write File",
+         "operationId": "write_file_write_file__uuid__post",
+         "parameters": [
+           {
+             "name": "user_id",
+             "in": "query",
+             "required": true,
+             "schema": {
+               "type": "string",
+               "format": "uuid",
+               "title": "User Id"
+             }
+           }
+         ],
+         "requestBody": {
+           "required": true,
+           "content": {
+             "application/json": {
+               "schema": {
+                 "$ref": "#/components/schemas/Writefile"
+               }
+             }
+           }
+         },
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": {
+               "application/json": {
+                 "schema": {
+                   "type": "string",
+                   "title": "Response Write File Write File Uuid Post"
+                 }
+               }
+             }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": {
+                   "$ref": "#/components/schemas/HTTPValidationError"
+                 }
+               }
+             }
+           }
+         }
+       }
+     },
+     "/execute_bash": {
+       "post": {
+         "x-openai-isConsequential": false,
+         "summary": "Execute Bash",
+         "operationId": "execute_bash_execute_bash__uuid__post",
+         "parameters": [
+           {
+             "name": "user_id",
+             "in": "query",
+             "required": true,
+             "schema": {
+               "type": "string",
+               "format": "uuid",
+               "title": "User Id"
+             }
+           }
+         ],
+         "requestBody": {
+           "required": true,
+           "content": {
+             "application/json": {
+               "schema": {
+                 "$ref": "#/components/schemas/ExecuteBash"
+               }
+             }
+           }
+         },
+         "responses": {
+           "200": {
+             "description": "Successful Response",
+             "content": {
+               "application/json": {
+                 "schema": {
+                   "type": "string",
+                   "title": "Response Execute Bash Execute Bash Uuid Post"
+                 }
+               }
+             }
+           },
+           "422": {
+             "description": "Validation Error",
+             "content": {
+               "application/json": {
+                 "schema": {
+                   "$ref": "#/components/schemas/HTTPValidationError"
+                 }
+               }
+             }
+           }
+         }
+       }
+     }
+   },
+   "components": {
+     "schemas": {
+       "ExecuteBash": {
+         "properties": {
+           "execute_command": {
+             "anyOf": [
+               {
+                 "type": "string"
+               },
+               {
+                 "type": "null"
+               }
+             ],
+             "title": "Execute Command"
+           },
+           "send_ascii": {
+             "anyOf": [
+               {
+                 "items": {
+                   "anyOf": [
+                     {
+                       "type": "integer"
+                     },
+                     {
+                       "type": "string",
+                       "enum": [
+                         "Key-up",
+                         "Key-down",
+                         "Key-left",
+                         "Key-right",
+                         "Enter",
+                         "Ctrl-c"
+                       ]
+                     }
+                   ]
+                 },
+                 "type": "array"
+               },
+               {
+                 "type": "null"
+               }
+             ],
+             "title": "Send Ascii"
+           }
+         },
+         "type": "object",
+         "title": "ExecuteBash"
+       },
+       "HTTPValidationError": {
+         "properties": {
+           "detail": {
+             "items": {
+               "$ref": "#/components/schemas/ValidationError"
+             },
+             "type": "array",
+             "title": "Detail"
+           }
+         },
+         "type": "object",
+         "title": "HTTPValidationError"
+       },
+       "ValidationError": {
+         "properties": {
+           "loc": {
+             "items": {
+               "anyOf": [
+                 {
+                   "type": "string"
+                 },
+                 {
+                   "type": "integer"
+                 }
+               ]
+             },
+             "type": "array",
+             "title": "Location"
+           },
+           "msg": {
+             "type": "string",
+             "title": "Message"
+           },
+           "type": {
+             "type": "string",
+             "title": "Error Type"
+           }
+         },
+         "type": "object",
+         "required": [
+           "loc",
+           "msg",
+           "type"
+         ],
+         "title": "ValidationError"
+       },
+       "Writefile": {
+         "properties": {
+           "file_path": {
+             "type": "string",
+             "title": "File Path"
+           },
+           "file_content": {
+             "type": "string",
+             "title": "File Content"
+           }
+         },
+         "type": "object",
+         "required": [
+           "file_path",
+           "file_content"
+         ],
+         "title": "Writefile"
+       }
+     }
+   }
+ }
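The schema above is what the custom GPT calls through its action. A minimal sketch of equivalent requests made directly, assuming a terminal client is already registered under `user_id`; the `requests` library and the placeholder UUID are illustration-only assumptions, not part of the package.

```python
# Hedged sketch of calls matching the OpenAPI schema above.
import requests

BASE = "https://wcgw.arcfu.com"
user_id = "00000000-0000-0000-0000-000000000000"  # hypothetical registered UUID

# Run a command via the ExecuteBash schema (execute_command or send_ascii).
resp = requests.post(
    f"{BASE}/execute_bash",
    params={"user_id": user_id},
    json={"execute_command": "ls -la"},
    timeout=35,  # the relay itself gives up after 30 seconds
)
print(resp.text)  # plain string response, typically ending in "(exit <code>)"

# Write a file via the Writefile schema.
resp = requests.post(
    f"{BASE}/write_file",
    params={"user_id": user_id},
    json={"file_path": "hello.txt", "file_content": "hello\n"},
    timeout=35,
)
print(resp.text)
```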
@@ -0,0 +1,31 @@
+ You're a cli assistant.
+
+ Instructions:
+
+ - You should use the provided bash execution tool to run scripts to complete the objective.
+ - Do not use sudo. Do not use interactive commands.
+ - Ask the user for confirmation before running anything major.
+
+
+ To execute bash commands OR write files, use the provided api `wcgw.arcfu.com`.
+
+ Instructions for `Execute Bash`:
+ - Execute a bash script. This is stateful (beware with subsequent calls).
+ - Execute commands using the `execute_command` attribute.
+ - Do not use interactive commands like nano. Prefer writing simpler commands.
+ - The last line will always be `(exit <int code>)`, except in these cases:
+   - The last line is `(pending)` if the program is still running or waiting for your input. You can then send input using the `send_ascii` attribute. You get the status by sending a new line: `send_ascii: ["Enter"]` or `send_ascii: [10]`.
+   - The last line may be `(won't exit)`, in which case you need to kill the process if you want to run a new command.
+   - The output may be `exit shell has restarted`, in which case the environment has reset and you can run fresh commands.
+ - The first line might be `(...truncated)` if the output is too long.
+ - Always run `pwd` if you get any file or directory not found error to make sure you're not lost.
+
+ Instructions for `Write File`:
+ - Write content to a file. Provide the file path and content. Use this instead of ExecuteBash for writing files.
+
+ ---
+
+ Always critically think and debate with yourself to solve the problem. Understand the context and the code by reading as many resources as possible before writing a single piece of code.
+
+ ---
+ Ask the user for the user_id `UUID` if they haven't provided it in the first message.
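For reference, these are the payload shapes the instructions above rely on. The field names come from the action schema earlier in this diff; the command strings are illustrative examples only.

```python
# ExecuteBash payload variants described by the GPT instructions.

# Start a (possibly long-running) command.
start = {"execute_command": "sleep 5 && echo done"}

# If the reply's last line is "(pending)", poll by sending a newline...
poll = {"send_ascii": ["Enter"]}   # equivalently {"send_ascii": [10]}

# ...or interrupt the running process before issuing a new command.
interrupt = {"send_ascii": ["Ctrl-c"]}

# Writefile payload: write content instead of echoing it through bash.
write = {"file_path": "notes.txt", "file_content": "hello\n"}
```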
@@ -1,9 +1,10 @@
  [project]
+ authors = [{ name = "Aman Rusia", email = "gapypi@arcfu.com" }]
  name = "wcgw"
- version = "0.1.0"
+ version = "0.1.2"
  description = "What could go wrong giving full shell access to chatgpt?"
  readme = "README.md"
- requires-python = ">=3.12"
+ requires-python = ">=3.10, <3.13"
  dependencies = [
      "openai>=1.46.0",
      "mypy>=1.11.2",
@@ -20,8 +21,12 @@ dependencies = [
      "fastapi>=0.115.0",
      "uvicorn>=0.31.0",
      "websockets>=13.1",
+     "pydantic>=2.9.2",
  ]

+ [project.urls]
+ Homepage = "https://github.com/rusiaaman/wcgw"
+
  [build-system]
  requires = ["hatchling"]
  build-backend = "hatchling.build"
@@ -32,8 +37,8 @@ wcgw = "wcgw:listen"

  [tool.uv]
  dev-dependencies = [
-     "ipython>=8.27.0",
      "mypy>=1.11.2",
      "types-toml>=0.10.8.20240310",
      "autoflake",
+     "ipython>=8.12.3",
  ]
@@ -1,12 +1,14 @@
  import asyncio
+ import base64
  import threading
  import time
- from typing import Callable, Coroutine, Literal, Optional, Sequence
+ from typing import Any, Callable, Coroutine, DefaultDict, Literal, Optional, Sequence
  from uuid import UUID
  import fastapi
  from fastapi import WebSocket, WebSocketDisconnect
  from pydantic import BaseModel
  import uvicorn
+ from fastapi.staticfiles import StaticFiles

  from dotenv import load_dotenv

@@ -35,6 +37,30 @@ clients: dict[UUID, Callable[[Mdata], Coroutine[None, None, None]]] = {}
  websockets: dict[UUID, WebSocket] = {}
  gpts: dict[UUID, Callable[[str], None]] = {}

+ images: DefaultDict[UUID, dict[str, dict[str, Any]]] = DefaultDict(dict)
+
+
+ @app.websocket("/register_serve_image/{uuid}")
+ async def register_serve_image(websocket: WebSocket, uuid: UUID) -> None:
+     raise Exception("Disabled")
+     await websocket.accept()
+     received_data = await websocket.receive_json()
+     name = received_data["name"]
+     image_b64 = received_data["image_b64"]
+     image_bytes = base64.b64decode(image_b64)
+     images[uuid][name] = {
+         "content": image_bytes,
+         "media_type": received_data["media_type"],
+     }
+
+
+ @app.get("/get_image/{uuid}/{name}")
+ async def get_image(uuid: UUID, name: str) -> fastapi.responses.Response:
+     return fastapi.responses.Response(
+         content=images[uuid][name]["content"],
+         media_type=images[uuid][name]["media_type"],
+     )
+

  @app.websocket("/register/{uuid}")
  async def register_websocket(websocket: WebSocket, uuid: UUID) -> None:
@@ -60,9 +86,34 @@ async def register_websocket(websocket: WebSocket, uuid: UUID) -> None:
      print(f"Client {uuid} disconnected")


- @app.post("/action")
- async def chatgpt_server(json_data: Mdata) -> str:
-     user_id = json_data.user_id
+ @app.post("/write_file")
+ async def write_file(write_file_data: Writefile, user_id: UUID) -> str:
+     if user_id not in clients:
+         raise fastapi.HTTPException(
+             status_code=404, detail="User with the provided id not found"
+         )
+
+     results: Optional[str] = None
+
+     def put_results(result: str) -> None:
+         nonlocal results
+         results = result
+
+     gpts[user_id] = put_results
+
+     await clients[user_id](Mdata(data=write_file_data, user_id=user_id))
+
+     start_time = time.time()
+     while time.time() - start_time < 30:
+         if results is not None:
+             return results
+         await asyncio.sleep(0.1)
+
+     raise fastapi.HTTPException(status_code=500, detail="Timeout error")
+
+
+ @app.post("/execute_bash")
+ async def execute_bash(excute_bash_data: ExecuteBash, user_id: UUID) -> str:
      if user_id not in clients:
          raise fastapi.HTTPException(
              status_code=404, detail="User with the provided id not found"
@@ -76,7 +127,7 @@ async def chatgpt_server(json_data: Mdata) -> str:

      gpts[user_id] = put_results

-     await clients[user_id](json_data)
+     await clients[user_id](Mdata(data=excute_bash_data, user_id=user_id))

      start_time = time.time()
      while time.time() - start_time < 30:
@@ -87,6 +138,9 @@
      raise fastapi.HTTPException(status_code=500, detail="Timeout error")


+ app.mount("/static", StaticFiles(directory="static"), name="static")
+
+
  def run() -> None:
      load_dotenv()

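The new `/write_file` and `/execute_bash` handlers above hand the client's reply back through a `put_results` callback, then busy-poll it every 100 ms for up to 30 seconds. A self-contained sketch of the same hand-off using `asyncio.Event` instead of polling; this is shown only as an alternative pattern, not as what the package ships.

```python
# Alternative sketch (not wcgw's code): event-driven version of the
# request/response hand-off implemented by the endpoints above.
import asyncio
from typing import Optional


async def wait_for_result(timeout: float = 30.0) -> str:
    result: Optional[str] = None
    done = asyncio.Event()

    def put_results(value: str) -> None:
        nonlocal result
        result = value
        done.set()  # wake the waiter as soon as the client replies

    # In serve.py this callback would be stored in gpts[user_id] and invoked
    # when the terminal client sends its output back over the websocket.
    put_results("(exit 0)")  # simulate the client's reply

    try:
        await asyncio.wait_for(done.wait(), timeout)
    except asyncio.TimeoutError:
        raise RuntimeError("Timeout error")
    assert result is not None
    return result


print(asyncio.run(wait_for_result()))
```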
@@ -0,0 +1,7 @@
+ Privacy Policy
+ I do not collect, store, or share any personal data.
+ The data from your terminal is not stored anywhere, and it's not logged or collected in any form.
+ There is a relay webserver for connecting your terminal to chatgpt; its source code is open at https://github.com/rusiaaman/wcgw/tree/main/src/relay and you can run it on your own.
+ Other than the relay webserver, there is no further involvement of my servers or services.
+ Feel free to contact me at info@arcfu.com for questions on privacy or anything else.
+