wcgw 0.0.9__tar.gz → 0.0.10__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

wcgw-0.0.10/PKG-INFO ADDED
@@ -0,0 +1,97 @@
+ Metadata-Version: 2.3
+ Name: wcgw
+ Version: 0.0.10
+ Summary: What could go wrong giving full shell access to chatgpt?
+ Project-URL: Homepage, https://github.com/rusiaaman/wcgw
+ Author-email: Aman Rusia <gapypi@arcfu.com>
+ Requires-Python: <3.13,>=3.8
+ Requires-Dist: fastapi>=0.115.0
+ Requires-Dist: mypy>=1.11.2
+ Requires-Dist: openai>=1.46.0
+ Requires-Dist: petname>=2.6
+ Requires-Dist: pexpect>=4.9.0
+ Requires-Dist: pydantic>=2.9.2
+ Requires-Dist: pyte>=0.8.2
+ Requires-Dist: python-dotenv>=1.0.1
+ Requires-Dist: rich>=13.8.1
+ Requires-Dist: shell>=1.0.1
+ Requires-Dist: tiktoken==0.7.0
+ Requires-Dist: toml>=0.10.2
+ Requires-Dist: typer>=0.12.5
+ Requires-Dist: types-pexpect>=4.9.0.20240806
+ Requires-Dist: uvicorn>=0.31.0
+ Requires-Dist: websockets>=13.1
+ Description-Content-Type: text/markdown
+
+ # Enable shell access on chatgpt.com
+ A custom GPT on the ChatGPT web app to interact with your local shell.
+
+ ### 🚀 Highlights
+ - ⚡ **Full Shell Access**: No restrictions, complete control.
+ - ⚡ **Create, Execute, Iterate**: Ask the GPT to keep running compiler checks till all errors are fixed, or to keep checking the status of a long-running command till it's done.
+ - ⚡ **Interactive Command Handling**: [beta] Supports interactive commands using arrow keys, interrupt, and ANSI escape sequences.
+
+ ### 🪜 Steps:
+ 1. Run the [CLI client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of your choice.
+ 2. Share the generated id with this GPT: `https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access`
+ 3. The custom GPT can now run any command on your CLI.
+
+ ## Client
+
+ ### Option 1: using pip
+ Supports Python >=3.8 and <3.13
+ ```sh
+ $ pip3 install wcgw
+ $ wcgw
+ ```
+
+ ### Option 2: using uv
+ ```sh
+ $ curl -LsSf https://astral.sh/uv/install.sh | sh
+ $ uv tool run --python 3.12 wcgw
+ ```
+
+ This will print a UUID that you need to share with the GPT.
+
+
+ ## Chat
+ Open the following link, or search for the "wcgw" custom GPT using "Explore GPTs" on chatgpt.com:
+
+ https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access
+
+ Finally, let ChatGPT know your user id in any format, e.g., "user_id=<your uuid>", followed by the rest of your instructions.
+
+ NOTE: you can resume a broken connection:
+ `wcgw --client-uuid $previous_uuid`
+
+ # How it works
+ Your commands are relayed through a server I've hosted at https://wcgw.arcfu.com. The code for that is at `src/relay/serve.py`.
+
+ ChatGPT sends a request to the relay server using the user id that you share with it. The relay server holds a websocket with the terminal client against that user id and acts as a proxy to pass the request.
+
+ It's secure in both directions: a malicious actor or a malicious ChatGPT would have to correctly guess your UUID to cause a security breach.
+
+ NOTE: the relay server doesn't store any data. If you don't trust it, you may host the server yourself and create a custom GPT. Create an issue and I'll be happy to share the full instructions and schema I've given in the custom GPT configuration.
+
+ # Showcase
+
+ ## Create a todo app using react + typescript + vite
+ https://chatgpt.com/share/6717d94d-756c-8005-98a6-d021c7b586aa
+
+ ## Write unit tests for all files in my current repo
+ [Todo]
+
+
+ # [Optional] Local shell access with openai API key
+
+ Add the `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
+
+ Clone the repo, then run the following to install the `wcgw_local` command:
+
+ `pip install .`
+
+ Then run:
+
+ `wcgw_local --limit 0.1` # Cost limit $0.1
+
+ You can now directly write messages, or press the enter key to open vim for multiline messages and text pasting.
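The relay described in "How it works" keys each terminal client by its UUID and forwards a request only to the matching websocket. A dependency-free sketch of that routing idea, with asyncio queues standing in for websockets (all names here are illustrative, not the actual `src/relay/serve.py` API):

```python
import asyncio
import uuid

# Illustrative in-memory relay: a dict maps each client UUID to a queue that
# stands in for that client's websocket. The real relay holds live websocket
# connections; these helper names are ours, not wcgw's.
clients: dict[str, asyncio.Queue] = {}


async def register_client() -> str:
    """A terminal client connects and is registered under a fresh UUID."""
    client_uuid = str(uuid.uuid4())
    clients[client_uuid] = asyncio.Queue()
    return client_uuid


async def relay_request(client_uuid: str, command: str) -> str:
    """A request from ChatGPT is forwarded only to the matching client."""
    if client_uuid not in clients:
        return "unknown client id"
    queue = clients[client_uuid]
    await queue.put(command)
    # The real client would execute the command and reply with its output;
    # here we just pop the queued command back to show the routing.
    return f"forwarded: {await queue.get()}"


async def main() -> None:
    cid = await register_client()
    print(await relay_request(cid, "ls"))        # routed to the right client
    print(await relay_request("wrong-id", "ls"))  # rejected: the UUID is the only key


asyncio.run(main())
```

The real server keeps the websocket open and returns the command's output rather than echoing the command; the point is that an unguessable UUID is the only routing key, which is the security property the README relies on.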
wcgw-0.0.10/README.md ADDED
@@ -0,0 +1,72 @@
+ # Enable shell access on chatgpt.com
+ A custom GPT on the ChatGPT web app to interact with your local shell.
+
+ ### 🚀 Highlights
+ - ⚡ **Full Shell Access**: No restrictions, complete control.
+ - ⚡ **Create, Execute, Iterate**: Ask the GPT to keep running compiler checks till all errors are fixed, or to keep checking the status of a long-running command till it's done.
+ - ⚡ **Interactive Command Handling**: [beta] Supports interactive commands using arrow keys, interrupt, and ANSI escape sequences.
+
+ ### 🪜 Steps:
+ 1. Run the [CLI client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of your choice.
+ 2. Share the generated id with this GPT: `https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access`
+ 3. The custom GPT can now run any command on your CLI.
+
+ ## Client
+
+ ### Option 1: using pip
+ Supports Python >=3.8 and <3.13
+ ```sh
+ $ pip3 install wcgw
+ $ wcgw
+ ```
+
+ ### Option 2: using uv
+ ```sh
+ $ curl -LsSf https://astral.sh/uv/install.sh | sh
+ $ uv tool run --python 3.12 wcgw
+ ```
+
+ This will print a UUID that you need to share with the GPT.
+
+
+ ## Chat
+ Open the following link, or search for the "wcgw" custom GPT using "Explore GPTs" on chatgpt.com:
+
+ https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access
+
+ Finally, let ChatGPT know your user id in any format, e.g., "user_id=<your uuid>", followed by the rest of your instructions.
+
+ NOTE: you can resume a broken connection:
+ `wcgw --client-uuid $previous_uuid`
+
+ # How it works
+ Your commands are relayed through a server I've hosted at https://wcgw.arcfu.com. The code for that is at `src/relay/serve.py`.
+
+ ChatGPT sends a request to the relay server using the user id that you share with it. The relay server holds a websocket with the terminal client against that user id and acts as a proxy to pass the request.
+
+ It's secure in both directions: a malicious actor or a malicious ChatGPT would have to correctly guess your UUID to cause a security breach.
+
+ NOTE: the relay server doesn't store any data. If you don't trust it, you may host the server yourself and create a custom GPT. Create an issue and I'll be happy to share the full instructions and schema I've given in the custom GPT configuration.
+
+ # Showcase
+
+ ## Create a todo app using react + typescript + vite
+ https://chatgpt.com/share/6717d94d-756c-8005-98a6-d021c7b586aa
+
+ ## Write unit tests for all files in my current repo
+ [Todo]
+
+
+ # [Optional] Local shell access with openai API key
+
+ Add the `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
+
+ Clone the repo, then run the following to install the `wcgw_local` command:
+
+ `pip install .`
+
+ Then run:
+
+ `wcgw_local --limit 0.1` # Cost limit $0.1
+
+ You can now directly write messages, or press the enter key to open vim for multiline messages and text pasting.
wcgw-0.0.10/pyproject.toml CHANGED
@@ -1,7 +1,7 @@
  [project]
  authors = [{ name = "Aman Rusia", email = "gapypi@arcfu.com" }]
  name = "wcgw"
- version = "0.0.9"
+ version = "0.0.10"
  description = "What could go wrong giving full shell access to chatgpt?"
  readme = "README.md"
  requires-python = ">=3.8, <3.13"
@@ -21,6 +21,7 @@ dependencies = [
  "fastapi>=0.115.0",
  "uvicorn>=0.31.0",
  "websockets>=13.1",
+ "pydantic>=2.9.2",
  ]

  [project.urls]
@@ -16,7 +16,7 @@ from openai.types.chat import (
  ParsedChatCompletionMessage,
  )
  import rich
- import petname
+ import petname  # type: ignore[import-untyped]
  from typer import Typer
  import uuid

@@ -30,7 +30,6 @@ from .tools import (
  Confirmation,
  DoneFlag,
  Writefile,
- get_is_waiting_user_input,
  get_tool_output,
  SHELL,
  start_shell,
@@ -92,29 +91,22 @@ def parse_user_message_special(msg: str) -> ChatCompletionUserMessageParam:
  if line.startswith("%"):
  args = line[1:].strip().split(" ")
  command = args[0]
- assert command == 'image'
+ assert command == "image"
  image_path = args[1]
- with open(image_path, 'rb') as f:
+ with open(image_path, "rb") as f:
  image_bytes = f.read()
  image_b64 = base64.b64encode(image_bytes).decode("utf-8")
  image_type = mimetypes.guess_type(image_path)[0]
- dataurl=f'data:{image_type};base64,{image_b64}'
- parts.append({
- 'type': 'image_url',
- 'image_url': {
- 'url': dataurl,
- 'detail': 'auto'
- }
- })
+ dataurl = f"data:{image_type};base64,{image_b64}"
+ parts.append(
+ {"type": "image_url", "image_url": {"url": dataurl, "detail": "auto"}}
+ )
  else:
- if len(parts) > 0 and parts[-1]['type'] == 'text':
- parts[-1]['text'] += '\n' + line
+ if len(parts) > 0 and parts[-1]["type"] == "text":
+ parts[-1]["text"] += "\n" + line
  else:
- parts.append({'type': 'text', 'text': line})
- return {
- 'role': 'user',
- 'content': parts
- }
+ parts.append({"type": "text", "text": line})
+ return {"role": "user", "content": parts}


  app = Typer(pretty_exceptions_show_locals=False)
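The reformatted `%image` branch above is a standard-library recipe: read the image bytes, base64-encode them, guess the MIME type from the file extension, and pack the result into a `data:` URL for an `image_url` message part. A standalone sketch of that construction (the helper name is ours, not wcgw's):

```python
import base64
import mimetypes


def to_data_url(image_path: str, image_bytes: bytes) -> str:
    """Encode raw image bytes as a data: URL, guessing the MIME type
    from the file extension, as the CLI code above does."""
    image_b64 = base64.b64encode(image_bytes).decode("utf-8")
    image_type = mimetypes.guess_type(image_path)[0]
    return f"data:{image_type};base64,{image_b64}"


# A few PNG magic bytes are enough to show the shape of the result.
url = to_data_url("example.png", b"\x89PNG\r\n\x1a\n")
print(url)
```

Note the MIME type comes purely from the filename, so a mislabeled extension yields a mislabeled data URL; the real code reads the bytes from the path it guesses the type for.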
@@ -146,7 +138,7 @@ def loop(
  if history[1]["role"] != "user":
  raise ValueError("Invalid history file, second message should be user")
  first_message = ""
- waiting_for_assistant = history[-1]['role'] != 'assistant'
+ waiting_for_assistant = history[-1]["role"] != "assistant"

  my_dir = os.path.dirname(__file__)
  config_file = os.path.join(my_dir, "..", "..", "config.toml")
@@ -161,9 +153,6 @@
  enc = tiktoken.encoding_for_model(
  config.model if not config.model.startswith("o1") else "gpt-4o"
  )
- is_waiting_user_input = get_is_waiting_user_input(
- config.model, config.cost_file[config.model]
- )

  tools = [
  openai.pydantic_function_tool(
@@ -290,7 +279,7 @@ System information:
  )
  system_console.print(f"\nTotal cost: {config.cost_unit}{cost:.3f}")
  output_toks += output_toks_
-
+
  _histories.append(item)
  for tool_call_id, toolcallargs in tool_call_args_by_id.items():
  for toolindex, tool_args in toolcallargs.items():
@@ -300,7 +289,7 @@ System information:
  enc,
  limit - cost,
  loop,
- is_waiting_user_input,
+ max_tokens=2048,
  )
  except Exception as e:
  output_or_done = (
@@ -322,42 +311,49 @@ System information:
  f"\nTotal cost: {config.cost_unit}{cost:.3f}"
  )
  return output_or_done.task_output, cost
-
+
  output = output_or_done

  if isinstance(output, ImageData):
  randomId = petname.Generate(2, "-")
  if not image_histories:
- image_histories.extend([
- {
- 'role': 'assistant',
- 'content': f'Share images with ids: {randomId}'
-
- },
- {
- 'role': 'user',
- 'content': [{
- 'type': 'image_url',
- 'image_url': {
- 'url': output.dataurl,
- 'detail': 'auto'
- }
- }]
- }]
+ image_histories.extend(
+ [
+ {
+ "role": "assistant",
+ "content": f"Share images with ids: {randomId}",
+ },
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": output.dataurl,
+ "detail": "auto",
+ },
+ }
+ ],
+ },
+ ]
  )
  else:
- image_histories[0]['content'] += ', ' + randomId
- image_histories[1]["content"].append({  # type: ignore
- 'type': 'image_url',
- 'image_url': {
- 'url': output.dataurl,
- 'detail': 'auto'
+ image_histories[0]["content"] += ", " + randomId
+ second_content = image_histories[1]["content"]
+ assert isinstance(second_content, list)
+ second_content.append(
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": output.dataurl,
+ "detail": "auto",
+ },
  }
- })
+ )

  item = {
  "role": "tool",
- "content": f'Ask user for image id: {randomId}',
+ "content": f"Ask user for image id: {randomId}",
  "tool_call_id": tool_call_id + str(toolindex),
  }
  else:
@@ -5,14 +5,23 @@ import mimetypes
  import sys
  import threading
  import traceback
- from typing import Callable, Literal, NewType, Optional, ParamSpec, Sequence, TypeVar, TypedDict
+ from typing import (
+ Callable,
+ Literal,
+ NewType,
+ Optional,
+ ParamSpec,
+ Sequence,
+ TypeVar,
+ TypedDict,
+ )
  import uuid
  from pydantic import BaseModel, TypeAdapter
  from websockets.sync.client import connect as syncconnect

  import os
  import tiktoken
- import petname  # type: ignore[import]
+ import petname  # type: ignore[import-untyped]
  import pexpect
  from typer import Typer
  import websockets
@@ -68,14 +77,14 @@ class Writefile(BaseModel):
  file_content: str


- def start_shell():
+ def start_shell() -> pexpect.spawn:
  SHELL = pexpect.spawn(
  "/bin/bash --noprofile --norc",
- env={**os.environ, **{"PS1": "#@@"}},
+ env={**os.environ, **{"PS1": "#@@"}},  # type: ignore[arg-type]
  echo=False,
  encoding="utf-8",
  timeout=TIMEOUT,
- )  # type: ignore[arg-type]
+ )
  SHELL.expect("#@@")
  SHELL.sendline("stty -icanon -echo")
  SHELL.expect("#@@")
@@ -119,17 +128,6 @@ BASH_CLF_OUTPUT = Literal["running", "waiting_for_input", "wont_exit"]
  BASH_STATE: BASH_CLF_OUTPUT = "running"


- def get_output_of_last_command(enc: tiktoken.Encoding) -> str:
- global SHELL, BASH_STATE
- output = render_terminal_output(SHELL.before)
-
- tokens = enc.encode(output)
- if len(tokens) >= 2048:
- output = "...(truncated)\n" + enc.decode(tokens[-2047:])
-
- return output
-
-
  WAITING_INPUT_MESSAGE = """A command is already running waiting for input. NOTE: You can't run multiple shell sessions, likely a previous program hasn't exited.
  1. Get its output using `send_ascii: [10]`
  2. Use `send_ascii` to give inputs to the running program, don't use `execute_command` OR
@@ -137,9 +135,7 @@


  def execute_bash(
- enc: tiktoken.Encoding,
- bash_arg: ExecuteBash,
- is_waiting_user_input: Callable[[str], tuple[BASH_CLF_OUTPUT, float]],
+ enc: tiktoken.Encoding, bash_arg: ExecuteBash, max_tokens: Optional[int]
  ) -> tuple[str, float]:
  global SHELL, BASH_STATE
  try:
@@ -186,38 +182,33 @@ def execute_bash(
  SHELL = start_shell()
  raise

- wait = timeout = 5
+ wait = 5
  index = SHELL.expect(["#@@", pexpect.TIMEOUT], timeout=wait)
  running = ""
  while index == 1:
  if wait > TIMEOUT:
  raise TimeoutError("Timeout while waiting for shell prompt")

- text = SHELL.before
+ BASH_STATE = "waiting_for_input"
+ text = SHELL.before or ""
  print(text[len(running) :])
  running = text

  text = render_terminal_output(text)
- BASH_STATE, cost = is_waiting_user_input(text)
- if BASH_STATE == "waiting_for_input" or BASH_STATE == "wont_exit":
- tokens = enc.encode(text)
+ tokens = enc.encode(text)

- if len(tokens) >= 2048:
- text = "...(truncated)\n" + enc.decode(tokens[-2047:])
+ if max_tokens and len(tokens) >= max_tokens:
+ text = "...(truncated)\n" + enc.decode(tokens[-(max_tokens - 1) :])

- last_line = (
- "(pending)" if BASH_STATE == "waiting_for_input" else "(won't exit)"
- )
- return text + f"\n{last_line}", cost
- index = SHELL.expect(["#@@", pexpect.TIMEOUT], timeout=wait)
- wait += timeout
+ last_line = "(pending)"
+ return text + f"\n{last_line}", 0

  assert isinstance(SHELL.before, str)
  output = render_terminal_output(SHELL.before)

  tokens = enc.encode(output)
- if len(tokens) >= 2048:
- output = "...(truncated)\n" + enc.decode(tokens[-2047:])
+ if max_tokens and len(tokens) >= max_tokens:
+ output = "...(truncated)\n" + enc.decode(tokens[-(max_tokens - 1) :])

  try:
  exit_code = _get_exit_code()
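The new `max_tokens` parameter above replaces the hard-coded 2048 cutoff: shell output is tokenized, and once it reaches the limit only the last `max_tokens - 1` tokens are kept behind a `...(truncated)` marker, while `None` disables truncation. A minimal sketch of the same rule, with whitespace splitting standing in for the package's tiktoken encoder so the sketch stays dependency-free:

```python
from typing import List, Optional


def encode(text: str) -> List[str]:
    # Stand-in for tiktoken's enc.encode: whitespace "tokens".
    return text.split(" ")


def decode(tokens: List[str]) -> str:
    # Stand-in for tiktoken's enc.decode.
    return " ".join(tokens)


def truncate_tail(output: str, max_tokens: Optional[int]) -> str:
    """Keep only the last max_tokens - 1 tokens and mark the cut,
    mirroring the truncation rule in execute_bash above."""
    tokens = encode(output)
    if max_tokens and len(tokens) >= max_tokens:
        output = "...(truncated)\n" + decode(tokens[-(max_tokens - 1) :])
    return output


long_output = " ".join(str(i) for i in range(10))
print(truncate_tail(long_output, max_tokens=4))   # keeps only the tail
print(truncate_tail(long_output, max_tokens=None))  # None disables truncation
```

Keeping the tail rather than the head matters for a shell tool: the prompt and exit status of a long command live at the end of its output.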
@@ -236,7 +227,7 @@

  class ReadImage(BaseModel):
  file_path: str
- type: Literal['ReadImage'] = 'ReadImage'
+ type: Literal["ReadImage"] = "ReadImage"


  def serve_image_in_bg(file_path: str, client_uuid: str, name: str) -> None:
@@ -258,11 +249,11 @@ def serve_image_in_bg(file_path: str, client_uuid: str, name: str) -> None:
  print(f"Connection closed for UUID: {client_uuid}, retrying")
  serve_image_in_bg(file_path, client_uuid, name)

+
  class ImageData(BaseModel):
  dataurl: str

-
  Param = ParamSpec("Param")

  T = TypeVar("T")
@@ -281,6 +272,7 @@ def ensure_no_previous_output(func: Callable[Param, T]) -> Callable[Param, T]:

  return wrapper

+
  @ensure_no_previous_output
  def read_image_from_shell(file_path: str) -> ImageData:
  if not os.path.isabs(file_path):
@@ -297,8 +289,8 @@ def read_image_from_shell(file_path: str) -> ImageData:
  image_bytes = image_file.read()
  image_b64 = base64.b64encode(image_bytes).decode("utf-8")
  image_type = mimetypes.guess_type(file_path)[0]
- return ImageData(dataurl=f'data:{image_type};base64,{image_b64}')
-
+ return ImageData(dataurl=f"data:{image_type};base64,{image_b64}")
+

  @ensure_no_previous_output
  def write_file(writefile: Writefile) -> str:
@@ -349,11 +341,17 @@ def which_tool(args: str) -> BaseModel:


  def get_tool_output(
- args: dict | Confirmation | ExecuteBash | Writefile | AIAssistant | DoneFlag | ReadImage,
+ args: dict[object, object]
+ | Confirmation
+ | ExecuteBash
+ | Writefile
+ | AIAssistant
+ | DoneFlag
+ | ReadImage,
  enc: tiktoken.Encoding,
  limit: float,
  loop_call: Callable[[str, float], tuple[str, float]],
- is_waiting_user_input: Callable[[str], tuple[BASH_CLF_OUTPUT, float]],
+ max_tokens: Optional[int],
  ) -> tuple[str | ImageData | DoneFlag, float]:
  if isinstance(args, dict):
  adapter = TypeAdapter[
@@ -362,13 +360,13 @@ def get_tool_output(
  arg = adapter.validate_python(args)
  else:
  arg = args
- output: tuple[str | DoneFlag, float]
+ output: tuple[str | DoneFlag | ImageData, float]
  if isinstance(arg, Confirmation):
  console.print("Calling ask confirmation tool")
  output = ask_confirmation(arg), 0.0
  elif isinstance(arg, ExecuteBash):
  console.print("Calling execute bash tool")
- output = execute_bash(enc, arg, is_waiting_user_input)
+ output = execute_bash(enc, arg, max_tokens)
  elif isinstance(arg, Writefile):
  console.print("Calling write file tool")
  output = write_file(arg), 0
@@ -391,7 +389,9 @@ def get_tool_output(
  History = list[ChatCompletionMessageParam]


- def get_is_waiting_user_input(model: Models, cost_data: CostData):
+ def get_is_waiting_user_input(
+ model: Models, cost_data: CostData
+ ) -> Callable[[str], tuple[BASH_CLF_OUTPUT, float]]:
  enc = tiktoken.encoding_for_model(model if not model.startswith("o1") else "gpt-4o")
  system_prompt = """You need to classify if a bash program is waiting for user input based on its stdout, or if it won't exit. You'll be given the output of any program.
  Return `waiting_for_input` if the program is waiting for INTERACTIVE input only, Return 'running' if it's waiting for external resources or just waiting to finish.
@@ -451,7 +451,7 @@ def execute_user_input() -> None:
  ExecuteBash(
  send_ascii=[ord(x) for x in user_input] + [ord("\n")]
  ),
- lambda x: ("waiting_for_input", 0),
+ max_tokens=None,
  )[0]
  )
  except Exception as e:
@@ -467,31 +467,25 @@ async def register_client(server_url: str, client_uuid: str = "") -> None:

  # Create the WebSocket connection
  async with websockets.connect(f"{server_url}/{client_uuid}") as websocket:
- print(f"Connected. Share this user id with the chatbot: {client_uuid} \nLink: https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access")
+ print(
+ f"Connected. Share this user id with the chatbot: {client_uuid} \nLink: https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access"
+ )
  try:
  while True:
  # Wait to receive data from the server
  message = await websocket.recv()
  mdata = Mdata.model_validate_json(message)
  with execution_lock:
- # is_waiting_user_input = get_is_waiting_user_input(
- # default_model, default_cost
- # )
- is_waiting_user_input = lambda x: ("waiting_for_input", 0)
  try:
  output, cost = get_tool_output(
- mdata.data,
- default_enc,
- 0.0,
- lambda x, y: ("", 0),
- is_waiting_user_input,
+ mdata.data, default_enc, 0.0, lambda x, y: ("", 0), None
  )
  curr_cost += cost
  print(f"{curr_cost=}")
  except Exception as e:
  output = f"GOT EXCEPTION while calling tool. Error: {e}"
  traceback.print_exc()
- assert not isinstance(output, DoneFlag)
+ assert isinstance(output, str)
  await websocket.send(output)

  except (websockets.ConnectionClosed, ConnectionError):
@@ -499,14 +493,17 @@ async def register_client(server_url: str, client_uuid: str = "") -> None:
  await register_client(server_url, client_uuid)


- def run() -> None:
- if len(sys.argv) > 1:
- server_url = sys.argv[1]
- else:
- server_url = "wss://wcgw.arcfu.com/register"
+ run = Typer(pretty_exceptions_show_locals=False, no_args_is_help=True)
+

+ @run.command()
+ def app(
+ server_url: str = "wss://wcgw.arcfu.com/register", client_uuid: Optional[str] = None
+ ) -> None:
  thread1 = threading.Thread(target=execute_user_input)
- thread2 = threading.Thread(target=asyncio.run, args=(register_client(server_url),))
+ thread2 = threading.Thread(
+ target=asyncio.run, args=(register_client(server_url, client_uuid or ""),)
+ )

  thread1.start()
  thread2.start()
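The new Typer entry point keeps the earlier two-thread layout: one thread runs the blocking user-input loop, while a second drives the websocket coroutine by handing it to `asyncio.run` as a thread target. A stdlib-only sketch of that pattern (the worker bodies are illustrative stand-ins, not wcgw's actual functions):

```python
import asyncio
import threading

results: list[str] = []


def blocking_input_loop() -> None:
    # Stand-in for execute_user_input's blocking loop.
    results.append("input loop ran")


async def client_coroutine(server_url: str, client_uuid: str) -> None:
    # Stand-in for register_client; the real version opens a websocket.
    await asyncio.sleep(0)
    results.append(f"registered {client_uuid} at {server_url}")


thread1 = threading.Thread(target=blocking_input_loop)
# asyncio.run drives the coroutine to completion inside the second thread,
# exactly as the Typer command above does for register_client.
thread2 = threading.Thread(
    target=asyncio.run,
    args=(client_coroutine("wss://example.invalid/register", "abc"),),
)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
print(results)
```

This works because `asyncio.run` starts a fresh event loop in whichever thread calls it, so the blocking input loop and the websocket client never contend for one loop.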
wcgw-0.0.10/uv.lock CHANGED
@@ -1023,7 +1023,7 @@ wheels = [

  [[package]]
  name = "wcgw"
- version = "0.0.9"
+ version = "0.0.10"
  source = { editable = "." }
  dependencies = [
  { name = "fastapi" },
@@ -1031,6 +1031,7 @@ dependencies = [
  { name = "openai" },
  { name = "petname" },
  { name = "pexpect" },
+ { name = "pydantic" },
  { name = "pyte" },
  { name = "python-dotenv" },
  { name = "rich" },
@@ -1058,6 +1059,7 @@ requires-dist = [
  { name = "openai", specifier = ">=1.46.0" },
  { name = "petname", specifier = ">=2.6" },
  { name = "pexpect", specifier = ">=4.9.0" },
+ { name = "pydantic", specifier = ">=2.9.2" },
  { name = "pyte", specifier = ">=0.8.2" },
  { name = "python-dotenv", specifier = ">=1.0.1" },
  { name = "rich", specifier = ">=13.8.1" },
wcgw-0.0.9/PKG-INFO DELETED
@@ -1,63 +0,0 @@
- Metadata-Version: 2.3
- Name: wcgw
- Version: 0.0.9
- Summary: What could go wrong giving full shell access to chatgpt?
- Project-URL: Homepage, https://github.com/rusiaaman/wcgw
- Author-email: Aman Rusia <gapypi@arcfu.com>
- Requires-Python: <3.13,>=3.8
- Requires-Dist: fastapi>=0.115.0
- Requires-Dist: mypy>=1.11.2
- Requires-Dist: openai>=1.46.0
- Requires-Dist: petname>=2.6
- Requires-Dist: pexpect>=4.9.0
- Requires-Dist: pyte>=0.8.2
- Requires-Dist: python-dotenv>=1.0.1
- Requires-Dist: rich>=13.8.1
- Requires-Dist: shell>=1.0.1
- Requires-Dist: tiktoken==0.7.0
- Requires-Dist: toml>=0.10.2
- Requires-Dist: typer>=0.12.5
- Requires-Dist: types-pexpect>=4.9.0.20240806
- Requires-Dist: uvicorn>=0.31.0
- Requires-Dist: websockets>=13.1
- Description-Content-Type: text/markdown
-
- # Shell access to chatgpt.com
-
- ### 🚀 Highlights
- - ⚡ **Full Shell Access**: No restrictions, complete control.
- - ⚡ **Create, Execute, Iterate**: Seamless workflow for development and execution.
- - ⚡ **Interactive Command Handling**: Supports interactive commands with ease.
-
-
- ### 🪜 Steps:
- 1. Run the [cli client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of choice.
- 2. Share the generated id with the GPT: `https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access`
- 3. The custom GPT can now run any command on your cli
-
- ## Client
-
- ### Option 1: using pip
- ```sh
- $ pip install wcgw
- $ wcgw
- ```
-
- ### Option 2: using uv
- ```sh
- $ curl -LsSf https://astral.sh/uv/install.sh | sh
- $ uv tool run wcgw
- ```
-
- This will print a UUID that you need to share with the gpt.
-
-
- ## Chat
- https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access
-
- Add user id the client generated to the first message along with the instructions.
-
- # How it works
- Your commands are relayed through a server I've hosted at https://wcgw.arcfu.com. The code for that is at `src/relay/serve.py`.
-
- The user id that you share with chatgpt is added in the request it sents to the relay server which holds a websocket with the terminal client.
wcgw-0.0.9/README.md DELETED
@@ -1,39 +0,0 @@
- # Shell access to chatgpt.com
-
- ### 🚀 Highlights
- - ⚡ **Full Shell Access**: No restrictions, complete control.
- - ⚡ **Create, Execute, Iterate**: Seamless workflow for development and execution.
- - ⚡ **Interactive Command Handling**: Supports interactive commands with ease.
-
-
- ### 🪜 Steps:
- 1. Run the [cli client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of choice.
- 2. Share the generated id with the GPT: `https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access`
- 3. The custom GPT can now run any command on your cli
-
- ## Client
-
- ### Option 1: using pip
- ```sh
- $ pip install wcgw
- $ wcgw
- ```
-
- ### Option 2: using uv
- ```sh
- $ curl -LsSf https://astral.sh/uv/install.sh | sh
- $ uv tool run wcgw
- ```
-
- This will print a UUID that you need to share with the gpt.
-
-
- ## Chat
- https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access
-
- Add user id the client generated to the first message along with the instructions.
-
- # How it works
- Your commands are relayed through a server I've hosted at https://wcgw.arcfu.com. The code for that is at `src/relay/serve.py`.
-
- The user id that you share with chatgpt is added in the request it sents to the relay server which holds a websocket with the terminal client.