wcgw 1.5.4__tar.gz → 2.0.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

This version of wcgw might be problematic; see the registry's advisory for more details.

Files changed (42)
  1. wcgw-2.0.1/.github/workflows/python-types.yml +29 -0
  2. wcgw-2.0.1/PKG-INFO +156 -0
  3. wcgw-2.0.1/README.md +127 -0
  4. wcgw-2.0.1/openai.md +71 -0
  5. {wcgw-1.5.4 → wcgw-2.0.1}/pyproject.toml +1 -1
  6. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/anthropic_client.py +57 -28
  7. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/computer_use.py +1 -1
  8. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/mcp_server/__init__.py +1 -0
  9. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/tools.py +14 -5
  10. wcgw-1.5.4/PKG-INFO +0 -178
  11. wcgw-1.5.4/README.md +0 -149
  12. wcgw-1.5.4/add.py +0 -6
  13. {wcgw-1.5.4 → wcgw-2.0.1}/.github/workflows/python-publish.yml +0 -0
  14. {wcgw-1.5.4 → wcgw-2.0.1}/.github/workflows/python-tests.yml +0 -0
  15. {wcgw-1.5.4 → wcgw-2.0.1}/.gitignore +0 -0
  16. {wcgw-1.5.4 → wcgw-2.0.1}/.python-version +0 -0
  17. {wcgw-1.5.4 → wcgw-2.0.1}/.vscode/settings.json +0 -0
  18. {wcgw-1.5.4 → wcgw-2.0.1}/gpt_action_json_schema.json +0 -0
  19. {wcgw-1.5.4 → wcgw-2.0.1}/gpt_instructions.txt +0 -0
  20. {wcgw-1.5.4 → wcgw-2.0.1}/src/__init__.py +0 -0
  21. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/__init__.py +0 -0
  22. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/__init__.py +0 -0
  23. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/__main__.py +0 -0
  24. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/cli.py +0 -0
  25. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/common.py +0 -0
  26. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/diff-instructions.txt +0 -0
  27. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/mcp_server/Readme.md +0 -0
  28. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/mcp_server/server.py +0 -0
  29. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/openai_client.py +0 -0
  30. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/openai_utils.py +0 -0
  31. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/sys_utils.py +0 -0
  32. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/relay/serve.py +0 -0
  33. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/relay/static/privacy.txt +0 -0
  34. {wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/types_.py +0 -0
  35. {wcgw-1.5.4 → wcgw-2.0.1}/static/claude-ss.jpg +0 -0
  36. {wcgw-1.5.4 → wcgw-2.0.1}/static/computer-use.jpg +0 -0
  37. {wcgw-1.5.4 → wcgw-2.0.1}/static/example.jpg +0 -0
  38. {wcgw-1.5.4 → wcgw-2.0.1}/static/rocket-icon.png +0 -0
  39. {wcgw-1.5.4 → wcgw-2.0.1}/static/ss1.png +0 -0
  40. {wcgw-1.5.4 → wcgw-2.0.1}/tests/test_basic.py +0 -0
  41. {wcgw-1.5.4 → wcgw-2.0.1}/tests/test_tools.py +0 -0
  42. {wcgw-1.5.4 → wcgw-2.0.1}/uv.lock +0 -0
wcgw-2.0.1/.github/workflows/python-types.yml ADDED
@@ -0,0 +1,29 @@
+ name: Mypy strict
+
+ on:
+   push:
+     branches:
+       - main
+   pull_request:
+     branches:
+       - main
+
+ jobs:
+   typecheck:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         python-version: ["3.11", "3.12"]
+     steps:
+       - uses: actions/checkout@v4
+       - name: Set up Python
+         uses: actions/setup-python@v3
+         with:
+           python-version: "${{ matrix.python-version }}"
+       - name: Install dependencies
+         run: |
+           pip install uv
+           uv venv --python "${{ matrix.python-version }}"
+       - name: Run type checks
+         run: |
+           uv run mypy --strict src
wcgw-2.0.1/PKG-INFO ADDED
@@ -0,0 +1,156 @@
+ Metadata-Version: 2.3
+ Name: wcgw
+ Version: 2.0.1
+ Summary: What could go wrong giving full shell access to chatgpt?
+ Project-URL: Homepage, https://github.com/rusiaaman/wcgw
+ Author-email: Aman Rusia <gapypi@arcfu.com>
+ Requires-Python: <3.13,>=3.11
+ Requires-Dist: anthropic>=0.39.0
+ Requires-Dist: fastapi>=0.115.0
+ Requires-Dist: mcp
+ Requires-Dist: mypy>=1.11.2
+ Requires-Dist: nltk>=3.9.1
+ Requires-Dist: openai>=1.46.0
+ Requires-Dist: petname>=2.6
+ Requires-Dist: pexpect>=4.9.0
+ Requires-Dist: pydantic>=2.9.2
+ Requires-Dist: pyte>=0.8.2
+ Requires-Dist: python-dotenv>=1.0.1
+ Requires-Dist: rich>=13.8.1
+ Requires-Dist: semantic-version>=2.10.0
+ Requires-Dist: shell>=1.0.1
+ Requires-Dist: tiktoken==0.7.0
+ Requires-Dist: toml>=0.10.2
+ Requires-Dist: typer>=0.12.5
+ Requires-Dist: types-pexpect>=4.9.0.20240806
+ Requires-Dist: uvicorn>=0.31.0
+ Requires-Dist: websockets>=13.1
+ Description-Content-Type: text/markdown
+
+ # Shell and Coding agent on Claude desktop app
+
+ - An MCP server on claude desktop for autonomous shell, coding and desktop control agent.
+
+ [![Tests](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml/badge.svg?branch=main)](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml)
+ [![Mypy strict](https://github.com/rusiaaman/wcgw/actions/workflows/python-types.yml/badge.svg?branch=main)](https://github.com/rusiaaman/wcgw/actions/workflows/python-types.yml)
+ [![Build](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml/badge.svg)](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml)
+
+ ## Updates
+
+ - [01 Dec 2024] Deprecated chatgpt app support
+
+ - [26 Nov 2024] Introduced claude desktop support through mcp
+
+ ## 🚀 Highlights
+
+ - ⚡ **Full Shell Access**: No restrictions, complete control.
+ - ⚡ **Desktop control on Claude**: Screen capture, mouse control, keyboard control on claude desktop (on mac with docker linux)
+ - ⚡ **Create, Execute, Iterate**: Ask claude to keep running compiler checks till all errors are fixed, or ask it to keep checking for the status of a long running command till it's done.
+ - ⚡ **Interactive Command Handling**: Supports interactive commands using arrow keys, interrupt, and ansi escape sequences.
+ - ⚡ **REPL support**: [beta] Supports python/node and other REPL execution.
+
+ ## Setup
+
+ Update `claude_desktop_config.json` (~/Library/Application Support/Claude/claude_desktop_config.json)
+
+ ```json
+ {
+   "mcpServers": {
+     "wcgw": {
+       "command": "uv",
+       "args": [
+         "tool",
+         "run",
+         "--from",
+         "wcgw@latest",
+         "--python",
+         "3.12",
+         "wcgw_mcp"
+       ]
+     }
+   }
+ }
+ ```
+
+ Then restart claude app.
+
+ ## [Optional] Computer use support using desktop on docker
+
+ Computer use is disabled by default. Add `--computer-use` to enable it. This will add necessary tools to Claude including ScreenShot, Mouse and Keyboard control.
+
+ ```json
+ {
+   "mcpServers": {
+     "wcgw": {
+       "command": "uv",
+       "args": [
+         "tool",
+         "run",
+         "--from",
+         "wcgw@latest",
+         "--python",
+         "3.12",
+         "wcgw_mcp",
+         "--computer-use"
+       ]
+     }
+   }
+ }
+ ```
+
+ Claude will be able to connect to any docker container with linux environment. Native system control isn't supported outside docker.
+
+ You'll need to run a docker image with desktop and optional VNC connection. Here's a demo image:
+
+ ```sh
+ docker run -p 6080:6080 ghcr.io/anthropics/anthropic-quickstarts:computer-use-demo-latest
+ ```
+
+ Then ask claude desktop app to control the docker os. It'll connect to the docker container and control it.
+
+ Connect to `http://localhost:6080/vnc.html` for desktop view (VNC) of the system running in the docker.
+
+ ## Usage
+
+ Wait for a few seconds. You should be able to see this icon if everything goes right.
+
+ ![mcp icon](https://github.com/rusiaaman/wcgw/blob/main/static/rocket-icon.png?raw=true)
+ over here
+
+ ![mcp icon](https://github.com/rusiaaman/wcgw/blob/main/static/claude-ss.jpg?raw=true)
+
+ Then ask claude to execute shell commands, read files, edit files, run your code, etc.
+
+ If you've run the docker for LLM to access, you can ask it to control the "docker os". If you don't provide the docker container id to it, it'll try to search for available docker using `docker ps` command.
+
+ ## Example
+
+ ### Computer use example
+
+ ![computer-use](https://github.com/rusiaaman/wcgw/blob/main/static/computer-use.jpg?raw=true)
+
+ ### Shell example
+
+ ![example](https://github.com/rusiaaman/wcgw/blob/main/static/example.jpg?raw=true)
+
+ ## [Optional] Local shell access with openai API key or anthropic API key
+
+ ### Openai
+
+ Add `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
+
+ Then run
+
+ `uvx --from wcgw@latest wcgw_local --limit 0.1` # Cost limit $0.1
+
+ You can now directly write messages or press enter key to open vim for multiline message and text pasting.
+
+ ### Anthropic
+
+ Add `ANTHROPIC_API_KEY` env variable.
+
+ Then run
+
+ `uvx --from wcgw@latest wcgw_local --claude`
+
+ You can now directly write messages or press enter key to open vim for multiline message and text pasting.
wcgw-2.0.1/README.md ADDED
@@ -0,0 +1,127 @@
+ # Shell and Coding agent on Claude desktop app
+
+ - An MCP server on claude desktop for autonomous shell, coding and desktop control agent.
+
+ [![Tests](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml/badge.svg?branch=main)](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml)
+ [![Mypy strict](https://github.com/rusiaaman/wcgw/actions/workflows/python-types.yml/badge.svg?branch=main)](https://github.com/rusiaaman/wcgw/actions/workflows/python-types.yml)
+ [![Build](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml/badge.svg)](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml)
+
+ ## Updates
+
+ - [01 Dec 2024] Deprecated chatgpt app support
+
+ - [26 Nov 2024] Introduced claude desktop support through mcp
+
+ ## 🚀 Highlights
+
+ - ⚡ **Full Shell Access**: No restrictions, complete control.
+ - ⚡ **Desktop control on Claude**: Screen capture, mouse control, keyboard control on claude desktop (on mac with docker linux)
+ - ⚡ **Create, Execute, Iterate**: Ask claude to keep running compiler checks till all errors are fixed, or ask it to keep checking for the status of a long running command till it's done.
+ - ⚡ **Interactive Command Handling**: Supports interactive commands using arrow keys, interrupt, and ansi escape sequences.
+ - ⚡ **REPL support**: [beta] Supports python/node and other REPL execution.
+
+ ## Setup
+
+ Update `claude_desktop_config.json` (~/Library/Application Support/Claude/claude_desktop_config.json)
+
+ ```json
+ {
+   "mcpServers": {
+     "wcgw": {
+       "command": "uv",
+       "args": [
+         "tool",
+         "run",
+         "--from",
+         "wcgw@latest",
+         "--python",
+         "3.12",
+         "wcgw_mcp"
+       ]
+     }
+   }
+ }
+ ```
+
+ Then restart claude app.
+
+ ## [Optional] Computer use support using desktop on docker
+
+ Computer use is disabled by default. Add `--computer-use` to enable it. This will add necessary tools to Claude including ScreenShot, Mouse and Keyboard control.
+
+ ```json
+ {
+   "mcpServers": {
+     "wcgw": {
+       "command": "uv",
+       "args": [
+         "tool",
+         "run",
+         "--from",
+         "wcgw@latest",
+         "--python",
+         "3.12",
+         "wcgw_mcp",
+         "--computer-use"
+       ]
+     }
+   }
+ }
+ ```
+
+ Claude will be able to connect to any docker container with linux environment. Native system control isn't supported outside docker.
+
+ You'll need to run a docker image with desktop and optional VNC connection. Here's a demo image:
+
+ ```sh
+ docker run -p 6080:6080 ghcr.io/anthropics/anthropic-quickstarts:computer-use-demo-latest
+ ```
+
+ Then ask claude desktop app to control the docker os. It'll connect to the docker container and control it.
+
+ Connect to `http://localhost:6080/vnc.html` for desktop view (VNC) of the system running in the docker.
+
+ ## Usage
+
+ Wait for a few seconds. You should be able to see this icon if everything goes right.
+
+ ![mcp icon](https://github.com/rusiaaman/wcgw/blob/main/static/rocket-icon.png?raw=true)
+ over here
+
+ ![mcp icon](https://github.com/rusiaaman/wcgw/blob/main/static/claude-ss.jpg?raw=true)
+
+ Then ask claude to execute shell commands, read files, edit files, run your code, etc.
+
+ If you've run the docker for LLM to access, you can ask it to control the "docker os". If you don't provide the docker container id to it, it'll try to search for available docker using `docker ps` command.
+
+ ## Example
+
+ ### Computer use example
+
+ ![computer-use](https://github.com/rusiaaman/wcgw/blob/main/static/computer-use.jpg?raw=true)
+
+ ### Shell example
+
+ ![example](https://github.com/rusiaaman/wcgw/blob/main/static/example.jpg?raw=true)
+
+ ## [Optional] Local shell access with openai API key or anthropic API key
+
+ ### Openai
+
+ Add `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
+
+ Then run
+
+ `uvx --from wcgw@latest wcgw_local --limit 0.1` # Cost limit $0.1
+
+ You can now directly write messages or press enter key to open vim for multiline message and text pasting.
+
+ ### Anthropic
+
+ Add `ANTHROPIC_API_KEY` env variable.
+
+ Then run
+
+ `uvx --from wcgw@latest wcgw_local --claude`
+
+ You can now directly write messages or press enter key to open vim for multiline message and text pasting.
wcgw-2.0.1/openai.md ADDED
@@ -0,0 +1,71 @@
+ # ChatGPT Integration Guide
+
+ ## 🪜 Steps:
+
+ 1. Run a relay server with a domain name and https support (or use ngrok) use the instructions in next section.
+ 2. Create a custom gpt that connects to the relay server, instructions in next sections.
+ 3. Run the [cli client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of choice.
+ 4. The custom GPT can now run any command on your cli
+
+ ## Creating the relay server
+
+ ### If you've a domain name and ssl certificate
+
+ Run the server
+ `gunicorn --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:443 src.wcgw.relay.serve:app --certfile fullchain.pem --keyfile privkey.pem`
+
+ If you don't have public ip and domain name, you can use `ngrok` or similar services to get a https address to the api.
+
+ Then specify the server url in the `wcgw` command like so:
+ `uv tool run --python 3.12 wcgw@latest --server-url wss://your-url/v1/register`
+
+ ### Using ngrok
+
+ Run the server
+ `uv tool run --python 3.12 --from wcgw@latest wcgw_relay`
+
+ This will start an uvicorn server on port 8000. You can use ngrok to get a public address to the server.
+
+ `ngrok http 8000`
+
+ Then specify the ngrok address in the `wcgw` command like so:
+ `uv tool run --python 3.12 wcgw@latest --server-url wss://4900-1c2c-6542-b922-a596-f8f8.ngrok-free.app/v1/register`
+
+ ## Creating the custom gpt
+
+ I've used the following instructions and action json schema to create the custom GPT. (Replace wcgw.arcfu.com with the address to your server)
+
+ https://github.com/rusiaaman/wcgw/blob/main/gpt_instructions.txt
+ https://github.com/rusiaaman/wcgw/blob/main/gpt_action_json_schema.json
+
+ ### Chat
+
+ Let the chatgpt know your user id in any format. E.g., "user_id=<your uuid>" followed by rest of your instructions.
+
+ ### How it works on chatgpt app?
+
+ Your commands are relayed through a server to the terminal client.
+
+ Chatgpt sends a request to the relay server using the user id that you share with it. The relay server holds a websocket with the terminal client against the user id and acts as a proxy to pass the request.
+
+ It's secure in both the directions. Either a malicious actor or a malicious Chatgpt has to correctly guess your UUID for any security breach.
+
+ ## Showcase
+
+ ### Unit tests and github actions
+
+ [The first version of unit tests and github workflow to test on multiple python versions were written by the custom chatgpt](https://chatgpt.com/share/6717f922-8998-8005-b825-45d4b348b4dd)
+
+ ### Create a todo app using react + typescript + vite
+
+ ![Screenshot](https://github.com/rusiaaman/wcgw/blob/main/static/ss1.png?raw=true)
+
+ ## Local shell access with OpenAI API key
+
+ Add `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
+
+ Then run:
+
+ `uvx --from wcgw@latest wcgw_local --limit 0.1` # Cost limit $0.1
+
+ You can now directly write messages or press enter key to open vim for multiline message and text pasting.
{wcgw-1.5.4 → wcgw-2.0.1}/pyproject.toml
@@ -1,7 +1,7 @@
  [project]
  authors = [{ name = "Aman Rusia", email = "gapypi@arcfu.com" }]
  name = "wcgw"
- version = "1.5.4"
+ version = "2.0.1"
  description = "What could go wrong giving full shell access to chatgpt?"
  readme = "README.md"
  requires-python = ">=3.11, <3.13"
{wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/anthropic_client.py
@@ -4,7 +4,7 @@ import mimetypes
  from pathlib import Path
  import sys
  import traceback
- from typing import Callable, DefaultDict, Optional, cast
+ from typing import Callable, DefaultDict, Optional, cast, Literal
  import anthropic
  from anthropic import Anthropic
  from anthropic.types import (
@@ -110,7 +110,10 @@ def parse_user_message_special(msg: str) -> MessageParam:
      "type": "image",
      "source": {
          "type": "base64",
-         "media_type": image_type,
+         "media_type": cast(
+             'Literal["image/jpeg", "image/png", "image/gif", "image/webp"]',
+             image_type or "image/png",
+         ),
          "data": image_b64,
      },
  }
@@ -360,53 +363,79 @@ System information:
  type_ = chunk.type
  if type_ in {"message_start", "message_stop"}:
      continue
- elif type_ == "content_block_start":
+ elif type_ == "content_block_start" and hasattr(
+     chunk, "content_block"
+ ):
      content_block = chunk.content_block
-     if content_block.type == "text":
+     if (
+         hasattr(content_block, "type")
+         and content_block.type == "text"
+         and hasattr(content_block, "text")
+     ):
          chunk_str = content_block.text
          assistant_console.print(chunk_str, end="")
          full_response += chunk_str
      elif content_block.type == "tool_use":
-         assert content_block.input == {}
-         tool_calls.append(
-             {
-                 "name": content_block.name,
-                 "input": "",
-                 "done": False,
-                 "id": content_block.id,
-             }
-         )
+         if (
+             hasattr(content_block, "input")
+             and hasattr(content_block, "name")
+             and hasattr(content_block, "id")
+         ):
+             assert content_block.input == {}
+             tool_calls.append(
+                 {
+                     "name": str(content_block.name),
+                     "input": str(""),
+                     "done": False,
+                     "id": str(content_block.id),
+                 }
+             )
      else:
          error_console.log(
              f"Ignoring unknown content block type {content_block.type}"
          )
- elif type_ == "content_block_delta":
-     if chunk.delta.type == "text_delta":
-         chunk_str = chunk.delta.text
-         assistant_console.print(chunk_str, end="")
-         full_response += chunk_str
-     elif chunk.delta.type == "input_json_delta":
-         tool_calls[-1]["input"] += chunk.delta.partial_json
+ elif type_ == "content_block_delta" and hasattr(chunk, "delta"):
+     delta = chunk.delta
+     if hasattr(delta, "type"):
+         delta_type = str(delta.type)
+         if delta_type == "text_delta" and hasattr(delta, "text"):
+             chunk_str = delta.text
+             assistant_console.print(chunk_str, end="")
+             full_response += chunk_str
+         elif delta_type == "input_json_delta" and hasattr(
+             delta, "partial_json"
+         ):
+             partial_json = delta.partial_json
+             if isinstance(tool_calls[-1]["input"], str):
+                 tool_calls[-1]["input"] += partial_json
+         else:
+             error_console.log(
+                 f"Ignoring unknown content block delta type {delta_type}"
+             )
      else:
-         error_console.log(
-             f"Ignoring unknown content block delta type {chunk.delta.type}"
-         )
+         raise ValueError("Content block delta has no type")
  elif type_ == "content_block_stop":
      if tool_calls and not tool_calls[-1]["done"]:
          tc = tool_calls[-1]
+         tool_name = str(tc["name"])
+         tool_input = str(tc["input"])
+         tool_id = str(tc["id"])
+
          tool_parsed = which_tool_name(
-             tc["name"]
-         ).model_validate_json(tc["input"])
+             tool_name
+         ).model_validate_json(tool_input)
+
          system_console.print(
              f"\n---------------------------------------\n# Assistant invoked tool: {tool_parsed}"
          )
+
          _histories.append(
              {
                  "role": "assistant",
                  "content": [
                      ToolUseBlockParam(
-                         id=tc["id"],
-                         name=tc["name"],
+                         id=tool_id,
+                         name=tool_name,
                          input=tool_parsed.model_dump(),
                          type="tool_use",
                      )
@@ -458,7 +487,7 @@ System information:
  tool_results.append(
      ToolResultBlockParam(
          type="tool_result",
-         tool_use_id=tc["id"],
+         tool_use_id=str(tc["id"]),
          content=tool_results_content,
      )
  )
{wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/computer_use.py
@@ -161,7 +161,7 @@ class ComputerTool:
  assert not result.error, result.error
  assert result.output, "Could not get screen info"
  width, height, display_num = map(
-     lambda x: None if not x else int(x), result.output.split(",")
+     lambda x: None if not x else int(x), result.output.strip().split(",")
  )
  if width is None:
      width = 1080
{wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/mcp_server/__init__.py
@@ -1,3 +1,4 @@
+ # mypy: disable-error-code="import-untyped"
  from wcgw.client.mcp_server import server
  import asyncio
  from typer import Typer
{wcgw-1.5.4 → wcgw-2.0.1}/src/wcgw/client/tools.py
@@ -929,9 +929,7 @@ def register_client(server_url: str, client_uuid: str = "") -> None:
  client_version = importlib.metadata.version("wcgw")
  websocket.send(client_version)

- print(
-     f"Connected. Share this user id with the chatbot: {client_uuid} \nLink: https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access"
- )
+ print(f"Connected. Share this user id with the chatbot: {client_uuid}")
  while True:
      # Wait to receive data from the server
      message = websocket.recv()
@@ -962,7 +960,7 @@ run = Typer(pretty_exceptions_show_locals=False, no_args_is_help=True)

  @run.command()
  def app(
-     server_url: str = "wss://wcgw.arcfu.com/v1/register",
+     server_url: str = "",
      client_uuid: Optional[str] = None,
      version: bool = typer.Option(False, "--version", "-v"),
  ) -> None:
@@ -970,7 +968,18 @@ def app(
      version_ = importlib.metadata.version("wcgw")
      print(f"wcgw version: {version_}")
      exit()
-
+ if not server_url:
+     server_url = os.environ.get("WCGW_RELAY_SERVER", "")
+     if not server_url:
+         print(
+             "Error: Please provide relay server url using --server_url or WCGW_RELAY_SERVER environment variable"
+         )
+         print(
+             "\tNOTE: you need to run a relay server first, author doesn't host a relay server anymore."
+         )
+         print("\thttps://github.com/rusiaaman/wcgw/blob/main/openai.md")
+         print("\tExample `--server-url=ws://localhost:8000/v1/register`")
+         raise typer.Exit(1)
  register_client(server_url, client_uuid or "")

wcgw-1.5.4/PKG-INFO DELETED
@@ -1,178 +0,0 @@
- Metadata-Version: 2.3
- Name: wcgw
- Version: 1.5.4
- Summary: What could go wrong giving full shell access to chatgpt?
- Project-URL: Homepage, https://github.com/rusiaaman/wcgw
- Author-email: Aman Rusia <gapypi@arcfu.com>
- Requires-Python: <3.13,>=3.11
- Requires-Dist: anthropic>=0.39.0
- Requires-Dist: fastapi>=0.115.0
- Requires-Dist: mcp
- Requires-Dist: mypy>=1.11.2
- Requires-Dist: nltk>=3.9.1
- Requires-Dist: openai>=1.46.0
- Requires-Dist: petname>=2.6
- Requires-Dist: pexpect>=4.9.0
- Requires-Dist: pydantic>=2.9.2
- Requires-Dist: pyte>=0.8.2
- Requires-Dist: python-dotenv>=1.0.1
- Requires-Dist: rich>=13.8.1
- Requires-Dist: semantic-version>=2.10.0
- Requires-Dist: shell>=1.0.1
- Requires-Dist: tiktoken==0.7.0
- Requires-Dist: toml>=0.10.2
- Requires-Dist: typer>=0.12.5
- Requires-Dist: types-pexpect>=4.9.0.20240806
- Requires-Dist: uvicorn>=0.31.0
- Requires-Dist: websockets>=13.1
- Description-Content-Type: text/markdown
-
- # Shell and Coding agent on Chatgpt and Claude desktop apps
-
- - An MCP server on claude desktop for autonomous shell, coding and desktop control agent.
- - A custom gpt on chatgpt web/desktop apps to interact with your local shell, edit files, run code, etc.
-
-
- [![Tests](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml/badge.svg?branch=main)](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml)
- [![Build](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml/badge.svg)](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml)
-
- [New feature] [26-Nov-2024] Claude desktop support for shell, computer-control, coding agent.
- [src/wcgw/client/mcp_server/Readme.md](src/wcgw/client/mcp_server/Readme.md)
-
- ### 🚀 Highlights
-
- - ⚡ **Full Shell Access**: No restrictions, complete control.
- - ⚡ **Desktop control on Claude**: Screen capture, mouse control, keyboard control on claude desktop (on mac with docker linux)
- - ⚡ **Create, Execute, Iterate**: Ask the gpt to keep running compiler checks till all errors are fixed, or ask it to keep checking for the status of a long running command till it's done.
- - ⚡ **Interactive Command Handling**: Supports interactive commands using arrow keys, interrupt, and ansi escape sequences.
- - ⚡ **REPL support**: [beta] Supports python/node and other REPL execution.
-
- ## Claude
- Full readme [src/wcgw/client/mcp_server/Readme.md](src/wcgw/client/mcp_server/Readme.md)
- ### Setup
-
- Update `claude_desktop_config.json`
-
- ```json
- {
-   "mcpServers": {
-     "wcgw": {
-       "command": "uvx",
-       "args": ["--from", "wcgw@latest", "wcgw_mcp"]
-     }
-   }
- }
- ```
-
- Then restart claude app.
- You can then ask claude to execute shell commands, read files, edit files, run your code, etc.
-
- ## ChatGPT
-
- ### 🪜 Steps:
-
- 1. Run the [cli client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of choice.
- 2. Share the generated id with this GPT: `https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access`
- 3. The custom GPT can now run any command on your cli
-
- ### Client
-
- You need to keep running this client for GPT to access your shell. Run it in a version controlled project's root.
-
- #### Option 1: using uv [Recommended]
-
- ```sh
- $ curl -LsSf https://astral.sh/uv/install.sh | sh
- $ uvx wcgw@latest
- ```
-
- #### Option 2: using pip
-
- Supports python >=3.10 and <3.13
-
- ```sh
- $ pip3 install wcgw
- $ wcgw
- ```
-
- This will print a UUID that you need to share with the gpt.
-
- ### Chat
-
- Open the following link or search the "wcgw" custom gpt using "Explore GPTs" on chatgpt.com
-
- https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access
-
- Finally, let the chatgpt know your user id in any format. E.g., "user_id=<your uuid>" followed by rest of your instructions.
-
- NOTE: you can resume a broken connection
- `wcgw --client-uuid $previous_uuid`
-
- ### How it works on chatgpt app?
-
- Your commands are relayed through a server to the terminal client. [You could host the server on your own](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#creating-your-own-custom-gpt-and-the-relay-server). For public convenience I've hosted one at https://wcgw.arcfu.com thanks to the gcloud free tier plan.
-
- Chatgpt sends a request to the relay server using the user id that you share with it. The relay server holds a websocket with the terminal client against the user id and acts as a proxy to pass the request.
-
- It's secure in both the directions. Either a malicious actor or a malicious Chatgpt has to correctly guess your UUID for any security breach.
-
- # Showcase
-
- ## Claude desktop
-
- ### Resize image and move it to a new dir
-
- ![example](https://github.com/rusiaaman/wcgw/blob/main/static/example.jpg?raw=true)
-
- ## Chatgpt app
-
- ### Unit tests and github actions
-
- [The first version of unit tests and github workflow to test on multiple python versions were written by the custom chatgpt](https://chatgpt.com/share/6717f922-8998-8005-b825-45d4b348b4dd)
-
- ### Create a todo app using react + typescript + vite
-
- ![Screenshot](https://github.com/rusiaaman/wcgw/blob/main/static/ss1.png?raw=true)
-
- # Privacy
-
- The relay server doesn't store any data. I can't access any information passing through it and only secure channels are used to communicate.
-
- You may host the server on your own and create a custom gpt using the following section.
-
- # Creating your own custom gpt and the relay server.
-
- I've used the following instructions and action json schema to create the custom GPT. (Replace wcgw.arcfu.com with the address to your server)
-
- https://github.com/rusiaaman/wcgw/blob/main/gpt_instructions.txt
- https://github.com/rusiaaman/wcgw/blob/main/gpt_action_json_schema.json
-
- Run the server
- `gunicorn --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:443 src.wcgw.relay.serve:app --certfile fullchain.pem --keyfile privkey.pem`
-
- If you don't have public ip and domain name, you can use `ngrok` or similar services to get a https address to the api.
-
- The specify the server url in the `wcgw` command like so
- `wcgw --server-url https://your-url/v1/register`
-
- # [Optional] Local shell access with openai API key or anthropic API key
-
- ## Openai
-
- Add `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
-
- Then run
-
- `uvx --from wcgw@latest wcgw_local --limit 0.1` # Cost limit $0.1
-
- You can now directly write messages or press enter key to open vim for multiline message and text pasting.
-
- ## Anthropic
-
- Add `ANTHROPIC_API_KEY` env variable.
-
- Then run
-
- `uvx --from wcgw@latest wcgw_local --claude`
-
- You can now directly write messages or press enter key to open vim for multiline message and text pasting.
wcgw-1.5.4/README.md DELETED
@@ -1,149 +0,0 @@
- # Shell and Coding agent on Chatgpt and Claude desktop apps
-
- - An MCP server on claude desktop for autonomous shell, coding and desktop control agent.
- - A custom gpt on chatgpt web/desktop apps to interact with your local shell, edit files, run code, etc.
-
-
- [![Tests](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml/badge.svg?branch=main)](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml)
- [![Build](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml/badge.svg)](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml)
-
- [New feature] [26-Nov-2024] Claude desktop support for shell, computer-control, coding agent.
- [src/wcgw/client/mcp_server/Readme.md](src/wcgw/client/mcp_server/Readme.md)
-
- ### 🚀 Highlights
-
- - ⚡ **Full Shell Access**: No restrictions, complete control.
- - ⚡ **Desktop control on Claude**: Screen capture, mouse control, keyboard control on claude desktop (on mac with docker linux)
- - ⚡ **Create, Execute, Iterate**: Ask the gpt to keep running compiler checks till all errors are fixed, or ask it to keep checking for the status of a long running command till it's done.
- - ⚡ **Interactive Command Handling**: Supports interactive commands using arrow keys, interrupt, and ansi escape sequences.
- - ⚡ **REPL support**: [beta] Supports python/node and other REPL execution.
-
- ## Claude
- Full readme [src/wcgw/client/mcp_server/Readme.md](src/wcgw/client/mcp_server/Readme.md)
- ### Setup
-
- Update `claude_desktop_config.json`
-
- ```json
- {
-   "mcpServers": {
-     "wcgw": {
-       "command": "uvx",
-       "args": ["--from", "wcgw@latest", "wcgw_mcp"]
-     }
-   }
- }
- ```
-
- Then restart claude app.
- You can then ask claude to execute shell commands, read files, edit files, run your code, etc.
-
- ## ChatGPT
-
- ### 🪜 Steps:
-
- 1. Run the [cli client](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#client) in any directory of choice.
- 2. Share the generated id with this GPT: `https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access`
- 3. The custom GPT can now run any command on your cli
-
- ### Client
-
- You need to keep running this client for GPT to access your shell. Run it in a version controlled project's root.
-
- #### Option 1: using uv [Recommended]
-
- ```sh
- $ curl -LsSf https://astral.sh/uv/install.sh | sh
- $ uvx wcgw@latest
- ```
-
- #### Option 2: using pip
-
- Supports python >=3.10 and <3.13
-
- ```sh
- $ pip3 install wcgw
- $ wcgw
- ```
-
- This will print a UUID that you need to share with the gpt.
-
- ### Chat
-
- Open the following link or search the "wcgw" custom gpt using "Explore GPTs" on chatgpt.com
-
- https://chatgpt.com/g/g-Us0AAXkRh-wcgw-giving-shell-access
-
- Finally, let the chatgpt know your user id in any format. E.g., "user_id=<your uuid>" followed by rest of your instructions.
-
- NOTE: you can resume a broken connection
- `wcgw --client-uuid $previous_uuid`
-
- ### How it works on chatgpt app?
-
- Your commands are relayed through a server to the terminal client. [You could host the server on your own](https://github.com/rusiaaman/wcgw?tab=readme-ov-file#creating-your-own-custom-gpt-and-the-relay-server). For public convenience I've hosted one at https://wcgw.arcfu.com thanks to the gcloud free tier plan.
-
- Chatgpt sends a request to the relay server using the user id that you share with it. The relay server holds a websocket with the terminal client against the user id and acts as a proxy to pass the request.
-
- It's secure in both the directions. Either a malicious actor or a malicious Chatgpt has to correctly guess your UUID for any security breach.
-
- # Showcase
-
- ## Claude desktop
-
- ### Resize image and move it to a new dir
-
- ![example](https://github.com/rusiaaman/wcgw/blob/main/static/example.jpg?raw=true)
-
- ## Chatgpt app
-
- ### Unit tests and github actions
-
- [The first version of unit tests and github workflow to test on multiple python versions were written by the custom chatgpt](https://chatgpt.com/share/6717f922-8998-8005-b825-45d4b348b4dd)
-
- ### Create a todo app using react + typescript + vite
-
- ![Screenshot](https://github.com/rusiaaman/wcgw/blob/main/static/ss1.png?raw=true)
-
- # Privacy
-
- The relay server doesn't store any data. I can't access any information passing through it and only secure channels are used to communicate.
-
- You may host the server on your own and create a custom gpt using the following section.
-
- # Creating your own custom gpt and the relay server.
-
- I've used the following instructions and action json schema to create the custom GPT. (Replace wcgw.arcfu.com with the address to your server)
-
- https://github.com/rusiaaman/wcgw/blob/main/gpt_instructions.txt
- https://github.com/rusiaaman/wcgw/blob/main/gpt_action_json_schema.json
-
- Run the server
- `gunicorn --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:443 src.wcgw.relay.serve:app --certfile fullchain.pem --keyfile privkey.pem`
-
- If you don't have public ip and domain name, you can use `ngrok` or similar services to get a https address to the api.
-
- The specify the server url in the `wcgw` command like so
- `wcgw --server-url https://your-url/v1/register`
-
- # [Optional] Local shell access with openai API key or anthropic API key
-
- ## Openai
-
- Add `OPENAI_API_KEY` and `OPENAI_ORG_ID` env variables.
-
- Then run
-
- `uvx --from wcgw@latest wcgw_local --limit 0.1` # Cost limit $0.1
-
- You can now directly write messages or press enter key to open vim for multiline message and text pasting.
-
- ## Anthropic
-
- Add `ANTHROPIC_API_KEY` env variable.
-
- Then run
-
- `uvx --from wcgw@latest wcgw_local --claude`
-
- You can now directly write messages or press enter key to open vim for multiline message and text pasting.
wcgw-1.5.4/add.py DELETED
@@ -1,6 +0,0 @@
- def add_numbers(a: int, b: int) -> int:
-     return a + b
-
- # Test the function
- result = add_numbers(5, 3)
- print(f"5 + 3 = {result}")