clai 0.2.0.tar.gz → 0.2.1.tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of clai might be problematic.
- clai-0.2.1/.gitignore +19 -0
- clai-0.2.1/PKG-INFO +106 -0
- clai-0.2.1/README.md +77 -0
- clai-0.2.1/clai/__init__.py +11 -0
- clai-0.2.1/clai/__main__.py +6 -0
- clai-0.2.1/pyproject.toml +63 -0
- clai-0.2.1/update_readme.py +30 -0
- clai-0.2.0/PKG-INFO +0 -99
- clai-0.2.0/README.md +0 -79
- clai-0.2.0/clai/__init__.py +0 -6
- clai-0.2.0/clai/api.py +0 -17
- clai-0.2.0/clai/behavior_context.py +0 -161
- clai-0.2.0/clai/main.py +0 -22
- clai-0.2.0/clai/message_creation.py +0 -21
- clai-0.2.0/clai/ocr_drivers/__init__.py +0 -7
- clai-0.2.0/clai/ocr_drivers/base_driver.py +0 -39
- clai-0.2.0/clai/ocr_drivers/linux_driver.py +0 -35
- clai-0.2.0/pyproject.toml +0 -51
- clai-0.2.0/setup.py +0 -37
clai-0.2.1/.gitignore
ADDED
@@ -0,0 +1,19 @@
+site
+.venv
+dist
+__pycache__
+.env
+.dev.vars
+/scratch/
+/.coverage
+env*/
+/TODO.md
+/postgres-data/
+.DS_Store
+examples/pydantic_ai_examples/.chat_app_messages.sqlite
+.cache/
+.vscode/
+/question_graph_history.json
+/docs-site/.wrangler/
+/CLAUDE.md
+node_modules/
clai-0.2.1/PKG-INFO
ADDED
@@ -0,0 +1,106 @@
+Metadata-Version: 2.4
+Name: clai
+Version: 0.2.1
+Summary: PydanticAI CLI: command line interface to chat to LLMs
+Author-email: Samuel Colvin <samuel@pydantic.dev>, Marcelo Trylesinski <marcelotryle@gmail.com>, David Montague <david@pydantic.dev>, Alex Hall <alex@pydantic.dev>
+License-Expression: MIT
+Classifier: Development Status :: 4 - Beta
+Classifier: Environment :: Console
+Classifier: Environment :: MacOS X
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: Information Technology
+Classifier: Intended Audience :: System Administrators
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Operating System :: POSIX :: Linux
+Classifier: Operating System :: Unix
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3 :: Only
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
+Classifier: Topic :: Internet
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Requires-Python: >=3.9
+Requires-Dist: pydantic-ai==0.2.1
+Description-Content-Type: text/markdown
+
+# clai
+
+[](https://github.com/pydantic/pydantic-ai/actions/workflows/ci.yml?query=branch%3Amain)
+[](https://coverage-badge.samuelcolvin.workers.dev/redirect/pydantic/pydantic-ai)
+[](https://pypi.python.org/pypi/clai)
+[](https://github.com/pydantic/pydantic-ai)
+[](https://github.com/pydantic/pydantic-ai/blob/main/LICENSE)
+
+(pronounced "clay")
+
+Command line interface to chat to LLMs, part of the [PydanticAI project](https://github.com/pydantic/pydantic-ai).
+
+## Usage
+
+<!-- Keep this in sync with docs/cli.md -->
+
+You'll need to set an environment variable depending on the provider you intend to use.
+
+E.g. if you're using OpenAI, set the `OPENAI_API_KEY` environment variable:
+
+```bash
+export OPENAI_API_KEY='your-api-key-here'
+```
+
+Then with [`uvx`](https://docs.astral.sh/uv/guides/tools/), run:
+
+```bash
+uvx clai
+```
+
+Or to install `clai` globally [with `uv`](https://docs.astral.sh/uv/guides/tools/#installing-tools), run:
+
+```bash
+uv tool install clai
+...
+clai
+```
+
+Or with `pip`, run:
+
+```bash
+pip install clai
+...
+clai
+```
+
+Either way, running `clai` will start an interactive session where you can chat with the AI model. Special commands available in interactive mode:
+
+- `/exit`: Exit the session
+- `/markdown`: Show the last response in markdown format
+- `/multiline`: Toggle multiline input mode (use Ctrl+D to submit)
+
+## Help
+
+```
+usage: clai [-h] [-m [MODEL]] [-l] [-t [CODE_THEME]] [--no-stream] [--version] [prompt]
+
+PydanticAI CLI v...
+
+Special prompts:
+* `/exit` - exit the interactive mode (ctrl-c and ctrl-d also work)
+* `/markdown` - show the last markdown output of the last question
+* `/multiline` - toggle multiline mode
+
+positional arguments:
+  prompt                AI Prompt, if omitted fall into interactive mode
+
+options:
+  -h, --help            show this help message and exit
+  -m [MODEL], --model [MODEL]
+                        Model to use, in format "<provider>:<model>" e.g. "openai:gpt-4o" or "anthropic:claude-3-7-sonnet-latest". Defaults to "openai:gpt-4o".
+  -l, --list-models     List all available models and exit
+  -t [CODE_THEME], --code-theme [CODE_THEME]
+                        Which colors to use for code, can be "dark", "light" or any theme from pygments.org/styles/. Defaults to "dark" which works well on dark terminals.
+  --no-stream           Disable streaming from the model
+  --version             Show version and exit
+```
clai-0.2.1/README.md
ADDED
@@ -0,0 +1,77 @@
+# clai
+
+[](https://github.com/pydantic/pydantic-ai/actions/workflows/ci.yml?query=branch%3Amain)
+[](https://coverage-badge.samuelcolvin.workers.dev/redirect/pydantic/pydantic-ai)
+[](https://pypi.python.org/pypi/clai)
+[](https://github.com/pydantic/pydantic-ai)
+[](https://github.com/pydantic/pydantic-ai/blob/main/LICENSE)
+
+(pronounced "clay")
+
+Command line interface to chat to LLMs, part of the [PydanticAI project](https://github.com/pydantic/pydantic-ai).
+
+## Usage
+
+<!-- Keep this in sync with docs/cli.md -->
+
+You'll need to set an environment variable depending on the provider you intend to use.
+
+E.g. if you're using OpenAI, set the `OPENAI_API_KEY` environment variable:
+
+```bash
+export OPENAI_API_KEY='your-api-key-here'
+```
+
+Then with [`uvx`](https://docs.astral.sh/uv/guides/tools/), run:
+
+```bash
+uvx clai
+```
+
+Or to install `clai` globally [with `uv`](https://docs.astral.sh/uv/guides/tools/#installing-tools), run:
+
+```bash
+uv tool install clai
+...
+clai
+```
+
+Or with `pip`, run:
+
+```bash
+pip install clai
+...
+clai
+```
+
+Either way, running `clai` will start an interactive session where you can chat with the AI model. Special commands available in interactive mode:
+
+- `/exit`: Exit the session
+- `/markdown`: Show the last response in markdown format
+- `/multiline`: Toggle multiline input mode (use Ctrl+D to submit)
+
+## Help
+
+```
+usage: clai [-h] [-m [MODEL]] [-l] [-t [CODE_THEME]] [--no-stream] [--version] [prompt]
+
+PydanticAI CLI v...
+
+Special prompts:
+* `/exit` - exit the interactive mode (ctrl-c and ctrl-d also work)
+* `/markdown` - show the last markdown output of the last question
+* `/multiline` - toggle multiline mode
+
+positional arguments:
+  prompt                AI Prompt, if omitted fall into interactive mode
+
+options:
+  -h, --help            show this help message and exit
+  -m [MODEL], --model [MODEL]
+                        Model to use, in format "<provider>:<model>" e.g. "openai:gpt-4o" or "anthropic:claude-3-7-sonnet-latest". Defaults to "openai:gpt-4o".
+  -l, --list-models     List all available models and exit
+  -t [CODE_THEME], --code-theme [CODE_THEME]
+                        Which colors to use for code, can be "dark", "light" or any theme from pygments.org/styles/. Defaults to "dark" which works well on dark terminals.
+  --no-stream           Disable streaming from the model
+  --version             Show version and exit
+```
clai-0.2.1/pyproject.toml
ADDED
@@ -0,0 +1,63 @@
+[build-system]
+requires = ["hatchling", "uv-dynamic-versioning>=0.7.0"]
+build-backend = "hatchling.build"
+
+[tool.hatch.version]
+source = "uv-dynamic-versioning"
+
+[tool.uv-dynamic-versioning]
+vcs = "git"
+style = "pep440"
+bump = true
+
+[project]
+name = "clai"
+dynamic = ["version", "dependencies"]
+description = "PydanticAI CLI: command line interface to chat to LLMs"
+authors = [
+    { name = "Samuel Colvin", email = "samuel@pydantic.dev" },
+    { name = "Marcelo Trylesinski", email = "marcelotryle@gmail.com" },
+    { name = "David Montague", email = "david@pydantic.dev" },
+    { name = "Alex Hall", email = "alex@pydantic.dev" },
+]
+license = "MIT"
+readme = "README.md"
+classifiers = [
+    "Development Status :: 4 - Beta",
+    "Programming Language :: Python",
+    "Programming Language :: Python :: 3",
+    "Programming Language :: Python :: 3 :: Only",
+    "Programming Language :: Python :: 3.9",
+    "Programming Language :: Python :: 3.10",
+    "Programming Language :: Python :: 3.11",
+    "Programming Language :: Python :: 3.12",
+    "Programming Language :: Python :: 3.13",
+    "Intended Audience :: Developers",
+    "Intended Audience :: Information Technology",
+    "Intended Audience :: System Administrators",
+    "License :: OSI Approved :: MIT License",
+    "Operating System :: Unix",
+    "Operating System :: POSIX :: Linux",
+    "Environment :: Console",
+    "Environment :: MacOS X",
+    "Topic :: Software Development :: Libraries :: Python Modules",
+    "Topic :: Internet",
+]
+requires-python = ">=3.9"
+
+[tool.hatch.metadata.hooks.uv-dynamic-versioning]
+dependencies = [
+    "pydantic-ai=={{ version }}",
+]
+
+[tool.hatch.metadata]
+allow-direct-references = true
+
+[project.scripts]
+clai = "clai:cli"
+
+[tool.hatch.build.targets.wheel]
+packages = ["clai"]
+
+[tool.uv.sources]
+pydantic-ai = { workspace = true }
clai-0.2.1/update_readme.py
ADDED
@@ -0,0 +1,30 @@
+import os
+import re
+import sys
+from pathlib import Path
+
+import pytest
+
+from pydantic_ai._cli import cli
+
+
+@pytest.mark.skipif(sys.version_info >= (3, 13), reason='slightly different output with 3.13')
+def test_cli_help(capfd: pytest.CaptureFixture[str]):
+    """Check README.md help output matches `clai --help`."""
+    os.environ['COLUMNS'] = '150'
+    with pytest.raises(SystemExit):
+        cli(['--help'], prog_name='clai')
+
+    help_output = capfd.readouterr().out.strip()
+    # TODO change when we reach v1
+    help_output = re.sub(r'(PydanticAI CLI v).+', r'\1...', help_output)
+
+    this_dir = Path(__file__).parent
+    readme = this_dir / 'README.md'
+    content = readme.read_text()
+
+    new_content, count = re.subn('^(## Help\n+```).+?```', rf'\1\n{help_output}\n```', content, flags=re.M | re.S)
+    assert count, 'help section not found'
+    if new_content != content:
+        readme.write_text(new_content)
+        pytest.fail('`clai --help` output changed.')
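The `re.subn` call in `update_readme.py` above is the entire sync mechanism: it replaces whatever sits inside the fenced block under `## Help` with fresh `--help` output. The same substitution can be sketched as a standalone function (the name `sync_help_section` is mine, not from the package; a lambda replacement avoids backslash issues that `rf'\1...'` can hit):

```python
import re


def sync_help_section(readme: str, help_output: str) -> tuple[str, bool]:
    """Replace the fenced block under '## Help' with fresh --help output.

    Returns the updated README text and whether the section was found.
    """
    new_readme, count = re.subn(
        r'^(## Help\n+```).+?```',                 # heading plus its fenced block
        lambda m: f'{m.group(1)}\n{help_output}\n```',
        readme,
        flags=re.M | re.S,                         # ^ at line starts; . spans newlines
    )
    return new_readme, bool(count)
```

In the script above, a non-zero `count` combined with `new_content != content` is what turns a stale README into a test failure.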
clai-0.2.0/PKG-INFO
DELETED
@@ -1,99 +0,0 @@
-Metadata-Version: 2.1
-Name: clai
-Version: 0.2.0
-Summary: Command Line AI- this tool lets you call ChatGPT from a CLI
-License: Proprietary
-Author: apockill
-Author-email: apocthiel@gmail.com
-Requires-Python: >=3.8,<4.0
-Classifier: License :: Other/Proprietary License
-Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.8
-Classifier: Programming Language :: Python :: 3.9
-Classifier: Programming Language :: Python :: 3.10
-Requires-Dist: PyAutoGUI (>=0.9.53,<0.10.0)
-Requires-Dist: PyWinCtl (>=0.0.43,<0.0.44)
-Requires-Dist: openai (>=0.27.0,<0.28.0)
-Requires-Dist: pytesseract (>=0.3.10,<0.4.0)
-Description-Content-Type: text/markdown
-
-# clai
-Command Line AI- this tool lets you call ChatGPT from a CLI.
-
-I'm designing this to be used in conjunction with a fork of [shin][shin], which will allow you
-to call `clai` from any textbox in your computer. Finally, ChatGPT everywhere!
-
-The long-term vision for this project is to add support for extracting context. For example, it would
-read the current text on a window and be able to add to it, or answer questions about it.
-
-_________________
-
-[](http://badge.fury.io/py/clai)
-[](https://github.com/apockill/clai/actions?query=workflow%3ATest)
-[](https://github.com/apockill/clai/actions?query=workflow%3ALint)
-[](https://codecov.io/gh/apockill/clai)
-[](https://github.com/psf/black)
-[](https://timothycrosley.github.io/isort/)
-_________________
-
-[Read Latest Documentation](https://apockill.github.io/clai/) - [Browse GitHub Code Repository](https://github.com/apockill/clai/)
-_________________
-
-## Installation
-
-1. The recommended installation method is to use `pipx`, via
-   ```bash
-   pipx install clai
-   ```
-   Optionally, install `tesseract` so that `clai` can read the screen context and send that along with requests:
-   ```bash
-   sudo apt install tesseract-ocr scrot
-   ```
-1. Then go to [OpenAI] and create an API Key. Once it's generated, add the following to
-   your `~/.profile`:
-   ```bash
-   export OPENAI_API_TOKEN=<paste here>
-   ```
-
-1. The best way to use this tool is in conjunction with the tool [shin][shin], which allows you
-   to run arbitrary bash commands in any textbox in a linux computer, using ibus. To use
-   that, install 'shin' via the fork above, then configure
-   it in your `~/.profile` to call `clai` by default:
-   ```bash
-   export SHIN_DEFAULT_COMMAND="clai"
-   ```
-1. Log out then log back in for the changes to take effect!
-
-[OpenAI]: https://platform.openai.com/account/api-keys
-
-## Usage
-Invoke the assistant with the format `clai <your prompt>`. For example:
-```
-clai Write an email saying I'll be late to work because I'm working on commandline AIs
-```
-
-
-## Development
-
-### Installing python dependencies
-```shell
-poetry install
-```
-
-### Running Tests
-```shell
-pytest .
-```
-
-### Formatting Code
-```shell
-bash .github/format.sh
-```
-
-### Linting
-```shell
-bash .github/check_lint.sh
-```
-
-[shin]: https://github.com/apockill/shin
-
clai-0.2.0/README.md
DELETED
@@ -1,79 +0,0 @@
-# clai
-Command Line AI- this tool lets you call ChatGPT from a CLI.
-
-I'm designing this to be used in conjunction with a fork of [shin][shin], which will allow you
-to call `clai` from any textbox in your computer. Finally, ChatGPT everywhere!
-
-The long-term vision for this project is to add support for extracting context. For example, it would
-read the current text on a window and be able to add to it, or answer questions about it.
-
-_________________
-
-[](http://badge.fury.io/py/clai)
-[](https://github.com/apockill/clai/actions?query=workflow%3ATest)
-[](https://github.com/apockill/clai/actions?query=workflow%3ALint)
-[](https://codecov.io/gh/apockill/clai)
-[](https://github.com/psf/black)
-[](https://timothycrosley.github.io/isort/)
-_________________
-
-[Read Latest Documentation](https://apockill.github.io/clai/) - [Browse GitHub Code Repository](https://github.com/apockill/clai/)
-_________________
-
-## Installation
-
-1. The recommended installation method is to use `pipx`, via
-   ```bash
-   pipx install clai
-   ```
-   Optionally, install `tesseract` so that `clai` can read the screen context and send that along with requests:
-   ```bash
-   sudo apt install tesseract-ocr scrot
-   ```
-1. Then go to [OpenAI] and create an API Key. Once it's generated, add the following to
-   your `~/.profile`:
-   ```bash
-   export OPENAI_API_TOKEN=<paste here>
-   ```
-
-1. The best way to use this tool is in conjunction with the tool [shin][shin], which allows you
-   to run arbitrary bash commands in any textbox in a linux computer, using ibus. To use
-   that, install 'shin' via the fork above, then configure
-   it in your `~/.profile` to call `clai` by default:
-   ```bash
-   export SHIN_DEFAULT_COMMAND="clai"
-   ```
-1. Log out then log back in for the changes to take effect!
-
-[OpenAI]: https://platform.openai.com/account/api-keys
-
-## Usage
-Invoke the assistant with the format `clai <your prompt>`. For example:
-```
-clai Write an email saying I'll be late to work because I'm working on commandline AIs
-```
-
-
-## Development
-
-### Installing python dependencies
-```shell
-poetry install
-```
-
-### Running Tests
-```shell
-pytest .
-```
-
-### Formatting Code
-```shell
-bash .github/format.sh
-```
-
-### Linting
-```shell
-bash .github/check_lint.sh
-```
-
-[shin]: https://github.com/apockill/shin
clai-0.2.0/clai/__init__.py
DELETED
clai-0.2.0/clai/api.py
DELETED
@@ -1,17 +0,0 @@
-import os
-from types import ModuleType
-
-import openai
-
-API_TOKEN_VAR = "OPENAI_API_TOKEN"
-
-
-def initialize_api() -> ModuleType:
-    try:
-        api_key = os.environ[API_TOKEN_VAR]
-    except KeyError:
-        print(f"You must set the`{API_TOKEN_VAR}` variable in your environment!")
-        exit(1)
-
-    openai.api_key = api_key
-    return openai
clai-0.2.0/clai/behavior_context.py
DELETED
@@ -1,161 +0,0 @@
-from dataclasses import dataclass
-from typing import Literal, Union
-
-from clai.ocr_drivers import WindowContext
-
-USER_PROMPT_FORMAT = """
-User Prompt:
-```
-{user_prompt}
-```
-"""
-OCR_EXTRACTION_FORMAT = """
-Active Window Title: {active_window_title}
-
-Active Window OCR Extracted Text (RAW):
------- OCR DATA START ------
-```
-{ocr_text}
-```
------- OCR DATA END ------
-
-{user_prompt}
-
-Please answer "User Prompt" using the raw OCR text as context to the message.
-"""
-
-
-@dataclass
-class Prompt:
-    context: WindowContext
-    prompt: str
-
-    def __str__(self) -> str:
-        """Serialize the Prompt with differing formats, depending on whether window
-        content was available
-
-        :return: The window context and prompt in a standardized format
-        """
-        """."""
-        user_prompt = USER_PROMPT_FORMAT.format(user_prompt=self.prompt.strip())
-        if self.context.clean_screen_text and self.context.active_window_name:
-            return OCR_EXTRACTION_FORMAT.format(
-                active_window_title=self.context.active_window_name.strip(),
-                ocr_text=self.context.clean_screen_text.strip(),
-                user_prompt=user_prompt.strip(),
-            )
-        return user_prompt.strip()
-
-
-@dataclass
-class Message:
-    role: Literal["system", "user", "assistant"]
-    content: Union[Prompt, str]
-
-    def to_api(self) -> dict[str, str]:
-        """To OpenAPI format"""
-        if isinstance(self.content, str) and self.role == "user":
-            raise RuntimeError("The user message must be of type Prompt!")
-
-        return {"role": self.role, "content": str(self.content)}
-
-
-_DEFAULT_ASSISTANT_ROLE = """
-You are an assistant that is capable of being called anywhere on a desktop computer. You
-may be called within the context of an email, a URL box, commandline, a text editor, or
-even word documents!
-
-Your role is to answer the users request as shortly and succinctly as possible. You
-will follow the following rules:
-
-When asked to write long-form text content:
-1) Never ask for more information. If something is to be guessed, write it in template
-format. For example, if asked to write an email use <Insert time here> when writing
-the portion of an email that specifies something that was not included in the users
-question.
-2) Only assume the content is long form if the user mentions email, or 'long message'.
-
-When asked to write a command, code, formulas, or any one-line response task:
-1) NEVER WRITE EXPLANATIONS! Only include the command/code/etc, ready to be run
-2) NEVER WRITE USAGE INSTRUCTIONS! Do not explain how to use the command/code/formulas.
-3) NEVER WRITE NOTES ABOUT THE IMPLEMENTATION!
-Do not explain what it does or it's limitations.
-4) Remember, the text that you write will immediately be run, do not include code blocks
-5) If there is something that requires user input, such as a cell in a sheet or a
-variable from the user, write it inside of brackets, like this: <INPUT DESCRIBER>,
-where the insides of the bracket have an example of what is needed to be filled in.
-6) Assume a linux desktop environment in a bash shell. Use freely available unix tools.
-
-You will receive OCR context and window title names, for some prompts. They are very
-noisy, use best-effort when reading them.
-"""
-_EXAMPLE_EMAIL = """
-Dear <Recipient's Name>,
-
-I hope this email finds you well. I am writing to request a meeting with you on <Date and Time>, and I would appreciate it if you could confirm your availability at your earliest convenience.
-
-The purpose of this meeting is to discuss <Purpose of the Meeting> with you. Specifically, I would like to <Agenda Item 1>, <Agenda Item 2>, and <Agenda Item 3>. The meeting will last approximately <Meeting Duration> and will take place at <Meeting Location>.
-
-Please let me know if this date and time work for you. If not, please suggest an alternative time that is convenient for you. Additionally, if there are any documents or information you would like me to review before the meeting, please let me know, and I will make sure to review them.
-
-I look forward to hearing from you soon.
-
-Best regards,
-
-<Your Name>
-"""  # noqa: E501
-_EXAMPLE_REGEX = '=IFERROR(REGEXEXTRACT(<INPUT CELL HERE>, "[A-z0-9._%+-]+@[A-z0-9.-]+\.[A-z]{2,4}");"")'  # noqa
-_EXAMPLE_PYTHON = """
-def fibonacci(n: int) -> Generator[int, None, None]:
-    a, b = 0, 1
-    for _ in range(n):
-        yield a
-        a, b = b, a + b
-"""
-_EXAMPLE_GOOGLE_SHEETS = '=IFERROR(REGEXEXTRACT(<INPUT CELL HERE>, "[A-z0-9._%+-]+@[A-z0-9.-]+\.[A-z]{2,4}");"")'  # noqa
-_EXAMPLE_BASH_COMMAND = "grep -rnw . -e 'bruh'"
-
-MESSAGE_CONTEXT: list[Message] = [
-    Message(role="system", content=_DEFAULT_ASSISTANT_ROLE),
-    Message(
-        role="user",
-        content=Prompt(
-            WindowContext(),
-            prompt="commandline search for files with the name 'bruh' in them",
-        ),
-    ),
-    Message(role="assistant", content=_EXAMPLE_BASH_COMMAND),
-    Message(
-        role="user",
-        content=Prompt(
-            context=WindowContext(), prompt="email set up a meeting next week"
-        ),
-    ),
-    Message(role="assistant", content=_EXAMPLE_EMAIL),
-    Message(
-        role="user",
-        content=Prompt(
-            context=WindowContext(),
-            prompt="google sheets formula extracts an email from string of text",
-        ),
-    ),
-    Message(role="assistant", content=_EXAMPLE_GOOGLE_SHEETS),
-    Message(
-        role="user",
-        content=Prompt(
-            context=WindowContext(),
-            prompt="google sheets formula extracts an email from string of text",
-        ),
-    ),
-    Message(role="assistant", content=_EXAMPLE_REGEX),
-    Message(
-        role="user",
-        content=Prompt(
-            context=WindowContext(),
-            prompt="python fibonacci function in form of a generator",
-        ),
-    ),
-    Message(role="assistant", content=_EXAMPLE_PYTHON),
-]
-
-__all__ = ["MESSAGE_CONTEXT", "Message", "Prompt"]
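The core idea of the deleted behavior-context module is few-shot priming: one system message, a fixed set of example user/assistant pairs, and the real prompt appended last. A minimal stdlib sketch of that message-list construction (the function name and signature are mine, not from the package):

```python
def build_chat_messages(
    system: str,
    examples: list[tuple[str, str]],
    user_prompt: str,
) -> list[dict[str, str]]:
    """Few-shot priming: a system message, then example (user, assistant)
    pairs demonstrating the desired style, then the real prompt last."""
    messages = [{"role": "system", "content": system}]
    for user_msg, assistant_msg in examples:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

In the deleted code, `MESSAGE_CONTEXT` played the role of the system-plus-examples prefix, with pairs like the `grep -rnw . -e 'bruh'` example teaching the model to answer with bare commands.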
clai-0.2.0/clai/main.py
DELETED
@@ -1,22 +0,0 @@
-from argparse import ArgumentParser
-
-from .api import initialize_api
-from .message_creation import create_message_context
-
-
-def main() -> None:
-    parser = ArgumentParser("CLAI- your own command line AI!")
-    parser.add_argument("prompt", type=str, nargs="+")
-    parser.add_argument("-m", "--model", default="gpt-3.5-turbo")
-    args = parser.parse_args()
-
-    openai = initialize_api()
-
-    prompt = " ".join(args.prompt)
-    response = openai.ChatCompletion.create(
-        model=args.model,
-        messages=create_message_context(prompt),
-    )
-
-    best_response = response["choices"][0]["message"]["content"]
-    print(best_response)
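The deleted entry point used `nargs="+"` so the prompt could be typed unquoted and joined back into one string before the API call. The argument handling alone can be sketched stdlib-only, without the OpenAI call (function name `parse_prompt` is mine):

```python
from argparse import ArgumentParser


def parse_prompt(argv: list[str]) -> tuple[str, str]:
    """Mirror the deleted main(): collect positional words into one prompt
    string and return it along with the chosen model name."""
    parser = ArgumentParser("CLAI- your own command line AI!")
    parser.add_argument("prompt", type=str, nargs="+")
    parser.add_argument("-m", "--model", default="gpt-3.5-turbo")
    args = parser.parse_args(argv)
    return " ".join(args.prompt), args.model
```

This is why `clai Write an email ...` works without quotes: every positional word lands in `args.prompt` and is re-joined with spaces.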
clai-0.2.0/clai/message_creation.py
DELETED
@@ -1,21 +0,0 @@
-from .behavior_context import MESSAGE_CONTEXT, Message, Prompt
-from .ocr_drivers import get_driver
-
-
-def create_message_context(prompt: str) -> list[dict[str, str]]:
-    """An important part of a ChatBot is the prior context. Here, we carefully
-    construct the context for the messages we are sending.
-
-    This function returns an OpenAI-API ready message with the prompt included.
-    :param prompt: The prompt for the chatbot
-    :return: The API-ready message
-    """
-    driver = get_driver()
-    window_context = driver.extract_context()
-    new_context = MESSAGE_CONTEXT[:]
-    new_context.append(
-        Message(role="user", content=Prompt(context=window_context, prompt=prompt))
-    )
-    api_format = [m.to_api() for m in new_context]
-    print("CONTEXT", api_format[-1]["content"])
-    return api_format
clai-0.2.0/clai/ocr_drivers/base_driver.py
DELETED
@@ -1,39 +0,0 @@
-from abc import ABC, abstractmethod
-from dataclasses import dataclass
-from typing import Optional
-
-_MIN_CHARACTERS_PER_LINE = 10
-
-
-@dataclass
-class WindowContext:
-    raw_screen_text: Optional[str] = None
-    """If the driver supports it, the text extracted from the active window will be
-    filled here."""
-
-    active_window_name: Optional[str] = None
-    """If the driver supports it, the current active window name will be filled here."""
-
-    @property
-    def clean_screen_text(self) -> Optional[str]:
-        if not self.raw_screen_text:
-            return None
-
-        lines: list[str] = self.raw_screen_text.split("\n")
-        clean_lines = []
-        for line in lines:
-            line = line.strip()
-            if len(line) > _MIN_CHARACTERS_PER_LINE:
-                clean_lines.append(line)
-
-        clean_text = "\n".join(clean_lines)
-        return clean_text
-
-
-class BaseOCRDriver(ABC):
-    """This base class can be used to standardize the interface for future OS's"""
-
-    @abstractmethod
-    def extract_context(self) -> WindowContext:
-        """A method to extract the current useful context from the in-focus window"""
-        pass
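The `clean_screen_text` property in the deleted base driver carries the file's only real logic: OCR output is noisy, so any line of 10 or fewer characters after stripping is assumed to be noise and dropped. Extracted as a plain function (a sketch mirroring the deleted code, not part of either release):

```python
_MIN_CHARACTERS_PER_LINE = 10


def clean_screen_text(raw_screen_text: str) -> str:
    """Drop short, likely-noise OCR lines, keeping only lines longer than
    _MIN_CHARACTERS_PER_LINE characters after stripping whitespace."""
    clean_lines = []
    for line in raw_screen_text.split("\n"):
        line = line.strip()
        if len(line) > _MIN_CHARACTERS_PER_LINE:
            clean_lines.append(line)
    return "\n".join(clean_lines)
```

A fixed length threshold is a crude but cheap heuristic: stray single characters and short fragments from misrecognized UI chrome are discarded before the text is sent along as prompt context.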
|
clai-0.2.0/clai/ocr_drivers/linux_driver.py
DELETED
@@ -1,35 +0,0 @@
-from typing import cast
-
-import pyautogui
-import pytesseract
-import pywinctl
-from PIL.Image import Image
-
-from .base_driver import BaseOCRDriver, WindowContext
-
-
-class LinuxOCRDriver(BaseOCRDriver):
-    def extract_context(self) -> WindowContext:
-        screenshot = self._extract_active_window_screenshot()
-
-        # Perform OCR and clean up the text
-        raw_ocr_text = pytesseract.image_to_string(screenshot)
-
-        return WindowContext(
-            raw_screen_text=raw_ocr_text,
-            active_window_name=cast(str, pywinctl.getActiveWindowTitle()),
-        )
-
-    def _extract_active_window_screenshot(self) -> Image:
-        # Get the active window object
-        active_window = pywinctl.getActiveWindow()  # type: ignore
-        region = (
-            active_window.left,
-            active_window.top,
-            active_window.width,
-            active_window.height,
-        )
-
-        # Take a screenshot of the active window
-        screenshot = pyautogui.screenshot(region=region)
-        return screenshot
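The driver abstraction removed above lets callers depend only on `BaseOCRDriver`, while `LinuxOCRDriver` supplies the platform-specific screenshot-plus-OCR implementation. Since that driver needs a display server plus `pytesseract`/`pyautogui`/`pywinctl`, a hypothetical test double (the `FakeOCRDriver` name is illustrative, not from the package) sketches how the interface is consumed:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class WindowContext:
    raw_screen_text: Optional[str] = None
    active_window_name: Optional[str] = None


class BaseOCRDriver(ABC):
    """Standardizes the context-extraction interface across OS drivers."""

    @abstractmethod
    def extract_context(self) -> WindowContext:
        ...


class FakeOCRDriver(BaseOCRDriver):
    """Hypothetical stand-in for LinuxOCRDriver: returns canned context
    instead of screenshotting and OCR-ing the active window."""

    def extract_context(self) -> WindowContext:
        return WindowContext(
            raw_screen_text="Subject: running late today",
            active_window_name="Mail - Compose",
        )


# Callers program against the base class, not a concrete driver.
driver: BaseOCRDriver = FakeOCRDriver()
context = driver.extract_context()
print(context.active_window_name)  # → Mail - Compose
```

This is the usual ABC pattern: adding, say, a macOS driver would mean one new subclass, with no change to code that consumes `WindowContext`.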
clai-0.2.0/pyproject.toml
DELETED
@@ -1,51 +0,0 @@
-[tool.poetry]
-name = "clai"
-version = "0.2.0"
-description = "Command Line AI- this tool lets you call ChatGPT from a CLI"
-authors = ["apockill <apocthiel@gmail.com>"]
-license = "Proprietary"
-readme = "README.md"
-
-[tool.poetry.dependencies]
-python = "^3.8"
-openai = "^0.27.0"
-pytesseract = "^0.3.10"
-PyAutoGUI = "^0.9.53"
-PyWinCtl = "^0.0.43"
-
-[tool.poetry.dev-dependencies]
-darglint = "^1.8.1"
-vulture = "^2.5"
-bandit = "^1.7"
-isort = "^5.10"
-flake8-bugbear = "^22.7"
-black = "^22.6"
-mypy = "^0.961"
-pytest = "^7.1"
-pytest-cov = "^3.0"
-pep8-naming = "^0.13.1"
-portray = "^1.7"
-cruft = "^2.10"
-
-[tool.poetry.scripts]
-clai = "clai:main"
-ai = "clai:main"
-
-[build-system]
-requires = ["poetry>=0.12"]
-build-backend = "poetry.masonry.api"
-
-[tool.black]
-line-length = 88
-
-[tool.isort]
-profile = "hug"
-
-[tool.mypy]
-strict = true
-ignore_missing_imports = true
-disallow_subclassing_any = false
-implicit_reexport = true
-# We can't add annotations to decorators from other libraries, making this
-# check not very useful
-disallow_untyped_decorators = false
clai-0.2.0/setup.py
DELETED
@@ -1,37 +0,0 @@
-# -*- coding: utf-8 -*-
-from setuptools import setup
-
-packages = \
-['clai', 'clai.ocr_drivers']
-
-package_data = \
-{'': ['*']}
-
-install_requires = \
-['PyAutoGUI>=0.9.53,<0.10.0',
- 'PyWinCtl>=0.0.43,<0.0.44',
- 'openai>=0.27.0,<0.28.0',
- 'pytesseract>=0.3.10,<0.4.0']
-
-entry_points = \
-{'console_scripts': ['ai = clai:main', 'clai = clai:main']}
-
-setup_kwargs = {
-    'name': 'clai',
-    'version': '0.2.0',
-    'description': 'Command Line AI- this tool lets you call ChatGPT from a CLI',
-    'long_description': '# clai\nCommand Line AI- this tool lets you call ChatGPT from a CLI. \n\nI\'m designing this to be used in conjunction with a fork of [shin][shin], which will allow you\nto call `clai` from any textbox in your computer. Finally, ChatGPT everywhere!\n\nThe long-term vision for this project is to add support for extracting context. For example, it would\nread the current text on a window and be able to add to it, or answer questions about it.\n\n_________________\n\n[](http://badge.fury.io/py/clai)\n[](https://github.com/apockill/clai/actions?query=workflow%3ATest)\n[](https://github.com/apockill/clai/actions?query=workflow%3ALint)\n[](https://codecov.io/gh/apockill/clai)\n[](https://github.com/psf/black)\n[](https://timothycrosley.github.io/isort/)\n_________________\n\n[Read Latest Documentation](https://apockill.github.io/clai/) - [Browse GitHub Code Repository](https://github.com/apockill/clai/)\n_________________\n\n## Installation\n\n1. The recommended installation method is to use `pipx`, via\n ```bash\n pipx install clai\n ```\n Optionally, install `tesseract` so that `clai` can read the screen context and send that along with requests:\n ```bash\n sudo apt install tesseract-ocr scrot\n ```\n1. Then go to [OpenAI] and create an API Key. Once it\'s generated, add the following to \n your `~/.profile`:\n ```bash\n export OPENAI_API_TOKEN=<paste here>\n ```\n\n1. The best way to use this tool is in conjunction with the tool [shin][shin], which allows you\n to run arbitrary bash commands in any textbox in a linux computer, using ibus. To use \n that, install \'shin\' via the fork above, then configure\n it in your `~/.profile` to call `clai` by default:\n ```bash\n export SHIN_DEFAULT_COMMAND="clai"\n ```\n1. Log out then log back in for the changes to take effect!\n\n[OpenAI]: https://platform.openai.com/account/api-keys\n\n## Usage\nInvoke the assistant with the format `clai <your prompt>`. For example:\n```\nclai Write an email saying I\'ll be late to work because I\'m working on commandline AIs\n```\n\n\n## Development\n\n### Installing python dependencies\n```shell\npoetry install\n```\n\n### Running Tests\n```shell\npytest .\n```\n\n### Formatting Code\n```shell\nbash .github/format.sh\n```\n\n### Linting\n```shell\nbash .github/check_lint.sh\n```\n\n[shin]: https://github.com/apockill/shin\n',
-    'author': 'apockill',
-    'author_email': 'apocthiel@gmail.com',
-    'maintainer': 'None',
-    'maintainer_email': 'None',
-    'url': 'None',
-    'packages': packages,
-    'package_data': package_data,
-    'install_requires': install_requires,
-    'entry_points': entry_points,
-    'python_requires': '>=3.8,<4.0',
-}
-
-
-setup(**setup_kwargs)