clai 0.3.0-py3-none-any.whl → 0.3.2-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

This version of clai has been flagged as a potentially problematic release.

clai/__init__.py CHANGED
@@ -1,6 +1,11 @@
- """**clai**
+ from importlib.metadata import version as _metadata_version

- Command Line AI- this tool lets you call ChatGPT from a CLI
- """
- __version__ = "0.1.0"
- from .main import main
+ from pydantic_ai import _cli
+
+ __all__ = '__version__', 'cli'
+ __version__ = _metadata_version('clai')
+
+
+ def cli():
+     """Run the clai CLI and exit."""
+     _cli.cli_exit('clai')
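
For orientation, the new module surface is just `__version__` (now resolved from the installed distribution's metadata rather than hard-coded) and `cli`, which delegates to `pydantic_ai._cli.cli_exit`. A minimal sketch of what that means for programmatic use, assuming clai 0.3.2 and its pydantic-ai dependency are installed:

```python
# Illustrative sketch only (assumes `pip install clai==0.3.2`).
from importlib.metadata import version

import clai

# __version__ is read from the installed package's metadata at import time.
assert clai.__version__ == version("clai")

# clai.cli() starts the interactive CLI and exits the process when it finishes,
# so it is normally reached via the `clai` console script or `python -m clai`.
```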
clai/__main__.py ADDED
@@ -0,0 +1,6 @@
+ """This means `python -m clai` should run the CLI."""
+
+ from pydantic_ai import _cli
+
+ if __name__ == '__main__':
+     _cli.cli_exit('clai')
clai-0.3.2.dist-info/METADATA ADDED
@@ -0,0 +1,109 @@
+ Metadata-Version: 2.4
+ Name: clai
+ Version: 0.3.2
+ Summary: PydanticAI CLI: command line interface to chat to LLMs
+ Author-email: Samuel Colvin <samuel@pydantic.dev>, Marcelo Trylesinski <marcelotryle@gmail.com>, David Montague <david@pydantic.dev>, Alex Hall <alex@pydantic.dev>
+ License-Expression: MIT
+ License-File: LICENSE
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Environment :: Console
+ Classifier: Environment :: MacOS X
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: Information Technology
+ Classifier: Intended Audience :: System Administrators
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: POSIX :: Linux
+ Classifier: Operating System :: Unix
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3 :: Only
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Internet
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Requires-Python: >=3.9
+ Requires-Dist: pydantic-ai==0.3.2
+ Description-Content-Type: text/markdown
+
+ # clai
+
+ [![CI](https://github.com/pydantic/pydantic-ai/actions/workflows/ci.yml/badge.svg?event=push)](https://github.com/pydantic/pydantic-ai/actions/workflows/ci.yml?query=branch%3Amain)
+ [![Coverage](https://coverage-badge.samuelcolvin.workers.dev/pydantic/pydantic-ai.svg)](https://coverage-badge.samuelcolvin.workers.dev/redirect/pydantic/pydantic-ai)
+ [![PyPI](https://img.shields.io/pypi/v/clai.svg)](https://pypi.python.org/pypi/clai)
+ [![versions](https://img.shields.io/pypi/pyversions/clai.svg)](https://github.com/pydantic/pydantic-ai)
+ [![license](https://img.shields.io/github/license/pydantic/pydantic-ai.svg?v)](https://github.com/pydantic/pydantic-ai/blob/main/LICENSE)
+
+ (pronounced "clay")
+
+ Command line interface to chat to LLMs, part of the [PydanticAI project](https://github.com/pydantic/pydantic-ai).
+
+ ## Usage
+
+ <!-- Keep this in sync with docs/cli.md -->
+
+ You'll need to set an environment variable depending on the provider you intend to use.
+
+ E.g. if you're using OpenAI, set the `OPENAI_API_KEY` environment variable:
+
+ ```bash
+ export OPENAI_API_KEY='your-api-key-here'
+ ```
+
+ Then with [`uvx`](https://docs.astral.sh/uv/guides/tools/), run:
+
+ ```bash
+ uvx clai
+ ```
+
+ Or to install `clai` globally [with `uv`](https://docs.astral.sh/uv/guides/tools/#installing-tools), run:
+
+ ```bash
+ uv tool install clai
+ ...
+ clai
+ ```
+
+ Or with `pip`, run:
+
+ ```bash
+ pip install clai
+ ...
+ clai
+ ```
+
+ Either way, running `clai` will start an interactive session where you can chat with the AI model. Special commands available in interactive mode:
+
+ - `/exit`: Exit the session
+ - `/markdown`: Show the last response in markdown format
+ - `/multiline`: Toggle multiline input mode (use Ctrl+D to submit)
+
+ ## Help
+
+ ```
+ usage: clai [-h] [-m [MODEL]] [-a AGENT] [-l] [-t [CODE_THEME]] [--no-stream] [--version] [prompt]
+
+ PydanticAI CLI v...
+
+ Special prompts:
+ * `/exit` - exit the interactive mode (ctrl-c and ctrl-d also work)
+ * `/markdown` - show the last markdown output of the last question
+ * `/multiline` - toggle multiline mode
+
+ positional arguments:
+ prompt AI Prompt, if omitted fall into interactive mode
+
+ options:
+ -h, --help show this help message and exit
+ -m [MODEL], --model [MODEL]
+ Model to use, in format "<provider>:<model>" e.g. "openai:gpt-4o" or "anthropic:claude-3-7-sonnet-latest". Defaults to "openai:gpt-4o".
+ -a AGENT, --agent AGENT
+ Custom Agent to use, in format "module:variable", e.g. "mymodule.submodule:my_agent"
+ -l, --list-models List all available models and exit
+ -t [CODE_THEME], --code-theme [CODE_THEME]
+ Which colors to use for code, can be "dark", "light" or any theme from pygments.org/styles/. Defaults to "dark" which works well on dark terminals.
+ --no-stream Disable streaming from the model
+ --version Show version and exit
+ ```
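
The `-a`/`--agent` option in the help text above takes a `module:variable` reference to a pydantic-ai `Agent`. As a rough sketch of what such a module could look like (the module path `mymodule/submodule.py`, model name, and system prompt are illustrative placeholders, not part of this package):

```python
# mymodule/submodule.py -- hypothetical target for `clai --agent mymodule.submodule:my_agent`.
from pydantic_ai import Agent

# Any model the CLI supports can be used here; the prompt is a placeholder.
my_agent = Agent(
    "openai:gpt-4o",
    system_prompt="You are a terse assistant; answer in one sentence where possible.",
)
```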
clai-0.3.2.dist-info/RECORD ADDED
@@ -0,0 +1,7 @@
+ clai/__init__.py,sha256=N6cw-oEjoIAPwaEqBRCHf3j9Y4tNDUp1YS-tplkvyfA,238
+ clai/__main__.py,sha256=1ClP9aMManzEL8lbRtet7dGW87KZmQRB92aRcmlCeMs,138
+ clai-0.3.2.dist-info/METADATA,sha256=bvwwafMEJIxieqqGb6DbB3ih-0mdsNrjWrTumPaBUsU,4218
+ clai-0.3.2.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ clai-0.3.2.dist-info/entry_points.txt,sha256=DyHzt1YJ1DwtFNsrFOoUGwKkmk8mHJ-iiZLairTUq5E,34
+ clai-0.3.2.dist-info/licenses/LICENSE,sha256=vA6Jc482lEyBBuGUfD1pYx-cM7jxvLYOxPidZ30t_PQ,1100
+ clai-0.3.2.dist-info/RECORD,,
clai-0.3.2.dist-info/WHEEL CHANGED
@@ -1,4 +1,4 @@
  Wheel-Version: 1.0
- Generator: poetry-core 1.1.0
+ Generator: hatchling 1.27.0
  Root-Is-Purelib: true
  Tag: py3-none-any
clai-0.3.2.dist-info/entry_points.txt ADDED
@@ -0,0 +1,2 @@
+ [console_scripts]
+ clai = clai:cli
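
The new wheel therefore installs a single `clai` console script that resolves to `clai:cli`; the 0.3.0 wheel registered both `ai` and `clai` against `clai:main`, as shown at the end of this diff. A small sketch of inspecting that mapping at runtime, assuming Python 3.10+ for the `group` keyword:

```python
# Sketch: look up the installed console-script entry point (Python 3.10+ assumed).
from importlib.metadata import entry_points

clai_ep = next(ep for ep in entry_points(group="console_scripts") if ep.name == "clai")
print(clai_ep.value)  # expected: "clai:cli"
cli = clai_ep.load()  # imports clai and returns its cli() function
```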
clai-0.3.2.dist-info/licenses/LICENSE ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) Pydantic Services Inc. 2024 to present
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
clai/api.py DELETED
@@ -1,17 +0,0 @@
- import os
- from types import ModuleType
-
- import openai
-
- API_TOKEN_VAR = "OPENAI_API_TOKEN"
-
-
- def initialize_api() -> ModuleType:
-     try:
-         api_key = os.environ[API_TOKEN_VAR]
-     except KeyError:
-         print(f"You must set the`{API_TOKEN_VAR}` variable in your environment!")
-         exit(1)
-
-     openai.api_key = api_key
-     return openai
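
One practical consequence of this deletion: the old module refused to run unless `OPENAI_API_TOKEN` was set, while the 0.3.2 README above documents the standard `OPENAI_API_KEY`. A purely illustrative shim for environments that still export the old name (both variable names are taken from this diff):

```python
# Illustrative only: bridge the old variable name to the one the new CLI documents.
import os

if "OPENAI_API_KEY" not in os.environ and "OPENAI_API_TOKEN" in os.environ:
    os.environ["OPENAI_API_KEY"] = os.environ["OPENAI_API_TOKEN"]
```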
clai/behavior_context.py DELETED
@@ -1,161 +0,0 @@
- from dataclasses import dataclass
- from typing import Literal, Union
-
- from clai.ocr_drivers import WindowContext
-
- USER_PROMPT_FORMAT = """
- User Prompt:
- ```
- {user_prompt}
- ```
- """
- OCR_EXTRACTION_FORMAT = """
- Active Window Title: {active_window_title}
-
- Active Window OCR Extracted Text (RAW):
- ------ OCR DATA START ------
- ```
- {ocr_text}
- ```
- ------ OCR DATA END ------
-
- {user_prompt}
-
- Please answer "User Prompt" using the raw OCR text as context to the message.
- """
-
-
- @dataclass
- class Prompt:
-     context: WindowContext
-     prompt: str
-
-     def __str__(self) -> str:
-         """Serialize the Prompt with differing formats, depending on whether window
-         content was available
-
-         :return: The window context and prompt in a standardized format
-         """
-         """."""
-         user_prompt = USER_PROMPT_FORMAT.format(user_prompt=self.prompt.strip())
-         if self.context.clean_screen_text and self.context.active_window_name:
-             return OCR_EXTRACTION_FORMAT.format(
-                 active_window_title=self.context.active_window_name.strip(),
-                 ocr_text=self.context.clean_screen_text.strip(),
-                 user_prompt=user_prompt.strip(),
-             )
-         return user_prompt.strip()
-
-
- @dataclass
- class Message:
-     role: Literal["system", "user", "assistant"]
-     content: Union[Prompt, str]
-
-     def to_api(self) -> dict[str, str]:
-         """To OpenAPI format"""
-         if isinstance(self.content, str) and self.role == "user":
-             raise RuntimeError("The user message must be of type Prompt!")
-
-         return {"role": self.role, "content": str(self.content)}
-
-
- _DEFAULT_ASSISTANT_ROLE = """
- You are an assistant that is capable of being called anywhere on a desktop computer. You
- may be called within the context of an email, a URL box, commandline, a text editor, or
- even word documents!
-
- Your role is to answer the users request as shortly and succinctly as possible. You
- will follow the following rules:
-
- When asked to write long-form text content:
- 1) Never ask for more information. If something is to be guessed, write it in template
- format. For example, if asked to write an email use <Insert time here> when writing
- the portion of an email that specifies something that was not included in the users
- question.
- 2) Only assume the content is long form if the user mentions email, or 'long message'.
-
- When asked to write a command, code, formulas, or any one-line response task:
- 1) NEVER WRITE EXPLANATIONS! Only include the command/code/etc, ready to be run
- 2) NEVER WRITE USAGE INSTRUCTIONS! Do not explain how to use the command/code/formulas.
- 3) NEVER WRITE NOTES ABOUT THE IMPLEMENTATION!
- Do not explain what it does or it's limitations.
- 4) Remember, the text that you write will immediately be run, do not include code blocks
- 5) If there is something that requires user input, such as a cell in a sheet or a
- variable from the user, write it inside of brackets, like this: <INPUT DESCRIBER>,
- where the insides of the bracket have an example of what is needed to be filled in.
- 6) Assume a linux desktop environment in a bash shell. Use freely available unix tools.
-
- You will receive OCR context and window title names, for some prompts. They are very
- noisy, use best-effort when reading them.
- """
- _EXAMPLE_EMAIL = """
- Dear <Recipient's Name>,
-
- I hope this email finds you well. I am writing to request a meeting with you on <Date and Time>, and I would appreciate it if you could confirm your availability at your earliest convenience.
-
- The purpose of this meeting is to discuss <Purpose of the Meeting> with you. Specifically, I would like to <Agenda Item 1>, <Agenda Item 2>, and <Agenda Item 3>. The meeting will last approximately <Meeting Duration> and will take place at <Meeting Location>.
-
- Please let me know if this date and time work for you. If not, please suggest an alternative time that is convenient for you. Additionally, if there are any documents or information you would like me to review before the meeting, please let me know, and I will make sure to review them.
-
- I look forward to hearing from you soon.
-
- Best regards,
-
- <Your Name>
- """ # noqa: E501
- _EXAMPLE_REGEX = '=IFERROR(REGEXEXTRACT(<INPUT CELL HERE>, "[A-z0-9._%+-]+@[A-z0-9.-]+\.[A-z]{2,4}");"")' # noqa
- _EXAMPLE_PYTHON = """
- def fibonacci(n: int) -> Generator[int, None, None]:
-     a, b = 0, 1
-     for _ in range(n):
-         yield a
-         a, b = b, a + b
- """
- _EXAMPLE_GOOGLE_SHEETS = '=IFERROR(REGEXEXTRACT(<INPUT CELL HERE>, "[A-z0-9._%+-]+@[A-z0-9.-]+\.[A-z]{2,4}");"")' # noqa
- _EXAMPLE_BASH_COMMAND = "grep -rnw . -e 'bruh'"
-
- MESSAGE_CONTEXT: list[Message] = [
-     Message(role="system", content=_DEFAULT_ASSISTANT_ROLE),
-     Message(
-         role="user",
-         content=Prompt(
-             WindowContext(),
-             prompt="commandline search for files with the name 'bruh' in them",
-         ),
-     ),
-     Message(role="assistant", content=_EXAMPLE_BASH_COMMAND),
-     Message(
-         role="user",
-         content=Prompt(
-             context=WindowContext(), prompt="email set up a meeting next week"
-         ),
-     ),
-     Message(role="assistant", content=_EXAMPLE_EMAIL),
-     Message(
-         role="user",
-         content=Prompt(
-             context=WindowContext(),
-             prompt="google sheets formula extracts an email from string of text",
-         ),
-     ),
-     Message(role="assistant", content=_EXAMPLE_GOOGLE_SHEETS),
-     Message(
-         role="user",
-         content=Prompt(
-             context=WindowContext(),
-             prompt="google sheets formula extracts an email from string of text",
-         ),
-     ),
-     Message(role="assistant", content=_EXAMPLE_REGEX),
-     Message(
-         role="user",
-         content=Prompt(
-             context=WindowContext(),
-             prompt="python fibonacci function in form of a generator",
-         ),
-     ),
-     Message(role="assistant", content=_EXAMPLE_PYTHON),
- ]
-
- __all__ = ["MESSAGE_CONTEXT", "Message", "Prompt"]
clai/main.py DELETED
@@ -1,22 +0,0 @@
- from argparse import ArgumentParser
-
- from .api import initialize_api
- from .message_creation import create_message_context
-
-
- def main() -> None:
-     parser = ArgumentParser("CLAI- your own command line AI!")
-     parser.add_argument("prompt", type=str, nargs="+")
-     parser.add_argument("-m", "--model", default="gpt-3.5-turbo")
-     args = parser.parse_args()
-
-     openai = initialize_api()
-
-     prompt = " ".join(args.prompt)
-     response = openai.ChatCompletion.create(
-         model=args.model,
-         messages=create_message_context(prompt),
-     )
-
-     best_response = response["choices"][0]["message"]["content"]
-     print(best_response)
clai/message_creation.py DELETED
@@ -1,20 +0,0 @@
- from .behavior_context import MESSAGE_CONTEXT, Message, Prompt
- from .ocr_drivers import get_driver
-
-
- def create_message_context(prompt: str) -> list[dict[str, str]]:
-     """An important part of a ChatBot is the prior context. Here, we carefully
-     construct the context for the messages we are sending.
-
-     This function returns an OpenAI-API ready message with the prompt included.
-     :param prompt: The prompt for the chatbot
-     :return: The API-ready message
-     """
-     driver = get_driver()
-     window_context = driver.extract_context()
-     new_context = MESSAGE_CONTEXT[:]
-     new_context.append(
-         Message(role="user", content=Prompt(context=window_context, prompt=prompt))
-     )
-     api_format = [m.to_api() for m in new_context]
-     return api_format
clai/ocr_drivers/__init__.py DELETED
@@ -1,7 +0,0 @@
- from .base_driver import BaseOCRDriver, WindowContext
- from .linux_driver import LinuxOCRDriver
-
-
- def get_driver() -> BaseOCRDriver:
-     """In the future, we can return other OS compatible OCR solutions here"""
-     return LinuxOCRDriver()
clai/ocr_drivers/base_driver.py DELETED
@@ -1,39 +0,0 @@
- from abc import ABC, abstractmethod
- from dataclasses import dataclass
- from typing import Optional
-
- _MIN_CHARACTERS_PER_LINE = 10
-
-
- @dataclass
- class WindowContext:
-     raw_screen_text: Optional[str] = None
-     """If the driver supports it, the text extracted from the active window will be
-     filled here."""
-
-     active_window_name: Optional[str] = None
-     """If the driver supports it, the current active window name will be filled here."""
-
-     @property
-     def clean_screen_text(self) -> Optional[str]:
-         if not self.raw_screen_text:
-             return None
-
-         lines: list[str] = self.raw_screen_text.split("\n")
-         clean_lines = []
-         for line in lines:
-             line = line.strip()
-             if len(line) > _MIN_CHARACTERS_PER_LINE:
-                 clean_lines.append(line)
-
-         clean_text = "\n".join(clean_lines)
-         return clean_text
-
-
- class BaseOCRDriver(ABC):
-     """This base class can be used to standardize the interface for future OS's"""
-
-     @abstractmethod
-     def extract_context(self) -> WindowContext:
-         """A method to extract the current useful context from the in-focus window"""
-         pass
clai/ocr_drivers/linux_driver.py DELETED
@@ -1,35 +0,0 @@
- from typing import cast
-
- import pyautogui
- import pytesseract
- import pywinctl
- from PIL.Image import Image
-
- from .base_driver import BaseOCRDriver, WindowContext
-
-
- class LinuxOCRDriver(BaseOCRDriver):
-     def extract_context(self) -> WindowContext:
-         screenshot = self._extract_active_window_screenshot()
-
-         # Perform OCR and clean up the text
-         raw_ocr_text = pytesseract.image_to_string(screenshot)
-
-         return WindowContext(
-             raw_screen_text=raw_ocr_text,
-             active_window_name=cast(str, pywinctl.getActiveWindowTitle()),
-         )
-
-     def _extract_active_window_screenshot(self) -> Image:
-         # Get the active window object
-         active_window = pywinctl.getActiveWindow() # type: ignore
-         region = (
-             active_window.left,
-             active_window.top,
-             active_window.width,
-             active_window.height,
-         )
-
-         # Take a screenshot of the active window
-         screenshot = pyautogui.screenshot(region=region)
-         return screenshot
clai-0.3.0.dist-info/METADATA DELETED
@@ -1,99 +0,0 @@
- Metadata-Version: 2.1
- Name: clai
- Version: 0.3.0
- Summary: Command Line AI- this tool lets you call ChatGPT from a CLI
- License: Proprietary
- Author: apockill
- Author-email: apocthiel@gmail.com
- Requires-Python: >=3.8,<4.0
- Classifier: License :: Other/Proprietary License
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.8
- Classifier: Programming Language :: Python :: 3.9
- Classifier: Programming Language :: Python :: 3.10
- Requires-Dist: PyAutoGUI (>=0.9.53,<0.10.0)
- Requires-Dist: PyWinCtl (>=0.0.43,<0.0.44)
- Requires-Dist: openai (>=0.27.0,<0.28.0)
- Requires-Dist: pytesseract (>=0.3.10,<0.4.0)
- Description-Content-Type: text/markdown
-
- # clai
- Command Line AI- this tool lets you call ChatGPT from a CLI.
-
- I'm designing this to be used in conjunction with a fork of [shin][shin], which will allow you
- to call `clai` from any textbox in your computer. Finally, ChatGPT everywhere!
-
- The long-term vision for this project is to add support for extracting context. For example, it would
- read the current text on a window and be able to add to it, or answer questions about it.
-
- _________________
-
- [![PyPI version](https://badge.fury.io/py/clai.svg)](http://badge.fury.io/py/clai)
- [![Test Status](https://github.com/apockill/clai/workflows/Test/badge.svg?branch=main)](https://github.com/apockill/clai/actions?query=workflow%3ATest)
- [![Lint Status](https://github.com/apockill/clai/workflows/Lint/badge.svg?branch=main)](https://github.com/apockill/clai/actions?query=workflow%3ALint)
- [![codecov](https://codecov.io/gh/apockill/clai/branch/main/graph/badge.svg)](https://codecov.io/gh/apockill/clai)
- [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
- [![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://timothycrosley.github.io/isort/)
- _________________
-
- [Read Latest Documentation](https://apockill.github.io/clai/) - [Browse GitHub Code Repository](https://github.com/apockill/clai/)
- _________________
-
- ## Installation
-
- 1. The recommended installation method is to use `pipx`, via
- ```bash
- pipx install clai
- ```
- Optionally, install `tesseract` so that `clai` can read the screen context and send that along with requests:
- ```bash
- sudo apt install tesseract-ocr scrot
- ```
- 1. Then go to [OpenAI] and create an API Key. Once it's generated, add the following to
- your `~/.profile`:
- ```bash
- export OPENAI_API_TOKEN=<paste here>
- ```
-
- 1. The best way to use this tool is in conjunction with the tool [shin][shin], which allows you
- to run arbitrary bash commands in any textbox in a linux computer, using ibus. To use
- that, install 'shin' via the fork above, then configure
- it in your `~/.profile` to call `clai` by default:
- ```bash
- export SHIN_DEFAULT_COMMAND="clai"
- ```
- 1. Log out then log back in for the changes to take effect!
-
- [OpenAI]: https://platform.openai.com/account/api-keys
-
- ## Usage
- Invoke the assistant with the format `clai <your prompt>`. For example:
- ```
- clai Write an email saying I'll be late to work because I'm working on commandline AIs
- ```
-
-
- ## Development
-
- ### Installing python dependencies
- ```shell
- poetry install
- ```
-
- ### Running Tests
- ```shell
- pytest .
- ```
-
- ### Formatting Code
- ```shell
- bash .github/format.sh
- ```
-
- ### Linting
- ```shell
- bash .github/check_lint.sh
- ```
-
- [shin]: https://github.com/apockill/shin
-
clai-0.3.0.dist-info/RECORD DELETED
@@ -1,12 +0,0 @@
- clai/__init__.py,sha256=DM5_f5LBR0Io1iQ1swaNXB-aFmWyYM_jzYIVB8mBBVc,122
- clai/api.py,sha256=vG4EduZzTaE0aQJ7XH-OaihPNSbN3LVN0lSoxoXDYNQ,348
- clai/behavior_context.py,sha256=vCgUP2y_p_sIe3b3n_hYZC6gpoB0m-Ihn2fmir7EU_A,5955
- clai/main.py,sha256=fawJ4enxzwfJPnp49gPF-gO8V0PkFYyB8_B4W42MoTw,643
- clai/message_creation.py,sha256=CTLK4O--FuWXnDvtIWAZ1c6z7ux5LQDMt64zb9voBrc,770
- clai/ocr_drivers/__init__.py,sha256=eLkaqbbBwDacJ4mdJtUy4RQ_0Ni9d0vvuhLPYhdZ1GI,238
- clai/ocr_drivers/base_driver.py,sha256=NbeHO0RtQQDswgVZ0MesYbVHOZzhySO1QHnlrOkTU8Q,1159
- clai/ocr_drivers/linux_driver.py,sha256=yfbJBi2924nRlGQXQRGXdLzYjXJ5KEMoZE7Odv8ndVc,1032
- clai-0.3.0.dist-info/entry_points.txt,sha256=P6GWHTrv7pazlqeu5cpB3qK-hvY1kdPlML1KPXVabug,47
- clai-0.3.0.dist-info/WHEEL,sha256=bbU3AyvhQ312rVm7zzRQjs6axI1UYWC3nmFA2E6FFSI,88
- clai-0.3.0.dist-info/METADATA,sha256=M26OEBFd8xsH_z9b_JxGB7pT1SDJaegBNQE1lducKnY,3451
- clai-0.3.0.dist-info/RECORD,,
clai-0.3.0.dist-info/entry_points.txt DELETED
@@ -1,4 +0,0 @@
- [console_scripts]
- ai=clai:main
- clai=clai:main
-