yaicli 0.0.3__py3-none-any.whl → 0.0.5__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,244 @@
1
+ Metadata-Version: 2.4
2
+ Name: yaicli
3
+ Version: 0.0.5
4
+ Summary: A simple CLI tool to interact with LLM
5
+ License-File: LICENSE
6
+ Requires-Python: >=3.9
7
+ Requires-Dist: distro>=1.9.0
8
+ Requires-Dist: jmespath>=1.0.1
9
+ Requires-Dist: prompt-toolkit>=3.0.50
10
+ Requires-Dist: requests>=2.32.3
11
+ Requires-Dist: rich>=13.9.4
12
+ Requires-Dist: typer>=0.15.2
13
+ Description-Content-Type: text/markdown
14
+
15
+ # YAICLI - Your AI Command Line Interface
16
+
17
+ YAICLI is a powerful command-line AI assistant that lets you interact with Large Language Models (LLMs) from your terminal. It offers multiple operation modes: interactive chat, shell command generation and execution, and quick one-shot queries.
18
+
19
+ ## Features
20
+
21
+ - **Multiple Operation Modes**:
22
+ - **Chat Mode (💬)**: Interactive conversation with the AI assistant
23
+ - **Execute Mode (🚀)**: Generate and execute shell commands specific to your OS and shell
24
+ - **Temp Mode**: Quick queries without entering interactive mode
25
+
26
+ - **Smart Environment Detection**:
27
+ - Automatically detects your operating system and shell
28
+ - Customizes responses and commands for your specific environment
29
+
30
+ - **Rich Terminal Interface**:
31
+ - Markdown rendering for formatted responses
32
+ - Streaming responses for real-time feedback
33
+ - Color-coded output for better readability
34
+
35
+ - **Configurable**:
36
+ - Customizable API endpoints
37
+ - Support for different LLM providers
38
+ - Adjustable response parameters
39
+
40
+ - **Keyboard Shortcuts**:
41
+ - Tab to switch between Chat and Execute modes
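The environment detection above can be sketched as follows; this is a condensed, standalone version of the logic in `yaicli.py`'s `detect_shell` (the `SHELL_NAME=auto` config handling is omitted here):

```python
import platform
from os import getenv
from os.path import basename

def detect_shell() -> str:
    """Detect the current shell (condensed from yaicli's detect_shell)."""
    if platform.system() == "Windows":
        # Heuristic used by the source: PowerShell populates PSModulePath
        # with three or more entries
        is_powershell = len(getenv("PSModulePath", "").split(";")) >= 3
        return "powershell.exe" if is_powershell else "cmd.exe"
    # On POSIX systems, fall back to the SHELL environment variable
    return basename(getenv("SHELL", "/bin/sh"))

print(detect_shell())  # e.g. "zsh", "bash", or "powershell.exe"
```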
42
+
43
+ ## Installation
44
+
45
+ ### Prerequisites
46
+
47
+ - Python 3.9 or higher
48
+ - pip (Python package manager)
49
+
50
+ ### Install from PyPI
51
+
52
+ ```bash
53
+ # Install with pip
54
+ pip install yaicli
55
+
56
+ # Install with pipx
57
+ pipx install yaicli
58
+
59
+ # Install with uv
60
+ uv tool install yaicli
61
+ ```
62
+
63
+ ### Install from Source
64
+
65
+ ```bash
66
+ git clone https://github.com/yourusername/yaicli.git
67
+ cd yaicli
68
+ pip install .
69
+ ```
70
+
71
+ ## Configuration
72
+
73
+ On first run, YAICLI creates a default configuration file at `~/.config/yaicli/config.ini`.
74
+
75
+ Just run `ai` once to generate the file, then edit it to add your API key and adjust any other settings.
76
+
77
+ ### Configuration File
78
+
79
+ ```ini
80
+ [core]
81
+ BASE_URL=https://api.openai.com/v1
82
+ API_KEY=your_api_key_here
83
+ MODEL=gpt-4o
84
+
85
+ # default run mode, default: temp
86
+ # chat: interactive chat mode
87
+ # exec: shell command generation mode
88
+ # temp: one-shot mode
89
+ DEFAULT_MODE=temp
90
+
91
+ # auto detect shell and os
92
+ SHELL_NAME=auto
93
+ OS_NAME=auto
94
+
95
+ # if you want to use custom completions path, you can set it here
96
+ COMPLETION_PATH=/chat/completions
97
+ # if you want to use custom answer path, you can set it here
98
+ ANSWER_PATH=choices[0].message.content
99
+
100
+ # true: streaming response
101
+ # false: non-streaming response
102
+ STREAM=true
103
+ ```
104
+
105
+ ### Configuration Options
106
+
107
+ - **BASE_URL**: API endpoint URL (default: OpenAI API)
108
+ - **API_KEY**: Your API key for the LLM provider
109
+ - **MODEL**: The model to use (e.g., gpt-4o, gpt-3.5-turbo), default: gpt-4o
110
+ - **DEFAULT_MODE**: Default operation mode (chat, exec, or temp), default: temp
111
+ - **SHELL_NAME**: Shell to use (auto for automatic detection), default: auto
112
+ - **OS_NAME**: OS to use (auto for automatic detection), default: auto
113
+ - **COMPLETION_PATH**: Path for completions endpoint, default: /chat/completions
114
+ - **ANSWER_PATH**: JMESPath expression used to extract the answer from the response, default: choices[0].message.content
115
+ - **STREAM**: Enable/disable streaming responses
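`ANSWER_PATH` is a JMESPath expression evaluated against the provider's JSON response. yaicli uses the `jmespath` library for this; the sketch below is a stdlib-only stand-in that handles just the dotted-key/index subset used by the default path (the response body is a hypothetical example):

```python
import re

def search_path(path: str, data):
    """Minimal stand-in for jmespath.search, handling only dotted keys
    and integer indexes such as 'choices[0].message.content'."""
    for part in path.split("."):
        m = re.match(r"(\w+)(?:\[(\d+)\])?$", part)
        data = data[m.group(1)]          # descend into the dict key
        if m.group(2) is not None:
            data = data[int(m.group(2))]  # then apply the list index, if any
    return data

# Hypothetical OpenAI-style response body
resp = {"choices": [{"message": {"role": "assistant", "content": "Paris"}}]}
print(search_path("choices[0].message.content", resp))  # -> Paris
```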
116
+
117
+ ## Usage
118
+
119
+ ### Basic Usage
120
+
121
+ ```bash
122
+ # One-shot mode
123
+ ai "What is the capital of France?"
124
+
125
+ # Chat mode
126
+ ai --chat
127
+
128
+ # Shell command generation mode
129
+ ai --shell "Create a backup of my Documents folder"
130
+
131
+ # Verbose mode for debugging
132
+ ai --verbose "Explain quantum computing"
133
+ ```
134
+
135
+ ### Command Line Options
136
+
137
+ - `<PROMPT>`: The prompt to send to the LLM
138
+ - `--verbose` or `-V`: Show verbose information
139
+ - `--chat` or `-c`: Start in chat mode
140
+ - `--shell` or `-s`: Generate and execute shell command
141
+ - `--install-completion`: Install completion for the current shell
142
+ - `--show-completion`: Show completion for the current shell, to copy it or customize the installation
143
+ - `--help` or `-h`: Show this message and exit
144
+
145
+ ```bash
146
+ ai -h
147
+
148
+ Usage: ai [OPTIONS] [PROMPT]
149
+
150
+ yaicli. Your AI interface in cli.
151
+
152
+ ╭─ Arguments ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
153
+ │ prompt [PROMPT] The prompt send to the LLM │
154
+ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
155
+ ╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
156
+ │ --verbose -V Show verbose information │
157
+ │ --chat -c Start in chat mode │
158
+ │ --shell -s Generate and execute shell command │
159
+ │ --install-completion Install completion for the current shell. │
160
+ │ --show-completion Show completion for the current shell, to copy it or customize the installation. │
161
+ │ --help -h Show this message and exit. │
162
+ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
163
+
164
+
165
+ ```
166
+
167
+ ### Interactive Mode
168
+
169
+ In interactive mode (chat or shell), you can:
170
+ - Type your queries and get responses
171
+ - Use `Tab` to switch between Chat and Execute modes
172
+ - Type `/exit`, `/quit`, or `/q` to exit, and `/clear` (in Chat mode) to clear the conversation history
173
+
174
+ ### Shell Command Generation
175
+
176
+ In Execute mode:
177
+ 1. Enter your request in natural language
178
+ 2. YAICLI will generate an appropriate shell command
179
+ 3. Review the command
180
+ 4. Confirm to execute or reject
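Steps 3 and 4 can be sketched as below, assuming a plain `input()` confirmation (the real tool prompts through its interactive prompt_toolkit session with richer output):

```python
import subprocess

def confirm_and_execute(command: str) -> int:
    """Review-then-run step (simplified sketch; yaicli's execute_mode
    does the equivalent with its interactive session)."""
    answer = input(f"Generated command: {command}\nExecute this command? [y/n]: ")
    if answer.strip().lower() != "y":
        print("Command execution cancelled")
        return 0
    # Run through the shell, as yaicli's execute_shell_command does
    result = subprocess.run(command, shell=True)
    if result.returncode != 0:
        print(f"Command failed with return code: {result.returncode}")
    return result.returncode
```

A rejected command simply returns control to the prompt without executing anything.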
181
+
182
+ ## Examples
183
+
184
+ ### Chat Mode Example
185
+
186
+ ```bash
187
+ $ ai --chat
188
+ 💬 > Tell me about the solar system
189
+
190
+ Assistant:
191
+ Certainly! Here’s a brief overview of the solar system:
192
+
193
+ • Sun: The central star of the solar system, providing light and energy.
194
+ • Planets:
195
+ • Mercury: Closest to the Sun, smallest planet.
196
+ • Venus: Second planet, known for its thick atmosphere and high surface temperature.
197
+ • Earth: Third planet, the only known planet to support life.
198
+ • Mars: Fourth planet, often called the "Red Planet" due to its reddish appearance.
199
+ • Jupiter: Largest planet, a gas giant with many moons.
200
+ • Saturn: Known for its prominent ring system, also a gas giant.
201
+ • Uranus: An ice giant, known for its unique axial tilt.
202
+ • Neptune: Another ice giant, known for its deep blue color.
203
+ • Dwarf Planets:
204
+ • Pluto: Once considered the ninth planet, now classified as
205
+
206
+ 💬 >
207
+ ```
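Since 0.0.5, chat mode keeps a rolling history that is replayed on each request. A simplified sketch of the message-list construction (mirroring `build_data` in `yaicli.py`; the 25-exchange cap follows the source, function name is illustrative):

```python
def build_messages(system_prompt: str, history: list, prompt: str,
                   max_history_length: int = 25) -> list:
    """Build a chat-completion message list: system prompt, then trimmed
    history, then the current user turn."""
    # Keep at most max_history_length exchanges (each is a user +
    # assistant message pair)
    trimmed = history[-max_history_length * 2:]
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(trimmed)
    messages.append({"role": "user", "content": prompt})
    return messages

msgs = build_messages(
    "You are a helpful assistant.",
    [{"role": "user", "content": "hi"},
     {"role": "assistant", "content": "hello"}],
    "Tell me more",
)
print(len(msgs))  # -> 4
```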
208
+
209
+ ### Execute Mode Example
210
+
211
+ ```bash
212
+ $ ai --shell "Find all PDF files in my Downloads folder"
213
+
214
+ Generated command: find ~/Downloads -type f -name "*.pdf"
215
+ Execute this command? [y/n]: y
216
+
217
+ Executing command: find ~/Downloads -type f -name "*.pdf"
218
+
219
+ /Users/username/Downloads/document1.pdf
220
+ /Users/username/Downloads/report.pdf
221
+ ...
222
+ ```
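The model may wrap the generated command in a markdown code fence, so yaicli filters the reply before execution (`_filter_command`). A simplified version of that stripping:

```python
def strip_code_fence(text: str) -> str:
    """Remove a surrounding ``` code fence, if present (simplified from
    yaicli's _filter_command, which handles more cases)."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]           # drop opening fence (may carry a language tag)
    if lines and lines[-1].strip() == "```":
        lines = lines[:-1]          # drop closing fence
    # Keep only non-empty lines, stripped, as the source does
    return "\n".join(line.strip() for line in lines if line.strip())

print(strip_code_fence("```bash\nfind ~/Downloads -type f -name '*.pdf'\n```"))
# -> find ~/Downloads -type f -name '*.pdf'
```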
223
+
224
+ ## Technical Implementation
225
+
226
+ YAICLI is built using several Python libraries:
227
+
228
+ - **Typer**: Provides the command-line interface
229
+ - **Rich**: Provides terminal content formatting and beautiful display
230
+ - **prompt_toolkit**: Provides interactive command-line input experience
231
+ - **requests**: Handles API requests
232
+ - **jmespath**: Parses JSON responses
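A condensed sketch of the streaming path these libraries support, parsing OpenAI-style SSE lines into a full completion (the exact payload shape here is illustrative; yaicli renders the accumulating text live with Rich):

```python
import json

def parse_sse_chunks(lines) -> str:
    """Accumulate content deltas from OpenAI-style 'data: ...' stream lines
    (simplified version of yaicli's stream_response loop)."""
    full = ""
    for raw in lines:
        line = raw.strip()
        # Skip non-data lines and the end-of-stream sentinel
        if not line.startswith("data: ") or line == "data: [DONE]":
            continue
        payload = json.loads(line[len("data: "):])
        delta = payload["choices"][0].get("delta", {}).get("content")
        if delta:
            full += delta
    return full

chunks = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print(parse_sse_chunks(chunks))  # -> Hello
```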
233
+
234
+ ## Contributing
235
+
236
+ Contributions of code, issue reports, or feature suggestions are welcome.
237
+
238
+ ## License
239
+
240
+ [Apache License 2.0](LICENSE)
241
+
242
+ ---
243
+
244
+ *YAICLI - Making your terminal smarter*
@@ -0,0 +1,6 @@
1
+ yaicli.py,sha256=5yHKni6bIB2k6xzDAcdr-Xk7KNS7_QKmS9XcN1n9Nrk,21497
2
+ yaicli-0.0.5.dist-info/METADATA,sha256=Bx7PwYYH9J99pZIcrkgWhVE8hP3tiB6WGXZgjSw0qi4,9319
3
+ yaicli-0.0.5.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
4
+ yaicli-0.0.5.dist-info/entry_points.txt,sha256=gdduQwAuu_LeDqnDU81Fv3NPmD2tRQ1FffvolIP3S1Q,34
5
+ yaicli-0.0.5.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
6
+ yaicli-0.0.5.dist-info/RECORD,,
yaicli.py CHANGED
@@ -33,7 +33,13 @@ class CasePreservingConfigParser(configparser.RawConfigParser):
33
33
  return optionstr
34
34
 
35
35
 
36
- class ShellAI:
36
+ class YAICLI:
37
+ """Main class for YAICLI
38
+ Chat mode: interactive chat mode
39
+ One-shot mode, in two variants:
40
+ Temp mode: ask a question and get a response once
41
+ Execute mode: generate and execute shell commands
42
+ """
37
43
  # Configuration file path
38
44
  CONFIG_PATH = Path("~/.config/yaicli/config.ini").expanduser()
39
45
 
@@ -45,8 +51,8 @@ MODEL=gpt-4o
45
51
 
46
52
  # default run mode, default: temp
47
53
  # chat: interactive chat mode
48
- # exec: shell command generation mode
49
- # temp: one-shot mode
54
+ # exec: generate and execute shell commands once
55
+ # temp: ask a question and get a response once
50
56
  DEFAULT_MODE=temp
51
57
 
52
58
  # auto detect shell and os
@@ -69,6 +75,8 @@ STREAM=true"""
69
75
  self.session = PromptSession(key_bindings=self.bindings)
70
76
  self.current_mode = ModeEnum.CHAT.value
71
77
  self.config = {}
78
+ self.history = []
79
+ self.max_history_length = 25
72
80
 
73
81
  # Setup key bindings
74
82
  self._setup_key_bindings()
@@ -79,15 +87,22 @@ STREAM=true"""
79
87
  @self.bindings.add(Keys.ControlI) # Bind Ctrl+I to switch modes
80
88
  def _(event: KeyPressEvent):
81
89
  self.current_mode = (
82
- ModeEnum.CHAT.value
83
- if self.current_mode == ModeEnum.EXECUTE.value
84
- else ModeEnum.EXECUTE.value
90
+ ModeEnum.CHAT.value if self.current_mode == ModeEnum.EXECUTE.value else ModeEnum.EXECUTE.value
85
91
  )
86
92
 
87
- def detect_os(self):
88
- """Detect operating system"""
93
+ def clear_history(self):
94
+ """Clear chat history"""
95
+ self.history = []
96
+
97
+ def detect_os(self) -> str:
98
+ """Detect operating system
99
+ Returns:
100
+ str: operating system name
101
+ Raises:
102
+ typer.Exit: if there is an error with the request
103
+ """
89
104
  if self.config.get("OS_NAME") != "auto":
90
- return self.config.get("OS_NAME")
105
+ return self.config["OS_NAME"]
91
106
  current_platform = platform.system()
92
107
  if current_platform == "Linux":
93
108
  return "Linux/" + distro_name(pretty=True)
@@ -97,10 +112,15 @@ STREAM=true"""
97
112
  return "Darwin/MacOS " + platform.mac_ver()[0]
98
113
  return current_platform
99
114
 
100
- def detect_shell(self):
101
- """Detect shell"""
102
- if self.config.get("SHELL_NAME") != "auto":
103
- return self.config.get("SHELL_NAME")
115
+ def detect_shell(self) -> str:
116
+ """Detect shell
117
+ Returns:
118
+ str: shell name
119
+ Raises:
120
+ typer.Exit: if there is an error with the request
121
+ """
122
+ if self.config["SHELL_NAME"] != "auto":
123
+ return self.config["SHELL_NAME"]
104
124
  import platform
105
125
 
106
126
  current_platform = platform.system()
@@ -109,7 +129,13 @@ STREAM=true"""
109
129
  return "powershell.exe" if is_powershell else "cmd.exe"
110
130
  return basename(getenv("SHELL", "/bin/sh"))
111
131
 
112
- def build_cmd_prompt(self):
132
+ def build_cmd_prompt(self) -> str:
133
+ """Build command prompt
134
+ Returns:
135
+ str: command prompt
136
+ Raises:
137
+ typer.Exit: if there is an error with the request
138
+ """
113
139
  _os = self.detect_os()
114
140
  _shell = self.detect_shell()
115
141
  return f"""Your are a Shell Command Generator.
@@ -121,8 +147,13 @@ Rules:
121
147
  4. Chain multi-step commands in SINGLE LINE
122
148
  5. Return NOTHING except the ready-to-run command"""
123
149
 
124
- def build_default_prompt(self):
125
- """Build default prompt"""
150
+ def build_default_prompt(self) -> str:
151
+ """Build default prompt
152
+ Returns:
153
+ str: default prompt
154
+ Raises:
155
+ typer.Exit: if there is an error with the request
156
+ """
126
157
  _os = self.detect_os()
127
158
  _shell = self.detect_shell()
128
159
  return (
@@ -132,8 +163,13 @@ Rules:
132
163
  "unless the user explicitly requests more details."
133
164
  )
134
165
 
135
- def get_default_config(self):
136
- """Get default configuration"""
166
+ def get_default_config(self) -> dict[str, str]:
167
+ """Get default configuration
168
+ Returns:
169
+ dict: default configuration
170
+ Raises:
171
+ typer.Exit: if there is an error with the request
172
+ """
137
173
  config = CasePreservingConfigParser()
138
174
  try:
139
175
  config.read_string(self.DEFAULT_CONFIG_INI)
@@ -144,8 +180,13 @@ Rules:
144
180
  self.console.print(f"[red]Error parsing config: {e}[/red]")
145
181
  raise typer.Exit(code=1) from None
146
182
 
147
- def load_config(self):
148
- """Load LLM API configuration"""
183
+ def load_config(self) -> dict[str, str]:
184
+ """Load LLM API configuration
185
+ Returns:
186
+ dict: configuration
187
+ Raises:
188
+ typer.Exit: if the configuration cannot be parsed
189
+ """
149
190
  if not self.CONFIG_PATH.exists():
150
191
  self.console.print(
151
192
  "[bold yellow]Configuration file not found. Creating default configuration file.[/bold yellow]"
@@ -160,14 +201,28 @@ Rules:
160
201
  self.config["STREAM"] = str(self.config.get("STREAM", "true")).lower()
161
202
  return self.config
162
203
 
163
- def _call_api(self, url, headers, data):
164
- """Generic API call method"""
204
+ def _call_api(self, url: str, headers: dict, data: dict) -> requests.Response:
205
+ """Call the API and return the response.
206
+ Args:
207
+ url: API endpoint URL
208
+ headers: request headers
209
+ data: request data
210
+ Returns:
211
+ requests.Response: response object
212
+ Raises:
213
+ requests.exceptions.RequestException: if there is an error with the request
214
+ """
165
215
  response = requests.post(url, headers=headers, json=data)
166
216
  response.raise_for_status() # Raise an exception for non-200 status codes
167
217
  return response
168
218
 
169
- def get_llm_url(self) -> Optional[str]:
170
- """Get LLM API URL"""
219
+ def get_llm_url(self) -> str:
220
+ """Get LLM API URL
221
+ Returns:
222
+ str: LLM API URL
223
+ Raises:
224
+ typer.Exit: if API key or base URL is not set
225
+ """
171
226
  base = self.config.get("BASE_URL", "").rstrip("/")
172
227
  if not base:
173
228
  self.console.print(
@@ -183,25 +238,44 @@ Rules:
183
238
  return f"{base}/{COMPLETION_PATH}"
184
239
 
185
240
  def build_data(self, prompt: str, mode: str = ModeEnum.TEMP.value) -> dict:
186
- """Build request data"""
241
+ """Build request data
242
+ Args:
243
+ prompt: user input
244
+ mode: chat or execute mode
245
+ Returns:
246
+ dict: request data
247
+ """
187
248
  if mode == ModeEnum.EXECUTE.value:
188
249
  system_prompt = self.build_cmd_prompt()
189
250
  else:
190
251
  system_prompt = self.build_default_prompt()
252
+
253
+ # Build messages list, first add system prompt
254
+ messages = [{"role": "system", "content": system_prompt}]
255
+
256
+ # Add history records in chat mode
257
+ if mode == ModeEnum.CHAT.value and self.history:
258
+ messages.extend(self.history)
259
+
260
+ # Add current user message
261
+ messages.append({"role": "user", "content": prompt})
262
+
191
263
  return {
192
264
  "model": self.config["MODEL"],
193
- "messages": [
194
- {"role": "system", "content": system_prompt},
195
- {"role": "user", "content": prompt},
196
- ],
265
+ "messages": messages,
197
266
  "stream": self.config.get("STREAM", "true") == "true",
198
267
  "temperature": 0.7,
199
268
  "top_p": 0.7,
200
269
  "max_tokens": 200,
201
270
  }
202
271
 
203
- def stream_response(self, response):
204
- """Stream response from LLM API"""
272
+ def stream_response(self, response: requests.Response) -> str:
273
+ """Stream response from LLM API
274
+ Args:
275
+ response: requests.Response object
276
+ Returns:
277
+ str: full completion text
278
+ """
205
279
  full_completion = ""
206
280
  # Streaming response loop
207
281
  with Live(console=self.console) as live:
@@ -225,8 +299,15 @@ Rules:
225
299
  self.console.print(f"[red]Error decoding JSON: {decoded_line}[/red]")
226
300
  time.sleep(0.05)
227
301
 
228
- def call_llm_api(self, prompt: str):
229
- """Call LLM API, return streaming output"""
302
+ return full_completion
303
+
304
+ def call_llm_api(self, prompt: str) -> str:
305
+ """Call LLM API, return streaming output
306
+ Args:
307
+ prompt: user input
308
+ Returns:
309
+ str: streaming output
310
+ """
230
311
  url = self.get_llm_url()
231
312
  headers = {"Authorization": f"Bearer {self.config['API_KEY']}"}
232
313
  data = self.build_data(prompt)
@@ -241,11 +322,18 @@ Rules:
241
322
  raise typer.Exit(code=1)
242
323
 
243
324
  self.console.print("\n[bold green]Assistant:[/bold green]")
244
- self.stream_response(response) # Stream the response
325
+ assistant_response = self.stream_response(response) # Stream the response and get the full text
245
326
  self.console.print() # Add a newline after the completion
246
327
 
247
- def get_command_from_llm(self, prompt):
248
- """Request Shell command from LLM"""
328
+ return assistant_response
329
+
330
+ def get_command_from_llm(self, prompt: str) -> Optional[str]:
331
+ """Request Shell command from LLM
332
+ Args:
333
+ prompt: user input
334
+ Returns:
335
+ str: shell command
336
+ """
249
337
  url = self.get_llm_url()
250
338
  headers = {"Authorization": f"Bearer {self.config['API_KEY']}"}
251
339
  data = self.build_data(prompt, mode=ModeEnum.EXECUTE.value)
@@ -267,17 +355,24 @@ Rules:
267
355
  content = jmespath.search(ANSWER_PATH, response.json())
268
356
  return content.strip()
269
357
 
270
- def execute_shell_command(self, command):
271
- """Execute shell command"""
358
+ def execute_shell_command(self, command: str) -> int:
359
+ """Execute shell command
360
+ Args:
361
+ command: shell command
362
+ Returns:
363
+ int: return code
364
+ """
272
365
  self.console.print(f"\n[bold green]Executing command: [/bold green] {command}\n")
273
366
  result = subprocess.run(command, shell=True)
274
367
  if result.returncode != 0:
275
- self.console.print(
276
- f"\n[bold red]Command failed with return code: {result.returncode}[/bold red]"
277
- )
368
+ self.console.print(f"\n[bold red]Command failed with return code: {result.returncode}[/bold red]")
369
+ return result.returncode
278
370
 
279
371
  def get_prompt_tokens(self):
280
- """Get prompt tokens based on current mode"""
372
+ """Get prompt tokens based on current mode
373
+ Returns:
374
+ list: prompt tokens for prompt_toolkit
375
+ """
281
376
  if self.current_mode == ModeEnum.CHAT.value:
282
377
  qmark = "💬"
283
378
  elif self.current_mode == ModeEnum.EXECUTE.value:
@@ -287,14 +382,35 @@ Rules:
287
382
  return [("class:qmark", qmark), ("class:question", " {} ".format(">"))]
288
383
 
289
384
  def chat_mode(self, user_input: str):
290
- """Interactive chat mode"""
385
+ """
386
+ This method handles the chat mode.
387
+ It adds the user input to the history and calls the API to get a response.
388
+ It then adds the response to the history and manages the history length.
389
+ Args:
390
+ user_input: user input
391
+ Returns:
392
+ ModeEnum: current mode
393
+ """
291
394
  if self.current_mode != ModeEnum.CHAT.value:
292
395
  return self.current_mode
293
396
 
294
- self.call_llm_api(user_input)
397
+ # Add user message to history
398
+ self.history.append({"role": "user", "content": user_input})
399
+
400
+ # Call API and get response
401
+ assistant_response = self.call_llm_api(user_input)
402
+
403
+ # Add assistant response to history
404
+ if assistant_response:
405
+ self.history.append({"role": "assistant", "content": assistant_response})
406
+
407
+ # Manage history length, keep recent conversations
408
+ if len(self.history) > self.max_history_length * 2: # Each conversation has user and assistant messages
409
+ self.history = self.history[-self.max_history_length * 2 :]
410
+
295
411
  return ModeEnum.CHAT.value
296
412
 
297
- def _filter_command(self, command):
413
+ def _filter_command(self, command: str) -> Optional[str]:
298
414
  """Filter out unwanted characters from command
299
415
 
300
416
  The LLM may return commands in markdown format with code blocks.
@@ -339,11 +455,21 @@ Rules:
339
455
  return "\n".join(line.strip() for line in content_lines if line.strip())
340
456
 
341
457
  def execute_mode(self, user_input: str):
342
- """Execute mode"""
458
+ """
459
+ This method generates a shell command from the user input and executes it.
460
+ If the user confirms the command, it is executed.
461
+ Args:
462
+ user_input: user input
463
+ Returns:
464
+ ModeEnum: current mode
465
+ """
343
466
  if user_input == "" or self.current_mode != ModeEnum.EXECUTE.value:
344
467
  return self.current_mode
345
468
 
346
469
  command = self.get_command_from_llm(user_input)
470
+ if not command:
471
+ self.console.print("[bold red]No command generated[/bold red]")
472
+ return self.current_mode
347
473
  command = self._filter_command(command)
348
474
  if not command:
349
475
  self.console.print("[bold red]No command generated[/bold red]")
@@ -361,31 +487,48 @@ Rules:
361
487
  if not user_input.strip():
362
488
  continue
363
489
 
364
- if user_input.lower() in ("exit", "quit"):
490
+ if user_input.lower() in ("/exit", "/quit", "/q"):
365
491
  break
366
492
 
493
+ # Handle special commands
367
494
  if self.current_mode == ModeEnum.CHAT.value:
368
- self.chat_mode(user_input)
495
+ if user_input.lower() == "/clear":
496
+ self.clear_history()
497
+ self.console.print("[bold yellow]Chat history cleared[/bold yellow]\n")
498
+ continue
499
+ else:
500
+ self.chat_mode(user_input)
369
501
  elif self.current_mode == ModeEnum.EXECUTE.value:
370
502
  self.execute_mode(user_input)
371
503
 
372
504
  self.console.print("[bold green]Exiting...[/bold green]")
373
505
 
374
506
  def run_one_shot(self, prompt: str):
375
- """Run one-shot mode with given prompt"""
507
+ """Run one-shot mode with given prompt
508
+ Args:
509
+ prompt (str): Prompt to send to LLM
510
+ Returns:
511
+ None
512
+ """
376
513
  if self.current_mode == ModeEnum.EXECUTE.value:
377
514
  self.execute_mode(prompt) # Execute mode for one-shot prompt
378
515
  else:
379
516
  self.call_llm_api(prompt)
380
517
 
381
518
  def run(self, chat=False, shell=False, prompt: Optional[str] = None):
382
- """Run the CLI application"""
519
+ """Run the CLI application
520
+ Args:
521
+ chat (bool): Whether to run in chat mode
522
+ shell (bool): Whether to run in shell mode
523
+ prompt (Optional[str]): Prompt to send to LLM
524
+
525
+ Returns:
526
+ None
527
+ """
383
528
  # Load configuration
384
529
  self.config = self.load_config()
385
530
  if not self.config.get("API_KEY", None):
386
- self.console.print(
387
- "[red]API key not found. Please set it in the configuration file.[/red]"
388
- )
531
+ self.console.print("[red]API key not found. Please set it in the configuration file.[/red]")
389
532
  return
390
533
 
391
534
  # Set initial mode
@@ -418,10 +561,10 @@ CONTEXT_SETTINGS = {
418
561
  }
419
562
 
420
563
  app = typer.Typer(
421
- name="ShellAI",
564
+ name="yaicli",
422
565
  context_settings=CONTEXT_SETTINGS,
423
566
  pretty_exceptions_enable=False,
424
- short_help="ShellAI Command Line Tool",
567
+ short_help="yaicli. Your AI interface in cli.",
425
568
  no_args_is_help=True,
426
569
  invoke_without_command=True,
427
570
  )
@@ -429,17 +572,18 @@ app = typer.Typer(
429
572
 
430
573
  @app.command()
431
574
  def main(
432
- prompt: Annotated[str, typer.Argument(show_default=False, help="The prompt send to the LLM")],
433
- verbose: Annotated[
434
- bool, typer.Option("--verbose", "-V", help="Show verbose information")
435
- ] = False,
575
+ ctx: typer.Context,
576
+ prompt: Annotated[str, typer.Argument(show_default=False, help="The prompt to send to the LLM")] = "",
577
+ verbose: Annotated[bool, typer.Option("--verbose", "-V", help="Show verbose information")] = False,
436
578
  chat: Annotated[bool, typer.Option("--chat", "-c", help="Start in chat mode")] = False,
437
- shell: Annotated[
438
- bool, typer.Option("--shell", "-s", help="Generate and execute shell command")
439
- ] = False,
579
+ shell: Annotated[bool, typer.Option("--shell", "-s", help="Generate and execute shell command")] = False,
440
580
  ):
441
- """LLM CLI Tool"""
442
- cli = ShellAI(verbose=verbose)
581
+ """yaicli. Your AI interface in cli."""
582
+ if not prompt and not chat:
583
+ typer.echo(ctx.get_help())
584
+ raise typer.Exit()
585
+
586
+ cli = YAICLI(verbose=verbose)
443
587
  cli.run(chat=chat, shell=shell, prompt=prompt)
444
588
 
445
589
 
@@ -1,15 +0,0 @@
1
- Metadata-Version: 2.4
2
- Name: yaicli
3
- Version: 0.0.3
4
- Summary: A simple CLI tool to interact with LLM
5
- License-File: LICENSE
6
- Requires-Python: >=3.8
7
- Requires-Dist: distro>=1.9.0
8
- Requires-Dist: jmespath>=1.0.1
9
- Requires-Dist: prompt-toolkit>=3.0.50
10
- Requires-Dist: requests>=2.32.3
11
- Requires-Dist: rich>=13.9.4
12
- Requires-Dist: typer>=0.15.2
13
- Description-Content-Type: text/markdown
14
-
15
- # llmcli
@@ -1,6 +0,0 @@
1
- yaicli.py,sha256=4c-cGPzZlUPdqv8uZrWmiimlCp8Q8S9UH7jKHRWYI8U,16709
2
- yaicli-0.0.3.dist-info/METADATA,sha256=3C60zYZfELyBiKNPqXyPKKqAh8cKe7FG42oc_Esr01Y,379
3
- yaicli-0.0.3.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
4
- yaicli-0.0.3.dist-info/entry_points.txt,sha256=gdduQwAuu_LeDqnDU81Fv3NPmD2tRQ1FffvolIP3S1Q,34
5
- yaicli-0.0.3.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
6
- yaicli-0.0.3.dist-info/RECORD,,
File without changes