yaicli 0.0.4__py3-none-any.whl → 0.0.5__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,244 @@
+ Metadata-Version: 2.4
+ Name: yaicli
+ Version: 0.0.5
+ Summary: A simple CLI tool to interact with LLM
+ License-File: LICENSE
+ Requires-Python: >=3.9
+ Requires-Dist: distro>=1.9.0
+ Requires-Dist: jmespath>=1.0.1
+ Requires-Dist: prompt-toolkit>=3.0.50
+ Requires-Dist: requests>=2.32.3
+ Requires-Dist: rich>=13.9.4
+ Requires-Dist: typer>=0.15.2
+ Description-Content-Type: text/markdown
+
+ # YAICLI - Your AI Command Line Interface
+
+ YAICLI is a powerful command-line AI assistant tool that enables you to interact with Large Language Models (LLMs) through your terminal. It offers multiple operation modes: everyday conversation, shell command generation and execution, and quick one-shot queries.
+
+ ## Features
+
+ - **Multiple Operation Modes**:
+   - **Chat Mode (💬)**: Interactive conversation with the AI assistant
+   - **Execute Mode (🚀)**: Generate and execute shell commands specific to your OS and shell
+   - **Temp Mode**: Quick queries without entering interactive mode
+
+ - **Smart Environment Detection**:
+   - Automatically detects your operating system and shell
+   - Customizes responses and commands for your specific environment
+
+ - **Rich Terminal Interface**:
+   - Markdown rendering for formatted responses
+   - Streaming responses for real-time feedback
+   - Color-coded output for better readability
+
+ - **Configurable**:
+   - Customizable API endpoints
+   - Support for different LLM providers
+   - Adjustable response parameters
+
+ - **Keyboard Shortcuts**:
+   - Tab to switch between Chat and Execute modes
+
+ ## Installation
+
+ ### Prerequisites
+
+ - Python 3.9 or higher
+ - pip (Python package manager)
+
+ ### Install from PyPI
+
+ ```bash
+ # Install with pip
+ pip install yaicli
+
+ # Install with pipx
+ pipx install yaicli
+
+ # Install with uv
+ uv tool install yaicli
+ ```
+
+ ### Install from Source
+
+ ```bash
+ git clone https://github.com/yourusername/yaicli.git
+ cd yaicli
+ pip install .
+ ```
+
+ ## Configuration
+
+ On first run, YAICLI creates a default configuration file at `~/.config/yaicli/config.ini`. You'll need to edit this file to add your API key and customize other settings.
+
+ Simply run `ai` once and it will create the config file for you; then edit it to add your API key.
+
+ ### Configuration File
+
+ ```ini
+ [core]
+ BASE_URL=https://api.openai.com/v1
+ API_KEY=your_api_key_here
+ MODEL=gpt-4o
+
+ # default run mode, default: temp
+ # chat: interactive chat mode
+ # exec: shell command generation mode
+ # temp: one-shot mode
+ DEFAULT_MODE=temp
+
+ # auto detect shell and os
+ SHELL_NAME=auto
+ OS_NAME=auto
+
+ # if you want to use custom completions path, you can set it here
+ COMPLETION_PATH=/chat/completions
+ # if you want to use custom answer path, you can set it here
+ ANSWER_PATH=choices[0].message.content
+
+ # true: streaming response
+ # false: non-streaming response
+ STREAM=true
+ ```
+
+ ### Configuration Options
+
+ - **BASE_URL**: API endpoint URL (default: OpenAI API)
+ - **API_KEY**: Your API key for the LLM provider
+ - **MODEL**: The model to use (e.g., gpt-4o, gpt-3.5-turbo), default: gpt-4o
+ - **DEFAULT_MODE**: Default operation mode (chat, exec, or temp), default: temp
+ - **SHELL_NAME**: Shell to use (auto for automatic detection), default: auto
+ - **OS_NAME**: OS to use (auto for automatic detection), default: auto
+ - **COMPLETION_PATH**: Path of the completions endpoint, default: /chat/completions
+ - **ANSWER_PATH**: JMESPath expression used to extract the answer from the response, default: choices[0].message.content
+ - **STREAM**: Enable/disable streaming responses, default: true
+
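+ For reference, `COMPLETION_PATH` and `ANSWER_PATH` work together roughly as follows. This is a minimal sketch of the request/extraction flow; the endpoint URL and response shape follow the OpenAI-compatible defaults above and are assumptions, not an excerpt of yaicli itself:
+
+ ```python
+ import jmespath
+ import requests
+
+ BASE_URL = "https://api.openai.com/v1"      # config: BASE_URL
+ COMPLETION_PATH = "/chat/completions"       # config: COMPLETION_PATH
+ ANSWER_PATH = "choices[0].message.content"  # config: ANSWER_PATH
+
+ # POST the chat payload to BASE_URL + COMPLETION_PATH ...
+ url = BASE_URL.rstrip("/") + "/" + COMPLETION_PATH.lstrip("/")
+ response = requests.post(
+     url,
+     headers={"Authorization": "Bearer your_api_key_here"},
+     json={"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
+ )
+ response.raise_for_status()
+
+ # ... then pull the answer out of the JSON body with a JMESPath expression.
+ print(jmespath.search(ANSWER_PATH, response.json()))
+ ```
+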
+ ## Usage
+
+ ### Basic Usage
+
+ ```bash
+ # One-shot mode
+ ai "What is the capital of France?"
+
+ # Chat mode
+ ai --chat
+
+ # Shell command generation mode
+ ai --shell "Create a backup of my Documents folder"
+
+ # Verbose mode for debugging
+ ai --verbose "Explain quantum computing"
+ ```
+
+ ### Command Line Options
+
+ - `<PROMPT>`: The prompt to send to the LLM (positional argument)
+ - `--verbose` or `-V`: Show verbose information
+ - `--chat` or `-c`: Start in chat mode
+ - `--shell` or `-s`: Generate and execute a shell command
+ - `--install-completion`: Install completion for the current shell
+ - `--show-completion`: Show completion for the current shell, to copy it or customize the installation
+ - `--help` or `-h`: Show the help message and exit
+
+ ```bash
+ ai -h
+
+ Usage: ai [OPTIONS] [PROMPT]
+
+ yaicli. Your AI interface in cli.
+
+ ╭─ Arguments ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
+ │ prompt [PROMPT] The prompt send to the LLM │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
+ ╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
+ │ --verbose -V Show verbose information │
+ │ --chat -c Start in chat mode │
+ │ --shell -s Generate and execute shell command │
+ │ --install-completion Install completion for the current shell. │
+ │ --show-completion Show completion for the current shell, to copy it or customize the installation. │
+ │ --help -h Show this message and exit. │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
+ ```
+
+ ### Interactive Mode
+
+ In interactive mode (chat or shell), you can:
+ - Type your queries and get responses
+ - Use `Tab` to switch between Chat and Execute modes
+ - Type `/clear` in chat mode to clear the conversation history
+ - Type `/exit`, `/quit`, or `/q` to exit
+
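+ Under the hood, chat mode keeps a bounded conversation history (25 turns by default in 0.0.5). A minimal sketch of the trimming policy, mirroring the logic in `yaicli.py`:
+
+ ```python
+ # Each turn appends two messages: one "user" and one "assistant".
+ history: list[dict] = []
+ max_history_length = 25  # turns kept, as in YAICLI 0.0.5
+
+ def remember(user_msg: str, assistant_msg: str) -> None:
+     history.append({"role": "user", "content": user_msg})
+     history.append({"role": "assistant", "content": assistant_msg})
+     # Keep only the most recent turns (two messages per turn).
+     if len(history) > max_history_length * 2:
+         history[:] = history[-max_history_length * 2 :]
+ ```
+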
+ ### Shell Command Generation
+
+ In Execute mode:
+ 1. Enter your request in natural language
+ 2. YAICLI will generate an appropriate shell command
+ 3. Review the command
+ 4. Confirm to execute or reject it (see the sketch below)
+
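+ The review-then-run loop boils down to a confirmation gate in front of `subprocess.run`. A simplified sketch; the `typer.confirm` prompt is an assumption and the shipped code may word it differently:
+
+ ```python
+ import subprocess
+ import typer
+
+ def run_generated_command(command: str) -> int:
+     """Show the generated command and execute it only on confirmation."""
+     typer.echo(f"Generated command: {command}")
+     if not typer.confirm("Execute this command?"):
+         return 0  # rejected: nothing to run
+     # Matches yaicli's execution path: shell=True, return code reported.
+     result = subprocess.run(command, shell=True)
+     return result.returncode
+ ```
+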
+ ## Examples
+
+ ### Chat Mode Example
+
+ ```bash
+ $ ai --chat
+ 💬 > Tell me about the solar system
+
+ Assistant:
+ Certainly! Here’s a brief overview of the solar system:
+
+ • Sun: The central star of the solar system, providing light and energy.
+ • Planets:
+   • Mercury: Closest to the Sun, smallest planet.
+   • Venus: Second planet, known for its thick atmosphere and high surface temperature.
+   • Earth: Third planet, the only known planet to support life.
+   • Mars: Fourth planet, often called the "Red Planet" due to its reddish appearance.
+   • Jupiter: Largest planet, a gas giant with many moons.
+   • Saturn: Known for its prominent ring system, also a gas giant.
+   • Uranus: An ice giant, known for its unique axial tilt.
+   • Neptune: Another ice giant, known for its deep blue color.
+ • Dwarf Planets:
+   • Pluto: Once considered the ninth planet, now classified as
+
+ 💬 >
+ ```
+
+ ### Execute Mode Example
+
+ ```bash
+ $ ai --shell "Find all PDF files in my Downloads folder"
+
+ Generated command: find ~/Downloads -type f -name "*.pdf"
+ Execute this command? [y/n]: y
+
+ Executing command: find ~/Downloads -type f -name "*.pdf"
+
+ /Users/username/Downloads/document1.pdf
+ /Users/username/Downloads/report.pdf
+ ...
+ ```
+
+ ## Technical Implementation
+
+ YAICLI is built using several Python libraries:
+
+ - **Typer**: Provides the command-line interface
+ - **Rich**: Provides terminal content formatting and beautiful display
+ - **prompt_toolkit**: Provides the interactive command-line input experience
+ - **requests**: Handles API requests
+ - **jmespath**: Parses JSON responses
+
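+ When `STREAM=true`, the completion arrives as server-sent events that yaicli renders incrementally with Rich's `Live`. A rough sketch of such a parsing loop; the `data:` framing and the `choices[0].delta.content` path follow the OpenAI-style streaming format and are assumptions, not an excerpt of yaicli:
+
+ ```python
+ import json
+ import requests
+
+ def stream_completion(url: str, headers: dict, data: dict) -> str:
+     """Accumulate streamed delta chunks into the full completion text."""
+     full_completion = ""
+     with requests.post(url, headers=headers, json=data, stream=True) as response:
+         response.raise_for_status()
+         for raw_line in response.iter_lines():
+             if not raw_line:
+                 continue
+             decoded_line = raw_line.decode("utf-8")
+             if not decoded_line.startswith("data: "):
+                 continue
+             payload = decoded_line[len("data: "):]
+             if payload == "[DONE]":
+                 break
+             try:
+                 chunk = json.loads(payload)
+             except json.JSONDecodeError:
+                 continue  # skip malformed lines
+             full_completion += chunk["choices"][0]["delta"].get("content") or ""
+     return full_completion
+ ```
+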
+ ## Contributing
+
+ Contributions of code, issue reports, and feature suggestions are welcome.
+
+ ## License
+
+ [Apache License 2.0](LICENSE)
+
+ ---
+
+ *YAICLI - Making your terminal smarter*
@@ -0,0 +1,6 @@
+ yaicli.py,sha256=5yHKni6bIB2k6xzDAcdr-Xk7KNS7_QKmS9XcN1n9Nrk,21497
+ yaicli-0.0.5.dist-info/METADATA,sha256=Bx7PwYYH9J99pZIcrkgWhVE8hP3tiB6WGXZgjSw0qi4,9319
+ yaicli-0.0.5.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ yaicli-0.0.5.dist-info/entry_points.txt,sha256=gdduQwAuu_LeDqnDU81Fv3NPmD2tRQ1FffvolIP3S1Q,34
+ yaicli-0.0.5.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+ yaicli-0.0.5.dist-info/RECORD,,
yaicli.py CHANGED
@@ -33,7 +33,13 @@ class CasePreservingConfigParser(configparser.RawConfigParser):
          return optionstr


- class ShellAI:
+ class YAICLI:
+     """Main class for YAICLI
+     Chat mode: interactive chat mode
+     One-shot mode:
+     Temp mode: ask a question and get a response once
+     Execute mode: generate and execute shell commands
+     """
      # Configuration file path
      CONFIG_PATH = Path("~/.config/yaicli/config.ini").expanduser()

@@ -45,8 +51,8 @@ MODEL=gpt-4o

  # default run mode, default: temp
  # chat: interactive chat mode
- # exec: shell command generation mode
- # temp: one-shot mode
+ # exec: generate and execute shell commands once
+ # temp: ask a question and get a response once
  DEFAULT_MODE=temp

  # auto detect shell and os
@@ -69,6 +75,8 @@ STREAM=true"""
          self.session = PromptSession(key_bindings=self.bindings)
          self.current_mode = ModeEnum.CHAT.value
          self.config = {}
+         self.history = []
+         self.max_history_length = 25

          # Setup key bindings
          self._setup_key_bindings()
@@ -82,10 +90,19 @@ STREAM=true"""
              ModeEnum.CHAT.value if self.current_mode == ModeEnum.EXECUTE.value else ModeEnum.EXECUTE.value
          )

-     def detect_os(self):
-         """Detect operating system"""
+     def clear_history(self):
+         """Clear chat history"""
+         self.history = []
+
+     def detect_os(self) -> str:
+         """Detect operating system
+         Returns:
+             str: operating system name
+         """
          if self.config.get("OS_NAME") != "auto":
-             return self.config.get("OS_NAME")
+             return self.config["OS_NAME"]
          current_platform = platform.system()
          if current_platform == "Linux":
              return "Linux/" + distro_name(pretty=True)
@@ -95,10 +112,15 @@ STREAM=true"""
              return "Darwin/MacOS " + platform.mac_ver()[0]
          return current_platform

-     def detect_shell(self):
-         """Detect shell"""
-         if self.config.get("SHELL_NAME") != "auto":
-             return self.config.get("SHELL_NAME")
+     def detect_shell(self) -> str:
+         """Detect shell
+         Returns:
+             str: shell name
+         """
+         if self.config["SHELL_NAME"] != "auto":
+             return self.config["SHELL_NAME"]
          import platform

          current_platform = platform.system()
@@ -107,7 +129,13 @@ STREAM=true"""
              return "powershell.exe" if is_powershell else "cmd.exe"
          return basename(getenv("SHELL", "/bin/sh"))

-     def build_cmd_prompt(self):
+     def build_cmd_prompt(self) -> str:
+         """Build command prompt
+         Returns:
+             str: command prompt
+         """
          _os = self.detect_os()
          _shell = self.detect_shell()
          return f"""You are a Shell Command Generator.
@@ -119,8 +147,13 @@ Rules:
  4. Chain multi-step commands in SINGLE LINE
  5. Return NOTHING except the ready-to-run command"""

-     def build_default_prompt(self):
-         """Build default prompt"""
+     def build_default_prompt(self) -> str:
+         """Build default prompt
+         Returns:
+             str: default prompt
+         """
          _os = self.detect_os()
          _shell = self.detect_shell()
          return (
@@ -130,8 +163,13 @@ Rules:
              "unless the user explicitly requests more details."
          )

-     def get_default_config(self):
-         """Get default configuration"""
+     def get_default_config(self) -> dict[str, str]:
+         """Get default configuration
+         Returns:
+             dict: default configuration
+         Raises:
+             typer.Exit: if the default configuration cannot be parsed
+         """
          config = CasePreservingConfigParser()
          try:
              config.read_string(self.DEFAULT_CONFIG_INI)
@@ -142,8 +180,13 @@ Rules:
              self.console.print(f"[red]Error parsing config: {e}[/red]")
              raise typer.Exit(code=1) from None

-     def load_config(self):
-         """Load LLM API configuration"""
+     def load_config(self) -> dict[str, str]:
+         """Load LLM API configuration
+         Returns:
+             dict: configuration
+         Raises:
+             typer.Exit: if the configuration cannot be parsed
+         """
          if not self.CONFIG_PATH.exists():
              self.console.print(
                  "[bold yellow]Configuration file not found. Creating default configuration file.[/bold yellow]"
@@ -158,14 +201,28 @@ Rules:
          self.config["STREAM"] = str(self.config.get("STREAM", "true")).lower()
          return self.config

-     def _call_api(self, url, headers, data):
-         """Generic API call method"""
+     def _call_api(self, url: str, headers: dict, data: dict) -> requests.Response:
+         """Call the API and return the response.
+         Args:
+             url: API endpoint URL
+             headers: request headers
+             data: request data
+         Returns:
+             requests.Response: response object
+         Raises:
+             requests.exceptions.RequestException: if there is an error with the request
+         """
          response = requests.post(url, headers=headers, json=data)
          response.raise_for_status()  # Raise an exception for non-200 status codes
          return response

-     def get_llm_url(self) -> Optional[str]:
-         """Get LLM API URL"""
+     def get_llm_url(self) -> str:
+         """Get LLM API URL
+         Returns:
+             str: LLM API URL
+         Raises:
+             typer.Exit: if API key or base URL is not set
+         """
          base = self.config.get("BASE_URL", "").rstrip("/")
          if not base:
              self.console.print(
@@ -181,25 +238,44 @@ Rules:
          return f"{base}/{COMPLETION_PATH}"

      def build_data(self, prompt: str, mode: str = ModeEnum.TEMP.value) -> dict:
-         """Build request data"""
+         """Build request data
+         Args:
+             prompt: user input
+             mode: chat or execute mode
+         Returns:
+             dict: request data
+         """
          if mode == ModeEnum.EXECUTE.value:
              system_prompt = self.build_cmd_prompt()
          else:
              system_prompt = self.build_default_prompt()
+
+         # Build messages list, first add system prompt
+         messages = [{"role": "system", "content": system_prompt}]
+
+         # Add history records in chat mode
+         if mode == ModeEnum.CHAT.value and self.history:
+             messages.extend(self.history)
+
+         # Add current user message
+         messages.append({"role": "user", "content": prompt})
+
          return {
              "model": self.config["MODEL"],
-             "messages": [
-                 {"role": "system", "content": system_prompt},
-                 {"role": "user", "content": prompt},
-             ],
+             "messages": messages,
              "stream": self.config.get("STREAM", "true") == "true",
              "temperature": 0.7,
              "top_p": 0.7,
              "max_tokens": 200,
          }

-     def stream_response(self, response):
-         """Stream response from LLM API"""
+     def stream_response(self, response: requests.Response) -> str:
+         """Stream response from LLM API
+         Args:
+             response: requests.Response object
+         Returns:
+             str: full completion text
+         """
          full_completion = ""
          # Streaming response loop
          with Live(console=self.console) as live:
@@ -223,8 +299,15 @@ Rules:
                          self.console.print(f"[red]Error decoding JSON: {decoded_line}[/red]")
              time.sleep(0.05)

-     def call_llm_api(self, prompt: str):
-         """Call LLM API, return streaming output"""
+         return full_completion
+
+     def call_llm_api(self, prompt: str) -> str:
+         """Call LLM API and stream the output
+         Args:
+             prompt: user input
+         Returns:
+             str: full assistant response text
+         """
          url = self.get_llm_url()
          headers = {"Authorization": f"Bearer {self.config['API_KEY']}"}
          data = self.build_data(prompt)
@@ -239,11 +322,18 @@ Rules:
              raise typer.Exit(code=1)

          self.console.print("\n[bold green]Assistant:[/bold green]")
-         self.stream_response(response)  # Stream the response
+         assistant_response = self.stream_response(response)  # Stream the response and get the full text
          self.console.print()  # Add a newline after the completion

-     def get_command_from_llm(self, prompt):
-         """Request Shell command from LLM"""
+         return assistant_response
+
+     def get_command_from_llm(self, prompt: str) -> Optional[str]:
+         """Request Shell command from LLM
+         Args:
+             prompt: user input
+         Returns:
+             str: shell command
+         """
          url = self.get_llm_url()
          headers = {"Authorization": f"Bearer {self.config['API_KEY']}"}
          data = self.build_data(prompt, mode=ModeEnum.EXECUTE.value)
@@ -265,15 +355,24 @@ Rules:
          content = jmespath.search(ANSWER_PATH, response.json())
          return content.strip()

-     def execute_shell_command(self, command):
-         """Execute shell command"""
+     def execute_shell_command(self, command: str) -> int:
+         """Execute shell command
+         Args:
+             command: shell command
+         Returns:
+             int: return code
+         """
          self.console.print(f"\n[bold green]Executing command: [/bold green] {command}\n")
          result = subprocess.run(command, shell=True)
          if result.returncode != 0:
              self.console.print(f"\n[bold red]Command failed with return code: {result.returncode}[/bold red]")
+         return result.returncode

      def get_prompt_tokens(self):
-         """Get prompt tokens based on current mode"""
+         """Get prompt tokens based on current mode
+         Returns:
+             list: prompt tokens for prompt_toolkit
+         """
          if self.current_mode == ModeEnum.CHAT.value:
              qmark = "💬"
          elif self.current_mode == ModeEnum.EXECUTE.value:
@@ -283,14 +382,35 @@ Rules:
          return [("class:qmark", qmark), ("class:question", " {} ".format(">"))]

      def chat_mode(self, user_input: str):
-         """Interactive chat mode"""
+         """
+         This method handles the chat mode.
+         It adds the user input to the history and calls the API to get a response.
+         It then adds the response to the history and manages the history length.
+         Args:
+             user_input: user input
+         Returns:
+             ModeEnum: current mode
+         """
          if self.current_mode != ModeEnum.CHAT.value:
              return self.current_mode

-         self.call_llm_api(user_input)
+         # Add user message to history
+         self.history.append({"role": "user", "content": user_input})
+
+         # Call API and get response
+         assistant_response = self.call_llm_api(user_input)
+
+         # Add assistant response to history
+         if assistant_response:
+             self.history.append({"role": "assistant", "content": assistant_response})
+
+         # Manage history length, keep recent conversations
+         if len(self.history) > self.max_history_length * 2:  # Each conversation has user and assistant messages
+             self.history = self.history[-self.max_history_length * 2 :]
+
          return ModeEnum.CHAT.value

-     def _filter_command(self, command):
+     def _filter_command(self, command: str) -> Optional[str]:
          """Filter out unwanted characters from command

          The LLM may return commands in markdown format with code blocks.
@@ -335,11 +455,21 @@ Rules:
          return "\n".join(line.strip() for line in content_lines if line.strip())

      def execute_mode(self, user_input: str):
-         """Execute mode"""
+         """
+         This method generates a shell command from the user input and executes it.
+         If the user confirms the command, it is executed.
+         Args:
+             user_input: user input
+         Returns:
+             ModeEnum: current mode
+         """
          if user_input == "" or self.current_mode != ModeEnum.EXECUTE.value:
              return self.current_mode

          command = self.get_command_from_llm(user_input)
+         if not command:
+             self.console.print("[bold red]No command generated[/bold red]")
+             return self.current_mode
          command = self._filter_command(command)
          if not command:
              self.console.print("[bold red]No command generated[/bold red]")
@@ -357,25 +487,44 @@ Rules:
              if not user_input.strip():
                  continue

-             if user_input.lower() in ("exit", "quit"):
+             if user_input.lower() in ("/exit", "/quit", "/q"):
                  break

+             # Handle special commands
              if self.current_mode == ModeEnum.CHAT.value:
-                 self.chat_mode(user_input)
+                 if user_input.lower() == "/clear":
+                     self.clear_history()
+                     self.console.print("[bold yellow]Chat history cleared[/bold yellow]\n")
+                     continue
+                 else:
+                     self.chat_mode(user_input)
              elif self.current_mode == ModeEnum.EXECUTE.value:
                  self.execute_mode(user_input)

          self.console.print("[bold green]Exiting...[/bold green]")

      def run_one_shot(self, prompt: str):
-         """Run one-shot mode with given prompt"""
+         """Run one-shot mode with given prompt
+         Args:
+             prompt (str): Prompt to send to LLM
+         Returns:
+             None
+         """
          if self.current_mode == ModeEnum.EXECUTE.value:
              self.execute_mode(prompt)  # Execute mode for one-shot prompt
          else:
              self.call_llm_api(prompt)

      def run(self, chat=False, shell=False, prompt: Optional[str] = None):
-         """Run the CLI application"""
+         """Run the CLI application
+         Args:
+             chat (bool): Whether to run in chat mode
+             shell (bool): Whether to run in shell mode
+             prompt (Optional[str]): Prompt to send to the LLM
+
+         Returns:
+             None
+         """
          # Load configuration
          self.config = self.load_config()
          if not self.config.get("API_KEY", None):
@@ -434,7 +583,7 @@ def main(
          typer.echo(ctx.get_help())
          raise typer.Exit()

-     cli = ShellAI(verbose=verbose)
+     cli = YAICLI(verbose=verbose)
      cli.run(chat=chat, shell=shell, prompt=prompt)


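Taken together, the `build_data` changes above mean that in chat mode the request body now carries the bounded history between the system prompt and the newest user message. A minimal sketch of the resulting payload shape (values are illustrative, taken from the defaults shown in this diff):

```python
# Illustrative request body produced by build_data in chat mode (yaicli 0.0.5).
data = {
    "model": "gpt-4o",                                        # config: MODEL
    "messages": [
        {"role": "system", "content": "<system prompt>"},     # always first
        # ... up to max_history_length (25) prior user/assistant turns ...
        {"role": "user", "content": "<current user input>"},  # always last
    ],
    "stream": True,        # STREAM=true
    "temperature": 0.7,
    "top_p": 0.7,
    "max_tokens": 200,
}
```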
@@ -1,15 +0,0 @@
- Metadata-Version: 2.4
- Name: yaicli
- Version: 0.0.4
- Summary: A simple CLI tool to interact with LLM
- License-File: LICENSE
- Requires-Python: >=3.8
- Requires-Dist: distro>=1.9.0
- Requires-Dist: jmespath>=1.0.1
- Requires-Dist: prompt-toolkit>=3.0.50
- Requires-Dist: requests>=2.32.3
- Requires-Dist: rich>=13.9.4
- Requires-Dist: typer>=0.15.2
- Description-Content-Type: text/markdown
-
- # llmcli
@@ -1,6 +0,0 @@
- yaicli.py,sha256=CY5TnPQAypGJ1FQLiryGBV4UFc8E1ZQyaNmT7ghw7NU,16741
- yaicli-0.0.4.dist-info/METADATA,sha256=fY81CYJR2jcWmwElOe3rSk-wM4Lk27SZhTPdK0eHw4E,379
- yaicli-0.0.4.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
- yaicli-0.0.4.dist-info/entry_points.txt,sha256=gdduQwAuu_LeDqnDU81Fv3NPmD2tRQ1FffvolIP3S1Q,34
- yaicli-0.0.4.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
- yaicli-0.0.4.dist-info/RECORD,,
File without changes