yaicli 0.0.12__tar.gz → 0.0.14__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: yaicli
-Version: 0.0.12
+Version: 0.0.14
 Summary: A simple CLI tool to interact with LLM
 Project-URL: Homepage, https://github.com/belingud/yaicli
 Project-URL: Repository, https://github.com/belingud/yaicli
@@ -329,14 +329,17 @@ MAX_TOKENS=1024
 
 Below are the available configuration options and override environment variables:
 
-- **BASE_URL**: API endpoint URL (default: OpenAI API), env:
-- **API_KEY**: Your API key for the LLM provider, env:
-- **MODEL**: The model to use (e.g., gpt-4o, gpt-3.5-turbo), default: gpt-4o, env:
-- **SHELL_NAME**: Shell to use (auto for automatic detection), default: auto, env:
-- **OS_NAME**: OS to use (auto for automatic detection), default: auto, env:
-- **COMPLETION_PATH**: Path for completions endpoint, default: /chat/completions, env:
-- **ANSWER_PATH**: Json path expression to extract answer from response, default: choices[0].message.content, env:
-- **STREAM**: Enable/disable streaming responses, default: true, env:
+- **BASE_URL**: API endpoint URL (default: OpenAI API), env: YAI_BASE_URL
+- **API_KEY**: Your API key for the LLM provider, env: YAI_API_KEY
+- **MODEL**: The model to use (e.g., gpt-4o, gpt-3.5-turbo), default: gpt-4o, env: YAI_MODEL
+- **SHELL_NAME**: Shell to use (auto for automatic detection), default: auto, env: YAI_SHELL_NAME
+- **OS_NAME**: OS to use (auto for automatic detection), default: auto, env: YAI_OS_NAME
+- **COMPLETION_PATH**: Path for completions endpoint, default: /chat/completions, env: YAI_COMPLETION_PATH
+- **ANSWER_PATH**: Json path expression to extract answer from response, default: choices[0].message.content, env: YAI_ANSWER_PATH
+- **STREAM**: Enable/disable streaming responses, default: true, env: YAI_STREAM
+- **TEMPERATURE**: Temperature for response generation (default: 0.7), env: YAI_TEMPERATURE
+- **TOP_P**: Top-p sampling for response generation (default: 1.0), env: YAI_TOP_P
+- **MAX_TOKENS**: Maximum number of tokens for response generation (default: 1024), env: YAI_MAX_TOKENS
 
 Default config of `COMPLETION_PATH` and `ANSWER_PATH` is OpenAI compatible. If you are using OpenAI or other OpenAI compatible LLM provider, you can use the default config.
 
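As context for the list above, the `YAI_*` variables are overrides: when one is set it is expected to win over the value in `config.ini`, which in turn falls back to the built-in default. A minimal sketch of that precedence (illustrative only; `resolve` and `DEFAULTS` are invented names, not the package's actual loader):

```python
from os import getenv

# Illustrative defaults mirroring two of the options listed above.
DEFAULTS = {
    "MODEL": {"value": "gpt-4o", "env_key": "YAI_MODEL"},
    "STREAM": {"value": "true", "env_key": "YAI_STREAM"},
}

def resolve(key: str, file_config: dict) -> str:
    """Environment variable wins, then the config.ini value, then the built-in default."""
    spec = DEFAULTS[key]
    return getenv(spec["env_key"]) or file_config.get(key) or spec["value"]

# With YAI_MODEL unset this prints the config.ini value, "gpt-3.5-turbo".
print(resolve("MODEL", {"MODEL": "gpt-3.5-turbo"}))
```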
@@ -467,10 +470,45 @@ In Execute mode:
 
 ## Examples
 
+### Have a Chat
+
+```bash
+$ ai "What is the capital of France?"
+Assistant:
+The capital of France is Paris.
+```
+
+### Command Gen and Run
+
+```bash
+$ ai -s 'Check the current directory size'
+Assistant:
+du -sh .
+
+Generated command: du -sh .
+Execute this command? [y/n/e] (n): e
+Edit command, press enter to execute:
+du -sh ./
+Output:
+109M ./
+```
+
 ### Chat Mode Example
 
 ```bash
 $ ai --chat
+
+ ██    ██  █████  ██  ██████ ██      ██
+  ██  ██  ██   ██ ██ ██      ██      ██
+   ████   ███████ ██ ██      ██      ██
+    ██    ██   ██ ██ ██      ██      ██
+    ██    ██   ██ ██  ██████ ███████ ██
+
+Press TAB to change in chat and exec mode
+Type /clear to clear chat history
+Type /his to see chat history
+Press Ctrl+C or type /exit to exit
+
 💬 > Tell me about the solar system
 
 Assistant:
@@ -489,7 +527,17 @@ Certainly! Here’s a brief overview of the solar system:
    • Dwarf Planets:
      • Pluto: Once considered the ninth planet, now classified as
 
-
+🚀 > Check the current directory size
+Assistant:
+du -sh .
+
+Generated command: du -sh .
+Execute this command? [y/n/e] (n): e
+Edit command, press enter to execute:
+du -sh ./
+Output:
+109M ./
+🚀 >
 ```
 
 ### Execute Mode Example
@@ -105,14 +105,17 @@ MAX_TOKENS=1024
 
 Below are the available configuration options and override environment variables:
 
-- **BASE_URL**: API endpoint URL (default: OpenAI API), env:
-- **API_KEY**: Your API key for the LLM provider, env:
-- **MODEL**: The model to use (e.g., gpt-4o, gpt-3.5-turbo), default: gpt-4o, env:
-- **SHELL_NAME**: Shell to use (auto for automatic detection), default: auto, env:
-- **OS_NAME**: OS to use (auto for automatic detection), default: auto, env:
-- **COMPLETION_PATH**: Path for completions endpoint, default: /chat/completions, env:
-- **ANSWER_PATH**: Json path expression to extract answer from response, default: choices[0].message.content, env:
-- **STREAM**: Enable/disable streaming responses, default: true, env:
+- **BASE_URL**: API endpoint URL (default: OpenAI API), env: YAI_BASE_URL
+- **API_KEY**: Your API key for the LLM provider, env: YAI_API_KEY
+- **MODEL**: The model to use (e.g., gpt-4o, gpt-3.5-turbo), default: gpt-4o, env: YAI_MODEL
+- **SHELL_NAME**: Shell to use (auto for automatic detection), default: auto, env: YAI_SHELL_NAME
+- **OS_NAME**: OS to use (auto for automatic detection), default: auto, env: YAI_OS_NAME
+- **COMPLETION_PATH**: Path for completions endpoint, default: /chat/completions, env: YAI_COMPLETION_PATH
+- **ANSWER_PATH**: Json path expression to extract answer from response, default: choices[0].message.content, env: YAI_ANSWER_PATH
+- **STREAM**: Enable/disable streaming responses, default: true, env: YAI_STREAM
+- **TEMPERATURE**: Temperature for response generation (default: 0.7), env: YAI_TEMPERATURE
+- **TOP_P**: Top-p sampling for response generation (default: 1.0), env: YAI_TOP_P
+- **MAX_TOKENS**: Maximum number of tokens for response generation (default: 1024), env: YAI_MAX_TOKENS
 
 Default config of `COMPLETION_PATH` and `ANSWER_PATH` is OpenAI compatible. If you are using OpenAI or other OpenAI compatible LLM provider, you can use the default config.
 
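A quick aside on `ANSWER_PATH`: the package evaluates it with `jmespath` against the provider's JSON response, so the default `choices[0].message.content` matches the OpenAI-style payload mentioned above. A small illustration (the response body here is invented for the example):

```python
import jmespath

# Invented OpenAI-style response body, just to show what the default ANSWER_PATH selects.
response_json = {
    "choices": [
        {"message": {"role": "assistant", "content": "The capital of France is Paris."}}
    ]
}

answer = jmespath.search("choices[0].message.content", response_json)
print(answer)  # The capital of France is Paris.
```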
@@ -243,10 +246,45 @@ In Execute mode:
 
 ## Examples
 
+### Have a Chat
+
+```bash
+$ ai "What is the capital of France?"
+Assistant:
+The capital of France is Paris.
+```
+
+### Command Gen and Run
+
+```bash
+$ ai -s 'Check the current directory size'
+Assistant:
+du -sh .
+
+Generated command: du -sh .
+Execute this command? [y/n/e] (n): e
+Edit command, press enter to execute:
+du -sh ./
+Output:
+109M ./
+```
+
 ### Chat Mode Example
 
 ```bash
 $ ai --chat
+
+ ██    ██  █████  ██  ██████ ██      ██
+  ██  ██  ██   ██ ██ ██      ██      ██
+   ████   ███████ ██ ██      ██      ██
+    ██    ██   ██ ██ ██      ██      ██
+    ██    ██   ██ ██  ██████ ███████ ██
+
+Press TAB to change in chat and exec mode
+Type /clear to clear chat history
+Type /his to see chat history
+Press Ctrl+C or type /exit to exit
+
 💬 > Tell me about the solar system
 
 Assistant:
@@ -265,7 +303,17 @@ Certainly! Here’s a brief overview of the solar system:
    • Dwarf Planets:
      • Pluto: Once considered the ninth planet, now classified as
 
-
+🚀 > Check the current directory size
+Assistant:
+du -sh .
+
+Generated command: du -sh .
+Execute this command? [y/n/e] (n): e
+Edit command, press enter to execute:
+du -sh ./
+Output:
+109M ./
+🚀 >
 ```
 
 ### Execute Mode Example
@@ -1,6 +1,6 @@
 [project]
 name = "yaicli"
-version = "0.0.12"
+version = "0.0.14"
 description = "A simple CLI tool to interact with LLM"
 authors = [{ name = "belingud", email = "im.victor@qq.com" }]
 readme = "README.md"
@@ -46,10 +46,6 @@ Documentation = "https://github.com/belingud/yaicli"
 [project.scripts]
 ai = "yaicli:app"
 
-[tool.pdm.scripts]
-bump = "bump2version {args}"
-changelog = "just changelog"
-
 [tool.uv]
 resolution = "highest"
 
@@ -2,6 +2,7 @@ import configparser
 import json
 import platform
 import subprocess
+import time
 from os import getenv
 from os.path import basename, pathsep
 from pathlib import Path
@@ -11,7 +12,7 @@ import httpx
 import jmespath
 import typer
 from distro import name as distro_name
-from prompt_toolkit import PromptSession
+from prompt_toolkit import PromptSession, prompt
 from prompt_toolkit.completion import WordCompleter
 from prompt_toolkit.history import FileHistory
 from prompt_toolkit.key_binding import KeyBindings, KeyPressEvent
@@ -19,7 +20,8 @@ from prompt_toolkit.keys import Keys
 from rich.console import Console
 from rich.live import Live
 from rich.markdown import Markdown
-from rich.
+from rich.panel import Panel
+from rich.prompt import Prompt
 
 SHELL_PROMPT = """Your are a Shell Command Generator.
 Generate a command EXCLUSIVELY for {_os} OS with {_shell} shell.
@@ -46,14 +48,17 @@ CHAT_MODE = "chat"
 TEMP_MODE = "temp"
 
 DEFAULT_CONFIG_MAP = {
-    "BASE_URL": {"value": "https://api.openai.com/v1", "env_key": "
-    "API_KEY": {"value": "", "env_key": "
-    "MODEL": {"value": "gpt-4o", "env_key": "
-    "SHELL_NAME": {"value": "auto", "env_key": "
-    "OS_NAME": {"value": "auto", "env_key": "
-    "COMPLETION_PATH": {"value": "chat/completions", "env_key": "
-    "ANSWER_PATH": {"value": "choices[0].message.content", "env_key": "
-    "STREAM": {"value": "true", "env_key": "
+    "BASE_URL": {"value": "https://api.openai.com/v1", "env_key": "YAI_BASE_URL"},
+    "API_KEY": {"value": "", "env_key": "YAI_API_KEY"},
+    "MODEL": {"value": "gpt-4o", "env_key": "YAI_MODEL"},
+    "SHELL_NAME": {"value": "auto", "env_key": "YAI_SHELL_NAME"},
+    "OS_NAME": {"value": "auto", "env_key": "YAI_OS_NAME"},
+    "COMPLETION_PATH": {"value": "chat/completions", "env_key": "YAI_COMPLETION_PATH"},
+    "ANSWER_PATH": {"value": "choices[0].message.content", "env_key": "YAI_ANSWER_PATH"},
+    "STREAM": {"value": "true", "env_key": "YAI_STREAM"},
+    "TEMPERATURE": {"value": "0.7", "env_key": "YAI_TEMPERATURE"},
+    "TOP_P": {"value": "1.0", "env_key": "YAI_TOP_P"},
+    "MAX_TOKENS": {"value": "1024", "env_key": "YAI_MAX_TOKENS"},
 }
 
 DEFAULT_CONFIG_INI = """[core]
@@ -102,10 +107,14 @@ class CLI:
         self.bindings = KeyBindings()
         self.session = PromptSession(key_bindings=self.bindings)
         self.config = {}
-        self.history = []
+        self.history: list[dict[str, str]] = []
         self.max_history_length = 25
         self.current_mode = TEMP_MODE
 
+    def is_stream(self) -> bool:
+        """Check if streaming is enabled"""
+        return self.config["STREAM"] == "true"
+
     def prepare_chat_loop(self) -> None:
         """Setup key bindings and history for chat mode"""
         self._setup_key_bindings()
@@ -232,7 +241,7 @@
         # Join the remaining lines and strip any extra whitespace
         return "\n".join(line.strip() for line in content_lines if line.strip())
 
-    def
+    def _get_number_with_type(self, key, _type: type, default=None):
         """Get number with type from config"""
         try:
             return _type(self.config.get(key, default))
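The new `_get_number_with_type` helper exists because values read from config.ini or the environment arrive as strings, and the sampling parameters need real numbers before they go into the request body. A standalone sketch of the same cast-with-fallback idea (`get_number` is an invented name; the package's own error handling is only partly visible in this hunk and may differ):

```python
def get_number(config: dict, key: str, _type: type, default: str):
    """Cast a string config value, falling back to the default on bad input."""
    try:
        return _type(config.get(key, default))
    except (TypeError, ValueError):
        return _type(default)

cfg = {"TEMPERATURE": "0.7", "MAX_TOKENS": "not-a-number"}
print(get_number(cfg, "TEMPERATURE", float, "0.7"))  # 0.7
print(get_number(cfg, "MAX_TOKENS", int, "1024"))    # falls back to 1024
```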
@@ -245,10 +254,10 @@
         body = {
             "messages": message,
             "model": self.config.get("MODEL", "gpt-4o"),
-            "stream": self.
-            "temperature": self.
-            "top_p": self.
-            "max_tokens": self.
+            "stream": self.is_stream(),
+            "temperature": self._get_number_with_type(key="TEMPERATURE", _type=float, default="0.7"),
+            "top_p": self._get_number_with_type(key="TOP_P", _type=float, default="1.0"),
+            "max_tokens": self._get_number_with_type(key="MAX_TOKENS", _type=int, default="1024"),
         }
         with httpx.Client(timeout=120.0) as client:
             response = client.post(
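Taken together, the lines in this hunk mean the request body now carries the sampling parameters next to `stream`. A rough sketch of the kind of request being assembled (endpoint, auth header, and values are placeholders; the real ones come from BASE_URL, COMPLETION_PATH, API_KEY and the config helpers above, and the actual auth handling is outside this hunk):

```python
import httpx

# Placeholder values standing in for the configured BASE_URL + COMPLETION_PATH,
# API_KEY, and the numbers resolved from TEMPERATURE / TOP_P / MAX_TOKENS.
body = {
    "messages": [{"role": "user", "content": "Hello"}],
    "model": "gpt-4o",
    "stream": True,
    "temperature": 0.7,
    "top_p": 1.0,
    "max_tokens": 1024,
}

with httpx.Client(timeout=120.0) as client:
    response = client.post(
        "https://api.openai.com/v1/chat/completions",
        json=body,
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    )
response.raise_for_status()
```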
@@ -315,8 +324,11 @@
 
     def _print_stream(self, response: httpx.Response) -> str:
         """Print response from LLM in streaming mode"""
+        self.console.print("Assistant:", style="bold green")
         full_completion = ""
         in_reasoning = False
+        cursor_chars = ["_", " "]
+        cursor_index = 0
 
         with Live() as live:
             for line in response.iter_lines():
@@ -332,30 +344,21 @@
                         reason, full_completion, in_reasoning
                     )
                 else:
-                    content = delta.get("content", "") or ""
                     full_completion, in_reasoning = self._process_regular_content(
-                        content, full_completion, in_reasoning
+                        delta.get("content", "") or "", full_completion, in_reasoning
                     )
 
-                live.update(Markdown(markup=full_completion), refresh=True)
-
+                live.update(Markdown(markup=full_completion + cursor_chars[cursor_index]), refresh=True)
+                cursor_index = (cursor_index + 1) % 2
+                time.sleep(0.005)  # Slow down the printing speed, avoiding screen flickering
+            live.update(Markdown(markup=full_completion), refresh=True)
         return full_completion
 
-    def
+    def _print_normal(self, response: httpx.Response) -> str:
         """Print response from LLM in non-streaming mode"""
+        self.console.print("Assistant:", style="bold green")
         full_completion = jmespath.search(self.config.get("ANSWER_PATH", "choices[0].message.content"), response.json())
-        self.console.print(Markdown(full_completion))
-        return full_completion
-
-    def _print(self, response: httpx.Response, stream: bool = True) -> str:
-        """Print response from LLM and return full completion"""
-        if stream:
-            # Streaming response
-            full_completion = self._print_stream(response)
-        else:
-            # Non-streaming response
-            full_completion = self._print_non_stream(response)
-        self.console.print()  # Add a newline after the response to separate from the next input
+        self.console.print(Markdown(full_completion + "\n"))
         return full_completion
 
     def get_prompt_tokens(self) -> list[tuple[str, str]]:
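The streaming change above appends an alternating cursor character to each Markdown redraw and finishes with one final redraw without it. A self-contained sketch of that blinking-cursor effect using rich's `Live`, with simulated chunks standing in for the LLM stream:

```python
import time

from rich.live import Live
from rich.markdown import Markdown

# Simulated stream chunks; in yaicli these come from the LLM response.
chunks = ["The capital ", "of France ", "is **Paris**."]
cursor_chars = ["_", " "]
cursor_index = 0
full_completion = ""

with Live() as live:
    for chunk in chunks:
        full_completion += chunk
        # Draw the accumulated text plus a cursor that alternates between "_" and " ".
        live.update(Markdown(full_completion + cursor_chars[cursor_index]), refresh=True)
        cursor_index = (cursor_index + 1) % 2
        time.sleep(0.3)  # stand-in for network latency between chunks
    # Final redraw without the cursor.
    live.update(Markdown(full_completion), refresh=True)
```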
@@ -368,9 +371,74 @@
         if len(self.history) > self.max_history_length:
             self.history = self.history[-self.max_history_length :]
 
+    def _handle_special_commands(self, user_input: str) -> Optional[bool]:
+        """Handle special command return: True-continue loop, False-exit loop, None-non-special command"""
+        if user_input.lower() == CMD_EXIT:
+            return False
+        if user_input.lower() == CMD_CLEAR and self.current_mode == CHAT_MODE:
+            self.history.clear()
+            self.console.print("Chat history cleared\n", style="bold yellow")
+            return True
+        if user_input.lower() == CMD_HISTORY:
+            self.console.print(self.history)
+            return True
+        return None
+
+    def _confirm_and_execute(self, content: str) -> None:
+        """Review, edit and execute the command"""
+        cmd = self._filter_command(content)
+        if not cmd:
+            self.console.print("No command generated", style="bold red")
+            return
+        self.console.print(Panel(cmd, title="Command", title_align="left", border_style="bold magenta", expand=False))
+        _input = Prompt.ask(
+            r"Execute command? \[e]dit, \[y]es, \[n]o",
+            choices=["y", "n", "e"],
+            default="n",
+            case_sensitive=False,
+            show_choices=False,
+        )
+        if _input == "y":  # execute cmd
+            self.console.print("Output:", style="bold green")
+            subprocess.call(cmd, shell=True)
+        elif _input == "e":  # edit cmd
+            cmd = prompt("Edit command, press enter to execute:\n", default=cmd)
+            self.console.print("Output:", style="bold green")
+            subprocess.call(cmd, shell=True)
+
+    def _build_messages(self, user_input: str) -> list[dict[str, str]]:
+        return [
+            {"role": "system", "content": self.get_system_prompt()},
+            *self.history,
+            {"role": "user", "content": user_input},
+        ]
+
+    def _handle_llm_response(self, response: httpx.Response, user_input: str) -> str:
+        """Print LLM response and update history"""
+        content = self._print_stream(response) if self.is_stream() else self._print_normal(response)
+        self.history.extend([{"role": "user", "content": user_input}, {"role": "assistant", "content": content}])
+        self._check_history_len()
+        return content
+
+    def _process_user_input(self, user_input: str) -> bool:
+        """Process user input and generate response"""
+        try:
+            response = self.post(self._build_messages(user_input))
+            content = self._handle_llm_response(response, user_input)
+            if self.current_mode == EXEC_MODE:
+                self._confirm_and_execute(content)
+            return True
+        except Exception as e:
+            self.console.print(f"Error: {e}", style="red")
+            return False
+
+    def get_system_prompt(self) -> str:
+        """Return system prompt for current mode"""
+        prompt = SHELL_PROMPT if self.current_mode == EXEC_MODE else DEFAULT_PROMPT
+        return prompt.format(_os=self.detect_os(), _shell=self.detect_shell())
+
     def _run_repl(self) -> None:
         """Run REPL loop, handling user input and generating responses, saving history, and executing commands"""
-        # Show REPL instructions
         self.prepare_chat_loop()
         self.console.print("""
  ██    ██  █████  ██  ██████ ██      ██
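The new `_confirm_and_execute` method pairs `rich.prompt.Prompt.ask` for the y/n/e choice with `prompt_toolkit.prompt(..., default=...)` so the generated command is pre-filled for editing. A trimmed-down sketch of that interaction outside the class (the command string is a stand-in, and this sketch omits the package's mode handling):

```python
import subprocess

from prompt_toolkit import prompt
from rich.console import Console
from rich.panel import Panel
from rich.prompt import Prompt

console = Console()
cmd = "du -sh ."  # stand-in for the command returned by the LLM

console.print(Panel(cmd, title="Command", title_align="left", expand=False))
choice = Prompt.ask(r"Execute command? \[e]dit, \[y]es, \[n]o", choices=["y", "n", "e"], default="n")
if choice == "e":
    # Pre-fill the editable prompt with the generated command.
    cmd = prompt("Edit command, press enter to execute:\n", default=cmd)
if choice in ("y", "e"):
    console.print("Output:", style="bold green")
    subprocess.call(cmd, shell=True)
```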
@@ -379,13 +447,13 @@
     ██    ██   ██ ██ ██      ██      ██
     ██    ██   ██ ██  ██████ ███████ ██
 """)
-        self.console.print("
-        self.console.print("
-        self.console.print("
-        self.console.print("
+        self.console.print("Press TAB to change in chat and exec mode", style="bold")
+        self.console.print("Type /clear to clear chat history", style="bold")
+        self.console.print("Type /his to see chat history", style="bold")
+        self.console.print("Press Ctrl+C or type /exit to exit\n", style="bold")
 
         while True:
-
+            self.console.print(Markdown("---"))
             user_input = self.session.prompt(self.get_prompt_tokens).strip()
             if not user_input:
                 continue
@@ -402,88 +470,21 @@
             elif user_input.lower() == CMD_HISTORY:
                 self.console.print(self.history)
                 continue
-
-            system_prompt = SHELL_PROMPT if self.current_mode == EXEC_MODE else DEFAULT_PROMPT
-            system_content = system_prompt.format(_os=self.detect_os(), _shell=self.detect_shell())
-
-            # Create message with system prompt and history
-            message = [{"role": "system", "content": system_content}]
-            message.extend(self.history)
-
-            # Add current user message
-            message.append({"role": "user", "content": user_input})
-
-            # Get response from LLM
-            try:
-                response = self.post(message)
-            except ValueError as e:
-                self.console.print(f"[red]Error: {e}[/red]")
-                return
-            except (httpx.ConnectError, httpx.HTTPStatusError) as e:
-                self.console.print(f"[red]Error: {e}[/red]")
+            if not self._process_user_input(user_input):
                 continue
-            self.console.print("\n[bold green]Assistant:[/bold green]")
-            try:
-                content = self._print(response, stream=self.config["STREAM"] == "true")
-            except Exception as e:
-                self.console.print(f"[red]Unknown Error: {e}[/red]")
-                continue
-
-            # Add user input and assistant response to history
-            self.history.append({"role": "user", "content": user_input})
-            self.history.append({"role": "assistant", "content": content})
-
-            self._check_history_len()
-
-            # Handle command execution in exec mode
-            if self.current_mode == EXEC_MODE:
-                content = self._filter_command(content)
-                if not content:
-                    self.console.print("[bold red]No command generated[/bold red]")
-                    continue
-                self.console.print(f"\n[bold magenta]Generated command:[/bold magenta] {content}")
-                if Confirm.ask("Execute this command?", default=False):
-                    subprocess.call(content, shell=True)
 
         self.console.print("[bold green]Exiting...[/bold green]")
 
     def _run_once(self, prompt: str, shell: bool = False) -> None:
         """Run once with given prompt"""
-        _os = self.detect_os()
-        _shell = self.detect_shell()
-        # Create appropriate system prompt based on mode
-        system_prompt = SHELL_PROMPT if shell else DEFAULT_PROMPT
-        system_content = system_prompt.format(_os=_os, _shell=_shell)
-
-        # Create message with system prompt and user input
-        message = [
-            {"role": "system", "content": system_content},
-            {"role": "user", "content": prompt},
-        ]
 
-        # Get response from LLM
         try:
-            response = self.post(
-
-
-
+            response = self.post(self._build_messages(prompt))
+            content = self._handle_llm_response(response, prompt)
+            if shell:
+                self._confirm_and_execute(content)
         except Exception as e:
-            self.console.print(f"[red]
-            return
-        self.console.print("\n[bold green]Assistant:[/bold green]")
-        content = self._print(response, stream=self.config["STREAM"] == "true")
-
-        # Handle shell mode execution
-        if shell:
-            content = self._filter_command(content)
-            if not content:
-                self.console.print("[bold red]No command generated[/bold red]")
-                return
-            self.console.print(f"\n[bold magenta]Generated command:[/bold magenta] {content}")
-            if Confirm.ask("Execute this command?", default=False):
-                returncode = subprocess.call(content, shell=True)
-                if returncode != 0:
-                    self.console.print(f"[bold red]Command failed with return code {returncode}[/bold red]")
+            self.console.print(f"[red]Error: {e}[/red]")
 
     def run(self, chat: bool, shell: bool, prompt: str) -> None:
         """Run the CLI"""
@@ -493,12 +494,11 @@
                 "[yellow]API key not set. Please set in ~/.config/yaicli/config.ini or AI_API_KEY env[/]"
             )
             raise typer.Exit(code=1)
-
-        # Handle chat mode
         if chat:
            self.current_mode = CHAT_MODE
            self._run_repl()
         else:
+            self.current_mode = EXEC_MODE if shell else TEMP_MODE
             self._run_once(prompt, shell)
 
 
@@ -507,13 +507,13 @@ def main(
     ctx: typer.Context,
     prompt: Annotated[Optional[str], typer.Argument(show_default=False, help="The prompt send to the LLM")] = None,
     chat: Annotated[
-        bool, typer.Option("--chat", "-c", help="Start in chat mode", rich_help_panel="Run
+        bool, typer.Option("--chat", "-c", help="Start in chat mode", rich_help_panel="Run Options")
     ] = False,
     shell: Annotated[
-        bool, typer.Option("--shell", "-s", help="Generate and execute shell command", rich_help_panel="Run
+        bool, typer.Option("--shell", "-s", help="Generate and execute shell command", rich_help_panel="Run Options")
     ] = False,
     verbose: Annotated[
-        bool, typer.Option("--verbose", "-V", help="Show verbose information", rich_help_panel="Run
+        bool, typer.Option("--verbose", "-V", help="Show verbose information", rich_help_panel="Run Options")
     ] = False,
     template: Annotated[bool, typer.Option("--template", help="Show the config template.")] = False,
 ):
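The `rich_help_panel="Run Options"` arguments completed in this hunk group the three flags under a single panel in the `--help` output. A minimal sketch of the same pattern, assuming a Typer version whose `typer.Option` accepts `rich_help_panel` (the command body here is invented for illustration):

```python
from typing import Annotated

import typer

app = typer.Typer()

@app.command()
def main(
    # Both options land in the same "Run Options" panel of the rendered help.
    chat: Annotated[bool, typer.Option("--chat", "-c", help="Start in chat mode", rich_help_panel="Run Options")] = False,
    shell: Annotated[bool, typer.Option("--shell", "-s", help="Generate and execute shell command", rich_help_panel="Run Options")] = False,
):
    typer.echo(f"chat={chat} shell={shell}")

if __name__ == "__main__":
    app()
```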
File without changes
File without changes