yaicli 0.0.17__tar.gz → 0.0.19__tar.gz

This diff compares the contents of two publicly released versions of the package, as published to a supported registry. It is provided for informational purposes only and reflects the packages exactly as they appear in their public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: yaicli
- Version: 0.0.17
+ Version: 0.0.19
  Summary: A simple CLI tool to interact with LLM
  Project-URL: Homepage, https://github.com/belingud/yaicli
  Project-URL: Repository, https://github.com/belingud/yaicli
@@ -238,32 +238,28 @@ Support regular and deep thinking models.

  ## Features

- - **Multiple Operation Modes**:
- - **Chat Mode (💬)**: Interactive conversation with the AI assistant
- - **Execute Mode (🚀)**: Generate and execute shell commands specific to your OS and shell
- - **Temp Mode**: Quick queries without entering interactive mode
+ - **Smart Interaction Modes**:
+ - 💬 Chat Mode: Persistent dialogue with context tracking
+ - 🚀 Execute Mode: Generate & verify OS-specific commands (Windows/macOS/Linux)
+ - Quick Query: Single-shot responses without entering REPL

- - **Smart Environment Detection**:
- - Automatically detects your operating system and shell
- - Customizes responses and commands for your specific environment
+ - **Environment Intelligence**:
+ - Auto-detects shell type (CMD/PowerShell/bash/zsh)
+ - Dynamic command validation with 3-step confirmation
+ - Pipe input support (`cat log.txt | ai "analyze errors"`)

- - **Rich Terminal Interface**:
- - Markdown rendering for formatted responses
- - Streaming responses for real-time feedback
- - Color-coded output for better readability
+ - **Enterprise LLM Support**:
+ - OpenAI API compatible endpoints
+ - Claude/Gemini/Cohere integration guides
+ - Custom JSON parsing with jmespath

- - **Configurable**:
- - Customizable API endpoints
- - Support for different LLM providers
- - Adjustable response parameters
+ - **Terminal Experience**:
+ - Real-time streaming with cursor animation
+ - LRU history management (500 entries default)

- - **Keyboard Shortcuts**:
- - Tab to switch between Chat and Execute modes
- - `↑/↓` to navigate history
- - `Ctrl+R` to search history
-
- - **History**:
- - Save and recall previous queries
+ - **DevOps Ready**:
+ - Layered configuration (Env > File > Defaults)
+ - Verbose debug mode with API tracing

  ## Installation

@@ -345,6 +341,7 @@ Below are the available configuration options and override environment variables
  - **TOP_P**: Top-p sampling for response generation (default: 1.0), env: YAI_TOP_P
  - **MAX_TOKENS**: Maximum number of tokens for response generation (default: 1024), env: YAI_MAX_TOKENS
  - **MAX_HISTORY**: Max history size, default: 500, env: YAI_MAX_HISTORY
+ - **AUTO_SUGGEST**: Auto suggest from history, default: true, env: YAI_AUTO_SUGGEST

  Default config of `COMPLETION_PATH` and `ANSWER_PATH` is OpenAI compatible. If you are using OpenAI or other OpenAI compatible LLM provider, you can use the default config.

@@ -14,32 +14,28 @@ Support regular and deep thinking models.

  ## Features

- - **Multiple Operation Modes**:
- - **Chat Mode (💬)**: Interactive conversation with the AI assistant
- - **Execute Mode (🚀)**: Generate and execute shell commands specific to your OS and shell
- - **Temp Mode**: Quick queries without entering interactive mode
+ - **Smart Interaction Modes**:
+ - 💬 Chat Mode: Persistent dialogue with context tracking
+ - 🚀 Execute Mode: Generate & verify OS-specific commands (Windows/macOS/Linux)
+ - Quick Query: Single-shot responses without entering REPL

- - **Smart Environment Detection**:
- - Automatically detects your operating system and shell
- - Customizes responses and commands for your specific environment
+ - **Environment Intelligence**:
+ - Auto-detects shell type (CMD/PowerShell/bash/zsh)
+ - Dynamic command validation with 3-step confirmation
+ - Pipe input support (`cat log.txt | ai "analyze errors"`)

- - **Rich Terminal Interface**:
- - Markdown rendering for formatted responses
- - Streaming responses for real-time feedback
- - Color-coded output for better readability
+ - **Enterprise LLM Support**:
+ - OpenAI API compatible endpoints
+ - Claude/Gemini/Cohere integration guides
+ - Custom JSON parsing with jmespath

- - **Configurable**:
- - Customizable API endpoints
- - Support for different LLM providers
- - Adjustable response parameters
+ - **Terminal Experience**:
+ - Real-time streaming with cursor animation
+ - LRU history management (500 entries default)

- - **Keyboard Shortcuts**:
- - Tab to switch between Chat and Execute modes
- - `↑/↓` to navigate history
- - `Ctrl+R` to search history
-
- - **History**:
- - Save and recall previous queries
+ - **DevOps Ready**:
+ - Layered configuration (Env > File > Defaults)
+ - Verbose debug mode with API tracing

  ## Installation

@@ -121,6 +117,7 @@ Below are the available configuration options and override environment variables
  - **TOP_P**: Top-p sampling for response generation (default: 1.0), env: YAI_TOP_P
  - **MAX_TOKENS**: Maximum number of tokens for response generation (default: 1024), env: YAI_MAX_TOKENS
  - **MAX_HISTORY**: Max history size, default: 500, env: YAI_MAX_HISTORY
+ - **AUTO_SUGGEST**: Auto suggest from history, default: true, env: YAI_AUTO_SUGGEST

  Default config of `COMPLETION_PATH` and `ANSWER_PATH` is OpenAI compatible. If you are using OpenAI or other OpenAI compatible LLM provider, you can use the default config.

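The new `AUTO_SUGGEST` option follows the same override rule as the rest of the config: the `YAI_AUTO_SUGGEST` environment variable beats the config file, which beats the default. A minimal sketch of resolving that boolean, mirroring the string comparison the new `load_config` uses (the helper name and file value below are illustrative, not part of yaicli):

```python
from os import getenv

# Illustrative helper, not yaicli API: resolve AUTO_SUGGEST with
# env var > file value > default, comparing against the string "true"
# the same way the new load_config does.
def resolve_auto_suggest(file_value: str = "true") -> bool:
    raw = getenv("YAI_AUTO_SUGGEST") or file_value
    return str(raw).strip().lower() == "true"

print(resolve_auto_suggest())         # True unless overridden via env
print(resolve_auto_suggest("false"))  # False when the config file disables it
```
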
@@ -1,6 +1,6 @@
  [project]
  name = "yaicli"
- version = "0.0.17"
+ version = "0.0.19"
  description = "A simple CLI tool to interact with LLM"
  authors = [{ name = "belingud", email = "im.victor@qq.com" }]
  readme = "README.md"
@@ -7,14 +7,14 @@ import time
  from os import getenv
  from os.path import basename, exists, pathsep, devnull
  from pathlib import Path
- from typing import Annotated, Optional, Union
+ from typing import Annotated, Any, Dict, Optional, Union

  import httpx
  import jmespath
  import typer
  from distro import name as distro_name
  from prompt_toolkit import PromptSession, prompt
- # from prompt_toolkit.completion import WordCompleter
+ from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
  from prompt_toolkit.history import FileHistory, _StrOrBytesPath
  from prompt_toolkit.key_binding import KeyBindings, KeyPressEvent
  from prompt_toolkit.keys import Keys
@@ -49,46 +49,54 @@ CHAT_MODE = "chat"
  TEMP_MODE = "temp"

  DEFAULT_CONFIG_MAP = {
- "BASE_URL": {"value": "https://api.openai.com/v1", "env_key": "YAI_BASE_URL"},
- "API_KEY": {"value": "", "env_key": "YAI_API_KEY"},
- "MODEL": {"value": "gpt-4o", "env_key": "YAI_MODEL"},
- "SHELL_NAME": {"value": "auto", "env_key": "YAI_SHELL_NAME"},
- "OS_NAME": {"value": "auto", "env_key": "YAI_OS_NAME"},
- "COMPLETION_PATH": {"value": "chat/completions", "env_key": "YAI_COMPLETION_PATH"},
- "ANSWER_PATH": {"value": "choices[0].message.content", "env_key": "YAI_ANSWER_PATH"},
- "STREAM": {"value": "true", "env_key": "YAI_STREAM"},
- "CODE_THEME": {"value": "monokia", "env_key": "YAI_CODE_THEME"},
- "TEMPERATURE": {"value": "0.7", "env_key": "YAI_TEMPERATURE"},
- "TOP_P": {"value": "1.0", "env_key": "YAI_TOP_P"},
- "MAX_TOKENS": {"value": "1024", "env_key": "YAI_MAX_TOKENS"},
- "MAX_HISTORY": {"value": "500", "env_key": "YAI_MAX_HISTORY"},
+ # Core API settings
+ "BASE_URL": {"value": "https://api.openai.com/v1", "env_key": "YAI_BASE_URL", "type": str},
+ "API_KEY": {"value": "", "env_key": "YAI_API_KEY", "type": str},
+ "MODEL": {"value": "gpt-4o", "env_key": "YAI_MODEL", "type": str},
+ # System detection hints
+ "SHELL_NAME": {"value": "auto", "env_key": "YAI_SHELL_NAME", "type": str},
+ "OS_NAME": {"value": "auto", "env_key": "YAI_OS_NAME", "type": str},
+ # API response parsing
+ "COMPLETION_PATH": {"value": "chat/completions", "env_key": "YAI_COMPLETION_PATH", "type": str},
+ "ANSWER_PATH": {"value": "choices[0].message.content", "env_key": "YAI_ANSWER_PATH", "type": str},
+ # API call parameters
+ "STREAM": {"value": "true", "env_key": "YAI_STREAM", "type": bool},
+ "TEMPERATURE": {"value": "0.7", "env_key": "YAI_TEMPERATURE", "type": float},
+ "TOP_P": {"value": "1.0", "env_key": "YAI_TOP_P", "type": float},
+ "MAX_TOKENS": {"value": "1024", "env_key": "YAI_MAX_TOKENS", "type": int},
+ # UI/UX settings
+ "CODE_THEME": {"value": "monokai", "env_key": "YAI_CODE_THEME", "type": str},
+ "MAX_HISTORY": {"value": "500", "env_key": "YAI_MAX_HISTORY", "type": int}, # readline history file limit
+ "AUTO_SUGGEST": {"value": "true", "env_key": "YAI_AUTO_SUGGEST", "type": bool},
  }

- DEFAULT_CONFIG_INI = """[core]
+ DEFAULT_CONFIG_INI = f"""[core]
  PROVIDER=openai
- BASE_URL=https://api.openai.com/v1
- API_KEY=
- MODEL=gpt-4o
+ BASE_URL={DEFAULT_CONFIG_MAP["BASE_URL"]["value"]}
+ API_KEY={DEFAULT_CONFIG_MAP["API_KEY"]["value"]}
+ MODEL={DEFAULT_CONFIG_MAP["MODEL"]["value"]}

- # auto detect shell and os
- SHELL_NAME=auto
- OS_NAME=auto
+ # auto detect shell and os (or specify manually, e.g., bash, zsh, powershell.exe)
+ SHELL_NAME={DEFAULT_CONFIG_MAP["SHELL_NAME"]["value"]}
+ OS_NAME={DEFAULT_CONFIG_MAP["OS_NAME"]["value"]}

- # if you want to use custom completions path, you can set it here
- COMPLETION_PATH=/chat/completions
- # if you want to use custom answer path, you can set it here
- ANSWER_PATH=choices[0].message.content
+ # API paths (usually no need to change for OpenAI compatible APIs)
+ COMPLETION_PATH={DEFAULT_CONFIG_MAP["COMPLETION_PATH"]["value"]}
+ ANSWER_PATH={DEFAULT_CONFIG_MAP["ANSWER_PATH"]["value"]}

- # true: streaming response
- # false: non-streaming response
- STREAM=true
- CODE_THEME=monokia
+ # true: streaming response, false: non-streaming
+ STREAM={DEFAULT_CONFIG_MAP["STREAM"]["value"]}

- TEMPERATURE=0.7
- TOP_P=1.0
- MAX_TOKENS=1024
+ # LLM parameters
+ TEMPERATURE={DEFAULT_CONFIG_MAP["TEMPERATURE"]["value"]}
+ TOP_P={DEFAULT_CONFIG_MAP["TOP_P"]["value"]}
+ MAX_TOKENS={DEFAULT_CONFIG_MAP["MAX_TOKENS"]["value"]}

- MAX_HISTORY=500"""
+ # UI/UX
+ CODE_THEME={DEFAULT_CONFIG_MAP["CODE_THEME"]["value"]}
+ MAX_HISTORY={DEFAULT_CONFIG_MAP["MAX_HISTORY"]["value"]} # Max entries kept in history file
+ AUTO_SUGGEST={DEFAULT_CONFIG_MAP["AUTO_SUGGEST"]["value"]}
+ """

  app = typer.Typer(
  name="yaicli",
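
Each `DEFAULT_CONFIG_MAP` entry now carries a `type` field alongside the string default and env key, so one table drives both the generated `DEFAULT_CONFIG_INI` (via the f-string above) and type coercion at load time; the hunk also fixes the code theme default from `monokia` to `monokai`. A trimmed sketch of that pattern, assuming the same map shape (the map below is a stand-in, not the full table):

```python
from typing import Any

# Stand-in for a few DEFAULT_CONFIG_MAP entries: raw defaults stay strings,
# and "type" records what each value should be coerced to after merging.
CONFIG_MAP = {
    "STREAM": {"value": "true", "env_key": "YAI_STREAM", "type": bool},
    "TEMPERATURE": {"value": "0.7", "env_key": "YAI_TEMPERATURE", "type": float},
    "MAX_HISTORY": {"value": "500", "env_key": "YAI_MAX_HISTORY", "type": int},
}

def coerce(key: str, raw: str) -> Any:
    target = CONFIG_MAP[key]["type"]
    if target is bool:
        # Booleans are a string comparison, so "True"/"TRUE" also pass
        return str(raw).strip().lower() == "true"
    return target(raw)  # int/float/str constructors

assert coerce("STREAM", "True") is True
assert coerce("MAX_HISTORY", "500") == 500
assert coerce("TEMPERATURE", "0.7") == 0.7
```
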
@@ -184,10 +192,6 @@ class CLI:
  self.max_history_length = 25
  self.current_mode = TEMP_MODE

- def is_stream(self) -> bool:
- """Check if streaming is enabled"""
- return self.config["STREAM"] == "true"
-
  def prepare_chat_loop(self) -> None:
  """Setup key bindings and history for chat mode"""
  self._setup_key_bindings()
@@ -196,10 +200,10 @@ class CLI:
  self.session = PromptSession(
  key_bindings=self.bindings,
  # completer=WordCompleter(["/clear", "/exit", "/his"]),
- complete_while_typing=True,
  history=LimitedFileHistory(
  Path("~/.yaicli_history").expanduser(), max_entries=int(self.config["MAX_HISTORY"])
  ),
+ auto_suggest=AutoSuggestFromHistory() if self.config["AUTO_SUGGEST"] else None,
  enable_history_search=True,
  )

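`complete_while_typing` is dropped in favour of history-based auto-suggestions, gated by the new `AUTO_SUGGEST` setting. A self-contained prompt_toolkit sketch of the same wiring (history path and prompt text are placeholders; yaicli uses its own `LimitedFileHistory` with a size cap):

```python
from pathlib import Path

from prompt_toolkit import PromptSession
from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
from prompt_toolkit.history import FileHistory

session = PromptSession(
    # Placeholder history file; yaicli caps ~/.yaicli_history at MAX_HISTORY entries.
    history=FileHistory(str(Path("~/.demo_history").expanduser())),
    # Ghost-text suggestion drawn from earlier entries; accept it with the Right arrow.
    auto_suggest=AutoSuggestFromHistory(),
    enable_history_search=True,  # Ctrl+R reverse search over the same history
)

if __name__ == "__main__":
    print("You typed:", session.prompt("> "))
```
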
@@ -210,42 +214,66 @@ class CLI:
  def _(event: KeyPressEvent) -> None:
  self.current_mode = EXEC_MODE if self.current_mode == CHAT_MODE else CHAT_MODE

- def load_config(self) -> dict[str, str]:
+ def load_config(self) -> dict[str, Any]: # Changed return type hint
  """Load LLM API configuration with priority:
  1. Environment variables (highest priority)
  2. Configuration file
  3. Default values (lowest priority)

+ Applies type conversion based on DEFAULT_CONFIG_MAP after merging sources.
+
  Returns:
- dict: merged configuration
+ dict: merged configuration with appropriate types
  """
- # Start with default configuration (lowest priority)
- merged_config = {k: v["value"] for k, v in DEFAULT_CONFIG_MAP.items()}
+ # Start with default configuration string values (lowest priority)
+ # These serve as the base and also for fallback on type errors
+ default_values_str = {k: v["value"] for k, v in DEFAULT_CONFIG_MAP.items()}
+ merged_config: Dict[str, Any] = default_values_str.copy() # Use Any for value type

  # Create default config file if it doesn't exist
  if not self.CONFIG_PATH.exists():
  self.console.print("[bold yellow]Creating default configuration file.[/bold yellow]")
  self.CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
- with open(self.CONFIG_PATH, "w") as f:
+ with open(self.CONFIG_PATH, "w", encoding="utf-8") as f: # Added encoding
  f.write(DEFAULT_CONFIG_INI)
  else:
  # Load from configuration file (middle priority)
  config_parser = CasePreservingConfigParser()
- config_parser.read(self.CONFIG_PATH)
+ # Read with UTF-8 encoding
+ config_parser.read(self.CONFIG_PATH, encoding="utf-8")
  if "core" in config_parser:
- # Update with non-empty values from config file
- merged_config.update({k: v for k, v in config_parser["core"].items() if v.strip()})
+ # Update with non-empty values from config file (values are strings)
+ merged_config.update(
+ {k: v for k, v in config_parser["core"].items() if k in DEFAULT_CONFIG_MAP and v.strip()}
+ )

  # Override with environment variables (highest priority)
- for key, config in DEFAULT_CONFIG_MAP.items():
- env_value = getenv(config["env_key"])
+ for key, config_info in DEFAULT_CONFIG_MAP.items():
+ env_value = getenv(config_info["env_key"])
  if env_value is not None:
+ # Env values are strings
  merged_config[key] = env_value
-
- merged_config["STREAM"] = str(merged_config.get("STREAM", "true")).lower()
-
+ target_type = config_info["type"]
+ # Fallback, shouldn't be needed here, but safe
+ raw_value: Any = merged_config.get(key, default_values_str.get(key))
+ converted_value = None
+ try:
+ if target_type is bool:
+ converted_value = str(raw_value).strip().lower() == "true"
+ elif target_type in (int, float, str):
+ converted_value = target_type(raw_value)
+ except (ValueError, TypeError) as e:
+ self.console.print(
+ f"[yellow]Warning:[/yellow] Invalid value '{raw_value}' for '{key}'. "
+ f"Expected type '{target_type.__name__}'. Using default value '{default_values_str[key]}'. Error: {e}",
+ style="dim",
+ )
+ # Fallback to default string value
+ converted_value = target_type(default_values_str[key])
+
+ merged_config[key] = converted_value
  self.config = merged_config
- return merged_config
+ return self.config

  def detect_os(self) -> str:
  """Detect operating system + version"""
@@ -329,14 +357,16 @@ class CLI:
  body = {
  "messages": message,
  "model": self.config.get("MODEL", "gpt-4o"),
- "stream": self.is_stream(),
+ "stream": self.config["STREAM"],
  "temperature": self._get_number_with_type(key="TEMPERATURE", _type=float, default="0.7"),
  "top_p": self._get_number_with_type(key="TOP_P", _type=float, default="1.0"),
  "max_tokens": self._get_number_with_type(key="MAX_TOKENS", _type=int, default="1024"),
  }
  with httpx.Client(timeout=120.0) as client:
  response = client.post(
- url, json=body, headers={"Authorization": f"Bearer {self.config.get('API_KEY', '')}"}
+ url,
+ json=body,
+ headers={"Authorization": f"Bearer {self.config.get('API_KEY', '')}"},
  )
  try:
  response.raise_for_status()
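
For context, a hedged end-to-end sketch of the request pattern this body feeds: an OpenAI-style POST via httpx, with the answer extracted by a jmespath expression like the configurable `ANSWER_PATH` (the URL, key, and model below are placeholders; real values come from yaicli's merged config):

```python
import httpx
import jmespath

# Placeholders, not real credentials or yaicli internals.
BASE_URL = "https://api.openai.com/v1"
API_KEY = "sk-your-key"
ANSWER_PATH = "choices[0].message.content"

body = {
    "messages": [{"role": "user", "content": "Say hi"}],
    "model": "gpt-4o",
    "stream": False,  # now a real bool, matching the typed config
    "temperature": 0.7,
    "top_p": 1.0,
    "max_tokens": 1024,
}

with httpx.Client(timeout=120.0) as client:
    response = client.post(
        f"{BASE_URL}/chat/completions",
        json=body,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
response.raise_for_status()
# jmespath keeps the answer path a config string rather than hard-coded parsing.
print(jmespath.search(ANSWER_PATH, response.json()))
```
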
@@ -421,7 +451,10 @@ class CLI:
  )

  cursor = cursor_chars[cursor_index]
- live.update(Markdown(markup=full_content + cursor, code_theme=self.config["CODE_THEME"]), refresh=True)
+ live.update(
+ Markdown(markup=full_content + cursor, code_theme=self.config["CODE_THEME"]),
+ refresh=True,
+ )
  cursor_index = (cursor_index + 1) % 2
  time.sleep(0.005) # Slow down the printing speed, avoiding screen flickering
  live.update(Markdown(markup=full_content, code_theme=self.config["CODE_THEME"]), refresh=True)
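
The reflowed `live.update(...)` call is the core of the streaming cursor effect: re-render the accumulated Markdown with an alternating cursor character appended, then render once more without it. A minimal sketch of the technique with simulated chunks (the chunk list, delay, and cursor characters are illustrative):

```python
import time

from rich.live import Live
from rich.markdown import Markdown

cursor_chars = ["_", " "]                   # alternate to fake a blinking cursor
chunks = ["Hello", ", ", "**world**", "!"]  # stand-in for streamed response deltas

full_content, cursor_index = "", 0
with Live() as live:
    for chunk in chunks:
        full_content += chunk
        cursor = cursor_chars[cursor_index]
        live.update(
            Markdown(full_content + cursor, code_theme="monokai"),
            refresh=True,
        )
        cursor_index = (cursor_index + 1) % 2
        time.sleep(0.3)  # slowed down here so the effect is visible
    # Final render without the cursor
    live.update(Markdown(full_content, code_theme="monokai"), refresh=True)
```
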
@@ -488,7 +521,7 @@ class CLI:

  def _handle_llm_response(self, response: httpx.Response, user_input: str) -> str:
  """Print LLM response and update history"""
- content = self._print_stream(response) if self.is_stream() else self._print_normal(response)
+ content = self._print_stream(response) if self.config["STREAM"] else self._print_normal(response)
  self.history.extend([{"role": "user", "content": user_input}, {"role": "assistant", "content": content}])
  self._check_history_len()
  return content
@@ -539,7 +572,7 @@ class CLI:
  # Handle clear command
  if user_input.lower() == CMD_CLEAR and self.current_mode == CHAT_MODE:
  self.history = []
- self.console.print("[bold yellow]Chat history cleared[/bold yellow]\n")
+ self.console.print("Chat history cleared\n", style="bold yellow")
  continue
  elif user_input.lower() == CMD_HISTORY:
  self.console.print(self.history)
@@ -564,17 +597,17 @@ class CLI:
  """Run the CLI"""
  self.load_config()
  if self.verbose:
- self.console.print(f"CODE_THEME: {self.config['CODE_THEME']}")
- self.console.print(f"ANSWER_PATH: {self.config['ANSWER_PATH']}")
+ self.console.print(f"CODE_THEME: {self.config['CODE_THEME']}")
+ self.console.print(f"ANSWER_PATH: {self.config['ANSWER_PATH']}")
  self.console.print(f"COMPLETION_PATH: {self.config['COMPLETION_PATH']}")
- self.console.print(f"BASE_URL: {self.config['BASE_URL']}")
- self.console.print(f"MODEL: {self.config['MODEL']}")
- self.console.print(f"SHELL_NAME: {self.config['SHELL_NAME']}")
- self.console.print(f"OS_NAME: {self.config['OS_NAME']}")
- self.console.print(f"STREAM: {self.config['STREAM']}")
- self.console.print(f"TEMPERATURE: {self.config['TEMPERATURE']}")
- self.console.print(f"TOP_P: {self.config['TOP_P']}")
- self.console.print(f"MAX_TOKENS: {self.config['MAX_TOKENS']}")
+ self.console.print(f"BASE_URL: {self.config['BASE_URL']}")
+ self.console.print(f"MODEL: {self.config['MODEL']}")
+ self.console.print(f"SHELL_NAME: {self.config['SHELL_NAME']}")
+ self.console.print(f"OS_NAME: {self.config['OS_NAME']}")
+ self.console.print(f"STREAM: {self.config['STREAM']}")
+ self.console.print(f"TEMPERATURE: {self.config['TEMPERATURE']}")
+ self.console.print(f"TOP_P: {self.config['TOP_P']}")
+ self.console.print(f"MAX_TOKENS: {self.config['MAX_TOKENS']}")
  if not self.config.get("API_KEY"):
  self.console.print(
  "[yellow]API key not set. Please set in ~/.config/yaicli/config.ini or AI_API_KEY env[/]"
@@ -596,10 +629,17 @@ def main(
  bool, typer.Option("--chat", "-c", help="Start in chat mode", rich_help_panel="Run Options")
  ] = False,
  shell: Annotated[
- bool, typer.Option("--shell", "-s", help="Generate and execute shell command", rich_help_panel="Run Options")
+ bool,
+ typer.Option(
+ "--shell",
+ "-s",
+ help="Generate and execute shell command",
+ rich_help_panel="Run Options",
+ ),
  ] = False,
  verbose: Annotated[
- bool, typer.Option("--verbose", "-V", help="Show verbose information", rich_help_panel="Run Options")
+ bool,
+ typer.Option("--verbose", "-V", help="Show verbose information", rich_help_panel="Run Options"),
  ] = False,
  template: Annotated[bool, typer.Option("--template", help="Show the config template.")] = False,
  ):
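
The option reflow above is formatting only; behaviour is unchanged. For reference, a minimal runnable Typer app using the same `Annotated` + `typer.Option` style (the command name and echo output are illustrative, not yaicli's):

```python
from typing import Annotated

import typer

app = typer.Typer(name="demo")


@app.command()
def main(
    shell: Annotated[
        bool,
        typer.Option("--shell", "-s", help="Generate and execute shell command", rich_help_panel="Run Options"),
    ] = False,
    verbose: Annotated[
        bool,
        typer.Option("--verbose", "-V", help="Show verbose information", rich_help_panel="Run Options"),
    ] = False,
):
    typer.echo(f"shell={shell} verbose={verbose}")


if __name__ == "__main__":
    app()
```
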