yaicli 0.0.8__tar.gz → 0.0.10__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: yaicli
-Version: 0.0.8
+Version: 0.0.10
 Summary: A simple CLI tool to interact with LLM
 Project-URL: Homepage, https://github.com/belingud/yaicli
 Project-URL: Repository, https://github.com/belingud/yaicli
@@ -228,7 +228,9 @@ Description-Content-Type: text/markdown
 ![PyPI - Downloads](https://img.shields.io/pypi/dm/yaicli?logo=pypi&style=for-the-badge)
 ![Pepy Total Downloads](https://img.shields.io/pepy/dt/yaicli?style=for-the-badge&logo=python)
 
-YAICLI is a powerful command-line AI assistant tool that enables you to interact with Large Language Models (LLMs) like ChatGPT's gpt-4o through your terminal. It offers multiple operation modes for everyday conversations, generating and executing shell commands, and one-shot quick queries.
+YAICLI is a compact yet powerful command-line AI assistant that lets you interact with Large Language Models (LLMs) such as ChatGPT's gpt-4o directly from your terminal. It offers multiple operation modes for everyday conversations, generating and executing shell commands, and one-shot quick queries.
+
+Supports both regular and deep-thinking models.
 
 > [!WARNING]
 > This is a work in progress, some features could change or be removed in the future.
@@ -257,6 +259,9 @@ YAICLI is a powerful command-line AI assistant tool that enables you to interact
 - **Keyboard Shortcuts**:
   - Tab to switch between Chat and Execute modes
 
+- **History**:
+  - Save and recall previous queries
+
 ## Installation
 
 ### Prerequisites
@@ -296,6 +301,7 @@ The default configuration file is located at `~/.config/yaicli/config.ini`. Look
 
 ```ini
 [core]
+PROVIDER=OPENAI
 BASE_URL=https://api.openai.com/v1
 API_KEY=your_api_key_here
 MODEL=gpt-4o
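
The option list in the next hunk documents `AI_*` environment variables that override the file settings above. A minimal sketch of that precedence (`read_option` is a hypothetical helper, not yaicli's actual loader):

```python
import configparser
from os import getenv
from pathlib import Path

def read_option(cfg: configparser.ConfigParser, key: str, env: str, default: str) -> str:
    # Environment variable wins over the config file, which wins over the default.
    return getenv(env) or cfg.get("core", key, fallback=default)

cfg = configparser.ConfigParser()
cfg.read(Path.home() / ".config" / "yaicli" / "config.ini")
stream = read_option(cfg, "STREAM", "AI_STREAM", "true")  # AI_STREAM is documented below
```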
@@ -327,6 +333,58 @@ Below are the available configuration options and override environment variables
 - **ANSWER_PATH**: Json path expression to extract answer from response, default: choices[0].message.content, env: AI_ANSWER_PATH
 - **STREAM**: Enable/disable streaming responses, default: true, env: AI_STREAM
 
+The default values of `COMPLETION_PATH` and `ANSWER_PATH` are OpenAI-compatible. If you are using OpenAI or another OpenAI-compatible LLM provider, you can keep the defaults.
+
+If you wish to use a provider that is not compatible with the OpenAI interface, use the following config:
+
+- claude:
+  - BASE_URL: https://api.anthropic.com/v1
+  - COMPLETION_PATH: /messages
+  - ANSWER_PATH: content.0.text
+- cohere:
+  - BASE_URL: https://api.cohere.com/v2
+  - COMPLETION_PATH: /chat
+  - ANSWER_PATH: message.content.[0].text
+- google:
+  - BASE_URL: https://generativelanguage.googleapis.com/v1beta/openai
+  - COMPLETION_PATH: /chat/completions
+  - ANSWER_PATH: choices[0].message.content
+
+You can use Google's OpenAI-compatible endpoint and leave `COMPLETION_PATH` and `ANSWER_PATH` at their defaults, with BASE_URL: https://generativelanguage.googleapis.com/v1beta/openai. See https://ai.google.dev/gemini-api/docs/openai
+
+Claude also offers an OpenAI-compatible interface; you can use the Claude endpoint and leave `COMPLETION_PATH` and `ANSWER_PATH` at their defaults. See: https://docs.anthropic.com/en/api/openai-sdk
+
+If you are not sure how to configure `COMPLETION_PATH` and `ANSWER_PATH`, here is a guide:
+1. **Find the API Endpoint**:
+   - Visit the documentation of the LLM provider you want to use.
+   - Find the API endpoint for the completion task. This is usually under the "API Reference" or "Developer Documentation" section.
+2. **Identify the Response Structure**:
+   - Look for the structure of the response. This typically includes fields like `choices`, `completion`, etc.
+3. **Identify the Path Expression**:
+   For example, Claude's response structure looks like this:
+   ```json
+   {
+     "content": [
+       {
+         "text": "Hi! My name is Claude.",
+         "type": "text"
+       }
+     ],
+     "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
+     "model": "claude-3-7-sonnet-20250219",
+     "role": "assistant",
+     "stop_reason": "end_turn",
+     "stop_sequence": null,
+     "type": "message",
+     "usage": {
+       "input_tokens": 2095,
+       "output_tokens": 503
+     }
+   }
+   ```
+   We want the `text` field, so the path is: 1. key `content`, 2. first element `[0]`, 3. key `text`, giving `content.[0].text`.
+
+
 ## Usage
 
 ### Basic Usage
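
Before writing a path into `ANSWER_PATH`, it is easy to sanity-check it with the `jmespath` library (the code changes further down use `jmespath.search` to apply `ANSWER_PATH`). A minimal sketch against the Claude sample above; note that standard jmespath spells the index step `content[0].text`:

```python
import jmespath  # the same library the CLI code below uses for ANSWER_PATH

sample = {
    "content": [{"text": "Hi! My name is Claude.", "type": "text"}],
    "role": "assistant",
}

# Key `content` -> first element -> key `text`
print(jmespath.search("content[0].text", sample))  # Hi! My name is Claude.
```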
@@ -5,7 +5,9 @@
 ![PyPI - Downloads](https://img.shields.io/pypi/dm/yaicli?logo=pypi&style=for-the-badge)
 ![Pepy Total Downloads](https://img.shields.io/pepy/dt/yaicli?style=for-the-badge&logo=python)
 
-YAICLI is a powerful command-line AI assistant tool that enables you to interact with Large Language Models (LLMs) like ChatGPT's gpt-4o through your terminal. It offers multiple operation modes for everyday conversations, generating and executing shell commands, and one-shot quick queries.
+YAICLI is a compact yet powerful command-line AI assistant that lets you interact with Large Language Models (LLMs) such as ChatGPT's gpt-4o directly from your terminal. It offers multiple operation modes for everyday conversations, generating and executing shell commands, and one-shot quick queries.
+
+Supports both regular and deep-thinking models.
 
 > [!WARNING]
 > This is a work in progress, some features could change or be removed in the future.
@@ -34,6 +36,9 @@ YAICLI is a powerful command-line AI assistant tool that enables you to interact
 - **Keyboard Shortcuts**:
   - Tab to switch between Chat and Execute modes
 
+- **History**:
+  - Save and recall previous queries
+
 ## Installation
 
 ### Prerequisites
@@ -73,6 +78,7 @@ The default configuration file is located at `~/.config/yaicli/config.ini`. Look
 
 ```ini
 [core]
+PROVIDER=OPENAI
 BASE_URL=https://api.openai.com/v1
 API_KEY=your_api_key_here
 MODEL=gpt-4o
@@ -104,6 +110,58 @@ Below are the available configuration options and override environment variables
 - **ANSWER_PATH**: Json path expression to extract answer from response, default: choices[0].message.content, env: AI_ANSWER_PATH
 - **STREAM**: Enable/disable streaming responses, default: true, env: AI_STREAM
 
+The default values of `COMPLETION_PATH` and `ANSWER_PATH` are OpenAI-compatible. If you are using OpenAI or another OpenAI-compatible LLM provider, you can keep the defaults.
+
+If you wish to use a provider that is not compatible with the OpenAI interface, use the following config:
+
+- claude:
+  - BASE_URL: https://api.anthropic.com/v1
+  - COMPLETION_PATH: /messages
+  - ANSWER_PATH: content.0.text
+- cohere:
+  - BASE_URL: https://api.cohere.com/v2
+  - COMPLETION_PATH: /chat
+  - ANSWER_PATH: message.content.[0].text
+- google:
+  - BASE_URL: https://generativelanguage.googleapis.com/v1beta/openai
+  - COMPLETION_PATH: /chat/completions
+  - ANSWER_PATH: choices[0].message.content
+
+You can use Google's OpenAI-compatible endpoint and leave `COMPLETION_PATH` and `ANSWER_PATH` at their defaults, with BASE_URL: https://generativelanguage.googleapis.com/v1beta/openai. See https://ai.google.dev/gemini-api/docs/openai
+
+Claude also offers an OpenAI-compatible interface; you can use the Claude endpoint and leave `COMPLETION_PATH` and `ANSWER_PATH` at their defaults. See: https://docs.anthropic.com/en/api/openai-sdk
+
+If you are not sure how to configure `COMPLETION_PATH` and `ANSWER_PATH`, here is a guide:
+1. **Find the API Endpoint**:
+   - Visit the documentation of the LLM provider you want to use.
+   - Find the API endpoint for the completion task. This is usually under the "API Reference" or "Developer Documentation" section.
+2. **Identify the Response Structure**:
+   - Look for the structure of the response. This typically includes fields like `choices`, `completion`, etc.
+3. **Identify the Path Expression**:
+   For example, Claude's response structure looks like this:
+   ```json
+   {
+     "content": [
+       {
+         "text": "Hi! My name is Claude.",
+         "type": "text"
+       }
+     ],
+     "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
+     "model": "claude-3-7-sonnet-20250219",
+     "role": "assistant",
+     "stop_reason": "end_turn",
+     "stop_sequence": null,
+     "type": "message",
+     "usage": {
+       "input_tokens": 2095,
+       "output_tokens": 503
+     }
+   }
+   ```
+   We want the `text` field, so the path is: 1. key `content`, 2. first element `[0]`, 3. key `text`, giving `content.[0].text`.
+
+
 ## Usage
 
 ### Basic Usage
@@ -1,6 +1,6 @@
 [project]
 name = "yaicli"
-version = "0.0.8"
+version = "0.0.10"
 description = "A simple CLI tool to interact with LLM"
 authors = [{ name = "belingud", email = "im.victor@qq.com" }]
 readme = "README.md"
@@ -2,7 +2,6 @@ import configparser
 import json
 import platform
 import subprocess
-import time
 from os import getenv
 from os.path import basename, pathsep
 from pathlib import Path
@@ -30,7 +29,7 @@ Rules:
 5. Return NOTHING except the ready-to-run command"""
 
 DEFAULT_PROMPT = (
-    "You are yaili, a system management and programing assistant, "
+    "You are YAICLI, a system management and programming assistant, "
     "You are managing {_os} operating system with {_shell} shell. "
     "Your responses should be concise and use Markdown format, "
     "unless the user explicitly requests more details."
@@ -232,44 +231,75 @@ STREAM=true"""
             raise typer.Exit(code=1) from None
         return response
 
-    def _print(self, response: requests.Response, stream: bool = True) -> str:
-        """Print response from LLM and return full completion"""
+    def get_reasoning_content(self, delta: dict) -> Optional[str]:
+        # reasoning: openrouter
+        # reasoning_content: infi-ai/deepseek
+        for k in ("reasoning_content", "reasoning"):
+            if k in delta:
+                return delta[k]
+        return None
+
+    def _print_stream(self, response: requests.Response) -> str:
+        """Print response from LLM in streaming mode"""
         full_completion = ""
-        if stream:
-            with Live() as live:
-                for line in response.iter_lines():
-                    # Skip empty lines
-                    if not line:
-                        continue
+        in_reasoning = False
 
-                    # Process server-sent events
-                    data = line.decode("utf-8")
-                    if not data.startswith("data: "):
-                        continue
+        with Live() as live:
+            for line in response.iter_lines():
+                if not line:
+                    continue
 
-                    # Extract data portion
-                    data = data[6:]
-                    if data == "[DONE]":
-                        break
+                data = line.decode("utf-8")
+                if not data.startswith("data: "):
+                    continue
+
+                data = data[6:]
+                if data == "[DONE]":
+                    break
+
+                try:
+                    json_data = json.loads(data)
+                    if not json_data.get("choices"):
+                        continue
 
-                    # Parse JSON and update display
-                    try:
-                        json_data = json.loads(data)
-                        content = json_data["choices"][0]["delta"].get("content", "")
+                    delta = json_data["choices"][0]["delta"]
+                    reason = self.get_reasoning_content(delta)
+
+                    if reason is not None:
+                        # reasoning started
+                        if not in_reasoning:
+                            in_reasoning = True
+                            full_completion = "> Reasoning:\n> "
+                        full_completion += reason.replace("\n", "\n> ")
+                    else:
+                        # reasoning stopped
+                        if in_reasoning:
+                            in_reasoning = False
+                            full_completion += "\n\n"
+                        content = delta.get("content", "") or ""
                         full_completion += content
-                        live.update(Markdown(markup=full_completion), refresh=True)
-                    except json.JSONDecodeError:
-                        self.console.print("[red]Error decoding response JSON[/red]")
-                        if self.verbose:
-                            self.console.print(f"[red]Error: {data}[/red]")
+                    live.update(Markdown(markup=full_completion), refresh=True)
+                except json.JSONDecodeError:
+                    self.console.print("[red]Error decoding response JSON[/red]")
+                    if self.verbose:
+                        self.console.print(f"[red]Error: {data}[/red]")
+
+        return full_completion
 
-                    time.sleep(0.01)
+    def _print_non_stream(self, response: requests.Response) -> str:
+        """Print response from LLM in non-streaming mode"""
+        full_completion = jmespath.search(self.config.get("ANSWER_PATH", "choices[0].message.content"), response.json())
+        self.console.print(Markdown(full_completion))
+        return full_completion
+
+    def _print(self, response: requests.Response, stream: bool = True) -> str:
+        """Print response from LLM and return full completion"""
+        if stream:
+            # Streaming response
+            full_completion = self._print_stream(response)
         else:
             # Non-streaming response
-            full_completion = jmespath.search(
-                self.config.get("ANSWER_PATH", "choices[0].message.content"), response.json()
-            )
-            self.console.print(Markdown(full_completion))
+            full_completion = self._print_non_stream(response)
         self.console.print()  # Add a newline after the response to separate from the next input
         return full_completion
@@ -292,7 +322,13 @@ STREAM=true"""
         """Run REPL loop, handling user input and generating responses, saving history, and executing commands"""
         # Show REPL instructions
         self._setup_key_bindings()
-        self.console.print("[bold]Starting REPL loop[/bold]")
+        self.console.print("""
+ ██    ██  █████  ██  ██████ ██      ██
+  ██  ██  ██   ██ ██ ██      ██      ██
+   ████   ███████ ██ ██      ██      ██
+    ██    ██   ██ ██ ██      ██      ██
+    ██    ██   ██ ██  ██████ ███████ ██
+""")
         self.console.print("[bold]Press TAB to change in chat and exec mode[/bold]")
         self.console.print("[bold]Type /clear to clear chat history[/bold]")
         self.console.print("[bold]Type /his to see chat history[/bold]")
@@ -355,24 +391,10 @@ STREAM=true"""
 
         self.console.print("[bold green]Exiting...[/bold green]")
 
-    def run(self, chat: bool, shell: bool, prompt: str) -> None:
-        """Run the CLI"""
-        self.load_config()
-        if not self.config.get("API_KEY"):
-            self.console.print("[bold red]API key not set[/bold red]")
-            self.console.print(
-                "[bold red]Please set API key in ~/.config/yaicli/config.ini or environment variable[/bold red]"
-            )
-            raise typer.Exit(code=1)
+    def _run_once(self, prompt: str, shell: bool = False) -> None:
+        """Run once with given prompt"""
         _os = self.detect_os()
         _shell = self.detect_shell()
-
-        # Handle chat mode
-        if chat:
-            self.current_mode = CHAT_MODE
-            self._run_repl()
-            return
-
         # Create appropriate system prompt based on mode
         system_prompt = SHELL_PROMPT if shell else DEFAULT_PROMPT
         system_content = system_prompt.format(_os=_os, _shell=_shell)
@@ -401,6 +423,24 @@ STREAM=true"""
             self.console.print(f"[bold red]Command failed with return code {returncode}[/bold red]")
 
 
+    def run(self, chat: bool, shell: bool, prompt: str) -> None:
+        """Run the CLI"""
+        self.load_config()
+        if not self.config.get("API_KEY"):
+            self.console.print("[bold red]API key not set[/bold red]")
+            self.console.print(
+                "[bold red]Please set API key in ~/.config/yaicli/config.ini or environment variable[/bold red]"
+            )
+            raise typer.Exit(code=1)
+
+        # Handle chat mode
+        if chat:
+            self.current_mode = CHAT_MODE
+            self._run_repl()
+        else:
+            self._run_once(prompt, shell)
+
+
 @app.command()
 def main(
     ctx: typer.Context,