yaicli 0.2.0__tar.gz → 0.3.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: yaicli
- Version: 0.2.0
+ Version: 0.3.0
  Summary: A simple CLI tool to interact with LLM
  Project-URL: Homepage, https://github.com/belingud/yaicli
  Project-URL: Repository, https://github.com/belingud/yaicli
@@ -229,12 +229,14 @@ Description-Content-Type: text/markdown
  ![PyPI - Downloads](https://img.shields.io/pypi/dm/yaicli?logo=pypi&style=for-the-badge)
  ![Pepy Total Downloads](https://img.shields.io/pepy/dt/yaicli?style=for-the-badge&logo=python)

- YAICLI is a powerful yet lightweight command-line AI assistant that brings the capabilities of Large Language Models (LLMs) like GPT-4o directly to your terminal. Interact with AI through multiple modes: have natural conversations, generate and execute shell commands, or get quick answers without leaving your workflow.
+ YAICLI is a powerful yet lightweight command-line AI assistant that brings the capabilities of Large Language Models (
+ LLMs) like GPT-4o directly to your terminal. Interact with AI through multiple modes: have natural conversations,
+ generate and execute shell commands, or get quick answers without leaving your workflow.

  **Supports both standard and deep reasoning models across all major LLM providers.**

  <p align="center">
- <img src="https://vhs.charm.sh/vhs-5U1BBjJkTUBReRswsSgIVx.gif" alt="YAICLI Demo" width="85%">
+ <img src="https://vhs.charm.sh/vhs-5U1BBjJkTUBReRswsSgIVx.gif" alt="YAICLI Chat Demo" width="85%">
  </p>

  > [!NOTE]
@@ -244,39 +246,41 @@ YAICLI is a powerful yet lightweight command-line AI assistant that brings the c

  ### 🔄 Multiple Interaction Modes

- - **💬 Chat Mode**: Engage in persistent conversations with full context tracking
- - **🚀 Execute Mode**: Generate and safely run OS-specific shell commands
- - **⚡ Quick Query**: Get instant answers without entering interactive mode
+ - **💬 Chat Mode**: Engage in persistent conversations with full context tracking
+ - **🚀 Execute Mode**: Generate and safely run OS-specific shell commands
+ - **⚡ Quick Query**: Get instant answers without entering interactive mode

  ### 🧠 Smart Environment Awareness

- - **Auto-detection**: Identifies your shell (bash/zsh/PowerShell/CMD) and OS
- - **Safe Command Execution**: 3-step verification before running any command
- - **Flexible Input**: Pipe content directly (`cat log.txt | ai "analyze this"`)
+ - **Auto-detection**: Identifies your shell (bash/zsh/PowerShell/CMD) and OS
+ - **Safe Command Execution**: 3-step verification before running any command
+ - **Flexible Input**: Pipe content directly (`cat log.txt | ai "analyze this"`)

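The "Flexible Input" bullet above (`cat log.txt | ai "analyze this"`) boils down to reading stdin when it is not a terminal and folding it into the prompt. A minimal sketch of that pattern — `build_prompt` is a hypothetical helper for illustration, not yaicli's actual code:

```python
import sys
from typing import Optional


def build_prompt(prompt: str, stdin_text: Optional[str]) -> str:
    """Fold piped stdin content into the user's prompt, if any was provided."""
    if stdin_text:
        return f"{prompt}\n\n{stdin_text.strip()}"
    return prompt


if __name__ == "__main__":
    # Read stdin only when it is a pipe, not an interactive terminal.
    piped = None if sys.stdin.isatty() else sys.stdin.read()
    print(build_prompt("analyze this", piped))
```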
  ### 🔌 Universal LLM Compatibility

- - **OpenAI-Compatible**: Works with any OpenAI-compatible API endpoint
- - **Multi-Provider Support**: Easy configuration for Claude, Gemini, Cohere, etc.
- - **Custom Response Parsing**: Extract exactly what you need with jmespath
+ - **OpenAI-Compatible**: Works with any OpenAI-compatible API endpoint
+ - **Multi-Provider Support**: Easy configuration for Claude, Gemini, Cohere, etc.
+ - **Custom Response Parsing**: Extract exactly what you need with jmespath

  ### 💻 Enhanced Terminal Experience

- - **Real-time Streaming**: See responses as they're generated with cursor animation
- - **Rich History Management**: LRU-based history with 500 entries by default
- - **Syntax Highlighting**: Beautiful code formatting with customizable themes
+ - **Real-time Streaming**: See responses as they're generated with cursor animation
+ - **Rich History Management**: LRU-based history with 500 entries by default
+ - **Syntax Highlighting**: Beautiful code formatting with customizable themes

  ### 🛠️ Developer-Friendly

- - **Layered Configuration**: Environment variables > Config file > Sensible defaults
- - **Debugging Tools**: Verbose mode with detailed API tracing
- - **Lightweight**: Minimal dependencies with focused functionality
+ - **Layered Configuration**: Environment variables > Config file > Sensible defaults
+ - **Debugging Tools**: Verbose mode with detailed API tracing
+ - **Lightweight**: Minimal dependencies with focused functionality
+
+ ![What is life](artwork/reasoning_example.png)

  ## 📦 Installation

  ### Prerequisites

- - Python 3.9 or higher
+ - Python 3.9 or higher

  ### Quick Install

@@ -311,49 +315,55 @@ YAICLI uses a simple configuration file to store your preferences and API keys.

  ### Configuration File Structure

- The default configuration file is located at `~/.config/yaicli/config.ini`. You can use `ai --template` to see default settings, just as below:
+ The default configuration file is located at `~/.config/yaicli/config.ini`. You can use `ai --template` to see default
+ settings, just as below:

  ```ini
  [core]
- PROVIDER=openai
- BASE_URL=https://api.openai.com/v1
- API_KEY=
- MODEL=gpt-4o
+ PROVIDER = openai
+ BASE_URL = https://api.openai.com/v1
+ API_KEY =
+ MODEL = gpt-4o

  # auto detect shell and os (or specify manually, e.g., bash, zsh, powershell.exe)
- SHELL_NAME=auto
- OS_NAME=auto
+ SHELL_NAME = auto
+ OS_NAME = auto

  # API paths (usually no need to change for OpenAI compatible APIs)
- COMPLETION_PATH=chat/completions
- ANSWER_PATH=choices[0].message.content
+ COMPLETION_PATH = chat/completions
+ ANSWER_PATH = choices[0].message.content

  # true: streaming response, false: non-streaming
- STREAM=true
+ STREAM = true

  # LLM parameters
- TEMPERATURE=0.7
- TOP_P=1.0
- MAX_TOKENS=1024
- TIMEOUT=60
+ TEMPERATURE = 0.7
+ TOP_P = 1.0
+ MAX_TOKENS = 1024
+ TIMEOUT = 60

  # Interactive mode parameters
- INTERACTIVE_ROUND=25
+ INTERACTIVE_ROUND = 25

  # UI/UX
- CODE_THEME=monokai
- MAX_HISTORY=500 # Max entries kept in history file
- AUTO_SUGGEST=true
+ CODE_THEME = monokai
+ # Max entries kept in history file
+ MAX_HISTORY = 500
+ AUTO_SUGGEST = true
+ # Print reasoning content or not
+ SHOW_REASONING = true
+ # Text alignment (default, left, center, right, full)
+ JUSTIFY = default

  # Chat history settings
- CHAT_HISTORY_DIR={DEFAULT_CONFIG_MAP["CHAT_HISTORY_DIR"]["value"]}
- MAX_SAVED_CHATS={DEFAULT_CONFIG_MAP["MAX_SAVED_CHATS"]["value"]}
+ CHAT_HISTORY_DIR = <tempdir>/yaicli/chats
+ MAX_SAVED_CHATS = 20
  ```

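The `key = value` template above is plain INI, so it parses with Python's stock `configparser`; a hedged sketch using a subset of the keys (illustrative only — yaicli's actual loader is not shown here):

```python
import configparser
import io

# A fragment of the 0.3.0 template above (illustrative subset).
TEMPLATE = """
[core]
PROVIDER = openai
STREAM = true
# Print reasoning content or not
SHOW_REASONING = true
# Text alignment (default, left, center, right, full)
JUSTIFY = default
MAX_HISTORY = 500
"""

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(TEMPLATE))

provider = cfg.get("core", "PROVIDER")                     # "openai"
show_reasoning = cfg.getboolean("core", "SHOW_REASONING")  # True
max_history = cfg.getint("core", "MAX_HISTORY")            # 500
```

Full-line `#` comments, as in the new template, are stripped by `configparser` automatically, which is why 0.3.0 moved the `MAX_HISTORY` comment onto its own line.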
  ### Configuration Options Reference

  | Option | Description | Default | Env Variable |
- | ------------------- | ------------------------------------------- | ---------------------------- | ----------------------- |
+ |---------------------|---------------------------------------------|------------------------------|-------------------------|
  | `PROVIDER` | LLM provider (openai, claude, cohere, etc.) | `openai` | `YAI_PROVIDER` |
  | `BASE_URL` | API endpoint URL | `https://api.openai.com/v1` | `YAI_BASE_URL` |
  | `API_KEY` | Your API key | - | `YAI_API_KEY` |
@@ -371,17 +381,20 @@ MAX_SAVED_CHATS={DEFAULT_CONFIG_MAP["MAX_SAVED_CHATS"]["value"]}
  | `MAX_TOKENS` | Max response tokens | `1024` | `YAI_MAX_TOKENS` |
  | `MAX_HISTORY` | Max history entries | `500` | `YAI_MAX_HISTORY` |
  | `AUTO_SUGGEST` | Enable history suggestions | `true` | `YAI_AUTO_SUGGEST` |
+ | `SHOW_REASONING` | Enable reasoning display | `true` | `YAI_SHOW_REASONING` |
+ | `JUSTIFY` | Text alignment | `default` | `YAI_JUSTIFY` |
  | `CHAT_HISTORY_DIR` | Chat history directory | `<tempdir>/yaicli/history` | `YAI_CHAT_HISTORY_DIR` |
  | `MAX_SAVED_CHATS` | Max saved chats | `20` | `YAI_MAX_SAVED_CHATS` |

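Per the table above, each option can come from a `YAI_*` environment variable, the config file, or a built-in default, in that precedence order ("Environment variables > Config file > Sensible defaults"). A sketch of that layered lookup — `resolve` and `DEFAULTS` are illustrative names, not yaicli's API:

```python
import os

DEFAULTS = {"PROVIDER": "openai", "MAX_HISTORY": "500"}


def resolve(option: str, file_values: dict) -> str:
    """Precedence: YAI_<OPTION> env var > config file value > built-in default."""
    env = os.environ.get(f"YAI_{option}")
    if env is not None:
        return env
    if option in file_values:
        return file_values[option]
    return DEFAULTS[option]


os.environ["YAI_PROVIDER"] = "cohere"
print(resolve("PROVIDER", {"PROVIDER": "openai"}))  # env var wins over file
print(resolve("MAX_HISTORY", {}))                   # falls back to default
```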
  ### LLM Provider Configuration

- YAICLI works with all major LLM providers. The default configuration is set up for OpenAI, but you can easily switch to other providers.
+ YAICLI works with all major LLM providers. The default configuration is set up for OpenAI, but you can easily switch to
+ other providers.

  #### Pre-configured Provider Settings

  | Provider | BASE_URL | COMPLETION_PATH | ANSWER_PATH |
- | ------------------------------ | --------------------------------------------------------- | ------------------ | ---------------------------- |
+ |--------------------------------|-----------------------------------------------------------|--------------------|------------------------------|
  | **OpenAI** (default) | `https://api.openai.com/v1` | `chat/completions` | `choices[0].message.content` |
  | **Claude** (native API) | `https://api.anthropic.com/v1` | `messages` | `content[0].text` |
  | **Claude** (OpenAI-compatible) | `https://api.anthropic.com/v1/openai` | `chat/completions` | `choices[0].message.content` |
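At request time the client has to combine `BASE_URL` and `COMPLETION_PATH` from the table above. A hedged sketch of one robust way to join them (`join_url` is a hypothetical helper, not yaicli's code); note that a naive `urllib.parse.urljoin` would drop a `/v1` suffix unless the base ends with a slash:

```python
def join_url(base_url: str, path: str) -> str:
    """Join an API base URL and a completion path without losing path suffixes."""
    return base_url.rstrip("/") + "/" + path.lstrip("/")


# e.g. the OpenAI-style endpoint from the table:
print(join_url("https://api.openai.com/v1", "chat/completions"))
# → https://api.openai.com/v1/chat/completions
```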
@@ -390,8 +403,8 @@ YAICLI works with all major LLM providers. The default configuration is set up f

  > **Note**: Many providers offer OpenAI-compatible endpoints that work with the default settings.
  >
- > - Google Gemini: https://ai.google.dev/gemini-api/docs/openai
- > - Claude: https://docs.anthropic.com/en/api/openai-sdk
+ > - Google Gemini: https://ai.google.dev/gemini-api/docs/openai
+ > - Claude: https://docs.anthropic.com/en/api/openai-sdk

  #### Custom Provider Configuration Guide

@@ -412,15 +425,15 @@ To configure a custom provider:

  ```json
  {
- "content": [
- {
- "text": "Hi! My name is Claude.",
- "type": "text"
- }
- ],
- "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
- "model": "claude-3-7-sonnet-20250219",
- "role": "assistant"
+ "content": [
+ {
+ "text": "Hi! My name is Claude.",
+ "type": "text"
+ }
+ ],
+ "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
+ "model": "claude-3-7-sonnet-20250219",
+ "role": "assistant"
  }
  ```

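YAICLI resolves `ANSWER_PATH` expressions like the one for this response with jmespath. As a dependency-free stand-in, here is a tiny resolver covering only the simple `key[index].key` shapes used in this README (a sketch — the real jmespath grammar is far richer):

```python
import json
import re


def extract(path: str, data):
    """Resolve a dotted path with optional [index] segments, e.g. 'content[0].text'."""
    for part in path.replace(".[", "[").split("."):
        m = re.fullmatch(r"(\w+)(?:\[(\d+)\])?", part)
        key, index = m.group(1), m.group(2)
        data = data[key]
        if index is not None:
            data = data[int(index)]
    return data


response = json.loads("""
{"content": [{"text": "Hi! My name is Claude.", "type": "text"}],
 "role": "assistant"}
""")
print(extract("content[0].text", response))
```

The same resolver handles the OpenAI-style default, `choices[0].message.content`.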
@@ -431,7 +444,7 @@ The path to extract the text is: `content.[0].text`
  YAICLI supports all Pygments syntax highlighting themes. You can set your preferred theme in the config file:

  ```ini
- CODE_THEME=monokai
+ CODE_THEME = monokai
  ```

  Browse available themes at: https://pygments.org/styles/
@@ -466,7 +479,7 @@ Usage: ai [OPTIONS] [PROMPT]
  ```

  | Option | Short | Description |
- | ------------ | ----- | ----------------------------------- |
+ |--------------|-------|-------------------------------------|
  | `--chat` | `-c` | Start in interactive chat mode |
  | `--shell` | `-s` | Generate and execute shell commands |
  | `--help` | `-h` | Show help message and exit |
@@ -481,37 +494,37 @@ Usage: ai [OPTIONS] [PROMPT]

  **Commands**

- - `/clear` - Clear conversation history
- - `/his` - Show command history
- - `/list` - List saved chats
- - `/save <title>` - Save current chat with title
- - `/load <index>` - Load a saved chat
- - `/del <index>` - Delete a saved chat
- - `/exit` - Exit the application
- - `/mode chat|exec` - Switch modes
+ - `/clear` - Clear conversation history
+ - `/his` - Show command history
+ - `/list` - List saved chats
+ - `/save <title>` - Save current chat with title
+ - `/load <index>` - Load a saved chat
+ - `/del <index>` - Delete a saved chat
+ - `/exit` - Exit the application
+ - `/mode chat|exec` - Switch modes

  **Keyboard Shortcuts**

- - `Tab` - Toggle between Chat/Execute modes
- - `Ctrl+C` or `Ctrl+D` - Exit
- - `Ctrl+R` - Search history
- - `↑/↓` - Navigate through history
+ - `Tab` - Toggle between Chat/Execute modes
+ - `Ctrl+C` or `Ctrl+D` - Exit
+ - `Ctrl+R` - Search history
+ - `↑/↓` - Navigate through history

  </td>
  <td width="50%">

  **Chat Mode** (💬)

- - Natural conversations with context
- - Markdown and code formatting
- - Reasoning display for complex queries
+ - Natural conversations with context
+ - Markdown and code formatting
+ - Reasoning display for complex queries

  **Execute Mode** (🚀)

- - Generate shell commands from descriptions
- - Review commands before execution
- - Edit commands before running
- - Safe execution with confirmation
+ - Generate shell commands from descriptions
+ - Review commands before execution
+ - Edit commands before running
+ - Safe execution with confirmation

  </td>
  </tr>
@@ -519,9 +532,12 @@ Usage: ai [OPTIONS] [PROMPT]

  ### Chat Persistent

- The `<PROMPT>` parameter in the chat mode will be used as a title to persist the chat content to the file system, with the save directory being a temporary directory, which may vary between machines, and it is determined on the first run.
+ The `<PROMPT>` parameter in the chat mode will be used as a title to persist the chat content to the file system, with
+ the save directory being a temporary directory, which may vary between machines, and it is determined on the first run.

- If the `<PROMPT>` parameter is not specified when entering `chat` mode, the session will be treated as a temporary session and will not be persisted. Of course, you can also manually call the `/save <title>` command to save during the chat.
+ If the `<PROMPT>` parameter is not specified when entering `chat` mode, the session will be treated as a temporary
+ session and will not be persisted. Of course, you can also manually call the `/save <title>` command to save during the
+ chat.
  When you run the same `chat` command again, the previous session will be automatically loaded.

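The persistence rule above (title → file under a per-machine temp directory, same title → same session) can be sketched as follows; the filename scheme and the `chat_path` helper are assumptions for illustration, not yaicli's actual layout:

```python
import tempfile
from pathlib import Path


def chat_path(title: str) -> Path:
    """Map a chat title to a save file under <tempdir>/yaicli/chats/."""
    # Sanitize the title so it is safe to use as a filename (assumed scheme).
    safe = "".join(c if c.isalnum() or c in "-_ " else "_" for c in title).strip()
    return Path(tempfile.gettempdir()) / "yaicli" / "chats" / f"{safe}.json"


p = chat_path("debug my dockerfile")
# Running the same title again yields the same path, so the session reloads.
assert p == chat_path("debug my dockerfile")
```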
  ```bash
@@ -628,9 +644,9 @@ cat error.log | ai "Why am I getting these errors in my Python app?"

  YAICLI maintains a history of your interactions (default: 500 entries) stored in `~/.yaicli_history`. You can:

- - Configure history size with `MAX_HISTORY` in config
- - Search history with `Ctrl+R` in interactive mode
- - View recent commands with `/his` command
+ - Configure history size with `MAX_HISTORY` in config
+ - Search history with `Ctrl+R` in interactive mode
+ - View recent commands with `/his` command

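The README describes this history as LRU-bounded at `MAX_HISTORY` entries. A minimal sketch of that behavior with `collections.OrderedDict` (illustrative only — yaicli's own implementation may differ):

```python
from collections import OrderedDict


class LRUHistory:
    """Keep at most max_entries items, evicting the least recently used."""

    def __init__(self, max_entries: int = 500):
        self.max_entries = max_entries
        self._entries: "OrderedDict[str, None]" = OrderedDict()

    def add(self, entry: str) -> None:
        if entry in self._entries:
            self._entries.move_to_end(entry)  # refresh recency on repeat
        else:
            self._entries[entry] = None
        while len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # drop the oldest entry

    def items(self):
        return list(self._entries)


h = LRUHistory(max_entries=3)
for cmd in ["ls", "pwd", "ls", "git status", "top"]:
    h.add(cmd)
print(h.items())
# → ['ls', 'git status', 'top']
```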
  ## 📱 Examples

@@ -737,16 +753,16 @@ Output:

  YAICLI is designed with a modular architecture that separates concerns and makes the codebase maintainable:

- - **CLI Module**: Handles user interaction and command parsing
- - **API Client**: Manages communication with LLM providers
- - **Config Manager**: Handles layered configuration
- - **History Manager**: Maintains conversation history with LRU functionality
- - **Printer**: Formats and displays responses with rich formatting
+ - **CLI Module**: Handles user interaction and command parsing
+ - **API Client**: Manages communication with LLM providers
+ - **Config Manager**: Handles layered configuration
+ - **History Manager**: Maintains conversation history with LRU functionality
+ - **Printer**: Formats and displays responses with rich formatting

  ### Dependencies

  | Library | Purpose |
- | --------------------------------------------------------------- | -------------------------------------------------- |
+ |-----------------------------------------------------------------|----------------------------------------------------|
  | [Typer](https://typer.tiangolo.com/) | Command-line interface with type hints |
  | [Rich](https://rich.readthedocs.io/) | Terminal formatting and beautiful display |
  | [prompt_toolkit](https://python-prompt-toolkit.readthedocs.io/) | Interactive input with history and auto-completion |
@@ -757,10 +773,10 @@ YAICLI is designed with a modular architecture that separates concerns and makes

  Contributions are welcome! Here's how you can help:

- - **Bug Reports**: Open an issue describing the bug and how to reproduce it
- - **Feature Requests**: Suggest new features or improvements
- - **Code Contributions**: Submit a PR with your changes
- - **Documentation**: Help improve or translate the documentation
+ - **Bug Reports**: Open an issue describing the bug and how to reproduce it
+ - **Feature Requests**: Suggest new features or improvements
+ - **Code Contributions**: Submit a PR with your changes
+ - **Documentation**: Help improve or translate the documentation

  ## 📃 License